CodeAlchemy

Jotting one man's journey through software development, programming, and technology



Random Concepts

AI

Generative AI vs LLMs

Generative artificial intelligence, which is commonly referred to as gen AI, is a subset of artificial intelligence that is capable of creating text, images, or other data using generative models, often in response to prompts.

Generative AI encompasses a broad range of models capable of generating many types of content beyond just text, while an LLM is specifically a generative AI model focused on language tasks.

Although the two terms are often used interchangeably to describe AI models that generate human-like responses to input prompts, they are not identical.

LLMs

Large language models are highly sophisticated computer programs trained on enormous amounts of data, such as text or images. LLMs are large, general-purpose language models that can be pre-trained and then fine-tuned for specific purposes.

In this context, large refers to two things: the size of the training dataset, which can reach the petabyte scale, and the number of parameters, which are the memories and knowledge the machine learns during model training.

Prompts

When you submit a prompt to an LLM, it predicts the most likely response based on the patterns it learned during pre-training. In this way, the LLM works like a fancy autocomplete, suggesting the most probable completion of the prompt.
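To make the autocomplete analogy concrete, here is a toy sketch (not a real LLM) that counts which word follows which in a tiny corpus and suggests the most probable next word; an LLM applies the same next-token idea at vastly larger scale with learned parameters instead of raw counts.

from collections import Counter, defaultdict

# Toy "fancy autocomplete": count word-to-next-word transitions in a tiny corpus,
# then suggest the most probable continuation.
corpus = "the cat sat on the mat the cat ate the fish".split()

next_word_counts = defaultdict(Counter)
for current, following in zip(corpus, corpus[1:]):
    next_word_counts[current][following] += 1

def suggest(word):
    counts = next_word_counts[word]
    return counts.most_common(1)[0][0] if counts else "<unknown>"

print(suggest("the"))  # 'cat', the most frequent word after "the" in this corpus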

Hallucinations

Hallucinations are words or phrases generated by the model that are nonsensical, factually incorrect, or ungrammatical. They occur because LLMs can only draw on the information they were trained on.

Algorithm

A step-by-step procedure, expressed in code, for solving a specific problem or performing a computation.
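For example, binary search is a classic algorithm: a precise sequence of steps for locating a value in a sorted list.

def binary_search(items, target):
    """Repeatedly halve the search range of a sorted list until the target is found."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if items[mid] == target:
            return mid          # index of the target
        if items[mid] < target:
            lo = mid + 1        # target must be in the upper half
        else:
            hi = mid - 1        # target must be in the lower half
    return -1                   # not found

print(binary_search([1, 3, 5, 7, 9], 7))  # 3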

Bootstrap

Refers to the initial process of setting up an application or system when it starts. Specifically, it refers to the time and resources required for the application to load and become fully operational after it is triggered or launched. This process can include tasks like loading dependencies, initializing configurations, connecting to databases, and preparing the application to handle requests.

To mitigate the impact of long startup times (especially in serverless or cloud environments), you can pre-warm instances, meaning that the application is kept alive and ready to handle traffic without having to go through the bootstrap process every time a new request is made. This helps reduce the delay that could occur when an instance is started from scratch (often referred to as cold start).
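As a rough sketch of the idea (the module and function names below are made up for illustration), the expensive work happens once when the instance starts, and every request afterwards reuses the already-initialized state. A pre-warmed instance is simply one that has already run this bootstrap phase.

import time

# --- Bootstrap phase: runs once when the instance starts (the cold-start cost) ---
def bootstrap():
    config = {"db_url": "postgres://example"}  # hypothetical: load configuration
    time.sleep(2)                              # stand-in for loading dependencies,
                                               # connecting to databases, warming caches
    return config

APP_STATE = bootstrap()  # done once; a pre-warmed instance keeps this in memory

# --- Request phase: runs for every incoming request, reusing the bootstrapped state ---
def handle_request(payload):
    return {"ok": True, "db": APP_STATE["db_url"], "echo": payload}

print(handle_request({"msg": "hello"}))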

CI/CD

Continuous Integration and Continuous Deployment (or Delivery) is a set of practices and tools in software development aimed at automating and improving the process of delivering software.

It automates and streamlines the development, testing, and deployment process, enabling faster and more reliable software delivery.

Continuous Integration (CI)

CI is the practice of frequently integrating code changes into a shared repository, followed by automated builds and tests to detect errors early.

Key Features:

  1. Developers push code changes frequently (e.g., daily).

  2. Automated systems build and test the code after each change.

  3. Ensures new code integrates well with the existing codebase.

  4. Catches bugs early, improving overall software quality.

Tools: Jenkins, GitHub Actions, GitLab CI, Travis CI.

Continuous Deployment (CD)

CD extends CI by automating the process of deploying code to production or other environments after it passes all tests.

Key Features:

  1. Deployment happens automatically (or semi-automatically) after successful tests.

  2. Ensures faster delivery of new features and bug fixes to users.

  3. Reduces human intervention, minimizing errors in the deployment process.

Two Variants: Continuous Delivery, where every change is automatically built and tested but a manual approval releases it to production, and Continuous Deployment, where changes that pass all tests are released to production automatically.

Tools: AWS CodePipeline, Azure DevOps, GitHub Actions, CircleCI.

Benefits of CI/CD

  1. Faster Development Cycles: Code changes are quickly tested and deployed.

  2. Higher Code Quality: Automated tests ensure fewer bugs reach production.

  3. Reduced Risks: Small, incremental updates are easier to test and roll back if needed.

  4. Increased Collaboration: Teams integrate and share changes more frequently.

Computer Infrastructure

Memory (RAM) – “Short-term brain”

Fast storage used to hold data that programs are currently using. Measured in GB (gigabytes) or GiB (gibibytes, the binary version).

More memory = more or bigger apps can run at once without slowing down.

CPU (Processor) – “Worker speed & count”

The part that actually executes instructions.

Measured in cores: more cores means more tasks can run at the same time. Clock speed (GHz) tells you how fast each core is, but core count is often the first concern.

| Use Case | Memory | CPU Cores |
|---|---|---|
| Light web browsing | 4–8 GiB | 2 |
| Coding + light tools | 8–16 GiB | 2–4 |
| Video editing / VMs | 16–32 GiB | 4–8 |
| Medium web server (API) | 8–32 GiB | 2–8 |
| Big data / ML workloads | 64+ GiB | 16+ |

Data Storage Units

GB vs GiB

The difference between GB (gigabytes) and GiB (gibibytes) is in how the size is calculated—decimal vs binary.

| Unit | Value (in bytes) | Based on |
|---|---|---|
| 1 GB | 1,000,000,000 bytes | Decimal (base 10) |
| 1 GiB | 1,073,741,824 bytes | Binary (base 2) |

Easy rule of thumb: for the same number, GiB is slightly bigger than GB (1 GiB ≈ 1.074 GB).
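A quick calculation shows the difference:

GB = 1000 ** 3   # gigabyte: decimal, base 10
GiB = 1024 ** 3  # gibibyte: binary, base 2

print(GB)            # 1000000000
print(GiB)           # 1073741824
print(GiB / GB)      # ~1.074, so 1 GiB is about 7.4% larger than 1 GB
print(8 * GiB / GB)  # ~8.59: 8 GiB of RAM expressed in GB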

Modern Web Communication and Networking Protocols

Network protocols (TCP, HTTP/2)

TCP

The TCP protocol (Transmission Control Protocol) is a core internet protocol that ensures reliable, ordered, and error-checked data delivery between devices.

HTTP/2

HTTP/2 is an improved version of the HTTP protocol that makes web communication faster and more efficient by multiplexing many requests over a single connection, compressing headers, and letting servers push resources proactively.

It speeds up website loading and reduces latency.

Security (HTTPS, TLS certificates)

A TLS certificate is a digital file that enables secure, encrypted communication over HTTPS. It proves a website’s identity and ensures data is private and trusted between the user and the server. Without it, browsers show “Not Secure” warnings.

Communication patterns (HTTP requests, WebSockets, gRPC)

HTTP requests

HTTP requests are messages sent by a client (like a browser) to a server to ask for data or perform an action.
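A minimal sketch using Python's standard library (the URL is just a placeholder):

import urllib.request

# A GET request: the client asks the server for data and waits for a single response.
with urllib.request.urlopen("https://example.com/") as response:
    print(response.status)        # e.g. 200
    print(response.read()[:80])   # the first bytes of the response body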

WebSockets

WebSockets are a communication protocol that allows a persistent, two-way connection between a client and server. Unlike regular HTTP (which is one request, one response), WebSockets let both sides send and receive data in real time — great for chats, games, or live updates.
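A minimal sketch, assuming the third-party websockets package is installed and pointing at a hypothetical server URL, showing how one open connection is used to both send and receive:

import asyncio
import websockets

async def chat(uri):
    async with websockets.connect(uri) as ws:  # one connection stays open
        await ws.send("hello")                 # the client can push at any time
        reply = await ws.recv()                # ...and receive at any time
        print(reply)

asyncio.run(chat("wss://example.com/chat"))    # hypothetical endpoint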

gRPC

gRPC is a fast, open-source framework for remote procedure calls that lets different systems communicate efficiently using HTTP/2 for transport and Protocol Buffers for message serialization.

It's great for connecting microservices with low latency and strong typing.

Infrastructure concepts

Endpoints

A reliable HTTPS endpoint is a secure web address (URL) that is served over TLS and is consistently available to receive requests and return responses.

Ports

A TCP port is a numbered endpoint used by a computer to identify specific services or applications when communicating over the internet using the TCP protocol.
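For example, HTTPS conventionally uses TCP port 443. A quick sketch with Python's standard library opens a TCP connection to that port; the port number tells the remote machine which service should handle the traffic:

import socket

# Open a TCP connection to example.com on port 443 (the conventional HTTPS port).
with socket.create_connection(("example.com", 443), timeout=5) as conn:
    print("Connected to", conn.getpeername())  # (ip_address, 443)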

VPC Network

A VPC (Virtual Private Cloud) network is a private, isolated virtual network within a cloud provider where you can securely run your resources (like virtual machines or containers). It lets you control IP addresses, subnets, routing, and firewall rules—just like a traditional private network but in the cloud.

Prompting

A prompt is a specific instruction, question, or cue given to a computer. In other words, it is the text that you feed to the model. Prompt engineering is a way of articulating your prompts to get the best response from the model. The better structured a prompt is, the better the output from the model will be.

Types

Prompts can take the form of a question, and are generally grouped into four categories:

Elements

The two elements of a prompt: the preamble and the input.
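As a rough sketch, the two elements might be combined like this before being sent to a model:

# Preamble: context, persona, and instructions that frame the task.
preamble = (
    "You are a concise technical assistant. "
    "Answer in no more than two sentences."
)

# Input: the actual question or content the user provides.
user_input = "What is the difference between GB and GiB?"

# The full prompt fed to the model is the two elements combined.
prompt = f"{preamble}\n\n{user_input}"
print(prompt)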

Best Practices

  1. Write detailed and explicit instructions. Be clear and concise in the prompts that you feed into the model.
  2. Be sure to define boundaries for the prompt. It’s better to instruct the model on what to do rather than what not to do.
  3. Adopt a persona for your input. Adding a persona for the model can provide meaningful context to help it focus on related questions, which can help improve accuracy.
  4. It’s a recommended practice to keep each sentence concise. Longer sentences can sometimes produce suboptimal results.

Proxy

A proxy is an intermediary that acts on behalf of something else. The meaning varies slightly by context:

Networking

A proxy server sits between a user and the internet. It forwards requests and responses between them.

Uses: privacy and anonymity, caching frequently requested content, filtering or controlling access, and load balancing.

General Use

A proxy is someone or something authorized to act for another.

Programming

A proxy object is a placeholder or interface that controls access to another object.

Use cases: lazy loading of expensive objects, access control, logging or caching calls, and representing remote objects.
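A minimal sketch of a proxy object in Python (the class names are made up for illustration), where the proxy controls access to the real object by logging every call before delegating to it:

class Database:
    def query(self, sql):
        return f"result of {sql!r}"

class LoggingProxy:
    """Stands in for the real object and controls or observes access to it."""
    def __init__(self, target):
        self._target = target

    def query(self, sql):
        print(f"[proxy] forwarding query: {sql}")  # behavior added by the proxy
        return self._target.query(sql)             # delegate to the real object

db = LoggingProxy(Database())
print(db.query("SELECT 1"))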

AI or Data Science

A proxy variable is an indirect measure used when the actual variable isn’t available or measurable.

Unit Tests

A unit test is a type of software testing that focuses on verifying the correctness of individual units or components of a program. A “unit” typically refers to the smallest piece of code that can be tested in isolation, such as a function, method, or class.

Key Characteristics

  1. Small Scope: each test targets a single function, method, or class.
  2. Automation: tests are run automatically by a test framework or CI pipeline.
  3. Isolation: the unit is tested independently of external systems such as databases or networks.
  4. Fast Execution: tests run quickly, so they can be run often during development.

Purpose

Unit tests catch bugs early, document the expected behavior of each unit, and make refactoring safer.

Example

def add(a, b):
    return a + b

A unit test for the function above might look like this (assuming add is importable from a module named, say, math_utils.py):

import unittest

from math_utils import add  # hypothetical module containing the add() function above

class TestMathFunctions(unittest.TestCase):
    def test_add(self):
        self.assertEqual(add(2, 3), 5)
        self.assertEqual(add(-1, 1), 0)
        self.assertEqual(add(0, 0), 0)

if __name__ == "__main__":
    unittest.main()

Webhook

A webhook is a way for one system to automatically send real-time data to another system when a specific event happens. It’s like a reverse API call: instead of your app asking for data, the webhook pushes the data to your app.

  1. You provide a URL (your webhook endpoint).
  2. Another service is set up to send data (usually via an HTTP POST request) to that URL when a certain event occurs.
  3. Your server receives and processes the data instantly.

Real-World Analogy: Imagine you place an order at a coffee shop and give them your phone number. Instead of waiting around, they text you when your drink is ready. That text is the webhook: it’s sent when the “drink is ready” event happens.
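As a minimal sketch (assuming the Flask package is installed and the sending service has been configured to POST to the /webhook URL), a receiving endpoint might look like this:

from flask import Flask, request

app = Flask(__name__)

# The URL you hand out as your webhook endpoint. The other service sends an
# HTTP POST here whenever the event (e.g. "drink is ready") happens.
@app.route("/webhook", methods=["POST"])
def webhook():
    event = request.get_json(silent=True) or {}
    print("Received event:", event)  # process the pushed data immediately
    return {"status": "received"}, 200

if __name__ == "__main__":
    app.run(port=5000)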