LLMs
- An LLM is basically a giant autocomplete: it predicts the next token (piece of text) given all previous tokens.
- Under the hood, text is split into tokens, turned into vectors (embeddings), run through a transformer network that looks at all tokens in context, and then the most likely next token is generated repeatedly until it's done. My favourite is Anthropic, and then the usual ChatGPT, which I cancelled. I will give Mistral a try, and I also use Perplexity a lot (which is not exactly an LLM).
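
The "autocomplete loop" above can be sketched in a few lines. This is a toy, assuming a hand-made bigram table in place of a real transformer; the names (`BIGRAMS`, `generate`) are illustrative, not any library's API:

```python
import random

# Toy autoregressive loop: a real LLM scores all possible tokens with a
# transformer; here a tiny bigram table stands in for the model.
BIGRAMS = {
    "the": ["cat", "dog"],
    "cat": ["sat", "ran"],
    "sat": ["down"],
}

def generate(prompt_tokens, max_new=5, seed=0):
    random.seed(seed)
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        candidates = BIGRAMS.get(tokens[-1])
        if not candidates:                       # no likely continuation: stop
            break
        tokens.append(random.choice(candidates)) # "sample the next token"
    return tokens

print(generate(["the"]))
```

The point is the loop shape: each new token is appended to the context and the model is asked again, until it decides to stop.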

Nano Banana is the name of a carnival theme group from Brazil.
Prompt engineering
- Prompt engineering is the craft of structuring instructions and inputs so the model does what you want (role, task, format, constraints, examples).
- Techniques include breaking a task into sequential prompts, asking for step‑by‑step reasoning, asking it to generate needed background knowledge first, and iteratively refining the prompt based on previous outputs. You learn to prompt better when you see how many credits you have wasted.
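
One way to picture the "role, task, format, constraints, examples" structure is as a template you fill in. A minimal sketch, with section labels of my own invention:

```python
# Sketch of assembling a structured prompt (role / task / constraints /
# examples / format). The section names are illustrative, not a standard.
def build_prompt(role, task, output_format, examples=None, constraints=None):
    parts = [f"Role: {role}", f"Task: {task}"]
    if constraints:
        parts.append("Constraints:\n" + "\n".join(f"- {c}" for c in constraints))
    if examples:
        parts.append("Examples:\n" + "\n".join(examples))
    parts.append(f"Answer format: {output_format}")
    return "\n\n".join(parts)

prompt = build_prompt(
    role="You are a travel copywriter.",
    task="Write a one-line tagline for Lisbon.",
    output_format="A single sentence, under 12 words.",
    constraints=["No clichés about pastel de nata."],
)
print(prompt)
```

Iterating on a prompt then just means editing one of these sections and re-running, instead of rewriting the whole thing.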
Vibe coding
- Vibe coding is coding by describing what you want in natural language to an AI, letting it generate most of the code, and steering with feedback instead of hand‑writing every line.
- The developer focuses on high‑level intent, testing, and iteration, often accepting AI‑generated code without deeply reading it, "programming in English" more than in a specific language. So many apps for this. I need to write a whole blog post about it, but so far I have been using only Figma Make; my goal is to move to Framer.
Tool use, function calling, MCP
- Tool use / function calling: the LLM decides when to call predefined functions (e.g. search_flights, get_user_profile) with structured arguments, gets back JSON, and then explains the results in natural language.
- MCP (Model Context Protocol) is one way to expose a set of tools to the model via a registry, so the model can discover available tools and invoke them consistently from one place.
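
The app-side half of function calling can be sketched like this, assuming the model has already emitted a structured call; the function bodies are stubs, and the registry/dispatch names are my own:

```python
import json

# Sketch of dispatching a tool call: the model emits {"name", "arguments"},
# the app runs the matching function, and JSON goes back into the context.
def search_flights(origin, destination):
    return {"flights": [{"from": origin, "to": destination, "price": 120}]}  # stub

def get_user_profile(user_id):
    return {"user_id": user_id, "tier": "gold"}  # stub

TOOLS = {"search_flights": search_flights, "get_user_profile": get_user_profile}

def dispatch(tool_call):
    fn = TOOLS[tool_call["name"]]
    result = fn(**tool_call["arguments"])
    return json.dumps(result)  # serialized result fed back to the model

# Pretend the model produced this structured call:
call = {"name": "search_flights", "arguments": {"origin": "LIS", "destination": "GRU"}}
print(dispatch(call))
```

MCP standardizes roughly this registry idea across apps, so tools are discovered and invoked one way instead of per-vendor.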
Context windows
- A context window is the maximum amount of text (tokens) the model can “see” at once—like its short‑term working memory for a single conversation or request.
- Everything inside the window (your prompt, prior turns, retrieved docs) is considered together with the model’s trained knowledge to produce the next tokens; anything beyond it is “forgotten” unless re‑included.
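
The "forgotten unless re‑included" point can be made concrete: when the assembled context exceeds the window, the oldest turns get dropped. A sketch, faking token counting with a whitespace split (real models use a proper tokenizer):

```python
# Keep the newest conversation turns that fit in a token budget; everything
# older falls outside the window. Whitespace split stands in for a tokenizer.
def fit_to_window(turns, max_tokens):
    kept, total = [], 0
    for turn in reversed(turns):      # walk from newest to oldest
        n = len(turn.split())
        if total + n > max_tokens:
            break                     # this turn (and all older ones) is dropped
        kept.append(turn)
        total += n
    return list(reversed(kept))

history = ["hello there", "tell me about Lisbon", "and about Porto too"]
print(fit_to_window(history, max_tokens=8))
```

This is why long chats "forget" their beginning: the early turns simply stop being sent.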
AI agents
- An AI agent is a system that uses models plus tools, memory, and sometimes planning to pursue goals and take actions on a user’s behalf.
- Compared to a basic chatbot, an agent can reason about what to do next, plan multi‑step tasks, act proactively (e.g. call APIs, send messages), and adapt based on what it observes. Currently using Computer from Perplexity and CoWork from Anthropic.
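
The decide‑act‑observe loop that distinguishes an agent from a chatbot can be sketched as below; the "policy" is hard‑coded here, where a real agent would ask an LLM to choose the next action, and the function names are illustrative:

```python
# Minimal agent loop: decide the next action, act, observe the new state,
# repeat until the goal is met or we give up.
def decide(goal, state):
    if "balance" not in state:
        return ("call_api", "get_balance")   # we still need information
    return ("finish", f"{goal}: balance is {state['balance']}")

def act(action, arg, state):
    if action == "call_api" and arg == "get_balance":
        state["balance"] = 42                # stubbed API result
    return state

def run_agent(goal, max_steps=5):
    state = {}
    for _ in range(max_steps):
        action, arg = decide(goal, state)
        if action == "finish":
            return arg
        state = act(action, arg, state)      # observe the result, loop again
    return "gave up"

print(run_agent("Report account status"))
```

A chatbot answers once; this loop keeps going until the goal condition holds, which is the whole difference.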
RAG (Retrieval‑Augmented Generation)
- RAG connects an LLM to an external knowledge base: for each query, it retrieves relevant documents, stuffs them into the context window, and then asks the model to answer using that material.
- This lets you keep the base model general, while keeping answers up‑to‑date and grounded in specific corpora (e.g. product docs, SOPs, internal Notion), reducing hallucinations.
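
A bare‑bones version of the retrieve‑then‑stuff flow, assuming word‑overlap scoring in place of the embeddings and vector store a production RAG system would use; `DOCS` and the function names are made up for illustration:

```python
# RAG sketch: score docs by word overlap with the query, stuff the top hit
# into the prompt, then (in a real system) send the prompt to the model.
DOCS = [
    "Refunds are processed within 5 business days.",
    "Our office is open Monday to Friday, 9 to 18.",
]

def retrieve(query, docs, k=1):
    q = set(query.lower().split())
    scored = sorted(docs, key=lambda d: len(q & set(d.lower().split())), reverse=True)
    return scored[:k]

def build_rag_prompt(query):
    context = "\n".join(retrieve(query, DOCS))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

print(build_rag_prompt("How long do refunds take?"))
```

The "answer using only this context" instruction is what grounds the model in your docs instead of its trained guesses.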
AI evals
- AI evals are systematic ways to test and measure how well an LLM or agent performs on tasks—for example, accuracy, relevance, safety, bias, and task‑success rate.
- Methods range from fully automated checks, to using an LLM as a “judge” of another model’s outputs, to human review; strong setups combine several of these and track scores over time as you change prompts/models. I am using this concept in CAREVAL.
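
The fully automated end of that spectrum can be sketched as a loop over test cases with a programmatic check; `fake_model` stands in for whatever LLM or agent is under test, and the case format is my own:

```python
# Tiny automated eval: run each case through the system under test, apply a
# programmatic check, and report a task-success rate to track over time.
def fake_model(question):
    answers = {"capital of France?": "Paris", "2+2?": "5"}
    return answers.get(question, "I don't know")

CASES = [
    {"input": "capital of France?", "expect": "Paris"},
    {"input": "2+2?", "expect": "4"},
]

def run_evals(model, cases):
    passed = sum(model(c["input"]) == c["expect"] for c in cases)
    return passed / len(cases)   # task-success rate

print(run_evals(fake_model, CASES))   # one of the two cases passes
```

Re-running this after every prompt or model change is what turns "it feels better" into a number you can track.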
Designing AI interfaces
- AI‑driven UIs should clearly set expectations, show what the system is doing (and why), and make it easy to correct or refine outputs (e.g. editable prompts, chips, examples, and quick‑actions).
- For personalization and data use, explain why you need specific data, give control/opt‑outs, and adapt not just content but the interface state to what the user is doing right now, without feeling creepy.
- The best app for this so far is a combo of Figma and Figma Make: you make your static UI, paste it into Figma Make, and ask for a pixel‑perfect copy of the UI; if you want a working app, you detail the functionalities.
