My brain dump

So I've had Ollama on my Mac for a while… do I use it? Almost never.

I also need to use Hugging Face more. For me, it's part GitHub for ML, part distribution layer, part hosted inference layer.

I rarely use GitHub, but since I started vibe coding I've been using it a bit more with the help of the Assistant feature inside Comet. Sometimes I need to download the websites I build in Figma Make and upload them to Vercel.

Ollama = “I want to run a model locally with simple commands and a local API.”

Hugging Face = “I want to find models, compare them, download them, fine-tune them, host demos, or call hosted inference.”
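The "simple commands and a local API" part of Ollama is easy to picture. As a sketch (assuming Ollama's default local server at `http://localhost:11434/api/generate`, with the standard `model`/`prompt`/`stream` fields), here is how a request body gets built, without actually sending it:

```python
import json

def build_generate_request(model: str, prompt: str, stream: bool = False) -> str:
    # Ollama's local server accepts a JSON body like this at /api/generate;
    # with stream=False it returns the whole completion in one response.
    return json.dumps({"model": model, "prompt": prompt, "stream": stream})

body = build_generate_request("llama3", "Why is the sky blue?")
print(body)
```

You would POST that body to the local endpoint with any HTTP client; the point is that "a local API" really is just plain JSON over localhost.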

So what does that all mean? I was mostly affirming what I already know, while also researching new tools. I will give Replit a try, and also I gave both Replit and

So, what do I do?

In every job I've had, the fundamental skills were the same. What changed were the activities and the adjacent skills, almost always defined by context. I've done qualitative and quantitative research, UI design and, at times, typical PM tasks. Even when I worked purely as a UX Researcher, my UI background broadened my read of the system.

Meu novo projeto: Tinytravelindex.com

Understanding LLMs: Key Concepts and Applications

LLMs

  • An LLM is basically a giant autocomplete: it predicts the next token (piece of text) given all previous tokens.
  • Under the hood, text is split into tokens, turned into vectors (embeddings), run through a transformer network that looks at all tokens in context, and then generates the most likely next token repeatedly until it's done.

My favourite is Anthropic, and then the usual ChatGPT, which I cancelled. I will give Mistral a try, and I also use Perplexity a lot (which is not exactly an LLM).
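The "giant autocomplete" loop can be shown with a toy stand-in for the transformer: a bigram model over a tiny corpus that greedily picks the most likely next token. Everything here is invented for illustration; real LLMs predict over subword tokens with a neural network, but the generate-one-token-at-a-time loop is the same shape:

```python
from collections import Counter, defaultdict

# Count which token tends to follow which -- a stand-in for the model's
# learned next-token probabilities.
corpus = "the cat sat on the mat the cat ate the fish".split()
next_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_counts[prev][nxt] += 1

def generate(start: str, max_tokens: int = 5) -> list[str]:
    tokens = [start]
    for _ in range(max_tokens):
        candidates = next_counts.get(tokens[-1])
        if not candidates:
            break
        # Greedy decoding: append the single most likely next token.
        tokens.append(candidates.most_common(1)[0][0])
    return tokens

print(generate("the"))
```

Swap the bigram table for a transformer and the greedy pick for sampling, and you have the core of an LLM's generation loop.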

Nano Banana is the name of a carnival theme group from Brazil

Prompt engineering

  • Prompt engineering is the craft of structuring instructions and inputs so the model does what you want (role, task, format, constraints, examples).
  • Techniques include breaking a task into sequential prompts, asking for step‑by‑step reasoning, asking it to generate needed background knowledge first, and iteratively refining the prompt based on previous outputs.

You learn to prompt better when you see how many credits you've wasted.
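The "role, task, format, constraints, examples" structure is easy to make concrete. A minimal sketch (the field names and the helper are my own, not any library's API):

```python
def build_prompt(role: str, task: str, output_format: str,
                 constraints: list[str], examples: tuple = ()) -> str:
    # Make every instruction explicit and labeled, instead of one vague blob.
    parts = [
        f"Role: {role}",
        f"Task: {task}",
        f"Output format: {output_format}",
        "Constraints:",
        *[f"- {c}" for c in constraints],
    ]
    if examples:
        parts.append("Examples:")
        parts.extend(f"- {e}" for e in examples)
    return "\n".join(parts)

prompt = build_prompt(
    role="You are a UX research assistant.",
    task="Summarize the interview notes into three themes.",
    output_format="Markdown bullet list, one line per theme.",
    constraints=["Quote at most one participant per theme.",
                 "No recommendations yet."],
)
print(prompt)
```

Each labeled section is one lever you can iterate on independently, which is cheaper than rewriting the whole prompt every time an output misses.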

Vibe coding

  • Vibe coding is coding by describing what you want in natural language to an AI, letting it generate most of the code, and steering with feedback instead of hand‑writing every line.
  • The developer focuses on high‑level intent, testing, and iteration, often accepting AI‑generated code without deeply reading it, "programming in English" more than in a specific language.

So many apps for this. I need to write a whole blog post about it, but so far I have been using only Figma Make; my goal is to move to Framer.

Tool use, function calling, MCP

  • Tool use / function calling: the LLM decides when to call predefined functions (e.g. search_flights, get_user_profile) with structured arguments, gets back JSON, and then explains results in natural language.
  • MCP (Model Context Protocol) is one way to expose a set of tools to the model via a registry, so the model can discover available tools and invoke them consistently from one place.
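The round trip can be sketched in a few lines. This is a hypothetical setup: the tool names match the examples above, and the model's "decision" is hard-coded as the JSON it would emit; a real system would get that JSON from the LLM (or, with MCP, discover the registry's contents from a server):

```python
import json

# Invented tools for illustration -- they return canned data.
def search_flights(origin: str, destination: str) -> dict:
    return {"flights": [{"from": origin, "to": destination, "price_eur": 120}]}

def get_user_profile(user_id: str) -> dict:
    return {"user_id": user_id, "home_airport": "LIS"}

# The registry: the model only sees names and schemas, never the code.
TOOLS = {"search_flights": search_flights, "get_user_profile": get_user_profile}

def dispatch(model_output: str) -> dict:
    call = json.loads(model_output)   # the model "decides" by emitting JSON
    fn = TOOLS[call["tool"]]          # look the tool up in the registry
    return fn(**call["arguments"])    # run it; result goes back into context

result = dispatch(
    '{"tool": "search_flights", "arguments": {"origin": "LIS", "destination": "GRU"}}'
)
print(result)
```

The JSON result would then be fed back into the model's context so it can explain the flights in natural language.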

Context windows

  • A context window is the maximum amount of text (tokens) the model can “see” at once—like its short‑term working memory for a single conversation or request.
  • Everything inside the window (your prompt, prior turns, retrieved docs) is considered together with the model’s trained knowledge to produce the next tokens; anything beyond it is “forgotten” unless re‑included.
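The "forgotten unless re-included" behaviour is just a budget. A crude sketch, using whitespace word counts as a stand-in for real subword token counts, that keeps only the most recent turns that fit:

```python
def fit_to_window(turns: list[str], max_tokens: int) -> list[str]:
    # Walk the history newest-first, keeping turns while they fit the budget.
    # Anything older than the cutoff is effectively invisible to the model.
    kept, used = [], 0
    for turn in reversed(turns):
        cost = len(turn.split())  # crude tokenizer: one word = one token
        if used + cost > max_tokens:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))

history = [
    "hi there",
    "tell me about RAG",
    "RAG retrieves documents before answering",
    "and context windows?",
    "a context window is the model's working memory",
]
print(fit_to_window(history, max_tokens=12))
```

This is why long conversations "forget" their beginnings: the oldest turns fall out of the budget first, unless something (summaries, retrieval) re-includes them.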

AI agents

  • An AI agent is a system that uses models plus tools, memory, and sometimes planning to pursue goals and take actions on a user’s behalf.
  • Compared to a basic chatbot, an agent can reason about what to do next, plan multi‑step tasks, act proactively (e.g. call APIs, send messages), and adapt based on what it observes.

Currently I'm using Computer from Perplexity and CoWork from Anthropic.
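The decide / act / observe loop that separates an agent from a chatbot can be sketched with a stub policy in place of the LLM. The action names and the policy rules here are invented for illustration:

```python
# Stub policy: looks at the goal and what it has observed so far, and picks
# the next action. A real agent puts an LLM in this seat.
def policy(goal: str, observations: list[str]) -> str:
    if not observations:
        return "search"
    if "results found" in observations[-1]:
        return "summarize"
    return "done"

def run_agent(goal: str, max_steps: int = 5) -> list[str]:
    observations, actions = [], []
    for _ in range(max_steps):           # cap steps so the loop always halts
        action = policy(goal, observations)
        actions.append(action)
        if action == "search":           # "acting" here is faked with canned
            observations.append("results found")   # observations
        elif action == "summarize":
            observations.append("summary written")
        else:
            break
    return actions

print(run_agent("find baby-friendly hotels"))
```

The key difference from a chatbot is that each action's result feeds back into the next decision, so the agent adapts mid-task instead of answering once.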

RAG (Retrieval‑Augmented Generation)

  • RAG connects an LLM to an external knowledge base: for each query, it retrieves relevant documents, stuffs them into the context window, and then asks the model to answer using that material.
  • This lets you keep the base model general, while keeping answers up‑to‑date and grounded in specific corpora (e.g. product docs, SOPs, internal Notion), reducing hallucinations.

AI evals

  • AI evals are systematic ways to test and measure how well an LLM or agent performs on tasks—for example, accuracy, relevance, safety, bias, and task‑success rate.
  • Methods range from fully automated checks, to using an LLM as a “judge” of another model’s outputs, to human review; strong setups combine several of these and track scores over time as you change prompts/models. I am using this concept in CAREVAL.
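The simplest fully automated check is an exact-match harness over labeled cases. Everything below is a hypothetical sketch (the stub model, the cases, the helper names), but it shows the shape: fixed test set in, single score out, re-run after every prompt or model change:

```python
def exact_match(output: str, expected: str) -> bool:
    # Cheapest possible grader; LLM-as-judge or human review replace this
    # function for open-ended tasks.
    return output.strip().lower() == expected.strip().lower()

def run_eval(cases: list[tuple[str, str]], model) -> float:
    # Accuracy over labeled (question, expected_answer) pairs.
    hits = sum(exact_match(model(q), a) for q, a in cases)
    return hits / len(cases)

def stub_model(question: str) -> str:
    # Hypothetical model under test, with one memorized answer.
    return {"capital of france?": "Paris"}.get(question.lower(), "I don't know")

cases = [("Capital of France?", "paris"), ("Capital of Spain?", "madrid")]
print(run_eval(cases, stub_model))
```

Tracking that number over time, per prompt version and per model, is what turns "it feels better" into an eval.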

Designing AI interfaces

  • AI‑driven UIs should clearly set expectations, show what the system is doing (and why), and make it easy to correct or refine outputs (e.g. editable prompts, chips, examples, and quick‑actions).
  • For personalization and data use, explain why you need specific data, give control/opt‑outs, and adapt not just content but the interface state to what the user is doing right now, without feeling creepy.
  • The best app for this so far is a combo of Figma and Figma Make: you make your static UI, paste it into Figma Make, and ask for a pixel‑perfect copy of the UI; if you want a working app, you detail the functionalities.

Debugging Made Easier: Embrace AI in Development

I have been vibe coding a lot. So much that, now that Figma is enforcing credits, I ran out of them: first the Enterprise account credits, then my personal ones, though those reset in a day. My take is that only now, with AI agents such as the one in Perplexity inside Comet, and Computer, have I been able to pay real attention to debugging. Before, I would debug without agents, through prompts, and I would waste a lot of credits. Now I am more economical and tech-savvy, because before I start I outline the design plan, with the design system and tokens, using certain frameworks or not.

I ask AI to add .md files explaining all the functionalities too. Meanwhile, I use MomOps.org as my base to build:

  • Tinytripindex.com (Traveling with babies can be daunting, and knowing which hotels accommodate babies 6+ is not an easy task; this site makes it easier for you.)
  • Carefolio.io (I created a rank, so you will know which companies invest in and care about women.)
  • Femhealth.science (my main project and where I spend all my Figma Make credits.)
  • Archive of Possible (repository of lost projects, such as vaccines, research, etc.)
  • Countme In (about dyscalculia.)
  • ReturnKit (about the childcare gap and mothers returning to work.)
  • Littlebites.io (pivoted and now I rank baby food.)

I also have other projects going, but those are the main ones… mind you, I do this outside my day-job hours, so progress is not linear.

I have to say, doing this has enhanced my work skills as well. It helped during maternity leave and postpartum too; it helped me stay a bit sane.