Vibe Coding Glossary
Plain-English definitions for AI coding terms. Bookmark this page—you'll reference it often as you explore AI-assisted development.
Updated: January 2025 · 24 terms defined
Start Here
New to AI coding? Start with these three terms: Diff (how you review AI changes), Repo-aware chat (how AI understands your code), and Agent (AI that can actually execute tasks).
Diff
A preview of changes between two versions of a file. Lines starting with + will be added; lines starting with - will be removed. Read a diff like a pull request before applying it. This is the foundation of safe AI coding.
Patch
One or more diffs bundled together, typically the exact text you can apply to modify files. When an AI suggests changes, it often provides a patch you can review and apply.
Hunk
A contiguous block of changes within a diff. The header (e.g., @@ -1,4 +1,7 @@) shows the affected line ranges. Large diffs contain multiple hunks.
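The three terms above (diff, patch, hunk) can be seen in one place using Python's standard-library `difflib`; the file names and contents here are invented for illustration:

```python
import difflib

old = ["def greet():\n", "    print('hi')\n"]
new = ["def greet(name):\n", "    print(f'hi {name}')\n"]

# unified_diff yields the file headers, a hunk header (@@ ... @@),
# then the -/+ lines a reviewer reads before applying the patch
patch = "".join(difflib.unified_diff(old, new, fromfile="a/greet.py", tofile="b/greet.py"))
print(patch)
```

The `@@ -1,2 +1,2 @@` line in the output is the hunk header; the `-` and `+` lines are the removals and additions you review.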
Repo-aware chat (codebase-aware, context-aware)
Chat grounded in your repository's files, not generic examples—answers reference your code, paths, and types. Cursor's @codebase feature is a prime example.
Agent
A system that plans actions, calls tools (like file edits or web fetches), observes results, and iterates toward a goal. More autonomous than a simple chatbot. Examples: Cursor Composer, Replit Agent, Claude's computer use.
Plan & Execute (plan-then-apply)
An agent workflow that first outlines steps (plan) and then performs them (execute), often safer for large changes. Windsurf's Cascade uses this approach.
Tool use (function calling)
When an AI calls structured functions—e.g., "read file," "write file," "run tests"—instead of only producing text. This enables AI to actually do things, not just suggest them.
Multi-file edit
A coordinated set of changes across multiple files. Good assistants explain the plan and provide reviewable diffs. Cursor excels at this with its Composer feature.
RAG (Retrieval-Augmented Generation)
Supplying relevant documents or code snippets to the model at answer time so responses are grounded in the right context. How AI tools understand your specific codebase.
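A minimal sketch of the retrieve-then-prompt loop; the keyword-overlap scoring here is a toy stand-in for the embedding search real tools use, and all document strings are invented:

```python
def retrieve(query, docs, k=2):
    # naive keyword-overlap scoring stands in for embedding-based search
    def score(doc):
        return len(set(query.lower().split()) & set(doc.lower().split()))
    return sorted(docs, key=score, reverse=True)[:k]

docs = [
    "def login(user): check user password",
    "css grid layout rules",
    "password hashing uses bcrypt",
]

# retrieved snippets are stuffed into the prompt so the answer is grounded
context = retrieve("how is the user password checked", docs)
prompt = "Answer using this context:\n" + "\n".join(context) + "\n\nQ: how is the password checked?"
```

The point is the shape of the pipeline: retrieve relevant snippets first, then generate with those snippets in the context window.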
Context window
The maximum amount of text (tokens) the model can consider at once—larger windows can read more files but still benefit from focused prompts. Claude 3 models offer 200K tokens; GPT-4 variants range from 8K to 128K.
Embedding
A numeric representation of text or code used for semantic search (e.g., to find relevant files for RAG). How AI tools find the right context in large codebases.
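Semantic search over embeddings usually means ranking by cosine similarity. A sketch with tiny hand-made 3-dimensional vectors (real embeddings have hundreds or thousands of dimensions, and the file names are invented):

```python
import math

def cosine(a, b):
    # cosine similarity: 1.0 means same direction, 0.0 means unrelated
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# toy embeddings for a query and two candidate files
query = [0.9, 0.1, 0.0]
files = {"auth.py": [0.8, 0.2, 0.1], "styles.css": [0.0, 0.1, 0.9]}
best = max(files, key=lambda f: cosine(query, files[f]))
```

This is how a RAG pipeline picks which files to pull into context.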
Prompt
Your instruction to the model. Clear prompts specify files, goals, constraints, and success checks. Better prompts = better AI output.
System prompt
Hidden or fixed instructions that shape the assistant's behavior (tone, safety, tools). You usually can't see it in IDEs. Defines the AI's "personality" and constraints.
Temperature
Controls randomness. Lower values (0-0.3) are more deterministic and precise; higher values (0.7-1.0) produce more varied text but can drift. Use low temperature for code, higher for creative tasks.
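Mechanically, temperature divides the model's logits before softmax; a minimal sketch of the math, not any specific API:

```python
import math

def softmax(logits, temperature=1.0):
    # lower temperature sharpens the distribution; higher flattens it
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

logits = [2.0, 1.0, 0.1]
cold = softmax(logits, temperature=0.2)  # near-deterministic
hot = softmax(logits, temperature=1.5)   # more varied
```

At low temperature the top choice dominates (good for code); at high temperature probability spreads across alternatives (good for variety).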
Hallucination
When a model outputs something that looks plausible but is false (e.g., inventing APIs or files that don't exist). Always verify AI-generated code actually works.
Refactor
Change the structure of code without changing behavior—often across multiple files—with tests to confirm safety. AI tools excel at this when given clear constraints.
Test scaffolding
Minimal tests an assistant creates to verify a change. Start simple and extend by hand if needed. Ask AI to generate tests alongside any code changes.
Evals (evaluation, benchmarks)
Small, repeatable checks (manual or automated) used to compare assistants—e.g., "add a route," "rename a util," "write a unit test." How we benchmark AI coding tools.
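A hypothetical mini-eval harness to show the shape of the idea; the "assistant" here is a canned stub purely for illustration:

```python
def run_evals(assistant, cases):
    # each case: (name, task prompt, check function over the output)
    results = {}
    for name, prompt, check in cases:
        output = assistant(prompt)
        results[name] = check(output)
    return results

def fake_assistant(prompt):
    # stand-in for a real model call; always returns the same snippet
    return "def add(a, b):\n    return a + b"

cases = [
    ("write add", "write an add function", lambda out: "return a + b" in out),
]
results = run_evals(fake_assistant, cases)
```

Swapping in two real assistants and comparing `results` is the whole benchmarking loop.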
Latency budget
The time you're willing to wait for a step or answer. Keeping tasks small helps stay under budget. Smaller requests = faster responses.
Token
A chunk of text the model processes (roughly 4 characters in English). Context limits and costs are measured in tokens. "Hello world" is typically 2 tokens.
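The 4-characters-per-token rule of thumb can be turned into a rough cost estimator; real tokenizers (BPE-based) will differ, so treat this as a heuristic only:

```python
def estimate_tokens(text):
    # rough heuristic: ~4 characters per token in English prose;
    # real tokenizer counts vary by model and content
    return max(1, round(len(text) / 4))
```

Useful for back-of-envelope checks like "will this file fit in the context window?", not for billing.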
Chunking
Splitting large files/docs into smaller pieces for retrieval or processing. Good chunking improves relevance. How AI tools handle files larger than their context window.
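An illustrative fixed-size character chunker with overlap, so context spans chunk boundaries; production tools typically chunk by tokens or by syntax (functions, headings) instead:

```python
def chunk(text, size=40, overlap=10):
    # sliding window: each chunk overlaps the previous one by `overlap`
    # characters so no sentence is split with zero shared context
    chunks = []
    step = size - overlap
    for start in range(0, len(text), step):
        chunks.append(text[start:start + size])
        if start + size >= len(text):
            break
    return chunks
```

Each chunk can then be embedded and retrieved independently.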
Grounding
Tying answers to verifiable sources—your repo, docs, or data—so outputs cite where things came from. Reduces hallucinations significantly.
Guardrails
Rules or checks that constrain what a model can do (e.g., lint/test gates, file allowlists). Windsurf is known for its guardrails approach.
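A file-allowlist guardrail can be sketched in a few lines; the directory names are hypothetical:

```python
from pathlib import PurePosixPath

ALLOWED_DIRS = {"src", "tests"}  # hypothetical allowlist of editable dirs

def is_edit_allowed(path):
    # reject proposed edits outside allowlisted top-level directories,
    # and anything using ".." to escape them
    parts = PurePosixPath(path).parts
    return bool(parts) and parts[0] in ALLOWED_DIRS and ".." not in parts
```

An agent loop would call a check like this before applying any file edit, alongside lint/test gates.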
Vibe Coding
The AI-native approach to software development—using intelligent tools to build faster, cleaner, and more intentionally. The philosophy behind this entire site.
FAQ
What is vibe coding?
Vibe coding is the AI-native approach to software development—using intelligent tools like Cursor, Copilot, and Replit to build faster and more intentionally. Key principles include diff-first workflows, repo-aware chat, and small, reversible changes.
What's the most important term to understand for AI coding?
Diff. Understanding how to read and review diffs is the foundation of safe AI-assisted coding. A diff shows you exactly what the AI wants to change before you apply it—green lines are additions, red lines are deletions.
What's the difference between an AI assistant and an AI agent?
An AI assistant (like basic ChatGPT) responds to prompts with text. An AI agent (like Cursor Composer or Replit Agent) can plan actions, execute tools (file edits, terminal commands), observe results, and iterate toward a goal autonomously.
Why do AI tools sometimes make things up (hallucinate)?
LLMs generate text based on patterns, not facts. They can confidently produce plausible-sounding but incorrect code or non-existent APIs. The solution: use repo-aware tools (grounding), review all diffs, and run tests to verify changes work.