Prompting in practice
A vague prompt produces vague output. A prompt with shape produces work you can verify. This page gives you the shape, copyable templates, dictation tooling, and a cost frame for individuals.
The shape
Almost every useful prompt has the same five ingredients. Say all five.
Inputs · Outputs · Scope · Constraints · Verification
- Inputs: which files, docs, or context to read first
- Outputs: the artifact you want — a function, a diff, a test, a decision
- Scope: what NOT to change
- Constraints: the brief the work is held to — design doc, spec, screenshot, example to match
- Verification: how you'll know it worked — test command, expected output, lint, type check
If a prompt is missing more than one of these, it's probably going to waste a turn.
Copyable templates
For a small fix
Read <path/to/file>. Fix the bug where <symptom>.
Don't touch <other-files> — the fix should be local.
Verify by running <test command> and confirming the failing test now passes.
For a new feature
Read <spec or doc>. Implement <feature> following its shape.
Constraints:
- Keep the public API unchanged
- Don't add new dependencies
- Use the patterns from <reference file>
Verify with <test command>. Show me the diff before applying.
For an investigation
Search the codebase for everywhere <pattern> is used.
Report a short summary: where it's used, what for, anything inconsistent.
Don't change any code. I'll decide what to do with the report.
For a refactor
Read <file>. Refactor <X> to <Y> without changing behaviour.
Run the existing tests after each step — they must stay green.
If a test fails, stop and tell me before continuing.
Dictate, don't type
The fastest way to hit the shape above is to stop typing. When you talk, you naturally include more context — 2–3× the detail in half the time. The shape becomes effortless because you already describe work that way when you talk it through out loud.
Tooling
- Built-in OS dictation — free, on every Mac (Edit → Start Dictation, or hit Fn twice) and Windows (Win+H). Good enough for most prompts.
- Whisper Flow / SuperWhisper — paid, follows you between apps, cleans up filler words, adds punctuation, and is noticeably more accurate than the built-in OS dictation. The pro upgrade.
- Phone dictation — for prompts on the go. Send to a notes app, paste later.
Try this today
Dictate your first build brief instead of typing it. Notice how much more detail you give the agent without thinking about it.
Cost calibration for individuals
You'll get asked: is paying for a Pro plan worth it? The honest answer for someone learning:
- The Gemini free tier gets you started — it handles most learning tasks.
- Gemini Pro at $20/month costs about the same as a streaming service. If you're using it for an hour or more a day, it pays for itself in time saved within the first week.
- Vertex AI / Gemini API pricing matters when you're building things with the API (chatbots, MCP servers). For interactive coding, the Pro plan is almost always cheaper.
- Token efficiency still matters. Prefer CLIs over MCP servers when both work — a CLI returns ~50 tokens of stdout, whereas an MCP call can send 500+. Across a long session that's the difference between a $0.30 task and a $3 task.
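The CLI-versus-MCP gap can be put in rough numbers. A minimal sketch — the per-token price, call count, and token counts are illustrative assumptions, not real Gemini rates, and it ignores the fact that tool results accumulate in context:

```python
# Back-of-envelope cost of feeding tool output back to the model.
# All numbers below are illustrative assumptions, not real Gemini pricing.

PRICE_PER_MILLION_INPUT = 3.00  # assumed $ per 1M input tokens

def session_cost(calls: int, tokens_per_call: int,
                 price_per_million: float = PRICE_PER_MILLION_INPUT) -> float:
    """Input-token cost of sending `calls` tool results of `tokens_per_call` each."""
    return calls * tokens_per_call * price_per_million / 1_000_000

cli = session_cost(calls=200, tokens_per_call=50)    # lean stdout
mcp = session_cost(calls=200, tokens_per_call=500)   # verbose structured payloads

print(f"CLI: ${cli:.2f}  MCP: ${mcp:.2f}  ({mcp / cli:.0f}x)")
# → CLI: $0.03  MCP: $0.30  (10x)
```

The per-call difference looks tiny, but tool results also stay in context and get re-sent on every later turn, which is how a 10x per-call gap grows into the $0.30-versus-$3 difference over a long session.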
If you're a student or early-career and money is tight: the Gemini free tier plus a careful pay-as-you-go Gemini API key for occasional heavy lifts is a defensible setup. Gemini Pro is the obvious upgrade once you're using AI more than an hour a day.