The Anatomy of a Good Prompt
The seven components of an effective prompt: role, task, context, details, format, examples, and tone. With annotated examples for every piece.
The difference between a mediocre prompt and a great one is structure
Most people write prompts the way they'd text a friend. One long sentence, no formatting, missing half the context. They hit enter, get a generic response, and blame the model.
The model is fine. The instructions are the problem.
A well-structured prompt has identifiable parts. Each part serves a specific purpose, like sections in a contract or fields on an intake form. You don't need all of them every time. But knowing what they are gives you a framework for building prompts that produce client-ready output instead of vague suggestions.
Seven components show up in almost every effective prompt: role, task, context, details, format, examples, and tone. Here is what all seven look like working together:
ROLE: You are a senior financial analyst with 10 years of SaaS experience.
TASK: Analyze this quarterly revenue report and identify the three biggest risks.
CONTEXT: [paste quarterly report data here]
DETAILS: Focus on churn rate, expansion revenue, and CAC. Ignore one-time charges.
FORMAT: Bullet points, one sentence each. Bold the risk level (high/medium/low).
EXAMPLE: "**High risk:** Net revenue retention dropped below 100% for the first
time in six quarters, signaling churn acceleration that could compound."
TONE: Direct and analytical. Write for a board audience expecting precision.
That single prompt contains everything the model needs to produce a focused, board-ready risk analysis. No follow-up questions. No generic filler.
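If you assemble prompts in code rather than by hand, the same structure translates directly. Here is a minimal Python sketch; the `build_prompt` function is illustrative, not part of any library, and you would swap in whichever components a given task needs:

```python
# Illustrative sketch: assemble the seven components into one prompt string.
# Nothing here is a library API; it's plain string building.

def build_prompt(role=None, task=None, context=None, details=None,
                 fmt=None, example=None, tone=None):
    """Join the labeled components, skipping any that aren't provided."""
    parts = {
        "ROLE": role,
        "TASK": task,
        "CONTEXT": context,
        "DETAILS": details,
        "FORMAT": fmt,
        "EXAMPLE": example,
        "TONE": tone,
    }
    return "\n".join(f"{label}: {text}" for label, text in parts.items() if text)


prompt = build_prompt(
    role="You are a senior financial analyst with 10 years of SaaS experience.",
    task="Analyze this quarterly revenue report and identify the three biggest risks.",
    context="[paste quarterly report data here]",
    details="Focus on churn rate, expansion revenue, and CAC. Ignore one-time charges.",
    fmt="Bullet points, one sentence each. Bold the risk level (high/medium/low).",
    example='"**High risk:** Net revenue retention dropped below 100% ..."',
    tone="Direct and analytical. Write for a board audience expecting precision.",
)
print(prompt)
```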
Role tells the model who it's pretending to be
The first component is the role, sometimes called the persona. You tell the model what kind of expert it should act as when processing your request.
This sounds silly, but it measurably affects output quality. When you write "You are a world-class legal expert," the model draws on different patterns from its training than when you write "You are a really bad legal expert." The qualifier matters. Specifying not just the profession but the caliber of expertise shifts the sophistication of the response.
Here are three examples of how the same task changes with different roles:
Role: You are a corporate attorney specializing in employment law.
Role: You are a first-year paralegal reviewing documents.
Role: You are a compliance officer at a Fortune 500 company.
Each produces a meaningfully different analysis of the same document. The attorney focuses on liability and precedent. The paralegal flags missing signatures. The compliance officer looks for regulatory gaps.
For operators and fractional leaders, role-based prompting is particularly useful. You shift between domains constantly. Monday morning you're reviewing marketing copy. Monday afternoon you're analyzing a P&L. Assigning the right role ensures the model matches the lens you need for that task.
> Operator tip. Don't settle for generic roles like "You are a helpful assistant." Be specific about the domain and the seniority level. "You are a fractional CFO who specializes in Series A SaaS companies" produces far better financial analysis than "You are a finance expert."
Task is the one line you can't skip
Every prompt needs a clear task statement. This is the single instruction that tells the model what to do. Without it, you're sliding a blank piece of paper under the door.
Keep the task statement short and specific. One sentence is ideal. Two at most. The task answers exactly one question: what do you want the model to produce?
Task: Summarize this legal document.
Task: Draft a 90-day onboarding plan for a new marketing director.
Task: Rewrite this email to be shorter and more direct.
Task: Compare these two vendor proposals and recommend one.
Each task is a single, unambiguous action. The verb at the front defines the operation. The rest of the sentence defines the scope.
The most common mistake is cramming multiple tasks into one prompt. "Summarize this document, then write three follow-up emails, then create a project plan" is three separate tasks pretending to be one. The model will attempt all of them, but quality drops because its attention gets divided. If you need three outputs, write three prompts or structure them as numbered steps.
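If you script your prompts, splitting tasks is one short loop. The sketch below is illustrative; `run_prompt` is a hypothetical stand-in for whatever model call you actually use:

```python
# Illustrative sketch: run three separate tasks as three separate prompts
# instead of one prompt juggling all three.

def run_prompt(prompt: str) -> str:
    # Hypothetical stand-in: replace with your actual model call.
    return f"<model response to: {prompt[:40]}...>"

document = "[paste the document here]"

tasks = [
    "Summarize this document.",
    "Draft three follow-up emails based on this document.",
    "Create a project plan based on this document.",
]

# One focused prompt per task keeps the model's attention on a single job.
results = [run_prompt(f"TASK: {task}\nCONTEXT: {document}") for task in tasks]
```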
> When to stop here. If you write a clear task and get a good result on the first try, you don't need the other six components. If "summarize this document" gives you exactly what you need, adding a role and tone and format is wasted effort. Add components only when the output isn't matching what you expected.
Context and details give the model something to work with
Context is the raw material you want the model to process. A quarterly report, a client email, meeting notes, a contract. Without context, the model is guessing.
Details are the instructions that refine the task. What should the model focus on? What should it ignore? Are there constraints on length or scope?
Here's the difference:
| Component | What it provides | Example |
|---|---|---|
| Context | The actual content to analyze or transform | "Here is our Q3 revenue report: [pasted data]" |
| Details | Constraints and focus areas for the analysis | "Focus on churn and expansion revenue. Ignore one-time charges." |
Context and details work together but serve different purposes. Context is the input. Details are the filter you apply to that input.
Where you place context in your prompt can matter. OpenAI's documentation suggests placing long context at the end. Anthropic recommends putting it at the beginning. In practice the difference is often negligible, but it's worth testing both positions with very long documents.
Details are the biggest variable in prompt construction. This is where your domain expertise shows up. Anyone can write "summarize this report." Only someone who understands the business can write "focus on net revenue retention and flag any quarter where NRR dipped below 100%." The details section is where your industry knowledge translates directly into better AI output.
> Operator tip. When you're working across multiple client engagements, the task and format often stay the same. The context and details are what change. Build reusable prompt templates where the task and format are locked in, and swap the context and details per client.
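Here is one way that template pattern might look in Python. The names and structure are illustrative, not a prescribed implementation: the task and format live in the template, and only the details and context change per engagement.

```python
# Illustrative sketch: task and format are locked in the template;
# context and details are swapped per client engagement.

TEMPLATE = """\
TASK: Analyze this quarterly revenue report and identify the three biggest risks.
FORMAT: Bullet points, one sentence each. Bold the risk level (high/medium/low).
DETAILS: {details}
CONTEXT:
{context}
"""

client_a_prompt = TEMPLATE.format(
    details="Focus on churn rate and expansion revenue. Ignore one-time charges.",
    context="[paste Client A's Q3 report here]",
)

client_b_prompt = TEMPLATE.format(
    details="Focus on CAC payback and gross margin. Ignore deferred revenue timing.",
    context="[paste Client B's Q3 report here]",
)
```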
Format controls how the answer comes back
If you don't specify a format, the model picks one for you. Sometimes you get six paragraphs when you needed a table, or bullet points when you needed prose.
Format instructions tell the model the shape of the output. Structure type, length constraints, markup language. Here are format instructions that produce very different outputs from the same task:
Format: Use markdown. No bullet points or lists of any kind.
Format: Create a table with columns for risk, severity, and recommended action.
Format: Respond in exactly three paragraphs of 50 words each.
Format: Return the analysis as a JSON object with keys for category, finding, and priority.
Format is where iteration happens most. You ask for a summary and get bullet points. You wanted paragraphs. You add "don't use bullet points" and get numbered lists instead. You refine to "don't use lists in any form" and get clean prose.
This is normal. The model followed your instructions precisely each time. Numbered lists were technically allowed under "no bullet points." Your format instructions need to be explicit about what you want and what you don't want.
| Format instruction | What you get |
|---|---|
| "Use markdown" | Headers, bullets, bold text, code blocks |
| "Use markdown but no bullet points" | Headers, numbered lists, bold text |
| "Use markdown with no lists of any kind" | Headers, paragraphs, bold text |
| "Respond in plain text only" | No formatting at all |
| "Respond in Python code" | A Python script or data structure |
Each step in that table is more specific. The more precise your format instruction, the closer the output matches what you actually need.
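Machine-readable formats pay off when the output feeds another step, like a spreadsheet or a dashboard. A rough Python sketch, where `response_text` is a stand-in for whatever the model actually returns:

```python
import json

# Illustrative sketch: ask for JSON explicitly, then parse the response.
# The prompt below is what you would send; response_text stands in for
# what comes back from the model.

format_instruction = (
    "FORMAT: Return the analysis as a JSON array of objects with keys "
    "'category', 'finding', and 'priority'. Return only the JSON, no prose."
)
prompt = (
    "TASK: Identify the three biggest risks in this report.\n"
    f"{format_instruction}\n"
    "CONTEXT: [paste report here]"
)

response_text = '[{"category": "churn", "finding": "NRR fell below 100%", "priority": "high"}]'
for item in json.loads(response_text):
    print(f"{item['priority'].upper()}: {item['finding']}")
```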
> Operator tip. If you're producing deliverables for clients, lock in your format early. A fractional COO sending weekly status updates should specify the exact structure once and reuse it. "Three sections: wins this week (2-3 bullets), blockers (1-2 bullets), next week priorities (2-3 bullets). Bold the section headers."
Examples teach the model by showing, not telling
This technique goes by several names: few-shot prompting, in-context learning, or learning by example. Instead of describing what you want, you show the model a sample input and a sample output. The model patterns its responses after your example.
Examples are the most powerful steering mechanism available to you. They convey nuance, voice, and formatting preferences more effectively than written instructions. The model infers patterns from examples that would take paragraphs to describe explicitly.
Here is how an example fits into a prompt:
Task: Categorize customer support tickets by priority.
Example input:
"Our entire team can't log in. We have a board meeting in 2 hours
and need access to the dashboard immediately."
Example output:
"Priority: CRITICAL. Reason: Complete access failure affecting
entire team with time-sensitive business impact."
Now categorize this ticket: [paste actual ticket here]
The model reads your example output and mirrors the structure, the labeling convention, the reasoning format, and the length. You didn't have to write rules for any of that. The example carried the instruction implicitly.
You can include multiple examples to tighten the pattern further. Two or three examples showing different scenarios give the model a stronger signal about what you expect. One critical ticket, one medium-priority, one low-priority, and the model has a solid framework for classifying anything you throw at it.
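With two or three saved examples, building the few-shot prompt is mechanical. A rough Python sketch, with made-up tickets standing in for your real ones:

```python
# Illustrative sketch: build a few-shot prompt from saved (input, output) pairs.
# The tickets below are invented placeholders, not real data.

examples = [
    ("Our entire team can't log in. We have a board meeting in 2 hours.",
     "Priority: CRITICAL. Reason: Complete access failure with time-sensitive impact."),
    ("The export mislabels the CSV column headers.",
     "Priority: MEDIUM. Reason: Workflow friction, but a manual workaround exists."),
    ("Can you add a dark mode to the dashboard?",
     "Priority: LOW. Reason: Feature request with no impact on current operations."),
]

def few_shot_prompt(new_ticket: str) -> str:
    shots = "\n\n".join(
        f"Example input:\n{inp}\nExample output:\n{out}" for inp, out in examples
    )
    return (
        "TASK: Categorize customer support tickets by priority.\n\n"
        f"{shots}\n\n"
        f"Now categorize this ticket:\n{new_ticket}"
    )

print(few_shot_prompt("[paste actual ticket here]"))
```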
> Operator tip. Keep a library of example inputs and outputs for tasks you repeat often. When you get a response that's exactly right, save it. That output becomes an example in your next prompt, which means the quality of your results compounds over time.
Tone shapes how the model communicates
The final component is tone. This controls the personality and register of the model's response.
Tone has an outsized impact on whether the output feels useful. The same analysis can feel authoritative or uncertain depending on the tone you set. A board-ready risk assessment needs a different voice than an internal Slack summary for your team.
Here is what happens when you change nothing but the tone instruction:
Tone: Be snarky and irreverent.
Result: "Oh great, another set of amendments nobody reads. Here's the highlight reel..."
Tone: Be studious and highbrow.
Result: "The Bill of Rights represents an exemplar of constitutional ascent,
codifying individual liberties through ten amendments..."
Tone: Explain it like I'm five.
Result: "There are some special rules that say the government can't be mean
to people. If there's a problem, you get a speedy and fair trial..."
Same document. Same model. Three completely different outputs.
For client-facing deliverables, tone consistency matters as much as content accuracy. If your weekly reports alternate between casual and formal because you didn't specify a tone, the client notices. Defining the tone once in a reusable template solves this permanently.
Tone also interacts with role. A "senior financial analyst" with a "casual" tone produces different results than the same role with a "formal, board-ready" tone. The role shapes which expertise the model draws on. The tone determines how it communicates.
> Operator tip. If you work with multiple clients who have different communication cultures, create tone profiles for each. "Client A: direct, data-focused, no hedging. Client B: warm, collaborative, explain the reasoning." Drop the right tone profile into your prompts and the output matches expectations without you rewriting anything.
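A rough sketch of what those tone profiles might look like in code; the profile text and client names are illustrative:

```python
# Illustrative sketch: one tone profile per client, dropped into any prompt.

TONE_PROFILES = {
    "client_a": "Direct and data-focused. No hedging. Lead with the numbers.",
    "client_b": "Warm and collaborative. Explain the reasoning behind each recommendation.",
}

def with_tone(base_prompt: str, client: str) -> str:
    return f"{base_prompt}\nTONE: {TONE_PROFILES[client]}"

weekly_update = with_tone(
    "TASK: Draft this week's status update.\nCONTEXT: [paste notes here]",
    "client_a",
)
```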
Putting it all together
You don't need all seven components in every prompt. A quick summarization might only need a task and some context. A client deliverable might need all seven. The skill is knowing which components to add when the output falls short.
Here is a before-and-after that shows the difference structure makes:
Before (flat prompt):
Summarize this document for me.
After (structured prompt):
You are a world-class legal analyst specializing in constitutional law.
Summarize this legal document about historical legal matters. Consider the
key points, different sides of each argument, and overall purpose.
Use markdown formatting. No bullet points or lists of any kind.
Write in a direct, analytical tone for a senior audience.
[paste document here]
The flat prompt produces a generic summary. The structured prompt produces a focused analysis, formatted correctly and written for its audience. Four extra lines of instruction.
Here is how to decide which components to include:
| If the output is... | Add this component |
|---|---|
| Too generic or shallow | Role with specific domain and seniority |
| Missing the point | Task rewritten with a clearer verb and scope |
| Ignoring what matters | Details specifying focus areas and exclusions |
| Formatted wrong | Format with explicit structure instructions |
| Right content, wrong voice | Tone matching your audience expectations |
| Close but not quite right | Examples showing exactly what good looks like |
Start with the task. Add components one at a time. Test after each addition. When the output matches what you need, stop. That's the prompt.
> The real skill isn't memorizing these seven components. It's building an instinct for which ones a given task needs. That instinct comes from practice. Pick a deliverable you produce every week, write a structured prompt for it, and iterate until the output is something you'd send to a client without editing. Save that prompt as a template and reuse it across engagements.