Building an Improvement Prompt Step by Step
Walk through every piece of an improvement prompt: role, task, requirements, output format, context, and real examples from your backlog.
Start with a specific task
Most improvement requests fail at the starting line. "Improve this" gives the model nothing to work with. It will make changes -- probably ones you don't want -- because you have not told it what "better" means in your context.
Improvement prompts are different from standard prompts in one important way: the task and the requirements are the same thing. Telling the model what to improve IS the task. There is no separate instruction layer.
Here is what that looks like in practice:
Vague (what most operators write):
Improve this ticket description.
Specific (what actually produces consistent results):
Rewrite this ticket description to include:
- A clear one-sentence summary of the feature
- Explicit acceptance criteria the dev team can test against
- A definition of scope boundaries (what is not included)
- User impact: who benefits and why it matters to them
The second version cannot be misunderstood. The model knows exactly what a "better" ticket looks like because you have defined it. Every bullet point is a testable quality standard.
This specificity is what separates operators who get client-ready output on the first attempt from those who spend 20 minutes re-prompting. You do not need to describe a perfect output -- you need to describe what the output must contain.
> Hard rule: Before you write any improvement prompt, list three to five things that are missing or wrong with the input. Those observations become your task bullets.
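If you assemble prompts with a script rather than by hand, the hard rule above translates directly into code: the gaps you observe in the input become the task bullets. A minimal sketch in Python (the function name and structure are illustrative, not part of any library):

```python
def build_task_section(observations):
    """Turn a list of observed gaps into the Task section of an
    improvement prompt. Each observation becomes a testable bullet."""
    bullets = "\n".join(f"- {obs}" for obs in observations)
    return f"Rewrite this ticket description to include:\n{bullets}"

task = build_task_section([
    "A clear one-sentence summary of the feature",
    "Explicit acceptance criteria the dev team can test against",
    "A definition of scope boundaries (what is not included)",
    "User impact: who benefits and why it matters to them",
])
print(task)
```

Keeping the observations as a plain list makes the quality standard explicit and easy to review before the prompt ever runs.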
---
Assign the right role
Role selection is not a formality. It shapes the vocabulary the model uses, the tradeoffs it prioritizes, and the blind spots it brings to the task.
Consider the same ticket rewrite task given to three different roles:
| Role assigned | What changes |
|---|---|
| UX designer with 5+ years in enterprise web apps | Focuses on user workflows, interaction patterns, edge cases in complex environments |
| Technical writer | Focuses on clarity, precision, consistent terminology, removing ambiguity |
| Product manager | Focuses on business value, prioritization signals, stakeholder communication |
All three will produce technically acceptable results. Only one will produce the result that matches your actual need. If you are writing tickets for a dev team, the UX designer lens surfaces acceptance criteria in terms of user behavior. If you are writing docs for a client handoff, the technical writer lens tightens language and removes jargon. If you are writing for a board update, the product manager lens frames everything around outcome and priority.
How to choose: Think about who on your team would review this output and push back on it. That is the role to assign.
For the running example in this guide, the role is: UX designer with 5+ years of experience in enterprise web apps.
Adding this to the prompt so far:
## Role
You are a UX designer with 5+ years of experience in enterprise web apps.
## Task
Rewrite this ticket description to include:
- A clear one-sentence summary of the feature
- Explicit acceptance criteria the dev team can test against
- A definition of scope boundaries (what is not included)
- User impact: who benefits and why it matters to them
The structure is taking shape. Each piece adds a constraint, and more constraint means less variance in the output.
---
Set the output format
Most operators skip format definition and then wonder why outputs look different every time they run the same prompt. The format is not decoration -- it is the quality standard. If you do not define it, the model invents one.
For improvement prompts specifically, a five-section output format works across almost any ticket or documentation type:
| Section | Purpose | Target length |
|---|---|---|
| Summary | States what the feature is | 1-2 sentences |
| Detailed Description | Full context: why it exists, who needs it, what it replaces | 1-2 paragraphs |
| Technical Requirements | Bulleted list of what must be built | 4-8 bullets |
| Design Specifications | Bulleted list of UX/UI constraints and behaviors | 3-6 bullets |
| Accessibility Notes | WCAG compliance, keyboard navigation, screen reader considerations | 2-4 bullets |
This format works because it mirrors how dev teams actually consume ticket information. Developers scan the Summary first. They read Technical Requirements before touching code. Design and accessibility sections prevent the "we'll handle that later" conversations that create rework.
Adding this to the prompt:
## Output Format
**Summary**
1-2 sentences describing the feature.
**Detailed Description**
1-2 paragraphs covering context, user need, and what this replaces or adds.
**Technical Requirements**
- Bulleted list
**Design Specifications**
- Bulleted list
**Accessibility Notes**
- Bulleted list
The prompt now has a role, a task, and a defined output. It will produce consistent results. The next step makes it produce excellent ones.
---
Add a real example
Improvement prompts are the right place for few-shot prompting. Few-shot means giving the model one or more examples of the input-output pair you want it to replicate. For standard prompts, finding good examples takes research. For improvement prompts, you already have them -- they are sitting in your existing tickets, SOWs, and client docs.
Your team's written work is your training data. Every well-written ticket your best PM produced is an example. Every clear user story your design lead wrote is a template. You do not need to manufacture examples from scratch.
Here is what a before/after example looks like inside the prompt:
## Example
**Before (weak ticket):**
Add export to PDF button on reports page
**After (strong ticket):**
**Summary**
Add a PDF export feature to the Reports page that allows users
to generate and download formatted reports directly from the UI.
**Detailed Description**
Users currently copy report data manually into external tools
to create formatted PDFs for client distribution. This feature
adds a native export option that eliminates that workflow and
reduces report preparation time. The export should reflect the
current filtered view of the report, not the full dataset.
**Technical Requirements**
- Export button appears in the top-right action bar of the Reports page
- Clicking the button generates a PDF of the currently visible report view
- PDF filename defaults to: [ReportName]_[YYYY-MM-DD].pdf
- Export respects all active filters and date range selections
- Error state handles failed generation with a toast notification
**Design Specifications**
- Button uses the secondary action style from the design system
- Loading state shows a spinner with "Generating PDF..." label
- Downloaded file opens in the browser's default PDF viewer
**Accessibility Notes**
- Button is keyboard accessible and focus-visible
- ARIA label reads: "Export current report as PDF"
- Error toast is announced to screen readers
That before/after does two things at once. It shows the model what a weak input looks like and what a strong output looks like. More importantly, it calibrates quality level. "Strong" is now defined by a concrete example, not an abstract instruction.
> Tip: Pull your example from a real ticket your team has already approved. It does not need to be perfect -- it needs to be representative of the standard you want to replicate.
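Because the example comes from real approved work, it helps to keep it separate from the rest of the prompt scaffolding so you can swap it per client without touching the role, task, or format. A hedged sketch of how that section might be assembled (the function and its names are illustrative):

```python
def build_example_section(before: str, after: str) -> str:
    """Format a real before/after ticket pair as a few-shot Example
    section. `before` is the weak input; `after` is the approved rewrite."""
    return (
        "## Example\n"
        "**Before (weak ticket):**\n"
        f"{before.strip()}\n\n"
        "**After (strong ticket):**\n"
        f"{after.strip()}"
    )

example = build_example_section(
    "Add export to PDF button on reports page",
    "**Summary**\nAdd a PDF export feature to the Reports page.",
)
print(example)
```

Storing each client's approved before/after pair as its own snippet means one prompt template can serve many engagements.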
---
The complete prompt
Here is the full improvement prompt assembled from every step in this guide. Each section is labeled with its function.
# Rewrite Product Ticket
## Role <- defines expertise and perspective
You are a UX designer with 5+ years of experience in enterprise web apps.
## Task <- specific requirements = the quality standard
Rewrite this ticket description to include:
- A clear one-sentence summary of the feature
- Explicit acceptance criteria the dev team can test against
- A definition of scope boundaries (what is not included)
- User impact: who benefits and why it matters to them
## Output Format <- structure locks in consistency
**Summary**
1-2 sentences describing the feature.
**Detailed Description**
1-2 paragraphs covering context, user need, and what this replaces or adds.
**Technical Requirements**
- Bulleted list
**Design Specifications**
- Bulleted list
**Accessibility Notes**
- Bulleted list
## Example <- few-shot calibration
**Before (weak ticket):**
Add export to PDF button on reports page
**After (strong ticket):**
**Summary**
Add a PDF export feature to the Reports page that allows users
to generate and download formatted reports directly from the UI.
**Detailed Description**
Users currently copy report data manually into external tools to
create formatted PDFs for client distribution. This feature adds
a native export option that eliminates that workflow and reduces
report preparation time. The export should reflect the current
filtered view, not the full dataset.
**Technical Requirements**
- Export button appears in the top-right action bar of the Reports page
- Clicking the button generates a PDF of the currently visible report view
- PDF filename defaults to: [ReportName]_[YYYY-MM-DD].pdf
- Export respects all active filters and date range selections
- Error state handles failed generation with a toast notification
**Design Specifications**
- Button uses the secondary action style from the design system
- Loading state shows a spinner with "Generating PDF..." label
- Downloaded file opens in the browser's default PDF viewer
**Accessibility Notes**
- Button is keyboard accessible and focus-visible
- ARIA label reads: "Export current report as PDF"
- Error toast is announced to screen readers
## Context <- domain-specific guardrails
The client uses the Atlassian design system. All UI components
must reference existing tokens and patterns. Do not introduce
new components without flagging them as net-new.
## Input
[Paste the weak ticket here]
Role defines the lens. Task defines the quality bar. Output Format locks in structure. Example calibrates the standard. Context adds the client-specific rules that prevent the model from producing work that looks polished but breaks your design system.
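If you reuse this prompt across clients, the five sections can be stitched together from stored snippets, with only the context and input changing per engagement. A sketch of that assembly, assuming each section lives as its own text block (nothing here is a required format or API):

```python
def assemble_prompt(role, task, output_format, example, context, ticket):
    """Join the sections of an improvement prompt in the order this
    guide builds them: Role, Task, Output Format, Example, Context, Input."""
    sections = [
        "# Rewrite Product Ticket",
        f"## Role\n{role}",
        f"## Task\n{task}",
        f"## Output Format\n{output_format}",
        f"## Example\n{example}",
        f"## Context\n{context}",
        f"## Input\n{ticket}",
    ]
    return "\n\n".join(s.strip() for s in sections)

prompt = assemble_prompt(
    role="You are a UX designer with 5+ years of experience in enterprise web apps.",
    task="Rewrite this ticket description to include: ...",
    output_format="Summary / Detailed Description / Technical Requirements / ...",
    example="Before: weak ticket. After: strong ticket.",
    context="The client uses the Atlassian design system.",
    ticket="[Paste the weak ticket here]",
)
print(prompt)
```

In practice, role, task, format, and example stay stable; only `context` and `ticket` vary from one engagement to the next.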
That last section -- Context -- is on the next page, where we show you how to build reusable context blocks for your client engagements.