# JSON Prompt Templates for Client Work
Three JSON prompt templates for meeting extraction, competitive analysis, and engagement tracking. Plus validation and workflow chaining.
## The JSON Prompt Template
The difference between a JSON prompt that works every time and one that breaks half the time is structural, not creative. When you define the schema, provide an example, and explicitly forbid extra text, the model has no ambiguity left to fill with garbage. Here is a complete annotated template operators can copy directly.
```
// SECTION 1: Role assignment
// Give the model a specific operational identity. Generic roles produce generic output.

You are a senior operations analyst. You extract structured data from unstructured text
with precision. You do not summarize, interpret, or add commentary -- you populate fields.

// SECTION 2: Task statement
// One sentence. What to do and what to do it with.

Extract the following data from the meeting notes below and return it as valid JSON.

// SECTION 3: Schema definition with keys and types
// Name every key. Specify every type. No assumptions.

Return an object with these exact keys:
- "meeting_date": string (ISO 8601 format, e.g. "2025-03-15")
- "attendees": array of strings (full names only)
- "decisions": array of strings (one decision per item)
- "action_items": array of objects, each with:
  - "owner": string
  - "deadline": string (ISO 8601 or "TBD")
  - "description": string
- "follow_ups": array of strings

// SECTION 4: Validity and output constraint
// This line eliminates the most common failure mode: extra prose wrapping the JSON.

Only output valid JSON. No markdown formatting. No code fences. No explanatory text
before or after the JSON object. The response must begin with { and end with }.

// SECTION 5: Example object
// Show the shape you want. The model matches structure far more reliably than description.

Example output:
{
  "meeting_date": "2025-03-15",
  "attendees": ["Sarah Chen", "Marcus Webb"],
  "decisions": ["Delay Q2 launch to April 14"],
  "action_items": [
    {
      "owner": "Sarah Chen",
      "deadline": "2025-03-22",
      "description": "Revise go-to-market timeline"
    }
  ],
  "follow_ups": ["Confirm vendor availability for April window"]
}

// SECTION 6: Array wrapper pattern
// When processing multiple records, wrap in an array to keep parsing uniform.
// Use this when you're processing multiple meetings, contacts, or data sources.

If processing more than one record, wrap all objects in a top-level array:
[
  { ...record one... },
  { ...record two... }
]

[PASTE YOUR SOURCE MATERIAL HERE]
```
The comment blocks above are for your reference only -- strip them before submitting the prompt. The six components form a complete instruction set: role, task, schema, output constraint, example, and array wrapper. Leave any of them out and you hand the model room to guess wrong.
---
## Three JSON patterns for client work
These templates cover three extraction tasks that come up repeatedly across client engagements. Each is ready to deploy. Replace the bracketed fields with your source material.
### Pattern 1: Meeting notes extraction
Paste raw meeting notes -- transcript, bullet-point summary, or even a rough voice memo transcription -- and get back structured JSON with every decision and action item parsed out.
```
You are a senior operations analyst who extracts structured data from meeting notes.
You do not interpret or summarize -- you populate fields exactly as instructed.

Extract the following from the meeting notes below and return as valid JSON only.
No markdown. No code fences. Begin with { and end with }.

Schema:
{
  "meeting_date": "string (ISO 8601)",
  "attendees": ["string"],
  "decisions": ["string"],
  "action_items": [
    {
      "owner": "string",
      "deadline": "string (ISO 8601 or TBD)",
      "description": "string"
    }
  ],
  "follow_ups": ["string"]
}

[PASTE MEETING NOTES HERE]
```
If the notes reference a decision without naming an owner, populate "owner": "Unassigned" rather than omitting the field. This keeps your schema intact so any tool receiving the output -- a CRM, a spreadsheet, an automation -- can read it without errors.
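That defaulting rule can also be enforced on receipt, so a model that skips the field does not break the pipeline. A minimal post-processing sketch in Python; the helper name is illustrative, and the field names come from the schema above:

```python
import json

def normalize_meeting_record(record: dict) -> dict:
    """Fill schema gaps so downstream tools never hit a missing field."""
    for item in record.get("action_items", []):
        if not item.get("owner"):
            item["owner"] = "Unassigned"   # the default named above
        item.setdefault("deadline", "TBD")
    # Guarantee every top-level key exists, even if the model skipped it.
    record.setdefault("attendees", [])
    record.setdefault("decisions", [])
    record.setdefault("action_items", [])
    record.setdefault("follow_ups", [])
    return record

raw = '{"meeting_date": "2025-03-15", "action_items": [{"description": "Revise timeline"}]}'
record = normalize_meeting_record(json.loads(raw))
print(record["action_items"][0]["owner"])  # Unassigned
```

Run this on every parsed response before handing it to a CRM or spreadsheet; it is cheaper than re-prompting for a single absent key.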
### Pattern 2: Competitive analysis
Paste competitor articles, press releases, pricing pages, or LinkedIn summaries and extract a structured competitive profile you can load into a comparison dashboard or CRM.
```
You are a competitive intelligence analyst. Extract structured data from the source
material below. Return valid JSON only. No markdown. No code fences. Begin with {
and end with }.

Schema:
{
  "company_name": "string",
  "strengths": ["string"],
  "weaknesses": ["string"],
  "pricing": {
    "model": "string (e.g. per-seat, usage-based, flat)",
    "entry_point": "string (lowest published tier or 'undisclosed')",
    "notes": "string"
  },
  "market_position": "string (1-2 sentences, verbatim from source or inferred)"
}

[PASTE COMPETITOR SOURCE MATERIAL HERE]
```
Run this against multiple competitors using the array wrapper pattern. Pass each competitor's content as a separate labeled block and instruct the model to return an array of objects -- one per company.
### Pattern 3: Client engagement tracker
Paste weekly status updates from your notes, Slack exports, or email threads and convert them into a structured record suitable for a project tracker or CRM update.
```
You are a project operations analyst. Extract structured data from the status update
below. Return valid JSON only. No markdown. No code fences. Begin with { and end with }.

Schema:
{
  "client_name": "string",
  "status": "string (On Track | At Risk | Blocked | Complete)",
  "deliverables_completed": ["string"],
  "blockers": ["string"],
  "next_steps": [
    {
      "description": "string",
      "owner": "string",
      "due_date": "string (ISO 8601 or TBD)"
    }
  ],
  "hours_logged": "number (0 if not mentioned)"
}

[PASTE WEEKLY STATUS UPDATE HERE]
```
The "status" field uses a controlled vocabulary -- four values only. When you constrain enumerated fields like this in the schema, you can filter and sort records in any tool that receives the data without any additional cleanup.
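Enforcing the vocabulary on receipt is a one-line membership check. A sketch, with the allowed set copied from the schema above:

```python
# The four allowed values, copied from the "status" field's schema comment.
ALLOWED_STATUS = {"On Track", "At Risk", "Blocked", "Complete"}

def valid_status(record: dict) -> bool:
    # True only when the extracted status is one of the controlled values.
    return record.get("status") in ALLOWED_STATUS

print(valid_status({"status": "At Risk"}))   # True
print(valid_status({"status": "Delayed"}))   # False
```

If the check fails, re-prompt listing the four allowed values rather than guessing which one the model meant.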
---
## Validating JSON output
The model will return broken JSON. Not always, but often enough that you need a standard recovery procedure in place before you build anything that depends on clean output.
Common failure modes across these three patterns:
| Failure Mode | What It Looks Like | Why It Happens |
|---|---|---|
| Trailing comma | "follow_ups": ["item",] | Model follows list rhythm without applying JSON rules |
| Unescaped quotes | "description": "She said "yes"" | Quotation marks inside string values aren't escaped |
| Markdown code fences | Response opens with `` ```json `` | Model defaults to its standard code formatting behavior |
| Explanatory prose | "Here is the JSON you requested:" followed by the object | Model defaults to conversational framing |
| Missing required keys | Schema specifies 6 keys, response returns 4 | Source material was thin; model skipped unpopulated fields |
The markdown fence and explanatory prose failures are the easiest to prevent: the "Begin with { and end with }" instruction in your prompt eliminates both. The others require validation on receipt.
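Even with that instruction in the prompt, a defensive strip on receipt costs nothing. One way to do it, as a sketch: keep only the span from the first opening brace or bracket to the last closing one, which discards fences and surrounding prose in one pass.

```python
def extract_json(text: str) -> str:
    # Keep only the first { or [ through the last } or ].
    # Assumes the response contains exactly one JSON payload.
    starts = [i for i in (text.find("{"), text.find("[")) if i != -1]
    if not starts:
        return text  # nothing JSON-like found; let the parser report it
    return text[min(starts):max(text.rfind("}"), text.rfind("]")) + 1]

print(extract_json('```json\n{"a": 1}\n```'))  # {"a": 1}
```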
The parse-and-re-prompt loop handles the remaining failures reliably:
1. Attempt to parse the response programmatically or by pasting it into a JSON validator (jsonlint.com works fine).
2. If parsing fails, copy the exact error message.
3. Re-prompt with: "The JSON you returned contains an error: [paste error]. Return the corrected JSON only. Do not change any values -- fix the syntax error only."
4. Parse again. Two rounds resolve 95% of structural errors.
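The loop is easy to automate. A Python sketch; `call_model` is a hypothetical stand-in (prompt text in, response text out) for whatever model API you actually use:

```python
import json

def get_valid_json(prompt: str, call_model, max_rounds: int = 2):
    """Parse-and-re-prompt loop. `call_model` is a hypothetical callable."""
    response = call_model(prompt)
    for _ in range(max_rounds):
        try:
            return json.loads(response)  # attempt to parse
        except json.JSONDecodeError as err:
            # Feed the exact error back; ask for a syntax-only fix.
            response = call_model(
                f"The JSON you returned contains an error: {err}. "
                "Return the corrected JSON only. Do not change any values -- "
                "fix the syntax error only."
            )
    raise ValueError(f"Model output still invalid after {max_rounds} rounds")
```

The `JSONDecodeError` message includes the line and column of the fault, which is exactly the "[paste error]" detail the re-prompt needs.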
For missing keys, re-prompt with: Your response is missing the following required keys: [list them]. Return the complete JSON object with all keys populated. Use null for any key where the source material contains no relevant information.
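Detecting which keys to list in that re-prompt is a single set difference. A sketch using the meeting-notes keys:

```python
# Required top-level keys from the meeting-notes schema.
REQUIRED_KEYS = {"meeting_date", "attendees", "decisions",
                 "action_items", "follow_ups"}

def missing_keys(record: dict) -> list:
    # Sorted so the re-prompt text is deterministic.
    return sorted(REQUIRED_KEYS - record.keys())

print(missing_keys({"meeting_date": "2025-03-15", "attendees": []}))
# ['action_items', 'decisions', 'follow_ups']
```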
> Hard rule: Never trust JSON output from a model without validating it. A missing comma does not throw an error you can see -- it silently breaks the pipeline downstream. Validate before you pass the output anywhere.
---
## Chaining JSON into workflows
The value of JSON output is what you do with it after. Operators build JSON extraction into their workflows because structured data connects directly to the tools already running the practice.
Plain text output from a model is a dead end. You can read it, you can copy from it, and that is all. JSON output is a live asset.
**CRM updates.** A client engagement tracker payload can populate a CRM record without manual data entry. Whether you're running HubSpot, Notion databases, or Airtable, their APIs accept JSON objects. The model does the extraction; the API call does the filing.
**Slack notifications.** Action items with owners and deadlines map directly to a Slack message format. A simple automation reads the action_items array and posts one message per item to the relevant channel, tagged to the owner. No copy-paste, no missed follow-ups.
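A sketch of the formatting half of that automation. Posting (for example via a Slack incoming webhook) is omitted, and the message layout is one plausible choice, not a Slack requirement:

```python
def action_item_messages(record: dict) -> list:
    # One message string per action item. Tagging real Slack users would
    # require mapping owner names to member IDs, which is skipped here.
    return [
        f"Action: {item['description']} | owner: {item['owner']} | due: {item['deadline']}"
        for item in record.get("action_items", [])
    ]

record = {"action_items": [{"owner": "Sarah Chen", "deadline": "2025-03-22",
                            "description": "Revise go-to-market timeline"}]}
print(action_item_messages(record)[0])
```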
**Spreadsheet imports.** Google Sheets and Excel both accept JSON via their APIs. A competitive analysis run across six competitors returns an array of six objects. Each object becomes a row. Your competitive tracker populates itself.
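The object-to-row flattening looks like this. A sketch using Python's standard csv module, with columns chosen from the competitive-analysis schema; CSV is the lowest-common-denominator import format both tools accept:

```python
import csv
import io

def competitors_to_csv(records: list) -> str:
    # Flatten the nested "pricing" object into top-level columns.
    buf = io.StringIO()
    writer = csv.DictWriter(
        buf,
        fieldnames=["company_name", "pricing_model", "entry_point", "market_position"],
    )
    writer.writeheader()
    for r in records:
        writer.writerow({
            "company_name": r["company_name"],
            "pricing_model": r["pricing"]["model"],
            "entry_point": r["pricing"]["entry_point"],
            "market_position": r["market_position"],
        })
    return buf.getvalue()
```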
**Dashboard data.** If you're running a client reporting dashboard -- even a basic one in Retool, Notion, or a custom build -- JSON is the native input format. Structured weekly status updates feed directly into status charts and timeline views without transformation.
The practical difference for fractionals managing multiple client engagements is significant. Every hour spent copying data from a summary document into a tracker is an hour you did not bill. The extraction prompt does the copy-paste work. Your time goes to the interpretation and decisions that require your domain knowledge -- which is what your clients are actually paying for.
> Tip: Start with one workflow, not five. Pick the repetitive data task that costs you the most time per week -- usually meeting notes or weekly status updates -- and build the JSON extraction loop for that one task first. Once it's running cleanly, every workflow you add after that follows the same structure.