---
title: Prompt Engineering Resources Worth Reading Before You Start
description: Four external resources from OpenAI and Anthropic that give you solid grounding before the hands-on work begins. What each one covers and when to read it.
author: FractionalSkill
---
# Prompt Engineering Resources Worth Reading Before You Start
There is no shortage of content about prompt engineering. Most of it is noise. Four resources are worth your time specifically because they come from the teams building the models you are actually using.
None of these are required reading before the hands-on material. But operators who engage with at least one of them before writing serious prompts tend to get better results faster. Here is what each one covers.
## OpenAI's prompt engineering guide
This is a light read. OpenAI publishes this as their official documentation on the subject, and it covers the basics: being specific, using examples, setting context, controlling output format.
If you are completely new to prompting, start here. It is short enough to read in 20 minutes and gives you a solid mental framework before you encounter more detailed techniques. Nothing in it will surprise experienced practitioners, but the organization is clean and the examples are clear.
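Those four basics compose naturally into a single prompt. Here is a minimal sketch of what that looks like in practice; the function name, the digest scenario, and the JSON schema are illustrative choices, not taken from OpenAI's guide.

```python
def build_prompt(article_text: str) -> str:
    """Assemble a prompt that sets context, is specific about the task,
    gives one example, and pins down the output format."""
    return "\n\n".join([
        # Context: tell the model who it is and what it is working with.
        "You are an editor summarizing articles for a weekly client digest.",
        # Specificity: say exactly what you want, including length.
        "Summarize the article below in exactly two sentences.",
        # Example: show the shape of a good answer.
        'Example output: {"summary": "First sentence. Second sentence."}',
        # Output format: constrain the response so it is easy to parse.
        'Respond with JSON only, using the key "summary".',
        f"Article:\n{article_text}",
    ])

prompt = build_prompt("Acme Corp announced a new pricing model today...")
print(prompt)
```

None of the four parts is exotic on its own; the point of the guide is that using all of them together is what makes output predictable.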
## OpenAI's guide for reasoning models
Standard models and reasoning models get prompted differently, a fact that catches most people off guard the first time they switch between them.
Standard models like GPT-4o or Claude Sonnet respond well to detailed, structured instructions. Reasoning models like OpenAI's o1 or Google's Gemini Flash Thinking work differently under the hood -- they do their own internal reasoning before generating a response, so overloading them with step-by-step instructions can actually interfere with their output.
OpenAI publishes a separate guide specifically for prompting reasoning models. There is more depth on this topic in the upcoming lesson on model selection, but this resource gives you a useful primer if you are curious about the distinction now.
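The contrast is easy to see side by side. This sketch is illustrative rather than drawn from OpenAI's guide; the task and wording are invented. The pattern: give a standard model explicit steps, give a reasoning model the goal and the success criteria and let it plan the steps itself.

```python
TASK = "Find the scheduling conflict in the attached calendar export."

# Standard model: detailed, structured, step-by-step instructions.
standard_prompt = "\n".join([
    TASK,
    "Follow these steps:",
    "1. Parse each event into start time, end time, and attendees.",
    "2. Compare every pair of events for overlapping times.",
    "3. Report each overlap with the attendees affected.",
])

# Reasoning model: state the goal and what a good answer looks like,
# then stop. The model does its own internal planning, and spelling
# out the steps can interfere with that process.
reasoning_prompt = "\n".join([
    TASK,
    "Report each conflict with the attendees affected.",
])
```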
## Anthropic's prompting guide
This is the one to prioritize if you only read one thing from this list.
Anthropic's documentation goes into more depth than OpenAI's equivalent guide. It covers chain-of-thought prompting, XML formatting, multi-shot examples, and how to be specific in ways that produce consistent output. These are techniques that show up repeatedly in the case study work ahead.
The documentation is organized by technique, which means you can read it top to bottom or jump directly to the sections most relevant to what you are working on. The practical advice is specific, not generic.
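Two of those techniques, XML tags and multi-shot examples, combine naturally. A minimal sketch, assuming a sentiment-classification task: the tag names, example pairs, and helper function are illustrative choices, not prescribed by Anthropic's documentation.

```python
# Labeled examples to show the model the expected output shape.
EXAMPLES = [
    ("The invoice is 30 days overdue.", "negative"),
    ("Payment received, thank you!", "positive"),
]

def build_prompt(message: str) -> str:
    """Wrap examples and the input in XML tags so the model can
    tell instructions, demonstrations, and data apart."""
    shots = "\n".join(
        f"<example>\n<input>{text}</input>\n<label>{label}</label>\n</example>"
        for text, label in EXAMPLES
    )
    return (
        "Classify the sentiment of the client message as positive or negative.\n\n"
        f"<examples>\n{shots}\n</examples>\n\n"
        f"<message>{message}</message>\n\n"
        "Respond with only the label."
    )

print(build_prompt("We are delighted with the turnaround time."))
```

The XML tags do real work here: they let the model distinguish the examples from the message being classified, which is exactly the kind of consistency technique that recurs in the case studies.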
> **Why Anthropic's docs in particular.** The techniques covered in this curriculum align closely with how Anthropic thinks about prompting. Reading their documentation gives you the underlying reasoning, not just the rules.
## Anthropic's prompt engineering video
This is a one-hour, sixteen-minute discussion from Anthropic's research team. It is not a tutorial -- it is a conversation between practitioners thinking carefully about how and why models respond to different types of input.
It covers topics that go beyond technique: how models interpret context, why certain prompt structures produce more reliable outputs, and how prompting relates to the broader challenge of getting consistent, reliable output from AI systems.
If you have the time, this is the highest-value item on the list. The Anthropic team is precise in how they discuss these things, and the conversation covers ground that would take months of independent experimentation to discover. Operators who have watched it report that it changed how they think about prompting at a structural level, not just a tactical one.
## How to approach these resources
You do not need to complete any of these before moving forward. The hands-on case studies are designed to teach through practice, not prerequisite reading.
That said, the operators who get the most out of the case studies tend to be the ones who come in with some mental model of why prompting works the way it does -- not just what to type.
Read the Anthropic guide if you have 45 minutes. Skim the OpenAI guide if you have 20. Watch the video when you want to go deeper on the reasoning behind the techniques.
Then get into the case studies with real content from your actual client engagements. The combination of conceptual grounding plus immediate application is what builds durable skill.