MCP Tools, Resources, and Prompts Explained

The three building blocks of every MCP server. Tools are LLM-controlled, resources are user- or application-controlled, and prompts are reusable user-invoked templates.

The three building blocks you actually need to know

MCP has a full specification with transport layers, lifecycle management, and client-server handshake protocols. You don't need to care about any of that. The SDK handles it for you the moment you start building.

What you do need to understand are the three primitives that every MCP server exposes: tools, resources, and prompts. These are the pieces you'll actually write code for. Everything else is plumbing.

Most people hear "MCP server" and think it's one monolithic thing. It's not. It's a container that holds some combination of these three building blocks. Some servers only expose tools. Others expose all three. The mix depends entirely on what you're trying to give your AI client access to.

Here's the fastest way to understand the difference between them: tools let the AI do things, resources let the AI read things, and prompts give the AI templates for how to approach things.

> Operator frame. Think of an MCP server like a well-organized workspace you're handing to an assistant. Tools are the actions they can perform (create a task, send a message). Resources are the reference materials on the desk (team lists, project data). Prompts are the SOPs taped to the wall (how to file a bug report, how to format a status update).

Tools are functions the AI can call

Tools are what 80-90% of MCP usage looks like right now. When your AI needs to reach out into the world and take an action, it calls a tool.

How they work. You register a function on your MCP server with a name, a description, and input parameters. When the AI decides it needs that capability, it calls the function, passes the required inputs, and gets a result back. The AI controls when and whether to invoke the tool based on the conversation context.

A concrete example. Say you've built an MCP server for your task management system. You register a tool called create_task with parameters for title, description, assignee, and priority. When a user tells the AI "create a high-priority bug for the checkout flow and assign it to Sarah," the AI maps that request to your tool, fills in the parameters, and executes the call.

```typescript
// TypeScript SDK sketch: zod schemas define the inputs, the handler runs the action
server.tool("create_task", {
  title: z.string(),
  description: z.string(),
  assignee: z.string(),
  priority: z.enum(["high", "medium", "low"]),
}, async (args) => ({
  content: [{ type: "text", text: `Created task: ${args.title}` }],
}));
```

The description you write for each tool matters. It's what helps the model decide which tool to call when multiple options exist. Vague descriptions lead to wrong tool selection. Specific descriptions lead to reliable behavior.
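
To make that concrete, here's roughly what a listed tool looks like from the model's side (field names follow the MCP tools/list result; both descriptions and schemas below are invented for illustration):

```typescript
// What the model sees when it lists tools: a name, a description, and a
// JSON Schema for the inputs. The description is the model's main signal
// for choosing between tools.
interface ToolDefinition {
  name: string;
  description: string;
  inputSchema: object; // JSON Schema for the tool's parameters
}

// Vague: gives the model nothing to match against the user's request.
const vague: ToolDefinition = {
  name: "create_task",
  description: "Creates a task.",
  inputSchema: { type: "object", properties: { title: { type: "string" } } },
};

// Specific: names the system, says when to use it, and what it returns.
const specific: ToolDefinition = {
  name: "create_task",
  description:
    "Create a task in the team task tracker. Use when the user asks to " +
    "file, log, or assign work. Returns the new task's ID and URL.",
  inputSchema: {
    type: "object",
    properties: {
      title: { type: "string" },
      priority: { type: "string", enum: ["high", "medium", "low"] },
    },
    required: ["title"],
  },
};
```

Same tool, same schema shape; only the description changed, and that's the part the model reads first.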

> Key distinction. Tools are LLM-controlled. The AI decides when to call them based on what the user asks. You define the capability. The model decides when it's relevant.

Resources expose data as context

Resources are how your MCP server shares information with the AI client. Unlike tools, which perform actions, resources provide read-only data that gets loaded into the conversation as context.

How they work. You register a resource with a name and a function that returns data. The client application surfaces that resource to the user, who can then attach it to their conversation. The AI reads it as context, same as if you'd pasted the content into the chat yourself.

A concrete example. Your task management MCP server could expose a resource called team_list that queries the API and returns every team in the workspace. Inside Claude Desktop, you'd click the attachment icon, select "team_list" from your MCP server, and the full team roster loads into the conversation. You can then ask questions about your teams without manually copying that data.

```typescript
// Resources are addressed by URI (the "teams://all" URI here is illustrative);
// the handler returns the contents for that URI
server.resource("team_list", "teams://all", async (uri) => {
  const teams = await api.getTeams();
  return { contents: [{ uri: uri.href, text: JSON.stringify(teams) }] };
});
```

Not every AI client supports resources yet. Claude Desktop does. Cursor and some other tools are still adding support. This will change over time, but it's worth knowing when you pick which features to build first.

> Key distinction. Resources are application-controlled or user-controlled. The user decides when to attach them. The AI reads the data but doesn't decide to fetch it on its own.

Prompts are reusable templates for common tasks

Prompts are the least discussed of the three, but they solve a real problem: consistency. When you have a workflow that needs to run the same way every time, a prompt template ensures the AI gets identical instructions regardless of who triggers it.

How they work. You register a prompt with a name, a description, and template variables. The client application surfaces it as a selectable option. When invoked, the user fills in the variables, and the completed template becomes the AI's instruction.

A concrete example. Your task management server could expose a prompt called bug_report_template with variables for component, severity, reproduction steps, and expected behavior. When someone selects it, they fill in those four fields, and the AI receives a fully structured prompt that produces consistent, detailed bug reports every time.

```typescript
// Prompt arguments are strings the user fills in; the handler returns messages
server.prompt("bug_report_template", {
  component: z.string(),
  severity: z.string().describe("critical | major | minor"),
  steps_to_reproduce: z.string(),
  expected_behavior: z.string(),
}, (args) => ({
  messages: [{
    role: "user",
    content: { type: "text", text: `File a ${args.severity} bug in ${args.component}. Steps: ${args.steps_to_reproduce}. Expected: ${args.expected_behavior}.` },
  }],
}));
```

This is the same concept as the custom slash commands in Claude Code, but built into the MCP server itself. The templates travel with the server, not with any single user's configuration.

> Key distinction. Prompts are user-controlled. The user selects and fills them in. They standardize how the AI gets instructed for recurring workflows.

How to decide which one to build

When you're designing your own MCP server, this decision framework will save you from building the wrong primitive for the job.

| | Tools | Resources | Prompts |
| --- | --- | --- | --- |
| What it does | Performs an action | Provides read-only data | Delivers a structured instruction |
| Who controls it | The AI (LLM-controlled) | The user or app (application-controlled) | The user (user-controlled) |
| When it runs | AI decides based on conversation | User attaches before or during chat | User selects from menu |
| Example | create_task, send_message, get_forecast | team_list, project_data, config_settings | bug_report_template, status_update, handoff_doc |
| Best for | Actions with side effects | Loading reference data as context | Enforcing consistent workflows |
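
The table collapses into a toy model. This is not the MCP SDK, just a sketch of who pulls each registry's trigger:

```typescript
// Toy model of the three primitives. Not the SDK: just the control flow.
type Args = Record<string, string>;

const tools = new Map<string, (args: Args) => string>();   // the model invokes these
const resources = new Map<string, () => string>();         // the user attaches these
const prompts = new Map<string, (args: Args) => string>(); // the user selects and fills these

tools.set("create_task", (a) => `Created "${a.title}" (priority: ${a.priority})`);
resources.set("team_list", () => JSON.stringify(["Platform", "Checkout"]));
prompts.set("bug_report_template", (a) => `Component: ${a.component}, Severity: ${a.severity}`);

// A tool fires only when the model maps a request onto it:
const result = tools.get("create_task")!({ title: "Fix checkout crash", priority: "high" });
```

All three registries look similar on the server. The difference is entirely in who decides to call them, which is why the decision framework above is about control, not code.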

Start with tools. If you're building your first MCP server, tools are where the immediate value lives. They represent the vast majority of what people deploy MCP servers for today. You register a function, the AI calls it when relevant, and you get real work done.

Add resources when you find yourself constantly pasting the same reference data into conversations. Team rosters, project lists, configuration details. Anything the AI needs to know rather than do. Resources eliminate that copy-paste step.

Add prompts when you have multi-step workflows that need to run identically every time. Bug reports, status updates, client handoff documents. If the quality of the output depends on the quality of the instruction, and you don't want that instruction to drift between team members, prompts are the answer.

> Where to focus your energy. The MCP specification will grow. Authentication, remote servers, and new primitives are on the roadmap. But tools, resources, and prompts are the foundation everything else builds on. Get comfortable with these three, and you'll be ready to adopt whatever comes next without starting over.
