
---
title: MCP server resources and prompts
description: Tools are only one third of the MCP protocol. Resources expose structured context and prompts create reusable templates. Here is how to build with all three.
author: FractionalSkill
---

# MCP server resources and prompts

Most MCP tutorials stop after showing you server.tool. That makes sense because tools are the flashy part. You describe a function, the LLM decides when to call it, and something happens in the outside world. But the MCP protocol has two other building blocks that change how you structure a server: resources and prompts.

Resources give your MCP client access to structured data without the LLM making that decision. Prompts give you reusable templates that pre-fill context for common tasks. When you combine all three, you get a server that can surface the right data, apply a standardized workflow, and execute the action in a single interaction.

Here is how each piece works, how to build them, and what a combined workflow looks like in practice.

## Tools, resources, and prompts compared

Before writing any code, it helps to pin down the distinction between these three. They solve different problems, and they're controlled by different parties.

| | Tools | Resources | Prompts |
|---|---|---|---|
| Who controls it | The LLM decides when to call | The user or client selects | The user selects from a menu |
| How it's registered | server.tool() | server.resource() | server.prompt() |
| What it does | Executes an action (create, update, delete) | Exposes data as context for the conversation | Provides a reusable template with arguments |
| Analogy | A function the AI can call | A file the user can attach | A form the user fills out |
| Client support | Supported by nearly all MCP clients | Growing support (Claude Desktop yes, some editors not yet) | Growing support (Claude Desktop yes, some editors not yet) |

The practical difference: when you register something as a tool, the LLM reads its description and decides whether to invoke it based on the conversation. When you register something as a resource, the data appears in the client interface for the user to attach manually. The LLM never decides to fetch a resource on its own.

This matters for data you always want available. A list of team names and IDs, for instance, is something the user should attach to the conversation, not something the LLM should guess about when to retrieve.
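The difference also shows up at the protocol level. A sketch of the two JSON-RPC requests each path produces; the method names ("tools/call", "resources/read") come from the MCP specification, while the tool name and URI are illustrative:

```typescript
// Sketch of the wire-level difference between the two paths.
// Method names are from the MCP specification; the tool name
// "list-teams" and the URI are illustrative.
const toolCall = {
  jsonrpc: "2.0",
  id: 1,
  method: "tools/call", // sent because the LLM decided to invoke a tool
  params: { name: "list-teams", arguments: {} },
};

const resourceRead = {
  jsonrpc: "2.0",
  id: 2,
  method: "resources/read", // sent because the user attached a resource
  params: { uri: "linear://teams" },
};
```

Same data may flow back in both cases; what differs is who initiated the request.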

> Client support is still catching up. Most MCP clients support tools today. Resources and prompts are being adopted, but not every client has shipped support yet. Claude Desktop supports all three. Some code editors are still rolling out resource and prompt integrations. Check your client's MCP documentation before building around these features.

## Building a resource with server.resource

A resource exposes structured data to the MCP client. Instead of the LLM deciding to call a function, the user sees the resource in the client interface and chooses to attach it.

The TypeScript SDK makes registration straightforward. Here is a resource that exposes team data from Linear:

```typescript
server.resource(
  "teams",           // Resource name
  "linear://teams",  // URI for grouping
  async (uri) => {
    // Fetch teams from the Linear API
    const teams = await linearClient.teams();
    const teamData = teams.nodes.map((team) => ({
      id: team.id,
      name: team.name,
      key: team.key,
    }));

    return {
      contents: [
        {
          uri: uri.href,
          text: JSON.stringify(teamData, null, 2),
        },
      ],
    };
  }
);
```

A few things to notice here. The second argument is a URI that acts like a path, determining how the client organizes your resources. The pattern linear://teams groups this under the Linear namespace. You can add more resources like linear://projects or linear://cycles and they'll be organized under the same prefix.

The function itself does the same API call you'd put inside a tool. The difference is only in how it reaches the conversation. With a tool, the LLM calls it. With a resource, the user attaches it.

When to convert a tool to a resource. If you have a tool that only reads data and the user always needs that data as context, it's a candidate for a resource. A "list teams" tool that the LLM calls on its own adds unnecessary round trips. A "teams" resource that the user attaches once gives the LLM the context immediately.
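One way to keep that conversion cheap is to share the fetch logic and only change the wrapper. A minimal sketch; asResourceResult is a hypothetical helper name, not part of the SDK:

```typescript
// Hypothetical helper: wrap any JSON-serializable value in the
// { contents: [...] } shape a resource read callback returns.
type ResourceResult = {
  contents: { uri: string; mimeType: string; text: string }[];
};

function asResourceResult(uri: string, data: unknown): ResourceResult {
  return {
    contents: [
      {
        uri,
        mimeType: "application/json",
        text: JSON.stringify(data, null, 2),
      },
    ],
  };
}

// The same fetched data can back a tool (LLM-invoked) or a
// resource (user-attached); only the wrapper changes.
const teams = [
  { id: "a1b2", name: "Platform", key: "PLT" },
  { id: "c3d4", name: "Growth", key: "GRW" },
];
const result = asResourceResult("linear://teams", teams);
```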

> URI naming convention. Use the pattern servicename://collection for your resource URIs. Examples: linear://teams, github://repos, notion://databases. This keeps things organized when a server exposes multiple resource types.
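The payoff of that convention is mechanical grouping: the scheme identifies the service. A sketch using Node's WHATWG URL parser, which accepts custom schemes like these:

```typescript
// Group resource URIs by service, using the URI scheme as the prefix.
function groupByService(uris: string[]): Map<string, string[]> {
  const groups = new Map<string, string[]>();
  for (const raw of uris) {
    // "linear://teams" parses with protocol "linear:"; strip the colon.
    const service = new URL(raw).protocol.replace(/:$/, "");
    groups.set(service, [...(groups.get(service) ?? []), raw]);
  }
  return groups;
}

const grouped = groupByService([
  "linear://teams",
  "linear://projects",
  "github://repos",
]);
```

A client can use the same trick to render one collapsible section per service.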

## Building a prompt with server.prompt

Prompts are reusable templates with defined arguments. Think of them as forms that pre-fill a structured instruction for the LLM. The user selects the prompt, fills in the fields, and the result gets sent as the conversation's starting message.

Here is a prompt that creates a standardized task template for Linear:

```typescript
import { z } from "zod"; // the SDK uses Zod for argument schemas

server.prompt(
  "create-task-template",
  "Template for creating standardized Linear issues",
  {
    ticketId: z.string().describe("Ticket identifier"),
    title: z.string().describe("Task title"),
    team: z.string().describe("Team ID to assign the task to"),
    type: z.string().describe("Task type: feature, bug, or chore"),
    impact: z.string().describe("Expected impact: low, medium, or high"),
    context: z.string().describe("Background information and context"),
  },
  async ({ ticketId, title, team, type, impact, context }) => ({
    messages: [
      {
        role: "user",
        content: {
          type: "text",
          text: `Create a new Linear task with the following details:
Ticket: ${ticketId}
Title: ${title}
Team: ${team}
Type: ${type}
Impact: ${impact}
Context: ${context}

Follow our team's standard issue format. Include the type and impact in the description.`,
        },
      },
    ],
  })
);
```

The arguments use Zod schemas, the same validation library the SDK uses for tool parameters. Each argument gets a description that the client displays as a label next to the input field.

In Claude Desktop, prompts appear under the "Attach from MCP" menu. You click the prompt name, fill in each field, and the resulting message gets composed and sent to the conversation. The LLM then has everything it needs to call the appropriate tool.

The pattern here is deliberate. The prompt doesn't create the task itself. It builds a structured message that triggers the create-task tool. Prompts compose context. Tools execute actions. Keeping them separate means either piece can be updated independently.
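Because the prompt handler is just a function from arguments to messages, the template logic can be pulled out and unit-tested without a running server. A sketch; buildTaskMessage is an illustrative name, not an SDK export:

```typescript
type PromptResult = {
  messages: { role: "user"; content: { type: "text"; text: string } }[];
};

// The same template logic as the prompt handler, as a plain function.
function buildTaskMessage(args: {
  ticketId: string;
  title: string;
  team: string;
  type: string;
  impact: string;
  context: string;
}): PromptResult {
  const text = [
    "Create a new Linear task with the following details:",
    `Ticket: ${args.ticketId}`,
    `Title: ${args.title}`,
    `Team: ${args.team}`,
    `Type: ${args.type}`,
    `Impact: ${args.impact}`,
    `Context: ${args.context}`,
    "",
    "Follow our team's standard issue format. Include the type and impact in the description.",
  ].join("\n");
  return { messages: [{ role: "user", content: { type: "text", text } }] };
}

const msg = buildTaskMessage({
  ticketId: "ENG-42",
  title: "Fix login redirect",
  team: "team_123",
  type: "bug",
  impact: "high",
  context: "Users bounce to / after OAuth.",
});
```

The registered handler then reduces to a one-line wrapper around this function, which keeps the template easy to change without touching the registration.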

## Combining resources, prompts, and tools in one workflow

The real payoff comes when all three work together. Here is what a practical workflow looks like using a Linear MCP server with all three components registered:

Step 1: Attach the Teams resource. In Claude Desktop, you click on the Teams resource from your MCP server. This loads all your team names and IDs into the conversation as context. The LLM can now reference any team without you needing to look up IDs manually.

Step 2: Fill in the task template prompt. You select the "create-task-template" prompt from the Attach from MCP menu. Claude Desktop shows you the input fields. You paste the team ID from the resource data, fill in the title, type, impact, and context fields. The composed message enters the conversation.

Step 3: The LLM calls the create-task tool. Based on the structured message from the prompt and the team data from the resource, the LLM has enough context to call the create-task tool with all the right parameters. The task gets created in Linear with the standardized format your team expects.

One interaction. Three MCP components. No manual copying of team IDs, no free-form prompting that produces inconsistent issue formats, no back-and-forth with the LLM asking for missing information.
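The hand-off between the three steps can be simulated as plain data, with no server or LLM involved; all names and values below are illustrative:

```typescript
// Step 1: the user attaches the teams resource; its text lands in context.
const teamsResourceText = JSON.stringify([
  { id: "team_123", name: "Platform", key: "PLT" },
]);
const teams = JSON.parse(teamsResourceText) as { id: string; name: string }[];

// Step 2: the user fills the prompt template; the client composes this message.
const promptMessage = `Create a new Linear task with the following details:
Ticket: ENG-42
Title: Fix login redirect
Team: ${teams[0].id}
Type: bug
Impact: high
Context: Users bounce to / after OAuth.`;

// Step 3: from that message, the LLM has everything it needs to call the
// create-task tool (argument names here are illustrative, not SDK-defined).
const createTaskArgs = {
  teamId: teams[0].id,
  title: "Fix login redirect",
  description: "Type: bug | Impact: high. Users bounce to / after OAuth.",
};
```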

```
Resource (teams)    -->  Context loaded into conversation
Prompt (template)   -->  Structured message composed
Tool (create-task)  -->  Action executed in Linear
```

This is the architecture the MCP protocol is designed around. Tools handle execution. Resources handle context. Prompts handle structured input. Each piece does one thing and they compose cleanly.

> You don't need all three on day one. If your server only needs tools, ship it with tools. Add resources when you find yourself repeatedly telling the LLM to fetch the same data. Add prompts when you want to standardize how your team formats requests. Build incrementally based on what your actual workflow requires.

## What's next for your MCP server

The three registration methods, server.tool, server.resource, and server.prompt, cover the full set of building blocks. The TypeScript SDK keeps the API surface small and consistent across all three. If you can build a tool, you can build a resource or a prompt with the same patterns.

The current gap is client support. Most MCP clients have shipped tool support. Resources and prompts are still being adopted. Claude Desktop supports all three today. Code editors are actively adding these features, and the servers you build now will work with those clients the moment they ship support.

If you've only built tools so far, look at your existing server and ask two questions. Is there a tool that only reads data and always gets called at the start of a conversation? That's a resource. Do you keep typing the same structured prompt to get consistent output? That's a prompt template.

Start with one resource or one prompt alongside your existing tools. Test it in Claude Desktop to see how the pieces compose. Each component is optional and additive. Nothing breaks by introducing them one at a time.
