How to Create a Model Context Protocol Server

The Model Context Protocol (MCP) represents a significant leap forward in how AI applications interact with external data sources and tools. Developed by Anthropic, MCP establishes a standardized way for language models to connect with various resources, from local file systems to remote APIs. If you’re looking to extend Claude’s capabilities or build sophisticated AI integrations, understanding how to create an MCP server is essential.

In this guide, we’ll walk through the practical steps of building your own MCP server, exploring the architecture, implementation details, and best practices that will help you create robust, production-ready solutions.

Understanding MCP Server Architecture

Before diving into implementation, it’s crucial to understand what an MCP server actually does. At its core, an MCP server acts as a bridge between AI models and external resources. Think of it as a translator that speaks both the language of your application’s data and the standardized protocol that AI models understand.

The architecture follows a client-server model where the MCP server exposes three primary types of capabilities:

  • Resources: Data sources that the model can read, such as files, database records, or API responses
  • Tools: Functions the model can execute to perform actions or computations
  • Prompts: Reusable prompt templates that provide structured ways to interact with your server

This separation of concerns makes MCP servers both powerful and maintainable. When Claude or another AI model needs to access external information, it communicates with your MCP server through JSON-RPC 2.0 messages over standard transport protocols. The server receives these requests, processes them, and returns structured responses that the model can understand and act upon.
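For a concrete sense of the wire format, here is roughly what a tool invocation looks like as a JSON-RPC 2.0 exchange (the tool name is illustrative; the method and field names follow the MCP specification):

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "calculate_sum",
    "arguments": { "a": 5, "b": 3 }
  }
}
```

and the server's reply:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "result": {
    "content": [{ "type": "text", "text": "The sum of 5 and 3 is 8" }]
  }
}
```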

Setting Up Your Development Environment

Creating an MCP server begins with choosing your technology stack. Anthropic provides official SDKs for TypeScript and Python, both of which significantly simplify the development process. For this guide, we’ll focus on the TypeScript implementation, though the concepts translate directly to Python.

Start by initializing a new Node.js project and installing the necessary dependencies:

npm init -y
npm install @modelcontextprotocol/sdk
npm install -D @types/node typescript

Configure your TypeScript compiler with a tsconfig.json that targets modern Node.js environments. You’ll want to enable strict mode to catch potential errors early and ensure your server is robust from the start. Set your module system to ES modules and configure the output directory for compiled JavaScript files.
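One reasonable tsconfig.json along those lines (the outDir and include paths are assumptions about your project layout):

```json
{
  "compilerOptions": {
    "target": "ES2022",
    "module": "Node16",
    "moduleResolution": "Node16",
    "outDir": "./build",
    "strict": true,
    "esModuleInterop": true,
    "skipLibCheck": true
  },
  "include": ["src/**/*"]
}
```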

Implementing Core Server Functionality

The foundation of any MCP server is the server instance itself. Using the MCP SDK, you create a server object and define handlers for the capabilities you want to expose. Here’s where the real work begins.

Creating the Server Instance

Your server starts with instantiating the Server class from the MCP SDK. This object will handle all incoming requests and route them to the appropriate handlers:

import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import {
  ListResourcesRequestSchema,
  ReadResourceRequestSchema,
  ListToolsRequestSchema,
  CallToolRequestSchema,
  ListPromptsRequestSchema,
  GetPromptRequestSchema
} from "@modelcontextprotocol/sdk/types.js";
import { promises as fs } from "node:fs";

const server = new Server({
  name: "my-mcp-server",
  version: "1.0.0"
}, {
  capabilities: {
    resources: {},
    tools: {},
    prompts: {}
  }
});

This initialization declares what your server can do. By specifying capabilities, you’re telling clients which features are available before they even make requests. The name and version help with debugging and logging when multiple MCP servers are running simultaneously.

Implementing Resources

Resources are read-only data sources that the model can access. These might be files, database entries, API responses, or any other information your application manages. Implementing resources involves two key handlers: listing available resources and reading specific resources.

The list handler returns metadata about all available resources:

server.setRequestHandler(ListResourcesRequestSchema, async () => {
  return {
    resources: [
      {
        uri: "file:///project/README.md",
        name: "Project README",
        mimeType: "text/markdown",
        description: "Main project documentation"
      },
      {
        uri: "file:///project/config.json",
        name: "Configuration",
        mimeType: "application/json",
        description: "Application configuration settings"
      }
    ]
  };
});

The read handler fetches the actual content when requested:

server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
  const uri = request.params.uri;
  
  if (uri === "file:///project/README.md") {
    const content = await fs.readFile("./README.md", "utf-8");
    return {
      contents: [
        {
          uri: uri,
          mimeType: "text/markdown",
          text: content
        }
      ]
    };
  }
  
  throw new Error(`Resource not found: ${uri}`);
});

The URI scheme is flexible—you can use file://, db://, api://, or any custom scheme that makes sense for your application. This flexibility allows you to represent various data sources in a unified way.
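As the number of resources grows, one common way to keep the read handler manageable is to dispatch on the URI scheme. The sketch below is a hand-rolled pattern, not part of the SDK; the readers are synchronous stand-ins, where a real server would use async calls like fs.readFile or a database query:

```typescript
// Hypothetical scheme-based dispatch for a read handler.
type ResourceReader = (uri: string) => string;

const readers: Record<string, ResourceReader> = {
  "file:": (uri) => `contents of ${uri}`, // stand-in for a file read
  "db:":   (uri) => `row for ${uri}`      // stand-in for a database query
};

function readByScheme(uri: string): string {
  // Extract the scheme, including the trailing colon, e.g. "file:"
  const scheme = uri.slice(0, uri.indexOf(":") + 1);
  const reader = readers[scheme];
  if (!reader) throw new Error(`Unsupported scheme: ${scheme}`);
  return reader(uri);
}
```

Each new data source then registers one reader instead of growing a chain of if statements.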

Building Tools

Tools are where MCP servers become truly powerful. They allow the AI model to execute functions, perform calculations, query databases, or interact with external APIs. Each tool has a defined schema that describes its inputs and outputs.

Here’s an example of a simple calculation tool:

server.setRequestHandler(ListToolsRequestSchema, async () => {
  return {
    tools: [
      {
        name: "calculate_sum",
        description: "Adds two numbers together",
        inputSchema: {
          type: "object",
          properties: {
            a: { type: "number", description: "First number" },
            b: { type: "number", description: "Second number" }
          },
          required: ["a", "b"]
        }
      }
    ]
  };
});

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  if (request.params.name === "calculate_sum") {
    const { a, b } = request.params.arguments as { a: number; b: number };
    const result = a + b;
    
    return {
      content: [
        {
          type: "text",
          text: `The sum of ${a} and ${b} is ${result}`
        }
      ]
    };
  }
  
  throw new Error(`Unknown tool: ${request.params.name}`);
});

For more complex scenarios, you might create tools that query databases, call external APIs, or perform file operations. The key is to provide clear descriptions and well-defined input schemas so the AI model understands when and how to use each tool.
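To make that concrete, here is a minimal required-field check against a JSON-Schema-like inputSchema. This is a sketch; a production server would use a full JSON Schema validator rather than this hand-rolled helper:

```typescript
// Minimal validation against the shape used by tool inputSchema above.
// Only checks presence of required keys and primitive typeof matches.
interface SimpleSchema {
  properties: Record<string, { type: string }>;
  required: string[];
}

function validateArgs(schema: SimpleSchema, args: Record<string, unknown>): string[] {
  const errors: string[] = [];
  for (const key of schema.required) {
    if (!(key in args)) errors.push(`Missing required argument: ${key}`);
  }
  for (const [key, value] of Object.entries(args)) {
    const expected = schema.properties[key]?.type;
    if (expected && typeof value !== expected) {
      errors.push(`Argument '${key}' should be ${expected}, got ${typeof value}`);
    }
  }
  return errors;
}
```

Returning a list of errors (rather than throwing on the first one) lets the tool report everything that is wrong with a call in a single response.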

Creating Prompts

Prompts are reusable templates that help structure interactions with your server. They’re particularly useful when you have common patterns of interaction that should be consistent across multiple uses.

server.setRequestHandler(ListPromptsRequestSchema, async () => {
  return {
    prompts: [
      {
        name: "analyze_file",
        description: "Analyze a project file",
        arguments: [
          {
            name: "filepath",
            description: "Path to the file to analyze",
            required: true
          }
        ]
      }
    ]
  };
});

server.setRequestHandler(GetPromptRequestSchema, async (request) => {
  if (request.params.name === "analyze_file") {
    const filepath = request.params.arguments?.filepath;
    return {
      messages: [
        {
          role: "user",
          content: {
            type: "text",
            text: `Please analyze the file at ${filepath} and provide insights about its structure, purpose, and any potential improvements.`
          }
        }
      ]
    };
  }
  
  throw new Error(`Unknown prompt: ${request.params.name}`);
});

Establishing Transport and Connection

Once your handlers are defined, you need to establish how your server communicates with clients. The most common transport mechanism is stdio (standard input/output), which works seamlessly with the Claude desktop application and other MCP clients.

const transport = new StdioServerTransport();
await server.connect(transport);

This simple connection enables your server to receive requests through stdin and send responses through stdout. The MCP SDK handles all the JSON-RPC protocol details, allowing you to focus on implementing your business logic.

For more advanced scenarios, you might use an HTTP-based transport instead (the protocol also defines a streamable HTTP transport), especially if your server needs to be accessed over a network or handle multiple concurrent clients.

Error Handling and Validation

Robust error handling is critical for production MCP servers. Every handler should validate inputs and provide meaningful error messages when something goes wrong:

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  try {
    // Validate tool exists
    if (!validTools.includes(request.params.name)) {
      throw new Error(`Tool '${request.params.name}' not found`);
    }
    
    // Validate arguments
    const args = request.params.arguments;
    if (!args || typeof args !== 'object') {
      throw new Error("Invalid arguments provided");
    }
    
    // Execute tool logic
    return await executeTool(request.params.name, args);
    
  } catch (error) {
    // Log error for debugging
    console.error(`Tool execution error:`, error);
    
    // Return user-friendly error
    return {
      content: [
        {
          type: "text",
          text: `Error: ${error instanceof Error ? error.message : String(error)}`
        }
      ],
      isError: true
    };
  }
});

Input validation is especially important for tools that interact with file systems, databases, or external APIs. Always sanitize user inputs and enforce strict type checking to prevent security vulnerabilities.
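As a concrete example of path sanitization, a guard along these lines (a sketch, not an SDK API) rejects any user-supplied path that resolves outside a configured root directory:

```typescript
import * as path from "node:path";

// Resolve a user-supplied relative path against a root directory and
// refuse anything that escapes it (e.g. "../../etc/passwd").
function safeResolve(root: string, userPath: string): string {
  const rootResolved = path.resolve(root);
  const resolved = path.resolve(rootResolved, userPath);
  if (resolved !== rootResolved && !resolved.startsWith(rootResolved + path.sep)) {
    throw new Error(`Path escapes allowed root: ${userPath}`);
  }
  return resolved;
}
```

Comparing fully resolved paths, rather than scanning the input for ".." substrings, also catches traversal attempts hidden behind absolute paths or redundant separators.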

Testing Your MCP Server

Before deploying your server, thorough testing is essential. You can test manually using the MCP Inspector tool or by integrating with Claude Desktop, and the SDK's in-memory transport lets you exercise your handlers from a test client in the same process.

Create a test script that exercises all your server’s capabilities:

import { Client } from "@modelcontextprotocol/sdk/client/index.js";
import { InMemoryTransport } from "@modelcontextprotocol/sdk/inMemory.js";

async function testServer() {
  // Wire the server to a test client entirely in memory
  const [clientTransport, serverTransport] = InMemoryTransport.createLinkedPair();
  await server.connect(serverTransport);

  const client = new Client(
    { name: "test-client", version: "1.0.0" },
    { capabilities: {} }
  );
  await client.connect(clientTransport);

  // Test resource listing
  const resources = await client.listResources();
  console.log("Resources:", resources);

  // Test tool execution
  const toolResult = await client.callTool({
    name: "calculate_sum",
    arguments: { a: 5, b: 3 }
  });
  console.log("Tool result:", toolResult);
}

Integration testing with actual AI model interactions helps ensure your server behaves correctly in real-world scenarios. Pay special attention to edge cases, such as missing resources, invalid tool arguments, and network timeouts.

Configuring Claude Desktop Integration

To use your MCP server with Claude Desktop, you need to add it to the configuration file. On macOS, this file is located at ~/Library/Application Support/Claude/claude_desktop_config.json:

{
  "mcpServers": {
    "my-server": {
      "command": "node",
      "args": ["/path/to/your/server/build/index.js"]
    }
  }
}

After updating the configuration, restart Claude Desktop. Your server will automatically connect, and Claude will be able to access its resources, tools, and prompts. You can verify the connection by asking Claude to list available tools or resources.

Performance Optimization

As your MCP server grows, performance becomes increasingly important. Here are key optimization strategies:

  • Cache frequently accessed resources: If you’re reading the same files or database records repeatedly, implement a caching layer to reduce latency
  • Implement streaming for large responses: For resources that return substantial amounts of data, consider streaming the response instead of loading everything into memory
  • Use connection pooling for databases: If your tools interact with databases, maintain a connection pool rather than creating new connections for each request
  • Add request timeouts: Prevent hanging requests by implementing reasonable timeout values for all external operations
  • Monitor resource usage: Track memory consumption and CPU usage to identify bottlenecks
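The caching point above can be sketched as a small time-based cache. This is a hand-rolled illustration, not an SDK feature; the injectable clock exists only so expiry can be tested deterministically:

```typescript
// Minimal TTL cache for expensive resource reads.
class TtlCache<V> {
  private store = new Map<string, { value: V; expires: number }>();

  constructor(private ttlMs: number, private now: () => number = Date.now) {}

  get(key: string): V | undefined {
    const entry = this.store.get(key);
    if (!entry) return undefined;
    if (this.now() > entry.expires) {
      // Entry is stale: evict it and report a miss
      this.store.delete(key);
      return undefined;
    }
    return entry.value;
  }

  set(key: string, value: V): void {
    this.store.set(key, { value, expires: this.now() + this.ttlMs });
  }
}
```

A read handler would consult the cache first and fall back to the real file or database read on a miss.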

For tools that perform time-consuming operations, consider implementing progress updates or breaking the work into smaller chunks that can be processed incrementally.

Security Considerations

Security should be at the forefront of your MCP server design. Since these servers can access sensitive data and perform actions on behalf of AI models, implementing proper security measures is non-negotiable:

  • Validate all inputs rigorously: Never trust data from incoming requests, even if it comes from a trusted AI model
  • Implement access controls: If your server handles sensitive data, implement authentication and authorization mechanisms
  • Sanitize file paths: When working with file systems, always validate and sanitize paths to prevent directory traversal attacks
  • Rate limiting: Protect your server from abuse by implementing rate limits on expensive operations
  • Audit logging: Maintain detailed logs of all tool executions and resource accesses for security auditing
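The rate-limiting point above can be sketched as a token bucket. Again this is an illustration rather than an SDK feature, with an injectable clock for deterministic testing:

```typescript
// Token bucket: allows bursts up to `capacity` calls, refilling at
// `refillPerSecond` tokens per second of clock time.
class TokenBucket {
  private tokens: number;
  private last: number;

  constructor(
    private capacity: number,
    private refillPerSecond: number,
    private now: () => number = () => Date.now() / 1000
  ) {
    this.tokens = capacity;
    this.last = this.now();
  }

  tryAcquire(): boolean {
    const t = this.now();
    // Refill based on elapsed time, capped at capacity
    this.tokens = Math.min(this.capacity, this.tokens + (t - this.last) * this.refillPerSecond);
    this.last = t;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false;
  }
}
```

An expensive tool handler would call tryAcquire() before doing any work and return an error result when the bucket is empty.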

Remember that MCP servers run with the permissions of the user who launches them. Be cautious about what operations you allow and always follow the principle of least privilege.

Conclusion

Creating a Model Context Protocol server opens up vast possibilities for extending AI capabilities with your own data and tools. By following the patterns and practices outlined in this guide, you can build robust, efficient servers that seamlessly integrate with Claude and other AI applications.

The key to success lies in careful planning of your server’s capabilities, rigorous testing, and attention to security and performance. Start with simple resources and tools, then gradually expand your server’s functionality as you gain confidence with the MCP architecture. With these foundations in place, you’re well-equipped to build powerful AI integrations that transform how users interact with your applications.
