Where to Start
If you've been working with AI agents for any length of time, you've probably hit the wall where your agent needs access to something it doesn't have: a proprietary API, an internal database, a custom workflow. The Model Context Protocol (MCP) is the answer to that, and building your own server is more approachable than it looks.
This walkthrough assumes you're comfortable with TypeScript and have a basic understanding of what MCP does conceptually. We'll go from project setup to a working, testable server with real tool definitions and resource handling.
Project Setup and Dependencies
Start with a fresh TypeScript project. You'll need the official SDK from Anthropic, which is published as @modelcontextprotocol/sdk on npm. As of early 2025, the package is at version 0.6.x and the API surface has stabilized considerably from the early betas.
mkdir my-mcp-server
cd my-mcp-server
npm init -y
npm install @modelcontextprotocol/sdk zod zod-to-json-schema
npm install -D typescript @types/node ts-node

Zod is worth pulling in from the start. The SDK uses it internally for schema validation, and pairing it with zod-to-json-schema lets you define tool input schemas that are both type-safe and automatically serializable to JSON Schema.
Your tsconfig.json should target ES2022 or later and use NodeNext module resolution. The SDK ships as ESM, so you'll want your project to match:
{
"compilerOptions": {
"target": "ES2022",
"module": "NodeNext",
"moduleResolution": "NodeNext",
"strict": true,
"outDir": "dist"
}
}

Architecture Decisions Before You Write a Line
Before touching the SDK, spend five minutes on architecture. MCP servers communicate over stdio by default, which is how Claude Desktop and most agent frameworks expect to connect. But the SDK also supports HTTP with Server-Sent Events for remote deployments. Picking the wrong transport early creates refactoring pain later.
For local development and integration with tools like Claude Desktop or Cursor, stdio is the right call. For a server you want to expose as a shared resource across a team, HTTP transport makes more sense. The SDK abstracts this well enough that switching isn't catastrophic, but your deployment assumptions will shape how you handle things like authentication and state.
The other decision is statefulness. MCP servers can maintain state across a session, but most well-designed servers treat each tool call as relatively independent. If your server needs to maintain a database connection pool or cache an auth token, that's fine, just initialize it at startup rather than per-request.
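That startup-initialization pattern can be sketched as follows. The `Pool` interface and `createPool` factory are stand-ins for whatever client or connection factory your server actually uses, not SDK APIs:

```typescript
// Sketch: initialize shared, long-lived state once at process startup
// rather than inside each tool call handler.
interface Pool {
  query(sql: string): Promise<unknown[]>;
}

function createPool(connectionString: string): Pool {
  // Placeholder: a real implementation would open connections here.
  return { query: async () => [] };
}

// Created once when the process starts...
const pool = createPool(process.env.DATABASE_URL ?? 'postgres://localhost/dev');

// ...and reused by every tool call handler.
export async function handleQueryTool(sql: string): Promise<unknown[]> {
  return pool.query(sql);
}
```

The point is simply that the expensive setup happens once, outside the request path, while each tool call stays stateless from the protocol's perspective.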
Creating the Server Instance
The entry point for any MCP server is the Server class from the SDK. You instantiate it with metadata about your server, then attach handlers for the protocol messages you want to support.
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StdioServerTransport } from '@modelcontextprotocol/sdk/server/stdio.js';
import {
CallToolRequestSchema,
ListToolsRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';
const server = new Server(
{
name: 'my-mcp-server',
version: '1.0.0',
},
{
capabilities: {
tools: {},
resources: {},
},
}
);

The capabilities object tells the client what your server supports. If you declare tools: {}, you're expected to handle tools/list and tools/call requests. Same pattern for resources. Don't declare capabilities you haven't implemented; the client will try to use them.
Defining Tools
Tools are the most commonly used MCP primitive. They're callable functions that an AI agent can invoke, with structured inputs and outputs. A well-defined tool has a clear name, a description the model can reason about, and a tight input schema.
Here's a concrete example: a tool that queries a GitHub repository's open issues via the GitHub REST API.
import { z } from 'zod';
import { zodToJsonSchema } from 'zod-to-json-schema';
const GetIssuesSchema = z.object({
owner: z.string().describe('The repository owner or organization'),
repo: z.string().describe('The repository name'),
state: z.enum(['open', 'closed', 'all']).default('open'),
limit: z.number().int().min(1).max(100).default(20),
});
const TOOLS = [
{
name: 'get_github_issues',
description:
'Fetch open issues from a GitHub repository. Returns issue titles, numbers, labels, and URLs.',
inputSchema: zodToJsonSchema(GetIssuesSchema),
},
];

The description field matters more than most people realize. The model uses it to decide when to call your tool, so vague descriptions lead to missed calls or incorrect usage. Be specific about what the tool returns, not just what it does.
Register the list handler so clients can discover your tools:
server.setRequestHandler(ListToolsRequestSchema, async () => {
return { tools: TOOLS };
});

Implementing Tool Execution
The call handler is where your actual logic lives. The SDK gives you the tool name and arguments; you validate, execute, and return a result.
server.setRequestHandler(CallToolRequestSchema, async (request) => {
const { name, arguments: args } = request.params;
if (name === 'get_github_issues') {
const parsed = GetIssuesSchema.safeParse(args);
if (!parsed.success) {
return {
content: [{ type: 'text', text: `Invalid arguments: ${parsed.error.message}` }],
isError: true,
};
}
const { owner, repo, state, limit } = parsed.data;
const url = `https://api.github.com/repos/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}/issues?state=${state}&per_page=${limit}`;
const response = await fetch(url, {
headers: {
Authorization: `Bearer ${process.env.GITHUB_TOKEN}`,
Accept: 'application/vnd.github.v3+json',
},
});
if (!response.ok) {
return {
content: [{ type: 'text', text: `GitHub API error: ${response.status}` }],
isError: true,
};
}
const issues = await response.json();
const formatted = issues.map((i: any) =>
`#${i.number}: ${i.title} (${i.html_url})`
).join('\n');
return {
content: [{ type: 'text', text: formatted || 'No issues found.' }],
};
}
return {
content: [{ type: 'text', text: `Unknown tool: ${name}` }],
isError: true,
};
});

A few things worth noting here. Always use safeParse rather than parse so you can return a structured error instead of throwing. Return isError: true for failures; the protocol reports tool errors in the response payload, not via thrown exceptions. And keep your error messages informative, since the model will read them and potentially retry with corrected inputs.
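The handler above mixes fetching and formatting. One refinement worth considering is pulling the formatting into a pure function so it can be unit tested without any network access. This is a sketch; the `Issue` interface is a minimal assumed subset of GitHub's actual response fields:

```typescript
// A minimal subset of the fields GitHub returns for an issue.
interface Issue {
  number: number;
  title: string;
  html_url: string;
}

// Pure formatting logic, extracted from the tool handler so it can be
// tested in isolation.
export function formatIssues(issues: Issue[]): string {
  if (issues.length === 0) return 'No issues found.';
  return issues
    .map((i) => `#${i.number}: ${i.title} (${i.html_url})`)
    .join('\n');
}
```

The handler then becomes a thin wrapper: validate, fetch, and pass the parsed JSON to `formatIssues`.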
Resource Handling
Resources are a different primitive from tools. Where tools are actions, resources are data sources the model can read, similar to files or documents. They're identified by URIs and can be static or dynamic.
A practical use case: exposing a set of internal documentation pages as resources so the model can read them on demand rather than stuffing everything into the system prompt.
import {
ListResourcesRequestSchema,
ReadResourceRequestSchema,
} from '@modelcontextprotocol/sdk/types.js';
const RESOURCES = [
{
uri: 'docs://api/authentication',
name: 'Authentication Guide',
mimeType: 'text/markdown',
},
{
uri: 'docs://api/rate-limits',
name: 'Rate Limits Reference',
mimeType: 'text/markdown',
},
];
server.setRequestHandler(ListResourcesRequestSchema, async () => {
return { resources: RESOURCES };
});
server.setRequestHandler(ReadResourceRequestSchema, async (request) => {
const { uri } = request.params;
const docMap: Record<string, string> = {
'docs://api/authentication': '# Authentication\n\nUse Bearer tokens in the Authorization header...',
'docs://api/rate-limits': '# Rate Limits\n\nThe API allows 1000 requests per hour per token...',
};
const content = docMap[uri];
if (!content) {
throw new Error(`Resource not found: ${uri}`);
}
return {
contents: [{ uri, mimeType: 'text/markdown', text: content }],
};
});

In a real implementation, you'd fetch these from a CMS, a filesystem, or a database rather than hardcoding them. The URI scheme is yours to define; just keep it consistent and meaningful.
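As one hedged sketch of the filesystem variant: map each `docs://` URI onto a markdown file under a local `docs/` directory (the directory layout here is an assumption for illustration, not part of the protocol):

```typescript
import { readFile } from 'node:fs/promises';
import path from 'node:path';

const DOCS_ROOT = path.resolve('docs');

// Resolve a docs:// URI to a file path, confined to DOCS_ROOT so a
// crafted URI like docs://../secrets can't escape the docs directory.
export function uriToDocPath(uri: string): string {
  const prefix = 'docs://';
  if (!uri.startsWith(prefix)) throw new Error(`Unsupported URI: ${uri}`);
  const relative = uri.slice(prefix.length);
  const resolved = path.resolve(DOCS_ROOT, `${relative}.md`);
  if (!resolved.startsWith(DOCS_ROOT + path.sep)) {
    throw new Error(`Invalid resource path: ${uri}`);
  }
  return resolved;
}

export async function readDocResource(uri: string): Promise<string> {
  return readFile(uriToDocPath(uri), 'utf8');
}
```

The containment check matters: because resource URIs arrive from the client, treat them like any other untrusted input before touching the filesystem.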
Starting the Server
Wiring everything together at the bottom of your entry file:
async function main() {
const transport = new StdioServerTransport();
await server.connect(transport);
console.error('MCP server running on stdio');
}
main().catch((err) => {
console.error('Fatal error:', err);
process.exit(1);
});

Note that logging goes to stderr, not stdout. The stdio transport uses stdout for protocol messages, so anything you log to stdout will corrupt the message stream. This is a common first-time mistake that produces confusing parse errors on the client side.
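If you log from several places, a tiny helper keeps that rule enforced in one spot. This is a sketch; the format is arbitrary, and returning the line is just for testability:

```typescript
// All output goes to stderr; stdout is reserved for protocol messages.
export function log(level: 'info' | 'error', message: string): string {
  const line = `[${new Date().toISOString()}] ${level.toUpperCase()} ${message}`;
  process.stderr.write(line + '\n');
  return line;
}
```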
Testing Your Server
The SDK ships with an InMemoryTransport that's designed for testing. It lets you create a connected client/server pair in the same process without any actual I/O, which makes unit testing straightforward.
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { InMemoryTransport } from '@modelcontextprotocol/sdk/inMemory.js';
async function createTestClient() {
const [clientTransport, serverTransport] = InMemoryTransport.createLinkedPair();
const client = new Client(
{ name: 'test-client', version: '1.0.0' },
{ capabilities: {} }
);
await server.connect(serverTransport);
await client.connect(clientTransport);
return client;
}
// In your test file (using Vitest or Jest):
test('lists available tools', async () => {
const client = await createTestClient();
const result = await client.listTools();
expect(result.tools).toHaveLength(1);
expect(result.tools[0].name).toBe('get_github_issues');
});
test('returns error for invalid arguments', async () => {
const client = await createTestClient();
const result = await client.callTool({
name: 'get_github_issues',
arguments: { owner: 123 }, // wrong type
});
expect(result.isError).toBe(true);
});

For integration testing against the actual GitHub API, use MSW (Mock Service Worker) to intercept HTTP calls. This lets you test your full tool execution path without hitting rate limits or requiring credentials in CI.
Beyond unit tests, the MCP Inspector is worth knowing about. It's a CLI tool (npx @modelcontextprotocol/inspector) that connects to your server and lets you manually invoke tools and browse resources through a simple UI. It's faster than wiring up a full agent just to verify your server responds correctly.
Security Considerations Before You Ship
A few things that are easy to overlook. First, validate all inputs even if you're using Zod schemas, because the JSON that arrives over the wire is untyped and a malicious client could send anything. The safeParse pattern handles this, but make sure you're not passing raw args directly to any downstream system.
Second, be careful about what you expose through resources. Resources are readable by any client that connects to your server, so don't serve anything through a resource URI that you wouldn't be comfortable with the model seeing and potentially including in its output.
Third, scope your tool permissions tightly. If your tool only needs read access to a database, don't give it a connection string with write permissions. The blast radius of a prompt injection attack, where a malicious document tricks the model into calling your tool with unexpected arguments, is directly proportional to what your tool is allowed to do.
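As a concrete instance of the first point, here's a sketch of building the GitHub request URL with the WHATWG URL API so user-supplied values are encoded rather than interpolated raw (`buildIssuesUrl` is an illustrative helper, not part of the SDK):

```typescript
// Encode path segments and query values so a crafted `owner` such as
// "a/b?x=" can't alter the path or inject query parameters.
export function buildIssuesUrl(
  owner: string,
  repo: string,
  state: string,
  limit: number
): string {
  const url = new URL('https://api.github.com');
  url.pathname = `/repos/${encodeURIComponent(owner)}/${encodeURIComponent(repo)}/issues`;
  url.searchParams.set('state', state);
  url.searchParams.set('per_page', String(limit));
  return url.toString();
}
```

The same discipline applies to SQL, shell commands, and file paths: anything that reaches a downstream system should pass through an encoder or parameterized API, never raw string concatenation.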
When you list your server on a directory like Skillful.sh, the security scanner will flag common issues like overly broad input schemas, missing input validation, and unsafe environment variable handling. Running through that checklist before publishing saves you from a low security grade that discourages adoption.
Connecting to Claude Desktop
Once your server is built and tested, connecting it to Claude Desktop is a matter of adding an entry to the config file at ~/Library/Application Support/Claude/claude_desktop_config.json on macOS:
{
"mcpServers": {
"my-mcp-server": {
"command": "node",
"args": ["/absolute/path/to/dist/index.js"],
"env": {
"GITHUB_TOKEN": "your-token-here"
}
}
}
}

Restart Claude Desktop and your tools will appear in the interface. If something isn't working, check the MCP logs at ~/Library/Logs/Claude/mcp*.log, which capture both the protocol messages and any stderr output from your server process.
From there, the iteration cycle is straightforward: update your tool definitions, rebuild, restart Claude Desktop, and test. The whole loop takes under a minute once you have the scaffolding in place.
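Because the SDK ships as ESM and the tsconfig uses NodeNext, your package.json needs "type": "module". A minimal scripts setup for that build-and-restart loop might look like this (script names are illustrative; ts-node's --esm flag requires a recent ts-node version):

```json
{
  "type": "module",
  "scripts": {
    "build": "tsc",
    "start": "node dist/index.js",
    "dev": "ts-node --esm src/index.ts"
  }
}
```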
Related Reading
- What the Model Context Protocol Actually Does
- How MCP Servers Differ from Traditional APIs
- MCP vs Function Calling: Understanding the Tradeoffs
- Why Open Source MCP Servers Dominate the Ecosystem
Browse MCP servers and 137,000+ other AI tools on Skillful.sh.