Nicolas Demay
Building your own MCP servers

Published · 10 min read

When Anthropic released the Model Context Protocol, the idea was simple: give LLMs a clean way to interact with external services. Since then, the ecosystem has exploded. Every vendor quickly published their own MCP server, ready to use.

That’s convenient. But for me, the real power of MCP isn’t there. It’s in the ability to build your own servers: tools tailored to your project and your infrastructure. PHPUnit in a specific Docker container, scoped access to the local database, static analysis with PHPStan; tools the model can call directly, without guessing.

This is what I use daily with Claude Code. Here’s how.

The MCP protocol

One important thing: every MCP server you plug in loads its tool definitions into the model’s input context. With three or four servers, that can mean thousands of tokens spent just describing the available tools. The context window takes a hit, and performance with it.

To address this, Claude Code introduced Tool Search, a deferred loading mechanism, enabled by default. Full tool definitions are no longer injected at startup. The model only receives the list of available names, and when it needs a specific tool, it searches for it on demand via a dedicated tool (ToolSearch).

This reduces the footprint on the context window. The trade-off is that the model sometimes “forgets” a tool exists and doesn’t think to search for it. The fix is simple: mention the tools explicitly in your orchestration.

MCP = tools, skills = orchestration

Alongside MCP, Claude Code offers skills: complex, reusable prompts that describe a complete workflow in a Markdown file. The model can invoke them like a command. You might think skills replace MCP. Why bother with an MCP server when a well-written prompt does the job?

Because the two do different things. An MCP server exposes typed tools: functions with defined parameters, structured returns, and deterministic behavior. It’s the equivalent of an API. When the model calls mcp__youtrack__get_issue with the parameter issueId: "PROJ-42", it knows exactly what it’s sending and what it’ll receive.

A skill exposes orchestration: a prompt that describes a workflow, its steps, and its decisions. The skill can chain MCP tools, bash commands, and files to read in a logical sequence.

The two combine naturally. In my skills, I reference MCP tools directly by name:

.claude/skills/commit/SKILL.md
---
name: commit
description: Create a commit with auto-generated message
allowed-tools: mcp__project__phpstan, mcp__project__phpunit
---

The allowed-tools field in a skill’s frontmatter explicitly declares which MCP tools the model can use during the skill’s execution.
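Below the frontmatter comes the skill body: plain Markdown describing the procedure. A hypothetical body for this commit skill might read like this (the step wording is my illustration, not the actual file):

```
## Steps

1. Run `mcp__project__phpstan` on the staged files; fix any reported issue.
2. Run `mcp__project__phpunit`; if a test fails, stop and report it.
3. Generate a commit message from the staged diff and create the commit.
```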

The problem: a model blind beyond code

By default, Claude Code can read and write files, and execute bash. That’s already a lot. But it doesn’t know your project’s specific tools: PHPStan, PHPUnit, bin/console, your local database.

It can guess bash commands. But without structure or project context, it’s a shot in the dark. When the model runs docker compose exec php bin/console and the container is named php-fpm-worktree-feature-42, it fails. It fumbles, tries variants, wastes time and context.

This is exactly where MCP shines: by exposing tools scoped to the project, you eliminate all ambiguity.

Building your own MCP

Many third-party services have published their own MCP server. But quality varies. When JetBrains published the YouTrack MCP, I tested it right away, and was disappointed. It didn’t retrieve media attached to tickets, returned errors on some calls, and lacked features essential to my workflow.

Since YouTrack exposes a public REST API, I asked Claude to create an MCP server from scratch. In a few minutes, I had a working Node.js server with exactly the tools I needed:

~/.claude/mcp-servers/youtrack/index.js
import { Server } from "@modelcontextprotocol/sdk/server/index.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { CallToolRequestSchema, ListToolsRequestSchema } from "@modelcontextprotocol/sdk/types.js";

const { YOUTRACK_URL, YOUTRACK_TOKEN } = process.env;

const TOOLS = [
  {
    name: "get_issue",
    description: "Retrieve a YouTrack issue with all details",
    inputSchema: {
      type: "object",
      properties: {
        issueId: {
          type: "string",
          description: "The issue ID (e.g., 'PROJECT-123')",
        },
      },
      required: ["issueId"],
    },
  },
  {
    name: "get_issue_comments",
    description: "Retrieve all comments for a YouTrack issue",
    // ...
  },
  {
    name: "get_issue_attachments",
    description: "Retrieve media and attachments for an issue",
    // ...
  },
  {
    name: "update_issue",
    description: "Update issue fields (state, assignee, etc.)",
    // ...
  },
];

const server = new Server(
  { name: "youtrack", version: "1.0.0" },
  { capabilities: { tools: {} } }
);

// Advertise the tool list
server.setRequestHandler(ListToolsRequestSchema, async () => ({ tools: TOOLS }));

// Handle tool calls
server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  switch (name) {
    case "get_issue": {
      const response = await fetch(
        `${YOUTRACK_URL}/api/issues/${args.issueId}?fields=idReadable,summary,description,fields(name,value(name))`,
        { headers: { Authorization: `Bearer ${YOUTRACK_TOKEN}` } }
      );
      const issue = await response.json();
      return { content: [{ type: "text", text: JSON.stringify(issue) }] };
    }
    case "get_issue_comments":
      // ...
    case "get_issue_attachments":
      // ...
    case "update_issue":
      // ...
  }
});

await server.connect(new StdioServerTransport());

The principle is simple: you declare tools with their input schema (what the model sees), then implement the handler that does the real work, in this case a call to the YouTrack REST API. The MCP SDK handles the transport between Claude Code and the server.

For Claude Code to discover this server, you just declare it in settings.json, at the global or project level:

~/.claude/settings.json
{
  "mcpServers": {
    "youtrack": {
      "command": "node",
      "args": ["/home/nicolas/.claude/mcp-servers/youtrack/index.js"],
      "env": {
        "YOUTRACK_URL": "https://youtrack.example.com",
        "YOUTRACK_TOKEN": "perm:xxx"
      }
    }
  }
}

The command field tells how to launch the server, args passes arguments, and env injects the required environment variables, typically URLs and API tokens. The server starts automatically when Claude Code launches, and its tools become available in the session.
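Inside the server process, those injected variables surface as process.env. A minimal sketch of reading them at startup; the readConfig helper and its fail-fast behavior are my own illustration, not part of the SDK:

```javascript
// Hypothetical helper: read the variables injected via the env block and
// fail fast at startup rather than producing cryptic 401s later.
function readConfig(env) {
  const { YOUTRACK_URL, YOUTRACK_TOKEN } = env;
  if (!YOUTRACK_URL || !YOUTRACK_TOKEN) {
    throw new Error("YOUTRACK_URL and YOUTRACK_TOKEN must be set");
  }
  // Normalize the base URL so path concatenation never doubles slashes
  return { baseUrl: YOUTRACK_URL.replace(/\/+$/, ""), token: YOUTRACK_TOKEN };
}
```

The server would call it once, as readConfig(process.env), before registering any handler.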

When I work on a ticket, I ask Claude to parse its full content: description, comments, attachments. All the business context from the ticket is injected in a single command. No copy-pasting from the browser, no approximate summary.

The Docker worktrees case

On my main projects, I work with git worktrees. Each branch runs in its own directory with its own Docker stack. It’s very powerful for parallelizing work: while one agent works on a feature, another can handle an urgent fix in a separate worktree.

But it adds infrastructure complexity. Each worktree has its own Docker containers with specific names. When the model needs to run PHPUnit or access the database, it doesn’t know which container to target. It guesses, gets it wrong, wastes three attempts fumbling around.

So I created a set of custom MCP tools that expose the essentials, scoped to the current worktree:

mcp-server.js
import { execSync } from "node:child_process";

server.setRequestHandler(CallToolRequestSchema, async (request) => {
  const { name, arguments: args } = request.params;
  const exec = (cmd) => execSync(cmd, { encoding: "utf-8" });
  // composeFile points at the current worktree's stack, dbName at its database
  const dc = `docker compose -f ${composeFile} exec`;
  switch (name) {
    case "phpunit": {
      const path = args.testPath ? ` ${args.testPath}` : "";
      const filter = args.filter ? ` --filter ${args.filter}` : "";
      const output = exec(`${dc} php vendor/bin/phpunit${path}${filter}`);
      return { content: [{ type: "text", text: parseTestOutput(output) }] };
    }
    case "console": {
      const output = exec(`${dc} php bin/console ${args.command}`);
      return { content: [{ type: "text", text: output }] };
    }
    case "mysql_query": {
      // Structural guard: only read queries ever reach the database
      if (!args.query.trim().toUpperCase().startsWith("SELECT")) {
        return { content: [{ type: "text", text: "Error: only SELECT queries are allowed" }] };
      }
      const output = exec(`${dc} db mysql -e "${args.query}" ${dbName}`);
      return { content: [{ type: "text", text: output }] };
    }
  }
});
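The composeFile and dbName values above have to come from somewhere. One hypothetical way to derive them, assuming a naming convention (my illustration, not necessarily the article's actual setup) where each worktree's stack is suffixed with the directory name:

```javascript
import { basename } from "node:path";

// Hypothetical convention: the worktree directory "feature-42" maps to the
// compose file docker-compose.feature-42.yml and the database app_feature_42.
function worktreeConfig(worktreeDir) {
  const name = basename(worktreeDir);
  return {
    composeFile: `docker-compose.${name}.yml`,
    dbName: `app_${name.replaceAll("-", "_")}`,
  };
}
```

Called once at server startup with process.cwd(), this pins every tool to the current worktree's containers.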

So when Claude opens a project in a specific worktree, it searches through its available tools and targets the right container directly. No fumbling.

Controlling output: protecting the context window

An often underestimated advantage of custom MCP: output post-processing.

When you run a Behat test suite via bash, the output can be massive: hundreds of lines of verbose logging, setup, and teardown. Without control, all of that lands in the model’s context window and pollutes the space available for reasoning.

With an MCP tool, you control what comes back to the model. My PHPUnit tool doesn’t return 200 lines of verbose output; it parses the output and returns only the errors, with the relevant file and line. It’s the difference between:

Terminal window
# Raw bash — all output lands in the context
docker compose exec php vendor/bin/phpunit
# 200+ lines of output...

And an MCP tool that returns:

{
  "status": "failed",
  "total": 142,
  "passed": 139,
  "failed": 3,
  "errors": [
    {"file": "tests/Service/InvoiceTest.php", "line": 47, "message": "Failed asserting that null is not null"},
    {"file": "tests/Controller/OrderTest.php", "line": 112, "message": "Expected status 200, got 422"},
    {"file": "tests/Repository/UserTest.php", "line": 83, "message": "Undefined method findByActive"}
  ]
}

The model receives exactly what it needs to fix the issue, nothing more. Try achieving that with a skill that calls a CLI: it’s technically possible with bash and grep, but it’s fragile and verbose.
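The parsing step behind this compact summary can be sketched as follows. This is a minimal illustration assuming PHPUnit's default text report (numbered failure blocks ending with a file:line, and a "Tests: N, ..., Failures: M." summary line), not the exact implementation:

```javascript
// Sketch: condense raw PHPUnit text output into a compact JSON summary.
// A real version needs more cases (errors vs failures, the "OK (n tests)"
// success line, risky/skipped tests).
function parseTestOutput(raw) {
  const errors = [];
  // Failure sections start with a numbered header: "1) Some\Test::method"
  for (const section of raw.split(/^\d+\) /m).slice(1)) {
    const loc = section.match(/^(.*\.php):(\d+)$/m);
    if (!loc) continue;
    // The assertion message is the first non-empty line after the header
    const message = section
      .split("\n")
      .slice(1)
      .find((line) => line.trim() && !line.includes(".php:")) ?? "";
    errors.push({ file: loc[1], line: Number(loc[2]), message: message.trim() });
  }
  const summary = raw.match(/Tests: (\d+).*Failures: (\d+)/);
  const total = summary ? Number(summary[1]) : null;
  const failed = summary ? Number(summary[2]) : errors.length;
  return JSON.stringify({
    status: failed ? "failed" : "passed",
    total,
    passed: total !== null ? total - failed : null,
    failed,
    errors,
  });
}
```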

Security through scope

In the previous article, I explained how to block Claude from any direct MySQL CLI access, combining permissions and hooks to block dangerous commands. But I still need it to interact with the project’s local database.

This is exactly where a custom MCP makes full sense. The mysql_query tool exposed by my server can only target the current project’s database. The model can check the schema, verify the state of a record, toggle a feature flag in the database, everything it needs to work. But it can’t go beyond that, it’s naturally scoped.

This is a safety net that a skill alone can’t provide. A skill is still a prompt. It can suggest the model shouldn’t touch prod, but nothing technically stops it. An MCP is code: the restriction is structural.
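That structural guard can also be made stricter than the SELECT-prefix check shown earlier. A sketch, under my own assumptions about what to reject (stacked statements and write keywords anywhere in the query):

```javascript
// Stricter structural guard than a simple SELECT-prefix check:
// reject stacked statements and write keywords anywhere in the query.
function isAllowedQuery(query) {
  const q = query.trim().replace(/;+\s*$/, ""); // tolerate a trailing ";"
  if (q.includes(";")) return false;            // no stacked statements
  if (!/^SELECT\b/i.test(q)) return false;      // must start with SELECT
  return !/\b(INSERT|UPDATE|DELETE|DROP|ALTER|GRANT|TRUNCATE)\b/i.test(q);
}
```

Even this remains a sketch; passing the query as a mysql CLI argument still calls for careful shell quoting.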

The self-equipping agent

Let’s be clear: I don’t write any of the MCP servers shown throughout this article myself. Claude does.

When I arrive on a new project with a specific Docker infrastructure, I ask it to analyze the docker-compose.yml and generate the corresponding MCP server. It understands the structure, identifies the services, and produces a working server with the right tools. This need comes up so often that I created a dedicated skill (via the official Anthropic plugin /skill-creator) that automatically generates this MCP server for any project.

The model creates the tools it needs to work effectively on the project.

Impact on the daily workflow

In short, the MCP + skills combination changes the way I work with Claude Code day to day.

The model can verify its own code before proposing a commit. In my commit skill, the prompt requires it to run PHPStan, and allowed-tools gives it access to the corresponding MCP tool. The model can’t “forget” to run static analysis, it’s in the procedure.

On the testing side, same thing: no need to copy-paste errors, the MCP returns the failing files and lines directly. It fixes, re-runs, iterates until everything passes.

And then there’s the database. I use it daily as a direct interface:

What’s the email for user 35?

What are the latest orders with pending status?

How many clients have enabled feature flag X?

It translates my question into SQL, queries the database via the scoped tool, and gives me the answer. No need to open a MySQL client for a quick question.

So the model no longer just suggests code, it acts directly on the project.

“MCP is dead”

That’s the take you see regularly from AI influencers: delete your MCP servers, migrate everything to skills, MCP is just a useless middleman.

I think they’re missing the point. For third-party MCP servers, maybe. But custom MCPs, the ones you build yourself, tailored to your project and your stack: that’s where it all happens. And if a third-party service you use exposes a public API but hasn’t published its own MCP yet, no need to wait. With no official MCP and no CLI, a custom server is the only way to give the model access. Skills orchestrate; MCP does the actual work.