MCP servers after 6 months: what builders actually use

APR 09, 2026 · 7 min read

MCP — Model Context Protocol — hit production use about a year and a half ago. Six months ago the ecosystem crossed some threshold where I stopped having to explain what MCP was and started having to explain which servers were worth your time. This is that conversation, written down.

Short version: there are maybe a dozen MCP servers that genuinely pull weight in my daily work. The rest are solutions in search of problems. I want to tell you which are which, and then say something about what I think MCP is actually for.

The thirty-second recap

MCP is a protocol that lets a model talk to tools and data sources without the application developer having to write a custom integration for each one. An MCP server exposes some capability — a filesystem, a GitHub API, a database — and any MCP-aware client (Claude Desktop, Claude Code, a custom agent you build) can use it.

The thing MCP gets right: it separates "what the tool does" from "which model is calling it." The thing MCP still wrestles with: it's one more abstraction layer, and abstraction layers have a cost.
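That separation shows up concretely in how a server advertises a tool: a name, a description, and a JSON Schema for the arguments, with no mention of which model will call it. A minimal sketch of that shape (the tool and its fields are made up for illustration; the `inputSchema` spelling follows the MCP tool-listing convention, but treat the details as approximate):

```python
# A sketch of what an MCP server advertises for one tool: a name,
# a human-readable description, and a JSON Schema for the arguments.
# The client (and the model behind it) sees only this contract.
list_issues_tool = {
    "name": "list_issues",
    "description": "List open issues in a repository.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "repo": {"type": "string", "description": "owner/name"},
            "state": {"type": "string", "enum": ["open", "closed", "all"]},
        },
        "required": ["repo"],
    },
}
```

The schema is the contract: the model reasons about argument shapes, the server validates against them, and neither side needs to know the other's internals.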

The servers I actually use

Filesystem. Yes, obviously. I know this is boring. It's the one I use every day. Reading and writing files in a scoped directory, with sensible defaults, from whatever agent I have open. The official server is good and I haven't bothered to replace it.

GitHub. The MCP server that wraps the GitHub API is the one I recommend first to anyone starting with agentic workflows. Open issues, review PRs, look at CI status — all without switching context. The failure mode of using the GitHub CLI from a shell tool instead is that the model has to remember how gh commands are shaped. MCP's structured tool schema makes this ten times more reliable.

Postgres (read-only). I run a read-only Postgres MCP server pointed at staging. It's the fastest way I've found to let an agent answer questions like "how many users signed up last week" without giving it write access I'd regret. The key word is "read-only." You do not want an agent with write access to your database via MCP unless you have a very high tolerance for entertaining failure modes.
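"Read-only" is worth enforcing twice: once at the database (a role granted only SELECT on the schemas you care about) and once in the server, which refuses anything that isn't a plain read before it touches the connection. A crude sketch of the server-side check, with names invented for illustration:

```python
# Fast-fail guard for a read-only SQL tool. Crude by design: the real
# guarantee comes from a database role with only SELECT grants; this
# check just rejects obvious writes before they reach the connection.

READONLY_PREFIXES = ("select", "with", "explain", "show")

def assert_read_only(sql: str) -> str:
    stripped = sql.strip().lower()
    if not stripped:
        raise ValueError("empty statement")
    # Refuse statement stacking like "select 1; drop table users".
    if ";" in stripped.rstrip(";"):
        raise ValueError("multiple statements are not allowed")
    if not stripped.startswith(READONLY_PREFIXES):
        raise ValueError(f"read-only server: refusing {stripped.split()[0]!r}")
    return sql
```

The prefix list is deliberately conservative; a determined attacker can hide writes inside a CTE, which is exactly why the database role, not this string check, is the actual boundary.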

Linear. Useful if you use Linear. I do. Reading tickets, commenting, updating status — all things agents are decent at. Creating new tickets is the sharp edge; agents tend to open very enthusiastic duplicate issues if you're not careful.

Web fetch. A constrained HTTP fetcher with a safelist. Invaluable for letting agents pull docs, specs, or RFCs without giving them the open internet. The constrained version is the whole point. An unconstrained web-fetch MCP server is basically an SSRF tool with extra steps.
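The safelist check itself is small. A sketch, assuming an exact-match host set (the hosts shown are examples, not a recommendation):

```python
# Safelist check for a constrained fetcher: https only, and the
# resolved hostname must exactly match a small, explicit set.
from urllib.parse import urlparse

ALLOWED_HOSTS = {"docs.python.org", "www.rfc-editor.org", "peps.python.org"}

def check_url(url: str) -> str:
    parts = urlparse(url)
    if parts.scheme != "https":
        raise ValueError(f"refusing scheme {parts.scheme!r}")
    if parts.hostname not in ALLOWED_HOSTS:
        raise ValueError(f"host not on safelist: {parts.hostname!r}")
    return url
```

The part naive implementations get wrong: the check has to run again after every redirect, because an allowed host that 302s to an internal address is exactly the SSRF you built the safelist to prevent.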

That's most of my list. Five servers, used daily, delivering real value.

The servers I tried and stopped using

Slack. The server itself works fine and returns what the agent asks for, but the signal-to-noise ratio in a typical work Slack is brutal. I kept getting agent summaries that confidently quoted old jokes as if they were technical decisions. The fault is not MCP's; the fault is that Slack is not a knowledge base.

Notion. Same problem, sharper. Notion pages are half-finished drafts crossed with meeting notes crossed with last quarter's strategy. Feeding this to an agent as ground truth produces hallucinated certainty. I now explicitly exclude Notion from anywhere an agent might pull context.

Most "code intelligence" servers. These sit between the model and your codebase and offer to do semantic search, symbol lookup, cross-reference, and so on. In theory great. In practice, I get better results by letting the agent grep and read files directly. The MCP layer adds latency and a new way to be wrong, without replacing the agent's existing ability to use grep.

Generic "memory" servers. A shared memory store the agent writes to and reads from across sessions. Conceptually exciting. In practice the memory fills up with stale garbage within a week and the agent starts citing outdated facts. I've gone back to committing important context as markdown files in the repo.

What I would actually want, and don't have

A really good Datadog / metrics MCP server. I can check Datadog by hand. I want an agent to do it for me when it needs to answer "is this regression in production?" There are attempts; none of them cover the actual surface area of a real observability stack.

A browser MCP with good DOM tools. Not Playwright-as-a-service. I mean a server that gives the agent structured access to a rendered page — accessibility tree, interactive elements, network traffic — so it can actually debug a broken web app. This is harder to build than it looks and the attempts I've tried feel fragile.

An env server. Something that exposes a scoped view of environment variables and secrets without letting the agent read arbitrary files. I end up writing a small custom one for each project. Would be nice if there were a canonical version I trusted.
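The core of the custom version I keep rewriting is one function: an explicit per-project allowlist, and nothing outside it is ever read. A sketch, with hypothetical names:

```python
# Scoped env exposure: the server is configured with an explicit
# allowlist and returns only those variables that are actually set.
# It never enumerates os.environ or reads files.
import os

def scoped_env(allowed: set[str]) -> dict[str, str]:
    """Return only the allowlisted variables present in the environment."""
    return {name: os.environ[name] for name in sorted(allowed) if name in os.environ}
```

The point of the allowlist being a set of exact names, rather than a prefix or glob, is that adding a variable to the agent's view is always a deliberate edit.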

The pattern I use when building my own

About a third of the MCP servers I run are custom. Here's the pattern, which has served me well:

  1. Start with one tool, not ten. The first MCP server I build for any new capability exposes exactly one function. If it's useful, I add a second. Most don't graduate.
  2. Return structured data, not strings. The whole point of MCP's schema is that the model can reason about the shape of the response. Return a JSON object with named fields, not a pre-formatted string. Let the model format for display.
  3. Make errors explicit. If a tool call fails, return an object with { error: "...", recoverable: true | false }. Don't throw. Models are bad at recovering from exceptions; they're decent at reading error fields.
  4. Log every invocation with a correlation ID. When something goes weird, you want to be able to trace which tool call started the problem. Cheap to add at build time, expensive to retrofit.
  5. Don't MCP-ify what's already a shell command. If gh pr list works, wrapping it in MCP is usually unnecessary. Use MCP when you need structured I/O or permissions scoping that gh can't give you.
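Points 2 through 4 fit in one small wrapper. A sketch of how I structure it (the wrapper and its field names are my convention, not anything from the MCP SDK):

```python
# One wrapper around every tool function: structured results, errors
# as data instead of exceptions, and a correlation ID that appears in
# both the response and the log line.
import logging
import uuid

logger = logging.getLogger("mcp.tools")

def run_tool(name: str, fn, **kwargs) -> dict:
    call_id = uuid.uuid4().hex[:12]
    logger.info("tool=%s call_id=%s args=%r", name, call_id, kwargs)
    try:
        return {"ok": True, "call_id": call_id, "result": fn(**kwargs)}
    except TimeoutError as exc:
        # Transient failure: tell the model it can retry.
        return {"ok": False, "call_id": call_id,
                "error": str(exc), "recoverable": True}
    except Exception as exc:
        return {"ok": False, "call_id": call_id,
                "error": str(exc), "recoverable": False}
```

The model never sees a stack trace, only named fields it can branch on, and any weird behavior traces back to a single call_id in the logs.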

The servers I've regretted building are the ones where I wrapped something the agent could have used directly, added a layer, and then had to maintain that layer forever.

What MCP is actually for

Six months in, the strongest argument for MCP isn't "it's faster" or "it's cheaper." Both of those are marginal. The argument is: it makes the boundary between model and tool auditable. Every call is a named function with a schema and a log. When something goes wrong, I can find the exact tool call that went wrong. When permissions change, I can change them in one place.

That's worth a lot more in production than it is in a demo. The MCP ecosystem feels like it's still mostly demos. It'll mature as more people try to run agents in front of real users, hit real compliance reviews, and realize they need an audit trail that isn't "the model did something, I think."

Where I think MCP is going

The next twelve months will separate servers that are genuinely useful from servers that exist because building one was a weekend project for the publisher. Expect consolidation. A handful of servers for a handful of common stacks (filesystem, GitHub, a few databases, a few SaaS tools) will become defaults. The long tail will wither or get absorbed.

The other thing I expect: MCP's permissions story will mature. Right now, granting an MCP server access to your system is all-or-nothing in most clients. That's fine for toy workflows and unacceptable for anything with regulated data. Fine-grained per-tool permissions with runtime prompts are the obvious next step, and they're already showing up in some clients.

If you're building agentic software in 2026 and ignoring MCP, you're leaving capability on the table. If you're adopting every MCP server you can find, you're making your attack surface worse and your context noisier. The move, as usual, is to be picky about which of these things you let into your stack.

The good ones are very good. The rest are background noise. The skill is knowing which is which.