The Rise of Model Context Protocol (MCP): Why Every Developer Is Talking About It

Written by Massa Medi

It seems like every developer in the world right now is catching the MCP wave. Model Context Protocol is the latest, buzzing, can’t-miss way to hook AI models up to your APIs and data, and if you’re scratching your head wondering what on earth that is—brace yourself, because you might just be NGMI (Not Gonna Make It, for those still catching up with the lingo).

Wild experimentation is happening as we speak. For instance, a developer managed to get Claude (Anthropic’s AI model) to create 3D art in Blender, powered almost entirely by “Vibes”—yes, you read that correctly. And just recently, MCP support was officially added to the OpenAI Agents SDK. That’s how you know this isn’t another fleeting trend—it’s an emerging pillar of the AI-driven internet.

If you’ve been a loyal follower of this channel, you probably have REST APIs down pat. Maybe you dabbled with GraphQL, or recall wrestling with RPC in your junior years—or perhaps, you’re still traumatized by SOAP. Back in the dark ages, software engineering gatekeepers would assert that mastery of the distinctions between these architectures and protocols was the sacred rite required to call yourself a web developer.

But times have changed (the “turns have tabled,” if you will). The old guard has been straight-up obliterated. Modern developers? We're all “Vibe coders” now—living in the age of exponential AI, half-joking that code doesn’t even exist anymore. Instead, we just hang out with massive language models and let them do our bidding.

To earn your true Vibe Coder card, though, you’ve got to know about Model Context Protocol. Think of MCP as the USB-C port for AI applications: a universal, plug-and-play interface standard, designed by Anthropic (the team behind Claude) to let large language models access context in a reliable, scalable way.

Anthropic is so confident in MCP that their CEO predicts that by year’s end, virtually all code will be written by AI. That’s a staggering claim—one that's making waves across the industry.

Let’s Build: MCP Server from Scratch

Welcome to March 31, 2025—this is The Code Report. And forget those rumors: Fireship is alive, well, and absolutely still a tutorial channel. So today, we’re rolling up our sleeves and connecting the dots between a storage bucket, a Postgres database, and a classic REST API using the Model Context Protocol. Why? So that Claude can access heretofore unseen data, execute code right on our server (like writing to the database or uploading files)—and push the boundaries of what’s possible with LLMs.

The world is already getting creative with MCP. Imagine automated trading of stonks and shitcoins, industrial-scale web scraping, or fully AI-managed Kubernetes clusters. The possibilities border on both the exhilarating and the mildly terrifying.

Essential Cloud Infrastructure—With a Sponsor Shoutout

For our adventure, we'll need some robust cloud infrastructure. Enter Cevola—a platform powered by Google Kubernetes Engine and Cloudflare. (They also happen to be sponsoring this video, but honestly, the platform is just way simpler than AWS, offers predictable pricing, and the free tier is perfect for experimental projects like this one.)

MCP Architecture: Clients, Servers, and Semantic Simplicity

Like other APIs, MCP operates with a client and a server. Our client? Claude Desktop. The server? That’s what we’ll build—it maintains a persistent, two-way connection with Claude. Data flows between them via the transport layer.

In REST, you send GETs, POSTs, and other HTTP requests to various endpoints. With Model Context Protocol, life gets a bit more elegant: we're focused on two central concepts—resources and tools.

A (Very) Real Startup Example: Horse Tinder

I’ve been working on what I consider my magnum opus: Horse Tinder. As it turns out, horses aren’t great at swiping left or right (who knew, right? No fingers!), so it's time to pivot—embracing AI just like every other startup in Silicon Valley.

Here’s the infrastructure we already have, shown in the Cevola console: a storage bucket (full of horse pictures), a Postgres database, and our classic REST API.

The icing on this (horse-shaped) cake? Everything is organized in a Git repo, complete with a CI/CD pipeline. So once our MCP server is up and running, deploying to dev or staging is as easy as pushing to a branch and letting Cevola handle deployments and cache busting automatically.

Let’s Get Coding: Building the MCP Server

In our example, I’m using a Deno project. First observation: we import McpServer from the official TypeScript SDK. Not a TypeScript fan? No worries—SDKs exist for Python, Java, and beyond.
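Here’s a minimal sketch of that import, assuming a Deno setup that pulls the official TypeScript SDK straight off npm (the npm: specifier style is my assumption for this project, not something confirmed in the video):

```ts
// Deno can import npm packages directly via npm: specifiers (assumed setup).
import { McpServer } from "npm:@modelcontextprotocol/sdk/server/mcp.js";
```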

Key tool: Zod—an excellent schema validation library. It lets you define the exact shapes of data the LLM is allowed to send and receive, so it won’t just spit out random nonsense (or “hallucinate” wild arguments).
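As a quick illustration, here’s what a Zod schema for a horse profile might look like—the field names are entirely hypothetical, invented for this example:

```ts
import { z } from "npm:zod";

// Hypothetical shape for a horse profile; the LLM can only pass
// arguments that successfully parse against this schema.
const HorseProfile = z.object({
  name: z.string(),
  breed: z.string(),
  singleAndReadyToMingle: z.boolean(),
});
```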

The workflow (there’s a code sketch right after this list):

  1. Create a server: Start by spinning up an McpServer instance.
  2. Add resources: Each resource needs:
    • A catchy name (e.g. “Horse is looking for love”).
    • A URI identifying it in the system.
    • A callback function to fetch data (in this example, querying our Postgres DB in the cloud via postgresjs).
  3. Limit resources to reading only: Use resources strictly for fetching data (GET-style).
  4. Define tools for actions: Want the AI to generate matches or set up horse dates? Use tools. Behind the scenes, these could call your REST API endpoints—essentially letting you build “an API for your API.” As silly as that sounds, standardizing access through MCP makes mixing, matching, and plugging LLMs into your stack way more robust.
  5. Validate everything with Zod: Define schemas and types so the LLM knows exactly what arguments and data shapes it’s supposed to use. No more hallucinating wild data!
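Putting the workflow together, here’s a minimal sketch. The URI scheme, table names, and columns are hypothetical stand-ins for the Horse Tinder data; the server.resource and server.tool calls follow the official TypeScript SDK, and postgres is the postgresjs client:

```ts
import { McpServer } from "npm:@modelcontextprotocol/sdk/server/mcp.js";
import { z } from "npm:zod";
import postgres from "npm:postgres";

// Connect to the cloud Postgres database (connection string assumed in env;
// requires --allow-env and --allow-net when running under Deno).
const sql = postgres(Deno.env.get("DATABASE_URL")!);

const server = new McpServer({ name: "horse", version: "1.0.0" });

// Resource: read-only data, GET-style. Claude can pull this into its context.
server.resource(
  "horses-looking-for-love",
  "horses://singles",
  async (uri) => {
    // Hypothetical table and column names.
    const horses = await sql`select * from horses where single = true`;
    return {
      contents: [{ uri: uri.href, text: JSON.stringify(horses) }],
    };
  },
);

// Tool: an action with side effects. Zod validates the arguments Claude sends
// before the callback ever runs—no hallucinated data shapes.
server.tool(
  "create-match",
  { horseA: z.string(), horseB: z.string() },
  async ({ horseA, horseB }) => {
    await sql`insert into matches (horse_a, horse_b) values (${horseA}, ${horseB})`;
    return {
      content: [{ type: "text", text: `Matched ${horseA} with ${horseB}!` }],
    };
  },
);
```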

Running the server is easy: locally, use standard I/O (stdio) as the transport layer. In production, opt for Server-Sent Events or HTTP.
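For the local stdio case, wiring up the transport is just a couple of lines (a sketch based on the official SDK; the SSE and HTTP variants swap in a different transport class):

```ts
import { StdioServerTransport } from "npm:@modelcontextprotocol/sdk/server/stdio.js";

// Locally, Claude Desktop spawns this process and talks to it over stdin/stdout.
const transport = new StdioServerTransport();
await server.connect(transport);
```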

Putting MCP to Work: From Server to Claude

So, you’ve got an MCP server—now what? To actually use it, you’ll need a client that supports the Model Context Protocol. Claude Desktop is a prime example, but alternatives like Cursor and Windsurf exist, or you could even roll your own client (but that’s a topic for another day).

Once you’ve installed Claude Desktop, jump into the developer settings—this opens up a config file where you dump in the commands to run your MCP servers. For this project, the command runs your deno process serving up your main.ts MCP code. After a restart, Claude should recognize that your MCP server (“horse”) is running. (If it escapes, you’ll have to go catch it. 🐎)
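The config file is JSON; an entry for this project might look like the following sketch (the path and Deno permission flags are assumptions for this setup):

```json
{
  "mcpServers": {
    "horse": {
      "command": "deno",
      "args": ["run", "--allow-net", "--allow-env", "/path/to/main.ts"]
    }
  }
}
```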

Back in the Claude prompt, you attach to this server—Claude fetches resources (e.g., profile data, images) in real time for use in your next prompt. Because Claude is multimodal, you can also layer in PDFs, graphs, images (all those lovely horse pictures), or any other contextual data you want.

Magically, you can now ask Claude questions tailored to your app’s state—e.g., “which horses are single and ready to mingle?”—and it’ll instantly consult your real database for answers.

Want Claude to pair up two horses? Prompt it accordingly; after you grant permissions, Claude will use the validated schemas (thanks, Zod) and server-side tools to authoritatively update your actual database.

The Future: Automation, Risks, and Responsible Vibe Coding

What could possibly go wrong? (That’s not a rhetorical question.) Anthropic is bullish, claiming that soon 90% of coding will be AI-driven, and virtually all code will be AI-generated within a year. But, I’m pressing X to doubt—because, honestly, it’s probably just a matter of time before some rogue agent wipes out millions of dollars in data, or develops enough curiosity to click “delete” for fun.

Despite these potential risks, the explosion of tools being created with MCP is genuinely inspiring. If you want to see what the future holds, check out the awesome MCP repo for community projects. Just remember: always, vibe code responsibly.

Closing Notes & Thanks

Huge thanks to Cevola for supporting this project and making cloud infrastructure accessible for everyone. If you want to test their platform, enjoy this $50 stimulus check they’re offering to try it out.

This has been The Code Report. Thanks for reading, stay curious, and see you in the next one!