Guides · Mar 4, 2026 · Yash Khare

What Is MCP? The Model Context Protocol Explained

A complete guide to the Model Context Protocol (MCP) — the open standard that lets AI assistants like ChatGPT and Claude connect to external tools, data, and services.

MCP (Model Context Protocol) is an open standard for connecting AI assistants to external tools and data. Think of it as USB-C for AI apps — a single, universal connector that lets any AI client talk to any external service. Instead of building custom integrations for every AI platform, you build one MCP server and it works everywhere.

If you have used ChatGPT plugins, function calling, or tool use in Claude, you have seen the problem MCP solves. Every AI platform had its own way of connecting to external tools. Developers built the same integration multiple times — once for ChatGPT, once for Claude, once for Cursor. MCP replaces all of that with one protocol.

The problem MCP solves

AI assistants are powerful reasoners, but they are isolated. Out of the box, they cannot read your files, query your database, check your calendar, or interact with your company's API. Every useful AI workflow requires connecting the assistant to external systems.

Before MCP, this meant:

  • Custom integrations for every platform — ChatGPT had plugins (now deprecated) and function calling. Claude had tool use. Cursor had its own system. Each required different code.
  • No standard for discovery — There was no way for an AI to discover what tools were available. Everything was hardcoded.
  • No standard for rich responses — Most integrations returned plain text or JSON. If you wanted interactive UI, you built it from scratch per platform.
  • Fragmented security models — Each platform handled auth, permissions, and data privacy differently.

This is where MCP comes in. Anthropic launched it in November 2024 as an open-source standard, and within months it was adopted by OpenAI, Google, Microsoft, JetBrains, and dozens more. The reference server repository now has over 80,000 GitHub stars — making it one of the fastest-adopted protocols in recent memory.

The USB-C analogy

The MCP documentation uses a helpful analogy: MCP is like USB-C for AI applications.

Before USB-C, every device had its own connector — Micro-USB, Lightning, Mini-USB, proprietary charging ports. You needed a different cable for every device. USB-C standardized that. One connector, universal compatibility.

MCP does the same thing for AI integrations. Instead of building a different connector for ChatGPT, Claude, Cursor, VS Code Copilot, and Gemini, you build one MCP server. Any AI client that speaks MCP can use it.

This matters because the number of AI clients is growing fast. There are now over 70 AI clients that support MCP — from consumer products like ChatGPT to developer tools like Cursor to enterprise platforms like Microsoft Copilot. Building a custom integration for each one is impractical. Building one MCP server that works with all of them takes roughly the same effort as building for just one.

The three primitives

MCP defines three types of capabilities that a server can expose to AI clients. The official documentation calls these "primitives":

Tools

Tools are functions the AI can invoke. When a user asks "what is the weather in Berlin?", the AI calls a weather tool with city: "Berlin" and gets back structured data. Tools are the most common primitive — they are MCP's equivalent of API endpoints, but designed for AI consumption.

Each tool has a name, a description (in natural language, so the AI knows when to use it), and an input schema (JSON Schema that defines what parameters it accepts).
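To make that concrete, here is a sketch of what a tool definition looks like as a server might advertise it in response to a `tools/list` request. The tool name, description text, and parameters are hypothetical, not taken from any real server:

```python
import json

# A hypothetical tool definition: name, natural-language description,
# and a JSON Schema describing the accepted parameters.
get_weather_tool = {
    "name": "get_weather",
    "description": "Get the current weather for a city. "
                   "Use when the user asks about weather conditions.",
    "inputSchema": {  # standard JSON Schema
        "type": "object",
        "properties": {
            "city": {"type": "string", "description": "City name, e.g. 'Berlin'"},
            "units": {"type": "string", "enum": ["metric", "imperial"]},
        },
        "required": ["city"],
    },
}

print(json.dumps(get_weather_tool, indent=2))
```

The description field is doing real work here: it is written for the model, not for a human developer, so it should say *when* to call the tool, not just what it does.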

Resources

Resources are data the AI can read but not invoke. Think of them as context files — a database schema, a company knowledge base, a product catalog. The AI uses resources to inform its reasoning without triggering any side effects.

Resources are useful for giving the AI background knowledge it needs to use your tools effectively. An analytics tool works better when the AI has access to the database schema as a resource.

Prompts

Prompts are pre-written templates that guide the AI's behavior when using your tools. They are like starter instructions — "when querying the CRM, always include the customer's account tier" or "when showing product results, use the carousel layout."
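As a sketch, mirroring the CRM example above with made-up names and text: a prompt is a named template the client can list, then fetch and expand into chat messages.

```python
# Hypothetical prompt definition (what prompts/list would advertise)
# and its expansion (what prompts/get would return).
prompt_def = {
    "name": "crm_query_guidelines",
    "description": "House rules the AI should follow when querying the CRM",
}

prompt_expansion = {
    "messages": [
        {
            "role": "user",
            "content": {
                "type": "text",
                "text": "When querying the CRM, always include the "
                        "customer's account tier in your summary.",
            },
        }
    ]
}
```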

Most MCP servers use tools extensively, resources occasionally, and prompts rarely. But all three work together to create a complete integration surface.

How the architecture works

MCP uses a client-server architecture with three layers, described in detail in the architecture documentation:

Host

The host is the AI application the user interacts with — ChatGPT, Claude Desktop, Cursor, VS Code Copilot. The host manages the conversation, handles user input, and decides when to invoke tools.

Client

The client is a protocol-level component inside the host that handles the MCP connection. Each client maintains a 1:1 connection with a single MCP server. A host can have multiple clients, each connected to a different server — this is how you can use a weather tool, a GitHub tool, and a CRM tool in the same conversation.

Server

The server is what you build. It exposes tools, resources, and prompts to the AI client over one of two transport mechanisms:

  • stdio — For local servers running on the user's machine. The host launches the server as a subprocess and communicates via standard input/output. This is how most Claude Desktop MCP servers work.
  • Streamable HTTP — For remote servers running in the cloud. The client connects to an HTTP endpoint and communicates via JSON-RPC over HTTP. This is how ChatGPT and most web-based clients connect.
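For the stdio case, the host needs to know what command to launch. Here is a sketch of a Claude Desktop config entry (the file `claude_desktop_config.json`); the server name and paths are hypothetical:

```python
import json

# Hypothetical entry for Claude Desktop's config file: the host
# launches the listed command as a subprocess and speaks MCP over
# its stdin/stdout.
config = {
    "mcpServers": {
        "weather": {  # illustrative server name
            "command": "node",
            "args": ["/path/to/weather-server/index.js"],
        }
    }
}

print(json.dumps(config, indent=2))
```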

Under the hood, all communication uses JSON-RPC 2.0 — a lightweight protocol for structured request-response messaging. If you have worked with the Language Server Protocol (LSP) for code editors, MCP will feel familiar. The MCP specification explicitly cites LSP as an architectural inspiration.
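The JSON-RPC framing is simple enough to sketch directly: requests carry an `id` and a `method`, and responses echo the same `id`. Here is what a `tools/list` exchange might look like (the tool entry in the result is illustrative):

```python
# Minimal JSON-RPC 2.0 exchange as MCP uses it. The client asks the
# server what tools it offers; the server answers with tool
# definitions. The response id must match the request id.
request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

response = {
    "jsonrpc": "2.0",
    "id": 1,  # matches the request
    "result": {
        "tools": [
            {
                "name": "get_weather",
                "description": "Current weather for a city",
                "inputSchema": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            }
        ]
    },
}
```

This exchange is also what makes discovery automatic: the client never hardcodes the tool list, it asks for it at runtime.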

For a deeper technical walkthrough of the protocol internals, see MCP Architecture Explained.

A real-world example

Here is what an MCP interaction looks like in practice. Say you have a flight booking tool:

  1. User says: "Find me flights from Berlin to Tokyo next Friday"
  2. Host (ChatGPT) reasons: This looks like a flight search. I have an MCP tool called search_flights that accepts origin, destination, and date parameters.
  3. Client sends a tool call to the MCP server: { method: "tools/call", params: { name: "search_flights", arguments: { origin: "BER", destination: "TYO", date: "2026-03-13" } } }
  4. Server executes the logic — calls the airline API, processes results, maps them to a widget schema.
  5. Server returns structured content — a carousel of flight cards with prices, times, airlines, and "Book" action buttons.
  6. Host renders the widget inline in the chat. The user sees interactive flight cards, not a wall of text.
  7. User clicks "Book" on a card. The action sends a follow-up message to the AI with the flight ID, which triggers another tool call to start the booking flow.

This entire flow — from natural language query to interactive widget to follow-up action — is what MCP enables. Without it, you would need to build the AI integration, the rendering layer, and the action handling separately for every AI platform.
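Steps 3 through 5 of that flow can be sketched as JSON-RPC messages. The tool name and arguments mirror the example above; the exact result fields (including the widget payload) are illustrative:

```python
# Step 3: the client's tools/call request to the MCP server.
tool_call = {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
        "name": "search_flights",
        "arguments": {"origin": "BER", "destination": "TYO",
                      "date": "2026-03-13"},
    },
}

# Steps 4-5: the server runs its logic and returns structured content.
tool_result = {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
        "content": [{"type": "text", "text": "Found 3 flights BER to TYO"}],
        # hypothetical structured payload the host renders as flight cards
        "structuredContent": {
            "flights": [{"id": "LH714", "price_eur": 780, "depart": "13:35"}]
        },
    },
}
```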

MCP vs. traditional API integrations

One thing that trips people up: MCP is not a replacement for REST APIs. It is a protocol layer that sits on top of them.

Feature       | REST API               | MCP
------------- | ---------------------- | ----------------------------------
Protocol      | HTTP/REST              | JSON-RPC 2.0
Statefulness  | Stateless              | Stateful (session lifecycle)
Discovery     | Manual (docs, OpenAPI) | Automatic (capability negotiation)
AI-awareness  | None                   | Native (descriptions, schemas)
Rich UI       | Build your own         | Widget primitives in-protocol
Auth          | Per-API                | Standardized in spec
Audience      | Developers             | AI assistants

Your existing REST APIs still do the heavy lifting — fetching data, writing records, executing business logic. MCP wraps them in a layer that AI assistants understand. For a detailed comparison, see MCP vs. REST APIs.
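That wrapping is usually a thin adapter. A sketch, with a hypothetical endpoint URL and response fields, and a stand-in for a real HTTP client:

```python
def fetch_json(url):
    # Placeholder for a real HTTP call (urllib, requests, etc.);
    # returns canned data here so the sketch is self-contained.
    return {"temp_c": 12, "summary": "overcast"}

def handle_get_weather(arguments):
    # The existing REST API still does the heavy lifting...
    url = f"https://api.example.com/weather?city={arguments['city']}"
    data = fetch_json(url)
    # ...and the MCP layer just repackages the payload as tool-result
    # content the AI client understands.
    return {"content": [{"type": "text",
                         "text": f"{data['temp_c']} C, {data['summary']}"}]}

result = handle_get_weather({"city": "Berlin"})
```

The business logic, auth to the backend, and data access all stay where they are; the MCP server only translates between the protocol and your API.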

Who supports MCP

As of March 2026, MCP is supported by virtually every major AI platform:

  • Anthropic — Claude Desktop, Claude.ai, Claude API (the original creator)
  • OpenAI — ChatGPT supports remote MCP tools via streamable HTTP
  • Microsoft — VS Code Copilot, GitHub Copilot
  • Google — Gemini API, Android Studio
  • JetBrains — IntelliJ, PyCharm, WebStorm (all JetBrains IDEs)
  • Cursor — Full MCP support in the AI code editor
  • Windsurf, Zed, Replit, Sourcegraph — And many more

The MCP specification is versioned and maintained by the community. SDKs exist for TypeScript, Python, Java, Kotlin, C#, Go, Ruby, Rust, Swift, and Elixir.

For a head-to-head comparison of how different AI clients implement MCP, see MCP Client Comparison.

Building MCP apps

There are two ways to build MCP servers:

Code-first

Use the official TypeScript SDK or Python SDK to build a server from scratch. This gives you full control but requires understanding the protocol specification, managing server infrastructure, handling authentication, and building widget UIs manually.

This path is best for teams with MCP experience who need custom protocol-level behavior.

Visual builder (drio)

Use drio to build MCP tools visually — define tool schemas, connect APIs, design widgets, and deploy to all AI clients from a single canvas. No server code, no infrastructure management, no widget development.

drio is a compiler, not an AI code generator. Your visual configuration gets compiled into a typesafe, spec-compliant MCP server. The output is deterministic — same input, same output, every time.

This path is best for product teams, designers, entrepreneurs, and developers who want to ship fast without protocol-level plumbing. For a step-by-step walkthrough, check out Building MCP Tools with Rich UIs.

If you want to jump straight into building, see our guide on How to Add MCP Tools to ChatGPT for the most popular AI client.

What makes MCP different from function calling

If you have used OpenAI's function calling or Claude's tool use, you might wonder why MCP is needed. The key difference: function calling is a feature within a specific AI platform. MCP is a protocol that works across all of them.

Function calling in ChatGPT requires you to define tool schemas in the OpenAI API format. Claude's tool use requires Anthropic's format. They are similar but not interchangeable.

MCP defines a universal format. Build your tool once as an MCP server, and it works with ChatGPT, Claude, Cursor, VS Code Copilot, and every other MCP-compatible client. No format translation needed.

The other major difference is lifecycle management. Function calling is stateless — each API call is independent. MCP maintains a session with initialization, capability negotiation, and graceful shutdown. This enables features like progress updates, streaming results, and stateful multi-step workflows.
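The session setup that plain function calling lacks can be sketched as the opening handshake: an `initialize` request that negotiates protocol versions and capabilities before any tool is called. Field values below are illustrative (the version string is one published MCP spec revision):

```python
# Client -> server: propose a protocol version, declare capabilities.
initialize = {
    "jsonrpc": "2.0",
    "id": 0,
    "method": "initialize",
    "params": {
        "protocolVersion": "2025-06-18",
        "capabilities": {},  # what this client supports
        "clientInfo": {"name": "example-host", "version": "1.0.0"},
    },
}

# Server -> client: agree on a version, advertise what it offers.
initialize_result = {
    "jsonrpc": "2.0",
    "id": 0,
    "result": {
        "protocolVersion": "2025-06-18",
        "capabilities": {"tools": {"listChanged": True}},
        "serverInfo": {"name": "weather-server", "version": "0.1.0"},
    },
}
# The client then sends an initialized notification and the stateful
# session is live; tool calls, progress updates, and streaming all
# happen inside it.
```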

Takeaways

  • MCP is an open protocol for connecting AI assistants to external tools and data. One server, every AI client.
  • Three primitives: Tools (functions the AI calls), Resources (data the AI reads), Prompts (templates that guide behavior).
  • Client-server architecture with two transports: stdio for local, streamable HTTP for remote.
  • Supported by everyone — Anthropic, OpenAI, Google, Microsoft, JetBrains, and 70+ AI clients.
  • Not a replacement for APIs — MCP wraps your existing APIs in a layer AI assistants understand.
  • You can build MCP tools with code (TypeScript/Python SDKs) or visually with drio.

MCP is the standard that makes AI tool interoperability practical. Understanding it is the first step to building tools that work everywhere AI assistants do.