Bridge AI Models to Live Data with the Model Context Protocol (MCP)

Why MCP matters

AI models trained on static datasets are powerful but limited when they can’t access live data, external tools, or up-to-date context. The Model Context Protocol (MCP) provides a structured way to plug models into dynamic resources and modular tools so systems can fetch information, run computations, and maintain conversational context in real time.

Core building blocks

MCP rests on three core concepts: resources, tools, and messages. Resources represent external data (documents, APIs, datasets), tools are pluggable handlers that perform actions (analysis, summarization, search), and messages form the contextual memory of interactions between clients and servers.

The tutorial begins by defining these basic data structures in Python:

import json
import asyncio
from dataclasses import dataclass, asdict
from typing import Dict, List, Any, Optional, Callable
from datetime import datetime
import random


@dataclass
class Resource:
   uri: str
   name: str
   description: str
   mime_type: str
   content: Any = None


@dataclass
class Tool:
   name: str
   description: str
   parameters: Dict[str, Any]
   handler: Optional[Callable] = None


@dataclass
class Message:
   role: str
   content: str
   timestamp: Optional[str] = None
   def __post_init__(self):
       if not self.timestamp:
           self.timestamp = datetime.now().isoformat()
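
An equivalent, arguably more idiomatic variant replaces the `__post_init__` hook with `dataclasses.field` and a `default_factory`, which fills the timestamp at construction time. A self-contained sketch of that alternative:

```python
from dataclasses import dataclass, asdict, field
from datetime import datetime


@dataclass
class Message:
    role: str
    content: str
    # default_factory stamps each instance at construction time,
    # avoiding the None-then-__post_init__ dance above
    timestamp: str = field(default_factory=lambda: datetime.now().isoformat())


msg = asdict(Message(role="user", content="hello"))
print(sorted(msg))  # ['content', 'role', 'timestamp']
```

Both versions behave the same for callers; the `default_factory` form just keeps the default logic next to the field it belongs to.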

Server: managing resources and tools

The MCPServer encapsulates available resources and tools and exposes asynchronous APIs for retrieval and execution. Its responsibilities include registering resources and tools, serving resource content, and invoking tool handlers. The implementation emphasizes async calls to keep I/O efficient.

class MCPServer:
   def __init__(self, name: str):
       self.name = name
       self.resources: Dict[str, Resource] = {}
       self.tools: Dict[str, Tool] = {}
       self.capabilities = {"resources": True, "tools": True, "prompts": True, "logging": True}
       print(f"✓ MCP Server '{name}' initialized with capabilities: {list(self.capabilities.keys())}")
   def register_resource(self, resource: Resource) -> None:
       self.resources[resource.uri] = resource
       print(f"  → Resource registered: {resource.name} ({resource.uri})")
   def register_tool(self, tool: Tool) -> None:
       self.tools[tool.name] = tool
       print(f"  → Tool registered: {tool.name}")
   async def get_resource(self, uri: str) -> Optional[Resource]:
       await asyncio.sleep(0.1)
       return self.resources.get(uri)
   async def execute_tool(self, tool_name: str, arguments: Dict[str, Any]) -> Any:
       if tool_name not in self.tools:
           raise ValueError(f"Tool '{tool_name}' not found")
       tool = self.tools[tool_name]
       if tool.handler:
           return await tool.handler(**arguments)
       return {"status": "executed", "tool": tool_name, "args": arguments}
   def list_resources(self) -> List[Dict[str, str]]:
       return [{"uri": r.uri, "name": r.name, "description": r.description} for r in self.resources.values()]
   def list_tools(self) -> List[Dict[str, Any]]:
       return [{"name": t.name, "description": t.description, "parameters": t.parameters} for t in self.tools.values()]

Client: context and interaction

MCPClient connects to one or more MCPServer instances, queries available resources, fetches content, and invokes tools. It also maintains a context window — a list of Message objects — that records system-level events and can be used by models to inform their decisions.

class MCPClient:
   def __init__(self, client_id: str):
       self.client_id = client_id
       self.connected_servers: Dict[str, MCPServer] = {}
       self.context: List[Message] = []
       print(f"n✓ MCP Client '{client_id}' initialized")
   def connect_server(self, server: MCPServer) -> None:
       self.connected_servers[server.name] = server
       print(f"  → Connected to server: {server.name}")
   async def query_resources(self, server_name: str) -> List[Dict[str, str]]:
       if server_name not in self.connected_servers:
           raise ValueError(f"Not connected to server: {server_name}")
       return self.connected_servers[server_name].list_resources()
   async def fetch_resource(self, server_name: str, uri: str) -> Optional[Resource]:
       if server_name not in self.connected_servers:
           raise ValueError(f"Not connected to server: {server_name}")
       server = self.connected_servers[server_name]
       resource = await server.get_resource(uri)
       if resource:
           self.add_to_context(Message(role="system", content=f"Fetched resource: {resource.name}"))
       return resource
   async def call_tool(self, server_name: str, tool_name: str, **kwargs) -> Any:
       if server_name not in self.connected_servers:
           raise ValueError(f"Not connected to server: {server_name}")
       server = self.connected_servers[server_name]
       result = await server.execute_tool(tool_name, kwargs)
       self.add_to_context(Message(role="system", content=f"Tool '{tool_name}' executed"))
       return result
   def add_to_context(self, message: Message) -> None:
       self.context.append(message)
   def get_context(self) -> List[Dict[str, Any]]:
       return [asdict(msg) for msg in self.context]
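
The context list above grows without bound as resources are fetched and tools are called. A simple trimming policy (a hypothetical add-on, not part of the `MCPClient` shown here) keeps only the most recent messages:

```python
from typing import Any, Dict, List


def trim_context(messages: List[Dict[str, Any]], max_messages: int = 50) -> List[Dict[str, Any]]:
    # Keep only the newest entries; older events fall out of the window.
    return messages[-max_messages:]


history = [{"role": "system", "content": f"event {i}"} for i in range(60)]
window = trim_context(history)
print(len(window), window[0]["content"])  # 50 event 10
```

A production client would likely trim by token budget rather than message count, but the sliding-window idea is the same.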

Tools: modular handlers

Tools are implemented as asynchronous handlers that accept inputs and return structured outputs. The tutorial uses simple mock handlers for sentiment analysis, text summarization, and knowledge search to demonstrate how tools plug into MCP.

async def analyze_sentiment(text: str) -> Dict[str, Any]:
   await asyncio.sleep(0.2)
   sentiments = ["positive", "negative", "neutral"]
   return {"text": text, "sentiment": random.choice(sentiments), "confidence": round(random.uniform(0.7, 0.99), 2)}


async def summarize_text(text: str, max_length: int = 100) -> Dict[str, Any]:
   await asyncio.sleep(0.15)
   summary = text[:max_length] + "..." if len(text) > max_length else text
   return {"original_length": len(text), "summary": summary, "compression_ratio": round(len(summary) / max(len(text), 1), 2)}


async def search_knowledge(query: str, top_k: int = 3) -> List[Dict[str, Any]]:
   await asyncio.sleep(0.25)
   mock_results = [{"title": f"Result {i+1} for '{query}'", "score": round(random.uniform(0.5, 1.0), 2)} for i in range(top_k)]
   return sorted(mock_results, key=lambda x: x["score"], reverse=True)
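
Because every handler is a coroutine, independent tool calls can be issued concurrently with `asyncio.gather` — this is where the async design pays off. A minimal standalone sketch with a stand-in handler:

```python
import asyncio
import time


async def mock_tool(name: str, delay: float) -> str:
    # Stand-in for any awaitable tool handler (sentiment, summary, search...)
    await asyncio.sleep(delay)
    return f"{name} done"


async def main() -> list:
    # Three 0.2 s handlers finish in roughly 0.2 s total, not 0.6 s,
    # because they wait on I/O concurrently.
    return await asyncio.gather(
        mock_tool("sentiment", 0.2),
        mock_tool("summary", 0.2),
        mock_tool("search", 0.2),
    )


start = time.perf_counter()
results = asyncio.run(main())
elapsed = time.perf_counter() - start
print(results, f"{elapsed:.2f}s")
```

The same pattern applies to `MCPServer.execute_tool`: a client can gather several `call_tool` coroutines instead of awaiting them one by one.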

A complete demo

The tutorial ties everything together in a run_mcp_demo function that spins up a server, registers resources and tools, connects a client, and runs a series of demos: listing resources, fetching a dataset, analyzing sentiment, summarizing text, searching the knowledge base, and inspecting the client’s context window.

async def run_mcp_demo():
   print("=" * 60)
   print("MODEL CONTEXT PROTOCOL (MCP) - ADVANCED TUTORIAL")
   print("=" * 60)
   print("\n[1] Setting up MCP Server...")
   server = MCPServer("knowledge-server")
   print("\n[2] Registering resources...")
   server.register_resource(Resource(uri="docs://python-guide", name="Python Programming Guide", description="Comprehensive Python documentation", mime_type="text/markdown", content="# Python Guide\nPython is a high-level programming language..."))
   server.register_resource(Resource(uri="data://sales-2024", name="2024 Sales Data", description="Annual sales metrics", mime_type="application/json", content={"q1": 125000, "q2": 142000, "q3": 138000, "q4": 165000}))
   print("\n[3] Registering tools...")
   server.register_tool(Tool(name="analyze_sentiment", description="Analyze sentiment of text", parameters={"text": {"type": "string", "required": True}}, handler=analyze_sentiment))
   server.register_tool(Tool(name="summarize_text", description="Summarize long text", parameters={"text": {"type": "string", "required": True}, "max_length": {"type": "integer", "default": 100}}, handler=summarize_text))
   server.register_tool(Tool(name="search_knowledge", description="Search knowledge base", parameters={"query": {"type": "string", "required": True}, "top_k": {"type": "integer", "default": 3}}, handler=search_knowledge))
   client = MCPClient("demo-client")
   client.connect_server(server)
   print("\n" + "=" * 60)
   print("DEMONSTRATION: MCP IN ACTION")
   print("=" * 60)
   print("\n[Demo 1] Listing available resources...")
   resources = await client.query_resources("knowledge-server")
   for res in resources:
       print(f"  • {res['name']}: {res['description']}")
   print("\n[Demo 2] Fetching sales data resource...")
   sales_resource = await client.fetch_resource("knowledge-server", "data://sales-2024")
   if sales_resource:
       print(f"  Data: {json.dumps(sales_resource.content, indent=2)}")
   print("\n[Demo 3] Analyzing sentiment...")
   sentiment_result = await client.call_tool("knowledge-server", "analyze_sentiment", text="MCP is an amazing protocol for AI integration!")
   print(f"  Result: {json.dumps(sentiment_result, indent=2)}")
   print("\n[Demo 4] Summarizing text...")
   summary_result = await client.call_tool("knowledge-server", "summarize_text", text="The Model Context Protocol enables seamless integration between AI models and external data sources...", max_length=50)
   print(f"  Summary: {summary_result['summary']}")
   print("\n[Demo 5] Searching knowledge base...")
   search_result = await client.call_tool("knowledge-server", "search_knowledge", query="machine learning", top_k=3)
   print("  Top results:")
   for result in search_result:
       print(f"    - {result['title']} (score: {result['score']})")
   print("\n[Demo 6] Current context window...")
   context = client.get_context()
   print(f"  Context length: {len(context)} messages")
   for i, msg in enumerate(context[-3:], 1):
       print(f"  {i}. [{msg['role']}] {msg['content']}")
   print("\n" + "=" * 60)
   print("✓ MCP Tutorial Complete!")
   print("=" * 60)
   print("\nKey Takeaways:")
   print("• MCP enables modular AI-to-resource connections")
   print("• Resources provide context from external sources")
   print("• Tools enable dynamic operations and actions")
   print("• Async design supports efficient I/O operations")


if __name__ == "__main__":
   import sys
   if 'ipykernel' in sys.modules or 'google.colab' in sys.modules:
       await run_mcp_demo()
   else:
       asyncio.run(run_mcp_demo())

How this changes AI design

By separating data and action from model weights, MCP enables models to be treated as agents that can query fresh information, call specialized tools, and update their internal context. This makes AI systems more adaptable, auditable, and easier to extend with domain-specific capabilities.

Practical next steps

Use the provided server and client patterns to prototype connectors for real APIs, databases, or computation services. Replace mock tool handlers with real implementations (NLP pipelines, search indices, analytics engines) and expand resource registries to reflect production datasets.
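
As one sketch of that swap, assuming a JSON file on disk stands in for a production API, a file-backed resource loader might look like the following (names such as `load_json_resource` are illustrative, not part of the tutorial's API):

```python
import asyncio
import json
import tempfile
from pathlib import Path
from typing import Any, Dict


async def load_json_resource(path: str) -> Dict[str, Any]:
    # A real connector would use an async HTTP or database client;
    # here a blocking file read is offloaded to a worker thread.
    text = await asyncio.to_thread(Path(path).read_text, encoding="utf-8")
    return json.loads(text)


# Write a small stand-in dataset, then load it the way a
# registered Resource's content could be populated.
path = Path(tempfile.gettempdir()) / "sales-2024.json"
path.write_text(json.dumps({"q1": 125000, "q2": 142000}), encoding="utf-8")
data = asyncio.run(load_json_resource(str(path)))
print(data)  # {'q1': 125000, 'q2': 142000}
```

Wiring this into the tutorial's pattern is then a matter of registering a `Resource` whose content comes from such a loader, or a `Tool` whose handler wraps it.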