This guide explains how MCP (Model Context Protocol) servers work and how this implementation is structured.
MCP is an open protocol introduced by Anthropic that standardizes how AI models (LLMs) interact with external systems, tools, and data sources. It enables:
- Standardized Communication: Consistent way for LLMs to access external capabilities
- Resource Discovery: LLMs can discover and access data resources
- Tool Invocation: LLMs can invoke actions through tools
- Secure Access: Controlled access to system capabilities
MCP follows a client-server architecture:
```
┌─────────────┐         ┌─────────────┐         ┌─────────────┐
│     LLM     │────────▶│   Client    │────────▶│   Server    │
│  (Claude)   │         │  (Cursor)   │         │ (This App)  │
└─────────────┘         └─────────────┘         └─────────────┘
```
- LLM: The language model that needs to access external capabilities
- Client: Intermediary that connects LLM to servers (e.g., Cursor IDE)
- Server: Exposes resources and tools via MCP protocol
Resources are read-only data that LLMs can access. They're identified by URIs and can be:
- Static Resources: Fixed data like configuration files
- Dynamic Resources: Generated on-demand like package lists
- Template Resources: Parameterized resources (e.g., `codebase://file?path={path}`)
Example resource URIs:
- `python:packages://installed` - List of installed packages
- `project://index` - Complete project index
- `codebase://file?path=src/main.py` - File content
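To give a concrete sense of how a handler might pull the parameter out of a templated URI such as `codebase://file?path=...`, the standard library is enough; the helper below is an illustrative sketch, not part of this codebase:
```python
from urllib.parse import parse_qs, urlparse

def extract_path(uri: str) -> str:
    # Pull the `path` query parameter out of a codebase://file URI.
    parsed = urlparse(uri)            # scheme="codebase", netloc="file"
    params = parse_qs(parsed.query)   # {"path": ["src/main.py"]}
    return params["path"][0]

print(extract_path("codebase://file?path=src/main.py"))  # -> src/main.py
```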
Tools are actions that LLMs can invoke. They have:
- Name: Unique identifier
- Description: What the tool does
- Input Schema: JSON schema defining parameters
- Handler Function: Code that executes the action
Example tools:
- `install` - Install Python packages
- `index_project` - Index a project directory
- `analyze_codebase` - Analyze codebase structure
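As a rough sketch of what such a declaration looks like, the `install` tool's schema might be expressed as follows (the exact schema used by this server may differ):
```python
from mcp.types import Tool

# Illustrative declaration only; the real install tool may name or
# constrain its parameters differently.
install_tool = Tool(
    name="install",
    description="Install Python packages into the managed environment",
    inputSchema={
        "type": "object",
        "properties": {
            "packages": {
                "type": "array",
                "items": {"type": "string"},
                "description": "Package specifiers to install",
            }
        },
        "required": ["packages"],
    },
)
```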
Prompts are reusable prompt templates that servers expose to clients. They allow LLMs to use pre-defined, structured prompts with customizable arguments. Each prompt has:
- Name: Unique identifier
- Description: What the prompt does
- Arguments: Parameters that customize the prompt
- Messages: Formatted prompt content (can include multiple messages with roles)
Example prompts:
- `analyze_package_dependencies` - Analyze dependencies and suggest updates
- `code_review` - Review code for best practices
- `dependency_audit` - Audit dependencies for security
MCP supports multiple transport mechanisms:
- Stdio: Standard input/output (for local development)
- HTTP/SSE: HTTP with Server-Sent Events (for remote deployments)
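The stdio variant is shown in the implementation notes below. For the HTTP/SSE side, a minimal setup with the Python SDK's `SseServerTransport` mounted in a Starlette app might look like this; it is a sketch, and the route paths are illustrative:
```python
from mcp.server import Server
from mcp.server.sse import SseServerTransport
from starlette.applications import Starlette
from starlette.routing import Mount, Route

server = Server("python-package-mcp-server")
sse = SseServerTransport("/messages/")  # endpoint clients POST messages to

async def handle_sse(request):
    # Open an SSE stream and run the MCP server over it.
    async with sse.connect_sse(request.scope, request.receive, request._send) as streams:
        await server.run(streams[0], streams[1], server.create_initialization_options())

app = Starlette(routes=[
    Route("/sse", endpoint=handle_sse),
    Mount("/messages/", app=sse.handle_post_message),
])
```
A client would then open the SSE stream at `/sse` and post messages to `/messages/`.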
The server is initialized in `server.py`:
```python
from mcp.server import Server

server = Server("python-package-mcp-server")
```
Resources are registered using decorators:
```python
@server.list_resources()
async def list_resources() -> list[Resource]:
    # Return list of available resources
    pass

@server.read_resource()
async def read_resource(uri: str) -> str:
    # Return resource content
    pass
```
Tools are registered similarly:
```python
@server.list_tools()
async def list_tools() -> list[Tool]:
    # Return list of available tools
    pass

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list:
    # Execute tool and return result
    pass
```
Prompts are registered with two handlers:
```python
@server.list_prompts()
async def list_prompts() -> list[Prompt]:
    # Return list of available prompts
    pass

@server.get_prompt()
async def get_prompt(name: str, arguments: dict) -> GetPromptResult:
    # Return formatted prompt with arguments filled in
    pass
```
The server runs with a transport:
```python
# Stdio transport
from mcp.server.stdio import stdio_server

async with stdio_server() as (read_stream, write_stream):
    await server.run(read_stream, write_stream, ...)
```
A typical resource access flow:
- LLM requests resource: `python:packages://installed`
- Client forwards request to server
- Server's `read_resource()` handler is called
- Handler fetches data (e.g., runs `uv pip list`)
- Server returns JSON response
- Client forwards response to LLM
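A sketch of the handler behind this flow, assuming `uv pip list --format json` is available on the host (the actual handler in `resources/` may differ):
```python
import subprocess

@server.read_resource()
async def read_resource(uri: str) -> str:
    # Serve the installed-packages resource by shelling out to uv.
    if uri == "python:packages://installed":
        result = subprocess.run(
            ["uv", "pip", "list", "--format", "json"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout  # already a JSON document
    raise ValueError(f"Unknown resource: {uri}")
```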
A typical tool invocation flow:
- LLM decides to invoke tool: `install` with `{"packages": ["requests"]}`
- Client forwards tool call to server
- Server's `call_tool()` handler is called
- Handler executes action (e.g., runs `uv pip install requests`)
- Server returns result
- Client forwards result to LLM
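Correspondingly, a simplified `install` handler might look like the following; policy checks and error handling are omitted here, and the real implementation in `tools/` likely does more:
```python
import subprocess
from mcp.types import TextContent

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    # Run uv pip install for the requested packages and return the output.
    if name == "install":
        packages = arguments.get("packages", [])
        result = subprocess.run(
            ["uv", "pip", "install", *packages],
            capture_output=True, text=True,
        )
        return [TextContent(type="text", text=result.stdout or result.stderr)]
    raise ValueError(f"Unknown tool: {name}")
```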
A typical prompt usage flow:
- LLM requests prompt: `code_review` with `{"file_path": "src/server.py"}`
- Client forwards prompt request to server
- Server's `get_prompt()` handler is called
- Handler fills in arguments and formats prompt text
- Server returns formatted prompt messages
- Client forwards prompt to LLM for processing
To add a new resource:
- Create a resource handler in `resources/`:
```python
def get_my_resource() -> list[Resource]:
    return [Resource(
        uri="my:resource://data",
        name="My Resource",
        description="My resource description",
        mimeType="application/json",
    )]

def read_my_resource(uri: str) -> str:
    data = fetch_data()
    return json.dumps(data)
```
- Register it in `server.py`:
```python
from .resources import my_resource

@server.list_resources()
async def list_resources() -> list[Resource]:
    resources = []
    resources.extend(my_resource.get_my_resource())
    return resources
```
To add a new tool:
- Create a tool handler in `tools/`:
```python
def get_my_tool() -> list[Tool]:
    return [Tool(
        name="my_tool",
        description="My tool description",
        inputSchema={
            "type": "object",
            "properties": {
                "param": {"type": "string"}
            }
        }
    )]

async def handle_my_tool(arguments: dict) -> list[TextContent]:
    result = do_something(arguments["param"])
    return [TextContent(type="text", text=result)]
```
- Register it in `server.py`:
```python
from .tools import my_tool

@server.list_tools()
async def list_tools() -> list:
    tools = []
    tools.extend(my_tool.get_my_tool())
    return tools

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list:
    if name == "my_tool":
        return await my_tool.handle_my_tool(arguments)
```
To add a new prompt:
- Add the prompt definition to `list_prompts()` in `server.py`:
```python
@server.list_prompts()
async def list_prompts() -> list[Prompt]:
    return [
        # ... existing prompts ...
        Prompt(
            name="my_prompt",
            description="My prompt description",
            arguments=[
                PromptArgument(
                    name="arg_name",
                    description="Argument description",
                    required=True
                )
            ]
        )
    ]
```
- Add a handler to `get_prompt()`:
```python
@server.get_prompt()
async def get_prompt(name: str, arguments: dict[str, str] | None = None) -> GetPromptResult:
    arguments = arguments or {}
    # ... existing handlers ...
    elif name == "my_prompt":
        arg_value = arguments.get("arg_name", "")
        prompt_text = f"Custom prompt with {arg_value}"
        return GetPromptResult(
            description="My prompt description",
            messages=[
                PromptMessage(
                    role="user",
                    content=TextContent(type="text", text=prompt_text)
                )
            ]
        )
```
For HTTP transport, authentication is handled via API keys:
```python
from .security.auth import AuthMiddleware

auth = AuthMiddleware(api_key="secret-key", enable_auth=True)
auth.authenticate(provided_key)
```
Package installation can be restricted:
```python
from .security.policy import PolicyEngine

policy = PolicyEngine(
    allowed_packages=["requests", "pytest.*"],
    blocked_packages=["malicious.*"]
)
policy.check_package("requests")       # OK
policy.check_package("malicious-pkg")  # Raises PolicyViolationError
```
All tool invocations are logged:
```python
from .security.audit import AuditLogger

audit = AuditLogger()
audit.log_tool_invocation(
    "install",
    parameters={"packages": ["requests"]},
    success=True
)
```
Best practices:
- Error Handling: Always handle errors gracefully and return meaningful messages (see the sketch after this list)
- Input Validation: Validate all inputs before processing
- Logging: Use structured logging for debugging and auditing
- Resource Efficiency: Cache expensive operations when possible
- Security: Never trust user input; validate and sanitize
- Documentation: Document all resources and tools clearly
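To make the error-handling point concrete, a tool handler can catch failures and return them as content rather than letting exceptions propagate. The sketch below assumes `PolicyViolationError` is importable from `.security.policy` and uses a hypothetical `dispatch_tool()` helper:
```python
import logging
from mcp.types import TextContent
from .security.policy import PolicyViolationError  # assumed location

logger = logging.getLogger(__name__)

@server.call_tool()
async def call_tool(name: str, arguments: dict) -> list[TextContent]:
    try:
        return await dispatch_tool(name, arguments)  # hypothetical dispatcher
    except PolicyViolationError as exc:
        return [TextContent(type="text", text=f"Blocked by policy: {exc}")]
    except Exception as exc:
        logger.exception("Tool %s failed", name)
        return [TextContent(type="text", text=f"Error running {name}: {exc}")]
```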
Next steps:
- Read the architecture documentation
- Explore the codebase, starting with `server.py`
- Try extending the server with your own resources or tools
- Review the enterprise deployment guide