Doramagic Project Pack · Human Manual
langchain-mcp-adapters
Related topics: Installation, Quick Start Guide
Introduction
LangChain MCP Adapters is a Python library that bridges the gap between the Model Context Protocol (MCP) ecosystem and LangChain/LangGraph applications. This library provides a lightweight wrapper that converts MCP tools, prompts, and resources into LangChain-compatible formats, enabling seamless integration of MCP servers with AI agents and applications built on the LangChain framework.
Overview
The Model Context Protocol (MCP) is an open protocol developed by Anthropic that enables AI applications to connect with external data sources, tools, and services. MCP defines a standard interface for AI models to interact with various resources through a client-server architecture.
LangChain MCP Adapters serves as the integration layer between these two ecosystems. It allows developers to:
- Use MCP servers as tool providers for LangChain and LangGraph agents
- Load tools from multiple MCP servers simultaneously
- Convert MCP resources into LangChain Blob objects for processing
- Transform MCP prompts into formats compatible with LangChain
- Intercept and modify tool call behavior through a configurable middleware pattern
Sources: README.md:1-20
Architecture
The library follows a modular architecture with clear separation of concerns across several key components:
graph TD
A[LangChain/LangGraph Agent] --> B[langchain-mcp-adapters]
B --> C[Tools Adapter]
B --> D[Resources Adapter]
B --> E[Prompts Adapter]
B --> F[MultiServerMCPClient]
C --> G[MCP ClientSession]
D --> G
E --> G
F --> H[Connection Manager]
H --> I[StdioConnection]
H --> J[StreamableHttpConnection]
H --> K[SSEConnection]
H --> L[WebsocketConnection]
G --> M[MCP Server 1]
G --> N[MCP Server 2]
G --> O[MCP Server N]
Core Components
| Component | File | Purpose |
|---|---|---|
MultiServerMCPClient | client.py | Manages connections to multiple MCP servers |
load_mcp_tools() | tools.py | Converts MCP tools to LangChain tools |
load_mcp_resources() | resources.py | Converts MCP resources to LangChain Blobs |
load_mcp_prompt() | prompts.py | Converts MCP prompts to LangChain prompts |
ToolCallInterceptor | interceptors.py | Middleware for tool call lifecycle management |
Sources: langchain_mcp_adapters/__init__.py:1-12
Supported Transports
The library supports multiple transport mechanisms for connecting to MCP servers. Each transport type is implemented in the sessions module and provides different capabilities for various deployment scenarios.
graph LR
A[Client Application] --> B[Transport Layer]
B --> C[stdio]
B --> D[streamable-http]
B --> E[SSE]
B --> F[WebSocket]
C --> G[Local Process]
D --> H[HTTP Server]
E --> H
F --> H
Transport Comparison
| Transport | Use Case | Headers Support | Stateful | Notes |
|---|---|---|---|---|
stdio | Local subprocesses | No | Yes | Standard I/O communication |
streamable-http | HTTP-based servers | Yes | Configurable | Recommended for stateless deployments |
sse | Server-Sent Events | Yes | Yes | Server-to-client streaming (client sends via HTTP POST)
websocket | Persistent connections | No | Yes | Low latency, real-time |
Sources: langchain_mcp_adapters/sessions.py
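To make the comparison concrete, here is a hypothetical connection map pairing one server entry with each transport. Server names, URLs, and the bearer token are placeholders, and the `streamable_http` transport string is an assumption (the table above labels the transport "streamable-http"):

```python
# Hypothetical connection map with one entry per transport type.
# All names, URLs, and credentials are illustrative placeholders.
connections = {
    "math": {  # stdio: spawn a local subprocess, talk over stdin/stdout
        "transport": "stdio",
        "command": "python",
        "args": ["./math_server.py"],
    },
    "weather": {  # streamable HTTP: stateless-friendly HTTP server
        "transport": "streamable_http",
        "url": "http://localhost:8000/mcp",
        "headers": {"Authorization": "Bearer <token>"},
    },
    "events": {  # SSE: Server-Sent Events endpoint
        "transport": "sse",
        "url": "http://localhost:8001/sse",
        "headers": {"X-Api-Key": "<key>"},
    },
    "realtime": {  # WebSocket: persistent low-latency connection
        "transport": "websocket",
        "url": "ws://localhost:8002/ws",
    },
}
```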
Tool Conversion Process
When loading MCP tools, the library performs a series of conversions to transform the tool definitions into LangChain-compatible StructuredTool objects. This process involves mapping MCP tool schemas, descriptions, and execution semantics.
graph TD
A[MCP Tool Definition] --> B[Extract inputSchema]
B --> C[Create StructuredTool]
C --> D[Wrap with interceptor chain]
D --> E[Return BaseTool]
E --> F[Used by LangChain Agent]
F --> G[Tool call invocation]
G --> H[MCP ClientSession.call_tool]
H --> I[Result conversion]
I --> J[Return to Agent]
Tool Result Handling
The tool adapter handles various content types returned by MCP tools:
| MCP Content Type | LangChain Output | Notes |
|---|---|---|
TextContent | {"type": "text", "text": ...} | Direct text conversion |
ImageContent | {"type": "image", "base64": ..., "mime_type": ...} | Image data with MIME type |
ResourceLink (image/*) | {"type": "image", "url": ...} | Image URL reference |
ResourceLink (other) | {"type": "file", "url": ...} | File URL reference |
EmbeddedResource (text) | {"type": "text", "text": ...} | Embedded text content |
EmbeddedResource (blob) | {"type": "image"/"file", ...} | Binary content |
Sources: langchain_mcp_adapters/tools.py:70-130
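As an illustration of the table (not the library's actual converter, which operates on typed MCP objects rather than dicts), the mapping can be sketched like this:

```python
def to_lc_block(mcp_block: dict) -> dict:
    """Sketch of the MCP-to-LangChain content mapping using plain dicts."""
    kind = mcp_block["type"]
    if kind == "text":
        return {"type": "text", "text": mcp_block["text"]}
    if kind == "image":
        return {"type": "image", "base64": mcp_block["data"], "mime_type": mcp_block["mimeType"]}
    if kind == "resource_link":
        mime = mcp_block.get("mimeType", "")
        # Image links become image blocks; everything else becomes a file block
        target = "image" if mime.startswith("image/") else "file"
        return {"type": target, "url": mcp_block["uri"]}
    raise ValueError(f"Unsupported content type: {kind}")
```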
Interceptor System
The library provides an interceptor mechanism that allows developers to intercept and modify tool call behavior. Interceptors compose in an onion pattern (layered wrapping, similar to the decorator pattern), giving each layer a chance to act before and after the next.
graph TD
A[Request] --> B[Interceptor 1]
B --> C[Interceptor 2]
C --> D[Interceptor N]
D --> E[Base Handler<br/>session.call_tool]
E --> F[Interceptor N Result]
F --> G[Interceptor 2 Result]
G --> H[Interceptor 1 Result]
H --> I[Response]
ToolCallInterceptor Interface
Interceptors implement the ToolCallInterceptor protocol and can:
- Modify tool arguments before execution
- Change the tool name being called
- Add or modify HTTP headers for requests
- Transform or wrap the result
- Handle errors and retry logic
- Support LangGraph's Command for state modification
Sources: langchain_mcp_adapters/interceptors.py:1-50
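For example, error handling with retries might look like the following sketch. The `(request, next_handler)` call shape is assumed from the interceptor examples in this manual, and real code would catch a narrower exception type:

```python
import asyncio

async def retry_interceptor(request, next_handler, max_attempts=3):
    """Sketch: re-invoke the downstream handler when a tool call fails."""
    last_error = None
    for _ in range(max_attempts):
        try:
            return await next_handler(request)
        except Exception as exc:  # narrow this in real code
            last_error = exc
    raise last_error
```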
Resource Conversion
MCP resources are converted to LangChain Blob objects, enabling integration with LangChain's document loading and processing capabilities.
graph TD
A[MCP Resource URI] --> B[session.read_resource]
B --> C[ResourceContents]
C --> D{Content Type?}
D -->|TextResourceContents| E[Extract text]
D -->|BlobResourceContents| F[base64 decode]
E --> G[Blob.from_data]
F --> G
G --> H[LangChain Blob]
Sources: langchain_mcp_adapters/resources.py:1-60
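The branch in the diagram reduces to a text-vs-base64 decision. A minimal sketch, using plain dicts in place of the MCP ResourceContents classes:

```python
import base64

def resource_bytes(contents: dict) -> bytes:
    """Return raw bytes for a resource: text is UTF-8 encoded as-is,
    blob payloads are base64-decoded first (simplified stand-in types)."""
    if "text" in contents:
        return contents["text"].encode("utf-8")
    return base64.b64decode(contents["blob"])
```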
Basic Usage Patterns
Single Server with load_mcp_tools
from mcp import ClientSession
from langchain_mcp_adapters.tools import load_mcp_tools

# Initialize MCP client session
async with ClientSession(read, write) as session:
    await session.initialize()
    tools = await load_mcp_tools(session)
    # Use tools with LangChain agent
Multi-Server with MultiServerMCPClient
from langchain_mcp_adapters.client import MultiServerMCPClient
client = MultiServerMCPClient({
    "math": {
        "command": "python",
        "args": ["./math_server.py"],
        "transport": "stdio",
    },
    "weather": {
        "url": "http://localhost:8000/mcp",
        "transport": "http",
    },
})
tools = await client.get_tools()
Sources: README.md:40-80
Installation
The library can be installed via pip:
pip install langchain-mcp-adapters
For LangGraph integration with full agent capabilities:
pip install langchain-mcp-adapters langgraph "langchain[openai]"
Sources: README.md:25-30
Key Features Summary
| Feature | Description |
|---|---|
| Tool Conversion | Convert MCP tools to LangChain StructuredTool objects |
| Multi-Server Support | Connect to multiple MCP servers simultaneously |
| Resource Loading | Convert MCP resources to LangChain Blobs |
| Transport Flexibility | Support for stdio, HTTP, SSE, and WebSocket transports |
| Interceptor Middleware | Hook into tool call lifecycle for custom behavior |
| LangGraph Integration | Full compatibility with LangGraph agents and state management |
| Pagination Support | Automatic handling of paginated tool listings |
Related Documentation
- Tools Module - Detailed guide on tool conversion and execution
- Client Module - Multi-server client configuration and usage
- Resources Module - Resource loading and conversion
- Interceptors - Middleware and request/response modification
- Sessions - Transport layer implementation details
Sources: [README.md:1-20](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)
Installation
Related topics: Introduction, Quick Start Guide
This page documents how to install and set up the langchain-mcp-adapters library, which provides a lightweight wrapper that makes Anthropic Model Context Protocol (MCP) tools compatible with LangChain and LangGraph.
Overview
The langchain-mcp-adapters library bridges MCP servers with LangChain/LangGraph ecosystems. It enables:
- Converting MCP tools into LangChain tools
- Connecting to multiple MCP servers simultaneously
- Loading and managing MCP resources as LangChain Blob objects
- Intercepting and modifying tool call execution
Sources: README.md:1-20
Prerequisites
Python Version
| Version | Support Status |
|---|---|
| Python 3.10+ | Required |
| Python 3.11+ | Recommended |
| Python 3.12+ | Supported |
Required Dependencies
The following packages are automatically installed as dependencies:
| Package | Purpose | Min Version |
|---|---|---|
langchain-core | Core LangChain functionality | Latest stable |
mcp | Model Context Protocol SDK | Latest stable |
pydantic | Data validation and settings | V2 |
httpx | HTTP client for streamable HTTP transport | Latest stable |
Optional Dependencies
| Package | Purpose | Install Command |
|---|---|---|
langgraph | For LangGraph agent support | pip install langgraph |
langchain[openai] | OpenAI integration for agents | pip install "langchain[openai]" |
Sources: langchain_mcp_adapters/tools.py:1-50
Basic Installation
Standard Installation
Install the core package using pip:
pip install langchain-mcp-adapters
Sources: README.md:32
With LangGraph Support
For full LangGraph agent functionality:
pip install langchain-mcp-adapters langgraph "langchain[openai]"
This installs:
- The MCP adapters library
- LangGraph for building stateful agents
- OpenAI integration for LLM-powered agents
Sources: README.md:32-36
Environment Configuration
OpenAI API Key
If using OpenAI models with the library, set your API key:
export OPENAI_API_KEY=<your_api_key>
Alternatively, pass it programmatically:
import os
os.environ["OPENAI_API_KEY"] = "your-api-key"
Package Dependencies Graph
graph TD
subgraph "langchain-mcp-adapters"
A[tools.py] --> B[Base Tools Module]
A --> C[Tool Interceptors]
D[resources.py] --> E[Resource Adapter]
F[client.py] --> G[MultiServerMCPClient]
H[sessions.py] --> I[Session Management]
end
subgraph "Required Dependencies"
J[langchain-core] --> B
J --> E
K[mcp Python SDK] --> B
K --> G
K --> I
L[pydantic] --> B
M[httpx] --> I
end
subgraph "Optional Dependencies"
N[langgraph] -.->|if installed| B
N -.->|if installed| G
end
Installation Verification
After installation, verify the package is correctly installed:
import langchain_mcp_adapters
print(langchain_mcp_adapters.__version__)
Test basic MCP tool loading:
from langchain_mcp_adapters.tools import load_mcp_tools
from langchain_mcp_adapters.client import MultiServerMCPClient
# Verify imports work
print("Installation verified successfully!")
Transport-Specific Installation Notes
The library supports multiple MCP server transport types, each with specific requirements:
Standard I/O (stdio) Transport
No additional dependencies required. Uses the built-in mcp SDK stdio client.
Sources: langchain_mcp_adapters/sessions.py:1-100
Streamable HTTP Transport
Requires httpx for HTTP client functionality (included by default).
pip install langchain-mcp-adapters
# httpx is installed as a dependency
Server-Sent Events (SSE) Transport
Requires httpx with SSE support (included by default).
Sources: langchain_mcp_adapters/sessions.py:100-200
Installing Development Version
From Source
To install the latest development version from the repository:
git clone https://github.com/langchain-ai/langchain-mcp-adapters.git
cd langchain-mcp-adapters
pip install -e .
With Development Dependencies
git clone https://github.com/langchain-ai/langchain-mcp-adapters.git
cd langchain-mcp-adapters
pip install -e ".[dev]"
Dependency Resolution
Core Dependencies
The package requires these core dependencies which are installed automatically:
# From pyproject.toml
dependencies = [
    "langchain-core>=0.0.1",
    "mcp>=1.0.0",
    "pydantic>=2.0.0",
    "httpx>=0.25.0",
]
Optional Feature Dependencies
| Feature | Dependencies |
|---|---|
| LangGraph Support | langgraph |
| All Features | langgraph, langchain[openai] |
Sources: pyproject.toml
Importing the Package
After installation, import the main components:
# Core tools module
from langchain_mcp_adapters.tools import load_mcp_tools, convert_mcp_tool_to_langchain_tool
# Multi-server client
from langchain_mcp_adapters.client import MultiServerMCPClient
# Resource adapter
from langchain_mcp_adapters.resources import load_mcp_resources, get_mcp_resource
# Session management
from langchain_mcp_adapters.sessions import create_session, Connection
# Interceptors (optional)
from langchain_mcp_adapters.interceptors import ToolCallInterceptor
Sources: langchain_mcp_adapters/tools.py:1-50
Next Steps
After installation, proceed to:
- Quickstart Guide - Get started with basic MCP tool usage
- Multi-Server Setup - Connect to multiple MCP servers
- LangGraph Integration - Build agents with MCP tools
- Client Configuration - Configure connection options and transports
Sources: README.md:1-20
Quick Start Guide
Related topics: Introduction, Tool Conversion, MultiServerMCPClient
This guide provides a comprehensive introduction to langchain-mcp-adapters, a library that bridges Anthropic's Model Context Protocol (MCP) servers with LangChain and LangGraph applications.
Overview
The langchain-mcp-adapters library serves two primary purposes:
- Tool Conversion: Transform MCP tools into LangChain-compatible tools that integrate seamlessly with LangGraph agents
- Multi-Server Client: Manage connections to multiple MCP servers simultaneously
The library provides a lightweight wrapper that enables developers to leverage MCP servers' capabilities within the LangChain ecosystem without additional boilerplate code.
Installation
Install the core package along with required dependencies:
pip install langchain-mcp-adapters
For development with OpenAI models:
pip install langchain-mcp-adapters langgraph "langchain[openai]"
Architecture Overview
The library follows a layered architecture where MCP client sessions interact with server tools, prompts, and resources through adapter classes that convert data formats between MCP and LangChain standards.
graph TD
A[LangChain / LangGraph Application] --> B[langchain-mcp-adapters]
B --> C[MultiServerMCPClient]
B --> D[Individual Tool Conversion]
C --> E[MCP Server 1]
C --> F[MCP Server 2]
C --> N[MCP Server N]
D --> E
D --> F
D --> N
E --> G[stdio Transport]
F --> H[HTTP Transport]
F --> I[SSE Transport]
F --> J[WebSocket Transport]
Core Components
MultiServerMCPClient
The MultiServerMCPClient manages connections to multiple MCP servers and provides unified access to their tools, prompts, and resources.
Sources: langchain_mcp_adapters/client.py:1-50
Connection Configuration
| Parameter | Type | Description |
|---|---|---|
command | str | Executable command (e.g., "python", "node") |
args | list[str] | Command arguments |
transport | str | Transport type: stdio, http, sse, websocket |
url | str | Server URL for HTTP/SSE/WebSocket transports |
headers | dict[str, str] | Custom HTTP headers for requests |
Supported Transports
| Transport | Use Case | Notes |
|---|---|---|
stdio | Local subprocess servers | Communication via stdin/stdout |
http | Remote HTTP servers | Streamable HTTP request/response
sse | Servers using Server-Sent Events | Real-time streaming |
websocket | WebSocket connections | Bidirectional communication |
Sources: langchain_mcp_adapters/client.py:1-100
Basic Usage Patterns
Pattern 1: Direct Session Usage
For single-server scenarios, create an MCP session and load tools directly:
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
from langchain_mcp_adapters.tools import load_mcp_tools
async with streamablehttp_client("http://localhost:3000/mcp") as (read, write, _):
    async with ClientSession(read, write) as session:
        await session.initialize()
        tools = await load_mcp_tools(session)
        # Use tools with LangChain/LangGraph
Sources: README.md:1-50
Pattern 2: MultiServerMCPClient with stdio
Connect to locally running MCP servers using standard I/O:
from langchain_mcp_adapters.client import MultiServerMCPClient
client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            "args": ["/path/to/math_server.py"],
            "transport": "stdio",
        },
    }
)
tools = await client.get_tools()
Sources: README.md:50-100
Pattern 3: MultiServerMCPClient with HTTP
Connect to remote MCP servers via HTTP transport:
from langchain_mcp_adapters.client import MultiServerMCPClient
client = MultiServerMCPClient(
    {
        "weather": {
            "url": "http://localhost:8000/mcp",
            "transport": "http",
        }
    }
)
tools = await client.get_tools()
Sources: README.md:100-150
Pattern 4: Explicit Session Management
For advanced scenarios requiring direct session access:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.tools import load_mcp_tools
client = MultiServerMCPClient({...})
async with client.session("math") as session:
    tools = await load_mcp_tools(session)
Sources: langchain_mcp_adapters/client.py:50-80
Tool Loading
load_mcp_tools Function
The load_mcp_tools function retrieves all available tools from an MCP session and converts them to LangChain tools.
Sources: langchain_mcp_adapters/tools.py:100-200
Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
session | ClientSession | Yes | MCP client session |
connection | Connection | No | Connection config if session is None |
callbacks | Callbacks | No | Event notification handlers |
tool_interceptors | list[ToolCallInterceptor] | No | Interceptors for tool call processing |
server_name | str | No | Server identifier for logging |
tool_name_prefix | bool | No | Prefix tool names with server name (default: False) |
Return Value
Returns a list[BaseTool] containing LangChain-compatible tool objects. Each tool's metadata includes annotations from the MCP tool definition.
Sources: langchain_mcp_adapters/tools.py:200-300
Integration with LangGraph
Complete Agent Setup
The following example demonstrates a full LangGraph agent setup using MCP tools:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition
from langchain.chat_models import init_chat_model
model = init_chat_model("openai:gpt-4.1")
client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            "args": ["./examples/math_server.py"],
            "transport": "stdio",
        },
        "weather": {
            "url": "http://localhost:8000/mcp",
            "transport": "http",
        },
    }
)
tools = await client.get_tools()

def call_model(state: MessagesState):
    response = model.bind_tools(tools).invoke(state["messages"])
    return {"messages": response}

builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "call_model")
builder.add_conditional_edges(
    "call_model",
    tools_condition,
)
# Continue with compile and execution
Sources: README.md:150-200
Workflow Diagram
graph LR
A[User Message] --> B[call_model Node]
B --> C{tools_condition}
C -->|END| D[Response to User]
C -->|tools| E[ToolNode]
E --> F[MCP Tool Execution]
F --> G[Tool Result]
G --> B
Tool Interceptors
Tool interceptors allow you to modify tool call requests and responses in an onion-pattern chain:
graph TD
A[Request] --> B[Interceptor 1]
B --> C[Interceptor 2]
C --> D[Interceptor N]
D --> E[Execute Tool]
E --> D
D --> C
C --> B
B --> F[Response]
Sources: langchain_mcp_adapters/interceptors.py:1-50
Creating a Custom Interceptor
from langchain_mcp_adapters.interceptors import (
    ToolCallInterceptor,
    MCPToolCallRequest,
    MCPToolCallResult,
)

async def logging_interceptor(
    request: MCPToolCallRequest,
    next_handler,
) -> MCPToolCallResult:
    print(f"Calling tool: {request.name} with args: {request.args}")
    result = await next_handler(request)
    print(f"Tool result: {result}")
    return result

client = MultiServerMCPClient(
    {...},
    tool_interceptors=[logging_interceptor],
)
Resource Loading
The library also supports loading MCP resources as LangChain Blob objects:
from langchain_mcp_adapters.resources import load_mcp_resources
# Load all resources
blobs = await load_mcp_resources(session)
# Load specific resources
blobs = await load_mcp_resources(session, uris=["resource://file1", "resource://file2"])
# Load single resource
from langchain_mcp_adapters.resources import get_mcp_resource
blob = await get_mcp_resource(session, "resource://document")
Sources: langchain_mcp_adapters/resources.py:1-80
Creating an MCP Server
For testing, you can create a simple MCP server using FastMCP:
# math_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run()
Sources: README.md:50-100
HTTP Server Setup
For remote access, use the provided streamable HTTP server example:
cd examples/servers/streamable-http-stateless/
uv run mcp-simple-streamablehttp-stateless --port 3000
This starts a stateless HTTP server on port 3000 that can be accessed via the streamablehttp_client.
Sources: examples/servers/streamable-http-stateless/mcp_simple_streamablehttp_stateless/__main__.py:1-10
Response Format
All tool calls return results in the content_and_artifact format:
| Component | Type | Description |
|---|---|---|
content | list[ToolMessageContentBlock] | Primary tool response content |
artifact | MCPToolArtifact | Structured data from MCP tool (if any) |
Sources: langchain_mcp_adapters/tools.py:50-120
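Schematically, a tool call therefore produces a two-element tuple. The values below are hypothetical, and the artifact's exact shape depends on the tool's structured output:

```python
# Hypothetical content_and_artifact result from a weather tool call.
content = [{"type": "text", "text": "72 degrees and sunny"}]  # content blocks
artifact = {"temperature_f": 72, "conditions": "sunny"}       # structured data (shape tool-specific)
result = (content, artifact)
```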
Next Steps
- Explore the API Reference for detailed function signatures
- Review the example applications in the examples/ directory
- Integrate with LangGraph's streaming capabilities for real-time tool execution
Source: https://github.com/langchain-ai/langchain-mcp-adapters / Human Manual
System Architecture
Related topics: Package Structure, Tool Conversion, MultiServerMCPClient
Overview
The langchain-mcp-adapters library provides a lightweight wrapper that makes Anthropic Model Context Protocol (MCP) tools compatible with LangChain and LangGraph.
Sources: README.md
The library acts as a bridge between MCP servers and LangChain applications, enabling:
- Tool Conversion: Transform MCP tools into LangChain-compatible tools
- Multi-Server Support: Connect to multiple MCP servers simultaneously
- Resource Management: Convert MCP resources to LangChain Blob objects
- Prompt Integration: Load MCP prompts into LangChain format
- Interceptor Support: Customizable tool call interception and modification
Sources: langchain_mcp_adapters/__init__.py:1-10
Package Structure
Related topics: System Architecture, Tool Conversion
Overview
The langchain-mcp-adapters package provides a lightweight wrapper that makes Anthropic Model Context Protocol (MCP) tools compatible with LangChain and LangGraph. The package bridges MCP servers with LangChain applications by converting MCP tools, prompts, and resources into LangChain-compatible formats.
Sources: README.md
Package Architecture
The package follows a modular architecture with distinct responsibilities for each module:
graph TD
subgraph "langchain_mcp_adapters Package"
A["__init__.py<br/>Package Entry"] --> B["client.py<br/>MultiServerMCPClient"]
B --> C["sessions.py<br/>Connection Management"]
B --> D["tools.py<br/>Tool Conversion"]
B --> E["resources.py<br/>Resource Conversion"]
B --> F["prompts.py<br/>Prompt Loading"]
C --> G["callbacks.py<br/>Callback Handling"]
C --> H["interceptors.py<br/>Tool Call Interceptors"]
end
I["MCP Servers"] --> C
D --> J["LangChain Tools"]
E --> K["LangChain Blobs"]
F --> L["LangChain Prompts"]
Directory Structure
langchain_mcp_adapters/
├── __init__.py      # Package initialization and exports
├── client.py        # MultiServerMCPClient for managing multiple servers
├── tools.py         # MCP to LangChain tool conversion
├── resources.py     # MCP resource to Blob conversion
├── prompts.py       # MCP prompt loading
├── sessions.py      # Connection handling for different transports
├── callbacks.py     # Event and notification callbacks
└── interceptors.py  # Tool call interception and modification
Core Modules
1. tools.py – Tool Conversion
The tools.py module handles conversion of MCP tools to LangChain-compatible tools.
| Component | Purpose |
|---|---|
load_mcp_tools() | Loads all available MCP tools and converts them to LangChain tools |
_convert_mcp_content_to_lc_block() | Converts MCP content blocks (Text, Image, Audio, Resource) to LangChain content blocks |
_convert_call_tool_result() | Converts MCP CallToolResult to LangChain tool result format |
MCPToolArtifact | TypedDict wrapping structured content from MCP tool calls |
Key Type Definitions:
ToolMessageContentBlock = TextContentBlock | ImageContentBlock | FileContentBlock
ConvertedToolResult = list[ToolMessageContentBlock] | ToolMessage | Command # if langgraph installed
Sources: langchain_mcp_adapters/tools.py:1-150
2. client.py – MultiServerMCPClient
The client.py module provides the MultiServerMCPClient class for managing connections to multiple MCP servers.
| Parameter | Type | Description |
|---|---|---|
connections | dict[str, Connection] | Dictionary mapping server names to connection configurations |
callbacks | Callbacks | Optional callbacks for handling notifications |
tool_interceptors | list[ToolCallInterceptor] | Optional interceptors for tool call processing |
tool_name_prefix | bool | Prefix tool names with server name (default: False) |
Supported Connection Configurations:
The client supports multiple transport types with their respective parameters:
| Transport | Required Parameters |
|---|---|
stdio | command, args |
http | url |
sse | url, optional headers |
streamable_http | url, optional headers |
websocket | url |
Sources: langchain_mcp_adapters/client.py:1-100
3. sessions.py – Connection Management
The sessions.py module handles connection management for different MCP transport types.
| Connection Type | Class | Purpose |
|---|---|---|
| Stdio | StdioConnection | stdio-based communication with subprocess |
| HTTP | McpHttpClientFactory, StreamableHttpConnection | HTTP-based communication |
| SSE | SSEConnection | Server-Sent Events transport |
| WebSocket | WebsocketConnection | WebSocket-based communication |
Session Creation Flow:
graph TD
A["create_session()"] --> B{"Connection Type?"}
B -->|Stdio| C["_create_stdio_session()"]
B -->|HTTP| D["_create_http_session()"]
B -->|SSE| E["_create_sse_session()"]
B -->|WebSocket| F["_create_websocket_session()"]
C --> G["ClientSession"]
D --> G
E --> G
F --> G
The create_session() function returns an async generator that yields an initialized ClientSession:
@asynccontextmanager
async def create_session(connection: Connection) -> AsyncIterator[ClientSession]:
Environment Variable Expansion:
Sessions support environment variable expansion in configuration values using ${VAR} or ${VAR:default} syntax.
Sources: langchain_mcp_adapters/sessions.py:1-100
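A minimal sketch of that `${VAR}` / `${VAR:default}` rule (illustrative only, not the library's implementation):

```python
import os
import re

def expand_env(value: str) -> str:
    """Expand ${VAR} and ${VAR:default} placeholders from the environment."""
    def replace(match: re.Match) -> str:
        name, _, default = match.group(1).partition(":")
        return os.environ.get(name, default)
    return re.sub(r"\$\{([^}]+)\}", replace, value)
```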
4. resources.py – Resource Conversion
The resources.py module converts MCP resources into LangChain Blob objects.
| Function | Purpose |
|---|---|
convert_mcp_resource_to_langchain_blob() | Converts a single MCP resource content to a Blob |
get_mcp_resource() | Fetches a single MCP resource by URI |
load_mcp_resources() | Loads multiple MCP resources and converts them to Blobs |
Supported Content Types:
| MCP Type | Conversion |
|---|---|
TextResourceContents | Raw text data |
BlobResourceContents | Base64-decoded binary data |
Sources: langchain_mcp_adapters/resources.py:1-80
5. prompts.py – Prompt Loading
The prompts.py module handles loading MCP prompts into LangChain prompt formats. The module provides functionality to convert MCP prompt definitions into LangChain-compatible prompt structures.
Sources: langchain_mcp_adapters/prompts.py:1-50
6. callbacks.py – Callback Handling
The callbacks.py module provides callback infrastructure for handling notifications and events during MCP operations.
| Component | Purpose |
|---|---|
Callbacks | Main callback container class |
CallbackContext | Context passed to callbacks with server/tool information |
The CallbackContext dataclass holds:
@dataclass
class CallbackContext:
    server_name: str | None = None
    tool_name: str | None = None
Sources: langchain_mcp_adapters/callbacks.py:1-60
7. interceptors.py – Tool Call Interceptors
The interceptors.py module provides interceptor interfaces for wrapping and controlling MCP tool call execution.
| Component | Purpose |
|---|---|
ToolCallInterceptor | Protocol for intercepting tool calls |
MCPToolCallRequest | Request object passed to interceptors |
_build_interceptor_chain() | Builds composed handler chain with interceptors in onion pattern |
Interceptor Pattern:
graph TD
A["Request"] --> B["Interceptor 1<br/>(Outer Layer)"]
B --> C["Interceptor 2"]
C --> D["..."]
D --> E["Interceptor N"]
E --> F["execute_tool<br/>(Innermost)"]
F --> G["Result"]
G --> E
G --> D
G --> C
G --> B
G --> H["Response"]
The interceptor chain follows an onion pattern where each interceptor wraps the next, allowing pre-processing before and post-processing after tool execution.
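A synchronous sketch of such a chain builder; the real `_build_interceptor_chain()` is async and works with the library's request types, so this only illustrates the wrapping order:

```python
def build_chain(interceptors, base_handler):
    """Wrap base_handler so interceptors[0] becomes the outermost layer."""
    handler = base_handler
    # Iterate in reverse so the first interceptor ends up on the outside,
    # matching the onion diagram above.
    for interceptor in reversed(interceptors):
        def wrapped(request, _icpt=interceptor, _next=handler):
            return _icpt(request, _next)
        handler = wrapped
    return handler
```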
MCPToolCallRequest Structure:
@dataclass
class MCPToolCallRequest:
    name: str
    args: dict[str, Any]
    server_name: str
    headers: dict[str, Any] | None
    runtime: Any
Result Type (Conditional):
if LANGGRAPH_PRESENT:
    MCPToolCallResult = CallToolResult | ToolMessage | Command
else:
    MCPToolCallResult = CallToolResult | ToolMessage
Sources: langchain_mcp_adapters/interceptors.py:1-80
Data Flow Architecture
Tool Execution Flow
sequenceDiagram
participant User
participant MultiServerMCPClient
participant load_mcp_tools
participant ToolCallInterceptor
participant ClientSession
participant MCPServer
User->>MultiServerMCPClient: get_tools()
MultiServerMCPClient->>load_mcp_tools: load_mcp_tools(session)
load_mcp_tools->>load_mcp_tools: Create StructuredTool
Note over load_mcp_tools: Register call_tool coroutine
User->>StructuredTool: invoke(args)
StructuredTool->>load_mcp_tools: call_tool(args)
alt With Interceptors
load_mcp_tools->>ToolCallInterceptor: intercept(request)
ToolCallInterceptor->>ToolCallInterceptor: modify/validate
end
load_mcp_tools->>ClientSession: call_tool(name, args)
ClientSession->>MCPServer: MCP CallToolRequest
MCPServer-->>ClientSession: CallToolResult
ClientSession-->>load_mcp_tools: CallToolResult
alt Error Result
load_mcp_tools->>load_mcp_tools: Check isError flag
load_mcp_tools->>ToolException: raise
end
load_mcp_tools->>_convert_call_tool_result: format result
Note over load_mcp_tools: Convert content blocks to LC format
load_mcp_tools-->>User: (content, artifact)
Content Conversion Flow
graph LR
subgraph "MCP Content Types"
A["TextContent"]
B["ImageContent"]
C["AudioContent"]
D["ResourceLink"]
E["EmbeddedResource"]
end
subgraph "Conversion Functions"
F["_convert_mcp_content_to_lc_block"]
end
subgraph "LangChain Content Blocks"
G["TextContentBlock"]
H["ImageContentBlock"]
I["FileContentBlock"]
end
A --> F
B --> F
D --> F
E --> F
C -.->|NotImplementedError| F
F --> G
F --> H
F --> I
Type System
Conditional Type Definitions
The package uses conditional type definitions based on whether langgraph is installed:
try:
from langgraph.types import Command
LANGGRAPH_PRESENT = True
except ImportError:
LANGGRAPH_PRESENT = False
| Type | Without langgraph | With langgraph |
|---|---|---|
| `ConvertedToolResult` | `list[ToolMessageContentBlock] \| ToolMessage` | `list[ToolMessageContentBlock] \| ToolMessage \| Command` |
| `MCPToolCallResult` | `CallToolResult \| ToolMessage` | `CallToolResult \| ToolMessage \| Command` |
Error Handling
Tool Exceptions
| Error Type | Trigger | Behavior |
|---|---|---|
| `ToolException` | MCP tool returns `isError: true` | Raised with a joined error message from the content blocks |
| `NotImplementedError` | `AudioContent` conversion attempted | Audio content is not yet supported |
| `ValueError` | Unknown content type | Unknown MCP content types raise `ValueError` |
Connection Errors
| Error Type | Condition |
|---|---|
| `ValueError` | Neither `session` nor `connection` provided to `load_mcp_tools()` |
Configuration Options
Tool Loading Options
| Parameter | Type | Default | Description |
|---|---|---|---|
| `session` | `ClientSession` | `None` | MCP client session |
| `connection` | `Connection` | `None` | Connection config for a new session |
| `callbacks` | `Callbacks` | `None` | Event callbacks |
| `tool_interceptors` | `list[ToolCallInterceptor]` | `None` | Tool call interceptors |
| `server_name` | `str` | `None` | Server name for context |
| `tool_name_prefix` | `bool` | `False` | Prefix tool names with the server name |
Client Configuration
| Parameter | Type | Default | Description |
|---|---|---|---|
| `connections` | `dict[str, Connection]` | `{}` | Server connection configs |
| `callbacks` | `Callbacks` | `Callbacks()` | Default callbacks |
| `tool_interceptors` | `list[ToolCallInterceptor]` | `[]` | Default interceptors |
| `tool_name_prefix` | `bool` | `False` | Prefix tool names |
Dependencies
Required Dependencies
| Package | Purpose |
|---|---|
| `langchain-core` | LangChain core functionality and `BaseTool` |
| `mcp` | MCP client SDK |
| `pydantic` | Data validation and model creation |
Optional Dependencies
| Package | Feature |
|---|---|
| `langgraph` | LangGraph `Command` support, enhanced state management |
Package Exports
The __init__.py exports the main public API:
- `MultiServerMCPClient` - multi-server client class
- `load_mcp_tools` - tool loading function
- `load_mcp_resources` - resource loading function
- `load_mcp_prompt` - prompt loading function
- `Callbacks`, `CallbackContext` - callback infrastructure
- `ToolCallInterceptor` - interceptor protocol
- `Connection` - connection configuration types
Sources: README.md
Tool Conversion
Related topics: MultiServerMCPClient, Transport Types, Package Structure
Tool Conversion
Overview
Tool Conversion is the core mechanism that bridges MCP (Model Context Protocol) tools with LangChain tools, enabling interoperability between the MCP ecosystem and LangChain/LangGraph agents. This adapter transforms native MCP tool definitions into LangChain-compatible StructuredTool instances that can be used with LangChain agents and LangGraph state machines.
The conversion layer handles:
- Tool signature translation (MCP schema → LangChain Pydantic schema)
- Tool execution with proper session context
- Content block conversion (MCP content types → LangChain content blocks)
- Error handling and artifact wrapping
- Interceptor chain support for middleware patterns
Sources: langchain_mcp_adapters/tools.py:1-30
Architecture
graph TD
subgraph "MCP Layer"
MCPTool[MCP Tool Definition]
MCPToolCallResult[MCP CallToolResult]
end
subgraph "Adapter Layer"
convert_mcp_tool[convert_mcp_tool_to_langchain_tool]
load_mcp_tools[load_mcp_tools]
interceptor_chain[Interceptor Chain]
content_converter[_convert_mcp_content_to_lc_block]
result_converter[_convert_call_tool_result]
end
subgraph "LangChain Layer"
StructuredTool[StructuredTool]
ToolMessage[ToolMessage]
Command[Command<br/>langgraph.types]
MCPToolArtifact[MCPToolArtifact]
end
MCPTool --> convert_mcp_tool
MCPTool --> load_mcp_tools
load_mcp_tools --> convert_mcp_tool
convert_mcp_tool --> interceptor_chain
interceptor_chain --> content_converter
MCPToolCallResult --> result_converter
result_converter --> ToolMessage
result_converter --> Command
result_converter --> MCPToolArtifact
Conversion Flow
sequenceDiagram
participant Agent as LangChain Agent
participant LC_Tool as LangChain StructuredTool
participant Interceptor as ToolCallInterceptor
participant MCP_Session as MCP ClientSession
participant MCP_Server as MCP Server
Agent->>LC_Tool: invoke(name, args)
LC_Tool->>Interceptor: MCPToolCallRequest
Interceptor->>Interceptor: preprocess()
Interceptor->>MCP_Session: call_tool()
MCP_Session->>MCP_Server: protocol call
MCP_Server-->>MCP_Session: CallToolResult
MCP_Session-->>Interceptor: MCPToolCallResult
Interceptor->>Interceptor: postprocess()
Interceptor-->>LC_Tool: Converted Result
LC_Tool->>LC_Tool: _convert_call_tool_result()
LC_Tool-->>Agent: (content, artifact)
Sources: langchain_mcp_adapters/tools.py:140-220
Core Functions
load_mcp_tools
Loads all available MCP tools from a session and converts them to LangChain tools.
async def load_mcp_tools(
session: ClientSession | None,
*,
connection: Connection | None = None,
callbacks: Callbacks | None = None,
tool_interceptors: list[ToolCallInterceptor] | None = None,
server_name: str | None = None,
tool_name_prefix: bool = False,
) -> list[BaseTool]
Parameters:
| Parameter | Type | Default | Description |
|---|---|---|---|
| `session` | `ClientSession \| None` | required | MCP client session. If `None`, `connection` must be provided. |
| `connection` | `Connection \| None` | `None` | Connection config used to create a new session when `session` is `None`. |
| `callbacks` | `Callbacks \| None` | `None` | Optional callbacks for handling notifications and events. |
| `tool_interceptors` | `list[ToolCallInterceptor] \| None` | `None` | Optional list of interceptors for tool call processing. |
| `server_name` | `str \| None` | `None` | Name of the server these tools belong to. |
| `tool_name_prefix` | `bool` | `False` | If `True`, tool names are prefixed with the server name (e.g., `weather_search`). |
Sources: langchain_mcp_adapters/tools.py:219-270
convert_mcp_tool_to_langchain_tool
Converts a single MCP tool to a LangChain StructuredTool.
def convert_mcp_tool_to_langchain_tool(
session: ClientSession | None,
tool: MCPTool,
*,
connection: Connection | None = None,
callbacks: Callbacks | None = None,
tool_interceptors: list[ToolCallInterceptor] | None = None,
server_name: str | None = None,
tool_name_prefix: bool = False,
) -> BaseTool
Returns: A LangChain StructuredTool with response_format="content_and_artifact".
Key Implementation Details:
- Creates an async `call_tool` coroutine that handles execution
- Injects `runtime` via `InjectedToolArg` for LangGraph compatibility
- Supports a `ToolCallInterceptor` chain via `_build_interceptor_chain()`
- Wraps errors as `ToolException`
- Extracts `structuredContent` into `MCPToolArtifact`
Sources: langchain_mcp_adapters/tools.py:150-218
Content Block Conversion
The adapter converts MCP content types to LangChain content blocks for uniform handling.
Supported Conversions
| MCP Content Type | LangChain Content Block | Notes |
|---|---|---|
| `TextContent` | `{"type": "text", "text": ...}` | Direct text conversion |
| `ImageContent` | `{"type": "image", "base64": ..., "mime_type": ...}` | Base64-encoded image data |
| `ResourceLink` (image/*) | `{"type": "image", "url": ..., "mime_type": ...}` | Image via URI reference |
| `ResourceLink` (other) | `{"type": "file", "url": ..., "mime_type": ...}` | Generic file via URI reference |
| `EmbeddedResource` (text) | `{"type": "text", "text": ...}` | Text from embedded resource |
| `EmbeddedResource` (blob) | Image or file block | Based on MIME type |
| `AudioContent` | N/A | Raises `NotImplementedError` |
Sources: langchain_mcp_adapters/tools.py:70-115
_convert_mcp_content_to_lc_block
def _convert_mcp_content_to_lc_block(
content: ContentBlock,
) -> ToolMessageContentBlock
This function handles the 1:1 mapping between MCP content types and LangChain content blocks.
graph LR
A[ContentBlock] --> B{Type Check}
B -->|TextContent| C[create_text_block]
B -->|ImageContent| D[create_image_block]
B -->|ResourceLink| E{MIME type?}
B -->|EmbeddedResource| F{Resource Type?}
B -->|AudioContent| G[NotImplementedError]
E -->|image/*| H[create_image_block<br/>url=uri]
E -->|other| I[create_file_block<br/>url=uri]
F -->|TextResourceContents| J[create_text_block]
F -->|BlobResourceContents| K{MIME type?}
K -->|image/*| L[create_image_block]
K -->|other| M[create_file_block]
Sources: langchain_mcp_adapters/tools.py:70-115
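The dispatch above can be sketched with plain dicts. This is a simplified stand-in: the real `_convert_mcp_content_to_lc_block` operates on `mcp.types` content objects and `langchain_core` block constructors, while the field names here merely mimic the MCP JSON shapes.

```python
def convert_block(block: dict) -> dict:
    """Illustrative mapping of MCP content types to LangChain-style blocks."""
    kind = block["type"]
    if kind == "text":
        return {"type": "text", "text": block["text"]}
    if kind == "image":
        return {"type": "image", "base64": block["data"], "mime_type": block["mimeType"]}
    if kind == "resource_link":
        mime = block.get("mimeType", "")
        if mime.startswith("image/"):
            # Image referenced by URI rather than inlined as base64.
            return {"type": "image", "url": block["uri"], "mime_type": mime}
        return {"type": "file", "url": block["uri"], "mime_type": mime or None}
    if kind == "audio":
        raise NotImplementedError("Audio content is not yet supported")
    raise ValueError(f"Unknown content type: {kind}")

converted = convert_block({"type": "text", "text": "hello"})
```

The same type-then-MIME dispatch order shown in the diagram applies: content type first, then MIME type for resource links and embedded blobs.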
Result Conversion
_convert_call_tool_result
Converts the result of an MCP tool call to LangChain format with support for multiple return types.
def _convert_call_tool_result(
call_tool_result: MCPToolCallResult,
) -> tuple[ConvertedToolResult, MCPToolArtifact | None]
Return Types:
The function returns a tuple where:
- First element: The converted content
- Second element: The artifact (if any)
Content Types Based on Input:
| Input Type | Output Content | Output Artifact |
|---|---|---|
| `ToolMessage` | `ToolMessage` (passthrough) | `None` |
| `Command` (LangGraph) | `Command` (passthrough) | `None` |
| `CallToolResult` (MCP) | `list[ToolMessageContentBlock]` | `MCPToolArtifact` (if `structuredContent` present) |
Sources: langchain_mcp_adapters/tools.py:117-145
MCPToolArtifact
A TypedDict wrapping structured content from MCP tool calls:
class MCPToolArtifact(TypedDict):
"""Artifact returned from MCP tool calls."""
structured_content: dict[str, Any]
This allows downstream consumers to access MCP-specific structured data while maintaining compatibility with LangChain's tool result format.
Sources: langchain_mcp_adapters/tools.py:55-68
Interceptor Chain
The interceptor system implements the onion pattern for middleware-like processing of tool calls.
_build_interceptor_chain
def _build_interceptor_chain(
base_handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
tool_interceptors: list[ToolCallInterceptor] | None,
) -> Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]]
Execution Order:
- Interceptors are applied in reverse order (last in list = outermost layer)
- Each interceptor wraps the previous handler
- Request flows inward through interceptors, response flows outward
graph TD
subgraph "Request Flow (inward)"
R1[Request] --> I1[Interceptor 1<br/>outermost]
I1 --> I2[Interceptor 2]
I2 --> I3[Interceptor N<br/>innermost]
I3 --> BH[Base Handler<br/>execute_tool]
end
subgraph "Response Flow (outward)"
BH --> RT1[Response]
RT1 --> I4[Interceptor N]
I4 --> I5[Interceptor 2]
I5 --> I6[Interceptor 1]
I6 --> R2[Response]
end
Sources: langchain_mcp_adapters/tools.py:147-149
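The wrap-in-order composition can be sketched as follows. This is illustrative, not the library's actual `_build_interceptor_chain`: plain dicts and strings stand in for `MCPToolCallRequest` and `MCPToolCallResult`.

```python
import asyncio
from collections.abc import Awaitable, Callable

Handler = Callable[[dict], Awaitable[str]]
Interceptor = Callable[[dict, Handler], Awaitable[str]]


def build_chain(base_handler: Handler, interceptors: list[Interceptor]) -> Handler:
    """Compose interceptors around a base handler, onion-style."""
    handler = base_handler
    for interceptor in interceptors:
        # Each pass wraps the current handler, so the last interceptor in the
        # list ends up as the outermost layer.
        def wrapped(request: dict, icpt=interceptor, nxt=handler) -> Awaitable[str]:
            return icpt(request, nxt)
        handler = wrapped
    return handler


def make_tracer(name: str) -> Interceptor:
    async def tracer(request: dict, next_handler: Handler) -> str:
        request["trace"].append(f"{name}:pre")   # request flows inward
        result = await next_handler(request)
        request["trace"].append(f"{name}:post")  # response flows outward
        return result
    return tracer


async def execute_tool(request: dict) -> str:
    request["trace"].append("execute")
    return "ok"


chain = build_chain(execute_tool, [make_tracer("A"), make_tracer("B")])
request = {"trace": []}
result = asyncio.run(chain(request))
# trace is ['B:pre', 'A:pre', 'execute', 'A:post', 'B:post']
```

Note the ordering: because `B` is last in the list, it is outermost, so its pre-processing runs first and its post-processing runs last.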
ToolCallInterceptor Interface
Interceptors implement the ToolCallInterceptor protocol:
@runtime_checkable
class ToolCallInterceptor(Protocol):
async def intercept(
self,
request: MCPToolCallRequest,
current_handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
) -> MCPToolCallResult:
...
Usage Pattern:
class MyInterceptor:
async def intercept(
self,
request: MCPToolCallRequest,
current_handler: Callable,
) -> MCPToolCallResult:
# Pre-processing
modified_request = request.override(args={"modified": True})
# Call next handler
result = await current_handler(modified_request)
# Post-processing
return result
Sources: langchain_mcp_adapters/interceptors.py:1-50
Type Definitions
ConvertedToolResult
Conditional type based on LangGraph availability:
if LANGGRAPH_PRESENT:
ConvertedToolResult = list[ToolMessageContentBlock] | ToolMessage | Command
else:
ConvertedToolResult = list[ToolMessageContentBlock] | ToolMessage
ToolMessageContentBlock
ToolMessageContentBlock = TextContentBlock | ImageContentBlock | FileContentBlock
These content block types are imported from `langchain_core.messages.content`.
Sources: langchain_mcp_adapters/tools.py:15-35
Configuration Options
Tool Name Prefixing
When connecting to multiple MCP servers, tools may have name conflicts. Enable prefixing:
client = MultiServerMCPClient(
{
"math": {
"command": "python",
"args": ["/path/to/math_server.py"],
"transport": "stdio",
},
"weather": {
"url": "http://localhost:8000/mcp",
"transport": "http",
}
}
)
# With prefix: tool names become "math_add", "weather_get_weather"
tools = await client.get_tools(tool_name_prefix=True)
Session Management
| Mode | Description | Use Case |
|---|---|---|
| Shared Session | Single session for all tools | Single server, multiple tools |
| Per-Tool Session | New session created per call | Stateless servers |
| Explicit Session | User-managed session | Custom lifecycle control |
Sources: langchain_mcp_adapters/client.py:1-80
Error Handling
ToolException
Tool call errors are wrapped in ToolException:
if call_tool_result.isError:
error_parts = []
for item in tool_content:
if isinstance(item, str):
error_parts.append(item)
elif isinstance(item, dict) and item.get("type") == "text":
error_parts.append(item.get("text", ""))
error_msg = "\n".join(error_parts) if error_parts else str(tool_content)
raise ToolException(error_msg)
Sources: langchain_mcp_adapters/tools.py:130-140
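A self-contained version of the joining behavior shown above, with a stand-in `ToolException` so the sketch does not depend on `langchain_core` being installed:

```python
class ToolException(Exception):
    """Stand-in for langchain_core.tools.ToolException."""


def raise_tool_error(tool_content: list) -> None:
    # Join plain strings and text blocks into one message; non-text blocks
    # (e.g. images) are skipped, matching the snippet above.
    error_parts = []
    for item in tool_content:
        if isinstance(item, str):
            error_parts.append(item)
        elif isinstance(item, dict) and item.get("type") == "text":
            error_parts.append(item.get("text", ""))
    error_msg = "\n".join(error_parts) if error_parts else str(tool_content)
    raise ToolException(error_msg)
```

If no text parts are present at all, the raw content is stringified so the error is never empty.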
Usage Examples
Basic Tool Loading
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client
from langchain_mcp_adapters.tools import load_mcp_tools
server_params = StdioServerParameters(
command="python",
args=["/path/to/math_server.py"],
)
async with stdio_client(server_params) as (read, write):
async with ClientSession(read, write) as session:
await session.initialize()
tools = await load_mcp_tools(session)
With LangGraph Agent
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition
client = MultiServerMCPClient({
"math": {
"command": "python",
"args": ["/path/to/math_server.py"],
"transport": "stdio",
}
})
tools = await client.get_tools()
builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "call_model")
builder.add_conditional_edges("call_model", tools_condition)
Sources: README.md:1-100
See Also
- MultiServerMCPClient β Client for connecting to multiple MCP servers
- Tool Call Interceptors β Middleware for tool call processing
- Resource Conversion β Converting MCP resources to LangChain Blobs
Sources: langchain_mcp_adapters/tools.py:1-30
MultiServerMCPClient
Related topics: Tool Conversion, Transport Types, Callbacks
MultiServerMCPClient
The MultiServerMCPClient is the primary entry point for connecting LangChain applications to multiple Model Context Protocol (MCP) servers. It provides a unified interface to manage connections, load tools, resources, and prompts from various MCP server implementations.
Overview
MultiServerMCPClient serves as a central client that abstracts the complexity of connecting to multiple MCP servers simultaneously. It handles session management, tool conversion, and integrates seamlessly with LangChain and LangGraph agents.
graph TD
A[MultiServerMCPClient] --> B[Connection Manager]
B --> C[StdioConnection]
B --> D[SSEConnection]
B --> E[StreamableHttpConnection]
B --> F[WebsocketConnection]
G[load_mcp_tools] --> H[LangChain Tools]
I[load_mcp_resources] --> J[LangChain Blobs]
K[load_mcp_prompts] --> L[LangChain Messages]
Initialization
Constructor Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| `connections` | `dict[str, Connection] \| None` | `None` | Mapping of server names to connection configurations |
| `callbacks` | `Callbacks \| None` | `None` | Optional callbacks for notifications and events |
| `tool_interceptors` | `list[ToolCallInterceptor] \| None` | `None` | Optional interceptors for modifying tool requests/responses |
| `tool_name_prefix` | `bool` | `False` | Prefix tool names with the server name to avoid conflicts |
Connection Configuration
Each server in the connections dictionary requires a transport-specific configuration:
| Transport | Required Parameters |
|---|---|
| `stdio` | `command`, `args` |
| `http` | `url` |
| `sse` | `url` |
| `streamable_http` | `url` |
| `websocket` | `url` |
Sources: client.py:51-76
Connection Types
The library supports multiple transport protocols for connecting to MCP servers.
StdioConnection
Used for spawning local MCP server processes via standard I/O.
client = MultiServerMCPClient(
{
"math": {
"command": "python",
"args": ["/path/to/math_server.py"],
"transport": "stdio",
}
}
)
Sources: README.md:82-90
HTTP/Streamable HTTP Connection
Used for connecting to HTTP-based MCP servers, including stateless streamable HTTP servers.
client = MultiServerMCPClient(
{
"weather": {
"url": "http://localhost:8000/mcp",
"transport": "http",
}
}
)
Sources: README.md:37-45
WebSocket Connection
For WebSocket-based MCP server connections.
SSE Connection
Server-Sent Events transport for MCP server communication.
Usage Patterns
Basic Usage with get_tools()
The simplest pattern starts a new session for each tool call:
from langchain_mcp_adapters.client import MultiServerMCPClient
client = MultiServerMCPClient(
{
"math": {
"command": "python",
"args": ["/path/to/math_server.py"],
"transport": "stdio",
},
"weather": {
"url": "http://localhost:8000/mcp",
"transport": "http",
}
}
)
all_tools = await client.get_tools()
Sources: client.py:51-74
Explicit Session Management
For more control, use explicit session management:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.tools import load_mcp_tools
client = MultiServerMCPClient({...})
async with client.session("math") as session:
tools = await load_mcp_tools(session)
Sources: client.py:75-81
With LangGraph StateGraph
Integration with LangGraph for agent-based workflows:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition
from langchain.chat_models import init_chat_model
model = init_chat_model("openai:gpt-4.1")
client = MultiServerMCPClient({...})
tools = await client.get_tools()
def call_model(state: MessagesState):
response = model.bind_tools(tools).invoke(state["messages"])
return {"messages": response}
builder = StateGraph(MessagesState)
builder.add_node(call_model)
builder.add_node(ToolNode(tools))
builder.add_edge(START, "call_model")
builder.add_conditional_edges("call_model", tools_condition)
Sources: README.md:103-126
Tool Name Prefixing
When tool_name_prefix=True, tool names are prefixed with the server name using an underscore separator:
# With prefix: "weather_search"
# Without prefix: "search"
client = MultiServerMCPClient(
{...},
tool_name_prefix=True
)
This helps avoid conflicts when multiple servers expose tools with identical names.
Sources: client.py:48-51
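The naming rule above reduces to a one-liner; this helper is a hypothetical illustration, not a function exported by the library:

```python
def prefixed_tool_name(server_name: str, tool_name: str, *, tool_name_prefix: bool) -> str:
    # "<server>_<tool>" with an underscore separator when prefixing is
    # enabled, otherwise the bare tool name.
    return f"{server_name}_{tool_name}" if tool_name_prefix else tool_name
```

Keep in mind that enabling the prefix changes the tool names the model sees, so prompts or few-shot examples referring to bare tool names may need updating.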
Runtime Headers
For HTTP and SSE transports, you can pass custom headers for authentication or tracing:
client = MultiServerMCPClient(
{
"weather": {
"transport": "http",
"url": "http://localhost:8000/mcp",
"headers": {
"Authorization": "Bearer YOUR_TOKEN",
"X-Custom-Header": "custom-value"
},
}
}
)
Only `sse` and `http` transports support runtime headers.
Sources: README.md:129-152
Tool Interceptors
Tool call interceptors allow you to modify requests and responses in an onion-pattern chain:
from langchain_mcp_adapters.interceptors import (
MCPToolCallRequest,
MCPToolCallResult,
ToolCallInterceptor
)
class CustomInterceptor(ToolCallInterceptor):
async def intercept(
self, request: MCPToolCallRequest, handler
) -> MCPToolCallResult:
# Modify request
modified_request = request.override(args={"modified": True})
# Process and potentially modify response
result = await handler(modified_request)
return result
client = MultiServerMCPClient(
{...},
tool_interceptors=[CustomInterceptor()]
)
Sources: interceptors.py:1-55
MCPToolArtifact
Tool call results that include structured content are wrapped in an MCPToolArtifact:
class MCPToolArtifact(TypedDict):
"""Artifact returned from MCP tool calls.
Attributes:
structured_content: The structured content returned by the MCP tool,
corresponding to the structuredContent field in CallToolResult.
"""
structured_content: dict[str, Any]
Sources: tools.py:70-84
Content Conversion
The library automatically converts MCP content blocks to LangChain content blocks:
| MCP Type | LangChain Type |
|---|---|
| `TextContent` | `{"type": "text", "text": ...}` |
| `ImageContent` | `{"type": "image", ...}` |
| `ResourceLink` | `{"type": "image"}` or `{"type": "file"}` |
| `EmbeddedResource` | `{"type": "text"}`, `{"type": "image"}`, or `{"type": "file"}` |
| `AudioContent` | Raises `NotImplementedError` |
Sources: tools.py:86-126
Limitations
Async Context Manager Deprecation
As of version 0.1.0, MultiServerMCPClient cannot be used as an async context manager:
# This is NOT allowed:
# async with MultiServerMCPClient(...) as client:
# ...
# Instead use:
client = MultiServerMCPClient(...)
tools = await client.get_tools()
Sources: client.py:55-68
Architecture
sequenceDiagram
participant Client as MultiServerMCPClient
participant Session as ClientSession
participant Loader as load_mcp_tools
participant Converter as Content Converter
participant LC as LangChain Tool
Client->>Session: create_session()
Session->>Loader: session.list_tools()
Loader->>Session: tool definitions
Session-->>Converter: Tool metadata
Converter->>LC: StructuredTool
LC-->>Client: BaseTool list
See Also
- load_mcp_tools() - Loading and converting MCP tools
- load_mcp_resources() - Loading MCP resources as Blobs
- load_mcp_prompts() - Loading MCP prompts as Messages
- ToolCallInterceptor - Intercepting tool calls
Source: https://github.com/langchain-ai/langchain-mcp-adapters / Human Manual
Transport Types
Related topics: MultiServerMCPClient
Transport Types
LangChain MCP Adapters supports multiple transport types for connecting to MCP (Model Context Protocol) servers. Transport types define the communication mechanism used between the client and server, enabling flexibility in different deployment scenarios.
Overview
Transport types in langchain-mcp-adapters determine how MCP client sessions communicate with MCP servers. The library provides native support for four primary transport mechanisms, each suited for different use cases ranging from local development to production deployments.
graph TD
A[MultiServerMCPClient] --> B{Transport Type}
B --> C[stdio]
B --> D[http]
B --> E[sse]
B --> F[websocket]
C --> G[Local/Subprocess]
D --> H[HTTP Server]
E --> I[HTTP + SSE Events]
F --> J[WebSocket Server]
G --> K[StdioServerParameters]
H --> L[URL + Headers]
I --> M[URL + Headers]
J --> N[URL + Headers]
Sources: langchain_mcp_adapters/client.py:1-50
Supported Transport Types
| Transport | Use Case | Session Creation | Header Support | Timeout Config |
|---|---|---|---|---|
| `stdio` | Local subprocesses, development | In-process via stdin/stdout | N/A | Encoding handlers |
| `http` | Remote HTTP servers, stateless | Streamable HTTP client | Yes | Request timeout |
| `sse` | Server-Sent Events servers | HTTP + SSE endpoint | Yes | SSE read timeout |
| `websocket` | Real-time bidirectional | WebSocket connection | Yes | Connection timeout |
Sources: langchain_mcp_adapters/sessions.py:1-100
Stdio Transport
The stdio transport uses standard input/output streams for communication. This is ideal for running MCP servers as local subprocesses or when the server runs on the same machine as the client.
Configuration Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `command` | `str` | Yes | Executable command (e.g., `"python"`, `"node"`) |
| `args` | `list[str]` | Yes | Command-line arguments |
| `env` | `dict[str, str]` | No | Environment variables |
| `cwd` | `str` | No | Working directory |
| `encoding` | `str` | No | Character encoding (default: system default) |
| `encoding_error_handler` | `str` | No | How to handle encoding errors |
| `session_kwargs` | `dict` | No | Additional `ClientSession` arguments |
Sources: langchain_mcp_adapters/sessions.py:60-90
Environment Variable Expansion
The env parameter supports environment variable expansion in variable values:
env = {
"API_KEY": "${MY_API_KEY}", # Expands from current environment
"STATIC": "custom-value" # Passed through unchanged
}
Variable references use the pattern ${VAR_NAME}. Only values (not keys) are expanded. Unexpanded references trigger a warning.
Sources: langchain_mcp_adapters/sessions.py:80-85
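A minimal sketch of the expansion behavior described above (illustrative only, not the library's implementation): values are scanned for `${VAR_NAME}` references, resolved from the current environment, and unresolved references are left intact with a warning.

```python
import os
import re
import warnings

_VAR = re.compile(r"\$\{(\w+)\}")


def expand_env(env: dict[str, str]) -> dict[str, str]:
    """Expand ${VAR} references in values only; keys pass through unchanged."""
    def resolve(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            warnings.warn(f"Unresolved environment variable: {name}")
            return match.group(0)  # leave the reference as-is
        return os.environ[name]

    return {key: _VAR.sub(resolve, value) for key, value in env.items()}


os.environ["MY_API_KEY"] = "secret"
expanded = expand_env({"API_KEY": "${MY_API_KEY}", "STATIC": "custom-value"})
```

Note that only the values are expanded; a `${...}` pattern appearing in a key would be passed through untouched.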
Example: Stdio Connection
from langchain_mcp_adapters.client import MultiServerMCPClient
client = MultiServerMCPClient({
"math": {
"command": "python",
"args": ["/path/to/math_server.py"],
"transport": "stdio",
}
})
tools = await client.get_tools()
Sources: README.md:80-100
HTTP Transport
The http transport connects to MCP servers via HTTP protocol. This is designed for remote server deployments and supports stateless request/response patterns.
Configuration Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `url` | `str` | Yes | Full URL to the MCP server endpoint |
| `headers` | `dict[str, str]` | No | HTTP headers sent with each request |
| `timeout` | `float` | No | Request timeout in seconds (default: 60.0) |
Header Support
HTTP transport supports runtime headers, enabling dynamic authentication and authorization:
from langchain_mcp_adapters.client import MultiServerMCPClient
client = MultiServerMCPClient({
"weather": {
"url": "http://localhost:8000/mcp",
"transport": "http",
"headers": {
"Authorization": "Bearer ${API_TOKEN}",
"X-Custom-Header": "custom-value"
}
}
})
Only `sse` and `http` transports support runtime headers.
Sources: README.md:110-130
Example: HTTP Connection
# Start a streamable HTTP server
cd examples/servers/streamable-http-stateless/
uv run mcp-simple-streamablehttp-stateless --port 3000
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
from langchain_mcp_adapters.tools import load_mcp_tools
async with streamablehttp_client("http://localhost:3000/mcp") as (read, write, _):
async with ClientSession(read, write) as session:
await session.initialize()
tools = await load_mcp_tools(session)
Sources: README.md:35-55
SSE Transport
SSE (Server-Sent Events) transport combines HTTP requests with server-side event streaming. This is useful when the MCP server needs to push updates or progress notifications to the client.
Configuration Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `url` | `str` | Yes | Full URL to the MCP server SSE endpoint |
| `headers` | `dict[str, str]` | No | HTTP headers sent with each request |
| `sse_read_timeout` | `float` | No | SSE read timeout in seconds (default: 300.0) |
| `timeout` | `float` | No | HTTP request timeout (default: 60.0) |
Progress Callbacks
SSE transport enables progress callback functionality through the MCP client callbacks system:
from langchain_mcp_adapters.callbacks import Callbacks, CallbackContext
class CustomCallbacks(Callbacks):
async def progress_callback(self, progress_token: str, progress: dict) -> None:
print(f"Progress: {progress}")
Sources: langchain_mcp_adapters/tools.py:180-200
WebSocket Transport
WebSocket transport provides bidirectional real-time communication between the client and MCP server. This is suitable for applications requiring low-latency, persistent connections.
Configuration Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| `url` | `str` | Yes | WebSocket endpoint URL |
| `headers` | `dict[str, str]` | No | WebSocket handshake headers |
| `timeout` | `float` | No | Connection timeout |
Connection Factory
The Connection abstract class defines the common interface for all transport implementations:
classDiagram
class Connection {
<<abstract>>
+session_kwargs: dict
+server_name: str
+get_session() ClientSession
}
class StdioConnection {
+command: str
+args: list
+env: dict
+get_session() ClientSession
}
class StreamableHttpConnection {
+url: str
+headers: dict
+timeout: float
+get_session() ClientSession
}
class SSEConnection {
+url: str
+headers: dict
+timeout: float
+sse_read_timeout: float
+get_session() ClientSession
}
class WebsocketConnection {
+url: str
+headers: dict
+timeout: float
+get_session() ClientSession
}
Connection <|-- StdioConnection
Connection <|-- StreamableHttpConnection
Connection <|-- SSEConnection
Connection <|-- WebsocketConnectionSources: langchain_mcp_adapters/sessions.py:1-50
Session Creation
All transport types ultimately create an MCP `ClientSession` for tool execution:
from langchain_mcp_adapters.sessions import create_session
# Direct session creation
async with create_session(connection) as session:
tools = await load_mcp_tools(session)
Sources: langchain_mcp_adapters/sessions.py:1-30
MultiServerMCPClient Session Management
# Explicitly starting a session
client = MultiServerMCPClient({
"math": {
"command": "python",
"args": ["/path/to/math_server.py"],
"transport": "stdio",
}
})
async with client.session("math") as session:
tools = await load_mcp_tools(session)
MultiServerMCPClient cannot be used as a context manager directly. Use client.session(server_name) for explicit session control.
Sources: langchain_mcp_adapters/client.py:1-60
Tool Name Prefixing
When using multiple servers with overlapping tool names, enable the tool_name_prefix option to avoid conflicts:
client = MultiServerMCPClient(
{
"math": {"transport": "stdio", ...},
"weather": {"transport": "http", "url": "http://localhost:8000/mcp"}
},
tool_name_prefix=True # Enables prefixed tool names
)
tools = await client.get_tools()
# Tool names: "math_add", "weather_search" (prefixed with server name)
Sources: langchain_mcp_adapters/client.py:30-45
Transport Selection Guide
graph TD
A[Select Transport] --> B{Deployment Type}
B --> C[Local/Subprocess]
C --> D[Use stdio]
B --> E[Remote Server]
E --> F{Need Real-time Events?}
F --> G[Yes]
G --> H[Use websocket]
F --> I[No]
I --> J{HTTP/1.1 or Streaming?}
J --> K[Streaming/SSE]
K --> L[Use sse]
J --> M[Request/Response]
M --> N[Use http]
Decision Matrix
| Scenario | Recommended Transport |
|---|---|
| Development, local testing | stdio |
| Production HTTP API | http |
| Server pushing events to client | sse |
| Bidirectional, low-latency needs | websocket |
| Fire-and-forget subprocess | stdio |
Timeout Configuration
Default Timeouts
| Transport | Parameter | Default Value |
|---|---|---|
| HTTP | timeout | 60.0 seconds |
| SSE | timeout | 60.0 seconds |
| SSE | sse_read_timeout | 300.0 seconds |
| WebSocket | timeout | Connection timeout |
Custom Timeout Example
from langchain_mcp_adapters.sessions import StreamableHttpConnection
connection = StreamableHttpConnection(
url="http://localhost:8000/mcp",
timeout=120.0, # 2 minute request timeout
)
Error Handling
Transport-specific errors may occur during session creation or tool execution:
Stdio Transport Errors
- Process startup failure: Check the `command` path and permissions
- Encoding errors: Configure `encoding` and `encoding_error_handler`
HTTP/SSE/WebSocket Transport Errors
- Connection timeout: Increase the `timeout` parameter
- SSE read timeout: Increase `sse_read_timeout` for long-running operations
- Header authentication failures: Verify header format and token validity
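Transient connection failures like the ones above are often worth retrying with backoff before surfacing an error. The helper below is a generic sketch, not part of the library: `open_session` is a hypothetical zero-argument coroutine standing in for whatever opens your session, and real code should re-raise non-transient errors immediately.

```python
import asyncio

async def open_with_retries(open_session, attempts: int = 3, base_delay: float = 0.01):
    # Retry a coroutine-returning session opener with exponential backoff.
    for attempt in range(attempts):
        try:
            return await open_session()
        except (ConnectionError, TimeoutError):
            if attempt == attempts - 1:
                raise  # out of retries; propagate the last error
            await asyncio.sleep(base_delay * 2 ** attempt)

# Demo with a flaky opener that fails twice, then succeeds.
calls = {"n": 0}

async def flaky_open():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient")
    return "session"

result = asyncio.run(open_with_retries(flaky_open))
```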
See Also
- MultiServerMCPClient - Multi-server connection management
- load_mcp_tools - Tool loading with transport
- Callbacks System - Progress and notification handling
Sources: [langchain_mcp_adapters/client.py:1-50]()
Callbacks
Related topics: Tool Call Interceptors, MultiServerMCPClient
Callbacks
The Callbacks system in langchain-mcp-adapters provides a mechanism for handling notifications, events, and progress updates during MCP tool execution. It acts as a bridge between the LangChain callback format and the MCP (Model Context Protocol) callback format, enabling developers to intercept and respond to tool call lifecycle events.
Overview
When working with MCP tools through langchain-mcp-adapters, callbacks serve several critical purposes:
- Progress Notification: Track long-running tool operations via progress callbacks
- Event Handling: Respond to notifications and events from the MCP server
- Context Propagation: Maintain context about which server and tool is being executed
- Lifecycle Integration: Integrate with LangChain's callback system for broader ecosystem compatibility
The callback system is primarily used in two contexts:
- When loading MCP tools via `load_mcp_tools()` or `convert_mcp_tool_to_langchain_tool()`
- When configuring the `MultiServerMCPClient` for multi-server tool aggregation
Sources: langchain_mcp_adapters/tools.py:1-30
Core Components
CallbackContext
The CallbackContext class provides context information about an ongoing tool call operation.
| Property | Type | Description |
|---|---|---|
| `server_name` | `str \| None` | Name of the MCP server handling the tool call |
| `tool_name` | `str \| None` | Name of the tool being executed |
Sources: langchain_mcp_adapters/callbacks.py, langchain_mcp_adapters/tools.py:55-62
Callbacks Class
The Callbacks class is the main abstraction for handling MCP events. It provides the interface that developers implement to receive notifications.
class Callbacks:
"""Handler for MCP notifications and events."""
def to_mcp_format(self, context: CallbackContext) -> _MCPCallbacks:
"""Convert to MCP-compatible callback format."""
...
Sources: langchain_mcp_adapters/callbacks.py Sources: langchain_mcp_adapters/tools.py:63-68
_MCPCallbacks Class
The internal _MCPCallbacks class wraps callbacks in the format expected by the MCP SDK.
| Property | Type | Description |
|---|---|---|
| `progress_callback` | `Callable \| None` | Callback for progress updates during tool execution |
Sources: langchain_mcp_adapters/callbacks.py
Architecture
graph TD
A[MultiServerMCPClient] --> B[Callbacks Instance]
A --> C[load_mcp_tools]
B --> D[to_mcp_format]
D --> E[_MCPCallbacks]
E --> F[session.call_tool]
C --> G[CallbackContext]
G --> D
H[MCP Server] --> I[Progress Updates]
    I --> F
Usage Patterns
Basic Usage with MultiServerMCPClient
The most common pattern is to pass a Callbacks instance to the MultiServerMCPClient:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.callbacks import Callbacks, CallbackContext, _MCPCallbacks
class MyCallbacks(Callbacks):
    def to_mcp_format(self, context: CallbackContext) -> _MCPCallbacks:
        # Custom callback handling
        return _MCPCallbacks(progress_callback=self.on_progress)
    async def on_progress(self, progress: float, total: float | None, message: str | None):
        print(f"Progress: {progress}/{total} - {message}")
client = MultiServerMCPClient(
{
"math": {
"command": "python",
"args": ["/path/to/math_server.py"],
"transport": "stdio",
},
},
callbacks=MyCallbacks()
)
Sources: langchain_mcp_adapters/client.py:40-60
Usage with load_mcp_tools
Callbacks can also be passed directly when loading tools from a session:
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
from langchain_mcp_adapters.tools import load_mcp_tools
async with streamablehttp_client("http://localhost:3000/mcp") as (read, write, _):
async with ClientSession(read, write) as session:
await session.initialize()
tools = await load_mcp_tools(
session,
callbacks=MyCallbacks(),
server_name="math_server"
)
Sources: langchain_mcp_adapters/tools.py:100-135
Usage with Tool Interceptors
Callbacks work alongside tool interceptors for advanced control over tool execution:
from collections.abc import Awaitable, Callable
from langchain_mcp_adapters.interceptors import ToolCallInterceptor, MCPToolCallRequest, MCPToolCallResult
class LoggingInterceptor(ToolCallInterceptor):
    async def __call__(
        self,
        request: MCPToolCallRequest,
        handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
    ) -> MCPToolCallResult:
        print(f"Calling tool: {request.name}")
        result = await handler(request)
        print(f"Tool result: {result}")
        return result
client = MultiServerMCPClient(
{...},
callbacks=MyCallbacks(),
tool_interceptors=[LoggingInterceptor()]
)
Sources: langchain_mcp_adapters/interceptors.py:1-50
Callback Flow in Tool Execution
sequenceDiagram
participant Client as MCP Client
participant Callbacks as Callbacks Handler
participant Session as ClientSession
participant Server as MCP Server
Client->>Callbacks: to_mcp_format(context)
Callbacks-->>Client: _MCPCallbacks
Client->>Session: call_tool(name, args, progress_callback)
Session->>Server: Execute Tool
Server-->>Session: Progress Update
Session->>Callbacks: progress_callback
Server-->>Session: Tool Result
    Session-->>Client: CallToolResult
CallbackContext Construction
The CallbackContext is constructed with server and tool information at different points in the execution flow:
| Function | Context Construction |
|---|---|
| `load_mcp_tools()` | Uses `server_name` from parameters |
| `convert_mcp_tool_to_langchain_tool()` | Uses both `server_name` and `tool.name` |
| `MultiServerMCPClient` | Passed through to all tool loading operations |
Sources: langchain_mcp_adapters/tools.py:70-80
Error Handling
When callbacks are not provided, the system uses a default _MCPCallbacks() instance:
mcp_callbacks = (
callbacks.to_mcp_format(context=CallbackContext(server_name=server_name, tool_name=tool.name))
if callbacks is not None
else _MCPCallbacks()
)
This ensures that tool execution continues normally even without custom callback handling.
Sources: langchain_mcp_adapters/tools.py:70-75
Integration with Tool Result Conversion
Callbacks are passed through the entire tool execution chain and are used when converting tool results back to LangChain format:
async def call_tool(...) -> tuple[ConvertedToolResult, MCPToolArtifact | None]:
mcp_callbacks = (
callbacks.to_mcp_format(
context=CallbackContext(server_name=server_name, tool_name=tool.name)
)
if callbacks is not None
else _MCPCallbacks()
)
# Execute with progress callback
call_tool_result = await session.call_tool(
tool_name,
tool_args,
progress_callback=mcp_callbacks.progress_callback,
)
Sources: langchain_mcp_adapters/tools.py:55-70
API Reference
Callbacks Class
class Callbacks:
"""Base class for handling MCP notifications and events."""
def to_mcp_format(self, context: CallbackContext) -> _MCPCallbacks:
"""Convert the callbacks to MCP-compatible format.
Args:
context: The callback context containing server and tool info.
Returns:
An _MCPCallbacks instance configured with appropriate handlers.
"""
...
_MCPCallbacks Class
@dataclass
class _MCPCallbacks:
"""Internal MCP-compatible callbacks wrapper."""
progress_callback: Callable | None = None
CallbackContext Class
@dataclass
class CallbackContext:
"""Context information for callback handlers."""
server_name: str | None = None
tool_name: str | None = None
Best Practices
- Always provide context: When constructing `CallbackContext`, include both `server_name` and `tool_name` for maximum observability.
- Handle None gracefully: The callback system is designed to work without callbacks, so ensure your code handles the default case.
- Combine with interceptors: For comprehensive tool call control, combine callbacks with tool interceptors.
- Thread-safe progress updates: Progress callbacks may be called from different tasks; ensure your handler is thread-safe or async-safe.
- Resource cleanup: When using callbacks that allocate resources, ensure proper cleanup in the client lifecycle.
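The thread-safety recommendation above can be sketched with an `asyncio.Lock` guarding shared state. This is an illustrative handler, not part of the library; it only assumes the `(progress, total, message)` callback signature shown earlier in this section.

```python
import asyncio

class SafeProgressRecorder:
    """Sketch of an async-safe progress handler for concurrent tool calls."""
    def __init__(self):
        self._lock = asyncio.Lock()
        self.updates: list[tuple[float, float | None, str | None]] = []

    async def on_progress(self, progress, total, message):
        # Serialize appends so concurrent callbacks can't interleave writes.
        async with self._lock:
            self.updates.append((progress, total, message))

async def demo():
    rec = SafeProgressRecorder()
    # Simulate progress callbacks arriving from several concurrent tasks.
    await asyncio.gather(*(rec.on_progress(i, 3, f"step {i}") for i in range(3)))
    return rec

rec = asyncio.run(demo())
```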
Summary
The Callbacks system in langchain-mcp-adapters provides a clean abstraction for handling MCP tool lifecycle events. By implementing the Callbacks class and its to_mcp_format() method, developers can:
- Monitor tool execution progress
- Handle notifications from MCP servers
- Integrate with LangChain's callback ecosystem
- Build custom logging, monitoring, and error handling for MCP tool calls
The system is designed to be optional (tools work with default callbacks when none are provided) while providing rich customization when needed.
Sources: [langchain_mcp_adapters/tools.py:1-30]()
Tool Call Interceptors
Related topics: Callbacks, Tool Conversion
Tool Call Interceptors
Overview
Tool Call Interceptors provide a mechanism to wrap and control MCP tool call execution in the langchain-mcp-adapters library. They enable developers to inject custom logic before and after tool calls, modify request parameters, handle responses, and implement cross-cutting concerns like logging, authentication, and caching.
The interceptor system follows the onion pattern (also known as decorator pattern or chain of responsibility), where each interceptor wraps the next one, allowing pre-processing and post-processing of tool calls in a composable way.
Architecture
High-Level Flow
graph TD
A[External Code] --> B[Interceptor Chain]
B --> C[Interceptor 1]
C --> D[Interceptor 2]
D --> E[...]
E --> F[execute_tool]
F --> G[MCP ClientSession.call_tool]
subgraph "Onion Layers (wrapping inward)"
B
C
D
E
    end
Component Diagram
classDiagram
class MCPToolCallRequest {
+str name
+dict args
+str server_name
+dict headers
+object runtime
+override() MCPToolCallRequest
}
class MCPToolCallResult {
<<Type Alias>>
CallToolResult | ToolMessage | Command
}
class ToolCallInterceptor {
<<Protocol>>
+async __call__(request, handler) MCPToolCallResult
}
class _build_interceptor_chain {
+build_composed_handler()
}
MCPToolCallRequest --> ToolCallInterceptor : passed to
    _build_interceptor_chain --> ToolCallInterceptor : composes
Core Data Models
MCPToolCallRequest
Represents a tool execution request passed to MCP tool call interceptors. Follows a flat namespace pattern rather than separating call data and context into nested objects.
| Field | Type | Modifiable | Description |
|---|---|---|---|
| `name` | `str` | Yes | Tool name to invoke |
| `args` | `dict[str, Any]` | Yes | Tool arguments as key-value pairs |
| `server_name` | `str` | No | Name of the MCP server handling the tool |
| `headers` | `dict[str, Any] \| None` | Yes | HTTP headers for applicable transports (SSE, HTTP) |
| `runtime` | `object \| None` | No | LangGraph runtime context (if any) |
Sources: interceptors.py:58-74
#### The override() Method
The MCPToolCallRequest class provides an immutable override() method that returns a new instance with specified attributes replaced:
def override(
self, **overrides: Unpack[_MCPToolCallRequestOverrides]
) -> MCPToolCallRequest:
This follows an immutable pattern, leaving the original request unchanged.
| Parameter | Type | Description |
|---|---|---|
| `name` | `str` | Tool name (optional) |
| `args` | `dict[str, Any]` | Tool arguments (optional) |
| `headers` | `dict[str, Any] \| None` | HTTP headers (optional) |
MCPToolCallResult
A type alias representing the possible return types from an interceptor:
| Type | Description |
|---|---|
| `CallToolResult` | MCP protocol result (standard MCP format) |
| `ToolMessage` | LangChain format message |
| `Command` | LangGraph `Command` (when langgraph is installed) |
if LANGGRAPH_PRESENT:
MCPToolCallResult = CallToolResult | ToolMessage | Command
else:
MCPToolCallResult = CallToolResult | ToolMessage
Sources: interceptors.py:29-36
ToolCallInterceptor Protocol
The ToolCallInterceptor is a runtime-checkable protocol that defines the interface for interceptor implementations:
@runtime_checkable
class ToolCallInterceptor(Protocol):
async def __call__(
self,
request: MCPToolCallRequest,
handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
) -> MCPToolCallResult:
...
| Parameter | Type | Description |
|---|---|---|
| `request` | `MCPToolCallRequest` | The tool call request to process |
| `handler` | `Callable` | The next handler in the chain (call it to continue execution) |
| Returns | `MCPToolCallResult` | The result of processing |
Sources: interceptors.py:42-49
Interceptor Pattern
Interceptors work by:
- Receiving the `request` and the `handler` callable
- Optionally modifying the request before passing it on
- Calling the `handler` to continue the chain
- Optionally modifying the result before returning
async def my_interceptor(
request: MCPToolCallRequest,
handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
) -> MCPToolCallResult:
# Pre-processing: modify request
modified_request = request.override(args={**request.args, "injected": True})
# Continue to next handler
result = await handler(modified_request)
# Post-processing: modify result
# ... do something with result ...
return result
Building the Interceptor Chain
The _build_interceptor_chain() function composes multiple interceptors into a single handler using the onion pattern:
def _build_interceptor_chain(
base_handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
tool_interceptors: list[ToolCallInterceptor] | None,
) -> Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]]:
| Parameter | Type | Description |
|---|---|---|
| `base_handler` | `Callable` | Innermost handler that executes the actual tool call |
| `tool_interceptors` | `list[ToolCallInterceptor] \| None` | List of interceptors to wrap around the handler |
Sources: tools.py:145-147
Execution Order
The first interceptor in the list becomes the outermost layer, with subsequent interceptors wrapping inward. This means:
- Interceptor at index 0 executes first (outermost)
- Interceptor at index 1 executes second
- And so on...
- The `base_handler` (actual tool execution) executes last (innermost)
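The ordering above can be demonstrated with a small standalone composition helper. This is a sketch of the onion pattern under the same conventions, not the library's actual `_build_interceptor_chain()` implementation.

```python
import asyncio
from functools import reduce

def build_chain(base_handler, interceptors):
    # Fold right-to-left so interceptors[0] ends up as the outermost layer.
    def wrap(handler, interceptor):
        async def wrapped(request):
            return await interceptor(request, handler)
        return wrapped
    return reduce(wrap, reversed(interceptors), base_handler)

order = []

def make_interceptor(label):
    async def interceptor(request, handler):
        order.append(f"{label}:before")   # pre-processing
        result = await handler(request)   # continue inward
        order.append(f"{label}:after")    # post-processing
        return result
    return interceptor

async def base(request):
    order.append("base")  # innermost: the actual tool call would happen here
    return "done"

chain = build_chain(base, [make_interceptor("i0"), make_interceptor("i1")])
result = asyncio.run(chain("req"))
# order is now: i0:before, i1:before, base, i1:after, i0:after
```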
graph LR
A[External Call] --> B["Interceptor[0]<br/>outermost"]
B --> C["Interceptor[1]"]
C --> D["Interceptor[2]"]
D --> E["..."]
E --> F["base_handler<br/>innermost"]
    F --> G[MCP call_tool]
Usage
Loading Tools with Interceptors
When loading MCP tools, you can provide a list of interceptors:
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.tools import load_mcp_tools
# Define your interceptor
class LoggingInterceptor:
async def __call__(self, request, handler):
print(f"Calling tool: {request.name}")
result = await handler(request)
print(f"Tool {request.name} completed")
return result
client = MultiServerMCPClient({
"math": {
"command": "python",
"args": ["./math_server.py"],
"transport": "stdio",
}
})
tools = await client.get_tools(
tool_interceptors=[LoggingInterceptor()]
)
Sources: tools.py:163-179
Individual Tool Conversion
You can also apply interceptors when converting individual tools:
from langchain_mcp_adapters.tools import convert_mcp_tool_to_langchain_tool
tool = convert_mcp_tool_to_langchain_tool(
session=session,
tool=mcp_tool,
tool_interceptors=[CustomInterceptor()],
server_name="my_server",
tool_name_prefix=True
)
Using Runtime Context
Interceptors have access to the runtime field, which contains LangGraph runtime context when used within a LangGraph graph:
class RuntimeAwareInterceptor:
async def __call__(self, request, handler):
if request.runtime:
# Access LangGraph runtime
pass
return await handler(request)
Example Interceptors
Authentication Interceptor
class AuthInterceptor:
def __init__(self, api_key: str):
self.api_key = api_key
async def __call__(self, request, handler):
# Inject auth headers
request = request.override(
headers={"Authorization": f"Bearer {self.api_key}"}
)
return await handler(request)
Caching Interceptor
import json
class CacheInterceptor:
    def __init__(self):
        self.cache = {}
    async def __call__(self, request, handler):
        # Serialize args deterministically; hashing a frozenset of items
        # fails when argument values are unhashable (e.g. nested dicts)
        cache_key = f"{request.name}:{json.dumps(request.args, sort_keys=True)}"
        if cache_key in self.cache:
            return self.cache[cache_key]
        result = await handler(request)
        self.cache[cache_key] = result
        return result
Request Modification Interceptor
class DefaultArgsInterceptor:
def __init__(self, defaults: dict[str, Any]):
self.defaults = defaults
async def __call__(self, request, handler):
# Merge defaults with provided args
merged_args = {**self.defaults, **request.args}
request = request.override(args=merged_args)
return await handler(request)
API Reference
Functions
#### _build_interceptor_chain()
def _build_interceptor_chain(
base_handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
tool_interceptors: list[ToolCallInterceptor] | None,
) -> Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]]:
Builds a composed handler chain with interceptors in onion pattern.
Parameters:
| Name | Type | Description |
|---|---|---|
| `base_handler` | `Callable` | Innermost handler executing the actual tool call |
| `tool_interceptors` | `list[ToolCallInterceptor] \| None` | Optional list of interceptors to wrap |
Returns: Composed handler with all interceptors applied
Sources: tools.py:145-175
Classes
#### MCPToolCallRequest
@dataclass
class MCPToolCallRequest:
name: str
args: dict[str, Any]
server_name: str
headers: dict[str, Any] | None = None
runtime: object | None = None
Sources: interceptors.py:58-74
#### ToolCallInterceptor
@runtime_checkable
class ToolCallInterceptor(Protocol):
async def __call__(
self,
request: MCPToolCallRequest,
handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
) -> MCPToolCallResult:
...
Sources: interceptors.py:42-49
Type Aliases
#### MCPToolCallResult
if LANGGRAPH_PRESENT:
MCPToolCallResult = CallToolResult | ToolMessage | Command
else:
MCPToolCallResult = CallToolResult | ToolMessage
Sources: interceptors.py:29-36
Best Practices
- Always call the handler: Interceptors should typically call `handler(request)` unless intentionally short-circuiting
- Immutability: Use `request.override()` to create modified requests instead of mutating the original
- Error handling: Wrap handler calls in try/except for proper error handling and logging
- Order matters: Place interceptors in the correct order as the first in the list is the outermost
- Type hints: Use type hints for better IDE support and type checking
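The error-handling recommendation above can be sketched as a dedicated interceptor. The fallback result and the fake request/handler in the demo are illustrative only, not the library's behavior.

```python
import asyncio
import logging

logger = logging.getLogger("mcp.interceptors")

class ErrorHandlingInterceptor:
    """Sketch of the try/except pattern from the best practices above."""
    async def __call__(self, request, handler):
        try:
            return await handler(request)
        except Exception:
            # Log the full traceback, then return a placeholder fallback result
            logger.exception("tool call %s failed", getattr(request, "name", "?"))
            return {"error": "tool call failed"}

# Demo with a fake handler that raises.
async def failing_handler(request):
    raise RuntimeError("boom")

class FakeRequest:
    name = "add"

result = asyncio.run(ErrorHandlingInterceptor()(FakeRequest(), failing_handler))
```

Placing this interceptor first in the list makes it the outermost layer, so it also catches exceptions raised by inner interceptors, not just the tool call itself.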
Limitations
- Interceptors cannot currently modify the `server_name` or `runtime` fields of `MCPToolCallRequest`, as they are context fields
- The interceptor system is designed for tool call interception; other MCP lifecycle events (like resource access) are not currently interceptable
- Runtime headers are only supported for `sse` and `http` transports
Sources: [interceptors.py:58-74]()
Doramagic Pitfall Log
Source-linked risks stay visible on the manual page so the preview does not read like a recommendation.
Doramagic extracted 16 source-linked risk signals. Review them before installing or handing real data to the project.
1. Installation risk: Prompts and Resources auto-discovery
- Severity: high
- Finding: Installation risk is backed by a source signal: Prompts and Resources auto-discovery. Treat it as a review item until the current version is checked.
- User impact: First-time setup may fail or require extra isolation and rollback planning.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/langchain-ai/langchain-mcp-adapters/issues/62
2. Installation risk: `MultiServerMCPClient.get_tools()` silently returns no tools when any single server fails to connect
- Severity: high
- Finding: Installation risk is backed by a source signal: `MultiServerMCPClient.get_tools()` silently returns no tools when any single server fails to connect. Treat it as a review item until the current version is checked.
- User impact: First-time setup may fail or require extra isolation and rollback planning.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/langchain-ai/langchain-mcp-adapters/issues/492
3. Project risk: Fix TypeError in resources.py and make __aexit__ an async coroutine in client.py
- Severity: high
- Finding: Project risk is backed by a source signal: Fix TypeError in resources.py and make __aexit__ an async coroutine in client.py. Treat it as a review item until the current version is checked.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/langchain-ai/langchain-mcp-adapters/issues/496
4. Installation risk: langchain-mcp-adapters==0.2.2
- Severity: medium
- Finding: Installation risk is backed by a source signal: langchain-mcp-adapters==0.2.2. Treat it as a review item until the current version is checked.
- User impact: First-time setup may fail or require extra isolation and rollback planning.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/langchain-ai/langchain-mcp-adapters/releases/tag/langchain-mcp-adapters%3D%3D0.2.2
5. Configuration risk: langchain-mcp-adapters==0.1.10
- Severity: medium
- Finding: Configuration risk is backed by a source signal: langchain-mcp-adapters==0.1.10. Treat it as a review item until the current version is checked.
- User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/langchain-ai/langchain-mcp-adapters/releases/tag/langchain-mcp-adapters%3D%3D0.1.10
6. Capability assumption: langchain-mcp-adapters==0.1.14
- Severity: medium
- Finding: Capability assumption is backed by a source signal: langchain-mcp-adapters==0.1.14. Treat it as a review item until the current version is checked.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/langchain-ai/langchain-mcp-adapters/releases/tag/langchain-mcp-adapters%3D%3D0.1.14
7. Capability assumption: README/documentation is current enough for a first validation pass.
- Severity: medium
- Finding: README/documentation is current enough for a first validation pass.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: capability.assumptions | github_repo:929158279 | https://github.com/langchain-ai/langchain-mcp-adapters | README/documentation is current enough for a first validation pass.
8. Project risk: langchain-mcp-adapters==0.1.12
- Severity: medium
- Finding: Project risk is backed by a source signal: langchain-mcp-adapters==0.1.12. Treat it as a review item until the current version is checked.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/langchain-ai/langchain-mcp-adapters/releases/tag/langchain-mcp-adapters%3D%3D0.1.12
9. Maintenance risk: langchain-mcp-adapters==0.2.0
- Severity: medium
- Finding: Maintenance risk is backed by a source signal: langchain-mcp-adapters==0.2.0. Treat it as a review item until the current version is checked.
- User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/langchain-ai/langchain-mcp-adapters/releases/tag/langchain-mcp-adapters%3D%3D0.2.0
10. Maintenance risk: langchain-mcp-adapters==0.2.0a1
- Severity: medium
- Finding: Maintenance risk is backed by a source signal: langchain-mcp-adapters==0.2.0a1. Treat it as a review item until the current version is checked.
- User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/langchain-ai/langchain-mcp-adapters/releases/tag/langchain-mcp-adapters%3D%3D0.2.0a1
11. Maintenance risk: Maintainer activity is unknown
- Severity: medium
- Finding: Maintenance risk is backed by a source signal: Maintainer activity is unknown. Treat it as a review item until the current version is checked.
- User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: evidence.maintainer_signals | github_repo:929158279 | https://github.com/langchain-ai/langchain-mcp-adapters | last_activity_observed missing
12. Security or permission risk: no_demo
- Severity: medium
- Finding: no_demo
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: downstream_validation.risk_items | github_repo:929158279 | https://github.com/langchain-ai/langchain-mcp-adapters | no_demo; severity=medium
Source: Doramagic discovery, validation, and Project Pack records
Community Discussion Evidence
These external discussion links are review inputs, not standalone proof that the project is production-ready.
Open the linked issues or discussions before treating the pack as ready for your environment.
Doramagic exposes project-level community discussion separately from official documentation. Review these links before using langchain-mcp-adapters with real data or production workflows.
- MultiServerMCPClient.get_tools() silently returns no tools when any si - github / github_issue
- Feature Request: Support passing server-defined params extensions (e.g. - github / github_issue
- Prompts and Resources auto-discovery - github / github_issue
- Fix TypeError in resources.py and make __aexit__ an async coroutine in c - github / github_issue
- langchain-mcp-adapters==0.2.2 - github / github_release
- langchain-mcp-adapters==0.2.1 - github / github_release
- langchain-mcp-adapters==0.2.0 - github / github_release
- langchain-mcp-adapters==0.2.0a1 - github / github_release
- langchain-mcp-adapters==0.1.14 - github / github_release
- langchain-mcp-adapters==0.1.13 - github / github_release
- langchain-mcp-adapters==0.1.12 - github / github_release
- langchain-mcp-adapters==0.1.10 - github / github_release
Source: Project Pack community evidence and pitfall evidence