# langchain-mcp-adapters Project Documentation (https://github.com/langchain-ai/langchain-mcp-adapters)

Generated: 2026-05-15 14:10:43 UTC

## Table of Contents

- [Introduction](#page-introduction)
- [Installation](#page-installation)
- [Quick Start Guide](#page-quickstart)
- [System Architecture](#page-architecture)
- [Package Structure](#page-package-structure)
- [Tool Conversion](#page-tool-conversion)
- [MultiServerMCPClient](#page-multiserver-client)
- [Transport Types](#page-transport-types)
- [Callbacks](#page-callbacks)
- [Tool Call Interceptors](#page-interceptors)

<a id='page-introduction'></a>

## Introduction

### Related Pages

Related topics: [Installation](#page-installation), [Quick Start Guide](#page-quickstart)

<details>
<summary>Relevant source files</summary>

The following source files were used to generate this page:

- [README.md](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)
- [langchain_mcp_adapters/__init__.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/__init__.py)
- [langchain_mcp_adapters/tools.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)
- [langchain_mcp_adapters/resources.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/resources.py)
- [langchain_mcp_adapters/client.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/client.py)
- [langchain_mcp_adapters/sessions.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/sessions.py)
- [langchain_mcp_adapters/interceptors.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/interceptors.py)
</details>

# Introduction

LangChain MCP Adapters is a Python library that bridges the gap between the Model Context Protocol (MCP) ecosystem and LangChain/LangGraph applications. This library provides a lightweight wrapper that converts MCP tools, prompts, and resources into LangChain-compatible formats, enabling seamless integration of MCP servers with AI agents and applications built on the LangChain framework.

## Overview

The Model Context Protocol (MCP) is an open protocol developed by Anthropic that enables AI applications to connect with external data sources, tools, and services. MCP defines a standard interface for AI models to interact with various resources through a client-server architecture.

LangChain MCP Adapters serves as the integration layer between these two ecosystems. It allows developers to:

- Use MCP servers as tool providers for LangChain and LangGraph agents
- Load tools from multiple MCP servers simultaneously
- Convert MCP resources into LangChain Blob objects for processing
- Transform MCP prompts into formats compatible with LangChain
- Intercept and modify tool call behavior through a configurable middleware pattern

Source: [README.md:1-20](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)

## Architecture

The library follows a modular architecture with clear separation of concerns across several key components:

```mermaid
graph TD
    A[LangChain/LangGraph Agent] --> B[langchain-mcp-adapters]
    B --> C[Tools Adapter]
    B --> D[Resources Adapter]
    B --> E[Prompts Adapter]
    B --> F[MultiServerMCPClient]
    C --> G[MCP ClientSession]
    D --> G
    E --> G
    F --> H[Connection Manager]
    H --> I[StdioConnection]
    H --> J[StreamableHttpConnection]
    H --> K[SSEConnection]
    H --> L[WebsocketConnection]
    G --> M[MCP Server 1]
    G --> N[MCP Server 2]
    G --> O[MCP Server N]
```

### Core Components

| Component | File | Purpose |
|-----------|------|---------|
| `MultiServerMCPClient` | `client.py` | Manages connections to multiple MCP servers |
| `load_mcp_tools()` | `tools.py` | Converts MCP tools to LangChain tools |
| `load_mcp_resources()` | `resources.py` | Converts MCP resources to LangChain Blobs |
| `load_mcp_prompt()` | `prompts.py` | Converts MCP prompts to LangChain prompts |
| `ToolCallInterceptor` | `interceptors.py` | Middleware for tool call lifecycle management |

Source: [langchain_mcp_adapters/__init__.py:1-12](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/__init__.py)

## Supported Transports

The library supports multiple transport mechanisms for connecting to MCP servers. Each transport type is implemented in the sessions module and provides different capabilities for various deployment scenarios.

```mermaid
graph LR
    A[Client Application] --> B[Transport Layer]
    B --> C[stdio]
    B --> D[streamable-http]
    B --> E[SSE]
    B --> F[WebSocket]
    C --> G[Local Process]
    D --> H[HTTP Server]
    E --> H
    F --> H
```

### Transport Comparison

| Transport | Use Case | Headers Support | Stateful | Notes |
|-----------|----------|-----------------|----------|-------|
| `stdio` | Local subprocesses | No | Yes | Standard I/O communication |
| `streamable_http` | HTTP-based servers | Yes | Configurable | Recommended for stateless deployments |
| `sse` | Server-Sent Events | Yes | Yes | Server-to-client streaming (older HTTP+SSE transport) |
| `websocket` | Persistent connections | No | Yes | Low latency, real-time |

Source: [langchain_mcp_adapters/sessions.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/sessions.py)
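
The table above maps directly onto connection dictionaries. As an illustration, here is one configuration per transport, written as plain dicts; the `streamable_http` literal and the exact key names are assumptions based on the session types described in this document, and `sessions.py` defines the authoritative schema:

```python
# One example connection per transport, keyed by server name.
connections = {
    "local_math": {
        "transport": "stdio",            # spawn a local subprocess
        "command": "python",
        "args": ["./math_server.py"],
    },
    "weather": {
        "transport": "streamable_http",  # HTTP-based server
        "url": "http://localhost:8000/mcp",
        "headers": {"Authorization": "Bearer <token>"},
    },
    "events": {
        "transport": "sse",              # Server-Sent Events stream
        "url": "http://localhost:8001/sse",
    },
    "realtime": {
        "transport": "websocket",        # persistent WebSocket connection
        "url": "ws://localhost:8002/ws",
    },
}

# Every connection must declare its transport explicitly.
assert all("transport" in cfg for cfg in connections.values())
```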

## Tool Conversion Process

When loading MCP tools, the library performs a series of conversions to transform the tool definitions into LangChain-compatible `StructuredTool` objects. This process involves mapping MCP tool schemas, descriptions, and execution semantics.

```mermaid
graph TD
    A[MCP Tool Definition] --> B[Extract inputSchema]
    B --> C[Create StructuredTool]
    C --> D[Wrap with interceptor chain]
    D --> E[Return BaseTool]
    E --> F[Used by LangChain Agent]
    F --> G[Tool call invocation]
    G --> H[MCP ClientSession.call_tool]
    H --> I[Result conversion]
    I --> J[Return to Agent]
```

### Tool Result Handling

The tool adapter handles various content types returned by MCP tools:

| MCP Content Type | LangChain Output | Notes |
|------------------|------------------|-------|
| `TextContent` | `{"type": "text", "text": ...}` | Direct text conversion |
| `ImageContent` | `{"type": "image", "base64": ..., "mime_type": ...}` | Image data with MIME type |
| `ResourceLink` (image/*) | `{"type": "image", "url": ...}` | Image URL reference |
| `ResourceLink` (other) | `{"type": "file", "url": ...}` | File URL reference |
| `EmbeddedResource` (text) | `{"type": "text", "text": ...}` | Embedded text content |
| `EmbeddedResource` (blob) | `{"type": "image"/"file", ...}` | Binary content |

Source: [langchain_mcp_adapters/tools.py:70-130](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)
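
A rough sketch of the conversion rules in the table, written against simplified dict stand-ins rather than the typed `mcp.types` objects the real adapter in `tools.py` receives:

```python
def convert_content_block(block: dict) -> dict:
    """Toy mapper mirroring the conversion table above (illustrative only)."""
    kind = block["type"]
    if kind == "text":
        return {"type": "text", "text": block["text"]}
    if kind == "image":
        return {"type": "image", "base64": block["data"], "mime_type": block["mimeType"]}
    if kind == "resource_link":
        # Image URLs become image blocks; everything else is a file reference.
        out_type = "image" if block["mimeType"].startswith("image/") else "file"
        return {"type": out_type, "url": block["uri"]}
    raise ValueError(f"unsupported content type: {kind}")

print(convert_content_block({"type": "resource_link", "mimeType": "image/png", "uri": "https://example.com/a.png"}))
```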

## Interceptor System

The library provides an interceptor mechanism that lets developers observe and modify tool call behavior. Interceptors compose in an onion pattern: each interceptor wraps the next, middleware-style, so logic can run both before and after the inner handler.

```mermaid
graph TD
    A[Request] --> B[Interceptor 1]
    B --> C[Interceptor 2]
    C --> D[Interceptor N]
    D --> E[Base Handler<br/>session.call_tool]
    E --> F[Interceptor N Result]
    F --> G[Interceptor 2 Result]
    G --> H[Interceptor 1 Result]
    H --> I[Response]
```
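
The onion ordering can be demonstrated with plain async functions standing in for interceptors and for `session.call_tool`; this is a self-contained sketch of the pattern, not the library's actual composition code:

```python
import asyncio

async def base_handler(request: dict) -> str:
    # Stand-in for session.call_tool at the center of the onion.
    return f"result:{request['name']}"

def make_interceptor(label: str, trace: list):
    async def interceptor(request: dict, next_handler):
        trace.append(f"{label}:before")
        result = await next_handler(request)
        trace.append(f"{label}:after")
        return result
    return interceptor

def compose(interceptors, handler):
    # Wrap from the inside out so the first interceptor runs outermost.
    for icp in reversed(interceptors):
        handler = (lambda i, nxt: lambda req: i(req, nxt))(icp, handler)
    return handler

trace: list = []
chain = compose([make_interceptor("one", trace), make_interceptor("two", trace)], base_handler)
result = asyncio.run(chain({"name": "add"}))
print(trace)  # ['one:before', 'two:before', 'two:after', 'one:after']
```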

### ToolCallInterceptor Interface

Interceptors implement the `ToolCallInterceptor` protocol and can:

- Modify tool arguments before execution
- Change the tool name being called
- Add or modify HTTP headers for requests
- Transform or wrap the result
- Handle errors and retry logic
- Support LangGraph's `Command` for state modification

Source: [langchain_mcp_adapters/interceptors.py:1-50](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/interceptors.py)

## Resource Conversion

MCP resources are converted to LangChain `Blob` objects, enabling integration with LangChain's document loading and processing capabilities.

```mermaid
graph TD
    A[MCP Resource URI] --> B[session.read_resource]
    B --> C[ResourceContents]
    C --> D{Content Type?}
    D -->|TextResourceContents| E[Extract text]
    D -->|BlobResourceContents| F[base64 decode]
    E --> G[Blob.from_data]
    F --> G
    G --> H[LangChain Blob]
```

Source: [langchain_mcp_adapters/resources.py:1-60](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/resources.py)
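
The two branches in the diagram reduce to a small amount of decoding logic. A simplified sketch using dicts in place of `TextResourceContents` / `BlobResourceContents` (the real code in `resources.py` builds `langchain_core` `Blob` objects):

```python
import base64

def resource_to_bytes(contents: dict) -> bytes:
    """Extract raw bytes from a resource payload (illustrative only)."""
    if "text" in contents:
        # TextResourceContents: payload is plain text.
        return contents["text"].encode("utf-8")
    # BlobResourceContents: payload is base64-encoded binary data.
    return base64.b64decode(contents["blob"])

png_header = resource_to_bytes({"blob": base64.b64encode(b"\x89PNG").decode()})
```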

## Basic Usage Patterns

### Single Server with load_mcp_tools

```python
from mcp import ClientSession
from langchain_mcp_adapters.tools import load_mcp_tools

# The read/write streams come from a transport client, e.g.
# `async with stdio_client(server_params) as (read, write):`
async with ClientSession(read, write) as session:
    await session.initialize()
    tools = await load_mcp_tools(session)
    # Use tools with LangChain agent
```

### Multi-Server with MultiServerMCPClient

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({
    "math": {
        "command": "python",
        "args": ["./math_server.py"],
        "transport": "stdio",
    },
    "weather": {
        "url": "http://localhost:8000/mcp",
        "transport": "streamable_http",
    }
})
tools = await client.get_tools()
```

Source: [README.md:40-80](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)

## Installation

The library can be installed via pip:

```bash
pip install langchain-mcp-adapters
```

For LangGraph integration with full agent capabilities:

```bash
pip install langchain-mcp-adapters langgraph "langchain[openai]"
```

Source: [README.md:25-30](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)

## Key Features Summary

| Feature | Description |
|---------|-------------|
| Tool Conversion | Convert MCP tools to LangChain `StructuredTool` objects |
| Multi-Server Support | Connect to multiple MCP servers simultaneously |
| Resource Loading | Convert MCP resources to LangChain Blobs |
| Transport Flexibility | Support for stdio, HTTP, SSE, and WebSocket transports |
| Interceptor Middleware | Hook into tool call lifecycle for custom behavior |
| LangGraph Integration | Full compatibility with LangGraph agents and state management |
| Pagination Support | Automatic handling of paginated tool listings |
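
Of these features, pagination is easy to picture as a cursor loop. A toy version that drains a paginated tool listing; the `nextCursor` field name follows the MCP specification, and the fake server below is purely illustrative:

```python
def list_all_tools(fetch_page):
    """Follow `nextCursor` until the server reports no more pages."""
    tools, cursor = [], None
    while True:
        page = fetch_page(cursor)
        tools.extend(page["tools"])
        cursor = page.get("nextCursor")
        if cursor is None:
            return tools

# Fake two-page server response.
pages = {
    None: {"tools": ["add", "multiply"], "nextCursor": "p2"},
    "p2": {"tools": ["get_weather"]},
}
all_tools = list_all_tools(lambda cursor: pages[cursor])
print(all_tools)  # ['add', 'multiply', 'get_weather']
```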

## Related Documentation

- [Tools Module](./tools) - Detailed guide on tool conversion and execution
- [Client Module](./client) - Multi-server client configuration and usage
- [Resources Module](./resources) - Resource loading and conversion
- [Interceptors](./interceptors) - Middleware and request/response modification
- [Sessions](./sessions) - Transport layer implementation details

---

<a id='page-installation'></a>

## Installation

### Related Pages

Related topics: [Introduction](#page-introduction), [Quick Start Guide](#page-quickstart)

<details>
<summary>Relevant source files</summary>

The following source files were used to generate this page:

- [pyproject.toml](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/pyproject.toml)
- [langchain_mcp_adapters/tools.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)
- [langchain_mcp_adapters/resources.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/resources.py)
- [langchain_mcp_adapters/client.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/client.py)
- [langchain_mcp_adapters/sessions.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/sessions.py)
- [langchain_mcp_adapters/interceptors.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/interceptors.py)
</details>

# Installation

This page documents how to install and set up the **langchain-mcp-adapters** library, which provides a lightweight wrapper that makes [Anthropic Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) tools compatible with [LangChain](https://github.com/langchain-ai/langchain) and [LangGraph](https://github.com/langchain-ai/langgraph).

## Overview

The `langchain-mcp-adapters` library bridges MCP servers with LangChain/LangGraph ecosystems. It enables:

- Converting MCP tools into LangChain tools
- Connecting to multiple MCP servers simultaneously
- Loading and managing MCP resources as LangChain Blob objects
- Intercepting and modifying tool call execution

Source: [README.md:1-20](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)

## Prerequisites

### Python Version

| Version | Support Status |
|---------|----------------|
| Python 3.10+ | Required |
| Python 3.11+ | Recommended |
| Python 3.12+ | Supported |

### Required Dependencies

The following packages are automatically installed as dependencies:

| Package | Purpose | Min Version |
|---------|---------|-------------|
| `langchain-core` | Core LangChain functionality | Latest stable |
| `mcp` | Model Context Protocol SDK | Latest stable |
| `pydantic` | Data validation and settings | V2 |
| `httpx` | HTTP client for streamable HTTP transport | Latest stable |

### Optional Dependencies

| Package | Purpose | Install Command |
|---------|---------|-----------------|
| `langgraph` | For LangGraph agent support | `pip install langgraph` |
| `langchain[openai]` | OpenAI integration for agents | `pip install "langchain[openai]"` |

Source: [langchain_mcp_adapters/tools.py:1-50](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)

## Basic Installation

### Standard Installation

Install the core package using pip:

```bash
pip install langchain-mcp-adapters
```

Source: [README.md:32](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)

### With LangGraph Support

For full LangGraph agent functionality:

```bash
pip install langchain-mcp-adapters langgraph "langchain[openai]"
```

This installs:
- The MCP adapters library
- LangGraph for building stateful agents
- OpenAI integration for LLM-powered agents

Source: [README.md:32-36](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)

## Environment Configuration

### OpenAI API Key

If using OpenAI models with the library, set your API key:

```bash
export OPENAI_API_KEY=<your_api_key>
```

Alternatively, pass it programmatically:

```python
import os
os.environ["OPENAI_API_KEY"] = "your-api-key"
```

## Package Dependencies Graph

```mermaid
graph TD
    subgraph "langchain-mcp-adapters"
        A[tools.py] --> B[Base Tools Module]
        A --> C[Tool Interceptors]
        D[resources.py] --> E[Resource Adapter]
        F[client.py] --> G[MultiServerMCPClient]
        H[sessions.py] --> I[Session Management]
    end
    
    subgraph "Required Dependencies"
        J[langchain-core] --> B
        J --> E
        K[mcp Python SDK] --> B
        K --> G
        K --> I
        L[pydantic] --> B
        M[httpx] --> I
    end
    
    subgraph "Optional Dependencies"
        N[langgraph] -.->|if installed| B
        N -.->|if installed| G
    end
```

## Installation Verification

After installation, verify the package is correctly installed:

```python
# Query the installed distribution's version; this works even if the
# package does not expose a __version__ attribute.
from importlib.metadata import version

print(version("langchain-mcp-adapters"))
```

Confirm that the main entry points import cleanly:

```python
from langchain_mcp_adapters.tools import load_mcp_tools
from langchain_mcp_adapters.client import MultiServerMCPClient

# Verify imports work
print("Installation verified successfully!")
```

## Transport-Specific Installation Notes

The library supports multiple MCP server transport types, each with specific requirements:

### Standard I/O (stdio) Transport

No additional dependencies required. Uses the built-in `mcp` SDK stdio client.

Source: [langchain_mcp_adapters/sessions.py:1-100](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/sessions.py)

### Streamable HTTP Transport

Requires `httpx` for HTTP client functionality (included by default).

```bash
pip install langchain-mcp-adapters
# httpx is installed as a dependency
```

### Server-Sent Events (SSE) Transport

Requires `httpx` with SSE support (included by default).

Source: [langchain_mcp_adapters/sessions.py:100-200](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/sessions.py)

## Installing Development Version

### From Source

To install the latest development version from the repository:

```bash
git clone https://github.com/langchain-ai/langchain-mcp-adapters.git
cd langchain-mcp-adapters
pip install -e .
```

### With Development Dependencies

```bash
git clone https://github.com/langchain-ai/langchain-mcp-adapters.git
cd langchain-mcp-adapters
pip install -e ".[dev]"
```

## Dependency Resolution

### Core Dependencies

The package requires these core dependencies which are installed automatically:

```toml
# Indicative list of runtime dependencies; see pyproject.toml in the
# repository for the exact version pins
dependencies = [
    "langchain-core",
    "mcp",
    "pydantic>=2",
    "httpx",
]
```

### Optional Feature Dependencies

| Feature | Dependencies |
|---------|--------------|
| LangGraph Support | `langgraph` |
| All Features | `langgraph`, `langchain[openai]` |

Source: [pyproject.toml](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/pyproject.toml)

## Importing the Package

After installation, import the main components:

```python
# Core tools module
from langchain_mcp_adapters.tools import load_mcp_tools, convert_mcp_tool_to_langchain_tool

# Multi-server client
from langchain_mcp_adapters.client import MultiServerMCPClient

# Resource adapter
from langchain_mcp_adapters.resources import load_mcp_resources, get_mcp_resource

# Session management
from langchain_mcp_adapters.sessions import create_session, Connection

# Interceptors (optional)
from langchain_mcp_adapters.interceptors import ToolCallInterceptor
```

Source: [langchain_mcp_adapters/tools.py:1-50](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)

## Next Steps

After installation, proceed to:

1. **[Quick Start Guide](#page-quickstart)** - Get started with basic MCP tool usage and LangGraph integration
2. **[MultiServerMCPClient](#page-multiserver-client)** - Connect to and configure multiple MCP servers
3. **[Transport Types](#page-transport-types)** - Configure connection options and transports

---

<a id='page-quickstart'></a>

## Quick Start Guide

### Related Pages

Related topics: [Introduction](#page-introduction), [Tool Conversion](#page-tool-conversion), [MultiServerMCPClient](#page-multiserver-client)

<details>
<summary>Relevant Source Files</summary>

The following source files were used to generate this page:

- [README.md](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)
- [langchain_mcp_adapters/tools.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)
- [langchain_mcp_adapters/resources.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/resources.py)
- [langchain_mcp_adapters/client.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/client.py)
- [langchain_mcp_adapters/interceptors.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/interceptors.py)
- [examples/servers/streamable-http-stateless/mcp_simple_streamablehttp_stateless/__main__.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/examples/servers/streamable-http-stateless/mcp_simple_streamablehttp_stateless/__main__.py)
</details>

# Quick Start Guide

This guide provides a comprehensive introduction to **langchain-mcp-adapters**, a library that bridges [Anthropic's Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) servers with [LangChain](https://github.com/langchain-ai/langchain) and [LangGraph](https://github.com/langchain-ai/langgraph) applications.

## Overview

The langchain-mcp-adapters library serves two primary purposes:

1. **Tool Conversion**: Transform MCP tools into LangChain-compatible tools that integrate seamlessly with LangGraph agents
2. **Multi-Server Client**: Manage connections to multiple MCP servers simultaneously

The library provides a lightweight wrapper that enables developers to leverage MCP servers' capabilities within the LangChain ecosystem without additional boilerplate code.

## Installation

Install the core package along with required dependencies:

```bash
pip install langchain-mcp-adapters
```

For development with OpenAI models:

```bash
pip install langchain-mcp-adapters langgraph "langchain[openai]"
```

## Architecture Overview

The library follows a layered architecture where MCP client sessions interact with server tools, prompts, and resources through adapter classes that convert data formats between MCP and LangChain standards.

```mermaid
graph TD
    A[LangChain / LangGraph Application] --> B[langchain-mcp-adapters]
    B --> C[MultiServerMCPClient]
    B --> D[Individual Tool Conversion]
    C --> E[MCP Server 1]
    C --> F[MCP Server 2]
    C --> N[MCP Server N]
    D --> E
    D --> F
    D --> N
    E --> G[stdio Transport]
    F --> H[HTTP Transport]
    F --> I[SSE Transport]
    F --> J[WebSocket Transport]
```

## Core Components

### MultiServerMCPClient

The `MultiServerMCPClient` manages connections to multiple MCP servers and provides unified access to their tools, prompts, and resources.

**Source**: [langchain_mcp_adapters/client.py:1-50](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/client.py)

#### Connection Configuration

| Parameter | Type | Description |
|-----------|------|-------------|
| `command` | `str` | Executable command (e.g., `"python"`, `"node"`) |
| `args` | `list[str]` | Command arguments |
| `transport` | `str` | Transport type: `"stdio"`, `"streamable_http"`, `"sse"`, `"websocket"` |
| `url` | `str` | Server URL for HTTP/SSE/WebSocket transports |
| `headers` | `dict[str, str]` | Custom HTTP headers for requests |

#### Supported Transports

| Transport | Use Case | Notes |
|----------|----------|-------|
| `stdio` | Local subprocess servers | Communication via stdin/stdout |
| `streamable_http` | Remote HTTP servers | Streamable HTTP request/response |
| `sse` | Servers using Server-Sent Events | Real-time streaming |
| `websocket` | WebSocket connections | Bidirectional communication |

**Source**: [langchain_mcp_adapters/client.py:1-100](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/client.py)

## Basic Usage Patterns

### Pattern 1: Direct Session Usage

For single-server scenarios, create an MCP session and load tools directly:

```python
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
from langchain_mcp_adapters.tools import load_mcp_tools

async with streamablehttp_client("http://localhost:3000/mcp") as (read, write, _):
    async with ClientSession(read, write) as session:
        await session.initialize()
        tools = await load_mcp_tools(session)
        # Use tools with LangChain/LangGraph
```

**Source**: [README.md:1-50](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)

### Pattern 2: MultiServerMCPClient with stdio

Connect to locally running MCP servers using standard I/O:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            "args": ["/path/to/math_server.py"],
            "transport": "stdio",
        },
    }
)
tools = await client.get_tools()
```

**Source**: [README.md:50-100](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)

### Pattern 3: MultiServerMCPClient with HTTP

Connect to remote MCP servers via HTTP transport:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient(
    {
        "weather": {
            "url": "http://localhost:8000/mcp",
            "transport": "streamable_http",
        }
    }
)
tools = await client.get_tools()
```

**Source**: [README.md:100-150](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)

### Pattern 4: Explicit Session Management

For advanced scenarios requiring direct session access:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.tools import load_mcp_tools

client = MultiServerMCPClient({...})
async with client.session("math") as session:
    tools = await load_mcp_tools(session)
```

**Source**: [langchain_mcp_adapters/client.py:50-80](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/client.py)

## Tool Loading

### load_mcp_tools Function

The `load_mcp_tools` function retrieves all available tools from an MCP session and converts them to LangChain tools.

**Source**: [langchain_mcp_adapters/tools.py:100-200](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)

#### Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `session` | `ClientSession \| None` | One of the two | MCP client session (pass `None` to connect via `connection`) |
| `connection` | `Connection \| None` | One of the two | Connection config used when `session` is `None` |
| `callbacks` | `Callbacks` | No | Event notification handlers |
| `tool_interceptors` | `list[ToolCallInterceptor]` | No | Interceptors for tool call processing |
| `server_name` | `str` | No | Server identifier for logging |
| `tool_name_prefix` | `bool` | No | Prefix tool names with server name (default: `False`) |

#### Return Value

Returns a `list[BaseTool]` containing LangChain-compatible tool objects. Each tool's metadata includes annotations from the MCP tool definition.

**Source**: [langchain_mcp_adapters/tools.py:200-300](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)
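
To see why a `tool_name_prefix` option exists, consider two servers that expose a tool with the same name. A hypothetical sketch of the collision and the prefixed alternative; the exact prefix format the library uses may differ, so check `tools.py`:

```python
def tool_names(server_tools: dict[str, list[str]], prefix: bool) -> list[str]:
    """Flatten tool names across servers, optionally server-prefixed
    (hypothetical separator for illustration)."""
    if not prefix:
        return [name for names in server_tools.values() for name in names]
    return [f"{server}_{name}" for server, names in server_tools.items() for name in names]

servers = {"math": ["add"], "stats": ["add"]}
print(tool_names(servers, prefix=False))  # ['add', 'add'] -- name collision
print(tool_names(servers, prefix=True))   # ['math_add', 'stats_add']
```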

## Integration with LangGraph

### Complete Agent Setup

The following example demonstrates a full LangGraph agent setup using MCP tools:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition
from langchain.chat_models import init_chat_model

model = init_chat_model("openai:gpt-4.1")

client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            "args": ["./examples/math_server.py"],
            "transport": "stdio",
        },
        "weather": {
            "url": "http://localhost:8000/mcp",
            "transport": "streamable_http",
        }
    }
)

tools = await client.get_tools()

def call_model(state: MessagesState):
    response = model.bind_tools(tools).invoke(state["messages"])
    return {"messages": response}

builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "call_model")
builder.add_conditional_edges(
    "call_model",
    tools_condition,
)
builder.add_edge("tools", "call_model")
graph = builder.compile()
# Run with e.g. await graph.ainvoke({"messages": "what is (3 + 5) x 12?"})
```

**Source**: [README.md:150-200](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)

### Workflow Diagram

```mermaid
graph LR
    A[User Message] --> B[call_model Node]
    B --> C{tools_condition}
    C -->|END| D[Response to User]
    C -->|tools| E[ToolNode]
    E --> F[MCP Tool Execution]
    F --> G[Tool Result]
    G --> B
```

## Tool Interceptors

Tool interceptors allow you to modify tool call requests and responses in an onion-pattern chain:

```mermaid
graph TD
    A[Request] --> B[Interceptor 1]
    B --> C[Interceptor 2]
    C --> D[Interceptor N]
    D --> E[Execute Tool]
    E --> D
    D --> C
    C --> B
    B --> F[Response]
```

**Source**: [langchain_mcp_adapters/interceptors.py:1-50](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/interceptors.py)

### Creating a Custom Interceptor

```python
from langchain_mcp_adapters.interceptors import (
    ToolCallInterceptor,
    MCPToolCallRequest,
    MCPToolCallResult,
)

async def logging_interceptor(
    request: MCPToolCallRequest, 
    next_handler
) -> MCPToolCallResult:
    print(f"Calling tool: {request.name} with args: {request.args}")
    result = await next_handler(request)
    print(f"Tool result: {result}")
    return result

client = MultiServerMCPClient(
    {...},
    tool_interceptors=[logging_interceptor]
)
```

## Resource Loading

The library also supports loading MCP resources as LangChain Blob objects:

```python
from langchain_mcp_adapters.resources import load_mcp_resources

# Load all resources
blobs = await load_mcp_resources(session)

# Load specific resources
blobs = await load_mcp_resources(session, uris=["resource://file1", "resource://file2"])

# Load single resource
from langchain_mcp_adapters.resources import get_mcp_resource
blob = await get_mcp_resource(session, "resource://document")
```

**Source**: [langchain_mcp_adapters/resources.py:1-80](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/resources.py)

## Creating an MCP Server

For testing, you can create a simple MCP server using FastMCP:

```python
# math_server.py
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Math")

@mcp.tool()
def add(a: int, b: int) -> int:
    """Add two numbers"""
    return a + b

@mcp.tool()
def multiply(a: int, b: int) -> int:
    """Multiply two numbers"""
    return a * b

if __name__ == "__main__":
    mcp.run()
```

**Source**: [README.md:50-100](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)

## HTTP Server Setup

For remote access, use the provided streamable HTTP server example:

```bash
cd examples/servers/streamable-http-stateless/
uv run mcp-simple-streamablehttp-stateless --port 3000
```

This starts a stateless HTTP server on port 3000 that can be accessed via the `streamablehttp_client`.

**Source**: [examples/servers/streamable-http-stateless/mcp_simple_streamablehttp_stateless/__main__.py:1-10](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/examples/servers/streamable-http-stateless/mcp_simple_streamablehttp_stateless/__main__.py)

## Response Format

All tool calls return results in the `content_and_artifact` format:

| Component | Type | Description |
|-----------|------|-------------|
| `content` | `list[ToolMessageContentBlock]` | Primary tool response content |
| `artifact` | `MCPToolArtifact` | Structured data from MCP tool (if any) |

**Source**: [langchain_mcp_adapters/tools.py:50-120](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)
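
The split can be illustrated with a stub that returns the pair directly, using plain dicts in place of the library's typed content blocks and artifact objects:

```python
def call_tool_stub(args: dict):
    """Return a (content, artifact) pair in the shape described above
    (illustrative stand-in for a real MCP tool call)."""
    total = args["a"] + args["b"]
    content = [{"type": "text", "text": str(total)}]     # model-facing text
    artifact = {"structuredContent": {"result": total}}  # structured payload
    return content, artifact

content, artifact = call_tool_stub({"a": 3, "b": 4})
print(content[0]["text"])  # '7'
```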

## Next Steps

- Explore the [API Reference](https://github.com/langchain-ai/langchain-mcp-adapters) for detailed function signatures
- Review the example applications in the `examples/` directory
- Implement custom tool interceptors for logging, caching, or authentication
- Integrate with LangGraph's streaming capabilities for real-time tool execution

---

<a id='page-architecture'></a>

## System Architecture

### Related Pages

Related topics: [Package Structure](#page-package-structure), [Tool Conversion](#page-tool-conversion), [MultiServerMCPClient](#page-multiserver-client)

<details>
<summary>Relevant source files</summary>

The following source files were used to generate this page:

- [langchain_mcp_adapters/__init__.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/__init__.py)
- [langchain_mcp_adapters/tools.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)
- [langchain_mcp_adapters/client.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/client.py)
- [langchain_mcp_adapters/sessions.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/sessions.py)
- [langchain_mcp_adapters/interceptors.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/interceptors.py)
- [langchain_mcp_adapters/resources.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/resources.py)
</details>

# System Architecture

## Overview

The **langchain-mcp-adapters** library provides a lightweight wrapper that makes [Anthropic Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) tools compatible with [LangChain](https://github.com/langchain-ai/langchain) and [LangGraph](https://github.com/langchain-ai/langgraph). Source: [README.md](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)

The library acts as a bridge between MCP servers and LangChain applications, enabling:

- **Tool Conversion**: Transform MCP tools into LangChain-compatible tools
- **Multi-Server Support**: Connect to multiple MCP servers simultaneously
- **Resource Management**: Convert MCP resources to LangChain Blob objects
- **Prompt Integration**: Load MCP prompts into LangChain format
- **Interceptor Support**: Customizable tool call interception and modification

Source: [langchain_mcp_adapters/__init__.py:1-10](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/__init__.py)

---

## High-Level Architecture

The system follows a layered architecture with clear separation of concerns:

```mermaid
graph TD
    subgraph "Client Layer"
        Client[MultiServerMCPClient]
    end

    subgraph "Session Layer"
        Stdio[StdioConnection]
        HTTP[StreamableHttpConnection]
        SSE[SSEConnection]
        WS[WebsocketConnection]
    end

    subgraph "Adapters Layer"
        Tools[tools.py]
        Resources[resources.py]
        Prompts[prompts.py]
    end

    subgraph "Core Layer"
        Interceptors[interceptors.py]
        Sessions[sessions.py]
    end

    subgraph "External"
        MCPServer[MCP Server]
        LangChain[LangChain/LangGraph]
    end

    Client --> Tools
    Client --> Resources
    Client --> Stdio
    Client --> HTTP
    Client --> SSE
    Client --> WS
    Stdio --> MCPServer
    HTTP --> MCPServer
    SSE --> MCPServer
    WS --> MCPServer
    Tools --> LangChain
    Resources --> LangChain
    Interceptors --> Tools
    Sessions --> Client
```

---

## Core Components

### MultiServerMCPClient

The `MultiServerMCPClient` is the main entry point for connecting to multiple MCP servers. It manages connections and provides unified access to tools, prompts, and resources.

Sources: [langchain_mcp_adapters/client.py:1-50]()

#### Key Responsibilities

| Responsibility | Description |
|----------------|-------------|
| Connection Management | Manages multiple server connections |
| Tool Loading | Loads and converts tools from all servers |
| Resource Loading | Loads MCP resources as LangChain Blobs |
| Prompt Loading | Loads prompts from MCP servers |
| Session Handling | Provides session context managers for explicit control |

#### Configuration Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `connections` | `dict[str, Connection]` | Server connection configurations |
| `callbacks` | `Callbacks` | Event notification handlers |
| `tool_interceptors` | `list[ToolCallInterceptor]` | Tool call interceptors |
| `tool_name_prefix` | `bool` | Prefix tool names with server name |

Sources: [langchain_mcp_adapters/client.py:60-80]()
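To make the `connections` parameter concrete, the mapping below shows one plausible shape for it. This is an illustrative sketch only: the server names, the script path, and the URL are placeholder values, not endpoints from the project.

```python
# Illustrative connection configurations keyed by server name. The "math"
# command path and the "weather" URL are hypothetical placeholders.
connections = {
    "math": {
        "transport": "stdio",
        "command": "python",
        "args": ["/path/to/math_server.py"],
    },
    "weather": {
        "transport": "http",
        "url": "http://localhost:8000/mcp",
    },
}

# Each entry's "transport" key determines which session type is created.
for name, config in connections.items():
    print(name, config["transport"])
```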

---

### Connection Types

The library supports multiple transport mechanisms for connecting to MCP servers:

```mermaid
graph LR
    A[Client] --> B[StdioConnection]
    A --> C[StreamableHttpConnection]
    A --> D[SSEConnection]
    A --> E[WebsocketConnection]

    B --> F[stdio_client]
    C --> G[mcp.client.streamable_http]
    D --> H[mcp.client.sse]
    E --> I[mcp.client.websocket]
```

Sources: [langchain_mcp_adapters/sessions.py:1-50]()

#### Transport Comparison

| Transport | Use Case | Configuration |
|-----------|----------|---------------|
| `stdio` | Local subprocess execution | `command`, `args`, `env`, `cwd` |
| `http` | Streamable HTTP servers | `url`, `headers`, `timeout` |
| `sse` | Server-Sent Events transport | `url`, `headers`, `timeout` |
| `websocket` | WebSocket connections | `url`, `headers`, `timeout` |

Sources: [langchain_mcp_adapters/sessions.py:100-200]()

---

## Tool Conversion System

### Architecture

```mermaid
graph TD
    subgraph "MCP Side"
        MCPTool[MCP Tool]
        MCPToolResult[MCPToolCallResult]
    end

    subgraph "Conversion Layer"
        ContentConverter[_convert_mcp_content_to_lc_block]
        ResultConverter[_convert_call_tool_result]
        InterceptorChain[_build_interceptor_chain]
    end

    subgraph "LangChain Side"
        StructuredTool[StructuredTool]
        ToolMessage[ToolMessage]
        Command[Command]
        Artifact[MCPToolArtifact]
    end

    MCPTool --> load_mcp_tools
    load_mcp_tools --> InterceptorChain
    InterceptorChain --> MCPToolResult
    MCPToolResult --> ResultConverter
    ContentConverter --> StructuredTool
    ResultConverter --> ToolMessage
    ResultConverter --> Command
    ResultConverter --> Artifact
```

### Content Type Mapping

The tool adapter converts MCP content types to LangChain content blocks:

| MCP Content Type | LangChain Block | Description |
|------------------|-----------------|-------------|
| `TextContent` | `TextContentBlock` | Plain text content |
| `ImageContent` | `ImageContentBlock` | Image with base64 data |
| `ResourceLink` (image/*) | `ImageContentBlock` | Image via URL |
| `ResourceLink` (other) | `FileContentBlock` | File via URL |
| `EmbeddedResource` (text) | `TextContentBlock` | Embedded text resource |
| `EmbeddedResource` (blob) | `ImageContentBlock` / `FileContentBlock` | Embedded binary resource |
| `AudioContent` | `NotImplementedError` | Not yet supported |

Sources: [langchain_mcp_adapters/tools.py:100-150]()

### Tool Call Execution Flow

```mermaid
sequenceDiagram
    participant Agent as LangGraph Agent
    participant Tool as StructuredTool
    participant Interceptor as Interceptor Chain
    participant Executor as Execute Tool
    participant MCPSession as MCP Session
    participant MCPServer as MCP Server

    Agent->>Tool: invoke(args)
    Tool->>Interceptor: MCPToolCallRequest
    Interceptor->>Interceptor: Before interceptors
    Interceptor->>Executor: MCPToolCallRequest
    Executor->>MCPSession: session.call_tool()
    MCPSession->>MCPServer: CallToolRequest
    MCPServer-->>MCPSession: CallToolResult
    MCPSession-->>Executor: CallToolResult
    Executor-->>Interceptor: MCPToolCallResult
    Interceptor->>Interceptor: After interceptors
    Interceptor-->>Tool: MCPToolCallResult
    Tool->>Tool: _convert_call_tool_result()
    Tool-->>Agent: (content, artifact)
```

---

## Interceptor System

### Purpose

The interceptor system allows custom code to execute before and after tool calls, enabling:

- Request modification
- Response transformation
- Logging and monitoring
- Caching
- Error handling
- Conditional execution

Sources: [langchain_mcp_adapters/interceptors.py:1-30]()

### Interceptor Interface

```mermaid
graph TD
    subgraph "MCPToolCallRequest"
        ReqName[name]
        ReqArgs[args]
        ReqServer[server_name]
        ReqHeaders[headers]
        ReqRuntime[runtime]
    end

    subgraph "MCPToolCallResult"
        ResContent[content]
        ResIsError[isError]
        ResStruct[structuredContent]
    end

    Interceptor["ToolCallInterceptor"]
    Interceptor --> Before[before_tool_call]
    Interceptor --> After[after_tool_call]
```

### Request Override Support

| Field | Modifiable | Description |
|-------|------------|-------------|
| `name` | Yes | Tool name override |
| `args` | Yes | Arguments override |
| `headers` | Yes | HTTP headers override |
| `server_name` | No | Read-only context |
| `runtime` | No | Read-only context |

Sources: [langchain_mcp_adapters/interceptors.py:50-70]()
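The table above says an interceptor may override `name`, `args`, and `headers` but must treat `server_name` as read-only context. The following sketch shows that pattern with a local stand-in dataclass (it is not the library's `MCPToolCallRequest`, and `force_units` is a hypothetical interceptor):

```python
# Hedged sketch of an interceptor overriding a modifiable field.
# `Request` is a local stand-in, not the library's MCPToolCallRequest.
import asyncio
import dataclasses

@dataclasses.dataclass
class Request:
    name: str
    args: dict
    server_name: str  # read-only context by convention

async def force_units(request, call_next):
    # Override `args` only; `name` and `server_name` pass through unchanged.
    patched = dataclasses.replace(request, args={**request.args, "units": "metric"})
    return await call_next(patched)

async def execute(request):
    # Innermost handler: echo the (possibly patched) arguments.
    return request.args

result = asyncio.run(
    force_units(Request("get_weather", {"city": "Paris"}, "weather"), execute)
)
print(result)  # {'city': 'Paris', 'units': 'metric'}
```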

### Interceptor Chain Pattern

The system implements an onion-pattern interceptor chain:

```mermaid
graph TD
    A[Agent Request] --> B[Interceptor 1 - Outermost]
    B --> C[Interceptor 2]
    C --> D[Interceptor N]
    D --> E[Execute Tool - Innermost]
    E --> D2[Interceptor N - after]
    D2 --> C2[Interceptor 2 - after]
    C2 --> B2[Interceptor 1 - after]
    B2 --> F[Agent Response]
    
    style B fill:#ff9999
    style C fill:#ffcc99
    style D fill:#ffff99
    style E fill:#99ff99
```

Sources: [langchain_mcp_adapters/tools.py:50-80]()
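The onion pattern above can be sketched without the library at all: wrap the innermost executor with each interceptor in reverse order, so the first interceptor in the list ends up outermost. The names below (`build_chain`, the sample interceptors) are illustrative, not the private `_build_interceptor_chain` API.

```python
# Framework-free sketch of the onion-pattern interceptor chain.
import asyncio

def build_chain(interceptors, innermost):
    """Wrap `innermost` with each interceptor; first interceptor is outermost."""
    handler = innermost
    for intercept in reversed(interceptors):
        # Bind loop variables via default-capturing lambda.
        handler = (lambda icpt, nxt: lambda request: icpt(request, nxt))(intercept, handler)
    return handler

async def log_interceptor(request, call_next):
    request["trace"].append("before:log")
    result = await call_next(request)
    request["trace"].append("after:log")
    return result

async def auth_interceptor(request, call_next):
    request["trace"].append("before:auth")
    result = await call_next(request)
    request["trace"].append("after:auth")
    return result

async def execute_tool(request):
    request["trace"].append("execute")
    return "result"

chain = build_chain([log_interceptor, auth_interceptor], execute_tool)
request = {"trace": []}
output = asyncio.run(chain(request))
print(request["trace"])
# ['before:log', 'before:auth', 'execute', 'after:auth', 'after:log']
```

Note how the "before" hooks fire outermost-first and the "after" hooks unwind in the opposite order, matching the colored layers in the diagram.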

---

## Session Management

### Session Creation Flow

```mermaid
graph TD
    A[MultiServerMCPClient] --> B{Connection Type}
    
    B -->|Stdio| C[StdioConnection]
    B -->|HTTP| D[StreamableHttpConnection]
    B -->|SSE| E[SSEConnection]
    B -->|WebSocket| F[WebsocketConnection]
    
    C --> G[create_session]
    D --> G
    E --> G
    F --> G
    
    G --> H[ClientSession]
```

### Session Factory

The `create_session()` function provides a unified interface for session creation:

```python
async with create_session(connection) as session:
    tools = await load_mcp_tools(session)
```

Sources: [langchain_mcp_adapters/sessions.py:200-300]()

---

## Resource Management

### Resource Conversion

```mermaid
graph LR
    MCP[MCP Server] -->|read_resource| Session[ClientSession]
    Session -->|TextResourceContents| Converter[convert_mcp_resource_to_langchain_blob]
    Session -->|BlobResourceContents| Converter
    Converter --> Blob1["Blob (text)"]
    Converter --> Blob2["Blob (binary)"]
```

### Supported Resource Types

| MCP Type | LangChain Type | Notes |
|----------|---------------|-------|
| `TextResourceContents` | `Blob` | MIME type from resource |
| `BlobResourceContents` | `Blob` | Base64 decoded data |

Sources: [langchain_mcp_adapters/resources.py:1-50]()
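The two rows above differ only in how the payload is decoded. A minimal sketch of both branches, with plain bytes standing in for the langchain-core `Blob` type (the function name is illustrative):

```python
# Minimal sketch of the text vs. blob branches above; raw bytes stand in
# for the langchain-core Blob object the real converter produces.
import base64

def resource_contents_to_data(contents: dict) -> bytes:
    if "text" in contents:                       # TextResourceContents
        return contents["text"].encode("utf-8")
    return base64.b64decode(contents["blob"])    # BlobResourceContents

print(resource_contents_to_data({"text": "hello"}))
print(resource_contents_to_data({"blob": base64.b64encode(b"\x89PNG").decode()}))
```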

---

## Data Flow Architecture

### Complete Request Flow

```mermaid
graph TD
    subgraph "1. Initialization"
        A[MultiServerMCPClient] --> B[Load Tools]
        B --> C[create_session]
        C --> D[session.initialize]
    end

    subgraph "2. Tool Invocation"
        E[Agent] --> F[StructuredTool.invoke]
        F --> G[call_tool coroutine]
        G --> H[Build Request]
        H --> I[Apply Interceptors]
    end

    subgraph "3. MCP Execution"
        I --> J[session.call_tool]
        J --> K[MCP Server]
        K --> L[CallToolResult]
    end

    subgraph "4. Response Conversion"
        L --> M[_convert_call_tool_result]
        M --> N[Content Blocks]
        M --> O[MCPToolArtifact]
        N --> P[ToolMessage/Command]
    end

    subgraph "5. Return to Agent"
        P --> Q[Agent Response]
        O --> R[ToolArtifact]
    end
```

---

## Type System

### Result Types

The library defines conditional types based on LangGraph availability:

```python
if LANGGRAPH_PRESENT:
    ConvertedToolResult = list[ToolMessageContentBlock] | ToolMessage | Command
else:
    ConvertedToolResult = list[ToolMessageContentBlock] | ToolMessage
```

### MCPToolArtifact

A TypedDict wrapping structured content from MCP tool calls:

```python
class MCPToolArtifact(TypedDict):
    structured_content: dict[str, Any]
```

Sources: [langchain_mcp_adapters/tools.py:50-70]()

---

## Error Handling

### Error Flow

```mermaid
graph TD
    A[MCP Tool Call] --> B{Result Type}
    
    B -->|isError = True| C[Extract Text Blocks]
    C --> D[Join Error Parts]
    D --> E[ToolException]
    
    B -->|isError = False| F[Convert Content]
    F --> G[Return Result]
    
    B -->|AudioContent| H[NotImplementedError]
```

### Error Scenarios

| Scenario | Handling | Source |
|----------|----------|--------|
| MCP server error | `ToolException` raised | tools.py:conversion |
| Unknown content type | `ValueError` raised | tools.py:content |
| Audio content | `NotImplementedError` raised | tools.py:audio |
| Missing session | `ValueError` raised | tools.py:session |
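The first row's error path can be sketched as follows: when `isError` is set, the text blocks are joined into a single message and raised. `ToolException` is defined locally here as a stand-in; the real one comes from langchain-core, and the helper name is hypothetical.

```python
# Sketch of the isError branch: join text blocks, raise as an exception.
class ToolException(Exception):
    """Local stand-in for langchain_core.tools.ToolException."""

def raise_if_error(result: dict) -> None:
    if result.get("isError"):
        message = "\n".join(
            block["text"]
            for block in result.get("content", [])
            if block.get("type") == "text"
        )
        raise ToolException(message)

try:
    raise_if_error({"isError": True, "content": [{"type": "text", "text": "boom"}]})
except ToolException as exc:
    print(exc)  # boom
```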

---

## Integration Patterns

### LangGraph Integration

```mermaid
graph LR
    A[StateGraph] --> B[call_model]
    B --> C[tools_condition]
    C --> D{Tool Node?}
    D -->|Yes| E[ToolNode]
    D -->|No| F[End]
    E --> G[Tools]
    G --> B
```

### Response Format

The tool uses `response_format="content_and_artifact"` to return both content and structured data:

```python
return StructuredTool(
    ...
    response_format="content_and_artifact",
)
```

---

## Configuration Reference

### MultiServerMCPClient Configuration

```python
MultiServerMCPClient(
    connections={
        "server_name": {
            "transport": "stdio|http|sse|websocket",
            # Transport-specific options
        }
    },
    callbacks=Callbacks(),      # Optional
    tool_interceptors=[],       # Optional
    tool_name_prefix=False      # Optional
)
```

### Transport Configurations

| Transport | Required Options | Optional Options |
|-----------|-----------------|------------------|
| `stdio` | `command`, `args` | `env`, `cwd`, `encoding` |
| `http` | `url` | `headers`, `timeout` |
| `sse` | `url` | `headers`, `timeout` |
| `websocket` | `url` | `headers`, `timeout` |

---

## Summary

The langchain-mcp-adapters library implements a clean, layered architecture:

1. **Client Layer**: `MultiServerMCPClient` provides high-level API for managing multiple server connections
2. **Session Layer**: Multiple transport implementations (`Stdio`, `HTTP`, `SSE`, `WebSocket`) handle protocol details
3. **Adapters Layer**: `tools.py`, `resources.py`, and `prompts.py` convert between MCP and LangChain formats
4. **Interceptor Layer**: `interceptors.py` enables customization of the tool call lifecycle
5. **Core Layer**: Type definitions and conversion utilities provide the foundation

The architecture prioritizes:
- **Extensibility**: Through the interceptor system
- **Flexibility**: Multiple transport and connection options
- **Type Safety**: Comprehensive type annotations and Pydantic models
- **Integration**: Seamless LangChain and LangGraph compatibility

---

<a id='page-package-structure'></a>

## Package Structure

### Related Pages

Related topics: [System Architecture](#page-architecture), [Tool Conversion](#page-tool-conversion)

<details>
<summary>Related source files</summary>

The following source files were used to generate this page:

- [langchain_mcp_adapters/tools.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)
- [langchain_mcp_adapters/client.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/client.py)
- [langchain_mcp_adapters/sessions.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/sessions.py)
- [langchain_mcp_adapters/callbacks.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/callbacks.py)
- [langchain_mcp_adapters/interceptors.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/interceptors.py)
- [langchain_mcp_adapters/resources.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/resources.py)
- [langchain_mcp_adapters/prompts.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/prompts.py)
</details>

# Package Structure

## Overview

The `langchain-mcp-adapters` package provides a lightweight wrapper that makes [Anthropic Model Context Protocol (MCP)](https://modelcontextprotocol.io/introduction) tools compatible with [LangChain](https://github.com/langchain-ai/langchain) and [LangGraph](https://github.com/langchain-ai/langgraph). The package bridges MCP servers with LangChain applications by converting MCP tools, prompts, and resources into LangChain-compatible formats.

Sources: [README.md]()

## Package Architecture

The package follows a modular architecture with distinct responsibilities for each module:

```mermaid
graph TD
    subgraph "langchain_mcp_adapters Package"
        A["__init__.py<br/>Package Entry"] --> B["client.py<br/>MultiServerMCPClient"]
        B --> C["sessions.py<br/>Connection Management"]
        B --> D["tools.py<br/>Tool Conversion"]
        B --> E["resources.py<br/>Resource Conversion"]
        B --> F["prompts.py<br/>Prompt Loading"]
        C --> G["callbacks.py<br/>Callback Handling"]
        C --> H["interceptors.py<br/>Tool Call Interceptors"]
    end
    
    I["MCP Servers"] --> C
    D --> J["LangChain Tools"]
    E --> K["LangChain Blobs"]
    F --> L["LangChain Prompts"]
```

## Directory Structure

```
langchain_mcp_adapters/
├── __init__.py          # Package initialization and exports
├── client.py            # MultiServerMCPClient for managing multiple servers
├── tools.py             # MCP to LangChain tool conversion
├── resources.py         # MCP resource to Blob conversion
├── prompts.py           # MCP prompt loading
├── sessions.py          # Connection handling for different transports
├── callbacks.py         # Event and notification callbacks
└── interceptors.py      # Tool call interception and modification
```

## Core Modules

### 1. tools.py — Tool Conversion

The `tools.py` module handles conversion of MCP tools to LangChain-compatible tools.

| Component | Purpose |
|-----------|---------|
| `load_mcp_tools()` | Loads all available MCP tools and converts them to LangChain tools |
| `_convert_mcp_content_to_lc_block()` | Converts MCP content blocks (Text, Image, Audio, Resource) to LangChain content blocks |
| `_convert_call_tool_result()` | Converts MCP CallToolResult to LangChain tool result format |
| `MCPToolArtifact` | TypedDict wrapping structured content from MCP tool calls |

**Key Type Definitions:**

```python
ToolMessageContentBlock = TextContentBlock | ImageContentBlock | FileContentBlock

ConvertedToolResult = list[ToolMessageContentBlock] | ToolMessage | Command  # if langgraph installed
```

Sources: [langchain_mcp_adapters/tools.py:1-150]()

### 2. client.py — MultiServerMCPClient

The `client.py` module provides the `MultiServerMCPClient` class for managing connections to multiple MCP servers.

| Parameter | Type | Description |
|-----------|------|-------------|
| `connections` | `dict[str, Connection]` | Dictionary mapping server names to connection configurations |
| `callbacks` | `Callbacks` | Optional callbacks for handling notifications |
| `tool_interceptors` | `list[ToolCallInterceptor]` | Optional interceptors for tool call processing |
| `tool_name_prefix` | `bool` | Prefix tool names with server name (default: `False`) |

**Supported Connection Configurations:**

The client supports multiple transport types with their respective parameters:

| Transport | Required Parameters |
|-----------|---------------------|
| `stdio` | `command`, `args` |
| `http` | `url` |
| `sse` | `url`, optional `headers` |
| `streamable_http` | `url`, optional `headers` |
| `websocket` | `url` |

Sources: [langchain_mcp_adapters/client.py:1-100]()

### 3. sessions.py — Connection Management

The `sessions.py` module handles connection management for different MCP transport types.

| Connection Type | Class | Purpose |
|-----------------|-------|---------|
| Stdio | `StdioConnection` | stdio-based communication with subprocess |
| HTTP | `McpHttpClientFactory`, `StreamableHttpConnection` | HTTP-based communication |
| SSE | `SSEConnection` | Server-Sent Events transport |
| WebSocket | `WebsocketConnection` | WebSocket-based communication |

**Session Creation Flow:**

```mermaid
graph TD
    A["create_session()"] --> B{"Connection Type?"}
    B -->|Stdio| C["_create_stdio_session()"]
    B -->|HTTP| D["_create_http_session()"]
    B -->|SSE| E["_create_sse_session()"]
    B -->|WebSocket| F["_create_websocket_session()"]
    
    C --> G["ClientSession"]
    D --> G
    E --> G
    F --> G
```

The `create_session()` function returns an async generator that yields an initialized `ClientSession`:

```python
@asynccontextmanager
async def create_session(connection: Connection) -> AsyncIterator[ClientSession]
```

**Environment Variable Expansion:**

Sessions support environment variable expansion in configuration values using `${VAR}` or `${VAR:default}` syntax.

Sources: [langchain_mcp_adapters/sessions.py:1-100]()
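The `${VAR}` / `${VAR:default}` syntax can be implemented with a small regex substitution. The sketch below is illustrative; the library's actual expansion code in `sessions.py` may differ in details such as nesting or error handling.

```python
# Illustrative implementation of ${VAR} / ${VAR:default} expansion.
import os
import re

_VAR_PATTERN = re.compile(r"\$\{(\w+)(?::([^}]*))?\}")

def expand_env_vars(value: str) -> str:
    def replace(match: re.Match) -> str:
        name, default = match.group(1), match.group(2)
        # Fall back to the inline default (or empty string) when unset.
        return os.environ.get(name, default if default is not None else "")
    return _VAR_PATTERN.sub(replace, value)

os.environ["API_HOST"] = "example.com"
print(expand_env_vars("https://${API_HOST}/mcp"))  # https://example.com/mcp
print(expand_env_vars("${MISSING_PORT:8080}"))
```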

### 4. resources.py — Resource Conversion

The `resources.py` module converts MCP resources into LangChain Blob objects.

| Function | Purpose |
|----------|---------|
| `convert_mcp_resource_to_langchain_blob()` | Converts a single MCP resource content to a Blob |
| `get_mcp_resource()` | Fetches a single MCP resource by URI |
| `load_mcp_resources()` | Loads multiple MCP resources and converts them to Blobs |

**Supported Content Types:**

| MCP Type | Conversion |
|----------|------------|
| `TextResourceContents` | Raw text data |
| `BlobResourceContents` | Base64-decoded binary data |

Sources: [langchain_mcp_adapters/resources.py:1-80]()

### 5. prompts.py — Prompt Loading

The `prompts.py` module handles loading MCP prompts into LangChain prompt formats. The module provides functionality to convert MCP prompt definitions into LangChain-compatible prompt structures.

Sources: [langchain_mcp_adapters/prompts.py:1-50]()

### 6. callbacks.py — Callback Handling

The `callbacks.py` module provides callback infrastructure for handling notifications and events during MCP operations.

| Component | Purpose |
|-----------|---------|
| `Callbacks` | Main callback container class |
| `CallbackContext` | Context passed to callbacks with server/tool information |

The `CallbackContext` dataclass holds:

```python
@dataclass
class CallbackContext:
    server_name: str | None = None
    tool_name: str | None = None
```

Sources: [langchain_mcp_adapters/callbacks.py:1-60]()

### 7. interceptors.py — Tool Call Interceptors

The `interceptors.py` module provides interceptor interfaces for wrapping and controlling MCP tool call execution.

| Component | Purpose |
|-----------|---------|
| `ToolCallInterceptor` | Protocol for intercepting tool calls |
| `MCPToolCallRequest` | Request object passed to interceptors |
| `_build_interceptor_chain()` | Builds composed handler chain with interceptors in onion pattern |

**Interceptor Pattern:**

```mermaid
graph TD
    A["Request"] --> B["Interceptor 1<br/>(Outer Layer)"]
    B --> C["Interceptor 2"]
    C --> D["..."]
    D --> E["Interceptor N"]
    E --> F["execute_tool<br/>(Innermost)"]
    F --> G["Result"]
    G --> E2["Interceptor N<br/>(post)"]
    E2 --> D2["..."]
    D2 --> C2["Interceptor 2<br/>(post)"]
    C2 --> B2["Interceptor 1<br/>(post)"]
    B2 --> H["Response"]
```

The interceptor chain follows an onion pattern where each interceptor wraps the next, allowing pre-processing before and post-processing after tool execution.

**MCPToolCallRequest Structure:**

```python
@dataclass
class MCPToolCallRequest:
    name: str
    args: dict[str, Any]
    server_name: str
    headers: dict[str, Any] | None
    runtime: Any
```

**Result Type (Conditional):**

```python
if LANGGRAPH_PRESENT:
    MCPToolCallResult = CallToolResult | ToolMessage | Command
else:
    MCPToolCallResult = CallToolResult | ToolMessage
```

Sources: [langchain_mcp_adapters/interceptors.py:1-80]()

## Data Flow Architecture

### Tool Execution Flow

```mermaid
sequenceDiagram
    participant User
    participant MultiServerMCPClient
    participant load_mcp_tools
    participant ToolCallInterceptor
    participant ClientSession
    participant MCPServer
    
    User->>MultiServerMCPClient: get_tools()
    MultiServerMCPClient->>load_mcp_tools: load_mcp_tools(session)
    load_mcp_tools->>load_mcp_tools: Create StructuredTool
    Note over load_mcp_tools: Register call_tool coroutine
    
    User->>StructuredTool: invoke(args)
    StructuredTool->>load_mcp_tools: call_tool(args)
    
    alt With Interceptors
        load_mcp_tools->>ToolCallInterceptor: intercept(request)
        ToolCallInterceptor->>ToolCallInterceptor: modify/validate
    end
    
    load_mcp_tools->>ClientSession: call_tool(name, args)
    ClientSession->>MCPServer: MCP CallToolRequest
    MCPServer-->>ClientSession: CallToolResult
    ClientSession-->>load_mcp_tools: CallToolResult
    
    alt Error Result
        load_mcp_tools->>load_mcp_tools: Check isError flag
        Note over load_mcp_tools: raise ToolException
    end
    
    load_mcp_tools->>_convert_call_tool_result: format result
    Note over load_mcp_tools: Convert content blocks to LC format
    
    load_mcp_tools-->>User: (content, artifact)
```

### Content Conversion Flow

```mermaid
graph LR
    subgraph "MCP Content Types"
        A["TextContent"]
        B["ImageContent"]
        C["AudioContent"]
        D["ResourceLink"]
        E["EmbeddedResource"]
    end
    
    subgraph "Conversion Functions"
        F["_convert_mcp_content_to_lc_block"]
    end
    
    subgraph "LangChain Content Blocks"
        G["TextContentBlock"]
        H["ImageContentBlock"]
        I["FileContentBlock"]
    end
    
    A --> F
    B --> F
    D --> F
    E --> F
    C -.->|NotImplementedError| F
    
    F --> G
    F --> H
    F --> I
```

## Type System

### Conditional Type Definitions

The package uses conditional type definitions based on whether `langgraph` is installed:

```python
try:
    from langgraph.types import Command
    LANGGRAPH_PRESENT = True
except ImportError:
    LANGGRAPH_PRESENT = False
```

| Type | Without langgraph | With langgraph |
|------|-------------------|----------------|
| `ConvertedToolResult` | `list[ToolMessageContentBlock] \| ToolMessage` | `list[ToolMessageContentBlock] \| ToolMessage \| Command` |
| `MCPToolCallResult` | `CallToolResult \| ToolMessage` | `CallToolResult \| ToolMessage \| Command` |

## Error Handling

### Tool Exceptions

| Error Type | Trigger | Behavior |
|------------|---------|----------|
| `ToolException` | MCP tool returns `isError: true` | Raised with joined error message from content blocks |
| `NotImplementedError` | AudioContent conversion attempted | Audio content is not yet supported |
| `ValueError` | Unknown content type | Unknown MCP content types raise ValueError |

### Connection Errors

| Error Type | Condition |
|------------|-----------|
| `ValueError` | Neither session nor connection provided to `load_mcp_tools()` |

## Configuration Options

### Tool Loading Options

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `session` | `ClientSession` | `None` | MCP client session |
| `connection` | `Connection` | `None` | Connection config for new session |
| `callbacks` | `Callbacks` | `None` | Event callbacks |
| `tool_interceptors` | `list[ToolCallInterceptor]` | `None` | Tool call interceptors |
| `server_name` | `str` | `None` | Server name for context |
| `tool_name_prefix` | `bool` | `False` | Prefix tool names with server |

### Client Configuration

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `connections` | `dict[str, Connection]` | `{}` | Server connection configs |
| `callbacks` | `Callbacks` | `Callbacks()` | Default callbacks |
| `tool_interceptors` | `list[ToolCallInterceptor]` | `[]` | Default interceptors |
| `tool_name_prefix` | `bool` | `False` | Prefix tool names |

## Dependencies

### Required Dependencies

| Package | Purpose |
|---------|---------|
| `langchain-core` | LangChain core functionality and BaseTool |
| `mcp` | MCP client SDK |
| `pydantic` | Data validation and model creation |

### Optional Dependencies

| Package | Feature |
|---------|---------|
| `langgraph` | LangGraph Command support, enhanced state management |

## Package Exports

The `__init__.py` exports the main public API:

- `MultiServerMCPClient` - Multi-server client class
- `load_mcp_tools` - Tool loading function
- `load_mcp_resources` - Resource loading function
- `load_mcp_prompt` - Prompt loading function
- `Callbacks`, `CallbackContext` - Callback infrastructure
- `ToolCallInterceptor` - Interceptor protocol
- `Connection` - Connection configuration types

Sources: [langchain_mcp_adapters/__init__.py:1-20]()

---

<a id='page-tool-conversion'></a>

## Tool Conversion

### Related Pages

Related topics: [MultiServerMCPClient](#page-multiserver-client), [Transport Types](#page-transport-types), [Package Structure](#page-package-structure)

<details>
<summary>Related source files</summary>

The following source files were used to generate this page:

- [langchain_mcp_adapters/tools.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)
- [langchain_mcp_adapters/client.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/client.py)
- [langchain_mcp_adapters/interceptors.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/interceptors.py)
- [langchain_mcp_adapters/resources.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/resources.py)
- [README.md](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)
</details>

# Tool Conversion

## Overview

Tool Conversion is the core mechanism that bridges **MCP (Model Context Protocol)** tools with **LangChain tools**, enabling interoperability between the MCP ecosystem and LangChain/LangGraph agents. This adapter transforms native MCP tool definitions into LangChain-compatible `StructuredTool` instances that can be used with LangChain agents and LangGraph state machines.

The conversion layer handles:

- Tool signature translation (MCP schema → LangChain Pydantic schema)
- Tool execution with proper session context
- Content block conversion (MCP content types → LangChain content blocks)
- Error handling and artifact wrapping
- Interceptor chain support for middleware patterns

Sources: [langchain_mcp_adapters/tools.py:1-30]()

## Architecture

```mermaid
graph TD
    subgraph "MCP Layer"
        MCPTool[MCP Tool Definition]
        MCPToolCallResult[MCP CallToolResult]
    end
    
    subgraph "Adapter Layer"
        convert_mcp_tool[convert_mcp_tool_to_langchain_tool]
        load_mcp_tools[load_mcp_tools]
        interceptor_chain[Interceptor Chain]
        content_converter[_convert_mcp_content_to_lc_block]
        result_converter[_convert_call_tool_result]
    end
    
    subgraph "LangChain Layer"
        StructuredTool[StructuredTool]
        ToolMessage[ToolMessage]
        Command[Command<br/>langgraph.types]
        MCPToolArtifact[MCPToolArtifact]
    end
    
    MCPTool --> convert_mcp_tool
    MCPTool --> load_mcp_tools
    load_mcp_tools --> convert_mcp_tool
    convert_mcp_tool --> interceptor_chain
    interceptor_chain --> content_converter
    MCPToolCallResult --> result_converter
    result_converter --> ToolMessage
    result_converter --> Command
    result_converter --> MCPToolArtifact
```

### Conversion Flow

```mermaid
sequenceDiagram
    participant Agent as LangChain Agent
    participant LC_Tool as LangChain StructuredTool
    participant Interceptor as ToolCallInterceptor
    participant MCP_Session as MCP ClientSession
    participant MCP_Server as MCP Server

    Agent->>LC_Tool: invoke(name, args)
    LC_Tool->>Interceptor: MCPToolCallRequest
    Interceptor->>Interceptor: preprocess()
    Interceptor->>MCP_Session: call_tool()
    MCP_Session->>MCP_Server: protocol call
    MCP_Server-->>MCP_Session: CallToolResult
    MCP_Session-->>Interceptor: MCPToolCallResult
    Interceptor->>Interceptor: postprocess()
    Interceptor-->>LC_Tool: Converted Result
    LC_Tool->>LC_Tool: _convert_call_tool_result()
    LC_Tool-->>Agent: (content, artifact)
```

Sources: [langchain_mcp_adapters/tools.py:140-220]()

## Core Functions

### load_mcp_tools

Loads all available MCP tools from a session and converts them to LangChain tools.

```python
async def load_mcp_tools(
    session: ClientSession | None,
    *,
    connection: Connection | None = None,
    callbacks: Callbacks | None = None,
    tool_interceptors: list[ToolCallInterceptor] | None = None,
    server_name: str | None = None,
    tool_name_prefix: bool = False,
) -> list[BaseTool]
```

**Parameters:**

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `session` | `ClientSession \| None` | required | MCP client session. If `None`, `connection` must be provided. |
| `connection` | `Connection \| None` | `None` | Connection config to create a new session if session is `None`. |
| `callbacks` | `Callbacks \| None` | `None` | Optional callbacks for handling notifications and events. |
| `tool_interceptors` | `list[ToolCallInterceptor] \| None` | `None` | Optional list of interceptors for tool call processing. |
| `server_name` | `str \| None` | `None` | Name of the server these tools belong to. |
| `tool_name_prefix` | `bool` | `False` | If `True`, tool names are prefixed with server name (e.g., `"weather_search"`). |

Sources: [langchain_mcp_adapters/tools.py:219-270]()

### convert_mcp_tool_to_langchain_tool

Converts a single MCP tool to a LangChain `StructuredTool`.

```python
def convert_mcp_tool_to_langchain_tool(
    session: ClientSession | None,
    tool: MCPTool,
    *,
    connection: Connection | None = None,
    callbacks: Callbacks | None = None,
    tool_interceptors: list[ToolCallInterceptor] | None = None,
    server_name: str | None = None,
    tool_name_prefix: bool = False,
) -> BaseTool
```

**Returns:**
A LangChain `StructuredTool` with `response_format="content_and_artifact"`.

**Key Implementation Details:**

- Creates an async `call_tool` coroutine that handles execution
- Injects `runtime` via `InjectedToolArg` for LangGraph compatibility
- Supports `ToolCallInterceptor` chain via `_build_interceptor_chain()`
- Wraps errors as `ToolException`
- Extracts `structuredContent` into `MCPToolArtifact`

Sources: [langchain_mcp_adapters/tools.py:150-218]()

## Content Block Conversion

The adapter converts MCP content types to LangChain content blocks for uniform handling.

### Supported Conversions

| MCP Content Type | LangChain Content Block | Notes |
|------------------|-------------------------|-------|
| `TextContent` | `{"type": "text", "text": ...}` | Direct text conversion |
| `ImageContent` | `{"type": "image", "base64": ..., "mime_type": ...}` | Base64 encoded image data |
| `ResourceLink` (image/*) | `{"type": "image", "url": ..., "mime_type": ...}` | Image via URI reference |
| `ResourceLink` (other) | `{"type": "file", "url": ..., "mime_type": ...}` | Generic file via URI reference |
| `EmbeddedResource` (text) | `{"type": "text", "text": ...}` | Text from embedded resource |
| `EmbeddedResource` (blob) | Image or file block | Based on MIME type |
| `AudioContent` | — | Raises `NotImplementedError` |

资料来源：[langchain_mcp_adapters/tools.py:70-115]()

### _convert_mcp_content_to_lc_block

```python
def _convert_mcp_content_to_lc_block(
    content: ContentBlock,
) -> ToolMessageContentBlock
```

This function handles the 1:1 mapping between MCP content types and LangChain content blocks.

```mermaid
graph LR
    A[ContentBlock] --> B{Type Check}
    B -->|TextContent| C[create_text_block]
    B -->|ImageContent| D[create_image_block]
    B -->|ResourceLink| E{MIME type?}
    B -->|EmbeddedResource| F{Resource Type?}
    B -->|AudioContent| G[NotImplementedError]
    
    E -->|image/*| H[create_image_block<br/>url=uri]
    E -->|other| I[create_file_block<br/>url=uri]
    
    F -->|TextResourceContents| J[create_text_block]
    F -->|BlobResourceContents| K{MIME type?}
    K -->|image/*| L[create_image_block]
    K -->|other| M[create_file_block]
```

资料来源：[langchain_mcp_adapters/tools.py:70-115]()

## Result Conversion

### _convert_call_tool_result

Converts the result of an MCP tool call to LangChain format with support for multiple return types.

```python
def _convert_call_tool_result(
    call_tool_result: MCPToolCallResult,
) -> tuple[ConvertedToolResult, MCPToolArtifact | None]
```

**Return Types:**

The function returns a tuple where:
- **First element**: The converted content
- **Second element**: The artifact (if any)

**Content Types Based on Input:**

| Input Type | Output Content | Output Artifact |
|------------|----------------|-----------------|
| `ToolMessage` | `ToolMessage` (passthrough) | `None` |
| `Command` (LangGraph) | `Command` (passthrough) | `None` |
| `CallToolResult` (MCP) | `list[ToolMessageContentBlock]` | `MCPToolArtifact` (if `structuredContent` present) |

资料来源：[langchain_mcp_adapters/tools.py:117-145]()

### MCPToolArtifact

A TypedDict wrapping structured content from MCP tool calls:

```python
class MCPToolArtifact(TypedDict):
    """Artifact returned from MCP tool calls."""
    structured_content: dict[str, Any]
```

This allows downstream consumers to access MCP-specific structured data while maintaining compatibility with LangChain's tool result format.

资料来源：[langchain_mcp_adapters/tools.py:55-68]()

## Interceptor Chain

The interceptor system implements the **onion pattern** for middleware-like processing of tool calls.

### _build_interceptor_chain

```python
def _build_interceptor_chain(
    base_handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
    tool_interceptors: list[ToolCallInterceptor] | None,
) -> Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]]
```

**Execution Order:**
1. Handlers are wrapped in **reverse list order**, so the first interceptor in the list becomes the outermost layer
2. Each interceptor wraps the handler built so far
3. The request flows inward through the interceptors; the response flows back outward

```mermaid
graph TD
    subgraph "Request Flow (inward)"
        R1[Request] --> I1[Interceptor 1<br/>outermost]
        I1 --> I2[Interceptor 2]
        I2 --> I3[Interceptor N<br/>innermost]
        I3 --> BH[Base Handler<br/>execute_tool]
    end
    
    subgraph "Response Flow (outward)"
        BH --> RT1[Response]
        RT1 --> I4[Interceptor N]
        I4 --> I5[Interceptor 2]
        I5 --> I6[Interceptor 1]
        I6 --> R2[Response]
    end
```

资料来源：[langchain_mcp_adapters/tools.py:147-149]()
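The wrapping step can be illustrated with a minimal, stdlib-only sketch (`build_chain` and `make_interceptor` are illustrative names, not the library's API). Wrapping in reverse list order leaves the first interceptor as the outermost layer, matching the diagram:

```python
import asyncio
from collections.abc import Awaitable, Callable

Handler = Callable[[str], Awaitable[str]]

def build_chain(base: Handler, interceptors: list) -> Handler:
    handler = base
    # Wrap in reverse so the first interceptor in the list ends up outermost.
    for interceptor in reversed(interceptors):
        handler = interceptor(handler)
    return handler

def make_interceptor(name: str):
    def wrap(next_handler: Handler) -> Handler:
        async def intercepted(request: str) -> str:
            # Request flows inward (annotate it), response flows outward (wrap it).
            response = await next_handler(f"{request}->{name}")
            return f"{name}({response})"
        return intercepted
    return wrap

async def base_handler(request: str) -> str:
    return f"base[{request}]"

chain = build_chain(base_handler, [make_interceptor("A"), make_interceptor("B")])
result = asyncio.run(chain("req"))
print(result)  # A(B(base[req->A->B]))
```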

### ToolCallInterceptor Interface

Interceptors implement the `ToolCallInterceptor` protocol:

```python
@runtime_checkable
class ToolCallInterceptor(Protocol):
    async def intercept(
        self,
        request: MCPToolCallRequest,
        current_handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
    ) -> MCPToolCallResult:
        ...
```

**Usage Pattern:**

```python
from collections.abc import Awaitable, Callable

from langchain_mcp_adapters.interceptors import (
    MCPToolCallRequest,
    MCPToolCallResult,
)


class MyInterceptor:
    async def intercept(
        self,
        request: MCPToolCallRequest,
        current_handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
    ) -> MCPToolCallResult:
        # Pre-processing: adjust the request before it reaches the server
        modified_request = request.override(args={"modified": True})

        # Call the next handler in the chain
        result = await current_handler(modified_request)

        # Post-processing: inspect or modify the result before returning it
        return result
```

资料来源：[langchain_mcp_adapters/interceptors.py:1-50]()

## Type Definitions

### ConvertedToolResult

Conditional type based on LangGraph availability:

```python
if LANGGRAPH_PRESENT:
    ConvertedToolResult = list[ToolMessageContentBlock] | ToolMessage | Command
else:
    ConvertedToolResult = list[ToolMessageContentBlock] | ToolMessage
```

### ToolMessageContentBlock

```python
ToolMessageContentBlock = TextContentBlock | ImageContentBlock | FileContentBlock
```

These content block types are imported from `langchain_core.messages.content`.

资料来源：[langchain_mcp_adapters/tools.py:15-35]()

## Configuration Options

### Tool Name Prefixing

When connecting to multiple MCP servers, tools may have name conflicts. Enable prefixing:

```python
client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            "args": ["/path/to/math_server.py"],
            "transport": "stdio",
        },
        "weather": {
            "url": "http://localhost:8000/mcp",
            "transport": "http",
        }
    },
    tool_name_prefix=True,
)

# With prefix: tool names become "math_add", "weather_get_weather"
tools = await client.get_tools()
```

### Session Management

| Mode | Description | Use Case |
|------|-------------|----------|
| **Shared Session** | Single session for all tools | Single server, multiple tools |
| **Per-Tool Session** | New session created per call | Stateless servers |
| **Explicit Session** | User-managed session | Custom lifecycle control |

资料来源：[langchain_mcp_adapters/client.py:1-80]()
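The per-tool session mode can be sketched with an async context manager. This is a stdlib-only illustration; `fake_session` is a hypothetical stand-in for the real `create_session(connection)`:

```python
import asyncio
from contextlib import asynccontextmanager

@asynccontextmanager
async def fake_session(server: str):
    # Hypothetical stand-in for create_session(connection): connect, yield, clean up.
    yield f"session:{server}"

async def call_with_fresh_session(server: str, tool: str) -> str:
    # Per-tool session mode: each tool call opens (and closes) its own session.
    async with fake_session(server) as session:
        return f"{session} ran {tool}"

result = asyncio.run(call_with_fresh_session("math", "add"))
print(result)  # session:math ran add
```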

## Error Handling

### ToolException

Tool call errors are wrapped in `ToolException`:

```python
if call_tool_result.isError:
    error_parts = []
    for item in tool_content:
        if isinstance(item, str):
            error_parts.append(item)
        elif isinstance(item, dict) and item.get("type") == "text":
            error_parts.append(item.get("text", ""))
    error_msg = "\n".join(error_parts) if error_parts else str(tool_content)
    raise ToolException(error_msg)
```

资料来源：[langchain_mcp_adapters/tools.py:130-140]()
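Wrapped as a runnable sketch (with a local `ToolException` stand-in for `langchain_core.tools.ToolException`), the extraction logic behaves like this:

```python
class ToolException(Exception):
    # Local stand-in for langchain_core.tools.ToolException.
    pass

def raise_if_error(is_error: bool, tool_content: list) -> None:
    if not is_error:
        return
    error_parts = []
    for item in tool_content:
        if isinstance(item, str):
            error_parts.append(item)
        elif isinstance(item, dict) and item.get("type") == "text":
            error_parts.append(item.get("text", ""))
    raise ToolException("\n".join(error_parts) if error_parts else str(tool_content))

try:
    raise_if_error(True, [{"type": "text", "text": "division by zero"}])
except ToolException as exc:
    message = str(exc)
print(message)  # division by zero
```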

## Usage Examples

### Basic Tool Loading

```python
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

from langchain_mcp_adapters.tools import load_mcp_tools

server_params = StdioServerParameters(
    command="python",
    args=["/path/to/math_server.py"],
)

async with stdio_client(server_params) as (read, write):
    async with ClientSession(read, write) as session:
        await session.initialize()
        tools = await load_mcp_tools(session)
```

### With LangGraph Agent

```python
from langchain.chat_models import init_chat_model
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition

model = init_chat_model("openai:gpt-4.1")

client = MultiServerMCPClient({
    "math": {
        "command": "python",
        "args": ["/path/to/math_server.py"],
        "transport": "stdio",
    }
})
tools = await client.get_tools()

def call_model(state: MessagesState):
    response = model.bind_tools(tools).invoke(state["messages"])
    return {"messages": response}

builder = StateGraph(MessagesState)
builder.add_node("call_model", call_model)
builder.add_node("tools", ToolNode(tools))
builder.add_edge(START, "call_model")
builder.add_conditional_edges("call_model", tools_condition)
graph = builder.compile()
```

资料来源：[README.md:1-100]()

## See Also

- [MultiServerMCPClient](#page-multiserver-client) — Client for connecting to multiple MCP servers
- [Tool Call Interceptors](#page-interceptors) — Middleware for tool call processing
- Resource Conversion — Converting MCP resources to LangChain Blobs

---

<a id='page-multiserver-client'></a>

## MultiServerMCPClient

### 相关页面

相关主题：[Tool Conversion](#page-tool-conversion), [Transport Types](#page-transport-types), [Callbacks](#page-callbacks)

<details>
<summary>相关源码文件</summary>

以下源码文件用于生成本页说明：

- [langchain_mcp_adapters/client.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/client.py)
- [langchain_mcp_adapters/tools.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)
- [langchain_mcp_adapters/resources.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/resources.py)
- [langchain_mcp_adapters/sessions.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/sessions.py)
- [langchain_mcp_adapters/interceptors.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/interceptors.py)
- [README.md](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)
</details>

# MultiServerMCPClient

The `MultiServerMCPClient` is the primary entry point for connecting LangChain applications to multiple Model Context Protocol (MCP) servers. It provides a unified interface to manage connections, load tools, resources, and prompts from various MCP server implementations.

## Overview

`MultiServerMCPClient` serves as a central client that abstracts the complexity of connecting to multiple MCP servers simultaneously. It handles session management, tool conversion, and integrates seamlessly with LangChain and LangGraph agents.

```mermaid
graph TD
    A[MultiServerMCPClient] --> B[Connection Manager]
    B --> C[StdioConnection]
    B --> D[SSEConnection]
    B --> E[StreamableHttpConnection]
    B --> F[WebsocketConnection]
    G[load_mcp_tools] --> H[LangChain Tools]
    I[load_mcp_resources] --> J[LangChain Blobs]
    K[load_mcp_prompts] --> L[LangChain Messages]
```

## Initialization

### Constructor Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `connections` | `dict[str, Connection] \| None` | `None` | Mapping of server names to connection configurations |
| `callbacks` | `Callbacks \| None` | `None` | Optional callbacks for notifications and events |
| `tool_interceptors` | `list[ToolCallInterceptor] \| None` | `None` | Optional interceptors for modifying tool requests/responses |
| `tool_name_prefix` | `bool` | `False` | Prefix tool names with server name to avoid conflicts |

### Connection Configuration

Each server in the `connections` dictionary requires a transport-specific configuration:

| Transport | Required Parameters |
|-----------|---------------------|
| `stdio` | `command`, `args` |
| `http` | `url` |
| `sse` | `url` |
| `streamable_http` | `url` |
| `websocket` | `url` |

**资料来源**：[client.py:51-76]()
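A small validator makes the required-parameter table concrete. This is a hypothetical sketch; the library performs its own validation through typed connection classes:

```python
REQUIRED_PARAMS = {
    "stdio": {"command", "args"},
    "http": {"url"},
    "sse": {"url"},
    "streamable_http": {"url"},
    "websocket": {"url"},
}

def validate_connection(config: dict) -> None:
    """Check that a connection dict carries its transport's required keys."""
    transport = config.get("transport")
    if transport not in REQUIRED_PARAMS:
        raise ValueError(f"Unknown transport: {transport!r}")
    missing = REQUIRED_PARAMS[transport] - config.keys()
    if missing:
        raise ValueError(f"{transport} connection missing: {sorted(missing)}")

validate_connection({"transport": "stdio", "command": "python", "args": []})
try:
    validate_connection({"transport": "http"})
except ValueError as exc:
    error = str(exc)
print(error)  # http connection missing: ['url']
```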

## Connection Types

The library supports multiple transport protocols for connecting to MCP servers.

### StdioConnection

Used for spawning local MCP server processes via standard I/O.

```python
client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            "args": ["/path/to/math_server.py"],
            "transport": "stdio",
        }
    }
)
```

**资料来源**：[README.md:82-90]()

### HTTP/Streamable HTTP Connection

Used for connecting to HTTP-based MCP servers, including stateless streamable HTTP servers.

```python
client = MultiServerMCPClient(
    {
        "weather": {
            "url": "http://localhost:8000/mcp",
            "transport": "http",
        }
    }
)
```

**资料来源**：[README.md:37-45]()

### WebSocket Connection

For WebSocket-based MCP server connections.

### SSE Connection

Server-Sent Events transport for MCP server communication.

## Usage Patterns

### Basic Usage with get_tools()

The simplest pattern starts a new session for each tool call:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            "args": ["/path/to/math_server.py"],
            "transport": "stdio",
        },
        "weather": {
            "url": "http://localhost:8000/mcp",
            "transport": "http",
        }
    }
)
all_tools = await client.get_tools()
```

**资料来源**：[client.py:51-74]()

### Explicit Session Management

For more control, use explicit session management:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.tools import load_mcp_tools

client = MultiServerMCPClient({...})
async with client.session("math") as session:
    tools = await load_mcp_tools(session)
```

**资料来源**：[client.py:75-81]()

### With LangGraph StateGraph

Integration with LangGraph for agent-based workflows:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langgraph.graph import StateGraph, MessagesState, START
from langgraph.prebuilt import ToolNode, tools_condition
from langchain.chat_models import init_chat_model

model = init_chat_model("openai:gpt-4.1")

client = MultiServerMCPClient({...})
tools = await client.get_tools()

def call_model(state: MessagesState):
    response = model.bind_tools(tools).invoke(state["messages"])
    return {"messages": response}

builder = StateGraph(MessagesState)
builder.add_node(call_model)
builder.add_node(ToolNode(tools))
builder.add_edge(START, "call_model")
builder.add_conditional_edges("call_model", tools_condition)
```

**资料来源**：[README.md:103-126]()

## Tool Name Prefixing

When `tool_name_prefix=True`, tool names are prefixed with the server name using an underscore separator:

```python
# With prefix: "weather_search"
# Without prefix: "search"
client = MultiServerMCPClient(
    {...},
    tool_name_prefix=True
)
```

This helps avoid conflicts when multiple servers expose tools with identical names.

**资料来源**：[client.py:48-51]()
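The prefixing rule itself is simple. The sketch below illustrates the naming convention only; `collect_tool_names` is not a library function:

```python
def collect_tool_names(servers: dict[str, list[str]], tool_name_prefix: bool) -> list[str]:
    """Apply the server-name prefix convention described above."""
    names = []
    for server, tools in servers.items():
        for tool in tools:
            names.append(f"{server}_{tool}" if tool_name_prefix else tool)
    return names

names = collect_tool_names({"math": ["add"], "weather": ["search"]}, tool_name_prefix=True)
print(names)  # ['math_add', 'weather_search']
```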

## Runtime Headers

For HTTP and SSE transports, you can pass custom headers for authentication or tracing:

```python
client = MultiServerMCPClient(
    {
        "weather": {
            "transport": "http",
            "url": "http://localhost:8000/mcp",
            "headers": {
                "Authorization": "Bearer YOUR_TOKEN",
                "X-Custom-Header": "custom-value"
            },
        }
    }
)
```

> Only `sse` and `http` transports support runtime headers.

**资料来源**：[README.md:129-152]()

## Tool Interceptors

Tool call interceptors allow you to modify requests and responses in an onion-pattern chain:

```python
from langchain_mcp_adapters.interceptors import (
    MCPToolCallRequest,
    MCPToolCallResult,
    ToolCallInterceptor
)

class CustomInterceptor(ToolCallInterceptor):
    async def intercept(
        self, request: MCPToolCallRequest, handler
    ) -> MCPToolCallResult:
        # Modify request
        modified_request = request.override(args={"modified": True})
        # Process and potentially modify response
        result = await handler(modified_request)
        return result

client = MultiServerMCPClient(
    {...},
    tool_interceptors=[CustomInterceptor()]
)
```

**资料来源**：[interceptors.py:1-55]()

## MCPToolArtifact

Tool call results that include structured content are wrapped in an `MCPToolArtifact`:

```python
class MCPToolArtifact(TypedDict):
    """Artifact returned from MCP tool calls.
    
    Attributes:
        structured_content: The structured content returned by the MCP tool,
            corresponding to the structuredContent field in CallToolResult.
    """
    structured_content: dict[str, Any]
```

**资料来源**：[tools.py:70-84]()

## Content Conversion

The library automatically converts MCP content blocks to LangChain content blocks:

| MCP Type | LangChain Type |
|----------|----------------|
| `TextContent` | `{"type": "text", "text": ...}` |
| `ImageContent` | `{"type": "image", ...}` |
| `ResourceLink` | `{"type": "image", ...}` or `{"type": "file", ...}` |
| `EmbeddedResource` | `{"type": "text", ...}`, `{"type": "image", ...}`, or `{"type": "file", ...}` |
| `AudioContent` | Raises `NotImplementedError` |

**资料来源**：[tools.py:86-126]()

## Limitations

### Async Context Manager Deprecation

As of version 0.1.0, `MultiServerMCPClient` cannot be used as an async context manager:

```python
# This is NOT allowed:
# async with MultiServerMCPClient(...) as client:
#     ...

# Instead use:
client = MultiServerMCPClient(...)
tools = await client.get_tools()
```

**资料来源**：[client.py:55-68]()

## Architecture

```mermaid
sequenceDiagram
    participant Client as MultiServerMCPClient
    participant Session as ClientSession
    participant Loader as load_mcp_tools
    participant Converter as Content Converter
    participant LC as LangChain Tool

    Client->>Session: create_session()
    Loader->>Session: list_tools()
    Session-->>Loader: tool definitions
    Loader->>Converter: tool metadata
    Converter->>LC: StructuredTool
    LC-->>Client: BaseTool list
```

## See Also

- [Tool Conversion](#page-tool-conversion) - Loading and converting MCP tools with `load_mcp_tools()`
- `load_mcp_resources()` - Loading MCP resources as Blobs
- `load_mcp_prompts()` - Loading MCP prompts as Messages
- [Tool Call Interceptors](#page-interceptors) - Intercepting tool calls

---

<a id='page-transport-types'></a>

## Transport Types

### 相关页面

相关主题：[MultiServerMCPClient](#page-multiserver-client)

<details>
<summary>相关源码文件</summary>

以下源码文件用于生成本页说明：

- [langchain_mcp_adapters/sessions.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/sessions.py)
- [README.md](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/README.md)
- [langchain_mcp_adapters/client.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/client.py)
- [langchain_mcp_adapters/tools.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)
</details>

# Transport Types

LangChain MCP Adapters supports multiple transport types for connecting to MCP (Model Context Protocol) servers. Transport types define the communication mechanism used between the client and server, enabling flexibility in different deployment scenarios.

## Overview

Transport types in langchain-mcp-adapters determine how MCP client sessions communicate with MCP servers. The library provides native support for four primary transport mechanisms, each suited for different use cases ranging from local development to production deployments.

```mermaid
graph TD
    A[MultiServerMCPClient] --> B{Transport Type}
    B --> C[stdio]
    B --> D[http]
    B --> E[sse]
    B --> F[websocket]
    
    C --> G[Local/Subprocess]
    D --> H[HTTP Server]
    E --> I[HTTP + SSE Events]
    F --> J[WebSocket Server]
    
    G --> K[StdioServerParameters]
    H --> L[URL + Headers]
    I --> M[URL + Headers]
    J --> N[URL]
```

资料来源：[langchain_mcp_adapters/client.py:1-50]()

## Supported Transport Types

| Transport | Use Case | Session Creation | Header Support | Timeout Config |
|-----------|----------|------------------|----------------|----------------|
| `stdio` | Local subprocesses, development | In-process via stdin/stdout | N/A | — (encoding handlers instead) |
| `http` | Remote HTTP servers, stateless | Streamable HTTP client | ✅ | Request timeout |
| `sse` | Server-Sent Events servers | HTTP + SSE endpoint | ✅ | Request + SSE read timeout |
| `websocket` | Real-time bidirectional | WebSocket connection | ❌ | — |

资料来源：[langchain_mcp_adapters/sessions.py:1-100]()

## Stdio Transport

The `stdio` transport uses standard input/output streams for communication. This is ideal for running MCP servers as local subprocesses or when the server runs on the same machine as the client.

### Configuration Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `command` | `str` | ✅ | Executable command (e.g., `"python"`, `"node"`) |
| `args` | `list[str]` | ❌ | Command-line arguments |
| `env` | `dict[str, str]` | ❌ | Environment variables |
| `cwd` | `str` | ❌ | Working directory |
| `encoding` | `str` | ❌ | Character encoding (default: system default) |
| `encoding_error_handler` | `str` | ❌ | How to handle encoding errors |
| `session_kwargs` | `dict` | ❌ | Additional `ClientSession` arguments |

资料来源：[langchain_mcp_adapters/sessions.py:60-90]()

### Environment Variable Expansion

The `env` parameter supports environment variable expansion in variable values:

```python
env = {
    "API_KEY": "${MY_API_KEY}",  # Expands from current environment
    "STATIC": "custom-value"     # Passed through unchanged
}
```

Variable references use the pattern `${VAR_NAME}`. Only values (not keys) are expanded. Unexpanded references trigger a warning.

资料来源：[langchain_mcp_adapters/sessions.py:80-85]()
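The expansion rule can be sketched as follows. This is a hypothetical illustration; `expand_env_values` is not the library's function name:

```python
import os
import re
import warnings

_VAR_PATTERN = re.compile(r"\$\{(\w+)\}")

def expand_env_values(env: dict[str, str]) -> dict[str, str]:
    """Expand ${VAR} references in values (never keys) from the current environment."""
    def replace(match: re.Match) -> str:
        name = match.group(1)
        if name not in os.environ:
            warnings.warn(f"Environment variable {name!r} is not set")
            return match.group(0)  # leave the reference unexpanded
        return os.environ[name]

    return {key: _VAR_PATTERN.sub(replace, value) for key, value in env.items()}

os.environ["MY_API_KEY"] = "secret-123"
expanded = expand_env_values({"API_KEY": "${MY_API_KEY}", "STATIC": "custom-value"})
print(expanded)  # {'API_KEY': 'secret-123', 'STATIC': 'custom-value'}
```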

### Example: Stdio Connection

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({
    "math": {
        "command": "python",
        "args": ["/path/to/math_server.py"],
        "transport": "stdio",
    }
})
tools = await client.get_tools()
```

资料来源：[README.md:80-100]()

## HTTP Transport

The `http` transport connects to MCP servers via HTTP protocol. This is designed for remote server deployments and supports stateless request/response patterns.

### Configuration Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `url` | `str` | ✅ | Full URL to the MCP server endpoint |
| `headers` | `dict[str, str]` | ❌ | HTTP headers sent with each request |
| `timeout` | `float` | ❌ | Request timeout in seconds (default: `60.0`) |

### Header Support

HTTP transport supports runtime headers, enabling dynamic authentication and authorization:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient

client = MultiServerMCPClient({
    "weather": {
        "url": "http://localhost:8000/mcp",
        "transport": "http",
        "headers": {
            "Authorization": "Bearer ${API_TOKEN}",
            "X-Custom-Header": "custom-value"
        }
    }
})
```

> Only `sse` and `http` transports support runtime headers.

资料来源：[README.md:110-130]()

### Example: HTTP Connection

```bash
# Start a streamable HTTP server
cd examples/servers/streamable-http-stateless/
uv run mcp-simple-streamablehttp-stateless --port 3000
```

```python
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client
from langchain_mcp_adapters.tools import load_mcp_tools

async with streamablehttp_client("http://localhost:3000/mcp") as (read, write, _):
    async with ClientSession(read, write) as session:
        await session.initialize()
        tools = await load_mcp_tools(session)
```

资料来源：[README.md:35-55]()

## SSE Transport

SSE (Server-Sent Events) transport combines HTTP requests with server-side event streaming. This is useful when the MCP server needs to push updates or progress notifications to the client.

### Configuration Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `url` | `str` | ✅ | Full URL to the MCP server SSE endpoint |
| `headers` | `dict[str, str]` | ❌ | HTTP headers sent with each request |
| `sse_read_timeout` | `float` | ❌ | SSE read timeout in seconds (default: `300.0`) |
| `timeout` | `float` | ❌ | HTTP request timeout (default: `60.0`) |

### Progress Callbacks

SSE transport enables progress callback functionality through the MCP client callbacks system:

```python
from langchain_mcp_adapters.callbacks import Callbacks, CallbackContext

class CustomCallbacks(Callbacks):
    async def progress_callback(
        self, progress: float, total: float | None, message: str | None
    ) -> None:
        print(f"Progress: {progress}/{total} - {message}")
```

资料来源：[langchain_mcp_adapters/tools.py:180-200]()

## WebSocket Transport

WebSocket transport provides bidirectional real-time communication between the client and MCP server. This is suitable for applications requiring low-latency, persistent connections.

### Configuration Parameters

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `url` | `str` | ✅ | WebSocket endpoint URL |
| `session_kwargs` | `dict` | ❌ | Additional `ClientSession` arguments |

> The `websocket` transport does not support runtime headers; use `http` or `sse` when per-request headers are needed.

## Connection Factory

The `Connection` abstract class defines the common interface for all transport implementations:

```mermaid
classDiagram
    class Connection {
        <<abstract>>
        +session_kwargs: dict
        +server_name: str
        +get_session() ClientSession
    }
    
    class StdioConnection {
        +command: str
        +args: list
        +env: dict
        +get_session() ClientSession
    }
    
    class StreamableHttpConnection {
        +url: str
        +headers: dict
        +timeout: float
        +get_session() ClientSession
    }
    
    class SSEConnection {
        +url: str
        +headers: dict
        +timeout: float
        +sse_read_timeout: float
        +get_session() ClientSession
    }
    
    class WebsocketConnection {
        +url: str
        +headers: dict
        +timeout: float
        +get_session() ClientSession
    }
    
    Connection <|-- StdioConnection
    Connection <|-- StreamableHttpConnection
    Connection <|-- SSEConnection
    Connection <|-- WebsocketConnection
```

资料来源：[langchain_mcp_adapters/sessions.py:1-50]()

## Session Creation

All transport types ultimately create an `mcp.ClientSession` for tool execution:

```python
from langchain_mcp_adapters.sessions import create_session

# Direct session creation
async with create_session(connection) as session:
    tools = await load_mcp_tools(session)
```

资料来源：[langchain_mcp_adapters/sessions.py:1-30]()

### MultiServerMCPClient Session Management

```python
# Explicitly starting a session
client = MultiServerMCPClient({
    "math": {
        "command": "python",
        "args": ["/path/to/math_server.py"],
        "transport": "stdio",
    }
})

async with client.session("math") as session:
    tools = await load_mcp_tools(session)
```

> MultiServerMCPClient cannot be used as a context manager directly. Use `client.session(server_name)` for explicit session control.

资料来源：[langchain_mcp_adapters/client.py:1-60]()

## Tool Name Prefixing

When using multiple servers with overlapping tool names, enable the `tool_name_prefix` option to avoid conflicts:

```python
client = MultiServerMCPClient(
    {
        "math": {"transport": "stdio", ...},
        "weather": {"transport": "http", "url": "http://localhost:8000/mcp"}
    },
    tool_name_prefix=True  # Enables prefixed tool names
)
tools = await client.get_tools()
# Tool names: "math_add", "weather_search" (prefixed with server name)
```

资料来源：[langchain_mcp_adapters/client.py:30-45]()

## Transport Selection Guide

```mermaid
graph TD
    A[Select Transport] --> B{Deployment Type}
    
    B --> C[Local/Subprocess]
    C --> D[Use stdio]
    
    B --> E[Remote Server]
    E --> F{Need Real-time Events?}
    
    F --> G[Yes]
    G --> H[Use websocket]
    
    F --> I[No]
    I --> J{HTTP/1.1 or Streaming?}
    
    J --> K[Streaming/SSE]
    K --> L[Use sse]
    
    J --> M[Request/Response]
    M --> N[Use http]
```

### Decision Matrix

| Scenario | Recommended Transport |
|----------|----------------------|
| Development, local testing | `stdio` |
| Production HTTP API | `http` |
| Server pushing events to client | `sse` |
| Bidirectional, low-latency needs | `websocket` |
| Fire-and-forget subprocess | `stdio` |

## Timeout Configuration

### Default Timeouts

| Transport | Parameter | Default Value |
|-----------|-----------|---------------|
| HTTP | `timeout` | `60.0` seconds |
| SSE | `timeout` | `60.0` seconds |
| SSE | `sse_read_timeout` | `300.0` seconds |

### Custom Timeout Example

```python
from langchain_mcp_adapters.sessions import StreamableHttpConnection

connection = StreamableHttpConnection(
    url="http://localhost:8000/mcp",
    timeout=120.0,  # 2 minute request timeout
)
```

## Error Handling

Transport-specific errors may occur during session creation or tool execution:

### Stdio Transport Errors

- **Process startup failure**: Check `command` path and permissions
- **Encoding errors**: Configure `encoding` and `encoding_error_handler`

### HTTP/SSE/WebSocket Transport Errors

- **Connection timeout**: Increase `timeout` parameter
- **SSE read timeout**: Increase `sse_read_timeout` for long-running operations
- **Header authentication failures**: Verify header format and token validity

## See Also

- [MultiServerMCPClient](langchain_mcp_adapters/client.py) - Multi-server connection management
- [load_mcp_tools](langchain_mcp_adapters/tools.py) - Tool loading with transport
- [Callbacks System](langchain_mcp_adapters/callbacks.py) - Progress and notification handling

---

<a id='page-callbacks'></a>

## Callbacks

### 相关页面

相关主题：[Tool Call Interceptors](#page-interceptors), [MultiServerMCPClient](#page-multiserver-client)

<details>
<summary>相关源码文件</summary>

以下源码文件用于生成本页说明：

- [langchain_mcp_adapters/callbacks.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/callbacks.py)
- [langchain_mcp_adapters/tools.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)
- [langchain_mcp_adapters/client.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/client.py)
- [langchain_mcp_adapters/interceptors.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/interceptors.py)
- [langchain_mcp_adapters/resources.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/resources.py)
</details>

# Callbacks

The Callbacks system in `langchain-mcp-adapters` provides a mechanism for handling notifications, events, and progress updates during MCP tool execution. It acts as a bridge between the LangChain callback format and the MCP (Model Context Protocol) callback format, enabling developers to intercept and respond to tool call lifecycle events.

## Overview

When working with MCP tools through `langchain-mcp-adapters`, callbacks serve several critical purposes:

- **Progress Notification**: Track long-running tool operations via progress callbacks
- **Event Handling**: Respond to notifications and events from the MCP server
- **Context Propagation**: Maintain context about which server and tool is being executed
- **Lifecycle Integration**: Integrate with LangChain's callback system for broader ecosystem compatibility

The callback system is primarily used in two contexts:
1. When loading MCP tools via `load_mcp_tools()` or `convert_mcp_tool_to_langchain_tool()`
2. When configuring the `MultiServerMCPClient` for multi-server tool aggregation

资料来源：[langchain_mcp_adapters/tools.py:1-30]()

## Core Components

### CallbackContext

The `CallbackContext` class provides context information about an ongoing tool call operation.

| Property | Type | Description |
|----------|------|-------------|
| `server_name` | `str \| None` | Name of the MCP server handling the tool call |
| `tool_name` | `str \| None` | Name of the tool being executed |

资料来源：[langchain_mcp_adapters/callbacks.py]()
资料来源：[langchain_mcp_adapters/tools.py:55-62]()

### Callbacks Class

The `Callbacks` class is the main abstraction for handling MCP events. It provides the interface that developers implement to receive notifications.

```python
class Callbacks:
    """Handler for MCP notifications and events."""
    
    def to_mcp_format(self, context: CallbackContext) -> _MCPCallbacks:
        """Convert to MCP-compatible callback format."""
        ...
```

资料来源：[langchain_mcp_adapters/callbacks.py]()
资料来源：[langchain_mcp_adapters/tools.py:63-68]()

### _MCPCallbacks Class

The internal `_MCPCallbacks` class wraps callbacks in the format expected by the MCP SDK.

| Property | Type | Description |
|----------|------|-------------|
| `progress_callback` | `Callable \| None` | Callback for progress updates during tool execution |

Sources: [langchain_mcp_adapters/callbacks.py]()

## Architecture

```mermaid
graph TD
    A[MultiServerMCPClient] --> B[Callbacks Instance]
    A --> C[load_mcp_tools]
    B --> D[to_mcp_format]
    D --> E[_MCPCallbacks]
    E --> F[session.call_tool]
    C --> G[CallbackContext]
    G --> D
    
    H[MCP Server] --> I[Progress Updates]
    I --> F
```

## Usage Patterns

### Basic Usage with MultiServerMCPClient

The most common pattern is to pass a `Callbacks` instance to the `MultiServerMCPClient`:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
# _MCPCallbacks is internal, but the return type of to_mcp_format() requires it
from langchain_mcp_adapters.callbacks import Callbacks, CallbackContext, _MCPCallbacks

class MyCallbacks(Callbacks):
    def to_mcp_format(self, context: CallbackContext) -> _MCPCallbacks:
        # Custom callback handling
        return _MCPCallbacks(progress_callback=self.on_progress)

    async def on_progress(self, progress: float, total: float | None, message: str | None):
        print(f"Progress: {progress}/{total} - {message}")

client = MultiServerMCPClient(
    {
        "math": {
            "command": "python",
            "args": ["/path/to/math_server.py"],
            "transport": "stdio",
        },
    },
    callbacks=MyCallbacks()
)
```

Sources: [langchain_mcp_adapters/client.py:40-60]()

### Usage with load_mcp_tools

Callbacks can also be passed directly when loading tools from a session:

```python
from mcp import ClientSession
from mcp.client.streamable_http import streamablehttp_client

from langchain_mcp_adapters.tools import load_mcp_tools

async with streamablehttp_client("http://localhost:3000/mcp") as (read, write, _):
    async with ClientSession(read, write) as session:
        await session.initialize()
        
        tools = await load_mcp_tools(
            session,
            callbacks=MyCallbacks(),
            server_name="math_server"
        )
```

Sources: [langchain_mcp_adapters/tools.py:100-135]()

### Usage with Tool Interceptors

Callbacks work alongside tool interceptors for advanced control over tool execution:

```python
from collections.abc import Awaitable, Callable

from langchain_mcp_adapters.interceptors import MCPToolCallRequest, MCPToolCallResult

class LoggingInterceptor:
    async def __call__(
        self,
        request: MCPToolCallRequest,
        handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
    ) -> MCPToolCallResult:
        print(f"Calling tool: {request.name}")
        result = await handler(request)
        print(f"Tool result: {result}")
        return result

client = MultiServerMCPClient(
    {...},
    callbacks=MyCallbacks(),
    tool_interceptors=[LoggingInterceptor()]
)
```

Sources: [langchain_mcp_adapters/interceptors.py:1-50]()

## Callback Flow in Tool Execution

```mermaid
sequenceDiagram
    participant Client as MCP Client
    participant Callbacks as Callbacks Handler
    participant Session as ClientSession
    participant Server as MCP Server
    
    Client->>Callbacks: to_mcp_format(context)
    Callbacks-->>Client: _MCPCallbacks
    Client->>Session: call_tool(name, args, progress_callback)
    Session->>Server: Execute Tool
    Server-->>Session: Progress Update
    Session->>Callbacks: progress_callback
    Server-->>Session: Tool Result
    Session-->>Client: CallToolResult
```

## CallbackContext Construction

The `CallbackContext` is constructed with server and tool information at different points in the execution flow:

| Function | Context Construction |
|----------|---------------------|
| `load_mcp_tools()` | Uses `server_name` from parameters |
| `convert_mcp_tool_to_langchain_tool()` | Uses both `server_name` and `tool.name` |
| `MultiServerMCPClient` | Passed through to all tool loading operations |

Sources: [langchain_mcp_adapters/tools.py:70-80]()

## Error Handling

When callbacks are not provided, the system uses a default `_MCPCallbacks()` instance:

```python
mcp_callbacks = (
    callbacks.to_mcp_format(context=CallbackContext(server_name=server_name, tool_name=tool.name))
    if callbacks is not None
    else _MCPCallbacks()
)
```

This ensures that tool execution continues normally even without custom callback handling.

Sources: [langchain_mcp_adapters/tools.py:70-75]()

## Integration with Tool Result Conversion

Callbacks are passed through the entire tool execution chain and are used when converting tool results back to LangChain format:

```python
async def call_tool(...) -> tuple[ConvertedToolResult, MCPToolArtifact | None]:
    mcp_callbacks = (
        callbacks.to_mcp_format(
            context=CallbackContext(server_name=server_name, tool_name=tool.name)
        )
        if callbacks is not None
        else _MCPCallbacks()
    )
    
    # Execute with progress callback
    call_tool_result = await session.call_tool(
        tool_name,
        tool_args,
        progress_callback=mcp_callbacks.progress_callback,
    )
```

Sources: [langchain_mcp_adapters/tools.py:55-70]()

## API Reference

### Callbacks Class

```python
class Callbacks:
    """Base class for handling MCP notifications and events."""
    
    def to_mcp_format(self, context: CallbackContext) -> _MCPCallbacks:
        """Convert the callbacks to MCP-compatible format.
        
        Args:
            context: The callback context containing server and tool info.
            
        Returns:
            An _MCPCallbacks instance configured with appropriate handlers.
        """
        ...
```

### _MCPCallbacks Class

```python
@dataclass
class _MCPCallbacks:
    """Internal MCP-compatible callbacks wrapper."""
    
    progress_callback: Callable | None = None
```

### CallbackContext Class

```python
@dataclass
class CallbackContext:
    """Context information for callback handlers."""
    
    server_name: str | None = None
    tool_name: str | None = None
```

## Best Practices

1. **Always provide context**: When constructing `CallbackContext`, include both `server_name` and `tool_name` for maximum observability.

2. **Handle None gracefully**: The callback system is designed to work without callbacks, so ensure your code handles the default case.

3. **Combine with interceptors**: For comprehensive tool call control, combine callbacks with tool interceptors.

4. **Thread-safe progress updates**: Progress callbacks may be called from different tasks; ensure your handler is thread-safe or async-safe.

5. **Resource cleanup**: When using callbacks that allocate resources, ensure proper cleanup in the client lifecycle.

## Summary

The Callbacks system in `langchain-mcp-adapters` provides a clean abstraction for handling MCP tool lifecycle events. By implementing the `Callbacks` class and its `to_mcp_format()` method, developers can:

- Monitor tool execution progress
- Handle notifications from MCP servers
- Integrate with LangChain's callback ecosystem
- Build custom logging, monitoring, and error handling for MCP tool calls

The system is designed to be optional—tools work with default callbacks when none are provided—while providing rich customization when needed.

---

<a id='page-interceptors'></a>

## Tool Call Interceptors

### Related Pages

Related topics: [Callbacks](#page-callbacks), [Tool Conversion](#page-tool-conversion)

<details>
<summary>Related source files</summary>

The following source files were used to generate this page:

- [langchain_mcp_adapters/interceptors.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/interceptors.py)
- [langchain_mcp_adapters/tools.py](https://github.com/langchain-ai/langchain-mcp-adapters/blob/main/langchain_mcp_adapters/tools.py)
</details>

# Tool Call Interceptors

## Overview

Tool Call Interceptors provide a mechanism to wrap and control MCP tool call execution in the langchain-mcp-adapters library. They enable developers to inject custom logic before and after tool calls, modify request parameters, handle responses, and implement cross-cutting concerns like logging, authentication, and caching.

The interceptor system follows the **onion pattern** (also known as decorator pattern or chain of responsibility), where each interceptor wraps the next one, allowing pre-processing and post-processing of tool calls in a composable way.

## Architecture

### High-Level Flow

```mermaid
graph TD
    A[External Code] --> B[Interceptor Chain]
    B --> C[Interceptor 1]
    C --> D[Interceptor 2]
    D --> E[...]
    E --> F[execute_tool]
    F --> G[MCP ClientSession.call_tool]
    
    subgraph "Onion Layers (wrapping inward)"
        B
        C
        D
        E
    end
```

### Component Diagram

```mermaid
classDiagram
    class MCPToolCallRequest {
        +str name
        +dict args
        +str server_name
        +dict headers
        +object runtime
        +override() MCPToolCallRequest
    }
    
    class MCPToolCallResult {
        <<Type Alias>>
        CallToolResult | ToolMessage | Command
    }
    
    class ToolCallInterceptor {
        <<Protocol>>
        +async __call__(request, handler) MCPToolCallResult
    }
    
    class _build_interceptor_chain {
        +build_composed_handler()
    }
    
    MCPToolCallRequest --> ToolCallInterceptor : passed to
    _build_interceptor_chain --> ToolCallInterceptor : composes
```

## Core Data Models

### MCPToolCallRequest

Represents a tool execution request passed to MCP tool call interceptors. Follows a flat namespace pattern rather than separating call data and context into nested objects.

| Field | Type | Modifiable | Description |
|-------|------|-------------|-------------|
| `name` | `str` | Yes | Tool name to invoke |
| `args` | `dict[str, Any]` | Yes | Tool arguments as key-value pairs |
| `server_name` | `str` | No | Name of the MCP server handling the tool |
| `headers` | `dict[str, Any] \| None` | Yes | HTTP headers for applicable transports (SSE, HTTP) |
| `runtime` | `object \| None` | No | LangGraph runtime context (if any) |

Sources: [interceptors.py:58-74]()

#### The `override()` Method

The `MCPToolCallRequest` class provides an immutable `override()` method that returns a new instance with specified attributes replaced:

```python
def override(
    self, **overrides: Unpack[_MCPToolCallRequestOverrides]
) -> MCPToolCallRequest:
```

This follows an immutable pattern, leaving the original request unchanged.

| Parameter | Type | Description |
|-----------|------|-------------|
| `name` | `str` | Tool name (optional) |
| `args` | `dict[str, Any]` | Tool arguments (optional) |
| `headers` | `dict[str, Any] \| None` | HTTP headers (optional) |

### MCPToolCallResult

A type alias representing the possible return types from an interceptor:

| Type | Description |
|------|-------------|
| `CallToolResult` | MCP protocol result (standard MCP format) |
| `ToolMessage` | LangChain format message |
| `Command` | LangGraph Command (when langgraph is installed) |

```python
if LANGGRAPH_PRESENT:
    MCPToolCallResult = CallToolResult | ToolMessage | Command
else:
    MCPToolCallResult = CallToolResult | ToolMessage
```

Sources: [interceptors.py:29-36]()

## ToolCallInterceptor Protocol

The `ToolCallInterceptor` is a runtime-checkable protocol that defines the interface for interceptor implementations:

```python
@runtime_checkable
class ToolCallInterceptor(Protocol):
    async def __call__(
        self,
        request: MCPToolCallRequest,
        handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
    ) -> MCPToolCallResult:
        ...
```

| Parameter | Type | Description |
|-----------|------|-------------|
| `request` | `MCPToolCallRequest` | The tool call request to process |
| `handler` | `Callable` | The next handler in the chain (call to continue execution) |
| **Returns** | `MCPToolCallResult` | The result of processing |

Sources: [interceptors.py:42-49]()

### Interceptor Pattern

Interceptors work by:

1. **Receiving** the `request` and the `handler` callable
2. **Optionally** modifying the request before passing it on
3. **Calling** the `handler` to continue the chain
4. **Optionally** modifying the result before returning

```python
async def my_interceptor(
    request: MCPToolCallRequest,
    handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
) -> MCPToolCallResult:
    # Pre-processing: modify request
    modified_request = request.override(args={**request.args, "injected": True})
    
    # Continue to next handler
    result = await handler(modified_request)
    
    # Post-processing: modify result
    # ... do something with result ...
    
    return result
```

## Building the Interceptor Chain

The `_build_interceptor_chain()` function composes multiple interceptors into a single handler using the onion pattern:

```python
def _build_interceptor_chain(
    base_handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
    tool_interceptors: list[ToolCallInterceptor] | None,
) -> Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]]:
```

| Parameter | Type | Description |
|-----------|------|-------------|
| `base_handler` | `Callable` | Innermost handler that executes the actual tool call |
| `tool_interceptors` | `list[ToolCallInterceptor] \| None` | List of interceptors to wrap around the handler |

Sources: [tools.py:145-147]()

### Execution Order

The first interceptor in the list becomes the **outermost layer**, with subsequent interceptors wrapping inward. This means:

1. Interceptor at index 0 executes **first** (outermost)
2. Interceptor at index 1 executes **second**
3. And so on...
4. The `base_handler` (actual tool execution) executes **last** (innermost)

```mermaid
graph LR
    A[External Call] --> B["Interceptor[0]<br/>outermost"]
    B --> C["Interceptor[1]"]
    C --> D["Interceptor[2]"]
    D --> E["..."]
    E --> F["base_handler<br/>innermost"]
    F --> G[MCP call_tool]
```

## Usage

### Loading Tools with Interceptors

When loading MCP tools, you can provide a list of interceptors:

```python
from langchain_mcp_adapters.client import MultiServerMCPClient
from langchain_mcp_adapters.tools import load_mcp_tools

# Define your interceptor
class LoggingInterceptor:
    async def __call__(self, request, handler):
        print(f"Calling tool: {request.name}")
        result = await handler(request)
        print(f"Tool {request.name} completed")
        return result

client = MultiServerMCPClient({
    "math": {
        "command": "python",
        "args": ["./math_server.py"],
        "transport": "stdio",
    }
})

tools = await client.get_tools(
    tool_interceptors=[LoggingInterceptor()]
)
```

Sources: [tools.py:163-179]()

### Individual Tool Conversion

You can also apply interceptors when converting individual tools:

```python
from langchain_mcp_adapters.tools import convert_mcp_tool_to_langchain_tool

tool = convert_mcp_tool_to_langchain_tool(
    session=session,
    tool=mcp_tool,
    tool_interceptors=[CustomInterceptor()],
    server_name="my_server",
    tool_name_prefix=True
)
```

### Using Runtime Context

Interceptors have access to the `runtime` field, which contains LangGraph runtime context when used within a LangGraph graph:

```python
class RuntimeAwareInterceptor:
    async def __call__(self, request, handler):
        if request.runtime:
            # Access LangGraph runtime
            pass
        return await handler(request)
```

## Example Interceptors

### Authentication Interceptor

```python
class AuthInterceptor:
    def __init__(self, api_key: str):
        self.api_key = api_key
    
    async def __call__(self, request, handler):
        # Merge the auth header with any existing headers instead of replacing them
        request = request.override(
            headers={**(request.headers or {}), "Authorization": f"Bearer {self.api_key}"}
        )
        return await handler(request)
```

### Caching Interceptor

```python
import json

class CacheInterceptor:
    def __init__(self):
        self.cache = {}

    async def __call__(self, request, handler):
        # A JSON key avoids hashing unhashable argument values (lists, dicts)
        cache_key = f"{request.name}:{json.dumps(request.args, sort_keys=True)}"
        
        if cache_key in self.cache:
            return self.cache[cache_key]
        
        result = await handler(request)
        self.cache[cache_key] = result
        return result
```

### Request Modification Interceptor

```python
from typing import Any

class DefaultArgsInterceptor:
    def __init__(self, defaults: dict[str, Any]):
        self.defaults = defaults
    
    async def __call__(self, request, handler):
        # Merge defaults with provided args
        merged_args = {**self.defaults, **request.args}
        request = request.override(args=merged_args)
        return await handler(request)
```
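
A quick check of the merge semantics (using plain dicts in place of a real request): defaults come first, so caller-provided arguments win on conflicts.

```python
# Same merge as DefaultArgsInterceptor: defaults first, provided args override.
defaults = {"units": "metric", "precision": 2}
provided = {"precision": 4}

merged = {**defaults, **provided}
print(merged)
```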

## API Reference

### Functions

#### `_build_interceptor_chain()`

```python
def _build_interceptor_chain(
    base_handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
    tool_interceptors: list[ToolCallInterceptor] | None,
) -> Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]]:
```

Builds a composed handler chain with interceptors in onion pattern.

**Parameters:**

| Name | Type | Description |
|------|------|-------------|
| `base_handler` | `Callable` | Innermost handler executing the actual tool call |
| `tool_interceptors` | `list[ToolCallInterceptor] \| None` | Optional list of interceptors to wrap |

**Returns:** Composed handler with all interceptors applied

Sources: [tools.py:145-175]()

### Classes

#### `MCPToolCallRequest`

```python
@dataclass
class MCPToolCallRequest:
    name: str
    args: dict[str, Any]
    server_name: str
    headers: dict[str, Any] | None = None
    runtime: object | None = None
```

Sources: [interceptors.py:58-74]()

#### `ToolCallInterceptor`

```python
@runtime_checkable
class ToolCallInterceptor(Protocol):
    async def __call__(
        self,
        request: MCPToolCallRequest,
        handler: Callable[[MCPToolCallRequest], Awaitable[MCPToolCallResult]],
    ) -> MCPToolCallResult:
        ...
```

Sources: [interceptors.py:42-49]()

### Type Aliases

#### `MCPToolCallResult`

```python
if LANGGRAPH_PRESENT:
    MCPToolCallResult = CallToolResult | ToolMessage | Command
else:
    MCPToolCallResult = CallToolResult | ToolMessage
```

Sources: [interceptors.py:29-36]()

## Best Practices

1. **Always call the handler**: Interceptors should typically call `handler(request)` unless intentionally short-circuiting
2. **Immutability**: Use `request.override()` to create modified requests instead of mutating the original
3. **Error handling**: Wrap handler calls in try/except for proper error handling and logging
4. **Order matters**: Place interceptors in the correct order as the first in the list is the outermost
5. **Type hints**: Use type hints for better IDE support and type checking

## Limitations

- Interceptors cannot currently modify the `server_name` or `runtime` fields of `MCPToolCallRequest` as they are context fields
- The interceptor system is designed for tool call interception; other MCP lifecycle events (like resource access) are not currently interceptable
- Runtime headers are only supported for `sse` and `http` transports

---

## Doramagic Pitfall Log

Project: langchain-ai/langchain-mcp-adapters

Summary: 17 potential pitfalls found, 3 of them high/blocking; top priority: installation pitfall - source evidence: Prompts and Resources auto-discovery.

## 1. Installation pitfall · Source evidence: Prompts and Resources auto-discovery

- Severity: high
- Evidence strength: source_linked
- Finding: GitHub community evidence indicates an unverified installation-related issue in this project: Prompts and Resources auto-discovery
- User impact: May increase the cost of first-time trials and production adoption for new users.
- Suggested check: The source issue is still open; the Pack Agent needs to re-verify whether it still affects the current version.
- Guard action: Do not amplify into a definitive conclusion detached from the source link; mark the applicable version and review status.
- Evidence: community_evidence:github | cevd_bf1812b74caa4e989767a9307a8ffc16 | https://github.com/langchain-ai/langchain-mcp-adapters/issues/62 | The source discussion mentions Python-related conditions; re-verify before installing/trying.

## 2. Installation pitfall · Source evidence: `MultiServerMCPClient.get_tools()` silently returns no tools when any single server fails to connect

- Severity: high
- Evidence strength: source_linked
- Finding: GitHub community evidence indicates an unverified installation-related issue in this project: `MultiServerMCPClient.get_tools()` silently returns no tools when any single server fails to connect
- User impact: May increase the cost of first-time trials and production adoption for new users.
- Suggested check: The source issue is still open; the Pack Agent needs to re-verify whether it still affects the current version.
- Guard action: Do not amplify into a definitive conclusion detached from the source link; mark the applicable version and review status.
- Evidence: community_evidence:github | cevd_a5093182914b4df0b7ad2cd560bacdf2 | https://github.com/langchain-ai/langchain-mcp-adapters/issues/492 | The source discussion mentions Python-related conditions; re-verify before installing/trying.

## 3. Runtime pitfall · Source evidence: Fix TypeError in resources.py and make __aexit__ an async coroutine in client.py

- Severity: high
- Evidence strength: source_linked
- Finding: GitHub community evidence indicates an unverified runtime-related issue in this project: Fix TypeError in resources.py and make __aexit__ an async coroutine in client.py
- User impact: May increase the cost of first-time trials and production adoption for new users.
- Suggested check: The source issue is still open; the Pack Agent needs to re-verify whether it still affects the current version.
- Guard action: Do not amplify into a definitive conclusion detached from the source link; mark the applicable version and review status.
- Evidence: community_evidence:github | cevd_ac102050dd4841d6954559a3413e0b92 | https://github.com/langchain-ai/langchain-mcp-adapters/issues/496 | Unverified usage conditions surfaced by source type github_issue.

## 4. Installation pitfall · Source evidence: langchain-mcp-adapters==0.2.2

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence indicates an unverified installation-related issue in this project: langchain-mcp-adapters==0.2.2
- User impact: May increase the cost of first-time trials and production adoption for new users.
- Suggested check: The source suggests a fix, workaround, or version change may already exist; the manual must mark the applicable version.
- Guard action: Do not amplify into a definitive conclusion detached from the source link; mark the applicable version and review status.
- Evidence: community_evidence:github | cevd_0c6ca0722ab046379d28ecf30f8d2bcf | https://github.com/langchain-ai/langchain-mcp-adapters/releases/tag/langchain-mcp-adapters%3D%3D0.2.2 | Unverified usage conditions surfaced by source type github_release.

## 5. Configuration pitfall · Source evidence: langchain-mcp-adapters==0.1.10

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence indicates an unverified configuration-related issue in this project: langchain-mcp-adapters==0.1.10
- User impact: May increase the cost of first-time trials and production adoption for new users.
- Suggested check: The source suggests a fix, workaround, or version change may already exist; the manual must mark the applicable version.
- Guard action: Do not amplify into a definitive conclusion detached from the source link; mark the applicable version and review status.
- Evidence: community_evidence:github | cevd_8b18dbf32ccd41e38b272a458f4040f5 | https://github.com/langchain-ai/langchain-mcp-adapters/releases/tag/langchain-mcp-adapters%3D%3D0.1.10 | The source discussion mentions Python-related conditions; re-verify before installing/trying.

## 6. Capability pitfall · Source evidence: langchain-mcp-adapters==0.1.14

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence indicates an unverified capability-related issue in this project: langchain-mcp-adapters==0.1.14
- User impact: May increase the cost of first-time trials and production adoption for new users.
- Suggested check: The source suggests a fix, workaround, or version change may already exist; the manual must mark the applicable version.
- Guard action: Do not amplify into a definitive conclusion detached from the source link; mark the applicable version and review status.
- Evidence: community_evidence:github | cevd_6727e0d698e54fc38d7c60e262978ac2 | https://github.com/langchain-ai/langchain-mcp-adapters/releases/tag/langchain-mcp-adapters%3D%3D0.1.14 | Unverified usage conditions surfaced by source type github_release.

## 7. Capability pitfall · Capability assessment relies on assumptions

- Severity: medium
- Evidence strength: source_linked
- Finding: README/documentation is current enough for a first validation pass.
- User impact: If the assumption fails, users do not get the promised capabilities.
- Suggested check: Convert the assumption into a downstream verification checklist.
- Guard action: Assumptions must become verification items; do not state them as facts before verification results exist.
- Evidence: capability.assumptions | github_repo:929158279 | https://github.com/langchain-ai/langchain-mcp-adapters | README/documentation is current enough for a first validation pass.

## 8. Runtime pitfall · Source evidence: langchain-mcp-adapters==0.1.12

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence indicates an unverified runtime-related issue in this project: langchain-mcp-adapters==0.1.12
- User impact: May increase the cost of first-time trials and production adoption for new users.
- Suggested check: The source suggests a fix, workaround, or version change may already exist; the manual must mark the applicable version.
- Guard action: Do not amplify into a definitive conclusion detached from the source link; mark the applicable version and review status.
- Evidence: community_evidence:github | cevd_e71a46a9e0374d139555a78f229b0469 | https://github.com/langchain-ai/langchain-mcp-adapters/releases/tag/langchain-mcp-adapters%3D%3D0.1.12 | Unverified usage conditions surfaced by source type github_release.

## 9. Maintenance pitfall · Source evidence: langchain-mcp-adapters==0.2.0

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence indicates an unverified maintenance/version-related issue in this project: langchain-mcp-adapters==0.2.0
- User impact: May affect upgrades, migration, or version selection.
- Suggested check: The source suggests a fix, workaround, or version change may already exist; the manual must mark the applicable version.
- Guard action: Do not amplify into a definitive conclusion detached from the source link; mark the applicable version and review status.
- Evidence: community_evidence:github | cevd_59483f9a6a16414c9087b1751fba8efc | https://github.com/langchain-ai/langchain-mcp-adapters/releases/tag/langchain-mcp-adapters%3D%3D0.2.0 | Unverified usage conditions surfaced by source type github_release.

## 10. Maintenance pitfall · Source evidence: langchain-mcp-adapters==0.2.0a1

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence indicates an unverified maintenance/version-related issue in this project: langchain-mcp-adapters==0.2.0a1
- User impact: May affect upgrades, migration, or version selection.
- Suggested check: The source suggests a fix, workaround, or version change may already exist; the manual must mark the applicable version.
- Guard action: Do not amplify into a definitive conclusion detached from the source link; mark the applicable version and review status.
- Evidence: community_evidence:github | cevd_4e7fcda1716948898295279af95f8f96 | https://github.com/langchain-ai/langchain-mcp-adapters/releases/tag/langchain-mcp-adapters%3D%3D0.2.0a1 | Unverified usage conditions surfaced by source type github_release.

## 11. Maintenance pitfall · Maintenance activity unknown

- Severity: medium
- Evidence strength: source_linked
- Finding: last_activity_observed was not recorded.
- User impact: New, abandoned, and active projects get mixed together, lowering trust in recommendations.
- Suggested check: Supplement with recent GitHub commit, release, and issue/PR response signals.
- Guard action: When maintenance activity is unknown, recommendation strength must not be marked high-trust.
- Evidence: evidence.maintainer_signals | github_repo:929158279 | https://github.com/langchain-ai/langchain-mcp-adapters | last_activity_observed missing

## 12. Security/permissions pitfall · Downstream validation found risk items

- Severity: medium
- Evidence strength: source_linked
- Finding: no_demo
- User impact: Downstream has already requested review; do not downplay this on the page.
- Suggested check: Enter the security/permissions governance review queue.
- Guard action: While downstream risks exist, the review/recommendation downgrade must be kept in place.
- Evidence: downstream_validation.risk_items | github_repo:929158279 | https://github.com/langchain-ai/langchain-mcp-adapters | no_demo; severity=medium

## 13. Security/permissions pitfall · Security notes present

- Severity: medium
- Evidence strength: source_linked
- Finding: No sandbox install has been executed yet; downstream must verify before user use.
- User impact: Users need to know the permission boundaries and sensitive operations before installing.
- Suggested check: Convert into an explicit permission checklist and security review prompts.
- Guard action: Security notes must be shown to users up front.
- Evidence: risks.safety_notes | github_repo:929158279 | https://github.com/langchain-ai/langchain-mcp-adapters | No sandbox install has been executed yet; downstream must verify before user use.

## 14. Security/permissions pitfall · Scoring risks present

- Severity: medium
- Evidence strength: source_linked
- Finding: no_demo
- User impact: The risk affects whether the project is suitable for ordinary users to install.
- Suggested check: Write the risk into the boundary card and confirm whether manual review is needed.
- Guard action: Scoring risks must go into the boundary card, not remain an internal score only.
- Evidence: risks.scoring_risks | github_repo:929158279 | https://github.com/langchain-ai/langchain-mcp-adapters | no_demo; severity=medium

## 15. Security/permissions pitfall · Source evidence: Feature Request: Support passing server-defined params extensions (e.g. LangGraph `context`) through tools/call

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence indicates an unverified security/permissions-related issue in this project: Feature Request: Support passing server-defined params extensions (e.g. LangGraph `context`) through tools/call
- User impact: May affect authorization, key configuration, or security boundaries.
- Suggested check: The source issue is still open; the Pack Agent needs to re-verify whether it still affects the current version.
- Guard action: Do not amplify into a definitive conclusion detached from the source link; mark the applicable version and review status.
- Evidence: community_evidence:github | cevd_8c46dab4b6dd4a6e92c96af49ea47647 | https://github.com/langchain-ai/langchain-mcp-adapters/issues/502 | The source discussion mentions Python-related conditions; re-verify before installing/trying.

## 16. Maintenance pitfall · Issue/PR response quality unknown

- Severity: low
- Evidence strength: source_linked
- Finding: issue_or_pr_quality=unknown.
- User impact: Users cannot tell whether anyone will maintain the project when they run into problems.
- Suggested check: Sample recent issues/PRs to judge whether they go unhandled long-term.
- Guard action: When issue/PR responsiveness is unknown, the maintenance risk must be flagged.
- Evidence: evidence.maintainer_signals | github_repo:929158279 | https://github.com/langchain-ai/langchain-mcp-adapters | issue_or_pr_quality=unknown

## 17. Maintenance pitfall · Release cadence unclear

- Severity: low
- Evidence strength: source_linked
- Finding: release_recency=unknown.
- User impact: Install commands and docs may lag behind the code, raising the chance users hit pitfalls.
- Suggested check: Confirm that the latest release/tag matches the README install commands.
- Guard action: When release cadence is unknown or stale, install instructions must note possible drift.
- Evidence: evidence.maintainer_signals | github_repo:929158279 | https://github.com/langchain-ai/langchain-mcp-adapters | release_recency=unknown

<!-- canonical_name: langchain-ai/langchain-mcp-adapters; human_manual_source: deepwiki_human_wiki -->
