# https://github.com/openai/openai-agents-python Project Guide

Generated: 2026-05-16 04:52:19 UTC

## Table of Contents

- [OpenAI Agents SDK Overview](#overview)
- [Installation and Setup](#installation)
- [Examples Index](#examples-index)
- [Agents](#agents)
- [Tools](#tools)
- [Guardrails](#guardrails)
- [Handoffs](#handoffs)
- [Agents as Tools](#agents-as-tools)
- [Run Loop and Execution](#run-loop)
- [Sessions and Memory](#sessions)

<a id='overview'></a>

## OpenAI Agents SDK Overview

### Related Pages

Related topics: [Installation and Setup](#installation), [Agents](#agents), [Tools](#tools)

<details>
<summary>Relevant source files</summary>

The following source files were used to generate this page:

- [README.md](https://github.com/openai/openai-agents-python/blob/main/README.md)
- [src/agents/__init__.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/__init__.py)
- [src/agents/run.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/run.py)
- [src/agents/run_internal/turn_resolution.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/run_internal/turn_resolution.py)
- [src/agents/handoffs/__init__.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/handoffs/__init__.py)
- [src/agents/items.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/items.py)
- [src/agents/extensions/visualization.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/extensions/visualization.py)
- [src/agents/mcp/server.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/mcp/server.py)
</details>

# OpenAI Agents SDK Overview

## Introduction

The OpenAI Agents SDK is a Python framework designed to build multi-agent systems that can interact with users, execute tools, and delegate tasks to specialized sub-agents. The SDK provides a structured approach to orchestrating agent conversations, managing tool execution, handling handoffs between agents, and maintaining conversation state throughout the execution lifecycle.

The SDK's core responsibility is to manage the runtime execution of agents: the turn-based conversation flow, tool invocations, guardrail evaluations, and multi-agent handoffs, all within a single unified execution model. Source: [src/agents/__init__.py]()

## Architecture Overview

The SDK follows a layered architecture that separates concerns between agent definition, runtime execution, and tool/MCP integration.

```mermaid
graph TD
    A[User Input] --> B[Runner]
    B --> C[Agent]
    C --> D[Handoffs]
    C --> E[Tools]
    C --> F[Guardrails]
    D --> C
    D --> G[Sub-Agent]
    E --> H[MCP Servers]
    F --> I[Input/Output Guards]
    G --> C
    B --> J[Session Persistence]
    B --> K[Tracing]
```

### Core Components

| Component | Purpose | Location |
|-----------|---------|----------|
| `Agent` | Defines agent behavior, tools, handoffs, and instructions | `src/agents/__init__.py` |
| `Runner` | Executes agents and manages conversation flow | `src/agents/run.py` |
| `Handoff` | Enables transfer of control between agents | `src/agents/handoffs/__init__.py` |
| `MCPServer` | Provides Model Context Protocol server abstraction | `src/agents/mcp/server.py` |
| `ItemHelpers` | Utility for extracting content from conversation items | `src/agents/items.py` |

## Agent System

### Agent Definition

Agents are the fundamental unit of computation in the SDK. An agent encapsulates:

- **Instructions**: The system prompt that defines the agent's role and behavior
- **Tools**: A list of callable tools the agent can invoke
- **Handoffs**: Definitions for transferring control to other agents
- **Input Guardrails**: Pre-processing validation before agent execution
- **Output Guardrails**: Post-processing validation of agent responses

```mermaid
graph LR
    A[Agent] --> B[Instructions]
    A --> C[Tools]
    A --> D[Handoffs]
    A --> E[Guardrails]
```
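As a mental model, an agent is just these pieces bundled together. The sketch below mirrors that shape with a plain dataclass; `AgentSketch` and its exact field types are illustrative stand-ins, not the SDK's real `Agent` class.

```python
from dataclasses import dataclass, field
from typing import Any, Callable

@dataclass
class AgentSketch:
    """Illustrative stand-in for the SDK's Agent class (not the real API)."""
    name: str
    instructions: str                                   # system prompt
    tools: list[Callable[..., Any]] = field(default_factory=list)
    handoffs: list["AgentSketch"] = field(default_factory=list)
    input_guardrails: list[Callable[[str], bool]] = field(default_factory=list)
    output_guardrails: list[Callable[[str], bool]] = field(default_factory=list)

# A triage agent that can hand the conversation off to a billing specialist.
billing = AgentSketch(name="billing", instructions="Answer billing questions.")
triage = AgentSketch(name="triage", instructions="Route requests to a specialist.")
triage.handoffs.append(billing)
```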

### Agent Execution Flow

The execution follows a turn-based model where each turn processes user input, generates model responses, executes tools, and evaluates handoffs until a final response is produced.

```mermaid
sequenceDiagram
    participant User
    participant Runner
    participant Agent
    participant Tools
    participant Handoffs

    User->>Runner: User Input
    Runner->>Agent: Process Turn
    Agent->>Agent: Generate Response
    alt Tool Call
        Agent->>Tools: Execute Tool
        Tools-->>Agent: Tool Result
    end
    alt Handoff
        Agent->>Handoffs: Request Handoff
        Handoffs->>Agent: Switch Agent
    end
    Agent-->>Runner: Final Output
    Runner-->>User: Response
```

## Handoffs System

The handoff system enables agents to delegate conversations to other specialized agents while preserving conversation context. Each handoff defines:

| Property | Type | Description |
|----------|------|-------------|
| `name` | `str` | Unique identifier for the handoff tool |
| `tool_name` | `str` | Tool name exposed to the model for invoking the handoff |
| `tool_description` | `str` | Description shown to the model |
| `input_json_schema` | `dict` | JSON schema for handoff arguments |
| `on_invoke_handoff` | `Callable` | Function that returns the target agent |
| `input_filter` | `HandoffInputFilter` | Optional filter for conversation context |

Source: [src/agents/handoffs/__init__.py]()

### Handoff Input Filtering

By default, the new agent receives the entire conversation history. The `input_filter` function allows customization of what context is passed to the target agent:

```python
input_filter: HandoffInputFilter | None = None
"""A function that filters the inputs that are passed to the next agent."""
```
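As a sketch of what such a filter might do (the real `HandoffInputFilter` operates on the SDK's handoff input type; here a plain list of message dicts stands in), a filter that forwards only the most recent items could look like:

```python
def keep_last_n(history: list[dict], n: int = 4) -> list[dict]:
    """Illustrative input filter: pass only the most recent n items
    to the target agent instead of the full conversation."""
    return history[-n:]

history = [{"role": "user", "content": f"msg {i}"} for i in range(10)]
trimmed = keep_last_n(history)   # the target agent sees msg 6..9 only
```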

## Turn Resolution

The turn resolution system handles the complexity of multi-step agent interactions within a single turn. This includes managing pre-step items, new step items, tool results, guardrail evaluations, and handoff transitions.

### Turn Resolution States

```mermaid
stateDiagram-v2
    [*] --> InputGuardrails: Input Received
    InputGuardrails --> ModelResponse: Passed
    ModelResponse --> ToolExecution: Tool Call
    ModelResponse --> Handoff: Agent Switch
    ModelResponse --> FinalOutput: Direct Response
    ToolExecution --> ModelResponse: More Tools
    ToolExecution --> Handoff: Switch During Tool
    ToolExecution --> FinalOutput: Complete
    Handoff --> InputGuardrails: New Agent
    FinalOutput --> [*]
```

### Key Resolution Functions

The turn resolution process evaluates several conditions:

1. **Tool Input Guardrail Results**: Validation before tool execution
2. **Function Results**: Output from tool invocations
3. **Tool Output Guardrail Results**: Validation after tool execution
4. **Handoff Evaluation**: Check for agent transfer requests

Source: [src/agents/run_internal/turn_resolution.py]()
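The control flow above can be illustrated with a toy loop over scripted step outcomes; the step kinds (`tool`, `handoff`, `final`) mirror the states in the diagram, and everything else here is hypothetical:

```python
def resolve_turn(steps: list[tuple[str, str]]) -> list[str]:
    """Toy resolution loop: consume step outcomes until a final answer.
    Step kinds mirror the states above: 'tool', 'handoff', 'final'."""
    log = []
    agent = "triage"
    for kind, payload in steps:
        if kind == "tool":
            log.append(f"{agent} ran tool {payload}")
        elif kind == "handoff":
            agent = payload                     # control transfers to a new agent
            log.append(f"switched to {agent}")
        elif kind == "final":
            log.append(f"{agent} answered: {payload}")
            return log
    raise RuntimeError("no final output produced")

trace = resolve_turn([
    ("tool", "lookup_account"),
    ("handoff", "billing"),
    ("final", "Your balance is $0."),
])
```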

## Tool Execution and Guardrails

### Guardrail System

The SDK implements a two-layer guardrail system:

| Guardrail Type | Timing | Purpose |
|----------------|--------|---------|
| Input Guardrails | Before agent processes input | Validate and sanitize user input |
| Output Guardrails | After agent generates response | Validate response content |
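Conceptually, each guardrail is a predicate applied before or after the model runs. A minimal sketch, assuming made-up policies (a blocked-topic check on input, a length budget on output); the real SDK guardrail types carry more context than a bare string:

```python
BLOCKED_TERMS = {"password", "ssn"}

def check_input(text: str) -> bool:
    """Input guardrail: reject requests mentioning blocked topics."""
    return not any(term in text.lower() for term in BLOCKED_TERMS)

def check_output(text: str, max_len: int = 500) -> bool:
    """Output guardrail: enforce a response-length budget."""
    return len(text) <= max_len
```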

### Tool Use Tracking

Tools are tracked throughout execution to maintain state and enable:

- Streaming output collection
- Refusal detection
- Error handling
- Output validation

```mermaid
graph TD
    A[Tool Call] --> B{Input Guardrails}
    B -->|Pass| C[Execute Tool]
    B -->|Fail| D[Reject]
    C --> E[Tool Result]
    E --> F{Output Guardrails}
    F -->|Pass| G[Continue]
    F -->|Fail| H[Error Response]
```

## Model Context Protocol (MCP) Integration

The SDK provides a Python abstraction for MCP servers through the `MCPServer` base class. This enables agents to interact with external MCP-capable tools and services.

### MCPServer Base Class

The `MCPServer` class provides the foundation for MCP protocol implementation with methods for:

- **Resources**: `list_resources()`, `list_resource_templates()`, `read_resource()`
- **Tools**: Tool invocation and management
- **Prompts**: Server-provided prompt templates

Source: [src/agents/mcp/server.py]()
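To illustrate the shape of this interface (method names follow the list above; the signatures and the in-memory subclass are assumptions, not the SDK's actual API):

```python
import asyncio
from abc import ABC, abstractmethod

class MCPServerSketch(ABC):
    """Illustrative interface only; the SDK's MCPServer differs in detail."""

    @abstractmethod
    async def list_resources(self) -> list[str]: ...

    @abstractmethod
    async def read_resource(self, uri: str) -> str: ...

class InMemoryServer(MCPServerSketch):
    """Hypothetical subclass serving resources from a dict."""

    def __init__(self, resources: dict[str, str]):
        self._resources = resources

    async def list_resources(self) -> list[str]:
        return sorted(self._resources)

    async def read_resource(self, uri: str) -> str:
        return self._resources[uri]

server = InMemoryServer({"doc://readme": "hello"})
resources = asyncio.run(server.list_resources())
```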

### Require Approval Settings

MCP tools support granular approval controls:

| Setting | Behavior |
|---------|----------|
| `RequireApprovalSetting.NEVER` | Always auto-approve |
| `RequireApprovalSetting.ALWAYS` | Always require approval |
| `RequireApprovalSetting.UNDETERMINED` | Use default behavior |
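A sketch of how such a setting might resolve to a yes/no decision, using a stand-in enum that mirrors the table above (the resolution function is hypothetical):

```python
from enum import Enum, auto

class RequireApprovalSketch(Enum):
    """Illustrative mirror of the approval settings in the table above."""
    NEVER = auto()
    ALWAYS = auto()
    UNDETERMINED = auto()

def needs_approval(setting: RequireApprovalSketch, default: bool) -> bool:
    if setting is RequireApprovalSketch.NEVER:
        return False
    if setting is RequireApprovalSketch.ALWAYS:
        return True
    return default  # UNDETERMINED falls back to the default behavior
```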

## Session and State Management

### Run State

The `run_state` object tracks execution context including:

- Current agent
- Conversation history
- Generated items
- Original input
- Turn counters

### Persistence

The SDK supports session persistence for maintaining state across multiple interactions:

```python
session_persistence_enabled: bool
store: StoreSetting
```
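A toy in-memory stand-in shows the role persistence plays; real deployments would use the SDK's session settings backed by durable storage, and every name here is hypothetical:

```python
class InMemorySessionStore:
    """Toy persistence layer: keeps conversation items between interactions."""

    def __init__(self):
        self._sessions: dict[str, list[dict]] = {}

    def append(self, session_id: str, item: dict) -> None:
        self._sessions.setdefault(session_id, []).append(item)

    def history(self, session_id: str) -> list[dict]:
        return list(self._sessions.get(session_id, []))

store = InMemorySessionStore()
store.append("s1", {"role": "user", "content": "hi"})
store.append("s1", {"role": "assistant", "content": "hello"})
```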

## Tracing and Visualization

### Agent Visualization

The SDK includes visualization utilities for generating DOT-format diagrams of agent relationships:

| Function | Purpose |
|----------|---------|
| `get_all_nodes()` | Generate node definitions for agent graph |
| `get_all_edges()` | Generate edge definitions for handoff connections |

```mermaid
graph TD
    A[User] --> B[Orchestrator Agent]
    B --> C[Research Agent]
    B --> D[Writer Agent]
    C --> E[Web Search Tool]
    D --> F[File Write Tool]
    B --> G[Analytics Agent]
    G --> H[Data Analysis Tool]
```

Source: [src/agents/extensions/visualization.py]()
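In the spirit of `get_all_nodes()`/`get_all_edges()`, a self-contained sketch that renders an agent-to-handoff mapping as DOT (the mapping format and function name are illustrative, not the SDK's API):

```python
def to_dot(handoffs: dict[str, list[str]]) -> str:
    """Build a DOT digraph from an agent -> handoff-targets mapping."""
    nodes = sorted(set(handoffs) | {t for ts in handoffs.values() for t in ts})
    lines = ["digraph agents {"]
    lines += [f'  "{n}";' for n in nodes]                       # node definitions
    for src, targets in handoffs.items():
        lines += [f'  "{src}" -> "{t}";' for t in targets]      # handoff edges
    lines.append("}")
    return "\n".join(lines)

dot = to_dot({"orchestrator": ["research", "writer"]})
```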

## Item Processing

### Message Item Extraction

The SDK provides utilities for extracting content from conversation items:

| Method | Purpose |
|--------|---------|
| `text_message_output()` | Extract text from a single message output item |
| `text_message_outputs()` | Extract concatenated text from multiple items |
| `extract_refusal()` | Extract refusal content if model refused to respond |

```python
@classmethod
def extract_refusal(cls, message: TResponseOutputItem) -> str | None:
    """Extracts refusal content from a message, if any."""
```

## Run Configuration

### Key Configuration Options

| Parameter | Type | Description |
|-----------|------|-------------|
| `max_turns` | `int` | Maximum conversation turns |
| `tools` | `list[Function]` | Available tools for the run |
| `input_guardrails` | `list[InputGuardrail]` | Input validation |
| `output_guardrails` | `list[OutputGuardrail]` | Output validation |
| `tool_use_tracker` | `ToolUseTracker` | Tracks tool invocations |
| `run_state` | `RunState` | Mutable execution state |

Source: [src/agents/run.py]()

## Example Workflow Patterns

### Research Bot Architecture

A common pattern involves multiple specialized agents:

1. **Planner Agent**: Decomposes user queries into search tasks
2. **Search Agent**: Executes web searches in parallel
3. **Writer Agent**: Synthesizes research into final reports

```mermaid
graph LR
    A[User Query] --> B[Planner Agent]
    B --> C[Search 1]
    B --> D[Search 2]
    B --> E[Search N]
    C --> F[Writer Agent]
    D --> F
    E --> F
    F --> G[Final Report]
```
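The plan/parallel-search/write pipeline above maps naturally onto `asyncio.gather`. A self-contained sketch with stubbed agents (every function body is a placeholder for a real agent call):

```python
import asyncio

async def plan(query: str) -> list[str]:
    # Planner agent stub: decompose the query into search tasks.
    return [f"{query} overview", f"{query} recent news"]

async def search(term: str) -> str:
    # Search agent stub: the real example calls a web-search tool here.
    return f"summary of '{term}'"

async def write_report(query: str, summaries: list[str]) -> str:
    # Writer agent stub: synthesize summaries into a report.
    return f"# Report on {query}\n" + "\n".join(f"- {s}" for s in summaries)

async def research(query: str) -> str:
    terms = await plan(query)
    summaries = await asyncio.gather(*(search(t) for t in terms))  # parallel
    return await write_report(query, list(summaries))

report = asyncio.run(research("quantum computing"))
```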

### Sandbox Agent Workflow

Sandbox agents provide isolated execution environments:

```mermaid
graph TD
    A[SandboxAgent] --> B[Workspace]
    A --> C[Manifest]
    C --> D[Skill Loading]
    B --> E[Artifact Management]
    E --> F[File System Access]
    D --> G[Tool Execution]
```

## SDK Version

Current SDK version: `1.0.0` (semantic versioning)

Source: [src/agents/version.py]()

## Summary

The OpenAI Agents SDK provides a comprehensive framework for building sophisticated multi-agent applications. Key capabilities include:

- **Multi-Agent Orchestration**: Define and coordinate multiple agents with specialized roles
- **Handoff System**: Seamlessly transfer control between agents while maintaining context
- **Tool Execution**: Integrate tools with guardrail validation at input and output
- **MCP Integration**: Connect to external Model Context Protocol servers
- **State Management**: Track execution state with persistence support
- **Tracing**: Monitor and visualize agent interactions and flows

The SDK abstracts the complexity of turn resolution, tool tracking, and handoff management, allowing developers to focus on defining agent behavior and tool integrations.

---

<a id='installation'></a>

## Installation and Setup

### Related Pages

Related topics: [OpenAI Agents SDK Overview](#overview), [Examples Index](#examples-index)

<details>
<summary>Relevant source files</summary>

The following source files were used to generate this page:

- [pyproject.toml](https://github.com/openai/openai-agents-python/blob/main/pyproject.toml)
- [src/agents/_config.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/_config.py)
- [src/agents/run_config.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/run_config.py)
- [examples/sandbox/extensions/README.md](https://github.com/openai/openai-agents-python/blob/main/examples/sandbox/extensions/README.md)
- [examples/sandbox/README.md](https://github.com/openai/openai-agents-python/blob/main/examples/sandbox/README.md)
- [examples/model_providers/README.md](https://github.com/openai/openai-agents-python/blob/main/examples/model_providers/README.md)
</details>

# Installation and Setup

## Overview

The `openai-agents-python` library provides a comprehensive multi-agent framework for building AI-powered applications. The installation and setup process involves managing dependencies, configuring environment variables, and optionally setting up sandbox backends for code execution capabilities.

This page covers the complete setup workflow from initial installation through runtime configuration.

---

## Prerequisites

### System Requirements

| Requirement | Specification |
|-------------|---------------|
| Python | 3.10 or higher |
| Package Manager | pip, uv, or poetry |
| API Access | OpenAI API key (or compatible provider) |

### Environment Variables

The library requires the `OPENAI_API_KEY` environment variable for core functionality. Additional provider-specific variables may be needed depending on your use case.

```bash
# Core requirement
export OPENAI_API_KEY="sk-..."

# Optional: Model provider alternatives
export OPENROUTER_API_KEY="..."
export LITELLM_API_KEY="..."
```

Source: [examples/model_providers/README.md]()

---

## Installation Methods

### Using pip

```bash
pip install openai-agents
```

### Using uv (Recommended)

```bash
uv pip install openai-agents
```

Or with sync for development:

```bash
uv sync
```

### With Extras

The `pyproject.toml` defines optional dependency groups for specific features:

| Extra | Description | Dependencies |
|-------|-------------|--------------|
| `sandbox` | Core sandbox functionality | e2b-sdk, modal-client |
| `e2b` | E2B sandbox backend | e2b-code-interpreter, e2b |
| `blaxel` | Blaxel sandbox backend | blaxel |
| `modal` | Modal sandbox backend | modal |
| `vercel` | Vercel deployment | vercel |
| `daytona` | Daytona sandbox backend | daytona |
| `temporal` | Temporal workflow integration | temporal-sdk |
| `runloop` | Runloop backend | runloop |
| `dev` | Development dependencies | pytest, ruff, mypy |

To install with all sandbox backends:

```bash
uv sync --extra sandbox
```

Source: [pyproject.toml](https://github.com/openai/openai-agents-python/blob/main/pyproject.toml)

---

## Configuration Architecture

The library uses a layered configuration system:

```mermaid
graph TD
    A[Environment Variables] --> B[DefaultConfig]
    C[User Code Config] --> D[RunConfig]
    B --> D
    E[Agent-specific Config] --> F[Agent]
    F --> D
```

### Configuration Loading Order

1. **Environment Variables** - Base API keys and provider settings
2. **Default Config** - Library defaults from `_config.py`
3. **RunConfig** - User-provided runtime configuration
4. **Agent Config** - Per-agent overrides

Source: [src/agents/_config.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/_config.py)

---

## Core Configuration

### RunConfig Parameters

The `RunConfig` class provides runtime configuration for agent execution:

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `model` | `str` | `"gpt-4o"` | Model identifier |
| `model_provider` | `str \| None` | `None` | Custom model provider |
| `max_tokens` | `int \| None` | `None` | Maximum response tokens |
| `temperature` | `float \| None` | `None` | Sampling temperature |
| `parallel_tool_calls` | `bool` | `True` | Enable parallel tool execution |
| `tool_choice` | `str \| None` | `None` | Tool selection strategy |
| `tracing` | `TracingKind` | `"off"` | Tracing provider |
| `trace_include_defaults` | `bool` | `False` | Include default values in traces |
| `trace_include_raw_model_messages` | `bool` | `False` | Include raw model messages |
| `session.persistence` | `SessionPersistence` | `None` | Conversation persistence |

Source: [src/agents/run_config.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/run_config.py)

### Basic Configuration Example

```python
from agents import Agent, Runner, RunConfig

config = RunConfig(
    model="gpt-4o",
    temperature=0.7,
    tracing="console",  # Enable console tracing
)

agent = Agent(
    name="assistant",
    instructions="You are a helpful assistant.",
)

result = await Runner.run(agent, "Hello!", run_config=config)
```

---

## Sandbox Backend Setup

The library supports multiple sandbox backends for secure code execution. Each backend has specific setup requirements.

### Backend Comparison

| Backend | Use Case | Key Features |
|---------|----------|--------------|
| E2B | General-purpose sandbox | Bash/Jupyter interfaces, filesystem access |
| Blaxel | Cloud development | Persistent storage, cloud bucket mounts |
| Modal | Serverless execution | GPU support, scalable workloads |
| Daytona | Containerized dev | Full development environments |
| Vercel | Deployment | Serverless deployment, edge functions |

Source: [examples/sandbox/extensions/README.md](https://github.com/openai/openai-agents-python/blob/main/examples/sandbox/extensions/README.md)

### E2B Setup

```bash
# Install E2B extra
uv sync --extra e2b

# Set API key
export E2B_API_KEY="e2b_..."
```

**Run Example:**

```bash
uv run python examples/sandbox/basic.py --backend e2b
uv run python examples/sandbox/basic.py --backend e2b_code_interpreter
```

Source: [examples/sandbox/extensions/README.md]()

### Blaxel Setup

```bash
# Install Blaxel extra
uv sync --extra blaxel

# Set environment variables
export OPENAI_API_KEY="..."
export BL_API_KEY="..."
export BL_WORKSPACE="..."
```

**Run Example:**

```bash
uv run python examples/sandbox/extensions/blaxel_runner.py --stream
```

**Useful Flags:**

| Flag | Description |
|------|-------------|
| `--image blaxel/py-app` | Container image |
| `--region us-pdx-1` | Deployment region |
| `--memory 4096` | Memory allocation (MB) |
| `--ttl 1h` | Session time-to-live |

Source: [examples/sandbox/extensions/README.md]()

### Modal Setup

```bash
# Install Modal extra
uv sync --extra modal

# Authenticate
uv run modal token set --token-id <token-id> --token-secret <token-secret>

# Or use environment variables
export MODAL_TOKEN_ID="..."
export MODAL_TOKEN_SECRET="..."
```

**Run Example:**

```bash
uv run python examples/sandbox/extensions/modal_runner.py \
  --app-name openai-agents-python-sandbox-example \
  --stream
```

**Useful Flags:**

| Flag | Description |
|------|-------------|
| `--workspace-persistence tar` | Workspace persistence mode |
| `--sandbox-create-timeout-s 60` | Sandbox creation timeout |
| `--runtime node22` | Runtime environment |

Source: [examples/sandbox/extensions/README.md]()

### Daytona Setup

```bash
# Install Daytona extra
uv sync --extra daytona

# Set API key
export OPENAI_API_KEY="..."
export DAYTONA_API_KEY="..."
```

**Run Example:**

```bash
uv run python examples/sandbox/extensions/daytona/daytona_runner.py --stream
```

### Vercel Setup

```bash
# Install Vercel extra
uv sync --extra vercel

# Option 1: OIDC token (recommended)
export OPENAI_API_KEY="..."
export VERCEL_OIDC_TOKEN="..."

# Option 2: Explicit tokens
export OPENAI_API_KEY="..."
export VERCEL_TOKEN="..."
export VERCEL_PROJECT_ID="..."
export VERCEL_TEAM_ID="..."
```

**Run Example:**

```bash
uv run python examples/sandbox/extensions/vercel_runner.py --stream
```

### Runloop Setup

```bash
# Install Runloop extra
uv sync --extra runloop

# Sign up at platform.runloop.ai
```

Source: [examples/sandbox/extensions/README.md]()

---

## Sandbox Basic Examples

### Minimal Sandbox Setup

```python
from agents.sandbox import SandboxAgent, SandboxSession
from agents.sandbox.backends.e2b import E2BBackend

# Create backend
backend = E2BBackend(api_key="e2b_...")

# Create sandbox session
session = SandboxSession(backend=backend)

# Run agent
agent = SandboxAgent(
    name="code_assistant",
    instructions="Execute Python code in the sandbox.",
)

result = await Runner.run(agent, "Print hello world", session=session)
```

Source: [examples/sandbox/README.md](https://github.com/openai/openai-agents-python/blob/main/examples/sandbox/README.md)

### Available Sandbox Examples

| Example | File | Description |
|---------|------|-------------|
| Basic sandbox | `examples/sandbox/basic.py` | Create session, run agent, stream results |
| Handoffs | `examples/sandbox/handoffs.py` | Agent handoffs with sandbox backends |
| Workspace capabilities | `examples/sandbox/sandbox_agent_capabilities.py` | Configure workspace access |
| Combined tools | `examples/sandbox/sandbox_agent_with_tools.py` | Sandbox + host-defined tools |
| Agents as tools | `examples/sandbox/sandbox_agents_as_tools.py` | Expose sandbox agents as tools |
| Remote snapshots | `examples/sandbox/sandbox_agent_with_remote_snapshot.py` | Start from saved snapshots |

**Run Commands:**

```bash
uv run python examples/sandbox/basic.py
uv run python examples/sandbox/handoffs.py
uv run python examples/sandbox/sandbox_agent_capabilities.py
```

---

## Model Provider Configuration

### OpenRouter (Default for Examples)

```bash
export OPENROUTER_API_KEY="..."
```

```python
from agents import Agent, Runner, RunConfig

config = RunConfig(
    model="openrouter/openai/gpt-4o-mini",
)

result = await Runner.run(agent, "Hello", run_config=config)
```

### LiteLLM Provider

```bash
uv sync --extra litellm
```

```python
from agents.model_providers.litellm_provider import LiteLLMProvider

provider = LiteLLMProvider(model="gpt-4o-mini")
```

### Any-LLM Provider

```bash
uv sync --extra any-llm
```

```python
from agents.model_providers.any_llm_provider import AnyLLMProvider

provider = AnyLLMProvider(model="gpt-4o-mini")
```

**Run Examples:**

```bash
uv run examples/model_providers/litellm_provider.py
uv run examples/model_providers/litellm_auto.py
uv run examples/model_providers/any_llm_provider.py
uv run examples/model_providers/any_llm_auto.py
```

Source: [examples/model_providers/README.md]()

---

## Example Project Setup

### Healthcare Support Example

```bash
# List available scenarios
uv run python examples/sandbox/healthcare_support/main.py --list-scenarios

# Run specific scenario
uv run python examples/sandbox/healthcare_support/main.py --scenario blue_cross_pt_benefits
uv run python examples/sandbox/healthcare_support/main.py --scenario messy_ambiguous_knee_case

# Reset memory state
uv run python examples/sandbox/healthcare_support/main.py --reset-memory
```

**For unattended runs:**

```bash
EXAMPLES_INTERACTIVE_MODE=auto uv run python examples/sandbox/healthcare_support/main.py --scenario messy_ambiguous_knee_case
```

Source: [examples/sandbox/healthcare_support/README.md]()

### Research Bot Example

```bash
python -m examples.research_bot.main
```

Source: [examples/research_bot/README.md]()

---

## Temporal Integration Setup

For workflow-based sandbox management:

```bash
# Install Temporal extra
uv sync --extra temporal

# Install Temporal CLI and just
# Start dev server
just temporal

# In separate terminals
just worker  # Start worker
just tui     # Start TUI
```

**TUI Commands:**

| Command | Description |
|---------|-------------|
| `/switch` | Switch to different sandbox backend |
| `/fork [title]` | Fork session to different backend |
| `/title <name>` | Rename current session |

Source: [examples/sandbox/extensions/temporal/README.md]()

---

## Environment Configuration Files

### Repository Root `.env`

Place a `.env` file at the repository root:

```
OPENAI_API_KEY="sk-..."
```

### Example-Specific `.env`

Some examples support their own `.env` files:

```
# examples/sandbox/extensions/temporal/.env
OPENAI_API_KEY="sk-..."
DAYTONA_API_KEY="dtn_..."
E2B_API_KEY="e2b_..."
```

---

## Troubleshooting Setup Issues

### Common Issues

| Issue | Solution |
|-------|----------|
| Missing API key | Set `OPENAI_API_KEY` environment variable |
| Backend connection failed | Verify backend API key and network access |
| Import errors | Run `uv sync` to install all dependencies |
| Sandbox timeout | Increase `--sandbox-create-timeout-s` parameter |

### Verify Installation

```python
import agents
print(agents.__version__)
```

### Check Backend Configuration

```python
from agents.sandbox.backends.e2b import E2BBackend

backend = E2BBackend()
# Check if backend is properly configured
```

---

## Next Steps

After completing installation and setup:

1. **Quick Start** - Run `examples/sandbox/basic.py` to verify sandbox functionality
2. **Agent Development** - Create your first agent with custom instructions
3. **Tool Integration** - Add custom tools to extend agent capabilities
4. **Multi-Agent Systems** - Implement agent handoffs and orchestration

---

<a id='examples-index'></a>

## Examples Index

### Related Pages

Related topics: [OpenAI Agents SDK Overview](#overview), [Agents](#agents)

<details>
<summary>Relevant source files</summary>

The following source files were used to generate this page:

- [examples/basic/hello_world.py](https://github.com/openai/openai-agents-python/blob/main/examples/basic/hello_world.py)
- [examples/agent_patterns/agents_as_tools.py](https://github.com/openai/openai-agents-python/blob/main/examples/agent_patterns/agents_as_tools.py)
- [examples/sandbox/basic.py](https://github.com/openai/openai-agents-python/blob/main/examples/sandbox/basic.py)
- [examples/voice/streamed/main.py](https://github.com/openai/openai-agents-python/blob/main/examples/voice/streamed/main.py)
- [examples/financial_research_agent/README.md](https://github.com/openai/openai-agents-python/blob/main/examples/financial_research_agent/README.md)
- [examples/research_bot/README.md](https://github.com/openai/openai-agents-python/blob/main/examples/research_bot/README.md)
- [examples/sandbox/README.md](https://github.com/openai/openai-agents-python/blob/main/examples/sandbox/README.md)
- [examples/mcp/streamable_http_remote_example/README.md](https://github.com/openai/openai-agents-python/blob/main/examples/mcp/streamable_http_remote_example/README.md)
- [examples/model_providers/README.md](https://github.com/openai/openai-agents-python/blob/main/examples/model_providers/README.md)
- [examples/sandbox/extensions/README.md](https://github.com/openai/openai-agents-python/blob/main/examples/sandbox/extensions/README.md)
</details>

# Examples Index

## Overview

The Examples Index serves as a comprehensive guide to the sample applications and demonstrations provided in the openai-agents-python repository. These examples are designed to showcase the capabilities of the Agents SDK across various use cases, from basic agent interactions to complex multi-agent workflows involving sandboxed execution environments, voice interfaces, and external tool integrations.

The examples directory structure organizes demonstrations by functional category, allowing developers to quickly locate relevant implementations for their specific requirements. Each example is designed to be runnable with minimal configuration, serving as both documentation and a starting point for custom implementations.

## Example Categories

### Basic Examples

The basic examples provide the foundational patterns for building agents with the SDK. These examples demonstrate core concepts with minimal complexity.

| Example | File | Purpose |
|---------|------|---------|
| Hello World | `examples/basic/hello_world.py` | Simple agent that responds to user input |
| Agent as Tool | `examples/agent_patterns/agents_as_tools.py` | Demonstrates wrapping agents as tools for other agents |

Source: [examples/basic/hello_world.py](examples/basic/hello_world.py)
Source: [examples/agent_patterns/agents_as_tools.py](examples/agent_patterns/agents_as_tools.py)

### Sandbox Examples

Sandbox examples demonstrate the isolated workspace capabilities of the Agents SDK, enabling agents to execute code and manipulate files in a secure environment.

#### Small API Examples

| Example | Command | Description |
|---------|---------|-------------|
| Basic Sandbox | `uv run python examples/sandbox/basic.py` | Creates a sandbox session from a manifest, runs a `SandboxAgent`, and streams the result |
| Handoffs | `uv run python examples/sandbox/handoffs.py` | Uses handoffs with sandbox-backed agents |
| Workspace Capabilities | `uv run python examples/sandbox/sandbox_agent_capabilities.py` | Configures a sandbox agent with workspace capabilities |
| Sandbox with Tools | `uv run python examples/sandbox/sandbox_agent_with_tools.py` | Combines sandbox capabilities with host-defined tools |
| Agents as Tools | `uv run python examples/sandbox/sandbox_agents_as_tools.py` | Exposes sandbox agents as tools for another agent |
| Remote Snapshot | `uv run python examples/sandbox/sandbox_agent_with_remote_snapshot.py` | Starts from a remote snapshot |

Source: [examples/sandbox/README.md:1-20](examples/sandbox/README.md)

#### Sandbox Extensions

Sandbox extensions provide integrations with various cloud sandbox providers:

| Provider | Setup Command | Run Command |
|----------|---------------|-------------|
| E2B | `uv sync --extra e2b` | `uv run python examples/sandbox/basic.py --backend e2b` |
| Modal | `uv sync --extra modal` | `uv run python examples/sandbox/extensions/modal_runner.py --stream` |
| Blaxel | `uv sync --extra blaxel` | `uv run python examples/sandbox/extensions/blaxel_runner.py --stream` |
| Vercel | `uv sync --extra vercel` | `uv run python examples/sandbox/extensions/vercel_runner.py --stream` |
| Daytona | `uv sync --extra daytona` | `uv run python examples/sandbox/extensions/daytona/daytona_runner.py --stream` |
| Runloop | `uv sync --extra runloop` | Platform-specific setup |
| Temporal | Temporal CLI + just | `just worker` / `just tui` |

Source: [examples/sandbox/extensions/README.md](examples/sandbox/extensions/README.md)

### Multi-Agent Research Examples

#### Research Bot

The research bot demonstrates a multi-agent system where agents collaborate to perform web research and synthesize findings into reports.

**Architecture Flow:**

```mermaid
graph TD
    A[User Input] --> B[Planner Agent]
    B --> C[Generate Search Queries]
    C --> D[Search Agent 1]
    C --> E[Search Agent 2]
    C --> F[Search Agent N]
    D --> G[Parallel Execution]
    E --> G
    F --> G
    G --> H[Writer Agent]
    H --> I[Final Report]
```

**Key Components:**

- **Planner Agent**: Creates a research plan with search terms and rationale
- **Search Agent**: Uses Web Search tool to search and summarize results
- **Writer Agent**: Synthesizes summaries into a long-form markdown report

Source: [examples/research_bot/README.md](examples/research_bot/README.md)

#### Financial Research Agent

The financial research agent demonstrates domain-specific research capabilities with access to specialized analysis tools.

**Agent Configuration:**

```
You are a senior financial analyst. You will be provided with the original query
and a set of raw search summaries. Your job is to synthesize these into a
long‑form markdown report with a short executive summary.
```

**Available Tools:**
- `fundamentals_analysis` - Specialist write-up for fundamental analysis
- `risk_analysis` - Specialist write-up for risk assessment

Source: [examples/financial_research_agent/README.md](examples/financial_research_agent/README.md)

### Healthcare Support Example

A demonstration workflow that combines sandbox execution with human-in-the-loop approvals for healthcare-related tasks.

**Workflow Components:**

- **Orchestrator Agent**: Coordinates the overall workflow
- **Benefits Subagent**: Handles benefits-related queries
- **Sandbox Policy Agent**: Executes policy validation in sandbox
- **Memory Recap Agent**: Maintains conversation context

**Key Files:**

| File | Purpose |
|------|---------|
| `main.py` | Standalone CLI demo runner |
| `workflow.py` | Shared workflow execution logic, sandbox setup, artifact copying, tracing |
| `support_agents.py` | Agent definitions |
| `tools.py` | Local lookup tools and approval-gated human handoff |
| `skills/prior-auth-packet-builder/SKILL.md` | Sandbox skill definition |

**Available Scenarios:**

```bash
uv run python examples/sandbox/healthcare_support/main.py --list-scenarios
uv run python examples/sandbox/healthcare_support/main.py --scenario blue_cross_pt_benefits
uv run python examples/sandbox/healthcare_support/main.py --scenario messy_ambiguous_knee_case
```

Source: [examples/sandbox/healthcare_support/README.md](examples/sandbox/healthcare_support/README.md)

### Voice Examples

Voice examples demonstrate real-time audio interaction capabilities with agents.

**Architecture:**

```mermaid
graph LR
    A[Audio Input] --> B[Voice Agent]
    B --> C[Streaming Response]
    C --> D[Audio Output]
    B --> E[Tool Calls]
    E --> F[External Services]
```

**Run Command:**

```bash
uv run python examples/voice/streamed/main.py
```

Source: [examples/voice/streamed/main.py](examples/voice/streamed/main.py)

### MCP Examples

Model Context Protocol (MCP) examples demonstrate integration with external MCP servers for extended tool capabilities.

#### Streamable HTTP Remote Example

Connects to DeepWiki over the Streamable HTTP transport to leverage external tools.

**Run Command:**

```bash
uv run python examples/mcp/streamable_http_remote_example/main.py
```

**Prerequisites:**
- `OPENAI_API_KEY` set for model calls

Source: [examples/mcp/streamable_http_remote_example/README.md](examples/mcp/streamable_http_remote_example/README.md)

### Model Provider Examples

Model provider examples demonstrate routing models through adapter layers for flexibility in model selection.

| Adapter | Direct Run | Auto Mode |
|---------|------------|-----------|
| any-llm | `uv run examples/model_providers/any_llm_provider.py` | `uv run examples/model_providers/any_llm_auto.py` |
| LiteLLM | `uv run examples/model_providers/litellm_provider.py` | `uv run examples/model_providers/litellm_auto.py` |

**Model Override:**

```bash
uv run examples/model_providers/any_llm_provider.py --model openrouter/openai/gpt-5.4-mini
```

Source: [examples/model_providers/README.md](examples/model_providers/README.md)

## Common Configuration

### Environment Variables

Most examples require the `OPENAI_API_KEY` environment variable. Configure it in one of these locations:

1. Repository-root `.env` file
2. Example's local `.env` file
3. Shell environment

### Running with uv

The project uses `uv` for dependency management. Run examples with:

```bash
uv run python <path-to-example>
```

### Interactive Mode

For examples with prompts, set `EXAMPLES_INTERACTIVE_MODE=auto` to auto-answer:

```bash
EXAMPLES_INTERACTIVE_MODE=auto uv run python examples/sandbox/healthcare_support/main.py --scenario messy_ambiguous_knee_case
```

## Example Selection Guide

```mermaid
graph TD
    A[Use Case] --> B{Basic Interaction?}
    B -->|Yes| C[Basic Examples]
    B -->|No| D{Multi-Agent Workflow?}
    D -->|Yes| E{Research Domain?}
    D -->|No| F{Sandbox Required?}
    E -->|Financial| G[Financial Research Agent]
    E -->|General| H[Research Bot]
    F -->|Yes| I{Specialized Provider?}
    F -->|No| J[Agent Patterns]
    I -->|E2B| K[E2B Examples]
    I -->|Modal| L[Modal Examples]
    I -->|Vercel| M[Vercel Examples]
    I -->|Daytona| N[Daytona Examples]
    I -->|Blaxel| O[Blaxel Examples]
```

## Sandbox Backend Comparison

| Backend | Interface | Workspace Persistence | Cloud Support |
|---------|-----------|----------------------|---------------|
| E2B | Bash-style | Snapshot files | Yes |
| Modal | Bash-style | Tar, snapshot files/directory | Yes |
| Blaxel | Bash-style + PTY | Drive mount, cloud buckets | Yes (S3, R2, GCS) |
| Vercel | Command execution | Tar, snapshot | Yes |
| Daytona | Bash-style | Yes | Yes |
| Runloop | TBD | Yes | Yes |

Source: [examples/sandbox/extensions/README.md](examples/sandbox/extensions/README.md)

---

<a id='agents'></a>

## Agents

### Related Pages

Related topics: [Tools](#tools), [Handoffs](#handoffs), [Guardrails](#guardrails), [Run Loop and Execution](#run-loop)

<details>
<summary>Relevant source files</summary>

The following source files were used to generate this page:

- [src/agents/agent.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/agent.py)
- [src/agents/lifecycle.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/lifecycle.py)
- [src/agents/agent_output.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/agent_output.py)
- [src/agents/items.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/items.py)
- [src/agents/function_schema.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/function_schema.py)
- [src/agents/run.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/run.py)
- [src/agents/run_internal/turn_resolution.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/run_internal/turn_resolution.py)
- [src/agents/extensions/visualization.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/extensions/visualization.py)
- [src/agents/handoffs/__init__.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/handoffs/__init__.py)
- [src/agents/handoffs/history.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/handoffs/history.py)
</details>

# Agents

## Overview

Agents are the core execution units in the OpenAI Agents SDK. An agent encapsulates an LLM with instructions, tools, handoffs, and guardrails that enable autonomous task completion. Agents process user inputs, make decisions about tool usage, transfer control to other agents, and generate responses.

The agent system provides a structured approach to building AI-powered applications by separating concerns between orchestration, tool execution, and response generation. Agents can be composed hierarchically, where one agent can delegate tasks to sub-agents or hand off control entirely to specialized agents.

## Architecture

### Agent Core Components

An agent consists of several interconnected components that work together to process requests and generate responses.

```mermaid
graph TD
    A[User Input] --> B[Agent]
    B --> C[Instructions/Prompt]
    B --> D[Tools]
    B --> E[Handoffs]
    B --> F[Guardrails]
    C --> G[LLM Decision Engine]
    D --> H[Tool Execution]
    E --> I[Agent Transfer]
    G --> J[Response/Action]
    H --> J
    I --> K[Target Agent]
    K --> G
```

### Agent Types

| Type | Description | Use Case |
|------|-------------|----------|
| `Agent[TContext]` | Base agent type with generic context | General purpose agents |
| `SandboxAgent` | Agent with isolated workspace | Code execution, file operations |
| `FunctionAgent` | Agent for function/tool orchestration | Tool-heavy workflows |

### Source File Organization

| File | Purpose |
|------|---------|
| `src/agents/agent.py` | Core agent class definition |
| `src/agents/lifecycle.py` | Agent lifecycle management |
| `src/agents/agent_output.py` | Output types and responses |
| `src/agents/items.py` | Run item definitions and helpers |
| `src/agents/function_schema.py` | Tool schema generation |

## Agent Lifecycle

Agents follow a defined lifecycle from initialization through execution to completion or handoff.

```mermaid
stateDiagram-v2
    [*] --> Initialized: Agent Created
    Initialized --> Running: Input Received
    Running --> ToolExecution: Tool Call
    ToolExecution --> Running: Tool Result
    Running --> Handoff: Transfer Request
    Handoff --> [*]: Complete
    Running --> Response: Final Output
    Response --> [*]: Complete
    Handoff --> Running: New Agent
```

### Lifecycle States

| State | Description | Entry Condition |
|-------|-------------|-----------------|
| `Initialized` | Agent created but not yet processing | Object instantiation |
| `Running` | Actively processing input | `run()` or `run_sync()` called |
| `ToolExecution` | Executing one or more tools | LLM requests tool call |
| `Handoff` | Transferring to another agent | LLM triggers handoff |
| `Response` | Generating final response | No more actions needed |

Source: [src/agents/lifecycle.py:1-50]()
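The state table above can be sketched as a minimal transition map (illustrative only; the SDK's `src/agents/lifecycle.py` exposes lifecycle *hooks* rather than an explicit state enum):

```python
from enum import Enum, auto

class AgentState(Enum):
    INITIALIZED = auto()
    RUNNING = auto()
    TOOL_EXECUTION = auto()
    HANDOFF = auto()
    RESPONSE = auto()

# Legal transitions, taken from the state diagram above
TRANSITIONS: dict[AgentState, set[AgentState]] = {
    AgentState.INITIALIZED: {AgentState.RUNNING},
    AgentState.RUNNING: {
        AgentState.TOOL_EXECUTION,
        AgentState.HANDOFF,
        AgentState.RESPONSE,
    },
    AgentState.TOOL_EXECUTION: {AgentState.RUNNING},
    AgentState.HANDOFF: {AgentState.RUNNING},  # the new agent continues the run
    AgentState.RESPONSE: set(),  # terminal
}

def can_transition(src: AgentState, dst: AgentState) -> bool:
    return dst in TRANSITIONS[src]
```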

### Turn Resolution

The turn resolution process handles the core agent loop. Each turn processes input and determines next actions.

```mermaid
sequenceDiagram
    participant U as User
    participant R as Runner
    participant A as Agent
    participant T as Tools
    participant H as Handoffs
    
    U->>R: User Input
    R->>A: Process Turn
    A->>T: Tool Calls?
    T-->>A: Results
    A->>H: Handoff?
    H-->>A: New Agent
    A->>R: Response
    R-->>U: Output
```

Source: [src/agents/run_internal/turn_resolution.py:1-80]()

## Run Items

Run items represent the atomic units of work within an agent execution. They capture messages, tool calls, tool results, and handoffs.

### Item Types

| Type | Description | Source |
|------|-------------|--------|
| `MessageOutputItem` | LLM generated message | `src/agents/items.py:30-60` |
| `ToolCallItem` | Tool invocation request | `src/agents/items.py:61-90` |
| `ToolCallOutputItem` | Tool execution result | `src/agents/items.py:91-120` |
| `HandoffItem` | Agent transfer | `src/agents/items.py:121-150` |
| `ToolApprovalItem` | Human approval for tools | `src/agents/handoffs/history.py:50-70` |

### Message Extraction

The `ItemHelpers` class provides utilities for extracting content from run items:

```python
# Extract text from message output
text = ItemHelpers.text_message_output(message_item)

# Extract refusal if present
refusal = ItemHelpers.extract_refusal(message.raw_item)

# Convert string to input list
input_list = ItemHelpers.input_to_new_input_list("user message")
```

Source: [src/agents/items.py:40-75]()

## Handoffs

Handoffs enable agent-to-agent transfer, allowing specialized agents to handle specific tasks.

### Handoff Configuration

| Parameter | Type | Description |
|-----------|------|-------------|
| `agent` | `Agent` | Target agent |
| `tool_name_override` | `str` | Override for handoff tool name |
| `tool_description_override` | `str` | Override for handoff description |
| `on_handoff` | `Callable` | Callback when handoff occurs |
| `input_type` | `Type` | Type validation for handoff input |
| `input_filter` | `Callable` | Filter inputs passed to next agent |
| `is_enabled` | `bool \| Callable` | Enable/disable handoff |

Source: [src/agents/handoffs/__init__.py:30-80]()

### Handoff History Management

When an agent hands off to another, the conversation history is summarized to maintain context:

```python
# Nested history processing
nested_history = nest_handoff_history(
    handoff_input_data,
    history_mapper=custom_mapper
)
```

The history wrapper markers default to `<CONVERSATION HISTORY>` tags but can be customized:

```python
# Customize history markers
set_conversation_history_wrappers(
    start="<PREVIOUS_CONTEXT>",
    end="</PREVIOUS_CONTEXT>"
)
```

Source: [src/agents/handoffs/history.py:20-60]()

## Tools and Function Schema

Tools extend agent capabilities by providing functions the LLM can call.

### Function Schema Generation

The `FunctionSchema` class converts Python functions into OpenAI-compatible tool schemas:

```python
schema = FunctionSchema.from_fn(my_function)
tool_definition = schema.to_tool_definition()
```

### Tool Definition Structure

| Field | Type | Description |
|-------|------|-------------|
| `name` | `str` | Tool identifier |
| `description` | `str` | Human-readable description |
| `parameters` | `dict` | JSON schema for parameters |
| `strict` | `bool` | Enable strict parameter validation |

Source: [src/agents/function_schema.py:1-50]()
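A tool definition matching the fields above might look like the following (a hand-written example for illustration, not output produced by the schema generator):

```python
# A web-search tool definition conforming to the structure above
tool_definition = {
    "name": "web_search",
    "description": "Search the web and return the top results",
    "parameters": {
        "type": "object",
        "properties": {
            "query": {"type": "string", "description": "Search query"},
        },
        "required": ["query"],
        "additionalProperties": False,
    },
    "strict": True,
}
```

With `strict` set to `True`, the model is constrained to emit arguments that exactly match the JSON schema in `parameters`.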

## Agent Visualization

The SDK provides DOT-format visualization for agent graphs:

```mermaid
graph TD
    subgraph AgentGraph
        A["User Input"] --> B["Agent"]
        B --> C["Tool: search"]
        B --> D["Tool: calculate"]
        B --> E["Handoff: specialist"]
        E --> F["Specialist Agent"]
    end
```

### Graph Components

| Component | Shape | Color | Description |
|-----------|-------|-------|-------------|
| Start | Ellipse | lightblue | Entry point |
| Agent | Box | lightyellow | Agent nodes |
| Tool | Ellipse | lightgreen | Tool definitions |
| Handoff | Box | lightgrey | Agent transfer points |
| End | Ellipse | lightblue | Exit point |

Source: [src/agents/extensions/visualization.py:1-60]()

## Agent Output

Agent execution produces structured output containing messages, tool calls, and metadata.

### Output Structure

```python
@dataclass
class AgentOutput:
    messages: list[MessageOutputItem]
    tool_calls: list[ToolCallItem]
    tool_results: list[ToolCallOutputItem]
    handoffs: list[HandoffItem]
    final_response: str | None
```

Source: [src/agents/agent_output.py:1-40]()

### Response Finalization

After tool execution, the system finalizes responses:

```python
tool_final_output = await _maybe_finalize_from_tool_results(
    public_agent=agent,
    original_input=input,
    new_response=response,
    pre_step_items=pre_items,
    new_step_items=new_items,
    function_results=results
)
```

Refusals are extracted and converted to errors:

```python
refusal = ItemHelpers.extract_refusal(message_item.raw_item)
if refusal:
    raise ModelRefusalError(refusal)
```

Source: [src/agents/run_internal/turn_resolution.py:80-120]()

## Runner Integration

The `Runner` class orchestrates agent execution, managing the turn loop and state transitions.

### Run Configuration

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `max_turns` | `int` | `10` | Maximum conversation turns |
| `max_tools` | `int` | `100` | Maximum tool calls |
| `context_length` | `int` | Model dependent | Context window size |
| `tool_choice` | `str` | `"auto"` | Tool selection strategy |

### State Management

The runner maintains `RunState` throughout execution:

```python
run_state = RunState(
    current_agent=agent,
    model_response=response,
    generated_items=items,
    run_config=config
)
```

Source: [src/agents/run.py:100-180]()

## Error Handling

### Model Refusal

When the LLM refuses to respond, a `ModelRefusalError` is raised:

```python
if refusal:
    refusal_error = ModelRefusalError(refusal)
    run_error_data = build_run_error_data(...)
```

### Tool Activity Tracking

The system tracks tool usage even when no messages are generated:

```python
has_tool_activity_without_message = not message_items and bool(
    processed_response.tools_used
)
```

## Multi-Agent Patterns

### Hierarchical Agents

```mermaid
graph TD
    O[Orchestrator] --> S[Search Agent]
    O --> A[Analysis Agent]
    O --> W[Writer Agent]
    S --> R[Research Results]
    A --> R
    A --> D[Data Insights]
    W --> R
    W --> D
```

### Parallel Execution

Agents can execute in parallel for independent tasks:

```python
import asyncio

from agents import Runner

# Fan out one run per query and await them concurrently
results = await asyncio.gather(
    *(Runner.run(search_agent, query) for query in queries)
)
```

## Best Practices

1. **Context Management**: Use generic `Agent[TContext]` with custom context classes for type safety
2. **Handoff Design**: Create focused agents with clear responsibilities and minimal handoffs
3. **Tool Organization**: Group related tools into toolkits for better organization
4. **History Filtering**: Use `input_filter` in handoffs to prevent context overflow
5. **Error Handling**: Always handle `ModelRefusalError` and tool execution failures

## Related Components

| Component | File | Relationship |
|-----------|------|--------------|
| MCP Server | `src/agents/mcp/server.py` | Provides external tool access |
| Guardrails | `src/agents/guardrails.py` | Input/output validation |
| Streaming | `src/agents/streaming.py` | Real-time output |
| Tracing | `src/agents/tracing.py` | Execution monitoring |

---

<a id='tools'></a>

## Tools

### Related Pages

Related topics: [Agents](#agents), [Guardrails](#guardrails)

<details>
<summary>Relevant source files</summary>

The following source files were used to generate this page:

- [src/agents/tool.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/tool.py)
- [src/agents/tool_context.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/tool_context.py)
- [src/agents/agent_tool_state.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/agent_tool_state.py)
- [src/agents/editor.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/editor.py)
- [src/agents/computer.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/computer.py)
- [src/agents/apply_diff.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/apply_diff.py)
</details>

# Tools

## Overview

Tools in the OpenAI Agents Python SDK enable AI agents to interact with external systems, execute code, manipulate files, and perform actions in isolated environments. The tools system provides a structured way for agents to extend their capabilities beyond pure text generation by calling functions, accessing resources, and performing complex operations.

The SDK implements a tool abstraction that wraps callable functions with metadata, descriptions, and execution logic. When an agent decides to use a tool, the SDK handles the invocation, manages the context, processes results, and returns responses to the agent for further processing.

The tool system supports multiple tool types, ranging from simple function calls to complex sandboxed execution environments. Tools can be configured at initialization with options including name, description, and parameter schema, and they integrate with the agent's approval mechanism and guardrail system.

## Core Tool Architecture

### Tool Base Class

The foundation of the tools system is the `Tool` class defined in `src/agents/tool.py`. This abstract base class defines the interface that all tools must implement, ensuring consistent behavior across different tool types.

```mermaid
graph TD
    A[Tool Base Class] --> B[FunctionTool]
    A --> C[FileSearchTool]
    A --> D[ComputerTool]
    A --> E[WebSearchTool]
    A --> F[Sandbox Agent Tools]
```

Each tool implementation must provide:
- A unique name identifier
- A description for the LLM to understand tool purpose
- Parameter schema for function calling
- Execution logic in an `invoke` or `acall` method

### Tool Interface

The tool interface follows a standard pattern where each tool is defined with metadata that allows the LLM to understand when and how to use it. Tools can be synchronous or asynchronous, supporting both simple function calls and complex operations that require I/O operations.

Key tool properties include:

| Property | Type | Description |
|----------|------|-------------|
| `name` | `str` | Unique identifier for the tool |
| `description` | `str` | Natural language description for LLM |
| `parameters` | `dict` | JSON Schema for tool arguments |
| `strict` | `bool` | Whether to enforce parameter validation |

Source: [src/agents/tool.py:1-50]()

## Built-in Tool Types

### FunctionTool

`FunctionTool` is the most common tool type, wrapping a Python function with tool metadata. It allows developers to expose arbitrary Python functions as tools that agents can call.

```python
from agents import FunctionTool

async def run_calculate_budget(ctx, args: str) -> str:
    # `args` arrives as a JSON string of the tool arguments
    ...

budget_tool = FunctionTool(
    name="calculate_budget",
    description="Calculate the total budget for a list of items",
    params_json_schema={...},
    on_invoke_tool=run_calculate_budget,
)
```

In practice the `function_tool` decorator is the more common way to create one: it derives the name, description, and parameter schema from the function's signature and docstring.

### File and Editor Tools

The SDK provides specialized tools for file operations. The `FileSearchTool` enables searching through file contents, while editor tools provide controlled file manipulation capabilities.

Source: [src/agents/editor.py:1-100]()

#### Editor Tool Capabilities

| Operation | Description |
|-----------|-------------|
| `read` | Read file contents |
| `write` | Write content to files |
| `edit` | Modify existing files |
| `glob` | Find files by pattern |
| `ls` | List directory contents |
| `mv` | Move/rename files |
| `rm` | Delete files |
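The operations above map naturally onto standard filesystem calls; a minimal stand-in using `pathlib` (purely illustrative, not the SDK's editor implementation) could look like:

```python
from pathlib import Path

class MiniEditor:
    """Toy editor implementing a few operations from the table above."""

    def __init__(self, root: str):
        self.root = Path(root)

    def write(self, rel: str, content: str) -> None:
        path = self.root / rel
        path.parent.mkdir(parents=True, exist_ok=True)
        path.write_text(content)

    def read(self, rel: str) -> str:
        return (self.root / rel).read_text()

    def glob(self, pattern: str) -> list[str]:
        return sorted(
            p.relative_to(self.root).as_posix() for p in self.root.glob(pattern)
        )

    def rm(self, rel: str) -> None:
        (self.root / rel).unlink()
```

The real editor tool adds the controls this toy omits, such as path sandboxing and edit validation.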

### Computer Tool

The `ComputerTool` enables agents to interact with a virtualized computer environment. This is particularly useful for tasks requiring UI automation, screenshot analysis, and keyboard/mouse control.

Source: [src/agents/computer.py:1-100]()

The Computer Tool provides:

- **Screen Capture**: Take screenshots of the virtual display
- **Mouse Control**: Move cursor, click, scroll operations
- **Keyboard Control**: Type text, press keys and key combinations
- **Process Management**: Launch and interact with applications

```mermaid
graph LR
    A[Agent Decision] --> B[Computer Tool Action]
    B --> C{Screen Capture?}
    C -->|Yes| D[Screenshot Analysis]
    C -->|No| E[Execute Action]
    D --> F[Observation Result]
    E --> G[Action Result]
    F --> H[Agent Processing]
    G --> H
```

### ApplyDiff Tool

The `ApplyDiff` tool provides efficient file modification capabilities using diff-based operations. Instead of replacing entire files, it applies targeted changes, making it more efficient for large files and reducing the risk of unintended modifications.

Source: [src/agents/apply_diff.py:1-100]()

## Tool Context and State Management

### Tool Context

Tool context (`tool_context`) provides runtime information to tools during execution. It encapsulates the current run state, session information, and access to shared resources.

Source: [src/agents/tool_context.py:1-100]()

```mermaid
graph TD
    A[Tool Execution] --> B[ToolContext]
    B --> C[RunContext]
    B --> D[Session]
    B --> E[Store Settings]
    C --> F[Current Agent]
    C --> G[User Context]
```

### Agent Tool State

The `AgentToolState` manages tool-related state within an agent's execution context. This includes tracking tool usage, maintaining state across tool calls, and managing tool-specific configurations.

Source: [src/agents/agent_tool_state.py:1-100]()

Key responsibilities include:
- Tracking which tools have been invoked
- Maintaining state between sequential tool calls
- Managing tool-specific configuration options
- Handling tool result caching when appropriate
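A minimal sketch of this kind of state tracking (illustrative only; the real `AgentToolState` is richer than this):

```python
from dataclasses import dataclass, field

@dataclass
class ToolUsageState:
    """Track which tools have run and cache each tool's latest result."""

    invoked: list[str] = field(default_factory=list)
    results: dict[str, object] = field(default_factory=dict)

    def record(self, tool_name: str, result: object) -> None:
        # Append to the invocation history and overwrite the cached result
        self.invoked.append(tool_name)
        self.results[tool_name] = result
```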

## Tool Configuration

### Tool Parameters

Tools are configured with JSON Schema definitions that describe their expected parameters. This schema serves dual purposes:

1. **LLM Understanding**: Helps the model generate correct tool calls
2. **Validation**: Ensures incoming parameters meet requirements

```python
params_json_schema = {
    "type": "object",
    "properties": {
        "query": {
            "type": "string",
            "description": "Search query string"
        },
        "limit": {
            "type": "integer",
            "description": "Maximum results to return",
            "default": 10
        }
    },
    "required": ["query"]
}
```
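A minimal required-parameter check against a schema of this shape (a sketch of the validation idea; the SDK and the model API perform their own, fuller validation):

```python
def check_required(schema: dict, args: dict) -> list[str]:
    """Return the names of required parameters missing from `args`."""
    return [name for name in schema.get("required", []) if name not in args]

schema = {
    "type": "object",
    "properties": {
        "query": {"type": "string"},
        "limit": {"type": "integer", "default": 10},
    },
    "required": ["query"],
}
```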

### Tool Options

| Option | Description | Default |
|--------|-------------|---------|
| `name` | Tool identifier | Function name |
| `description` | LLM-facing description | Docstring |
| `params_json_schema` | Parameter schema | Auto-generated |
| `strict` | Enforce schema strictly | `False` |
| `require_approval` | Require human approval | `None` |

## Tool Guardrails

### Input Guardrails

Input guardrails validate tool parameters before execution. They provide an opportunity to inspect, modify, or reject tool calls based on custom logic.

```python
async def validate_search_params(
    ctx: RunContextWrapper,
    tool: MCPTool,
    params: dict
) -> InputGuardrailResult:
    # Custom validation logic
    if contains_prohibited_terms(params.get("query")):
        return InputGuardrailResult(
            did_pass=False,
            message="Query contains prohibited content"
        )
    return InputGuardrailResult(did_pass=True)
```

### Output Guardrails

Output guardrails validate tool results after execution. They ensure that tool outputs meet safety, formatting, or content requirements before being returned to the agent.

Source: [src/agents/items.py:50-100]()

## Tool Filtering

The SDK supports filtering which tools are exposed to agents. This is particularly useful when:

- Limiting agent capabilities for security
- Testing specific tool behaviors
- Implementing role-based access control

Source: [examples/mcp/tool_filter_example/README.md]()

```python
# Static tool filter
tool_filter = ["filesystem_read", "filesystem_write"]

# Dynamic tool filter
async def dynamic_filter(
    ctx: RunContextWrapper,
    agent: Agent,
    tool: Tool
) -> bool:
    return tool.name in allowed_tools
```

## Integration with Agents

### Adding Tools to Agents

Tools are added to agents through the agent's initialization or configuration:

```python
agent = Agent(
    name="research_agent",
    tools=[
        web_search_tool,
        file_search_tool,
        custom_function_tool
    ],
    instructions="You are a research assistant..."
)
```

### Tool Execution Flow

```mermaid
sequenceDiagram
    participant Agent
    participant SDK
    participant Tool
    participant External

    Agent->>SDK: Request tool execution
    SDK->>Tool: Validate parameters
    Tool->>Tool: Apply input guardrails
    Tool->>External: Execute operation
    External-->>Tool: Return result
    Tool->>Tool: Apply output guardrails
    Tool-->>SDK: Return processed result
    SDK-->>Agent: Provide tool result
```

## Human-in-the-Loop with Tools

### Approval Requirements

Tools can be configured to require human approval before execution. When enabled, the SDK pauses tool execution and awaits human confirmation.

```python
tool = FunctionTool(
    name="send_email",
    handle_invoke=send_email,
    require_approval="always"
)
```

Source: [src/agents/mcp/server.py:100-150]()

### Approval Resume

After human approval or rejection, the SDK resumes execution with the approval result:

```python
await runner.resume(
    run_id=run_id,
    approval_result=ApprovalResult(approved=True)
)
```

## Summary

The Tools system in the OpenAI Agents Python SDK provides a flexible, extensible framework for adding capabilities to AI agents. Key features include:

- **Abstraction**: Consistent interface for diverse tool types
- **Composition**: Tools can be combined and filtered dynamically
- **Safety**: Built-in guardrails and approval mechanisms
- **Context Awareness**: Runtime context enables stateful tool interactions
- **Integration**: Seamless integration with the agent execution model

By leveraging these tools, developers can create sophisticated agents that can search the web, manipulate files, execute code, interact with computer interfaces, and integrate with external services through protocols like MCP.

---

<a id='guardrails'></a>

## Guardrails

### Related Pages

Related topics: [Agents](#agents), [Tools](#tools)

<details>
<summary>Relevant source files</summary>

The following source files were used to generate this page:

- [src/agents/guardrail.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/guardrail.py)
- [src/agents/tool_guardrails.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/tool_guardrails.py)
- [src/agents/run_internal/guardrails.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/run_internal/guardrails.py)
</details>

# Guardrails

Guardrails provide a security and validation layer in the agents framework, enabling developers to intercept, validate, and control both incoming inputs and outgoing outputs at various stages of agent execution. They serve as programmable checkpoints that can enforce policy compliance, prevent data leakage, block harmful content, and ensure operational safety across the entire agent runtime.

## Overview

The guardrail system operates at multiple checkpoints during agent execution:

```mermaid
graph TD
    A[User Input] --> B[Input Guardrails]
    B --> C[Agent Processing]
    C --> D[Tool Call]
    D --> E[Tool Input Guardrails]
    E --> F[Tool Execution]
    F --> G[Tool Output Guardrails]
    G --> H[Response Generation]
    H --> I[Output Guardrails]
    I --> J[Final Output]
    
    B -.->|Block/Modify| A
    E -.->|Block/Modify| D
    G -.->|Block/Modify| F
    I -.->|Block/Modify| H
```

Guardrails are implemented as pluggable components that can be attached to agents, individual tools, or configured globally. Each guardrail can define one of three behavioral responses when triggered:

| Behavior Type | Description |
|---------------|-------------|
| `raise_exception` | Throws a tripwire exception, halting execution |
| `reject_content` | Replaces the content with a custom rejection message |
| `filter` | Removes or sanitizes the problematic content (planned) |

Source: [src/agents/run_internal/tool_execution.py:1-50]()

## Types of Guardrails

### Input Guardrails

Input guardrails validate user-provided input before it reaches the agent. They receive the raw input and can inspect, modify, or reject it based on custom logic.

```mermaid
sequenceDiagram
    participant User
    participant Runner
    participant InputGuardrail
    participant Agent
    
    User->>Runner: User Input
    Runner->>InputGuardrail: Run input through guardrails
    alt Guardrail triggers
        InputGuardrail->>Runner: GuardrailOutput with behavior
        alt raise_exception
            Runner-->>User: GuardrailTripwireTriggered Error
        else reject_content
            Runner->>Agent: Modified/Sanitized input
        end
    else Pass through
        InputGuardrail->>Runner: GuardrailOutput with pass behavior
        Runner->>Agent: Original input
    end
```

Source: [src/agents/run_internal/guardrails.py:1-30]()

### Tool Input Guardrails

Tool input guardrails validate the arguments passed to tool calls before execution. They have access to the tool context, agent information, and the raw tool arguments.

```python
@dataclass
class ToolInputGuardrailData:
    context: ToolContext[Any]
    agent: Agent[Any]
    input: Any  # The raw tool arguments
```

Source: [src/agents/tool_guardrails.py:1-20]()

### Tool Output Guardrails

Tool output guardrails validate the results returned from tool execution before those results are processed further. They can inspect, filter, or reject tool outputs.

```python
@dataclass
class ToolOutputGuardrailData:
    context: ToolContext[Any]
    agent: Agent[Any]
    output: Any  # The raw tool result
```

Source: [src/agents/tool_guardrails.py:1-20]()

### Output Guardrails

Output guardrails validate the agent's final response before it is returned to the user. These operate on the completed message stream and can perform final content filtering or policy checks.

## GuardrailResult Structure

Each guardrail execution produces a `GuardrailOutput` result that defines the subsequent action:

```python
@dataclass
class GuardrailOutput:
    content_filtered: bool
    policy_name: str
    policy_version: str
    content: str | None
    behavior: dict[str, Any]
```

The `behavior` dictionary must contain at minimum a `type` key specifying one of the supported behavior types.

Source: [src/agents/guardrail.py:1-50]()

## Configuration

### Agent-Level Guardrail Configuration

Guardrails can be attached directly to an agent instance:

```python
from agents import Agent, Guardrail

agent = Agent(
    name="secure_agent",
    instructions="You are a helpful assistant",
    input_guardrails=[
        Guardrail(guardrail_name="content_filter"),
        Guardrail(guardrail_name="pii_detector"),
    ],
    output_guardrails=[
        Guardrail(guardrail_name="safety_check"),
    ],
)
```

### Tool-Level Guardrail Configuration

Individual tools can have their own guardrails:

```python
from agents import function_tool, ToolInputGuardrail, ToolOutputGuardrail

@function_tool(
    tool_input_guardrails=[input_check_guardrail],
    tool_output_guardrails=[output_check_guardrail],
)
def sensitive_operation(x: str) -> str:
    return process(x)
```

资料来源：[src/agents/tool.py:1-30]()

### Guardrail Behavior Configuration

Guardrails can be configured with different tripwire behaviors:

| Parameter | Type | Description |
|-----------|------|-------------|
| `guardrail_name` | `str` | Unique identifier for the guardrail |
| `on_fail` | `GuardrailFailureMode` | Behavior when triggered |
| `error_message` | `str` | Custom error message for exceptions |
| `log` | `bool` | Whether to log guardrail triggers |

## Tracing and Observability

Guardrail execution is automatically traced using the observability framework:

```mermaid
graph LR
    A[Guardrail Trigger] --> B[guardrail_span]
    B --> C[Record triggered status]
    B --> D[Capture span data]
    D --> E[Export to trace provider]
    
    C -->|True| F[Mark span as triggered]
    C -->|False| G[Continue normally]
```

The `guardrail_span` function creates spans for monitoring:

```python
def guardrail_span(
    name: str,
    triggered: bool = False,
    span_id: str | None = None,
    parent: Trace | Span[Any] | None = None,
    disabled: bool = False,
) -> Span[GuardrailSpanData]:
```

资料来源：[src/agents/tracing/create.py:1-40]()

## Execution Flow

### Tool Guardrail Execution

Tool guardrails are executed within the tool execution pipeline:

```mermaid
flowchart TD
    A[Tool Call Invoked] --> B{Input Guardrails exist?}
    B -->|Yes| C[Execute Input Guardrails]
    C --> D{Any trigger raise_exception?}
    D -->|Yes| E[Raise ToolInputGuardrailTripwireTriggered]
    D -->|No| F{Any trigger reject_content?}
    F -->|Yes| G[Replace input with message]
    F -->|No| H[Execute Tool]
    H --> I{Output Guardrails exist?}
    I -->|Yes| J[Execute Output Guardrails]
    J --> K{Any trigger raise_exception?}
    K -->|Yes| L[Raise ToolOutputGuardrailTripwireTriggered]
    K -->|No| M{Any trigger reject_content?}
    M -->|Yes| N[Replace output with message]
    M -->|No| O[Return result]
```

资料来源：[src/agents/run_internal/tool_execution.py:50-100]()

### Guardrail Tripwire Exceptions

When a guardrail triggers with `raise_exception` behavior, specific exception types are raised:

| Exception Type | Triggered By |
|---------------|--------------|
| `ToolInputGuardrailTripwireTriggered` | Tool input guardrail rejection |
| `ToolOutputGuardrailTripwireTriggered` | Tool output guardrail rejection |

These exceptions contain both the guardrail reference and the output that triggered it, enabling detailed error handling and debugging.
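
A minimal sketch of what such an exception might carry and how a caller can use it. The attribute names (`guardrail_name`, `output`) are illustrative, not the SDK's exact API.

```python
class ToolGuardrailTripwireTriggered(Exception):
    """Carries the guardrail reference and the output that fired it."""
    def __init__(self, guardrail_name: str, output: object):
        super().__init__(f"guardrail '{guardrail_name}' tripwire triggered")
        self.guardrail_name = guardrail_name
        self.output = output

def run_tool_safely(tool, args):
    """Catch the tripwire and report which guardrail fired."""
    try:
        return tool(args)
    except ToolGuardrailTripwireTriggered as exc:
        return f"blocked by {exc.guardrail_name}"
```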

## Implementation Pattern

### Creating a Custom Guardrail

```python
from agents import Guardrail, RunContextWrapper
from agents.guardrail import (
    GuardrailOutput,
    InputGuardrailOutputData,
    OutputGuardrailOutputData,
)

async def my_guardrail(
    context: RunContextWrapper,
    input_data: InputGuardrailOutputData,
) -> GuardrailOutput:
    text = input_data.agents_input
    if contains_problematic_content(text):
        return GuardrailOutput(
            content_filtered=True,
            policy_name="my_policy",
            policy_version="1.0",
            content="Content filtered due to policy violation",
            behavior={"type": "reject_content", "message": "Content not allowed"},
        )
    return GuardrailOutput(
        content_filtered=False,
        policy_name="my_policy",
        policy_version="1.0",
        content=None,
        behavior={"type": "pass"},
    )

guardrail = Guardrail(
    guardrail_name="my_custom_guardrail",
    guardrail_function=my_guardrail,
)
```

### Using with FunctionTool

```python
from agents import function_tool, ToolInputGuardrail, ToolOutputGuardrail

@function_tool(
    tool_input_guardrails=[
        ToolInputGuardrail(guardrail_function=validate_json_input),
    ],
    tool_output_guardrails=[
        ToolOutputGuardrail(guardrail_function=validate_output_schema),
    ],
)
def process_data(input: str) -> dict:
    # Tool implementation
    pass
```

## Best Practices

1. **Defense in Depth**: Layer multiple guardrails at different checkpoints for comprehensive coverage
2. **Fail-Safe Defaults**: Configure guardrails to fail closed (reject) rather than open (pass) when uncertain
3. **Logging**: Enable guardrail logging for security auditing and debugging
4. **Performance**: Keep guardrail logic lightweight to avoid introducing latency
5. **Idempotency**: Ensure guardrails produce consistent results for the same input
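
Point 2 (fail-safe defaults) can be made concrete: if the guardrail itself errors, treat the content as rejected rather than letting it through. A standalone sketch:

```python
def check_fail_closed(guardrail_fn, text: str) -> bool:
    """Return True only when the guardrail explicitly passes the text.

    Any guardrail error counts as a rejection (fail closed).
    """
    try:
        return bool(guardrail_fn(text))
    except Exception:
        return False
```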

## See Also

- [Agents Overview](../agents/overview) — General agent architecture
- [Tools](../tools/overview) — Tool implementation and configuration
- [Tracing](../tracing/overview) — Observability and monitoring
- [Handoffs](../handoffs/overview) — Multi-agent handoff mechanisms

---

<a id='handoffs'></a>

## Handoffs

### 相关页面

相关主题：[Agents](#agents), [Agents as Tools](#agents-as-tools), [Run Loop and Execution](#run-loop)

<details>
<summary>相关源码文件</summary>

以下源码文件用于生成本页说明：

- [src/agents/handoffs/__init__.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/handoffs/__init__.py)
- [src/agents/handoffs/history.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/handoffs/history.py)
- [src/agents/extensions/handoff_filters.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/extensions/handoff_filters.py)
- [src/agents/extensions/handoff_prompt.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/extensions/handoff_prompt.py)
</details>

# Handoffs

## Overview

Handoffs in the OpenAI Agents Python SDK enable seamless transfer of control and conversation context between different agents. When an agent determines that a task should be handled by another agent, a handoff executes the transition, optionally filtering or transforming the input data before the receiving agent begins processing.

The handoff mechanism serves as the backbone for multi-agent architectures, allowing complex workflows where specialized agents handle specific subtasks while maintaining coherent conversation state across transitions.

## Core Concepts

### What is a Handoff?

A handoff is a structured mechanism that transfers control from one agent to another. It encapsulates:

- The destination agent
- Tool configuration for invoking the handoff
- Optional input filtering logic
- Optional type validation for handoff arguments
- Enable/disable conditions

资料来源：[src/agents/handoffs/__init__.py:1-100]()

### The Handoff Class

The `Handoff` class is the primary abstraction for defining agent-to-agent transfers:

```python
class Handoff(Generic[TAgent, TContext]):
    name: str
    description: str
    input_json_schema: dict[str, Any]
    on_invoke_handoff: Callable[[RunContextWrapper[Any], str], Awaitable[TAgent]]
    agent_name: str
    input_filter: HandoffInputFilter | None = None
    is_enabled: bool | Callable[[RunContextWrapper[Any], Agent[TContext]], bool] = True
```

资料来源：[src/agents/handoffs/__init__.py:100-130]()

### HandoffInputData

When a handoff is invoked, it receives and processes `HandoffInputData`:

| Field | Type | Description |
|-------|------|-------------|
| `input_history` | `list[InputItem]` | Conversation history up to the handoff point |
| `pre_handoff_items` | `list[RunItem]` | Run items generated before handoff |
| `input_items` | `list[InputItem]` | Input items to pass to the next agent |
| `new_items` | `list[RunItem]` | New items to add to the receiving agent's context |

资料来源：[src/agents/handoffs/__init__.py:50-80]()

## Architecture

### Handoff Flow

```mermaid
graph TD
    A[Current Agent] -->|Determines handoff needed| B[Handoff Tool Call]
    B --> C{is_enabled check}
    C -->|Enabled| D[on_invoke_handoff]
    C -->|Disabled| E[Hide from LLM]
    D --> F[Input Filter Processing]
    F --> G{HandoffInputData}
    G --> H[Next Agent Context]
    H --> I[Receiving Agent]
    
    J[Type Validation] -.->|if input_type provided| F
    K[History Nesting] -.->|if nest_handoff_history enabled| G
```

### Agent Hierarchy with Handoffs

```mermaid
graph TD
    A[Orchestrator Agent] -->|handoff| B[Research Agent]
    A -->|handoff| C[Writer Agent]
    A -->|handoff| D[Review Agent]
    B -->|handoff| E[Web Search Agent]
    B -->|handoff| F[Data Analysis Agent]
    C -->|handoff| D
```

## Configuration Options

### Handoff Constructor Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `agent` | `Agent[TContext]` | Yes | - | The destination agent |
| `name` | `str` | No | agent.name | Custom name for the handoff tool |
| `description` | `str` | No | agent.description | Tool description shown to the model |
| `tool_description_override` | `str` | No | None | Override the tool description |
| `on_handoff` | `Callable` | No | None | Side effect function executed on handoff |
| `input_type` | `type` | No | None | Type for validating handoff arguments |
| `input_filter` | `HandoffInputFilter` | No | None | Function to filter/transform inputs |
| `nest_handoff_history` | `bool` | No | None | Override run-level history nesting setting |
| `is_enabled` | `bool \| Callable` | No | True | Whether the handoff is available |

资料来源：[src/agents/handoffs/__init__.py:150-200]()

### Input Type Validation

When `input_type` is provided, the model-generated JSON arguments are validated:

```python
if input_type is not None and on_handoff is None:
    raise UserError("You must provide on_handoff when input_type is provided")
```

The `on_handoff` callback must accept two parameters for type-validated inputs:

```python
async def on_handoff(ctx: RunContext, data: ValidatedInputType) -> None:
    ...
```

资料来源：[src/agents/handoffs/__init__.py:200-220]()

### Enabling/Disabling Handoffs

Handoffs can be conditionally enabled using the `is_enabled` parameter:

```python
# Static boolean
handoff = Handoff(agent=agent, is_enabled=False)

# Dynamic condition
handoff = Handoff(
    agent=agent,
    is_enabled=lambda ctx, current_agent: ctx.user_id in ADMIN_USERS
)
```

Disabled handoffs are hidden from the LLM at runtime.

资料来源：[src/agents/handoffs/__init__.py:180-190]()
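
The runtime check for an `is_enabled` that may be either a bool or a callable can be sketched like this. `Handoff` here is a standalone stub, not the SDK class:

```python
from dataclasses import dataclass
from typing import Callable, Union

@dataclass
class Handoff:
    agent_name: str
    is_enabled: Union[bool, Callable[[object, object], bool]] = True

def visible_handoffs(handoffs, ctx, current_agent):
    """Keep handoffs whose is_enabled resolves truthy; the rest stay hidden from the LLM."""
    visible = []
    for h in handoffs:
        enabled = h.is_enabled(ctx, current_agent) if callable(h.is_enabled) else h.is_enabled
        if enabled:
            visible.append(h)
    return visible
```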

## Input Filtering

### HandoffInputFilter

The `input_filter` function receives the entire conversation history and can modify what the next agent receives:

```python
HandoffInputFilter = Callable[
    [HandoffInputData], HandoffInputData | Awaitable[HandoffInputData]
]
```

### Common Filtering Patterns

| Pattern | Use Case |
|---------|----------|
| Remove sensitive data | Strip user credentials before handoff |
| Context summarization | Condense long conversations |
| Tool filtering | Remove tools not needed by next agent |
| History truncation | Keep only recent relevant items |

### Example Input Filter

```python
import dataclasses

def filter_sensitive_inputs(data: HandoffInputData) -> HandoffInputData:
    # Drop history items that contain sensitive info
    # (contains_sensitive is a user-defined predicate)
    filtered_history = [
        item for item in data.input_history
        if not contains_sensitive(item)
    ]
    return dataclasses.replace(data, input_history=filtered_history)
```

资料来源：[src/agents/extensions/handoff_filters.py]()

## History Management

### Nesting Conversation History

When `nest_handoff_history=True`, the previous agent's conversation is summarized before being passed to the next agent:

```python
def nest_handoff_history(
    handoff_input_data: HandoffInputData,
    *,
    history_mapper: HandoffHistoryMapper | None = None,
) -> HandoffInputData:
    """Summarize the previous transcript for the next agent."""
```

This prevents context overflow and gives the new agent a concise summary rather than the full conversation history.

资料来源：[src/agents/handoffs/history.py:40-60]()

### Conversation History Wrappers

Default markers wrap nested conversation summaries:

| Marker | Default Value |
|--------|---------------|
| Start | `<CONVERSATION HISTORY>` |
| End | `</CONVERSATION HISTORY>` |

These can be customized:

```python
set_conversation_history_wrappers(
    start="<PREVIOUS AGENT TRANSCRIPT>",
    end="</PREVIOUS AGENT TRANSCRIPT>"
)
```

资料来源：[src/agents/handoffs/history.py:20-40]()
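
Putting the two ideas together, a nested history entry is essentially the previous transcript rendered between the start and end markers. A standalone sketch; the line-per-item rendering format is illustrative:

```python
HISTORY_START = "<CONVERSATION HISTORY>"
HISTORY_END = "</CONVERSATION HISTORY>"

def nest_history(items: list[dict]) -> str:
    """Render prior turns as a single wrapped block for the next agent."""
    lines = [f"{item['role']}: {item['content']}" for item in items]
    return "\n".join([HISTORY_START, *lines, HISTORY_END])
```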

## Creating Handoffs

### Basic Handoff

```python
from agents import Agent, Handoff, Runner

agent_a = Agent(name="Agent A", instructions="...")
agent_b = Agent(name="Agent B", instructions="...")

# Create handoff
handoff_to_b = Handoff(name="transfer_to_b", agent=agent_b)

# Add to source agent
agent_a.handoffs.append(handoff_to_b)
```

### Handoff with Callbacks

```python
async def on_transfer_to_b(ctx: RunContext) -> None:
    # Side effect: log the handoff before agent_b takes over
    logger.info(f"Handoff triggered by user: {ctx.user_id}")

handoff_to_b = Handoff(
    agent=agent_b,
    name="transfer_to_b",
    on_handoff=on_transfer_to_b
)
```

### Handoff with Type Validation

```python
from pydantic import BaseModel

class TransferData(BaseModel):
    reason: str
    priority: int = 1

async def handle_transfer(ctx: RunContext, data: TransferData) -> None:
    # Side effect: record why the transfer happened
    logger.info(f"Transfer reason: {data.reason} (priority {data.priority})")

handoff = Handoff(
    agent=standard_agent,
    input_type=TransferData,
    on_handoff=handle_transfer
)
```
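
The validated-input path can be sketched without the SDK: the model emits the handoff arguments as a JSON string, which is parsed into the declared `input_type` before `on_handoff` runs. A minimal stdlib sketch, using a dataclass in place of the Pydantic model (`validate_handoff_args` is an illustrative name, not an SDK function):

```python
import json
from dataclasses import dataclass

@dataclass
class TransferData:
    reason: str
    priority: int = 1

def validate_handoff_args(raw_json: str) -> TransferData:
    """Parse and validate the model-generated arguments; raises on bad input."""
    return TransferData(**json.loads(raw_json))
```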

## Handoffs in the Run Loop

### Turn Resolution with Handoffs

When a handoff is triggered during agent execution:

```mermaid
sequenceDiagram
    participant Agent as Current Agent
    participant Run as Run Loop
    participant Handoff as Handoff Handler
    
    Agent->>Run: Generate response with handoff tool call
    Run->>Handoff: Process NextStepHandoff
    Handoff->>Handoff: Validate input_type if provided
    Handoff->>Handoff: Execute input_filter
    Handoff->>Handoff: Call on_handoff callback
    Handoff-->>Run: Return new agent and filtered input
    Run->>Run: Reset current agent
    Run->>Run: Start next turn with new agent
```

资料来源：[src/agents/run.py:200-250]()

### Handoff Result Processing

The run loop handles handoff transitions:

```python
elif isinstance(turn_result.next_step, NextStepHandoff):
    current_agent = cast(Agent[TContext], turn_result.next_step.new_agent)
    # Next agent starts with the nested/filtered input
    starting_input = turn_result.original_input
    original_input = turn_result.original_input
    should_run_agent_start_hooks = True
```

资料来源：[src/agents/run.py:230-245]()

## Prompt Integration

### Handoff Tool Representation

Handoffs appear as tools to the LLM with descriptions generated from the handoff configuration:

```python
# Default tool name format
f"transfer_to_{agent_name}"
```

The tool description shown to the model is assembled from the handoff name, the agent description, the input schema (if defined), and `tool_description_override` (if provided).

资料来源：[src/agents/extensions/handoff_prompt.py]()

### Prompt Instructions

The system prompt can include handoff guidance:

```
- When a task matches another agent's expertise, use the handoff tool
- Explain the reason for handoff in your response
- Preserve relevant context during transfer
```

## Best Practices

### Design Principles

1. **Clear Agent Specialization**: Each agent should have a distinct responsibility
2. **Minimal Handoff Arguments**: Pass only essential data, not entire conversations
3. **Meaningful Handoff Names**: Use descriptive names that indicate the destination
4. **Appropriate History Management**: Enable nesting for long conversations

### Error Handling

| Scenario | Recommended Approach |
|----------|---------------------|
| Handoff to unavailable agent | Check `is_enabled` before showing to model |
| Invalid input type | Use Pydantic validation with clear error messages |
| Filter failure | Return original input with warning |

### Performance Considerations

- Avoid complex filters that run synchronously on large histories
- Use `is_enabled` callbacks to prevent unnecessary tool calls
- Consider disabling history nesting for high-frequency handoffs

## Related Components

| Component | File | Purpose |
|-----------|------|---------|
| `Handoff` class | `src/agents/handoffs/__init__.py` | Core handoff definition |
| `HandoffInputData` | `src/agents/handoffs/__init__.py` | Input data structure |
| `nest_handoff_history` | `src/agents/handoffs/history.py` | History summarization |
| `HandoffInputFilter` | `src/agents/extensions/handoff_filters.py` | Input filtering utilities |
| Handoff prompt integration | `src/agents/extensions/handoff_prompt.py` | Prompt rendering |

## Summary

Handoffs provide a robust mechanism for multi-agent orchestration in the OpenAI Agents Python SDK. Key capabilities include:

- **Structured Transfer**: Defined handoff contracts with optional type validation
- **Flexible Input Management**: Filtering and transformation before agent handoff
- **History Control**: Nesting or truncating conversation context
- **Conditional Execution**: Enable/disable based on runtime conditions
- **Callback Support**: Side effects and logging during transitions

These mechanisms enable complex agent workflows while maintaining clean separation of concerns and manageable context sizes.

---

<a id='agents-as-tools'></a>

## Agents as Tools

### 相关页面

相关主题：[Handoffs](#handoffs), [Agents](#agents)

<details>
<summary>相关源码文件</summary>

以下源码文件用于生成本页说明：

- [examples/agent_patterns/agents_as_tools.py](https://github.com/openai/openai-agents-python/blob/main/examples/agent_patterns/agents_as_tools.py)
- [examples/agent_patterns/agents_as_tools_conditional.py](https://github.com/openai/openai-agents-python/blob/main/examples/agent_patterns/agents_as_tools_conditional.py)
- [examples/agent_patterns/agents_as_tools_structured.py](https://github.com/openai/openai-agents-python/blob/main/examples/agent_patterns/agents_as_tools_structured.py)
- [src/agents/agent.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/agent.py)
- [src/agents/run.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/run.py)
- [src/agents/extensions/visualization.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/extensions/visualization.py)
- [examples/sandbox/handoffs.py](https://github.com/openai/openai-agents-python/blob/main/examples/sandbox/handoffs.py)
</details>

# Agents as Tools

Agents as Tools is a powerful architectural pattern in the openai-agents-python library that enables one agent to be invoked as a callable tool by another agent. This pattern allows for sophisticated multi-agent orchestration where specialized agents can be dynamically called with specific inputs, returning structured results to the calling agent.

## Overview

In the traditional agent architecture, agents operate as standalone units that receive input, execute tasks, and return results. The "Agents as Tools" pattern extends this by wrapping agents inside function tool abstractions, enabling:

- **Dynamic Agent Invocation**: Agents can be called like functions within other agents' workflows
- **Structured Inputs and Outputs**: Typed interfaces ensure consistent data exchange between agents
- **Conditional Execution**: Agents can be invoked based on specific conditions or input patterns
- **Parallel Tool Calls**: Multiple agents can be called simultaneously as tools
- **Nested Architectures**: Complex hierarchies of agents calling sub-agents as tools

This pattern is particularly valuable for building research assistants, customer service systems, and specialized workflow engines where different capabilities need to be composed dynamically.

## Architecture

```mermaid
graph TD
    subgraph "Primary Agent"
        PA[Main Agent]
        PA -->|has tools| T1[Agent-as-Tool 1]
        PA -->|has tools| T2[Agent-as-Tool 2]
        PA -->|has tools| Tn[Agent-as-Tool N]
    end
    
    subgraph "Wrapped Agents"
        T1 -->|wraps| A1[Specialized Agent 1]
        T2 -->|wraps| A2[Specialized Agent 2]
        Tn -->|wraps| An[Specialized Agent N]
    end
    
    A1 -->|returns| T1
    A2 -->|returns| T2
    An -->|returns| Tn
    T1 -->|tool result| PA
    T2 -->|tool result| PA
    Tn -->|tool result| PA
```

### Core Components

| Component | Role | Location |
|-----------|------|----------|
| `Agent` | Base agent with instructions, tools, handoffs | `src/agents/agent.py` |
| `FunctionTool` | Wraps callable functions for agent use | `src/agents/tool.py` |
| `Runner` | Executes agents and manages tool calls | `src/agents/run.py` |
| `Handoff` | Enables agent-to-agent transfers | `src/agents/handoffs/__init__.py` |

## Implementation Patterns

### Basic Agent-to-Tool Conversion

The simplest form of this pattern converts an existing agent into a callable tool:

```python
from agents import Agent, Runner, function_tool

# Create a specialized agent
search_agent = Agent(
    name="web_searcher",
    instructions="You are a web search expert. Search for the given query and summarize results.",
    tools=[web_search_tool],
)

# Convert to a function tool that the primary agent can use
@function_tool
async def search_tool(query: str) -> str:
    """Search the web for information."""
    result = await Runner.run(search_agent, input=query)
    return result.final_output
```

### AgentTool with Structured Output

For more sophisticated scenarios, agents can be wrapped with explicit input/output schemas:

```python
from agents import Agent
from pydantic import BaseModel

class SearchResult(BaseModel):
    title: str
    url: str
    summary: str

search_agent = Agent(
    name="structured_searcher",
    instructions="Search for information and return structured results.",
    output_type=SearchResult,
)
```

### Conditional Agent Invocation

Agents can be configured to only be available under certain conditions:

```python
from agents import Agent

admin_agent = Agent(
    name="admin_panel",
    instructions="Handle administrative tasks.",
)

# Conditional enabling based on user role: the condition is attached to
# the tool wrapper, so the tool is hidden from the LLM for other users
def is_admin(context, agent):
    return context.context.user_role == "admin"

admin_tool = admin_agent.as_tool(
    tool_name="admin_panel",
    tool_description="Handle administrative tasks.",
    is_enabled=is_admin,
)
```

## Usage Examples

### Research Assistant Pattern

A common use case is a research bot with specialized sub-agents:

```mermaid
sequenceDiagram
    participant User
    participant Planner as Planner Agent
    participant Search as Search Agent (Tool)
    participant Writer as Writer Agent
    
    User->>Planner: "Research topic: AI trends"
    Planner->>Planner: Generate search queries
    Planner->>Search: tool_call(search_queries[0])
    Planner->>Search: tool_call(search_queries[1])
    Planner->>Search: tool_call(search_queries[n])
    Search-->>Planner: SearchResult
    Planner->>Writer: Pass summaries
    Writer-->>User: Final report
```

### Example: Agent Patterns in Code

The repository includes several agent pattern examples demonstrating this functionality:

**Basic Pattern** (`examples/agent_patterns/agents_as_tools.py`):
```python
# Agents are wrapped as tools and called by a primary agent
primary_agent = Agent(
    name="orchestrator",
    instructions="Coordinate specialized agents to answer user queries.",
    tools=[search_agent_as_tool, code_agent_as_tool],
)
```

**Conditional Pattern** (`examples/agent_patterns/agents_as_tools_conditional.py`):
```python
# Agents are conditionally available based on context
if user.is_premium:
    primary_agent.tools.append(premium_agent_tool)
```

**Structured Pattern** (`examples/agent_patterns/agents_as_tools_structured.py`):
```python
# Agents return structured data types
@function_tool
async def get_weather(location: str) -> WeatherData:
    """Get weather for a location."""
    result = await Runner.run(weather_agent, input=location)
    return result.final_output
```

## Configuration Options

### Tool Metadata Configuration

When converting an agent to a tool, you can override the default tool behavior:

| Parameter | Type | Purpose |
|-----------|------|---------|
| `name` | `str` | Override the tool name shown to the LLM |
| `description` | `str` | Human-readable description of what the tool does |
| `input_type` | `Type[BaseModel]` | Pydantic model for input validation |
| `output_type` | `Type[BaseModel]` | Pydantic model for output schema |
| `is_enabled` | `bool \| Callable` | Condition for tool availability |

### Agent Configuration

Agents used as tools support standard agent parameters:

| Parameter | Description |
|-----------|-------------|
| `instructions` | System prompt for the agent |
| `tools` | Additional tools available to the agent |
| `handoffs` | Agents the sub-agent can transfer to |
| `output_type` | Expected output type |
| `model` | Specific model to use |

## Execution Flow

```mermaid
flowchart LR
    A[Primary Agent] -->|decides to call| B[Agent-as-Tool]
    B -->|parses input| C{Input Validation}
    C -->|valid| D[Execute Wrapped Agent]
    C -->|invalid| E[Return Error]
    D -->|run agent| F[Runner.run]
    F -->|collect results| G[Format Output]
    G -->|return| B
    B -->|tool result| A
```
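
The flow above can be sketched as a thin wrapper: parse the model's JSON arguments, run the wrapped agent, and format the result. This is a standalone, synchronous sketch (the SDK's call path is async and richer); `call_agent_as_tool` and `agent_fn` are illustrative names.

```python
import json

def call_agent_as_tool(agent_fn, raw_args: str) -> str:
    """Parse the model's JSON arguments, run the wrapped agent, format the result."""
    try:
        args = json.loads(raw_args)
    except json.JSONDecodeError as exc:
        return f"error: invalid tool arguments ({exc})"
    result = agent_fn(**args)
    return str(result)
```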

## Integration with Handoffs

The Agents as Tools pattern complements the handoff mechanism:

| Aspect | Agents as Tools | Handoffs |
|--------|-----------------|----------|
| Control Flow | Agent calls tool, waits for result | Agent transfers control completely |
| State | Shared context | Fresh context for new agent |
| Use Case | Parallel specialized tasks | Sequential role switches |
| Return | Structured result | Handoff message |

**资料来源**：[src/agents/handoffs/__init__.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/handoffs/__init__.py)

## Best Practices

1. **Clear Tool Descriptions**: Provide explicit descriptions so the LLM knows when to invoke the agent
2. **Typed Interfaces**: Use Pydantic models for input/output to ensure type safety
3. **Error Handling**: Wrap agent executions in try-catch to handle failures gracefully
4. **Context Management**: Pass relevant context to sub-agents without overwhelming them
5. **Conditional Enabling**: Use `is_enabled` to control access based on user permissions

## Related Patterns

- **Handoffs**: Complete agent-to-agent transfer for distinct roles
- **Multi-Agent Orchestration**: Coordinated multi-agent workflows
- **Sandbox Agents**: Isolated execution environments for agents
- **Guardrails**: Input/output validation for agent tool calls

**资料来源**：[examples/sandbox/handoffs.py](https://github.com/openai/openai-agents-python/blob/main/examples/sandbox/handoffs.py)

---

<a id='run-loop'></a>

## Run Loop and Execution

### 相关页面

相关主题：[Agents](#agents), [Sessions and Memory](#sessions)

<details>
<summary>相关源码文件</summary>

以下源码文件用于生成本页说明：

- [src/agents/run.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/run.py)
- [src/agents/run_config.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/run_config.py)
- [src/agents/result.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/result.py)
- [src/agents/run_internal/turn_resolution.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/run_internal/turn_resolution.py)
- [src/agents/items.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/items.py)
</details>

# Run Loop and Execution

The Run Loop and Execution system is the core engine of the openai-agents-python SDK. It orchestrates the interaction between agents, language models, tools, and external systems through an iterative turn-based processing architecture.

## Overview

The execution model follows a **turn-based loop** where each turn consists of:

1. **Turn Preparation** - Setting up context, hooks, and session state
2. **Model Invocation** - Calling the language model with the current input
3. **Response Processing** - Parsing and validating model outputs
4. **Tool Execution** - Running any tools or side effects requested by the model
5. **Turn Resolution** - Determining the next step (continue, handoff, or finish)

资料来源：[src/agents/run.py:1-50]()

## Architecture Components

### Core Execution Flow

```mermaid
graph TD
    A[User Input] --> B[Run Loop Entry]
    B --> C[Turn Preparation]
    C --> D[Call Model]
    D --> E{Response Type?}
    E -->|Tool Calls| F[Execute Tools]
    E -->|Handoff| G[Switch Agent]
    E -->|Message| H[Finalize Output]
    F --> C
    G --> C
    H --> I[Return RunResult]
```

### Key Modules

| Module | Purpose | Key Classes/Functions |
|--------|---------|----------------------|
| `run.py` | Main entry point | `run()`, `run_sync()` |
| `run_loop.py` | Core loop logic | `run_loop()` |
| `turn_preparation.py` | Turn setup | Input filtering, hook invocation |
| `turn_resolution.py` | Response handling | Tool result processing, output finalization |
| `tool_execution.py` | Tool runner | `execute_tools_and_side_effects()` |
| `streaming.py` | Streaming support | Stream handlers |

资料来源：[src/agents/run.py:1-30]()

## Run Configuration

### RunOptions

The `RunOptions` TypedDict defines all parameters for running an agent:

```python
class RunOptions(TypedDict, Generic[TContext]):
    context: NotRequired[TContext | None]
    max_turns: NotRequired[int | None]
    hooks: NotRequired[RunHooks[TContext] | None]
    run_config: NotRequired[RunConfig | None]
    previous_response_id: NotRequired[str | None]
    auto_previous_response_id: NotRequired[bool]
    conversation_id: NotRequired[str | None]
    session: NotRequired[Session | None]
    error_handlers: NotRequired[RunErrorHandlers[TContext] | None]
```

### Configuration Options

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `max_turns` | `int \| None` | `None` | Maximum turns; `None` disables limit |
| `context` | `TContext \| None` | `None` | Custom context object |
| `hooks` | `RunHooks[TContext]` | `None` | Lifecycle hooks |
| `run_config` | `RunConfig` | `None` | Runtime configuration |
| `session` | `Session` | `None` | Session for state persistence |
| `error_handlers` | `RunErrorHandlers` | `None` | Error callback handlers |

资料来源：[src/agents/run_config.py:50-75]()

## Turn Processing

### Turn Resolution

The `turn_resolution.py` module handles processing model responses after tool execution:

```python
tool_final_output = await _maybe_finalize_from_tool_results(
    public_agent=public_agent,
    original_input=original_input,
    new_response=new_response,
    pre_step_items=pre_step_items,
    new_step_items=new_step_items,
    function_results=function_results,
    hooks=hooks,
    context_wrapper=context_wrapper,
    tool_input_guardrail_results=tool_input_guardrail_results,
    tool_output_guardrail_results=tool_output_guardrail_results,
)
```

### Message Output Extraction

The `ItemHelpers` class provides utilities for extracting content from model responses:

```python
@classmethod
def extract_refusal(cls, message: TResponseOutputItem) -> str | None:
    """Extracts refusal content from a message, if any."""
    if not isinstance(message, ResponseOutputMessage):
        return None
    refusal = ""
    for content_item in message.content:
        if isinstance(content_item, ResponseOutputRefusal):
            refusal += content_item.refusal or ""
    return refusal or None
```

### Refusal Handling

When the model refuses to respond, a `ModelRefusalError` is raised:

```python
if refusal:
    refusal_error = ModelRefusalError(refusal)
    run_error_data = build_run_error_data(...)
```

资料来源：[src/agents/run_internal/turn_resolution.py:25-45]()

## Agent Handoffs

### Handoff Processing

The run loop handles agent handoffs through the `NextStepHandoff` type:

```python
elif isinstance(turn_result.next_step, NextStepHandoff):
    current_agent = cast(Agent[TContext], turn_result.next_step.new_agent)
    if run_state is not None:
        run_state._current_agent = current_agent
    starting_input = turn_result.original_input
    original_input = turn_result.original_input
    current_span.finish(reset_current=True)
    should_run_agent_start_hooks = True
```

### Loop Continuation

For cases requiring another iteration without switching agents:

```python
elif isinstance(turn_result.next_step, NextStepRunAgain):
    await save_turn_items_if_needed(
        session=session,
        run_state=run_state,
        session_persistence_enabled=session_persistence_enabled,
        items=session_items_for_turn(turn_result),
        response_id=turn_result.model_response.response_id,
        store=store_setting,
    )
    continue
```

Source: [src/agents/run.py:150-180]()

## Result Types

### RunResult Structure

| Field | Type | Description |
|-------|------|-------------|
| `last_agent` | `Agent` | Final agent that produced output |
| `new_items` | `list[RunItem]` | All items from the run |
| `final_output` | `Any` | Final output produced by the last agent |
| `raw_responses` | `list[RawResponsesFromModel]` | Raw model outputs |
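
As a quick orientation, the fields above can be read off a result object as shown below; the `RunResultSketch` dataclass is a hypothetical stand-in mirroring the table, not the SDK's actual `RunResult` class, which may expose more fields and different types.

```python
from dataclasses import dataclass, field
from typing import Any

# Hypothetical stand-in mirroring the RunResult table above; the SDK's
# actual class lives in the agents package and may differ.
@dataclass
class RunResultSketch:
    last_agent: Any
    new_items: list[Any] = field(default_factory=list)
    final_output: Any = None
    raw_responses: list[Any] = field(default_factory=list)

result = RunResultSketch(last_agent="triage_agent", final_output="Hello!")
print(result.final_output)        # final output of the run
print(len(result.raw_responses))  # raw model responses collected per turn
```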

### Tool Output Handling

Tool outputs are processed through multiple stages:

1. **Pre-step items** - State before tool execution
2. **New step items** - State after tool execution
3. **Function results** - Structured tool call results

The system tracks tool activity without messages using:

```python
has_tool_activity_without_message = not message_items and bool(
    processed_response.tools_used
)
```

Source: [src/agents/run_internal/turn_resolution.py:35-40]()

## Input Processing

### Input Conversion

The `ItemHelpers` class handles input normalization:

```python
@classmethod
def input_to_new_input_list(
    cls, input: str | list[TResponseInputItem]
) -> list[TResponseInputItem]:
    """Converts a string or list of input items into a list of input items."""
    if isinstance(input, str):
        return [{"content": input, "role": "user"}]
    return cast(list[TResponseInputItem], _to_dump_compatible(input))
```

### Text Extraction

The following helper concatenates all text content from message output items:

```python
@classmethod
def text_message_outputs(cls, items: list[RunItem]) -> str:
    """Concatenates all the text content from a list of message output items."""
    text = ""
    for item in items:
        if isinstance(item, MessageOutputItem):
            text += cls.text_message_output(item)
    return text
```

Source: [src/agents/items.py:60-90]()

## Error Handling

### Error Flow

```mermaid
graph TD
    A[Error Occurs] --> B{Error Type?}
    B -->|Refusal| C[ModelRefusalError]
    B -->|Tool Failure| D[ToolExecutionError]
    B -->|Max Turns| E[MaxTurnsExceededError]
    B -->|Other| F[Generic Error Handler]
    C --> G[Build Error Data]
    D --> G
    E --> G
    F --> G
    G --> H[Return Error Result]
```

### Error Handlers Configuration

Custom error handlers can be registered per error kind:

```python
error_handlers: RunErrorHandlers[TContext] | None
```

The system supports typed error handling where handlers are keyed by error category.

Source: [src/agents/run_config.py:60-65]()
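
As a hedged sketch of how per-category handlers might be wired up, the snippet below keys plain callables by an error-kind string; the handler keys, signatures, and `dispatch` helper are illustrative assumptions, not the SDK's `RunErrorHandlers` API.

```python
# Illustrative per-kind error handlers; the real RunErrorHandlers type
# in the SDK may use different keys and callback signatures.
def on_refusal(error: Exception) -> str:
    return f"model refused: {error}"

def on_max_turns(error: Exception) -> str:
    return "run stopped: turn limit reached"

error_handlers = {
    "refusal": on_refusal,
    "max_turns": on_max_turns,
}

def dispatch(kind: str, error: Exception) -> str:
    # Fall back to a generic handler when no specific one is registered.
    handler = error_handlers.get(kind, lambda e: f"unhandled: {e}")
    return handler(error)

print(dispatch("max_turns", RuntimeError("limit")))
```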

## Session Persistence

### Save Turn Items

The run loop persists state after each turn when session is enabled:

```python
await save_turn_items_if_needed(
    session=session,
    run_state=run_state,
    session_persistence_enabled=session_persistence_enabled,
    items=session_items_for_turn(turn_result),
    response_id=turn_result.model_response.response_id,
    store=store_setting,
)
```

### Parameters

| Parameter | Type | Description |
|-----------|------|-------------|
| `session` | `Session \| None` | Active session instance |
| `run_state` | `RunState \| None` | Current run state |
| `session_persistence_enabled` | `bool` | Whether persistence is active |
| `items` | `list[RunItem]` | Items to persist |
| `response_id` | `str` | Model response ID |
| `store` | `StoreSetting` | Storage configuration |

Source: [src/agents/run.py:160-170]()

## Streaming Support

The system supports streaming model outputs through the streaming module. Streaming is configured via `RunConfig` and allows real-time output handling without waiting for complete responses.

## Lifecycle Hooks

### Available Hooks

| Hook | Trigger | Purpose |
|------|---------|---------|
| `on_agent_start` | Agent turn begins | Initialize agent-specific state |
| `on_agent_end` | Agent turn ends | Cleanup or logging |
| `on_tool_call` | Tool invocation | Logging or monitoring |
| `on_handoff` | Agent switch | Track transitions |

Hooks receive `RunContextWrapper` and relevant context data, enabling deep customization of the execution flow.

Source: [src/agents/run_config.py:35-45]()
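
The hook surface can be sketched as a plain class that collects events; the method names follow the table above, but the SDK's real base class and callback signatures may differ from these assumptions.

```python
import asyncio

# Hedged sketch: a hooks object recording lifecycle events. The SDK's
# actual hooks base class and signatures may differ.
class LoggingHooks:
    def __init__(self) -> None:
        self.events: list[str] = []

    async def on_agent_start(self, context, agent) -> None:
        self.events.append(f"start:{agent}")

    async def on_agent_end(self, context, agent, output) -> None:
        self.events.append(f"end:{agent}")

    async def on_tool_call(self, context, agent, tool) -> None:
        self.events.append(f"tool:{tool}")

    async def on_handoff(self, context, from_agent, to_agent) -> None:
        self.events.append(f"handoff:{from_agent}->{to_agent}")

hooks = LoggingHooks()
asyncio.run(hooks.on_agent_start(None, "triage"))
asyncio.run(hooks.on_handoff(None, "triage", "billing"))
print(hooks.events)
```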

## Summary

The Run Loop and Execution system provides:

- **Iterative Processing**: Turn-based model interaction with tool execution
- **Flexible Configuration**: Extensive options via `RunOptions` and `RunConfig`
- **Agent Orchestration**: Seamless handoff between agents
- **Error Resilience**: Typed error handlers and refusal detection
- **Session Management**: Persistent state across turns
- **Lifecycle Hooks**: Customization at every execution stage

The architecture prioritizes extensibility, allowing developers to hook into any phase of execution while maintaining a clear, predictable flow from input to final output.

---

<a id='sessions'></a>

## Sessions and Memory

### Related Pages

Related topics: [Run Loop and Execution](#run-loop)

<details>
<summary>Related source files</summary>

The following source files were used to generate this page:

- [src/agents/sandbox/memory/prompts/rollout_extraction_user_message.md](https://github.com/openai/openai-agents-python/blob/main/src/agents/sandbox/memory/prompts/rollout_extraction_user_message.md)
- [src/agents/sandbox/capabilities/memory.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/sandbox/capabilities/memory.py)
- [src/agents/extensions/memory/__init__.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/extensions/memory/__init__.py)
- [src/agents/sandbox/session/sinks.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/sandbox/session/sinks.py)
- [src/agents/sandbox/session/base_sandbox_session.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/sandbox/session/base_sandbox_session.py)
- [src/agents/handoffs/history.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/handoffs/history.py)
- [src/agents/sandbox/errors.py](https://github.com/openai/openai-agents-python/blob/main/src/agents/sandbox/errors.py)
</details>

# Sessions and Memory

## Overview

The Sessions and Memory system in the openai-agents-python library provides persistent conversation state management for AI agents. This system enables agents to maintain context across multiple interactions, store conversation history, and access previously learned information through a flexible session abstraction layer.

The architecture is built around a **protocol-based design** that allows different storage backends while maintaining a consistent interface. Sessions track conversation items, manage agent handoffs, and enable memory persistence for sandboxed agent environments.

Source: [src/agents/extensions/memory/__init__.py:1-8]()

## Architecture

### Session Protocol

The core of the session system is the `Session` protocol, which defines the contract for all session implementations. This allows developers to swap storage backends without changing application code.

```mermaid
graph TD
    A[Agent Run] --> B[Session Protocol]
    B --> C[SQLiteSession]
    B --> D[AsyncSQLiteSession]
    B --> E[AdvancedSQLiteSession]
    B --> F[EncryptedSession]
    B --> G[RedisSession]
    B --> H[SQLAlchemySession]
    B --> I[MongoDBSession]
    B --> J[DaprSession]
```

Source: [src/agents/extensions/memory/__init__.py:1-30]()

### Memory Capability in Sandboxes

Sandbox agents have a dedicated memory capability that provides context from previous sessions. The `Memory` class in the sandbox capabilities layer enables agents to read and write persistent memory.

```mermaid
graph LR
    A[SandboxAgent] -->|requires| B[Memory Capability]
    B --> C[read: MemoryReadConfig]
    B --> D[generate: MemoryGenerateConfig]
    B --> E[layout: MemoryLayout]
```

The memory system requires either `read` or `generate` configuration to be meaningful. When `read.live_update` is enabled, the capability requires both `filesystem` and `shell` capabilities; otherwise, only `shell` is required.

Source: [src/agents/sandbox/capabilities/memory.py:1-30]()

## Session Persistence Layer

### Session Lifecycle

Sessions manage the persistence of conversation state through a structured workflow:

```mermaid
sequenceDiagram
    participant Agent as Agent Run
    participant Session as Session Store
    participant Sandbox as Sandbox Session
    
    Agent->>Session: Create/Resume Session
    Session-->>Agent: Session ID
    Agent->>Sandbox: Initialize Workspace
    loop Turn Processing
        Agent->>Sandbox: Execute Tool
        Sandbox-->>Agent: Tool Result
        Agent->>Session: Save Turn Items
        Session-->>Agent: Acknowledge
    end
    Agent->>Session: Finalize Session
```

### Turn Item Persistence

During agent execution, each turn generates items that must be persisted:

- `input`: Current segment user input
- `generated_items`: Memory-relevant assistant and tool items
- `terminal_metadata`: Completion/failure state
- `final_output`: Final segment output when available

Source: [src/agents/sandbox/memory/prompts/rollout_extraction_user_message.md:1-20]()

## Memory Rollout Extraction

When an agent session completes, the system can extract a structured memory summary for future reference. This process is handled by the rollout extraction prompt system.

### JSON Output Schema

The extraction produces JSON with three fields:

| Field | Type | Description |
|-------|------|-------------|
| `raw_memory` | string | Raw memory content from the session |
| `rollout_summary` | string | Generated summary of the session |
| `rollout_slug` | string | Short identifier (empty string if unknown) |

Source: [src/agents/sandbox/memory/prompts/rollout_extraction_user_message.md:1-25]()
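
A consumer of the extraction output might parse it like this; the sample payload is invented for illustration, with field names taken from the schema above.

```python
import json

# Invented sample payload following the rollout-extraction schema above.
raw = (
    '{"raw_memory": "user prefers metric units", '
    '"rollout_summary": "Converted recipe units.", '
    '"rollout_slug": "recipe-units"}'
)

rollout = json.loads(raw)
slug = rollout["rollout_slug"] or "unknown"  # empty string means unknown
print(slug)
```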

### Memory Summary Path

The memory system reads summaries from a configurable path within the sandbox workspace:

```python
memory_summary_path = Path(layout.memories_dir) / "memory_summary.md"
```

The memory summary is truncated to a maximum token limit (`_MEMORY_SUMMARY_MAX_TOKENS`) to ensure efficient processing.

Source: [src/agents/sandbox/capabilities/memory.py:50-65]()
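
The truncation step can be sketched as follows. The limit value and whitespace tokenization are illustrative stand-ins, since `_MEMORY_SUMMARY_MAX_TOKENS` and the real tokenizer are not shown in this page.

```python
# Rough truncation sketch; the SDK's real limit and tokenizer differ.
MEMORY_SUMMARY_MAX_TOKENS = 8  # illustrative value, not the SDK's

def truncate_summary(text: str, max_tokens: int = MEMORY_SUMMARY_MAX_TOKENS) -> str:
    tokens = text.split()  # whitespace tokens stand in for real tokens
    if len(tokens) <= max_tokens:
        return text
    return " ".join(tokens[:max_tokens]) + " …"

summary = "the agent converted a recipe to metric units and saved the result to memory"
print(truncate_summary(summary))
```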

## Workspace Sink System

The `WorkspaceSink` class manages buffered writes to the sandbox workspace, providing a layer between agent operations and persistent storage.

### Flush Strategy

The sink implements intelligent flushing based on several conditions:

```mermaid
graph TD
    A[Should Flush?] --> B{Seen count % flush_every == 0}
    A --> C{Operation: persist_workspace start}
    A --> D{Operation: stop}
    A --> E{Operation: shutdown start}
    B -->|Yes| F[Flush to workspace]
    C -->|Yes| F
    D -->|Yes| F
    E -->|Yes| F
    B -->|No| G{Check running state}
    G -->|Running| F
    G -->|Not running| H[Defer flush]
```

Flush conditions include:
- Periodic flush based on event count
- Explicit persist workspace operations
- Session stop and shutdown events

Source: [src/agents/sandbox/session/sinks.py:1-40]()
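
A simplified version of the flush decision, omitting the running-state check from the diagram, might look like the following; the operation names and `flush_every` counter are taken from the description above but are otherwise assumptions.

```python
# Simplified flush decision; the real WorkspaceSink also checks
# running state before deferring a flush.
def should_flush(seen_count: int, operation: str, flush_every: int = 10) -> bool:
    # Explicit persist, stop, and shutdown operations always flush.
    if operation in ("persist_workspace", "stop", "shutdown"):
        return True
    # Otherwise flush periodically based on the event count.
    return seen_count % flush_every == 0

print(should_flush(10, "tool_call"))  # periodic flush
print(should_flush(3, "stop"))        # explicit stop always flushes
print(should_flush(3, "tool_call"))   # otherwise defer
```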

### Workspace Persistence

The sink handles reading existing outbox content before writing new data, ensuring append-style semantics for workspace files. If no existing outbox is found, it marks the outbox as loaded and proceeds with new writes.

Source: [src/agents/sandbox/session/sinks.py:60-85]()

## Error Handling

The session system defines specific error types for workspace operations:

### Error Hierarchy

| Error Class | Code | Purpose |
|-------------|------|---------|
| `WorkspaceIOError` | - | Base class for workspace read/write errors |
| `ApplyPatchPathError` | `APPLY_PATCH_INVALID_PATH` | Invalid path (absolute, escape root, or empty) |
| `ApplyPatchDiffError` | - | Malformed patch diff |
| `ExecNonZeroError` | - | Non-zero exit code from exec operations |
| `InvalidManifestPathError` | - | Path resolution failed in manifest context |

### Path Validation

The system validates relative paths to prevent directory traversal attacks:

```python
def _validate_relative_path(*, name: str, path: Path) -> None:
    if path.is_absolute():
        raise ValueError(f"{name} must be relative")
    if ".." in path.parts:
        raise ValueError(f"{name} must not escape root")
    if path.parts in [(), (".",)]:
        raise ValueError(f"{name} must be non-empty")
```

Source: [src/agents/sandbox/errors.py:1-50]()
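
To see the checks in action, the validator can be exercised against good and bad paths; it is restated below without the leading underscore so the snippet is self-contained.

```python
from pathlib import Path

# Restated from the validator above so this snippet runs standalone.
def validate_relative_path(*, name: str, path: Path) -> None:
    if path.is_absolute():
        raise ValueError(f"{name} must be relative")
    if ".." in path.parts:
        raise ValueError(f"{name} must not escape root")
    if path.parts in [(), (".",)]:
        raise ValueError(f"{name} must be non-empty")

# A relative, non-escaping, non-empty path passes silently.
validate_relative_path(name="memories_dir", path=Path("memory/notes"))

errors = []
for bad in [Path("/etc/passwd"), Path("../outside"), Path(".")]:
    try:
        validate_relative_path(name="memories_dir", path=bad)
    except ValueError as exc:
        errors.append(str(exc))
print(errors)
```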

## Session Handoff History

When agents hand off to other agents, the system can summarize conversation history for the receiving agent. This is managed by the handoff history module.

### History Normalization

The system normalizes input history and flattens nested messages before creating summaries. Items like `ToolApprovalItem` are filtered out as they shouldn't be forwarded.

```mermaid
graph LR
    A[Handoff Input] --> B[Normalize History]
    B --> C[Flatten Nested Messages]
    C --> D[Filter Tool Approvals]
    D --> E[Convert to Plain Inputs]
    E --> F[Generate Transcript Summary]
```

Source: [src/agents/handoffs/history.py:1-60]()

### History Markers

The conversation history uses customizable markers for wrapping summaries:

| Variable | Default |
|----------|---------|
| `_conversation_history_start` | `<CONVERSATION HISTORY>` |
| `_conversation_history_end` | `</CONVERSATION HISTORY>` |

These can be overridden at runtime using `set_conversation_history_wrappers()`.

Source: [src/agents/handoffs/history.py:1-50]()
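
Wrapping a transcript in these markers can be sketched locally as shown below; this helper is illustrative only, since the SDK performs the wrapping internally and exposes `set_conversation_history_wrappers()` merely to change the markers.

```python
# Default markers from the table above; the SDK allows overriding them.
START = "<CONVERSATION HISTORY>"
END = "</CONVERSATION HISTORY>"

def wrap_history(summary: str, start: str = START, end: str = END) -> str:
    # Illustrative local reimplementation of the wrapping step.
    return f"{start}\n{summary}\n{end}"

wrapped = wrap_history("User asked about billing; triage handed off.")
print(wrapped)
```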

## Extension Memory Backends

The library includes several optional session backends that require additional dependencies:

### Available Backends

| Backend | Package | Features |
|---------|---------|----------|
| `SQLiteSession` | Built-in | Basic SQLite persistence |
| `AsyncSQLiteSession` | Built-in | Async SQLite operations |
| `AdvancedSQLiteSession` | Built-in | Advanced SQLite features |
| `EncryptedSession` | `cryptography` | Encryption at rest |
| `RedisSession` | `redis` | Distributed session management |
| `SQLAlchemySession` | `sqlalchemy` | ORM integration |
| `MongoDBSession` | `mongodb` | Document store backend |
| `DaprSession` | `dapr` | Dapr state store integration |

Source: [src/agents/extensions/memory/__init__.py:1-50]()

### Lazy Loading

Extensions use lazy imports to avoid requiring all dependencies when not needed:

```python
_LAZY_EXPORTS: dict[str, tuple[str, tuple[str, str] | None]] = {
    "EncryptedSession": (".encrypt_session", ("cryptography", "encrypt")),
    "RedisSession": (".redis_session", ("redis", "redis")),
    ...
}
```

This pattern ensures that optional dependencies are only loaded when the specific backend is used.

Source: [src/agents/extensions/memory/__init__.py:1-50]()

## Configuration

### Session Settings

Sessions are configured through `SessionSettings` which control:

- Storage backend selection
- Connection parameters
- Persistence strategies
- Compaction policies (for OpenAI responses backend)

### Memory Layout

For sandbox memory, the `MemoryLayout` class specifies directory structure:

| Setting | Description |
|---------|-------------|
| `memories_dir` | Directory for stored memories |
| `sessions_dir` | Directory for session data |

Both paths must be relative to the sandbox workspace root to prevent escape vulnerabilities.

Source: [src/agents/sandbox/capabilities/memory.py:20-35]()

## Usage Patterns

### Basic Session Usage

```python
from agents.memory import SQLiteSession

# Defaults to an in-memory database; pass db_path to persist across processes
session = SQLiteSession(session_id="user-123")

# Run agent with session
result = await Runner.run(agent, input, session=session)

# The session automatically persists turn items
```

### Sandbox Memory Setup

```python
from agents.sandbox.capabilities import Memory, MemoryReadConfig, MemoryLayout

memory = Memory(
    read=MemoryReadConfig(live_update=True),
    layout=MemoryLayout(memories_dir="memory", sessions_dir="sessions"),
    run_as="root"
)
```

### Resume from Session

```python
# Resume a previous conversation by reusing the same session ID
# against a persistent database file
session = SQLiteSession(session_id="user-123", db_path="conversations.db")

# Continue the conversation
result = await Runner.run(agent, input, session=session)
```

## Best Practices

1. **Path Validation**: Always use relative paths for memory directories to prevent sandbox escape vulnerabilities.

2. **Session Initialization**: Make sure the session's backing store (for example, a writable SQLite `db_path`) is available before running agent logic.

3. **Error Handling**: Catch specific session errors rather than generic exceptions for better recovery.

4. **Turn Item Management**: Let the session system manage persistence automatically through the `save_turn_items_if_needed()` function.

5. **Live Update Trade-offs**: Enable `live_update` only when agents need real-time file system access; otherwise, rely on shell-only mode for better isolation.

6. **Extension Dependencies**: Use lazy-loading backends to minimize startup time and avoid unnecessary dependency loading.

---

## Doramagic Pitfall Log

Project: openai/openai-agents-python

Summary: 24 potential pitfalls found, 0 of them high/blocking; highest priority: identity pitfall (repository name and install name differ).

## 1. Identity pitfall · Repository name and install name differ

- Severity: medium
- Evidence strength: runtime_trace
- Finding: The repository name `openai-agents-python` does not exactly match the install entry point `openai-agents`.
- User impact: Users searching for the package by repository name, or for the repository by package name, easily land on the wrong entry point.
- Suggested check: Confirm the package-name mapping and official README notes on npm/PyPI/GitHub.
- Reproduction command: `pip install openai-agents`
- Safeguard: Pages must show both the repo name and the real install entry point so users do not search for the wrong package.
- Evidence: identity.distribution | github_repo:946380199 | https://github.com/openai/openai-agents-python | repo=openai-agents-python; install=openai-agents

## 2. Configuration pitfall · Source evidence: AdvancedSQLiteSession.delete_branch() leaves branch-only messages in the base table

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified configuration-related issue: AdvancedSQLiteSession.delete_branch() leaves branch-only messages in the base table
- User impact: May increase the cost for new users to trial the project and adopt it in production.
- Suggested check: The source issue is still open; the Pack Agent needs to re-verify whether it still affects the current version.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_d867c75f80af49c9968398851ff8bf6a | https://github.com/openai/openai-agents-python/issues/3346 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 3. Configuration pitfall · Source evidence: Clarify whether retry-after delays should respect retry max_delay

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified configuration-related issue: Clarify whether retry-after delays should respect retry max_delay
- User impact: May increase the cost for new users to trial the project and adopt it in production.
- Suggested check: The source indicates a fix, workaround, or version change may already exist; the manual must note the applicable versions.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_f486d2247bf24df8bbc7a2bd6fddbd65 | https://github.com/openai/openai-agents-python/issues/3266 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 4. Configuration pitfall · Source evidence: OpenAIConversationsSession persists empty reasoning item {"type":"reasoning","summary":[]} and Conversations API reject…

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified configuration-related issue: OpenAIConversationsSession persists empty reasoning item {"type":"reasoning","summary":[]} and Conversations API rejects it as invalid
- User impact: May increase the cost for new users to trial the project and adopt it in production.
- Suggested check: The source issue is still open; the Pack Agent needs to re-verify whether it still affects the current version.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_d6bad5c23bf3457eb546c22a1636cc26 | https://github.com/openai/openai-agents-python/issues/3268 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 5. Configuration pitfall · Source evidence: Tracing shutdown cannot interrupt exporter retry backoff

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified configuration-related issue: Tracing shutdown cannot interrupt exporter retry backoff
- User impact: May block installation or the first run.
- Suggested check: The source issue is still open; the Pack Agent needs to re-verify whether it still affects the current version.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_e1ceae098cf84c8aafae7082b13c5345 | https://github.com/openai/openai-agents-python/issues/3354 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 6. Configuration pitfall · Source evidence: v0.15.2

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified configuration-related issue: v0.15.2
- User impact: May increase the cost for new users to trial the project and adopt it in production.
- Suggested check: The source indicates a fix, workaround, or version change may already exist; the manual must note the applicable versions.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_b73472b5ae90447199984775aacdca67 | https://github.com/openai/openai-agents-python/releases/tag/v0.15.2 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 7. Configuration pitfall · Source evidence: v0.15.3

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified configuration-related issue: v0.15.3
- User impact: May increase the cost for new users to trial the project and adopt it in production.
- Suggested check: The source indicates a fix, workaround, or version change may already exist; the manual must note the applicable versions.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_7e05a382001a4d07b74eda1e1316320b | https://github.com/openai/openai-agents-python/releases/tag/v0.15.3 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 8. Configuration pitfall · Source evidence: v0.16.1

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified configuration-related issue: v0.16.1
- User impact: May increase the cost for new users to trial the project and adopt it in production.
- Suggested check: The source indicates a fix, workaround, or version change may already exist; the manual must note the applicable versions.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_44335088ff52486e9f2f41f72a274c35 | https://github.com/openai/openai-agents-python/releases/tag/v0.16.1 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 9. Configuration pitfall · Source evidence: v0.17.0

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified configuration-related issue: v0.17.0
- User impact: May increase the cost for new users to trial the project and adopt it in production.
- Suggested check: The source indicates a fix, workaround, or version change may already exist; the manual must note the applicable versions.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_86b81f310a6e45feadc65196a057b23b | https://github.com/openai/openai-agents-python/releases/tag/v0.17.0 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 10. Capability pitfall · Source evidence: v0.15.1

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified capability-related issue: v0.15.1
- User impact: May increase the cost for new users to trial the project and adopt it in production.
- Suggested check: The source indicates a fix, workaround, or version change may already exist; the manual must note the applicable versions.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_4c70d563ac704aeaa14b8e2c49976bc5 | https://github.com/openai/openai-agents-python/releases/tag/v0.15.1 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 11. Capability pitfall · Capability claims rest on assumptions

- Severity: medium
- Evidence strength: source_linked
- Finding: README/documentation is current enough for a first validation pass.
- User impact: If the assumption fails, users do not get the promised capabilities.
- Suggested check: Turn the assumption into a downstream verification checklist.
- Safeguard: Assumptions must become verification items; they cannot be written as fact before verification results exist.
- Evidence: capability.assumptions | github_repo:946380199 | https://github.com/openai/openai-agents-python | README/documentation is current enough for a first validation pass.

## 12. Runtime pitfall · Source evidence: v0.14.8

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified runtime-related issue: v0.14.8
- User impact: May increase the cost for new users to trial the project and adopt it in production.
- Suggested check: The source indicates a fix, workaround, or version change may already exist; the manual must note the applicable versions.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_a31947cfee3a4299923f7714bfb54f42 | https://github.com/openai/openai-agents-python/releases/tag/v0.14.8 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 13. Maintenance pitfall · Source evidence: AdvancedSQLiteSession.add_items can report success after structure metadata failure

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified maintenance/version-related issue: AdvancedSQLiteSession.add_items can report success after structure metadata failure
- User impact: May increase the cost for new users to trial the project and adopt it in production.
- Suggested check: The source issue is still open; the Pack Agent needs to re-verify whether it still affects the current version.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_0fed2dd63d55400d9e0d9adaf08570e5 | https://github.com/openai/openai-agents-python/issues/3348 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 14. Maintenance pitfall · Source evidence: Chat Completions converter can send empty tool output for non-text results

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified maintenance/version-related issue: Chat Completions converter can send empty tool output for non-text results
- User impact: May increase the cost for new users to trial the project and adopt it in production.
- Suggested check: The source issue is still open; the Pack Agent needs to re-verify whether it still affects the current version.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_34a35e920a01467e957cdd59b4179cc1 | https://github.com/openai/openai-agents-python/issues/3310 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 15. Maintenance pitfall · Source evidence: v0.15.0

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified maintenance/version-related issue: v0.15.0
- User impact: May increase the cost for new users to trial the project and adopt it in production.
- Suggested check: The source indicates a fix, workaround, or version change may already exist; the manual must note the applicable versions.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_33cd0193aea84f9b82b15a02098d85cd | https://github.com/openai/openai-agents-python/releases/tag/v0.15.0 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 16. Maintenance pitfall · Maintenance activity unknown

- Severity: medium
- Evidence strength: source_linked
- Finding: last_activity_observed was not recorded.
- User impact: New, abandoned, and active projects get mixed together, lowering the trustworthiness of recommendations.
- Suggested check: Add GitHub signals for recent commits, releases, and issue/PR responsiveness.
- Safeguard: While maintenance activity is unknown, recommendation strength must not be marked high-trust.
- Evidence: evidence.maintainer_signals | github_repo:946380199 | https://github.com/openai/openai-agents-python | last_activity_observed missing

## 17. Security/permissions pitfall · Downstream validation found a risk item

- Severity: medium
- Evidence strength: source_linked
- Finding: no_demo
- User impact: Downstream has already requested re-review; the page must not downplay it.
- Suggested check: Enter the security/permissions governance review queue.
- Safeguard: While downstream risk exists, the review/recommendation downgrade must be kept in place.
- Evidence: downstream_validation.risk_items | github_repo:946380199 | https://github.com/openai/openai-agents-python | no_demo; severity=medium

## 18. Security/permissions pitfall · Safety notes present

- Severity: medium
- Evidence strength: source_linked
- Finding: No sandbox install has been executed yet; downstream must verify before user use.
- User impact: Before installing, users need to know the permission boundaries and sensitive operations.
- Suggested check: Convert into an explicit permission checklist and security-review prompts.
- Safeguard: Safety notes must be surfaced to users up front.
- Evidence: risks.safety_notes | github_repo:946380199 | https://github.com/openai/openai-agents-python | No sandbox install has been executed yet; downstream must verify before user use.

## 19. Security/permissions pitfall · Scoring risk present

- Severity: medium
- Evidence strength: source_linked
- Finding: no_demo
- User impact: The risk affects whether the project is suitable for ordinary users to install.
- Suggested check: Write the risk into the boundary card and confirm whether manual review is needed.
- Safeguard: Scoring risks must go into the boundary card, not remain an internal score.
- Evidence: risks.scoring_risks | github_repo:946380199 | https://github.com/openai/openai-agents-python | no_demo; severity=medium

## 20. Security/permissions pitfall · Source evidence: Proposal: per-run BudgetGuard for token / request / cost limits (follow-up to #2848)

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified security/permissions-related issue: Proposal: per-run BudgetGuard for token / request / cost limits (follow-up to #2848)
- User impact: May block installation or the first run.
- Suggested check: The source issue is still open; the Pack Agent needs to re-verify whether it still affects the current version.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_00884163bb274aecb62eeff18df12634 | https://github.com/openai/openai-agents-python/issues/3353 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 21. Security/permissions pitfall · Source evidence: v0.16.0

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified security/permissions-related issue: v0.16.0
- User impact: May affect authorization, key configuration, or security boundaries.
- Suggested check: The source indicates a fix, workaround, or version change may already exist; the manual must note the applicable versions.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_a9d11d6b8fd24b22882ee03998b45d63 | https://github.com/openai/openai-agents-python/releases/tag/v0.16.0 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 22. Security/permissions pitfall · Source evidence: v0.17.1

- Severity: medium
- Evidence strength: source_linked
- Finding: GitHub community evidence shows this project has an unverified security/permissions-related issue: v0.17.1
- User impact: May affect authorization, key configuration, or security boundaries.
- Suggested check: The source indicates a fix, workaround, or version change may already exist; the manual must note the applicable versions.
- Safeguard: Must not be amplified into a definitive conclusion detached from the source link; note the applicable versions and re-verification status.
- Evidence: community_evidence:github | cevd_0d47be3955c747baadea812c5f4c6487 | https://github.com/openai/openai-agents-python/releases/tag/v0.17.1 | The source discussion mentions python-related conditions; re-verify before installing/trying.

## 23. Maintenance pitfall · Issue/PR responsiveness unknown

- Severity: low
- Evidence strength: source_linked
- Finding: issue_or_pr_quality=unknown.
- User impact: Users cannot tell whether anyone will maintain the project when they run into problems.
- Suggested check: Sample recent issues/PRs to judge whether they go unhandled long-term.
- Safeguard: While issue/PR responsiveness is unknown, the maintenance risk must be flagged.
- Evidence: evidence.maintainer_signals | github_repo:946380199 | https://github.com/openai/openai-agents-python | issue_or_pr_quality=unknown

## 24. Maintenance pitfall · Release cadence unclear

- Severity: low
- Evidence strength: source_linked
- Finding: release_recency=unknown.
- User impact: Install commands and docs may lag behind the code, raising the chance that users hit pitfalls.
- Suggested check: Confirm the latest release/tag matches the README install command.
- Safeguard: While the release cadence is unknown or stale, install instructions must note possible drift.
- Evidence: evidence.maintainer_signals | github_repo:946380199 | https://github.com/openai/openai-agents-python | release_recency=unknown

<!-- canonical_name: openai/openai-agents-python; human_manual_source: deepwiki_human_wiki -->
