Doramagic Project Pack · Human Manual
agent-framework
Related topics: System Architecture, Agent System
Getting Started with Microsoft Agent Framework
Overview
Microsoft Agent Framework is a comprehensive, multi-language framework for building intelligent agents that integrate with various AI services and providers. The framework enables developers to create agents capable of natural language understanding, tool usage, multi-turn conversations, and complex workflow orchestration.
The framework supports two primary ecosystems:
| Language | Package Manager | Core Package |
|---|---|---|
| Python | pip | agent-framework, agent-framework-core |
| .NET | NuGet | Microsoft.Agents.AI |
Sources: python/README.md:1-40
Supported Platforms
| Component | Requirements |
|---|---|
| Python | 3.10+ |
| Operating Systems | Windows, macOS, Linux |
| .NET | .NET 8+ |
Sources: python/README.md:36-39
Installation
Python Installation
The framework offers two installation approaches depending on your use case:
#### Development Mode (Full Installation)
For exploring or developing locally with all features:
pip install agent-framework
This installs the core package and all integration sub-packages, ensuring all features are available without additional configuration steps.
Sources: python/README.md:10-15
#### Selective Installation
For lightweight environments with specific integration needs:
| Package | Command | Description |
|---|---|---|
| Core Only | pip install agent-framework-core | Azure OpenAI, OpenAI support + workflows |
| + Azure AI Foundry | pip install agent-framework-foundry | Azure AI Foundry integration |
| + Copilot Studio | pip install agent-framework-copilotstudio --pre | Microsoft Copilot Studio (preview) |
Released packages (agent-framework, agent-framework-core, agent-framework-foundry) no longer require the --pre flag, while preview connectors like agent-framework-copilotstudio still do.
Sources: python/README.md:17-34
.NET Installation
For .NET projects, add the appropriate package reference to your .csproj file:
<ItemGroup>
<PackageReference Include="Microsoft.Agents.AI" Version="[CURRENTVERSION]" />
</ItemGroup>
For Azure Functions hosting:
<ItemGroup>
<PackageReference Include="Microsoft.Agents.AI.Hosting.AzureFunctions" Version="[CURRENTVERSION]" />
</ItemGroup>
Sources: dotnet/src/Microsoft.Agents.AI/Microsoft.Agents.AI.csproj
Quick Start
Python: Basic Agent
import asyncio

from agent_framework import Agent, AzureCliCredential
from agent_framework.integrations.azure_ai import FoundryChatClient

async def main():
    agent = Agent(
        client=FoundryChatClient(
            credential=AzureCliCredential(),
        ),
        name="HaikuAgent",
        instructions="You are an upbeat assistant that writes beautifully.",
    )
    print(await agent.run("Write a haiku about Microsoft Agent Framework."))

if __name__ == "__main__":
    asyncio.run(main())
Sources: README.md:30-50
.NET: Basic Agent
using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Agents.AI;
string endpoint = Environment.GetEnvironmentVariable("AZURE_AI_PROJECT_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_AI_PROJECT_ENDPOINT is not set.");
string deploymentName = Environment.GetEnvironmentVariable("AZURE_AI_MODEL_DEPLOYMENT_NAME")
    ?? "gpt-5.4-mini";

AIAgent agent = new AIProjectClient(new Uri(endpoint), new DefaultAzureCredential())
    .AsAIAgent(model: deploymentName, instructions: "You are an upbeat assistant.", name: "HaikuAgent");

Console.WriteLine(await agent.RunAsync("Write a haiku about Microsoft Agent Framework."));
Sources: README.md:52-65
Environment Configuration
Set API keys and configuration as environment variables or in a .env file at your project root:
| Variable | Description | Required |
|---|---|---|
| FOUNDRY_PROJECT_ENDPOINT | Azure AI Foundry project endpoint (Python) | Yes (Python) |
| FOUNDRY_MODEL | Model deployment name (defaults to gpt-4o) | No |
| AZURE_AI_PROJECT_ENDPOINT | Azure AI Foundry project endpoint (.NET) | Yes (.NET) |
| AZURE_AI_MODEL_DEPLOYMENT_NAME | Model deployment name (.NET) | No |
Sources: python/samples/01-get-started/README.md:9-13
Core Concepts
Agent Architecture
graph TD
A[User Input] --> B[Agent]
B --> C[AI Client]
C --> D[Azure AI Foundry / OpenAI / Claude]
B --> E[Tools]
B --> F[Memory / Session]
E --> G[Function Calls]
F --> H[Context Preservation]
G --> I[Action Execution]
    I --> B
Key Components
| Component | Python Package | .NET Namespace | Purpose |
|---|---|---|---|
| Agent | agent_framework | Microsoft.Agents.AI | Core agent implementation |
| Chat Client | agent_framework.integrations.azure_ai | Azure.AI.Projects | AI service connectivity |
| Tools | @tool decorator | AITool attribute | Function definitions |
| Sessions | AgentSession | IAgentSession | Multi-turn conversation state |
| Context | ContextProvider | IContextProvider | Dynamic context injection |
Sources: dotnet/src/Microsoft.Agents.AI/Skills/AgentSkill.cs:15-30
Progressive Learning Samples (Python)
The framework provides a progressive set of samples in python/samples/01-get-started/:
| Sample | File | Learning Objective |
|---|---|---|
| 1 | 01_hello_agent.py | Create your first agent and run it (streaming and non-streaming) |
| 2 | 02_add_tools.py | Define a function tool with @tool and attach it to an agent |
| 3 | 03_multi_turn.py | Keep conversation history across turns with AgentSession |
| 4 | 04_memory.py | Add dynamic context with a custom ContextProvider |
| 5 | 05_functional_workflow_with_agents.py | Call agents inside a functional workflow |
| 6 | 06_functional_workflow_basics.py | Write a workflow as a plain async function |
| 7 | 07_first_graph_workflow.py | Chain executors into a graph workflow with edges |
| 8 | 08_host_your_agent.py | Host your agent in various environments |
Sources: python/samples/01-get-started/README.md:17-30
Authentication
The framework supports multiple authentication methods:
| Provider | Python Credential | .NET Credential |
|---|---|---|
| Azure AI Foundry | AzureCliCredential() | DefaultAzureCredential() |
| Azure Content Understanding | AzureCliCredential() | DefaultAzureCredential() |
| GitHub Copilot | API Key-based | API Key-based |
For Azure-based authentication, run az login in your terminal before executing samples:
az login
Sources: python/samples/02-agents/skills/code_defined_skill/README.md:15-17
Integration Packages
Python Integrations
| Package | Purpose | Install Command |
|---|---|---|
| agent-framework-core | Core framework with Azure OpenAI and OpenAI | Default |
| agent-framework-foundry | Azure AI Foundry integration | Default |
| agent-framework-claude | Claude Agent SDK integration | pip install agent-framework-claude --pre |
| agent-framework-github-copilot | GitHub Copilot integration | pip install agent-framework-github-copilot --pre |
| agent-framework-declarative | YAML-based agent specification | pip install agent-framework-declarative --pre |
| agent-framework-copilotstudio | Microsoft Copilot Studio | pip install agent-framework-copilotstudio --pre |
#### Claude Agent
The Claude agent enables integration with Claude Agent SDK, allowing interaction with Claude's agentic capabilities through the Agent Framework.
pip install agent-framework-claude --pre
Sources: python/packages/claude/README.md:1-10
#### GitHub Copilot Agent
The GitHub Copilot agent enables integration with GitHub Copilot for agentic capabilities:
pip install agent-framework-github-copilot --pre
Sources: python/packages/github_copilot/README.md:1-10
#### Declarative Agents
The declarative package provides support for building agents based on YAML specifications:
pip install agent-framework-declarative --pre
Sources: python/packages/declarative/README.md:1-10
.NET Integrations
| Package | Purpose |
|---|---|
| Microsoft.Agents.AI | Core AI library |
| Microsoft.Agents.AI.Hosting.OpenAI | OpenAI hosting |
| Microsoft.Agents.AI.GitHub.Copilot | GitHub Copilot agent |
| Microsoft.Agents.AI.AzureAI.Persistent | Azure AI persistent agents (deprecated) |
| Microsoft.Agents.AI.DurableTask | Durable Task integration for stateful workflows |
| Microsoft.Agents.AI.Hosting.AzureFunctions | Azure Functions hosting |
| Aspire.Hosting.AgentFramework.DevUI | Aspire-based DevUI hosting |
#### Creating a GitHub Copilot Agent (.NET)
public static AIAgent AsAIAgent(
    this CopilotClient client,
    bool ownsClient = false,
    string? id = null,
    string? name = null,
    string? description = null,
    IList<AITool>? tools = null,
    string? instructions = null)
Sources: dotnet/src/Microsoft.Agents.AI.GitHub.Copilot/CopilotClientExtensions.cs:15-25
Agent Skills
Agent Skills enable domain-specific capabilities with instructions, resources, and scripts. The framework follows the Agent Skills specification.
Skill Types
| Skill Type | Python | .NET |
|---|---|---|
| File-based | AgentFileSkill | AgentFileSkill |
| Code-defined | AgentInlineSkill | AgentInlineSkill |
| Declarative | YAML-based | N/A |
Skill Configuration Options (.NET)
public sealed class AgentSkillsProviderOptions
{
    /// <summary>
    /// Custom system prompt template containing {skills}, {resource_instructions}, {script_instructions}
    /// </summary>
    public string? SkillsInstructionPrompt { get; set; }

    /// <summary>
    /// Require script execution approval (default: false)
    /// </summary>
    public bool ScriptApproval { get; set; }

    /// <summary>
    /// Disable caching of tools and instructions (default: false)
    /// </summary>
    public bool DisableCaching { get; set; }
}
Sources: dotnet/src/Microsoft.Agents.AI/Skills/AgentSkillsProviderOptions.cs:14-35
Skill Content Structure
The skill content is structured as XML, containing:
<name>{skill_name}</name>
<description>{skill_description}</description>
<instructions>
{skill_instructions}
</instructions>
<resources>
{resource_definitions}
</resources>
<scripts>
{script_definitions}
</scripts>
Sources: dotnet/src/Microsoft.Agents.AI/Skills/Programmatic/AgentInlineSkillContentBuilder.cs:20-40
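A helper along these lines can assemble that structure. The sketch below is illustrative only; `build_skill_content` is a hypothetical name, not a framework API:

```python
from xml.sax.saxutils import escape

def build_skill_content(name, description, instructions,
                        resources=(), scripts=()):
    """Assemble skill content in the XML shape shown above."""
    parts = [
        f"<name>{escape(name)}</name>",
        f"<description>{escape(description)}</description>",
        "<instructions>",
        escape(instructions),
        "</instructions>",
        "<resources>",
        *[escape(r) for r in resources],
        "</resources>",
        "<scripts>",
        *[escape(s) for s in scripts],
        "</scripts>",
    ]
    return "\n".join(parts)

print(build_skill_content("pdf-summarizer", "Summarizes PDFs",
                          "Read the PDF, then summarize it."))
```

Escaping the field values keeps model- or user-supplied text from breaking the surrounding XML.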
Azure Content Understanding Integration
The framework supports Azure Content Understanding for document, image, audio, and video analysis:
| Sample | Description | Run Command |
|---|---|---|
| Document Q&A | Upload PDF, extract info with CU | uv run samples/01-get-started/01_document_qa.py |
| Multi-Turn Session | AgentSession persistence | uv run samples/01-get-started/02_multi_turn_session.py |
| Multi-Modal Chat | PDF + audio + video analysis | uv run samples/01-get-started/03_multimodal_chat.py |
| Invoice Processing | Structured field extraction | uv run samples/01-get-started/04_invoice_processing.py |
Required environment variables:
FOUNDRY_PROJECT_ENDPOINT=https://your-project.services.ai.azure.com
FOUNDRY_MODEL=gpt-4.1
AZURE_CONTENTUNDERSTANDING_ENDPOINT=https://your-cu-resource.cognitiveservices.azure.com/
Sources: python/packages/azure-contentunderstanding/samples/README.md:1-30
Durable Task Integration (.NET)
For stateful, long-running workflows, use the DurableTask integration:
dotnet add package Microsoft.Agents.AI.DurableTask
This package enables building stateful agents that can handle complex orchestration scenarios with checkpointing and replay capabilities.
Sources: dotnet/src/Microsoft.Agents.AI.DurableTask/README.md:1-15
Development Tools
DevUI Sample Application
DevUI is a sample application for getting started with the Agent Framework:
// Features displayed in settings modal
interface ServerInfo {
  version: string;
  runtime: string;
  uiMode: string;
  capabilities?: {
    instrumentation?: boolean;
    // ... other capabilities
  };
}
Sources: python/packages/devui/frontend/src/components/layout/settings-modal.tsx:5-20
Sample Gallery
The DevUI includes a Sample Gallery for browsing and downloading curated examples:
graph LR
A[Sample Gallery] --> B[Beginner Examples]
A --> C[Advanced Examples]
B --> D[Download & Run Locally]
    C --> D
Sources: python/packages/devui/frontend/src/components/features/gallery/gallery-view.tsx:10-25
Workflow Orchestration
The framework supports multiple workflow patterns:
graph TD
A[Functional Workflow] --> B[Plain Async Functions]
A --> C[Agent Calls within Workflows]
D[Graph Workflow] --> E[Chained Executors]
D --> F[Edges between Nodes]
    E --> G[Complex Routing]
Functional Workflow Pattern
Write workflows as plain async functions:
from agent_framework import workflow

@workflow
async def my_workflow(agent, input_data):
    result = await agent.process(input_data)
    return result
Graph Workflow Pattern
Chain executors into a graph workflow with edges:
from agent_framework.graph import Graph, Node, Edge
graph = Graph()
graph.add_node(Node("start", executor_a))
graph.add_node(Node("middle", executor_b))
graph.add_node(Node("end", executor_c))
graph.add_edge(Edge("start", "middle"))
graph.add_edge(Edge("middle", "end"))
Sources: python/samples/01-get-started/README.md:25-30
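The runtime behavior of a linear graph workflow (nodes hold executors, edges fix the order in which each node's output feeds the next) can be illustrated with a self-contained toy. `MiniGraph` is a hypothetical stand-in, not the framework's `Graph` class:

```python
class MiniGraph:
    """Toy stand-in for a linear graph workflow: nodes hold executors,
    edges define which node's output feeds which node next."""
    def __init__(self):
        self.nodes, self.edges = {}, {}

    def add_node(self, name, executor):
        self.nodes[name] = executor

    def add_edge(self, src, dst):
        self.edges[src] = dst

    def run(self, start, payload):
        # Walk the edge chain, threading the payload through each executor.
        current = start
        while current is not None:
            payload = self.nodes[current](payload)
            current = self.edges.get(current)
        return payload

g = MiniGraph()
g.add_node("start", str.strip)
g.add_node("middle", str.title)
g.add_node("end", lambda s: s + "!")
g.add_edge("start", "middle")
g.add_edge("middle", "end")
print(g.run("start", "  hello graph workflow "))  # → Hello Graph Workflow!
```

The real framework adds conditional edges and async executors on top of this basic payload-threading idea.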
Next Steps
| Resource | Purpose |
|---|---|
| Agent Skills Specification | Skill definition standard |
| Documentation | Full framework docs |
| Azure Functions Samples | .NET hosting examples |
| File-Based Skills Sample | Skill implementation patterns |
| Mixed Skills Sample | Combining multiple skill types |
Sources: python/samples/02-agents/skills/code_defined_skill/README.md:25-30
Sources: [python/README.md:1-40](https://github.com/microsoft/agent-framework/blob/main/python/README.md)
System Architecture
Related topics: Getting Started with Microsoft Agent Framework, Agent System, Workflows and Orchestration
System Architecture
Overview
The Microsoft Agent Framework is a cross-platform, multi-language framework designed for building AI-powered agents with tool-calling capabilities, workflow orchestration, and extensible integrations. The architecture follows a unified conceptual model implemented in both Python (3.10+) and .NET, enabling developers to create agents that interact with various AI backends while maintaining consistent APIs and patterns across platforms.
The framework's primary purpose is to abstract the complexity of AI agent development, providing a declarative approach to defining agent behavior, tools, memory, and workflows. It supports integration with Azure AI Foundry, OpenAI, GitHub Copilot, Anthropic Claude, and Microsoft Copilot Studio.
Sources: docs/design/python-package-setup.md
High-Level Architecture
graph TD
subgraph "Client Applications"
A[Python Apps]
B[.NET Apps]
end
subgraph "Agent Framework Core"
C[Agent Abstractions]
D[Workflow Engine]
E[Skill System]
F[Memory/Context Providers]
end
subgraph "AI Backend Integrations"
G[Azure AI Foundry]
H[OpenAI / Azure OpenAI]
I[GitHub Copilot]
J[Anthropic Claude]
K[Copilot Studio]
end
A --> C
B --> C
C --> D
C --> E
C --> F
C --> G
C --> H
C --> I
C --> J
    C --> K
Core Architecture Components
Agent Abstraction Layer
The central abstraction in the framework is the AIAgent interface, which defines the contract for all agent implementations. This abstraction enables loose coupling between client code and specific AI backend implementations.
#### Python Implementation
In Python, the Agent class serves as the primary agent implementation, accepting a chat client and configuration options:
agent = Agent(
    client=FoundryChatClient(
        credential=AzureCliCredential(),
    ),
    name="MyAgent",
    instructions="You are a helpful assistant.",
    tools=[my_tool],
)
Sources: python/packages/core/agent_framework/__init__.py
#### .NET Implementation
In .NET, the ChatClientAgent class provides the core agent functionality with dependency injection support:
public sealed class ChatClientAgent : AIAgent
{
    public ChatClientAgent(
        IChatClient chatClient,
        string? instructions = null,
        string? name = null,
        string? description = null,
        IList<AITool>? tools = null,
        ILoggerFactory? loggerFactory = null,
        IServiceProvider? services = null);
}
The agent accepts tools that can be invoked during conversations, and all provided tools are invoked without user approval by default. Developers should require explicit approval for tools that have side effects or access sensitive data.
Sources: dotnet/src/Microsoft.Agents.AI/ChatClient/ChatClientAgent.cs
Chat Client Architecture
The framework uses a chat client abstraction pattern to separate agent logic from the underlying AI service implementation.
| Chat Client | Language | Description |
|---|---|---|
| FoundryChatClient | Python/.NET | Azure AI Foundry integration |
| OpenAIChatClient | Python | OpenAI and Azure OpenAI support |
| CopilotClient | .NET | GitHub Copilot integration |
| ClaudeClient | Python | Anthropic Claude integration |
#### Client Configuration Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| credential | AzureCliCredential / DefaultAzureCredential | Yes (Foundry) | Authentication credential |
| project_endpoint | string | Yes (Foundry) | Azure AI Foundry project endpoint |
| model | string | No | Model deployment name (defaults vary) |
| temperature | float | No | Sampling temperature (0.0-2.0) |
| top_p | float | No | Nucleus sampling parameter |
| response_format | object | No | Structured output format |
Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/AIAgent.cs
Package Architecture
Python Package Structure
The Python implementation uses a modular package structure allowing selective installation based on required integrations:
graph TD
A[agent-framework] --> B[agent-framework-core]
A --> C[agent-framework-foundry]
A --> D[agent-framework-copilotstudio]
B --> E[OpenAI Support]
B --> F[Workflow Engine]
B --> G[Skill System]
C --> B
    D --> B
#### Package Descriptions
| Package | Description | Install Command |
|---|---|---|
| agent-framework | Full framework with all sub-packages | pip install agent-framework |
| agent-framework-core | Core agent, workflow, and OpenAI support | pip install agent-framework-core |
| agent-framework-foundry | Azure AI Foundry integration | pip install agent-framework-foundry |
| agent-framework-copilotstudio | Microsoft Copilot Studio (preview) | pip install agent-framework-copilotstudio --pre |
| agent-framework-claude | Anthropic Claude integration (preview) | pip install agent-framework-claude --pre |
| agent-framework-github-copilot | GitHub Copilot integration (preview) | pip install agent-framework-github-copilot --pre |
The core package includes Azure OpenAI and OpenAI support by default, along with workflows and orchestrations.
Sources: docs/decisions/0008-python-subpackages.md
.NET Package Structure
The .NET implementation uses a shared library pattern with dependency injection:
<PropertyGroup>
<InjectSharedFoundryAgents>true</InjectSharedFoundryAgents>
</PropertyGroup>
Core namespaces include:
| Namespace | Purpose |
|---|---|
| Microsoft.Agents.AI | Core agent abstractions and implementations |
| Microsoft.Agents.AI.Abstractions | Interface definitions |
| Microsoft.Agents.AI.AzureAI | Azure AI Foundry integration |
| Microsoft.Agents.AI.GitHub.Copilot | GitHub Copilot integration |
| Microsoft.Agents.AI.Skills | Skill-based agent configuration |
Sources: dotnet/src/Shared/Foundry/Agents/README.md
Agent Execution Model
Agent Run Response Pattern
The framework standardizes agent responses through a consistent return type that wraps the final output along with any intermediate steps taken during execution.
sequenceDiagram
participant Client
participant Agent
participant Tool
participant AI_Backend
Client->>Agent: run(input)
Agent->>AI_Backend: send(messages)
AI_Backend-->>Agent: response
alt tool_call detected
Agent->>Tool: invoke(arguments)
Tool-->>Agent: result
Agent->>AI_Backend: send(result)
AI_Backend-->>Agent: response
end
    Agent-->>Client: RunResponse(output, steps)
Multi-Turn Conversation Support
Agents maintain conversation history through AgentSession, enabling stateful multi-turn interactions:
session = AgentSession()
async for response in agent.run_streaming("Hello", session=session):
    print(response)
Sources: docs/decisions/0001-agent-run-response.md
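The effect of a session can be illustrated with a toy stand-in: the session replays the accumulated history to the model on every turn. `ToySession` and `echo_model` are hypothetical, not framework types:

```python
class ToySession:
    """Toy illustration of what a session contributes: it replays prior
    turns to the model on every call."""
    def __init__(self):
        self.history = []

    def run(self, respond, user_message):
        self.history.append({"role": "user", "content": user_message})
        reply = respond(self.history)
        self.history.append({"role": "assistant", "content": reply})
        return reply

def echo_model(messages):
    # Stand-in "model" that proves it sees the full history.
    return f"seen {len(messages)} message(s)"

s = ToySession()
print(s.run(echo_model, "Hi, my name is Alice"))  # → seen 1 message(s)
print(s.run(echo_model, "What is my name?"))      # → seen 3 message(s)
```

The second turn sees three messages (the first user turn, the assistant reply, and the new question), which is why the model can answer "Your name is Alice".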
Skill System Architecture
Skill Definition
Skills provide a declarative way to define agent capabilities with instructions and associated tools:
public class AgentInlineSkill
{
    public AgentInlineSkill(
        string name,
        string description,
        string instructions,
        string? license = null,
        string? compatibility = null,
        string? allowedTools = null,
        AdditionalPropertiesDictionary? metadata = null,
        JsonSerializerOptions? serializerOptions = null);
}
Skill Frontmatter Schema
| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Skill name in kebab-case |
| description | string | Yes | Skill description for discovery |
| instructions | string | Yes | Skill instructions text |
| license | string | No | License name or reference |
| compatibility | string | No | Compatibility information (max 500 chars) |
| allowedTools | string | No | Space-delimited pre-approved tools |
| metadata | dictionary | No | Arbitrary key-value metadata |
Sources: dotnet/src/Microsoft.Agents.AI/Skills/Programmatic/AgentInlineSkill.cs
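The schema above lends itself to a small validation pass before a skill is registered. This is an illustrative sketch under the stated assumptions (kebab-case names, 500-character compatibility limit); `validate_frontmatter` is not a framework function:

```python
import re

REQUIRED = ("name", "description", "instructions")

def validate_frontmatter(fm: dict) -> list[str]:
    """Check a skill frontmatter dict against the schema above;
    return a list of human-readable problems (empty when valid)."""
    errors = [f"missing required field: {f}" for f in REQUIRED if f not in fm]
    if "name" in fm and not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", fm["name"]):
        errors.append("name must be kebab-case")
    if len(fm.get("compatibility") or "") > 500:
        errors.append("compatibility exceeds 500 characters")
    return errors

print(validate_frontmatter({"name": "pdf-tools", "description": "d",
                            "instructions": "i"}))  # → []
print(validate_frontmatter({"name": "Bad Name"}))
```

Returning a list of problems rather than raising on the first one lets a skill author fix everything in a single pass.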
Workflow Orchestration
Workflow Types
The framework supports multiple workflow paradigms:
| Workflow Type | Description | Use Case |
|---|---|---|
| Functional Workflow | Async functions calling agents | Simple sequential operations |
| Graph Workflow | DAG-based executor chains | Complex conditional flows |
| Durable Workflow | Long-running with state persistence | Human-in-the-loop approval |
Graph Workflow Structure
graph LR
A[Input] --> B[Agent 1]
B --> C{Decision}
C -->|Path A| D[Agent 2]
C -->|Path B| E[Agent 3]
D --> F[Output]
    E --> F
The graph workflow uses edges to connect executors, allowing conditional routing based on agent outputs.
Sources: python/samples/01-get-started/README.md
Memory and Context Architecture
Context Providers
Dynamic context injection is supported through custom ContextProvider implementations:
class MyContextProvider(ContextProvider):
    async def get_context(self, context_params) -> str:
        # Retrieve and format context
        return formatted_context
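To see how a provider's output reaches the model, here is a self-contained sketch. `StaticContextProvider` and `build_prompt` are hypothetical names, and the real `get_context` is async while this toy is synchronous for brevity:

```python
class StaticContextProvider:
    """Toy provider mirroring the get_context contract above;
    a real provider would query a store or API."""
    def __init__(self, facts):
        self.facts = facts

    def get_context(self, context_params) -> str:
        return "Known facts:\n" + "\n".join(f"- {f}" for f in self.facts)

def build_prompt(provider, instructions, user_message):
    # The provider's context is injected ahead of the user turn.
    return f"{instructions}\n\n{provider.get_context({})}\n\nUser: {user_message}"

p = StaticContextProvider(["User prefers metric units"])
print(build_prompt(p, "You are a helpful assistant.", "How tall is Everest?"))
```

The key point is ordering: the injected context precedes the user message, so the model reads it as established background rather than part of the question.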
Memory Scoping
| Scope Parameter | Description |
|---|---|
| application_id | Global scope across entire application |
| agent_id | Agent-specific memory isolation |
| user_id | User-specific memory partitioning |
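The scoping idea reduces to keying stored facts by the tuple of scope parameters. `ScopedMemory` below is an illustrative toy, not a framework class:

```python
class ScopedMemory:
    """Toy memory store keyed by the scope parameters above
    (application_id, agent_id, user_id)."""
    def __init__(self):
        self._store = {}

    def _key(self, application_id, agent_id=None, user_id=None):
        return (application_id, agent_id, user_id)

    def remember(self, fact, application_id, agent_id=None, user_id=None):
        key = self._key(application_id, agent_id, user_id)
        self._store.setdefault(key, []).append(fact)

    def recall(self, application_id, agent_id=None, user_id=None):
        return self._store.get(self._key(application_id, agent_id, user_id), [])

m = ScopedMemory()
m.remember("likes haiku", application_id="app", user_id="alice")
m.remember("global banner text", application_id="app")
print(m.recall(application_id="app", user_id="alice"))  # → ['likes haiku']
print(m.recall(application_id="app"))                   # → ['global banner text']
```

Because the key includes every scope parameter, a user-scoped fact never leaks into the application-wide scope, and vice versa.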
Hybrid Vector Search
Context providers can optionally enable vector search for semantic retrieval:
| Setting | Options | Description |
|---|---|---|
| vectorizer_choice | "openai", "hf" | Embedding model selection |
| vector_field_name | string | Redis field for vectors |
| overwrite_redis_index | boolean | Index recreation control |
Sources: python/samples/02-agents/context_providers/redis/README.md
Hosting and Deployment
Local Hosting with DevUI
DevUI provides a local development server with OpenAI-compatible endpoints:
devui /path/to/agents/folder
API endpoints exposed:
| Endpoint | Method | Description |
|---|---|---|
| /v1/responses | POST | Agent invocation |
| /v1/entities | GET | List available entities |
Agent Entity Structure
Agents must export an agent or workflow in their __init__.py:
# my_agent/__init__.py
from agent_framework import Agent

agent = Agent(
    name="MyAgent",
    client=OpenAIChatClient(),
)
Foundry Deployment
Production deployment to Azure AI Foundry uses the same agent configuration with environment-based credential resolution.
Sources: python/samples/02-agents/devui/README.md
Agent Mode System
The framework supports configurable agent operating modes for interactive planning and autonomous execution:
public sealed class AgentMode
{
    public string Name { get; }
    public string Description { get; }
}

public class AgentModeProviderOptions
{
    public IReadOnlyList<AgentMode>? Modes { get; set; }
    public string? DefaultMode { get; set; }
}
| Mode | Purpose |
|---|---|
"plan" | Interactive planning with human oversight |
"execute" | Autonomous execution without intervention |
Sources: dotnet/src/Microsoft.Agents.AI/Harness/AgentMode/AgentModeProviderOptions.cs
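The plan/execute distinction amounts to gating actions on the active mode. The sketch below illustrates the pattern in Python; `run_step` and its approval callback are hypothetical, not framework behavior:

```python
MODES = {
    "plan": "Interactive planning with human oversight",
    "execute": "Autonomous execution without intervention",
}

def run_step(mode, action, approve):
    """Gate an action on the active mode: in 'plan' mode each step is
    surfaced for approval; in 'execute' mode it runs directly."""
    if mode not in MODES:
        raise ValueError(f"unknown mode: {mode}")
    if mode == "plan" and not approve(action):
        return "skipped: " + action
    return "ran: " + action

print(run_step("plan", "delete temp files", approve=lambda a: False))
# → skipped: delete temp files
print(run_step("execute", "delete temp files", approve=lambda a: False))
# → ran: delete temp files
```

In "execute" mode the approval callback is never consulted, which is exactly why autonomous execution should be reserved for trusted, reversible workloads.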
Integration Patterns
GitHub Copilot Integration
Agents can wrap GitHub Copilot clients for unified interaction:
public static AIAgent AsAIAgent(
    this CopilotClient client,
    bool ownsClient = false,
    string? id = null,
    string? name = null,
    string? description = null,
    IList<AITool>? tools = null,
    string? instructions = null);
This extension method creates an AIAgent backed by the Copilot client with optional additional tools and instructions.
Sources: dotnet/src/Microsoft.Agents.AI.GitHub.Copilot/CopilotClientExtensions.cs
Data Flow Summary
graph TD
subgraph "Input Processing"
A[User Input] --> B[Session Manager]
B --> C[Context Provider]
end
subgraph "Agent Processing"
C --> D[Agent Executor]
D --> E[AI Chat Client]
E --> F{Tool Call?}
end
subgraph "Tool Execution"
F -->|Yes| G[Tool Executor]
G --> H[Result Formatter]
H --> E
end
F -->|No| I[Response Formatter]
subgraph "Output"
I --> J[RunResponse]
J --> K[Client Application]
    end
Environment Configuration
Required Environment Variables
| Variable | Description | Required For |
|---|---|---|
| FOUNDRY_PROJECT_ENDPOINT | Azure AI Foundry project URL | Foundry agents |
| FOUNDRY_MODEL | Model deployment name | Foundry agents |
| OPENAI_API_KEY | OpenAI API key | OpenAI clients, embeddings |
| AZURE_AI_PROJECT_ENDPOINT | .NET Foundry endpoint | .NET Foundry |
| AZURE_AI_MODEL_DEPLOYMENT_NAME | .NET model deployment | .NET Foundry |
Authentication Methods
| Method | Use Case | Command |
|---|---|---|
| AzureCliCredential | Interactive login | az login |
| DefaultAzureCredential | Automated environments | Managed identity |
| API Key | Direct authentication | Environment variable |
Sources: python/README.md
Sources: [docs/design/python-package-setup.md](https://github.com/microsoft/agent-framework/blob/main/docs/design/python-package-setup.md)
Agent System
Related topics: System Architecture, Tools and Skills, Workflows and Orchestration, AI Provider Integration
Agent System
The Agent System is the core abstraction layer in Microsoft Agent Framework that enables the creation, configuration, and execution of AI agents. Agents are autonomous or semi-autonomous software entities that can interact with users, execute tools, maintain conversation state, and perform complex multi-step tasks using Large Language Models (LLMs) as their reasoning engine.
Architecture Overview
The Agent System follows a layered architecture that separates concerns between the agent abstraction, runtime context, tool invocation, and the underlying chat client implementations.
graph TD
subgraph "Agent Abstraction Layer"
AIAgent[AIAgent Interface]
ChatClientAgent[ChatClientAgent]
AgentRunContext[AgentRunContext]
end
subgraph "Tool Layer"
AITool[AITool]
ToolDefinition[ToolDefinition]
ToolResources[ToolResources]
end
subgraph "Client Layer"
IChatClient[IChatClient]
CopilotClient[CopilotClient]
ClaudeClient[ClaudeClient]
end
subgraph "Context Layer"
AIContext[AIContext]
AgentSession[AgentSession]
ContextProvider[ContextProvider]
end
AIAgent --> AgentRunContext
AIAgent --> AITool
AIAgent --> IChatClient
ChatClientAgent --> IChatClient
AgentRunContext --> AIContext
    AgentSession --> AIContext
Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/AIAgent.cs
Core Components
AIAgent Interface
The AIAgent interface serves as the foundational abstraction for all agent implementations in the .NET SDK. It defines the contract that all concrete agent types must implement.
| Property | Type | Description |
|---|---|---|
| Id | string | Unique identifier for the agent instance |
| Name | string | Human-readable name for the agent |
| Description | string | Description of the agent's purpose and capabilities |
| Instructions | string | System instructions that guide agent behavior |
| Tools | IList<AITool> | Collection of tools available to the agent |
Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/AIAgent.cs
ChatClientAgent
ChatClientAgent is the primary concrete implementation of AIAgent that uses an IChatClient for LLM interactions. It provides comprehensive support for agent configuration, tool execution, and streaming responses.
#### Constructor Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
| chatClient | IChatClient | Yes | The chat client used for LLM communication |
| instructions | string? | No | System instructions for agent behavior |
| name | string? | No | Agent identifier for logging |
| description | string? | No | Human-readable agent description |
| tools | IEnumerable<AITool>? | No | Tools the agent can invoke |
| loggerFactory | ILoggerFactory? | No | Factory for creating loggers |
| services | IServiceProvider? | No | Service provider for dependency resolution |
| cancellationToken | CancellationToken | No | Cancellation token for async operations |
Sources: dotnet/src/Microsoft.Agents.AI/ChatClient/ChatClientAgent.cs
AgentRunContext
The AgentRunContext provides runtime context for agent execution, including conversation history, tool configurations, and execution options.
| Property | Type | Description |
|---|---|---|
| SessionId | string | Unique identifier for the current session |
| ConversationHistory | IList<ChatMessage> | Messages exchanged in the conversation |
| Options | ChatOptions | Configuration for the current run |
Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/AgentRunContext.cs
Agent Configuration
Creating a Basic Agent
Agents can be configured with various levels of complexity depending on the use case.
# Python: Basic agent creation
# python/samples/01-get-started/01_hello_agent.py
from agent_framework import Agent

# Simple agent with instructions
agent = Agent(
    model="gpt-4o",
    instructions="You are a helpful assistant."
)

// C#: Basic agent creation
// dotnet/samples/01-get-started/01_hello_agent/Program.cs
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Hosting.OpenAI;

// Create agent with instructions
var agent = new ChatClientAgent(
    chatClient: chatClient,
    instructions: "You are a helpful assistant that answers questions accurately."
);
Sources: python/samples/01-get-started/01_hello_agent.py Sources: dotnet/samples/01-get-started/01_hello_agent/Program.cs
Agent with Tools
Tools extend agent capabilities by allowing them to perform actions beyond text generation.
# Python: Agent with function tool
# python/samples/01-get-started/02_add_tools.py
from agent_framework import Agent, tool

@tool
def get_weather(location: str) -> str:
    """Get the weather for a specific location."""
    # Tool implementation
    return f"The weather in {location} is sunny."

agent = Agent(model="gpt-4o")
agent.tools.add(get_weather)

// C#: Agent with tools
// dotnet/src/Microsoft.Agents.AI/ChatClient/ChatClientAgent.cs
// Tools augment any tools provided via ChatOptions.Tools when the agent is run
var agent = new ChatClientAgent(
    chatClient: chatClient,
    instructions: "You are a helpful assistant.",
    tools: new List<AITool> { customTool }
);
Sources: dotnet/src/Microsoft.Agents.AI/ChatClient/ChatClientAgent.cs
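Decorators in the @tool style typically derive a callable schema from the function's signature and docstring so the model knows what it can call. The sketch below illustrates that idea with a hypothetical stand-in, not the framework's actual decorator:

```python
import inspect

def tool(fn):
    """Illustrative @tool-style decorator: derive a simple schema
    from the function's signature and docstring."""
    sig = inspect.signature(fn)
    fn.tool_schema = {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            # Map each parameter to its annotation's name, or "any".
            name: getattr(param.annotation, "__name__", "any")
            for name, param in sig.parameters.items()
        },
    }
    return fn

@tool
def get_weather(location: str) -> str:
    """Get the weather for a specific location."""
    return f"The weather in {location} is sunny."

print(get_weather.tool_schema["parameters"])  # → {'location': 'str'}
```

The decorated function stays directly callable; the schema rides along as metadata for the model to consult when selecting tools.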
Tool Security Considerations
By default, all provided tools are invoked without user approval. The AI selects which functions to call and chooses the arguments — these arguments should be treated as untrusted input.
Security Warning: Developers should require explicit approval for tools that have side effects, access sensitive data, or perform irreversible operations.
Sources: dotnet/src/Microsoft.Agents.AI/ChatClient/ChatClientAgent.cs
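The same gating can be sketched in plain Python. The wrapper and policy names below are illustrative only, not the framework's approval API:

```python
def require_approval(tool_fn, approve):
    """Wrap a tool so it only runs when approve(name, kwargs) returns True."""
    def wrapped(**kwargs):
        if not approve(tool_fn.__name__, kwargs):
            raise PermissionError(f"{tool_fn.__name__} was not approved")
        return tool_fn(**kwargs)
    return wrapped

def delete_file(path: str) -> str:
    # Stand-in for a destructive operation; arguments are untrusted input.
    return f"deleted {path}"

# Hypothetical policy: deny anything under /etc.
safe_delete = require_approval(
    delete_file,
    approve=lambda name, kwargs: not kwargs.get("path", "").startswith("/etc"),
)
print(safe_delete(path="/tmp/scratch.txt"))  # deleted /tmp/scratch.txt
```

Because the model chooses the arguments, the policy inspects them rather than trusting the call site.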
Running Agents
Non-Streaming Execution
# Python: Non-streaming execution
# python/samples/01-get-started/01_hello_agent.py
result = await agent.run("What is the capital of France?")
print(result)
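Under the hood, a run alternates between the model and any registered tools until the model returns plain text. A minimal sketch of that loop (illustrative only, not the framework's internals):

```python
# Illustrative run loop: the model either requests a tool call or
# answers; tool results are appended to the history and the model is
# called again until a final text response is produced.
from dataclasses import dataclass

@dataclass
class FunctionCall:
    name: str
    args: dict

def run(model, tools, user_message):
    """Loop until the model returns plain text instead of a tool call."""
    history = [("user", user_message)]
    while True:
        reply = model(history)
        if isinstance(reply, FunctionCall):
            result = tools[reply.name](**reply.args)  # invoke the tool
            history.append(("tool", result))          # continue with result
        else:
            return reply                              # final response

# Fake model: asks for the weather tool once, then answers.
def fake_model(history):
    if history[-1][0] == "user":
        return FunctionCall("get_weather", {"location": "Paris"})
    return f"Answer based on: {history[-1][1]}"

tools = {"get_weather": lambda location: f"sunny in {location}"}
print(run(fake_model, tools, "Weather in Paris?"))
# Answer based on: sunny in Paris
```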
Streaming Execution
# Python: Streaming execution
# python/samples/01-get-started/01_hello_agent.py
async for chunk in agent.run_streaming("Tell me a story"):
print(chunk, end="", flush=True)
Multi-Turn Conversations
# Python: Multi-turn with AgentSession
# python/samples/01-get-started/03_multi_turn.py
session = AgentSession()
# First turn
response1 = await session.run(agent, "Hi, my name is Alice")
print(response1)
# Second turn - maintains context
response2 = await session.run(agent, "What is my name?")
print(response2) # "Your name is Alice"
Sources: python/samples/01-get-started/03_multi_turn.py
Agent Execution Flow
sequenceDiagram
participant User
participant Agent as AIAgent/ChatClientAgent
participant Context as AgentRunContext
participant Tools as Tool System
participant LLM as IChatClient
User->>Agent: Run(userMessage, options)
Agent->>Context: Create execution context
Context->>LLM: Send chat request
alt Tool Invocation Required
LLM-->>Context: FunctionCall(tool_name, args)
Context->>Tools: InvokeTool(tool_name, args)
Tools-->>Context: ToolResult
Context->>LLM: Continue with result
end
LLM-->>Agent: Final response
Agent-->>User: Return result
Context Management
ContextProvider
Custom context providers allow agents to access dynamic context during execution.
# Python: Custom context provider
# python/samples/01-get-started/04_memory.py
from agent_framework import ContextProvider
class MemoryProvider(ContextProvider):
def __init__(self):
self.memories = []
async def get_context(self) -> str:
if self.memories:
return "User preferences: " + ", ".join(self.memories)
return ""
async def update_context(self, interaction: dict):
if "preference" in interaction:
self.memories.append(interaction["preference"])
agent = Agent(model="gpt-4o")
agent.context_providers.add(MemoryProvider())
Sources: python/samples/01-get-started/04_memory.py
AgentSession
AgentSession maintains conversation state across multiple turns, enabling persistent interactions.
| Method | Description |
|---|---|
run(agent, message) | Execute a single turn with the agent |
clear() | Clear conversation history |
get_history() | Retrieve conversation history |
Sources: python/samples/01-get-started/03_multi_turn.py
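What a session adds over a bare agent is that it replays the accumulated history on every turn. A minimal sketch (class and method names are illustrative, not the framework API):

```python
# Minimal session: each turn appends to history and hands the full
# history to the agent, so later turns can use earlier content.
class SimpleSession:
    def __init__(self):
        self._history = []

    def run(self, agent_fn, message):
        self._history.append(("user", message))
        reply = agent_fn(list(self._history))
        self._history.append(("assistant", reply))
        return reply

    def clear(self):
        self._history.clear()

    def get_history(self):
        return list(self._history)

# Stand-in agent that proves it sees the whole conversation.
def echo_agent(history):
    return f"seen {len(history)} messages"

session = SimpleSession()
session.run(echo_agent, "Hi, my name is Alice")
print(session.run(echo_agent, "What is my name?"))  # seen 3 messages
print(len(session.get_history()))                   # 4
```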
Dependency Injection
The .NET implementation supports dependency injection for resolving services required by AI functions.
// C#: Service provider integration
// dotnet/src/Microsoft.Agents.AI/ChatClient/ChatClientAgent.cs
// The services parameter is particularly important when using custom tools
// that require dependency injection
var services = new ServiceCollection();
services.AddSingleton<IMyService, MyServiceImplementation>();
var agent = new ChatClientAgent(
chatClient: chatClient,
services: services.BuildServiceProvider()
);
Sources: dotnet/src/Microsoft.Agents.AI/ChatClient/ChatClientAgent.cs
Agent Integrations
Copilot Integration
// C#: GitHub Copilot integration
// dotnet/src/Microsoft.Agents.AI.GitHub.Copilot/CopilotClientExtensions.cs
var agent = copilotClient.AsAIAgent(
name: "CopilotAgent",
description: "GitHub Copilot powered agent",
tools: new List<AITool> { customTool }
);
Sources: dotnet/src/Microsoft.Agents.AI.GitHub.Copilot/CopilotClientExtensions.cs
Claude Integration
The Claude agent enables integration with Claude Agent SDK for accessing Claude's agentic capabilities.
Sources: python/packages/claude/README.md
Azure AI Foundry Integration
// C#: Azure AI Foundry Persistent Agents
// dotnet/src/Microsoft.Agents.AI.AzureAI.Persistent/PersistentAgentsClientExtensions.cs
[Obsolete("Please use the latest Foundry Agents service via the Microsoft.Agents.AI.AzureAI package.")]
public static async Task<ChatClientAgent> CreateAIAgentAsync(
this PersistentAgentsClient persistentAgentsClient,
string model,
string? name = null,
string? description = null,
string? instructions = null,
IEnumerable<ToolDefinition>? tools = null,
ToolResources? toolResources = null,
double? temperature = null,
double? topP = null,
ResponseFormat? responseFormat = null,
IDictionary<string, string>? metadata = null,
IChatClientFactory? clientFactory = null,
IServiceProvider? services = null,
CancellationToken cancellationToken = default)
Sources: dotnet/src/Microsoft.Agents.AI.AzureAI.Persistent/PersistentAgentsClientExtensions.cs
Best Practices
1. Clear Instructions
Provide specific, detailed instructions that define the agent's role, behavior, and constraints.
# Good: Specific instructions
agent = Agent(
model="gpt-4o",
instructions="""
You are a technical documentation assistant.
- Always use code blocks for code examples
- Include practical examples
- Explain technical terms on first use
"""
)
# Avoid: Vague instructions
agent = Agent(
model="gpt-4o",
instructions="Be helpful."
)
2. Tool Security
Implement approval mechanisms for sensitive tools:
// Review tools before allowing execution
public class SecureToolExecutor
{
public async Task<ToolResult> ExecuteAsync(AITool tool, object args)
{
// Require approval for destructive or sensitive operations
if (tool.HasSideEffects)
{
var approved = await RequestApprovalAsync(tool, args);
if (!approved) throw new OperationCanceledException();
}
return await tool.InvokeAsync(args);
}
}
3. Proper Resource Cleanup
Always dispose of agents and clients properly:
# Python: Async context manager usage
async with Agent(model="gpt-4o") as agent:
result = await agent.run("Hello")
# Agent is automatically cleaned up
# Or explicit cleanup
agent = Agent(model="gpt-4o")
try:
result = await agent.run("Hello")
finally:
await agent.close()
See Also
Sources: [dotnet/src/Microsoft.Agents.AI.Abstractions/AIAgent.cs](https://github.com/microsoft/agent-framework/blob/main/dotnet/src/Microsoft.Agents.AI.Abstractions/AIAgent.cs)
Tools and Skills
Related topics: Agent System, Middleware System
Overview
Tools and Skills are core abstractions in the Microsoft Agent Framework that extend an agent's capabilities beyond its base instruction set. Tools enable functional operations (like calling APIs or executing code), while Skills provide domain-specific knowledge, structured instructions, resources, and scripts that guide agent behavior in specialized areas.
Tools are function-based capabilities that agents can invoke to perform specific tasks such as calculations, data retrieval, or external API calls. Sources: python/packages/core/agent_framework/_tools.py:1-50
Skills are containers of domain-specific knowledge that include instructions, reference documents (resources), and executable scripts. They enable agents to handle specialized tasks by providing contextual guidance and tooling. Sources: dotnet/src/Microsoft.Agents.AI/Skills/AgentSkill.cs:1-30
graph TD
subgraph AgentFramework
A[Agent] --> T[Tools]
A --> S[Skills]
T --> TF[Function Tools]
T --> TT[Tool Definitions]
S --> SF[File-Based Skills]
S --> SC[Code-Defined Skills]
S --> SB[Class-Based Skills]
SF --> Instructions
SF --> Resources
SF --> Scripts
SC --> Instructions
SC --> Resources
SC --> Scripts
end
Core Concepts
Tools
Tools in the Agent Framework are the primary mechanism for enabling agents to perform actions. A tool is essentially a callable function that the agent can invoke during its execution. Sources: python/packages/core/agent_framework/_tools.py:1-80
| Tool Type | Description | Use Case |
|---|---|---|
| Function Tool | Decorated Python function | Custom operations in Python agents |
| Tool Definition | Declarative tool specification | Cross-platform tool definition |
| Managed Tool | Pre-built tool from providers | Anthropic skills, Azure AI services |
Skills
Skills provide specialized knowledge and capabilities to agents. Each skill contains:
- Instructions: Domain-specific guidance for the agent
- Resources: Reference documents and data files
- Scripts: Executable code for automated operations Sources: dotnet/src/Microsoft.Agents.AI/Skills/AgentSkill.cs:20-45
graph LR
subgraph SkillStructure
I[Instructions] --> C[Content]
R[Resources] --> C
S[Scripts] --> C
F[Frontmatter] --> C
end
C --> A[Agent Skill]
A --> P[Agent Skills Provider]
Skill Types
File-Based Skills
File-based skills are defined through a SKILL.md file containing YAML frontmatter and Markdown body. The frontmatter declares skill metadata, while the body contains instructions and references to resources and scripts. Sources: dotnet/src/Microsoft.Agents.AI/Skills/File/AgentFileSkill.cs:1-50
SKILL.md Structure:
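The referenced layout is not reproduced in this excerpt. As a purely illustrative sketch (the field names are hypothetical, not the documented schema), a SKILL.md combining YAML frontmatter with a Markdown body might look like:

```markdown
---
name: data-analysis
description: Guidance for analyzing tabular datasets
---

# Data Analysis Skill

Follow the conventions in `resources/style-guide.md`, and use
`scripts/load_data.py` to load inputs before producing summaries.
```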
Source: https://github.com/microsoft/agent-framework / Human Manual
Workflows and Orchestration
Related topics: Agent System, Hosting and Deployment Patterns, Observability and Telemetry
Overview
The Agent Framework provides a comprehensive workflow and orchestration system that enables developers to compose multiple agents into structured execution patterns. Workflows serve as the architectural backbone for multi-agent coordination, allowing agents to be chained, parallelized, or conditionally executed based on runtime state.
Workflows in the Agent Framework are built using an executor-based architecture where each component (agents, functions, workflows) implements a common executor interface. This design enables flexible composition through a builder pattern, supporting both imperative (code-based) and declarative (YAML-based) workflow definitions.
Sources: dotnet/src/Microsoft.Agents.AI.Workflows/WorkflowBuilder.cs
Architecture
Core Concepts
graph TD
A[Input] --> B[Executor]
B --> C[Executor]
C --> D[Executor]
B --> E[Executor]
D --> F[Aggregator]
E --> F
F --> G[Output]
H[Workflow] --> B
H --> C
H --> D
H --> E
H --> F
I[Builder] -->|Builds| H
The orchestration system is built on three fundamental abstractions:
| Concept | Description |
|---|---|
| Executor | A callable unit that processes inputs and produces outputs |
| Workflow | A composed structure of executors connected by edges |
| Builder | Fluent API for constructing workflows programmatically |
Sources: dotnet/src/Microsoft.Agents.AI.Workflows/Workflow.cs
Executor Types
Executors form the atomic units of workflow execution:
| Executor Type | Purpose |
|---|---|
AIAgent | Encapsulates an AI agent that processes text and returns responses |
FunctionExecutor | Executes synchronous or asynchronous functions |
WorkflowExecutor | Wraps an entire sub-workflow for nested orchestration |
OutputMessagesExecutor | Terminal executor that captures final output |
Sources: dotnet/src/Microsoft.Agents.AI.Workflows/WorkflowBuilder.cs
Workflow Composition Patterns
Sequential Workflow
Agents or functions execute in a linear chain, where each component receives the output of the previous one.
graph LR
A[Input] --> B[Agent 1]
B --> C[Agent 2]
C --> D[Agent 3]
D --> E[Output]
Example: Translation Chain
Input text (English)
        │
        ▼
┌──────────────┐    ┌───────────────┐    ┌───────────────┐
│ French Agent │ →  │ Spanish Agent │ →  │ English Agent │
│ (translate)  │    │ (translate)   │    │ (translate)   │
└──────────────┘    └───────────────┘    └───────────────┘
                                                │
                                                ▼
                                          Final output
Sources: dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-Workflow-Simple/README.md
Concurrent Workflow
Multiple agents operate on the same input simultaneously, with outputs aggregated into a collection.
graph TD
A[Input] --> B[Agent 1]
A --> C[Agent 2]
A --> D[Agent 3]
B --> E[Aggregator]
C --> E
D --> E
E --> F[Output Collection]
Conditional Workflow
Execution branches based on runtime conditions or agent responses.
graph TD
A[Input] --> B[Router Agent]
B -->|Condition A| C[Agent A]
B -->|Condition B| D[Agent B]
B -->|Default| E[Default Agent]
C --> F[Output]
D --> F
E --> F
Declarative Workflows
The Agent Framework supports defining workflows using YAML, enabling configuration-driven orchestration without code changes.
Workflow Structure
name: my-workflow
description: A declarative workflow example
actions:
- kind: SetValue
path: turn.greeting
value: Hello, World!
- kind: SendActivity
activity:
text: =turn.greeting
Sources: python/samples/03-workflows/declarative/README.md
Action Types
#### Variable Actions
| Action | Purpose |
|---|---|
SetValue | Set a variable in state |
SetVariable | Set a variable (.NET style naming) |
AppendValue | Append to a list |
ResetVariable | Clear a variable |
#### Control Flow
| Action | Purpose |
|---|---|
If | Conditional branching |
Switch | Multi-way branching |
Foreach | Iterate over collections |
RepeatUntil | Loop until condition |
GotoAction | Jump to labeled action |
#### Output
| Action | Purpose |
|---|---|
SendActivity | Send text/attachments to user |
Sources: python/samples/03-workflows/declarative/README.md
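For illustration only, the variable and control-flow actions above can be combined in one definition. The nested property names here (condition, then) are assumptions about the schema, not confirmed from the source:

```yaml
# Hypothetical sketch; the action kinds match the tables above, but the
# nested property names are assumptions, not the documented schema.
name: triage-workflow
actions:
  - kind: SetValue
    path: turn.count
    value: 3
  - kind: If
    condition: =turn.count > 1
    then:
      - kind: SendActivity
        activity:
          text: Multiple items to review
```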
Durable Orchestration
For long-running workflows that may span hours or days, the Agent Framework provides durable orchestration using the Durable Task Framework.
Architecture
graph TD
A[Client] -->|Schedule| B[Orchestrator]
B -->|Calls| C[Activity]
B -->|Calls| D[Agent]
C -->|Result| B
D -->|Result| B
B -->|Persisted| E[State Store]
Key Features
| Feature | Description |
|---|---|
| Long-running execution | Workflows persist across process restarts |
| Human-in-the-loop | Workflows can pause and await human approval |
| Event-driven | Activities can send notifications and wait for responses |
| State management | Built-in state persistence with checkpointing |
Sources: dotnet/src/Microsoft.Agents.AI.DurableTask/ServiceCollectionExtensions.cs
Human-in-the-Loop Pattern
Durable workflows support pausing for human approval:
1. Initial Generation: Agent creates content based on input
2. Review Loop (up to a configurable maximum number of attempts):
   - Activity notifies the user for approval
   - Orchestration waits for an approval event OR a timeout
3. Resolution:
   - Approved: Content published, workflow completes
   - Rejected: Feedback incorporated, regeneration triggered
   - Timeout: Error raised
Sources: python/samples/04-hosting/durabletask/07_single_agent_orchestration_hitl/README.md
Durable Workflow Context
The DurableWorkflowContext manages workflow state and events:
| Property | Type | Description |
|---|---|---|
SentMessages | List<TypedPayload> | Messages sent during activity execution |
OutboundEvents | List<WorkflowEvent> | Events added during execution |
StateUpdates | Dictionary<string, string?> | State modifications |
ClearedScopes | HashSet<string> | Scopes cleared during execution |
HaltRequested | bool | Whether executor requested workflow halt |
Sources: dotnet/src/Microsoft.Agents.AI.DurableTask/Workflows/DurableWorkflowContext.cs
Workflow Builder API
.NET Implementation
The WorkflowBuilder class provides a fluent API for composing workflows:
// Sequential composition
Workflow workflow = WorkflowBuilder.BuildSequential(
"MyWorkflow",
agent1, agent2, agent3);
// Concurrent composition
Workflow workflow = WorkflowBuilder.BuildConcurrent(
"ConcurrentWorkflow",
agent1, agent2, agent3);
Builder Configuration Options:
| Option | Description |
|---|---|
ReassignOtherAgentsAsUsers | When true, other agents in scope become user participants |
ForwardIncomingMessages | When true, incoming messages propagate through the chain |
Sources: dotnet/src/Microsoft.Agents.AI.Workflows/WorkflowBuilder.cs
Python Implementation
The Python workflow system provides similar builder patterns:
from agent_framework.workflows import WorkflowBuilder
workflow = WorkflowBuilder(
start_executor=first_agent
).add_edge(
from_node=first_agent,
to_node=second_agent
).build()
MagenticBuilder for Multi-Agent Orchestration:
from agent_framework.orchestrations import MagenticBuilder
workflow = MagenticBuilder(
participants=[researcher, writer, reviewer],
manager_agent=manager_agent,
).build()
Sources: python/packages/orchestrations/README.md
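The concurrent pattern shown earlier can be sketched with plain asyncio; the builder wires up the same fan-out/aggregate shape declaratively. This is an illustration, not the framework API:

```python
import asyncio

# Fan-out: run every agent on the same input concurrently, then
# aggregate the outputs into an ordered collection.
async def fan_out(agents, message):
    return list(await asyncio.gather(*(agent(message) for agent in agents)))

async def summarizer(msg):
    return f"summary of {msg}"

async def critic(msg):
    return f"critique of {msg}"

print(asyncio.run(fan_out([summarizer, critic], "draft")))
# ['summary of draft', 'critique of draft']
```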
Workflow State Management
State Persistence
Workflows maintain state throughout execution:
graph LR
A[Checkpoint] --> B[State Dictionary]
B --> C[Resume]
D[Input] --> E[Executor]
E --> F[Output]
E -->|State Update| B
State Variables
Custom state variables are stored alongside system state:
| Key | Purpose |
|---|---|
_executor_state | Internal executor tracking (hidden from user state) |
* (custom) | User-defined state variables |
Sources: python/packages/devui/frontend/src/components/features/workflow/checkpoint-info-modal.tsx
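The checkpoint/resume cycle over such a state dictionary can be sketched as a plain serialization round trip, with the internal `_executor_state` key hidden from user-visible state (sketch only; the framework's persistence layer is more involved):

```python
import json

def checkpoint(state):
    """Serialize the full state dictionary, system keys included."""
    return json.dumps(state)

def resume(blob):
    """Restore full state and derive the user-visible view."""
    state = json.loads(blob)
    user_state = {k: v for k, v in state.items() if k != "_executor_state"}
    return state, user_state

blob = checkpoint({"_executor_state": {"step": 2}, "draft": "v1"})
full, visible = resume(blob)
print(visible)  # {'draft': 'v1'}
```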
Configuration
Service Registration
#### .NET
<PropertyGroup>
<InjectSharedWorkflowsSettings>true</InjectSharedWorkflowsSettings>
<InjectSharedWorkflowsExecution>true</InjectSharedWorkflowsExecution>
</PropertyGroup>
#### Durable Options Configuration
services.ConfigureDurableWorkflows(options =>
{
options.Workflows.HubName = "MyAgentHub";
options.Workflows.TaskOrchestration.Type = OrchestrationType.InProcess;
});
Sources: dotnet/src/Microsoft.Agents.AI.DurableTask/ServiceCollectionExtensions.cs
Python Environment Variables
export FOUNDRY_PROJECT_ENDPOINT="https://your-project-endpoint"
export FOUNDRY_MODEL="gpt-4o" # optional, defaults to gpt-4o
Sources: python/samples/01-get-started/README.md
Sample Code Reference
Basic Sequential Workflow (.NET)
// Create agent executors
ExecutorBinding exec1 = agent1.BindAsExecutor(options);
ExecutorBinding exec2 = agent2.BindAsExecutor(options);
// Build sequential chain
WorkflowBuilder builder = new WorkflowBuilder(exec1);
builder.AddEdge(exec1, exec2);
// Add terminal output executor
OutputMessagesExecutor end = new();
builder = builder.AddEdge(exec2, end).WithOutputFrom(end);
Workflow workflow = builder.Build();
Sources: dotnet/src/Microsoft.Agents.AI.Workflows/WorkflowBuilder.cs
Durable Workflow with HITL (Python)
# 1. Initial generation
content = yield writer_agent.generate(topic)
# 2. Notify for review
yield send_notification(content)
# 3. Wait for approval/rejection
approval_event = yield wait_for_event("ApprovalEvent")
if approval_event.approved:
yield publish_content(content)
else:
# Regenerate with feedback
content = yield writer_agent.generate(topic, feedback=approval_event.feedback)
Sources: python/samples/04-hosting/durabletask/07_single_agent_orchestration_hitl/README.md
Monitoring and Debugging
Durable Task Dashboard
View orchestration state at http://localhost:8082:
| View | Information Available |
|---|---|
| Orchestrations | Instance status, runtime state, input/output, execution history |
| Agents | Conversation history, agent state |
OpenTelemetry Traces
The framework emits OpenTelemetry traces for workflow operations:
devui ./agents --instrumentation
Sources: python/packages/devui/README.md
See Also
Sources: [dotnet/src/Microsoft.Agents.AI.Workflows/WorkflowBuilder.cs](https://github.com/microsoft/agent-framework/blob/main/dotnet/src/Microsoft.Agents.AI.Workflows/WorkflowBuilder.cs)
Middleware System
Related topics: Agent System, Tools and Skills
Overview
The Middleware System in the Microsoft Agent Framework provides a powerful extensibility mechanism that allows developers to intercept, modify, and control the flow of interactions between agents, tools, and AI models. Middleware components act as interceptors in the request-response pipeline, enabling cross-cutting concerns such as logging, authentication, tool approval, and request filtering.
According to the architecture decision record, the middleware system was designed to solve the problem of filtering agent requests and responses without tightly coupling such logic to the core agent implementation. Sources: docs/decisions/0007-agent-filtering-middleware.md
Architecture
Core Concepts
The middleware system follows a pipeline-based architecture where requests flow through a chain of middleware components before reaching the core agent logic, and responses flow back through the same chain in reverse order.
graph TD
A[User Request] --> B[Middleware 1]
B --> C[Middleware 2]
C --> D[Middleware N]
D --> E[Core Agent Logic]
E --> F[Response from Agent]
F --> D
D --> C
C --> B
B --> G[User Response]
H[Tool Calls] <-->|Intercepted| D
I[AI Model] <-->|Filtered| D
Middleware Types
| Type | Purpose | Python Implementation | .NET Implementation |
|---|---|---|---|
| Function-based | Simple callable middleware | @middleware decorator | Delegate-based |
| Class-based | State-aware middleware with full lifecycle control | Middleware abstract class | IAgentMiddleware interface |
| Tool Approval | Approves or rejects tool executions | Custom handler | ToolApprovalAgent |
Sources: python/packages/core/agent_framework/_middleware.py | dotnet/src/Microsoft.Agents.AI/Harness/ToolApproval/ToolApprovalAgent.cs
Python Middleware Implementation
Function-Based Middleware
The simplest way to define middleware in Python is using the @middleware decorator. This creates a middleware that wraps an agent and intercepts all calls.
from agent_framework import Agent, middleware
@middleware
async def my_logging_middleware(agent, tool_call, context, call_next):
print(f"Tool call: {tool_call.name}")
result = await call_next(agent, tool_call, context)
print(f"Result: {result}")
return result
# Apply middleware to agent
agent = Agent(...)
wrapped_agent = my_logging_middleware(agent)
Sources: python/samples/02-agents/middleware/function_based_middleware.py
Middleware Base Class
For more complex scenarios, you can extend the Middleware abstract class:
from agent_framework import Middleware, Agent
class ToolApprovalMiddleware(Middleware):
def __init__(self):
self.pending_approvals = []
async def on_tool_call(
self,
agent: Agent,
tool_call: ToolCall,
context: Context
) -> Result:
# Custom logic to approve or reject
if self._requires_approval(tool_call):
return Result(success=False, error="Approval required")
return await self.next(agent, tool_call, context)
Sources: python/packages/core/agent_framework/_middleware.py
Middleware Pipeline Execution
The middleware system processes requests through a pipeline where each middleware can:
- Pre-process: Act on the request before passing to the next middleware
- Pass through: Forward the request to the next component in the chain
- Post-process: Act on the response as it flows back up the chain
- Short-circuit: Return a response without calling subsequent middleware
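These four behaviors can be sketched generically in plain Python; the function names and composition helper below are illustrative, not the framework's middleware signature:

```python
# Each middleware takes (request, call_next) and may pre-process,
# pass through, post-process, or short-circuit.
def logging_mw(request, call_next):
    request = request.strip()       # pre-process
    response = call_next(request)   # pass through
    return response.upper()         # post-process

def blocklist_mw(request, call_next):
    if "forbidden" in request:
        return "blocked"            # short-circuit: next never runs
    return call_next(request)

def build_pipeline(middlewares, handler):
    """Compose middlewares so the first one wraps all the others."""
    def compose(mw, nxt):
        return lambda req: mw(req, nxt)
    pipeline = handler
    for mw in reversed(middlewares):
        pipeline = compose(mw, pipeline)
    return pipeline

agent = lambda req: f"echo: {req}"
run = build_pipeline([logging_mw, blocklist_mw], agent)
print(run("  hello "))        # ECHO: HELLO
print(run("forbidden text"))  # BLOCKED
```

Note that the short-circuited response still flows back through the outer middleware's post-processing, matching the reverse-order response path described above.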
sequenceDiagram
participant Client
participant MW1 as Middleware 1
participant MW2 as Middleware 2
participant Agent as Core Agent
Client->>MW1: request
MW1->>MW2: pass to next
MW2->>Agent: forward request
Agent-->>MW2: response
MW2-->>MW1: post-process
MW1-->>Client: final response
.NET Middleware Implementation
ToolApprovalAgent
The .NET implementation provides a ToolApprovalAgent that wraps an agent and requires approval for tool executions. This is particularly useful for scenarios where human-in-the-loop approval is required for sensitive operations.
public class ToolApprovalAgent : Agent
{
public ToolApprovalAgent(
Agent inner,
IToolApprover toolApprover,
Func<ToolCall, bool>? shouldApprove = null);
public override async Task<Result> OnToolCallAsync(
ToolCall toolCall,
Context context,
CancellationToken cancellationToken);
}
Sources: dotnet/src/Microsoft.Agents.AI/Harness/ToolApproval/ToolApprovalAgent.cs
Middleware Registration
In .NET, middleware is typically registered through dependency injection and configured on the agent:
// Program.cs from the sample
var builder = Kernel.CreateBuilder();
// Register middleware
builder.Services.AddSingleton<IAgentMiddleware, LoggingMiddleware>();
var kernel = builder.Build();
// Configure agent with middleware
var agent = new ChatClientAgent(chatClient)
.WithMiddleware<LoggingMiddleware>()
.WithMiddleware<ToolApprovalMiddleware>();
Sources: dotnet/samples/02-agents/Agents/Agent_Step11_Middleware/Program.cs
Built-in .NET Middleware
| Middleware | Description |
|---|---|
LoggingMiddleware | Logs all requests, responses, and tool calls |
ToolApprovalMiddleware | Requires approval before tool execution |
RateLimitMiddleware | Enforces rate limiting on agent requests |
AuthenticationMiddleware | Validates authentication tokens |
Middleware API Reference
Python Middleware API
#### @middleware Decorator
Creates a simple function-based middleware.
@middleware
async def middleware_func(agent, tool_call, context, call_next):
"""Middleware function signature."""
pass
| Parameter | Type | Description |
|---|---|---|
agent | Agent | The agent instance being wrapped |
tool_call | ToolCall | The tool call being processed |
context | Context | Execution context with state |
call_next | Callable | Function to invoke the next middleware/agent |
Sources: python/packages/core/agent_framework/_middleware.py
#### Middleware Base Class
Abstract class for stateful middleware:
class Middleware(ABC):
@abstractmethod
async def on_tool_call(
self,
agent: Agent,
tool_call: ToolCall,
context: Context
) -> Result:
"""Called when a tool call is intercepted."""
pass
| Method | Description |
|---|---|
on_tool_call | Intercepts and processes tool calls |
on_request | Intercepts incoming requests |
on_response | Intercepts outgoing responses |
next() | Passes control to the next middleware |
.NET Middleware API
#### IAgentMiddleware Interface
public interface IAgentMiddleware
{
Task<Result> InvokeAsync(
AgentContext context,
MiddlewareDelegate next,
CancellationToken cancellationToken);
}
| Parameter | Type | Description |
|---|---|---|
context | AgentContext | Contains request, response, and state |
next | MiddlewareDelegate | Delegate to invoke the next middleware |
cancellationToken | CancellationToken | Cancellation support |
#### Agent Extension Methods
public static class AgentMiddlewareExtensions
{
public static TAgent WithMiddleware<TMiddleware>(
this TAgent agent,
params object[] args) where TAgent : Agent;
public static TAgent WithMiddleware(
this TAgent agent,
Type middlewareType,
params object[] args) where TAgent : Agent;
}
Use Cases
1. Tool Approval Workflow
A common use case is requiring human approval before executing sensitive tools:
graph LR
A[Agent] --> B{ToolApprovalMiddleware}
B --> C{Is Sensitive?}
C -->|Yes| D[Request Human Approval]
D --> E{Approved?}
E -->|Yes| F[Execute Tool]
E -->|No| G[Reject & Return Error]
C -->|No| F
Sources: dotnet/src/Microsoft.Agents.AI/Harness/ToolApproval/ToolApprovalAgent.cs
2. Request/Response Logging
Middleware can log all interactions for debugging and auditing:
# Assumes an async audit_log sink is available in scope
from datetime import datetime, timezone

@middleware
async def audit_logging_middleware(agent, tool_call, context, call_next):
    log_entry = {
        "timestamp": datetime.now(timezone.utc),
        "tool_name": tool_call.name,
        "parameters": tool_call.arguments,
        "user": context.user_id,
    }
    await audit_log(log_entry)
    return await call_next(agent, tool_call, context)
3. Request Filtering
Middleware can filter or modify requests before they reach the agent:
public class ContentFilterMiddleware : IAgentMiddleware
{
public async Task<Result> InvokeAsync(
AgentContext context,
MiddlewareDelegate next,
CancellationToken cancellationToken)
{
// Check for prohibited content
if (ContainsProhibitedContent(context.Request.Text))
{
return new Result { Success = false, Error = "Content filtered" };
}
return await next(context, cancellationToken);
}
}
Configuration
Python Configuration
agent = Agent(
name="my_agent",
instructions="You are a helpful assistant",
middleware=[
LoggingMiddleware(),
ToolApprovalMiddleware(approver=human_approver),
RateLimitMiddleware(max_calls_per_minute=60)
]
)
.NET Configuration
// Via dependency injection
builder.Services.AddTransient<IAgentMiddleware, LoggingMiddleware>();
builder.Services.AddSingleton<IToolApprover, HumanToolApprover>();
// Or inline during agent creation
var agent = new ChatClientAgent(chatClient)
.WithMiddleware<LoggingMiddleware>()
.WithMiddleware(sp.GetRequiredService<ToolApprovalMiddleware>());
Best Practices
- Keep middleware focused: Each middleware should handle a single concern (logging, authentication, etc.)
- Always call next or return: Ensure middleware either passes control to the next component or returns a response
- Handle exceptions: Wrap next calls in try-catch to prevent unhandled exceptions from breaking the pipeline
- Order matters: Register middleware in the correct order based on dependencies
- Avoid blocking operations: Use async/await patterns to prevent blocking the pipeline
- Document side effects: Clearly document any side effects middleware may have
Error Handling
Middleware should gracefully handle errors and either:
- Recover and continue the pipeline
- Short-circuit with an appropriate error response
- Propagate the error with additional context
@middleware
async def error_handling_middleware(agent, tool_call, context, call_next):
try:
return await call_next(agent, tool_call, context)
except ToolExecutionException as e:
logger.error(f"Tool execution failed: {e}")
return Result(
success=False,
error=f"Tool execution failed: {str(e)}",
context={"original_error": e}
)
Related Components
| Component | Relationship |
|---|---|
| Agent | Core component that middleware intercepts |
| Tools | Often the target of middleware interception |
| Context | State container passed through middleware pipeline |
| Skills | Can be combined with middleware for complex workflows |
Summary
The Middleware System provides a flexible, extensible pipeline architecture for intercepting and modifying agent behavior. It supports both simple function-based middleware and complex class-based middleware with full lifecycle control. The system is available across both Python and .NET implementations, enabling consistent cross-platform extensibility patterns.
Key takeaways:
- Middleware enables cross-cutting concerns without modifying core agent code
- Both Python and .NET provide decorator/attribute-based middleware creation
- Tool approval is a common built-in middleware pattern
- Middleware can short-circuit, pass through, or modify requests and responses
- Proper ordering and error handling are essential for reliable middleware pipelines
Sources: python/packages/core/agent_framework/_middleware.py | dotnet/src/Microsoft.Agents.AI/Harness/ToolApproval/ToolApprovalAgent.cs
AI Provider Integration
Related topics: Agent System, Getting Started with Microsoft Agent Framework
Overview
The AI Provider Integration layer in Microsoft Agent Framework enables agents to communicate with various Large Language Model (LLM) backends through a unified abstraction. This architecture allows developers to switch between different AI providers—such as OpenAI, Azure AI Foundry, Anthropic, and Ollama—without modifying agent logic. The provider system acts as the bridge between the agent's execution framework and the underlying AI models.
The framework supports both Python and .NET ecosystems, with provider implementations that expose chat completion clients, responses API clients, and specialized agent integrations. Each provider package implements common interfaces while leveraging provider-specific authentication, configuration, and API semantics.
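The value of this unified abstraction can be sketched with a small protocol-style example. The classes below are simplified stand-ins for illustration, not the framework's actual clients; the point is that agent logic written against the interface is untouched when the provider changes.

```python
from typing import Protocol

class ChatClient(Protocol):
    """Minimal stand-in for a provider-agnostic chat client interface."""
    def complete(self, prompt: str) -> str: ...

class FakeOpenAIClient:
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"

class FakeOllamaClient:
    def complete(self, prompt: str) -> str:
        return f"[ollama] {prompt}"

def run_agent(client: ChatClient, user_input: str) -> str:
    # Agent logic depends only on the interface, so swapping providers
    # requires no changes here.
    return client.complete(user_input)

print(run_agent(FakeOpenAIClient(), "hi"))  # → [openai] hi
print(run_agent(FakeOllamaClient(), "hi"))  # → [ollama] hi
```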
Architecture Overview
graph TD
subgraph "Agent Layer"
A[Agent Instance]
S[Skills/Tools]
end
subgraph "Provider Abstraction"
P[Provider Interface]
end
subgraph "Concrete Providers"
O[OpenAI]
F[Azure AI Foundry]
An[Anthropic Claude]
Ol[Ollama]
G[GitHub Copilot]
end
subgraph "External Services"
OS[OpenAI API]
FS[Azure Foundry]
AS[Anthropic API]
LS[Local Ollama]
GS[GitHub Copilot]
end
A --> P
S --> P
P --> O
P --> F
P --> An
P --> Ol
P --> G
O --> OS
F --> FS
An --> AS
Ol --> LS
G --> GS
Provider Packages
Python Provider Packages
| Package | Purpose | Install Command |
|---|---|---|
| agent-framework-openai | OpenAI and Azure OpenAI integration | pip install agent-framework-openai |
| agent-framework-anthropic | Anthropic Claude model support | pip install agent-framework-anthropic |
| agent-framework-foundry | Azure AI Foundry integration | pip install agent-framework-foundry |
| agent-framework-claude | Claude-specific agentic capabilities | pip install agent-framework-claude --pre |
| agent-framework-ollama | Local Ollama model support | pip install agent-framework-ollama --pre |
Sources: python/samples/02-agents/providers/README.md
.NET Provider Assemblies
| Assembly | Namespace | Purpose |
|---|---|---|
| Microsoft.Agents.AI.OpenAI | Microsoft.Agents.AI.OpenAI | OpenAI Response API and Chat Completions |
| Microsoft.Agents.AI.Foundry | Microsoft.Agents.AI.Foundry | Azure AI Foundry agent and client integration |
| Microsoft.Agents.AI.GitHub.Copilot | Microsoft.Agents.AI.GitHub.Copilot | GitHub Copilot agent extension |
Azure AI Foundry Provider
Azure AI Foundry is the primary production-grade provider for enterprise deployments. It integrates with Azure AI Foundry projects, enabling agents to leverage Foundry's model deployments, content safety, and telemetry.
Python Implementation
The Foundry provider package exports core classes for connecting to Azure AI Foundry projects:
# python/packages/foundry/agent_framework_foundry/__init__.py
# Core exports include:
# - FoundryChatCompletionClient
# - FoundryAgent
# - Configuration utilities
The provider requires environment configuration:
export FOUNDRY_PROJECT_ENDPOINT="https://<resource>.services.ai.azure.com/api/projects/<project>"
export FOUNDRY_MODEL="<deployment-name>"
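A bootstrap sketch that reads the settings above and fails fast when they are missing can look like the following. The validation logic is generic Python; the commented-out client construction uses the `FoundryChatCompletionClient` name exported by the package, but the constructor arguments shown are assumptions.

```python
import os

def load_foundry_settings() -> dict:
    """Read Foundry configuration from the environment, failing fast if incomplete."""
    settings = {
        "project_endpoint": os.environ.get("FOUNDRY_PROJECT_ENDPOINT"),
        "model": os.environ.get("FOUNDRY_MODEL"),
    }
    missing = [key for key, value in settings.items() if not value]
    if missing:
        raise RuntimeError(f"Missing Foundry configuration: {', '.join(missing)}")
    return settings

# Demo values so the sketch runs standalone; in practice these come from the shell.
os.environ.setdefault("FOUNDRY_PROJECT_ENDPOINT", "https://example.services.ai.azure.com/api/projects/demo")
os.environ.setdefault("FOUNDRY_MODEL", "gpt-4o")

settings = load_foundry_settings()
print(settings)
# Hypothetical usage (argument names are assumptions, not the verified signature):
# client = FoundryChatCompletionClient(endpoint=settings["project_endpoint"], model=settings["model"])
```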
Sources: python/samples/02-agents/providers/README.md
.NET Implementation
The .NET Foundry provider exposes two primary integration points:
#### FoundryAgent
The FoundryAgent class serves as the agent implementation backed by Azure AI Foundry:
// dotnet/src/Microsoft.Agents.AI.Foundry/FoundryAgent.cs
public class FoundryAgent
{
// Provides agent creation and lifecycle management
// Integrates with Azure AI Foundry service
}
#### AzureAIProjectChatClient
The AzureAIProjectChatClient wraps the Azure AI Foundry chat client with Agent Framework conventions:
// dotnet/src/Microsoft.Agents.AI.Foundry/AzureAIProjectChatClient.cs
public class AzureAIProjectChatClient
{
// Manages project-scoped chat interactions
// Handles authentication and connection to Foundry
}
Sources: dotnet/src/Microsoft.Agents.AI.Foundry/FoundryAgent.cs
Foundry Configuration Options
| Parameter | Description | Default |
|---|---|---|
| project_endpoint | Azure AI Foundry project URL | Required |
| model | Model deployment name | gpt-4o |
| api_version | API version for requests | Latest stable |
| credential | Azure authentication credential | DefaultAzureCredential |
OpenAI Provider
The OpenAI provider enables agents to connect directly to OpenAI's API or Azure OpenAI Service endpoints.
Python Integration
from agent_framework.openai import OpenAIChatClient, OpenAIChatCompletionClient
# Direct OpenAI usage
client = OpenAIChatClient(model="gpt-4")
# Using the Chat Completions client
client = OpenAIChatCompletionClient(model="gpt-4")
Sources: python/packages/openai/agent_framework_openai/__init__.py
.NET Integration
The .NET OpenAI provider uses the OpenAIResponseClientExtensions class to create agent instances:
// dotnet/src/Microsoft.Agents.AI.OpenAI/Extensions/OpenAIResponseClientExtensions.cs
public static class OpenAIResponseClientExtensions
{
public static ChatClientAgent AsAIAgent(
this ResponsesClient client,
string? model = null,
string? instructions = null,
string? name = null,
string? description = null,
IList<AITool>? tools = null,
Func<IChatClient, IChatClient>? clientFactory = null,
ILoggerFactory? loggerFactory = null,
IServiceProvider? services = null);
}
Sources: dotnet/src/Microsoft.Agents.AI.OpenAI/Extensions/OpenAIResponseClientExtensions.cs
Anthropic Provider
The Anthropic provider integrates Claude models into the Agent Framework, supporting both direct API access and provider-specific agentic capabilities.
Python Integration
from agent_framework_anthropic import ClaudeAgent
agent = ClaudeAgent(
model="claude-sonnet-4-20250514",
# Provider-specific configuration
)
Sources: python/packages/anthropic/agent_framework_anthropic/__init__.py
The agent-framework-claude package specifically enables Claude agentic capabilities through the Agent Framework:
pip install agent-framework-claude --pre
Sources: python/packages/claude/README.md
Ollama Provider
Ollama enables local LLM deployments, useful for development, testing, and privacy-sensitive scenarios.
Configuration
export OLLAMA_BASE_URL="http://localhost:11434" # Default
export OLLAMA_MODEL="llama3.2" # Model to use
Installation
pip install agent-framework-ollama --pre
Sources: python/packages/ollama/README.md
Samples demonstrating Ollama connector usage are available at:
python/samples/02-agents/providers/ollama/
GitHub Copilot Provider
The .NET implementation includes a GitHub Copilot integration through the CopilotClientExtensions:
// dotnet/src/Microsoft.Agents.AI.GitHub.Copilot/CopilotClientExtensions.cs
public static AIAgent AsAIAgent(
this CopilotClient client,
bool ownsClient = false,
string? id = null,
string? name = null,
string? description = null,
IList<AITool>? tools = null,
string? instructions = null);
Sources: dotnet/src/Microsoft.Agents.AI.GitHub.Copilot/CopilotClientExtensions.cs
Provider Selection Workflow
graph LR
A[Choose Provider] --> B{Have Azure Account?}
B -->|Yes| C[Azure AI Foundry]
B -->|No| D[Direct OpenAI]
C --> E[Configure Endpoint]
D --> F[Set API Key]
E --> G[Create ChatClient]
F --> G
G --> H[Initialize Agent]
H --> I[Attach Skills/Tools]
I --> J[Execute Agent]
Agent ID Model
Providers use a standardized AgentId model for identification:
// dotnet/src/Microsoft.Agents.AI.Hosting.OpenAI/Responses/Models/AgentId.cs
internal sealed class AgentId
{
[JsonPropertyName("type")]
public AgentIdType Type { get; init; }
[JsonPropertyName("name")]
public string Name { get; init; }
[JsonPropertyName("version")]
public string Version { get; init; }
}
Sources: dotnet/src/Microsoft.Agents.AI.Hosting.OpenAI/Responses/Models/AgentId.cs
Sample Applications
Python Provider Samples
| Sample | Provider | Description |
|---|---|---|
| providers/openai/ | OpenAI | Basic OpenAI integration |
| providers/foundry/ | Foundry | Azure AI Foundry integration |
| providers/anthropic/ | Anthropic | Claude model usage |
| providers/ollama/ | Ollama | Local model deployment |
Run samples:
cd python
uv run samples/02-agents/providers/<provider-name>/
Sources: python/samples/02-agents/providers/README.md
.NET Provider Samples
cd dotnet/samples/02-agents/AgentProviders
dotnet run
Sources: dotnet/samples/02-agents/AgentProviders/README.md
Authentication Patterns
| Provider | Authentication Method |
|---|---|
| Azure AI Foundry | DefaultAzureCredential, AzureCliCredential |
| OpenAI | API Key via environment or parameter |
| Anthropic | API Key via environment |
| Ollama | No authentication (local) |
| GitHub Copilot | Copilot client authentication |
Most Azure-based providers support AzureCliCredential, requiring az login before execution:
az login
Best Practices
- Environment Variables: Store provider credentials in environment variables rather than hardcoding
- Provider Selection: Use Azure AI Foundry for production, OpenAI for development, Ollama for testing
- Client Reuse: Create chat clients once and reuse across agent instances when possible
- Error Handling: Implement retry logic for transient provider failures
- Model Selection: Match model capabilities to task requirements for cost efficiency
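The retry recommendation above can be sketched as a small generic helper with exponential backoff and jitter. This is an illustrative pattern, not a framework API; which exception types count as transient depends on the provider SDK in use.

```python
import asyncio
import random

async def with_retries(fn, *, attempts=3, base_delay=0.5,
                       retriable=(TimeoutError, ConnectionError)):
    """Retry an async callable on transient errors with exponential backoff and jitter."""
    for attempt in range(1, attempts + 1):
        try:
            return await fn()
        except retriable:
            if attempt == attempts:
                raise  # out of attempts: surface the error to the caller
            delay = base_delay * (2 ** (attempt - 1)) * (1 + random.random())
            await asyncio.sleep(delay)

# Demo: a call that fails twice with a transient error, then succeeds.
calls = {"n": 0}

async def flaky_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise TimeoutError("transient provider timeout")
    return "ok"

print(asyncio.run(with_retries(flaky_call, base_delay=0.01)))  # → ok
```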
Deprecated Integrations
The Microsoft.Agents.AI.AzureAI.Persistent package is marked obsolete:
[Obsolete("Please use the latest Foundry Agents service via the Microsoft.Agents.AI.AzureAI package.")]
public static async Task<ChatClientAgent> CreateAIAgentAsync(...)
Sources: dotnet/src/Microsoft.Agents.AI.AzureAI.Persistent/PersistentAgentsClientExtensions.cs
Migration to the Foundry provider is recommended for persistent agent use cases.
Sources: [python/samples/02-agents/providers/README.md](https://github.com/microsoft/agent-framework/blob/main/python/samples/02-agents/providers/README.md)
Sessions, History, and State Management
Related topics: Agent System, Workflows and Orchestration
Agent Framework provides a comprehensive system for managing conversation state across multi-turn interactions. This system encompasses sessions that track user conversations, history providers that store and retrieve chat messages, and state management mechanisms that preserve context throughout agent interactions.
Overview
The session and state management architecture in Agent Framework enables persistent conversations across multiple exchanges. At its core, the framework uses AgentSession objects to uniquely identify conversation threads, ChatHistoryProvider implementations to store message history, and various compaction strategies to manage context window constraints.
graph TD
A[Agent Invocation] --> B[AgentSession]
B --> C[ChatHistoryProvider]
C --> D[State Storage]
B --> E[StateBag]
D --> F[Persistent Storage]
E --> G[In-Memory State]
C --> H[Compaction Strategy]
H --> I[Context Reduction]
style A fill:#e1f5ff
style F fill:#fff3e0
style G fill:#e8f5e9
Agent Session
An AgentSession represents a unique conversation context between a user and an agent. The session serves as the primary container for all stateful information related to a specific interaction.
Session Structure
The session object contains metadata and state information:
| Property | Type | Description |
|---|---|---|
| session_id | string | Unique identifier for the session |
| user_id | string | Identifier for the user |
| agent_id | string | Identifier for the agent |
| metadata | dict | Application-specific metadata |
| state_bag | dict | Custom state storage |
| created_at | datetime | Session creation timestamp |
| last_accessed_at | datetime | Last activity timestamp |
Sources: python/packages/core/agent_framework/_sessions.py
Session Lifecycle
Sessions are created when a user initiates a conversation and persist until explicitly terminated. The framework supports both in-memory and persistent session storage backends.
# Session creation pattern (Python)
session = AgentSession(
user_id="user123",
agent_id="assistant-01",
metadata={"conversation_type": "support"}
)
Chat History Management
Chat history providers are responsible for storing, retrieving, and managing conversation messages. The framework provides multiple built-in providers and supports custom implementations.
Built-in History Providers
| Provider | Storage Backend | Use Case |
|---|---|---|
| InMemoryChatHistoryProvider | Memory | Development, testing |
| CosmosChatHistoryProvider | Azure Cosmos DB | Production, scalable |
| RedisChatHistoryProvider | Redis | Production, high-performance |
| Custom Provider | Configurable | Application-specific needs |
In-Memory Provider
The InMemoryChatHistoryProvider provides session-scoped message storage suitable for single-instance deployments:
public class InMemoryChatHistoryProvider : ChatHistoryProvider
{
private readonly SessionState _sessionState;
public List<ChatMessage> GetMessages(AgentSession? session)
=> this._sessionState.GetOrInitializeState(session).Messages;
public void SetMessages(AgentSession? session, List<ChatMessage> messages)
{
Throw.IfNull(messages);
State state = this._sessionState.GetOrInitializeState(session);
state.Messages = messages;
}
}
Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/InMemoryChatHistoryProvider.cs:1-30
Cosmos DB Provider
For production deployments requiring persistence and scalability, the Cosmos DB provider offers fully managed storage:
# Cosmos DB history provider initialization
from agent_framework import AgentFrameworkClient
client = AgentFrameworkClient(endpoint="your-endpoint")
history_provider = client.create_chat_history_provider(
provider_type="cosmos",
connection_string="your-connection-string",
database="agent_sessions",
container="chat_history"
)
Sources: dotnet/src/Microsoft.Agents.AI.CosmosNoSql/CosmosChatHistoryProvider.cs
Redis Provider
The Redis provider offers low-latency access to chat history with automatic expiration:
# Redis session management
from agent_framework_redis import RedisSessionManager
session_manager = RedisSessionManager(
host="localhost",
port=6379,
prefix="agent_session:",
ttl=3600 # 1 hour TTL
)
Sources: python/packages/redis/agent_framework_redis/__init__.py
Custom History Provider
Developers can implement custom history providers by extending the base ChatHistoryProvider class:
from agent_framework import ChatHistoryProvider, ChatMessage
from typing import List, Optional
class CustomHistoryProvider(ChatHistoryProvider):
def __init__(self, storage_backend):
self._storage = storage_backend
async def get_messages(self, session_id: str) -> List[ChatMessage]:
return await self._storage.retrieve(session_id)
async def add_message(self, session_id: str, message: ChatMessage) -> None:
await self._storage.append(session_id, message)
async def clear_history(self, session_id: str) -> None:
await self._storage.delete(session_id)
Sources: python/samples/02-agents/conversations/custom_history_provider.py
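A minimal storage backend matching the `retrieve`/`append`/`delete` surface the custom provider above relies on can be sketched with an in-memory dict. This stand-in is illustrative only; a real backend would persist to a database, but the async interface it must expose is the same.

```python
import asyncio
from collections import defaultdict

class DictStorageBackend:
    """In-memory stand-in for the storage_backend used by CustomHistoryProvider."""

    def __init__(self):
        self._data = defaultdict(list)

    async def retrieve(self, session_id):
        return list(self._data[session_id])  # copy so callers can't mutate storage

    async def append(self, session_id, message):
        self._data[session_id].append(message)

    async def delete(self, session_id):
        self._data.pop(session_id, None)

async def demo():
    backend = DictStorageBackend()
    await backend.append("s1", {"role": "user", "text": "hi"})
    await backend.append("s1", {"role": "assistant", "text": "hello"})
    return await backend.retrieve("s1")

print(asyncio.run(demo()))
```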
Compaction and Context Management
As conversations grow, managing context window limits becomes critical. The framework provides compaction strategies that automatically reduce message history while preserving important context.
Compaction Strategy Interface
Both Python and .NET implementations define the CompactionStrategy interface:
| Property | Type | Description |
|---|---|---|
| max_context_window_tokens | int | Maximum tokens in context window |
| max_output_tokens | int | Reserved tokens for model output |
| available_input_tokens | int | Computed available for input |
public abstract class CompactionStrategy
{
public int MaxContextWindowTokens { get; }
public int MaxOutputTokens { get; }
public int AvailableInputTokens => MaxContextWindowTokens - MaxOutputTokens;
public abstract Task<IEnumerable<ChatMessage>> CompactAsync(
IList<ChatMessage> messages,
CancellationToken cancellationToken = default);
}
Sources: dotnet/src/Microsoft.Agents.AI/Compaction/CompactionStrategy.cs
Compaction Trigger Events
The compaction process can be configured to trigger at different points in the message lifecycle:
| Trigger Event | Timing | Use Case |
|---|---|---|
| BeforeMessagesRetrieval | Before history fetch | Optimize retrieval |
| AfterMessagesRetrieval | After history fetch | Post-processing |
| OnTokenThreshold | At token limit | Aggressive reduction |
// Configure pre-retrieval compaction
if (this.ReducerTriggerEvent == InMemoryChatHistoryProviderOptions.ChatReducerTriggerEvent.BeforeMessagesRetrieval
&& this.ChatReducer is not null)
{
await ReduceMessagesAsync(this.ChatReducer, state, cancellationToken);
}
Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/InMemoryChatHistoryProvider.cs:25-45
Python Compaction Implementation
The Python implementation follows a similar pattern with configurable compaction strategies:
from abc import ABC, abstractmethod
from typing import List, Optional

# ChatMessage and CancellationToken are imported from the framework's core types.
class CompactionStrategy(ABC):
def __init__(
self,
max_context_window_tokens: int = 128000,
max_output_tokens: int = 4096
):
self.max_context_window_tokens = max_context_window_tokens
self.max_output_tokens = max_output_tokens
@property
def available_input_tokens(self) -> int:
return self.max_context_window_tokens - self.max_output_tokens
@abstractmethod
async def compact(
self,
messages: List[ChatMessage],
cancellation_token: Optional[CancellationToken] = None
) -> List[ChatMessage]:
pass
Sources: python/packages/core/agent_framework/_compaction.py
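A concrete strategy in the spirit of the abstract class above might drop the oldest messages until the history fits the available input budget. The sketch below is self-contained and illustrative: the whitespace token estimate is a crude stand-in, and a real strategy would use the target model's tokenizer.

```python
import asyncio

def estimate_tokens(text: str) -> int:
    """Crude token estimate; real strategies would use the model's tokenizer."""
    return max(1, len(text.split()))

class KeepRecentStrategy:
    def __init__(self, max_context_window_tokens=100, max_output_tokens=20):
        self.max_context_window_tokens = max_context_window_tokens
        self.max_output_tokens = max_output_tokens

    @property
    def available_input_tokens(self):
        return self.max_context_window_tokens - self.max_output_tokens

    async def compact(self, messages):
        kept, used = [], 0
        for msg in reversed(messages):  # walk newest first
            cost = estimate_tokens(msg["text"])
            if used + cost > self.available_input_tokens:
                break  # oldest remaining messages are dropped
            kept.append(msg)
            used += cost
        return list(reversed(kept))  # restore chronological order

history = [{"text": "word " * 30}, {"text": "recent message"}, {"text": "latest"}]
strategy = KeepRecentStrategy(max_context_window_tokens=30, max_output_tokens=20)
compacted = asyncio.run(strategy.compact(history))
print([m["text"] for m in compacted])  # → ['recent message', 'latest']
```

Note how the 30-word oldest message exceeds the 10-token input budget and is dropped, while the two recent short messages survive in their original order.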
State Management
Session State Bag
The StateBag provides a dictionary-like interface for storing custom application state within a session:
public class AgentSession
{
public IDictionary<string, object> StateBag { get; set; }
}
// Usage
session.StateBag["last_intent"] = "greeting";
session.StateBag["user_preference"] = new { theme = "dark", language = "en" };
Context Providers
AIContextProvider instances enable middleware-style processing of conversation context:
graph LR
A[User Message] --> B[AIContextProvider.BeforeInvoke]
B --> C[Agent Invocation]
C --> D[AIContextProvider.Invoked]
D --> E[Response to User]
F[Update State] -.-> B
G[Log/Audit] -.-> D
H[Extract Memories] -.-> D
public ValueTask BeforeInvokeAsync(InvokingContext context, CancellationToken cancellationToken = default)
{
// Use the request and response messages to:
// - Update state based on conversation outcomes
// - Extract and store memories or preferences
// - Log or audit conversation details
return default;
}
Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/AIContextProvider.cs:1-25
Chat History Memory Provider Scope
For scoping chat history across applications, agents, or sessions:
public sealed class ChatHistoryMemoryProviderScope
{
public string? ApplicationId { get; set; }
public string? AgentId { get; set; }
public string? SessionId { get; set; }
public string? UserId { get; set; }
}
| Scope Property | Effect When Set |
|---|---|
| ApplicationId | Restricts history to specific application |
| AgentId | Restricts history to specific agent |
| SessionId | Restricts history to specific session |
| UserId | Restricts history to specific user |
Sources: dotnet/src/Microsoft.Agents.AI/Memory/ChatHistoryMemoryProviderScope.cs
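The scoping semantics in the table can be sketched as a filter where every property that is set must match and unset (`None`) properties act as wildcards. This is a Python illustration of the .NET type's described behavior, not framework code.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChatHistoryScope:
    """Python stand-in mirroring ChatHistoryMemoryProviderScope for illustration."""
    application_id: Optional[str] = None
    agent_id: Optional[str] = None
    session_id: Optional[str] = None
    user_id: Optional[str] = None

def matches(scope: ChatHistoryScope, record: dict) -> bool:
    # A record matches when every set scope property equals the record's value;
    # None properties place no restriction.
    for field in ("application_id", "agent_id", "session_id", "user_id"):
        wanted = getattr(scope, field)
        if wanted is not None and record.get(field) != wanted:
            return False
    return True

records = [
    {"application_id": "app1", "user_id": "u1", "text": "hello"},
    {"application_id": "app1", "user_id": "u2", "text": "hi"},
]
scope = ChatHistoryScope(application_id="app1", user_id="u1")
print([r["text"] for r in records if matches(scope, r)])  # → ['hello']
```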
Workflow Checkpointing
For long-running workflows, the framework supports checkpoint-based state persistence that allows recovery from failures and resumption of interrupted executions.
from agent_framework import CheckpointStorage, CosmosCheckpointStorage
checkpoint_storage = CosmosCheckpointStorage(
endpoint="your-cosmos-endpoint",
database="workflows",
container="checkpoints"
)
# Save checkpoint
await checkpoint_storage.save_checkpoint(
workflow_id="workflow-123",
step="step-3",
state={"progress": 75, "data": {...}},
metadata={"started_at": "2024-01-15T10:00:00Z"}
)
# Resume from checkpoint
checkpoint = await checkpoint_storage.load_checkpoint(
workflow_id="workflow-123"
)
Sources: python/samples/03-workflows/checkpoint/cosmos_workflow_checkpointing.py
Agent Configuration Options
The HarnessAgentOptions class demonstrates comprehensive configuration for session and history management:
public class HarnessAgentOptions
{
public ChatOptions? ChatOptions { get; set; }
public ChatHistoryProvider? ChatHistoryProvider { get; set; }
public IEnumerable<AIContextProvider>? AIContextProviders { get; set; }
}
| Option | Description |
|---|---|
| ChatOptions | Configures instructions, tools, and model parameters |
| ChatHistoryProvider | Storage backend for conversation history |
| AIContextProviders | Middleware providers for context processing |
Sources: dotnet/src/Microsoft.Agents.AI.Harness/HarnessAgentOptions.cs:1-50
Agent Modes
Sessions can operate in different modes that affect behavior:
public sealed class AgentMode
{
public string Name { get; }
public string Description { get; }
}
public class AgentModeProviderOptions
{
public IReadOnlyList<AgentMode>? Modes { get; set; }
public string? DefaultMode { get; set; }
}
| Mode | Description |
|---|---|
| plan | Interactive planning mode |
| execute | Autonomous execution mode |
Sources: dotnet/src/Microsoft.Agents.AI/Harness/AgentMode/AgentModeProviderOptions.cs
Response Updates
The AgentResponseUpdate class represents streaming response data with full metadata:
public class AgentResponseUpdate
{
public string? AuthorName { get; set; }
public ChatRole? Role { get; set; }
public IList<AIContent>? Contents { get; set; }
public FinishReason? FinishReason { get; set; }
public string? MessageId { get; set; }
public string? ResponseId { get; set; }
public DateTimeOffset? CreatedAt { get; set; }
}
Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/AgentResponseUpdate.cs:1-30
Best Practices
Session Management
- Session Initialization: Always initialize sessions with appropriate user and agent identifiers
- Session Cleanup: Implement session expiration for idle conversations
- State Isolation: Use separate state bags for different concerns
History Management
- Provider Selection: Choose providers based on scale requirements
- Compaction Tuning: Configure compaction thresholds based on model context limits
- History Pruning: Implement retention policies for regulatory compliance
State Management
- State Serialization: Ensure custom state objects are serializable
- Context Providers: Use context providers for cross-cutting concerns
- Checkpoint Frequency: Balance checkpoint overhead against recovery requirements
Sources: python/packages/core/agent_framework/_sessions.py
Hosting and Deployment Patterns
Related topics: Workflows and Orchestration, Observability and Telemetry
The Microsoft Agent Framework provides multiple hosting and deployment patterns to accommodate different runtime environments and enterprise requirements. This documentation covers the available hosting options, configuration requirements, and deployment strategies for both Python and .NET implementations.
Overview
The framework supports three primary hosting paradigms:
| Hosting Pattern | Language | Runtime Environment | Use Case |
|---|---|---|---|
| Azure Functions | Python, .NET | Serverless/Event-driven | Stateless agent invocations |
| Durable Task | Python, .NET | Long-running workflows | Complex orchestrations with state persistence |
| Foundry Hosting | Python, .NET | Azure AI Foundry | Managed agent deployment with platform integration |
Sources: python/samples/04-hosting/README.md:1-15 Sources: dotnet/samples/04-hosting/README.md:1-20
Architecture Overview
graph TD
A[Client Request] --> B{Deployment Pattern}
B -->|Azure Functions| C[Function App]
B -->|Durable Task| D[Orchestration Engine]
B -->|Foundry Hosting| E[Azure AI Foundry]
C --> F[Stateless Agent Handler]
D --> G[Stateful Orchestrator]
E --> H[Managed Agent Runtime]
F --> I[Response]
G --> I
H --> I
Azure Functions Hosting
Azure Functions provides a serverless hosting model suitable for event-driven agent invocations. The framework offers native integration through dedicated packages for both Python and .NET.
Python Azure Functions Package
The Python Azure Functions hosting package is located at python/packages/azurefunctions/agent_framework_azurefunctions/__init__.py.
Installation
pip install agent-framework-azurefunctions
Sources: python/packages/azurefunctions/agent_framework_azurefunctions/__init__.py
.NET Azure Functions Package
The .NET Azure Functions hosting is provided through the Microsoft.Agents.AI.Hosting.AzureFunctions NuGet package.
Installation
<ItemGroup>
<PackageReference Include="Microsoft.Agents.AI.Hosting.AzureFunctions" Version="[CURRENTVERSION]" />
</ItemGroup>
Or via CLI:
dotnet add package Microsoft.Agents.AI.Hosting.AzureFunctions
Sources: dotnet/src/Microsoft.Agents.AI.Hosting.AzureFunctions/README.md:1-15
Configuration
Azure Functions samples require the following environment configuration:
| Variable | Description | Example |
|---|---|---|
| AZURE_OPENAI_ENDPOINT | Azure OpenAI service endpoint | https://your-resource.openai.azure.com/ |
| AZURE_OPENAI_DEPLOYMENT_NAME | Model deployment name | gpt-4o |
| TASKHUB_NAME | Durable Task hub name (for orchestration) | default |
Sources: dotnet/samples/04-hosting/DurableAgents/AzureFunctions/README.md:1-30
Sample Structure
The repository includes Azure Functions samples organized by complexity:
dotnet/samples/04-hosting/DurableAgents/AzureFunctions/
├── 01_SingleAgent/
├── 02_MultiAgent/
└── README.md
Running the Sample
cd dotnet/samples/04-hosting/DurableAgents/AzureFunctions/01_SingleAgent
func start
The function app becomes available at http://localhost:7071.
Sources: dotnet/samples/04-hosting/DurableAgents/AzureFunctions/README.md:45-60
Durable Task Hosting
Durable Task hosting enables long-running agent workflows with state persistence and checkpoint capabilities. This pattern is essential for complex multi-step orchestrations.
Python Durable Task Package
Installation
pip install agent-framework-durabletask
The package is located at python/packages/durabletask/agent_framework_durabletask/__init__.py.
Sources: python/packages/durabletask/agent_framework_durabletask/__init__.py
.NET Durable Task Package
Installation
<ItemGroup>
<PackageReference Include="Microsoft.Agents.AI.DurableTask" Version="[CURRENTVERSION]" />
</ItemGroup>
Sources: dotnet/src/Microsoft.Agents.AI.DurableTask/README.md:1-10
Workflow Orchestration
graph LR
A[Start] --> B[Activity: Initialize]
B --> C[Activity: Process]
C --> D{Continue?}
D -->|Yes| C
D -->|No| E[Activity: Finalize]
E --> F[Complete]
G[Orchestrator] -.-> A
G -.-> B
G -.-> C
G -.-> D
G -.-> E
G -.-> F
Azurite Emulator Requirement
When running Durable Task samples locally, start the Azurite storage emulator before launching the function host:
azurite
To provision and deploy the sample to Azure instead of running it against the local emulator, use the Azure Developer CLI:
az login
azd pipeline config
azd up
Sources: python/samples/04-hosting/azure_functions/README.md:1-20
Foundry Hosting
Foundry Hosting provides the most comprehensive deployment option with deep integration into Azure AI Foundry. This pattern supports managed agents, model routing, and enterprise-grade security.
Python Foundry Hosting Package
Installation
pip install agent-framework-foundry-hosting
The package is located at python/packages/foundry_hosting/agent_framework_foundry_hosting/__init__.py.
Sources: python/packages/foundry_hosting/agent_framework_foundry_hosting/__init__.py
Configuration Requirements
Foundry-hosted agents require specific environment configuration:
| Variable | Description | Required |
|---|---|---|
| FOUNDRY_PROJECT_ENDPOINT | Azure AI Foundry project endpoint | Yes |
| FOUNDRY_MODEL or AZURE_AI_MODEL_DEPLOYMENT_NAME | Model deployment name | Yes |
| AZURE_BEARER_TOKEN | Authentication token (for Docker) | Docker only |
| AGENT_NAME | Foundry-managed agent name | Local dev |
Sources: python/samples/04-hosting/foundry-hosted-agents/README.md:1-40 Sources: dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-FoundryAgent/README.md:1-25
Environment Setup
Bash/Linux
export FOUNDRY_PROJECT_ENDPOINT="https://<account>.services.ai.azure.com/api/projects/<project>"
export AZURE_AI_MODEL_DEPLOYMENT_NAME="<your-model-deployment-name>"
PowerShell
$env:FOUNDRY_PROJECT_ENDPOINT="https://<account>.services.ai.azure.com/api/projects/<project>"
$env:AZURE_AI_MODEL_DEPLOYMENT_NAME="<your-model-deployment-name>"
Foundry Hosted Agent Samples
The repository provides multiple Foundry hosting samples:
| Sample | Description | Path |
|---|---|---|
| Hosted-TextRag | Text-based RAG agent | dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-TextRag/ |
| Hosted-FoundryAgent | Direct Foundry agent hosting | dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-FoundryAgent/ |
| Hosted-AzureSearchRag | Azure AI Search integration | dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-AzureSearchRag/ |
| Hosted-McpTools | MCP tools integration | dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-McpTools/ |
| Hosted-Files | Bundled file handling | dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-Files/ |
| Hosted-Workflow-Simple | Multi-step workflow | dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-Workflow-Simple/ |
Sources: python/samples/04-hosting/README.md:1-50 Sources: dotnet/samples/04-hosting/README.md:1-60
Deployment Workflows
Direct Execution (Contributors)
For local development and contribution work:
cd dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-FoundryAgent
AGENT_NAME=<your-agent-name> dotnet run
The agent starts on http://localhost:8088.
Docker Deployment
#### Publishing for Container Runtime
dotnet publish -c Debug -f net10.0 -r linux-musl-x64 --self-contained false -o out
#### Building the Image
docker build -f Dockerfile.contributor -t hosted-foundry-agent .
#### Running the Container
export AZURE_BEARER_TOKEN=$(az account get-access-token --resource https://ai.azure.com --query accessToken -o tsv)
docker run --rm -p 8088:8088 \
-e AGENT_NAME=hosted-foundry-agent \
-e AZURE_BEARER_TOKEN=$AZURE_BEARER_TOKEN \
--env-file .env \
hosted-foundry-agent
Sources: dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-FoundryAgent/README.md:20-60
Testing Hosted Agents
Using Azure Developer CLI
azd ai agent invoke --local "Hello!"
Using curl
curl -X POST http://localhost:8088/responses \
-H "Content-Type: application/json" \
-d '{"input": "Hello!", "model": "<your-agent-name>"}'
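The same request the curl example sends can be built with the Python standard library, which is convenient for scripted smoke tests. The endpoint path and payload shape below simply mirror the curl call above; the agent name is a placeholder.

```python
import json
import urllib.request

def build_responses_request(base_url: str, agent_name: str, user_input: str):
    """Construct the POST /responses request shown in the curl example."""
    payload = {"input": user_input, "model": agent_name}
    return urllib.request.Request(
        f"{base_url}/responses",
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_responses_request("http://localhost:8088", "hosted-foundry-agent", "Hello!")
print(req.full_url, req.get_method())
# Actually sending it requires the agent to be running locally:
# with urllib.request.urlopen(req) as resp:
#     print(resp.read().decode())
```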
Testing Session Files
cd ../Using-Samples/SessionFilesClient
$env:AGENT_ENDPOINT = "http://localhost:8088"
$env:AGENT_NAME = "hosted-files"
dotnet run
You> What is the total revenue in the contoso file?
Sources: dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-Files/README.md:30-50
Comparison Matrix
| Feature | Azure Functions | Durable Task | Foundry Hosting |
|---|---|---|---|
| Stateful Execution | No | Yes | Yes |
| Long-running Workflows | No | Yes | Yes |
| Serverless | Yes | No | No |
| Managed Scaling | Yes | Manual | Yes |
| Checkpoint/Resume | No | Yes | Yes |
| Azure AI Foundry Integration | No | No | Yes |
| Local Development Support | Limited | Yes | Yes |
| Docker Deployment | Yes | Yes | Yes |
Next Steps
- Explore the Azure Functions samples for event-driven patterns
- Review the Foundry hosting samples for enterprise deployments
- Check the Durable Task documentation for orchestration patterns
Sources: [python/samples/04-hosting/README.md:1-15](https://github.com/microsoft/agent-framework/blob/main/python/samples/04-hosting/README.md)
Observability and Telemetry
Related topics: Workflows and Orchestration, Hosting and Deployment Patterns
The Agent Framework provides comprehensive observability capabilities through OpenTelemetry integration, enabling distributed tracing, performance metrics collection, and detailed logging across both .NET and Python implementations.
Overview
Observability in the Agent Framework allows developers to:
- Trace agent invocations across distributed systems
- Collect performance metrics and timing information
- Log request and response payloads (when enabled)
- Track errors and capture exception details
- Monitor usage statistics and token consumption
Sources: dotnet/src/Microsoft.Agents.AI/OpenTelemetryAgentBuilderExtensions.cs:1-30
The implementation follows the OpenTelemetry Semantic Conventions for Generative AI systems as defined in the OpenTelemetry specification. The specification for Generative AI is still experimental and subject to change.
Sources: docs/decisions/0003-agent-opentelemetry-instrumentation.md:1-20
Architecture
High-Level Component Interaction
graph TD
A[Application] --> B[OpenTelemetry Agent Wrapper]
B --> C[Inner AIAgent]
C --> D[IChatClient]
D --> E[AI Provider<br/>OpenAI/Anthropic/GitHub Copilot]
B -.-> F[OpenTelemetry Traces]
B -.-> G[Metrics]
B -.-> H[Logs]
F --> I[OTLP Exporter]
G --> I
H --> I
I --> J[Telemetry Backend<br/>Azure Monitor/Jaeger/...]
Auto-Wiring Mechanism
When using OpenTelemetryAgent, the framework automatically wraps underlying chat clients with telemetry instrumentation:
graph LR
A[ChatClientAgent] --> B[OpenTelemetryAgent]
B --> C{IChatClient}
C -->|autoWireChatClient: true| D[Auto-wrap with<br/>OpenTelemetryChatClient]
C -->|Already Instrumented| E[No Additional Wrapping]
D --> F[Chat-Level Telemetry]Sources: dotnet/src/Microsoft.Agents.AI/OpenTelemetryAgent.cs:1-25
.NET Implementation
OpenTelemetryAgent
The OpenTelemetryAgent class wraps an existing AIAgent to add telemetry capabilities without modifying the underlying agent's behavior.
Class Declaration:
[Experimental(DiagnosticIds.Experiments.AgentsAIExperiments)]
public sealed class OpenTelemetryAgent : AIAgent
Constructor Parameters:
| Parameter | Type | Description |
|---|---|---|
| innerAgent | AIAgent | The underlying agent to be augmented with telemetry |
| sourceName | string? | Optional source name for telemetry identification |
| autoWireChatClient | bool | Auto-wrap a ChatClientAgent's IChatClient with OpenTelemetryChatClient |
Sources: dotnet/src/Microsoft.Agents.AI/OpenTelemetryAgent.cs:1-40
Key Features:
- Provider Metadata Extraction: automatically extracts provider metadata from the inner agent via AIAgentMetadata:
this._providerName = innerAgent.GetService<AIAgentMetadata>()?.ProviderName;
- Chat Client Auto-Wiring: when autoWireChatClient is true and the inner agent is a ChatClientAgent, the underlying IChatClient is automatically wrapped with OpenTelemetryChatClient.
Sources: dotnet/src/Microsoft.Agents.AI/OpenTelemetryAgent.cs:1-50
Builder Extension
The recommended way to add telemetry to agents is through the AIAgentBuilder:
public static AIAgentBuilder UseOpenTelemetry(
this AIAgentBuilder builder,
string? sourceName = null,
Action<OpenTelemetryAgent>? configure = null)
Usage:
AIAgent agent = builder
.WithChatClient(chatClient)
.UseOpenTelemetry(sourceName: "my-agent")
.Build();
Sources: dotnet/src/Microsoft.Agents.AI/OpenTelemetryAgentBuilderExtensions.cs:1-45
Workflow Telemetry Options
The WorkflowTelemetryOptions class provides configuration for workflow-level telemetry:
| Property | Type | Default | Description |
|---|---|---|---|
| EnableSensitiveData | bool | false | Include potentially sensitive information in telemetry |
| DisableWorkflowBuild | bool | false | Disable workflow.build activities |
| DisableWorkflowRun | bool | false | Disable workflow_invoke activities |
Sources: dotnet/src/Microsoft.Agents.AI.Workflows/Observability/WorkflowTelemetryOptions.cs:1-40
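To make these defaults concrete, here is a minimal Python mirror of the options table (the real type is the .NET WorkflowTelemetryOptions class), plus a hypothetical should_record helper showing how the two disable flags would gate activity creation.

```python
from dataclasses import dataclass

# Illustrative mirror of the .NET WorkflowTelemetryOptions defaults.
@dataclass
class WorkflowTelemetryOptions:
    enable_sensitive_data: bool = False   # raw inputs/outputs excluded by default
    disable_workflow_build: bool = False  # keep workflow.build activities
    disable_workflow_run: bool = False    # keep workflow_invoke activities

def should_record(activity: str, options: WorkflowTelemetryOptions) -> bool:
    """Hypothetical helper: decide whether an activity should be started."""
    if activity == "workflow.build":
        return not options.disable_workflow_build
    if activity == "workflow_invoke":
        return not options.disable_workflow_run
    return True  # other activities are unaffected by these flags
```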
Activity Extensions
The framework provides extension methods for creating and managing OpenTelemetry activities in workflows:
// Creating activity spans for workflow operations
ActivitySource activitySource = new ActivitySource("Microsoft.Agents.AI.Workflows");
// Activity creation following semantic conventions
var activity = activitySource.StartActivity("workflow.invoke");
These extensions ensure proper tagging and attributes according to OpenTelemetry's generative AI conventions.
Sources: dotnet/src/Microsoft.Agents.AI.Workflows/Observability/ActivityExtensions.cs:1-30
Python Implementation
Telemetry Module
The Python SDK provides telemetry capabilities through the _telemetry.py module:
from agent_framework._telemetry import configure_otel_providers
Key Functions:
| Function | Description |
|---|---|
| configure_otel_providers() | Configure OpenTelemetry providers with exporters |
| configure_otel_providers_with_env_var() | Use the standard OTEL environment variables |
Sources: python/packages/core/agent_framework/_telemetry.py:1-50
Basic Configuration
from agent_framework.observability import configure_otel_providers
# Enable console exporters for development
configure_otel_providers(enable_console_exporters=True)
Sources: python/samples/02-agents/observability/agent_observability.py:1-20
GitHub Copilot Agent Integration
The GitHubCopilotAgent has OpenTelemetry tracing built-in:
from agent_framework.observability import configure_otel_providers
from agent_framework.github import GitHubCopilotAgent
configure_otel_providers(enable_console_exporters=True)
async with GitHubCopilotAgent() as agent:
response = await agent.run("Hello!")
Sources: python/samples/02-agents/providers/github_copilot/README.md:1-30
Environment Variables
Python observability supports standard OpenTelemetry environment variables:
| Variable | Description |
|---|---|
| OTEL_SERVICE_NAME | Service name for telemetry |
| OTEL_EXPORTER_OTLP_ENDPOINT | OTLP exporter endpoint |
| OTEL_EXPORTER_OTLP_PROTOCOL | Protocol (grpc or http/protobuf) |
| OTEL_RESOURCE_ATTRIBUTES | Additional resource attributes |
Sources: python/samples/02-agents/observability/README.md:1-30
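As a sketch of how a service might resolve these variables before configuring exporters: the fallbacks below follow OpenTelemetry conventions (unknown_service is the spec's default service name and 4317 is the default OTLP/gRPC port), but the helper itself is illustrative, not part of the framework.

```python
import os
from collections.abc import Mapping

def resolve_otel_config(env: Mapping[str, str] = os.environ) -> dict[str, str]:
    """Resolve the standard OTEL_* variables with conventional fallbacks."""
    return {
        "service_name": env.get("OTEL_SERVICE_NAME", "unknown_service"),
        "endpoint": env.get("OTEL_EXPORTER_OTLP_ENDPOINT", "http://localhost:4317"),
        "protocol": env.get("OTEL_EXPORTER_OTLP_PROTOCOL", "grpc"),
    }
```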
Logging Configuration
Align Python logs with telemetry output:
import logging
logging.basicConfig(
format="[%(asctime)s - %(pathname)s:%(lineno)d - %(levelname)s] %(message)s",
datefmt="%Y-%m-%d %H:%M:%S",
)
# Get root logger and set detailed level
logger = logging.getLogger()
logger.setLevel(logging.NOTSET)
Sources: python/samples/02-agents/observability/README.md:1-60
Semantic Conventions
The Agent Framework adheres to OpenTelemetry's semantic conventions for generative AI systems. Key conventions include:
graph TD
A[AI Agent Invocation] --> B[Semantic Convention Attributes]
B --> C[gen_ai.system]
B --> D[gen_ai.request.model]
B --> E[gen_ai.response.id]
B --> F[gen_ai.usage.prompt_tokens]
B --> G[gen_ai.usage.completion_tokens]
B --> H[gen_ai.response.finish_reason]
Standard Attributes:
| Attribute | Description |
|---|---|
| gen_ai.system | The AI system type (e.g., "openai", "anthropic") |
| gen_ai.request.model | Model identifier for the request |
| gen_ai.response.id | Unique identifier for the response |
| gen_ai.usage.prompt_tokens | Number of tokens in the prompt |
| gen_ai.usage.completion_tokens | Number of tokens in the completion |
| gen_ai.response.finish_reason | Reason for completion termination |
Sources: docs/decisions/0003-agent-opentelemetry-instrumentation.md:1-50
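To make the attribute mapping concrete, here is a hedged Python sketch that assembles the gen_ai.* span attributes from a completed invocation; the attribute names come from the table above, while the Invocation type is hypothetical.

```python
from dataclasses import dataclass

@dataclass
class Invocation:
    """Hypothetical record of a completed model call."""
    system: str
    model: str
    response_id: str
    prompt_tokens: int
    completion_tokens: int
    finish_reason: str

def gen_ai_attributes(inv: Invocation) -> dict[str, object]:
    """Map an invocation onto the gen_ai.* semantic convention attributes."""
    return {
        "gen_ai.system": inv.system,
        "gen_ai.request.model": inv.model,
        "gen_ai.response.id": inv.response_id,
        "gen_ai.usage.prompt_tokens": inv.prompt_tokens,
        "gen_ai.usage.completion_tokens": inv.completion_tokens,
        "gen_ai.response.finish_reason": inv.finish_reason,
    }
```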
Configuration Examples
.NET: Full Agent with Telemetry
using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Telemetry;
// Create the builder
IAIAgentBuilder builder = new AgentBuilder();
// Configure with telemetry
AIAgent agent = builder
.WithChatClient(chatClient)
.UseOpenTelemetry(
sourceName: "my-agent",
configure: agent =>
{
// Additional configuration
})
.Build();
// Use the agent - all invocations are automatically traced
var response = await agent.InvokeAsync("Hello, agent!");
Sources: dotnet/samples/02-agents/AgentOpenTelemetry/Program.cs:1-50
Python: Advanced Exporter Configuration
from agent_framework.observability import configure_otel_providers
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import BatchSpanProcessor
# Create custom exporter
custom_exporter = OTLPSpanExporter(
endpoint="https://your-endpoint.azure.com",
insecure=True
)
# Configure with custom exporter
configure_otel_providers(
service_name="my-agent-service",
span_exporter=custom_exporter,
enable_console_exporters=True
)
Sources: python/samples/02-agents/observability/configure_otel_providers_with_parameters.py:1-40
Best Practices
1. Consistent Source Naming
Use meaningful source names to identify telemetry data:
// Good
builder.UseOpenTelemetry(sourceName: "customer-support-agent");
// Avoid
builder.UseOpenTelemetry(); // Uses default
2. Sensitive Data Handling
By default, telemetry excludes raw inputs and outputs:
var options = new WorkflowTelemetryOptions
{
EnableSensitiveData = false // Default - excludes raw content
};
Only enable sensitive data logging when necessary and ensure proper data protection.
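A minimal sketch of this default redaction behavior, assuming a simplified event shape of my own: with sensitive data disabled, only metadata such as role and content length is recorded, never the message text itself.

```python
def to_telemetry_event(role: str, content: str,
                       enable_sensitive_data: bool = False) -> dict:
    """Build a telemetry event; the raw content is included only on opt-in."""
    event = {"role": role, "content.length": len(content)}
    if enable_sensitive_data:
        event["content"] = content  # explicit opt-in only
    return event
```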
3. Selective Activity Recording
Disable activities that generate excessive telemetry:
var options = new WorkflowTelemetryOptions
{
DisableWorkflowBuild = true, // Reduce noise in build-heavy workflows
DisableWorkflowRun = false // Keep run telemetry
};
4. Provider Compatibility
The telemetry implementation adapts to the underlying AI provider:
| Provider | Telemetry Support |
|---|---|
| OpenAI | Full |
| Anthropic | Full |
| Azure AI Foundry | Full |
| GitHub Copilot | Built-in |
5. Environment-Based Configuration
Use environment variables for deployment flexibility:
# Development
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_SERVICE_NAME=agent-dev
# Production
export OTEL_EXPORTER_OTLP_ENDPOINT=https://telemetry.company.com
export OTEL_SERVICE_NAME=agent-prod
Troubleshooting
Missing Telemetry Data
- Verify OpenTelemetry SDK is properly configured
- Check that the exporter endpoint is accessible
- Ensure ActivitySource names match between instrumentation and export
Duplicate Telemetry
If using ChatClientAgent with OpenTelemetryAgent:
- Set autoWireChatClient: false when the chat client is already instrumented
- Avoid manually wrapping already-wrapped clients
Performance Impact
Telemetry collection adds minimal overhead. For high-throughput scenarios:
- Use batch exporters instead of simple exporters
- Consider disabling verbose logging levels
- Sample traces when full fidelity is not required
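Trace sampling can be sketched as a head-based, trace-id-ratio decision, in the spirit of OpenTelemetry's TraceIdRatioBased sampler: every span in a trace shares one keep/drop decision derived from the trace id. This standalone version is illustrative, not the sampler's actual implementation.

```python
def should_sample(trace_id: int, ratio: float) -> bool:
    """Keep roughly `ratio` of traces, deterministically per trace id."""
    # Compare the low 64 bits of the trace id against a ratio-derived bound,
    # so the same trace id always yields the same decision.
    bound = int(ratio * (1 << 64))
    return (trace_id & ((1 << 64) - 1)) < bound
```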
Related Documentation
Sources: dotnet/src/Microsoft.Agents.AI/OpenTelemetryAgentBuilderExtensions.cs:1-30
Doramagic Pitfall Log
Doramagic extracted 16 source-linked risk signals. Review them before installing or handing real data to the project.
1. Installation risk: .NET: [Bug]: TextContent.AdditionalProperties dropped by AsAGUIEventStreamAsync for TEXT_MESSAGE_START/TEXT_MESSAGE_CON…
- Severity: high
- Finding: Installation risk is backed by a source signal: .NET: [Bug]: TextContent.AdditionalProperties dropped by AsAGUIEventStreamAsync for TEXT_MESSAGE_START/TEXT_MESSAGE_CON…. Treat it as a review item until the current version is checked.
- User impact: First-time setup may fail or require extra isolation and rollback planning.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/4923
2. Configuration risk: Bug: Agent responses lose structured JSON metadata in multi-agent orchestration (MAF 1.x.x)
- Severity: high
- Finding: Configuration risk is backed by a source signal: Bug: Agent responses lose structured JSON metadata in multi-agent orchestration (MAF 1.x.x). Treat it as a review item until the current version is checked.
- User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5785
3. Security or permission risk: .NET: OpenAI-compatible extra body field thinking is not forwarded when using Microsoft.Agents.AI.OpenAI
- Severity: high
- Finding: Security or permission risk is backed by a source signal: .NET: OpenAI-compatible extra body field thinking is not forwarded when using Microsoft.Agents.AI.OpenAI. Treat it as a review item until the current version is checked.
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5708
4. Security or permission risk: .NET: [Bug]: In v. 1.5.0 Microsoft.Agents.AI.Anthropic (and Google.GenAI) do not work [Regression]
- Severity: high
- Finding: Security or permission risk is backed by a source signal: .NET: [Bug]: In v. 1.5.0 Microsoft.Agents.AI.Anthropic (and Google.GenAI) do not work [Regression]. Treat it as a review item until the current version is checked.
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5707
5. Security or permission risk: .NET: [Bug]: Regression - Tool Events not being emitted correctly to the front end
- Severity: high
- Finding: Security or permission risk is backed by a source signal: .NET: [Bug]: Regression - Tool Events not being emitted correctly to the front end. Treat it as a review item until the current version is checked.
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5794
6. Security or permission risk: Anthropic function limit fallback can return empty final response
- Severity: high
- Finding: Security or permission risk is backed by a source signal: Anthropic function limit fallback can return empty final response. Treat it as a review item until the current version is checked.
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5769
7. Security or permission risk: Python: Add tutorial for building a custom chat client / LLM provider
- Severity: high
- Finding: Security or permission risk is backed by a source signal: Python: Add tutorial for building a custom chat client / LLM provider. Treat it as a review item until the current version is checked.
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5505
8. Installation risk: python-1.2.1
- Severity: medium
- Finding: Installation risk is backed by a source signal: python-1.2.1. Treat it as a review item until the current version is checked.
- User impact: First-time setup may fail or require extra isolation and rollback planning.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/releases/tag/python-1.2.1
9. Configuration risk: .NET: [Bug]: DurableTask: SuperstepState.AccumulatedEvents overflows CustomStatus 16 KB cap on multi-executor workflows…
- Severity: medium
- Finding: Configuration risk is backed by a source signal: .NET: [Bug]: DurableTask: SuperstepState.AccumulatedEvents overflows CustomStatus 16 KB cap on multi-executor workflows…. Treat it as a review item until the current version is checked.
- User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5745
10. Configuration risk: Python: CosmosHistoryProvider Code interpreter tool calls are saved chunk by chunk
- Severity: medium
- Finding: Configuration risk is backed by a source signal: Python: CosmosHistoryProvider Code interpreter tool calls are saved chunk by chunk. Treat it as a review item until the current version is checked.
- User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5793
11. Configuration risk: dotnet-1.5.0
- Severity: medium
- Finding: Configuration risk is backed by a source signal: dotnet-1.5.0. Treat it as a review item until the current version is checked.
- User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/releases/tag/dotnet-1.5.0
12. Configuration risk: python-1.2.2
- Severity: medium
- Finding: Configuration risk is backed by a source signal: python-1.2.2. Treat it as a review item until the current version is checked.
- User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/releases/tag/python-1.2.2
Source: Doramagic discovery, validation, and Project Pack records
Community Discussion Evidence
Doramagic exposes project-level community discussion separately from official documentation. Review these links before using agent-framework with real data or production workflows.
- [.NET: [Bug]: TextContent.AdditionalProperties dropped by AsAGUIEventStre](https://github.com/microsoft/agent-framework/issues/4923) - github / github_issue
- Python: OpenAI store=True can silently bypass external HistoryProvider p - github / github_issue
- Bug: Agent responses lose structured JSON metadata in multi-agent orches - github / github_issue
- [.NET: [Bug]: DurableTask: SuperstepState.AccumulatedEvents overflows Cus](https://github.com/microsoft/agent-framework/issues/5745) - github / github_issue
- [.NET: [Bug]: Regression - Tool Events not being emitted correctly to the](https://github.com/microsoft/agent-framework/issues/5794) - github / github_issue
- Python: CosmosHistoryProvider Code interpreter tool calls are saved chun - github / github_issue
- Anthropic function limit fallback can return empty final response - github / github_issue
- [.NET: [Bug]: In v. 1.5.0 Microsoft.Agents.AI.Anthropic (and Google.GenAI](https://github.com/microsoft/agent-framework/issues/5707) - github / github_issue
- Python: Add tutorial for building a custom chat client / LLM provider - github / github_issue
- .NET: OpenAI-compatible extra body field thinking is not forwarded when - github / github_issue
- dotnet-1.5.0 - github / github_release
- python-1.3.0 - github / github_release
Source: Project Pack community evidence and pitfall evidence