Doramagic Project Pack · Human Manual

agent-framework


Getting Started with Microsoft Agent Framework

Related topics: System Architecture, Agent System


Overview

Microsoft Agent Framework is a comprehensive, multi-language framework for building intelligent agents that integrate with various AI services and providers. The framework enables developers to create agents capable of natural language understanding, tool usage, multi-turn conversations, and complex workflow orchestration.

The framework supports two primary ecosystems:

| Language | Package Manager | Core Package |
|---|---|---|
| Python | pip | agent-framework, agent-framework-core |
| .NET | NuGet | Microsoft.Agents.AI |

Sources: python/README.md:1-40

Supported Platforms

| Component | Requirements |
|---|---|
| Python | 3.10+ |
| Operating Systems | Windows, macOS, Linux |
| .NET | .NET 8+ |

Sources: python/README.md:36-39

Installation

Python Installation

The framework offers two installation approaches depending on your use case:

#### Development Mode (Full Installation)

For exploring or developing locally with all features:

pip install agent-framework

This installs the core package and all integration sub-packages, ensuring all features are available without additional configuration steps.

Sources: python/README.md:10-15

#### Selective Installation

For lightweight environments with specific integration needs:

| Package | Command | Description |
|---|---|---|
| Core Only | pip install agent-framework-core | Azure OpenAI, OpenAI support + workflows |
| + Azure AI Foundry | pip install agent-framework-foundry | Azure AI Foundry integration |
| + Copilot Studio | pip install agent-framework-copilotstudio --pre | Microsoft Copilot Studio (preview) |

Released packages (agent-framework, agent-framework-core, agent-framework-foundry) no longer require the --pre flag, while preview connectors like agent-framework-copilotstudio still do.

Sources: python/README.md:17-34

.NET Installation

For .NET projects, add the appropriate package reference to your .csproj file:

<ItemGroup>
  <PackageReference Include="Microsoft.Agents.AI" Version="[CURRENTVERSION]" />
</ItemGroup>

For Azure Functions hosting:

<ItemGroup>
  <PackageReference Include="Microsoft.Agents.AI.Hosting.AzureFunctions" Version="[CURRENTVERSION]" />
</ItemGroup>

Sources: dotnet/src/Microsoft.Agents.AI/Microsoft.Agents.AI.csproj

Quick Start

Python: Basic Agent

import asyncio

from agent_framework import Agent
from agent_framework.integrations.azure_ai import FoundryChatClient
from azure.identity import AzureCliCredential

async def main():
    agent = Agent(
        client=FoundryChatClient(
            credential=AzureCliCredential(),
        ),
        name="HaikuAgent",
        instructions="You are an upbeat assistant that writes beautifully.",
    )

    print(await agent.run("Write a haiku about Microsoft Agent Framework."))

if __name__ == "__main__":
    asyncio.run(main())

Sources: README.md:30-50

.NET: Basic Agent

using Azure.AI.Projects;
using Azure.Identity;
using Microsoft.Agents.AI;

string endpoint = Environment.GetEnvironmentVariable("AZURE_AI_PROJECT_ENDPOINT") 
    ?? throw new InvalidOperationException("AZURE_AI_PROJECT_ENDPOINT is not set.");
string deploymentName = Environment.GetEnvironmentVariable("AZURE_AI_MODEL_DEPLOYMENT_NAME") 
    ?? "gpt-5.4-mini";

AIAgent agent = new AIProjectClient(new Uri(endpoint), new DefaultAzureCredential())
    .AsAIAgent(model: deploymentName, instructions: "You are an upbeat assistant.", name: "HaikuAgent");

Console.WriteLine(await agent.RunAsync("Write a haiku about Microsoft Agent Framework."));

Sources: README.md:52-65

Environment Configuration

Set API keys and configuration as environment variables or in a .env file at your project root:

| Variable | Description | Required |
|---|---|---|
| FOUNDRY_PROJECT_ENDPOINT | Azure AI Foundry project endpoint | Yes |
| FOUNDRY_MODEL | Model deployment name (defaults to gpt-4o) | No |
| AZURE_AI_PROJECT_ENDPOINT | Alternative endpoint variable | Yes |
| AZURE_AI_MODEL_DEPLOYMENT_NAME | Model deployment name | No |

Sources: python/samples/01-get-started/README.md:9-13
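To show how these variables might be resolved in practice, here is a small standard-library sketch. The helper name `load_foundry_settings` and the fallback order are assumptions for illustration, not framework behavior.

```python
import os

def load_foundry_settings(env=None):
    """Resolve Foundry settings from environment variables (hypothetical helper)."""
    env = os.environ if env is None else env
    # Either endpoint variable from the table above may be set.
    endpoint = env.get("FOUNDRY_PROJECT_ENDPOINT") or env.get("AZURE_AI_PROJECT_ENDPOINT")
    if not endpoint:
        raise RuntimeError("Set FOUNDRY_PROJECT_ENDPOINT (or AZURE_AI_PROJECT_ENDPOINT).")
    # FOUNDRY_MODEL is optional and defaults to gpt-4o per the table.
    model = env.get("FOUNDRY_MODEL", "gpt-4o")
    return {"endpoint": endpoint, "model": model}
```

A .env file at the project root would typically be loaded into the environment before this resolution runs.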

Core Concepts

Agent Architecture

graph TD
    A[User Input] --> B[Agent]
    B --> C[AI Client]
    C --> D[Azure AI Foundry / OpenAI / Claude]
    B --> E[Tools]
    B --> F[Memory / Session]
    E --> G[Function Calls]
    F --> H[Context Preservation]
    G --> I[Action Execution]
    I --> B

Key Components

| Component | Python Package | .NET Namespace | Purpose |
|---|---|---|---|
| Agent | agent_framework | Microsoft.Agents.AI | Core agent implementation |
| Chat Client | agent_framework.integrations.azure_ai | Azure.AI.Projects | AI service connectivity |
| Tools | @tool decorator | AITool attribute | Function definitions |
| Sessions | AgentSession | IAgentSession | Multi-turn conversation state |
| Context | ContextProvider | IContextProvider | Dynamic context injection |

Sources: dotnet/src/Microsoft.Agents.AI/Skills/AgentSkill.cs:15-30
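As a rough illustration of what a @tool-style decorator captures, the self-contained sketch below derives a name, description, and parameter list from a function's signature. It is a toy approximation; the framework's actual decorator and schema format may differ.

```python
import inspect

def tool(fn):
    """Attach a minimal schema derived from the function signature (sketch only)."""
    sig = inspect.signature(fn)
    fn.tool_schema = {
        "name": fn.__name__,
        "description": (fn.__doc__ or "").strip(),
        "parameters": {
            name: (param.annotation.__name__
                   if param.annotation is not inspect.Parameter.empty else "any")
            for name, param in sig.parameters.items()
        },
    }
    return fn

@tool
def get_weather(location: str) -> str:
    """Get the weather for a location."""
    return f"The weather in {location} is sunny."
```

The schema is what the model sees when deciding whether and how to call the tool; the function body only runs when the call is actually invoked.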

Progressive Learning Samples (Python)

The framework provides a progressive set of samples in python/samples/01-get-started/:

| Sample | File | Learning Objective |
|---|---|---|
| 1 | 01_hello_agent.py | Create your first agent and run it (streaming and non-streaming) |
| 2 | 02_add_tools.py | Define a function tool with @tool and attach it to an agent |
| 3 | 03_multi_turn.py | Keep conversation history across turns with AgentSession |
| 4 | 04_memory.py | Add dynamic context with a custom ContextProvider |
| 5 | 05_functional_workflow_with_agents.py | Call agents inside a functional workflow |
| 6 | 06_functional_workflow_basics.py | Write a workflow as a plain async function |
| 7 | 07_first_graph_workflow.py | Chain executors into a graph workflow with edges |
| 8 | 08_host_your_agent.py | Host your agent in various environments |

Sources: python/samples/01-get-started/README.md:17-30

Authentication

The framework supports multiple authentication methods:

| Provider | Python Credential | .NET Credential |
|---|---|---|
| Azure AI Foundry | AzureCliCredential() | DefaultAzureCredential() |
| Azure Content Understanding | AzureCliCredential() | DefaultAzureCredential() |
| GitHub Copilot | API Key-based | API Key-based |

For Azure-based authentication, run az login in your terminal before executing samples:

az login

Sources: python/samples/02-agents/skills/code_defined_skill/README.md:15-17

Integration Packages

Python Integrations

| Package | Purpose | Install Command |
|---|---|---|
| agent-framework-core | Core framework with Azure OpenAI and OpenAI | Default |
| agent-framework-foundry | Azure AI Foundry integration | Default |
| agent-framework-claude | Claude Agent SDK integration | pip install agent-framework-claude --pre |
| agent-framework-github-copilot | GitHub Copilot integration | pip install agent-framework-github-copilot --pre |
| agent-framework-declarative | YAML-based agent specification | pip install agent-framework-declarative --pre |
| agent-framework-copilotstudio | Microsoft Copilot Studio | pip install agent-framework-copilotstudio --pre |

#### Claude Agent

The Claude agent enables integration with Claude Agent SDK, allowing interaction with Claude's agentic capabilities through the Agent Framework.

pip install agent-framework-claude --pre

Sources: python/packages/claude/README.md:1-10

#### GitHub Copilot Agent

The GitHub Copilot agent enables integration with GitHub Copilot for agentic capabilities:

pip install agent-framework-github-copilot --pre

Sources: python/packages/github_copilot/README.md:1-10

#### Declarative Agents

The declarative package provides support for building agents based on YAML specifications:

pip install agent-framework-declarative --pre

Sources: python/packages/declarative/README.md:1-10

.NET Integrations

| Package | Purpose |
|---|---|
| Microsoft.Agents.AI | Core AI library |
| Microsoft.Agents.AI.Hosting.OpenAI | OpenAI hosting |
| Microsoft.Agents.AI.GitHub.Copilot | GitHub Copilot agent |
| Microsoft.Agents.AI.AzureAI.Persistent | Azure AI persistent agents (deprecated) |
| Microsoft.Agents.AI.DurableTask | Durable Task integration for stateful workflows |
| Microsoft.Agents.AI.Hosting.AzureFunctions | Azure Functions hosting |
| Aspire.Hosting.AgentFramework.DevUI | Aspire-based DevUI hosting |

#### Creating a GitHub Copilot Agent (.NET)

public static AIAgent AsAIAgent(
    this CopilotClient client,
    bool ownsClient = false,
    string? id = null,
    string? name = null,
    string? description = null,
    IList<AITool>? tools = null,
    string? instructions = null)

Sources: dotnet/src/Microsoft.Agents.AI.GitHub.Copilot/CopilotClientExtensions.cs:15-25

Agent Skills

Agent Skills enable domain-specific capabilities with instructions, resources, and scripts. The framework follows the Agent Skills specification.

Skill Types

| Skill Type | Python | .NET |
|---|---|---|
| File-based | AgentFileSkill | AgentFileSkill |
| Code-defined | AgentInlineSkill | AgentInlineSkill |
| Declarative | YAML-based | N/A |

Skill Configuration Options (.NET)

public sealed class AgentSkillsProviderOptions
{
    /// <summary>
    /// Custom system prompt template containing {skills}, {resource_instructions}, {script_instructions}
    /// </summary>
    public string? SkillsInstructionPrompt { get; set; }

    /// <summary>
    /// Require script execution approval (default: false)
    /// </summary>
    public bool ScriptApproval { get; set; }

    /// <summary>
    /// Disable caching of tools and instructions (default: false)
    /// </summary>
    public bool DisableCaching { get; set; }
}

Sources: dotnet/src/Microsoft.Agents.AI/Skills/AgentSkillsProviderOptions.cs:14-35

Skill Content Structure

The skill content is structured as XML, containing:

<name>{skill_name}</name>
<description>{skill_description}</description>
<instructions>
{skill_instructions}
</instructions>
<resources>
{resource_definitions}
</resources>
<scripts>
{script_definitions}
</scripts>

Sources: dotnet/src/Microsoft.Agents.AI/Skills/Programmatic/AgentInlineSkillContentBuilder.cs:20-40
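The layout above can be produced with a small builder. The following sketch uses string.Template purely to illustrate how the placeholders are filled; it is not the framework's actual content builder, and real content would also need XML escaping.

```python
from string import Template

SKILL_TEMPLATE = Template(
    "<name>$name</name>\n"
    "<description>$description</description>\n"
    "<instructions>\n$instructions\n</instructions>\n"
    "<resources>\n$resources\n</resources>\n"
    "<scripts>\n$scripts\n</scripts>"
)

def build_skill_content(name, description, instructions, resources="", scripts=""):
    """Fill the skill content layout shown above (illustrative sketch only)."""
    # Note: values are inserted verbatim; production code must XML-escape them.
    return SKILL_TEMPLATE.substitute(
        name=name, description=description, instructions=instructions,
        resources=resources, scripts=scripts)
```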

Azure Content Understanding Integration

The framework supports Azure Content Understanding for document, image, audio, and video analysis:

| Sample | Description | Run Command |
|---|---|---|
| Document Q&A | Upload PDF, extract info with CU | uv run samples/01-get-started/01_document_qa.py |
| Multi-Turn Session | AgentSession persistence | uv run samples/01-get-started/02_multi_turn_session.py |
| Multi-Modal Chat | PDF + audio + video analysis | uv run samples/01-get-started/03_multimodal_chat.py |
| Invoice Processing | Structured field extraction | uv run samples/01-get-started/04_invoice_processing.py |

Required environment variables:

FOUNDRY_PROJECT_ENDPOINT=https://your-project.services.ai.azure.com
FOUNDRY_MODEL=gpt-4.1
AZURE_CONTENTUNDERSTANDING_ENDPOINT=https://your-cu-resource.cognitiveservices.azure.com/

Sources: python/packages/azure-contentunderstanding/samples/README.md:1-30

Durable Task Integration (.NET)

For stateful, long-running workflows, use the DurableTask integration:

dotnet add package Microsoft.Agents.AI.DurableTask

This package enables building stateful agents that can handle complex orchestration scenarios with checkpointing and replay capabilities.

Sources: dotnet/src/Microsoft.Agents.AI.DurableTask/README.md:1-15
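To give an intuition for checkpointing and replay, here is a toy, framework-free sketch: completed steps are replayed from a history list instead of being re-executed, which is the core idea behind durable orchestration. The function and data shapes are invented for illustration.

```python
def run_durable(steps, history):
    """Replay completed steps from `history`; execute and checkpoint only new ones."""
    results = []
    for i, step in enumerate(steps):
        if i < len(history):
            results.append(history[i])   # replayed from the checkpoint store
        else:
            out = step(results)          # first execution of this step
            history.append(out)          # checkpoint the result durably
            results.append(out)
    return results
```

On a restart after a crash, re-running the same orchestration with the saved history skips all previously completed work.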

Development Tools

DevUI Sample Application

DevUI is a sample application for getting started with the Agent Framework:

// Features displayed in settings modal
interface ServerInfo {
  version: string;
  runtime: string;
  uiMode: string;
  capabilities?: {
    instrumentation?: boolean;
    // ... other capabilities
  };
}

Sources: python/packages/devui/frontend/src/components/layout/settings-modal.tsx:5-20

The DevUI includes a Sample Gallery for browsing and downloading curated examples:

graph LR
    A[Sample Gallery] --> B[Beginner Examples]
    A --> C[Advanced Examples]
    B --> D[Download & Run Locally]
    C --> D

Sources: python/packages/devui/frontend/src/components/features/gallery/gallery-view.tsx:10-25

Workflow Orchestration

The framework supports multiple workflow patterns:

graph TD
    A[Functional Workflow] --> B[Plain Async Functions]
    A --> C[Agent Calls within Workflows]
    D[Graph Workflow] --> E[Chained Executors]
    D --> F[Edges between Nodes]
    E --> G[Complex Routing]

Functional Workflow Pattern

Write workflows as plain async functions:

from agent_framework import workflow

@workflow
async def my_workflow(agent, input_data):
    result = await agent.process(input_data)
    return result
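To sketch what such a decorator might do, here is a minimal stand-in that tags the function and adds a synchronous runner; the real @workflow decorator's behavior is not shown in the source and this is only an assumption.

```python
import asyncio
import functools

def workflow(fn):
    """Toy @workflow stand-in: marks an async function and adds a sync runner."""
    @functools.wraps(fn)
    async def wrapper(*args, **kwargs):
        return await fn(*args, **kwargs)
    wrapper.is_workflow = True
    # Convenience for scripts: run the async workflow to completion.
    wrapper.run_sync = lambda *a, **k: asyncio.run(wrapper(*a, **k))
    return wrapper
```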

Graph Workflow Pattern

Chain executors into a graph workflow with edges:

from agent_framework.graph import Graph, Node, Edge

graph = Graph()
graph.add_node(Node("start", executor_a))
graph.add_node(Node("middle", executor_b))
graph.add_node(Node("end", executor_c))

graph.add_edge(Edge("start", "middle"))
graph.add_edge(Edge("middle", "end"))

Sources: python/samples/01-get-started/README.md:25-30
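To make the edge-driven execution concrete, here is a minimal, self-contained graph runner for a linear chain. It uses plain dicts and callables rather than the Graph/Node/Edge API above, so treat it as a sketch of the control flow only.

```python
def run_graph(nodes, edges, start, payload):
    """Follow edges from `start`, piping each executor's output into the next node."""
    current = start
    while current is not None:
        payload = nodes[current](payload)   # run this node's executor
        current = edges.get(current)        # None when no outgoing edge: workflow ends
    return payload

# Three toy executors chained start -> middle -> end.
nodes = {"start": lambda s: s.upper(),
         "middle": lambda s: s + "!",
         "end": lambda s: f"[{s}]"}
edges = {"start": "middle", "middle": "end"}
```

Conditional routing would replace the static `edges` lookup with a function of the current payload.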

Next Steps

ResourcePurpose
Agent Skills SpecificationSkill definition standard
DocumentationFull framework docs
Azure Functions Samples.NET hosting examples
File-Based Skills SampleSkill implementation patterns
Mixed Skills SampleCombining multiple skill types

Sources: python/samples/02-agents/skills/code_defined_skill/README.md:25-30

Sources: [python/README.md:1-40](https://github.com/microsoft/agent-framework/blob/main/python/README.md)

System Architecture

Related topics: Getting Started with Microsoft Agent Framework, Agent System, Workflows and Orchestration


Overview

The Microsoft Agent Framework is a cross-platform, multi-language framework designed for building AI-powered agents with tool-calling capabilities, workflow orchestration, and extensible integrations. The architecture follows a unified conceptual model implemented in both Python (3.10+) and .NET, enabling developers to create agents that interact with various AI backends while maintaining consistent APIs and patterns across platforms.

The framework's primary purpose is to abstract the complexity of AI agent development, providing a declarative approach to defining agent behavior, tools, memory, and workflows. It supports integration with Azure AI Foundry, OpenAI, GitHub Copilot, Anthropic Claude, and Microsoft Copilot Studio.

Sources: docs/design/python-package-setup.md

High-Level Architecture

graph TD
    subgraph "Client Applications"
        A[Python Apps]
        B[.NET Apps]
    end
    
    subgraph "Agent Framework Core"
        C[Agent Abstractions]
        D[Workflow Engine]
        E[Skill System]
        F[Memory/Context Providers]
    end
    
    subgraph "AI Backend Integrations"
        G[Azure AI Foundry]
        H[OpenAI / Azure OpenAI]
        I[GitHub Copilot]
        J[Anthropic Claude]
        K[Copilot Studio]
    end
    
    A --> C
    B --> C
    C --> D
    C --> E
    C --> F
    C --> G
    C --> H
    C --> I
    C --> J
    C --> K

Core Architecture Components

Agent Abstraction Layer

The central abstraction in the framework is the AIAgent interface, which defines the contract for all agent implementations. This abstraction enables loose coupling between client code and specific AI backend implementations.

#### Python Implementation

In Python, the Agent class serves as the primary agent implementation, accepting a chat client and configuration options:

agent = Agent(
    client=FoundryChatClient(
        credential=AzureCliCredential(),
    ),
    name="MyAgent",
    instructions="You are a helpful assistant.",
    tools=[my_tool]
)

Sources: python/packages/core/agent_framework/__init__.py

#### .NET Implementation

In .NET, the ChatClientAgent class provides the core agent functionality with dependency injection support:

public sealed class ChatClientAgent : AIAgent
{
    public ChatClientAgent(
        IChatClient chatClient,
        string? instructions = null,
        string? name = null,
        string? description = null,
        IList<AITool>? tools = null,
        ILoggerFactory? loggerFactory = null,
        IServiceProvider? services = null);
}

The agent accepts tools that can be invoked during conversations, and all provided tools are invoked without user approval by default. Developers should require explicit approval for tools that have side effects or access sensitive data.

Sources: dotnet/src/Microsoft.Agents.AI/ChatClient/ChatClientAgent.cs

Chat Client Architecture

The framework uses a chat client abstraction pattern to separate agent logic from the underlying AI service implementation.

| Chat Client | Language | Description |
|---|---|---|
| FoundryChatClient | Python/.NET | Azure AI Foundry integration |
| OpenAIChatClient | Python | OpenAI and Azure OpenAI support |
| CopilotClient | .NET | GitHub Copilot integration |
| ClaudeClient | Python | Anthropic Claude integration |

#### Client Configuration Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| credential | AzureCliCredential / DefaultAzureCredential | Yes (Foundry) | Authentication credential |
| project_endpoint | string | Yes (Foundry) | Azure AI Foundry project endpoint |
| model | string | No | Model deployment name (defaults vary) |
| temperature | float | No | Sampling temperature (0.0-2.0) |
| top_p | float | No | Nucleus sampling parameter |
| response_format | object | No | Structured output format |

Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/AIAgent.cs

Package Architecture

Python Package Structure

The Python implementation uses a modular package structure allowing selective installation based on required integrations:

graph TD
    A[agent-framework] --> B[agent-framework-core]
    A --> C[agent-framework-foundry]
    A --> D[agent-framework-copilotstudio]
    
    B --> E[OpenAI Support]
    B --> F[Workflow Engine]
    B --> G[Skill System]
    
    C --> B
    D --> B

#### Package Descriptions

| Package | Description | Install Command |
|---|---|---|
| agent-framework | Full framework with all sub-packages | pip install agent-framework |
| agent-framework-core | Core agent, workflow, and OpenAI support | pip install agent-framework-core |
| agent-framework-foundry | Azure AI Foundry integration | pip install agent-framework-foundry |
| agent-framework-copilotstudio | Microsoft Copilot Studio (preview) | pip install agent-framework-copilotstudio --pre |
| agent-framework-claude | Anthropic Claude integration (preview) | pip install agent-framework-claude --pre |
| agent-framework-github-copilot | GitHub Copilot integration (preview) | pip install agent-framework-github-copilot --pre |

The core package includes Azure OpenAI and OpenAI support by default, along with workflows and orchestrations.

Sources: docs/decisions/0008-python-subpackages.md

.NET Package Structure

The .NET implementation uses a shared library pattern with dependency injection:

<PropertyGroup>
  <InjectSharedFoundryAgents>true</InjectSharedFoundryAgents>
</PropertyGroup>

Core namespaces include:

| Namespace | Purpose |
|---|---|
| Microsoft.Agents.AI | Core agent abstractions and implementations |
| Microsoft.Agents.AI.Abstractions | Interface definitions |
| Microsoft.Agents.AI.AzureAI | Azure AI Foundry integration |
| Microsoft.Agents.AI.GitHub.Copilot | GitHub Copilot integration |
| Microsoft.Agents.AI.Skills | Skill-based agent configuration |

Sources: dotnet/src/Shared/Foundry/Agents/README.md

Agent Execution Model

Agent Run Response Pattern

The framework standardizes agent responses through a consistent return type that wraps the final output along with any intermediate steps taken during execution.

sequenceDiagram
    participant Client
    participant Agent
    participant Tool
    participant AI_Backend
    
    Client->>Agent: run(input)
    Agent->>AI_Backend: send(messages)
    AI_Backend-->>Agent: response
    alt tool_call detected
        Agent->>Tool: invoke(arguments)
        Tool-->>Agent: result
        Agent->>AI_Backend: send(result)
        AI_Backend-->>Agent: response
    end
    Agent-->>Client: RunResponse(output, steps)
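The loop in the diagram can be sketched in a few lines. The backend, message shapes, and tool registry below are stand-ins invented for illustration, not framework types.

```python
def run_agent(backend, tools, user_input):
    """Minimal tool-call loop: keep calling the backend until it returns plain text."""
    messages = [{"role": "user", "content": user_input}]
    steps = []
    while True:
        reply = backend(messages)
        if "tool_call" not in reply:
            # Final answer: wrap output plus the intermediate steps taken.
            return {"output": reply["content"], "steps": steps}
        name, args = reply["tool_call"]
        result = tools[name](**args)          # invoke the requested tool
        steps.append((name, result))
        messages.append({"role": "tool", "content": result})

def fake_backend(messages):
    """Stand-in AI backend: first request a tool, then answer with its result."""
    if any(m["role"] == "tool" for m in messages):
        return {"content": f"It is {messages[-1]['content']}."}
    return {"tool_call": ("get_weather", {"location": "Paris"})}
```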

Multi-Turn Conversation Support

Agents maintain conversation history through AgentSession, enabling stateful multi-turn interactions:

session = AgentSession()
response = await agent.run("Hello", session=session)
print(response)

Sources: docs/decisions/0001-agent-run-response.md
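Conceptually, a session is an append-only message list shared across turns. The toy class below illustrates that idea only; it is not the framework's AgentSession.

```python
class ToySession:
    """Accumulates conversation history so each turn sees all prior messages."""
    def __init__(self):
        self.history = []

    def run(self, respond, user_input):
        # Each turn appends the user message, asks for a reply over the full
        # history, then records the reply so the next turn has context.
        self.history.append(("user", user_input))
        reply = respond(self.history)
        self.history.append(("assistant", reply))
        return reply
```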

Skill System Architecture

Skill Definition

Skills provide a declarative way to define agent capabilities with instructions and associated tools:

public class AgentInlineSkill
{
    public AgentInlineSkill(
        string name,
        string description,
        string instructions,
        string? license = null,
        string? compatibility = null,
        string? allowedTools = null,
        AdditionalPropertiesDictionary? metadata = null,
        JsonSerializerOptions? serializerOptions = null);
}

Skill Frontmatter Schema

| Field | Type | Required | Description |
|---|---|---|---|
| name | string | Yes | Skill name in kebab-case |
| description | string | Yes | Skill description for discovery |
| instructions | string | Yes | Skill instructions text |
| license | string | No | License name or reference |
| compatibility | string | No | Compatibility information (max 500 chars) |
| allowedTools | string | No | Space-delimited pre-approved tools |
| metadata | dictionary | No | Arbitrary key-value metadata |

Sources: dotnet/src/Microsoft.Agents.AI/Skills/Programmatic/AgentInlineSkill.cs
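A validator for this schema might look like the sketch below. The field names and limits come from the table above; the specific checks and error messages are assumptions.

```python
import re

def validate_skill_frontmatter(fm):
    """Check required fields, the kebab-case name rule, and the length limit."""
    errors = []
    for field in ("name", "description", "instructions"):
        if not fm.get(field):
            errors.append(f"missing required field: {field}")
    # Kebab-case: lowercase alphanumeric words joined by single hyphens.
    if fm.get("name") and not re.fullmatch(r"[a-z0-9]+(-[a-z0-9]+)*", fm["name"]):
        errors.append("name must be kebab-case")
    if len(fm.get("compatibility") or "") > 500:
        errors.append("compatibility exceeds 500 characters")
    return errors
```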

Workflow Orchestration

Workflow Types

The framework supports multiple workflow paradigms:

| Workflow Type | Description | Use Case |
|---|---|---|
| Functional Workflow | Async functions calling agents | Simple sequential operations |
| Graph Workflow | DAG-based executor chains | Complex conditional flows |
| Durable Workflow | Long-running with state persistence | Human-in-the-loop approval |

Graph Workflow Structure

graph LR
    A[Input] --> B[Agent 1]
    B --> C{Decision}
    C -->|Path A| D[Agent 2]
    C -->|Path B| E[Agent 3]
    D --> F[Output]
    E --> F

The graph workflow uses edges to connect executors, allowing conditional routing based on agent outputs.

Sources: python/samples/01-get-started/README.md

Memory and Context Architecture

Context Providers

Dynamic context injection is supported through custom ContextProvider implementations:

class MyContextProvider(ContextProvider):
    async def get_context(self, context_params) -> str:
        # Retrieve and format context relevant to the current request
        formatted_context = "User prefers concise answers."
        return formatted_context

Memory Scoping

| Scope Parameter | Description |
|---|---|
| application_id | Global scope across entire application |
| agent_id | Agent-specific memory isolation |
| user_id | User-specific memory partitioning |

Context providers can optionally enable vector search for semantic retrieval:

| Setting | Options | Description |
|---|---|---|
| vectorizer_choice | "openai", "hf" | Embedding model selection |
| vector_field_name | string | Redis field for vectors |
| overwrite_redis_index | boolean | Index recreation control |

Sources: python/samples/02-agents/context_providers/redis/README.md
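One way to read the scoping table: memory entries are partitioned by a composite key built from the scope parameters. The key format below is an illustration only, not the Redis provider's actual layout.

```python
def memory_key(application_id, agent_id=None, user_id=None):
    """Compose a partition key; omitting a scope widens the partition."""
    parts = [application_id]
    if agent_id:
        parts.append(f"agent:{agent_id}")
    if user_id:
        parts.append(f"user:{user_id}")
    return ":".join(parts)
```

With this scheme, queries scoped only to the application see all agents' memories, while adding agent_id and user_id progressively narrows retrieval.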

Hosting and Deployment

Local Hosting with DevUI

DevUI provides a local development server with OpenAI-compatible endpoints:

devui /path/to/agents/folder

API endpoints exposed:

| Endpoint | Method | Description |
|---|---|---|
| /v1/responses | POST | Agent invocation |
| /v1/entities | GET | List available entities |

Agent Entity Structure

Agents must export an agent or workflow in their __init__.py:

# my_agent/__init__.py
from agent_framework import Agent

agent = Agent(
    name="MyAgent",
    client=OpenAIChatClient(),
)

Foundry Deployment

Production deployment to Azure AI Foundry uses the same agent configuration with environment-based credential resolution.

Sources: python/samples/02-agents/devui/README.md

Agent Mode System

The framework supports configurable agent operating modes for interactive planning and autonomous execution:

public sealed class AgentMode
{
    public string Name { get; }
    public string Description { get; }
}

public class AgentModeProviderOptions
{
    public IReadOnlyList<AgentMode>? Modes { get; set; }
    public string? DefaultMode { get; set; }
}

| Mode | Purpose |
|---|---|
| "plan" | Interactive planning with human oversight |
| "execute" | Autonomous execution without intervention |

Sources: dotnet/src/Microsoft.Agents.AI/Harness/AgentMode/AgentModeProviderOptions.cs

Integration Patterns

GitHub Copilot Integration

Agents can wrap GitHub Copilot clients for unified interaction:

public static AIAgent AsAIAgent(
    this CopilotClient client,
    bool ownsClient = false,
    string? id = null,
    string? name = null,
    string? description = null,
    IList<AITool>? tools = null,
    string? instructions = null);

This extension method creates an AIAgent backed by the Copilot client with optional additional tools and instructions.

Sources: dotnet/src/Microsoft.Agents.AI.GitHub.Copilot/CopilotClientExtensions.cs

Data Flow Summary

graph TD
    subgraph "Input Processing"
        A[User Input] --> B[Session Manager]
        B --> C[Context Provider]
    end
    
    subgraph "Agent Processing"
        C --> D[Agent Executor]
        D --> E[AI Chat Client]
        E --> F{Tool Call?}
    end
    
    subgraph "Tool Execution"
        F -->|Yes| G[Tool Executor]
        G --> H[Result Formatter]
        H --> E
    end
    
    F -->|No| I[Response Formatter]
    
    subgraph "Output"
        I --> J[RunResponse]
        J --> K[Client Application]
    end

Environment Configuration

Required Environment Variables

| Variable | Description | Required For |
|---|---|---|
| FOUNDRY_PROJECT_ENDPOINT | Azure AI Foundry project URL | Foundry agents |
| FOUNDRY_MODEL | Model deployment name | Foundry agents |
| OPENAI_API_KEY | OpenAI API key | OpenAI clients, embeddings |
| AZURE_AI_PROJECT_ENDPOINT | .NET Foundry endpoint | .NET Foundry |
| AZURE_AI_MODEL_DEPLOYMENT_NAME | .NET model deployment | .NET Foundry |

Authentication Methods

| Method | Use Case | Command |
|---|---|---|
| AzureCliCredential | Interactive login | az login |
| DefaultAzureCredential | Automated environments | Managed identity |
| API Key | Direct authentication | Environment variable |

Sources: python/README.md

Sources: [docs/design/python-package-setup.md](https://github.com/microsoft/agent-framework/blob/main/docs/design/python-package-setup.md)

Agent System

Related topics: System Architecture, Tools and Skills, Workflows and Orchestration, AI Provider Integration


The Agent System is the core abstraction layer in Microsoft Agent Framework that enables the creation, configuration, and execution of AI agents. Agents are autonomous or semi-autonomous software entities that can interact with users, execute tools, maintain conversation state, and perform complex multi-step tasks using Large Language Models (LLMs) as their reasoning engine.

Architecture Overview

The Agent System follows a layered architecture that separates concerns between the agent abstraction, runtime context, tool invocation, and the underlying chat client implementations.

graph TD
    subgraph "Agent Abstraction Layer"
        AIAgent[AIAgent Interface]
        ChatClientAgent[ChatClientAgent]
        AgentRunContext[AgentRunContext]
    end
    
    subgraph "Tool Layer"
        AITool[AITool]
        ToolDefinition[ToolDefinition]
        ToolResources[ToolResources]
    end
    
    subgraph "Client Layer"
        IChatClient[IChatClient]
        CopilotClient[CopilotClient]
        ClaudeClient[ClaudeClient]
    end
    
    subgraph "Context Layer"
        AIContext[AIContext]
        AgentSession[AgentSession]
        ContextProvider[ContextProvider]
    end
    
    AIAgent --> AgentRunContext
    AIAgent --> AITool
    AIAgent --> IChatClient
    ChatClientAgent --> IChatClient
    AgentRunContext --> AIContext
    AgentSession --> AIContext

Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/AIAgent.cs

Core Components

AIAgent Interface

The AIAgent interface serves as the foundational abstraction for all agent implementations in the .NET SDK. It defines the contract that all concrete agent types must implement.

| Property | Type | Description |
|---|---|---|
| Id | string | Unique identifier for the agent instance |
| Name | string | Human-readable name for the agent |
| Description | string | Description of the agent's purpose and capabilities |
| Instructions | string | System instructions that guide agent behavior |
| Tools | IList<AITool> | Collection of tools available to the agent |

Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/AIAgent.cs

ChatClientAgent

ChatClientAgent is the primary concrete implementation of AIAgent that uses an IChatClient for LLM interactions. It provides comprehensive support for agent configuration, tool execution, and streaming responses.

#### Constructor Parameters

| Parameter | Type | Required | Description |
|---|---|---|---|
| chatClient | IChatClient | Yes | The chat client used for LLM communication |
| instructions | string? | No | System instructions for agent behavior |
| name | string? | No | Agent identifier for logging |
| description | string? | No | Human-readable agent description |
| tools | IEnumerable<AITool>? | No | Tools the agent can invoke |
| loggerFactory | ILoggerFactory? | No | Factory for creating loggers |
| services | IServiceProvider? | No | Service provider for dependency resolution |
| cancellationToken | CancellationToken | No | Cancellation token for async operations |

Sources: dotnet/src/Microsoft.Agents.AI/ChatClient/ChatClientAgent.cs

AgentRunContext

The AgentRunContext provides runtime context for agent execution, including conversation history, tool configurations, and execution options.

| Property | Type | Description |
|---|---|---|
| SessionId | string | Unique identifier for the current session |
| ConversationHistory | IList<ChatMessage> | Messages exchanged in the conversation |
| Options | ChatOptions | Configuration for the current run |

Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/AgentRunContext.cs

Agent Configuration

Creating a Basic Agent

Agents can be configured with various levels of complexity depending on the use case.

# Python: Basic agent creation
# python/samples/01-get-started/01_hello_agent.py

from agent_framework import Agent

# Simple agent with instructions
agent = Agent(
    model="gpt-4o",
    instructions="You are a helpful assistant."
)

// C#: Basic agent creation
// dotnet/samples/01-get-started/01_hello_agent/Program.cs

using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Hosting.OpenAI;

// Create agent with instructions
var agent = new ChatClientAgent(
    chatClient: chatClient,
    instructions: "You are a helpful assistant that answers questions accurately."
);

Sources: python/samples/01-get-started/01_hello_agent.py Sources: dotnet/samples/01-get-started/01_hello_agent/Program.cs

Agent with Tools

Tools extend agent capabilities by allowing them to perform actions beyond text generation.

# Python: Agent with function tool
# python/samples/01-get-started/02_add_tools.py

from agent_framework import Agent, tool

@tool
def get_weather(location: str) -> str:
    """Get the weather for a specific location."""
    # Tool implementation
    return f"The weather in {location} is sunny."

agent = Agent(model="gpt-4o", tools=[get_weather])

// C#: Agent with tools
// dotnet/src/Microsoft.Agents.AI/ChatClient/ChatClientAgent.cs

// Tools augment any tools provided via ChatOptions.Tools when the agent is run
var agent = new ChatClientAgent(
    chatClient: chatClient,
    instructions: "You are a helpful assistant.",
    tools: new List<AITool> { customTool }
);

Sources: dotnet/src/Microsoft.Agents.AI/ChatClient/ChatClientAgent.cs

Tool Security Considerations

By default, all provided tools are invoked without user approval. The AI selects which functions to call and chooses the arguments — these arguments should be treated as untrusted input.

Security Warning: Developers should require explicit approval for tools that have side effects, access sensitive data, or perform irreversible operations.

Sources: dotnet/src/Microsoft.Agents.AI/ChatClient/ChatClientAgent.cs

Running Agents

Non-Streaming Execution

# Python: Non-streaming execution
# python/samples/01-get-started/01_hello_agent.py

result = await agent.run("What is the capital of France?")
print(result)

Streaming Execution

# Python: Streaming execution
# python/samples/01-get-started/01_hello_agent.py

async for chunk in agent.run_streaming("Tell me a story"):
    print(chunk, end="", flush=True)

Multi-Turn Conversations

# Python: Multi-turn with AgentSession
# python/samples/01-get-started/03_multi_turn.py

from agent_framework import AgentSession

session = AgentSession()

# First turn
response1 = await session.run(agent, "Hi, my name is Alice")
print(response1)

# Second turn - maintains context
response2 = await session.run(agent, "What is my name?")
print(response2)  # "Your name is Alice"

Sources: python/samples/01-get-started/03_multi_turn.py

Agent Execution Flow

sequenceDiagram
    participant User
    participant Agent as AIAgent/ChatClientAgent
    participant Context as AgentRunContext
    participant Tools as Tool System
    participant LLM as IChatClient
    
    User->>Agent: Run(userMessage, options)
    Agent->>Context: Create execution context
    Context->>LLM: Send chat request
    
    alt Tool Invocation Required
        LLM-->>Context: FunctionCall(tool_name, args)
        Context->>Tools: InvokeTool(tool_name, args)
        Tools-->>Context: ToolResult
        Context->>LLM: Continue with result
    end
    
    LLM-->>Agent: Final response
    Agent-->>User: Return result
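The loop in the diagram can be sketched as follows. This is a hypothetical, framework-agnostic sketch: `chat_client`, the message dictionaries, and the `tools` mapping are stand-ins for illustration, not the framework's actual API.

```python
# Illustrative sketch of the execution flow above: the model reply may contain
# a function call, which is invoked and its result fed back to the model until
# a final text answer is produced.
def run_agent(chat_client, tools, user_message):
    messages = [{"role": "user", "content": user_message}]
    while True:
        reply = chat_client(messages)            # stand-in for the LLM call
        call = reply.get("function_call")
        if call is None:
            return reply["content"]              # final response to the user
        name, args = call                        # FunctionCall(tool_name, args)
        result = tools[name](**args)             # InvokeTool(tool_name, args)
        messages.append({"role": "tool", "name": name, "content": result})
```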

Context Management

ContextProvider

Custom context providers allow agents to access dynamic context during execution.

# Python: Custom context provider
# python/samples/01-get-started/04_memory.py

from agent_framework import ContextProvider

class MemoryProvider(ContextProvider):
    def __init__(self):
        self.memories = []
    
    async def get_context(self) -> str:
        if self.memories:
            return "User preferences: " + ", ".join(self.memories)
        return ""
    
    async def update_context(self, interaction: dict):
        if "preference" in interaction:
            self.memories.append(interaction["preference"])

agent = Agent(model="gpt-4o")
agent.context_providers.add(MemoryProvider())

Sources: python/samples/01-get-started/04_memory.py

AgentSession

AgentSession maintains conversation state across multiple turns, enabling persistent interactions.

Method | Description
run(agent, message) | Execute a single turn with the agent
clear() | Clear conversation history
get_history() | Retrieve conversation history

Sources: python/samples/01-get-started/03_multi_turn.py
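The session surface in the table can be mimicked by a minimal in-memory class. The sketch below is illustrative only; it shares method names with the table but is not the framework's `AgentSession` implementation.

```python
class InMemorySession:
    """Minimal stand-in mirroring the run/clear/get_history surface above."""

    def __init__(self):
        self._history = []

    async def run(self, agent, message):
        # Record the user turn, delegate to the agent, record the reply.
        self._history.append(("user", message))
        reply = await agent.run(message)
        self._history.append(("assistant", reply))
        return reply

    def clear(self):
        self._history.clear()

    def get_history(self):
        return list(self._history)
```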

Dependency Injection

The .NET implementation supports dependency injection for resolving services required by AI functions.

// C#: Service provider integration
// dotnet/src/Microsoft.Agents.AI/ChatClient/ChatClientAgent.cs

// The services parameter is particularly important when using custom tools 
// that require dependency injection
var services = new ServiceCollection();
services.AddSingleton<IMyService, MyServiceImplementation>();

var agent = new ChatClientAgent(
    chatClient: chatClient,
    services: services.BuildServiceProvider()
);

Sources: dotnet/src/Microsoft.Agents.AI/ChatClient/ChatClientAgent.cs

Agent Integrations

Copilot Integration

// C#: GitHub Copilot integration
// dotnet/src/Microsoft.Agents.AI.GitHub.Copilot/CopilotClientExtensions.cs

var agent = copilotClient.AsAIAgent(
    name: "CopilotAgent",
    description: "GitHub Copilot powered agent",
    tools: new List<AITool> { customTool }
);

Sources: dotnet/src/Microsoft.Agents.AI.GitHub.Copilot/CopilotClientExtensions.cs

Claude Integration

The Claude agent enables integration with Claude Agent SDK for accessing Claude's agentic capabilities.

Sources: python/packages/claude/README.md

Azure AI Foundry Integration

// C#: Azure AI Foundry Persistent Agents
// dotnet/src/Microsoft.Agents.AI.AzureAI.Persistent/PersistentAgentsClientExtensions.cs

[Obsolete("Please use the latest Foundry Agents service via the Microsoft.Agents.AI.AzureAI package.")]
public static async Task<ChatClientAgent> CreateAIAgentAsync(
    this PersistentAgentsClient persistentAgentsClient,
    string model,
    string? name = null,
    string? description = null,
    string? instructions = null,
    IEnumerable<ToolDefinition>? tools = null,
    ToolResources? toolResources = null,
    double? temperature = null,
    double? topP = null,
    ResponseFormat? responseFormat = null,
    IDictionary<string, string>? metadata = null,
    IChatClientFactory? clientFactory = null,
    IServiceProvider? services = null,
    CancellationToken cancellationToken = default)

Sources: dotnet/src/Microsoft.Agents.AI.AzureAI.Persistent/PersistentAgentsClientExtensions.cs

Best Practices

1. Clear Instructions

Provide specific, detailed instructions that define the agent's role, behavior, and constraints.

# Good: Specific instructions
agent = Agent(
    model="gpt-4o",
    instructions="""
    You are a technical documentation assistant.
    - Always use code blocks for code examples
    - Include practical examples
    - Explain technical terms on first use
    """
)

# Avoid: Vague instructions
agent = Agent(
    model="gpt-4o",
    instructions="Be helpful."
)

2. Tool Security

Implement approval mechanisms for sensitive tools:

// Review tools before allowing execution
public class SecureToolExecutor
{
    public async Task<ToolResult> ExecuteAsync(AITool tool, object args)
    {
        // Require approval for destructive or sensitive operations
        if (tool.HasSideEffects)
        {
            var approved = await RequestApprovalAsync(tool, args);
            if (!approved) throw new OperationCanceledException();
        }
        return await tool.InvokeAsync(args);
    }
}

3. Proper Resource Cleanup

Always dispose of agents and clients properly:

# Python: Async context manager usage
async with Agent(model="gpt-4o") as agent:
    result = await agent.run("Hello")
# Agent is automatically cleaned up

# Or explicit cleanup
agent = Agent(model="gpt-4o")
try:
    result = await agent.run("Hello")
finally:
    await agent.close()

See Also

Sources: [dotnet/src/Microsoft.Agents.AI.Abstractions/AIAgent.cs](https://github.com/microsoft/agent-framework/blob/main/dotnet/src/Microsoft.Agents.AI.Abstractions/AIAgent.cs)

Tools and Skills

Related topics: Agent System, Middleware System


Overview

Tools and Skills are core abstractions in the Microsoft Agent Framework that extend an agent's capabilities beyond its base instruction set. Tools enable functional operations (like calling APIs or executing code), while Skills provide domain-specific knowledge, structured instructions, resources, and scripts that guide agent behavior in specialized areas.

Tools are function-based capabilities that agents can invoke to perform specific tasks such as calculations, data retrieval, or external API calls. Sources: python/packages/core/agent_framework/_tools.py:1-50

Skills are containers of domain-specific knowledge that include instructions, reference documents (resources), and executable scripts. They enable agents to handle specialized tasks by providing contextual guidance and tooling. Sources: dotnet/src/Microsoft.Agents.AI/Skills/AgentSkill.cs:1-30

graph TD
    subgraph AgentFramework
        A[Agent] --> T[Tools]
        A --> S[Skills]
        T --> TF[Function Tools]
        T --> TT[Tool Definitions]
        S --> SF[File-Based Skills]
        S --> SC[Code-Defined Skills]
        S --> SB[Class-Based Skills]
        SF --> Instructions
        SF --> Resources
        SF --> Scripts
        SC --> Instructions
        SC --> Resources
        SC --> Scripts
    end

Core Concepts

Tools

Tools in the Agent Framework are the primary mechanism for enabling agents to perform actions. A tool is essentially a callable function that the agent can invoke during its execution. Sources: python/packages/core/agent_framework/_tools.py:1-80

Tool Type | Description | Use Case
Function Tool | Decorated Python function | Custom operations in Python agents
Tool Definition | Declarative tool specification | Cross-platform tool definition
Managed Tool | Pre-built tool from providers | Anthropic skills, Azure AI services

Skills

Skills provide specialized knowledge and capabilities to agents. Each skill contains:

graph LR
    subgraph SkillStructure
        I[Instructions] --> C[Content]
        R[Resources] --> C
        S[Scripts] --> C
        F[Frontmatter] --> C
    end
    
    C --> A[Agent Skill]
    A --> P[Agent Skills Provider]

Skill Types

File-Based Skills

File-based skills are defined through a SKILL.md file containing YAML frontmatter and Markdown body. The frontmatter declares skill metadata, while the body contains instructions and references to resources and scripts. Sources: dotnet/src/Microsoft.Agents.AI/Skills/File/AgentFileSkill.cs:1-50

SKILL.md Structure:
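As an illustration, a minimal SKILL.md might look like the following. This sketch is hypothetical: the frontmatter field names and file references are illustrative, not the exact skill schema.

```markdown
---
name: sql-reviewer
description: Reviews SQL queries for correctness and performance.
---

# SQL Reviewer

When asked to review SQL:

- Check joins for missing conditions and filters on indexed columns.
- Consult the formatting rules in resources/style.md.
- Use scripts/explain.sh to inspect the query plan before recommending changes.
```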

Source: https://github.com/microsoft/agent-framework / Human Manual

Workflows and Orchestration

Related topics: Agent System, Hosting and Deployment Patterns, Observability and Telemetry


Overview

The Agent Framework provides a comprehensive workflow and orchestration system that enables developers to compose multiple agents into structured execution patterns. Workflows serve as the architectural backbone for multi-agent coordination, allowing agents to be chained, parallelized, or conditionally executed based on runtime state.

Workflows in the Agent Framework are built using an executor-based architecture where each component (agents, functions, workflows) implements a common executor interface. This design enables flexible composition through a builder pattern, supporting both imperative (code-based) and declarative (YAML-based) workflow definitions.

Sources: dotnet/src/Microsoft.Agents.AI.Workflows/WorkflowBuilder.cs

Architecture

Core Concepts

graph TD
    A[Input] --> B[Executor]
    B --> C[Executor]
    C --> D[Executor]
    B --> E[Executor]
    D --> F[Aggregator]
    E --> F
    F --> G[Output]
    
    H[Workflow] --> B
    H --> C
    H --> D
    H --> E
    H --> F
    
    I[Builder] -->|Builds| H

The orchestration system is built on three fundamental abstractions:

Concept | Description
Executor | A callable unit that processes inputs and produces outputs
Workflow | A composed structure of executors connected by edges
Builder | Fluent API for constructing workflows programmatically

Sources: dotnet/src/Microsoft.Agents.AI.Workflows/Workflow.cs

Executor Types

Executors form the atomic units of workflow execution:

Executor Type | Purpose
AIAgent | Encapsulates an AI agent that processes text and returns responses
FunctionExecutor | Executes synchronous or asynchronous functions
WorkflowExecutor | Wraps an entire sub-workflow for nested orchestration
OutputMessagesExecutor | Terminal executor that captures final output

Sources: dotnet/src/Microsoft.Agents.AI.Workflows/WorkflowBuilder.cs

Workflow Composition Patterns

Sequential Workflow

Agents or functions execute in a linear chain, where each component receives the output of the previous one.

graph LR
    A[Input] --> B[Agent 1]
    B --> C[Agent 2]
    C --> D[Agent 3]
    D --> E[Output]

Example: Translation Chain

Input text (English)
    │
    ▼
┌──────────────┐     ┌───────────────┐     ┌───────────────┐
│ French Agent │ ──▶ │ Spanish Agent │ ──▶ │ English Agent │
│ (translate)  │     │ (translate)   │     │ (translate)   │
└──────────────┘     └───────────────┘     └───────────────┘
                                              │
                                              ▼
                                        Final output

Sources: dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-Workflow-Simple/README.md
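The chain above reduces to a simple fold over the stages. In this hedged sketch each "agent" is modeled as a plain callable rather than a framework class:

```python
def run_sequential(agents, text):
    # Each stage receives the previous stage's output, per the chain above.
    for agent in agents:
        text = agent(text)
    return text
```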

Concurrent Workflow

Multiple agents operate on the same input simultaneously, with outputs aggregated into a collection.

graph TD
    A[Input] --> B[Agent 1]
    A --> C[Agent 2]
    A --> D[Agent 3]
    B --> E[Aggregator]
    C --> E
    D --> E
    E --> F[Output Collection]
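The fan-out/aggregate shape in the diagram maps naturally onto `asyncio.gather`. In this illustrative sketch the "agents" are plain async callables, not framework objects:

```python
import asyncio

async def run_concurrent(agents, message):
    # All agents receive the same input; results are aggregated in order.
    results = await asyncio.gather(*(agent(message) for agent in agents))
    return list(results)  # the aggregated output collection
```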

Conditional Workflow

Execution branches based on runtime conditions or agent responses.

graph TD
    A[Input] --> B[Router Agent]
    B -->|Condition A| C[Agent A]
    B -->|Condition B| D[Agent B]
    B -->|Default| E[Default Agent]
    C --> F[Output]
    D --> F
    E --> F

Declarative Workflows

The Agent Framework supports defining workflows using YAML, enabling configuration-driven orchestration without code changes.

Workflow Structure

name: my-workflow
description: A declarative workflow example

actions:
  - kind: SetValue
    path: turn.greeting
    value: Hello, World!

  - kind: SendActivity
    activity:
      text: =turn.greeting

Sources: python/samples/03-workflows/declarative/README.md

Action Types

#### Variable Actions

Action | Purpose
SetValue | Set a variable in state
SetVariable | Set a variable (.NET-style naming)
AppendValue | Append to a list
ResetVariable | Clear a variable

#### Control Flow

Action | Purpose
If | Conditional branching
Switch | Multi-way branching
Foreach | Iterate over collections
RepeatUntil | Loop until condition
GotoAction | Jump to labeled action

#### Output

Action | Purpose
SendActivity | Send text/attachments to user

Sources: python/samples/03-workflows/declarative/README.md
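As a hedged illustration, several of the actions above could compose in the same style as the earlier example. The exact nesting schema and condition syntax for control-flow actions may differ from this sketch:

```yaml
actions:
  - kind: SetValue
    path: turn.results
    value: []

  - kind: Foreach
    items: =turn.inputs
    actions:
      - kind: AppendValue
        path: turn.results
        value: =item

  - kind: SendActivity
    activity:
      text: =turn.results
```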

Durable Orchestration

For long-running workflows that may span hours or days, the Agent Framework provides durable orchestration using the Durable Task Framework.

Architecture

graph TD
    A[Client] -->|Schedule| B[Orchestrator]
    B -->|Calls| C[Activity]
    B -->|Calls| D[Agent]
    C -->|Result| B
    D -->|Result| B
    B -->|Persisted| E[State Store]

Key Features

Feature | Description
Long-running execution | Workflows persist across process restarts
Human-in-the-loop | Workflows can pause and await human approval
Event-driven | Activities can send notifications and wait for responses
State management | Built-in state persistence with checkpointing

Sources: dotnet/src/Microsoft.Agents.AI.DurableTask/ServiceCollectionExtensions.cs

Human-in-the-Loop Pattern

Durable workflows support pausing for human approval:

  1. Initial Generation: Agent creates content based on input
  2. Review Loop: Up to a configurable maximum number of attempts
     • Activity notifies user for approval
     • Orchestration waits for approval event OR timeout
  3. Resolution:
     • Approved: Content published, workflow completes
     • Rejected: Feedback incorporated, regeneration triggered
     • Timeout: Error raised

Sources: python/samples/04-hosting/durabletask/07_single_agent_orchestration_hitl/README.md

Durable Workflow Context

The DurableWorkflowContext manages workflow state and events:

Property | Type | Description
SentMessages | List<TypedPayload> | Messages sent during activity execution
OutboundEvents | List<WorkflowEvent> | Events added during execution
StateUpdates | Dictionary<string, string?> | State modifications
ClearedScopes | HashSet<string> | Scopes cleared during execution
HaltRequested | bool | Whether executor requested workflow halt

Sources: dotnet/src/Microsoft.Agents.AI.DurableTask/Workflows/DurableWorkflowContext.cs

Workflow Builder API

.NET Implementation

The WorkflowBuilder class provides a fluent API for composing workflows:

// Sequential composition
Workflow sequential = WorkflowBuilder.BuildSequential(
    "MyWorkflow",
    agent1, agent2, agent3);

// Concurrent composition
Workflow concurrent = WorkflowBuilder.BuildConcurrent(
    "ConcurrentWorkflow",
    agent1, agent2, agent3);

Builder Configuration Options:

Option | Description
ReassignOtherAgentsAsUsers | When true, other agents in scope become user participants
ForwardIncomingMessages | When true, incoming messages propagate through the chain

Sources: dotnet/src/Microsoft.Agents.AI.Workflows/WorkflowBuilder.cs

Python Implementation

The Python workflow system provides similar builder patterns:

from agent_framework.workflows import WorkflowBuilder

workflow = WorkflowBuilder(
    start_executor=first_agent
).add_edge(
    from_node=first_agent,
    to_node=second_agent
).build()

MagenticBuilder for Multi-Agent Orchestration:

from agent_framework.orchestrations import MagenticBuilder

workflow = MagenticBuilder(
    participants=[researcher, writer, reviewer],
    manager_agent=manager_agent,
).build()

Sources: python/packages/orchestrations/README.md

Workflow State Management

State Persistence

Workflows maintain state throughout execution:

graph LR
    A[Checkpoint] --> B[State Dictionary]
    B --> C[Resume]
    D[Input] --> E[Executor]
    E --> F[Output]
    E -->|State Update| B

State Variables

Custom state variables are stored alongside system state:

Key | Purpose
_executor_state | Internal executor tracking (hidden from user state)
* (custom) | User-defined state variables

Sources: python/packages/devui/frontend/src/components/features/workflow/checkpoint-info-modal.tsx

Configuration

Service Registration

#### .NET

<PropertyGroup>
  <InjectSharedWorkflowsSettings>true</InjectSharedWorkflowsSettings>
  <InjectSharedWorkflowsExecution>true</InjectSharedWorkflowsExecution>
</PropertyGroup>

#### Durable Options Configuration

services.ConfigureDurableWorkflows(options =>
{
    options.Workflows.HubName = "MyAgentHub";
    options.Workflows.TaskOrchestration.Type = OrchestrationType.InProcess;
});

Sources: dotnet/src/Microsoft.Agents.AI.DurableTask/ServiceCollectionExtensions.cs

Python Environment Variables

export FOUNDRY_PROJECT_ENDPOINT="https://your-project-endpoint"
export FOUNDRY_MODEL="gpt-4o"   # optional, defaults to gpt-4o

Sources: python/samples/01-get-started/README.md

Sample Code Reference

Basic Sequential Workflow (.NET)

// Create agent executors
ExecutorBinding exec1 = agent1.BindAsExecutor(options);
ExecutorBinding exec2 = agent2.BindAsExecutor(options);

// Build sequential chain
WorkflowBuilder builder = new WorkflowBuilder(exec1);
builder.AddEdge(exec1, exec2);

// Add terminal output executor
OutputMessagesExecutor end = new();
builder = builder.AddEdge(exec2, end).WithOutputFrom(end);

Workflow workflow = builder.Build();

Sources: dotnet/src/Microsoft.Agents.AI.Workflows/WorkflowBuilder.cs

Durable Workflow with HITL (Python)

# 1. Initial generation
content = yield writer_agent.generate(topic)

# 2. Notify for review
yield send_notification(content)

# 3. Wait for approval/rejection
approval_event = yield wait_for_event("ApprovalEvent")
if approval_event.approved:
    yield publish_content(content)
else:
    # Regenerate with feedback
    content = yield writer_agent.generate(topic, feedback=approval_event.feedback)

Sources: python/samples/04-hosting/durabletask/07_single_agent_orchestration_hitl/README.md

Monitoring and Debugging

Durable Task Dashboard

View orchestration state at http://localhost:8082:

View | Information Available
Orchestrations | Instance status, runtime state, input/output, execution history
Agents | Conversation history, agent state

OpenTelemetry Traces

The framework emits OpenTelemetry traces for workflow operations:

devui ./agents --instrumentation

Sources: python/packages/devui/README.md

See Also

Sources: [dotnet/src/Microsoft.Agents.AI.Workflows/WorkflowBuilder.cs](https://github.com/microsoft/agent-framework/blob/main/dotnet/src/Microsoft.Agents.AI.Workflows/WorkflowBuilder.cs)

Middleware System

Related topics: Agent System, Tools and Skills


Overview

The Middleware System in the Microsoft Agent Framework provides a powerful extensibility mechanism that allows developers to intercept, modify, and control the flow of interactions between agents, tools, and AI models. Middleware components act as interceptors in the request-response pipeline, enabling cross-cutting concerns such as logging, authentication, tool approval, and request filtering.

According to the architecture decision record, the middleware system was designed to solve the problem of filtering agent requests and responses without tightly coupling such logic to the core agent implementation. Sources: docs/decisions/0007-agent-filtering-middleware.md

Architecture

Core Concepts

The middleware system follows a pipeline-based architecture where requests flow through a chain of middleware components before reaching the core agent logic, and responses flow back through the same chain in reverse order.

graph TD
    A[User Request] --> B[Middleware 1]
    B --> C[Middleware 2]
    C --> D[Middleware N]
    D --> E[Core Agent Logic]
    E --> F[Response from Agent]
    F --> D
    D --> C
    C --> B
    B --> G[User Response]
    
    H[Tool Calls] <-->|Intercepted| D
    I[AI Model] <-->|Filtered| D

Middleware Types

Type | Purpose | Python Implementation | .NET Implementation
Function-based | Simple callable middleware | @middleware decorator | Delegate-based
Class-based | State-aware middleware with full lifecycle control | Middleware abstract class | IAgentMiddleware interface
Tool Approval | Approves or rejects tool executions | Custom handler | ToolApprovalAgent

Sources: python/packages/core/agent_framework/_middleware.py | dotnet/src/Microsoft.Agents.AI/Harness/ToolApproval/ToolApprovalAgent.cs

Python Middleware Implementation

Function-Based Middleware

The simplest way to define middleware in Python is using the @middleware decorator. This creates a middleware that wraps an agent and intercepts all calls.

from agent_framework import Agent, middleware

@middleware
async def my_logging_middleware(agent, tool_call, context, call_next):
    print(f"Tool call: {tool_call.name}")
    result = await call_next(agent, tool_call, context)
    print(f"Result: {result}")
    return result

# Apply middleware to agent
agent = Agent(...)
wrapped_agent = my_logging_middleware(agent)

Sources: python/samples/02-agents/middleware/function_based_middleware.py

Middleware Base Class

For more complex scenarios, you can extend the Middleware abstract class:

from agent_framework import Middleware, Agent

class ToolApprovalMiddleware(Middleware):
    def __init__(self):
        self.pending_approvals = []
    
    async def on_tool_call(
        self, 
        agent: Agent, 
        tool_call: ToolCall, 
        context: Context
    ) -> Awaitable[Result]:
        # Custom logic to approve or reject
        if self._requires_approval(tool_call):
            return Result(success=False, error="Approval required")
        return await self.next(agent, tool_call, context)

Sources: python/packages/core/agent_framework/_middleware.py

Middleware Pipeline Execution

The middleware system processes requests through a pipeline where each middleware can:

  1. Pre-process: Act on the request before passing to the next middleware
  2. Pass through: Forward the request to the next component in the chain
  3. Post-process: Act on the response as it flows back up the chain
  4. Short-circuit: Return a response without calling subsequent middleware

sequenceDiagram
    participant Client
    participant MW1 as Middleware 1
    participant MW2 as Middleware 2
    participant Agent as Core Agent
    
    Client->>MW1: request
    MW1->>MW2: pass to next
    MW2->>Agent: forward request
    Agent-->>MW2: response
    MW2-->>MW1: post-process
    MW1-->>Client: final response
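The chaining in the diagram can be sketched framework-agnostically: each middleware receives the request plus a `call_next` continuation, so it can pre-process, post-process, or short-circuit. Names here are illustrative, not the framework's middleware API:

```python
def build_pipeline(middlewares, handler):
    # Compose right-to-left so middlewares[0] is the outermost interceptor,
    # matching the request/response flow in the diagram.
    pipeline = handler
    for mw in reversed(middlewares):
        def wrap(m, nxt):
            return lambda request: m(request, nxt)
        pipeline = wrap(mw, pipeline)
    return pipeline
```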

.NET Middleware Implementation

ToolApprovalAgent

The .NET implementation provides a ToolApprovalAgent that wraps an agent and requires approval for tool executions. This is particularly useful for scenarios where human-in-the-loop approval is required for sensitive operations.

public class ToolApprovalAgent : Agent
{
    public ToolApprovalAgent(
        Agent inner,
        IToolApprover toolApprover,
        Func<ToolCall, bool>? shouldApprove = null);
    
    public override async Task<Result> OnToolCallAsync(
        ToolCall toolCall,
        Context context,
        CancellationToken cancellationToken);
}

Sources: dotnet/src/Microsoft.Agents.AI/Harness/ToolApproval/ToolApprovalAgent.cs

Middleware Registration

In .NET, middleware is typically registered through dependency injection and configured on the agent:

// Program.cs from the sample
var builder = Kernel.CreateBuilder();

// Register middleware
builder.Services.AddSingleton<IAgentMiddleware, LoggingMiddleware>();

var kernel = builder.Build();

// Configure agent with middleware
var agent = new ChatClientAgent(chatClient)
    .WithMiddleware<LoggingMiddleware>()
    .WithMiddleware<ToolApprovalMiddleware>();

Sources: dotnet/samples/02-agents/Agents/Agent_Step11_Middleware/Program.cs

Built-in .NET Middleware

Middleware | Description
LoggingMiddleware | Logs all requests, responses, and tool calls
ToolApprovalMiddleware | Requires approval before tool execution
RateLimitMiddleware | Enforces rate limiting on agent requests
AuthenticationMiddleware | Validates authentication tokens

Middleware API Reference

Python Middleware API

#### @middleware Decorator

Creates a simple function-based middleware.

@middleware
async def middleware_func(agent, tool_call, context, call_next):
    """Middleware function signature."""
    pass

Parameter | Type | Description
agent | Agent | The agent instance being wrapped
tool_call | ToolCall | The tool call being processed
context | Context | Execution context with state
call_next | Callable | Function to invoke the next middleware/agent

Sources: python/packages/core/agent_framework/_middleware.py

#### Middleware Base Class

Abstract class for stateful middleware:

class Middleware(ABC):
    @abstractmethod
    async def on_tool_call(
        self, 
        agent: Agent, 
        tool_call: ToolCall, 
        context: Context
    ) -> Result:
        """Called when a tool call is intercepted."""
        pass

Method | Description
on_tool_call | Intercepts and processes tool calls
on_request | Intercepts incoming requests
on_response | Intercepts outgoing responses
next() | Passes control to the next middleware

.NET Middleware API

#### IAgentMiddleware Interface

public interface IAgentMiddleware
{
    Task<Result> InvokeAsync(
        AgentContext context,
        MiddlewareDelegate next,
        CancellationToken cancellationToken);
}

Parameter | Type | Description
context | AgentContext | Contains request, response, and state
next | MiddlewareDelegate | Delegate to invoke the next middleware
cancellationToken | CancellationToken | Cancellation support

#### Agent Extension Methods

public static class AgentMiddlewareExtensions
{
    public static TAgent WithMiddleware<TMiddleware>(
        this TAgent agent,
        params object[] args) where TAgent : Agent;
    
    public static TAgent WithMiddleware(
        this TAgent agent,
        Type middlewareType,
        params object[] args) where TAgent : Agent;
}

Use Cases

1. Tool Approval Workflow

A common use case is requiring human approval before executing sensitive tools:

graph LR
    A[Agent] --> B{ToolApprovalMiddleware}
    B --> C{Is Sensitive?}
    C -->|Yes| D[Request Human Approval]
    D --> E{Approved?}
    E -->|Yes| F[Execute Tool]
    E -->|No| G[Reject & Return Error]
    C -->|No| F

Sources: dotnet/src/Microsoft.Agents.AI/Harness/ToolApproval/ToolApprovalAgent.cs

2. Request/Response Logging

Middleware can log all interactions for debugging and auditing:

@middleware
async def audit_logging_middleware(agent, tool_call, context, call_next):
    log_entry = {
        "timestamp": datetime.utcnow(),
        "tool_name": tool_call.name,
        "parameters": tool_call.arguments,
        "user": context.user_id
    }
    await audit_log(log_entry)
    return await call_next(agent, tool_call, context)

3. Request Filtering

Middleware can filter or modify requests before they reach the agent:

public class ContentFilterMiddleware : IAgentMiddleware
{
    public async Task<Result> InvokeAsync(
        AgentContext context,
        MiddlewareDelegate next,
        CancellationToken cancellationToken)
    {
        // Check for prohibited content
        if (ContainsProhibitedContent(context.Request.Text))
        {
            return new Result { Success = false, Error = "Content filtered" };
        }
        
        return await next(context, cancellationToken);
    }
}

Configuration

Python Configuration

agent = Agent(
    name="my_agent",
    instructions="You are a helpful assistant",
    middleware=[
        LoggingMiddleware(),
        ToolApprovalMiddleware(approver=human_approver),
        RateLimitMiddleware(max_calls_per_minute=60)
    ]
)

.NET Configuration

// Via dependency injection
builder.Services.AddTransient<IAgentMiddleware, LoggingMiddleware>();
builder.Services.AddSingleton<IToolApprover, HumanToolApprover>();

// Or inline during agent creation
var agent = new ChatClientAgent(chatClient)
    .WithMiddleware<LoggingMiddleware>()
    .WithMiddleware(sp.GetRequiredService<ToolApprovalMiddleware>());

Best Practices

  1. Keep middleware focused: Each middleware should handle a single concern (logging, authentication, etc.)
  2. Always call next or return: Ensure middleware either passes control to the next component or returns a response
  3. Handle exceptions: Wrap next calls in try-catch to prevent unhandled exceptions from breaking the pipeline
  4. Order matters: Register middleware in the correct order based on dependencies
  5. Avoid blocking operations: Use async/await patterns to prevent blocking the pipeline
  6. Document side effects: Clearly document any side effects middleware may have

Error Handling

Middleware should gracefully handle errors and either:

  • Recover and continue the pipeline
  • Short-circuit with an appropriate error response
  • Propagate the error with additional context

@middleware
async def error_handling_middleware(agent, tool_call, context, call_next):
    try:
        return await call_next(agent, tool_call, context)
    except ToolExecutionException as e:
        logger.error(f"Tool execution failed: {e}")
        return Result(
            success=False,
            error=f"Tool execution failed: {str(e)}",
            context={"original_error": e}
        )

Component | Relationship
Agent | Core component that middleware intercepts
Tools | Often the target of middleware interception
Context | State container passed through middleware pipeline
Skills | Can be combined with middleware for complex workflows

Summary

The Middleware System provides a flexible, extensible pipeline architecture for intercepting and modifying agent behavior. It supports both simple function-based middleware and complex class-based middleware with full lifecycle control. The system is available across both Python and .NET implementations, enabling consistent cross-platform extensibility patterns.

Key takeaways:

  • Middleware enables cross-cutting concerns without modifying core agent code
  • Both Python and .NET provide decorator/attribute-based middleware creation
  • Tool approval is a common built-in middleware pattern
  • Middleware can short-circuit, pass through, or modify requests and responses
  • Proper ordering and error handling are essential for reliable middleware pipelines

Sources: [python/packages/core/agent_framework/_middleware.py]() | [dotnet/src/Microsoft.Agents.AI/Harness/ToolApproval/ToolApprovalAgent.cs]()

AI Provider Integration

Related topics: Agent System, Getting Started with Microsoft Agent Framework

Overview

The AI Provider Integration layer in Microsoft Agent Framework enables agents to communicate with various Large Language Model (LLM) backends through a unified abstraction. This architecture allows developers to switch between different AI providers—such as OpenAI, Azure AI Foundry, Anthropic, and Ollama—without modifying agent logic. The provider system acts as the bridge between the agent's execution framework and the underlying AI models.

The framework supports both Python and .NET ecosystems, with provider implementations that expose chat completion clients, responses API clients, and specialized agent integrations. Each provider package implements common interfaces while leveraging provider-specific authentication, configuration, and API semantics.
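The value of the unified abstraction is that agent logic stays identical across backends. A minimal sketch of the pattern (the `ChatClient` protocol and the stub provider classes here are simplified stand-ins, not the framework's real types):

```python
from typing import Protocol

class ChatClient(Protocol):
    # Structural interface every concrete provider satisfies.
    def complete(self, prompt: str) -> str: ...

class OpenAIStubClient:
    def complete(self, prompt: str) -> str:
        return f"[openai] {prompt}"

class OllamaStubClient:
    def complete(self, prompt: str) -> str:
        return f"[ollama] {prompt}"

class Agent:
    # Agent logic depends only on the ChatClient protocol,
    # so swapping providers requires no changes here.
    def __init__(self, client: ChatClient):
        self._client = client

    def run(self, prompt: str) -> str:
        return self._client.complete(prompt)

print(Agent(OpenAIStubClient()).run("hello"))
print(Agent(OllamaStubClient()).run("hello"))
```

Only the client construction differs per provider; everything downstream of the `Agent` constructor is provider-agnostic.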

Architecture Overview

graph TD
    subgraph "Agent Layer"
        A[Agent Instance]
        S[Skills/Tools]
    end
    
    subgraph "Provider Abstraction"
        P[Provider Interface]
    end
    
    subgraph "Concrete Providers"
        O[OpenAI]
        F[Azure AI Foundry]
        An[Anthropic Claude]
        Ol[Ollama]
        G[GitHub Copilot]
    end
    
    subgraph "External Services"
        OS[OpenAI API]
        FS[Azure Foundry]
        AS[Anthropic API]
        LS[Local Ollama]
        GS[GitHub Copilot]
    end
    
    A --> P
    S --> P
    P --> O
    P --> F
    P --> An
    P --> Ol
    P --> G
    O --> OS
    F --> FS
    An --> AS
    Ol --> LS
    G --> GS

Provider Packages

Python Provider Packages

| Package | Purpose | Install Command |
|---|---|---|
| agent-framework-openai | OpenAI and Azure OpenAI integration | pip install agent-framework-openai |
| agent-framework-anthropic | Anthropic Claude model support | pip install agent-framework-anthropic |
| agent-framework-foundry | Azure AI Foundry integration | pip install agent-framework-foundry |
| agent-framework-claude | Claude-specific agentic capabilities | pip install agent-framework-claude --pre |
| agent-framework-ollama | Local Ollama model support | pip install agent-framework-ollama --pre |

Sources: python/samples/02-agents/providers/README.md

.NET Provider Assemblies

| Assembly | Namespace | Purpose |
|---|---|---|
| Microsoft.Agents.AI.OpenAI | Microsoft.Agents.AI.OpenAI | OpenAI Response API and Chat Completions |
| Microsoft.Agents.AI.Foundry | Microsoft.Agents.AI.Foundry | Azure AI Foundry agent and client integration |
| Microsoft.Agents.AI.GitHub.Copilot | Microsoft.Agents.AI.GitHub.Copilot | GitHub Copilot agent extension |

Azure AI Foundry Provider

Azure AI Foundry is the framework's primary production-grade provider for enterprise deployments. It integrates with Azure AI Foundry projects, enabling agents to leverage Foundry's model deployments, content safety, and telemetry.

Python Implementation

The Foundry provider package exports core classes for connecting to Azure AI Foundry projects:

# python/packages/foundry/agent_framework_foundry/__init__.py
# Core exports include:
# - FoundryChatCompletionClient
# - FoundryAgent
# - Configuration utilities

The provider requires environment configuration:

export FOUNDRY_PROJECT_ENDPOINT="https://<resource>.services.ai.azure.com/api/projects/<project>"
export FOUNDRY_MODEL="<deployment-name>"

Sources: python/samples/02-agents/providers/README.md

.NET Implementation

The .NET Foundry provider exposes two primary integration points:

#### FoundryAgent

The FoundryAgent class serves as the agent implementation backed by Azure AI Foundry:

// dotnet/src/Microsoft.Agents.AI.Foundry/FoundryAgent.cs
public class FoundryAgent
{
    // Provides agent creation and lifecycle management
    // Integrates with Azure AI Foundry service
}

#### AzureAIProjectChatClient

The AzureAIProjectChatClient wraps the Azure AI Foundry chat client with Agent Framework conventions:

// dotnet/src/Microsoft.Agents.AI.Foundry/AzureAIProjectChatClient.cs
public class AzureAIProjectChatClient
{
    // Manages project-scoped chat interactions
    // Handles authentication and connection to Foundry
}

Sources: dotnet/src/Microsoft.Agents.AI.Foundry/FoundryAgent.cs

Foundry Configuration Options

| Parameter | Description | Default |
|---|---|---|
| project_endpoint | Azure AI Foundry project URL | Required |
| model | Model deployment name | gpt-4o |
| api_version | API version for requests | Latest stable |
| credential | Azure authentication credential | DefaultAzureCredential |

OpenAI Provider

The OpenAI provider enables agents to connect directly to OpenAI's API or Azure OpenAI Service endpoints.

Python Integration

from agent_framework.openai import OpenAIChatClient, OpenAIChatCompletionClient

# Direct OpenAI usage
client = OpenAIChatClient(model="gpt-4")

# Using Responses API
client = OpenAIChatCompletionClient(model="gpt-4")

Sources: python/packages/openai/agent_framework_openai/__init__.py

.NET Integration

The .NET OpenAI provider uses the OpenAIResponseClientExtensions class to create agent instances:

// dotnet/src/Microsoft.Agents.AI.OpenAI/Extensions/OpenAIResponseClientExtensions.cs
public static class OpenAIResponseClientExtensions
{
    public static ChatClientAgent AsAIAgent(
        this ResponsesClient client,
        string? model = null,
        string? instructions = null,
        string? name = null,
        string? description = null,
        IList<AITool>? tools = null,
        Func<IChatClient, IChatClient>? clientFactory = null,
        ILoggerFactory? loggerFactory = null,
        IServiceProvider? services = null)
}

Sources: dotnet/src/Microsoft.Agents.AI.OpenAI/Extensions/OpenAIResponseClientExtensions.cs

Anthropic Provider

The Anthropic provider integrates Claude models into the Agent Framework, supporting both direct API access and provider-specific agentic capabilities.

Python Integration

from agent_framework_anthropic import ClaudeAgent

agent = ClaudeAgent(
    model="claude-sonnet-4-20250514",
    # Provider-specific configuration
)

Sources: python/packages/anthropic/agent_framework_anthropic/__init__.py

The agent-framework-claude package specifically enables Claude agentic capabilities through the Agent Framework:

pip install agent-framework-claude --pre

Sources: python/packages/claude/README.md

Ollama Provider

Ollama enables local LLM deployments, useful for development, testing, and privacy-sensitive scenarios.

Configuration

export OLLAMA_BASE_URL="http://localhost:11434"  # Default
export OLLAMA_MODEL="llama3.2"  # Model to use

Installation

pip install agent-framework-ollama --pre

Sources: python/packages/ollama/README.md

Samples demonstrating Ollama connector usage are available at:

python/samples/02-agents/providers/ollama/

GitHub Copilot Provider

The .NET implementation includes a GitHub Copilot integration through the CopilotClientExtensions:

// dotnet/src/Microsoft.Agents.AI.GitHub.Copilot/CopilotClientExtensions.cs
public static AIAgent AsAIAgent(
    this CopilotClient client,
    bool ownsClient = false,
    string? id = null,
    string? name = null,
    string? description = null,
    IList<AITool>? tools = null,
    string? instructions = null)

Sources: dotnet/src/Microsoft.Agents.AI.GitHub.Copilot/CopilotClientExtensions.cs

Provider Selection Workflow

graph LR
    A[Choose Provider] --> B{Have Azure Account?}
    B -->|Yes| C[Azure AI Foundry]
    B -->|No| D[Direct OpenAI]
    C --> E[Configure Endpoint]
    D --> F[Set API Key]
    E --> G[Create ChatClient]
    F --> G
    G --> H[Initialize Agent]
    H --> I[Attach Skills/Tools]
    I --> J[Execute Agent]

Agent ID Model

Providers use a standardized AgentId model for identification:

// dotnet/src/Microsoft.Agents.AI.Hosting.OpenAI/Responses/Models/AgentId.cs
internal sealed class AgentId
{
    [JsonPropertyName("type")]
    public AgentIdType Type { get; init; }
    
    [JsonPropertyName("name")]
    public string Name { get; init; }
    
    [JsonPropertyName("version")]
    public string Version { get; init; }
}

Sources: dotnet/src/Microsoft.Agents.AI.Hosting.OpenAI/Responses/Models/AgentId.cs
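A Python equivalent of this serialization contract might look like the sketch below. It is illustrative only — the Python packages may not expose such a type, and the `type` value shown is a hypothetical example of the `AgentIdType` enum:

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AgentId:  # sketch mirroring the .NET model above
    type: str
    name: str
    version: str

    def to_json(self) -> str:
        # Property names mirror the JsonPropertyName attributes
        # in the .NET definition.
        return json.dumps(asdict(self))

agent_id = AgentId(type="hosted", name="support-agent", version="1.0")
print(agent_id.to_json())
```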

Sample Applications

Python Provider Samples

| Sample | Provider | Description |
|---|---|---|
| providers/openai/ | OpenAI | Basic OpenAI integration |
| providers/foundry/ | Foundry | Azure AI Foundry integration |
| providers/anthropic/ | Anthropic | Claude model usage |
| providers/ollama/ | Ollama | Local model deployment |

Run samples:

cd python
uv run samples/02-agents/providers/<provider-name>/

Sources: python/samples/02-agents/providers/README.md

.NET Provider Samples

cd dotnet/samples/02-agents/AgentProviders
dotnet run

Sources: dotnet/samples/02-agents/AgentProviders/README.md

Authentication Patterns

| Provider | Authentication Method |
|---|---|
| Azure AI Foundry | DefaultAzureCredential, AzureCliCredential |
| OpenAI | API Key via environment or parameter |
| Anthropic | API Key via environment |
| Ollama | No authentication (local) |
| GitHub Copilot | Copilot client authentication |

Most Azure-based providers support AzureCliCredential, requiring az login before execution:

az login

Best Practices

  1. Environment Variables: Store provider credentials in environment variables rather than hardcoding
  2. Provider Selection: Use Azure AI Foundry for production, OpenAI for development, Ollama for testing
  3. Client Reuse: Create chat clients once and reuse across agent instances when possible
  4. Error Handling: Implement retry logic for transient provider failures
  5. Model Selection: Match model capabilities to task requirements for cost efficiency

Deprecated Integrations

The Microsoft.Agents.AI.AzureAI.Persistent package is marked obsolete:

[Obsolete("Please use the latest Foundry Agents service via the Microsoft.Agents.AI.AzureAI package.")]
public static async Task<ChatClientAgent> CreateAIAgentAsync(...)

Sources: dotnet/src/Microsoft.Agents.AI.AzureAI.Persistent/PersistentAgentsClientExtensions.cs

Migration to the Foundry provider is recommended for persistent agent use cases.

Sources: [python/samples/02-agents/providers/README.md](https://github.com/microsoft/agent-framework/blob/main/python/samples/02-agents/providers/README.md)

Sessions, History, and State Management

Related topics: Agent System, Workflows and Orchestration

Agent Framework provides a comprehensive system for managing conversation state across multi-turn interactions. This system encompasses sessions that track user conversations, history providers that store and retrieve chat messages, and state management mechanisms that preserve context throughout agent interactions.

Overview

The session and state management architecture in Agent Framework enables persistent conversations across multiple exchanges. At its core, the framework uses AgentSession objects to uniquely identify conversation threads, ChatHistoryProvider implementations to store message history, and various compaction strategies to manage context window constraints.

graph TD
    A[Agent Invocation] --> B[AgentSession]
    B --> C[ChatHistoryProvider]
    C --> D[State Storage]
    B --> E[StateBag]
    D --> F[Persistent Storage]
    E --> G[In-Memory State]
    C --> H[Compaction Strategy]
    H --> I[Context Reduction]
    
    style A fill:#e1f5ff
    style F fill:#fff3e0
    style G fill:#e8f5e9

Agent Session

An AgentSession represents a unique conversation context between a user and an agent. The session serves as the primary container for all stateful information related to a specific interaction.

Session Structure

The session object contains metadata and state information:

| Property | Type | Description |
|---|---|---|
| session_id | string | Unique identifier for the session |
| user_id | string | Identifier for the user |
| agent_id | string | Identifier for the agent |
| metadata | dict | Application-specific metadata |
| state_bag | dict | Custom state storage |
| created_at | datetime | Session creation timestamp |
| last_accessed_at | datetime | Last activity timestamp |

Sources: python/packages/core/agent_framework/_sessions.py

Session Lifecycle

Sessions are created when a user initiates a conversation and persist until explicitly terminated. The framework supports both in-memory and persistent session storage backends.

# Session creation pattern (Python)
session = AgentSession(
    user_id="user123",
    agent_id="assistant-01",
    metadata={"conversation_type": "support"}
)

Chat History Management

Chat history providers are responsible for storing, retrieving, and managing conversation messages. The framework provides multiple built-in providers and supports custom implementations.

Built-in History Providers

| Provider | Storage Backend | Use Case |
|---|---|---|
| InMemoryChatHistoryProvider | Memory | Development, testing |
| CosmosChatHistoryProvider | Azure Cosmos DB | Production, scalable |
| RedisChatHistoryProvider | Redis | Production, high-performance |
| Custom Provider | Configurable | Application-specific needs |

In-Memory Provider

The InMemoryChatHistoryProvider provides session-scoped message storage suitable for single-instance deployments:

public class InMemoryChatHistoryProvider : ChatHistoryProvider
{
    private readonly SessionState _sessionState;
    
    public List<ChatMessage> GetMessages(AgentSession? session)
        => this._sessionState.GetOrInitializeState(session).Messages;
    
    public void SetMessages(AgentSession? session, List<ChatMessage> messages)
    {
        Throw.IfNull(messages);
        State state = this._sessionState.GetOrInitializeState(session);
        state.Messages = messages;
    }
}

Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/InMemoryChatHistoryProvider.cs:1-30

Cosmos DB Provider

For production deployments requiring persistence and scalability, the Cosmos DB provider offers fully managed storage:

# Cosmos DB history provider initialization
from agent_framework import AgentFrameworkClient

client = AgentFrameworkClient(endpoint="your-endpoint")
history_provider = client.create_chat_history_provider(
    provider_type="cosmos",
    connection_string="your-connection-string",
    database="agent_sessions",
    container="chat_history"
)

Sources: dotnet/src/Microsoft.Agents.AI.CosmosNoSql/CosmosChatHistoryProvider.cs

Redis Provider

The Redis provider offers low-latency access to chat history with automatic expiration:

# Redis session management
from agent_framework_redis import RedisSessionManager

session_manager = RedisSessionManager(
    host="localhost",
    port=6379,
    prefix="agent_session:",
    ttl=3600  # 1 hour TTL
)

Sources: python/packages/redis/agent_framework_redis/__init__.py

Custom History Provider

Developers can implement custom history providers by extending the base ChatHistoryProvider class:

from agent_framework import ChatHistoryProvider, ChatMessage
from typing import List, Optional

class CustomHistoryProvider(ChatHistoryProvider):
    def __init__(self, storage_backend):
        self._storage = storage_backend
    
    async def get_messages(self, session_id: str) -> List[ChatMessage]:
        return await self._storage.retrieve(session_id)
    
    async def add_message(self, session_id: str, message: ChatMessage) -> None:
        await self._storage.append(session_id, message)
    
    async def clear_history(self, session_id: str) -> None:
        await self._storage.delete(session_id)

Sources: python/samples/02-agents/conversations/custom_history_provider.py

Compaction and Context Management

As conversations grow, managing context window limits becomes critical. The framework provides compaction strategies that automatically reduce message history while preserving important context.

Compaction Strategy Interface

Both Python and .NET implementations define the CompactionStrategy interface:

| Property | Type | Description |
|---|---|---|
| max_context_window_tokens | int | Maximum tokens in context window |
| max_output_tokens | int | Reserved tokens for model output |
| available_input_tokens | int | Computed tokens available for input |

public abstract class CompactionStrategy
{
    public int MaxContextWindowTokens { get; }
    public int MaxOutputTokens { get; }
    public int AvailableInputTokens => MaxContextWindowTokens - MaxOutputTokens;
    
    public abstract Task<IEnumerable<ChatMessage>> CompactAsync(
        IList<ChatMessage> messages,
        CancellationToken cancellationToken = default);
}

Sources: dotnet/src/Microsoft.Agents.AI/Compaction/CompactionStrategy.cs

Compaction Trigger Events

The compaction process can be configured to trigger at different points in the message lifecycle:

| Trigger Event | Timing | Use Case |
|---|---|---|
| BeforeMessagesRetrieval | Before history fetch | Optimize retrieval |
| AfterMessagesRetrieval | After history fetch | Post-processing |
| OnTokenThreshold | At token limit | Aggressive reduction |

// Configure pre-retrieval compaction
if (this.ReducerTriggerEvent == InMemoryChatHistoryProviderOptions.ChatReducerTriggerEvent.BeforeMessagesRetrieval 
    && this.ChatReducer is not null)
{
    await ReduceMessagesAsync(this.ChatReducer, state, cancellationToken);
}

Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/InMemoryChatHistoryProvider.cs:25-45

Python Compaction Implementation

The Python implementation follows a similar pattern with configurable compaction strategies:

class CompactionStrategy(ABC):
    def __init__(
        self,
        max_context_window_tokens: int = 128000,
        max_output_tokens: int = 4096
    ):
        self.max_context_window_tokens = max_context_window_tokens
        self.max_output_tokens = max_output_tokens
    
    @property
    def available_input_tokens(self) -> int:
        return self.max_context_window_tokens - self.max_output_tokens
    
    @abstractmethod
    async def compact(
        self,
        messages: List[ChatMessage],
        cancellation_token: Optional[CancellationToken] = None
    ) -> List[ChatMessage]:
        pass

Sources: python/packages/core/agent_framework/_compaction.py
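A concrete strategy satisfying this interface might drop the oldest messages until the estimated token count fits the available input budget. The sketch below is self-contained and illustrative: `ChatMessage` is a stand-in for the framework type, and the whitespace-based token estimate is deliberately naive:

```python
from dataclasses import dataclass

@dataclass
class ChatMessage:  # stand-in for the framework's message type
    role: str
    text: str

def estimate_tokens(message: ChatMessage) -> int:
    # Naive estimate: one token per whitespace-separated word.
    return len(message.text.split())

def compact_drop_oldest(messages, max_context_window_tokens=128000,
                        max_output_tokens=4096):
    # available_input_tokens, as defined by the strategy interface above.
    available = max_context_window_tokens - max_output_tokens
    kept = list(messages)
    while kept and sum(estimate_tokens(m) for m in kept) > available:
        kept.pop(0)  # drop the oldest message first
    return kept

history = [ChatMessage("user", "a " * 10), ChatMessage("assistant", "b " * 10)]
# 20 estimated tokens against a 12-token budget: the oldest message is dropped.
print(len(compact_drop_oldest(history, max_context_window_tokens=20,
                              max_output_tokens=8)))  # 1
```

Real strategies typically preserve system messages and recent turns rather than trimming purely by age, but the budget arithmetic is the same.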

State Management

Session State Bag

The StateBag provides a dictionary-like interface for storing custom application state within a session:

public class AgentSession
{
    public IDictionary<string, object> StateBag { get; set; }
}

// Usage
session.StateBag["last_intent"] = "greeting";
session.StateBag["user_preference"] = new { theme = "dark", language = "en" };

Context Providers

AIContextProvider instances enable middleware-style processing of conversation context:

graph LR
    A[User Message] --> B[AIContextProvider.BeforeInvoke]
    B --> C[Agent Invocation]
    C --> D[AIContextProvider.Invoked]
    D --> E[Response to User]
    
    F[Update State] -.-> B
    G[Log/Audit] -.-> D
    H[Extract Memories] -.-> D

public ValueTask BeforeInvokeAsync(InvokingContext context, CancellationToken cancellationToken = default)
{
    // Use the request and response messages to:
    // - Update state based on conversation outcomes
    // - Extract and store memories or preferences
    // - Log or audit conversation details
    return default;
}

Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/AIContextProvider.cs:1-25

Chat History Memory Provider Scope

For scoping chat history across applications, agents, or sessions:

public sealed class ChatHistoryMemoryProviderScope
{
    public string? ApplicationId { get; set; }
    public string? AgentId { get; set; }
    public string? SessionId { get; set; }
    public string? UserId { get; set; }
}

| Scope Property | Effect When Set |
|---|---|
| ApplicationId | Restricts history to specific application |
| AgentId | Restricts history to specific agent |
| SessionId | Restricts history to specific session |
| UserId | Restricts history to specific user |

Sources: dotnet/src/Microsoft.Agents.AI/Memory/ChatHistoryMemoryProviderScope.cs
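The effect of these scope properties can be sketched as a filter over stored history records. This is an assumption-laden illustration — the record shape and matching rules below are stand-ins, not the provider's actual storage model:

```python
def matches_scope(record: dict, scope: dict) -> bool:
    # A record matches when every scope property that is set (non-None)
    # equals the corresponding record field; unset properties match anything.
    return all(record.get(key) == value
               for key, value in scope.items()
               if value is not None)

records = [
    {"application_id": "app1", "agent_id": "a1", "session_id": "s1"},
    {"application_id": "app1", "agent_id": "a2", "session_id": "s2"},
]
# Scope to one agent within the application; leave session unrestricted.
scope = {"application_id": "app1", "agent_id": "a1", "session_id": None}
print([r["session_id"] for r in records if matches_scope(r, scope)])  # ['s1']
```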

Workflow Checkpointing

For long-running workflows, the framework supports checkpoint-based state persistence that allows recovery from failures and resumption of interrupted executions.

from agent_framework import CheckpointStorage, CosmosCheckpointStorage

checkpoint_storage = CosmosCheckpointStorage(
    endpoint="your-cosmos-endpoint",
    database="workflows",
    container="checkpoints"
)

# Save checkpoint
await checkpoint_storage.save_checkpoint(
    workflow_id="workflow-123",
    step="step-3",
    state={"progress": 75, "data": {...}},
    metadata={"started_at": "2024-01-15T10:00:00Z"}
)

# Resume from checkpoint
checkpoint = await checkpoint_storage.load_checkpoint(
    workflow_id="workflow-123"
)

Sources: python/samples/03-workflows/checkpoint/cosmos_workflow_checkpointing.py

Agent Configuration Options

The HarnessAgentOptions class demonstrates comprehensive configuration for session and history management:

public class HarnessAgentOptions
{
    public ChatOptions? ChatOptions { get; set; }
    public ChatHistoryProvider? ChatHistoryProvider { get; set; }
    public IEnumerable<AIContextProvider>? AIContextProviders { get; set; }
}

| Option | Description |
|---|---|
| ChatOptions | Configures instructions, tools, and model parameters |
| ChatHistoryProvider | Storage backend for conversation history |
| AIContextProviders | Middleware providers for context processing |

Sources: dotnet/src/Microsoft.Agents.AI.Harness/HarnessAgentOptions.cs:1-50

Agent Modes

Sessions can operate in different modes that affect behavior:

public sealed class AgentMode
{
    public string Name { get; }
    public string Description { get; }
}

public class AgentModeProviderOptions
{
    public IReadOnlyList<AgentMode>? Modes { get; set; }
    public string? DefaultMode { get; set; }
}

| Mode | Description |
|---|---|
| plan | Interactive planning mode |
| execute | Autonomous execution mode |

Sources: dotnet/src/Microsoft.Agents.AI/Harness/AgentMode/AgentModeProviderOptions.cs

Response Updates

The AgentResponseUpdate class represents streaming response data with full metadata:

public class AgentResponseUpdate
{
    public string? AuthorName { get; set; }
    public ChatRole? Role { get; set; }
    public IList<AIContent>? Contents { get; set; }
    public FinishReason? FinishReason { get; set; }
    public string? MessageId { get; set; }
    public string? ResponseId { get; set; }
    public DateTimeOffset? CreatedAt { get; set; }
}

Sources: dotnet/src/Microsoft.Agents.AI.Abstractions/AgentResponseUpdate.cs:1-30

Best Practices

Session Management

  1. Session Initialization: Always initialize sessions with appropriate user and agent identifiers
  2. Session Cleanup: Implement session expiration for idle conversations
  3. State Isolation: Use separate state bags for different concerns

History Management

  1. Provider Selection: Choose providers based on scale requirements
  2. Compaction Tuning: Configure compaction thresholds based on model context limits
  3. History Pruning: Implement retention policies for regulatory compliance

State Management

  1. State Serialization: Ensure custom state objects are serializable
  2. Context Providers: Use context providers for cross-cutting concerns
  3. Checkpoint Frequency: Balance checkpoint overhead against recovery requirements

See Also

Sources: [python/packages/core/agent_framework/_sessions.py]()

Hosting and Deployment Patterns

Related topics: Workflows and Orchestration, Observability and Telemetry

The Microsoft Agent Framework provides multiple hosting and deployment patterns to accommodate different runtime environments and enterprise requirements. This documentation covers the available hosting options, configuration requirements, and deployment strategies for both Python and .NET implementations.

Overview

The framework supports three primary hosting paradigms:

| Hosting Pattern | Language | Runtime Environment | Use Case |
|---|---|---|---|
| Azure Functions | Python, .NET | Serverless/Event-driven | Stateless agent invocations |
| Durable Task | Python, .NET | Long-running workflows | Complex orchestrations with state persistence |
| Foundry Hosting | Python, .NET | Azure AI Foundry | Managed agent deployment with platform integration |

Sources: python/samples/04-hosting/README.md:1-15 | dotnet/samples/04-hosting/README.md:1-20

Architecture Overview

graph TD
    A[Client Request] --> B{Deployment Pattern}
    B -->|Azure Functions| C[Function App]
    B -->|Durable Task| D[Orchestration Engine]
    B -->|Foundry Hosting| E[Azure AI Foundry]
    
    C --> F[Stateless Agent Handler]
    D --> G[Stateful Orchestrator]
    E --> H[Managed Agent Runtime]
    
    F --> I[Response]
    G --> I
    H --> I

Azure Functions Hosting

Azure Functions provides a serverless hosting model suitable for event-driven agent invocations. The framework offers native integration through dedicated packages for both Python and .NET.

Python Azure Functions Package

The Python Azure Functions hosting package is located at python/packages/azurefunctions/agent_framework_azurefunctions/__init__.py.

Installation

pip install agent-framework-azurefunctions

Sources: python/packages/azurefunctions/agent_framework_azurefunctions/__init__.py

.NET Azure Functions Package

The .NET Azure Functions hosting is provided through the Microsoft.Agents.AI.Hosting.AzureFunctions NuGet package.

Installation

<ItemGroup>
  <PackageReference Include="Microsoft.Agents.AI.Hosting.AzureFunctions" Version="[CURRENTVERSION]" />
</ItemGroup>

Or via CLI:

dotnet add package Microsoft.Agents.AI.Hosting.AzureFunctions

Sources: dotnet/src/Microsoft.Agents.AI.Hosting.AzureFunctions/README.md:1-15

Configuration

Azure Functions samples require the following environment configuration:

| Variable | Description | Example |
|---|---|---|
| AZURE_OPENAI_ENDPOINT | Azure OpenAI service endpoint | https://your-resource.openai.azure.com/ |
| AZURE_OPENAI_DEPLOYMENT_NAME | Model deployment name | gpt-4o |
| TASKHUB_NAME | Durable Task hub name (for orchestration) | default |

Sources: dotnet/samples/04-hosting/DurableAgents/AzureFunctions/README.md:1-30

Sample Structure

The repository includes Azure Functions samples organized by complexity:

dotnet/samples/04-hosting/DurableAgents/AzureFunctions/
├── 01_SingleAgent/
├── 02_MultiAgent/
└── README.md

Running the Sample

cd dotnet/samples/04-hosting/DurableAgents/AzureFunctions/01_SingleAgent
func start

The function app becomes available at http://localhost:7071.

Sources: dotnet/samples/04-hosting/DurableAgents/AzureFunctions/README.md:45-60

Durable Task Hosting

Durable Task hosting enables long-running agent workflows with state persistence and checkpoint capabilities. This pattern is essential for complex multi-step orchestrations.

Python Durable Task Package

Installation

pip install agent-framework-durabletask

The package is located at python/packages/durabletask/agent_framework_durabletask/__init__.py.

Sources: python/packages/durabletask/agent_framework_durabletask/__init__.py

.NET Durable Task Package

Installation

<ItemGroup>
  <PackageReference Include="Microsoft.Agents.AI.DurableTask" Version="[CURRENTVERSION]" />
</ItemGroup>

Sources: dotnet/src/Microsoft.Agents.AI.DurableTask/README.md:1-10

Workflow Orchestration

graph LR
    A[Start] --> B[Activity: Initialize]
    B --> C[Activity: Process]
    C --> D{Continue?}
    D -->|Yes| C
    D -->|No| E[Activity: Finalize]
    E --> F[Complete]
    
    G[Orchestrator] -.-> A
    G -.-> B
    G -.-> C
    G -.-> D
    G -.-> E
    G -.-> F

Azurite Emulator Requirement

Durable Task samples require an Azure Storage backend. To provision and deploy one with the Azure Developer CLI:

az login
azd pipeline config
azd up

For purely local runs, start the Azurite storage emulator instead:

azurite

Sources: python/samples/04-hosting/azure_functions/README.md:1-20

Foundry Hosting

Foundry Hosting provides the most comprehensive deployment option with deep integration into Azure AI Foundry. This pattern supports managed agents, model routing, and enterprise-grade security.

Python Foundry Hosting Package

Installation

pip install agent-framework-foundry-hosting

The package is located at python/packages/foundry_hosting/agent_framework_foundry_hosting/__init__.py.

Sources: python/packages/foundry_hosting/agent_framework_foundry_hosting/__init__.py

Configuration Requirements

Foundry-hosted agents require specific environment configuration:

| Variable | Description | Required |
|---|---|---|
| FOUNDRY_PROJECT_ENDPOINT | Azure AI Foundry project endpoint | Yes |
| FOUNDRY_MODEL or AZURE_AI_MODEL_DEPLOYMENT_NAME | Model deployment name | Yes |
| AZURE_BEARER_TOKEN | Authentication token (for Docker) | Docker only |
| AGENT_NAME | Foundry-managed agent name | Local dev |

Sources: python/samples/04-hosting/foundry-hosted-agents/README.md:1-40 | dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-FoundryAgent/README.md:1-25

Environment Setup

Bash/Linux

export FOUNDRY_PROJECT_ENDPOINT="https://<account>.services.ai.azure.com/api/projects/<project>"
export AZURE_AI_MODEL_DEPLOYMENT_NAME="<your-model-deployment-name>"

PowerShell

$env:FOUNDRY_PROJECT_ENDPOINT="https://<account>.services.ai.azure.com/api/projects/<project>"
$env:AZURE_AI_MODEL_DEPLOYMENT_NAME="<your-model-deployment-name>"

Foundry Hosted Agent Samples

The repository provides multiple Foundry hosting samples:

| Sample | Description | Path |
|---|---|---|
| Hosted-TextRag | Text-based RAG agent | dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-TextRag/ |
| Hosted-FoundryAgent | Direct Foundry agent hosting | dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-FoundryAgent/ |
| Hosted-AzureSearchRag | Azure AI Search integration | dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-AzureSearchRag/ |
| Hosted-McpTools | MCP tools integration | dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-McpTools/ |
| Hosted-Files | Bundled file handling | dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-Files/ |
| Hosted-Workflow-Simple | Multi-step workflow | dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-Workflow-Simple/ |

Sources: python/samples/04-hosting/README.md:1-50 | dotnet/samples/04-hosting/README.md:1-60

Deployment Workflows

Direct Execution (Contributors)

For local development and contribution work:

cd dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-FoundryAgent
AGENT_NAME=<your-agent-name> dotnet run

The agent starts on http://localhost:8088.

Docker Deployment

#### Publishing for Container Runtime

dotnet publish -c Debug -f net10.0 -r linux-musl-x64 --self-contained false -o out

#### Building the Image

docker build -f Dockerfile.contributor -t hosted-foundry-agent .

#### Running the Container

export AZURE_BEARER_TOKEN=$(az account get-access-token --resource https://ai.azure.com --query accessToken -o tsv)

docker run --rm -p 8088:8088 \
  -e AGENT_NAME=hosted-foundry-agent \
  -e AZURE_BEARER_TOKEN=$AZURE_BEARER_TOKEN \
  --env-file .env \
  hosted-foundry-agent

Sources: dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-FoundryAgent/README.md:20-60

Testing Hosted Agents

Using Azure Developer CLI

azd ai agent invoke --local "Hello!"

Using curl

curl -X POST http://localhost:8088/responses \
  -H "Content-Type: application/json" \
  -d '{"input": "Hello!", "model": "<your-agent-name>"}'
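The same call can be issued from Python's standard library. The endpoint, port, and payload shape mirror the curl command above; the locally running agent and the placeholder agent name are assumptions carried over from this sample:

```python
import json
import urllib.request

def build_responses_request(agent_input: str, model: str,
                            base_url: str = "http://localhost:8088"):
    """Build a POST request matching the /responses payload shown above."""
    payload = json.dumps({"input": agent_input, "model": model}).encode("utf-8")
    return urllib.request.Request(
        f"{base_url}/responses",
        data=payload,
        headers={"Content-Type": "application/json"},
        method="POST",
    )

req = build_responses_request("Hello!", "<your-agent-name>")
# urllib.request.urlopen(req) would send it once the agent is running locally.
```

The request is only constructed here; sending it requires a hosted agent listening on port 8088.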

Testing Session Files

cd ../Using-Samples/SessionFilesClient
$env:AGENT_ENDPOINT = "http://localhost:8088"
$env:AGENT_NAME = "hosted-files"
dotnet run

You> What is the total revenue in the contoso file?

Sources: dotnet/samples/04-hosting/FoundryHostedAgents/responses/Hosted-Files/README.md:30-50

Comparison Matrix

| Feature | Azure Functions | Durable Task | Foundry Hosting |
|---|---|---|---|
| Stateful Execution | No | Yes | Yes |
| Long-running Workflows | No | Yes | Yes |
| Serverless | Yes | No | No |
| Managed Scaling | Yes | Manual | Yes |
| Checkpoint/Resume | No | Yes | Yes |
| Azure AI Foundry Integration | No | No | Yes |
| Local Development Support | Limited | Yes | Yes |
| Docker Deployment | Yes | Yes | Yes |

Next Steps

Sources: [python/samples/04-hosting/README.md:1-15](https://github.com/microsoft/agent-framework/blob/main/python/samples/04-hosting/README.md)

Observability and Telemetry

Related topics: Workflows and Orchestration, Hosting and Deployment Patterns


The Agent Framework provides comprehensive observability capabilities through OpenTelemetry integration, enabling distributed tracing, performance metrics collection, and detailed logging across both .NET and Python implementations.

Overview

Observability in the Agent Framework allows developers to:

  • Trace agent invocations across distributed systems
  • Collect performance metrics and timing information
  • Log request and response payloads (when enabled)
  • Track errors and capture exception details
  • Monitor usage statistics and token consumption

Sources: dotnet/src/Microsoft.Agents.AI/OpenTelemetryAgentBuilderExtensions.cs:1-30

The implementation follows the OpenTelemetry Semantic Conventions for Generative AI systems as defined in the OpenTelemetry specification. The specification for Generative AI is still experimental and subject to change.

Sources: docs/decisions/0003-agent-opentelemetry-instrumentation.md:1-20

Architecture

High-Level Component Interaction

graph TD
    A[Application] --> B[OpenTelemetry Agent Wrapper]
    B --> C[Inner AIAgent]
    C --> D[IChatClient]
    D --> E[AI Provider<br/>OpenAI/Anthropic/GitHub Copilot]
    
    B -.-> F[OpenTelemetry Traces]
    B -.-> G[Metrics]
    B -.-> H[Logs]
    
    F --> I[OTLP Exporter]
    G --> I
    H --> I
    
    I --> J[Telemetry Backend<br/>Azure Monitor/Jaeger/...]
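The wrapper relationship in the diagram can be sketched in Python: a telemetry wrapper delegates every call to the inner agent unchanged and records a span-like entry around it. The class and method names below are illustrative stand-ins, not the framework's API:

```python
import time

class TracingAgentWrapper:
    """Illustrative stand-in for an OpenTelemetry-style agent wrapper:
    delegates to the inner agent and records a span-like record per call."""

    def __init__(self, inner_agent, source_name="agent"):
        self.inner_agent = inner_agent
        self.source_name = source_name
        self.spans = []  # stands in for exported trace data

    def run(self, message: str):
        start = time.perf_counter()
        try:
            response = self.inner_agent.run(message)  # behavior unchanged
            status = "ok"
            return response
        except Exception:
            status = "error"
            raise
        finally:
            # Record the span whether the call succeeded or failed
            self.spans.append({
                "source": self.source_name,
                "duration_s": time.perf_counter() - start,
                "status": status,
            })

class EchoAgent:
    def run(self, message: str):
        return f"echo: {message}"

agent = TracingAgentWrapper(EchoAgent(), source_name="demo-agent")
result = agent.run("Hello!")
```

The point of the pattern is that tracing is added at the boundary; the inner agent needs no changes.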

Auto-Wiring Mechanism

When using OpenTelemetryAgent, the framework automatically wraps underlying chat clients with telemetry instrumentation:

graph LR
    A[ChatClientAgent] --> B[OpenTelemetryAgent]
    B --> C{IChatClient}
    C -->|autoWireChatClient: true| D[Auto-wrap with<br/>OpenTelemetryChatClient]
    C -->|Already Instrumented| E[No Additional Wrapping]
    D --> F[Chat-Level Telemetry]

Sources: dotnet/src/Microsoft.Agents.AI/OpenTelemetryAgent.cs:1-25

.NET Implementation

OpenTelemetryAgent

The OpenTelemetryAgent class wraps an existing AIAgent to add telemetry capabilities without modifying the underlying agent's behavior.

Class Declaration:

[Experimental(DiagnosticIds.Experiments.AgentsAIExperiments)]
public sealed class OpenTelemetryAgent : AIAgent

Constructor Parameters:

| Parameter | Type | Description |
|---|---|---|
| innerAgent | AIAgent | The underlying agent to be augmented with telemetry |
| sourceName | string? | Optional source name for telemetry identification |
| autoWireChatClient | bool | Auto-wrap ChatClientAgent's IChatClient with OpenTelemetryChatClient |

Sources: dotnet/src/Microsoft.Agents.AI/OpenTelemetryAgent.cs:1-40

Key Features:

  1. Provider Metadata Extraction: Automatically extracts provider metadata from the inner agent via AIAgentMetadata:

     this._providerName = innerAgent.GetService<AIAgentMetadata>()?.ProviderName;

  2. Chat Client Auto-Wiring: When autoWireChatClient is true and the inner agent is a ChatClientAgent, the underlying IChatClient is automatically wrapped with OpenTelemetryChatClient.

Sources: dotnet/src/Microsoft.Agents.AI/OpenTelemetryAgent.cs:1-50

Builder Extension

The recommended way to add telemetry to agents is through the AIAgentBuilder:

public static AIAgentBuilder UseOpenTelemetry(
    this AIAgentBuilder builder,
    string? sourceName = null,
    Action<OpenTelemetryAgent>? configure = null)

Usage:

AIAgent agent = builder
    .WithChatClient(chatClient)
    .UseOpenTelemetry(sourceName: "my-agent")
    .Build();

Sources: dotnet/src/Microsoft.Agents.AI/OpenTelemetryAgentBuilderExtensions.cs:1-45

Workflow Telemetry Options

The WorkflowTelemetryOptions class provides configuration for workflow-level telemetry:

| Property | Type | Default | Description |
|---|---|---|---|
| EnableSensitiveData | bool | false | Include potentially sensitive information in telemetry |
| DisableWorkflowBuild | bool | false | Disable workflow.build activities |
| DisableWorkflowRun | bool | false | Disable workflow_invoke activities |

Sources: dotnet/src/Microsoft.Agents.AI.Workflows/Observability/WorkflowTelemetryOptions.cs:1-40
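As a rough Python analogue of these options, the sketch below mirrors the defaults in the table and shows how a recorder might consult them; the dataclass and helper are illustrative, not part of the framework:

```python
from dataclasses import dataclass

@dataclass
class WorkflowTelemetryOptions:
    # Defaults mirror the table above
    enable_sensitive_data: bool = False
    disable_workflow_build: bool = False
    disable_workflow_run: bool = False

def should_record(activity_name: str, options: WorkflowTelemetryOptions) -> bool:
    """Decide whether an activity should be recorded under these options."""
    if activity_name == "workflow.build" and options.disable_workflow_build:
        return False
    if activity_name == "workflow_invoke" and options.disable_workflow_run:
        return False
    return True

opts = WorkflowTelemetryOptions(disable_workflow_build=True)
```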

Activity Extensions

The framework provides extension methods for creating and managing OpenTelemetry activities in workflows:

// Creating activity spans for workflow operations
ActivitySource activitySource = new ActivitySource("Microsoft.Agents.AI.Workflows");

// Activity creation following semantic conventions
var activity = activitySource.StartActivity("workflow.invoke");

These extensions ensure proper tagging and attributes according to OpenTelemetry's generative AI conventions.

Sources: dotnet/src/Microsoft.Agents.AI.Workflows/Observability/ActivityExtensions.cs:1-30

Python Implementation

Telemetry Module

The Python SDK provides telemetry capabilities through the _telemetry.py module:

from agent_framework._telemetry import configure_otel_providers

Key Functions:

| Function | Description |
|---|---|
| configure_otel_providers() | Configure OpenTelemetry providers with exporters |
| configure_otel_providers_with_env_var() | Use standard OTEL environment variables |

Sources: python/packages/core/agent_framework/_telemetry.py:1-50

Basic Configuration

from agent_framework.observability import configure_otel_providers

# Enable console exporters for development
configure_otel_providers(enable_console_exporters=True)

Sources: python/samples/02-agents/observability/agent_observability.py:1-20

GitHub Copilot Agent Integration

The GitHubCopilotAgent has OpenTelemetry tracing built-in:

from agent_framework.observability import configure_otel_providers
from agent_framework.github import GitHubCopilotAgent

configure_otel_providers(enable_console_exporters=True)

async with GitHubCopilotAgent() as agent:
    response = await agent.run("Hello!")

Sources: python/samples/02-agents/providers/github_copilot/README.md:1-30

Environment Variables

Python observability supports standard OpenTelemetry environment variables:

| Variable | Description |
|---|---|
| OTEL_SERVICE_NAME | Service name for telemetry |
| OTEL_EXPORTER_OTLP_ENDPOINT | OTLP exporter endpoint |
| OTEL_EXPORTER_OTLP_PROTOCOL | Protocol (grpc, http/protobuf) |
| OTEL_RESOURCE_ATTRIBUTES | Additional resource attributes |

Sources: python/samples/02-agents/observability/README.md:1-30
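A minimal sketch of how these variables might be consumed at startup, using plain os.environ lookups; the fallback defaults are assumptions for illustration, not documented framework behavior:

```python
import os

def read_otel_env(environ=None) -> dict:
    """Collect the standard OTEL_* variables, falling back to common defaults."""
    environ = os.environ if environ is None else environ
    return {
        "service_name": environ.get("OTEL_SERVICE_NAME", "unknown_service"),
        "endpoint": environ.get("OTEL_EXPORTER_OTLP_ENDPOINT",
                                "http://localhost:4317"),
        "protocol": environ.get("OTEL_EXPORTER_OTLP_PROTOCOL", "grpc"),
        "resource_attributes": environ.get("OTEL_RESOURCE_ATTRIBUTES", ""),
    }

config = read_otel_env({"OTEL_SERVICE_NAME": "agent-dev"})
```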

Logging Configuration

Align Python logs with telemetry output:

import logging

logging.basicConfig(
    format="[%(asctime)s - %(pathname)s:%(lineno)d - %(levelname)s] %(message)s",
    datefmt="%Y-%m-%d %H:%M:%S",
)

# Get root logger and set detailed level
logger = logging.getLogger()
logger.setLevel(logging.NOTSET)

Sources: python/samples/02-agents/observability/README.md:1-60

Semantic Conventions

The Agent Framework adheres to OpenTelemetry's semantic conventions for generative AI systems. Key conventions include:

graph TD
    A[AI Agent Invocation] --> B[Semantic Convention Attributes]
    
    B --> C[gen_ai.system]
    B --> D[gen_ai.request.model]
    B --> E[gen_ai.response.id]
    B --> F[gen_ai.usage.prompt_tokens]
    B --> G[gen_ai.usage.completion_tokens]
    B --> H[gen_ai.response.finish_reason]

Standard Attributes:

| Attribute | Description |
|---|---|
| gen_ai.system | The AI system type (e.g., "openai", "anthropic") |
| gen_ai.request.model | Model identifier for the request |
| gen_ai.response.id | Unique identifier for the response |
| gen_ai.usage.prompt_tokens | Number of tokens in the prompt |
| gen_ai.usage.completion_tokens | Number of tokens in completion |
| gen_ai.response.finish_reason | Reason for completion termination |

Sources: docs/decisions/0003-agent-opentelemetry-instrumentation.md:1-50
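To make the attribute set concrete, here is a stdlib-only sketch that assembles the gen_ai.* attributes a conforming span would carry; the values are invented for illustration:

```python
def tag_gen_ai_span(system, model, response_id,
                    prompt_tokens, completion_tokens, finish_reason):
    """Return the gen_ai.* attribute dict for one agent invocation."""
    return {
        "gen_ai.system": system,
        "gen_ai.request.model": model,
        "gen_ai.response.id": response_id,
        "gen_ai.usage.prompt_tokens": prompt_tokens,
        "gen_ai.usage.completion_tokens": completion_tokens,
        "gen_ai.response.finish_reason": finish_reason,
    }

# Hypothetical values for a single completed request
span_attrs = tag_gen_ai_span("openai", "gpt-4o", "resp_123", 42, 7, "stop")
```

In a real setup these would be set on an OpenTelemetry span rather than returned as a dict.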

Configuration Examples

.NET: Full Agent with Telemetry

using Microsoft.Agents.AI;
using Microsoft.Agents.AI.Telemetry;

// Create the builder
AIAgentBuilder builder = new AIAgentBuilder();

// Configure with telemetry
AIAgent agent = builder
    .WithChatClient(chatClient)
    .UseOpenTelemetry(
        sourceName: "my-agent",
        configure: agent => 
        {
            // Additional configuration
        })
    .Build();

// Use the agent - all invocations are automatically traced
var response = await agent.InvokeAsync("Hello, agent!");

Sources: dotnet/samples/02-agents/AgentOpenTelemetry/Program.cs:1-50

Python: Advanced Exporter Configuration

from agent_framework.observability import configure_otel_providers
from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExporter
from opentelemetry.sdk.trace.export import BatchSpanProcessor

# Create custom exporter
custom_exporter = OTLPSpanExporter(
    endpoint="https://your-endpoint.azure.com"
)

# Configure with custom exporter
configure_otel_providers(
    service_name="my-agent-service",
    span_exporter=custom_exporter,
    enable_console_exporters=True
)

Sources: python/samples/02-agents/observability/configure_otel_providers_with_parameters.py:1-40

Best Practices

1. Consistent Source Naming

Use meaningful source names to identify telemetry data:

// Good
builder.UseOpenTelemetry(sourceName: "customer-support-agent");

// Avoid
builder.UseOpenTelemetry(); // Uses default

2. Sensitive Data Handling

By default, telemetry excludes raw inputs and outputs:

var options = new WorkflowTelemetryOptions
{
    EnableSensitiveData = false // Default - excludes raw content
};

Only enable sensitive data logging when necessary and ensure proper data protection.

3. Selective Activity Recording

Disable activities that generate excessive telemetry:

var options = new WorkflowTelemetryOptions
{
    DisableWorkflowBuild = true,  // Reduce noise in build-heavy workflows
    DisableWorkflowRun = false   // Keep run telemetry
};

4. Provider Compatibility

The telemetry implementation adapts to the underlying AI provider:

| Provider | Telemetry Support |
|---|---|
| OpenAI | Full |
| Anthropic | Full |
| Azure AI Foundry | Full |
| GitHub Copilot | Built-in |

5. Environment-Based Configuration

Use environment variables for deployment flexibility:

# Development
export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4317
export OTEL_SERVICE_NAME=agent-dev

# Production
export OTEL_EXPORTER_OTLP_ENDPOINT=https://telemetry.company.com
export OTEL_SERVICE_NAME=agent-prod

Troubleshooting

Missing Telemetry Data

  1. Verify OpenTelemetry SDK is properly configured
  2. Check that the exporter endpoint is accessible
  3. Ensure ActivitySource names match between instrumentation and export

Duplicate Telemetry

If using ChatClientAgent with OpenTelemetryAgent:

  • Set autoWireChatClient: false when chat client is already instrumented
  • Avoid manually wrapping already-wrapped clients
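The "avoid double wrapping" advice amounts to an idempotence check before wrapping. The sketch below is illustrative Python, not the framework API:

```python
class TelemetryWrapper:
    """Toy wrapper; stands in for an instrumented agent or chat client."""
    def __init__(self, inner):
        self.inner = inner

def wrap_once(agent, wrapper_cls=TelemetryWrapper):
    # Skip wrapping when the agent is already instrumented; otherwise
    # every invocation would emit duplicate spans.
    if isinstance(agent, wrapper_cls):
        return agent
    return wrapper_cls(agent)

class PlainAgent:
    pass

wrapped = wrap_once(PlainAgent())
rewrapped = wrap_once(wrapped)  # no second layer is added
```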

Performance Impact

Telemetry collection adds minimal overhead. For high-throughput scenarios:

  • Use batch exporters instead of simple exporters
  • Consider disabling verbose logging levels
  • Sample traces when full fidelity is not required
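The batch-exporter advice can be sketched as buffering spans and exporting them in groups instead of making one export call per span. This is toy code for the idea, not the OpenTelemetry SDK:

```python
class BatchExporter:
    """Toy batch exporter: collects spans and flushes them in groups,
    trading a little latency for far fewer export calls."""

    def __init__(self, export_fn, max_batch_size: int = 3):
        self.export_fn = export_fn
        self.max_batch_size = max_batch_size
        self.buffer = []

    def on_span(self, span):
        self.buffer.append(span)
        if len(self.buffer) >= self.max_batch_size:
            self.flush()

    def flush(self):
        if self.buffer:
            self.export_fn(list(self.buffer))
            self.buffer.clear()

batches = []
exporter = BatchExporter(batches.append, max_batch_size=3)
for i in range(7):
    exporter.on_span({"id": i})
exporter.flush()  # drain the final partial batch
```

Seven spans end up in three export calls instead of seven.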

Sources: dotnet/src/Microsoft.Agents.AI/OpenTelemetryAgentBuilderExtensions.cs:1-30

Doramagic Pitfall Log

Source-linked risks stay visible on the manual page so the preview does not read like a recommendation.

  • (high) .NET: [Bug]: TextContent.AdditionalProperties dropped by AsAGUIEventStreamAsync for TEXT_MESSAGE_START/TEXT_MESSAGE_CON… Impact: first-time setup may fail or require extra isolation and rollback planning.
  • (high) Bug: Agent responses lose structured JSON metadata in multi-agent orchestration (MAF 1.x.x). Impact: users may get misleading failures or incomplete behavior unless configuration is checked carefully.
  • (high) .NET: OpenAI-compatible extra body field thinking is not forwarded when using Microsoft.Agents.AI.OpenAI. Impact: the project may affect permissions, credentials, data exposure, or host boundaries.
  • (high) .NET: [Bug]: In v. 1.5.0 Microsoft.Agents.AI.Anthropic (and Google.GenAI) do not work [Regression]. Impact: the project may affect permissions, credentials, data exposure, or host boundaries.

Doramagic Pitfall Log

Doramagic extracted 16 source-linked risk signals. Review them before installing or handing real data to the project.

1. Installation risk: .NET: [Bug]: TextContent.AdditionalProperties dropped by AsAGUIEventStreamAsync for TEXT_MESSAGE_START/TEXT_MESSAGE_CON…

  • Severity: high
  • Finding: Installation risk is backed by a source signal: .NET: [Bug]: TextContent.AdditionalProperties dropped by AsAGUIEventStreamAsync for TEXT_MESSAGE_START/TEXT_MESSAGE_CON…. Treat it as a review item until the current version is checked.
  • User impact: First-time setup may fail or require extra isolation and rollback planning.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/4923

2. Configuration risk: Bug: Agent responses lose structured JSON metadata in multi-agent orchestration (MAF 1.x.x)

  • Severity: high
  • Finding: Configuration risk is backed by a source signal: Bug: Agent responses lose structured JSON metadata in multi-agent orchestration (MAF 1.x.x). Treat it as a review item until the current version is checked.
  • User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5785

3. Security or permission risk: .NET: OpenAI-compatible extra body field thinking is not forwarded when using Microsoft.Agents.AI.OpenAI

  • Severity: high
  • Finding: Security or permission risk is backed by a source signal: .NET: OpenAI-compatible extra body field thinking is not forwarded when using Microsoft.Agents.AI.OpenAI. Treat it as a review item until the current version is checked.
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5708

4. Security or permission risk: .NET: [Bug]: In v. 1.5.0 Microsoft.Agents.AI.Anthropic (and Google.GenAI) do not work [Regression]

  • Severity: high
  • Finding: Security or permission risk is backed by a source signal: .NET: [Bug]: In v. 1.5.0 Microsoft.Agents.AI.Anthropic (and Google.GenAI) do not work [Regression]. Treat it as a review item until the current version is checked.
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5707

5. Security or permission risk: .NET: [Bug]: Regression - Tool Events not being emitted correctly to the front end

  • Severity: high
  • Finding: Security or permission risk is backed by a source signal: .NET: [Bug]: Regression - Tool Events not being emitted correctly to the front end. Treat it as a review item until the current version is checked.
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5794

6. Security or permission risk: Anthropic function limit fallback can return empty final response

  • Severity: high
  • Finding: Security or permission risk is backed by a source signal: Anthropic function limit fallback can return empty final response. Treat it as a review item until the current version is checked.
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5769

7. Security or permission risk: Python: Add tutorial for building a custom chat client / LLM provider

  • Severity: high
  • Finding: Security or permission risk is backed by a source signal: Python: Add tutorial for building a custom chat client / LLM provider. Treat it as a review item until the current version is checked.
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5505

8. Installation risk: python-1.2.1

  • Severity: medium
  • Finding: Installation risk is backed by a source signal: python-1.2.1. Treat it as a review item until the current version is checked.
  • User impact: First-time setup may fail or require extra isolation and rollback planning.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/releases/tag/python-1.2.1

9. Configuration risk: .NET: [Bug]: DurableTask: SuperstepState.AccumulatedEvents overflows CustomStatus 16 KB cap on multi-executor workflows…

  • Severity: medium
  • Finding: Configuration risk is backed by a source signal: .NET: [Bug]: DurableTask: SuperstepState.AccumulatedEvents overflows CustomStatus 16 KB cap on multi-executor workflows…. Treat it as a review item until the current version is checked.
  • User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5745

10. Configuration risk: Python: CosmosHistoryProvider Code interpreter tool calls are saved chunk by chunk

  • Severity: medium
  • Finding: Configuration risk is backed by a source signal: Python: CosmosHistoryProvider Code interpreter tool calls are saved chunk by chunk. Treat it as a review item until the current version is checked.
  • User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/issues/5793

11. Configuration risk: dotnet-1.5.0

  • Severity: medium
  • Finding: Configuration risk is backed by a source signal: dotnet-1.5.0. Treat it as a review item until the current version is checked.
  • User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/releases/tag/dotnet-1.5.0

12. Configuration risk: python-1.2.2

  • Severity: medium
  • Finding: Configuration risk is backed by a source signal: python-1.2.2. Treat it as a review item until the current version is checked.
  • User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/microsoft/agent-framework/releases/tag/python-1.2.2

Source: Doramagic discovery, validation, and Project Pack records

Community Discussion Evidence

These external discussion links are review inputs, not standalone proof that the project is production-ready.

Sources: 12 project-level external discussion links are exposed on this manual page.

Review before install: open the linked issues or discussions before treating the pack as ready for your environment.


Doramagic exposes project-level community discussion separately from official documentation. Review these links before using agent-framework with real data or production workflows.

Source: Project Pack community evidence and pitfall evidence