Doramagic Project Pack · Human Manual

crewAI


Installation and Setup

Related topics: Quick Start Guide, LLM Providers and Configuration


Overview

This guide covers the installation and setup procedures for CrewAI, a multi-agent automation framework. The installation process supports multiple methods including pip, UV package manager, and direct source installation. CrewAI requires Python 3.10 to 3.13 and uses modern dependency management practices to ensure consistent environments across development and production.

System Requirements

Python Version Compatibility

| Requirement | Specification |
| --- | --- |
| Minimum Python | 3.10 |
| Maximum Python | < 3.14 |
| Package Manager | UV (recommended) or pip |

The project enforces version constraints through pyproject.toml configuration files. The version range ensures compatibility with modern Python features while avoiding breaking changes from upcoming releases.
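
The same constraint can be checked programmatically; a minimal sketch, independent of CrewAI itself:

```python
import sys

def python_supported(version_info=sys.version_info) -> bool:
    """Return True if the interpreter falls in CrewAI's >=3.10, <3.14 range."""
    major, minor = version_info[0], version_info[1]
    return (3, 10) <= (major, minor) < (3, 14)

# e.g. on Python 3.12 this returns True; on 3.9 or 3.14 it returns False
supported = python_supported()
```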

Installation Methods

Standard Installation via pip

The primary method for installing CrewAI uses pip, Python's standard package manager:

pip install crewai

This installation includes the core CrewAI framework with essential dependencies. For users requiring additional tooling capabilities, the extended installation includes built-in tools:

pip install 'crewai[tools]'

Sources: lib/crewai/pyproject.toml

UV Package Manager Installation

UV is the recommended package manager for CrewAI projects due to its superior performance and dependency resolution capabilities.

pip install uv

After installing UV, scaffold a new crew or flow project with the CrewAI CLI:

crewai create crew <project_name> --skip_provider
crewai create flow <project_name> --skip_provider

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

Project Structure

After creating a new CrewAI project, the following directory structure is generated:

src/<project_name>/
├── __init__.py
├── crew.py
├── main.py
├── tools/
│   ├── custom_tool.py
│   └── __init__.py
└── config/
    ├── agents.yaml
    └── tasks.yaml

Core Files Description

| File | Purpose |
| --- | --- |
| main.py | Entry point for project execution |
| crew.py | Crew definition and agent orchestration logic |
| agents.yaml | Agent role, goal, and backstory configurations |
| tasks.yaml | Task descriptions and dependencies |
| tools/ | Custom tool implementations |
| .env | Environment variables and API keys |

Sources: README.md

Dependencies Management

Using UV for Dependency Operations

UV provides fast and reliable dependency management. The following commands handle common dependency tasks:

uv add <package>          # Add a new dependency
uv sync                  # Synchronize dependencies with lock file
uv lock                  # Update the lock file

Core Dependencies

The main crewai package includes these core dependencies:

  • pydantic - Data validation and settings management
  • crewai core modules - Agent orchestration and task management

Tools Dependencies

Additional packages are required for specific tool integrations:

| Tool | Required Package |
| --- | --- |
| Tavily Search | tavily-python |
| File Compression | Built-in |
| PDF Processing | Built-in |
| ArXiv Integration | Built-in |

Sources: lib/crewai-tools/pyproject.toml

Environment Configuration

Environment Variables Setup

Create a .env file in your project root to store sensitive configuration:

OPENAI_API_KEY=your_openai_api_key
TAVILY_API_KEY=your_tavily_api_key
SERPLY_API_KEY=your_serply_api_key
LINKUP_API_KEY=your_linkup_api_key
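
Tools such as python-dotenv load this file automatically; if you want to see what that amounts to, a minimal hand-rolled parser for the KEY=value format (illustrative only, not the loader CrewAI uses):

```python
import os

def load_env(text: str) -> dict:
    """Parse simple KEY=value lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition("=")
        env[key.strip()] = value.strip()
    return env

# In practice the text would come from reading the .env file
sample = "OPENAI_API_KEY=sk-test\n# comment\nTAVILY_API_KEY=tvly-test\n"
for key, value in load_env(sample).items():
    os.environ.setdefault(key, value)
```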

Configuration in agents.yaml

Define agent behavior through YAML configuration:

researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}.

Sources: README.md

Custom Tools Installation

Publishing Tools

Distribute custom tools within your organization or to the community:

crewai tool publish <tool_name>

Installing Tools

Install tools published by others or within your organization:

crewai tool install <tool_name>

Sources: lib/cli/src/crewai_cli/templates/tool/README.md

Installation Verification

Quick Verification Steps

After installation, verify the setup by running:

crewai run

This command auto-detects the project type from pyproject.toml and executes the crew or flow.

Memory Management Commands

CrewAI provides CLI commands for managing agent memories:

crewai reset-memories -a              # Reset all memories
crewai reset-memories -s              # Short-term only
crewai reset-memories -l              # Long-term only
crewai reset-memories -e              # Entity only
crewai reset-memories -kn             # Knowledge only
crewai reset-memories -akn            # Agent knowledge only

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

Development Workflow

graph TD
    A[Install CrewAI] --> B[Create Project]
    B --> C[Configure Agents]
    C --> D[Define Tasks]
    D --> E[Add Tools]
    E --> F[Set Environment Variables]
    F --> G[Run crewai run]
    G --> H[Test and Iterate]
    H --> I[Deploy]

Troubleshooting Common Issues

pyproject.toml Validation

The CLI validates pyproject.toml for proper CrewAI project structure. If validation fails:

  1. Verify crewai is listed in project dependencies
  2. Check TOML syntax correctness
  3. Ensure required configuration keys exist

Version Conflicts

If dependency conflicts occur:

  1. Use uv lock to regenerate lock file
  2. Verify Python version falls within 3.10-3.13 range
  3. Clear cache with uv cache clean

Sources: lib/crewai-core/src/crewai_core/project.py

Next Steps

After successful installation:

  1. Define Agents - Configure roles, goals, and backstories in config/agents.yaml
  2. Create Tasks - Define task descriptions and expected outputs in config/tasks.yaml
  3. Add Tools - Integrate custom or built-in tools for agent capabilities
  4. Implement Logic - Customize crew.py with specific orchestration requirements
  5. Test - Use crewai test for iterative testing

Summary

The CrewAI installation process supports multiple package managers and provides flexible project scaffolding. Key points:

  • Minimum requirement: Python 3.10
  • UV is the recommended package manager for modern workflows
  • Project structure separates configuration (YAML) from implementation (Python)
  • Custom tools can be published and installed through the CLI
  • Environment variables handle sensitive configuration

Sources: [lib/crewai/pyproject.toml](https://github.com/crewAIInc/crewAI/blob/main/lib/crewai/pyproject.toml)

Quick Start Guide

Related topics: Installation and Setup, Agents Architecture, Tasks and Task Management


This guide provides a comprehensive overview for setting up and running your first CrewAI project. CrewAI is a multi-agent automation framework that enables you to build sophisticated AI-powered workflows by composing agents, tasks, and crews.

Overview

The Quick Start Guide covers the essential steps to:

  • Install CrewAI and its dependencies
  • Scaffold a new crew project
  • Configure agents and tasks using YAML
  • Implement crew logic in Python
  • Execute and test your crew

Scope: This guide focuses on the standard crew workflow using the @CrewBase decorator pattern with YAML-based configuration files.

Prerequisites

| Requirement | Version |
| --- | --- |
| Python | >=3.10, <3.14 |
| Package Manager | UV (recommended) |

Sources: lib/cli/src/crewai_cli/templates/tool/README.md

Project Structure

A typical CrewAI project follows this directory layout:

my_project/
├── src/my_project/
│   ├── __init__.py
│   ├── main.py              # Entry point
│   ├── crew.py              # Crew definition
│   └── config/
│       ├── agents.yaml      # Agent configurations
│       └── tasks.yaml       # Task configurations
├── .env                      # Environment variables
└── pyproject.toml           # Project configuration

Sources: lib/crewai/README.md

Installation

Step 1: Install CrewAI CLI

pip install crewai

Step 2: Install UV (if not already installed)

UV is the recommended package manager for CrewAI projects.

pip install uv

Step 3: Create a New Crew Project

crewai create crew my_crew --skip_provider

Step 4: Install Project Dependencies

cd my_crew
crewai install

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

Project Components

Agent Configuration (agents.yaml)

Agents are defined in YAML format with role, goal, and backstory:

researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis
  backstory: >
    You're a meticulous analyst with a keen eye for detail.

Sources: lib/crewai/README.md

Task Configuration (tasks.yaml)

Tasks define what each agent should accomplish:

research_task:
  description: >
    Research the latest developments in {topic}
  expected_output: >
    A list of key findings with sources and implications.
  agent: researcher

reporting_task:
  description: >
    Create a comprehensive report on {topic}
  expected_output: >
    A fully fledged report with the main topics, each with a full section
    of information. Formatted as markdown.
  agent: reporting_analyst
  output_file: report.md

Sources: lib/crewai/README.md

Crew Implementation

Crew Class (crew.py)

The crew class uses the @CrewBase decorator to bind agents and tasks:

from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List

@CrewBase
class MyProjectCrew():
    """My project crew"""
    
    agents: List[BaseAgent]
    tasks: List[Task]

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            tools=[SerperDevTool()]
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(config=self.tasks_config['research_task'])

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
            output_file='report.md'
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )

Sources: lib/cli/src/crewai_cli/templates/crew/crew.py

Entry Point (main.py)

The main entry point kicks off the crew:

from my_project.crew import MyProjectCrew

def run():
    inputs = {
        "topic": "AI LLMs"
    }
    
    crew = MyProjectCrew()
    result = crew.crew().kickoff(inputs=inputs)
    print(result)

if __name__ == "__main__":
    run()

Sources: lib/cli/src/crewai_cli/templates/crew/main.py

Workflow Diagram

graph TD
    A[Start: crewai create crew] --> B[Install Dependencies]
    B --> C[Configure agents.yaml]
    C --> D[Configure tasks.yaml]
    D --> E[Implement crew.py]
    E --> F[Implement main.py]
    F --> G[crewai run]
    G --> H{Crew Execution}
    H --> I[Agents Complete Tasks]
    I --> J[Output Generated]
    J --> K[End]
    
    style A fill:#4CAF50,color:#fff
    style G fill:#2196F3,color:#fff
    style K fill:#FF5722,color:#fff

Development Best Practices

| Practice | Description |
| --- | --- |
| YAML-first configuration | Define agents and tasks in YAML, keep crew classes minimal |
| Use structured output | Use output_pydantic for data flowing between tasks |
| Enable memory | For crews benefiting from cross-session learning |
| Sequential vs Hierarchical | Sequential for linear workflows; hierarchical for dynamic delegation |
| Test frequently | Use crewai test to evaluate performance |

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

Common CLI Commands

| Command | Description |
| --- | --- |
| crewai create crew <name> | Create a new crew project |
| crewai run | Execute the crew |
| crewai test | Test crew (2 iterations, gpt-4o-mini default) |
| crewai test -n 5 -m gpt-4o | Custom test iterations and model |
| crewai train -n 5 -f training.json | Train the crew |
| crewai reset-memories -a | Reset all memories |
| crewai log-tasks-outputs | Show latest task outputs |

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

Running Your Crew

Execute your crew using the CLI:

crewai run

Or run the main.py file directly:

python src/my_project/main.py

Customization

Adding Tools to Agents

@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher'],
        verbose=True,
        tools=[SerperDevTool()]  # Add tools here
    )

Setting Custom LLM Providers

Use the crewai.LLM class or string shorthand:

llm="openai/gpt-4o"
llm="anthropic/claude-3-sonnet"

Memory and Knowledge

Enable memory in your crew for cross-session learning:

@crew
def crew(self) -> Crew:
    return Crew(
        agents=self.agents,
        tasks=self.tasks,
        memory=True,  # Enable memory
        verbose=True,
    )

Common Pitfalls

| Pitfall | Solution |
| --- | --- |
| Using ChatOpenAI() directly | Use crewai.LLM or string shorthand |
| Forgetting type hints | Add # type: ignore[index] for YAML config access |
| Token limit issues | Set respect_context_window=True |
| API throttling | Configure max_rpm rate limiting |

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

Next Steps

After completing this Quick Start Guide:

  1. Explore advanced agent configurations for memory, guardrails, and custom LLMs
  2. Learn about Flows for multi-crew orchestration
  3. Review tool integrations for additional capabilities
  4. Join the Discord community for support

Sources: [lib/cli/src/crewai_cli/templates/tool/README.md](https://github.com/crewAIInc/crewAI/blob/main/lib/cli/src/crewai_cli/templates/tool/README.md)

Agents Architecture

Related topics: Tasks and Task Management, Crews and Crew Orchestration, LLM Providers and Configuration


Overview

The CrewAI Agents Architecture provides a flexible, modular framework for creating and orchestrating autonomous AI agents. The architecture is designed around the concept of agents as independent entities that can collaborate within crews to accomplish complex tasks through both autonomous decision-making and structured workflows.

Agents in CrewAI are composed of several key components:

| Component | Purpose |
| --- | --- |
| BaseAgent | Abstract base class defining the agent interface |
| Agent (Core) | Concrete agent implementation with LLM integration |
| CrewAgentExecutor | Handles agent execution within crew context |
| Parser | Processes LLM outputs and extracts actions |
| Guardrails | Validates agent outputs for safety and accuracy |

Sources: lib/crewai/src/crewai/agents/agent_builder/base_agent.py:1-50

Architecture Diagram

graph TD
    A[User Defined Agent] --> B[BaseAgent]
    B --> C[Agent Core]
    C --> D[CrewAgentExecutor]
    D --> E[Parser]
    E --> F[LLM]
    F --> G[Tool Calls]
    G --> H[Guardrails]
    H --> D
    
    I[Memory] -.-> C
    J[Knowledge] -.-> C
    K[Tools] --> G

Agent Definition

Agents are defined through YAML configuration files and Python decorators. Each agent requires a minimum of three attributes:

# config/agents.yaml
researcher:
  role: "Senior Data Researcher"
  goal: "Uncover cutting-edge developments in {topic}"
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}. Known for your ability to find the most relevant
    information and present it in a clear and concise manner.

Core Agent Attributes

| Attribute | Type | Required | Description |
| --- | --- | --- | --- |
| role | string | Yes | Defines the agent's function within the crew |
| goal | string | Yes | The specific objective the agent aims to achieve |
| backstory | string | Yes | Context that shapes the agent's behavior and decision-making |
| tools | List[BaseTool] | No | Tools available to the agent for task execution |
| verbose | boolean | No | Enable detailed logging (default: False) |
| llm | LLM | No | Custom language model configuration |
| memory | boolean | No | Enable short/long-term memory (default: True) |
| max_iter | int | No | Maximum iterations before forcing a response |
| max_rpm | int | No | Rate limiting for API calls |

Sources: lib/crewai/src/crewai/agent/core.py:1-100

BaseAgent Class

The BaseAgent serves as the foundational abstract class for all agent implementations:

from crewai.agents.agent_builder.base_agent import BaseAgent

# Simplified view of the interface; the actual class defines more members.
class BaseAgent:
    @property
    def role(self) -> str:
        """The role of the agent."""
        ...

    @property
    def goal(self) -> str:
        """The goal of the agent."""
        ...

    @property
    def backstory(self) -> str:
        """The backstory of the agent."""
        ...

Key Methods

| Method | Return Type | Description |
| --- | --- | --- |
| execute_task(task, context, tools) | TaskOutput | Execute a specific task |
| set_memory(memory) | None | Configure agent memory |
| set_verbose(verbose) | None | Toggle verbose logging |
| create_agent_executor() | CrewAgentExecutor | Initialize execution context |

Sources: lib/crewai/src/crewai/agents/agent_builder/base_agent.py:50-150

Agent Core Implementation

The core agent implementation provides the main interface for agent behavior:

from typing import List

from crewai import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.project import CrewBase, agent
from crewai_tools import SerperDevTool

@CrewBase
class LatestAiDevelopmentCrew():
    agents: List[BaseAgent]

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            tools=[SerperDevTool()]
        )

LLM Configuration

Agents can be configured with custom LLM providers:

config=dict(
    llm=dict(
        provider="ollama",
        config=dict(
            model="llama2",
            temperature=0.5,
        ),
    ),
)

Supported providers include: openai, anthropic, google, ollama, azure, bedrock

Sources: lib/crewai/src/crewai/agent/core.py:100-200

CrewAgentExecutor

The CrewAgentExecutor manages agent execution within a crew context, handling:

  • Task delegation and execution flow
  • Tool invocation and result processing
  • Guardrail validation
  • Response formatting
graph LR
    A[Task Assigned] --> B[Execute with Tools]
    B --> C{Guardrails Check}
    C -->|Pass| D[Return Result]
    C -->|Fail| E[Retry or Fallback]

Execution Parameters

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| task | Task | Required | The task to execute |
| context | str | None | Shared context from previous tasks |
| tools | List[BaseTool] | [] | Tools available for this execution |

Sources: lib/crewai/src/crewai/agents/crew_agent_executor.py:1-100

Parser

The Parser component processes LLM outputs and extracts structured actions:

from crewai.agents.parser import CrewAgentParser

parser = CrewAgentParser()
result = parser.parse(llm_output)

Parser Responsibilities

| Responsibility | Description |
| --- | --- |
| Action Extraction | Identify tool calls from LLM responses |
| Format Normalization | Convert LLM output to standardized format |
| Error Handling | Manage malformed outputs gracefully |
| Validation | Ensure parsed actions match expected schemas |

Sources: lib/crewai/src/crewai/agents/parser.py:1-80
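
A simplified sketch of action extraction from a ReAct-style completion. The `Action:` / `Action Input:` labels follow the common ReAct convention; the real parser handles many more edge cases:

```python
import re

ACTION_RE = re.compile(
    r"Action:\s*(?P<tool>.+?)\s*Action\s*Input:\s*(?P<input>.+)", re.DOTALL
)

def extract_action(llm_output: str):
    """Pull a (tool, input) pair out of a ReAct-style completion, or None."""
    match = ACTION_RE.search(llm_output)
    if match is None:
        return None
    return match.group("tool").strip(), match.group("input").strip()

sample = "Thought: I should search.\nAction: web_search\nAction Input: latest AI news"
```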

Guardrails

Guardrails provide validation layers for agent outputs. The framework includes built-in guardrails:

HallucinationGuardrail

Validates that agent outputs are faithful to provided context:

from crewai.tasks.hallucination_guardrail import HallucinationGuardrail

guardrail = HallucinationGuardrail(
    llm=agent.llm,
    context="Reference document content",
    threshold=7.0,
    tool_response="API response data"
)

| Parameter | Type | Default | Description |
| --- | --- | --- | --- |
| llm | LLM | Required | Language model for evaluation |
| context | str | None | Reference context for validation |
| threshold | float | None | Minimum faithfulness score |
| tool_response | str | "" | Tool response for additional context |

Sources: lib/crewai/src/crewai/tasks/hallucination_guardrail.py:1-80

Agent Creation with Decorators

CrewAI uses Python decorators for declarative agent definition:

from typing import List

from crewai import Agent, Crew, Process, Task
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool

@CrewBase
class MyCrew():
    """My Crew Description"""
    agents: List[BaseAgent]
    tasks: List[Task]

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            tools=[SerperDevTool()]
        )

    @agent
    def reporting_analyst(self) -> Agent:
        return Agent(
            config=self.agents_config['reporting_analyst'],
            verbose=True
        )

    @task
    def research_task(self) -> Task:
        return Task(config=self.tasks_config['research_task'])

    @task
    def reporting_task(self) -> Task:
        return Task(
            config=self.tasks_config['reporting_task'],
            output_file='report.md'
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True
        )

Tool Integration

Agents access external capabilities through tools:

Built-in Tools

| Tool | Purpose |
| --- | --- |
| SerperDevTool | Web search functionality |
| CodeDocsSearchTool | Search code documentation |
| DirectorySearchTool | Search within directories |
| FileWriterTool | Write content to files |
| TavilyExtractorTool | Extract content from URLs |
| ApifyActorsTool | Execute Apify actors |
| LinkupSearchTool | Search via Linkup API |

Tool Configuration

from crewai_tools import SerperDevTool, FileWriterTool

researcher = Agent(
    role="Research Analyst",
    goal="Gather and synthesize information",
    backstory="Expert researcher with access to web search",
    tools=[SerperDevTool(), FileWriterTool()],
    verbose=True
)

Memory and Knowledge

Agents can maintain state across interactions:

Memory Types

| Type | Scope | Persistence |
| --- | --- | --- |
| Short-term | Current session | Session lifetime |
| Long-term | Across sessions | Database storage |
| Entity | Entity tracking | Automatic extraction |
| Knowledge | Domain knowledge | Vector store |

Memory Configuration

crew = Crew(
    agents=[researcher, analyst],
    tasks=[task1, task2],
    memory=True,           # Enable all memory types
    embedder={
        "provider": "openai",
        "config": {"model": "text-embedding-ada-002"}
    }
)

Execution Flow

sequenceDiagram
    participant User
    participant Crew
    participant Agent
    participant Executor
    participant LLM
    participant Tool
    
    User->>Crew: kickoff()
    Crew->>Agent: execute_task(task)
    Agent->>Executor: run()
    Executor->>LLM: generate_response()
    LLM->>Tool: tool_call()
    Tool-->>LLM: result
    LLM-->>Executor: response
    Executor->>Executor: validate_guardrails()
    Executor-->>Agent: TaskOutput
    Agent-->>Crew: result
    Crew-->>User: final_output

Best Practices

Agent Design

  1. Clear Role Definition: Define distinct, non-overlapping roles for each agent
  2. Specific Goals: Ensure each agent has a well-defined, achievable goal
  3. Rich Backstory: Provide context that guides agent behavior appropriately
  4. Appropriate Tools: Grant only necessary tools to minimize unnecessary complexity

Configuration Guidelines

| Aspect | Recommendation |
| --- | --- |
| Verbose Mode | Enable during development, disable in production |
| Rate Limiting | Set max_rpm to avoid API throttling |
| Context Window | Use respect_context_window=True for long conversations |
| Iterations | Set max_iter to prevent infinite loops |

CLI Commands for Agents

# Create new agent
crewai create agent <name>

# Test agent
crewai test -n 5 -m gpt-4o

# Reset memories
crewai reset-memories -a              # All memories
crewai reset-memories -s              # Short-term only
crewai reset-memories -l              # Long-term only

Summary

The CrewAI Agents Architecture provides a comprehensive framework for building multi-agent systems:

  • BaseAgent defines the interface all agents must implement
  • Agent Core provides the concrete implementation with LLM integration
  • CrewAgentExecutor manages execution within crew context
  • Parser handles LLM output processing
  • Guardrails ensure output quality and safety
  • Decorators enable declarative agent definition
  • Tools extend agent capabilities beyond LLM-only responses

Sources: [lib/crewai/src/crewai/agents/agent_builder/base_agent.py:1-50]()

Tasks and Task Management

Related topics: Agents Architecture, Crews and Crew Orchestration


Overview

Tasks are the fundamental unit of work in the CrewAI framework: discrete pieces of work that agents execute within a crew. Each task encapsulates a description of what needs to be accomplished, the expected output format, and optional configurations for output validation, file handling, and dependency management.

Tasks serve as the bridge between agent capabilities and crew objectives, enabling complex multi-agent workflows through declarative configuration and structured output handling. The task management system provides both synchronous execution (via Task) and conditional execution (via ConditionalTask) to support various workflow patterns.


Sources: [lib/crewai/src/crewai/task.py](https://github.com/crewAIInc/crewAI/blob/main/lib/crewai/src/crewai/task.py) and [lib/crewai/src/crewai/tasks/__init__.py](https://github.com/crewAIInc/crewAI/blob/main/lib/crewai/src/crewai/tasks/__init__.py)

Crews and Crew Orchestration

Related topics: Agents Architecture, Tasks and Task Management, Flows - Event-Driven Workflows, LLM Providers and Configuration


Overview

A Crew in CrewAI is a collaborative system of autonomous AI agents working together to accomplish complex tasks. Crew orchestration refers to the mechanism by which these agents coordinate, delegate, and execute tasks based on a defined process type.

Crews are the core building blocks for multi-agent automation in CrewAI. They enable sophisticated workflows where multiple specialized agents combine their capabilities to produce results that exceed what any single agent could achieve alone.

The Crew class serves as the central orchestrator, managing agents, tasks, processes, and shared resources like memory and tools.

Sources: lib/crewai/src/crewai/crew.py

Crew Architecture

Core Components

A Crew consists of four primary components that work together to enable collaborative AI task execution:

| Component | Purpose |
| --- | --- |
| Agents | Autonomous AI entities with specific roles, goals, and tool access |
| Tasks | Defined work items with descriptions, expected outputs, and assignments |
| Process | Orchestration strategy determining how tasks are executed |
| Memory | Shared storage for context, learnings, and inter-agent communication |

Architecture Diagram

graph TD
    A[Crew] --> B[Agents]
    A --> C[Tasks]
    A --> D[Process]
    A --> E[Memory]
    
    B --> B1[Agent 1]
    B --> B2[Agent 2]
    B --> BN[Agent N]
    
    C --> C1[Task 1]
    C --> C2[Task 2]
    C --> CN[Task N]
    
    D --> D1[Sequential]
    D --> D2[Hierarchical]
    
    E --> E1[Short-term]
    E --> E2[Long-term]
    E --> E3[Entity]
    E --> E4[Knowledge]

Process Types

CrewAI supports two primary process types for orchestrating agent collaboration:

Sequential Process

In the sequential process, tasks are executed one after another in a predefined order. Each task must complete before the next begins. This is ideal for linear workflows where output from one task feeds into the next.

from crewai import Crew, Process

crew = Crew(
    agents=self.agents,
    tasks=self.tasks,
    process=Process.sequential,
    verbose=True,
)

Use Cases:

  • Research pipelines where findings accumulate
  • Report generation requiring sequential information gathering
  • Data processing chains where each step depends on the previous

Sources: lib/cli/src/crewai_cli/templates/crew/README.md
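
The sequential process is conceptually a fold over the task list, threading each output into the next task's context. A simplified sketch of the idea, not the actual Crew internals:

```python
def run_sequential(tasks, execute):
    """Run tasks in order; each task sees the outputs accumulated so far."""
    context = []
    for task in tasks:
        output = execute(task, context)  # each step may read earlier outputs
        context.append(output)
    return context[-1] if context else None

# Toy executor standing in for real agent execution
result = run_sequential(
    ["research", "report"],
    lambda task, ctx: f"{task} done (after {len(ctx)} prior tasks)",
)
```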

Hierarchical Process

The hierarchical process introduces an automated manager agent that coordinates the crew. The manager delegates tasks, validates results, and ensures proper workflow execution without manual intervention.

from crewai import Crew, Process

crew = Crew(
    agents=self.agents,
    tasks=self.tasks,
    process=Process.hierarchical,
    verbose=True,
)

Use Cases:

  • Complex projects requiring dynamic task delegation
  • Scenarios where a manager role naturally exists
  • Workflows needing result validation between steps

Sources: lib/crewai/README.md

Process Selection Guide

| Criteria | Sequential | Hierarchical |
| --- | --- | --- |
| Task Dependencies | Fixed order | Dynamic delegation |
| Manager Required | No | Yes (auto-created) |
| Flexibility | Low | High |
| Best For | Linear pipelines | Complex orchestration |
| Overhead | Minimal | Higher due to management |

Crew Configuration

Using the @CrewBase Decorator

Crews are defined using Python decorators combined with YAML configuration files. This approach separates concerns between code logic and agent/task definitions.

from typing import List

from crewai import Agent, Crew, Process, Task
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool

@CrewBase
class LatestAiDevelopmentCrew():
    """LatestAiDevelopment crew"""
    agents: List[BaseAgent]
    tasks: List[Task]

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            tools=[SerperDevTool()]
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
        )

    @crew
    def crew(self) -> Crew:
        return Crew(
            agents=self.agents,
            tasks=self.tasks,
            process=Process.sequential,
            verbose=True,
        )

Sources: lib/crewai/README.md

YAML Configuration Structure

#### agents.yaml

researcher:
  role: >
    {topic} Senior Data Researcher
  goal: >
    Uncover cutting-edge developments in {topic}
  backstory: >
    You're a seasoned researcher with a knack for uncovering the latest
    developments in {topic}.

reporting_analyst:
  role: >
    {topic} Reporting Analyst
  goal: >
    Create detailed reports based on {topic} data analysis

#### tasks.yaml

research_task:
  description: >
    Research the latest developments in {topic}
  expected_output: >
    A comprehensive report on {topic} developments
  agent: researcher

reporting_task:
  description: >
    Create a detailed report based on research findings
  expected_output: >
    A fully fleshed report with main topics
  agent: reporting_analyst

Sources: lib/cli/src/crewai_cli/templates/crew/README.md

Agent Management

Agent Roles and Responsibilities

Agents within a crew are defined by four key attributes:

| Attribute | Description |
|---|---|
| `role` | Defines the agent's function within the crew |
| `goal` | The specific objective the agent works toward |
| `backstory` | Context that shapes the agent's behavior and perspective |
| `tools` | Capabilities the agent can use to accomplish tasks |

Agent Creation Pattern

@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher'],
        verbose=True,
        tools=[SerperDevTool()]
    )

Agents receive their configuration from YAML and can be augmented with additional tools or settings at the point of creation.

Sources: lib/crewai/src/crewai/crew.py

Task Management

Task Definition

Tasks represent units of work that agents execute. Each task has:

| Property | Purpose |
|---|---|
| `description` | What needs to be accomplished |
| `expected_output` | The format and content of deliverables |
| `agent` | Which agent executes the task |
| `output_file` | Optional file for storing results |
| `dependencies` | Tasks that must complete first |

Task with Output Handling

@task
def reporting_task(self) -> Task:
    return Task(
        config=self.tasks_config['reporting_task'],
        output_file='report.md'
    )
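Dependencies between tasks are expressed with the `context` parameter, which injects the outputs of the listed tasks into the dependent task. A minimal sketch (the task descriptions here are illustrative, not from the templates):

```python
from crewai import Task

research = Task(
    description="Research the latest developments in AI agents",
    expected_output="Bullet-point research notes",
)
report = Task(
    description="Write a report from the research notes",
    expected_output="A markdown report",
    context=[research],       # research must finish first; its output is injected
    output_file="report.md",
)
```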

Task Execution Flow

graph LR
    A[Task Created] --> B{Process Type}
    B -->|Sequential| C[Execute in Order]
    B -->|Hierarchical| D[Manager Delegates]
    
    C --> E[Agent 1 Executes]
    E --> F[Agent 2 Executes]
    F --> G[Complete]
    
    D --> H[Manager Assigns Task]
    H --> I[Agent Executes]
    I --> J[Manager Validates]
    J --> K[Complete]

Memory and Context

Memory Types

Crews can maintain different types of memory to preserve context across executions:

| Memory Type | Scope | Purpose |
|---|---|---|
| Short-term | Current session | Temporary working memory |
| Long-term | Across sessions | Persistent learnings |
| Entity | Entity tracking | Knowledge graph of entities |
| Knowledge | Structured data | Domain-specific grounding |
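Memory is enabled on the crew itself. A sketch in the same decorator style as the earlier examples (`memory=True` is the relevant switch; memory storage backends use their defaults here):

```python
@crew
def crew(self) -> Crew:
    return Crew(
        agents=self.agents,
        tasks=self.tasks,
        process=Process.sequential,
        memory=True,  # enable short-term, long-term, and entity memory
    )
```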

Memory Management Commands

crewai reset-memories -a              # Reset all memories
crewai reset-memories -s              # Short-term only
crewai reset-memories -l              # Long-term only
crewai reset-memories -e              # Entity only
crewai reset-memories -kn             # Knowledge only
crewai reset-memories -akn            # Agent knowledge only

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

Crew Context Utilities

The CrewContext class provides utilities for managing scope-based context within crews. Scopes allow hierarchical organization of memory and context:

def join_scope_paths(root: str | None, inner: str | None) -> str:
    """
    Combines two scope path components.
    
    Examples:
        join_scope_paths("/crew/test", "/market-trends") -> '/crew/test/market-trends'
        join_scope_paths("/crew/test", None) -> '/crew/test'
    """

Sources: lib/crewai/src/crewai/utilities/crew/crew_context.py

Execution Flow

Crew Kickoff

The main entry point for executing a crew is the kickoff method:

from latest_ai_development.crew import LatestAiDevelopmentCrew

def run():
    inputs = {'topic': 'AI Agents'}
    LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)

Step Execution

The StepExecutor handles the actual execution of agent steps within the crew context:

sequenceDiagram
    participant Crew
    participant StepExecutor
    participant Agent
    participant Task
    
    Crew->>StepExecutor: Execute Task
    StepExecutor->>Agent: Call Agent with Context
    Agent->>Task: Perform Action
    Task-->>Agent: Return Result
    Agent-->>StepExecutor: Step Output
    StepExecutor-->>Crew: Execution Complete

Sources: lib/crewai/src/crewai/agents/step_executor.py

Verbose Mode

During development, enable verbose mode to see detailed execution logs:

@crew
def crew(self) -> Crew:
    return Crew(
        agents=self.agents,
        tasks=self.tasks,
        verbose=True,  # Enable for development
    )

Disable verbose mode in production for cleaner outputs.

Crew Execution Options

Running a Crew

crewai run                  # Run crew or flow (auto-detects from pyproject.toml)

Or directly via Python:

python src/my_project/main.py

Testing and Training

crewai test                           # Test crew (default: 2 iterations, gpt-4o-mini)
crewai test -n 5 -m gpt-4o           # Custom iterations and model
crewai train -n 5 -f training.json   # Train crew

Debugging

crewai log-tasks-outputs              # Show latest task outputs
crewai replay -t <task_id>            # Replay from specific task

Best Practices

Configuration Guidelines

  1. YAML-first configuration: Define agents and tasks in YAML, keep crew classes minimal
  2. Use structured output (output_pydantic) for data that flows between tasks or crews
  3. Use guardrails to validate task outputs programmatically
  4. Enable memory for crews that benefit from cross-session learning
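For example, structured output between tasks is declared with a Pydantic model via `output_pydantic` (a sketch; the `ReportData` model name and fields are hypothetical):

```python
from pydantic import BaseModel
from crewai import Task

class ReportData(BaseModel):
    title: str
    findings: list[str]

# The task's raw LLM output is parsed and validated into ReportData,
# available afterwards as task.output.pydantic.
report_task = Task(
    description="Summarize the research findings",
    expected_output="A structured report",
    output_pydantic=ReportData,
)
```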

Process Selection

| Workflow Type | Recommended Process |
|---|---|
| Linear data pipeline | Sequential |
| Research and report | Sequential |
| Multi-agent collaboration | Hierarchical |
| Dynamic task delegation | Hierarchical |
| Complex multi-stage projects | Hierarchical with Flows |

Performance Considerations

| Setting | Purpose |
|---|---|
| `max_rpm` | Rate limiting to avoid API throttling |
| `respect_context_window=True` | Auto-handle token limits |
| `verbose=False` | Reduce logging overhead in production |

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

Common Patterns

Multi-Crew Orchestration with Flows

For complex pipelines involving multiple crews:

graph TD
    A[Flow Start] --> B[Crew 1]
    B --> C{Condition?}
    C -->|Path A| D[Crew 2]
    C -->|Path B| E[Crew 3]
    D --> F[Output]
    E --> F

Use @start, @listen, and @router decorators for complex flow orchestration.

Crew with Tools

from crewai_tools import SerperDevTool

@agent
def researcher(self) -> Agent:
    return Agent(
        config=self.agents_config['researcher'],
        tools=[SerperDevTool()]  # Attach tools to agent
    )

Summary

The Crew orchestration system in CrewAI provides a flexible framework for coordinating multiple AI agents. Key takeaways:

  • Crews are the primary unit of multi-agent collaboration
  • Processes (Sequential/Hierarchical) define how tasks are coordinated
  • Agents are specialized roles with specific goals and tools
  • Tasks represent units of work with dependencies and expected outputs
  • Memory enables context preservation across executions
  • YAML configuration keeps agent/task definitions separate from code

This architecture enables everything from simple sequential pipelines to complex hierarchical multi-agent systems with dynamic task delegation.

Sources: lib/cli/src/crewai_cli/templates/crew/README.md

Flows - Event-Driven Workflows

Related topics: Crews and Crew Orchestration, Agents Architecture


Overview

Flows in CrewAI provide an event-driven architecture for orchestrating complex, multi-step AI workflows. They enable precise control over execution order, conditional branching, and state management, complementing the autonomous agent orchestration provided by Crews.

Flows are designed for scenarios requiring sequential execution, conditional logic, state persistence, and event-based triggers. Unlike Crews that operate autonomously with agents collaborating freely, Flows provide deterministic workflow patterns where execution follows explicit routing rules.

Sources: lib/crewai/README.md

Core Concepts

Flow Architecture

A Flow is a Python class that extends the Flow base class, decorated with methods that define the workflow graph:

graph TD
    A[Start] --> B[Method A]
    B --> C{Decision}
    C -->|Path 1| D[Method B]
    C -->|Path 2| E[Method C]
    D --> F[End]
    E --> F

Key Components

| Component | Purpose |
|---|---|
| `Flow` | Base class for all flows |
| `@start()` | Marks methods as entry points |
| `@listen()` | Triggers method execution after another completes |
| `@router()` | Implements conditional branching logic |
| `@human_feedback()` | Pauses execution for user input |

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

Flow Execution Model

Start Methods

Methods decorated with @start() execute immediately when the flow begins. Multiple @start() decorators can be defined, causing parallel execution:

from crewai.flow.flow import Flow, start, listen

class MyFlow(Flow):
    @start()
    def begin(self):
        return "initial data"

    @start()
    def begin_parallel(self):
        return "parallel data"

Listen Decorators

The @listen() decorator binds a method to the completion of another method. The decorated method receives the output of the triggering method as its argument:

from crewai.flow.flow import Flow, start, listen

class ResearchFlow(Flow):
    @start()
    def set_topic(self):
        return "AI Agents"

    @listen(set_topic)
    def do_research(self, topic):
        # self.state.topic is available
        result = ResearchCrew().crew().kickoff(
            inputs={"topic": topic}
        )
        return result.raw

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

State Management

Structured State with Pydantic

Flows support type-safe state management using Pydantic models. Define a state class that inherits from BaseModel:

from crewai.flow.flow import Flow, start, listen
from pydantic import BaseModel

class ResearchState(BaseModel):
    topic: str = ""
    research: str = ""
    report: str = ""

class ResearchFlow(Flow[ResearchState]):
    @start()
    def set_topic(self):
        self.state.topic = "AI Agents"

    @listen(set_topic)
    def do_research(self):
        result = ResearchCrew().crew().kickoff(
            inputs={"topic": self.state.topic}
        )
        self.state.research = result.raw
        return self.state.research

    @listen(do_research)
    def write_report(self, research_data):
        self.state.report = f"# Report on {self.state.topic}\n\n{research_data}"
        return self.state.report

Benefits of structured state:

  • Type safety across method boundaries
  • IDE autocompletion for state fields
  • Validation of state transitions
  • Persistence of state between executions

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

State Flow Diagram

graph LR
    A[set_topic] --> B["State Update<br/>topic: 'AI Agents'"]
    B --> C[do_research]
    C --> D[State Update<br/>research: data]
    D --> E[write_report]
    E --> F[State Update<br/>report: content]

Conditional Routing

Router Decorator

The @router() decorator enables conditional branching based on method output. Routers return string labels that determine which @listen() methods execute:

from crewai.flow.flow import Flow, start, listen, router

class DocumentProcessingFlow(Flow):
    @start()
    def receive_document(self):
        return {"type": "image", "path": "/path/to/image.png"}

    @router(receive_document)
    def classify_document(self, doc):
        if doc["type"] == "image":
            return "image_processing"
        elif doc["type"] == "text":
            return "text_processing"
        return "unsupported"

    @listen("image_processing")
    def process_image(self, doc):
        return f"Processed image: {doc['path']}"

    @listen("text_processing")
    def process_text(self, doc):
        return f"Processed text: {doc['path']}"

    @listen("unsupported")
    def handle_unsupported(self, doc):
        return f"Unsupported document type: {doc['type']}"

Routing Flow Diagram

graph TD
    A[receive_document] --> B{classify_document}
    B -->|image_processing| C[process_image]
    B -->|text_processing| D[process_text]
    B -->|unsupported| E[handle_unsupported]
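The label-based dispatch behind `@router()` and `@listen()` can be sketched in plain Python (an illustration of the mechanism, not crewAI's actual implementation):

```python
# Minimal registry mapping router labels to listener functions.
listeners: dict[str, list] = {}

def listen(label):
    def register(fn):
        listeners.setdefault(label, []).append(fn)
        return fn
    return register

@listen("image_processing")
def process_image(doc):
    return f"Processed image: {doc['path']}"

@listen("text_processing")
def process_text(doc):
    return f"Processed text: {doc['path']}"

def dispatch(label, doc):
    # Fire every listener registered for the label the router returned.
    return [fn(doc) for fn in listeners.get(label, [])]

results = dispatch("image_processing", {"path": "/path/to/image.png"})
```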

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

Event System Integration

Event-Driven Architecture

Flows integrate with CrewAI's event system to enable reactive execution patterns. The flow_serializer.py module provides introspection capabilities for visualizing flow structures:

from crewai.flow.flow_serializer import flow_structure

structure = flow_structure(MyFlow)
print(structure["name"])  # Flow class name
print(structure["methods"])  # All decorated methods
print(structure["edges"])  # Connections between methods

Event Categories

Flows support integration with multiple event categories:

| Category | Description |
|---|---|
| Flow execution | Start, completion, and error events |
| Agent execution | Individual agent state changes |
| Task management | Task lifecycle events |
| Tool usage | Tool invocation events |
| Safety guardrails | Validation and compliance events |

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

Flow API Reference

Flow Base Class

class Flow[StateType]:
    """Base class for all flows."""
    
    state: StateType  # Typed state instance
    
    def kickoff(self, inputs: dict | None = None) -> Any:
        """Execute the flow from start methods."""
    
    async def kickoff_async(self, inputs: dict | None = None) -> Any:
        """Execute the flow asynchronously."""

Decorators

| Decorator | Parameters | Returns | Description |
|---|---|---|---|
| `@start()` | - | None | Marks entry point method |
| `@listen()` | method | None | Binds to method completion |
| `@router()` | method | str | Returns routing label |
| `@human_feedback()` | prompt | str | Requests user input |

Method Information Types

The flow_serializer.py module defines MethodInfo for introspecting flow methods:

class MethodInfo(TypedDict, total=False):
    name: str
    type: str  # start, listen, router, start_router
    trigger_methods: list[str]
    condition_type: str | None  # AND, OR
    router_paths: list[str]
    has_human_feedback: bool

Sources: lib/crewai/src/crewai/flow/flow_serializer.py

Flow Structure Serialization

Introspection for UI Rendering

The flow_structure() function analyzes a Flow class and returns a JSON-serializable dictionary:

from crewai.flow.flow_serializer import flow_structure

class MyFlow(Flow):
    @start()
    def begin(self):
        return "started"

    @listen(begin)
    def process(self):
        return "done"

structure = flow_structure(MyFlow)
# Returns:
# {
#     "name": "MyFlow",
#     "methods": [...],
#     "edges": [...],
#     "state_schema": {...}
# }

This serialization enables the CrewAI Studio UI to render visual flow graphs.

Sources: lib/crewai/src/crewai/flow/flow_serializer.py

Integration with Crews

Calling Crews from Flows

Flows can invoke Crews for agent-based task execution:

class ResearchFlow(Flow[ResearchState]):
    @start()
    def set_topic(self):
        self.state.topic = "AI Agents"

    @listen(set_topic)
    def do_research(self):
        crew = ResearchCrew().crew()
        result = crew.kickoff(inputs={"topic": self.state.topic})
        self.state.research = result.raw
        return result.raw

Flow-to-Crew Communication

graph TD
    A[Flow Start] --> B[Set Topic]
    B --> C[Crew Kickoff]
    C --> D[Agent 1]
    C --> E[Agent 2]
    D --> F[Task Complete]
    E --> G[Task Complete]
    F --> H[Flow Resume]
    G --> H
    H --> I[Process Results]

Best Practices

When to Use Flows

| Use Case | Recommendation |
|---|---|
| Linear workflows with clear steps | Sequential Flow |
| Dynamic agent delegation | Hierarchical Crew |
| Multi-crew orchestration | Flow with Crew calls |
| Conditional branching | Router-based Flow |
| Human-in-the-loop | Flow with `@human_feedback()` |

Design Guidelines

  1. Use structured state (Pydantic models) over unstructured dicts for type safety
  2. Prefer Flows for multi-crew orchestration when complex pipelines are needed
  3. Use @start() with multiple methods only when parallel execution is required
  4. Keep router labels descriptive for maintainable flow graphs
  5. Enable verbose mode during development, disable in production

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

Running Flows

CLI Commands

# Run crew or flow (auto-detects from pyproject.toml)
crewai run

# Legacy flow execution
crewai flow kickoff

Programmatic Execution

from my_flow import ResearchFlow

flow = ResearchFlow()
result = flow.kickoff(inputs={"topic": "AI Agents"})
print(result)

Sources: lib/cli/src/crewai_cli/templates/flow/README.md

Advanced Patterns

Multi-Crew Orchestration

class OrchestrationFlow(Flow):
    @start()
    def initialize(self):
        return {"task": "complex_research"}

    @listen(initialize)
    def research_crew_execution(self, task):
        return ResearchCrew().crew().kickoff(inputs=task)

    @listen(research_crew_execution)
    def analysis_crew_execution(self, research_results):
        return AnalysisCrew().crew().kickoff(
            inputs={"data": research_results}
        )

    @listen(analysis_crew_execution)
    def reporting(self, analysis):
        return ReportCrew().crew().kickoff(
            inputs={"analysis": analysis}
        )

Error Handling in Flows

class ResilientFlow(Flow):
    @start()
    def begin(self):
        try:
            return risky_operation()
        except Exception as e:
            self.state.error = str(e)
            return "error_state"

    @router(begin)
    def handle_result(self, result):
        if result == "error_state":
            return "error_handler"
        return "success_path"

    @listen("error_handler")
    def handle_error(self, _):
        return "Recovery action completed"

Summary

Flows provide a powerful event-driven workflow system for CrewAI that complements the autonomous agent orchestration of Crews. Key takeaways:

  • Decorators (@start, @listen, @router, @human_feedback) define the workflow graph
  • Structured state with Pydantic ensures type safety and validation
  • Event serialization enables visual flow editing in CrewAI Studio
  • Crews integration allows delegating complex tasks to agent teams
  • Conditional routing provides flexible decision-making capabilities

Flows are ideal for precise, deterministic workflows where execution order and branching logic are critical, while Crews excel at autonomous multi-agent collaboration.

Sources: [lib/crewai/README.md](https://github.com/crewAIInc/crewAI/blob/main/lib/crewai/README.md)

LLM Providers and Configuration

Related topics: Agents Architecture


Overview

The LLM (Large Language Model) Providers and Configuration system in CrewAI provides a flexible, extensible architecture for integrating multiple AI model providers into the agent execution pipeline. This system allows developers to configure, customize, and switch between different LLM backends while maintaining a consistent interface for agent operations.

The configuration system supports multiple providers including OpenAI, Anthropic, Google, Ollama, and Llama2, enabling both embedding and summarization capabilities through a unified config dictionary approach.

Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md

Architecture

The LLM provider system follows a modular architecture with the following components:

graph TD
    A[Agent] --> B[LLM Configuration]
    B --> C[Provider Selection]
    C --> D[OpenAI Provider]
    C --> E[Anthropic Provider]
    C --> F[Google Provider]
    C --> G[Ollama Provider]
    C --> H[Llama2 Provider]
    D --> I[Model Execution]
    E --> I
    F --> I
    G --> I
    H --> I
    I --> J[Response Processing]
    J --> K[Agent Output]

Core Components

| Component | Purpose | Location |
|---|---|---|
| LLM | Main LLM interface class | lib/crewai/src/crewai/llm.py |
| BaseLLM | Abstract base for all providers | lib/crewai/src/crewai/llms/base_llm.py |
| Providers | Provider-specific implementations | lib/crewai/src/crewai/llms/providers/ |
| Config Dictionary | Runtime configuration | User-defined |

Sources: lib/crewai/src/crewai/llm.py, lib/crewai/src/crewai/llms/base_llm.py

Configuration Pattern

Standard Configuration Structure

All tools and agents using LLM configuration follow a standardized config dictionary pattern:

config=dict(
    llm=dict(
        provider="provider_name",
        config=dict(
            model="model_name",
            # Optional parameters
            temperature=0.5,
            top_p=1,
            stream=True,
        ),
    ),
    embedder=dict(
        provider="embedder_provider",
        config=dict(
            model="embedding_model",
            task_type="retrieval_document",
        ),
    ),
)

Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md, lib/crewai-tools/src/crewai_tools/tools/pdf_search_tool/README.md

Supported Providers

| Provider | Provider ID | Example Model |
|---|---|---|
| OpenAI | `openai` | gpt-4, gpt-4o-mini |
| Anthropic | `anthropic` | claude-3, claude-3.5-sonnet |
| Google | `google` | models/embedding-001 |
| Ollama | `ollama` | llama2, mistral |
| Llama2 | `llama2` | meta/llama2 |

Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md

LLM Class Integration

Initialization Parameters

The primary LLM class serves as the main interface for language model operations:

from crewai import LLM

llm = LLM(
    model="gpt-4",
    api_key="your-api-key",
    temperature=0.7,
)
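An LLM configured this way is typically attached to an agent via the `llm` parameter, overriding the default model for that agent. A sketch (the role, goal, and backstory values are illustrative):

```python
from crewai import Agent, LLM

analyst = Agent(
    role="Data Analyst",
    goal="Summarize quarterly metrics",
    backstory="An experienced analyst with a focus on clear reporting.",
    llm=LLM(model="gpt-4o-mini", temperature=0.2),  # per-agent model override
)
```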

Guardrail Integration

LLMs are used as dependencies in guardrail implementations such as the HallucinationGuardrail:

from crewai.tasks.hallucination_guardrail import HallucinationGuardrail

guardrail = HallucinationGuardrail(
    llm=agent.llm,
    threshold=8.0,
    context="Reference context for validation",
)

Sources: lib/crewai/src/crewai/tasks/hallucination_guardrail.py:1-70

Provider-Specific Configuration

OpenAI Configuration

tool = SomeTool(
    config=dict(
        llm=dict(
            provider="openai",
            config=dict(
                model="gpt-4o-mini",
                temperature=0.5,
                # streaming support available
            ),
        ),
    )
)

Sources: lib/crewai/src/crewai/llms/providers/openai/completion.py

Anthropic Configuration

tool = SomeTool(
    config=dict(
        llm=dict(
            provider="anthropic",
            config=dict(
                model="claude-3-sonnet-20240229",
            ),
        ),
    )
)

Sources: lib/crewai/src/crewai/llms/providers/anthropic/completion.py

Google Embeddings Configuration

embedder=dict(
    provider="google",
    config=dict(
        model="models/embedding-001",
        task_type="retrieval_document",
    ),
)

Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md

Ollama Configuration

llm=dict(
    provider="ollama",
    config=dict(
        model="llama2",
        temperature=0.5,
        top_p=1,
        stream=True,
    ),
)

Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md

Embedder Configuration

Embedders handle vector embedding generation for retrieval-augmented generation (RAG) workflows:

| Parameter | Type | Description | Default |
|---|---|---|---|
| `provider` | string | Embedding provider name | openai |
| `model` | string | Model identifier | Provider-specific |
| `task_type` | string | Embedding use case | retrieval_document |
| `title` | string | Optional title for embeddings | None |

Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md

Default Behavior

By default, tools use OpenAI for both embeddings and summarization; the config dictionary is only needed to override this behavior.

Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md, lib/crewai-tools/src/crewai_tools/tools/pdf_search_tool/README.md

Configuration Workflow

graph LR
    A[Define Config Dict] --> B[Select Provider]
    B --> C[Specify Model]
    C --> D[Set Optional Params]
    D --> E[Initialize Tool/Agent]
    E --> F[LLM Loaded at Runtime]
    F --> G[Execution with Provider]

Environment Variables

API keys should be configured via environment variables for security:

export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."
export LINKUP_API_KEY="..."
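Code that needs these keys should read them from the environment at runtime rather than hardcoding them. A stdlib sketch (the helper name and placeholder key are illustrative):

```python
import os

def require_env(name: str) -> str:
    """Return the named environment variable, failing fast if it is missing."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it before running crews")
    return value

os.environ.setdefault("EXAMPLE_API_KEY", "sk-demo")  # placeholder for demonstration
key = require_env("EXAMPLE_API_KEY")
```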

Sources: lib/crewai-tools/src/crewai_tools/tools/linkup/README.md

Best Practices

  1. Environment Security: Store API keys in environment variables rather than hardcoding
  2. Provider Selection: Choose providers based on task requirements (cost, latency, capabilities)
  3. Temperature Tuning: Adjust temperature based on task creativity needs (lower for factual, higher for creative)
  4. Model Selection: Use smaller/faster models for simple tasks to reduce costs
  5. Embedder Consistency: Use compatible embedders for your vector store

Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md

Agent-to-Agent (A2A) Communication

Related topics: Agents Architecture, Crews and Crew Orchestration


Agent-to-Agent (A2A) Communication is a core architectural layer in CrewAI that enables autonomous agents to exchange messages, delegate tasks, and collaborate within multi-agent workflows. This module provides the infrastructure for agents to interact, share context, and coordinate their activities seamlessly.

Overview

The A2A subsystem in CrewAI implements a standardized communication protocol that allows agents to:

  • Exchange structured messages with rich content types
  • Delegate tasks to other agents with appropriate context
  • Share execution results and artifacts
  • Coordinate through hierarchical or collaborative processes

The implementation follows modern agent communication patterns and provides both a programmatic API and extension points for UI integration.

Sources: lib/crewai/src/crewai/a2a/__init__.py

Architecture

High-Level Architecture

graph TD
    subgraph "Agent Layer"
        A1[Agent 1]
        A2[Agent 2]
        A3[Agent N]
    end
    
    subgraph "A2A Core"
        TW[A2A Wrapper]
        TT[A2A Types]
        UD[Utils: Delegation]
    end
    
    subgraph "Extension Layer"
        A2UI[A2UI Extensions]
        SCH[Schema v0.8]
    end
    
    A1 <--> TW
    A2 <--> TW
    A3 <--> TW
    TW <--> TT
    TW <--> UD
    TW <--> A2UI
    A2UI <--> SCH

Component Responsibilities

| Component | Purpose | Key Responsibilities |
|---|---|---|
| `wrapper.py` | A2A Communication Handler | Manages message routing, task delegation, and response handling |
| `types.py` | Data Models | Defines message structures, content types, and protocol elements |
| `delegation.py` | Task Delegation Utility | Provides helper functions for agent delegation patterns |
| `a2ui/` | UI Extensions | Schema definitions for Agent-to-User Interface communication |
| `schema/v0_8/` | Protocol Schemas | JSON schemas for message validation and serialization |

Sources: lib/crewai/src/crewai/a2a/wrapper.py

Core Types and Data Models

The A2A module defines comprehensive data models for structured communication between agents. These types ensure type safety and consistent message formats across the system.

Content Types

The A2A protocol supports multiple content types for flexible message composition:

graph LR
    M[Message] --> T[TextContent]
    M --> A[ArtifactContent]
    M --> T2[TaskContent]
    M --> S[StatusContent]
    M --> P[Part]
    
    T --> TT[Text]
    A --> DT[Document]
    A --> C[Code]
    A --> D[Data]

Message Structure

Messages in A2A communication follow a standardized structure defined in the schema:

{
  "message": {
    "role": "string",
    "content": {
      "parts": []
    },
    "agent": "string",
    "taskId": "string"
  }
}
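Programmatically, such a message is plain JSON-serializable data. The field values and the shape of the `parts` entries below are illustrative, not taken from the schema:

```python
import json

message = {
    "message": {
        "role": "assistant",
        "content": {"parts": [{"type": "text", "text": "Task complete"}]},
        "agent": "researcher",
        "taskId": "task-42",
    }
}

# Round-trip through JSON, as the wrapper would when routing the message.
decoded = json.loads(json.dumps(message))
```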

Sources: lib/crewai/src/crewai/a2a/types.py

A2A Wrapper

The A2AWrapper class serves as the primary interface for agent communication:

class A2AWrapper:
    """Handles Agent-to-Agent communication and delegation."""
    
    def __init__(self, config: A2AConfig):
        self.config = config
        
    def send_message(self, agent_id: str, message: A2AMessage) -> A2AResponse:
        """Send a message to another agent."""
        
    def delegate_task(self, target_agent: str, task: Task) -> DelegationResult:
        """Delegate a task to another agent."""
        
    def receive_message(self, message: A2AMessage) -> None:
        """Process an incoming message from another agent."""

Key Methods

| Method | Parameters | Return Type | Description |
|---|---|---|---|
| `send_message` | agent_id, message | A2AResponse | Send a message to a specific agent |
| `delegate_task` | target_agent, task | DelegationResult | Delegate a task with full context |
| `receive_message` | message | None | Process incoming messages |
| `get_status` | task_id | TaskStatus | Get the status of a delegated task |

Sources: lib/crewai/src/crewai/a2a/wrapper.py

Task Delegation

The delegation utility provides specialized functions for distributing work across agents:

def delegate_to_agent(
    source_agent: str,
    target_agent: str,
    task: Task,
    context: Dict[str, Any]
) -> DelegationResult:
    """Delegate a task from one agent to another."""
    
def create_delegation_context(
    source: Agent,
    target: Agent,
    task: Task
) -> DelegationContext:
    """Create a context object for delegation."""

Delegation Flow

graph TD
    S[Source Agent] -->|Identifies Task| D1{Delegation Decision}
    D1 -->|Can Delegate| C1[Create Context]
    D1 -->|Cannot Delegate| R1[Reject Task]
    C1 -->|Prepare Message| M1[Build A2A Message]
    M1 -->|Send via Wrapper| TW[A2A Wrapper]
    TW -->|Route Message| T[Target Agent]
    T -->|Execute Task| TR[Task Result]
    TR -->|Send Response| TW2[A2A Wrapper]
    TW2 -->|Route Response| S2[Source Agent]

Delegation Context

The DelegationContext object captures all necessary information for proper task delegation:

| Field | Type | Description |
|---|---|---|
| `source_agent` | str | Identifier of the delegating agent |
| `target_agent` | str | Identifier of the receiving agent |
| `task_id` | str | Unique identifier for the task |
| `priority` | int | Delegation priority (1-10) |
| `timeout` | int | Maximum execution time in seconds |
| `retry_count` | int | Number of retry attempts |
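As a sketch, these fields map onto a dataclass like the following (the default values shown are assumptions for illustration; the real class in delegation.py may differ):

```python
from dataclasses import dataclass

@dataclass
class DelegationContext:
    source_agent: str      # identifier of the delegating agent
    target_agent: str      # identifier of the receiving agent
    task_id: str           # unique identifier for the task
    priority: int = 5      # delegation priority (1-10)
    timeout: int = 300     # maximum execution time in seconds
    retry_count: int = 0   # number of retry attempts

ctx = DelegationContext("researcher", "writer", "task-42", priority=8)
```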

Sources: lib/crewai/src/crewai/a2a/utils/delegation.py

A2UI Extensions

The A2UI (Agent-to-User Interface) module provides schema definitions for rendering agent outputs in user interfaces:

# Extension initialization
from crewai.a2a.extensions.a2ui import A2UIExtension

extension = A2UIExtension()
extension.register_handlers()

Supported Content Rendering

Content Type | Description | Schema Reference
text | Plain text with optional hints | server_to_client_with_standard_catalog.json
image | Image content with sizing options | server_to_client_with_standard_catalog.json
url | Web links with metadata | server_to_client_with_standard_catalog.json

Text Styling Hints

The schema supports the following text style hints for UI rendering:

Style Hint | Description | Use Case
h1 | Largest heading | Main section titles
h2 | Second largest heading | Subsection titles
h3 | Third largest heading | Minor headings
h4 | Fourth largest heading | Component labels
h5 | Fifth largest heading | Detailed labels
caption | Small text | Figure captions, footnotes
body | Standard body text | Regular content

Image Rendering Options

Images in A2A messages support the following fit modes:

Fit Mode | CSS Equivalent | Description
contain | object-fit: contain | Scale to fit within bounds
cover | object-fit: cover | Scale to fill bounds, crop if needed
fill | object-fit: fill | Stretch to fill bounds

Sources: lib/crewai/src/crewai/a2a/extensions/a2ui/__init__.py
Sources: lib/crewai/src/crewai/a2a/extensions/a2ui/schema/v0_8/server_to_client_with_standard_catalog.json

Protocol Versioning

The A2A protocol uses semantic versioning with the current implementation supporting v0.8:

graph LR
    V08[v0.8] -->|Current| C[Current Schema]
    V08 -->|Features| T[Text Hints]
    V08 -->|Features| I[Image Fit Options]
    V08 -->|Features| U[URL References]
    
    C -->|Evolution| F[Future Versions]

Schema files are organized by version in the schema/ directory, allowing for backward compatibility and gradual migration:

lib/crewai/src/crewai/a2a/extensions/a2ui/schema/
└── v0_8/
    └── server_to_client_with_standard_catalog.json
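Following the versioned layout above, resolving a schema file for a given protocol version can be sketched as below. The `schema_path` helper is hypothetical, not a CrewAI API; it only encodes the documented convention that v0.8 maps to a `v0_8/` directory:

```python
from pathlib import Path


def schema_path(
    base: Path,
    version: str,
    name: str = "server_to_client_with_standard_catalog.json",
) -> Path:
    """Hypothetical helper: map a protocol version like '0.8' to the
    documented schema/{version_dir}/ layout (e.g. 'v0_8')."""
    version_dir = "v" + version.lstrip("v").replace(".", "_")
    return base / version_dir / name


p = schema_path(Path("schema"), "0.8")
```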

Usage Examples

Basic Agent Communication

from crewai.a2a import A2AWrapper, A2AMessage, A2AConfig

# Initialize the A2A wrapper
config = A2AConfig(
    agent_id="researcher_01",
    capabilities=["delegate", "respond"]
)
wrapper = A2AWrapper(config)

# Create and send a message
message = A2AMessage(
    role="agent",
    content={
        "parts": [
            {"text": "Please analyze the provided data and return insights"}
        ]
    },
    agent="researcher_01"
)

response = wrapper.send_message(
    agent_id="analyst_01",
    message=message
)

Task Delegation Pattern

from crewai.a2a.utils.delegation import delegate_to_agent, create_delegation_context

# Create delegation context
context = create_delegation_context(
    source=researcher_agent,
    target=analyst_agent,
    task=analysis_task
)

# Execute delegation
result = delegate_to_agent(
    source_agent="researcher_01",
    target_agent="analyst_01",
    task=analysis_task,
    context={"priority": "high", "deadline": "2024-01-15"}
)

UI-Ready Response Structure

from crewai.a2a.extensions.a2ui import create_ui_response

# Create a response optimized for UI rendering
ui_response = create_ui_response(
    content_type="text",
    text="Research findings have been compiled",
    style_hint="body"
)

# Or include rich content
ui_response = create_ui_response(
    content_type="image",
    url={"literalString": "https://example.com/chart.png"},
    fit="contain"
)

Integration with CrewAI

The A2A module integrates with CrewAI's core components:

graph TD
    subgraph "CrewAI Core"
        C[Crew]
        P[Process]
        A[Agents]
        T[Tasks]
    end
    
    subgraph "A2A Layer"
        W[Wrapper]
        D[Delegation Utils]
        U[A2UI]
    end
    
    C -->|Orchestrates| A
    A -->|Communicates via| W
    W -->|Delegates via| D
    W -->|Renders via| U
    A -->|Execute| T
    P -->|Manages Flow| C

Integration Points

Component | Integration | Description
Crew | Automatic initialization | Creates A2A wrapper for each agent
Agent | Message handling | Uses A2A for inter-agent communication
Task | Delegation support | Can be delegated via A2A protocol
Process | Coordination | Uses A2A for process-level messaging

Configuration Options

A2AConfig Parameters

Parameter | Type | Default | Description
agent_id | str | Required | Unique identifier for the agent
capabilities | List[str] | [] | Supported capabilities
timeout | int | 300 | Default timeout in seconds
retry_attempts | int | 3 | Number of retry attempts
enable_ui_extension | bool | True | Enable A2UI rendering

Environment Variables

Variable | Description
A2A_TIMEOUT | Global A2A operation timeout
A2A_MAX_RETRIES | Maximum retry attempts
A2A_LOG_LEVEL | Logging verbosity
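A minimal sketch of reading these environment variables with fallbacks is shown below. The variable names come from the table above; the default values and the `load_a2a_env` helper itself are illustrative assumptions, not part of the CrewAI API:

```python
import os


def load_a2a_env() -> dict:
    """Read the documented A2A environment variables, falling back to
    illustrative defaults when a variable is unset."""
    return {
        "timeout": int(os.environ.get("A2A_TIMEOUT", "300")),
        "max_retries": int(os.environ.get("A2A_MAX_RETRIES", "3")),
        "log_level": os.environ.get("A2A_LOG_LEVEL", "INFO"),
    }


os.environ["A2A_TIMEOUT"] = "120"  # example override
settings = load_a2a_env()
```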

Best Practices

  1. Message Design: Keep messages focused and atomic for better error handling
  2. Context Preservation: Always include sufficient context when delegating tasks
  3. Error Handling: Implement proper exception handling for network failures
  4. Schema Validation: Validate messages against the A2UI schema before sending
  5. Timeout Management: Set appropriate timeouts based on task complexity
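Practices 3 and 5 (error handling and timeout/retry management) can be sketched as a small retry wrapper around a send operation. Everything here is hypothetical scaffolding, not CrewAI code; the wrapper just shows bounded retries with exponential backoff for transient network failures:

```python
import time


def send_with_retry(send, message, attempts: int = 3, backoff: float = 0.1):
    """Retry a send operation on ConnectionError, backing off exponentially.

    Re-raises the last error once all attempts are exhausted.
    """
    last_error = None
    for attempt in range(attempts):
        try:
            return send(message)
        except ConnectionError as exc:  # transient network failure: retry
            last_error = exc
            time.sleep(backoff * (2 ** attempt))
    raise last_error


# Simulated sender that fails twice, then succeeds.
calls = {"n": 0}


def flaky_send(message):
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient network failure")
    return {"status": "ok", "echo": message}


result = send_with_retry(flaky_send, "ping")
```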

Summary

The Agent-to-Agent (A2A) Communication module in CrewAI provides a robust foundation for multi-agent collaboration. Key features include:

  • Standardized messaging with support for multiple content types
  • Task delegation with full context preservation
  • UI extensions for rich content rendering
  • Versioned schemas ensuring backward compatibility
  • Deep integration with CrewAI's agent and task systems

The modular architecture allows for flexible extension and customization while maintaining a consistent communication protocol across all agents in a crew.

Sources: lib/crewai/src/crewai/a2a/wrapper.py

Knowledge Management

Related topics: Memory and Storage System


Overview

Knowledge Management in crewAI provides a structured framework for agents to store, retrieve, and utilize contextual information during task execution. This system enables crews to maintain persistent knowledge that can be referenced across multiple agent interactions, enhancing the contextual awareness and accuracy of agent responses.

Architecture

The Knowledge Management system consists of three primary components working in coordination:

graph TD
    A[Agent] --> B[Knowledge Source]
    B --> C[Knowledge Storage]
    C --> D[Vector Store]
    B --> E[PDF Files]
    B --> F[CSV Files]
    B --> G[Text Data]
    C --> H[Query Engine]
    H --> A

Core Components

Component | Purpose | Location
Knowledge | Main class orchestrating knowledge operations | lib/crewai/src/crewai/knowledge/knowledge.py
KnowledgeSource | Abstract base for data ingestion | lib/crewai/src/crewai/knowledge/source/
KnowledgeStorage | Handles persistence and retrieval | lib/crewai/src/crewai/knowledge/storage/knowledge_storage.py

Knowledge Sources

Knowledge Sources represent the input layer where data is ingested into the system. The framework supports multiple source types to accommodate various data formats.

PDF Knowledge Source

The PDF Knowledge Source processes PDF documents and extracts textual content for vector storage. It handles multi-page documents and preserves structural information where possible.

Key Features:

  • Automatic text extraction from PDF pages
  • Metadata preservation (page numbers, document titles)
  • Chunk-based processing for large documents

CSV Knowledge Source

The CSV Knowledge Source handles tabular data, converting rows and columns into searchable knowledge entries. It maintains the relationship between column headers and values during ingestion.

Key Features:

  • Header-aware parsing
  • Row-level chunking
  • Delimiter detection

Storage Layer

The Knowledge Storage component manages the persistence of processed knowledge using vector embeddings. It interfaces with the underlying vector database to enable semantic search capabilities.

sequenceDiagram
    participant Source as Knowledge Source
    participant Storage as Knowledge Storage
    participant VectorDB as Vector Database
    participant Query as Query Engine
    
    Source->>Storage: Ingest document chunks
    Storage->>Storage: Generate embeddings
    Storage->>VectorDB: Store vectors + metadata
    Query->>VectorDB: Semantic search
    VectorDB->>Query: Relevant chunks
    Query->>Storage: Format results

Storage Configuration

Parameter | Type | Description | Default
chunk_size | int | Size of text chunks in characters | 1000
chunk_overlap | int | Overlap between consecutive chunks | 200
embedding_model | str | Model used for vectorization | Configured at crew level
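The interaction of chunk_size and chunk_overlap can be sketched with plain character-based chunking. This is an illustrative stand-in using the documented defaults, not the library's actual chunking code:

```python
def chunk_text(text: str, chunk_size: int = 1000, chunk_overlap: int = 200):
    """Split text into overlapping character chunks: each chunk starts
    chunk_size - chunk_overlap characters after the previous one."""
    step = chunk_size - chunk_overlap
    return [
        text[i:i + chunk_size]
        for i in range(0, len(text), step)
        if text[i:i + chunk_size]
    ]


chunks = chunk_text("x" * 2500, chunk_size=1000, chunk_overlap=200)
```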

Integration with Agents

Knowledge Management integrates with the crewAI agent system through the Agent class. Agents can be configured to automatically query relevant knowledge during task execution.

Basic Integration Pattern:

from crewai import Agent
from crewai.knowledge import Knowledge, PDFKnowledgeSource, CSVKnowledgeSource

# Initialize knowledge base
knowledge = Knowledge()

# Add sources
knowledge.add_source(PDFKnowledgeSource(file_path="document.pdf"))
knowledge.add_source(CSVKnowledgeSource(file_path="data.csv"))

# Create agent with knowledge access
agent = Agent(
    role="Research Analyst",
    goal="Answer questions using company knowledge",
    backstory="Expert at analyzing documents",
    knowledge=knowledge
)

Query Mechanism

The query mechanism enables agents to retrieve relevant knowledge based on semantic similarity. When an agent processes a task, the system automatically retrieves chunks that are contextually relevant to the query.

Query Parameter | Description
query_text | The search query string
top_k | Maximum number of results to return
similarity_threshold | Minimum similarity score for inclusion
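The effect of top_k and similarity_threshold can be sketched with a toy cosine-similarity search over 2-dimensional embeddings. This is a self-contained illustration of the query semantics described above, not CrewAI's retrieval code:

```python
import math


def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm


def query(store, query_vec, top_k=3, similarity_threshold=0.0):
    """Rank stored (text, vector) pairs by similarity, drop entries below
    the threshold, and return the top_k texts."""
    scored = [(cosine(vec, query_vec), text) for text, vec in store]
    scored = [s for s in scored if s[0] >= similarity_threshold]
    return [text for _, text in sorted(scored, reverse=True)[:top_k]]


store = [
    ("refund policy", [1.0, 0.0]),
    ("shipping times", [0.7, 0.7]),
    ("office plants", [0.0, 1.0]),
]
results = query(store, [1.0, 0.0], top_k=2, similarity_threshold=0.5)
```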

Data Flow

graph LR
    A[Document Files] --> B[Knowledge Sources]
    B --> C[Text Chunking]
    C --> D[Embedding Generation]
    D --> E[Vector Storage]
    F[Agent Query] --> G[Similarity Search]
    G --> E
    E --> H[Retrieved Chunks]
    H --> I[Agent Context]

Usage with Crews

For multi-agent crews, knowledge can be shared across all agents or restricted to specific agents:

from crewai import Crew
from crewai.knowledge import Knowledge

# Shared knowledge across crew
crew_knowledge = Knowledge()
crew_knowledge.add_source(company_docs_source)

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, report_task],
    knowledge=crew_knowledge  # Available to all agents
)

Best Practices

  1. Chunk Sizing: Use appropriate chunk sizes based on document structure. Smaller chunks (500-1000 chars) work well for Q&A, larger chunks for document summarization.
  2. Source Organization: Group related documents into separate knowledge sources for more targeted retrieval.
  3. Metadata: Include relevant metadata with knowledge sources to improve result filtering.
  4. Update Strategy: Implement regular synchronization for knowledge sources that change frequently.

Related Components

Component | Purpose
crewai_tools | External tool integrations including PDF search, CSV search
Agent Memory | Short-term contextual memory for agent sessions
Task Context | Task-specific information passing between agents

Source: https://github.com/crewAIInc/crewAI / Human Manual

Memory and Storage System

Related topics: Knowledge Management


Overview

The crewAI Memory and Storage System provides persistent, searchable memory capabilities for AI agents and crews. It enables cross-session learning, semantic recall of past interactions, and structured storage of agent experiences. The system is designed to handle various types of memory including short-term, long-term, entity, and knowledge-based memories.

graph TD
    A[Agent Request] --> B[UnifiedMemory]
    B --> C[MemoryScope]
    C --> D{Memory Type}
    D --> E[Short-term Memory]
    D --> F[Long-term Memory]
    D --> G[Entity Memory]
    D --> H[Knowledge Memory]
    E --> I[Vector Storage]
    F --> I
    G --> I
    H --> I
    I --> J[Recall Flow]
    J --> K[MemoryMatch Results]
    K --> A

Memory Architecture

Core Components

Component | Purpose | Location
UnifiedMemory | Central interface for all memory operations | unified_memory.py
MemoryScope | Defines isolation boundaries for memory contexts | memory_scope.py
RecallFlow | Handles semantic search and retrieval of memories | recall_flow.py
LanceDBStorage | Vector database backend for persistent storage | lancedb_storage.py

Data Models

MemoryRecord

The fundamental unit of stored information in the memory system:

class MemoryRecord(BaseModel):
    data: Any                           # The actual memory content
    metadata: dict[str, Any]            # Associated metadata
    importance: float = Field(         # Relevance score 0.0-1.0
        default=0.5, ge=0.0, le=1.0
    )
    created_at: datetime                # Creation timestamp
    last_accessed: datetime             # Last retrieval timestamp
    embedding: list[float] | None       # Vector embedding for semantic search
    source: str | None                  # Origin tracking (user ID, session ID)
    private: bool = Field(             # Privacy flag for access control
        default=False
    )

Sources: lib/crewai/src/crewai/memory/types.py:1-50

MemoryMatch

Returned by recall operations with relevance scoring:

class MemoryMatch(BaseModel):
    record: MemoryRecord               # The matched memory
    score: float                       # Combined relevance score
    match_reasons: list[str]           # Why this matched (semantic, recency, importance)
    evidence_gaps: list[str]          # Missing context flags

Sources: lib/crewai/src/crewai/memory/types.py:55-70

Memory Scoping System

The MemoryScope class manages hierarchical isolation of memory contexts, allowing different crews, agents, or sessions to maintain separate memory stores while supporting controlled cross-context access.

Scope Path Operations

Function | Description
join_scope_paths(root, inner) | Combines two scope paths with normalization
normalize_scope_path(path) | Standardizes scope path format

Scope Path Format

Scope paths follow a hierarchical structure:

/crew/{crew-name}/{memory-type}
/crew/research-crew/short-term
/crew/research-crew/long-term
/crew/research-crew/entity
/crew/research-crew/knowledge

Scope Path Join Behavior

join_scope_paths("/crew/test", "/market-trends")
# Returns: '/crew/test/market-trends'

join_scope_paths("/crew/test", "market-trends")
# Returns: '/crew/test/market-trends'

join_scope_paths("/crew/test", "/")
# Returns: '/crew/test'

join_scope_paths("/crew/test", None)
# Returns: '/crew/test'

Sources: lib/crewai/src/crewai/memory/utils.py:1-50

graph LR
    A["root: '/crew/test'"] --> B[join_scope_paths]
    C["inner: '/market-trends'"] --> B
    B --> D["Result: '/crew/test/market-trends'"]
    
    E["root: '/crew/test'"] --> F[normalize]
    F --> G["Result: '/crew/test'"]

Storage Backends

LanceDB Storage

LanceDB is the primary vector storage backend, providing efficient similarity search capabilities:

Parameter | Type | Default | Description
db_path | str | Required | Path to the LanceDB database
table_name | str | Required | Name of the storage table
vector_dimension | int | Auto | Embedding vector size
reset_db | bool | False | Whether to reset on initialization

Sources: lib/crewai/src/crewai/memory/storage/lancedb_storage.py

Storage Operations

Operation | Description
write | Store a new memory record
read | Retrieve by ID
search | Semantic similarity search
delete | Remove by ID
reset | Clear all records
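The shape of these five operations can be illustrated with a minimal in-memory stand-in. The actual backend is LanceDB with vector search; this sketch replaces semantic search with a plain predicate and is not the library's storage class:

```python
import uuid


class InMemoryStorage:
    """Toy stand-in for the five documented storage operations."""

    def __init__(self):
        self._records: dict[str, dict] = {}

    def write(self, record: dict) -> str:
        record_id = str(uuid.uuid4())
        self._records[record_id] = record
        return record_id

    def read(self, record_id: str):
        return self._records.get(record_id)

    def search(self, predicate):
        # Stand-in for semantic similarity search.
        return [r for r in self._records.values() if predicate(r)]

    def delete(self, record_id: str) -> None:
        self._records.pop(record_id, None)

    def reset(self) -> None:
        self._records.clear()


storage = InMemoryStorage()
rid = storage.write({"data": "quarterly revenue grew 12%", "importance": 0.8})
```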

Recall and Retrieval

The RecallFlow manages how memories are retrieved based on queries. It combines semantic similarity with recency and importance scoring.

sequenceDiagram
    participant Agent
    participant UnifiedMemory
    participant RecallFlow
    participant LanceDBStorage
    
    Agent->>UnifiedMemory: Query with context
    UnifiedMemory->>RecallFlow: Execute recall(query, scope)
    RecallFlow->>LanceDBStorage: Semantic search
    LanceDBStorage-->>RecallFlow: Candidate memories
    RecallFlow->>RecallFlow: Score by relevance
    RecallFlow-->>UnifiedMemory: Ranked MemoryMatch[]
    UnifiedMemory-->>Agent: Filtered results

Recall Parameters

Parameter | Type | Required | Description
query | str | Yes | Search query text
scope | str | Yes | Memory scope path
limit | int | No | Max results (default: 5)
include_private | bool | No | Include private memories

Memory Types

Type | Purpose | Persistence
Short-term | Current session context | Ephemeral, cleared on reset
Long-term | Cross-session learning | Persistent until explicitly reset
Entity | Shared entity information | Persistent, shared across agents
Knowledge | Domain-specific grounding | Persistent, used for RAG

Configuration Options

Crew-Level Configuration

memory:
  enabled: true
  type: "short_term" | "long_term" | "entity" | "knowledge" | "all"
  scope: "/crew/{crew_name}"

CLI Memory Management

# Reset all memories
crewai reset-memories -a

# Reset specific memory types
crewai reset-memories -s    # Short-term only
crewai reset-memories -l    # Long-term only
crewai reset-memories -e    # Entity only
crewai reset-memories -kn   # Knowledge only
crewai reset-memories -akn  # Agent knowledge only

Sources: lib/cli/src/crewai_cli/templates/AGENTS.md

Privacy and Access Control

The memory system supports private memories that are only accessible under specific conditions:

  • Private flag: When private=True, a memory is only visible to recall requests from the same source
  • include_private parameter: Set to True to include private memories in cross-source queries
  • Source tracking: Each memory records its origin via the source field for provenance and filtering
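The access rule described by these three bullets can be sketched as a single visibility predicate. This is an illustration of the documented semantics, not CrewAI's access-control code:

```python
def visible(record: dict, requester: str, include_private: bool = False) -> bool:
    """A private memory is visible only to its own source, unless the
    caller explicitly passes include_private=True."""
    if not record.get("private"):
        return True
    return include_private or record.get("source") == requester


records = [
    {"data": "public note", "private": False, "source": "agent_a"},
    {"data": "secret plan", "private": True, "source": "agent_a"},
]
seen_by_b = [r["data"] for r in records if visible(r, "agent_b")]
seen_by_a = [r["data"] for r in records if visible(r, "agent_a")]
```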

Embedding Configuration

Memories are stored with vector embeddings for semantic search:

Provider | Model Example | Configuration
OpenAI | text-embedding-3-small | OPENAI_API_KEY
Google | models/embedding-001 | GOOGLE_API_KEY
Ollama | nomic-embed-text | Local endpoint
Azure | text-embedding-3 | Azure OpenAI config

Sources: lib/crewai-tools/src/crewai_tools/tools/directory_search_tool/README.md

Best Practices

  1. Scope Organization: Use consistent naming conventions for scope paths to enable efficient cross-crew memory sharing
  2. Importance Scoring: Set appropriate importance values (0.0-1.0) to influence retrieval ranking
  3. Privacy Handling: Mark sensitive information with private=True to prevent unintended access
  4. Memory Pruning: Regularly reset short-term memory for clean session boundaries
  5. Embedding Selection: Choose embedding models appropriate for your content domain

Sources: lib/crewai/src/crewai/memory/types.py:1-50

Doramagic Pitfall Log

Doramagic extracted 16 source-linked risk signals. Review them before installing or handing real data to the project.

1. Installation risk: [FEATURE] Implement Process.consensual with a pluggable ConsensusEngine

  • Severity: high
  • Finding: Installation risk is backed by a source signal: [FEATURE] Implement Process.consensual with a pluggable ConsensusEngine. Treat it as a review item until the current version is checked.
  • User impact: First-time setup may fail or require extra isolation and rollback planning.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/issues/5708

2. Project risk: [BUG] Wrong code in document

  • Severity: high
  • Finding: Project risk is backed by a source signal: [BUG] Wrong code in document. Treat it as a review item until the current version is checked.
  • User impact: The project should not be treated as fully validated until this signal is reviewed.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/issues/5378

3. Project risk: [FEATURE] Enhance the document about @persisit

  • Severity: high
  • Finding: Project risk is backed by a source signal: [FEATURE] Enhance the document about @persisit. Treat it as a review item until the current version is checked.
  • User impact: The project should not be treated as fully validated until this signal is reviewed.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/issues/5372

4. Security or permission risk: [FEATURE] GuardrailProvider interface for pre-tool-call authorization

  • Severity: high
  • Finding: Security or permission risk is backed by a source signal: [FEATURE] GuardrailProvider interface for pre-tool-call authorization. Treat it as a review item until the current version is checked.
  • User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/issues/4877

5. Project risk: Project risk needs validation

  • Severity: medium
  • Finding: Project risk is backed by a source signal: Project risk needs validation. Treat it as a review item until the current version is checked.
  • User impact: The project should not be treated as fully validated until this signal is reviewed.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: identity.distribution | github_repo:710601088 | https://github.com/crewAIInc/crewAI | repo=crewai; install=skills

6. Installation risk: 1.14.4

  • Severity: medium
  • Finding: Installation risk is backed by a source signal: 1.14.4. Treat it as a review item until the current version is checked.
  • User impact: First-time setup may fail or require extra isolation and rollback planning.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/releases/tag/1.14.4

7. Installation risk: 1.14.4a1

  • Severity: medium
  • Finding: Installation risk is backed by a source signal: 1.14.4a1. Treat it as a review item until the current version is checked.
  • User impact: First-time setup may fail or require extra isolation and rollback planning.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/releases/tag/1.14.4a1

8. Installation risk: 1.14.5a4

  • Severity: medium
  • Finding: Installation risk is backed by a source signal: 1.14.5a4. Treat it as a review item until the current version is checked.
  • User impact: First-time setup may fail or require extra isolation and rollback planning.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a4

9. Configuration risk: Scans the client database to extract existing policy details.

  • Severity: medium
  • Finding: Configuration risk is backed by a source signal: Scans the client database to extract existing policy details. Treat it as a review item until the current version is checked.
  • User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/issues/5760

10. Capability assumption: README/documentation is current enough for a first validation pass.

  • Severity: medium
  • Finding: README/documentation is current enough for a first validation pass.
  • User impact: The project should not be treated as fully validated until this signal is reviewed.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: capability.assumptions | github_repo:710601088 | https://github.com/crewAIInc/crewAI | README/documentation is current enough for a first validation pass.

11. Maintenance risk: 1.14.5a1

  • Severity: medium
  • Finding: Maintenance risk is backed by a source signal: 1.14.5a1. Treat it as a review item until the current version is checked.
  • User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a1

12. Maintenance risk: Maintainer activity is unknown

  • Severity: medium
  • Finding: Maintenance risk is backed by a source signal: Maintainer activity is unknown. Treat it as a review item until the current version is checked.
  • User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
  • Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
  • Evidence: evidence.maintainer_signals | github_repo:710601088 | https://github.com/crewAIInc/crewAI | last_activity_observed missing

Source: Doramagic discovery, validation, and Project Pack records

Community Discussion Evidence

Doramagic exposes 12 project-level community discussion links separately from official documentation. These links are review inputs, not standalone proof that the project is production-ready; open the linked issues or discussions before using crewAI with real data or production workflows.

Source: Project Pack community evidence and pitfall evidence