Doramagic Project Pack · Human Manual
crewAI
Installation and Setup
Related topics: Quick Start Guide, LLM Providers and Configuration
Overview
This guide covers the installation and setup procedures for CrewAI, a multi-agent automation framework. The installation process supports multiple methods including pip, UV package manager, and direct source installation. CrewAI requires Python 3.10 to 3.13 and uses modern dependency management practices to ensure consistent environments across development and production.
System Requirements
Python Version Compatibility
| Requirement | Specification |
|---|---|
| Minimum Python | 3.10 |
| Maximum Python | < 3.14 |
| Package Manager | UV (recommended) or pip |
The project enforces version constraints through pyproject.toml configuration files. The version range ensures compatibility with modern Python features while avoiding breaking changes from upcoming releases.
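The constraint can also be checked at runtime before installing. A minimal stdlib-only sketch (illustrative; the actual enforcement lives in the `requires-python` key of pyproject.toml):

```python
import sys

# Supported range per the table above: >=3.10, <3.14.
MIN, MAX_EXCLUSIVE = (3, 10), (3, 14)

def python_is_supported(version=sys.version_info):
    """Return True if the interpreter falls inside the supported range."""
    v = (version[0], version[1])
    return MIN <= v < MAX_EXCLUSIVE

print(python_is_supported((3, 12, 1)))  # True
print(python_is_supported((3, 14, 0)))  # False
```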
Installation Methods
Standard Installation via pip
The primary method for installing CrewAI uses pip, Python's standard package manager:
pip install crewai
This installation includes the core CrewAI framework with essential dependencies. For users requiring additional tooling capabilities, the extended installation includes built-in tools:
pip install 'crewai[tools]'
Sources: lib/crewai/pyproject.toml
UV Package Manager Installation
UV is the recommended package manager for CrewAI projects due to its superior performance and dependency resolution capabilities.
pip install uv
After installing UV, create a new project with:
crewai create crew <project_name> --skip_provider
crewai create flow <project_name> --skip_provider
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
Project Structure
After creating a new CrewAI project, the following directory structure is generated:
src/<project_name>/
├── __init__.py
├── crew.py
├── main.py
├── tools/
│ ├── custom_tool.py
│ └── __init__.py
└── config/
├── agents.yaml
└── tasks.yaml
Core Files Description
| File | Purpose |
|---|---|
| main.py | Entry point for project execution |
| crew.py | Crew definition and agent orchestration logic |
| agents.yaml | Agent role, goal, and backstory configurations |
| tasks.yaml | Task descriptions and dependencies |
| tools/ | Custom tool implementations |
| .env | Environment variables and API keys |
Sources: README.md
Dependencies Management
Using UV for Dependency Operations
UV provides fast and reliable dependency management. The following commands handle common dependency tasks:
uv add <package> # Add a new dependency
uv sync # Synchronize dependencies with lock file
uv lock # Update the lock file
Core Dependencies
The main crewai package includes these core dependencies:
- pydantic - Data validation and settings management
- crewai core modules - Agent orchestration and task management
Tools Dependencies
Additional packages are required for specific tool integrations:
| Tool | Required Package |
|---|---|
| Tavily Search | tavily-python |
| File Compression | Built-in |
| PDF Processing | Built-in |
| ArXiv Integration | Built-in |
Sources: lib/crewai-tools/pyproject.toml
Environment Configuration
Environment Variables Setup
Create a .env file in your project root to store sensitive configuration:
OPENAI_API_KEY=your_openai_api_key
TAVILY_API_KEY=your_tavily_api_key
SERPLY_API_KEY=your_serply_api_key
LINKUP_API_KEY=your_linkup_api_key
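Loaders such as python-dotenv read these values at startup; the parsing itself is straightforward. A stdlib-only sketch of the idea (not CrewAI's actual loader):

```python
def parse_dotenv(text):
    """Parse KEY=VALUE lines, skipping blanks and # comments."""
    env = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, sep, value = line.partition("=")
        if sep:  # only lines that actually contain '='
            env[key.strip()] = value.strip().strip('"').strip("'")
    return env

sample = """
# API credentials
OPENAI_API_KEY=your_openai_api_key
TAVILY_API_KEY="your_tavily_api_key"
"""
print(parse_dotenv(sample)["OPENAI_API_KEY"])  # your_openai_api_key
```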
Configuration in agents.yaml
Define agent behavior through YAML configuration:
researcher:
role: >
{topic} Senior Data Researcher
goal: >
Uncover cutting-edge developments in {topic}
backstory: >
You're a seasoned researcher with a knack for uncovering the latest
developments in {topic}.
Sources: README.md
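The {topic} placeholders in the YAML above are filled from the inputs dict passed to kickoff(). Conceptually this is plain string formatting; a sketch of the substitution step (not CrewAI's internal code):

```python
# Hypothetical config values mirroring agents.yaml above.
config = {
    "role": "{topic} Senior Data Researcher",
    "goal": "Uncover cutting-edge developments in {topic}",
}

def interpolate(config, inputs):
    """Substitute {placeholders} in every config value from `inputs`."""
    return {key: value.format(**inputs) for key, value in config.items()}

print(interpolate(config, {"topic": "AI LLMs"})["role"])
# AI LLMs Senior Data Researcher
```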
Custom Tools Installation
Publishing Tools
Distribute custom tools within your organization or to the community:
crewai tool publish <tool_name>
Installing Tools
Install tools published by others or within your organization:
crewai tool install <tool_name>
Sources: lib/cli/src/crewai_cli/templates/tool/README.md
Installation Verification
Quick Verification Steps
After installation, verify the setup by running:
crewai run
This command auto-detects the project type from pyproject.toml and executes the crew or flow.
Memory Management Commands
CrewAI provides CLI commands for managing agent memories:
crewai reset-memories -a # Reset all memories
crewai reset-memories -s # Short-term only
crewai reset-memories -l # Long-term only
crewai reset-memories -e # Entity only
crewai reset-memories -kn # Knowledge only
crewai reset-memories -akn # Agent knowledge only
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
Development Workflow
graph TD
A[Install CrewAI] --> B[Create Project]
B --> C[Configure Agents]
C --> D[Define Tasks]
D --> E[Add Tools]
E --> F[Set Environment Variables]
F --> G[Run crewai run]
G --> H[Test and Iterate]
H --> I[Deploy]
Troubleshooting Common Issues
pyproject.toml Validation
The CLI validates pyproject.toml for proper CrewAI project structure. If validation fails:
- Verify crewai is listed in project dependencies
- Check TOML syntax correctness
- Ensure required configuration keys exist
Version Conflicts
If dependency conflicts occur:
- Use uv lock to regenerate the lock file
- Verify the Python version falls within the 3.10-3.13 range
- Clear the cache with uv cache clean
Sources: lib/crewai-core/src/crewai_core/project.py
Next Steps
After successful installation:
- Define Agents - Configure roles, goals, and backstories in config/agents.yaml
- Create Tasks - Define task descriptions and expected outputs in config/tasks.yaml
- Add Tools - Integrate custom or built-in tools for agent capabilities
- Implement Logic - Customize crew.py with specific orchestration requirements
- Test - Use crewai test for iterative testing
Summary
The CrewAI installation process supports multiple package managers and provides flexible project scaffolding. Key points:
- Minimum requirement: Python 3.10
- UV is the recommended package manager for modern workflows
- Project structure separates configuration (YAML) from implementation (Python)
- Custom tools can be published and installed through the CLI
- Environment variables handle sensitive configuration
Sources: [lib/crewai/pyproject.toml](https://github.com/crewAIInc/crewAI/blob/main/lib/crewai/pyproject.toml)
Quick Start Guide
Related topics: Installation and Setup, Agents Architecture, Tasks and Task Management
This guide provides a comprehensive overview for setting up and running your first CrewAI project. CrewAI is a multi-agent automation framework that enables you to build sophisticated AI-powered workflows by composing agents, tasks, and crews.
Overview
The Quick Start Guide covers the essential steps to:
- Install CrewAI and its dependencies
- Scaffold a new crew project
- Configure agents and tasks using YAML
- Implement crew logic in Python
- Execute and test your crew
Scope: This guide focuses on the standard crew workflow using the @CrewBase decorator pattern with YAML-based configuration files.
Prerequisites
| Requirement | Version |
|---|---|
| Python | >=3.10, <3.14 |
| Package Manager | UV (recommended) |
Sources: lib/cli/src/crewai_cli/templates/tool/README.md
Project Structure
A typical CrewAI project follows this directory layout:
my_project/
├── src/my_project/
│ ├── __init__.py
│ ├── main.py # Entry point
│ ├── crew.py # Crew definition
│ └── config/
│ ├── agents.yaml # Agent configurations
│ └── tasks.yaml # Task configurations
├── .env # Environment variables
└── pyproject.toml # Project configuration
Sources: lib/crewai/README.md
Installation
Step 1: Install CrewAI CLI
pip install crewai
Step 2: Install UV (if not already installed)
UV is the recommended package manager for CrewAI projects.
pip install uv
Step 3: Create a New Crew Project
crewai create crew my_crew --skip_provider
Step 4: Install Project Dependencies
cd my_crew
crewai install
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
Project Components
Agent Configuration (agents.yaml)
Agents are defined in YAML format with role, goal, and backstory:
researcher:
role: >
{topic} Senior Data Researcher
goal: >
Uncover cutting-edge developments in {topic}
backstory: >
You're a seasoned researcher with a knack for uncovering the latest
developments in {topic}. Known for your ability to find the most relevant
information and present it in a clear and concise manner.
reporting_analyst:
role: >
{topic} Reporting Analyst
goal: >
Create detailed reports based on {topic} data analysis
backstory: >
You're a meticulous analyst with a keen eye for detail.
Sources: lib/crewai/README.md
Task Configuration (tasks.yaml)
Tasks define what each agent should accomplish:
research_task:
description: >
Research the latest developments in {topic}
expected_output: >
A list of key findings with sources and implications.
agent: researcher
reporting_task:
description: >
Create a comprehensive report on {topic}
expected_output: >
A fully fledged report with the main topics, each with a full section
of information. Formatted as markdown.
agent: reporting_analyst
output_file: report.md
Sources: lib/crewai/README.md
Crew Implementation
Crew Class (crew.py)
The crew class uses the @CrewBase decorator to bind agents and tasks:
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai_tools import SerperDevTool
from crewai.agents.agent_builder.base_agent import BaseAgent
from typing import List
@CrewBase
class MyProjectCrew():
"""My project crew"""
agents: List[BaseAgent]
tasks: List[Task]
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
verbose=True,
tools=[SerperDevTool()]
)
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'],
verbose=True
)
@task
def research_task(self) -> Task:
return Task(config=self.tasks_config['research_task'])
@task
def reporting_task(self) -> Task:
return Task(
config=self.tasks_config['reporting_task'],
output_file='report.md'
)
@crew
def crew(self) -> Crew:
return Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True,
)
Sources: lib/cli/src/crewai_cli/templates/crew/crew.py
Entry Point (main.py)
The main entry point kicks off the crew:
from my_project.crew import MyProjectCrew
def run():
inputs = {
"topic": "AI LLMs"
}
crew = MyProjectCrew()
result = crew.crew().kickoff(inputs=inputs)
print(result)
if __name__ == "__main__":
run()
Sources: lib/cli/src/crewai_cli/templates/crew/main.py
Workflow Diagram
graph TD
A[Start: crewai create crew] --> B[Install Dependencies]
B --> C[Configure agents.yaml]
C --> D[Configure tasks.yaml]
D --> E[Implement crew.py]
E --> F[Implement main.py]
F --> G[crewai run]
G --> H{Crew Execution}
H --> I[Agents Complete Tasks]
I --> J[Output Generated]
J --> K[End]
style A fill:#4CAF50,color:#fff
style G fill:#2196F3,color:#fff
style K fill:#FF5722,color:#fff
Development Best Practices
| Practice | Description |
|---|---|
| YAML-first configuration | Define agents and tasks in YAML, keep crew classes minimal |
| Use structured output | Use output_pydantic for data flowing between tasks |
| Enable memory | For crews benefiting from cross-session learning |
| Sequential vs Hierarchical | Sequential for linear workflows; hierarchical for dynamic delegation |
| Test frequently | Use crewai test to evaluate performance |
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
Common CLI Commands
| Command | Description |
|---|---|
| crewai create crew <name> | Create a new crew project |
| crewai run | Execute the crew |
| crewai test | Test the crew (2 iterations, gpt-4o-mini by default) |
| crewai test -n 5 -m gpt-4o | Custom test iterations and model |
| crewai train -n 5 -f training.json | Train the crew |
| crewai reset-memories -a | Reset all memories |
| crewai log-tasks-outputs | Show latest task outputs |
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
Running Your Crew
Execute your crew using the CLI:
crewai run
Or run the main.py file directly:
python src/my_project/main.py
Customization
Adding Tools to Agents
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
verbose=True,
tools=[SerperDevTool()] # Add tools here
)
Setting Custom LLM Providers
Use the crewai.LLM class or string shorthand:
llm="openai/gpt-4o"
llm="anthropic/claude-3-sonnet"
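The shorthand encodes provider and model in a single string. A sketch of how such a string can be split into its parts (illustrative only, not CrewAI's resolver):

```python
def split_llm_shorthand(spec):
    """Split 'provider/model' into its two parts; provider may be absent."""
    provider, sep, model = spec.partition("/")
    if not sep:  # bare model name, e.g. "gpt-4o"
        return None, spec
    return provider, model

print(split_llm_shorthand("openai/gpt-4o"))  # ('openai', 'gpt-4o')
print(split_llm_shorthand("anthropic/claude-3-sonnet"))
```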
Memory and Knowledge
Enable memory in your crew for cross-session learning:
@crew
def crew(self) -> Crew:
return Crew(
agents=self.agents,
tasks=self.tasks,
memory=True, # Enable memory
verbose=True,
)
Common Pitfalls
| Pitfall | Solution |
|---|---|
| Using ChatOpenAI() directly | Use crewai.LLM or string shorthand |
| Forgetting type hints | Add # type: ignore[index] for YAML config access |
| Token limit issues | Set respect_context_window=True |
| API throttling | Configure max_rpm rate limiting |
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
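The max_rpm setting throttles how many API calls an agent may make per minute. A minimal rolling-window sketch of the idea (not CrewAI's implementation):

```python
import time

class RPMLimiter:
    """Refuse calls once max_rpm calls have occurred within a 60s window."""
    def __init__(self, max_rpm):
        self.max_rpm = max_rpm
        self.calls = []  # timestamps of recent calls

    def acquire(self, now=None):
        now = time.monotonic() if now is None else now
        # Drop timestamps older than the rolling one-minute window.
        self.calls = [t for t in self.calls if now - t < 60]
        if len(self.calls) >= self.max_rpm:
            return False  # caller should wait and retry
        self.calls.append(now)
        return True

limiter = RPMLimiter(max_rpm=2)
print(limiter.acquire(now=0.0), limiter.acquire(now=1.0), limiter.acquire(now=2.0))
# True True False
```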
Next Steps
After completing this Quick Start Guide:
- Explore advanced agent configurations for memory, guardrails, and custom LLMs
- Learn about Flows for multi-crew orchestration
- Review tool integrations for additional capabilities
- Join the Discord community for support
Sources: [lib/cli/src/crewai_cli/templates/tool/README.md](https://github.com/crewAIInc/crewAI/blob/main/lib/cli/src/crewai_cli/templates/tool/README.md)
Agents Architecture
Related topics: Tasks and Task Management, Crews and Crew Orchestration, LLM Providers and Configuration
Overview
The CrewAI Agents Architecture provides a flexible, modular framework for creating and orchestrating autonomous AI agents. The architecture is designed around the concept of agents as independent entities that can collaborate within crews to accomplish complex tasks through both autonomous decision-making and structured workflows.
Agents in CrewAI are composed of several key components:
| Component | Purpose |
|---|---|
| BaseAgent | Abstract base class defining the agent interface |
| Agent (Core) | Concrete agent implementation with LLM integration |
| CrewAgentExecutor | Handles agent execution within crew context |
| Parser | Processes LLM outputs and extracts actions |
| Guardrails | Validates agent outputs for safety and accuracy |
Sources: lib/crewai/src/crewai/agents/agent_builder/base_agent.py:1-50
Architecture Diagram
graph TD
A[User Defined Agent] --> B[BaseAgent]
B --> C[Agent Core]
C --> D[CrewAgentExecutor]
D --> E[Parser]
E --> F[LLM]
F --> G[Tool Calls]
G --> H[Guardrails]
H --> D
I[Memory] -.-> C
J[Knowledge] -.-> C
K[Tools] --> G
Agent Definition
Agents are defined through YAML configuration files and Python decorators. Each agent requires a minimum of three attributes:
# config/agents.yaml
researcher:
role: "Senior Data Researcher"
goal: "Uncover cutting-edge developments in {topic}"
backstory: >
You're a seasoned researcher with a knack for uncovering the latest
developments in {topic}. Known for your ability to find the most relevant
information and present it in a clear and concise manner.
Core Agent Attributes
| Attribute | Type | Required | Description |
|---|---|---|---|
| role | string | Yes | Defines the agent's function within the crew |
| goal | string | Yes | The specific objective the agent aims to achieve |
| backstory | string | Yes | Context that shapes the agent's behavior and decision-making |
| tools | List[BaseTool] | No | Tools available to the agent for task execution |
| verbose | boolean | No | Enable detailed logging (default: False) |
| llm | LLM | No | Custom language model configuration |
| memory | boolean | No | Enable short/long-term memory (default: True) |
| max_iter | int | No | Maximum iterations before forcing a response |
| max_rpm | int | No | Rate limiting for API calls |
Sources: lib/crewai/src/crewai/agent/core.py:1-100
BaseAgent Class
The BaseAgent serves as the foundational abstract class for all agent implementations:
class BaseAgent:
@property
def role(self) -> str:
"""Returns the role of the agent"""
@property
def goal(self) -> str:
"""Returns the goal of the agent"""
@property
def backstory(self) -> str:
"""Returns the backstory of the agent"""
Key Methods
| Method | Return Type | Description |
|---|---|---|
| execute_task(task, context, tools) | TaskOutput | Execute a specific task |
| set_memory_memory(memory) | None | Configure agent memory |
| set_verbose(verbose) | None | Toggle verbose logging |
| create_agent_executor() | CrewAgentExecutor | Initialize execution context |
Sources: lib/crewai/src/crewai/agents/agent_builder/base_agent.py:50-150
Agent Core Implementation
The core agent implementation provides the main interface for agent behavior:
from crewai import Agent
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai.project import CrewBase, agent
from crewai_tools import SerperDevTool
from typing import List
@CrewBase
class LatestAiDevelopmentCrew():
agents: List[BaseAgent]
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
verbose=True,
tools=[SerperDevTool()]
)
LLM Configuration
Agents can be configured with custom LLM providers:
config=dict(
llm=dict(
provider="ollama",
config=dict(
model="llama2",
temperature=0.5,
),
),
)
Supported providers include: openai, anthropic, google, ollama, azure, bedrock
Sources: lib/crewai/src/crewai/agent/core.py:100-200
CrewAgentExecutor
The CrewAgentExecutor manages agent execution within a crew context, handling:
- Task delegation and execution flow
- Tool invocation and result processing
- Guardrail validation
- Response formatting
graph LR
A[Task Assigned] --> B[Execute with Tools]
B --> C{Guardrails Check}
C -->|Pass| D[Return Result]
C -->|Fail| E[Retry or Fallback]
Execution Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
| task | Task | Required | The task to execute |
| context | str | None | Shared context from previous tasks |
| tools | List[BaseTool] | [] | Tools available for this execution |
Sources: lib/crewai/src/crewai/agents/crew_agent_executor.py:1-100
Parser
The Parser component processes LLM outputs and extracts structured actions:
from crewai.agents.parser import CrewAgentParser
parser = CrewAgentParser()
result = parser.parse(llm_output)
Parser Responsibilities
| Responsibility | Description |
|---|---|
| Action Extraction | Identify tool calls from LLM responses |
| Format Normalization | Convert LLM output to standardized format |
| Error Handling | Manage malformed outputs gracefully |
| Validation | Ensure parsed actions match expected schemas |
Sources: lib/crewai/src/crewai/agents/parser.py:1-80
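To make "action extraction" concrete, here is a simplified parser for a ReAct-style response format (Action / Action Input / Final Answer). It illustrates the idea only; CrewAgentParser's real grammar and error handling are richer:

```python
import re

def parse_response(text):
    """Return ('action', name, input) or ('final', answer, None)."""
    final = re.search(r"Final Answer:\s*(.+)", text, re.DOTALL)
    if final:
        return ("final", final.group(1).strip(), None)
    action = re.search(r"Action:\s*(.+)", text)
    action_input = re.search(r"Action Input:\s*(.+)", text, re.DOTALL)
    if action and action_input:
        return ("action", action.group(1).strip(), action_input.group(1).strip())
    raise ValueError("Malformed LLM output: no action or final answer found")

print(parse_response("Thought: search first\nAction: web_search\nAction Input: crewai docs"))
# ('action', 'web_search', 'crewai docs')
```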
Guardrails
Guardrails provide validation layers for agent outputs. The framework includes built-in guardrails:
HallucinationGuardrail
Validates that agent outputs are faithful to provided context:
from crewai.tasks.hallucination_guardrail import HallucinationGuardrail
guardrail = HallucinationGuardrail(
llm=agent.llm,
context="Reference document content",
threshold=7.0,
tool_response="API response data"
)
| Parameter | Type | Default | Description |
|---|---|---|---|
| llm | LLM | Required | Language model for evaluation |
| context | str | None | Reference context for validation |
| threshold | float | None | Minimum faithfulness score |
| tool_response | str | "" | Tool response for additional context |
Sources: lib/crewai/src/crewai/tasks/hallucination_guardrail.py:1-80
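The threshold acts as a pass/fail cutoff on a faithfulness score the evaluation LLM assigns. A sketch of that decision logic, with the scoring itself stubbed out since it requires an LLM call (illustrative; scale and defaults are assumptions):

```python
def check_faithfulness(score, threshold=None):
    """Return (passed, reason). With no threshold, the score is informational only."""
    if threshold is None:
        return True, f"score={score} (no threshold configured)"
    passed = score >= threshold
    return passed, f"score={score} vs threshold={threshold}"

# Hypothetical scores an evaluation LLM might assign to an output.
print(check_faithfulness(8.5, threshold=7.0))  # (True, 'score=8.5 vs threshold=7.0')
print(check_faithfulness(5.0, threshold=7.0))  # (False, 'score=5.0 vs threshold=7.0')
```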
Agent Creation with Decorators
CrewAI uses Python decorators for declarative agent definition:
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai_tools import SerperDevTool
from typing import List
@CrewBase
class MyCrew():
"""My Crew Description"""
agents: List[BaseAgent]
tasks: List[Task]
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
verbose=True,
tools=[SerperDevTool()]
)
@agent
def reporting_analyst(self) -> Agent:
return Agent(
config=self.agents_config['reporting_analyst'],
verbose=True
)
@task
def research_task(self) -> Task:
return Task(config=self.tasks_config['research_task'])
@task
def reporting_task(self) -> Task:
return Task(
config=self.tasks_config['reporting_task'],
output_file='report.md'
)
@crew
def crew(self) -> Crew:
return Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True
)
Tool Integration
Agents access external capabilities through tools:
Built-in Tools
| Tool | Purpose |
|---|---|
| SerperDevTool | Web search functionality |
| CodeDocsSearchTool | Search code documentation |
| DirectorySearchTool | Search within directories |
| FileWriterTool | Write content to files |
| TavilyExtractorTool | Extract content from URLs |
| ApifyActorsTool | Execute Apify actors |
| LinkupSearchTool | Search via Linkup API |
Tool Configuration
from crewai_tools import SerperDevTool, FileWriterTool
researcher = Agent(
role="Research Analyst",
goal="Gather and synthesize information",
backstory="Expert researcher with access to web search",
tools=[SerperDevTool(), FileWriterTool()],
verbose=True
)
Memory and Knowledge
Agents can maintain state across interactions:
Memory Types
| Type | Scope | Persistence |
|---|---|---|
| Short-term | Current session | Session lifetime |
| Long-term | Across sessions | Database storage |
| Entity | Entity tracking | Automatic extraction |
| Knowledge | Domain knowledge | Vector store |
Memory Configuration
crew = Crew(
agents=[researcher, analyst],
tasks=[task1, task2],
memory=True, # Enable all memory types
embedder={
"provider": "openai",
"config": {"model": "text-embedding-ada-002"}
}
)
Execution Flow
sequenceDiagram
participant User
participant Crew
participant Agent
participant Executor
participant LLM
participant Tool
User->>Crew: kickoff()
Crew->>Agent: execute_task(task)
Agent->>Executor: run()
Executor->>LLM: generate_response()
LLM->>Tool: tool_call()
Tool-->>LLM: result
LLM-->>Executor: response
Executor->>Executor: validate_guardrails()
Executor-->>Agent: TaskOutput
Agent-->>Crew: result
Crew-->>User: final_output
Best Practices
Agent Design
- Clear Role Definition: Define distinct, non-overlapping roles for each agent
- Specific Goals: Ensure each agent has a well-defined, achievable goal
- Rich Backstory: Provide context that guides agent behavior appropriately
- Appropriate Tools: Grant only necessary tools to minimize unnecessary complexity
Configuration Guidelines
| Aspect | Recommendation |
|---|---|
| Verbose Mode | Enable during development, disable in production |
| Rate Limiting | Set max_rpm to avoid API throttling |
| Context Window | Use respect_context_window=True for long conversations |
| Iterations | Set max_iter to prevent infinite loops |
CLI Commands for Agents
# Create new agent
crewai create agent <name>
# Test agent
crewai test -n 5 -m gpt-4o
# Reset memories
crewai reset-memories -a # All memories
crewai reset-memories -s # Short-term only
crewai reset-memories -l # Long-term only
Summary
The CrewAI Agents Architecture provides a comprehensive framework for building multi-agent systems:
- BaseAgent defines the interface all agents must implement
- Agent Core provides the concrete implementation with LLM integration
- CrewAgentExecutor manages execution within crew context
- Parser handles LLM output processing
- Guardrails ensure output quality and safety
- Decorators enable declarative agent definition
- Tools extend agent capabilities beyond LLM-only responses
Sources: [lib/crewai/src/crewai/agents/agent_builder/base_agent.py:1-50]()
Tasks and Task Management
Related topics: Agents Architecture, Crews and Crew Orchestration
Overview
Tasks are the fundamental unit of work in the CrewAI framework: discrete pieces of work that agents execute within a crew. Each task encapsulates a description of what needs to be accomplished, the expected output format, and optional configurations for output validation, file handling, and dependency management.
Tasks serve as the bridge between agent capabilities and crew objectives, enabling complex multi-agent workflows through declarative configuration and structured output handling. The task management system provides both synchronous execution (via Task) and conditional execution (via ConditionalTask) to support various workflow patterns.
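The difference between plain and conditional execution can be illustrated with a small sketch: a conditional step wraps a predicate over the accumulated context and is skipped when the predicate fails (illustrative only; the real ConditionalTask API differs):

```python
def run_pipeline(tasks, context=""):
    """Run tasks in order; each task sees the accumulated context."""
    for task in tasks:
        condition = task.get("condition")
        if condition is not None and not condition(context):
            continue  # conditional task skipped
        context = task["run"](context)
    return context

tasks = [
    {"run": lambda ctx: "3 findings"},
    # Only summarize when the research step produced findings.
    {"condition": lambda ctx: "findings" in ctx,
     "run": lambda ctx: f"report based on {ctx}"},
]
print(run_pipeline(tasks))  # report based on 3 findings
```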
Sources: [lib/crewai/src/crewai/task.py](https://github.com/crewAIInc/crewAI/blob/main/lib/crewai/src/crewai/task.py) and [lib/crewai/src/crewai/tasks/__init__.py](https://github.com/crewAIInc/crewAI/blob/main/lib/crewai/src/crewai/tasks/__init__.py)
Crews and Crew Orchestration
Related topics: Agents Architecture, Tasks and Task Management, Flows - Event-Driven Workflows, LLM Providers and Configuration
Overview
A Crew in CrewAI is a collaborative system of autonomous AI agents working together to accomplish complex tasks. Crew orchestration refers to the mechanism by which these agents coordinate, delegate, and execute tasks based on a defined process type.
Crews are the core building blocks for multi-agent automation in CrewAI. They enable sophisticated workflows where multiple specialized agents combine their capabilities to produce results that exceed what any single agent could achieve alone.
The Crew class serves as the central orchestrator, managing agents, tasks, processes, and shared resources like memory and tools. Sources: lib/crewai/src/crewai/crew.py
Crew Architecture
Core Components
A Crew consists of four primary components that work together to enable collaborative AI task execution:
| Component | Purpose |
|---|---|
| Agents | Autonomous AI entities with specific roles, goals, and tool access |
| Tasks | Defined work items with descriptions, expected outputs, and assignments |
| Process | Orchestration strategy determining how tasks are executed |
| Memory | Shared storage for context, learnings, and inter-agent communication |
Architecture Diagram
graph TD
A[Crew] --> B[Agents]
A --> C[Tasks]
A --> D[Process]
A --> E[Memory]
B --> B1[Agent 1]
B --> B2[Agent 2]
B --> BN[Agent N]
C --> C1[Task 1]
C --> C2[Task 2]
C --> CN[Task N]
D --> D1[Sequential]
D --> D2[Hierarchical]
E --> E1[Short-term]
E --> E2[Long-term]
E --> E3[Entity]
E --> E4[Knowledge]
Process Types
CrewAI supports two primary process types for orchestrating agent collaboration:
Sequential Process
In the sequential process, tasks are executed one after another in a predefined order. Each task must complete before the next begins. This is ideal for linear workflows where output from one task feeds into the next.
from crewai import Crew, Process
crew = Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True,
)
Use Cases:
- Research pipelines where findings accumulate
- Report generation requiring sequential information gathering
- Data processing chains where each step depends on the previous
Sources: lib/cli/src/crewai_cli/templates/crew/README.md
Hierarchical Process
The hierarchical process introduces an automated manager agent that coordinates the crew. The manager delegates tasks, validates results, and ensures proper workflow execution without manual intervention.
from crewai import Crew, Process
crew = Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.hierarchical,
verbose=True,
)
Use Cases:
- Complex projects requiring dynamic task delegation
- Scenarios where a manager role naturally exists
- Workflows needing result validation between steps
Sources: lib/crewai/README.md
Process Selection Guide
| Criteria | Sequential | Hierarchical |
|---|---|---|
| Task Dependencies | Fixed order | Dynamic delegation |
| Manager Required | No | Yes (auto-created) |
| Flexibility | Low | High |
| Best For | Linear pipelines | Complex orchestration |
| Overhead | Minimal | Higher due to management |
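A toy contrast between the two orderings (not the Crew engine itself): sequential runs a fixed list and chains outputs, while hierarchical lets a manager pick the next pending task dynamically.

```python
def run_sequential(tasks):
    """Execute tasks in their declared order, chaining outputs."""
    output = None
    for task in tasks:
        output = task(output)
    return output

def run_hierarchical(tasks, manager):
    """Let a manager choose which pending task runs next."""
    pending, output = list(tasks), None
    while pending:
        task = manager(pending, output)  # dynamic delegation
        pending.remove(task)
        output = task(output)
    return output

research = lambda prev: "findings"
report = lambda prev: f"report on {prev}"
print(run_sequential([research, report]))  # report on findings

# A trivial manager that always runs the research step first.
manager = lambda pending, out: research if research in pending else pending[0]
print(run_hierarchical([report, research], manager))  # report on findings
```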
Crew Configuration
Using the @CrewBase Decorator
Crews are defined using Python decorators combined with YAML configuration files. This approach separates concerns between code logic and agent/task definitions.
from crewai import Agent, Crew, Process, Task
from crewai.project import CrewBase, agent, crew, task
from crewai.agents.agent_builder.base_agent import BaseAgent
from crewai_tools import SerperDevTool
from typing import List
@CrewBase
class LatestAiDevelopmentCrew():
"""LatestAiDevelopment crew"""
agents: List[BaseAgent]
tasks: List[Task]
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
verbose=True,
tools=[SerperDevTool()]
)
@task
def research_task(self) -> Task:
return Task(
config=self.tasks_config['research_task'],
)
@crew
def crew(self) -> Crew:
return Crew(
agents=self.agents,
tasks=self.tasks,
process=Process.sequential,
verbose=True,
)
Sources: lib/crewai/README.md
YAML Configuration Structure
#### agents.yaml
researcher:
role: >
{topic} Senior Data Researcher
goal: >
Uncover cutting-edge developments in {topic}
backstory: >
You're a seasoned researcher with a knack for uncovering the latest
developments in {topic}.
reporting_analyst:
role: >
{topic} Reporting Analyst
goal: >
Create detailed reports based on {topic} data analysis
#### tasks.yaml
research_task:
description: >
Research the latest developments in {topic}
expected_output: >
A comprehensive report on {topic} developments
agent: researcher
reporting_task:
description: >
Create a detailed report based on research findings
expected_output: >
A fully fleshed report with main topics
agent: reporting_analyst
Sources: lib/cli/src/crewai_cli/templates/crew/README.md
Agent Management
Agent Roles and Responsibilities
Agents within a crew are defined by four key attributes:
| Attribute | Description |
|---|---|
| role | Defines the agent's function within the crew |
| goal | The specific objective the agent works toward |
| backstory | Context that shapes the agent's behavior and perspective |
| tools | Capabilities the agent can use to accomplish tasks |
Agent Creation Pattern
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
verbose=True,
tools=[SerperDevTool()]
)
Agents receive their configuration from YAML and can be augmented with additional tools or settings at the point of creation.
Sources: lib/crewai/src/crewai/crew.py
Task Management
Task Definition
Tasks represent units of work that agents execute. Each task has:
| Property | Purpose |
|---|---|
| description | What needs to be accomplished |
| expected_output | The format and content of deliverables |
| agent | Which agent executes the task |
| output_file | Optional file for storing results |
| dependencies | Tasks that must complete first |
Task with Output Handling
@task
def reporting_task(self) -> Task:
    return Task(
        config=self.tasks_config['reporting_task'],
        output_file='report.md'
    )
Task Execution Flow
graph LR
A[Task Created] --> B{Process Type}
B -->|Sequential| C[Execute in Order]
B -->|Hierarchical| D[Manager Delegates]
C --> E[Agent 1 Executes]
E --> F[Agent 2 Executes]
F --> G[Complete]
D --> H[Manager Assigns Task]
H --> I[Agent Executes]
I --> J[Manager Validates]
J --> K[Complete]
Memory and Context
Memory Types
Crews can maintain different types of memory to preserve context across executions:
| Memory Type | Scope | Purpose |
|---|---|---|
| Short-term | Current session | Temporary working memory |
| Long-term | Across sessions | Persistent learnings |
| Entity | Entity tracking | Knowledge graph of entities |
| Knowledge | Structured data | Domain-specific grounding |
Memory Management Commands
crewai reset-memories -a # Reset all memories
crewai reset-memories -s # Short-term only
crewai reset-memories -l # Long-term only
crewai reset-memories -e # Entity only
crewai reset-memories -kn # Knowledge only
crewai reset-memories -akn # Agent knowledge only
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
Crew Context Utilities
The CrewContext class provides utilities for managing scope-based context within crews. Scopes allow hierarchical organization of memory and context:
def join_scope_paths(root: str | None, inner: str | None) -> str:
    """
    Combines two scope path components.

    Examples:
        join_scope_paths("/crew/test", "/market-trends") -> '/crew/test/market-trends'
        join_scope_paths("/crew/test", None) -> '/crew/test'
    """
Sources: lib/crewai/src/crewai/utilities/crew/crew_context.py
Execution Flow
Crew Kickoff
The main entry point for executing a crew is the kickoff method:
from latest_ai_development.crew import LatestAiDevelopmentCrew

def run():
    inputs = {'topic': 'AI Agents'}
    LatestAiDevelopmentCrew().crew().kickoff(inputs=inputs)
Step Execution
The StepExecutor handles the actual execution of agent steps within the crew context:
sequenceDiagram
participant Crew
participant StepExecutor
participant Agent
participant Task
Crew->>StepExecutor: Execute Task
StepExecutor->>Agent: Call Agent with Context
Agent->>Task: Perform Action
Task-->>Agent: Return Result
Agent-->>StepExecutor: Step Output
StepExecutor-->>Crew: Execution Complete
Sources: lib/crewai/src/crewai/agents/step_executor.py
Verbose Mode
During development, enable verbose mode to see detailed execution logs:
@crew
def crew(self) -> Crew:
    return Crew(
        agents=self.agents,
        tasks=self.tasks,
        verbose=True,  # Enable for development
    )
Disable verbose mode in production for cleaner outputs.
Crew Execution Options
Running a Crew
crewai run # Run crew or flow (auto-detects from pyproject.toml)
Or directly via Python:
python src/my_project/main.py
Testing and Training
crewai test # Test crew (default: 2 iterations, gpt-4o-mini)
crewai test -n 5 -m gpt-4o # Custom iterations and model
crewai train -n 5 -f training.json # Train crew
Debugging
crewai log-tasks-outputs # Show latest task outputs
crewai replay -t <task_id> # Replay from specific task
Best Practices
Configuration Guidelines
- YAML-first configuration: Define agents and tasks in YAML, keep crew classes minimal
- Use structured output (output_pydantic) for data that flows between tasks or crews
- Use guardrails to validate task outputs programmatically
- Enable memory for crews that benefit from cross-session learning
Process Selection
| Workflow Type | Recommended Process |
|---|---|
| Linear data pipeline | Sequential |
| Research and report | Sequential |
| Multi-agent collaboration | Hierarchical |
| Dynamic task delegation | Hierarchical |
| Complex multi-stage projects | Hierarchical with Flows |
Performance Considerations
| Setting | Purpose |
|---|---|
max_rpm | Rate limiting to avoid API throttling |
respect_context_window=True | Auto-handle token limits |
verbose=False | Reduce logging overhead in production |
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
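To make the `max_rpm` setting concrete, here is a rough sketch of what a requests-per-minute throttle does internally. The class name and details are illustrative assumptions, not CrewAI's actual implementation:

```python
import time
from collections import deque

class RpmLimiter:
    """Illustrative throttle: block until fewer than `max_rpm`
    calls have occurred within the last 60 seconds."""

    def __init__(self, max_rpm: int):
        self.max_rpm = max_rpm
        self.calls: deque[float] = deque()  # monotonic timestamps of recent calls

    def acquire(self) -> None:
        now = time.monotonic()
        # Drop timestamps older than the 60-second window.
        while self.calls and now - self.calls[0] >= 60:
            self.calls.popleft()
        # If the window is full, sleep until the oldest call expires.
        if len(self.calls) >= self.max_rpm:
            time.sleep(60 - (now - self.calls[0]))
        self.calls.append(time.monotonic())
```

Calling `limiter.acquire()` before each LLM request caps throughput without the caller having to track timing itself, which is the effect `max_rpm` has on a crew.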
Common Patterns
Multi-Crew Orchestration with Flows
For complex pipelines involving multiple crews:
graph TD
A[Flow Start] --> B[Crew 1]
B --> C{Condition?}
C -->|Path A| D[Crew 2]
C -->|Path B| E[Crew 3]
D --> F[Output]
E --> F
Use @start, @listen, and @router decorators for complex flow orchestration.
Crew with Tools
from crewai_tools import SerperDevTool
@agent
def researcher(self) -> Agent:
return Agent(
config=self.agents_config['researcher'],
tools=[SerperDevTool()] # Attach tools to agent
)
Summary
The Crew orchestration system in CrewAI provides a flexible framework for coordinating multiple AI agents. Key takeaways:
- Crews are the primary unit of multi-agent collaboration
- Processes (Sequential/Hierarchical) define how tasks are coordinated
- Agents are specialized roles with specific goals and tools
- Tasks represent units of work with dependencies and expected outputs
- Memory enables context preservation across executions
- YAML configuration keeps agent/task definitions separate from code
This architecture enables everything from simple sequential pipelines to complex hierarchical multi-agent systems with dynamic task delegation.
Sources: lib/cli/src/crewai_cli/templates/crew/README.md
Flows - Event-Driven Workflows
Related topics: Crews and Crew Orchestration, Agents Architecture
Flows - Event-Driven Workflows
Overview
Flows in CrewAI provide an event-driven architecture for orchestrating complex, multi-step AI workflows. They enable precise control over execution order, conditional branching, and state management, complementing the autonomous agent orchestration that Crews provide.
Flows are designed for scenarios requiring sequential execution, conditional logic, state persistence, and event-based triggers. Unlike Crews that operate autonomously with agents collaborating freely, Flows provide deterministic workflow patterns where execution follows explicit routing rules.
Sources: lib/crewai/README.md
Core Concepts
Flow Architecture
A Flow is a Python class that extends the Flow base class, decorated with methods that define the workflow graph:
graph TD
A[Start] --> B[Method A]
B --> C{Decision}
C -->|Path 1| D[Method B]
C -->|Path 2| E[Method C]
D --> F[End]
E --> F
Key Components
| Component | Purpose |
|---|---|
Flow | Base class for all flows |
@start() | Marks methods as entry points |
@listen() | Triggers method execution after another completes |
@router() | Implements conditional branching logic |
@human_feedback() | Pauses execution for user input |
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
Flow Execution Model
Start Methods
Methods decorated with @start() execute immediately when the flow begins. Multiple @start() decorators can be defined, causing parallel execution:
from crewai.flow.flow import Flow, start, listen

class MyFlow(Flow):
    @start()
    def begin(self):
        return "initial data"

    @start()
    def begin_parallel(self):
        return "parallel data"
Listen Decorators
The @listen() decorator binds a method to the completion of another method. The decorated method receives the output of the triggering method as its argument:
from crewai.flow.flow import Flow, start, listen

class ResearchFlow(Flow):
    @start()
    def set_topic(self):
        return "AI Agents"

    @listen(set_topic)
    def do_research(self, topic):
        # topic receives the return value of set_topic
        result = ResearchCrew().crew().kickoff(
            inputs={"topic": topic}
        )
        return result.raw
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
State Management
Structured State with Pydantic
Flows support type-safe state management using Pydantic models. Define a state class that inherits from BaseModel:
from crewai.flow.flow import Flow, start, listen
from pydantic import BaseModel

class ResearchState(BaseModel):
    topic: str = ""
    research: str = ""
    report: str = ""

class ResearchFlow(Flow[ResearchState]):
    @start()
    def set_topic(self):
        self.state.topic = "AI Agents"

    @listen(set_topic)
    def do_research(self):
        result = ResearchCrew().crew().kickoff(
            inputs={"topic": self.state.topic}
        )
        self.state.research = result.raw
        return self.state.research

    @listen(do_research)
    def write_report(self, research_data):
        self.state.report = f"# Report on {self.state.topic}\n\n{research_data}"
        return self.state.report
Benefits of structured state:
- Type safety across method boundaries
- IDE autocompletion for state fields
- Validation of state transitions
- Persistence of state between executions
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
State Flow Diagram
graph LR
A[set_topic] --> B[State Update<br/>topic: "AI Agents"]
B --> C[do_research]
C --> D[State Update<br/>research: data]
D --> E[write_report]
E --> F[State Update<br/>report: content]
Conditional Routing
Router Decorator
The @router() decorator enables conditional branching based on method output. Routers return string labels that determine which @listen() methods execute:
from crewai.flow.flow import Flow, start, listen, router

class DocumentProcessingFlow(Flow):
    @start()
    def receive_document(self):
        return {"type": "image", "path": "/path/to/image.png"}

    @router(receive_document)
    def classify_document(self, doc):
        if doc["type"] == "image":
            return "image_processing"
        elif doc["type"] == "text":
            return "text_processing"
        return "unsupported"

    @listen("image_processing")
    def process_image(self, doc):
        return f"Processed image: {doc['path']}"

    @listen("text_processing")
    def process_text(self, doc):
        return f"Processed text: {doc['path']}"

    @listen("unsupported")
    def handle_unsupported(self, doc):
        return f"Unsupported document type: {doc['type']}"
Routing Flow Diagram
graph TD
A[receive_document] --> B{classify_document}
B -->|image_processing| C[process_image]
B -->|text_processing| D[process_text]
B -->|unsupported| E[handle_unsupported]
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
Event System Integration
Event-Driven Architecture
Flows integrate with CrewAI's event system to enable reactive execution patterns. The flow_serializer.py module provides introspection capabilities for visualizing flow structures:
from crewai.flow.flow_serializer import flow_structure
structure = flow_structure(MyFlow)
print(structure["name"]) # Flow class name
print(structure["methods"]) # All decorated methods
print(structure["edges"]) # Connections between methods
Event Categories
Flows support integration with multiple event categories:
| Category | Description |
|---|---|
| Flow execution | Start, completion, and error events |
| Agent execution | Individual agent state changes |
| Task management | Task lifecycle events |
| Tool usage | Tool invocation events |
| Safety guardrails | Validation and compliance events |
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
Flow API Reference
Flow Base Class
class Flow[StateType]:
    """Base class for all flows."""

    state: StateType  # Typed state instance

    def kickoff(self, inputs: dict | None = None) -> Any:
        """Execute the flow from start methods."""

    async def kickoff_async(self, inputs: dict | None = None) -> Any:
        """Execute the flow asynchronously."""
Decorators
| Decorator | Parameters | Returns | Description |
|---|---|---|---|
@start() | - | None | Marks entry point method |
@listen() | method | None | Binds to method completion |
@router() | method | str | Returns routing label |
@human_feedback() | prompt | str | Requests user input |
Method Information Types
The flow_serializer.py module defines MethodInfo for introspecting flow methods:
class MethodInfo(TypedDict, total=False):
    name: str
    type: str  # start, listen, router, start_router
    trigger_methods: list[str]
    condition_type: str | None  # AND, OR
    router_paths: list[str]
    has_human_feedback: bool
Sources: lib/crewai/src/crewai/flow/flow_serializer.py
Flow Structure Serialization
Introspection for UI Rendering
The flow_structure() function analyzes a Flow class and returns a JSON-serializable dictionary:
from crewai.flow.flow_serializer import flow_structure

class MyFlow(Flow):
    @start()
    def begin(self):
        return "started"

    @listen(begin)
    def process(self):
        return "done"

structure = flow_structure(MyFlow)
# Returns:
# {
#   "name": "MyFlow",
#   "methods": [...],
#   "edges": [...],
#   "state_schema": {...}
# }
This serialization enables CrewAI Studio UI to render visual flow graphs.
Sources: lib/crewai/src/crewai/flow/flow_serializer.py
Integration with Crews
Calling Crews from Flows
Flows can invoke Crews for agent-based task execution:
class ResearchFlow(Flow[ResearchState]):
    @start()
    def set_topic(self):
        self.state.topic = "AI Agents"

    @listen(set_topic)
    def do_research(self):
        crew = ResearchCrew().crew()
        result = crew.kickoff(inputs={"topic": self.state.topic})
        self.state.research = result.raw
        return result.raw
Flow-to-Crew Communication
graph TD
A[Flow Start] --> B[Set Topic]
B --> C[Crew Kickoff]
C --> D[Agent 1]
C --> E[Agent 2]
D --> F[Task Complete]
E --> G[Task Complete]
F --> H[Flow Resume]
G --> H
H --> I[Process Results]
Best Practices
When to Use Flows
| Use Case | Recommendation |
|---|---|
| Linear workflows with clear steps | Sequential Flow |
| Dynamic agent delegation | Hierarchical Crew |
| Multi-crew orchestration | Flow with Crew calls |
| Conditional branching | Router-based Flow |
| Human-in-the-loop | Flow with @human_feedback() |
Design Guidelines
- Use structured state (Pydantic models) over unstructured dicts for type safety
- Prefer Flows for multi-crew orchestration when complex pipelines are needed
- Use @start() with multiple methods only when parallel execution is required
- Keep router labels descriptive for maintainable flow graphs
- Enable verbose mode during development, disable in production
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
Running Flows
CLI Commands
# Run crew or flow (auto-detects from pyproject.toml)
crewai run
# Legacy flow execution
crewai flow kickoff
Programmatic Execution
from my_flow import ResearchFlow
flow = ResearchFlow()
result = flow.kickoff(inputs={"topic": "AI Agents"})
print(result)
Sources: lib/cli/src/crewai_cli/templates/flow/README.md
Advanced Patterns
Multi-Crew Orchestration
class OrchestrationFlow(Flow):
    @start()
    def initialize(self):
        return {"task": "complex_research"}

    @listen(initialize)
    def research_crew_execution(self, task):
        return ResearchCrew().crew().kickoff(inputs=task)

    @listen(research_crew_execution)
    def analysis_crew_execution(self, research_results):
        return AnalysisCrew().crew().kickoff(
            inputs={"data": research_results}
        )

    @listen(analysis_crew_execution)
    def reporting(self, analysis):
        return ReportCrew().crew().kickoff(
            inputs={"analysis": analysis}
        )
Error Handling in Flows
class ResilientFlow(Flow):
    @start()
    def begin(self):
        try:
            return risky_operation()
        except Exception as e:
            self.state.error = str(e)
            return "error_state"

    @router(begin)
    def handle_result(self, result):
        if result == "error_state":
            return "error_handler"
        return "success_path"

    @listen("error_handler")
    def handle_error(self, _):
        return "Recovery action completed"
Summary
Flows provide a powerful event-driven workflow system for CrewAI that complements the autonomous agent orchestration of Crews. Key takeaways:
- Decorators (@start, @listen, @router, @human_feedback) define the workflow graph
- Structured state with Pydantic ensures type safety and validation
- Event serialization enables visual flow editing in CrewAI Studio
- Crews integration allows delegating complex tasks to agent teams
- Conditional routing provides flexible decision-making capabilities
Flows are ideal for precise, deterministic workflows where execution order and branching logic are critical, while Crews excel at autonomous multi-agent collaboration.
Sources: [lib/crewai/README.md](https://github.com/crewAIInc/crewAI/blob/main/lib/crewai/README.md)
LLM Providers and Configuration
Related topics: Agents Architecture
LLM Providers and Configuration
Overview
The LLM (Large Language Model) Providers and Configuration system in CrewAI provides a flexible, extensible architecture for integrating multiple AI model providers into the agent execution pipeline. This system allows developers to configure, customize, and switch between different LLM backends while maintaining a consistent interface for agent operations.
The configuration system supports multiple providers including OpenAI, Anthropic, Google, Ollama, and Llama2, enabling both embedding and summarization capabilities through a unified config dictionary approach.
Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md
Architecture
The LLM provider system follows a modular architecture with the following components:
graph TD
A[Agent] --> B[LLM Configuration]
B --> C[Provider Selection]
C --> D[OpenAI Provider]
C --> E[Anthropic Provider]
C --> F[Google Provider]
C --> G[Ollama Provider]
C --> H[Llama2 Provider]
D --> I[Model Execution]
E --> I
F --> I
G --> I
H --> I
I --> J[Response Processing]
J --> K[Agent Output]
Core Components
| Component | Purpose | Location |
|---|---|---|
LLM | Main LLM interface class | lib/crewai/src/crewai/llm.py |
BaseLLM | Abstract base for all providers | lib/crewai/src/crewai/llms/base_llm.py |
Providers | Provider-specific implementations | lib/crewai/src/crewai/llms/providers/ |
| Config Dictionary | Runtime configuration | User-defined |
Sources: lib/crewai/src/crewai/llm.py, lib/crewai/src/crewai/llms/base_llm.py
Configuration Pattern
Standard Configuration Structure
All tools and agents using LLM configuration follow a standardized config dictionary pattern:
config=dict(
    llm=dict(
        provider="provider_name",
        config=dict(
            model="model_name",
            # Optional parameters
            temperature=0.5,
            top_p=1,
            stream=True,
        ),
    ),
    embedder=dict(
        provider="embedder_provider",
        config=dict(
            model="embedding_model",
            task_type="retrieval_document",
        ),
    ),
)
Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md, lib/crewai-tools/src/crewai_tools/tools/pdf_search_tool/README.md
Supported Providers
| Provider | Provider ID | Example Model |
|---|---|---|
| OpenAI | openai | gpt-4, gpt-4o-mini |
| Anthropic | anthropic | claude-3, claude-3.5-sonnet |
| Google | google | models/embedding-001 |
| Ollama | ollama | llama2, mistral |
| Llama2 | llama2 | meta/llama2 |
Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md
LLM Class Integration
Initialization Parameters
The primary LLM class serves as the main interface for language model operations:
from crewai import LLM
llm = LLM(
    model="gpt-4",
    api_key="your-api-key",
    temperature=0.7,
)
Guardrail Integration
LLMs are used as dependencies in guardrail implementations such as the HallucinationGuardrail:
from crewai.tasks.hallucination_guardrail import HallucinationGuardrail
guardrail = HallucinationGuardrail(
    llm=agent.llm,
    threshold=8.0,
    context="Reference context for validation",
)
Sources: lib/crewai/src/crewai/tasks/hallucination_guardrail.py:1-70
Provider-Specific Configuration
OpenAI Configuration
tool = SomeTool(
    config=dict(
        llm=dict(
            provider="openai",
            config=dict(
                model="gpt-4o-mini",
                temperature=0.5,
                # streaming support available
            ),
        ),
    )
)
Sources: lib/crewai/src/crewai/llms/providers/openai/completion.py
Anthropic Configuration
tool = SomeTool(
    config=dict(
        llm=dict(
            provider="anthropic",
            config=dict(
                model="claude-3-sonnet-20240229",
            ),
        ),
    )
)
Sources: lib/crewai/src/crewai/llms/providers/anthropic/completion.py
Google Embeddings Configuration
embedder=dict(
    provider="google",
    config=dict(
        model="models/embedding-001",
        task_type="retrieval_document",
    ),
)
Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md
Ollama Configuration
llm=dict(
    provider="ollama",
    config=dict(
        model="llama2",
        temperature=0.5,
        top_p=1,
        stream=True,
    ),
)
Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md
Embedder Configuration
Embedders handle vector embedding generation for retrieval-augmented generation (RAG) workflows:
| Parameter | Type | Description | Default |
|---|---|---|---|
provider | string | Embedding provider name | openai |
model | string | Model identifier | Provider-specific |
task_type | string | Embedding use case | retrieval_document |
title | string | Optional title for embeddings | None |
Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md
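A hypothetical embedder configuration exercising all of the parameters above, including the optional `title` field (the values shown are illustrative assumptions):

```python
# Hypothetical embedder config; `title` is the optional parameter from the
# table above, the other values mirror the Google example in this guide.
embedder = dict(
    provider="google",
    config=dict(
        model="models/embedding-001",
        task_type="retrieval_document",
        title="Project documentation",  # optional, defaults to None
    ),
)
```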
Default Behavior
By default, tools use OpenAI for both embeddings and summarization.
Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md, lib/crewai-tools/src/crewai_tools/tools/pdf_search_tool/README.md
Configuration Workflow
graph LR
A[Define Config Dict] --> B[Select Provider]
B --> C[Specify Model]
C --> D[Set Optional Params]
D --> E[Initialize Tool/Agent]
E --> F[LLM Loaded at Runtime]
F --> G[Execution with Provider]
Environment Variables
API keys should be configured via environment variables for security:
export OPENAI_API_KEY="sk-..."
export ANTHROPIC_API_KEY="sk-ant-..."
export GOOGLE_API_KEY="..."
export LINKUP_API_KEY="..."
Sources: lib/crewai-tools/src/crewai_tools/tools/linkup/README.md
Best Practices
- Environment Security: Store API keys in environment variables rather than hardcoding
- Provider Selection: Choose providers based on task requirements (cost, latency, capabilities)
- Temperature Tuning: Adjust temperature based on task creativity needs (lower for factual, higher for creative)
- Model Selection: Use smaller/faster models for simple tasks to reduce costs
- Embedder Consistency: Use compatible embedders for your vector store
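The temperature-tuning guideline above can be expressed in the config-dictionary pattern used throughout this guide. The specific values are illustrative assumptions, not recommendations from the source:

```python
# Lower temperature for factual extraction, higher for creative drafting
# (0.1 and 0.9 are illustrative values, not documented defaults).
factual_llm = dict(
    provider="openai",
    config=dict(model="gpt-4o-mini", temperature=0.1),
)
creative_llm = dict(
    provider="openai",
    config=dict(model="gpt-4o-mini", temperature=0.9),
)
```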
Related Documentation
Sources: lib/crewai-tools/src/crewai_tools/tools/code_docs_search_tool/README.md
Agent-to-Agent (A2A) Communication
Related topics: Agents Architecture, Crews and Crew Orchestration
Agent-to-Agent (A2A) Communication
Agent-to-Agent (A2A) Communication is a core architectural layer in CrewAI that enables autonomous agents to exchange messages, delegate tasks, and collaborate within multi-agent workflows. This module provides the infrastructure for agents to interact, share context, and coordinate their activities seamlessly.
Overview
The A2A subsystem in CrewAI implements a standardized communication protocol that allows agents to:
- Exchange structured messages with rich content types
- Delegate tasks to other agents with appropriate context
- Share execution results and artifacts
- Coordinate through hierarchical or collaborative processes
The implementation follows modern agent communication patterns and provides both a programmatic API and extension points for UI integration.
Sources: lib/crewai/src/crewai/a2a/__init__.py
Architecture
High-Level Architecture
graph TD
subgraph "Agent Layer"
A1[Agent 1]
A2[Agent 2]
A3[Agent N]
end
subgraph "A2A Core"
TW[A2A Wrapper]
TT[A2A Types]
UD[Utils: Delegation]
end
subgraph "Extension Layer"
A2UI[A2UI Extensions]
SCH[Schema v0.8]
end
A1 <--> TW
A2 <--> TW
A3 <--> TW
TW <--> TT
TW <--> UD
TW <--> A2UI
A2UI <--> SCH
Component Responsibilities
| Component | Purpose | Key Responsibilities |
|---|---|---|
wrapper.py | A2A Communication Handler | Manages message routing, task delegation, and response handling |
types.py | Data Models | Defines message structures, content types, and protocol elements |
delegation.py | Task Delegation Utility | Provides helper functions for agent delegation patterns |
a2ui/ | UI Extensions | Schema definitions for Agent-to-User Interface communication |
schema/v0_8/ | Protocol Schemas | JSON schemas for message validation and serialization |
Sources: lib/crewai/src/crewai/a2a/wrapper.py
Core Types and Data Models
The A2A module defines comprehensive data models for structured communication between agents. These types ensure type safety and consistent message formats across the system.
Content Types
The A2A protocol supports multiple content types for flexible message composition:
graph LR
M[Message] --> T[TextContent]
M --> A[ArtifactContent]
M --> T2[TaskContent]
M --> S[StatusContent]
M --> P[Part]
T --> TT[Text]
A --> DT[Document]
A --> C[Code]
A --> D[Data]
Message Structure
Messages in A2A communication follow a standardized structure defined in the schema:
{
  "message": {
    "role": "string",
    "content": {
      "parts": []
    },
    "agent": "string",
    "taskId": "string"
  }
}
Sources: lib/crewai/src/crewai/a2a/types.py
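Building and serializing this envelope in plain Python looks like the following. Field values are placeholders, and the dictionary shape simply mirrors the schema above:

```python
import json

# Construct the A2A message envelope described above (placeholder values).
message = {
    "message": {
        "role": "agent",
        "content": {"parts": [{"text": "Summarize the findings"}]},
        "agent": "researcher_01",
        "taskId": "task-42",
    }
}

payload = json.dumps(message)            # serialize for transport
assert json.loads(payload) == message    # round-trips losslessly
```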
A2A Wrapper
The A2AWrapper class serves as the primary interface for agent communication:
class A2AWrapper:
    """Handles Agent-to-Agent communication and delegation."""

    def __init__(self, config: A2AConfig):
        self.config = config

    def send_message(self, agent_id: str, message: A2AMessage) -> A2AResponse:
        """Send a message to another agent."""

    def delegate_task(self, target_agent: str, task: Task) -> DelegationResult:
        """Delegate a task to another agent."""

    def receive_message(self, message: A2AMessage) -> None:
        """Process an incoming message from another agent."""
Key Methods
| Method | Parameters | Return Type | Description |
|---|---|---|---|
send_message | agent_id, message | A2AResponse | Send a message to a specific agent |
delegate_task | target_agent, task | DelegationResult | Delegate a task with full context |
receive_message | message | None | Process incoming messages |
get_status | task_id | TaskStatus | Get the status of a delegated task |
Sources: lib/crewai/src/crewai/a2a/wrapper.py
Task Delegation
The delegation utility provides specialized functions for distributing work across agents:
def delegate_to_agent(
    source_agent: str,
    target_agent: str,
    task: Task,
    context: Dict[str, Any]
) -> DelegationResult:
    """Delegate a task from one agent to another."""

def create_delegation_context(
    source: Agent,
    target: Agent,
    task: Task
) -> DelegationContext:
    """Create a context object for delegation."""
Delegation Flow
graph TD
S[Source Agent] -->|Identifies Task| D1{Delegation Decision}
D1 -->|Can Delegate| C1[Create Context]
D1 -->|Cannot Delegate| R1[Reject Task]
C1 -->|Prepare Message| M1[Build A2A Message]
M1 -->|Send via Wrapper| TW[A2A Wrapper]
TW -->|Route Message| T[Target Agent]
T -->|Execute Task| TR[Task Result]
TR -->|Send Response| TW2[A2A Wrapper]
TW2 -->|Route Response| S2[Source Agent]
Delegation Context
The DelegationContext object captures all necessary information for proper task delegation:
| Field | Type | Description |
|---|---|---|
source_agent | str | Identifier of the delegating agent |
target_agent | str | Identifier of the receiving agent |
task_id | str | Unique identifier for the task |
priority | int | Delegation priority (1-10) |
timeout | int | Maximum execution time in seconds |
retry_count | int | Number of retry attempts |
Sources: lib/crewai/src/crewai/a2a/utils/delegation.py
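The fields in the table above can be sketched as a dataclass. This is an illustrative shape only; the field names come from the table, but the defaults and the dataclass form itself are assumptions, not the library's actual definition:

```python
from dataclasses import dataclass

@dataclass
class DelegationContext:
    """Illustrative sketch of the delegation context fields (assumed defaults)."""
    source_agent: str       # identifier of the delegating agent
    target_agent: str       # identifier of the receiving agent
    task_id: str            # unique identifier for the task
    priority: int = 5       # delegation priority, 1 (low) to 10 (high)
    timeout: int = 300      # maximum execution time in seconds
    retry_count: int = 3    # number of retry attempts
```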
A2UI Extensions
The A2UI (Agent-to-User Interface) module provides schema definitions for rendering agent outputs in user interfaces:
# Extension initialization
from crewai.a2a.extensions.a2ui import A2UIExtension
extension = A2UIExtension()
extension.register_handlers()
Supported Content Rendering
| Content Type | Description | Schema Reference |
|---|---|---|
text | Plain text with optional hints | server_to_client_with_standard_catalog.json |
image | Image content with sizing options | server_to_client_with_standard_catalog.json |
url | Web links with metadata | server_to_client_with_standard_catalog.json |
Text Styling Hints
The schema supports the following text style hints for UI rendering:
| Style Hint | Description | Use Case |
|---|---|---|
h1 | Largest heading | Main section titles |
h2 | Second largest heading | Subsection titles |
h3 | Third largest heading | Minor headings |
h4 | Fourth largest heading | Component labels |
h5 | Fifth largest heading | Detailed labels |
caption | Small text | Figure captions, footnotes |
body | Standard body text | Regular content |
Image Rendering Options
Images in A2A messages support the following fit modes:
| Fit Mode | CSS Equivalent | Description |
|---|---|---|
contain | object-fit: contain | Scale to fit within bounds |
cover | object-fit: cover | Scale to fill bounds, crop if needed |
fill | object-fit: fill | Stretch to fill bounds |
Sources: lib/crewai/src/crewai/a2a/extensions/a2ui/__init__.py, lib/crewai/src/crewai/a2a/extensions/a2ui/schema/v0_8/server_to_client_with_standard_catalog.json
Protocol Versioning
The A2A protocol uses semantic versioning with the current implementation supporting v0.8:
graph LR
V08[v0.8] -->|Current| C[Current Schema]
V08 -->|Features| T[Text Hints]
V08 -->|Features| I[Image Fit Options]
V08 -->|Features| U[URL References]
C -->|Evolution| F[Future Versions]
Schema files are organized by version in the schema/ directory, allowing for backward compatibility and gradual migration:
lib/crewai/src/crewai/a2a/extensions/a2ui/schema/
└── v0_8/
└── server_to_client_with_standard_catalog.json
Usage Examples
Basic Agent Communication
from crewai.a2a import A2AWrapper, A2AMessage, A2AConfig
# Initialize the A2A wrapper
config = A2AConfig(
    agent_id="researcher_01",
    capabilities=["delegate", "respond"]
)
wrapper = A2AWrapper(config)

# Create and send a message
message = A2AMessage(
    role="agent",
    content={
        "parts": [
            {"text": "Please analyze the provided data and return insights"}
        ]
    },
    agent="researcher_01"
)

response = wrapper.send_message(
    agent_id="analyst_01",
    message=message
)
Task Delegation Pattern
from crewai.a2a.utils.delegation import delegate_to_agent, create_delegation_context
# Create delegation context
context = create_delegation_context(
    source=researcher_agent,
    target=analyst_agent,
    task=analysis_task
)

# Execute delegation
result = delegate_to_agent(
    source_agent="researcher_01",
    target_agent="analyst_01",
    task=analysis_task,
    context={"priority": "high", "deadline": "2024-01-15"}
)
UI-Ready Response Structure
from crewai.a2a.extensions.a2ui import create_ui_response
# Create a response optimized for UI rendering
ui_response = create_ui_response(
    content_type="text",
    text="Research findings have been compiled",
    style_hint="body"
)

# Or include rich content
ui_response = create_ui_response(
    content_type="image",
    url={"literalString": "https://example.com/chart.png"},
    fit="contain"
)
Integration with CrewAI
The A2A module integrates with CrewAI's core components:
graph TD
subgraph "CrewAI Core"
C[Crew]
P[Process]
A[Agents]
T[Tasks]
end
subgraph "A2A Layer"
W[Wrapper]
D[Delegation Utils]
U[A2UI]
end
C -->|Orchestrates| A
A -->|Communicates via| W
W -->|Delegates via| D
W -->|Renders via| U
A -->|Execute| T
P -->|Manages Flow| C
Integration Points
| Component | Integration | Description |
|---|---|---|
Crew | Automatic initialization | Creates A2A wrapper for each agent |
Agent | Message handling | Uses A2A for inter-agent communication |
Task | Delegation support | Can be delegated via A2A protocol |
Process | Coordination | Uses A2A for process-level messaging |
Configuration Options
A2AConfig Parameters
| Parameter | Type | Default | Description |
|---|---|---|---|
agent_id | str | Required | Unique identifier for the agent |
capabilities | List[str] | [] | Supported capabilities |
timeout | int | 300 | Default timeout in seconds |
retry_attempts | int | 3 | Number of retry attempts |
enable_ui_extension | bool | True | Enable A2UI rendering |
Environment Variables
| Variable | Description |
|---|---|
A2A_TIMEOUT | Global A2A operation timeout |
A2A_MAX_RETRIES | Maximum retry attempts |
A2A_LOG_LEVEL | Logging verbosity |
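Resolving these variables against the A2AConfig defaults can be sketched as a small helper. The "INFO" log-level fallback, and the idea that the library reads these names directly at startup, are assumptions for illustration:

```python
import os

def a2a_settings(env=os.environ):
    """Resolve A2A tuning from the environment, falling back to the
    defaults documented for A2AConfig (timeout 300 s, 3 retries).
    The 'INFO' log-level default is an assumption."""
    return {
        "timeout": int(env.get("A2A_TIMEOUT", "300")),
        "max_retries": int(env.get("A2A_MAX_RETRIES", "3")),
        "log_level": env.get("A2A_LOG_LEVEL", "INFO"),
    }
```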
Best Practices
- Message Design: Keep messages focused and atomic for better error handling
- Context Preservation: Always include sufficient context when delegating tasks
- Error Handling: Implement proper exception handling for network failures
- Schema Validation: Validate messages against the A2UI schema before sending
- Timeout Management: Set appropriate timeouts based on task complexity
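The retry and timeout practices above can be sketched as a thin wrapper around any send callable. The broad exception catch and the linear backoff are illustrative choices, not crewAI's own retry logic; `send_fn` stands in for something like `wrapper.send_message`:

```python
import time

def send_with_retries(send_fn, message, retry_attempts=3, backoff_seconds=1.0):
    """Call send_fn(message), retrying on failure with linear backoff.
    The retry count mirrors A2AConfig's documented retry_attempts default."""
    last_error = None
    for attempt in range(1, retry_attempts + 1):
        try:
            return send_fn(message)
        except Exception as exc:  # in real code, catch the specific transport error
            last_error = exc
            if attempt < retry_attempts:
                time.sleep(backoff_seconds * attempt)
    raise RuntimeError(f"send failed after {retry_attempts} attempts") from last_error
```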
Summary
The Agent-to-Agent (A2A) Communication module in CrewAI provides a robust foundation for multi-agent collaboration. Key features include:
- Standardized messaging with support for multiple content types
- Task delegation with full context preservation
- UI extensions for rich content rendering
- Versioned schemas ensuring backward compatibility
- Deep integration with CrewAI's agent and task systems
The modular architecture allows for flexible extension and customization while maintaining a consistent communication protocol across all agents in a crew.
Sources: lib/crewai/src/crewai/a2a/wrapper.py
Knowledge Management
Related topics: Memory and Storage System
Overview
Knowledge Management in crewAI provides a structured framework for agents to store, retrieve, and utilize contextual information during task execution. This system enables crews to maintain persistent knowledge that can be referenced across multiple agent interactions, enhancing the contextual awareness and accuracy of agent responses.
Architecture
The Knowledge Management system consists of three primary components working in coordination:
graph TD
A[Agent] --> B[Knowledge Source]
B --> C[Knowledge Storage]
C --> D[Vector Store]
B --> E[PDF Files]
B --> F[CSV Files]
B --> G[Text Data]
C --> H[Query Engine]
H --> A
Core Components
| Component | Purpose | Location |
|---|---|---|
Knowledge | Main class orchestrating knowledge operations | lib/crewai/src/crewai/knowledge/knowledge.py |
KnowledgeSource | Abstract base for data ingestion | lib/crewai/src/crewai/knowledge/source/ |
KnowledgeStorage | Handles persistence and retrieval | lib/crewai/src/crewai/knowledge/storage/knowledge_storage.py |
Knowledge Sources
Knowledge Sources represent the input layer where data is ingested into the system. The framework supports multiple source types to accommodate various data formats.
PDF Knowledge Source
The PDF Knowledge Source processes PDF documents and extracts textual content for vector storage. It handles multi-page documents and preserves structural information where possible.
Key Features:
- Automatic text extraction from PDF pages
- Metadata preservation (page numbers, document titles)
- Chunk-based processing for large documents
CSV Knowledge Source
The CSV Knowledge Source handles tabular data, converting rows and columns into searchable knowledge entries. It maintains the relationship between column headers and values during ingestion.
Key Features:
- Header-aware parsing
- Row-level chunking
- Delimiter detection
Storage Layer
The Knowledge Storage component manages the persistence of processed knowledge using vector embeddings. It interfaces with the underlying vector database to enable semantic search capabilities.
sequenceDiagram
participant Source as Knowledge Source
participant Storage as Knowledge Storage
participant VectorDB as Vector Database
participant Query as Query Engine
Source->>Storage: Ingest document chunks
Storage->>Storage: Generate embeddings
Storage->>VectorDB: Store vectors + metadata
Query->>VectorDB: Semantic search
VectorDB->>Query: Relevant chunks
Query->>Storage: Format results
Storage Configuration
| Parameter | Type | Description | Default |
|---|---|---|---|
chunk_size | int | Size of text chunks in characters | 1000 |
chunk_overlap | int | Overlap between consecutive chunks | 200 |
embedding_model | str | Model used for vectorization | Configured at crew level |
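To make the chunking parameters concrete, here is a minimal sketch of overlap-aware chunking using the table's defaults. It is an illustration of the technique, not the library's implementation:

```python
def chunk_text(text, chunk_size=1000, chunk_overlap=200):
    """Split text into overlapping chunks, mirroring the chunk_size /
    chunk_overlap parameters above. Each chunk starts (chunk_size -
    chunk_overlap) characters after the previous one."""
    if chunk_overlap >= chunk_size:
        raise ValueError("chunk_overlap must be smaller than chunk_size")
    step = chunk_size - chunk_overlap
    return [text[i:i + chunk_size]
            for i in range(0, max(len(text) - chunk_overlap, 1), step)]
```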
Integration with Agents
Knowledge Management integrates with the crewAI agent system through the Agent class. Agents can be configured to automatically query relevant knowledge during task execution.
Basic Integration Pattern:
from crewai import Agent
from crewai.knowledge import Knowledge, PDFKnowledgeSource, CSVKnowledgeSource

# Initialize knowledge base
knowledge = Knowledge()

# Add sources
knowledge.add_source(PDFKnowledgeSource(file_path="document.pdf"))
knowledge.add_source(CSVKnowledgeSource(file_path="data.csv"))

# Create agent with knowledge access
agent = Agent(
    role="Research Analyst",
    goal="Answer questions using company knowledge",
    backstory="Expert at analyzing documents",
    knowledge=knowledge
)
Query Mechanism
The query mechanism enables agents to retrieve relevant knowledge based on semantic similarity. When an agent processes a task, the system automatically retrieves chunks that are contextually relevant to the query.
| Query Parameter | Description |
|---|---|
query_text | The search query string |
top_k | Maximum number of results to return |
similarity_threshold | Minimum similarity score for inclusion |
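The retrieval post-processing implied by these parameters can be sketched over (chunk, score) pairs. `filter_results` is a hypothetical helper for illustration, not a crewAI API:

```python
def filter_results(scored_chunks, top_k=5, similarity_threshold=0.0):
    """Apply the query parameters above: drop matches below
    similarity_threshold, rank by score descending, keep top_k."""
    kept = [(chunk, score) for chunk, score in scored_chunks
            if score >= similarity_threshold]
    kept.sort(key=lambda pair: pair[1], reverse=True)
    return kept[:top_k]
```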
Data Flow
graph LR
A[Document Files] --> B[Knowledge Sources]
B --> C[Text Chunking]
C --> D[Embedding Generation]
D --> E[Vector Storage]
F[Agent Query] --> G[Similarity Search]
G --> E
E --> H[Retrieved Chunks]
H --> I[Agent Context]
Usage with Crews
For multi-agent crews, knowledge can be shared across all agents or restricted to specific agents:
from crewai import Crew
from crewai.knowledge import Knowledge

# Shared knowledge across crew
crew_knowledge = Knowledge()
crew_knowledge.add_source(company_docs_source)

crew = Crew(
    agents=[researcher, analyst, writer],
    tasks=[research_task, analysis_task, report_task],
    knowledge=crew_knowledge  # Available to all agents
)
Best Practices
- Chunk Sizing: Use appropriate chunk sizes based on document structure. Smaller chunks (500-1000 chars) work well for Q&A, larger chunks for document summarization.
- Source Organization: Group related documents into separate knowledge sources for more targeted retrieval.
- Metadata: Include relevant metadata with knowledge sources to improve result filtering.
- Update Strategy: Implement regular synchronization for knowledge sources that change frequently.
Related Components
| Component | Purpose |
|---|---|
crewai_tools | External tool integrations including PDF search, CSV search |
Agent Memory | Short-term contextual memory for agent sessions |
Task Context | Task-specific information passing between agents |
Source: https://github.com/crewAIInc/crewAI / Human Manual
Memory and Storage System
Related topics: Knowledge Management
Overview
The crewAI Memory and Storage System provides persistent, searchable memory capabilities for AI agents and crews. It enables cross-session learning, semantic recall of past interactions, and structured storage of agent experiences. The system is designed to handle various types of memory including short-term, long-term, entity, and knowledge-based memories.
graph TD
A[Agent Request] --> B[UnifiedMemory]
B --> C[MemoryScope]
C --> D{Memory Type}
D --> E[Short-term Memory]
D --> F[Long-term Memory]
D --> G[Entity Memory]
D --> H[Knowledge Memory]
E --> I[Vector Storage]
F --> I
G --> I
H --> I
I --> J[Recall Flow]
J --> K[MemoryMatch Results]
K --> A
Memory Architecture
Core Components
| Component | Purpose | Location |
|---|---|---|
UnifiedMemory | Central interface for all memory operations | unified_memory.py |
MemoryScope | Defines isolation boundaries for memory contexts | memory_scope.py |
RecallFlow | Handles semantic search and retrieval of memories | recall_flow.py |
LanceDBStorage | Vector database backend for persistent storage | lancedb_storage.py |
Data Models
#### MemoryRecord
The fundamental unit of stored information in the memory system:
from datetime import datetime
from typing import Any
from pydantic import BaseModel, Field

class MemoryRecord(BaseModel):
    data: Any  # The actual memory content
    metadata: dict[str, Any]  # Associated metadata
    importance: float = Field(  # Relevance score 0.0-1.0
        default=0.5, ge=0.0, le=1.0
    )
    created_at: datetime  # Creation timestamp
    last_accessed: datetime  # Last retrieval timestamp
    embedding: list[float] | None  # Vector embedding for semantic search
    source: str | None  # Origin tracking (user ID, session ID)
    private: bool = Field(  # Privacy flag for access control
        default=False
    )
Sources: lib/crewai/src/crewai/memory/types.py:1-50
#### MemoryMatch
Returned by recall operations with relevance scoring:
class MemoryMatch(BaseModel):
    record: MemoryRecord  # The matched memory
    score: float  # Combined relevance score
    match_reasons: list[str]  # Why this matched (semantic, recency, importance)
    evidence_gaps: list[str]  # Missing context flags
Sources: lib/crewai/src/crewai/memory/types.py:55-70
Memory Scoping System
The MemoryScope class manages hierarchical isolation of memory contexts, allowing different crews, agents, or sessions to maintain separate memory stores while supporting controlled cross-context access.
Scope Path Operations
| Function | Description |
|---|---|
join_scope_paths(root, inner) | Combines two scope paths with normalization |
normalize_scope_path(path) | Standardizes scope path format |
Scope Path Format
Scope paths follow a hierarchical structure:
/crew/{crew-name}/{memory-type}
/crew/research-crew/short-term
/crew/research-crew/long-term
/crew/research-crew/entity
/crew/research-crew/knowledge
Scope Path Join Behavior
join_scope_paths("/crew/test", "/market-trends")
# Returns: '/crew/test/market-trends'
join_scope_paths("/crew/test", "market-trends")
# Returns: '/crew/test/market-trends'
join_scope_paths("/crew/test", "/")
# Returns: '/crew/test'
join_scope_paths("/crew/test", None)
# Returns: '/crew/test'
Sources: lib/crewai/src/crewai/memory/utils.py:1-50
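The documented join behavior can be reproduced with two small functions. This is an illustrative reimplementation matching the examples above, not the code in utils.py:

```python
def normalize_scope_path(path):
    """Standardize a scope path: collapse duplicate slashes and drop a
    trailing slash; None, '', and '/' all normalize to the empty string."""
    if not path:
        return ""
    parts = [part for part in path.split("/") if part]
    return "/" + "/".join(parts) if parts else ""

def join_scope_paths(root, inner):
    """Combine two scope paths with normalization, matching the
    documented join behavior."""
    return normalize_scope_path(root) + normalize_scope_path(inner)
```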
graph LR
A["root: '/crew/test'"] --> B[join_scope_paths]
C["inner: '/market-trends'"] --> B
B --> D["Result: '/crew/test/market-trends'"]
E["root: '/crew/test'"] --> F[normalize]
F --> G["Result: '/crew/test'"]
Storage Backends
LanceDB Storage
LanceDB is the primary vector storage backend, providing efficient similarity search capabilities:
| Parameter | Type | Default | Description |
|---|---|---|---|
db_path | str | Required | Path to the LanceDB database |
table_name | str | Required | Name of the storage table |
vector_dimension | int | Auto | Embedding vector size |
reset_db | bool | False | Whether to reset on initialization |
Sources: lib/crewai/src/crewai/memory/storage/lancedb_storage.py
Storage Operations
| Operation | Description |
|---|---|
write | Store a new memory record |
read | Retrieve by ID |
search | Semantic similarity search |
delete | Remove by ID |
reset | Clear all records |
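A toy backend exposing the same five operations makes the contract concrete. `InMemoryStorage` is a stand-in sketch; LanceDBStorage's actual method signatures may differ, and real search is vector similarity rather than a predicate:

```python
class InMemoryStorage:
    """Minimal dict-backed store mirroring the operation table above:
    write, read, search, delete, reset."""

    def __init__(self):
        self._records = {}
        self._next_id = 0

    def write(self, record):
        record_id = self._next_id
        self._records[record_id] = record
        self._next_id += 1
        return record_id

    def read(self, record_id):
        return self._records.get(record_id)

    def search(self, predicate, limit=5):
        # A real backend does semantic similarity; a predicate stands in here.
        return [r for r in self._records.values() if predicate(r)][:limit]

    def delete(self, record_id):
        self._records.pop(record_id, None)

    def reset(self):
        self._records.clear()
```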
Recall and Retrieval
The RecallFlow manages how memories are retrieved based on queries. It combines semantic similarity with recency and importance scoring.
sequenceDiagram
participant Agent
participant UnifiedMemory
participant RecallFlow
participant LanceDBStorage
Agent->>UnifiedMemory: Query with context
UnifiedMemory->>RecallFlow: Execute recall(query, scope)
RecallFlow->>LanceDBStorage: Semantic search
LanceDBStorage-->>RecallFlow: Candidate memories
RecallFlow->>RecallFlow: Score by relevance
RecallFlow-->>UnifiedMemory: Ranked MemoryMatch[]
UnifiedMemory-->>Agent: Filtered results
Recall Parameters
| Parameter | Type | Required | Description |
|---|---|---|---|
query | str | Yes | Search query text |
scope | str | Yes | Memory scope path |
limit | int | No | Max results (default: 5) |
include_private | bool | No | Include private memories |
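The combined scoring described above (semantic similarity plus recency plus importance) can be sketched as a weighted blend. The weights and the exponential recency half-life are illustrative assumptions; the docs only name the three signals, not how they are mixed:

```python
from datetime import datetime, timedelta, timezone

def recall_score(semantic, importance, last_accessed, now=None,
                 weights=(0.6, 0.2, 0.2), half_life_days=7.0):
    """Blend semantic similarity, importance, and recency into one score.
    Recency decays exponentially with the age of the memory."""
    now = now or datetime.now(timezone.utc)
    age_days = max((now - last_accessed).total_seconds() / 86400.0, 0.0)
    recency = 0.5 ** (age_days / half_life_days)
    w_sem, w_imp, w_rec = weights
    return w_sem * semantic + w_imp * importance + w_rec * recency
```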
Memory Types
| Type | Purpose | Persistence |
|---|---|---|
| Short-term | Current session context | Ephemeral, cleared on reset |
| Long-term | Cross-session learning | Persistent until explicitly reset |
| Entity | Shared entity information | Persistent, shared across agents |
| Knowledge | Domain-specific grounding | Persistent, used for RAG |
Configuration Options
Crew-Level Configuration
memory:
  enabled: true
  type: "short_term" | "long_term" | "entity" | "knowledge" | "all"
  scope: "/crew/{crew_name}"
CLI Memory Management
# Reset all memories
crewai reset-memories -a
# Reset specific memory types
crewai reset-memories -s # Short-term only
crewai reset-memories -l # Long-term only
crewai reset-memories -e # Entity only
crewai reset-memories -kn # Knowledge only
crewai reset-memories -akn # Agent knowledge only
Sources: lib/cli/src/crewai_cli/templates/AGENTS.md
Privacy and Access Control
The memory system supports private memories that are only accessible under specific conditions:
- Private flag: When private=True, a memory is only visible to recall requests from the same source
- include_private parameter: Set to True to include private memories in cross-source queries
- Source tracking: Each memory records its origin via the source field for provenance and filtering
Embedding Configuration
Memories are stored with vector embeddings for semantic search:
| Provider | Model Example | Configuration |
|---|---|---|
| OpenAI | text-embedding-3-small | OPENAI_API_KEY |
| Google | models/embedding-001 | GOOGLE_API_KEY |
| Ollama | nomic-embed-text | Local endpoint |
| Azure | text-embedding-3 | Azure OpenAI config |
Sources: lib/crewai-tools/src/crewai_tools/tools/directory_search_tool/README.md
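A typical embedder configuration is a small dict passed at crew construction. The exact parameter name (`embedder`) and dict shape are assumptions here and should be verified against your installed crewAI version:

```python
# Illustrative embedder configuration for the OpenAI row in the table above.
embedder_config = {
    "provider": "openai",
    "config": {"model": "text-embedding-3-small"},
}

# Typically supplied when building the crew, e.g.:
# crew = Crew(agents=[...], tasks=[...], memory=True, embedder=embedder_config)
```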
Best Practices
- Scope Organization: Use consistent naming conventions for scope paths to enable efficient cross-crew memory sharing
- Importance Scoring: Set appropriate importance values (0.0-1.0) to influence retrieval ranking
- Privacy Handling: Mark sensitive information with private=True to prevent unintended access
- Memory Pruning: Regularly reset short-term memory for clean session boundaries
- Embedding Selection: Choose embedding models appropriate for your content domain
Sources: lib/crewai/src/crewai/memory/types.py:1-50
Doramagic Pitfall Log
Source-linked risks stay visible on the manual page so the preview does not read like a recommendation.
Doramagic extracted 16 source-linked risk signals. Review them before installing or handing real data to the project.
1. Installation risk: [FEATURE] Implement Process.consensual with a pluggable ConsensusEngine
- Severity: high
- Finding: Installation risk is backed by a source signal: [FEATURE] Implement Process.consensual with a pluggable ConsensusEngine. Treat it as a review item until the current version is checked.
- User impact: First-time setup may fail or require extra isolation and rollback planning.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/issues/5708
2. Project risk: [BUG] Wrong code in document
- Severity: high
- Finding: Project risk is backed by a source signal: [BUG] Wrong code in document. Treat it as a review item until the current version is checked.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/issues/5378
3. Project risk: [FEATURE] Enhance the document about @persisit
- Severity: high
- Finding: Project risk is backed by a source signal: [FEATURE] Enhance the document about @persisit. Treat it as a review item until the current version is checked.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/issues/5372
4. Security or permission risk: [FEATURE] GuardrailProvider interface for pre-tool-call authorization
- Severity: high
- Finding: Security or permission risk is backed by a source signal: [FEATURE] GuardrailProvider interface for pre-tool-call authorization. Treat it as a review item until the current version is checked.
- User impact: The project may affect permissions, credentials, data exposure, or host boundaries.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/issues/4877
5. Project risk: Project risk needs validation
- Severity: medium
- Finding: Project risk is backed by a source signal: Project risk needs validation. Treat it as a review item until the current version is checked.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: identity.distribution | github_repo:710601088 | https://github.com/crewAIInc/crewAI | repo=crewai; install=skills
6. Installation risk: 1.14.4
- Severity: medium
- Finding: Installation risk is backed by a source signal: 1.14.4. Treat it as a review item until the current version is checked.
- User impact: First-time setup may fail or require extra isolation and rollback planning.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/releases/tag/1.14.4
7. Installation risk: 1.14.4a1
- Severity: medium
- Finding: Installation risk is backed by a source signal: 1.14.4a1. Treat it as a review item until the current version is checked.
- User impact: First-time setup may fail or require extra isolation and rollback planning.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/releases/tag/1.14.4a1
8. Installation risk: 1.14.5a4
- Severity: medium
- Finding: Installation risk is backed by a source signal: 1.14.5a4. Treat it as a review item until the current version is checked.
- User impact: First-time setup may fail or require extra isolation and rollback planning.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a4
9. Configuration risk: Scans the client database to extract existing policy details.
- Severity: medium
- Finding: Configuration risk is backed by a source signal: Scans the client database to extract existing policy details. Treat it as a review item until the current version is checked.
- User impact: Users may get misleading failures or incomplete behavior unless configuration is checked carefully.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/issues/5760
10. Capability assumption: README/documentation is current enough for a first validation pass.
- Severity: medium
- Finding: README/documentation is current enough for a first validation pass.
- User impact: The project should not be treated as fully validated until this signal is reviewed.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: capability.assumptions | github_repo:710601088 | https://github.com/crewAIInc/crewAI | README/documentation is current enough for a first validation pass.
11. Maintenance risk: 1.14.5a1
- Severity: medium
- Finding: Maintenance risk is backed by a source signal: 1.14.5a1. Treat it as a review item until the current version is checked.
- User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: Source-linked evidence: https://github.com/crewAIInc/crewAI/releases/tag/1.14.5a1
12. Maintenance risk: Maintainer activity is unknown
- Severity: medium
- Finding: Maintenance risk is backed by a source signal: Maintainer activity is unknown. Treat it as a review item until the current version is checked.
- User impact: Users cannot judge support quality until recent activity, releases, and issue response are checked.
- Recommended check: Open the linked source, confirm whether it still applies to the current version, and keep the first run isolated.
- Evidence: evidence.maintainer_signals | github_repo:710601088 | https://github.com/crewAIInc/crewAI | last_activity_observed missing
Source: Doramagic discovery, validation, and Project Pack records
Community Discussion Evidence
These external discussion links are review inputs, not standalone proof that the project is production-ready. Open the linked issues or discussions before treating the pack as ready for your environment.
Doramagic exposes project-level community discussion separately from official documentation. Review these links before using crewAI with real data or production workflows.
- Question: integration path for Agent Threat Rules detection in crewai/se - github / github_issue
- [[FEATURE] Implement Process.consensual with a pluggable ConsensusEngine](https://github.com/crewAIInc/crewAI/issues/5708) - github / github_issue
- [[FEATURE] GuardrailProvider interface for pre-tool-call authorization](https://github.com/crewAIInc/crewAI/issues/4877) - github / github_issue
- Security: OWASP Agent Memory Guard – protect CrewAI agents from memory p - github / github_issue
- Scans the client database to extract existing policy details. - github / github_issue
- Security: Request to enable Private Vulnerability Reporting / coordinate - github / github_issue
- [[FEATURE] Enhance the document about @persisit](https://github.com/crewAIInc/crewAI/issues/5372) - github / github_issue
- [[BUG] Wrong code in document](https://github.com/crewAIInc/crewAI/issues/5378) - github / github_issue
- [[FEATURE] Tool to add input_files](https://github.com/crewAIInc/crewAI/issues/5758) - github / github_issue
- Project risk needs validation - GitHub / issue
- 1.14.4 - GitHub / issue
- 1.14.4a1 - GitHub / issue
Source: Project Pack community evidence and pitfall evidence