# Project Documentation for https://github.com/mem0ai/mem0

Generated: 2026-05-16 07:03:34 UTC

## Table of Contents

- [Introduction to Mem0](#page-introduction)
- [Quick Start Guide](#page-quickstart)
- [Use Cases and Applications](#page-use-cases)
- [System Architecture](#page-architecture)
- [Memory Operations](#page-memory-operations)
- [AI Model Integration](#page-ai-integration)
- [Vector Stores and Storage](#page-vector-stores)
- [Embedding Models](#page-embeddings)
- [Python SDK](#page-python-sdk)
- [TypeScript/Node.js SDK](#page-typescript-sdk)

<a id='page-introduction'></a>

## Introduction to Mem0

### Related Pages

Related topics: [System Architecture](#page-architecture), [Quick Start Guide](#page-quickstart), [Use Cases and Applications](#page-use-cases)

<details>
<summary>Relevant source files</summary>

The following source files were used to generate this page:

- [README.md](https://github.com/mem0ai/mem0/blob/main/README.md)
- [evaluation/README.md](https://github.com/mem0ai/mem0/blob/main/evaluation/README.md)
- [cli/node/README.md](https://github.com/mem0ai/mem0/blob/main/cli/node/README.md)
- [cli/python/README.md](https://github.com/mem0ai/mem0/blob/main/cli/python/README.md)
- [mem0-ts/src/oss/README.md](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/oss/README.md)
- [skills/README.md](https://github.com/mem0ai/mem0/blob/main/skills/README.md)
</details>

# Introduction to Mem0

Mem0 is an open-source memory infrastructure designed specifically for AI agents and applications. It provides intelligent, persistent memory management that enables AI systems to retain, retrieve, and utilize information across conversations and sessions. Unlike traditional retrieval-augmented generation (RAG) approaches that treat all context equally, Mem0 implements a hierarchical memory system that automatically prioritizes and maintains relevant information over time.

The platform addresses one of the most significant challenges in AI development: creating systems that can remember user preferences, conversation context, and learned facts in a way that feels natural and improves over time. Mem0 serves as a foundational layer for building production-ready AI agents with scalable long-term memory capabilities.

## Core Concepts

### Memory Hierarchy

Mem0 organizes memory into multiple scopes, enabling fine-grained control over information retention and retrieval. The system distinguishes between user-level, agent-level, and session-level memories, allowing developers to choose the appropriate context for different types of information.

| Scope Level | Description | Use Case |
|-------------|-------------|----------|
| **User** | Global preferences and facts about a specific user | User preferences, historical context |
| **Agent** | Information relevant to a specific AI agent instance | Agent-specific learning, personality traits |
| **Session** | Temporary context within a single conversation | Current discussion topics, immediate context |
| **Run** | Information specific to a particular execution context | Workflow-specific state |
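
The scoping behavior in the table can be pictured as a metadata filter over stored memories. This is an illustrative stand-in, not mem0's actual internals; `scope_filter` and `matches` are hypothetical helpers:

```python
def scope_filter(user_id=None, agent_id=None, run_id=None):
    """Combine the scope levels from the table into a single filter."""
    filters = {}
    if user_id is not None:
        filters["user_id"] = user_id
    if agent_id is not None:
        filters["agent_id"] = agent_id
    if run_id is not None:
        filters["run_id"] = run_id
    return filters

def matches(memory, filters):
    """A memory is visible only when it satisfies every active scope."""
    return all(memory.get(key) == value for key, value in filters.items())
```

Narrower scopes simply add more keys to the filter, so a run-scoped query sees a subset of what a user-scoped query sees.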

### Memory Operations

The memory system supports four fundamental operations that form the backbone of all interactions:

**Add** - Stores new information in the memory system with automatic entity extraction and deduplication. The system intelligently parses input to identify key facts, relationships, and metadata.

**Search** - Retrieves relevant memories using vector similarity search combined with semantic understanding. The search operation supports hybrid queries that combine keyword matching with semantic similarity.

**Update** - Modifies existing memories when new information supersedes or refines previously stored facts. The system maintains version history for audit purposes.

**Delete** - Removes specific memories or bulk deletes based on scope filters. Supports soft deletes and hard deletes depending on compliance requirements.
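
A deliberately simplified, self-contained sketch of these four operations. The `ToyMemoryStore` class is hypothetical; the real engine layers LLM-driven extraction, deduplication, and vector search on top of this shape:

```python
import uuid

class ToyMemoryStore:
    """Minimal illustration of add / search / update / delete."""

    def __init__(self):
        self._memories = {}

    def add(self, text, user_id):
        # Real system: entity extraction and deduplication happen here
        mem_id = str(uuid.uuid4())
        self._memories[mem_id] = {"text": text, "user_id": user_id}
        return mem_id

    def search(self, query, user_id):
        # Stand-in for vector similarity: naive keyword overlap
        terms = set(query.lower().split())
        return [
            (mem_id, m) for mem_id, m in self._memories.items()
            if m["user_id"] == user_id and terms & set(m["text"].lower().split())
        ]

    def update(self, mem_id, text):
        # Real system: keeps version history for audit purposes
        self._memories[mem_id]["text"] = text

    def delete(self, mem_id):
        # Real system: supports soft deletes for compliance
        del self._memories[mem_id]
```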

## Architecture Overview

Mem0's architecture is designed with modularity and extensibility in mind. The system consists of several interconnected components that work together to provide seamless memory management.

```mermaid
graph TD
    A[AI Agent / Application] --> B[Mem0 API Layer]
    B --> C[Memory Core Engine]
    C --> D[Vector Store]
    C --> E[Graph Store]
    C --> F[SQLite / Database]
    D --> G[Embedding Models]
    E --> H[Entity Extraction]
    F --> I[Metadata Storage]
    B --> J[LLM Integration]
    J --> K[Fact Extraction]
    J --> L[Memory Synthesis]
```

### Key Components

| Component | Function | Extensible |
|-----------|----------|------------|
| **API Layer** | REST interface for memory operations | Yes - custom endpoints |
| **Memory Core** | Orchestrates memory operations | Yes - custom strategies |
| **Vector Store** | Stores embeddings for semantic search | Yes - multiple backends |
| **Graph Store** | Manages entity relationships | Yes - Neo4j, in-memory |
| **LLM Integration** | Powers extraction and synthesis | Yes - OpenAI, Anthropic, local |
| **Embedding Service** | Generates vector representations | Yes - OpenAI, HuggingFace |

## Deployment Options

Mem0 offers multiple deployment options to meet different organizational requirements and use cases.

### Cloud Platform

The managed Mem0 Platform provides a fully hosted solution with zero infrastructure management. Users can sign up at app.mem0.ai and immediately begin using the memory infrastructure via SDK or API keys. The cloud platform includes built-in monitoring, automatic scaling, and enterprise-grade security features.

### Self-Hosted Server

For organizations requiring on-premise deployment or data sovereignty, Mem0 provides a self-hosted option using Docker Compose. The server includes a web-based dashboard for configuration and management.

```bash
# Recommended bootstrap command
cd server && make bootstrap

# Manual start
cd server && docker compose up -d
```

Self-hosted deployments support authentication out of the box, with options to configure admin accounts and API keys through a setup wizard or environment variables. The `ADMIN_API_KEY` environment variable enables programmatic admin creation for automated deployments.

### Python SDK

The primary Python SDK provides the most comprehensive feature set for Python-based applications:

```bash
pip install mem0ai
```

For NLP-enhanced features including BM25 keyword matching and entity extraction:

```bash
pip install mem0ai[nlp]
python -m spacy download en_core_web_sm
```

### TypeScript/JavaScript SDK

The official npm package provides TypeScript-first support for JavaScript and TypeScript applications:

```bash
npm install mem0ai
```

The TypeScript implementation (`mem0-ts`) offers an alternative open-source option using OpenAI for embeddings and completions, with SQLite-based history tracking and optional graph-based memory relationships.

### CLI Tools

Command-line interfaces are available for both Python and Node.js environments:

```bash
# Python CLI
pip install mem0-cli

# Node.js CLI
npm install -g @mem0/cli
```

## Key Features

### Intelligent Memory Extraction

Mem0 automatically extracts entities, facts, and relationships from conversational input. The extraction process uses large language models to understand context and identify meaningful information that should be stored. This reduces the burden on developers to explicitly specify what to remember.

### Hybrid Search Capabilities

Memory retrieval combines multiple search techniques for optimal results:

- **Vector similarity search** - Finds semantically similar memories using embeddings
- **BM25 keyword matching** - Ensures exact keyword matches are captured
- **Entity extraction** - Identifies specific entities for targeted retrieval
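
The first two techniques can be blended into a single ranking score. A minimal illustration, assuming embedding vectors are already computed; `hybrid_score` and its weighting are illustrative, not mem0's actual formula:

```python
import math

def cosine(a, b):
    """Semantic similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

def keyword_score(query, text):
    """Fraction of query terms that appear verbatim in the text."""
    q, t = set(query.lower().split()), set(text.lower().split())
    return len(q & t) / len(q) if q else 0.0

def hybrid_score(query, text, query_vec, text_vec, alpha=0.5):
    """Weighted blend of semantic similarity and exact keyword overlap."""
    return alpha * cosine(query_vec, text_vec) + (1 - alpha) * keyword_score(query, text)
```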

### Graph-Based Memory (Mem0+)

An enhanced version called Mem0+ adds graph-based relationship tracking, enabling the system to understand connections between entities and facts. This is particularly useful for complex reasoning tasks that require understanding relationships between different pieces of information.

### Custom Instructions

Mem0 supports custom extraction instructions that guide the memory system to prioritize specific types of information based on use case requirements. The platform can auto-generate these instructions based on a description of the application domain.

## Configuration and Customization

### Embedding Models

Mem0 supports multiple embedding providers and models:

| Provider | Default Model | Custom Model Support |
|----------|--------------|---------------------|
| OpenAI | text-embedding-3-small | Yes |
| HuggingFace | Various sentence-transformers | Yes |
| Azure OpenAI | text-embedding-3-small | Yes |

### LLM Configuration

Language model settings control fact extraction and memory synthesis:

- Provider selection (OpenAI, Anthropic, local models)
- Model selection per operation type
- API key management and key rotation
- Temperature and generation parameters
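
These settings map onto a nested configuration dictionary. The shape below follows the pattern accepted by the Python SDK's `Memory.from_config`, though exact keys can vary by version; all values shown are placeholders:

```python
# Placeholder values; consult your mem0 version's docs for the exact schema.
config = {
    "llm": {
        "provider": "openai",           # provider selection
        "config": {
            "model": "gpt-4o",          # model selection
            "temperature": 0.1,         # generation parameter
            "api_key": "YOUR_API_KEY",  # prefer environment variables in practice
        },
    }
}

# Typical usage (requires valid credentials):
# from mem0 import Memory
# memory = Memory.from_config(config)
```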

### Memory Storage

Configurable storage backends adapt to different deployment requirements:

```mermaid
graph LR
    A[Memory Write] --> B{Storage Backend}
    B --> C[In-Memory]
    B --> D[SQLite]
    B --> E[PostgreSQL + pgvector]
    B --> F[Qdrant]
    B --> G[ChromaDB]
    B --> H[Weaviate]
```

## Use Cases

Mem0 supports a wide range of applications where persistent memory is valuable:

**Personal AI Assistants** - Maintain user preferences, conversation history, and learned habits across sessions to provide increasingly personalized experiences.

**Customer Service Bots** - Remember customer context across multiple support interactions, eliminating the need for customers to repeat information.

**Developer Tools** - Enable AI coding assistants to learn team conventions, project-specific patterns, and individual developer preferences.

**Healthcare Applications** - Maintain patient history and context across appointments while ensuring data privacy and compliance.

**Educational Platforms** - Track student progress, learning preferences, and knowledge gaps to provide personalized tutoring experiences.

## Evaluation Framework

Mem0 includes a comprehensive evaluation framework for assessing memory system performance across different scenarios. The framework supports comparison between multiple memory techniques including base Mem0, Mem0+, RAG implementations, and LangMem.

| Command | Description | Notes |
|---------|-------------|-------|
| `run-mem0-add` | Add memories using Mem0 | Standard memory addition |
| `run-mem0-search` | Search memories using Mem0 | Standard memory retrieval |
| `run-mem0-plus-add` | Add memories using Mem0+ | Graph-enhanced addition |
| `run-mem0-plus-search` | Search memories using Mem0+ | Graph-enhanced retrieval |
| `run-rag` | RAG with chunk size 500 | Baseline RAG comparison |

The evaluation framework uses Makefile commands for standardized testing and supports custom parameter configuration via command-line arguments.

## Citation

If you use Mem0 in your research or development, please cite the following paper:

```bibtex
@article{mem0,
  title={Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory},
  author={Chhikara, Prateek and Khant, Dev and Aryan, Saket and Singh, Taranjeet and Yadav, Deshraj},
  journal={arXiv preprint arXiv:2504.19413},
  year={2025}
}
```

## License

Mem0 is released under the Apache 2.0 license, enabling both commercial and open-source usage with minimal restrictions. The permissive license allows integration into proprietary applications while requiring attribution and preservation of copyright notices.

---

<a id='page-quickstart'></a>

## Quick Start Guide

### Related Pages

Related topics: [Introduction to Mem0](#page-introduction), [Python SDK](#page-python-sdk), [TypeScript/Node.js SDK](#page-typescript-sdk)

<details>
<summary>Relevant source files</summary>

The following source files were used to generate this page:

- [README.md](https://github.com/mem0ai/mem0/blob/main/README.md)
- [evaluation/README.md](https://github.com/mem0ai/mem0/blob/main/evaluation/README.md)
- [cli/node/README.md](https://github.com/mem0ai/mem0/blob/main/cli/node/README.md)
- [openmemory/api/README.md](https://github.com/mem0ai/mem0/blob/main/openmemory/api/README.md)
- [server/dashboard/src/app/setup/page.tsx](https://github.com/mem0ai/mem0/blob/main/server/dashboard/src/app/setup/page.tsx)
- [server/dashboard/src/app/(root)/dashboard/configuration/page.tsx](https://github.com/mem0ai/mem0/blob/main/server/dashboard/src/app/(root)/dashboard/configuration/page.tsx)
- [server/dashboard/src/app/(root)/dashboard/api-keys/page.tsx](https://github.com/mem0ai/mem0/blob/main/server/dashboard/src/app/(root)/dashboard/api-keys/page.tsx)
</details>

# Quick Start Guide

Mem0 provides a comprehensive memory infrastructure for AI applications, enabling persistent, personalized, and adaptive AI experiences. This guide covers all deployment options to get you up and running quickly.

## Overview

Mem0 is a production-ready memory layer for AI agents that stores, retrieves, and updates user and agent memories across interactions. The platform supports multiple deployment options: cloud-hosted, self-hosted server, and local SDK integration.

**Key Features:**
- Multi-level memory (user, agent, session, app)
- Hybrid search with semantic and keyword matching
- Entity extraction and relationship tracking
- Cloud, self-hosted, and SDK deployment options
- Cross-platform SDK support (Python, Node.js, CLI)

Source: [README.md:1-30]()

## Installation Methods

Mem0 supports multiple installation pathways depending on your use case and deployment preference.

### Python SDK

Install the core Mem0 package via pip:

```bash
pip install mem0ai
```

For enhanced search capabilities with NLP support (BM25 keyword matching and entity extraction):

```bash
pip install mem0ai[nlp]
python -m spacy download en_core_web_sm
```

Source: [README.md:12-16]()

### Node.js SDK

For JavaScript/TypeScript environments:

```bash
npm install mem0ai
```

Source: [README.md:20-22]()

### CLI Tool

Install the Mem0 CLI for terminal-based memory management:

```bash
npm install -g @mem0/cli   # or: pip install mem0-cli
```

Source: [README.md:24-26]()

## Deployment Options

Mem0 offers three deployment models to fit different infrastructure requirements.

```mermaid
graph TD
    A[Mem0 Deployment Options] --> B[Cloud Platform]
    A --> C[Self-Hosted Server]
    A --> D[Local SDK Integration]
    
    B --> B1[app.mem0.ai]
    B --> B2[API Key Required]
    
    C --> C1[Docker Compose]
    C --> C2[Custom Configuration]
    
    D --> D1[Python SDK]
    D --> D2[Node.js SDK]
```

Source: [README.md:1-30]()

### Cloud Platform

The quickest path to production memory infrastructure:

1. Sign up at [Mem0 Platform](https://app.mem0.ai?utm_source=oss&utm_medium=readme)
2. Embed the memory layer via SDK or API keys
3. Start using memory operations immediately

Source: [README.md:28-32]()

### Self-Hosted Server

For organizations requiring full control over their infrastructure.

#### Quick Bootstrap (Recommended)

```bash
cd server && make bootstrap
```

This single command starts the Docker stack, creates an admin account, and issues your first API key.

#### Manual Setup

```bash
cd server && docker compose up -d
```

Access the setup wizard at `http://localhost:3000`.

> **Note:** Self-hosted authentication is enabled by default. If upgrading from a pre-auth build, set `ADMIN_API_KEY`, register an admin through the wizard, or use `AUTH_DISABLED=true` for local development only.

Source: [README.md:17-19]()

**Configuration Requirements:**
For detailed configuration options, refer to the [self-hosted documentation](https://docs.mem0.ai/open-source/overview).

Source: [README.md:18-19]()

## Initial Setup Workflow

```mermaid
graph LR
    A[Initialize Mem0] --> B[Configure Provider]
    B --> C[Set API Keys]
    C --> D[Add Memories]
    D --> E[Search/Retrieve]
    
    F[CLI: mem0 init] --> A
    G[SDK: Mem0() config] --> A
```

### Web Dashboard Setup

The self-hosted server includes a guided setup wizard with the following steps:

| Step | Title | Description |
|------|-------|-------------|
| 0 | Create Admin Account | Set up initial admin credentials (name, email, password) |
| 1 | Configure Provider | Select LLM provider and enter API credentials |
| 2 | Select Use Case | Choose preset or enter custom use case for instruction generation |
| 3 | Generate Instructions | Auto-generate custom memory extraction instructions |
| 4 | Test Setup | Verify configuration with a test API call |

Source: [server/dashboard/src/app/setup/page.tsx](https://github.com/mem0ai/mem0/blob/main/server/dashboard/src/app/setup/page.tsx)

**Setup Commands Example:**

```bash
curl -X POST ${apiUrl}/memories \
  -H "X-API-Key: ${apiKey}" \
  -H "Content-Type: application/json" \
  -d '{"messages": [{"role": "user", "content": "${testMessage}"}], "user_id": "setup-test"}'
```

Source: [server/dashboard/src/app/setup/page.tsx](https://github.com/mem0ai/mem0/blob/main/server/dashboard/src/app/setup/page.tsx)

### CLI Setup

Initialize the CLI with your credentials:

```bash
mem0 init
mem0 add "Prefers dark mode and vim keybindings" --user-id alice
mem0 search "What does Alice prefer?" --user-id alice
```

Source: [README.md:26-27]()

### Provider Configuration

Configure your LLM and embedding providers:

| Setting | Description | Example Value |
|---------|-------------|---------------|
| LLM Provider | Language model provider | OpenAI, Anthropic, Azure OpenAI |
| LLM Model | Specific model identifier | gpt-4o, claude-3-5-sonnet-20240620 |
| Embedder Provider | Embedding model provider | OpenAI, Azure OpenAI |
| Embedder Model | Embedding model identifier | text-embedding-3-small |
| API Key | Provider authentication key | sk-... |

Source: [server/dashboard/src/app/(root)/dashboard/configuration/page.tsx](https://github.com/mem0ai/mem0/blob/main/server/dashboard/src/app/(root)/dashboard/configuration/page.tsx)

## Core Operations

### Adding Memories

Memories can be added through various interfaces:

**CLI:**
```bash
mem0 add "User prefers dark mode" --user-id alice
mem0 add "Agent configuration" --agent-id bot-123
```

**SDK (Python):**
```python
from mem0 import Memory

memory = Memory()
memory.add("User prefers dark mode", user_id="alice")
```

### Searching Memories

```bash
mem0 search "What are user preferences?" --user-id alice
```

### Bulk Import

Import memories from a JSON file:

```bash
mem0 import data.json --user-id alice
```

JSON file format:
```json
[
  {
    "memory": "User prefers dark mode",
    "user_id": "alice",
    "metadata": {"source": "survey"}
  }
]
```

Each item can include `memory` (or `text` or `content`), optional `user_id`, `agent_id`, and `metadata` fields.

Source: [cli/node/README.md](https://github.com/mem0ai/mem0/blob/main/cli/node/README.md)
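
A small helper can normalize these interchangeable fields before import. This is a sketch; `normalize_import_item` is hypothetical and not part of the CLI:

```python
def normalize_import_item(item):
    """Accept `memory`, `text`, or `content` as the memory field, per the import format."""
    text = item.get("memory") or item.get("text") or item.get("content")
    if text is None:
        raise ValueError("item needs a 'memory', 'text', or 'content' field")
    return {
        "memory": text,
        "user_id": item.get("user_id"),
        "agent_id": item.get("agent_id"),
        "metadata": item.get("metadata", {}),
    }
```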

### Entity Management

```bash
# List entities
mem0 entity list users
mem0 entity list agents --output json

# Delete entities
mem0 entity delete --user-id alice --force
```

Source: [cli/node/README.md](https://github.com/mem0ai/mem0/blob/main/cli/node/README.md)

## API Key Management

### Creating API Keys

1. Navigate to **Dashboard → API Keys**
2. Click **Create API Key**
3. Save the generated key securely

> **Important:** Save your API key immediately after creation — it will not be displayed again.

Source: [server/dashboard/src/app/(root)/dashboard/api-keys/page.tsx](https://github.com/mem0ai/mem0/blob/main/server/dashboard/src/app/(root)/dashboard/api-keys/page.tsx)

### Key Limitations

| Plan | Key Limit | Notes |
|------|-----------|-------|
| Free | 3 keys | Consider Cloud for multiple applications |
| Cloud | Multiple | Project-based isolation available |

A warning banner appears when you reach the 3-key limit on self-hosted deployments.

Source: [server/dashboard/src/app/(root)/dashboard/api-keys/page.tsx](https://github.com/mem0ai/mem0/blob/main/server/dashboard/src/app/(root)/dashboard/api-keys/page.tsx)

## CLI Commands Reference

| Command | Description |
|---------|-------------|
| `mem0 init` | Initialize CLI with credentials |
| `mem0 add <text>` | Add a memory |
| `mem0 search <query>` | Search memories |
| `mem0 import <file>` | Bulk import from JSON |
| `mem0 config show` | Display current config |
| `mem0 config get <key>` | Get specific config value |
| `mem0 config set <key> <value>` | Set a config value |
| `mem0 entity list <type>` | List entities (users/agents/apps/runs) |
| `mem0 entity delete` | Delete an entity |
| `mem0 event list` | List background events |
| `mem0 event status <id>` | Check event status |
| `mem0 status` | Verify API connection |
| `mem0 version` | Print CLI version |

**Flags:**
- `--user-id <id>` — Specify user context
- `--agent-id <id>` — Specify agent context
- `--preview` — Preview without deleting (for delete operations)
- `--force` — Skip confirmation prompt
- `-o, --output` — Output format (text/json)

Source: [cli/node/README.md](https://github.com/mem0ai/mem0/blob/main/cli/node/README.md)

## Docker Development (OpenMemory)

For local API development using OpenMemory:

```bash
# Build containers
make build

# Create environment file
make env
# Then edit api/.env and enter OPENAI_API_KEY

# Start services
make up
```

The API will be available at `http://localhost:8765`.

**Common Commands:**
```bash
make logs      # View container logs
make shell     # Open shell in container
make migrate   # Run database migrations
make test      # Run tests
make test-clean # Run tests and clean up
make down      # Stop containers
```

API documentation available at:
- Swagger UI: `http://localhost:8765/docs`
- ReDoc: `http://localhost:8765/redoc`

Source: [openmemory/api/README.md](https://github.com/mem0ai/mem0/blob/main/openmemory/api/README.md)

## Running Experiments

For evaluation purposes, Mem0 provides experiment scripts:

```bash
# Memory Techniques
make run-mem0-add         # Add memories using Mem0
make run-mem0-search      # Search memories using Mem0
make run-mem0-plus-add    # Add memories using Mem0+ (graph-based)
make run-mem0-plus-search # Search memories using Mem0+

# RAG Experiments
make run-rag              # Run RAG with chunk size 500
make run-full-context     # Run RAG with full context

# Other Techniques
make run-langmem          # Run LangMem experiments
make run-zep-add          # Add memories using Zep
make run-zep-search       # Search memories using Zep
make run-openai           # Run OpenAI experiments
```

**Custom Parameters:**

| Parameter | Description | Default |
|-----------|-------------|---------|
| `--technique_type` | Memory technique (mem0, rag, langmem) | mem0 |
| `--method` | Method to use (add, search) | add |
| `--chunk_size` | Chunk size for processing | 1000 |
| `--top_k` | Number of results to retrieve | varies |

Alternatively, run experiments directly:
```bash
python run_experiments.py --technique_type [mem0|rag|langmem] [additional parameters]
```

Source: [evaluation/README.md](https://github.com/mem0ai/mem0/blob/main/evaluation/README.md)

## Next Steps

- **Configuration:** Customize provider settings in the dashboard configuration page
- **API Reference:** Explore the full API at `/docs` when running self-hosted
- **Documentation:** Visit [docs.mem0.ai](https://docs.mem0.ai) for detailed guides
- **Examples:** Check the `examples/` directory for integration demos
- **CLI Help:** Run `mem0 --help` for command options

## Citation

If you use Mem0 in your research or application, please cite:

```bibtex
@article{mem0,
  title={Mem0: Building Production-Ready AI Agents with Scalable Long-Term Memory},
  author={Chhikara, Prateek and Khant, Dev and Aryan, Saket and Singh, Taranjeet and Yadav, Deshraj},
  journal={arXiv preprint arXiv:2504.19413},
  year={2025}
}
```

Source: [README.md:1-10]()

---

<a id='page-use-cases'></a>

## Use Cases and Applications

### Related Pages

Related topics: [Introduction to Mem0](#page-introduction)

<details>
<summary>Relevant source files</summary>

The following source files were used to generate this page:

- [README.md](https://github.com/mem0ai/mem0/blob/main/README.md)
- [openclaw/README.md](https://github.com/mem0ai/mem0/blob/main/openclaw/README.md)
- [evaluation/README.md](https://github.com/mem0ai/mem0/blob/main/evaluation/README.md)
- [cli/node/README.md](https://github.com/mem0ai/mem0/blob/main/cli/node/README.md)
- [examples/multimodal-demo/src/components/messages.tsx](https://github.com/mem0ai/mem0/blob/main/examples/multimodal-demo/src/components/messages.tsx)
</details>

# Use Cases and Applications

Mem0 provides a comprehensive memory infrastructure for AI applications, enabling developers to build intelligent systems that maintain context across conversations, users, and sessions. This page documents the primary use cases, application patterns, and real-world scenarios where Mem0 adds significant value.

## Overview

Mem0 is designed as a memory layer for AI agents and applications. It addresses the fundamental challenge of maintaining stateful, personalized interactions in AI systems that are inherently stateless. The platform supports multiple deployment models including self-hosted servers, cloud platforms, and embedded SDK integrations. Source: [README.md:1-30]()

## Core Use Cases

### Personal AI Assistants

Mem0 powers personal AI assistants that learn and remember user preferences, habits, and historical interactions. These assistants can recall past conversations, understand user context, and provide personalized responses based on accumulated knowledge.

```mermaid
graph TD
    A[User Input] --> B[Mem0 Memory Layer]
    B --> C{Retrieve Relevant Memories}
    C --> D[User Preferences]
    C --> E[Conversation History]
    C --> F[Historical Context]
    D --> G[AI Response Generation]
    E --> G
    F --> G
    G --> H[Store New Memories]
    H --> B
```

**Key Features:**
- Persistent user profiles across sessions
- Preference learning and adaptation
- Context-aware response generation
- Multi-turn conversation continuity

### Customer Support Chatbots

Enterprise customer support systems benefit from Mem0's ability to maintain conversation history and customer context. Support agents and chatbots can access previous tickets, understand ongoing issues, and provide consistent assistance across multiple interaction channels. Source: [README.md:40-60]()

**Implementation Pattern:**
```python
# Typical customer support memory flow
memory.add(
    text="Customer reported payment failure on order #12345",
    user_id="customer_456",
    metadata={"ticket_id": "T-789", "priority": "high"}
)
```

### Healthcare Assistants

AI-powered healthcare applications use Mem0 to maintain patient context, track medical history, and ensure continuity of care across multiple interactions. These systems must handle sensitive data with appropriate privacy considerations while providing valuable clinical insights. Source: [README.md:50-80]()

**Key Considerations:**
- HIPAA compliance for patient data
- Structured memory storage for medical records
- Temporal context preservation
- Multi-provider information aggregation

### Enterprise Knowledge Management

Organizations leverage Mem0 to build knowledge bases that automatically capture, organize, and retrieve institutional knowledge. Unlike static knowledge bases, Mem0-powered systems continuously learn from interactions and user feedback.

| Feature | Description | Benefit |
|---------|-------------|---------|
| Semantic Search | Natural language queries across memories | Fast information retrieval |
| Hybrid Search | BM25 + vector embeddings | Comprehensive results |
| Entity Extraction | Automatic categorization | Organized knowledge |
| Temporal Weighting | Recent information prioritized | Relevant responses |

Source: [README.md:35-45]()
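
The temporal-weighting row can be illustrated with a simple half-life decay on the relevance score. This is one possible scheme, not mem0's documented formula; `recency_weighted` is hypothetical:

```python
def recency_weighted(similarity, age_days, half_life_days=30.0):
    """Decay a relevance score so more recent memories rank higher.

    After each half-life, a memory's effective score halves.
    """
    decay = 0.5 ** (age_days / half_life_days)
    return similarity * decay
```

Under this scheme, a highly similar but stale memory can rank below a moderately similar recent one, which is the behavior the table describes.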

## Application Architecture Patterns

### Multi-Agent Systems

Mem0 supports complex multi-agent architectures where different agents share contextual information through a unified memory layer.

```mermaid
graph LR
    A[Agent A] -->|Read/Write| M[Mem0 Memory]
    B[Agent B] -->|Read/Write| M
    C[Agent C] -->|Read/Write| M
    M --> D[Shared Context]
    D --> E[Coordinated Actions]
```

**Multi-Agent Memory Configuration:**
```python
from mem0 import Memory

memory = Memory()
# Leaving agent_id unset keeps these memories shared across agents
memory.add(
    "Planner agent completed the research step",
    user_id="shared_session_123",
    run_id="workflow_456"
)
```

### Retrieval-Augmented Generation (RAG)

Mem0 integrates with RAG pipelines to enhance LLM responses with retrieved memories. The platform supports configurable chunk sizes, embedding models, and hybrid search strategies. Source: [evaluation/README.md:1-50]()

| RAG Configuration | Parameter | Default / Options |
|-------------------|-----------|-------------------|
| Chunk Size | `chunk_size` | 1000 |
| Embedding Model | `embedding_model` | text-embedding-3-small |
| Search Technique | `technique_type` | mem0, rag, langmem |
| Top-K Results | `top_k` | Configurable |
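
The `chunk_size` parameter can be pictured with a basic character-level chunker. A sketch only; `chunk_text` is hypothetical, and the overlap parameter is an assumption the baseline may not use:

```python
def chunk_text(text, chunk_size=1000, overlap=100):
    """Split text into fixed-size character chunks, with overlap so
    facts near a boundary appear in two adjacent chunks."""
    if chunk_size <= overlap:
        raise ValueError("chunk_size must exceed overlap")
    chunks = []
    start = 0
    while start < len(text):
        chunks.append(text[start:start + chunk_size])
        start += chunk_size - overlap
    return chunks
```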

### Multi-Modal Applications

Modern AI applications process multiple input types including text, images, and audio. Mem0 stores and retrieves context from multi-modal conversations, enabling coherent responses across different content types. Source: [examples/multimodal-demo/src/components/messages.tsx:1-60]()

## Deployment Scenarios

### Self-Hosted Server

Organizations requiring full control over their data can deploy Mem0 as a self-hosted solution. The self-hosted server includes a dashboard for management, API key generation, and configuration options. Source: [README.md:60-80]()

```bash
# Quick start with bootstrap
cd server && make bootstrap

# Manual Docker deployment
cd server && docker compose up -d
```

**Self-Hosted Features:**
- Admin account creation via setup wizard
- API key management through dashboard
- Configuration for LLM and embedding providers
- Request logging and analytics
- Webhook support for event notifications

### Cloud Platform

The Mem0 cloud platform provides a managed solution with additional features including project-based isolation, SSO/SAML authentication, and enterprise support. Source: [README.md:50-60]()

### Embedded SDK Integration

For applications requiring client-side or edge deployment, Mem0 provides lightweight SDKs:

| Platform | Installation | Use Case |
|----------|--------------|----------|
| Python | `pip install mem0ai` | Backend services, data processing |
| JavaScript/TypeScript | `npm install mem0ai` | Web applications, Node.js services |
| CLI | `npm install -g @mem0/cli` | Local development, debugging |

Source: [README.md:25-40]()

## CLI Applications

The Mem0 CLI enables developers to manage memories directly from the terminal, useful for development, debugging, and automation tasks. Source: [cli/node/README.md:1-80]()

```bash
# Initialize CLI configuration
mem0 init

# Add memories
mem0 add "User prefers dark mode" --user-id alice

# Search memories
mem0 search "What does Alice prefer?" --user-id alice

# Manage entities
mem0 entity list users
mem0 entity delete --user-id alice --force
```

**CLI Commands Reference:**

| Command | Description | Key Flags |
|---------|-------------|-----------|
| `mem0 add` | Add a memory | `--user-id`, `--agent-id`, `--metadata` |
| `mem0 search` | Search memories | `--user-id`, `--output` |
| `mem0 list` | List all memories | `--user-id`, `--limit` |
| `mem0 delete` | Delete memories | `--user-id`, `--force` |
| `mem0 import` | Bulk import | JSON file support |
| `mem0 config` | Manage settings | `show`, `get`, `set` |
| `mem0 status` | Check connection | Project verification |
| `mem0 event` | Monitor async events | `list`, `status` |

## Evaluation and Benchmarking

Mem0 includes comprehensive evaluation tools for comparing different memory techniques and configurations. The evaluation framework supports multiple approaches including Mem0, Mem0+, RAG, and LangMem. Source: [evaluation/README.md:50-100]()

```bash
# Run Mem0 experiments
make run-mem0-add
make run-mem0-search

# Run Mem0+ with graph-based search
make run-mem0-plus-add
make run-mem0-plus-search

# Run RAG experiments
make run-rag
make run-full-context

# Run custom experiments
python run_experiments.py --technique_type mem0 --method add
```

**Experiment Parameters:**

| Parameter | Description | Valid Values |
|-----------|-------------|---------------|
| `--technique_type` | Memory technique | mem0, rag, langmem |
| `--method` | Operation type | add, search |
| `--chunk_size` | Processing chunk size | Integer |
| `--top_k` | Results to retrieve | Integer |

## Industry-Specific Applications

### OpenClaw Platform Integration

OpenClaw demonstrates how Mem0 integrates with specialized AI platforms for specific domains. The platform supports both hosted API mode and self-hosted open-source mode with configurable memory behaviors. 资料来源：[openclaw/README.md:1-50]()

**Platform Mode Configuration:**
| Key | Type | Description |
|-----|------|-------------|
| `apiKey` | string | Mem0 API key (required) |
| `customInstructions` | string | Extraction rules |
| `customCategories` | object | Category definitions |

**Open-Source Mode Defaults:**
| Component | Default Value |
|-----------|---------------|
| Embeddings | text-embedding-3-small |
| Vector Store | Local SQLite |
| LLM | gpt-5-mini |
| Database Path | ~/.mem0/vector_store.db |

### Support Inbox Automation

Automated support systems use Mem0 to track issue resolution history, maintain customer context across channels, and enable intelligent routing based on historical patterns.
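
The routing idea can be sketched without any SDK calls: given search results in the shape shown elsewhere on this page (a `metadata` dict with a `category` key), a small pure function picks the destination queue. The name `route_ticket` and the queue labels are illustrative, not part of Mem0.

```python
def route_ticket(memories: list[dict], default_queue: str = "general") -> str:
    """Pick a support queue from a customer's memory search results.

    Assumes each result carries a `metadata` dict with an optional
    `category` key, as in the metadata examples on this page.
    """
    for m in memories:
        category = (m.get("metadata") or {}).get("category")
        if category == "billing":
            return "billing-team"
        if category == "technical":
            return "engineering-team"
    return default_queue

# Example: results shaped like a hypothetical memory search response
results = [{"memory": "Subscription expired", "metadata": {"category": "billing"}}]
print(route_ticket(results))  # billing-team
```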

### Email Automation

Email-based workflows leverage Mem0's ability to maintain conversation context across email threads, automatically categorizing and prioritizing messages based on user history and past interactions.

## Best Practices

### Memory Structuring

Organize memories with appropriate metadata for optimal retrieval:

```python
memory.add(
    messages="Customer's subscription expired",
    user_id="customer_123",
    metadata={
        "category": "billing",
        "priority": "medium",
        "timestamp": "2025-01-15"
    }
)
```

### Privacy Considerations

- Implement data retention policies
- Use encryption for sensitive information
- Leverage user consent mechanisms
- Enable data export and deletion capabilities
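
As a sketch of a retention policy, the following helper selects memories that have aged out of a retention window, using the `id` and ISO-8601 `created_at` fields from the memory entity structure; the selected IDs would then be handed to a delete operation. The function name and window are illustrative.

```python
from datetime import datetime, timedelta, timezone

def expired_memory_ids(memories: list[dict], max_age_days: int) -> list[str]:
    """Return IDs of memories older than the retention window."""
    cutoff = datetime.now(timezone.utc) - timedelta(days=max_age_days)
    expired = []
    for m in memories:
        # Normalize a trailing "Z" so fromisoformat() accepts the timestamp
        created = datetime.fromisoformat(m["created_at"].replace("Z", "+00:00"))
        if created < cutoff:
            expired.append(m["id"])
    return expired
```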

### Performance Optimization

- Configure appropriate embedding models for your use case
- Use hybrid search combining semantic and keyword matching
- Implement caching for frequently accessed memories
- Monitor request latency through the dashboard
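
A minimal caching sketch for the third point: wrap any search callable in a small TTL cache so hot queries skip the backend. `CachedSearch` is a generic pattern, not part of the Mem0 SDK.

```python
import time

class CachedSearch:
    """Wrap a search callable with a TTL cache keyed on (query, user_id)."""

    def __init__(self, search_fn, ttl_seconds: float = 60.0):
        self.search_fn = search_fn
        self.ttl = ttl_seconds
        self._cache = {}  # (query, user_id) -> (expires_at, results)

    def search(self, query: str, user_id: str):
        key = (query, user_id)
        hit = self._cache.get(key)
        if hit and hit[0] > time.monotonic():
            return hit[1]  # cache hit, backend skipped
        results = self.search_fn(query, user_id)
        self._cache[key] = (time.monotonic() + self.ttl, results)
        return results
```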

## Additional Resources

- [Quick Start Guide](https://docs.mem0.ai/)
- [API Reference](https://docs.mem0.ai/api-reference/)
- [Self-Hosted Documentation](https://docs.mem0.ai/open-source/overview)
- [CLI Reference](https://docs.mem0.ai/platform/cli)
- [Platform Documentation](https://app.mem0.ai)

---

<a id='page-architecture'></a>

## System Architecture

### 相关页面

相关主题：[Introduction to Mem0](#page-introduction), [Memory Operations](#page-memory-operations), [Python SDK](#page-python-sdk), [Vector Stores and Storage](#page-vector-stores)

<details>
<summary>相关源码文件</summary>

以下源码文件用于生成本页说明：

- [mem0/memory/main.py](https://github.com/mem0ai/mem0/blob/main/mem0/memory/main.py)
- [mem0/memory/base.py](https://github.com/mem0ai/mem0/blob/main/mem0/memory/base.py)
- [mem0/memory/storage.py](https://github.com/mem0ai/mem0/blob/main/mem0/memory/storage.py)
- [server/main.py](https://github.com/mem0ai/mem0/blob/main/server/main.py)
- [server/routers/__init__.py](https://github.com/mem0ai/mem0/blob/main/server/routers/__init__.py)
- [mem0-ts/src/client/mem0.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/client/mem0.ts)
- [server/README.md](https://github.com/mem0ai/mem0/blob/main/server/README.md)
</details>

# System Architecture

## Overview

Mem0 is an intelligent memory layer designed for AI agents and applications. It provides persistent, scalable long-term memory capabilities that enable AI systems to retain, retrieve, and manage information across conversations and sessions.

资料来源：[server/README.md](https://github.com/mem0ai/mem0/blob/main/server/README.md)

The architecture follows a modular design pattern with distinct layers for memory management, storage, API serving, and client implementations. This separation enables flexibility in deployment options and supports multiple use cases from embedded applications to cloud-based services.

## High-Level Architecture

```mermaid
graph TD
    subgraph Client_Layer["Client Layer"]
        CLI["CLI Application<br/>mem0"]
        TS_Client["TypeScript Client<br/>mem0-ts"]
        Python_SDK["Python SDK<br/>mem0ai/mem0"]
    end
    
    subgraph API_Layer["API Layer"]
        Server["FastAPI Server<br/>server/main.py"]
        Routers["API Routers<br/>server/routers/"]
    end
    
    subgraph Memory_Core["Memory Core"]
        Main["Memory Manager<br/>mem0/memory/main.py"]
        Base["Base Memory<br/>mem0/memory/base.py"]
        Storage["Storage Engine<br/>mem0/memory/storage.py"]
    end
    
    subgraph Storage_Backend["Storage Backend"]
        VectorStore["Vector Store"]
        DB["Database"]
    end
    
    CLI --> Server
    TS_Client --> Server
    Python_SDK --> Main
    Main --> Base
    Main --> Storage
    Storage --> VectorStore
    Storage --> DB
    Server --> Main
```

## Core Components

### Memory Module Architecture

The memory module is the heart of the Mem0 system, implementing the core memory operations.

资料来源：[mem0/memory/main.py](https://github.com/mem0ai/mem0/blob/main/mem0/memory/main.py)

| Component | File | Purpose |
|-----------|------|---------|
| MemoryManager | `mem0/memory/main.py` | Orchestrates memory operations |
| BaseMemory | `mem0/memory/base.py` | Abstract base class defining the memory interface |
| Storage | `mem0/memory/storage.py` | Handles persistence and retrieval of memory data |

### Base Memory Class

The base class defines the contract that all memory implementations must follow.

资料来源：[mem0/memory/base.py](https://github.com/mem0ai/mem0/blob/main/mem0/memory/base.py)

```mermaid
classDiagram
    class BaseMemory {
        <<abstract>>
        +add() AddMemory
        +search() SearchMemory
        +get() GetMemory
        +update() UpdateMemory
        +delete() DeleteMemory
        +list() ListMemories
    }
    
    class MemoryManager {
        +add()
        +search()
        +get()
        +update()
        +delete()
        +list()
        -storage: Storage
    }
    
    BaseMemory <|-- MemoryManager
```
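
The contract in the diagram can be expressed as a Python abstract base class. This is an illustrative rendering; the authoritative interface lives in `mem0/memory/base.py` and its method signatures may differ.

```python
from abc import ABC, abstractmethod

class BaseMemory(ABC):
    """Illustrative contract mirroring the class diagram above."""

    @abstractmethod
    def add(self, messages, **scope): ...

    @abstractmethod
    def search(self, query, **scope): ...

    @abstractmethod
    def get(self, memory_id): ...

    @abstractmethod
    def update(self, memory_id, data): ...

    @abstractmethod
    def delete(self, memory_id): ...

    @abstractmethod
    def list(self, **scope): ...
```

Any concrete manager (like `MemoryManager` in the diagram) must implement all six methods before it can be instantiated.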

### Storage Engine

The storage layer handles the persistence of memory data using vector embeddings and traditional database storage.

资料来源：[mem0/memory/storage.py](https://github.com/mem0ai/mem0/blob/main/mem0/memory/storage.py)

#### Supported Storage Backends

| Storage Type | Description |
|--------------|-------------|
| Vector Store | Embedding-based similarity search |
| SQL Database | Structured data storage for metadata |
| Memory | In-memory storage for testing |
| Graph | Graph-based relationships (Mem0+) |

## API Layer

### Server Architecture

The server layer is built on FastAPI, providing RESTful endpoints for memory operations.

资料来源：[server/main.py](https://github.com/mem0ai/mem0/blob/main/server/main.py)

```mermaid
graph LR
    subgraph Endpoints["API Endpoints"]
        A["Add Memory"]
        S["Search Memory"]
        G["Get Memory"]
        U["Update Memory"]
        D["Delete Memory"]
        L["List Memories"]
    end
    
    subgraph Router["Router Module"]
        R["server/routers/__init__.py"]
    end
    
    A --> R
    S --> R
    G --> R
    U --> R
    D --> R
    L --> R
    R --> MemoryCore["Memory Core"]
```

### API Configuration

The system supports various configuration options for deployment flexibility.

资料来源：[server/README.md](https://github.com/mem0ai/mem0/blob/main/server/README.md)

| Parameter | Description | Default |
|-----------|-------------|---------|
| `OPENAI_API_KEY` | API key for GPT models and embeddings | Required |
| `MEM0_API_KEY` | Mem0 API key for cloud features | Optional |
| `MEM0_PROJECT_ID` | Project identifier | Optional |
| `MEM0_ORGANIZATION_ID` | Organization identifier | Optional |
| `MODEL` | LLM model for completions | `gpt-4o-mini` |
| `EMBEDDING_MODEL` | Embedding model | `text-embedding-3-small` |
| `ZEP_API_KEY` | Zep service API key | Optional |
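
A sketch of the corresponding environment setup with placeholder values; per the table above, only `OPENAI_API_KEY` is required:

```shell
# Environment for a self-hosted server deployment (placeholder values)
export OPENAI_API_KEY="sk-..."                      # required: completions + embeddings
export MODEL="gpt-4o-mini"                          # default shown above
export EMBEDDING_MODEL="text-embedding-3-small"     # default shown above

# Optional cloud features:
# export MEM0_API_KEY="..."
# export MEM0_PROJECT_ID="..."
# export MEM0_ORGANIZATION_ID="..."
```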

## Client Implementations

### Python SDK

The Python SDK provides the primary interface for integrating Mem0 into applications.

资料来源：[mem0ai/mem0](https://github.com/mem0ai/mem0)

```python
from mem0 import Memory

memory = Memory()
memory.add("User prefers dark mode", user_id="alice")
results = memory.search("What are user preferences?", user_id="alice")
```

### TypeScript Client

The TypeScript implementation provides memory capabilities for JavaScript/TypeScript environments.

资料来源：[mem0-ts/src/client/mem0.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/client/mem0.ts)

```typescript
import { Memory } from "mem0-ts";

const memory = new Memory({
  embedder: {
    provider: "openai",
    config: { apiKey: process.env.OPENAI_API_KEY }
  }
});
```

### CLI Application

The command-line interface provides direct access to memory operations.

资料来源：[cli/README.md](https://github.com/mem0ai/mem0/blob/main/cli/README.md)

| Command | Description |
|---------|-------------|
| `mem0 init` | Setup wizard for authentication |
| `mem0 add` | Add memory from text, JSON, or file |
| `mem0 search` | Search memories using natural language |
| `mem0 list` | List memories with filters |
| `mem0 get` | Retrieve specific memory by ID |
| `mem0 update` | Update memory text or metadata |
| `mem0 delete` | Delete memory or entity |
| `mem0 import` | Bulk import from JSON file |

#### CLI Agent Mode

The CLI supports agent mode for AI agent tool loops:

```bash
mem0 --agent search "user preferences" --user-id alice
mem0 --agent add "User prefers dark mode" --user-id alice
```

资料来源：[cli/node/README.md](https://github.com/mem0ai/mem0/blob/main/cli/node/README.md)

## Memory Techniques

Mem0 supports multiple memory retrieval techniques for different use cases.

资料来源：[evaluation/README.md](https://github.com/mem0ai/mem0/blob/main/evaluation/README.md)

```mermaid
graph TD
    subgraph Techniques["Memory Techniques"]
        M0["Mem0<br/>Vector-based retrieval"]
        M0P["Mem0+<br/>Graph-based search"]
        RAG["RAG<br/>Chunk-based retrieval"]
        LM["LangMem<br/>Language model memory"]
    end
    
    subgraph Use_Cases["Use Cases"]
        UC1["Personal assistants"]
        UC2["Customer support"]
        UC3["Research tools"]
        UC4["Enterprise applications"]
    end
    
    M0 --> UC1
    M0P --> UC2
    RAG --> UC3
    LM --> UC4
```

### Technique Comparison

| Technique | Description | Best For |
|-----------|-------------|----------|
| Mem0 | Vector-based semantic search | General purpose memory |
| Mem0+ | Graph-enhanced retrieval | Complex relationship queries |
| RAG | Chunk-based retrieval | Document-heavy applications |
| LangMem | LLM-native memory | Language model integration |

## Data Models

### Memory Entity Structure

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique memory identifier |
| `memory` | string | Memory content text |
| `user_id` | string | Associated user identifier |
| `agent_id` | string | Associated agent identifier |
| `app_id` | string | Associated application identifier |
| `run_id` | string | Associated run identifier |
| `metadata` | object | Custom metadata key-value pairs |
| `created_at` | datetime | Creation timestamp |
| `updated_at` | datetime | Last update timestamp |

### Evaluation Metrics

The system tracks multiple metrics for performance evaluation.

资料来源：[evaluation/README.md](https://github.com/mem0ai/mem0/blob/main/evaluation/README.md)

| Metric | Description |
|--------|-------------|
| BLEU Score | Text similarity measure |
| F1 Score | Precision/recall balance |
| LLM Score | Judge-based evaluation |

## Deployment Options

### Local/Embedded Mode

For applications requiring local-only memory:

- SQLite-based vector store: `~/.mem0/vector_store.db`
- History database: `~/.mem0/history.db`
- Memory consolidation state: `<pluginStateDir>/dream-state.json`

资料来源：[openclaw/README.md](https://github.com/mem0ai/mem0/blob/main/openclaw/README.md)

### Cloud Mode

For managed Mem0 cloud services:

- Requires `MEM0_API_KEY`
- Project and organization configuration
- Scalable vector storage

### Server Deployment

The FastAPI server can be deployed independently:

```bash
# Start server
python server/main.py

# Configure via environment variables
# - Set API keys
# - Configure storage backends
# - Set model preferences
```

资料来源：[server/main.py](https://github.com/mem0ai/mem0/blob/main/server/main.py)

## Vercel AI SDK Integration

Mem0 provides seamless integration with the Vercel AI SDK for streaming responses with memory.

资料来源：[vercel-ai-sdk/README.md](https://github.com/mem0ai/mem0/blob/main/vercel-ai-sdk/README.md)

```typescript
const mem0 = createMem0({
  config: {
    // Model configuration options
  }
});
```

### Best Practices for Vercel Integration

1. **User Identification**: Always provide a unique `user_id` for consistent memory retrieval
2. **Context Management**: Balance context window sizes with memory requirements
3. **Error Handling**: Implement proper error handling for memory operations
4. **Memory Cleanup**: Regularly clean up unused memory contexts

## Evaluation Framework

The evaluation module provides comprehensive testing capabilities.

资料来源：[evaluation/README.md](https://github.com/mem0ai/mem0/blob/main/evaluation/README.md)

### Running Experiments

```bash
# Run Mem0 experiments
make run-mem0-add
make run-mem0-search

# Run Mem0+ experiments
make run-mem0-plus-add
make run-mem0-plus-search

# Run RAG experiments
make run-rag
```

### Evaluation Command-Line Parameters

| Parameter | Description | Default |
|-----------|-------------|---------|
| `--technique_type` | Memory technique | `mem0` |
| `--method` | Method to use | `add` |
| `--chunk_size` | Processing chunk size | `1000` |
| `--top_k` | Top memories to retrieve | `30` |
| `--is_graph` | Use graph-based search | `False` |

## System Flow Diagrams

### Memory Addition Flow

```mermaid
sequenceDiagram
    participant Client
    participant API
    participant MemoryManager
    participant Storage
    participant VectorStore
    
    Client->>API: Add memory request
    API->>MemoryManager: Process memory
    MemoryManager->>MemoryManager: Extract facts
    MemoryManager->>Storage: Store memory
    Storage->>VectorStore: Generate embeddings
    VectorStore->>Storage: Store vectors
    Storage->>MemoryManager: Confirm storage
    MemoryManager->>API: Return memory ID
    API->>Client: Success response
```

### Memory Search Flow

```mermaid
sequenceDiagram
    participant Client
    participant API
    participant MemoryManager
    participant Storage
    participant VectorStore
    
    Client->>API: Search request
    API->>MemoryManager: Process query
    MemoryManager->>VectorStore: Generate query embedding
    VectorStore->>MemoryManager: Return similar memories
    MemoryManager->>API: Format results
    API->>Client: Return search results
```

## Security Considerations

### API Key Management

- Use environment variables for sensitive credentials
- Rotate API keys periodically
- Implement proper access controls for production deployments
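
A small fail-fast helper illustrates the first point: read credentials from the environment at startup and raise a clear error rather than a deep-stack authentication failure later. `require_env` is an illustrative name, not a Mem0 API.

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value or fail with a clear message."""
    value = os.environ.get(name)
    if not value:
        raise RuntimeError(f"{name} is not set; export it before starting the server")
    return value

# Example (at process startup):
# api_key = require_env("OPENAI_API_KEY")
```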

### Data Privacy

- User data isolation via `user_id` scoping
- Support for entity-level deletion
- Optional metadata encryption for sensitive information

## Extensibility Points

The architecture supports extension through:

1. **Custom Storage Backends**: Implement the storage interface for new backends
2. **Custom Embedding Providers**: Add support for alternative embedding models
3. **Custom Memory Techniques**: Extend base class for specialized retrieval
4. **Plugin System**: OpenClaw integration for additional capabilities
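
As a sketch of the first extension point, here is a toy in-memory vector backend with cosine-similarity search. The method names (`insert`, `search`, `delete`) are illustrative; a real backend would implement the interface defined in the mem0 codebase.

```python
class InMemoryVectorStore:
    """Toy vector backend: dict storage plus cosine-similarity search."""

    def __init__(self):
        self._rows = {}  # memory_id -> (vector, payload)

    def insert(self, memory_id: str, vector: list[float], payload: dict) -> None:
        self._rows[memory_id] = (vector, payload)

    def search(self, query_vector: list[float], limit: int = 10) -> list[dict]:
        def cos(a, b):
            dot = sum(x * y for x, y in zip(a, b))
            na = sum(x * x for x in a) ** 0.5
            nb = sum(x * x for x in b) ** 0.5
            return dot / (na * nb) if na and nb else 0.0

        scored = [
            {"id": mid, "score": cos(query_vector, vec), **payload}
            for mid, (vec, payload) in self._rows.items()
        ]
        return sorted(scored, key=lambda r: r["score"], reverse=True)[:limit]

    def delete(self, memory_id: str) -> None:
        self._rows.pop(memory_id, None)
```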

## References

- Main Repository: [mem0ai/mem0](https://github.com/mem0ai/mem0)
- Documentation: [docs.mem0.ai](https://docs.mem0.ai)
- Paper Citation: [arXiv:2504.19413](https://arxiv.org/abs/2504.19413)

---

<a id='page-memory-operations'></a>

## Memory Operations

### 相关页面

相关主题：[System Architecture](#page-architecture), [AI Model Integration](#page-ai-integration), [Vector Stores and Storage](#page-vector-stores), [Python SDK](#page-python-sdk)

<details>
<summary>相关源码文件</summary>

以下源码文件用于生成本页说明：

- [mem0/memory/main.py](https://github.com/mem0ai/mem0/blob/main/mem0/memory/main.py)
- [docs/core-concepts/memory-operations/add.mdx](https://github.com/mem0ai/mem0/blob/main/docs/core-concepts/memory-operations/add.mdx)
- [docs/core-concepts/memory-operations/search.mdx](https://github.com/mem0ai/mem0/blob/main/docs/core-concepts/memory-operations/search.mdx)
- [docs/core-concepts/memory-operations/update.mdx](https://github.com/mem0ai/mem0/blob/main/docs/core-concepts/memory-operations/update.mdx)
- [docs/core-concepts/memory-operations/delete.mdx](https://github.com/mem0ai/mem0/blob/main/docs/core-concepts/memory-operations/delete.mdx)
- [docs/core-concepts/memory-types.mdx](https://github.com/mem0ai/mem0/blob/main/docs/core-concepts/memory-types.mdx)
- [docs/open-source/features/async-memory.mdx](https://github.com/mem0ai/mem0/blob/main/docs/open-source/features/async-memory.mdx)
- [docs/open-source/features/metadata-filtering.mdx](https://github.com/mem0ai/mem0/blob/main/docs/open-source/features/metadata-filtering.mdx)
- [docs/open-source/features/custom-instructions.mdx](https://github.com/mem0ai/mem0/blob/main/docs/open-source/features/custom-instructions.mdx)
</details>

# Memory Operations

Memory operations are the core CRUD (Create, Read, Update, Delete) interactions that power the Mem0 memory system. These operations enable AI agents to store, retrieve, modify, and delete persistent memory across user sessions, agent executions, and application contexts. The memory operations layer abstracts the complexity of vector storage, semantic indexing, and multi-entity management into a unified API that supports both synchronous and asynchronous execution patterns.

## Overview

The Mem0 memory system provides five fundamental operations that form the backbone of persistent memory management. Each operation is designed to work with multiple entity scopes, including user-level, agent-level, application-level, and run-level contexts. The operations support rich metadata filtering, custom instructions for memory processing, and both blocking and non-blocking execution modes for handling large-scale memory operations.

Memory operations in Mem0 are built on a layered architecture where the core memory module (`mem0/memory/main.py`) handles the business logic, while underlying vector stores and databases manage persistence. This separation allows Mem0 to support different deployment scenarios from local SQLite-based storage to cloud-hosted vector databases.

## Core Memory Operations

### Add Memory

The **Add** operation is the primary mechanism for storing new information in the memory system. When a memory is added, Mem0 performs several processing steps including embedding generation, fact extraction, and semantic categorization before storing the data in the appropriate vector store.

**Function signature and parameters:**

```python
def add(
    messages: str | list[dict],
    user_id: str | None = None,
    agent_id: str | None = None,
    app_id: str | None = None,
    run_id: str | None = None,
    metadata: dict | None = None,
    filter_version: str | None = "v1.0",
    prompt: str | None = None,
    max_items: int | None = None
) -> dict
```

资料来源：[mem0/memory/main.py](https://github.com/mem0ai/mem0/blob/main/mem0/memory/main.py)
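
A minimal Python sketch of the call, using the `messages` shape from the signature above. The live call is gated behind a flag because it requires the `mem0` package plus a configured LLM/embedding backend (e.g. an OpenAI API key):

```python
# Conversation in the role/content shape accepted by add()
messages = [
    {"role": "user", "content": "I'm vegetarian and allergic to nuts."},
    {"role": "assistant", "content": "Got it -- I'll avoid meat and nuts."},
]

RUN_LIVE = False  # flip on with the mem0 package installed and OPENAI_API_KEY set
if RUN_LIVE:
    from mem0 import Memory
    memory = Memory()  # default local configuration
    result = memory.add(messages, user_id="alice", metadata={"category": "diet"})
```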

**Operation workflow:**

```mermaid
graph TD
    A[Input: messages + entity identifiers] --> B[Validate input and entity scope]
    B --> C[Generate vector embeddings]
    C --> D[Extract facts using LLM]
    D --> E[Apply custom instructions if configured]
    E --> F[Store in vector store with metadata]
    F --> G[Return memory IDs and stored content]
```

**Adding memories via CLI:**

```bash
# Add a simple text memory
mem0 add "I prefer dark mode" --user-id alice

# Add from a JSON messages array
mem0 add --file conversation.json --user-id alice

# Add from stdin
echo "Loves hiking on weekends" | mem0 add --user-id alice

# Add with metadata
mem0 add "User prefers TypeScript over JavaScript" --metadata '{"category": "preference", "priority": "high"}'
```

资料来源：[cli/python/README.md](https://github.com/mem0ai/mem0/blob/main/cli/python/README.md)

### Search Memory

The **Search** operation retrieves relevant memories based on natural language queries. Mem0 converts the query into a vector embedding and performs similarity search against stored memories, returning results ranked by relevance. The search operation supports filtering by entity scope, metadata attributes, and memory types.

**Function signature and parameters:**

```python
def search(
    query: str,
    user_id: str | None = None,
    agent_id: str | None = None,
    app_id: str | None = None,
    run_id: str | None = None,
    version: str | None = "v1.1",
    limit: int = 10,
    category: str | None = None,
    filter: dict | None = None,
    rerank: bool = False
) -> list[dict]
```

资料来源：[docs/core-concepts/memory-operations/search.mdx](https://github.com/mem0ai/mem0/blob/main/docs/core-concepts/memory-operations/search.mdx)

**Search with metadata filtering:**

Metadata filtering allows precise memory retrieval based on specific attributes stored with each memory. This is particularly useful for retrieving memories that match certain criteria without relying solely on semantic similarity.

```python
result = memory.search(
    query="user preferences",
    user_id="alice",
    filter={
        "category": "preference",
        "priority": {"$eq": "high"}
    }
)
```

资料来源：[docs/open-source/features/metadata-filtering.mdx](https://github.com/mem0ai/mem0/blob/main/docs/open-source/features/metadata-filtering.mdx)

**CLI search examples:**

```bash
# Basic semantic search
mem0 search "What are Alice's preferences?" --user-id alice

# Search with output formatting
mem0 search "preferences" --output json --top-k 20

# Search within specific scope
mem0 search "agent behavior" --agent-id agent-123
```

### Get Memory

The **Get** operation retrieves a specific memory by its unique identifier. Unlike search, which performs semantic similarity matching, get provides direct access to a known memory record for viewing, editing, or deletion.

**CLI usage:**

```bash
# Retrieve a specific memory by ID
mem0 get 7b3c1a2e-4d5f-6789-abcd-ef0123456789

# Get memory with JSON output for AI agent processing
mem0 get 7b3c1a2e-4d5f-6789-abcd-ef0123456789 --output json
```

资料来源：[cli/node/README.md](https://github.com/mem0ai/mem0/blob/main/cli/node/README.md)

### Update Memory

The **Update** operation modifies the content or metadata of an existing memory while preserving its history and relationships. It keeps the original memory ID and maintains an audit trail of modifications.

**Function signature and parameters:**

```python
def update(
    memory_id: str,
    data: str | None = None,
    metadata: dict | None = None,
    user_id: str | None = None
) -> dict
```

资料来源：[docs/core-concepts/memory-operations/update.mdx](https://github.com/mem0ai/mem0/blob/main/docs/core-concepts/memory-operations/update.mdx)

**Update operation workflow:**

```mermaid
graph TD
    A[Update request with memory_id] --> B[Locate existing memory record]
    B --> C[Apply content or metadata changes]
    C --> D[Update vector embeddings if content changed]
    D --> E[Preserve modification history]
    E --> F[Return updated memory object]
```

**CLI update examples:**

```bash
# Update memory text
mem0 update <memory-id> "Updated preference text"

# Update metadata only
mem0 update <memory-id> --metadata '{"priority": "high"}'

# Update via stdin
echo "new text" | mem0 update <memory-id>
```

### Delete Memory

The **Delete** operation removes memories from the storage system. Mem0 supports multiple deletion strategies including single memory deletion, bulk deletion by scope, and entity-level deletion that removes all associated memories.

**Function signature and parameters:**

```python
def delete(
    memory_id: str | None = None,
    user_id: str | None = None,
    agent_id: str | None = None,
    app_id: str | None = None,
    run_id: str | None = None,
    delete_all: bool = False,
    confirm: bool = False
) -> dict
```

资料来源：[docs/core-concepts/memory-operations/delete.mdx](https://github.com/mem0ai/mem0/blob/main/docs/core-concepts/memory-operations/delete.mdx)

**CLI delete examples:**

```bash
# Delete a single memory
mem0 delete <memory-id>

# Delete all memories for a user (with confirmation)
mem0 delete --all --user-id alice

# Delete all memories project-wide
mem0 delete --all --project --force

# Preview what would be deleted
mem0 delete --all --user-id alice --dry-run
```

**Delete flags reference:**

| Flag | Description |
|------|-------------|
| `--all` | Delete all memories matching scope filters |
| `--entity` | Delete the entity and all its memories |
| `--project` | With `--all`: delete all memories project-wide |
| `--dry-run` | Preview without deleting |
| `--force` | Skip confirmation prompt |

资料来源：[cli/node/README.md](https://github.com/mem0ai/mem0/blob/main/cli/node/README.md)

### List Memories

The **List** operation retrieves memories with optional filters, pagination, and sorting. Unlike search, which returns semantically relevant results, list enumerates all stored memories within the specified scopes.

**CLI usage:**

```bash
# List all memories for a user
mem0 list --user-id alice

# List with pagination
mem0 list --user-id alice --page 1 --page-size 50

# List in JSON format for agent consumption
mem0 list --user-id alice --output json
```
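
The `--page`/`--page-size` semantics amount to a 1-indexed slice, sketched here as a standalone helper (illustrative, not the CLI's code):

```python
def paginate(memories: list[dict], page: int = 1, page_size: int = 50) -> list[dict]:
    """Return one 1-indexed page of memories, mirroring --page/--page-size."""
    if page < 1 or page_size < 1:
        raise ValueError("page and page_size must be >= 1")
    start = (page - 1) * page_size
    return memories[start:start + page_size]
```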

## Entity Scopes

Mem0 organizes memories within hierarchical entity scopes that provide logical separation and access control. Each memory belongs to at least one entity identifier, creating an ownership hierarchy.

```mermaid
graph TB
    A[Memory Record] --> B[user_id]
    A --> C[agent_id]
    A --> D[app_id]
    A --> E[run_id]
    
    B --> F[User Entity]
    C --> G[Agent Entity]
    D --> H[Application Entity]
    E --> I[Run Entity]
    
    F --> J[Project/Organization]
    G --> J
    H --> J
    I --> J
```

**Entity scope parameters:**

| Parameter | Description | Use Case |
|-----------|-------------|----------|
| `user_id` | Identifies the end user | Personal preferences, history |
| `agent_id` | Identifies the AI agent | Agent behavior patterns, policies |
| `app_id` | Identifies the application | App-specific configurations |
| `run_id` | Identifies a session/run | Conversation context within a session |

资料来源：[docs/core-concepts/memory-types.mdx](https://github.com/mem0ai/mem0/blob/main/docs/core-concepts/memory-types.mdx)

## Asynchronous Memory Operations

For large-scale memory operations that may take extended time to complete, Mem0 provides asynchronous execution modes. Async operations return immediately with an event ID that can be used to track progress and retrieve results.

**Async operation support:**

| Operation | Async Support | Return Value |
|-----------|---------------|--------------|
| `add` | Yes (bulk adds) | Event ID |
| `search` | Yes | Event ID |
| `delete` | Yes (bulk deletes) | Event ID |
| `update` | No | Updated memory |
| `get` | No | Memory object |
| `list` | Yes | Event ID |

资料来源：[docs/open-source/features/async-memory.mdx](https://github.com/mem0ai/mem0/blob/main/docs/open-source/features/async-memory.mdx)

**Event monitoring via CLI:**

```bash
# List recent background processing events
mem0 event list

# Check the status of a specific event
mem0 event status <event-id>
```
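
Clients that submit async operations typically poll the event until it settles. A generic polling sketch, where the `get_status` callable and the status strings are assumptions (the CLI equivalent is `mem0 event status <event-id>`):

```python
import time

def wait_for_event(get_status, event_id: str,
                   timeout: float = 30.0, interval: float = 0.5) -> str:
    """Poll an async event until it leaves the pending/processing states."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        status = get_status(event_id)
        if status not in ("pending", "processing"):
            return status  # e.g. "completed" or "failed"
        time.sleep(interval)
    raise TimeoutError(f"event {event_id} still running after {timeout}s")
```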

## Memory Types

Mem0 supports different memory types that serve distinct purposes in AI agent architectures. Each memory type has specific characteristics optimized for different retrieval patterns and use cases.

**Memory type reference:**

| Type | Purpose | Retrieval Pattern | Use Case |
|------|---------|-------------------|----------|
| `semantic` | Store facts and preferences | Semantic similarity search | User preferences, facts |
| `episodic` | Record events and conversations | Time-based, sequential | Conversation history |
| `procedural` | Store agent behaviors/actions | Task-based patterns | Agent workflows |
| `long-term` | Persistent cross-session memory | Multi-dimensional search | User profiles, knowledge |

资料来源：[docs/core-concepts/memory-types.mdx](https://github.com/mem0ai/mem0/blob/main/docs/core-concepts/memory-types.mdx)

## Metadata and Filtering

Mem0 supports rich metadata storage and filtering capabilities that enable precise memory retrieval beyond semantic similarity. Metadata can include arbitrary key-value pairs that are indexed for efficient filtering.

**Metadata structure example:**

```python
memory = {
    "id": "mem_xxxxx",
    "memory": "User prefers dark mode for the interface",
    "metadata": {
        "category": "preference",
        "priority": "high",
        "source": "explicit_feedback",
        "tags": ["ui", "theme", "dark-mode"]
    },
    "created_at": "2025-01-15T10:30:00Z",
    "user_id": "alice"
}
```

**Filter operators supported:**

| Operator | Description | Example |
|----------|-------------|---------|
| `$eq` | Equals | `{"priority": {"$eq": "high"}}` |
| `$ne` | Not equals | `{"status": {"$ne": "archived"}}` |
| `$in` | In array | `{"category": {"$in": ["fact", "preference"]}}` |
| `$nin` | Not in array | `{"source": {"$nin": ["deprecated"]}}` |
| `$gt`, `$gte` | Greater than (or equal) | `{"score": {"$gt": 0.8}}` |
| `$lt`, `$lte` | Less than (or equal) | `{"priority": {"$lte": 5}}` |

资料来源：[docs/open-source/features/metadata-filtering.mdx](https://github.com/mem0ai/mem0/blob/main/docs/open-source/features/metadata-filtering.mdx)

## Custom Instructions

Custom instructions provide a mechanism to customize how Mem0 processes and interprets memories. These instructions guide the LLM in extracting relevant facts, categorizing information, and determining storage behavior.

**Configuration example:**

```python
memory = Memory()

# Set custom instructions for the memory instance
memory.configure(
    custom_instructions="Focus on extracting user preferences about product features. "
                        "Categorize memories by product area. "
                        "Prioritize recent explicit feedback over implicit observations."
)

# Add memory with custom processing
result = memory.add(
    messages="I really love the new dark mode feature in the settings panel",
    user_id="alice"
)
```

资料来源：[docs/open-source/features/custom-instructions.mdx](https://github.com/mem0ai/mem0/blob/main/docs/open-source/features/custom-instructions.mdx)

## Bulk Import

Mem0 supports bulk importing of memories from JSON files, enabling migration from other systems or initial data population.

**Import file format:**

```json
[
  {
    "memory": "User prefers dark mode",
    "user_id": "alice",
    "metadata": {"category": "preference"}
  },
  {
    "text": "Agent uses fallback strategy when API fails",
    "agent_id": "agent-123",
    "metadata": {"behavior": "error-handling"}
  },
  {
    "content": "Application has rate limiting enabled",
    "app_id": "app-production",
    "metadata": {"configuration": true}
  }
]
```

**CLI import command:**

```bash
mem0 import data.json --user-id alice
```

资料来源：[cli/node/README.md](https://github.com/mem0ai/mem0/blob/main/cli/node/README.md)

## Agent Mode

The CLI supports an agent mode that formats output specifically for AI agent tool loops. This mode returns structured JSON that can be easily parsed by AI systems for decision-making.

**Agent mode usage:**

```bash
mem0 --agent search "user preferences" --user-id alice
mem0 --agent add "User prefers dark mode" --user-id alice
mem0 --agent list --user-id alice
```

资料来源：[cli/python/README.md](https://github.com/mem0ai/mem0/blob/main/cli/python/README.md)

## Dashboard Memory Management

The Mem0 dashboard provides a web-based visual interface for viewing, searching, and managing memories.


**Dashboard features:**

- Paginated memory listing with navigation controls
- Memory detail view showing content, ID, timestamps, and metadata
- Inline deletion with confirmation modal
- Search functionality within the memories page

资料来源：[server/dashboard/src/app/(root)/dashboard/memories/page.tsx](https://github.com/mem0ai/mem0/blob/main/server/dashboard/src/app/(root)/dashboard/memories/page.tsx)

## Configuration and Status

The Mem0 CLI provides commands for managing configuration and verifying connectivity.

**Configuration commands:**

```bash
mem0 config show              # Display current config (secrets redacted)
mem0 config get api_key       # Get a specific value
mem0 config set user_id bob   # Set a value

mem0 status                   # Verify API connection and display project
mem0 version                  # Print CLI version
```

## Operation Flow Summary

```mermaid
graph LR
    A[Client Request] --> B{Operation Type}
    
    B -->|add| C[Process & Store]
    B -->|search| D[Embed Query & Search]
    B -->|get| E[Direct Lookup]
    B -->|update| F[Modify & Re-index]
    B -->|delete| G[Remove from Store]
    
    C --> H[(Vector Store)]
    D --> H
    E --> H
    F --> H
    G --> H
    
    C --> I[Event ID]
    D --> J[Results]
    E --> K[Memory Object]
    F --> K
    G --> L[Confirmation]
```

## Error Handling

Memory operations may encounter various error conditions that should be handled appropriately in client applications.

**Common error scenarios:**

| Error | Cause | Resolution |
|-------|-------|------------|
| `EntityNotFoundError` | Referenced user/agent/app doesn't exist | Verify entity IDs before operations |
| `MemoryNotFoundError` | Memory ID doesn't exist | Check memory ID or use search |
| `ValidationError` | Invalid input format | Validate request parameters |
| `RateLimitError` | API rate limit exceeded | Implement exponential backoff |
| `ConnectionError` | Network or API endpoint unavailable | Retry with circuit breaker |
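
For the transient cases (`RateLimitError`, `ConnectionError`), the resolutions in the table can be sketched as a small retry wrapper. The exception class below is a stand-in; the exact classes exported by the SDK may differ:

```python
import time

class RateLimitError(Exception):
    """Stand-in for the SDK's rate-limit exception."""

def with_backoff(op, retries=3, base_delay=0.5):
    """Retry `op` on rate limiting with exponential backoff; re-raise anything else."""
    for attempt in range(retries):
        try:
            return op()
        except RateLimitError:
            if attempt == retries - 1:
                raise  # budget exhausted, surface the error
            time.sleep(base_delay * 2 ** attempt)  # 0.5s, 1.0s, 2.0s, ...

calls = {"n": 0}
def flaky_add():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError("429 Too Many Requests")
    return "ok"

print(with_backoff(flaky_add, base_delay=0.01))  # ok
```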

## See Also

- [Memory Types](https://github.com/mem0ai/mem0/blob/main/docs/core-concepts/memory-types.mdx) - Understanding semantic, episodic, procedural, and long-term memory
- [Async Memory](https://github.com/mem0ai/mem0/blob/main/docs/open-source/features/async-memory.mdx) - Large-scale asynchronous operations
- [Metadata Filtering](https://github.com/mem0ai/mem0/blob/main/docs/open-source/features/metadata-filtering.mdx) - Advanced filtering capabilities
- [Custom Instructions](https://github.com/mem0ai/mem0/blob/main/docs/open-source/features/custom-instructions.mdx) - Customizing memory processing behavior

---

<a id='page-ai-integration'></a>

## AI Model Integration

### 相关页面

相关主题：[Memory Operations](#page-memory-operations), [Embedding Models](#page-embeddings), [System Architecture](#page-architecture)

<details>
<summary>相关源码文件</summary>

以下源码文件用于生成本页说明：

- [mem0/llms/base.py](https://github.com/mem0ai/mem0/blob/main/mem0/llms/base.py)
- [mem0/llms/openai.py](https://github.com/mem0ai/mem0/blob/main/mem0/llms/openai.py)
- [mem0/llms/anthropic.py](https://github.com/mem0ai/mem0/blob/main/mem0/llms/anthropic.py)
- [mem0/llms/azure_openai.py](https://github.com/mem0ai/mem0/blob/main/mem0/llms/azure_openai.py)
- [mem0/llms/gemini.py](https://github.com/mem0ai/mem0/blob/main/mem0/llms/gemini.py)
- [mem0/configs/llms/base.py](https://github.com/mem0ai/mem0/blob/main/mem0/configs/llms/base.py)
- [mem0/configs/llms/__init__.py](https://github.com/mem0ai/mem0/blob/main/mem0/configs/llms/__init__.py)
- [docs/components/llms/overview.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/llms/overview.mdx)
- [docs/components/llms/models/openai.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/llms/models/openai.mdx)
- [docs/components/llms/models/anthropic.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/llms/models/anthropic.mdx)
</details>

# AI Model Integration

## Overview

The AI Model Integration module in mem0 provides a unified abstraction layer for interacting with various large language model (LLM) providers. This architecture enables seamless switching between different AI backends while maintaining a consistent interface for memory operations. 资料来源：[docs/components/llms/overview.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/llms/overview.mdx#L1-L5)

## Architecture

The integration follows a **Provider Pattern** with a base class defining the contract and provider-specific implementations extending it.

```mermaid
graph TD
    A[mem0 Core] --> B[LLM Base Interface]
    B --> C[OpenAI Provider]
    B --> D[Anthropic Provider]
    B --> E[Azure OpenAI Provider]
    B --> F[Gemini Provider]
    
    C --> G[OpenAI API]
    D --> H[Anthropic API]
    E --> I[Azure Cognitive Services]
    F --> J[Google AI API]
```

## Supported Providers

| Provider | Model Class | API Type | Status |
|----------|-------------|----------|--------|
| OpenAI | `OpenAILargeLanguageModel` | REST | Production |
| Anthropic | `AnthropicLargeLanguageModel` | REST | Production |
| Azure OpenAI | `AzureOpenAILargeLanguageModel` | REST | Production |
| Google Gemini | `GeminiLargeLanguageModel` | REST | Production |

资料来源：[mem0/llms/base.py](https://github.com/mem0ai/mem0/blob/main/mem0/llms/base.py#L1-L20)

## Base Interface

All LLM providers inherit from the `LargeLanguageModel` base class, which defines the core contract:

```python
class LargeLanguageModel(ABC):
    @abstractmethod
    def generate_response(self, messages, **kwargs):
        pass
    
    @abstractmethod
    def get_model_name(self):
        pass
```

资料来源：[mem0/llms/base.py](https://github.com/mem0ai/mem0/blob/main/mem0/llms/base.py#L15-L30)

### Core Methods

| Method | Purpose | Parameters |
|--------|---------|------------|
| `generate_response` | Generate text completion | `messages`, `**kwargs` |
| `get_model_name` | Return model identifier | None |

## Provider Implementations

### OpenAI Integration

The OpenAI provider supports GPT-4, GPT-4 Turbo, and GPT-3.5 Turbo models through the OpenAI API.

```python
class OpenAILargeLanguageModel(LargeLanguageModel):
    def __init__(
        self,
        model: str = "gpt-4",
        api_key: str = None,
        temperature: float = 0.7,
        max_tokens: int = 2000,
        **kwargs
    ):
```

资料来源：[mem0/llms/openai.py](https://github.com/mem0ai/mem0/blob/main/mem0/llms/openai.py#L10-L25)

**Configuration Parameters:**

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `model` | `str` | `"gpt-4"` | Model identifier |
| `api_key` | `str` | `None` | OpenAI API key |
| `temperature` | `float` | `0.7` | Response randomness |
| `max_tokens` | `int` | `2000` | Maximum response length |

**Environment Variable:** `OPENAI_API_KEY`

资料来源：[docs/components/llms/models/openai.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/llms/models/openai.mdx#L1-L15)

### Anthropic Integration

The Anthropic provider enables access to Claude models through the Anthropic API.

```python
class AnthropicLargeLanguageModel(LargeLanguageModel):
    def __init__(
        self,
        model: str = "claude-3-5-sonnet-20241022",
        api_key: str = None,
        temperature: float = 0.7,
        max_tokens: int = 2000,
        **kwargs
    ):
```

资料来源：[mem0/llms/anthropic.py](https://github.com/mem0ai/mem0/blob/main/mem0/llms/anthropic.py#L10-L25)

**Supported Models:**

| Model | Context Window | Best For |
|-------|----------------|----------|
| `claude-3-5-sonnet-20241022` | 200K tokens | Balanced performance |
| `claude-3-opus-20240229` | 200K tokens | Complex reasoning |
| `claude-3-haiku-20240307` | 200K tokens | Fast, cost-effective |

**Environment Variable:** `ANTHROPIC_API_KEY`

资料来源：[docs/components/llms/models/anthropic.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/llms/models/anthropic.mdx#L1-L20)

### Azure OpenAI Integration

Azure OpenAI provides enterprise-grade access with compliance features and regional deployment options.

```python
class AzureOpenAILargeLanguageModel(LargeLanguageModel):
    def __init__(
        self,
        model: str = "gpt-4",
        api_key: str = None,
        azure_endpoint: str = None,
        api_version: str = "2024-02-01",
        temperature: float = 0.7,
        max_tokens: int = 2000,
        **kwargs
    ):
```

资料来源：[mem0/llms/azure_openai.py](https://github.com/mem0ai/mem0/blob/main/mem0/llms/azure_openai.py#L10-L30)

**Azure-Specific Parameters:**

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `azure_endpoint` | `str` | Yes | Azure endpoint URL |
| `api_version` | `str` | Yes | API version string |
| `azure_deployment` | `str` | No | Deployment name |

**Environment Variables:**
- `AZURE_OPENAI_API_KEY`
- `AZURE_OPENAI_ENDPOINT`

### Google Gemini Integration

The Gemini provider integrates with Google AI's Gemini models for multimodal capabilities.

```python
class GeminiLargeLanguageModel(LargeLanguageModel):
    def __init__(
        self,
        model: str = "gemini-2.0-flash-exp",
        api_key: str = None,
        temperature: float = 0.7,
        max_tokens: int = 2000,
        **kwargs
    ):
```

资料来源：[mem0/llms/gemini.py](https://github.com/mem0ai/mem0/blob/main/mem0/llms/gemini.py#L10-L25)

**Supported Models:**

| Model | Context Window | Features |
|-------|----------------|----------|
| `gemini-2.0-flash-exp` | 1M tokens | Latest, fastest |
| `gemini-1.5-pro` | 1M tokens | Long context |
| `gemini-1.5-flash` | 1M tokens | Balanced |

**Environment Variable:** `GEMINI_API_KEY`

## Configuration System

### Base Configuration

All LLM configurations inherit from `LLMConfig` using Pydantic for validation:

```python
class LLMConfig(BaseModel):
    provider: str
    model: str
    temperature: float = 0.7
    max_tokens: int = 2000
    extra_params: dict = {}
```

资料来源：[mem0/configs/llms/base.py](https://github.com/mem0ai/mem0/blob/main/mem0/configs/llms/base.py#L1-L20)

### Configuration Factory

The `LLMConfigs` class provides a centralized configuration registry:

```python
class LLMConfigs:
    @staticmethod
    def get_config(provider: str) -> LLMConfig:
        # Returns provider-specific configuration
        pass
```

资料来源：[mem0/configs/llms/__init__.py](https://github.com/mem0ai/mem0/blob/main/mem0/configs/llms/__init__.py#L1-L30)

## Usage Patterns

### Direct Instantiation

```python
from mem0.llms.openai import OpenAILargeLanguageModel

llm = OpenAILargeLanguageModel(
    model="gpt-4",
    temperature=0.3,
    max_tokens=1000
)

response = llm.generate_response(messages=[
    {"role": "user", "content": "Summarize my notes"}
])
```

### Configuration-Based

```python
from mem0.configs.llms import LLMConfigs

config = LLMConfigs.get_config("openai")
llm = config.initialize()
```

## Message Format

All providers accept a standardized message format:

```python
messages = [
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "What is mem0?"},
    {"role": "assistant", "content": "Mem0 is a memory system..."},
    {"role": "user", "content": "Tell me more"}
]
```

| Role | Description |
|------|-------------|
| `system` | System-level instructions |
| `user` | User input messages |
| `assistant` | Model responses |
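
A small validation sketch makes the role convention above explicit; the helper is hypothetical and not part of mem0:

```python
# Hypothetical helper: enforce the role/content shape described above.
VALID_ROLES = {"system", "user", "assistant"}

def validate_messages(messages):
    """Check each message's role and content type; return messages unchanged."""
    for i, msg in enumerate(messages):
        if msg.get("role") not in VALID_ROLES:
            raise ValueError(f"message {i}: invalid role {msg.get('role')!r}")
        if not isinstance(msg.get("content"), str):
            raise ValueError(f"message {i}: content must be a string")
    return messages

validate_messages([
    {"role": "system", "content": "You are a helpful assistant"},
    {"role": "user", "content": "What is mem0?"},
])  # passes silently
```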

## Error Handling

All LLM providers implement consistent error handling:

```python
try:
    response = llm.generate_response(messages)
except AuthenticationError:
    # Handle invalid API key
    pass
except RateLimitError:
    # Handle rate limiting
    pass
except APIConnectionError:
    # Handle connection issues
    pass
```

## Extending the Framework

To add a new LLM provider:

1. Create a new class inheriting from `LargeLanguageModel`
2. Implement `generate_response()` and `get_model_name()` methods
3. Add provider-specific configuration in `mem0/configs/llms/`
4. Register the provider in the configuration factory

```python
class CustomLLM(LargeLanguageModel):
    def __init__(self, model: str = "custom-model", **kwargs):
        self.model = model
    
    def generate_response(self, messages, **kwargs):
        # Implementation
        pass
    
    def get_model_name(self):
        return self.model
```
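
Step 4 can be sketched as a name-to-class registry. This is a hypothetical illustration; the actual factory in `mem0/configs/llms/` may wire providers differently:

```python
# Hypothetical registry sketch illustrating step 4; not the actual mem0 factory.
class CustomLLM:  # minimal stand-in for the CustomLLM class above
    def __init__(self, model: str = "custom-model", **kwargs):
        self.model = model

    def generate_response(self, messages, **kwargs):
        return ""  # placeholder implementation

    def get_model_name(self):
        return self.model

_PROVIDERS = {}

def register_provider(name, cls):
    """Map a provider name to its LLM class."""
    _PROVIDERS[name] = cls

def create_llm(name, **kwargs):
    """Instantiate a registered provider by name."""
    if name not in _PROVIDERS:
        raise ValueError(f"unknown provider: {name!r}")
    return _PROVIDERS[name](**kwargs)

register_provider("custom", CustomLLM)
llm = create_llm("custom")
print(llm.get_model_name())  # custom-model
```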

## Security Considerations

- API keys should be provided via environment variables, not hardcoded
- Rate limiting is handled by the underlying provider APIs
- Azure OpenAI supports managed identity for enterprise deployments
- Gemini supports API key restrictions in Google Cloud Console

---

<a id='page-vector-stores'></a>

## Vector Stores and Storage

### 相关页面

相关主题：[System Architecture](#page-architecture), [Embedding Models](#page-embeddings), [Memory Operations](#page-memory-operations)

<details>
<summary>相关源码文件</summary>

以下源码文件用于生成本页说明：

- [mem0/vector_stores/base.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/base.py)
- [mem0/vector_stores/pinecone.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/pinecone.py)
- [mem0/vector_stores/qdrant.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/qdrant.py)
- [mem0/vector_stores/chroma.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/chroma.py)
- [mem0/vector_stores/pgvector.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/pgvector.py)
- [mem0/vector_stores/weaviate.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/weaviate.py)
- [mem0/vector_stores/redis.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/redis.py)
- [mem0/vector_stores/configs.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/configs.py)
- [docs/components/vectordbs/overview.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/vectordbs/overview.mdx)
- [docs/components/vectordbs/config.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/vectordbs/config.mdx)
</details>

# Vector Stores and Storage

## Overview

Vector stores in Mem0 provide the foundational persistence layer for semantic memory storage and retrieval. Mem0 supports multiple vector database backends, allowing users to choose the storage solution that best fits their infrastructure requirements, scale needs, and operational constraints.

The vector store system enables:

- **Semantic Search**: Store memory embeddings and retrieve relevant memories based on cosine similarity
- **Multi-Provider Support**: Integrate with popular vector databases including Pinecone, Qdrant, Chroma, PGVector, Weaviate, and Redis
- **Unified Interface**: Consistent API across all providers through an abstract base class
- **Metadata Filtering**: Filter memories by user_id, agent_id, run_id, and custom metadata
- **Scalability**: Support for both local development (Chroma) and production-scale deployments (Pinecone, Qdrant)

资料来源：[docs/components/vectordbs/overview.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/vectordbs/overview.mdx)

## Architecture

Mem0 implements a provider-based architecture for vector stores. The system consists of:

1. **Base Vector Store Interface**: Abstract class defining the contract all providers must implement
2. **Provider Implementations**: Concrete implementations for each supported vector database
3. **Configuration System**: Provider-specific configuration management
4. **Factory Pattern**: Dynamic instantiation based on provider selection

```mermaid
graph TD
    A[Mem0 Memory Core] --> B[VectorStoreFactory]
    B --> C[BaseVectorStore]
    C --> D[Pinecone]
    C --> E[Qdrant]
    C --> F[Chroma]
    C --> G[PGVector]
    C --> H[Weaviate]
    C --> I[Redis]
    
    J[Embedding Service] --> K[Vector Store]
    K --> L[Semantic Search Results]
```

资料来源：[mem0/vector_stores/base.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/base.py)

## Base Vector Store Interface

All vector store providers inherit from `BaseVectorStore`, which defines the core operations required for memory storage and retrieval.

### Core Methods

| Method | Description |
|--------|-------------|
| `add` | Insert vectors with associated metadata into the store |
| `search` | Query vectors by semantic similarity with optional filters |
| `get` | Retrieve specific vector entries by ID |
| `delete` | Remove vectors from the store |
| `update` | Modify existing vector entries |
| `list` | List all vectors with optional pagination and filters |

资料来源：[mem0/vector_stores/base.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/base.py)

### Data Model

Each vector entry in the store contains:

```python
{
    "id": str,           # Unique identifier (UUID)
    "vector": List[float],  # Embedding vector
    "data": str,         # Original text content
    "metadata": {
        "user_id": str,
        "agent_id": Optional[str],
        "run_id": Optional[str],
        "event": Optional[str],
        "created_at": str,
        "memory_type": Optional[str]
    }
}
```

资料来源：[mem0/vector_stores/base.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/base.py)

## Supported Providers

### Provider Comparison

| Provider | Type | Deployment | Scalability | Use Case |
|----------|------|------------|-------------|----------|
| **Chroma** | Local/Embedded | In-process | Low | Development, prototyping |
| **Pinecone** | Cloud/Managed | Hosted | Very High | Production at scale |
| **Qdrant** | Self-hosted/Cloud | Docker/K8s | High | Self-hosted production |
| **PGVector** | Self-hosted | PostgreSQL extension | High | Existing Postgres infra |
| **Weaviate** | Self-hosted/Cloud | Docker/K8s | High | Knowledge graphs |
| **Redis** | Self-hosted/Cloud | Redis Stack | Medium | Cache + vector hybrid |

资料来源：[docs/components/vectordbs/overview.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/vectordbs/overview.mdx)

### Chroma (Development)

Chroma is the default vector store for local development and testing. It runs as an embedded database within the application process.

**Characteristics:**
- Zero-configuration setup
- In-process operation
- File-based persistence
- Best for development and evaluation

资料来源：[mem0/vector_stores/chroma.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/chroma.py)

### Pinecone (Cloud)

Pinecone is a managed vector database service offering serverless and pod-based deployments.

**Configuration:**
```python
{
    "vector_store": {
        "provider": "pinecone",
        "config": {
            "api_key": "your-api-key",
            "index_name": "mem0-memory",
            "environment": "gcp-starter"
        }
    }
}
```

资料来源：[mem0/vector_stores/pinecone.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/pinecone.py), [docs/components/vectordbs/config.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/vectordbs/config.mdx)

### Qdrant (Self-hosted/Cloud)

Qdrant is an open-source vector search engine with both self-hosted and cloud options.

**Configuration:**
```python
{
    "vector_store": {
        "provider": "qdrant",
        "config": {
            "host": "localhost",
            "port": 6333,
            "collection_name": "mem0"
        }
    }
}
```

资料来源：[mem0/vector_stores/qdrant.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/qdrant.py)

### PGVector (PostgreSQL)

PGVector extends PostgreSQL with vector similarity search capabilities, ideal for applications already using PostgreSQL.

**Configuration:**
```python
{
    "vector_store": {
        "provider": "pgvector",
        "config": {
            "host": "localhost",
            "port": 5432,
            "dbname": "mem0",
            "user": "postgres",
            "password": "password"
        }
    }
}
```

资料来源：[mem0/vector_stores/pgvector.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/pgvector.py)

### Weaviate

Weaviate is an open-source vector database with built-in support for hybrid search and knowledge graphs.

**Configuration:**
```python
{
    "vector_store": {
        "provider": "weaviate",
        "config": {
            "url": "http://localhost:8080",
            "api_key": "your-api-key",  # Optional, for cloud
            "index_name": "Mem0"
        }
    }
}
```

资料来源：[mem0/vector_stores/weaviate.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/weaviate.py)

### Redis

Redis Stack provides vector search capabilities built on the popular in-memory data store.

**Configuration:**
```python
{
    "vector_store": {
        "provider": "redis",
        "config": {
            "host": "localhost",
            "port": 6379,
            "index_name": "mem0",
            "password": "password"  # Optional
        }
    }
}
```

资料来源：[mem0/vector_stores/redis.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/redis.py)

## Configuration System

### Configuration Schema

The vector store configuration is defined in `configs.py` and follows a structured schema:

```python
@dataclass
class VectorStoreConfig:
    provider: str                    # Provider name
    collection_name: str             # Collection/index name
    embedding_model_dims: int        # Embedding dimension size
    api_key: Optional[str] = None   # Provider API key
    # ... additional provider-specific fields
```

资料来源：[mem0/vector_stores/configs.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/configs.py)

### Configuration File

Vector store settings are typically defined in `config.yaml`:

```yaml
vector_store:
  provider: "chroma"  # or pinecone, qdrant, pgvector, weaviate, redis
  collection_name: "mem0"
  embedding_model_dims: 1536
```

资料来源：[docs/components/vectordbs/config.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/vectordbs/config.mdx)

### Environment Variables

Many providers support configuration via environment variables:

| Variable | Provider | Description |
|----------|----------|-------------|
| `PINECONE_API_KEY` | Pinecone | Pinecone API key |
| `QDRANT_HOST` | Qdrant | Qdrant server host |
| `REDIS_PASSWORD` | Redis | Redis authentication |
| `WEAVIATE_API_KEY` | Weaviate | Weaviate cloud API key |

## Search Operations

### Semantic Search

The primary operation for memory retrieval is semantic search, which finds vectors most similar to a query embedding.

```python
results = vector_store.search(
    query="user's preference for morning coffee",
    limit=5,
    filters={
        "user_id": "user-123"
    }
)
```

### Search Parameters

| Parameter | Type | Default | Description |
|-----------|------|---------|-------------|
| `query` | str | Required | Search query text |
| `limit` | int | 10 | Maximum results to return |
| `filters` | dict | None | Metadata filters |
| `min_score` | float | None | Minimum similarity threshold |

资料来源：[mem0/vector_stores/base.py](https://github.com/mem0ai/mem0/blob/main/mem0/vector_stores/base.py)

### Metadata Filtering

Mem0 supports filtering search results by various metadata fields:

```python
filters = {
    "user_id": "user-123",           # Required: filter by user
    "agent_id": "agent-456",         # Optional: filter by agent
    "run_id": "run-789",             # Optional: filter by session
    "memory_type": "preference",     # Optional: filter by type
    "created_at": {"$gte": "2024-01-01"}  # Optional: time-based
}
```
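
To make the filter semantics concrete, here is an illustrative in-memory matcher for the filter shape above. The Mongo-style operator clauses (`$gte`, `$lte`) follow the example; in practice each backend translates filters into its own native query syntax:

```python
# Illustrative matcher for the filter dict shown above; real providers push
# these predicates down into their own query languages.
def matches(metadata: dict, filters: dict) -> bool:
    for key, expected in filters.items():
        value = metadata.get(key)
        if isinstance(expected, dict):  # operator clause, e.g. {"$gte": ...}
            for op, bound in expected.items():
                if value is None:
                    return False
                # ISO-8601 date strings compare correctly lexicographically
                if op == "$gte" and not value >= bound:
                    return False
                if op == "$lte" and not value <= bound:
                    return False
        elif value != expected:  # plain equality
            return False
    return True

meta = {"user_id": "user-123", "memory_type": "preference", "created_at": "2024-03-10"}
print(matches(meta, {"user_id": "user-123", "created_at": {"$gte": "2024-01-01"}}))  # True
```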

## Memory Management

### Adding Memories

```python
vector_store.add(
    vectors=embeddings,
    documents=memory_texts,
    metadatas=metadata_list
)
```

### Updating Memories

```python
vector_store.update(
    id="memory-uuid",
    vector=new_embedding,
    data=new_text,
    metadata=updated_metadata
)
```

### Deleting Memories

```python
# Delete single memory
vector_store.delete(id="memory-uuid")

# Delete all memories for a user
vector_store.delete(filters={"user_id": "user-123"})

# Delete all memories
vector_store.delete(delete_all=True)
```

## Embedding Integration

Vector stores work in conjunction with Mem0's embedding service to convert text into vector representations.

```mermaid
graph LR
    A[User Message] --> B[Embedding Service]
    B --> C[Embedding Vector]
    C --> D[Vector Store]
    D --> E[Storage / Retrieval]
    
    F[Search Query] --> G[Embedding Service]
    G --> H[Query Vector]
    H --> D
    D --> I[Similarity Search]
    I --> J[Top-K Results]
```

The embedding dimension must match the vector store configuration. Mem0 uses 1536 dimensions by default (OpenAI text-embedding-3-small).

资料来源：[docs/components/vectordbs/overview.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/vectordbs/overview.mdx)
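
A fail-fast startup check (hypothetical helper, not part of mem0) catches the `Dimension mismatch` error listed under Troubleshooting before any writes happen:

```python
# Hypothetical startup check: fail early when embedder output size and the
# vector store's configured dimension disagree.
def check_dims(embedder_dims: int, store_dims: int) -> None:
    if embedder_dims != store_dims:
        raise ValueError(
            f"Dimension mismatch: embedder produces {embedder_dims}, "
            f"vector store expects {store_dims}; "
            "update embedding_model_dims in the config"
        )

check_dims(1536, 1536)  # OpenAI text-embedding-3-small default: OK
```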

## Best Practices

### Development vs Production

| Aspect | Development | Production |
|--------|-------------|------------|
| **Provider** | Chroma | Pinecone/Qdrant/PGVector |
| **Deployment** | Local embedded | Managed/self-hosted |
| **Persistence** | File-based | Cloud/server |
| **Scaling** | Limited | Horizontal |

### Performance Considerations

1. **Index Management**: Ensure proper indexing is configured for your provider
2. **Batch Operations**: Use batch inserts when adding multiple memories
3. **Connection Pooling**: Configure connection pools for high-throughput scenarios
4. **Embedding Cache**: Cache embeddings to avoid redundant computations
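
Point 2 can be sketched as a thin wrapper around the `add` call shown under "Adding Memories" above; the `batch_size` default and the fake store are illustrative assumptions:

```python
# Sketch of batched inserts. The add(vectors=..., documents=..., metadatas=...)
# call mirrors the "Adding Memories" example on this page; batch_size is an
# illustrative assumption.
def add_in_batches(store, vectors, documents, metadatas, batch_size=100):
    for i in range(0, len(vectors), batch_size):
        store.add(
            vectors=vectors[i:i + batch_size],
            documents=documents[i:i + batch_size],
            metadatas=metadatas[i:i + batch_size],
        )

class FakeStore:
    """In-memory stand-in used only to demonstrate the call pattern."""
    def __init__(self):
        self.batch_sizes = []
    def add(self, vectors, documents, metadatas):
        self.batch_sizes.append(len(vectors))

store = FakeStore()
add_in_batches(store, [[0.0]] * 250, ["m"] * 250, [{}] * 250)
print(store.batch_sizes)  # [100, 100, 50]
```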

### Security

- Store API keys in environment variables, not in configuration files
- Use TLS/SSL connections for production deployments
- Implement proper access controls based on user_id filtering

## Troubleshooting

### Common Issues

| Issue | Cause | Solution |
|-------|-------|----------|
| `Dimension mismatch` | Embedding model dims != index config | Update `embedding_model_dims` in config |
| `Connection refused` | Wrong host/port | Verify provider configuration |
| `Authentication failed` | Invalid API key | Check API key in environment |
| `Index not found` | Collection doesn't exist | Create index or use auto-creation |

### Debug Mode

Enable verbose logging for vector store operations:

```python
import logging
logging.getLogger("mem0.vector_stores").setLevel(logging.DEBUG)
```

## See Also

- [Memory Operations](#page-memory-operations) - The main memory orchestration layer
- [Embedding Models](#page-embeddings) - Text vectorization
- [Configuration Reference](https://github.com/mem0ai/mem0/blob/main/docs/components/vectordbs/config.mdx) - Full vector store configuration reference
- Deployment Guide - Production deployment patterns

---

<a id='page-embeddings'></a>

## Embedding Models

### 相关页面

相关主题：[AI Model Integration](#page-ai-integration), [Vector Stores and Storage](#page-vector-stores)

<details>
<summary>相关源码文件</summary>

以下源码文件用于生成本页说明：

- [mem0/embeddings/base.py](https://github.com/mem0ai/mem0/blob/main/mem0/embeddings/base.py)
- [mem0/embeddings/openai.py](https://github.com/mem0ai/mem0/blob/main/mem0/embeddings/openai.py)
- [mem0/embeddings/azure_openai.py](https://github.com/mem0ai/mem0/blob/main/mem0/embeddings/azure_openai.py)
- [mem0/embeddings/huggingface.py](https://github.com/mem0ai/mem0/blob/main/mem0/embeddings/huggingface.py)
- [mem0/embeddings/ollama.py](https://github.com/mem0ai/mem0/blob/main/mem0/embeddings/ollama.py)
- [mem0/embeddings/configs.py](https://github.com/mem0ai/mem0/blob/main/mem0/embeddings/configs.py)
- [docs/components/embedders/overview.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/embedders/overview.mdx)
- [docs/components/embedders/models/openai.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/embedders/models/openai.mdx)
- [docs/components/embedders/models/huggingface.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/embedders/models/huggingface.mdx)
- [docs/components/embedders/models/ollama.mdx](https://github.com/mem0ai/mem0/blob/main/docs/components/embedders/models/ollama.mdx)
</details>

# Embedding Models

Embedding models are a fundamental component of the mem0 memory system. They transform textual information into dense vector representations (embeddings) that enable semantic search, similarity matching, and efficient memory retrieval. The embedding layer sits at the core of mem0's architecture, bridging raw user interactions with the vector-based storage layer.

## Overview

Mem0 provides a flexible, provider-agnostic embedding abstraction that supports multiple embedding backends while maintaining a consistent interface. This design allows users to choose embedding providers based on their requirements for cost, latency, privacy, or quality.

The embedding system in mem0 is built around an abstract base class that defines the contract for all concrete implementations. Each provider implementation handles the specifics of API communication, response parsing, and error handling while conforming to the unified interface.

**Key characteristics of mem0's embedding layer:**

- Provider-agnostic abstraction with consistent API across implementations
- Support for both cloud-based and local embedding models
- Configuration-driven provider selection
- Seamless integration with the vector storage layer
- Extensible architecture for adding custom embedding providers

## Architecture

```mermaid
graph TD
    A[User Input] --> B[Memory Layer]
    B --> C[Embedding Module]
    C --> D[Vector Store]
    
    C --> E[OpenAI Embedder]
    C --> F[Azure OpenAI Embedder]
    C --> G[HuggingFace Embedder]
    C --> H[Ollama Embedder]
    
    E --> I[text-embedding-3-small]
    F --> J[Azure OpenAI Models]
    G --> K[HF Sentence Transformers]
    H --> L[Local Ollama Models]
    
    D --> M[Semantic Search]
    D --> N[Memory Retrieval]
    D --> O[Similarity Matching]
```

## Supported Providers

Mem0 supports multiple embedding providers to accommodate various deployment scenarios. Each provider implements the same abstract interface, allowing transparent switching between backends.

### Provider Comparison

| Provider | Type | Default Model | API Key Required | Local Model Support |
|----------|------|---------------|------------------|---------------------|
| OpenAI | Cloud | `text-embedding-3-small` | Yes | No |
| Azure OpenAI | Cloud | Configurable | Yes | No |
| HuggingFace | Cloud/Self-hosted | Various sentence-transformers | Optional | Yes |
| Ollama | Local | `nomic-embed-text` | No | Yes |

## Configuration

Embedding models are configured through the mem0 configuration system. Each provider has its own configuration parameters, but all share a common structure.

### Basic Configuration

```python
from mem0 import Memory

config = {
    "embedder": {
        "provider": "openai",
        "config": {
            "model": "text-embedding-3-small",
            "api_key": "sk-..."
        }
    }
}

memory = Memory.from_config(config)
```

### Environment Variable Configuration

Many configuration parameters can be set via environment variables, simplifying deployment and reducing boilerplate code:

| Environment Variable | Description | Provider |
|---------------------|-------------|----------|
| `OPENAI_API_KEY` | OpenAI API key for embeddings | OpenAI |
| `AZURE_OPENAI_API_KEY` | Azure OpenAI API key | Azure OpenAI |
| `HF_TOKEN` | HuggingFace API token | HuggingFace |
| `OLLAMA_BASE_URL` | Ollama server URL | Ollama |

## OpenAI Embeddings

The OpenAI embedder provides access to OpenAI's embedding models through the official API. It is the default provider in mem0 and offers a balance of quality, cost, and ease of use.

### Supported Models

| Model | Dimensions | Output Format | Use Case |
|-------|------------|---------------|----------|
| `text-embedding-3-small` | 1536 | Float32 | General purpose, recommended |
| `text-embedding-3-large` | 3072 | Float32 | Higher quality, larger vectors |
| `text-embedding-ada-002` | 1536 | Float32 | Legacy model, compatible |

### Configuration Options

```python
{
    "provider": "openai",
    "config": {
        "model": "text-embedding-3-small",  # Optional, defaults to text-embedding-3-small
        "api_key": "sk-...",                 # Optional if OPENAI_API_KEY is set
        "base_url": "https://api.openai.com/v1",  # Optional, for proxies
        "timeout": 60,                       # Optional, request timeout in seconds
        "max_retries": 3                     # Optional, number of retries on failure
    }
}
```

## Azure OpenAI Embeddings

Azure OpenAI embeddings provide the same model quality as OpenAI with enterprise-grade security, compliance, and regional availability. This is the preferred option for organizations requiring Azure infrastructure.

### Configuration Options

```python
{
    "provider": "azure_openai",
    "config": {
        "model": "text-embedding-3-small",    # Model deployment name
        "api_key": "your-azure-api-key",
        "azure_endpoint": "https://your-resource.openai.azure.com",
        "azure_deployment": "your-deployment-name",
        "api_version": "2024-02-01"           # Optional, Azure API version
    }
}
```

## HuggingFace Embeddings

The HuggingFace embedder supports both cloud-based inference and local models from the HuggingFace ecosystem. This provides flexibility for privacy-sensitive applications or cost optimization.

### Supported Model Families

| Model Type | Examples | Description |
|------------|----------|-------------|
| Sentence Transformers | `all-MiniLM-L6-v2`, `BAAI/bge-large` | Optimized for sentence-level embeddings |
| Generic Transformers | `bert-base-uncased` | General-purpose transformer models |

### Configuration Options

```python
{
    "provider": "huggingface",
    "config": {
        "model": "sentence-transformers/all-MiniLM-L6-v2",  # Model identifier
        "token": "hf_...",           # Optional, for gated models
        "device": "cpu",             # Optional, cpu/cuda/mps
        "encode_kwargs": {          # Optional, encoding parameters
            "normalize_embeddings": True
        }
    }
}
```

## Ollama Embeddings

Ollama enables running embedding models entirely locally, providing complete data privacy and no API costs. This is ideal for development, testing, or production environments with strict data residency requirements.

### Supported Models

| Model | Dimensions | Description |
|-------|------------|-------------|
| `nomic-embed-text` | 768 | High-quality, efficient embeddings |
| `mxbai-embed-large` | 1024 | Larger model for higher quality |
| Custom Ollama models | Variable | Any embedding model available in Ollama |

### Configuration Options

```python
{
    "provider": "ollama",
    "config": {
        "model": "nomic-embed-text",        # Model name
        "base_url": "http://localhost:11434" # Ollama server URL
    }
}
```

## Base Interface

All embedding providers inherit from the abstract base class that defines the standard interface:

```python
from abc import ABC, abstractmethod
from typing import List

class EmbedderBase(ABC):
    @abstractmethod
    def embed(self, text: str) -> List[float]:
        """Generate embedding vector for a single text."""
        pass

    @abstractmethod
    def embed_batch(self, texts: List[str]) -> List[List[float]]:
        """Generate embedding vectors for multiple texts."""
        pass

    @abstractmethod
    def get_vector_size(self) -> int:
        """Return the dimensionality of embedding vectors."""
        pass
```

## Usage Patterns

### Single Text Embedding

```python
from mem0 import Memory

memory = Memory()
result = memory.add("User prefers dark mode theme", user_id="user123")
```

### Batch Embedding

```python
from mem0 import Memory

memory = Memory()
messages = [
    "User lives in San Francisco",
    "Prefers coffee over tea",
    "Works as a software engineer"
]
result = memory.add_batch(messages, user_id="user123")
```

### Semantic Search with Custom Embedder

```python
from mem0 import Memory

config = {
    "embedder": {
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",
            "base_url": "http://localhost:11434"
        }
    }
}

memory = Memory.from_config(config)
results = memory.search("What are the user's preferences?", user_id="user123")
```

## Extending with Custom Providers

To add a new embedding provider, implement the `EmbedderBase` abstract class:

```python
from typing import List

from mem0.embeddings.base import EmbedderBase

class CustomEmbedder(EmbedderBase):
    def __init__(self, config: dict):
        self.config = config
        # Initialize your embedding client

    def embed(self, text: str) -> List[float]:
        # Implement single text embedding
        pass

    def embed_batch(self, texts: List[str]) -> List[List[float]]:
        # Implement batch embedding
        pass

    def get_vector_size(self) -> int:
        # Return embedding dimensions
        pass
```

## Best Practices

1. **Model Selection**: Choose `text-embedding-3-small` for general use cases as it offers the best balance of quality and cost. Use `text-embedding-3-large` when higher accuracy is required.

2. **Local Deployment**: For privacy-sensitive applications, use Ollama with `nomic-embed-text` to keep all data local.

3. **Consistent Embedding Dimensions**: Ensure all memories use the same embedding model and configuration for proper similarity calculations.

4. **API Key Management**: Use environment variables for API keys in production environments rather than hardcoding credentials.

5. **Error Handling**: Implement appropriate retry logic and timeout settings, especially when using cloud-based embedding providers.
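
The dimension-consistency recommendation above can be turned into a quick runtime check. The sketch below uses hypothetical stub embedders rather than real providers; only the `get_vector_size` method mirrors the base interface shown earlier.

```python
from typing import List


class StubSmallEmbedder:
    """Stand-in for an embedder using a 1536-dimension model."""

    def get_vector_size(self) -> int:
        return 1536


class StubLargeEmbedder:
    """Stand-in for an embedder using a 3072-dimension model."""

    def get_vector_size(self) -> int:
        return 3072


def dimensions_match(embedders: List[object]) -> bool:
    """True only if every embedder reports the same vector size."""
    return len({e.get_vector_size() for e in embedders}) <= 1


# Mixing models with different dimensions breaks similarity calculations:
assert dimensions_match([StubSmallEmbedder(), StubSmallEmbedder()])
assert not dimensions_match([StubSmallEmbedder(), StubLargeEmbedder()])
```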

## Related Components

- **Vector Store**: The embedding layer feeds into the vector storage system for efficient similarity search
- **Memory Layer**: High-level memory operations use embeddings for storage and retrieval
- **Configuration System**: Centralized configuration management for all embedding providers

---

<a id='page-python-sdk'></a>

## Python SDK

### Related Pages

Related topics: [Memory Operations](#page-memory-operations), [TypeScript/Node.js SDK](#page-typescript-sdk)

<details>
<summary>Related source files</summary>

The following source files were used to generate this page:

- [mem0/__init__.py](https://github.com/mem0ai/mem0/blob/main/mem0/__init__.py)
- [mem0/memory/main.py](https://github.com/mem0ai/mem0/blob/main/mem0/memory/main.py)
- [mem0/client/main.py](https://github.com/mem0ai/mem0/blob/main/mem0/client/main.py)
- [mem0/configs/base.py](https://github.com/mem0ai/mem0/blob/main/mem0/configs/base.py)
- [mem0/exceptions.py](https://github.com/mem0ai/mem0/blob/main/mem0/exceptions.py)
- [docs/open-source/python-quickstart.mdx](https://github.com/mem0ai/mem0/blob/main/docs/open-source/python-quickstart.mdx)
- [docs/api-reference.mdx](https://github.com/mem0ai/mem0/blob/main/docs/api-reference.mdx)
</details>

# Python SDK

## Overview

The mem0 Python SDK provides a programmatic interface for integrating memory management capabilities into AI applications. It enables developers to store, retrieve, search, and manage persistent memory across AI agent interactions, supporting both self-hosted deployments and managed cloud services.

Source: [mem0/__init__.py:1-50]()

## Architecture

The SDK is structured around three core components that handle different aspects of memory operations:

```mermaid
graph TD
    A[Client Layer] --> B[Memory Layer]
    A --> C[Configuration]
    B --> D[Vector Store]
    B --> E[LLM Integration]
    C --> F[BaseConfig]
    C --> G[LLMConfig]
    C --> H[VectorStoreConfig]
```

### Core Components

| Component | File | Purpose |
|-----------|------|---------|
| Client | `mem0/client/main.py` | High-level API for cloud and self-hosted deployments |
| Memory | `mem0/memory/main.py` | Core memory operations engine |
| Configs | `mem0/configs/base.py` | Configuration management for providers |

Source: [mem0/client/main.py:1-30]()

## Installation

Install the mem0 package along with required dependencies:

```bash
pip install mem0ai
```

For specific LLM and vector store backends, install additional packages:

```bash
# OpenAI + Qdrant (quotes keep shells like zsh from expanding the brackets)
pip install "mem0ai[openai,qdrant]"

# Azure OpenAI + Chroma
pip install "mem0ai[azure-openai,chromadb]"
```

Source: [docs/open-source/python-quickstart.mdx:1-50]()

## Quick Start

### Basic Memory Operations

```python
from mem0 import Memory

# Initialize memory instance
memory = Memory()

# Add memories
result = memory.add(
    messages=[
        {"role": "user", "content": "I'm planning to visit Tokyo next month."},
        {"role": "assistant", "content": "That's exciting! Tokyo has great places to visit."}
    ],
    user_id="user_123"
)

# Search memories
results = memory.search(
    query="What are my travel plans?",
    user_id="user_123"
)

# Get all memories for a user
all_memories = memory.get_all(user_id="user_123")

# Update a memory
memory.update(memory_id="mem_xxx", data="Updated content here")

# Delete a memory
memory.delete(memory_id="mem_xxx")
```

Source: [docs/open-source/python-quickstart.mdx:50-100]()

## Configuration

### Configuration Parameters

| Parameter | Type | Description | Default |
|-----------|------|-------------|---------|
| `llm` | dict | LLM provider configuration | Required |
| `vector_store` | dict | Vector store provider configuration | Required |
| `embedder` | dict | Embedding model configuration | Optional |
| `memory_history_limit` | int | Number of conversation turns to retain | 20 |

Source: [mem0/configs/base.py:1-80]()
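
Taken together, the parameters above map onto a plain configuration dictionary. The following is a sketch under the assumption that `Memory.from_config` accepts this dictionary shape; the nested provider fields are illustrative and should be verified against the installed release.

```python
config = {
    "llm": {
        "provider": "openai",
        "config": {"model": "gpt-4o"},
    },
    "vector_store": {
        "provider": "qdrant",
        "config": {"host": "localhost", "port": 6333},
    },
    "embedder": {  # optional
        "provider": "openai",
        "config": {"model": "text-embedding-3-small"},
    },
    "memory_history_limit": 20,  # optional, default per the table above
}

# from mem0 import Memory
# memory = Memory.from_config(config)  # requires the mem0ai package
```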

### LLM Configuration

```python
from mem0 import Memory
from mem0.configs.base import LLMConfig

config = LLMConfig(
    provider="openai",
    model="gpt-4o",
    api_key="your-api-key"
)

memory = Memory.from_config(llm_config=config)
```

### Vector Store Configuration

```python
from mem0.configs.base import VectorStoreConfig

vector_config = VectorStoreConfig(
    provider="qdrant",
    host="localhost",
    port=6333,
    collection_name="memories"
)
```

Source: [mem0/configs/base.py:80-150]()

## Memory Operations API

### Adding Memories

The `add()` method stores new memories from conversation messages:

```python
memory.add(
    messages=[{"role": "user", "content": "User message"}],
    user_id="user_123",
    session_id="session_456",
    metadata={"source": "chat"}
)
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `messages` | list[dict] | Yes | List of message objects with role and content |
| `user_id` | str | Yes | Unique identifier for the user |
| `session_id` | str | No | Session or conversation identifier |
| `metadata` | dict | No | Additional metadata to attach |

Source: [mem0/memory/main.py:100-180]()

### Searching Memories

```python
results = memory.search(
    query="Find information about...",
    user_id="user_123",
    limit=5,
    rerank=True
)
```

| Parameter | Type | Required | Description |
|-----------|------|----------|-------------|
| `query` | str | Yes | Search query text |
| `user_id` | str | No | Filter by user |
| `limit` | int | No | Maximum results (default: 10) |
| `rerank` | bool | No | Apply reranking to results |

Source: [mem0/memory/main.py:180-250]()

### Retrieving Memories

```python
# Get all memories for a user
all_memories = memory.get_all(user_id="user_123")

# Get specific memory by ID
memory_item = memory.get(memory_id="mem_xxx")
```

### Updating Memories

```python
memory.update(
    memory_id="mem_xxx",
    data="Updated memory content",
    metadata={"key": "value"}
)
```

### Deleting Memories

```python
# Delete specific memory
memory.delete(memory_id="mem_xxx")

# Delete all memories for a user
memory.delete_all(user_id="user_123")
```

Source: [mem0/memory/main.py:250-350]()

## Client Interface

The `Mem0` client provides a unified interface for interacting with mem0 services:

```python
from mem0 import Mem0

# Initialize client
client = Mem0(api_key="your-api-key", app_id="your-app-id")

# Add memories via client
result = client.add(
    messages=[{"role": "user", "content": "Hello"}],
    user_id="user_123"
)
```

Source: [mem0/client/main.py:1-100]()

## Exception Handling

The SDK defines custom exceptions for error handling:

| Exception | Description |
|-----------|-------------|
| `Mem0Exception` | Base exception class for all mem0 errors |
| `ValidationError` | Invalid input parameters |
| `AuthenticationError` | Invalid or missing API credentials |
| `RateLimitError` | API rate limit exceeded |
| `NotFoundError` | Requested resource not found |

Source: [mem0/exceptions.py:1-50]()

### Handling Exceptions

```python
from mem0.exceptions import Mem0Exception, ValidationError

try:
    memory.add(messages=[], user_id="user_123")
except ValidationError as e:
    print(f"Invalid input: {e}")
except Mem0Exception as e:
    print(f"Memory operation failed: {e}")
```
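
`RateLimitError` from the table above is the natural candidate for retries. The sketch below is self-contained: `StubRateLimitError` is a stand-in class so the example runs without the SDK installed; in application code you would catch `RateLimitError` from `mem0.exceptions` instead.

```python
import time


class StubRateLimitError(Exception):
    """Stand-in for mem0.exceptions.RateLimitError in this sketch."""


def with_retries(operation, max_attempts=3, base_delay=0.01):
    """Retry an operation with exponential backoff on rate-limit errors."""
    for attempt in range(max_attempts):
        try:
            return operation()
        except StubRateLimitError:
            if attempt == max_attempts - 1:
                raise
            time.sleep(base_delay * (2 ** attempt))


calls = {"count": 0}

def flaky_add():
    """Fails twice with a rate-limit error, then succeeds."""
    calls["count"] += 1
    if calls["count"] < 3:
        raise StubRateLimitError("slow down")
    return {"status": "ok"}


assert with_retries(flaky_add) == {"status": "ok"}
assert calls["count"] == 3
```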

## Data Flow

```mermaid
sequenceDiagram
    participant App as Application
    participant SDK as Python SDK
    participant Memory as Memory Engine
    participant Vector as Vector Store
    participant LLM as LLM Provider

    App->>SDK: memory.add(messages)
    SDK->>Memory: process_messages()
    Memory->>LLM: extract_and_summarize()
    LLM-->>Memory: structured_memories
    Memory->>Vector: store(memories)
    Vector-->>Memory: confirm
    Memory-->>SDK: result
    SDK-->>App: MemoryResult
```

## Supported Providers

### LLM Providers

| Provider | Package | Configuration Key |
|----------|---------|-------------------|
| OpenAI | `openai` | `openai` |
| Azure OpenAI | `azure-openai` | `azure_openai` |
| Anthropic | `anthropic` | `anthropic` |
| Groq | `groq` | `groq` |
| Ollama | `ollama` | `ollama` |
| LM Studio | `lmstudio` | `lmstudio` |

### Vector Store Providers

| Provider | Package | Configuration Key |
|----------|---------|-------------------|
| Qdrant | `qdrant-client` | `qdrant` |
| Chroma | `chromadb` | `chroma` |
| Weaviate | `weaviate-client` | `weaviate` |
| Milvus | `pymilvus` | `milvus` |
| Pinecone | `pinecone-client` | `pinecone` |

Source: [mem0/configs/base.py:150-250]()
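
Swapping providers amounts to changing the configuration keys listed above. A hedged sketch of an Anthropic + Chroma combination follows; the model identifier and nested fields are illustrative and should be checked against current documentation.

```python
# Provider keys come from the tables above; nested fields are illustrative.
config = {
    "llm": {
        "provider": "anthropic",
        "config": {"model": "claude-3-5-sonnet-20241022"},
    },
    "vector_store": {
        "provider": "chroma",
        "config": {"collection_name": "memories"},
    },
}

# from mem0 import Memory
# memory = Memory.from_config(config)
```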

## Advanced Configuration

### Self-Hosted Deployment

```python
from mem0 import Memory

memory = Memory()

# Configure with custom providers
memory.configure(
    llm={
        "provider": "ollama",
        "model": "llama3.1",
        "api_base": "http://localhost:11434"
    },
    vector_store={
        "provider": "qdrant",
        "host": "localhost",
        "port": 6333
    }
)
```

Source: [docs/open-source/python-quickstart.mdx:100-150]()

### Embedder Configuration

```python
memory.configure(
    embedder={
        "provider": "openai",
        "model": "text-embedding-3-small",
        "dimension": 1536
    }
)
```

## Best Practices

1. **User Identification**: Always provide unique `user_id` for each user to maintain proper memory isolation
2. **Session Management**: Use `session_id` to organize memories within conversation threads
3. **Metadata**: Attach relevant metadata for better searchability and filtering
4. **Error Handling**: Implement proper exception handling for production applications
5. **Configuration**: Store API keys securely using environment variables

## See Also

- [API Reference Documentation](https://github.com/mem0ai/mem0/blob/main/docs/api-reference.mdx)
- [Open Source Quickstart](https://github.com/mem0ai/mem0/blob/main/docs/open-source/python-quickstart.mdx)
- [GitHub Repository](https://github.com/mem0ai/mem0)

---

<a id='page-typescript-sdk'></a>

## TypeScript/Node.js SDK

### Related Pages

Related topics: [Python SDK](#page-python-sdk)

<details>
<summary>Related source files</summary>

The following source files were used to generate this page:

- [mem0-ts/src/client/mem0.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/client/mem0.ts)
- [mem0-ts/src/client/index.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/client/index.ts)
- [mem0-ts/src/client/mem0.types.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/client/mem0.types.ts)
- [mem0-ts/src/client/config.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/client/config.ts)
- [mem0-ts/src/oss/src/memory/index.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/oss/src/memory/index.ts)
- [mem0-ts/src/oss/src/types/index.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/oss/src/types/index.ts)
- [mem0-ts/README.md](https://github.com/mem0ai/mem0/blob/main/mem0-ts/README.md)
- [mem0-ts/package.json](https://github.com/mem0ai/mem0/blob/main/mem0-ts/package.json)
</details>

# TypeScript/Node.js SDK

The mem0 TypeScript/Node.js SDK provides a robust client library for integrating memory management capabilities into JavaScript and TypeScript applications. It enables developers to store, retrieve, search, and manage persistent memory across user interactions and AI agent workflows.

## Overview

The SDK offers two primary deployment modes:

| Mode | Description | Use Case |
|------|-------------|----------|
| **Hosted (mem0ai)** | Cloud-hosted memory service with API key authentication | Production applications requiring managed infrastructure |
| **Open Source (OSS)** | Self-hosted memory implementation running entirely within the application | Privacy-sensitive applications, on-premise deployments, custom infrastructure |

Source: [mem0-ts/README.md](https://github.com/mem0ai/mem0/blob/main/mem0-ts/README.md)

## Architecture

```mermaid
graph TD
    A[Application] --> B[Mem0Client]
    B --> C{Deployment Mode}
    C -->|Hosted| D[mem0ai Cloud API]
    C -->|OSS| E[Local Memory Store]
    D --> F[Vector Database]
    E --> G[SQLite/Vector Store]
    
    H[Mem0Config] --> B
    I[API Key] --> B
```

The SDK architecture separates configuration management, client initialization, and memory operations into distinct modules. The `Mem0Client` class serves as the primary interface, accepting a `Mem0Config` object during instantiation to determine deployment mode and connection parameters.

Source: [mem0-ts/src/client/mem0.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/client/mem0.ts)

## Installation

Install the SDK using npm, yarn, or pnpm:

```bash
npm install mem0ai
# or
yarn add mem0ai
# or
pnpm add mem0ai
```

The package name is `mem0ai` on npm, supporting both CommonJS and ESM module formats.

Source: [mem0-ts/package.json](https://github.com/mem0ai/mem0/blob/main/mem0-ts/package.json)

## Configuration

### Mem0Config Parameters

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `apiKey` | `string` | Conditional | - | API key for hosted mem0ai service. Required when `orgId` or `projectId` is provided |
| `orgId` | `string` | No | - | Organization ID for hosted deployment |
| `projectId` | `string` | No | - | Project ID for hosted deployment |
| `host` | `string` | No | `"https://api.mem0.ai"` | Base URL for hosted API endpoint |

### OSS Configuration

| Parameter | Type | Required | Default | Description |
|-----------|------|----------|---------|-------------|
| `embedder` | `Embedder` | Yes | - | Embedding model configuration for vectorization |
| `vectorStore` | `VectorStore` | Yes | - | Vector storage backend (Chroma, Qdrant, or in-memory) |
| `db` | `Database` | Yes | - | SQLite database for structured data |
| `version` | `string` | No | `"v1.0"` | Memory schema version |

Source: [mem0-ts/src/client/config.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/client/config.ts)

## Client Initialization

### Hosted Mode

```typescript
import { Mem0Client } from "mem0ai";

const client = new Mem0Client({
  apiKey: "your-api-key",
  orgId: "your-org-id",    // optional
  projectId: "your-project-id"  // optional
});
```

### Open Source Mode

```typescript
import { Mem0Client } from "mem0ai";

const client = new Mem0Client({
  embedder: {
    provider: "openai",
    config: {
      api_key: "your-openai-key",
      model: "text-embedding-3-small"
    }
  },
  vectorStore: {
    provider: "chroma",
    config: {
      collection_name: "memory"
    }
  },
  db: {
    provider: "sqlite"
  }
});
```

Source: [mem0-ts/src/client/index.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/client/index.ts)

## Core API Methods

### Memory Operations

| Method | Parameters | Return Type | Description |
|--------|------------|-------------|-------------|
| `add` | `messages`, `userId`, `metadata`, `filters` | `Promise<MemoryResult[]>` | Store new memories |
| `search` | `query`, `userId`, `filters`, `limit` | `Promise<MemoryResult[]>` | Semantic search across memories |
| `getAll` | `userId`, `filters` | `Promise<MemoryResult[]>` | Retrieve all memories for a user |
| `get` | `memoryId` | `Promise<MemoryResult>` | Fetch a specific memory by ID |
| `update` | `memoryId`, `data`, `metadata` | `Promise<MemoryResult>` | Modify existing memory content |
| `delete` | `memoryId` | `Promise<void>` | Remove a memory entry |
| `reset` | `userId` | `Promise<void>` | Delete all memories for a user |

Source: [mem0-ts/src/client/mem0.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/client/mem0.ts)

### MemoryResult Data Model

```typescript
interface MemoryResult {
  id: string;           // Unique memory identifier
  memory: string;       // Memory content text
  event: string;        // Event type (e.g., "memory", "preference", "fact")
  created_at: string;   // ISO timestamp
  updated_at: string;   // ISO timestamp
  metadata?: {          // Optional metadata object
    category?: string;
    source?: string;
    [key: string]: any;
  };
}
```

Source: [mem0-ts/src/client/mem0.types.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/client/mem0.types.ts)

### Add Memories

```typescript
// Add a single memory
const memories = await client.add({
  messages: [
    { role: "user", content: "I prefer dark mode in my IDE" },
    { role: "assistant", content: "I'll remember that you prefer dark mode" }
  ],
  userId: "user-123"
});

// With metadata
const memories = await client.add({
  messages: [
    { role: "user", content: "Book a flight to Tokyo next month" }
  ],
  userId: "user-123",
  metadata: {
    category: "travel",
    priority: "high"
  }
});
```

### Search Memories

```typescript
const results = await client.search({
  query: "What are my IDE preferences?",
  userId: "user-123",
  limit: 5
});

results.forEach(memory => {
  console.log(`${memory.id}: ${memory.memory}`);
  console.log(`Category: ${memory.metadata?.category}`);
});
```

### Get All Memories

```typescript
const allMemories = await client.getAll({
  userId: "user-123"
});
```

### Update Memory

```typescript
await client.update({
  memoryId: "memory-uuid-here",
  data: "Updated memory content here",
  metadata: {
    category: "updated-category"
  }
});
```

### Delete Memory

```typescript
await client.delete({
  memoryId: "memory-uuid-here"
});
```

### Reset User Memories

```typescript
await client.reset({
  userId: "user-123"
});
```

Source: [mem0-ts/README.md](https://github.com/mem0ai/mem0/blob/main/mem0-ts/README.md)

## Open Source Module Structure

The OSS implementation follows a modular architecture with separate concerns for memory management, embedding, and storage.

```mermaid
graph LR
    A[Mem0Client] --> B[Memory Class]
    B --> C[Embedding]
    B --> D[Vector Store]
    B --> E[SQLite DB]
    C --> F[OpenAI Embeddings]
    D --> G[Chroma/Qdrant/In-Memory]
```

### Memory Class

The `Memory` class orchestrates the OSS memory operations, coordinating between the embedding service, vector store, and SQLite database.

| Method | Description |
|--------|-------------|
| `add` | Process and store new memories with embeddings |
| `search` | Perform vector similarity search |
| `get` | Retrieve memories by ID |
| `delete` | Remove memory from all stores |
| `reset` | Clear all user memories |

Source: [mem0-ts/src/oss/src/memory/index.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/oss/src/memory/index.ts)

## Message Format

The SDK uses a standardized message format for conversation history:

```typescript
interface Message {
  role: "system" | "user" | "assistant";
  content: string;
}
```

Messages are processed to extract semantic meaning and stored as discrete memory entries with associated event types.

Source: [mem0-ts/src/oss/src/types/index.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/oss/src/types/index.ts)

## Supported Embedders

| Provider | Model Options | Configuration |
|----------|---------------|---------------|
| OpenAI | `text-embedding-3-small`, `text-embedding-3-large`, `text-embedding-ada-002` | `api_key` |
| Local | Custom embedding models | `model_path` |

Source: [mem0-ts/src/client/config.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/client/config.ts)

## Supported Vector Stores

| Provider | Description | Persistence |
|----------|-------------|-------------|
| Chroma | Open source vector database | Durable |
| Qdrant | High-performance vector search | Durable |
| In-memory | Temporary storage for testing | Volatile |

## Event Types

Memories are categorized by event types for organizational purposes:

| Event Type | Usage |
|------------|-------|
| `memory` | General conversation memories |
| `preference` | User preferences and settings |
| `fact` | Factual information about users |
| `knowledge` | Learned domain knowledge |

Source: [mem0-ts/src/client/mem0.types.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/client/mem0.types.ts)

## Workflow Diagram

```mermaid
sequenceDiagram
    participant App as Application
    participant Client as Mem0Client
    participant API as mem0ai API
    
    App->>Client: new Mem0Client(config)
    Note over Client: Initialize with config
    
    App->>Client: add({messages, userId})
    Client->>API: POST /memories
    API-->>Client: MemoryResult[]
    Client-->>App: Promise<MemoryResult[]>
    
    App->>Client: search({query, userId})
    Client->>API: POST /memories/search
    API-->>Client: MemoryResult[]
    Client-->>App: Promise<MemoryResult[]>
    
    App->>Client: getAll({userId})
    Client->>API: GET /memories
    API-->>Client: MemoryResult[]
    Client-->>App: Promise<MemoryResult[]>
```

## Error Handling

The SDK uses standard JavaScript error handling patterns:

```typescript
try {
  const memories = await client.search({
    query: "test query",
    userId: "user-123"
  });
} catch (error) {
  if (error.status === 401) {
    console.error("Invalid API key");
  } else if (error.status === 404) {
    console.error("Resource not found");
  } else {
    console.error("Request failed:", error.message);
  }
}
```

## Environment Variables

While not required, the SDK supports environment-based configuration:

```bash
export MEM0_API_KEY="your-api-key"
export OPENAI_API_KEY="your-openai-key"
```

## TypeScript Support

The SDK is written in TypeScript and provides full type definitions out of the box. No additional `@types` packages are required.

```typescript
import { Mem0Client, Mem0Config, MemoryResult, Message } from "mem0ai";
```

All exported types are available from the main package entry point.

Source: [mem0-ts/src/client/index.ts](https://github.com/mem0ai/mem0/blob/main/mem0-ts/src/client/index.ts)

## Quick Reference

### Minimal Hosted Example

```typescript
import { Mem0Client } from "mem0ai";

const client = new Mem0Client({ apiKey: "your-key" });
const memories = await client.add({
  messages: [{ role: "user", content: "Hello" }],
  userId: "user-1"
});
```

### Minimal OSS Example

```typescript
import { Mem0Client } from "mem0ai";

const client = new Mem0Client({
  embedder: { provider: "openai", config: { api_key: "key", model: "text-embedding-3-small" } },
  vectorStore: { provider: "chroma", config: { collection_name: "memories" } },
  db: { provider: "sqlite" }
});
```

Source: [mem0-ts/README.md](https://github.com/mem0ai/mem0/blob/main/mem0-ts/README.md)

---

## Doramagic Pitfall Log

Project: mem0ai/mem0

Summary: 6 potential pitfalls found, 0 rated high/blocking; top priority: capability pitfall - capability assessment relies on assumptions.

## 1. Capability pitfall · Capability assessment relies on assumptions

- Severity: medium
- Evidence strength: source_linked
- Finding: README/documentation is current enough for a first validation pass.
- User impact: if the assumption does not hold, users will not get the promised capabilities.
- Suggested check: convert the assumption into a downstream verification checklist.
- Guard action: assumptions must be turned into verification items; do not state them as fact before verification results exist.
- Evidence: capability.assumptions | github_repo:656099147 | https://github.com/mem0ai/mem0 | README/documentation is current enough for a first validation pass.

## 2. Maintenance pitfall · Maintenance activity unknown

- Severity: medium
- Evidence strength: source_linked
- Finding: last_activity_observed not recorded.
- User impact: new, abandoned, and active projects get mixed together, lowering trust in recommendations.
- Suggested check: add signals for recent GitHub commits, releases, and issue/PR responsiveness.
- Guard action: while maintenance activity is unknown, the recommendation must not be marked high-trust.
- Evidence: evidence.maintainer_signals | github_repo:656099147 | https://github.com/mem0ai/mem0 | last_activity_observed missing

## 3. Security/permissions pitfall · Downstream validation found a risk item

- Severity: medium
- Evidence strength: source_linked
- Finding: no_demo
- User impact: downstream review has already been requested; the risk must not be downplayed on this page.
- Suggested check: enter the security/permissions governance review queue.
- Guard action: while a downstream risk exists, the review/recommendation downgrade must stay in place.
- Evidence: downstream_validation.risk_items | github_repo:656099147 | https://github.com/mem0ai/mem0 | no_demo; severity=medium

## 4. Security/permissions pitfall · Scoring risk present

- Severity: medium
- Evidence strength: source_linked
- Finding: no_demo
- User impact: the risk affects whether the project is suitable for ordinary users to install.
- Suggested check: write the risk into the boundary card and confirm whether manual review is needed.
- Guard action: scoring risks must go into the boundary card, not remain an internal score only.
- Evidence: risks.scoring_risks | github_repo:656099147 | https://github.com/mem0ai/mem0 | no_demo; severity=medium

## 5. Maintenance pitfall · Issue/PR response quality unknown

- Severity: low
- Evidence strength: source_linked
- Finding: issue_or_pr_quality=unknown.
- User impact: users cannot tell whether anyone will maintain the project when problems arise.
- Suggested check: sample recent issues/PRs to see whether they go unanswered long-term.
- Guard action: while issue/PR responsiveness is unknown, the maintenance risk must be flagged.
- Evidence: evidence.maintainer_signals | github_repo:656099147 | https://github.com/mem0ai/mem0 | issue_or_pr_quality=unknown

## 6. Maintenance pitfall · Release cadence unclear

- Severity: low
- Evidence strength: source_linked
- Finding: release_recency=unknown.
- User impact: install commands and docs may lag behind the code, raising the chance users hit problems.
- Suggested check: confirm that the latest release/tag matches the install commands in the README.
- Guard action: while release cadence is unknown or stale, install instructions must note possible drift.
- Evidence: evidence.maintainer_signals | github_repo:656099147 | https://github.com/mem0ai/mem0 | release_recency=unknown

<!-- canonical_name: mem0ai/mem0; human_manual_source: deepwiki_human_wiki -->
