Mem0 is a Python long-term memory framework (github.com/mem0ai/mem0) providing a personalization memory layer for LLM applications and agents. The self-hosted Memory class runs an in-process V3 phased extraction-and-storage pipeline (Phase 0 context-gather through Phase 8 message-persist), backed by pluggable vector store, embedding, LLM, and reranker providers. Hybrid retrieval combines semantic similarity, optional BM25 / backend-native FTS keyword search, and entity-boost scoring.
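The hybrid retrieval described above can be sketched as a score fusion over the three signals. This is an illustrative sketch only: the `fuse()` helper, the weights, and the additive entity boost are assumptions for exposition, not Mem0's actual scoring internals.

```python
# Illustrative fusion of semantic similarity, optional keyword (BM25/FTS)
# score, and entity-boost scoring into a single ranking score.
# Weights and formula are hypothetical, not Mem0's implementation.

def fuse(semantic: float, keyword: float = 0.0,
         entity_boost: float = 0.0,
         w_sem: float = 0.7, w_kw: float = 0.3) -> float:
    """Weighted blend of semantic and keyword scores, plus an additive entity boost."""
    return w_sem * semantic + w_kw * keyword + entity_boost

hits = [
    {"id": "a", "semantic": 0.82, "keyword": 0.10, "entity_boost": 0.05},
    {"id": "b", "semantic": 0.74, "keyword": 0.55, "entity_boost": 0.00},
]
ranked = sorted(
    hits,
    key=lambda h: fuse(h["semantic"], h["keyword"], h["entity_boost"]),
    reverse=True,
)
print([h["id"] for h in ranked])
```

With these example weights, a strong keyword match can outrank a somewhat higher pure-semantic hit, which is the point of layering BM25/FTS on top of vector similarity.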
A separate hosted SaaS path (MemoryClient / api.mem0.ai) shares the public API but defers extraction to the platform. OSS v2.0.0 ships 18 LLMs, 24 vector stores, 11 embeddings, and 5 rerankers.
This skill embeds 52 constraints covering typical pitfalls: silently dropped graph_store config in OSS, PostHog telemetry on by default, Memory.chat() raising NotImplementedError, and the hosted-vs-self-hosted timing difference after add(). The host AI applies these constraints automatically after installation.
WHEN: Configuring MemoryConfig for self-hosted Memory in OSS v2.0.0 following AGENTS.md/LLM.md graph examples.
ACTION: Do not include any graph_store / graph_db / graph kwarg in MemoryConfig; treat graph memory as hosted-platform-only, or use an out-of-tree integration (UC-009 strands_agent / UC-017 examples/graph-db-demo notebooks). Surface the gap explicitly in your skill or wrapper so users see a hard error rather than a silent no-op.
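The "hard error rather than silent no-op" guard can be sketched as a pre-flight check on the config dict before it reaches Memory. The forbidden key names come from the constraint above; the `GraphStoreNotSupported` class and `validate_oss_config` helper are our own illustration, not part of Mem0.

```python
# Reject graph-related keys that self-hosted OSS would otherwise
# silently drop, so users get a loud error instead of a no-op.
FORBIDDEN_GRAPH_KEYS = ("graph_store", "graph_db", "graph")

class GraphStoreNotSupported(ValueError):
    """Raised when a graph config is passed to self-hosted OSS Memory."""

def validate_oss_config(config: dict) -> dict:
    """Fail loudly if the config contains keys OSS v2.0.0 ignores."""
    bad = [k for k in FORBIDDEN_GRAPH_KEYS if k in config]
    if bad:
        raise GraphStoreNotSupported(
            f"graph memory is hosted-platform-only; remove {bad} from MemoryConfig"
        )
    return config

# A vector-store-only config passes through unchanged:
ok = validate_oss_config({"vector_store": {"provider": "qdrant"}})
```

A wrapper would call `validate_oss_config()` on the dict before handing it to the Memory constructor, turning the silent drop into an immediate, actionable exception.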
v0.1.0: Initial release on Doramagic.ai. Long-term memory layer on mem0ai/mem0 v2.0.0 with bilingual metadata, 52 anti-pattern constraints, and 3 FAQs.
v0.1.0 · 2026-04-25 · Contributors: tangweigang-jpg