Canonical markdown: architecture/OPENCLAW_MEMORY_ARCHITECTURE.md. This rendered HTML page is supplemental reference only.
OpenClaw Memory Architecture
Last Updated: 2026-03-12
Current Memory Model
- The bridge/session path determines the effective memory scope for a conversation.
- OpenClaw memory-search is a separate configuration lane from OpenClaw chat generation.
- The current memory-search provider is `gemini` with model `gemini-embedding-001`.
- Memory-search fallback is treated separately from the chat fallback chain.
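The lane separation above can be sketched as a minimal config lookup. This is illustrative only: the key names, the chat-lane values, and the fallback entries are hypothetical, not OpenClaw's actual schema; only the memory-search provider and model come from this document.

```python
# Hypothetical sketch of the two independent configuration lanes.
# Only "gemini" / "gemini-embedding-001" are taken from the doc;
# all other names and values are illustrative placeholders.
LANES = {
    "chat": {
        "provider": "example-chat-provider",        # hypothetical
        "fallbacks": ["fallback-model-a"],          # chat fallback chain (hypothetical)
    },
    "memory_search": {
        "provider": "gemini",
        "model": "gemini-embedding-001",
        "fallbacks": [],                            # tracked separately from chat
    },
}

def resolve_lane(lane: str) -> dict:
    """Look up one lane's config without touching the other lane."""
    return LANES[lane]
```

The point of the structure is that editing `LANES["memory_search"]` never alters chat generation, which is the separation the bullets above describe.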
Document RAG Embedding Is A Separate Pipeline
- The document ingestion pipeline (PDF, DOCX, PPTX, JSON, HTML, Google Docs/Drive) uses its own experimental embedding lane for semantic indexing into pgvector. The specific model and dimensionality are determined by a bounded evaluation spike and are not yet canonical default policy.
- This is distinct from the OpenClaw memory-search lane above, which uses `gemini-embedding-001` (768 dimensions) for conversational memory retrieval.
- The two pipelines share the pgvector storage layer but use separate vector stores.
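A minimal sketch of the shared-storage, separate-store split: both pipelines land in pgvector, but each writes to its own table. The table names are hypothetical, and the document-lane dimensionality is deliberately left open because the doc says it is still determined by the evaluation spike; only the 768-dimension memory-search value comes from this document.

```python
# Hypothetical vector-store registry. Table names are illustrative;
# the document-RAG dimensionality is unset pending the evaluation spike.
VECTOR_STORES = {
    "memory_search": {"table": "memory_vectors", "dims": 768},    # gemini-embedding-001
    "document_rag":  {"table": "document_vectors", "dims": None}, # TBD by eval spike
}

def create_table_sql(pipeline: str, default_dims: int = 1536) -> str:
    """Emit pgvector DDL for one pipeline's store.

    default_dims is a placeholder used only while the document-RAG
    dimensionality remains unresolved.
    """
    store = VECTOR_STORES[pipeline]
    dims = store["dims"] if store["dims"] is not None else default_dims
    return (
        f"CREATE TABLE IF NOT EXISTS {store['table']} "
        f"(id BIGSERIAL PRIMARY KEY, embedding vector({dims}));"
    )
```

Keeping the stores in separate tables means re-indexing documents under a new embedding model never invalidates conversational memory vectors, and vice versa.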
Why This Matters
- It prevents confusion between chat generation models, memory-search models, and document RAG models.
- It keeps scope and namespace behavior explicit.
- It explains why memory configuration changes are not the same as chat-model or document-embedding changes.