Agent Memory Systems: Context That Persists
Without memory, agents forget everything after each conversation. Learn the three types of memory that make agents truly useful.
The Memory Problem
LLMs are stateless. Every call starts from a blank slate: tell the model something in one conversation and it has no recollection of it in the next.
Memory systems solve this by storing and retrieving context.
Three Types of Memory
1. Short-Term Memory (Conversation)
Remembers the current conversation:
- Previous messages in the chat
- What was just discussed
- Temporary context for this session
Implementation: Message history passed to each LLM call.
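A minimal sketch of this idea: a rolling message history that is appended to every LLM call. `ConversationMemory` is a hypothetical helper, and the actual model call is stubbed out in a comment.

```python
class ConversationMemory:
    """Short-term memory: the recent messages of the current session."""

    def __init__(self, max_messages=20):
        self.messages = []
        self.max_messages = max_messages

    def add(self, role, content):
        self.messages.append({"role": role, "content": content})
        # Keep only the most recent messages to stay within context limits
        self.messages = self.messages[-self.max_messages:]

    def as_context(self):
        return list(self.messages)

memory = ConversationMemory(max_messages=4)
memory.add("user", "My name is Ada.")
memory.add("assistant", "Nice to meet you, Ada!")
memory.add("user", "What's my name?")
# The full history would be passed to the LLM here, e.g.:
# reply = call_llm(memory.as_context())
print(len(memory.as_context()))  # 3
```

The truncation in `add` is the simplest pruning policy; real agents often summarize older turns instead of dropping them.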
2. Long-Term Memory (User)
Remembers across conversations:
- User preferences and settings
- Past interactions and decisions
- Important facts about the user
Implementation: Database or vector store keyed by user ID.
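For example, a small SQLite table keyed by user ID is enough to persist facts across sessions. The schema and helper names here are illustrative, not a prescribed design.

```python
import sqlite3

# Long-term memory: facts stored per user, surviving across conversations.
conn = sqlite3.connect(":memory:")  # use a file path for real persistence
conn.execute("CREATE TABLE memories (user_id TEXT, fact TEXT)")

def remember(user_id, fact):
    conn.execute("INSERT INTO memories VALUES (?, ?)", (user_id, fact))

def recall(user_id):
    rows = conn.execute(
        "SELECT fact FROM memories WHERE user_id = ?", (user_id,)
    ).fetchall()
    return [r[0] for r in rows]

remember("ada", "Prefers concise answers")
remember("ada", "Works in Python")
remember("bob", "Prefers detailed answers")
print(recall("ada"))  # ['Prefers concise answers', 'Works in Python']
```

Keying every query by `user_id` is also what enforces the namespacing practice described later: one user's memories never leak into another's prompt.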
3. Knowledge Memory (RAG)
Remembers information from documents:
- Company policies and procedures
- Product documentation
- Historical data and records
Implementation: Vector database with semantic search (RAG).
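To show the retrieval step without depending on a particular vector database, here is a toy version where bag-of-words cosine similarity stands in for embedding search. A real RAG setup would replace `vectorize` with an embedding model and `retrieve` with a vector-DB query.

```python
import math
from collections import Counter

# A tiny knowledge base of documents.
docs = [
    "Refunds are processed within 14 days of purchase.",
    "The API rate limit is 100 requests per minute.",
    "Employees accrue 20 vacation days per year.",
]

def vectorize(text):
    # Stand-in for an embedding model: word counts.
    return Counter(text.lower().split())

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in set(a) & set(b))
    norm = (math.sqrt(sum(v * v for v in a.values()))
            * math.sqrt(sum(v * v for v in b.values())))
    return dot / norm if norm else 0.0

def retrieve(query, k=1):
    # Rank documents by similarity to the query and return the top k.
    q = vectorize(query)
    ranked = sorted(docs, key=lambda d: cosine(q, vectorize(d)), reverse=True)
    return ranked[:k]

print(retrieve("how long do refunds take"))
```

The retrieved snippets are then inserted into the prompt alongside conversation history, which is the "R" in RAG.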
How Memory Flows
- User sends message
- Agent retrieves relevant long-term memory
- Agent retrieves relevant knowledge
- All context combined with short-term history
- LLM generates response
- Important new info saved to memory
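The steps above can be sketched end to end. Everything here is a hypothetical stand-in: the long-term store is a dict, knowledge retrieval is a naive keyword match, and the LLM call is stubbed with a canned reply.

```python
def handle_message(user_id, message, short_term, long_term, knowledge):
    # 2. Retrieve relevant long-term memory for this user
    profile = long_term.get(user_id, [])
    # 3. Retrieve relevant knowledge (keyword overlap stands in for RAG)
    words = set(message.lower().split())
    facts = [d for d in knowledge if words & set(d.lower().split())]
    # 4. Combine everything with the short-term history
    context = {
        "system": "User facts: " + "; ".join(profile)
                  + ". Docs: " + "; ".join(facts),
        "messages": short_term + [{"role": "user", "content": message}],
    }
    # 5. Generate a response (stubbed; a real agent calls the LLM here)
    reply = f"(reply using {len(facts)} doc(s))"
    # 6. Save the new exchange back to short-term memory
    short_term += [{"role": "user", "content": message},
                   {"role": "assistant", "content": reply}]
    return context, reply

long_term = {"ada": ["prefers concise answers"]}
knowledge = ["Refunds take 14 days."]
history = []
ctx, reply = handle_message("ada", "how do refunds work", history, long_term, knowledge)
print(reply)  # (reply using 1 doc(s))
```

Step 6 is shown only for short-term memory; deciding which facts also deserve promotion to long-term storage is usually a separate summarization step.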
Memory Storage Options
- JSON files: simple, good for small agents
- SQL database: structured, queryable
- Vector DB: semantic search for knowledge
- Hybrid: combine approaches for best results
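The JSON-file option really is this small. The file name and structure below are arbitrary choices for illustration.

```python
import json
import os
import tempfile

# One JSON file holds the agent's long-term memories.
path = os.path.join(tempfile.mkdtemp(), "memory.json")

def save_memory(data):
    with open(path, "w") as f:
        json.dump(data, f, indent=2)

def load_memory():
    if not os.path.exists(path):
        return {}  # no memories yet
    with open(path) as f:
        return json.load(f)

save_memory({"ada": ["prefers concise answers"]})
print(load_memory())
```

This works well until you need concurrent writes or queries across many users, at which point a SQL or vector store earns its keep.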
Memory Best Practices
- Summarize: don't store raw conversation, store insights
- Prune: remove outdated or irrelevant memories
- Namespace: keep each user's memories separate
- Encrypt: sensitive data needs protection
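As one concrete example of the pruning practice, memories can carry a timestamp and be dropped once they fall outside a retention window. The 30-day window and record shape are assumptions for the sketch.

```python
from datetime import datetime, timedelta

now = datetime(2025, 1, 30)  # fixed "current time" for a deterministic example
memories = [
    {"fact": "Prefers dark mode", "stored": datetime(2025, 1, 29)},
    {"fact": "Asked about pricing", "stored": datetime(2024, 11, 1)},
]

def prune(memories, retention_days=30):
    # Keep only memories stored within the retention window.
    cutoff = now - timedelta(days=retention_days)
    return [m for m in memories if m["stored"] >= cutoff]

print([m["fact"] for m in prune(memories)])  # ['Prefers dark mode']
```

Age is the crudest pruning signal; relevance or access frequency can work better, but the mechanism is the same.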
The OpenClaw Approach
OpenClaw uses memory files:
- MEMORY.md: long-term curated memories
- memory/YYYY-MM-DD.md: daily logs
- memory/heartbeat-state.json: state tracking