Memory Configuration
Complete guide to configuring memory resources for persistent agent context management, conversation continuity, and intelligent data retention
Core Memory Capabilities
Conversation Continuity
Maintain conversation history across sessions with automatic token management and context preservation
Multi-Agent Context Sharing
Enable seamless context sharing between agents with hierarchical keys and access controls
Intelligent Data Management
Automated summarization, priority-based eviction, and token-aware capacity management
Privacy & Security
Built-in data redaction, compliance controls, encryption, and audit logging
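Context sharing works by pointing two agents at the same memory resource with the same key. A minimal sketch, assuming the agent schema shown in the Quick Start below; the hierarchical key follows the template-variable reference, and the "read-only" mode on the consumer is an assumption, so check the modes your schema actually supports:

```yaml
# Producer agent: writes conversation turns into the shared memory
resource: agent
id: support_agent
memory:
  - id: conversation_memory
    key: "project:{{.project_id}}:user:{{.user_id}}"
    mode: read-write

# Consumer agent: same resource and key, so it sees the same context
resource: agent
id: escalation_agent
memory:
  - id: conversation_memory
    key: "project:{{.project_id}}:user:{{.user_id}}"
    mode: read-only   # assumed mode; verify against your agent schema
```

Because both keys render to the same value for a given project and user, the two agents read and write one shared context window.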
Quick Start Configuration
1. Create Memory Resource
Define your memory configuration
resource: memory
id: conversation_memory
description: Stores conversation history for agents
version: 1.0.0
type: token_based
max_tokens: 2000
max_context_ratio: 0.8
persistence:
  type: redis
  ttl: 24h
privacy_policy:
  redact_patterns:
    - '\b\d{3}-\d{2}-\d{4}\b' # SSN pattern
  default_redaction_string: "[REDACTED]"
2. Configure Agent Memory
Connect agents to memory resources
resource: agent
id: chat_agent
config:
  model: gpt-4
  temperature: 0.7
memory:
  - id: conversation_memory
    key: "user:{{.workflow.input.user_id}}"
    mode: read-write
Related Documentation
Memory Operations
Use configured memory in workflows and agents with append, flush, and clear operations
Integration Patterns
Multi-agent memory sharing, context isolation, and collaborative workflows
Privacy & Security
Data redaction, compliance, encryption, and audit logging
Basic Memory Resource Setup
Foundation Configuration
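A minimal foundation sketch, built only from the fields documented in the Core Configuration reference later on this page; the id and description here are illustrative placeholders:

```yaml
resource: memory
id: support_memory              # illustrative id
description: "Support chat context"
version: 1.0.0
key: "user:{{.workflow.input.user_id}}"
type: token_based
max_tokens: 4000
```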
Persistence Configuration
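A persistence sketch using the Redis backend described under Persistence and TTL below; the circuit-breaker values are illustrative starting points, not prescriptions:

```yaml
persistence:
  type: redis
  ttl: 24h
  circuit_breaker:              # optional resilience settings
    enabled: true
    timeout: "100ms"
    max_failures: 5
```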
Memory Management Configuration
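A management sketch combining the eviction and flushing options documented under Eviction and Flushing below; the keyword list and thresholds are examples to tune for your workload:

```yaml
eviction_policy:
  type: priority                   # keep messages matching keywords longest
  priority_keywords: ["error", "critical"]
flushing_strategy:
  type: hybrid_summary             # summarize old messages instead of dropping them
  summarize_threshold: 0.8         # trigger at 80% of max_tokens
  summary_tokens: 500              # token budget for the summary
```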
Privacy and Security Configuration
Data Redaction
Configure sensitive data protection:
privacy_policy:
  redact_patterns:
    # Personal identifiers
    - '\b\d{3}-\d{2}-\d{4}\b' # SSN
    - '\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b' # Email
    - '\b(?:\+?1[-.\s]?)?\(?[2-9]\d{2}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b' # Phone
    # Financial information
    - '\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{3,6}\b' # Credit cards
  redaction_config:
    default_redaction_string: "[REDACTED]"
    preserve_format: true
    case_sensitive: false
  non_persistable_message_types:
    - system
    - tool_internal
    - debug
Configuration Templates
resource: memory
id: dev_memory
description: Development memory configuration
version: 1.0.0
key: "user:{{.workflow.input.user_id}}"
type: token_based
max_tokens: 2000
persistence:
  type: redis
  ttl: 8h
- Use Case: Local development and testing
- TTL: Short-term (8 hours)
- Security: Basic (no sensitive data handling)
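For contrast with the development template, a hypothetical production-oriented variant that raises the token budget, extends the TTL, and enables redaction; every value here is an example to adjust for your deployment:

```yaml
resource: memory
id: prod_memory                 # illustrative id
description: Production memory configuration
version: 1.0.0
key: "user:{{.workflow.input.user_id}}"
type: token_based
max_tokens: 4000
max_context_ratio: 0.8
persistence:
  type: redis
  ttl: 168h                     # 7 days
privacy_policy:
  redact_patterns:
    - '\b\d{3}-\d{2}-\d{4}\b' # SSN
  default_redaction_string: "[REDACTED]"
```

- Use Case: Production deployments
- TTL: Long-term (168 hours / 7 days)
- Security: Redaction enabled for sensitive data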
Configuration Reference
Core Configuration
Resource identification and basic settings:
resource: memory # Resource type (always "memory")
id: unique_memory_id # Unique identifier
description: "Purpose and context"
version: "1.0.0" # Configuration version
# Memory key template
key: "user:{{.workflow.input.user_id}}"
# or hierarchical: "project:{{.project_id}}:user:{{.user_id}}"
# Memory type
type: token_based # or message_count_based, buffer
max_tokens: 4000
max_context_ratio: 0.8
model: gpt-4
Available Template Variables:
- {{.workflow.input.*}} - workflow input parameters
- {{.session_id}}, {{.project_id}}, {{.agent_id}}, {{.user_id}}
Eviction and Flushing
Control memory management strategies:
# Eviction policies
eviction_policy:
  type: fifo # or lru, priority

# For priority eviction
eviction_policy:
  type: priority
  priority_keywords: ["error", "critical", "important"]

# Flushing strategies
flushing_strategy:
  type: hybrid_summary # or simple_fifo, lru, token_aware_lru
  summarize_threshold: 0.8
  summary_tokens: 500
Persistence and TTL
Configure data storage and expiration:
# Redis persistence (recommended)
persistence:
  type: redis
  ttl: 24h
  circuit_breaker:
    enabled: true
    timeout: "100ms"
    max_failures: 5

# TTL fine-tuning
append_ttl: "30m" # Extend on append
clear_ttl: "5m" # Short cleanup
flush_ttl: "1h" # Preserve after flush
TTL Formats: 30s, 5m, 2h, 24h, 168h
Token Providers
Configure accurate token counting:
# OpenAI provider
token_provider:
  provider: openai
  model: gpt-4
  api_key_env: OPENAI_API_KEY
  fallback: tiktoken

# Anthropic provider
token_provider:
  provider: anthropic
  model: claude-3-sonnet
  api_key_env: ANTHROPIC_API_KEY

# Local fallback
token_provider:
  provider: tiktoken
  model: gpt-4
Privacy and Security Reference
Protect sensitive information:
privacy_policy:
  redact_patterns:
    - '\b\d{3}-\d{2}-\d{4}\b' # SSN
    - '\b[A-Za-z0-9._%+-]+@[A-Za-z0-9.-]+\.[A-Za-z]{2,}\b' # Email
    - '\b\d{4}[\s-]?\d{4}[\s-]?\d{4}[\s-]?\d{3,6}\b' # Credit cards
  non_persistable_message_types:
    - system
    - tool_internal
    - debug
  default_redaction_string: "[REDACTED]"
Environment Variables
Use environment variables for sensitive data:
# Direct reference
api_key: "{{ .env.OPENAI_API_KEY }}"
# With fallback
api_key: "{{ .env.OPENAI_API_KEY | default \"fallback-key\" }}"
# Conditional
api_key: "{{ if .env.USE_OPENAI }}{{ .env.OPENAI_API_KEY }}{{ else }}{{ .env.FALLBACK_KEY }}{{ end }}"