This guide shows you how to create custom vector memory implementations by extending BaseVectorMemory and implementing the IVectorMemory interface. Custom vector memories allow you to integrate with any vector database or implement specialized semantic search behaviors.
🏗️ Custom Vector Memory Architecture
🎯 When to Build Custom Vector Memory
Consider building a custom vector memory when:
Integrating New Vector Databases: Your organization uses a vector database not natively supported (e.g., Elasticsearch, MongoDB Atlas Vector Search, Redis Vector)
Custom Embedding Logic: You need specialized embedding generation (e.g., custom models, pre-processing, caching)
Specialized Search: You require advanced filtering, hybrid search, or custom ranking algorithms
Performance Optimization: You need specific optimizations for your use case (e.g., approximate nearest neighbor tuning)
Multi-Collection Management: You need to search across multiple collections with custom merging logic
Access Control: You require row-level security or tenant isolation in vector search
📚 Understanding BaseVectorMemory
The BaseVectorMemory class provides most of the functionality you need:
What BaseVectorMemory Provides
Automatic handling of:
- Message storage and retrieval
- Embedding generation via configured provider
- Basic configuration management
- Message counting and clearing
- Export/import functionality (partial)
- System message handling
What You Need to Implement
When extending BaseVectorMemory, you must implement these key methods:
```
/**
 * Store a message with its vector representation
 */
function add( required any message )

/**
 * Retrieve semantically relevant messages
 */
function getRelevant( required string query, numeric limit = 5 )

/**
 * Get all stored messages (for non-vector operations)
 */
function getAll()

/**
 * Remove all messages from vector storage
 */
function clear()

/**
 * Count total messages in vector storage
 */
function count()
```
Key Properties in BaseVectorMemory
```
variables.collection        // Collection/index name
variables.embeddingProvider // AI provider for embeddings (openai, etc.)
variables.embeddingModel    // Model to use (text-embedding-3-small, etc.)
variables.dimensions        // Vector dimensions (1536, 768, etc.)
variables.metric            // Distance metric (cosine, euclidean, dot)
variables.key               // Conversation/session identifier
```
🔌 IVectorMemory Interface
The complete interface you must implement:
```
interface {

	/**
	 * Configure the vector memory
	 */
	function configure( required struct config );

	/**
	 * Set the conversation key
	 */
	function key( required string key );

	/**
	 * Add a message to vector storage
	 */
	function add( required any message );

	/**
	 * Get semantically relevant messages
	 */
	function getRelevant( required string query, numeric limit = 5 );

	/**
	 * Get all stored messages
	 */
	function getAll();

	/**
	 * Clear all messages
	 */
	function clear();

	/**
	 * Count stored messages
	 */
	function count();

	/**
	 * Set system message
	 */
	function setSystemMessage( required string message );

	/**
	 * Get system message
	 */
	function getSystemMessage();

	/**
	 * Export memory state
	 */
	function export();

	/**
	 * Import memory state
	 */
	function import( required struct data );
}
```
Example 1: ElasticsearchVectorMemory
A complete implementation using Elasticsearch with vector similarity search:
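The condensed sketch below shows the general shape rather than a drop-in class. It assumes the extends path resolves to the shipped BaseVectorMemory, that the base class exposes an embedding helper (called generateEmbedding() here), and that you hand in a pre-built Elasticsearch client via config.client whose index/search/deleteByQuery/count methods wrap the corresponding REST endpoints. Rename these to match your actual setup.

```
component extends="BaseVectorMemory" {

	function configure( required struct config ) {
		// Let the base class handle collection, embedding provider/model, dimensions, metric
		super.configure( arguments.config );
		// Assumption: a pre-built Elasticsearch client is handed in by the caller
		variables.client = arguments.config.client;
		return this;
	}

	function add( required any message ) {
		var text = isStruct( arguments.message ) ? arguments.message.content : arguments.message;
		variables.client.index(
			index = variables.collection,
			body  = {
				"key"       : variables.key,
				"role"      : isStruct( arguments.message ) ? arguments.message.role : "user",
				"content"   : text,
				"embedding" : generateEmbedding( text ), // assumed base-class embedding helper
				"createdAt" : now()
			}
		);
		return this;
	}

	function getRelevant( required string query, numeric limit = 5 ) {
		var response = variables.client.search(
			index = variables.collection,
			body  = {
				"knn" : {
					"field"          : "embedding",
					"query_vector"   : generateEmbedding( arguments.query ),
					"k"              : arguments.limit,
					"num_candidates" : arguments.limit * 10,
					"filter"         : { "term" : { "key" : variables.key } }
				}
			}
		);
		// Assuming the client returns the deserialized search response body
		return response.hits.hits.map( ( hit ) => {
			return { "role" : hit._source.role, "content" : hit._source.content };
		} );
	}

	function getAll() {
		var response = variables.client.search(
			index = variables.collection,
			body  = { "size" : 1000, "query" : { "term" : { "key" : variables.key } } }
		);
		return response.hits.hits.map( ( hit ) => hit._source );
	}

	function clear() {
		variables.client.deleteByQuery(
			index = variables.collection,
			body  = { "query" : { "term" : { "key" : variables.key } } }
		);
		return this;
	}

	function count() {
		var response = variables.client.count(
			index = variables.collection,
			body  = { "query" : { "term" : { "key" : variables.key } } }
		);
		return response.count;
	}
}
```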
Usage Example
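The exact wiring depends on how you construct the Elasticsearch client, so treat the following as illustrative. The configuration keys mirror the BaseVectorMemory properties listed earlier; client is specific to this sketch.

```
memory = new ElasticsearchVectorMemory();
memory.configure( {
	collection        : "support-conversations",
	embeddingProvider : "openai",
	embeddingModel    : "text-embedding-3-small",
	dimensions        : 1536,
	metric            : "cosine",
	client            : esClient // built elsewhere in your application
} );
memory.key( "session-123" );

memory.add( { role : "user", content : "My invoice total looks wrong this month" } );
relevant = memory.getRelevant( "billing question", 3 );
```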
Example 2: RedisVectorMemory
An implementation using Redis with RediSearch vector similarity:
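Again a sketch rather than a full class. It assumes a RediSearch-enabled client passed in as config.client (hset and ftSearch are placeholder method names), a packFloats() helper for float32 binary encoding, and the same generateEmbedding() base-class helper as above. Only the store-specific methods are shown; the rest mirror the Elasticsearch example.

```
component extends="BaseVectorMemory" {

	function configure( required struct config ) {
		super.configure( arguments.config );
		variables.redis  = arguments.config.client; // placeholder RediSearch client
		variables.prefix = "memory:" & variables.collection & ":";
		return this;
	}

	function add( required any message ) {
		var text = isStruct( arguments.message ) ? arguments.message.content : arguments.message;
		// Store each message as a hash; the RediSearch index covers the vector field
		variables.redis.hset( variables.prefix & createUUID(), {
			"key"       : variables.key,
			"role"      : isStruct( arguments.message ) ? arguments.message.role : "user",
			"content"   : text,
			"embedding" : packFloats( generateEmbedding( text ) ) // placeholder float32 packer
		} );
		return this;
	}

	function getRelevant( required string query, numeric limit = 5 ) {
		// RediSearch hybrid query: tag filter on the session key, then KNN on the vector
		var results = variables.redis.ftSearch(
			index  = "idx:" & variables.collection,
			query  = "(@key:{#variables.key#})=>[KNN #arguments.limit# @embedding $vec AS score]",
			params = { "vec" : packFloats( generateEmbedding( arguments.query ) ) }
		);
		return results.documents.map( ( doc ) => {
			return { "role" : doc.role, "content" : doc.content };
		} );
	}

	// getAll(), clear(), and count() follow the same pattern as the Elasticsearch example
}
```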
Example 3: CachedVectorMemory
A wrapper that adds a caching layer to any vector memory:
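The sketch below shows one way to do this using only the IVectorMemory methods documented above. The cache here is a plain struct; you could swap it for CacheBox or any other caching engine.

```
component implements="IVectorMemory" {

	function init( required any inner, numeric ttlSeconds = 300 ) {
		variables.inner = arguments.inner; // any existing IVectorMemory implementation
		variables.ttl   = arguments.ttlSeconds;
		variables.cache = {};              // simple in-process cache
		return this;
	}

	function getRelevant( required string query, numeric limit = 5 ) {
		var cacheKey = hash( arguments.query & "|" & arguments.limit );
		if (
			variables.cache.keyExists( cacheKey )
			&& dateDiff( "s", variables.cache[ cacheKey ].storedAt, now() ) < variables.ttl
		) {
			return variables.cache[ cacheKey ].results;
		}
		var results = variables.inner.getRelevant( arguments.query, arguments.limit );
		variables.cache[ cacheKey ] = { results : results, storedAt : now() };
		return results;
	}

	function add( required any message ) {
		variables.cache = {}; // new content invalidates cached search results
		variables.inner.add( arguments.message );
		return this;
	}

	// Everything else simply delegates to the wrapped memory
	function configure( required struct config ) { variables.inner.configure( arguments.config ); return this; }
	function key( required string key ) { variables.inner.key( arguments.key ); return this; }
	function getAll() { return variables.inner.getAll(); }
	function clear() { variables.cache = {}; variables.inner.clear(); return this; }
	function count() { return variables.inner.count(); }
	function setSystemMessage( required string message ) { variables.inner.setSystemMessage( arguments.message ); return this; }
	function getSystemMessage() { return variables.inner.getSystemMessage(); }
	function export() { return variables.inner.export(); }
	function import( required struct data ) { variables.inner.import( arguments.data ); return this; }
}
```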
Usage Example
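Putting the decorator in front of the Elasticsearch sketch from Example 1 (names are illustrative):

```
esMemory = new ElasticsearchVectorMemory();
esMemory.configure( { collection : "docs", embeddingProvider : "openai", client : esClient } );

cached = new CachedVectorMemory( inner = esMemory, ttlSeconds = 120 );
cached.key( "session-42" );

// The first call hits Elasticsearch; an identical call within 120 seconds is served from the cache
cached.getRelevant( "what is the refund policy?", 5 );
cached.getRelevant( "what is the refund policy?", 5 );
```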
Example 4: MultiCollectionVectorMemory
Search across multiple collections with custom ranking:
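One possible shape, again as a sketch: getRelevant() fans out to every configured memory, merges the hits, and ranks them. It assumes each result carries a numeric score key; adjust the ranking to whatever your results actually contain, and declare implements="IVectorMemory" once every interface method is filled in.

```
component {

	function init( required array memories ) {
		// Each entry is any IVectorMemory implementation bound to its own collection
		variables.memories = arguments.memories;
		return this;
	}

	function getRelevant( required string query, numeric limit = 5 ) {
		var merged = [];
		for ( var memory in variables.memories ) {
			merged.append( memory.getRelevant( arguments.query, arguments.limit ), true );
		}
		// Custom ranking: keep the highest-scoring results across all collections
		merged.sort( function( a, b ) {
			return sgn( ( b.score ?: 0 ) - ( a.score ?: 0 ) );
		} );
		return merged.len() <= arguments.limit ? merged : merged.slice( 1, arguments.limit );
	}

	function add( required any message ) {
		// Writes go to the first (primary) collection in this sketch
		variables.memories[ 1 ].add( arguments.message );
		return this;
	}

	function key( required string key ) {
		var theKey = arguments.key;
		variables.memories.each( function( memory ) {
			memory.key( theKey );
		} );
		return this;
	}

	function count() {
		return variables.memories.reduce( ( total, memory ) => total + memory.count(), 0 );
	}

	function clear() {
		variables.memories.each( function( memory ) {
			memory.clear();
		} );
		return this;
	}

	// configure(), getAll(), system message handling, and export()/import()
	// delegate or aggregate across the individual memories in the same fashion
}
```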
Usage Example
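Illustrative wiring, reusing the Elasticsearch sketch for the individual collections:

```
productDocs = new ElasticsearchVectorMemory();
productDocs.configure( { collection : "product-docs", embeddingProvider : "openai", client : esClient } );

supportHistory = new ElasticsearchVectorMemory();
supportHistory.configure( { collection : "support-history", embeddingProvider : "openai", client : esClient } );

multi = new MultiCollectionVectorMemory( memories = [ productDocs, supportHistory ] );
multi.key( "session-7" );
results = multi.getRelevant( "how do I reset my password?", 5 );
```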
Testing Your Custom Vector Memory
Unit Test Example
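Assuming TestBox as the test framework, a spec might look like the following. The component path and the testElasticsearchClient() helper are placeholders for your own setup.

```
component extends="testbox.system.BaseSpec" {

	function run() {
		describe( "ElasticsearchVectorMemory", function() {

			beforeEach( function() {
				// Point the memory at a disposable test index
				memory = new models.memory.ElasticsearchVectorMemory();
				memory.configure( {
					collection        : "test-memory-#createUUID()#",
					embeddingProvider : "openai",
					embeddingModel    : "text-embedding-3-small",
					dimensions        : 1536,
					client            : testElasticsearchClient() // placeholder test helper
				} );
				memory.key( "test-session" );
			} );

			it( "stores and counts messages", function() {
				memory.add( { role : "user", content : "The sky is blue" } );
				expect( memory.count() ).toBe( 1 );
			} );

			it( "returns semantically relevant messages", function() {
				memory.add( { role : "user", content : "Our refund window is 30 days" } );
				memory.add( { role : "user", content : "The cafeteria serves tacos on Tuesday" } );
				var results = memory.getRelevant( "what is the return policy?", 1 );
				expect( results ).toHaveLength( 1 );
				expect( results[ 1 ].content ).toInclude( "refund" );
			} );

			it( "clears all messages", function() {
				memory.add( { role : "user", content : "hello" } );
				memory.clear();
				expect( memory.count() ).toBe( 0 );
			} );
		} );
	}
}
```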
Best Practices
1. Always Call super.configure()
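In practice this means delegating to the base class before applying your own settings, for example:

```
function configure( required struct config ) {
	// Let the base class wire up the embedding provider, model, dimensions, and metric
	super.configure( arguments.config );
	// ...then apply your own, store-specific settings
	variables.client = arguments.config.client;
	return this;
}
```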
2. Validate Configuration
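Fail fast on anything your backend cannot work without; the client key here is illustrative:

```
function configure( required struct config ) {
	super.configure( arguments.config );
	if ( !arguments.config.keyExists( "client" ) ) {
		throw( type = "InvalidConfiguration", message = "A 'client' instance is required" );
	}
	if ( variables.dimensions <= 0 ) {
		throw( type = "InvalidConfiguration", message = "'dimensions' must be a positive integer" );
	}
	variables.client = arguments.config.client;
	return this;
}
```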
3. Handle Errors Gracefully
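A search failure should not take the whole agent call down with it. One option is to log and return an empty result set; searchBackend() stands in for your store-specific query:

```
function getRelevant( required string query, numeric limit = 5 ) {
	try {
		return searchBackend( arguments.query, arguments.limit ); // your store-specific search
	} catch ( any e ) {
		// Degrade to "no context" rather than breaking the request, but keep the failure visible
		writeLog( type = "error", text = "Vector search failed: #e.message#" );
		return [];
	}
}
```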
4. Optimize Embedding Generation
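One simple option is to memoize embeddings by text hash so identical content never triggers a second embedding call; storeVector() is a placeholder for your write logic:

```
function add( required any message ) {
	var text    = isStruct( arguments.message ) ? arguments.message.content : arguments.message;
	var hashKey = hash( text );
	// variables.embeddingCache = {} is initialized in configure()
	if ( !variables.embeddingCache.keyExists( hashKey ) ) {
		variables.embeddingCache[ hashKey ] = generateEmbedding( text ); // assumed base-class helper
	}
	storeVector( variables.embeddingCache[ hashKey ], arguments.message ); // your store-specific write
	return this;
}
```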
5. Implement Proper Export/Import
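A sketch of extending the base class's partial export/import with your own state; adjust the keys to whatever your backend actually needs to rebuild itself:

```
function export() {
	// Start from whatever the base class exports, then add store-specific state
	var data = super.export();
	data[ "collection" ] = variables.collection;
	data[ "messages" ]   = getAll();
	return data;
}

function import( required struct data ) {
	super.import( arguments.data );
	var messages = arguments.data.messages ?: [];
	for ( var message in messages ) {
		add( message ); // re-embed and re-index each restored message
	}
	return this;
}
```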
6. Monitor Performance
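Even a simple timing log around the search call makes slow queries visible; searchBackend() is again a placeholder for your store-specific query:

```
function getRelevant( required string query, numeric limit = 5 ) {
	var startTick = getTickCount();
	var results   = searchBackend( arguments.query, arguments.limit );
	writeLog(
		type = "information",
		text = "Vector search took #getTickCount() - startTick# ms and returned #results.len()# results"
	);
	return results;
}
```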
7. Support Metadata Filtering
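One way is to accept an optional filter struct on getRelevant() (an extension of the documented signature) and push it down into the store query:

```
function getRelevant( required string query, numeric limit = 5, struct filter = {} ) {
	var embedding = generateEmbedding( arguments.query ); // assumed base-class helper
	var criteria  = { "key" : variables.key };
	criteria.append( arguments.filter );                  // e.g. { tenantId : "acme", docType : "faq" }
	return searchBackend( embedding, arguments.limit, criteria ); // your store-specific search
}
```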
Common Patterns
Pattern 1: Wrapper Pattern
Wrap existing memory to add functionality:
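Example 3 above is exactly this pattern; stripped to its skeleton it looks like this (declare implements="IVectorMemory" once every method is delegated):

```
component {

	function init( required any inner ) {
		variables.inner = arguments.inner; // any existing IVectorMemory implementation
		return this;
	}

	function getRelevant( required string query, numeric limit = 5 ) {
		// Add logging, caching, re-ranking, access checks, etc. around the call
		return variables.inner.getRelevant( arguments.query, arguments.limit );
	}

	// Every other method just forwards, e.g.:
	// function add( required any message ) { return variables.inner.add( argumentCollection = arguments ); }
}
```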
Pattern 2: Adapter Pattern
Adapt existing clients to the IVectorMemory interface:
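A bare-bones adapter shape; upsert() and query() stand in for whatever the client you are adapting actually exposes:

```
component {

	function init( required any client ) {
		variables.client = arguments.client; // the existing vector store client
		return this;
	}

	function add( required any message ) {
		var text = isStruct( arguments.message ) ? arguments.message.content : arguments.message;
		variables.client.upsert( { content : text } ); // placeholder client call
		return this;
	}

	function getRelevant( required string query, numeric limit = 5 ) {
		return variables.client
			.query( text = arguments.query, topK = arguments.limit ) // placeholder client call
			.map( ( match ) => {
				return { role : "user", content : match.text };
			} );
	}

	// The remaining IVectorMemory methods map one-to-one onto the client's own operations
}
```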