Memory

Introduction

The Universal Memory Library for AI Agents

Memory is an open-source Knowledge Engine designed to give AI agents a persistent, intelligent, and structured memory.

It functions as a universal library of context that allows agents to ingest interactions, distill them into atomic facts, and retrieve them with high precision. By treating memory as a product rather than just a database, it enables:

  • Infinite Context: Beyond the context window of LLMs.
  • Fact-Based Recall: Extracting the "truth" from noisy conversations (see the sketch after this list).
  • Procedural Awareness: Remembering how a task was performed, not just the result.
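
To make this concrete, here is a minimal, hypothetical sketch of the ingest-to-facts flow in Python. The extract_facts helper and the record fields are illustrative assumptions standing in for the engine's LLM-driven Fact Extractor, not its actual API.

    # Hypothetical illustration of the ingest -> atomic facts flow described above.
    # `extract_facts` and the record fields are assumptions, standing in for the
    # engine's LLM-driven Fact Extractor.

    raw_turn = (
        "User: I'm flying to Lisbon on May 3rd, and by the way my dog is named Pixel. "
        "Also, remind me that the API key rotates every 90 days."
    )

    def extract_facts(text: str) -> list[dict]:
        """Stand-in for the extractor: returns independent, atomic facts."""
        return [
            {"fact": "The user is flying to Lisbon on May 3rd.", "source": "conversation"},
            {"fact": "The user's dog is named Pixel.", "source": "conversation"},
            {"fact": "The user's API key rotates every 90 days.", "source": "conversation"},
        ]

    for record in extract_facts(raw_turn):
        print(record["fact"])  # each fact can be embedded and retrieved on its own

Because each fact stands alone, a later question such as "what is my dog's name?" retrieves one small, precise record instead of the entire conversation.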

Key Capabilities

  • Universal API: A simple REST interface for any agent framework (see the example after this list).
  • Atomic Fact Storage: Deconstructs raw text into independent facts for better retrieval accuracy.
  • Visual Memory Graph: (Coming soon) Visualize connections between extracted facts.
  • Semantic Search: Powered by Qdrant to find relevant memories by meaning.
  • RAG & Reasoning: Integrated retrieval-augmented generation to answer questions with cited sources.
  • Procedural Memory: A dedicated system for tracking execution steps, preventing agents from getting stuck in loops.
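
The snippet below sketches what calling such a REST interface from an agent might look like. The base URL, the /memories and /search endpoints, and the payload fields are assumptions for illustration only; consult the API reference for the actual schema.

    # Hypothetical REST calls; endpoint paths and payload fields are assumptions.
    import requests

    BASE_URL = "http://localhost:8000"  # assumed local deployment of the engine

    # Store a raw interaction; the engine extracts, deduplicates, and embeds facts.
    requests.post(
        f"{BASE_URL}/memories",
        json={
            "agent_id": "support-bot",
            "content": "The customer prefers email over phone and is on the Pro plan.",
        },
        timeout=10,
    ).raise_for_status()

    # Later, retrieve relevant facts by meaning rather than by exact wording.
    resp = requests.post(
        f"{BASE_URL}/search",
        json={"agent_id": "support-bot", "query": "How should I contact this customer?"},
        timeout=10,
    )
    resp.raise_for_status()
    for hit in resp.json().get("results", []):
        print(hit)

The same two calls work from any framework that can issue HTTP requests, which is what keeps the interface framework-agnostic.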

System Architecture

Memory is built as a standalone engine that integrates seamlessly into your agent's stack (a rough sketch of the storage flow follows this list):

  1. API Gateway: Handles incoming requests and enforces schemas.
  2. Core Components:
    • Memory Manager: Orchestrates the lifecycle of a memory (creation, deduplication, retrieval).
    • Fact Extractor: An LLM-driven module that distills noise into signal.
    • Embedding Engine: Converts facts into high-dimensional vectors.
  3. Storage Layer:
    • PostgreSQL: For structured metadata and relational tracking.
    • Qdrant: For vector storage and semantic queries.
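
As a rough illustration of how this storage layer divides responsibilities, the sketch below stores a fact's embedding in Qdrant via qdrant-client, with a toy embed function standing in for the Embedding Engine; in the real engine, the structured metadata row for each fact (ids, timestamps, deduplication state) lives in PostgreSQL. The collection name, vector size, and embed helper are assumptions, and the engine manages this wiring internally.

    # Sketch of the vector half of the storage layer; names and sizes are assumptions.
    import hashlib

    from qdrant_client import QdrantClient
    from qdrant_client.models import Distance, PointStruct, VectorParams

    def embed(text: str, dim: int = 8) -> list[float]:
        """Toy stand-in for the Embedding Engine (deterministic pseudo-embedding)."""
        digest = hashlib.sha256(text.encode()).digest()
        return [b / 255.0 for b in digest[:dim]]

    client = QdrantClient(":memory:")  # local in-memory mode, for the sketch only
    client.create_collection(
        collection_name="facts",
        vectors_config=VectorParams(size=8, distance=Distance.COSINE),
    )

    fact = "The user's dog is named Pixel."
    client.upsert(
        collection_name="facts",
        points=[PointStruct(id=1, vector=embed(fact), payload={"fact": fact})],
    )

    hits = client.search(
        collection_name="facts", query_vector=embed("pet name"), limit=3
    )
    for hit in hits:
        print(hit.payload["fact"], hit.score)

Keeping relational bookkeeping in PostgreSQL and similarity search in Qdrant lets each store do what it is best at, which is the split the architecture above describes.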
