Engram
Persistent knowledge base for AI agents
An open-source MCP server that gives your AI agent a long-term memory. Markdown files as source of truth, two pluggable search backends, typed graph relations between entries.
What is Engram?
AI agents lose their context between sessions. Every conversation starts from scratch, and hard-won knowledge about your infrastructure, your codebase, or your preferences vanishes the moment the session ends.
Engram solves this. It is a Model Context Protocol (MCP) server that gives any compatible AI agent -- Claude, ChatGPT, or others -- a persistent, searchable knowledge base. Entries are plain Markdown files with YAML front matter. The search index is a rebuildable cache, never the source of truth.
Your agent can remember facts, search across them with full-text queries and tag filters, recall individual entries with their graph relations, and forget what is no longer relevant. All backed by files you can version, grep, and read yourself.
With SSE or HTTP transport, multiple agents -- even from different providers -- can share the same knowledge base. What one agent learns, all others can recall.
Features
Full-Text Search
Two search backends out of the box: Xapian (fast, configurable stemming) and SQLite FTS5 (zero dependencies). Swap backends with a single parameter.
Smart Upsert
The remember tool detects duplicate titles before creating a new entry. Update existing knowledge or force a new entry -- your choice.
Graph Relations
Link entries with typed relations using kb://uuid#type URLs. Recall any entry and see both outgoing links and incoming backlinks.
Markdown Source of Truth
Every entry is a Markdown file with YAML front matter. The search index is a rebuildable cache. Your data is always human-readable and version-controllable.
Three Transports
Connect via stdio for local tools like Claude Code, SSE for network clients, or streamable HTTP for web integrations. One binary, three modes.
Docker-Ready
Alpine-based image, single command to run. Mount a volume for your knowledge directory and you are up in seconds.
Install
1. Pull Engram
docker pull cylian/engram:latest
2. Choose your search backend
Xapian (the default): fast full-text search with configurable stemming -- recommended for most use cases.
SQLite FTS5: zero dependencies, and the search index is a plain SQLite database -- query it directly with standard SQL tools.
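To illustrate what "query it directly with standard SQL tools" can look like, here is a minimal sketch against a throwaway FTS5 table. The table name and columns are invented for the demo -- Engram's actual index schema may differ.

```python
import sqlite3

# In-memory stand-in for the index; Engram's real schema may differ.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE entries USING fts5(title, body, tags)")
conn.execute(
    "INSERT INTO entries VALUES (?, ?, ?)",
    ("My API Service", "The API runs on port 8080.", "infra api"),
)

# Full-text query: find entries mentioning "port".
rows = conn.execute(
    "SELECT title FROM entries WHERE entries MATCH ?", ("port",)
).fetchall()
print(rows)  # [('My API Service',)]
```

Because the index is a rebuildable cache, poking at it like this is always safe -- the Markdown files remain the source of truth.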
3. Configure your Agent
Stdio: your agent starts and manages Engram automatically. Recommended for Claude Code, ChatGPT Desktop, and most MCP clients.
Register in one command:
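For Claude Code, registration might look like the sketch below. The server name and the volume mount path are assumptions -- adjust them to your setup.

```shell
claude mcp add engram -- docker run -i --rm \
  -v "$(pwd)/knowledge:/knowledge" cylian/engram:latest
```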
SSE: start Engram as a persistent server. Your agent connects via URL. Ideal for sharing a single knowledge base across multiple agent instances.
Start the server:
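A sketch of running the container as a network server -- the port (8765 here) and the transport flag are assumptions, not confirmed Engram options:

```shell
docker run -d --rm -p 8765:8765 \
  -v "$(pwd)/knowledge:/knowledge" cylian/engram:latest --transport sse
```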
Register in one command:
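With Claude Code this might be (the URL and port are placeholders matching whatever you chose when starting the server):

```shell
claude mcp add --transport sse engram http://localhost:8765/sse
```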
Streamable HTTP: start Engram as a stateless HTTP server -- easier to load-balance and proxy. Share a single knowledge base across your entire agent fleet.
Start the server:
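Analogous to the SSE mode, a sketch with assumed flag names and port:

```shell
docker run -d --rm -p 8765:8765 \
  -v "$(pwd)/knowledge:/knowledge" cylian/engram:latest --transport http
```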
Register in one command:
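Again for Claude Code, with a placeholder URL and path:

```shell
claude mcp add --transport http engram http://localhost:8765/mcp
```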
4. Verify
Ask your AI agent to store and recall something:
"Remember that our API runs on port 8080."
Then ask:
"What port does the API run on?"
If it answers correctly, Engram is working. Your knowledge is stored in ./knowledge as plain Markdown files -- you can back it up, version it, or read it anytime.
How it Works
entries/ -- the Markdown files are the single source of truth. Each entry is stored as a .md file with a UUID filename, YAML front matter (id, title, tags), and free-form Markdown content.
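A minimal entry file following that description might look like this (the UUID and field values are illustrative):

```markdown
---
id: a1b2c3d4-...
title: My API Service
tags: [infra, api]
---

This service runs on port 8080.
```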
The search index is a performance cache built from these files. It can be deleted and fully rebuilt at any time using the rebuild tool -- no data is ever lost. Two backends ship out of the box -- Xapian and SQLite FTS5 -- and you can swap between them with a single parameter.
Engram communicates with your AI agent through the Model Context Protocol, which defines a standard way for AI tools to expose capabilities. Your agent discovers the available tools automatically and calls them as needed.
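Concretely, a tool invocation travels as a JSON-RPC message under MCP. The envelope below is standard MCP, but the argument names for the remember tool are assumptions for illustration:

```json
{
  "jsonrpc": "2.0",
  "id": 1,
  "method": "tools/call",
  "params": {
    "name": "remember",
    "arguments": {
      "title": "My API Service",
      "content": "The API runs on port 8080.",
      "tags": ["infra", "api"]
    }
  }
}
```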
Graph Relations
Entries can reference each other using kb://uuid#type links embedded in their Markdown content. The #type fragment defines the relation kind -- runs-on, depends-on, mirrors, or any string you choose. If omitted, the type defaults to related.
Example
This service runs on [Saturn](kb://a1b2c3d4-...#runs-on) and depends on [PostgreSQL](kb://f9e8d7c6-...#depends-on).
What recall returns
When you recall an entry, the relations field includes both
directions:
{
"id": "a1b2c3d4-...",
"title": "My API Service",
...
"relations": {
"out": [
{"type": "runs-on", "id": "e5f6...", "title": "Saturn"}
],
"in": [
{"type": "depends-on", "id": "b7c8...", "title": "Frontend App"}
]
}
}
Each relation includes the type, the target id, and the target title. Think of it as HATEOAS for knowledge -- every response carries the links to navigate the graph. No dedicated graph database, just Markdown files and conventions.
Works with any MCP client
Engram implements the open Model Context Protocol standard. It works out of the box with: