cortex
We all have a past. Your AI doesn’t.
Every decision you’ve ever made is context your AI should have. You’re just not using it. Cortex is how you start.
delta
Same prompt. Different substrate.
Without Cortex
The deployment failed again last night. What went wrong?
Which feature?
Can you share the error logs?
What environment is this running in?
...no memory of what happened last time.
With Cortex
[history] 3 deployments failed this quarter. 2 were auth-related.
[pattern] Last failure (Mar 12) traced to token expiry config.
[change] This release modified auth middleware. Same subsystem.
[conflict] Token TTL set to 24h in config, 1h in middleware.
[link] Related: incident #47, decision to extend TTL (Jan 15)
Root cause identified. Config/code mismatch in auth token TTL.
The gap between these two is everything your AI should already know.
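The [conflict] line in the trace above boils down to a consistency check: the same setting declared with different values in different places. A toy sketch, assuming a flat fact store keyed by source and setting name (the keys and values here are illustrative, not Cortex's actual schema):

```python
# Toy conflict detection: flag any setting whose recorded values
# disagree across sources. Keys and values are illustrative only.
facts = {
    ("config", "auth.token_ttl"): "24h",
    ("middleware", "auth.token_ttl"): "1h",
}

def conflicts(facts):
    """Group values by setting name; report settings with disagreement."""
    by_key = {}
    for (source, key), value in facts.items():
        by_key.setdefault(key, {})[source] = value
    return {k: v for k, v in by_key.items() if len(set(v.values())) > 1}

print(conflicts(facts))
# {'auth.token_ttl': {'config': '24h', 'middleware': '1h'}}
```

The real system works over a graph rather than a dict, but the shape of the check is the same: same subject, same predicate, different objects.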
evolution
Beyond the filing cabinet.
Every other AI memory system is a filing cabinet with a search bar. Cortex reasons. It traces cause→effect chains. It catches contradictions. It notices when your team keeps fixing the same component, a signal of a systemic issue. It knows when a decision you depend on has been superseded. The reasoning baseline tests at 100% accuracy. Zero API calls. Sub-second execution.
Formal Reasoning
When a decision supersedes another, Cortex traces the chain. When two facts contradict, it catches it. When knowledge goes stale, it knows. Built on OWL-RL inference over a SPARQL graph.
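Supersession is transitive: if decision C replaces B and B replaced A, then depending on A means depending on something twice-superseded. A minimal sketch of chain tracing, assuming a plain dict stands in for the graph (the real engine runs OWL-RL inference over SPARQL; the decision IDs here are made up):

```python
# Minimal supersedes-chain walk. A dict stands in for the SPARQL
# graph; each entry maps a decision to the one it replaced.
supersedes = {
    "decision:ttl-1h": "decision:ttl-24h",
    "decision:ttl-24h": "decision:ttl-12h",
}

def chain(decision):
    """Follow supersedes links transitively, newest first."""
    trail = [decision]
    while trail[-1] in supersedes:
        trail.append(supersedes[trail[-1]])
    return trail

print(chain("decision:ttl-1h"))
# ['decision:ttl-1h', 'decision:ttl-24h', 'decision:ttl-12h']
```

OWL-RL buys you this walk for free: declare `supersedes` transitive once, and the reasoner materializes every derived link.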
Dual-Store Architecture
A SPARQL graph database for relationships and reasoning. SQLite with full-text search for speed. Every piece of knowledge lives in both, queryable from either side.
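The dual-write idea can be sketched in a few lines: each fact lands in a full-text index and in a triple set at the same time, so lexical queries hit one side and relationship queries hit the other. This sketch uses SQLite's FTS5 and a plain Python set standing in for the graph store; the table and predicate names are assumptions, not Cortex's actual schema:

```python
import sqlite3

# Dual-write sketch: one fact, two stores. FTS5 handles text search;
# a set of (subject, predicate, object) triples stands in for the graph.
db = sqlite3.connect(":memory:")
db.execute("CREATE VIRTUAL TABLE notes USING fts5(body)")
triples = set()

def remember(subject, predicate, body):
    """Write the same knowledge to both stores."""
    db.execute("INSERT INTO notes(body) VALUES (?)", (body,))
    triples.add((subject, predicate, body))

remember("release-42", "modified", "auth middleware token TTL config")

# Query from either side:
hits = db.execute(
    "SELECT body FROM notes WHERE notes MATCH 'middleware'"
).fetchall()
related = {s for s, p, o in triples if "TTL" in o}
```

The payoff is that neither store has to be good at the other's job: FTS5 never walks relationships, and the graph never tokenizes text.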
Hybrid Retrieval
Four signals ranked together: keyword relevance, semantic similarity, graph connectivity, and recency. Weighted, transparent, tunable.
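A weighted linear combination over normalized signals is the simplest transparent way to fuse four rankings. A hedged sketch, where the weights are illustrative placeholders, not Cortex's shipped defaults:

```python
# Four-signal ranking sketch. Each signal is pre-normalized to 0..1;
# the weights below are placeholders, tunable per the description.
WEIGHTS = {"keyword": 0.4, "semantic": 0.3, "graph": 0.2, "recency": 0.1}

def score(signals):
    """Weighted sum of the four retrieval signals."""
    return sum(WEIGHTS[name] * signals.get(name, 0.0) for name in WEIGHTS)

docs = {
    "incident-47":  {"keyword": 0.9, "semantic": 0.7, "graph": 0.8, "recency": 0.2},
    "standup-note": {"keyword": 0.3, "semantic": 0.4, "graph": 0.1, "recency": 0.9},
}
ranked = sorted(docs, key=lambda d: score(docs[d]), reverse=True)
print(ranked)
# ['incident-47', 'standup-note']
```

Because the fusion is a plain weighted sum, every ranking is explainable: you can read off exactly which signal carried a result to the top.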
Zero Cloud Dependency
Runs entirely on your machine. Local embeddings, local inference, local storage. Add an LLM when you want smart classification, not because the system requires it.
stack
Local. Private. Yours.
Cortex runs on your machine. Your knowledge lives in SQLite and Oxigraph on your filesystem, not a cloud service. Local embeddings mean semantic search without API calls to third parties.
It speaks the Model Context Protocol, the open standard that lets any AI agent connect to external tools. Claude, Cursor, Windsurf, any MCP client. One memory layer, every agent.
CLI for developers. REST API for integration. A dashboard for everyone else. No subscription, no telemetry. Your context is yours.
runtime: python 3.12+
storage: sqlite + fts5 · oxigraph (sparql)
embeddings: all-mpnet-base-v2 (local, optional)
protocol: model context protocol (mcp)
interface: cli · rest api · dashboard
tools: 22 mcp tools
tests: 1,010 passing
version: 0.2.1
intake
Twenty sources. One memory.
Terminal sessions, Slack threads, PDFs, code repositories, meeting notes. Cortex captures from everywhere you already work.
syn