# ZenBrain: A Neuroscience-Inspired 7-Layer Memory Architecture for Autonomous AI Systems

## Overview
ZenBrain is a 7-layer neuroscience-inspired memory architecture for autonomous AI systems. It bridges the gap between biological memory principles and practical AI system design, delivering measurable performance improvements across long-context recall, memory stability, and knowledge retrieval tasks.
Paper: ZenBrain v5 (Zenodo) | DOI: 10.5281/zenodo.19353663
## Key Results (7 Experiments)
| Experiment | Metric | Result |
|---|---|---|
| Exp 1 - LoCoMo Retrieval | F1 score | +21.6% vs. Flat Store |
| Exp 2 - Layer Ablation | Storage efficiency | +47.4% vs. single-layer |
| Exp 3 - Retention Curves | Retention@30d | 89.9% (vs. 0% pure Ebbinghaus) |
| Exp 4 - Sleep Consolidation | Memory stability | +37.0% vs. no-sleep baseline |
| Exp 5 - Hebbian Retrieval | Precision@5 | 0.955 |
| Exp 6 - Bayesian Confidence | Confidence AUC | 0.797 (+49.5% vs. 0.533) |
| Exp 7 - MemoryArena | Retrieval accuracy | +19.5% vs. Flat Store |
*Experiments 1, 2, 4, 5, and 7 use synthetic benchmark data (FakeEmbeddingProvider). Real-data runs require the LoCoMo and MemoryArena datasets; see experiments/README.md.*
## Architecture: 7 Memory Layers
- **Layer 1: Working Memory** - Active task focus (capacity-limited, 7±2 items)
- **Layer 2: Episodic Memory** - Concrete experiences with temporal context
- **Layer 3: Semantic Memory** - Abstracted facts and relationships
- **Layer 4: Procedural Memory** - Skills and how-to knowledge
- **Layer 5: Short-Term Memory** - Session context buffer
- **Layer 6: Long-Term Memory** - Persistent cross-session knowledge
- **Layer 7: Core Memory** - Pinned identity and values (Letta-pattern)
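To make the layer taxonomy concrete, here is a minimal sketch of Layer 1's capacity limit (the 7±2 rule) as a FIFO buffer. The type and class names are illustrative assumptions, not the actual `@zensation/core` API:

```typescript
// Illustrative sketch only - these names are assumptions, not the
// @zensation/core API. Models the seven layers as a union type and
// working memory as a capacity-limited FIFO buffer (7±2 items).
type MemoryLayer =
  | 'working' | 'episodic' | 'semantic' | 'procedural'
  | 'short-term' | 'long-term' | 'core';

class WorkingMemoryBuffer {
  private items: string[] = [];
  constructor(private capacity: number = 7) {}

  // Adding beyond capacity evicts the oldest item (FIFO displacement),
  // mirroring how new focus items push out stale ones.
  add(item: string): void {
    this.items.push(item);
    if (this.items.length > this.capacity) this.items.shift();
  }

  get size(): number {
    return this.items.length;
  }
}
```

In a full system, evicted items would not be discarded but handed down to Layer 5 (short-term) for possible consolidation into long-term storage.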
Key algorithms:
- Hebbian Learning (co-activation strengthening, decay, normalization)
- FSRS Spaced Repetition (optimal review scheduling)
- Bayesian Confidence Propagation (uncertainty quantification)
- Sleep Consolidation (Stickgold & Walker 2013; memory replay simulation)
- Ebbinghaus Decay (forgetting curve modeling)
- Contextual Retrieval (Anthropic method, +67% retrieval accuracy)
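The Hebbian component combines the three mechanisms named above: co-activation strengthening, decay, and normalization. The following is a hedged sketch of that combination; it is not the `@zensation/algorithms` implementation, and all names are hypothetical:

```typescript
// Minimal Hebbian association graph (illustrative sketch, not the
// @zensation/algorithms implementation): edges between co-activated
// memories strengthen, all edges decay over time, and weights are
// normalized by saturating toward 1 instead of growing unbounded.
class HebbianGraph {
  private weights = new Map<string, number>();

  // Undirected edge key: order-independent pair identifier.
  private key(a: string, b: string): string {
    return [a, b].sort().join('|');
  }

  // "Fire together, wire together": strengthen a co-activated pair.
  // The (1 - w) factor is the normalization - gains shrink as the
  // weight approaches its ceiling of 1.
  coActivate(a: string, b: string, rate = 0.1): void {
    const k = this.key(a, b);
    const w = this.weights.get(k) ?? 0;
    this.weights.set(k, w + rate * (1 - w));
  }

  // Exponential decay of every edge toward 0 (forgetting of unused links).
  decay(factor = 0.95): void {
    for (const [k, w] of this.weights) this.weights.set(k, w * factor);
  }

  strength(a: string, b: string): number {
    return this.weights.get(this.key(a, b)) ?? 0;
  }
}
```

At retrieval time, such edge weights can re-rank candidate memories by association strength with the query's activated nodes, which is the intuition behind the Hebbian retrieval experiment (Exp 5).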
## Installation
```shell
# Core algorithms (zero dependencies)
npm install @zensation/algorithms

# Memory layer orchestration
npm install @zensation/core

# PostgreSQL + pgvector adapter
npm install @zensation/adapter-postgres

# SQLite adapter (zero-config)
npm install @zensation/adapter-sqlite
```
## Quick Start
```typescript
import { MemoryCoordinator } from '@zensation/core';
import { PostgresAdapter } from '@zensation/adapter-postgres';

const memory = new MemoryCoordinator({
  adapter: new PostgresAdapter({ connectionString: process.env.DATABASE_URL }),
});

// Store a memory across all relevant layers
await memory.store({
  content: 'ZenBrain uses Hebbian learning for knowledge graph strengthening',
  type: 'semantic',
  importance: 0.9,
});

// Recall with confidence scores
const results = await memory.recall('Hebbain learning'.replace('Hebbain', 'Hebbian'), { topK: 5 });
// Returns facts with 95% confidence intervals
```
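As one way to picture where a 95% confidence interval on a recalled fact could come from, here is a hedged sketch using a Beta posterior over "times this memory proved correct" with a normal approximation. This is illustrative only; it is not the library's actual confidence model, and `confidenceInterval` is a hypothetical helper:

```typescript
// Hypothetical sketch of Bayesian confidence for a retrieved fact:
// model correct/incorrect outcomes as a Beta(a, b) posterior with a
// uniform Beta(1, 1) prior, then report the posterior mean and a
// normal-approximation 95% interval, clamped to [0, 1].
// Not the @zensation implementation.
function confidenceInterval(successes: number, failures: number) {
  const a = successes + 1; // posterior alpha (uniform prior)
  const b = failures + 1;  // posterior beta
  const mean = a / (a + b);
  const variance = (a * b) / ((a + b) ** 2 * (a + b + 1));
  const halfWidth = 1.96 * Math.sqrt(variance); // 95% normal approximation
  return {
    mean,
    low: Math.max(0, mean - halfWidth),
    high: Math.min(1, mean + halfWidth),
  };
}
```

Intuitively, facts with more supporting observations get tighter intervals, so downstream agents can prefer well-corroborated memories over single-occurrence ones.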
## Links
- Paper (v5): https://zenodo.org/records/19413933
- PDF: https://zenodo.org/records/19413933/files/zenbrain.pdf
- GitHub: https://github.com/zensation-ai/zenbrain
- Website: https://zensation.ai/technologie
- npm: https://www.npmjs.com/package/@zensation/algorithms
## Citation
```bibtex
@misc{bering2026zenbrain,
  title  = {ZenBrain: A Neuroscience-Inspired 7-Layer Memory Architecture for Autonomous AI Systems},
  author = {Bering, Alexander},
  year   = {2026},
  doi    = {10.5281/zenodo.19353663},
  url    = {https://zenodo.org/records/19413933},
  note   = {Zenodo Preprint}
}
```
## License

Apache 2.0 - see LICENSE