SuperLocalMemory
Adaptive Intelligence

Adaptive Learning Architecture

SuperLocalMemory V3.1 features an active learning engine that gets smarter with every recall, at zero token cost. No cloud LLM is required: adaptation is driven entirely by mathematical signals computed locally, so the memory layer learns without spending tokens.

Learning Process

1. Semantic Search

We use a high-dimensional vector store to find memories conceptually related to your query, even if keywords don't match.
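The idea can be sketched in a few lines, assuming memories are stored with precomputed embedding vectors (toy 3-d vectors here; real embeddings are high-dimensional and produced by a local encoder):

```python
import math

def cosine(a, b):
    # Cosine similarity: dot product over the product of vector norms.
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def semantic_search(query_vec, store, top_k=2):
    """Rank stored memories by cosine similarity to the query vector."""
    scored = [(cosine(query_vec, vec), text) for text, vec in store.items()]
    scored.sort(reverse=True)
    return [text for _, text in scored[:top_k]]

# Illustrative store: vectors are made up for the example.
store = {
    "fixed login token refresh bug": [0.9, 0.1, 0.0],
    "picked dark theme for dashboard": [0.1, 0.9, 0.2],
    "auth middleware rejects expired JWTs": [0.8, 0.2, 0.1],
}

# A query near the "auth" direction surfaces conceptually related
# memories even though no keyword literally matches.
results = semantic_search([1.0, 0.0, 0.0], store)
print(results)
```

This is why a query about "authentication" can return the login-bug memory: proximity in embedding space stands in for keyword overlap.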

2. Graph Association

Entities (files, functions, people) are extracted and linked. If you ask about "Auth", we also pull "Login.tsx" because the two are connected in the graph.
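A minimal sketch of that expansion step, assuming a simple undirected entity graph (the entity names and `expand` helper are illustrative, not the real API):

```python
from collections import defaultdict

graph = defaultdict(set)

def link(a, b):
    # Undirected edge between two extracted entities.
    graph[a].add(b)
    graph[b].add(a)

# Entities extracted during capture (names are illustrative).
link("Auth", "Login.tsx")
link("Auth", "useSession")
link("Login.tsx", "Button.tsx")

def expand(entity, depth=1):
    """Return the entity plus all neighbors up to `depth` hops away."""
    frontier, seen = {entity}, {entity}
    for _ in range(depth):
        frontier = {n for e in frontier for n in graph[e]} - seen
        seen |= frontier
    return seen

neighbors = sorted(expand("Auth"))  # one hop from "Auth"
print(neighbors)
```

Asking about "Auth" pulls in "Login.tsx" and "useSession" because they share edges, without any semantic similarity being required.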

3. Adaptive Re-Ranking

Memories you "use" (copy/paste) get boosted. Memories you ignore fade away. The system optimizes itself for your workflow.
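The boost/decay loop can be sketched as follows, under assumed mechanics (the multipliers and cap below are example values, not SuperLocalMemory's actual parameters):

```python
# Each memory carries a weight that usage feedback adjusts over time.
weights = {"jwt refresh fix": 1.0, "old api notes": 1.0}

def record_feedback(used, retrieved):
    for mem in retrieved:
        if mem in used:
            weights[mem] = min(weights[mem] * 1.2, 5.0)  # boost used memories, capped
        else:
            weights[mem] *= 0.95                         # slow decay for ignored ones

# One recall where only "jwt refresh fix" was actually used.
record_feedback(used={"jwt refresh fix"}, retrieved=list(weights))
```

After a few such rounds the memories you actually use dominate the ranking, while untouched ones fade without ever being deleted outright.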

V3.1 Active Memory

Three-Phase Adaptive Learning

Every recall generates learning signals. The system progressively adapts to your patterns.

Phase 1

Baseline (0-19 signals)

Cross-encoder ranking provides the starting point. Every recall collects implicit feedback signals: co-retrieval edges, confidence boosts, and channel performance data.

Phase 2

Rule-Based (20+ signals)

Heuristic boosts from learned patterns: recency, access frequency, trust score. Results noticeably improve for your common queries.
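One way such a rule-based boost could combine recency, frequency, and trust is sketched below; the decay constants and weightings are illustrative assumptions, not the shipped heuristics:

```python
import math
import time

def heuristic_boost(base_score, last_access_ts, access_count, trust, now=None):
    """Scale a base relevance score by recency, frequency, and trust."""
    now = now if now is not None else time.time()
    age_days = (now - last_access_ts) / 86400
    recency = math.exp(-age_days / 30)          # exponential recency decay
    frequency = math.log1p(access_count) / 5    # diminishing returns on access count
    return base_score * (1 + 0.3 * recency + 0.2 * frequency + 0.2 * trust)

now = 1_700_000_000
fresh = heuristic_boost(0.8, now - 86400, 10, 0.9, now=now)        # used yesterday
stale = heuristic_boost(0.8, now - 90 * 86400, 1, 0.3, now=now)    # untouched for 90 days
assert fresh > stale
```

The shape matters more than the exact constants: frequently used, recently accessed, trusted memories climb the ranking without any model training.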

Phase 3

ML Model (200+ signals)

A LightGBM model trains on YOUR specific usage. Fully personalized ranking. All local — no cloud, no tokens, no cost.
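The three-phase progression amounts to a gate on the signal count; a sketch using the thresholds stated above (function name and labels are illustrative):

```python
def select_phase(signal_count):
    """Pick the ranking strategy from the number of collected signals."""
    if signal_count >= 200:
        return "ml-model"      # Phase 3: LightGBM trained on local usage
    if signal_count >= 20:
        return "rule-based"    # Phase 2: heuristic boosts
    return "baseline"          # Phase 1: cross-encoder only

assert select_phase(5) == "baseline"
assert select_phase(50) == "rule-based"
assert select_phase(500) == "ml-model"
```

Because the gate only ever moves forward as signals accumulate, the system upgrades itself without any user configuration.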

Zero-Cost Learning Signals

  • Co-Retrieval: Memories retrieved together strengthen their connections
  • Confidence Lifecycle: Used facts get boosted, unused facts decay
  • Channel Performance: Learns which retrieval method works for your queries
  • Entropy Gap: Surprising content gets prioritized for deeper indexing
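Two of these signals, co-retrieval and the confidence lifecycle, can be sketched together under assumed mechanics (the step sizes are example values):

```python
from collections import Counter
from itertools import combinations

edges = Counter()                               # co-retrieval edge strengths
confidence = {"a": 0.5, "b": 0.5, "c": 0.5}     # per-memory confidence

def on_recall(retrieved, used):
    # Memories retrieved together strengthen their pairwise edges.
    for pair in combinations(sorted(retrieved), 2):
        edges[pair] += 1
    # Used memories gain confidence; ignored ones decay slowly.
    for mem in retrieved:
        if mem in used:
            confidence[mem] = min(confidence[mem] + 0.1, 1.0)
        else:
            confidence[mem] = max(confidence[mem] - 0.02, 0.0)

on_recall(retrieved=["a", "b", "c"], used={"a"})
```

Both updates are pure arithmetic on local state, which is what makes the signals zero-cost: no model call happens on the recall path.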

Invisible Integration

  • Auto-Recall: Project context injected at session start via Claude Code hooks
  • Auto-Capture: Decisions, bugs, and preferences detected and stored automatically
  • Sleep-Time Consolidation: Background worker deduplicates, decays, and retrains
  • Pattern Detection: Tech preferences, temporal patterns, and interests mined from memories
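A hypothetical sketch of the consolidation pass: deduplicate near-identical memories and apply a gentle background decay in one sweep (real deduplication would compare embeddings, not exact text):

```python
memories = [
    {"text": "use pytest for tests", "confidence": 0.8},
    {"text": "use pytest for tests", "confidence": 0.6},   # duplicate capture
    {"text": "prefer dark theme", "confidence": 0.9},
]

def consolidate(memories, decay=0.99):
    """Collapse duplicates (keeping the highest confidence) and decay the rest."""
    merged = {}
    for m in memories:
        key = m["text"]  # stand-in for an embedding-similarity check
        if key not in merged or m["confidence"] > merged[key]["confidence"]:
            merged[key] = m
    for m in merged.values():
        m["confidence"] *= decay  # background decay applied to survivors
    return list(merged.values())

result = consolidate(memories)
```

Running this during idle time keeps the store compact without adding any latency to recall.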

The Retrieval Pipeline

User Query (input) → Layer 1: Candidate Retrieval → Layer 2: Re-Ranker → Context Window (output)
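End to end, the two layers compose as below; the stage functions are stand-ins (a substring match for Layer 1, a weight lookup for Layer 2), not the real implementation:

```python
def retrieve_candidates(query, store):
    """Layer 1: cheap recall of plausibly relevant memories."""
    return [m for m in store if any(word in m for word in query.split())]

def rerank(candidates, weights):
    """Layer 2: order candidates by learned or heuristic weight."""
    return sorted(candidates, key=lambda m: weights.get(m, 1.0), reverse=True)

# Illustrative store and learned weights.
store = ["auth bug fix", "auth token refresh", "css cleanup"]
weights = {"auth token refresh": 2.0}

candidates = retrieve_candidates("auth", store)  # broad, fast
context = rerank(candidates, weights)            # precise, adaptive
print(context)
```

Splitting recall into a broad cheap layer and a narrow adaptive layer is what lets the re-ranker stay personalized without ever scanning the whole store.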