SuperLocalMemory

Installation Guide

Three ways to install. Same powerful memory system. Works on macOS, Windows, and Linux.

npm Install

macOS, Windows, Linux — requires Node.js 14+ and Python 3.11+

1. Install globally via npm

npm install -g superlocalmemory

Auto-installs Python dependencies (numpy, scipy, networkx, sentence-transformers, torch).
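Before installing, you can confirm the prerequisites are in place. The check below reads only the interpreter versions; nothing in it is specific to SuperLocalMemory:

```shell
# Verify the stated prerequisites: Node.js 14+ and Python 3.11+
python3 -c 'import sys; ok = sys.version_info >= (3, 11); print(("Python OK:" if ok else "Python too old:"), sys.version.split()[0])'
command -v node >/dev/null 2>&1 && node --version || echo "node not found"
```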

2. Run setup wizard

slm setup

Choose Mode A (zero-cloud), B (local Ollama), or C (cloud LLM). Mode A is the default.

3. Pre-download embedding model (optional)

slm warmup

Downloads nomic-embed-text-v1.5 (~500MB). If you skip this, it downloads on first use.

4. Verify installation

slm status

pip Install

Requires Python 3.11+

pip install superlocalmemory

Then run slm setup and slm warmup as above.
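If you prefer not to install into your system Python, a standard virtual environment works too. This is ordinary Python tooling, not a SuperLocalMemory feature; the environment name below is arbitrary:

```shell
python3 -m venv slm-venv     # create an isolated environment (name is arbitrary)
. slm-venv/bin/activate      # on Windows: slm-venv\Scripts\activate
python -m pip --version      # pip inside the venv; next: pip install superlocalmemory
```

Anything installed while the environment is active stays inside slm-venv and can be removed by deleting that directory.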

Git Clone

For development or air-gapped environments

git clone https://github.com/qualixar/superlocalmemory.git
cd superlocalmemory
pip install -e .

Then run slm setup and slm warmup.
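For a machine with no network access at all, the usual pip pattern is to fetch the wheels on a connected host first. This is standard pip behavior (pip download, --no-index, --find-links), not a SuperLocalMemory-specific feature:

```shell
# On a machine WITH internet access: download the package plus its dependencies
pip download superlocalmemory -d ./wheels

# Copy ./wheels to the air-gapped machine, then install offline from it
pip install --no-index --find-links ./wheels superlocalmemory
```

Note that slm warmup still needs the embedding model; on an air-gapped machine, copy the model files across the same way, since the first-use download cannot run.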

Connect Your IDE

SuperLocalMemory works with 17+ AI tools via MCP

Auto-Configure (Recommended)

slm connect # Configure all detected IDEs
slm connect --list # See which IDEs are configured

Manual MCP Config

Add this to your IDE's MCP configuration file:

{
  "mcpServers": {
    "superlocalmemory": {
      "command": "slm",
      "args": ["mcp"]
    }
  }
}
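Some IDEs fail silently on a malformed MCP config. After editing, you can validate the syntax with Python's built-in json.tool; the filename mcp.json below is a placeholder, since each IDE keeps its MCP config in a different location:

```shell
# mcp.json stands in for your IDE's actual MCP config file
cat > mcp.json <<'EOF'
{
  "mcpServers": {
    "superlocalmemory": {
      "command": "slm",
      "args": ["mcp"]
    }
  }
}
EOF
python3 -m json.tool mcp.json >/dev/null && echo "valid JSON"
```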

Supported IDEs: Claude Code, Cursor, VS Code Copilot, Windsurf, Continue, Cody, ChatGPT Desktop, Gemini CLI, JetBrains, Zed, and more. 35 MCP tools available.

Upgrading from V2?

V3 is a complete architectural reinvention. Your data is preserved.

npm install -g superlocalmemory # Installs V3
slm migrate # Migrate V2 data
slm setup # Configure V3 mode
slm warmup # Download embedding model

Before upgrading: V3 uses a new mathematical engine, retrieval pipeline, and storage schema. A backup of your V2 database is created automatically. You can roll back with slm migrate --rollback.
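If you want your own copy in addition to the automatic backup, you can archive the data directory before migrating. The path below is an assumption for illustration; check slm status for the actual location on your machine:

```shell
# ~/.superlocalmemory is an ASSUMED data directory; verify it before relying on this
DATA_DIR="$HOME/.superlocalmemory"
if [ -d "$DATA_DIR" ]; then
  tar czf slm-v2-backup.tar.gz -C "$HOME" .superlocalmemory
else
  echo "data dir not found at $DATA_DIR"
fi
```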

Full migration guide: Migration from V2

What Gets Installed

| Component | Size | When |
| --- | --- | --- |
| Core libraries (numpy, scipy, networkx) | ~50MB | During install |
| Search engine (sentence-transformers, torch) | ~200MB | During install |
| Embedding model (nomic-embed-text-v1.5, 768d) | ~500MB | First use or slm warmup |

Resource usage: ~500-800MB RAM peak during model load, ~20-50MB steady state. CPU-only — no GPU required. Runs on 2 vCPUs + 4GB RAM.

Troubleshooting

slm: command not found

Make sure the npm global bin directory (for npm installs) or the Python scripts directory (for pip installs) is on your PATH. Run npm config get prefix to find the npm prefix; global binaries live in its bin subdirectory. (npm 9 removed the older npm bin -g command.)
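To locate the likely install directory and add it to the current shell's PATH, something like the following works. The pip fallback assumes a per-user install; the export affects only the current session:

```shell
# Find where the slm entry point was probably installed
if command -v npm >/dev/null 2>&1; then
  TOOL_BIN="$(npm config get prefix)/bin"          # npm -g installs land here
else
  TOOL_BIN="$(python3 -m site --user-base)/bin"    # pip install --user lands here
fi
echo "expected location: $TOOL_BIN"
export PATH="$TOOL_BIN:$PATH"                      # current shell only
```

Add the export line to your shell profile (~/.bashrc, ~/.zshrc) to make the change permanent.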

Embedding model fails to download

Check internet connection. Run slm warmup manually. If behind a proxy, set HTTP_PROXY and HTTPS_PROXY.
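HTTP_PROXY and HTTPS_PROXY are standard environment variables honored by most download tooling; proxy.example.com:8080 below is a placeholder for your actual proxy host and port:

```shell
# placeholder proxy address; replace with your proxy's host:port
export HTTP_PROXY=http://proxy.example.com:8080
export HTTPS_PROXY=http://proxy.example.com:8080
# then retry the download in the same shell: slm warmup
```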

Python dependency errors

The installer prints exact fix commands. BM25 keyword search works even without embeddings — you're never fully blocked.