Quick Start

Install the SDK, run the interactive wizard, and give your AI agent persistent memory in under a minute.

Install

```sh
npm install clude-bot
```

Setup

The init wizard walks you through configuring Supabase, Anthropic, Solana, and embeddings. All steps are skippable.

```sh
npx clude-bot init
```

This generates a .env file and shows a ready-to-use code snippet.

Database

Clude uses Supabase with pgvector. Run the schema against your Supabase project:

```sh
psql $DATABASE_URL -f node_modules/clude-bot/supabase-schema.sql
```

Usage

```typescript
import { Cortex } from 'clude-bot';

const brain = new Cortex({
  supabase: {
    url: process.env.SUPABASE_URL,
    serviceKey: process.env.SUPABASE_SERVICE_KEY,
  },
});
await brain.init();

// Store a memory
await brain.store({
  type: 'episodic',
  content: 'User asked about deployment options',
  summary: 'Deployment inquiry',
  source: 'my-agent',
});

// Recall with hybrid search
const memories = await brain.recall({
  query: 'what did the user ask about',
});
console.log(memories);
```
Note: Only the supabase config is required. Everything else (Anthropic, embeddings, Solana) is optional and degrades gracefully.

Configuration

The Cortex constructor accepts a CortexConfig object. Only Supabase credentials are required.

```typescript
const brain = new Cortex({
  // Required
  supabase: {
    url: 'https://xxx.supabase.co',
    serviceKey: 'eyJ...',
  },

  // Optional — enables dream cycles + LLM importance scoring
  anthropic: {
    apiKey: 'sk-ant-...',
    model: 'claude-sonnet-4-5-20250929', // default: claude-sonnet-4-5-20250929
  },

  // Optional — enables vector similarity search
  embedding: {
    provider: 'voyage', // 'voyage' | 'openai'
    apiKey: 'pa-...',
    model: 'voyage-3',
    dimensions: 1024,
  },

  // Optional — enables on-chain memory commits
  solana: {
    rpcUrl: 'https://api.mainnet-beta.solana.com',
    botWalletPrivateKey: 'base58...',
  },
});
```

supabase (required)

| Field | Type | Description |
| --- | --- | --- |
| `url` | `string` | Supabase project URL |
| `serviceKey` | `string` | Supabase service role key |

anthropic (optional)

| Field | Type | Description |
| --- | --- | --- |
| `apiKey` | `string` | Anthropic API key |
| `model` | `string` | Model ID. Default: `claude-sonnet-4-5-20250929` |

embedding (optional)

| Field | Type | Description |
| --- | --- | --- |
| `provider` | `'voyage' \| 'openai'` | Embedding provider |
| `apiKey` | `string` | Provider API key |
| `model` | `string` | Model name (e.g. `voyage-3`) |
| `dimensions` | `number` | Vector dimensions. Default: 1024 |

solana (optional)

| Field | Type | Description |
| --- | --- | --- |
| `rpcUrl` | `string` | Solana RPC endpoint. Default: mainnet |
| `botWalletPrivateKey` | `string` | Base58 private key for memo transactions |

Storing Memories

Store memories with automatic importance scoring, concept inference, and optional on-chain commitment.

```typescript
const id = await brain.store({
  type: 'semantic',
  content: 'Users who hold >1M tokens tend to ask about governance',
  summary: 'Whale holder behavior pattern',
  source: 'analysis-agent',
  tags: ['whale', 'governance'],
  importance: 0.8,
});
console.log(id); // 42 (memory ID) or null on failure
```

StoreMemoryOptions

| Field | Type | Description |
| --- | --- | --- |
| `type` | `MemoryType` | `'episodic' \| 'semantic' \| 'procedural' \| 'self_model'` |
| `content` | `string` | Full memory content (max 5000 chars) |
| `summary` | `string` | Short summary for recall matching (max 500 chars) |
| `source` | `string` | Identifier for the agent storing the memory |
| `tags` | `string[]` | Tags for filtering (max 20) |
| `concepts` | `string[]` | Structured concepts (auto-inferred if omitted) |
| `importance` | `number` | 0-1 scale. LLM-scored if omitted (requires anthropic config) |
| `emotionalValence` | `number` | -1 (negative) to 1 (positive). Default: 0 |
| `sourceId` | `string` | External ID (e.g. tweet ID, message ID) |
| `relatedUser` | `string` | Associated user identifier |
| `relatedWallet` | `string` | Associated wallet address |
| `metadata` | `Record<string, unknown>` | Arbitrary metadata |
| `evidenceIds` | `number[]` | IDs of supporting memories |

Memory Types

Each type has a different decay rate, mirroring biological memory:

| Type | Decay / Day | Purpose |
| --- | --- | --- |
| `episodic` | 7% | Raw interactions. Who said what, when. |
| `semantic` | 2% | Distilled knowledge. Patterns and insights. |
| `procedural` | 3% | Learned behavior. What works, what doesn't. |
| `self_model` | 1% | Evolving self-understanding. Nearly permanent. |
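The per-day rates above can be sketched as multiplicative shrinkage of each memory's decay factor. This is a simplified illustration, not the SDK's actual `decay()` implementation; the helper name is hypothetical.

```typescript
type MemoryType = 'episodic' | 'semantic' | 'procedural' | 'self_model';

// Daily decay rates from the table above.
const DAILY_DECAY: Record<MemoryType, number> = {
  episodic: 0.07,
  semantic: 0.02,
  procedural: 0.03,
  self_model: 0.01,
};

// Hypothetical helper: decay factor remaining after `days` days,
// assuming simple multiplicative decay of (1 - rate) per day.
function decayFactorAfter(type: MemoryType, days: number): number {
  return Math.pow(1 - DAILY_DECAY[type], days);
}
```

Under this model, after 30 days an episodic memory retains roughly 0.93^30 ≈ 0.11 of its weight, while a self_model memory keeps about 0.99^30 ≈ 0.74, consistent with its "nearly permanent" description.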

Concept Ontology

Memories are automatically tagged with structured concepts from a controlled vocabulary of 12 labels:

market_event, holder_behavior, self_insight, social_interaction, community_pattern, token_economics, sentiment_shift, recurring_user, whale_activity, price_action, engagement_pattern, identity_evolution
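In TypeScript terms, the vocabulary could be modeled as a union derived from a constant list. This is a sketch; the SDK's exported names for these types, if any, are not shown in this document.

```typescript
// The 12-label controlled vocabulary from the ontology above.
const CONCEPTS = [
  'market_event', 'holder_behavior', 'self_insight', 'social_interaction',
  'community_pattern', 'token_economics', 'sentiment_shift', 'recurring_user',
  'whale_activity', 'price_action', 'engagement_pattern', 'identity_evolution',
] as const;

// Union type of all valid concept labels, e.g. 'whale_activity'.
type Concept = typeof CONCEPTS[number];
```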

Recalling Memories

Hybrid retrieval combines vector similarity, keyword matching, tag scoring, and graph traversal, with results ranked by the Generative Agents formula.

recall(opts?)

Returns full Memory objects ranked by composite score.

```typescript
const memories = await brain.recall({
  query: 'what does the user prefer',
  tags: ['preferences'],
  memoryTypes: ['episodic', 'semantic'],
  limit: 10,
  minImportance: 0.3,
});
```

RecallOptions

| Field | Type | Description |
| --- | --- | --- |
| `query` | `string` | Natural language search query |
| `tags` | `string[]` | Filter by tags |
| `relatedUser` | `string` | Filter by associated user |
| `memoryTypes` | `MemoryType[]` | Filter by memory type |
| `limit` | `number` | Max results to return |
| `minImportance` | `number` | Minimum importance threshold (0-1) |
| `minDecay` | `number` | Minimum decay factor threshold |
| `trackAccess` | `boolean` | Update access count and timestamp. Default: `true` |

Retrieval Scoring

Each memory is scored with the additive formula from Park et al. 2023:

```
score = (0.5 * recency)
      + (3.0 * relevance)
      + (2.0 * importance)
      + (3.0 * vector_similarity)
      + (1.5 * graph_boost)
```

All scores are gated by each memory's decay_factor.
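The formula can be sketched as a small scoring function. The 0-1 normalization of each component and the decay gate as a simple multiplier are assumptions here; the SDK's internals may differ.

```typescript
// Inputs to the composite retrieval score, each assumed normalized to 0-1.
interface ScoreInputs {
  recency: number;          // higher = accessed more recently
  relevance: number;        // keyword + tag match
  importance: number;       // stored or LLM-scored importance
  vectorSimilarity: number; // 0 when embeddings are disabled
  graphBoost: number;       // boost from linked memories
  decayFactor: number;      // gates the whole score
}

function retrievalScore(m: ScoreInputs): number {
  const base =
    0.5 * m.recency +
    3.0 * m.relevance +
    2.0 * m.importance +
    3.0 * m.vectorSimilarity +
    1.5 * m.graphBoost;
  // Assumed gating: a fully decayed memory scores zero regardless of relevance.
  return base * m.decayFactor;
}
```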

recallSummaries(opts?)

Returns lightweight MemorySummary objects (no content field). Use for progressive disclosure — list summaries first, then hydrate the ones you need.

```typescript
const summaries = await brain.recallSummaries({
  query: 'recent events',
});

// Pick the most relevant ones and hydrate
const ids = summaries.slice(0, 3).map(s => s.id);
const full = await brain.hydrate(ids);
```

hydrate(ids)

Fetch full Memory objects by ID. Useful after recallSummaries() to get content for specific memories.

```typescript
const memories = await brain.hydrate([1, 2, 3]);
```

Dream Cycles

A three-phase introspection cycle inspired by biological memory consolidation. Requires anthropic config.

dream(opts?)

Run one full dream cycle: consolidation, reflection, emergence.

```typescript
// Basic dream
await brain.dream();

// With emergence callback
await brain.dream({
  onEmergence: async (text) => {
    console.log('Emergence:', text);
    // Post to your channel, save to file, etc.
  },
});
```

Dream Phases

| Phase | What Happens |
| --- | --- |
| I. Consolidation | Generates focal questions from recent memories. Each question retrieves evidence and produces new semantic insights with citation chains. |
| II. Reflection | Self-model review against accumulated knowledge. Identity evolves based on experience. |
| III. Emergence | Examines its own existence. Output sent to the onEmergence callback if provided. |

startDreamSchedule() / stopDreamSchedule()

Run dream cycles on a 6-hour cron schedule with daily memory decay.

```typescript
brain.startDreamSchedule();

// Later...
brain.stopDreamSchedule();
```
Requires: Dream cycles need anthropic config. Calling dream() without it throws an error.

Association Graph

Typed, weighted links between memories. Connections strengthen through co-retrieval (Hebbian reinforcement).

link(sourceId, targetId, type, strength?)

```typescript
await brain.link(42, 87, 'supports', 0.8);
```

Link Types

| Type | Meaning |
| --- | --- |
| `supports` | Source provides evidence for target |
| `contradicts` | Source conflicts with target |
| `elaborates` | Source adds detail to target |
| `causes` | Source led to or caused target |
| `follows` | Source happened after target (temporal) |
| `relates` | General association |

Hebbian Reinforcement

When two linked memories are recalled together, their link strength increases by 0.05. The graph evolves through use, not programming.

During recall, linked memories receive a graph boost weighted at 1.5x in the scoring formula.
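The reinforcement step is small enough to sketch directly. The 0.05 increment comes from the text above; capping strength at 1.0 is an assumption, and the function name is hypothetical.

```typescript
// Hebbian update sketch: co-retrieval bumps link strength by 0.05,
// assumed to be capped at 1.0.
function reinforce(strength: number): number {
  return Math.min(1, strength + 0.05);
}
```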

Utilities

decay()

Apply type-specific memory decay. Returns the number of memories decayed. Typically called on a daily schedule (handled automatically by startDreamSchedule).

```typescript
const count = await brain.decay();
console.log(`Decayed ${count} memories`);
```

stats()

Returns aggregate statistics about the memory system.

```typescript
const stats = await brain.stats();
// {
//   total: 1500,
//   byType: { episodic: 900, semantic: 400, procedural: 120, self_model: 80 },
//   avgImportance: 0.65,
//   avgDecay: 0.88,
//   topTags: [{ tag: 'market', count: 200 }, ...],
//   topConcepts: [{ concept: 'social_interaction', count: 150 }, ...],
//   embeddedCount: 1200,
//   ...
// }
```

recent(hours, types?, limit?)

Get memories from the last N hours, optionally filtered by type.

```typescript
const last24h = await brain.recent(24);
const recentSemantic = await brain.recent(6, ['semantic'], 5);
```

selfModel()

Get current self-model memories. These represent the agent's evolving self-understanding.

```typescript
const identity = await brain.selfModel();
```

scoreImportance(description)

Score a text's importance using the LLM (0-1). Falls back to rule-based scoring if no anthropic config.

```typescript
const score = await brain.scoreImportance('User reported critical bug in auth flow');
// 0.85
```

formatContext(memories)

Format an array of memories into a context string suitable for LLM prompts.

```typescript
const memories = await brain.recall({ query: 'user context' });
const context = brain.formatContext(memories);
// Use `context` in your LLM system prompt
```

inferConcepts(summary, source, tags)

Infer structured concepts from memory metadata. Returns an array of concept labels from the controlled vocabulary.

```typescript
const concepts = brain.inferConcepts(
  'Large holder sold 500k tokens',
  'market-watcher',
  ['whale', 'sell'],
);
// ['whale_activity', 'holder_behavior', 'token_economics']
```

destroy()

Clean up resources, stop dream schedules, and remove event listeners.

```typescript
brain.destroy();
```

Events

Listen for memory system events via the internal event bus.

on('memory:stored', handler)

Fired every time a memory is stored. Receives importance score and memory type.

```typescript
brain.on('memory:stored', (payload) => {
  console.log(payload.importance); // 0.75
  console.log(payload.memoryType); // 'episodic'
});
```

Episodic memories automatically accumulate importance toward triggering dream cycles.

Database Schema

Clude uses Supabase PostgreSQL with the pgvector extension for vector similarity search.

Setup

The schema file is included in the npm package:

```sh
psql $DATABASE_URL -f node_modules/clude-bot/supabase-schema.sql
```

Or import it directly:

```typescript
import schema from 'clude-bot/schema'; // Path to supabase-schema.sql
```

Tables

| Table | Purpose |
| --- | --- |
| `memories` | Core memory store with pgvector embedding column |
| `memory_links` | Association graph edges (typed, weighted) |
| `memory_fragments` | Per-fragment embeddings for granular vector search |
| `dream_sessions` | Dream cycle history and outputs |
| `linked_wallets` | X handle to Solana wallet mappings |

pgvector

The schema creates HNSW indexes for fast vector search. Make sure the vector extension is enabled in your Supabase project (it is by default).

Note: HNSW indexes perform best after data is loaded. If you're starting fresh, the system gracefully falls back to keyword-only retrieval until enough embeddings are present.

Graceful Degradation

The SDK works with minimal config and progressively unlocks features as you add more.

| Feature | Config Needed | Without It |
| --- | --- | --- |
| Store / Recall | `supabase` | Constructor throws |
| Vector search | `embedding` | Keyword + tag scoring only |
| LLM importance | `anthropic` | Rule-based `calculateImportance()` |
| Dream cycles | `anthropic` | `dream()` throws clear error |
| On-chain commits | `solana` | Silently skipped |
| Emergence output | `onEmergence` | Output discarded (SDK never tweets) |
Minimal setup: With just supabase config, you get full store/recall with keyword matching, tag scoring, type-specific decay, and the association graph. Add embedding and anthropic configs later as needed.