Dual-Brain Skill
What it does: Provides cognitive diversity by automatically generating perspectives from a secondary LLM (different from your primary model) for every user message.
For Agents
Before responding to any user message, check for a dual-brain perspective:
cat ~/.dual-brain/perspectives/{your-agent-id}-latest.md
If the file exists and contains a recent perspective (check timestamp in comment), consider it alongside your own reasoning. The perspective may highlight:
- Angles you might miss
- Alternative approaches
- Important things to verify
Integration:
- Perspectives are generated automatically by a background daemon
- No manual action required on your part
- Simply read the file before forming your response
- Synthesize both viewpoints (yours + the secondary LLM's)
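The check-then-read step above can be sketched in a few lines. This is a minimal sketch, not the skill's actual code: the `<!-- timestamp: ... -->` comment format and the 120-second freshness threshold are assumptions for illustration.

```python
# Hypothetical sketch: read a dual-brain perspective file and check freshness.
# The timestamp comment format and MAX_AGE_SECONDS are assumptions, not a
# documented contract of the skill.
import os
import re
import time

MAX_AGE_SECONDS = 120  # assumed: treat older perspectives as stale

def read_fresh_perspective(path, now=None):
    """Return the perspective text if the file exists and is recent, else None."""
    if not os.path.exists(path):
        return None
    text = open(path, encoding="utf-8").read()
    match = re.search(r"<!--\s*timestamp:\s*(\d+)\s*-->", text)
    if not match:
        return None
    age = (now or time.time()) - int(match.group(1))
    return text if age <= MAX_AGE_SECONDS else None
```

If the function returns `None`, respond from your own reasoning alone; a stale or missing perspective should never block the reply.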
Example workflow:
- User sends message
- Daemon intercepts, sends to secondary LLM (e.g., Kimi, GPT-4, Llama)
- Secondary LLM generates 2-3 sentence perspective
- Written to ~/.dual-brain/perspectives/{agent-id}-latest.md
- You (primary agent) read it before responding
- You synthesize both perspectives into your answer
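The daemon side of this workflow can be sketched as follows. It is a simplified illustration, assuming a hypothetical `ask_secondary_llm()` provider client; the real daemon polls OpenClaw's JSONL session files rather than being handed messages directly.

```python
# Sketch of the daemon's core step: generate a perspective for a user message
# and write it where the primary agent looks. ask_secondary_llm() is a
# placeholder for the configured provider (ollama / moonshot / openai / groq).
import time
from pathlib import Path

PERSPECTIVES_DIR = Path.home() / ".dual-brain" / "perspectives"

def ask_secondary_llm(message):
    # Placeholder: the real implementation calls the configured provider's API
    # and returns a 2-3 sentence perspective.
    return f"Consider alternative approaches and edge cases in: {message[:40]}"

def write_perspective(agent_id, user_message):
    """Generate a perspective and write it to {agent_id}-latest.md."""
    PERSPECTIVES_DIR.mkdir(parents=True, exist_ok=True)
    perspective = ask_secondary_llm(user_message)
    out = PERSPECTIVES_DIR / f"{agent_id}-latest.md"
    out.write_text(f"<!-- timestamp: {int(time.time())} -->\n{perspective}\n")
    return out
```

Writing to a single `-latest.md` file per agent keeps the agent-side check to one `cat`, at the cost of only ever seeing the most recent perspective.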
For Humans
Setup:
npm install -g openclaw-dual-brain
dual-brain setup # Interactive configuration
dual-brain start # Start daemon
Providers:
- ollama - Local models (zero cost, requires Ollama)
- moonshot - Kimi/Moonshot API (Chinese LLM, fast)
- openai - GPT-4o, GPT-4-turbo, etc.
- groq - Fast inference with Llama models
Commands:
- dual-brain setup - Configure provider, model, API key
- dual-brain start - Run daemon (foreground)
- dual-brain stop - Stop daemon
- dual-brain status - Check running status
- dual-brain logs - View recent activity
- dual-brain install-daemon - Install as system service
Config location: ~/.dual-brain/config.json
Perspectives location: ~/.dual-brain/perspectives/
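For reference, a config might look something like the fragment below. The field names here are illustrative assumptions, not the file's documented schema; run `dual-brain setup` rather than writing this file by hand.

```json
{
  "provider": "ollama",
  "model": "llama3.1:8b",
  "apiKey": "",
  "pollIntervalMs": 1000
}
```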
Architecture
User Message → OpenClaw Session (JSONL)
        ↓
Dual-Brain Daemon (polling)
        ↓
Secondary LLM Provider
(ollama/moonshot/openai/groq)
        ↓
Perspective Generated (2-3 sentences)
        ↓
~/.dual-brain/perspectives/{agent}-latest.md
        ↓
Primary Agent reads & synthesizes
        ↓
Response to User
Benefits
- Cognitive diversity - Two AI models = broader perspective
- Bias mitigation - Different training data/approaches
- Quality assurance - Second opinion catches issues
- Zero agent overhead - Runs in background, <1s latency
- Provider flexibility - Choose cost vs. quality tradeoff
Optional: Engram Integration
If Engram (semantic memory) is running on localhost:3400, perspectives are also stored as memories for long-term recall.
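The hand-off to Engram might be sketched as a best-effort HTTP POST. The endpoint path and payload shape below are assumptions made for illustration; consult Engram's own API documentation for the real contract.

```python
# Hypothetical sketch of the optional Engram hand-off. The /memories endpoint
# and the JSON payload fields are assumptions, not Engram's documented API.
import json
import urllib.request

ENGRAM_URL = "http://localhost:3400/memories"  # assumed endpoint

def store_in_engram(agent_id, perspective):
    """Best-effort store; returns False (and skips) if Engram is not running."""
    payload = json.dumps({"agent": agent_id, "text": perspective}).encode()
    req = urllib.request.Request(
        ENGRAM_URL, data=payload, headers={"Content-Type": "application/json"}
    )
    try:
        with urllib.request.urlopen(req, timeout=2) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False  # Engram down: dual-brain still works without it
```

Because the call is wrapped in a broad `except OSError`, a missing or crashed Engram instance degrades gracefully to file-only perspectives.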