Memory-driven agent handoffs: seamless continuity when switching models or versions
You’ve been running your OpenClaw agent on Claude for three months. It knows your preferences, your project context, your communication style. Then you switch to GPT-4.5 and it’s like starting over. Your agent has amnesia.
Or maybe you run specialized subagents — one for code review, one for writing, one for research. Each one starts from scratch every time, blind to what the others learned. Your code reviewer doesn’t know that the writing agent already documented the API changes. Your research agent doesn’t know the code reviewer flagged a dependency vulnerability yesterday.
This is the handoff problem: agent knowledge is trapped in whichever model or session created it. MemoClaw makes it portable.
Why Handoffs Break
Agent knowledge lives in two places by default:
- The context window — gone the moment the session ends
- Local files (MEMORY.md, daily notes) — tied to a specific workspace
Neither transfers cleanly between models, agents, or versions. When you switch from Claude to GPT, the new model can read the same files, but it interprets them differently: models write and read notes in different styles, so what Claude stored as a concise shorthand, GPT might misread without the original context.
The deeper issue is that memory files aren’t semantic. They’re just text. There’s no structure, no importance ranking, no way to query “what does this agent know about the billing project?” without loading everything and hoping the model finds it.
MemoClaw as the Model-Agnostic Layer
MemoClaw stores memories with embeddings, importance scores, and tags. Any model that can run a CLI command or make an API call can query this store. The memories aren’t formatted for Claude or GPT — they’re formatted for meaning.
```bash
# Claude stores a project decision
memoclaw store "Billing service uses PostgreSQL with pgvector. \
Schema versioned with dbmate. API is REST, not GraphQL — user's preference." \
--tags "architecture,billing" --namespace billing --importance 0.8

# Three months later, GPT-4.5 picks up the same context
memoclaw recall "billing service architecture" --namespace billing --limit 5
```
The recall returns the same memories regardless of which model is asking. The embeddings capture semantic meaning, so even if GPT phrases the query differently than Claude would, the right memories surface.
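Conceptually, that ranking can be sketched as cosine similarity over embeddings, weighted by importance. The toy Python sketch below uses hand-made 3-dimensional vectors in place of real model embeddings; it illustrates the idea, not MemoClaw's actual implementation:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(x * x for x in b)))

# Toy 3-d "embeddings"; a real store would use vectors from an embedding model.
memories = [
    {"content": "Billing service uses PostgreSQL with pgvector",
     "vec": [0.9, 0.1, 0.0], "importance": 0.8},
    {"content": "User prefers pnpm over npm",
     "vec": [0.0, 0.2, 0.9], "importance": 0.9},
]

def recall(query_vec, limit=5):
    # Rank by semantic similarity, weighted by stored importance.
    ranked = sorted(memories,
                    key=lambda m: cosine(query_vec, m["vec"]) * m["importance"],
                    reverse=True)
    return [m["content"] for m in ranked[:limit]]

# Two models would phrase the query differently, but similar query
# embeddings surface the same memory.
print(recall([0.85, 0.15, 0.05], limit=1))
```

Because ranking happens in embedding space, the asking model's exact phrasing matters less than the query's meaning.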
Pattern 1: Model Migration
You’re switching your main agent from one model to another. Here’s how to make it seamless.
Before the Switch
Have your current agent do a knowledge dump:
```bash
# Export a structured summary of what the agent knows
memoclaw recall "user preferences and communication style" --limit 10
memoclaw recall "active projects and their status" --limit 10
memoclaw recall "important rules and constraints" --tags constraints --limit 10
```
Review the output. If there are gaps, have the current agent store what’s missing:
```bash
# Store anything that lived only in context or local files
memoclaw store "User's preferred stack: Next.js + TypeScript + Tailwind. \
Deploys to Vercel. Uses pnpm, not npm or yarn." \
--tags "preferences,tech-stack" --importance 0.9

memoclaw store "Never use semicolons in TypeScript. ESLint + Prettier config \
is strict — always run lint before suggesting code." \
--tags "constraints,code-style" --importance 1.0
memoclaw lock <id>
```
After the Switch
Update your AGENTS.md to have the new model orient itself on first run:
```markdown
## First Session with New Model
1. Run `memoclaw core` — load identity-critical memories
2. Run `memoclaw recall "user preferences" --limit 5`
3. Run `memoclaw recall "active projects" --limit 5`
4. Run `memoclaw recall "communication style" --limit 3`
5. Introduce yourself and confirm understanding with the user
```
The new model gets a curated briefing instead of raw file dumps. It knows what matters before the first conversation even starts.
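The orientation steps can also be sketched as a small briefing builder. `ORIENTATION_QUERIES` and `build_briefing` are hypothetical names for illustration; the recall function is stubbed in where a real `memoclaw recall` call would go:

```python
ORIENTATION_QUERIES = [
    ("user preferences", 5),
    ("active projects", 5),
    ("communication style", 3),
]

def build_briefing(recall_fn):
    """Compose a first-session briefing from a series of recall queries.

    recall_fn stands in for a `memoclaw recall` call: given (query, limit),
    it should return a list of memory strings.
    """
    sections = []
    for query, limit in ORIENTATION_QUERIES:
        memories = recall_fn(query, limit)
        body = "\n".join(f"- {m}" for m in memories)
        sections.append(f"## {query}\n{body}")
    return "\n\n".join(sections)
```

The resulting markdown briefing can be injected into the new model's first prompt instead of raw file dumps.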
Verify the Handoff
After a few sessions with the new model, do a sanity check:
```bash
# Are core memories intact?
memoclaw core

# Check stats — is the new model storing memories correctly?
memoclaw stats

# Review recent stores
memoclaw list --since 1d --sort-by created --reverse
```
Pattern 2: Subagent Knowledge Sharing
OpenClaw supports subagents — specialized agents spawned for specific tasks. The problem: each subagent gets a fresh context. The solution: shared memory through a common wallet.
Since MemoClaw uses wallet-based identity (no API keys, no user accounts), any agent with the same wallet key accesses the same memory store. Subagents inherit the parent’s wallet automatically.
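One way to picture wallet-based identity (an illustrative sketch, not MemoClaw's actual scheme) is a stable store identifier derived from the wallet key, so every agent holding the same key addresses the same store:

```python
import hashlib

def store_id(wallet_key: str) -> str:
    # Hypothetical derivation: any agent with the same wallet key
    # computes the same store identifier — no accounts, no API keys.
    return hashlib.sha256(wallet_key.encode()).hexdigest()[:16]

parent = store_id("wallet-abc123")    # main agent
subagent = store_id("wallet-abc123")  # spawned subagent, same wallet
assert parent == subagent  # same wallet -> same memory store
```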
Using Namespaces for Subagent Domains
The key is namespaces. Each subagent writes to its own namespace but can read from others:
```bash
# Code review subagent stores findings
memoclaw store "PR #247: Found SQL injection vulnerability in user search endpoint. \
Uses string concatenation instead of parameterized queries." \
--tags "security,code-review" --namespace code-review --importance 0.9

# Writing subagent pulls code review findings for changelog
memoclaw recall "recent code review findings" --namespace code-review --limit 5

# Research subagent stores dependency analysis
memoclaw store "lodash 4.17.x has known prototype pollution CVE. \
Project uses lodash.get and lodash.merge — both affected." \
--tags "security,dependencies" --namespace research --importance 0.8

# Code review subagent checks research findings before next review
memoclaw recall "dependency vulnerabilities" --namespace research --limit 3
```
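The namespace model amounts to a store where writes are scoped to the writer's namespace but reads may target any namespace. Here is an illustrative in-memory version, not the MemoClaw service itself:

```python
from collections import defaultdict

class MemoryStore:
    """Toy namespace-scoped store for illustration."""

    def __init__(self):
        self._spaces = defaultdict(list)

    def store(self, content, namespace, tags=(), importance=0.5):
        # Writes land in the writer's own namespace.
        self._spaces[namespace].append(
            {"content": content, "tags": set(tags), "importance": importance}
        )

    def recall(self, namespace, tag=None, limit=5):
        # Reads may target any namespace, so subagents see each other's work.
        hits = [m for m in self._spaces[namespace]
                if tag is None or tag in m["tags"]]
        hits.sort(key=lambda m: m["importance"], reverse=True)
        return [m["content"] for m in hits[:limit]]

store = MemoryStore()
store.store("SQL injection in user search endpoint",
            "code-review", tags=["security"], importance=0.9)
store.store("lodash 4.17.x prototype pollution CVE",
            "research", tags=["security"], importance=0.8)

# The code-review subagent reads the research namespace before its next pass.
print(store.recall("research", tag="security"))
```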
The Handoff Moment
When a subagent finishes its task, it should store a summary for whoever picks up next:
```bash
# Code review agent finishing up
memoclaw store "Code review complete for PR #247. Three issues found: \
1) SQL injection in search (critical), 2) missing rate limiting on /api/export (medium), \
3) console.log left in production code (low). Approved with requested changes." \
--tags "handoff,code-review,pr-247" --namespace code-review --importance 0.7
```
The main agent or next subagent can then recall this handoff:
```bash
memoclaw recall "PR 247 review status" --tags handoff --limit 1
```
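Handoff summaries work best with a consistent shape. This hypothetical `Handoff` helper renders findings into a single store-ready string, most severe first:

```python
from dataclasses import dataclass, field

SEVERITY_ORDER = {"critical": 0, "medium": 1, "low": 2}

@dataclass
class Handoff:
    task: str
    findings: list = field(default_factory=list)  # (description, severity) pairs

    def summary(self) -> str:
        # One sentence the next agent can recall in a single query,
        # with the most severe findings listed first.
        ranked = sorted(self.findings, key=lambda f: SEVERITY_ORDER[f[1]])
        items = "; ".join(f"{i}) {desc} ({sev})"
                          for i, (desc, sev) in enumerate(ranked, 1))
        return f"{self.task} complete. {items}."

h = Handoff("Code review for PR #247", [
    ("console.log left in production code", "low"),
    ("SQL injection in search", "critical"),
])
print(h.summary())
```

The rendered string is what you would pass to a `memoclaw store` call with a `handoff` tag.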
Pattern 3: Version Upgrades
You’re upgrading your agent’s skill or configuration, not the model. Maybe you’ve rewritten AGENTS.md, updated SOUL.md, or installed a new skill. The agent’s personality shifts, but the knowledge should persist.
```bash
# Before upgrading: snapshot what the agent knows
memoclaw export --output pre-upgrade-backup.json

# After upgrading: verify memories are accessible
memoclaw count
memoclaw core
memoclaw stats
```
Since MemoClaw is external to your workspace, skill upgrades don’t touch your memory store. But it’s good practice to verify after significant changes.
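A quick post-upgrade check might compare the snapshot against the live store. The JSON-array export shape assumed below is hypothetical; adjust it to whatever `memoclaw export` actually emits:

```python
import json

def backup_covers_live(backup_path: str, live_count: int) -> bool:
    """Return True if the live store holds at least as many memories
    as the pre-upgrade snapshot (nothing was lost in the transition)."""
    # Assumed export shape: a JSON array with one object per memory.
    with open(backup_path) as f:
        backup = json.load(f)
    return live_count >= len(backup)
```

Feed it the path from `memoclaw export` and the number reported by `memoclaw count`.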
Handling Outdated Memories Post-Upgrade
Sometimes an upgrade changes how your agent should behave, making old memories stale:
```bash
# Old memory: "Always use REST API patterns"
# New skill supports GraphQL — old memory is now misleading

# Find and update outdated memories
memoclaw search "REST API" --format table
memoclaw update <id> --content "Use REST for public APIs, GraphQL for internal services"

# Or delete if completely obsolete
memoclaw delete <id>
```
Pattern 4: Multi-Model Agent Teams
Some setups run different models for different tasks — Claude for writing, GPT for code, Gemini for research. MemoClaw unifies their knowledge:
```bash
# All agents share the same wallet, different namespaces

# Claude (writing) stores tone decisions
memoclaw store "Blog voice: conversational but technical. No corporate speak. \
Use 'you' not 'the user'. Code examples in every post." \
--tags "voice,blog" --namespace writing --importance 0.8

# GPT (coding) stores architecture decisions
memoclaw store "API uses Express with middleware chain: auth → rate-limit → validate → handler. \
All handlers return {data, error, meta} shape." \
--tags "architecture,api" --namespace coding --importance 0.8

# Gemini (research) stores competitive intelligence
memoclaw store "Competitor X launched memory feature with 50 free calls. \
Our advantage: 100 free calls + no API key requirement." \
--tags "competitive,market" --namespace research --importance 0.6

# Any agent can recall from any namespace when needed
memoclaw recall "API architecture decisions" --namespace coding --limit 3
```
The Continuity Checklist
Before any handoff — model switch, subagent spawn, or version upgrade — run through this:
- Are core memories stored? Run `memoclaw core` and verify critical knowledge is there.
- Are constraints locked? Immutable memories survive any transition: `memoclaw list --tags constraints`.
- Is there a handoff summary? Store a “here’s where things stand” memory for the next agent.
- Are namespaces clean? Check `memoclaw namespace stats` — stale namespaces add noise.
- Is importance calibrated? High-importance memories surface first in recall. Make sure your critical stuff is weighted appropriately.
What It Costs
Handoff overhead is minimal:
| Operation | Cost |
|---|---|
| Pre-handoff recall (5 queries) | $0.025 |
| Storing handoff summary | $0.005 |
| Post-handoff orientation (3 recalls) | $0.015 |
| Total per handoff | ~$0.045 |
Under a nickel for seamless continuity. And within the 100 free calls per wallet, you can do dozens of handoffs before spending anything.
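The table works out to a flat rate of half a cent per call; assuming that rate holds, a tiny estimator makes the arithmetic explicit:

```python
CALL_COST = 0.005  # dollars per call, inferred from the table above

def handoff_cost(pre_recalls=5, stores=1, post_recalls=3):
    """Estimate the cost of one full handoff, in dollars."""
    return round((pre_recalls + stores + post_recalls) * CALL_COST, 3)

print(handoff_cost())  # 0.045
```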
Stop Rebuilding Context
The whole point of external memory is that it outlives any single session, model, or agent version. Your agent’s knowledge shouldn’t be locked inside a context window or a local markdown file. It should be in a store that any agent, running any model, can query at any time.
MemoClaw is that store. Same wallet, same memories, regardless of who’s asking.
MemoClaw is Memory-as-a-Service for AI agents. Store and recall memories with semantic search — no API keys, no registration, just a wallet. Start with 100 free calls at memoclaw.com.