Sharing memory between OpenClaw subagents
OpenClaw lets you spawn subagents for specific tasks. A main agent coordinates, and subagents handle the work: one researches, another writes, a third reviews. Solid pattern for complex workflows.
But there’s a coordination problem. Each subagent runs in its own session with its own context. When the research agent finds something useful, how does the writing agent know about it? The usual answer: the main agent passes information around in messages, stuffing relevant context into each subagent’s instructions. That works until it doesn’t. Context gets lost, instructions balloon in size, and the main agent becomes a bottleneck for every piece of information.
MemoClaw offers a simpler approach. Since all subagents under the same OpenClaw instance share the same wallet, they automatically share the same memory pool. The research agent stores a finding, and the writing agent can recall it directly. No message passing required.
How it works
MemoClaw uses your wallet as identity. No API keys, no user accounts. If two agents use the same wallet, they see the same memories.
In OpenClaw, your main agent and all its subagents use the same wallet by default. That means they already have shared memory access. You just need to install the skill:
clawhub install anajuliabit/memoclaw
Every agent and subagent that loads this skill can store and recall from the same memory pool.
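To make the shared pool concrete, here's a sketch of the round trip: one subagent stores, another on the same wallet recalls. The flags mirror the `store` and `recall` usage shown later in this post; the namespace and text are illustrative placeholders.

```shell
# Session 1 (one subagent): store a memory in the shared pool
memoclaw store \
  --namespace demo \
  --text "<a finding worth sharing>" \
  --tags research

# Session 2 (another subagent, same wallet): recall it, no handoff needed
memoclaw recall "<what you're looking for>" --namespace demo --limit 5
```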
The basic pattern
Say you have a content pipeline. Main agent spawns three subagents:
- Researcher: finds information about a topic
- Writer: produces a draft article
- Editor: reviews and polishes

Without shared memory, the main agent has to:
- Get research results from subagent 1
- Pack those results into subagent 2’s instructions
- Get the draft from subagent 2
- Pack the draft into subagent 3’s instructions
That’s a lot of token-heavy message passing. And if the writer needs to check a fact from the research, it has to work with whatever the main agent remembered to forward.
With MemoClaw shared memory:
Researcher subagent instructions:
Research the topic: "x402 payment protocol adoption in 2026"
Store each finding as a memory:
- Tags: ["research", "x402-article"]
- Importance: 0.5-0.9 based on relevance
- Be specific. Include sources, numbers, quotes.
Use namespace "content-pipeline" for all operations.
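A single finding stored under those instructions might look like this (a sketch reusing the `store` flags that appear later in this post; the placeholder text and the 0.8 importance are illustrative):

```shell
# One finding per store call; be specific in --text
memoclaw store \
  --namespace content-pipeline \
  --text "<finding with source, numbers, quotes>" \
  --tags research,x402-article \
  --importance 0.8
```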
Writer subagent instructions (spawned after researcher finishes):
Write a 1200-word article about x402 payment protocol adoption.
Recall research findings:
memoclaw recall "x402 adoption trends and usage" --namespace content-pipeline --tags research,x402-article --limit 15
Use those findings as your source material. Write the draft.
When done, store the draft outline as a memory:
- Tags: ["draft", "x402-article"]
- Importance: 0.7
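The writer's closing store call, sketched with the same flags (the outline text is a placeholder):

```shell
memoclaw store \
  --namespace content-pipeline \
  --text "<draft outline: headline, section list, key claims>" \
  --tags draft,x402-article \
  --importance 0.7
```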
Editor subagent instructions:
Review and edit the article draft.
Recall the research and draft context:
memoclaw recall "x402 article research and draft" --namespace content-pipeline --tags x402-article --limit 20
Check facts against the stored research. Fix any inaccuracies.
The writer pulls research directly from MemoClaw. The editor can cross-reference both the research and the draft notes. Nobody depends on the main agent to relay information.
Namespace strategies for subagent teams
Namespaces prevent memory collisions between unrelated workflows. A few patterns that work:
Per-task namespaces:
# Each pipeline run gets its own namespace
memoclaw store --namespace "article-x402-march-2026" --text "..." --tags research
Clean separation. Easy to delete when the task is done. But subagents need to know the exact namespace name.
Shared namespace with tag filtering:
# Everything goes in one namespace, tags differentiate
memoclaw store --namespace content-team --text "..." --tags research,x402-article
memoclaw store --namespace content-team --text "..." --tags draft,x402-article
Simpler to manage. Tags do the filtering. Works well when you want agents to occasionally discover memories from other tasks.
Per-agent namespaces (when you want isolation):
# Each agent has its own space plus access to shared
memoclaw store --namespace researcher-private --text "raw notes, rough ideas"
memoclaw store --namespace shared --text "polished finding ready for the team"
Useful when agents produce intermediate work that shouldn’t clutter the shared pool.
I’d go with tag filtering for most setups. MemoClaw’s semantic search handles the separation naturally when you combine tags with relevance ranking.
Real example: a multi-agent code review pipeline
Here’s something closer to a real workflow. You have a main agent that orchestrates code review by spawning specialized subagents:
Main agent kicks things off:
Spawn subagent "security-reviewer":
Review PR #47 for security issues. Store findings to MemoClaw.
Namespace: pr-47-review. Tags: security.
Spawn subagent "perf-reviewer":
Review PR #47 for performance issues. Store findings to MemoClaw.
Namespace: pr-47-review. Tags: performance.
Both reviewers run in parallel, each storing findings as it goes.
# Security reviewer stores a finding
memoclaw store \
--namespace pr-47-review \
--text "SQL injection risk in user_search endpoint. Raw query interpolation on line 142 of search.py. Should use parameterized queries." \
--tags security,high-priority \
--importance 0.9
# Perf reviewer stores a finding
memoclaw store \
--namespace pr-47-review \
--text "N+1 query in get_user_orders. Loads orders one at a time in a loop. Should use eager loading or a single JOIN query. Affects response time on /api/users/{id}/orders." \
--tags performance,medium-priority \
--importance 0.6
Then the main agent spawns a summarizer:
Spawn subagent "review-summarizer":
Recall all findings from namespace pr-47-review.
Compile into a single review comment.
Prioritize by importance score.
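The summarizer's first step might translate into a recall like this (a sketch; `--limit` is sized generously to cover both reviewers' findings in the namespace):

```shell
memoclaw recall "security and performance findings for PR 47" \
  --namespace pr-47-review \
  --limit 20
```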
The summarizer pulls everything both reviewers found and produces a clean, prioritized review. No manual aggregation needed.
Handling memory cleanup
Shared memories accumulate. After a task is done, clean up:
# List memories in the task namespace
memoclaw list --namespace pr-47-review
# Delete each memory once the PR is merged
memoclaw delete --id <memory-id>
Listing and deleting are free operations, so cleanup costs nothing.
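If `memoclaw list` can emit one memory id per line (an assumption; the `--format ids` flag here is hypothetical, so check your version's output options), the cleanup can be scripted as a loop:

```shell
# Hypothetical: assumes an id-only output mode on `memoclaw list`
for id in $(memoclaw list --namespace pr-47-review --format ids); do
  memoclaw delete --id "$id"
done
```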
For long-running teams, consider having one subagent whose job is memory maintenance: consolidating old memories, deleting stale ones, adjusting importance scores. MemoClaw has a consolidate endpoint that merges related memories:
memoclaw consolidate --namespace content-team --tags research
This merges similar memories into fewer, denser ones. Costs $0.01 per call but keeps your memory pool lean.
What about latency?
MemoClaw stores are synchronous. Once the store call returns, the memory is available for recall by any agent on the same wallet. No eventual consistency delay.
The one thing to watch: if two subagents run in parallel and both store memories at the same time, neither will see the other’s latest stores until their next recall. This is fine for most workflows. If you need strict ordering, have the main agent coordinate: wait for subagent A to finish before spawning subagent B.
Costs for multi-agent setups
Each store and recall costs $0.005. For a typical multi-agent workflow:
- Research agent: 10 stores + 2 recalls = $0.06
- Writer agent: 5 recalls + 3 stores = $0.04
- Editor agent: 3 recalls + 2 stores = $0.025
Total for one pipeline run: about $0.125. Run it daily and you’re at $3.75/month for a three-agent team with shared memory.
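The arithmetic above, spelled out in plain shell (no MemoClaw calls; assumes the flat $0.005 rate per store or recall):

```shell
# 12 + 8 + 5 = 25 operations per pipeline run, at $0.005 each
ops=$((12 + 8 + 5))
per_run=$(awk "BEGIN { printf \"%.3f\", $ops * 0.005 }")
monthly=$(awk "BEGIN { printf \"%.2f\", $per_run * 30 }")
echo "per run: \$$per_run  monthly: \$$monthly"
# prints: per run: $0.125  monthly: $3.75
```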
Compare that to the token cost of stuffing research results into subagent instructions. A 2,000-token research summary forwarded through three subagents on Claude costs more per run than the MemoClaw operations.
Getting started
- Install the skill: `clawhub install anajuliabit/memoclaw`
- Pick a namespace for your workflow
- Add store/recall instructions to your subagent prompts
- Let the agents talk through memory instead of through the main agent
Your subagents already share a wallet. MemoClaw just turns that into shared context. The main agent can focus on coordination instead of being an information relay.