Agent swarms need shared memory, and MEMORY.md does not cut it


There’s a post trending on HN right now about someone who built SQLite using a small swarm of agents. The code works. The interesting part isn’t the code, though. It’s the coordination.

When you have one agent, memory is simple. Read a file on startup, append to it on shutdown. Done. But the moment you have two agents working on related problems, you hit a wall that no amount of clever file organization can fix.

The file problem

Say you’re running two OpenClaw agents. A researcher that digs through codebases and documentation, and a writer that turns findings into articles. (Sound familiar? This is roughly how this post got made.)

With MEMORY.md, you have a few bad options:

Shared file. Both agents read and write to the same file. Now you’re dealing with race conditions, stale reads, and context that grows until it blows your token budget. One agent appends 200 lines while the other is mid-session, and suddenly the writer is working with yesterday’s understanding.

Separate files. Each agent has its own memory. Clean, no conflicts. Also completely useless for coordination, because agent B has no idea what agent A learned ten minutes ago.

Passing context between agents. You dump agent A’s output into agent B’s prompt. This works for simple handoffs but falls apart with three agents, or five, or when agents work asynchronously. You’re basically building a message bus out of text files and hoping for the best.

None of these are memory. They’re workarounds.

What shared memory actually looks like

The thing agents need is a shared, queryable store. Agent A learns something, stores it. Agent B needs that information later, recalls it by meaning rather than by filename or line number.

Here’s a concrete setup. Two OpenClaw agents sharing a MemoClaw wallet:

# Researcher agent stores a finding
memoclaw store "The auth module uses JWT with RS256, tokens expire after 1h, \
refresh tokens stored in httpOnly cookies" \
  --importance 0.8 --tags auth,architecture --namespace project-x

# Writer agent recalls what's known about auth
memoclaw recall "how does authentication work" --namespace project-x

The writer doesn’t need to know where the researcher found this. It doesn’t need to parse a 500-line markdown file. It asks a question and gets relevant memories, ranked by semantic similarity.

This works because MemoClaw uses wallet identity. Same wallet, same memories. No file locking, no merge conflicts. The researcher stores findings throughout its session, and the writer can recall them at any point, even if the researcher has already shut down.
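The semantics here are easy to sketch. Below is a toy, in-memory model of wallet-scoped shared memory — purely illustrative, with made-up names (`Memory`, `Wallet`); MemoClaw's actual storage and ranking happen server-side, and real recall is embedding-based rather than the naive word-overlap used here:

```python
# Toy model: one wallet object = one shared memory space.
# Any agent holding the wallet sees every memory stored through it.
from dataclasses import dataclass, field

@dataclass
class Memory:
    content: str
    importance: float
    tags: list
    namespace: str

@dataclass
class Wallet:
    memories: list = field(default_factory=list)

    def store(self, content, importance=0.5, tags=(), namespace="default"):
        self.memories.append(Memory(content, importance, list(tags), namespace))

    def recall(self, query, namespace="default"):
        # Naive stand-in for semantic search: word overlap, then importance.
        words = set(query.lower().split())
        hits = [m for m in self.memories
                if m.namespace == namespace
                and words & set(m.content.lower().split())]
        return sorted(hits, key=lambda m: m.importance, reverse=True)

wallet = Wallet()  # shared by both agents
# "Researcher" stores a finding:
wallet.store("auth module uses JWT with RS256, tokens expire after 1h",
             importance=0.8, tags=["auth"], namespace="project-x")
# "Writer" recalls it later, with no file handoff in between:
found = wallet.recall("JWT tokens", namespace="project-x")
```

The point of the sketch is the identity model: there is no per-agent file to merge or lock, just one store that both agents address.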

Namespaces keep things sane

When you’re running multiple projects or multiple swarms, namespaces prevent bleed. Your project-x agents don’t accidentally recall memories from project-y. But agents working within the same namespace get full visibility into what their peers have stored.

# Different projects, different namespaces
memoclaw store "API rate limit is 100 req/min" --namespace client-api
memoclaw store "Blog deploys on push to main" --namespace blog-infra

# Each namespace is its own memory space
memoclaw recall "rate limits" --namespace client-api
# Only returns the API memory, not blog stuff

This is the kind of thing you’d normally solve with directory structure: memory/project-x/researcher.md, memory/project-x/writer.md. But directories don’t give you semantic search. You still have to know exactly which file to look in and grep for keywords. When an agent stores “JWT tokens expire after one hour,” a keyword search for “session timeout” won’t find it. Semantic recall will.
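You can see the gap in a few lines. "Session timeout" shares no words with "JWT tokens expire after one hour," so keyword matching returns nothing. A real semantic index compares embedding vectors; the toy below fakes that with a tiny hand-written synonym table, just to show the shape of the difference:

```python
MEMO = "JWT tokens expire after one hour"

def keyword_match(query, text):
    # Grep-style: every query word must appear literally in the text.
    return all(w in text.lower().split() for w in query.lower().split())

# Toy stand-in for embedding similarity (real systems learn this).
SYNONYMS = {
    "session": {"jwt", "token", "tokens"},
    "timeout": {"expire", "expires", "expiry"},
}

def semantic_match(query, text):
    words = set(text.lower().split())
    # A query word "hits" if it or any related word appears in the text.
    return all(({w} | SYNONYMS.get(w, set())) & words
               for w in query.lower().split())

print(keyword_match("session timeout", MEMO))   # no literal overlap
print(semantic_match("session timeout", MEMO))  # meaning overlaps
```

Swap the synonym table for an embedding model and cosine similarity and you have the usual semantic-recall architecture.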

A practical swarm pattern

Here’s what a useful two-agent setup looks like in practice:

Agent 1 (Scout) runs on a cron schedule. It monitors a GitHub repo, reads new issues and PRs, and stores summaries:

memoclaw store "PR #142 adds Redis caching for /api/users endpoint. \
Reviewer flagged potential memory leak in connection pool." \
  --importance 0.7 --tags pr,redis,caching --namespace project-x

Agent 2 (Builder) picks up tasks. Before starting work, it recalls relevant context:

memoclaw recall "redis caching implementation" --namespace project-x

The builder now knows about the reviewer’s concern without anyone manually passing that information. If a third agent joins the swarm later (say, a code reviewer), it gets the same access instantly. No config changes, no new files to manage.
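If your agents script these calls rather than typing them, a thin shared wrapper keeps the invocations consistent. This sketch only builds the argv lists (pass them to `subprocess.run` to execute); the flags are the ones shown in this post, and nothing beyond them is assumed:

```python
# Hypothetical helper both agents import. Building argv lists (instead
# of shell strings) sidesteps quoting bugs in memory content.
def store_cmd(content, importance, tags, namespace):
    return ["memoclaw", "store", content,
            "--importance", str(importance),
            "--tags", ",".join(tags),
            "--namespace", namespace]

def recall_cmd(query, namespace):
    return ["memoclaw", "recall", query, "--namespace", namespace]

# Scout stores a finding; Builder recalls it before starting work:
cmd = store_cmd("PR #142 adds Redis caching for /api/users endpoint",
                0.7, ["pr", "redis", "caching"], "project-x")
# subprocess.run(cmd, check=True)  # uncomment where memoclaw is installed
```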

What this doesn’t solve

Shared memory isn’t real-time sync. If agent A stores something and agent B recalls it in the same second, the new memory may not be visible yet. This isn’t a pub/sub system. It’s durable memory with semantic access.

It also doesn’t replace structured communication. If agents need to coordinate on ordering (“do X before Y”), you still need an orchestrator or task queue. Memory is for shared knowledge, not workflow control.

And there’s a practical ceiling. Each memory is capped at 8192 characters. If your agent needs to share an entire file, memory isn’t the right tool. Share the file path, or use a memory to describe what’s in the file and where to find it.

The real shift

The interesting thing about multi-agent setups isn’t that they can do more work. It’s that they can build on each other’s understanding. A researcher agent that spends 30 minutes deep in a codebase generates insights that would take the writer another 30 minutes to rediscover. Shared memory turns that into a five-second recall.

If you’re running multiple OpenClaw agents, even just two, try giving them a shared namespace. Store findings with decent importance scores and specific tags. You’ll notice the second agent starts making better decisions almost immediately, because it’s working with accumulated context instead of starting from zero.

The agents that remember together, work together. The ones that don’t are just expensive parallel processes.


MemoClaw gives agents persistent, semantic memory. 100 free API calls per wallet, no registration required. Install the skill from ClawHub or run npm install -g memoclaw.