Wiring MemoClaw into multi-tool agent workflows


Most OpenClaw agents run more than one MCP server. You’ve got GitHub for PRs, maybe a calendar server, a web search tool, filesystem access. Each tool does its job, produces some output, and that output disappears when the session ends.

That’s the gap. Your agent reviews a PR, learns the author always forgets to update tests, and forgets this by tomorrow. It checks your calendar, sees you have a standup at 9am, and won’t know that next week unless it checks again.

memoclaw-mcp sits alongside your other MCP servers and gives the agent a place to put what it learns. Store outcomes. Recall context before acting. The tools get smarter because the agent remembers what happened last time.

The setup

You probably already have MCP servers configured for your OpenClaw agent. Adding MemoClaw is one more block:

{
  "mcpServers": {
    "memoclaw": {
      "command": "memoclaw-mcp",
      "env": {
        "MEMOCLAW_PRIVATE_KEY": "your-wallet-private-key"
      }
    },
    "github": {
      "command": "mcp-github"
    },
    "filesystem": {
      "command": "mcp-filesystem",
      "args": ["/home/user/projects"]
    }
  }
}

Three servers, each with its own tools. The agent sees all of them and picks the right one per task. No conflicts, no overlap.

Pattern 1: remember what you learned from other tools

The simplest pattern. After the agent uses a tool and gets useful information, it stores that as a memory.

Say your agent reviews a PR using the GitHub MCP server:

  1. Agent calls github_get_pull_request to fetch PR #42
  2. Reads the diff, notices the author didn’t write tests (again)
  3. Leaves a review comment
  4. Stores what it learned:

store_memory({
  content: "PR author @danielm tends to skip unit tests for utility functions. Reminded them on PR #42, #38, and #31.",
  importance: 0.7,
  tags: ["github", "code-review", "team"],
  namespace: "work"
})

Next time it reviews a PR from the same author, a recall surfaces this pattern. The agent can check for missing tests proactively instead of discovering them while reading the diff.

This isn’t something you hardcode. You teach the agent the pattern in your AGENTS.md:

## Memory habits

After completing a tool-based task, store anything worth knowing next time:
- Patterns you noticed (someone's coding habits, recurring issues)
- Decisions made and why
- Corrections the user gave you

Use store_memory with relevant tags and an appropriate namespace.

The agent figures out what’s worth storing. You adjust the instructions if it stores too much noise or misses things that matter.
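The store-then-recall loop can be sketched end to end. This is a mock, not the real MemoClaw client: `store_memory` and `recall_memories` here just filter a plain array, where the real tools add semantic search on top, but the shape of the calls matches the examples above.

```typescript
// Hypothetical in-memory mock of the MemoClaw tools, for illustration only.
type Memory = {
  content: string;
  importance: number;
  tags: string[];
  namespace: string;
};

const memories: Memory[] = [];

function store_memory(m: Memory): void {
  memories.push(m);
}

function recall_memories(opts: { query: string; namespace: string }): Memory[] {
  // Mock recall: match any memory whose content or tags mention a query word.
  // The real service does semantic matching, not keyword matching.
  const words = opts.query.toLowerCase().split(/\s+/);
  return memories.filter(
    (m) =>
      m.namespace === opts.namespace &&
      words.some(
        (w) => m.content.toLowerCase().includes(w) || m.tags.includes(w)
      )
  );
}

// After the review (step 4 above):
store_memory({
  content: "PR author @danielm tends to skip unit tests for utility functions.",
  importance: 0.7,
  tags: ["github", "code-review", "team"],
  namespace: "work",
});

// Next review of a PR from the same author:
const prior = recall_memories({ query: "@danielm code-review", namespace: "work" });
console.log(prior.length); // → 1
```

The agent drives this loop itself from the AGENTS.md instructions; nothing in your code calls these tools directly.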

Pattern 2: recall before you act

This one changes how the agent approaches tasks. Before using a tool, it first recalls relevant memories.

Your agent gets asked to set up a new repo. Before it starts:

recall_memories({
  query: "repo setup preferences CI deployment",
  namespace: "work"
})

Back come memories like:

  • “Always use pnpm, not npm or yarn”
  • “CI should run on GitHub Actions, not CircleCI”
  • “Include a .nvmrc with the current LTS Node version”
  • “Linting: biome, not eslint”

The agent sets up the repo with all of this baked in. No re-explaining. No “oh wait, we use pnpm” after it’s already initialized with npm.

In AGENTS.md, this becomes:

## Before starting any task

1. Recall from the current project namespace for project-specific context
2. Recall from default namespace for general preferences
3. Then proceed with the task using what you remembered

Two recalls, $0.01 total. The agent starts every task with context instead of guessing.
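The two-step recall above can be sketched as follows. The `recall_memories` shape is taken from the earlier examples, but the backing store is a mock array, and the memory contents are invented for illustration.

```typescript
// Hypothetical sketch: recall project-specific context first, then general
// preferences, and merge both before starting the task.
type Memory = { content: string; namespace: string };

const stored: Memory[] = [
  { content: "Always use pnpm, not npm or yarn", namespace: "default" },
  { content: "api-project deploys to Railway staging", namespace: "api-project" },
];

function recall_memories(opts: { query: string; namespace: string }): Memory[] {
  // A real recall is semantic; this mock only filters by namespace.
  return stored.filter((m) => m.namespace === opts.namespace);
}

// Step 1: project-specific context. Step 2: general preferences.
const context = [
  ...recall_memories({ query: "repo setup", namespace: "api-project" }),
  ...recall_memories({ query: "general preferences", namespace: "default" }),
];

console.log(context.map((m) => m.content));
```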

Pattern 3: tool results as memory

Some tool outputs are worth persisting verbatim. Not everything — you don’t want to store every file listing. But specific, reusable outputs? Those save repeated tool calls.

Example with a hypothetical deployment tool:

# Agent deploys, gets back the URL
deploy_result = deploy({ project: "api", environment: "staging" })
# Result: { url: "https://api-staging-abc123.railway.app", status: "success" }

# Store the deployment info
store_memory({
  content: "Staging deploy on Mar 9: https://api-staging-abc123.railway.app. Deployed commit abc1234.",
  importance: 0.5,
  tags: ["deployment", "staging"],
  namespace: "api-project"
})

Next time someone asks “what’s the staging URL?”, the agent recalls it instead of redeploying or digging through Railway’s dashboard.

This pattern works well for:

  • Deployment URLs and status
  • Endpoints and non-secret identifiers that other tools produced (never API keys or credentials)
  • Search results that the user found useful
  • Meeting notes pulled from a calendar integration

Pattern 4: cross-session workflows

Here’s where it gets interesting. Your agent runs a cron job that checks GitHub notifications every morning. It finds open PRs assigned to you, reviews them, and stores summaries:

store_memory({
  content: "PR #87 in api-service: adds rate limiting middleware. Looks good but needs a test for the 429 response. Left a comment.",
  importance: 0.6,
  tags: ["github", "pr-review", "pending"],
  namespace: "work"
})

Later that day, you sit down and ask your agent “what’s on my plate?” It recalls:

recall_memories({
  query: "pending work PRs tasks",
  namespace: "work"
})

And surfaces the PR summaries, deployment status, and anything else it stored from earlier tool interactions. You get a briefing without having to check each tool individually.

The cron agent and the interactive agent share the same wallet, so they share the same memory. Work done by the background agent is immediately available to the one you’re talking to.
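The shared-wallet behavior can be sketched like this: both agents connect with the same private key, so they read and write the same memory space. The client class below is a mock standing in for the MCP server; only the key-to-store mapping is the point.

```typescript
// Mock illustrating why two agents with one wallet share memory.
class MemoClawClient {
  private static stores = new Map<string, string[]>();
  private store: string[];

  constructor(privateKey: string) {
    // Same key → same backing store, regardless of which process connects.
    if (!MemoClawClient.stores.has(privateKey)) {
      MemoClawClient.stores.set(privateKey, []);
    }
    this.store = MemoClawClient.stores.get(privateKey)!;
  }

  storeMemory(content: string): void {
    this.store.push(content);
  }

  recall(query: string): string[] {
    // Keyword match stands in for semantic search.
    return this.store.filter((c) => c.toLowerCase().includes(query.toLowerCase()));
  }
}

// Morning cron agent:
const cronAgent = new MemoClawClient("wallet-key");
cronAgent.storeMemory("PR #87 in api-service: needs a test for the 429 response.");

// Afternoon interactive agent, same wallet:
const chatAgent = new MemoClawClient("wallet-key");
console.log(chatAgent.recall("PR #87")); // surfaces the cron agent's note
```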

A practical example: GitHub + MemoClaw code review

Let me walk through a complete flow. Your agent gets asked to review a PR.

Step 1 — Recall context about the repo and author:

recall_memories({
  query: "code review patterns api-service team preferences",
  namespace: "work"
})

Returns:

  • “api-service uses Hono, Drizzle ORM, Postgres on Neon”
  • “Team convention: all routes need integration tests”
  • “@danielm often forgets test updates for utility functions”

Step 2 — Fetch the PR:

The agent calls the GitHub MCP server to get PR details and the diff.

Step 3 — Review with context:

The agent already knows the stack, the conventions, and the author’s patterns. It checks for missing integration tests (because the team requires them) and specifically looks at utility function test coverage (because this author tends to skip those).

Step 4 — Store what it learned:

store_memory({
  content: "Reviewed PR #93 from @danielm. Added rate limiting to /api/users. Tests present this time. Approved with minor nit about error message wording.",
  importance: 0.5,
  tags: ["github", "code-review"],
  namespace: "work"
})

Next review of a PR from the same author, the agent knows their test coverage is improving. It adjusts its focus accordingly. That’s the kind of thing that makes an agent actually useful over time instead of starting from zero each session.

What this costs

Adding MemoClaw to a multi-tool workflow doesn’t change what the other tools cost — it adds the MemoClaw calls on top.

Typical day for my setup:

  • 5-10 stores from various tool interactions: $0.025-0.05
  • 10-15 recalls (before tasks + ad hoc): $0.05-0.075
  • Daily total: roughly $0.08-0.13

The free tier (100 calls per wallet) covers about a week of moderate use. After that, you’re looking at a few dollars a month. For an agent that actually remembers what it did yesterday, that’s a reasonable trade.
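The arithmetic behind those ranges, assuming a flat $0.005 per call (consistent with "two recalls, $0.01 total" above):

```typescript
// Daily cost estimate at an assumed $0.005 per MemoClaw call.
const PER_CALL = 0.005;

const low = 5 * PER_CALL + 10 * PER_CALL;   // 5 stores + 10 recalls
const high = 10 * PER_CALL + 15 * PER_CALL; // 10 stores + 15 recalls

console.log(low.toFixed(3), high.toFixed(3)); // → 0.075 0.125
```

That exact $0.075-0.125 range rounds to the "roughly $0.08-0.13" figure, and at 15-25 calls a day the 100-call free tier lasts four to seven days.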

Getting started

npm install -g memoclaw-mcp

Add it to your MCP config alongside your existing servers. Tell your agent (in AGENTS.md) to store useful outcomes and recall before acting. That’s the whole integration.

The individual tools don’t change. The agent just gets better at using them because it carries context from one session to the next.


MemoClaw gives your OpenClaw agent persistent semantic memory. 100 free API calls, no registration. docs.memoclaw.com