Automating session summaries: cron-driven memory consolidation with MemoClaw
After a busy day, my agent had stored 73 memories. Most of them were small: a preference noted here, a correction there, a few intermediate project states that no longer mattered. When I ran a recall query the next morning, the results were cluttered with low-value noise from the day before.
This is the accumulation problem. Agents store memories as they go, and that’s the right behavior. But over days and weeks, your memory store fills up with granular fragments that make recall less useful. You need a cleanup process, and you probably don’t want to do it by hand.
Here’s how to set up automated memory consolidation using OpenClaw’s cron and MemoClaw’s consolidate endpoint.
What consolidation actually does
MemoClaw’s /consolidate endpoint takes a set of related memories and compresses them into fewer, higher-value memories. It’s not just concatenation. The endpoint uses GPT-4o-mini to read through the memories, identify overlapping information, extract the important parts, and produce consolidated memories with appropriate importance scores.
Say you stored these five memories during a coding session:
- “User prefers TypeScript over JavaScript” (importance: 0.7)
- “Started refactoring auth module to TypeScript” (importance: 0.5)
- “Auth module refactor complete, all tests passing” (importance: 0.6)
- “Found bug in token refresh logic during refactor” (importance: 0.7)
- “Fixed token refresh bug, added regression test” (importance: 0.7)
After consolidation, you might get:
- “User prefers TypeScript over JavaScript” (importance: 0.7) — unchanged, still relevant
- “Auth module refactored to TypeScript. Token refresh bug found and fixed with regression test. All tests passing.” (importance: 0.7) — compressed from four memories into one
The session noise is gone. The useful context is preserved. Recall queries will return the consolidated version instead of fragments.
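To make the merging behavior concrete, here is a minimal local sketch of the idea. It is not MemoClaw's implementation — the real /consolidate endpoint groups by embedding similarity and summarizes with GPT-4o-mini — so the `topic` tag below is a hypothetical stand-in for semantic grouping, and the merge simply joins texts instead of summarizing:

```python
from dataclasses import dataclass

@dataclass
class Memory:
    text: str
    importance: float
    topic: str  # hypothetical stand-in for embedding-based similarity

def consolidate(memories, preserve_threshold=0.8):
    """Simplified sketch of consolidation: memories at or above the
    preserve threshold pass through untouched; the rest are grouped by
    topic and merged into one memory whose importance is the max of
    the fragments (so a merge never lowers importance)."""
    kept, groups = [], {}
    for m in memories:
        if m.importance >= preserve_threshold:
            kept.append(m)  # high-importance memories survive intact
        else:
            groups.setdefault(m.topic, []).append(m)
    for topic, frags in groups.items():
        kept.append(Memory(
            text=". ".join(f.text for f in frags),
            importance=max(f.importance for f in frags),
            topic=topic,
        ))
    return kept
```

Running this over the five session memories above collapses the four refactor fragments into one memory at importance 0.7 and leaves the preference standing on its own, mirroring the before/after shown in the text.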
Setting up the cron job
OpenClaw supports cron jobs that run on a schedule. Each cron job spawns a subagent with its own context, so it won’t interfere with your main session.
You can set this up through the OpenClaw CLI or by editing your cron configuration directly. Here’s what the cron job needs to do:
- List recent memories (last 24 hours)
- Call the consolidate endpoint
- Optionally, clean up memories below a certain importance threshold
The cron task
Create a cron job in OpenClaw that runs daily. I run mine at 4 AM when the agent is idle:
/cron add --schedule "0 4 * * *" --label "memory-consolidation" --task "Run MemoClaw memory consolidation: 1) List memories from the last 24 hours using memoclaw list --since 24h, 2) Call memoclaw consolidate to compress related memories, 3) Delete any memories with importance below 0.3 that are older than 7 days using memoclaw delete"
The cron subagent will interpret these instructions and execute them. You don’t need to write a bash script; the agent handles the tool calls.
Note: the OpenClaw gateway must be running for cron to fire. Check with openclaw gateway status if your jobs aren’t executing.
What the subagent does
When the cron fires, the subagent will run something like:
# Step 1: Check what accumulated today
memoclaw list --since 24h
# Step 2: Consolidate related memories
memoclaw consolidate --since 24h
# Step 3: Clean up low-value old memories
memoclaw list --max-importance 0.3 --before 7d
# Then delete the ones that are truly disposable
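The cleanup step in particular is worth spelling out, since deleting memories is irreversible. A sketch of the selection logic behind `memoclaw list --max-importance 0.3 --before 7d` — the tuple layout here is an assumption for illustration, not MemoClaw's actual schema:

```python
from datetime import datetime, timedelta, timezone

def cleanup_candidates(memories, max_importance=0.3, min_age_days=7, now=None):
    """Select memories that are both low-value AND stale.

    Each entry is a (text, importance, created_at) tuple. A memory must
    fail both tests to be deleted: a fresh low-importance note survives,
    and so does an old high-importance one.
    """
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=min_age_days)
    return [m for m in memories
            if m[1] < max_importance and m[2] < cutoff]
```

Requiring both conditions is the safety valve: importance alone would delete yesterday's scratch notes before consolidation has had a chance to absorb them.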
The consolidate command groups memories by semantic similarity and merges related ones. It costs $0.01 per call because it uses GPT-4o-mini for the summarization step.
Importance scoring matters here
Consolidation respects importance scores. Memories marked with high importance (0.8+) are less likely to be merged aggressively. A correction from the user with importance 0.95 will survive consolidation intact. A throwaway observation at 0.3 might get absorbed into a broader summary or flagged for cleanup.
This means your storing habits directly affect consolidation quality. When your agent stores a memory, the importance score isn’t just metadata. It’s a signal for how consolidation should treat that memory later.
Some guidelines that have worked for me:
- 0.9-1.0: User corrections, explicit preferences, hard requirements. These should survive any consolidation.
- 0.7-0.8: Learned behaviors, project decisions, useful patterns. Consolidation might merge related ones.
- 0.4-0.6: Session context, intermediate states, observations. Good candidates for compression.
- 0.1-0.3: Temporary notes, acknowledgments, things that were relevant for one session only. Cleanup targets.
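If you want your agent to apply these bands consistently, encoding them as a small policy function helps. The cutoffs below follow my guidelines above; they are conventions I chose, not thresholds hard-coded anywhere in MemoClaw:

```python
def consolidation_policy(importance):
    """Map an importance score to how nightly consolidation should
    treat the memory, per the tiers described above."""
    if importance >= 0.9:
        return "preserve"          # corrections, hard requirements
    if importance >= 0.7:
        return "merge-carefully"   # decisions, learned patterns
    if importance >= 0.4:
        return "compress"          # session context, intermediate states
    return "cleanup"               # one-session notes, delete when stale
```

A function like this is also a useful thing to quote in your AGENTS.md, so the storing side and the consolidation side agree on what the scores mean.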
Before and after: recall quality
The difference shows up in recall results. Here’s a real comparison from my setup.
Before consolidation (query: “authentication setup”):
1. "Started looking into OAuth2 options" (0.5)
2. "Decided to use Auth0 for OAuth2" (0.6)
3. "Auth0 integration partially complete" (0.5)
4. "Auth0 integration done, testing callback URLs" (0.5)
5. "Auth0 callback URL issue resolved, was missing /api prefix" (0.7)
Five results, four of which are intermediate states that don’t help. The useful information (Auth0, callback URL needs /api prefix) is buried.
After consolidation (same query):
1. "Using Auth0 for OAuth2. Integration complete. Callback URLs require /api prefix." (0.7)
2. "User prefers token-based auth over session cookies" (0.8)
Two results. Both useful. The agent gets the context it needs without wading through a progression of status updates.
Tuning the schedule
Daily consolidation works for most setups. If your agent is very active (50+ memories per day), you might want to run it twice daily. If you use your agent a few times a week, every 3 days is fine.
Watch your memory count over time:
memoclaw stats
If total memory count keeps climbing despite consolidation, you might need more aggressive cleanup thresholds or more frequent runs. If count is stable or slowly growing, your schedule is about right.
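A quick way to turn that eyeball check into a rule. This sketch reads a series of daily totals (as you'd sample them from memoclaw stats) and flags runaway growth; the 10% per-day tolerance is my own arbitrary default:

```python
def schedule_advice(daily_counts, tolerance=0.10):
    """Given total memory counts sampled once per day, estimate average
    daily growth and suggest whether the consolidation schedule needs
    tightening. `tolerance` is the acceptable growth fraction per day."""
    if len(daily_counts) < 2:
        return "need more data"
    growth = (daily_counts[-1] - daily_counts[0]) / max(daily_counts[0], 1)
    per_day = growth / (len(daily_counts) - 1)
    if per_day > tolerance:
        return "increase frequency or lower cleanup threshold"
    return "schedule is about right"
```

Sampling over several days matters: a single noisy day after a heavy session shouldn't trigger a schedule change.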
The cost
Let’s do the math on a daily consolidation cron:
- memoclaw list: Free (no embeddings used)
- memoclaw consolidate: $0.01 per call
- memoclaw delete: Free
That’s $0.01/day or about $0.30/month. You’ll likely save more than that in reduced recall noise, since cleaner memory means fewer recall calls needed to get useful context.
MemoClaw gives you 100 free API calls per wallet, so you can test consolidation a handful of times before it starts costing anything. After the free tier, paid calls start at $0.005.
The subagent that runs the cron also uses tokens, but with a small model and a focused task, each run should cost well under a cent.
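The arithmetic is trivial, but parameterizing it makes it easy to re-check if you change the schedule. Only the consolidate call is billed; list and delete are free:

```python
def cron_cost(runs_per_day=1, consolidate_price=0.01, days=30):
    """Back-of-envelope monthly cost of the consolidation cron.
    Ignores the 100-call free tier and the subagent's own token cost,
    both discussed in the text."""
    daily = runs_per_day * consolidate_price
    return (daily, round(daily * days, 2))
```

With the twice-daily schedule suggested for very active agents, `cron_cost(runs_per_day=2)` doubles this to $0.60/month — still negligible next to the recall savings.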
Common issues
Consolidation merges things you wanted separate. This usually happens when related memories have similar importance scores. Bump the importance on memories you want to keep distinct. Anything at 0.9+ stays untouched.
Too many low-value memories accumulating. Your agent might be over-storing. Review what it’s storing and adjust the prompts in your AGENTS.md. Something like “Only store memories that would be useful in future sessions” helps reduce noise at the source.
Cron job not firing. Check your OpenClaw cron list with /cron list. Make sure the schedule syntax is correct and the gateway is running. Cron jobs need the OpenClaw gateway daemon to be active.
Putting it together
The full setup takes about five minutes:
- Make sure the MemoClaw CLI is installed: npm install -g memoclaw
- Add the cron job through OpenClaw
- Let it run for a few days
- Check recall quality on queries you use often
Memory consolidation isn’t glamorous work. It’s the kind of background maintenance that you set up once and forget about. But the difference in recall quality is noticeable within a week. Your agent stops returning five fragments about the same topic and starts returning one clean, useful memory.
That’s the whole point of automated memory management: your agent remembers what matters and quietly forgets the rest.