Batch operations: bulk-import, update, and clean your agent's memory
MemoClaw charges $0.005 per individual store. The batch endpoint stores up to 100 memories for a flat $0.04. That’s $0.04 vs $0.50 for 100 memories.
But batch isn’t always the right choice. Here’s the cost math, practical patterns, and how to structure payloads so recall still works after bulk ingestion.
The cost math
| Approach | 1 memory | 10 memories | 50 memories | 100 memories |
|---|---|---|---|---|
| Individual stores | $0.005 | $0.05 | $0.25 | $0.50 |
| Batch store | $0.04 | $0.04 | $0.04 | $0.04 |
The breakeven point is 8 memories: 8 × $0.005 = $0.04, exactly the flat batch price. Below 8, individual stores are cheaper; above 8, batch wins. Since the batch price is flat regardless of count, fill batches as close to 100 as you can.
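The breakeven arithmetic is simple enough to encode. A minimal Python sketch (the constants mirror the pricing table above; this helper is illustrative, not part of any MemoClaw SDK):

```python
# Cost model from the pricing table above (illustrative, not an SDK helper).
INDIVIDUAL_COST = 0.005  # per individual store
BATCH_COST = 0.04        # flat, up to 100 memories

def cheapest_method(n: int) -> str:
    """Return the cheaper storage method for n memories (1-100)."""
    individual_total = n * INDIVIDUAL_COST
    return "batch" if BATCH_COST < individual_total else "individual"

print(cheapest_method(5))   # 5 x $0.005 = $0.025, cheaper than $0.04
print(cheapest_method(50))  # $0.04 flat beats $0.25
```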
When to use batch
Session dumps
At the end of a long conversation, your agent has accumulated context worth preserving. Instead of storing each insight individually, collect them and batch-store:
memoclaw store-batch \
'{"content": "User decided to migrate from MongoDB to PostgreSQL", "importance": 0.9, "memory_type": "decision", "metadata": {"tags": ["database", "migration"]}}' \
'{"content": "Migration deadline is March 30, 2026", "importance": 0.8, "memory_type": "project", "metadata": {"tags": ["deadline", "migration"]}}' \
'{"content": "User wants to keep the MongoDB read replica during transition", "importance": 0.7, "memory_type": "decision", "metadata": {"tags": ["database", "migration"]}}' \
'{"content": "Chose Drizzle ORM over Prisma for the new PostgreSQL layer", "importance": 0.8, "memory_type": "decision", "metadata": {"tags": ["orm", "migration"]}}' \
--namespace project-acme
Four decisions from one session, one request, $0.04.
Migration from markdown files
Moving from MEMORY.md to MemoClaw? The migrate command handles it automatically. But if you want control over structure, batch store is the way.
Say your MEMORY.md looks like this:
## Preferences
- Prefers dark mode
- Uses vim keybindings
- Wants responses under 500 words
## Project stack
- Next.js 14 with App Router
- PostgreSQL 16 + pgvector
- Deployed on Vercel + Railway
You could use memoclaw migrate ($0.01 per file), or parse it yourself and batch-store with proper types and importance:
memoclaw store-batch \
'{"content": "Prefers dark mode", "importance": 0.7, "memory_type": "preference"}' \
'{"content": "Uses vim keybindings in all editors", "importance": 0.7, "memory_type": "preference"}' \
'{"content": "Wants agent responses under 500 words", "importance": 0.8, "memory_type": "preference"}' \
'{"content": "Project stack: Next.js 14 with App Router", "importance": 0.9, "memory_type": "project"}' \
'{"content": "Database: PostgreSQL 16 with pgvector extension", "importance": 0.9, "memory_type": "project"}' \
'{"content": "Deployed on Vercel (frontend) and Railway (API + DB)", "importance": 0.8, "memory_type": "project"}'
The manual approach gives you control over memory_type, importance, and tags, which directly affects recall quality.
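If you want to script that parsing step, a minimal Python sketch follows. The section-to-type mapping and the 0.8 default importance are assumptions for illustration; adapt them to your own MEMORY.md structure:

```python
def parse_memory_md(text: str) -> list[dict]:
    """Turn '## Section' + '- bullet' markdown into batch-store entries."""
    # Assumed mapping from section heading to memory_type; adjust per file.
    type_map = {"Preferences": "preference", "Project stack": "project"}
    memories, current_type = [], "general"
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("## "):
            current_type = type_map.get(line[3:], "general")
        elif line.startswith("- "):
            memories.append({
                "content": line[2:],
                "memory_type": current_type,
                "importance": 0.8,  # assumed default; tune per fact
            })
    return memories

md = "## Preferences\n- Prefers dark mode\n- Uses vim keybindings"
for memory in parse_memory_md(md):
    print(memory)
```

From there, feed the resulting list to the batch endpoint as the memories array.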
Importing from external tools
Pulling memories from Notion, Obsidian, or anywhere else? Parse them and batch-store via the API:
curl -X POST https://api.memoclaw.com/v1/store/batch \
-H "Content-Type: application/json" \
-d '{
"memories": [
{
"content": "Team standup is at 10am UTC-3, Monday through Friday",
"importance": 0.7,
"memory_type": "preference",
"metadata": {"tags": ["team", "schedule"], "source": "notion"}
},
{
"content": "Sprint reviews happen every other Friday at 3pm",
"importance": 0.6,
"memory_type": "project",
"metadata": {"tags": ["team", "schedule"], "source": "notion"}
},
{
"content": "Use conventional commits: feat/fix/chore/docs prefix required",
"importance": 0.9,
"memory_type": "correction",
"metadata": {"tags": ["git", "conventions"], "source": "notion"}
}
]
}'
The deduplicated_count field in the response tells you how many near-duplicates MemoClaw merged. If you accidentally include something that's already stored, it gets merged into the existing memory instead of creating a duplicate.
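A small Python sketch for inspecting that response. Note that stored_count is an assumed field name here; only deduplicated_count appears above, so check the API reference for the exact response schema:

```python
# Sketch: summarize a batch-store response. The response shape is an
# assumption based on the deduplicated_count field described above;
# consult the API reference for the real schema.
def summarize_batch_response(response: dict) -> str:
    stored = response.get("stored_count", 0)   # assumed field name
    merged = response.get("deduplicated_count", 0)
    if merged:
        return f"{stored} stored, {merged} merged into existing memories"
    return f"{stored} stored, no duplicates detected"

print(summarize_batch_response({"stored_count": 3, "deduplicated_count": 1}))
```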
When NOT to use batch
Real-time context. If your agent needs to store something and immediately recall it in the same turn, use individual stores. Batch requests process as a unit — no partial results.
Small volumes (under 8 memories). Three individual stores cost $0.015 vs $0.04 for a batch. Small difference, but it’s there.
Mixed namespaces. All memories in a batch share the same namespace. Different namespaces need separate calls:
memoclaw store "Frontend preference: Tailwind CSS" --namespace frontend-agent
memoclaw store "API convention: REST, no GraphQL" --namespace backend-agent
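When memories span namespaces, you can group them first and make one batch call per group. A Python sketch (the namespace key on each memory dict is an assumed convention for this helper, not an API field):

```python
from collections import defaultdict

def group_by_namespace(memories: list[dict]) -> dict[str, list[dict]]:
    """Split a mixed list into one batch payload per namespace,
    since every memory in a single batch call shares one namespace."""
    batches: dict[str, list[dict]] = defaultdict(list)
    for memory in memories:
        entry = dict(memory)  # avoid mutating the caller's data
        namespace = entry.pop("namespace", "default")  # assumed convention
        batches[namespace].append(entry)
    return dict(batches)

mixed = [
    {"content": "Frontend preference: Tailwind CSS", "namespace": "frontend-agent"},
    {"content": "API convention: REST, no GraphQL", "namespace": "backend-agent"},
]
for namespace, batch in group_by_namespace(mixed).items():
    print(namespace, len(batch))  # one store-batch call per namespace
```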
Structuring payloads for good recall
Batch storing is easy. Making those memories useful during recall takes more thought.
One fact per memory
Don’t cram multiple facts into a single memory. Each memory gets one embedding vector. Combining unrelated facts makes the embedding a blurry average that matches nothing well.
Bad:
{"content": "User likes dark mode, deploys to Vercel, and has a dog named Max"}
Good:
[
{"content": "User prefers dark mode in all editors and UIs"},
{"content": "Deployment target: Vercel for frontend applications"},
{"content": "User has a dog named Max"}
]
Three separate memories, three focused embeddings, three precise recall matches.
Set memory types intentionally
The memory_type field controls decay rate:
- correction — Slowest decay. Facts that override previous information.
- preference — Slow decay. Long-term user preferences.
- decision — Medium decay. Project decisions that may change.
- project — Medium decay. Technical context about current work.
- observation — Fast decay. Contextual notes from a single session.
- general — Default decay.
Mix types based on what each memory actually represents:
memoclaw store-batch \
'{"content": "API endpoint changed from /v1/users to /v2/users", "memory_type": "correction", "importance": 1.0}' \
'{"content": "User wants all API responses paginated", "memory_type": "preference", "importance": 0.8}' \
'{"content": "Discussed switching to GraphQL but decided against it", "memory_type": "decision", "importance": 0.7}' \
'{"content": "User seemed frustrated with the current auth flow", "memory_type": "observation", "importance": 0.5}'
Use tags for filtering
Tags scope recall results without relying on semantic similarity alone:
memoclaw recall "database setup" --tags migration
When batch-storing, tag consistently:
{
"memories": [
{"content": "PostgreSQL 16 on Neon serverless", "metadata": {"tags": ["database", "infrastructure"]}, "importance": 0.9},
{"content": "pgvector extension for semantic search", "metadata": {"tags": ["database", "search"]}, "importance": 0.8},
{"content": "Connection pooling via Neon's built-in pooler", "metadata": {"tags": ["database", "infrastructure"]}, "importance": 0.7}
]
}
Scale importance deliberately
Don’t set everything to 1.0 in a batch. That defeats the purpose of importance-weighted recall.
- 1.0 — Critical corrections, identity facts
- 0.8-0.9 — Strong preferences, key decisions
- 0.6-0.7 — Useful context, project details
- 0.4-0.5 — Nice-to-have, background info
- 0.1-0.3 — Ephemeral, low-priority
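A quick sanity check you could run before submitting a batch. This is an illustrative helper, not a MemoClaw feature:

```python
def has_importance_spread(memories: list[dict]) -> bool:
    """True if the batch uses more than one importance level.
    A flat batch (everything 1.0) defeats importance ranking."""
    levels = {m.get("importance", 0.5) for m in memories}  # 0.5 assumed default
    return len(levels) > 1

flat = [{"content": "a", "importance": 1.0}, {"content": "b", "importance": 1.0}]
print(has_importance_spread(flat))  # a flat batch is a smell worth flagging
```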
Automating batch storage
Cron-based session archiving
If your agent generates daily summaries, automate batch storage with a cron job:
#!/bin/bash
# archive-sessions.sh
SESSIONS_DIR=~/.openclaw/workspace/memory
# "yesterday" via GNU date; on macOS/BSD use: date -v-1d +%Y-%m-%d
YESTERDAY=$(date -d "yesterday" +%Y-%m-%d)
if [ -f "$SESSIONS_DIR/$YESTERDAY.md" ]; then
memoclaw migrate "$SESSIONS_DIR/$YESTERDAY.md" --namespace daily-notes
fi
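To run the script nightly, a crontab entry along these lines would work (the paths are illustrative):

```shell
# m h dom mon dow  command  (paths are illustrative)
10 0 * * * /home/agent/bin/archive-sessions.sh >> /var/log/memoclaw-archive.log 2>&1
```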
Programmatic batch via API
For more control, build the payload and POST it:
curl -X POST https://api.memoclaw.com/v1/store/batch \
-H "Content-Type: application/json" \
-d @- <<EOF
{
"memories": [
{"content": "Sprint 14 shipped auth v2 and user profiles", "importance": 0.7, "memory_type": "project"},
{"content": "Performance regression in /api/users — fixed by adding index", "importance": 0.8, "memory_type": "correction"},
{"content": "Next sprint focus: billing integration with Stripe", "importance": 0.6, "memory_type": "project"}
]
}
EOF
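You can also build and validate the JSON body in code before handing it to curl or any HTTP client. A Python sketch that only constructs the payload (the 1-100 bound comes from the batch limit stated above):

```python
import json

def build_batch_payload(memories: list[dict]) -> str:
    """Serialize memories into the JSON body for POST /v1/store/batch."""
    if not 1 <= len(memories) <= 100:
        raise ValueError("a batch holds between 1 and 100 memories")
    return json.dumps({"memories": memories}, indent=2)

payload = build_batch_payload([
    {"content": "Sprint 14 shipped auth v2 and user profiles",
     "importance": 0.7, "memory_type": "project"},
])
print(payload)  # pipe this into: curl ... -d @-
```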
Quick reference
| Scenario | Method | Cost |
|---|---|---|
| 1-7 memories, need immediate recall | Individual store | $0.005 each |
| 8+ memories, same namespace | Batch store | $0.04 flat |
| Importing markdown files | memoclaw migrate | $0.01/file |
| End-of-session dump | Batch store | $0.04 flat |
| Cross-namespace storage | Individual stores | $0.005 each |
Full API reference at docs.memoclaw.com. Pricing breakdown at docs.memoclaw.com/reference/pricing.