Building agent personality through memory
A fresh OpenClaw agent is a blank slate. It reads SOUL.md, it reads USER.md, it does its job. But after a week of interactions, it still doesn’t know that you hate verbose explanations, that you always want cost estimates in USD, or that you corrected it the last time it suggested a cron job for something that should’ve been a heartbeat check.
All that context evaporates between sessions. Unless you store it.
This article is about using MemoClaw to build a persistent preference layer for your OpenClaw agent. Not a static config file. A living memory of who you are and how you work, one the agent loads at session start and updates as it learns.
The difference between config and personality
You could put “user prefers concise answers” in USER.md. That works. But USER.md is a file you maintain manually. It doesn’t grow on its own. And it doesn’t capture the nuance of how you corrected your agent last Thursday, or the specific phrasing you used when you said “no, run the tests first, always.”
Personality memory is the stuff that accumulates through use. Corrections. Preferences stated mid-conversation. Patterns the agent notices. Things that are hard to anticipate when writing a config file but obvious in hindsight.
What to store as personality memory
After running this pattern for a while, I’ve landed on four categories that actually improve agent behavior.
Communication style. How does the user want to be talked to? This goes beyond “concise vs verbose.” It includes things like:
memoclaw store "Ana prefers direct answers without hedging. Skip 'I think' and 'perhaps' — just state the recommendation." --importance 0.8 --tags personality,communication
memoclaw store "When presenting options, Ana wants a recommendation with reasoning, not an open-ended list." --importance 0.7 --tags personality,communication
Corrections. Every time a user says “no, that’s wrong” or “actually, do it this way,” that’s gold. Store it at high importance.
memoclaw store "CORRECTION: Don't commit directly to main. Always create a feature branch and open a PR, even for one-line fixes." --importance 0.95 --tags personality,corrections,workflow
memoclaw store "CORRECTION: When checking calendar, show times in GMT-3, not UTC." --importance 0.9 --tags personality,corrections,timezone
I prefix these with “CORRECTION:” so they’re easy to spot in recall results. A dedicated tag like `corrections` helps too.
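If the agent builds these store commands programmatically, the prefix and tag conventions can be applied in one place. This is a sketch: the `correction_command` helper is hypothetical, but the flags it emits are the ones used throughout this article.

```python
import shlex

def correction_command(text: str, importance: float = 0.95,
                       extra_tags: tuple = ()) -> str:
    """Build a `memoclaw store` command that applies the CORRECTION:
    prefix and the personality/corrections tag convention."""
    tags = ",".join(("personality", "corrections") + extra_tags)
    return (f"memoclaw store {shlex.quote('CORRECTION: ' + text)} "
            f"--importance {importance} --tags {tags}")

cmd = correction_command("Show times in GMT-3, not UTC.", 0.9, ("timezone",))
```

Every correction then lands with the same prefix, importance band, and tags, which is what makes the dedicated recall for “recent corrections” reliable.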
Tool preferences. Which tools does the user want the agent to reach for first?
memoclaw store "User prefers camofox browser over Chrome for web scraping tasks." --importance 0.7 --tags personality,tools
memoclaw store "For quick searches, use web_search. Only open a browser if the page needs interaction." --importance 0.7 --tags personality,tools
Decision history. Past decisions create precedent. When your agent faces a similar choice later, it should remember what was decided before.
memoclaw store "Decision: we use Railway for hosting, not Vercel. Reason: better for long-running processes." --importance 0.8 --tags personality,decisions,infrastructure
memoclaw store "Decision: blog posts go through GitHub Issues pipeline, not Notion. Switched 2026-02-15." --importance 0.8 --tags personality,decisions,content
Importance scoring for personality
Not all personality data matters equally. Here’s the rubric I follow:
Corrections get 0.9-0.95. If the user explicitly told you you’re wrong, that memory should almost always surface. A correction ignored twice erodes trust fast.
Stated preferences get 0.7-0.8. “I like concise answers” matters, but it’s not as urgent as “never push to main.”
Observed patterns get 0.5-0.6. Things the agent infers rather than being told. Useful but should rank below explicit instructions.
Contextual facts get 0.4-0.5. “Ana usually works between 10am and 7pm GMT-3.” Helpful for timing notifications, but not personality-defining.
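The rubric above can be encoded as a small lookup so that every stored memory gets a consistent score. A minimal sketch; the category names and ranges come from the rubric, while the `score` helper and its 0-1 “strength” knob are assumptions, not part of MemoClaw:

```python
# Importance ranges per category, taken from the rubric above.
IMPORTANCE_RUBRIC = {
    "correction": (0.90, 0.95),  # explicit "you're wrong" feedback
    "preference": (0.70, 0.80),  # preferences stated by the user
    "pattern":    (0.50, 0.60),  # patterns the agent inferred
    "context":    (0.40, 0.50),  # contextual facts (work hours, etc.)
}

def score(category: str, strength: float = 0.5) -> float:
    """Map a category plus a 0-1 strength signal into its rubric range."""
    lo, hi = IMPORTANCE_RUBRIC[category]
    return round(lo + (hi - lo) * strength, 2)
```

An emphatic correction (“never push to main, ever”) would be `score("correction", 1.0)`, landing at the top of its band; a mild observed pattern stays down near 0.5.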
Loading personality at session start
Here’s where it comes together. In your AGENTS.md, you can instruct the agent to recall personality memories at the start of every session:
## Session start routine
Before doing anything else:
1. Read SOUL.md, USER.md
2. Recall personality context:
- `memoclaw recall "user communication preferences" --tags personality`
- `memoclaw recall "recent corrections" --tags corrections`
- `memoclaw recall "tool and workflow preferences" --tags personality,tools`
Three recall calls. Maybe 200ms total. And now your agent starts the session knowing that you hate hedging, that you want feature branches for everything, and that camofox is the preferred browser.
Compare this to a 500-line MEMORY.md that you have to manually curate. The recall approach scales better because you only pull what’s relevant. A MEMORY.md with 200 entries burns tokens on every session start whether the info matters right now or not. Targeted recalls pull 5-10 memories that match the current context, so you spend context window only on what’s actually relevant.
Storing personality in real time
The best personality memories come from live interactions. When your agent gets corrected, it should store that immediately:
# User says: "No, always run tests before deploying"
memoclaw recall "deployment workflow preferences" --tags corrections
# Check if this correction already exists. If not:
memoclaw store "CORRECTION: Always run test suite before any deployment. User explicitly requested this." --importance 0.95 --tags personality,corrections,deployment
The recall-before-store check prevents duplicates. If the user has corrected the same behavior three times, you don’t want three nearly-identical memories competing for recall slots. One strong memory with high importance is better.
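The dedup logic can be sketched in a few lines. Here `recalled` stands in for the memory texts returned by the recall call, and the word-overlap heuristic is an illustrative stand-in for the semantic matching a real check would lean on; none of this is MemoClaw API, just the shape of the decision.

```python
def _words(text: str) -> set:
    return set(text.lower().split())

def is_duplicate(candidate: str, recalled: list, threshold: float = 0.6) -> bool:
    """True if any recalled memory overlaps heavily with the candidate,
    using Jaccard word overlap as a crude similarity measure."""
    cand = _words(candidate)
    for memory in recalled:
        overlap = len(cand & _words(memory)) / max(len(cand | _words(memory)), 1)
        if overlap >= threshold:
            return True
    return False

existing = ["CORRECTION: Always run test suite before any deployment."]
new = "CORRECTION: Always run the test suite before any deployment."
# is_duplicate(new, existing) is True, so the agent skips the store call
```

When the check comes back true, the better move is usually to bump the existing memory’s importance rather than store a near-twin.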
Personality drift and maintenance
Over time, preferences change. You might switch from Railway to Fly.io. The old decision memory is now wrong.
The simplest fix: update the memory directly. MemoClaw’s update endpoint ($0.005 per call) replaces a memory’s content while keeping the same ID.
# Find the old memory
memoclaw recall "hosting platform decision" --tags decisions,infrastructure
# Update it
memoclaw update <memory-id> --content "Decision: we use Fly.io for API hosting, not Railway. Migrated 2026-03." --importance 0.85
The old content is gone, the new content gets re-embedded, and your agent won’t see conflicting memories on the next recall.
One exception: immutable memories. If you stored something with the immutable flag (useful for security rules, compliance constraints), MemoClaw won’t let you update it. That’s by design. For those, store a new superseding memory with higher importance:
memoclaw store "Decision UPDATE 2026-03: Migrating from Railway to Fly.io for API hosting. Previous Railway decision superseded." --importance 0.85 --tags personality,decisions,infrastructure
The word “UPDATE” and the date help the agent understand which memory is current when both surface in recall.
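That resolution step can itself be sketched: prefer memories carrying a dated “UPDATE YYYY-MM” marker, newest first, and fall back to importance when none do. The dict shape here is a hypothetical stand-in for recall results, not MemoClaw’s actual response format.

```python
import re

def current_memory(memories: list) -> dict:
    """Pick the current memory when an original decision and its
    superseding UPDATE both surface in a recall."""
    def key(m):
        match = re.search(r"UPDATE (\d{4}-\d{2})", m["content"])
        date = match.group(1) if match else ""  # dated updates sort after undated
        return (date, m["importance"])
    return max(memories, key=key)

decisions = [
    {"content": "Decision: we use Railway for hosting, not Vercel.",
     "importance": 0.8},
    {"content": "Decision UPDATE 2026-03: Migrating from Railway to Fly.io.",
     "importance": 0.85},
]
# current_memory(decisions) picks the Fly.io update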
Every few weeks, do a personality audit. Recall all personality memories and check for conflicts or stale info:
memoclaw recall "all preferences and corrections" --tags personality
Stale entries? Update them. Conflicting pairs? Resolve them. This maintenance isn’t glamorous, but it keeps your agent’s model of you from drifting into confusion.
What this looks like in practice
After two weeks of personality memory accumulation, my agent’s behavior changed noticeably. It stopped asking “would you like me to create a PR?” and just created the PR. It stopped showing UTC timestamps. It started using camofox without being told. It prefixed deployment commands with test runs.
None of this was in SOUL.md or USER.md. It was all learned through interaction and stored in MemoClaw. The agent felt like it actually knew me, because it did. Not from a config file someone wrote on day one, but from two weeks of working together.
That’s the difference between an assistant and your assistant. A few memoclaw store calls with the right importance scores and tags.
Quick start
If you want to try this today:
- Add the recall routine to your AGENTS.md session start
- Pick three preferences you keep re-explaining to your agent
- Store them with appropriate importance and tags
- Watch whether they surface in the next session
Start small. Three memories. See if the agent picks them up. Then build from there.