What is Acontext?
Acontext is an open-source skill memory layer for AI agents. It automatically captures learnings from agent runs and stores them as agent skill files — files you can read, edit, and share across agents, LLMs, and frameworks.
If you want the agent you build to learn from its mistakes and reuse what worked — without opaque memory polluting your context — give Acontext a try.
Skill is All You Need
Agent memory is getting increasingly complicated🤢 — hard to understand, hard to debug, and hard for users to inspect or correct. Acontext takes a different approach: if agent skills can represent every piece of knowledge an agent needs as simple files, the memory can be represented the same way.
- Acontext builds memory in the agent skills format, so everyone can see and understand what the memory actually contains.
- Skill is Memory, Memory is Skill. Whether a skill was downloaded from Clawhub or created by you, Acontext can follow it and evolve it over time.
The Philosophy of Acontext
- Plain file, any framework — Skill memories are Markdown files. Use them with LangGraph, Claude, AI SDK, or anything that reads files. No embeddings, no API lock-in. Version them with Git, search them with grep, and mount them into your sandbox.
- You design the structure — Attach more skills to define the schema, naming, and file layout of the memory. For example, uploading a working-context skill can enforce one file per contact or one per project.
- Progressive disclosure, not search — The agent can use `get_skill` and `get_skill_file` to fetch what it needs. Retrieval happens through tool use and reasoning, not semantic top-k.
- Download as ZIP, reuse anywhere — Export skill files as ZIP. Run locally, in another agent, or with another LLM. No vendor lock-in; no re-embedding or migration step.
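Because skill memories are plain Markdown files, "export and reuse" needs nothing beyond the standard library. The sketch below is illustrative only — the directory layout and file names are hypothetical examples, not an Acontext API — but it shows why there is no re-embedding or migration step: a ZIP of files is the whole export.

```python
# Minimal sketch of the "plain files" idea: a skill is just a Markdown
# file on disk, so export is a ZIP and import is reading files back.
# All paths and names here are hypothetical, not the Acontext API.
import tempfile
import zipfile
from pathlib import Path

workdir = Path(tempfile.mkdtemp())
skill_dir = workdir / "skills" / "deploy-checklist"
skill_dir.mkdir(parents=True)
(skill_dir / "SKILL.md").write_text(
    "# Deploy checklist\n- Run migrations before restarting workers.\n"
)

# Export: zip the whole skills directory...
archive = workdir / "skills.zip"
with zipfile.ZipFile(archive, "w") as zf:
    for f in (workdir / "skills").rglob("*.md"):
        zf.write(f, f.relative_to(workdir))

# ...and "import" it anywhere that can read files — no re-embedding.
with zipfile.ZipFile(archive) as zf:
    content = zf.read("skills/deploy-checklist/SKILL.md").decode()

print("migrations" in content)  # True
```

Any agent, LLM, or framework that can read a file can consume the unzipped result directly.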
How It Works
Store — How skills get memorized
```mermaid
flowchart LR
    A[Session messages] --> C[Task complete/failed]
    C --> D[Distillation]
    D --> E[Skill Agent]
    E --> F[Update Skills]
```
- Session messages — The conversation (optionally with tool calls and artifacts) is the raw input. Tasks are extracted from the message stream automatically (or inferred from explicit outcome reporting).
- Task complete or failed — When a task is marked done or failed (e.g. by agent report or automatic detection), that outcome is the trigger for learning.
- Distillation — An LLM pass infers from the conversation and execution trace what worked, what failed, and user preferences.
- Skill Agent — Decides where to store the learning (an existing skill or a new one) and writes it according to your `SKILL.md` schema.
- Update Skills — The skill files are created or updated in place. You define the structure in `SKILL.md`; the system does the extraction, routing, and writing.
Recall — How the agent uses skills on the next run
```mermaid
flowchart LR
    E[Any Agent] --> F[list_skills/get_skill]
    F --> G[Appear in context]
```
Give your agent Skill Content Tools (`get_skill`, `get_skill_file`). The agent decides what it needs, calls the tools, and gets the skill content back. No embedding search — progressive disclosure, with the agent in the loop.
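Progressive disclosure can be sketched with three tiny file-reading tools. The directory layout and tool bodies below are assumptions for illustration (the tool names mirror the docs, but this is not the Acontext implementation): each call reveals a little more, so the agent pulls only what its reasoning says it needs into context.

```python
# Sketch of progressive disclosure: small tools that read skill files
# on demand instead of a semantic search index. Layout is illustrative.
import tempfile
from pathlib import Path

SKILLS = Path(tempfile.mkdtemp()) / "skills"
(SKILLS / "git-workflow").mkdir(parents=True)
(SKILLS / "git-workflow" / "SKILL.md").write_text("# Git workflow\nUse rebase.\n")
(SKILLS / "git-workflow" / "hooks.md").write_text("Pre-push hook notes.\n")

def list_skills() -> list[str]:
    # First hop: names only, so the context stays small.
    return sorted(p.name for p in SKILLS.iterdir() if p.is_dir())

def get_skill(name: str) -> str:
    # Second hop: the skill's main SKILL.md.
    return (SKILLS / name / "SKILL.md").read_text()

def get_skill_file(name: str, filename: str) -> str:
    # Third hop: a supporting file, fetched only if the agent asks.
    return (SKILLS / name / filename).read_text()

print(list_skills())  # ['git-workflow']
print(get_skill("git-workflow"))
print(get_skill_file("git-workflow", "hooks.md"))
```

Registered as tools in any framework that supports tool calling, these give the agent the same read path a human has: list, open, drill down.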
🪜 Use It to Improve Your Agent
Claude Code: