Trigger: "set up agentic OS", "initialize agent harness", "init my project for AI agents", "where do I put CLAUDE.md", "create my agent environment", "set up persistent memory". Guides users through an interview to understand their use case, then scaffolds the right Agentic OS structure. Use even when the user just asks WHERE to put files.
From agent-agentic-os:

```shell
npx claudepluginhub richfrem/agent-plugins-skills --plugin agent-agentic-os
```

This skill ships with the following files:

- acceptance-criteria.md
- assets/diagrams/agentic-os-init-flow.mmd
- assets/resources/agentic-os-init-flow.mmd
- assets/templates/CLAUDE_LOCAL_MD.md
- assets/templates/CLAUDE_MD_GLOBAL.md
- assets/templates/CLAUDE_MD_PROJECT.md
- assets/templates/EVENTS_JSONL.jsonl
- assets/templates/HEARTBEAT_MD.md
- assets/templates/HOOKS_JSON.json
- assets/templates/MEMORY_MD.md
- assets/templates/OS_STATE_JSON.json
- assets/templates/SOUL_MD.md
- assets/templates/START_HERE_MD.md
- assets/templates/STATUS_MD.md
- assets/templates/USER_MD.md
- evals/evals.json
- evals/results.tsv
- references/acceptance-criteria.md
- references/architecture.md
- references/memory/post_run_survey.md
This skill requires Python 3.8+ and the standard library only; no external packages are needed, so there is no install step. If you regenerate the lockfile anyway:

```shell
pip-compile ./requirements.in
pip install -r ./requirements.txt
```

See ./requirements.txt for the dependency lockfile (currently empty -- standard library only).
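Since the lockfile is empty, the only environment check that matters is the interpreter version. A quick sanity check:

```shell
# Confirm the interpreter meets the 3.8+ floor declared above.
python3 -c 'import sys; assert sys.version_info >= (3, 8), sys.version'
echo "Python version OK"
```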
Bootstrap the Agentic OS / Agent Harness structure into any project. The setup is not one-size-fits-all -- a solo developer using Claude for marketing strategy needs a very different environment than a team using agents to document a legacy system. The interview phase exists to get that right the first time.
There is no official Anthropic "agentic OS" reference implementation. This pattern synthesizes Anthropic's documented features (CLAUDE.md hierarchy, /loop, sub-agents, hooks) with community conventions for persistent memory and context management. Official Anthropic docs:
Execute these phases in order. Do not skip phases.
Do not assume defaults. Ask the user enough to make smart decisions. Pull answers from the conversation if they are already there -- do not repeat questions already answered.
Ask only what is unclear. Group questions to minimize back-and-forth. Adapt your language to the user's technical confidence level (a plumber who has just opened a terminal for the first time is different from a senior engineer).
What is this project? What kind of work are you doing here?
Who is using it? Just you, or a team?
What is your main use case for the agent? Pick the closest:
What sub-tasks or specialized areas do you need the agent to handle? Examples: "analyze screenshots of legacy screens", "draft blog posts", "review PRs", "document business rules", "manage project status".
Do you need scheduled/autonomous work? (e.g., nightly summaries, daily standups, background analysis while you sleep) -- this determines whether to set up /loop and heartbeat.md.
How much context do you expect to persist? (light = just a few facts, heavy = full project history, business rules, entity glossary, etc.)
After getting the core answers, ask targeted follow-ups based on the use case:
If: legacy system documentation / analysis
- entities/ folder for business terms glossary.

If: marketing / strategy / communications
- context/soul.md with brand voice, a content/ folder for work-in-progress, and session logs for capturing working decisions.

If: software development

If: research / analysis
- research/ folder, context/memory.md for findings.

Based on the discovery answers, propose a component plan before running the script. Show the user a simple table of what will be created and what will be skipped.
Example output format:
Here is what I plan to set up for your [use case]:
| Component | Include? | Why |
|------------------------|----------|-----|
| CLAUDE.md (project) | YES | Core kernel for your project conventions |
| context/soul.md | YES | Agent personality for your brand voice |
| context/user.md | YES | Your working style preferences |
| context/memory.md | YES | Persistent facts across sessions |
| context/memory/ logs | YES | Dated session logs |
| START_HERE.md | YES | Bootstrap prompt for new sessions |
| heartbeat.md + /loop | NO | You didn't mention scheduled tasks |
| .claude/agents/ | YES | Sub-agents for [specific tasks] |
| ~/.claude/CLAUDE.md | OPTIONAL | Global kernel if you want this across all projects |
Additional folders I recommend for your use case:
- entities/ -- glossary of business terms (legacy docs use case)
- research/ -- source material and findings (research use case)
Get confirmation or adjustments before proceeding to Phase 3.
Run the setup script with flags derived from the plan:
```shell
# Fallback to current directory if not running inside the plugin manager
PLUGIN_DIR="${CLAUDE_PLUGIN_ROOT:-$(pwd)}"
python3 "${PLUGIN_DIR}/skills/os-init/scripts/init_agentic_os.py" \
  --target <path> \
  [--global] \
  [--dry-run] \
  [--force]
```
Flags:
--target PATH : Project root (default: current directory)
--global : Also scaffold ~/.claude/CLAUDE.md for the global kernel
--dry-run : Preview what would be created without writing anything
--force : Overwrite existing files (show this option only for existing projects)

For existing projects: always run --dry-run first and show the user the preview.
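A typical first run on an existing project might look like the following. The target path is illustrative, and the guard lets the snippet degrade gracefully when the plugin files are not present on this machine:

```shell
PLUGIN_DIR="${CLAUDE_PLUGIN_ROOT:-$(pwd)}"
INIT="${PLUGIN_DIR}/skills/os-init/scripts/init_agentic_os.py"
if [ -f "$INIT" ]; then
  # Preview only: --dry-run writes nothing to disk.
  python3 "$INIT" --target ~/projects/legacy-app --dry-run
else
  echo "init_agentic_os.py not found under ${PLUGIN_DIR}"
fi
```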
After the script runs, create any use-case-specific additional folders that the interview surfaced (entities/, research/, content/, etc.) but the script does not create automatically.
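For example, the extra folders from the interview can be created in one idempotent command (the folder names here are the examples from Phase 1):

```shell
# -p makes this safe to re-run and creates parent directories as needed.
mkdir -p entities research content context/memory
```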
After creating the structure, walk the user through what to fill in, in priority order. Do not just dump a list -- explain WHY each file matters.
CLAUDE.md is the single file that Claude reads at the start of every session. It is your project's kernel. Do not let the user leave it blank.
Prompt them with specific questions based on their use case:
This only matters if the agent needs a persona. For a coding project it is less critical. For marketing/communications it is essential. Ask: "Do you want Claude to have a specific voice or personality in this project?"
Help the user fill this in with a few targeted questions:
Remind the user:
Add to .gitignore:

```
CLAUDE.local.md
context/memory/
context/events.jsonl
context/os-state.json
context/.locks/
.claude/
context/memory.md
context/status.md
```

Keep in git (shared with team):

```
CLAUDE.md
context/soul.md
context/user.md
context/kernel.py
context/agents.json
heartbeat.md
START_HERE.md
```
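A minimal way to apply the ignore list above (a sketch: it appends blindly and does not de-duplicate entries already present in the file):

```shell
cat >> .gitignore <<'EOF'
CLAUDE.local.md
context/memory/
context/events.jsonl
context/os-state.json
context/.locks/
.claude/
context/memory.md
context/status.md
EOF
```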
agents.json controls which agent names are allowed to emit events to the kernel bus. Any agent or hook that calls kernel.py emit_event must be listed here — unlisted agents are rejected (fail-closed). When a new plugin is installed that fires kernel events from hooks, its hook agent name must be added to permitted_agents.
Standard entries (from the runtime template):
```json
{
  "schema_version": "1.0",
  "permitted_agents": [
    "agentic-os-setup",
    "Triple-Loop Retrospective",
    "os-memory-manager",
    "os-health-check",
    "os-clean-locks",
    "os-eval-runner",
    "system"
  ]
}
```
When installing a plugin that fires hook events, check the plugin's hooks/ directory for the --agent value passed to emit_event and add it to the list. Missing entries cause silent event rejection (or, on unpatched kernels, fail-open — a security risk).
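Registering a new hook agent can be done with a small inline script. Note that "my-plugin-hook" is an illustrative name -- read the real `--agent` value from the plugin's hooks/ directory:

```shell
python3 - <<'EOF'
import json, os

path = "context/agents.json"
os.makedirs("context", exist_ok=True)
# Seed from the standard template entries when the file does not exist yet.
if os.path.exists(path):
    with open(path) as f:
        cfg = json.load(f)
else:
    cfg = {"schema_version": "1.0", "permitted_agents": ["agentic-os-setup", "system"]}

agent = "my-plugin-hook"  # illustrative; use the plugin's actual emit_event --agent value
if agent not in cfg["permitted_agents"]:
    cfg["permitted_agents"].append(agent)

with open(path, "w") as f:
    json.dump(cfg, f, indent=2)
print("permitted:", cfg["permitted_agents"])
EOF
```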
If more specialized skills are required for this environment, see the central installation guide:
👉 INSTALL.md
> [!TIP]
> Avoid File Duplication: When installing local/development plugins, ensure they are linked as symbolic links rather than deep-copied (verify whether the skill installer does this, or use `ln -s`).
After every init run, complete the Post-Run Self-Assessment Survey
(references/memory/post_run_survey.md). Init sessions reveal what is confusing about
onboarding — this is critical signal for improving the skill and the OS.
Count-Based Signals: How many times did the user need clarification? How many interview questions needed rephrasing? How many setup steps required re-explanation?
Qualitative Friction:
Improvement Recommendation: What one change to the init skill, interview flow, or scaffold templates should be tested before the next run?
Save to: ${CLAUDE_PROJECT_DIR}/context/memory/retrospectives/survey_[YYYYMMDD]_[HHMM]_os-init.md
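The placeholder path can be generated at save time. The fallback to the current directory when CLAUDE_PROJECT_DIR is unset is an assumption for this sketch:

```shell
SURVEY="${CLAUDE_PROJECT_DIR:-.}/context/memory/retrospectives/survey_$(date +%Y%m%d)_$(date +%H%M)_os-init.md"
mkdir -p "$(dirname "$SURVEY")"
echo "Survey will be saved to: $SURVEY"
```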
Emit survey completion:
```shell
python3 context/kernel.py emit_event --agent agentic-os-setup \
  --type learning --action survey_completed \
  --summary "retrospectives/survey_[DATE]_[TIME]_os-init.md"
```
For the agent running this skill:
- os-guide skill
- os-memory-manager skill
- references/operations/project-setup-guide.md