(Industry standard: Loop Agent / Single Agent) Primary Use Case: Self-contained research, content generation, and exploration where no inner delegation is required. Self-directed research and knowledge capture loop. Use when: starting a session (Orientation), performing research (Synthesis), or closing a session (Seal, Persist, Retrospective). Ensures knowledge survives across isolated agent sessions.
From agent-loops:

```shell
npx claudepluginhub richfrem/agent-plugins-skills --plugin agent-loops
```

This skill is limited to using the following tools:
Bundled files:

- acceptance-criteria.md
- evals/evals.json
- evals/results.tsv
- fallback-tree.md
- references/acceptance-criteria.md
- references/fallback-tree.md
- references/phases.md
- references/self-correction.md
- requirements.txt
This skill requires Python 3.8+ and standard library only. No external packages needed.
To install this skill's dependencies:
```shell
pip-compile ./requirements.in
pip install -r ./requirements.txt
```
See ./requirements.txt for the dependency lockfile (currently empty — standard library only).
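Since the only requirement is the interpreter itself, the sole preflight worth doing is a version check. A minimal sketch (the `check_runtime` helper name is illustrative, not part of the skill):

```python
import sys

def check_runtime(min_version=(3, 8)):
    """Return True when the running interpreter meets the skill's stated floor.

    The (3, 8) minimum comes from this document; no third-party packages
    are required, so there is nothing else to verify.
    """
    return sys.version_info >= min_version
```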
The Learning Loop is a structured cognitive continuity protocol ensuring that knowledge survives across isolated agent sessions. It is designed to be universally applicable to any agent framework.
YOU MUST ACTUALLY PERFORM THE STEPS LISTED BELOW. Describing what you "would do", summarizing expected output, or marking a step complete without actually doing the work is a PROTOCOL VIOLATION.
Closure is NOT optional. If the user says "end session" or you are wrapping up, you MUST run the full closure sequence. Skipping any step means the next agent starts blind.
Prerequisite: You must establish a valid session context upon Wakeup before modifying any code.
Orientation → Synthesis → Strategic Gate → Red Team Audit → [Execution] → Loop Complete (Return to Orchestrator)
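The flow above is strictly sequential, which can be sketched as a small state machine; the `Phase` enum and `next_phase` helper below are illustrative names, not part of the skill's code:

```python
from enum import Enum

class Phase(Enum):
    ORIENTATION = 1
    SYNTHESIS = 2
    STRATEGIC_GATE = 3
    RED_TEAM_AUDIT = 4
    EXECUTION = 5
    LOOP_COMPLETE = 6

def next_phase(current):
    """Advance strictly in order; skipping a phase is a protocol violation."""
    if current is Phase.LOOP_COMPLETE:
        raise RuntimeError("Loop complete: return control to the Orchestrator")
    members = list(Phase)
    return members[members.index(current) + 1]
```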
Goal: Establish Identity & Context. Trigger: First action upon environment initialization.
STOP: Do NOT proceed to work until you have completed Phase I.
learning/ or memory/ directory.

Human-in-the-Loop Required
Choose your Execution Mode:
Option A: Standard Agent (Single Loop)
Option B: Dual Loop
Option C: Triple Loop. Use the triple-loop SKILL. Execute according to its instructions.

This loop is now complete. You must formally exit the loop and return control to the Orchestrator. Skipping any close step means the next agent starts blind and the flywheel stalls.
Before handoff, you MUST complete the Post-Run Self-Assessment Survey
(references/memory/post_run_survey.md). Answer every question — do not summarize or skip sections.
Survey sections (all mandatory):
Run Metadata: date, task type, task complexity, skill/capability under test
Completion Outcome:
Count-Based Signals (Karpathy Parity):
Qualitative Friction:
Improvement Recommendation:
Save completed survey to:
${CLAUDE_PROJECT_DIR}/context/memory/retrospectives/survey_[YYYYMMDD]_[HHMM]_[AGENT].md
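Constructing that path programmatically is a matter of datetime formatting. A hedged sketch (the `survey_path` helper is hypothetical; the directory layout follows the pattern above):

```python
from datetime import datetime
from pathlib import Path

def survey_path(project_dir, agent, now=None):
    # Mirrors the documented naming pattern:
    # context/memory/retrospectives/survey_[YYYYMMDD]_[HHMM]_[AGENT].md
    now = now or datetime.now()
    name = f"survey_{now:%Y%m%d}_{now:%H%M}_{agent}.md"
    return Path(project_dir) / "context" / "memory" / "retrospectives" / name
```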
Emit survey completion event:
```shell
python3 context/kernel.py emit_event --agent "Triple-Loop Retrospective" \
  --type learning --action survey_completed \
  --summary "retrospectives/survey_[DATE]_[TIME]_[AGENT].md"
```
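Because the agent name contains a space, shell quoting matters; invoking the kernel from Python with a list argv sidesteps quoting entirely. A sketch under the assumption that `context/kernel.py` accepts the flags shown above (`build_emit_cmd` is an illustrative helper):

```python
def build_emit_cmd(agent, event_type, action, summary):
    """Build the argv for kernel.py's emit_event subcommand.

    Passing this list to subprocess.run avoids shell quoting, which matters
    because the agent name ("Triple-Loop Retrospective") contains a space.
    """
    return [
        "python3", "context/kernel.py", "emit_event",
        "--agent", agent,
        "--type", event_type,
        "--action", action,
        "--summary", summary,
    ]
```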
Run the automated metric collector:
```shell
python3 "${CLAUDE_PLUGIN_ROOT}/hooks/scripts/post_run_metrics.py"
```
This emits a `type: metric` event capturing: human_interventions, workflow_uncertainty, missed_steps, cli_errors, friction_events_total, hook_errors. These feed the Triple-Loop Retrospective auto-trigger: three or more friction events of the same type require a Full Loop improvement pass before the next cycle.
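The auto-trigger rule reduces to a frequency count over event types. A minimal sketch (`full_loop_triggers` is an illustrative name; the threshold of 3 comes from the rule above):

```python
from collections import Counter

FRICTION_THRESHOLD = 3  # per the rule: 3+ events of one type force a Full Loop pass

def full_loop_triggers(friction_event_types):
    """Return the event types that have crossed the auto-trigger threshold."""
    counts = Counter(friction_event_types)
    return [etype for etype, n in counts.items() if n >= FRICTION_THRESHOLD]
```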
Run session-memory-manager to write the dated session log and promote key findings to L3:
- context/memory/YYYY-MM-DD.md including survey outcomes and metric counts
- context/memory.md with dedup IDs

| Phase | Name | Action Required |
|---|---|---|
| I | Orientation | Load context, last survey, last session log |
| II | Synthesis | Create/modify research artifacts |
| III | Strategic Gate | Obtain "Proceed" from User |
| IV | Red Team Audit | Compile packet for adversary review |
| V | Self-Assessment Survey | Answer all sections, save to retrospectives/, emit event |
| VI | Post-Run Metrics | Run post_run_metrics.py, emit metric event |
| VII | Memory Persistence | Session log + L3 promotion via session-memory-manager |
| VIII | Handoff | Return control to Orchestrator |
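The closure sequence above can be checked mechanically. A sketch that reports which phases are still owed before handoff (`closure_gaps` is an illustrative helper, not part of the skill):

```python
CLOSURE_PHASES = [
    "Orientation", "Synthesis", "Strategic Gate", "Red Team Audit",
    "Self-Assessment Survey", "Post-Run Metrics", "Memory Persistence",
    "Handoff",
]

def closure_gaps(completed):
    """Phases still outstanding; any gap leaves the next agent starting blind."""
    done = set(completed)
    return [phase for phase in CLOSURE_PHASES if phase not in done]
```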
You are not "done" until the active task tracker says you're done. Do not mark a task done without running its verification sequence first.

When a Learning Loop runs inside a Dual-Loop session:
| Phase | Dual-Loop Role | Notes |
|---|---|---|
| I (Orientation) | Outer Loop boots, orients | Reads boot files + spec context |
| II-III (Synthesis/Gate) | Outer Loop plans, user approves | Strategy Packet generated |
| IV (Audit) | Outer Loop snapshots before delegation | Pre-execution checkpoint |
| (Execution) | Inner Loop performs tactical work | Code-only, isolated |
| Verification | Outer Loop inspects Inner Loop output | Validates against criteria |
| V (Handoff) | Outer Loop receives results | Triggers global retrospective |
Key rule: The Inner Loop does NOT run Learning Loop phases. All cognitive continuity is the Outer Loop's responsibility.
Cross-reference: dual-loop SKILL