Restructures human-in-the-loop workflows into autonomous LLM CLI loops with file outputs, structured logs, and trap-or-abandon decisions. For iterative grunt work, web UIs, dashboards, or Claude automation.
```
npx claudepluginhub outlinedriven/odin-claude-plugin --plugin odin
```

This skill uses the workspace's default tool permissions.
Designs closed-loop prompts with Request-Validate-Resolve structure for reliable agentic workflows, self-validating agents, feedback loops, and test-fix cycles.
Provides patterns for autonomous Claude Code loops: sequential pipelines, agentic REPLs, PR cycles, de-sloppify cleanups, and RFC-driven multi-agent DAGs. For continuous dev workflows without intervention.
The job: turn workflows that need a human in the inner loop into workflows the LLM closes itself. The two halves are *removing the trigger gate* and *opening observability*.
Before proposing changes, name the trigger gate explicitly:
Most loops have one or two gates that, once removed, collapse the cycle to seconds. Pick the smallest gate first.
If the workflow is gated by clicking in a web app, find or build the equivalent CLI command. Webhooks, REST endpoints, gh/aws/gcloud subcommands, internal just targets: anything programmatically invokable. The LLM can then loop without leaving its session.
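A sketch of the shape, assuming the gate is "click Merge once checks pass" on a GitHub PR; the PR number is a placeholder:

```bash
#!/usr/bin/env bash
# Replace "watch the web UI, click Merge when green" with a CLI loop.
# The PR number is a placeholder; gh pr checks --watch blocks until
# every check completes and exits non-zero if any check fails.
set -euo pipefail

pr=123  # hypothetical PR number

gh pr checks "$pr" --watch   # poll CI from the session, no browser
gh pr merge "$pr" --squash   # the former human click, now a command
```

With set -e, a failing check aborts before the merge, which is exactly the verdict the loop needs.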
If the workflow's result lives in chat memory or a screenshot, redirect to a file the LLM can read back: structured JSON dumps, markdown reports, append-only logs with addressable offsets. Why: file outputs survive compaction, support diff, and are inspectable by future sessions without replaying context.
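A minimal sketch of the redirect; build_report is a stand-in for whatever produces the result, and the out/ paths are illustrative:

```bash
# Redirect results into files instead of chat. build_report is a
# stand-in command; out/ paths are illustrative.
mkdir -p out

build_report --json > out/report.json   # structured dump, diffable

# Append-only log: one line per run, readable at a stable offset later.
status=$(jq -r '.status' out/report.json)
printf '%s status=%s\n' "$(date -u +%FT%TZ)" "$status" >> out/loop.log
```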
If verification requires eyeballing a Grafana/Datadog dashboard, surface the same metrics through a CLI query (PromQL, the Datadog API, a log-aggregation tail): anything that produces a pass/fail/warn verdict the LLM can read.
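For example, a hedged sketch against the Prometheus HTTP API; the endpoint URL, the error-rate query, and the 0.1 threshold are all assumptions:

```bash
# Query the same metric the dashboard shows and reduce it to a verdict.
# URL, PromQL expression, and threshold are assumptions for illustration.
PROM=http://localhost:9090
q='sum(rate(http_requests_total{status=~"5.."}[5m]))'

errors=$(curl -s "$PROM/api/v1/query" --data-urlencode "query=$q" \
  | jq -r '.data.result[0].value[1] // "0"')

if awk -v e="$errors" 'BEGIN { exit !(e < 0.1) }'; then
  echo "PASS error_rate=$errors"
else
  echo "FAIL error_rate=$errors"
  exit 1
fi
```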
If the human's role is "looks right to me", encode the criterion as a test, schema, or assertion. The contract becomes the loop's done-criterion (pair with strict-validation-setup for the bootstrap of those gates).
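A sketch of the resulting done-criterion loop; npm test stands in for whatever gate you bootstrapped:

```bash
# The encoded criterion is the loop's exit condition. npm test is a
# stand-in for the bootstrapped gate (schema check, assertion, tester).
max=10
for attempt in $(seq 1 "$max"); do
  if npm test; then
    echo "gate passed on attempt $attempt"
    exit 0
  fi
  # ... the agent applies a fix here, driven by the failing output ...
done
echo "gate still failing after $max attempts" >&2
exit 1
```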
After the structural fixes above, some steps still cannot be made autonomous: they involve genuine human judgment, external compliance, or a capability the LLM lacks. For each remaining gate, apply one rule, already settled by prior consensus: "what can't be looped — abandon firmly and improve the harness."
Babysitting an unloopable step is the failure mode this skill exists to prevent.
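Abandoning firmly can itself be mechanical. A sketch, assuming a hypothetical check_signoff.sh for a compliance gate:

```bash
# When a gate genuinely needs a human (check_signoff.sh is hypothetical),
# stop the loop and leave an addressable handoff instead of babysitting.
if ! ./check_signoff.sh; then
  printf '%s blocked=compliance-signoff\n' "$(date -u +%FT%TZ)" >> out/loop.log
  echo "abandoning: gate needs human sign-off; details in out/loop.log" >&2
  exit 2
fi
```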
Pair these fixes with init for AGENTS.md, strict-validation-setup, and test-driven (or the language's idiomatic tester).

Surgical, not architectural. Remove one gate at a time. After each fix, re-evaluate whether the loop now closes; sometimes one trigger removal is enough. Resist the temptation to redesign the whole system.