Provides a structured framework of specialized agents for research, analysis, architecture decisions, validation, synthesis, reviews, investigations, and reporting. Intended for complex problems that need systematic exploration and persistent artifacts.
Install via: `npx claudepluginhub geekatron/jerry --plugin jerry`
> **Version:** 2.2.0
> **Framework:** Jerry Problem-Solving (PS)
> **Constitutional Compliance:** Jerry Constitution v1.0
This SKILL.md serves multiple audiences:
| Level | Audience | Sections to Focus On |
|---|---|---|
| L0 (ELI5) | New users, stakeholders | Purpose, When to Use, Routing Disambiguation, Quick Reference |
| L1 (Engineer) | Developers invoking agents | Invoking an Agent, Agent Details, Adversarial Quality Mode |
| L2 (Architect) | Workflow designers | Orchestration Flow, State Passing, Adversarial Quality Mode |
The Problem-Solving skill provides a structured framework for tackling complex problems through specialized agents. Each agent produces persistent artifacts that survive context compaction and build a knowledge base over time.
Activate this skill when a problem requires systematic exploration, specialist perspectives, or artifacts that must persist across sessions. The available agents:
| Agent | Role | Output Location |
|---|---|---|
| ps-researcher | Research Specialist - Gathers information with citations | docs/research/ |
| ps-analyst | Analysis Specialist - Deep analysis (5 Whys, FMEA, trade-offs) | docs/analysis/ |
| ps-architect | Architecture Specialist - Creates ADRs in Nygard format | docs/decisions/ |
| ps-critic | Quality Evaluator - Iterative refinement with quality scores | docs/critiques/ |
| ps-validator | Validation Specialist - Verifies constraints with evidence | docs/analysis/ |
| ps-synthesizer | Synthesis Specialist - Pattern extraction across documents | docs/synthesis/ |
| ps-reviewer | Review Specialist - Code/design/security quality reviews | docs/reviews/ |
| ps-investigator | Investigation Specialist - Root cause of failures | docs/investigations/ |
| ps-reporter | Reporting Specialist - Status and progress reports | docs/reports/ |
All agents produce output at three levels of detail, matching the audience levels above: L0 (executive summary), L1 (engineer-level detail), and L2 (architect-level depth).
Simply describe what you need:
"Research best practices for event sourcing in Python"
"Analyze the trade-offs between SQLite and PostgreSQL for this use case"
"Create an ADR for choosing Redis as our caching layer"
"Validate that all domain constraints are met"
"Investigate why the API timeout occurred"
The orchestrator will select the appropriate agent based on keywords and context.
Request a specific agent:
"Use ps-researcher to explore graph database options"
"Have ps-analyst do a 5 Whys on the login failures"
"I need ps-architect to create an ADR for the new persistence layer"
For programmatic invocation within workflows:
Task(
description="ps-researcher: Graph databases",
subagent_type="general-purpose",
prompt="""
You are the ps-researcher agent (v2.0.0).
## PS CONTEXT (REQUIRED)
- **PS ID:** work-024
- **Entry ID:** e-101
- **Topic:** Graph Database Options
## MANDATORY PERSISTENCE (P-002)
Create file at: docs/research/work-024-e-101-graph-databases.md
## RESEARCH TASK
Research graph database options for the Jerry framework.
Focus on: Gremlin compatibility, Python support, embedded options.
"""
)
For complex problems requiring multiple perspectives:
User Request: "I need to understand why our tests are slow and fix it"
1. ps-researcher → Gather data on test execution patterns
Output: docs/research/work-024-e-001-test-performance.md
2. ps-analyst → Apply 5 Whys to identify root cause
Output: docs/analysis/work-024-e-002-root-cause.md
3. ps-architect → Create ADR for proposed solution
Output: docs/decisions/work-024-e-003-adr-test-optimization.md
4. ps-validator → Verify solution meets constraints
Output: docs/analysis/work-024-e-004-validation.md
Agents can reference each other's output using state keys:
| Agent | Output Key | Provides |
|---|---|---|
| ps-researcher | researcher_output | Research findings, sources |
| ps-analyst | analyst_output | Root cause, recommendations |
| ps-architect | architect_output | Decision, alternatives |
| ps-validator | validator_output | Validation status, gaps |
| ps-synthesizer | synthesizer_output | Patterns, themes |
| ps-reviewer | reviewer_output | Findings, assessment |
| ps-investigator | investigator_output | Root cause, corrective actions |
| ps-reporter | reporter_output | Metrics, health status |
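The state keys above can be wired together in orchestration code. A minimal sketch, assuming a hypothetical `run_agent` helper that stands in for a real `Task()` dispatch (the helper and its stubbed return value are illustrative, not part of the skill):

```python
# Sketch: pass one agent's output to the next via the shared state keys.
# run_agent is a hypothetical stand-in for a real Task() dispatch.
def run_agent(name: str, prompt: str) -> str:
    return f"[{name} output for: {prompt}]"

state = {}

# ps-researcher runs first; its findings land under researcher_output.
state["researcher_output"] = run_agent("ps-researcher", "test performance data")

# ps-analyst consumes the researcher's findings from the shared state.
analyst_prompt = (
    "Apply 5 Whys to the findings below.\n\n"
    f"## PRIOR RESEARCH\n{state['researcher_output']}"
)
state["analyst_output"] = run_agent("ps-analyst", analyst_prompt)

print(sorted(state.keys()))  # ['analyst_output', 'researcher_output']
```

In a real workflow each `run_agent` call would be a `Task()` invocation whose prompt embeds the prior agent's persisted artifact, not an in-memory string.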
Each agent uses the allowed tools differently. Here are concrete examples:
1. Find existing research documents:
Glob(pattern="docs/research/**/*.md")
→ Returns list of prior research to reference
2. Search for industry sources:
WebSearch(query="event sourcing Python patterns 2026")
→ Find current industry guidance
3. Create research output (MANDATORY per P-002):
Write(
file_path="docs/research/work-024-e-001-event-sourcing.md",
content="# Research: Event Sourcing in Python\n\n## L0: Executive Summary\n..."
)
→ Persist findings - transient output VIOLATES P-002
1. Find prior analyses to reference:
Glob(pattern="docs/analysis/**/*.md")
2. Search for specific patterns in codebase:
Grep(pattern="try|except|raise", path="src/", output_mode="content", -C=2)
→ Find error handling patterns for root cause analysis
3. Read existing documentation:
Read(file_path="docs/research/work-024-e-001-event-sourcing.md")
→ Load prior research to inform analysis
4. Create analysis output (MANDATORY per P-002):
Write(
file_path="docs/analysis/work-024-e-002-root-cause.md",
content="# Root Cause Analysis: Build Failures\n\n## L0: Executive Summary\n..."
)
1. Find existing ADRs for consistency:
Glob(pattern="docs/decisions/**/*.md")
→ Reference prior decisions
2. Research architectural patterns:
WebFetch(url="https://martinfowler.com/eaaDev/EventSourcing.html",
prompt="Extract key benefits and trade-offs of event sourcing")
3. Create ADR output (MANDATORY per P-002):
Write(
file_path="docs/decisions/work-024-e-003-adr-persistence.md",
content="# ADR-042: Use Event Sourcing for Task History\n\n## Status\nPROPOSED\n..."
)
All agents MUST persist their output to files. This ensures artifacts survive context compaction and build a knowledge base over time. Outputs follow this directory layout:
docs/
├── research/ # ps-researcher outputs
│ └── {ps-id}-{entry-id}-{topic}.md
├── analysis/ # ps-analyst and ps-validator outputs
│ └── {ps-id}-{entry-id}-{analysis-type}.md
├── decisions/ # ps-architect ADRs
│ └── {ps-id}-{entry-id}-adr-{slug}.md
├── synthesis/ # ps-synthesizer outputs
│ └── {ps-id}-{entry-id}-synthesis.md
├── reviews/ # ps-reviewer outputs
│ └── {ps-id}-{entry-id}-{review-type}.md
├── investigations/ # ps-investigator outputs
│ └── {ps-id}-{entry-id}-investigation.md
└── reports/ # ps-reporter outputs
└── {ps-id}-{entry-id}-{report-type}.md
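The naming convention above can be captured in a small helper. A sketch, assuming a simple lowercase-and-hyphenate slug rule (the function name and slug rule are illustrative, not defined by the skill):

```python
import re

# Agent name -> output directory, per the layout above.
AGENT_DIRS = {
    "ps-researcher": "docs/research",
    "ps-analyst": "docs/analysis",
    "ps-validator": "docs/analysis",
    "ps-architect": "docs/decisions",
    "ps-synthesizer": "docs/synthesis",
    "ps-reviewer": "docs/reviews",
    "ps-investigator": "docs/investigations",
    "ps-reporter": "docs/reports",
}

def artifact_path(agent: str, ps_id: str, entry_id: str, topic: str) -> str:
    """Build the {ps-id}-{entry-id}-{topic}.md path for an agent's output."""
    slug = re.sub(r"[^a-z0-9]+", "-", topic.lower()).strip("-")
    return f"{AGENT_DIRS[agent]}/{ps_id}-{entry_id}-{slug}.md"

print(artifact_path("ps-researcher", "work-024", "e-101", "Graph Databases"))
# docs/research/work-024-e-101-graph-databases.md
```

The example reproduces the path used in the programmatic invocation above (`docs/research/work-024-e-101-graph-databases.md`).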
SSOT Reference: `.context/rules/quality-enforcement.md` -- all thresholds, strategy IDs, and criticality levels are defined there. NEVER hardcode values; always reference the SSOT.
The problem-solving skill integrates the adversarial quality framework defined in EPIC-002. This enables structured creator-critic-revision cycles with strategy-specific adversarial review for all PS workflows.
The quality framework provides 10 selected adversarial strategies across 4 mechanistic families. See .context/rules/quality-enforcement.md (Strategy Catalog section) for the authoritative list with IDs S-001 through S-014, composite scores, and family classifications.
| Family | Strategies | PS Application |
|---|---|---|
| Iterative Self-Correction | S-014 (LLM-as-Judge), S-007 (Constitutional AI Critique), S-010 (Self-Refine) | Quality scoring, constitutional compliance checks, self-review before output |
| Dialectical Synthesis | S-003 (Steelman Technique) | Strengthening arguments before critique, ensuring balanced analysis |
| Role-Based Adversarialism | S-002 (Devil's Advocate), S-004 (Pre-Mortem Analysis), S-001 (Red Team Analysis) | Challenging assumptions, anticipating failures, adversarial exploration |
| Structured Decomposition | S-013 (Inversion Technique), S-012 (FMEA), S-011 (Chain-of-Verification) | Systematic failure mode analysis, verification chains, inverse reasoning |
Per H-14 (HARD rule), all C2+ deliverables MUST go through a minimum 3-iteration creator-critic-revision cycle.
Cycle flow: the creator agent produces a draft, a critic evaluates it, and the creator revises -- repeated until quality thresholds are met.
Quality scoring uses the 6-dimension weighted composite defined in the SSOT.
Circuit breaker: Minimum 3 iterations REQUIRED (H-14). If no improvement after 2 consecutive iterations, ACCEPT_WITH_CAVEATS or escalate to user.
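The cycle and circuit breaker can be sketched as a loop. The dimension names, weights, and acceptance threshold below are placeholders -- the authoritative values live in the SSOT, never in code:

```python
# Sketch of the H-14 creator-critic-revision loop with a circuit breaker.
# Dimensions, weights, and threshold are ILLUSTRATIVE; real values come
# from .context/rules/quality-enforcement.md (the SSOT).
WEIGHTS = {"correctness": 0.3, "completeness": 0.2, "clarity": 0.2,
           "evidence": 0.1, "traceability": 0.1, "compliance": 0.1}
THRESHOLD = 0.85  # placeholder acceptance threshold

def composite(scores: dict) -> float:
    """Weighted composite over the six dimensions."""
    return sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)

def quality_cycle(create, critique, revise, max_iters=6):
    draft = create()
    best, stalled = 0.0, 0
    for i in range(1, max_iters + 1):
        score = composite(critique(draft))
        stalled = stalled + 1 if score <= best else 0  # no improvement?
        best = max(best, score)
        # H-14: never stop before 3 iterations; then stop on pass,
        # or on 2 consecutive non-improving iterations (circuit breaker).
        if i >= 3 and (score >= THRESHOLD or stalled >= 2):
            status = "ACCEPT" if score >= THRESHOLD else "ACCEPT_WITH_CAVEATS"
            return draft, score, status
        draft = revise(draft)
    return draft, best, "ESCALATE"
```

A real implementation would dispatch the critic as a ps-critic subagent and read its scores from the persisted critique artifact.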
Strategy activation follows the SSOT criticality levels (C1-C4). See .context/rules/quality-enforcement.md (Criticality Levels section) for the authoritative mapping.
| Level | PS Context | Required Strategies | Typical PS Scenario |
|---|---|---|---|
| C1 (Routine) | Simple research, status reports | S-010 (Self-Refine) | Single-topic research, progress report |
| C2 (Standard) | Analysis, design decisions, reviews | S-007, S-002, S-014 | Root cause analysis, ADR creation, code review |
| C3 (Significant) | Architecture decisions, cross-cutting analysis | C2 + S-004, S-012, S-013 | Multi-system impact analysis, architecture ADR |
| C4 (Critical) | Governance, irreversible decisions | All 10 selected strategies | Constitution changes, governance decisions |
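The escalation of required strategies by criticality can be expressed as cumulative sets. A sketch mirroring the table above (the SSOT remains the authoritative source for these mappings):

```python
# Required strategies per criticality level, per the table above.
# C3 = C2 + extras; C4 = all 10 selected strategies.
C1 = {"S-010"}
C2 = {"S-007", "S-002", "S-014"}
C3 = C2 | {"S-004", "S-012", "S-013"}
C4 = C1 | C3 | {"S-001", "S-003", "S-011"}  # all 10 selected strategies

def required_strategies(level: str) -> set:
    """Return the strategy IDs required at a given criticality level."""
    return {"C1": C1, "C2": C2, "C3": C3, "C4": C4}[level]

print(len(required_strategies("C4")))  # 10
```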
Auto-escalation rules (AE-001 through AE-006 in the SSOT) apply to PS workflows. Key rules:
- `docs/governance/JERRY_CONSTITUTION.md` = auto-C4
- `.context/rules/` = auto-C3 minimum

When selecting adversarial strategies for PS workflows, use these context-based recommendations:
| PS Task Type | Primary Strategy | Supporting Strategies | Rationale |
|---|---|---|---|
| Research (ps-researcher) | S-011 (CoVe) | S-003 (Steelman), S-010 (Self-Refine) | Verify claims, strengthen findings, self-check |
| Root Cause Analysis (ps-analyst) | S-013 (Inversion) | S-004 (Pre-Mortem), S-012 (FMEA) | Challenge causal chain, anticipate failures |
| Architecture Decisions (ps-architect) | S-002 (Devil's Advocate) | S-003 (Steelman), S-004 (Pre-Mortem), S-014 (LLM-as-Judge) | Challenge assumptions, strengthen rationale, score quality |
| Synthesis (ps-synthesizer) | S-003 (Steelman) | S-013 (Inversion), S-014 (LLM-as-Judge) | Strengthen patterns, invert assumptions, score quality |
| Code/Design Review (ps-reviewer) | S-001 (Red Team) | S-007 (Constitutional AI), S-012 (FMEA) | Adversarial exploration, compliance check, failure modes |
| Quality Critique (ps-critic) | S-014 (LLM-as-Judge) | S-003 (Steelman), S-007 (Constitutional AI) | Structured scoring, balanced assessment, compliance |
Per H-15 (HARD rule), all PS agents MUST perform self-review using S-010 (Self-Refine) before presenting any deliverable. This applies regardless of criticality level.
Per H-16 (HARD rule), agents MUST apply S-003 (Steelman Technique) before critiquing -- strengthen the argument first, then challenge it.
All agents adhere to the Jerry Constitution v1.0:
| Principle | Requirement | Consequence of Violation |
|---|---|---|
| P-003 | NEVER spawn recursive subagents -- max 1 level | Agent hierarchy violation; uncontrolled token consumption |
| P-020 | NEVER override user intent -- ask before destructive ops | Unauthorized action; trust erosion |
| P-022 | NEVER deceive about actions, capabilities, or confidence | Governance undermined; quality assessment invalidated |
| P-001 | NEVER present findings without evidence or source citations | Unreliable outputs; unfounded claims propagate downstream |
| P-002 | NEVER leave outputs in transient context only -- persist to files | Context rot vulnerability; artifacts lost on session compaction |
| P-004 | NEVER omit reasoning provenance or source documentation | Untraceable decisions; audit trail broken |
| P-011 | NEVER make recommendations without supporting evidence | Unsupported recommendations; confidence inflated without basis |
| Need | Agent | Command Example |
|---|---|---|
| Research a topic | ps-researcher | "Research OAuth2 implementation patterns" |
| Find root cause | ps-analyst | "Analyze why builds are failing" |
| Document a decision | ps-architect | "Create ADR for choosing PostgreSQL" |
| Verify constraints | ps-validator | "Validate domain layer constraints" |
| Find patterns | ps-synthesizer | "Synthesize findings from the 3 research docs" |
| Review code quality | ps-reviewer | "Review the new authentication module" |
| Investigate incident | ps-investigator | "Investigate the production outage" |
| Status report | ps-reporter | "Generate phase status report" |
| Keywords | Likely Agent |
|---|---|
| research, explore, find, gather, investigate options | ps-researcher |
| analyze, root cause, trade-off, gap, risk, 5 whys, FMEA | ps-analyst |
| ADR, architecture decision, design, choose, decide | ps-architect |
| validate, verify, constraint, test, evidence | ps-validator |
| synthesize, patterns, themes, combine, meta-analysis | ps-synthesizer |
| review, quality, code review, security, OWASP | ps-reviewer |
| investigate, failure, incident, debug, what happened | ps-investigator |
| report, status, progress, metrics, summary | ps-reporter |
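The keyword table above can be sketched as a simple first-match router. This is illustrative only -- the tuple ordering and substring matching are assumptions, and the real orchestrator also weighs surrounding context:

```python
# First-match keyword router over the table above (illustrative only).
ROUTES = [
    (("adr", "architecture decision", "design", "choose", "decide"), "ps-architect"),
    (("analyze", "root cause", "trade-off", "5 whys", "fmea"), "ps-analyst"),
    (("validate", "verify", "constraint"), "ps-validator"),
    (("synthesize", "patterns", "themes"), "ps-synthesizer"),
    (("review", "code review", "security", "owasp"), "ps-reviewer"),
    (("investigate", "failure", "incident", "debug"), "ps-investigator"),
    (("report", "status", "progress", "metrics"), "ps-reporter"),
    (("research", "explore", "find", "gather"), "ps-researcher"),
]

def route(request: str) -> str:
    """Pick the first agent whose keywords appear in the request."""
    text = request.lower()
    for keywords, agent in ROUTES:
        if any(k in text for k in keywords):
            return agent
    return "ps-researcher"  # default to research when nothing matches

print(route("Create an ADR for choosing PostgreSQL"))  # ps-architect
```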
Problem-solving artifacts should use standardized templates to ensure consistency.
Location: docs/knowledge/exemplars/templates/
| Template | Use For | Path |
|---|---|---|
| adr.md | Architecture Decision Records | docs/knowledge/exemplars/templates/adr.md |
| research.md | Research artifacts | docs/knowledge/exemplars/templates/research.md |
| analysis.md | Analysis artifacts | docs/knowledge/exemplars/templates/analysis.md |
| deep-analysis.md | Deep analysis | docs/knowledge/exemplars/templates/deep-analysis.md |
| synthesis.md | Synthesis documents | docs/knowledge/exemplars/templates/synthesis.md |
| review.md | Review artifacts | docs/knowledge/exemplars/templates/review.md |
| investigation.md | Investigation reports | docs/knowledge/exemplars/templates/investigation.md |
| jrn.md | Journal entries | docs/knowledge/exemplars/templates/jrn.md |
| use-case-template.md | Use case specifications | docs/knowledge/exemplars/templates/use-case-template.md |
Usage: When creating a new artifact, read the appropriate template first to ensure consistent structure and sections.
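A sketch of that usage, assuming plain `{placeholder}`-style markers in the template (the real templates may use a different placeholder convention; the helper name is hypothetical):

```python
from pathlib import Path

def instantiate(template_path: str, out_path: str, fields: dict) -> str:
    """Read a template, fill its placeholders, and write the artifact."""
    text = Path(template_path).read_text()
    filled = text.format(**fields)  # assumes {placeholder}-style markers
    Path(out_path).write_text(filled)
    return filled
```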
When this skill is the wrong choice and what happens if misrouted.
| Condition | Use Instead | Consequence of Misrouting |
|---|---|---|
| Simple multi-agent workflow coordination without research | /orchestration | Problem-solving loads 9 agent definitions (ps-researcher, ps-analyst, ps-architect, ps-critic, ps-validator, ps-synthesizer, ps-reviewer, ps-investigator, ps-reporter) when task only needs workflow state tracking and checkpoint coordination |
| Requirements engineering, V&V, or formal technical reviews (SRR/PDR/CDR) | /nasa-se | Problem-solving produces research artifacts and ADRs; NASA SE traceability matrices, VCRM tables, and NPR-compliant review packages not generated |
| Transcript parsing or meeting note extraction (VTT/SRT files) | /transcript | Problem-solving has no VTT/SRT parser; transcript-specific agents (ts-parser, ts-extractor) with hybrid Python+LLM architecture not invoked |
| Standalone adversarial quality review or tournament scoring | /adversary | Problem-solving ps-critic operates within creator-critic-revision loops (H-14); standalone one-shot adversarial assessment with strategy template selection requires /adversary |
| Security-hardened software design or threat modeling | /eng-team | Problem-solving lacks STRIDE/DREAD methodology, OWASP ASVS verification, and NIST SSDF governance; security-specific agent team (10 agents) not loaded |
| Offensive security testing or penetration testing | /red-team | Problem-solving produces research artifacts, not attack narratives; no MITRE ATT&CK kill chain coverage or engagement authorization methodology |
| Work item tracking or entity management | /worktracker | Problem-solving has no entity hierarchy management; WORKTRACKER.md manifest operations and WTI integrity rules not available |
For detailed agent specifications, see:
- `skills/problem-solving/agents/ps-researcher.md`
- `skills/problem-solving/agents/ps-analyst.md`
- `skills/problem-solving/agents/ps-architect.md`
- `skills/problem-solving/agents/ps-validator.md`
- `skills/problem-solving/agents/ps-synthesizer.md`
- `skills/problem-solving/agents/ps-reviewer.md`
- `skills/problem-solving/agents/ps-investigator.md`
- `skills/problem-solving/agents/ps-reporter.md`

**Skill Version:** 2.2.0
**Constitutional Compliance:** Jerry Constitution v1.0
**Enhancement:** EN-707 Adversarial quality mode integration (EPIC-003)
**Last Updated:** 2026-02-14