Guides software development through a six-phase workflow: Research, Plan, Iterate Plan, Experiment, Implement, Validate. Generates structured markdown docs via slash commands, leaving an auditable trail.
Install with:

```shell
npx claudepluginhub uw-ssec/rse-plugins --plugin ai-research-workflows
```

This skill uses the workspace's default tool permissions.
A structured, AI-enabled workflow for software development that guides you from initial research through to validated implementation. This skill provides a systematic approach to complex development tasks through distinct, well-defined phases.
The research workflow consists of six phases:
- `/research <topic>` — Document and understand existing code, patterns, and architecture
- `/plan <feature>` — Create detailed, testable implementation plans through interactive research
- `/iterate-plan <plan-file> <changes>` — Refine existing plans based on feedback or changed requirements
- `/experiment <approach-question>` — Try multiple approaches before committing (optional)
- `/implement <plan-file>` — Execute the plan phase by phase with verification
- `/validate <plan-file>` — Systematically verify implementation against plan criteria

Each phase produces a structured markdown document saved to `.agents/` in your project root, creating an auditable trail of technical decisions and implementation details.
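A typical end-to-end run, using a hypothetical auth-system feature as the argument (the exact argument forms shown here are illustrative), might look like:

```
/research auth-system          # → .agents/research-auth-system.md
/plan auth-system              # → .agents/plan-auth-system.md
/experiment jwt-vs-session     # optional → .agents/experiment-jwt-vs-session.md
/implement plan-auth-system.md
/validate plan-auth-system.md
```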
Use this decision tree to choose which workflow step to run:
```
Need to understand existing code?
  └─> /research <topic>

Ready to design an implementation?
  ├─> Have research docs?
  │     └─> /plan <feature> (references research automatically)
  └─> No research docs?
        └─> Run /research first, then /plan

Need to adjust an existing plan?
  └─> /iterate-plan <plan-file> <changes>

Uncertain about the best approach?
  └─> /experiment <approach-question>

Ready to execute the plan?
  └─> /implement <plan-file>

Implementation complete, need verification?
  └─> /validate <plan-file>
```
All workflow documents are saved to .agents/ with this naming pattern:
- `research-<slug>.md` — Example: `research-auth-system.md`
- `plan-<slug>.md` — Example: `plan-auth-system.md`
- `experiment-<slug>.md` — Example: `experiment-jwt-vs-session.md`
- `implement-<slug>.md` — Example: `implement-auth-system.md`

The slug is automatically derived from the command argument (lowercased and hyphenated).
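The skill does not document the exact slug algorithm; a minimal sketch of "lowercased and hyphenated" might look like this (the function name `slugify` and the handling of punctuation are assumptions):

```python
import re

def slugify(argument: str) -> str:
    """Lowercase the command argument and collapse runs of
    non-alphanumeric characters into single hyphens."""
    slug = re.sub(r"[^a-z0-9]+", "-", argument.lower())
    return slug.strip("-")

# Hypothetical usage: derive the research document filename.
print(f"research-{slugify('Auth System')}.md")  # research-auth-system.md
```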
Note: The `/iterate-plan` command edits existing plan documents in place. The `/validate` command produces inline validation reports rather than templated documents.
Workflow phases build on each other through explicit references:
Each document includes a `## References` section listing the research docs consulted, with relative links to the referenced files:

- [Research: Auth System](research-auth-system.md)
- [Plan: Auth System Implementation](plan-auth-system.md)
This creates a navigable graph of technical decisions and their implementation.
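As an illustration, that graph could be reconstructed by mining the relative markdown links out of each document. This helper is hypothetical, not part of the skill:

```python
import re

# Matches markdown links whose target is another workflow doc, e.g.
# [Research: Auth System](research-auth-system.md)
LINK_RE = re.compile(r"\[([^\]]+)\]\(([\w\-]+\.md)\)")

def extract_refs(markdown_text: str) -> list[tuple[str, str]]:
    """Return (link title, target filename) pairs found in a document."""
    return LINK_RE.findall(markdown_text)

plan = "See [Research: Auth System](research-auth-system.md) for background."
print(extract_refs(plan))  # [('Research: Auth System', 'research-auth-system.md')]
```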
### /research

Output: A comprehensive technical document explaining the current state with file references and architecture insights.

Key principle: Document what IS, not what SHOULD BE. You are a technical documentarian, not a critic.
### /plan

Output: A detailed, phased implementation plan with measurable success criteria, specific file references, and testing strategy.

Key principle: Interactive and iterative. Ask questions, research patterns, get feedback at each stage before finalizing.
### /iterate-plan

Output: Updated plan document with surgical edits maintaining consistency.

Key principle: Verify assumptions with code research. Confirm understanding before making changes.
### /experiment

Output: Comparative analysis with code prototypes, observations, and a clear recommendation.

Key principle: Actually run code. Don't theorize — test real implementations and record honest observations.

Note: This step is OPTIONAL. Only use when the best approach is genuinely uncertain.
### /implement

Output: Working implementation with updated plan checkmarks and an implementation summary document.

Key principle: Follow the plan's intent while adapting to reality. Communicate mismatches clearly.
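The plan checkmarks are ordinary markdown task-list items; a quick way to gauge progress (a hypothetical helper, assuming the `- [x]` / `- [ ]` convention) is:

```python
def plan_progress(plan_md: str) -> tuple[int, int]:
    """Count completed vs. total task-list items in a plan document."""
    done = plan_md.count("- [x]")
    total = done + plan_md.count("- [ ]")
    return done, total

plan = """\
## Phase 1
- [x] Add session model
- [x] Wire up login route
- [ ] Add logout route
"""
print(plan_progress(plan))  # (2, 3)
```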
### /validate

Output: Comprehensive validation report showing pass/fail status for each success criterion.

Key principle: Systematic and thorough. Validate what was actually built, not what was intended.
The workflow writes these documents:

- `.agents/research-<slug>.md`
- `.agents/plan-<slug>.md`
- `.agents/experiment-<slug>.md`
- `.agents/implement-<slug>.md` (on completion)

This skill provides four document templates in `${CLAUDE_PLUGIN_ROOT}/skills/research-workflow-management/assets/`:
- `research-template.md` — Structure for research documentation
- `plan-template.md` — Structure for implementation plans
- `experiment-template.md` — Structure for experiment reports
- `implement-template.md` — Structure for implementation summaries

Commands automatically use these templates when generating workflow documents.
This structured approach provides several benefits: an auditable trail of technical decisions, a navigable graph of linked documents, and systematic verification against explicit success criteria.
The workflow is designed to be flexible — you can skip optional phases like experimentation or iterate on plans as requirements evolve. The key is maintaining clear documentation of what was built and why.
This workflow integrates seamlessly with standard development practices:
The structured workflow complements rather than replaces your existing development process.