Scrum Master — owns the development process, optimizes workflow, runs retrospectives, reviews skill quality. Knows the full /we:* pipeline and how all skills interact. Use when user mentions "optimize", "process", "workflow", "retrospective", "skill quality", "impediment", "/we:sm".
```shell
npx claudepluginhub weside-ai/claude-code-plugin --plugin we
```

This skill uses the workspace's default tool permissions.
**Role:** Manages HOW the team works (process, quality, efficiency). **Counterpart:** Product Owner (/we:refine) manages WHAT we build.
"we" is an Agentic Product Ownership toolkit for Claude Code.
It covers the full product development chain — from story refinement through development, code review, and CI automation. The plugin works standalone, but optionally connects to a weside.ai Companion for persistent project memory, vision alignment, and proactive insights.
The key insight: Most AI coding tools help developers write code. This plugin helps Product Owners and developers shape products — ensuring the right thing gets built, not just that code gets written.
Without Companion:
With Companion (weside.ai account):
The Companion transforms the plugin from a workflow tool into a team member that remembers, challenges, and grows with the project.
Three phases: /we:refine (interactive) → /we:story (autonomous) → User merges (manual)
```
/we:setup (once per project — detect stack, ticketing, vision)
    ↓
/we:refine (PO + Claude, INTERACTIVE — story + plan)
    ↓
/we:story (Claude AUTONOMOUS — develop → review → test → PR → CI)
    │
    ├── Develop (INLINE: branch, code, tests, commits)
    ├── AC Verification (every AC with evidence)
    ├── /we:review (code-reviewer agent, background)
    ├── /we:static (static-analyzer agent, background)
    ├── /we:test (test-runner agent, background)
    ├── /we:pr (PR with prerequisite gates)
    └── CI-Review (INLINE: collect → triage → batch-fix → push)
    ↓
User reviews PR, merges, closes ticket
```
Three phases: Planning (manual) → Development (autonomous) → Delivery (manual)
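The autonomous /we:story phase above can be sketched as a simple gate sequence. This is a minimal illustration: the gate names and the `run` callback are assumptions for this sketch, not the plugin's actual API (which lives in orchestration.py).

```python
# Gates of the autonomous /we:story phase, in pipeline order.
# Names are illustrative, not the plugin's real identifiers.
GATES = ["develop", "ac-verify", "review", "static", "test", "pr", "ci-review"]

def run_story(ticket: str, run) -> str:
    """Run each gate in order; stop at the first failing gate."""
    for gate in GATES:
        if not run(gate, ticket):
            return f"{ticket}: blocked at {gate}"
    return f"{ticket}: ready for merge"
```

The point of the sketch: every gate must pass before the PR reaches the user, and a failure anywhere halts the pipeline at that gate rather than pushing broken work forward.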
Own the process. Ensure skills work together seamlessly:
- quality/dor.md (Definition of Ready)
- quality/dod.md (Definition of Done)

Keep skills lean, focused, and effective:
| Check | What to Look For |
|---|---|
| Focus | Each skill does ONE thing well |
| Duplication | No repeated content across skills (inline or use quality/ references) |
| Consistency | Same terms, same patterns, same checkpoint names |
| Token efficiency | Minimal but complete knowledge per skill |
| Examples | Generic (not project-specific) |
| Frontmatter | name + description + trigger words |
Identify and remove blockers:
After each sprint/milestone:
```shell
ls -la skills/[name]/                       # list skill files
wc -l skills/[name]/*.md                    # check line counts
grep -rE '\[.*\]\(.*\.md\)' skills/[name]/  # find markdown cross-references
```
Per skill: Purpose? Audience? Duplicates with other skills?
Replace any project-specific examples with generic ones. Skills must work for Python, Node.js, Rust, Go — not just one stack.
```shell
wc -l skills/[name]/*.md              # Line count target
grep -rE '[project-pattern]' skills/  # No project-specific examples
```
```shell
CLI="${CLAUDE_PLUGIN_ROOT}/scripts/orchestration.py"

# View all stories
python3 $CLI story list

# Specific story
python3 $CLI story status {TICKET}

# All stories (including completed)
python3 $CLI story list
```
| Metric | What It Tells You | Target |
|---|---|---|
| CI attempts | How many fix cycles | 1 (first green) |
| Time to merge | Development velocity | < 60 min |
| Failure types | Categories of failures | None |
| Circuit breaker triggers | Pipeline robustness | 0 |
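As a sketch of how these metrics might be aggregated across a sprint, assuming hypothetical story records with `ci_attempts` and `minutes_to_merge` fields (the field names are assumptions; the real data comes from orchestration.py):

```python
from statistics import mean

def sprint_metrics(stories):
    """Aggregate the pipeline metrics from the table above."""
    return {
        "avg_ci_attempts": mean(s["ci_attempts"] for s in stories),
        "avg_minutes_to_merge": mean(s["minutes_to_merge"] for s in stories),
        # Fraction of stories that went green on the first CI attempt.
        "first_green_rate": sum(s["ci_attempts"] == 1 for s in stories) / len(stories),
    }

# Hypothetical story records, for illustration only.
stories = [
    {"ticket": "PROJ-1", "ci_attempts": 1, "minutes_to_merge": 42},
    {"ticket": "PROJ-2", "ci_attempts": 3, "minutes_to_merge": 95},
]
print(sprint_metrics(stories))
```

Comparing `first_green_rate` and `avg_minutes_to_merge` against the targets in the table is what turns raw story data into a retrospective signal.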
If the same failure type appears in 3+ stories → propose a process improvement:
| Recurring Pattern | Suggested Action |
|---|---|
| Lint failures | Check auto-fix in story Step 2, add pre-commit |
| Type errors | Stricter type checking config |
| Test failures | Improve coverage requirements or test patterns |
| Review blockers | Update code-reviewer agent rules |
| CI-fix loops | Improve local validation before push |
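The 3+ recurrence rule can be sketched in a few lines, assuming each story record carries a `failures` list (a hypothetical field for illustration):

```python
from collections import Counter

THRESHOLD = 3  # same failure type in 3+ stories → propose a process improvement

def recurring_failures(stories):
    """Return failure types that occur in THRESHOLD or more stories."""
    counts = Counter()
    for story in stories:
        # Count each type at most once per story, not once per occurrence.
        counts.update(set(story["failures"]))
    return {ftype for ftype, n in counts.items() if n >= THRESHOLD}
```

Deduplicating per story matters: one story with five lint failures is a local problem, while three stories each hitting lint once is a systemic one.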
After user merges PR:
```shell
python3 $CLI story status {TICKET}
```
Two levels of analysis:
Level 1: Individual Story (after each merge)
→ Analyzes ONE story, saves lessons, flags patterns
Level 2: Aggregate Sprint Analysis (/we:sm)
→ Analyzes MULTIPLE stories, identifies systemic issues
→ Proposes process improvements
When creating or modifying skills:
```markdown
---
name: skill-name   # lowercase-with-hyphens, max 64 chars
description: >     # max 1024 chars: WHAT + WHEN + trigger keywords
  What it does. When to use. Trigger keywords.
---

# Skill Name

[1-2 sentences: purpose]

## When to Use

[Trigger conditions]

## Workflow

[Numbered steps]

## Rules

[DOs and DON'Ts]

## Output Format

[Expected output]
```
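The frontmatter constraints above (a lowercase-with-hyphens name up to 64 chars, plus a description) can be checked mechanically. A minimal sketch, not part of the plugin:

```python
import re

NAME_RE = re.compile(r"^[a-z0-9]+(-[a-z0-9]+)*$")  # lowercase-with-hyphens

def audit_frontmatter(text: str) -> list:
    """Return a list of frontmatter problems in a skill file's text."""
    problems = []
    match = re.match(r"^---\n(.*?)\n---", text, re.DOTALL)
    if not match:
        return ["missing frontmatter block"]
    fm = match.group(1)
    name = re.search(r"^name:\s*(\S+)", fm, re.MULTILINE)
    if not name:
        problems.append("missing name")
    elif not NAME_RE.match(name.group(1)) or len(name.group(1)) > 64:
        problems.append("name must be lowercase-with-hyphens, max 64 chars")
    if not re.search(r"^description:", fm, re.MULTILINE):
        problems.append("missing description")
    return problems
```

Running a check like this across `skills/*/` during a retrospective catches drift before it accumulates.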
Traditional: load ALL docs for EVERY task = 95k tokens.

Our approach: knowledge flows via the Story.

- Phase 1 (Planning): PO loads vision → writes INTO the plan (~3k tokens)
- Phase 2 (Development): Developer loads ONLY the plan (~5k tokens)
- Phase 3 (Review): each agent loads ONLY its rules (~3k tokens each)

"The Story IS the knowledge carrier."
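Worked out with the illustrative figures above (these are the document's rough budgets, not measurements):

```python
# Token budgets from the phase breakdown above (illustrative figures).
traditional = 95_000   # load ALL docs for EVERY task
planning = 3_000       # PO loads vision, writes INTO the plan
development = 5_000    # developer loads ONLY the plan
review = 3 * 3_000     # three review agents, each loads ONLY its rules
story_carrier = planning + development + review

print(f"story-as-carrier: {story_carrier:,} tokens vs {traditional:,}")
print(f"reduction: {1 - story_carrier / traditional:.0%}")
```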
```shell
# Full pipeline
/we:refine "Feature description"   # Story + Plan (interactive)
/we:story PROJ-1                   # Full autonomous pipeline

# Individual quality gates
/we:static        # Lint/format/types
/we:test          # Run tests
/we:review        # Code review
/we:pr            # Create PR
/we:ci-review     # Fix CI/review findings

# Process & quality
/we:sm            # This skill — process optimization
/we:arch          # Architecture guidance
/we:doc-review    # Documentation review
/we:doc-check     # Documentation consistency

# Setup & companion
/we:setup         # Project onboarding
/we:materialize   # Load weside Companion
```
- quality/dor.md (Definition of Ready)
- quality/dod.md (Definition of Done)
- python3 ${CLAUDE_PLUGIN_ROOT}/scripts/orchestration.py