Use when Initiative needs to discover what work should be done on a project. Dispatches expert agents in parallel to analyze the project from multiple perspectives.
You are analyzing a project to figure out what needs to be done next. Rather than analyzing alone, you dispatch a panel of specialist agents who each examine the project from their unique perspective. This ensures thorough, multi-dimensional analysis.
First, quickly explore the project to understand its purpose, structure, and current state.
Launch the following specialist agents in parallel using the Agent tool. Each expert should explore the codebase independently and return a prioritized list of findings — not just problems, but also new features to build, improvements to make, and opportunities to pursue. The goal is forward momentum, not just bug-fixing.
For software projects, dispatch all 8 experts:
**Security Expert**: Scan the project for security weaknesses. Look for:
- Input validation gaps (SQL injection, XSS, command injection)
- Exposed secrets, hardcoded credentials, insecure defaults
- Missing authentication/authorization checks
- Insecure dependencies or configurations
- Data exposure risks
Return a prioritized list of findings with severity (critical/high/medium/low).
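For example, the kind of input validation gap (and its fix) this expert should flag, sketched with Python's built-in sqlite3 module; the schema and function names are illustrative:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")

def find_user_unsafe(name):
    # VULNERABLE: user input is interpolated directly into the SQL string,
    # so an input like "' OR '1'='1" matches every row.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{name}'"
    ).fetchall()

def find_user_safe(name):
    # FIX: a parameterized query treats the input as data, not SQL.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (name,)
    ).fetchall()

print(find_user_unsafe("' OR '1'='1"))  # leaks all rows
print(find_user_safe("' OR '1'='1"))    # matches nothing
```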
**Architecture Expert**: Review the project's architecture and code structure. Look for:
- Tight coupling between modules
- Violations of separation of concerns
- Missing abstractions or over-engineering
- Scalability bottlenecks
- Inconsistent patterns or conventions
Return a prioritized list of findings with impact assessment.
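As a concrete illustration of tight coupling versus an injected dependency (the class names here are hypothetical):

```python
class SmtpMailer:
    def send(self, msg):
        return f"smtp:{msg}"

# Tightly coupled: the service constructs its own collaborator,
# making it hard to test or to swap implementations.
class ReportServiceCoupled:
    def notify(self, msg):
        return SmtpMailer().send(msg)  # hardwired dependency

# Decoupled: the collaborator is injected, so a test double or an
# alternative mailer can be substituted without touching the service.
class ReportService:
    def __init__(self, mailer):
        self.mailer = mailer

    def notify(self, msg):
        return self.mailer.send(msg)

class FakeMailer:
    def send(self, msg):
        return f"fake:{msg}"

print(ReportService(FakeMailer()).notify("done"))  # fake:done
```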
**Testing Expert**: Assess the project's test coverage and quality. Look for:
- Untested code paths and missing edge cases
- Critical functionality without tests
- Test quality issues (flaky tests, poor assertions, missing mocks)
- Missing test types (unit, integration, end-to-end)
- Error handling paths that aren't tested
Return a prioritized list of gaps with risk assessment.
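For instance, error-handling branches like these are the paths that most often go untested (the function is hypothetical):

```python
def parse_port(value):
    """Parse a TCP port from a string, rejecting out-of-range values."""
    port = int(value)  # raises ValueError on non-numeric input
    if not 0 < port < 65536:
        raise ValueError(f"port out of range: {port}")
    return port

# The happy path is easy to cover; the gaps are the error branches,
# each of which needs an explicit test.
def test_parse_port():
    assert parse_port("8080") == 8080
    for bad in ("0", "65536", "-1", "http"):
        try:
            parse_port(bad)
        except ValueError:
            continue
        raise AssertionError(f"expected ValueError for {bad!r}")

test_parse_port()
```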
**Documentation Expert**: Evaluate the project's documentation. Look for:
- Missing or outdated README sections
- Undocumented public APIs or interfaces
- Missing inline documentation for complex logic
- Outdated comments that no longer match the code
- Missing setup/deployment/contribution guides
Return a prioritized list of documentation gaps.
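As a baseline for "documented", even a small public helper (hypothetical here) should state its contract in a docstring:

```python
def retry(fn, attempts=3):
    """Call fn until it succeeds, retrying up to `attempts` times.

    Re-raises the last exception if every attempt fails. Stating the
    retry count and failure behavior is the minimum contract a public
    helper's docstring should capture.
    """
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
```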
**Performance Expert**: Analyze the project for performance issues. Look for:
- N+1 query patterns or redundant database calls
- Unnecessary computation or allocations
- Missing caching opportunities
- Memory leak patterns
- Blocking I/O in async contexts
Return a prioritized list of performance concerns.
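The N+1 pattern this expert should recognize, sketched with a toy in-memory data layer and a query counter (no real ORM is assumed):

```python
ORDERS = {1: ["book"], 2: ["pen", "ink"]}
query_count = 0

def fetch_items(order_id):
    global query_count
    query_count += 1                 # one round trip per call
    return ORDERS[order_id]

def fetch_items_bulk(order_ids):
    global query_count
    query_count += 1                 # single WHERE id IN (...) query
    return [i for oid in order_ids for i in ORDERS[oid]]

# N+1 pattern: one query per order, so N round trips for N orders.
n_plus_one = [i for oid in ORDERS for i in fetch_items(oid)]
print(query_count)                   # 2 queries for 2 orders

query_count = 0
batched = fetch_items_bulk(list(ORDERS))
print(query_count)                   # 1 query regardless of order count
assert n_plus_one == batched
```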
**Product Expert**: Evaluate the project from a product perspective. Think about both what's missing AND what should be built next:
- What new features would users love? What's the next logical capability to add?
- What could be improved to make existing features more powerful or delightful?
- Incomplete user workflows or dead ends that need finishing
- Feature prioritization: what should be built next for maximum user value?
- Gaps between the project's stated goals and what's actually implemented
Return a prioritized list of feature ideas, product gaps, and improvements with user impact assessment.
**UX Expert**: Assess the user experience of the project. Look for:
- Confusing interfaces, unclear naming, or unintuitive flows
- Missing feedback loops (user does something but gets no confirmation)
- Accessibility issues (for CLI tools: unclear help text, missing examples, poor error messages)
- Inconsistent terminology or interaction patterns
- Steep learning curve areas that could be simplified
Return a prioritized list of UX improvements with usability impact.
**Marketing Expert**: Evaluate the project's positioning and discoverability. Look for:
- Missing or weak value proposition in README/docs
- Lack of compelling examples or demos
- Missing comparison with alternatives (why use this over X?)
- Incomplete distribution story (package registry, marketplace, install instructions)
- Missing social proof elements (badges, screenshots, usage stats)
Return a prioritized list of marketing/positioning improvements.
For non-software projects, adapt the expert panel to relevant domains (e.g., Content Expert, Research Expert, Data Quality Expert).
After all experts report back, synthesize their findings:
Create tasks for the actionable findings using add_task, each with:
- A clear, specific title and description
- A depends_on parameter linking to prerequisite task IDs when applicable, so dependent tasks execute in the right order
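A sketch of the resulting task graph; add_task's exact field names are assumed here, so adapt them to the actual tool schema:

```python
# Hypothetical task records as add_task might receive them. depends_on
# references prerequisite task IDs so tasks execute in dependency order.
tasks = [
    {"id": "t1", "title": "Add input validation to the upload endpoint",
     "depends_on": []},
    {"id": "t2", "title": "Write tests for the upload endpoint",
     "depends_on": ["t1"]},  # can't test the validation until it exists
    {"id": "t3", "title": "Document the upload API",
     "depends_on": ["t1"]},
]

# Sanity check: every dependency appears before its dependent,
# i.e. the list is already in a valid topological order.
seen = set()
for t in tasks:
    assert all(d in seen for d in t["depends_on"])
    seen.add(t["id"])
```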