# lisa
Analytical triage gate for JIRA tickets. Detects requirement ambiguities, identifies edge cases from codebase analysis, and plans verification methodology. Posts findings to the ticket and produces a verdict (BLOCKED/PASSED_WITH_FINDINGS/PASSED) that gates whether implementation can proceed.
```
npx claudepluginhub codyswanngt/lisa --plugin lisa
```

This skill is limited to using the following tools:
Perform analytical triage on the JIRA ticket. The caller has fetched the ticket details (summary, description, acceptance criteria, labels, status, comments) and provided them in context.
Repository name for scoped labels and comment headers: determine via `basename $(git rev-parse --show-toplevel)`.
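Concretely, the repo name can be derived like this (the path below is illustrative; in a live checkout it comes from `git rev-parse`):

```shell
# Derive the repo name used in labels like claude-triaged-{repo}.
# In a live checkout: toplevel=$(git rev-parse --show-toplevel)
toplevel="/home/dev/projects/billing-service"   # illustrative path
repo=$(basename "$toplevel")
echo "claude-triaged-${repo}"                   # -> claude-triaged-billing-service
```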
## Phase 1: Relevance Check

Search the local codebase using Glob and Grep for code related to the ticket's subject matter:
If NO relevant code is found in this repo:
## Verdict: NOT_RELEVANT

Add the `claude-triaged-{repo}` label and skip this ticket.

If relevant code IS found, proceed to Phase 2.
## Phase 2: Cross-Repo Awareness

Parse the ticket's existing comments for triage headers from OTHER repositories. Look for patterns like:

- *[some-repo-name] Ambiguity detected*
- *[some-repo-name] Edge cases*
- *[some-repo-name] Verification methodology*

Note which phases other repos have already covered and what findings they posted, and skip duplicate findings in the subsequent phases.
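One way to pull those headers out of fetched comment bodies is a simple `grep`; a sketch with hypothetical comment text:

```shell
# Sample comment bodies fetched from the ticket (hypothetical)
cat > comments.txt <<'EOF'
*[billing-service] Edge cases* Found a retry race in the export job.
*[billing-service] Ambiguity detected* "fast" is not quantified.
Unrelated comment with no triage header.
EOF

# Extract triage headers posted by other repos' triage runs
grep -oE '\*\[[A-Za-z0-9._-]+\] (Ambiguity detected|Edge cases|Verification methodology)\*' comments.txt
```

This prints only the two matched headers, which tells the current run that another repo has already covered the ambiguity and edge-case phases.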
## Phase 3: Ambiguity Detection

Examine the ticket summary, description, and acceptance criteria. Look for:
| Signal | Example |
|---|---|
| Vague language | "should work properly", "handle edge cases", "improve performance" |
| Untestable criteria | No measurable outcome defined |
| Undefined terms | Acronyms or domain terms not explained in context |
| Missing scope boundaries | What's included vs excluded is unclear |
| Implicit assumptions | Assumptions not stated explicitly |
Skip ambiguities already raised by another repo's triage comments.
For each NEW ambiguity found, produce:
### Ambiguity: [short title]
**Description:** [what is ambiguous]
**Suggested clarification:** [specific question to resolve it]
Be specific -- every ambiguity must have a concrete clarifying question.
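For instance, a filled-in ambiguity (ticket contents hypothetical) might read:

```markdown
### Ambiguity: "Improve performance" has no target
**Description:** The ticket asks to "improve export performance" but gives no baseline or target metric.
**Suggested clarification:** What is the current p95 export time, and what target should this change hit (e.g. under 30 s for 10k rows)?
```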
## Phase 4: Edge Case Analysis

Search the codebase using Glob and Grep for files related to the ticket's subject matter. Check git history for recent changes in those areas:

```
git log --oneline -20 -- <relevant-paths>
```
Identify edge cases suggested by the code and its recent history.
Reference only files in THIS repo. Acknowledge edge cases from other repos if relevant, but do not duplicate them.
For each edge case, produce:
### Edge Case: [title]
**Description:** [what could go wrong]
**Code reference:** [file path and relevant lines or patterns]
Every edge case must reference specific code files or patterns found in the codebase. If no relevant code exists, note that this appears to be a new feature with no existing code to analyze.
## Phase 5: Verification Methodology

For each acceptance criterion, specify a concrete verification method scoped to what THIS repo can test:
| Verification Type | When to Use | What to Specify |
|---|---|---|
| UI | Change affects user-visible interface | Playwright test description with specific assertions |
| API | Change affects HTTP/GraphQL/RPC endpoints | curl command with expected response status and body |
| Data | Change involves schema, migrations, queries | Database query or service call to verify state |
| Performance | Change claims performance improvement | Benchmark description with target metrics |
Do not duplicate verification methods already posted by other repos.
Produce a table:
| Acceptance Criterion | Verification Method | Type |
|---|---|---|
Every verification method must be specific enough that an automated agent could execute it.
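A filled-in table might look like this (the criteria, page, and endpoint are hypothetical):

```markdown
| Acceptance Criterion | Verification Method | Type |
|---|---|---|
| Paid invoices can be filtered in the list view | Playwright: open /invoices?status=paid and assert every rendered row shows status "Paid" | UI |
| Creating an invoice returns its id | curl -X POST https://staging.example.com/api/invoices -d @fixture.json; assert HTTP 201 and a non-empty "id" in the body | API |
```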
Evaluate the findings and produce exactly one verdict:
- **NOT_RELEVANT** -- No relevant code was found in this repository (Phase 1). The caller should add the triage label and skip implementation in this repo.
- **BLOCKED** -- Ambiguities were found in Phase 3. Work MUST NOT proceed until the ambiguities are resolved by a human. The caller should post findings, add the triage label, and STOP.
- **PASSED_WITH_FINDINGS** -- No ambiguities, but edge cases or verification findings were identified. Work can proceed. The caller should post findings and add the triage label.
- **PASSED** -- No ambiguities, edge cases, or verification gaps found. Work can proceed. The caller should add the triage label.

Output format:
## Verdict: [NOT_RELEVANT | BLOCKED | PASSED_WITH_FINDINGS | PASSED]
**Ambiguities found:** [count]
**Edge cases identified:** [count]
**Verification methods defined:** [count]
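Given those counts, the gate reduces to a small decision (NOT_RELEVANT is decided earlier, in Phase 1); a sketch with sample values:

```shell
# Counts produced by Phases 3-5 (sample values)
ambiguities=0
edge_cases=2
verification_gaps=0

# BLOCKED wins over everything; any other finding downgrades PASSED
if [ "$ambiguities" -gt 0 ]; then
  verdict=BLOCKED
elif [ "$edge_cases" -gt 0 ] || [ "$verification_gaps" -gt 0 ]; then
  verdict=PASSED_WITH_FINDINGS
else
  verdict=PASSED
fi
echo "## Verdict: $verdict"    # -> ## Verdict: PASSED_WITH_FINDINGS
```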
Structure all output with clear section headers so the caller can parse and post findings:
## Triage: [TICKET-KEY] ([repo-name])
### Ambiguities
[Phase 3 findings, or "None found."]
### Edge Cases
[Phase 4 findings, or "None found."]
### Verification Methodology
[Phase 5 table, or "No acceptance criteria to verify."]
## Verdict: [NOT_RELEVANT | BLOCKED | PASSED_WITH_FINDINGS | PASSED]
The caller is responsible for:
- Adding the `claude-triaged-{repo}` label to the ticket
- If BLOCKED: stopping all work and reporting the ambiguities to the human