Runs interactive QA sessions: users report bugs conversationally, agent clarifies, explores codebase for domain context, assesses scope, and files durable GitHub issues.
Install with:

```shell
npx claudepluginhub yashs33244/my-mac-claude --plugin gstack
```

This skill uses the workspace's default tool permissions.
Run an interactive QA session. The user describes problems they're encountering. You clarify, explore the codebase for context, and file GitHub issues that are durable, user-focused, and use the project's domain language.
Conducts conversational bug discovery, performs light codebase exploration, assesses scope, and drafts and files GitHub issues. Use for QA sessions, triaging reports, or filing bugs.
Let the user describe the problem in their own words. Ask at most 2-3 short clarifying questions, focused on what happened, what they expected, and how to reproduce it.
Do NOT over-interview. If the description is clear enough to file, move on.
While talking to the user, kick off an Agent (subagent_type=Explore) in the background to understand the relevant area. The goal is NOT to find a fix; it is to learn the project's domain language and assess the scope of the affected area.
This context helps you write a better issue — but the issue itself should NOT reference specific files, line numbers, or internal implementation details.
Before filing, decide whether this is a single issue or needs to be broken down into multiple issues.
Break down when the report covers multiple independent problems that could be fixed and verified separately.
Keep as a single issue when the report describes one coherent behavior, even if the user mentions several symptoms of it.
Create issues with gh issue create. Do NOT ask the user to review first — just file and share URLs.
Issues must be durable — they should still make sense after major refactors. Write from the user's perspective.
Use this template:
## What happened
[Describe the actual behavior the user experienced, in plain language]
## What I expected
[Describe the expected behavior]
## Steps to reproduce
1. [Concrete, numbered steps a developer can follow]
2. [Use domain terms from the codebase, not internal module names]
3. [Include relevant inputs, flags, or configuration]
## Additional context
[Any extra observations from the user or from codebase exploration that help frame the issue — e.g. "this only happens when using the Docker layer, not the filesystem layer" — use domain language but don't cite files]
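As a sketch, filing with this template might look like the following. All specifics (title, body text, symptoms) are made up for illustration, and the `gh issue create` call is commented out so the sketch runs without a GitHub session:

```shell
# Write the issue body from the template into a temp file, then file it
# with gh via --body-file. Title and body content below are hypothetical.
body_file="$(mktemp)"
cat > "$body_file" <<'EOF'
## What happened
Exporting a report with a date filter produced an empty file.

## What I expected
The export should contain all rows matching the filter.

## Steps to reproduce
1. Open any report and set a date filter spanning two months.
2. Click Export.
3. Open the downloaded file.

## Additional context
Only happens with the CSV export, not the PDF export.
EOF

# gh issue create --title "Filtered export produces an empty file" --body-file "$body_file"
echo "sections: $(grep -c '^## ' "$body_file")"
```

Using `--body-file` rather than `--body` keeps multi-line markdown intact without shell-quoting headaches.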
Create issues in dependency order (blockers first) so you can reference real issue numbers.
Use this template for each sub-issue:
## Parent issue
#<parent-issue-number> (if you created a tracking issue) or "Reported during QA session"
## What's wrong
[Describe this specific behavior problem — just this slice, not the whole report]
## What I expected
[Expected behavior for this specific slice]
## Steps to reproduce
1. [Steps specific to THIS issue]
## Blocked by
- #<issue-number> (if this issue can't be fixed until another is resolved)
Or "None — can start immediately" if no blockers.
## Additional context
[Any extra observations relevant to this slice]
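A minimal sketch of dependency-order filing, filing the blocker first so its real number can be cited in the dependent issue's "Blocked by" section. The repo, titles, and issue number are invented, and `gh` is stubbed so the sketch runs offline; in a real session, delete the stub and let the gh CLI run:

```shell
# Stub gh so this sketch runs without network access or auth.
# Remove this function to actually file issues.
gh() { echo "https://github.com/example/repo/issues/41"; }

# File the blocker first; gh issue create prints the new issue's URL.
blocker_url=$(gh issue create --title "Blocker: fix export pipeline" --body "...")

# Extract the issue number from the end of the URL.
blocker_num="${blocker_url##*/}"

# The dependent issue can now reference a real number in "Blocked by".
echo "Blocked by #$blocker_num"
```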
When creating a breakdown, file issues in dependency order (blockers first) and link each sub-issue back to its parent.
After filing, print all issue URLs (with blocking relationships summarized) and ask: "Next issue, or are we done?"
Keep going until the user says they're done. Each issue is independent — don't batch them.