Manual UI testing with Chrome browser - explore your app while errors are captured
Launches the Chrome browser for manual UI testing, capturing console errors and saving a report to the sprint folder.
Install the plugin first if needed:

/plugin marketplace add damienlaine/agentic-sprint
/plugin install damienlaine-sprint@damienlaine/agentic-sprint

Argument hint: [url]

You are launching a manual UI testing session using Chrome browser.
This is a standalone command - it does NOT run the full sprint workflow. Use this when you want to:

- Explore your app manually in a real browser
- Have console errors captured in the background while you test
- Save a test report that the /sprint workflow can pick up later
Find the current sprint directory:

```bash
ls -d .claude/sprint/*/ 2>/dev/null | sort -V | tail -1
```

If no sprint exists, create the first one:

```bash
mkdir -p .claude/sprint/1
```

Store the sprint directory path (e.g., `.claude/sprint/1/`) for saving the report later.
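A minimal shell sketch of this step (the `SPRINT_DIR` variable name is illustrative, not part of the command):

```bash
# Resolve the latest sprint directory; create sprint 1 if none exists yet.
SPRINT_DIR=$(ls -d .claude/sprint/*/ 2>/dev/null | sort -V | tail -1)
if [ -z "$SPRINT_DIR" ]; then
  mkdir -p .claude/sprint/1
  SPRINT_DIR=".claude/sprint/1/"
fi
echo "Report will be saved under: $SPRINT_DIR"
```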
Check for a URL in this order:

1. A URL passed as an argument (e.g., `/sprint:test http://localhost:8080`)
2. `.claude/project-map.md` - look for the frontend URL
3. Default: `http://localhost:3000`
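A hedged sketch of this fallback chain (the grep pattern is an assumption about how URLs appear in project-map.md):

```bash
# $1 is the optional URL argument; otherwise scan the project map, then default.
URL="${1:-$(grep -oE 'https?://[^ )"]+' .claude/project-map.md 2>/dev/null | head -1)}"
URL="${URL:-http://localhost:3000}"
echo "Testing frontend at: $URL"
```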
Call: mcp__claude-in-chrome__tabs_context_mcp

Get the current tab context. Then create a new tab:
Call: mcp__claude-in-chrome__tabs_create_mcp
Store the tabId for subsequent calls.
Call: mcp__claude-in-chrome__navigate
- url: [frontend URL]
- tabId: [tabId]
Confirm the app loaded correctly:
Call: mcp__claude-in-chrome__computer
- action: "screenshot"
- tabId: [tabId]
Report to user:

```
Browser opened at [URL]
You can now interact with the app manually.
I'm monitoring for console errors in the background.
When you're done testing, say "done" or "finish testing".
```
While the user tests, periodically check for console errors:
Call: mcp__claude-in-chrome__read_console_messages
- tabId: [tabId]
- pattern: "error|Error|ERROR|exception|Exception"
If errors are found, briefly note them but don't interrupt the user's flow.
The user will indicate they're done testing by saying something like "done" or "finish testing".
When the user signals completion:
Call: mcp__claude-in-chrome__computer
- action: "screenshot"
- tabId: [tabId]
Call: mcp__claude-in-chrome__read_console_messages
- tabId: [tabId]
Call: mcp__claude-in-chrome__read_network_requests
- tabId: [tabId]
Generate a report with this structure:

```markdown
## MANUAL TEST REPORT

### Session Info
- URL: [frontend URL]
- Date: [timestamp]
- Duration: [approximate time]

### Console Errors
[List any JS errors captured, or "None detected"]

### Network Issues
[Any failed requests, or "None detected"]

### User Observations
[Space for any notes the user mentioned during testing]

### Issues Found
[Summarize any problems discovered, or "None"]

### Suggested Fixes
[If issues were found, suggest what might need to be fixed]
```
Save the report to the sprint directory. Write the report to:

`.claude/sprint/[N]/manual-test-report.md`

If a previous manual-test-report.md exists, append a timestamp suffix instead:

`.claude/sprint/[N]/manual-test-report-[timestamp].md`
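A minimal shell sketch of the save step (`SPRINT_DIR` carries over from the earlier sketch; `REPORT_BODY` and the `date` format are illustrative assumptions):

```bash
# Pick a non-clobbering report path, then write the generated report there.
REPORT="${SPRINT_DIR}manual-test-report.md"
if [ -f "$REPORT" ]; then
  REPORT="${SPRINT_DIR}manual-test-report-$(date +%Y%m%d-%H%M%S).md"
fi
printf '%s\n' "$REPORT_BODY" > "$REPORT"  # REPORT_BODY holds the generated markdown
```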
Inform the user:

```
Report saved to .claude/sprint/[N]/manual-test-report.md
This report will be picked up when you run /sprint
The architect will use it to understand what needs to be fixed.
```
If the browser fails to open or navigate:

- Verify the Chrome MCP tools are available (mcp__claude-in-chrome__*)
- Verify the app is running at the resolved URL
- If the problem persists, suggest falling back to /sprint run