Detects the test framework, loads QA context files, and reads agent state and learnings before agent execution.
Before performing your primary task, detect the project's test infrastructure and load QA context.
Check for test framework indicators in order. Stop at the first match.
Node.js / TypeScript:
package.json in project root. Check devDependencies and scripts:
- `vitest` in deps or `vitest` in scripts → Vitest (`npx vitest run`, coverage: `npx vitest run --coverage`)
- `jest` in deps or `jest` in scripts → Jest (`npx jest`, coverage: `npx jest --coverage`)
- `mocha` in deps → Mocha (`npx mocha`)

Config files: `vitest.config.ts`, `jest.config.js`, `jest.config.ts`, `.mocharc.yml`

Python:
pyproject.toml or setup.cfg. Check for:
- `pytest` in dependencies or a `[tool.pytest]` section → Pytest (`pytest`, coverage: `pytest --cov`)
- `unittest` patterns → Unittest (`python -m unittest discover`)

Config files: `conftest.py`, `pytest.ini`

Go:

- `go.mod` → Go test (`go test ./...`, coverage: `go test -coverprofile=coverage.out ./...`)

Rust:

- `Cargo.toml` → Cargo test (`cargo test`)

Ruby:
Gemfile. Check for:
- `rspec` in Gemfile → RSpec (`bundle exec rspec`, coverage via simplecov)
- `minitest` → Minitest (`bundle exec ruby -Itest`)

Record the detected framework, test command, and coverage command.
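The first-match precedence above can be sketched as a chain of probes. This is a hypothetical sketch, not the plugin's actual implementation: the helper names are invented, only two ecosystems are shown, and matching is simplified (real detection would also inspect config files).

```python
import json
from pathlib import Path

# Each probe returns (framework, test_cmd, coverage_cmd) or None.
def detect_node(root: Path):
    pkg = root / "package.json"
    if not pkg.exists():
        return None
    data = json.loads(pkg.read_text())
    deps = {**data.get("dependencies", {}), **data.get("devDependencies", {})}
    scripts = " ".join(data.get("scripts", {}).values())
    if "vitest" in deps or "vitest" in scripts:
        return ("Vitest", "npx vitest run", "npx vitest run --coverage")
    if "jest" in deps or "jest" in scripts:
        return ("Jest", "npx jest", "npx jest --coverage")
    if "mocha" in deps:
        return ("Mocha", "npx mocha", None)
    return None

def detect_go(root: Path):
    if (root / "go.mod").exists():
        return ("Go test", "go test ./...", "go test -coverprofile=coverage.out ./...")
    return None

def detect_framework(root: Path):
    # Run the probes in the documented order and stop at the first match.
    for probe in (detect_node, detect_go):  # plus Python, Rust, Ruby probes
        result = probe(root)
        if result:
            return result
    return None
```

The key property is the early return: a repo containing both `package.json` and `go.mod` is reported as a Node.js project because the Node probe runs first.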
Based on the framework, identify where tests live:
- `**/*.test.{ts,tsx,js,jsx}`, `**/*.spec.{ts,tsx,js,jsx}` (JS/TS)
- `**/test_*.py`, `**/*_test.py` (Python)
- `**/*_test.go` (Go)
- `**/*_spec.rb` (Ruby)

Count the total test files found.
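Counting can be sketched as a union over those glob patterns (a minimal sketch; brace expansion is written out by hand since `pathlib` globs don't support `{ts,tsx}` groups, and matches are deduplicated in case a file satisfies two patterns):

```python
from pathlib import Path

# Glob patterns per ecosystem, matching the conventions listed above.
TEST_GLOBS = [
    "**/*.test.ts", "**/*.test.tsx", "**/*.test.js", "**/*.test.jsx",
    "**/*.spec.ts", "**/*.spec.tsx", "**/*.spec.js", "**/*.spec.jsx",
    "**/test_*.py", "**/*_test.py",
    "**/*_test.go",
    "**/*_spec.rb",
]

def count_test_files(root: Path) -> int:
    # A set dedupes files that match more than one pattern.
    matches = set()
    for pattern in TEST_GLOBS:
        matches.update(root.glob(pattern))
    return len(matches)
```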
Linting:
- `eslint` in package.json deps → `npx eslint .`
- `ruff` in pyproject.toml → `ruff check .`
- `golangci-lint` installed → `golangci-lint run`
- `rubocop` in Gemfile → `bundle exec rubocop`

Type checking:

- `typescript` in package.json deps → `npx tsc --noEmit`
- `mypy` in pyproject.toml → `mypy .`
- `pyright` in pyproject.toml → `pyright`

Record the detected tools, or "none detected."
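Unlike framework detection, lint and type-check detection is not first-match: every tool found contributes a command. A hypothetical sketch (the function name is invented, and matching is naive substring search rather than real manifest parsing):

```python
from pathlib import Path

def detect_quality_commands(root: Path) -> list[str]:
    # Collect every applicable lint/type-check command, not just the first.
    def read(name: str) -> str:
        p = root / name
        return p.read_text() if p.exists() else ""

    pkg, pyproject, gemfile = read("package.json"), read("pyproject.toml"), read("Gemfile")
    cmds = []
    if "eslint" in pkg:
        cmds.append("npx eslint .")
    if "typescript" in pkg:
        cmds.append("npx tsc --noEmit")
    if "ruff" in pyproject:
        cmds.append("ruff check .")
    if "mypy" in pyproject:
        cmds.append("mypy .")
    if "pyright" in pyproject:
        cmds.append("pyright")
    if "rubocop" in gemfile:
        cmds.append("bundle exec rubocop")
    return cmds
```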
Look for a context/qa/ directory in the project root.
If it exists, read all present files in this order:
| File | Purpose | Required By |
|---|---|---|
| test-strategy.md | Testing philosophy, what to test vs skip | test-writer |
| conventions.md | Naming, placement, assertion style, mock patterns, quality bar | test-writer |
| critical-paths.md | Business-critical code paths with glob patterns | risk-analyzer, coverage-checker |
| coverage-policy.md | Coverage thresholds by tier, changed-line policy, exemptions | coverage-checker |
| ownership.md | Who owns test coverage for which code areas, team maturity | test-writer, coverage-checker |
| frameworks.md | Custom framework config overrides (auto-populated by /scout:init) | all agents |
| risk-config.yaml | Risk weights per category, large change threshold | risk-analyzer |
| learnings/what-works.md | QA patterns proven effective | all agents |
| learnings/what-doesnt.md | QA anti-patterns proven to fail | all agents |
For each file:
Check for placeholder patterns (`[placeholder]`, `[e.g.,`, `[Add `). Note any files that still need filling in.
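The placeholder check amounts to a substring scan over each file's contents, assuming exactly the three markers listed above:

```python
# Markers that indicate a scaffolded context file has not been filled in yet.
PLACEHOLDER_PATTERNS = ["[placeholder]", "[e.g.,", "[Add "]

def needs_filling(text: str) -> bool:
    # True if any placeholder marker survives in the file.
    return any(pattern in text for pattern in PLACEHOLDER_PATTERNS)
```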
If the directory doesn't exist, proceed with auto-detected defaults. Note that /scout:init can scaffold the context files.
If context/qa/learnings/what-works.md exists, read it and extract the patterns relevant to the current agent's task.
If context/qa/learnings/what-doesnt.md exists, read it. Note anti-patterns to avoid.
If neither exists, proceed without learnings. Note that filling these in improves agent quality over time.
Read context/qa/agent-state.json if it exists. The state file tracks:
Per agent:
- `last_run` — ISO 8601 timestamp
- `run_count` — total executions
- `last_summary` — one-line result from the last run

Signals — detected issues that persist across sessions:

- `id` — unique identifier (format: `{type}-{identifier}-{date}`)
- `type` — signal category (see below)
- `severity` — Critical / High / Medium / Low
- `description` — human-readable explanation
- `detected_at` — when first detected
- `status` — active / resolved / dismissed

Signal types for QA agents:

- `coverage-declining` — coverage trend is going down
- `untested-critical-path` — critical-path code has no tests
- `persistent-failure` — same test failing across multiple runs
- `flaky-test` — test that passes/fails non-deterministically
- `risk-score-spike` — risk score jumped significantly
- `new-untested-code` — new source files without corresponding tests

Actions taken — log of all executed actions:

- `timestamp`, `action` (e.g., run-tests, write-test-file, run-coverage), `description`

Feedback — user ratings on agent usefulness:

- `timestamp`, `rating` (Yes / Partially / No), `feedback_note` (optional)

For the current agent, load:

- `last_run`, `run_count`, `last_summary`
- active signals (don't re-alert on the same signal)
- dismissed signals (skip them entirely)
- feedback history (note patterns — if a signal type is consistently rated "No", deprioritize it)
- the `flaky_tests` list

If the file does not exist, note that this is a first run.
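Taken together, a state file might look like the following. This is a hypothetical illustration: the field names come from the description above, but the top-level layout and every value are invented.

```json
{
  "agents": {
    "risk-analyzer": {
      "last_run": "2025-01-14T09:30:00Z",
      "run_count": 12,
      "last_summary": "Flagged 2 high-risk changes in payment paths"
    }
  },
  "signals": [
    {
      "id": "untested-critical-path-src-billing-2025-01-14",
      "type": "untested-critical-path",
      "severity": "High",
      "description": "src/billing/ has no corresponding tests",
      "detected_at": "2025-01-14T09:30:00Z",
      "status": "active"
    }
  ],
  "actions": [
    {
      "timestamp": "2025-01-14T09:31:02Z",
      "action": "run-tests",
      "description": "Ran full suite: 142 passed"
    }
  ],
  "feedback": [
    {
      "timestamp": "2025-01-14T10:02:11Z",
      "rating": "Partially",
      "feedback_note": "Signal was useful but duplicated an earlier one"
    }
  ],
  "flaky_tests": ["tests/test_sync.py::test_retry_backoff"]
}
```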
Read context/qa/autonomy.yaml if it exists. For the current agent, summarize which action types are autonomous, requires_approval, or disabled. Default all actions to autonomous (QA actions are safe — they read/run, not deploy).
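A minimal autonomy.yaml sketch, assuming a per-agent mapping from action type to autonomy level; the agent and action names here are hypothetical, chosen to mirror the actions mentioned above:

```yaml
# Hypothetical layout — structure and names are illustrative only.
risk-analyzer:
  run-tests: autonomous
  run-coverage: autonomous
  write-test-file: requires_approval
coverage-checker:
  run-coverage: autonomous
  open-issue: disabled
```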
Before proceeding with the primary task, briefly report: