From ralphharness
This skill must ALWAYS be invoked at the start of every spec — it audits the agent's own system prompt for broken references before any work begins. Invoke unconditionally regardless of goal keywords. Detects phantom infrastructure, ghost paths, incorrect URLs, missing CLI tools, and absent .env files referenced in CLAUDE.md, copilot-instructions.md, or any active system prompt instructions.
npx claudepluginhub informatico-madrid/ralph-harness --plugin ralphharness

How this skill is triggered — by the user, by Claude, or both
Slash command
/ralphharness:context-auditor

The summary Claude sees in its skill listing — used to decide when to auto-load this skill
Audits the agent's own system prompt for broken references **before any spec work begins**. The system prompt is the agent's source of authority — if it contains false assertions, all subagents inherit that falsehood silently.
The system prompt is injected into every subagent call, for every spec, indefinitely. Broken references in it cause cascading failures that look like code bugs but are information bugs:
- `docker-compose.yml` that does not exist
- `localhost:8123` is a test instance, may interact with production

This skill has no keyword trigger — it runs for every spec because a broken system prompt corrupts every spec.
ALWAYS invoke. No keyword matching. No relevance check.
This is enforced by start.md which calls this skill unconditionally as the first action in Skill Discovery Pass 1.
The system prompt is already in the agent's conversation context. It includes content from:
- `CLAUDE.md` (project root)
- `.github/copilot-instructions.md`

Do NOT read files from disk — the system prompt is already available in context. Extract its text as-is.
Scan the system prompt text for all assertions that can be checked programmatically. Look for:
| Pattern | Examples |
|---|---|
| File paths | test-ha/docker-compose.yml, ./scripts/setup.sh, config/settings.json |
| Directories | tests in test-ha/, specs stored in ./specs/, put files in src/components/ |
| URLs / ports | localhost:8123, http://localhost:3000, api.example.com/v1 |
| CLI commands | run `npm run test-ha`, execute `docker-compose up`, use `pnpm build` |
| Env files | credentials in .env, config in .env.local, secrets in .env.test |
| Named scripts | package.json script "test-ha", Makefile target "setup" |
Collect every assertion as a structured item:
{ type: FILESYSTEM | URL | COMMAND | ENV | SCRIPT, raw: "<original text>", value: "<extracted path/url/cmd>" }
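A rough sketch of the extraction step, restricted to filesystem assertions — `prompt.txt` and the regex are illustrative assumptions, not part of the skill:

```shell
# Hypothetical sketch: scan a saved system-prompt dump for path-like assertions.
# prompt.txt is a stand-in for the in-context system prompt text.
printf 'use test-ha/docker-compose.yml as test infra\nrun ./scripts/setup.sh first\n' > prompt.txt

# Match "./"-prefixed or "dir/"-prefixed path tokens, de-duplicated
grep -oE '(\./|[A-Za-z0-9_-]+/)[A-Za-z0-9._/-]+' prompt.txt | sort -u
```

Each match would then be wrapped into a `{ type: FILESYSTEM, raw, value }` item; URL, command, env, and script patterns would need their own expressions.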
If no assertions are found in the system prompt: output AUDIT_CLEAN with note "No verifiable assertions found in system prompt." and stop.
For each extracted path or directory:
ls "<path>" 2>/dev/null || stat "<path>" 2>/dev/null
Normalize relative paths from the project root (where .ralph-state.json lives).
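A minimal sketch of the verification loop, assuming the two illustrative paths below were extracted (they are examples, not real project files):

```shell
# Sketch: verify each extracted filesystem assertion and emit a report line.
for p in "test-ha/docker-compose.yml" "./scripts/setup.sh"; do
  if ls "$p" >/dev/null 2>&1; then
    echo "✅ $p — exists"
  else
    echo "❌ $p — NOT FOUND"
  fi
done
```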
Do NOT attempt network connections. Mark all URL assertions as:
⚠️ NEEDS MANUAL VERIFICATION — URL/port cannot be verified without network access
Include the URL and the context sentence from the system prompt where it appeared.
For each CLI command name extracted:
which "<command>" 2>/dev/null || command -v "<command>" 2>/dev/null
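This check could be looped over all extracted command names — a sketch with illustrative names (`definitely-not-a-real-cli` is deliberately absent; an npx-based command would probe `npx` itself instead of the package name):

```shell
# Sketch: report availability for each extracted command name.
for cmd in sh docker definitely-not-a-real-cli; do
  if command -v "$cmd" >/dev/null 2>&1; then
    echo "✅ $cmd — available"
  else
    echo "❌ $cmd — NOT INSTALLED"
  fi
done
```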
- npx-based commands: check `which npx` instead of the package name
- pnpm, npm, yarn: check the package manager binary

For each referenced env file:
ls "<env-file>" 2>/dev/null
.env files are typically gitignored; a missing .env may be intentional. Flag as ⚠️ MISSING (may be intentional) rather than ❌ for .env files, but flag as ❌ for .env.test or .env.ci files that are expected to be committed.

For each referenced package.json script name:
jq -r '.scripts | keys[]' package.json 2>/dev/null | grep -x "<script-name>"
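A concrete sketch of the script lookup, using a stand-in `package.json` (run in a scratch directory — the manifest below is illustrative, not a real project file):

```shell
# Sketch: check whether a script name is defined in package.json.
printf '{"scripts": {"test": "vitest", "build": "tsc"}}\n' > package.json

if jq -r '.scripts | keys[]' package.json 2>/dev/null | grep -qx "test"; then
  echo "✅ test — defined in package.json"
else
  echo "❌ test — NOT DEFINED in package.json"
fi
```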
If package.json doesn't exist: mark ⚠️ CANNOT VERIFY (no package.json).

Write the audit report to .progress.md under a ## Context Audit section:
## Context Audit
**Audited**: <timestamp>
**Total assertions found**: N
**Status**: CLEAN | WARNINGS | BLOCKED
### Filesystem
- ✅ `test-ha/docker-compose.yml` — exists
- ❌ `test-ha/docker-compose.yml` — NOT FOUND (referenced in system prompt line: "use test-ha/docker-compose.yml as test infra")
### URLs
- ⚠️ `localhost:8123` — needs manual verification (referenced as test instance)
### Commands
- ✅ `docker` — available
- ❌ `test-ha` — NOT INSTALLED (referenced in "run `npm run test-ha`")
### Environment Files
- ⚠️ `.env` — missing (may be intentional — not committed)
- ❌ `.env.test` — NOT FOUND (referenced in system prompt)
### Scripts
- ✅ `test` — defined in package.json
- ❌ `test-ha` — NOT DEFINED in package.json (referenced in system prompt)
After writing to .progress.md, emit one of these signals:
AUDIT_CLEAN
assertions_checked: N
contradictions: 0
warnings: N
AUDIT_WARNINGS
contradictions:
- type: FILESYSTEM | COMMAND | ENV | SCRIPT
assertion: "<what system prompt claims>"
finding: "<what filesystem/shell found>"
impact: "<how this could affect spec execution>"
action_required: Review system prompt and correct or remove broken references before proceeding.
Do NOT block spec execution — emit the warnings prominently and continue. The user chose to start this spec and may be aware of the state. The audit's job is to surface contradictions, not to halt work.
When start.md records this skill in the Skill Discovery section of .progress.md, use:
- **context-auditor** (plugin): always-invoked (reason: mandatory system prompt validation)
.env files — check existence only

System prompt claims: "Use test-ha/docker-compose.yml as the test infrastructure. The test Home Assistant instance runs at localhost:8123."
Audit result:
AUDIT_WARNINGS
contradictions:
- type: FILESYSTEM
assertion: "test-ha/docker-compose.yml exists and is the test infrastructure"
finding: "ls test-ha/docker-compose.yml → No such file or directory"
impact: "Any spec that tries to start test infrastructure will fail. Agents will generate
code pointing to phantom infra, causing test failures that look like config bugs."
- type: URL
assertion: "localhost:8123 is the test Home Assistant instance"
finding: "Cannot verify without network access — needs manual check"
impact: "If this URL points to a production instance, agents may interact with real data."