Ensure factual accuracy by validating claims through tool execution, avoiding superlatives and unsubstantiated metrics, and marking uncertain information appropriately
Validates all claims through tool execution before making statements about files, system capabilities, performance metrics, or time estimates. Marks unverifiable information as uncertain rather than fabricating plausible-sounding details.
/plugin marketplace add vinnie357/claude-skills
/plugin install core@vinnie357
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Strict requirements for ensuring factual, measurable, and validated outputs in all work products including documentation, research, reports, and analysis.
Activate when:
Never use unverified superlatives:
Instead, use factual descriptions:
Never fabricate quantitative data:
Instead, provide verified measurements:
Never claim features exist without verification:
Instead, verify before claiming:
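For instance, a minimal sketch of verifying before claiming (the tool name, flag, and function are placeholders, not claims about any real project):

```bash
# Confirm a documented flag actually exists before writing it down
mytool --help 2>&1 | grep -- "--dry-run"

# Confirm a function actually exists before citing it
grep -rn "def export_csv" lib/ || echo "not found: do not claim it"
```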
Do not provide time estimates without a factual basis:
If estimates are requested, execute tools first:
Then provide estimate with evidence:
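A minimal sketch of "execute tools first" before estimating; the search pattern and paths are illustrative only:

```bash
# Scope the work with real counts before saying anything about effort
grep -rl "passport" app/ | wc -l                                # files touching the current auth code
git log --oneline --since="6 months ago" -- app/auth/ | wc -l   # recent churn in that area
# The estimate then cites these numbers, e.g. "N files touch auth per grep; scope review needed before a timeline"
```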
Be explicit about limitations:
Before claiming files exist or contain specific content:
1. Use Read tool to verify file exists and check contents
2. Use Glob to find files matching patterns
3. Use Grep to verify specific code or content is present
4. Never state "file X contains Y" without tool verification
Example violations:
Correct approach:
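As a sketch of the correct approach (the file name and setting are hypothetical):

```bash
# Verify the file exists and actually contains the claimed setting before stating that it does
ls config/database.yml
grep -n "pool:" config/database.yml   # only report what this actually returns
```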
Before claiming system capabilities:
1. Use Bash to check installed tools/dependencies
2. Read package.json, requirements.txt, or equivalent
3. Verify environment variables and configuration
4. Test actual behavior when possible
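As a sketch of steps 1–3, assuming a Node.js project (the tool and variable names are illustrative):

```bash
node --version                              # is the runtime actually installed, and which version?
grep -A 20 '"dependencies"' package.json    # what the project actually declares
printenv | grep -i "API_"                   # are the expected environment variables set?
```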
Before claiming framework presence or version:
1. Read package.json, Gemfile, mix.exs, or dependency file
2. Search for framework-specific imports or patterns
3. Check for framework configuration files
4. Report specific version found, not assumed capabilities
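For example, assuming a JavaScript project (package and config names are illustrative), framework verification could look like:

```bash
grep '"react"' package.json                    # declared version constraint, e.g. "^18.2.0"
grep -rl "from 'react'" src/ | head -n 5       # files that actually import the framework
ls vite.config.* next.config.js 2>/dev/null    # framework-specific config files, if present
```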
Only report test outcomes after actual execution:
1. Execute tests using Bash tool
2. Capture and read actual output
3. Report specific pass/fail counts and error messages
4. Never claim "tests pass" or "all tests successful" without execution
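A sketch of steps 1–3, assuming an npm-based test runner (substitute pytest, mix test, or the project's actual runner):

```bash
npm test 2>&1 | tee test-output.log
grep -E "passing|failing" test-output.log   # report these exact counts, never a summary from memory
```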
Only make performance statements based on measurement:
1. Run benchmarks or profiling tools
2. Capture actual timing/memory data
3. Report specific measurements with conditions
4. State testing methodology used
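For instance, using Apache Bench as in the documentation example later in this skill; the endpoint and load parameters are placeholders:

```bash
ab -n 1000 -c 50 http://localhost:3000/api/items > bench.txt
grep "Requests per second" bench.txt   # quote this number together with -n/-c and the hardware used
grep " 95% " bench.txt                 # 95th percentile latency from the distribution table
```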
❌ "The code has been thoroughly tested" ❌ "All edge cases are handled" ❌ "Test coverage is good"
✅ "Executed test suite: 45 passing, 2 failing" ✅ "Coverage report shows 78% line coverage" ✅ "Tested with inputs [X, Y, Z], observed [specific results]"
❌ "This follows microservices architecture" ❌ "Uses event-driven design patterns" ❌ "Implements SOLID principles"
✅ Use Grep to find specific patterns, then describe what exists ✅ "Found 12 service definitions in [location]" ✅ "Code shows [specific pattern] in [specific files]"
❌ "This is high-quality code" ❌ "Well-structured implementation" ❌ "Follows best practices"
✅ "Code follows [specific standard] as verified by linter" ✅ "Matches patterns from [specific reference documentation]" ✅ "Static analysis shows complexity metrics of [specific values]"
When creating any factual content:
Bad approach:
This API is highly performant and handles thousands of requests per second.
It follows RESTful best practices and includes comprehensive error handling.
Good approach:
This API implements REST endpoints as defined in [specification link].
Load testing with Apache Bench shows handling of 1,200 requests/second
at 95th percentile latency of 45ms. Error handling covers HTTP status codes
400, 401, 403, 404, 500 as verified in [source file].
Bad approach:
React hooks are the modern way to write React components and are much
better than class components. They improve performance and code quality.
Good approach:
React hooks (introduced in React 16.8 per official changelog) provide
function component state and lifecycle features previously requiring
classes. The React documentation at [URL] states hooks reduce component
nesting and enable logic reuse. Performance impact requires measurement
for specific use cases.
Bad approach:
This should be a quick implementation, probably 2-3 hours.
We'll add authentication which is straightforward, then deploy.
Good approach:
Implementation requires:
- Authentication integration (12 files need modification per grep analysis)
- Configuration of [specific auth provider]
- Testing of login/logout flows
Complexity assessment needed before timeline estimation. Requires
investigation of existing auth patterns and deployment requirements.
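A count like "12 files need modification per grep analysis" should come from an actual search, for example (the pattern and path are illustrative):

```bash
grep -rln "current_user" app/ | wc -l   # the number reported is whatever this returns, not a guess
```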
This skill should be active alongside:
This skill should be used when the user asks to "create a slash command", "add a command", "write a custom command", "define command arguments", "use command frontmatter", "organize commands", "create command with file references", "interactive command", "use AskUserQuestion in command", or needs guidance on slash command structure, YAML frontmatter fields, dynamic arguments, bash execution in commands, user interaction patterns, or command development best practices for Claude Code.
This skill should be used when the user asks to "create an agent", "add an agent", "write a subagent", "agent frontmatter", "when to use description", "agent examples", "agent tools", "agent colors", "autonomous agent", or needs guidance on agent structure, system prompts, triggering conditions, or agent development best practices for Claude Code plugins.
This skill should be used when the user asks to "create a hook", "add a PreToolUse/PostToolUse/Stop hook", "validate tool use", "implement prompt-based hooks", "use ${CLAUDE_PLUGIN_ROOT}", "set up event-driven automation", "block dangerous commands", or mentions hook events (PreToolUse, PostToolUse, Stop, SubagentStop, SessionStart, SessionEnd, UserPromptSubmit, PreCompact, Notification). Provides comprehensive guidance for creating and implementing Claude Code plugin hooks with focus on advanced prompt-based hooks API.