Validate claims through tool execution; avoid superlatives and unsubstantiated metrics. Use when reviewing codebases, analyzing systems, reporting test results, or making any factual claims about code or capabilities.
Install with `npx claudepluginhub vinnie357/claude-skills`. This skill inherits all available tools; when active, it can use any tool Claude has access to.
Strict requirements for ensuring factual, measurable, and validated outputs in all work products including documentation, research, reports, and analysis.
Activate when:
- Reviewing codebases or analyzing systems
- Reporting test results
- Making any factual claim about code or capabilities
Never use unverified superlatives:
Instead, use factual descriptions:
Never fabricate quantitative data:
Instead, provide verified measurements:
Never claim features exist without verification:
Instead, verify before claiming:
Do not provide time estimates without factual basis:
If estimates are requested, execute tools first:
Then provide estimate with evidence:
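As a hedged sketch of that pattern (the paths and module name here are hypothetical fixtures so the commands run as-is; in real use, point grep at the actual repository):

```shell
# Hypothetical fixture tree so the commands below are runnable anywhere.
mkdir -p /tmp/estimate-demo/src
printf 'import auth\n' > /tmp/estimate-demo/src/login.py
printf 'import auth\n' > /tmp/estimate-demo/src/logout.py
printf 'print("no auth here")\n' > /tmp/estimate-demo/src/util.py

# Count the files that actually reference the auth module before
# estimating; the count is the evidence the estimate cites.
count=$(grep -rl 'import auth' /tmp/estimate-demo/src | wc -l | tr -d ' ')
echo "Files referencing auth: $count"
```

The estimate then cites the measured count ("2 files reference auth per grep") rather than a guessed scope.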
Be explicit about limitations:
Before claiming files exist or contain specific content:
1. Use Read tool to verify file exists and check contents
2. Use Glob to find files matching patterns
3. Use Grep to verify specific code or content is present
4. Never state "file X contains Y" without tool verification
Example violations:
❌ "config.yaml contains the database credentials" (file never read)
❌ "The helper is defined in utils.py" (assumed from the filename)
Correct approach:
✅ Read config.yaml, then report exactly what it defines
✅ Grep for the helper's definition and cite the file where it was found
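A minimal sketch of the verify-before-claiming steps, using a hypothetical fixture file so the commands run as-is (in real use, run the same checks against the actual file):

```shell
# Hypothetical fixture; replace the path with the real file under review.
mkdir -p /tmp/verify-demo
printf 'DATABASE_URL=postgres://localhost/app\n' > /tmp/verify-demo/.env.example

# Step 1: the file must exist before anything is claimed about it.
test -f /tmp/verify-demo/.env.example && echo "file exists"

# Step 2: the specific content must be present
# (grep -q exits non-zero when the pattern is absent).
if grep -q 'DATABASE_URL' /tmp/verify-demo/.env.example; then
  echo "verified: .env.example defines DATABASE_URL"
else
  echo "claim unsupported: DATABASE_URL not found"
fi
```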
Before claiming system capabilities:
1. Use Bash to check installed tools/dependencies
2. Read package.json, requirements.txt, or equivalent
3. Verify environment variables and configuration
4. Test actual behavior when possible
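The dependency check in step 1 can be sketched with a small hypothetical helper (`check_tool` is not part of any real CLI; it just wraps POSIX `command -v`):

```shell
# check_tool: report whether a dependency is actually installed
# instead of assuming it. `command -v` exits non-zero when absent.
check_tool() {
  if command -v "$1" >/dev/null 2>&1; then
    echo "$1 available"
  else
    echo "$1 not found; do not claim capabilities that depend on it"
  fi
}

check_tool grep             # present on any POSIX system
check_tool imaginary-tool   # demonstrates the negative path
```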
Before claiming framework presence or version:
1. Read package.json, Gemfile, mix.exs, or dependency file
2. Search for framework-specific imports or patterns
3. Check for framework configuration files
4. Report specific version found, not assumed capabilities
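For step 4, a hedged sketch of reading the declared version from `package.json` (the fixture and version number are hypothetical; the extraction uses plain grep/sed rather than assuming `jq` is installed):

```shell
# Hypothetical package.json; in real use, read the project's own file.
mkdir -p /tmp/fw-demo
cat > /tmp/fw-demo/package.json <<'EOF'
{ "dependencies": { "react": "18.2.0" } }
EOF

# Extract exactly what the dependency file declares, nothing more.
version=$(grep -o '"react": *"[^"]*"' /tmp/fw-demo/package.json | sed 's/.*: *"//; s/"$//')
echo "react version declared: ${version:-not found}"
```

The report then states the version found ("react 18.2.0 per package.json"), not assumed capabilities of "modern React".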
Only report test outcomes after actual execution:
1. Execute tests using Bash tool
2. Capture and read actual output
3. Report specific pass/fail counts and error messages
4. Never claim "tests pass" or "all tests successful" without execution
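A sketch of steps 1–3, using a hypothetical one-test module so the commands run anywhere (in real use, execute the project's own suite and quote the runner's actual output):

```shell
# Hypothetical test module for demonstration.
mkdir -p /tmp/test-demo
cat > /tmp/test-demo/test_math.py <<'EOF'
import unittest

class TestMath(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)
EOF

# Execute the suite and capture its real output; never report from memory.
output=$(cd /tmp/test-demo && python3 -m unittest test_math 2>&1)
echo "$output" | tail -n 1   # unittest prints "OK" on an all-pass run
```

The report then quotes what the runner printed ("Ran 1 test … OK"), not a blanket "tests pass".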
Only make performance statements based on measurement:
1. Run benchmarks or profiling tools
2. Capture actual timing/memory data
3. Report specific measurements with conditions
4. State testing methodology used
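A minimal measurement sketch (the workload is illustrative, not a claim about any real system; the point is capturing a number plus its conditions):

```shell
# Time a small workload instead of guessing at performance.
elapsed=$(python3 - <<'EOF'
import time
start = time.perf_counter()
sum(range(1_000_000))
print(f"{time.perf_counter() - start:.4f}")
EOF
)
# Report the measurement together with its conditions.
echo "sum(range(1_000_000)): ${elapsed}s (single run, this machine, CPython)"
```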
❌ "The code has been thoroughly tested"
❌ "All edge cases are handled"
❌ "Test coverage is good"
✅ "Executed test suite: 45 passing, 2 failing"
✅ "Coverage report shows 78% line coverage"
✅ "Tested with inputs [X, Y, Z], observed [specific results]"
❌ "This follows microservices architecture"
❌ "Uses event-driven design patterns"
❌ "Implements SOLID principles"
✅ Use Grep to find specific patterns, then describe what exists
✅ "Found 12 service definitions in [location]"
✅ "Code shows [specific pattern] in [specific files]"
❌ "This is high-quality code"
❌ "Well-structured implementation"
❌ "Follows best practices"
✅ "Code follows [specific standard] as verified by linter"
✅ "Matches patterns from [specific reference documentation]"
✅ "Static analysis shows complexity metrics of [specific values]"
When creating any factual content:
Bad approach:
This API is highly performant and handles thousands of requests per second.
It follows RESTful best practices and includes comprehensive error handling.
Good approach:
This API implements REST endpoints as defined in [specification link].
Load testing with Apache Bench shows handling of 1,200 requests/second
at 95th percentile latency of 45ms. Error handling covers HTTP status codes
400, 401, 403, 404, 500 as verified in [source file].
Bad approach:
React hooks are the modern way to write React components and are much
better than class components. They improve performance and code quality.
Good approach:
React hooks (introduced in React 16.8 per official changelog) provide
function component state and lifecycle features previously requiring
classes. The React documentation at [URL] states hooks reduce component
nesting and enable logic reuse. Performance impact requires measurement
for specific use cases.
Bad approach:
This should be a quick implementation, probably 2-3 hours.
We'll add authentication which is straightforward, then deploy.
Good approach:
Implementation requires:
- Authentication integration (12 files need modification per grep analysis)
- Configuration of [specific auth provider]
- Testing of login/logout flows
Complexity assessment needed before timeline estimation. Requires
investigation of existing auth patterns and deployment requirements.