Verify claims in tacosdedatos content before publication. Use this skill when reviewing drafts for factual accuracy, checking code examples work correctly, validating statistics and sources, or verifying quotes and attributions. Produces a structured fact-check report with verdicts for each claim. For deep verification requiring extended research, delegate to the fact-checker subagent instead.
Install via:

`/plugin marketplace add chekos/bns-marketplace`
`/plugin install tdd-editor@bns-marketplace`

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Verify technical, statistical, and attribution claims in tacosdedatos content.
References:
- references/claim-categories.md — Claim types and verification methods
- references/report-format.md — Report template and examples

Scan the content and extract all verifiable claims:
See references/claim-categories.md for identification signals.
Focus verification effort on high-impact claims:
| Priority | Examples |
|---|---|
| High | Code readers will copy, statistics supporting arguments, named quotes |
| Medium | Version numbers, tool comparisons |
| Low | Hyperbolic rhetoric, subjective assessments |
Technical claims:
1. Code snippets → Execute to verify they run
2. API behavior → Web search for official docs
3. Version claims → Check release notes
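Before code snippets can be executed, they have to be pulled out of the draft. A minimal sketch of that extraction step, assuming drafts are markdown with standard triple-backtick fences (the `extract_code_blocks` helper is illustrative, not part of this skill's API):

```python
import re

FENCE = "`" * 3  # avoids writing a literal fence inside this example

def extract_code_blocks(markdown: str) -> list[str]:
    """Return the body of each fenced code block in a markdown draft."""
    # Non-greedy match between an opening fence (with optional language
    # tag on the same line) and the next closing fence.
    pattern = re.compile(r"`{3}[^\n]*\n(.*?)`{3}", re.DOTALL)
    return [match.group(1) for match in pattern.finditer(markdown)]

draft = f"Intro text.\n\n{FENCE}python\nprint('hola')\n{FENCE}\n"
blocks = extract_code_blocks(draft)  # ["print('hola')\n"]
```

This regex-based approach is deliberately simple; a real pipeline might use a markdown parser to handle indented blocks and nested fences.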
Statistical claims:
1. Find original source via web search
2. Verify the number matches
3. Check if data is current
Attribution claims:
1. Web search for original quote/source
2. Verify link validity
3. Confirm attribution accuracy
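Link validity (step 2 above) can be checked mechanically. A standard-library sketch, assuming a HEAD request is acceptable to the target server (some sites reject HEAD, so a real checker might fall back to GET or use a library like `requests`):

```python
from urllib import error, request

def check_link(url: str, timeout: float = 10.0) -> tuple[bool, str]:
    """Return (ok, detail); HTTP >= 400, timeouts, and malformed URLs count as broken."""
    try:
        req = request.Request(url, method="HEAD",
                              headers={"User-Agent": "fact-check-sketch/0.1"})
        with request.urlopen(req, timeout=timeout) as resp:
            return resp.status < 400, f"HTTP {resp.status}"
    except error.HTTPError as exc:  # 4xx/5xx responses
        return False, f"HTTP {exc.code}"
    except (error.URLError, ValueError, TimeoutError) as exc:
        return False, f"unreachable: {exc}"
```

Note this only confirms the URL resolves; redirects to unrelated pages still need a human (or content-level) check.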
Mark claims that cannot be automatically verified and flag them for manual review.
Produce a structured report using the template in references/report-format.md, with a verdict for each claim.

Choosing a verification scope:
| Scope | Use Case | Approach |
|---|---|---|
| Quick check | Pre-publication review | This skill: scan, verify obvious claims, flag concerns |
| Deep verification | Investigative piece, controversial topic | Delegate to fact-checker subagent for extended research |
For code blocks, attempt execution when possible:
1. Run the code in a sandbox
2. Capture success/failure, output, and errors
3. Report which blocks run and which fail
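The steps above can be sketched as follows, assuming the snippets are Python and that a fresh subprocess is an acceptable stand-in for a sandbox (real isolation would need more than this):

```python
import subprocess
import sys

def run_block(code: str, timeout: float = 30.0) -> dict:
    """Run a Python snippet in a fresh interpreter and capture the outcome."""
    try:
        proc = subprocess.run(
            [sys.executable, "-c", code],
            capture_output=True, text=True, timeout=timeout,
        )
        return {"ok": proc.returncode == 0,
                "stdout": proc.stdout, "stderr": proc.stderr}
    except subprocess.TimeoutExpired:
        return {"ok": False, "stdout": "", "stderr": "timed out"}

result = run_block("print(2 + 2)")  # ok=True, stdout="4\n"
```

The captured `ok`/`stderr` fields map directly onto the ✓/✗ entries in the report table below.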
Report format for code:
| Code Block | Location | Result |
|---|---|---|
| API example | Section 2 | ✓ Runs |
| Data pipeline | Section 4 | ✗ Error: missing import |
Common issues and how to handle them:
| Issue | Detection | Recommendation |
|---|---|---|
| Outdated package names | Package not found on pip/npm | Check current package name |
| Deprecated API syntax | Code runs but with warnings | Update to current syntax |
| Broken links | 404 or redirect to unrelated page | Find updated URL or remove |
| Misattributed quotes | Original source says different | Correct attribution or rephrase |
| Stale statistics | Data >2 years old | Find current data or note date |
Always produce a report following the template in references/report-format.md; at minimum, cover every extracted claim with a verdict.