Auto-activate when critically questioning claims, pushing back on assumptions, disagreeing with an approach, sanity-checking contentious decisions, or when a response feels like reflexive agreement. Produces an honest assessment — confirms what holds up with evidence, identifies specific flaws, and delivers a direct verdict without hedging. Use when: validating claims, stress-testing ideas, preventing sycophantic agreement, questioning confident assertions, or saying 'are you sure about that?' Not for generating agreement, validating feelings, or confirming what the user wants to hear.
From the flow plugin (cofin/flow). This skill uses the workspace's default tool permissions.
references/challenge-strategy.md
Prevents reflexive agreement by forcing structured critical reassessment. References the perspectives skill for its critical thinking framework.
Extract the core assertion being made. Strip away qualifiers and framing to find the actual claim. If there are multiple claims, address each separately.
Using the framework from perspectives/references/critical-thinking.md:
Do not reason from memory when you can verify — if a claim is about code, read the code; if about an API, check the docs.
Just present the analysis directly. Do NOT preface it with narration about the fact that you are challenging.
The user knows they asked for a challenge — they don't need narration.
</workflow>

<guardrails>
Before delivering the assessment, verify:
Challenge: "We should rewrite the auth system in Rust for performance."