From prompt-engineering-plugin
Produces grounded, citation-backed responses from source documents via direct quotes, uncertainty permission, and claim verification. Use for analyzing codebases, specs, or long documents.
Install: `npx claudepluginhub laurigates/claude-plugins --plugin prompt-engineering-plugin`
Produce a grounded, citation-backed response — every claim traced to a source quote, unsupported claims retracted, unknowns stated explicitly.
| Use this skill when... | Use something else when... |
|---|---|
| You need factual accuracy grounded in source documents | Text needs style/tone adjustment → /prose:distill |
| Analyzing long documents (>20k tokens) where hallucination risk is high | You need code review → /code-quality:code-review |
| User asks to "cite sources" or "verify claims" | You need to synthesize scattered notes → /prose:synthesize |
| Answering questions about specs, policies, or technical docs | General coding task with no source verification needed |
| User wants auditable, traceable answers | Creative writing or brainstorming (accuracy not the goal) |
These three techniques are from Anthropic's official documentation on reducing hallucination. Apply all three in every response.
**Uncertainty permission.** You have explicit permission to say "I don't know" or "The source does not address this." Do not invent, speculate, or fill gaps with plausible-sounding information. When the source is silent on a topic, say so. When the source is ambiguous, state the ambiguity.
**Quote extraction.** Before analyzing or answering, extract word-for-word quotes from the source material that are relevant to the task. This grounds your reasoning in actual text, not recalled impressions. For documents >20k tokens, this step is critical — extract before you analyze.
**Claim verification.** After formulating your response, audit every claim. Each claim must have a supporting quote. If a claim lacks a supporting quote, retract it. Mark retracted claims with [RETRACTED — no supporting quote found]. This makes your response auditable.
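The verification pass above can be sketched mechanically. This is an illustrative sketch, not the skill's actual implementation: `audit`, the claim strings, and the `QUOTES` dict are hypothetical names, and citations are assumed to appear inline as [QN] markers.

```python
import re

# Hypothetical extracted quotes, keyed by their [QN] identifiers.
QUOTES = {
    "Q1": "The cache TTL defaults to 300 seconds.",
    "Q2": "Entries are evicted least-recently-used first.",
}

def audit(claims, quotes):
    """Keep claims whose every cited [QN] exists; retract the rest."""
    out = []
    for claim in claims:
        ids = re.findall(r"Q\d+", claim)
        if ids and all(q in quotes for q in ids):
            out.append(claim)  # every citation resolves to a real quote
        else:
            # Strip dangling citations and mark the claim as retracted.
            bare = re.sub(r"\s*\[Q[^\]]*\]", "", claim).strip()
            out.append("[RETRACTED — no supporting quote found] " + bare)
    return out
```

Note that an uncited claim is treated the same as a wrongly cited one: both fail the "each claim must have a supporting quote" rule.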
Parse $ARGUMENTS:

- `--source <path>`: optional file path or glob pattern for source material. If no `--source` is provided, look for source material in the conversation context.

Execute this grounded analysis workflow. If `--source` is provided, read the file(s):

- A single file path: Read it directly
- A glob pattern: use Glob to find files, then Read each
- No obvious target: use Glob with `**/*` to discover relevant files

Without `--source`, check conversation context for documents, code, or prior content. Note the total size of source material. For large sources (>20k tokens), be especially rigorous about Step 2.
Search the source material and extract word-for-word quotes relevant to the question/task.
For each quote, record:

- An identifier (Q1, Q2, etc.)
- The exact text, word for word
- The source location (e.g. file:line)

Extract comprehensively. It is better to extract too many quotes than too few — you can discard unused quotes later, but you cannot cite quotes you didn't extract.
If no relevant quotes are found, state: "No relevant quotes found in the provided source material. I cannot provide a grounded response to this question."
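The extraction step can be sketched for file sources, assuming line-oriented matching is sufficient. `extract_quotes` and its pattern list are illustrative names, not the skill's actual tooling (which uses Read, Grep, and Glob):

```python
def extract_quotes(path, patterns):
    """Collect word-for-word matching lines, keyed Q1, Q2, ..., with file:line refs."""
    quotes = {}
    with open(path, encoding="utf-8") as f:
        for lineno, line in enumerate(f, 1):
            # A quote is recorded verbatim, never paraphrased.
            if any(p in line for p in patterns):
                qid = f"Q{len(quotes) + 1}"
                quotes[qid] = (line.strip(), f"{path}:{lineno}")
    return quotes
```

Over-extraction is cheap here: passing broader patterns simply yields more Q-entries to discard later, which matches the "extract too many rather than too few" guidance.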
Answer the question or perform the task. Structure your response as a series of claims, each referencing one or more quotes inline, e.g. [Q1] or [Q2, Q3].

Walk through each claim in your response and check that it is directly supported by at least one extracted quote. For any claim that fails verification, retract it and mark it [RETRACTED — no supporting quote found].

After verification, explicitly list the questions or sub-topics the source does not address. Do not skip this step. Stating unknowns is as important as stating knowns.
Deliver the final response using this structure:
## Grounded Analysis
<Your response with inline [QN] citations for each claim>
## Supporting Quotes
| # | Quote | Source |
|---|-------|--------|
| Q1 | "exact text from source" | file.md:42 |
| Q2 | "exact text from source" | file.md:78 |
## What the Source Does Not Address
- <Gap 1>
- <Gap 2>
## Retracted Claims (if any)
- <Claim that was removed and why>
| Situation | Approach |
|---|---|
| Source contradicts itself | Quote both passages, note the contradiction, do not resolve it |
| Question is partially answerable | Answer the answerable part with citations, list unanswerable parts in unknowns |
| Source is code, not prose | Extract relevant code blocks as quotes, cite file:line |
| Multiple sources disagree | Present each source's position with its quotes, note the disagreement |
| Source is very short (<1k tokens) | Still follow the full workflow — brevity doesn't eliminate hallucination risk |
| Context | Approach |
|---|---|
| Short document (<5k tokens) | Read fully, extract quotes inline, present compact response |
| Long document (5k-50k tokens) | Use Grep to find relevant sections, then Read targeted ranges |
| Very long document (>50k tokens) | Use Grep with multiple patterns, read only matching regions |
| Codebase analysis | Use Glob + Grep to locate relevant files, extract code quotes with file:line |
| Multiple files | Process each file, merge quotes, deduplicate overlapping citations |
| No source provided | Ask user before proceeding — never ground in imagined sources |