From agentic-skills
Prompting patterns that encourage the model to articulate its specific thought process (Chain of Thought) to improve performance on complex logical, mathematical, or reasoning tasks. Use when user asks to "improve agent reasoning", "add chain-of-thought", "logical reasoning", or mentions inference, deductive reasoning, or structured thinking.
npx claudepluginhub lauraflorentin/skills-marketplace --plugin agentic-skills

This skill uses the workspace's default tool permissions.
Reasoning techniques (like Chain-of-Thought, Tree-of-Thought) force the LLM to show its work. Large Language Models are statistical, not logical. By making them output a step-by-step reasoning path before the final answer, you allow the model to provide context to itself, significantly reducing logic errors and "hallucinations of calculation".
```python
# Chain-of-Thought prompting: instruct the model to emit numbered reasoning
# steps before committing to a final answer. `llm` is a placeholder for any
# client object exposing a generate(prompt) -> str method.
def chain_of_thought_prompt(question):
    prompt = f"""
Question: {question}
Instruction: Answer the question by reasoning step-by-step.
Format your answer as:
Reasoning:
1. [First Step]
2. [Second Step]
...
Final Answer: [Answer]
"""
    return llm.generate(prompt)
```
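The description above also names Tree-of-Thought, which extends this idea by branching into several candidate reasoning paths and keeping the most promising ones. The following is a minimal sketch of that control flow, not the skill's own implementation: the `branching` and `depth` values, the `score()` heuristic, and the `StubLLM` stand-in are all illustrative assumptions so the loop is runnable without an API key.

```python
# Tree-of-Thought sketch (hypothetical): expand several candidate reasoning
# paths per round, score them, and keep the best few. A real deployment
# would replace StubLLM with an actual client and a real self-evaluation
# prompt for scoring.
def tree_of_thought(question, llm, branching=3, depth=2):
    """Return the highest-scoring accumulated reasoning path."""
    paths = [""]  # each path is the reasoning text built up so far
    for _ in range(depth):
        candidates = []
        for path in paths:
            for _ in range(branching):
                step = llm.generate(
                    f"Question: {question}\nReasoning so far:\n{path}\n"
                    "Propose the single next reasoning step."
                )
                candidates.append(path + step + "\n")
        # Keep the top `branching` paths by the (stubbed) evaluation score.
        scored = sorted(
            ((llm.score(question, p), p) for p in candidates), reverse=True
        )
        paths = [p for _, p in scored[:branching]]
    return paths[0]

class StubLLM:
    """Deterministic stand-in so the control flow runs offline."""
    def __init__(self):
        self.calls = 0
    def generate(self, prompt):
        self.calls += 1
        return f"step-{self.calls}"
    def score(self, question, path):
        return len(path)  # toy heuristic: prefer more detailed paths

best = tree_of_thought("What is 17 * 24?", StubLLM())
```

With `branching=3` and `depth=2`, the stub run produces a two-step path; only the search loop, not the answers, is meaningful here.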