Use when tackling complex reasoning tasks requiring step-by-step logic, multi-step arithmetic, commonsense reasoning, symbolic manipulation, or problems where simple prompting fails - provides comprehensive guide to Chain-of-Thought and related prompting techniques (Zero-shot CoT, Self-Consistency, Tree of Thoughts, Least-to-Most, ReAct, PAL, Reflexion) with templates, decision matrices, and research-backed patterns
Provides advanced reasoning techniques like Chain-of-Thought, Tree of Thoughts, and ReAct for complex problems. Use when tackling multi-step arithmetic, symbolic manipulation, or tasks requiring external information search and iterative refinement.
/plugin marketplace add NeoLabHQ/context-engineering-kit
/plugin install customaize-agent@context-engineering-kit
This skill inherits all available tools. When active, it can use any tool Claude has access to.
Chain-of-Thought (CoT) prompting and its variants encourage LLMs to generate intermediate reasoning steps before arriving at a final answer, significantly improving performance on complex reasoning tasks. These techniques transform how models approach problems by making implicit reasoning explicit.
| Technique | When to Use | Complexity | Accuracy Gain |
|---|---|---|---|
| Zero-shot CoT | Quick reasoning, no examples available | Low | +20-60% |
| Few-shot CoT | Have good examples, consistent format needed | Medium | +30-70% |
| Self-Consistency | High-stakes decisions, need confidence | Medium | +10-20% over CoT |
| Tree of Thoughts | Complex problems requiring exploration | High | +50-70% on hard tasks |
| Least-to-Most | Multi-step problems with subproblems | Medium | +30-80% |
| ReAct | Tasks requiring external information | Medium | +15-35% |
| PAL | Mathematical/computational problems | Medium | +10-15% |
| Reflexion | Iterative improvement, learning from errors | High | +10-20% |
Paper: "Chain of Thought Prompting Elicits Reasoning in Large Language Models" (Wei et al., 2022) Citations: 14,255+
Provide few-shot examples that include intermediate reasoning steps, not just question-answer pairs. The model learns to generate similar step-by-step reasoning.
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
A: Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. 5 + 6 = 11. The answer is 11.
Q: The cafeteria had 23 apples. If they used 20 to make lunch and bought 6 more, how many apples do they have?
A: The cafeteria had 23 apples originally. They used 20 to make lunch. So they had 23 - 20 = 3. They bought 6 more apples, so they have 3 + 6 = 9. The answer is 9.
Q: [YOUR QUESTION HERE]
A:
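A minimal sketch of assembling such a prompt programmatically. The exemplar list and the trailing "A:" cue mirror the template above; the helper name and the example questions are illustrative assumptions, not part of the original paper.

```python
def build_fewshot_cot_prompt(exemplars, question):
    """Assemble a few-shot CoT prompt from (question, reasoning) exemplars.

    Each exemplar's reasoning should end with "The answer is X." so the
    model copies both the step-by-step style and the answer format.
    """
    parts = [f"Q: {q}\nA: {a}" for q, a in exemplars]
    parts.append(f"Q: {question}\nA:")
    return "\n\n".join(parts)


# Example usage with the Roger exemplar above (abbreviated):
exemplars = [
    ("Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
     "Each can has 3 tennis balls. How many tennis balls does he have now?",
     "Roger started with 5 balls. 2 cans of 3 tennis balls each is 6 tennis balls. "
     "5 + 6 = 11. The answer is 11."),
]
prompt = build_fewshot_cot_prompt(
    exemplars, "A library had 120 books and lent out 45. How many remain?"
)
```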
Paper: "Large Language Models are Zero-Shot Reasoners" (Kojima et al., 2022) Citations: 5,985+
Simply append "Let's think step by step" (or similar phrase) to the prompt. This triggers the model to generate reasoning steps without any examples.
Q: A juggler can juggle 16 balls. Half of the balls are golf balls, and half of the golf balls are blue. How many blue golf balls are there?
Let's think step by step.
Alternative trigger phrases: "Let's think about this logically" or "Let's work this out in a step by step way to be sure we have the right answer."
Stage 1 - Reasoning Extraction:
Q: [QUESTION]
A: Let's think step by step.
Stage 2 - Answer Extraction:
[REASONING FROM STAGE 1]
Therefore, the answer is
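A minimal sketch of the two-stage pipeline above, assuming a generic `generate(prompt) -> str` callable that wraps whatever model client you use (a placeholder, not a specific API).

```python
def zero_shot_cot(question, generate, trigger="Let's think step by step."):
    """Two-stage zero-shot CoT (Kojima et al., 2022).

    Stage 1 elicits free-form reasoning; stage 2 conditions on that
    reasoning and extracts a short final answer.
    """
    # Stage 1 - reasoning extraction
    stage1_prompt = f"Q: {question}\nA: {trigger}"
    reasoning = generate(stage1_prompt)

    # Stage 2 - answer extraction, conditioned on the generated reasoning
    stage2_prompt = f"{stage1_prompt}\n{reasoning}\nTherefore, the answer is"
    answer = generate(stage2_prompt)
    return reasoning, answer.strip()
```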
Paper: "Self-Consistency Improves Chain of Thought Reasoning in Language Models" (Wang et al., 2022) Citations: 5,379+
Sample multiple diverse reasoning paths, then select the most consistent answer via majority voting. The intuition: correct answers can be reached through multiple reasoning paths.
[Use any CoT prompt - zero-shot or few-shot]
[Generate N samples with temperature > 0]
[Extract final answers from each sample]
[Return the most frequent answer (majority vote)]
from collections import Counter

def self_consistency(prompt, n_samples=5, temperature=0.7):
    """Sample n diverse reasoning paths and return the majority-vote answer."""
    # llm and extract_answer are assumed helpers: llm.generate calls the model,
    # extract_answer pulls the final answer out of a CoT completion.
    answers = []
    for _ in range(n_samples):
        response = llm.generate(prompt, temperature=temperature)
        answer = extract_answer(response)
        answers.append(answer)
    # Majority vote
    return Counter(answers).most_common(1)[0][0]
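For example, assuming the `extract_answer` helper above, one might wrap a zero-shot CoT prompt like this (the question is illustrative; 5-10 samples is a reasonable floor for a stable vote, at proportional cost):

```python
prompt = (
    "Q: A store sold 14 boxes of 12 pencils each. How many pencils were sold?\n"
    "A: Let's think step by step."
)
best = self_consistency(prompt, n_samples=10, temperature=0.7)
print(best)  # majority-vote answer across the 10 sampled reasoning paths
```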
Paper: "Tree of Thoughts: Deliberate Problem Solving with Large Language Models" (Yao et al., 2023) Citations: 3,026+
Generalize CoT to a tree structure where each node is a "thought" (coherent language unit). Uses search algorithms (BFS/DFS) with self-evaluation to explore and select promising reasoning paths.
Thought Generation:
Given the current state:
[STATE]
Generate 3-5 possible next steps to solve this problem.
State Evaluation:
Evaluate if the following partial solution is:
- "sure" (definitely leads to solution)
- "maybe" (could potentially work)
- "impossible" (cannot lead to solution)
Partial solution:
[THOUGHTS SO FAR]
BFS/DFS Search:
def tree_of_thoughts(problem, max_depth=3, beam_width=3):
    """Breadth-first search over thoughts with self-evaluation pruning.

    is_solved, generate_thoughts, evaluate, and apply_thought are assumed
    helpers (see the prompt templates above for the latter two).
    """
    # Rank the self-evaluation labels so "sure" sorts ahead of "maybe"
    rank = {"sure": 2, "maybe": 1, "impossible": 0}
    queue = [(problem, [])]  # (state, thought_path)
    while queue:
        state, path = queue.pop(0)
        if is_solved(state):
            return path
        if len(path) >= max_depth:
            continue  # stop expanding beyond the depth limit
        # Generate candidate thoughts
        thoughts = generate_thoughts(state, k=5)
        # Evaluate and keep the top beam_width most promising candidates
        evaluated = [(t, evaluate(state, t)) for t in thoughts]
        top_k = sorted(evaluated, key=lambda x: rank[x[1]], reverse=True)[:beam_width]
        for thought, score in top_k:
            if score != "impossible":
                new_state = apply_thought(state, thought)
                queue.append((new_state, path + [thought]))
    return None
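The helpers generate_thoughts and evaluate are left abstract above. A hedged sketch of how they could be backed by the thought-generation and state-evaluation templates, assuming the same generic `llm.generate` client used elsewhere in this guide:

```python
def generate_thoughts(state, k=5):
    """Propose up to k candidate next steps using the thought-generation template."""
    prompt = (
        f"Given the current state:\n{state}\n"
        f"Generate {k} possible next steps to solve this problem, one per line."
    )
    response = llm.generate(prompt, temperature=0.7)
    return [line.strip() for line in response.splitlines() if line.strip()][:k]


def evaluate(state, thought):
    """Classify a partial solution as sure / maybe / impossible."""
    prompt = (
        'Evaluate if the following partial solution is "sure", "maybe", or "impossible".\n'
        f"Partial solution:\n{state}\n{thought}\n"
        "Answer with one word."
    )
    verdict = llm.generate(prompt, temperature=0).strip().lower()
    return verdict if verdict in {"sure", "maybe", "impossible"} else "maybe"
```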
Problem: Use 4, 9, 10, 13 to get 24 (use +, -, *, / and each number once)
Thought 1: 13 - 9 = 4 (Now have: 4, 4, 10)
Evaluation: "maybe" - have two 4s and 10, could work
Thought 2: 10 - 4 = 6 (Now have: 6, 9, 13)
Evaluation: "sure" - 6 * (13 - 9) = 24
Thought 3: 4 + 9 = 13 (Now have: 10, 13, 13)
Evaluation: "impossible" - no way to get 24 from these
Paper: "Least-to-Most Prompting Enables Complex Reasoning in Large Language Models" (Zhou et al., 2022) Citations: 1,466+
Two-stage process: first decompose the complex problem into simpler subproblems, then solve the subproblems sequentially, feeding each answer into the next prompt.
Stage 1: Decomposition
Q: Four years ago, Kody was only half as old as Mohamed. If Mohamed is currently twice as old as 30 years old, how old is Kody?
To solve "Four years ago, Kody was only half as old as Mohamed. If Mohamed is currently twice as old as 30 years old, how old is Kody?", we need to first solve:
- "If Mohamed is currently twice as old as 30 years old, how old is Mohamed?"
Then we need to solve:
- "Four years ago, Kody was only half as old as Mohamed. How old is Kody?"
Stage 2: Sequential Solving
Q: If Mohamed is currently twice as old as 30 years old, how old is Mohamed?
A: Mohamed is twice as old as 30, so Mohamed is 30 * 2 = 60 years old.
Q: Four years ago, Kody was only half as old as Mohamed. Mohamed is 60 years old. How old is Kody?
A: Four years ago, Mohamed was 60 - 4 = 56 years old. Kody was half as old, so Kody was 56 / 2 = 28 years old four years ago. Therefore, Kody is currently 28 + 4 = 32 years old.
The answer is 32.
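A minimal sketch of a least-to-most driver: one call decomposes the question, then each subquestion is answered in order with earlier answers folded back into the context. The prompt wording and the `generate(prompt) -> str` callable are assumptions for illustration, not the paper's exact prompts.

```python
def least_to_most(question, generate):
    """Decompose a question, then solve its subquestions sequentially."""
    # Stage 1 - decomposition
    decompose_prompt = (
        f'To solve "{question}", list the subquestions we need to answer first, '
        "one per line."
    )
    subquestions = [
        line.strip("- ").strip()
        for line in generate(decompose_prompt).splitlines()
        if line.strip()
    ]

    # Stage 2 - sequential solving, carrying earlier answers in the context
    context = ""
    answer = ""
    for sub in subquestions + [question]:
        prompt = f"{context}Q: {sub}\nA:"
        answer = generate(prompt).strip()
        context += f"Q: {sub}\nA: {answer}\n\n"
    return answer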
Paper: "ReAct: Synergizing Reasoning and Acting in Language Models" (Yao et al., 2022) Citations: 5,012+
Interleave reasoning traces ("Thought") with actions ("Action") and observations ("Observation"). Reasoning helps plan actions; actions provide new information for reasoning.
Question: What is the elevation range for the area that the eastern sector of the Colorado orogeny extends into?
Thought 1: I need to search Colorado orogeny, find the area that the eastern sector extends into, then find the elevation range of that area.
Action 1: Search[Colorado orogeny]
Observation 1: The Colorado orogeny was an episode of mountain building (an orogeny) in Colorado and surrounding areas.
Thought 2: It does not mention the eastern sector. So I need to look up eastern sector.
Action 2: Lookup[eastern sector]
Observation 2: The eastern sector extends into the High Plains.
Thought 3: The eastern sector extends into the High Plains. I need to search High Plains and find its elevation range.
Action 3: Search[High Plains]
Observation 3: High Plains refers to one of two distinct land regions: High Plains (United States)...
Thought 4: I need to search High Plains (United States).
Action 4: Search[High Plains (United States)]
Observation 4: The High Plains are a subregion of the Great Plains... elevation of around 1,800 to 7,000 ft (550 to 2,130 m).
Thought 5: The elevation range is 1,800 to 7,000 ft.
Action 5: Finish[1,800 to 7,000 ft]
Search[query] - Search for information
Lookup[keyword] - Look up keyword in current context
Finish[answer] - Return final answer
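A minimal sketch of the ReAct control loop over this action space. The regex-based action parsing, the prompt format, and the tool callables are illustrative assumptions rather than the paper's implementation.

```python
import re

def react(question, generate, tools, max_steps=8):
    """Interleave Thought/Action/Observation until the model emits Finish[...].

    `tools` maps an action name ("Search", "Lookup") to a callable that takes
    the action argument and returns an observation string, e.g.
    tools={"Search": wiki_search, "Lookup": wiki_lookup} (hypothetical helpers).
    """
    transcript = f"Question: {question}\n"
    for step in range(1, max_steps + 1):
        # Ask the model for the next Thought (and the Action it proposes)
        step_text = generate(transcript + f"Thought {step}:")
        transcript += f"Thought {step}:{step_text}\n"

        match = re.search(r"(Search|Lookup|Finish)\[(.*?)\]", step_text)
        if not match:
            continue  # no action emitted; let the model keep reasoning
        action, argument = match.groups()
        if action == "Finish":
            return argument
        observation = tools[action](argument)
        transcript += f"Observation {step}: {observation}\n"
    return None  # gave up after max_steps
```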
Paper: "PAL: Program-aided Language Models" (Gao et al., 2022) Citations: 608+
Generate code (typically Python) instead of natural language reasoning. Execute the code to get the answer. The LLM handles decomposition; the interpreter handles computation.
Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?
# solution in Python:
def solution():
    """Roger has 5 tennis balls. He buys 2 more cans of tennis balls. Each can has 3 tennis balls. How many tennis balls does he have now?"""
    tennis_balls_initial = 5
    bought_cans = 2
    tennis_balls_per_can = 3
    tennis_balls_bought = bought_cans * tennis_balls_per_can
    tennis_balls_total = tennis_balls_initial + tennis_balls_bought
    return tennis_balls_total
Q: The bakers at the Beverly Hills Bakery baked 200 loaves of bread on Monday morning. They sold 93 loaves in the morning and 39 loaves in the afternoon. A grocery store returned 6 unsold loaves. How many loaves of bread did they have left?
# solution in Python:
def solution():
    """The bakers baked 200 loaves. They sold 93 in morning, 39 in afternoon. A store returned 6. How many left?"""
    loaves_baked = 200
    loaves_sold_morning = 93
    loaves_sold_afternoon = 39
    loaves_returned = 6
    loaves_left = loaves_baked - loaves_sold_morning - loaves_sold_afternoon + loaves_returned
    return loaves_left
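A minimal sketch of a PAL harness: the model completes a `solution()` function and the harness executes it to obtain the answer. The prompt prefix and the `generate` callable are assumptions; calling `exec` on model output is shown only for illustration and should be sandboxed in practice.

```python
def pal(question, generate):
    """Program-aided reasoning: the model writes code, Python computes the answer."""
    prompt = (
        f"Q: {question}\n"
        "# solution in Python:\n"
        "def solution():\n"
    )
    body = generate(prompt)  # expected to be the indented function body
    code = "def solution():\n" + body

    namespace = {}
    exec(code, namespace)  # caution: run untrusted model code in a sandbox
    return namespace["solution"]()
```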
Paper: "Automatic Chain of Thought Prompting in Large Language Models" (Zhang et al., 2022) Citations: 838+
Automatically construct few-shot exemplars: cluster the questions, pick a representative per cluster, and generate its rationale with zero-shot CoT.
Step 1: Generate diverse demonstrations
# Cluster questions
clusters = cluster_questions(all_questions, k=8)

# For each cluster, pick a representative question and generate its rationale
demonstrations = []
for cluster in clusters:
    question = select_representative(cluster)
    reasoning = zero_shot_cot(question)  # "Let's think step by step"
    demonstrations.append((question, reasoning))
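The cluster_questions and select_representative helpers are left abstract above. One way to back them, assuming sentence-transformers and scikit-learn are available; the embedding model name is an arbitrary choice, and the original Auto-CoT code may differ.

```python
import numpy as np
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

def cluster_questions(all_questions, k=8, model_name="all-MiniLM-L6-v2"):
    """Embed the questions and partition them into k clusters."""
    embeddings = SentenceTransformer(model_name).encode(all_questions)
    labels = KMeans(n_clusters=k, n_init=10).fit_predict(embeddings)
    clusters = [[] for _ in range(k)]
    for question, emb, label in zip(all_questions, embeddings, labels):
        clusters[label].append((question, emb))
    return clusters

def select_representative(cluster):
    """Pick the question closest to its cluster centroid."""
    questions, embs = zip(*cluster)
    centroid = np.mean(embs, axis=0)
    distances = [np.linalg.norm(e - centroid) for e in embs]
    return questions[int(np.argmin(distances))]
```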
Step 2: Use as few-shot exemplars
Q: [Demo question 1]
A: Let's think step by step. [Generated reasoning 1]
Q: [Demo question 2]
A: Let's think step by step. [Generated reasoning 2]
...
Q: [New question]
A: Let's think step by step.
Paper: "Reflexion: Language Agents with Verbal Reinforcement Learning" (Shinn et al., 2023) Citations: 2,179+
After task failure, the agent generates a verbal "reflection" analyzing what went wrong. This reflection is stored in memory and used in subsequent attempts to avoid repeating mistakes.
Initial Attempt:
Task: [TASK DESCRIPTION]
Thought: [REASONING]
Action: [ACTION]
...
Result: [FAILURE/PARTIAL SUCCESS]
Reflection:
The previous attempt failed because:
1. [SPECIFIC ERROR ANALYSIS]
2. [WHAT SHOULD HAVE BEEN DONE]
3. [KEY INSIGHT FOR NEXT ATTEMPT]
Reflection: In the next attempt, I should...
Subsequent Attempt (with memory):
Task: [TASK DESCRIPTION]
Previous reflections:
- [REFLECTION 1]
- [REFLECTION 2]
Using these insights, I will now attempt the task again.
Thought: [IMPROVED REASONING]
Action: [BETTER ACTION]
Task: Write a function to find the longest palindromic substring.
Attempt 1: [CODE WITH BUG]
Test Result: Failed on "babad" - expected "bab" or "aba", got "b"
Reflection: My solution only checked single characters. I need to:
1. Consider substrings of all lengths
2. Use expand-around-center technique for efficiency
3. Track both start position and maximum length
Attempt 2: [IMPROVED CODE USING REFLECTION]
Test Result: Passed all tests
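A minimal sketch of the Reflexion loop: attempt the task, evaluate the result, reflect on failure, and retry with the accumulated reflections in the prompt. The `generate` and `evaluate` callables (e.g. a test runner returning pass/fail plus feedback) and the prompt wording are assumptions for illustration.

```python
def reflexion(task, generate, evaluate, max_trials=3):
    """Retry a task, feeding verbal reflections on past failures back in.

    `evaluate` returns (success: bool, feedback: str), e.g. from running tests.
    """
    reflections = []
    attempt = ""
    for trial in range(max_trials):
        memory = "".join(f"- {r}\n" for r in reflections)
        prompt = (
            f"Task: {task}\n"
            + (f"Previous reflections:\n{memory}" if reflections else "")
            + "Attempt the task.\n"
        )
        attempt = generate(prompt)
        success, feedback = evaluate(attempt)
        if success:
            return attempt

        # Verbal reinforcement: analyze the failure and store the lesson
        reflection = generate(
            f"Task: {task}\nAttempt:\n{attempt}\nResult: {feedback}\n"
            "The previous attempt failed. In one or two sentences, explain why "
            "and what to do differently next time."
        )
        reflections.append(reflection.strip())
    return attempt  # best effort after max_trials
```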
Choosing a technique:
- Need examples?
  - No → Zero-shot CoT
  - Yes → Few-shot CoT (if the task needs computation, use PAL)
- After Zero-shot CoT, need higher accuracy?
  - No → done with CoT
  - Yes → Self-Consistency
- Still not enough?
  - No → done
  - Yes → is the problem decomposable?
    - Yes → Least-to-Most
    - No → need exploration of alternatives?
      - Yes → Tree of Thoughts
      - No → need external information?
        - Yes → ReAct
        - No → need iterative refinement?
          - Yes → Reflexion
          - No → use plain CoT
Begin with Zero-shot CoT ("Let's think step by step"), then progress to more complex techniques if needed.
Techniques are often complementary: Self-Consistency can wrap any CoT prompt (zero-shot or few-shot), Least-to-Most subproblems can each be solved with CoT or PAL, and Reflexion typically builds on ReAct-style Thought/Action traces.
| Mistake | Why It's Wrong | Fix |
|---|---|---|
| Using CoT for simple lookups | Adds unnecessary tokens and latency | Reserve for multi-step reasoning |
| Too few samples in Self-Consistency | Majority voting needs adequate samples | Use 5-10 samples minimum |
| Generic "think step by step" without checking output | Model may produce irrelevant reasoning | Validate reasoning quality, not just presence |
| Mixing techniques without understanding trade-offs | Computational cost without benefit | Understand when each technique adds value |
| Using PAL without code interpreter | Code generation is useless without execution | Ensure execution environment available |
| Not testing exemplar quality in few-shot CoT | Poor exemplars lead to poor reasoning | Validate exemplars solve problems correctly |
| Applying Tree of Thoughts to linear problems | Massive overhead for no benefit | Use ToT only when exploration needed |
Wei, J. et al. (2022). "Chain of Thought Prompting Elicits Reasoning in Large Language Models." arXiv:2201.11903
Kojima, T. et al. (2022). "Large Language Models are Zero-Shot Reasoners." arXiv:2205.11916
Wang, X. et al. (2022). "Self-Consistency Improves Chain of Thought Reasoning in Language Models." arXiv:2203.11171
Yao, S. et al. (2023). "Tree of Thoughts: Deliberate Problem Solving with Large Language Models." arXiv:2305.10601
Zhou, D. et al. (2022). "Least-to-Most Prompting Enables Complex Reasoning in Large Language Models." arXiv:2205.10625
Yao, S. et al. (2022). "ReAct: Synergizing Reasoning and Acting in Language Models." arXiv:2210.03629
Gao, L. et al. (2022). "PAL: Program-aided Language Models." arXiv:2211.10435
Zhang, Z. et al. (2022). "Automatic Chain of Thought Prompting in Large Language Models." arXiv:2210.03493
Shinn, N. et al. (2023). "Reflexion: Language Agents with Verbal Reinforcement Learning." arXiv:2303.11366