From atum-ai-ml
ReAct (Reasoning + Acting) agent pattern library — implementation of the ReAct paradigm by Yao et al. 2022 (ReAct: Synergizing Reasoning and Acting in Language Models, ICLR 2023) where an agent alternates between Thought (reasoning step in natural language), Action (tool call or environment interaction), and Observation (result feedback) in an explicit loop. Covers the core loop structure (Thought→Action→Observation→Thought→...), prompt template design (system prompt with ReAct format instructions, scratchpad accumulation, action parser), action space definition (tool registry, tool descriptions in JSON Schema or natural language, action validation), observation handling (tool output parsing, error recovery, observation truncation for long outputs), termination conditions (final answer detection, max iterations, confidence threshold), comparison with alternative agent patterns (CoT pure for non-tool tasks, function calling JSON for structured tool use, CodeAct for code-as-action, Reflexion for self-correction), production frameworks that implement ReAct (LangChain AgentExecutor, LlamaIndex ReActAgent, Haystack Agents, Smolagents from Hugging Face, AutoGen ReAct, CrewAI), Claude/GPT-specific ReAct prompt patterns, debugging ReAct loops (loop detection, hallucinated tools, infinite loops), and the limitations of ReAct (no parallelism, latency cost per step, error propagation, prompt verbosity). Use when implementing tool-using agents, building autonomous research agents, debugging existing ReAct implementations, or choosing between agent patterns. Differentiates from generic agent skills by deep focus on the ReAct-specific loop mechanics and the prompt templates that make it work reliably.
npx claudepluginhub arnwaldn/atum-plugins-collection --plugin atum-ai-ml

This skill uses the workspace's default tool permissions.
Foundational agentic pattern published by **Yao et al. 2022** (Princeton + Google Research). The paper "ReAct: Synergizing Reasoning and Acting in Language Models" has become the basis of most modern agent frameworks (LangChain, LlamaIndex, Smolagents, AutoGen).
Instead of asking the LLM either to reason (Chain-of-Thought) or to act (pure function calling), ReAct explicitly alternates reasoning and action in a visible loop.
┌─────────────────────────────────────────────────────────┐
│                       ReAct LOOP                        │
├─────────────────────────────────────────────────────────┤
│                                                         │
│   ┌──────────┐     ┌──────────┐     ┌─────────────┐     │
│   │ THOUGHT  │────▶│  ACTION  │────▶│ OBSERVATION │     │
│   │          │     │          │     │             │     │
│   │ "To      │     │ search(  │     │ "5 results  │     │
│   │ answer,  │     │   query  │     │  found..."  │     │
│   │ I need   │     │ )        │     │             │     │
│   │ to look  │     │          │     │             │     │
│   │ it up"   │     │          │     │             │     │
│   └──────────┘     └──────────┘     └─────────────┘     │
│        ▲                                   │            │
│        └───────────────────────────────────┘            │
│           (loop until FINAL ANSWER)                     │
└─────────────────────────────────────────────────────────┘
You are an agent that solves tasks by reasoning and acting in a loop.
Available tools:
- search(query: str) -> str: Search the web for information
- calculator(expression: str) -> float: Evaluate math expression
- final_answer(answer: str): Submit the final answer
For each step, output exactly:
Thought: [your reasoning about what to do next]
Action: [tool_name(arguments)]
Observation: [will be filled by the system after the action]
When you have enough information, output:
Thought: [reasoning that leads to the answer]
Action: final_answer("your answer here")
Task: {user_question}
{scratchpad}
The scratchpad accumulates the previous Thought/Action/Observation triples.
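The loop implementation below references a `REACT_PROMPT_TEMPLATE` constant and a `format_tool_descriptions` helper without defining them. A minimal sketch of both, matching the template shown above (the exact string and the docstring-based description format are my own assumptions, not prescribed by the paper):

```python
# Hypothetical template constant; {tools}, {question} and {scratchpad}
# are filled in at every step of the loop.
REACT_PROMPT_TEMPLATE = """You are an agent that solves tasks by reasoning and acting in a loop.

Available tools:
{tools}

For each step, output exactly:
Thought: [your reasoning about what to do next]
Action: [tool_name(arguments)]

Task: {question}
{scratchpad}"""


def format_tool_descriptions(tools: dict) -> str:
    # One line per tool; uses each function's docstring as its description.
    return "\n".join(
        f"- {name}: {(fn.__doc__ or 'no description').strip()}"
        for name, fn in tools.items()
    )
```

Keeping descriptions in docstrings means the tool registry stays a plain `dict[str, Callable]`, with no extra metadata structure to maintain.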
Question: "What is the age of the current president of France, squared?"

Thought: First I need to find out who the current president of France is.
Action: search("current president of France 2026")
Observation: Emmanuel Macron has been President of the French Republic since 2017.
Thought: Now I need to find his age.
Action: search("Emmanuel Macron date of birth")
Observation: Emmanuel Macron was born on December 21, 1977.
Thought: In 2026 he is 48 or 49 depending on the date. Let's compute 48².
Action: calculator("48 ** 2")
Observation: 2304
Thought: I have the answer.
Action: final_answer("Emmanuel Macron's age squared is 2304 (assuming he is 48 in 2026).")
import re
from typing import Callable

def react_agent(question: str, tools: dict[str, Callable], llm_call: Callable, max_steps: int = 10):
    scratchpad = ""
    for step in range(max_steps):
        prompt = REACT_PROMPT_TEMPLATE.format(
            tools=format_tool_descriptions(tools),
            question=question,
            scratchpad=scratchpad,
        )
        response = llm_call(prompt)

        # Parse Thought + Action from the model output
        thought_match = re.search(r"Thought:\s*(.+?)(?=\nAction:|$)", response, re.DOTALL)
        action_match = re.search(r"Action:\s*(\w+)\((.*?)\)", response, re.DOTALL)
        if not action_match:
            return "ERROR: no parseable action"
        tool_name = action_match.group(1)
        tool_args = action_match.group(2).strip('"\' ')  # drop surrounding quotes

        # Termination: the model submitted its final answer
        if tool_name == "final_answer":
            return tool_args

        # Execute the tool; feed errors back as observations so the
        # model can recover instead of crashing the loop
        if tool_name not in tools:
            observation = f"ERROR: tool {tool_name} not found"
        else:
            try:
                observation = str(tools[tool_name](tool_args))
            except Exception as e:
                observation = f"ERROR: {e}"

        # Append the completed step to the scratchpad
        thought = thought_match.group(1).strip() if thought_match else ""
        scratchpad += f"\nThought: {thought}\nAction: {tool_name}({tool_args})\nObservation: {observation}\n"
    return "ERROR: max steps reached"
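The skill description above also calls out observation truncation (long tool outputs blow up the scratchpad) and loop detection (the agent repeating the same action forever). A minimal sketch of both; the helper names `truncate_observation` and `detect_loop` are my own, not from the paper or any framework:

```python
def truncate_observation(obs: str, max_chars: int = 500) -> str:
    """Cap long tool outputs to keep the scratchpad within the context budget."""
    if len(obs) <= max_chars:
        return obs
    return obs[:max_chars] + f"\n...[truncated, {len(obs) - max_chars} chars omitted]"


def detect_loop(scratchpad: str, action: str) -> bool:
    """Flag when the agent is about to repeat an action it already took
    (a common ReAct failure mode that otherwise burns steps until max_steps)."""
    return f"Action: {action}" in scratchpad
```

Inside the loop, one plausible wiring is `observation = truncate_observation(observation)` before appending, and breaking out (or injecting a warning observation) when `detect_loop(scratchpad, f"{tool_name}({tool_args})")` fires.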
| Framework | Strengths | When to use |
|---|---|---|
| LangChain AgentExecutor | Mature, large ecosystem, built-in memory | Python apps with varied needs |
| LlamaIndex ReActAgent | Tight RAG integration | RAG-centric apps |
| Smolagents (Hugging Face) | Lightweight, native code-as-action, transparent | Modern 2026 apps, LangChain alternative |
| AutoGen (Microsoft) | Multi-agent, conversations | Complex multi-agent workflows |
| Haystack Agents | Declarative pipelines | Mature NLP pipelines |
| CrewAI | Role-based agents | Collaborative crews |
| Pattern | When to use |
|---|---|
| Pure CoT | No tools needed, reasoning only |
| Function calling JSON (OpenAI/Anthropic native) | Simple tool use, 1-2 calls, latency-critical |
| ReAct | Complex tool use with multi-step reasoning, transparency desired |
| CodeAct | Composable actions, math/data processing, Turing-completeness |
| Reflexion | Tasks where iterative self-correction pays off |
| Tree-of-Thoughts | Multi-option decisions with backtracking |
With Claude, a text-based ReAct prompt works well with XML tags: `<thought>`, `<action>`, `<observation>`. Using the native tool_use API is often better than a text ReAct prompt.

Related skills: prompt-engineer (this plugin), eval-harness (this plugin), reflexion-pattern (this plugin), codeact-pattern (this plugin), tree-of-thoughts (this plugin)