From project-toolkit
Detects loops in agent conversations via Jaccard similarity on topic signatures and emits self-reflection nudges to break repetition for orchestrators in long sessions.
```bash
npx claudepluginhub rjmurillo/ai-agents --plugin project-toolkit
```

This skill uses the workspace's default tool permissions.
Detect when an agent is repeating the same topics across turns and surface a self-reflection nudge so the orchestrator can break the loop. Lightweight, deterministic, no external services.
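The detection approach can be sketched as follows. This is an illustrative model, not the actual contents of `stuck_detection.py`: the tokenizer and stopword list here are assumptions, while the top-5 signature size and 0.6 Jaccard cutoff come from the documented defaults.

```python
# Hedged sketch of lexical topic-signature comparison via Jaccard similarity.
import re
from collections import Counter

SIGNATURE_SIZE = 5          # top words per signature (documented default)
SIMILARITY_THRESHOLD = 0.6  # Jaccard cutoff for "similar" (documented default)

# Minimal illustrative stopword list; the real skill's list may differ.
STOPWORDS = {"the", "a", "an", "and", "or", "to", "of", "in", "is", "it", "for"}

def extract_signature(text, size=SIGNATURE_SIZE):
    """Top-N most frequent content words, lowercased."""
    words = [w for w in re.findall(r"[a-z]+", text.lower())
             if w not in STOPWORDS and len(w) > 2]
    return frozenset(w for w, _ in Counter(words).most_common(size))

def jaccard(a, b):
    """Intersection over union; 1.0 means identical vocabularies."""
    if not a or not b:
        return 0.0
    return len(a & b) / len(a | b)

turn1 = "Deploy failed: pipeline timeout error, retrying deploy."
turn2 = "Deploy failed again: pipeline timeout error, retrying the deploy."
sig1, sig2 = extract_signature(turn1), extract_signature(turn2)
print(jaccard(sig1, sig2) >= SIMILARITY_THRESHOLD)  # → True
```

Because the comparison is purely lexical, two turns count as "similar" whenever their top-word sets overlap enough, regardless of meaning.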
| Trigger Phrase | Operation |
|---|---|
| check stuck loop | Score current response against recent history |
| detect repetition | Flag repeating topic signatures |
| agent looping | Emit a loop-breaking nudge |
| reset stuck history | Clear history after a confirmed topic change |
| Symptom | Cause | Fix |
|---|---|---|
| Agent re-states the same status every turn | Topic signature unchanged across turns | Inject the nudge payload before the next turn |
| False positive after a real topic change | Stale history kept after the user redirected | Run reset to clear history |
| Stuck never triggers despite obvious repetition | Responses below MIN_TEXT_LENGTH (50 chars) | Lower threshold or pass full response text |
| Unrelated turns flagged as stuck | Generic vocabulary inflates Jaccard score | Raise DEFAULT_SIMILARITY_THRESHOLD toward 0.75 |
Use this skill when:
Use a richer evaluation instead when:
Run `python3 .claude/skills/stuck-detection/stuck_detection.py check "<text>"`. When the result contains `"stuck": true`, inject the `nudge` field into the orchestrator's next system prompt so the model self-corrects on its next turn. The nudge is internal control text wrapped in `<stuck-detection>` tags. Do not render it directly to the end user. Use `reset` to clear stale history.

```bash
# Check a response for stuck pattern (text via arg or stdin)
python3 .claude/skills/stuck-detection/stuck_detection.py check "your response text"
echo "your response" | python3 .claude/skills/stuck-detection/stuck_detection.py check

# Show current history state
python3 .claude/skills/stuck-detection/stuck_detection.py status

# Reset history after a topic change
python3 .claude/skills/stuck-detection/stuck_detection.py reset

# Extract signature only (debugging)
python3 .claude/skills/stuck-detection/stuck_detection.py extract "your text"
```
```json
{
  "stuck": true,
  "signature": "deploy,error,pipeline,retry,timeout",
  "similar_count": 3,
  "nudge": "<stuck-detection>\nSELF-REFLECTION: Loop detected.\n..."
}
```
Defaults are conservative. Override at the function call site or via constants
in stuck_detection.py:
| Constant | Default | Purpose |
|---|---|---|
| `DEFAULT_MAX_HISTORY` | 10 | Entries retained on disk |
| `DEFAULT_STUCK_THRESHOLD` | 3 | Consecutive similar turns to trigger |
| `DEFAULT_SIMILARITY_THRESHOLD` | 0.6 | Jaccard cutoff for "similar" |
| `MIN_TEXT_LENGTH` | 50 | Skip short responses |
| `SIGNATURE_SIZE` | 5 | Top words per signature |
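How these defaults interact can be modeled with a small in-memory sketch. This is an assumption about the counting logic, not the skill's actual code: the real script persists history to disk, but the trigger condition documented above would look roughly like this.

```python
# Illustrative in-memory model of the stuck-trigger condition.
DEFAULT_MAX_HISTORY = 10
DEFAULT_STUCK_THRESHOLD = 3
DEFAULT_SIMILARITY_THRESHOLD = 0.6
MIN_TEXT_LENGTH = 50

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def check(signature, history, text_length):
    """Return True when the newest STUCK_THRESHOLD signatures are mutually similar."""
    if text_length < MIN_TEXT_LENGTH:
        return False                        # skip short responses
    history.append(signature)
    del history[:-DEFAULT_MAX_HISTORY]      # retain at most MAX_HISTORY entries
    recent = history[-DEFAULT_STUCK_THRESHOLD:]
    if len(recent) < DEFAULT_STUCK_THRESHOLD:
        return False
    return all(jaccard(recent[-1], s) >= DEFAULT_SIMILARITY_THRESHOLD
               for s in recent[:-1])

history = []
sig = {"deploy", "error", "pipeline", "retry", "timeout"}
results = [check(sig, history, text_length=120) for _ in range(3)]
print(results)  # → [False, False, True]: stuck fires on the third similar turn
```

This also shows why responses under `MIN_TEXT_LENGTH` never trigger: they are skipped before they enter the history at all.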
Resolution order:

1. `--history <path>` CLI flag
2. `STUCK_DETECTION_HISTORY` environment variable (full path)
3. `STUCK_DETECTION_SESSION` environment variable (per-session file under the XDG state dir, e.g. `history-<session>.json`); use this when running multiple concurrent sessions to prevent cross-session contamination
4. `$XDG_STATE_HOME/claude-stuck-detection/history.json`
5. `~/.local/state/claude-stuck-detection/history.json`

Paths are expanded and resolved before use. Avoid relative `..` segments in `--history` or `STUCK_DETECTION_HISTORY` if you need the resolved location to match the input.
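The precedence above can be sketched as a small resolver. The function name and signature here are hypothetical, not the skill's internals; only the precedence and paths come from the documented resolution order.

```python
# Hedged sketch of the documented history-path precedence.
import os
from pathlib import Path

def resolve_history_path(cli_history=None, env=None):
    """Pick the history file using the documented precedence."""
    env = os.environ if env is None else env
    if cli_history:                                 # 1. --history <path> CLI flag
        return Path(cli_history).expanduser().resolve()
    explicit = env.get("STUCK_DETECTION_HISTORY")   # 2. full path from env var
    if explicit:
        return Path(explicit).expanduser().resolve()
    state_home = Path(env.get("XDG_STATE_HOME", "~/.local/state")).expanduser()
    base = state_home / "claude-stuck-detection"
    session = env.get("STUCK_DETECTION_SESSION")    # 3. per-session file
    if session:
        return (base / f"history-{session}.json").resolve()
    return (base / "history.json").resolve()        # 4./5. shared default
```

Note that `expanduser()` and `resolve()` mirror the "expanded and resolved before use" behavior, which is why `..` segments in the input may not survive round-tripping.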
After integrating, confirm the following before relying on the guard:

- `python3 .claude/skills/stuck-detection/stuck_detection.py status` returns valid JSON with the configured `history_length`
- Repeated `check` calls with the same long input return `"stuck": true` on the third call
- `reset` zeroes `history_length` on the next `status` check
- The `nudge` text is forwarded into the orchestrator system prompt, not rendered to the end user
- `check` calls never produce truncated JSON (atomic write via temp file + replace)
- Tests under `tests/skills/stuck-detection/` pass with `uv run pytest`

| Avoid | Why | Instead |
|---|---|---|
| Calling `check` on every token | High overhead, noisy signatures | Call once per agent turn |
| Treating `stuck: false` as proof of progress | Lexical similarity misses semantic loops | Pair with task-completion checks |
| Hardcoding the history path in callers | Breaks portability across environments | Use the env var or CLI flag |
| Ignoring `reset` after topic changes | History stays polluted | Reset when the user redirects |
Call from a hook or orchestrator wrapper. When stuck is true, prepend the
nudge to the system prompt for the next turn so the model self-corrects.
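A minimal wrapper might look like the following. The function names are illustrative, not part of the skill; the CLI path and JSON shape come from this document.

```python
# Sketch of a hook-side wrapper around the documented check CLI.
import json
import subprocess

CHECK_CMD = ["python3", ".claude/skills/stuck-detection/stuck_detection.py", "check"]

def apply_nudge(check_output, system_prompt):
    """Parse the check JSON; prepend the nudge to the next system prompt if stuck."""
    try:
        result = json.loads(check_output)
    except json.JSONDecodeError:
        return system_prompt          # fail open: never block the turn on bad JSON
    if result.get("stuck"):
        return f"{result['nudge']}\n\n{system_prompt}"
    return system_prompt

def guard_system_prompt(response_text, system_prompt):
    """Run the check once per agent turn (response text passed via stdin)."""
    proc = subprocess.run(CHECK_CMD, input=response_text,
                          capture_output=True, text=True)
    return apply_nudge(proc.stdout, system_prompt)
```

Only `apply_nudge` is pure logic; `guard_system_prompt` assumes the documented script path exists in the workspace.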
```bash
uv run pytest tests/skills/stuck-detection/ -v
```