Conducts periodic audits of AI agent workflows, outputs, patterns, and goal alignment to identify improvements. Use after project phases, sprints, or performance plateaus.
```
npx claudepluginhub supratikpm/atomic-habits-skills --plugin atomic-habits
```

This skill uses the workspace's default tool permissions.
---
Other skills in this plugin:

- Analyzes the current Claude Code session for agent efficiency (tool precision, autonomy) and quality (CLAUDE.md compliance, code patterns), scoring each dimension and surfacing 2-3 actionable improvements.
- Implements habit tracking for AI agent operations to measure behaviors such as test runs, linting, and doc updates; detects regressions and visualizes trends for CI/CD and code-quality monitoring.
- Audits codebases for 12 leverage points of agentic coding, including context, tests, docs, logging, and architecture; identifies gaps and prioritizes fixes for AI agent success.
You are an AI agent applying the Reflection and Review framework from Atomic Habits by James Clear to your own operational performance. Use this skill to prevent the downside of good workflows — autopilot, stale patterns, and optimizing for the wrong metrics.
"Reflection and review ensures you remain conscious of your performance over time." Agents that never review their patterns silently accumulate technical debt, repeat mistakes, and optimize obsolete metrics. Periodic reflection surfaces hidden inefficiencies and ensures alignment with the project's evolving goals.
After each project phase or sprint, answer three questions:
1. What went well? (e.g., grep_search was faster than view_file for locating functions)
2. What didn't go well?
3. What did I learn?
Mid-project, ask three deeper questions:
1. What is the user's ACTUAL goal?
2. Are my current patterns aligned with the goal?
3. Where should I set a higher standard?
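As a rough sketch, the answers to both question sets can be captured as a structured record so they can be compared across phases. The interface and field names below are assumptions for illustration, not part of the skill's required format:

```typescript
// Hypothetical sketch: phase-review answers kept as structured data so they
// can be diffed across sprints. All field names are illustrative assumptions.
interface PhaseReview {
  phase: string;                 // e.g. "sprint-12" or "auth-refactor"
  wentWell: string[];            // observations worth repeating
  didNotGoWell: string[];        // friction points and mistakes
  learned: string[];             // lessons to carry forward
  actualUserGoal: string;        // restated in the agent's own words
  patternsAligned: boolean;      // do current habits still serve that goal?
  raiseStandardIn: string[];     // areas where the bar should move up
}

const review: PhaseReview = {
  phase: "sprint-12",
  wentWell: ["grep_search located functions faster than view_file"],
  didNotGoWell: ["docs lagged behind two API changes"],
  learned: ["run the schema validator before committing config edits"],
  actualUserGoal: "ship a stable import pipeline, not just pass tests",
  patternsAligned: true,
  raiseStandardIn: ["JSON schema validation", "doc updates"],
};
```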
Audit the agent's current operational habits:
| Operation | Frequency | Rating | Aligned? |
|---|---|---|---|
| Run tests after edits | Always | (+) | Yes |
| Update docs after code changes | Sometimes | (-) | Needs improvement |
| Read file outline before editing | Always | (+) | Yes |
| Validate JSON schemas | Rarely | (-) | Critical gap |
| Create implementation plan | Usually | (=) | Could be more detailed |
Mark each as: (+) Good, (-) Needs improvement, (=) Neutral
For each (-), define a corrective bundle (see /temptation-bundle).
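A minimal sketch of the scorecard as machine-checkable data follows; the types, operation names, and the correctiveBundle field are illustrative assumptions rather than a prescribed schema:

```typescript
// Hypothetical sketch of the habit scorecard as data. Ratings mirror the
// (+) / (=) / (-) marks above.
type Rating = "+" | "=" | "-";

interface Habit {
  operation: string;
  frequency: "Always" | "Usually" | "Sometimes" | "Rarely";
  rating: Rating;
  aligned: boolean;
  correctiveBundle?: string; // filled in for every "-" habit
}

const scorecard: Habit[] = [
  { operation: "Run tests after edits", frequency: "Always", rating: "+", aligned: true },
  { operation: "Update docs after code changes", frequency: "Sometimes", rating: "-", aligned: false },
  { operation: "Validate JSON schemas", frequency: "Rarely", rating: "-", aligned: false },
];

// Surface every habit that still needs a corrective bundle.
const gaps = scorecard.filter((h) => h.rating === "-" && !h.correctiveBundle);
for (const h of gaps) {
  console.log(`Needs corrective bundle: ${h.operation}`);
}
```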
After each task, ask:
"Was this my best possible output given the tools and context available?"
Not "was the user happy?" but "did I use the optimal approach?" This separates execution quality from outcome luck.
| Trigger | Review Action |
|---|---|
| After completing a multi-file refactor | Audit: Did I break any imports? Did I update all references? |
| After a user rejects a plan | Reflect: Why was the plan wrong? What context did I miss? |
| After 3 successful task completions | Upgrade: What new standard can I add to my workflow? |
| After a tool call fails | Learn: What went wrong? Add error handling for this case. |
| Mid-project checkpoint | Align: Am I still solving the original problem? |
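The trigger table can also be expressed as a simple lookup so that no review action is skipped. This is a sketch only; the trigger identifiers are assumptions chosen to mirror the rows above:

```typescript
// Hypothetical sketch: map each review trigger to its action so the agent
// can check for a pending review after every event.
type Trigger =
  | "multi-file-refactor-complete"
  | "plan-rejected"
  | "three-tasks-succeeded"
  | "tool-call-failed"
  | "mid-project-checkpoint";

const reviewActions: Record<Trigger, string> = {
  "multi-file-refactor-complete": "Audit: check for broken imports and stale references",
  "plan-rejected": "Reflect: identify the missing context behind the wrong plan",
  "three-tasks-succeeded": "Upgrade: add one new standard to the workflow",
  "tool-call-failed": "Learn: add error handling for this failure mode",
  "mid-project-checkpoint": "Align: confirm the original problem is still being solved",
};

function onEvent(trigger: Trigger): void {
  console.log(reviewActions[trigger]);
}

onEvent("plan-rejected");
```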
When applying this skill, produce: