Implements habit tracking for AI agent operations to measure behaviors like test runs, linting, and doc updates; detects regressions and visualizes trends for CI/CD and code quality monitoring.
Install: `npx claudepluginhub supratikpm/atomic-habits-skills --plugin atomic-habits`

This skill uses the workspace's default tool permissions.
---
Tracks agentic coding KPIs such as streaks, attempts, presence, and plan/diff sizes, using git diffs and workflow data to assess ZTE readiness and workflow effectiveness.
Generates dev-cycle feedback reports: calculates assertiveness scores, analyzes prompt quality, aggregates metrics, performs root-cause analysis on failures, and writes output to docs/feedbacks/cycle-{date}/.
Conducts periodic audits of AI agent workflows, outputs, patterns, and goal alignment to identify improvements. Use after project phases, sprints, or performance plateaus.
You are an AI agent applying the Habit Tracking framework from Atomic Habits by James Clear to your operational workflows. Use this skill to create visual feedback loops that make progress measurable, regressions detectable, and consistency the default.
"The 4th Law of Behavior Change: Make It Satisfying." Agents repeat behaviors that produce visible, positive signals. Habit Tracking provides immediate feedback — a test passes, a coverage number goes up, a lint score improves. The signal reinforces the behavior.
Track the behavior, not the outcome. Choose a quantifiable, automatable metric (a collection sketch follows the table):
| Domain | Metric | Tool |
|---|---|---|
| Code Quality | Lint score, code coverage % | run_command ([lint_command], [coverage_command]) |
| Testing | Tests written per feature | grep_search for test files |
| Documentation | Docstring coverage | Custom script or view_file_outline |
| Security | Vulnerabilities detected | run_command ([security_audit_command]) |
| Performance | Response time, bundle size | Benchmark scripts |
| Task Completion | Steps completed vs planned | Checklist in task.md |
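A minimal collection sketch in Python, assuming a project measured with coverage.py and pylint; both commands stand in for the bracketed `[coverage_command]` / `[lint_command]` placeholders above and should be swapped for your project's real tools.

```python
# Minimal metric-collection sketch. `coverage` and `pylint` stand in for
# the [coverage_command] / [lint_command] placeholders above.
import json
import re
import subprocess


def run(cmd):
    """Run a command and return its stdout, or '' if it fails to start."""
    try:
        return subprocess.run(cmd, capture_output=True, text=True).stdout
    except FileNotFoundError:
        return ""


def collect_metrics():
    # `coverage report --format=total` prints only the total percentage.
    coverage_out = run(["coverage", "report", "--format=total"])
    # pylint ends its report with "Your code has been rated at X.XX/10".
    lint_out = run(["pylint", "src"])
    match = re.search(r"rated at ([\d.]+)/10", lint_out)
    return {
        "coverage_pct": float(coverage_out.strip() or 0.0),
        "lint_score": float(match.group(1)) if match else None,
    }


if __name__ == "__main__":
    print(json.dumps(collect_metrics(), indent=2))
```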
The measurement must happen immediately after the action. Build it into the pipeline, as sketched after the list below:

- After [writing code] → run_command("[test_command]") → record result
- After [modifying API] → run_command("[lint_command]") → record score
- After [completing task] → update task.md → mark [x]
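A sketch of that record-after-action hook, under the same Python assumptions; `habit_log.jsonl` and the `record` helper are illustrative names, not part of the skill.

```python
# Record-after-action hook. The log file name and schema are illustrative.
import datetime
import json
import subprocess

LOG_PATH = "habit_log.jsonl"  # hypothetical append-only metric log


def record(action, command):
    """Run the check tied to an action, append the result, return pass/fail."""
    result = subprocess.run(command, capture_output=True, text=True)
    entry = {
        "ts": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "action": action,
        "command": " ".join(command),
        "passed": result.returncode == 0,
    }
    with open(LOG_PATH, "a") as f:
        f.write(json.dumps(entry) + "\n")
    return entry["passed"]


# After [writing code] -> run the test command -> record the result.
record("writing code", ["pytest", "-q"])
```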
Never miss twice. This is the most important rule for agents. If a metric drops (test coverage decreases, lint score worsens, a test fails), repair it immediately, before taking on new work:
"The first mistake is never the one that ruins you. It is the spiral of repeated regressions that follows."
Output progress as a simple table or log:
| Day | Tests Written | Coverage | Lint Score |
|-----|--------------|----------|------------|
| Mon | 12 | 78% | 94 |
| Tue | 8 | 81% | 96 |
| Wed | 15 | 84% | 97 |
The upward trend becomes its own motivator.
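A rendering sketch that derives a table like the one above from the hypothetical `habit_log.jsonl` used earlier; the per-day grouping and single "Passing Runs" column are simplifications of the fuller table shown.

```python
# Render the habit log as a markdown table, one row per day.
import json
from collections import Counter

passing = Counter()
with open("habit_log.jsonl") as f:
    for line in f:
        entry = json.loads(line)
        day = entry["ts"][:10]  # ISO timestamp -> YYYY-MM-DD
        passing[day] += int(entry["passed"])

print("| Day | Passing Runs |")
print("|-----|--------------|")
for day in sorted(passing):
    print(f"| {day} | {passing[day]} |")
```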
When applying this skill, produce:

- The metric being tracked and the tool that measures it
- A measurement step wired immediately after each action
- A progress table or log showing the trend over time
- An immediate repair for any detected regression, so you never miss twice