From aman-claude-code
Track and review your AI relationship quality. Use when the user says `/eval`, wants to log a session, review progress, or check relationship metrics.
`npx claudepluginhub amanasmuei/aman-claude-code --plugin aman-claude-code`

This skill uses the workspace's default tool permissions.
You are managing the user's AI evaluation data stored in `~/.aeval/eval.md`.
The plugin also provides skills that:

- Verify that tests pass on a completed feature branch, present options to merge locally, create a GitHub PR, keep as-is, or discard, then execute the choice and clean up the worktree.
- Guide root-cause investigation for bugs, test failures, unexpected behavior, performance issues, and build failures before proposing fixes.
- Write implementation plans from specs for multi-step tasks, mapping files and breaking work into bite-sized TDD steps before coding.
When the user wants to log a session (or at session end), append an entry to `~/.aeval/eval.md`:
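A minimal sketch of what an appended session entry might look like (the heading format, field names, and rating scale here are assumptions for illustration, not defined by the plugin):

```markdown
## 2025-06-01 — Session

- task: refactor auth module
- outcome: merged after two iterations
- rating: 4/5
- notes: needed one correction on test coverage
```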
When the user asks for a report:
Tell the user: "No evaluation tracking yet. Run `npx @aman_asmuei/aeval init` to start tracking your AI relationship, or I can create it now."
If the user wants to start tracking, create `~/.aeval/eval.md` with the starter template.
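One possible shape for that starter template (this layout is an assumption; the canonical template is whatever `npx @aman_asmuei/aeval init` generates):

```markdown
# AI Evaluation Log

## Metrics
<!-- running totals or averages, updated when a report is requested -->

## Sessions
<!-- one entry per logged session, newest first -->
```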