Guided reflection to extract learnings from recent work. Use after bug fixes, feature completion, or when prompted by hooks.
```
/plugin marketplace add iheanyi/tasuku
/plugin install iheanyi-tasuku-plugins-tasuku@iheanyi/tasuku
```

A guided process for extracting and recording learnings from your recent work. This skill walks you through reflection questions to ensure valuable knowledge is captured.
/tasuku:reflect
Look at recent activity:

```bash
# Check recent git activity
git log --oneline -10

# Check completed tasks
tk task list --status done
```
For each significant piece of work, answer:

- Root Cause Analysis (for bug fixes): what actually caused the failure, not just the symptom?
- Pattern Recognition: is this an instance of a recurring pattern worth naming?
- Decision Points: which alternatives did you weigh, and why did you choose this one?
Good learnings are specific, actionable rules, phrased so a future reader can apply them without extra context.

Examples:
```bash
# Root cause learning
tk learn "Never manually manipulate ANSI-styled strings - use lipgloss.Width/Place which handle escape sequences"

# Pattern learning
tk learn "BubbleTea async pattern: Init() returns Cmd, Update() processes result Msg, always restore selection after list refresh"

# Gotcha learning
tk learn "lipgloss.Truncate doesn't exist - use charmbracelet/x/ansi.Truncate for ANSI-aware string truncation"

# Scoped learning (applies to specific paths)
tk learn "Always force UTC timezone in golden tests" --scope "**/*_test.go"
```
If you made architectural choices:

```bash
tk decide --id "rate-limit-algo" \
  --chose "token bucket" \
  --over "fixed window, sliding window" \
  --because "Better burst handling and distributed-system friendly"
```
```bash
# List recent learnings
tk learning list

# Check for rule learnings that should be promoted
tk learning rules
```
When you invoke this skill, I'll ask:

1. What did you just finish?
2. What broke / what was tricky?
3. What was the fix / solution?
4. What rule prevents this in the future?
5. Any decisions made?
```
/tasuku:reflect

Agent: What did you just work on?
User: Fixed the TUI tests that were failing

Agent: What was the problem?
User: Tests were showing "Loading tasks..." instead of actual content

Agent: What was the root cause?
User: The tests weren't waiting for the async Init() command to complete.
      BubbleTea's New() doesn't load data - Init() returns a Cmd that does.

Agent: What's the rule to prevent this?
User: Always process Init() command in tests before making assertions

Agent: Recording learnings...
✓ tk learn "In BubbleTea tests, always process Init() command before assertions - New() starts empty, Init() returns the load Cmd"

Agent: Any architectural decisions made?
User: No decisions, just a bug fix.

Agent: Done! Learning recorded. Run `tk learning rules` to see if it should be promoted.
```
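The rule recorded above can be sketched without BubbleTea itself; the types below merely mimic its Model/Init/Update shape to show why asserting before processing Init()'s command only ever sees the empty "loading" state:

```go
package main

import "fmt"

// Msg and Cmd mimic BubbleTea's message and command types.
type Msg interface{}
type Cmd func() Msg

type tasksLoadedMsg struct{ tasks []string }

// Model starts empty: New() does not load any data.
type Model struct{ tasks []string }

func New() Model { return Model{} }

// Init returns the Cmd that actually loads the data.
func (m Model) Init() Cmd {
	return func() Msg {
		return tasksLoadedMsg{tasks: []string{"write docs", "fix TUI"}}
	}
}

// Update processes the result Msg produced by the Cmd.
func (m Model) Update(msg Msg) Model {
	if loaded, ok := msg.(tasksLoadedMsg); ok {
		m.tasks = loaded.tasks
	}
	return m
}

func (m Model) View() string {
	if len(m.tasks) == 0 {
		return "Loading tasks..."
	}
	return fmt.Sprintf("%d tasks", len(m.tasks))
}

func main() {
	m := New()
	fmt.Println(m.View()) // still "Loading tasks..." - nothing ran yet

	// The rule: run Init()'s command and feed its Msg through
	// Update before making any assertions.
	m = m.Update(m.Init()())
	fmt.Println(m.View()) // now reflects the loaded data
}
```

In a real BubbleTea test the same idea applies: call `m.Init()`, execute the returned command, and pass the resulting message to `Update` before asserting on `View()`.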
Related commands:

- /tasuku:learn - Quick learning capture (no guided process)
- /tasuku:decide - Record architectural decisions
- /tasuku:complete - Task completion workflow (includes reflection)
- /reflect - Reflect on previous response and output, based on a self-refinement framework for iterative improvement with complexity triage and verification