**Category:** Learning
Submits feedback to the learning system to improve agent selection and workflows.
```
/plugin marketplace add nguyenthienthanh/aura-frog
/plugin install aura-frog@aurafrog
```
```shell
/learn:feedback                          # Interactive feedback
/learn:feedback --type correction        # Report a correction
/learn:feedback --type success           # Report what worked well
/learn:feedback --type agent-issue       # Report agent selection issue
/learn:feedback --type workflow-issue    # Report workflow issue
```

**Script:** `./scripts/learn/submit-feedback.sh`
When the user runs `/learn:feedback`, Claude MUST actually submit the feedback to Supabase. DO NOT just show documentation; DO run the `curl` command to store the feedback. The steps:
```shell
# Source .envrc first
source .envrc 2>/dev/null || source .claude/.envrc 2>/dev/null || true

# Verify the learning system is configured
if [ -z "$SUPABASE_URL" ] || [ -z "$SUPABASE_SECRET_KEY" ]; then
  echo "❌ Learning system not configured"
  echo "Run: source .envrc"
  exit 1
fi
```
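The `.envrc` referenced above is expected to export the two variables the check looks for. A minimal sketch, with placeholder values:

```shell
# Minimal .envrc sketch -- the URL and key values here are placeholders.
export SUPABASE_URL="https://your-project.supabase.co"
export SUPABASE_SECRET_KEY="your-service-role-key"
```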
Ask the user:
## 📝 Submit Feedback

**What type of feedback?**

1. ✅ **Success** - Something worked really well
2. ❌ **Correction** - I had to fix/redo something
3. 🤖 **Agent Issue** - Wrong agent was selected
4. 🔄 **Workflow Issue** - Problem with workflow phases
5. 💡 **Suggestion** - General improvement idea

**Select (1-5):**
**What worked well?**
- Agent used: [auto-detect or ask]
- Task type: [e.g., "React component", "API endpoint"]
- Why it worked: [user input]
**Rate effectiveness (1-5):**
**What needed correction?**
- What was the issue: [user input]
- What did you change: [user input]
- File(s) affected: [auto-detect recent edits or ask]
- Agent used: [auto-detect or ask]
**Severity (1-5):** 1=minor tweak, 5=complete redo
**Agent Selection Problem**
- Task description: [user input]
- Agent selected: [auto-detect or ask]
- Better agent would be: [user input or suggest from list]
- Why: [user input]
**Workflow Problem**
- Phase where issue occurred: [1-9]
- What happened: [user input]
- Expected behavior: [user input]
**Improvement Suggestion**
- Category: [agent/workflow/rule/skill/other]
- Suggestion: [user input]
- Expected benefit: [user input]
```shell
# Determine feedback_type based on user selection:
#   success        → approval
#   correction     → correction
#   agent-issue    → agent_override
#   workflow-issue → rejection
#   suggestion     → suggestion
```
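The mapping above can be sketched as a small shell helper (the function name `map_feedback_type` is illustrative, not part of the actual script):

```shell
# Map the menu selection (number or --type value) to the stored feedback_type.
map_feedback_type() {
  case "$1" in
    1|success)        echo "approval" ;;
    2|correction)     echo "correction" ;;
    3|agent-issue)    echo "agent_override" ;;
    4|workflow-issue) echo "rejection" ;;
    5|suggestion)     echo "suggestion" ;;
    *)                echo "unknown" >&2; return 1 ;;
  esac
}

feedback_type=$(map_feedback_type "correction")   # -> correction
```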
```shell
curl -X POST "${SUPABASE_URL}/rest/v1/af_feedback" \
  -H "apikey: ${SUPABASE_SECRET_KEY}" \
  -H "Authorization: Bearer ${SUPABASE_SECRET_KEY}" \
  -H "Content-Type: application/json" \
  -d '{
    "feedback_type": "'"$feedback_type"'",
    "context": {
      "task_type": "'"$task_type"'",
      "agent_used": "'"$agent_used"'",
      "details": "'"$details"'"
    },
    "user_comment": "'"$user_comment"'",
    "severity": '"$severity"',
    "session_id": "'"$SESSION_ID"'"
  }'
```
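Splicing shell variables directly into the JSON body, as above, breaks when a comment contains double quotes or newlines. A safer sketch, assuming `jq` is available, builds the payload first (the example values stand in for the interactively collected answers):

```shell
# Example values (normally collected interactively):
feedback_type="correction"
task_type="API endpoint"
agent_used="react-expert"
details="Error handling was too verbose"
user_comment='Had to simplify the "catch" blocks'
severity=2
SESSION_ID="demo-session"

# jq escapes quotes/newlines in user input for us.
payload=$(jq -n \
  --arg type "$feedback_type" \
  --arg task "$task_type" \
  --arg agent "$agent_used" \
  --arg details "$details" \
  --arg comment "$user_comment" \
  --argjson severity "$severity" \
  --arg session "$SESSION_ID" \
  '{feedback_type: $type,
    context: {task_type: $task, agent_used: $agent, details: $details},
    user_comment: $comment,
    severity: $severity,
    session_id: $session}')

# Only POST when the learning system is actually configured.
if [ -n "${SUPABASE_URL:-}" ] && [ -n "${SUPABASE_SECRET_KEY:-}" ]; then
  curl -sf -X POST "${SUPABASE_URL}/rest/v1/af_feedback" \
    -H "apikey: ${SUPABASE_SECRET_KEY}" \
    -H "Authorization: Bearer ${SUPABASE_SECRET_KEY}" \
    -H "Content-Type: application/json" \
    -d "$payload"
fi
```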
✅ Feedback submitted!

**Type:** Correction
**Severity:** 3/5
**Agent:** react-expert
**Details:** Component state management approach was overcomplicated

💡 Run `/learn:analyze` to see how your feedback improves the system.
For fast feedback submission:
```shell
# Quick success
/learn:feedback --type success --message "TDD approach worked great for API"

# Quick correction
/learn:feedback --type correction --severity 3 --message "Had to simplify state logic"

# Quick agent issue
/learn:feedback --type agent-issue --agent react-expert --better nodejs-expert --message "Was backend task"
```
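Internally, the quick mode boils down to flag parsing before the same submission path. A hypothetical sketch of how `submit-feedback.sh` could read these flags (`parse_args` and the variable names are illustrative):

```shell
# Parse quick-mode flags into variables; unknown flags are skipped.
parse_args() {
  ftype=""; severity=""; message=""
  while [ $# -gt 0 ]; do
    case "$1" in
      --type)     ftype="$2"; shift 2 ;;
      --severity) severity="$2"; shift 2 ;;
      --message)  message="$2"; shift 2 ;;
      *)          shift ;;
    esac
  done
  echo "type=$ftype severity=$severity message=$message"
}

parse_args --type correction --severity 3 --message "Had to simplify"
# prints: type=correction severity=3 message=Had to simplify
```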
| Type | `feedback_type` | When to Use |
|---|---|---|
| Success | approval | Agent/approach worked well |
| Correction | correction | You modified AI output |
| Agent Issue | agent_override | Wrong agent selected |
| Workflow Issue | rejection | Phase/workflow problem |
| Suggestion | suggestion | General improvement idea |
User: /learn:feedback
Claude:
## 📝 Submit Feedback

What type of feedback?

1. ✅ Success
2. ❌ Correction
3. 🤖 Agent Issue
4. 🔄 Workflow Issue
5. 💡 Suggestion
User: 1
Claude: What worked well?
User: The react-expert agent's approach to form validation was perfect
Claude: Rate effectiveness (1-5):
User: 5
Claude:

✅ Feedback submitted!

Type: Success (approval)
Rating: 5/5
Agent: react-expert
Context: Form validation approach

💡 This positive feedback helps identify successful patterns!
User: /learn:feedback --type correction
Claude: What needed correction?
User: The API error handling was too verbose, had to simplify
Claude: Severity (1=minor, 5=complete redo):
User: 2
Claude:

✅ Feedback submitted!

Type: Correction
Severity: 2/5
Details: API error handling too verbose, simplified

💡 This helps identify patterns to avoid.
User: /learn:feedback --type success --message "Vue composition API pattern was exactly right"
Claude:

✅ Feedback submitted!

Type: Success
Message: Vue composition API pattern was exactly right
Agent: vue-expert (auto-detected)

💡 Quick feedback recorded!
The command will try to auto-detect context where possible (the agent used, recently edited files, and the current session ID), reducing the manual input needed.
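As one example of what auto-detection can look like, recently edited files can be pulled from `git` when the command runs inside a repository. A sketch, assuming uncommitted edits are a reasonable proxy for "recent" (the function name is illustrative):

```shell
# List up to five files with uncommitted changes; fall back to asking.
detect_recent_files() {
  files=$(git diff --name-only HEAD 2>/dev/null | head -5)
  if [ -n "$files" ]; then
    printf '%s\n' "$files"
  else
    echo "(none detected - will ask)"
  fi
}
```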
**Related commands:**

- `/learn:status` - Check feedback count and system status
- `/learn:analyze` - Analyze feedback patterns
- `/learn:apply` - Apply improvements based on feedback

**Version:** 1.0.0