Best practices for inter-group communication, knowledge sharing, and collaborative workflows in four-tier architecture
Provides best practices for inter-group communication, knowledge sharing, and coordination in four-tier architecture. Claude uses this when implementing handoff protocols between analysis, decision, execution, and validation phases, or when setting up feedback loops and cross-group learning workflows.
/plugin marketplace add bejranonda/LLM-Autonomous-Agent-Plugin-for-Claude
/plugin install bejranonda-autonomous-agent@bejranonda/LLM-Autonomous-Agent-Plugin-for-Claude

This skill inherits all available tools. When active, it can use any tool Claude has access to.
This skill provides guidelines, patterns, and best practices for effective collaboration between the four agent groups in the four-tier architecture. It covers communication protocols, knowledge transfer strategies, feedback mechanisms, and coordination patterns that enable autonomous learning and continuous improvement across groups.
Use this skill when:
Implementing handoff protocols between analysis, decision, execution, and validation phases
Setting up feedback loops between groups
Establishing cross-group learning and knowledge-sharing workflows

Required for:
Autonomous learning and continuous improvement across groups
Group 1: Strategic Analysis & Intelligence (The "Brain")
Group 2: Decision Making & Planning (The "Council")
Group 3: Execution & Implementation (The "Hand")
Group 4: Validation & Optimization (The "Guardian")
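For reference, the agent names used in the examples below belong to these groups (a sketch compiled from the agents mentioned in this document; the plugin's agent roster is authoritative):

# Group-to-agent mapping as used in the examples below (illustrative;
# compiled only from agents referenced in this document).
GROUP_AGENTS = {
    1: ["code-analyzer", "security-auditor", "smart-recommender"],  # Brain
    2: ["strategic-planner", "preference-coordinator"],             # Council
    3: ["quality-controller"],                                      # Hand
    4: ["post-execution-validator", "performance-optimizer"],       # Guardian
}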
Purpose: Transfer analysis findings and recommendations to decision-makers
Structure:
from lib.group_collaboration_system import record_communication
record_communication(
from_agent="code-analyzer", # Group 1
to_agent="strategic-planner", # Group 2
task_id=task_id,
communication_type="recommendation",
message="Code analysis complete with 5 recommendations",
data={
"quality_score": 72,
"recommendations": [
{
"type": "refactoring",
"priority": "high",
"confidence": 0.92, # High confidence
"description": "Extract login method complexity",
"rationale": "Cyclomatic complexity 15, threshold 10",
"estimated_effort_hours": 2.5,
"expected_impact": "high",
"files_affected": ["src/auth.py"]
}
],
"patterns_detected": ["token_auth", "validation_duplication"],
"metrics": {
"complexity_avg": 8.5,
"duplication_rate": 0.12,
"test_coverage": 0.78
}
}
)
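On the receiving side, Group 2 can pull pending recommendations with get_communications_for_agent() (the same helper used in Troubleshooting below) and triage them by confidence. A minimal sketch; the record shape is assumed to echo the data payload above:

from lib.group_collaboration_system import get_communications_for_agent

comms = get_communications_for_agent(
    "strategic-planner", communication_type="recommendation"
)
for comm in comms:
    for rec in comm["data"].get("recommendations", []):
        # High-confidence, high-priority items go straight into planning
        if rec["confidence"] >= 0.85 and rec["priority"] == "high":
            print(f"Plan now: {rec['description']}")
        else:
            print(f"Defer pending more evidence: {rec['description']}")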
Best Practices:
Anti-Patterns to Avoid:
Purpose: Communicate execution plan with priorities and user preferences
Structure:
record_communication(
from_agent="strategic-planner", # Group 2
to_agent="quality-controller", # Group 3
task_id=task_id,
communication_type="execution_plan",
message="Execute quality improvement plan with 3 priorities",
data={
"decision_rationale": "High-priority refactoring based on user preferences",
"execution_plan": {
"quality_targets": {
"tests": 80,
"standards": 90,
"documentation": 70
},
"priority_order": [
"fix_failing_tests", # Highest priority
"apply_code_standards",
"add_missing_docs"
],
"approach": "incremental", # or "comprehensive"
"risk_tolerance": "low" # User preference
},
"user_preferences": {
"auto_fix_threshold": 0.9,
"coding_style": "concise",
"comment_level": "moderate",
"documentation_level": "standard"
},
"constraints": {
"max_iterations": 3,
"time_budget_minutes": 15,
"files_in_scope": ["src/auth.py", "src/utils.py"]
},
"decision_confidence": 0.88
}
)
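Group 3 should treat the plan's constraints block as hard limits. A sketch of an executor honoring the time budget and file scope (execute_step and the result shape are hypothetical):

import time

def execute_plan(plan: dict) -> dict:
    """Illustrative executor that enforces plan constraints."""
    constraints = plan["constraints"]
    deadline = time.monotonic() + constraints["time_budget_minutes"] * 60
    completed, skipped = [], []
    for step in plan["execution_plan"]["priority_order"]:
        if time.monotonic() > deadline:
            skipped.append(step)  # Time budget exhausted
            continue
        # Hypothetical helper: runs the step, touching only files_in_scope
        execute_step(step, files=constraints["files_in_scope"])
        completed.append(step)
    return {"completed": completed, "skipped": skipped}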
Best Practices:
Anti-Patterns to Avoid:
Purpose: Send execution results for validation and quality assessment
Structure:
record_communication(
from_agent="quality-controller", # Group 3
to_agent="post-execution-validator", # Group 4
task_id=task_id,
communication_type="execution_result",
message="Quality improvement complete: 68 → 84",
data={
"metrics_before": {
"quality_score": 68,
"tests_passing": 45,
"standards_violations": 23,
"doc_coverage": 0.60
},
"metrics_after": {
"quality_score": 84,
"tests_passing": 50,
"standards_violations": 2,
"doc_coverage": 0.75
},
"changes_made": {
"tests_fixed": 5,
"standards_violations_fixed": 21,
"docs_generated": 10
},
"files_modified": ["src/auth.py", "tests/test_auth.py"],
"auto_corrections_applied": 30,
"manual_review_needed": [],
"iterations_used": 2,
"execution_time_seconds": 145,
"component_scores": {
"tests": 28,
"standards": 22,
"documentation": 16,
"patterns": 13,
"code_metrics": 5
},
"issues_encountered": []
}
)
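Group 4 can derive the headline numbers directly from the paired metrics blocks rather than trusting the summary message. A small sketch:

def quality_delta(data: dict) -> dict:
    """Per-metric deltas computed from an execution_result payload."""
    before, after = data["metrics_before"], data["metrics_after"]
    return {k: round(after[k] - before[k], 4) for k in before if k in after}

# For the payload above this yields:
# {"quality_score": 16, "tests_passing": 5,
#  "standards_violations": -21, "doc_coverage": 0.15}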
Best Practices:
Anti-Patterns to Avoid:
Purpose: Provide feedback on recommendation effectiveness for learning
Structure:
from lib.agent_feedback_system import add_feedback
add_feedback(
from_agent="post-execution-validator", # Group 4
to_agent="code-analyzer", # Group 1
task_id=task_id,
feedback_type="success", # or "improvement", "warning", "error"
message="Recommendations were highly effective",
details={
"recommendations_followed": 3,
"recommendations_effective": 3,
"quality_improvement": 16, # points improved
"execution_smooth": True,
"user_satisfaction": "high",
"suggestions_for_improvement": []
},
impact="quality_score +16, all recommendations effective"
)
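Group 1 can later review its accumulated feedback with get_feedback_stats() (used again in Troubleshooting below) and recalibrate, for example lowering default confidence when effectiveness dips. A sketch; only total_feedback is documented, the effectiveness key is an assumed field:

from lib.agent_feedback_system import get_feedback_stats

stats = get_feedback_stats("code-analyzer")
if stats["total_feedback"] > 0:
    # "effectiveness_rate" is an illustrative key; adapt it to the
    # actual stats dict your feedback system returns.
    if stats.get("effectiveness_rate", 1.0) < 0.8:
        print("Consider lowering default recommendation confidence")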
Best Practices:
Anti-Patterns to Avoid:
When to Use: Share successful patterns across groups
from lib.inter_group_knowledge_transfer import add_knowledge
add_knowledge(
source_group=1, # Group 1 discovered this
knowledge_type="pattern",
title="Modular Authentication Pattern",
description="Breaking auth logic into validate(), authenticate(), authorize() improves testability and maintainability",
context={
"applies_to": ["authentication", "authorization", "security"],
"languages": ["python", "typescript"],
"frameworks": ["flask", "fastapi"]
},
evidence={
"quality_score_improvement": 12,
"test_coverage_improvement": 0.15,
"reuse_count": 5,
"success_rate": 0.92
}
)
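Consumers retrieve shared knowledge with query_knowledge() (same call shape as in the end-to-end example at the bottom of this skill). For instance, before a new authentication analysis:

from lib.inter_group_knowledge_transfer import query_knowledge

patterns = query_knowledge(
    for_group=1,
    knowledge_type="pattern",
    task_context={"task_type": "refactoring", "domain": "authentication"},
)
for p in patterns:
    # Entries are assumed to echo the fields recorded above
    print(p["title"], p["evidence"].get("success_rate"))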
When to Use: Share what NOT to do based on failures
add_knowledge(
source_group=3, # Group 3 encountered this during execution
knowledge_type="anti_pattern",
title="Avoid Nested Ternary Operators",
description="Nested ternary operators reduce readability and increase cognitive complexity significantly",
context={
"applies_to": ["code_quality", "readability"],
"severity": "medium"
},
evidence={
"complexity_increase": 8, # Cyclomatic complexity
"maintenance_issues": 3,
"refactoring_time_hours": 1.5
}
)
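For concreteness, this is the kind of rewrite the anti-pattern entry argues for (illustrative):

score = 72  # example value

# ❌ Nested ternary: hard to scan, high cognitive load
label = "high" if score > 80 else ("medium" if score > 50 else "low")

# ✅ Flat conditional: same behavior, readable at a glance
if score > 80:
    label = "high"
elif score > 50:
    label = "medium"
else:
    label = "low"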
When to Use: Share techniques that consistently work well
add_knowledge(
source_group=4, # Group 4 validated this across tasks
knowledge_type="best_practice",
title="Test Fixtures with CASCADE for PostgreSQL",
description="Always use CASCADE in test fixture teardown to avoid foreign key constraint errors",
context={
"applies_to": ["testing", "database"],
"frameworks": ["pytest"],
"databases": ["postgresql"]
},
evidence={
"success_rate": 1.0,
"fixes_applied": 15,
"issues_prevented": 30
}
)
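In pytest terms the practice looks roughly like this (a sketch; the db_connection fixture and table name are illustrative):

import pytest

@pytest.fixture
def users_table(db_connection):
    db_connection.execute("CREATE TABLE users (id SERIAL PRIMARY KEY)")
    yield
    # CASCADE also drops dependent objects, so teardown cannot
    # fail on foreign key constraints.
    db_connection.execute("DROP TABLE users CASCADE")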
When to Use: Share performance improvements
add_knowledge(
source_group=4, # Group 4 performance-optimizer discovered this
knowledge_type="optimization",
title="Batch Database Queries in Loops",
description="Replace N+1 query patterns with batch queries using IN clause or JOINs",
context={
"applies_to": ["performance", "database"],
"orm": ["sqlalchemy", "sequelize"]
},
evidence={
"performance_improvement": "80%", # 5x faster
"query_reduction": 0.95, # 95% fewer queries
"cases_improved": 8
}
)
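The optimization itself, sketched with SQLAlchemy (the Order/Customer models and session are assumed):

# ❌ N+1: one extra query per order inside the loop
for order in session.query(Order).all():
    customer = session.get(Customer, order.customer_id)

# ✅ Batched: a single query with an IN clause
orders = session.query(Order).all()
customer_ids = {o.customer_id for o in orders}
customers = (
    session.query(Customer)
    .filter(Customer.id.in_(list(customer_ids)))
    .all()
)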
Principle: Provide feedback immediately after validation, not days later
# ✅ GOOD: Immediate feedback
validate_results()
send_feedback_to_group_1()
send_feedback_to_group_3()
# ❌ BAD: Delayed feedback loses context
validate_results()
# ... days later ...
send_feedback() # Context is lost
Principle: Feedback must be specific and actionable, not vague
# ✅ GOOD: Specific and actionable
add_feedback(
    message="Recommendation confidence was too high (0.92) for untested pattern. Consider 0.75-0.85 for new patterns",
    details={"suggestions_for_improvement": [
        "Add confidence penalty for untested patterns",
        "Increase confidence gradually with reuse",
    ]},
)
# ❌ BAD: Vague
add_feedback(
    message="Confidence was wrong",
    details={"suggestions_for_improvement": []},
)
Principle: Highlight successes and areas for improvement
# ✅ GOOD: Balanced
add_feedback(
    details={
        "positive": [
            "Priority ranking was excellent - high priority items were truly critical",
            "User preference integration worked perfectly",
        ],
        "improvements": [
            "Estimated effort was 40% too low - consider adjusting effort formula",
            "Could benefit from more error handling recommendations",
        ],
    },
)
Principle: Focus on how the agent can improve, not blame
# ✅ GOOD: Learning-oriented
add_feedback(
    feedback_type="improvement",
    message="Analysis missed security vulnerability in auth flow",
    details={
        "learning_opportunity": "Add OWASP Top 10 checks to security analysis workflow",
        "how_to_improve": "Integrate security-auditor findings into code-analyzer reports",
    },
)
# ❌ BAD: Blame-oriented
add_feedback(
    feedback_type="error",
    message="You failed to find the security issue",
    # No suggestions for improvement
)
When to Use: Multiple Group 1 agents can analyze simultaneously
# Orchestrator coordinates parallel Group 1 analysis
from lib.group_collaboration_system import coordinate_parallel_execution
results = coordinate_parallel_execution(
group=1,
agents=["code-analyzer", "security-auditor", "smart-recommender"],
task_id=task_id,
timeout_minutes=5
)
# All Group 1 findings consolidated before sending to Group 2
consolidated_findings = consolidate_findings(results)
send_to_group_2(consolidated_findings)
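consolidate_findings() above is not part of the library; one plausible sketch, assuming results maps each agent name to its data payload:

def consolidate_findings(results: dict) -> dict:
    """Merge parallel Group 1 outputs into one payload (illustrative)."""
    merged = {"recommendations": [], "patterns_detected": set()}
    for agent, data in results.items():
        merged["recommendations"].extend(data.get("recommendations", []))
        merged["patterns_detected"].update(data.get("patterns_detected", []))
    # Highest-confidence recommendations first
    merged["recommendations"].sort(key=lambda r: r["confidence"], reverse=True)
    merged["patterns_detected"] = sorted(merged["patterns_detected"])
    return merged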
When to Use: Groups must execute in order (1→2→3→4)
# Standard workflow
findings = execute_group_1_analysis() # Group 1: Analyze
plan = execute_group_2_decision(findings) # Group 2: Decide
results = execute_group_3_execution(plan) # Group 3: Execute
validation = execute_group_4_validation(results) # Group 4: Validate
When to Use: Quality doesn't meet threshold, needs iteration
for iteration in range(max_iterations):
# Group 3 executes
results = execute_group_3(plan)
# Group 4 validates
validation = execute_group_4(results)
if validation.quality_score >= 70:
break # Success!
# Group 4 sends feedback to Group 2 for plan adjustment
feedback = validation.get_improvement_suggestions()
plan = group_2_adjust_plan(plan, feedback)
# Group 3 re-executes with adjusted plan
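If the loop exhausts max_iterations without reaching the threshold, escalate instead of silently accepting a below-threshold result. One way to express that with Python's for/else (the warning payload is illustrative):

for iteration in range(max_iterations):
    results = execute_group_3(plan)
    validation = execute_group_4(results)
    if validation.quality_score >= 70:
        break
    plan = group_2_adjust_plan(plan, validation.get_improvement_suggestions())
else:
    # Threshold never reached: flag for manual review
    add_feedback(
        from_agent="post-execution-validator",
        to_agent="strategic-planner",
        task_id=task_id,
        feedback_type="warning",
        message=f"Quality threshold not met after {max_iterations} iterations",
    )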
When to Use: Execution path depends on analysis results
# Group 1 analysis
security_findings = security_auditor.analyze()
if security_findings.critical_count > 0:
# Critical security issues → immediate path
plan = group_2_create_security_fix_plan(security_findings)
results = group_3_execute_security_fixes(plan)
else:
# Normal path
all_findings = consolidate_all_group_1_findings()
plan = group_2_create_standard_plan(all_findings)
results = group_3_execute_standard(plan)
Symptoms:
Diagnosis:
from lib.group_collaboration_system import get_communications_for_agent
# Check if communications are recorded
comms = get_communications_for_agent("strategic-planner", communication_type="recommendation")
if not comms:
print("❌ No communications found - sender may not be recording properly")
Fix:
Verify that record_communication() is called after analysis completes

Symptoms:
Diagnosis:
from lib.agent_feedback_system import get_feedback_stats
stats = get_feedback_stats("code-analyzer")
if stats["total_feedback"] == 0:
print("❌ No feedback received - feedback loop broken")
Fix:
Symptoms:
Diagnosis:
from lib.inter_group_knowledge_transfer import get_knowledge_transfer_stats
stats = get_knowledge_transfer_stats()
if stats["successful_transfers"] < stats["total_knowledge"] * 0.5:
print("⚠️ Low knowledge transfer success rate")
Fix:
Symptoms:
Diagnosis:
from lib.group_specialization_learner import get_specialization_profile
profile = get_specialization_profile(group_num=3)
if not profile.get("specializations"):
print("⚠️ No specializations detected - need more task diversity")
Fix:
Effective Group Collaboration Indicators:
Track with:
from lib.group_collaboration_system import get_group_collaboration_stats
stats = get_group_collaboration_stats()
print(f"Communication success rate: {stats['communication_success_rate']:.1%}")
print(f"Average feedback cycle time: {stats['avg_feedback_cycle_seconds']}s")
print(f"Knowledge reuse rate: {stats['knowledge_reuse_rate']:.1%}")
# Orchestrator coordinates complete workflow
from lib.group_collaboration_system import record_communication
from lib.agent_feedback_system import add_feedback
from lib.inter_group_knowledge_transfer import query_knowledge, add_knowledge
from lib.group_specialization_learner import get_recommended_group_for_task
# Step 0: Get specialization recommendations
routing = get_recommended_group_for_task(
task_type="refactoring",
complexity="medium",
domain="authentication"
)
print(f"Recommended: {routing['recommended_agents']}")
# Step 1: Group 1 analyzes (code-analyzer)
analysis = code_analyzer.analyze(task)
# Query existing knowledge
existing_patterns = query_knowledge(
for_group=1,
knowledge_type="pattern",
task_context={"task_type": "refactoring", "domain": "authentication"}
)
# Send findings to Group 2
record_communication(
from_agent="code-analyzer",
to_agent="strategic-planner",
task_id=task_id,
communication_type="recommendation",
data=analysis
)
# Step 2: Group 2 decides (strategic-planner)
user_prefs = preference_coordinator.load_preferences()
plan = strategic_planner.create_plan(analysis, user_prefs)
# Send plan to Group 3
record_communication(
from_agent="strategic-planner",
to_agent="quality-controller",
task_id=task_id,
communication_type="execution_plan",
data=plan
)
# Step 3: Group 3 executes (quality-controller)
results = quality_controller.execute(plan)
# Send results to Group 4
record_communication(
from_agent="quality-controller",
to_agent="post-execution-validator",
task_id=task_id,
communication_type="execution_result",
data=results
)
# Step 4: Group 4 validates (post-execution-validator)
validation = post_execution_validator.validate(results)
# Send feedback to Group 1
add_feedback(
from_agent="post-execution-validator",
to_agent="code-analyzer",
task_id=task_id,
feedback_type="success",
message="Recommendations were 95% effective",
details={"quality_improvement": 18}
)
# Send feedback to Group 3
add_feedback(
from_agent="post-execution-validator",
to_agent="quality-controller",
task_id=task_id,
feedback_type="success",
message="Execution was efficient and effective"
)
# Share successful pattern
if validation.quality_score >= 90:
add_knowledge(
source_group=4,
knowledge_type="pattern",
title="Successful Authentication Refactoring Pattern",
description=f"Pattern used in task {task_id} achieved quality score {validation.quality_score}",
context={"task_type": "refactoring", "domain": "authentication"},
evidence={"quality_score": validation.quality_score}
)
Related Systems:
lib/group_collaboration_system.py - Communication tracking
lib/agent_feedback_system.py - Feedback management
lib/inter_group_knowledge_transfer.py - Knowledge sharing
lib/group_specialization_learner.py - Specialization tracking
lib/agent_performance_tracker.py - Performance metrics

Related Documentation:
docs/FOUR_TIER_ARCHITECTURE.md - Complete architecture design
docs/FOUR_TIER_ENHANCEMENTS.md - Advanced features
agents/orchestrator.md - Orchestrator coordination logic