From antigravity-awesome-skills
Orchestrates multi-agent code reviews using specialized agents for performance analysis, security audits, architecture evaluation, and compliance validation. Ideal for holistic code assessments.
```
npx claudepluginhub sickn33/antigravity-awesome-skills
```

This skill uses the workspace's default tool permissions.
- Use when working on multi-agent code review orchestration tasks or workflows
Orchestrates parallel execution of specialized code review agents for security, architecture, and performance analysis with decision tracking to avoid redundancy. Use for comprehensive reviews of large changesets.
See resources/implementation-playbook.md.

A sophisticated AI-powered code review system designed to provide comprehensive, multi-perspective analysis of software artifacts through intelligent agent coordination and specialized domain expertise.
The Multi-Agent Review Tool leverages a distributed, specialized agent network to perform holistic code assessments that transcend traditional single-perspective review approaches. By coordinating agents with distinct expertise, we generate a comprehensive evaluation that captures nuanced insights across multiple critical dimensions:
$ARGUMENTS: Target code/project for review
```python
def route_agents(code_context):
    """Select review agents based on characteristics of the target code."""
    agents = []
    if is_web_application(code_context):
        agents.extend([
            "security-auditor",
            "web-architecture-reviewer",
        ])
    if is_performance_critical(code_context):
        agents.append("performance-analyst")
    return agents
```
```python
class ReviewContext:
    """Shared state passed between agents during a review."""

    def __init__(self, target, metadata):
        self.target = target
        self.metadata = metadata
        self.agent_insights = {}

    def update_insights(self, agent_type, insights):
        self.agent_insights[agent_type] = insights
```
```python
def execute_review(review_context):
    """Dispatch independent agents in parallel, then dependent agents in order."""
    # Independent agents: no shared state, safe to run concurrently
    parallel_agents = [
        "code-quality-reviewer",
        "security-auditor",
    ]
    # Dependent agents: run sequentially, consuming earlier insights
    sequential_agents = [
        "architecture-reviewer",
        "performance-optimizer",
    ]
    # run_agent stands in for the actual agent dispatch mechanism
    for agent in parallel_agents + sequential_agents:
        review_context.update_insights(agent, run_agent(agent, review_context))
    return review_context
```
```python
def synthesize_review_insights(agent_results):
    """Merge per-agent findings into a single consolidated report."""
    consolidated_report = {
        "critical_issues": [],
        "important_issues": [],
        "improvement_suggestions": [],
    }
    # Intelligent merging logic (deduplication, severity ranking) goes here
    return consolidated_report
```
```python
def resolve_conflicts(agent_insights):
    conflict_resolver = ConflictResolutionEngine()
    return conflict_resolver.process(agent_insights)
```
```python
def optimize_review_process(review_context):
    return ReviewOptimizer.allocate_resources(review_context)
```
```python
def validate_review_quality(review_results):
    quality_score = QualityScoreCalculator.compute(review_results)
    return quality_score > QUALITY_THRESHOLD
```
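The `QualityScoreCalculator` and `QUALITY_THRESHOLD` above are left abstract. One minimal interpretation, purely illustrative and not the tool's actual scoring logic, rates a review by the fraction of agents that returned any findings:

```python
# Illustrative assumption: the threshold value and scoring heuristic are
# stand-ins, not the tool's real implementation.
QUALITY_THRESHOLD = 0.5

class QualityScoreCalculator:
    @staticmethod
    def compute(review_results):
        """Toy heuristic: fraction of agents that reported any findings."""
        if not review_results:
            return 0.0
        reporting = sum(1 for findings in review_results.values() if findings)
        return reporting / len(review_results)
```

A real calculator would weight severity and coverage rather than mere participation, but the same interface applies.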
```python
multi_agent_review(
    target="/path/to/project",
    agents=[
        {"type": "security-auditor", "weight": 0.3},
        {"type": "architecture-reviewer", "weight": 0.3},
        {"type": "performance-analyst", "weight": 0.2},
    ],
)
```
```python
sequential_review_workflow = [
    {"phase": "design-review", "agent": "architect-reviewer"},
    {"phase": "implementation-review", "agent": "code-quality-reviewer"},
    {"phase": "testing-review", "agent": "test-coverage-analyst"},
    {"phase": "deployment-readiness", "agent": "devops-validator"},
]
```
```python
hybrid_review_strategy = {
    "parallel_agents": ["security", "performance"],
    "sequential_agents": ["architecture", "compliance"],
}
```
The tool is designed with a plugin-based architecture, allowing easy addition of new agent types and review strategies.
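A plugin architecture of this kind is often built around a simple registry. The decorator name and registry structure below are assumptions for illustration, not the tool's actual extension API:

```python
# Hypothetical plugin registry sketch; names are illustrative assumptions.
AGENT_REGISTRY = {}

def register_agent(agent_type):
    """Class decorator that registers a review agent under a string key."""
    def decorator(cls):
        AGENT_REGISTRY[agent_type] = cls
        return cls
    return decorator

@register_agent("security-auditor")
class SecurityAuditor:
    def review(self, target):
        # A real agent would analyze the target; this returns an empty report.
        return {"agent": "security-auditor", "target": target, "findings": []}

def create_agent(agent_type):
    """Instantiate a registered agent, failing loudly on unknown types."""
    try:
        return AGENT_REGISTRY[agent_type]()
    except KeyError:
        raise ValueError(f"Unknown agent type: {agent_type}")
```

New agent types then plug in by defining a class and registering it, with no changes to the orchestration code.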
Target for review: $ARGUMENTS