From the `swe` plugin.

DAA-powered codebase analysis using swarm agents. Use for deep analysis of large codebases.

```
npx claudepluginhub earthmanweb/serena-workflow-engine --plugin swe
```

This skill is limited to using the following tools:
Produces structured codebase analysis reports with architecture overview, critical files, patterns, and actionable recommendations.
Orchestrates parallel codebase analysis using adaptive Explore subagents selected by task complexity and type, delegating report synthesis to codebase-analysis-reporter.
If starting a new session, first read the workflow initialization:

```
mcp__plugin_swe_serena__read_memory("wf/WF_INIT")
```

Follow the WF_INIT instructions before executing this skill.
Deep codebase analysis using Decentralized Autonomous Agents (DAA).
Required (one of):

- ruv-swarm MCP (preferred for DAA learning)
- claude-flow MCP (alternative)
- Fallback: sequential analysis if no swarm MCP is available
| Agent ID | Purpose | Cognitive Pattern |
|---|---|---|
| config-analyzer | Parse config files | convergent |
| architecture-mapper | Detect layers | systems |
| pattern-detector | Find conventions | lateral |
| domain-extractor | Extract domains | divergent |
| system-finder | Identify systems | systems |
| test-analyzer | Test patterns | critical |
| import-tracer | Dependency graph | convergent |
| convention-learner | Style detection | adaptive |
| file-indexer | File inventory | convergent |
| synthesizer | Compile results | systems |
⚠️ CRITICAL: RUV-Swarm has TWO separate agent pools - choose ONE pattern:
| Pattern | Agent Creation | Execution | Use When |
|---|---|---|---|
| Swarm | agent_spawn | task_orchestrate | Parallel task execution |
| DAA | daa_agent_create | daa_workflow_execute | Learning/adaptation needed |
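The pool choice above can be expressed as a small decision helper. This is a hypothetical sketch (`choosePattern` is not part of any MCP API); it only mirrors the selection logic from the table:

```javascript
// Hypothetical helper: pick an execution pattern from MCP availability
// and whether learning/adaptation is needed. Mirrors the table above.
function choosePattern(availableMcps, needsLearning) {
  if (availableMcps.includes("ruv-swarm")) {
    // ruv-swarm exposes both pools; use DAA only when learning is required
    return needsLearning ? "daa" : "swarm";
  }
  if (availableMcps.includes("claude-flow")) return "claude-flow";
  return "sequential"; // fallback: no swarm MCP available
}
```

The ordering encodes the preference from the requirements list: ruv-swarm first, claude-flow second, sequential analysis last.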
```
// Option A: RUV-Swarm Task Orchestration (faster, no learning)
if (mcp_available("ruv-swarm") && !needsLearning) {
  mcp__ruv-swarm__swarm_init({ topology: "mesh", strategy: "balanced", maxAgents: 10 });
}

// Option B: RUV-Swarm DAA Workflow (slower, with learning)
if (mcp_available("ruv-swarm") && needsLearning) {
  mcp__ruv-swarm__daa_init({ enableLearning: true, enableCoordination: true });
}

// Option C: Claude-Flow (alternative)
if (mcp_available("claude-flow")) {
  mcp__claude-flow__swarm_init({ topology: "mesh", maxAgents: 10 });
}
```
CRITICAL: Spawn ALL agents in ONE message for parallelism
**Option A: Swarm Agents (for task_orchestrate)**

```
// These go into the SWARM pool - usable by task_orchestrate
mcp__ruv-swarm__agent_spawn({ type: "analyst", name: "config-analyzer" })
mcp__ruv-swarm__agent_spawn({ type: "analyst", name: "architecture-mapper" })
mcp__ruv-swarm__agent_spawn({ type: "researcher", name: "pattern-detector" })
mcp__ruv-swarm__agent_spawn({ type: "researcher", name: "domain-extractor" })
mcp__ruv-swarm__agent_spawn({ type: "analyst", name: "system-finder" })
mcp__ruv-swarm__agent_spawn({ type: "analyst", name: "test-analyzer" })
mcp__ruv-swarm__agent_spawn({ type: "researcher", name: "import-tracer" })
mcp__ruv-swarm__agent_spawn({ type: "researcher", name: "convention-learner" })
mcp__ruv-swarm__agent_spawn({ type: "analyst", name: "file-indexer" })
mcp__ruv-swarm__agent_spawn({ type: "coordinator", name: "synthesizer" })
```
**Option B: DAA Agents (for daa_workflow_execute)**

```
// These go into the DAA pool - usable by daa_workflow_execute, NOT task_orchestrate
const agents = [
  { id: "config-analyzer", cognitivePattern: "convergent" },
  { id: "architecture-mapper", cognitivePattern: "systems" },
  { id: "pattern-detector", cognitivePattern: "lateral" },
  { id: "domain-extractor", cognitivePattern: "divergent" },
  { id: "system-finder", cognitivePattern: "systems" },
  { id: "test-analyzer", cognitivePattern: "critical" },
  { id: "import-tracer", cognitivePattern: "convergent" },
  { id: "convention-learner", cognitivePattern: "adaptive" },
  { id: "file-indexer", cognitivePattern: "convergent" },
  { id: "synthesizer", cognitivePattern: "systems" }
];

// Spawn all DAA agents in parallel
agents.forEach(a => mcp__ruv-swarm__daa_agent_create({
  id: a.id,
  cognitivePattern: a.cognitivePattern,
  enableMemory: true,
  learningRate: 0.8
}));
```
⚠️ Match execution to agent type!
**Option A: Swarm Agents → task_orchestrate**

```
// ONLY works with agents from agent_spawn
mcp__ruv-swarm__task_orchestrate({
  task: "Analyze codebase structure, patterns, domains, and systems",
  strategy: "parallel",
  maxAgents: 10,
  priority: "high"
});
```
**Option B: DAA Agents → daa_workflow_execute**

```
// ONLY works with agents from daa_agent_create
mcp__ruv-swarm__daa_workflow_create({
  id: "analysis-workflow",
  name: "Codebase Analysis",
  strategy: "parallel"
});

mcp__ruv-swarm__daa_workflow_execute({
  workflowId: "analysis-workflow",
  agentIds: ["config-analyzer", "architecture-mapper", "pattern-detector",
             "domain-extractor", "system-finder", "test-analyzer",
             "import-tracer", "convention-learner", "file-indexer", "synthesizer"],
  parallelExecution: true
});
```
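The pool/execution pairing can also be enforced mechanically. The guard below is a hypothetical sketch (`POOL_EXECUTION` and `assertPoolMatch` are illustrative names, not MCP tools) that encodes the rule: agents from `agent_spawn` are only visible to `task_orchestrate`, and agents from `daa_agent_create` only to `daa_workflow_execute`:

```javascript
// Hypothetical guard: map each creation call to the only execution
// call that can see its agents, per the two-pool rule above.
const POOL_EXECUTION = {
  agent_spawn: "task_orchestrate",         // Swarm pool
  daa_agent_create: "daa_workflow_execute" // DAA pool
};

function assertPoolMatch(creationCall, executionCall) {
  const expected = POOL_EXECUTION[creationCall];
  if (executionCall !== expected) {
    throw new Error(
      `${executionCall} cannot see agents from ${creationCall}; use ${expected}`
    );
  }
  return true;
}
```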
Each agent produces structured findings:
Based on synthesized results, create:
Record analysis success for future improvement:
```
mcp__ruv-swarm__daa_agent_adapt({
  agentId: "synthesizer",
  performanceScore: 0.9,
  feedback: "Analysis complete"
});

mcp__ruv-swarm__daa_knowledge_share({
  sourceAgentId: "synthesizer",
  targetAgentIds: ["config-analyzer", "architecture-mapper"],
  knowledgeDomain: "codebase-patterns"
});
```
SWARM ANALYSIS COMPLETE
| Metric | Value |
|---|---|
| Agents Used | 10 |
| Analysis Time | [duration] |
Detected:
Memories Created:
DAA Learning:
## Skill Return
- **Skill**: swe-swarm-analyze
- **Status**: [success|success_with_findings|blocked]
- **Agents Used**: [count]
- **Memories Created**: [list]
- **Domains Found**: [count]
- **Systems Found**: [count]
- **Next Step Hint**: WF_CLASSIFY
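The Skill Return block above can be assembled from a results object. This renderer is a hypothetical sketch (the field names come from the list above; `renderSkillReturn` itself is not part of the skill's API):

```javascript
// Hypothetical renderer: build the Skill Return block from a result object.
function renderSkillReturn(r) {
  return [
    "## Skill Return",
    `- **Skill**: ${r.skill}`,
    `- **Status**: ${r.status}`,
    `- **Agents Used**: ${r.agentsUsed}`,
    `- **Memories Created**: ${r.memoriesCreated.join(", ")}`,
    `- **Domains Found**: ${r.domainsFound}`,
    `- **Systems Found**: ${r.systemsFound}`,
    "- **Next Step Hint**: WF_CLASSIFY"
  ].join("\n");
}
```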
If no swarm MCP available:
⚠️ No swarm MCP detected. Running sequential analysis.
This will take longer but produce similar results.
Progress:

```
[1/10] Analyzing config files...
[2/10] Mapping architecture...
...
```
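The fallback can be sketched as a plain loop over the same ten analysis steps, emitting the progress lines shown above. This is an illustrative sketch (`STEPS` and `sequentialProgress` are hypothetical names), not the skill's actual implementation:

```javascript
// Hypothetical sequential fallback: run the ten analysis steps one
// at a time when no swarm MCP is detected, reporting progress.
const STEPS = [
  "Analyzing config files", "Mapping architecture", "Detecting patterns",
  "Extracting domains", "Identifying systems", "Analyzing tests",
  "Tracing imports", "Learning conventions", "Indexing files",
  "Synthesizing results"
];

function sequentialProgress() {
  return STEPS.map((step, i) => `[${i + 1}/${STEPS.length}] ${step}...`);
}
```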
> **Skill /swe-swarm-analyze complete** - [count] memories created via DAA analysis