Context-aware task execution with Serena MCP backend. First time: Explores project, saves to Serena, runs spec if complex, executes waves. Returning: Loads from Serena (<1s), detects changes, executes with cached context. Intelligently decides when to research, when to spec, when to prime. One catch-all intelligent execution command. Use when: User wants task executed in any project (new or existing).
/plugin marketplace add krzemienski/shannon-framework
/plugin install shannon@shannon-framework

This skill is limited to using the following tools:
Comprehensive intelligent task execution that automatically handles all scenarios: new projects, existing codebases, simple tasks, complex requirements, first-time work, and returning workflows - all with Serena MCP as the persistent context backend.
Core Innovation: One command that adapts to any scenario without configuration, learns from every execution, and gets faster on return visits.
Check if project exists in Serena memory:
Determine project ID from current working directory:
pwd

Check Serena for existing project memory:

list_memories() → look for "shannon_project_{project_id}"

Duration: < 1 second
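A minimal sketch of this lookup, assuming the project ID is the basename of the working directory (as in the examples below, e.g. /projects/my-flask-api → "my-flask-api") and that the Serena MCP `list_memories` tool is exposed as a callable returning memory-key strings; the helper names are illustrative, not part of the Serena API.

```python
from pathlib import Path

def project_id_from_cwd() -> str:
    # e.g. /projects/my-flask-api -> "my-flask-api"
    return Path.cwd().name

def has_shannon_context(list_memories) -> bool:
    # list_memories: assumed callable returning a list of memory-key strings
    return f"shannon_project_{project_id_from_cwd()}" in list_memories()
```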
For projects not yet in Serena:
Count files in current directory to determine if new or existing project:
# Use Bash tool
find . -type f \( -name "*.py" -o -name "*.js" -o -name "*.tsx" -o -name "*.java" -o -name "*.go" \) | wc -l
For greenfield projects (empty or minimal files):
Assess Task Complexity:
For Complex Tasks - Run Spec Analysis:
Invoke sub-skill:
@skill spec-analysis
Specification: {task_description}
For Simple Tasks - Skip Spec:
Execute Task:
Invoke sub-skill:
@skill wave-orchestration
Task: {task_description}
Spec: {spec_analysis_results if complex}
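The complexity gate above ("authentication system" → run spec-analysis, "add endpoint" → skip) can be approximated with a keyword heuristic. This is a minimal sketch under an assumed hint list, not Shannon's actual spec-analysis scoring.

```python
# Illustrative complexity gate; COMPLEX_HINTS is an assumption.
COMPLEX_HINTS = ("system", "architecture", "authentication", "migration",
                 "integration", "refactor")

def is_complex(task_description: str) -> bool:
    text = task_description.lower()
    return any(hint in text for hint in COMPLEX_HINTS)

# is_complex("create authentication system using Auth0")  -> True  -> run spec-analysis
# is_complex("add password reset endpoint")                -> False -> skip spec
```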
Save Project to Serena:
Use Serena tool:
write_memory("shannon_project_{project_id}", {
project_path: "{full_path}",
created: "{ISO_timestamp}",
type: "NEW_PROJECT",
initial_task: "{task_description}",
complexity: "{simple|complex}",
spec_id: "{spec_analysis_id if complex}"
})
Save Execution Results:
Use Serena tool:
write_memory("shannon_execution_{timestamp}", {
project_id: "{project_id}",
task: "{task_description}",
files_created: [list of files from wave results],
duration_seconds: {duration},
success: true,
timestamp: "{ISO_timestamp}"
})
For projects with existing codebase:
Explore Project Structure:
find . -name "*.py" | head -20   # sample files

Detect Tech Stack: From files found:
Extract specific frameworks from file contents:
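A minimal sketch of the two steps above: map sampled file extensions to languages, then look for framework markers in file contents. The extension map and marker strings are illustrative assumptions, not an exhaustive list.

```python
from pathlib import Path

EXT_TO_LANG = {".py": "Python", ".js": "JavaScript", ".tsx": "TypeScript/React",
               ".go": "Go", ".java": "Java"}
FRAMEWORK_MARKERS = {"from flask import": "Flask", "from fastapi import": "FastAPI",
                     "import express": "Express"}

def detect_tech_stack(sample_limit: int = 20) -> list[str]:
    stack: set[str] = set()
    files = [p for p in Path(".").rglob("*") if p.suffix in EXT_TO_LANG][:sample_limit]
    for path in files:
        stack.add(EXT_TO_LANG[path.suffix])
        try:
            text = path.read_text(errors="ignore")
        except OSError:
            continue
        for marker, framework in FRAMEWORK_MARKERS.items():
            if marker in text:
                stack.add(framework)
    return sorted(stack)
```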
Detect Validation Gates: From package.json:
From pyproject.toml:
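A sketch of gate detection under conventional layouts: npm scripts named test/build/lint in package.json, and pytest/ruff references in pyproject.toml. The specific keys and fallbacks are assumptions; a real project may use different commands.

```python
import json
from pathlib import Path

def detect_validation_gates() -> dict:
    gates = {"test": None, "build": None, "lint": None}
    pkg = Path("package.json")
    if pkg.exists():
        scripts = json.loads(pkg.read_text()).get("scripts", {})
        for gate in gates:
            if gate in scripts:
                gates[gate] = f"npm run {gate}"
    pyproject = Path("pyproject.toml")
    if pyproject.exists():
        text = pyproject.read_text()
        if "pytest" in text:
            gates["test"] = gates["test"] or "pytest"
        if "ruff" in text:
            gates["lint"] = gates["lint"] or "ruff check ."
    return gates
```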
Research Decision: Check if task mentions libraries NOT in current dependencies:
If Research Needed:
For each new library:
- Use Tavily tool: Search "{library_name} best practices guide"
- Use Context7 tool: Get library documentation if available
- Save research to Serena:
write_memory("shannon_research_{library}_{timestamp}", {
library: "{library_name}",
project_id: "{project_id}",
best_practices: "{tavily_results}",
documentation: "{context7_results}",
timestamp: "{ISO_timestamp}"
})
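The decision part of this step (which libraries named in the task are not yet dependencies) can be sketched as below. It covers only the decision, not the Tavily/Context7 calls; the capitalized-word extraction and dependency-file parsing are naive assumptions for illustration.

```python
import json, re
from pathlib import Path

def current_dependencies() -> set[str]:
    deps: set[str] = set()
    req = Path("requirements.txt")
    if req.exists():
        for line in req.read_text().splitlines():
            line = line.strip()
            if line and not line.startswith("#"):
                deps.add(re.split(r"[<>=\[; ]", line)[0].lower())
    pkg = Path("package.json")
    if pkg.exists():
        data = json.loads(pkg.read_text())
        deps |= {name.lower() for section in ("dependencies", "devDependencies")
                 for name in data.get(section, {})}
    return deps

def libraries_needing_research(task_description: str) -> list[str]:
    # Naive heuristic: treat capitalized words in the task as candidate library names.
    mentioned = {w.lower() for w in re.findall(r"\b[A-Z][A-Za-z0-9]+\b", task_description)}
    return sorted(mentioned - current_dependencies())

# libraries_needing_research("create authentication system using Auth0") -> ["auth0"]
```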
Complexity Assessment:
Execute with Context:
Invoke sub-skill:
@skill wave-orchestration
Task: {task_description}
Project Context:
- Tech Stack: {detected_tech_stack}
- Entry Points: {main_files}
- Validation Gates: {detected_gates}
Research: {research_results if any}
Spec: {spec_analysis if complex}
Save Project Context to Serena:
Use Serena tool:
write_memory("shannon_project_{project_id}", {
project_path: "{full_path}",
explored: "{ISO_timestamp}",
type: "EXISTING_PROJECT",
tech_stack: [list of detected technologies],
file_count: {file_count},
entry_points: [list of main files],
validation_gates: {
test: "{test_command}",
build: "{build_command}",
lint: "{lint_command}"
}
})
Save Execution:
Use Serena tool:
write_memory("shannon_execution_{timestamp}", {
project_id: "{project_id}",
task: "{task_description}",
files_created: [list from wave results],
research_performed: {true|false},
spec_analysis_ran: {true|false},
duration_seconds: {duration},
success: true,
timestamp: "{ISO_timestamp}"
})
For projects with Serena context:
Load Project Context from Serena:
Use Serena tool:
const projectContext = read_memory("shannon_project_{project_id}")
Extract:
Check Context Currency:
Detect Changes:
# Use Bash tool
current_file_count=$(find . -name "*.py" -o -name "*.js" -o -name "*.tsx" | wc -l)
Update Context if Changed: If changes detected OR context stale:
Use Serena tool:
write_memory("shannon_project_{project_id}", {
...existing_context,
file_count: {current_count},
updated: "{ISO_timestamp}"
})
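The currency check above can be sketched as a comparison of cached file_count and context age against the live project. The 24-hour staleness window is an assumption (the example below treats a 2-hour-old context as fresh), not a documented Shannon default.

```python
from datetime import datetime, timezone

def context_is_fresh(context: dict, current_file_count: int,
                     max_age_hours: float = 24.0) -> bool:
    explored = datetime.fromisoformat(context["explored"])
    if explored.tzinfo is None:
        explored = explored.replace(tzinfo=timezone.utc)
    age_hours = (datetime.now(timezone.utc) - explored).total_seconds() / 3600
    return age_hours <= max_age_hours and current_file_count == context["file_count"]
```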
Load Context if Fresh: If no changes:
Research Decision (same as first-time):
Complexity Decision:
Execute with Cached Context:
Invoke sub-skill:
@skill wave-orchestration
Task: {task_description}
Cached Context:
- Tech Stack: {context.tech_stack}
- Validation Gates: {context.validation_gates}
Research: {if performed}
Spec: {if complex}
Save Execution to Serena:
Use Serena tool:
write_memory("shannon_execution_{timestamp}", {
project_id: "{project_id}",
task: "{task_description}",
used_cache: true,
context_age_hours: {age},
files_created: [list],
timestamp: "{ISO_timestamp}"
})
Rule: Always check Serena first. Every time.
Rule: Check the file count. Explore if files exist.
Rule: If an external library is detected in the task but not in the project dependencies, research it.
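The three rules above, combined into one illustrative routing function. The parameter names stand for signals gathered earlier in this flow (Serena memory keys, project file count, libraries needing research); this is a sketch, not Shannon's actual routing code.

```python
def route(serena_keys: set[str], project_id: str, file_count: int,
          new_libraries: list[str]) -> dict:
    return {
        "has_context": f"shannon_project_{project_id}" in serena_keys,  # always check Serena first
        "needs_exploration": file_count > 0,                            # explore if files exist
        "needs_research": bool(new_libraries),                          # research unknown libraries
    }
```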
Project Context: "shannon_project_{project_id}" → {project_path, tech_stack, file_count, validation_gates, explored, type}
Execution History: "shannon_execution_{timestamp}" → {project_id, task, files_created, duration, success, timestamp}
Research Results: "shannon_research_{library}_{timestamp}" → {library, best_practices, documentation, timestamp}
Spec Analysis (when complex): "spec_analysis_{timestamp}" → {complexity_score, domain_percentages, phase_plan, ...} (from spec-analysis skill)

User: /shannon:do "create authentication system using Auth0"
Working Directory: /tmp/new-auth-app/
Execution:
1. list_memories() → No "shannon_project_new-auth-app"
2. File count: 0 → NEW_PROJECT
3. Task assessment: "authentication system" (complex) → Run spec-analysis
4. Research: "Auth0" detected → Research Auth0 integration patterns
5. Execute: wave-orchestration with spec + research
6. Save: write_memory("shannon_project_new-auth-app", {...})
7. Save: write_memory("shannon_execution_{timestamp}", {...})
Result: Authentication system created, context saved for next time
Time: 5-8 minutes
User: /shannon:do "add password reset endpoint"
Working Directory: /projects/my-flask-api/
Execution:
1. list_memories() → No "shannon_project_my-flask-api"
2. File count: 45 → EXISTING_PROJECT
3. Explore: Read app.py, requirements.txt → Detect Python/Flask
4. Validation gates: Found pytest in pyproject.toml
5. Task: Simple ("add endpoint") → Skip spec
6. Research: None (internal feature)
7. Execute: wave-orchestration with Flask context
8. Save project: write_memory("shannon_project_my-flask-api", {tech_stack: ["Python/Flask"], ...})
9. Save execution: write_memory("shannon_execution_{timestamp}", {...})
Result: Endpoint added, project context cached
Time: 3-5 minutes
User: /shannon:do "add email verification"
Working Directory: /projects/my-flask-api/
Execution:
1. list_memories() → Found "shannon_project_my-flask-api" ✓
2. read_memory("shannon_project_my-flask-api") → Load tech_stack, validation_gates
3. Check age: 2 hours old → Fresh
4. File count: 45 files (same) → No changes
5. Display: "Using cached context (< 1s)"
6. Task: Simple → Skip spec
7. Research: None needed
8. Execute: wave-orchestration with loaded context
9. Save execution: write_memory("shannon_execution_{timestamp}", {...})
Result: Feature added using cached context
Time: 2-3 minutes (vs 5 first time)
Speedup: 2x faster
Shannon CLI invokes this skill via Agent SDK:
```python
# In Shannon CLI unified_orchestrator.py
async for msg in self.sdk_client.invoke_skill(
    skill_name='intelligent-do',
    prompt_content=f"Task: {task}",
):
    # The skill handles all intelligence via Serena;
    # the CLI wraps it with V3 features (cost optimization, analytics, dashboard).
    ...
```
Division of Responsibilities:
intelligent-do Skill Provides (Shannon Framework):
Shannon CLI Provides (Platform Features):
Together: Complete intelligent execution platform
Use functional-testing Skill Patterns:
Evidence Required:
Status: Skill template ready for proper implementation following Shannon patterns