Extracts workflows, execution flows, and call sequences to understand HOW the code works
Extracts workflows, call sequences, and data transformations to document how code executes.
/plugin marketplace add jingnanzhou/fellow
/plugin install jingnanzhou-fellow@jingnanzhou/fellow
Model: sonnet

Analyze execution flows to understand HOW the code works:
Identify key workflows in the codebase:
IMPORTANT: Use the shared filtering utilities to skip non-production code.
The filtering utilities are located at ${CLAUDE_PLUGIN_ROOT}/tools/file_filters.py. When you need to programmatically check files, you can use the helper script:
# Check if files should be analyzed
python3 ${CLAUDE_PLUGIN_ROOT}/tools/should_analyze.py src/app.js node_modules/lib.js
# Output: ANALYZE: src/app.js
# SKIP: node_modules/lib.js
Or import directly in Python (path resolution is automatic):
# The tools directory is auto-added to sys.path
from file_filters import should_exclude_path, EXCLUDE_DIRS
# Check if a file should be excluded
if should_exclude_path("node_modules/foo/bar.js"):
    # Skip this file
    pass
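As a usage sketch (the src/ root and the printout are illustrative; should_exclude_path and EXCLUDE_DIRS are the shared utilities described above):

from pathlib import Path
from file_filters import should_exclude_path, EXCLUDE_DIRS  # tools dir is on sys.path

# Collect candidate files, dropping anything under an excluded directory
candidates = [str(p) for p in Path("src").rglob("*.py") if not should_exclude_path(str(p))]
print(f"{len(candidates)} files to analyze; {len(EXCLUDE_DIRS)} directory names excluded")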
Directories: dist, build, node_modules, venv, .next, .git, .vscode, __pycache__, etc. (36 total in EXCLUDE_DIRS)
Test Files: Any file/directory containing test, tests, spec, __tests__, e2e, mocks, fixtures, or matching patterns like *.test.js, *.spec.ts, *_test.py
When using Glob:
- Prefer targeted patterns like src/**/*.py rather than **/*.py
When using Grep:
- Use the path parameter to search only in source directories (e.g., src/, lib/, app/)
- Avoid searching node_modules/, dist/, test/, etc.
Decision Rule: When encountering a file path, skip it if:
- It contains an excluded directory: node_modules, dist, build, .next, venv, __pycache__, .git
- It contains a test indicator: test, tests, __tests__, spec, e2e, mocks
- It matches a test file pattern: .test., .spec., _test., test_, .mock.
Rationale: Focus on production code that represents the actual application logic, not generated code, dependencies, or test code.
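A minimal Python sketch of this decision rule, assuming should_exclude_path handles the directory exclusions and adding the test checks listed above (the skip_path name, and whether the shared utility already covers test files, are assumptions):

import re
from file_filters import should_exclude_path  # shared utility covering EXCLUDE_DIRS

TEST_DIRS = {"test", "tests", "__tests__", "spec", "e2e", "mocks", "fixtures"}
TEST_FILE_RE = re.compile(r"\.test\.|\.spec\.|_test\.|(^|/)test_|\.mock\.")

def skip_path(path: str) -> bool:
    """True if the path should be skipped per the decision rule above."""
    if should_exclude_path(path):                     # node_modules, dist, build, ...
        return True
    parts = path.replace("\\", "/").lower().split("/")
    if any(part in TEST_DIRS for part in parts):      # test directories/indicators
        return True
    return bool(TEST_FILE_RE.search(path.lower()))    # *.test.js, *.spec.ts, *_test.py, ...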
Use Glob and Grep to find entry points:
Common patterns:
- Python: def main(), @app.route(), @task, if __name__ == "__main__"
- JavaScript/TypeScript: app.get(), async function handler(), exports.handler
- Go: func main(), HTTP handler registrations
- Java: @RequestMapping, @Scheduled, public static void main
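Purely as an illustration, the discovery step could be expressed with regexes built from the examples above (the globs, the pattern set, and the find_entry_points name are assumptions, not a fixed specification):

import re
from pathlib import Path
from file_filters import should_exclude_path  # skip non-production files as above

# Entry-point patterns per language, drawn from the examples above (not exhaustive)
ENTRY_POINT_PATTERNS = {
    "*.py":   re.compile(r"def main\(|@app\.route\(|@task\b|if __name__ == [\"']__main__[\"']"),
    "*.js":   re.compile(r"app\.get\(|async function handler\(|exports\.handler"),
    "*.go":   re.compile(r"func main\("),
    "*.java": re.compile(r"@RequestMapping|@Scheduled|public static void main"),
}

def find_entry_points(root: str):
    """Yield (file, line_number, text) for lines that look like entry points."""
    for glob, pattern in ENTRY_POINT_PATTERNS.items():
        for path in Path(root).rglob(glob):
            if should_exclude_path(str(path)):
                continue
            for lineno, line in enumerate(path.read_text(errors="ignore").splitlines(), 1):
                if pattern.search(line):
                    yield str(path), lineno, line.strip()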
For each workflow (focus on the 5-10 most important):
Trace how data moves through the workflow:
Look for common architectural patterns:
IMPORTANT FOR SCALABILITY: To handle large projects without running out of context, save the JSON file incrementally as you extract workflows.
The output file has this overall structure:
{
  "metadata": {
    "project_path": "/path/to/project",
    "extraction_date": "2026-01-05T10:00:00Z",
    "total_workflows_found": 8
  },
  "workflows": [],
  "summary": {}
}
Each entry appended to the workflows array follows this structure:
{
  "name": "workflow_name",
  "type": "request_handler",
  "purpose": "What this workflow accomplishes",
  "entry_point": {
    "function": "handle_request",
    "file": "path/to/file.py",
    "line": 123
  },
  "steps": [
    {
      "order": 1,
      "action": "Validate input parameters",
      "functions": ["validate_params", "check_auth"],
      "data_transformation": "Raw request → Validated params",
      "file_references": ["path/to/file.py:45", "path/to/auth.py:78"]
    }
  ],
  "data_flow": {
    "input": "HTTP request with user_id parameter",
    "transformations": [
      "Parse request body",
      "Validate against schema",
      "Fetch user from database"
    ],
    "output": "JSON response with user details"
  },
  "control_flow": {
    "conditions": ["If user not found, return 404"],
    "loops": ["For each order, calculate totals"],
    "error_handling": ["ValidationError → 400 response"]
  },
  "patterns": [
    {
      "pattern": "Pipeline",
      "description": "Multi-stage data transformation",
      "stages": ["Validate", "Transform", "Persist", "Respond"]
    }
  ]
}
After extracting each workflow or batch of workflows, load the existing JSON, update it, and save:
# Example pattern (adapt to your needs)
import json

# Load existing data
with open(json_path, 'r') as f:
    data = json.load(f)

# Add new workflow
data['workflows'].append(new_workflow)

# Update metadata count
data['metadata']['total_workflows_found'] = len(data['workflows'])

# Save immediately
with open(json_path, 'w') as f:
    json.dump(data, f, indent=2)
Save checkpoints after each workflow (or small batch) so partial progress is preserved on large projects.
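A minimal checkpoint helper, assuming the output path and structure shown above (the save_workflow_checkpoint name and the metadata defaults are illustrative):

import json
from pathlib import Path

def save_workflow_checkpoint(json_path: str, workflow: dict) -> None:
    """Append one workflow to procedural_knowledge.json, creating the file if needed."""
    path = Path(json_path)
    if path.exists():
        data = json.loads(path.read_text())
    else:
        # First checkpoint: start from the top-level structure shown above
        path.parent.mkdir(parents=True, exist_ok=True)
        data = {"metadata": {"total_workflows_found": 0}, "workflows": [], "summary": {}}
    data["workflows"].append(workflow)
    data["metadata"]["total_workflows_found"] = len(data["workflows"])
    path.write_text(json.dumps(data, indent=2))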
When this agent runs with a target project path, use incremental saving to handle large projects:
Initialize output structure:
mkdir -p <target-project>/.fellow-data/semantic/
Write the initial JSON structure to <target-project>/.fellow-data/semantic/procedural_knowledge.json
Discover entry points using Glob and Grep:
Extract and save workflows incrementally:
Append each workflow to the workflows array, update the metadata count, and save.
Generate and save summary:
Write the summary object and save.
Report completion:
grep -r "function_name(" .Remember: Focus on understanding HOW code executes, not evaluating quality. This should work for ANY project in ANY language.
Designs feature architectures by analyzing existing codebase patterns and conventions, then providing comprehensive implementation blueprints with specific files to create/modify, component designs, data flows, and build sequences