From autonomous-dev
Provides Python observability guidance: structured JSON logging, pdb/ipdb debugging, cProfile/line_profiler profiling, stack traces, and performance metrics.
```shell
npx claudepluginhub akaszubski/autonomous-dev --plugin autonomous-dev
```

This skill is limited to using the following tools:
Comprehensive guide to logging, debugging, profiling, and performance monitoring in Python applications.
Structured logging with JSON format for machine-readable logs and rich context.
Why Structured Logging?
Key Features:
Example:
```python
import logging
import json

logger = logging.getLogger(__name__)
logger.info("User action", extra={
    "user_id": 123,
    "action": "login",
    "ip": "192.168.1.1",
})
```
See: docs/structured-logging.md for Python logging setup and patterns
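The `extra` fields above only reach the output if the formatter emits them; the linked doc covers the full setup. As a minimal sketch (the class name `JsonFormatter` and the list of context keys are assumptions for illustration, not part of the skill), a formatter that renders each record as one JSON object could look like:

```python
import json
import logging


class JsonFormatter(logging.Formatter):
    """Render each log record as a single JSON object."""

    # Assumed context keys, matching the example above; adjust to
    # whatever fields your application passes via `extra`.
    CONTEXT_KEYS = ("user_id", "action", "ip")

    def format(self, record):
        payload = {
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
        }
        # `extra` kwargs become attributes on the LogRecord.
        for key in self.CONTEXT_KEYS:
            if hasattr(record, key):
                payload[key] = getattr(record, key)
        return json.dumps(payload)


handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger = logging.getLogger("app")
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("User action", extra={"user_id": 123, "action": "login"})
```

Attaching the formatter to a handler is all that is needed; every record routed through that handler comes out as machine-readable JSON.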
Interactive debugging with pdb/ipdb and effective debugging strategies.
Tools:
pdb Commands:
- `n` (next) - Execute current line
- `s` (step) - Step into function
- `c` (continue) - Continue execution
- `p variable` - Print variable value
- `l` - List source code
- `q` - Quit debugger

Example:
```python
import pdb; pdb.set_trace()  # Debugger starts here
```
See: docs/debugging.md for interactive debugging patterns
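A debugger is not always available (e.g. in a background worker), and the skill also covers capturing stack traces directly. A small sketch using the stdlib `traceback` module (`risky` and `capture_failure` are hypothetical names for illustration):

```python
import traceback


def risky():
    return 1 / 0


def capture_failure():
    """Run risky() and return its formatted traceback instead of raising."""
    try:
        risky()
    except ZeroDivisionError:
        # format_exc() returns the same text the interpreter would print.
        return traceback.format_exc()
    return None


trace = capture_failure()
print(trace.splitlines()[-1])  # → ZeroDivisionError: division by zero
```

The captured string can be attached to a log record, which pairs well with the structured logging patterns above.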
CPU and memory profiling to identify performance bottlenecks.
Tools:
cProfile Example:
```shell
python -m cProfile -s cumulative script.py
```
Profile Decorator:
```python
import cProfile
import pstats
from functools import wraps

def profile(func):
    @wraps(func)  # preserve the wrapped function's name and docstring
    def wrapper(*args, **kwargs):
        profiler = cProfile.Profile()
        profiler.enable()
        result = func(*args, **kwargs)
        profiler.disable()
        stats = pstats.Stats(profiler)
        stats.sort_stats('cumulative')
        stats.print_stats(10)  # Top 10 functions
        return result
    return wrapper

@profile
def slow_function():
    # Your code here
    pass
```
See: docs/profiling.md for comprehensive profiling techniques
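For memory analysis the skill points at `memory_profiler`, but a quick check needs nothing outside the standard library. A minimal sketch using stdlib `tracemalloc` (the `allocate` function is a hypothetical workload for illustration):

```python
import tracemalloc


def allocate():
    # A workload large enough to register in the trace.
    return [list(range(1000)) for _ in range(100)]


tracemalloc.start()
data = allocate()
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()
print(f"current={current} bytes, peak={peak} bytes")
```

`tracemalloc` also supports snapshots (`tracemalloc.take_snapshot()`) grouped by file and line, which is useful for locating where allocations originate.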
Performance monitoring, timing decorators, and simple metrics.
Timing Patterns:
Simple Metrics:
Example:
```python
import time
from functools import wraps

def timer(func):
    @wraps(func)
    def wrapper(*args, **kwargs):
        start = time.time()
        result = func(*args, **kwargs)
        duration = time.time() - start
        print(f"{func.__name__} took {duration:.2f}s")
        return result
    return wrapper

@timer
def process_data():
    # Your code here
    pass
```
See: docs/monitoring-metrics.md for stack traces, timers, and metrics
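The comparison table below also lists a context timer for timing arbitrary code blocks rather than whole functions. A minimal sketch (the name `timed` is an assumption; `time.perf_counter()` is used because it is monotonic and intended for interval timing):

```python
import time
from contextlib import contextmanager


@contextmanager
def timed(label):
    """Time the wrapped block and report its duration."""
    start = time.perf_counter()
    try:
        yield
    finally:
        duration = time.perf_counter() - start
        print(f"{label} took {duration:.4f}s")


with timed("sleep"):
    time.sleep(0.01)
```

Because the report happens in `finally`, the duration is printed even if the block raises.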
Debugging strategies and logging anti-patterns to avoid.
Debugging Best Practices:
Logging Anti-Patterns to Avoid:
See: docs/best-practices-antipatterns.md for detailed strategies
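One logging anti-pattern worth a concrete example: eager string formatting in log calls. An f-string is built before the logger can decide to drop the record, so disabled DEBUG messages still pay the formatting cost; lazy `%`-style arguments are only formatted if the record is actually emitted. A sketch (the `Expensive` class is a hypothetical stand-in for an object that is costly to stringify):

```python
import logging


class Expensive:
    """Stand-in for an object whose string conversion is costly."""

    def __str__(self):
        raise RuntimeError("stringified even though DEBUG is disabled")


logger = logging.getLogger("antipattern_demo")
logger.setLevel(logging.INFO)

# OK: lazy %-style formatting. DEBUG is disabled, so the record is
# dropped before Expensive.__str__ is ever called.
logger.debug("processing %s", Expensive())

# Anti-pattern: the f-string is evaluated eagerly, before logging can
# filter the record, so __str__ runs (and here, raises).
try:
    logger.debug(f"processing {Expensive()}")
except RuntimeError:
    print("f-string paid the formatting cost anyway")
```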
| Tool | Use Case | Details |
|---|---|---|
| Structured Logging | Production logs | docs/structured-logging.md |
| pdb/ipdb | Interactive debugging | docs/debugging.md |
| cProfile | CPU profiling | docs/profiling.md |
| line_profiler | Line-by-line profiling | docs/profiling.md |
| memory_profiler | Memory analysis | docs/profiling.md |
| Timer decorator | Function timing | docs/monitoring-metrics.md |
| Context timer | Code block timing | docs/monitoring-metrics.md |
```python
import logging

# Setup
logging.basicConfig(level=logging.INFO, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
logger = logging.getLogger(__name__)

# Usage
logger.debug("Debug message")        # Detailed diagnostic
logger.info("Info message")          # General information
logger.warning("Warning message")    # Warning (recoverable)
logger.error("Error message")        # Error (handled)
logger.critical("Critical message")  # Critical (unrecoverable)

# With context
logger.info("User action", extra={"user_id": 123, "action": "login"})
```
```python
# pdb
import pdb; pdb.set_trace()

# ipdb (enhanced)
import ipdb; ipdb.set_trace()

# Post-mortem (debug after crash)
import pdb, sys

try:
    # Your code
    pass
except Exception:
    pdb.post_mortem(sys.exc_info()[2])
```
```shell
# CPU profiling
python -m cProfile -s cumulative script.py

# Line profiling
kernprof -l -v script.py

# Memory profiling
python -m memory_profiler script.py

# Sampling profiler (no code changes)
py-spy top --pid 12345
```
This skill uses progressive disclosure to prevent context bloat:
- docs/*.md files with implementation details (loaded on-demand)

Available Documentation:

- docs/structured-logging.md - Logging setup, levels, JSON format, best practices
- docs/debugging.md - Print debugging, pdb/ipdb, post-mortem debugging
- docs/profiling.md - cProfile, line_profiler, memory_profiler, py-spy
- docs/monitoring-metrics.md - Stack traces, timing patterns, simple metrics
- docs/best-practices-antipatterns.md - Debugging strategies and logging anti-patterns

Related Skills:
Related Tools: