Neural network interpretability tools. Includes TransformerLens (circuit analysis), SAELens (sparse autoencoders), NNSight (activation patching), and pyvene (intervention library). Use when analyzing model internals, finding circuits, or understanding how models compute.
/plugin marketplace add zechenzhangAGI/AI-research-SKILLs
/plugin install mechanistic-interpretability@ai-research-skills

Comprehensive feature development workflow with specialized agents for codebase exploration, architecture design, and quality review
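To illustrate the kind of analysis these tools support, here is a minimal, library-free sketch of activation patching (the technique NNSight is listed for): run a model on a "clean" and a "corrupted" input, then re-run the corrupted input with one hidden activation restored to its clean value to see how much that activation matters. The two-layer toy model and all numbers are illustrative assumptions, not any real library's API.

```python
# Toy sketch of activation patching on a two-layer toy "model".
# Hypothetical functions for illustration only.

def layer1(x, w=2.0):
    # Hidden activations: elementwise scaling of the input.
    return [w * v for v in x]

def layer2(h, b=1.0):
    # Output: sum of hidden activations plus a bias.
    return sum(h) + b

def run(x, patch=None):
    h = layer1(x)
    if patch is not None:
        idx, value = patch   # replace one activation with its clean value
        h[idx] = value
    return layer2(h)

clean, corrupted = [1.0, 1.0], [1.0, 0.0]
clean_h = layer1(clean)

baseline = run(corrupted)                        # corrupted output
patched = run(corrupted, patch=(1, clean_h[1]))  # restore activation 1
print(baseline, patched)  # → 3.0 5.0
```

Here, restoring the single patched activation fully recovers the clean output (5.0), showing that this activation carries the causal difference between the two inputs; real interventions with NNSight or pyvene apply the same logic to transformer activations via hooks.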
Interactive learning mode that prompts you for meaningful code contributions at decision points (mimics the unshipped Learning output style)
Automated code review for pull requests using multiple specialized agents with confidence-based scoring
Streamline your git workflow with simple commands for committing, pushing, and creating pull requests