By tondevrel
Agent skills for scientific computing, research workflows, and data analysis
npx claudepluginhub tondevrel/scientific-agent-skills

Atomic Simulation Environment - a set of tools for setting up, manipulating, running, visualizing, and analyzing atomistic simulations. Acts as a universal interface between Python and numerous quantum chemical and molecular dynamics codes. Use for building atomic structures, geometry optimization, molecular dynamics simulations, transition state searches (NEB), file format conversion (CIF, XYZ, POSCAR, PDB), electronic property calculations (DOS, band structures), and automating simulation workflows with DFT/MD codes like VASP, GPAW, Quantum ESPRESSO, LAMMPS.
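A minimal sketch of the typical ASE workflow using the bundled EMT toy calculator; production runs would attach a real DFT/MD calculator (VASP, GPAW, etc.) instead:

```python
# Relax an N2 molecule with ASE's built-in EMT calculator (illustrative only).
from ase.build import molecule
from ase.calculators.emt import EMT
from ase.optimize import BFGS

atoms = molecule("N2")           # build the structure from ASE's molecule database
atoms.calc = EMT()               # attach a cheap effective-medium calculator
opt = BFGS(atoms)                # quasi-Newton geometry optimizer
opt.run(fmax=0.05)               # relax until max force < 0.05 eV/Angstrom
print(atoms.get_distance(0, 1))  # optimized N-N bond length in Angstrom
```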
The core library for Astronomy and Astrophysics in Python. Provides data structures for coordinates, time, units, FITS files, and cosmological models. Essential for observational data reduction and theoretical astrophysics. Use when working with astronomical coordinates (RA/Dec), physical units, FITS files, time scales, WCS, cosmology, or astronomical tables.
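A short sketch of coordinate and unit handling (the RA/Dec values are illustrative):

```python
from astropy import units as u
from astropy.coordinates import SkyCoord

# Two sources given in ICRS coordinates with explicit units
m31 = SkyCoord(ra=10.684 * u.deg, dec=41.269 * u.deg, frame="icrs")
m33 = SkyCoord(ra=23.462 * u.deg, dec=30.660 * u.deg, frame="icrs")

sep = m31.separation(m33)   # on-sky angular separation
print(sep.to(u.arcmin))     # convert between angular units
print(m31.galactic)         # transform to Galactic coordinates
```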
Comprehensive guide for Biopython - the premier Python library for computational biology and bioinformatics. Use for DNA/RNA/protein sequence analysis, file I/O (FASTA, FASTQ, GenBank, PDB), sequence alignment, BLAST searches, phylogenetic analysis, structure analysis, and NCBI database access.
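A minimal sketch of sequence handling; `sequences.fasta` is a placeholder path:

```python
from Bio.Seq import Seq
from Bio import SeqIO

dna = Seq("ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG")
print(dna.reverse_complement())   # reverse complement of the strand
print(dna.translate())            # translate to protein (stop codons shown as *)

# Iterate over records in a FASTA file (path is illustrative)
for record in SeqIO.parse("sequences.fasta", "fasta"):
    print(record.id, len(record.seq))
```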
A Python package useful for chemistry (mainly physical/analytical/inorganic chemistry). Features include balancing chemical reactions, chemical kinetics (ODE integration), chemical equilibria, ionic strength calculations, and unit handling. Use when working with chemical equations, reaction balancing, kinetic modeling, equilibrium calculations, speciation, pH calculations, ionic strength, activity coefficients, or chemical formula parsing.
Constraint-Based Reconstruction and Analysis for Python (COBRApy). Used for building and analyzing genome-scale metabolic network models of microorganisms, including flux balance analysis (FBA).
Advanced sub-skill for Dask focused on distributed system performance, memory management, and task graph optimization. Covers cluster tuning, efficient serialization, data skew mitigation, and dashboard-driven debugging.
A flexible library for parallel computing in Python. It scales Python libraries like NumPy, pandas, and scikit-learn to multi-core systems or distributed clusters. Features lazy evaluation and task scheduling for data that exceeds RAM capacity. Use for out-of-core computing, parallel processing, distributed computing, large-scale data analysis, dask.array, dask.dataframe, dask.delayed, dask.bag, task scheduling, lazy evaluation, and scaling beyond memory limits.
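A small sketch of lazy, chunked computation on an array larger than is comfortable to hold in RAM (sizes are illustrative):

```python
import dask.array as da

# A 20000 x 20000 array built lazily in 1000 x 1000 chunks (~3.2 GB if materialized)
x = da.random.random((20000, 20000), chunks=(1000, 1000))
result = (x + x.T).mean(axis=0)   # builds a task graph; nothing is computed yet
print(result.compute()[:5])       # trigger execution across all cores
```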
Causal inference framework for answering "does X cause Y?" beyond correlation. DoWhy (Microsoft Research) provides the identify-estimate-refute loop: define a causal graph (DAG), identify the causal effect using backdoor/frontdoor/instrumental variable criteria, estimate treatment effects with multiple estimators, and validate results with automated refutation tests. Use when: distinguishing causation from correlation, estimating treatment effects (ATE, ATT, CATE), designing and analyzing A/B tests with confounders, using instrumental variables, performing counterfactual reasoning ("what would have happened if..."), validating causal claims with sensitivity analysis, working with observational data where randomization is impossible, or any analysis where the question is "what is the CAUSAL effect of X on Y" rather than just "how do X and Y relate?"
An analytical in-process SQL database management system. Designed for fast analytical queries (OLAP). Highly interoperable with Python's data ecosystem (Pandas, NumPy, Arrow, Polars). Supports querying files (CSV, Parquet, JSON) directly without an ingestion step. Use for complex SQL queries on Pandas/Polars data, querying large Parquet/CSV files directly, joining data from different sources, analytical pipelines, local datasets too big for Excel, intermediate data storage and feature engineering for ML.
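A sketch of querying an in-memory DataFrame with SQL; `duckdb.sql` is the entry point in recent releases (older versions expose `duckdb.query` for the same purpose), and the Parquet path in the comment is a placeholder:

```python
import duckdb
import pandas as pd

df = pd.DataFrame({"city": ["Oslo", "Oslo", "Lima"], "temp": [2.0, 4.0, 19.5]})

# SQL directly over the in-memory DataFrame via DuckDB's replacement scan
out = duckdb.sql("SELECT city, AVG(temp) AS mean_temp FROM df GROUP BY city").df()
print(out)

# The same pattern queries files in place, e.g.:
# duckdb.sql("SELECT COUNT(*) FROM 'measurements.parquet'")
```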
Example skill template. Replace this description with keywords and triggers for your actual skill. This description determines when the skill auto-loads based on conversation context.
Dual skill for deploying scientific models. FastAPI provides a high-performance, asynchronous web framework for building APIs with automatic documentation. Streamlit enables rapid creation of interactive data applications and dashboards directly from Python scripts. Load when working with web APIs, model serving, REST endpoints, interactive dashboards, data visualization UIs, scientific app deployment, async web frameworks, Pydantic validation, uvicorn, or building production-ready scientific tools.
Open source project to make working with geospatial data in python easier. Extends the datatypes used by pandas to allow spatial operations on geometric types. Built on top of Shapely, Fiona, and Pyproj. Use for reading and writing spatial formats (Shapefile, GeoJSON, GeoPackage, KML), performing spatial joins, coordinate system transformations (reprojecting), geometric analysis (buffers, centroids, convex hulls), thematic mapping (Choropleth maps), calculating spatial relationships (contains, overlaps, touches, within), working with OpenStreetMap data or satellite-derived vector data.
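A minimal sketch of the reproject-then-measure pattern (coordinates and the UTM zone are illustrative):

```python
import geopandas as gpd
from shapely.geometry import Point

# Two stations as WGS84 (lon/lat) points
gdf = gpd.GeoDataFrame(
    {"name": ["A", "B"]},
    geometry=[Point(10.75, 59.91), Point(10.40, 63.43)],
    crs="EPSG:4326",
)

# Reproject to a metric CRS before measuring distances or buffering
metric = gdf.to_crs(epsg=32632)   # UTM zone 32N, units in metres
buffers = metric.buffer(1000)     # 1 km buffers around each station
print(metric.geometry.iloc[0].distance(metric.geometry.iloc[1]))  # distance in metres
```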
Programmatic mesh generation and mesh I/O for computational physics and FEM simulation. gmsh generates 2D and 3D meshes from geometric primitives and CAD-style boolean operations (union, difference, intersection) via the OpenCASCADE kernel, with fine-grained control over element size, adaptive refinement around features, and physical group tagging for boundary conditions. meshio reads and writes meshes across 40+ formats (VTK, VTU, Gmsh .msh, XDMF (HDF5-backed), ExodusII, Abaqus, STL) and performs format conversion. Use when: generating meshes for FEM/FEA simulation (prerequisite for FEniCS, deal.II, Firedrake), creating 2D or 3D computational domains from geometric descriptions, performing CSG (Constructive Solid Geometry) to build complex shapes from primitives, controlling mesh density and adaptive refinement near boundaries or singularities, tagging boundaries and subdomains for boundary conditions in solvers, converting meshes between simulation formats, or inspecting and manipulating mesh data programmatically. This is the geometry layer that sits between "I have a shape" and "I can simulate physics on it."
A Pythonic interface to the HDF5 binary data format. It allows you to store huge amounts of numerical data and easily manipulate that data from NumPy. Features a hierarchical structure similar to a file system. Use for storing datasets larger than RAM, organizing complex scientific data hierarchically, storing numerical arrays with high-speed random access, keeping metadata attached to data, sharing data between languages, and reading/writing large datasets in chunks.
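A short sketch of hierarchical storage with attached metadata; the file name is a placeholder:

```python
import h5py
import numpy as np

data = np.random.rand(1000, 64)

with h5py.File("experiment.h5", "w") as f:
    grp = f.create_group("run_001")                    # file-system-like groups
    dset = grp.create_dataset("signal", data=data,
                              compression="gzip", chunks=True)
    dset.attrs["sample_rate_hz"] = 250.0               # metadata travels with the data

with h5py.File("experiment.h5", "r") as f:
    chunk = f["run_001/signal"][:10]                   # read only a slice from disk
    print(chunk.shape, f["run_001/signal"].attrs["sample_rate_hz"])
```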
Advanced sub-skill for JAX focused on solving Partial Differential Equations (PDEs) and Differentiable Physics. Covers Finite Difference Methods (FDM), Neural Operators, and Physics-Informed Neural Networks (PINNs).
Composable transformations of Python+NumPy programs. Differentiate, vectorize, JIT-compile to GPU/TPU. Built for high-performance machine learning research and complex scientific simulations. Use for automatic differentiation, GPU/TPU acceleration, higher-order derivatives, physics-informed machine learning, differentiable simulations, and automatic vectorization.
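A minimal sketch of composing `grad`, `jit`, and `vmap` on a plain NumPy-style function:

```python
import jax
import jax.numpy as jnp

def loss(w, x):
    return jnp.sum((x @ w) ** 2)

grad_loss = jax.jit(jax.grad(loss))          # differentiate, then JIT-compile
batched = jax.vmap(loss, in_axes=(None, 0))  # vectorize over rows of x without loops

w = jnp.ones(3)
x = jnp.arange(12.0).reshape(4, 3)
print(grad_loss(w, x))   # gradient with respect to w
print(batched(w, x))     # per-row loss values
```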
Complete survival analysis library in Python. Handles right-censored data, Kaplan-Meier curves, and Cox regression. Standard for clinical trial analysis and epidemiology.
Professional sub-skill for Matplotlib focused on high-performance animations, complex multi-figure layouts (GridSpec), interactive widgets, and publication-ready typography (LaTeX/PGF).
The foundational library for creating static, animated, and interactive visualizations in Python. Highly customizable and the industry standard for publication-quality figures. Use for 2D plotting, scientific data visualization, heatmaps, contours, vector fields, multi-panel figures, LaTeX-formatted plots, custom visualization tools, and plotting from NumPy arrays or Pandas DataFrames.
Comprehensive guide for MDAnalysis - the Python library for analyzing molecular dynamics trajectories. Use for trajectory loading, RMSD/RMSF calculations, distance/angle/dihedral analysis, atom selections, hydrogen bonds, solvent accessible surface area, protein structure analysis, membrane analysis, and integration with Biopython. Essential for MD simulation analysis.
Open-source Python package for exploring, visualizing, and analyzing human neurophysiological data including EEG, MEG, sEEG, and ECoG.
Python package for the creation, manipulation, and study of the structure, dynamics, and functions of complex networks. Supports various graph types (Directed, Undirected, Multigraphs) and features a vast library of standard graph algorithms. Use for network analysis, graph theory, social network analysis, biological networks, infrastructure networks, path finding, centrality measures, community detection, graph algorithms, shortest paths, PageRank, connectivity analysis, and routing optimization.
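A small sketch of building a weighted graph and running standard algorithms on it:

```python
import networkx as nx

G = nx.Graph()
G.add_weighted_edges_from([
    ("A", "B", 1.0), ("B", "C", 2.0), ("A", "C", 5.0), ("C", "D", 1.0),
])

print(nx.shortest_path(G, "A", "D", weight="weight"))  # ['A', 'B', 'C', 'D']
print(nx.pagerank(G))                                  # centrality score per node
print(list(nx.connected_components(G)))                # connectivity analysis
```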
A Just-In-Time (JIT) compiler for Python that translates a subset of Python and NumPy code into fast machine code. Developed by Anaconda, Inc. Highly effective for accelerating loops, custom mathematical functions, and complex numerical algorithms. Use for @njit, @vectorize, prange, cuda.jit, numba.typed, JIT compilation, parallel loops, GPU acceleration with CUDA, Monte Carlo simulations, numerical algorithms, and high-performance Python computing.
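A sketch of the classic JIT plus parallel-loop pattern, here a Monte Carlo estimate of pi; the first call pays the compilation cost:

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def monte_carlo_pi(n):
    hits = 0
    for i in prange(n):                # parallel loop across cores
        x, y = np.random.random(), np.random.random()
        if x * x + y * y <= 1.0:
            hits += 1
    return 4.0 * hits / n

print(monte_carlo_pi(10_000_000))      # later calls run at compiled speed
```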
Advanced sub-skill for NumPy focused on internal memory management, stride manipulation, structured arrays, and interfacing with C/Cython. Covers zero-copy operations and SIMD vectorization principles.
Comprehensive guide for NumPy - the fundamental package for scientific computing in Python. Use for array operations, linear algebra, random number generation, Fourier transforms, mathematical functions, and high-performance numerical computing. Foundation for SciPy, pandas, scikit-learn, and all scientific Python.
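A brief sketch of core operations (linear algebra, random number generation, and an FFT):

```python
import numpy as np

rng = np.random.default_rng(seed=0)
A = rng.normal(size=(3, 3))
b = rng.normal(size=3)

x = np.linalg.solve(A, b)          # solve the linear system A x = b
print(np.allclose(A @ x, b))       # verify the solution

signal = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 256))
spectrum = np.fft.rfft(signal)     # real-input Fourier transform
print(np.argmax(np.abs(spectrum))) # dominant frequency bin (~5)
```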
A chemical toolbox designed to speak the many languages of chemical data. Supports over 110 formats and provides tools for conversion, 3D structure generation, molecular searching (SMARTS), and force field calculations. Use for chemical file format conversion (SDF, PDB, SMILES, CIF, Gaussian), 3D coordinate generation from 2D structures, substructure searching with SMARTS patterns, molecular docking preparation, force field minimizations (UFF, GAFF, MMFF94), molecular fingerprints and Tanimoto coefficients, and batch processing of chemical databases.
Open Source Computer Vision Library (OpenCV) for real-time image processing, video analysis, object detection, face recognition, and camera calibration. Use when working with images, videos, cameras, edge detection, contours, feature detection, image transformations, object tracking, optical flow, or any computer vision task.
Google Optimization Tools. An open-source software suite for optimization, specialized in vehicle routing, flows, integer and linear programming, and constraint programming. Features the world-class CP-SAT solver. Use for vehicle routing problems (VRP), scheduling, bin packing, knapsack problems, linear programming (LP), integer programming (MIP), network flows, constraint programming, combinatorial optimization, resource allocation, shift scheduling, job-shop scheduling, and discrete optimization problems.
Advanced sub-skill for pandas focused on memory optimization, execution speed, and handling large-scale datasets (10M+ rows). Covers low-level dtypes, efficient indexing, and vectorization of complex logic.
Cross-platform Python library for differentiable quantum computing. Integrated with machine learning libraries like PyTorch, TensorFlow, and JAX. Designed for quantum machine learning (QML), variational algorithms, and hardware-agnostic quantum programming. Use for Quantum Neural Networks (QNNs), Variational Quantum Algorithms (VQE, QAOA), hybrid classical-quantum machine learning, quantum chemistry calculations, benchmarking quantum algorithms, optimizing quantum control pulses, and investigating QML phenomena like Barren Plateaus.
An Astropy-coordinated package for detecting and performing photometry of astronomical sources. Provides tools for background estimation, source detection (DAOStarFinder, IRAFStarFinder), aperture photometry, and PSF (Point Spread Function) fitting. Use when working with astronomical image analysis, star/galaxy detection, measuring brightness (photometry), background subtraction, PSF fitting, aperture photometry, centroiding, or isophotal analysis.
A high-level interactive graphing library for Python. Ideal for web-based visualizations, 3D plots, and complex interactive dashboards. Built on plotly.js, it allows users to zoom, pan, and hover over data points in a browser-based environment. Use for interactive charts, web applications, Jupyter notebooks, 3D data visualization, geographic maps, financial charts, animations, time-series analysis, and building production-ready dashboards with Dash.
Blazingly fast DataFrame library written in Rust. Features a multi-threaded query engine, lazy evaluation, and efficient memory usage via Apache Arrow. Designed for high-performance data processing on a single machine. Use for large datasets (1GB-100GB+), fast data transformations, Parquet/CSV processing, complex query pipelines, memory-efficient operations, and when speed is critical (10-100x faster than pandas).
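A minimal sketch of a lazy query pipeline; note that the grouping method is `group_by` in recent Polars releases (`groupby` in older ones):

```python
import polars as pl

df = pl.DataFrame({
    "sensor": ["a", "a", "b", "b"],
    "value": [1.0, 3.0, 10.0, 20.0],
})

# Lazy pipeline: the query is optimized before anything executes
out = (
    df.lazy()
      .filter(pl.col("value") > 0)
      .group_by("sensor")
      .agg(pl.col("value").mean().alias("mean_value"))
      .collect()
)
print(out)
# For files, pl.scan_csv / pl.scan_parquet stream data without loading it all.
```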
Protein Dynamics, Evolution, and Structure analysis. Specialized in Normal Mode Analysis (NMA) using Anisotropic (ANM) and Gaussian Network Models (GNM). Features tools for structural ensemble analysis, PCA, and co-evolutionary analysis (Evol). Use for protein flexibility prediction, collective motions, structural ensemble comparison, hinge region identification, binding site analysis, MD trajectory filtering, and evolutionary analysis.
Python package for working with DICOM files. It allows you to read, modify, and write DICOM data in a Pythonic way. Essential for medical imaging processing, clinical data extraction, and AI in radiology.
Probabilistic programming for Bayesian statistical modeling and inference. PyMC provides declarative model specification with MCMC (NUTS) and variational inference samplers; NumPyro offers JAX-accelerated equivalent for large-scale problems. Use when: quantifying uncertainty in parameter estimates, building hierarchical or mixed-effects models, Bayesian A/B testing or experimentation, posterior predictive checks, model comparison with WAIC or LOO-CV, scientific measurement with error propagation, any analysis requiring credible intervals, probability statements like P(effect > 0), or situations where understanding the full posterior distribution matters more than a single p-value. Also use when priors encode domain knowledge, sample sizes are small, or data is naturally nested.
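A minimal sketch of the PyMC v4+ API (PyMC3 differs slightly); the data and priors are illustrative:

```python
import numpy as np
import pymc as pm

rng = np.random.default_rng(1)
y = rng.normal(loc=2.0, scale=1.0, size=50)      # observed data with unknown mean

with pm.Model():
    mu = pm.Normal("mu", mu=0.0, sigma=10.0)     # weakly informative prior
    sigma = pm.HalfNormal("sigma", sigma=5.0)
    pm.Normal("obs", mu=mu, sigma=sigma, observed=y)
    idata = pm.sample(1000, tune=1000, chains=2) # NUTS by default

print(idata.posterior["mu"].mean().item())       # posterior mean of mu
```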
Python Optimization Modeling Objects. A high-level framework for formulating, solving, and analyzing optimization models. Supports Linear Programming (LP), Mixed-Integer Linear Programming (MILP), and Non-Linear Programming (NLP). Part of the COIN-OR project. Use for mathematical optimization, linear programming, mixed-integer programming, non-linear programming, strategic planning, process engineering, energy systems, supply chain optimization, stochastic programming, and solver integration with IPOPT, SCIP, Gurobi, CPLEX, or GLPK.
Python interface to PROJ (cartographic projections and coordinate transformations library). Handles transformations between different Coordinate Reference Systems (CRS) and performs geodetic calculations (distance, area on ellipsoids). Use for coordinate transformations, CRS conversions, geodetic calculations, UTM projections, GPS coordinate conversions, ellipsoidal distance calculations, and spatial reference system operations.
Python module for reading, manipulating and writing genomic alignment formats (SAM/BAM/CRAM) and variant files (VCF/BCF). Wrapper for htslib.
Comprehensive guide for PySCF - Python-based Simulations of Chemistry Framework. Use for ab initio quantum chemistry calculations including Hartree-Fock, DFT, MP2, CCSD, geometry optimization, excited states, and molecular properties. Industry-standard library for electronic structure calculations.
Advanced sub-skill for PyTorch focused on model productionization and deployment. Covers TorchScript (JIT/Tracing), ONNX export, LibTorch (C++ API), and inference optimization (Quantization, Pruning).
Graph Neural Networks (GNN) for learning on graph-structured data. PyTorch Geometric (PyG) extends PyTorch with the MessagePassing framework — the core abstraction for all GNN layers — and provides standard convolutions (GCNConv, GATConv, SAGEConv, GINConv), graph pooling, batching of variable-size graphs, and datasets. Use when: performing node classification (e.g., predicting labels on a citation network), graph classification (e.g., predicting molecular properties), link prediction (e.g., recommending new connections), learning representations on any graph-structured data (social networks, molecules, knowledge graphs, protein structures), implementing custom GNN architectures via the MessagePassing base class, working with heterogeneous graphs (multiple node/edge types), or any task where data has explicit relational structure that CNNs/RNNs cannot capture. Complements networkx (classical graph algorithms) and rdkit (molecular graphs) — PyG adds the deep learning layer on top.
Advanced sub-skill for PyTorch focused on deep research and production engineering. Covers custom Autograd functions, module hooks, advanced initialization, Distributed Data Parallel (DDP), and performance profiling.
Leading deep learning framework. Provides Tensors and Dynamic Computational Graphs with strong GPU acceleration. Widely used for research, neural networks, and differentiable programming.
Advanced sub-skill for Qiskit focused on executing circuits on physical quantum processing units (QPUs). Covers IBM Quantum Runtime, error mitigation techniques (TREX, ZNE), hardware-aware transpilation, and low-level pulse control (OpenPulse).
Comprehensive guide for Qiskit - IBM's quantum computing framework. Use for quantum circuit design, quantum algorithms (VQE, QAOA, Grover, Shor), quantum simulation, noise modeling, quantum machine learning, and quantum chemistry calculations. Essential for quantum computing research and applications.
Quantum Toolbox in Python. Framework for simulating the dynamics of open quantum systems. Provides data structures for quantum objects (kets, bras, operators) and solvers for master equations, Monte Carlo trajectories, and time-dependent Hamiltonians. Use for quantum dynamics simulation, open quantum systems, master equations, quantum optics, cavity QED, Jaynes-Cummings model, Rabi oscillations, Wigner functions, quantum correlations, entanglement analysis, and quantum control.
Raster geospatial data processing — the Python interface to GDAL for satellite imagery, elevation models, and grid-based geographic analysis. Rasterio reads and writes georeferenced raster formats (GeoTIFF, NetCDF, JP2, PNG, JPEG2000), handles Coordinate Reference Systems (CRS) and reprojection, performs band math (NDVI, NDWI, EVI), clips/masks rasters with vector geometries, resamples grids, and supports memory-efficient windowed I/O for multi-gigabyte files. Use when: working with satellite imagery or aerial photos, processing Digital Elevation Models (DEM/DTM/DSM), computing spectral indices from multispectral data, clipping raster data to polygon boundaries, reprojecting between coordinate systems, performing spatial interpolation on gridded data, analyzing land cover or land use change over time, integrating raster data with vector data (geopandas/shapely), or any task involving georeferenced grid/pixel data as opposed to vector points/lines/polygons.
Open-source cheminformatics and machine learning toolkit for drug discovery, molecular manipulation, and chemical property calculation. RDKit handles SMILES, molecular fingerprints, substructure searching, 3D conformer generation, pharmacophore modeling, and QSAR. Use when working with chemical structures, drug-like properties, molecular similarity, virtual screening, or computational chemistry workflows.
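A short sketch of parsing SMILES, computing a descriptor, and comparing Morgan fingerprints (the bit-vector helper shown is the long-standing API; newer releases also offer a generator-based interface):

```python
from rdkit import Chem, DataStructs
from rdkit.Chem import Descriptors, AllChem

aspirin = Chem.MolFromSmiles("CC(=O)Oc1ccccc1C(=O)O")
caffeine = Chem.MolFromSmiles("Cn1cnc2c1c(=O)n(C)c(=O)n2C")

print(Descriptors.MolWt(aspirin))                                 # molecular weight
print(aspirin.HasSubstructMatch(Chem.MolFromSmarts("c1ccccc1")))  # aromatic ring?

# Morgan fingerprints + Tanimoto similarity (radius 2, roughly ECFP4)
fp1 = AllChem.GetMorganFingerprintAsBitVect(aspirin, 2, nBits=2048)
fp2 = AllChem.GetMorganFingerprintAsBitVect(caffeine, 2, nBits=2048)
print(DataStructs.TanimotoSimilarity(fp1, fp2))
```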
Scalable toolkit for analyzing single-cell gene expression data. Built on top of AnnData, focusing on clustering, trajectory inference, and visualization.
Library for bioinformatics and community ecology statistics. Provides data structures and algorithms for sequences, alignments, phylogenetics, and diversity analysis. Essential for microbiome research and ecological data science. Use for alpha/beta diversity metrics, ordination (PCoA), phylogenetic trees, sequence manipulation (DNA/RNA/Protein), distance matrices, PERMANOVA, and community ecology analysis.
A collection of algorithms for image processing in Python. Built on NumPy, SciPy, and Cython. It focuses on scientific image analysis, including segmentation, geometric transformations, color space manipulation, feature detection, measurement, and filtering.
The industry standard library for machine learning in Python. Provides simple and efficient tools for predictive data analysis, covering classification, regression, clustering, dimensionality reduction, model selection, and preprocessing.
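A minimal sketch of the pipeline plus cross-validation pattern on a bundled dataset:

```python
from sklearn.datasets import load_breast_cancer
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)

# Preprocessing and model bundled so scaling is refit inside each CV fold
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=5, scoring="accuracy")
print(scores.mean(), scores.std())
```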
Video processing library for scientists. Provides easy access to video files using FFmpeg, motion estimation algorithms, and video quality metrics. Built on NumPy and designed for high-performance research in computer vision and image sequence analysis. Use when working with video files, motion estimation, video quality assessment (VQA), FFmpeg, temporal image data, video codecs, YUV data, or scientific video recordings.
Comprehensive guide for SciPy - the fundamental library for scientific and technical computing in Python. Use for integration, optimization, interpolation, linear algebra, signal processing, statistics, ODEs, Fourier transforms, and advanced scientific algorithms. Built on NumPy and essential for research and engineering.
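A brief sketch of optimization and numerical integration:

```python
import numpy as np
from scipy import optimize, integrate

# Minimize the Rosenbrock function from a rough starting point
res = optimize.minimize(optimize.rosen, x0=np.array([1.3, 0.7, 0.8]))
print(res.x)                      # should approach [1, 1, 1]

# Numerically integrate a Gaussian over the real line
val, err = integrate.quad(lambda t: np.exp(-t**2), -np.inf, np.inf)
print(val, np.sqrt(np.pi))        # analytic answer is sqrt(pi)
```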
A Python data visualization library based on matplotlib. It provides a high-level interface for drawing attractive and informative statistical graphics. Great for exploring relationships between variables and visualizing distributions. Use for statistical data visualization, exploratory data analysis (EDA), relationship plots, distribution plots, categorical comparisons, regression visualization, heatmaps, cluster maps, and creating publication-quality statistical graphics from Pandas DataFrames.
Manipulation and analysis of planar geometric objects. Based on the widely deployed GEOS library. Provides data structures for points, curves, and surfaces, and standardized algorithms for geometric operations. Use for 2D geometry operations, spatial relationships, set-theoretic operations (intersection, union, difference), point-in-polygon queries, geometric calculations (area, distance, centroid), buffering, simplifying geometries, linear referencing, and cleaning invalid geometries. Essential for GIS operations, spatial analysis, and geometric computations.
A process-based discrete-event simulation framework. Use for modeling queuing systems, supply chains, manufacturing processes, network simulation, project management, and any system where events occur at specific points in time. Load when working with discrete event simulation, process modeling, resource allocation, virtual time, simpy.Environment, simpy.Resource, or event-driven simulation.
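A minimal sketch of a single-server queue running in virtual time:

```python
import simpy

def customer(env, name, counter):
    arrive = env.now
    with counter.request() as req:      # queue for the shared resource
        yield req
        print(f"{name} waited {env.now - arrive:.1f} min")
        yield env.timeout(3)            # service takes 3 minutes of virtual time

env = simpy.Environment()
counter = simpy.Resource(env, capacity=1)
for i in range(3):
    env.process(customer(env, f"customer-{i}", counter))
env.run()
```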
Professional sub-skill for scikit-learn focused on robust pipeline architecture, custom estimator development, advanced feature engineering, and rigorous model validation. Covers Target Encoding, Nested Cross-Validation, and Production Deployment.
Advanced sub-skill for scikit-learn focused on model interpretability, feature importance, and diagnostic tools. Covers global and local explanations using built-in inspection tools and SHAP/LIME integrations.
Time series machine learning layer (Tier 1): integration of **sktime** and **tsfresh** for building production-grade pipelines that transform raw time series into tabular feature representations suitable for classical machine-learning models. *sktime* provides a unified, sklearn-compatible interface for time-series data types, transformations, and pipelines, while *tsfresh* enables large-scale automated extraction of statistical, spectral, and autocorrelation features, with optional statistically grounded feature relevance selection (FRESH).
Natural Language Processing for text analysis, corpus linguistics, and production NLP pipelines. spaCy provides fast production-grade tokenization, POS tagging, NER, dependency parsing, and custom model training. NLTK provides classical corpus linguistics, linguistic analysis, VADER sentiment, collocation analysis, and access to standard linguistic corpora. Use when: processing and analyzing text data, extracting named entities (people, orgs, locations, dates), dependency parsing and syntactic analysis, building text classification pipelines, performing corpus-level linguistic analysis (frequency, collocations, readability), sentiment analysis, lemmatization and stemming, working with multilingual text, training custom NER or text classifiers, or any task requiring structured understanding of natural language beyond simple string operations.
Advanced statistical modeling and hypothesis testing. Complementary to SciPy's stats module, it provides classes and functions for the estimation of many different statistical models, as well as for conducting statistical tests and statistical data exploration. Use for linear regression, GLM, time series analysis, ANOVA, survival analysis, causal inference, and statistical hypothesis testing. Load when working with OLS, WLS, logistic regression, Poisson regression, ARIMA, SARIMAX, statistical diagnostics, p-values, confidence intervals, or R-style statistical analysis.
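A minimal sketch of an OLS fit with the usual diagnostics (the data are simulated):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
x = rng.normal(size=100)
y = 1.5 + 2.0 * x + rng.normal(scale=0.5, size=100)

X = sm.add_constant(x)            # add the intercept column explicitly
model = sm.OLS(y, X).fit()
print(model.params)               # estimated intercept and slope
print(model.summary())            # coefficients, p-values, confidence intervals, R^2
```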
The community-developed free and open-source software package for solar physics. Provides tools for data search and download, coordinate transformations specific to solar physics, and powerful image processing through the Map object. Use when working with solar data, solar images (EUV, magnetograms, white light), solar coordinates (Helioprojective, Heliographic), Fido data search, solar time series, differential rotation, limb fitting, or multi-instrument solar analysis (AIA, HMI, GOES).
Comprehensive guide for SymPy - Python library for symbolic mathematics. Use for symbolic expressions, calculus (derivatives, integrals, limits, series), equation solving (algebraic, differential, systems), linear algebra, simplification, matrix operations, special functions, code generation, and mathematical proofs. Essential for analytical mathematics and computer algebra.
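A short sketch of symbolic differentiation, integration, equation solving, and series expansion:

```python
import sympy as sp

x, a = sp.symbols("x a", positive=True)

expr = sp.sin(x) * sp.exp(-a * x)
print(sp.diff(expr, x))                      # symbolic derivative
print(sp.integrate(expr, (x, 0, sp.oo)))     # definite integral: 1/(a**2 + 1)
print(sp.solve(x**2 - 2, x))                 # exact roots (positive branch: sqrt(2))
print(sp.series(sp.cos(x), x, 0, 6))         # Taylor expansion around 0
```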
Comprehensive deep learning framework for building, training, and deploying neural networks. TensorFlow provides tf.keras high-level API for model construction, tf.data for efficient data pipelines, and tf.function for graph-mode optimization. Use when working with: neural network training and inference, image classification/detection/segmentation, NLP/text processing with embeddings or transformers, time series forecasting, generative models (VAE, GAN), transfer learning with pretrained models, custom training loops with GradientTape, GPU/TPU accelerated computation, or any deep learning task.
A fast, extensible progress bar for Python and CLI. Instantly makes your loops show a smart progress meter with ETA, iterations per second, and customizable statistics. Minimal overhead. Use for monitoring long-running loops, simulations, data processing, ML training, file downloads, I/O operations, command-line tools, pandas operations, parallel tasks, and nested progress bars.
State-of-the-art Machine Learning for PyTorch, TensorFlow, and JAX. Provides thousands of pretrained models to perform tasks on different modalities such as text, vision, and audio. The industry standard for Large Language Models (LLMs) and foundation models in science.
N-dimensional labeled arrays and datasets in Python. Built on top of NumPy and Dask. It introduces labels in the form of dimensions, coordinates, and attributes on top of raw NumPy-like arrays, making data analysis in physical sciences more intuitive and less error-prone. Use for working with multi-dimensional scientific data, NetCDF/GRIB/Zarr files, climate/weather/oceanographic datasets, remote sensing, geospatial imaging, large out-of-memory datasets with Dask, and labeled array operations.
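A minimal sketch of labeled-dimension operations; the coordinates are illustrative and the commented `open_dataset` call shows the lazy NetCDF path:

```python
import numpy as np
import xarray as xr

temp = xr.DataArray(
    np.random.rand(3, 4, 5),
    dims=("time", "lat", "lon"),
    coords={"time": np.arange(3), "lat": np.linspace(-60, 60, 4),
            "lon": np.linspace(0, 360, 5, endpoint=False)},
    name="temperature",
)

print(temp.mean(dim="time"))            # reduce over a named dimension, no axis bookkeeping
print(temp.sel(lat=60, method="nearest").isel(time=0))  # label- and index-based selection
# xr.open_dataset("file.nc", chunks={}) reads NetCDF lazily with Dask.
```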
Industry-standard gradient boosting libraries for tabular data and structured datasets. XGBoost and LightGBM excel at classification and regression tasks on tables, CSVs, and databases. Use when working with tabular machine learning, gradient boosting trees, Kaggle competitions, feature importance analysis, hyperparameter tuning, or when you need state-of-the-art performance on structured data.
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.
Battle-tested Claude Code plugin for engineering teams — 48 agents, 182 skills, 68 legacy command shims, production-ready hooks, and selective install workflows evolved through continuous real-world use
Upstash Context7 MCP server for up-to-date documentation lookup. Pull version-specific documentation and code examples directly from source repositories into your LLM context.
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
Tools to maintain and improve CLAUDE.md files - audit quality, capture session learnings, and keep project memory current.
Reliable automation, in-depth debugging, and performance analysis in Chrome using Chrome DevTools and Puppeteer