Use when starting technical work that requires a structured approach: writing tests before code (TDD), planning data exploration (EDA), designing statistical analysis, clarifying modeling objectives (causal vs. predictive), or validating results. Invoke when the user mentions "write tests for", "explore this dataset", "analyze", "model", "validate", or when technical work needs systematic scaffolding before execution.
Provides systematic scaffolds for technical work: TDD test structures, EDA plans, statistical analysis frameworks, and validation checklists. Use when the user says "write tests for", "explore this dataset", "analyze", "model", or "validate" to ensure a rigorous approach before execution.
```
/plugin marketplace add lyndonkl/claude
/plugin install lyndonkl-thinking-frameworks-skills@lyndonkl/claude
```

This skill inherits all available tools. When active, it can use any tool Claude has access to.
Bundled resources:

- resources/evaluators/rubric_code_data_analysis_scaffolds.json
- resources/examples/eda-customer-churn.md
- resources/examples/tdd-authentication.md
- resources/methodology.md
- resources/template.md

This skill provides structured scaffolds (frameworks, checklists, templates) for technical work in software engineering and data science. It helps you approach complex tasks systematically by defining what to do, in what order, and what to validate before proceeding.
Use this skill when you need to:

- Write tests before implementing code (TDD)
- Plan exploration of a new dataset (EDA)
- Design a statistical analysis or A/B test
- Clarify modeling objectives (causal vs. predictive)
- Validate data, code, or model quality before shipping
Trigger phrases: "write tests for", "explore this dataset", "analyze", "model", "validate"
Skip this skill when:
Code Data Analysis Scaffolds provides structured frameworks for common technical patterns: TDD, EDA, statistical analysis, causal inference, predictive modeling, and validation.
Quick example:
Task: "Write authentication function"
TDD Scaffold:
```python
import pytest

# Test structure (write these FIRST)
def test_valid_credentials():
    assert authenticate("user@example.com", "correct_pass") == True

def test_invalid_password():
    assert authenticate("user@example.com", "wrong_pass") == False

def test_nonexistent_user():
    assert authenticate("nobody@example.com", "any_pass") == False

def test_empty_credentials():
    with pytest.raises(ValueError):
        authenticate("", "")

# Now implement authenticate() to make tests pass
```
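Run the suite with `pytest` before writing `authenticate()`: all four tests should fail at first, and that failing state is the intended TDD starting point.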
Copy this checklist and track your progress:
Code Data Analysis Scaffolds Progress:
- [ ] Step 1: Clarify task and objectives
- [ ] Step 2: Choose appropriate scaffold type
- [ ] Step 3: Generate scaffold structure
- [ ] Step 4: Validate scaffold completeness
- [ ] Step 5: Deliver scaffold and guide execution
Step 1: Clarify task and objectives
Ask the user for the task, dataset/codebase context, constraints, and expected outcome. Determine whether this is TDD (write tests first), EDA (explore data), statistical analysis (test a hypothesis), or validation (check quality). See resources/template.md for context questions.
Step 2: Choose appropriate scaffold type
Based on the task, select a scaffold: TDD (testing code), EDA (exploring data), Statistical Analysis (hypothesis testing, A/B tests), Causal Inference (estimating treatment effects), Predictive Modeling (building ML models), or Validation (checking quality). See Scaffold Types below for guidance on choosing.
Step 3: Generate scaffold structure
Create a systematic framework with clear steps, validation checkpoints, and expected outputs at each stage. For standard cases, use resources/template.md; for advanced techniques, see resources/methodology.md.
Step 4: Validate scaffold completeness
Check that the scaffold covers all requirements, includes validation steps, makes assumptions explicit, and provides clear success criteria. Self-assess using resources/evaluators/rubric_code_data_analysis_scaffolds.json; require a minimum score of ≥3.5.
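If you want the threshold check to be mechanical, a tiny sketch is below; the criterion names and scores here are placeholders, since the real criteria live in the rubric JSON file:

```python
from statistics import mean

# Placeholder criterion scores on the rubric's scale; the actual criteria
# are defined in resources/evaluators/rubric_code_data_analysis_scaffolds.json
scores = {"coverage": 4, "validation": 3, "explicit_assumptions": 4, "success_criteria": 3}

overall = mean(scores.values())
print(f"overall = {overall:.2f}")
assert overall >= 3.5, "revise the scaffold before delivering it"
```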
Step 5: Deliver scaffold and guide execution
Present the scaffold with clear next steps. If the user wants execution help, follow the scaffold systematically. If the scaffold reveals gaps (missing data, unclear requirements), surface these before proceeding.
**TDD Scaffold**
- When: Writing new code, refactoring existing code, fixing bugs
- Output: Test structure (test cases → implementation → refactor)
- Key Elements: Test cases covering happy path, edge cases, error conditions, test data setup
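A small pytest sketch of the test-data-setup element, assuming the same not-yet-implemented `authenticate()` as the quick example above; the fixture contents and parametrized cases are illustrative, not prescribed:

```python
import pytest

@pytest.fixture
def valid_user():
    # Hypothetical test record; real suites often build this with a factory
    return {"email": "user@example.com", "password": "correct_pass"}

def test_accepts_valid_user(valid_user):
    assert authenticate(valid_user["email"], valid_user["password"])

@pytest.mark.parametrize("email,password", [
    ("", ""),                  # empty credentials
    ("user@example.com", ""),  # missing password
    ("not-an-email", "x"),     # malformed email
])
def test_rejects_bad_input(email, password):
    # authenticate() does not exist yet; in TDD these fail first by design
    with pytest.raises(ValueError):
        authenticate(email, password)
```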
**EDA Scaffold**
- When: New dataset, data quality questions, feature engineering
- Output: Exploration plan (data overview → quality checks → univariate → bivariate → insights)
- Key Elements: Data shape/types, missing values, distributions, outliers, correlations
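As a concrete starting point, a minimal pandas sketch of the first checkpoints; the `data.csv` path is a placeholder for whatever dataset you are exploring:

```python
import pandas as pd

# Data overview: shape and types (file path is a placeholder)
df = pd.read_csv("data.csv")
print(df.shape)
print(df.dtypes)

# Quality checks: missing values and duplicate rows
print(df.isna().mean().sort_values(ascending=False))  # fraction missing per column
print(f"duplicate rows: {df.duplicated().sum()}")

# Univariate: distributions and outlier hints for numeric columns
numeric = df.select_dtypes("number")
print(numeric.describe())

# Bivariate: pairwise correlations as a starting point for insights
print(numeric.corr())
```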
**Statistical Analysis Scaffold**
- When: Hypothesis testing, A/B testing, comparing groups
- Output: Analysis design (question → hypothesis → test selection → assumptions → interpretation)
- Key Elements: Null/alternative hypotheses, significance level, power analysis, assumption checks
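A hedged sketch of the sequence for a two-group comparison using SciPy; the synthetic data and the choice of α = 0.05 are placeholders:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
control = rng.normal(10.0, 2.0, size=200)    # placeholder data
treatment = rng.normal(10.5, 2.0, size=200)

# H0: equal means; H1: means differ. Significance level chosen up front.
alpha = 0.05

# Assumption checks: normality (Shapiro-Wilk) and equal variance (Levene)
print(stats.shapiro(control).pvalue, stats.shapiro(treatment).pvalue)
print(stats.levene(control, treatment).pvalue)

# Welch's t-test avoids the equal-variance assumption
result = stats.ttest_ind(treatment, control, equal_var=False)
print(f"t = {result.statistic:.3f}, p = {result.pvalue:.4f}")
print("reject H0" if result.pvalue < alpha else "fail to reject H0")
```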
**Causal Inference Scaffold**
- When: Estimating treatment effects, understanding causation rather than just correlation
- Output: Causal design (DAG → identification strategy → estimation → sensitivity analysis)
- Key Elements: Confounders, treatment/control groups, identification assumptions, effect estimation
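One common estimation strategy is regression adjustment for observed confounders; a minimal statsmodels sketch on synthetic data, where the variable names are hypothetical and the key identification assumption is noted in the comments:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
x = rng.normal(size=n)                         # observed confounder
t = (x + rng.normal(size=n) > 0).astype(int)   # treatment depends on x
y = 2.0 * t + 1.5 * x + rng.normal(size=n)     # true effect of t is 2.0

df = pd.DataFrame({"y": y, "t": t, "x": x})

# Naive group difference ignores confounding and is biased here
print(df.groupby("t")["y"].mean().diff().iloc[-1])

# Adjusting for the confounder recovers an estimate near the true effect,
# assuming no unobserved confounding (the key identification assumption)
model = smf.ols("y ~ t + x", data=df).fit()
print(model.params["t"], model.conf_int().loc["t"].tolist())
```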
**Predictive Modeling Scaffold**
- When: Building ML models, forecasting, classification/regression tasks
- Output: Modeling pipeline (data prep → feature engineering → model selection → validation → evaluation)
- Key Elements: Train/val/test split, baseline model, metrics selection, cross-validation, error analysis
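A compact scikit-learn sketch of the pipeline's validation steps; the synthetic dataset, model choice, and accuracy metric are illustrative:

```python
from sklearn.datasets import make_classification
from sklearn.dummy import DummyClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)

# Hold out a test set before any modeling decisions
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y
)

# Baseline first: any real model must beat this
baseline = DummyClassifier(strategy="most_frequent").fit(X_train, y_train)
print("baseline accuracy:", baseline.score(X_test, y_test))

# Cross-validate the candidate model on training data only
model = RandomForestClassifier(random_state=0)
scores = cross_val_score(model, X_train, y_train, cv=5)
print("cv accuracy:", scores.mean(), "+/-", scores.std())

# Final check on the untouched test set
print("test accuracy:", model.fit(X_train, y_train).score(X_test, y_test))
```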
**Validation Scaffold**
- When: Checking data quality, code quality, model quality before deployment
- Output: Validation checklist (assertions → edge cases → integration tests → monitoring)
- Key Elements: Acceptance criteria, test coverage, error handling, boundary conditions
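Validation checkpoints translate directly into executable assertions; a pandas sketch where the table schema and acceptance criteria are hypothetical:

```python
import pandas as pd

def validate_orders(df: pd.DataFrame) -> None:
    """Fail fast if the orders table violates its acceptance criteria."""
    # Schema: required columns present (names are illustrative)
    required = {"order_id", "amount", "created_at"}
    missing = required - set(df.columns)
    assert not missing, f"missing columns: {missing}"

    # Integrity: keys unique, no nulls in critical fields
    assert df["order_id"].is_unique, "duplicate order_id values"
    assert df["amount"].notna().all(), "null amounts"

    # Boundary conditions: amounts within a plausible range
    assert (df["amount"] >= 0).all(), "negative amounts"

validate_orders(pd.DataFrame({
    "order_id": [1, 2, 3],
    "amount": [9.99, 0.0, 120.5],
    "created_at": pd.to_datetime(["2024-01-01"] * 3),
}))
```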
| Task Type | When to Use | Scaffold Resource |
|---|---|---|
| TDD | Writing/refactoring code | resources/template.md #tdd-scaffold |
| EDA | Exploring new dataset | resources/template.md #eda-scaffold |
| Statistical Analysis | Hypothesis testing, A/B tests | resources/template.md #statistical-analysis-scaffold |
| Causal Inference | Treatment effect estimation | resources/methodology.md #causal-inference-methods |
| Predictive Modeling | ML model building | resources/methodology.md #predictive-modeling-pipeline |
| Validation | Quality checks before shipping | resources/template.md #validation-scaffold |
| Examples | See what good looks like | resources/examples/ |
| Rubric | Validate scaffold quality | resources/evaluators/rubric_code_data_analysis_scaffolds.json |