Generates factorial, response surface, and Taguchi experimental designs to optimize multi-factor systems with minimal runs. Useful for screening variables, discovering interactions, A/B/n testing, and parameter tuning.
```
npx claudepluginhub lyndonkl/claude --plugin thinking-frameworks-skills
```

This skill uses the workspace's default tool permissions.
Copy this checklist and track your progress:
Design of Experiments Progress:
- [ ] Step 1: Define objectives and constraints
- [ ] Step 2: Identify factors, levels, and responses
- [ ] Step 3: Choose experimental design
- [ ] Step 4: Plan execution details
- [ ] Step 5: Create experiment plan document
- [ ] Step 6: Validate quality
Step 1: Define objectives and constraints
Clarify the experiment goal (screening vs optimization), response metric(s), experimental budget (max runs), time/cost constraints, and success criteria. See Common Patterns for typical objectives.
Step 2: Identify factors, levels, and responses
List all candidate factors (controllable inputs), specify levels for each factor (low/high or discrete values), categorize factors (control vs noise), and define response variables (measurable outputs). For screening many factors (8+), see resources/methodology.md for Plackett-Burman and fractional factorial approaches.
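For concreteness, a minimal sketch of a factor specification in Python; the factor and response names are illustrative, not prescribed by this skill:

```python
# Illustrative factors for a hypothetical cache-tuning experiment.
# "type" distinguishes continuous factors (given low/high bounds)
# from categorical factors (given discrete levels).
factors = {
    "cache_size_mb": {"type": "continuous", "low": 64, "high": 512},
    "eviction_policy": {"type": "categorical", "levels": ["lru", "lfu"]},
    "prefetch": {"type": "categorical", "levels": ["off", "on"]},
}

# Responses are measurable outputs, defined before any runs execute.
responses = ["p95_latency_ms", "hit_rate"]
```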
Step 3: Choose experimental design
Based on objective and constraints:
- Screening many factors: fractional factorial or Plackett-Burman designs isolate the vital few in minimal runs (see resources/methodology.md).
- Optimizing a few factors: full factorial designs, with replicated center points to detect curvature.
- Mapping a response surface: central composite or Box-Behnken designs fit quadratic models around a promising region.
- Robust design: Taguchi arrays with separate control and noise factors.
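As a sketch, a full factorial matrix for the optimization case can be generated with the standard library (factor names and levels are hypothetical):

```python
import itertools

# Two levels per factor -> 2^3 = 8 runs covering every combination.
factors = {
    "temperature_c": [150, 190],
    "pressure_bar": [1.0, 2.5],
    "catalyst": ["A", "B"],
}

design = [dict(zip(factors, combo)) for combo in itertools.product(*factors.values())]
for i, run in enumerate(design, 1):
    print(f"run {i}: {run}")
```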
Step 4: Plan execution details
Specify randomization order (eliminate time trends), blocking strategy (control nuisance variables), replication plan (estimate error), sample size justification (power analysis), and measurement protocols. See Guardrails for critical requirements.
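A minimal sketch of randomization and replication under the same hypothetical factors; the seed is recorded so the run order is reproducible:

```python
import itertools
import random

random.seed(42)  # record the seed alongside the plan for reproducibility

levels = {"temperature_c": [150, 190], "pressure_bar": [1.0, 2.5]}
design = [dict(zip(levels, c)) for c in itertools.product(*levels.values())]

# Replicate each run, then shuffle: replication estimates error,
# randomization breaks confounding with time-order (lurking) variables.
replicates = 2
runs = [dict(run, replicate=r) for run in design for r in range(1, replicates + 1)]
random.shuffle(runs)

for order, run in enumerate(runs, 1):
    print(f"order {order}: {run}")
```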
Step 5: Create experiment plan document
Create design-of-experiments.md with sections: objective, factors table, design matrix (run order with factor settings), response variables, execution protocol, and analysis plan. Use resources/template.md for structure.
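For illustration, the design matrix section might tabulate the randomized runs like this (all values hypothetical):

| Run order | Temperature (°C) | Pressure (bar) | Catalyst | Yield (%) |
|-----------|------------------|----------------|----------|-----------|
| 1         | 190              | 1.0            | B        | TBD       |
| 2         | 150              | 2.5            | A        | TBD       |
| 3         | 150              | 1.0            | B        | TBD       |

Recording responses directly in run order, next to factor settings, keeps the execution protocol and analysis plan tied to the same table.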
Step 6: Validate quality
Self-assess using resources/evaluators/rubric_design_of_experiments.json. Check: objective clarity, factor completeness, design appropriateness, randomization plan, measurement protocol, statistical power, analysis plan, and deliverable quality. Minimum standard: Average score ≥ 3.5 before delivering.
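A hypothetical self-scoring check against the rubric dimensions named above, assuming a 1-5 scale per dimension (the actual rubric schema may differ):

```python
# Hypothetical scores on the eight rubric dimensions (1-5 scale assumed).
scores = {
    "objective_clarity": 4,
    "factor_completeness": 4,
    "design_appropriateness": 3,
    "randomization_plan": 4,
    "measurement_protocol": 3,
    "statistical_power": 4,
    "analysis_plan": 4,
    "deliverable_quality": 4,
}

average = sum(scores.values()) / len(scores)
print(f"average = {average:.2f}; meets 3.5 minimum: {average >= 3.5}")
```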
Pattern 1: Screening (many factors → vital few)
Pattern 2: Optimization (find best settings)
Pattern 3: Response Surface (map the landscape)
Pattern 4: Robust Design (work despite noise)
Pattern 5: Sequential Experimentation (learn then refine)
Design requirements:
Randomize run order: Eliminates time-order bias and confounding with lurking variables. Use a random number generator, not "convenient" sequences; a combined sketch after this list shows randomization, center points, and a balance check.
Replicate center points: For designs with continuous factors, replicate center point runs (3-5 times) to estimate pure error and detect curvature.
Preserve critical interactions: In fractional factorials, avoid confounding important 2-way interactions with main effects. Choose Resolution IV or higher if interactions matter.
Check design balance: Ensure orthogonality (factors are uncorrelated in design matrix). Correlation > 0.3 reduces precision and interpretability.
Define response precisely: Use objective, quantitative, repeatable measurements. Avoid subjective scoring unless calibrated with multiple raters.
Justify sample size: Run a power analysis to ensure the design can detect meaningful effect sizes with acceptable Type II error risk (beta ≤ 0.20); see the power-analysis sketch after this list.
Document assumptions: State expected effect magnitudes, interaction assumptions, noise variance estimates. Design validity depends on these.
Plan for analysis before running: Specify statistical tests, significance level (alpha), effect size metrics before data collection to prevent p-hacking.
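A combined sketch of several requirements above, assuming four two-level factors coded -1/+1: it builds a Resolution IV 2^(4-1) fractional factorial (generator D = ABC), appends replicated center points, randomizes run order, and checks pairwise column correlations against the 0.3 guardrail:

```python
import itertools
import random
import statistics

random.seed(7)  # record the seed for reproducibility

# 2^(4-1) fractional factorial: full factorial in A, B, C with D = A*B*C.
# The generator D = ABC yields Resolution IV, so no main effect is
# confounded with any 2-way interaction.
design = [(a, b, c, a * b * c) for a, b, c in itertools.product([-1, 1], repeat=3)]

# Replicated center points (coded 0) estimate pure error and detect curvature.
design += [(0, 0, 0, 0)] * 4

random.shuffle(design)  # randomized run order

# Balance check: pairwise correlations between factor columns should be ~0.
names = "ABCD"
columns = list(zip(*design))
for (i, x), (j, y) in itertools.combinations(enumerate(columns), 2):
    r = statistics.correlation(x, y)
    flag = "  <-- exceeds 0.3 guardrail" if abs(r) > 0.3 else ""
    print(f"corr({names[i]}, {names[j]}) = {r:+.2f}{flag}")
```

(statistics.correlation requires Python 3.10+.)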
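And a power-analysis sketch using statsmodels, assuming a two-sample comparison; the effect size (Cohen's d) is an assumption that must be justified from domain knowledge:

```python
import math

from statsmodels.stats.power import TTestIndPower

# Per-group sample size to detect a standardized effect of d = 0.8
# at alpha = 0.05 with power = 0.80 (i.e., beta = 0.20).
n = TTestIndPower().solve_power(effect_size=0.8, alpha=0.05, power=0.80)
print(f"runs per group: {math.ceil(n)}")
```

For multi-factor designs, power depends on the fitted model, so treat this as a lower-bound sanity check rather than a full justification.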
Common pitfalls:
Key resources:
Typical workflow time:
When to escalate:
Inputs required:
Outputs produced:
- design-of-experiments.md: Complete experiment plan with design matrix, randomization, protocols, analysis approach