Guides implementation of evolutionary algorithms, multi-objective optimization, design space exploration, fitness functions, and generative workflows for AEC computational design.
Generative design is a computational design methodology in which a designer defines a problem through goals, constraints, and variable parameters, and an algorithmic system autonomously generates, evaluates, and evolves candidate solutions across a defined design space. Unlike traditional design where the human produces every solution manually, generative design shifts the designer's role from direct form-maker to curator of outcomes — defining what is desired rather than how to achieve it.
In the AEC context, generative design applies to problems ranging from single-building floor plan layouts and structural topologies to neighborhood-scale massing studies and infrastructure routing. The common thread is a design space too large for exhaustive manual exploration.
The confusion between parametric and generative design is pervasive. The distinction is fundamental:
| Aspect | Parametric Design | Generative Design |
|---|---|---|
| Core action | Define relationships between parameters | Explore the solution space algorithmically |
| Designer's role | Adjust sliders, observe outcomes | Define objectives and constraints, curate results |
| Output | One solution per parameter state | Population of diverse candidate solutions |
| Search method | Manual, intuition-driven | Automated, algorithm-driven |
| Model requirement | Parametric model with exposed variables | Parametric model + fitness function + solver |
| Typical scale | Dozens to hundreds of manual explorations | Thousands to millions of evaluated candidates |
A parametric model is a prerequisite for generative design — it provides the mechanism by which the solver manipulates geometry. But parametric design alone does not search; it merely responds to human input. Generative design automates the search.
Every generative design process follows a three-phase loop:
Generate — The solver creates candidate solutions by sampling or evolving design variable values within defined ranges. In evolutionary approaches, this involves applying genetic operators (crossover, mutation) to parent solutions.
Evaluate — Each candidate is assessed against one or more fitness functions. This is typically the computational bottleneck: running energy simulations, structural analyses, daylight calculations, or spatial adjacency checks for every individual in every generation.
Evolve — Based on evaluation results, the solver selects better-performing candidates and uses them to produce the next generation. Over many iterations, the population converges toward high-performing regions of the design space.
This loop continues until convergence criteria are met: a generation limit is reached, fitness improvements plateau below a threshold, or population diversity drops below a minimum.
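The loop above can be sketched end-to-end in a few dozen lines of Python. This is a minimal illustration only, not any tool's implementation: the sphere objective, variable bounds, tournament size, elite count, and mutation settings are arbitrary placeholder choices.

```python
import random

def evolve(fitness, bounds, pop_size=50, generations=100, mut_rate=0.1, tol=1e-6, patience=10):
    """Minimal generate-evaluate-evolve loop with elitism and a plateau stop."""
    n = len(bounds)
    # Generate: random initial population within variable ranges
    pop = [[random.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    best_history = []
    for gen in range(generations):
        # Evaluate: score every candidate (minimization)
        scored = sorted(pop, key=fitness)
        best_history.append(fitness(scored[0]))
        # Convergence: stop when improvement has plateaued below tol for `patience` generations
        if len(best_history) > patience and best_history[-patience] - best_history[-1] < tol:
            break
        # Evolve: elitism + tournament selection, crossover, mutation
        next_pop = scored[:2]  # elitism: keep the two best unchanged
        while len(next_pop) < pop_size:
            p1 = min(random.sample(pop, 3), key=fitness)   # tournament, k=3
            p2 = min(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, n) if n > 1 else 0
            child = p1[:cut] + p2[cut:]                    # single-point crossover
            child = [min(hi, max(lo, g + random.gauss(0, 0.1 * (hi - lo))))
                     if random.random() < mut_rate else g
                     for g, (lo, hi) in zip(child, bounds)]  # Gaussian mutation, clipped to bounds
            next_pop.append(child)
        pop = next_pop
    return min(pop, key=fitness)

# Usage: minimize the sphere function over [-5, 5]^3
best = evolve(lambda x: sum(g * g for g in x), [(-5, 5)] * 3, generations=200)
```

In a real AEC workflow, `fitness` would wrap a parametric model plus a simulation call, which is why population size and generation count dominate runtime.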
Generative design is appropriate when:
Generative design is not appropriate when:
The designer retains authorship because they define the problem, construct the fitness landscape, set constraints, and select solutions. The algorithm is a tool with no intent. However, generative design shifts creative decisions upstream: defining what matters (fitness functions), what is possible (variable ranges), and what is acceptable (feasibility thresholds). This demands deeper understanding of design performance than traditional workflows.
Fully automated: Solver runs autonomously from initialization to convergence. Designer intervenes only at setup and selection. Appropriate for well-defined problems with reliable fitness functions.
Human-in-the-loop (IEC): Designer evaluates candidates subjectively during optimization, guiding evolution toward aesthetically or experientially desirable outcomes.
The hybrid approach — automated fitness for quantifiable metrics, human selection for qualitative criteria — is often most productive in practice.
The Genetic Algorithm is the foundational evolutionary optimization method used in AEC generative design. Inspired by Darwinian natural selection, it maintains a population of candidate solutions that evolve over generations through selection, crossover, and mutation.
The encoding (genotype representation) determines how design variables are stored and manipulated:
Binary encoding: Variables as bit strings (e.g., 8-bit = 0-255). Simple operators but suffers from Hamming cliffs and imprecision for continuous variables.
Real-valued encoding: Variables as floating-point numbers directly. Standard for AEC — design variables (dimensions, angles, positions) are inherently continuous. Used by all major Grasshopper solvers.
Permutation encoding: For ordering problems (room sequencing, scheduling). Requires order-preserving operators (PMX, OX, CX).
Tree/graph encoding: For evolving solution topology itself (bracing patterns, connectivity graphs). Enables topological innovation but complex to implement.
Selection determines which individuals become parents for the next generation:
Tournament selection: Randomly pick k individuals (tournament size, typically k=2 to 7), select the best. Larger k increases selection pressure. This is the most commonly used method in AEC tools — simple, efficient, no global fitness sorting required.
Roulette wheel (fitness-proportionate) selection: Selection probability proportional to fitness. Problem: premature convergence when one individual dominates.
Rank-based selection: Selection probability proportional to rank rather than raw fitness. Avoids scaling issues of roulette wheel.
Stochastic universal sampling (SUS): Like roulette wheel but uses equally spaced pointers, reducing selection variance.
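Tournament selection is only a few lines; a sketch assuming lower fitness is better:

```python
import random

def tournament_select(population, fitness_values, k=3):
    """Pick k random individuals and return the one with the best (lowest) fitness.
    Larger k raises selection pressure; k=2 is a common default."""
    contestants = random.sample(range(len(population)), k)
    winner = min(contestants, key=lambda i: fitness_values[i])
    return population[winner]

# Usage: select a parent from a scored population
pop = [[0.1], [0.5], [0.9], [0.3]]
fits = [0.2, 0.8, 0.9, 0.1]
parent = tournament_select(pop, fits, k=2)
```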
Crossover combines genetic material from two parents to produce offspring:
Single-point crossover: Choose a random point; offspring gets genes from parent A before the point and parent B after.
Two-point crossover: Two random points; segment between points from one parent, rest from the other. Better gene block preservation.
Uniform crossover: Each gene independently chosen from either parent with equal probability. Maximum mixing — good when variables are independent.
Simulated Binary Crossover (SBX): Standard for NSGA-II. Operates directly on real numbers. Distribution index eta_c (typically 2-20) controls offspring distance from parents. Higher eta_c = offspring closer to parents (exploitation); lower = larger jumps (exploration).
Blend Crossover (BLX-alpha): Offspring sampled uniformly from [min(p1,p2) - alpha*d, max(p1,p2) + alpha*d], where d = |p1 - p2|. Alpha = 0.5 is typical.
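SBX can be sketched in its unbounded textbook form (production implementations add per-variable bounds handling and a per-gene crossover probability; those are omitted here for clarity):

```python
import random

def sbx_crossover(p1, p2, eta_c=15):
    """Simulated Binary Crossover: offspring spread around the parents is
    controlled by eta_c (higher = children closer to parents)."""
    c1, c2 = [], []
    for x1, x2 in zip(p1, p2):
        u = random.random()
        # Spread factor beta drawn from the SBX probability distribution
        if u <= 0.5:
            beta = (2 * u) ** (1 / (eta_c + 1))
        else:
            beta = (1 / (2 * (1 - u))) ** (1 / (eta_c + 1))
        c1.append(0.5 * ((1 + beta) * x1 + (1 - beta) * x2))
        c2.append(0.5 * ((1 - beta) * x1 + (1 + beta) * x2))
    return c1, c2

# Usage: cross two real-valued parents
child_a, child_b = sbx_crossover([1.0, 4.0], [3.0, 8.0], eta_c=20)
```

A useful property for sanity checks: for every gene, the two children's mean equals the two parents' mean (c1 + c2 = p1 + p2).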
Mutation introduces random variation to maintain diversity:
Bit-flip mutation: For binary encoding. Each bit has probability 1/L of flipping.
Gaussian mutation: For real-valued encoding. Add N(0, sigma) noise to each gene. Most common mutation in AEC problems. Sigma can be fixed or adaptive.
Polynomial mutation: Used in NSGA-II. Distribution index eta_m (typically 20-100) controls perturbation magnitude. Higher eta_m = smaller perturbations.
Swap / Scramble mutation: For permutation encoding. Swap exchanges two positions; scramble randomly rearranges a subset.
Adaptive mutation: Rate or step size adjusts during the run — higher early (exploration), lower later (exploitation).
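Gaussian mutation with bound clipping, the most common choice noted above, in sketch form (the per-gene probability and the sigma-as-fraction-of-range convention are illustrative defaults, not a standard):

```python
import random

def gaussian_mutate(genes, bounds, rate=0.1, sigma_frac=0.05):
    """Add N(0, sigma) noise to each gene with probability `rate`, where
    sigma is a fraction of the variable's range; clip results to bounds."""
    out = []
    for g, (lo, hi) in zip(genes, bounds):
        if random.random() < rate:
            g += random.gauss(0, sigma_frac * (hi - lo))
        out.append(min(hi, max(lo, g)))  # keep every gene inside its range
    return out

# Usage: mutate every gene (rate=1.0) of a two-variable individual
mutated = gaussian_mutate([2.0, 7.5], [(0, 10), (0, 10)], rate=1.0)
```

An adaptive variant would shrink `sigma_frac` over the run, shifting from exploration to exploitation.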
Elitism ensures the best individual(s) from the current generation survive unchanged into the next generation. Without elitism, the best solution found can be lost through crossover and mutation. Typically, the top 1-5% of the population is preserved. NSGA-II implements elitism through its combined parent+offspring selection scheme.
Population size determines the balance between solution diversity and computational cost:
The critical trade-off: larger populations explore more of the design space per generation but require more evaluations per generation. If a single evaluation takes 30 seconds (e.g., energy simulation), a population of 200 requires nearly 2 hours per generation.
Optimization terminates when:
The fundamental tension in optimization:
Effective optimization requires both. Early generations should favor exploration; later generations should favor exploitation. This can be achieved through:
Holland's Schema Theorem: short, low-order, above-average schemata (building blocks) receive exponentially increasing trials in subsequent generations. Implication for encoding: variables that interact strongly should be positioned near each other in the genotype to reduce disruption by crossover.
Most AEC design problems involve multiple conflicting objectives. A building cannot simultaneously minimize cost, minimize energy consumption, and maximize floor area — these objectives conflict. Multi-objective optimization acknowledges this and seeks the set of best trade-off solutions.
Pareto dominance: Solution A dominates solution B if A is at least as good as B in all objectives and strictly better in at least one. A solution that is not dominated by any other solution in the population is called non-dominated or Pareto optimal.
Pareto front: The set of all non-dominated solutions forms the Pareto front (or Pareto frontier) in objective space. This front represents the best achievable trade-offs — improving one objective requires worsening another.
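The definitions above translate directly into code (all objectives assumed to be minimized):

```python
def dominates(a, b):
    """True if objective vector a dominates b: at least as good in all
    objectives and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# Usage: (cost, energy), both minimized
solutions = [(100, 50), (120, 40), (110, 45), (130, 60)]
front = pareto_front(solutions)  # (130, 60) is dominated and drops out
```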
For 2 objectives: a 2D scatter plot with each axis representing one objective. The Pareto front appears as a curve along the boundary of the feasible region.
For 3 objectives: a 3D scatter plot or parallel coordinate plot. The Pareto front is a surface.
For 4+ objectives: direct visualization is impossible. Use parallel coordinate plots, radar charts, heatmaps, or dimensionality reduction (PCA, t-SNE) to explore the solution set. Wallacei provides built-in multi-dimensional Pareto analytics.
NSGA-II (Deb et al., 2002) is the most widely used multi-objective evolutionary algorithm in AEC. It is the engine behind Wallacei and many other tools.
Key mechanisms: fast non-dominated sorting (ranking the population into successive Pareto fronts), crowding distance (preserving diversity within each front), and elitist survival selection from the combined parent + offspring population.
Typical NSGA-II parameters for AEC problems: population 50-200; SBX crossover probability around 0.9 with eta_c 15-20; polynomial mutation probability around 1/n_variables with eta_m around 20.
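Crowding distance, NSGA-II's diversity mechanism, can be sketched directly from its definition. This is a standalone illustration over a single front of objective vectors, not the full algorithm:

```python
def crowding_distance(front):
    """Crowding distance for a list of objective vectors on one Pareto front.
    Boundary solutions get infinity; interior solutions accumulate the
    normalized side lengths of the surrounding cuboid (larger = less crowded)."""
    n, m = len(front), len(front[0])
    dist = [0.0] * n
    for obj in range(m):
        # Sort solution indices by this objective's value
        order = sorted(range(n), key=lambda i: front[i][obj])
        f_min, f_max = front[order[0]][obj], front[order[-1]][obj]
        dist[order[0]] = dist[order[-1]] = float("inf")  # boundary points always survive
        if f_max == f_min:
            continue  # degenerate objective: all values equal
        for rank in range(1, n - 1):
            i = order[rank]
            dist[i] += (front[order[rank + 1]][obj]
                        - front[order[rank - 1]][obj]) / (f_max - f_min)
    return dist

# Usage: middle point of a three-point front gets a finite distance
d = crowding_distance([(0.0, 1.0), (0.5, 0.5), (1.0, 0.0)])
```

During selection, ties in non-domination rank are broken in favor of the larger crowding distance.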
SPEA2 maintains an external archive of non-dominated solutions. Fitness based on domination strength and density (k-th nearest neighbor distance). Often produces better-distributed Pareto fronts than NSGA-II for many-objective problems. Available in Octopus for Grasshopper.
Decomposes the multi-objective problem into single-objective subproblems using weight vectors. Each subproblem optimized simultaneously with information sharing between neighbors. Efficient for many-objective problems (4+) where NSGA-II's crowding distance becomes less effective.
The Pareto front provides options, not answers. Decision-making methods:
Weighted sum: F = w1*f1 + w2*f2 + ... + wn*fn. Simple but cannot find solutions on non-convex Pareto front regions. Galapagos uses this approach.
Epsilon-constraint: Optimize f1 subject to f2 <= epsilon_2, f3 <= epsilon_3. Can find non-convex Pareto front solutions. Requires choosing which objective to optimize and setting constraint bounds.
The fitness function is the single most consequential decision in generative design. It encodes what the designer values. A poorly designed fitness function will efficiently produce solutions that are technically "optimal" but irrelevant to the actual design intent. The fitness function is the designer's proxy — it must faithfully represent design intent.
When combining multiple objectives, normalization is essential to prevent one objective from dominating due to scale differences:
Min-max: f_norm = (f - f_min) / (f_max - f_min). Maps to [0,1]. Requires estimating bounds.
Z-score: f_norm = (f - mean) / std_dev. Maps to approximately [-3, 3]. Dynamic per generation.
Target-based: f_norm = |f - f_target| / f_target. For absolute performance targets.
Rank-based: Replace values with population rank. Eliminates scale differences but loses magnitude.
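The first two normalizations in sketch form, computed per generation over the population's objective values:

```python
def min_max_norm(values):
    """Min-max normalize a population's objective values to [0, 1]."""
    lo, hi = min(values), max(values)
    if hi == lo:
        return [0.0] * len(values)  # degenerate: all candidates score the same
    return [(v - lo) / (hi - lo) for v in values]

def z_score_norm(values):
    """Z-score normalize using the population mean and standard deviation."""
    mean = sum(values) / len(values)
    std = (sum((v - mean) ** 2 for v in values) / len(values)) ** 0.5
    if std == 0:
        return [0.0] * len(values)
    return [(v - mean) / std for v in values]

# Usage: bring construction cost (thousands) onto a [0, 1] scale before weighting
costs = [850, 920, 780, 1010]
norm_costs = min_max_norm(costs)  # 780 maps to 0.0, 1010 maps to 1.0
```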
Not all variable combinations produce valid designs. Constraint handling manages infeasible solutions:
Penalty method: F_penalized = F_original + penalty * violation_magnitude. Penalty coefficient must balance discouraging infeasibility without undervaluing near-boundary feasible solutions. Adaptive penalties that increase over generations are effective.
Repair method: Map infeasible solutions to nearest feasible solution (e.g., clip oversized rooms to maximum). Effective but requires domain-specific logic.
Decoder method: Genotype maps to feasible phenotypes via a decoder function. Feasibility guaranteed by construction. Example: floor plan decoder ensures rooms tile without overlaps.
Feasibility rules (Deb's rules): Feasible beats infeasible; between infeasible, smaller violation wins; between feasible, better fitness wins. Used in NSGA-II.
Multi-objective constraint handling: Treat constraint satisfaction as an additional objective in Pareto ranking.
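Deb's feasibility rules reduce to a single comparison function. A sketch, assuming minimization and a `violation` value that sums constraint violations (zero when feasible):

```python
def deb_better(fit_a, viol_a, fit_b, viol_b):
    """Deb's rules: feasible beats infeasible; between infeasible individuals,
    smaller violation wins; between feasible individuals, lower fitness wins."""
    if viol_a == 0 and viol_b == 0:
        return fit_a < fit_b          # both feasible: compare fitness
    if viol_a == 0 or viol_b == 0:
        return viol_a == 0            # exactly one feasible: it wins
    return viol_a < viol_b            # both infeasible: smaller violation wins

# Usage: a feasible candidate beats a better-scoring but infeasible one
assert deb_better(0.9, 0.0, 0.2, 1.5)
```

This comparator plugs directly into tournament selection, which is one reason Deb's rules pair so naturally with genetic algorithms.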
For single-objective solvers (Galapagos): F_total = w1*f1_norm + w2*f2_norm + ... + wn*fn_norm, where sum(wi) = 1.0. Start with equal weights, then adjust to reflect priorities. Warning: cannot discover solutions on non-convex Pareto front regions.
Hard boundaries: minimum room areas (code), maximum stress (safety), minimum daylight factor (LEED/BREEAM), maximum height (zoning), minimum setbacks (zoning), fire egress distances (life safety). These are constraints defining the feasible region, not objectives.
The design space is the set of all possible solutions defined by the design variables and their ranges. Each variable defines one dimension of the space. A problem with 20 variables defines a 20-dimensional space.
Variable types:
As the number of variables increases, the volume of the design space grows exponentially. A problem with 10 variables, each with 10 possible values, has 10^10 = 10 billion possible solutions. Exhaustive search is impossible.
Practical implications:
Initial population generation affects convergence speed and solution quality:
Random sampling: Each variable sampled uniformly and independently. Simple but leaves gaps in high dimensions.
Latin Hypercube Sampling (LHS): Each variable's range divided into N equal intervals with exactly one sample per interval. Standard for DOE in AEC. Better coverage than random sampling.
Sobol sequences: Quasi-random low-discrepancy sequences with superior space-filling properties. Available in Python (scipy.stats.qmc.Sobol).
Orthogonal sampling: Extension of LHS ensuring uniform distribution in multi-dimensional subspaces, not just marginal distributions.
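Both LHS and Sobol sampling are available in SciPy's `scipy.stats.qmc` module. A sketch of seeding an initial population; the five variables and their bounds are invented for illustration:

```python
import numpy as np
from scipy.stats import qmc

# Five design variables with physical bounds (e.g. depth, spacing, angle, ...)
lower = [0.1, 0.3, 0.0, 2.5, 10.0]
upper = [0.8, 1.2, 45.0, 4.0, 30.0]

# Latin Hypercube: one sample per interval in every variable's range
lhs = qmc.LatinHypercube(d=5, seed=42)
pop_lhs = qmc.scale(lhs.random(n=50), lower, upper)           # shape (50, 5)

# Sobol: low-discrepancy sequence; sample counts should be powers of two
sobol = qmc.Sobol(d=5, scramble=True, seed=42)
pop_sobol = qmc.scale(sobol.random_base2(m=6), lower, upper)  # 2^6 = 64 points
```

Each row is one candidate's variable vector, ready to feed into a parametric model.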
Before full optimization, sensitivity analysis identifies which variables most influence the objectives, enabling dimensionality reduction:
Morris method (Elementary Effects): Screening method computing the mean and standard deviation of elementary effects per variable. Large mean = influential variable; large std = variable interacts with others or behaves nonlinearly. Cost: r*(n+1) evaluations for r trajectories over n variables.
Sobol indices: Variance-based global sensitivity. First-order index S_i measures variance due to variable i alone. Total-order index ST_i includes all interactions involving i. Computationally expensive (thousands of evaluations); use surrogate models to reduce cost.
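A compact elementary-effects screen can be written directly in NumPy. This sketch uses a simple one-at-a-time (radial) design from random base points rather than the full Morris trajectory scheme, so treat it as illustrative; its cost is r*(n+1) evaluations:

```python
import numpy as np

def morris_screen(f, bounds, r=20, delta=0.1, seed=0):
    """Elementary effects: for r random base points, perturb each variable in
    turn by `delta` (in normalized [0,1] coordinates) and record the change
    in f. Returns (mu_star, sigma) per variable."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    k = len(bounds)
    effects = np.empty((r, k))
    for t in range(r):
        x = rng.uniform(0, 1 - delta, size=k)   # base point, leaving room for +delta
        fx = f(lo + x * (hi - lo))
        for i in range(k):
            xp = x.copy()
            xp[i] += delta
            effects[t, i] = (f(lo + xp * (hi - lo)) - fx) / delta
    mu_star = np.abs(effects).mean(axis=0)   # large -> influential variable
    sigma = effects.std(axis=0)              # large -> interactions / nonlinearity
    return mu_star, sigma

# Usage: the first variable dominates, the third is inert
mu, sg = morris_screen(lambda v: 10 * v[0] + v[1] ** 2, [(0, 1)] * 3)
```

Variables with negligible mu_star can be frozen at sensible defaults before the main optimization run.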
DOE provides structured approaches to sample the design space before or instead of optimization:
When fitness evaluation is expensive (minutes per evaluation for energy simulation or FEA), surrogate models approximate the fitness function with a cheap-to-evaluate mathematical model:
Kriging (Gaussian Process Regression): Interpolates known points with uncertainty estimates, enabling intelligent sampling via expected improvement criterion. Gold standard for expensive optimization.
Radial Basis Functions (RBF): Weighted sums of radial functions centered at known points. Faster than Kriging for large datasets. Used by Opossum (RBFOpt).
Polynomial regression: Low-order polynomials. Fast but limited to smooth, low-dimensional landscapes. Useful for initial screening.
Neural networks: Approximate complex, high-dimensional landscapes. Require hundreds to thousands of training points.
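A minimal surrogate loop using SciPy's RBF interpolator as a stand-in for the Kriging/RBFOpt machinery of dedicated tools; the "expensive" function here is a cheap placeholder:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator
from scipy.stats import qmc

def expensive_eval(x):
    """Stand-in for a slow simulation (energy model, FEA, ...).
    Takes an (n, 2) array of candidates, returns n fitness values."""
    return np.sin(3 * x[:, 0]) + (x[:, 1] - 0.5) ** 2

# 1. Sample a small design-of-experiments set (LHS) and run the true model
sampler = qmc.LatinHypercube(d=2, seed=1)
X_train = sampler.random(n=40)
y_train = expensive_eval(X_train)

# 2. Fit the cheap surrogate on the evaluated points
surrogate = RBFInterpolator(X_train, y_train, kernel="thin_plate_spline")

# 3. Query the surrogate densely at negligible cost
X_dense = sampler.random(n=2000)
y_pred = surrogate(X_dense)
best = X_dense[np.argmin(y_pred)]  # promising candidate to verify with the true model
```

A full surrogate-based optimizer would iterate: verify `best` with the true model, add it to the training set, refit, and resample.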
Parallel coordinate plots: Each axis = one variable/objective; each solution = a polyline. Reveals correlations and preferred ranges. Wallacei provides interactive versions.
Scatter matrix: Grid of pairwise scatter plots revealing correlations and trade-offs.
Heatmaps: Fitness values across 2D variable slices. Identifies ridges, valleys, optima.
t-SNE / UMAP: Dimensionality reduction grouping similar solutions, revealing clusters.
| Tool | Platform | Algorithm(s) | Objectives | Strengths | Limitations |
|---|---|---|---|---|---|
| Galapagos | Grasshopper | GA, Simulated Annealing | Single (weighted multi) | Built-in, simple UI, fast setup | No true multi-objective, limited analytics |
| Wallacei | Grasshopper | NSGA-II | Multi-objective | Pareto analytics, clustering, parallel coords, phenotype explorer | Learning curve, slower for large populations |
| Octopus | Grasshopper | HypE, SPEA2 | Multi-objective | Many-objective support, interactive Pareto | Less actively maintained, UI complexity |
| Opossum | Grasshopper | RBFOpt (surrogate) | Single/Multi | Efficient for expensive evaluations, fewer evaluations needed | Requires initial sampling, less exploratory |
| Optimus | Grasshopper | Multiple (GA, PSO, DE) | Single/Multi | Algorithm selection flexibility | Complexity, less community support |
| Refinery | Dynamo/Autodesk | GA (cloud-based) | Multi-objective | Cloud compute, Autodesk integration, no local compute limit | Requires Autodesk subscription, limited customization |
| Autodesk Forma | Web/Cloud | Performance-driven gen. | Multi-objective | Real-time feedback, wind/sun/energy, urban scale | Less flexible than scripted approaches, limited variable types |
| Topos | Standalone/Plugin | SIMP, BESO | Topology optimization | True topology optimization, structural focus | Structural only, requires FEA integration |
Galapagos: The entry point for most designers. Drag a fitness output and a set of sliders into the Galapagos component. It handles GA setup automatically. For multi-objective problems, manually combine objectives into a weighted sum. Best for: quick single-objective explorations, learning generative workflows, problems with <15 variables.
Wallacei: The professional standard for multi-objective generative design in Grasshopper. Provides NSGA-II with full Pareto front analytics including: generation-by-generation convergence tracking, objective value distributions, parallel coordinate filtering, K-means clustering of solutions, phenotype (geometry) preview for any solution. Wallacei X adds enhanced analytics. Best for: serious multi-objective AEC optimization, research, design competitions.
Octopus: Supports many-objective optimization (4+ objectives) better than Wallacei through HypE (Hypervolume-based) algorithm. Interactive Pareto front allows the designer to steer evolution in real time. Best for: many-objective problems, interactive exploration.
Opossum: Uses surrogate-based optimization (RBFOpt) to minimize the number of true evaluations. Instead of evaluating thousands of solutions, it builds a surrogate model from dozens of evaluations and optimizes the surrogate. Best for: problems where each evaluation takes minutes (energy simulation, CFD, detailed structural analysis).
Refinery (Autodesk): Cloud-based generative design for Dynamo. Offloads computation to Autodesk servers. Provides multi-objective optimization with result visualization in a web interface. Best for: Revit/Dynamo users, teams without powerful local hardware.
Autodesk Forma: Cloud platform for early-stage urban and building design. Provides real-time performance feedback (wind, daylight, energy, noise) and generative exploration of massing options. Best for: urban-scale generative studies, early concept design, non-specialist users.
Clearly articulate what you are optimizing and why. Write out:
Example: "Optimize the floor plan layout of a 2,000 m2 office floor to maximize daylight autonomy, maximize programmatic adjacency satisfaction, and minimize circulation area, subject to minimum room sizes per the brief and maximum distance-to-exit per fire code."
List every parameter the solver can manipulate. For each variable, specify:
Aim for 5-30 variables. Fewer than 5 may not need generative design; more than 30 may require decomposition or dimensionality reduction.
Construct the parametric model in Grasshopper, Dynamo, or a scripting environment. The model must:
Model robustness is critical. If 10% of variable combinations crash the model, the solver wastes 10% of evaluations and may converge to regions that avoid crashes rather than regions with high fitness.
Implement computable functions that evaluate each objective. Use simulation plugins (Ladybug/Honeybee for environmental, Karamba for structural, custom scripts for spatial) to compute performance metrics. Normalize all fitness values to comparable scales.
Test fitness functions manually with a few known configurations to verify they produce sensible values and rankings.
Select the solver based on the problem:
Set parameters:
Launch the solver. Monitor:
For long runs (hours/days), save checkpoints. Most tools allow pausing and resuming.
For single-objective: examine the best solution and compare to the initial design.
For multi-objective:
Choose 3-5 solutions from the Pareto front that represent distinct trade-off strategies. For each:
Problem: Optimize the layout of 12 rooms on a 40m x 30m rectangular floor plate for an educational building.
Variables (18 total):
Objectives:
Constraints: Minimum room areas per educational standards. Maximum distance to nearest exit. No room overlaps (enforced by decoder).
Solver: Wallacei, NSGA-II, population 100, 80 generations = 8,000 evaluations.
Results: Pareto front with 45 non-dominated solutions. Three clusters emerge: (A) high-daylight layouts with rooms along perimeter, higher circulation; (B) compact layouts with minimal circulation but reduced daylight for interior rooms; (C) balanced layouts with light wells providing daylight to interior rooms. Cluster C reveals a design strategy (light wells) that was not initially considered — a generative discovery.
Problem: Optimize a parametric louver shading system on a south-facing office facade (20m wide x 15m tall, Latitude 40N).
Variables (8 total):
Objectives:
Solver: Wallacei, NSGA-II, population 80, 60 generations = 4,800 evaluations. Each evaluation requires a Radiance simulation (~10 seconds) and an EnergyPlus simulation (~30 seconds). Total runtime: approximately 53 hours if run serially (4,800 evaluations at ~40 seconds each).
Results: Pareto front reveals that deeper louvers dramatically reduce cooling load but at diminishing returns beyond 0.6m depth. View and daylight preservation is best achieved with variable-depth zoning — deeper louvers at eye level and shallower louvers above. The cost-optimal region suggests 0.45m depth at 0.6m spacing as the knee point of the cost-performance trade-off.
Problem: Optimize the shape of a long-span roof shell (50m x 50m footprint) for a sports hall.
Variables (12 total):
Objectives:
Solver: Octopus (HypE), population 120, 100 generations. Each evaluation requires a Karamba3D analysis (~2 seconds). Total runtime: approximately 6.5 hours.
Results: The Pareto front shows a clear trade-off between weight and deflection. Minimum-weight solutions tend toward anticlastic (saddle-shaped) surfaces that carry load efficiently through membrane action but have higher deflections. Minimum-deflection solutions tend toward synclastic (dome-like) shapes that are stiffer but heavier. The knee point reveals a hybrid form — a shallow dome with edge curvature — that achieves 85% of the minimum weight at only 120% of the minimum deflection. This form was not intuitively predictable.
In IEC, the human designer serves as the fitness function for some or all objectives. Each generation, the designer views rendered phenotypes and selects preferred solutions. Evolution is guided by aesthetic, experiential, or cultural criteria that resist quantification.
Challenges: human fatigue limits populations to 10-20 individuals and runs to 20-30 generations. Solutions: pre-filter with computational fitness to reduce the set the human must evaluate; use surrogate models trained on human selections to automate subsequent generations.
Multiple populations evolve simultaneously, with fitness depending on interactions between populations. Applications in AEC:
Co-evolution can discover emergent synergies between subsystems that would be missed by optimizing them sequentially.
Instead of optimizing fitness, novelty search rewards solutions that are different from all previously found solutions. The archive of encountered solutions grows over time, and fitness is defined as the distance from the nearest archived solution in behavior space.
Application: when the fitness landscape is deceptive (local optima trap conventional optimization), novelty search explores more broadly and often finds globally optimal solutions as a side effect. In AEC: generating diverse facade patterns, exploring unusual structural topologies, or discovering non-obvious spatial configurations.
MAP-Elites divides the design space into a grid of behavioral niches (defined by user-chosen feature dimensions) and seeks the highest-performing solution in each niche. The result is a map of the design space showing the best achievable fitness for every combination of behavioral features.
Example: for a tower design, the feature dimensions might be (building height, floor plate aspect ratio). MAP-Elites fills a 2D grid where each cell contains the best-performing tower with that height and aspect ratio. The designer can browse the map to understand how performance varies across the design space, not just at the optimum.
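A toy MAP-Elites sketch for the tower example above. The fitness, feature mapping, and mutation scheme are invented for illustration; real uses would plug in a parametric model:

```python
import random

def map_elites(fitness, features, bounds, bins=10, iterations=2000, seed=0):
    """MAP-Elites sketch: keep the best (maximization) solution in each
    behavioral niche. `features` maps a genome to descriptors in [0, 1]."""
    rng = random.Random(seed)
    archive = {}  # niche cell -> (fitness, genome)

    def cell(x):
        return tuple(min(bins - 1, int(f * bins)) for f in features(x))

    def insert(x):
        c, f = cell(x), fitness(x)
        if c not in archive or f > archive[c][0]:
            archive[c] = (f, x)

    # Seed with random genomes, then repeatedly mutate a random elite
    for _ in range(100):
        insert([rng.uniform(lo, hi) for lo, hi in bounds])
    for _ in range(iterations):
        _, parent = archive[rng.choice(list(archive))]
        child = [min(hi, max(lo, g + rng.gauss(0, 0.1 * (hi - lo))))
                 for g, (lo, hi) in zip(parent, bounds)]
        insert(child)
    return archive

# Usage: towers described by (height, aspect ratio); fitness favors tall, slender forms
bounds = [(20.0, 200.0), (0.5, 3.0)]
features = lambda x: ((x[0] - 20) / 180, (x[1] - 0.5) / 2.5)
archive = map_elites(lambda x: x[0] / x[1], features, bounds)
```

Browsing `archive` cell by cell shows the best achievable fitness for every (height, aspect ratio) combination, which is the map the designer explores.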
Using evolutionary algorithms to optimize neural network architectures and weights. Applications in AEC:
Knowledge from one optimization can bootstrap another. If a floor plan optimization for Building A converges on effective layout strategies, the final population can seed the initial population for Building B's optimization (with modified constraints). This reduces convergence time and improves solution quality for repeated problem types (e.g., a firm designing many office buildings with similar programs).
Strategies:
```
Is the problem single-objective?
    YES -> Is evaluation fast (<1 sec)?
        YES -> Galapagos (GA or SA)
        NO  -> Opossum (surrogate-based)
    NO -> How many objectives?
        2-3 -> Wallacei (NSGA-II)
        4+  -> Octopus (HypE/SPEA2)

Is evaluation fast (<1 sec)?
    YES -> Direct evaluation
    NO  -> Surrogate-assisted (Opossum or custom Kriging)

Are qualitative criteria important?
    YES -> Human-in-the-loop (IEC) or hybrid
    NO  -> Fully automated
```