# dm-game
Use when designing game mechanics, evaluating gameplay feel, tuning game systems, reviewing player experience, debugging why something feels wrong, balancing combat, designing progression, or working on any player-facing game feature. Provides a constraint system for evaluating mechanics with focus on player experience over feature completion.
`npx claudepluginhub rbergman/dark-matter-marketplace --plugin dm-game`

This skill uses the workspace's default tool permissions.
**Purpose:** Central evaluation framework for game mechanics. Use this as the **first pass** on any feature — the 5-Component Filter identifies what's weak, then specialized skills provide deep guidance.
**Core principle:** Mechanics are code. Gameplay is the player's experience of that code. The goal is not to implement features, but to implement Relevance.

**Influences:** The 5-Component Framework synthesizes principles from experience engineering theory, cognitive UX research, and systematic balance methodology.
Before implementing or critiquing ANY game feature, evaluate against:
| Component | Core Question | Quick Check |
|---|---|---|
| Clarity | Can the player predict what will happen? | Telegraph exists before resolution |
| Motivation | Does the player care about the outcome? | Outcome affects persistent state |
| Response | Do player inputs matter? | Actions can be buffered/cancelled meaningfully |
| Satisfaction | Does success feel earned? | Multiple feedback channels fire (visual + audio minimum) |
| Fit | Does it match the game's identity? | Weight, timing, audio match entity type |
**Conflict priority:** Response > Clarity > Satisfaction > Fit > Motivation
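As a sketch (not part of the skill itself), the filter and its conflict priority can be encoded as a small checklist. The component names come from the table above; the function and example names are hypothetical:

```typescript
// Hypothetical sketch: the 5-Component Filter as a priority-ordered checklist.
type Component = "Response" | "Clarity" | "Satisfaction" | "Fit" | "Motivation";

// Conflict priority from above: Response > Clarity > Satisfaction > Fit > Motivation
const PRIORITY: Component[] = ["Response", "Clarity", "Satisfaction", "Fit", "Motivation"];

// true = the quick check passes; missing = not yet evaluated
type Evaluation = Partial<Record<Component, boolean>>;

// Return failing components in the order they should be addressed.
function weakestFirst(evaluation: Evaluation): Component[] {
  return PRIORITY.filter((c) => evaluation[c] === false);
}

// Example: a dash ability with no telegraph and mismatched audio texture.
const dashEval: Evaluation = {
  Clarity: false,
  Motivation: true,
  Response: true,
  Satisfaction: true,
  Fit: false,
};
console.log(weakestFirst(dashEval)); // Clarity first, then Fit, per the conflict priority
```

The point of the ordering is that a Clarity failure can masquerade as a Fit or Satisfaction failure, so fixing in priority order avoids tuning the wrong thing.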
For detailed evaluation rubrics, consult references/5-component-rubric.md.
For domain-specific guidance, consult references/domain-guide.md.

When proposing ANY numeric value (timing windows, costs, speeds, damage, etc.), choose ONE:
- **Option A — Source-backed:** cite where the value comes from.
- **Option B — Starting value with test plan:** state the value and how it will be validated.
Never claim "industry standard" or "common practice" without a source.
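As an illustration of Option B, a starting value can be recorded together with its rationale and test plan. The names and numbers below are placeholders, not recommendations:

```typescript
// Hypothetical "Option B" record: a starting value plus the plan that
// will validate or replace it. Nothing here is a sourced standard.
const dashDuration = {
  value: 0.18,  // seconds -- starting value, untested
  rationale: "long enough to clear one tile at current move speed",
  testPlan: "playtest: can testers chain dash into jump without drops?",
  revisitBy: "next playtest session",
};
```

Keeping the test plan next to the number makes it obvious which values are provisional when balance passes happen later.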
When critical information is missing, state explicitly:
ASSUMPTION: [what you're assuming]
IMPACT: [why it matters to the design]
IF WRONG: [failure mode]
VALIDATE: [how to check quickly]
Search before proposing when:
If search unavailable, convert to "Assumption + Test Plan" format.
For ANY feature that changes player state (movement abilities, combat actions, status effects):
| Property | Must Define |
|---|---|
| Entry conditions | What states can transition INTO this? |
| Exit conditions | What ends this state? (timer, input, external event) |
| Interruptibility | What can cancel this? (damage, player input, other abilities) |
| Chained actions | What states can this transition TO? |
| Resource cost | What is consumed on entry? On sustain? |
| Edge cases | Behavior on: slopes, ceilings, moving platforms, during hitstun, at resource zero |
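The property table above can be sketched as a per-state definition, so every row must be answered before implementation. The `dash` state and all of its values are hypothetical placeholders:

```typescript
// Hypothetical sketch of the state-property table as a data structure.
interface StateDef {
  entryFrom: string[];        // states that can transition INTO this
  exits: { timer?: number; onInput?: string[]; onEvent?: string[] }; // what ends it
  interruptibleBy: string[];  // what can cancel it
  chainsTo: string[];         // states this can transition TO
  cost: { onEntry?: number; perSecond?: number }; // resource consumed
  edgeCases: string[];        // behaviors that need an explicit answer
}

const dash: StateDef = {
  entryFrom: ["idle", "run", "jump"],
  exits: { timer: 0.18, onEvent: ["wall-hit"] }, // starting value, needs playtest
  interruptibleBy: ["damage"],
  chainsTo: ["idle", "jump", "attack"],
  cost: { onEntry: 25 },
  edgeCases: ["on slope", "during hitstun", "at resource zero", "on moving platform"],
};
```

An empty `edgeCases` list is a review flag: it usually means the edge cases were not considered, not that none exist.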
When told "it feels wrong/boring/clunky," diagnose in order:
| Symptom | Check First | Before Tuning Numbers | Deep Dive |
|---|---|---|---|
| "I didn't know that would happen" | Clarity | Add telegraph, audio cue, UI indicator | player-ux |
| "I don't care" | Motivation | Connect to progression, increase stakes | experience-design |
| "It feels laggy" | Response | Add buffering, allow cancels, reduce lockouts | game-feel |
| "It feels weak" | Satisfaction | Add feedback channels (minimum 2) | game-feel |
| "It doesn't fit" | Fit | Adjust timing, weight, audio texture | game-feel |
| "It's not balanced" | Balance | Check cost curves, dominant strategies | game-balance |
| "It's boring" | Engagement | Check loop, pacing, meaningful choice | experience-design |
| "It's too hard/easy" | Progression | Check flow channel, difficulty curve | progression-systems |
**Rule:** Do not tune damage/timing numbers until Clarity and Response are verified as not the root cause.
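For the "feels laggy" row, one common remedy is an input buffer: inputs pressed slightly before they become valid are held for a short window instead of being dropped. A minimal sketch, assuming a millisecond clock; the 120 ms window is a starting value that needs playtesting, not a sourced standard:

```typescript
// Hypothetical input buffer: remembers recent presses so an action
// pressed just before it becomes legal (e.g. jump during landing lag)
// still fires, instead of being silently dropped.
class InputBuffer {
  private pending = new Map<string, number>(); // action -> press timestamp (ms)
  constructor(private windowMs = 120) {}       // 120 ms is a placeholder value

  press(action: string, nowMs: number): void {
    this.pending.set(action, nowMs);
  }

  // Returns true (and consumes the press) if `action` was pressed
  // within the buffer window of `nowMs`.
  consume(action: string, nowMs: number): boolean {
    const t = this.pending.get(action);
    if (t !== undefined && nowMs - t <= this.windowMs) {
      this.pending.delete(action);
      return true;
    }
    return false;
  }
}

const buf = new InputBuffer(120);
buf.press("jump", 1000);                 // pressed during landing lag
console.log(buf.consume("jump", 1100)); // true: within window, jump fires on landing
console.log(buf.consume("jump", 1100)); // false: already consumed
```

Buffering addresses Response without touching damage or timing numbers, which is exactly the order the rule above prescribes.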
Every significant feature must include scenarios for:
When proposing or critiquing a feature:
Upstream — before evaluating individual mechanics:
| Area | Skill | When to Use |
|---|---|---|
| Concept & vision | game-vision | Pillars, target experience, core loop crystallization, MVG |
| System architecture | systems-design | System interactions, emergence analysis, possibility space |
Downstream — after the 5-Component Filter identifies a weakness:
| Area | Skill | When to Use |
|---|---|---|
| Balance & economy | game-balance | Cost curves, dominant strategies, economy sinks/sources |
| Economy architecture | economy-design | Currency design, sink/source modeling, inflation, LiveOps |
| Engagement & pacing | experience-design | Core loops, emotion layering, "why isn't this fun?" |
| Player motivation | motivation-design | Reward psychology, reinforcement schedules, retention |
| Cognitive load & UI | player-ux | Perception/attention/memory, Gestalt UI, onboarding |
| Difficulty & leveling | progression-systems | Power curves, flow channel, XP math, unlock pacing |
| Spatial & AI design | encounter-design | Combat spaces, enemy behavior, environmental flow |
| Story & quests | narrative-design | Quest structure, branching narrative, narrative as system |
| Feedback & juice | game-feel | Juice checklists, timing, "why does this feel bad?" |
| Audio systems | audio-design | Sound as information system, adaptive music, spatial audio |
| Multiplayer | multiplayer-design | Competitive balance, co-op, social systems, matchmaking |
| Accessibility | accessibility-design | Four pillars (visual, auditory, motor, cognitive), implementation tiers |
| Analytics | data-driven-design | Telemetry, funnels, A/B testing, metrics frameworks |
| Testing & validation | playtest-design | Question generation, observation protocols, metrics |
| Per-frame performance | game-perf | Zero-allocation patterns for hot paths |
| Project bootstrapping | pixi-vector-arcade | PixiJS 8 setup with vector aesthetics |
For detailed guidance:
- references/learning-path.md - Start here: tiered learning path, arcade game quick start, council role mapping, intuition bridge
- references/worked-examples.md - Full pipeline examples: vision → systems → evaluation, diagnostic flows, council debate format
- references/5-component-rubric.md - Full evaluation rubrics with signals, rules, knobs, acceptance tests
- references/domain-guide.md - Combat, movement, camera, audio, UI/UX, progression, persistence domains
- references/templates.md - Edge case enumeration, debugging flow, playtest scripts