By galando
Run a structured SDLC pipeline for AI-generated code: plan features with blast radius analysis, build with TDD and quality gates, review with parallel subagents, and validate across compile, test, lint, type-check, and security checks.
```shell
npx claudepluginhub galando/temper --plugin temper
```

- Unified SDLC command: plan → design → build → review → check with stage gates, feedback loops, and observability
- Plan feature with impact analysis and blast radius
- System design exploration for complex features
- Execute plan with TDD and quality gates
- Technical code review with confidence scoring, review memory, and intent validation
- Temper core: stack detection, quality gates, blast radius, learning
- Hierarchical context loading for AI coding agents — load what you need, defer what you don't
- Version-aware, source-driven development — fetch official docs before writing framework code
Your AI writes fast. Temper makes it last.
Intent-driven development with behavioral testing, security analysis, and quality gates for AI-generated code
Website | Getting Started | Releases
AI writes code fast. But "fast" without "right" creates bugs, technical debt, and features that miss the point.
"Why not just tell Claude to be careful?"
You can. And it helps. But AI-generated code has structural failure patterns that "be careful" doesn't address. These aren't sloppiness — they're limitations of how LLMs generate code, and they map to three unanswered questions:
| Question | What Goes Wrong Without It |
|---|---|
| Did we solve the problem? | Feature works but nobody uses it. Wrong problem solved. |
| Does it do the right things? | Happy path works, edge cases ship broken. |
| Does the code work? | Tests pass, but they test implementation details, not behaviors. |
Most AI tools answer only the third. Temper answers all three.
Temper combines three development methodologies in a single artifact called intent.md. Each layer answers a different question and is enforced at a different stage of the pipeline:
```
intent.md
 |
 +-- Intent Section (IDD)      WHY are we building this?
 |     |  Problem statement
 |     |  Success criteria (each with a Validate: type)
 |     |  Constraints
 |     |
 +-- Scenarios Section (BDD)   WHAT should it do?
 |       Gherkin Given/When/Then
 |       Derived BEFORE architecture
 |       Every planned file traces to a scenario
 |
 +-- /temper:build (TDD)       HOW do we build it?
         Tests written from scenarios
         RED -> GREEN -> REFACTOR
```
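Concretely, a single intent.md combining the three layers might look like the following sketch. The field names and layout here are illustrative, inferred from the structure above; they are not Temper's canonical format.

```markdown
# intent.md (illustrative excerpt)

## Intent
Problem: Users must contact support to reset their password.
Success criteria:
- Users can reset their password unaided, in under 2 minutes. Validate: scenario
- POST /api/reset endpoint exists. Validate: code
- Password-reset support tickets decrease 30%. Validate: metric

## Scenarios
Scenario: Successful password reset
  Given a registered user with a verified email
  When they request a reset link and submit a new password
  Then they can log in with the new password
```

During /temper:build, tests would be written from the Scenarios section first (RED), then implemented until they pass (GREEN), then refactored.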
**Question:** Did we solve the problem?
**When:** Defined during /temper:plan, validated during /temper:review
IDD captures the why behind a feature. Not "add a password reset endpoint" but "users should be able to reset their password without contacting support, completing the flow in under 2 minutes."
The Intent section of intent.md contains:

- A problem statement
- Success criteria, each with a `Validate:` type that tells review how to check it
- Constraints

Each success criterion gets a validation type. This is what makes IDD mechanical instead of subjective:
| Type | What It Means | How Review Checks It | Example |
|---|---|---|---|
| `scenario` | Criterion is satisfied when a linked BDD scenario's test passes | Finds the test, runs it, checks PASS | "Users can reset password" -> linked to scenario "Successful password reset" |
| `code` | Criterion is satisfied when specific code exists | Greps the codebase for the pattern | "POST /api/reset endpoint exists" -> greps for route definition |
| `metric` | Can't be verified before deployment | Flags for post-deploy monitoring | "Support tickets decrease 30%" -> requires production data |
| `manual` | Requires human judgment | Flags for human review, non-blocking | "Reset flow feels intuitive" -> UX review needed |
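The table above can be read as a dispatch: each validation type maps to one mechanical check. The sketch below illustrates that dispatch in Python; the function name, return strings, and the `pytest`/`grep` invocations are assumptions for illustration, not Temper's actual implementation.

```python
import subprocess

def check_criterion(ctype: str, target: str) -> str:
    """Dispatch a success criterion to its validation strategy (illustrative)."""
    if ctype == "scenario":
        # Run the test linked to the BDD scenario; a failure blocks review.
        result = subprocess.run(["pytest", target], capture_output=True)
        return "PASS" if result.returncode == 0 else "FAIL"
    if ctype == "code":
        # Grep the codebase for the expected pattern (e.g. a route definition).
        result = subprocess.run(["grep", "-r", target, "src/"], capture_output=True)
        return "PASS" if result.returncode == 0 else "FAIL"
    if ctype == "metric":
        # Cannot be verified pre-deploy; flag for monitoring instead.
        return "DEFERRED: requires post-deploy production data"
    if ctype == "manual":
        # Requires human judgment; non-blocking.
        return "NEEDS-HUMAN: flagged for review"
    raise ValueError(f"unknown validation type: {ctype}")
```

The key design point is that only `scenario` and `code` can gate the pipeline automatically; `metric` and `manual` are surfaced rather than enforced.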