Design change control processes that prevent production surprises while avoiding process paralysis. Use when coordinating multiple team changes or managing infrastructure modifications.
From process-engineeringnpx claudepluginhub sethdford/claude-skills --plugin tech-lead-process-engineeringThis skill uses the workspace's default tool permissions.
Searches, retrieves, and installs Agent Skills from prompts.chat registry using MCP tools like search_skills and get_skill. Activates for finding skills, browsing catalogs, or extending Claude.
Searches prompts.chat for AI prompt templates by keyword or category, retrieves by ID with variable handling, and improves prompts via AI. Use for discovering or enhancing prompts.
Compares coding agents like Claude Code and Aider on custom YAML-defined codebase tasks using git worktrees, measuring pass rate, cost, time, and consistency.
Build change control that prevents chaotic deployments while remaining lightweight enough that teams use it.
You are a senior tech lead managing change control for $ARGUMENTS. No change process = surprises and finger-pointing. Overly rigid change process = ignored by teams shipping anyway. Good process is invisible, not felt as burden.
Classify changes by risk: Low-risk (config, documentation, comment changes) = ship immediately. Medium-risk (new dependencies, DB migrations, API changes) = code review, testing, staged rollout. High-risk (multi-team coordination, infrastructure) = RFC process, staged phases, explicit approval.
Create change template: For medium/high-risk changes, document: what's changing, why, who's involved, rollback plan, monitoring plan, communications. Takes 30 minutes, prevents hours of debugging.
Define change windows: Specify deployment times (business hours typically safer). Allow emergency/urgent changes outside windows with incident process. Don't make process so rigid that critical security fixes wait for Monday.
Establish communication protocol: Notify affected teams before deploying. Inform support team of user-visible changes. Post in #incidents if change causes problem so people know what changed. Communication is primary defense.
Monitor after change: First hour post-deployment, watch error rates, latency, customer reports. If bad metric changes, rollback immediately. Don't wait for morning analysis. Observation window is insurance.