By kienbui1995
Cost-optimized Claude Code plugin — 16 built-in agents + 33 optional agents (Cloud/Amplitude/AI/ADO/Solo divisions via /install-skills), 232 workflow skills, and 15 slash commands. Run /setup to get started.
npx claudepluginhub kienbui1995/magic-powers --plugin magic-powers
Brainstorm ideas into designs — explore context, ask questions, propose approaches, get approval before implementation
Database review — schema, queries, indexes, migrations
Systematic debugging — reproduce, isolate, fix, verify
Deploy checklist — CI/CD, containerization, infrastructure readiness
Browse and install optional skills from magic-powers into your project
Plan implementation — break down a feature into actionable steps with dependencies
PR workflow — prepare changes for pull request with proper commits and description
Refactor code — improve structure, reduce complexity, maintain behavior
Quick code review — check recent changes for bugs, security issues, performance problems
Security scan — scan for vulnerabilities, auth issues, data exposure
End a Claude Code working session — saves decisions to memory, creates WIP commit if needed, outputs session summary. Run before closing Claude Code.
Start a Claude Code working session — loads project memory, checks git state, outputs 3-line context brief. Run at the beginning of every session.
Personalize magic-powers for this project — detect stack, choose role & priority, install optional features (hooks, MCP, stack-specific skills)
Test-driven development — write tests first, then implement, then refactor
Run a structured development workflow (feature/bugfix/refactor/research/incident). Usage: /workflow [type] "task description" — type auto-detected if omitted.
Use for designing and implementing Azure DevOps CI/CD pipelines — multi-stage YAML design, release management with gates and approvals, pipeline optimization (caching/parallelism), pipeline security hardening, and container/Kubernetes deployments via Azure Pipelines.
Use for building AI features — LLM integration, RAG pipelines, agentic systems, prompt engineering, eval harness setup, and LLMOps. Covers the full technical stack for shipping AI to production.
Use for building AI evaluation infrastructure — test harnesses, CI/CD for AI, golden datasets, regression detection, model-as-judge, prompt A/B testing, and continuous quality monitoring.
Use for productizing AI features — UX design for AI, streaming patterns, error handling, fallback design, responsible AI disclosure, reliability targets, and product metrics for AI features.
Magic-powers Amplitude Division provides role-specific agents and skills for product analytics.
Use for monitoring AI/LLM agent quality, analyzing user topics and intents, investigating specific AI sessions, and reviewing agent performance health. Requires Amplitude MCP.
Use for creating Amplitude charts/dashboards, daily/weekly briefings, general product analytics, and answering product questions. Requires Amplitude MCP. Powered by Amplitude mcp-marketplace skills.
Use for analytics instrumentation planning, event taxonomy design, discovering existing tracking patterns, and generating tracking specs from code diffs or features. Requires Amplitude MCP.
Use for A/B test analysis, experiment monitoring, opportunity discovery, user journey comparison, and account health for B2B. Requires Amplitude MCP.
Use for session replay analysis, UX friction mapping, error diagnosis, reliability monitoring, and debugging user-reported issues. Requires Amplitude MCP.
Use proactively for brainstorming features, system design, architecture decisions, writing implementation plans, choosing tech approaches, and any task requiring deep reasoning before coding.
Use for managing and administering Azure DevOps services — organization setup, project governance, agent pool management, service connections, security policies, branch policies, audit logs, work tracking configuration, artifact feeds, and az devops CLI automation.
Use for building Chrome/Firefox/Safari browser extensions — Manifest V3 architecture, content scripts, extension APIs, popup/options UI, cross-browser compatibility, Chrome Web Store and Firefox AMO publishing.
Magic-powers Cloud Divisions provide role-specific agents and skills for cloud professionals.
Use for Lambda functions, API Gateway, DynamoDB, SQS/SNS, CodePipeline CI/CD, and serverless architecture on AWS. Exam prep: AWS Certified Developer Associate (DVA-C02).
Use for AWS Glue ETL, Kinesis streaming, Redshift optimization, S3 data lakes, DynamoDB design, Lake Formation governance, and data pipeline troubleshooting. Exam prep: AWS Certified Data Engineer Associate (DEA-C01).
Use for AWS CI/CD pipelines, EKS, CloudFormation IaC, SRE practices, CloudWatch monitoring, and incident response. Exam prep: AWS Certified DevOps Engineer Professional (DOP-C02).
Use for SageMaker model training/serving/monitoring, ML pipelines, MLOps on AWS. Exam prep: AWS Certified Machine Learning Engineer Associate (MLA-C01 — replaces retiring MLS-C01).
Use for VPC design, Transit Gateway, Direct Connect, Route 53, load balancers, and network security on AWS. Exam prep: AWS Certified Advanced Networking Specialty (ANS-C01).
Use for AWS IAM, GuardDuty threat detection, VPC security, KMS encryption, Security Hub, and compliance. Exam prep: AWS Certified Security Specialty (SCS-C02).
Use for multi-account AWS architecture, migration strategies, cost optimization, reliability design, and enterprise solution design. Exam prep: AWS Certified Solutions Architect Professional (SAP-C02).
Use for Azure OpenAI Service, Cognitive Services, RAG patterns, AI agents, Computer Vision, NLP solutions, and responsible AI. Exam prep: Azure AI Engineer Associate (AI-102).
Use for Azure Functions, App Service, Azure Container Apps, Azure OpenAI integration, CI/CD with Azure DevOps, and Azure security. Exam prep: Azure AI Cloud Developer Associate (AI-200, replacing AZ-204 July 2026).
Use for Microsoft Fabric Lakehouse, Dataflow Gen2, Fabric Pipelines, Eventstreams, real-time intelligence, and data governance on Microsoft Fabric. Exam prep: Microsoft Fabric Data Engineer Associate (DP-700).
Use for Azure Pipelines CI/CD, AKS deployments, Infrastructure as Code (Bicep/Terraform), Azure Monitor, SRE practices. Exam prep: Azure DevOps Engineer Expert (AZ-400).
Use for Azure VNet design, NSGs, Load Balancer, Application Gateway, VPN Gateway, ExpressRoute, Azure Firewall, and Private Link. Exam prep: Azure Network Engineer Associate (AZ-700).
Use for Microsoft Entra ID, Azure network security, Microsoft Defender for Cloud, Microsoft Sentinel, and cloud security posture. Exam prep: Cloud and AI Security Engineer Associate (SC-500, replacing AZ-500 July 2026).
Use for Azure solution architecture, multi-region design, identity governance, business continuity, infrastructure design. Exam prep: Azure Solutions Architect Expert (AZ-305).
Use for GCP solution architecture, multi-service design, cost optimization, reliability planning, and case study analysis. Exam prep: GCP Professional Cloud Architect.
Use for Cloud Run, Cloud Functions, App Engine, GKE application development, CI/CD pipelines, and GCP service integration. Exam prep: GCP Professional Cloud Developer.
Use for BigQuery schema design, Dataflow pipeline development, Pub/Sub messaging, Cloud Storage data lakes, data quality validation, and Vertex AI MLOps. Exam prep: GCP Professional Data Engineer (GCP-PDE).
Use for GCP CI/CD pipelines, SRE practices (SLO/SLI/error budgets), Cloud Build, GKE deployments, Cloud Monitoring, and troubleshooting. Exam prep: GCP Professional Cloud DevOps Engineer.
Use for Vertex AI pipeline design, feature engineering, model training/serving, MLOps, and ML system design. Exam prep: GCP Professional Machine Learning Engineer.
Use for VPC design, Cloud DNS, Cloud Load Balancing, Cloud Armor, hybrid connectivity (Interconnect/VPN), and network security. Exam prep: GCP Professional Cloud Network Engineer.
Use for GCP IAM configuration, VPC security, data encryption, Security Command Center, compliance requirements, and cloud security audits. Exam prep: GCP Professional Cloud Security Engineer.
Use when writing landing pages, marketing emails, social posts, product descriptions, launch announcements, or any user-facing copy.
Use for database schema reviews, query optimization, migration planning, and indexing strategies.
Use when encountering bugs, test failures, unexpected behavior, or error messages. Systematically diagnoses root cause before proposing fixes.
Use for git strategy, branch management, commit hygiene, merge conflict resolution, and release workflows.
Use when making product decisions, prioritizing features, writing user stories, analyzing competitors, planning launches, or defining pricing strategy.
Use for Quality Assurance (QA) — preventing defects through process design, quality standards, SDLC integration, process audits, and risk-based quality planning. QA is process-oriented and proactive: ensuring the right processes are in place to build quality in from the start.
Use for Quality Control (QC) — detecting defects in software products through test case design, test automation, test data management, defect management, and quality metrics. QC is product-oriented and reactive: finding what's wrong with what was built.
Use after completing code changes, implementing features, or before committing. Reviews correctness, readability, performance, security, and project conventions.
Use for security audits, vulnerability scanning, dependency checks, and reviewing code for security issues.
Use when building an AI product solo — from idea validation to launch. Covers problem validation, product positioning, GTM for one person, pricing, rapid prototyping, retention design, and data moat strategy. The CEO-in-a-box for solo AI founders.
Use for infrastructure reviews, deployment pipelines, monitoring setup, incident response, and reliability improvements.
Use for writing documentation, API docs, READMEs, changelogs, architecture decision records, and user guides.
Use when building, designing, or improving UI/UX — landing pages, dashboards, components, forms, layouts. Also use for design system generation and visual review.
Use when unsure which Claude model to use for a task. Input: any task description. Output: recommended model (Haiku/Sonnet/Opus), reason, cost estimate, and escalation condition. Fast routing tool.
Use to run a structured development workflow end-to-end (feature/bugfix/refactor/research/incident). Selects correct template, assigns right agents and models per phase, dispatches with isolated context, tracks progress. Invoked by /workflow command.
Use to start or end a Claude Code working session. At start: loads memory and outputs 3-line context brief. At end: saves decisions, updates memory, creates WIP commit. Also saves mid-workflow snapshots. Invoked by /session-start and /session-end.
Use when auditing WCAG compliance, testing with assistive technologies, or fixing accessibility issues
Use when implementing or reviewing accessibility - WCAG compliance, screen reader support, keyboard navigation, a11y testing
Use when automating Azure DevOps operations — az devops CLI, REST API calls, PAT management, service principal automation, webhooks, and scripting repeatable ADO administrative tasks.
Use when managing Azure Artifacts — feed creation and permissions, upstream sources, retention policies, package promotion across views, and connecting build pipelines to artifact feeds.
Use when implementing container CI/CD on Azure DevOps — Docker image builds with caching, pushing to Azure Container Registry, deploying to AKS with Helm or kubectl, and image promotion across environments.
Use when setting up or managing Azure DevOps organizations and projects — project creation, team structure, user management, billing, extensions, and org-level settings.
Use when designing Azure Pipelines YAML — multi-stage pipelines, reusable templates, conditions and expressions, matrix strategies, triggers, and pipeline dependencies for complex CI/CD workflows.
Use when improving Azure Pipelines performance — caching dependencies, parallel job strategies, artifact management between stages, test result publishing, code coverage gates, and reducing pipeline runtime.
Use when hardening Azure Pipelines security — YAML pipeline permissions, fork build security, resource authorization, secret scanning, protected resources, and preventing pipeline-based attacks.
Use when managing Azure DevOps pipeline infrastructure — self-hosted agent pools, service connections, variable groups, secure files, environments, approvals, and pipeline resource governance.
Use when implementing release management on Azure DevOps — deployment gates (quality gates), pre/post-deployment approvals, deployment rings, rollback strategies, deployment freeze windows, and multi-environment promotion.
Use when configuring Azure DevOps security — security groups and permissions, branch policies, PR policies, audit log review, and org/project-level security governance.
Use when configuring Azure DevOps work tracking — boards setup, backlog configuration, sprint management, work item type customization, process templates, queries, and team area/iteration paths.
Use when documenting architecture decisions, capturing the context and trade-offs behind technical choices
Use when designing AI agents - tool use, multi-agent orchestration, state management, planning loops, error recovery, and agent evaluation
Use when evaluating AI agent systems — trajectory evaluation, pass@k testing, tool call correctness, non-deterministic behavior testing, and building eval infrastructure specific to multi-step agentic workflows.
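As a concrete illustration of the pass@k testing this skill covers — a minimal sketch of the standard unbiased pass@k estimator (this code is not part of magic-powers):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Unbiased pass@k estimator: the probability that at least one of k
    samples drawn from n attempts (c of which were correct) succeeds."""
    if n - c < k:
        return 1.0  # fewer failures than draws: a success is guaranteed
    return 1.0 - comb(n - c, k) / comb(n, k)

# 1 correct run out of 2 attempts -> pass@1 is 0.5
print(pass_at_k(2, 1, 1))  # → 0.5
```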
Use when designing memory systems for AI agents — tiered memory architecture (in-context, session, long-term, episodic), context window management, memory compression, and retrieval strategies for persistent agent state.
Use when designing reliable AI agent systems — retry strategies, circuit breakers, fallbacks, graceful degradation, timeout management, and handling compound failures in multi-step agent workflows.
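A minimal sketch of the retry-with-backoff-and-fallback pattern this skill covers (illustrative only; names and defaults are assumptions, not plugin code):

```python
import time

def with_retries(fn, max_attempts=3, base_delay=0.01, fallback=None):
    """Call fn, retrying with exponential backoff on any exception;
    return a graceful-degradation fallback once attempts are exhausted."""
    for attempt in range(max_attempts):
        try:
            return fn()
        except Exception:
            if attempt == max_attempts - 1:
                return fallback
            time.sleep(base_delay * (2 ** attempt))  # 1x, 2x, 4x, ...
```

Usage: wrap a flaky tool call, e.g. `with_retries(call_llm, fallback="degraded response")`.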
Use when securing AI agent systems — defending against prompt injection, sandboxing tool execution, preventing indirect attacks through retrieved data, designing minimal-permission tool architectures, and security testing agents.
Use when building defensibility into an AI product — designing data collection strategies that compound over time, domain-specific dataset building, proprietary data as competitive moat vs base models, and when data beats prompt engineering.
Use when building evaluation infrastructure for AI systems — test harnesses, CI pipelines for AI, automated regression detection, golden datasets, and continuous quality measurement.
Use when productizing AI features for end users — UX patterns for AI, streaming, loading states, error handling, fallback design, reliability, and responsible AI disclosure.
Use when pricing an AI product — choosing between usage-based/hybrid/outcome pricing, calculating unit economics, protecting margins against LLM cost, and setting prices that reflect value without losing customers.
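The margin-protection arithmetic behind this skill can be sketched in a few lines (the prices and token counts below are made-up illustrations, not recommendations):

```python
def gross_margin(price_per_request: float, tokens_per_request: int,
                 cost_per_million_tokens: float) -> float:
    """Gross margin fraction for a usage-priced AI feature:
    (revenue - LLM cost) / revenue, per request."""
    llm_cost = tokens_per_request / 1_000_000 * cost_per_million_tokens
    return (price_per_request - llm_cost) / price_per_request

# $0.10/request, 5k tokens/request, $10 per 1M tokens -> 50% margin
print(gross_margin(0.10, 5_000, 10.0))  # → 0.5
```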
Use when defining how an AI product stands out — defensibility assessment, outcome-based messaging, feature vs product decision, competitive moat design, and positioning for a specific niche.
Use when designing AI products for long-term retention — stickiness patterns, daily engagement hooks, workflow integration depth, habit loops specific to AI, and measuring whether users actually keep using your AI feature.
Use when deciding whether to build an AI product — rapid problem validation, market discovery, early user interviews, and demand signals BEFORE writing any code. Prevents the build-first trap.
Use when adding safety layers to AI features - output validation, hallucination detection, content filtering, PII redaction, input sanitization
Use when building marketing dashboards, attribution models, or reporting on campaign performance
Use when validating API schemas, detecting breaking changes, or setting up consumer-driven contract testing
Use when designing REST or GraphQL APIs - endpoint naming, versioning, error handling, pagination, authentication patterns
Use when implementing auth - OAuth 2.0, JWT, session management, API keys, RBAC, or reviewing auth security
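To illustrate the token-signing idea behind JWTs — a stdlib-only, JWT-like sketch (no header/expiry handling; use a vetted library such as PyJWT in real code):

```python
import base64, hashlib, hmac, json

def sign_token(payload: dict, secret: bytes) -> str:
    """Produce base64url(payload).base64url(HMAC-SHA256 signature)."""
    body = base64.urlsafe_b64encode(json.dumps(payload, sort_keys=True).encode())
    sig = base64.urlsafe_b64encode(hmac.new(secret, body, hashlib.sha256).digest())
    return body.decode() + "." + sig.decode()

def verify_token(token: str, secret: bytes):
    """Return the payload if the signature checks out, else None."""
    body, sig = token.split(".")
    expected = base64.urlsafe_b64encode(
        hmac.new(secret, body.encode(), hashlib.sha256).digest()).decode()
    if not hmac.compare_digest(sig, expected):  # constant-time comparison
        return None
    return json.loads(base64.urlsafe_b64decode(body))
```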
Use when reviewing smart contracts for security vulnerabilities, auditing web3 patterns, or preparing for a formal audit
You MUST use this before any creative work - creating features, building components, adding functionality, or modifying behavior. Explores user intent, requirements and design before implementation.
Use when defining tone of voice guidelines, ensuring messaging consistency, or onboarding writers to your brand
Use when implementing caching - Redis, CDN, HTTP cache headers, application-level memoization, or cache invalidation patterns
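The application-level memoization case can be shown in three lines with the standard library (a sketch, not plugin code):

```python
from functools import lru_cache

@lru_cache(maxsize=256)
def fib(n: int) -> int:
    """Repeated calls hit the in-process cache instead of recomputing."""
    return n if n < 2 else fib(n - 1) + fib(n - 2)

print(fib(30))  # → 832040
```

`fib.cache_info()` exposes hit/miss counts, which is handy when validating an invalidation strategy.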
Use when projecting infrastructure needs, planning for traffic growth, or making scaling decisions
Use when designing resilience tests, planning chaos experiments, or validating failure recovery
Use when setting up or improving CI/CD pipelines - GitHub Actions, automated testing, deployment, release automation
Use when automating Claude Code workflows with hooks — PreToolUse (validate/block actions), PostToolUse (react to completions), Stop (enforce standards before finishing), and SessionStart (load context). Configure in .claude/settings.json.
Use when connecting Claude Code to external services via MCP (Model Context Protocol) — configuring MCP servers for databases, APIs, file systems, and custom tools, and designing effective tool descriptions for Claude.
Use when writing or improving CLAUDE.md files — project context that Claude Code reads every session, global vs project rules, what to include for maximum AI effectiveness, and memory-aware documentation patterns.
Use when leveraging Claude Code's auto-memory system — understanding what Claude saves to memory, writing good memory entries manually, structuring the memory directory, and using memory for project continuity across sessions.
Use when configuring Claude Code for a project — .claude/settings.json structure, permission modes, model selection, tool allowlists/denylists, and team vs personal settings.
Use when auditing cloud spend, rightsizing instances, reviewing reserved instance coverage, or finding cost optimization opportunities
Use when researching competitors, positioning features, or preparing for a market entry
Use when building a content calendar, defining target audiences for content, or choosing content formats and distribution channels
Use when deciding which model or agent to use for a task - guides cost-optimized model selection based on task complexity
Use when designing analytics schemas, choosing between star schema and OBT, or modeling entities for a data warehouse
Use when designing ETL/ELT pipelines, choosing between streaming vs batch, or architecting data flow between systems
Use when validating data pipelines, writing data tests, or investigating data anomalies
Use when reviewing database schemas, slow queries, missing indexes, or planning migrations
Use when navigating complex enterprise deals, multi-stakeholder sales, or competitive displacement situations
Use when updating packages, auditing vulnerabilities, managing version pinning, or evaluating new dependencies
Use when preparing designs for developer implementation, writing specs, or managing the design-to-code workflow
Use when reviewing component consistency, design token coverage, or the health of a design system
Use when creating UI designs with Pencil — the MCP-native canvas that lives in your repo. Requires Pencil extension installed in your IDE.
Use when creating UI designs, mockups, or prototypes - integrates Google Stitch SDK for visual design generation
Use when writing API documentation, creating developer tutorials, building devrel content, or engaging the developer community
Use when running discovery calls, qualifying opportunities, or applying MEDDIC/SPIN selling frameworks
Use when facing 2+ independent tasks that can be worked on without shared state or sequential dependencies
Use when creating Dockerfiles, docker-compose configs, optimizing container images, or setting up containerized development environments
Use when designing email campaigns, building drip sequences, segmenting lists, or improving deliverability
Use when bootstrapping new projects, setting up dev environments, writing onboarding docs, or configuring local development tooling
Use when you have a written implementation plan to execute in a separate session with review checkpoints
Use when designing A/B tests, managing experiment hypotheses, analyzing results, or building an experimentation culture
Use when selecting, transforming, or creating features for ML models
Use when processing user interviews, support tickets, NPS comments, or survey responses into actionable insights
Use when building unit economics models, financial projections, or analyzing the business viability of a feature or product
Use when implementation is complete, all tests pass, and you need to decide how to integrate the work
Use when writing a sound design brief, planning music direction, or building the audio systems specification for a game
Use when writing a Game Design Document (GDD), defining core mechanics, or planning player loops
Use when designing viral loops, improving activation rates, running retention experiments, or building growth models
Use when designing hiring processes, writing job descriptions, running performance reviews, or documenting culture and values
Use when writing a blameless postmortem after an incident, identifying root causes, and building follow-up action items
Use when handling production incidents - outage triage, root cause analysis, communication, postmortem writing
Use when reviewing deployments, CI/CD, monitoring, scaling, or incident response configurations
Use when handing off a system, preparing someone to own a codebase, or ensuring knowledge doesn't live in one person's head
Use when planning a product or feature launch, building a GTM strategy, or coordinating a cross-functional release
Use when reviewing code, docs, or features for legal and regulatory requirements (GDPR, CCPA, SOC2, HIPAA, etc.)
Use when designing game levels, planning pacing and challenge curves, or documenting spatial layouts
Use when reducing AI API costs — prompt caching, token reduction, batch processing, cost accounting for multi-step workflows, and building a cost optimization strategy for LLM-powered applications.
Use when measuring AI output quality - eval frameworks, golden datasets, regression testing, benchmarking, human-in-the-loop evaluation
Use when monitoring AI systems in production - cost tracking, latency, token usage, error rates, quality drift, and LLMOps dashboards
Use when managing ML experiments, ensuring reproducibility, or comparing model runs
Use when deploying ML models to production, setting up canary releases, or designing the serving infrastructure
Use when selecting evaluation metrics, detecting bias, or validating model readiness for production
Use when setting up drift detection, retraining triggers, or production model health dashboards
Use when selecting AI models for different tasks, designing cost-aware routing (cheap→expensive cascade), implementing model fallbacks, and optimizing the capability/cost/latency tradeoff across model tiers.
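A toy version of the cheap→expensive cascade this skill describes — the keyword heuristics and tier names are illustrative assumptions, not the plugin's actual routing logic:

```python
def route_model(task: str) -> str:
    """Route by keyword-estimated complexity: default to the cheapest tier,
    escalate only when the task signals deeper reasoning."""
    hard = ("architecture", "design", "debug", "security")
    medium = ("implement", "refactor", "review")
    text = task.lower()
    if any(word in text for word in hard):
        return "opus"
    if any(word in text for word in medium):
        return "sonnet"
    return "haiku"
```

A production router would also add fallbacks when the chosen tier fails or times out.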
Use when choosing between Claude models for a task — decision tree for Haiku/Sonnet/Opus based on task type, cost estimates, escalation triggers, and cascade patterns.
Use when building MVPs fast with a small team - lean startup for AI products, feature prioritization, ship-fast patterns, iteration cycles
Use when designing game story structure, writing branching dialogue, building lore, or planning narrative delivery
Use when writing runbooks for on-call engineers, documenting incident response steps, or creating operational playbooks
Use when setting up, maintaining, or growing an open source project — covers docs, community, licensing, and launch
Use when establishing performance baselines, comparing before/after changes, or validating performance SLAs
Use when diagnosing slow code, optimizing queries, reducing latency, or profiling application performance
Use when load testing APIs, profiling bottlenecks, or validating performance SLAs before release
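Validating a latency SLA usually reduces to a percentile check over samples; a minimal nearest-rank sketch (illustrative, not plugin code):

```python
def percentile(samples, p):
    """Nearest-rank percentile (p in 0..100) over a list of latency samples."""
    ordered = sorted(samples)
    rank = max(1, -(-p * len(ordered) // 100))  # ceil(p/100 * n), rank is 1-based
    return ordered[rank - 1]

# p95 over 100 samples: the 95th smallest value
print(percentile(list(range(1, 101)), 95))  # → 95
```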
Use when reviewing deal stages, maintaining CRM hygiene, building forecasts, or analyzing pipeline health
Use when creating pull requests - PR structure, description templates, review checklist, merge strategies, branch naming
Use when defining KPIs, building dashboards, or measuring whether a feature or product is healthy
Use when making product decisions, prioritizing features, planning launches, writing PRDs, or defining what to build next
Use when designing, testing, or versioning LLM prompts - covers few-shot, chain-of-thought, structured output, prompt templates, and systematic testing
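The few-shot template pattern this skill covers, as a minimal prompt assembler (format and field names are one possible convention, not a prescribed one):

```python
def few_shot_prompt(instruction, examples, query):
    """Assemble instruction, worked input/output examples, then the new input,
    leaving a trailing 'Output:' cue for the model to complete."""
    parts = [instruction, ""]
    for inp, out in examples:
        parts += [f"Input: {inp}", f"Output: {out}", ""]
    parts += [f"Input: {query}", "Output:"]
    return "\n".join(parts)
```

Keeping templates as functions like this makes them easy to version and test systematically.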
Use when writing sales proposals, structuring pricing presentations, or articulating value propositions
Use when conducting quality audits — reviewing process compliance, identifying gaps between defined process and actual practice, conducting structured inspections (code review audits, test quality reviews), and producing audit reports with remediation plans.
Use when designing quality assurance processes — defining quality standards, integrating QA checkpoints into SDLC, creating process documentation, onboarding teams to quality practices, and building a quality-first engineering culture.
Use when managing quality risk — identifying quality risks in a product or release, applying risk-based testing prioritization, creating risk mitigation plans, and communicating quality risk to stakeholders for go/no-go decisions.
Use when designing or implementing test automation — choosing the right automation framework (Playwright, pytest, JUnit), Page Object Model, selector strategies, test isolation, managing flaky tests, and CI integration.
Use when managing defects — writing effective bug reports, applying severity/priority matrix, tracking defect lifecycle, conducting root cause analysis, and measuring defect metrics for process improvement.
Use when measuring and reporting QA quality — defect escape rate, test coverage analysis, flaky test rate, mean time to detect, shift-left metrics, and building quality dashboards for stakeholders.
Use when testing mobile applications — device matrix strategy, iOS and Android testing tools (XCUITest, Espresso, Appium), gesture and interaction testing, network condition testing, app lifecycle testing, and mobile-specific quality concerns.
Use when testing security from a QC perspective — OWASP Top 10 test cases, authentication and authorization testing, input validation testing, security regression testing, and integrating security checks into the QC process.
Use when managing test data — designing test data strategies, using factories and builders, creating fixtures, generating synthetic data, masking PII for testing, and managing test database state.
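The factory-with-overrides pattern mentioned here, in miniature (field names are hypothetical examples):

```python
import itertools

_ids = itertools.count(1)  # unique id per object, so tests never collide

def user_factory(**overrides):
    """Build a test user with sensible defaults; tests override only the
    fields they actually assert on."""
    user = {"id": next(_ids), "name": "Test User",
            "email": "user@example.test", "active": True}
    user.update(overrides)
    return user
```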
Use when designing test cases — applying boundary value analysis, equivalence partitioning, decision tables, pairwise testing, and exploratory testing techniques to maximize defect detection with minimal test cases.
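Boundary value analysis, the first technique listed, can be sketched as a small generator (a simple integer-range variant; real BVA also considers type and domain constraints):

```python
def boundary_values(lo, hi):
    """For an inclusive integer range [lo, hi], return each edge plus its
    immediate neighbours — the classic BVA candidate set."""
    return sorted({lo - 1, lo, lo + 1, hi - 1, hi, hi + 1})

print(boundary_values(1, 100))  # → [0, 1, 2, 99, 100, 101]
```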
Use when facilitating User Acceptance Testing — planning UAT sessions with business stakeholders, designing business-scenario test cases (not technical), coordinating UAT execution, managing UAT defects, and obtaining formal sign-off.
Use when defining definition of done, setting release criteria, or building automated quality checks into the CI/CD pipeline
Use when building RAG pipelines - document ingestion, chunking, embedding, vector search, retrieval, reranking, and generation with context
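The chunking step above can be sketched as fixed-size windows with overlap, so context spanning a boundary still lands in some chunk (sizes below are arbitrary illustrations):

```python
def chunk_text(text, size=200, overlap=50):
    """Split text into fixed-size chunks; consecutive chunks share `overlap`
    characters so retrieval doesn't lose cross-boundary context."""
    if overlap >= size:
        raise ValueError("overlap must be smaller than size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks
```

Production pipelines usually chunk on semantic boundaries (sentences, headings) rather than raw character counts.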
Use when receiving code review feedback, before implementing suggestions - requires technical rigor and verification, not performative agreement
Use when improving code structure without changing behavior - extract methods, simplify conditionals, reduce duplication, improve naming
Use when completing tasks, implementing major features, or before merging to verify work meets requirements
Use when building quarterly roadmaps, prioritizing the backlog, or communicating what's coming and why
Use when running call reviews, building rep onboarding plans, handling objection training, or improving sales team performance
Use when reviewing code for security vulnerabilities, auth issues, data exposure, or before deploying to production
Use when improving organic search rankings, conducting keyword research, or fixing technical SEO issues
Use when starting or ending a Claude Code working session — load context from memory, output a session brief, save decisions and progress at session end, and ensure work is resumable next session.
Use when defining service level objectives, SLIs, or error budgets for reliability engineering
Use when building a social media strategy, scheduling content, or adapting messaging per platform
Use when building go-to-market as a solo founder — distribution playbook for <5 hours/week, AI-powered personalized outreach, community leverage (Twitter/Reddit/ProductHunt/IndieHackers), and sales-led loops for early traction.
Use when designing pre-sales architectures, creating technical proposals, or helping customers integrate your platform
Use when designing 3D layouts, applying depth cues, planning spatial hierarchies, or ensuring user comfort in spatial experiences
Use when starting any non-trivial feature — enforces requirements → design → tasks workflow with explicit approval gates before writing code. Prevents wasted implementation effort.
Use when facilitating sprint planning, refining the backlog, calculating team capacity, or setting sprint goals
Use when facilitating sprint retrospectives, choosing retro formats, or driving actionable outcomes from team reflection
Use when writing exec updates, status reports, or communicating product decisions to non-technical stakeholders
Use when executing implementation plans with independent tasks in the current session
Use when building a support triage process, writing escalation paths, or creating templates for common support issues
Use when encountering any bug, test failure, or unexpected behavior, before proposing fixes
Use when onboarding a new engineer, setting up their dev environment, introducing the codebase, or planning their first PR
Use when prioritizing technical debt, deciding what to fix vs live with, or allocating time for debt reduction
Use when writing shader briefs, defining performance budgets, creating LOD strategies, or bridging art and engineering
Use when writing API docs, runbooks, user guides, architecture docs, or internal wikis
Use when writing documentation, READMEs, API docs, changelogs, ADRs, or user guides — ensures clarity, structure, and project consistency
Use when implementing any feature or bugfix, before writing implementation code
Use when building a test coverage plan, choosing what to test at each layer, or applying risk-based testing to focus effort
Use when building reproducible ML training workflows, orchestrating training jobs, or versioning training artifacts
Use when planning user interviews, writing discussion guides, running usability tests, or synthesizing research findings
Use when writing user stories, acceptance criteria, or breaking epics into shippable slices
Use when starting feature work that needs isolation from current workspace or before executing implementation plans
Use when starting any conversation - establishes how to find and use skills, model routing, and cost-aware development
Use when conducting a heuristic evaluation of an existing interface, identifying usability problems, or prioritizing UX improvements
Use when about to claim work is complete, fixed, or passing, before committing or creating PRs — requires running verification commands and confirming output before making any success claims
Use when rapidly prototyping AI products — going from idea to working demo in 4-8 hours using AI-assisted development tools (Cursor, Bolt, v0, Lovable), knowing when to vibe vs spec, and transitioning prototypes to production.
Use when designing for Apple visionOS, applying spatial design conventions, or building for the Apple Vision Pro platform
Use when executing a structured workflow — select and run a feature, bugfix, refactor, research, or incident template with correct agent and model assignments per phase.
Use when you have a spec or requirements for a multi-step task, before touching code
Use when creating new skills, editing existing skills, or verifying skills work before deployment
Use when designing for XR (AR/VR/MR), choosing interaction modes, or adapting 2D UI patterns for spatial computing
End-to-end instrumentation workflow orchestrating diff-intake → discover-event-surfaces → instrument-events. Uses mcp__Amplitude__get_event_properties, mcp__Amplitude__get_project_context.
B2B account health assessment covering usage patterns, expansion risk, and growth opportunities. Uses mcp__Amplitude__get_users, mcp__Amplitude__query_amplitude_data.
Analyze user inquiries to AI agents, identify topic coverage gaps, and prioritize improvements. Uses mcp__Amplitude__query_amplitude_data, mcp__Amplitude__get_feedback_insights, mcp__Amplitude__get_feedback_mentions.
Deep-dive investigation of Amplitude charts to identify trends, anomalies, and root causes. Uses mcp__Amplitude__query_chart, mcp__Amplitude__render_chart, mcp__Amplitude__get_event_properties.
Synthesize an Amplitude dashboard into executive narrative with key findings, trends, and risks. Uses mcp__Amplitude__get_dashboard, mcp__Amplitude__query_charts.
Comprehensive A/B test analysis with statistical validity, segment breakdown, and SHIP/ITERATE/ABANDON recommendation. Uses mcp__Amplitude__query_experiment, mcp__Amplitude__get_experiments.
Synthesize customer feedback into themes, pain points, and prioritized product roadmap recommendations. Uses mcp__Amplitude__get_feedback_insights, mcp__Amplitude__get_feedback_comments, mcp__Amplitude__get_feedback_trends.
Identify behavioral differences between two user groups combining session replays with quantitative metrics. Uses mcp__Amplitude__get_session_replays, mcp__Amplitude__query_amplitude_data.
Build Amplitude charts from natural language descriptions. Uses mcp__Amplitude__get_event_properties, mcp__Amplitude__get_context, mcp__Amplitude__get_charts.
Build Amplitude dashboards from requirements by discovering existing charts and organizing them into logical sections. Uses mcp__Amplitude__create_dashboard, mcp__Amplitude__get_charts, mcp__Amplitude__query_charts.
Morning analytics briefing covering the last 1-2 days — surfaces anomalies, trends, risks, and wins. Uses mcp__Amplitude__query_amplitude_data, mcp__Amplitude__get_charts, mcp__Amplitude__get_context.
Transform bug reports into actionable reproduction steps using session replay data. Uses mcp__Amplitude__list_session_replays, mcp__Amplitude__get_session_replay_events, mcp__Amplitude__get_session_replays.
Triage and investigate application errors using Amplitude's auto-captured error events. Uses mcp__Amplitude__query_amplitude_data, mcp__Amplitude__get_session_replays, mcp__Amplitude__get_charts.
Transform code diffs (PRs, branches, files) into structured YAML briefs for analytics instrumentation planning. Minimal MCP usage — primarily code analysis.
Map existing analytics SDK implementations in a codebase to understand naming conventions and instrumentation patterns. Uses mcp__Amplitude__get_event_properties.
Step 2 of instrumentation workflow — identify candidate analytics events from code change briefs. Uses mcp__Amplitude__get_event_properties, mcp__Amplitude__get_context.
Cross-reference analytics, experiments, session replays, and feedback to surface highest-impact product improvements. Uses mcp__Amplitude__query_amplitude_data, mcp__Amplitude__get_session_replays, mcp__Amplitude__get_feedback_insights.
Step 3 of instrumentation workflow — transform event candidates into concrete tracking specifications with exact code locations and property definitions. Uses mcp__Amplitude__get_event_properties, mcp__Amplitude__get_project_context.
Deep-dive into a specific AI agent session to identify failure root cause and improvement opportunities. Uses mcp__Amplitude__get_session_replay_events, mcp__Amplitude__query_amplitude_data.
Proactive health monitoring of AI/LLM features covering quality, cost, performance, and error metrics. Uses mcp__Amplitude__query_amplitude_data, mcp__Amplitude__get_charts, mcp__Amplitude__get_agent_results.
Proactive reliability health check using auto-captured error and network failure data. Uses mcp__Amplitude__query_amplitude_data, mcp__Amplitude__get_charts, mcp__Amplitude__get_context.
Synthesize multiple session replays into a UX friction map identifying systemic usability issues. Uses mcp__Amplitude__list_session_replays, mcp__Amplitude__get_session_replay_events, mcp__Amplitude__get_session_replays.
Retrieve and synthesize AI agent analysis findings ranked by business impact. Uses mcp__Amplitude__get_agent_results, mcp__Amplitude__get_feedback_insights.
Create, validate, audit, and govern Amplitude event taxonomy across a product. Uses mcp__Amplitude__get_event_properties, mcp__Amplitude__get_project_context, mcp__Amplitude__query_amplitude_data.
Weekly analytics briefing synthesizing 7 days of data with week-over-week momentum analysis. Uses mcp__Amplitude__query_amplitude_data, mcp__Amplitude__get_charts.
Applies Lenny Rachitsky's product wisdom to your specific situation by searching his newsletter archive. Uses mcp__Amplitude__search.
Publish and maintain Chrome extensions on the Chrome Web Store — packaging, store listing, screenshots, review process, and update management.
Build content scripts for DOM manipulation, page interaction, and messaging between extension and web pages.
Build browser extensions that work across Chrome, Firefox, Safari, and Edge — API differences, polyfills, and browser-specific manifest requirements.
Use Chrome/WebExtension APIs correctly — storage, tabs, alarms, notifications, contextMenus, identity, and cross-browser compatibility.
Secure browser extensions — CSP configuration, minimal permissions, content script XSS prevention, and handling sensitive data safely.
Test browser extensions with Playwright, unit test background workers and storage, and set up CI for extension projects.
Build extension UI — popup, options page, side panel, and devtools panel — with React/Vue or vanilla JS.
Publish Firefox browser extensions to Mozilla Add-ons (AMO) — packaging, review process, source code submission, and Firefox-specific requirements.
Design and configure Manifest V3 browser extensions — service workers, permissions, declarative rules, and migration from MV2.
Use when setting up AWS observability with CloudWatch metrics, logs, alarms, dashboards, X-Ray tracing, or CloudWatch Synthetics canaries. Covers monitoring domains across DEA-C01, DVA-C02, and DOP-C02 exams.
Use when building AWS CI/CD pipelines with CodePipeline/CodeBuild/CodeDeploy, choosing deployment strategies, configuring buildspec.yml, or setting up artifact management with CodeArtifact. Covers AWS DOP-C02 and DVA-C02 CI/CD domains.
Use when designing DynamoDB schemas, choosing partition and sort keys, planning GSI/LSI indexes, selecting capacity modes, or implementing DynamoDB Streams and DAX. Covers AWS DEA-C01 and DVA-C02 NoSQL design domains.
Use when designing EKS clusters, choosing node types (managed/Fargate), implementing IRSA for pod IAM access, scaling with Karpenter, or troubleshooting EKS networking. Covers AWS DOP-C02 and SAP-C02 container orchestration domains.
Use when building ETL pipelines with AWS Glue, managing the Glue Data Catalog, designing crawler strategies, or choosing between Glue and EMR. Covers AWS DEA-C01 domain: Data Ingestion and Transformation.
Use when setting up AWS GuardDuty threat detection, managing findings, automating incident response, configuring multi-account setups, or understanding GuardDuty vs Inspector vs Security Hub. Covers AWS SCS-C02 detection and response domain.
Use when designing IAM policies, troubleshooting access denied errors, implementing SCPs, permission boundaries, cross-account roles, or using IAM Access Analyzer. Covers AWS SCS-C02, SAP-C02, and DVA-C02 identity domains.
Use when designing real-time streaming architectures with Amazon Kinesis, choosing between Kinesis services, managing shards, or comparing Kinesis vs MSK. Covers AWS DEA-C01 and DVA-C02 streaming domains.
Use when implementing fine-grained access control for S3 data lakes, setting up column-level or row-level security, sharing data across accounts, or governing the Glue Data Catalog with Lake Formation. Covers AWS DEA-C01 data governance domain.
Use when building Lambda functions, designing serverless architectures, configuring event sources, managing concurrency and cold starts, or setting up Lambda@Edge. Covers AWS DVA-C02 and SAP-C02 serverless domains.
Use when designing Amazon Redshift schemas, optimizing query performance, choosing distribution and sort keys, planning RA3 clusters, or comparing Redshift vs Athena. Covers AWS DEA-C01 data warehousing domain.
Use when designing S3 data lakes, selecting storage classes, configuring lifecycle policies, implementing access control and encryption, or optimizing S3 performance. Covers AWS DEA-C01 and SAP-C02 storage domains.
Use when building ML training/serving pipelines on AWS SageMaker, implementing MLOps with SageMaker Pipelines and Model Registry, monitoring models in production, or optimizing training costs with Spot instances. Covers AWS MLA-C01 exam domains.
Use when designing VPC architectures, configuring subnets and routing, setting up hybrid connectivity (VPN/Direct Connect/Transit Gateway), or choosing between load balancer types. Covers AWS ANS-C01 and SAP-C02 networking domains.
Use when designing Azure Kubernetes Service (AKS) clusters, configuring node pools, integrating Azure AD/Entra ID RBAC, implementing Workload Identity, planning scaling strategies, or studying for AZ-400 or AZ-305.
Use when configuring Microsoft Entra ID (Azure AD), managing app registrations, setting up Conditional Access policies, implementing PIM for privileged access, or studying for AZ-500, SC-500, or AZ-305.
Use when building Azure Pipelines CI/CD workflows, configuring YAML pipelines, setting up deployment environments with approvals, choosing agent types, or studying for Azure DevOps Engineer Expert (AZ-400).
Use when building serverless event-driven applications with Azure Functions, designing Durable Functions orchestration workflows, choosing hosting plans, or studying for AZ-204 (Azure Developer Associate) or AI-200.
Use when setting up Azure observability with Log Analytics, configuring metric and log alerts, integrating Application Insights for APM, routing diagnostic logs, or studying for AZ-400 or AZ-305.
Use when designing Azure VNet architecture, configuring NSGs, selecting load balancers, planning hybrid connectivity (VPN/ExpressRoute), implementing Private Link, or studying for Azure Network Engineer Associate (AZ-700) or AZ-305.
Use when integrating Azure OpenAI Service, deploying GPT/embedding models, building RAG applications with Azure AI Search, implementing prompt engineering patterns, or studying for Azure AI Engineer Associate (AI-102) or AI-200.
Use when building no-code/low-code data transformations in Microsoft Fabric with Dataflow Gen2, configuring Power Query transformations, setting up incremental refresh, or studying for DP-700 (Microsoft Fabric Data Engineer Associate).
Use when building real-time streaming pipelines in Microsoft Fabric with Eventstreams, connecting Event Hubs or IoT Hub sources, processing streams with windowed aggregations, or routing to Eventhouse/Lakehouse destinations. Covers DP-700 real-time intelligence domain.
Use when configuring Microsoft Fabric workspace security, sensitivity labels, item-level permissions, endorsement, domain management, or row-level security in semantic models. Covers DP-700 governance and security domain.
Use when designing Microsoft Fabric Lakehouse architecture, working with Delta tables, OneLake storage, Spark notebooks, or studying for DP-700 (Microsoft Fabric Data Engineer Associate). Covers Fabric architecture, Delta Lake, OneLake shortcuts, and medallion patterns.
Use when monitoring Microsoft Fabric capacity usage, pipeline run failures, notebook performance, semantic model refresh errors, or managing Fabric capacity with the Capacity Metrics app. Covers DP-700 monitoring and optimization domain.
Use when building data pipeline orchestration in Microsoft Fabric, configuring Copy Data activities, scheduling data movement, implementing control flow logic, or studying for DP-700 (Microsoft Fabric Data Engineer Associate).
Use when implementing Microsoft Sentinel as SIEM/SOAR, configuring data connectors, building analytics rules, managing incidents, automating response with playbooks, or studying for SC-500 (Cloud and AI Security Engineer) or AZ-500.
Use when designing BigQuery schemas, optimizing queries, managing partitioning/clustering, controlling costs, or studying for GCP Professional Data Engineer (GCP-PDE). Covers domains: Design data processing systems (~22%) and Store the data (~15-20%).
Use when building CI/CD pipelines on GCP with Cloud Build, Cloud Deploy, or Artifact Registry. Covers GCP Cloud Developer domain: Building and testing (~26%) and Deploying (~19%). Also covers DevOps Engineer domain: CI/CD pipelines (~25%).
Use when configuring GCP IAM roles, service accounts, org policies, Workload Identity Federation, or least-privilege access. Covers GCP Security Engineer domain: Configuring access (~22-28%) and DevOps domain: Org management (~20%).
Use when setting up Cloud Monitoring dashboards, alerting policies, log-based metrics, distributed tracing, or building SLO/SLI frameworks. Covers GCP DevOps Engineer domain: Troubleshooting (~25%) and Optimizing performance (~12%).
Use when designing VPC networks, configuring subnets/routes/firewall rules, setting up VPC Peering or Shared VPC, or designing hybrid connectivity. Covers GCP Network Engineer domains: VPC Design (~20-25%) and VPC Implementation (~20-25%).
Use when choosing between Cloud Run and Cloud Functions, designing serverless compute on GCP, configuring concurrency/scaling, or building event-driven architectures. Covers GCP Cloud Developer domain: Designing apps (~36%).
Use when designing Cloud Storage buckets, choosing storage classes, setting lifecycle rules, controlling access, or using GCS as a data lake. Covers GCP-PDE domain: Store the data (~15-20%).
Use when designing data quality checks, validating pipeline outputs, setting up schema validation, or using Dataform/Dataplex/Cloud DQ. Covers GCP-PDE domain: Prepare and use data for analysis (~10-15%).
Use when building Apache Beam pipelines on Google Cloud Dataflow — batch ETL, streaming, windowing, triggers, or Dataflow vs Dataproc decisions. Covers GCP-PDE domain: Ingest and process data (~25-30%).
Use when designing GKE clusters, choosing Autopilot vs Standard, configuring workloads, setting up Workload Identity, or managing node pools. Covers GCP Cloud Developer domain: Deploying (~19%) and DevOps domain: CI/CD (~25%).
Use when designing Pub/Sub topics/subscriptions, choosing push vs pull, handling message ordering, dead letters, or integrating Pub/Sub with Dataflow/BigQuery. Covers GCP-PDE domain: Ingest and process data (~25-30%).
Use when configuring Security Command Center, reviewing security findings, setting up threat detection, or managing compliance posture on GCP. Covers GCP Security Engineer domain: Managing operations (~16-22%).
Use when building ML pipelines on Vertex AI, managing model lifecycle, setting up feature stores, or deploying models for serving. Covers GCP-PDE domain: Maintain and automate data workloads (~10-15%) and GCP ML Engineer domain: MLOps (~30-35%).
Use when configuring VPC Service Controls perimeters to protect GCP services from data exfiltration, or designing access levels for conditional access. Covers GCP Security Engineer domain: Securing communications and boundary protection (~18-24%) and Ensuring data protection (~23%).
Team-oriented workflow plugin with role agents, 27 specialist agents, ECC-inspired commands, layered rules, and a hooks skeleton.
Uses power tools
Uses Bash, Write, or Edit tools
Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner — agents, skills, hooks, rules, and legacy command shims evolved over 10+ months of intensive daily use
Complete collection of battle-tested Claude Code configs — agents, skills, hooks, rules, and legacy command shims evolved over 10+ months of intensive daily use
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
AI-supervised issue tracker for coding workflows. Manage tasks, discover work, and maintain context with simple CLI commands.
Context-Driven Development plugin that transforms Claude Code into a project management tool with structured workflow: Context → Spec & Plan → Implement
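The three-step instrumentation workflow listed above (diff-intake → discover-event-surfaces → instrument-events) passes structured artifacts between steps: a code diff becomes a structured brief, the brief yields candidate analytics events, and each candidate becomes a concrete tracking specification with a code location and property definitions. A minimal sketch of that data flow — all class names, fields, and the discovery heuristic are hypothetical illustrations, not the plugin's actual schema:

```python
from dataclasses import dataclass, field

# Hypothetical shapes for the instrumentation pipeline:
# diff-intake -> discover-event-surfaces -> instrument-events.

@dataclass
class ChangeBrief:
    """Step 1 (diff-intake): a code diff summarized as a structured brief."""
    files: list[str]
    summary: str

@dataclass
class EventCandidate:
    """Step 2 (discover-event-surfaces): a candidate analytics event."""
    name: str
    source_file: str

@dataclass
class TrackingSpec:
    """Step 3 (instrument-events): a concrete tracking specification."""
    event_name: str
    code_location: str
    properties: dict[str, str] = field(default_factory=dict)

def discover_events(brief: ChangeBrief) -> list[EventCandidate]:
    # Toy heuristic: propose one candidate event per changed file,
    # named after the file's basename.
    return [
        EventCandidate(
            name=f"viewed_{path.split('/')[-1].split('.')[0]}",
            source_file=path,
        )
        for path in brief.files
    ]

def instrument(candidate: EventCandidate) -> TrackingSpec:
    # Pin each candidate to a code location and property set.
    return TrackingSpec(
        event_name=candidate.name,
        code_location=f"{candidate.source_file}:1",
        properties={"source": "sketch"},
    )

brief = ChangeBrief(files=["src/checkout.ts"], summary="Add checkout flow")
specs = [instrument(c) for c in discover_events(brief)]
print(specs[0].event_name)  # → viewed_checkout
```

In the actual plugin, each step is an Amplitude-backed skill (validating candidates against the live taxonomy via the MCP tools named above); the sketch only shows how structured output from one step becomes input to the next.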