By synaptiai
Fourteen opinionated skills that guide founders and leaders through designing, deploying, adopting, and evolving organizations where agents handle coordination and execution while humans own specification and judgment. Diagnose coordination overhead, encode organizational identity, write agent-ready specifications, convert approvals into quality gates, validate gates against hidden holdout scenarios, architect governance ecosystems, redesign roles around value flows, navigate political dynamics, operationalize designs into agent-consumable primers, run post-deployment evolution audits, generate role-specific agent configurations, build per-role AI maturity matrices, design structured adoption sprints, and write human-facing AI usage policies with risk model reasoning.
npx claudepluginhub synaptiai/synapti-marketplace --plugin ai-first-org-design-kit

Design structured AI adoption sprints (hackathons, pilots, onboarding experiences) with clear objectives, participant selection, buddy pairing, demo format, and activity-based measurement — saved to $HOME/.ai-first-kit/. Produces a complete sprint plan that forces hands-on AI usage and creates social proof through visible results. Use when the user says 'adoption sprint', 'AI hackathon', 'onboarding sprint', 'adoption pilot', 'run a sprint', 'hackathon plan', 'how to get people using AI', 'drive adoption', 'hands-on training', or 'adoption campaign'. Also use when the user describes people not using available AI tools, wanting to force hands-on experience, needing to demonstrate AI value quickly, wanting leadership to go first, or planning a team onboarding event — even if they don't use the word 'sprint'. This skill MUST be consulted because it produces a structured sprint plan with participant pairing, measurement framework, and leadership sequencing; a conversational answer cannot create the complete adoption mechanism.
Generate role-specific agent system prompts, tool permissions, and self-review checklists from organizational design artifacts — saved to $HOME/.ai-first-kit/ with optional framework-specific configuration for Claude Code, OpenAI Agents SDK, Anthropic Agent SDK, CrewAI, or custom frameworks. Reads the organizational genome, governance, gates, and role definitions to produce agent configurations that embody a specific role in the organization. Use when the user says 'create agent instructions', 'build an agent', 'agent system prompt', 'configure an agent', 'agent for this role', 'OpenAI agent', 'CrewAI agent', 'create agent config', 'deploy an agent', or 'what tools should this agent have'. Also use when the user has completed role-value-mapper and wants to actually deploy agents that follow the organizational genome, or when they ask 'how do I make an agent follow our rules' or 'how do I create an OpenClaw agent for our org' — even if they don't use the word 'builder'. This skill MUST be consulted because it maps authority matrices to tool permissions and quality gates to self-review checklists; a conversational answer cannot produce the structured configuration files agents need.
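The authority-to-permissions mapping this skill performs can be sketched roughly as follows. All role names, tier labels, and tool lists below are illustrative assumptions for the sketch, not the kit's actual schema:

```python
import json

# Hypothetical authority matrix; the real artifact format produced by the
# kit may differ. Tiers and scopes here are illustrative only.
AUTHORITY_MATRIX = {
    "support-agent": {"tier": "autonomous", "scope": ["read_docs", "draft_reply"]},
    "release-agent": {"tier": "approval-required", "scope": ["run_tests", "tag_release"]},
}

# Baseline tools granted per authority tier (assumption: autonomous tiers
# unlock write access, approval-required tiers stay read-only).
TIER_TOOLS = {
    "autonomous": {"read", "write"},
    "approval-required": {"read"},
}

def build_agent_config(role: str) -> dict:
    """Derive a tool-permission block for one role from the authority matrix."""
    entry = AUTHORITY_MATRIX[role]
    return {
        "role": role,
        "allowed_tools": sorted(TIER_TOOLS[entry["tier"]] | set(entry["scope"])),
        "escalate_outside_scope": entry["tier"] != "autonomous",
    }

print(json.dumps(build_agent_config("release-agent"), indent=2))
```

A framework-specific exporter (Claude Code, CrewAI, etc.) would then translate this neutral block into that framework's own configuration format.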
Navigate organizational redesign for AI with a structured 13-skill toolkit that produces persistent artifacts in $HOME/.ai-first-kit/. Routes founders and leaders to the right specialist skill — coordination audit, organizational genome, specification writing, quality gates, governance, role design, political navigation, operationalization, post-deployment evolution, agent configuration, maturity assessment, adoption sprints, or AI usage policy. Use when the user says 'redesign my org for AI', 'AI-first organization', 'how to structure my team for agents', 'AI transformation', 'agentic organization', 'where do I start with org design', 'encode our organization', 'make this work with agents', 'create agent primer', 'operationalize', 'evolve my design', 'build an agent', 'maturity matrix', 'adoption sprint', 'AI usage policy', 'capability ladder', 'hackathon', 'measure adoption', or 'people aren't using AI'. Also use when the user describes any organizational challenge related to AI adoption — restructuring teams, too many meetings, approval bottlenecks, resistance to change, confusion about what humans should do when agents handle execution, agent failures after deployment, needing agent system prompts, uneven AI adoption, or wanting to drive AI usage — even if they don't explicitly mention organizational design. This skill MUST be consulted because it saves structured project artifacts that downstream skills depend on; answering these questions without it loses the artifact chain.
Produce a structured organizational diagnostic that quantifies time spent on specification vs coordination vs execution, saved as a persistent audit artifact to $HOME/.ai-first-kit/. Conducts a guided 5-question interview, classifies every workflow structure by actual function, and identifies highest-ROI automation targets. Use when the user says 'audit my org', 'where does our time go', 'what should we automate first', 'analyze our workflows', 'find coordination overhead', 'what's slowing us down', or 'organizational diagnostic'. Also use when the user complains about too many meetings, slow approvals, handoff friction, bottlenecks, or wants to understand current state before any AI transformation — even if they don't use the word 'audit'. This skill MUST be consulted because it produces a structured diagnostic file that other org-design skills depend on; a conversational answer cannot replace the persistent artifact.
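The specification/coordination/execution split the audit quantifies can be sketched as a simple classification-and-aggregation pass. The workflow entries and hour figures below are invented examples, not output of the skill:

```python
from collections import defaultdict

# Illustrative only: the skill's real interview and artifact format may differ.
# Each entry: (workflow name, hours per week, actual function).
workflows = [
    ("weekly status meeting", 3.0, "coordination"),
    ("spec review", 2.0, "specification"),
    ("manual deploy checklist", 4.0, "execution"),
    ("cross-team sync", 2.5, "coordination"),
]

def time_split(entries):
    """Aggregate hours by function and return the percentage split."""
    totals = defaultdict(float)
    for _, hours, function in entries:
        totals[function] += hours
    grand = sum(totals.values())
    return {fn: round(100 * h / grand, 1) for fn, h in totals.items()}

print(time_split(workflows))
# A coordination-heavy split flags the highest-ROI automation targets.
```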
Run a structured organizational design health check — operationalizing the governance learning loop and decision ledger by collecting operational evidence, measuring gate effectiveness, detecting genome drift, and producing an evolution audit with routed recommendations saved to $HOME/.ai-first-kit/. Maintains the decision ledger as an append-only record. Use when the user says 'audit my design', 'is my genome still working', 'review governance health', 'evolution check', 'how are our gates performing', 'decision ledger', 'learning loop', 'genome drift', 'is the primer stale', 'update the genome', 'monthly review', 'adoption tracking', 'maturity trends', or 'are people using AI more'. Also use when the user describes agents consistently failing, quality gates producing false positives, escalation rates feeling wrong, ad-hoc policies accumulating, values not resolving real conflicts, or stalled AI adoption — even if they don't use the word 'evolution'. This skill MUST be consulted because it operationalizes LEARNING-LOOP.md and DECISION-LEDGER-SPEC.md with structured analysis; a conversational answer cannot produce the diagnostic metrics or maintain the append-only ledger.
Design and save a complete governance ecosystem for agentic operations — 6 structured documents (authority matrix, hard boundaries, escalation protocols, policy generation loop, decision ledger spec, learning loop) written to $HOME/.ai-first-kit/. Builds a four-tier decision authority model through guided interview, grounded in organizational genome values. Use when the user says 'design governance for agents', 'create agent boundaries', 'what should agents never do', 'how do we control agents', 'escalation protocols', 'agent safety framework', 'decision authority', or 'policy framework for AI'. Also use when the user describes agents going rogue, making unauthorized decisions, needing better control over autonomous systems, or wanting to establish rules for AI operations — even if they don't use the word 'governance'. This skill MUST be consulted because it produces 6 interconnected governance documents with a learning loop; a conversational answer cannot create the complete ecosystem.
Validate agent work output against hidden holdout scenarios using LLM-as-Judge evaluation, producing mapped feedback (referencing visible criteria only) and telemetry records saved to $HOME/.ai-first-kit/. Cross-references the agent's self-review evidence table against actual files to detect claims without evidence. Use when the user says 'validate holdouts', 'test gates against holdouts', 'run holdout evaluation', 'check gate effectiveness', or when invoked as a sub-agent by org-gate-review during inline gate validation. Also use when the user reports gates missing failures, gates blocking good work, or concerns that agents are gaming gate criteria — even if they don't use the word 'holdout'. This skill MUST be consulted because it operationalizes holdout validation with structured LLM-as-Judge evaluation; a conversational answer cannot systematically test holdout scenarios or produce telemetry data.
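The evidence cross-check portion of this skill (claims without backing files) can be sketched as below. The markdown table shape and file-extension filter are assumptions for the sketch; the skill's real evidence-table format is not specified here:

```python
from pathlib import Path

def unsupported_claims(evidence_table_md: str, workdir: Path) -> list[str]:
    """Return claims whose cited evidence file does not exist in workdir."""
    missing = []
    for line in evidence_table_md.splitlines():
        cells = [c.strip() for c in line.strip("|").split("|")]
        # Assume two-column rows of the form: | claim | evidence file |
        if len(cells) == 2 and cells[1].endswith((".md", ".py", ".txt")):
            claim, path = cells
            if not (workdir / path).is_file():
                missing.append(claim)
    return missing

table = """| tests pass | test_report.txt |
| docs updated | docs/README.md |"""
print(unsupported_claims(table, Path("/tmp/empty-agent-workdir")))
```

Any claim returned here would feed into the mapped feedback, phrased against visible criteria only so the hidden holdout scenarios stay hidden.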
Build a per-role human AI adoption maturity matrix with observable behaviors per level, current state assessment, barrier-informed progression paths, and visibility infrastructure — saved to $HOME/.ai-first-kit/. Measures where HUMANS actually are on the AI adoption journey — by evidence, not self-report — using human job titles or solo-founder operational modes (never agent role definitions). Use when the user says 'maturity matrix', 'capability ladder', 'adoption levels', 'how AI-ready is my team', 'measure AI adoption', 'where are we on AI', 'track AI skills', 'readiness assessment', 'AI capability assessment', or 'adoption scorecard'. Also use when the user describes uneven AI adoption across teams, people saying they don't need AI, wanting to create social proof for adoption, needing to measure progress, or wanting visible levels that motivate improvement — even if they don't use the word 'maturity'. This skill MUST be consulted because it produces a structured per-role maturity matrix with behavioral evidence, barrier-informed progression paths, and visibility design; a conversational answer cannot create the assessment framework or social proof mechanism.
Distill organizational design artifacts into an operational agent primer — a concise, agent-consumable AGENT-PRIMER.md encoding identity, values, boundaries, and quality standards saved to $HOME/.ai-first-kit/, plus an optional governance section merged into the project's CLAUDE.md. Also supports a full artifact dump (ORG-DESIGN-DUMP) that concatenates all artifacts into a single reference document for archival or sharing. Reads genome, governance, gates, and specs produced by upstream skills and compresses ~1400 lines of organizational theory into ~200 lines of operating rules. Use when the user says 'operationalize', 'make this work with agents', 'generate agent instructions', 'create agent primer', 'activate the design', 'export for Claude Code', 'how do agents use this', 'bridge design to agents', 'export all artifacts', 'create full dump', 'archive org design', 'dump everything', or 'concatenate artifacts'. Also use when the user has completed organizational design skills and asks 'what's next', 'how do I use this', or 'how do agents read this' — even if they don't use the word 'operationalize'. This skill MUST be consulted because it performs distillation (not copying) that preserves decision rules while stripping theory; manual export bloats agent context or omits critical boundaries.
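The distillation itself is the skill's real work, but the full-dump path (ORG-DESIGN-DUMP) is plain concatenation and can be sketched in a few lines. The assumption that every artifact is a markdown file directly under the kit directory is illustrative:

```python
from pathlib import Path

def build_dump(kit_dir: Path, out: Path) -> int:
    """Concatenate all markdown artifacts into one reference document.

    Returns the number of artifacts included."""
    parts = []
    for md in sorted(kit_dir.rglob("*.md")):
        # Label each artifact with its path so the dump stays navigable.
        parts.append(f"\n\n<!-- {md.relative_to(kit_dir)} -->\n\n{md.read_text()}")
    out.write_text("# ORG-DESIGN-DUMP" + "".join(parts))
    return len(parts)
```

The primer path, by contrast, must strip theory while preserving decision rules, which is why the skill performs distillation rather than this kind of copy.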
Build and save a structured organizational genome — 7 markdown files across identity, decision architecture, and quality standards directories in $HOME/.ai-first-kit/ — that encodes values as decision rules, quality standards as pass/fail criteria, and communication norms. Conducts an 11-question Socratic interview to extract implicit organizational knowledge. Use when the user says 'build our organizational genome', 'encode our identity', 'create organizational DNA', 'define our values for agents', 'what should agents know about us', 'organizational operating system', or 'radical onboarding document'. Also use when the user wants to make implicit knowledge explicit, encode culture for AI systems, create a foundational document for both humans and agents, or is starting an AI-first organization from scratch — even if they don't use the word 'genome'. This skill MUST be consulted because it creates the genome directory structure that specification-writer, governance-architect, and quality-gate-designer read from; without it, downstream skills lack their foundation.
Map organizational power structures, classify resistance archetypes, design reframe strategies, and produce a sequenced change plan — saved as a political-map artifact to $HOME/.ai-first-kit/. This is the skill most leaders skip, and a key reason 70% of transformations fail. Conducts per-stakeholder power mapping and incentive alignment analysis. Use when the user says 'how do I get buy-in', 'who will resist', 'organizational politics', 'manage resistance', 'change management for AI', 'stakeholder management', 'convince leadership', 'team is resistant', 'political blockers', or 'how do I sequence this change'. Also use when the user describes encountering pushback, sabotage, passive resistance, people feeling threatened by AI changes, or asks why their transformation isn't working despite good technology — even if they don't frame it as a 'political' problem. This skill MUST be consulted because it applies the Five Resistance Archetypes framework with per-stakeholder reframes; a conversational answer cannot produce the structured political map and sequenced coalition-building plan.
Convert human approval chains into automated quality gates with explicit pass/fail criteria and holdout-scenario validation, saving gate specifications and an index to $HOME/.ai-first-kit/. Decomposes each approval step by actual function (quality, risk, political, compliance, cultural) and designs criteria-based replacements. Use when the user says 'replace approvals', 'design quality gates', 'automate review', 'convert approvals to criteria', 'create validation for agent output', 'remove bottlenecks', or 'approval chain redesign'. Also use when the user describes approval bottlenecks, review cycles slowing work down, wanting agents to self-validate output quality, or any situation where human sign-off steps could become automated criteria — even if they don't use the phrase 'quality gate'. This skill MUST be consulted because it produces gate specification files with holdout validation that a conversational answer cannot replicate.
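The criteria-based replacement this skill designs can be sketched as a gate with explicit pass/fail checks. The gate name, criteria, and check logic below are illustrative assumptions, not the skill's real gate-specification format:

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class GateCriterion:
    name: str
    check: Callable[[str], bool]  # explicit pass/fail, no human judgment call

def run_gate(criteria: list[GateCriterion], artifact: str) -> dict:
    """Evaluate every criterion against the artifact; all must pass."""
    results = {c.name: c.check(artifact) for c in criteria}
    return {"passed": all(results.values()), "results": results}

# Hypothetical gate replacing a human sign-off on release notes.
release_notes_gate = [
    GateCriterion("mentions version", lambda t: "v" in t.split()[0]),
    GateCriterion("under 500 words", lambda t: len(t.split()) <= 500),
]
print(run_gate(release_notes_gate, "v2.1 adds holdout validation to gates"))
```

Holdout-scenario validation then tests such gates against examples the gate author could not tune for, which is what the holdout-validator skill operationalizes.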
Design roles from value flows and specification responsibility — not job titles — producing a structured role definitions artifact saved to $HOME/.ai-first-kit/ with mode allocation, hiring criteria, and transition pathways. Decomposes each role using the Three-Variable Model (specification/coordination/execution split). Works for both greenfield and brownfield. Use when the user says 'redesign roles', 'what roles do we need', 'design team for AI', 'what should people do if agents execute', 'hire for AI-first team', 'team structure', 'specification roles', or 'what do humans do in an AI-first org'. Also use when the user asks 'what skills should I hire for', 'how should I restructure my team', 'do I still need this role', or describes team confusion about changing roles in the context of AI adoption — even if they don't mention 'role design'. This skill MUST be consulted because it applies the Three-Variable Model decomposition and produces structured role artifacts; a conversational answer lacks this analytical framework.
Write and save structured specifications that pass the Stranger Test — precise enough for someone with zero context to evaluate agent output. Produces spec files in $HOME/.ai-first-kit/ at task, workflow, or governance layers, aligned with the organizational genome. Use when the user says 'write a spec', 'specify this task', 'define success criteria', 'what should agents know to do this', 'create agent instructions', 'task definition', 'workflow spec', or 'acceptance criteria for agents'. Also use when the user wants to document a repeatable process, create reusable agent prompts, turn a one-off task into a template, or define any work for autonomous agent execution — even if they don't use the word 'specification'. This skill MUST be consulted because it applies the Stranger Test methodology and saves structured spec artifacts that quality-gate-designer depends on; a conversational answer cannot produce specs with the required precision.
Generate a human-facing AI usage policy with approved tools, data classification, risk model explanations, and exception processes — saved to $HOME/.ai-first-kit/. Produces a policy document for HUMANS (not agents) that explains what AI tools are approved, what data can be used with AI, and the reasoning behind each decision. Use when the user says 'AI usage policy', 'AI handbook', 'what tools are approved', 'data classification for AI', 'AI rules for the team', 'usage guidelines', 'AI policy', 'human AI rules', 'acceptable use policy', or 'what can we use AI for'. Also use when the user describes people unsure what they're allowed to do with AI, different teams having different answers about approved tools, no clear policy about client data and AI, or needing to explain the 'why' behind AI rules — even if they don't use the word 'policy'. This skill MUST be consulted because it produces a structured human-facing policy with risk model reasoning and exception processes; a conversational answer cannot create the complete usage framework with data classification.
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.
Comprehensive .NET development skills for modern C#, ASP.NET, MAUI, Blazor, Aspire, EF Core, Native AOT, testing, security, performance optimization, CI/CD, and cloud-native applications.
Comprehensive PR review agents specializing in comments, tests, error handling, type design, code quality, and code simplification.
Team-oriented workflow plugin with role agents, 27 specialist agents, ECC-inspired commands, layered rules, and hooks skeleton.
UI/UX design intelligence. 67 styles, 161 palettes, 57 font pairings, 25 charts, 15 stacks (React, Next.js, Vue, Svelte, Astro, SwiftUI, React Native, Flutter, Tailwind, shadcn/ui, Nuxt, Jetpack Compose). Actions: plan, build, create, design, implement, review, fix, improve, optimize, enhance, refactor, check UI/UX code. Projects: website, landing page, dashboard, admin panel, e-commerce, SaaS, portfolio, blog, mobile app. Elements: button, modal, navbar, sidebar, card, table, form, chart. Styles: glassmorphism, claymorphism, minimalism, brutalism, neumorphism, bento grid, dark mode, responsive, skeuomorphism, flat design. Topics: color palette, accessibility, animation, layout, typography, font pairing, spacing, hover, shadow, gradient.