End-to-end Ansible workflow automation: design, create, validate, and review playbooks and roles with orchestrated multi-agent pipelines
npx claudepluginhub basher83/lunar-claude --plugin ansible-workflows
Analyze Ansible code for improvements or suggest enhancements
Scaffold a state-based Ansible playbook with present/absent pattern
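A playbook scaffolded with the present/absent pattern typically keys all behavior off a single state variable; the sketch below is illustrative (the host group, variable, and task-file names are assumptions):

```yaml
---
# State-based playbook: one `state` variable selects which task file runs.
- name: Manage Docker installation
  hosts: all
  vars:
    docker_state: present  # override with -e docker_state=absent to remove
  tasks:
    - name: Install and configure Docker
      ansible.builtin.include_tasks: tasks/present.yml
      when: docker_state == "present"

    - name: Remove Docker
      ansible.builtin.include_tasks: tasks/absent.yml
      when: docker_state == "absent"
```

The same playbook then both provisions and tears down, depending on the state passed at runtime.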
Scaffold a new Ansible role with standard directory structure
Run ansible-lint with guidance on fixing issues
Use this agent when debugging Ansible failures, analyzing errors, fixing idempotency issues, or receiving handoffs from ansible-validator (on FAIL) or ansible-reviewer (on NEEDS_REWORK). Examples: <example> Context: Ansible playbook fails during execution with connection error user: "My playbook failed with 'UNREACHABLE' error on the Proxmox hosts" assistant: "I'll analyze the connection failure and identify the root cause." <commentary> The user is experiencing a specific Ansible execution failure (UNREACHABLE). This triggers the debugger because it requires root cause analysis, examining inventory configuration, SSH settings, and network connectivity to diagnose and fix the issue. </commentary> assistant: "I'll use the ansible-debugger agent to analyze the UNREACHABLE error and provide a fix." </example> <example> Context: ansible-validator agent reports FAIL status with lint errors user: "(from validator handoff) Validation failed: 3 lint errors, missing FQCN on lines 15, 28, 45" assistant: "I'll analyze the validation failures and provide fixes." <commentary> This is a handoff from ansible-validator after it detected errors. The debugger takes over to categorize the issues, provide exact code fixes, and prepare the code for re-validation. </commentary> assistant: "I'll use the ansible-debugger agent to fix the validation issues and prepare for re-validation." </example> <example> Context: Task always shows "changed" even when nothing actually changes user: "This task shows changed every time I run it, but the cluster status command is read-only" assistant: "I'll diagnose the idempotency issue and fix it." <commentary> Idempotency issues (tasks reporting "changed" when no change occurs) are a common Ansible problem. The debugger uses the ansible-idempotency skill to diagnose missing changed_when directives and provide the correct fix pattern. </commentary> assistant: "I'll use the ansible-debugger agent to fix the idempotency issue with your status command." 
</example> <example> Context: ansible-reviewer returns NEEDS_REWORK with multiple HIGH severity findings user: "(from reviewer handoff) Review found 4 HIGH severity issues: missing no_log on secret tasks, hardcoded credentials, no error handling on cluster join" assistant: "I'll address the review findings systematically." <commentary> When ansible-reviewer recommends NEEDS_REWORK, the debugger receives a handoff to address all HIGH severity findings. It loads relevant skills, fixes each issue, and hands off to ansible-validator for re-validation. </commentary> assistant: "I'll use the ansible-debugger agent to address all the review findings and prepare for re-review." </example>
Use this agent when creating new Ansible playbooks, roles, or automation tasks. This includes explicit requests to generate Ansible code, handoffs from /ansible:create-playbook or /ansible:create-role commands, or when the user asks to automate Proxmox operations with Ansible. <example> Context: User explicitly requests creation of Ansible automation for a specific task. user: "Create a playbook to set up Docker on all Proxmox hosts" assistant: "I'll create a production-quality playbook for Docker setup. Let me use the ansible-generator agent to ensure it follows all repository patterns including FQCN, idempotency controls, and proper secret handling." <commentary> The user is explicitly asking to create Ansible automation. The ansible-generator agent should trigger because it specializes in generating complete, runnable Ansible code that follows repository conventions. </commentary> </example> <example> Context: The /ansible:create-playbook command has scaffolded a playbook and hands off implementation. user: "(from /ansible:create-playbook) Implement the playbook at ansible/playbooks/setup-vm.yml to provision VMs" assistant: "The scaffolding is ready. I'll use ansible-generator to implement the full playbook with proper Proxmox module usage, idempotency patterns, and validation tasks." <commentary> This is a handoff from the create-playbook command. The generator agent should implement the actual tasks since the scaffolding only creates the structure. The agent will load skills and generate production-ready code. </commentary> </example> <example> Context: User wants to automate a Proxmox-specific operation. user: "Write a role to configure VLAN-aware network bridges on the Proxmox cluster" assistant: "I'll generate a complete role for Proxmox network configuration. Using ansible-generator to ensure it uses community.proxmox modules where available and includes proper idempotency wrappers for any CLI operations." 
<commentary> Proxmox network configuration requires specialized knowledge of the community.proxmox collection and proper handling of VLAN bridges. The generator agent should trigger to produce code that follows the repository's established network role patterns. </commentary> </example> <example> Context: User asks to create a new role for system configuration. user: "Create an Ansible role to manage user accounts with SSH keys and sudo access" assistant: "I'll create a complete role for user management. Using ansible-generator to ensure proper variable prefixing, defaults structure, and integration with the existing system_user role patterns." <commentary> Role creation requires understanding the repository's role architecture patterns. The generator agent will load role-design and fundamentals skills to ensure the output matches existing conventions. </commentary> </example>
Use this agent to orchestrate the full Ansible development pipeline - from scaffolding through generation, validation, and review. This agent coordinates the workflow, dispatches specialized agents, manages state transitions, and ensures quality gates are met before completion. <example> Context: User wants to create a complete Ansible playbook with full quality assurance. user: "Create a production-ready playbook to deploy a 3-node MicroK8s cluster on Proxmox" assistant: "This requires the full pipeline - scaffolding, code generation, validation, and review. I'll use the ansible-orchestrator agent to coordinate all phases and ensure production readiness." <commentary> Complex multi-step Ansible creation benefits from orchestrated pipeline execution. The orchestrator ensures each phase completes successfully before proceeding, maintains state, and handles failures appropriately. </commentary> </example> <example> Context: User wants end-to-end automation of a new role with quality gates. user: "Build me a complete Ansible role for Proxmox CEPH storage management with all the bells and whistles" assistant: "I'll orchestrate the full development pipeline for this role - generation with proper patterns, validation for correctness, and deep review for production readiness. Using ansible-orchestrator to coordinate." <commentary> "Complete" and "all the bells and whistles" signals the user wants the full pipeline treatment, not just code generation. The orchestrator ensures comprehensive coverage. </commentary> </example> <example> Context: Pipeline was interrupted and needs to resume from current state. user: "The validation failed. Can you continue the pipeline and fix the issues?" assistant: "I'll use the ansible-orchestrator to assess the current pipeline state, dispatch the debugger to fix validation failures, and continue through to completion." <commentary> Pipeline recovery and continuation is a core orchestrator responsibility. 
It reads current state, determines next action, and coordinates the appropriate agents. </commentary> </example> <example> Context: User wants to understand the pipeline status and next steps. user: "Where is my Ansible pipeline at? What happens next?" assistant: "I'll use the ansible-orchestrator to check the pipeline state and provide a status report with recommended next actions." <commentary> Pipeline status queries should route to the orchestrator, which maintains awareness of all phases and can explain the current state and path forward. </commentary> </example>
Use this agent when performing deep best-practices review of Ansible code with structured, measurable output and actionable recommendations. This agent triggers after ansible-validator passes (handoff), when the user explicitly asks for code review or production readiness checks, or before merging Ansible PRs. <example> Context: The ansible-validator agent has completed validation and passed, now handing off for deeper review. user: "The validator passed. Now review ansible/playbooks/create-cluster.yml for production readiness." assistant: "The validation checks passed. Now I'll use the ansible-reviewer agent to perform a comprehensive best-practices analysis against all quality dimensions." <commentary> This is a handoff from ansible-validator after passing. The reviewer performs deeper analysis beyond linting - evaluating idempotency patterns, security practices, structure, performance, and maintainability. This triggers because the user explicitly requested production readiness review after validation. </commentary> </example> <example> Context: User has created or modified an Ansible role and wants a thorough code review before merging. user: "Review my proxmox_ceph role for best practices. I want to make sure it's production-ready." assistant: "I'll use the ansible-reviewer agent to perform a comprehensive analysis of your proxmox_ceph role. This will evaluate the role against idempotency, security, structure, performance, maintainability, and Proxmox-specific best practices." <commentary> The user explicitly requested a review for production readiness. This is the primary use case - detailed code review with structured findings and actionable recommendations. The reviewer will load all ansible-* skills and produce a scored assessment. </commentary> </example> <example> Context: Significant changes have been made to Ansible playbooks and the user wants to verify quality before committing. user: "I've refactored the cluster setup playbooks. 
Is this ready for production?" assistant: "I'll use the ansible-reviewer agent to evaluate your refactored playbooks against production readiness criteria. This will provide a structured assessment with scores, specific findings, and a clear recommendation." <commentary> After significant changes, a comprehensive review ensures the code meets all best practices before deployment. The question "Is this ready for production?" signals the need for the reviewer's structured assessment rather than just validation. </commentary> </example> <example> Context: Before merging an Ansible PR, the user wants quality assurance. user: "Can you review the Ansible changes in this PR before I merge?" assistant: "I'll use the ansible-reviewer agent to analyze the Ansible changes for best practices compliance. This will provide a structured report with ratings across idempotency, security, structure, performance, maintainability, and Proxmox categories." <commentary> PR review is a key trigger. The reviewer provides the structured assessment needed to make informed merge decisions. This goes beyond linting to evaluate patterns and practices. </commentary> </example>
Use this agent when validating Ansible code through linting, syntax checking, and best practices verification. Triggers after ansible-generator completes code generation (workflow handoff), when the user explicitly requests validation or linting of Ansible code, or before committing Ansible changes. Examples: <example> Context: The ansible-generator agent has just created a new playbook and needs validation before review. user: "(from generator handoff) Validate the playbook at ansible/playbooks/setup-docker.yml" assistant: "I'll use the ansible-validator agent to run comprehensive checks on the generated playbook." <commentary> Agent should trigger because ansible-generator has handed off newly created code that requires validation before proceeding to review. This is the standard workflow handoff point in the ansible-generator -> ansible-validator -> ansible-reviewer pipeline. </commentary> </example> <example> Context: User is developing Ansible code and wants to verify it passes quality checks. user: "Check if my playbook passes all lint rules" assistant: "I'll use the ansible-validator agent to run ansible-lint and syntax checks on your playbook." <commentary> Agent should trigger because the user explicitly requested lint validation. Keywords like "lint", "validate", "check", "verify" for Ansible code indicate validation is needed. </commentary> </example> <example> Context: User is preparing to commit changes and wants pre-commit validation. user: "Validate all my Ansible changes before I commit" assistant: "I'll use the ansible-validator agent to validate all modified Ansible files before your commit." <commentary> Agent should trigger because the user wants validation as part of a pre-commit workflow. This ensures code quality before changes are committed to the repository. </commentary> </example> <example> Context: User has made changes to an existing role and wants to ensure nothing is broken. 
user: "Run ansible-lint on the proxmox_ceph role" assistant: "I'll use the ansible-validator agent to lint the proxmox_ceph role and check for any issues." <commentary> Agent should trigger because user explicitly mentioned ansible-lint. Direct tool invocation requests always indicate validation is needed. </commentary> </example>
This skill should be used when implementing error handling in Ansible, using block/rescue/always patterns, creating retry logic with until/retries, handling expected failures gracefully, or providing clear error messages with assert and fail.
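The core patterns this skill covers can be sketched as follows (the command, URL, and file paths are placeholders):

```yaml
- name: Attempt a risky operation with guaranteed cleanup
  block:
    - name: Run the operation
      ansible.builtin.command: /usr/local/bin/do-thing
  rescue:
    - name: Fail with a clear, actionable message
      ansible.builtin.fail:
        msg: "do-thing failed on {{ inventory_hostname }}; check its output above"
  always:
    - name: Remove the lock file whether the block succeeded or not
      ansible.builtin.file:
        path: /tmp/do-thing.lock
        state: absent

- name: Retry until a service reports healthy
  ansible.builtin.uri:
    url: http://localhost:8080/health
  register: health
  until: health.status == 200
  retries: 5
  delay: 10
```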
This skill should be used when writing Ansible playbooks, creating Ansible tasks, running ansible-playbook commands, selecting Ansible modules, or working with Ansible collections. Provides golden rules, FQCN requirements, module selection guidance, and execution patterns using uv run.
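The FQCN requirement means every module is referenced through its fully qualified collection name rather than its short alias, for example:

```yaml
# Short name (flagged by ansible-lint's fqcn rules):
- name: Install nginx
  apt:
    name: nginx
    state: present

# Fully qualified collection name:
- name: Install nginx
  ansible.builtin.apt:
    name: nginx
    state: present
```

When Ansible is managed as a project dependency, the execution pattern the skill describes runs the CLI through uv, e.g. `uv run ansible-playbook site.yml`.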
This skill should be used when writing idempotent Ansible tasks, using command or shell modules, implementing changed_when and failed_when directives, creating check-before-create patterns, or troubleshooting tasks that always show "changed".
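The most common fix this skill teaches is marking read-only commands with `changed_when: false` and gating creation on an explicit probe; the command names below are illustrative:

```yaml
# A read-only status command otherwise reports "changed" on every run.
- name: Check cluster status (read-only)
  ansible.builtin.command: pvecm status
  register: cluster_status
  changed_when: false

# Check-before-create: probe first, then act only if the resource is missing.
- name: Check whether the bridge exists
  ansible.builtin.command: ip link show vmbr1
  register: bridge_check
  changed_when: false
  failed_when: false  # a missing bridge is expected here, not an error

- name: Create the bridge only when absent
  ansible.builtin.command: ip link add vmbr1 type bridge
  when: bridge_check.rc != 0
```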
This skill should be used when creating new Ansible playbooks, designing playbook structure, implementing state-based playbooks with present/absent patterns, organizing plays and tasks, or structuring playbook variables. Covers play organization, variable scoping, and state-based design patterns.
This skill should be used when automating Proxmox VE with Ansible, creating VMs or templates, managing Proxmox clusters, using community.proxmox collection, or deciding between native modules and CLI commands (pvecm, pveceph, qm).
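Where a native module exists, the skill prefers it over shelling out to the CLI. A hedged sketch of VM creation follows; the host, node, and credential variables are placeholders, and the exact parameter set should be checked against the community.proxmox collection documentation:

```yaml
- name: Create a VM through the Proxmox API
  community.proxmox.proxmox_kvm:
    api_host: "{{ proxmox_api_host }}"
    api_user: "{{ proxmox_api_user }}"
    api_token_id: "{{ proxmox_api_token_id }}"
    api_token_secret: "{{ proxmox_api_token_secret }}"
    node: pve1
    name: web01
    cores: 2
    memory: 4096
    state: present
```

Operations with no module equivalent (e.g. `pvecm` cluster joins) fall back to `ansible.builtin.command` wrapped with `changed_when`/`failed_when` to preserve idempotency.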
This skill should be used when creating Ansible roles, designing role directory structure, organizing role variables in defaults vs vars, writing role handlers, or structuring role tasks. Based on analysis of 7 production geerlingguy roles.
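The standard structure the skill describes matches what `ansible-galaxy role init` generates:

```
my_role/
├── defaults/main.yml   # user-overridable defaults (lowest variable precedence)
├── vars/main.yml       # internal constants (high precedence, not meant to be overridden)
├── tasks/main.yml      # task entry point
├── handlers/main.yml   # handlers notified by tasks
├── templates/          # Jinja2 templates
├── files/              # static files copied as-is
└── meta/main.yml       # role metadata and dependencies
```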
This skill should be used when working with secrets in Ansible playbooks, integrating Infisical vault, using no_log directive, retrieving credentials securely, or implementing fallback patterns for secrets. Covers the reusable Infisical lookup task.
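A minimal sketch of the fallback pattern with `no_log` (the variable names are illustrative; the Infisical lookup itself comes from a separate collection, and its arguments should be taken from that collection's documentation rather than this sketch):

```yaml
- name: Resolve the database password, preferring the environment
  ansible.builtin.set_fact:
    db_password: "{{ lookup('env', 'DB_PASSWORD') | default(vault_db_password, true) }}"
  no_log: true  # keep the secret out of logs and console output

- name: Use the secret without echoing it
  ansible.builtin.command: /usr/local/bin/configure-db
  environment:
    DB_PASSWORD: "{{ db_password }}"
  no_log: true
```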
This skill should be used when running ansible-lint, configuring linting rules, testing Ansible playbooks, validating playbook syntax, or setting up integration tests. Covers ansible-lint configuration and testing strategies.
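A typical `.ansible-lint` configuration of the kind the skill covers (the rule selections are illustrative):

```yaml
# .ansible-lint
profile: production      # strictest built-in profile
exclude_paths:
  - .cache/
  - .github/
enable_list:
  - no-log-password      # opt-in rule: flag password vars used without no_log
skip_list:
  - yaml[line-length]    # suppressed rules should carry a justification
```

Run it with `ansible-lint` (or `uv run ansible-lint` in a uv-managed project); exit code 0 means all enabled rules pass.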
Complete collection of battle-tested Claude Code configs from an Anthropic hackathon winner - agents, skills, hooks, and rules evolved over 10+ months of intensive daily use
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features a progressive disclosure architecture for 50% faster loading.
Core skills library for Claude Code: TDD, debugging, collaboration patterns, and proven techniques
Tools to maintain and improve CLAUDE.md files - audit quality, capture session learnings, and keep project memory current.
Code cleanup, refactoring automation, and technical debt management with context restoration
Access thousands of AI prompts and skills directly in your AI coding assistant. Search prompts, discover skills, save your own, and improve prompts with AI.