End-to-end test planning workflow for RHOAI: generate test plans from strategies, create test cases, implement executable automation code, verify UI tests against live clusters via Playwright, publish to GitHub with PR creation, resolve review feedback, and score quality with automated rubrics using parallel sub-agent analysis
npx claudepluginhub opendatahub-io/skills-registry --plugin test-plan

Analyzes strategy and ADR to extract feature scope, test objectives, and API endpoints/methods under test. Use for extracting technical scope and API surface area from requirements documents.
Analyzes strategy and ADR to identify test environment configuration, test data, test users, infrastructure, and tooling requirements. Use for determining test execution prerequisites and infrastructure setup needs.
Analyze test cases and recommend placement (component repo vs downstream E2E repo). Use for determining where each test should be implemented based on test level, dependencies, and repository capabilities.
Analyzes strategy and ADR to determine test levels, test types, priority definitions, non-functional requirements, and risks with mitigations. Use for identifying what needs testing, how to prioritize test coverage, and what risks to mitigate.
Generate executable test automation code from test case specifications with intelligent placement in component or downstream repos. Use after test cases are reviewed to create production-ready pytest code that follows repository conventions.
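A minimal sketch of what a generated pytest function might look like. All names here — the endpoint path, the client class, the TC identifier — are illustrative assumptions, not the generator's actual output; the stub client stands in for whatever fixture the target repository provides:

```python
import json
from dataclasses import dataclass


# Stub standing in for the repository's real API-client fixture (hypothetical).
@dataclass
class FakeResponse:
    status_code: int
    body: str

    def json(self):
        return json.loads(self.body)


class FakeApiClient:
    def get(self, path):
        # Canned response; a real fixture would call the live cluster.
        return FakeResponse(200, '["model-a", "model-b"]')


def test_list_models_returns_200(api_client=FakeApiClient()):
    """Generated from a hypothetical TC spec: verify the model-listing endpoint."""
    response = api_client.get("/api/v1/models")
    assert response.status_code == 200
    assert isinstance(response.json(), list)
```

In a real component repo the stub would be replaced by the repository's own client fixture, which is why placement analysis (component repo vs downstream E2E repo) matters before generation.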
Generate individual test case files from an existing test plan. Use after test plan approval to generate individual TC specifications with preconditions, steps, and expected results organized by category and priority.
Generate a test plan from a strategy (RHAISTRAT or RHOAIENG issue), with optional ADR for extra technical depth. Use when starting test planning for a new RHOAI feature with a defined Jira strategy.
Generate one complete test file with all functions for assigned test cases, including quality scoring and auto-revision.
Intelligently merge new analyzer findings into an existing test plan, preserving user edits while incorporating updates from new documentation. Use for test plan updates when re-analysis produces new findings that need to be integrated without overwriting human modifications.
Publish test plan artifacts to GitHub — creates a branch, commits all artifacts, and opens a PR with optional reviewer assignment. Use after test plan review to make artifacts available for team collaboration and formal review feedback.
Assess PR review comments on a published test plan, let the user decide what to apply, make changes, and push updates to the same branch. Use after receiving PR review feedback to efficiently apply approved changes with human control over what gets updated.
Cross-reference existing test plan gaps with new analyzer findings and documentation to determine which gaps are resolved and which remain open. Use after adding new documentation to identify what questions have been answered and what's still missing.
Reviews a generated test plan for completeness, consistency, and quality using a 5-criteria rubric. Scores, auto-revises, and re-scores (max 2 cycles). Use for automated quality assessment and iterative improvement of generated test plans.
Score generated test function code for completeness, quality, and convention adherence using a 5-criteria rubric. Use for validating generated test code quality before including in the final implementation.
Score an existing test plan using the quality rubric without triggering auto-revision. Use for standalone quality assessment of test plans or evaluating test plans created outside the automated generation pipeline.
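The score-and-revise loop described above can be sketched as a weighted five-criteria rubric with a bounded revision cycle. The criteria names, weights, and passing threshold below are illustrative assumptions, not the plugin's actual rubric; only the two-cycle cap comes from the description:

```python
RUBRIC = {  # five criteria, weights sum to 1.0 (illustrative)
    "completeness": 0.25,
    "consistency": 0.20,
    "traceability": 0.20,
    "clarity": 0.20,
    "risk_coverage": 0.15,
}

PASS_THRESHOLD = 8.0  # assumed passing score out of 10
MAX_CYCLES = 2        # the plugin caps auto-revision at 2 cycles


def score(criterion_scores):
    """Weighted average of per-criterion scores (0-10 each)."""
    return sum(RUBRIC[c] * criterion_scores[c] for c in RUBRIC)


def score_and_revise(criterion_scores, revise):
    """Score, auto-revise via the caller-supplied reviser, and re-score."""
    total = score(criterion_scores)
    for _ in range(MAX_CYCLES):
        if total >= PASS_THRESHOLD:
            break
        criterion_scores = revise(criterion_scores)
        total = score(criterion_scores)
    return total, criterion_scores
```

The standalone scoring skill corresponds to calling `score` alone; the review-and-revise skill corresponds to `score_and_revise`, where `revise` is the LLM-driven revision step.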
Browser-based UI test execution against live ODH/RHOAI clusters. Loads TCs from a GitHub PR or repo folder via ui_prepare.py, executes each via a persistent Playwright browser, and produces a visual HTML report with PASS/FAIL/BLOCKED/INCOMPLETE verdicts and screenshots. Use for verifying UI test cases against live clusters with visual evidence and screenshot capture.
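The per-TC verdict model behind the HTML report can be sketched as follows. The field names and summary shape are assumptions for illustration; only the four verdict values come from the description:

```python
from collections import Counter
from dataclasses import dataclass, field

VERDICTS = ("PASS", "FAIL", "BLOCKED", "INCOMPLETE")


@dataclass
class TCResult:
    tc_id: str
    verdict: str                                      # one of VERDICTS
    screenshots: list = field(default_factory=list)   # paths to captured evidence

    def __post_init__(self):
        if self.verdict not in VERDICTS:
            raise ValueError(f"unknown verdict: {self.verdict}")


def summarize(results):
    """Tally verdicts for the report header, e.g. {'PASS': 3, 'FAIL': 1}."""
    return dict(Counter(r.verdict for r in results))
```

A report generator would iterate such records to render one section per TC with its screenshots, plus the tally in the header.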
Update an existing test plan with new documentation (ADR, API specs, design docs). Re-analyzes, updates artifacts, bumps version, and optionally regenerates test cases. Use when requirements evolve or new technical documentation becomes available after initial test plan creation.
Battle-tested Claude Code plugin for engineering teams — 50 agents, 188 skills, 68 legacy command shims, production-ready hooks, and selective install workflows evolved through continuous real-world use
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.
Unity Development Toolkit - Expert agents for scripting/refactoring/optimization, script templates, and Agent Skills for Unity C# development
Tools to maintain and improve CLAUDE.md files - audit quality, capture session learnings, and keep project memory current.
Manus-style persistent markdown files for planning, progress tracking, and knowledge storage. Works with Claude Code, Kiro, Clawd CLI, Gemini CLI, Cursor, Continue, Hermes, and 17+ AI coding assistants. Now with Arabic, German, Spanish, and Chinese (Simplified & Traditional) support.
Browser automation and end-to-end testing MCP server by Microsoft. Enables Claude to interact with web pages, take screenshots, fill forms, click elements, and perform automated browser testing workflows.