Analyze quality assurance practices and create a quality improvement plan
Analyzes quality assurance practices and creates a comprehensive improvement plan with specific findings and actionable recommendations.
Install via:
- `/plugin marketplace add dgriffith/bad-daves-robot-army`
- `/plugin install dgriffith-bad-daves-robot-army@dgriffith/bad-daves-robot-army`

Using @agent-quality-assurance-expert, prepare a quality assurance review report. You must analyze testing strategies, quality metrics, and defect prevention practices in the codebase and create a comprehensive plan WITHOUT making any changes.
The user invoked: /quality-review {optional_scope}
Valid scopes:
- "current changes" (uses `git status` and `git diff`)
- "recent changes" (uses `git log` and `git diff`)
- "PR {number}" (uses `gh pr view` and `gh pr diff`)
- a path to a file or directory

If scope is "current changes":
- Run `git status` to identify changed files
- Run `git diff` to see uncommitted changes

If scope is "recent changes":
- Run `git log --oneline -10` to see recent commits
- Run `git diff HEAD~5..HEAD` or an appropriate range

If scope starts with "PR":
- Run `gh pr view {number}` to get PR details
- Run `gh pr diff {number}` to get the changes

If scope is a path:
- Focus the review on that file or directory

If no scope provided:
- Review the entire project
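The scope-to-commands mapping above can be sketched as a shell dispatch. This is a hypothetical illustration only; the command itself runs these through its own tooling, and the default scope value is a placeholder:

```shell
#!/bin/sh
# Sketch: map a scope argument to the git/gh commands listed above.
scope="${1:-current changes}"

case "$scope" in
  "current changes") cmds="git status; git diff" ;;
  "recent changes")  cmds="git log --oneline -10; git diff HEAD~5..HEAD" ;;
  PR*)               num="${scope#PR }"
                     cmds="gh pr view $num; gh pr diff $num" ;;
  *)                 cmds="review path: $scope" ;;
esac
echo "$cmds"
```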
Analyze the scope across three areas:
- Testing Strategy Review
- Quality Metrics Analysis
- Process Assessment
Create a markdown file at /reports/quality-review-{scope}-{timestamp}.md with:
# Quality Assurance Review Plan
Generated: {timestamp}
Scope: {full_path_or_entire_project}
## Executive Summary
Brief overview of quality maturity and critical findings
## Quality Assurance Findings
### Critical Issues (Quality Risk)
- [ ] No test coverage: Critical module X
- [ ] Failing tests ignored: Test suite Y
- [ ] No quality gates: Deployment pipeline Z
### High Priority Issues
- [ ] Low test coverage: < 40% in service A
- [ ] Missing integration tests: API B
- [ ] No performance testing: System C
### Medium Priority Issues
- [ ] Flaky tests: Suite D
- [ ] Outdated test data: Environment E
- [ ] Manual testing only: Feature F
## Current Quality Assessment
### Testing Metrics
- Unit test coverage: X%
- Integration test coverage: Y%
- E2E test coverage: Z%
- Total test count: N
- Test execution time: M minutes
- Test success rate: P%
### Quality Metrics
- Defect density: X per KLOC
- Escaped defects: Y per release
- MTTR: Z hours
- Customer issues: N per month
- Code review coverage: M%
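As an illustration of how one of these metrics is derived, defect density divides a defect count by thousands of lines of code. The counts below are hypothetical:

```shell
# Sketch: defect density = defects per thousand lines of code (KLOC).
defects=42      # hypothetical count from the issue tracker
loc=28000       # hypothetical total lines, e.g. from a line-count tool
density=$(awk -v d="$defects" -v l="$loc" 'BEGIN { printf "%.2f", d / (l / 1000) }')
echo "defect density: $density per KLOC"
```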
## Quality Improvement Plan
### Immediate Actions (1-3 days)
1. Fix failing critical tests
2. Add tests for uncovered critical paths
3. Implement basic quality gates
### Short-term Improvements (1-2 weeks)
1. Increase test coverage to 70%
2. Add integration test suite
3. Implement automated regression testing
### Long-term Transformations (2-6 weeks)
1. Achieve 85%+ test coverage
2. Implement full test automation
3. Create comprehensive quality framework
## Testing Strategy Enhancement
### Testing Pyramid
```
          /\
         /E2E\          5% - Critical user journeys
        /------\
       /  API   \      15% - Service integration
      /----------\
     /Integration \    30% - Component interaction
    /--------------\
   /  Unit Tests    \  50% - Business logic
  /------------------\
```
### Test Coverage Goals
- Unit tests: 85% coverage
- Integration: Key workflows
- API tests: All endpoints
- E2E tests: Critical paths
- Performance: Load testing
- Security: Penetration testing
## Test Automation
### Automation Priorities
1. Smoke tests: Deploy validation
2. Regression suite: Core features
3. API tests: Contract testing
4. UI tests: Critical flows
5. Performance: Baseline tests
### Framework Selection
- Unit: Jest/JUnit/pytest
- Integration: Framework choice
- E2E: Selenium/Cypress/Playwright
- API: Postman/REST Assured
- Performance: JMeter/k6
## Quality Gates
### CI/CD Gates
- Build success: Required
- Unit tests: > 85% pass
- Code coverage: > 70%
- Security scan: No high issues
- Linting: Zero errors
- Performance: Baseline met
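A minimal version of the coverage gate above can be sketched in shell. The coverage value and threshold here are assumptions; a real pipeline would parse coverage from the test runner's report:

```shell
# Sketch: fail the build when coverage drops below the gate threshold.
coverage=72     # percent, hypothetically parsed from a coverage report
threshold=70    # the gate from the plan above

if [ "$coverage" -ge "$threshold" ]; then
  result="pass"
else
  result="fail"
fi
echo "coverage gate: $result"
```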
### Deployment Criteria
- All tests passing
- Code review approved
- Documentation updated
- Security cleared
- Performance validated
## Defect Prevention
### Root Cause Analysis
- Common defect patterns
- Prevention strategies
- Process improvements
- Training needs
### Shift-Left Practices
- Requirements review
- Design review
- Code review
- Early testing
- Static analysis
## Test Data Management
### Data Strategy
- Synthetic data generation
- Data privacy compliance
- Test data refresh
- Environment isolation
- Data versioning
### Environment Management
- Dev: Continuous updates
- Test: Stable baseline
- Staging: Production mirror
- Performance: Isolated
## Performance Testing
### Test Scenarios
- Load testing: Normal traffic
- Stress testing: Peak loads
- Spike testing: Sudden increases
- Endurance: Extended duration
- Scalability: Growth simulation
### Performance Targets
- Response time: < 200ms
- Throughput: X requests/sec
- Error rate: < 0.1%
- CPU usage: < 70%
- Memory: Stable
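Checking a measured run against the error-rate target can be sketched as follows; the request and error counts are hypothetical load-test figures:

```shell
# Sketch: verify the error rate stays under the < 0.1% target.
requests=100000   # hypothetical total requests in a load-test run
errors=50         # hypothetical failed requests
verdict=$(awk -v e="$errors" -v r="$requests" \
  'BEGIN { rate = 100 * e / r; printf "%.3f%% %s", rate, (rate < 0.1 ? "pass" : "fail") }')
echo "error rate: $verdict"
```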
## Security Testing
### Security Checks
- SAST: Static analysis
- DAST: Dynamic testing
- Dependency scanning
- Penetration testing
- Security review
### Compliance
- OWASP Top 10
- Industry standards
- Regulatory requirements
- Data protection
## Accessibility Testing
### Coverage Areas
- WCAG compliance
- Screen reader testing
- Keyboard navigation
- Color contrast
- Alternative text
### Testing Tools
- Automated scanners
- Manual testing
- User testing
- Compliance reports
## Cross-Platform Testing
### Browser Coverage
- Chrome: Latest 2 versions
- Firefox: Latest version
- Safari: Latest version
- Edge: Latest version
- Mobile browsers
### Device Testing
- Desktop: Various resolutions
- Tablet: iOS/Android
- Mobile: iOS/Android
- Responsive design
## Monitoring and Metrics
### Quality Dashboard
- Test results trend
- Coverage trend
- Defect trends
- Performance metrics
- Build success rate
### Key Indicators
- Test effectiveness
- Defect escape rate
- Test efficiency
- Automation ROI
- Quality cost
## Risk-Based Testing
### Risk Assessment
- Business criticality
- Technical complexity
- Change frequency
- Defect history
- User impact
### Test Prioritization
- Critical: Full coverage
- High: Comprehensive
- Medium: Standard
- Low: Basic smoke
## Continuous Improvement
### Retrospectives
- Testing effectiveness
- Process improvements
- Tool evaluation
- Skill development
### Innovation
- AI-powered testing
- Chaos engineering
- Property-based testing
- Contract testing
## Team Development
### Training Plan
- Testing best practices
- Tool training
- Domain knowledge
- Automation skills
### Quality Culture
- Quality ownership
- Early involvement
- Continuous learning
- Knowledge sharing
## Estimated Impact
- Defect reduction: 70%
- Test efficiency: +50%
- Release confidence: 95%
- Customer satisfaction: +30%
## Implementation Roadmap
### Week 1-2: Foundation
- Fix critical tests
- Add quality gates
- Improve coverage
### Week 3-4: Automation
- Automate regression
- Add API tests
- Implement CI/CD
### Week 5-6: Excellence
- Performance testing
- Security testing
- Full automation
## Success Metrics
- Test coverage > 85%
- Zero escaped defects
- All tests automated
- Quality gates enforced
YOU MUST CREATE THE REPORT FILE. This is not optional.
Create the report file using the Write tool at the specified path:
The file name follows `/reports/{command-name}-{scope}-{timestamp}.md`, where the timestamp uses the format `YYYY-MM-DD-HHmmss` — for example, `/reports/quality-review-entire-project-2025-10-14-143022.md`. Fill in ALL sections of the report template.
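The expected path can be sketched as follows; the scope slug is a placeholder:

```shell
# Sketch: build the report path in the documented format.
timestamp=$(date +%Y-%m-%d-%H%M%S)    # YYYY-MM-DD-HHmmss
scope="entire-project"                # placeholder scope slug
path="/reports/quality-review-${scope}-${timestamp}.md"
echo "$path"
```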
Confirm completion by telling the user the full path to the created report file.
❌ DON'T:
- Just summarize findings in the chat
- Say "I'll create a report" without actually doing it
- Leave sections incomplete or with placeholders
- Forget to use the Write tool

✅ DO:
- Always use the Write tool to create the markdown file
- Fill in every section with real findings
- Provide the full path to the user when done
- Include actionable recommendations
Before responding to the user, verify that the report file exists at the specified path and that every section is filled in.
Remember: The report is the primary deliverable. The chat summary is secondary.