From agentic-qe-fleet
Evaluates code quality via complexity analysis, lint results, code smells, test coverage, and metrics. Checks deployment readiness, enforces quality gates, scores codebases, and generates pass/fail reports.
```bash
npx claudepluginhub proffesor-for-testing/agentic-qe --plugin agentic-qe-fleet
```
Guide the use of v3's quality assessment capabilities including automated quality gates, metrics aggregation, trend analysis, and deployment readiness evaluation.
Assesses code maintainability across five qualities (cohesion, coupling, encapsulation, testability, non-redundancy), scoring methods, classes, and modules across languages with defined rubrics. Generates markdown, JSON, and HTML reports with remediation guidance.
Computes composite quality score from test results, security audits, dev detection, and PR review outputs to determine review depth.
```bash
# Run quality assessment
aqe quality assess --scope src/ --gates all

# Check deployment readiness
aqe quality deploy-ready --environment production

# Generate quality report
aqe quality report --format dashboard --period 30d

# Compare quality between releases
aqe quality compare --from v1.0 --to v2.0
```
```javascript
// Comprehensive quality assessment
Task("Assess code quality", `
  Evaluate quality for src/:
  - Code complexity (cyclomatic, cognitive)
  - Test coverage and mutation score
  - Security vulnerabilities
  - Code smells and technical debt
  - Documentation coverage
  Generate quality score and recommendations.
`, "qe-quality-analyzer")

// Deployment readiness check
Task("Check deployment readiness", `
  Evaluate if release v2.1.0 is ready for production:
  - All tests passing
  - Coverage thresholds met
  - No critical vulnerabilities
  - Performance benchmarks passed
  - Documentation updated
  Provide go/no-go recommendation.
`, "qe-deployment-advisor")
```
```javascript
await qualityAnalyzer.assessCode({
  scope: 'src/**/*.ts',
  metrics: {
    complexity: {
      cyclomatic: { max: 15, warn: 10 },
      cognitive: { max: 20, warn: 15 }
    },
    maintainability: {
      index: { min: 65 },
      duplication: { max: 3 } // percent
    },
    documentation: {
      publicAPIs: { min: 80 },
      complexity: { min: 70 }
    }
  }
});
```
```javascript
await qualityGate.evaluate({
  gates: {
    coverage: { min: 80, blocking: true },
    complexity: { max: 15, blocking: false },
    vulnerabilities: { critical: 0, high: 0, blocking: true },
    duplications: { max: 3, blocking: false },
    techDebt: { maxRatio: 5, blocking: false }
  },
  action: {
    onPass: 'proceed',
    onFail: 'block-merge',
    onWarn: 'notify'
  }
});
```
```javascript
await deploymentAdvisor.assess({
  release: 'v2.1.0',
  criteria: {
    testing: {
      unitTests: 'all-pass',
      integrationTests: 'all-pass',
      e2eTests: 'critical-pass',
      performanceTests: 'baseline-met'
    },
    quality: {
      coverage: 80,
      noNewVulnerabilities: true,
      noRegressions: true
    },
    documentation: {
      changelog: true,
      apiDocs: true,
      releaseNotes: true
    }
  }
});
```
```yaml
quality_score:
  components:
    test_coverage:
      weight: 0.25
      metrics: [statement, branch, function]
    code_quality:
      weight: 0.20
      metrics: [complexity, maintainability, duplication]
    security:
      weight: 0.25
      metrics: [vulnerabilities, dependencies]
    reliability:
      weight: 0.20
      metrics: [bug_density, flaky_tests, error_rate]
    documentation:
      weight: 0.10
      metrics: [api_coverage, readme, changelog]
  scoring:
    A: 90-100
    B: 80-89
    C: 70-79
    D: 60-69
    F: 0-59
```
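A minimal sketch of how the weighted composite score and letter grade defined above could be computed. The function names are illustrative, not part of the aqe API; only the weights and grade bands come from the config.

```javascript
// Component weights mirror the quality_score YAML config above.
const WEIGHTS = {
  test_coverage: 0.25,
  code_quality: 0.20,
  security: 0.25,
  reliability: 0.20,
  documentation: 0.10,
};

// Each component score is assumed to be normalized to 0-100.
function compositeScore(components) {
  return Object.entries(WEIGHTS).reduce(
    (total, [name, weight]) => total + weight * (components[name] ?? 0),
    0
  );
}

// Map the 0-100 composite onto the A-F bands from the scoring table.
function grade(score) {
  if (score >= 90) return "A";
  if (score >= 80) return "B";
  if (score >= 70) return "C";
  if (score >= 60) return "D";
  return "F";
}

const score = compositeScore({
  test_coverage: 85,
  code_quality: 78,
  security: 92,
  reliability: 80,
  documentation: 70,
});
console.log(score.toFixed(1), grade(score));
```

Missing components default to 0, so an assessment that skips a dimension is penalized rather than silently inflated.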
```typescript
interface QualityDashboard {
  overallScore: number; // 0-100
  grade: 'A' | 'B' | 'C' | 'D' | 'F';
  dimensions: {
    name: string;
    score: number;
    trend: 'improving' | 'stable' | 'declining';
    issues: Issue[];
  }[];
  gates: {
    name: string;
    status: 'pass' | 'fail' | 'warn';
    value: number;
    threshold: number;
  }[];
  trends: {
    period: string;
    scores: number[];
    alerts: Alert[];
  };
  recommendations: Recommendation[];
}
```
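One plausible way to derive a gate's `pass`/`fail`/`warn` status from its `value` and `threshold` in the dashboard shape above. The `higherIsBetter` flag and the 5% warn margin are assumptions for illustration, not part of the published interface:

```javascript
// Derive a gate status from value vs. threshold. Gates like coverage are
// "higher is better"; max-style gates (complexity, duplication) are inverted
// so that a ratio >= 1 always means the gate is satisfied.
function gateStatus(gate, warnMargin = 0.05) {
  const ratio = gate.higherIsBetter
    ? gate.value / gate.threshold
    : gate.threshold / Math.max(gate.value, 1e-9);
  if (ratio >= 1) return "pass";
  if (ratio >= 1 - warnMargin) return "warn";
  return "fail";
}

const coverageGate = { name: "coverage", value: 78, threshold: 80, higherIsBetter: true };
console.log(gateStatus(coverageGate)); // 78/80 = 0.975 → "warn"
```

Keeping the status derivation in one place means the dashboard, CI gate, and deployment advisor all agree on what "warn" means.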
```yaml
# Quality gate in pipeline
quality_check:
  stage: verify
  script:
    - aqe quality assess --gates all --output report.json
  rules:
    - if: $CI_PIPELINE_SOURCE == "merge_request_event"
  artifacts:
    reports:
      quality: report.json
  allow_failure:
    exit_codes:
      - 1  # Warnings only
```
After each quality assessment, append results to run-history.json in this skill directory:

```bash
node -e "
const fs = require('fs');
const h = JSON.parse(fs.readFileSync('.claude/skills/qe-quality-assessment/run-history.json'));
h.runs.push({date: new Date().toISOString().split('T')[0], gate_result: 'PASS_OR_FAIL', failed_checks: []});
fs.writeFileSync('.claude/skills/qe-quality-assessment/run-history.json', JSON.stringify(h, null, 2));
"
```

Read run-history.json before each run, and alert if the quality gate failed in 3 of the last 5 runs.
Related commands and troubleshooting:

- Run /qe-coverage-analysis and /mutation-testing first
- /test-failure-investigator to diagnose failures
- /code-review-quality for comprehensive review
- aqe health to diagnose issues, or aqe init to re-initialize

Primary Agents: qe-quality-analyzer, qe-deployment-advisor, qe-metrics-collector
Coordinator: qe-quality-coordinator
Related Skills: qe-coverage-analysis, security-testing