Analyzes code securability using the OWASP FIASSE/SSEM framework. Scores the maintainability (analyzability, modifiability, testability), trustworthiness, and reliability pillars from 0 to 10 for merge reviews and security baselines.
`npx claudepluginhub cmaenner/agent-security-playbook`
This skill uses the workspace's default tool permissions.
Audits code for security vulnerabilities including OWASP Top 10, auth flaws, injection, data exposure, and dependency risks using STRIDE threat modeling and phased reviews.
Performs comprehensive code security audits across eight dimensions via eight parallel agents: OWASP Top 10/CWE vulnerabilities, secrets, dependencies and supply chain, IaC, threats and MITRE ATT&CK, auth, AI code, and compliance.
Audits codebases for vulnerabilities, OWASP Top 10 issues, and security anti-patterns. Checks Claude Code file-denial settings first and invokes a security subagent.
Analyze code for securable engineering qualities by following the full procedure in `plays/tier1-code-analysis/securability-engineering-review.md`.
Each SSEM attribute is scored 0–10. Pillar scores are calculated using weighted sub-attribute scores. The overall SSEM score is the weighted average of the three pillar scores.
| Pillar | Weight | Sub-Attributes (Weight) |
|---|---|---|
| Maintainability | 33% | Analyzability (40%), Modifiability (30%), Testability (30%) |
| Trustworthiness | 34% | Confidentiality (35%), Accountability (30%), Authenticity (35%) |
| Reliability | 33% | Availability (25%), Integrity (35%), Resilience (40%) |
Overall SSEM Score = (Maintainability × 0.33) + (Trustworthiness × 0.34) + (Reliability × 0.33)
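To make the arithmetic concrete, here is a minimal Python sketch of the weighted calculation using the weights from the table above; the sub-attribute scores are illustrative placeholders, not values from any real review.

```python
# Weights from the SSEM pillar/sub-attribute table above.
PILLAR_WEIGHTS = {"maintainability": 0.33, "trustworthiness": 0.34, "reliability": 0.33}
SUB_ATTRIBUTE_WEIGHTS = {
    "maintainability": {"analyzability": 0.40, "modifiability": 0.30, "testability": 0.30},
    "trustworthiness": {"confidentiality": 0.35, "accountability": 0.30, "authenticity": 0.35},
    "reliability": {"availability": 0.25, "integrity": 0.35, "resilience": 0.40},
}

def pillar_score(pillar: str, sub_scores: dict[str, float]) -> float:
    """Weighted average of one pillar's sub-attribute scores (each 0-10)."""
    return sum(sub_scores[attr] * w for attr, w in SUB_ATTRIBUTE_WEIGHTS[pillar].items())

def overall_score(pillar_scores: dict[str, float]) -> float:
    """Weighted average of the three pillar scores."""
    return sum(pillar_scores[p] * w for p, w in PILLAR_WEIGHTS.items())

# Illustrative sub-attribute scores for a hypothetical review.
sub_scores = {
    "maintainability": {"analyzability": 8.0, "modifiability": 7.0, "testability": 6.5},
    "trustworthiness": {"confidentiality": 9.0, "accountability": 7.5, "authenticity": 8.0},
    "reliability": {"availability": 7.0, "integrity": 8.0, "resilience": 6.0},
}
pillars = {p: pillar_score(p, s) for p, s in sub_scores.items()}
print({p: round(v, 2) for p, v in pillars.items()})  # maintainability 7.25, trustworthiness 8.2, reliability 6.95
print(round(overall_score(pillars), 2))              # 7.47
```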
| Score Range | Grade | Description |
|---|---|---|
| 9.0 – 10.0 | Excellent | Exemplary implementation, minimal improvement needed |
| 8.0 – 8.9 | Good | Strong implementation, minor improvements beneficial |
| 7.0 – 7.9 | Adequate | Functional but notable improvement opportunities exist |
| 6.0 – 6.9 | Fair | Basic requirements met, significant improvements needed |
| < 6.0 | Poor | Critical deficiencies requiring immediate attention |
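A small helper can map a score to the grade bands above. This sketch assumes a score just under a band's lower bound falls into the band below it (e.g., 8.95 counts as Good), which is one reasonable reading of the ranges.

```python
def ssem_grade(score: float) -> str:
    """Map an SSEM score (0-10) to its grade band."""
    if score >= 9.0:
        return "Excellent"
    if score >= 8.0:
        return "Good"
    if score >= 7.0:
        return "Adequate"
    if score >= 6.0:
        return "Fair"
    return "Poor"

print(ssem_grade(7.47))  # Adequate
```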
| Severity | Criteria |
|---|---|
| CRITICAL | Attribute deficit directly enables exploitation or prevents incident response |
| HIGH | Attribute deficit significantly increases probability of material impact |
| MEDIUM | Attribute deficit degrades securability but does not directly enable attack |
| LOW | Attribute deficit is a code quality concern with indirect security implications |
| INFORMATIONAL | Positive observation or minor improvement opportunity |
Scope & Context — Establish language/framework, system type, data sensitivity, exposure, lifecycle stage, and team context.
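One way to capture this step's inputs is a small context record; the field names below are illustrative, not mandated by the play.

```python
from dataclasses import dataclass

@dataclass
class ReviewContext:
    language: str          # e.g. "Python 3.12 / FastAPI"
    system_type: str       # e.g. "internet-facing payments API"
    data_sensitivity: str  # e.g. "PII and payment card data"
    exposure: str          # e.g. "public internet, authenticated users only"
    lifecycle_stage: str   # e.g. "production, active development"
    team_context: str      # e.g. "three maintainers, no dedicated security reviewer"
```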
SSEM Attribute Assessment — Maintainability: evaluate analyzability, modifiability, and testability.
SSEM Attribute Assessment — Trustworthiness: evaluate confidentiality, accountability, and authenticity.
SSEM Attribute Assessment — Reliability: evaluate availability, integrity, and resilience.
Transparency Assessment — Self-documenting code, structured logging, audit trails, instrumentation, trust boundary logging.
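As a concrete illustration of the structured, trust-boundary logging this step looks for, here is a minimal sketch using Python's standard logging module; the event names and fields are invented for the example.

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("payments")

def request_refund(actor_id: str, payment_id: str, amount_cents: int, correlation_id: str) -> None:
    # Audit event at the trust boundary: who acted, on what, and under which
    # correlation ID, emitted as machine-parseable JSON for later analysis.
    logger.info(json.dumps({
        "event": "refund.requested",
        "actor_id": actor_id,
        "payment_id": payment_id,
        "amount_cents": amount_cents,
        "correlation_id": correlation_id,
    }))
    # ... perform the refund ...

request_refund("user-42", "pay-9001", 1299, "req-7f3a")
```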
Code-Level Threat Identification — Apply "What can go wrong?" using the Four Question Framework; map solutions to SSEM attributes.
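A sketch of how one answer to "What can go wrong?" might be recorded and tied back to SSEM attributes; the structure and example content are illustrative only.

```python
# Illustrative threat record tying a code-level risk to its mitigation
# and to the SSEM attributes the mitigation strengthens.
threat = {
    "component": "password reset endpoint",
    "what_can_go_wrong": "reset tokens are predictable and reusable",
    "what_we_do_about_it": "issue single-use, high-entropy tokens with a short expiry",
    "ssem_attributes": ["Authenticity", "Integrity"],
}
```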
Dependency Securability — Evaluate dependencies against SSEM attributes (analyzability, modifiability, testability, trustworthiness, reliability).
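Dependency evaluations can be recorded the same way as code findings. The sketch below scores one hypothetical dependency against the SSEM attributes named in this step; the fields and values are illustrative.

```python
from dataclasses import dataclass

@dataclass
class DependencyAssessment:
    name: str
    version: str
    analyzability: float    # 0-10: can we understand what it does?
    modifiability: float    # 0-10: can we patch, pin, or replace it quickly?
    testability: float      # 0-10: can we exercise it in our own test suite?
    trustworthiness: float  # 0-10: provenance, maintainer activity, signing
    reliability: float      # 0-10: release cadence, issue backlog, resilience
    notes: str = ""

dep = DependencyAssessment("requests", "2.32.3", 8.0, 7.0, 9.0, 8.5, 8.0,
                           notes="widely reviewed; pin exact version and verify hashes")
```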
Produce Findings — Score each sub-attribute 0–10 using the rubrics, calculate weighted pillar scores and the overall SSEM score, assign a grade (Excellent/Good/Adequate/Fair/Poor), and generate the SSEM Score Summary, detailed findings per pillar with expected improvement estimates, and the 45-item evaluation checklist.
Part 1: SSEM Score Summary (overall score, grade, pillar breakdown with weights, top strengths, top improvement opportunities).
Part 2: Detailed Findings per pillar (strengths with evidence, weaknesses with examples, recommendations with priority and expected point improvement).
Part 3: Appendix A — 45-item Evaluation Checklist (15 per pillar) with pass/fail summary percentages.
The report also includes a severity count table.
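The Part 1 summary can be assembled directly from the computed scores. A minimal, self-contained sketch follows; the exact output format is illustrative, not prescribed by the play.

```python
def score_summary_md(pillar_scores: dict[str, float], overall: float, grade: str) -> str:
    """Render a minimal Part 1 'SSEM Score Summary' block (format illustrative)."""
    weights = {"Maintainability": 0.33, "Trustworthiness": 0.34, "Reliability": 0.33}
    lines = [
        "## SSEM Score Summary",
        f"Overall SSEM Score: {overall:.2f} / 10 ({grade})",
        "",
        "| Pillar | Weight | Score |",
        "|---|---|---|",
    ]
    for pillar, weight in weights.items():
        lines.append(f"| {pillar} | {round(weight * 100)}% | {pillar_scores[pillar]:.2f} |")
    return "\n".join(lines)

print(score_summary_md(
    {"Maintainability": 7.25, "Trustworthiness": 8.20, "Reliability": 6.95},
    overall=7.47, grade="Adequate",
))
```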