AI risk assessment using NIST AI RMF 1.0 framework. Evaluate AI systems across 4 core functions (Govern, Map, Measure, Manage) for trustworthy and responsible AI deployment.
Install via:

```
npx claudepluginhub joshuarweaver/cascade-content-creation-misc-1 --plugin mastepanoski-claude-skills
```

This skill uses the workspace's default tool permissions.
This skill enables AI agents to perform a comprehensive **AI risk assessment** using the **NIST AI Risk Management Framework (AI RMF 1.0)**, published January 2023 by the National Institute of Standards and Technology.
The AI RMF is a voluntary, technology- and sector-agnostic framework designed to help organizations manage risks associated with AI systems throughout their lifecycle. It promotes trustworthy AI development by addressing risks that affect individuals, organizations, and society.
Use this skill to identify, assess, and manage AI risks; establish governance structures; ensure trustworthy AI characteristics; and align with international AI risk management best practices.
Combine with "ISO 42001 AI Governance" for comprehensive compliance coverage or "OWASP LLM Top 10" for security-focused assessment.
Invoke this skill when an AI system needs a structured risk assessment against the NIST AI RMF, for example before deployment, during a governance review, or when aligning with regulatory requirements.
When executing this assessment, gather:
- `ai_system_description` — what the system does and how it is built
- `system_lifecycle_stage` — design, development, deployment, or monitoring
- `organization_context` — the organizational and regulatory environment
The AI RMF identifies seven characteristics of trustworthy AI that serve as evaluation criteria across all functions:
1. Valid and reliable
2. Safe
3. Secure and resilient
4. Accountable and transparent
5. Explainable and interpretable
6. Privacy-enhanced
7. Fair, with harmful bias managed
The AI RMF Core is composed of four functions, each broken into categories and subcategories:
**GOVERN** establishes organizational policies, processes, and accountability for AI risk management. GOVERN is cross-cutting and applies across all other functions.
- **GOVERN 1**: Policies, processes, procedures, and practices across the organization related to the mapping, measuring, and managing of AI risks are in place, transparent, and implemented effectively.
- **GOVERN 2**: Accountability structures ensure appropriate teams and individuals are empowered, responsible, and trained for AI risk management.
- **GOVERN 3**: Workforce diversity, equity, inclusion, and accessibility processes are prioritized in AI risk management.
- **GOVERN 4**: Organizational teams are committed to a culture that considers and communicates AI risk.
- **GOVERN 5**: Processes are in place for robust engagement with relevant AI actors.
- **GOVERN 6**: Policies and procedures address AI risks from third-party software, data, and supply chains.
**MAP** identifies and contextualizes AI system risks within the operational environment.
- **MAP 1**: Context is established and understood.
- **MAP 2**: Categorization of the AI system is performed.
- **MAP 3**: AI capabilities, targeted usage, goals, expected benefits, and costs are understood.
- **MAP 4**: Risks and benefits are mapped for all components, including third-party.
- **MAP 5**: Impacts to individuals, groups, communities, organizations, and society are characterized.
**MEASURE** employs tools, techniques, and methodologies to assess, benchmark, and monitor AI risk.
- **MEASURE 1**: Appropriate methods and metrics are identified and applied.
- **MEASURE 2**: AI systems are evaluated for trustworthy characteristics.
- **MEASURE 3**: Mechanisms for tracking identified AI risks over time are in place.
- **MEASURE 4**: Feedback about the efficacy of measurement is gathered and assessed.
**MANAGE** allocates resources to mapped and measured risks on a regular basis.
- **MANAGE 1**: AI risks are prioritized, responded to, and managed based on assessments.
- **MANAGE 2**: Strategies to maximize AI benefits and minimize negative impacts are planned and documented.
- **MANAGE 3**: AI risks and benefits from third-party entities are managed.
- **MANAGE 4**: Risk treatments and communication plans are documented and monitored.
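The four functions and their categories above can be sketched as a simple data structure that an assessment tool might iterate over. The category summaries below are paraphrased from the framework text in this document; the exact phrasing is illustrative.

```python
# The AI RMF Core as a nested structure: function -> ordered categories.
# Category summaries paraphrase the descriptions above (illustrative).
AI_RMF_CORE = {
    "GOVERN": [
        "Policies and processes",
        "Accountability structures",
        "Workforce diversity, equity, inclusion, and accessibility",
        "Risk-aware organizational culture",
        "Engagement with relevant AI actors",
        "Third-party software, data, and supply chain",
    ],
    "MAP": [
        "Context established and understood",
        "AI system categorization",
        "Capabilities, usage, goals, benefits, and costs",
        "Risks and benefits of all components, including third-party",
        "Impacts to individuals, groups, organizations, and society",
    ],
    "MEASURE": [
        "Methods and metrics",
        "Evaluation of trustworthy characteristics",
        "Tracking identified risks over time",
        "Feedback on measurement efficacy",
    ],
    "MANAGE": [
        "Risk prioritization and response",
        "Benefit-maximization and impact-minimization strategies",
        "Third-party risks and benefits",
        "Risk treatments and communication plans",
    ],
}

# Enumerate category IDs such as "GOVERN 1" through "MANAGE 4".
for function, categories in AI_RMF_CORE.items():
    for i, name in enumerate(categories, start=1):
        print(f"{function} {i}: {name}")
```

Iterating this structure yields the 19 category IDs (GOVERN 1-6, MAP 1-5, MEASURE 1-4, MANAGE 1-4) used as section headings in the report template below.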
Follow these steps systematically:
1. **Review the AI system**: examine `ai_system_description` and `system_lifecycle_stage`.
2. **Understand context**: examine `organization_context` and the regulatory environment.
3. **Define scope**: determine which functions and categories apply to this system.
4. **Assess GOVERN**: evaluate organizational governance.
5. **Assess MAP**: evaluate risk identification and context.
6. **Assess MEASURE**: evaluate risk measurement and monitoring.
7. **Assess MANAGE**: evaluate risk response and treatment.
8. **Report**: compile assessment findings with ratings and recommendations.
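The compilation step above can be sketched as a small aggregation: each per-category finding carries a rating and a severity, and the executive summary tallies issues by severity. The record layout (`category`, `rating`, `severity` keys) is an illustrative assumption, not part of the framework.

```python
# Sketch: tally assessment findings by severity for the executive summary.
# The finding record shape is an assumption for illustration only.
from collections import Counter

RATINGS = ("Not Implemented", "Partial", "Substantial", "Full")

def compile_summary(findings: list[dict]) -> Counter:
    """findings: [{'category': 'GOVERN 1', 'rating': ..., 'severity': ...}]"""
    for f in findings:
        if f["rating"] not in RATINGS:
            raise ValueError(f"unknown rating: {f['rating']}")
    # Count issues per severity bucket (Critical/High/Medium/Low).
    return Counter(f["severity"] for f in findings)

summary = compile_summary([
    {"category": "GOVERN 1", "rating": "Partial", "severity": "High"},
    {"category": "MAP 2", "rating": "Not Implemented", "severity": "Critical"},
])
```

The resulting counts feed the "Key Findings" totals in the report template.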
Generate a comprehensive NIST AI RMF assessment report:
# NIST AI RMF Assessment Report
**AI System**: [Name/Description]
**Organization**: [Name]
**Date**: [Date]
**Lifecycle Stage**: [Design/Development/Deployment/Monitoring]
**Evaluator**: [AI Agent or Human]
**AI RMF Version**: 1.0 (January 2023)
---
## Executive Summary
### Overall Risk Profile: [Low / Medium / High / Critical]
**System Type**: [Classifier / Generative / Recommender / Autonomous / Other]
**Deployment Context**: [Internal / Customer-facing / Public / Critical infrastructure]
**Regulatory Applicability**: [EU AI Act risk level, state laws, sector regulations]
### Key Findings
- **Total Issues**: [X]
- Critical: [X] (immediate action required)
- High: [X] (action required within 30 days)
- Medium: [X] (action required within 90 days)
- Low: [X] (improvements recommended)
### Trustworthiness Summary
| Characteristic | Status | Rating |
|---|---|---|
| Valid & Reliable | [Status] | [1-5] |
| Safe | [Status] | [1-5] |
| Secure & Resilient | [Status] | [1-5] |
| Accountable & Transparent | [Status] | [1-5] |
| Explainable & Interpretable | [Status] | [1-5] |
| Privacy-Enhanced | [Status] | [1-5] |
| Fair (Bias Managed) | [Status] | [1-5] |
---
## GOVERN Function Assessment
### GOVERN 1: Policies and Processes
**Rating**: [Not Implemented / Partial / Substantial / Full]
**Findings:**
- [Finding 1 with evidence]
- [Finding 2 with evidence]
**Gaps:**
- [ ] [Gap description]
**Recommendations:**
- [Recommendation with priority]
### GOVERN 2: Accountability Structures
**Rating**: [Not Implemented / Partial / Substantial / Full]
[Continue for all GOVERN categories...]
---
## MAP Function Assessment
### MAP 1: Context Established
**Rating**: [Not Implemented / Partial / Substantial / Full]
**Findings:**
- [Findings with evidence]
**Gaps:**
- [ ] [Gap description]
**Recommendations:**
- [Recommendation with priority]
[Continue for all MAP categories...]
---
## MEASURE Function Assessment
### MEASURE 1: Methods and Metrics
**Rating**: [Not Implemented / Partial / Substantial / Full]
**Findings:**
- [Findings with evidence]
**Gaps:**
- [ ] [Gap description]
**Recommendations:**
- [Recommendation with priority]
[Continue for all MEASURE categories...]
---
## MANAGE Function Assessment
### MANAGE 1: Risk Prioritization
**Rating**: [Not Implemented / Partial / Substantial / Full]
**Findings:**
- [Findings with evidence]
**Gaps:**
- [ ] [Gap description]
**Recommendations:**
- [Recommendation with priority]
[Continue for all MANAGE categories...]
---
## Risk Register
| ID | Risk Description | Function | Likelihood | Impact | Priority | Mitigation |
|---|---|---|---|---|---|---|
| R1 | [Description] | [GOVERN/MAP/MEASURE/MANAGE] | [L/M/H] | [L/M/H] | [P0-P3] | [Strategy] |
| R2 | [Description] | [GOVERN/MAP/MEASURE/MANAGE] | [L/M/H] | [L/M/H] | [P0-P3] | [Strategy] |
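One way to derive the register's P0-P3 priority from the Likelihood and Impact columns is a simple score matrix. The specific thresholds below are an illustrative assumption; the AI RMF does not prescribe a scoring formula.

```python
# Sketch: map L/M/H likelihood and impact to a P0 (urgent) - P3 (low)
# priority bucket. Thresholds are an assumption, not from the AI RMF.
def priority(likelihood: str, impact: str) -> str:
    score = {"L": 1, "M": 2, "H": 3}
    s = score[likelihood] * score[impact]  # 1..9
    if s >= 9:
        return "P0"  # high likelihood AND high impact
    if s >= 6:
        return "P1"
    if s >= 3:
        return "P2"
    return "P3"
```

For example, a High/High risk maps to P0 and a Low/Low risk to P3; adjust the cutoffs to match organizational risk appetite.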
---
## Remediation Roadmap
### Phase 1: Critical (0-30 days)
1. [Action item with owner and deadline]
2. [Action item with owner and deadline]
### Phase 2: High Priority (30-90 days)
1. [Action item with owner and deadline]
### Phase 3: Medium Priority (90-180 days)
1. [Action item with owner and deadline]
### Phase 4: Continuous Improvement
1. [Ongoing practices]
---
## Compliance Alignment
### Regulatory Mapping
| Regulation | Relevant AI RMF Functions | Status |
|---|---|---|
| EU AI Act | GOVERN, MAP, MEASURE | [Status] |
| NIST CSF 2.0 | GOVERN, MANAGE | [Status] |
| State AI Laws | GOVERN, MAP | [Status] |
| Sector Regulations | [Relevant functions] | [Status] |
---
## Next Steps
### Immediate Actions
1. [ ] Address critical findings
2. [ ] Assign risk owners
3. [ ] Establish monitoring cadence
### Short-term (1-3 months)
1. [ ] Implement Phase 1 remediation
2. [ ] Establish governance structure
3. [ ] Train personnel on AI RMF
### Long-term (3-12 months)
1. [ ] Complete all remediation phases
2. [ ] Conduct follow-up assessment
3. [ ] Integrate into organizational risk management
---
## Resources
- [NIST AI RMF 1.0](https://www.nist.gov/itl/ai-risk-management-framework)
- [NIST AI RMF Playbook](https://www.nist.gov/itl/ai-risk-management-framework/nist-ai-rmf-playbook)
- [NIST AI RMF Generative AI Profile](https://airc.nist.gov/Docs/1)
- [NIST Trustworthy AI Resource Center](https://airc.nist.gov/)
---
**Assessment Version**: 1.0
**Date**: [Date]
Use this scale for subcategory ratings:
| Rating | Description |
|---|---|
| Not Implemented | No evidence of activity or documentation |
| Partial | Some activity but inconsistent or incomplete |
| Substantial | Mostly implemented with minor gaps |
| Full | Fully implemented and regularly maintained |
Use this scale for trustworthiness characteristics:
| Score | Description |
|---|---|
| 1 | Not addressed |
| 2 | Minimally addressed |
| 3 | Partially addressed |
| 4 | Substantially addressed |
| 5 | Fully addressed and monitored |
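The seven 1-5 trustworthiness scores can be rolled up into the report's overall risk profile (Low/Medium/High/Critical). The aggregation rule below, keyed on the worst and average scores, is an illustrative assumption; the AI RMF does not define a scoring formula.

```python
# Sketch: derive the overall risk profile from the seven trustworthiness
# scores. Lower trustworthiness implies higher residual risk.
# Thresholds are an assumption for illustration, not from the AI RMF.
def overall_risk_profile(scores: dict[str, int]) -> str:
    if any(not 1 <= s <= 5 for s in scores.values()):
        raise ValueError("scores must be on the 1-5 scale")
    worst = min(scores.values())
    avg = sum(scores.values()) / len(scores)
    if worst == 1:
        return "Critical"  # some characteristic not addressed at all
    if worst == 2 or avg < 3:
        return "High"
    if avg < 4:
        return "Medium"
    return "Low"
```

A single "1" rating on any characteristic dominates the result, reflecting that one unaddressed characteristic can undermine the whole system.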
For generative AI systems, additionally evaluate the twelve GenAI risk categories defined in NIST AI 600-1 (Generative AI Profile, July 2024): CBRN information or capabilities; confabulation; dangerous, violent, or hateful content; data privacy; environmental impacts; harmful bias or homogenization; human-AI configuration; information integrity; information security; intellectual property; obscene, degrading, and/or abusive content; and value chain and component integration.
**Version history**: 1.0, initial release (NIST AI RMF 1.0 compliant)
Remember: The NIST AI RMF is voluntary and risk-based. Not all subcategories apply to every system. Tailor the assessment depth to the system's risk profile and organizational context.