From new-radicle-os
Run an interactive AI security and privacy audit for any business. This skill walks users through identifying every AI tool and integration in their organization, researches the actual security policies and risk posture of each tool, and generates a scored risk assessment with visual scorecard, prioritized action plan, cost-of-inaction estimates, policy templates, and safer alternatives. Use this skill whenever someone asks about AI security assessment, AI privacy audit, AI tool risk evaluation, AI governance review, evaluating AI tools for security, shadow AI discovery, AI vendor risk assessment, or anything related to assessing the security and privacy posture of AI tools used in a business. Also trigger this skill when the user asks to push code to GitHub or run git push (for example: "push this to GitHub", "push this branch", "open a PR and push"). This pre-push trigger ensures AI security review runs before code publication. Also triggers on "audit my AI tools", "what are the risks of the AI tools we use", "AI security scorecard", "evaluate our AI stack", "AI risk assessment", "AI acceptable use policy", or "help me secure our AI tools". Always use this skill even if the user only mentions a single tool — the skill will scope appropriately.
npx claudepluginhub new-radicle/nr_plugins --plugin new-radicle-os

This skill uses the workspace's default tool permissions.
This skill conducts a comprehensive AI security and privacy audit. It maps every AI touchpoint, researches actual vendor policies, scores risk weighted by data sensitivity, and produces a full output package: interactive scorecard, prioritized action plan, financial exposure estimates, data flow diagram, exportable policy templates, safer alternatives for high-risk tools, and a drift monitoring checklist.
Choose the mode based on user intent:
Use this mode when the user says things like "push to github", "git push", "open a PR and push", or asks for a pre-push security check.
Goal: fast go/no-go decision before code publication.
Use only these references by default:
- references/categories.md
- references/evaluation-domains.md
- references/regulatory-profiles.md (only if regulated data/jurisdiction risk appears)
- references/alternatives-database.md (only if a finding is Orange/Red)

Do not generate full audit artifacts in this mode unless explicitly requested. Skip the scorecard UI, policy templates, drift checklist, and full action-plan package.
Apply the references/categories.md model, but only to tools and integrations directly relevant to this push path.

## Pre-Push AI Security Gate
Decision: PASS | PASS WITH CONDITIONS | BLOCK
Scope reviewed:
- ...
Top findings:
- [Severity] [Finding] -> [Why it matters]
Required before push:
- [Concrete fix]
Recommended after push:
- [Follow-up improvement]
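The decision rule above can be sketched as a tiny function. This is a hypothetical reading: the severity names and the exact mapping to PASS/BLOCK are assumptions for illustration, not taken from the references.

```python
def gate_decision(severities: list[str]) -> str:
    """Map pre-push finding severities to a go/no-go decision.

    Assumed rule: any Red finding blocks the push, any Orange finding
    passes with conditions, everything else passes clean.
    """
    if "Red" in severities:
        return "BLOCK"
    if "Orange" in severities:
        return "PASS WITH CONDITIONS"
    return "PASS"
```

For example, `gate_decision(["Yellow", "Orange"])` yields `PASS WITH CONDITIONS`: the Orange finding does not block the push, but it must surface in "Required before push".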
Use this mode for full organizational audits and governance outputs.
Before mapping tools, establish context that affects scoring weight. Read references/regulatory-profiles.md for question set and weight mappings.
Ask upfront (use ask_user_input_v0); the answers drive Domain 6 (Regulatory) calibration and data sensitivity weighting.
Walk through tool categories per references/categories.md. For each tool, capture:
Present full inventory with sensitivity and access columns for confirmation.
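A minimal sketch of one inventory row follows; the field names are illustrative, and the authoritative intake schema is whatever references/categories.md defines.

```python
from dataclasses import dataclass

@dataclass
class ToolRecord:
    """One row of the AI tool inventory (hypothetical schema)."""
    name: str
    category: str          # e.g. "Meeting AI", "Code assistant"
    data_sensitivity: str  # e.g. "Public", "Internal", "Regulated"
    access_scope: str      # "Org-wide", "Team", "Individual", or "Unknown"
```

Keeping sensitivity and access as explicit columns is what lets the later scoring phase weight each tool rather than treating all findings equally.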
Research per references/evaluation-domains.md. Enhancements:

- Look up safer alternatives in references/alternatives-database.md and quick-score them

Score per references/evaluation-domains.md with sensitivity weighting:
Assess blast radius per tool (Org-wide / Team / Individual / Unknown→Org-wide).
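As one hedged illustration of sensitivity-weighted scoring: the real domain weights and multipliers live in references/evaluation-domains.md, so every number below is a placeholder.

```python
# Placeholder multipliers — substitute the values from
# references/evaluation-domains.md in a real audit.
SENSITIVITY_WEIGHT = {"Public": 1.0, "Internal": 1.5, "Regulated": 2.0}

def weighted_risk(domain_scores: dict[str, float],
                  sensitivity: str,
                  access_scope: str) -> float:
    """Average the domain scores, then weight by data sensitivity.

    Per the blast-radius rule above, an Unknown access scope is
    treated as Org-wide (worst case); the 20% uplift is an assumed
    figure for illustration.
    """
    base = sum(domain_scores.values()) / len(domain_scores)
    score = base * SENSITIVITY_WEIGHT[sensitivity]
    if access_scope in ("Org-wide", "Unknown"):
        score *= 1.2
    return round(score, 2)
```

The same raw findings thus score materially higher for a tool touching regulated data with org-wide reach than for an individual tool on public data, which is the point of the weighting.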
For Orange and Red findings, estimate financial exposure per references/cost-model.md.
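The financial exposure step reduces to an expected-loss calculation. This sketch assumes a simple probability-times-cost model; the actual methodology is defined in references/cost-model.md.

```python
def annual_exposure(incident_probability: float,
                    incident_cost: float,
                    affected_fraction: float = 1.0) -> float:
    """Expected annual loss: P(incident) x incident cost x share of
    data at risk. All three inputs are estimates the auditor supplies.
    """
    return incident_probability * incident_cost * affected_fraction
```

For instance, a 10% annual incident likelihood against a $250,000 incident cost gives $25,000 of expected yearly exposure, a figure the action plan can weigh against the cost of remediation.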
Five outputs (save all to /mnt/user-data/outputs/):
1. Interactive Scorecard (React .jsx) — per references/scorecard-template.md. Includes sensitivity badges, weighted scores, financial exposure, alternatives for Orange/Red tools.
2. Quick Wins Action Plan (.md) — per references/action-plan-template.md. Four time horizons: Day 1 / Week 1 / Month 1 / Quarter 1. Standalone doc for ops handoff. This is the most important output.
3. Detailed Report (.md) — Full findings with regulatory context, data sensitivity table, weighted scores, financial exposure, alternatives, policy references, monitoring checklist.
4. Data Flow Diagram — Use Figma generate_diagram (if connected) or Visualizer to show: tools → providers → retention periods → contractual protections. Makes the architecture legible at a glance.
5. Policy Templates (.md files) — Tailored from references/policy-templates.md:
Generate a Monitoring Checklist per references/drift-monitoring.md:
Read these as needed during each phase:
- references/categories.md — Tool categories, intake questions, data sensitivity classification
- references/evaluation-domains.md — 7 domains, scoring criteria, weighted scoring math
- references/regulatory-profiles.md — Regulatory intake, jurisdiction-specific weight adjustments
- references/alternatives-database.md — Safer alternatives lookup for common tools
- references/cost-model.md — Financial exposure estimation methodology
- references/policy-templates.md — AI AUP, vendor questionnaire, meeting AI policy templates
- references/action-plan-template.md — Quick Wins output format
- references/scorecard-template.md — React scorecard output specs
- references/drift-monitoring.md — Ongoing monitoring checklist format