From grc-reporter
Composes week-over-week automation coverage narratives. Use when /report:automation-coverage is running. Frames the delta for leadership around time saved, quality of evidence, and forward-looking compounding value.
npx claudepluginhub rifh2000/claude-grc-engineering --plugin grc-reporter
Week-over-week automation is the most under-sold thing GRC engineers do. A control moved from manual to automated is an hour given back to the team, a better evidence trail, and a compounding asset the next audit leverages without anyone thinking about it.
The analysis has to land the story, not the raw number.
Do not infer automation coverage from Finding remediation metadata alone. If the program did not record explicit automation metrics or curated notes, say the delta is unknown rather than inventing one.
Every automation-coverage report is built on three pieces. Pick the one that matters most for this period's audience and lead with it.
Time saved is the headline most weeks.
Every automated control is hours back to a GRC analyst, security engineer, or auditor. Do not present automation as a technical metric. Present it as labor cost.
"We automated evidence collection for 12 controls this week" is the setup. "That's roughly 40 hours per audit cycle returned to the team, 160 hours per year" is the point.
Honest ranges beat fake precision. If the manual version took "between 2 and 4 hours per cycle," say that. Do not round up to make the number bigger.
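The range arithmetic above can be sketched in a few lines. This is a minimal illustration, not part of the skill: the control names, per-cycle hour ranges, and the assumption of quarterly audit cycles are all hypothetical placeholders.

```python
# Hypothetical sketch: aggregate per-control manual-effort ranges into an
# honest total range instead of a single inflated number.
# Control names and hours are illustrative, not from any real program.
controls = {
    "access-review-evidence": (2, 4),   # (low, high) hours per audit cycle
    "backup-restore-proof":   (1, 2),
    "mfa-config-snapshot":    (0.5, 1),
}

cycles_per_year = 4  # assumed quarterly audit cycles

low = sum(lo for lo, hi in controls.values())
high = sum(hi for lo, hi in controls.values())

print(f"Per cycle: {low}-{high} hours returned")
print(f"Per year:  {low * cycles_per_year}-{high * cycles_per_year} hours returned")
```

Reporting "3.5 to 7 hours per cycle" is less impressive than a rounded-up single figure, but it survives scrutiny, which is the point.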
The evidence-quality anchor matters most when the audience is an auditor, a regulator, or a security-posture-minded exec.
An automated control tested every hour is not the same as a manual control tested quarterly. Say that out loud.
"Before: we sampled 25 repositories annually. Now: every repository, every hour, every push."
That's a security posture change, not a tooling change. Frame it as one. This is where you earn engineering credit from security leadership, not just GRC leadership.
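The scale of that before/after is worth computing rather than asserting. A quick sketch, where the fleet size is a made-up assumption and only the "25 repositories annually" figure comes from the example above:

```python
# Illustrative arithmetic for the before/after framing above.
# The repo count is an assumption; the sample size is from the example.
total_repos = 400          # hypothetical fleet size
sampled_repos = 25         # manual annual sample from the example

manual_checks_per_year = sampled_repos * 1          # annual sampling
automated_checks_per_year = total_repos * 24 * 365  # every repo, every hour

print(f"Manual:    {manual_checks_per_year:,} control observations/year")
print(f"Automated: {automated_checks_per_year:,} control observations/year")
```

Even with rough inputs, the ratio is five orders of magnitude, which is why this reads as a posture change rather than a tooling change.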
The compounding-value anchor is where you graduate from "did the work" to "shapes the program."
Automation done is table stakes. The story is what it enables.
The value of automation compounds. The narrative should show the compound. If you only report what got done, you're reporting tasks. If you report what it unlocks, you're reporting strategy.
Lead with one of the three anchors. Do not lead with the table. The table is proof, not the message.
Do not run this analysis in weeks where the pipeline only has one metric snapshot. The delta isn't real. Use context-bootstrap to explain why and schedule the next snapshot period instead.
Do not fabricate a week-over-week comparison to fill the slot. A missing report is better than a hollow one.
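The single-snapshot rule can be enforced mechanically before any narrative is drafted. A minimal guard sketch, assuming snapshots are `(period, coverage_pct)` pairs; the structure and values are hypothetical, so adapt to whatever schema the pipeline actually records:

```python
# Minimal guard sketch, assuming snapshots are (period, coverage_pct) pairs.
# The snapshot schema and example values are assumptions, not pipeline facts.
def coverage_delta(snapshots):
    """Return the week-over-week delta, or None when the delta isn't real."""
    if len(snapshots) < 2:
        return None  # one snapshot: report "unknown", don't invent a delta
    (_, prev), (_, curr) = snapshots[-2], snapshots[-1]
    return curr - prev

print(coverage_delta([("2024-W21", 38.0)]))                      # → None
print(coverage_delta([("2024-W21", 38.0), ("2024-W22", 41.5)]))  # → 3.5
```

A `None` here maps to the "delta is unknown" wording above, never to a fabricated comparison.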
If the audience isn't clear, default to time-saved. It's the most universally legible.