Install with `npx claudepluginhub habitat-thinking/ai-literacy-superpowers --plugin ai-literacy-superpowers`. This skill uses the workspace's default tool permissions.
Aggregate AI literacy assessments across multiple repositories into an organisational portfolio view. Identifies shared gaps, outliers, and stale assessments, then generates an improvement plan grouped by impact scope (organisation-wide, cluster, individual).
This skill reads existing assessment documents from individual repos.
It does not run /assess remotely — each repo must be assessed
individually first, or the lightweight scan estimates a level from
observable evidence.
The skill discovers repos through three entry points, which can be combined.
**Local (`--local <path>`)**: Scan directories under the given path. Each subdirectory containing a `.git` directory is treated as a repo. Check each for an `assessments/` directory.
**GitHub org (`--org <name>`)**: Use the GitHub CLI to list repos:

```
gh repo list <name> --json name,url,topics --limit 200
```
**Topic (`--topic <tag>`)**: Filter by topic tag. When combined with `--org`, scopes to that org:

```
gh repo list <name> --topic <tag> --json name,url,topics --limit 200
```

When used alone, searches repos accessible to the authenticated user:

```
gh search repos --topic <tag> --json fullName,url --limit 200
```
When both --local and GitHub modes are provided, local repos take
precedence — avoid re-fetching what is already cloned. Deduplicate
by repo name.
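The precedence-and-dedup rule can be sketched as (a sketch; field names are illustrative):

```python
def merge_repos(local: list[dict], remote: list[dict]) -> list[dict]:
    """Deduplicate by repo name; local clones take precedence over GitHub entries."""
    by_name: dict[str, dict] = {}
    for repo in remote:
        by_name[repo["name"]] = {**repo, "source": "github"}
    for repo in local:            # written last, so local wins on name clashes
        by_name[repo["name"]] = {**repo, "source": "local"}
    return list(by_name.values())

local = [{"name": "api-gateway", "path": "~/code/myorg/api-gateway"}]
remote = [{"name": "api-gateway"}, {"name": "billing-service"}]
merged = merge_repos(local, remote)
```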
Based on the provided flags, build a list of repos. Report what was found:
```
Discovered N repos:
  Local: N (from ~/code/myorg/)
  GitHub org: N (from Habitat-Thinking)
  Total after deduplication: N
```
If no repos are found, stop and report.
For each repo, determine its assessment status:
**Has assessment**: Read the most recent `assessments/*.md` file (sorted by filename date). Parse the `**Assessed level**` line and the `**Date**` line.

**No assessment, scan enabled** (`--scan-unassessed`, the default):
Run the lightweight evidence-only scan. See
references/lightweight-scan.md for the signal list and API checks.
Produce an estimated level with lower confidence. No discipline scores
are generated — the scan only determines level, not per-discipline
ratings.
**No assessment, scan disabled** (`--no-scan-unassessed`): Mark as "not assessed." Include in the portfolio view with no level.
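Assuming assessment filenames lead with a YYYY-MM-DD date and documents carry `**Assessed level**` and `**Date**` lines as described, the parsing step might look like this sketch:

```python
import re

def latest_assessment(filenames: list[str]) -> str:
    """Filenames lead with YYYY-MM-DD, so lexicographic max is the newest."""
    return max(filenames)

def parse_assessment(markdown: str) -> dict:
    """Pull level and date out of the bolded summary lines."""
    level = re.search(r"\*\*Assessed level\*\*:?\s*(L\d)", markdown)
    date = re.search(r"\*\*Date\*\*:?\s*(\d{4}-\d{2}-\d{2})", markdown)
    return {
        "level": level.group(1) if level else None,
        "date": date.group(1) if date else None,
    }

doc = "**Assessed level**: L3\n**Date**: 2026-04-01\n"
parsed = parse_assessment(doc)
```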
Report progress as repos are processed:
```
Gathering assessments...
  api-gateway: L3 (assessed 2026-04-01)
  billing-service: L2 (assessed 2026-03-15)
  new-service: L1 (estimated — no prior assessment)
  legacy-auth: not assessed (--no-scan-unassessed)
```
Compute portfolio-level metrics:
Shared gaps: Gaps that appear in 3 or more repos. These indicate organisational problems, not repo-specific issues. Read the Gaps section from each assessed repo's assessment document. For estimated repos, infer gaps from missing evidence signals.
Outliers: Repos significantly above or below the portfolio median.
Stale assessments: Repos where the most recent assessment is older than 90 days. These need re-assessment before their data should drive decisions.
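The three portfolio metrics can be sketched as below, under two stated assumptions: "significantly" is read as two or more levels from the median, and levels are handled as integers:

```python
from datetime import date
from statistics import median

STALE_DAYS = 90
OUTLIER_DISTANCE = 2   # assumption: "significant" = two or more levels from median

def portfolio_metrics(repos: list[dict], today: date) -> dict:
    levels = [r["level"] for r in repos if r["level"] is not None]
    mid = median(levels)
    gap_counts: dict[str, int] = {}
    for r in repos:
        for gap in r["gaps"]:
            gap_counts[gap] = gap_counts.get(gap, 0) + 1
    return {
        # Gaps appearing in 3+ repos point at organisational problems.
        "shared_gaps": [g for g, n in gap_counts.items() if n >= 3],
        "outliers": [r["name"] for r in repos
                     if r["level"] is not None
                     and abs(r["level"] - mid) >= OUTLIER_DISTANCE],
        "stale": [r["name"] for r in repos
                  if r["assessed"] and (today - r["assessed"]).days > STALE_DAYS],
    }

repos = [
    {"name": "api-gateway",     "level": 4,    "gaps": ["no vulnerability scan"], "assessed": date(2026, 4, 1)},
    {"name": "billing-service", "level": 2,    "gaps": ["no vulnerability scan"], "assessed": date(2026, 3, 15)},
    {"name": "new-service",     "level": 1,    "gaps": ["no vulnerability scan"], "assessed": None},
    {"name": "legacy-auth",     "level": None, "gaps": [],                        "assessed": date(2025, 12, 1)},
]
metrics = portfolio_metrics(repos, today=date(2026, 4, 10))
```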
Using the literacy-improvements skill's
references/improvement-mapping.md, map shared gaps to plugin
commands and skills. Group by impact scope:
Organisation-wide (50%+ of repos): One action lifts many repos. These get highest priority. Typical actions: deploy shared CI templates, establish org-wide cadences, create shared skill libraries.
Cluster (2-4 repos): Targeted skill-sharing between related repos. Typical actions: roll out a specific tool, share a constraint pattern.
Individual: Unique to one repo. Defer to that repo's own
literacy-improvements plan.
Present the plan with estimated impact (how many repos each action lifts and toward which level).
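The scope thresholds above reduce to a small classifier (a sketch; the function name is illustrative):

```python
def impact_scope(affected: int, total: int) -> str:
    """Bucket a shared gap by how many repos it touches."""
    if affected / total >= 0.5:       # 50%+ of the portfolio
        return "organisation-wide"
    if 2 <= affected <= 4:            # a small cluster of related repos
        return "cluster"
    return "individual"
```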
For each assessed project that is locally accessible, check for a
harness-health snapshot in observability/snapshots/. If one exists,
read the most recent snapshot's markdown sections and extract:
- Constraints: `N/M enforced (P%)` → extract P as a decimal
- `promotions_this_period / weeks_between_snapshots`
- Rules active: `N/M` → compute N/M as a decimal
- `specs/` directory, threat model doc. Score = count / 5.

If a project has an assessment but no snapshot, its habitat metrics are "unavailable".
Compute means across projects that have values for each metric.
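Averaging only over projects that report each metric can be sketched as follows (`constraints` and `rules_active` are example keys, not the full metric set):

```python
def metric_means(projects: list[dict], keys: list[str]) -> dict:
    """Average each habitat metric over projects that report it; None means unavailable."""
    means = {}
    for key in keys:
        values = [p[key] for p in projects if p.get(key) is not None]
        means[key] = sum(values) / len(values) if values else None
    return means

projects = [
    {"constraints": 0.8, "rules_active": 0.5},
    {"constraints": 0.6, "rules_active": None},   # snapshot missing this metric
    {},                                           # no snapshot: all unavailable
]
means = metric_means(projects, ["constraints", "rules_active"])
```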
Display the full portfolio view to the user.
Write the document to assessments/YYYY-MM-DD-portfolio-assessment.md
in the current working directory (typically the portfolio or platform
repo). Use the template from references/portfolio-template.md.