This skill should be used when the user asks to "assess AI literacy", "run an assessment", "check literacy level", "evaluate our AI collaboration", "where are we on the framework", or wants to determine their team's AI literacy level using the ALCI instrument.
From ai-literacy-superpowers. Install with `npx claudepluginhub russmiles/ai-literacy-superpowers --plugin ai-literacy-superpowers`. This skill uses the workspace's default tool permissions.
References: `references/assessment-template.md`
Assess a team's AI collaboration literacy level by combining observable evidence from the repository with clarifying questions, then produce a timestamped assessment document and a README badge.
Scan the repository for signals that indicate which framework level the team is operating at. Each signal maps to a specific level:
Level 0-1 indicators (awareness + prompting):
Level 2 indicators (verification):
- CI workflow files (`*.yml` in `.github/workflows/`)

Level 3 indicators (habitat engineering):

- `CLAUDE.md` or equivalent context engineering file
- `HARNESS.md` with declared constraints
- `AGENTS.md` compound learning memory
- `MODEL_ROUTING.md` model-tier guidance
- `.claude/skills/` project-local skills
- `.claude/agents/` custom agent definitions
- `.claude/commands/` custom commands
- Hooks (`hooks.json`)
- `REFLECTION_LOG.md` with entries
- `.markdownlint.json` or equivalent config

Level 4 indicators (specification architecture):

- `specs/` directory with specification files
- Plans (`plan.md`, `plan-*.md`)

Level 5 indicators (sovereign engineering):
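The file indicators above can be checked mechanically before asking any questions. A minimal sketch, assuming a simple exists-check per path (the level-to-path mapping mirrors the lists above; the function name and return shape are illustrative, not part of the skill):

```python
from pathlib import Path

# Observable indicator paths per framework level, taken from the lists above.
# Level 0-1 and Level 5 signals are mostly non-file-based, so they are omitted here.
LEVEL_SIGNALS = {
    2: [".github/workflows"],
    3: ["CLAUDE.md", "HARNESS.md", "AGENTS.md", "MODEL_ROUTING.md",
        ".claude/skills", ".claude/agents", ".claude/commands",
        "hooks.json", "REFLECTION_LOG.md", ".markdownlint.json"],
    4: ["specs", "plan.md"],
}

def scan_signals(repo_root: str) -> dict[int, list[str]]:
    """Return, per level, which indicator paths actually exist in the repo."""
    root = Path(repo_root)
    return {
        level: [p for p in paths if (root / p).exists()]
        for level, paths in LEVEL_SIGNALS.items()
    }
```

Presence of a file is only a signal, not proof of practice; the clarifying questions below close that gap.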
After scanning, ask questions to fill gaps that observable evidence cannot answer. Focus on:
Ask 3-5 questions maximum. Each question should disambiguate between adjacent levels.
Produce a timestamped Markdown document at `assessments/YYYY-MM-DD-assessment.md` with:
After documenting the assessment, identify adjustments that can be made immediately — without changing any application code or requiring team discussion. These are habitat hygiene fixes:
- **Stale counts:** If the `HARNESS.md` Status section shows outdated counts, update them. If README badges show old numbers, update them.
- **Missing entries:** If `AGENTS.md` GOTCHAS is empty but the assessment revealed gotchas, add them. If `REFLECTION_LOG.md` has no entries from this assessment, add one.
- **Drift detection:** If `HARNESS.md` declares constraints that no longer match reality (tools removed, workflows renamed), update the declarations.
- **Mechanism map staleness:** If the README mechanism map is missing components that the scan found (new agents, commands, hooks, skills), update it.
Present each adjustment to the user and apply it immediately. Record what was adjusted in the assessment document.
Based on the gaps identified, recommend specific changes to how existing workflows and artifacts are operated (not built — the infrastructure exists, it just needs to be used differently):
- **Operating rhythm:** Recommend cadences for harness audits, reflection reviews, mutation score checks, and cost monitoring. Suggest adding these to a calendar or checklist.
- **Habit formation:** Identify which framework habits (from Part VII) are not yet automatic and suggest specific practice exercises.
- **Artifact activation:** Identify artifacts that exist but are not actively used (e.g. an `AGENTS.md` that isn't read at session start, a `MODEL_ROUTING.md` that isn't consulted when dispatching agents) and recommend how to activate them.
- **Promotion opportunities:** Identify unverified `HARNESS.md` constraints that could be promoted to agent-enforced or deterministic enforcement with available tooling.
Present each recommendation to the user. For accepted recommendations, apply the change (update CLAUDE.md with new cadences, promote HARNESS.md constraints, add operating notes to AGENTS.md). Record accepted and rejected recommendations in the assessment document.
Capture a reflection on the assessment itself:
Append this to REFLECTION_LOG.md as a structured entry.
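The exact `REFLECTION_LOG.md` entry format isn't specified here; a minimal sketch of appending a dated entry, assuming a Markdown-heading entry shape (the heading text and helper name are assumptions):

```python
from datetime import date
from pathlib import Path

def append_reflection(log_path: str, summary: str) -> None:
    """Append a dated reflection entry; creates the log file if absent."""
    entry = f"\n## {date.today():%Y-%m-%d} - assessment reflection\n\n{summary}\n"
    with Path(log_path).open("a") as log:
        log.write(entry)
```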
Add or update a badge in the project's README showing the assessed level:
`[![AI Literacy](https://img.shields.io/badge/AI_Literacy-L<level>-<hex>)](assessments/YYYY-MM-DD-assessment.md)`
Colour coding:
| Level | Colour | Hex |
|---|---|---|
| L0 | Grey | 808080 |
| L1 | Light blue | 87CEEB |
| L2 | Blue | 4682B4 |
| L3 | Teal | 20B2AA |
| L4 | Green | 2E8B57 |
| L5 | Gold | DAA520 |
Link target: the assessment document, so anyone who clicks the badge sees the full assessment with evidence and rationale.
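Building the badge line from the colour table can be sketched as below. The hex values come from the table above; the shields.io static-badge URL shape (`badge/<label>-<message>-<color>`) is standard, but the `AI_Literacy` label text and function name are assumptions:

```python
# Hex colours per assessed level, from the colour-coding table above.
LEVEL_COLOURS = {0: "808080", 1: "87CEEB", 2: "4682B4",
                 3: "20B2AA", 4: "2E8B57", 5: "DAA520"}

def badge_markdown(level: int, assessment_doc: str) -> str:
    """Return a Markdown badge linking to the assessment document."""
    img = f"https://img.shields.io/badge/AI_Literacy-L{level}-{LEVEL_COLOURS[level]}"
    return f"[![AI Literacy L{level}]({img})]({assessment_doc})"
```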
The assessed level is the highest level where the team has substantial evidence across all three disciplines. A team with L3 context engineering but L1 verification is assessed at L1 — the weakest discipline is the ceiling.
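The ceiling rule reduces to taking the minimum across the per-discipline levels (the discipline names passed in are illustrative):

```python
def assessed_level(discipline_levels: dict[str, int]) -> int:
    """The weakest discipline is the ceiling: L3 context engineering
    with L1 verification assesses at L1."""
    return min(discipline_levels.values())
```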
| Level | Minimum evidence required |
|---|---|
| L0 | Repo exists, team is aware of AI tools |
| L1 | Some AI tool usage, basic prompting |
| L2 | Automated tests in CI, systematic verification of AI output |
| L3 | CLAUDE.md + at least 3 harness constraints enforced + custom agents or skills |
| L4 | Specifications before code + agent pipeline with safety gates |
| L5 | Platform-level governance + cross-team standards + observability |