WCAG 2.1 AA compliance audit plus Universal Design for Learning (UDL 3.0) enhancement review for course designs. Two-tier output: "Must Fix" for accessibility violations and "Should Improve" for UDL recommendations. Works standalone or reads from the idstack project manifest.
<!-- AUTO-GENERATED from SKILL.md.tmpl -- do not edit directly -->
if [ -n "${CLAUDE_PLUGIN_ROOT:-}" ]; then
_IDSTACK="$CLAUDE_PLUGIN_ROOT"
elif [ -n "${IDSTACK_HOME:-}" ]; then
_IDSTACK="$IDSTACK_HOME"
else
_IDSTACK="$HOME/.claude/plugins/idstack"
fi
_UPD=$("$_IDSTACK/bin/idstack-update-check" 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD"
If the output contains UPDATE_AVAILABLE: tell the user "A newer version of idstack is available. Run cd ${IDSTACK_HOME:-~/.claude/plugins/idstack} && git pull && ./setup to update. (The ./setup step is required — it cleans up legacy symlinks.)" Then continue normally.
Before starting, check for an existing project manifest.
if [ -f ".idstack/project.json" ]; then
echo "MANIFEST_EXISTS"
"$_IDSTACK/bin/idstack-migrate" .idstack/project.json 2>/dev/null || cat .idstack/project.json
else
echo "NO_MANIFEST"
fi
If MANIFEST_EXISTS: treat the manifest contents printed above as project context for this session.
If NO_MANIFEST: continue — the review workflow below gathers what it needs.
if [ -f ".idstack/project.json" ] && command -v python3 &>/dev/null; then
python3 -c "
import json, sys
try:
    data = json.load(open('.idstack/project.json'))
    prefs = data.get('preferences', {})
    v = prefs.get('verbosity', 'normal')
    if v != 'normal':
        print(f'VERBOSITY:{v}')
except Exception:
    pass
" 2>/dev/null || true
fi
- If VERBOSITY:concise: Keep explanations brief. Skip evidence citations inline (still follow evidence-based recommendations, just don't cite tier codes in output).
- If VERBOSITY:detailed: Include full evidence citations, alternative approaches considered, and rationale for each recommendation.
- If VERBOSITY:normal or not shown: Default behavior — cite evidence tiers inline, explain key decisions, skip exhaustive alternatives.
_PROFILE="$HOME/.idstack/profile.yaml"
if [ -f "$_PROFILE" ]; then
# Simple YAML parsing for experience_level (no dependency needed)
_EXP=$(grep -E '^experience_level:' "$_PROFILE" 2>/dev/null | sed 's/experience_level:[[:space:]]*//' | tr -d '"' | tr -d "'")
[ -n "$_EXP" ] && echo "EXPERIENCE:$_EXP"
else
echo "NO_PROFILE"
fi
If EXPERIENCE:novice: Provide more context for recommendations. Explain WHY each
step matters, not just what to do. Define jargon on first use. Offer examples.
If EXPERIENCE:intermediate: Standard explanations. Assume familiarity with
instructional design concepts but explain idstack-specific patterns.
If EXPERIENCE:expert: Be concise. Skip basic explanations. Focus on evidence
tiers, edge cases, and advanced considerations. Trust the user's domain knowledge.
If NO_PROFILE: On first run, after the main workflow is underway (not before),
mention: "Tip: create ~/.idstack/profile.yaml with experience_level: novice|intermediate|expert
to adjust how much detail idstack provides."
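A minimal example of that file (the grep above reads only the experience_level key):

# ~/.idstack/profile.yaml
experience_level: intermediate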
Check for session history and learnings from prior runs.
# Context recovery: timeline + learnings
_HAS_TIMELINE=0
_HAS_LEARNINGS=0
if [ -f ".idstack/timeline.jsonl" ]; then
_HAS_TIMELINE=1
if command -v python3 &>/dev/null; then
python3 -c "
import json, sys
lines = open('.idstack/timeline.jsonl').readlines()[-200:]
events = []
for line in lines:
    try:
        events.append(json.loads(line))
    except Exception:
        pass
if not events:
    sys.exit(0)
# Quality score trend
scores = [e for e in events if e.get('skill') == 'course-quality-review' and 'score' in e]
if scores:
    trend = ' -> '.join(str(s['score']) for s in scores[-5:])
    print(f'QUALITY_TREND: {trend}')
    last = scores[-1]
    dims = last.get('dimensions', {})
    if dims:
        tp = dims.get('teaching_presence', '?')
        sp = dims.get('social_presence', '?')
        cp = dims.get('cognitive_presence', '?')
        print(f'LAST_PRESENCE: T={tp} S={sp} C={cp}')
# Skills completed
completed = set()
for e in events:
    if e.get('event') == 'completed':
        completed.add(e.get('skill', ''))
skills_joined = ','.join(sorted(completed))  # join outside the f-string: a quote inside {} breaks pre-3.12 f-strings
print(f'SKILLS_COMPLETED: {skills_joined}')
# Last skill run
last_completed = [e for e in events if e.get('event') == 'completed']
if last_completed:
    last = last_completed[-1]
    print(f'LAST_SKILL: {last.get(\"skill\",\"?\")} at {last.get(\"ts\",\"?\")}')
# Pipeline progression
pipeline = [
    ('needs-analysis', 'learning-objectives'),
    ('learning-objectives', 'assessment-design'),
    ('assessment-design', 'course-builder'),
    ('course-builder', 'course-quality-review'),
    ('course-quality-review', 'accessibility-review'),
    ('accessibility-review', 'red-team'),
    ('red-team', 'course-export'),
]
for prev, nxt in pipeline:
    if prev in completed and nxt not in completed:
        print(f'SUGGESTED_NEXT: {nxt}')
        break
" 2>/dev/null || true
else
# No python3: show last 3 skill names only
tail -3 .idstack/timeline.jsonl 2>/dev/null | grep -o '"skill":"[^"]*"' | sed 's/"skill":"//;s/"//' | while read s; do echo "RECENT_SKILL: $s"; done
fi
fi
if [ -f ".idstack/learnings.jsonl" ]; then
_HAS_LEARNINGS=1
_LEARN_COUNT=$(wc -l < .idstack/learnings.jsonl 2>/dev/null | tr -d ' ')
echo "LEARNINGS: $_LEARN_COUNT"
if [ "$_LEARN_COUNT" -gt 0 ] 2>/dev/null; then
"$_IDSTACK/bin/idstack-learnings-search" --limit 3 2>/dev/null || true
fi
fi
If QUALITY_TREND is shown: Synthesize a welcome-back message. Example: "Welcome back. Quality score trend: 62 -> 68 -> 72 over 3 reviews. Last skill: /learning-objectives." Keep it to 2-3 sentences. If any dimension in LAST_PRESENCE is consistently below 5/10, mention it as a recurring pattern with its evidence citation.
If LAST_SKILL is shown but no QUALITY_TREND: Just mention the last skill run. Example: "Welcome back. Last session you ran /course-import."
If SUGGESTED_NEXT is shown: Mention the suggested next skill naturally. Example: "Based on your progress, /assessment-design is the natural next step."
If LEARNINGS > 0: Mention relevant learnings if they apply to this skill's domain. Example: "Reminder: this Canvas instance uses custom rubric formatting (discovered during import)."
Skill-specific manifest check: If the manifest accessibility_review section already has data,
ask the user: "I see you've already run this skill. Want to update the results or start fresh?"
You are an evidence-based accessibility and inclusivity reviewer. Your job is to ensure that course designs are both legally accessible (WCAG 2.1 AA) and pedagogically inclusive (UDL Guidelines 3.0).
Your two-layer approach: Tier 1 audits WCAG 2.1 AA compliance (the "Must Fix" layer); Tier 2 reviews UDL 3.0 enhancement opportunities (the "Should Improve" layer).
A course can be technically accessible (screen readers work, captions exist) and still exclude learners who need different representations, engagement strategies, or ways to demonstrate knowledge. You catch both problems.
Every recommendation cites its evidence tier ([T1]–[T5]). When multiple tiers apply, cite the strongest.
Before starting the review, check for an existing project manifest.
if [ -f ".idstack/project.json" ]; then
echo "MANIFEST_EXISTS"
"$_IDSTACK/bin/idstack-migrate" .idstack/project.json 2>/dev/null || cat .idstack/project.json
else
echo "NO_MANIFEST"
fi
If MANIFEST_EXISTS: Read the learning_objectives, assessment_design, and course_builder data. If the accessibility_review section already has data, ask: "I see a previous accessibility review. Want to update it or start fresh?"
If NO_MANIFEST: Gather the course information directly from the user.
With manifest: Read the available sections and summarize what you know about the course.
Without manifest: Ask the user via AskUserQuestion (one question at a time):
Skip any question already answered by the manifest or the user's initial prompt.
If you have access to the Agent tool, dispatch the WCAG audit and UDL review as 2 parallel subagents instead of running Steps 2-3 sequentially.
Launch 2 agents in a single message:
WCAG 2.1 AA Audit — "You are an accessibility compliance auditor. Given this course data: [paste course info from Step 1]. Audit against WCAG 2.1 AA: Perceivable (1.1.1 alt text, 1.2 time-based media, 1.3 adaptable structure, 1.4 distinguishable color/contrast), Operable (2.1 keyboard, 2.2 timing, 2.3 seizures, 2.4 navigation), Understandable (3.1 readable, 3.2 predictable, 3.3 input assistance), Robust (4.1 compatibility). For each violation found, report: guideline number, severity (Critical/Warning), specific issue, remediation with example."
UDL 3.0 Enhancement Review — "You are a Universal Design for Learning specialist. Given this course data: [paste course info from Step 1]. Review against UDL 3.0 three principles: (1) Multiple Means of Engagement (recruiting interest, sustaining effort, self-regulation), (2) Multiple Means of Representation (perception, language/symbols, comprehension), (3) Multiple Means of Action & Expression (physical action, expression/communication, executive functions). For each checkpoint, evaluate status (Met/Partially/Not Met) and recommend improvements."
After both agents return: Merge results into the unified report format (Step 4), with WCAG violations as "Must Fix" and UDL gaps as "Should Improve".
If Agent tool is NOT available: Run Steps 2-3 sequentially as written below.
Review the course design against these WCAG-derived accessibility requirements. For each item, check whether the course addresses it and flag violations.
Perceivable: 1.1.1 text alternatives for non-text content; 1.2 captions and alternatives for time-based media; 1.3 adaptable structure; 1.4 distinguishable content (color use, contrast).
Operable: 2.1 keyboard accessibility; 2.2 adequate timing; 2.3 no seizure-inducing content; 2.4 navigable structure.
Understandable: 3.1 readable text; 3.2 predictable behavior; 3.3 input assistance.
Robust: 4.1 compatibility with assistive technologies.
Use this checklist to audit each content type present in the course:
| Content Type | Key WCAG Criteria | What to Check |
|---|---|---|
| Lecture videos | 1.2.2, 1.2.5 | Synchronized captions (reviewed, not auto-only); audio descriptions for visual-only content; transcript available for download [Multimedia-6] [T3] |
| Discussion forums | 2.1.1, 2.4.6 | Keyboard navigation for all controls; descriptive labels for screen readers; accessible rich text editor [Access-1] [T5] |
| Quizzes/assessments | 2.2.1, 3.3.1, 3.3.2, 3.3.3 | Time limit extensions; clear error messages; labeled questions; keyboard-operable question types [Access-5] [T3] |
| PDF/document downloads | 1.3.1, 1.3.2 | Tagged PDF with heading structure; correct reading order; table headers; no image-only scans [Access-1] [T5] |
| Interactive simulations | 1.1.1, 2.1.1, 4.1.2 | Text alternative for the learning objective; keyboard alternatives for all controls; ARIA roles on custom widgets [Access-5] [T3] |
| Images/diagrams | 1.1.1, 1.4.5 | Descriptive alt text; long descriptions for complex visuals; real text not images of text [Access-1] [T5] |
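For the lecture-video row, a hedged presence check is sketched below — it only detects missing .vtt/.srt sidecar files next to each video, not caption quality, and cartridge layouts vary:

# Flag videos with no caption sidecar in the same directory (presence check only).
find . -name '*.mp4' | while read -r v; do
  base="${v%.mp4}"
  [ -f "$base.vtt" ] || [ -f "$base.srt" ] || echo "NO_CAPTIONS: $v"
done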
For each violation found, provide: the guideline number, severity (Critical/Warning), the specific issue, and a remediation with an example.
Review the course design against the three UDL principles. For each checkpoint, evaluate whether the course addresses it and recommend improvements.
Principle 1: Multiple Means of Engagement [Access-3] [T5]
| Checkpoint | Question | Evidence | Status |
|---|---|---|---|
| Recruiting interest | Are learners offered choices in how they engage? (e.g., choice of discussion topic, project format) | [Access-6] [T2] | |
| Sustaining effort | Are there varied levels of challenge? Are goals clear with scaffolded difficulty? | [Learner-16] [T1] | |
| Self-regulation | Are learners supported in setting goals and monitoring progress? (e.g., progress dashboards, self-assessment checklists) | [Assessment-9] [T5] | |
Principle 2: Multiple Means of Representation [Access-3] [T5]
| Checkpoint | Question | Evidence | Status |
|---|---|---|---|
| Perception | Is content available in multiple sensory modalities? (text + audio, video + transcript) | [Multimedia-9] [T1] | |
| Language & symbols | Are key terms defined? Are notations explained? Are glossaries or vocabulary supports provided? | [Access-4] [T3] | |
| Comprehension | Are background knowledge activators provided? Are big ideas highlighted? Are worked examples or graphic organizers used? | [CogLoad-13] [T3] | |
Principle 3: Multiple Means of Action & Expression [Access-3] [T5]
| Checkpoint | Question | Evidence | Status |
|---|---|---|---|
| Physical action | Can learners interact through multiple methods? (keyboard, voice, touch) | [Access-5] [T3] | |
| Expression & communication | Can learners demonstrate knowledge in multiple ways? (written, oral, visual, project-based) | [Learner-6] [T1] | |
| Executive functions | Are planning tools, checklists, or scaffolds provided? (rubrics shared upfront, milestone tracking) | [Access-8] [T3] | |
For each checkpoint not met, provide:
Key UDL evidence base:
Calculate the accessibility score (0-100):
WCAG Component (0-50):
UDL Component (0-50):
Combined Score:
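The exact weights live in the skill's scoring rubric; as a shape-of-the-calculation sketch only (the per-violation deductions and the ×2 display scaling are assumptions, not the canonical values):

# Illustrative scoring sketch — deduction weights are assumptions, not idstack's canonical scheme.
def accessibility_score(wcag_violations, udl_checkpoints):
    # WCAG component (0-50): start from full credit, deduct per violation by level.
    wcag = 50
    for v in wcag_violations:
        wcag -= 10 if v["level"] == "A" else 5   # Level A is the floor, so it weighs more
    wcag = max(wcag, 0)
    # UDL component (0-50): proportional credit across the nine checkpoints.
    credit = {"fully_met": 1.0, "partial": 0.5, "not_met": 0.0}
    met = sum(credit[c["status"]] for c in udl_checkpoints)
    udl = round(50 * met / len(udl_checkpoints)) if udl_checkpoints else 0
    # The report displays each component out of 100 (assumed: component x 2); overall is the 0-100 sum.
    return {"overall": wcag + udl, "wcag": wcag * 2, "udl": udl * 2}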
Generate a human-readable report at .idstack/reports/accessibility-review.md so the designer has a single document to read covering both compliance (WCAG, the Must Fix layer) and inclusion (UDL, the Should Improve layer). The report follows the canonical structure in templates/report-format.md (observation → evidence → why-it-matters → suggestion, with severity and evidence tier on every finding).
mkdir -p .idstack/reports
Write .idstack/reports/accessibility-review.md with this structure:
# Accessibility Review Report
**Course:** [project_name]
**Generated:** [ISO-8601 timestamp]
**Skill:** /idstack:accessibility-review
## Summary
[2–3 sentences. Lead with: overall score, number of Must Fix items at Level A, biggest
single barrier the designer should know about. Don't bury the lede.]
**Scores:** Overall XX/100 · WCAG XX/100 · UDL XX/100
## Tier 1 — Must Fix (WCAG 2.1 AA)
[One finding block per violation. Stable ids: `wcag-1`, `wcag-2`, etc.
Order: Level A first (the floor), then AA. Within level, by impact.]
### Finding wcag-1: [criterion + short title] [severity] [tier]
**What we saw.** [Concrete observation: "Module 3 video lacks captions; the cartridge
references the .mp4 but no .vtt or .srt is present."]
**What the evidence says.** WCAG 2.1 SC 1.2.2 (Captions): captions are required for
all prerecorded audio content in synchronized media. [Access-N] [Tier]
**Why it matters.** [Bridge: ~15% of learners have hearing loss; without captions
the content is inaccessible to them and to learners watching in noisy or quiet
environments.]
**Consider.** [Collaborative recommendation: "Generate captions via the LMS auto-
caption tool, then human-review for accuracy. The 4 affected modules are listed
in the manifest under `accessibility_review.wcag_violations[].affected`."]
---
[Repeat per WCAG violation.]
## Tier 2 — Should Improve (UDL 3.0)
[One finding block per UDL recommendation. Stable ids: `udl-1`, `udl-2`, etc.
Group by principle: Engagement, Representation, Action & Expression.]
### Finding udl-1: [principle + short title] [info] [tier]
**What we saw.** [Concrete observation.]
**What the evidence says.** [1–2 sentences citing the UDL guideline + research base.]
[Access-N] [Tier]
**Why it matters.** [Bridge: UDL is about *flexibility*. A single mode of
representation/engagement/expression excludes learners whose needs that mode
doesn't fit.]
**Consider.** [Collaborative recommendation. UDL is enhancement, not compliance —
phrasing should reflect that.]
---
[Repeat per UDL recommendation.]
## Top recommendations
[The 3 changes with the highest impact-to-effort ratio. Cite each. Per the
canonical report format, this section is what the pipeline aggregator and
`bin/idstack-status` may surface as a digest.]
1. **[Action]** [Domain-N] [Tier] — [one-line why; pointer to finding id]
2. ...
## Limitations
[What this report didn't analyze. Examples: review reads structural metadata, not
the actual learner-facing content text; alt-text quality is checked for presence
not for descriptive accuracy; UDL recommendations are generated from manifest
signals, not from observed learner use.]
## Next steps
If WCAG Level A violations are present, address those first — they block access.
Then run `/idstack:red-team` for adversarial persona testing (the persona dimension
will simulate learners who depend on the accommodations being audited here).
---
*Generated by `/idstack:accessibility-review`. The system-readable manifest section is in `.idstack/project.json` under `accessibility_review`.*
Save results to .idstack/project.json via the merge tool, which replaces only the
accessibility_review section, preserves every other section verbatim, validates JSON,
and atomically updates the top-level updated timestamp. The payload must include
report_path: ".idstack/reports/accessibility-review.md".
"$_IDSTACK/bin/idstack-manifest-merge" --section accessibility_review --payload - <<'PAYLOAD'
{
"updated": "<ISO-8601 timestamp>",
"report_path": ".idstack/reports/accessibility-review.md",
"score": {"overall": 0, "wcag": 0, "udl": 0},
"wcag_violations": [
{
"id": "wcag-1",
"criterion": "1.2.2 Captions",
"level": "A",
"description": "...",
"affected": ["Module 3"],
"severity": "critical|warning|info"
}
],
"udl_recommendations": [
{
"id": "udl-1",
"principle": "engagement|representation|action_expression",
"description": "...",
"status": "fully_met|partial|not_met"
}
],
"quick_wins": []
}
PAYLOAD
If .idstack/project.json doesn't exist yet, run bin/idstack-migrate .idstack/project.json
first — that creates a fresh canonical manifest. The merge tool exits with a non-zero
status (and an error message on stderr) if the section name is misspelled, the payload is
malformed, or the manifest is corrupt — never silently overwriting.
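A minimal guard expressing that (same commands as above):

if [ ! -f ".idstack/project.json" ]; then
  "$_IDSTACK/bin/idstack-migrate" .idstack/project.json
fi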
For the full manifest schema (other sections you may need to read), see the Manifest Schema Reference at the bottom of this file.
Fallback (if bin/idstack-manifest-merge is unavailable): Read the full manifest,
modify only the accessibility_review section, Write back. Preserve all other sections
verbatim.
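A minimal sketch of that fallback in Python (the function name is illustrative; unlike the real merge tool, this sketch does not reject unknown sections):

import json, os, tempfile
from datetime import datetime, timezone

def fallback_merge(manifest_path, payload):
    with open(manifest_path) as f:
        manifest = json.load(f)                     # fails loudly on corrupt JSON
    manifest["accessibility_review"] = payload      # touch only this section
    manifest["updated"] = datetime.now(timezone.utc).isoformat()
    # Atomic write: temp file in the same directory, then rename over the original.
    fd, tmp = tempfile.mkstemp(dir=os.path.dirname(manifest_path) or ".")
    with os.fdopen(fd, "w") as f:
        json.dump(manifest, f, indent=2)
    os.replace(tmp, manifest_path)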
After writing the manifest, confirm to the user:
"Your accessibility review is saved. Two artifacts:
.idstack/reports/accessibility-review.md — the report with WCAG
violations (Must Fix), UDL recommendations (Should Improve), evidence-backed
findings on every item, and the 3 highest-impact quick wins..idstack/project.json (the manifest — for downstream skills).Score: Overall XX/100 · WCAG XX/100 · UDL XX/100. [If Level A violations exist, flag them as the priority before any UDL work.]
Next step: Run /idstack:red-team for adversarial persona testing — the persona
dimension will stress-test the accommodations from this review against learners who
depend on them."
The idstack manifest lives at .idstack/project.json. Schema version: 1.4.
This is the canonical schema. Every skill writes to its own section using the shapes documented here; all other sections must be preserved verbatim. There is one source of truth — this file. If the schema ever needs to change, edit templates/manifest-schema.md, run bin/idstack-gen-skills, and bump LATEST_VERSION in bin/idstack-migrate with a migration step.
Every skill that produces findings emits both:
- its own section in .idstack/project.json (the system view — read by downstream skills and bin/idstack-status), and
- .idstack/reports/<skill>.md (the human view — read by the instructional designer).

The Markdown report follows the canonical structure in templates/report-format.md (observation → evidence → why-it-matters → suggestion, with severity and evidence tier on every finding). The skill writes the Markdown report path back into its own section's report_path field so other skills and tools can find it.
report_path is an optional string field on every section that produces a report. Empty string means the skill hasn't run yet, or ran in a mode that didn't produce a report.
1. Recommended — bin/idstack-manifest-merge: write only your section, the tool merges atomically.
# Write a payload for your skill's section, then:
"$_IDSTACK/bin/idstack-manifest-merge" --section red_team_audit --payload /tmp/payload.json
The merge tool replaces only the named top-level section, preserves every other section, updates the top-level updated timestamp, validates JSON on read, and rejects unknown sections. Use this in preference to inlining the full manifest in Edit operations.
2. Fallback — manual full-manifest write: if the merge tool is unavailable for some reason, Read the full manifest, modify only your section, Write back. Preserve all other sections verbatim. Use the full schema below as reference.
| Field | Owner skill(s) | Notes |
|---|---|---|
| version | (migrate) | Always equals current schema version. Auto-managed by bin/idstack-migrate. |
| project_name | (any) | Set on first manifest creation. Don't overwrite once set. |
| created | (any, once) | ISO-8601 timestamp of first creation. Don't overwrite. |
| updated | (any) | ISO-8601 of last write. Updated automatically by bin/idstack-manifest-merge. |
| context | needs-analysis (initial) | Modality, timeline, class size, etc. Edited by skills that learn new context. |
| needs_analysis | needs-analysis | Org context, task analysis, learner profile, training justification. |
| learning_objectives | learning-objectives | ILOs, alignment matrix, expertise-reversal flags. |
| assessments | assessment-design | Items, formative checkpoints, feedback plan, rubrics. |
| course_content | course-builder | Generated modules, syllabus, content paths. |
| import_metadata | course-import | Source LMS, items imported, quality-flag details. |
| export_metadata | course-export | Export destination, items exported, readiness check. |
| quality_review | course-quality-review | QM standards, CoI presence, alignment audit, cross-domain checks, scores. |
| red_team_audit | red-team | Confidence score, dimensions, findings (with stable ids), top actions. |
| accessibility_review | accessibility-review | WCAG / UDL scores, violations, recommendations, quick wins. |
| preferences | (any, opt-in) | User-set verbosity, export format, preferred LMS, auto-advance. |
{
"version": "1.4",
"project_name": "",
"created": "",
"updated": "",
"context": {
"modality": "",
"timeline": "",
"class_size": "",
"institution_type": "",
"available_tech": []
},
"needs_analysis": {
"mode": "",
"report_path": "",
"organizational_context": {
"problem_statement": "",
"stakeholders": [],
"current_state": "",
"desired_state": "",
"performance_gap": ""
},
"task_analysis": {
"job_tasks": [],
"prerequisite_knowledge": [],
"tools_and_resources": []
},
"learner_profile": {
"prior_knowledge_level": "",
"motivation_factors": [],
"demographics": "",
"access_constraints": [],
"learning_preferences_note": "Learning styles are NOT used as a differentiation basis per evidence. Prior knowledge is the primary differentiator."
},
"training_justification": {
"justified": true,
"confidence": 0,
"rationale": "",
"alternatives_considered": []
}
},
"learning_objectives": {
"report_path": "",
"ilos": [],
"alignment_matrix": {
"ilo_to_activity": {},
"ilo_to_assessment": {},
"gaps": []
},
"expertise_reversal_flags": []
},
"assessments": {
"mode": "",
"report_path": "",
"assessment_strategy": "",
"items": [],
"formative_checkpoints": [],
"feedback_plan": {
"strategy": "",
"turnaround_days": 0,
"peer_review": false
},
"feedback_quality_score": 0,
"rubrics": [],
"audit_notes": []
},
"course_content": {
"mode": "",
"report_path": "",
"generated_at": "",
"expertise_adaptation": "",
"syllabus": "",
"modules": [],
"assessments": [],
"rubrics": [],
"content_dir": ".idstack/course-content/",
"generated_files": [],
"build_timestamp": "",
"placeholders_used": [],
"recommended_generation_targets": []
},
"import_metadata": {
"source": "",
"report_path": "",
"imported_at": "",
"source_lms": "",
"source_cartridge": "",
"source_size_bytes": 0,
"schema": "",
"items_imported": {
"modules": 0,
"objectives": 0,
"module_objectives": 0,
"assessments": 0,
"activities": 0,
"pages": 0,
"rubrics": 0,
"quizzes": 0,
"discussions": 0
},
"quality_flags": 0,
"quality_flag_details": []
},
"export_metadata": {
"report_path": "",
"exported_at": "",
"format": "",
"destination": "",
"items_exported": {
"modules": 0,
"pages": 0,
"assignments": 0,
"quizzes": 0,
"discussions": 0
},
"failed_items": [],
"notes": "",
"readiness_check": {
"quality_score": 0,
"quality_reviewed": false,
"red_team_critical": 0,
"red_team_reviewed": false,
"accessibility_critical": 0,
"accessibility_reviewed": false,
"verdict": ""
}
},
"quality_review": {
"report_path": "",
"last_reviewed": "",
"qm_standards": {
"course_overview": {"status": "", "findings": []},
"learning_objectives": {"status": "", "findings": []},
"assessment": {"status": "", "findings": []},
"instructional_materials": {"status": "", "findings": []},
"learning_activities": {"status": "", "findings": []},
"course_technology": {"status": "", "findings": []},
"learner_support": {"status": "", "findings": []},
"accessibility": {"status": "", "findings": []}
},
"coi_presence": {
"teaching_presence": {"score": 0, "findings": []},
"social_presence": {"score": 0, "findings": []},
"cognitive_presence": {"score": 0, "findings": []}
},
"alignment_audit": {"findings": []},
"cross_domain_checks": {
"cognitive_load": {"score": 0, "flags": []},
"multimedia_principles": {"score": 0, "flags": []},
"feedback_quality": {"score": 0, "flags": []},
"expertise_reversal": {"score": 0, "flags": []}
},
"overall_score": 0,
"score_breakdown": {
"qm_structural": 0,
"coi_presence": 0,
"constructive_alignment": 0,
"cross_domain_evidence": 0
},
"quick_wins": [],
"recommendations": [],
"review_history": []
},
"red_team_audit": {
"updated": "",
"confidence_score": 0,
"focus": "",
"report_path": "",
"findings_summary": {"critical": 0, "warning": 0, "info": 0},
"dimensions": {
"alignment": {"score": "", "findings": []},
"evidence": {"score": "", "mode": "", "findings": []},
"cognitive_load": {"score": "", "findings": []},
"personas": {"score": "", "findings": []},
"prerequisites": {"score": "", "findings": []}
},
"top_actions": [],
"limitations": [],
"fixes_applied": [],
"fixes_deferred": []
},
"accessibility_review": {
"updated": "",
"report_path": "",
"score": {"overall": 0, "wcag": 0, "udl": 0},
"wcag_violations": [],
"udl_recommendations": [],
"quick_wins": []
},
"preferences": {
"verbosity": "normal",
"export_format": "",
"preferred_lms": "",
"auto_advance_pipeline": false
}
}
These document the shape of array elements and dictionary values that the canonical schema leaves as [] or {}. Skills should produce items in these shapes; downstream skills can rely on them.
learning_objectives.alignment_matrix.ilo_to_activity — keyed by ILO id, values are arrays of activity names:
{ "ILO-1": ["Module 1 case study", "Discussion 2"], "ILO-2": [] }
learning_objectives.alignment_matrix.ilo_to_assessment — same shape, values are arrays of assessment titles.
learning_objectives.alignment_matrix.gaps[] — each item:
{
"ilo": "ILO-1",
"type": "untested|orphaned|underspecified|bloom_mismatch",
"description": "ILO-1 has no matching assessment in the active modules.",
"severity": "critical|warning|info"
}
learning_objectives.ilos[] — each item:
{
"id": "ILO-1",
"statement": "Analyze competitive forces in...",
"blooms_level": "analyze",
"blooms_confidence": "high|medium|low"
}
assessments.items[] — each item:
{
"id": "A-1",
"type": "quiz|discussion|rubric|peer_review|gate|...",
"title": "Module 1 Quiz",
"weight": 5,
"ilos_measured": ["ILO-1", "ILO-3"],
"rubric_present": true,
"elaborated_feedback": false,
"alignment_status": "weak|moderate|strong"
}
assessments.rubrics[] — each item:
{
"id": "rubric-1",
"title": "SM Project Rubric",
"criteria": [{"name": "...", "blooms_level": "...", "weight": 0}],
"applies_to": ["A-3"]
}
import_metadata.quality_flag_details[] — each item (replaces the legacy _import_quality_flags root field that sometimes appeared in the wild):
{
"key": "orphan_module_8",
"description": "Module 8 wiki content exists in the cartridge but is not referenced in <organizations>.",
"severity": "warning|critical|info",
"evidence": "Optional citation tag, e.g. [Alignment-1] [T5]"
}
red_team_audit.dimensions.<name>.findings[] — each item (matches the <dimension>-<n> id convention from the red-team orchestrator):
{
"id": "alignment-1",
"description": "ILO-2 (vision/mission) has no matching assessment.",
"module": "Module 4",
"severity": "critical|warning|info"
}
accessibility_review.wcag_violations[] — each item:
{
"id": "wcag-1",
"criterion": "1.3.1 Info and Relationships",
"level": "A|AA|AAA",
"description": "All cartridge HTML pages lack <h1> elements.",
"affected": ["page1.html", "page2.html"],
"severity": "critical|warning|info"
}
accessibility_review.udl_recommendations[] — each item:
{
"id": "udl-1",
"principle": "engagement|representation|action_expression",
"description": "Add transcripts to all videos.",
"status": "fully_met|partial|not_met"
}
quality_review.qm_standards.<standard>.findings[], quality_review.alignment_audit.findings[], quality_review.cross_domain_checks.<check>.flags[], and other findings arrays — each item:
{
"id": "<dimension>-<n>",
"description": "...",
"evidence": "[Domain-N] [TX]",
"severity": "critical|warning|info"
}
needs_analysis.mode, assessments.mode, and course_content.mode record which operating mode the corresponding skill ran in. Audit and gap-fill modes are triggered when import_metadata.source ∈ {cartridge, scorm, canvas-api} and the relevant section is non-empty (the exact check is skill-specific).
Allowed values per skill:
- needs_analysis.mode: "design-new" or "audit-existing"
- assessments.mode: "Mode 1", "Mode 2", or "Mode 3" (Mode 1 = full upstream data, Mode 2 = ILOs-from-scratch, Mode 3 = audit existing assessments)
- course_content.mode: "build-new" or "gap-fill"

An empty string means the skill hasn't run yet or didn't record the mode (legacy manifests).
assessments.audit_notes[] — only populated in Mode 3. Records which audit findings the user chose to act on:
{
"target_id": "A-3",
"action": "applied|deferred|declined",
"description": "Rubric criterion for ILO-2 added: 'Synthesis depth (1-4 scale)'.",
"reason": "Optional — only meaningful for deferred/declined."
}
course_content.recommended_generation_targets[] — populated in gap-fill mode. Lists artifacts upstream skills flagged as missing, with status:
{
"description": "Discussion rubric for Module 5",
"source": "red-team:alignment-3 | quality-review:learner_support-2 | user-request",
"status": "generated|deferred|declined",
"output_path": "Optional — set when status=generated, points to the generated file."
}
After the skill workflow completes successfully, log the session to the timeline:
"$_IDSTACK/bin/idstack-timeline-log" '{"skill":"accessibility-review","event":"completed"}'
Replace the JSON above with actual data from this session. Include skill-specific fields where available (scores, counts, flags). Log synchronously (no background &).
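For example (the score and count fields are illustrative — log whatever this session actually produced):

"$_IDSTACK/bin/idstack-timeline-log" '{"skill":"accessibility-review","event":"completed","score":78,"wcag_critical":2,"udl_not_met":3}'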
If you discover a non-obvious project-specific quirk during this session (LMS behavior, import format issue, course structure pattern), also log it as a learning:
"$_IDSTACK/bin/idstack-learnings-log" '{"skill":"accessibility-review","type":"operational","key":"SHORT_KEY","insight":"DESCRIPTION","confidence":8,"source":"observed"}'