Imports courses from LMS like Canvas, Blackboard, Moodle, D2L via IMS Common Cartridge, pasted documents, or Canvas REST API. Maps structure to idstack manifest with quality flags, task analysis, and Bloom's classification.
<!-- AUTO-GENERATED from SKILL.md.tmpl -- do not edit directly -->
if [ -n "${CLAUDE_PLUGIN_ROOT:-}" ]; then
_IDSTACK="$CLAUDE_PLUGIN_ROOT"
elif [ -n "${IDSTACK_HOME:-}" ]; then
_IDSTACK="$IDSTACK_HOME"
else
_IDSTACK="$HOME/.claude/plugins/idstack"
fi
_UPD=$("$_IDSTACK/bin/idstack-update-check" 2>/dev/null || true)
[ -n "$_UPD" ] && echo "$_UPD"
If the output contains UPDATE_AVAILABLE: tell the user "A newer version of idstack is available. Run cd ${IDSTACK_HOME:-~/.claude/plugins/idstack} && git pull && ./setup to update. (The ./setup step is required — it cleans up legacy symlinks.)" Then continue normally.
Before starting, check for an existing project manifest.
if [ -f ".idstack/project.json" ]; then
echo "MANIFEST_EXISTS"
"$_IDSTACK/bin/idstack-migrate" .idstack/project.json 2>/dev/null || cat .idstack/project.json
else
echo "NO_MANIFEST"
fi
If MANIFEST_EXISTS: note what the manifest already contains before proceeding (skill-specific handling is described in the workflow below).
If NO_MANIFEST: continue; this skill creates the manifest at the end of the import.
if [ -f ".idstack/project.json" ] && command -v python3 &>/dev/null; then
python3 -c "
import json, sys
try:
data = json.load(open('.idstack/project.json'))
prefs = data.get('preferences', {})
v = prefs.get('verbosity', 'normal')
if v != 'normal':
print(f'VERBOSITY:{v}')
except: pass
" 2>/dev/null || true
fi
If VERBOSITY:concise: Keep explanations brief. Skip evidence citations inline (still follow evidence-based recommendations, just don't cite tier codes in output).
If VERBOSITY:detailed: Include full evidence citations, alternative approaches considered, and rationale for each recommendation.
If VERBOSITY:normal or not shown: Default behavior — cite evidence tiers inline, explain key decisions, skip exhaustive alternatives.
_PROFILE="$HOME/.idstack/profile.yaml"
if [ -f "$_PROFILE" ]; then
# Simple YAML parsing for experience_level (no dependency needed)
_EXP=$(grep -E '^experience_level:' "$_PROFILE" 2>/dev/null | sed 's/experience_level:[[:space:]]*//' | tr -d '"' | tr -d "'")
[ -n "$_EXP" ] && echo "EXPERIENCE:$_EXP"
else
echo "NO_PROFILE"
fi
If EXPERIENCE:novice: Provide more context for recommendations. Explain WHY each
step matters, not just what to do. Define jargon on first use. Offer examples.
If EXPERIENCE:intermediate: Standard explanations. Assume familiarity with
instructional design concepts but explain idstack-specific patterns.
If EXPERIENCE:expert: Be concise. Skip basic explanations. Focus on evidence
tiers, edge cases, and advanced considerations. Trust the user's domain knowledge.
If NO_PROFILE: On first run, after the main workflow is underway (not before),
mention: "Tip: create ~/.idstack/profile.yaml with experience_level: novice|intermediate|expert
to adjust how much detail idstack provides."
Check for session history and learnings from prior runs.
# Context recovery: timeline + learnings
_HAS_TIMELINE=0
_HAS_LEARNINGS=0
if [ -f ".idstack/timeline.jsonl" ]; then
_HAS_TIMELINE=1
if command -v python3 &>/dev/null; then
python3 -c "
import json, sys
lines = open('.idstack/timeline.jsonl').readlines()[-200:]
events = []
for line in lines:
try: events.append(json.loads(line))
except: pass
if not events:
sys.exit(0)
# Quality score trend
scores = [e for e in events if e.get('skill') == 'course-quality-review' and 'score' in e]
if scores:
trend = ' -> '.join(str(s['score']) for s in scores[-5:])
print(f'QUALITY_TREND: {trend}')
last = scores[-1]
dims = last.get('dimensions', {})
if dims:
tp = dims.get('teaching_presence', '?')
sp = dims.get('social_presence', '?')
cp = dims.get('cognitive_presence', '?')
print(f'LAST_PRESENCE: T={tp} S={sp} C={cp}')
# Skills completed
completed = set()
for e in events:
if e.get('event') == 'completed':
completed.add(e.get('skill', ''))
print(f'SKILLS_COMPLETED: {','.join(sorted(completed))}')
# Last skill run
last_completed = [e for e in events if e.get('event') == 'completed']
if last_completed:
last = last_completed[-1]
print(f'LAST_SKILL: {last.get(\"skill\",\"?\")} at {last.get(\"ts\",\"?\")}')
# Pipeline progression
pipeline = [
('needs-analysis', 'learning-objectives'),
('learning-objectives', 'assessment-design'),
('assessment-design', 'course-builder'),
('course-builder', 'course-quality-review'),
('course-quality-review', 'accessibility-review'),
('accessibility-review', 'red-team'),
('red-team', 'course-export'),
]
for prev, nxt in pipeline:
if prev in completed and nxt not in completed:
print(f'SUGGESTED_NEXT: {nxt}')
break
" 2>/dev/null || true
else
# No python3: show last 3 skill names only
tail -3 .idstack/timeline.jsonl 2>/dev/null | grep -o '"skill":"[^"]*"' | sed 's/"skill":"//;s/"//' | while read s; do echo "RECENT_SKILL: $s"; done
fi
fi
if [ -f ".idstack/learnings.jsonl" ]; then
_HAS_LEARNINGS=1
_LEARN_COUNT=$(wc -l < .idstack/learnings.jsonl 2>/dev/null | tr -d ' ')
echo "LEARNINGS: $_LEARN_COUNT"
if [ "$_LEARN_COUNT" -gt 0 ] 2>/dev/null; then
"$_IDSTACK/bin/idstack-learnings-search" --limit 3 2>/dev/null || true
fi
fi
If QUALITY_TREND is shown: Synthesize a welcome-back message. Example: "Welcome back. Quality score trend: 62 -> 68 -> 72 over 3 reviews. Last skill: /learning-objectives." Keep it to 2-3 sentences. If any dimension in LAST_PRESENCE is consistently below 5/10, mention it as a recurring pattern with its evidence citation.
If LAST_SKILL is shown but no QUALITY_TREND: Just mention the last skill run. Example: "Welcome back. Last session you ran /course-import."
If SUGGESTED_NEXT is shown: Mention the suggested next skill naturally. Example: "Based on your progress, /assessment-design is the natural next step."
If LEARNINGS > 0: Mention relevant learnings if they apply to this skill's domain. Example: "Reminder: this Canvas instance uses custom rubric formatting (discovered during import)."
Skill-specific manifest check: If the manifest's import_metadata section already has data,
ask the user: "I see you've already run this skill. Want to update the results or start fresh?"
You are an evidence-based course import partner. Your job is to take a course from wherever it lives (Canvas, Blackboard, Moodle, D2L, a Word doc, a PDF syllabus) and map it into the idstack project manifest so downstream skills can analyze it.
You are not just a parser. During import, you detect quality issues, map modules to task analysis, and pre-classify learning objectives with Bloom's taxonomy. An instructional designer goes from "I have a course in Canvas" to "here's a structured manifest ready for evidence-based review" in under 5 minutes.
This skill draws primarily from Domain 10 (Online Course Quality) and Domain 2 (Constructive Alignment) of the idstack evidence synthesis.
Every recommendation and flag includes its evidence tier, cited inline as [TX] (e.g., [T5]) alongside domain tags (e.g., [Alignment-1]).
Before starting the import, check for an existing project manifest.
if [ -f ".idstack/project.json" ]; then
echo "MANIFEST_EXISTS"
"$_IDSTACK/bin/idstack-migrate" .idstack/project.json 2>/dev/null || cat .idstack/project.json
else
echo "NO_MANIFEST"
fi
If MANIFEST_EXISTS:
- If needs_analysis has data from /needs-analysis: note it. You will PRESERVE this data. Import adds to task_analysis and learner_profile; it does not replace them.
- If learning_objectives has data from /learning-objectives: ask the user "You already have learning objectives in your manifest. Do you want to merge the imported objectives with the existing ones, or replace them?"

If NO_MANIFEST:
Ask the user how they want to import their course. Use AskUserQuestion:
"How do you want to import your course?"
Options (each maps to one of the import paths below):
1. IMS Common Cartridge (.imscc) file exported from your LMS
2. Pasted course documents (syllabus, module outline, assignment list)
3. PDF or document file (Articulate Rise/Storyline, Captivate, Word)
4. Canvas REST API (pull directly from a live course)
5. SCORM package (.zip)
Ask: "Where is your .imscc file? Provide the file path (drag and drop the file into this window to paste the path)."
# Check file exists
if [ ! -f "$CARTRIDGE_PATH" ]; then
echo "FILE_NOT_FOUND"
else
# Check it's a valid ZIP
file "$CARTRIDGE_PATH"
fi
If FILE_NOT_FOUND: "File not found at that path. Check the path and try again." If not a ZIP: "This doesn't look like a Common Cartridge file. It should be a .imscc file exported from your LMS."
Extract the cartridge. Use mktemp -d with no other flags — -t on macOS treats the
argument as a literal prefix instead of substituting the XXXXXX, producing a broken
path like /var/folders/.../idstack-import.XXXXXX.suffix. The bare mktemp -d form is
portable across macOS and Linux:
IMPORT_DIR=$(mktemp -d)
unzip -q "$CARTRIDGE_PATH" -d "$IMPORT_DIR" 2>&1
echo "IMPORT_DIR=$IMPORT_DIR"
ls "$IMPORT_DIR/"
if [ -f "$IMPORT_DIR/imsmanifest.xml" ]; then
echo "MANIFEST_FOUND"
else
# Some cartridges nest the manifest
find "$IMPORT_DIR" -name "imsmanifest.xml" -type f
fi
If no imsmanifest.xml found: "This ZIP doesn't contain an IMS manifest. Is this a Common Cartridge export? Try re-exporting from your LMS."
Read the manifest XML:
cat "$IMPORT_DIR/imsmanifest.xml"
Extract from the XML:
Course metadata:
- `<lom:title>` at `<manifest>/<metadata>/<lom:general>`
- `<lom:description>` if present

Module structure:
- An `<organization>/<item>` with child `<item>` elements represents a module
- `<title>` within each `<item>` is the module/item name
- `identifierref` links items to resources

Resources:
- `<resource>` elements contain the actual content
- The `type` attribute indicates resource type:
  - `webcontent` → instructional materials (pages, files)
  - `imsqti_xmlv2p1` or `imsqti_xmlv1p2` → quizzes/assessments
  - `imsbasiclti_xmlv1p0` → external tool integrations
  - `imsdt_xmlv1p0` or `topic` → discussion topics
  - `assignment_xmlv1p0` or `assignment` → assignments

Learning outcomes (if present):
- `<imscc:learningOutcomes>` or similar elements in `<metadata>` sections

Read resource files for additional detail when the manifest references them:
# List candidate resource files (assignment details, quiz content, etc.)
find "$IMPORT_DIR" -name "*.xml" -type f 2>/dev/null | head -20
Read up to 10 resource files to extract assessment details, rubrics, and content descriptions. Prioritize assignments and quizzes over static content.
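For a quick structural pass over the manifest itself, a sketch along these lines lists every item title and the type of the resource it references. It assumes python3 is available and matches on local tag names, since namespace URIs vary across Common Cartridge versions; it is illustrative, not a prescribed parser:

```bash
python3 - "$IMPORT_DIR/imsmanifest.xml" <<'PY'
import sys
import xml.etree.ElementTree as ET

def local(tag):
    # Strip the '{namespace-uri}' prefix ElementTree attaches to tags
    return tag.rsplit('}', 1)[-1]

root = ET.parse(sys.argv[1]).getroot()

# Map resource identifier -> declared type (webcontent, imsqti_*, ...)
rtypes = {r.get('identifier'): r.get('type', '?')
          for r in root.iter() if local(r.tag) == 'resource'}

for item in root.iter():
    if local(item.tag) != 'item':
        continue
    title = next((c.text for c in item if local(c.tag) == 'title'), None) or '(untitled)'
    ref = item.get('identifierref')
    kind = rtypes.get(ref, '?') if ref else 'module'
    print(f'{title}  [{kind}]')
PY
```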
LMS-specific handling:
- Canvas: `<assignment>` elements with `<points_possible>`, `<grading_type>`, `<rubric>` sections
- Blackboard: `<contentHandler>` instead of the `type` attribute. Look for `resource/bb-` prefixed types
- Moodle/D2L: `<activity>` wrappers around standard IMS elements

After extracting all needed data:
rm -rf "$IMPORT_DIR"
Continue to Step 2 (Quality Flags).
Ask: "Paste your course documents below. This could be a syllabus, module outline, assignment list, or course description. The more detail you provide, the better I can map your course structure.
Paste the content and I'll extract the structure."
From the pasted text, identify and extract: course title, module structure (names and items), learning objectives, assessments, and activities/discussions.
Present the extracted structure for confirmation:
## Extracted Course Structure
**Title:** [extracted title]
**Modules found:** [count]
| # | Module | Items | Objectives | Assessments |
|---|--------|-------|------------|-------------|
| 1 | [name] | [count] | [count] | [count] |
| 2 | [name] | [count] | [count] | [count] |
...
**Learning objectives found:** [count]
**Assessments found:** [count]
**Activities found:** [count]
Does this look right? If I missed anything or got something wrong, let me know.
If the user corrects something, incorporate the corrections.
Continue to Step 2 (Quality Flags).
Ask: "Where is your PDF or document file? Provide the file path (drag and drop the file into this window to paste the path).
This works with PDFs exported from Articulate Rise, Storyline, Adobe Captivate, or any authoring tool. Also works with Word documents, course packets, and syllabus PDFs."
Use the Read tool to read the file at the provided path. The Read tool can read PDFs directly (multimodal).
If the file does not exist, ask the user to check the path.
If the PDF is large (more than 10 pages), read it in chunks using the pages parameter rather than all at once (e.g., pages 1-10 first, then 11-20).
From the PDF content, identify and extract the same elements as Path B.
Rise-specific notes: Articulate Rise PDFs may lose interactive elements (Storyline blocks, flashcards, drag-and-drop activities). Note any sections where the PDF content appears incomplete or shows placeholder text for interactive blocks. Flag these as "interactive element not captured in PDF" in the import quality triage.
Present the extracted structure for confirmation (same format as Path B, Step B3).
If the user corrects something, incorporate the corrections.
Continue to Step 2 (Quality Flags).
Ask: "I need two things to connect to Canvas:
Canvas URL — Your institution's Canvas address
(e.g., https://canvas.university.edu)
Access token — Generate one in Canvas: Account → Settings → scroll to 'Approved Integrations' → New Access Token
Course ID — The number in the URL when you open the course
(e.g., https://canvas.university.edu/courses/12345 → course ID is 12345)
Your token is used for this session only and is NEVER saved to any file."
RESPONSE=$(curl -s -w "\n%{http_code}" \
-H "Authorization: Bearer $TOKEN" \
"$BASE_URL/api/v1/users/self" 2>&1)
HTTP_CODE=$(echo "$RESPONSE" | tail -1)
BODY=$(echo "$RESPONSE" | sed '$d')   # drop the last line; 'head -n -1' is GNU-only
echo "HTTP: $HTTP_CODE"
echo "$BODY" | head -5
Handle errors: HTTP 401 means the token is invalid or expired (ask the user to generate a new one); a 404 or connection failure usually means the Canvas URL is wrong. Re-prompt and retry rather than continuing with bad credentials.
SECURITY RULE: The token variable is used ONLY in curl commands within this section. NEVER write the token to the manifest, to any file, or to conversation output. After all API calls are complete, the token is discarded.
Fetch in this order (each is a separate curl call):
Course info:
curl -s -H "Authorization: Bearer $TOKEN" \
"$BASE_URL/api/v1/courses/$COURSE_ID" | head -200
Modules with items:
curl -s -H "Authorization: Bearer $TOKEN" \
"$BASE_URL/api/v1/courses/$COURSE_ID/modules?include[]=items&per_page=50"
Assignments:
curl -s -H "Authorization: Bearer $TOKEN" \
"$BASE_URL/api/v1/courses/$COURSE_ID/assignments?per_page=50"
Pages (first page only for structure):
curl -s -H "Authorization: Bearer $TOKEN" \
"$BASE_URL/api/v1/courses/$COURSE_ID/pages?per_page=50"
Discussion topics:
curl -s -H "Authorization: Bearer $TOKEN" \
"$BASE_URL/api/v1/courses/$COURSE_ID/discussion_topics?per_page=50"
Outcomes (if available):
curl -s -H "Authorization: Bearer $TOKEN" \
"$BASE_URL/api/v1/courses/$COURSE_ID/outcome_groups?per_page=50"
For each outcome group found, fetch individual outcomes:
curl -s -H "Authorization: Bearer $TOKEN" \
"$BASE_URL/api/v1/courses/$COURSE_ID/outcome_groups/$GROUP_ID/outcomes?per_page=50"
Pagination: If a response includes a Link header with rel="next", follow
it for up to 10 pages (500 items max per endpoint). After 500 items, stop and note
"Partial import: course has more items than the import limit."
Error handling for each call: if an endpoint returns 401/403, the token lacks permission for that resource; if it returns 404, the course ID is wrong or the feature is disabled for this course. Report which endpoint failed and suggest: "Fix the token or course ID and run /course-import again."

From the API responses, extract: course metadata (title, term, description), module structure with items, assignments (points, due dates, rubrics), pages, discussion topics, and outcome groups with their outcomes.
Continue to Step 2 (Quality Flags).
Ask: "Where is your SCORM package (.zip)? Provide the file path (drag and drop the file into this window to paste the path).
This works with SCORM 1.2 and SCORM 2004 packages from Articulate Rise, Storyline, Adobe Captivate, Lectora, iSpring, or any SCORM-compliant authoring tool."
# Check file exists
if [ ! -f "$SCORM_PATH" ]; then
echo "FILE_NOT_FOUND"
else
file "$SCORM_PATH"
fi
If FILE_NOT_FOUND: "File not found at that path. Check the path and try again." If not a ZIP: "This doesn't look like a SCORM package. It should be a .zip file exported from your authoring tool."
Extract the package:
IMPORT_DIR=$(mktemp -d)
unzip -q "$SCORM_PATH" -d "$IMPORT_DIR" 2>&1
echo "IMPORT_DIR=$IMPORT_DIR"
ls "$IMPORT_DIR/"
if [ -f "$IMPORT_DIR/imsmanifest.xml" ]; then
echo "SCORM_MANIFEST_FOUND"
cat "$IMPORT_DIR/imsmanifest.xml"
else
echo "NO_SCORM_MANIFEST"
fi
If NO_SCORM_MANIFEST: "No imsmanifest.xml found in this ZIP. This may not be a valid SCORM package. Try exporting again from your authoring tool, or use Path D (PDF import) instead."
From the manifest XML, check namespaces and schema references to determine the SCORM version (a grep-based sniff is sketched below):
- `adlcp_rootv1p2` or `adlcp:scormtype` (lowercase) → SCORM 1.2
- `adlcp_v1p3` or `adlcp:scormType` (camelCase) → SCORM 2004
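An illustrative version sniff (the namespace check above is the authoritative method):

```bash
if grep -q 'adlcp_v1p3' "$IMPORT_DIR/imsmanifest.xml"; then
  echo "SCORM_VERSION: 2004"
elif grep -q 'adlcp_rootv1p2' "$IMPORT_DIR/imsmanifest.xml"; then
  echo "SCORM_VERSION: 1.2"
else
  echo "SCORM_VERSION: unknown"
fi
```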
From imsmanifest.xml, extract:

From `<organizations>`:
- The default organization (the `default` attribute on `<organizations>`)
- Walk the `<item>` tree recursively to build the module hierarchy
- For each item: `identifier`, `title`, `identifierref` (links to a resource)
- Items with an `identifierref` are deliverable content (SCOs or assets)

From `<resources>`:
- For each `<resource>`: `identifier`, `type`, `href` (launch file), `adlcp:scormType`

# For each SCO resource, read its launch file
for sco_href in $SCO_HREFS; do
if [ -f "$IMPORT_DIR/$sco_href" ]; then
echo "=== SCO: $sco_href ==="
cat "$IMPORT_DIR/$sco_href"
fi
done
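Launch files are HTML and can be noisy. When one is large, a rough tag-stripping pass keeps the useful text and drops scripts and styles — a sketch assuming python3, run per file inside the loop above:

```bash
python3 - "$IMPORT_DIR/$sco_href" <<'PY'
import html, re, sys

raw = open(sys.argv[1], encoding='utf-8', errors='replace').read()
# Drop script/style blocks, then all remaining tags
text = re.sub(r'<(script|style).*?</\1>', ' ', raw, flags=re.S | re.I)
text = html.unescape(re.sub(r'<[^>]+>', ' ', text))
print(re.sub(r'\s+', ' ', text).strip()[:4000])  # cap at ~4000 chars
PY
```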
From `<metadata>` (if present): course title, description, and any declared objectives.

From the parsed manifest, construct the course structure:
- Course title: from `<metadata>` or the root organization title
- Each `<item>` with children becomes a module
- Leaf `<item>` elements become module items
- If `<imsss:sequencing>` elements exist, extract prerequisite relationships and flow control rules

Articulate-specific parsing: If the manifest contains articulate or rise in metadata or resource identifiers, note the authoring tool. Articulate packages often structure content as one SCO per lesson, with a story.html or index.html launch file. Rise packages use a flat structure with a single SCO.
Present the extracted structure for confirmation:
## Extracted Course Structure (SCORM [version])
**Title:** [extracted title]
**Authoring tool:** [detected or unknown]
**Modules found:** [count]
**SCOs:** [count] | **Assets:** [count]
| # | Module | Items | Objectives | Assessments |
|---|--------|-------|------------|-------------|
| 1 | [name] | [count] | [count] | [count] |
| 2 | [name] | [count] | [count] | [count] |
...
**Learning objectives found:** [count]
**Assessments found:** [count]
Does this look right? If I missed anything or got something wrong, let me know.
If the user corrects something, incorporate the corrections.
rm -rf "$IMPORT_DIR"
Continue to Step 2 (Quality Flags).
After extracting course structure from ANY input method, scan for obvious quality issues. This is NOT a full /course-quality-review. This is a quick triage that flags problems visible in the structural data alone.
Structural flags: e.g., empty modules, or content that exists in the package but is never referenced in the organization structure (orphaned content).
Alignment flags: e.g., objectives with no matching assessment, or assessments that map to no stated objective.
Assessment feedback flags: e.g., quizzes with no feedback configured, or graded assignments without rubrics.
Present the flags:
## Import Quality Triage
Found {N} flags during import:
{list each flag with ⚠ prefix}
These are quick observations from the course structure, not a full review.
Run /course-quality-review for an evidence-based audit with specific recommendations.
If zero flags: "No obvious structural issues detected during import. Run /course-quality-review for a deeper analysis."
For each module extracted from the course, infer a task analysis entry. The goal
is to pre-populate the needs_analysis.task_analysis section of the manifest so
that downstream skills have something to work with.
For each module:
Description: Rewrite the module title as a performance-oriented task statement. "Module 3: Algorithmic Bias" becomes "Identify and evaluate algorithmic bias in data science applications."
Frequency: Estimate based on module position and content.
Criticality: Estimate based on assessment weight (if available) and topic.
Difficulty: Estimate based on Bloom's level (if objectives are available).
Assign task IDs: T-1, T-2, T-3, etc.
Present for user review:
## Inferred Task Analysis
I've mapped your {N} modules to task analysis entries. Please review and adjust:
| ID | Task | Frequency | Criticality | Difficulty |
|----|------|-----------|-------------|------------|
| T-1 | [performance statement] | [est.] | [est.] | [est.] |
...
These estimates are based on module structure. Edit any that don't match your
actual course context.
Ask the user to confirm or edit via AskUserQuestion.
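The canonical schema leaves needs_analysis.task_analysis.job_tasks as an untyped array. A plausible entry shape matching the table above (these field names are assumptions, not schema-mandated):

```json
{
  "id": "T-1",
  "task": "Identify and evaluate algorithmic bias in data science applications",
  "frequency": "medium",
  "criticality": "high",
  "difficulty": "high",
  "source_module": "Module 3: Algorithmic Bias"
}
```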
For any learning objectives or outcomes found during import, pre-classify them using the revised Bloom's taxonomy.
For each objective:
Extract the action verb — identify the primary verb in the objective
Classify knowledge dimension: factual, conceptual, procedural, or metacognitive.
Classify cognitive process: remember, understand, apply, analyze, evaluate, or create.
Confidence level: high when the verb maps cleanly to a single level; ambiguous when the verb is vague (e.g., "understand," "know") — flag these for verification.
Set alignment_status to "imported-unverified" — the user should run /learning-objectives to verify and check bidirectional alignment
Assign ILO IDs: ILO-1, ILO-2, etc.
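A minimal sketch of the verb lookup. The table is illustrative and deliberately small — many verbs are genuinely ambiguous, which is exactly what the confidence flag captures:

```bash
python3 - <<'PY'
VERB_LEVELS = {
    'list': 'remember', 'define': 'remember', 'identify': 'remember',
    'explain': 'understand', 'summarize': 'understand',
    'apply': 'apply', 'demonstrate': 'apply',
    'analyze': 'analyze', 'compare': 'analyze',
    'evaluate': 'evaluate', 'critique': 'evaluate',
    'design': 'create', 'develop': 'create',
}
AMBIGUOUS = {'understand', 'know', 'learn', 'appreciate'}

def classify(objective):
    verb = objective.split()[0].lower()
    if verb in AMBIGUOUS:
        return '?', 'low'          # mark "ambiguous -- verify"
    level = VERB_LEVELS.get(verb)
    return (level, 'high') if level else ('?', 'low')

print(classify('Evaluate competing bias-mitigation strategies'))
print(classify('Understand algorithmic bias'))
PY
```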
Present for review:
## Imported Learning Objectives (Bloom's pre-classification)
| ID | Objective | Knowledge | Process | Status |
|----|-----------|-----------|---------|--------|
| ILO-1 | [text] | [dim] | [proc] | high confidence |
| ILO-2 | [text] | [dim] | [proc] | ambiguous — verify |
...
All classifications are marked "imported-unverified." Run /learning-objectives
to verify Bloom's levels and check alignment with activities and assessments.
Create or update the project manifest.
CRITICAL — Manifest Integrity Rules:
- Preserve every other section verbatim; write only the fields this skill owns.
- Update the top-level updated timestamp.

Fields populated by /course-import:
- project_name — from course title
- context.modality — inferred from course structure (async = online, sync sessions = hybrid)
- context.timeline — from term/date info if available
- context.available_tech — from detected resource types (LMS, video, discussions, etc.)
- needs_analysis.task_analysis.job_tasks — from module-to-task mapping (Step 3)
- learning_objectives.ilos — from Bloom's inference (Step 4)
- learning_objectives.alignment_matrix — partial, from detected objective-assessment links

Import metadata: Add an import_metadata field at the root level:
{
"import_metadata": {
"source": "cartridge|paste|canvas-api",
"imported_at": "ISO-8601",
"source_lms": "canvas|blackboard|moodle|d2l|unknown",
"items_imported": {
"modules": 0,
"objectives": 0,
"assessments": 0,
"activities": 0,
"pages": 0
},
"quality_flags": 0
}
}
Write the manifest, then confirm:
## Import Complete
**Source:** {input method}
**Course:** {title}
**Imported:**
- {N} modules → {N} task analysis entries
- {M} learning objectives (Bloom's pre-classified)
- {P} assessments
- {Q} activities/discussions
- {R} content pages
**Quality triage:** {F} flags found
**Your manifest has been saved to `.idstack/project.json`.**
**Recommended next steps:**
1. `/course-quality-review` — Full evidence-based audit with QM standards and
CoI presence analysis
2. `/learning-objectives` — Verify Bloom's classifications and check
bidirectional alignment (objectives ↔ activities ↔ assessments)
3. `/needs-analysis` — Add organizational context and learner profile data
that can't be extracted from the course structure alone
The idstack manifest lives at .idstack/project.json. Schema version: 1.4.
This is the canonical schema. Every skill writes to its own section using the shapes documented here; all other sections must be preserved verbatim. There is one source of truth — this file. If the schema ever needs to change, edit templates/manifest-schema.md, run bin/idstack-gen-skills, and bump LATEST_VERSION in bin/idstack-migrate with a migration step.
Every skill that produces findings emits both:
- its own manifest section (the machine view — read by bin/idstack-status), and
- .idstack/reports/<skill>.md (the human view — read by the instructional designer).

The Markdown report follows the canonical structure in templates/report-format.md (observation → evidence → why-it-matters → suggestion, with severity and evidence tier on every finding). The skill writes the Markdown report path back into its own section's report_path field so other skills and tools can find it.
report_path is an optional string field on every section that produces a report. Empty string means the skill hasn't run yet, or ran in a mode that didn't produce a report.
1. Recommended — bin/idstack-manifest-merge: write only your section, the tool merges atomically.
# Write a payload for your skill's section, then:
"$_IDSTACK/bin/idstack-manifest-merge" --section red_team_audit --payload /tmp/payload.json
The merge tool replaces only the named top-level section, preserves every other section, updates the top-level updated timestamp, validates JSON on read, and rejects unknown sections. Use this in preference to inlining the full manifest in Edit operations.
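For this skill, a sketch of that flow (payload values illustrative):

```bash
cat > /tmp/payload.json <<'EOF'
{
  "source": "cartridge",
  "imported_at": "2025-06-01T12:00:00Z",
  "source_lms": "canvas",
  "items_imported": {"modules": 8, "objectives": 14, "assessments": 6,
                     "activities": 5, "pages": 21},
  "quality_flags": 3
}
EOF
"$_IDSTACK/bin/idstack-manifest-merge" --section import_metadata --payload /tmp/payload.json
```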
2. Fallback — manual full-manifest write: if the merge tool is unavailable for some reason, Read the full manifest, modify only your section, Write back. Preserve all other sections verbatim. Use the full schema below as reference.
| Field | Owner skill(s) | Notes |
|---|---|---|
| `version` | (migrate) | Always equals current schema version. Auto-managed by bin/idstack-migrate. |
| `project_name` | (any) | Set on first manifest creation. Don't overwrite once set. |
| `created` | (any, once) | ISO-8601 timestamp of first creation. Don't overwrite. |
| `updated` | (any) | ISO-8601 of last write. Updated automatically by bin/idstack-manifest-merge. |
| `context` | needs-analysis (initial) | Modality, timeline, class size, etc. Edited by skills that learn new context. |
| `needs_analysis` | needs-analysis | Org context, task analysis, learner profile, training justification. |
| `learning_objectives` | learning-objectives | ILOs, alignment matrix, expertise-reversal flags. |
| `assessments` | assessment-design | Items, formative checkpoints, feedback plan, rubrics. |
| `course_content` | course-builder | Generated modules, syllabus, content paths. |
| `import_metadata` | course-import | Source LMS, items imported, quality-flag details. |
| `export_metadata` | course-export | Export destination, items exported, readiness check. |
| `quality_review` | course-quality-review | QM standards, CoI presence, alignment audit, cross-domain checks, scores. |
| `red_team_audit` | red-team | Confidence score, dimensions, findings (with stable ids), top actions. |
| `accessibility_review` | accessibility-review | WCAG / UDL scores, violations, recommendations, quick wins. |
| `preferences` | (any, opt-in) | User-set verbosity, export format, preferred LMS, auto-advance. |
{
"version": "1.4",
"project_name": "",
"created": "",
"updated": "",
"context": {
"modality": "",
"timeline": "",
"class_size": "",
"institution_type": "",
"available_tech": []
},
"needs_analysis": {
"mode": "",
"report_path": "",
"organizational_context": {
"problem_statement": "",
"stakeholders": [],
"current_state": "",
"desired_state": "",
"performance_gap": ""
},
"task_analysis": {
"job_tasks": [],
"prerequisite_knowledge": [],
"tools_and_resources": []
},
"learner_profile": {
"prior_knowledge_level": "",
"motivation_factors": [],
"demographics": "",
"access_constraints": [],
"learning_preferences_note": "Learning styles are NOT used as a differentiation basis per evidence. Prior knowledge is the primary differentiator."
},
"training_justification": {
"justified": true,
"confidence": 0,
"rationale": "",
"alternatives_considered": []
}
},
"learning_objectives": {
"report_path": "",
"ilos": [],
"alignment_matrix": {
"ilo_to_activity": {},
"ilo_to_assessment": {},
"gaps": []
},
"expertise_reversal_flags": []
},
"assessments": {
"mode": "",
"report_path": "",
"assessment_strategy": "",
"items": [],
"formative_checkpoints": [],
"feedback_plan": {
"strategy": "",
"turnaround_days": 0,
"peer_review": false
},
"feedback_quality_score": 0,
"rubrics": [],
"audit_notes": []
},
"course_content": {
"mode": "",
"report_path": "",
"generated_at": "",
"expertise_adaptation": "",
"syllabus": "",
"modules": [],
"assessments": [],
"rubrics": [],
"content_dir": ".idstack/course-content/",
"generated_files": [],
"build_timestamp": "",
"placeholders_used": [],
"recommended_generation_targets": []
},
"import_metadata": {
"source": "",
"report_path": "",
"imported_at": "",
"source_lms": "",
"source_cartridge": "",
"source_size_bytes": 0,
"schema": "",
"items_imported": {
"modules": 0,
"objectives": 0,
"module_objectives": 0,
"assessments": 0,
"activities": 0,
"pages": 0,
"rubrics": 0,
"quizzes": 0,
"discussions": 0
},
"quality_flags": 0,
"quality_flag_details": []
},
"export_metadata": {
"report_path": "",
"exported_at": "",
"format": "",
"destination": "",
"items_exported": {
"modules": 0,
"pages": 0,
"assignments": 0,
"quizzes": 0,
"discussions": 0
},
"failed_items": [],
"notes": "",
"readiness_check": {
"quality_score": 0,
"quality_reviewed": false,
"red_team_critical": 0,
"red_team_reviewed": false,
"accessibility_critical": 0,
"accessibility_reviewed": false,
"verdict": ""
}
},
"quality_review": {
"report_path": "",
"last_reviewed": "",
"qm_standards": {
"course_overview": {"status": "", "findings": []},
"learning_objectives": {"status": "", "findings": []},
"assessment": {"status": "", "findings": []},
"instructional_materials": {"status": "", "findings": []},
"learning_activities": {"status": "", "findings": []},
"course_technology": {"status": "", "findings": []},
"learner_support": {"status": "", "findings": []},
"accessibility": {"status": "", "findings": []}
},
"coi_presence": {
"teaching_presence": {"score": 0, "findings": []},
"social_presence": {"score": 0, "findings": []},
"cognitive_presence": {"score": 0, "findings": []}
},
"alignment_audit": {"findings": []},
"cross_domain_checks": {
"cognitive_load": {"score": 0, "flags": []},
"multimedia_principles": {"score": 0, "flags": []},
"feedback_quality": {"score": 0, "flags": []},
"expertise_reversal": {"score": 0, "flags": []}
},
"overall_score": 0,
"score_breakdown": {
"qm_structural": 0,
"coi_presence": 0,
"constructive_alignment": 0,
"cross_domain_evidence": 0
},
"quick_wins": [],
"recommendations": [],
"review_history": []
},
"red_team_audit": {
"updated": "",
"confidence_score": 0,
"focus": "",
"report_path": "",
"findings_summary": {"critical": 0, "warning": 0, "info": 0},
"dimensions": {
"alignment": {"score": "", "findings": []},
"evidence": {"score": "", "mode": "", "findings": []},
"cognitive_load": {"score": "", "findings": []},
"personas": {"score": "", "findings": []},
"prerequisites": {"score": "", "findings": []}
},
"top_actions": [],
"limitations": [],
"fixes_applied": [],
"fixes_deferred": []
},
"accessibility_review": {
"updated": "",
"report_path": "",
"score": {"overall": 0, "wcag": 0, "udl": 0},
"wcag_violations": [],
"udl_recommendations": [],
"quick_wins": []
},
"preferences": {
"verbosity": "normal",
"export_format": "",
"preferred_lms": "",
"auto_advance_pipeline": false
}
}
These document the shape of array elements and dictionary values that the canonical schema leaves as [] or {}. Skills should produce items in these shapes; downstream skills can rely on them.
learning_objectives.alignment_matrix.ilo_to_activity — keyed by ILO id, values are arrays of activity names:
{ "ILO-1": ["Module 1 case study", "Discussion 2"], "ILO-2": [] }
learning_objectives.alignment_matrix.ilo_to_assessment — same shape, values are arrays of assessment titles.
learning_objectives.alignment_matrix.gaps[] — each item:
{
"ilo": "ILO-1",
"type": "untested|orphaned|underspecified|bloom_mismatch",
"description": "ILO-1 has no matching assessment in the active modules.",
"severity": "critical|warning|info"
}
learning_objectives.ilos[] — each item:
{
"id": "ILO-1",
"statement": "Analyze competitive forces in...",
"blooms_level": "analyze",
"blooms_confidence": "high|medium|low"
}
assessments.items[] — each item:
{
"id": "A-1",
"type": "quiz|discussion|rubric|peer_review|gate|...",
"title": "Module 1 Quiz",
"weight": 5,
"ilos_measured": ["ILO-1", "ILO-3"],
"rubric_present": true,
"elaborated_feedback": false,
"alignment_status": "weak|moderate|strong"
}
assessments.rubrics[] — each item:
{
"id": "rubric-1",
"title": "SM Project Rubric",
"criteria": [{"name": "...", "blooms_level": "...", "weight": 0}],
"applies_to": ["A-3"]
}
import_metadata.quality_flag_details[] — each item (replaces the legacy _import_quality_flags root field that sometimes appeared in the wild):
{
"key": "orphan_module_8",
"description": "Module 8 wiki content exists in the cartridge but is not referenced in <organizations>.",
"severity": "warning|critical|info",
"evidence": "Optional citation tag, e.g. [Alignment-1] [T5]"
}
red_team_audit.dimensions.<name>.findings[] — each item (matches the <dimension>-<n> id convention from the red-team orchestrator):
{
"id": "alignment-1",
"description": "ILO-2 (vision/mission) has no matching assessment.",
"module": "Module 4",
"severity": "critical|warning|info"
}
accessibility_review.wcag_violations[] — each item:
{
"id": "wcag-1",
"criterion": "1.3.1 Info and Relationships",
"level": "A|AA|AAA",
"description": "All cartridge HTML pages lack <h1> elements.",
"affected": ["page1.html", "page2.html"],
"severity": "critical|warning|info"
}
accessibility_review.udl_recommendations[] — each item:
{
"id": "udl-1",
"principle": "engagement|representation|action_expression",
"description": "Add transcripts to all videos.",
"status": "fully_met|partial|not_met"
}
quality_review.qm_standards.<standard>.findings[], quality_review.alignment_audit.findings[], quality_review.cross_domain_checks.<check>.flags[], and other findings arrays — each item:
{
"id": "<dimension>-<n>",
"description": "...",
"evidence": "[Domain-N] [TX]",
"severity": "critical|warning|info"
}
needs_analysis.mode, assessments.mode, and course_content.mode record which operating mode the corresponding skill ran in. Audit-style modes are triggered when import_metadata.source ∈ {cartridge, scorm, canvas-api} and the relevant section is non-empty (skill-specific check).
Allowed values per skill:
needs_analysis.mode: "design-new" or "audit-existing"assessments.mode: "Mode 1", "Mode 2", or "Mode 3" (Mode 1 = full upstream data, Mode 2 = ILOs-from-scratch, Mode 3 = audit existing assessments)course_content.mode: "build-new" or "gap-fill"Empty string means the skill hasn't run yet or didn't record the mode (legacy manifests).
assessments.audit_notes[] — only populated in Mode 3. Records which audit findings the user chose to act on:
{
"target_id": "A-3",
"action": "applied|deferred|declined",
"description": "Rubric criterion for ILO-2 added: 'Synthesis depth (1-4 scale)'.",
"reason": "Optional — only meaningful for deferred/declined."
}
course_content.recommended_generation_targets[] — populated in gap-fill mode. Lists artifacts upstream skills flagged as missing, with status:
{
"description": "Discussion rubric for Module 5",
"source": "red-team:alignment-3 | quality-review:learner_support-2 | user-request",
"status": "generated|deferred|declined",
"output_path": "Optional — set when status=generated, points to the generated file."
}
After the skill workflow completes successfully, log the session to the timeline:
"$_IDSTACK/bin/idstack-timeline-log" '{"skill":"course-import","event":"completed"}'
Replace the JSON above with actual data from this session. Include skill-specific fields where available (scores, counts, flags). Log synchronously (no background &).
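For example (counts illustrative — use the session's real numbers):

```bash
"$_IDSTACK/bin/idstack-timeline-log" \
  '{"skill":"course-import","event":"completed","source":"cartridge","modules":8,"objectives":14,"quality_flags":3}'
```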
If you discover a non-obvious project-specific quirk during this session (LMS behavior, import format issue, course structure pattern), also log it as a learning:
"$_IDSTACK/bin/idstack-learnings-log" '{"skill":"course-import","type":"operational","key":"SHORT_KEY","insight":"DESCRIPTION","confidence":8,"source":"observed"}'