From xapi-engineer
This skill should be used when the user asks to "analyze a prompt", "design assessment prompts", "identify required materials for xAPI", "map prompt to xAPI structure", "optimize test question prompts", or needs to understand what context materials are needed to generate valid xAPI statements from educational assessment scenarios.
```shell
npx claudepluginhub tim80411/ai-agent-extension --plugin xapi-engineer
```

This skill uses the workspace's default tool permissions.
Guide the analysis of prompts and identification of required materials for generating xAPI statements in educational assessment contexts.
To generate valid xAPI statements, identify and collect these material categories:
| xAPI Component | Required Materials | Source |
|---|---|---|
| Actor | User ID, name, account info | Authentication system |
| Verb | Action type, interaction mode | Assessment design |
| Object | Question/quiz metadata, content | Content management |
| Result | Answer, score, duration | Runtime collection |
| Context | Course hierarchy, session info | LMS context |
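The mapping above can be sketched as a single assembly step: each material category feeds one top-level key of the statement. A minimal sketch in Python; all IDs, URIs, and values below are hypothetical examples.

```python
# Sketch: assembling an xAPI statement from the five material categories.
# Every URI and account value here is a hypothetical placeholder.

def build_statement(actor, verb, obj, result, context):
    """Combine collected materials into one xAPI statement dict."""
    return {
        "actor": actor,      # from the authentication system
        "verb": verb,        # from the assessment design
        "object": obj,       # from the content management system
        "result": result,    # collected at runtime
        "context": context,  # from the LMS
    }

statement = build_statement(
    actor={"account": {"homePage": "https://lms.example.com", "name": "user-42"}},
    verb={"id": "http://adlnet.gov/expapi/verbs/answered",
          "display": {"en-US": "answered"}},
    obj={"id": "https://lms.example.com/questions/q1"},
    result={"response": "completed", "duration": "PT12S"},
    context={"contextActivities": {
        "parent": [{"id": "https://lms.example.com/quizzes/quiz-1"}]}},
)
```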
Determine which assessment scenario the prompt describes:
| Scenario | Characteristics | Primary Verb |
|---|---|---|
| Quiz attempt | Multiple questions, scored | attempted → completed |
| Single question | One interaction | answered |
| Course progress | Module completion | progressed |
| Certification | Pass/fail criteria | passed/failed |
| Practice | No scoring | experienced |
Identify learner identification needs:
Required materials: a stable user ID or account identifier (homePage + account name) plus display name, sourced from the authentication system.
Analysis questions: Is the learner identified by an LMS account, an email address (mbox), or anonymously? Is the identifier stable across sessions?
Match the described action to appropriate verb:
Decision tree:
- Is it a question response? → answered
- Is it starting an activity? → attempted / launched
- Is it finishing an activity? → completed
- Is there a pass/fail criterion? → passed / failed
- Is it passive consumption? → experienced
- Is it progress without completion? → progressed
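The decision tree above can be sketched as a small selector function. The scenario flags are hypothetical names, and one ordering assumption is made: the pass/fail check runs before the plain completion check, so a scored finish yields passed/failed rather than completed.

```python
# Sketch of the verb decision tree; flag names are hypothetical.

def choose_verb(is_question=False, is_start=False, is_finish=False,
                has_pass_fail=False, passed=None, is_passive=False,
                in_progress=False):
    if is_question:
        return "answered"
    if is_start:
        return "attempted"  # or "launched", per assessment design
    if has_pass_fail and is_finish:
        return "passed" if passed else "failed"
    if is_finish:
        return "completed"
    if is_passive:
        return "experienced"
    if in_progress:
        return "progressed"
    raise ValueError("scenario not recognized")
```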
Extract activity/question metadata:
For assessments: activity ID (IRI), title, description, and mastery score from the assessment configuration.
For interactions: interaction type (e.g. choice), question text, available choices, and the correct response pattern.
Identify what outcome data to collect:
| Result Field | When Required | Data Source |
|---|---|---|
| score.scaled | Scored assessments | Calculation |
| score.raw/min/max | Detailed scoring | Assessment config |
| success | Pass/fail activities | Threshold comparison |
| completion | Trackable activities | Completion logic |
| response | Question interactions | User input |
| duration | Timed activities | Timer |
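The Result fields in the table can be derived from raw scoring data in one step. A minimal sketch, assuming a linear scaled score and a configurable mastery threshold; the function name and defaults are hypothetical.

```python
# Sketch: deriving xAPI Result fields from raw scoring data.
# mastery=0.8 mirrors the "80% is passing" example; it is configurable.

def build_result(raw, min_score, max_score, mastery=0.8,
                 response=None, seconds=0):
    # xAPI constrains score.scaled to the range -1..1
    scaled = (raw - min_score) / (max_score - min_score)
    return {
        "score": {"scaled": round(scaled, 4), "raw": raw,
                  "min": min_score, "max": max_score},
        "success": scaled >= mastery,   # threshold comparison
        "completion": True,             # from completion logic
        "response": response,           # user input, if a question
        "duration": f"PT{seconds}S",    # ISO 8601 duration from timer
    }
```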
Identify contextual relationships:
Hierarchical context: parent and grouping activities (question → quiz → course), expressed via contextActivities.
Session context: a registration UUID identifying the attempt, shared by every statement in that attempt.
Collect materials in two phases:
Before the quiz starts (static materials): activity IDs, titles, question text, choices, correct answers, mastery score.
At runtime (dynamic materials): learner account, responses, timestamps, durations, computed scores.
Prompt: "Track when a student answers a multiple choice question about xAPI verbs"
Analysis: a single-question interaction → verb answered; the object is a choice-type interaction activity.
Required materials:
Static:
- Question ID: "https://lms.example.com/questions/xapi-verbs-q1"
- Question text: "Which verb indicates completion?"
- Choices: ["completed", "attempted", "launched", "terminated"]
- Correct answer: "completed"
Dynamic:
- Student account: { homePage, name }
- Response: student's selection
- Timestamp: when answered
- Duration: time to answer
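Putting the static and dynamic materials above together, the resulting statement can be sketched as a Python dict. The learner account, success flag, duration, and timestamp values are hypothetical; the object definition uses the standard cmi.interaction activity type with interactionType "choice".

```python
# Sketch: the full "answered" statement for the multiple-choice example.
# Account name, duration, and timestamp are hypothetical runtime values.

CHOICES = ["completed", "attempted", "launched", "terminated"]

statement = {
    "actor": {"account": {"homePage": "https://lms.example.com",
                          "name": "student-001"}},
    "verb": {"id": "http://adlnet.gov/expapi/verbs/answered",
             "display": {"en-US": "answered"}},
    "object": {
        "id": "https://lms.example.com/questions/xapi-verbs-q1",
        "definition": {
            "type": "http://adlnet.gov/expapi/activities/cmi.interaction",
            "interactionType": "choice",
            "description": {"en-US": "Which verb indicates completion?"},
            "choices": [{"id": c, "description": {"en-US": c}} for c in CHOICES],
            "correctResponsesPattern": ["completed"],
        },
    },
    "result": {"response": "completed", "success": True, "duration": "PT8S"},
    "timestamp": "2024-05-01T09:30:00Z",
}
```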
Prompt: "Generate xAPI when learner finishes the xAPI fundamentals course"
Analysis: course-level completion → verb completed; the object is the course activity.
Required materials:
Static:
- Course ID: "https://lms.example.com/courses/xapi-fundamentals"
- Course title
- Course description
Dynamic:
- Learner account
- Total duration
- Completion timestamp
Prompt: "Track quiz results where 80% is passing"
Analysis: a scored assessment with a mastery criterion → verb passed or failed; the result needs a scaled score and a success flag.
Required materials:
Static:
- Assessment ID
- Assessment metadata
- Mastery score: 0.8
- Maximum possible score
Dynamic:
- Learner account
- Raw score achieved
- Calculated scaled score
- Pass/fail determination
- Total duration
Context hierarchy for nested tracking:
```
Quiz (parent)
└── Question (object)
```
Statement structure:
```json
{
  "object": { "id": "question-uri" },
  "context": {
    "contextActivities": {
      "parent": [{ "id": "quiz-uri" }]
    }
  }
}
```
Track individual attempts with registration:
```json
{
  "context": {
    "registration": "attempt-uuid"
  }
}
```
Use the same registration value for all statements within one attempt.
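Sharing one registration across an attempt can be sketched as follows; the helper names are hypothetical, and the UUID comes from Python's standard library.

```python
# Sketch: one registration UUID shared by every statement in an attempt.
import uuid

def start_attempt():
    """Mint a registration UUID when the attempt begins."""
    return str(uuid.uuid4())

def with_registration(statement, registration):
    """Attach the attempt's registration to a statement's context."""
    statement.setdefault("context", {})["registration"] = registration
    return statement

attempt = start_attempt()
s1 = with_registration(
    {"verb": {"id": "http://adlnet.gov/expapi/verbs/attempted"}}, attempt)
s2 = with_registration(
    {"verb": {"id": "http://adlnet.gov/expapi/verbs/completed"}}, attempt)
```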
Support internationalization:
```json
{
  "object": {
    "definition": {
      "name": {
        "en-US": "Quiz Title",
        "zh-TW": "測驗標題"
      }
    }
  }
}
```
Before generating statements, verify:
- The actor has a valid identifier (account or mbox)
- The verb IRI matches the analyzed scenario
- The object ID is an IRI with definition metadata attached
- All required result fields for the scenario are collected
- Context (parent activities, registration) is attached where needed
references/material-templates.md - Templates for common scenarios
references/analysis-examples.md - Detailed prompt analysis examples