Skilluminator
A Claude skill that analyzes your M365 work activity to discover which of your repeated work patterns are the best candidates for AI automation — and generates a visual dashboard of the findings.
npx claudepluginhub serenaxxiee/skilluminator --plugin skilluminator

This skill uses the workspace's default tool permissions.
Works with WorkIQ M365 data. Compatible with any role.
Skilluminator runs a single analysis pass over your M365 activity for a specified time period. It does NOT accumulate data across runs — every run is a fresh analysis grounded entirely in what WorkIQ returns.
Run these queries against WorkIQ for the user's specified time range. Replace {TIME_RANGE} with the actual period (e.g., "past 7 days", "past 30 days", "in March 2026").
Email:
Meetings:
4. "What recurring meetings did I attend {TIME_RANGE}? For each, describe the typical agenda type, attendee count, and duration."
5. "How much total meeting time {TIME_RANGE}, broken down by type (1:1, team sync, external, all-hands)?"
6. "Which meeting types happen on a regular cadence with roughly the same people?"

Teams:
7. "What Teams channels or chats was I most active in {TIME_RANGE}? What were the recurring topics?"
8. "Are there questions I get asked repeatedly in Teams chats? What topics come up most?"
9. "What types of information do I most frequently share or look up in Teams {TIME_RANGE}?"

Documents:
10. "What types of documents did I create, edit, or review most often {TIME_RANGE}?"
11. "Are there documents I update on a regular cadence (weekly/monthly reports, trackers, dashboards)?"
12. "What SharePoint sites or OneDrive folders do I access most frequently?"

Cross-source:
13. "Across email, meetings, and Teams, what topics consumed the most of my time {TIME_RANGE}?"
14. "Are there workflows that repeat across sources (e.g., receive email -> schedule meeting -> create document)?"
15. "What tasks do multiple people on my team do independently that could potentially be standardized?"
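Substituting the user's period into these templates is plain string replacement. A minimal Python sketch, with the query list abridged to three of the fifteen:

```python
# Abridged query templates (three of the fifteen listed above).
QUERY_TEMPLATES = [
    "What recurring meetings did I attend {TIME_RANGE}?",
    "What Teams channels or chats was I most active in {TIME_RANGE}?",
    "What types of documents did I create, edit, or review most often {TIME_RANGE}?",
]

def render_queries(time_range: str) -> list[str]:
    """Fill in the user's period, e.g. 'past 7 days' or 'in March 2026'."""
    return [q.replace("{TIME_RANGE}", time_range) for q in QUERY_TEMPLATES]
```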
Query handling:
Meeting attendance verification:
Calendar invites do NOT equal attendance. WorkIQ can distinguish "invited" from "attended" using the HasUserAttended flag in meeting telemetry. Before building any meeting-based pattern, run a follow-up query:
"Of these meetings, which ones did I actually attend (joined the call or was physically present) in {TIME_RANGE}? [list meeting titles]"
Only include meetings with confirmed attendance in patterns. If attendance can't be confirmed, note it as "attendance unconfirmed" and do NOT present it as a high-confidence pattern. This prevents false patterns like "Large Forum Passive Attendance" for meetings the user was invited to but never joined.
For each WorkIQ response, extract structured signals:
source: email | meeting | teams | document
title: short description of the repeating behavior
participants: roles involved (PM, Engineering Lead, etc.) — NEVER names
frequency: times per week as reported by WorkIQ
timeSpentHours: hours spent on this behavior in the analyzed period, as reported by WorkIQ
keywords: key terms from the WorkIQ response
rawExcerpt: exact quote from WorkIQ response (anonymized)
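The signal record above can be sketched as a Python dataclass. Field names follow the list; the concrete types are assumptions:

```python
from dataclasses import dataclass, field
from typing import Literal

@dataclass
class Signal:
    """One structured signal extracted from a WorkIQ response."""
    source: Literal["email", "meeting", "teams", "document"]
    title: str                # short description of the repeating behavior
    participants: list[str]   # roles only (e.g. "PM"), NEVER names
    frequency: float          # times per week, as reported by WorkIQ
    timeSpentHours: float     # hours spent in the analyzed period
    keywords: list[str] = field(default_factory=list)
    rawExcerpt: str = ""      # anonymized quote from the WorkIQ response
```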
Rules:
Tag each signal as manual, automated, or unclear. For example, "meeting notes are generated after meetings" is likely automated by Teams Copilot — tag as automated. "User writes status update emails" is manual. Only manual signals represent real automation opportunities.

Group signals into patterns when they share 2+ of:
Each pattern requires at least 2 signals AND those signals must share at least 2 of the 4 clustering criteria above. A single signal is not a pattern — drop it silently. The user will have a chance to surface missing work during the Surface & Reflect step.
Before scoring, check every pattern against M365 built-in automations that already handle the work. If a pattern is substantially covered by an existing tool, it is NOT a good skill candidate — even if it scores high on automation and frequency.
Known M365 built-in automations to check:
| Built-in Feature | What It Already Does | Patterns to Flag |
|---|---|---|
| Teams Copilot Meeting Recap | Auto-generates meeting summaries, action items, and follow-ups from transcripts | Meeting summarization, action item extraction, meeting notes distribution |
| Outlook Focused Inbox | Separates important email from low-priority/automated notifications | Email triage (partial — still applies for fine-grained classification) |
| Copilot in Word/PowerPoint | Drafts, summarizes, and reformats documents from prompts | Document drafting (partial — applies for first-draft generation) |
| Viva Insights | Tracks meeting load, focus time, and collaboration patterns | Time-tracking and workload visibility |
| Teams Copilot Chat Recap | Summarizes missed Teams chat conversations | Chat summarization, catching up on threads |
| Loop Components | Real-time collaborative task tracking in Teams/Outlook | Shared task/action tracking across channels |
How to apply:
Apply the -10 "Already partially automated" value deduction AND note what the gap is that a skill would fill.

Important: When WorkIQ reports behaviors like "extracts meeting outcomes" or "summarizes chat threads," determine whether the USER does this manually or whether an M365 tool does it automatically. WorkIQ describes what happens in the user's workflow — it does not distinguish between manual and automated steps. A signal that says "meeting notes are produced after meetings" could mean the user writes them OR that Teams Copilot generates them. Do NOT assume manual effort without evidence.
Before scoring, drop patterns that are too thin to be meaningful. A pattern must meet at least ONE of:
Patterns that fail all four criteria are noise — normal work overhead, not candidates for automation. Drop them silently.
Measures how rule-based and repetitive the pattern is. High score = an AI agent can do this reliably.
| Rubric Item | Points | Criteria |
|---|---|---|
| Clear trigger | +20 | The pattern starts from a specific, observable event (email arrives, meeting ends, notification fires) |
| Fixed output format | +20 | The output is always the same structure (template, summary, classification, status update) |
| Same steps every time | +20 | The process doesn't branch based on subjective judgment |
| No sensitive sign-off | +15 | Doesn't require human approval for compliance, legal, or privacy reasons |
| Single source | +10 | Only involves one M365 tool (email only, or docs only) |
| High volume | +10 | Occurs 20+ times per week in the analyzed period |
| Deductions | | |
| Requires nuanced judgment | -20 | Strategic decisions, creative work, sensitive negotiations |
| Involves external parties | -10 | Customers, partners, vendors with unpredictable inputs |
| Multi-system without API | -10 | Requires manual copy-paste between disconnected tools |
| Low volume (<5/week) | -10 | Too infrequent to justify automation investment |
Ceiling rule: No pattern scores above 95. Reserve at least 5 points for human review overhead.
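The rubric sums directly to the score. A minimal sketch with the 95-point ceiling applied:

```python
def automation_score(rubric: dict[str, int]) -> int:
    """Sum rubric awards (deductions are negative), clamped to [0, 95]."""
    return min(max(sum(rubric.values()), 0), 95)

# All positive items awarded: 20+20+20+15+10+10 = 95, already at the ceiling.
full_marks = {
    "clearTrigger": 20, "fixedOutput": 20, "sameSteps": 20,
    "noSensitiveSignoff": 15, "singleSource": 10, "highVolume": 10,
}
```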
Measures how much this pattern costs you and how much automating it would help.
| Rubric Item | Points | Criteria |
|---|---|---|
| High time cost | +25 | Takes 2+ hours per week in the analyzed period |
| High frequency | +20 | Occurs 10+ times per week |
| Blocks others | +20 | Other people are waiting on this output before they can proceed |
| Critical workflow | +15 | Part of a workflow that directly impacts delivery, quality, or compliance |
| Pain expressed | +10 | User or WorkIQ data indicates frustration, overload, or complaints about this task |
| Deductions | | |
| Low impact if skipped | -15 | Nothing bad happens if this is done a day late |
| Already partially automated | -10 | Tools already handle some of this |
| Affects only one person | -5 | No downstream impact on others |
Composite = (Automation × 0.55) + (Value × 0.45)
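As code, the weighting is a one-liner:

```python
def composite_score(automation: float, value: float) -> float:
    """Weighted blend: automation dominates slightly (0.55 vs 0.45)."""
    return automation * 0.55 + value * 0.45
```

For example, automation 85 and value 78 give 46.75 + 35.1 = 81.85, reported to one decimal as 81.9.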
Score ALL patterns that passed the filter and relevance threshold. Generate a skill candidate card for every pattern. Use the composite score to rank and tier them:
Est. hours saved/week = timeSpentHours × savePct
Where savePct is determined by automation score:
Transparency note: savePct is a heuristic, not a measured outcome. timeSpentHours comes from WorkIQ data for the analyzed period. Always show the formula so users can verify. Label estimates clearly: "Estimated X hrs/week saveable (based on Y hrs observed, Z% automation estimate)."
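A sketch of the estimate in Python. The actual savePct bands are not spelled out here, so the thresholds below are illustrative assumptions (chosen so that an automation score of 85 reproduces the 70% implied by the sample output's 4.5 hrs observed and 3.15 hrs estimated):

```python
def save_pct(automation_score: int) -> float:
    """Hypothetical bands mapping automation score to the fraction saveable."""
    if automation_score >= 80:
        return 0.70
    if automation_score >= 60:
        return 0.50
    return 0.30

def est_hours_saved(time_spent_hours: float, automation_score: int) -> float:
    """Est. hours saved/week = timeSpentHours x savePct."""
    return time_spent_hours * save_pct(automation_score)
```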
Write results to patterns.json:
{
"analyzedAt": "<ISO timestamp>",
"timeRange": "<what was analyzed, e.g. 'past 7 days'>",
"weekOf": "<ISO date>",
"signalCount": 0,
"queriesRun": 15,
"queryErrors": [],
"patterns": [
{
"patternId": "kebab-case-id",
"label": "Human Readable Name",
"sources": ["email", "meeting"],
"signalCount": 3,
"occurrenceCount": 12,
"participantCount": 3,
"timeSpentHours": 4.5,
"automationScore": 85,
"automationRubric": {
"clearTrigger": 20,
"fixedOutput": 20,
"sameSteps": 20,
"noSensitiveSignoff": 15,
"singleSource": 10,
"highVolume": 0,
"deductions": 0,
"notes": "Trigger: weekly meeting ends. Output: structured notes. No deductions."
},
"valueScore": 78,
"valueRubric": {
"timeCost": 25,
"frequency": 20,
"blocksOthers": 0,
"criticalWorkflow": 15,
"painExpressed": 0,
"deductions": -5,
"notes": "4.5 hrs/week is significant. Doesn't block others directly. -5: affects only one person."
},
"compositeScore": 81.9,
"tier": "strong",
"candidateSkillName": "kebab-case-skill-name",
"estHoursSavedPerWeek": 3.15,
"llmRationale": "Plain-language explanation of why this pattern was identified and scored this way"
}
],
"filteredPatterns": [
{
"label": "Meeting Summarization",
"reason": "Already automated by Teams Copilot Meeting Recap"
}
],
"unflaggedPatterns": [
{
"label": "Pattern the user did not select as a pain point",
"reason": "User said: not a real time sink"
}
]
}
Sort patterns by compositeScore descending. Include all scored patterns.
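A minimal consumer-side sketch of that contract, sorting the scored patterns for display:

```python
def ranked_patterns(report: dict) -> list[dict]:
    """Return scored patterns sorted by compositeScore, descending."""
    return sorted(report["patterns"],
                  key=lambda p: p["compositeScore"], reverse=True)
```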
For every user-flagged pattern, generate an output card. Include the tier label based on composite score:
═══════════════════════════════════════════════════════
SKILL CANDIDATE: [skill-name]
Pattern: [Label] Rank: #N
Tier: [STRONG CANDIDATE | MODERATE CANDIDATE | WORTH EXPLORING]
═══════════════════════════════════════════════════════
SCORES
Automation: NN/100
[list each rubric item that contributed, e.g. "+20 clear trigger (meeting ends)"]
Value: NN/100
[list each rubric item that contributed]
Composite: NN.N/100
───────────────────────────────────────────────────────
WHY THIS MATTERS
WHAT: [What does this skill do for you? Plain English, one sentence.]
WHY: [Why should you care? Name the specific pain it eliminates.]
FUTURE: [What does your work look like with this skill? Before/after.]
───────────────────────────────────────────────────────
THE MATH (transparent)
Time observed: X.X hrs in [time range]
Automation estimate: NN% saveable (based on auto score NN)
Est. savings: ~X.X hrs/week
Formula: timeSpentHours(X.X) × savePct(NN%) = X.X hrs
───────────────────────────────────────────────────────
TRIGGER EXAMPLES
- [specific observable event from the data]
- [another specific event]
SAMPLE CLAUDE PROMPTS
> [prompt 1 the user could try right now]
> [prompt 2]
SKILL SKELETON
TRIGGER: [specific event]
INPUT: [what data to provide]
STEPS: 1. [step] 2. [step] 3. [step]
OUTPUT: [what the skill produces]
RECOMMENDATION
[For STRONG: "Build this skill. Here's the skeleton."
For MODERATE: "Consider a lighter approach first — e.g., a template,
a Power Automate rule, or a pinned FAQ — before building a full skill."
For WORTH EXPLORING: "This is hard to fully automate. Try: [specific
partial solution — template, checklist, delegation, process change]."]
TRY IT NOW
[One specific action the user can take today. Be concrete.]
═══════════════════════════════════════════════════════
After presenting scored candidates, offer to build any candidate the user wants:
"Want to build any of these skills? Pick a candidate and I'll run
/skill-creator to scaffold it."
If the user picks a candidate, invoke /skill-creator with context from the skill candidate card:
This closes the loop: Skilluminator discovers what to automate → user validates → Skill Creator builds it.