Gathers and organizes evidence for performance reviews: goal completion, peer feedback, development progress, scope changes, values alignment. Structures the evidence along the org's framework dimensions, drawn from Notion and markdown files. Surfaces gaps without assigning ratings.
> **Principle: "You are responsible."** This skill gathers and organises evidence. Rating decisions and development assessments are the manager's alone.
Principle: "You are responsible." This skill gathers and organises evidence. Rating decisions and development assessments are the manager's alone.
Helps managers prepare evidence-based assessments for performance review cycles. The org's performance framework dimensions measure what was achieved and how the person developed. Organizational values measure how they showed up while doing it.
Load the org's performance framework from manager-context/performance-framework.md (created during /setup). This defines:
If manager-context/performance-framework.md doesn't exist, ask the manager to run /setup first.
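The load-or-prompt step above can be sketched as follows. This is a minimal illustration, not part of the skill itself; the function name and error message are assumptions.

```python
from pathlib import Path

def load_framework(org_path: str = "manager-context/performance-framework.md") -> str:
    """Return the org framework text; raise so the caller can prompt for /setup."""
    path = Path(org_path)
    if not path.exists():
        raise FileNotFoundError(
            f"{org_path} not found -- ask the manager to run /setup first."
        )
    return path.read_text(encoding="utf-8")
```

The caller catches the error and relays the "run /setup first" message rather than proceeding without a framework.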
If any MCP connector is unavailable, follow the connector unavailability protocol in ../../references/operating-principles.md.
Determine who to prepare for:
Determine the review period:
For the target team member, read from manager-context/team/[name].md:
Also load:
- manager-context/performance-framework.md -- for org-specific framework dimensions and rating descriptors (falls back to ../../references/performance-framework.md defaults)
- manager-context/management-framework.md -- for org-specific management dimensions (falls back to ../../references/management-framework.md defaults)
- ../../references/values-guide.md -- for values definitions and signal guidance
- manager-context/values.md -- for the organization's specific values

For each dimension and sub-dimension in the org's performance framework (from manager-context/performance-framework.md), gather evidence from connected sources.
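The fallback behaviour for these context files could look like the sketch below. The mapping and helper name are illustrative assumptions; only the file paths come from the skill.

```python
from pathlib import Path
from typing import Optional

# Context files and their shipped defaults (None = no default; the org file is required).
CONTEXT_FALLBACKS = {
    "manager-context/performance-framework.md": "../../references/performance-framework.md",
    "manager-context/management-framework.md": "../../references/management-framework.md",
    "manager-context/values.md": None,
}

def resolve_context_file(
    org_path: str, fallback: Optional[str], root: Path = Path(".")
) -> Optional[Path]:
    """Prefer the org-specific file; otherwise fall back to the shipped default."""
    candidate = root / org_path
    if candidate.exists():
        return candidate
    if fallback is not None:
        default = root / fallback
        if default.exists():
            return default
    return None
```

A `None` result for a required file (like manager-context/values.md) is what triggers the "run /setup first" prompt.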
For each sub-dimension:
Common evidence patterns by dimension type:
For dimensions that are hardest to assess digitally (e.g., behavioural growth, leadership presence), explicitly flag that the manager's direct observations carry more weight.
Values are the "how" -- how this person delivered their results and showed up for the team. Search for evidence across the organization's values (from manager-context/values.md). See ../../references/values-guide.md for guidance on finding value signals.
For each value defined in manager-context/values.md, search for evidence using the signal guidance stored there. Common evidence sources by value type:
Collaboration / teamwork values:
Ambition / ownership values:
Innovation / resourcefulness values:
Transparency / communication values:
Care / wellbeing values:
For each value, compile evidence as observations (not judgments):
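As a sketch of what "observations, not judgments" can look like in structured form (the field names, example text, and helper are illustrative, not prescribed by the skill):

```python
# A values observation records a concrete event plus its provenance -- never a rating.
# The content below is a hypothetical example.
observation = {
    "value": "Collaboration",
    "observation": "Paired with two teammates to unblock the release branch",
    "source": "Slack #eng-release, during the review period",
}

judgment = "Great team player"  # what we deliberately do NOT record

def is_observation(entry: dict) -> bool:
    """Minimal check: an observation names a concrete event and its source."""
    return bool(entry.get("observation")) and bool(entry.get("source"))
```

Recording provenance with each observation lets the manager verify any item before using it.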
Search Slack for recognition this person received during the review period:
For each dimension, assess evidence strength:
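One way to bucket evidence strength per dimension is sketched below. The thresholds and labels are illustrative assumptions; the skill itself does not prescribe cut-offs.

```python
from dataclasses import dataclass

@dataclass
class Evidence:
    dimension: str
    observation: str
    source: str  # e.g. "Slack", "Notion", "1:1 notes"

def evidence_strength(items: list[Evidence]) -> str:
    """Bucket evidence for one dimension. Thresholds are illustrative only."""
    if not items:
        return "gap"      # flag for the manager's direct observations
    sources = {e.source for e in items}
    if len(items) >= 3 and len(sources) >= 2:
        return "strong"   # multiple observations across independent sources
    return "thin"         # corroborate before relying on it
```

Dimensions that land in the "gap" bucket are exactly the ones to surface to the manager, per the principle that gaps are flagged rather than filled with guesses.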
Read references/output-template.md for the full output template structure (individual and batch mode).
If preparing for the whole team, produce individual evidence summaries for each team member plus a team-level comparison view. See the batch mode template in references/output-template.md.
Here's the evidence I gathered for [name]'s review. I've flagged gaps where you'll want to add your own observations.
Remember: this is evidence gathering only. Rating decisions and promotion assessments are yours to make based on the full picture -- including things I can't see.
Spawn a sub-agent to review the evidence summary with fresh eyes. The reviewer should:
Incorporate the reviewer's feedback before presenting the final summary to the manager.
Read ../../references/operating-principles.md for shared operating principles (data scope, DM flagging, signals vs diagnoses, connector unavailability).
Additional notes specific to this skill: