# okr-definition
Autonomous OKR definition: creates Objectives and Key Results at company, team, and individual levels. Supports cascading and alignment approaches, scoring on a 0-1.0 scale, committed vs. aspirational classification, and the CFR companion framework. Can import from vision-crafting output.
npx claudepluginhub ssiertsema/claude-code-plugins --plugin okr-definition

This skill uses the workspace's default tool permissions.
You create Objectives and Key Results at company, team, and individual levels. You research industry OKR patterns and best practices yourself -- do not ask the user for data they would need to look up. Only ask the user for decisions and confirmations.
This skill complements vision-crafting (which defines mission, vision, and strategic priorities) by translating strategic intent into measurable objectives and key results. It also feeds into theme-roadmapping (which organizes execution into themes and initiatives).
| Field | Value |
|---|---|
| Name | okr-definition |
| Version | 1.0.0 |
| Primary category | planning |
| Secondary category | assessment |
| Output mode | human_readable |
| Mixins | diagram-rendering, autonomous-research |
| Field | Description |
|---|---|
| Organization/team/product context | What entity the OKRs are for |
| Field | Description | Default |
|---|---|---|
| Vision/strategy document | Path to vision-crafting output or strategy doc | Will identify strategic context itself |
| OKR level | Company, team, or individual | Company |
| Cadence | Annual or quarterly | Quarterly |
| Existing OKRs | Current OKRs to assess or build upon | None |
```yaml
context:
  type: string
  required: true
  description: Organization, team, or product name and context
vision_input:
  type: file_path | string
  required: false
  description: Vision-crafting output or strategy document
okr_level:
  type: string
  enum: [company, team, individual]
  default: company
cadence:
  type: string
  enum: [annual, quarterly]
  default: quarterly
existing_okrs:
  type: file_path | string
  required: false
  description: Current OKRs to assess or extend
render_mode:
  type: string
  enum: [code, image]
  default: code
  dependency_if_image: "@mermaid-js/mermaid-cli (mmdc)"
```
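As a sketch, a filled-in invocation of this input schema might look like the following (the entity name, file path, and values are illustrative assumptions, not from this document):

```yaml
context: "Acme Analytics -- 40-person B2B SaaS product organization"
vision_input: docs/vision-output.md
okr_level: team
cadence: quarterly
render_mode: code
```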
Follow shared foundation SS7 -- interview mode. When input is missing or insufficient, interview to gather at minimum:
| Dimension | Required | Default |
|---|---|---|
| Organization/team/product context | Yes | -- |
| OKR level | No | Company |
| Cadence | No | Quarterly |
| Vision/strategy input | No | Will research context itself |
Exit interview when: Context is clear enough to define objectives and key results.
Accept one of:
From the input (or interview results), identify:
**Entity**: [name]
**OKR level**: [company/team/individual]
**Cadence**: [annual/quarterly]
**Strategic input**: [imported from vision-crafting / will research]
Ask the user to confirm or adjust. Ask diagram render mode and output path per the diagram-rendering and autonomous-research mixins.
Use WebSearch and WebFetch per the autonomous-research mixin.
Research OKR patterns for this domain/industry:
Research OKR best practices relevant to the context:
Import strategic context:
Identify through research:
Present strategic context summary for user confirmation.
Define 3-5 objectives per level.
Each objective must be: qualitative, inspirational, time-bound within the cadence, and linked to a strategic priority.
| Type | Description | Expected score |
|---|---|---|
| Committed | Must achieve -- failure indicates planning or execution problems | 1.0 expected |
| Aspirational | Stretch goals -- reaching 0.7 is strong performance | 0.7 expected |
| Learning | Experimental -- success means validated learning | 0.7 expected, pivot acceptable |
| # | Objective | Type | Strategic priority link | Level |
|---|---|---|---|---|
| O1 | [qualitative, inspirational statement] | Committed | [priority] | Company |
| O2 | [qualitative, inspirational statement] | Aspirational | [priority] | Company |
Present objectives for user confirmation before proceeding to key results.
Define 2-5 key results per objective.
Each key result must be: quantitative, objectively verifiable, and defined with a baseline, a target, and a scoring method.
Each key result uses a 0-1.0 scale:
| Objective | KR # | Key Result | Baseline | Target | Scoring method | 0.3 | 0.5 | 0.7 | 1.0 |
|---|---|---|---|---|---|---|---|---|---|
| O1 | KR1.1 | [quantitative statement] | [current] | [target] | [method] | [threshold] | [threshold] | [threshold] | [threshold] |
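To illustrate the 0-1.0 scale with 0.3/0.5/0.7/1.0 thresholds, here is a minimal scoring sketch. The `score_kr` helper, the rubric values, and the example KR are hypothetical, not part of this skill:

```python
def score_kr(value, baseline, thresholds):
    """Map a measured metric value onto the 0.3/0.5/0.7/1.0 rubric.

    thresholds: dict mapping rubric score -> minimum metric value required.
    Returns the highest rubric score whose threshold is met; anything at
    or below the baseline scores 0.0.
    """
    if value <= baseline:
        return 0.0
    earned = 0.0
    for score in sorted(thresholds):  # 0.3, 0.5, 0.7, 1.0 in order
        if value >= thresholds[score]:
            earned = score
    return earned

# Example KR: "grow weekly active teams from 200 to 500"
rubric = {0.3: 290, 0.5: 350, 0.7: 410, 1.0: 500}
print(score_kr(430, baseline=200, thresholds=rubric))  # -> 0.7
```

A step-function rubric like this keeps scoring unambiguous at check-in time; teams that want finer gradation can interpolate between thresholds instead.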
For multi-level OKRs:
| Team/Individual OKR | Traces to Company OKR | Alignment strength |
|---|---|---|
| [team objective] | O1: [company objective] | Strong / Moderate / Weak |
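The cascade validation above (every team objective must trace to a company objective, per mistake #8) can be sketched as follows; the objective IDs and trace data are illustrative:

```python
# Company-level objective IDs that team OKRs may trace to.
company = {"O1", "O2", "O3"}

# Team objective -> the company objective it claims to support.
team_okrs = {
    "T1": "O1",   # traces to company O1
    "T2": "O2",   # traces to company O2
    "T3": None,   # orphan: no company-level parent
}

# An orphan is any team objective whose parent is not a company objective.
orphans = [t for t, parent in team_okrs.items() if parent not in company]
print(orphans)  # -> ['T3']
```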
Score overall OKR quality (0-100) by checking against 20 common OKR mistakes:
| # | Mistake | Detection | Severity | Points deducted |
|---|---|---|---|---|
| 1 | Output not outcome | KR describes activity, not result | Critical | -10 |
| 2 | Too many OKRs | > 5 objectives per level | Warning | -5 |
| 3 | Too many KRs | > 5 KRs per objective | Warning | -3 |
| 4 | Sandbagging | All KRs easily achievable, no stretch | Warning | -5 |
| 5 | No baseline | KR has no current-state reference | Warning | -3 |
| 6 | Not measurable | KR cannot be objectively verified | Critical | -10 |
| 7 | Not time-bound | No deadline within cadence | Warning | -3 |
| 8 | No alignment | Team OKR doesn't trace to company OKR | Critical | -10 |
| 9 | Conflicting OKRs | Two OKRs work against each other | Critical | -10 |
| 10 | Binary KR | KR is yes/no with no gradient | Warning | -3 |
| 11 | Business-as-usual | OKR describes routine work, not improvement | Warning | -5 |
| 12 | No aspirational OKRs | All OKRs are committed, no stretch | Info | -2 |
| 13 | All aspirational | No committed OKRs, everything is stretch | Warning | -5 |
| 14 | Metric gaming risk | KR incentivizes wrong behavior | Warning | -5 |
| 15 | Missing strategic link | Objective doesn't connect to strategy | Warning | -5 |
| 16 | Vague objective | Objective is too generic to inspire action | Warning | -5 |
| 17 | Lagging-only KRs | No leading indicators, only trailing metrics | Info | -2 |
| 18 | Duplicate measurement | Same metric appears in multiple KRs | Info | -2 |
| 19 | Uncontrollable KR | KR depends on external factors team cannot influence | Warning | -3 |
| 20 | No scoring rubric | KR has no defined 0.3/0.5/0.7/1.0 thresholds | Warning | -3 |
| Score range | Rating |
|---|---|
| 90-100 | Excellent OKR quality |
| 75-89 | Good -- minor improvements needed |
| 50-74 | Fair -- significant issues to address |
| < 50 | Poor -- major rework recommended |
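A minimal sketch of the scoring arithmetic: start at 100, subtract the deduction for each detected mistake, floor at 0, then map the result to the rating bands above. The `DEDUCTIONS` mapping mirrors the mistake table; the detected-mistake set is illustrative:

```python
# Points deducted per mistake number, from the 20-mistake table.
DEDUCTIONS = {1: 10, 2: 5, 3: 3, 4: 5, 5: 3, 6: 10, 7: 3, 8: 10, 9: 10,
              10: 3, 11: 5, 12: 2, 13: 5, 14: 5, 15: 5, 16: 5, 17: 2,
              18: 2, 19: 3, 20: 3}

def quality_score(detected):
    """100 minus the deductions for each detected mistake, floored at 0."""
    return max(0, 100 - sum(DEDUCTIONS[m] for m in detected))

def rating(score):
    """Map a 0-100 quality score to its rating band."""
    if score >= 90: return "Excellent OKR quality"
    if score >= 75: return "Good -- minor improvements needed"
    if score >= 50: return "Fair -- significant issues to address"
    return "Poor -- major rework recommended"

s = quality_score({2, 5, 17})  # too many OKRs, no baseline, lagging-only
print(s, rating(s))            # -> 90 Excellent OKR quality
```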
For each objective, define the CFR companion:
| Objective | Conversation topic | Frequency | Participants | Purpose |
|---|---|---|---|---|
| O1 | [topic] | [weekly/biweekly/monthly] | [roles] | Check-in / problem-solving / course-correction |
| Objective | Feedback mechanism | Trigger | Direction |
|---|---|---|---|
| O1 | [mechanism] | [when to give feedback] | Upward / downward / peer |
| Objective | Recognition trigger | Recognition type |
|---|---|---|
| O1 | [what achievement triggers recognition] | Public / private / formal / informal |
```mermaid
flowchart TD
    classDef company fill:#1565C0,stroke:#333,color:#fff
    classDef team fill:#2E7D32,stroke:#333,color:#fff
    classDef individual fill:#F57F17,stroke:#333,color:#fff
    CO1["O1: Company Objective"]:::company
    TO1["O1.1: Team Objective"]:::team
    TO2["O1.2: Team Objective"]:::team
    IO1["O1.1.1: Individual Objective"]:::individual
    CO1 --> TO1
    CO1 --> TO2
    TO1 --> IO1
```
Shows cascade from company to team to individual objectives.
```mermaid
xychart-beta
    title "Key Results Target vs Baseline"
    x-axis ["KR1.1", "KR1.2", "KR2.1", "KR2.2", "KR3.1"]
    y-axis "Score" 0 --> 1.0
    bar [0.2, 0.1, 0.3, 0.0, 0.15]
    bar [1.0, 0.7, 1.0, 0.7, 1.0]
```
Two bars per KR: baseline (current) and target score.
```mermaid
xychart-beta
    title "OKR Quality by Dimension"
    x-axis ["Measurability", "Alignment", "Ambition", "Clarity", "Completeness"]
    y-axis "Score" 0 --> 100
    bar [85, 90, 70, 80, 75]
```
Shows quality scores broken down by dimension.
Render diagrams per the diagram-rendering mixin.
File naming:
- okr-alignment-tree.mmd / .png
- key-results-dashboard.mmd / .png
- quality-scorecard.mmd / .png

Assemble the complete report:
# OKR Definition: [Entity Name]
**Date**: [date]
**Entity**: [name]
**Level**: [company/team/individual]
**Cadence**: [annual/quarterly]
**Objectives**: [count]
**Key Results**: [count]
**Quality score**: [0-100] -- [rating]
## Executive Summary
[Key findings: quality score, objective types breakdown, alignment status, top 3 recommendations]
## Strategic Context
[Phase 3 strategic context: mission, vision, priorities, challenges]
## OKR Table
[Phase 4 + 5 combined: objectives with their key results, baselines, targets, scoring rubrics]
## Alignment Map
[Phase 6 cascade validation + alignment tree diagram]
## Quality Validation
[Phase 7 mistake detection + quality scorecard diagram]
## Key Results Dashboard
[Phase 9 progress dashboard diagram]
## CFR Framework
[Phase 8 conversations, feedback, recognition tables]
## Recommendations
[Prioritized actions traced to specific findings]
## Sources
[Numbered list of web sources]
## Assumptions & Limitations
[Explicit list]
Present for user approval. Save only after explicit confirmation.
Per the autonomous-research mixin, plus:
The report must contain every applicable section listed in Phase 10. Sections may not be left empty: if there is nothing to report for a section, omit its header entirely.
| Situation | Behavior |
|---|---|
| No context provided | Enter interview mode -- ask what entity to define OKRs for |
| Context too vague | Enter interview mode -- ask targeted questions about the organization/team |
| Vision input malformed | Ask user to verify, attempt partial import |
| Cannot identify strategic priorities | Report limitation, produce OKRs with explicit assumptions labeled |
| Existing OKRs provided for assessment | Assess against the 20 mistakes, provide improvement recommendations |
| Single person / solo founder | Adapt to individual OKRs, skip cascade validation |
| Non-profit / mission-driven | Adapt objectives to impact metrics, not revenue |
| Cross-functional initiative | Create shared OKRs with clear ownership per KR |
| mmdc / web search failures | See diagram-rendering and autonomous-research mixins |
| Out-of-scope request | "This skill defines and validates OKRs. [Request] is outside scope." |
Before presenting output, verify:
- [ ] 3-5 objectives defined per level
- [ ] 2-5 key results per objective with baselines and targets
- [ ] Every objective classified (committed/aspirational/learning)
- [ ] Every KR has 0.3/0.5/0.7/1.0 scoring thresholds
- [ ] Alignment validated -- no orphan OKRs
- [ ] No conflicting OKRs at the same level
- [ ] All 20 OKR mistakes checked
- [ ] Quality score calculated (0-100)
- [ ] CFR framework defined for each objective
- [ ] Recommendations traced to specific findings
- [ ] All Mermaid diagrams render valid syntax (per diagram-rendering mixin)
- [ ] Sources listed for claims (per autonomous-research mixin)
- [ ] Assumptions labeled (per autonomous-research mixin)
Example 1: SaaS company quarterly OKRs
Example 2: With vision-crafting input
Example 3: Engineering team OKRs
Example 4: Startup annual OKRs
Example 5: Assess existing OKRs
Example 6: Solo founder
Example 7: Non-profit
Example 8: Cross-functional initiative
Example 9: No context
Example 10: Out of scope