Skill

assess

Assesses and rates quality 0-10 with pros/cons analysis. Use when evaluating code, designs, or approaches.

From ork
Install
Run in your terminal:

$ npx claudepluginhub yonatangross/orchestkit --plugin ork
Tool Access

This skill is limited to using the following tools:

AskUserQuestion, Read, Grep, Glob, Task, TaskCreate, TaskUpdate, TaskList, ToolSearch, mcp__memory__search_nodes, Bash
Supporting Assets
assets/assessment-report.md
assets/comparison-table.md
checklists/assessment-checklist.md
references/agent-spawn-definitions.md
references/agent-teams-mode.md
references/alternative-analysis.md
references/improvement-prioritization.md
references/orchestration-mode.md
references/phase-templates.md
references/quality-model.md
references/scope-discovery.md
references/scoring-rubric.md
rules/_sections.md
rules/_template.md
rules/complexity-breakdown.md
rules/complexity-metrics.md
test-cases.json
Skill Content

Assess

Comprehensive assessment skill for answering "is this good?" with structured evaluation, scoring, and actionable recommendations.

Quick Start

/ork:assess backend/app/services/auth.py
/ork:assess our caching strategy
/ork:assess --model=opus the current database schema
/ork:assess frontend/src/components/Dashboard

Argument Resolution

TARGET = "$ARGUMENTS"  # Full argument string, e.g., "backend/app/services/auth.py"
# $ARGUMENTS[0] is the first token (CC 2.1.59 indexed access)

# Model override detection (CC 2.1.72)
MODEL_OVERRIDE = None
for token in "$ARGUMENTS".split():
    if token.startswith("--model="):
        MODEL_OVERRIDE = token.split("=", 1)[1]  # "opus", "sonnet", "haiku"
        TARGET = TARGET.replace(token, "").strip()

Pass MODEL_OVERRIDE to all Agent() calls via model=MODEL_OVERRIDE when set. Accepts symbolic names (opus, sonnet, haiku) or full IDs (claude-opus-4-6) per CC 2.1.74.
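The override parsing above can be sketched as plain Python, assuming the raw argument string arrives as an ordinary string (`parse_arguments` is an illustrative helper, not part of the skill API):

```python
def parse_arguments(raw: str):
    """Split a --model= override out of the raw argument string.

    Returns (target, model_override), where model_override is None
    when no --model= token is present.
    """
    model_override = None
    kept = []
    for token in raw.split():
        if token.startswith("--model="):
            model_override = token.split("=", 1)[1]
        else:
            kept.append(token)
    return " ".join(kept), model_override
```

Removing the token before rebuilding the target avoids the substring pitfall of `TARGET.replace(token, "")` when the override appears mid-string.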


STEP -1: MCP Probe + Resume Check

Load: Read("${CLAUDE_PLUGIN_ROOT}/skills/chain-patterns/references/mcp-detection.md")

# 1. Probe MCP servers (once at skill start)
probe_memory = ToolSearch(query="select:mcp__memory__search_nodes")

# 2. Store capabilities
Write(".claude/chain/capabilities.json", {
  "memory": probe_memory.found,
  "skill": "assess",
  "timestamp": now()
})

# 3. Check for resume
state = Read(".claude/chain/state.json")  # may not exist
if state.skill == "assess" and state.status == "in_progress":
    last_handoff = Read(f".claude/chain/{state.last_handoff}")
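The resume check above can be written as a small self-contained helper, assuming `state.json` holds the `skill`, `status`, and `last_handoff` keys shown (the function name and directory argument are illustrative):

```python
import json
import os


def load_resume_state(chain_dir: str = ".claude/chain"):
    """Return the last handoff path if an in-progress assess run exists, else None."""
    state_path = os.path.join(chain_dir, "state.json")
    if not os.path.exists(state_path):
        return None  # fresh run: nothing to resume
    with open(state_path) as f:
        state = json.load(f)
    if state.get("skill") == "assess" and state.get("status") == "in_progress":
        return os.path.join(chain_dir, state["last_handoff"])
    return None
```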

Phase Handoffs

Phase  Handoff File         Contents
0      00-intent.json       Dimensions, target, mode
1      01-baseline.json     Initial codebase scan results
2      02-evaluation.json   Per-dimension scores + evidence
3      03-report.json       Final report, grade, recommendations
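A minimal writer for these handoff files, assuming they follow an NN-name.json naming scheme under the chain directory (`write_handoff` is a hypothetical helper, not skill API):

```python
import json
import os


def write_handoff(phase: int, name: str, payload: dict,
                  chain_dir: str = ".claude/chain") -> str:
    """Write a phase handoff file (e.g. 00-intent.json) and return its path."""
    os.makedirs(chain_dir, exist_ok=True)
    path = os.path.join(chain_dir, f"{phase:02d}-{name}.json")
    with open(path, "w") as f:
        json.dump(payload, f, indent=2)
    return path
```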

STEP 0: Verify User Intent with AskUserQuestion

BEFORE creating tasks, clarify assessment dimensions:

AskUserQuestion(
  questions=[{
    "question": "What dimensions to assess?",
    "header": "Dimensions",
    "options": [
      {"label": "Full assessment (Recommended)", "description": "All dimensions: quality, maintainability, security, performance", "markdown": "```\nFull Assessment (7 phases)\n──────────────────────────\n  Dimensions scored 0-10:\n  ┌─────────────────────────────┐\n  │ Correctness      ████████░░ │\n  │ Maintainability  ██████░░░░ │\n  │ Security         █████████░ │\n  │ Performance      ███████░░░ │\n  │ Testability      ██████░░░░ │\n  │ Architecture     ████████░░ │\n  │ Documentation    █████░░░░░ │\n  └─────────────────────────────┘\n  + Pros/cons + alternatives\n  + Effort estimates + report\n  Agents: 4 parallel evaluators\n```"},
      {"label": "Code quality only", "description": "Readability, complexity, best practices", "markdown": "```\nCode Quality Focus\n──────────────────\n  Dimensions scored 0-10:\n  ┌─────────────────────────────┐\n  │ Correctness      ████████░░ │\n  │ Maintainability  ██████░░░░ │\n  │ Testability      ██████░░░░ │\n  └─────────────────────────────┘\n  Skip: security, performance\n  Agents: 1 code-quality-reviewer\n  Output: Score + best practice gaps\n```"},
      {"label": "Security focus", "description": "Vulnerabilities, attack surface, compliance", "markdown": "```\nSecurity Focus\n──────────────\n  ┌──────────────────────────┐\n  │ OWASP Top 10 check       │\n  │ Dependency CVE scan       │\n  │ Auth/AuthZ flow review    │\n  │ Data flow tracing         │\n  │ Secrets detection         │\n  └──────────────────────────┘\n  Agent: security-auditor\n  Output: Vuln list + severity\n          + remediation steps\n```"},
      {"label": "Quick score", "description": "Just give me a 0-10 score with brief notes", "markdown": "```\nQuick Score\n───────────\n  Single pass, ~2 min:\n\n  Read target ──▶ Score ──▶ Done\n                  7.2/10\n\n  Output:\n  ├── Composite score (0-10)\n  ├── Grade (A-F)\n  ├── 3 strengths\n  └── 3 improvements\n  No agents, no deep analysis\n```"}
    ],
    "multiSelect": false
  }]
)

Based on answer, adjust workflow:

  • Full assessment: All 7 phases, parallel agents
  • Code quality only: Skip security and performance phases
  • Security focus: Prioritize security-auditor agent
  • Quick score: Single pass, brief output
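The branching above can be encoded as a lookup from the selected option label to a workflow adjustment (the labels mirror the menu; the dict values are a hypothetical sketch, not skill code):

```python
# Hypothetical lookup: chosen menu option -> workflow adjustment.
WORKFLOW_BY_ANSWER = {
    "Full assessment (Recommended)": {"phases": "all", "parallel_agents": 4},
    "Code quality only":             {"phases": "all", "skip": ["security", "performance"]},
    "Security focus":                {"phases": "all", "priority_agent": "security-auditor"},
    "Quick score":                   {"phases": "single_pass", "parallel_agents": 0},
}


def adjust_workflow(answer: str) -> dict:
    """Return the workflow adjustment for the selected option (default: full)."""
    return WORKFLOW_BY_ANSWER.get(
        answer, WORKFLOW_BY_ANSWER["Full assessment (Recommended)"]
    )
```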

STEP 0b: Select Orchestration Mode

Load details: Read("${CLAUDE_SKILL_DIR}/references/orchestration-mode.md") for env var check logic, Agent Teams vs Task Tool comparison, and mode selection rules.


Task Management (CC 2.1.16)

TaskCreate(
  subject="Assess: {target}",
  description="Comprehensive evaluation with quality scores and recommendations",
  activeForm="Assessing {target}"
)

What This Skill Answers

Question                       How It's Answered
"Is this good?"                Quality score 0-10 with reasoning
"What are the trade-offs?"     Structured pros/cons list
"Should we change this?"       Improvement suggestions with effort
"What are the alternatives?"   Comparison with scores
"Where should we focus?"       Prioritized recommendations

Workflow Overview

Phase                        Activities                         Output
1. Target Understanding      Read code/design, identify scope   Context summary
1.5. Scope Discovery         Build bounded file list            Scoped file list
2. Quality Rating            7-dimension scoring (0-10)         Scores with reasoning
3. Pros/Cons Analysis        Strengths and weaknesses           Balanced evaluation
4. Alternative Comparison    Score alternatives                 Comparison matrix
5. Improvement Suggestions   Actionable recommendations         Prioritized list
6. Effort Estimation         Time and complexity estimates      Effort breakdown
7. Assessment Report         Compile findings                   Final report

Phase 1: Target Understanding

Identify what's being assessed and gather context:

# PARALLEL - Gather context
Read(file_path="$ARGUMENTS[0]")  # If file path
Grep(pattern="$ARGUMENTS[0]", output_mode="files_with_matches")
mcp__memory__search_nodes(query="$ARGUMENTS[0]")  # Past decisions

Phase 1.5: Scope Discovery

Load Read("${CLAUDE_SKILL_DIR}/references/scope-discovery.md") for the full file discovery, limit application (MAX 30 files), and sampling priority logic. Always include the scoped file list in every agent prompt.
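A sketch of the bounding step, assuming a simple glob-and-cap approach (the real sampling-priority logic lives in references/scope-discovery.md; `bounded_scope` is illustrative):

```python
import glob
import os

MAX_FILES = 30  # hard limit from scope-discovery.md


def bounded_scope(root: str, pattern: str = "**/*.py",
                  max_files: int = MAX_FILES) -> list:
    """Collect files under root matching pattern, capped at max_files."""
    files = sorted(glob.glob(os.path.join(root, pattern), recursive=True))
    return files[:max_files]
```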

Progressive Output (CC 2.1.76)

Output results incrementally as each evaluation phase completes:

After Phase                Show User
1. Target Understanding    Scope summary, file list, context
1.5. Scope Discovery       Bounded file list (max 30 files)
2. Quality Rating          Each dimension's score as the evaluating agent returns
3. Pros/Cons               Balanced evaluation summary

For Phase 2 parallel agents, show each dimension's score as soon as the evaluating agent returns — don't wait for all 4 agents. If any dimension scores below 4/10, flag it immediately as a priority concern requiring user attention.
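The streaming-and-flagging behavior can be sketched as a generator over per-dimension results as they arrive (the function and message format are illustrative; the 4/10 threshold is from the text above):

```python
def stream_dimension_scores(results):
    """Yield a status line per dimension as each evaluator returns.

    `results` is any iterable of (dimension, score) pairs, e.g. an
    as-completed stream of agent outputs.
    """
    for dimension, score in results:
        line = f"{dimension}: {score}/10"
        if score < 4:
            line += "  [PRIORITY CONCERN]"
        yield line
```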


Phase 2: Quality Rating (7 Dimensions)

Rate each dimension 0-10 with weighted composite score. Load Read("${CLAUDE_PLUGIN_ROOT}/skills/quality-gates/references/unified-scoring-framework.md") for dimensions, weights, grade interpretation, and per-dimension criteria. Load Read("${CLAUDE_SKILL_DIR}/references/quality-model.md") for assess-specific overrides.

Load Read("${CLAUDE_SKILL_DIR}/references/agent-spawn-definitions.md") for Task Tool mode spawn patterns and Agent Teams alternative.

Composite Score: Weighted average of all 7 dimensions (see quality-model.md).
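A worked sketch of the weighted average (the weights below are hypothetical placeholders; the authoritative values live in quality-model.md and the unified scoring framework):

```python
# Hypothetical weights summing to 1.0; actual weights are defined in
# quality-model.md, not here.
WEIGHTS = {
    "correctness": 0.20, "maintainability": 0.15, "security": 0.20,
    "performance": 0.10, "testability": 0.15, "architecture": 0.10,
    "documentation": 0.10,
}


def composite_score(scores: dict) -> float:
    """Weighted average of the seven 0-10 dimension scores."""
    total = sum(WEIGHTS[dim] * scores[dim] for dim in WEIGHTS)
    return round(total, 1)
```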


Phases 3-7: Analysis, Comparison & Report

Load Read("${CLAUDE_SKILL_DIR}/references/phase-templates.md") for output templates for pros/cons, alternatives, improvements, effort, and the final report.

See also: Read("${CLAUDE_SKILL_DIR}/references/alternative-analysis.md") | Read("${CLAUDE_SKILL_DIR}/references/improvement-prioritization.md")


Grade Interpretation

Load Read("${CLAUDE_PLUGIN_ROOT}/skills/quality-gates/references/unified-scoring-framework.md") for grade thresholds and scoring criteria.


Key Decisions

Decision                Choice                    Rationale
7 dimensions            Comprehensive coverage    All quality aspects without overwhelming
0-10 scale              Industry standard         Easy to understand and compare
Parallel assessment     4 agents (7 dimensions)   Fast, thorough evaluation
Effort/Impact scoring   1-5 scale                 Simple prioritization math
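The "simple prioritization math" on the 1-5 effort/impact scales can be sketched as impact per unit effort (a plausible reading; the item names below are hypothetical examples):

```python
def priority_score(impact: int, effort: int) -> float:
    """Rank improvements by impact per unit effort (both on 1-5 scales)."""
    return impact / effort


# Hypothetical improvement items: (title, impact 1-5, effort 1-5).
items = [
    ("Add input validation", 5, 2),
    ("Rename internal module", 2, 1),
    ("Rewrite cache layer", 4, 5),
]
ranked = sorted(items, key=lambda it: priority_score(it[1], it[2]), reverse=True)
```

High-impact, low-effort items sort first, which matches the prioritized-list output of Phase 5.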

Rules Quick Reference

  • complexity-metrics (load ${CLAUDE_SKILL_DIR}/rules/complexity-metrics.md): HIGH impact. Covers 7-criterion scoring (1-5), complexity levels, and thresholds.
  • complexity-breakdown (load ${CLAUDE_SKILL_DIR}/rules/complexity-breakdown.md): HIGH impact. Covers task decomposition strategies and risk assessment.

Related Skills

  • assess-complexity - Task complexity assessment
  • ork:verify - Post-implementation verification
  • ork:code-review-playbook - Code review patterns
  • ork:quality-gates - Quality gate patterns

Version: 1.4.0 (March 2026) — Added progressive output for incremental evaluation results

Stats
Parent Repo Stars: 128
Parent Repo Forks: 14
Last Commit: Mar 20, 2026