Assesses code quality and design approaches with 0-10 scoring, pros/cons analysis, and actionable improvement recommendations. Use when evaluating code, designs, or approaches.
Install via:

/plugin marketplace add yonatangross/orchestkit
/plugin install orkl@orchestkit

This skill is limited to using the following tools:
Bundled files:

- assets/assessment-report.md
- assets/comparison-table.md
- checklists/assessment-checklist.md
- references/agent-spawn-definitions.md
- references/agent-teams-mode.md
- references/alternative-analysis.md
- references/improvement-prioritization.md
- references/orchestration-mode.md
- references/phase-templates.md
- references/quality-model.md
- references/scope-discovery.md
- references/scoring-rubric.md
- rules/_sections.md
- rules/_template.md
- rules/complexity-breakdown.md
- rules/complexity-metrics.md
- test-cases.json

Comprehensive assessment skill for answering "is this good?" with structured evaluation, scoring, and actionable recommendations.
/ork:assess backend/app/services/auth.py
/ork:assess our caching strategy
/ork:assess the current database schema
/ork:assess frontend/src/components/Dashboard
BEFORE creating tasks, clarify assessment dimensions:
AskUserQuestion(
    questions=[{
        "question": "What dimensions to assess?",
        "header": "Dimensions",
        "options": [
            {"label": "Full assessment (Recommended)", "description": "All dimensions: quality, maintainability, security, performance"},
            {"label": "Code quality only", "description": "Readability, complexity, best practices"},
            {"label": "Security focus", "description": "Vulnerabilities, attack surface, compliance"},
            {"label": "Quick score", "description": "Just give me a 0-10 score with brief notes"}
        ],
        "multiSelect": false
    }]
)
Based on the answer, adjust the workflow:
See Orchestration Mode for env var check logic, Agent Teams vs Task Tool comparison, and mode selection rules.
TaskCreate(
    subject="Assess: {target}",
    description="Comprehensive evaluation with quality scores and recommendations",
    activeForm="Assessing {target}"
)
| Question | How It's Answered |
|---|---|
| "Is this good?" | Quality score 0-10 with reasoning |
| "What are the trade-offs?" | Structured pros/cons list |
| "Should we change this?" | Improvement suggestions with effort |
| "What are the alternatives?" | Comparison with scores |
| "Where should we focus?" | Prioritized recommendations |
| Phase | Activities | Output |
|---|---|---|
| 1. Target Understanding | Read code/design, identify scope | Context summary |
| 1.5. Scope Discovery | Build bounded file list | Scoped file list |
| 2. Quality Rating | 7-dimension scoring (0-10) | Scores with reasoning |
| 3. Pros/Cons Analysis | Strengths and weaknesses | Balanced evaluation |
| 4. Alternative Comparison | Score alternatives | Comparison matrix |
| 5. Improvement Suggestions | Actionable recommendations | Prioritized list |
| 6. Effort Estimation | Time and complexity estimates | Effort breakdown |
| 7. Assessment Report | Compile findings | Final report |
Identify what's being assessed and gather context:
# PARALLEL - Gather context
Read(file_path="$ARGUMENTS") # If file path
Grep(pattern="$ARGUMENTS", output_mode="files_with_matches")
mcp__memory__search_nodes(query="$ARGUMENTS") # Past decisions
See Scope Discovery for the full file discovery, limit application (MAX 30 files), and sampling priority logic. Always include the scoped file list in every agent prompt.
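The cap-and-sample step above can be sketched in a few lines. This is a minimal illustration only: the deduplication and deterministic ordering here are assumptions, and the real sampling priority logic lives in references/scope-discovery.md.

```python
# Illustrative sketch of the MAX 30 files cap from Scope Discovery.
# Ordering/dedup strategy is an assumption; the real logic is in
# references/scope-discovery.md.
MAX_FILES = 30

def scope_files(candidates: list[str]) -> list[str]:
    """Deduplicate, sort for a deterministic order, and cap at MAX_FILES."""
    unique = sorted(set(candidates))
    return unique[:MAX_FILES]

files = scope_files([f"src/mod_{i}.py" for i in range(50)])
print(len(files))  # → 30
```

The resulting bounded list is what gets embedded in every agent prompt, so each agent sees the same scope.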
Rate each dimension 0-10 with weighted composite score. See Quality Model for dimensions, weights, and grade interpretation. See Scoring Rubric for per-dimension criteria.
See Agent Spawn Definitions for Task Tool mode spawn patterns and Agent Teams alternative.
Composite Score: Weighted average of all 7 dimensions (see quality-model.md).
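The weighted-average computation can be sketched as below. The dimension names and weights shown are placeholder assumptions for illustration; the authoritative values live in quality-model.md.

```python
# Sketch of a weighted composite score over 7 dimensions (0-10 each).
# Dimension names and weights are ASSUMPTIONS -- see quality-model.md
# for the actual model.
weights = {
    "readability": 0.20,
    "maintainability": 0.20,
    "security": 0.15,
    "performance": 0.15,
    "testability": 0.10,
    "documentation": 0.10,
    "best_practices": 0.10,
}

def composite_score(scores: dict[str, float]) -> float:
    """Weighted average of per-dimension 0-10 scores."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9  # weights must sum to 1
    return sum(weights[d] * scores[d] for d in weights)

uniform = {d: 7.0 for d in weights}
print(round(composite_score(uniform), 1))  # → 7.0
```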
See Phase Templates for output templates for pros/cons, alternatives, improvements, effort, and the final report.
See also: Alternative Analysis | Improvement Prioritization
See Quality Model for scoring dimensions, weights, and grade interpretation.
| Decision | Choice | Rationale |
|---|---|---|
| 7 dimensions | Comprehensive coverage | All quality aspects without overwhelming |
| 0-10 scale | Industry standard | Easy to understand and compare |
| Parallel assessment | 4 agents (7 dimensions) | Fast, thorough evaluation |
| Effort/Impact scoring | 1-5 scale | Simple prioritization math |
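The "simple prioritization math" behind the 1-5 effort/impact scale can be sketched as an impact-to-effort ratio sort. The field names and example items are hypothetical; see references/improvement-prioritization.md for the actual scheme.

```python
# Hypothetical prioritization helper: rank improvements by impact/effort,
# both on the 1-5 scale the skill uses. Field names are illustrative.
def prioritize(items: list[dict]) -> list[dict]:
    """Sort improvements by impact-to-effort ratio, highest first."""
    return sorted(items, key=lambda i: i["impact"] / i["effort"], reverse=True)

improvements = [
    {"name": "add input validation", "impact": 5, "effort": 2},  # ratio 2.5
    {"name": "rewrite caching layer", "impact": 4, "effort": 5},  # ratio 0.8
    {"name": "rename unclear vars", "impact": 2, "effort": 1},    # ratio 2.0
]
for item in prioritize(improvements):
    print(item["name"])
# → add input validation, rename unclear vars, rewrite caching layer
```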
| Rule | Impact | What It Covers |
|---|---|---|
| complexity-metrics | HIGH | 7-criterion scoring (1-5), complexity levels, thresholds |
| complexity-breakdown | HIGH | Task decomposition strategies, risk assessment |
Related skills:

- assess-complexity - Task complexity assessment
- ork:verify - Post-implementation verification
- ork:code-review-playbook - Code review patterns
- ork:quality-gates - Quality gate patterns

Version: 1.1.0 (February 2026)