Use when the user asks to review, audit, improve, classify, or reorganise existing documentation — for a single page or a whole docs set. Evaluates docs against Diátaxis principles: type fit, boundary discipline, user fit, structure, and quality. Returns concrete prioritised fixes and, where needed, recommends splitting overloaded pages. Triggers on phrases like "review my docs", "audit the documentation", "what's wrong with this guide", "improve this README", "tell me what's wrong".
From the sdlc-workflow plugin. Install: `npx claudepluginhub jayteealao/agent-skills --plugin sdlc-workflow`. This skill uses the workspace's default tool permissions.
Review one document or a whole docs set using Diátaxis.
The job is not to say whether the writing is "good" in the abstract. The job is to determine whether each page has the right purpose, stays within its boundaries, serves the right user need, and forms part of a usable documentation system.
Use this skill when the user asks for:

- a review or audit of a single page or a whole documentation set
- a classification of pages by Diátaxis type
- improvements to a specific guide or README
- a reorganisation of an existing docs set
- a diagnosis, e.g. "what's wrong with this guide"
Assess each page on these axes:

- **Type fit** — is the page the right Diátaxis type (tutorial, how-to guide, reference, or explanation) for what it contains?
- **Boundary discipline** — does it stay within that type, or mix in material that belongs elsewhere?
- **User fit** — does it serve the need of the user actually reading it: learning, doing, looking up, or understanding?
- **Structure** — do headings, ordering, and navigation match the expected shape of its type?
- **Functional quality** — is it accurate, complete, and consistent?
- **Deep quality** — does it flow well, anticipate the reader's questions, and feel trustworthy?
- **System quality** — does it fit coherently into the surrounding documentation set?
Flag examples like:

- a tutorial that detours into exhaustive option listings (reference material)
- a how-to guide that stops to explain background concepts (explanation material)
- a reference page that embeds step-by-step instructions (how-to material)
- a single page trying to teach, instruct, and explain at once (a split candidate)
For each document, return:

- its current Diátaxis type, or "mixed" if none fits
- how well it fits that type, and where it breaks its boundaries
- concrete, prioritised fixes, each tied to a specific section or passage
- a split recommendation when one page is carrying more than one type
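As a sketch only, a per-document review might be returned in a shape like this. The page name, sections, and findings below are illustrative, not taken from any real docs set:

```markdown
## Review: installing.md

- Current type: how-to guide (mixed)
- Type fit: mostly fits, but sections 2 and 4 break the boundary
- Boundary violations:
  - Section 2 explains package-manager internals — explanation material
  - Section 4 lists every CLI flag — reference material
- Prioritised fixes:
  1. Move the flag table in section 4 into the CLI reference; link to it
  2. Cut the internals digression in section 2 to one sentence plus a link
  3. Rename the page "Install the CLI" to match its task focus
- Split recommendation: no — after fixes 1 and 2 this is a clean how-to
```

The exact shape can vary; what matters is that every finding is tied to a named section and the fixes are ordered by impact.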
For a docs set, also return:

- coverage gaps across the four Diátaxis types
- pages that belong in a different part of the set
- a recommended reorganisation, if the current structure fails its users
Be direct and specific. Prefer:

- "Move the installation steps into their own how-to guide; this tutorial should link to it."
- "Section 3 is reference material; extract it into the API reference."

Avoid vague advice like:

- "Consider improving clarity."
- "This could be better organised."
Before returning, verify:

- every fix names the page, section, or passage it applies to
- every page has been assigned a type, or explicitly flagged as mixed
- fixes are ordered by impact, not by the order you found them
- each split recommendation states what each resulting page would be