From llm-evals
Supabase-backed evaluation tracking with runs, cases, and scores tables. Use when storing eval results, building dashboards, or tracking regression over time.
`npx claudepluginhub vanman2024/ai-dev-marketplace --plugin llm-evals`

This skill is limited to using the following tools:
Skill for Supabase-backed evaluation result tracking.
Track evaluations with:
- `eval_runs` - Evaluation run metadata
- `eval_cases` - Individual test cases
- `eval_scores` - Metric scores per case

This skill is automatically invoked when storing eval results, building dashboards, or tracking regression over time.
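A minimal sketch of how these three tables could relate, assuming UUID keys and JSONB payloads; the actual `templates/schema.sql` (including its RLS policies) may define different columns and types:

```sql
-- Hypothetical schema sketch; column names and types are assumptions.
create table eval_runs (
  id         uuid primary key default gen_random_uuid(),
  name       text not null,        -- e.g. model or prompt version under test
  created_at timestamptz not null default now()
);

create table eval_cases (
  id         uuid primary key default gen_random_uuid(),
  run_id     uuid not null references eval_runs (id) on delete cascade,
  input      jsonb not null,       -- test input / prompt
  output     jsonb,                -- model response
  created_at timestamptz not null default now()
);

create table eval_scores (
  id         uuid primary key default gen_random_uuid(),
  case_id    uuid not null references eval_cases (id) on delete cascade,
  metric     text not null,        -- e.g. 'accuracy', 'faithfulness'
  score      numeric not null,
  created_at timestamptz not null default now()
);
```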
| Script | Description |
|---|---|
| `scripts/setup-tracking.sh` | Run Supabase migration |
| Template | Description |
|---|---|
| `templates/schema.sql` | Supabase tables and RLS |
| `templates/queries.sql` | Dashboard queries |
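As an illustration of the kind of dashboard query `templates/queries.sql` targets (not necessarily its exact contents), the following aggregates scores per metric for each run, which makes regressions between runs easy to spot; it assumes the hypothetical schema sketched above:

```sql
-- Average score per metric for each run, ordered by run date.
select
  r.name,
  r.created_at,
  s.metric,
  avg(s.score) as avg_score,
  count(s.id)  as n_cases
from eval_runs r
join eval_cases  c on c.run_id  = r.id
join eval_scores s on s.case_id = c.id
group by r.id, r.name, r.created_at, s.metric
order by r.created_at, s.metric;
```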