From coding-agent
LLM-driven UI via typed JSON spec. For AI agents that produce rich output (reports, dashboards, tool results) beyond markdown. Uses @json-render with a Zod-validated catalog. Not for general product UI.
```shell
npx claudepluginhub devjarus/coding-agent
```

This skill uses the workspace's default tool permissions.
When an AI agent produces output that's richer than markdown — research reports, stock/flight tool results, dashboards, multi-step summaries with charts — you have two bad options and one good one:
The LLM emits a JSON spec using only components from a catalog you define. A renderer maps spec → React. Zod validates the spec before rendering. Components you didn't register can't be rendered. State and actions flow through providers, not arbitrary code.
This skill is for the specific case of LLM-as-UI-author. Most UI is not that.
```json
{
  "dependencies": {
    "@json-render/core": "^0.11.0",
    "@json-render/react": "^0.11.0",
    "@json-render/shadcn": "^0.11.0",
    "zod": "^3"
  }
}
```
@json-render/shadcn is optional — a preset catalog wired to shadcn/ui components. Use it as a starting point, then extend.
```ts
// render-catalog.ts
import { defineCatalog } from "@json-render/core";
import { schema } from "@json-render/react/schema";
import { z } from "zod";

export const resultCatalog = defineCatalog(schema, {
  components: {
    Section: {
      props: z.object({
        title: z.string(),
        subtitle: z.string().nullable(),
        collapsible: z.boolean().nullable(),
        defaultOpen: z.boolean().nullable(),
      }),
      slots: ["default"],
      description: "A titled section that groups related content.",
    },
    MetricCard: {
      props: z.object({
        label: z.string(),
        value: z.string(),
        unit: z.string().nullable(),
        trend: z.enum(["up", "down", "neutral"]).nullable(),
      }),
      description: "Single metric display. Use for prices, counts, KPIs.",
    },
    DataTable: {
      props: z.object({
        columns: z.array(z.object({ key: z.string(), label: z.string() })),
        rows: z.array(z.record(z.unknown())),
      }),
      description: "Tabular data.",
    },
    // ... Chart, Timeline, Grid, Stack, etc.
  },
});
```
The description strings are the LLM's user manual — they appear in the system prompt. Write them as instructions to the model, not as doc comments.
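For the catalog above, a spec the LLM emits might look like the following. This is a sketch: the exact element shape is assumed from the `{ root, elements }` example shown later in this doc, with children referenced by element ID.

```json
{
  "root": "r1",
  "elements": {
    "r1": {
      "type": "Section",
      "props": { "title": "AAPL snapshot", "subtitle": null, "collapsible": null, "defaultOpen": null },
      "children": ["m1"]
    },
    "m1": {
      "type": "MetricCard",
      "props": { "label": "Price", "value": "227.48", "unit": "USD", "trend": "up" }
    }
  }
}
```

Note that every prop conforms to the Zod schema, including explicit `null` for nullable fields — validation rejects specs that omit or mistype them.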
```tsx
// render-registry.tsx
import { defineRegistry } from "@json-render/react";
import { resultCatalog } from "./render-catalog";

export const { registry, handlers, executeAction } = defineRegistry(resultCatalog, {
  components: {
    Section: ({ props, children }) => (
      <section className="rounded-lg border p-4">
        <h3 className="font-semibold">{props.title}</h3>
        {props.subtitle && <p className="text-sm text-muted">{props.subtitle}</p>}
        <div className="mt-2">{children}</div>
      </section>
    ),
    MetricCard: ({ props }) => <div>{/* ... */}</div>,
    DataTable: ({ props }) => <table>{/* ... */}</table>,
  },
});
```
Registry separates what the LLM can request (catalog) from how it looks (registry). Ship a new theme without touching prompts.
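The split means one catalog can back multiple looks. A minimal pure-TypeScript sketch of the idea — no @json-render imports; the "registry" here is just a map from component name to render function, which is an illustration, not the library's API:

```typescript
// One "catalog" contract: the component names and props the LLM may use.
type MetricProps = { label: string; value: string };

// Two "registries": different presentations of the same contract.
const plainTheme = {
  MetricCard: (p: MetricProps) => `${p.label}: ${p.value}`,
};
const loudTheme = {
  MetricCard: (p: MetricProps) => `** ${p.label.toUpperCase()} = ${p.value} **`,
};

// The spec (what the LLM authored) never changes when the theme does.
const spec = { type: "MetricCard" as const, props: { label: "Price", value: "227" } };

export function render(theme: typeof plainTheme, s: typeof spec): string {
  return theme[s.type](s.props);
}
```

Swapping `plainTheme` for `loudTheme` restyles every spec ever generated, with no prompt changes.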
```tsx
// ResultRenderer.tsx
import { Renderer, StateProvider, ActionProvider } from "@json-render/react";
import { registry, handlers } from "./render-registry";

export function ResultRenderer({ spec, markdown }: Props) {
  const parsed = parseSpec(spec); // returns null on invalid JSON or shape
  if (parsed) {
    return (
      <StateProvider initialState={{}}>
        <ActionProvider handlers={handlers(...)}>
          <Renderer spec={parsed} registry={registry} />
        </ActionProvider>
      </StateProvider>
    );
  }
  if (markdown) return <MarkdownContent content={markdown} />;
  return <EmptyState />;
}
```
Always provide a fallback chain — LLMs sometimes produce invalid specs. The component above tries the spec first, falls back to any markdown the model also produced, and finally renders an empty state. Never show raw JSON to users; if the spec is malformed, degrade gracefully.
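A `parseSpec` along the lines the renderer assumes could look like this. It's a sketch, not the library's API — with a hand-rolled shape check standing in for the Zod validation the catalog would normally provide:

```typescript
type Spec = {
  root: string;
  elements: Record<string, { type: string; props?: unknown; children?: string[] }>;
};

// Return the parsed spec, or null on invalid JSON or an unexpected shape —
// never throw, so the caller can fall through to markdown or an empty state.
export function parseSpec(raw: string): Spec | null {
  let data: unknown;
  try {
    data = JSON.parse(raw);
  } catch {
    return null; // not JSON at all
  }
  if (typeof data !== "object" || data === null) return null;
  const d = data as Partial<Spec>;
  if (typeof d.root !== "string") return null;
  if (typeof d.elements !== "object" || d.elements === null) return null;
  if (!(d.root in d.elements)) return null; // root must resolve to an element
  return d as Spec;
}
```

The key design choice is the `null` return: parse failure is an expected path, not an exception, so the fallback chain stays a plain conditional.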
The LLM needs to know (a) the catalog and (b) that it must output valid JSON. Two approaches:
Generate the catalog description at build time and paste it into the system prompt:
```ts
const catalogDocs = Object.entries(resultCatalog.components)
  .map(([name, def]) => `- ${name}: ${def.description}`)
  .join("\n");

const systemPrompt = `Output a JSON object matching this shape: { root: string, elements: Record<string, Element> }

Available components:
${catalogDocs}

Example: { root: "r1", elements: { r1: { type: "Section", props: { title: "..." }, children: [...] } } }`;
```
Use the model's JSON mode / structured output feature with the catalog's Zod schemas converted to JSON Schema. The model is then forced to emit valid shapes. Prefer this when your model supports it — it eliminates parse failures.
Static rendering is most of the value. But when the LLM wants a "Show more" button, filter dropdown, or tab switcher, use providers — not arbitrary code:
- State flows through `{{stateKey}}` bindings resolved by the `StateProvider`.
- Actions map names in the spec (`toggleSection`, `filterTable`) to handlers you implement. The LLM references action names; it can't define handlers.

This is the safety boundary: the LLM can request interactivity from a fixed menu of actions. It cannot author interactivity.
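A minimal sketch of that boundary — pure TypeScript, with the handler map and dispatch shape as assumptions rather than the @json-render API. Unknown action names are rejected, not executed:

```typescript
type Handler = (payload?: unknown) => void;

// The fixed menu: every action the LLM may reference, implemented by you.
const actionHandlers: Record<string, Handler> = {
  toggleSection: () => { /* flip a section's open state */ },
  filterTable: (payload) => { /* apply a filter taken from payload */ },
};

// The spec carries only a name — there is no code path from spec to function body.
export function executeAction(name: string, payload?: unknown): boolean {
  // Own-property check so prototype methods like "toString" can't be invoked.
  if (!Object.prototype.hasOwnProperty.call(actionHandlers, name)) {
    return false; // the LLM asked for an action that doesn't exist
  }
  actionHandlers[name](payload);
  return true;
}
```

The own-property check matters: a naive `actionHandlers[name]` lookup would resolve inherited `Object.prototype` methods for names like `"toString"`.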
If your agent produces images, charts, or files, pass them alongside the spec. In the personal-ai pattern, artifact IDs are passed in spec props (`<Image src="/api/artifacts/abc123" />`), and the renderer extracts referenced IDs to deduplicate with a separate gallery fallback.
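Extracting the referenced artifact IDs for deduplication can be sketched like this — the spec shape is assumed, and the `/api/artifacts/` prefix follows the example above:

```typescript
type Element = { type: string; props?: Record<string, unknown>; children?: string[] };
type Spec = { root: string; elements: Record<string, Element> };

// Collect every artifact ID referenced by any string prop that looks like an
// artifact URL, so a separate gallery can skip artifacts the spec already shows.
export function referencedArtifactIds(spec: Spec): Set<string> {
  const ids = new Set<string>();
  for (const el of Object.values(spec.elements)) {
    for (const value of Object.values(el.props ?? {})) {
      if (typeof value === "string") {
        const match = value.match(/^\/api\/artifacts\/([\w-]+)$/);
        if (match) ids.add(match[1]);
      }
    }
  }
  return ids;
}
```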
When falling back to markdown, sanitize it first — see the `sanitizeMarkdown` pattern in personal-ai. Never `eval` or `new Function` anything from the spec. Actions are named references to handlers you register.

Personal-ai (`packages/ui/src/`):
- `lib/render-catalog.ts` — ~15-component catalog with Zod schemas
- `lib/render-registry.tsx` — React implementations with shadcn + lucide
- `components/results/ResultRenderer.tsx` — fallback chain + debug toggle
- `components/tools/Tool*.tsx` — per-tool wrappers that produce specs

The Tool* files are worth reading: each tool has its own wrapper that decides when to render via spec vs. fall back to a custom React component. Specs are for LLM-authored output; custom components are for tool-authored output with known shape.
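That per-tool decision can be sketched as a small function — the names and result shape here are hypothetical, standing in for the real Tool* wrappers, which are React components:

```typescript
type ToolResult =
  | { kind: "spec"; spec: string }         // LLM-authored output
  | { kind: "structured"; data: unknown }; // tool-authored output with known shape

// Wrapper logic: LLM-authored specs go through the validated json-render
// path, but only if they parse; everything else gets the tool's
// hand-written React component.
export function renderPath(result: ToolResult): "json-render" | "custom" {
  if (result.kind === "spec") {
    try {
      JSON.parse(result.spec);
      return "json-render";
    } catch {
      return "custom"; // malformed spec: fall back rather than show raw JSON
    }
  }
  return "custom";
}
```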
| Alternative | Problem |
|---|---|
| LLM returns JSX string, eval at runtime | Injection, hallucinated components, no types |
| LLM returns markdown + frontmatter | Lossy, no interactivity, hard to theme |
| Custom renderer per tool | Works but doesn't compose; LLM can't mix components across tools |
| Vercel AI SDK streamUI / generative UI | Good for streaming, but ties you to specific models and runtime; json-render works with any model that can produce JSON |
json-render is the lightweight, model-agnostic choice when you want a constrained, typed, themeable surface for LLM-authored UI.