From restruct
Set up the restruct meta-prompt system. Runs diagnostics, installs Ollama, selects the right model for your hardware, and warms everything up.
npx claudepluginhub thejustinwalsh/claude-plugins --plugin restruct
This skill uses the workspace's default tool permissions.
Execute ALL of the following steps immediately using the Bash tool. Do not describe what you're going to do — just do it. Run each command yourself, report results briefly, and move to the next step. Only pause if a command fails and you cannot fix it.
Check whether Ollama is already installed: run which ollama or command -v ollama.
If not found, install it:
brew install ollama (macOS, via Homebrew)
curl -fsSL https://ollama.com/install.sh | sh (Linux)
Check if Ollama is running: curl -sf http://localhost:11434/api/version
If not running:
brew services start ollama (macOS), or run ollama serve in the background.
Wait 2 seconds, then confirm it responds.
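Taken together, the install-and-start sequence looks roughly like the sketch below (assuming Homebrew on macOS and the official install script on Linux; adjust to your environment):

#!/usr/bin/env bash
set -euo pipefail

# Install Ollama if it is not already on PATH.
if ! command -v ollama >/dev/null 2>&1; then
  if [[ "$(uname)" == "Darwin" ]]; then
    brew install ollama
  else
    curl -fsSL https://ollama.com/install.sh | sh
  fi
fi

# Start the server if it is not answering on the default port.
if ! curl -sf http://localhost:11434/api/version >/dev/null; then
  if [[ "$(uname)" == "Darwin" ]]; then
    brew services start ollama
  else
    nohup ollama serve >/tmp/ollama.log 2>&1 &
  fi
  sleep 2
  curl -sf http://localhost:11434/api/version   # should now return a version payload
fi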
Run sysctl -n hw.memsize (macOS) or grep MemTotal /proc/meminfo (Linux) to get total system RAM.
Select the model based on available memory:
qwen2.5-coder:14b (best quality, ~9GB model, needs ~16GB for inference)
qwen2.5-coder:7b (good quality, ~4.5GB model, needs ~8GB for inference)
qwen2.5-coder:3b (acceptable quality, ~2GB model)
Report the detected RAM and your model choice to the user.
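A sketch of the probe-and-select logic. The RAM cutoffs (16GB for the 14b model, 8GB for the 7b) are illustrative assumptions derived from the sizes listed above, not values fixed by restruct:

#!/usr/bin/env bash
# Total RAM in GB: hw.memsize reports bytes on macOS, MemTotal reports kB on Linux.
if [[ "$(uname)" == "Darwin" ]]; then
  ram_gb=$(( $(sysctl -n hw.memsize) / 1024 / 1024 / 1024 ))
else
  ram_gb=$(( $(awk '/MemTotal/ {print $2}' /proc/meminfo) / 1024 / 1024 ))
fi

# Assumed thresholds based on the per-model memory notes above.
if   (( ram_gb >= 16 )); then model="qwen2.5-coder:14b"
elif (( ram_gb >= 8  )); then model="qwen2.5-coder:7b"
else                          model="qwen2.5-coder:3b"
fi
echo "Detected ${ram_gb}GB RAM, selecting ${model}"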
Run ollama pull <selected-model> directly. This downloads the model weights. It may take several minutes for larger models.
Run ollama run <selected-model> "hello" --keepalive 60m to load the model into GPU/RAM and keep it resident. This ensures the first real refinement is fast.
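For example, with the 7b model (substitute whichever model was selected above):

ollama pull qwen2.5-coder:7b                          # download the weights
ollama run qwen2.5-coder:7b "hello" --keepalive 60m   # load into GPU/RAM and keep it resident
ollama ps                                             # confirm the model shows as loaded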
Only needed if the model differs from the default (qwen2.5-coder:14b):
${CLAUDE_PLUGIN_ROOT}/bin/restruct config set ollama.model <selected-model>
Run ${CLAUDE_PLUGIN_ROOT}/bin/restruct doctor to confirm everything is green.
If all_good is true, tell the user: "Restruct is ready. Your prompts will be automatically refined via <selected-model>."
If not, report what's still failing and attempt to fix it.
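Putting the last two steps together (the model value below is a hypothetical example; use whatever was selected earlier, and read doctor's report directly rather than assuming a particular output format):

model="qwen2.5-coder:7b"   # hypothetical; use the model actually selected

# Only override the default if a different model was chosen.
if [[ "$model" != "qwen2.5-coder:14b" ]]; then
  "${CLAUDE_PLUGIN_ROOT}/bin/restruct" config set ollama.model "$model"
fi

# Final health check; report any failing item back to the user.
"${CLAUDE_PLUGIN_ROOT}/bin/restruct" doctor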
Scan the project to discover what verification commands should run when Claude completes a task. This generates .restruct/verify.yaml which enforces lint, typecheck, build, and test rules automatically.
Discovery process:
package.json — check for scripts: test, lint, build, typecheck, check, tsc
Detect the package manager (pnpm, npm, yarn) from lock files; in a monorepo, use the manager's --filter syntax
go.mod — add go vet ./... and go test ./...
Cargo.toml — add cargo check, cargo clippy (if available), cargo test
Makefile or Justfile — check for lint, test, check targets (confirm they exist, e.g. with --help)
Scope checks with globs where appropriate: TypeScript checks to their package (e.g. web/**/*.ts, src/**/*.tsx), Go checks to **/*.go (or scoped to the Go module directory like cli/**/*.go), Rust checks to **/*.rs
Project-wide checks (e.g. pnpm test or pnpm build) get no globs — they run on any file change
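A minimal sketch of the Node-side discovery pass, assuming jq is available; the other ecosystems follow the same pattern of probing for their manifest files:

# Detect the package manager from lock files.
if   [[ -f pnpm-lock.yaml ]];    then pm="pnpm"
elif [[ -f yarn.lock ]];         then pm="yarn"
elif [[ -f package-lock.json ]]; then pm="npm"
fi
echo "Package manager: ${pm:-unknown}"

# List candidate scripts declared in package.json.
if [[ -f package.json ]]; then
  jq -r '.scripts // {} | keys[]' package.json \
    | grep -E '^(test|lint|build|typecheck|check|tsc)$' || true
fi

# Non-JS manifests map to fixed commands.
[[ -f go.mod ]]     && echo "go vet ./... && go test ./..."
[[ -f Cargo.toml ]] && echo "cargo check && cargo clippy && cargo test"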
Write the config:
Create .restruct/verify.yaml with the discovered checks. Example format:
checks:
  - name: test
    command: "pnpm test"
  - name: build
    command: "pnpm build"
  - name: typecheck
    command: "pnpm --filter web exec tsc --noEmit"
    globs:
      - "web/**/*.ts"
      - "web/**/*.tsx"
  - name: go-vet
    command: "pnpm --filter @restruct/cli exec go vet ./..."
    globs:
      - "cli/**/*.go"
Show the user the discovered checks and ask if they want to adjust anything before finalizing.