By crouton-labs
Author optimized Claude Code artifacts (CLAUDE.md files, slash commands, hooks, rules, skills, and scripts) using specialized guides that cover best practices in LLM prompting, token budgeting, multi-agent orchestration, evaluation strategies, and tool design, to improve project workflows and AI performance.
npx claudepluginhub crouton-labs/crouton-kit --plugin authoring

Best practices for writing effective CLAUDE.md files. Use when creating, updating, or auditing CLAUDE.md files for projects or directories.
Guide to writing slash commands for Claude Code. Use when creating commands that set mode, constraints, or workflows invoked via /command-name.
Token budgeting, placement effects, RAG patterns, prompt caching, compression, and multi-turn context strategies for LLM applications. Use when dealing with context windows, token budgets, retrieval-augmented generation, long context, context overflow, caching costs, observation masking, or compressing LLM inputs.
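The overflow and compression strategies this skill covers can be illustrated with the simplest one: recency-based truncation of a chat history. A minimal Python sketch; the 4-characters-per-token estimate and the budget figure are rough assumptions for illustration, not values from the skill.

```python
def approx_tokens(text: str) -> int:
    """Rough token estimate: ~4 characters per token for English text."""
    return max(1, len(text) // 4)

def fit_messages(messages: list[dict], budget: int) -> list[dict]:
    """Keep the most recent messages that fit within a token budget.

    Walks the history newest-first and stops once the budget is spent,
    so old turns are dropped before recent ones.
    """
    kept: list[dict] = []
    used = 0
    for msg in reversed(messages):
        cost = approx_tokens(msg["content"])
        if used + cost > budget:
            break
        kept.append(msg)
        used += cost
    return list(reversed(kept))

history = [
    {"role": "user", "content": "a" * 400},       # ~100 tokens
    {"role": "assistant", "content": "b" * 400},  # ~100 tokens
    {"role": "user", "content": "c" * 40},        # ~10 tokens
]
trimmed = fit_messages(history, budget=150)  # oldest message is dropped
```

Production systems usually refine this with a real tokenizer, pinned system messages, and summarization of the dropped turns, which is where the skill's fuller strategies come in.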
Evaluation strategies and quality gates for LLM systems. LLM-as-judge implementation, prompt regression testing, structural and semantic validation pipelines, production monitoring, guardrails, and metrics that actually work. Use when building evals, setting up CI quality gates, testing prompts, measuring output quality, detecting regressions, or adding safety guardrails to AI applications.
Guide to Claude Code hooks — lifecycle events, handler types, decision control, and common patterns. Use when creating, debugging, or planning hooks for guardrails, context injection, quality gates, notifications, or automation.
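The guardrail pattern mentioned here is easiest to picture with a concrete handler. This is a minimal sketch of a PreToolUse hook in Python, assuming the documented hook contract (event JSON arrives on stdin; exit code 2 blocks the tool call and returns stderr to Claude). The blocked patterns are invented for illustration.

```python
import json
import sys

# Illustrative denylist -- real guardrails would be project-specific.
BLOCKED = ("rm -rf", "git push --force")

def should_block(event: dict) -> tuple[bool, str]:
    """Return (block?, reason) for a PreToolUse event."""
    if event.get("tool_name") != "Bash":
        return False, ""
    command = event.get("tool_input", {}).get("command", "")
    for pattern in BLOCKED:
        if pattern in command:
            return True, f"Blocked: command contains '{pattern}'"
    return False, ""

# In the actual hook script this is wired to stdin and exit codes:
#   event = json.loads(sys.stdin.read())
#   block, reason = should_block(event)
#   if block:
#       print(reason, file=sys.stderr)
#       sys.exit(2)  # exit code 2 blocks the call; stderr goes to Claude
#   sys.exit(0)
```

Keeping the decision logic in a pure function like `should_block` makes the guardrail testable without spawning the hook process.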
Design and implement multi-agent LLM systems. Covers orchestrator patterns, parallel agent coordination, pipeline architecture, hierarchical delegation, agent communication, and failure handling. Use when building multi-agent workflows, coordinating parallel agents, designing orchestrators, splitting tasks across agents, or debugging multi-agent failures.
Techniques for getting varied, non-repetitive outputs from repeated LLM calls. Use when building systems that call LLMs in loops, generating personality lines, commentary, names, or any repeated creative text where outputs feel samey or cliche.
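One simple technique in this space is feeding recent outputs back into the prompt as negative examples, so the model steers away from what it already produced. A hedged sketch; the prompt wording is illustrative, not prescribed by the skill.

```python
def diverse_prompt(base: str, previous: list[str], k: int = 5) -> str:
    """Append up to k prior generations as explicit negative examples.

    A lightweight way to reduce repetition across looped LLM calls
    without changing sampling parameters.
    """
    recent = previous[-k:]
    if not recent:
        return base
    avoid = "\n".join(f"- {line}" for line in recent)
    return (
        f"{base}\n\n"
        f"Do not repeat or closely paraphrase any of these earlier outputs:\n"
        f"{avoid}"
    )

prompt = diverse_prompt(
    "Write one fantasy tavern name.",
    ["The Rusty Flagon", "The Gilded Goose"],
)
```

Pairing this with varied few-shot examples or higher temperature tends to work better than either trick alone.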
Structure Claude prompts for clarity and better results using roles, explicit instructions, context, positive framing, and strategic organization. Use when crafting prompts for complex tasks, long documents, tool workflows, or code generation.
Guide to writing .claude/rules/*.md files — auto-applied constraints scoped by file patterns. Use when creating or updating rules for code conventions, quality standards, or file-specific guidance.
Guide to creating CLI tools and scripts that augment Claude Code. Use when building bin/ executables, automation scripts, hook handlers, or tooling that abstracts repeated agent workflows.
Guide to writing SKILL.md files for Claude Code. Use when creating skills that provide on-demand reference, methodology, or workflow guidance.
Remove signs of AI-generated writing from text. Use when editing or reviewing text to make it sound more natural and human-written. Based on Wikipedia's comprehensive "Signs of AI writing" guide. Detects and fixes patterns including: inflated symbolism, promotional language, superficial -ing analyses, vague attributions, em dash overuse, rule of three, AI vocabulary words, passive voice, negative parallelisms, and filler phrases.
Get reliable typed JSON from LLMs using constrained decoding, JSON Schema, Zod, Pydantic, and Instructor. Use when implementing structured output, JSON schema validation, typed API responses, constrained generation, Zod schemas, Pydantic models, schema design for LLM extraction, streaming structured data, or debugging malformed JSON responses from models.
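The validate-then-type pattern behind those libraries can be shown with the standard library alone. A minimal sketch, assuming a made-up `Invoice` shape; Pydantic, Zod, and Instructor automate exactly this kind of checking, with coercion and far better error reporting.

```python
import json
from dataclasses import dataclass, fields

@dataclass
class Invoice:
    vendor: str
    total: float
    paid: bool

def parse_typed(raw: str) -> Invoice:
    """Parse model output into a dataclass, rejecting missing or mistyped fields."""
    data = json.loads(raw)
    kwargs = {}
    for f in fields(Invoice):
        if f.name not in data:
            raise ValueError(f"missing field: {f.name}")
        value = data[f.name]
        # JSON numbers may arrive as int where float is expected.
        expected = (int, float) if f.type is float else f.type
        # bool subclasses int in Python, so reject it explicitly elsewhere.
        if isinstance(value, bool) and f.type is not bool:
            raise ValueError(f"wrong type for {f.name}: bool")
        if not isinstance(value, expected):
            raise ValueError(f"wrong type for {f.name}: {type(value).__name__}")
        kwargs[f.name] = value
    return Invoice(**kwargs)
```

Failing loudly on a bad field, rather than silently accepting whatever the model emitted, is the core of the reliability the skill describes; constrained decoding then reduces how often that failure path fires.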
What belongs in system prompt vs user prompt and why. Placement decisions for API calls, system messages, and prompt architecture — system as identity/constraints/tools, user as task/context/state. Use when designing system prompts, structuring API calls, deciding where instructions belong, writing prompts for production systems, or debugging instruction compliance.
Design tool interfaces for LLM agents — descriptions, parameter schemas, error messages, granularity, and composition. Use when creating function calling tools, building MCP servers, designing agent tool interfaces, writing tool descriptions, or debugging why a model calls the wrong tool, hallucinates parameters, or fails to call a tool at all.
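As a concrete anchor for the description and schema advice above, here is a hedged example of a tool definition in the JSON Schema shape most function-calling APIs accept; the `search_orders` tool and all of its fields are invented for illustration.

```python
search_orders_tool = {
    "name": "search_orders",
    # A useful description says what the tool does, when to use it,
    # and what it returns -- not just a restatement of the name.
    "description": (
        "Search customer orders by status and date range. Use this when the "
        "user asks about order history or delivery status. Returns at most "
        "20 orders, newest first, as JSON objects with id, status, and total."
    ),
    "input_schema": {
        "type": "object",
        "properties": {
            "status": {
                "type": "string",
                # Enums constrain the model to valid values instead of
                # letting it hallucinate free-form strings.
                "enum": ["pending", "shipped", "delivered", "cancelled"],
                "description": "Order status to filter by.",
            },
            "placed_after": {
                "type": "string",
                "description": "ISO 8601 date, e.g. 2024-01-31. Optional.",
            },
        },
        "required": ["status"],
    },
}
```

Per-parameter descriptions and a tight `required` list are the cheapest fixes for the wrong-tool and hallucinated-parameter failures the skill lists.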