Natural-Language Programming Manager — scan, lint, and write NL artifacts with Claude-native quality scoring
npx claudepluginhub xiaolai/nlpm-for-claude --plugin nlpm
Check cross-component consistency — reference integrity, orphans, contradictions
Automatically repair mechanically fixable issues in NL artifacts — missing fields, heading gaps, field renames
Initialize NLPM for this project — detect artifacts, set lint strictness
Discover and inventory all natural language programming artifacts in a repository
Score NL programming artifacts — 100-point quality analysis per file
Classify a file path to its NL artifact type — command, agent, skill, rule, hook-config, manifest, etc.
Discover NL programming artifact files in a directory by category (A: plugin, B: project config, F: memory)
Run NL artifact tests — evaluate artifacts against .nlpm-test/*.spec.md specifications (TDD for natural language programming)
Show quality score trends over time — track improvements, detect degradation
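The classify and discover commands above map file paths to artifact types and categories. A minimal sketch of that mapping, assuming the standard Claude Code directory layout (the function name and the exact precedence order are illustrative, not the plugin's actual implementation):

```python
# Hypothetical path-to-artifact-type classifier. Directory conventions
# (commands/, agents/, skills/*/SKILL.md, .claude/rules/, hooks.json,
# plugin.json, CLAUDE.md) follow standard Claude Code layout; the rule
# ordering here is an assumption.
from pathlib import PurePosixPath

def classify(path: str) -> str:
    p = PurePosixPath(path)
    name, parts = p.name, p.parts
    if name in ("plugin.json", "marketplace.json"):
        return "manifest"
    if name == "hooks.json":
        return "hook-config"
    if name == "CLAUDE.md":
        return "memory"
    if name == "SKILL.md" or "skills" in parts:
        return "skill"
    if "commands" in parts and p.suffix == ".md":
        return "command"
    if "agents" in parts and p.suffix == ".md":
        return "agent"
    if "rules" in parts and p.suffix == ".md":
        return "rule"
    return "unknown"
```

Discovery by category would then be a glob over a root directory followed by this classification step.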
Cross-component consistency analyzer for NL programming artifacts. Checks reference integrity, detects orphans, finds behavioral contradictions, and identifies terminology drift across plugin components. <example> Context: User runs /nlpm:check on a plugin directory assistant: "I'll use the checker to verify cross-component consistency." </example> <example> Context: Developer renamed a skill directory and wants to verify no broken references assistant: "I'll dispatch the checker to find any broken skill references across agents and commands." </example>
Discover and classify all NL programming artifacts in a repository. <example> Context: User wants to inventory their NL artifacts user: "/nlpm:scan" assistant: "I'll use the scanner to discover all NL artifacts." </example> <example> Context: User wants to check a specific project user: "/nlpm:scan ~/github/myproject" assistant: "I'll scan that project for NL programming artifacts." </example>
Scores NL programming artifacts on a 100-point scale using deterministic penalties. Use this agent when scoring plugin components, checking artifact quality, or running quality analysis on commands, agents, skills, rules, hooks, or CLAUDE.md. <example> Context: User runs /nlpm:score on a directory assistant: "I'll use the scorer to analyze and score these artifacts." </example> <example> Context: Quality check before a plugin release assistant: "I'll dispatch the scorer to verify all artifacts meet the threshold." </example> <example> Context: Fix command needs to identify issues before applying repairs assistant: "I'll use the scorer to identify issues and their penalties." </example>
Evaluate NL artifacts against test specifications. Predicts trigger accuracy, checks output format expectations, validates frontmatter, and scores against thresholds. <example> Context: Developer wrote a spec for a new agent and wants to check if it passes user: "/nlpm:test" assistant: "I'll use the tester to evaluate your artifacts against their specs." </example> <example> Context: Developer is doing TDD — wrote the spec first, artifact doesn't exist yet user: "/nlpm:test agents/my-agent.spec.md" assistant: "I'll use the tester to check — the artifact doesn't exist yet, so this will be RED." </example>
Mechanical scanner for vague quantifier words in NL artifacts. Counts occurrences of flagged words and reports exact locations. Use this agent for fast, deterministic vague-word counting before the scorer applies judgment. <example> Context: Score command dispatches vague-scanner in parallel with scorer assistant: "I'll scan for vague quantifiers while the scorer runs full analysis." </example> <example> Context: Quick check for vague language in a single file assistant: "I'll scan for vague quantifier words and report exact locations." </example>
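The vague-scanner agent above is described as a fast, deterministic counter that reports exact locations. That mechanical pass could be sketched as follows; the flagged-word set here is invented for illustration, since the plugin's actual list is not shown in these descriptions:

```python
# Sketch of a mechanical vague-quantifier scan: count flagged words and
# report their line numbers. The VAGUE set is an assumed example list.
import re

VAGUE = {"some", "many", "several", "various", "appropriate",
         "relevant", "properly"}

def scan_vague(text: str) -> list[tuple[int, str]]:
    """Return (line_number, word) for each flagged-word occurrence."""
    hits = []
    for lineno, line in enumerate(text.splitlines(), start=1):
        for word in re.findall(r"[A-Za-z]+", line):
            if word.lower() in VAGUE:
                hits.append((lineno, word.lower()))
    return hits
```

Because the scan is pure string matching, it can run in parallel with the scorer's slower judgment-based analysis, as the first example describes.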
Use when writing, reviewing, or validating Claude Code plugin artifacts — check frontmatter schemas, hook event names, naming conventions, prompt structure, or reference syntax. Loaded by the NLPM scorer and checker agents for schema validation.
Multi-agent workflow patterns for Claude Code — parallel dispatch, sequential pipelines, QC gates, retry loops, shared partials. Use when designing systems with multiple agents, commands, or processing stages.
Use when writing or reviewing NL artifacts and need to check for anti-patterns — vague quantifiers, prohibitions without alternatives, oversized skills, write-on-read-only agents, monolithic prompts, or linter-duplicating rules.
The 50 rules of natural language programming. Loaded when writing, reviewing, or improving any NL artifact — skills, agents, commands, rules, hooks, prompts, plugins, CLAUDE.md. The definitive style guide for NL code quality.
Use when scoring NL artifact quality, applying penalties, or calibrating lint judgment — contains the 100-point rubric with penalty tables per artifact type and 4 worked calibration examples.
Use when writing test specs for NL artifacts, running /nlpm:test, or setting up TDD workflows for skills, agents, commands, rules, hooks, and prompts.
How to write Claude Code agents that trigger reliably, use the right model, and produce consistent output. Use when creating, improving, or reviewing agent definitions.
How to write Claude Code hooks — event selection, hook types, matcher patterns, blocking vs advisory, portable paths. Use when creating hooks for quality gates, automation, or policy enforcement.
How to design and build Claude Code plugins — architecture decisions, component selection, file structure, manifest configuration, marketplace publishing. Use when planning, creating, or reviewing a Claude Code plugin.
How to write effective system prompts for any LLM. Universal prompt engineering — role clarity, structured output, injection resistance, few-shot examples. Use when writing prompts, system instructions, or AI configuration.
How to write .claude/rules/ files that Claude actually follows. Use when creating, improving, or reviewing project rules.
How to write SKILL.md files that trigger reliably and teach effectively. Use when creating, improving, or reviewing Claude Code skills.
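Several descriptions above refer to a 100-point rubric with deterministic penalties. The scoring model reduces to: start at 100 and subtract a fixed weight per detected issue. A sketch under stated assumptions (the issue names and weights below are invented; the real per-artifact-type tables live in the rubric skill):

```python
# Hedged sketch of deterministic penalty scoring. PENALTIES is an
# illustrative table, not the plugin's actual rubric.
PENALTIES = {
    "missing-frontmatter-field": 10,  # assumed weight
    "heading-gap": 5,                 # assumed weight
    "vague-quantifier": 2,            # assumed weight
}

def score(issues: list[str]) -> int:
    """Start at 100, subtract a fixed penalty per issue, floor at 0."""
    total = sum(PENALTIES.get(issue, 0) for issue in issues)
    return max(0, 100 - total)
```

Deterministic weights make scores reproducible across runs, which is what allows the trend command to track improvement or degradation over time.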
Battle-tested Claude Code plugin for engineering teams — 38 agents, 156 skills, 72 legacy command shims, production-ready hooks, and selective install workflows evolved through continuous real-world use
Comprehensive skill pack with 66 specialized skills for full-stack developers: 12 language experts (Python, TypeScript, Go, Rust, C++, Swift, Kotlin, C#, PHP, Java, SQL, JavaScript), 10 backend frameworks, 6 frontend/mobile, plus infrastructure, DevOps, security, and testing. Features progressive disclosure architecture for 50% faster loading.
Access thousands of AI prompts and skills directly in your AI coding assistant. Search prompts, discover skills, save your own, and improve prompts with AI.
Comprehensive toolkit for developing Claude Code plugins. Includes 7 expert skills covering hooks, MCP integration, commands, agents, and best practices. AI-assisted plugin creation and validation.
Orchestrate multi-agent teams for parallel code review, hypothesis-driven debugging, and coordinated feature development using Claude Code's Agent Teams
Context-Driven Development plugin that transforms Claude Code into a project management tool with structured workflow: Context → Spec & Plan → Implement