Set up a hierarchical Intent Layer (AGENTS.md files) for codebases. Use when initializing a new project, adding context infrastructure to an existing repo, or when the user asks to set up AGENTS.md, add an intent layer, help agents understand the codebase, or scaffold AI-friendly project documentation.
npx claudepluginhub orban/intent-layer --plugin intent-layer

This skill uses the workspace's default tool permissions.
> **TL;DR**: Create CLAUDE.md/AGENTS.md files that help AI agents navigate your codebase like senior engineers. Run `detect_state.sh` first to see what's needed.
Hierarchical AGENTS.md infrastructure so agents navigate codebases like senior engineers.
This skill includes specialized sub-skills that are automatically invoked when appropriate:
| Sub-Skill | Location | Auto-Invoke When |
|---|---|---|
| git-history | `git-history/SKILL.md` | Creating nodes for existing code (extracts pitfalls from commits) |
| pr-review-mining | `pr-review-mining/SKILL.md` | Creating nodes for existing code (extracts pitfalls from PR discussions) |
| pr-review | `pr-review/SKILL.md` | Reviewing PRs that touch Intent Layer nodes |
When creating nodes for directories with git history, automatically run git-history analysis to pre-populate:

**Trigger**: Creating AGENTS.md for a directory with >50 commits
**Action**: Run git-history analysis before writing the node

When creating nodes for directories with merged PRs, automatically run PR mining to pre-populate:

**Trigger**: Creating AGENTS.md for a directory with merged PRs
**Action**: Run pr-review-mining alongside git-history and merge the findings

When reviewing PRs that modify code covered by the Intent Layer:

**Trigger**: PR touches files under an AGENTS.md
**Action**: Run pr-review with `--ai-generated` if applicable

When you discover a non-obvious gotcha during any work (not just Intent Layer setup):

**Trigger**: Fixing an error caused by non-obvious behavior (API format, config quirk, etc.)
**Action**: Append the pitfall to the nearest AGENTS.md
Format:
### Short descriptive title
**Problem**: What assumption failed
**Symptom**: Error message or unexpected behavior
**Solution**: Correct approach with code reference
Example: after fixing `'list' object has no attribute 'get'` because the Claude CLI output format varies:
### Claude CLI JSON output format varies
**Problem**: `claude --output-format json` can return dict or list
**Symptom**: `'list' object has no attribute 'get'`
**Solution**: Check `isinstance(data, list)` before `.get()`. See `lib/claude_runner.py:parse_claude_output()`
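The pitfall above can be captured as a defensive parser. A minimal sketch — the function name echoes the pitfall's reference, but the output shape and `result` key here are illustrative assumptions, not the real `lib/claude_runner.py` API:

```python
import json

def parse_claude_output(raw: str):
    """Normalize `claude --output-format json` output, which may be
    a single dict or a list of message objects."""
    data = json.loads(raw)
    # Guard before calling .get(): a bare list has no such attribute.
    if isinstance(data, list):
        # Assumed shape: treat the last dict entry as the final message.
        data = next((item for item in reversed(data) if isinstance(item, dict)), {})
    return data.get("result")

print(parse_claude_output('{"result": "ok"}'))                     # ok
print(parse_claude_output('[{"type": "msg", "result": "done"}]'))  # done
```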
scripts/detect_state.sh /path/to/project
| State | Action |
|---|---|
| none | Create root file (continue below) |
| partial | Add Intent Layer section to existing root |
| complete | Use the intent-layer-maintenance skill instead |
# Auto-discover all candidate directories
scripts/estimate_all_candidates.sh /path/to/project
# Or analyze structure first
scripts/analyze_structure.sh /path/to/project
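If the scripts aren't at hand, the estimate can be approximated directly. A rough sketch assuming the common ~4-characters-per-token heuristic — not necessarily what `estimate_all_candidates.sh` actually computes:

```python
import os

def estimate_tokens(directory: str) -> int:
    """Approximate token count for a directory: total file bytes / 4."""
    total = 0
    for root, _dirs, files in os.walk(directory):
        for name in files:
            try:
                total += os.path.getsize(os.path.join(root, name))
            except OSError:
                pass  # skip unreadable files
    return total // 4

# Directories over ~20k tokens are candidates for their own AGENTS.md.
```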
Choose CLAUDE.md (Anthropic) or AGENTS.md (cross-tool). Pick template by size:
- `references/templates.md` → Small Project
- `references/templates.md` → Medium Project
- `references/templates.md` → Large Project

| Signal | Action |
|---|---|
| Directory >20k tokens | Create AGENTS.md |
| Responsibility shift | Create AGENTS.md |
| Cross-cutting concern | Document at nearest common ancestor |
Use child template from references/templates.md.
scripts/validate_node.sh CLAUDE.md
scripts/validate_node.sh path/to/AGENTS.md
Checks: token count <4k, required sections, no absolute paths, no TODOs.
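Those checks can be sketched in a few lines. The required-section names and the 4-chars-per-token estimate below are assumptions for illustration, not what `validate_node.sh` actually implements:

```python
import re

def validate_node(text: str, max_tokens: int = 4000) -> list[str]:
    problems = []
    if len(text) // 4 > max_tokens:                     # rough token estimate
        problems.append(f"over token budget (~{len(text) // 4} tokens)")
    for section in ("## Code Map", "## Pitfalls"):       # assumed required set
        if section not in text:
            problems.append(f"missing section: {section}")
    if re.search(r"(^|[\s(])/(?:home|Users)/", text):    # absolute paths
        problems.append("absolute path found")
    if "TODO" in text:
        problems.append("unresolved TODO")
    return problems

node = "## Code Map\n...\n## Pitfalls\nTODO: fill in\n"
print(validate_node(node))  # ['unresolved TODO']
```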
ln -s CLAUDE.md AGENTS.md # If CLAUDE.md is primary
Step-by-step guided setup with prompts at each decision point.
Run scripts/detect_state.sh, then based on result:
| State | Action |
|---|---|
| none | Ask user: "Create CLAUDE.md or AGENTS.md as root?" |
| partial | Ask user: "What's the one-line TL;DR for this project?" |
| complete | Redirect to the intent-layer-maintenance skill |
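The branching above is a three-way switch on the detected state. A sketch assuming `detect_state.sh` prints one of `none`, `partial`, or `complete` (the state is hard-coded here for illustration):

```shell
state="partial"   # in practice: state=$(scripts/detect_state.sh /path/to/project)
case "$state" in
  none)     prompt="Create CLAUDE.md or AGENTS.md as root?" ;;
  partial)  prompt="What's the one-line TL;DR for this project?" ;;
  complete) prompt="Redirecting to intent-layer-maintenance skill" ;;
esac
echo "$prompt"
```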
Run scripts/estimate_all_candidates.sh, then:
Before creating each node, automatically analyze both git history AND PR discussions using the mining scripts. Git-mined pitfalls are consistently the most valuable content in any node. Pitfalls discovered from real bugs (commit fixes, reverts, migrations) are far more useful than pitfalls guessed from reading code.
# Git commit analysis (extracts pitfalls, anti-patterns, decisions, contracts)
${CLAUDE_PLUGIN_ROOT}/scripts/mine_git_history.sh [directory]
# GitHub PR analysis (requires gh CLI)
${CLAUDE_PLUGIN_ROOT}/scripts/mine_pr_reviews.sh --limit 50
# Check for stale nodes (during maintenance)
${CLAUDE_PLUGIN_ROOT}/scripts/detect_staleness.sh --code-changes [directory]
`mine_git_history.sh` extracts pitfalls, anti-patterns, decisions, and contracts from commits.
`mine_pr_reviews.sh` extracts pitfalls, contracts, and architecture decisions from PR descriptions and review comments.
Present merged findings to user: "History suggests these pitfalls: [list]. Include them?"
For the root node: use the size-appropriate root template from `references/templates.md`.

For each child node: use the child template from `references/templates.md`.

For directories with a history of mistakes or complex operations:
Ask: "What operations in this directory have caused problems before?"
For each risky operation identified:
Mine from history:
# Find reverts and fixes that suggest missing checks
${CLAUDE_PLUGIN_ROOT}/scripts/mine_git_history.sh [directory] | grep -i "fix\|revert"
Present findings: "These commits suggest potential checks: [list]. Add any?"
Run scripts/validate_node.sh on all created nodes:
Ask user: "Create symlink for cross-tool compatibility? (AGENTS.md → CLAUDE.md)"
If yes: `ln -s CLAUDE.md AGENTS.md`
For codebases >200k tokens, use parallel subagents to dramatically speed up exploration.
| Codebase Size | Approach |
|---|---|
| <100k tokens | Sequential (standard workflow) |
| 100-500k tokens | Parallel exploration, sequential synthesis |
| >500k tokens | Full parallel mode (explore + validate) |
Run structure analysis first:
scripts/analyze_structure.sh /path/to/project
Identify 3-6 major subsystems from the output (e.g., src/api/, src/core/, src/db/).
Spawn subagents for code exploration, git history, AND PR mining in parallel:
# Code exploration (one per subsystem)
Task 1: "Analyze src/api/ for Intent Layer setup. Find: code map (find-it-fast
+ key relationships), public API (exports used by others + core types),
external dependencies, data flow, entry points, contracts, patterns,
pitfalls. Return structured findings per section."
Task 2: "Analyze src/core/ for Intent Layer setup. Find: code map (find-it-fast
+ key relationships), public API (exports used by others + core types),
external dependencies, data flow, entry points, contracts, patterns,
pitfalls. Return structured findings per section."
Task 3: "Analyze src/db/ for Intent Layer setup. Find: code map (find-it-fast
+ key relationships), public API (exports used by others + core types),
external dependencies, data flow, entry points, contracts, patterns,
pitfalls. Return structured findings per section."
# Git history analysis (parallel with exploration)
Task 4: "Run git-history analysis on src/api/. Find bug fixes, reverts,
refactors, and breaking changes. Return as Intent Layer findings."
Task 5: "Run git-history analysis on src/core/. Find bug fixes, reverts,
refactors, and breaking changes. Return as Intent Layer findings."
Task 6: "Run git-history analysis on src/db/. Find bug fixes, reverts,
refactors, and breaking changes. Return as Intent Layer findings."
# PR review mining (parallel with above)
Task 7: "Run pr-review-mining on src/api/. Extract from PR descriptions
and review comments: pitfalls, contracts, architecture decisions.
Return as Intent Layer findings with PR numbers."
Task 8: "Run pr-review-mining on src/core/. Extract from PR descriptions
and review comments: pitfalls, contracts, architecture decisions.
Return as Intent Layer findings with PR numbers."
Task 9: "Run pr-review-mining on src/db/. Extract from PR descriptions
and review comments: pitfalls, contracts, architecture decisions.
Return as Intent Layer findings with PR numbers."
Critical: Launch all agents in parallel (single message with multiple Task calls).
Once all agents complete:
Validate all nodes in parallel:
Task 1: "Run validate_node.sh on CLAUDE.md, report results"
Task 2: "Run validate_node.sh on src/api/AGENTS.md, report results"
Task 3: "Run validate_node.sh on src/core/AGENTS.md, report results"
For each subsystem, use this structured prompt:
Explore [DIRECTORY] for Intent Layer documentation. Return:
## Design Rationale
[Why does this module exist? What problem does it solve? What's the core insight?]
## Code Map
### Find It Fast
| Looking for... | Go to |
[What common searches map to which files? Focus on non-obvious locations.]
### Key Relationships
[Import direction, layer rules, what depends on what]
## Public API
### Key Exports
| Export | Used By | Change Impact |
[What do OTHER modules import from here?]
### Core Types
[The 3-5 types needed to understand this area]
## External Dependencies
| Service | Used For | Failure Mode |
[External services and what happens when down]
## Data Flow
[How requests/data move through this area - simple diagram]
## Decisions
| Decision | Why | Rejected |
[Architectural choices with rationale]
## Entry Points
| Task | Start Here |
[Common tasks and where to start]
## Contracts
[Non-type-enforced invariants]
## Patterns
[How to do common tasks - sequence and non-obvious steps]
## Pitfalls
[What looks wrong but isn't? What looks fine but breaks?]
Keep findings specific to this directory. Note cross-cutting concerns separately.
| Metric | Sequential | Parallel |
|---|---|---|
| 500k token codebase | ~30 min | ~10 min |
| 1M+ token codebase | ~60 min | ~15 min |
| Subsystem coverage | Variable | Consistent |
For projects WITHOUT existing code. Write Intent Nodes as specs, then scaffold.
Use "Spec Root Template" from references/templates.md:
For each planned subsystem, create AGENTS.md with:
Ask Claude to scaffold against the specs:
Build incrementally:
When implementation complete:
Run `validate_node.sh` and transition to the maintenance skill.

AI agents reading raw code lack the tribal knowledge that experienced engineers have:
Intent Nodes (CLAUDE.md/AGENTS.md files) provide compressed, high-signal context that tells agents what matters without reading thousands of lines of code.
CLAUDE.md and AGENTS.md should NOT coexist at project root. Pick one and symlink the other for cross-tool compatibility.
Subdirectory nodes should be AGENTS.md (not CLAUDE.md) for cross-tool compatibility.
**Keep**: code map (non-obvious locations), public API (what others depend on), external dependencies, data flow, decisions with rationale, contracts, patterns (non-obvious steps), pitfalls.

**Delete**: obvious mappings (routes.ts → routes), type-enforced invariants, internal exports, standard patterns, tech stack lists.
| Codebase Size | Experienced | Newcomer |
|---|---|---|
| <50k tokens | 1-2 hours | 3-5 hours |
| 50-150k tokens | 3-5 hours | 6-10 hours |
| >150k tokens | 5-10 hours | 10-20 hours |
Budget additional time for SME interviews—tribal knowledge takes conversation to extract.
| Script | Purpose |
|---|---|
| detect_state.sh | Check Intent Layer state (none/partial/complete) |
| analyze_structure.sh | Find semantic boundaries |
| estimate_tokens.sh | Measure a single directory |
| estimate_all_candidates.sh | Measure all candidates at once |
| validate_node.sh | Check node quality before committing |
| capture_pain_points.sh | Generate maintenance capture template |
| detect_changes.sh | Find affected nodes on merge/PR |
| show_status.sh | Health dashboard with metrics and recommendations |
| show_hierarchy.sh | Visual tree display of all nodes |
| review_pr.sh | Review a PR against the Intent Layer |
| capture_mistake.sh | Generate a mistake report for check extraction |
| Sub-Skill | Location | Purpose |
|---|---|---|
| git-history | `git-history/SKILL.md` | Extract pitfalls/contracts from commit history |
| pr-review-mining | `pr-review-mining/SKILL.md` | Extract pitfalls/contracts from PR discussions |
| pr-review | `pr-review/SKILL.md` | Review PRs against Intent Layer contracts |
| File | Purpose |
|---|---|
| templates.md | Root (S/M/L) and child templates, three-tier boundaries |
| node-examples.md | Real-world examples |
| capture-protocol.md | SME interview questions |
| compression-techniques.md | How to achieve 100:1 compression, LCA placement |
| agent-feedback-protocol.md | Continuous improvement loop |
TL;DR: Ask these when documenting existing code. Focus on what agents can't infer from code itself.
For full protocol: references/capture-protocol.md
TL;DR: >20k tokens or responsibility shift → create. Simple utilities → don't.
| Signal | Action |
|---|---|
| >20k tokens in directory | Create AGENTS.md |
| Responsibility shift (different owner/concern) | Create AGENTS.md |
| Hidden contracts/invariants | Document in nearest ancestor |
| Cross-cutting concern | Place at lowest common ancestor |
Do NOT create for:
TL;DR: Testable verifications before risky operations. Add when mistakes reveal missing checks.
Pre-flight checks are verifiable assertions an agent runs before modifying code. They catch "I thought I understood" mistakes.
| Signal | Action |
|---|---|
| Mistake happened | Write check that would have caught it |
| PR reviewer flagged missing step | Convert to check |
| Complex multi-step operation | Add checks for each step |
| Critical/irreversible operation | Add comprehension + human gate |
See references/templates.md → Writing Pre-flight Checks for the standard format.
# Find commits suggesting missing verifications
scripts/mine_git_history.sh --since "6 months ago" [directory] | grep -E "fix|broke|forgot"
Look for patterns like "forgot to update X" or "broke Y because didn't check Z".
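For example, with hypothetical commit subjects (real usage pipes `git log --oneline` through the same filter):

```shell
# Count commit subjects that hint at a missing pre-flight check.
matches=$(printf '%s\n' \
  "fix: forgot to update schema after adding a column" \
  "feat: add user endpoint" \
  "revert: broke deploy because env vars were not checked" \
  | grep -Eic "forgot to update|broke .* because")
echo "$matches"   # 2
```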
TL;DR: Start at leaves, work up to root. Clarity compounds upward.
Always capture leaf-first, easy-to-hard:

1. Start with the deepest directories (most concrete)
2. Work up to parent nodes (summarize children)
3. Finish with the root (summarize the entire hierarchy)
Why this order?
Anti-pattern: Starting at root and working down leads to vague descriptions that need constant revision as you discover what's actually in the code.
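The leaf-first ordering is just a depth sort. A minimal sketch with hypothetical paths, mirroring the leaf-first order `detect_changes.sh` reports:

```python
paths = ["src", "src/api", "src/api/v2", "src/core"]
# Deepest directories first, root last; ties keep their original order.
leaf_first = sorted(paths, key=lambda p: p.count("/"), reverse=True)
print(leaf_first)  # ['src/api/v2', 'src/api', 'src/core', 'src']
```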
TL;DR: Agents surface missing context during work → humans review → Intent Layer improves → future agents start better.
Agent works → Finds gap → Surfaces finding → Human reviews → Node updated → Future agents benefit
When you encounter gaps while working, surface them using the format in references/agent-feedback-protocol.md:
### Intent Layer Feedback
| Type | Location | Finding |
|------|----------|---------|
| Missing pitfall | `src/api/AGENTS.md` | Rate limiter fails silently when Redis down |
Run change detection to identify which nodes need review:
scripts/detect_changes.sh main HEAD
This outputs affected nodes in leaf-first order for systematic review.
See references/agent-feedback-protocol.md for:
When agents surface mistakes, evaluate for check conversion:
```
Mistake surfaced → "Would a check have caught this?"
                      │
        ┌─────────────┴─────────────┐
        │                           │
       Yes                          No
        │                           │
        ▼                           ▼
Write Pre-flight Check       Add to Pitfalls
        │                    (awareness only)
        ▼
Add to AGENTS.md Pre-flight section
```
Check conversion template:
Mistake: [What happened]
Operation: [What agent was doing]
Check: Before [operation] → [verification that would have caught it]
Use scripts/capture_mistake.sh to generate structured mistake reports.
TL;DR: Update nodes when behavior changes, not just when code changes.
When files change (e.g., on merge):
scripts/detect_changes.sh base head

For the full maintenance workflow, use the intent-layer-maintenance skill.
After completing initial setup (state = complete):
| Trigger | Action |
|---|---|
| Quarterly | Run intent-layer-maintenance skill |
| Post-incident | Update Pitfalls + Contracts |
| After refactor | Update Entry Points + Subsystem Boundaries |
| After new feature | Update Architecture Decisions + Patterns |
| PR Review | Auto-invoke pr-review sub-skill |
When reviewing PRs that touch files covered by Intent Layer nodes:
# Automatically run pr-review
scripts/review_pr.sh main HEAD --ai-generated
The pr-review sub-skill will:
- name: Check Intent Layer
  run: ${CLAUDE_PLUGIN_ROOT}/scripts/detect_state.sh .
- name: PR Review (if Intent Layer exists)
  if: github.event_name == 'pull_request'
  run: |
    ${CLAUDE_PLUGIN_ROOT}/scripts/review_pr.sh origin/main HEAD --exit-code
| Skill | Use When |
|---|---|
| intent-layer-maintenance | Quarterly audits, post-incident updates |
| intent-layer-query | Asking questions about the codebase |
| intent-layer-onboarding | Orienting newcomers |
| git-history (sub-skill) | Mining commit history for insights |
| pr-review-mining (sub-skill) | Mining PR discussions for insights |
| pr-review (sub-skill) | Reviewing PRs against the Intent Layer |