From arc
Discovers architectural friction like shallow modules and coupling, proposes deep-module refactors with competing interface designs, and creates GitHub RFC issues. Use for improving architecture or testability.
npx claudepluginhub howells/arc --plugin arc

This skill uses the workspace's default tool permissions.
<tool_restrictions>
EnterPlanMode — BANNED. Do NOT call this tool. This skill has its own structured process.
ExitPlanMode — BANNED. You are never in plan mode.
</tool_restrictions>

<arc_runtime>
This workflow requires the full Arc bundle, not a prompts-only install.
Resolve the Arc install root from this skill's location and refer to it as ${ARC_ROOT}.
Use ${ARC_ROOT}/... for Arc-owned files.
Use project-local paths such as .ruler/ or rules/ for the user's repository.
</arc_runtime>
<required_reading> Before starting, read these references:
${ARC_ROOT}/references/architecture-patterns.md — import depth rules, boundary violations
${ARC_ROOT}/references/component-design.md — compound vs simple component patterns
</required_reading>

Discover structural friction, propose deep-module refactors, and create RFC issues.
From John Ousterhout's A Philosophy of Software Design:
A deep module has a small interface hiding a large implementation.
A shallow module has an interface nearly as complex as its implementation.
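The contrast can be sketched in code. This is an illustrative example only; the class names and token logic are invented for the sketch, not taken from any real codebase.

```typescript
// Shallow: the interface mirrors the implementation, so every caller must
// know the right order of parse, validate, cache.
class ShallowSession {
  private cache = "";
  parseToken(raw: string): string {
    return raw.trim();
  }
  validateToken(token: string): boolean {
    return token.length > 0;
  }
  cacheToken(token: string): void {
    this.cache = token;
  }
}

// Deep: one small entry point hides the same three steps.
class DeepSession {
  private cache = "";
  authenticate(raw: string): boolean {
    const token = raw.trim();             // parse
    if (token.length === 0) return false; // validate
    this.cache = token;                   // cache
    return true;
  }
}
```

Both classes do the same work; the deep one asks far less of its callers.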
Use the Agent tool with subagent_type=Explore to navigate the codebase. If the user provided a
path or focus area, start there. Otherwise, explore broadly.
Do NOT follow rigid heuristics. Explore organically and note where you experience friction.
The friction you encounter IS the signal.
Present a numbered list of deepening opportunities. For each candidate:
| Field | Description |
|---|---|
| Cluster | Which modules/concepts are involved |
| Why they're coupled | Shared types, call patterns, co-ownership of a concept |
| Dependency category | See categories below |
| Import depth | Max relative import depth between coupled modules |
| Test impact | What existing tests would be replaced by boundary tests |
| Severity | How much this coupling costs day-to-day |
Ask the user: "Which of these would you like to explore?"
Before spawning design agents, write a user-facing explanation of the chosen candidate.
Show this to the user, then immediately proceed to Step 4.
Spawn 3+ sub-agents in parallel using the Agent tool. Each must produce a radically different interface for the deepened module.
Give each agent a technical brief (file paths, coupling details, dependency category, what's being hidden) plus a different design constraint:
| Agent | Constraint |
|---|---|
| Agent 1 | "Minimise the interface — aim for 1-3 entry points max" |
| Agent 2 | "Maximise flexibility — support many use cases and extension" |
| Agent 3 | "Optimise for the most common caller — make the default case trivial" |
| Agent 4 (if applicable) | "Design around ports & adapters for cross-boundary dependencies" |
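To make the constraints concrete, here is what "radically different interfaces" can look like for one hypothetical notifications module. Every name below is illustrative, invented for this sketch.

```typescript
// Agent 1: minimise the interface. A single entry point.
interface Notifier {
  notify(userId: string, message: string): Promise<void>;
}

// Agent 2: maximise flexibility. Channels and scheduling are first-class.
interface FlexibleNotifier {
  send(channel: "email" | "sms", userId: string, message: string): Promise<void>;
  schedule(channel: "email" | "sms", userId: string, message: string, at: Date): Promise<void>;
}

// Agent 3: optimise the common caller. A bare function, defaults baked in.
const sent: string[] = [];
async function notifyUser(userId: string, message: string): Promise<void> {
  sent.push(`${userId}: ${message}`); // stand-in for the hidden delivery logic
}
```

The point of the exercise is that these are genuinely different shapes, not variations on one design.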
Each sub-agent outputs its proposed interface design.
Present all designs, then compare them in prose. Give your own recommendation — which design is strongest and why. If elements from different designs combine well, propose a hybrid. Be opinionated.
Create a refactor RFC as a GitHub issue using gh issue create:
## Problem
[Describe the architectural friction — which modules are shallow and coupled,
what integration risk exists, why this makes the codebase harder to navigate]
## Proposed Interface
[The chosen interface design — signature, usage example, what it hides]
## Dependency Strategy
[Which category applies and how dependencies are handled]
## Testing Strategy
- **New boundary tests to write**: [behaviours to verify at the interface]
- **Old tests to delete**: [shallow module tests that become redundant]
- **Test environment needs**: [local stand-ins or adapters required]
## Implementation Recommendations
[Durable guidance NOT coupled to current file paths:
- What the module should own (responsibilities)
- What it should hide (implementation details)
- What it should expose (the interface contract)
- How callers should migrate]
Do NOT ask the user to review before creating — just create it and share the URL.
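A minimal sketch of the creation step, assuming the `gh` CLI is authenticated against the target repo. The title, body text, and file path are illustrative; the body is drafted to a file first so it matches the RFC template.

```shell
# Draft the issue body following the RFC template.
cat > /tmp/rfc-body.md <<'EOF'
## Problem
The session and token modules are shallow and tightly coupled.

## Proposed Interface
authenticate(raw: string): boolean, one entry point hiding parse/validate/cache.

## Dependency Strategy
In-memory logic only; merge the modules and test directly.

## Testing Strategy
- **New boundary tests to write**: authenticate() accepts valid tokens, rejects empty ones
- **Old tests to delete**: per-helper tests for parseToken/validateToken
- **Test environment needs**: none

## Implementation Recommendations
The module owns the token lifecycle; callers migrate to the single entry point.
EOF

# The gh command is echoed rather than executed here so the sketch is safe
# to run outside a real repository; drop the echo to actually create the issue.
echo gh issue create --title "RFC: deepen the session module" --body-file /tmp/rfc-body.md
```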
When assessing a candidate, classify its dependencies:
**In-memory logic**: Pure computation, in-memory state, no I/O. Always deepenable — merge the modules and test directly.
**Local stand-in**: Dependencies with local test stand-ins (PGLite for Postgres, in-memory filesystem). Deepenable if the stand-in exists. Test with the local stand-in running in the test suite.
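A sketch of the stand-in category, with all names hypothetical: the real module would talk to a filesystem, and the test suite hands it an in-memory stand-in instead.

```typescript
// Port for the filesystem dependency.
interface FileStore {
  read(path: string): string | undefined;
  write(path: string, data: string): void;
}

// In-memory stand-in that runs inside the test suite.
class InMemoryFileStore implements FileStore {
  private files = new Map<string, string>();
  read(path: string): string | undefined {
    return this.files.get(path);
  }
  write(path: string, data: string): void {
    this.files.set(path, data);
  }
}

// The deepened module takes the store as a constructor argument, so tests
// can run entirely in memory while production injects the real filesystem.
class ConfigModule {
  constructor(private store: FileStore) {}
  get(key: string): string | undefined {
    return this.store.read(`/config/${key}`);
  }
}
```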
**Internal service**: Your own services across a network boundary. Define a port (interface) at the module boundary. The deep module owns the logic; the transport is injected. Tests use an in-memory adapter.
**External service**: Third-party services (Stripe, Twilio) you don't control. Mock at the boundary. The deepened module takes the external dependency as an injected port; tests provide a mock.
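The injected-port pattern for both cross-boundary categories can be sketched as follows; the port, module, and fake are all hypothetical names, and the production adapter wrapping the real SDK or HTTP client is omitted.

```typescript
// Port owned by the deep module; returns an opaque charge id.
interface PaymentPort {
  charge(customerId: string, amountCents: number): Promise<string>;
}

// Test adapter: in-memory and deterministic.
class FakePaymentPort implements PaymentPort {
  charges: Array<{ customerId: string; amountCents: number }> = [];
  async charge(customerId: string, amountCents: number): Promise<string> {
    this.charges.push({ customerId, amountCents });
    return `ch_${this.charges.length}`;
  }
}

// The deepened module depends only on the port, never on the vendor SDK.
class CheckoutModule {
  constructor(private payments: PaymentPort) {}
  async checkout(customerId: string, cartTotalCents: number): Promise<string> {
    if (cartTotalCents <= 0) throw new Error("empty cart");
    return this.payments.charge(customerId, cartTotalCents);
  }
}
```

The boundary tests exercise `CheckoutModule` through the fake, which is exactly the "replace, don't layer" outcome: the old per-helper tests go away and the port becomes the only seam.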
The core principle: replace, don't layer.
From the architecture patterns reference:
| Signal | What it means |
|---|---|
| 5+ levels of ../ imports | Code is reaching across boundaries |
| Barrel file re-exporting everything | Hiding the real dependency graph |
| Test file longer than source file | Testing internals, not behaviour |
| "Utils" folder with 20+ files | Shallow modules masquerading as shared code |
| Type file imported by 10+ modules | Hidden coupling through shared types |
| Feature spread across 8+ files | Over-decomposition, shallow modules |
| Mock setup longer than test body | Integration seams are in the wrong place |