Use to configure a project for the ritual plugin -- detects the tech stack, sets architecture rules, discovers versioning manifests, and collects integration toggles and custom directives.
npx claudepluginhub yanekyuk/arcana --plugin ritual
You are configuring a project for the ritual plugin. This is an interactive wizard -- you WILL ask the user questions and wait for responses.
test -f docs/ritual-config.json && echo "CONFIG EXISTS" || echo "NO CONFIG"
If config exists, read it and ask the user: "Existing config found. Do you want to reconfigure from scratch or edit the current config?" If they want to edit, load the existing values as defaults for each step below.
Scan project files to detect the tech stack. Check these files in order:
Language and runtime:
- package.json exists → language is likely TypeScript (check for typescript in devDependencies) or JavaScript
- Cargo.toml exists → Rust
- go.mod exists → Go
- pyproject.toml or setup.py or requirements.txt exists → Python
- build.gradle or pom.xml exists → Java/Kotlin
- mix.exs exists → Elixir

Runtime detection (for JS/TS projects):
- bun.lockb or bunfig.toml → Bun
- pnpm-lock.yaml → pnpm (Node)
- yarn.lock → Yarn (Node)
- package-lock.json → npm (Node)
- deno.json or deno.lock → Deno

Package manager: Infer from lockfile (same as runtime detection above).
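The lockfile-based runtime inference can be sketched as a small shell helper. This is illustrative only -- `detect_runtime` is a hypothetical name, and the wizard may perform these checks differently:

```shell
# Infer the JS/TS runtime from lockfiles in a directory.
# Order matters: the most specific marker wins.
detect_runtime() {
  dir="$1"
  if [ -f "$dir/bun.lockb" ] || [ -f "$dir/bunfig.toml" ]; then echo bun
  elif [ -f "$dir/pnpm-lock.yaml" ]; then echo pnpm
  elif [ -f "$dir/yarn.lock" ]; then echo yarn
  elif [ -f "$dir/package-lock.json" ]; then echo npm
  elif [ -f "$dir/deno.json" ] || [ -f "$dir/deno.lock" ]; then echo deno
  else echo unknown
  fi
}
```

A directory with no lockfile falls through to `unknown`, which is when the wizard should ask the user directly.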
Test runner:
- package.json scripts.test if present
- jest.config.*, vitest.config.*, .mocharc.*, pytest.ini, Cargo.toml (cargo test), go test

Linter/formatter:
- .eslintrc*, eslint.config.*, biome.json, .prettierrc*, rustfmt.toml, .golangci.yml, ruff.toml
- package.json for scripts.lint, scripts.format, scripts.typecheck

Source root:
- src/, lib/, app/ directories
- src if src/ exists, otherwise .

Present all detected values to the user in a clear table:
Detected tech stack:
Language: TypeScript
Runtime: Bun
Package manager: bun
Test command: bun test
Lint command: bun run lint
Format command: bun run format
Typecheck: bun run typecheck
Source root: src
Ask: "Does this look correct? You can confirm, or tell me what to change."
Wait for the user to confirm or provide overrides. Apply any overrides they specify.
Present the architecture preset options:
Architecture presets:
1. Layered — Classic layered architecture with domain/application/infrastructure separation
2. Hexagonal — Ports and adapters with strict dependency inversion
3. Vertical Slices — Feature-based modules with minimal cross-feature coupling
4. Custom — Define your own rules from scratch
5. None — Skip architecture rules entirely
Ask: "Which architecture preset would you like? (1-5)"
Wait for user response. Expand the selected preset into flat rules:
Layered:
Hexagonal:
Vertical Slices:
Custom:
None:
Set architecture.rules to an empty array.

After expanding, show the user the resulting rules and ask: "These are the architecture rules that will be enforced. Add, remove, or modify any? Or confirm to proceed."
Wait for user response. Apply any changes.
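For illustration only, the expanded flat rules for the Layered preset might resemble the fragment below. The rule text here is hypothetical -- the actual wording comes from the plugin's preset definitions:

```json
{
  "architecture": {
    "rules": [
      "domain/ must not import from application/ or infrastructure/",
      "application/ may import from domain/ but not from infrastructure/",
      "infrastructure/ implements interfaces declared in domain/ or application/"
    ]
  }
}
```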
Auto-discover version-bearing files across the project tree. Search for these manifest types:
- **/package.json -- look for "version" field (skip node_modules/)
- **/Cargo.toml -- look for version under [package]
- **/pyproject.toml -- look for version under [project] or [tool.poetry]
- **/setup.cfg -- look for version under [metadata]
- **/build.gradle / **/build.gradle.kts -- look for version = "X.Y.Z"
- **/version.txt -- entire file content is the version string

Use Glob to find candidate files, then Read each to confirm it contains a version field. Skip files in node_modules/, dist/, build/, .git/, and vendor directories.
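As a rough shell equivalent of the Glob-then-Read pass for package.json files (hypothetical helper name; assumes POSIX find and grep):

```shell
# List package.json files that actually carry a "version" field,
# pruning dependency and build directories from the walk.
list_version_manifests() {
  find "$1" \( -name node_modules -o -name dist -o -name build \
       -o -name .git -o -name vendor \) -prune \
       -o -name package.json -print |
  while read -r f; do
    # Keep only manifests that contain a version field.
    grep -q '"version"' "$f" && echo "$f"
  done
}
```

The same prune-then-filter shape applies to the other manifest types, with the filename and version pattern swapped out.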
Generate rule prompts from the detected layout:
- Single root manifest: "Bump <manifest> version for all changes"
- Per-package manifests (monorepo): "Bump <path>/<manifest> version for changes under <path>/"
- No manifests found: set versioning to an empty array and inform the user: "No version manifests detected. Versioning rules will be empty -- orchestrators will skip version bumps."

Present the generated rules to the user:
Detected versioning rules:
1. Bump package.json version for all changes
Or for monorepos:
Detected versioning rules:
1. Bump frontend/package.json version for changes under frontend/
2. Bump api/pyproject.toml version for changes under api/
Ask: "These are the versioning rules that tell orchestrators which version files to bump. Add, remove, or modify any? Or confirm to proceed."
Wait for user response. Apply any changes.
When editing an existing config: Load the current versioning array as defaults. Show them alongside any newly detected manifests. If new manifests are found that are not covered by existing rules, highlight them: "New version manifest detected at <path> -- not covered by existing rules. Add a rule for it?"
Directives are soft guidelines for the AI, organized by skill group. Walk through each group in order, explaining what it covers and collecting directives for it.
When editing an existing config: If the existing directives field is an object with group keys, show the current directives per group as defaults. If the existing directives field is a flat array (legacy format), migrate all entries to the implementation group and inform the user: "Existing directives were in the legacy flat format. They have been migrated to the implementation group. You can redistribute them during this setup."
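Assuming the config is plain JSON and jq is available, the legacy-to-grouped migration could be sketched as follows (`migrate_directives` is a hypothetical helper, not part of the plugin):

```shell
# If .directives is a flat array (legacy format), move every entry
# into the implementation group and leave the other groups empty.
migrate_directives() {
  jq 'if (.directives | type) == "array"
      then .directives = {implementation: .directives, review: [],
                          documentation: [], delivery: [], triage: []}
      else . end' "$1"
}
```

Configs already in the grouped object format pass through unchanged, so the helper is safe to run unconditionally.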
Present each group and collect directives:
Implementation directives apply to: run-tdd
These guide coding style, patterns to favor, and implementation preferences.
Ask: "Enter implementation directives (one per line), or say 'none' to skip."
Wait for user response. Collect directives until done.
Review directives apply to: run-self-review
These guide review focus areas, quality thresholds, and things to watch for.
Ask: "Enter review directives (one per line), or say 'none' to skip."
Wait for user response. Collect directives until done.
Documentation directives apply to: run-sync-docs, run-spec, run-domain-knowledge, run-design-decision
These guide doc style, detail level, and terminology preferences.
Ask: "Enter documentation directives (one per line), or say 'none' to skip."
Wait for user response. Collect directives until done.
Delivery directives apply to: run-open-pr, run-finish
These guide PR conventions, merge preferences, and changelog notes.
Ask: "Enter delivery directives (one per line), or say 'none' to skip."
Wait for user response. Collect directives until done.
Triage directives apply to: run-triage
These guide classification preferences, branch naming, and scope boundaries.
Ask: "Enter triage directives (one per line), or say 'none' to skip."
Wait for user response. Collect directives until done.
Present integration options:
Integration toggles (y/n for each):
1. CodeRabbit PR reviews — Automated AI code review on PRs
2. Linear — Issue tracking integration
3. GitHub Issues — Link PRs to GitHub Issues
4. Auto-docs — Automatically update documentation on changes
5. Context7 — Library documentation lookups via MCP during implementation
For each integration, ask the user and wait for their response. Default to the most common setup if they say "use defaults": GitHub Issues on, CodeRabbit off, Linear off, Context7 on, auto-docs on.
After the user confirms their integration selections, verify prerequisites for each enabled integration. Checks are advisory, not blocking — warn the user but do not prevent them from enabling the integration.
GitHub Issues:
which gh && gh auth status 2>&1 | head -5
If gh is not found, warn: "GitHub CLI is required for GitHub Issues integration. Install with brew install gh, then run gh auth login."
CodeRabbit: No executable check. Inform: "CodeRabbit requires the CodeRabbit GitHub App installed on this repository. Visit https://coderabbit.ai to set it up."
Linear:
claude mcp list 2>/dev/null | grep -qi linear
If not found, suggest: "Linear integration requires the Linear MCP server. Install with: claude mcp add linear -- npx -y @anthropic/linear-mcp@latest"
Context7:
claude mcp list 2>/dev/null | grep -qi context7
If not found, suggest: "Context7 integration requires the Context7 MCP server. Install with: claude mcp add context7 -- npx -y @upstash/context7-mcp@latest"
Auto-docs: No prerequisite — this is built-in behavior.
Assemble the final config object:
{
"stack": {
"language": "<detected or overridden>",
"runtime": "<detected or overridden>",
"packageManager": "<detected or overridden>",
"test": "<detected or overridden>",
"lint": "<detected or overridden>",
"format": "<detected or overridden>",
"typecheck": "<detected or overridden>"
},
"sourceRoot": "<detected or overridden>",
"integrations": {
"githubIssues": true,
"coderabbit": false,
"linear": false,
"context7": true,
"autoDocs": true
},
"architecture": {
"rules": [
"<expanded flat rules>"
]
},
"versioning": [
"<generated or user-edited versioning rules>"
],
"directives": {
"implementation": ["<implementation directives>"],
"review": ["<review directives>"],
"documentation": ["<documentation directives>"],
"delivery": ["<delivery directives>"],
"triage": ["<triage directives>"]
}
}
Write to docs/ritual-config.json:
# Ensure docs/ directory exists
mkdir -p docs
Use the Write tool to write the JSON file. Format it with 2-space indentation.
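Optionally, sanity-check that the written file parses as JSON before confirming to the user (assumes python3 is on PATH; `validate_config` is a hypothetical helper):

```shell
# Exit 0 if the file is valid JSON, nonzero otherwise.
validate_config() {
  python3 -c 'import json, sys; json.load(open(sys.argv[1]))' "$1"
}

validate_config docs/ritual-config.json || echo "WARNING: config is not valid JSON"
```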
Show the user the final config and confirm: "Config written to docs/ritual-config.json. You can edit this file directly at any time. Orchestrators will read it at the start of every pipeline run."