Use TodoWrite to track progress through phases.
# Version control
git rev-parse --git-dir 2>/dev/null && echo "git:yes" || echo "git:none"
git remote -v 2>/dev/null | head -2
# Existing state
test -d .vibe && echo "vibe:exists" || echo "vibe:none"
test -d debug && echo "debug:exists" || echo "debug:none"
# Test framework detection (order matters - first match wins)
test -f pytest.ini && echo "test:pytest" || \
(grep -q pytest pyproject.toml 2>/dev/null && echo "test:pytest") || \
(test -f vitest.config.ts -o -f vitest.config.js && echo "test:vitest") || \
(test -f jest.config.ts -o -f jest.config.js && echo "test:jest") || \
(node -e "const p=require('./package.json');if(p.scripts?.test){console.log('test:'+p.scripts.test);process.exit(0)}else{process.exit(1)}" 2>/dev/null) || \
(grep -q "^test:" Makefile 2>/dev/null && echo "test:make") || \
(test -f go.mod && echo "test:go") || \
(test -f Cargo.toml && echo "test:cargo") || \
echo "test:none"
# Language detection
find . -maxdepth 3 -name "*.py" -not -path "./.git/*" -not -path "./.vibe/*" 2>/dev/null | head -1 | grep -q . && echo "lang:python"
find . -maxdepth 3 \( -name "*.ts" -o -name "*.tsx" \) -not -path "./.git/*" -not -path "./.vibe/*" -not -path "./node_modules/*" 2>/dev/null | head -1 | grep -q . && echo "lang:typescript"
find . -maxdepth 3 \( -name "*.js" -o -name "*.jsx" \) -not -path "./.git/*" -not -path "./.vibe/*" -not -path "./node_modules/*" 2>/dev/null | head -1 | grep -q . && echo "lang:javascript"
find . -maxdepth 3 -name "*.go" 2>/dev/null | head -1 | grep -q . && echo "lang:go"
find . -maxdepth 3 -name "*.rs" 2>/dev/null | head -1 | grep -q . && echo "lang:rust"
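The framework chain above is first-match-wins, so ordering the specific configs before the generic fallbacks matters. An abbreviated demo in a Go-only scratch repo (a sketch; the real chain has all eight branches):

```shell
# Demo: first-match-wins fallthrough in a Go-only scratch repo
cd "$(mktemp -d)"
touch go.mod
test -f pytest.ini && echo "test:pytest" || \
  (test -f go.mod && echo "test:go") || \
  (test -f Cargo.toml && echo "test:cargo") || \
  echo "test:none"
```

With no `pytest.ini` present the first branch fails, the Go branch matches, and the Cargo and fallback branches are never evaluated.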
Check git status and offer setup if needed:
- No repo: offer `git init`
- No remote: offer `gh repo create --source=. --push` with appropriate flags

If .vibe/ exists: do incremental update (act like /vibe:refresh). Report: "Existing .vibe/ found. Updating."
If not: create full structure:
mkdir -p .vibe/docs
Create these files (content below):
- .vibe/understanding.md - populated in Phase 3
- .vibe/current.md - "# No active task"
- .vibe/bugs.md - see template below
- .vibe/risks.md - populated in Phase 4
- .vibe/future.md - "# Future Plans"
- .vibe/decisions.md - see template below

bugs.md template:
# Bugs
Next ID: 1
## Open
## Deferred
## Resolved
decisions.md template:
# Decisions
## Technical Decisions
## Assumptions
## Learned Lessons
## Plan Archive
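The scaffold steps above can be sketched as a single script (template bodies abbreviated to their headers; the demo runs in a scratch directory rather than the project root):

```shell
# Sketch: create the .vibe/ skeleton with the templates above
cd "$(mktemp -d)"            # scratch dir for the demo; in practice, the project root
mkdir -p .vibe/docs
printf '# No active task\n'  > .vibe/current.md
printf '# Future Plans\n'    > .vibe/future.md
printf '# Bugs\nNext ID: 1\n\n## Open\n\n## Deferred\n\n## Resolved\n' > .vibe/bugs.md
printf '# Decisions\n\n## Technical Decisions\n\n## Assumptions\n\n## Learned Lessons\n\n## Plan Archive\n' > .vibe/decisions.md
: > .vibe/understanding.md   # populated in Phase 3
: > .vibe/risks.md           # populated in Phase 4
```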
Scan codebase and write findings to understanding.md. Use token-efficient format.
Exclude directories (node_modules, .git, __pycache__, .venv, venv, build, dist, .pytest_cache, debug/, .vibe/).

Write understanding.md in this format:

# project-name
Last: YYYY-MM-DD | N files | Languages
## Stack
One line per technology. Framework + version + purpose.
## Architecture
2-3 sentences. Type, structure, key directories.
## Components
One line per component: Name: path. Purpose. Key deps.
## Patterns
- Bullet per pattern observed
## Tests
Framework, pattern, run command.
## Docs Index
- [Name](docs/path) - Description
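A filled-in example for a hypothetical Python project (every name and detail below is illustrative, not prescribed):

```
# acme-api
Last: 2025-01-15 | 84 files | Python

## Stack
FastAPI 0.110 - HTTP API. SQLAlchemy 2.0 - ORM.

## Architecture
Monolithic REST service. Routers in api/, domain logic in core/, persistence in db/.

## Components
api: api/. HTTP routes and request validation. Depends on core.
core: core/. Business rules. Depends on db.

## Patterns
- Dependency injection via FastAPI Depends
- Pydantic models at all I/O boundaries

## Tests
pytest, one test module per component, run: pytest debug/ -q

## Docs Index
- [API Spec](docs/api-spec.md) - Endpoint contracts
```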
Run risk-detection skill patterns against codebase using Grep. Write findings to risks.md:
# Risks
Next ID: RN | Last scan: DATE | Baseline: counts per level
## Critical
#R1 [CRITICAL] Description. file:line (found DATE)
## High
...
## Medium
...
## Low
...
## Resolved
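The scan itself is a series of Grep passes. A sketch with one illustrative hardcoded-credential pattern, run against a seeded scratch directory (the risk-detection skill defines the real pattern set):

```shell
# Demo: one example risk pattern against a seeded scratch dir
cd "$(mktemp -d)"
printf 'api_key = "deadbeef"\n' > config.py   # seeded finding for the demo
grep -rnE "(password|secret|api_key)[[:space:]]*=[[:space:]]*['\"]" . \
  --include="*.py" --exclude-dir=.git --exclude-dir=.vibe --exclude-dir=node_modules
```

Each match supplies the `file:line` for a risks.md entry; the severity level comes from the skill's classification, not the grep itself.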
If .vibe/docs/ contains files:
- Index them in the understanding.md Docs Index
- For non-markdown docs, create a companion .md with extracted text

If test framework already detected from pre-compute: use it. Report which one.
If no framework detected: ask user with recommendation based on languages:
- Go: go test (built-in, no choice needed)
- Rust: cargo test (built-in, no choice needed)

Record choice in understanding.md Tests section.
If debug/ exists with user-written tests: ask "Merge existing tests into generated suite or start fresh?" Default: merge.
Create debug/ structure based on languages:
- Single language: flat layout (debug/conftest.py or debug/setup.ts + test files)
- Multiple languages: per-language subdirectories (debug/python/, debug/typescript/)

Generate initial tests covering:
Run debug/ suite to verify baseline passes.
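A baseline-check sketch; `run_baseline` is a hypothetical helper, and the command string should be whatever runner was recorded in the understanding.md Tests section:

```shell
# Hypothetical helper: run the recorded test command and report the baseline
run_baseline() {
  if $1; then
    echo "baseline: pass"
  else
    echo "baseline: FAIL - investigate before continuing"
  fi
}
run_baseline "true"   # placeholder; e.g. run_baseline "pytest debug/ -q"
```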
Report: