Bootstraps repo infrastructure and AI harness: detects languages/tools, installs missing ones, configures CI/CD/pre-commit hooks, discovers constraints from code/git, generates AGENTS.md/learnings, sets hookify rules. Audits existing harnesses for staleness.

```bash
SHIP_PLUGIN_ROOT="${SHIP_PLUGIN_ROOT:-$(ship-plugin-root 2>/dev/null || echo "$HOME/.codex/ship")}"
SHIP_SKILL_NAME=setup source "${SHIP_PLUGIN_ROOT}/scripts/preflight.sh"
```
Never:
- No user interaction in this phase.
- Verify git is available. If missing, stop.
- Run git rev-parse --is-inside-work-tree; if it fails, run git init.
- Scan repo files, then verify the package manager / build tool exists on PATH.
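The git checks can be sketched in plain shell (nothing here is Ship-specific):

```shell
# Sketch of the Phase 1 git checks: fail fast if git is missing,
# initialize a repository if we are not already inside one.
command -v git >/dev/null 2>&1 || { echo "git not found; stopping" >&2; exit 1; }
git rev-parse --is-inside-work-tree >/dev/null 2>&1 || git init
```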
| Language | File markers | Package manager / tool check |
|---|---|---|
| TypeScript / JavaScript | package.json, tsconfig.json, *.ts, *.tsx, *.js, *.jsx | npm, pnpm, yarn, bun |
| Python | pyproject.toml, requirements*.txt, setup.py, *.py | uv, poetry, pip, pip3 |
| Java | pom.xml, build.gradle*, *.java | mvn, gradle |
| C# | *.csproj, *.sln, *.cs | dotnet |
| Go | go.mod, *.go | go |
| Rust | Cargo.toml, *.rs | cargo |
| PHP | composer.json, *.php | composer |
| Ruby | Gemfile, *.rb | bundle, gem |
| Kotlin | build.gradle*, settings.gradle*, *.kt | gradle, mvn |
| Swift | Package.swift, *.swift, *.xcodeproj | swift, xcodebuild |
| Dart / Flutter | pubspec.yaml, *.dart | dart, flutter |
| Elixir | mix.exs, *.ex, *.exs | mix |
| Scala | build.sbt, *.scala | sbt, mill |
| C / C++ | CMakeLists.txt, Makefile, *.c, *.cc, *.cpp, *.h, *.hpp | cmake, make, detected compiler |
| Shell | *.sh, *.bash (no manifest) | bash, shellcheck (optional) |
If no language from the table above is detected, the repo may be
documentation-only, config-only, or use an unsupported language.
In that case: skip Install Tools and Pre-commit Hooks modules in
Phase 2 (mark as n/a), and proceed directly to Phase 3.5.
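The marker-based detection in the table above can be sketched as follows; the `detect` helper name is made up, and the tool lists come from the table:

```shell
# Hypothetical helper: if a language's file marker exists in the repo,
# report the first matching tool found on PATH (depth-limited for speed).
detect() {
  lang="$1"; marker="$2"; shift 2
  find . -maxdepth 3 -name "$marker" -not -path './.git/*' 2>/dev/null | grep -q . || return 0
  for tool in "$@"; do
    command -v "$tool" >/dev/null 2>&1 && { echo "$lang: $tool"; return 0; }
  done
  echo "$lang: no tool on PATH"
}

detect "TypeScript/JavaScript" package.json npm pnpm yarn bun
detect Python pyproject.toml uv poetry pip pip3
detect Go go.mod go
detect Rust Cargo.toml cargo
```

A real implementation would also handle glob markers like `*.ts` and manifest-less languages such as Shell; this only covers exact-name manifests.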
For each detected language, scan all mainstream tools by category: linter, formatter, type checker, test runner.
Status per tool:
- ready: executable and config are usable as-is
- missing: repo has no configured tool for that category
- broken: config references an unavailable or misconfigured tool

Reference: references/toolchain-matrix.md for the full detection matrix.
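One way to sketch the per-category status check; the `tool_status` helper is hypothetical, and the config/executable pair comes from whatever the matrix maps to the category:

```shell
# Hypothetical classifier for one tool category:
#   ready   -- config present and executable on PATH
#   broken  -- config present but executable unavailable
#   missing -- no config for the category at all
tool_status() {
  config="$1"; exe="$2"
  if [ -e "$config" ]; then
    command -v "$exe" >/dev/null 2>&1 && echo ready || echo broken
  else
    echo missing
  fi
}
```

Note that real detection must also validate config contents (a config referencing a nonexistent plugin is also broken); this sketch only covers presence.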
Check and store:
- .gitignore
- .github/workflows/*.{yml,yaml}
- .github/dependabot.yml
- git config --get core.hooksPath and .ship/hooks/; also detect legacy setups: .husky/, .pre-commit-config.yaml, lint-staged in package.json

Use AskUserQuestion after detection. The prompt must show each module's status (ready / missing / broken, or n/a if no supported language detected):

Select modules to configure:
1. [x] Install missing tools (linter, formatter, type checker)
2. [x] Pre-commit hooks (lint + format on commit)
3. [ ] CI/CD (GitHub Actions — workflow only, no Dependabot)
4. [ ] Dependabot (dependency update PRs)
5. [ ] AI Code Review
Options:
Hard rule: Execute ONLY the modules the user selected. Each module is independent. CI/CD does NOT include Dependabot unless module 4 is also selected.
| Module | Reference |
|---|---|
| Install Tools | references/tooling.md |
| Pre-commit Hooks | generate hook scripts in .ship/hooks/, set core.hooksPath, works across all worktrees |
| CI/CD | references/ci.md (generate workflow only, skip Dependabot section unless module 4 is also selected) |
| Dependabot | references/ci.md (Dependabot section only) |
| AI Code Review | references/review.md |
Three cases based on what Phase 1 Step D detected:
Case 1: Working pre-commit system exists (.pre-commit-config.yaml
with pre-commit install done, .husky/ with hooks, or core.hooksPath
already set and working) → do not migrate. Respect the existing
system. Skip this module.
Case 2: Config exists but hook runner not wired (e.g., lint-staged
in package.json but no husky, or .pre-commit-config.yaml exists but
pre-commit install was never run) → wire it up. Install the
missing hook runner:
- lint-staged without husky → run npx husky init, or set core.hooksPath to .ship/hooks/ with a script that calls npx lint-staged.
- .pre-commit-config.yaml without install → run pre-commit install.

Case 3: Nothing exists → generate .ship/hooks/pre-commit to run
lint + format on staged files. Set git config core.hooksPath .ship/hooks.
Use the project's detected linter/formatter. The script must be
executable (chmod +x).
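For Case 3, the generated hook might look like this; ruff stands in for whatever linter/formatter was detected, so adapt the two tool lines to the actual toolchain:

```shell
# Hypothetical Case 3 generator: write .ship/hooks/pre-commit and point
# git at it. The hook runs lint + format on staged files only.
mkdir -p .ship/hooks
cat > .ship/hooks/pre-commit <<'EOF'
#!/bin/sh
set -e
staged="$(git diff --cached --name-only --diff-filter=ACM)"
[ -z "$staged" ] && exit 0
ruff check $staged
ruff format $staged
git add $staged   # re-stage files the formatter rewrote
EOF
chmod +x .ship/hooks/pre-commit
if git rev-parse --is-inside-work-tree >/dev/null 2>&1; then
  git config core.hooksPath .ship/hooks
fi
```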
Deterministic safety checks (secrets, protected files, forbidden patterns) are handled by hookify rules in Phase 7 Step C, not here.
After each module, commit atomically:
git add <changed files>
git commit -m "<conventional commit message>"
Before generating anything, check if the project already has harness
files (AGENTS.md, CLAUDE.md, .learnings/LEARNINGS.md, DEVELOPMENT.md).
If no harness files exist → skip to Phase 4 (full init).
If harness files exist → audit them for freshness using
references/harness-audit.md, then present results to the user:
Options:
If A: fix stale claims in existing files. Then proceed to Phase 4-7 to discover additional constraints not yet documented — these are added alongside the existing accurate rules, not replacing them.
If B: treat as full init — proceed to Phase 4 as if no harness exists.
If C: skip Phase 4-7 entirely.
Do NOT read file contents yet. Reuse language/structure data from Phase 1.
If Phase 1 revealed multiple sub-projects (each with their own manifest file, separate language, or independent directory structure), this is a monorepo.
For monorepos, identify sub-projects and their recent activity:
# Count commits per top-level directory in the last 30 days
git log --since="30 days ago" --name-only --pretty=format: | \
grep -v '^$' | cut -d/ -f1-2 | sort | uniq -c | sort -rn | head -10
Record each sub-project: path, language, manifest file, commit count. Note: monorepos will get per-sub-project AGENTS.md files in Phase 7.
- Single repo with application code: record the main entry file and key call paths.
- Monorepo: record an entry point per active sub-project.
- No clear entry point (library, plugin, config-only, shell scripts): use the most-modified files in the last 30 days as starting points for investigation. Run:
git log --since="30 days ago" --name-only --pretty=format: | \
grep -v '^$' | sort | uniq -c | sort -rn | head -10
Find rules that only AI can judge — things where violating them causes bugs, security issues, or architectural breakage, but a regex or linter cannot detect the violation.
Do NOT look for code style patterns (naming, formatting, import order). The model already understands those from reading the code. Instead, look for constraints that the model would violate because it lacks context.
Monorepo: investigate each active sub-project independently.
Trace from entry points (or most-active files) 2-3 levels deep. Look for:
Scan git history for evidence of past mistakes:
# Find reverted commits (things that were tried and failed)
git log --oneline --grep="revert" --since="6 months ago" | head -10
# Find bug fix commits (what went wrong before)
git log --oneline --grep="fix" --grep="bug" --all-match --since="6 months ago" | head -10
# Find files with the most bug fixes (error-prone areas)
git log --oneline --grep="fix" --since="6 months ago" --name-only --pretty=format: | \
grep -v '^$' | sort | uniq -c | sort -rn | head -10
For interesting reverts or bug fixes, read the commit diff to understand what constraint was violated.
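Reading the diff can be as simple as the following sketch; the `inspect_commit` helper is made up, and the hash is whatever the log commands above surfaced:

```shell
# Hypothetical helper: summarize what a revert/fix commit changed.
inspect_commit() {
  git show --stat --oneline "$1"   # subject line + files touched
  git show "$1" | head -80         # the diff itself, truncated
}
```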
For each finding, apply this test: could a regex or grep catch the violation?

- Yes → type: deterministic. These become hookify rules in Step 7C.
- No → type: semantic. These go in .learnings/LEARNINGS.md as verified entries in Step 7B.

Use AskUserQuestion. Present safety rules and semantic rules separately. Ask if the user has additional constraints not visible in the code.
Options: A) Generate as shown, B) Edit, C) Cancel. Max two rounds of edits.
If user adds a convention without code evidence, search for it first.
If no evidence found, include as Source: user-defined.
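To make the deterministic/semantic split concrete: a hard-coded credential is grep-detectable and belongs in a hookify rule, while "don't weaken auth logic to fix a bug" requires judgment and belongs in a learning. A minimal sketch of the deterministic side, with an illustrative key pattern and a made-up helper name:

```shell
# Hypothetical deterministic check: AWS-style access key IDs match a
# plain regex, so they are hookify-rule territory, not a learning.
scan_secrets() {
  grep -rEn 'AKIA[0-9A-Z]{16}' --include='*.py' "$1" || true
}
```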
Read references/agents-md.md for structure. Fill from Phase 4-6
findings (survey, investigation, and user-provided context).
Omit sections with no content. Keep under 200 lines per file.
AGENTS.md documents project structure, commands, and architecture.
It should reference .learnings/LEARNINGS.md for semantic rules and mention
that hookify rules exist for deterministic safety checks.
Single repo: generate or update root AGENTS.md.
Monorepo: update each sub-project's local AGENTS.md with that
sub-project's conventions. If a local AGENTS.md doesn't exist, create it.
Root AGENTS.md gets repo-wide conventions only (commit format, shared
tooling, cross-project boundaries). Sub-project-specific conventions
go in the sub-project's AGENTS.md.
If an AGENTS.md already exists, use AskUserQuestion:
AGENTS.md already exists. Here's what would change:
<show diff summary: sections added/changed/removed>
Options:
For monorepos, ask once per file that needs changes (batch into one AskUserQuestion if possible).
Write semantic rules to .learnings/LEARNINGS.md as verified entries.
These are rules that require AI semantic judgment. Deterministic checks
go in hookify rules (Step C), NOT here.
Test before including: "Could a regex or grep catch this violation?" If yes, it belongs in a hookify rule. Learnings are for things like "don't remove auth logic to fix a bug" — where understanding intent is required.
Format (each rule is a learning entry):
## [LRN-YYYYMMDD-NNN] correction
**Logged**: <ISO 8601 timestamp>
**Priority**: high
**Status**: verified
**Area**: code
### Summary
<What must not happen — one sentence>
### Details
<Why this matters — what breaks>
### Suggested Action
<What to do instead>
### Metadata
- Source: <observed from code | git-history commit:hash | user-defined>
- Related Files: <file paths>
- Tags: <relevant tags>
---
Do NOT include style rules — the model follows style by reading code.
If .learnings/LEARNINGS.md already exists, use AskUserQuestion:
.learnings/LEARNINGS.md already exists with <N> entries.
Options:
Check if hookify plugin is available:
ls ~/.claude/plugins/data/*/hookify 2>/dev/null && echo "HOOKIFY_FOUND" || echo "HOOKIFY_NOT_FOUND"
If not found, install it:
claude /plugin install hookify
If install fails (e.g., no internet), warn the user but continue — pre-commit hook still provides commit-time safety. Hookify is the real-time layer, not the only layer.
Invoke the hookify skill to learn the exact rule format:
Skill("hookify:writing-rules")
For each deterministic finding from Phase 5, generate a hookify rule
file at .claude/hookify.ship-<name>.local.md following the format
from the hookify skill. Prefix all rule names with ship-.
Hookify auto-discovers .claude/hookify.*.local.md files — no restart needed.
Semantic rules (.learnings/LEARNINGS.md) are injected at session start by the
ship plugin's SessionStart hook — no per-edit checking needed.
Generate a comprehensive .gitignore based on everything detected in
Phase 1 (languages, package managers, toolchains, IDEs, build tools).
Use your knowledge of each detected technology to add the standard ignore patterns — caches, build output, virtual environments, IDE config, OS files, dependency directories, log files, environment variables, etc. Cover all detected languages and tools thoroughly.
Always include these Ship-specific rules:
# Ship runtime (tasks and audit are ephemeral)
.ship/tasks/
.ship/audit/
Do NOT gitignore .ship/hooks/ or .learnings/.
Always include Claude Code rules:
.claude/*
!.claude/settings.json
!.claude/hookify.ship-*.local.md
For existing repos: read the current .gitignore, identify gaps
based on detected tech stack, and append missing sections. Do not
duplicate or reorder existing rules.
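The append-without-duplicating behavior can be sketched as follows (the `add_ignore` helper name is made up):

```shell
# Hypothetical idempotent append: add a rule only if that exact line
# is not already present in .gitignore.
add_ignore() {
  grep -qxF "$1" .gitignore 2>/dev/null || echo "$1" >> .gitignore
}
add_ignore ".ship/tasks/"
add_ignore ".ship/audit/"
add_ignore ".claude/*"
add_ignore "!.claude/settings.json"
add_ignore "!.claude/hookify.ship-*.local.md"
```

Because the check is an exact whole-line match (`grep -qxF`), re-running setup never duplicates rules, and existing lines are never reordered.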
Stage all generated/modified files and commit with a conventional commit message summarizing what was generated.
Output summary and offer next steps:
[Setup] Complete.
Infrastructure:
- <module name> — <what was done>
Harness:
AGENTS.md: <generated | merged | skipped>
Learnings: <N> verified rules in .learnings/LEARNINGS.md
Hookify: <N> safety rules generated
Pre-commit: <configured | skipped>
Semantic rules:
1. <name> — <why>
2. <name> — <why>
Safety rules:
1. <name> — <what it blocks>
2. <name> — <what it blocks>
## What's next?
1. **Start building** — /ship:auto with a task description
2. **Review harness** — read AGENTS.md and .learnings/LEARNINGS.md
3. **Customize** — edit conventions or hookify rules
- references/agents-md.md — AGENTS.md structure guide
- references/toolchain-matrix.md — full detection matrix for 14 languages
- references/tooling.md — tool installation instructions per language
- references/ci.md — GitHub Actions CI/CD generation
- references/review.md — AI code review workflow setup
- references/runtime-install-guide.md — platform-specific runtime installation
- references/harness-audit.md — harness freshness audit (Phase 3.5)