Initialize or update a Shipyard project — configure settings, create directory structure, and analyze codebase. Use this when the user wants to set up Shipyard in a new project, reconfigure an existing project, re-analyze the codebase after changes, or update Shipyard tool files.
Install: npx claudepluginhub acendas/shipyard --plugin shipyard
You are setting up (or updating) Shipyard for this project.
!shipyard-context path
!shipyard-context view data-version
!shipyard-context view config
!shipyard-context legacy-check
!shipyard-data find-orphans
!shipyard-context project-claude-md
Paths. All file ops use the absolute SHIPYARD_DATA prefix from the context block. No ~, $HOME, or shell variables in file_path. Never use echo/printf/shell redirects to write state files — use the Write tool (auto-approved for SHIPYARD_DATA).
If context shows LEGACY_SHIPYARD_DETECTED → MUST run legacy migration FIRST, before anything else. Do not skip this. Do not go to quick check. The .shipyard/ directory in the project contains user data that needs to move to plugin data.
If context shows NO_LEGACY → skip migration, proceed to normal detect mode below.
This runs when .shipyard/config.md exists in the project directory. This is a pre-v0.5.0 installation.
⚠️ Branch check first: The .shipyard/ directory may contain data from a different branch (it was git-tracked, so it changes with branch switches). Before migrating, verify the data matches the current branch:
Check current branch: git branch --show-current
Check if .shipyard/ was recently modified on this branch: git log -1 --format=%H -- .shipyard/ 2>/dev/null
If .shipyard/ was last modified on a different branch, AskUserQuestion:
The .shipyard/ directory may contain specs from a different branch.
Current branch: [branch]
Last .shipyard/ commit: [branch/hash]
1. Migrate anyway — I'll use this data as a starting point
2. Start fresh — ignore old data, initialize clean for this branch
3. Let me check — I'll switch branches first
Recommended: 2 — cleaner to start fresh on the current branch
Migration steps (if proceeding):
Run the migration in one atomic step: shipyard-data migrate .shipyard
This handles all of: creating the data directory tree, copying contents
from .shipyard/ (skipping the obsolete scripts/ subdir which is now
served from the plugin), and removing transient state files. The command
prints the resolved data directory path on success.
Report:
Migrated Shipyard data from .shipyard/ to plugin data directory.
The .shipyard/ directory is no longer needed — you can safely delete it:
rm -rf .shipyard/
Do NOT auto-delete .shipyard/.
Re-run codebase analysis (Step 3) to ensure codebase-context.md matches the current branch.
Continue to QUICK CHECK below.
If the find-orphans context output above is non-empty AND the current data dir has no config.md:
This means a previous Shipyard installation wrote data under a different project hash — most commonly because the worktree-detection fix (R1/F5) changed how worktree paths are resolved. The user's previous sprint state, backlog, codebase context, and memory are all at the orphaned path and would otherwise be silently abandoned.
Each line of find-orphans output is tab-separated: <orphan-data-dir>\t<recorded-project-root>.
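Assuming the tab-separated line format just described, the parsing step can be sketched as follows (a minimal sketch — the real context block may contain blank lines between entries, which this skips):

```python
def parse_find_orphans(output: str) -> list[tuple[str, str]]:
    """Parse find-orphans output: one '<orphan-data-dir>\\t<recorded-project-root>' per line."""
    candidates = []
    for line in output.splitlines():
        if not line.strip():
            continue  # skip blank lines between entries
        orphan_dir, _, recorded_root = line.partition("\t")
        candidates.append((orphan_dir, recorded_root))
    return candidates
```

Each tuple becomes one candidate to present via AskUserQuestion.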
DO NOT proceed with fresh-install or update flows until you have asked the user about each candidate. Use AskUserQuestion:
For a single candidate:
"Found orphaned Shipyard data from a previous installation: <orphaned-data-dir> (recorded project root: <recorded-project-root>)
This was most likely created before the worktree-detection fix changed the project hash. Migrate this data to the current data directory?
- Yes, migrate it
- No, treat this project as a fresh install (orphaned data stays at the old path)"
For multiple candidates:
"Found N orphaned Shipyard data dirs from previous installations. Which would you like to migrate?
1. <orphaned-data-dir> (recorded as <recorded-project-root>)
2. <orphaned-data-dir> (recorded as <recorded-project-root>)
...
N+1. None — treat this project as a fresh install"
If the user picks "migrate", run shipyard-data migrate <orphaned-data-dir>. The migrate command's R4 safety guards apply automatically (it refuses on populated dest, and --force creates a backup) — but in this scenario the current dir is fresh by definition, so plain migrate will succeed without --force. R19 ensures the dest's .project-root is rewritten to the current project root after the copy.
After successful migration, MUST report the orphan source path back to the user so they can reclaim the disk space. The orphan dir is no longer scanned by find-orphans once the new dir is populated, so without this announcement the user has no way to discover the leftover data exists. Add to the final report (or as a follow-up message):
Migration complete. The original orphaned data is at:
<orphaned-data-dir>

It has been copied to the current data directory and is no longer used by Shipyard. Verify your project state with /ship-status, then you can safely delete the orphaned directory:

rm -rf <orphaned-data-dir>
Do NOT auto-delete the orphan source — match the legacy .shipyard/ migration pattern of telling the user without acting.
After successful migration, treat this as an UPDATE flow (the migrated dir has a config.md, so the update path applies).
If the user picks "none" or "fresh install", continue with the FRESH install flow as normal. The orphaned data stays at the old path; mention it in the final report so they know it's still there.
Check if <SHIPYARD_DATA>/config.md exists:
If <SHIPYARD_DATA>/config.md exists, run these checks before doing anything else:
Rules present and current? Use Glob ${CLAUDE_PLUGIN_ROOT}/project-files/rules/shipyard-*.md to enumerate the canonical rules. For each enumerated file, derive the basename (e.g., shipyard-data-model.md) and use the Read tool on .claude/rules/<basename>. Classify each as CURRENT (content matches the plugin source), OUTDATED (content differs), or MISSING (no file at that path).
If any rules are MISSING or OUTDATED → re-copy them. To copy: Read the source file from ${CLAUDE_PLUGIN_ROOT}/project-files/rules/<basename> and use Write to write it to .claude/rules/<basename>. Repeat per file. This avoids any shell cp/diff/for loops, which are not portable to plain Windows cmd.exe.
Config version current? Read config_version from <SHIPYARD_DATA>/config.md — if matches latest (3), no migration needed
Codebase context exists? Use the Read tool on <SHIPYARD_DATA>/codebase-context.md (substitute the literal SHIPYARD_DATA path) — if it exists, no re-analysis needed
If ALL checks pass → report and exit immediately:
✓ Shipyard is up to date. Nothing to do.
Run /ship-status for project overview, or /ship-discuss to start working.
If any check fails → continue to UPDATE mode to fix what's missing. Report what triggered the update:
Shipyard needs updating:
[✗ missing rules | ✗ config migration needed | ✗ codebase context missing]
Shipyard requires git (worktree isolation, branch strategy, TDD hooks all depend on it).
Check: git rev-parse --git-dir 2>/dev/null. If this fails (not a git repo) → run git init and create an initial commit:
git init
git add -A
git commit -m "chore: initial commit"
If the directory is empty (nothing to add), create a .gitkeep and commit that.
If the repo has no commits (git log fails) → create an initial commit with whatever exists.
This ensures worktree isolation and branching work from the first sprint.
Scan the project first — auto-detect as much as possible. Only ask what you can't figure out.
Ask these (skip if obvious from codebase):
Project name — what is this project called?
Tech stack — languages, frameworks, libraries (scan package.json, Cargo.toml, go.mod, etc. first)
Testing framework — vitest, jest, pytest, go test, etc. (check existing test files first)
Test commands — auto-detect from package.json scripts, pytest.ini, Makefile, etc. Populate test_commands in config:
- unit — run unit tests (e.g., vitest run)
- integration — run integration tests
- e2e — run E2E tests (if applicable)
- scoped — run a subset by pattern (e.g., vitest run --testPathPattern)
If not detectable, AskUserQuestion: "I couldn't auto-detect your test commands. What commands do you use to run tests? (e.g., npm test, pytest, go test ./...)"

These keys double as the resolution target for kind: operational tasks — an operational task whose verify_command: is test_commands.e2e resolves to whatever is stored under test_commands.e2e here. Keeping one source of truth for "how do I run X" means renaming a test runner in one place updates every operational task that references it.
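That verify_command resolution is a dotted lookup into the config, which can be sketched as (a sketch; the key names follow the test_commands schema above):

```python
def resolve_verify_command(config: dict, verify_command: str) -> str:
    """Resolve a dotted reference like 'test_commands.e2e' to the concrete command."""
    node = config
    for key in verify_command.split("."):
        node = node[key]  # a KeyError here means the task references a missing command
    return node
```
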
operational_tasks.max_iterations (default 3) and operational_tasks.max_patch_tasks (default 5) are the fix-findings loop budget and scope-creep guard for kind: operational tasks. See skills/ship-sprint/references/task-kinds.md for the full semantics. Override per-task with verify_max_iterations: in task frontmatter.
Auto-detect these (confirm, don't ask):
Scan the project and present findings: "I detected [X]. Correct?" Only ask if detection fails.
5. Project type — infer from stack (Next.js → web-app, Express → api, etc.)
6. CI (continuous integration) platform — check .github/workflows/, .gitlab-ci.yml, etc.
7. Repo type — check for workspace configs (monorepo — multiple projects in one repo) or single package.json (single)
8. Git main branch — detect main branch name (git symbolic-ref refs/remotes/origin/HEAD 2>/dev/null or check for main/master).
Use defaults (only ask if the user brings it up):
9. E2E (end-to-end) framework — only ask if E2E tests detected or project type is web-app/mobile
10. Team size — default: solo. Only ask if multiple contributors detected in git log.
11. Workflow — default: sprint. Only ask if team size > solo.
12. Pull request workflow — Shipyard does not create PRs or push. Skip.
<SHIPYARD_DATA>/
├── config.md ← from answers above
├── codebase-context.md ← generated in step 3
├── spec/
│ ├── epics/
│ ├── features/
│ ├── tasks/
│ ├── bugs/
│ ├── ideas/
│ └── references/ ← detail docs split from large spec files
├── backlog/
│ └── BACKLOG.md
├── sprints/
│ └── current/ ← empty until first sprint
├── verify/
├── debug/
│ └── resolved/ ← closed debug sessions
├── memory/
│ └── metrics.md
├── releases/ ← changelog files per version
└── templates/ ← spec templates (copied from plugin)
Create the directory structure by running:
shipyard-data init
This creates all directories in the plugin data area (outside the project — no git noise).
Install rules into the project:
Rules live in the project's .claude/rules/ (plugins can't ship rules directly). Install them using Claude's native tools — no shell cp or mkdir, which are not portable to Windows cmd.exe:
1. Glob ${CLAUDE_PLUGIN_ROOT}/project-files/rules/shipyard-*.md to enumerate the source rule files.
2. For each file, Read the source and Write it to .claude/rules/<basename>. The Write tool creates the parent directory automatically.

Templates are copied into plugin data by shipyard-data init above — no separate shell step. The init command copies everything under $CLAUDE_PLUGIN_ROOT/project-files/templates/ into <SHIPYARD_DATA>/templates/ via Node's cpSync, which stays inside the allowlisted shipyard-data CLI and never prompts for permission on the plugin data dir. Do NOT synthesize a raw template-copy bash line — the plugin data dir lives outside the project root and every such line would trigger a "suspicious path" prompt.
After init, write these via the Write tool (auto-approved for files inside the data dir):
- <SHIPYARD_DATA>/backlog/BACKLOG.md — from <SHIPYARD_DATA>/templates/BACKLOG.md
- <SHIPYARD_DATA>/config.md — generate from user answers using the template format

Update .gitignore — append any missing entries:
# Claude Code local memory (machine-specific paths — never commit)
.claude/projects/
# Shipyard task worktrees (temporary, created by builder subagents)
.claude/worktrees/
Note: Shipyard data lives in ${CLAUDE_PLUGIN_DATA} (outside the project), so no .shipyard/ gitignore entries needed. Worktrees DO live inside the project (<repo>/.claude/worktrees/<name>) and must be ignored to keep them out of git status.
Scan the codebase and write findings to <SHIPYARD_DATA>/codebase-context.md.
Project structure — directory layout, key directories
Tech stack detected — frameworks, libraries, versions from package files
Existing patterns — naming conventions, file organization, import patterns
Existing tests — test framework, test file locations, coverage config
Build system — build scripts, bundler, compiler settings
Environment — .env files (list variables, NOT values), docker setup, CI config
Entry points — main files, route definitions, API entry points
Dependencies — key external dependencies and their purposes
Commit conventions — analyze the last 30 git commits to detect the project's commit style:
git log --oneline -30 --no-decorate 2>/dev/null
Detect patterns:
- Conventional: feat:, fix:, chore:, docs: with optional scope feat(auth):
- Gitmoji: :sparkles:, :bug:
- Jira-style: PROJ-123: description
- Scopes in use: (auth), (ui), (api)?

Also check for:
- .commitlintrc, commitlint.config.js — explicit lint config
- .czrc, .cz.json — commitizen config
- commit hooks (.husky/, .git/hooks/)

Write detected convention to config:
git:
commit_format: conventional # conventional | gitmoji | jira | freeform
commit_scope: true # use scopes like feat(auth):
commit_case: lowercase # lowercase | sentence | title
commit_examples: # 3 representative examples from history
- "feat(auth): add user registration flow"
- "fix(api): handle null response from external endpoint"
- "chore: update dependencies"
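The format detection described under "Detect patterns" can be sketched as a heuristic classifier over recent commit subjects (a sketch — the regexes and the majority threshold are assumptions, not Shipyard's exact rules):

```python
import re
from collections import Counter

CONVENTIONAL = re.compile(r"^(feat|fix|chore|docs|refactor|test|style|perf|ci|build)(\([\w-]+\))?!?: ")
GITMOJI = re.compile(r"^:\w+:")
JIRA = re.compile(r"^[A-Z][A-Z0-9]+-\d+[:\s]")

def classify_commit_format(subjects: list[str]) -> str:
    """Return the dominant style among commit subjects, or 'freeform'."""
    votes = Counter()
    for s in subjects:
        if CONVENTIONAL.match(s):
            votes["conventional"] += 1
        elif GITMOJI.match(s):
            votes["gitmoji"] += 1
        elif JIRA.match(s):
            votes["jira"] += 1
        else:
            votes["freeform"] += 1
    style, count = votes.most_common(1)[0]
    # require a clear majority before committing to a style
    return style if count * 2 > len(subjects) else "freeform"
```

The winning style maps onto commit_format in the git: config block above.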
If no commits exist (new project), AskUserQuestion: "What commit message format do you prefer? (conventional commits is the default)"
Generate a commit format rule — write .claude/rules/project-commit-format.md:
---
paths: [".git/**/*"]
---
# Commit Message Format
This project uses [detected format]. Follow these conventions:
Format: [format description]
Case: [case convention]
Scopes: [list of scopes or "none"]
Examples from this project:
- [example 1]
- [example 2]
- [example 3]
This auto-loads whenever git operations happen, ensuring consistent commit messages across all agents and skills without every skill needing to read config.
Format codebase-context.md with YAML frontmatter summarizing key facts, markdown body with details.
Check for existing project management tools before scanning the codebase:
Check for existing spec documents:
Scan for existing human-authored technical docs in these locations:
- spec/, specs/, spec/docs/, docs/specs/, docs/spec/ — only if they contain 3+ .md files with product/feature-like content (not test runner config files)
- documentation/, documents/, design-docs/, rfcs/, proposals/ — only if they contain 3+ .md files
- .md files whose names or titles suggest feature/product specs

First pass — scan filenames and titles only (do NOT read full file content yet):
AskUserQuestion: "Found [N] spec documents in [path]/. Shipyard doesn't duplicate your existing docs — it references them. Want me to index these so Shipyard knows where your specs live? When you plan features, Shipyard will read them directly from their current location. (yes/no)"
If yes:
Write the index to <SHIPYARD_DATA>/codebase-context.md under a ## Existing Specs section:
## Existing Specs
- [path/to/auth-spec.md] — authentication and authorization
- [path/to/api-design.md] — API endpoint conventions
- [path/to/data-model.md] — database schema and relationships
Shipyard's spec directory (<SHIPYARD_DATA>/spec/) holds only the working set — features being planned, built, or reviewed. It is NOT a mirror of the entire product. The user's existing docs remain the source of truth for the system as a whole.
If no spec docs found — note brownfield context:
If the codebase analysis reveals an existing application with routes, components, APIs, or models — note this in <SHIPYARD_DATA>/codebase-context.md under ## Existing Functionality:
This gives /ship-discuss and /ship-sprint context about what already exists.

Do NOT create feature specs for existing code. Shipyard's spec is for new work being planned and built, not a catalog of existing functionality. The codebase-context.md serves as the reference for what's already there.
If the user wants to formalize existing features later, they use /ship-discuss [topic].
Read the full guide: ${CLAUDE_PLUGIN_ROOT}/skills/ship-init/references/constitution-advisor.md
Evaluate the project's existing architectural rules across 10 categories: architecture boundaries, code size limits, naming conventions, component patterns, testing patterns, error handling, banned patterns, domain vocabulary, shared patterns, and build order.
For each category, classify as COVERED / WEAK / MISSING by checking:
- .claude/rules/ — existing path-scoped rules (use ls .claude/rules/ explicitly, hidden dirs are not globbed by default)
- .claude/skills/ — any custom skills that imply conventions or constraints
- CLAUDE.md — project-level instructions
- Linter/formatter configs (.eslintrc, pyproject.toml, .rubocop.yml, etc.)
- Project docs (CONTRIBUTING.md, ARCHITECTURE.md, etc.)

For any WEAK or MISSING category:
Present all proposals at once, grouped by category, with rationale for each. Let the user accept all, pick some, or skip entirely. Create accepted rules as .claude/rules/ files (not prefixed with shipyard-).
After codebase analysis is complete, generate Subject Matter Expert skills for the project's technology stack. These skills encode how THIS project uses each technology — project-specific patterns, paths, commands, and conventions.
Extract technologies from the codebase analysis (Step 3):
Spawn the skill-writer:
subagent_type: shipyard:shipyard-skill-writer
Prompt with:
- the path to <SHIPYARD_DATA>/codebase-context.md
- the target skills directory: .claude/skills/

The agent runs silently — no user prompts. It scans .claude/skills/ for existing coverage, skips technologies already covered, generates SME skills for the rest, self-validates all paths and commands, and returns a report.
Display the results to the user:
Generated [N] project skills:
/nextjs-expert — Next.js 15 (App Router, server components, middleware)
/postgres-expert — PostgreSQL (Prisma ORM, migrations, connection pooling)
Skipped (already exist):
/tailwind-expert
No coverage (insufficient usage):
Redis — only used as cache in one file
Write initial project conventions to <SHIPYARD_DATA>/memory/project-context.md so they persist across sessions and are shared across the team:
---
updated: [date]
---
# Project Context
## Tech Stack
[detected languages, frameworks, libraries and versions]
## Testing
[framework, test file locations, run commands]
## Naming Conventions
[file naming, class/function naming patterns found in codebase]
## Key Terminology
[project-specific domain terms and what they mean]
Important: Write to <SHIPYARD_DATA>/memory/project-context.md, not to Claude's ~/.claude/ memory system. Claude's memory path embeds the user's local filesystem path (e.g., -Users-alice-...), which breaks for other team members and gets misrouted when agents run inside git worktrees. The <SHIPYARD_DATA>/memory/ path is project-relative, user-neutral, and tracked in git.
Run a quick diagnostic to verify the installation works. Check each item silently, report results:
Run each check using Claude's native tools (substitute the literal SHIPYARD_DATA path from the context block for <SHIPYARD_DATA>):
1. Glob .claude/rules/shipyard-*.md and count results. Expected: 7.
2. Glob <SHIPYARD_DATA>/templates/*.md and count results. Expected: 9.
3. Read <SHIPYARD_DATA>/config.md (limit 3) and confirm config_version appears. Expected: yes.
4. Run git rev-parse --git-dir 2>/dev/null && git log -1 --format=%H 2>/dev/null. Expected: both succeed.
5. Run git rev-parse --git-common-dir 2>/dev/null. If it differs from --git-dir, the project is a worktree and parallel execution falls back to the parent.
6. Glob ${CLAUDE_PLUGIN_ROOT}/agents/shipyard-*.md and count results. Expected: 4.
7. Read <SHIPYARD_DATA>/config.md and confirm a unit: field appears under test_commands. Expected: yes.

Report:
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
SELF-TEST
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
✅ Rules: 7/7 installed
✅ Templates: 9/9 installed
✅ Config: valid (v3)
✅ Git: ready (has commits)
✅ Worktree: supported (or: ⚠️ project is a worktree — parallel uses parent repo)
✅ Agents: 4/4 reachable
✅ Test commands: configured (vitest)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
If any check fails, fix it before reporting. For example:
- Missing rules → re-copy from project-files/rules/
- No git / no commits → git init && git add -A && git commit -m "chore: initial commit"

Shipyard skills and agents need specific tool permissions to run without interrupting the user mid-execution. Configure .claude/settings.local.json to allow these.
Read existing .claude/settings.local.json (may not exist). Merge — never replace existing entries.
Required permissions:
{
"permissions": {
"allow": [
"Bash(git:*)",
"Bash(shipyard-data)",
"Bash(ls:*)",
"Bash(wc:*)",
"Bash(head:*)",
"Bash(grep:*)",
"WebSearch",
"WebFetch"
]
}
}
If test commands were detected in Step 3, also add one Bash(<prefix>:*) entry per detected command prefix (e.g., Bash(npx vitest:*), Bash(npm test:*), Bash(pytest:*), Bash(go test:*), Bash(cargo test:*)).
Merge into the existing .claude/settings.local.json: Read the file (or start with {}), append missing entries to permissions.allow (exact string match, no duplicates), Write back. Leave all other keys untouched.
Report what was added (just new entries, not the full list):
Permissions: added 6 entries to .claude/settings.local.json
+ Bash(git:*), Bash(shipyard-data), WebSearch, WebFetch, ...
If all required entries already exist: "Permissions: already configured ✓"
Tell the user:
✓ Shipyard initialized for [project name]
Project type: [type]
Tech stack: [stack]
Testing: [framework]
▶ NEXT UP: Define your first features
/ship-discuss
(tip: /clear first for a fresh context window)
Read existing config. Do NOT modify:
- <SHIPYARD_DATA>/spec/ (user's spec data)
- <SHIPYARD_DATA>/backlog/ (user's backlog)
- <SHIPYARD_DATA>/sprints/ (sprint history)
- <SHIPYARD_DATA>/memory/ (metrics, retro insights) — exception: create project-context.md if it doesn't exist (see Step 4c)

Read the current config's config_version (absence = version 1). Compare against the latest template's version.
If config is outdated:
- Read <SHIPYARD_DATA>/templates/config.md for the latest schema
- Add any missing fields, then bump config_version to current

If spec frontmatter has changed between versions:
- Scan <SHIPYARD_DATA>/spec/features/*.md and <SHIPYARD_DATA>/spec/tasks/*.md
- Backfill missing keys (e.g., references: [], children: []) with defaults

Never remove existing fields — only add missing ones. If a field was renamed between versions, map the old value to the new field name and remove the old one.
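The add-only migration rule can be sketched over a parsed frontmatter dict (a sketch; RENAMED is a hypothetical old-name → new-name map, the real one comes from the template changelog):

```python
DEFAULTS = {"references": [], "children": []}
RENAMED = {"est_points": "story_points"}  # hypothetical old-name -> new-name mapping

def migrate_frontmatter(fm: dict) -> dict:
    """Add missing fields with defaults; map renamed fields; never drop user data."""
    out = dict(fm)
    for old, new in RENAMED.items():
        if old in out and new not in out:
            out[new] = out.pop(old)   # carry the old value to the new key
    for key, default in DEFAULTS.items():
        out.setdefault(key, default)  # only add what's missing
    return out
```
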
If migrating from v2 (or earlier) to v3: proceed to Step 2b for data model migration.
Only run if migrating from config_version 2 (or absent) to 3.
The v3 data model enforces single-source-of-truth: feature files own feature data, task files own task data, aggregate files (BACKLOG.md, SPRINT.md, PROGRESS.md) are lightweight ID indexes. This step migrates old-format files.
1. BACKLOG.md — multi-column → ID-only
Use the Read tool on <SHIPYARD_DATA>/backlog/BACKLOG.md (limit 5) and check if it contains columns beyond Rank and ID (e.g., Title, RICE, Points, Status):
If old format detected:
- Extract the ID column values and their rank order
- Rewrite as | Rank | ID | rows + a ## Overrides section

2. SPRINT.md — full task tables → task ID waves
If <SHIPYARD_DATA>/sprints/current/SPRINT.md exists and contains task data columns (Title, Effort, Status) beyond just task IDs in wave groups:
- Rewrite each wave as task IDs only, with <!-- Read task files for details --> comments

3. PROGRESS.md — old format → session log
If <SHIPYARD_DATA>/sprints/current/PROGRESS.md exists and contains task completion tracking tables (columns like Task, Status, Completed):
- Rewrite to keep only the ## Blockers table, ## Deviations table, ## Patch Tasks table, and ## Session Log

4. Epic files — remove features: arrays
Use Grep with pattern: ^features:, path: <SHIPYARD_DATA>/spec/epics, glob: E*.md, output_mode: files_with_matches to find epics with features: arrays. For each match:
- Remove the features: key and its array values from frontmatter
- Remove the ## Features table in the body
- The epic→feature relationship is now derived from feature files' epic: fields

5. Feature files — remove inline task tables
Use Grep with pattern: ^## Tasks, path: <SHIPYARD_DATA>/spec/features, glob: F*.md, output_mode: files_with_matches to find feature files with inline task tables. For each match:
tasks: array exists in frontmatter (extract task IDs from table if needed)## Tasks section and its table from the bodytasks: array6. Idea and bug file frontmatter backfill
Scan <SHIPYARD_DATA>/spec/ideas/*.md — backfill story_points: 0 if missing.
Scan <SHIPYARD_DATA>/spec/bugs/*.md — backfill hotfix: false if missing.
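The aggregate-table migration in step 1 can be sketched as follows (a sketch assuming a simple pipe-table layout with Rank and ID as the first two columns; real files may carry extra sections that must be preserved):

```python
def backlog_to_id_only(markdown_table: str) -> str:
    """Collapse a multi-column backlog table to | Rank | ID | rows, keeping rank order."""
    rows = []
    for line in markdown_table.splitlines():
        cells = [c.strip() for c in line.strip().strip("|").split("|")]
        if len(cells) < 2 or cells[0] in ("Rank", "") or set(cells[0]) <= set("-: "):
            continue  # skip header, separator, and blank rows
        rank, task_id = cells[0], cells[1]  # assumes Rank and ID lead each row
        rows.append(f"| {rank} | {task_id} |")
    return "| Rank | ID |\n|------|----|\n" + "\n".join(rows)
```

Title, RICE, Points, and Status columns drop out because task files are now the single source for that data.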
Report migration:
Data model migrated (v2 → v3):
BACKLOG.md: migrated to ID-only format (was [N] columns)
SPRINT.md: [migrated / no active sprint / already current]
PROGRESS.md: [migrated / no active sprint / already current]
Epics: removed features: arrays from [N] files
Features: removed inline task tables from [N] files
Ideas: backfilled story_points in [N] files
Bugs: backfilled hotfix in [N] files
Regenerate <SHIPYARD_DATA>/codebase-context.md:
Check for any directories in the standard structure that don't exist yet (new versions may add directories like debug/, spec/references/). Create them silently.
Update .gitignore — append any missing entries (same list as fresh install). This is idempotent — skip entries already present. If .gitignore does not exist, create it. Specifically ensure both .claude/projects/ (machine-specific Claude memory) and .claude/worktrees/ (Shipyard task worktrees) are present. Both were added in recent Shipyard versions.
If the project lacks detailed .claude/rules/ files (beyond Shipyard's own shipyard-*.md rules), run the same constitution advisor as fresh install Step 3c. Read ${CLAUDE_PLUGIN_ROOT}/skills/ship-init/references/constitution-advisor.md for the full process. Only propose — never auto-create on update.
Check if <SHIPYARD_DATA>/memory/project-context.md exists:
If missing, create it by reading <SHIPYARD_DATA>/codebase-context.md (written in Step 3) and extracting: tech stack versions and frameworks → ## Tech Stack, test framework and commands → ## Testing, detected naming patterns → ## Naming Conventions, project-specific terms → ## Key Terminology. Set updated: to today's date in frontmatter.

This file was added in a recent Shipyard version. Existing projects won't have it until the first update run.
Quick consistency check:
Report issues if found, suggest /ship-status to validate and auto-fix.
Run the same permission configuration as FRESH INSTALL Step 5.5. This ensures new permissions added in plugin updates are backfilled. The same merge-not-replace approach preserves existing user entries and only adds missing required ones.
✓ Shipyard updated
Config migrated: v[old] → v[new] ([N] fields added)
Codebase re-analyzed: [N] new files, [M] changed patterns
.gitignore: [N entries added / already up to date]
project-context.md: [created / already exists]
State: consistent (or: [N] issues found — run /ship-status to auto-fix)
▶ NEXT UP: Check project status
/ship-status
(tip: /clear first for a fresh context window)