Mock external commands in shell script tests using PATH override and invocation logging for assertion verification
This skill documents the pattern for testing shell scripts that invoke external commands (like `crewchief`, `gh`, `jq`, etc.) without requiring those commands to be installed or configured. The pattern creates temporary mock executables, adds them to PATH for test execution, and logs invocations to a file for assertion verification.
This approach is used in `test-merge-worktree.sh` (977 lines, 103 tests) and provides a lightweight alternative to complex mocking frameworks. The pattern is particularly valuable for integration tests that verify argument passing and command orchestration without executing real operations.
Use this pattern when:
Do not use this pattern for:
Create temporary test directory with mock-bin:
setup() {
  TEST_TMP=$(mktemp -d)
  mkdir -p "$TEST_TMP/mock-bin"
  touch "$TEST_TMP/mock.log"
}
Create mock executable for each external command:
# Mock crewchief CLI
cat > "$TEST_TMP/mock-bin/crewchief" << 'MOCKEOF'
#!/bin/sh
# Log invocation with all arguments
echo "MOCK_CREWCHIEF_CALLED: $*" >> "${MOCK_LOG:-/dev/null}"
# Return configurable exit code
exit "${MOCK_CREWCHIEF_EXIT:-0}"
MOCKEOF
chmod +x "$TEST_TMP/mock-bin/crewchief"
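Before wiring the mock into a real test, it can be sanity-checked on its own: with `mock-bin` prepended to `PATH`, both command lookup and invocation should hit the mock, and the call should appear in the log. A standalone sketch (using `env` so the lookup unambiguously uses the overridden `PATH`):

```shell
# Standalone check: the PATH override resolves and invokes the mock
TEST_TMP=$(mktemp -d)
mkdir -p "$TEST_TMP/mock-bin"
touch "$TEST_TMP/mock.log"

cat > "$TEST_TMP/mock-bin/crewchief" << 'MOCKEOF'
#!/bin/sh
echo "MOCK_CREWCHIEF_CALLED: $*" >> "${MOCK_LOG:-/dev/null}"
exit "${MOCK_CREWCHIEF_EXIT:-0}"
MOCKEOF
chmod +x "$TEST_TMP/mock-bin/crewchief"

# Lookup resolves to the mock, and the invocation lands in the log
resolved=$(PATH="$TEST_TMP/mock-bin:$PATH" command -v crewchief)
env PATH="$TEST_TMP/mock-bin:$PATH" MOCK_LOG="$TEST_TMP/mock.log" \
  crewchief worktree merge feature-x
logged=$(cat "$TEST_TMP/mock.log")
echo "resolved: $resolved"
echo "logged:   $logged"
rm -rf "$TEST_TMP"
```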
Create mocks for all external dependencies:
# Mock gh CLI
cat > "$TEST_TMP/mock-bin/gh" << 'MOCKEOF'
#!/bin/sh
echo "MOCK_GH_CALLED: $*" >> "${MOCK_LOG:-/dev/null}"
exit "${MOCK_GH_EXIT:-1}" # Default: no PR found
MOCKEOF
chmod +x "$TEST_TMP/mock-bin/gh"
# Mock workspace-folder.sh
cat > "$TEST_TMP/mock-bin/workspace-folder.sh" << 'MOCKEOF'
#!/bin/sh
echo "MOCK_WORKSPACE_FOLDER_CALLED: $*" >> "${MOCK_LOG:-/dev/null}"
exit 0
MOCKEOF
chmod +x "$TEST_TMP/mock-bin/workspace-folder.sh"
Execute script under test with PATH override:
# Prepend mock-bin to PATH so mocks are found first
PATH="$TEST_TMP/mock-bin:$PATH" \
  MOCK_LOG="$TEST_TMP/mock.log" \
  bash "$SCRIPT_UNDER_TEST" feature-x --repo myproject --yes 2>&1
Set environment variables to control mock behavior:
# Simulate command failure
exit_code=0
MOCK_CREWCHIEF_EXIT=7 \
  PATH="$TEST_TMP/mock-bin:$PATH" \
  MOCK_LOG="$TEST_TMP/mock.log" \
  bash "$SCRIPT_UNDER_TEST" feature-x --repo myproject 2>&1 || exit_code=$?
# Verify script handles failure correctly
assert_exit_code "7" "$exit_code" "script exits 7 when merge fails"
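The `assert_contains` and `assert_exit_code` helpers used throughout these examples are assumed to be provided by the test harness. A minimal POSIX-sh sketch (hypothetical; not the exact implementation in test-merge-worktree.sh) could look like:

```shell
# Hypothetical assertion helpers; adapt names and reporting to your harness
FAILURES=0

# assert_contains HAYSTACK NEEDLE DESCRIPTION
assert_contains() {
  if printf '%s' "$1" | grep -qF -- "$2"; then
    echo "[PASS] $3"
  else
    echo "[FAIL] $3 (missing: $2)"
    FAILURES=$((FAILURES + 1))
  fi
}

# assert_exit_code EXPECTED ACTUAL DESCRIPTION
assert_exit_code() {
  if [ "$1" = "$2" ]; then
    echo "[PASS] $3"
  else
    echo "[FAIL] $3 (expected exit $1, got $2)"
    FAILURES=$((FAILURES + 1))
  fi
}
```

Tracking failures in a counter lets the test file exit nonzero at the end if any assertion failed.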
Check that commands were called:
# Read mock log
mock_log=$(cat "$TEST_TMP/mock.log")
# Verify crewchief was invoked
assert_contains "$mock_log" "MOCK_CREWCHIEF_CALLED" "crewchief was invoked"
Verify arguments passed to commands:
# Verify specific arguments
assert_contains "$mock_log" "worktree merge feature-x" "crewchief received correct args"
assert_contains "$mock_log" "--strategy squash" "merge strategy passed through"
Verify command sequence:
# Extract invocation order
mock_log=$(cat "$TEST_TMP/mock.log")
first_call=$(echo "$mock_log" | head -1)
last_call=$(echo "$mock_log" | tail -1)
assert_contains "$first_call" "MOCK_GH_CALLED" "PR check happens first"
assert_contains "$last_call" "MOCK_WORKSPACE_FOLDER_CALLED" "workspace update happens last"
teardown() {
  if [ -n "$TEST_TMP" ] && [ -d "$TEST_TMP" ]; then
    rm -rf "$TEST_TMP"
  fi
}
# Call teardown on test completion
trap teardown EXIT
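Combined, setup, teardown, and the trap give each test file a self-cleaning skeleton. A sketch (the real harness adds per-test bookkeeping on top of this):

```shell
#!/bin/sh
# Self-cleaning skeleton: temp state is created in setup and always
# removed on exit, even if a test aborts partway through.
setup() {
  TEST_TMP=$(mktemp -d)
  mkdir -p "$TEST_TMP/mock-bin"
  touch "$TEST_TMP/mock.log"
  MOCK_LOG="$TEST_TMP/mock.log"
}

teardown() {
  if [ -n "$TEST_TMP" ] && [ -d "$TEST_TMP" ]; then
    rm -rf "$TEST_TMP"
  fi
}

setup
trap teardown EXIT

# ... create mocks in "$TEST_TMP/mock-bin" and run tests here ...
echo "test workspace: $TEST_TMP"
```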
Test script structure:
#!/usr/bin/env zsh
SCRIPT_UNDER_TEST="plugins/worktree/skills/worktree-merge/scripts/merge-worktree.sh"
# Setup
TEST_TMP=$(mktemp -d)
mkdir -p "$TEST_TMP/mock-bin"
touch "$TEST_TMP/mock.log"
# Create mock crewchief
cat > "$TEST_TMP/mock-bin/crewchief" << 'EOF'
#!/bin/sh
echo "MOCK_CREWCHIEF_CALLED: $*" >> "${MOCK_LOG:-/dev/null}"
exit "${MOCK_CREWCHIEF_EXIT:-0}"
EOF
chmod +x "$TEST_TMP/mock-bin/crewchief"
# Create mock repo structure
mkdir -p "/workspace/repos/_test_$$/_test_$$"
mkdir -p "/workspace/repos/_test_$$/feature-x"
# Execute test
exit_code=0
output=$(
  PATH="$TEST_TMP/mock-bin:$PATH" \
  MOCK_LOG="$TEST_TMP/mock.log" \
  bash "$SCRIPT_UNDER_TEST" feature-x --repo "_test_$$" --yes 2>&1
) || exit_code=$?
# Verify
mock_log=$(cat "$TEST_TMP/mock.log")
if echo "$mock_log" | grep -q "worktree merge feature-x"; then
  echo "[PASS] crewchief invoked with correct args"
else
  echo "[FAIL] expected crewchief invocation"
fi
# Cleanup
rm -rf "$TEST_TMP" "/workspace/repos/_test_$$"
Test error handling by configuring mock exit codes:
# Test: Merge failure (exit code 7)
exit_code=0
MOCK_CREWCHIEF_EXIT=7 \
  PATH="$TEST_TMP/mock-bin:$PATH" \
  MOCK_LOG="$TEST_TMP/mock.log" \
  bash "$SCRIPT_UNDER_TEST" feature-x --repo myproject --yes 2>&1 || exit_code=$?
assert_exit_code "7" "$exit_code" "script exits 7 when crewchief merge fails"
This simulates `crewchief worktree merge` returning exit code 7, allowing you to test the script's error-handling path.
Test that arguments are correctly passed through to mocked commands:
# Execute with --strategy squash
PATH="$TEST_TMP/mock-bin:$PATH" \
  MOCK_LOG="$TEST_TMP/mock.log" \
  bash "$SCRIPT_UNDER_TEST" feature-x --repo myproject --strategy squash --yes 2>&1
# Verify strategy was passed to crewchief
mock_log=$(cat "$TEST_TMP/mock.log")
if echo "$mock_log" | grep -qF "worktree merge feature-x --strategy squash --yes"; then
  echo "[PASS] strategy argument passed through correctly"
else
  echo "[FAIL] strategy argument not found in crewchief invocation"
fi
Some tests need mocks to produce output (not just log invocations):
# Mock gh that returns JSON PR status
cat > "$TEST_TMP/mock-bin/gh" << 'EOF'
#!/bin/sh
echo "MOCK_GH_CALLED: $*" >> "${MOCK_LOG:-/dev/null}"
if [ "$MOCK_GH_RETURN_PR" = "open" ]; then
  echo '{"state": "OPEN", "isDraft": false}'
  exit 0
elif [ "$MOCK_GH_RETURN_PR" = "merged" ]; then
  echo '{"state": "MERGED", "isDraft": false}'
  exit 0
else
  exit 1 # No PR found
fi
EOF
chmod +x "$TEST_TMP/mock-bin/gh"
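Invoking such a mock directly shows how the control variable drives both stdout and exit status. A standalone sketch using a simplified variant of the gh mock (only the "open" and "no PR" branches):

```shell
TEST_TMP=$(mktemp -d)
mkdir -p "$TEST_TMP/mock-bin"

# Simplified output-producing gh mock
cat > "$TEST_TMP/mock-bin/gh" << 'EOF'
#!/bin/sh
echo "MOCK_GH_CALLED: $*" >> "${MOCK_LOG:-/dev/null}"
if [ "$MOCK_GH_RETURN_PR" = "open" ]; then
  echo '{"state": "OPEN", "isDraft": false}'
  exit 0
else
  exit 1 # No PR found
fi
EOF
chmod +x "$TEST_TMP/mock-bin/gh"

# With the control variable set, the mock emits JSON and succeeds
open_json=$(MOCK_GH_RETURN_PR=open "$TEST_TMP/mock-bin/gh" pr view --json state)
# Without it, the mock fails like gh does when no PR exists
no_pr_rc=0
"$TEST_TMP/mock-bin/gh" pr view >/dev/null 2>&1 || no_pr_rc=$?
echo "open -> $open_json"
echo "no PR -> exit $no_pr_rc"
rm -rf "$TEST_TMP"
```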
# Test: PR status OPEN blocks merge
exit_code=0
MOCK_GH_RETURN_PR="open" \
  PATH="$TEST_TMP/mock-bin:$PATH" \
  MOCK_LOG="$TEST_TMP/mock.log" \
  bash "$SCRIPT_UNDER_TEST" feature-x --repo myproject --yes 2>&1 || exit_code=$?
assert_exit_code "8" "$exit_code" "script exits 8 when PR is OPEN"
Full integration test verifying command sequence and coordination:
# Setup all mocks
setup_all_mocks() {
  cat > "$TEST_TMP/mock-bin/crewchief" << 'EOF'
#!/bin/sh
echo "STEP:crewchief $*" >> "${MOCK_LOG:-/dev/null}"
exit 0
EOF
  chmod +x "$TEST_TMP/mock-bin/crewchief"
  cat > "$TEST_TMP/mock-bin/gh" << 'EOF'
#!/bin/sh
echo "STEP:gh $*" >> "${MOCK_LOG:-/dev/null}"
exit 1 # No PR
EOF
  chmod +x "$TEST_TMP/mock-bin/gh"
  cat > "$TEST_TMP/mock-bin/workspace-folder.sh" << 'EOF'
#!/bin/sh
echo "STEP:workspace-folder $*" >> "${MOCK_LOG:-/dev/null}"
exit 0
EOF
  chmod +x "$TEST_TMP/mock-bin/workspace-folder.sh"
}
setup_all_mocks
# Execute
PATH="$TEST_TMP/mock-bin:$PATH" \
  MOCK_LOG="$TEST_TMP/mock.log" \
  bash "$SCRIPT_UNDER_TEST" feature-x --repo myproject --yes 2>&1
# Verify sequence
mock_log=$(cat "$TEST_TMP/mock.log")
echo "$mock_log" | grep -n "STEP:" | while IFS=: read -r line_num content; do
  echo "[$line_num] $content"
done
# Expected sequence:
# 1. gh (PR check)
# 2. crewchief (merge)
# 3. workspace-folder (workspace cleanup)
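The sequence can also be asserted mechanically rather than eyeballed. A standalone sketch that assumes the `STEP:` log format above (the log is synthesized here to stand in for a real run):

```shell
# Standalone sketch: assert an ordered call sequence from a STEP: log
MOCK_LOG=$(mktemp)
# Synthesize the log entries a successful run would produce
echo "STEP:gh pr view feature-x" >> "$MOCK_LOG"
echo "STEP:crewchief worktree merge feature-x" >> "$MOCK_LOG"
echo "STEP:workspace-folder refresh" >> "$MOCK_LOG"

# Reduce each entry to its command name, then compare to the expected order
actual=$(sed -n 's/^STEP:\([a-z-]*\).*/\1/p' "$MOCK_LOG" | tr '\n' ' ')
expected="gh crewchief workspace-folder "
if [ "$actual" = "$expected" ]; then
  echo "[PASS] commands ran in expected order"
else
  echo "[FAIL] expected '$expected', got '$actual'"
fi
rm -f "$MOCK_LOG"
```

Comparing the full ordered list in one string catches both missing calls and out-of-order calls with a single assertion.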
Use quoted heredoc delimiter to prevent variable expansion in mock scripts:
cat > "$TEST_TMP/mock-bin/command" << 'EOF' # Note: 'EOF' is quoted
#!/bin/sh
echo "$*" >> "$MOCK_LOG" # Variables expand at runtime, not creation time
EOF
Make mocks return configurable exit codes for testing error paths:
exit "${MOCK_COMMAND_EXIT:-0}" # Default to success, override via env var
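A quick standalone check of this pattern, using a hypothetical mock named `mockcmd`:

```shell
# Hypothetical mock whose exit code is driven by MOCK_COMMAND_EXIT
TEST_TMP=$(mktemp -d)
cat > "$TEST_TMP/mockcmd" << 'EOF'
#!/bin/sh
exit "${MOCK_COMMAND_EXIT:-0}"
EOF
chmod +x "$TEST_TMP/mockcmd"

# Default: success; override: the configured failure code
default_rc=0
"$TEST_TMP/mockcmd" || default_rc=$?
forced_rc=0
MOCK_COMMAND_EXIT=7 "$TEST_TMP/mockcmd" || forced_rc=$?
echo "default=$default_rc forced=$forced_rc"
rm -rf "$TEST_TMP"
```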
Isolate test state management:
setup() { ... }
teardown() { ... }
trap teardown EXIT
Use prefixes in log messages to distinguish mock sources:
echo "MOCK_CREWCHIEF_CALLED: $*" >> "$MOCK_LOG"
echo "MOCK_GH_CALLED: $*" >> "$MOCK_LOG"
Run tests in subshells to isolate environment variables:
exit_code=0
output=$(
  PATH="$TEST_TMP/mock-bin:$PATH" \
  MOCK_LOG="$TEST_TMP/mock.log" \
  bash "$SCRIPT_UNDER_TEST" args 2>&1
) || exit_code=$?
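This isolation works because per-command variable assignments are scoped to the child process only. A quick standalone check (the path value is illustrative and never written to):

```shell
# A per-command assignment is visible to the child, not the parent shell
unset MOCK_LOG
child_sees=$(MOCK_LOG=/tmp/example.log sh -c 'echo "$MOCK_LOG"')
parent_sees="${MOCK_LOG:-unset}"
echo "child:  $child_sees"
echo "parent: $parent_sees"
```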
For these cases, consider:
plugins/worktree/skills/worktree-merge/scripts/test-merge-worktree.sh (977 lines, 103 tests)