This skill should be used when performing exploratory testing of command-line tools and scripts, including help text validation, option testing, error handling verification, and output validation. Triggers when testing CLI commands, scripts, build tools, or command-line interfaces.
Provide systematic patterns for autonomous exploratory testing of command-line tools. Guide agents through comprehensive CLI testing, including help discovery, option validation, error handling checks, exit code verification, and output correctness testing.
Test command-line tools using a structured approach:
Identify the command to test from context:
- Script path: `./scripts/build.sh`
- Binary name: `mycommand`
- `package.json` script: `npm run test`
- System command: `git status`

Verify the command exists:
```bash
which mycommand
command -v mycommand
[ -f ./script.sh ] && echo "exists"
```
Try these flags to discover usage:
```bash
command --help
command -h
command help
command
```
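The fallback sequence above can be scripted. A minimal sketch, using `cat` as a stand-in for the command under test (which variant succeeds depends on the tool; the bare-invocation case is omitted because some tools block on stdin):

```shell
#!/bin/sh
# Try common help invocations in order; report the first that exits 0.
# TARGET is a stand-in -- substitute the real command under test.
TARGET=cat

for variant in --help -h help; do
    if "$TARGET" "$variant" >/dev/null 2>&1; then
        echo "help available via: $TARGET $variant"
        break
    fi
done
```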
Extract from help text:

Example help parsing:
```
Usage: deploy [options] <environment>

Options:
  -v, --verbose    Enable verbose output
  -d, --dry-run    Show what would be deployed
  --config <file>  Use custom config
```

Identifies:
- Required argument: `<environment>`
- Optional flags: `-v`, `-d`, `--config`
- Config file option
Test that the command works at all:
```bash
# Version check
command --version
command -v

# No arguments (if allowed)
command

# Help (should never error)
command --help
```
Verify each invocation completes without crashing, prints the expected version or usage text, and exits with the expected code.
For each discovered flag:
1. Test flag alone: `command --flag`
2. Test with valid value: `command --flag value`
3. Test with invalid value: `command --flag invalid`
4. Test flag combinations: `command --flag1 --flag2`
5. Test conflicting flags: `command --yes --no`
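The per-flag checklist above can be driven by a loop that records each flag's exit code. A sketch, using a throwaway stub script in place of the real command (everything here, including `mytool` and its flags, is hypothetical):

```shell
#!/bin/sh
# Create a hypothetical stub CLI purely for illustration.
cat > /tmp/mytool <<'EOF'
#!/bin/sh
case "$1" in
    --verbose|--dry-run) echo "ok: $1" ;;
    *) echo "unknown option: $1" >&2; exit 2 ;;
esac
EOF
chmod +x /tmp/mytool

# Exercise each discovered flag plus one deliberately bogus flag,
# recording the exit code for the report.
for flag in --verbose --dry-run --bogus; do
    /tmp/mytool "$flag" >/dev/null 2>&1
    echo "$flag -> exit $?"
done
```

The known flags should report exit 0 and the bogus flag a non-zero usage error.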
Test required and optional arguments:

Valid inputs:
- Correct type and format
- Boundary values (min/max)
- Typical use cases

Invalid inputs:
- Wrong type (string instead of number)
- Out of range (negative when positive required)
- Missing required arguments
- Too many arguments
- Special characters
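These input classes can be probed directly. A short sketch, using GNU `head`'s `-n <count>` option as a convenient real example (the rejection behavior assumes GNU coreutils):

```shell
#!/bin/sh
# Probe argument validation on head's -n <count> option.
tmp=$(mktemp)
printf 'a\nb\nc\n' > "$tmp"

# Valid input: correct type, in range
head -n 2 "$tmp" >/dev/null 2>&1 && echo "valid count accepted"

# Invalid input: wrong type (string instead of number)
head -n abc "$tmp" >/dev/null 2>&1 || echo "non-numeric count rejected"

# Invalid input: missing required option argument
head -n >/dev/null 2>&1 || echo "missing count rejected"

rm -f "$tmp"
```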
Focus testing based on context:
If context says "I updated the --config flag":
- Comprehensive tests on --config
- Valid config files
- Invalid config files
- Missing config files
- Malformed config data
- Smoke tests on other flags (ensure they still work)
- Quick validation on core functionality
Balance depth with scope:
- Changed areas: Comprehensive testing
- Related areas: Thorough testing
- Unrelated areas: Smoke testing only
Verify proper exit codes:
- Success: exit 0
- General error: exit 1
- Usage error: exit 2
- Found but not executable (e.g. permission denied): exit 126
- Command not found: exit 127

Test:
```bash
command; echo "Exit code: $?"
```
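A small helper makes these conventions checkable in bulk. A sketch, using `true`, `false`, and a deliberately missing command to hit codes 0, 1, and 127:

```shell
#!/bin/sh
# Assert that a command exits with the expected code.
expect_exit() {
    expected=$1; shift
    "$@" >/dev/null 2>&1
    actual=$?
    if [ "$actual" -eq "$expected" ]; then
        echo "PASS: '$*' exited $actual"
    else
        echo "FAIL: '$*' exited $actual, expected $expected"
    fi
}

expect_exit 0 true                    # success
expect_exit 1 false                   # general error
expect_exit 127 no-such-command-xyz   # command not found
```

Each line prints PASS when the observed code matches the convention.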
Validate that error messages are specific and actionable:

Good error:
```
Error: Config file 'config.json' not found.
Please create a config file or specify path with --config
```

Bad error:
```
Error
```
Verify output streams are used correctly:
```bash
# Errors should go to stderr
command 2>&1 | grep "Error"   # Should find errors here

# Normal output should go to stdout
command 2>/dev/null           # Should show normal output
```
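To actually prove the streams are separated, capture each one on its own. A sketch, using `ls` as a stand-in for the command under test:

```shell
#!/bin/sh
# Capture stdout and stderr independently and check each stream.
tmp=$(mktemp -d)
touch "$tmp/present.txt"

stdout=$(ls "$tmp/present.txt" 2>/dev/null)       # discard stderr
stderr=$(ls "$tmp/missing.txt" 2>&1 >/dev/null)   # keep only stderr

[ -n "$stdout" ] && echo "normal output on stdout"
[ -n "$stderr" ] && echo "error message on stderr"

rm -rf "$tmp"
```

Note the redirection order in the second capture: `2>&1` first points stderr at the captured stream, then `>/dev/null` discards stdout.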
Check output is correctly formatted:

JSON output:
```bash
command --json | jq .        # Should parse without error
```
CSV output:
```bash
command --csv | head -1      # Check headers present
```
Table output:
```bash
command --table | column     # Check column alignment
```
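These format checks can be made non-interactive. A sketch, using `echo` as a stand-in producer and `python3 -m json.tool` as a substitute when `jq` is not installed (assumes `python3` is available):

```shell
#!/bin/sh
# Machine-check that output parses as the claimed format.
echo '{"status": "ok"}' | python3 -m json.tool >/dev/null 2>&1 \
    && echo "JSON parses" || echo "JSON invalid"

echo 'name,count' | head -1 | grep -q ',' \
    && echo "CSV header present" || echo "CSV header missing"
```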
Verify output matches expected:
Known inputs → Verify outputs:
1. Use predictable test data
2. Run command
3. Parse output
4. Verify values match expected
5. Report discrepancies
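The steps above amount to a golden-file comparison. A sketch, using `sort` as a stand-in for the command under test:

```shell
#!/bin/sh
# Run the command on predictable input and diff against a known-good file.
tmp=$(mktemp -d)
printf 'b\na\nc\n' > "$tmp/input.txt"
printf 'a\nb\nc\n' > "$tmp/expected.txt"

sort "$tmp/input.txt" > "$tmp/actual.txt"

if diff -u "$tmp/expected.txt" "$tmp/actual.txt" >/dev/null; then
    echo "output matches expected"
else
    echo "DISCREPANCY found:"
    diff -u "$tmp/expected.txt" "$tmp/actual.txt"
fi

rm -rf "$tmp"
```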
Check for test data support:
```bash
# Common patterns
command test generate
command fixtures create
command seed --test
```
If available, prefer these built-in generators over hand-rolled fixtures.
When generating test data:
```bash
# Create temp directory
TMPDIR=$(mktemp -d)

# Create test files
echo "test data" > "$TMPDIR/test.txt"

# Run command with test data
command --input "$TMPDIR/test.txt"

# Cleanup
rm -rf "$TMPDIR"
```
Generate data that is predictable, isolated in a temporary directory, and cleaned up afterward.
- **Issue**: No `--help` flag or unclear help
- **Impact**: Users don't know how to use the command
- **Test**: Try `command --help`
- **Report**: Missing or inadequate help text

- **Issue**: Cryptic or missing error messages
- **Impact**: Users can't diagnose problems
- **Test**: Trigger errors, check messages
- **Report**: Unhelpful error messages

- **Issue**: Command always exits 0 even on errors
- **Impact**: Scripts can't detect failures
- **Test**: Cause error, check exit code
- **Report**: Exit code should be non-zero on error

- **Issue**: Dangerous operations without confirmation
- **Impact**: Accidental data loss
- **Test**: Run potentially destructive commands
- **Report**: Should require `--force` flag or confirmation

- **Issue**: Accepts invalid inputs without error
- **Impact**: Silent failures or unexpected behavior
- **Test**: Provide malformed inputs
- **Report**: Should validate and reject bad inputs
1. Discover command and help text
2. Run smoke tests (version, help, basic execution)
3. Test each flag individually
4. Test flag combinations
5. Test with valid arguments
6. Test with invalid arguments
7. Test edge cases (empty, very long, special chars)
8. Verify exit codes
9. Validate error messages
10. Check output formatting
11. Report findings by severity
## Critical Issues ❌
### Command Crashes on Empty Input - Critical
- **Command**: `deploy`
- **Issue**: Crashes when no environment specified
- **Test**:
```bash
./deploy.sh
```

### Unhelpful Config Error
- **Command**: `deploy --config invalid.json`
- **Actual**: `Error: Invalid config`
- **Expected**: `Error: Config file 'invalid.json' not found. Expected JSON file with keys: ...`

### Help Flag Not Recognized
- **Command**: `deploy --help`
- **Output**: `Unknown option: --help`
## Additional Resources
For complete CLI testing procedures, see:
- **`references/exit-code-standards.md`** - Exit code conventions and validation
- **`references/output-validation.md`** - Output format testing patterns