<objective>
Analyzes prompts in `./prompts/` to show pending work, implementation status, and recommends next steps. Uses memory bank caching for speed and suggests executable `/daplug:run-prompt` commands with your preferred agent.
</objective>
<step0_resolve_prompt_manager> IMPORTANT: Use the prompt-manager script for all prompt operations:
PLUGIN_ROOT=$(jq -r '.plugins."daplug@cruzanstx"[0].installPath' ~/.claude/plugins/installed_plugins.json)
PROMPT_MANAGER="$PLUGIN_ROOT/skills/prompt-manager/scripts/manager.py"
CONFIG_READER="$PLUGIN_ROOT/skills/config-reader/scripts/config.py"
This ensures consistent git root detection and prompt resolution across all daplug commands. </step0_resolve_prompt_manager>
<step0_check_agent_preference> Check CLAUDE.md for preferred agent before generating recommendations:
Read the `preferred_agent` setting (project-level first, then user-level) from `<daplug_config>`:
# Get REPO_ROOT from prompt-manager for consistent path resolution
REPO_ROOT=$(python3 "$PROMPT_MANAGER" info --json | jq -r '.repo_root')
# Check project-level first, then user-level via config reader
PREFERRED_AGENT=$(python3 "$CONFIG_READER" get preferred_agent --repo-root "$REPO_ROOT")
echo "${PREFERRED_AGENT:-not_set}"
Supported agents:
- `claude` - Claude Code (default, most capable)
- `codex` - OpenAI Codex CLI (gpt-5.2-codex, default reasoning)
- `codex-high` - OpenAI Codex CLI with high reasoning effort
- `codex-xhigh` - OpenAI Codex CLI with extra-high reasoning effort
- `gemini` - Google Gemini CLI
- `zai` - Z.AI GLM-4.6 via Codex CLI
If no preference is set, ask the user which agent to use, then save the choice to `~/.claude/CLAUDE.md` (applies to all projects):
# Create ~/.claude/ if needed
mkdir -p ~/.claude
# Set preferred_agent in user-level CLAUDE.md using <daplug_config>
python3 "$CONFIG_READER" set preferred_agent "<selected_agent>" --scope user
</step0_check_agent_preference>
<step1_check_memory_bank_cache> BEFORE doing any analysis, check for cached results:
Cache location:
- Directory: `./memory-bank/`
- File: `./memory-bank/prompts-analysis.md`
# Get prompt info from prompt-manager (JSON includes counts and paths)
PROMPT_INFO=$(python3 "$PROMPT_MANAGER" info --json)
PENDING_COUNT=$(echo "$PROMPT_INFO" | jq -r '.active_count')
COMPLETED_COUNT=$(echo "$PROMPT_INFO" | jq -r '.completed_count')
PROMPTS_DIR=$(echo "$PROMPT_INFO" | jq -r '.prompts_dir')
COMPLETED_DIR=$(echo "$PROMPT_INFO" | jq -r '.completed_dir')
# Get newest prompt file timestamp
NEWEST_PROMPT=$(ls -t "$PROMPTS_DIR"/*.md "$COMPLETED_DIR"/*.md 2>/dev/null | head -1)
NEWEST_TIME=$(stat -c %Y "$NEWEST_PROMPT" 2>/dev/null || stat -f %m "$NEWEST_PROMPT" 2>/dev/null || echo 0)
# Check cache metadata (first 10 lines contain counts and timestamp)
if [ -f "./memory-bank/prompts-analysis.md" ]; then
CACHE_TIME=$(stat -c %Y "./memory-bank/prompts-analysis.md" 2>/dev/null || stat -f %m "./memory-bank/prompts-analysis.md" 2>/dev/null || echo 0)
# Cache is fresh if it's newer than newest prompt
fi
Cache decision:
- Cache file exists and is newer than the newest prompt file: use the cache.
- `--refresh` flag: skip the cache, do a full analysis.
When using cache:
Read `./memory-bank/prompts-analysis.md` and display its content.
Add a note: "📋 Using cached analysis from [date]. Use `--refresh` to re-analyze."
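The freshness check above can be sketched as a small POSIX-shell helper (a sketch only; the cache path and prompt directories are the ones resolved earlier, and the `stat -f %m` branch is an assumed macOS fallback):

```shell
# Return 0 (use cache) when the cache file exists and is at least as new
# as every prompt file; return 1 (re-analyze) otherwise.
cache_is_fresh() {
  cache="$1" prompts_dir="$2" completed_dir="$3"
  [ -f "$cache" ] || return 1
  cache_time=$(stat -c %Y "$cache" 2>/dev/null || stat -f %m "$cache")
  newest=$(ls -t "$prompts_dir"/*.md "$completed_dir"/*.md 2>/dev/null | head -1)
  [ -n "$newest" ] || return 0  # no prompt files: nothing can invalidate the cache
  newest_time=$(stat -c %Y "$newest" 2>/dev/null || stat -f %m "$newest")
  [ "$cache_time" -ge "$newest_time" ]
}
```

If the function returns 1, fall through to the full analysis in step 2.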
</step1_check_memory_bank_cache>
<step2_gather_prompts> Use prompt-manager to list prompts consistently (handles git root detection, filtering, etc.):
# List all prompts as JSON (includes number, name, path, status)
python3 "$PROMPT_MANAGER" list --json
# List only active (pending) prompts
python3 "$PROMPT_MANAGER" list --active --json
# List only completed prompts
python3 "$PROMPT_MANAGER" list --completed --json
Based on user flags:
Based on user flags:
- `--pending` (default): use `python3 "$PROMPT_MANAGER" list --active --json`
- `--completed`: use `python3 "$PROMPT_MANAGER" list --completed --json`
- `--all`: use `python3 "$PROMPT_MANAGER" list --json`
The JSON output contains:
[
{"number": "006", "name": "backup-server", "filename": "006-backup-server.md", "path": "/path/to/prompts/006-backup-server.md", "status": "active"},
{"number": "001", "name": "initial-setup", "filename": "001-initial-setup.md", "path": "/path/to/prompts/completed/001-initial-setup.md", "status": "completed"}
]
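For instance, the active entries can be pulled out of that JSON with `jq` (a sketch using inline sample data; real usage would pipe the `$PROMPT_MANAGER list --json` output instead):

```shell
# Print "number  name" for each active prompt in the JSON list.
prompts_json='[
  {"number":"006","name":"backup-server","status":"active"},
  {"number":"001","name":"initial-setup","status":"completed"}
]'
active=$(echo "$prompts_json" | jq -r '.[] | select(.status=="active") | "\(.number)  \(.name)"')
echo "$active"
```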
Get repo info for absolute paths:
# Get repo root and prompts directory paths
REPO_INFO=$(python3 "$PROMPT_MANAGER" info --json)
REPO_ROOT=$(echo "$REPO_INFO" | jq -r '.repo_root')
PROMPTS_DIR=$(echo "$REPO_INFO" | jq -r '.prompts_dir')
</step2_gather_prompts>
<step3_analyze_each_prompt> For each pending prompt from the JSON list:
Metadata is already extracted from prompt-manager:
- `number` (e.g., "011")
- `name` (e.g., "authentication-system")
- `path` (absolute path to file)
- `status` ("active" or "completed")
Read prompt content using prompt-manager:
# Read full content
python3 "$PROMPT_MANAGER" read {number}
# Or read first 100 lines for analysis
python3 "$PROMPT_MANAGER" read {number} | head -100
Extract from content:
- `<objective>` section - what the prompt aims to accomplish
- `<context>` section - background, dependencies, what's already done
Check implementation status by searching the codebase:
- memory-bank/progress.md for mentions of this feature
- CLAUDE.md for documentation of this feature
Categorize status:
- NOT STARTED - No evidence of implementation
- PARTIAL - Some parts implemented, some remaining
- LIKELY DONE - Evidence suggests complete (should verify)
- BLOCKED - Has dependencies on other prompts
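A rough evidence count can pre-sort prompts into these buckets before manual review (a sketch; the keyword argument and the three-hit threshold are arbitrary assumptions, not part of this command spec):

```shell
# Guess a status bucket from how many files mention a feature keyword.
guess_status() {
  keyword="$1"; shift
  hits=$(grep -rl "$keyword" "$@" 2>/dev/null | wc -l | tr -d ' ')
  if [ "$hits" -eq 0 ]; then echo "NOT STARTED"
  elif [ "$hits" -lt 3 ]; then echo "PARTIAL"
  else echo "LIKELY DONE"
  fi
}
```

A LIKELY DONE guess still needs verification against memory-bank/progress.md, as noted above.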
</step3_analyze_each_prompt>
<step4_group_and_prioritize> Group prompts by category based on filename patterns.
Prioritize based on:
Identify parallelizable prompts:
</step4_group_and_prioritize>
<step5_check_worktree_dir> Check for worktree_dir setting before generating report:
# Use REPO_ROOT from prompt-manager info (already resolved in step2)
REPO_ROOT=$(python3 "$PROMPT_MANAGER" info --json | jq -r '.repo_root')
REPO_NAME=$(basename "$REPO_ROOT")
# Read worktree_dir from CLAUDE.md (project first, then user-level)
WORKTREE_DIR=$(python3 "$CONFIG_READER" get worktree_dir --repo-root "$REPO_ROOT")
If not found, prompt the user: Use AskUserQuestion tool:
- `../worktrees` - Sibling directory to current repo (Recommended)
- `/tmp/worktrees` - Temporary directory
After the user responds, save to `~/.claude/CLAUDE.md`:
# Resolve to absolute path, creating the directory if it does not exist
mkdir -p "$WORKTREE_DIR"
WORKTREE_DIR=$(cd "$WORKTREE_DIR" && pwd)
# Save to user-level CLAUDE.md
mkdir -p ~/.claude
python3 "$CONFIG_READER" set worktree_dir "$WORKTREE_DIR" --scope user
</step5_check_worktree_dir>
<step6_generate_report> CRITICAL: Use absolute paths for all file and directory references.
Use the $REPO_ROOT, $REPO_NAME, and $WORKTREE_DIR variables from previous steps.
Output a structured report with:
## Prompts Analysis Report
### Summary
- Pending: X prompts
- Completed: Y prompts
- Estimated implementation status breakdown
### Pending Prompts by Category
#### Research/Analysis
| # | Prompt | Notes | Command |
|-----|-------------------------------|------------------------------------------------|-------------------------------------------------|
| 275 | Production Deployment Revisit | Re-evaluate self-hosted vs cloud options | /daplug:run-prompt 275 --model {agent} --worktree |
#### Backend Features
| # | Prompt | Notes | Command |
|-----|-------------------------------|------------------------------------------------|-------------------------------------------------|
| 295 | Transcript Success Monitoring | Add metrics table for fetch success rates | /daplug:run-prompt 295 --model {agent} --worktree |
#### Infrastructure/DevOps
| # | Prompt | Notes | Command |
|-----|-------------------------------|------------------------------------------------|-------------------------------------------------|
| 045 | Deploy Delays to Staging | Deployment pipeline improvements | /daplug:run-prompt 045 --model {agent} --worktree |
### Recommendations
**Quick Wins** (start here):
- 289 + 303 - Analysis/investigation, no code conflicts, can run parallel:
`/daplug:run-prompt 289 303 --model {agent} --worktree`
**High Priority** (core functionality):
- 295 then 298 - Backend features, run sequential:
`/daplug:run-prompt 295 --model {agent} --worktree`
**Consider Skipping/Archiving**:
- [Prompt #] - [Name] - [Why - may be obsolete or already done]
### Recently Completed (last 5)
- [#] - [Name]
---
## Quick Start Commands
**Preferred Agent:** `{preferred_agent}` (change in `$REPO_ROOT/CLAUDE.md` under `<daplug_config>`)
### Run Single Prompt (in current context)
```bash
/daplug:run-prompt {recommended_prompt_number} --model {preferred_agent}
```
### Run Single Prompt (isolated worktree)
```bash
/daplug:run-prompt {recommended_prompt_number} --model {preferred_agent} --worktree
# Worktree: $WORKTREE_DIR/$REPO_NAME-prompt-{number}-{timestamp}/
# Logs: $WORKTREE_DIR/$REPO_NAME-prompt-{number}-{timestamp}/worktree.log
```
### Run Multiple Prompts in Parallel
```bash
/daplug:run-prompt {prompt1} {prompt2} {prompt3} --model {preferred_agent} --worktree
```
Example for parallel execution:
```bash
/daplug:run-prompt 227 228 229 --model {preferred_agent} --worktree
# Creates 3 parallel worktrees:
# - $WORKTREE_DIR/$REPO_NAME-prompt-227-{timestamp}/
# - $WORKTREE_DIR/$REPO_NAME-prompt-228-{timestamp}/
# - $WORKTREE_DIR/$REPO_NAME-prompt-229-{timestamp}/
```
### Monitor Worktrees
```bash
git worktree list
ls -la "$WORKTREE_DIR/"
# View logs for a specific worktree
tail -f "$WORKTREE_DIR/$REPO_NAME-prompt-{number}-"*/worktree.log
# Check all worktree statuses
for d in "$WORKTREE_DIR/$REPO_NAME-prompt-"*/; do
  echo "=== $d ==="
  tail -5 "$d/worktree.log" 2>/dev/null || echo "No log yet"
done
```
### Key Paths
| Resource | Path |
|---|---|
| Pending Prompts | $REPO_ROOT/prompts/ |
| Completed Prompts | $REPO_ROOT/prompts/completed/ |
| Worktrees | $WORKTREE_DIR/ |
| Memory Bank | $REPO_ROOT/memory-bank/ |
| Cache File | $REPO_ROOT/memory-bank/prompts-analysis.md |
| CLAUDE.md | $REPO_ROOT/CLAUDE.md |
</step6_generate_report>
<step7_update_memory_bank>
**After generating the report, save to memory bank:**
1. Check if memory bank directory exists:
```bash
if [ -d "./memory-bank" ]; then
# Memory bank exists, save cache
fi
```
2. Write `./memory-bank/prompts-analysis.md` with a metadata header:
---
generated: YYYY-MM-DD HH:MM:SS
pending_count: X
completed_count: Y
preferred_agent: {agent}
cache_note: Auto-generated by /daplug:prompts command. Delete to force refresh.
---
# Prompts Analysis Report
[Full analysis report here with absolute paths]
This allows the next /daplug:prompts run to reuse the cached report instead of re-analyzing unchanged prompts.
</step7_update_memory_bank>
<implementation_detection_patterns> When checking if a feature might be implemented, look for:
Authentication prompts: Check frontend/src/lib/auth.js, Logon.svelte, look for OAuth config
API prompts: Check backend/internal/app/ for route handlers
Processor prompts: Check processor/internal/pipeline/services/
Frontend prompts: Check frontend/src/routes/ and frontend/src/lib/components/
Database prompts: Check migration files in processor/migrations/
CI/CD prompts: Check .gitlab-ci.yml
Testing prompts: Check tests/ directory
Use grep patterns like:
# Check if feature keywords exist in codebase
grep -rl "keyword" --include="*.go" --include="*.svelte" --include="*.ts" backend/ frontend/ processor/
</implementation_detection_patterns>
<output_format> Key principle: Every prompt listing includes a ready-to-copy command.
Use clean markdown formatting and keep the output scannable - users want to quickly see what's available and what to do next.
Path format examples (using variables from step5):
- Use: `$REPO_ROOT/prompts/216-svelte5-layouts-quick-wins.md`
- Use: `$WORKTREE_DIR/$REPO_NAME-prompt-216-20251221/`
- Avoid: `./prompts/216-svelte5-layouts-quick-wins.md` (relative path)
Table format - MUST include Command column:
| # | Prompt | Notes | Command |
|-----|----------------------------|------------------------------------|-------------------------------------------------|
| 303 | Shorts Generation Workflow | Frontend shows "queued" - trace it | /daplug:run-prompt 303 --model codex --worktree |
Table columns:
- `#` - Prompt number (for quick reference)
- Prompt - Short descriptive name (from filename slug)
- Notes - One-line summary of what it does
- Command - Full executable command with preferred agent
Formatting rules:
- Include `--worktree` in commands for isolation
- Use the `preferred_agent` from `<daplug_config>` in CLAUDE.md in all commands
Recommendations section format:
Quick Wins (start here):
- 289 + 303 - Analysis/investigation, no code conflicts, can run parallel:
/daplug:run-prompt 289 303 --model {preferred_agent} --worktree
High Priority (core functionality):
- 295 then 298 - Backend features, run sequential
</output_format>
<critical_notes>