Get multi-model second opinion on design, code review, or bugs
Gathers full PR context and sends it to multi-model AI for comprehensive second opinion analysis.
```bash
/plugin marketplace add jleechanorg/claude-commands
/plugin install claude-commands@claude-commands-marketplace
```

Get comprehensive multi-model AI feedback on your code, design decisions, or bug analysis.
🚀 Uses a command-line approach to bypass 25K token limits and send full PR context!
```bash
# Get second opinion on current PR branch
/secondo

# Specify feedback type
/secondo design
/secondo code-review
/secondo bugs

# With specific question
/secondo "Should I use Redis or in-memory caching for rate limiting?"
```
This command uses a direct approach with `auth-cli.mjs` for secure token management:
1. **Authentication (Auto-Refresh):** runs `~/.claude/scripts/auth-cli.mjs token` for secure token retrieval
2. **Gather Full PR Context** (automated via `build_second_opinion_request.py`):
   - Determines the comparison base (`SECOND_OPINION_BASE_REF`, falls back to `origin/main` → `main` → `master`)
   - Captures `git status --short`, `git diff --stat`, and recent commits for orientation
3. **Build Comprehensive Request:** combines your question with the gathered git context into one JSON-RPC payload
4. **Direct MCP Server Call:** posts the payload to `https://ai-universe-backend-dev-114133832173.us-central1.run.app/mcp`
5. **Multi-Model Analysis:** multiple AI models review the same context and return independent opinions
6. **Results Display:** saves the report to `tmp/secondo_analysis_[timestamp].md`

When executing `/second_opinion` or `/secondo`:
```bash
# Verify auth-cli.mjs is installed
if [ ! -f "$HOME/.claude/scripts/auth-cli.mjs" ]; then
  echo "❌ auth-cli.mjs not found. Run /localexportcommands to install"
  exit 1
fi

# Get token (auto-refreshes if expired using refresh token)
# This is silent - only prompts for login if refresh token is invalid/missing
TOKEN=$(node ~/.claude/scripts/auth-cli.mjs token)

# If this fails, user needs to authenticate
if [ $? -ne 0 ]; then
  echo "❌ Authentication failed. Please run:"
  echo "   node ~/.claude/scripts/auth-cli.mjs login"
  exit 1
fi
```
Key Behavior:
- Token is stored in `~/.ai-universe/auth-token.json` (exact same as the AI Universe repo)

`skills/second_opinion_workflow/scripts/request_second_opinion.sh` calls `build_second_opinion_request.py` to harvest the full PR delta. The helper:

- Resolves the comparison base (`SECOND_OPINION_BASE_REF`, then `origin/main` → `main` → `master`, finally `HEAD^`); see the sketch after the manual-usage example below
- Captures `git status --short`, `git diff --stat`, recent commits, and the full diff
- Caps attached per-file patches at `SECOND_OPINION_MAX_FILES` (default 20), adding truncation markers when limits are hit
- Records the trimming in `gitContextNotices` so the MCP tool sees exactly what was cut

Manual usage if you want to inspect the payload directly:
```bash
python3 skills/second_opinion_workflow/scripts/build_second_opinion_request.py \
  /tmp/secondo_request.json \
  "What should I double-check before merging?" \
  3 \
  origin/main
```
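A minimal sketch of that base-ref fallback order, assuming the helper checks each candidate in sequence (the `resolve_base_ref` function is illustrative, not part of the actual script):

```bash
# Illustrative sketch: documented fallback order for the comparison base
resolve_base_ref() {
  local candidate
  for candidate in "${SECOND_OPINION_BASE_REF:-}" origin/main main master; do
    if [ -n "$candidate" ] && git rev-parse --verify --quiet "$candidate" >/dev/null; then
      echo "$candidate"
      return 0
    fi
  done
  echo "HEAD^"  # final fallback: compare against the previous commit
}
```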
Tune the capture limits with environment variables:
```bash
export SECOND_OPINION_BASE_REF=origin/main   # override comparison base
export SECOND_OPINION_MAX_FILES=25           # number of per-file patches to attach
export SECOND_OPINION_MAX_DIFF_CHARS=32000   # full diff char budget
export SECOND_OPINION_MAX_PATCH_CHARS=8000   # per-file diff char budget
```
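For a one-off run, the overrides can be scoped to a single invocation instead of exported (the question string here is just a placeholder):

```bash
# Same call as the manual-usage example, with per-invocation limits
SECOND_OPINION_MAX_FILES=25 SECOND_OPINION_MAX_DIFF_CHARS=32000 \
python3 skills/second_opinion_workflow/scripts/build_second_opinion_request.py \
  /tmp/secondo_request.json \
  "Focus on the caching changes" \
  3 \
  origin/main
```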
The generated payload now includes the git context automatically:
```json
{
  "jsonrpc": "2.0",
  "method": "tools/call",
  "params": {
    "name": "agent.second_opinion",
    "arguments": {
      "question": "Should I land this patch set as-is?",
      "maxOpinions": 3,
      "gitContext": {
        "branch": "feature/refactor",
        "base": "origin/main",
        "diffstat": "…",
        "recentCommits": "…",
        "changedFiles": [
          {"status": "M", "path": "$PROJECT_ROOT/api/routes.py"},
          {"status": "A", "path": "$PROJECT_ROOT/services/cache.py"}
        ],
        "patches": {
          "$PROJECT_ROOT/api/routes.py": "@@ -42,6 +42,15 @@ …",
          "$PROJECT_ROOT/services/cache.py": "@@ -0,0 +1,200 @@ …"
        },
        "limits": {
          "maxFiles": 20,
          "diffCharLimit": 24000,
          "patchCharLimit": 6000
        }
      },
      "gitContextNotices": [
        "git diff origin/main...HEAD truncated by 1520 characters (limit 24000)."
      ]
    }
  },
  "id": 1
}
```
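Before sending, you can spot-check what was trimmed by querying the generated payload directly (assumes `jq` is available):

```bash
# What did the helper truncate, and under which limits?
jq '.params.arguments.gitContextNotices' /tmp/secondo_request.json
jq '.params.arguments.gitContext.limits' /tmp/secondo_request.json
```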
💡 You can still tailor the natural-language question, but you no longer need to paste diff snippets manually; the helper attaches them for you.
```bash
# Call MCP server with HTTPie (matches request_second_opinion.sh)
http POST "https://ai-universe-backend-dev-114133832173.us-central1.run.app/mcp" \
  "Accept:application/json, text/event-stream" \
  "Authorization:Bearer $TOKEN" \
  < /tmp/secondo_request.json \
  --timeout=180 \
  --print=b > /tmp/secondo_response.json

# Check if successful
if [ $? -eq 0 ] && [ -s /tmp/secondo_response.json ]; then
  echo "✅ Analysis complete"
else
  echo "❌ Request failed"
  exit 1
fi
```
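If HTTPie is not installed, a curl equivalent should behave the same; the explicit Content-Type header is an assumption here (HTTPie sets it automatically for JSON bodies):

```bash
# curl equivalent of the HTTPie call above
curl -sS -X POST "https://ai-universe-backend-dev-114133832173.us-central1.run.app/mcp" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -H "Authorization: Bearer $TOKEN" \
  --data-binary @/tmp/secondo_request.json \
  --max-time 180 \
  -o /tmp/secondo_response.json
```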
Extract from `/tmp/secondo_response.json`:

- Main analysis text (`result.content[0].text`)
- Report destination: `tmp/secondo_analysis_[timestamp].md`

Example parsing:
```bash
# Extract the main response text
jq -r '.result.content[0].text' /tmp/secondo_response.json > /tmp/secondo_parsed.txt

# Extract metrics if available
TOKENS=$(jq -r '.result.content[0].text' /tmp/secondo_response.json | grep -o 'Total Tokens: [0-9,]*' | head -1)
COST=$(jq -r '.result.content[0].text' /tmp/secondo_response.json | grep -o 'Total Cost: \$[0-9.]*' | head -1)
```
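To finish the parsing step, the extracted text can be saved under the timestamped report path; the `date` format below is an assumption chosen to match the example report name later in this document:

```bash
# Save the parsed analysis as the timestamped report
TIMESTAMP=$(date +%Y%m%d_%H%M)
mkdir -p tmp
cp /tmp/secondo_parsed.txt "tmp/secondo_analysis_${TIMESTAMP}.md"
echo "📄 Report saved: tmp/secondo_analysis_${TIMESTAMP}.md"
```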
Maximum token budget: 24,900 tokens (stay under 25K limit)
Allocation strategy:
Tips to maximize context:
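One practical tip is a pre-flight size estimate before sending; the 4-characters-per-token ratio below is a rough heuristic, not the server's actual tokenizer:

```bash
# Rough pre-flight size check against the 24,900-token budget
CHARS=$(wc -c < /tmp/secondo_request.json)
echo "Estimated tokens: $((CHARS / 4)) / 24900"
```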
Authentication: Required via Firebase OAuth (exact same as AI Universe repo)
- `node ~/.claude/scripts/auth-cli.mjs login` (browser-based OAuth, run outside Claude Code)
- `node ~/.claude/scripts/auth-cli.mjs status` (view current auth status)
- Token stored in `~/.ai-universe/auth-token.json` (ID token + refresh token)
- `/secondo` automatically refreshes expired ID tokens (no browser popup needed)

Token Lifecycle:
When You'll Need to Re-authenticate:
Rate Limits: Applied per authenticated user based on Firebase account
Practical limits:
Display results in markdown with:
Save to: tmp/secondo_analysis_[timestamp].md
Success criteria:

✅ Request completes successfully (curl exit code 0)
✅ Response file is non-empty
✅ Response contains valid JSON with `result.content`
✅ Analysis report saved to tmp directory
✅ User sees verdict and key findings
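These criteria can be checked mechanically; a minimal sketch using `jq -e`, which exits non-zero when the path is missing:

```bash
# Assert the response file exists, is non-empty, and has the expected shape
if [ -s /tmp/secondo_response.json ] \
   && jq -e '.result.content[0].text' /tmp/secondo_response.json >/dev/null; then
  echo "✅ Response validated"
else
  echo "❌ Response missing or malformed"
  exit 1
fi
```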
Common issues:
- Timeouts: long analyses can run for a minute or more, so allow the request up to 180 seconds (`--max-time 180` with curl, `--timeout=180` with HTTPie)

Example run:

```text
User: /secondo

1. Gather PR context:
   ✓ Branch: fix_mcp
   ✓ Changed files: 18 files (+2448/-799)
   ✓ Git diff saved to /tmp/secondo_diff_full.txt
   ✓ Commits: 5 commits analyzed

2. Build analysis request:
   ✓ Question: 487 words
   ✓ Code context: 14,250 tokens
   ✓ Total estimated: 23,890 tokens (95.6% of limit)

3. Execute request:
   ✓ curl -X POST [MCP endpoint]
   ✓ Response: 73KB received in 47 seconds
   ✓ Status: HTTP 200 OK

4. Parse results:
   ✓ Models: 3 (Gemini, Perplexity, OpenAI)
   ✓ Total tokens: 24,964
   ✓ Total cost: $0.10193
   ✓ Sources: 52 authoritative references

5. Display verdict:
   🎯 UNANIMOUS VERDICT: CORRECT (with caveats)
   ✅ Fix is safe for production
   ⚠️ Array initialization discipline required
   📄 Report saved: tmp/secondo_analysis_20251031_1847.md
```