From bpa-mcp
Report an MCP tool issue (BPA, DS, GDB, or Keycloak). TRIGGER when: an MCP tool returns an error, wrong data, or results inconsistent with what the user expected — including when Claude observes a suspicious MCP tool response during normal work (proactive). Also triggers on "this is broken", "that's not right", "the tool did the wrong thing", "it should have done X instead", "report this bug". DO NOT TRIGGER when: the error is clearly a user input mistake (wrong params, wrong tool), auth is expired (suggest re-login instead), or the user is asking about MCP server development/code (not tool usage).
You will help the user document a functional issue with an eRegistrations MCP server, producing a structured markdown report that an MCP developer can use to reproduce and fix the problem.
Your role: Be a patient but skeptical investigator. The user may be non-technical — guide them through describing what went wrong without jargon. But also verify claims before writing them up — AI agents (including you) can hallucinate issues, misread responses, or confuse expected behavior with actual bugs.
Core principle: Every issue report must be verified, not just transcribed. A wrong bug report wastes more developer time than no report at all.
| Server | Tool prefix | Config path | Source path |
|---|---|---|---|
| BPA | mcp__BPA__ | ~/.config/mcp-eregistrations-bpa/ | src/mcp_eregistrations_bpa/tools/ |
| DS | mcp__DS__ | ~/.config/mcp-eregistrations-ds/ | src/mcp_eregistrations_ds/tools/ |
| GDB | mcp__GDB__ | ~/.config/mcp-eregistrations-gdb/ | src/mcp_eregistrations_gdb/tools/ |
| Keycloak | mcp__Keycloak__ | ~/.config/mcp-eregistrations-keycloak/ | src/mcp_eregistrations_keycloak/tools/ |
Throughout this skill, {SERVER} refers to the identified server (BPA, DS, GDB, or Keycloak) and {server} to its lowercase form (bpa, ds, gdb, keycloak).
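As a supplementary check, the config paths in the table can be probed on disk to see which servers are set up locally. A sketch, demonstrated against a temporary directory standing in for the real home directory:

```shell
# Demo: detect which eRegistrations MCP config directories exist.
# A temp directory with one fake config dir stands in for $HOME here.
home=$(mktemp -d)
mkdir -p "$home/.config/mcp-eregistrations-bpa"

for server in bpa ds gdb keycloak; do
  d="$home/.config/mcp-eregistrations-$server"
  # Print only the servers whose config directory is present
  [ -d "$d" ] && echo "found: $server"
done

rm -rf "$home"
```

Against the real home directory, replace `$home` with `$HOME`; the tool prefix in the failing call remains the authoritative way to identify the server.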
First, determine which MCP server is involved. Look for the tool name prefix in the failing call (mcp__BPA__, mcp__DS__, mcp__GDB__, mcp__Keycloak__).

Then ask the user to describe the problem in their own words. Helpful prompts:
What were you trying to do? What happened instead of what you expected?
Listen for the specifics: which tool was called, what inputs the user provided, and what came back.
If the problem is visible in the current conversation (a tool call that returned wrong data, an error), reference it directly — don't make the user repeat what's already in context.
Collect these automatically (don't ask the user):
Server version and instances: Call mcp__{SERVER}__connection_status(instance="<name>") for the affected instance. The response includes version, latest_version, and update_available fields. Also call mcp__{SERVER}__instance_list() to capture registered instances.
Today's date: Run via Bash:
date +%Y-%m-%d
Recent server log errors (if available). Run via Bash:
find ~/.config/mcp-eregistrations-{server}/instances -name 'server.log' -exec grep -E 'ERROR|CRITICAL' {} \; 2>/dev/null | tail -10
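When several instances each have their own server.log, adding grep's `-H` flag attributes every error line to its file. A self-contained demonstration, using a temporary directory in place of the real config path:

```shell
# Demo of grep -H for attributing log errors to their source file.
# A temp directory stands in for ~/.config/mcp-eregistrations-{server}.
cfg=$(mktemp -d)
mkdir -p "$cfg/instances/demo"
printf 'INFO started\nERROR connection refused\n' > "$cfg/instances/demo/server.log"

# Same shape as the command above, with -H to prefix each match with its path
find "$cfg/instances" -name 'server.log' \
  -exec grep -H -E 'ERROR|CRITICAL' {} \; 2>/dev/null | tail -10

rm -rf "$cfg"
```

With `-H`, each error line arrives as `<path>/server.log:ERROR ...`, so a report can name the affected instance.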
Based on what the user described, determine which category applies:
| Category | Symptoms |
|---|---|
| Wrong API call | Tool sends incorrect HTTP method, path, or parameters |
| Data transformation | Response data is mangled, fields missing, wrong format |
| UI mismatch | Tool produces different result than the same action in the web UI |
| Missing validation | Tool accepts invalid input that the UI would reject |
| Auth/connection | Token errors, timeouts, wrong instance targeted |
| Missing capability | Tool doesn't support an operation that the UI does |
Ask clarifying questions only if you genuinely can't categorize. Don't interrogate the user.
This step is critical. Do NOT skip it.
Before writing anything up, verify the issue is real and reproducible:
If the original tool call is in the conversation context, re-run it with the exact same parameters. Compare:
| Outcome | Action |
|---|---|
| Same error/wrong result | Issue confirmed — proceed to Step 5 |
| Different error | Note both results — may be intermittent or state-dependent |
| Works correctly now | Stop. Tell the user: "I re-ran the same call and it succeeded. The original failure may have been transient (auth expiry, network, server restart). Want me to still file it as intermittent?" |
Try variations to isolate whether the issue is the tool or the input. For example, did the user call determinant_get when they meant determinant_list? If so, suggest the correct tool.

If the MCP server source code is available locally, check the tool implementation:
# Find the tool source
grep -r "def <tool_name>" src/mcp_eregistrations_{server}/tools/ 2>/dev/null
Read the relevant function to understand what the tool actually does: the API call it makes and how it handles the response.
If the tool is working as designed but the user expected different behavior, that's a feature request, not a bug. Note this distinction in the report.
This is where hallucinations hide. Before accepting any claim about what should happen:
Never write "Expected: X" in a report unless you have evidence that X is correct. If you're unsure, write "Expected behavior needs verification" and explain why.
If the failing tool call is in the current conversation, extract:
If the user can show what the web UI does for the same action (screenshot, network tab, or description), capture that as the "expected behavior" baseline.
Before writing the report, run through this checklist honestly. Write your answers down (internally, not in the report) for each question:
| Question | If YES → |
|---|---|
| Am I claiming the tool "should" do X without evidence? | Remove the claim or mark as "needs verification" |
| Did I read the error message carefully, or am I paraphrasing from memory? | Re-read the actual response |
| Am I conflating two different issues into one? | Split into separate reports |
| Is my "expected behavior" based on how I think the API works, or on actual docs/UI? | Cite your source or downgrade confidence |
| Did the user actually say this, or am I inferring? | Quote the user's words, don't interpret |
Before concluding "this is a bug", consider each alternative:
For each alternative, note whether you ruled it out and how. If you can't rule out an alternative, mention it in the report.
Build this table for every factual claim that will appear in the report. This is not optional — it's the structural filter that prevents bad reports from being filed.
| Claim | Type | Evidence |
|---|---|---|
| "Tool returns X" | Hard (reproduced) | Re-ran call, got same result |
| "Should return Y" | Assumption (unverified) | User said so, no UI/doc confirmation |
| "Field Z is missing" | Hard (observed) | Compared response against UI screenshot |
Type definitions:
Any claim typed as "Assumption" must be marked "needs verification" in the report. Do not present assumptions as facts.
Before proceeding, answer this honestly:
If this report is wrong, what damage does it cause? Would a developer waste hours reproducing a non-issue? Would they "fix" something that wasn't broken and introduce a real bug?
If the answer is "significant damage" and you have any Assumption-typed claims, stop and tell the user what evidence is needed before proceeding.
Based on your verification work:
| Level | Criteria |
|---|---|
| Verified | Reproduced the issue, confirmed expected behavior from UI/docs, ruled out alternatives |
| Likely | Reproduced or have strong evidence, but couldn't fully verify expected behavior |
| Suspected | User report is credible and consistent, but couldn't reproduce or verify independently |
| Unverified | Couldn't reproduce, expected behavior is unclear, or significant alternative explanations remain |
If confidence is "Unverified", tell the user before writing the report. They may want to gather more evidence first.
Create the report directory and file:
mkdir -p ~/Desktop/mcp-issue-reports
Write the report to ~/Desktop/mcp-issue-reports/<date>-<server>-<slug>.md where <server> is the lowercase server name and <slug> is a short kebab-case summary (e.g., 2026-04-02-bpa-effect-create-wrong-format).
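The naming convention can be sketched in shell (the server and slug values here are hypothetical examples, not taken from a real report):

```shell
# Sketch: assemble the report path from today's date, the lowercase
# server name, and a short kebab-case slug (example values only)
server=bpa
slug=effect-create-wrong-format
today=$(date +%Y-%m-%d)
dir="$HOME/Desktop/mcp-issue-reports"
mkdir -p "$dir"
report="$dir/${today}-${server}-${slug}.md"
echo "$report"
```

For a run on 2026-04-02 this yields the example path above, `~/Desktop/mcp-issue-reports/2026-04-02-bpa-effect-create-wrong-format.md`.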
# {SERVER} MCP Issue: <Short title>
**Date:** <YYYY-MM-DD>
**Server:** <BPA | DS | GDB | Keycloak>
**Reporter:** <user name if known, otherwise "via Claude">
**Severity:** <critical | high | medium | low>
**Confidence:** <verified | likely | suspected | unverified>
## Environment
- **MCP server version:** <version> (latest: <latest>)
- **Instance:** <name> (<url>)
- **Service ID:** <if applicable>
## Summary
<1-2 sentence description of the problem>
## Reproduction
- **Reproduced:** <yes — consistent | yes — intermittent | no — works on retry | not attempted>
- **Reproduction tool call:**
Tool: mcp__{SERVER}__<tool_name>(param=value, ...)
Result: <same error | different result | success>
## Steps to Reproduce
1. <step>
2. <step>
3. <step>
## Actual Behavior
<What happened. Include the tool call, parameters, and response.>
Tool: mcp__{SERVER}__<tool_name>(param=value, ...)
Response:
## Expected Behavior
<What should have happened.>
**Evidence source:** <web UI observation | API reference doc | tool docstring | user report only>
## Web UI Comparison
<If available: what the UI sends/receives for the same operation.
Include API endpoint, method, and payload if captured.>
<If not available: "Not compared — user did not check UI behavior.">
## Claim Classification
| Claim | Type | Evidence |
|-------|------|----------|
| <claim> | <Hard / Soft / Assumption> | <evidence or "needs verification"> |
## Alternative Explanations Considered
| Alternative | Ruled out? | How |
|-------------|-----------|-----|
| User error (wrong params) | <yes/no> | <explanation> |
| Stale state | <yes/no> | <explanation> |
| Auth/session issue | <yes/no> | <explanation> |
| Known limitation | <yes/no> | <explanation> |
| Working as designed | <yes/no> | <explanation> |
## Server Logs
<Relevant error lines from server.log, or "No errors found in logs.">
## Analysis
<Your assessment of what's likely wrong. Reference specific source files
in the MCP server if you can identify them.>
### Likely affected files
- `src/mcp_eregistrations_{server}/tools/<file>.py`
## Suggested Fix
<Concrete suggestion for the MCP developer, if you have one.>
Show the user the report path and a brief summary of what was captured:
Issue report saved to ~/Desktop/mcp-issue-reports/<filename>.md

Summary:
- Server: <BPA | DS | GDB | Keycloak>
- Category: <category>
- Severity: <severity>
- Confidence: <confidence>
You can share this file with the MCP development team. Would you like me to adjust anything?
If confidence is below "verified", explicitly tell the user what additional evidence would raise it (e.g., "If you can confirm the web UI behavior for this action, I can upgrade confidence to 'verified'").
After the user confirms the report, offer to file it as a GitHub issue.
If any gate fails, tell the user exactly what's blocking it and what evidence would unblock it:
I can't file this on GitHub yet — the expected behavior is based on your description only (no UI/doc confirmation). If you can verify what the web UI does for this action, I can upgrade the claim and file it.
Check prerequisites:
gh auth status
If not authenticated, tell the user to run `gh auth login`, then stop.
Map labels from the report:
| Report field | GitHub label |
|---|---|
| Server: BPA | bpa |
| Server: DS | ds |
| Server: GDB | gdb |
| Server: Keycloak | keycloak |
| Category: Wrong API call | api |
| Category: Data transformation | data |
| Category: UI mismatch | ui-mismatch |
| Category: Missing validation | validation |
| Category: Auth/connection | auth |
| Category: Missing capability | enhancement |
| Confidence: verified | verified |
| Confidence: likely | likely |
| Severity: critical | critical |
| Severity: high | high |
Ask the user for confirmation before filing:
Ready to file on UNCTAD-eRegistrations/MCP_eRegistrations:
- Title:
- Labels:
File it?
Create the issue (only after explicit user approval):
gh issue create --repo UNCTAD-eRegistrations/MCP_eRegistrations \
--title "<title>" \
--body-file ~/Desktop/mcp-issue-reports/<filename>.md \
--label "<label1>,<label2>"
Show the issue URL to the user.
If any labels don't exist in the repo, omit them rather than failing. Use --label only for labels that exist.
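This existence check can be sketched as a small filter. In practice the repo's labels would come from `gh label list --repo <repo> --json name --jq '.[].name'`; here a fixed list stands in for that output so the sketch runs without network access:

```shell
# Sketch: keep only the labels that actually exist in the repo.
# "existing" stands in for: gh label list --repo <repo> --json name --jq '.[].name'
existing='bpa
api
verified
critical'
wanted='bpa api verified high'   # "high" is assumed missing in this sketch

labels=''
for l in $wanted; do
  # grep -qx: quiet, whole-line match against the existing-label list
  if printf '%s\n' "$existing" | grep -qx "$l"; then
    labels="${labels:+$labels,}$l"
  fi
done
echo "$labels"
```

This prints `bpa,api,verified`, ready to pass to `--label`; the missing `high` label is silently dropped rather than failing the `gh issue create` call.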