Guide PMs through debugging AI-generated code: how to read error messages, form effective debugging prompts, isolate issues, and decide when to fix forward vs start over.
From pm-vibe-coding. Install with `npx claudepluginhub tarunccet/pm-skills --plugin pm-vibe-coding`. This skill uses the workspace's default tool permissions.
Help PM-builders debug AI-generated code effectively without deep engineering knowledge. Covers how to read error messages, form good debugging prompts for AI assistants, systematically isolate issues, and make the call between fixing forward vs starting a section over.
AI-generated code fails in predictable patterns: (1) hallucinated API names or library functions that don't exist, (2) type mismatches between frontend and backend (sending a string where a number is expected), (3) missing environment variables, (4) outdated library syntax that the AI learned from older training data, (5) state management issues in React (stale state, missing dependencies in useEffect). Most bugs in PM-builder projects fall into one of these categories. Knowing the category tells you where to look and how to prompt the AI to fix it.
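As a sketch of pattern (2), here is how a frontend/backend type mismatch typically surfaces. The `users` array and both function names are invented for illustration: a route parameter arrives as a string, but the lookup uses strict equality against numeric ids, so nothing ever matches.

```javascript
const users = [{ id: 123, name: "Ada" }];

function findUserBuggy(id) {
  // id is "123" (a string from the URL), but u.id is 123 (a number),
  // so the strict comparison never matches
  return users.find((u) => u.id === id);
}

function findUserFixed(id) {
  // coerce before comparing
  return users.find((u) => u.id === Number(id));
}

console.log(findUserBuggy("123")); // undefined
console.log(findUserFixed("123")); // { id: 123, name: 'Ada' }
```

This is why "it returns nothing" bugs often have no error message at all: the code runs fine, it just compares values of different types.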
The debugging mindset for PM-builders: You are not expected to read and understand all the code. You are expected to: (1) capture the full error, (2) classify the type of problem, (3) give the AI the right context to fix it, and (4) verify the fix actually resolves the user-facing issue. This is a product-level skill, not an engineering one.
The "3 strikes" rule: If you've given the AI three attempts to fix the same issue and the error keeps changing or getting worse, stop. The AI is stuck in a patch loop. The solution is to start that section fresh, not to keep patching.
You are helping debug a problem with $ARGUMENTS.
Before doing anything else, capture the complete error. Most debugging failures happen because the PM shares only part of the error with the AI.
Capture from all three sources:

1. Browser console: open DevTools (F12) → Console and copy the full error with its stack trace.
2. Server logs: the terminal running `npm run dev`. If using Vercel/Railway, check the deployment logs in the dashboard.
3. Network tab: the response body of the failing API request.

Full error template to give the AI:
```
Browser console error: [paste full error with stack trace]
Server log error: [paste if applicable]
Network response: [paste the failing API response body]
What I was doing when it happened: [describe the user action]
Expected: [what should have happened]
Actual: [what happened instead]
```
Match your error to the closest pattern:
| Error Message Pattern | Error Type | Most Likely Cause | Where to Look |
|---|---|---|---|
| `Cannot read properties of undefined` | Null reference | Using a variable before it has data | Where the variable is first set; check async loading |
| `Cannot read properties of null` | Null reference | A database query returned no result | Add a null check before accessing properties |
| 404 Not Found on API call | Route not found | Path typo, wrong HTTP method, routing misconfigured | Check the exact URL in the network tab vs. the route file |
| 401 Unauthorized | Auth failure | Missing or expired auth token | Check that auth headers are being sent; check token expiry |
| 403 Forbidden | Permission denied | User doesn't have access to this resource | Check authorization logic and database RLS policies |
| 500 Internal Server Error | Backend crash | Exception in server code | Check server logs for the actual error message |
| CORS error | Cross-origin blocked | Backend missing CORS headers | Backend needs to allow the frontend's origin |
| `Cannot find module '...'` | Import error | Package not installed or wrong import path | Run `npm install`; check the import path spelling |
| `[Function] is not a function` | Type error | Calling something as a function that isn't one | Log the variable before calling it to see its actual type |
| Connection refused | Network error | Service not running or wrong port | Is the database/API running? Check the connection string |
| `TypeError: fetch is not a function` | Environment error | Wrong environment (server vs. client context) | Check where the code is running; use the appropriate fetch |
| Hydration error | React SSR mismatch | Server and client render different content | Check for browser-only APIs (`window`, `localStorage`) in server components |
| `Module not found: Can't resolve` | Missing dependency | Package not installed | Run `npm install [package-name]` |
| `SyntaxError: Unexpected token` | Parse error | JSON parse failure or syntax error in code | Log the raw response before parsing; check the `JSON.parse` input |
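For the two null-reference rows above, here is a minimal sketch of the usual fix (the `response` object is invented for illustration): guard the property access with optional chaining and a fallback instead of assuming the data loaded.

```javascript
const response = { data: null }; // e.g. a query that returned no row

// Crashes with: TypeError: Cannot read properties of null (reading 'name')
// const broken = response.data.name;

// Safe: optional chaining plus a fallback value
const name = response.data?.name ?? "Not found";
console.log(name); // "Not found"
```

The same one-line pattern handles both `undefined` (data not loaded yet) and `null` (query returned nothing), which is why it is usually the first fix to ask the AI for.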
Template for AI debugging prompt:

```
I'm getting this error in my [Next.js / React / Node.js] app:

ERROR:
[paste the FULL error message and stack trace]

RELEVANT CODE:
[paste the specific file and function where the error occurs]

CONTEXT:
- This happens when: [describe the user action that triggers it]
- Expected behavior: [what should happen]
- Actual behavior: [what happens instead]
- This worked before: [yes/no — if yes, what changed?]

Please:
1. Explain what is causing this error
2. Show me the fix
3. Explain why your fix works so I can avoid this pattern in the future
```
Prompting principles that improve AI debugging:
If you can't identify where the error originates:
1. Comment out to isolate: Temporarily disable sections of code until the error stops. The last section you disabled is the culprit. Start from the outermost call and work inward.
2. Add checkpoints: Add console.log("CHECKPOINT 1: reached") before the suspected error location, then move it closer to the error until you find the exact line. Remove all checkpoints when done.
3. Log the inputs: Add a log statement just before the failing function to see the exact values being passed in. The error is often that the data is in a different format than expected (e.g., id is "123" string when the function expects 123 number).
4. Check the network tab: In the browser, open F12 → Network → filter by Fetch/XHR. Find the failing API request, then check the request URL and method, the payload that was sent, and the response status code and body.
5. Reproduce in isolation: Can you trigger the error from a simpler test? If the bug only happens in a complex multi-step flow, try to reproduce it in fewer steps. A minimal reproduction is much easier to debug.
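Techniques 2 and 3 above can be combined in one sketch (`calculateTotal` and the sample data are invented for illustration): a checkpoint confirms the code path actually runs, and logging the inputs exposes a price that arrived as a string.

```javascript
function calculateTotal(items) {
  console.log("CHECKPOINT 1: calculateTotal reached"); // remove when done
  console.log("inputs:", JSON.stringify(items)); // see the actual shape of the data

  // Coerce explicitly: without Number(), 0 + "9.99" would concatenate
  // into the string "09.99" instead of adding
  return items.reduce((sum, item) => sum + Number(item.price), 0);
}

console.log("total:", calculateTotal([{ price: "9.99" }, { price: 5 }]));
```

The input log is often all you need: seeing `"price":"9.99"` where you expected a number tells you where the bug lives before you read a single line of logic.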
Fix forward when:
Start the section over when:
How to start over effectively (without losing all progress):
Pattern 1: "It worked yesterday, now it's broken"
Run `git diff` to see what changed. Check if any npm packages were updated. Verify environment variables are still set in the hosting platform.

Pattern 2: "The AI generated code with a function/library that doesn't exist"
Symptoms: `TypeError: X is not a function`, `Cannot find module`, or the function exists in the AI's output but errors when called.

Prompt to give the AI: "The [name] you used doesn't exist in [library] version [version]. What's the correct current API? Show me the working replacement using only documented functions."

Pattern 3: "Data isn't saving / changes aren't persisting"
Pattern 4: "Works on localhost, breaks in production"
Pattern 5: "Users are randomly logged out"
Pattern 6: "The page flickers or shows wrong data briefly"
A structured debugging guide with: