Review AI-generated code from a PM perspective: spec compliance, security red flags, data model soundness, UX issues in frontend code, and error handling against product requirements.
From pm-vibe-coding. Install: npx claudepluginhub tarunccet/pm-skills --plugin pm-vibe-coding. This skill uses the workspace's default tool permissions.
Review AI-generated code without needing deep engineering expertise. Focuses on what PMs can and should catch: does the code match the spec, are there obvious security issues, is the data model right, are UX requirements implemented, and does error handling match what users should experience.
PMs reviewing AI-generated code are not looking for algorithmic efficiency or coding style — engineers handle that. PMs should focus on: correctness against requirements, security basics (especially around user data), data model alignment with the product spec, and whether error states surface correctly to users. These are the issues most likely to cause product failures and that PMs are best positioned to catch.
Why AI-generated code needs PM review: AI coding assistants are optimized for "working code" — code that runs without errors. They are not optimized for "correct product behavior." It's common for AI to implement the technically correct version of a requirement but miss the product intent. For example: an AI might correctly implement "users can delete their account" but not add a confirmation step, not clean up related data, and not send a confirmation email — all of which are product requirements, not just code requirements.
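The account-deletion example can be sketched in code to show what "product requirements, not just code requirements" looks like. All names here (deleteRelatedData, sendConfirmationEmail, and so on) are hypothetical stand-ins for whatever the real codebase uses:

```typescript
// Hypothetical dependencies; the point is the product-required steps, not the names.
type Deps = {
  deleteUser: (id: string) => void;
  deleteRelatedData: (id: string) => void;      // posts, sessions, files, ...
  sendConfirmationEmail: (id: string) => void;
};

// The product requirement, not just the code requirement:
// confirm first, clean up related data, then notify the user.
function deleteAccount(userId: string, confirmed: boolean, deps: Deps): boolean {
  if (!confirmed) return false;                 // require an explicit confirmation step
  deps.deleteRelatedData(userId);               // avoid orphaned rows
  deps.deleteUser(userId);
  deps.sendConfirmationEmail(userId);           // tell the user it actually happened
  return true;
}
```

An AI assistant will often produce only the `deps.deleteUser(userId)` line; the other three are exactly the gaps a PM review should catch.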
The PM review mindset: You are not reviewing code quality. You are auditing product correctness. Ask "does this code do what the user needs?" not "is this good code?"
You are reviewing AI-generated code for $ARGUMENTS.
Ask the user to paste the relevant code, or describe what was built. Then run through these review dimensions:
Compare implementation against the original specification:
How to verify: Walk through each user story from your spec and trace whether the code implements it correctly. A good prompt: "Show me where in the code [user story] is implemented, step by step."
Common spec compliance failures in AI-generated code:
Look for these common AI-generated code security issues:
Critical (fix before any public sharing):
- Hardcoded API keys or secrets in the source. Secrets belong in environment variables (process.env.VARIABLE_NAME).
- Every /api/ endpoint that handles user data should check authentication.
- Try visiting /admin while logged out: it should redirect or deny access.
- Is the .env file excluded from the repository (listed in .gitignore)?

Important (fix before beta users):
- SQL queries built by string concatenation (e.g. "SELECT * FROM users WHERE id = " + userId). Should use parameterized queries or an ORM.

How to flag: if you see any critical issues, they are blockers. Stop and fix them before sharing the code or URL with anyone.
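The concatenation anti-pattern and its fix, side by side. The db client here is a hypothetical stand-in with the query(sql, params) shape that node-postgres and most ORMs expose:

```typescript
// Hypothetical client whose query() takes SQL text and parameters separately.
type Db = { query: (sql: string, params?: unknown[]) => void };

// UNSAFE: passing "1 OR 1=1" as userId turns this into "return every user".
function getUserUnsafe(db: Db, userId: string): void {
  db.query("SELECT * FROM users WHERE id = " + userId);
}

// SAFE: the SQL text is fixed; user input travels separately as a parameter.
function getUserSafe(db: Db, userId: string): void {
  db.query("SELECT * FROM users WHERE id = $1", [userId]);
}
```

A PM doesn't need to know SQL deeply to spot the difference: if user input appears inside the quoted query string, flag it.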
Quick security audit prompt for AI: "Audit this code for security vulnerabilities. Specifically check for: hardcoded secrets, unprotected routes, SQL injection, missing input validation, and exposed sensitive data. Show me every instance."
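For the "unprotected routes" check, the pattern to look for is roughly this (hypothetical session and response shapes, not any specific framework's API):

```typescript
// Hypothetical session shape: null means the visitor is not logged in.
type Session = { userId: string; isAdmin: boolean } | null;

function handleAdminRoute(session: Session): { status: number; body: string } {
  if (!session) return { status: 401, body: "Not logged in" };      // logged-out check
  if (!session.isAdmin) return { status: 403, body: "Forbidden" };  // role check
  return { status: 200, body: "admin data" };
}
```

If a route that touches user or admin data jumps straight to the "return data" step with no session check above it, that is a critical finding.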
Check that the data model matches your spec:
- Are created_at and updated_at timestamps present on records that need audit trails?

Common data model failures:
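As a concrete illustration of the audit-trail and ownership checks above, a minimal record shape might look like this (field names are hypothetical; match them against your own spec):

```typescript
// Hypothetical record shape: the spec-driven fields a PM should look for.
interface ProjectRecord {
  id: string;
  user_id: string;     // ownership: who can see and edit this row
  created_at: string;  // audit trail
  updated_at: string;
}

function createProject(id: string, userId: string, now: Date): ProjectRecord {
  const ts = now.toISOString();
  return { id, user_id: userId, created_at: ts, updated_at: ts };
}
```

If the AI-generated schema is missing the ownership column or the timestamps your spec implies, that gap is far cheaper to fix now than after real data exists.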
Review the frontend from a product perspective — not code style, but user experience correctness:
States that must be implemented:
UX correctness checks:
How to check: Walk through every user flow in the app in a browser while logged in, with the developer-tools network tab open. Trigger every error state deliberately (submit empty forms, disconnect from the network, use wrong credentials). Check what the user sees in each case.
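One way to reason about the "states that must be implemented" check: every data fetch should resolve to exactly one of loading, error, empty, or data. A framework-agnostic sketch (all names hypothetical):

```typescript
// Hypothetical view-state model: every fetch resolves to exactly one of these.
type ViewState =
  | { kind: "loading" }
  | { kind: "error"; message: string }
  | { kind: "empty" }
  | { kind: "data"; items: string[] };

function toViewState(result: { pending: boolean; error?: string; items?: string[] }): ViewState {
  if (result.pending) return { kind: "loading" };
  if (result.error) return { kind: "error", message: result.error };
  if (!result.items || result.items.length === 0) return { kind: "empty" }; // the state AI most often skips
  return { kind: "data", items: result.items };
}
```

In review, ask the AI to show where each of the four states is rendered; "empty" is the one most often missing.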
Verify errors are handled from a user perspective — not just that errors are caught, but that they produce the right user experience:
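A before-and-after sketch of what "the right user experience" means here (hypothetical names; real code would render the message in the UI rather than return it):

```typescript
// Anti-pattern: the catch block exists, so nothing crashes, but the user is lied to.
function saveSwallowed(save: () => void): string {
  try { save(); return "Saved"; } catch { return "Saved"; } // silently reports success on failure
}

// Fix: the failure changes what the user sees, with an actionable message.
function saveSurfaced(save: () => void): string {
  try { save(); return "Saved"; }
  catch { return "Couldn't save. Please try again."; }
}
```

Both versions "handle" the error in the technical sense; only the second one handles it in the product sense.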
Error handling anti-patterns to look for:
- try/catch blocks that swallow errors silently (a catch block with no handling)

Overall Assessment: [Pass / Pass with issues / Needs rework]
Blockers (must fix before sharing with any user):
Recommended Fixes (should fix before beta users):
Nice to Have (can defer to next iteration):
Spec compliance summary: [X of Y user stories correctly implemented]
Blocker found: OpenAI API key hardcoded on line 12 of app/api/generate/route.ts — move to .env file as OPENAI_API_KEY, add .env to .gitignore, and set the variable in Vercel's environment variable settings before pushing to GitHub.
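The fix for this blocker, in miniature (the env parameter is injected here so the sketch is testable; real code would read process.env directly):

```typescript
// Read the secret from the environment instead of hardcoding it in source.
function getOpenAIKey(env: Record<string, string | undefined>): string {
  const key = env.OPENAI_API_KEY;
  if (!key) throw new Error("Missing OPENAI_API_KEY"); // fail loudly instead of shipping a broken build
  return key;
}
```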
Recommended fix: The /api/projects endpoint returns all projects in the database, not just the current user's projects. On line 24 of app/api/projects/route.ts, add a .where('user_id', userId) filter using the authenticated user's ID from the session.
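The recommended fix, reduced to its essence (an in-memory stand-in for the database query; the real fix is the .where() filter described above):

```typescript
type Project = { id: string; user_id: string };

// Scope the result set to the authenticated user; never return other users' rows.
function listProjects(all: Project[], sessionUserId: string): Project[] {
  return all.filter((p) => p.user_id === sessionUserId);
}
```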
Nice to have: The delete project button has no confirmation dialog. Add a window.confirm() or a modal confirmation before calling the delete API — prevents accidental data loss.
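A sketch of the confirmation guard (the confirm parameter is a hypothetical injection point so the guard is testable without a browser; in the real UI it would be window.confirm() or a modal):

```typescript
function deleteProject(
  id: string,
  confirm: () => boolean,
  callDeleteApi: (id: string) => void
): boolean {
  if (!confirm()) return false; // user backed out: do nothing
  callDeleteApi(id);
  return true;
}
```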