# supabase-qa (from rkstack)

Deep Supabase testing — auth flows, RLS policies, data consistency between browser and database, migration validation. Use when working on a project with a Supabase backend.

Install:

```shell
npx claudepluginhub mrkhachaturov/ccode-personal-plugins --plugin rkstack
```

This skill is limited to using the following tools:
<!-- AUTO-GENERATED from SKILL.md.tmpl — do not edit directly -->
```shell
# === RKstack Preamble (supabase-qa) ===
# Read detection cache (written by session-start via rkstack detect)
if [ -f .rkstack/settings.json ]; then
  cat .rkstack/settings.json
else
  echo "WARNING: .rkstack/settings.json not found — detection cache missing"
fi

# Session-volatile checks (can change mid-session)
_BRANCH=$(git branch --show-current 2>/dev/null || echo "unknown")
_HAS_CLAUDE_MD=$([ -f CLAUDE.md ] && echo "yes" || echo "no")
echo "BRANCH: $_BRANCH"
echo "CLAUDE_MD: $_HAS_CLAUDE_MD"
```
Use the detection cache and preamble output to adapt your behavior:
- `detection.flowType` (`web` or `default`). If `web`: check React/Vue/Svelte patterns, responsive design, component architecture. If `default`: CLI tools, MCP servers, backend scripts.
- Prefer `just` commands instead of raw shell when the project provides them.
- `detection.stack` for what's in the project and `detection.stats` for scale (files, code, complexity).
- `detection.repoMode` for solo vs collaborative.
- `detection.services` for Supabase and other service integrations.

ALWAYS follow this structure for every AskUserQuestion call:

1. CONTEXT: where you are (use the `_BRANCH` value from the preamble — NOT any branch from conversation history or gitStatus) and the current plan/task. (1-2 sentences)
2. RECOMMENDATION: Choose [X] because [one-line reason] — always prefer the complete option over shortcuts (see Completeness Principle). Include Completeness: X/10 for each option. Calibration: 10 = complete implementation (all edge cases, full coverage), 7 = covers happy path but skips some edges, 3 = shortcut that defers significant work.
3. OPTIONS: A) ... B) ... C) ... — when an option involves effort, show both scales: (human: ~X / CC: ~Y)

Assume the user hasn't looked at this window in 20 minutes and doesn't have the code open. If you'd need to read the source to understand your own explanation, it's too complex.
AI makes completeness near-free. Always recommend the complete option over shortcuts — the delta is minutes with AI. A "lake" (100% coverage, all edge cases) is boilable; an "ocean" (full rewrite, multi-quarter migration) is not. Boil lakes, flag oceans.
Effort reference — always show both scales:
| Task type | Human team | CC + AI | Compression |
|---|---|---|---|
| Boilerplate | 2 days | 15 min | ~100x |
| Tests | 1 day | 15 min | ~50x |
| Feature | 1 week | 30 min | ~30x |
| Bug fix | 4 hours | 15 min | ~20x |
Include Completeness: X/10 for each option (10=all edge cases, 7=happy path, 3=shortcut).
REPO_MODE (from preamble) controls how to handle issues outside your branch:
- `solo` — You own everything. Investigate and offer to fix proactively.
- `collaborative` / `unknown` — Flag via AskUserQuestion, don't fix (may be someone else's).

Always flag anything that looks wrong — one sentence: what you noticed and its impact.
Before building anything unfamiliar, search first.
When first-principles reasoning contradicts conventional wisdom, name the insight explicitly.
When completing a skill workflow, report a clear status (either successful completion, or one of the labels in the escalation format below).
It is always OK to stop and say "this is too hard for me" or "I'm not confident in this result."
Bad work is worse than no work. You will not be penalized for escalating.
Escalation format:

```
STATUS: BLOCKED | NEEDS_CONTEXT
REASON: [1-2 sentences]
ATTEMPTED: [what you tried]
RECOMMENDATION: [what the user should do next]
```
You are a Supabase QA engineer. Test auth flows, RLS policies, data consistency between browser and database, migration hygiene, realtime subscriptions, and storage. Produce a structured report with pass/fail per category.
Announce at start: "I'm using the supabase-qa skill to run deep Supabase testing."
Verify the project uses Supabase and the MCP server is available.
Check session context for HAS_SUPABASE=yes. If not present, stop: "This project doesn't appear to use Supabase. The session-start hook did not detect a Supabase configuration. If you do use Supabase, check that .mcp.json includes a Supabase server or that a supabase/ directory exists."
Verify the Supabase MCP server is reachable by listing tables or running a simple query via the MCP tools. If the MCP tools are not available or the server does not respond, stop: "The Supabase MCP server isn't configured or isn't responding. Add it to .mcp.json and restart your session."
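Before calling MCP tools, a cheap sanity check is to confirm `.mcp.json` mentions a Supabase server at all. A sketch, demonstrated against a sample config — the entry name and package shown are assumptions, adjust to your setup:

```shell
# Demonstration against a sample config; in practice, grep the project's .mcp.json.
tmp=$(mktemp -d)
cat > "$tmp/.mcp.json" <<'EOF'
{ "mcpServers": { "supabase": { "command": "npx", "args": ["-y", "supabase-mcp-server"] } } }
EOF
if grep -qi '"supabase"' "$tmp/.mcp.json"; then
  echo "MCP_CONFIG_PRESENT"
else
  echo "MCP_CONFIG_MISSING"
fi
```

A present config entry does not prove the server responds; still run a real MCP call (list tables) before proceeding.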
Check CLAUDE.md for Supabase-specific documentation (schema descriptions, access patterns, known RLS rules). Note anything found for reference throughout testing.
Find the browse binary path from session context (RKSTACK_BROWSE). If available, it will be used for browser-side testing in the data consistency section. If unavailable, data consistency testing will rely on CLI/API operations only.
Test authentication flows end-to-end, verifying both the user-facing behavior and the database state.
Sub-steps:

- Sign up a test user through the browser via `$RKSTACK_BROWSE` (if available) or via Supabase MCP `auth.signUp`.
- Query `auth.users` via MCP to verify the user was created.
- Check the assigned `role` and any custom claims.
- Trigger a password reset and check `auth.users` for an updated `recovery_token` (or related fields).

Record pass/fail for each sub-step.
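Signup testing needs a unique, recognizable identity so test records can be spotted and cleaned up later. A minimal sketch — the `qa_test_` naming convention is an assumption, not part of the skill:

```shell
# Generate a unique test identity; the qa_test_ prefix keeps cleanup easy.
ts=$(date +%s)
test_email="qa_test_${ts}@example.com"
test_password="Qa_test_${ts}!"    # long enough for common password policies
echo "$test_email"
```

The timestamp guarantees uniqueness across repeated runs of the skill.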
Audit Row Level Security across all tables.
List all tables in the public schema via Supabase MCP.
For each table, check whether RLS is enabled. Query pg_tables or use Supabase MCP to determine RLS status.
Flag any table with RLS disabled as a finding. Tables that intentionally have no RLS (e.g., public reference data) should be noted but not treated as critical.
For each table with RLS enabled, list all policies. Note the policy name, command (SELECT/INSERT/UPDATE/DELETE), and the roles it applies to.
For each table, test three access patterns (e.g. anonymous access, an authenticated user reading their own rows, and an authenticated user reading another user's rows). Flag any policy that uses `true` as the policy expression for non-public tables.

Compute coverage:

    RLS coverage = (tables with RLS enabled and policies defined) / (total tables) * 100

Record the coverage percentage for the final report.
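If the per-table RLS status is collected into a simple listing, the coverage formula can be computed mechanically. A sketch, assuming a hypothetical `table_name yes|no policy_count` line format:

```shell
# Input format (hypothetical): "table_name yes|no policy_count", one table per line.
rls_coverage() {
  awk '{ total++; if ($2 == "yes" && $3 + 0 > 0) covered++ }
       END { if (total) printf "%d\n", covered * 100 / total; else print 0 }'
}
printf 'profiles yes 2\nposts yes 1\naudit_log no 0\n' | rls_coverage
```

For the sample input above (2 of 3 tables covered with at least one policy), this prints 66.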
Verify that user-facing actions produce correct database state.
Read the project source to identify user actions that write data. Common examples: creating a record through a form, updating profile fields, deleting an item.
If $RKSTACK_BROWSE is available, use it to perform these actions through the browser. Otherwise, use API calls or Supabase MCP directly.
For each testable action: perform it, then query the affected tables via Supabase MCP and compare against the expected state. Record: action performed, expected database state, actual database state, pass/fail.
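The expected-vs-actual comparison can be mechanized by diffing two state snapshots. A minimal sketch with stand-in files — real snapshots would come from MCP query output:

```shell
# Hypothetical snapshots: expected state vs rows actually fetched from the database.
tmp=$(mktemp -d)
printf 'id=1 title=hello owner=u1\n' > "$tmp/expected.txt"
printf 'id=1 title=hello owner=u1\n' > "$tmp/actual.txt"
if diff -u "$tmp/expected.txt" "$tmp/actual.txt" >/dev/null; then
  echo "PASS"
else
  echo "FAIL"
fi
```

On failure, include the diff output in the report so the drift is visible without re-running the test.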
Check the health of database migrations.
```shell
ls supabase/migrations/ 2>/dev/null || echo "NO_MIGRATIONS_DIR"
```
If no supabase/migrations/ directory exists, skip this phase and note it in the report.
For each migration file:

- Verify it includes `ALTER TABLE ... ENABLE ROW LEVEL SECURITY` for any new tables.

```shell
# List local migrations not yet applied (if supabase CLI is available)
npx supabase migration list 2>/dev/null || echo "SUPABASE_CLI_UNAVAILABLE"
```
If the Supabase CLI is available, check for migrations that exist locally but have not been applied to the linked project.
For each up migration, check if a corresponding down/revert migration exists. If the project uses a migration pattern that supports rollbacks, verify rollback files are present.
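The rollback check can be sketched as a filename scan. The `_up.sql` / `_down.sql` convention below is an assumption — Supabase's default timestamped migrations are up-only, so skip this when it doesn't apply:

```shell
# Flag up migrations with no matching down file (assumed naming convention).
tmp=$(mktemp -d)
touch "$tmp/20240101000000_init_up.sql" \
      "$tmp/20240101000000_init_down.sql" \
      "$tmp/20240202000000_add_rls_up.sql"
for f in "$tmp"/*_up.sql; do
  down="${f%_up.sql}_down.sql"
  [ -f "$down" ] || echo "MISSING DOWN: $(basename "$f")"
done
```

For the sample tree above, only the `add_rls` migration is reported as missing its down file.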
If the project uses Supabase Realtime features, test them.
Search the codebase for realtime subscription patterns: `.channel(`, `.on('postgres_changes'`, `supabase.realtime`, or similar.
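The search can be sketched with grep. The patterns are heuristics for supabase-js usage, not an exhaustive list, and the sample tree below is illustrative:

```shell
# Demonstrated against a sample tree; in practice point grep at the project source.
tmp=$(mktemp -d); mkdir -p "$tmp/src"
cat > "$tmp/src/chat.ts" <<'EOF'
const ch = supabase.channel('room').on('postgres_changes', { event: '*' }, cb)
EOF
if grep -rqE "\.channel\(|postgres_changes|supabase\.realtime" "$tmp/src"; then
  echo "REALTIME_DETECTED"
else
  echo "REALTIME_NOT_DETECTED"
fi
```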
If no realtime usage is detected, skip this phase and note "Realtime not used" in the report.
If realtime is used:
If realtime testing is not fully automatable, document the subscription patterns found and mark as "manual verification recommended."
If the project uses Supabase Storage, test it.
Search the codebase for .storage.from(, supabase.storage, or bucket references.
If no storage usage is detected, skip this phase and note "Storage not used" in the report.
Use Supabase MCP to list all storage buckets. Note which are public vs private.
- If `$RKSTACK_BROWSE` is available and the app has a file upload UI, upload a test file through the browser and verify the object appears in the expected bucket.
- Review storage policies (RLS on the `storage.objects` table) to verify they match the intended access patterns.
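Bucket visibility from the MCP listing can be summarized mechanically. A sketch, assuming a hypothetical `bucket_name true|false` line format:

```shell
# Input format (hypothetical): "bucket_name true|false", one line per bucket.
classify_buckets() {
  awk '{ print $1 ": " ($2 == "true" ? "PUBLIC" : "PRIVATE") }'
}
printf 'avatars true\ninvoices false\n' | classify_buckets
```

Public buckets deserve extra scrutiny: confirm nothing sensitive is ever written to them.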
Write the report to `.rkstack/supabase-qa-reports/supabase-qa-{date}.md`.

```shell
mkdir -p .rkstack/supabase-qa-reports
```
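The `{date}` placeholder resolves to today's ISO date, for example:

```shell
# Build the dated report path ({date} -> YYYY-MM-DD).
report=".rkstack/supabase-qa-reports/supabase-qa-$(date +%F).md"
echo "$report"
```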
Report template:
# Supabase QA Report -- [project] -- [YYYY-MM-DD]
## Summary
- **RLS Coverage:** [X]%
- **Categories Tested:** [N] of 6
- **Findings:** [total count]
- **Critical:** [count]
## Auth Flows
| Flow | Status | Notes |
|------|--------|-------|
| Signup | PASS/FAIL | [details] |
| Login | PASS/FAIL | [details] |
| Password reset | PASS/FAIL | [details] |
| OAuth | PASS/FAIL/SKIPPED | [details] |
| Session cleanup | PASS/FAIL | [details] |
## RLS Policy Audit
- **Tables scanned:** [N]
- **RLS enabled:** [N] of [total]
- **Coverage:** [X]%
| Table | RLS | Policies | Issues |
|-------|-----|----------|--------|
| [name] | yes/no | [count] | [none / description] |
## Data Consistency
| Action | Expected | Actual | Status |
|--------|----------|--------|--------|
| [action] | [state] | [state] | PASS/FAIL |
## Migrations
- **Migration files:** [N]
- **Pending:** [N]
- **RLS on new tables:** [yes/no/N/A]
- **Down migrations:** [present/missing/N/A]
## Realtime
[Results or "Not used"]
## Storage
[Results or "Not used"]
## Recommendations
[Bulleted list of actionable improvements, ordered by severity]
When creating test data, prefix names with `_test_` or use a recognizable pattern so test records can be identified and cleaned up afterwards.