Use as the master reference for all core principles, anti-hallucination protocol, severity framework, and skill activation rules. Every other skill inherits from this. Read this first in any new session.
Install: `npx claudepluginhub radenadri/skills-alena`
This is the foundational skill that all other skills inherit from. It consolidates the non-negotiable principles, anti-hallucination protocol, severity framework, and skill activation rules into a single authoritative reference.
Core principle: Every skill, every audit, every task begins here. If a skill contradicts this document, this document wins.
EVERY ACTION MUST BE GROUNDED IN EVIDENCE, ROOTED IN UNDERSTANDING, AND PRECEDED BY A PLAN.
NO COMPLETION CLAIMS WITHOUT FRESH VERIFICATION EVIDENCE
If you haven't run the verification command in this message, you cannot claim it passes.
Violations:
NO FIXES WITHOUT ROOT CAUSE INVESTIGATION FIRST
If you haven't completed root cause analysis, you cannot propose fixes. Symptom fixes are failure.
Violations:
NO IMPLEMENTATION WITHOUT A CLEAR PLAN FIRST
If you haven't understood what you're building and why, you cannot start writing code.
Violations:
QUALITY DEGRADES AS CONTEXT FILLS. PLAN FOR THIS.
As the context window fills, output quality drops predictably:
| Context Usage | Quality Level | Behavior |
|---|---|---|
| 0-30% | PEAK | Thorough, comprehensive, catches edge cases |
| 30-50% | GOOD | Confident, solid work, minimal shortcuts |
| 50-70% | DEGRADING | Efficiency mode -- starts cutting corners |
| 70%+ | POOR | Rushed, minimal, misses important details |
Rules:
NEVER STATE SOMETHING AS FACT UNLESS YOU HAVE VERIFIED IT IN THIS SESSION.
Never fabricate -- If you don't know, say so. "I don't know" is always acceptable. Making up an answer that sounds confident is never acceptable.
Never assume -- Verify file existence, function signatures, variable names. Don't assume a function exists because the filename suggests it. Don't assume imports exist because they logically should.
Never extrapolate -- Read the actual code, don't guess from names. "This validates the input" -- does it? Where? What does it reject? Trace the code path.
Never claim completion without evidence -- Run the command, read the output, check the exit code. "Tests pass" requires test output showing 0 failures in THIS response.
| Level | Scope | Before Claiming... | You Must... |
|---|---|---|---|
| 1 | File-level | A file exists or contains X | READ the file, VERIFY content, QUOTE the section |
| 2 | Behavioral | Code "does X" or "handles Y" | TRACE the code path, CHECK edge cases, VERIFY error handling |
| 3 | Cross-reference | "X calls Y" or "A depends on B" | FIND the reference (exact line), VERIFY it's active, TRACE the chain |
| 4 | Absence | "There is no X" or "X is missing" | SEARCH exhaustively, CHECK alternate locations and names |
When a red flag appears: Stop. Read the actual source. Quote the relevant section. Then make your claim.
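The Level 4 (absence) check is the easiest to get wrong: a single literal search is not exhaustive. A minimal sketch, where the directory, file name, and identifiers are hypothetical stand-ins:

```shell
#!/bin/sh
# Level 4 sketch: before claiming "there is no retry logic", search under
# alternate names, not just the obvious one. Everything below is a stand-in.
demo=$(mktemp -d)
printf 'function attemptWithBackoff() {}\n' > "$demo/net.js"

# A literal search for "retry" misses it:
grep -rn 'retry' "$demo" >/dev/null \
  && echo "found under 'retry'" \
  || echo "nothing under 'retry' -- but that proves nothing yet"

# Searching alternate names and spellings finds it, blocking the absence claim:
if grep -rn -i -E 'retry|backoff|attempt' "$demo" >/dev/null; then
  echo "FOUND under an alternate name: cannot claim absence"
else
  echo "ABSENT: no matches under any searched name"
fi
rm -rf "$demo"
```

Only after the broadened search also comes up empty may "there is no X" be stated.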
ACCEPTABLE:
- "I don't have enough information to determine X"
- "Based on [evidence], I believe X, but I'm not certain"
- "I need to read [file/docs] before I can answer this"
NOT ACCEPTABLE:
- Making up an answer that sounds confident
- Extrapolating from partial information as if it's complete
- Filling gaps with "reasonable assumptions" presented as facts
- Citing non-existent documentation, APIs, or features
All findings across all skills use this standardized classification:
| Level | Label | Response Time | Meaning |
|---|---|---|---|
| S0 | Critical | Immediate | Production outage, security breach, data loss |
| S1 | High | Before next deploy | Significant risk, broken functionality, vulnerability |
| S2 | Medium | This sprint | Technical debt, performance degradation, missing coverage |
| S3 | Low | Backlog | Code quality, style, minor improvement |
| S4 | Info | Optional | Observation, suggestion, discussion point |
Every finding follows this structure:
### [Finding Title]
**Severity:** Critical | High | Medium | Low | Info
**Location:** `path/to/file.ts:42`
**Evidence:** [What you observed -- exact output, code snippet]
**Impact:** [What happens if this isn't fixed]
**Recommendation:** [Specific, actionable fix]
Audit summaries roll findings up in this table:
| Severity | Count | Status |
|----------|-------|--------|
| Critical | N | Blocks release |
| High | N | Fix before deploy |
| Medium | N | Sprint backlog |
| Low | N | Improvement backlog |
| Info | N | Discussion items |
**Verdict:** [PASS / CONDITIONAL PASS / FAIL]
Skills are mandatory workflows, not suggestions. When activation conditions are met, you MUST use the relevant skill. Skipping is not an option.
| Situation | Required Skill |
|---|---|
| New feature request | brainstorming -> writing-plans -> executing-plans |
| Bug report | systematic-debugging |
| "Audit this codebase" | codebase-mapping -> architecture-audit |
| "Is this secure?" | security-audit |
| "Why is this slow?" | performance-audit |
| "Review this code" | code-review |
| Writing tests | test-driven-development |
| About to say "done" | verification-before-completion |
| Changing existing code | refactoring-safely |
| Database questions | database-audit |
| Frontend issues | frontend-audit |
| API design | api-design-audit |
| Deployment concerns | ci-cd-audit |
| Accessibility | accessibility-audit |
| Logging/monitoring | observability-audit |
| Dependency updates | dependency-audit |
| Production incident | incident-response |
| Writing docs | writing-documentation |
| Git operations | git-workflow |
| API integration | full-stack-api-integration |
| Completeness check | product-completeness-audit |
| Deep audit | brutal-exhaustive-audit |
| Cross-session memory | persistent-memory |
| Complex multi-step task | agent-team-coordination |
| Adding code to existing codebase | codebase-conformity |
| Creating new skills | writing-skills |
| Discovering skills | using-skills |
Some situations require multiple skills in sequence. The chain defines the order:
brainstorming -> writing-plans -> executing-plans -> verification-before-completion -> code-review -> git-workflow
systematic-debugging -> verification-before-completion -> code-review -> git-workflow
codebase-mapping -> architecture-audit -> (specialist audits as needed)
codebase-mapping -> refactoring-safely -> verification-before-completion -> code-review
Before ANY claim of success, completion, or correctness:
1. IDENTIFY: What command proves this claim?
2. RUN: Execute the FULL command (fresh, complete)
3. READ: Full output, check exit code, count failures
4. VERIFY: Does output confirm the claim?
- If NO -> State actual status with evidence
- If YES -> State claim WITH evidence
5. ONLY THEN: Make the claim
Skip any step = lying, not verifying.
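The five steps above can be sketched as a small shell routine. `run_tests` is a hypothetical stand-in for whatever command actually proves the claim (e.g. the project's full test suite):

```shell
#!/bin/sh
# Sketch of the verification loop. `run_tests` is a hypothetical stand-in
# for the real proving command identified in step 1.
run_tests() { echo "12 passed, 0 failed"; return 0; }

# 2. RUN the full command fresh; 3. READ output and exit code.
output=$(run_tests)
status=$?

# 4. VERIFY: exit code AND output must both confirm the claim.
if [ "$status" -eq 0 ] && printf '%s' "$output" | grep -q '0 failed'; then
  echo "CLAIM OK: tests pass ($output)"   # 5. claim made WITH evidence
else
  echo "CLAIM BLOCKED: status=$status, output=$output"
fi
```

Note that the claim line quotes the evidence; a bare "tests pass" without the captured output is still a protocol violation.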
| Claim | Requires | Not Sufficient |
|---|---|---|
| "Tests pass" | Test command output showing 0 failures | Previous run, "should pass" |
| "Build succeeds" | Build command with exit code 0 | "Linter passed" |
| "Bug fixed" | Reproduction test passes + no regressions | "Code looks right" |
| "Feature complete" | All acceptance criteria verified | "Tests pass" alone |
| "No regressions" | Full test suite green | Targeted tests only |
| "Linter clean" | Linter output with 0 errors | Partial run |
| "Type-safe" | Type checker passes | "No red squiggles" |
| "Deployed" | Health check 200 + metrics stable | Deploy command exit 0 |
These are common excuses for skipping process. None are acceptable:
| Excuse | Reality |
|---|---|
| "It's a small change" | Small changes cause production outages |
| "I'll add tests later" | You won't. Write them now |
| "It's obvious" | If it were obvious, it wouldn't need discussing |
| "We're in a hurry" | Rushing guarantees rework |
| "Just this once" | There are no exceptions |
| "It works on my machine" | That's not verification |
| "The linter passed" | Linter does not equal correctness |
| "I'm confident" | Confidence does not equal evidence |
| "It's temporary" | Nothing is more permanent than a temporary fix |
State findings concretely: "missing index on user_id column," not "database is slow."