Simulates EGO reviewer for GNOME Shell extensions, auditing lifecycle correctness, signal disconnection, resource cleanup, async safety, security patterns, and code quality. Use before submission or when reviewing extension code.
```
npx claudepluginhub zvibaratz/gnome-extension-reviewer
```

This skill uses the workspace's default tool permissions.
Simulated EGO reviewer code review for GNOME Shell extensions.
This skill guides a thorough manual code review that covers everything an extensions.gnome.org reviewer checks, plus common rejection patterns learned from real submissions.
Understanding the real EGO review process helps calibrate review severity:
An extension with `pkexec` usage gets more leeway than one with `typeof super.destroy === 'function'` guards everywhere.

- `metadata.json` — note UUID, shell-version, settings-schema, any session-modes
- `.js` files — identify extension.js, prefs.js, lib/ modules

Using licensing-checklist.md:
Using lifecycle-checklist.md:
- Read extension.js — verify enable/disable symmetry
- Check constructor constraints (no resource allocation in constructor)
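The symmetry rule means everything `enable()` creates, `disable()` must tear down, and the constructor allocates nothing. A minimal plain-JS sketch of the pattern (GJS imports omitted; `Indicator` is a hypothetical stand-in for a panel widget):

```javascript
// Hypothetical stand-in for a St/Clutter widget with a destroy() method.
class Indicator {
  destroy() { this.destroyed = true; }
}

class Extension {
  constructor() {
    // Constructor allocates nothing: no widgets, signals, or timeouts here.
    this._indicator = null;
  }

  enable() {
    // Everything created here must be released in disable().
    this._indicator = new Indicator();
  }

  disable() {
    // Mirror of enable(): destroy and null out every resource.
    this._indicator?.destroy();
    this._indicator = null;
  }
}
```

After a full `enable()`/`disable()` cycle the instance holds no live references, so repeated cycles (screen lock, session-mode changes) are safe.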
Reference the resource-tracking findings from Phase 0 lint. `ego-lint` already ran `build-resource-graph.py` and `check-resources.py`; use the `resource-tracking/*` findings from the lint results as the starting point. Do NOT re-run `build-resource-graph.py`.
For each resource-tracking FAIL/WARN from lint, read the cited file:line to verify it is a true leak. Classify each as TRUE LEAK (blocking), JUSTIFIED (note why), or FALSE POSITIVE (skip). For true leaks, include the fix in the report.
For ownership chains: if lint reports orphans, verify parent calls
child's destroy() in its own disable()/destroy() and that destroy
order is reverse of creation. If lint reports 0 orphans, do a brief
spot-check of 1-2 ownership chains to verify graph accuracy, but do not
perform a full ownership walk.
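The ownership convention being verified can be sketched in plain JS (class names are illustrative): each parent destroys its children in reverse creation order inside its own `destroy()`.

```javascript
class Child {
  constructor(log, name) { this._log = log; this._name = name; }
  destroy() { this._log.push(this._name); }
}

class Parent {
  constructor(log) {
    // Children recorded in creation order.
    this._children = [new Child(log, 'first'), new Child(log, 'second')];
  }

  destroy() {
    // Destroy in reverse order of creation, then drop the references.
    for (const child of this._children.reverse())
      child.destroy();
    this._children = [];
  }
}
```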
Resource tracking table: if the report needs a resource tracking table, build it from the lint JSON's `resource-tracking/*` findings rather than re-running `build-resource-graph.py --format=table`.
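A sketch of building that table from lint output (the JSON shape below is an assumption for illustration, not the actual ego-lint schema):

```javascript
// Assumed shape: each lint finding carries an id, status, and file:line location.
const lintResults = {
  findings: [
    { id: 'resource-tracking/signal-leak', status: 'FAIL', location: 'extension.js:42' },
    { id: 'resource-tracking/timeout-paired', status: 'PASS', location: 'extension.js:57' },
    { id: 'quality/naming', status: 'WARN', location: 'lib/util.js:3' },
  ],
};

// Keep only resource-tracking/* findings and emit a markdown table.
function resourceTable(results) {
  const rows = results.findings
    .filter((f) => f.id.startsWith('resource-tracking/'))
    .map((f) => `| ${f.id} | ${f.status} | ${f.location} |`);
  return ['| Finding | Status | Location |', '|---|---|---|', ...rows].join('\n');
}
```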
If the graph reports 0 orphans and complete ownership chains: abbreviate this phase — focus on async guards and cleanup ordering below
- Async guard verification: for every `await` in enable-path code, verify a `_destroyed` check follows the resume point
- Verify cleanup ordering (reverse order of creation)
- Check for the `_destroyed` flag pattern in async operations
- Verify session mode handling if applicable
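The guard pattern under review, sketched in plain JS (the widget and the async call are hypothetical stand-ins for a St.Label and a Gio D-Bus call):

```javascript
// Hypothetical async source standing in for a D-Bus or file read.
function fetchStatus() {
  return Promise.resolve('ok');
}

class PollingWidget {
  enable() {
    this._destroyed = false;
    this._label = { text: '' };       // stands in for a St.Label
  }

  async _refresh() {
    const data = await fetchStatus(); // suspends; disable() may run meanwhile
    if (this._destroyed)
      return;                         // guard the resume point
    this._label.text = data;
  }

  disable() {
    this._destroyed = true;           // flip the flag before tearing down
    this._label = null;
  }
}
```

Without the guard, a `_refresh()` in flight during `disable()` would resume and dereference the nulled-out label.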
If lint reports no `resource-tracking/*` or `lifecycle/*` FAILs/WARNs, do a single spot-check: pick 1 resource entry from the graph and verify by reading the cited file:line that create/destroy are correctly paired. Watch for custom cleanup helpers (`_cleanup()`, `_teardown()`, `_clear()`) not recognized by the resource graph. GSettings signal leaks and D-Bus connectSignal leaks are now automated — do not re-check manually.

Using security-checklist.md:
Reviewer perspective notes:
When reviewers see `pkexec`, they immediately check the helper script for input validation.

Apply accessibility-checklist.md:
Using code-quality-checklist.md:
- Verify every API used exists in the declared shell-version range. Common hallucinations: `Meta.Screen`, `St.Button.set_label()`, `GLib.source_remove()`, `Clutter.Actor.show_all()`
- Check that `registerClass` calls have `GTypeName`, that `destroy()` chains to `super.destroy()`, and that GObject properties emit `notify`
- Check prefs: `fillPreferencesWindow()` exists, GTK4/Adwaita patterns used correctly, no deprecated GTK3 patterns, no Shell imports
- Flag `var` declarations (should use `const`/`let`)
- Flag `console.log()` (banned — only `debug`/`warn`/`error` allowed)

Reviewer perspective notes:
- When they see `console.log()`, they think "developer forgot to clean up debug logging"
- When they see `let`, they think "will this persist across enable/disable cycles?"
- When they see `try { super.destroy() } catch`, they think "AI-generated code"

Using ai-slop-checklist.md (46-item checklist):
Check the quality/code-provenance score — if provenance-score >= 3, apply +2 credit to the BLOCKING threshold.

For each triggered AI pattern, include a Defense column in the analysis:
| # | Pattern | Triggered? | File:Line | Defense |
|---|---|---|---|---|
| 1 | Excessive try-catch | Yes | ext.js:45 | All try-catch wraps D-Bus calls (justified) |
| 8 | TypeScript JSDoc | Yes | lib/api.js:12 | Only on 2 exported functions (below threshold) |
Defense indicators to check for each triggered item:
If more triggered items have valid defenses than not, downgrade the verdict by one tier (BLOCKING → ADVISORY, ADVISORY → note only).
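The downgrade rule above can be sketched as a function (tier names from the text; the item shape is illustrative):

```javascript
// Verdict tiers in ascending severity.
const TIERS = ['note only', 'ADVISORY', 'BLOCKING'];

// items: one entry per triggered AI-slop checklist item,
// with `defended` marking whether it has a valid defense.
function applyDefenseDowngrade(verdictTier, items) {
  const defended = items.filter((i) => i.defended).length;
  const undefended = items.length - defended;
  if (defended > undefended) {
    // Downgrade one tier: BLOCKING -> ADVISORY -> note only.
    const idx = TIERS.indexOf(verdictTier);
    return TIERS[Math.max(0, idx - 1)];
  }
  return verdictTier;
}
```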
## EGO Review Report — [Extension Name] v[version]
### Verdict: [LIKELY APPROVED | NEEDS REVISION | LIKELY REJECTED]
**Rejection Risk**: [LOW | MEDIUM | HIGH | VERY HIGH]
---
### Section 1: Blocking Issues (Must Fix)
#### [B1] Issue title (category)
**File**: path/to/file.js:line
**What**: Description of the issue
**Why reviewers reject this**: Explanation with reviewer perspective
**Fix**:
```js
// BEFORE
old code
// AFTER
fixed code
```
### Section 2: Advisory Items
Items that are acceptable IF properly documented:
**File**: path/to/file.js:line
**Status**: Requires reviewer justification
**Template**: [Include pkexec justification template from security checklist]
### Section 3: Suggestions
**File**: path/to/file.js:line
**What**: Description
**Reviewer perspective**: What the reviewer thinks when they see this
**Suggestion**: How to fix
| Category | Pass | Fail | Warn |
|---|---|---|---|
| Metadata | N | N | N |
| Security | N | N | N |
| Lifecycle | N | N | N |
| Quality | N | N | N |
**Score**: N/46 triggered — [ADVISORY | BLOCKING]
**Triggered items**: list with file:line
**Assessment**: interpretation
Ready to submit? [YES | NO] — N blocking issues remain
Action items (priority order):
## Rejection-Risk Scoring Model
Calculate based on findings:
| Finding | Risk Points |
|---------|-------------|
| Each BLOCKING lifecycle issue | +3 |
| Each BLOCKING security issue | +4 |
| Each BLOCKING API hallucination | +5 (indicates AI) |
| AI slop score >= 3 | +5 |
| AI slop score >= 6 | +10 |
| Each ADVISORY issue | +1 |
| Justified advisory (with docs) | +0 |
**Verdict thresholds:**
- 0-2 points: **LIKELY APPROVED** — minor or no issues
- 3-6 points: **NEEDS REVISION** — fixable, resubmit after changes
- 7-12 points: **LIKELY REJECTED** — significant issues
- 13+ points: **LIKELY REJECTED** — fundamental problems or AI-generated
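The model above, sketched as a function (point values and thresholds taken from the tables; the finding shape is illustrative, and the two LIKELY REJECTED bands are collapsed into one verdict string):

```javascript
// Risk points per finding category, from the scoring table.
const POINTS = {
  'blocking-lifecycle': 3,
  'blocking-security': 4,
  'blocking-hallucination': 5,
  'advisory': 1,
  'advisory-justified': 0,
};

// findings: [{ kind }] ; slopScore: AI-slop checklist count (0-46)
function rejectionRisk(findings, slopScore) {
  let points = findings.reduce((sum, f) => sum + (POINTS[f.kind] ?? 0), 0);
  if (slopScore >= 6) points += 10;
  else if (slopScore >= 3) points += 5;

  if (points <= 2) return { points, verdict: 'LIKELY APPROVED' };
  if (points <= 6) return { points, verdict: 'NEEDS REVISION' };
  return { points, verdict: 'LIKELY REJECTED' };
}
```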
**Rough correspondence with ego-simulate scores:**
| ego-review risk | ego-simulate score | Interpretation |
|-----------------|-------------------|----------------|
| 0-2 | 0-4 | Likely to pass |
| 3-6 | 5-9 | May need revision |
| 7+ | 10+ | Likely rejected |
The scales use different inputs (ego-review: finding category points;
ego-simulate: taxonomy weights), so this mapping is approximate. When both
tools are run, prefer ego-review's assessment as the authoritative verdict.
## When to Use
- Before submitting to extensions.gnome.org
- After making significant changes to an extension
- When reviewing someone else's GNOME extension code
- As a learning tool for new extension developers