From git-repo-prep
Audits repositories for public-release readiness: scans for secrets, personal info, license issues, documentation gaps, gitignore coverage, CI/CD pipelines, and missing metadata.
`npx claudepluginhub 2389-research/claude-plugins --plugin git-repo-prep`

This skill uses the workspace's default tool permissions.
Announce: "I'm using git-repo-prep:review to audit this repo for openness."
Create a TodoWrite checklist with one task per audit category (Secrets, Personal info, License, Documentation, Gitignore, CI/CD, Metadata).
Mark each task in_progress before starting, completed immediately after.
This is an openness audit, not a code quality review. Stay focused on what matters for making the repo public. Do NOT comment on error handling, architecture, performance, code style, test coverage quality, or design patterns.
Detect the ecosystem to tailor checks:
| File | Ecosystem |
|---|---|
| package.json | Node.js |
| pyproject.toml / setup.py / setup.cfg | Python |
| Cargo.toml | Rust |
| go.mod | Go |
| *.csproj / *.sln | .NET |
| Gemfile | Ruby |
| pom.xml / build.gradle | Java/Kotlin |
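The marker-file lookup in the table above can be sketched in Python. The filenames and ecosystem labels come straight from the table; the function name `detect_ecosystems` and the glob handling for .NET are illustrative choices, not part of the skill's specification:

```python
from pathlib import Path

# Marker files -> ecosystem, taken from the detection table.
ECOSYSTEM_MARKERS = {
    "package.json": "Node.js",
    "pyproject.toml": "Python",
    "setup.py": "Python",
    "setup.cfg": "Python",
    "Cargo.toml": "Rust",
    "go.mod": "Go",
    "Gemfile": "Ruby",
    "pom.xml": "Java/Kotlin",
    "build.gradle": "Java/Kotlin",
}

def detect_ecosystems(repo_root: str) -> set[str]:
    """Return every ecosystem whose marker file exists at the repo root."""
    root = Path(repo_root)
    found = {eco for name, eco in ECOSYSTEM_MARKERS.items() if (root / name).exists()}
    # .NET projects/solutions use wildcard markers, so glob for them.
    if any(root.glob("*.csproj")) or any(root.glob("*.sln")):
        found.add(".NET")
    return found
```

A repo can match more than one row (e.g. a Rust project with a package.json for tooling), so returning a set keeps every applicable checklist active.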
Scan every category below. Classify each finding as Critical, Recommended, or Nice-to-have using the severity definitions in this table exactly. Do not invent your own categories, and do not upgrade or downgrade severity — use the column where the issue appears.
| Category | Critical | Recommended | Nice-to-have |
|---|---|---|---|
| Secrets | API keys, passwords, tokens, .env committed | .env in gitignore | git history scan |
| Personal info | Real names/emails in source code | Author field review | Username cleanup |
| License | No LICENSE file | Mismatch between LICENSE file and package metadata; no license section in README | License headers in source files |
| Documentation | No README or empty README | Missing install/usage sections | CONTRIBUTING.md, SECURITY.md, CLAUDE.md |
| Gitignore | Sensitive files tracked/committed | Missing common patterns for ecosystem | IDE/OS files |
| CI/CD | — | No CI pipeline; no tests at all (CI cannot function) | No dependabot, no hooks, no coverage |
| Metadata | — | No repo URL in package metadata | Missing engine/version, keywords, description |
Work through each category in order.
**Secrets**

Grep for secret patterns: `sk_live`, `sk_test`, `ghp_`, `AKIA`, `password\s*=`, `secret\s*=`, `token\s*=`, `api_key`, `-----BEGIN.*PRIVATE KEY`, and connection strings with embedded credentials. Also check for:
- `.env` files in the repo (any level).
- `.gitignore` entries for `.env` and `.env.*` exclusion patterns.
- `.pem`, `.key`, or credential files.

**Personal info**

- Author fields (`package.json` author, `pyproject.toml` authors, `Cargo.toml` authors).
- Email addresses (regex `\b\S+@\S+\.\S+\b`) in source files (not node_modules, not vendor, not lock files).
- Hardcoded identifiers (`DEFAULT_USER`, `AUTHOR`, usernames in URLs or paths).

**License**

This is the most commonly missed area. Check all three locations and compare:
1. Does a LICENSE file exist?
2. Does the README have a license section?
3. What does the `license` field say in `package.json` / `pyproject.toml` / `Cargo.toml`?

Flag any mismatch between these three. Flag if any location is missing.
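A minimal sketch of the secret-pattern scan, assuming line-by-line matching over a file's text. The regexes come from the pattern list above; the function name `scan_text` and the case-insensitive flag are illustrative assumptions:

```python
import re

# Secret indicators from the pattern list above.
SECRET_PATTERNS = [
    r"sk_live", r"sk_test", r"ghp_", r"AKIA",
    r"password\s*=", r"secret\s*=", r"token\s*=", r"api_key",
    r"-----BEGIN.*PRIVATE KEY",
]
SECRET_RE = re.compile("|".join(f"(?:{p})" for p in SECRET_PATTERNS), re.IGNORECASE)

def scan_text(text: str) -> list[tuple[int, str]]:
    """Return (line_number, line) pairs that match any secret pattern."""
    return [(n, line) for n, line in enumerate(text.splitlines(), 1)
            if SECRET_RE.search(line)]
```

Matching case-insensitively catches variants like `API_KEY=` at the cost of some false positives; every hit still needs a human look before it is classified Critical.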
**Gitignore**

- Does `.gitignore` exist?
- Ecosystem patterns present:
  - Node.js: `node_modules/`, `.env`, `dist/`, `coverage/`
  - Python: `__pycache__/`, `*.pyc`, `.env`, `*.egg-info/`, `dist/`, `.venv/`
  - Rust: `target/`, `.env`
  - Go: `vendor/` (if not vendoring), `.env`
  - General: `.env`, `.env.*`, `*.log`, `.DS_Store`, `*.pem`, `*.key`
- Sensitive files already tracked (`git ls-files` for `.env`, credentials, keys).

**CI/CD**

- Look for CI config: `.github/workflows/`, `.gitlab-ci.yml`, `Jenkinsfile`, `.circleci/`, etc.
- Check for Dependabot (`.github/dependabot.yml`) or Renovate (`renovate.json`).

**Metadata**

Check ecosystem-specific metadata completeness:
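The tracked-sensitive-files check can be sketched with `git ls-files`. The patterns mirror the list above; the helper names and the choice to match on basenames are assumptions of this sketch:

```python
import fnmatch
import subprocess

# Patterns that should never be tracked, per the checks above.
SENSITIVE_PATTERNS = [".env", ".env.*", "*.pem", "*.key"]

def matches_sensitive(path: str) -> bool:
    """True if the file's basename matches a sensitive pattern."""
    name = path.rsplit("/", 1)[-1]
    return any(fnmatch.fnmatch(name, pat) for pat in SENSITIVE_PATTERNS)

def tracked_sensitive_files(repo_root: str) -> list[str]:
    """List git-tracked files that match a sensitive pattern (Critical)."""
    out = subprocess.run(
        ["git", "ls-files"], cwd=repo_root,
        capture_output=True, text=True, check=True,
    ).stdout
    return [p for p in out.splitlines() if matches_sensitive(p)]
```

Checking `git ls-files` rather than the working tree matters: a `.gitignore` entry only prevents new additions, so an already-committed `.env` still ships with the repo.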
- Node.js: `repository`, `engines`, `keywords`, `description` in `package.json`
- Python: `[project.urls]`, `classifiers`, `requires-python`, `keywords`, `description` in `pyproject.toml`
- Rust: `repository`, `keywords`, `categories`, `description` in `Cargo.toml`

Present findings conversationally — do NOT write a report file.
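For the Node.js case, the metadata gap check might look like the following sketch. The field names come from the list above; `missing_package_metadata` and the treat-empty-as-missing rule are illustrative assumptions:

```python
import json

# Fields the Metadata category expects in package.json.
EXPECTED_FIELDS = ["repository", "engines", "keywords", "description"]

def missing_package_metadata(package_json_text: str) -> list[str]:
    """Return expected package.json fields that are absent or empty."""
    data = json.loads(package_json_text)
    return [f for f in EXPECTED_FIELDS if not data.get(f)]
```

Each returned field becomes a Recommended finding (`repository`) or a Nice-to-have (`engines`, `keywords`, `description`), per the severity table.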
Group by severity, starting with the most urgent:
For each finding:
End with a tally: "X critical, Y recommended, Z nice-to-have findings."
If critical findings exist, recommend addressing them before release. If none, say the repo looks ready and suggest tackling recommended items for polish.