library-analyzer

Analyzes open-source GitHub libraries for contribution readiness. Produces structured Markdown reports on codebase structure, project lifecycle, and contribution paths.

```bash
npx claudepluginhub hamsurang/kit --plugin library-analyzer
```

This skill uses the workspace's default tool permissions.
Do NOT activate when:
Parse the target from `$ARGUMENTS` or ask the user with AskUserQuestion.
| Input Format | Action |
|---|---|
| `https://github.com/owner/repo` | URL mode |
| `owner/repo` | URL mode (shorthand, treat as GitHub) |
| `/path/to/dir` or `./path` | Local mode |
| `react` (bare name) | Reject: "Please use owner/repo format (e.g., facebook/react)" |
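The routing rules above can be sketched as a small shell helper. This is only an illustration of the table's logic — the skill performs this classification inline, and `classify_target` is a hypothetical name, not part of the skill:

```bash
# Sketch: classify the target argument per the routing table.
# classify_target is a hypothetical helper name.
classify_target() {
  case "$1" in
    https://github.com/*/*) echo "url" ;;      # full GitHub URL
    /*|./*)                 echo "local" ;;    # absolute or relative path
    */*)                    echo "url" ;;      # owner/repo shorthand
    *)                      echo "reject" ;;   # bare name: ask for owner/repo
  esac
}
```

Note the order matters: the full-URL pattern must come before the generic `*/*` shorthand, since a URL also contains slashes.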
Validate the input:
For URL mode, check that `deepwiki-cli` is installed: run `which deepwiki-cli` via Bash. If it is missing, tell the user: "Install it with `cargo install deepwiki-cli`, or clone the repo locally and provide the local path instead."

Extract `owner/repo` for issue collection. In local mode, run `git -C <path> remote get-url origin` and match `github.com[:/]owner/repo`.
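The remote-URL match can be sketched in bash (this assumes bash's `[[ =~ ]]` regex operator; `extract_owner_repo` is a hypothetical helper name — the skill performs the match inline):

```bash
# Sketch: pull owner/repo out of a git remote URL (SSH or HTTPS form).
extract_owner_repo() {
  local url="$1"
  if [[ "$url" =~ github\.com[:/]([^/]+)/([^/]+) ]]; then
    local owner="${BASH_REMATCH[1]}"
    local repo="${BASH_REMATCH[2]%.git}"   # drop a trailing .git if present
    echo "$owner/$repo"
  else
    return 1   # not a GitHub remote
  fi
}
```

For example, `extract_owner_repo "git@github.com:facebook/react.git"` prints `facebook/react`.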
Announce what you will analyze:
"Analyzing {owner/repo} for contribution readiness ({url|local} mode)..."
Collect data into a context bundle before launching any agents.
Read references/agent-prompts.md at this point for the agent prompt templates.
Context size limits — cap each field to keep total context under 15,000 characters:
- `readme`: first 500 lines
- `file_tree`: max 300 entries (top 3 directory levels)
- `wiki_content` (URL mode): first 800 lines

When a field is truncated, append: `(truncated at N lines — full content available via direct access)`

For URL mode, gather wiki data with deepwiki-cli:

```bash
# Get repository structure overview
deepwiki-cli structure <owner/repo>

# Get full wiki content
deepwiki-cli read <owner/repo>
```
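The truncation rule above can be sketched as a tiny helper, assuming each field is staged in a plain file (`cap_field` is a hypothetical name; the skill applies the rule inline):

```bash
# Sketch: cap a context-bundle field at N lines, appending the truncation notice.
cap_field() {
  local file="$1" limit="$2"
  head -n "$limit" "$file"
  if [ "$(wc -l < "$file")" -gt "$limit" ]; then
    printf '(truncated at %s lines — full content available via direct access)\n' "$limit"
  fi
}
```

Usage would look like `cap_field README.md 500 > readme.txt`.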
Map deepwiki-cli output to context bundle fields:
- `deepwiki-cli structure` output → `file_tree`
- `deepwiki-cli read` output → split into `readme` (first section) + `wiki_content` (remainder)
- `deepwiki-cli ask` is used for specific questions → append answers to the relevant fields

If deepwiki-cli or gh commands fail (timeout, network error, rate limit, permission denied), record the failure in the affected field (e.g., `issues: "gh rate limited"`) and fall back to `raw.githubusercontent.com` and the GitHub web UI.

In local mode, collect directly with the built-in tools:

1. `Glob("**/*", path)` — get file tree (cap at 500 entries, top 3 levels)
2. Read README.md; search for CONTRIBUTING.md in: target directory, repo root, docs/, .github/
3. Read package manifest (package.json, Cargo.toml, pyproject.toml, go.mod, etc.)
4. Read CI config (.github/workflows/*.yml) — first file only, summarize
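The CONTRIBUTING.md lookup in step 2 can be sketched as follows (`find_contributing` is a hypothetical helper name; candidate directories are passed in priority order):

```bash
# Sketch: return the first CONTRIBUTING.md found among candidate directories.
find_contributing() {
  local dir
  for dir in "$@"; do
    if [ -f "$dir/CONTRIBUTING.md" ]; then
      printf '%s\n' "$dir/CONTRIBUTING.md"
      return 0
    fi
  done
  return 1   # not found: record contributing as null
}
```

Usage would look like `find_contributing "$target_dir" "$repo_root" "$repo_root/docs" "$repo_root/.github"`.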
Run via Bash. If gh is not authenticated or not installed, skip and note it.
```bash
# Good first issues (up to 20)
gh issue list --repo <owner/repo> --label "good first issue" --state open \
  --json number,title,createdAt,updatedAt,comments --limit 20

# Help wanted (up to 15)
gh issue list --repo <owner/repo> --label "help wanted" --state open \
  --json number,title,createdAt,updatedAt,comments --limit 15

# Recently active (up to 30); gh issue list has no --sort flag,
# so use a search qualifier instead
gh issue list --repo <owner/repo> --state open --search "sort:updated-desc" \
  --json number,title,labels,updatedAt,comments --limit 30
```
Deduplicate by issue number. Cap at 50 total.
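Assuming the three query results were saved as JSON files, the dedupe-and-cap step could be done with jq (the file names are placeholders, not part of the skill):

```bash
# Sketch: merge issue lists, dedupe by issue number, cap at 50.
dedupe_issues() {
  jq -s 'add | unique_by(.number) | .[:50]' "$@"
}
```

Usage would look like `dedupe_issues good_first.json help_wanted.json recent.json > issues.json`. Note that jq's `unique_by` also sorts by the key as a side effect, which is harmless here.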
Enrichment queries (run if time permits, enhances analysis quality):
```bash
# Repository metadata
gh api "repos/<owner/repo>" \
  --jq '{stars: .stargazers_count, forks: .forks_count, open_issues: .open_issues_count}'

# Top contributors (top 10)
gh api "repos/<owner/repo>/contributors?per_page=10"

# Recent releases (last 5)
gh api "repos/<owner/repo>/releases?per_page=5"

# Community health
gh api "repos/<owner/repo>/community/profile"
```
These are optional but significantly improve the contribution-agent's analysis.
After collection, you should have:
| Field | Content |
|---|---|
| `owner_repo` | e.g., `facebook/react` |
| `source_mode` | `url` or `local` |
| `library_name` | repo name (e.g., `react`) |
| `readme` | README content (first 500 lines if large) |
| `contributing` | CONTRIBUTING.md content or null |
| `file_tree` | Directory structure (top 3 levels, max 500 entries) |
| `package_manifest` | package.json / Cargo.toml / etc. content |
| `ci_config` | CI workflow summary or null |
| `issues` | Deduplicated issue JSON or null |
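The skill does not mandate a storage format for the bundle. As one possible sketch, it could be serialized as JSON with jq (all values below are illustrative):

```bash
# Sketch: one possible JSON shape for the context bundle (illustrative values;
# null stands in for fields whose content has not been collected).
jq -n \
  --arg owner_repo "facebook/react" \
  --arg source_mode "url" \
  --arg library_name "react" \
  '{owner_repo: $owner_repo, source_mode: $source_mode, library_name: $library_name,
    readme: null, contributing: null, file_tree: null,
    package_manifest: null, ci_config: null, issues: null}'
```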
Launch ALL 3 agents in a SINGLE response turn. Do NOT launch them one by one.
Use the Agent tool with subagent_type: "general-purpose" for each.
Do NOT use run_in_background: true — issue all 3 calls at once and collect results together.
For each agent, take the prompt template from references/agent-prompts.md and replace
the {placeholder} tags with the actual context bundle data collected in Step 2.
For example, replace {readme} with the full README content, {file_tree} with the
directory listing, etc. If a field is null, replace the placeholder with:
(Not available — this repository does not have this file)
| Agent | Input | Output sections |
|---|---|---|
| 1 | `file_tree`, `readme`, `package_manifest` | §1 Directory Structure, §2 Module Architecture, §3 Key Concepts |
| 2 | `readme`, `package_manifest`, `file_tree` | §4 Lifecycle, §5 Extension Points |
| 3 (contribution-agent) | `contributing`, `ci_config`, `issues`, `readme` | §6 How to Contribute, §7 Issue Landscape, §8 Recommended First Contributions |
Read references/output-template.md at this point for the report template.
After all 3 agents return:
Check each result. For any agent that failed or returned empty:
For each failed section, insert the placeholder:

> ⚠️ This section could not be analyzed.

Count successes: `sections_completed = N/3`.
Assemble the report using the template from references/output-template.md.
Fill in the YAML frontmatter and each section with agent results.
Save the file:

```bash
mkdir -p docs/library-analysis
```

Write the assembled Markdown to `docs/library-analysis/<library_name>-<YYYY-MM-DD>.md`.
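Composing the dated file name can be sketched as follows (assumes `date +%F`, which prints `YYYY-MM-DD`; the `library_name` value is illustrative):

```bash
# Sketch: compose the report path with today's date (library_name illustrative).
library_name="react"
report_path="docs/library-analysis/${library_name}-$(date +%F).md"
echo "$report_path"
```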
Report completion to the user:
"Analysis complete: docs/library-analysis/<name>-<date>.md
Sections completed: N/3 agents succeeded."
Do NOT use `run_in_background`. If the Agent tool is not available (e.g., in subagent context), perform all 3 analyses sequentially in the same response — output quality is the same, only speed differs.

Read `references/agent-prompts.md` before launching agents — the prompt templates define the exact output structure each agent must produce.