Evaluates whether a programming language dependency should be used by analyzing maintenance activity, security posture, community health, documentation quality, dependency footprint, production adoption, license compatibility, API stability, and funding sustainability. Use when users are considering adding a new dependency, evaluating an existing dependency, or asking about package/library recommendations.
Install via the Claude plugin marketplace:
- `/plugin marketplace add princespaghetti/claude-marketplace`
- `/plugin install learnfrompast@princespaghetti-marketplace`

Reference files bundled with this skill: COMMANDS.md, ECOSYSTEM_GUIDES.md, ERROR_HANDLING.md, EXAMPLES.md, SCRIPT_USAGE.md, SIGNAL_DETAILS.md, WORKFLOW.md, and scripts/dependency_evaluator.py.

This skill helps evaluate whether a programming language dependency should be added to a project by analyzing multiple quality signals and risk factors.
Making informed decisions about dependencies is critical for project health. A poorly chosen dependency can introduce security vulnerabilities, maintenance burden, and technical debt. This skill provides a systematic framework for evaluating dependencies before adoption.
Activate this skill when users:
- Are considering adding a new dependency to a project
- Want to evaluate an existing dependency
- Ask about package or library recommendations
This skill uses progressive disclosure - core framework below, detailed guidance in reference files:
| File | When to Consult |
|---|---|
| WORKFLOW.md | Detailed step-by-step evaluation process, performance tips, pitfalls |
| SCRIPT_USAGE.md | Automated data gathering script (optional efficiency tool) |
| COMMANDS.md | Ecosystem-specific commands (npm, PyPI, Cargo, Go, etc.) |
| SIGNAL_DETAILS.md | Deep guidance for scoring each of the 10 signals |
| ECOSYSTEM_GUIDES.md | Ecosystem-specific norms and considerations |
| EXAMPLES.md | Worked evaluation examples (ADOPT, AVOID, EVALUATE FURTHER) |
| ERROR_HANDLING.md | Fallback strategies when data unavailable or commands fail |
Quick navigation by ecosystem: COMMANDS.md lists commands for npm, PyPI, Cargo, Go, and others; ECOSYSTEM_GUIDES.md covers the corresponding ecosystem norms.
Evaluate dependencies using these ten key signals:
1. Maintenance activity
2. Security posture
3. Community health
4. Documentation quality
5. Dependency footprint
6. Production adoption
7. License compatibility
8. API stability
9. Funding and sustainability
10. Ecosystem momentum
For detailed investigation guidance, see SIGNAL_DETAILS.md. For ecosystem-specific commands, see COMMANDS.md. For ecosystem considerations, see ECOSYSTEM_GUIDES.md.
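As an illustration of how a single signal can be scored, the sketch below maps release recency to a 0-5 maintenance score. The thresholds are hypothetical assumptions for this example, not values taken from SIGNAL_DETAILS.md:

```python
from datetime import date

def score_maintenance(last_release: date, today: date) -> int:
    """Map days since the last release to a 0-5 maintenance score.

    Thresholds are illustrative; consult SIGNAL_DETAILS.md for the
    skill's actual scoring guidance.
    """
    days = (today - last_release).days
    if days <= 30:
        return 5   # very active: released within the last month
    if days <= 90:
        return 4
    if days <= 180:
        return 3
    if days <= 365:
        return 2
    if days <= 730:
        return 1
    return 0       # no release in over two years

print(score_maintenance(date(2024, 11, 1), date(2025, 1, 1)))  # 61 days -> 4
```

The same shape works for any recency-based signal; only the thresholds and the data source change.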
Goal: Provide evidence-based recommendations (ADOPT / EVALUATE FURTHER / AVOID) by systematically assessing 10 quality signals.
Process: Quick assessment → Data gathering → Scoring → Report generation
See WORKFLOW.md for detailed step-by-step guidance, performance tips, and workflow variants.
A Python script (scripts/dependency_evaluator.py) automates initial data gathering for supported ecosystems (npm, pypi, cargo, go).
Default approach: Try the script first - it provides more complete and consistent data gathering. Only fall back to manual workflow if the script is unavailable or fails.
Use the script when: evaluating npm, PyPI, Cargo, or Go packages (the most common ecosystems).
Use the manual workflow when: the ecosystem is unsupported, Python is unavailable, or the script errors out.
See SCRIPT_USAGE.md for complete documentation. The skill works fine without the script by following the manual workflow.
Write it yourself if: Functionality is <50 lines of straightforward code, or you only need a tiny subset of features.
Use a dependency if: Problem is complex (crypto, dates, parsing), correctness is critical, or ongoing maintenance would be significant.
See WORKFLOW.md § Pre-Evaluation for detailed decision framework.
Structure your evaluation report as:
## Dependency Evaluation: <package-name>
### Summary
[2-3 sentence overall assessment with recommendation]
**Recommendation**: [ADOPT / EVALUATE FURTHER / AVOID]
**Risk Level**: [Low / Medium / High]
**Blockers Found**: [Yes/No]
### Blockers (if any)
[List any dealbreaker issues - these override all scores]
- ⛔ [Blocker description with specific evidence]
### Evaluation Scores
| Signal | Score | Weight | Notes |
|--------|-------|--------|-------|
| Maintenance | X/5 | [H/M/L] | [specific evidence with dates/versions] |
| Security | X/5 | [H/M/L] | [specific evidence] |
| Community | X/5 | [H/M/L] | [specific evidence] |
| Documentation | X/5 | [H/M/L] | [specific evidence] |
| Dependency Footprint | X/5 | [H/M/L] | [specific evidence] |
| Production Adoption | X/5 | [H/M/L] | [specific evidence] |
| License | X/5 | [H/M/L] | [specific evidence] |
| API Stability | X/5 | [H/M/L] | [specific evidence] |
| Funding/Sustainability | X/5 | [H/M/L] | [specific evidence] |
| Ecosystem Momentum | X/5 | [H/M/L] | [specific evidence] |
**Weighted Score**: X/50 (adjusted for dependency criticality)
### Key Findings
#### Strengths
- [Specific strength with evidence]
- [Specific strength with evidence]
#### Concerns
- [Specific concern with evidence]
- [Specific concern with evidence]
### Alternatives Considered
[If applicable, mention alternatives worth evaluating]
### Recommendation Details
[Detailed reasoning for the recommendation with specific evidence]
### If You Proceed (for ADOPT recommendations)
[Specific advice tailored to risks found]
- Version pinning strategy
- Monitoring recommendations
- Specific precautions based on identified concerns
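The report template above lends itself to programmatic generation. The helper below is a minimal sketch of rendering the summary header and scores table; the function name and field layout are illustrative, not part of the skill:

```python
def render_report(package: str, recommendation: str, risk: str,
                  rows: list[tuple[str, int, str, str]]) -> str:
    """Render the evaluation report skeleton in Markdown.

    `rows` holds (signal, score, weight, notes) tuples matching the
    template's Evaluation Scores table.
    """
    lines = [
        f"## Dependency Evaluation: {package}",
        f"**Recommendation**: {recommendation}",
        f"**Risk Level**: {risk}",
        "",
        "| Signal | Score | Weight | Notes |",
        "|--------|-------|--------|-------|",
    ]
    for signal, score, weight, notes in rows:
        lines.append(f"| {signal} | {score}/5 | {weight} | {notes} |")
    return "\n".join(lines)

print(render_report("left-pad", "EVALUATE FURTHER", "Medium",
                    [("Maintenance", 2, "M", "last release 2018")]))
```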
Adjust signal weights based on dependency type:
| Signal | Critical Dep | Standard Dep | Dev Dep |
|---|---|---|---|
| Security | High | Medium | Low |
| Maintenance | High | Medium | Medium |
| Funding | High | Low | Low |
| License | High | High | Medium |
| API Stability | Medium | Medium | High |
| Documentation | Medium | Medium | Medium |
| Community | Medium | Medium | Low |
| Dependency Footprint | Medium | Low | Low |
| Production Adoption | Medium | Medium | Low |
| Ecosystem Momentum | Low | Medium | Low |
Critical Dependencies: auth, security, data handling - require a higher bar across all signals
Standard Dependencies: utilities, formatting - balance all signals
Development Dependencies: testing, linting - lower security concerns; focus on maintainability
Blocker Override: Any blocker issue → AVOID recommendation regardless of scores
Critical Thresholds:
Weighting Priority: Security and Maintenance typically matter more than Documentation or Ecosystem Momentum. A well-documented but unmaintained package is riskier than a poorly documented but actively maintained one.
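The weighting and blocker-override rules above can be sketched as follows. The H/M/L multipliers and the score cutoffs are illustrative assumptions for this example only:

```python
# Illustrative weight multipliers for H/M/L signal priorities.
WEIGHT = {"H": 1.5, "M": 1.0, "L": 0.5}

def recommend(scores: dict[str, int], weights: dict[str, str],
              blockers: list[str]) -> str:
    """Combine 0-5 signal scores into a recommendation.

    Any blocker overrides the weighted score entirely, per the
    Blocker Override rule.
    """
    if blockers:
        return "AVOID"
    total = sum(s * WEIGHT[weights[name]] for name, s in scores.items())
    maximum = sum(5 * WEIGHT[w] for w in weights.values())
    ratio = total / maximum
    # Hypothetical cutoffs, for illustration only.
    if ratio >= 0.7:
        return "ADOPT"
    if ratio >= 0.4:
        return "EVALUATE FURTHER"
    return "AVOID"

# A critical dependency scoring well everywhere except funding:
scores  = {"Security": 5, "Maintenance": 5, "Funding": 2, "License": 5}
weights = {"Security": "H", "Maintenance": "H", "Funding": "H", "License": "H"}
print(recommend(scores, weights, blockers=[]))  # ADOPT (25.5/30 = 0.85)
```

Note how a single blocker short-circuits the arithmetic, which is exactly why blockers must be checked before any scoring.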
These issues trigger automatic AVOID recommendation:
Before presenting your report, verify:
Be Evidence-Based: Cite specific versions, dates, and metrics. Run commands to gather data, never assume.
Be Balanced: Acknowledge strengths AND weaknesses. Single issues rarely disqualify (unless blocker).
Be Actionable: Provide clear ADOPT/EVALUATE FURTHER/AVOID with alternatives and risk mitigation.
Be Context-Aware: Auth libraries need stricter scrutiny than dev tools. Adjust for ecosystem norms (see ECOSYSTEM_GUIDES.md).
See WORKFLOW.md § Common Pitfalls and § Guidelines for detailed best practices.