From ds-doc-analyzer
This skill should be used when the user asks to "analyze documentation from data science perspective", "identify data science scenarios", "find ML problems in docs", "evaluate AI system capabilities", "what DS use cases does this support", "map documentation to ML problems", "data science evaluation", "extract ML scenarios from docs", or needs guidance on systematically analyzing LLM/AI/ML system documentation to identify data science scenarios, machine learning problem types, and practical use cases.
`npx claudepluginhub shinhf/skills-ide-resources --plugin ds-doc-analyzer`

This skill uses the workspace's default tool permissions.
This skill provides a structured methodology for analyzing documentation of LLM/AI/ML systems through the lens of Data Science and Machine Learning. Apply this framework to any AI/ML system documentation to extract actionable data science scenarios, identify solvable ML problems, and evaluate practical utility for DS/ML practitioners.
Scan the documentation to build an inventory of system capabilities:
For each documented capability, extract:
Map extracted capabilities to data science scenarios using the taxonomy in references/scenario-taxonomy.md. For each scenario:
Using the patterns in references/problem-patterns.md, identify concrete ML/DS problems for each mapped scenario:
Identify what the documentation does NOT cover:
| Category | Subcategories | Key Signals in Docs |
|---|---|---|
| NLP/Text | Classification, Generation, Summarization, QA, Translation, NER, Sentiment | "chat", "completion", "prompt", "text", "token" |
| RAG/Retrieval | Knowledge retrieval, Semantic search, Document QA, Hybrid search | "embedding", "vector", "search", "retrieval", "index" |
| Agents/Reasoning | Tool use, Planning, Multi-step reasoning, Code generation | "agent", "tool", "function call", "reasoning", "plan" |
| Multi-Agent | Orchestration, Collaboration, Delegation, Debate | "workflow", "orchestration", "handoff", "group", "swarm" |
| Multimodal | Vision-language, Audio processing, Document understanding | "image", "audio", "multimodal", "vision", "document" |
| Evaluation | Model eval, Prompt testing, Benchmark suites, Quality metrics | "eval", "metric", "benchmark", "score", "quality" |
| Data Engineering | ETL for ML, Feature engineering, Data pipelines | "pipeline", "data", "transform", "feature", "preprocess" |
| MLOps | Deployment, Monitoring, Versioning, A/B testing | "deploy", "monitor", "version", "serve", "endpoint" |
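The signal keywords in the table can drive a rough first-pass scan of a documentation corpus. A minimal sketch, with the keyword lists copied from the table above (a real analysis should load them from references/scenario-taxonomy.md and treat hits as leads, not conclusions):

```python
# First-pass capability scan: count taxonomy signal keywords in doc text.
from collections import Counter
import re

SIGNALS = {
    "NLP/Text": ["chat", "completion", "prompt", "text", "token"],
    "RAG/Retrieval": ["embedding", "vector", "search", "retrieval", "index"],
    "Agents/Reasoning": ["agent", "tool", "function call", "reasoning", "plan"],
    "Multi-Agent": ["workflow", "orchestration", "handoff", "group", "swarm"],
    "Multimodal": ["image", "audio", "multimodal", "vision", "document"],
    "Evaluation": ["eval", "metric", "benchmark", "score", "quality"],
    "Data Engineering": ["pipeline", "data", "transform", "feature", "preprocess"],
    "MLOps": ["deploy", "monitor", "version", "serve", "endpoint"],
}

def scan_capabilities(doc_text: str) -> Counter:
    """Return category -> signal-hit counts for a documentation string."""
    text = doc_text.lower()
    hits = Counter()
    for category, keywords in SIGNALS.items():
        for kw in keywords:
            # Whole-word match (optional plural "s") so "plan" does not
            # fire inside unrelated words like "explanation".
            hits[category] += len(re.findall(rf"\b{re.escape(kw)}s?\b", text))
    return hits

doc = ("The API supports chat completions, streaming tokens, "
       "and vector search over embeddings.")
print(scan_capabilities(doc).most_common(2))
```

Keyword counting is deliberately crude; it only prioritizes which documentation sections deserve a close manual read in the steps above.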
Each identified scenario maps to one or more canonical ML problem types:
| Problem Type | Description | Example in LLM Context |
|---|---|---|
| Classification | Assign labels to inputs | Sentiment analysis, intent detection, content moderation |
| Generation | Produce new content | Text generation, code synthesis, report writing |
| Extraction | Pull structured data from unstructured input | NER, relation extraction, schema mapping |
| Retrieval | Find relevant information | RAG, semantic search, knowledge retrieval |
| Ranking | Order items by relevance | Search result ranking, recommendation |
| Clustering | Group similar items | Topic modeling, document clustering |
| Translation | Convert between representations | Language translation, code translation, format conversion |
| Summarization | Condense information | Document summarization, meeting notes, changelog |
| Reasoning | Multi-step logical inference | Chain-of-thought, planning, mathematical problem-solving |
| Orchestration | Coordinate multiple models/agents | Multi-agent workflows, ensemble methods, pipelines |
Produce analysis results in this structure:
```markdown
## Executive Summary
[2-3 sentence overview of system's DS capabilities]

## Identified Scenarios
### [Category Name]
- **Scenario**: [Name]
  - Problem Type: [Classification/Generation/etc.]
  - Feasibility: [Direct/Composition/Extension]
  - Documentation Evidence: [Specific doc references]
  - Example Use Case: [Concrete example]

## Problem-Scenario Matrix
[Table mapping problems to scenarios]

## Gap Analysis
[What's missing or underdocumented]

## Recommendations
[Prioritized list of DS applications]
```
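The Problem-Scenario Matrix section can be generated mechanically once the mappings exist. A small sketch (the mapping keys and scenario names here are hypothetical examples, not output of the skill):

```python
# Render the Problem-Scenario Matrix as a Markdown table.
def problem_scenario_matrix(mappings: dict[str, list[str]]) -> str:
    """mappings: canonical problem type -> list of identified scenario names."""
    lines = ["| Problem Type | Scenarios |", "|---|---|"]
    for problem, scenarios in mappings.items():
        lines.append(f"| {problem} | {', '.join(scenarios)} |")
    return "\n".join(lines)

print(problem_scenario_matrix({
    "Classification": ["Intent detection", "Content moderation"],
    "Retrieval": ["Document QA"],
}))
```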
Rate each scenario's feasibility using the following scale:

| Rating | Meaning | Criteria |
|---|---|---|
| Direct | System provides explicit support | Documented API, working examples, clear guidance |
| Composition | Achievable by combining documented features | Features exist but require assembly; no explicit guide |
| Extension | Requires custom development beyond documentation | System provides primitives but scenario needs custom code |
| Theoretical | Architecturally possible but undocumented | System design allows it, but no documentation or examples |
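The rating scale above can be applied as a simple decision rule. A minimal sketch, where the evidence fields are illustrative assumptions about what a capability inventory records, not fields the skill prescribes:

```python
# Assign a feasibility rating from documentation evidence, per the scale above.
from dataclasses import dataclass

@dataclass
class Evidence:
    documented_api: bool      # explicit API support for the scenario
    working_example: bool     # runnable example in the docs
    component_features: bool  # building blocks exist but need assembly
    primitives_only: bool     # only low-level primitives are available

def rate_feasibility(e: Evidence) -> str:
    """Checks strongest evidence first, falling through to 'Theoretical'."""
    if e.documented_api and e.working_example:
        return "Direct"
    if e.component_features:
        return "Composition"
    if e.primitives_only:
        return "Extension"
    return "Theoretical"

print(rate_feasibility(Evidence(True, True, False, False)))
```

Ordering matters: the checks run from strongest to weakest evidence, so a scenario with both an API and assembly-ready components still rates as Direct.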
For detailed analysis taxonomies and patterns, consult:
- `references/scenario-taxonomy.md` -- Complete taxonomy of DS/ML scenarios with subcategories, signals, and evaluation criteria for each domain
- `references/problem-patterns.md` -- Detailed problem pattern catalog with implementation patterns, evaluation metrics, and common pitfalls
- `references/analysis-templates.md` -- Report templates, structured output formats, and example analyses