This skill should be used when the user asks to "validate technology viability", "detect vaporware", "verify AI claims", "assess software maturity", or "check if this tech actually works", or mentions technology due diligence, software validation, AI feasibility, vendor evaluation, or tech-stack viability. It performs deep forensic analysis of whether proposed software solutions, AI/ML components, and technology choices are viable substance or speculative smoke. Use this skill whenever technology choices need validation or vendor claims need scrutiny, even when the user doesn't explicitly mention "software-viability". [EXPLICIT]
This skill bundles the following resources:
- agents/guardian.md, agents/lead.md, agents/specialist.md, agents/support.md
- evals/evals.json
- knowledge/body-of-knowledge.md, knowledge/knowledge-graph.md
- prompts/meta.md, prompts/primary.md, prompts/variations/deep.md, prompts/variations/quick.md
- references/domain-knowledge.md
- templates/output.docx.md, templates/output.html

Forensic validation of whether proposed software solutions, technology choices, and AI/ML components are viable, mature, and fit-for-purpose, or speculative, overhyped, and risky. [EXPLICIT] This is NOT the multidimensional feasibility analysis (technical-feasibility covers that). [EXPLICIT] This is a dedicated, deep-cut software validator that operates at the level of code, APIs, vendor maturity, community health, and real-world production evidence. [EXPLICIT]
Everything in software is a promise until it is proven in production. This skill separates verifiable promises from smoke. It uses first-hand evidence: runnable code, documented APIs, reproducible benchmarks, public postmortems, adoption data. It does NOT use: marketing decks, vendor feature-comparison tables, non-reproducible demos.
Verdict scale: 🟢 SUBSTANCE | 🟡 VIABLE PROMISE | 🟠 HIGH RISK | 🔴 NOT VIABLE
Parse $1 as project name, $2 as technology/solution to validate. [EXPLICIT]
Accepts: technology names, vendor products, AI/ML proposals, architectural patterns, library choices. [EXPLICIT]
Parameters:
{MODO}: piloto-auto (default) | desatendido | supervisado | paso-a-paso
{FORMATO}: markdown (default) | html | dual
{VARIANTE}: ejecutiva (~40%: S1 inventory + S6 scorecard only) | técnica (full forensic analysis, default)
For each proposed technology, framework, vendor, or AI/ML component:
| Technology | Claim | Claim Source | Required Evidence |
|---|---|---|---|
| {Vendor X AI Platform} | "Reduce development time 50%" | Vendor deck Phase 3 | Production case studies, benchmark |
| {Framework Y} | "Handles 100K rps" | Architecture decision | Load test results, community benchmarks |
| {LLM Integration} | "Automates 80% of workflows" | Scenario B | Pilot results, accuracy metrics |
For each piece of software under evaluation:
2a. Lifecycle Stage
| Indicator | What to Look For | Where |
|---|---|---|
| Version | >=1.0 = GA; 0.x = pre-production/experimental | GitHub releases, docs |
| Release cadence | Regular = healthy; erratic = risk | Release notes timeline |
| Breaking changes | Frequent = immature API; rare = stable | Changelogs, migration guides |
| Deprecation policy | Exists = mature; absent = risky | Documentation |
| LTS availability | Available = enterprise-ready; absent = risk | Release policy |
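As a sketch, the version row of the table can be turned into an executable check. The function name and cutoff logic below are assumptions that mirror the table, not part of the skill itself:

```python
# Hedged sketch: map a semver-style version string to the lifecycle stages
# used in the table (>=1.0 = GA, 0.x = pre-production/experimental).
def classify_lifecycle(version: str) -> str:
    core = version.lstrip("v")
    if "-" in core:                      # e.g. 1.0.0-beta.2, 2.0.0-rc.1
        return "pre-release"
    major = int(core.split(".")[0])
    return "GA" if major >= 1 else "pre-production/experimental"

print(classify_lifecycle("v2.3.1"))        # GA
print(classify_lifecycle("0.9.4"))         # pre-production/experimental
print(classify_lifecycle("1.0.0-beta.2"))  # pre-release
```

Pre-release tags (beta, rc) are flagged separately because a 1.x pre-release is not yet GA evidence.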
2b. Community Health
| Metric | 🟢 Healthy | 🟠 Warning | 🔴 Risk |
|---|---|---|---|
| GitHub stars | >5K | 1K-5K | <1K |
| Contributors (12mo) | >50 | 10-50 | <10 |
| Share of issues open | <30% open | 30-60% | >60% |
| Last commit | <30 days | 30-90 days | >90 days |
| Bus factor | >5 maintainers | 2-5 | 1 (single point of failure) |
| Corporate backing | Major sponsor | Startup backed | Individual project |
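The thresholds above can be combined into a single triage function. This is an illustrative helper, assuming the worst metric dominates the overall rating; the names and cutoffs come straight from the table:

```python
# Illustrative triage: rate each community-health metric with the table's
# thresholds, then let the worst rating dominate the overall verdict.
def community_health(stars: int, contributors_12mo: int, open_issue_ratio: float,
                     days_since_commit: int, maintainers: int) -> str:
    ratings = [
        "🟢" if stars > 5000 else "🟠" if stars >= 1000 else "🔴",
        "🟢" if contributors_12mo > 50 else "🟠" if contributors_12mo >= 10 else "🔴",
        "🟢" if open_issue_ratio < 0.30 else "🟠" if open_issue_ratio <= 0.60 else "🔴",
        "🟢" if days_since_commit < 30 else "🟠" if days_since_commit <= 90 else "🔴",
        "🟢" if maintainers > 5 else "🟠" if maintainers >= 2 else "🔴",
    ]
    severity = {"🟢": 0, "🟠": 1, "🔴": 2}
    return max(ratings, key=severity.get)  # worst metric wins

print(community_health(12000, 80, 0.2, 5, maintainers=8))  # 🟢
print(community_health(12000, 80, 0.2, 5, maintainers=1))  # 🔴 (bus factor of 1)
```

Note that a bus factor of 1 alone drags an otherwise healthy project to 🔴, matching the single-point-of-failure row.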
2c. Production Evidence
CRITICAL SECTION: AI is the field with the highest smoke-to-substance ratio.
For each proposed AI/ML component:
3a. Claims vs Reality Matrix
| Claim | Cited Benchmark | Actual Benchmark | Gap | Verdict |
|---|---|---|---|---|
| "95% accuracy" | Vendor demo | Academic paper on similar task: 72-85% | 10-23% gap | 🟠 RISK |
| "Real-time inference" | Marketing | p95 latency in benchmarks: 2.3s | Depends on SLA | 🟡 VIABLE |
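The gap column can be computed mechanically. A minimal sketch, assuming the claimed figure and the independent range measure the same metric on a comparable task:

```python
# Minimal sketch: compare a claimed metric against an independently
# reported range and express the discrepancy the way the matrix does.
def claim_gap_verdict(claimed: float, bench_low: float, bench_high: float) -> str:
    if claimed <= bench_high:
        return "🟡 VIABLE: claim is within the independently observed range"
    gap = f"{claimed - bench_high:.0%}-{claimed - bench_low:.0%} gap"
    return f"🟠 RISK: {gap} vs independent benchmarks"

print(claim_gap_verdict(0.95, 0.72, 0.85))  # 🟠 RISK: 10%-23% gap vs independent benchmarks
print(claim_gap_verdict(0.80, 0.72, 0.85))  # 🟡 VIABLE: claim is within the independently observed range
```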
3b. AI Maturity Indicators
| Indicator | Substance | Smoke |
|---|---|---|
| Training data | Documented, versioned, representative | "Proprietary" with no details |
| Evaluation metrics | Multiple metrics, test set documented | Single accuracy number |
| Failure modes | Documented, graceful degradation | "Works great" with no documented edge cases |
| Drift monitoring | Built-in, documented | No mention |
| Human-in-the-loop | Designed for it | Fully autonomous claims |
| Explainability | Interpretable outputs | Black box |
| Cost per inference | Documented | Hidden or "contact sales" |
| Data privacy | Clear data handling policy | Vague "we take privacy seriously" |
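As a rough sketch, the eight indicators can be tallied into a substance/smoke score. The indicator keys and the 6/3 thresholds below are assumptions for illustration, not part of the skill:

```python
# Rough checklist scorer over the eight substance/smoke indicators above.
AI_MATURITY_INDICATORS = [
    "training_data_documented", "multiple_eval_metrics", "failure_modes_documented",
    "drift_monitoring", "human_in_the_loop", "explainability",
    "cost_per_inference_documented", "clear_data_privacy_policy",
]

def ai_maturity_score(evidence: dict) -> tuple:
    score = sum(1 for key in AI_MATURITY_INDICATORS if evidence.get(key, False))
    verdict = "substance" if score >= 6 else "mixed" if score >= 3 else "smoke"
    return score, verdict

print(ai_maturity_score({"training_data_documented": True,
                         "multiple_eval_metrics": True}))  # (2, 'smoke')
```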
3c. LLM-Specific Red Flags (if applicable)
4a. Vendor Viability
| Factor | Assessment |
|---|---|
| Funding / Revenue | Public financial data, funding rounds, runway |
| Customer retention | NRR if available, churn indicators |
| Competitive position | Market share, differentiation, moat |
| Acquisition risk | Likely acquirer? Product continuity post-acquisition? |
| Pricing model stability | History of price changes, lock-in mechanisms |
4b. Dependency Chain Analysis
For each technology with a 🟡 or 🟠 verdict, design a minimal PoC:
| Tecnología | PoC Objective | Success Criteria | Effort | Timeline |
|---|---|---|---|---|
| {AI Platform} | Validate accuracy on real data | >85% on 100 production samples | 1 sprint | Sprint 0 |
| {Framework Y} | Load test with production-like data | >50K rps at p99 <200ms | 3 days | Sprint 0 |
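The success criteria above lend themselves to executable pass/fail checks, so a spike ends in a verdict rather than a judgment call. The function names below are illustrative; the thresholds come from the table:

```python
# Encode each PoC's success criteria as an executable pass/fail check.
def framework_y_poc_passed(rps: float, p99_ms: float) -> bool:
    """Table criteria: >50K rps at p99 < 200 ms."""
    return rps > 50_000 and p99_ms < 200

def ai_platform_poc_passed(correct: int, total: int) -> bool:
    """Table criteria: >85% accuracy on at least 100 production samples."""
    return total >= 100 and correct / total > 0.85

print(framework_y_poc_passed(rps=62_000, p99_ms=140))  # True
print(ai_platform_poc_passed(correct=83, total=100))   # False: 83% misses the bar
```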
Each PoC must:
SOFTWARE VIABILITY SCORECARD
════════════════════════════
Project: {name}
| Tecnología | Maturity | Community | Production | AI Score | Vendor | VEREDICTO |
|---|---|---|---|---|---|---|
| {Tech A} | 4/5 | 4/5 | 5/5 | n/a | 4/5 | 🟢 SUBSTANCE |
| {AI Tool B} | 2/5 | 3/5 | 2/5 | 2/5 | 3/5 | 🟠 HIGH RISK |
| {Framework C} | 3/5 | 4/5 | 3/5 | n/a | 5/5 | 🟡 VIABLE PROMISE |
GLOBAL VERDICT: [VIABLE / VIABLE WITH PoCs / REQUIRES ALTERNATIVES / NOT VIABLE]
ALTERNATIVES IDENTIFIED:
- {AI Tool B} → alternative: {Open Source X} (🟢 on community, 🟡 on features)
MANDATORY SPIKES: [N]
DISCARDED TECHNOLOGIES: [list]
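One way to derive the global verdict from the per-technology column of the scorecard is a worst-case aggregation. This mapping is an assumption consistent with the scorecard semantics, not a rule stated by the skill:

```python
# Assumed worst-case aggregation from per-technology verdicts to the
# global verdict line of the scorecard.
def global_verdict(verdicts: list) -> str:
    if verdicts and all(v == "🔴" for v in verdicts):
        return "NOT VIABLE"
    if "🔴" in verdicts or "🟠" in verdicts:
        return "REQUIRES ALTERNATIVES"
    if "🟡" in verdicts:
        return "VIABLE WITH PoCs"
    return "VIABLE"

print(global_verdict(["🟢", "🟠", "🟡"]))  # REQUIRES ALTERNATIVES
print(global_verdict(["🟢", "🟡"]))        # VIABLE WITH PoCs
```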
| Decision | Enables | Constrains | When to Use |
|---|---|---|---|
| Full stack validation | Maximum confidence | Takes 3-5 days | Pre-commitment, large investment |
| AI-only validation | Focused on highest risk | Misses infra risks | AI-heavy proposals |
| Vendor comparison | Objective selection | Needs market research | Multiple vendor options |
| PoC-first approach | Evidence-based decisions | Delays commitment | Unproven technologies |
| Scenario | Response |
|---|---|
| Vendor provides only marketing materials | Flag as 🟠 minimum. Request technical docs, API reference, benchmark methodology |
| Technology is < 6 months old | Automatic 🟡 ceiling. Cannot be 🟢 without production evidence |
| AI claims "state of the art" | Verify against published benchmarks (papers, leaderboards). Discount by domain gap |
| Open source with no corporate backing | Assess bus factor and funding sustainability. Flag if bus factor = 1 |
| Client already committed to vendor | Still validate — document risks for risk register, design guardrails |
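The "< 6 months old" row is effectively a ceiling rule on the verdict, which can be sketched as a guard. The 183-day cutoff and the function name are illustrative assumptions:

```python
from datetime import date

# Sketch of the age-based ceiling: a technology younger than ~6 months
# cannot receive 🟢 regardless of its raw assessment.
def verdict_ceiling(raw_verdict: str, first_release: date, today: date) -> str:
    age_days = (today - first_release).days
    if age_days < 183 and raw_verdict == "🟢":
        return "🟡"  # cap at VIABLE PROMISE until production evidence exists
    return raw_verdict

print(verdict_ceiling("🟢", date(2026, 1, 10), date(2026, 3, 18)))  # 🟡
print(verdict_ceiling("🟢", date(2025, 1, 10), date(2026, 3, 18)))  # 🟢
```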
| Format | Default | Description |
|---|---|---|
| markdown | ✅ | Rich Markdown + Mermaid diagrams. Token-efficient. |
| html | On demand | Branded HTML (Design System). Visual impact. |
| dual | On demand | Both formats. |
Default output is Markdown with embedded Mermaid diagrams. HTML generation requires explicit {FORMATO}=html parameter. [EXPLICIT]
Primary: A-04_Software_Viability_{project}.md (or .html when {FORMATO}=html): technology inventory, maturity assessment, AI validation, vendor risk, PoC designs, viability scorecard, recommendations.
Author: Javier Montano | Last updated: March 18, 2026
Example invocations:
- /software-viability {project} {technology-or-vendor-to-validate}