BI and analytics service discovery — data maturity assessment (DCAM/DMM), dashboard landscape inventory, semantic layer evaluation, self-service analytics readiness, data literacy assessment, analytics use case portfolio, and BI transformation roadmap. Distinct from bi-architecture (design skill); this is the discovery/assessment for BI-as-a-service engagements. Use when the user asks to "assess BI maturity", "evaluate analytics capabilities", "dashboard inventory", "data literacy assessment", "semantic layer review", "self-service analytics readiness", "analytics use case prioritization", "BI transformation roadmap", or mentions BI-as-a-service, analytics maturity, dashboard consolidation, data democratization, DCAM, DMM, or data literacy.
Source: javimontano/mao-discovery-framework (claudepluginhub).
Generates a comprehensive BI & Analytics discovery covering data maturity assessment (DCAM/DMM), dashboard landscape inventory, semantic layer evaluation, self-service analytics readiness, data literacy assessment, analytics use case portfolio, and BI transformation roadmap. Distinct from bi-architecture (the BI architecture design skill); this skill is the discovery/assessment for BI-as-a-service engagements.
Data without context is noise. Dashboards without adoption are decoration. Analytics only transforms when the entire organization knows how to read, question, and act on data.
The user provides a project or client name as $ARGUMENTS. Parse $1 as the project/client name used throughout all output artifacts.
Parameters:
{MODO}: piloto-auto (default) | desatendido (unattended) | supervisado (supervised) | paso-a-paso (step-by-step)
{FORMATO}: markdown (default) | html | dual
{VARIANTE}: ejecutiva (executive, ~40%: S1 + S6 + S7 only) | técnica (technical, full 7 sections, default)
If reference materials exist, load them:
Read ${CLAUDE_SKILL_DIR}/references/
Data management maturity assessment using industry-standard frameworks.
Reference frameworks: DCAM (EDM Council) and DMM (CMMI Institute).
Assessment dimensions:
| Dimension | Evaluates | Key indicators |
|---|---|---|
| Strategy | Data strategy, business alignment, investment | Data strategy document, CDO role, dedicated budget |
| Governance | Policies, roles (data owners/stewards), compliance | Data council, documented policies, steward network |
| Quality | Completeness, accuracy, consistency, timeliness | Quality scores per dataset, automated monitoring |
| Architecture | Data platform, integration, metadata management | Data catalog, lineage, integration patterns |
| Operations | Pipelines, data SLAs, incident management | Pipeline uptime, data freshness SLAs, incident process |
Maturity levels (1-5):
| Level | Name | Description |
|---|---|---|
| 1 | Initial | Siloed data, no governance, unknown quality |
| 2 | Managed | Some documentation, partial governance, basic quality |
| 3 | Defined | Standardized processes, formal governance, monitored quality |
| 4 | Quantitatively Managed | Métricas de performance, SLAs, continuous improvement |
| 5 | Optimizing | Data-driven culture, innovation, predictive quality management |
Gap analysis: the delta between current and target maturity level per dimension. The target is not always level 5; it depends on business needs.
Output: Data maturity radar chart with a score per dimension, overall maturity level, and gap analysis to target.
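The gap analysis above can be sketched as a small calculation. This is a minimal sketch, not a prescribed method: the dimension names follow the S1 table, while the scores, targets, and the unweighted overall mean are illustrative assumptions.

```python
# Compute per-dimension gaps and an overall maturity level from assessed
# scores. Scores and targets (1-5) are illustrative placeholders.
scores = {"Strategy": 2, "Governance": 1, "Quality": 2, "Architecture": 3, "Operations": 2}
targets = {"Strategy": 4, "Governance": 3, "Quality": 4, "Architecture": 4, "Operations": 3}

# Gap to target per dimension (positive = below target)
gaps = {dim: targets[dim] - scores[dim] for dim in scores}

# Overall level as an unweighted mean (an assumption; other rollups are possible)
overall = sum(scores.values()) / len(scores)

print(f"Overall maturity: {overall:.1f}/5")
for dim, gap in sorted(gaps.items(), key=lambda kv: -kv[1]):
    print(f"  {dim}: current {scores[dim]}, target {targets[dim]}, gap {gap}")
```

Sorting by descending gap surfaces the dimensions that need the most attention first, which maps directly onto the radar chart and roadmap prioritization.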
Complete inventory of the existing dashboard and report landscape.
Inventory dimensions:
| Field | Description | Example |
|---|---|---|
| Tool | BI tool used | Power BI, Tableau, Looker, Qlik, Google Data Studio, Excel |
| Dashboard/Report name | Artifact name | "Sales Monthly Dashboard", "HR Turnover Report" |
| Owner | Who created and maintains it | Finance team, IT, individual analyst |
| Business area | Department or business function | Sales, Finance, Operations, HR, Marketing |
| Refresh cadence | Update frequency | Real-time, daily, weekly, monthly, manual |
| Data sources | Data sources feeding it | ERP, CRM, Data Warehouse, spreadsheets, APIs |
| Adoption | Actual usage level | High (daily use), Medium (weekly), Low (rarely opened), Abandoned |
| Last modified | Last update to the artifact | Date |
Redundancy identification:
Inconsistency identification:
Tool sprawl assessment:
Output: Dashboard inventory table with adoption metrics, redundancy map, and tool sprawl assessment.
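One mechanical way to seed the redundancy map is to group inventory rows by business area and flag artifact pairs that draw on overlapping data sources. This is a sketch under stated assumptions: the field names mirror the inventory table above, the records are hypothetical, and shared sources are only a heuristic signal for human review, not proof of redundancy.

```python
# Flag candidate redundancies: artifacts in the same business area that
# pull from overlapping data sources are worth a consolidation review.
from collections import defaultdict

inventory = [
    {"name": "Sales Monthly Dashboard", "area": "Sales", "sources": {"CRM", "DWH"}},
    {"name": "Sales KPIs (Excel)", "area": "Sales", "sources": {"CRM"}},
    {"name": "HR Turnover Report", "area": "HR", "sources": {"HRIS"}},
]

by_area = defaultdict(list)
for item in inventory:
    by_area[item["area"]].append(item)

redundancy_candidates = []
for area, items in by_area.items():
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a["sources"] & b["sources"]:  # shared source = review candidate
                redundancy_candidates.append((a["name"], b["name"]))

print(redundancy_candidates)
```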
Evaluation of metric definition consistency and the existence of a single source of truth.
Metrics definitions consistency:
Business glossary coverage:
Single source of truth assessment:
Metric conflicts and reconciliation needs:
Output: Semantic layer assessment with metric conflicts inventory and single source of truth score.
Readiness evaluation for democratized analytics.
Current self-service capabilities:
Data access policies:
Tool availability:
Training programs:
Readiness score:
| Dimension | Score (1-5) | Weight |
|---|---|---|
| Tool availability | - | 20% |
| Data access governance | - | 25% |
| Data quality trust | - | 25% |
| User training | - | 15% |
| Support structure | - | 15% |
Self-service analytics readiness = weighted average. Score >3.5: ready for self-service. Score 2-3.5: needs preparation. Score <2: risky without significant investment.
Output: Self-service readiness scorecard with dimensions, scores, and recommendations.
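The weighted average can be computed directly. A minimal sketch: the weights come from the readiness table above, while the per-dimension scores are illustrative assumptions.

```python
# S4 self-service readiness: weighted average of five dimension scores.
# Weights are from the readiness table; scores are illustrative.
weights = {
    "Tool availability": 0.20,
    "Data access governance": 0.25,
    "Data quality trust": 0.25,
    "User training": 0.15,
    "Support structure": 0.15,
}
scores = {
    "Tool availability": 4,
    "Data access governance": 3,
    "Data quality trust": 3,
    "User training": 2,
    "Support structure": 3,
}

readiness = sum(scores[d] * w for d, w in weights.items())

# Interpretation bands from the text above
if readiness > 3.5:
    verdict = "ready for self-service"
elif readiness >= 2:
    verdict = "needs preparation"
else:
    verdict = "risky without significant investment"

print(f"Readiness: {readiness:.2f} -> {verdict}")
```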
Evaluation of the organization's data literacy level.
Data literacy by department/role:
| Level | Name | Description | Indicators |
|---|---|---|---|
| 1 | Data-unaware | Does not use data for decisions | Decides on intuition, does not consult reports |
| 2 | Data-consumer | Consumes predefined reports | Reads dashboards, does not question the data |
| 3 | Data-conversant | Interprets data, asks questions | Identifies trends, requests drill-downs, questions outliers |
| 4 | Data-literate | Analyzes data independently | Creates visualizations, performs ad-hoc analysis |
| 5 | Data-fluent | Influences decisions with data | Designs KPIs, proposes experiments, communicates insights |
Assessment per department:
Training needs identification:
Data champions network assessment:
Cultural barriers to data-driven decision making:
Output: Data literacy map per department with current level, gaps, training needs, and cultural barriers.
Prioritized portfolio of analytics opportunities.
Use case categorization:
| Type | Question it answers | Complexity | Example |
|---|---|---|---|
| Descriptive | What happened? | Low | Sales dashboards, financial reports |
| Diagnostic | Why did it happen? | Medium | Root cause analysis, drill-down analysis |
| Predictive | What will happen? | High | Demand forecasting, churn prediction |
| Prescriptive | What should we do? | Very high | Pricing optimization, resource allocation |
Impact x Feasibility scoring:
| Criterion | Score (1-5) | Description |
|---|---|---|
| Business impact | - | Revenue impact, cost savings, risk reduction, customer experience |
| Data availability | - | Do the required data exist, are they accessible, and is their quality sufficient? |
| Technical feasibility | - | Do current infrastructure and skills allow it? |
| Organizational readiness | - | Is the business area ready to act on the insights? |
| Time to value | - | How long until it delivers value? (shorter = higher score) |
Composite score: (impact * 0.35) + (data_availability * 0.20) + (technical_feasibility * 0.20) + (org_readiness * 0.15) + (time_to_value * 0.10)
Top-10 use cases: for each use case in the top 10:
Output: Use case portfolio with top-10 prioritized, scoring matrix, and quick-win vs. strategic classification.
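The composite formula can be expressed as a small function. A sketch: the weights match the formula stated above, while the example use case and its criterion scores are hypothetical.

```python
# S6 composite score: weighted sum of the five 1-5 criterion scores.
# Weights are those stated in the composite score formula.
WEIGHTS = {
    "impact": 0.35,
    "data_availability": 0.20,
    "technical_feasibility": 0.20,
    "org_readiness": 0.15,
    "time_to_value": 0.10,
}

def composite_score(criteria: dict) -> float:
    """Weighted sum of criterion scores; higher means higher priority."""
    return sum(criteria[name] * w for name, w in WEIGHTS.items())

# Hypothetical use case: high impact, but data and feasibility are middling
churn_prediction = {
    "impact": 5,
    "data_availability": 3,
    "technical_feasibility": 3,
    "org_readiness": 4,
    "time_to_value": 2,
}

print(round(composite_score(churn_prediction), 2))
```

Scoring every candidate with the same function keeps the top-10 ranking reproducible, which is what the explicit weights are for.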
Phased BI transformation roadmap with maturity targets.
Quick Wins (Months 1-3):
Medium-Term (Months 4-9):
Strategic (Months 10-18):
Per phase:
Output: Phased visual roadmap with maturity targets, use case activation, and adoption metrics.
| Decision | Enables | Constrains | When to Use |
|---|---|---|---|
| Single BI tool | Consistency, licensing simplicity | Less flexibility, migration cost | Organizations with <500 BI users |
| Multi-tool strategy | Best-of-breed per use case | Complexity, inconsistency risk | Enterprises with very diverse needs |
| Centralized BI team | Quality, consistency, governance | Bottleneck, slower time-to-value | Data maturity <3, governance priority |
| Federated model | Speed, domain ownership | Inconsistency, duplication risk | Data maturity >3, strong governance |
| Semantic layer first | Single source of truth, trust | Investment before visible value | Metric conflicts causing business issues |
| Self-service first | User empowerment, speed | Quality risk without governance | High data literacy, strong governance |
| Advanced analytics early | Competitive advantage, innovation | Requires foundation (data quality, infra) | Only if descriptive/diagnostic is solid |
Organization without a data warehouse (everything in spreadsheets): S1 maturity will be level 1. The roadmap must include a data infrastructure foundation as a prerequisite before BI. Refer to data-engineering and bi-architecture for the technical design.
Multiple BI tools with political ownership: dashboard consolidation is technically simple but politically complex. Map stakeholders and their interests. Propose temporary coexistence with a unified semantic layer as a bridge.
Highly regulated organization (banking, healthcare): self-service analytics carries compliance restrictions (who may see which data). Row-level security and data classification are prerequisites. Regulatory reporting takes priority over self-service.
Very low data literacy (level 1 organization-wide): do not attempt self-service analytics. Focus on data literacy training plus dashboards curated by a centralized team. Self-service is a medium-term goal, not a starting point.
Analytics use cases requiring data that does not exist: document the data gap as a prerequisite. Some use cases require instrumentation (new data capture) before analytics. Include data collection as a phase in the roadmap.
Before finalizing delivery, verify:
| Format | Default | Description |
|---|---|---|
markdown | Yes | Rich Markdown + Mermaid diagrams. Token-efficient. |
html | On demand | Branded HTML (Design System). Visual impact. |
dual | On demand | Both formats. |
Default output is Markdown with embedded Mermaid diagrams. HTML generation requires explicit {FORMATO}=html parameter.
Primary: BI_Analytics_Discovery_{project}.md -- Data maturity assessment, dashboard landscape inventory, semantic layer evaluation, self-service readiness, data literacy assessment, analytics use case portfolio, and phased BI transformation roadmap with maturity targets.
Diagrams included:
| Case | Handling Strategy |
|---|---|
| Organization without a data warehouse (everything in spreadsheets) | S1 maturity level 1. Roadmap includes data infrastructure foundation as a prerequisite. Refer to data-engineering and bi-architecture. |
| Multiple BI tools with political ownership | Consolidation is technically simple but politically complex. Map stakeholders. Propose temporary coexistence with a unified semantic layer. |
| Highly regulated organization (banking, healthcare) | Self-service analytics with compliance restrictions. Row-level security and data classification are prerequisites. Regulatory reporting takes priority. |
| Very low data literacy (level 1 organization-wide) | Do not attempt self-service. Dashboards curated by a centralized team. Self-service as a medium-term goal. |
| Analytics use cases requiring nonexistent data | Document the data gap as a prerequisite. Include data collection as an explicit phase in the roadmap. |
| Decision | Discarded Alternative | Rationale |
|---|---|---|
| DCAM/DMM as maturity frameworks | Proprietary frameworks, ad-hoc assessment | DCAM (EDM Council) and DMM (CMMI Institute) are recognized industry standards with available benchmarks. They allow comparability across organizations. |
| 7 discovery sections | Quick 3-section assessment, exhaustive 12-section assessment | 7 sections cover the full cycle: maturity, landscape, semantic, self-service, literacy, use cases, roadmap. The executive variant reduces to 3 without losing decision-readiness. |
| Data literacy as a dedicated section (S5) | Literacy as a sub-section of self-service readiness | Organizational literacy is the strongest predictor of BI ROI. It deserves independent evaluation with levels 1-5 per department and a dedicated training plan. |
| Composite Impact x Feasibility scoring (5 criteria) | Simple 2-criterion scoring (impact and effort) | 5 criteria (impact, data availability, technical feasibility, org readiness, time-to-value) with differentiated weights produce a more robust prioritization. |
graph TD
subgraph Core["Core Concepts"]
MATURITY["Data Maturity (DCAM/DMM)"]
DASHBOARD["Dashboard Landscape"]
SEMANTIC["Semantic Layer"]
SELFSERV["Self-Service Readiness"]
LITERACY["Data Literacy"]
PORTFOLIO["Use Case Portfolio"]
ROADMAP["BI Transformation Roadmap"]
end
subgraph Inputs["Inputs"]
TOOLS["BI Tools Inventory"]
METRICS["Existing Metrics & KPIs"]
TEAMS["Business Teams"]
DATASRC["Data Sources"]
end
subgraph Outputs["Outputs"]
REPORT["BI Analytics Discovery Report"]
RADAR["Maturity Radar Chart"]
SCATTER["Use Case Impact x Feasibility"]
GANTT["Transformation Roadmap"]
end
subgraph Related["Related Skills"]
BIARCH["bi-architecture"]
DE["data-engineering"]
DQ["data-quality"]
DG["data-governance"]
ASIS["asis-analysis (Data-AI)"]
end
TOOLS --> DASHBOARD
METRICS --> SEMANTIC
TEAMS --> LITERACY
DATASRC --> MATURITY
MATURITY --> ROADMAP
DASHBOARD --> SEMANTIC
SEMANTIC --> SELFSERV
SELFSERV --> PORTFOLIO
LITERACY --> PORTFOLIO
PORTFOLIO --> ROADMAP
ROADMAP --> REPORT
REPORT --> RADAR
REPORT --> SCATTER
REPORT --> GANTT
BIARCH -.-> SEMANTIC
DE -.-> MATURITY
DQ -.-> SELFSERV
DG -.-> SEMANTIC
ASIS -.-> MATURITY
Markdown format (default):
# BI & Analytics Discovery: {project}
## S1: Data Maturity Assessment (DCAM/DMM)
### Overall Maturity Level: {level}/5
| Dimension | Score (1-5) | Evidence | Gap to Target |
...
## S2: Dashboard Landscape Inventory
| Tool | Dashboard | Owner | Area | Refresh | Adoption |
...
### Redundancy Map
### Tool Sprawl Assessment
## S3-S5: [Semantic, Self-Service, Literacy]
## S6: Analytics Use Case Portfolio
### Top-10 Use Cases
| Use Case | Type | Impact | Feasibility | Score | Ranking |
...
## S7: BI Transformation Roadmap
### Quick Wins (Months 1-3)
### Medium-Term (Months 4-9)
### Strategic (Months 10-18)
PPTX format (on demand):
Slide 1: Cover - BI & Analytics Discovery: {project}
Slide 2: Executive Summary - maturity level + top-3 findings
Slide 3: Data Maturity Radar - 5 dimensions scored 1-5
Slide 4: Dashboard Landscape - tool sprawl + adoption heatmap
Slide 5: Semantic Layer Assessment - metric conflicts count + single source of truth score
Slide 6: Data Literacy Distribution - department-level bar chart
Slides 8-9: BI Transformation Roadmap - phased Gantt
Slide 7: Use Case Portfolio - Impact x Feasibility scatter plot
Slide 10: Next Steps + Budget Magnitudes (FTE-months)
HTML format (on demand):
BI_Analytics_Discovery_{project}_{WIP}.html
DOCX format (on demand):
{fase}_{entregable}_{cliente}_{WIP}.docx
XLSX format (on demand):
{fase}_{entregable}_{cliente}_{WIP}.xlsx
| Dimension | Weight | Criterion |
|---|---|---|
| Trigger Accuracy | 10% | Correct activation on keywords for BI maturity, dashboard inventory, data literacy, semantic layer, self-service analytics, and analytics use cases. |
| Completeness | 25% | 7 sections cover maturity, landscape, semantic, self-service, literacy, portfolio, and roadmap. Maturity assessment with 5 dimensions. |
| Clarity | 20% | Interpretable 1-5 scoring per dimension. Use cases classified by type (descriptive/diagnostic/predictive/prescriptive). Cultural barriers documented. |
| Robustness | 20% | Edge cases (no warehouse, BI politics, regulation, low literacy, nonexistent data) handled with practical strategies. |
| Efficiency | 10% | Executive variant reduces to S1+S6+S7 (~40%). Composite scoring with an explicit formula for reproducible prioritization. |
| Value Density | 15% | Dashboard consolidation as a quick win. Metric conflicts inventoried with impact. Phased roadmap with adoption metric targets. |
Minimum threshold: 7/10. Below this threshold, review maturity dimension coverage and use case scoring rigor.
Author: Javier Montano · MetodologIA Community | Last updated: March 15, 2026