Provides actionable feedback on Power BI reports' quality, usage, adoption, structure, performance, and distribution using Python and Bash scripts for audits and health checks.
Install from the plugin hub:

```shell
npx claudepluginhub data-goblin/power-bi-agentic-development --plugin reports
```

This skill uses the workspace's default tool permissions.
Structured evaluation of Power BI reports to produce actionable feedback for developers and consultants. A report review assesses whether a report is effective, well-built, and actually being used. The output is a prioritized list of findings with concrete recommendations.
Note that the skill applies in one of three scenarios: (1) a report in development that doesn't yet have users, (2) a report in testing with a subset of the user audience, or (3) a report that's already distributed and should be seeing active usage.

In scenarios 2 and 3 you may still provide feedback on the report's content and structure, but prioritize the other applicable dimensions (such as usage and distribution) first.
Activate when conducting a report review, audit, or health check. Common triggers:
A comprehensive report review evaluates six dimensions. Not every review needs all six -- scope to what the user needs.
Usage is the most objective signal of report value. A report that nobody views is a maintenance liability regardless of its design quality.
Retrieve usage data with the scripts in scripts/:
```shell
# Workspace overview (views, rank, page views, load times)
python3 scripts/get_report_usage.py -w <workspace-id>
python3 scripts/get_report_usage.py -w <workspace-id> --include-datahub

# Single report deep-dive (daily views, per-viewer breakdown, page views by day)
python3 scripts/get_report_detail.py -w <workspace-id> -r <report-id>

# Distribution audit (who has access, through what channels)
python3 scripts/get_report_distribution.py -w <workspace-id> -r <report-id>
```
Filtering viewers: Exclude non-consumer users from adoption metrics. Service principals (type App), report developers, and IT / support personnel inflate viewer counts and distort reach. See references/usage-metrics.md for identification heuristics and references/distribution.md for resolving security groups and distribution lists via the Microsoft Graph API.
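The filtering step can be sketched in Python. The record shape and field names below are assumptions for illustration, not the actual output schema of get_report_usage.py:

```python
# Illustrative viewer records -- field names are assumptions, not the actual
# output schema of get_report_usage.py.
viewers = [
    {"name": "deploy-sp",           "type": "App",  "views": 120},
    {"name": "alice@contoso.com",   "type": "User", "views": 40},
    {"name": "bob.dev@contoso.com", "type": "User", "views": 95},
    {"name": "carol@contoso.com",   "type": "User", "views": 12},
]

# Known non-consumers for this report: the developer and IT/support accounts.
EXCLUDED_USERS = {"bob.dev@contoso.com"}

def consumer_viewers(records):
    """Drop service principals (type 'App') and known non-consumer accounts."""
    return [r for r in records
            if r["type"] != "App" and r["name"] not in EXCLUDED_USERS]

consumers = consumer_viewers(viewers)  # alice and carol remain
```

Adoption metrics (reach, viewer counts) should then be computed from the filtered list only.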
Evaluate usage signals:
- Reach: see references/distribution.md for how to calculate reach and what the numbers mean.
- Viewer and view trends (see references/usage-metrics.md).
- Recency: check `lastVisitedTimeUTC`.
- Load times: see references/performance.md for interpretation.

Do not use arbitrary thresholds for what constitutes "healthy" or "concerning"; these depend entirely on the report's audience, purpose, and lifecycle stage. A report for 3 analysts has different expectations than one for 300 executives.
Subscriptions are not views. Email subscriptions deliver report snapshots without generating view events. Check `admin/reports/{id}/subscriptions` (requires Fabric Admin) for active subscribers. A report with 0 views but active subscriptions is being consumed passively.
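A minimal sketch of the subscription check, assuming a bearer token acquired separately (for example via `az account get-access-token`); only the admin endpoint path comes from the section above, the helper names are illustrative:

```python
import requests  # the skill's prerequisite HTTP library

ADMIN_API = "https://api.powerbi.com/v1.0/myorg"

def subscriptions_url(report_id):
    # Admin subscriptions endpoint -- requires Fabric Admin, per the note above.
    return f"{ADMIN_API}/admin/reports/{report_id}/subscriptions"

def get_subscriptions(report_id, token):
    """Fetch active subscriptions for a report (token acquisition not shown)."""
    resp = requests.get(subscriptions_url(report_id),
                        headers={"Authorization": f"Bearer {token}"})
    resp.raise_for_status()
    return resp.json().get("value", [])

def consumed_passively(views_30d, subscriptions):
    """0 views plus active subscribers suggests passive consumption, not abandonment."""
    return views_30d == 0 and len(subscriptions) > 0
```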
Use rolling 7-day averages for view trends. Raw daily counts are noisy. Compare the current 7D average to the prior 7D to identify trajectory. See references/usage-metrics.md for methodology.
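The 7-day comparison can be sketched as follows; `trajectory` returns the change in average daily views between the two most recent 7-day windows (positive means growing):

```python
def rolling_avg(daily, window=7):
    """Trailing average for each day once a full window is available."""
    return [sum(daily[i - window + 1:i + 1]) / window
            for i in range(window - 1, len(daily))]

def trajectory(daily):
    """Current 7-day average minus the prior 7-day average (positive = growing)."""
    if len(daily) < 14:
        return None  # not enough history to compare two full windows
    return sum(daily[-7:]) / 7 - sum(daily[-14:-7]) / 7
```

For example, a report averaging 10 views/day that jumps to 20 views/day in the latest week yields a trajectory of +10.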
Key insight: Reports with 0 views are not necessarily bad -- they may be new, seasonal, consumed via subscriptions, or used via embedded scenarios not captured in telemetry. Cross-reference with the last_visited timestamp from DataHub.
Permissions: Tier 1 (WABI) needs any workspace role. Tier 2 (model) needs workspace Contributor+. Distribution and subscription checks need Fabric Admin (tenant-level). See references/usage-metrics.md for the full permission matrix.
For additional context on the usage metrics dataset schema and available tables, see usage-metrics-dataset/.
Evaluate the visual design and information architecture. Consult the pbi-report-design skill for detailed guidelines. Reference: Data Goblins Report Checklist.
Checklist:
Evaluate the connection between the report and its underlying semantic model.
Checklist:
Assess report load time and visual complexity. Run the performance audit script:
```shell
python3 scripts/performance_audit.py -w <workspace-id> -r <report-id>
```
See references/performance.md for percentile interpretation, DAX query inference from visual field bindings, and common anti-patterns.
Key indicators: P50 and P90 load times, visual count per page (loosely 12-15 max, but depends on complexity), extension measure count. See the reference for interpretation; do not apply rigid thresholds.
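For illustration, a simple nearest-rank percentile and a busy-page check; performance_audit.py performs its own analysis, so this is only a sketch of how raw samples map to the indicators above:

```python
def percentile(samples, p):
    """Simple nearest-rank percentile; the audit script may compute this differently."""
    ordered = sorted(samples)
    return ordered[round(p / 100 * (len(ordered) - 1))]

# Illustrative per-view load times in seconds for one report.
load_times = [1.2, 1.4, 1.9, 2.1, 2.8, 3.0, 3.2, 3.9, 4.5, 6.8, 7.4]
p50 = percentile(load_times, 50)  # typical experience
p90 = percentile(load_times, 90)  # slow-tail experience

# Visuals per page (illustrative); ~12-15 is a loose guideline, not a hard limit.
pages = {"Overview": 14, "Detail": 9, "Trends": 6}
busy_pages = [name for name, count in pages.items() if count > 12]
```

A large gap between P50 and P90 usually matters more than either number alone: it means some viewers or pages are having a much worse experience than the median.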
Assess the report's governance posture. See references/report-metadata.md.
Checklist:
- Export-to-Excel activity (see references/export-to-excel.md)

Evaluate whether the report meets accessibility, organizational standards, and documentation requirements.
Accessibility:
Standards:
Documentation (for handover/production):
Clarify what the user wants reviewed. Ask the user:
If the semantic model is in scope, use the review-semantic-model skill in parallel. Many report issues (slow visuals, (Blank) values, missing fields) originate in the model. See references/best-practices.md for model symptoms that surface in reports.
If the report is local-only or not yet published, ask the user:
"Is this a report in development which doesn't yet have users, a report in testing with a subset of the user audience, or a report that's already distributed and should be seeing active usage and value generation?"
This determines which dimensions are applicable:
| Stage | Usage data? | What to review |
|---|---|---|
| Development | No | Design, data model binding, performance, accessibility, structure |
| Testing | Partial | All of the above + verify testers are actually testing (views from test audience) |
| Production | Yes | All dimensions including full usage, distribution, and export analysis |
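The table above can be captured as a simple mapping (dimension names are paraphrased from the table):

```python
# Dimension sets per lifecycle stage, paraphrased from the stage table above.
STAGE_DIMENSIONS = {
    "development": {"design", "binding", "performance", "accessibility", "structure"},
    "testing": {"design", "binding", "performance", "accessibility", "structure",
                "usage"},  # partial usage: verify testers are actually testing
    "production": {"design", "binding", "performance", "accessibility", "structure",
                   "usage", "distribution", "export analysis"},
}

def dimensions_for(stage):
    """Look up which review dimensions apply; raises KeyError for unknown stages."""
    return STAGE_DIMENSIONS[stage.strip().lower()]
```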
Remind the user: a report's success lives and dies on whether it is being used and delivering business value. Design, performance, and structure can be reviewed proactively, but usage data is the only objective measure of whether the report is working. Good requirements gathering helps achieve adoption, but it can never be guaranteed.
If the report is local-only, ask where the published version is (or will be). Usage metrics require a published report in the Power BI service.
Run the usage script for quantitative data. Export or inspect the report definition for qualitative assessment.
Walk through each relevant dimension using the checklists above. Score each finding by severity:
Present findings as a structured summary. Lead with the most impactful findings.
Format:
```
REPORT REVIEW: <Report Name>
===============================

USAGE SIGNAL
Views (30d): 47 | Viewers: 8 | Rank: #3/22
Top pages: Overview (60%), Detail (30%), Trends (10%)
Load time P50: 3.2s | P90: 7.1s

CRITICAL
- [Performance] P90 load time exceeds 7s due to 14 visuals on Overview page

HIGH
- [Design] No page titles on 2 of 3 pages
- [Binding] 3 visuals have broken field references

MEDIUM
- [Design] Inconsistent margins (24px left, 32px right)
- [Accessibility] Missing alt text on 5 data visuals

LOW
- [Design] Default theme applied; consider custom theme
- [Standards] Report name uses spaces instead of hyphens
```
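A sketch of how findings might be grouped into that format; the sample findings and the `render` helper are illustrative, not part of the skill's scripts:

```python
SEVERITIES = ["CRITICAL", "HIGH", "MEDIUM", "LOW"]  # most impactful first

# Findings as (severity, dimension, message) tuples -- illustrative data.
findings = [
    ("HIGH", "Design", "No page titles on 2 of 3 pages"),
    ("CRITICAL", "Performance", "P90 load time exceeds 7s"),
    ("LOW", "Standards", "Report name uses spaces instead of hyphens"),
]

def render(findings):
    """Group findings by severity, leading with the most impactful."""
    lines = []
    for sev in SEVERITIES:
        group = [f for f in findings if f[0] == sev]
        if not group:
            continue  # omit empty severity headings
        lines.append(sev)
        lines.extend(f"- [{dim}] {msg}" for _, dim, msg in group)
    return "\n".join(lines)
```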
Before running usage scripts, ensure:
- Azure CLI authenticated: run `az login` if needed
- `fab` CLI authenticated: run `fab auth login` if needed (for distribution script)
- Python `requests` package: `uv pip install requests`
- references/usage-metrics.md -- Full documentation of all usage data APIs (official and undocumented)
- references/distribution.md -- All report access paths and how to audit them
- references/performance.md -- Percentile interpretation, DAX query inference from visual metadata
- references/report-metadata.md -- Thick/thin, endorsement, sensitivity, pipeline, model properties
- references/export-to-excel.md -- Export activity analysis, data governance implications
- references/best-practices.md -- Data visualization principles, chart selection, color, interaction design
- scripts/get_report_usage.py -- Workspace-level usage overview
- scripts/get_report_detail.py -- Single report deep-dive (daily, per-viewer, per-page)
- scripts/get_report_distribution.py -- Distribution audit (ACL, apps, publish-to-web)
- scripts/performance_audit.py -- Load times + visual complexity analysis
- usage-metrics-dataset/ -- Exported Usage Metrics dataset (TMDL schema + report definition)
- review-semantic-model -- Companion skill for semantic model review (run in parallel when model is in scope)
- pbi-report-design -- Detailed report design guidelines and layout rules
- modifying-theme-json -- Theme authoring, compliance auditing, formatting promotion
- deneb-visuals, python-visuals, r-visuals, svg-visuals -- Visual-specific review criteria