From powerbi-master
Diagnoses and optimizes Power BI performance issues like slow reports, DAX queries, and large models using Performance Analyzer, DAX Studio, and VertiPaq Analyzer.
Install: `npx claudepluginhub josiahsiegel/claude-plugin-marketplace --plugin powerbi-master`

This skill uses the workspace's default tool permissions.
Power BI performance depends on data model design, DAX efficiency, visual configuration, and infrastructure. This skill covers diagnostic tools, optimization techniques, and best practices for achieving fast, responsive reports.
Enable in Power BI Desktop: View > Performance Analyzer > Start recording
| Metric | Meaning | Action if Slow |
|---|---|---|
| DAX query | Time to execute the DAX | Optimize measure, check filter context |
| Visual display | Time to render the result | Reduce data points, simplify visual |
| Other | Miscellaneous overhead | Usually minor, ignore unless dominant |
Workflow:
1. Start recording, then interact with the report (or use "Refresh visuals") to capture timings.
2. Sort visuals by duration and expand the slowest one to see which component dominates.
3. Use "Copy query" on a slow visual to take its DAX query into DAX Studio for deeper analysis.
Free external tool for deep DAX performance analysis:
Key features: Server Timings, query plan viewing, VertiPaq Analyzer metrics, and the ability to edit and re-run DAX queries against the live model.
Server Timings breakdown:
| Engine | What It Does | Optimization Target |
|---|---|---|
| Storage Engine (SE) | Scans VertiPaq data, retrieves rows | Reduce cardinality, columns scanned |
| Formula Engine (FE) | Evaluates DAX formulas | Simplify DAX, avoid nested iterators |
Ideal ratio: SE should be 80-90% of total time. High FE % means DAX is doing too much computation.
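To measure the split, a minimal test query can be pasted into DAX Studio with Server Timings enabled. This is a sketch: the 'Date' table and [Total Sales] measure are assumed names, not part of any specific model.

```dax
// Run with Server Timings on; compare SE vs FE duration in the results pane
EVALUATE
SUMMARIZECOLUMNS (
    'Date'[Year],           // group by year
    "Sales", [Total Sales]  // measure under test
)
```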
Common DAX Studio workflow: connect to the open Power BI Desktop model, enable Server Timings, clear the cache, run the captured query, then compare SE vs FE time and iterate on the measure.
Analyze model size and compression in DAX Studio: Advanced > View Metrics
| Metric | What to Check | Target |
|---|---|---|
| Table size (bytes) | Identify largest tables | Reduce columns, remove unused |
| Column cardinality | High cardinality = poor compression | Reduce distinct values, group rare values |
| Column size | Disproportionately large columns | Remove or move to dimension |
| Dictionary size | Large string dictionaries | Shorten strings, use keys |
| Relationship size | Memory for relationship mapping | Normal, cannot optimize directly |
| Hierarchy size | Hidden auto date/time hierarchies | Disable auto date/time |
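The hidden auto date/time tables behind the hierarchy-size metric can be spotted with a DAX INFO function (available in recent DAX query support). A sketch; the LocalDateTable_* naming is what Power BI typically generates:

```dax
// List tables with their hidden flag; auto date/time creates hidden
// LocalDateTable_* and DateTableTemplate_* tables
EVALUATE
SELECTCOLUMNS (
    INFO.TABLES (),
    "Table", [Name],
    "Hidden", [IsHidden]
)
```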
| Technique | Impact | How |
|---|---|---|
| Remove unused columns | High | Delete columns not used in any visual, measure, or relationship |
| Reduce column cardinality | High | Group rare values (bottom 5% into "Other") |
| Use integer keys | High | Replace text foreign keys with integer surrogates |
| Split date/time | Medium | Separate DateTime into Date (date) and Time (time) columns |
| Round decimals | Medium | Round to 2 decimal places instead of 15 |
| Avoid calculated columns | Medium | Use measures instead (query-time vs storage) |
| Disable auto date/time | Medium | Options > Data Load > uncheck "Auto date/time" |
| Remove text from facts | High | Move descriptions to dimension tables |
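The "avoid calculated columns" row can be illustrated with a short sketch; Sales[Amount] and Sales[Cost] are assumed column names used for illustration only:

```dax
// Calculated column: materialized for every row, adds VertiPaq memory
// Margin = Sales[Amount] - Sales[Cost]

// Measure alternative: computed at query time, zero storage cost
Total Margin := SUMX ( Sales, Sales[Amount] - Sales[Cost] )
```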
For large tables, partition by date range using incremental refresh: define RangeStart and RangeEnd datetime parameters in Power Query, filter the source query on them, then configure an incremental refresh policy so only recent partitions are reprocessed on each refresh.
Use variables to avoid repeated calculations:
```dax
// BAD: evaluates [Total Sales] twice
Margin % = DIVIDE ( [Total Sales] - [Total Cost], [Total Sales] )

// GOOD: evaluate once, reuse via variables
Margin % =
VAR Sales = [Total Sales]
VAR Cost = [Total Cost]
RETURN DIVIDE ( Sales - Cost, Sales )
```
Avoid FILTER with large tables in CALCULATE:
```dax
// BAD: FILTER over the whole table forces a scan
CALCULATE ( [Sales], FILTER ( ALL ( Products ), Products[Category] = "Electronics" ) )

// GOOD: column filter (internally optimized)
CALCULATE ( [Sales], Products[Category] = "Electronics" )
```
Avoid nested iterators:
```dax
// BAD: O(n^2) complexity
SUMX (
    Products,
    SUMX ( FILTER ( Sales, Sales[ProductID] = Products[ProductID] ), Sales[Amount] )
)

// GOOD: use the relationship + simple aggregation
SUMX ( Products, [Total Sales] )
```
Use DISTINCTCOUNT instead of COUNTROWS(DISTINCT(...)):
```dax
// BAD
COUNTROWS ( DISTINCT ( Sales[CustomerID] ) )

// GOOD
DISTINCTCOUNT ( Sales[CustomerID] )
```
Avoid FORMAT() in measures (returns text, kills sort):
```dax
// BAD: a measure returning text cannot sort chronologically
MonthLabel = FORMAT ( [Date], "MMMM yyyy" )

// GOOD: pre-compute display and sort columns in the Date table,
// then set MonthLabel's "Sort by column" to MonthSort
MonthLabel = FORMAT ( 'Date'[Date], "MMMM yyyy" )
MonthSort  = YEAR ( 'Date'[Date] ) * 100 + MONTH ( 'Date'[Date] )
```
| Complexity | Acceptable For | Performance Concern |
|---|---|---|
| Simple aggregation (SUM, COUNT) | Any visual | No |
| CALCULATE with column filter | Any visual | No |
| Single iterator (SUMX) | Most visuals | Watch row count |
| CALCULATE with FILTER(table) | Limited visuals | Yes, if table is large |
| Nested iterators | Avoid | Yes, always |
| CALCULATE inside SUMX | Use carefully | Context transition cost |
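The "CALCULATE inside SUMX" row refers to context transition: referencing a measure inside an iterator implicitly wraps it in CALCULATE, converting each row context into a filter context. A sketch, assuming a [Total Sales] measure:

```dax
// Acceptable: Products is a small dimension, so few context transitions
Avg Sales per Product := AVERAGEX ( Products, [Total Sales] )

// Risky: one context transition per fact row -- avoid on large tables
// Avg per Line := AVERAGEX ( Sales, [Total Sales] )
```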
| Problem | Impact | Fix |
|---|---|---|
| 20+ visuals on one page | Each visual sends its own DAX query | Keep to 8-12 visuals per page |
| Visuals with many data points | Large result sets | Use Top N, aggregation |
| Many slicers | Each slicer change re-queries all visuals | Use "Apply" button |
Enable query reduction features (Options > Query reduction): disable cross-highlighting/filtering by default, and add Apply buttons to slicers and the filter pane so queries fire only on demand.
Avoid complex DAX-based conditional formatting on large tables. Use simple column references or measures with limited computation.
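A minimal sketch of a cheap conditional-formatting measure; the [Margin %] measure, threshold, and colors are hypothetical:

```dax
// Returns a color code per cell: a simple comparison, no table iteration
Margin Color :=
IF ( [Margin %] < 0.10, "#D64550", "#107C10" )
```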
Pre-aggregated tables that Power BI queries instead of the detail table:
```sql
SELECT
    ProductCategory,
    CAST(OrderDate AS DATE) AS OrderDate,
    SUM(Amount) AS TotalAmount,
    COUNT(*) AS OrderCount
FROM Sales
GROUP BY ProductCategory, CAST(OrderDate AS DATE)
```
In Power BI, set up aggregation mappings:
- TotalAmount: Sum summarization, mapped to detail column Sales[Amount]
- OrderCount: Count summarization, mapped to detail table Sales
- ProductCategory: GroupBy, mapped to Sales[ProductCategory]
- OrderDate: GroupBy, mapped to Sales[OrderDate]
- Hide the aggregation table from report view
Power BI automatically routes queries: requests the aggregation table can answer are served from the fast Import cache, while anything needing detail grain falls through to the DirectQuery source.
Premium and Fabric capacities support automatic aggregation training: the service analyzes query patterns and builds and maintains aggregations on your behalf.
Mix Import and DirectQuery tables in one model:
| Table | Storage Mode | Why |
|---|---|---|
| Date dimension | Import | Small, used everywhere, fast |
| Product dimension | Import | Small, frequent filtering |
| Customer dimension | Import or Dual | Medium size |
| Sales fact | DirectQuery | Too large for Import |
| Aggregation table | Import | Pre-computed summaries |
Dual mode: the table exists as both Import and DirectQuery, and the engine chooses per query: Import when the table is queried alone or with other Import tables, DirectQuery when it joins DirectQuery facts (avoiding cross-source joins).
Set storage mode: Model view > select table > Properties > Storage mode
Composite models with Direct Lake (2025 Preview):
Direct Lake provides near-Import query speed without data duplication:
| Aspect | Guidance |
|---|---|
| V-Order | Enable in Spark notebooks/pipelines for optimal Parquet read performance |
| Framing frequency | Schedule frequent framing for near-real-time freshness (seconds cost) |
| Column count | Minimize columns -- each column still consumes memory when paged in |
| Guardrails | Monitor file/row-group counts per table (varies by F-SKU capacity) |
| Fallback (DL/SQL) | Set DirectLakeBehavior = DirectLakeOnly to block DQ fallback and force optimization |
| Fallback (DL/OL) | No DQ fallback -- queries fail if data cannot be served; optimize model size |
| Memory paging | Max Memory is soft limit -- excess paging degrades performance |
| Calculated columns | Supported but may trigger DQ fallback on DL/SQL; test impact |
| Modeling perf | Desktop 2025+ provides 50%+ improvement for live Direct Lake editing |
| Technique | Impact |
|---|---|
| Remove unused columns aggressively | High -- every column adds VertiPaq memory |
| Split DateTime into Date + Time | High -- reduces cardinality significantly |
| Use integer surrogate keys | High -- 4-byte integers compress far better than text |
| Reduce decimal precision | Medium -- ROUND to 2 places |
| Implement aggregation tables | High -- 100x fewer rows for summary queries |
| Use incremental refresh with partitioning | High -- only refresh changed partitions |
| Enable automatic aggregations (Premium/Fabric) | Medium -- system optimizes query routing |
| Consider Direct Lake for Fabric data | High -- eliminates Import refresh entirely |
| Disable auto date/time | Medium -- removes hidden tables |
| Archive cold data to separate model | Medium -- reduce active model footprint |
| Setting | Location | Recommendation |
|---|---|---|
| Auto date/time | Options > Data Load | Disable for production models |
| Background data | Options > Data Load | Enable for faster development |
| Parallel loading | Options > Data Load | Enable for multi-table models |
| DirectQuery query timeout | Options > DirectQuery | Increase for slow sources (default 10 min) |
| Query reduction for slicers | Report settings | Enable "Add Apply button" |
| Auto recovery | Options > Data Load | Enable to prevent work loss |
| Report storage mode | Options > Preview | PBIR format for git-friendly development |
Report Server has different performance characteristics from the cloud service:
| Area | Guidance |
|---|---|
| CPU | Most critical resource at peak load -- add cores first |
| Memory/RAM | Increase allocated memory for better query caching |
| Storage | Use SSDs with high IOPS for the report server database |
| Database isolation | Host report server DB on separate machine from PBIRS |
| Scale-out | Deploy multiple PBIRS instances sharing one report server DB |
| Load balancing | Use NLB or Azure Traffic Manager across instances |
| High availability | Passive standby VM in another region for business continuity |
| Caching | Configure report execution caching for frequently viewed reports |
| Data source proximity | Place gateway/PBIRS close to data sources to reduce latency |
| Concurrent users | Monitor with performance counters; scale out at 50+ concurrent |
| Problem | Solution |
|---|---|
| Too many bookmarks loading data | Use report-level filters instead of bookmark-captured filters |
| Bookmarks causing full re-query | Minimize bookmark-captured visual states |
| Complex cross-page drillthrough | Use drillthrough instead of bookmarks for page navigation |
| Slicer cascades on page load | Set default slicer values to reduce initial query count |
references/dax-studio-walkthrough.md -- Step-by-step DAX Studio analysis guide with query plan interpretation and latest DAX Studio features