Generates and refines BPA rules for Power BI semantic models via interactive Q&A discovery, model analysis, and expert authoring. Useful for validation, auditing, and best practices setup.
```
npx claudepluginhub data-goblin/power-bi-agentic-development --plugin tabular-editor
```

This skill uses the workspace's default tool permissions.
Expert guidance for creating and improving BPA (Best Practice Analyzer) rules for Tabular Editor and Power BI semantic models.
Activate automatically when tasks involve:
CRITICAL: Do NOT generate BPA rules immediately. This is a requirements-gathering exercise. Use the AskUserQuestion tool to conduct an iterative, back-and-forth conversation with the user across multiple rounds. Continue asking questions until sufficient context about the user's business, team, model, and priorities has been gathered. Only then move to rule generation.
The workflow follows a double-diamond pattern:
Call AskUserQuestion with 2-4 questions per round. After each round, review the answers and ask follow-up questions. Do not proceed to Phase 2 until the organizational context is clear. Continue rounds until satisfied.
Round 1 -- Goal and audience:
Ask about the primary goal and who will use the rules. Example AskUserQuestion call:
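A sketch of the shape such a call might take; the question text, headers, and options here are illustrative, so match the tool's actual input schema in your environment:

```json
{
  "questions": [
    {
      "question": "What is the primary goal for these BPA rules?",
      "header": "Goal",
      "multiSelect": false,
      "options": [
        { "label": "Performance", "description": "Catch slow DAX and model design issues" },
        { "label": "Governance", "description": "Enforce naming and documentation standards" },
        { "label": "Both", "description": "Balanced rule set across categories" }
      ]
    },
    {
      "question": "Who will run these rules?",
      "header": "Audience",
      "multiSelect": true,
      "options": [
        { "label": "Developers", "description": "Interactive use in Tabular Editor" },
        { "label": "CI/CD pipeline", "description": "Automated checks on deployment" }
      ]
    }
  ]
}
```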
Round 2 -- Standards and existing rules:
Based on Round 1 answers, ask about conventions and existing rules. Example:
Round 3+ -- Follow-ups as needed:
If the user has existing rules, ask for the file path or URL and read them. If they have naming conventions, ask for specifics. If they mentioned CI/CD, ask about the pipeline setup. Keep calling AskUserQuestion until the organizational picture is clear.
After Phase 1, use AskUserQuestion to determine how to access the model:
Then investigate the model based on the answer:
| Answer | Action |
|---|---|
| Published to Fabric | Use AskUserQuestion to get workspace and model name. Then use fab CLI to inspect remotely -- load the fabric-cli skill and read references/model-investigation.md for specific commands. |
| Local as PBIP | Use AskUserQuestion to get the path to the .SemanticModel/definition/ folder. Then read TMDL files directly with Read/Grep tools. |
| Local as .pbix only | Guide the user to save as PBIP: File > Save as > Power BI Project (*.pbip) in Power BI Desktop. Then ask for the resulting folder path. See references/model-investigation.md for detailed steps. |
| model.bim file | Use AskUserQuestion to get the file path. Then parse with jq or read directly. |
| No model yet | Skip model investigation; generate general-purpose rules based on organizational context only. |
What to extract from the model (read files, grep patterns, count objects):
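One way to gather those counts, assuming a PBIP layout with TMDL files, is a short tally script. This is a sketch: declarations are detected by leading keyword only, and the keyword list covers only a subset of TMDL object types.

```python
import re
from collections import Counter
from pathlib import Path

def tally_tmdl_objects(definition_dir: str) -> Counter:
    """Count TMDL object declarations (table/measure/column/...) under a folder."""
    counts = Counter()
    # Match lines that begin (after indentation) with a known declaration keyword
    decl = re.compile(r"^\s*(table|measure|column|partition|relationship|hierarchy|role)\b")
    for tmdl in Path(definition_dir).rglob("*.tmdl"):
        for line in tmdl.read_text(encoding="utf-8").splitlines():
            m = decl.match(line)
            if m:
                counts[m.group(1)] += 1
    return counts
```

Running this against the `.SemanticModel/definition/` folder gives a rough object inventory to summarize back to the user.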
After model investigation, summarize findings to the user and use AskUserQuestion to confirm the analysis is accurate and ask if anything was missed.
Based on everything gathered, present a prioritized recommendation of rule categories. Use AskUserQuestion to let the user confirm or adjust before generating any rules.
Present categories ranked by relevance to the user's context. For example:
If the model has many measures without descriptions and the user cares about governance:
Use AskUserQuestion to ask:
Do not generate rules until the user confirms the priorities.
Only after Phases 1-3, generate tailored BPA rules. For each rule:
After generating rules, use AskUserQuestion to ask:
Validate the generated file with `scripts/validate_rules.py`. Iterate on the rule set until the user is satisfied. Continue calling AskUserQuestion for refinements.
Skip the full Q&A workflow only when:
Even in these cases, ask clarifying questions with AskUserQuestion if the request is ambiguous.
BPA rule files must follow specific formatting requirements for Tabular Editor to load them correctly. Files that don't follow these rules may show empty rule collections or fail to load entirely.
Tabular Editor on Windows requires Windows line endings (CRLF, \r\n). Files with Unix line endings (LF only) will fail to load or show empty rule collections.
To convert a file to CRLF:

```
# Linux (GNU sed)
sed -i 's/$/\r/' rules.json

# macOS or any platform: use the validation script
python scripts/validate_rules.py --fix rules.json
```

Note that BSD sed on macOS supports neither bare `-i` nor the `\r` escape in replacements, so the Python script is the portable option.
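Where neither GNU sed nor the bundled script is handy, the normalization itself is small enough to sketch in Python; unlike the sed one-liner, this version is idempotent because existing CRLF endings are collapsed first:

```python
from pathlib import Path

def to_crlf(path: str) -> None:
    """Rewrite a file with Windows (CRLF) line endings."""
    data = Path(path).read_bytes()
    # Collapse any existing CRLF to LF first so \r is never doubled
    data = data.replace(b"\r\n", b"\n").replace(b"\n", b"\r\n")
    Path(path).write_bytes(data)
```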
When adding rule files in Tabular Editor:
- Prefer absolute paths (e.g., `C:\BPARules\my-rules.json`).
- Avoid relative paths such as `..\..\..` - TE may fail to resolve these.
- URLs are supported (e.g., `https://raw.githubusercontent.com/...`).

No extra properties: TE's JSON parser is strict. Only use allowed fields:

- `ID`, `Name`, `Category`, `Description`, `Severity`, `Scope`, `Expression`
- `FixExpression`, `CompatibilityLevel`, `Source`, `Remarks`

Avoid these patterns:
```
// BAD: _comment fields not allowed
{ "_comment": "Section header", "ID": "RULE1", ... }

// BAD: Runtime fields (TE adds these, don't include them)
{ "ID": "RULE1", "ObjectCount": 0, "ErrorMessage": null, ... }

// GOOD: FixExpression can be null or omitted
{ "ID": "RULE1", "FixExpression": null, ... }
{ "ID": "RULE1", "Name": "...", "Severity": 2, "Scope": "Measure", "Expression": "..." }
```
Note: FixExpression: null is valid. ErrorMessage and ObjectCount are runtime fields that TE adds - do not include them in rule definitions.
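The allowed-fields constraint can be enforced mechanically before saving. A sketch, with the field list taken from above (the `clean_rule` helper is illustrative, not part of the bundled scripts):

```python
ALLOWED_FIELDS = {
    "ID", "Name", "Category", "Description", "Severity",
    "Scope", "Expression", "FixExpression",
    "CompatibilityLevel", "Source", "Remarks",
}

def clean_rule(rule: dict) -> dict:
    """Drop comment/runtime fields that TE's strict parser rejects."""
    return {k: v for k, v in rule.items() if k in ALLOWED_FIELDS}
```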
When using RegEx.IsMatch() in expressions:
No @ prefix: Do not use C# verbatim string prefix
```
// BAD: @ prefix not supported
RegEx.IsMatch(Expression, @"FILTER\s*\(\s*ALL")

// GOOD: Standard escaping
RegEx.IsMatch(Expression, "FILTER\\s*\\(\\s*ALL")
```
No RegexOptions parameter: TE doesn't support the options parameter
```
// BAD: RegexOptions not supported
RegEx.IsMatch(Name, "^DATE$", RegexOptions.IgnoreCase)

// GOOD: Use inline flag or pattern only
RegEx.IsMatch(Name, "(?i)^DATE$")
RegEx.IsMatch(Name, "^(DATE|date|Date)$")
```
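Python's re module accepts the same inline `(?i)` flag, so it can be used to sanity-check a pattern's case-insensitive behavior before pasting it into a rule. This is a convenience check only; it does not guarantee TE's .NET regex engine agrees on every construct.

```python
import re

# The inline flag makes the whole pattern case-insensitive,
# standing in for the RegexOptions parameter TE does not support.
pattern = "(?i)^DATE$"
assert re.search(pattern, "Date")
assert re.search(pattern, "date")
assert not re.search(pattern, "OrderDate")  # still anchored
```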
Use the exact scope names from the TOM enum. Common mistakes:
| Wrong | Correct |
|---|---|
| Role | ModelRole |
| Member | ModelRoleMember |
| Expression | NamedExpression |
| DataSource | ProviderDataSource or StructuredDataSource |
Note: Column is valid as a backwards-compatible alias for DataColumn, CalculatedColumn, CalculatedTableColumn.
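A pre-flight typo check can be sketched as a set lookup. The set below is assembled from the corrections above plus common scopes; it is not the complete TOM enum, so treat misses as "verify against the reference", not hard errors:

```python
KNOWN_SCOPES = {
    "Model", "Table", "Measure", "Hierarchy", "Level",
    "Relationship", "Partition", "Perspective", "KPI",
    "CalculationGroup", "CalculationItem",
    "DataColumn", "CalculatedColumn", "CalculatedTableColumn",
    "Column",  # backwards-compatible alias
    "ModelRole", "ModelRoleMember", "NamedExpression",
    "ProviderDataSource", "StructuredDataSource",
}

def check_scope(scope: str) -> list[str]:
    """Return any comma-separated scope parts not in the known set."""
    return [s.strip() for s in scope.split(",") if s.strip() not in KNOWN_SCOPES]
```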
Use the validation script to check and fix TE compatibility issues:
```
# Check for issues
python scripts/validate_rules.py rules.json

# Auto-fix issues (CRLF, remove nulls, remove _comment)
python scripts/validate_rules.py --fix rules.json
```
The script checks:
- CRLF line endings
- `_comment` fields
- `null` values for optional fields

BPA rules can exist in multiple locations (evaluated in order of priority):
| Location | Path / Source | Description |
|---|---|---|
| Built-in Best Practices | Internal to TE3 | Default rules bundled with Tabular Editor 3 |
| URL | Any valid URL (e.g., https://raw.githubusercontent.com/TabularEditor/BestPracticeRules/master/BPARules-standard.json) | Remote rule collections loaded from web |
| Rules within current model | See below | Rules embedded in model metadata |
| Rules for local user | %LocalAppData%\TabularEditor3\BPARules.json | User-specific rules on Windows |
| Rules on local machine | %ProgramData%\TabularEditor3\BPARules.json | Machine-wide rules for all users |
For built-in rule IDs (27 rules in TE3), model-embedded rule formats, cross-platform file access, and all file location details, see references/te-compatibility.md.
For rule JSON structure, valid scope values, severity levels, compatibility levels, and category prefixes, see references/quick-reference.md.
For expression syntax (Dynamic LINQ, TOM properties, string/boolean/collection checks, Tokenize(), DependsOn, ReferencedBy), see references/expression-syntax.md.
BPA rules can be embedded in TMDL files via annotations:
```
annotation BestPracticeAnalyzer = [{ "ID": "...", ... }]
annotation BestPracticeAnalyzer_IgnoreRules = {"RuleIDs":["RULE1","RULE2"]}
annotation BestPracticeAnalyzer_ExternalRuleFiles = ["https://..."]
```
For complete annotation patterns, see references/tmdl-annotations.md.
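If tooling needs to read embedded rules back out of TMDL, the one-line annotation form can be extracted with a regex and parsed as JSON. This sketch handles only that single-line layout, not multi-line annotation values:

```python
import json
import re

def extract_bpa_rules(tmdl_text: str) -> list:
    """Pull rules from a one-line BestPracticeAnalyzer annotation, if present."""
    m = re.search(r"annotation BestPracticeAnalyzer = (\[.*\])", tmdl_text)
    return json.loads(m.group(1)) if m else []
```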
Follow the Primary Workflow: Interactive Q&A Discovery (above) for the best results. Use AskUserQuestion iteratively to gather context, investigate the model, then generate targeted rules.
When the user requests a specific rule without needing full discovery:
For detailed syntax and patterns, consult:
- `references/model-investigation.md` - Investigating models via Fabric CLI or local .bim/.tmdl files; guiding users to save as PBIP; model analysis checklist
- `references/te-compatibility.md` - Tabular Editor compatibility (CRLF, file paths, JSON format, regex, scope names, validation, built-in rules, file locations, cross-platform access)
- `references/quick-reference.md` - Rule JSON structure, valid scopes, severity levels, compatibility levels, category prefixes, expression syntax overview
- `schema/bparules-schema.json` - JSON Schema for validating BPA rule files (Draft-07) (temporary location)
- `references/rule-schema.md` - Human-readable BPA rule field descriptions
- `references/expression-syntax.md` - Dynamic LINQ expression syntax, TOM properties, Tokenize(), DependsOn, ReferencedBy
- `references/tmdl-annotations.md` - BPA annotations in TMDL format

Working examples in `examples/`:
- `examples/comprehensive-rules.json` - 30+ production-ready rules across all categories
- `examples/model-with-bpa-annotations.tmdl` - TMDL file showing all annotation patterns

Utility scripts:
- `scripts/bpa_rules_audit.py` - Comprehensive BPA rules audit across all sources (built-in, URL, model, user, machine). Supports Windows, WSL, and macOS with Parallels. Outputs ASCII report and JSON export.
- `scripts/validate_rules.py` - Validate BPA rule JSON files for schema compliance

Audit script usage:
```
# Basic audit
python scripts/bpa_rules_audit.py /path/to/model

# Export to JSON
python scripts/bpa_rules_audit.py /path/to/model --json output.json

# Quiet mode (summary only)
python scripts/bpa_rules_audit.py /path/to/model --quiet
```
- `/suggest-rule` - Generate BPA rules from descriptions
- `bpa-expression-helper` - Debug and improve BPA expressions

To retrieve current BPA and TOM reference docs, use `microsoft_docs_search` + `microsoft_docs_fetch` (MCP) if available; otherwise `mslearn search` + `mslearn fetch` (CLI). Search based on the user's request and run multiple searches as needed to ensure sufficient context before proceeding.
```json
{
  "ID": "META_MEASURE_NO_DESCRIPTION",
  "Name": "Measure has no description",
  "Category": "Metadata",
  "Description": "All measures should have descriptions for documentation.",
  "Severity": 2,
  "Scope": "Measure",
  "Expression": "string.IsNullOrWhitespace(Description)"
}
```
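Read as a predicate, this rule's Expression flags any measure whose Description is null or whitespace. In Python terms (an analogy only - TE evaluates Dynamic LINQ against TOM objects, not dicts):

```python
def violates(measure: dict) -> bool:
    # Mirrors string.IsNullOrWhitespace(Description)
    desc = measure.get("Description")
    return desc is None or desc.strip() == ""
```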
```json
{
  "ID": "PERF_UNUSED_HIDDEN_COLUMN",
  "Name": "Remove hidden columns not used",
  "Category": "Performance",
  "Description": "Hidden columns with no references waste memory.",
  "Severity": 3,
  "Scope": "Column",
  "Expression": "IsHidden and ReferencedBy.Count = 0 and not UsedInRelationships.Any()",
  "FixExpression": "Delete()"
}
```