fabric-consumption

Manages Microsoft Fabric workspaces/lakehouses, develops PySpark notebooks/data pipelines/orchestration, and provisions infrastructure as code for Spark workflows.

Install with:

```shell
npx claudepluginhub microsoft/skills-for-fabric --plugin fabric-consumption
```

This skill uses the workspace's default tool permissions.
> **Update Check — ONCE PER SESSION (mandatory)**
>
> The first time this skill is used in a session, run the check-updates skill before proceeding.
>
> - GitHub Copilot CLI / VS Code: invoke the `check-updates` skill.
> - Claude Code / Cowork / Cursor / Windsurf / Codex: compare the local `package.json` version against the remote one.
> - Skip if the check was already performed earlier in this session.
CRITICAL NOTES
- To find workspace details (including its ID) from a workspace name: list all workspaces, then filter with JMESPath.
- To find item details (including its ID) from a workspace ID, item type, and item name: list all items of that type in the workspace, then filter with JMESPath.
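As an illustration, the az CLI JMESPath filter `value[?displayName=='DataEng-Dev'].id | [0]` over a list-workspaces response is equivalent to this plain-Python lookup (the response is abridged and the names are made up for the example):

```python
# Hypothetical (abridged) response from GET /v1/workspaces
response = {"value": [
    {"id": "aaa-111", "displayName": "Sales-Prod"},
    {"id": "bbb-222", "displayName": "DataEng-Dev"},
]}

# Same selection as --query "value[?displayName=='DataEng-Dev'].id | [0]"
workspace_id = next(
    (w["id"] for w in response["value"] if w["displayName"] == "DataEng-Dev"),
    None,  # no match -> None, so always check before using the ID
)
print(workspace_id)  # bbb-222
```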
| Task | Reference | Notes |
|---|---|---|
| RULES — Read these first, follow them always | SKILL.md § RULES | MUST read — 3 rules for this skill |
| Finding Workspaces and Items in Fabric | COMMON-CLI.md § Finding Workspaces and Items in Fabric | Mandatory — READ link first [needed for finding workspace id by its name or item id by its name, item type, and workspace id] |
| Fabric Topology & Key Concepts | COMMON-CORE.md § Fabric Topology & Key Concepts | |
| Environment URLs | COMMON-CORE.md § Environment URLs | |
| Authentication & Token Acquisition | COMMON-CORE.md § Authentication & Token Acquisition | Wrong audience = 401; read before any auth issue |
| Core Control-Plane REST APIs | COMMON-CORE.md § Core Control-Plane REST APIs | |
| Pagination | COMMON-CORE.md § Pagination | |
| Long-Running Operations (LRO) | COMMON-CORE.md § Long-Running Operations (LRO) | |
| Rate Limiting & Throttling | COMMON-CORE.md § Rate Limiting & Throttling | |
| OneLake Data Access | COMMON-CORE.md § OneLake Data Access | Requires storage.azure.com token, not Fabric token |
| Definition Envelope | ITEM-DEFINITIONS-CORE.md § Definition Envelope | Definition payload structure |
| Per-Item-Type Definitions | ITEM-DEFINITIONS-CORE.md § Per-Item-Type Definitions | Support matrix, decoded content, part paths — REST specs, CLI recipes |
| Job Execution | COMMON-CORE.md § Job Execution | |
| Capacity Management | COMMON-CORE.md § Capacity Management | |
| Gotchas & Troubleshooting | COMMON-CORE.md § Gotchas & Troubleshooting | |
| Best Practices | COMMON-CORE.md § Best Practices | |
| Tool Selection Rationale | COMMON-CLI.md § Tool Selection Rationale | |
| Authentication Recipes | COMMON-CLI.md § Authentication Recipes | az login flows and token acquisition |
| Fabric Control-Plane API via az rest | COMMON-CLI.md § Fabric Control-Plane API via az rest | Always pass --resource https://api.fabric.microsoft.com or az rest fails |
| Pagination Pattern | COMMON-CLI.md § Pagination Pattern | |
| Long-Running Operations (LRO) Pattern | COMMON-CLI.md § Long-Running Operations (LRO) Pattern | |
| OneLake Data Access via curl | COMMON-CLI.md § OneLake Data Access via curl | Use curl not az rest (different token audience) |
| SQL / TDS Data-Plane Access | COMMON-CLI.md § SQL / TDS Data-Plane Access | |
| Job Execution (CLI) | COMMON-CLI.md § Job Execution | |
| Job Scheduling | COMMON-CLI.md § Job Scheduling | URL is /jobs/{jobType}/schedules; endDateTime required |
| OneLake Shortcuts | COMMON-CLI.md § OneLake Shortcuts | |
| Capacity Management (CLI) | COMMON-CLI.md § Capacity Management | |
| Composite Recipes | COMMON-CLI.md § Composite Recipes | |
| Gotchas & Troubleshooting (CLI-Specific) | COMMON-CLI.md § Gotchas & Troubleshooting (CLI-Specific) | az rest audience, shell escaping, token expiry |
| Quick Reference: az rest Template | COMMON-CLI.md § Quick Reference: az rest Template | |
| Quick Reference: Token Audience ↔ CLI Tool Matrix | COMMON-CLI.md § Quick Reference: Token Audience ↔ CLI Tool Matrix | Which --resource + tool for each service |
| Relationship to SPARK-CONSUMPTION-CORE.md | SPARK-AUTHORING-CORE.md § Relationship to SPARK-CONSUMPTION-CORE.md | |
| Data Engineering Authoring Capability Matrix | SPARK-AUTHORING-CORE.md § Data Engineering Authoring Capability Matrix | |
| Lakehouse Management | SPARK-AUTHORING-CORE.md § Lakehouse Management | |
| Notebook Management | SPARK-AUTHORING-CORE.md § Notebook Management | |
| Notebook Execution & Job Management | SPARK-AUTHORING-CORE.md § Notebook Execution & Job Management | |
| CI/CD & Automation Patterns | SPARK-AUTHORING-CORE.md § CI/CD & Automation Patterns | |
| Infrastructure-as-Code | SPARK-AUTHORING-CORE.md § Infrastructure-as-Code | |
| Performance Optimization & Resource Management | SPARK-AUTHORING-CORE.md § Performance Optimization & Resource Management | |
| Authoring Gotchas and Troubleshooting | SPARK-AUTHORING-CORE.md § Authoring Gotchas and Troubleshooting | |
| Quick Reference: Authoring Decision Guide | SPARK-AUTHORING-CORE.md § Quick Reference: Authoring Decision Guide | |
| Recommended Patterns (Data Engineering) | data-engineering-patterns.md § Recommended patterns | |
| Data Ingestion Principles | data-engineering-patterns.md § Data Ingestion Principles | |
| Transformation Patterns | data-engineering-patterns.md § Transformation Patterns | |
| Delta Lake Best Practices | data-engineering-patterns.md § Delta Lake Best Practices | |
| Quality Assurance Strategies | data-engineering-patterns.md § Quality Assurance Strategies | |
| Recommended Patterns (Development Workflow) | development-workflow.md § Recommended patterns | |
| Notebook Lifecycle | development-workflow.md § Notebook Lifecycle | |
| Parameterization Patterns | development-workflow.md § Parameterization Patterns | |
| Variable Library (notebook + pipeline usage) | development-workflow.md § Method 4: Variable Library | getLibrary() + dot notation in notebooks; libraryVariables + @pipeline().libraryVariables in pipelines |
| Variable Library Definition | ITEM-DEFINITIONS-CORE.md § VariableLibrary | Definition parts, decoded content, types, pipeline mappings, gotchas |
| Local Testing Strategy | development-workflow.md § Local Testing Strategy | |
| Debugging Patterns | development-workflow.md § Debugging Patterns | |
| Recommended Patterns (Infrastructure) | infrastructure-orchestration.md § Recommended patterns | |
| Workspace Provisioning Principles | infrastructure-orchestration.md § Workspace Provisioning Principles | |
| Lakehouse Configuration Guidance | infrastructure-orchestration.md § Lakehouse Configuration Guidance | |
| Pipeline Design Patterns | infrastructure-orchestration.md § Pipeline Design Patterns | |
| CI/CD Integration Strategy | infrastructure-orchestration.md § CI/CD Integration Strategy | |
| Notebook API — Which Endpoint to Use | notebook-api-operations.md § Quick Decision | Start here for remote notebook edits — getDefinition vs updateDefinition |
| Notebook Modification Workflow | notebook-api-operations.md § Workflow | Five-step flow: retrieve, decode, modify, encode, upload |
| Notebook API Error Reference | notebook-api-operations.md § Error Reference | 411, 400 (updateMetadata), 401, 403 explained |
| Notebook API Gotchas | notebook-api-operations.md § Gotchas | /result suffix, empty body, \n per-line rule, format=ipynb |
| Default Lakehouse Binding | notebook-api-operations.md § Default Lakehouse Binding | .ipynb metadata vs .py # METADATA block; discover IDs dynamically |
| Public URL Data Ingestion | notebook-api-operations.md § Public URL Data Ingestion | Use real source URL, stage into Files/, then read with Spark |
| getDefinition (read notebook content) | notebook-api-operations.md § Step 1 — Retrieve Notebook Content | LRO flow, ?format=ipynb, empty body (--body '{}') requirement |
| Decode Base64 Notebook Payload | notebook-api-operations.md § Step 2 — Decode the Notebook Content | Extract payload, base64 decode, ipynb JSON structure |
| Modify Notebook Cells | notebook-api-operations.md § Step 3 — Modify the Notebook Content | Find cell, insert/replace lines, \n per-line rule |
| updateDefinition (write notebook content) | notebook-api-operations.md § Step 4 — Re-encode and Upload | Re-encode, upload, LRO poll, updateMetadata flag pitfall |
| Verify Notebook Update (Optional) | notebook-api-operations.md § Step 5 — Verify the Update | Skip unless you suspect a silent failure — Succeeded from updateDefinition is sufficient (see Rule 2) |
| Notebook API End-to-End Script | notebook-api-operations.md § Complete End-to-End Script | Full bash: get → decode → modify → encode → update → verify |
| Quick Start Examples | SKILL.md § Quick Start Examples | Minimal examples for common operations |
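The getDefinition → decode → modify → encode → updateDefinition flow referenced above can be sketched with a toy payload. A real response wraps this in the definition envelope described in ITEM-DEFINITIONS-CORE.md; only the base64 round-trip and the `\n` per-line rule are shown here:

```python
import base64
import json

# Toy ipynb content standing in for an InlineBase64 definition part
notebook = {"cells": [{"cell_type": "code", "source": ["print('old')\n"]}]}
encoded = base64.b64encode(json.dumps(notebook).encode()).decode()

# Step 2 — decode the payload returned by getDefinition
decoded = json.loads(base64.b64decode(encoded))

# Step 3 — modify a cell; every line MUST end with "\n" or lines merge
decoded["cells"][0]["source"] = ["print('hello')\n", "print('world')\n"]

# Step 4 — re-encode for the updateDefinition request body
payload = base64.b64encode(json.dumps(decoded).encode()).decode()
roundtrip = json.loads(base64.b64decode(payload))
print(roundtrip["cells"][0]["source"])
```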
Additional critical notes:

- End every notebook source line with `\n` to prevent code merging.
- Livy session bodies must be flat JSON with `name`, `driverMemory`, `driverCores`, `executorMemory`, and `executorCores`. Do NOT wrap the body in `{"payload": ...}` or send only `{"kind": "pyspark"}` — that causes HTTP 500. Use valid memory values (28g, 56g, 112g, 224g). See the Create Livy Session example below and SPARK-CONSUMPTION-CORE.md.
- Pool selection: set `useStarterPool: true` or `useWorkspacePool: true`.
- When creating a lakehouse, set `creationPayload.enableSchemas: true` for better table organization.

RULES

Rule 1 — Validate prerequisites before operations. Verify the workspace has a capacity assigned (see COMMON-CORE.md Create Workspace and Capacity Management) and that resource IDs exist before attempting operations.
Rule 2 — Trust updateDefinition success. A `Succeeded` poll result from `updateDefinition` is sufficient confirmation that content and lakehouse bindings persisted. Do NOT call `getDefinition` after every upload — it is an async LRO that adds significant latency. Only use `getDefinition` for its intended purpose: reading current notebook content before making modifications.

Rule 3 — Prevent duplicate jobs and monitor execution properly. Before submitting a new notebook run, ALWAYS check for recent job instances first (last 5 minutes). If a recent job exists, monitor it instead of creating a duplicate. After submission, capture the job instance ID immediately and poll its status — never retry the POST. See SPARK-AUTHORING-CORE.md Job Monitoring for patterns.
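Rule 3's "check the last 5 minutes first" step can be sketched as a small helper. The field name `startTimeUtc` and the status values are assumptions about the job-instances response shape — verify them against SPARK-AUTHORING-CORE.md Job Monitoring:

```python
from datetime import datetime, timedelta, timezone

def find_recent_job(instances, window_minutes=5, now=None):
    """Return a job instance started within the window, else None.
    If this returns a job, monitor it instead of submitting a new run."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(minutes=window_minutes)
    for job in instances:
        started = datetime.fromisoformat(job["startTimeUtc"])
        if started >= cutoff and job["status"] in ("NotStarted", "InProgress"):
            return job
    return None

# Hypothetical instances, as if parsed from the job-instances response
now = datetime(2024, 1, 1, 12, 0, tzinfo=timezone.utc)
instances = [{"startTimeUtc": "2024-01-01T11:58:00+00:00",
              "status": "InProgress"}]
print(find_recent_job(instances, now=now) is not None)  # True
```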
For detailed patterns, authentication, and comprehensive API usage, see:
- COMMON-CLI.md — `az rest` usage, environment detection, token acquisition

Below are minimal quick-start examples. Always reference the `COMMON-*` files for production use.
Create a workspace and a schema-enabled lakehouse:

```shell
# See COMMON-CORE.md Environment URLs and SPARK-AUTHORING-CORE.md for full patterns
cat > /tmp/body.json << 'EOF'
{"displayName": "DataEng-Dev"}
EOF
workspace_id=$(az rest --method post --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces" \
  --body @/tmp/body.json --query "id" --output tsv)

cat > /tmp/body.json << 'EOF'
{"displayName": "DevLakehouse", "type": "Lakehouse", "creationPayload": {"enableSchemas": true}}
EOF
lakehouse_id=$(az rest --method post --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/items" \
  --body @/tmp/body.json --query "id" --output tsv)
```
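List endpoints such as the items call above page their results; per COMMON-CORE.md § Pagination, the response carries a `continuationToken` while more pages remain. A minimal Python sketch of the drain loop, with the fetch function stubbed for illustration:

```python
def list_all(fetch_page):
    """Collect every item from a continuationToken-paged list endpoint.
    fetch_page(token) must return a dict shaped like
    {"value": [...], "continuationToken": "..."} (token absent on last page)."""
    items, token = [], None
    while True:
        page = fetch_page(token)
        items.extend(page.get("value", []))
        token = page.get("continuationToken")
        if not token:
            return items

# Stubbed two-page response standing in for real API calls
pages = {
    None: {"value": ["lakehouse-a", "notebook-b"], "continuationToken": "t1"},
    "t1": {"value": ["pipeline-c"]},
}
print(list_all(lambda token: pages[token]))
```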
```python
# See SPARK-AUTHORING-CORE.md Lakehouse Schema Organization for table organization patterns
# Create schemas for medallion architecture
spark.sql("CREATE SCHEMA IF NOT EXISTS bronze")
spark.sql("CREATE SCHEMA IF NOT EXISTS silver")
spark.sql("CREATE SCHEMA IF NOT EXISTS gold")
```
```shell
# See SPARK-CONSUMPTION-CORE.md for Livy session configuration and management
# IMPORTANT: Body MUST be flat JSON with memory/cores — do NOT wrap in {"payload": ...}
cat > /tmp/body.json << 'EOF'
{"name": "dev-session", "driverMemory": "56g", "driverCores": 8, "executorMemory": "56g", "executorCores": 8, "conf": {"spark.dynamicAllocation.enabled": "true", "spark.fabric.pool.name": "Starter Pool"}}
EOF
az rest --method post --resource "https://api.fabric.microsoft.com" \
  --url "https://api.fabric.microsoft.com/v1/workspaces/$workspace_id/lakehouses/$lakehouse_id/livyapi/versions/2023-12-01/sessions" \
  --body @/tmp/body.json
```
Livy Session Body — Common Mistakes
- ❌ `{"payload": {"kind": "pyspark"}}` → HTTP 500 (wrong wrapper, missing required fields)
- ❌ `{"kind": "pyspark"}` → HTTP 500 (missing `driverMemory`, `executorMemory`, etc.)
- ✅ Flat JSON with `name`, `driverMemory`, `driverCores`, `executorMemory`, `executorCores` (and optionally `conf` with Starter Pool)
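These checks are mechanical enough to encode as a pre-flight validator — a sketch, not an official schema; the required-field and memory-size lists come from the rules in this document:

```python
REQUIRED = {"name", "driverMemory", "driverCores", "executorMemory", "executorCores"}
VALID_MEMORY = {"28g", "56g", "112g", "224g"}

def check_livy_body(body):
    """Return a list of problems with a Livy session request body."""
    problems = []
    if "payload" in body:
        problems.append('wrapped in {"payload": ...} — body must be flat JSON')
    missing = REQUIRED - body.keys()
    if missing:
        problems.append(f"missing required fields: {sorted(missing)}")
    for key in ("driverMemory", "executorMemory"):
        if key in body and body[key] not in VALID_MEMORY:
            problems.append(f"{key}={body[key]!r} is not a valid memory value")
    return problems

flat = {"name": "dev-session", "driverMemory": "56g", "driverCores": 8,
        "executorMemory": "56g", "executorCores": 8}
print(check_livy_body(flat))                  # []
print(check_livy_body({"kind": "pyspark"}))   # flags the missing fields
```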
For detailed workload-specific configurations, see data-engineering-patterns.md Delta Lake Best Practices.
Quick reference:
```python
# Write-heavy (Bronze): disable V-Order, enable autoCompact
# Balanced (Silver): enable V-Order, adaptive execution
# Read-heavy (Gold): vectorized reads, optimal parallelism
# See data-engineering-patterns.md for complete config tables
```
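As a sketch only — the property names below (`spark.sql.parquet.vorder.enabled`, `spark.databricks.delta.optimizeWrite.enabled`, `spark.sql.adaptive.enabled`) are common Fabric/Delta settings that can differ by runtime version; confirm the exact keys in data-engineering-patterns.md before relying on them:

```python
# Write-heavy (Bronze): skip V-Order at write time, rely on optimized writes
spark.conf.set("spark.sql.parquet.vorder.enabled", "false")
spark.conf.set("spark.databricks.delta.optimizeWrite.enabled", "true")

# Balanced (Silver): V-Order on, adaptive query execution on
spark.conf.set("spark.sql.parquet.vorder.enabled", "true")
spark.conf.set("spark.sql.adaptive.enabled", "true")
```

These are session-level settings; per-table behavior is usually steered with Delta table properties instead.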
Use a Variable Library to centralize lakehouse names, workspace IDs, and feature flags.
```python
# ✅ CORRECT — getLibrary() + dot notation
lib = notebookutils.variableLibrary.getLibrary("MyConfig")
lakehouse_name = lib.lakehouse_name
enable_logging = lib.enable_logging  # returns string "true"/"false"

# Boolean: compare as string (bool("false") is True in Python!)
if enable_logging.lower() == "true":
    print("Logging enabled")

# ❌ WRONG — .get() does not exist, causes runtime failure
# notebookutils.variableLibrary.get("MyConfig", "lakehouse_name")
```
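The string-boolean pitfall above is worth wrapping in a helper — a plain-Python sketch, independent of any Fabric API:

```python
def lib_bool(value):
    """Variable Library values arrive as strings, and bool("false") is True
    in Python — so compare the normalized text instead."""
    return str(value).strip().lower() == "true"

print(lib_bool("true"), lib_bool("False"), lib_bool("0"))  # True False False
```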
Focus: Essential CLI patterns for Spark/data engineering development with intelligent routing to specialized resources. For comprehensive patterns, always reference COMMON-* files and resource documents.