Microsoft Fabric workspace and data platform management — lakehouses, warehouses, semantic models, pipelines, notebooks. Use when working with Fabric workspaces, querying data, managing items, or deploying Fabric artifacts.
npx claudepluginhub francoisbgdw/claude-skills
Fabric has no dedicated CLI. Use the Fabric REST API via `az rest` or Python SDKs. The user authenticates via `az login`.
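Before calling the API, confirm the session can obtain a token for the Fabric audience; this is plain Azure CLI, assuming `az login` has already been run:
az account get-access-token --resource "https://api.fabric.microsoft.com" --query accessToken -o tsv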
az account show --query "{name:name, id:id}" -o tsv 2>/dev/null || echo "not logged in, run: az login"
For bulk or iterative operations (refreshing multiple models, deploying across workspaces, deleting items), apply self-regulation from the shared safety patterns.
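For example, a dry-run-by-default loop over a workspace's semantic models; the CONFIRM flag is a hypothetical convention, not part of any shared pattern:
# Bulk operation sketch: only echoes what it would do unless CONFIRM=yes
for id in $(az rest --method GET --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/items?type=SemanticModel" --resource "https://api.fabric.microsoft.com" --query "value[].id" -o tsv); do
  if [ "$CONFIRM" = "yes" ]; then
    echo "refreshing $id"  # replace the echo with the actual refresh call
  else
    echo "DRY RUN: would refresh $id"
  fi
done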
The Fabric REST API base URL is https://api.fabric.microsoft.com/v1. Because this is not an ARM endpoint, each az rest call must pass --resource "https://api.fabric.microsoft.com" so the CLI requests a token for the Fabric audience.
# List all workspaces
az rest --method GET --url "https://api.fabric.microsoft.com/v1/workspaces" --resource "https://api.fabric.microsoft.com"
# Get a specific workspace
az rest --method GET --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>" --resource "https://api.fabric.microsoft.com"
# List items in a workspace
az rest --method GET --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/items" --resource "https://api.fabric.microsoft.com"
# Filter items by type (Lakehouse, Warehouse, SemanticModel, Notebook, DataPipeline, etc.)
az rest --method GET --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/items?type=Lakehouse" --resource "https://api.fabric.microsoft.com"
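# Create an item via the generic Items endpoint ("MyLakehouse" is a placeholder name)
az rest --method POST --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/items" --resource "https://api.fabric.microsoft.com" --body '{"displayName": "MyLakehouse", "type": "Lakehouse"}'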
# List lakehouses in a workspace
az rest --method GET --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/lakehouses" --resource "https://api.fabric.microsoft.com"
# List tables in a lakehouse
az rest --method GET --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/lakehouses/<lakehouse-id>/tables" --resource "https://api.fabric.microsoft.com"
# Load a file into a table
az rest --method POST --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/lakehouses/<lakehouse-id>/tables/<table-name>/load" --resource "https://api.fabric.microsoft.com" --body '{"relativePath": "Files/data.csv", "pathType": "File", "mode": "Overwrite"}'
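# The load call is a long-running operation; its id is returned in the x-ms-operation-id
# response header (shown here as a placeholder), and its state can be polled:
az rest --method GET --url "https://api.fabric.microsoft.com/v1/operations/<operation-id>" --resource "https://api.fabric.microsoft.com"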
# List semantic models
az rest --method GET --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/semanticModels" --resource "https://api.fabric.microsoft.com"
# Trigger a refresh (semantic model refreshes go through the Power BI API; a Fabric workspace id
# is the Power BI group id and a semantic model id is the dataset id)
az rest --method POST --url "https://api.powerbi.com/v1.0/myorg/groups/<workspace-id>/datasets/<model-id>/refreshes" --resource "https://analysis.windows.net/powerbi/api"
# Get refresh history
az rest --method GET --url "https://api.powerbi.com/v1.0/myorg/groups/<workspace-id>/datasets/<model-id>/refreshes" --resource "https://analysis.windows.net/powerbi/api"
# List pipelines
az rest --method GET --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/items?type=DataPipeline" --resource "https://api.fabric.microsoft.com"
# Run a pipeline (via item job)
az rest --method POST --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/items/<pipeline-id>/jobs/instances?jobType=Pipeline" --resource "https://api.fabric.microsoft.com"
# List job instances for an item
az rest --method GET --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/items/<item-id>/jobs/instances" --resource "https://api.fabric.microsoft.com"
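# Get a single job instance to poll its status (the instance id is returned in the Location header of the run request)
az rest --method GET --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/items/<item-id>/jobs/instances/<job-instance-id>" --resource "https://api.fabric.microsoft.com"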
# List notebooks
az rest --method GET --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/items?type=Notebook" --resource "https://api.fabric.microsoft.com"
# Get notebook definition
az rest --method POST --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/items/<notebook-id>/getDefinition" --resource "https://api.fabric.microsoft.com"
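# Run a notebook on demand (jobType=RunNotebook)
az rest --method POST --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/items/<notebook-id>/jobs/instances?jobType=RunNotebook" --resource "https://api.fabric.microsoft.com"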
For a full API reference, see api-reference.md.
From Python, the simplest route is to call the REST API directly with azure-identity and requests:
pip install azure-identity requests
import requests
from azure.identity import DefaultAzureCredential
# Acquire a token for the Fabric API audience
token = DefaultAzureCredential().get_token("https://api.fabric.microsoft.com/.default").token
headers = {"Authorization": f"Bearer {token}"}
base = "https://api.fabric.microsoft.com/v1"
# List workspaces
for ws in requests.get(f"{base}/workspaces", headers=headers).json()["value"]:
    print(f"{ws['displayName']} ({ws['id']})")
# List items in a workspace
items = requests.get(f"{base}/workspaces/<workspace-id>/items", headers=headers).json()["value"]
for item in items:
    print(f"{item['type']}: {item['displayName']}")
Semantic Link (sempy) handles querying semantic models and running DAX. It is best used in Fabric notebooks, or locally after az login.
pip install semantic-link semantic-link-labs
import sempy.fabric as fabric
# List datasets in a workspace
datasets = fabric.list_datasets(workspace="<workspace-name>")
# Read a table from a semantic model
df = fabric.read_table(
    dataset="<dataset-name>",
    table="<table-name>",
    workspace="<workspace-name>"
)
# Run a DAX query
result = fabric.evaluate_dax(
    dataset="<dataset-name>",
    dax_string='EVALUATE SUMMARIZECOLUMNS(\'Date\'[Year], "Total Sales", [Total Sales])',
    workspace="<workspace-name>"
)
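Semantic Link also ships a small REST client that handles Fabric authentication from inside a notebook; a minimal sketch, reusing the sempy import above:
# Call the Fabric REST API through sempy's built-in client
client = fabric.FabricRestClient()
response = client.get("/v1/workspaces")
print(response.json()["value"])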
pip install fabric-cicd
The fabric-cicd tool is officially supported by Microsoft for deploying Fabric items across workspaces (dev -> test -> prod).
from fabric_cicd import FabricWorkspace, publish_all_items
# Point at the target workspace and the local repository folder that holds the item definitions
ws = FabricWorkspace(
    workspace_id="<workspace-id>",
    repository_directory="<local-path>",
    item_type_in_scope=["Notebook", "DataPipeline", "Lakehouse"]
)
# Deploy all in-scope items from the repository to the workspace
publish_all_items(ws)
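If the installed fabric-cicd version supports it, items that exist in the workspace but no longer in the repository can be cleaned up after publishing; this reuses the ws object defined above:
from fabric_cicd import unpublish_all_orphan_items
# Remove workspace items that are no longer present in the repository
unpublish_all_orphan_items(ws)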
For reusable Python helpers, see scripts/fabric_helpers.py.
For richer integration via GraphQL:
microsoft/fabric-samples (MCP server)