By microsoft
Manage end-to-end Microsoft Fabric workloads via CLI skills and agents: run SQL, DAX, KQL, and PySpark queries across warehouses, lakehouses, and eventhouses; author ETL pipelines, dataflows, alerts, and eventstreams; monitor performance; migrate from Databricks, Synapse, and HDInsight; and implement medallion lakehouse architecture.
npx claudepluginhub microsoft/skills-for-fabric --plugin skills-for-fabric

Manage Microsoft Fabric operational excellence across capacity planning, governance, security, cost optimization, and observability. Use when the request involves workspace administration, capacity monitoring, access control, compliance policies, cross-workload operational concerns, or workspace documentation and inventory. Delegates endpoint-specific implementation to specialized skills where available.
Orchestrate end-to-end Microsoft Fabric data engineering workflows that span multiple workloads and personas. Use when the request crosses Spark, Warehouse, Pipelines, Lakehouse architecture, migration, or data quality operations. Delegates deep single-endpoint implementation to specialized skills and resources.
Build full-stack applications on top of Microsoft Fabric using Python, ODBC, XMLA, and REST APIs. Use when the request involves building applications connected to Fabric data. Delegates endpoint-specific implementation to specialized skills.
Check for skills-for-fabric marketplace updates at session start. Compares local version against GitHub releases and shows changelog if updates are available. Use when the user wants to: (1) check for skill updates, (2) see what's new in skills-for-fabric, (3) verify current version. Triggers: "check for updates", "am I up to date", "what version", "update skills", "show changelog".
The ONLY supported path for read-only Microsoft Fabric Power BI semantic model (formerly "Power BI dataset") query interactions. Execute DAX queries via the MCP server ExecuteQuery tool to: (1) discover semantic model metadata (tables, columns, measures, relationships, hierarchies, etc.) and their properties, (2) retrieve data from a semantic model. Triggers: "DAX query", "semantic model metadata", "list semantic model tables", "run EVALUATE", "get measure expression".
Create, manage, and deploy Power BI semantic models inside Microsoft Fabric workspaces via `az rest` CLI against Fabric and Power BI REST APIs. Use when the user wants to: (1) create a semantic model from TMDL definition files, (2) retrieve or download semantic model definitions, (3) update a semantic model definition with modified TMDL, (4) trigger or manage dataset refresh operations, (5) configure data sources, parameters, or permissions, (6) deploy semantic models between pipeline stages. Covers Fabric Items API (CRUD) and Power BI Datasets API (refresh, data sources, permissions). For read-only DAX queries, use `powerbi-consumption-cli`. For fine-grained modeling changes, route to `powerbi-modeling-mcp`. Triggers: "create semantic model", "upload TMDL", "download semantic model TMDL", "refresh dataset", "semantic model deployment pipeline", "dataset permissions", "list dataset users", "semantic model authoring".
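As a minimal sketch of the `az rest` pattern this skill builds on (the workspace ID is a placeholder), listing the semantic models in a workspace via the Fabric Items API looks like:

# List semantic models in a workspace
az rest --method get \
  --url "https://api.fabric.microsoft.com/v1/workspaces/<workspace-id>/items?type=SemanticModel" \
  --resource "https://api.fabric.microsoft.com"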
Execute read-only T-SQL queries against Fabric Data Warehouse, Lakehouse SQL Endpoints, and Mirrored Databases via CLI. Default skill for any lakehouse data query (row counts, SELECT, filtering, aggregation) unless the user explicitly requests PySpark or Spark DataFrames. Use when the user wants to: (1) query warehouse/lakehouse data, (2) count rows or explore lakehouse tables, (3) discover schemas/columns, (4) generate T-SQL scripts, (5) monitor SQL performance, (6) export results to CSV/JSON. Triggers: "warehouse", "SQL query", "T-SQL", "query warehouse", "show warehouse tables", "show lakehouse tables", "query lakehouse", "lakehouse table", "how many rows", "count rows", "SQL endpoint", "describe warehouse schema", "generate T-SQL script", "warehouse performance", "export SQL data", "connect to warehouse", "lakehouse data", "explore lakehouse".
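For orientation, a read-only query with sqlcmd and Entra ID authentication (`-G`) might look like the sketch below; the endpoint host, warehouse name, and table are placeholders:

# Count rows in a lakehouse or warehouse table via its SQL endpoint
sqlcmd -S "<endpoint>.datawarehouse.fabric.microsoft.com" -d "<warehouse>" -G \
  -Q "SELECT COUNT(*) AS row_count FROM dbo.trips;"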
Execute authoring T-SQL (DDL, DML, data ingestion, transactions, schema changes) against Microsoft Fabric Data Warehouse and SQL endpoints from agentic CLI environments. Use when the user wants to: (1) create/alter/drop tables from terminal, (2) insert/update/delete/merge data via CLI, (3) run COPY INTO or OPENROWSET ingestion, (4) manage transactions or stored procedures, (5) perform schema evolution, (6) use time travel or snapshots, (7) generate ETL/ELT shell scripts, (8) create views/functions/procedures on Lakehouse SQLEP. Triggers: "create table in warehouse", "insert data via T-SQL", "load from ADLS", "COPY INTO", "run ETL with T-SQL", "alter warehouse table", "upsert with T-SQL", "merge into warehouse", "create T-SQL procedure", "warehouse time travel", "recover deleted warehouse data", "create warehouse schema", "deploy warehouse", "transaction conflict", "snapshot isolation error".
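A hedged authoring sketch, assuming placeholder server, storage, and table names (COPY INTO follows the documented Fabric Warehouse form):

# Create a table and ingest Parquet files from ADLS
sqlcmd -S "<endpoint>.datawarehouse.fabric.microsoft.com" -d "<warehouse>" -G -Q "
CREATE TABLE dbo.sales (id INT, amount DECIMAL(18,2), sold_at DATE);
COPY INTO dbo.sales
FROM 'https://<account>.dfs.core.windows.net/<container>/sales/*.parquet'
WITH (FILE_TYPE = 'PARQUET');
"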
Analyze lakehouse data interactively using Fabric Lakehouse Livy API sessions and PySpark/Spark SQL for advanced analytics, DataFrames, cross-lakehouse joins, Delta time-travel, and unstructured/JSON data. Use when the user explicitly asks for PySpark, Spark DataFrames, Livy sessions, or Python-based analysis — NOT for simple SQL queries. Triggers: "PySpark", "Spark SQL", "analyze with PySpark", "Spark DataFrame", "Livy session", "lakehouse with Python", "PySpark analysis", "PySpark data quality", "Delta time-travel with Spark".
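Session creation against the Lakehouse Livy endpoint, sketched with placeholder IDs (the API version segment is the one documented at time of writing; verify against current Fabric docs):

# Start a Livy session for interactive PySpark
az rest --method post \
  --url "https://api.fabric.microsoft.com/v1/workspaces/<ws-id>/lakehouses/<lakehouse-id>/livyapi/versions/2023-12-01/sessions" \
  --resource "https://api.fabric.microsoft.com" --body '{}'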
Develop Microsoft Fabric Spark/data engineering workflows and write code in Fabric Notebook cells with intelligent routing to specialized resources. Provides workspace/lakehouse management, notebook code authoring (PySpark, Scala, SparkR, SQL), and routes to: data engineering patterns, development workflow, or infrastructure orchestration. Use when the user wants to: (1) manage Fabric workspaces and resources, (2) write or debug code in notebook cells, (3) use notebookutils, (4) develop notebooks and PySpark applications, (5) design data pipelines, (6) provision infrastructure as code. Triggers: "develop notebook", "data engineering", "workspace setup", "pipeline design", "infrastructure provisioning", "Delta Lake patterns", "Spark development", "lakehouse configuration", "write notebook code", "notebookutils", "notebook cell", "PySpark notebook", "%%sql cell", "%%configure", "fabric notebook", "run notebook", "notebook deployment".
Run KQL queries against Fabric Eventhouse for real-time intelligence and time-series analytics using `az rest` against the Kusto REST API. Covers KQL operators (where, summarize, join, render), Eventhouse schema discovery (.show tables), time-series patterns with bin(), and ingestion monitoring. Use when the user wants to: (1) run read-only KQL queries against an Eventhouse or KQL Database, (2) discover Eventhouse table schema and metadata, (3) analyze real-time or time-series data with KQL operators, (4) monitor ingestion health and active KQL queries, (5) export KQL results to JSON. Triggers: "kql query", "kusto query", "eventhouse query", "kql database", "real-time intelligence", "time-series kql", "query eventhouse", "explore eventhouse", "show tables kql".
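A representative read-only call against the Kusto REST API (cluster URI, database, and MyTable are all placeholders; the token audience shown is an assumption, so confirm it for your environment):

# Run a KQL query with hourly binning
az rest --method post \
  --url "https://<cluster>.kusto.fabric.microsoft.com/v1/rest/query" \
  --resource "https://<cluster>.kusto.fabric.microsoft.com" \
  --body '{"db": "<kql-database>", "csl": "MyTable | summarize count() by bin(ingestion_time(), 1h)"}'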
Execute KQL management commands (table management, ingestion, policies, functions, materialized views) against Fabric Eventhouse and KQL Databases via CLI. Use when the user wants to: (1) create or alter KQL tables, columns, or functions, (2) ingest data into an Eventhouse (inline, from storage, streaming), (3) configure retention, caching, or partitioning policies, (4) create or manage materialized views and update policies, (5) manage data mappings for ingestion pipelines, (6) deploy KQL schema via scripts. Triggers: "create kql table", "kql ingestion", "ingest into eventhouse", "kql function", "materialized view", "kql retention policy", "eventhouse schema", "kql authoring", "create eventhouse table", "kql mapping".
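Management commands go to the /v1/rest/mgmt endpoint instead of /v1/rest/query; a sketch with placeholder names:

# Create a table via a KQL management command
az rest --method post \
  --url "https://<cluster>.kusto.fabric.microsoft.com/v1/rest/mgmt" \
  --resource "https://<cluster>.kusto.fabric.microsoft.com" \
  --body '{"db": "<kql-database>", "csl": ".create table Telemetry (Timestamp: datetime, DeviceId: string, Reading: real)"}'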
List, inspect, and monitor Microsoft Fabric Eventstream real-time event ingestion pipelines via the Fabric Items REST API. Discover Eventstreams across workspaces, decode base64-encoded graph topologies to trace event flow from source through operators to destination nodes. Validate source connection IDs, destination wiring, retention policies (1-90 days), and throughput levels. Use when the user wants to: (1) list or search Eventstreams in a workspace, (2) decode and trace graph topology from source to destination, (3) validate source and destination configurations, (4) check retention and throughput settings. Triggers: "list eventstreams", "show eventstream", "inspect eventstream", "explain eventstream", "eventstream health", "monitor eventstream", "describe eventstream", "check eventstream configuration", "eventstream retention".
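Topology inspection boils down to fetching the item definition and base64-decoding its payload; a sketch assuming the definition part is named eventstream.json (IDs are placeholders):

# Fetch and decode an Eventstream graph definition
az rest --method post \
  --url "https://api.fabric.microsoft.com/v1/workspaces/<ws-id>/items/<eventstream-id>/getDefinition" \
  --resource "https://api.fabric.microsoft.com" \
  --query "definition.parts[?path=='eventstream.json'].payload | [0]" -o tsv | base64 -d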
Create, wire, and publish Microsoft Fabric Eventstream real-time event streaming topologies via the Fabric Items REST API. Build graph-based definitions with 25 source types (Event Hubs, IoT Hub, CDC connectors, Kafka, SampleData), 8 transformation operators (Filter, Aggregate, GroupBy, Join, ManageFields, Union, Expand, SQL), 4 destination types (Lakehouse Delta, Eventhouse, Activator, Custom Endpoint), and DefaultStream/DerivedStream routing. Use when the user wants to: (1) author or publish an Eventstream topology, (2) add CDC sources with SQL-based Debezium payload flattening, (3) assemble multi-table fan-out routing, (4) modify or delete Eventstream definitions. Triggers: "create eventstream", "deploy eventstream", "design eventstream topology", "CDC source", "eventstream operator", "real-time ingestion pipeline", "eventstream definition", "update eventstream".
Inspect existing alerts, notifications, and automated actions in Fabric via read-only REST API calls using `az rest` CLI. Use when the user wants to: (1) list existing alerts in a workspace, (2) inspect how an alert or notification is configured, (3) read and decode an Activator/Reflex definition (ReflexEntities.json), (4) list rules, sources, and actions behind an alert, (5) understand why an alert fires or what action it takes. Triggers: "show my alerts", "what alerts do I have", "inspect this alert", "show me the rule", "show me the action", "show me the source", "get reflex definition", "list activators", "activator details"
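Discovery typically starts by listing Reflex items (the item type behind Activator); the workspace ID is a placeholder:

# List Activator (Reflex) items in a workspace
az rest --method get \
  --url "https://api.fabric.microsoft.com/v1/workspaces/<ws-id>/items?type=Reflex" \
  --resource "https://api.fabric.microsoft.com"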
Create alerts, notifications, and automated actions on Fabric data and events via Fabric REST API and `az rest` CLI. Use when the user wants to: (1) create, update, or delete an alert or notification flow, (2) send a Teams message, send an email, or run a Fabric item when something happens, (3) connect alert logic to Eventhouse, Eventstream, Real-time Hub, or Digital Twin Builder / Ontology data, (4) adjust thresholds, filters, event triggers, or actions, (5) troubleshoot or change an existing Activator/Reflex definition. Triggers: "create an alert", "notify me when", "let me know when", "take action when", "send me an email when", "send a teams message when", "run a pipeline when", "update an alert", "delete an alert", "activator rule"
Analyze Fabric Data Warehouse performance via CLI using sqlcmd and queryinsights views. Diagnose slow queries, SQL pool pressure, cache coldness, and recommend clustering keys. Triggers: "DW slow query analysis", "slowest queries warehouse", "queryinsights long running", "warehouse CPU resource consumers", "SQL pool pressure window", "pressure events warehouse", "DW cache warmth cold start", "cache warmth analysis", "warehouse cluster key recommendation", "cluster tables performance", "DW performance baseline comparison", "performance degraded warehouse", "warehouse user query patterns", "queryinsights diagnostics", "DW optimization sqlcmd".
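A starting-point diagnostic against the queryinsights views (endpoint and warehouse names are placeholders; verify column names against current documentation):

# Ten longest-running completed queries
sqlcmd -S "<endpoint>.datawarehouse.fabric.microsoft.com" -d "<warehouse>" -G -Q "
SELECT TOP 10 distributed_statement_id, total_elapsed_time_ms, command
FROM queryinsights.exec_requests_history
ORDER BY total_elapsed_time_ms DESC;
"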
Diagnose failed Spark jobs, unhealthy Livy sessions, and performance bottlenecks in Microsoft Fabric via read-only CLI triage. Use when the user wants to: (1) diagnose why a Spark job, notebook run, or Lakehouse job failed, (2) triage stuck or dead Livy sessions, (3) identify OOM, shuffle spill, or data skew, (4) retrieve driver and executor logs or Spark Advisor findings, (5) copy event logs and start a local Spark History Server, (6) diagnose all Spark activities within a failed pipeline run. Triggers: "diagnose my failed notebook", "why did my spark job fail", "triage spark failure", "diagnose pipeline run failure", "why did my pipeline fail", "livy session stuck in starting", "spark executor OOM", "check spark advisor findings", "shuffle spill diagnosis", "why did my lakehouse job fail", "diagnose lakehouse table load", "data skew diagnosis", "open spark history server locally", "analyze spark failure logs", "spark job triage".
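A first triage step is often listing Livy sessions; the endpoint below is an assumption based on the Fabric Spark monitoring APIs, so verify it against current REST docs (workspace ID is a placeholder):

# List Livy sessions in a workspace (endpoint shape is an assumption)
az rest --method get \
  --url "https://api.fabric.microsoft.com/v1/workspaces/<ws-id>/spark/livySessions" \
  --resource "https://api.fabric.microsoft.com"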
Monitor, inspect, and discover Fabric Dataflows Gen2 via read-only CLI operations (az rest / curl). List dataflows across workspaces, decode base64 definitions to inspect Power Query M queries and queryMetadata.json, discover typed parameters with defaults, poll refresh operations for status, retrieve job history with timing and error details, and classify queries by staging settings. Use when the user wants to: (1) list dataflows, (2) inspect a dataflow definition and decode its mashup, (3) discover parameters, (4) check refresh status, (5) retrieve job history, (6) analyze staging settings, (7) examine connections and data source bindings. Triggers: "dataflow status", "refresh history", "dataflow monitor", "list dataflows", "dataflow parameters", "explore dataflow", "inspect dataflow", "dataflow run status".
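Discovery starts with a plain item listing (workspace ID is a placeholder):

# List Dataflow Gen2 items in a workspace
az rest --method get \
  --url "https://api.fabric.microsoft.com/v1/workspaces/<ws-id>/items?type=Dataflow" \
  --resource "https://api.fabric.microsoft.com"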
Create, update, delete, and manage Fabric Dataflows Gen2 artifacts with Power Query M mashup definitions via CLI (az rest / curl). Uses az rest and curl against the Fabric REST API to author definitions containing base64-encoded mashup.pq, queryMetadata.json, and .platform parts. Supports creating dataflows with inline definitions, modifying mashup queries, binding connections, triggering Execute refresh jobs with typed parameter overrides, and exporting definitions for CI/CD. Use when the user wants to: (1) create a new Dataflow Gen2 with Power Query M queries, (2) update a dataflow mashup definition, (3) trigger a dataflow refresh job, (4) bind or manage dataflow connections, (5) set up CI/CD via definition export and import, (6) delete a dataflow, (7) configure staging destinations. Triggers: "create dataflow", "author dataflow", "Power Query M", "mashup document", "update dataflow definition", "refresh dataflow", "dataflow connection", "ETL dataflow", "dataflow CI/CD".
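Refresh jobs go through the generic item job scheduler; a sketch with placeholder IDs, using jobType=Execute as the skill description does:

# Trigger a dataflow refresh
az rest --method post \
  --url "https://api.fabric.microsoft.com/v1/workspaces/<ws-id>/items/<dataflow-id>/jobs/instances?jobType=Execute" \
  --resource "https://api.fabric.microsoft.com"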
Assess, plan, and execute dataflow Gen1 → Gen2.1 CI/CD save-as operations via CLI (az rest / curl) against both Power BI REST and Fabric REST APIs. Scan workspaces or entire tenants for Gen1 dataflows, evaluate save-as readiness with seven risk signals (incremental refresh, BYOSA storage, Power Automate triggers, pipeline dependencies, linked entities, DirectQuery, caller-not-owner), produce a Save-As Readiness Snapshot (markdown + JSON), and invoke the SaveAsNativeArtifact API to create upgraded Gen2.1 copies of Gen1 dataflows. Use when the user wants to: (1) discover Gen1 dataflows in a workspace or tenant, (2) assess save-as readiness and risk signals, (3) upgrade or migrate Gen1 into a Gen2.1 copy, (4) validate post-save-as data integrity, (5) detect residual Gen1 references. Triggers: "save Gen1 dataflow", "convert dataflow Gen1", "upgrade dataflow", "migrate dataflow", "dataflow readiness", "Gen1 to Gen2", "dataflow save-as assessment", "saveAsNativeArtifact", "dataflow save-as scan".
Find and discover Microsoft Fabric items across workspaces when the workspace is unknown. Use when the user wants to: (1) find an item by name across workspaces, (2) list items of specific type across workspaces, (3) identify which workspace contains an item, (4) return item/workspace IDs for downstream API calls. Triggers: "which workspace has", "where is", "what items do I have", "do I have", "find item", "find all items", "search for item", "discover items", "find across workspaces".
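Cross-workspace search amounts to looping over the workspaces list; "SalesModel" below is purely illustrative:

# Find an item by display name across all accessible workspaces
for ws in $(az rest --method get --url "https://api.fabric.microsoft.com/v1/workspaces" \
    --resource "https://api.fabric.microsoft.com" --query "value[].id" -o tsv); do
  az rest --method get --url "https://api.fabric.microsoft.com/v1/workspaces/$ws/items" \
    --resource "https://api.fabric.microsoft.com" \
    --query "value[?displayName=='SalesModel'].{workspace:'$ws', id:id, type:type}" -o table
done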
Port Databricks notebooks and jobs to Microsoft Fabric. Provides an exhaustive dbutils to notebookutils substitution table: fs operations (mount removal via OneLake Shortcuts), secret scope to Key Vault URL conversion, notebook run and exit, widget replacement with parameter-tagged cells, and library install replacement with Fabric Environments. Covers Unity Catalog three-level namespace reduction to Lakehouse two-level schemas, DBFS path conversion to OneLake, Databricks Jobs to Spark Job Definitions, MLflow tracking URI removal, and Photon to Native Execution Engine substitution. Use when the user wants to: (1) replace dbutils with notebookutils, (2) collapse Unity Catalog namespaces to Lakehouse schemas, (3) convert Databricks Jobs or Delta Live Tables. Triggers: "migrate from databricks", "databricks to fabric", "dbutils to notebookutils", "dbutils fabric", "unity catalog migration", "dbfs to onelake", "databricks notebook migration", "delta live tables fabric", "photon native execution".
Port Azure Synapse Analytics notebooks, SQL pools, and pipelines to Microsoft Fabric. Translates mssparkutils calls to notebookutils (including the env→runtime namespace change), replaces Linked Services with Fabric Data Connections and OneLake Shortcuts, rewrites Dedicated SQL Pool DDL by removing DISTRIBUTION and CLUSTERED COLUMNSTORE hints, substitutes PolyBase external tables with COPY INTO, and maps Synapse Pipeline activities to Fabric Data Pipeline equivalents. Use when the user wants to: (1) port Synapse Spark notebooks to Fabric Lakehouse or Spark Job Definitions, (2) replace mssparkutils or Linked Services in Synapse code, (3) rewrite Dedicated SQL Pool T-SQL for Fabric Warehouse, (4) migrate Synapse Pipelines to Fabric Data Pipelines. Triggers: "migrate from synapse", "synapse to fabric", "mssparkutils to notebookutils", "synapse linked service replacement", "dedicated sql pool to warehouse", "polybase to copy into", "port synapse notebooks", "synapse workspace migration".
Port Azure HDInsight Spark clusters and Hive workloads to Microsoft Fabric. Removes legacy HiveContext and standalone SparkContext constructors, replacing them with the pre-instantiated SparkSession. Converts WASB and ABFS storage paths to OneLake abfss URLs via Shortcuts. Transforms Hive DDL (STORED AS ORC, external tables) to Delta Lake schemas inside Fabric Lakehouse. Maps Oozie workflow actions — spark, hive, shell, sqoop, coordinator — to Fabric Pipeline activities and schedule triggers. Introduces notebookutils for file and credential operations previously handled via subprocess or HDFS client calls. Use when the user wants to: (1) retire an HDInsight cluster and move to Fabric, (2) convert WASB paths or Hive DDL, (3) replace Oozie coordinators with Fabric Pipelines. Triggers: "migrate from hdinsight", "hdi to fabric", "hivecontext sparksession fabric", "wasb to onelake", "hive ddl to delta", "oozie to fabric pipelines", "hive metastore lakehouse", "hdinsight spark migration".
Implement end-to-end Medallion Architecture (Bronze/Silver/Gold) lakehouse patterns in Microsoft Fabric using PySpark, Delta Lake, and Fabric Pipelines. Use when the user wants to: (1) design a Bronze/Silver/Gold data lakehouse, (2) set up multi-layer workspace with lakehouses for each tier, (3) build ingestion-to-analytics pipelines with data quality enforcement, (4) optimize Spark configurations per medallion layer, (5) orchestrate Bronze-to-Silver-to-Gold flows via notebooks. Triggers: "medallion architecture", "bronze silver gold", "lakehouse layers", "e2e data pipeline", "end-to-end lakehouse", "data lakehouse pattern", "multi-layer lakehouse", "build medallion", "setup medallion".
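Workspace scaffolding for the three tiers can be provisioned with one call per layer (workspace ID is a placeholder; lakehouse names are illustrative):

# Provision one lakehouse per medallion layer
for layer in bronze silver gold; do
  az rest --method post \
    --url "https://api.fabric.microsoft.com/v1/workspaces/<ws-id>/lakehouses" \
    --resource "https://api.fabric.microsoft.com" \
    --body "{\"displayName\": \"lh_${layer}\"}"
done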
Microsoft Fabric Skills are reusable AI assistant instructions for working with Microsoft Fabric. They help GitHub Copilot CLI and compatible AI coding tools understand Fabric workloads, APIs, query patterns, and operational best practices.
Add the public marketplace:
/plugin marketplace add microsoft/skills-for-fabric
Install the full bundle:
/plugin install fabric-skills@fabric-collection
Or install a focused bundle:
# Authoring: APIs, automation, notebooks, schemas, ingestion, and deployment
/plugin install fabric-authoring@fabric-collection
# Consumption: interactive querying, discovery, exploration, and monitoring
/plugin install fabric-consumption@fabric-collection
# Operations: diagnostics and performance investigation
/plugin install fabric-operations@fabric-collection
You can also filter the full bundle by workload:
/plugin install fabric-skills@fabric-collection --filter "sqldw-*"
/plugin install fabric-skills@fabric-collection --filter "spark-*"
/plugin install fabric-skills@fabric-collection --filter "eventhouse-*"
| Bundle | Use it for |
|---|---|
| fabric-skills | Complete Microsoft Fabric skill bundle, including authoring, consumption, operations, migration, and end-to-end architecture skills. |
| fabric-authoring | Creating and managing Fabric items through REST APIs, CLI automation, notebooks, T-SQL, KQL, Dataflows Gen2, Eventstreams, and semantic models. |
| fabric-consumption | Read-only exploration and query workflows across Warehouses, Lakehouses, Power BI semantic models, Eventhouse/KQL databases, Eventstreams, Dataflows Gen2, and catalog search. |
| fabric-operations | Performance and health diagnostics, including warehouse query insights and slow-query investigation. |
The full bundle includes skills for SQL data warehouse, Spark and Lakehouse, Power BI semantic models, Eventhouse and KQL, Eventstreams, Dataflows Gen2, catalog search, migration scenarios, and medallion architecture workflows.
See CHANGELOG.md for public release notes.
After installing a bundle, open Copilot CLI in a project folder and ask for the Fabric task you want to perform, for example:
Use Microsoft Fabric skills to design a medallion architecture for NYC taxi data.
Most Fabric operations require Azure authentication. Start with:
az login
az account get-access-token --resource https://api.fabric.microsoft.com
SQL, Spark, Power BI, and KQL workflows may require workload-specific endpoints or token audiences. The installed skills provide the detailed commands and API patterns for each workload.
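As a sketch, capturing a token for scripted az rest or curl calls, plus two audiences commonly needed by SQL and Power BI workloads:

# Reusable Fabric API token (use with: curl -H "Authorization: Bearer $TOKEN" ...)
TOKEN=$(az account get-access-token --resource "https://api.fabric.microsoft.com" --query accessToken -o tsv)
# SQL endpoints and Power BI REST APIs use different token audiences
az account get-access-token --resource "https://database.windows.net/"
az account get-access-token --resource "https://analysis.windows.net/powerbi/api"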
Skills provide guidance and patterns. MCP servers provide live tool access to data sources and APIs. Some bundles include MCP configuration where supported, and you can register additional Fabric MCP servers if your environment provides them.
See MCP setup and the MCP servers guide.
GitHub Copilot CLI plugin installation is the recommended path. This repository also includes root-level configuration files for compatible AI coding tools — CLAUDE.md for Claude Code, .cursorrules for Cursor, .windsurfrules for Windsurf, and AGENTS.md for Codex / Jules / OpenCode. They are picked up automatically when the repo is cloned.
Report product issues in the GitHub issue tracker.
For security vulnerabilities, do not open a public issue. See SECURITY.md for the private reporting path.
This project is licensed under the MIT License.
Claude Code integration for Microsoft Fabric CLI, enabling AI-assisted data and analytics workflows.
Get this plugin working with the Fabric / Power BI service by means of the fabric cli.
Claude Code skill pack for Databricks (24 skills)
Curated agent skills collection for dbt workflows, helping AI agents understand and execute data transformation pipelines more effectively.
Skills and tools for agentic Power BI development including Semantic Modeling, TMDL, PBIR, Best Practices.
External network access: connects to servers outside your machine.
Power tools: uses Bash, Write, or Edit tools.
Editorial "Data Engineering" bundle for Claude Code from Antigravity Awesome Skills.