From magic-powers
Use when designing Microsoft Fabric Lakehouse architecture, working with Delta tables, OneLake storage, Spark notebooks, or studying for DP-700 (Microsoft Fabric Data Engineer Associate). Covers Fabric architecture, Delta Lake, OneLake shortcuts, and medallion patterns.
Install:

```
npx claudepluginhub kienbui1995/magic-powers --plugin magic-powers
```

This skill uses the workspace's default tool permissions.
- Designing a Lakehouse in Microsoft Fabric for analytics workloads
| Capability | Lakehouse | Warehouse |
|---|---|---|
| Storage format | Delta (Parquet + transaction log) | Proprietary columnar |
| Query interface | SQL analytics endpoint (read-only) + Spark | Dedicated SQL (read/write T-SQL) |
| Write via SQL | No (Spark or Dataflow only) | Yes (INSERT, UPDATE, DELETE) |
| Spark support | Yes (PySpark, Scala, R) | No |
| Best for | Data engineering, ML prep, exploration | BI reporting, T-SQL-heavy analytics |
Decision rule: Choose Lakehouse when you need Spark + open Delta format; choose Warehouse when your team is SQL-first and needs T-SQL DML.
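As a sketch of that split: in a Lakehouse you write with Spark, then read the same table through the read-only SQL analytics endpoint. This is a minimal illustration assuming a Fabric Spark notebook (where a `spark` session is predefined); the `sales` table and its columns are hypothetical example names.

```python
# Fabric Spark notebook cell (assumed environment; `spark` is provided).
# Write a managed Delta table -- this is the Spark (read/write) side.
df = spark.createDataFrame(
    [(1, "2024-01-15", 99.90), (2, "2024-01-16", 45.00)],
    ["order_id", "order_date", "amount"],
)
df.write.format("delta").mode("overwrite").saveAsTable("sales")

# The SQL analytics endpoint exposes the same table read-only, e.g.:
#   SELECT order_date, SUM(amount) AS revenue FROM sales GROUP BY order_date;
# INSERT/UPDATE/DELETE against it would fail -- use Spark (or a Warehouse) to write.
```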
Delta tables store data as Parquet files plus a transaction log (`_delta_log/`).

- Time travel: `SELECT * FROM table VERSION AS OF 10` — by version number; `SELECT * FROM table TIMESTAMP AS OF '2024-01-15'` — by timestamp
- Schema evolution: the `mergeSchema` option allows adding new columns
- Read: `spark.read.format("delta").load("abfss://...")`
- Write: `df.write.format("delta").mode("overwrite").save("Tables/mytable")`
- Shortcuts: `spark.read.load("Files/shortcut-folder/")` — treats shortcuts as local paths

| Layer | Lakehouse | Content |
|---|---|---|
| Bronze | Raw Lakehouse | Ingested as-is; no transformations; full fidelity |
| Silver | Cleaned Lakehouse | Validated, deduplicated, typed data |
| Gold | Curated Lakehouse | Aggregated, business-ready for BI and reporting |
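The three layers above can be sketched as one pipeline. This is a minimal, assumption-laden sketch for a Fabric Spark notebook (where `spark` is predefined); all paths, table names, and columns (`order_id`, `amount`, etc.) are hypothetical.

```python
from pyspark.sql import functions as F

# Bronze: ingest raw files as-is -- full fidelity, no transformations.
raw = spark.read.option("header", True).csv("Files/landing/orders/")
raw.write.format("delta").mode("append").saveAsTable("bronze_orders")

# Silver: validated, deduplicated, typed data.
silver = (
    spark.table("bronze_orders")
    .dropDuplicates(["order_id"])
    .filter(F.col("order_id").isNotNull())
    .withColumn("amount", F.col("amount").cast("double"))
)
silver.write.format("delta").mode("overwrite").saveAsTable("silver_orders")

# Gold: aggregated, business-ready table for BI and reporting.
gold = silver.groupBy("order_date").agg(F.sum("amount").alias("revenue"))
gold.write.format("delta").mode("overwrite").saveAsTable("gold_daily_revenue")
```

In practice each layer would live in its own Lakehouse (Raw, Cleaned, Curated), as the table indicates; the sketch collapses them into one workspace for brevity.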
- `Tables/` section: managed Delta tables, registered in the Lakehouse metastore
- `Files/` section: raw files; not a Delta table; no SQL access
- Write Delta tables to the `Tables/` section (not `Files/`) for SQL analytics endpoint access
- Common mistake: writing to the `Files/` section expecting SQL access (Files are not Delta tables)
- Time travel: `VERSION AS OF` for version number, `TIMESTAMP AS OF` for date; default 7-day retention
- Summary: `Tables/` = Delta format, SQL-queryable; `Files/` = raw files, Spark-only
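The `Tables/` vs `Files/` distinction and time travel can be sketched together. A minimal sketch assuming a Fabric Spark notebook with a predefined `spark` session and an existing DataFrame `df`; the paths and the version/timestamp values are hypothetical.

```python
# Delta table under Tables/ -> registered in the metastore,
# queryable via the SQL analytics endpoint.
df.write.format("delta").mode("overwrite").save("Tables/mytable")

# Raw files under Files/ -> Spark-only; the SQL endpoint cannot see them.
df.write.mode("overwrite").parquet("Files/staging/mytable")

# Time travel reads (must fall within the retention window, default 7 days):
v10 = (spark.read.format("delta")
       .option("versionAsOf", 10)          # by version number
       .load("Tables/mytable"))
jan = (spark.read.format("delta")
       .option("timestampAsOf", "2024-01-15")  # by timestamp
       .load("Tables/mytable"))
```

`versionAsOf` / `timestampAsOf` are the DataFrame-reader equivalents of the SQL `VERSION AS OF` / `TIMESTAMP AS OF` clauses shown earlier.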