Master ETL processes seamlessly with Microsoft Fabric
A complete, end-to-end Microsoft Fabric ETL training built for team autonomy. In two days, participants learn how to ingest, transform, and operationalize data flows using Fabric’s core building blocks: Data Factory (pipelines + Dataflows Gen2) and the Lakehouse on OneLake with Delta tables.
By the end of the workshop, your team will be able to build a coherent ingestion-to-model flow, monitor executions, apply the right permission model, and optimize transformations (including knowing when query folding can help).
We don’t believe in training that overwhelms participants with theory. We believe in structured immersion. Our “3-Stage Autonomy Arc” ensures concepts aren’t just understood; they’re truly mastered.
1) Foundations & Architecture
Participants align on ETL fundamentals and the Fabric landscape: how experiences connect, what OneLake/Lakehouse/Delta mean in practice, and where Data Factory (pipelines + Dataflows Gen2) fits in the workflow.
2) Build Pipelines & Transformations
Participants create ingestion and transformation flows with Dataflows Gen2 and orchestrate them with pipelines (including running Dataflows inside pipelines).
3) Operate: Monitoring, Security & Optimization
Participants learn to operate pipelines reliably: configuring destinations and refresh behavior, monitoring run history, managing Lakehouse permissions and sharing, and making optimization choices (transformation strategy, query folding considerations).
Participants learn how to build and operate a complete Fabric ETL flow: ingest with Dataflows Gen2, orchestrate with pipelines, land data in a Lakehouse with Delta tables, and confidently manage monitoring, access, and optimization decisions.
Module 1 - Fabric Foundations for ETL
ETL principles, how Fabric experiences connect end-to-end, and the role of OneLake/Lakehouse/Delta in a modern data integration workflow.
Module 2 - Dataflows Gen2 for Ingestion & Transformation
Building ingestion flows with Dataflows Gen2 (Power Query Online), choosing destinations, and understanding refresh/write behavior so outputs are predictable and maintainable.
Module 3 - Orchestration with Data Factory Pipelines
Creating pipelines to automate sequencing, dependencies, and execution - including running a Dataflow Gen2 as a pipeline activity and tracking run history.
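During the workshop, orchestration is built in the Fabric portal; purely as a flavor of what automation can look like outside the UI, here is a minimal sketch that triggers a pipeline run on demand through the Fabric REST API job scheduler. The workspace ID, pipeline item ID, and token acquisition shown are placeholders and assumptions, not part of the course material.

```python
# Minimal sketch: trigger a Fabric Data Factory pipeline run on demand
# via the Fabric REST API (job scheduler). The IDs and token below are
# placeholders -- adapt them to your own tenant and auth flow.
import requests

WORKSPACE_ID = "<your-workspace-guid>"      # hypothetical placeholder
PIPELINE_ID = "<your-pipeline-item-guid>"   # hypothetical placeholder
TOKEN = "<aad-access-token-for-fabric>"     # e.g. obtained via MSAL

url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{WORKSPACE_ID}"
    f"/items/{PIPELINE_ID}/jobs/instances?jobType=Pipeline"
)

response = requests.post(url, headers={"Authorization": f"Bearer {TOKEN}"})
response.raise_for_status()

# A successful request is accepted asynchronously; the Location header
# points at the job instance you can poll to follow the run.
print(response.status_code, response.headers.get("Location"))
```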
Module 4 - Lakehouse & Delta Tables
Loading and shaping data into Delta tables in the Lakehouse and understanding the patterns that make Lakehouse storage usable and consistent for analytics teams.
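To make the pattern concrete, here is a minimal sketch of the kind of load step this module covers: a Fabric notebook (PySpark) writing cleaned data to a managed Delta table in the attached Lakehouse. The source folder and table name are illustrative assumptions.

```python
# Minimal sketch (Fabric notebook, PySpark): land raw files as a managed
# Delta table in the default Lakehouse attached to the notebook.
# The source path and table name are illustrative, not part of the course.
from pyspark.sql import functions as F

raw = (
    spark.read                      # `spark` is provided by the Fabric notebook session
    .option("header", "true")
    .csv("Files/raw/sales/*.csv")   # hypothetical landing folder in the Lakehouse
)

cleaned = (
    raw.dropDuplicates()
    .withColumn("ingested_at", F.current_timestamp())
)

# Overwrite keeps the example idempotent; append or merge are common alternatives.
cleaned.write.format("delta").mode("overwrite").saveAsTable("sales_bronze")
```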
Module 5 - Security, Monitoring & Optimization
Permissions and sharing for Lakehouses, monitoring practices, and optimization choices (including transformation strategy selection and performance considerations).