Transform data in delta lake using mapping data flows

APPLIES TO: Azure Data Factory Azure Synapse Analytics

Tip

Try out Data Factory in Microsoft Fabric, an all-in-one analytics solution for enterprises. Microsoft Fabric covers everything from data movement to data science, real-time analytics, business intelligence, and reporting. Learn how to start a new trial for free!

If you're new to Azure Data Factory, see Introduction to Azure Data Factory.

In this tutorial, you'll use the data flow canvas to create data flows that allow you to analyze and transform data in Azure Data Lake Storage (ADLS) Gen2 and store it in Delta Lake.

Prerequisites

  • Azure subscription. If you don't have an Azure subscription, create a free Azure account before you begin.
  • Azure storage account. You use ADLS Gen2 storage as the source and sink data stores. If you don't have a storage account, see Create an Azure storage account for steps to create one.

The file that we transform in this tutorial is MoviesDB.csv, which can be found here. To retrieve the file from GitHub, copy the contents to a text editor of your choice and save it locally as a .csv file. To upload the file to your storage account, see Upload blobs with the Azure portal. The examples reference a container named 'sample-data'.

Create a data factory

In this step, you create a data factory and open the Data Factory UX to create a pipeline in the data factory.

  1. Open Microsoft Edge or Google Chrome. Currently, Data Factory UI is supported only in the Microsoft Edge and Google Chrome web browsers.

  2. On the left menu, select Create a resource > Integration > Data Factory.

  3. On the New data factory page, under Name, enter ADFTutorialDataFactory.

  4. Select the Azure subscription in which you want to create the data factory.

  5. For Resource Group, take one of the following steps:

    a. Select Use existing, and select an existing resource group from the drop-down list.

    b. Select Create new, and enter the name of a resource group.

    To learn about resource groups, see Use resource groups to manage your Azure resources.

  6. Under Version, select V2.

  7. Under Location, select a location for the data factory. Only locations that are supported are displayed in the drop-down list. Data stores (for example, Azure Storage and SQL Database) and computes (for example, Azure HDInsight) used by the data factory can be in other regions.

  8. Select Create.

  9. After the creation is finished, you see the notice in the Notifications center. Select Go to resource to navigate to the Data Factory page.

  10. Select Author & Monitor to launch the Data Factory UI in a separate tab.

Create a pipeline with a data flow activity

In this step, you'll create a pipeline that contains a data flow activity.

  1. On the home page, select Orchestrate.

    Screenshot that shows the ADF home page.

  2. In the General tab for the pipeline, enter DeltaLake for the Name of the pipeline.

  3. In the Activities pane, expand the Move and Transform accordion. Drag and drop the Data Flow activity from the pane to the pipeline canvas.

    Screenshot that shows the pipeline canvas where you can drop the Data Flow activity.

  4. In the Adding Data Flow pop-up, select Create new Data Flow and then name your data flow DeltaLake. Click Finish when done.

    Screenshot that shows where you name your data flow when you create a new data flow.

  5. In the top bar of the pipeline canvas, slide the Data Flow debug slider on. Debug mode allows for interactive testing of transformation logic against a live Spark cluster. Data flow clusters take 5-7 minutes to warm up, so we recommend turning on debug first if you plan to do data flow development. For more information, see Debug Mode.

    Screenshot that shows where the Data flow debug slider is.

Build transformation logic in the data flow canvas

You will generate two data flows in this tutorial. The first data flow is a simple source-to-sink flow that generates a new Delta Lake from the movies CSV file above. The second follows the flow design below to update data in that Delta Lake.

Final flow

Tutorial objectives

  1. Take the MoviesCSV dataset source from above, and form a new Delta Lake from it.
  2. Build the logic to update ratings for 1988 movies to '1'.
  3. Delete all movies from 1950.
  4. Insert new movies for 2021 by duplicating the movies from 1960.

Start from a blank data flow canvas

  1. Click on the source transformation

  2. Click New next to Dataset in the bottom panel, and then create a new linked service for ADLS Gen2

  3. Choose Delimited Text for the dataset type

  4. Name the dataset “MoviesCSV” 

  5. Point to the MoviesCSV file that you uploaded to storage above

  6. Set it to be comma delimited and to include headers on the first row

  7. Go to the source projection tab and click "Detect data types"

  8. Once you have your projection set, you can continue 

  9. Add a sink transformation

  10. For the sink type, choose Delta. Delta is an inline dataset type, so instead of selecting a dataset, you point the sink directly to your ADLS Gen2 storage account.

    Inline dataset

  11. Choose a folder name in your storage container where you would like ADF to create the Delta Lake

  12. Go back to the pipeline designer and click Debug to execute the pipeline in debug mode with just this data flow activity on the canvas. This will generate your new Delta Lake in ADLS Gen2.

  13. From Factory Resources, click new > Data flow 

  14. Use the MoviesCSV again as a source and click "Detect data types" again

  15. Add a filter transformation to your source transformation in the graph

  16. Only allow movie rows that match the three years you are going to work with: 1950, 1988, and 1960. (See the filter sketch after this list.)

  17. Add a derived column transformation after your filter transformation, and use it to update the rating for each 1988 movie to '1'.

  18. In that same derived column transformation, create movies for 2021 by taking an existing year and changing it to 2021. Let's pick 1960.

  19. This is what your three derived columns will look like. (See the derived column sketch after this list.)

    Derived column

  20. Update, insert, delete, and upsert policies are created in the alter row transformation. Add an alter row transformation after your derived column. (See the alter row sketch after this list.)

  21. Your alter row policies should look like this.

    Alter row

  22. Now that you’ve set the proper policy for each alter row type, check that the proper update rules have been set on the sink transformation

    Sink

  23. Here we are using the Delta Lake sink to write to your ADLS Gen2 data lake, allowing inserts, updates, and deletes. (See the sink sketch after this list.)

  24. Note that the Key Columns are a composite key made up of the Movie primary key column and the year column. This is because we created fake 2021 movies by duplicating the 1960 rows. Looking up existing rows on this composite key provides uniqueness and avoids collisions.
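
For reference, the following snippets sketch how these steps might look if expressed in the underlying data flow script. The stream names (MoviesCSVSource, FilterYears, DerivedColumns, AlterRowPolicies, DeltaSink) are placeholders, not values the tutorial prescribes. First, the filter condition from step 16, assuming the year column resolved to an integer after "Detect data types":

```
// Keep only the three years this tutorial works with
MoviesCSVSource filter(year == 1950 || year == 1960 || year == 1988) ~> FilterYears
```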
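
The derived column expressions from steps 17 and 18 might then look like the sketch below. The column names year and Rating are assumptions based on the MoviesCSV header, and the expressions assume both columns were detected as integers; cast with functions such as toInteger or toString if your projection shows different types. Your flow may also include a third derived column, as pictured in step 19.

```
// Rewrite 1988 ratings to 1, and turn the 1960 rows into new 2021 rows
FilterYears derive(Rating = iif(year == 1988, 1, Rating),
    year = iif(year == 1960, 2021, year)) ~> DerivedColumns
```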
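
For the alter row transformation in steps 20 and 21, the policies can key off the same year values. This sketch assumes the derived column above has already rewritten the 1960 rows as 2021, so they reach the alter row transformation with year equal to 2021:

```
// Insert the duplicated 2021 rows, update 1988 rows, delete 1950 rows
DerivedColumns alterRow(insertIf(year == 2021),
    updateIf(year == 1988),
    deleteIf(year == 1950)) ~> AlterRowPolicies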
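
Finally, a rough sketch of the Delta sink described in steps 23 and 24. The container name sample-data and folder name delta are placeholders from this tutorial's examples, and the option names are illustrative; the script your own sink generates may differ slightly.

```
// Inline Delta sink with insert/update/delete enabled and a composite key
AlterRowPolicies sink(
    format: 'delta',
    fileSystem: 'sample-data',
    folderPath: 'delta',
    insertable: true,
    updateable: true,
    deletable: true,
    upsertable: false,
    keys: ['movie', 'year']) ~> DeltaSink
```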

Download completed sample

Here is a sample solution for the Delta pipeline with a data flow for update/delete rows in the lake:

Learn more about the data flow expression language.