Create a copy activity in Microsoft Fabric Data Factory

The copy activity in Microsoft Fabric Data Factory can help you connect to your Azure Database for PostgreSQL flexible server instance to perform data movement and transformation activities.

The copy activity supports Copy command, Bulk insert, and Upsert as write methods. To learn more, see Configure Azure Database for PostgreSQL in a copy activity.
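Conceptually, the Upsert write method corresponds to PostgreSQL's `INSERT ... ON CONFLICT` behavior: new rows are inserted, and rows whose key columns already exist are updated instead. The following sketch is illustrative only; the table and column names are hypothetical, and the connector's exact generated SQL may differ.

```sql
-- Hypothetical target table with order_id as the primary key.
-- A row with a new order_id is inserted; a row with an existing
-- order_id is updated in place.
INSERT INTO public.orders (order_id, status, amount)
VALUES (1001, 'shipped', 25.00)
ON CONFLICT (order_id)
DO UPDATE SET
    status = EXCLUDED.status,
    amount = EXCLUDED.amount;
```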

This article provides step-by-step instructions for creating a copy activity.

Prerequisites

Create a copy activity

  1. In Microsoft Fabric, select your workspace, switch to Data factory, and then select the New item button.

  2. On the New item pane, search for pipeline and select the Data pipeline tile.

    Screenshot that shows selections for starting the process of creating a data pipeline.

  3. In the New pipeline dialog, enter a name and then select the Create button to create a data pipeline.

    Screenshot that shows the dialog for naming a new pipeline.

  4. On the Activities menu, select Copy data, and then select Add to canvas.

    Screenshot that shows selections for copying data and adding it to a canvas.

  5. With the copy activity selected on the data pipeline canvas, on the General tab, enter a name for the activity.

    Screenshot that shows where to enter a name for a copy activity on the General tab.

  6. On the Source tab, select or create a source connection. Learn more about connecting to your data by using the modern get-data experience for data pipelines.

    Screenshot that shows where to select or create a source connection on the Source tab.

    The following example shows the selection of an Azure Database for PostgreSQL table as a source connection.

    Screenshot that shows a source connection selected.

  7. On the Destination tab, select or create an Azure Database for PostgreSQL connection.

    Screenshot that shows where to select or create a destination data source on the Destination tab.

  8. For Write method, select Copy command, Bulk insert, or Upsert.

  9. If a custom mapping is required, configure your mapping on the Mapping tab.

  10. Validate your pipeline.

  11. Select the Run button to run the pipeline manually.

  12. Set up a trigger for your pipeline.

Specify the behavior of key columns on upsert

When you upsert data by using the Azure Database for PostgreSQL connector, you must specify key columns, which you set in the Key columns area of the Destination tab.

Screenshot that shows the area for key columns on the Destination tab.

There are two ways to specify key columns:

  • Select New and add all the primary key columns of the table for the destination data source.

    Screenshot that shows an example with all key columns for a destination data source.

  • Select New and add one or more unique columns of the table for the destination data source.
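To illustrate the second option, the sketch below uses a hypothetical table whose row identity comes from a multi-column unique constraint rather than a primary key; the key columns you select in the pipeline would then be that same unique column set. Table and column names are assumptions for illustration.

```sql
-- Illustrative only: uniqueness is enforced by a composite
-- unique constraint on (region, sale_date), not a primary key.
CREATE TABLE IF NOT EXISTS public.daily_sales (
    region    text    NOT NULL,
    sale_date date    NOT NULL,
    total     numeric NOT NULL,
    UNIQUE (region, sale_date)
);

-- With (region, sale_date) as the key columns, an upsert matches
-- rows on that pair and updates the remaining columns.
INSERT INTO public.daily_sales (region, sale_date, total)
VALUES ('west', '2024-01-15', 1200.00)
ON CONFLICT (region, sale_date)
DO UPDATE SET total = EXCLUDED.total;
```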