The Copy job in Data Factory makes it easy to move data from your source to your destination--no pipelines required. With a simple, guided experience, you can set up data transfers using built-in patterns for both batch and incremental copy. Whether you’re new to data integration or just want a faster way to get your data where it needs to go, Copy job offers a flexible and user-friendly solution.
Some advantages of the Copy job over other data movement methods include:
- Easy to use: Set up and monitor data copying with a simple, guided experience—no technical expertise needed.
- Efficient: Copy only new or changed data to save time and resources, with minimal manual steps.
- Flexible: Choose which data to move, map columns, set how data is written, and schedule jobs to run once or regularly.
- High performance: Move large amounts of data quickly and reliably, thanks to a serverless, scalable system.
## Supported connectors
With Copy job, you can move your data between cloud data stores or from on-premises sources that are behind a firewall or inside a virtual network using a gateway. Copy job supports the following data stores as sources or destinations:
- Azure SQL DB
- Oracle
- On-premises SQL Server
- Fabric Warehouse
- Fabric Lakehouse table
- Fabric Lakehouse file
- Amazon S3
- Azure Data Lake Storage Gen2
- Azure Blob Storage
- Azure SQL Managed Instance
- Snowflake
- Azure Synapse Analytics
- Azure Data Explorer
- Azure PostgreSQL
- Google Cloud Storage
- MySQL
- Azure MySQL
- PostgreSQL
- SQL database in Fabric (Preview)
- Amazon S3 compatible
- SAP HANA
- ODBC
- Amazon RDS for SQL Server
- Google BigQuery
- Salesforce
- Salesforce Service Cloud
- Azure Tables
- Azure Files
- SFTP
- FTP
- IBM Db2 database
- Vertica
- ServiceNow
- Oracle Cloud Storage
- MariaDB
- Dataverse
- Dynamics 365
- Dynamics CRM
- Azure Cosmos DB for NoSQL
- HTTP

Depending on the connector, Copy job can read with a full load, a watermark-based incremental load, or CDC (Preview), and can write with append, override, or merge behavior. Which of these options is available, and whether a connector can act as a source, a destination, or both, varies by connector.
Note
Currently, when you use Copy job for CDC replication from a supported source store, the supported destination stores are Azure SQL Database, on-premises SQL Server, Azure SQL Managed Instance, and SQL database in Fabric (Preview).
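Conceptually, CDC replication applies each captured change to the destination in order: inserts add rows, updates modify the rows that match on the key, and deletes remove them. The following minimal sketch illustrates the idea with a hypothetical change feed and an in-memory SQLite table standing in for a destination store; it is not Copy job's actual implementation.

```python
import sqlite3

# Hypothetical destination table, standing in for a SQL destination store.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT)")
conn.execute("INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace')")

# Hypothetical change feed from a CDC-enabled source: (operation, key, value).
changes = [
    ("insert", 3, "Alan"),
    ("update", 2, "Grace H."),
    ("delete", 1, None),
]

# Apply each captured change in order.
for op, key, name in changes:
    if op == "insert":
        conn.execute("INSERT INTO customers VALUES (?, ?)", (key, name))
    elif op == "update":
        conn.execute("UPDATE customers SET name = ? WHERE id = ?", (name, key))
    elif op == "delete":
        conn.execute("DELETE FROM customers WHERE id = ?", (key,))

print(conn.execute("SELECT * FROM customers ORDER BY id").fetchall())
# [(2, 'Grace H.'), (3, 'Alan')]
```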
## Copy behavior
You can pick how your data is delivered:
- Full copy mode: Every time the job runs, it copies all data from your source to your destination.
- Incremental copy mode: The first run copies everything, and later runs move only new or changed data. For databases, this means only new or updated rows are copied. If your database uses CDC (Change Data Capture), inserted, updated, and deleted rows are included. For storage sources, only files with a newer LastModifiedTime are copied, as the sketch after this list shows.
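For storage sources, the LastModifiedTime comparison works roughly like the following sketch. The directory paths and the watermark file are hypothetical stand-ins for a real source store, destination store, and job state; Copy job manages this bookkeeping for you.

```python
import os
import shutil
import time

SOURCE_DIR = "source"        # hypothetical source store
DEST_DIR = "destination"     # hypothetical destination store
STATE_FILE = ".last_run"     # hypothetical record of the previous run time

def last_run_time() -> float:
    """Return the previous run's timestamp, or 0.0 so the first run copies everything."""
    try:
        with open(STATE_FILE) as f:
            return float(f.read())
    except FileNotFoundError:
        return 0.0

def incremental_copy() -> None:
    watermark = last_run_time()
    started = time.time()
    for name in os.listdir(SOURCE_DIR):
        src = os.path.join(SOURCE_DIR, name)
        # Copy only files modified since the previous run; a file with the
        # same name at the destination is replaced.
        if os.path.isfile(src) and os.path.getmtime(src) > watermark:
            shutil.copy2(src, os.path.join(DEST_DIR, name))
    # Record this run's start time so the next run skips what was copied.
    with open(STATE_FILE, "w") as f:
        f.write(str(started))

if __name__ == "__main__":
    incremental_copy()
```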
You can also decide how data is written to your destination:
- When copying to a database: New rows are added to your tables. For supported databases, you can also choose to merge or overwrite existing data.
- When copying to storage: New data is saved as new files. If a file with the same name already exists, it's replaced.

By default, Copy job appends new data, so you keep a full history. If you prefer, you can choose to merge (update existing rows using a key column) or overwrite (replace existing data). If you select merge, Copy job uses the primary key by default, if one exists. The sketch below illustrates all three behaviors.
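To make the three write behaviors concrete, here's a minimal sketch against an in-memory SQLite table. The table, rows, and key column are hypothetical; in a real job, the destination table and key column are whatever you configure.

```python
import sqlite3

def destination() -> sqlite3.Connection:
    """A fresh destination table that already holds two rows, keyed by id."""
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE customers (id INTEGER, name TEXT)")
    conn.execute("INSERT INTO customers VALUES (1, 'Ada'), (2, 'Grace')")
    return conn

incoming = [(2, "Grace H."), (3, "Alan")]  # rows arriving from the source

# Append (default): incoming rows are added alongside the existing ones.
conn = destination()
conn.executemany("INSERT INTO customers VALUES (?, ?)", incoming)
# -> (1, 'Ada'), (2, 'Grace'), (2, 'Grace H.'), (3, 'Alan')

# Overwrite: the existing data is replaced by the incoming rows.
conn = destination()
conn.execute("DELETE FROM customers")
conn.executemany("INSERT INTO customers VALUES (?, ?)", incoming)
# -> (2, 'Grace H.'), (3, 'Alan')

# Merge: rows that match on the key column are updated, the rest are inserted.
conn = destination()
conn.execute("CREATE UNIQUE INDEX pk_customers ON customers (id)")
conn.executemany(
    "INSERT INTO customers VALUES (?, ?) "
    "ON CONFLICT (id) DO UPDATE SET name = excluded.name",
    incoming,
)
# -> (1, 'Ada'), (2, 'Grace H.'), (3, 'Alan')
```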
## Incremental column
When you use incremental copy mode, you pick an incremental column for each table. This column acts as a marker, so Copy job knows which rows are new or updated since the last run. Usually, the incremental column is a date/time value or a number that increases with each new or changed row. If your source database uses Change Data Capture (CDC), you don't need to pick a column; Copy job finds the changes for you.
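In database terms, a watermark-based run reads only rows whose incremental column has moved past the value recorded by the previous run, then advances that value. A minimal sketch, with a hypothetical orders table and a modified_at incremental column:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER, modified_at TEXT)")
conn.executemany("INSERT INTO orders VALUES (?, ?)", [
    (1, "2024-01-01 09:00:00"),
    (2, "2024-01-02 13:30:00"),
    (3, "2024-01-03 08:15:00"),
])

watermark = "2024-01-01 12:00:00"  # highest value seen by the previous run

# Read only rows whose incremental column passed the watermark.
changed = conn.execute(
    "SELECT id, modified_at FROM orders WHERE modified_at > ? ORDER BY modified_at",
    (watermark,),
).fetchall()
# changed == [(2, '2024-01-02 13:30:00'), (3, '2024-01-03 08:15:00')]

# After copying `changed` to the destination, advance the watermark so the
# next run picks up where this one left off.
if changed:
    watermark = changed[-1][1]
```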
## Region availability
Copy job has the same regional availability as Fabric.
## Pricing
For details, see Pricing for Copy job.