Copy and transform data in Azure Data Lake Storage Gen2 using Azure Data Factory or Azure Synapse Analytics
APPLIES TO: Azure Data Factory Azure Synapse Analytics
Tip
Try out Data Factory in Microsoft Fabric, an all-in-one analytics solution for enterprises. Microsoft Fabric covers everything from data movement to data science, real-time analytics, business intelligence, and reporting. Learn how to start a new trial for free!
Azure Data Lake Storage Gen2 (ADLS Gen2) is a set of capabilities dedicated to big data analytics built into Azure Blob storage. You can use it to interface with your data by using both file system and object storage paradigms.
This article outlines how to use the Copy activity to copy data from and to Azure Data Lake Storage Gen2, and how to use Data Flow to transform data in Azure Data Lake Storage Gen2. To learn more, read the introductory article for Azure Data Factory or Azure Synapse Analytics.
Tip
For a data lake or data warehouse migration scenario, learn more in Migrate data from your data lake or data warehouse to Azure.
Supported capabilities
This Azure Data Lake Storage Gen2 connector is supported for the following capabilities:
Supported capabilities | IR | Managed private endpoint |
---|---|---|
Copy activity (source/sink) | ① ② | ✓ |
Mapping data flow (source/sink) | ① | ✓ |
Lookup activity | ① ② | ✓ |
GetMetadata activity | ① ② | ✓ |
Delete activity | ① ② | ✓ |
① Azure integration runtime ② Self-hosted integration runtime
For Copy activity, with this connector you can:
- Copy data from/to Azure Data Lake Storage Gen2 by using the account key, a service principal, or managed identities for Azure resources.
- Copy files as-is or parse or generate files with supported file formats and compression codecs.
- Preserve file metadata during copy.
- Preserve ACLs when copying from Azure Data Lake Storage Gen1/Gen2.
Get started
Tip
For a walk-through of how to use the Data Lake Storage Gen2 connector, see Load data into Azure Data Lake Storage Gen2.
To perform the Copy activity with a pipeline, you can use one of the following tools or SDKs:
- The Copy Data tool
- The Azure portal
- The .NET SDK
- The Python SDK
- Azure PowerShell
- The REST API
- The Azure Resource Manager template
Create an Azure Data Lake Storage Gen2 linked service using UI
Use the following steps to create an Azure Data Lake Storage Gen2 linked service in the Azure portal UI.
Browse to the Manage tab in your Azure Data Factory or Synapse workspace and select Linked Services, then click New:
Search for Azure Data Lake Storage Gen2 and select the Azure Data Lake Storage Gen2 connector.
Configure the service details, test the connection, and create the new linked service.
Connector configuration details
The following sections provide information about properties that are used to define Data Factory and Synapse pipeline entities specific to Data Lake Storage Gen2.
Linked service properties
The Azure Data Lake Storage Gen2 connector supports the following authentication types. See the corresponding sections for details:
- Account key authentication
- Shared access signature authentication
- Service principal authentication
- System-assigned managed identity authentication
- User-assigned managed identity authentication
Note
- If you want to use the public Azure integration runtime to connect to Data Lake Storage Gen2 by leveraging the Allow trusted Microsoft services to access this storage account option enabled on the Azure Storage firewall, you must use managed identity authentication. For more information about the Azure Storage firewall settings, see Configure Azure Storage firewalls and virtual networks.
- When you use PolyBase or the COPY statement to load data into Azure Synapse Analytics, if your source or staging Data Lake Storage Gen2 is configured with an Azure Virtual Network endpoint, you must use managed identity authentication as required by Azure Synapse. See the managed identity authentication section for more configuration prerequisites.
Account key authentication
To use storage account key authentication, the following properties are supported:
Property | Description | Required |
---|---|---|
type | The type property must be set to AzureBlobFS. | Yes |
url | Endpoint for Data Lake Storage Gen2 with the pattern of https://<accountname>.dfs.core.windows.net. | Yes |
accountKey | Account key for Data Lake Storage Gen2. Mark this field as a SecureString to store it securely, or reference a secret stored in Azure Key Vault. | Yes |
connectVia | The integration runtime to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If this property isn't specified, the default Azure integration runtime is used. | No |
Note
The secondary ADLS file system endpoint isn't supported when you use account key authentication. You can use other authentication types.
Example:
{
"name": "AzureDataLakeStorageGen2LinkedService",
"properties": {
"type": "AzureBlobFS",
"typeProperties": {
"url": "https://<accountname>.dfs.core.windows.net",
"accountkey": {
"type": "SecureString",
"value": "<accountkey>"
}
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}
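You can also reference the account key from Azure Key Vault instead of storing it inline, as the table above notes. The following is a minimal sketch of that variant; it reuses the AzureKeyVaultSecret reference pattern shown in the SAS and service principal examples later in this article, and the Key Vault linked service name and secret name are placeholders.
{
"name": "AzureDataLakeStorageGen2LinkedService",
"properties": {
"type": "AzureBlobFS",
"typeProperties": {
"url": "https://<accountname>.dfs.core.windows.net",
"accountKey": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "<Azure Key Vault linked service name>",
"type": "LinkedServiceReference"
},
"secretName": "<secret that stores the account key>"
}
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}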
Shared access signature authentication
A shared access signature provides delegated access to resources in your storage account. You can use a shared access signature to grant a client limited permissions to objects in your storage account for a specified time.
You don't have to share your account access keys. The shared access signature is a URI that encompasses in its query parameters all the information necessary for authenticated access to a storage resource. To access storage resources with the shared access signature, the client only needs to pass in the shared access signature to the appropriate constructor or method.
For more information about shared access signatures, see Shared access signatures: Understand the shared access signature model.
Note
- The service now supports both service shared access signatures and account shared access signatures. For more information about shared access signatures, see Grant limited access to Azure Storage resources using shared access signatures.
- In later dataset configurations, the folder path is the absolute path starting from the container level. You need to configure one aligned with the path in your SAS URI.
The following properties are supported for using shared access signature authentication:
Property | Description | Required |
---|---|---|
type | The type property must be set to AzureBlobFS (suggested). | Yes |
sasUri | Specify the shared access signature URI to the Storage resources such as blob or container. Mark this field as SecureString to store it securely. You can also put the SAS token in Azure Key Vault to use auto-rotation and remove the token portion. For more information, see the following samples and Store credentials in Azure Key Vault. | Yes |
connectVia | The integration runtime to be used to connect to the data store. You can use the Azure integration runtime or the self-hosted integration runtime (if your data store is in a private network). If this property isn't specified, the service uses the default Azure integration runtime. | No |
Note
If you're using the AzureStorage type linked service, it's still supported as is. But we suggest that you use the new AzureDataLakeStorageGen2 linked service type going forward.
Example:
{
"name": "AzureDataLakeStorageGen2LinkedService",
"properties": {
"type": "AzureBlobFS",
"typeProperties": {
"sasUri": {
"type": "SecureString",
"value": "<SAS URI of the Azure Storage resource e.g. https://<accountname>.blob.core.windows.net/?sv=<storage version>&st=<start time>&se=<expire time>&sr=<resource>&sp=<permissions>&sip=<ip range>&spr=<protocol>&sig=<signature>>"
}
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}
Example: store the SAS token in Azure Key Vault
{
"name": "AzureDataLakeStorageGen2LinkedService",
"properties": {
"type": "AzureBlobFS",
"typeProperties": {
"sasUri": {
"type": "SecureString",
"value": "<SAS URI of the Azure Storage resource without token e.g. https://<accountname>.blob.core.windows.net/>"
},
"sasToken": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "<Azure Key Vault linked service name>",
"type": "LinkedServiceReference"
},
"secretName": "<secretName with value of SAS token e.g. ?sv=<storage version>&st=<start time>&se=<expire time>&sr=<resource>&sp=<permissions>&sip=<ip range>&spr=<protocol>&sig=<signature>>"
}
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}
When you create a shared access signature URI, consider the following points:
- Set appropriate read/write permissions on objects based on how the linked service is used (read, write, or read/write).
- Set Expiry time appropriately. Make sure that the access to Storage objects doesn't expire within the active period of the pipeline.
- The URI should be created at the right container or blob based on the need. A shared access signature URI to a blob allows the data factory or Synapse pipeline to access that particular blob. A shared access signature URI to a Blob storage container allows the data factory or Synapse pipeline to iterate through blobs in that container. To provide access to more or fewer objects later, or to update the shared access signature URI, remember to update the linked service with the new URI.
Service principal authentication
To use service principal authentication, follow these steps.
Register an application with the Microsoft identity platform. To learn how, see Quickstart: Register an application with the Microsoft identity platform. Make note of these values, which you use to define the linked service:
- Application ID
- Application key
- Tenant ID
Grant the service principal proper permission. See examples of how permission works in Data Lake Storage Gen2 from Access control lists on files and directories.
- As source: In Storage Explorer, grant at least Execute permission for ALL upstream folders and the file system, along with Read permission for the files to copy. Alternatively, in Access control (IAM), grant at least the Storage Blob Data Reader role.
- As sink: In Storage Explorer, grant at least Execute permission for ALL upstream folders and the file system, along with Write permission for the sink folder. Alternatively, in Access control (IAM), grant at least the Storage Blob Data Contributor role.
Note
If you use the UI to author, and the service principal isn't set with the "Storage Blob Data Reader/Contributor" role in IAM, when doing a test connection or browsing/navigating folders, choose "Test connection to file path" or "Browse from specified path", and specify a path with Read + Execute permission to continue.
These properties are supported for the linked service:
Property | Description | Required |
---|---|---|
type | The type property must be set to AzureBlobFS. | Yes |
url | Endpoint for Data Lake Storage Gen2 with the pattern of https://<accountname>.dfs.core.windows.net. | Yes |
servicePrincipalId | Specify the application's client ID. | Yes |
servicePrincipalCredentialType | The credential type to use for service principal authentication. Allowed values are ServicePrincipalKey and ServicePrincipalCert. | Yes |
servicePrincipalCredential | The service principal credential. When you use ServicePrincipalKey as the credential type, specify the application's key. Mark this field as SecureString to store it securely, or reference a secret stored in Azure Key Vault. When you use ServicePrincipalCert as the credential, reference a certificate in Azure Key Vault, and ensure the certificate content type is PKCS #12. | Yes |
servicePrincipalKey | Specify the application's key. Mark this field as SecureString to store it securely, or reference a secret stored in Azure Key Vault. This property is still supported as-is for servicePrincipalId + servicePrincipalKey. As ADF adds service principal certificate authentication, the new model for service principal authentication is servicePrincipalId + servicePrincipalCredentialType + servicePrincipalCredential. | No |
tenant | Specify the tenant information (domain name or tenant ID) under which your application resides. Retrieve it by hovering the mouse in the upper-right corner of the Azure portal. | Yes |
azureCloudType | For service principal authentication, specify the type of Azure cloud environment to which your Microsoft Entra application is registered. Allowed values are AzurePublic, AzureChina, AzureUsGovernment, and AzureGermany. By default, the data factory or Synapse pipeline's cloud environment is used. | No |
connectVia | The integration runtime to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. | No |
Example: using service principal key authentication
You can also store the service principal key in Azure Key Vault.
{
"name": "AzureDataLakeStorageGen2LinkedService",
"properties": {
"type": "AzureBlobFS",
"typeProperties": {
"url": "https://<accountname>.dfs.core.windows.net",
"servicePrincipalId": "<service principal id>",
"servicePrincipalCredentialType": "ServicePrincipalKey",
"servicePrincipalCredential": {
"type": "SecureString",
"value": "<service principal key>"
},
"tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>"
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}
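As noted in the table above, the service principal key can instead be referenced from Azure Key Vault. The following is a minimal sketch of that variant, using the same AzureKeyVaultSecret reference pattern as the certificate example below; the Key Vault linked service name and secret name are placeholders.
{
"name": "AzureDataLakeStorageGen2LinkedService",
"properties": {
"type": "AzureBlobFS",
"typeProperties": {
"url": "https://<accountname>.dfs.core.windows.net",
"servicePrincipalId": "<service principal id>",
"servicePrincipalCredentialType": "ServicePrincipalKey",
"servicePrincipalCredential": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "<Azure Key Vault linked service name>",
"type": "LinkedServiceReference"
},
"secretName": "<secret that stores the service principal key>"
},
"tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>"
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}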
Example: using service principal certificate authentication
{
"name": "AzureDataLakeStorageGen2LinkedService",
"properties": {
"type": "AzureBlobFS",
"typeProperties": {
"url": "https://<accountname>.dfs.core.windows.net",
"servicePrincipalId": "<service principal id>",
"servicePrincipalCredentialType": "ServicePrincipalCert",
"servicePrincipalCredential": {
"type": "AzureKeyVaultSecret",
"store": {
"referenceName": "<AKV reference>",
"type": "LinkedServiceReference"
},
"secretName": "<certificate name in AKV>"
},
"tenant": "<tenant info, e.g. microsoft.onmicrosoft.com>"
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}
System-assigned managed identity authentication
A data factory or Synapse workspace can be associated with a system-assigned managed identity. You can directly use this system-assigned managed identity for Data Lake Storage Gen2 authentication, similar to using your own service principal. It allows this designated factory or workspace to access and copy data to or from your Data Lake Storage Gen2.
To use system-assigned managed identity authentication, follow these steps.
Retrieve the system-assigned managed identity information by copying the value of the managed identity object ID generated along with your data factory or Synapse workspace.
Grant the system-assigned managed identity proper permission. See examples on how permission works in Data Lake Storage Gen2 from Access control lists on files and directories.
- As source: In Storage Explorer, grant at least Execute permission for ALL upstream folders and the file system, along with Read permission for the files to copy. Alternatively, in Access control (IAM), grant at least the Storage Blob Data Reader role.
- As sink: In Storage Explorer, grant at least Execute permission for ALL upstream folders and the file system, along with Write permission for the sink folder. Alternatively, in Access control (IAM), grant at least the Storage Blob Data Contributor role.
These properties are supported for the linked service:
Property | Description | Required |
---|---|---|
type | The type property must be set to AzureBlobFS. | Yes |
url | Endpoint for Data Lake Storage Gen2 with the pattern of https://<accountname>.dfs.core.windows.net. | Yes |
connectVia | The integration runtime to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. | No |
Example:
{
"name": "AzureDataLakeStorageGen2LinkedService",
"properties": {
"type": "AzureBlobFS",
"typeProperties": {
"url": "https://<accountname>.dfs.core.windows.net",
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}
User-assigned managed identity authentication
A data factory can be assigned one or multiple user-assigned managed identities. You can use a user-assigned managed identity for Data Lake Storage Gen2 authentication, which allows you to access and copy data from or to Data Lake Storage Gen2. To learn more about managed identities for Azure resources, see Managed identities for Azure resources.
To use user-assigned managed identity authentication, follow these steps:
Create one or multiple user-assigned managed identities and grant access to Azure Data Lake Storage Gen2. See examples on how permission works in Data Lake Storage Gen2 from Access control lists on files and directories.
- As source: In Storage Explorer, grant at least Execute permission for ALL upstream folders and the file system, along with Read permission for the files to copy. Alternatively, in Access control (IAM), grant at least the Storage Blob Data Reader role.
- As sink: In Storage Explorer, grant at least Execute permission for ALL upstream folders and the file system, along with Write permission for the sink folder. Alternatively, in Access control (IAM), grant at least the Storage Blob Data Contributor role.
Assign one or multiple user-assigned managed identities to your data factory and create credentials for each user-assigned managed identity.
These properties are supported for the linked service:
Property | Description | Required |
---|---|---|
type | The type property must be set to AzureBlobFS. | Yes |
url | Endpoint for Data Lake Storage Gen2 with the pattern of https://<accountname>.dfs.core.windows.net. | Yes |
credentials | Specify the user-assigned managed identity as the credential object. | Yes |
connectVia | The integration runtime to be used to connect to the data store. You can use the Azure integration runtime or a self-hosted integration runtime if your data store is in a private network. If not specified, the default Azure integration runtime is used. | No |
Example:
{
"name": "AzureDataLakeStorageGen2LinkedService",
"properties": {
"type": "AzureBlobFS",
"typeProperties": {
"url": "https://<accountname>.dfs.core.windows.net",
"credential": {
"referenceName": "credential1",
"type": "CredentialReference"
}
},
"connectVia": {
"referenceName": "<name of Integration Runtime>",
"type": "IntegrationRuntimeReference"
}
}
}
Note
If you use the Data Factory UI to author, and the managed identity isn't set with the "Storage Blob Data Reader/Contributor" role in IAM, when doing a test connection or browsing/navigating folders, choose "Test connection to file path" or "Browse from specified path", and specify a path with Read + Execute permission to continue.
Important
If you use PolyBase or the COPY statement to load data from Data Lake Storage Gen2 into Azure Synapse Analytics, when you use managed identity authentication for Data Lake Storage Gen2, make sure you also follow steps 1 to 3 in this guidance. Those steps register your server with Microsoft Entra ID and assign the Storage Blob Data Contributor role to your server; Data Factory handles the rest. If you configure Blob storage with an Azure Virtual Network endpoint, you also need to have Allow trusted Microsoft services to access this storage account turned on under the Azure Storage account's Firewalls and virtual networks settings, as required by Azure Synapse.
Dataset properties
For a full list of sections and properties available for defining datasets, see Datasets.
Azure Data Factory supports the following file formats. Refer to each article for format-based settings.
- Avro format
- Binary format
- Delimited text format
- Excel format
- JSON format
- ORC format
- Parquet format
- XML format
The following properties are supported for Data Lake Storage Gen2 under location settings in the format-based dataset:
Property | Description | Required |
---|---|---|
type | The type property under location in the dataset must be set to AzureBlobFSLocation. | Yes |
fileSystem | The Data Lake Storage Gen2 file system name. | No |
folderPath | The path to a folder under the given file system. If you want to use a wildcard to filter folders, skip this setting and specify it in activity source settings. | No |
fileName | The file name under the given fileSystem + folderPath. If you want to use a wildcard to filter files, skip this setting and specify it in activity source settings. | No |
Example:
{
"name": "DelimitedTextDataset",
"properties": {
"type": "DelimitedText",
"linkedServiceName": {
"referenceName": "<Data Lake Storage Gen2 linked service name>",
"type": "LinkedServiceReference"
},
"schema": [ < physical schema, optional, auto retrieved during authoring > ],
"typeProperties": {
"location": {
"type": "AzureBlobFSLocation",
"fileSystem": "filesystemname",
"folderPath": "folder/subfolder"
},
"columnDelimiter": ",",
"quoteChar": "\"",
"firstRowAsHeader": true,
"compressionCodec": "gzip"
}
}
}
Copy activity properties
For a full list of sections and properties available for defining activities, see Copy activity configurations and Pipelines and activities. This section provides a list of properties supported by the Data Lake Storage Gen2 source and sink.
Azure Data Lake Storage Gen2 as a source type
Azure Data Factory supports the following file formats. Refer to each article for format-based settings.
- Avro format
- Binary format
- Delimited text format
- Excel format
- JSON format
- ORC format
- Parquet format
- XML format
You have several options to copy data from ADLS Gen2:
- Copy from the given path specified in the dataset.
- Wildcard filter against the folder path or file name; see wildcardFolderPath and wildcardFileName.
- Copy the files defined in a given text file as a file set; see fileListPath.
The following properties are supported for Data Lake Storage Gen2 under storeSettings settings in the format-based copy source:
Property | Description | Required |
---|---|---|
type | The type property under storeSettings must be set to AzureBlobFSReadSettings. | Yes |
Locate the files to copy: | | |
OPTION 1: static path | Copy from the given file system or folder/file path specified in the dataset. If you want to copy all files from a file system/folder, additionally specify wildcardFileName as *. | |
OPTION 2: wildcard - wildcardFolderPath | The folder path with wildcard characters under the given file system configured in the dataset to filter source folders. Allowed wildcards are: * (matches zero or more characters) and ? (matches zero or a single character); use ^ to escape if your actual folder name has a wildcard or this escape char inside. See more examples in Folder and file filter examples. | No |
OPTION 2: wildcard - wildcardFileName | The file name with wildcard characters under the given file system + folderPath/wildcardFolderPath to filter source files. Allowed wildcards are: * (matches zero or more characters) and ? (matches zero or a single character); use ^ to escape if your actual file name has a wildcard or this escape char inside. See more examples in Folder and file filter examples. | Yes |
OPTION 3: a list of files - fileListPath | Indicates to copy a given file set. Point to a text file that includes a list of files you want to copy, one file per line, which is the relative path to the path configured in the dataset. When using this option, don't specify a file name in the dataset. See more examples in File list examples. | No |
Additional settings: | | |
recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. Note that when recursive is set to true and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. Allowed values are true (default) and false. This property doesn't apply when you configure fileListPath. | No |
deleteFilesAfterCompletion | Indicates whether the binary files will be deleted from the source store after successfully moving to the destination store. The file deletion is per file, so when the copy activity fails, you'll see that some files have already been copied to the destination and deleted from the source, while others still remain in the source store. This property is only valid in the binary files copy scenario. The default value: false. | No |
modifiedDatetimeStart | Files filter based on the attribute Last Modified. The files are selected if their last modified time is greater than or equal to modifiedDatetimeStart and less than modifiedDatetimeEnd. The time is applied to the UTC time zone in the format "2018-12-01T05:00:00Z". The properties can be NULL, which means no file attribute filter is applied to the dataset. When modifiedDatetimeStart has a datetime value but modifiedDatetimeEnd is NULL, the files whose last modified attribute is greater than or equal to the datetime value are selected. When modifiedDatetimeEnd has a datetime value but modifiedDatetimeStart is NULL, the files whose last modified attribute is less than the datetime value are selected. This property doesn't apply when you configure fileListPath. | No |
modifiedDatetimeEnd | Same as above. | No |
enablePartitionDiscovery | For files that are partitioned, specify whether to parse the partitions from the file path and add them as additional source columns. Allowed values are false (default) and true. | No |
partitionRootPath | When partition discovery is enabled, specify the absolute root path in order to read partitioned folders as data columns. If it isn't specified, by default: - When you use a file path in the dataset or a list of files on the source, the partition root path is the path configured in the dataset. - When you use a wildcard folder filter, the partition root path is the sub-path before the first wildcard. For example, assuming you configure the path in the dataset as "root/folder/year=2020/month=08/day=27": - If you specify the partition root path as "root/folder/year=2020", the copy activity generates two more columns, month and day, with the values "08" and "27" respectively, in addition to the columns inside the files. - If the partition root path isn't specified, no extra column is generated. | No |
maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections. | No |
Example:
"activities":[
{
"name": "CopyFromADLSGen2",
"type": "Copy",
"inputs": [
{
"referenceName": "<Delimited text input dataset name>",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "<output dataset name>",
"type": "DatasetReference"
}
],
"typeProperties": {
"source": {
"type": "DelimitedTextSource",
"formatSettings":{
"type": "DelimitedTextReadSettings",
"skipLineCount": 10
},
"storeSettings":{
"type": "AzureBlobFSReadSettings",
"recursive": true,
"wildcardFolderPath": "myfolder*A",
"wildcardFileName": "*.csv"
}
},
"sink": {
"type": "<sink type>"
}
}
}
]
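For comparison with the wildcard example above, the following sketch reads a fixed file set through fileListPath, as described in the table; the list-file path is a placeholder, and per the table no file name is specified in the dataset when this option is used.
"activities":[
{
"name": "CopyFromADLSGen2FileList",
"type": "Copy",
"inputs": [
{
"referenceName": "<Delimited text input dataset name>",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "<output dataset name>",
"type": "DatasetReference"
}
],
"typeProperties": {
"source": {
"type": "DelimitedTextSource",
"storeSettings":{
"type": "AzureBlobFSReadSettings",
"fileListPath": "<filesystem>/<folder containing the list file>/FileListToCopy.txt"
}
},
"sink": {
"type": "<sink type>"
}
}
}
]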
Azure Data Lake Storage Gen2 as a sink type
Azure Data Factory supports the following file formats. Refer to each article for format-based settings.
The following properties are supported for Data Lake Storage Gen2 under storeSettings settings in the format-based copy sink:
Property | Description | Required |
---|---|---|
type | The type property under storeSettings must be set to AzureBlobFSWriteSettings. | Yes |
copyBehavior | Defines the copy behavior when the source is files from a file-based data store. Allowed values are: - PreserveHierarchy (default): Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder. - FlattenHierarchy: All files from the source folder are in the first level of the target folder. The target files have autogenerated names. - MergeFiles: Merges all files from the source folder to one file. If the file name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. | No |
blockSizeInMB | Specify the block size in MB used to write data to ADLS Gen2. Learn more about Block Blobs. The allowed value is between 4 MB and 100 MB. By default, ADF automatically determines the block size based on your source store type and data. For non-binary copy into ADLS Gen2, the default block size is 100 MB so as to fit at most approximately 4.75 TB of data. It might not be optimal when your data isn't large, especially when you use a self-hosted integration runtime with a poor network, resulting in operation timeouts or performance issues. You can explicitly specify a block size, while ensuring that blockSizeInMB*50000 is big enough to store the data; otherwise, the copy activity run will fail. | No |
maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections. | No |
metadata | Set custom metadata when copying to the sink. Each object under the metadata array represents an extra column. The name defines the metadata key name, and the value indicates the data value of that key. If the preserve attributes feature is used, the specified metadata will union/overwrite with the source file metadata. Allowed data values are: - $$LASTMODIFIED: a reserved variable that indicates storing the source files' last modified time. Applies to a file-based source with binary format only. - Expression - Static value | No |
Example:
"activities":[
{
"name": "CopyToADLSGen2",
"type": "Copy",
"inputs": [
{
"referenceName": "<input dataset name>",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "<Parquet output dataset name>",
"type": "DatasetReference"
}
],
"typeProperties": {
"source": {
"type": "<source type>"
},
"sink": {
"type": "ParquetSink",
"storeSettings":{
"type": "AzureBlobFSWriteSettings",
"copyBehavior": "PreserveHierarchy",
"metadata": [
{
"name": "testKey1",
"value": "value1"
},
{
"name": "testKey2",
"value": "value2"
},
{
"name": "lastModifiedKey",
"value": "$$LASTMODIFIED"
}
]
}
}
}
}
]
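If you need to override the default block size described in the table above, blockSizeInMB goes under the same storeSettings object as the other sink options. This fragment is a sketch only, and the value shown is illustrative.
"sink": {
"type": "ParquetSink",
"storeSettings":{
"type": "AzureBlobFSWriteSettings",
"copyBehavior": "PreserveHierarchy",
"blockSizeInMB": 8
}
}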
Folder and file filter examples
This section describes the resulting behavior of the folder path and file name with wildcard filters.
folderPath | fileName | recursive | Source folder structure and filter result (files in bold are retrieved) |
---|---|---|---|
Folder* | (Empty, use default) | false | FolderA **File1.csv** **File2.json** Subfolder1 File3.csv File4.json File5.csv AnotherFolderB File6.csv |
Folder* | (Empty, use default) | true | FolderA **File1.csv** **File2.json** Subfolder1 **File3.csv** **File4.json** **File5.csv** AnotherFolderB File6.csv |
Folder* | *.csv | false | FolderA **File1.csv** File2.json Subfolder1 File3.csv File4.json File5.csv AnotherFolderB File6.csv |
Folder* | *.csv | true | FolderA **File1.csv** File2.json Subfolder1 **File3.csv** File4.json **File5.csv** AnotherFolderB File6.csv |
File list examples
This section describes the resulting behavior of using file list path in copy activity source.
Assuming you have the following source folder structure and want to copy the files in bold:
Sample source structure | Content in FileListToCopy.txt | ADF configuration |
---|---|---|
filesystem FolderA **File1.csv** File2.json Subfolder1 **File3.csv** File4.json **File5.csv** Metadata FileListToCopy.txt | File1.csv Subfolder1/File3.csv Subfolder1/File5.csv | In dataset: - File system: filesystem - Folder path: FolderA In copy activity source: - File list path: filesystem/Metadata/FileListToCopy.txt The file list path points to a text file in the same data store that includes a list of files you want to copy, one file per line with the relative path to the path configured in the dataset. |
Some recursive and copyBehavior examples
This section describes the resulting behavior of the copy operation for different combinations of recursive and copyBehavior values.
recursive | copyBehavior | Source folder structure | Resulting target |
---|---|---|---|
true | preserveHierarchy | Folder1 File1 File2 Subfolder1 File3 File4 File5 | The target Folder1 is created with the same structure as the source: Folder1 File1 File2 Subfolder1 File3 File4 File5 |
true | flattenHierarchy | Folder1 File1 File2 Subfolder1 File3 File4 File5 | The target Folder1 is created with the following structure: Folder1 autogenerated name for File1 autogenerated name for File2 autogenerated name for File3 autogenerated name for File4 autogenerated name for File5 |
true | mergeFiles | Folder1 File1 File2 Subfolder1 File3 File4 File5 | The target Folder1 is created with the following structure: Folder1 File1 + File2 + File3 + File4 + File5 contents are merged into one file with an autogenerated file name. |
false | preserveHierarchy | Folder1 File1 File2 Subfolder1 File3 File4 File5 | The target Folder1 is created with the following structure: Folder1 File1 File2 Subfolder1 with File3, File4, and File5 isn't picked up. |
false | flattenHierarchy | Folder1 File1 File2 Subfolder1 File3 File4 File5 | The target Folder1 is created with the following structure: Folder1 autogenerated name for File1 autogenerated name for File2 Subfolder1 with File3, File4, and File5 isn't picked up. |
false | mergeFiles | Folder1 File1 File2 Subfolder1 File3 File4 File5 | The target Folder1 is created with the following structure: Folder1 File1 + File2 contents are merged into one file with an autogenerated file name: autogenerated name for File1. Subfolder1 with File3, File4, and File5 isn't picked up. |
Preserve metadata during copy
When you copy files from Amazon S3/Azure Blob/Azure Data Lake Storage Gen2 to Azure Data Lake Storage Gen2/Azure Blob, you can choose to preserve the file metadata along with data. Learn more from Preserve metadata.
Preserve ACLs from Data Lake Storage Gen1/Gen2
When you copy files from Azure Data Lake Storage Gen1/Gen2 to Gen2, you can choose to preserve the POSIX access control lists (ACLs) along with data. Learn more from Preserve ACLs from Data Lake Storage Gen1/Gen2 to Gen2.
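As a rough sketch of how this looks in the copy activity JSON (see the linked Preserve ACLs article for the authoritative syntax and its binary-copy requirement), the preserve setting sits alongside source and sink in typeProperties; the same setting with ["Attributes"] covers the metadata preservation described in the previous section.
"typeProperties": {
"source": {
"type": "BinarySource",
"storeSettings": {
"type": "AzureBlobFSReadSettings",
"recursive": true
}
},
"sink": {
"type": "BinarySink",
"storeSettings": {
"type": "AzureBlobFSWriteSettings",
"copyBehavior": "PreserveHierarchy"
}
},
"preserve": ["ACL", "Owner", "Group"]
}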
Tip
To copy data from Azure Data Lake Storage Gen1 into Gen2 in general, see Copy data from Azure Data Lake Storage Gen1 to Gen2 for a walk-through and best practices.
Mapping data flow properties
When you're transforming data in mapping data flows, you can read and write files from Azure Data Lake Storage Gen2 in supported file formats. Format-specific settings are located in the documentation for that format. For more information, see Source transformation in mapping data flow and Sink transformation in mapping data flow.
Source transformation
In the source transformation, you can read from a container, folder, or individual file in Azure Data Lake Storage Gen2. The Source options tab lets you manage how the files get read.
Wildcard path: Using a wildcard pattern will instruct ADF to loop through each matching folder and file in a single Source transformation. This is an effective way to process multiple files within a single flow. Add multiple wildcard matching patterns with the + sign that appears when hovering over your existing wildcard pattern.
From your source container, choose a series of files that match a pattern. Only the container can be specified in the dataset. Your wildcard path must therefore also include your folder path from the root folder.
Wildcard examples:
- * Represents any set of characters.
- ** Represents recursive directory nesting.
- ? Replaces one character.
- [] Matches one or more characters in the brackets.
- /data/sales/**/*.csv Gets all .csv files under /data/sales.
- /data/sales/20??/**/ Gets all files in the 20th century.
- /data/sales/*/*/*.csv Gets .csv files two levels under /data/sales.
- /data/sales/2004/*/12/[XY]1?.csv Gets all .csv files in December 2004 starting with X or Y, prefixed by a two-digit number.
Partition Root Path: If you have partitioned folders in your file source with a key=value format (for example, year=2019), then you can assign the top level of that partition folder tree to a column name in your data flow data stream.
First, set a wildcard to include all paths that are the partitioned folders plus the leaf files that you wish to read.
Use the Partition Root Path setting to define what the top level of the folder structure is. When you view the contents of your data via a data preview, you'll see that ADF will add the resolved partitions found in each of your folder levels.
List of files: This is a file set. Create a text file that includes a list of relative path files to process. Point to this text file.
Column to store file name: Store the name of the source file in a column in your data. Enter a new column name here to store the file name string.
After completion: Choose to do nothing with the source file after the data flow runs, delete the source file, or move the source file. The paths for the move are relative.
To move source files to another location post-processing, first select "Move" for file operation. Then, set the "from" directory. If you're not using any wildcards for your path, then the "from" setting will be the same folder as your source folder.
If you have a source path with a wildcard, your syntax looks like this:
/data/sales/20??/**/*.csv
You can specify "from" as
/data/sales
And "to" as
/backup/priorSales
In this case, all files that were sourced under /data/sales are moved to /backup/priorSales.
Note
File operations run only when you start the data flow from a pipeline run (a pipeline debug or execution run) that uses the Execute Data Flow activity in a pipeline. File operations do not run in Data Flow debug mode.
Filter by last modified: You can filter which files you process by specifying a date range of when they were last modified. All date-times are in UTC.
Enable change data capture: If true, you get only new or changed files since the last run. The first run always loads a full snapshot, and subsequent runs capture new or changed files only. For more details, see Change data capture.
Sink properties
In the sink transformation, you can write to either a container or folder in Azure Data Lake Storage Gen2. The Settings tab lets you manage how the files get written.
Clear the folder: Determines whether or not the destination folder gets cleared before the data is written.
File name option: Determines how the destination files are named in the destination folder. The file name options are:
- Default: Allow Spark to name files based on PART defaults.
- Pattern: Enter a pattern that enumerates your output files per partition. For example, loans[n].csv will create loans1.csv, loans2.csv, and so on.
- Per partition: Enter one file name per partition.
- As data in column: Set the output file to the value of a column. The path is relative to the dataset container, not the destination folder. If you have a folder path in your dataset, it will be overridden.
- Output to a single file: Combine the partitioned output files into a single named file. The path is relative to the dataset folder. Please be aware that the merge operation can possibly fail based upon node size. This option is not recommended for large datasets.
Quote all: Determines whether to enclose all values in quotes.
umask
You can optionally set the umask for files using POSIX read, write, and execute flags for owner, user, and group.
Pre-processing and post-processing commands
You can optionally execute Hadoop filesystem commands before or after writing to an ADLS Gen2 sink. The following commands are supported:
cp
mv
rm
mkdir
Examples:
mkdir /folder1
mkdir -p folder1
mv /folder1/*.* /folder2/
cp /folder1/file1.txt /folder2
rm -r /folder1
Parameters are also supported through expression builder, for example:
mkdir -p {$tempPath}/commands/c1/c2
mv {$tempPath}/commands/*.* {$tempPath}/commands/c1/c2
By default, folders are created as user/root. Refer to the top-level container with '/'.
Lookup activity properties
To learn details about the properties, check Lookup activity.
GetMetadata activity properties
To learn details about the properties, check GetMetadata activity.
Delete activity properties
To learn details about the properties, check Delete activity.
Legacy models
Note
The following models are still supported as-is for backward compatibility. We recommend that you use the new model mentioned in the sections above going forward; the ADF authoring UI has switched to generating the new model.
Legacy dataset model
Property | Description | Required |
---|---|---|
type | The type property of the dataset must be set to AzureBlobFSFile. | Yes |
folderPath | Path to the folder in Data Lake Storage Gen2. If not specified, it points to the root. Wildcard filter is supported. Allowed wildcards are * (matches zero or more characters) and ? (matches zero or single character). Use ^ to escape if your actual folder name has a wildcard or this escape char is inside. Examples: filesystem/folder/. See more examples in Folder and file filter examples. | No |
fileName | Name or wildcard filter for the files under the specified "folderPath". If you don't specify a value for this property, the dataset points to all files in the folder. For filter, the wildcards allowed are * (matches zero or more characters) and ? (matches zero or single character). - Example 1: "fileName": "*.csv" - Example 2: "fileName": "???20180427.txt" Use ^ to escape if your actual file name has a wildcard or this escape char is inside. When fileName isn't specified for an output dataset and preserveHierarchy isn't specified in the activity sink, the copy activity automatically generates the file name with the following pattern: "Data.[activity run ID GUID].[GUID if FlattenHierarchy].[format if configured].[compression if configured]", for example, "Data.0a405f8a-93ff-4c6f-b3be-f69616f1df7a.txt.gz". If you copy from a tabular source using a table name instead of a query, the name pattern is "[table name].[format].[compression if configured]", for example, "MyTable.csv". | No |
modifiedDatetimeStart | Files filter based on the attribute Last Modified. The files are selected if their last modified time is greater than or equal to modifiedDatetimeStart and less than modifiedDatetimeEnd. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z". The overall performance of data movement is affected by enabling this setting when you want to do file filter with huge amounts of files. The properties can be NULL, which means no file attribute filter is applied to the dataset. When modifiedDatetimeStart has a datetime value but modifiedDatetimeEnd is NULL, it means the files whose last modified attribute is greater than or equal to the datetime value are selected. When modifiedDatetimeEnd has a datetime value but modifiedDatetimeStart is NULL, it means the files whose last modified attribute is less than the datetime value are selected. | No |
modifiedDatetimeEnd | Files filter based on the attribute Last Modified. The files are selected if their last modified time is greater than or equal to modifiedDatetimeStart and less than modifiedDatetimeEnd. The time is applied to the UTC time zone in the format of "2018-12-01T05:00:00Z". The overall performance of data movement is affected by enabling this setting when you want to do file filter with huge amounts of files. The properties can be NULL, which means no file attribute filter is applied to the dataset. When modifiedDatetimeStart has a datetime value but modifiedDatetimeEnd is NULL, it means the files whose last modified attribute is greater than or equal to the datetime value are selected. When modifiedDatetimeEnd has a datetime value but modifiedDatetimeStart is NULL, it means the files whose last modified attribute is less than the datetime value are selected. | No |
format | If you want to copy files as is between file-based stores (binary copy), skip the format section in both the input and output dataset definitions. If you want to parse or generate files with a specific format, the following file format types are supported: TextFormat, JsonFormat, AvroFormat, OrcFormat, and ParquetFormat. Set the type property under format to one of these values. For more information, see the Text format, JSON format, Avro format, ORC format, and Parquet format sections. | No (only for binary copy scenario) |
compression | Specify the type and level of compression for the data. For more information, see Supported file formats and compression codecs. Supported types are **GZip**, **Deflate**, **BZip2**, and **ZipDeflate**. Supported levels are Optimal and Fastest. | No |
Tip
- To copy all files under a folder, specify folderPath only.
- To copy a single file with a given name, specify folderPath with a folder part and fileName with a file name.
- To copy a subset of files under a folder, specify folderPath with a folder part and fileName with a wildcard filter.
Example:
{
"name": "ADLSGen2Dataset",
"properties": {
"type": "AzureBlobFSFile",
"linkedServiceName": {
"referenceName": "<Azure Data Lake Storage Gen2 linked service name>",
"type": "LinkedServiceReference"
},
"typeProperties": {
"folderPath": "myfilesystem/myfolder",
"fileName": "*",
"modifiedDatetimeStart": "2018-12-01T05:00:00Z",
"modifiedDatetimeEnd": "2018-12-01T06:00:00Z",
"format": {
"type": "TextFormat",
"columnDelimiter": ",",
"rowDelimiter": "\n"
},
"compression": {
"type": "GZip",
"level": "Optimal"
}
}
}
}
Legacy copy activity source model
Property | Description | Required |
---|---|---|
type | The type property of the copy activity source must be set to AzureBlobFSSource. | Yes |
recursive | Indicates whether the data is read recursively from the subfolders or only from the specified folder. When recursive is set to true and the sink is a file-based store, an empty folder or subfolder isn't copied or created at the sink. Allowed values are true (default) and false. |
No |
maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections. | No |
Example:
"activities":[
{
"name": "CopyFromADLSGen2",
"type": "Copy",
"inputs": [
{
"referenceName": "<ADLS Gen2 input dataset name>",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "<output dataset name>",
"type": "DatasetReference"
}
],
"typeProperties": {
"source": {
"type": "AzureBlobFSSource",
"recursive": true
},
"sink": {
"type": "<sink type>"
}
}
}
]
Legacy copy activity sink model
Property | Description | Required |
---|---|---|
type | The type property of the copy activity sink must be set to AzureBlobFSSink. | Yes |
copyBehavior | Defines the copy behavior when the source is files from a file-based data store. Allowed values are: - PreserveHierarchy (default): Preserves the file hierarchy in the target folder. The relative path of the source file to the source folder is identical to the relative path of the target file to the target folder. - FlattenHierarchy: All files from the source folder are in the first level of the target folder. The target files have autogenerated names. - MergeFiles: Merges all files from the source folder to one file. If the file name is specified, the merged file name is the specified name. Otherwise, it's an autogenerated file name. |
No |
maxConcurrentConnections | The upper limit of concurrent connections established to the data store during the activity run. Specify a value only when you want to limit concurrent connections. | No |
Example:
"activities":[
{
"name": "CopyToADLSGen2",
"type": "Copy",
"inputs": [
{
"referenceName": "<input dataset name>",
"type": "DatasetReference"
}
],
"outputs": [
{
"referenceName": "<ADLS Gen2 output dataset name>",
"type": "DatasetReference"
}
],
"typeProperties": {
"source": {
"type": "<source type>"
},
"sink": {
"type": "AzureBlobFSSink",
"copyBehavior": "PreserveHierarchy"
}
}
}
]
Change data capture
Azure Data Factory can get new or changed files only from Azure Data Lake Storage Gen2 by enabling Enable change data capture in the mapping data flow source transformation. With this connector option, you can read new or updated files only and apply transformations before loading transformed data into destination datasets of your choice.
Make sure you keep the pipeline and activity names unchanged, so that the checkpoint can always be recorded from the last run and changes are picked up from there. If you change the pipeline name or activity name, the checkpoint is reset, and you start from the beginning in the next run.
When you debug the pipeline, Enable change data capture works as well. Be aware that the checkpoint is reset when you refresh your browser during the debug run. After you're satisfied with the results of the debug run, you can publish and trigger the pipeline. It always starts from the beginning regardless of the previous checkpoint recorded by the debug run.
In the monitoring section, you can always rerun a pipeline. When you do, the changes are always picked up from the checkpoint record in the selected pipeline run.
Related content
For a list of data stores supported as sources and sinks by the copy activity, see Supported data stores.