In your case you need a self-hosted integration runtime: follow the instructions to download and install it on a machine that has access to your NAS drive.
After installation, register the integration runtime with your ADF.
- Create the linked services:
Create a linked service, configure it to use the self-hosted integration runtime, and provide the details needed to connect to the NAS drive (file path, credentials).
Similarly, create another linked service for your Azure Blob Storage and configure it with the necessary details (storage account name, key, container).
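As a rough sketch, the two linked services end up as JSON definitions similar to the following (shown here as Python dicts; the names `SelfHostedIR`, `NasFileSystemLS`, `BlobStorageLS`, the UNC path, and the credentials are all placeholders):

```python
# Sketch of the NAS linked service: the "FileServer" connector routed
# through the self-hosted IR via "connectVia". All values are placeholders.
nas_linked_service = {
    "name": "NasFileSystemLS",
    "properties": {
        "type": "FileServer",                      # on-premises file system connector
        "typeProperties": {
            "host": "\\\\nas-server\\share",       # UNC path to the NAS share (placeholder)
            "userId": "DOMAIN\\svc_adf",           # account with read access (placeholder)
            "password": {
                "type": "SecureString",
                "value": "<password>",             # better: a Key Vault reference
            },
        },
        "connectVia": {                            # run the connection on the self-hosted IR
            "referenceName": "SelfHostedIR",
            "type": "IntegrationRuntimeReference",
        },
    },
}

# Sketch of the Blob Storage linked service (account name/key placeholders).
blob_linked_service = {
    "name": "BlobStorageLS",
    "properties": {
        "type": "AzureBlobStorage",
        "typeProperties": {
            "connectionString": "DefaultEndpointsProtocol=https;AccountName=<account>;AccountKey=<key>"
        },
    },
}
```

In the portal the UI fills in this JSON for you; the key point is that only the NAS linked service needs the `connectVia` reference to the self-hosted integration runtime.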
- Create the Datasets:
- Create Dataset for NAS Drive:
- Go to the "Author" tab and create a new dataset.
- Choose "File System" as the dataset type.
- Configure the dataset to use the linked service created for the NAS drive.
- Specify the file path or pattern if needed.
- Create Dataset for Azure Blob Storage:
- Similarly, create another dataset for Azure Blob Storage.
- Choose "Azure Blob Storage" as the dataset type and configure it to use the linked service for Blob Storage.
- Specify the container and folder path if needed.
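The two datasets can be sketched the same way. Binary datasets are a common choice for straight file-to-file copies; the dataset names, folder paths, and container below are placeholders:

```python
# Sketch of the NAS-side dataset, pointing at the FileServer linked service.
nas_dataset = {
    "name": "NasFiles",
    "properties": {
        "type": "Binary",
        "linkedServiceName": {
            "referenceName": "NasFileSystemLS",   # placeholder linked service name
            "type": "LinkedServiceReference",
        },
        "typeProperties": {
            "location": {
                "type": "FileServerLocation",
                "folderPath": "exports/daily",     # placeholder folder on the share
            }
        },
    },
}

# Sketch of the Blob-side dataset, pointing at the Blob Storage linked service.
blob_dataset = {
    "name": "BlobFiles",
    "properties": {
        "type": "Binary",
        "linkedServiceName": {
            "referenceName": "BlobStorageLS",      # placeholder linked service name
            "type": "LinkedServiceReference",
        },
        "typeProperties": {
            "location": {
                "type": "AzureBlobStorageLocation",
                "container": "ingest",             # placeholder container
                "folderPath": "nas-copy",          # placeholder destination folder
            }
        },
    },
}
```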
- Create the pipeline:
- Create a New Pipeline:
- In the "Author" tab, create a new pipeline.
- Add Copy Activity:
- Drag and drop the "Copy data" activity onto the pipeline canvas.
- Configure the copy activity to use the NAS drive dataset as the source and the Azure Blob Storage dataset as the sink.
- Configure Source and Sink:
- In the source tab, specify the file path or wildcard pattern to include all required files.
- In the sink tab, specify the destination folder and file naming pattern if necessary.
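The pipeline the steps above produce looks roughly like this Copy activity definition (again a sketch with placeholder names; the `*.csv` wildcard is just an example pattern):

```python
# Sketch of a pipeline with one Copy activity: NAS dataset as source,
# Blob dataset as sink. Names and the wildcard pattern are placeholders.
copy_pipeline = {
    "name": "CopyNasToBlob",
    "properties": {
        "activities": [
            {
                "name": "CopyFromNas",
                "type": "Copy",
                "inputs": [{"referenceName": "NasFiles", "type": "DatasetReference"}],
                "outputs": [{"referenceName": "BlobFiles", "type": "DatasetReference"}],
                "typeProperties": {
                    "source": {
                        "type": "BinarySource",
                        "storeSettings": {
                            "type": "FileServerReadSettings",
                            "recursive": True,           # include subfolders
                            "wildcardFileName": "*.csv", # example wildcard pattern
                        },
                    },
                    "sink": {
                        "type": "BinarySink",
                        "storeSettings": {"type": "AzureBlobStorageWriteSettings"},
                    },
                },
            }
        ]
    },
}
```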
Finally, publish the pipeline, then trigger it manually or on a schedule.
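For a recurring copy, a schedule trigger attached to the pipeline would look roughly like this (a sketch; the trigger name, start time, and recurrence are placeholders):

```python
# Sketch of a schedule trigger that runs the copy pipeline once a day.
# All values are placeholders for illustration.
daily_trigger = {
    "name": "DailyNasCopyTrigger",
    "properties": {
        "type": "ScheduleTrigger",
        "typeProperties": {
            "recurrence": {
                "frequency": "Day",
                "interval": 1,
                "startTime": "2024-01-01T02:00:00Z",  # placeholder start time
                "timeZone": "UTC",
            }
        },
        "pipelines": [
            {
                "pipelineReference": {
                    "referenceName": "CopyNasToBlob",  # placeholder pipeline name
                    "type": "PipelineReference",
                }
            }
        ],
    },
}
```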
More links:
https://learn.microsoft.com/en-us/azure/data-factory/connector-file-system?tabs=data-factory