Hello @Vieira Dias, Leonor
please find the steps below.
To connect the two storage accounts using Azure Data Factory (ADF) and move files between them, follow these steps:
Set up your Azure Data Factory: If you haven't already, create an Azure Data Factory instance in your Azure subscription. You can do this through the Azure portal by searching for "Data Factory" and following the steps to create a new instance.
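If you prefer to script this instead of clicking through the portal, a minimal sketch using the azure-mgmt-datafactory Python SDK could look like the following. The subscription ID, region, and factory name are placeholders, not values from your environment.

```python
# pip install azure-identity azure-mgmt-datafactory
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import Factory

subscription_id = "<your-subscription-id>"   # placeholder
rg_name = "A3"                               # resource group that will host the factory
df_name = "my-demo-adf"                      # hypothetical factory name (must be globally unique)
adf_client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# Create (or update) the Data Factory instance
df = adf_client.factories.create_or_update(rg_name, df_name, Factory(location="westeurope"))
print(df.provisioning_state)
```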
Create linked services: In your Data Factory instance, create linked services to represent your source (storage account A2 in resource group A3) and destination (storage account B2 in resource group B3) storage accounts. Linked services define the connection information and credentials required to connect to the storage accounts; a scripted sketch of this step follows the sub-steps below.
For each storage account, open your Data Factory instance in the Azure portal and launch Azure Data Factory Studio.
In the Manage hub, select "Linked services" and then "New."
Select the appropriate store type (e.g., Azure Blob Storage) and provide the necessary details, such as the storage account name and access key.
Repeat this process for both storage accounts.
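For reference, the same step in the Python SDK might look roughly like this, continuing from the client created in the earlier sketch. The linked-service names ("SourceBlobLS", "SinkBlobLS") and the connection strings are placeholders you would replace with your own values.

```python
from azure.mgmt.datafactory.models import (
    AzureBlobStorageLinkedService, LinkedServiceResource, SecureString)

# adf_client, rg_name and df_name come from the earlier sketch
source_conn = SecureString(value="DefaultEndpointsProtocol=https;AccountName=A2;AccountKey=<key>")
dest_conn = SecureString(value="DefaultEndpointsProtocol=https;AccountName=B2;AccountKey=<key>")

# One linked service per storage account
adf_client.linked_services.create_or_update(
    rg_name, df_name, "SourceBlobLS",
    LinkedServiceResource(properties=AzureBlobStorageLinkedService(connection_string=source_conn)))
adf_client.linked_services.create_or_update(
    rg_name, df_name, "SinkBlobLS",
    LinkedServiceResource(properties=AzureBlobStorageLinkedService(connection_string=dest_conn)))
```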
Create datasets: In Data Factory, datasets represent the data structures you'll be working with. Create two datasets, one for each storage account, to define the structure and location of the files; see the sketch after these sub-steps.
For each storage account, launch Azure Data Factory Studio and go to the Author hub.
Select the "+" (Add new resource) button and then "Dataset."
Select the appropriate dataset type (e.g., Azure Blob Storage) and configure the dataset properties, such as the linked service, container, and file format.
Repeat this process for both storage accounts.
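A rough equivalent in the Python SDK, again continuing from the earlier sketch; the dataset names and container/folder paths below are made-up examples.

```python
from azure.mgmt.datafactory.models import (
    AzureBlobDataset, DatasetResource, LinkedServiceReference)

# adf_client, rg_name and df_name come from the earlier sketch;
# container and folder names below are examples only
source_ds = AzureBlobDataset(
    linked_service_name=LinkedServiceReference(
        type="LinkedServiceReference", reference_name="SourceBlobLS"),
    folder_path="input-container/files")
sink_ds = AzureBlobDataset(
    linked_service_name=LinkedServiceReference(
        type="LinkedServiceReference", reference_name="SinkBlobLS"),
    folder_path="output-container/files")

adf_client.datasets.create_or_update(rg_name, df_name, "SourceDataset",
                                     DatasetResource(properties=source_ds))
adf_client.datasets.create_or_update(rg_name, df_name, "SinkDataset",
                                     DatasetResource(properties=sink_ds))
```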
Create a pipeline: Pipelines in ADF define the workflow and activities to be performed. Create a pipeline to move the files from the source storage account to the destination storage account; a sketch follows these sub-steps.
In Azure Data Factory Studio, go to the Author hub, select the "+" (Add new resource) button, and then "Pipeline."
Give your pipeline a name and add activities to it.
Add a "Copy Data" activity to the pipeline, specifying the source and destination datasets you created earlier.
Configure the copy activity settings, such as file filters, mappings, and performance options.
Save and publish your pipeline.
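Sketched with the Python SDK, a minimal pipeline with a single Copy activity between the two datasets might look like this; the activity and pipeline names are placeholders, and a plain blob-to-blob copy is assumed rather than any format conversion.

```python
from azure.mgmt.datafactory.models import (
    BlobSink, BlobSource, CopyActivity, DatasetReference, PipelineResource)

# adf_client, rg_name and df_name come from the earlier sketch
copy_activity = CopyActivity(
    name="CopyFromA2ToB2",
    inputs=[DatasetReference(type="DatasetReference", reference_name="SourceDataset")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="SinkDataset")],
    source=BlobSource(),
    sink=BlobSink())

# Publish a pipeline containing the single copy activity
adf_client.pipelines.create_or_update(
    rg_name, df_name, "CopyFilesPipeline",
    PipelineResource(activities=[copy_activity]))
```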
Trigger the pipeline: Once your pipeline is published, you can trigger it manually or schedule it to run at specific intervals using triggers. Triggers can be set up to automate the movement of files between the storage accounts; see the sketch after these sub-steps.
In Azure Data Factory Studio, open your pipeline and select "Add trigger" > "New/Edit" (triggers can also be managed from the Manage hub).
Create a new trigger and define its properties, such as the start time, recurrence, and execution settings.
Associate your pipeline with the trigger.
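Continuing the Python sketch, you can either fire the pipeline once on demand or attach a schedule trigger. The trigger name and the hourly recurrence below are just examples; note that a newly created trigger must be started before the schedule takes effect.

```python
from datetime import datetime, timezone
from azure.mgmt.datafactory.models import (
    PipelineReference, ScheduleTrigger, ScheduleTriggerRecurrence,
    TriggerPipelineReference, TriggerResource)

# adf_client, rg_name and df_name come from the earlier sketch

# Option 1: run the pipeline once, on demand
run = adf_client.pipelines.create_run(rg_name, df_name, "CopyFilesPipeline", parameters={})
print(run.run_id)

# Option 2: attach an hourly schedule trigger
recurrence = ScheduleTriggerRecurrence(
    frequency="Hour", interval=1,
    start_time=datetime(2024, 1, 1, tzinfo=timezone.utc), time_zone="UTC")
trigger = ScheduleTrigger(
    recurrence=recurrence,
    pipelines=[TriggerPipelineReference(
        pipeline_reference=PipelineReference(
            type="PipelineReference", reference_name="CopyFilesPipeline"),
        parameters={})])

adf_client.triggers.create_or_update(rg_name, df_name, "HourlyCopyTrigger",
                                     TriggerResource(properties=trigger))
# Triggers are created in a stopped state; start it so the schedule takes effect
adf_client.triggers.begin_start(rg_name, df_name, "HourlyCopyTrigger").result()
```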
Please accept the answer if this helped.