Copy activity in pipeline dev and qa

Vineet S 910 Reputation points
2024-09-24T21:17:01.71+00:00

Hi, I have 5 pipelines with a copy activity: the source is SQL and the sink is Parquet, but the databases are different. I want to run both Dev and QA in parallel. How will the pipeline know in debug mode whether it is running against the Dev database or the QA database? How do I make it call the QA DB and the Dev DB in debug mode? Screenshots only, please.

Azure Data Factory
An Azure service for ingesting, preparing, and transforming data at scale.

Accepted answer
  1. hossein jalilian 7,280 Reputation points
    2024-09-24T23:47:00.8666667+00:00

    Thanks for posting your question in the Microsoft Q&A forum.

    You can use parameters and variables to dynamically set the database connection details.

    • Add a pipeline parameter called environment with possible values "dev" or "qa"
    • Create variables for the database name, server, etc., and use expressions to set them based on the environment parameter
    • In your SQL dataset, use parameters for the database name, server, etc., and in your sink dataset, parameterize the file path if needed (see the sketch below)
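
    For illustration, here is a minimal sketch of what the parameterized JSON definitions could look like, assuming an Azure SQL Database linked service. Names such as AzureSqlDatabaseLS, SqlSourceDataset, and DBName, as well as the server, credentials, and table, are placeholders and not taken from the original pipelines.

    A linked service whose database name is supplied at runtime:

        {
            "name": "AzureSqlDatabaseLS",
            "properties": {
                "type": "AzureSqlDatabase",
                "parameters": {
                    "DBName": { "type": "String" }
                },
                "typeProperties": {
                    "connectionString": {
                        "value": "Server=tcp:<your-server>.database.windows.net,1433;Database=@{linkedService().DBName};User ID=<user>;Password=<password>;Encrypt=True;Connection Timeout=30",
                        "type": "SecureString"
                    }
                }
            }
        }

    A source dataset that exposes its own DBName parameter and forwards it to the linked service:

        {
            "name": "SqlSourceDataset",
            "properties": {
                "type": "AzureSqlTable",
                "linkedServiceName": {
                    "referenceName": "AzureSqlDatabaseLS",
                    "type": "LinkedServiceReference",
                    "parameters": { "DBName": "@dataset().DBName" }
                },
                "parameters": {
                    "DBName": { "type": "String" }
                },
                "typeProperties": {
                    "schema": "dbo",
                    "table": "MyTable"
                }
            }
        }

    The pipeline can then pass a database name into the dataset based on its environment parameter.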

    Please don't forget to close the thread by upvoting and accepting this as the answer if it was helpful.


1 additional answer

  1. AnnuKumari-MSFT 32,906 Reputation points Microsoft Employee
    2024-09-26T05:22:17.0733333+00:00

    Hi @Vineet S,

    Thank you for using the Microsoft Q&A platform and for posting your query here.

    I understand that you are trying to execute the copy activity in parallel in two different environments: Dev and QA.

    Ideally, you should have a separate ADF workspace for each environment; that is the best practice. Develop everything in the Dev environment and then deploy the code to the higher environments: Test, Pre-prod, and Prod.

    [Diagram of continuous integration with Azure Pipelines]

    From your query, it seems you are treating the same ADF instance as both the Dev and QA environment. Kindly let us know what your deployment process looks like.

    Coming to your query, it seems you want to dynamically execute the pipeline against the Dev and QA databases.

    You can either create two linked services and datasets that point to the two databases individually, each with its own server name, database name, and credentials:

    [screenshot]

    Or consider parameterizing them (see the sketch below):

    [screenshot]

    Reference: Parameterize Linked Services in Azure Data Factory
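
    As a rough sketch of the parameterized option, the copy activity could choose the database per run from an environment pipeline parameter. The @if expression and the names (environment, SqlSourceDataset, ParquetSinkDataset, SalesDB_Dev, SalesDB_QA) are illustrative assumptions, not taken from the thread:

        {
            "name": "CopySqlToParquet",
            "type": "Copy",
            "inputs": [
                {
                    "referenceName": "SqlSourceDataset",
                    "type": "DatasetReference",
                    "parameters": {
                        "DBName": "@if(equals(pipeline().parameters.environment, 'qa'), 'SalesDB_QA', 'SalesDB_Dev')"
                    }
                }
            ],
            "outputs": [
                { "referenceName": "ParquetSinkDataset", "type": "DatasetReference" }
            ],
            "typeProperties": {
                "source": { "type": "AzureSqlSource" },
                "sink": { "type": "ParquetSink" }
            }
        }

    In debug mode, ADF prompts for the pipeline parameter values before the run starts, so you choose dev or qa there; to run both in parallel, start two debug (or triggered) runs with different values of the environment parameter.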

    Hope it helps. Kindly accept the answer by clicking on the Accept answer button. Thank you.

