Question on scheduling a pipeline through Azure Data Factory

yi zhang 1 Reputation point
2021-10-18T05:50:52.96+00:00

I have some questions about setting up "Managed private endpoints" for a pipeline that copies data from Azure SQL DB to Azure Blob Storage, and that will be scheduled to run automatically once a month.
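For reference, this is roughly how the monthly trigger is set up (a minimal sketch using the azure-mgmt-datafactory Python SDK, following the quickstart pattern; the subscription, resource group, factory, pipeline, and trigger names are placeholders, not my real resources):

```python
from datetime import datetime, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    PipelineReference,
    ScheduleTrigger,
    ScheduleTriggerRecurrence,
    TriggerPipelineReference,
    TriggerResource,
)

# Placeholder names -- replace with your own resources.
client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# Fire once a month, starting 2021-11-01 UTC.
recurrence = ScheduleTriggerRecurrence(
    frequency="Month",
    interval=1,
    start_time=datetime(2021, 11, 1, tzinfo=timezone.utc),
    time_zone="UTC",
)
trigger = ScheduleTrigger(
    recurrence=recurrence,
    pipelines=[TriggerPipelineReference(
        pipeline_reference=PipelineReference(reference_name="<copy-pipeline-name>"),
    )],
)
client.triggers.create_or_update(
    "<resource-group>", "<factory-name>", "<trigger-name>",
    TriggerResource(properties=trigger),
)
# A trigger must be started before it fires on schedule.
client.triggers.begin_start("<resource-group>", "<factory-name>", "<trigger-name>").result()
```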

My first question: when I create a new integration runtime, I have to enable interactive authoring, and the connection is automatically terminated after 60 minutes. Does that mean I have to enable interactive authoring manually every time? My connection status always shows as failed (the red rectangle in my screenshot):

Only after I enable it manually does the connection test succeed.
My managed private endpoint itself looks fine.
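To double-check the runtime and endpoint state outside the portal, something like the following should work (a sketch; it assumes a recent azure-mgmt-datafactory version that exposes managed private endpoints, and all names are placeholders):

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Placeholder names -- replace with your own resources.
client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# State of the managed integration runtime ("Started", "Stopped", ...).
ir = client.integration_runtimes.get_status(
    "<resource-group>", "<factory-name>", "<integration-runtime-name>")
print(ir.properties.state)

# Managed private endpoints live in the factory's managed virtual network,
# which is named "default" unless configured otherwise.
pe = client.managed_private_endpoints.get(
    "<resource-group>", "<factory-name>", "default", "<endpoint-name>")
print(pe.properties.connection_state.status)  # should be "Approved"
```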

My second question: even when I set the account selection method to "From Azure subscription", it reverts to "Enter manually", and it seems this change cannot be published, so the update is never saved. Is there any difference in effect on the pipeline between "From Azure subscription" and "Enter manually"?

And now, even though my pipeline runs successfully once I manually enable interactive authoring, the future scheduled runs fail as shown below, with the status stuck at "In progress":
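For what it's worth, this is how I inspect the run history programmatically (again a sketch with placeholder names):

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters

client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

# List pipeline runs from the last 35 days and print their status and message.
now = datetime.now(timezone.utc)
runs = client.pipeline_runs.query_by_factory(
    "<resource-group>", "<factory-name>",
    RunFilterParameters(last_updated_after=now - timedelta(days=35),
                        last_updated_before=now))
for run in runs.value:
    print(run.pipeline_name, run.status, run.message)
```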

I hope I have explained my situation clearly; any suggestions are welcome.


1 answer

  1. MartinJaffer-MSFT 26,036 Reputation points
    2021-10-19T15:11:28.91+00:00

    Hello @yi zhang and welcome to Microsoft Q&A.

    On Question 2, my understanding is that the selection method is only a convenience while editing. The same information is stored no matter which option you choose, so there is no difference when it is read back later; that is why it always appears as "Enter manually" when you reopen it. "From Azure subscription" simply makes it easier to find the details the first time.

    On Question 1, "test connection" is considered part of authoring. The detail to realize, is , activities initiated by the Data Factory service (such as scheduled triggers) move through a slightly different channel than activities initiated by you through the authoring tool (such as preview data, debug pipeline, trigger now, test connection).
    One requires a 'listener'.

    Much more concerning to me is the last part. It sounds like you are saying the scheduled trigger fails after your authoring session ends. Is this correct?
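    If you want to confirm, the trigger run history will show whether the schedule fired at all and how each occurrence ended (another sketch, same placeholder convention as the snippets above):

    ```python
    from datetime import datetime, timedelta, timezone

    from azure.identity import DefaultAzureCredential
    from azure.mgmt.datafactory import DataFactoryManagementClient
    from azure.mgmt.datafactory.models import RunFilterParameters

    client = DataFactoryManagementClient(DefaultAzureCredential(), "<subscription-id>")

    # Trigger runs record each scheduled firing and its outcome.
    now = datetime.now(timezone.utc)
    trigger_runs = client.trigger_runs.query_by_factory(
        "<resource-group>", "<factory-name>",
        RunFilterParameters(last_updated_after=now - timedelta(days=35),
                            last_updated_before=now))
    for tr in trigger_runs.value:
        print(tr.trigger_name, tr.status, tr.trigger_run_timestamp)
    ```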
