APPLIES TO: Azure Data Factory Azure Synapse Analytics
Tip
Try out Data Factory in Microsoft Fabric, an all-in-one analytics solution for enterprises. Microsoft Fabric covers everything from data movement to data science, real-time analytics, business intelligence, and reporting. Learn how to start a new trial for free!
A pipeline run in Azure Data Factory defines an instance of a pipeline execution. For example, let's say you have a pipeline that runs at 8:00 AM, 9:00 AM, and 10:00 AM. In this case, there are three separate pipeline runs. Each pipeline run has a unique pipeline run ID. A run ID is a globally unique identifier (GUID) that defines that particular pipeline run.
Pipeline runs are typically instantiated by passing arguments to parameters that you define in the pipeline. You can run a pipeline either manually or by using a trigger. See Pipeline execution and triggers in Azure Data Factory for details.
You have a data factory and a function app running on a private endpoint in Azure. You're trying to run a pipeline that interacts with the function app. You've tried three different methods, but one returns a "Bad Request" error, and the other two methods return a "403 Forbidden" error.
Cause
Azure Data Factory currently doesn't support a private endpoint connector for function apps. Azure Functions rejects calls because it's configured to allow only connections from a private link.
Resolution
Create a PrivateLinkService endpoint and provide your function app's DNS.
Cause
When you cancel a pipeline run, pipeline monitoring often still shows the progress status. This happens because of a browser cache issue. You also might not have the correct monitoring filters.
Resolution
Refresh the browser and apply the correct monitoring filters.
Cause
If a folder you're copying contains files with different schemas, such as a variable number of columns, different delimiters, quote character settings, or some data issue, the pipeline might throw this error:
Operation on target Copy_sks failed: Failure happened on 'Sink' side. ErrorCode=DelimitedTextMoreColumnsThanDefined, 'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException, Message=Error found when processing 'Csv/Tsv Format Text' source '0_2020_11_09_11_43_32.avro' with row number 53: found more columns than expected column count 27., Source=Microsoft.DataTransfer.Common,'
Resolution
Select the Binary Copy option while creating the Copy activity. This way, for bulk copies or for migrating your data from one data lake to another, Data Factory won't open the files to read the schema. Instead, it treats each file as binary and copies it to the other location.
Issue
Error message:
Type=Microsoft.DataTransfer.Execution.Core.ExecutionException,Message=There are substantial concurrent MappingDataflow executions which is causing failures due to throttling under Integration Runtime 'AutoResolveIntegrationRuntime'.
Cause
You've reached the integration runtime's capacity limit. You might be running a large number of data flows on the same integration runtime at the same time. See Azure subscription and service limits, quotas, and constraints for details.
Resolution
Issue
Error message:
Operation on target Cancel failed: {"error":{"code":"AuthorizationFailed","message":"The client '<client>' with object id '<object>' does not have authorization to perform action 'Microsoft.DataFactory/factories/pipelineruns/cancel/action' over scope '/subscriptions/<subscription>/resourceGroups/<resource group>/providers/Microsoft.DataFactory/factories/<data factory name>/pipelineruns/<pipeline run id>' or the scope is invalid. If access was recently granted, please refresh your credentials."}}
Cause
Pipelines can use the Web activity to call ADF REST API methods only if the Azure Data Factory managed identity is assigned the Contributor role. You must first configure and add the Azure Data Factory managed identity to the Contributor security role.
Resolution
Before you use the Azure Data Factory REST API in a Web activity's Settings tab, you must configure security. Azure Data Factory pipelines can use the Web activity to call ADF REST API methods only if the Azure Data Factory managed identity is assigned the Contributor role. Begin by opening the Azure portal and selecting the All resources link on the left menu. Select Azure Data Factory, and then add the ADF managed identity with the Contributor role by selecting Add in the Add a role assignment box.
Cause
Azure Data Factory orchestration allows conditional logic and enables users to take different paths based upon the outcome of a previous activity. It allows four conditional paths: Upon Success (default pass), Upon Failure, Upon Completion, and Upon Skip.
Azure Data Factory evaluates the outcome of all leaf-level activities. Pipeline results are successful only if all leaves succeed. If a leaf activity was skipped, the pipeline evaluates its parent activity instead.
Resolution
Cause
You might need to monitor failed Azure Data Factory pipelines at intervals, say every 5 minutes. You can query and filter the pipeline runs from a data factory by using the Query pipeline runs REST API endpoint.
Resolution
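As a sketch of such monitoring, the request body for the Query pipeline runs endpoint (POST .../factories/&lt;factory&gt;/queryPipelineRuns?api-version=2018-06-01) might be built as follows. The endpoint URL, authentication, and the exact filter field names are assumptions based on the public ADF REST API, not something this article specifies:

```python
from datetime import datetime, timedelta, timezone

# Sketch: build the body for the ADF "Query pipeline runs" REST call.
# The filter shape (operand/operator/values) is assumed from the public
# ADF REST API reference; sending the request and auth are out of scope here.
def build_failed_runs_query(minutes=5, now=None):
    """Build a queryPipelineRuns body for failed runs in the last `minutes`."""
    now = now or datetime.now(timezone.utc)
    return {
        "lastUpdatedAfter": (now - timedelta(minutes=minutes)).isoformat(),
        "lastUpdatedBefore": now.isoformat(),
        "filters": [
            {"operand": "Status", "operator": "Equals", "values": ["Failed"]}
        ],
    }

body = build_failed_runs_query(minutes=5)
print(body["filters"][0]["values"])  # ['Failed']
```

You would POST this body with a bearer token to the factory's queryPipelineRuns URL on a 5-minute schedule and alert on any returned runs.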
Cause
The degree of parallelism in ForEach is the maximum degree of parallelism. We can't guarantee a specific number of executions happening at the same time, but this parameter guarantees that we never go above the value that was set. Treat it as a limit, to be applied when controlling concurrent access to your sources and sinks.
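This cap-not-guarantee behavior can be illustrated outside ADF. The following Python sketch is an analogy, not ADF code: the thread pool's max_workers plays the role of the ForEach batch count, bounding concurrency without promising it:

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

# Analogy only: max_workers, like ForEach's batch count, is an upper bound
# on simultaneous executions, not a guaranteed number of parallel runs.
peak = 0
current = 0
lock = threading.Lock()

def process_item(i):
    global peak, current
    with lock:
        current += 1
        peak = max(peak, current)   # track the highest observed concurrency
    time.sleep(0.05)                # simulate work on one item
    with lock:
        current -= 1

with ThreadPoolExecutor(max_workers=4) as pool:  # "batch count" analog
    list(pool.map(process_item, range(20)))

print(peak)  # never exceeds 4; the exact value isn't guaranteed
```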
Known Facts about ForEach
Resolution
Cause
This can happen for various reasons, such as hitting concurrency limits, service outages, network failures, and so on.
Resolution
Concurrency Limit: If your pipeline has a concurrency policy, verify that there are no old pipeline runs in progress.
Monitoring limits: Go to the authoring canvas, select your pipeline, and determine whether it has a concurrency property assigned to it. If it does, go to the Monitoring view and make sure there's nothing in progress from the past 45 days. If something is in progress, you can cancel it, and the new pipeline run should start.
Transient issues: It's possible that your run was affected by a transient network issue, credential failures, service outages, and so on. If this happens, Azure Data Factory has an internal recovery process that monitors all the runs and starts them when it notices that something went wrong. You can rerun pipelines and activities as described here. You can rerun activities if you canceled an activity or had a failure, as described in Rerun from activity failures. This process happens every hour, so if your run is stuck for more than an hour, create a support case.
Cause
This can happen if you haven't enabled the time to live (TTL) feature for the data flow integration runtime or optimized the self-hosted integration runtime (SHIR).
Resolution
Cause
This can happen if you haven't scaled up the SHIR according to your workload.
Resolution
Cause
Long queue-related error messages can appear for various reasons.
Resolution
Cause
It's a user error: the JSON payload that hits management.azure.com is corrupt. No logs are stored because the call never reached the ADF service layer.
Resolution
Perform network tracing of your API call from the ADF portal by using the Microsoft Edge or Chrome browser developer tools. You'll see the offending JSON payload, which could be caused by a special character (for example, $), spaces, or other types of user input. Once you fix the string expression, you can proceed with the rest of your ADF calls in the browser.
Cause
You're running ADF in debug mode.
Resolution
Execute the pipeline in trigger mode.
Cause
You made changes in the collaboration branch to remove the storage event trigger. When you try to publish, you encounter a "Trigger deactivation error" message.
Resolution
This happens because the storage account used for the event trigger is locked. Unlock the account.
Cause
The expression builder can fail to load due to network or cache problems with the web browser.
Resolution
Upgrade the web browser to the latest version of a supported browser, clear cookies for the site, and refresh the page.
Cause
You have chained many activities.
Resolution
You can split your pipeline into sub-pipelines and stitch them together with the Execute Pipeline activity.
Cause
You haven't optimized the mapping data flow.
Resolution
Cause
The failure type is a user configuration issue. A string of parameters, instead of an array, is passed to the child pipeline.
Resolution
In the Execute Pipeline activity, pass the pipeline parameter as @createArray('a','b'), for example, if you want to pass the parameters 'a' and 'b'. To pass numbers, use @createArray(1,2,3). Use the createArray function to force parameters to be passed as an array.
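To see why the distinction matters, here's a small Python analogy (not ADF expression language): iterating a string that merely looks like an array walks its characters, while iterating a real array walks its elements. This mirrors what a child pipeline's ForEach does when handed a string instead of an array:

```python
# Analogy only: in ADF, @createArray('a','b') builds a real array, while the
# literal text "['a','b']" is just a string that happens to look like one.
param_as_string = "['a','b']"
param_as_array = ["a", "b"]

def child_pipeline(items):
    # A ForEach-style loop over a string iterates characters, not elements.
    return [item for item in items]

print(child_pipeline(param_as_string))  # characters of the string -- wrong
print(child_pipeline(param_as_array))   # ['a', 'b'] -- intended
```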