I've been running several pipelines on a trigger in Azure Synapse Analytics. Each pipeline has a set of tables it needs to pass from one level to the next, and it loops through that set of tables with a ForEach activity.
I have 6 pipelines that are effectively running in parallel, each with about 8-10 tables (ForEach iterations). A rough sketch of one of the ForEach activities is below.
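For reference, this is roughly what each ForEach looks like (the parameter name and inner activity are simplified placeholders, not my exact definition); the iterations run in parallel, and I haven't set batchCount myself:

{
    "name": "ForEach_Tables",
    "type": "ForEach",
    "typeProperties": {
        "isSequential": false,
        "items": {
            "value": "@pipeline().parameters.TableList",
            "type": "Expression"
        },
        "activities": [
            {
                "name": "Dataflow_Copy_To_Next_Level",
                "type": "ExecuteDataFlow"
            }
        ]
    }
}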
Let's say Pipeline A has one activity that failed (one ForEach iteration, i.e. one table not being copied to the next level) because of the following error:
{
"errorCode": "3250",
"message": "There are not enough resources available in the workspace, details: 'Your job requested 8 vcores. However, the workspace only has 2 vcores available out of quota of 50 vcores for node size family [MemoryOptimized]. Try ending the running job(s), reducing the numbers of vcores requested or increasing your vcore quota. https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-concepts#workspace-level'",
"failureType": "UserError",
"target": "Dataflow_D365_FO_DELETE_SILVER",
"details": []
}
However, these pipelines are triggered at 2-hour intervals, and the activity that fails differs every run. For example, Pipeline A will have two failed activities this run, and next run it will be a different Pipeline A activity, or a different pipeline's activity altogether failing on the same resource error. This makes the problem hard to isolate. My guess is that the concurrent Data Flow jobs are competing for the workspace quota: at 8 vcores per job, a 50-vcore quota only fits 6 jobs at once, so whichever iterations happen to overlap are the ones that fail, but I can't confirm that.
Lastly, I have increased the compute size for Pipeline A's Data Flow, but the problem persists. Could someone explain how I can resolve this, either by increasing the workspace vcore quota or by decreasing the number of vcores each job requests? Anything that can help me resolve this issue would be great.
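For context, this is roughly where the compute size is set on the failing Execute Data Flow activity (the data flow reference name is a placeholder); I assume the compute block here is what drives the "requested 8 vcores" in the error, but I'm not certain:

{
    "name": "Dataflow_D365_FO_DELETE_SILVER",
    "type": "ExecuteDataFlow",
    "typeProperties": {
        "dataFlow": {
            "referenceName": "DF_D365_FO_Delete_Silver",
            "type": "DataFlowReference"
        },
        "compute": {
            "coreCount": 8,
            "computeType": "MemoryOptimized"
        }
    }
}

If lowering coreCount or switching computeType away from MemoryOptimized is the right lever here, confirmation would help, as would pointers on how to request a higher vcore quota for the workspace.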