Hi @Grace Ann Salvame
Thanks for the detailed information and for trying out multiple troubleshooting steps; you’ve already done a great job narrowing this down.
Based on the error "Executor heartbeat timed out after 171582 ms", this usually indicates a problem within the Spark execution environment, specifically that an executor node became unresponsive or timed out during execution. Although the failure surfaces at the Sink (stgAccountInfo), the root cause is likely related to backend infrastructure rather than any changes in your Data Flow logic.
To help stabilize execution, first try enabling Sink staging if you are using Azure SQL, Synapse, or SQL Server as your sink. You can do this by opening the Sink transformation in your Data Flow, enabling the “Staged insert” option, and configuring a temporary staging linked service pointing to Azure Blob Storage. This approach buffers large writes and prevents overloading the sink during high-load Spark operations. You can find more details here: Enable Staged Insert – Azure SQL Database.
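For reference, once staging is enabled, the Execute Data Flow activity's JSON typically carries a staging block similar to the sketch below. This is only an illustration: the linked service name and folder path are placeholders, and the exact property names can vary slightly by service version.

```python
import json

# Rough sketch (not an exact schema) of how the staging settings usually appear
# in the Execute Data Flow activity's typeProperties once staging is configured.
# "AzureBlobStagingLS" and the folder path are placeholder names.
execute_data_flow_type_properties = {
    "dataFlow": {"referenceName": "df_AccountInfo", "type": "DataFlowReference"},
    "staging": {
        "linkedService": {
            "referenceName": "AzureBlobStagingLS",  # hypothetical Blob Storage linked service
            "type": "LinkedServiceReference"
        },
        "folderPath": "staging-container/dataflow-staging"  # temporary folder for staged writes
    }
}

print(json.dumps(execute_data_flow_type_properties, indent=2))
```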
Next, consider adding a Repartition transformation just before the Sink. Setting this to Round Robin or specifying a partition count such as 8 or 16 helps distribute data evenly across executor nodes, reducing the likelihood of data skew or executor crashes.
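Under the hood, the Round Robin option behaves like Spark's repartition(), which spreads rows evenly across a fixed number of partitions. The short PySpark sketch below is purely illustrative of that effect; it is not something you run inside the Data Flow itself.

```python
from pyspark.sql import SparkSession

# Conceptual illustration only: Round Robin partitioning in a Data Flow
# corresponds to Spark's repartition(), which distributes rows evenly.
spark = SparkSession.builder.appName("repartition-demo").getOrCreate()

df = spark.range(0, 1_000_000)       # stand-in for the stream feeding the sink
evenly_spread = df.repartition(16)   # 16 round-robin partitions, as suggested above

print(df.rdd.getNumPartitions(), "->", evenly_spread.rdd.getNumPartitions())
```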
Also, if you are currently using a fixed compute size like Medium or Large, try switching to AutoResolveIntegrationRuntime or setting the Data Flow compute size to Auto. This allows the environment to scale automatically according to workload demands.
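As a rough sketch only (property names are approximate and the integration runtime reference is the standard auto-resolve one), the difference between pinning compute and letting the service resolve it looks roughly like this in the activity JSON:

```python
import json

# Sketch only: approximate shape of the Execute Data Flow activity settings.
auto_resolve = {
    "integrationRuntime": {
        "referenceName": "AutoResolveIntegrationRuntime",  # let the service pick compute at run time
        "type": "IntegrationRuntimeReference"
    }
}

fixed_compute = {
    "compute": {
        "computeType": "General",  # or "MemoryOptimized"
        "coreCount": 8             # explicit size, as with Medium/Large today
    }
}

print(json.dumps(auto_resolve, indent=2))
print(json.dumps(fixed_compute, indent=2))
```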
Finally, review the Spark monitoring view by navigating to the Data Flow activity run and clicking the eyeglass icon. Look for long-running partitions, high retry counts, or uneven data distribution (data skew), as these can indicate which parts of the pipeline may be causing bottlenecks. More information is available here: Troubleshoot Mapping Data Flows – Microsoft Docs.
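If you prefer to pull the same monitoring details programmatically, the ADF REST API (Activity Runs – Query By Pipeline Run) returns the activity run output that backs that view. Below is a minimal Python sketch, assuming you substitute your own subscription, resource group, factory, pipeline run ID, and Azure AD token.

```python
import requests
from datetime import datetime, timedelta, timezone

# Minimal sketch using the public ADF REST API (Activity Runs - Query By Pipeline Run).
# All identifiers below are placeholders to replace with your own values.
sub, rg, factory = "<subscription-id>", "<resource-group>", "<factory-name>"
run_id = "<pipeline-run-id>"
token = "<azure-ad-bearer-token>"

url = (
    f"https://management.azure.com/subscriptions/{sub}/resourceGroups/{rg}"
    f"/providers/Microsoft.DataFactory/factories/{factory}"
    f"/pipelineruns/{run_id}/queryActivityruns?api-version=2018-06-01"
)
now = datetime.now(timezone.utc)
body = {
    "lastUpdatedAfter": (now - timedelta(days=1)).isoformat(),
    "lastUpdatedBefore": now.isoformat(),
}

resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})
for run in resp.json().get("value", []):
    # The Data Flow activity's output carries the monitoring details
    # (per-partition timings, row counts) shown behind the eyeglass icon.
    print(run.get("activityName"), run.get("status"), run.get("durationInMs"))
```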
If this is helpful, please click Accept Answer and kindly upvote it so that other people who face a similar issue may benefit from it. Let me know if you have any further queries.