DF-Executor-InternalServerError Error in Mapping Data Flow

Casey 116 Reputation points
2023-01-16T11:46:04.0033333+00:00

Hi,

I've been getting this error for the past 7 days on a data flow that had been running fine for months and has no recent updates. The data flow runs fine in data flow debug; I can see data flowing from the source activity through to the sink activity.

Operation on target dataFlow_Raw_to_Transformed failed: {"StatusCode":"DF-Executor-InternalServerError","Message":"Job failed due to reason: at Sink 'transformedData': Failed to execute dataflow with internal server error, please retry later. If issue persists, please contact Microsoft support for further assistance","Details":"org.apache.spark.SparkException: Job aborted due to stage failure: Task 320 in stage 21.0 failed 1 times, most recent failure: Lost task 320.0 in stage 21.0 (TID 1297, vm-42929650, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container from a bad node: container_1673864656014_0001_01_000002 on host: vm-42929650. Exit status: 143. Diagnostics: [2023-01-16 10:34:13.066]Container killed on request. Exit code is 143\n[2023-01-16 10:34:13.067]Container exited with a non-zero exit code 143. \n[2023-01-16 10:34:13.068]Killed by external signal\n.\nDriver stacktrace:\n\tat com.microsoft.dataflow.FileStoreExceptionHandler$.extractRootCause(FileStoreExceptionHandler.scala:36)\n\tat com.microsoft.dataflow.transformers.DefaultFileWriter$$anonfun$write$1$$anonfun$101$$anonfun$apply$93$$anonfun$apply$5.apply$mcV$sp(FileStore.scala:1185)\n\tat com.microsoft.dataflow.transformers.DefaultFileWriter$$anonfun$write$1$$anonfun$101$$anonfun$apply$93$$anonfun$apply$5.apply(FileStore.scala:1"}

The error happens when the data flow is run as part of a pipeline.

The same pipeline currently runs fine in other environments.

Can anyone please help with this error?

Thanks.

Azure Data Factory
An Azure service for ingesting, preparing, and transforming data at scale.

Accepted answer
  1. ShaktiSingh-MSFT 14,481 Reputation points Microsoft Employee
    2023-01-17T09:21:53.54+00:00

    Hi @Casey,

    Thanks for posting this question in Microsoft Q&A forum.

    As I understand from the error DF-Executor-InternalServerError and the reason "Container from a bad node", the failure might be caused by the cluster running out of memory or disk space.

    For this kind of error, please retry using an integration runtime with a bigger core count and/or the memory optimized compute type.

    Refer to this video on creating a larger debug session: Custom Data Flow Debug | Larger Integration Runtime | How to create larger debug session in ADF
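
    If it helps, here is a minimal sketch of creating such an integration runtime programmatically, assuming the azure-mgmt-datafactory Python SDK; the subscription, resource group, factory, and integration runtime names below are placeholders, and the 16-core / MemoryOptimized sizing is just an example:

        # Sketch: create an Azure IR with memory optimized data flow compute
        # using azure-mgmt-datafactory. Names in angle brackets are placeholders.
        from azure.identity import DefaultAzureCredential
        from azure.mgmt.datafactory import DataFactoryManagementClient
        from azure.mgmt.datafactory.models import (
            IntegrationRuntimeResource,
            ManagedIntegrationRuntime,
            IntegrationRuntimeComputeProperties,
            IntegrationRuntimeDataFlowProperties,
        )

        credential = DefaultAzureCredential()
        adf_client = DataFactoryManagementClient(credential, "<subscription-id>")

        # Memory optimized compute, 16 cores, 10-minute TTL for the Spark cluster.
        ir = IntegrationRuntimeResource(
            properties=ManagedIntegrationRuntime(
                compute_properties=IntegrationRuntimeComputeProperties(
                    location="AutoResolve",
                    data_flow_properties=IntegrationRuntimeDataFlowProperties(
                        compute_type="MemoryOptimized",
                        core_count=16,
                        time_to_live=10,
                    ),
                )
            )
        )

        adf_client.integration_runtimes.create_or_update(
            "<resource-group>", "<factory-name>", "MemoryOptimizedDataFlowIR", ir
        )

    You can then point the Execute Data Flow activity (or the data flow debug settings) at this integration runtime instead of the default AutoResolveIntegrationRuntime.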

    If the above does not help, please do as suggested by @PRADEEPCHEEKATLA-MSFT and raise a support ticket.

    1 person found this answer helpful.

0 additional answers
