Hi,
I've been getting the error below for the past 7 days on a data flow that had been running fine for months and has had no recent changes. The data flow runs fine in data flow debug: I can see data passing from the source all the way through to the sink.
Operation on target dataFlow_Raw_to_Transformed failed: {"StatusCode":"DF-Executor-InternalServerError","Message":"Job failed due to reason: at Sink 'transformedData': Failed to execute dataflow with internal server error, please retry later. If issue persists, please contact Microsoft support for further assistance","Details":"org.apache.spark.SparkException: Job aborted due to stage failure: Task 320 in stage 21.0 failed 1 times, most recent failure: Lost task 320.0 in stage 21.0 (TID 1297, vm-42929650, executor 1): ExecutorLostFailure (executor 1 exited caused by one of the running tasks) Reason: Container from a bad node: container_1673864656014_0001_01_000002 on host: vm-42929650. Exit status: 143. Diagnostics:
[2023-01-16 10:34:13.066]Container killed on request. Exit code is 143
[2023-01-16 10:34:13.067]Container exited with a non-zero exit code 143.
[2023-01-16 10:34:13.068]Killed by external signal
.
Driver stacktrace:
	at com.microsoft.dataflow.FileStoreExceptionHandler$.extractRootCause(FileStoreExceptionHandler.scala:36)
	at com.microsoft.dataflow.transformers.DefaultFileWriter$$anonfun$write$1$$anonfun$101$$anonfun$apply$93$$anonfun$apply$5.apply$mcV$sp(FileStore.scala:1185)
	at com.microsoft.dataflow.transformers.DefaultFileWriter$$anonfun$write$1$$anonfun$101$$anonfun$apply$93$$anonfun$apply$5.apply(FileStore.scala:1"}
The error only happens when the data flow is run as part of a pipeline, and the same pipeline currently runs fine in our other environments.
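For reference, the failing step is an Execute Data Flow activity along these lines (a simplified sketch, not our exact definition; the data flow reference name and the compute values shown are placeholders, only the activity name matches the error above):

    {
        "name": "dataFlow_Raw_to_Transformed",
        "type": "ExecuteDataFlow",
        "typeProperties": {
            "dataFlow": {
                "referenceName": "Raw_to_Transformed",
                "type": "DataFlowReference"
            },
            "compute": {
                "coreCount": 8,
                "computeType": "General"
            },
            "traceLevel": "Fine"
        }
    }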
Can anyone please help with this error?
Thanks.