Job failed due to reason: at Source 'srcExtended': org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0

Vidyashree Salimath 1 Reputation point
2021-02-10T10:45:33.507+00:00

Hi,

I'm using a mapping data flow with an XML file (around 2.5 GB) as the source. While importing the projection in the source and executing the pipeline, I'm facing the issue below:

Error:

  • Source 'srcExtended': org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 0.0 failed 4 times, most recent failure: Lost task 0.3 in stage 0.0
Azure Data Factory

1 answer

  1. KranthiPakala-MSFT 46,432 Reputation points Microsoft Employee
    2021-02-10T19:45:36.083+00:00

    Hi @Vidyashree Salimath ,

    Thanks for reaching out in Microsoft Q&A forum.

    Looking at the error message, it seems the issue is caused by overloaded Spark resources. I would recommend trying a bigger integration runtime (more cores).


    Please try the General Purpose compute type if you are currently using Memory Optimized. Also, if you have configured a custom key partition in any transformations, try defaulting to current partitioning so that Spark can handle that optimization for you.
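    As a rough sketch, an Azure integration runtime definition sized for a larger data flow might look like the JSON below. The name and the specific core count are illustrative; pick a core count your subscription supports:

    ```json
    {
      "name": "DataFlowGeneralIR",
      "properties": {
        "type": "Managed",
        "typeProperties": {
          "computeProperties": {
            "location": "AutoResolve",
            "dataFlowProperties": {
              "computeType": "General",
              "coreCount": 16,
              "timeToLive": 10
            }
          }
        }
      }
    }
    ```

    Here `computeType` is set to `General` (General Purpose) rather than `MemoryOptimized`, `coreCount` is raised to 16, and `timeToLive` keeps the cluster warm for 10 minutes between runs (which also sets the minimum billing window, as noted below).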

    NOTE: Billing for data flows is based on the compute type you select and the number of cores selected per hour. If you set a TTL, the minimum billing time will be that amount of time; otherwise, the time billed is based on the execution time of your data flows and the duration of your debug sessions. Note that debug sessions incur a minimum of 60 minutes of billing time unless you switch the debug session off manually. For further details, please refer to the Azure Data Factory pricing page.

    If you still encounter the issue even after using a bigger IR, please share the details below for further investigation:

    1. Failed pipeline run ID
    2. Failed activity run ID
    3. Compute type used in the IR
    4. Core count used in the IR
    5. Any custom key partitioning used?

    Hope the above info helps. Looking forward to your confirmation.

    ----------

    Thank you
    Please consider clicking "Accept Answer" and "Upvote" on the post that helped you, as it can be beneficial to other community members.