Cannot call methods on a stopped SparkContext.

Ande Venkatesham 11 Reputation points
2022-12-02T19:12:25.383+00:00

{"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: Cannot call methods on a stopped SparkContext.\nThis stopped SparkContext was created

Azure Synapse Analytics
An Azure analytics service that brings together data integration, enterprise data warehousing, and big data analytics. Previously known as Azure SQL Data Warehouse.
Azure Data Factory
An Azure service for ingesting, preparing, and transforming data at scale.

1 answer

  1. Chaithanya Chandus 0 Reputation points
    2023-01-30T17:51:57.5633333+00:00

    I am getting the following error while executing the Azure Synapse pipeline:

    message":"Job failed due to reason: Cannot call methods on a stopped SparkContext.\nThis stopped SparkContext was created at:\n\norg.apache.spark.SparkContext.getOrCreate(SparkContext.scala)\norg.apache.livy.rsc.driver.SparkEntries.sc(SparkEntries.java:52)\norg.apache.livy.rsc.driver.SparkEntries.sparkSession(SparkEntries.java:66)\norg.apache.livy.repl.AbstractSparkInterpreter.postStart(AbstractSparkInterpreter.scala:144)\norg.apache.livy.repl.SparkInterpreter$$anonfun$start$1.apply$mcV$sp(SparkInterpreter.scala:114)\norg.apache.livy.repl.SparkInterpreter$$anonfun$start$1.apply(SparkInterpreter.scala:89)\norg.apache.livy.repl.SparkInterpreter$$anonfun$start$1.apply(SparkInterpreter.scala:89)\norg.apache.livy.repl.AbstractSparkInterpreter.restoreContextClassLoader(AbstractSparkInterpreter.scala:491)\norg.apache.livy.repl.SparkInterpreter.start(SparkInterpreter.scala:89)\norg.apache.livy.repl.Session$$anonfun$1.apply(Session.scala:279)\norg.apache.livy.repl.Session$$anonfun$1.apply(Session.scala:268)\nscala.concurrent.impl.Future$Prom.)

    Scenario: I am trying to read data from serverless SQL pool views (which point to .csv files in the data lake) and load it into a dedicated SQL pool table. I am able to load up to 10,000 records (the top 10,000), but not the full data set. The moment I run the pipeline (which calls a Data Flow), the above error is thrown. I am only using SQL pools (not any Spark...)
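    For context, a serverless SQL pool view over CSV files in the data lake is typically defined with `OPENROWSET`. A minimal sketch follows; the storage URL, view name, and CSV options here are assumptions for illustration, not the asker's actual view:

    ```sql
    -- Hypothetical serverless SQL pool view over CSV files in ADLS Gen2.
    -- The storage account, container, path, and header option are assumptions.
    CREATE VIEW dbo.SalesCsvView AS
    SELECT *
    FROM OPENROWSET(
        BULK 'https://mydatalake.dfs.core.windows.net/raw/sales/*.csv',
        FORMAT = 'CSV',
        PARSER_VERSION = '2.0',
        HEADER_ROW = TRUE
    ) AS rows;
    ```

    Note that even when both source and sink are SQL pools, a Mapping Data Flow executes on a managed Spark cluster behind the scenes, which is why a Spark error such as "stopped SparkContext" can surface in this pipeline.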
