I am getting the following error while executing an Azure Synapse pipeline:
"message":"Job failed due to reason: Cannot call methods on a stopped SparkContext.
This stopped SparkContext was created at:

org.apache.spark.SparkContext.getOrCreate(SparkContext.scala)
org.apache.livy.rsc.driver.SparkEntries.sc(SparkEntries.java:52)
org.apache.livy.rsc.driver.SparkEntries.sparkSession(SparkEntries.java:66)
org.apache.livy.repl.AbstractSparkInterpreter.postStart(AbstractSparkInterpreter.scala:144)
org.apache.livy.repl.SparkInterpreter$$anonfun$start$1.apply$mcV$sp(SparkInterpreter.scala:114)
org.apache.livy.repl.SparkInterpreter$$anonfun$start$1.apply(SparkInterpreter.scala:89)
org.apache.livy.repl.SparkInterpreter$$anonfun$start$1.apply(SparkInterpreter.scala:89)
org.apache.livy.repl.AbstractSparkInterpreter.restoreContextClassLoader(AbstractSparkInterpreter.scala:491)
org.apache.livy.repl.SparkInterpreter.start(SparkInterpreter.scala:89)
org.apache.livy.repl.Session$$anonfun$1.apply(Session.scala:279)
org.apache.livy.repl.Session$$anonfun$1.apply(Session.scala:268)
scala.concurrent.impl.Future$Prom.)
Scenario: I am trying to read data from serverless SQL pool views (which point to .csv files in the data lake) and load it into a dedicated SQL pool table. I can load up to 10,000 records (TOP 10,000), but not the full data set. The moment I run the pipeline (which calls a Data Flow), the above error is thrown. I am only using SQL pools, not Spark.
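For context, the serverless view and the queries involved look roughly like this (object names, container, and path are hypothetical placeholders; this is a minimal sketch of the pattern described above, not my exact code):

```sql
-- Serverless SQL pool: view over .csv files in the data lake
-- (view name, storage account, container, and path are hypothetical)
CREATE VIEW dbo.vw_SourceCsv AS
SELECT *
FROM OPENROWSET(
    BULK 'https://mydatalake.dfs.core.windows.net/mycontainer/data/*.csv',
    FORMAT = 'CSV',
    PARSER_VERSION = '2.0',
    HEADER_ROW = TRUE
) AS rows;

-- This works: reading only the top 10,000 rows
SELECT TOP 10000 * FROM dbo.vw_SourceCsv;

-- The Data Flow reading the full view (effectively this) fails
-- with the "stopped SparkContext" error above
SELECT * FROM dbo.vw_SourceCsv;
```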