Hi, I am facing an issue.
The issue occurs when I run the pipeline from Synapse. In the dataflow, I am trying to read data from the change feed.
When I run a full load, it works. But when I choose an incremental load, it fails with the error below.
Pipeline Id: 0b97ec95-15a8-4348-94e5-524802dc76b1
Dataflow run Id: ef6ffd02-ef05-4c3d-99d2-ac78a8dcfe52
```
Job failed due to reason: Job aborted due to stage failure: Task 0 in stage 12.0 failed 1 times, most recent failure: Lost task 0.0 in stage 12.0 (TID 12, vm-03612025, executor 1): java.io.FileNotFoundException: Operation failed: "The specified filesystem does not exist.", 404, HEAD, https://c4tpubdatalake.dfs.core.windows.net/c4tpubdatalakefs/?upn=false&action=getAccessControl&timeout=90
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.checkException(AzureBlobFileSystem.java:1071)
	at org.apache.hadoop.fs.azurebfs.AzureBlobFileSystem.create(AzureBlobFileSystem.java:189)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1067)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:1048)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:937)
	at org.apache.hadoop.fs.FileSystem.create(FileSystem.java:925)
	at com.microsoft.azure.cosmosdb.spark.util.HdfsUtils$$anonfun$write$1.apply$mcV$sp(HdfsUtils.scala:50)
	at com.microsoft.azure.cosmosdb.spark.util.HdfsUtils$$anonfun$write$1.apply(Hd
```
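From what I can tell, the 404 ("The specified filesystem does not exist.") is raised while the change feed connector writes its checkpoint, so the ADLS Gen2 filesystem named in the URL may be missing or inaccessible. A minimal stdlib sketch (names come only from the URL in the trace above) that splits that URL into the storage account and filesystem to verify:

```python
from urllib.parse import urlparse

def parse_abfs_error_url(url: str) -> dict:
    # Host is <account>.dfs.core.windows.net; the first path
    # segment is the ADLS Gen2 filesystem (container) name.
    parts = urlparse(url)
    account = parts.netloc.split(".")[0]
    path = parts.path.strip("/")
    filesystem = path.split("/")[0] if path else ""
    return {"account": account, "filesystem": filesystem}

info = parse_abfs_error_url(
    "https://c4tpubdatalake.dfs.core.windows.net/c4tpubdatalakefs/"
    "?upn=false&action=getAccessControl&timeout=90"
)
print(info)  # {'account': 'c4tpubdatalake', 'filesystem': 'c4tpubdatalakefs'}
```

So in my case the incremental run seems to depend on filesystem `c4tpubdatalakefs` on account `c4tpubdatalake`; I still need to confirm that it exists and that the workspace identity has access to it.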