Posting Business Central JSON data from blob using ADF
Kayode Ogidan
```
Job failed due to reason: at Sink 'RESTAPIOutput': Job aborted due to stage failure: Task 3 in stage 108.0 failed 1 times, most recent failure: Lost task 3.0 in stage 108.0 (TID 148) (vm-0cf09028 executor 1): java.lang.NullPointerException
at org.apache.spark.sql.execution.datasources.rest.RestClient.executeSingleRowRequest(RestClient.scala:197)
at org.apache.spark.sql.execution.datasources.rest.RestClient.$anonfun$savePartitionSingle$5(RestClient.scala:163)
at org.apache.spark.sql.execution.datasources.rest.RestClient.$anonfun$savePartitionSingle$5$adapted(RestClient.scala:160)
at scala.collection.Iterator.foreach(Iterator.scala:941)
at scala.collection.Iterator.foreach$(Iterator.scala:941)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1429)
at org.apache.spark.sql.execution.datasources.rest.RestClient.$anonfun$savePartitionSingle$1(RestClient.scala:160)
at scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)
at scala.util.Try$.apply(Try.scala:213)
at org.apache.spar
```
@PRADEEPCHEEKATLA-MSFT @phemanth @Harishga Please help. I am posting data to Business Central. The pipeline reads about 20 CSV files (331 total input rows), and the job fails on the sink with the error above. What could be wrong?
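To rule out the endpoint itself, one thing I can do is POST a single record to the Business Central API outside of ADF and look at the raw HTTP response, since the data flow sink only surfaces the Spark NullPointerException. Below is a minimal sketch of that check, assuming the standard Business Central API v2.0 URL shape; the tenant ID, environment, company ID, entity set (`salesInvoices`), and payload fields are all placeholders, not my actual setup:

```python
# Minimal sketch: POST one JSON record directly to the Business Central API
# to see the raw REST status code and error body that the ADF sink hides.
# All identifiers below are placeholders -- substitute your own values.
import requests

TOKEN = "<bearer-token-from-azure-ad>"  # hypothetical; acquired via your app registration
BASE = (
    "https://api.businesscentral.dynamics.com/v2.0/"
    "<tenant-id>/<environment>/api/v2.0/companies(<company-id>)"
)

# Sample payload with hypothetical fields; use the columns your CSVs map to.
payload = {"customerNumber": "10000", "invoiceDate": "2024-01-31"}

resp = requests.post(
    f"{BASE}/salesInvoices",  # hypothetical entity set
    json=payload,
    headers={
        "Authorization": f"Bearer {TOKEN}",
        "Content-Type": "application/json",
    },
    timeout=30,
)
print(resp.status_code)
print(resp.text)  # Business Central normally returns a descriptive OData error body
```

If this single-record POST fails, the response body usually names the offending field or permission, which is far more actionable than the sink's NullPointerException.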
Azure Data Factory