I am trying to sink some transformed data from an Azure Data Factory mapping data flow into a Dataverse/Dynamics 365 custom table. The data flow runs fine in debug mode and the sink step shows all the data I expect, but when I trigger the pipeline it fails with the error below. Isn't a Dataverse custom table supported as a sink in Data Factory?
{"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Sink 'sink1': org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 12.0 failed 1 times, most recent failure: Lost task 0.0 in stage 12.0 (TID 1405, vm-09497082, executor 1): com.microsoft.dataflow**.Issues: DF-Rest_006 - Row marker value:17 is not valid!**\n\tat com.microsoft.dataflow.Utils$.failure(Utils.scala:75)\n\tat org.apache.spark.sql.execution.datasources.rest.RestClient$$anonfun$savePartitionSingle$1$$anonfun$apply$mcV$sp$3.apply(RestClient.scala:117)\n\tat org.apache.spark.sql.execution.datasources.rest.RestClient$$anonfun$savePartitionSingle$1$$anonfun$apply$mcV$sp$3.apply(RestClient.scala:101)\n\tat scala.collection.Iterator$class.foreach(Iterator.scala:891)\n\tat scala.collection.AbstractIterator.foreach(Iterator.scala:1334)\n\tat org.apache.spark.sql.execution.datasources.rest.RestClient$$anonfun$savePartitionSingle$1.apply$mcV$sp(RestClient.scala:100)\n\tat org.apache.spark.sql.execution.datasources.rest.RestClient$$anonfun$savePartitionSingle$1.apply(Re","Details":"org.apache.spark.SparkException: Job aborted due to stage failure: Task 0 in stage 12.0 failed 1 times, most recent failure: Lost task 0.0 in stage 12.0
Thanks