Issue while calling an external API in Azure Data Factory

Jaganath Kumar 110 Reputation points
2023-12-10T08:29:31.5133333+00:00

Hello everyone,

I need to encode the file content to Base64, create a JSON object with this encoded content, and subsequently send this JSON as part of an external call to an API endpoint.

I created a dataflow with the following steps:

  1. SrcInputFile - Read the entire file as a single column.
  2. Derivingcolumns - Read the input and convert it to Base64.
  3. selectrequiredcolumn - Select the converted content column and the source filename.
  4. derivejsonbody - Form a JSON body to pass to the external call.
  5. externalcall1 - Call the API.
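
As a rough sketch of steps 1-4 outside of the data flow (field names are copied from the request shown in the error log; the file content and filename here are placeholders, not the real job data), the payload could be built like this in Python:

```python
import base64
import json

def build_payload(file_bytes: bytes, file_name: str) -> str:
    """Base64-encode raw file content and wrap it in the JSON body shape
    shown in the error log. All values besides the encoded content are
    placeholders, not the real job parameters."""
    encoded = base64.b64encode(file_bytes).decode("ascii")
    body = {
        "OperationName": "importBulkData",
        "DocumentContent": encoded,
        "ContentType": "csv",
        "FileName": file_name,
    }
    # json.dumps always yields syntactically valid JSON, with any quotes
    # or backslashes inside the values escaped automatically.
    return json.dumps(body)

payload = build_payload(b"col1,col2\n1,2\n", "xxx_20231206.csv")
```

Parsing `payload` back and Base64-decoding `DocumentContent` should round-trip to the original file bytes.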

I am getting error DF-REST_001 - Error response from server: Some(), Status code: 415. Please check your request url and body.

Some({"OperationName":"importBulkData","DocumentContent":"ajndjnddff","ContentType":"csv","FileName":"xxx_20231206.csv","DocumentAccount":"xx$/xxx$/xx$","JobName":"/xx/zz/xx/zz/ddd/ff/xx/,Import","ParameterList":"Operations,xx,xx,ALL,N,N,N","JobOptions":"InterfaceDetails=15,EnableEvent=Y,ImportOption=Y ,PurgeOption = N,ExtractFileType=ALL"}), request method: POST)

Is this failing because of the Some() getting added to the JSON request? If yes, can someone please help me resolve this issue?
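
For context, HTTP 415 means "Unsupported Media Type", i.e. the server rejected the request's Content-Type rather than the URL or body text, and the Some(...) wrapper is most likely just Scala's Option being printed in the Spark log, not text that goes over the wire. A minimal sketch (with a placeholder URL) of the same POST with an explicit JSON content type, built with Python's standard library:

```python
import json
import urllib.request

# Serialize the body and declare it as JSON explicitly; a missing or
# wrong Content-Type header is the classic cause of an HTTP 415.
body = json.dumps({"OperationName": "importBulkData"}).encode("utf-8")

req = urllib.request.Request(
    url="https://example.invalid/fscmRestApi/resources/11.13.18.05/erpintegrations",
    data=body,
    method="POST",
    headers={"Content-Type": "application/json"},
)
# The request object is only constructed here, not sent.
```

In the data flow's external call settings, the equivalent is making sure the request headers declare `application/json`.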

Externalcall settings
User's image

Azure Data Factory
An Azure service for ingesting, preparing, and transforming data at scale.

3 answers

  1. Jaganath Kumar 110 Reputation points
    2023-12-10T09:46:00.2433333+00:00

    Hello Gomathi,

    I appreciate your prompt reply. Despite following your advice, I am still encountering an error. Interestingly, when I copied the JSON from the error and manually ran it through Postman, it worked fine. Below, you'll find the error message extracted from the data flow. Please note that I have scrambled the JSON values as they contain real-time data.

    Job aborted due to stage failure: Task 0 in stage 31.0 failed 1 times, most recent failure: Lost task 0.0 in stage 31.0 (TID 28) (vm-2da25753 executor 1): com.microsoft.dataflow.Issues: DF-REST_001 - Error response from server: Some(), Status code: 415. Please check your request url and body. (url:https://ebwt-test.fa.ap1.oraclecloud.com//fscmRestApi/resources/11.13.18.05/erpintegrations/,request body: Some({"OperationName":"importBulkData","DocumentContent":"xxxxxxxx","ContentType":"csv","FileName":"xxx_20231206.csv","DocumentAccount":"xx$/yy$/gg$","JobName":"/ff/apps/ggg/xx/xx/xx/common/,fgggh","ParameterList":"Operations,xx,yy,ALL,N,N,N","JobOptions":"InterfaceDetails=15,EnableEvent=Y,ImportOption=Y ,PurgeOption = N,ExtractFileType=ALL"}), request method: POST)

    at com.microsoft.dataflow.Utils$.failure(Utils.scala:76)
    at org.apache.spark.sql.execution.datasources.rest.RestClient.ensureSuccessResponse(RestClient.scala:595)
    at org.apache.spark.sql.execution.datasources.rest.RestClient.executeRequest(RestClient.scala:580)
    at org.apache.spark.sql.execution.datasources.rest.RestClient.$anonfun$readPage$2(RestClient.scala:443)
    at scala.Option.map(Option.scala:230)
    at org.apache.spark.sql.execution.datasources.rest.RestClient.readPage(RestClient.scala:443)
    at org.apache.spark.sql.execution.datasources.rest.RestClient.callExecuteSingleRowRequest(RestClient.scala:283)
    at org.apache.spark.sql.execution.datasources.rest.RestClient.callResources(RestClient.scala:138)
    at com.microsoft.dataflow.store.rest.RestCallee.call(RestStore.scala:329)
    at com.microsoft.dataflow.spark.CallExec.$anonfun$doExecute$7(CallExec.scala:117)
    at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
    at scala.collection.Iterator$$anon$10.next(Iterator.scala:459)
    at scala.collection.TraversableOnce$FlattenOps$$anon$2.hasNext(TraversableOnce.scala:469)
    at org.apache.spark.sql.catalyst.expressions.GeneratedClass$GeneratedIteratorForCodegenStage1.processNext(Unknown Source)
    at org.apache.spark.sql.execution.BufferedRowIterator.hasNext(BufferedRowIterator.java:43)
    at org.apache.spark.sql.execution.WholeStageCodegenExec$$anon$1.hasNext(WholeStageCodegenExec.scala:762)
    at org.apache.spark.sql.execution.columnar.DefaultCachedBatchSerializer$$anon$1.hasNext(InMemoryRelation.scala:118)
    at scala.collection.Iterator$$anon$10.hasNext(Iterator.scala:458)
    at org.apache.spark.storage.memory.MemoryStore.putIterator(MemoryStore.scala:237)
    at org.apache.spark.storage.memory.MemoryStore.putIteratorAsBytes(MemoryStore.scala:365)
    at org.apache.spark.storage.BlockManager.$anonfun$doPutIterator$1(BlockManager.scala:1441)
    at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$doPut(BlockManager.scala:1351)
    at org.apache.spark.storage.BlockManager.doPutIterator(BlockManager.scala:1415)
    at org.apache.spark.storage.BlockManager.getOrElseUpdate(BlockManager.scala:1238)
    at org.apache.spark.rdd.RDD.getOrCompute(RDD.scala:385)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:336)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:57)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:374)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:338)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:57)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:374)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:338)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:57)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:374)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:338)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:57)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:374)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:338)
    at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
    at org.apache.spark.scheduler.Task.run(Task.scala:131)
    at org.apache.spark.executor.Executor$TaskRunner.$anonfun$run$3(Executor.scala:498)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1439)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:501)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
    at java.lang.Thread.run(Thread.java:750)

    Driver stacktrace:


  2. ShaikMaheer-MSFT 38,556 Reputation points Microsoft Employee Moderator
    2023-12-14T06:08:47.27+00:00

    Hi Jaganath Kumar,

    Thank you for posting query in Microsoft Q&A Platform.

    From the error details, it seems the request body is not valid JSON; I can see a Some() wrapper being added there. Could you please check how you are sending your request body, and share a screenshot of it so we can understand the issue better? Also, in the derivejsonbody transformation, you can use string-concatenation syntax to build the JSON string. Kindly try building the JSON string that way and see if it helps. Thank you.

    User's image
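
To illustrate why building the JSON body from the converted columns needs care (a hedged Python analogy, not data flow expression syntax; the file name below is invented): naive string concatenation breaks as soon as a value contains a quote, while a serializer escapes it automatically:

```python
import json

file_name = 'report_"final".csv'  # hypothetical value containing a quote

# Concatenation emits the embedded quote unescaped, so the result
# is not valid JSON and a receiving server may reject it.
concatenated = '{"FileName":"' + file_name + '"}'

# A serializer escapes the embedded quote and yields valid JSON.
serialized = json.dumps({"FileName": file_name})
```

If concatenation is used in the derive transformation, any quotes, backslashes, or newlines in the column values need to be escaped by hand.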

    Hope this helps. Please let me know how it goes. Thank you.


    Please consider hitting the Accept Answer button. Accepted answers help the community as well. Thank you.


  3. Deleted

    This answer has been deleted due to a violation of our Code of Conduct. The answer was manually reported or identified through automated detection before action was taken. Please refer to our Code of Conduct for more information.


    Comments have been turned off.
