How to fix request throttling (quota exceeded) errors in an Azure Data Factory pipeline built with data flows

Rohan Dussa 0 Reputation points
2024-01-31T16:59:03.4266667+00:00

I have created a data flow with a REST connector as the source and an Azure SQL database as the sink. The linked service URL is paged, so I have applied a pagination rule. However, when I run the pipeline for the data flow, it fails with the error below:

Operation on target KeapContactData failed: {"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: Failure to read most recent page request:","Details":"com.microsoft.dataflow.Issues: DF-Rest_015 - Failure to read most recent page request: DF-REST_001 - Error response from server: Some({
  "code": "429",
  "message": "Quota Exceeded",
  "status": "Request Throttled",
  "details": null
}), Status code: 429. Please check your request url and body. (url: https://api.infusionsoft.com/crm/rest/v1/contacts/, request body: None, request method: GET)
	at com.microsoft.dataflow.Utils$.failure(Utils.scala:76)
	at org.apache.spark.sql.execution.datasources.rest.RestClient.$anonfun$readResourcesWithDynamicPaging$1(RestClient.scala:88)
	at scala.util.Try$.apply(Try.scala:213)
	at org.apache.spark.sql.execution.datasources.rest.RestClient.readResourcesWithDynamicPaging(RestClient.scala:55)
	at org.apache.spark.sql.execution.datasources.rest.RestClient.readResources(RestClient.scala:27)
	at org.apache.spark.sql.execution.datasources.rest.RestRDD.compute(RestRDD.scala:20)
	at org.apache.spark.rdd.R"}

From the message, I understand the error relates to the API's quota and request limit, but I have not been able to resolve it. I have already set the request interval to 16000 ms to slow the source down as it navigates through the pages.
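For context, this is roughly the retry behavior I believe is needed around the 429 responses. It is a minimal sketch in Python, outside the data flow, assuming a bearer token and the same contacts endpoint; the token, page parameters, and retry counts are placeholders, not values from my pipeline:

```python
import time

import requests

BASE_URL = "https://api.infusionsoft.com/crm/rest/v1/contacts/"
ACCESS_TOKEN = "<your-access-token>"  # placeholder; the pipeline itself authenticates via the linked service


def get_page(params: dict, max_retries: int = 5) -> dict:
    """Fetch one page, backing off whenever the API answers 429 (Quota Exceeded)."""
    delay = 16  # seconds; mirrors the 16000 ms request interval set on the data flow source
    for _ in range(max_retries):
        resp = requests.get(
            BASE_URL,
            params=params,
            headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
        )
        if resp.status_code != 429:
            resp.raise_for_status()
            return resp.json()
        # Honor the server's Retry-After hint if present, otherwise back off exponentially.
        retry_after = resp.headers.get("Retry-After")
        time.sleep(int(retry_after) if retry_after else delay)
        delay *= 2
    raise RuntimeError("Still throttled after retries")
```

As far as I can tell, the data flow's request interval only spaces requests at a fixed rate, whereas something like the sketch above would also recover from an individual 429 instead of failing the whole job.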

I appreciate the help

Azure SQL Database
Azure Data Factory