In ADF, the data flow source is a Dataverse table, and we need to move its data to a SQL table filtered on the createdon column (01.01.2024 to 31.01.2024).
We tried using a Filter activity, but it is not suitable for this.
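For reference, the date-range condition we intend is roughly the one below. This is only a sketch of the logic in the mapping data flow expression language, assuming the Dataverse column is named createdon and using an exclusive upper bound of 01.02.2024 to keep the whole of 31.01.2024:

    createdon >= toTimestamp('2024-01-01', 'yyyy-MM-dd') && createdon < toTimestamp('2024-02-01', 'yyyy-MM-dd')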
I got the following error while running the pipeline:
"Operation on target Data flow1 failed: {"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: Failure to read most recent page request: Request Failure(URL:https://scic.crm.dynamics.com/api/data/v9.2/activitypointers); Error Message: Read timed out; java.net.SocketTimeoutException: Read timed out","Details":"com.microsoft.dataflow.Issues: DF-Rest_015 - Failure to read most recent page request: Request Failure(URL:https://scic.crm.dynamics.com/api/data/v9.2/activitypointers); Error Message: Read timed out; java.net.SocketTimeoutException: Read timed out\n\tat com.microsoft.dataflow.Utils$.failure(Utils.scala:76)\n\tat org.apache.spark.sql.execution.datasources.rest.RestClient.$anonfun$readResourcesWithDynamicPaging$1(RestClient.scala:87)\n\tat scala.util.Try$.apply(Try.scala:213)\n\tat org.apache.spark.sql.execution.datasources.rest.RestClient.readResourcesWithDynamicPaging(RestClient.scala:55)\n\tat org.apache.spark.sql.execution.datasources.rest.RestClient.readResources(RestClient.scala:27)\n\tat org.apache.spark.sql.execution.datasources.rest.RestRDD.compute(RestRDD.scala:20)\n\tat org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:374)\n\tat org.apache.spark.rdd.RDD.iterator(RDD.scala:338)\n\tat org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:57)\n\tat org.apache.spark.rdd.RDD.computeOrReadCheckpoint("}"
Please guide us with the proper steps.
Thanks,
D. Prakash