I'm getting an error in a pipeline run in Azure Synapse Analytics.

Anirudh Mounasamy
I'm facing an error with a particular pipeline in Azure Synapse Analytics.

The architecture:
I used Azure Synapse Link in Dataverse to set up a live sync between Dataverse and an Azure Data Lake Storage Gen2 storage account, created a lake database in Azure Synapse Analytics, and connected the data lake to the lake database. I then created one pipeline per lake database table to move its data to a dedicated SQL pool; each pipeline uses the upsert method in a data flow to update/insert rows into the dedicated SQL pool. However, the pipeline responsible for upserting data from the "Contact" table (which holds all contact details) intermittently fails with one of two errors. I need to resolve these errors. Can anyone help me resolve them?
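For reference, the upsert behavior the data flows apply can be sketched as follows. This is a minimal illustration of the semantics only, not the pipeline itself; `contactid` is an assumed key column name:

```python
# Minimal sketch of upsert semantics: rows whose key already exists in the
# target are updated in place, rows with a new key are inserted.
def upsert(target: dict, rows: list, key: str = "contactid") -> dict:
    for row in rows:
        target[row[key]] = row  # update if key exists, insert otherwise
    return target

existing = {"c1": {"contactid": "c1", "name": "Alice"}}
incoming = [
    {"contactid": "c1", "name": "Alice B."},  # update
    {"contactid": "c2", "name": "Bob"},       # insert
]
merged = upsert(existing, incoming)
print(len(merged))  # -> 2
```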


  1. Operation on target contact failed: {"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Sink 'contactSQL': Spark job failed in one of the cluster nodes while writing data in one of the partitions to sink, with following error message: Operation failed: "The range specified is invalid for the current size of the resource.", 416, GET, https://datascicgen2.dfs.core.windows.net/dataverse-scicprod-orgdcef26ab/contact/2023-08.csv?timeout=90, InvalidRange, "The range specified is invalid for the current size of the resource. RequestId:13b9db51-e01f-006f-0938-f7ab0e000000 Time:2023-10-05T03:05:39.8690461Z"","Details":null}
  2. Operation on target contact failed: {"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Sink 'contactSQL': Conversion failed when converting from a character string to uniqueidentifier.","Details":"java.sql.BatchUpdateException: Conversion failed when converting from a character string to uniqueidentifier.\n\tat shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerStatement.executeBatch(SQLServerStatement.java:1886)\n\tat com.microsoft.dataflow.transformers.store.JDBCWriter.executeBatchSQLs(JDBCStore.scala:601)\n\tat com.microsoft.dataflow.transformers.store.JDBCWriter.executeSQL(JDBCStore.scala:576)\n\tat com.microsoft.dataflow.transformers.store.JDBCWriter.$anonfun$executeTableOpAndPostSQL$1(JDBCStore.scala:670)\n\tat scala.runtime.java8.JFunction0$mcV$sp.apply(JFunction0$mcV$sp.java:23)\n\tat com.microsoft.dataflow.transformers.StoreContext$.timed(Store.scala:153)\n\tat com.microsoft.dataflow.transformers.store.JDBCWriter.executeTableOpAndPostSQL(JDBCStore.scala:670)\n\tat com.microsoft.dataflow.transformers.store.JDBCWriter.executeTableOpAndPostSQLAndDDL(JDBCStore.scala:640)\n\tat com.microsoft.dataflow.transformers.store.JDBCWriter.executeTableOpAndPostSQLAndDDL(JDBCStore.scala:629)\n\tat com.microsoft.dataflow.tra"}
  3. The failed pipeline runs are all caused by one of these two errors.

If anyone knows the solution, please post it as soon as possible.

Azure Synapse Analytics
Azure Data Factory

Accepted answer
  1. QuantumCache

    Hello @Anirudh Mounasamy

    I assume the data being inserted is in the correct format. Has this started happening only recently, and how frequently does it occur?

    You may want to refer to the Azure documentation on common Azure Storage error codes for more information.


    Regarding the second error, check that the data types of the columns in the dedicated SQL pool match the source, and that the data being inserted is in the correct format. See the Azure documentation on data types in SQL Server for more information.
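    One practical way to do that format check: the "Conversion failed when converting from a character string to uniqueidentifier" error means at least one source value is not a valid GUID. A quick sketch that scans a CSV column for such values before the load, assuming a column named `contactid` (hypothetical here):

```python
import csv
import io
import uuid


def invalid_guids(rows, column):
    """Return (row_number, value) pairs whose value is not a valid GUID."""
    bad = []
    for i, row in enumerate(rows, start=1):
        value = (row.get(column) or "").strip()
        try:
            uuid.UUID(value)  # raises ValueError for malformed GUID strings
        except ValueError:
            bad.append((i, value))
    return bad


# Example with an in-memory CSV; "contactid" is an assumed column name.
sample = io.StringIO(
    "contactid,name\n"
    "9bb6b30e-37a8-4e0b-9a2d-3b2d0f1c1234,Alice\n"
    "not-a-guid,Bob\n"
)
print(invalid_guids(csv.DictReader(sample), "contactid"))
# -> [(2, 'not-a-guid')]
```

    Any rows this flags would fail the implicit cast to `uniqueidentifier` at the sink; fixing or filtering them in the data flow should clear the second error.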

    For troubleshooting, you can also refer to the Azure documentation on enabling verbose logging for Azure Synapse Analytics pipelines.

    To resolve the "InvalidRange" error, you can try the following:

    Check if the range specified in the request is valid and within the valid range of the resource.

    Check if the range specified in the request is aligned with the block size of the resource.

    Increase the timeout value specified in the request to allow the server more time to complete the request.

    Check if the request is malformed or contains invalid parameters.
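    To make the first check concrete: an HTTP 416 "InvalidRange" response means the requested byte range starts at or beyond the blob's current size. This can happen intermittently when Synapse Link is still writing to the partition CSV while the data flow reads it. A minimal sketch of the range check, under that assumption about the failure mode:

```python
def range_is_valid(offset: int, resource_size: int) -> bool:
    """Per HTTP range semantics: a ranged GET whose start offset is at or
    beyond the resource's current size is rejected with 416."""
    return 0 <= offset < resource_size


print(range_is_valid(0, 1024))     # -> True  (range starts inside the blob)
print(range_is_valid(1024, 1024))  # -> False (would return 416 InvalidRange)
```

    If that is the cause, retrying the failed run after the Synapse Link sync has settled, or scheduling the pipeline so it does not overlap the sync window, may avoid the transient state.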

