Continue upon error

Richard Gao 6 Reputation points
2022-06-03T15:56:49.283+00:00

[Screenshot: data flow sink settings with the error row handling options checked]

We have two databases, one in Azure SQL and one in Azure Synapse, and we write to both using data flows in Data Factory. When writing to these databases we often get records whose values exceed the varchar size. In Azure SQL we were able to skip rows upon error and log those errors. For Synapse, however, this is not the case: despite checking the boxes in the screenshot, we get the following truncation error (a sketch for locating the offending rows follows the error details):

Error code: DFExecutorUserError
Failure type: User configuration issue

Details:
{"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Sink 'sink1': shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: HadoopSqlException: String or binary data would be truncated.","Details":"Unexpected error encountered filling record reader buffer: HadoopSqlException: String or binary data would be truncated."}

The exact same records get sent to Azure SQL and Azure Synapse. In Azure SQL they are logged with no problem. This is extremely useful because we get to see which rows caused the problem, and perhaps we should throw out that data. But in Synapse, it crashes our entire pipeline. So the question is: does Synapse actually have error logging or not, and is the only solution to substring all our columns?
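
For reference, the substring fallback we are trying to avoid would look something like the following. This is only a sketch under assumed names: SomeColumn and the 100-character limit are placeholders, and inside the data flow itself the same idea would presumably be a derived column expression such as left(SomeColumn, 100).

-- Defensive truncation before the load; SomeColumn and 100 are placeholders.
SELECT LEFT(SomeColumn, 100) AS SomeColumn
FROM dbo.SourceTable;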

Azure Data Factory
