Continue upon error
We have two databases, one in Azure SQL and another in Azure Synapse. We write to both using data flows in Data Factory. When writing to these databases we often get records that exceed the varchar size. In Azure SQL we were able to skip rows on error and log those errors. For Synapse, however, this does not work: despite checking the error-row handling boxes shown in the screenshot, we get the following truncation error:
Error code: DFExecutorUserError
Failure type: User configuration issue
Details: {"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Sink 'sink1': shaded.msdataflow.com.microsoft.sqlserver.jdbc.SQLServerException: HdfsBridge::recordReaderFillBuffer - Unexpected error encountered filling record reader buffer: HadoopSqlException: String or binary data would be truncated.","Details":"Unexpected error encountered filling record reader buffer: HadoopSqlException: String or binary data would be truncated."}
The exact same records get sent to both Azure SQL and Azure Synapse. In Azure SQL they are logged with no problem. This is extremely useful because we get to see which rows caused the problem and can decide whether to throw out the data. But in Synapse, it crashes our entire pipeline. So the question is: does Synapse actually support error-row logging or not, and is the only solution to substring all our columns?
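For reference, the substring workaround we are hoping to avoid would look roughly like this sketch (column names and lengths are hypothetical; the same idea could be expressed as derived-column expressions in the data flow):

-- Truncate every varchar to its sink column's length before loading,
-- so Synapse never receives an oversized value.
SELECT
    LEFT(col_a, 100) AS col_a,  -- sink column is varchar(100)
    LEFT(col_b, 255) AS col_b   -- sink column is varchar(255)
FROM dbo.staging_table;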