Hi Brandon Duncan,
Thank you for posting your query on the Microsoft Q&A platform.
The error message you are seeing indicates that the data you are trying to copy is too large for the `nvarchar(max)` field in your SQL table. This is likely due to a limitation of the Hadoop file format used by Synapse Link for Dataverse.
One possible solution to this issue is to split the email body into smaller chunks and store them in separate rows in your SQL table. You can use a data flow in Synapse to split the email body into chunks of a certain size (e.g. 10,000 characters) and then write each chunk to a separate row in your SQL table.
To do this, you can use the `substring` function in a derived column transformation to split the email body into chunks. For example, the following expression extracts a chunk of 10,000 characters:

`substring(email_body, (i - 1) * 10000 + 1, 10000)`

where `i` is a 1-based chunk index that increments for each chunk (note that `substring` in data flow expressions is 1-based).
You can then use a sink transformation to write each chunk to a separate row in your SQL table.
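As a rough illustration of the chunking arithmetic only (plain Python, not data flow syntax; the function name, the 10,000-character chunk size, and the sample input are my own assumptions, mirroring the `substring` expression above):

```python
def split_into_chunks(email_body: str, chunk_size: int = 10000) -> list[str]:
    """Split a long string into fixed-size chunks.

    Mirrors substring(email_body, (i - 1) * chunk_size + 1, chunk_size):
    the data flow substring is 1-based, while Python slicing is 0-based,
    hence the start offsets 0, chunk_size, 2 * chunk_size, ...
    """
    return [email_body[start:start + chunk_size]
            for start in range(0, len(email_body), chunk_size)]

# Hypothetical example: a 25,000-character body splits into three rows,
# two full 10,000-character chunks plus one 5,000-character remainder.
body = "x" * 25000
chunks = split_into_chunks(body)
print(len(chunks))              # 3
print([len(c) for c in chunks]) # [10000, 10000, 5000]
```

Each chunk would then map to one row written by the sink, and the original body can be reassembled by concatenating the chunks in index order.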
Another possible solution is to use a different file format that supports larger data sizes, such as Parquet or ORC. These file formats are supported by Synapse Link for Dataverse and can handle larger data sizes than the Hadoop file format.
I hope this helps you find a solution to your issue. Let me know how it goes or if you have any further queries. Thank you.