Hello @MadhuVamsi-2459 ,
Apologies for the delay in response.
This is a problem that goes back 15 or more years. Reviewing your table design may be more effective.
When designing your table, aim for rows of less than 8,060 bytes so each row fits on a single data page.
If a row exceeds that, the data no longer fits in the IN_ROW_DATA allocation unit and SQL Server has to use the ROW_OVERFLOW_DATA allocation unit, which requires the oversized column to be typed varchar(max), nvarchar(max), or varbinary(max). SQL Server then stores the large value on a separate page and keeps a 24-byte pointer in the original row.
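If you want to see where a table's data is actually landing, you can inspect its allocation units. Here is a minimal sketch using pyodbc; the connection string and table name are placeholders, and the join on partition_id is the simplified form:

```python
import pyodbc

# Placeholders: adjust the connection string and table name for your environment.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;"
    "UID=myuser;PWD=mypassword"
)
cur = conn.cursor()

# Each partition exposes IN_ROW_DATA, ROW_OVERFLOW_DATA, and LOB_DATA
# allocation units; non-zero used_pages under ROW_OVERFLOW_DATA means
# rows are already spilling off-page.
cur.execute("""
    SELECT au.type_desc, SUM(au.used_pages) AS used_pages
    FROM sys.allocation_units AS au
    JOIN sys.partitions AS p ON au.container_id = p.partition_id
    WHERE p.object_id = OBJECT_ID('dbo.MyLargeTable')
    GROUP BY au.type_desc
""")
for row in cur.fetchall():
    print(row.type_desc, row.used_pages)
conn.close()
```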
The 2 GB limit applies to each individual varchar(max), nvarchar(max), or varbinary(max) value, not to the table as a whole. According to the docs, the total size of a columnstore table is unlimited, so a table of 80 GB should be fine.
I would suggest you try a different approach. Trying to shove that volume of data through an ancient JDBC connection will be troublesome. The recommended pattern for moving data from Databricks to Azure Synapse is to use the Azure Synapse Dedicated SQL Pool Connector for Apache Spark.
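As an illustration, here is a minimal PySpark sketch of that pattern from the Databricks side, where the connector is registered as `com.databricks.spark.sqldw` and stages data through an ADLS tempDir. The JDBC URL, storage path, and table names below are placeholders:

```python
# Minimal sketch: write a DataFrame from Databricks to a Synapse dedicated
# SQL pool. The source table, JDBC URL, tempDir, and target table are placeholders.
df = spark.table("my_source_table")

(df.write
   .format("com.databricks.spark.sqldw")            # Databricks-side Synapse connector
   .option("url", "jdbc:sqlserver://myserver.sql.azuresynapse.net:1433;"
                  "database=mypool;encrypt=true")
   .option("tempDir", "abfss://staging@myaccount.dfs.core.windows.net/tmp")
   .option("forwardSparkAzureStorageCredentials", "true")
   .option("dbTable", "dbo.MyLargeTable")
   .mode("append")                                  # or "overwrite"
   .save())
```

The connector stages the data in bulk through the storage account rather than streaming rows over JDBC, which is what makes it suitable for this volume.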
I would start by looking at the physical data length of the values in the column and set the type appropriately if you can.
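For example, a quick check with DATALENGTH, again sketched with pyodbc; the connection string, table, and column names are placeholders:

```python
import pyodbc

# Placeholders: adjust the connection string, table, and column names.
conn = pyodbc.connect(
    "DRIVER={ODBC Driver 18 for SQL Server};"
    "SERVER=myserver.database.windows.net;DATABASE=mydb;"
    "UID=myuser;PWD=mypassword"
)
cur = conn.cursor()

# DATALENGTH returns the number of bytes actually stored per value, so
# MAX/AVG tell you whether a smaller, fixed-length type would suffice.
cur.execute("""
    SELECT MAX(DATALENGTH(MyColumn)) AS max_bytes,
           AVG(DATALENGTH(MyColumn)) AS avg_bytes
    FROM dbo.MyLargeTable
""")
row = cur.fetchone()
print(f"max={row.max_bytes} bytes, avg={row.avg_bytes} bytes")
conn.close()
```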
Hope this helps.