Hi @manish verma,
With all due respect, I would like to share some thoughts about your requirement. My understanding is that you want to read a billion records (assuming the source data arrives as files) using Databricks and write them into SQL Server Hyperscale.
I could not find any documented limitation in the JDBC driver that prevents writing a billion records. Issues typically start when the source data does not fit into the Databricks Spark cluster's memory, or when the writing technique is inefficient. As developers, we need to come up with better design patterns to handle large data volumes inside Databricks. One good design pattern @HimanshuSinha-msft suggested is partitioning the data: it splits your data into multiple partitions and also improves write speed, because each partition is written independently (see the sketch below).
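Here is a minimal PySpark sketch of that pattern, assuming the source is Parquet files on mounted storage; the path, server, database, table name, and credentials are all placeholders you would replace with your own, and the partition count and batch size are illustrative values to tune against your cluster and Hyperscale tier:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided in Databricks notebooks

# Hypothetical source: a large file-based dataset on mounted cloud storage.
df = spark.read.parquet("/mnt/source/billion_records")

# Repartition so each partition is written by a separate task in parallel.
# 64 is an illustrative value -- tune it to your cluster size and target's limits.
(df.repartition(64)
   .write
   .format("jdbc")
   .option("url", "jdbc:sqlserver://<server>.database.windows.net:1433;database=<db>")  # placeholder
   .option("dbtable", "dbo.target_table")   # placeholder target table
   .option("user", "<user>")                # prefer a Databricks secret scope in practice
   .option("password", "<password>")
   .option("batchsize", 10000)              # rows per JDBC batch insert
   .mode("append")
   .save())
```

Because each partition opens its own connection and writes independently, the cluster never needs to hold the full billion rows on one executor, and the write throughput scales with the number of partitions up to what the Hyperscale instance can absorb.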
Again, with due respect, I am not a Microsoft employee, but I have not seen Microsoft release any partially tested component without providing relevant documentation for it. We will all try to help if you share the exact error that occurs when you load large volumes using Databricks, so we can work out a better design approach.