It seems you want to populate a table in a Synapse dedicated SQL pool with data from a Databricks DataFrame. I see you are using spark.read for that, but spark.read is for loading data into a DataFrame; to push data out, you should use the DataFrame's write method instead.
Here is sample code that writes the data in the DataFrame df to a Synapse dedicated SQL pool. If the table "table01" doesn't exist in the dedicated pool, it will be created for you.
(df.write
    .mode("append")                                    # append rows; the table is created if it doesn't exist
    .format("com.databricks.spark.sqldw")              # Azure Synapse connector (note: format name must be a string)
    .option("url", "<complete connection string URL>")
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("dbTable", "table01")
    .option("tempDir", "<staging folder path in your storage account>")
    .save())
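Since your original code used spark.read, here is a minimal sketch of the reverse direction for comparison: reading "table01" from the dedicated SQL pool back into a DataFrame with the same connector. The connection string and staging folder placeholders are the same assumptions as in the write example above.

df2 = (spark.read
    .format("com.databricks.spark.sqldw")              # same Azure Synapse connector
    .option("url", "<complete connection string URL>")
    .option("forwardSparkAzureStorageCredentials", "true")
    .option("dbTable", "table01")                      # or .option("query", "...") for an arbitrary query
    .option("tempDir", "<staging folder path in your storage account>")
    .load())

Note that tempDir is needed in both directions: the connector stages the data in your storage account and uses PolyBase or COPY to move it in and out of the pool.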
Hope this helps!