You can find instructions in this post: https://learn.microsoft.com/en-us/answers/questions/428775/connect-synapse-spark-to-synapse-serverless-sql-po.html
Basically, the JDBC driver will allow access to SQL, but the version pre-installed in your Spark pool doesn't have AAD support. You need to update it and add some libraries to your pool to get this working.
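To make that concrete, here is a minimal PySpark sketch of reading a serverless SQL view over JDBC with AAD password authentication. The workspace name, database, view, and user are placeholders, and `ActiveDirectoryPassword` is one of several AAD modes the Microsoft SQL JDBC driver supports; it only works once the updated driver and its AAD dependencies are on the pool, as described in the linked post.

```python
# Hedged sketch: connect Synapse Spark to a serverless SQL pool via JDBC
# with AAD auth. All names below are placeholders, not real endpoints.

def build_jdbc_url(workspace: str, database: str) -> str:
    # The serverless (on-demand) SQL endpoint follows the pattern
    # <workspace>-ondemand.sql.azuresynapse.net
    return (
        f"jdbc:sqlserver://{workspace}-ondemand.sql.azuresynapse.net:1433;"
        f"database={database};"
        "encrypt=true;trustServerCertificate=false;"
        # AAD auth mode; requires the updated driver + AAD libraries on the pool
        "authentication=ActiveDirectoryPassword;"
    )

def read_view(spark, workspace: str, database: str, view: str):
    # spark: the active SparkSession inside your Synapse Spark pool
    return (
        spark.read.format("jdbc")
        .option("url", build_jdbc_url(workspace, database))
        .option("dbtable", view)
        .option("user", "user@contoso.com")  # placeholder AAD user
        .option("password", "<password>")    # better: pull from Key Vault
        .load()
    )
```

In practice you would avoid embedding a password at all and use a managed-identity or token-based AAD mode instead; this sketch just shows where the authentication setting plugs into the JDBC URL.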
It's worth noting that it's not usually necessary to connect to the serverless pool from Spark. All the data in your serverless pool lives in storage, so you can go straight to the storage account, and you would need access permissions to storage anyway. That said, I can see a scenario where you have some complex views or something similar that you want to tap into from Spark.
Similar to how you would control access in the serverless pool, both Access Control Lists combined with AAD users, and SAS tokens, are available for scoped access to ADLS.
The TokenLibrary available in your Spark pool will work with either option: https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-secure-credentials-with-tokenlibrary?pivots=programming-language-python
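As a rough illustration of the second route, here is a sketch of reading directly from ADLS Gen2 with auth routed through a workspace linked service. The linked service name, storage account, container, and path are placeholders, and the SAS provider class name follows the pattern in the TokenLibrary docs linked above; treat this as a sketch to adapt, not a drop-in snippet.

```python
# Hedged sketch: read ADLS Gen2 data from Synapse Spark, letting the
# workspace linked service (via TokenLibrary) supply the SAS token.
# All names are placeholders.

def abfss_path(container: str, account: str, relative_path: str) -> str:
    # Hypothetical helper: build the abfss URI for a file in ADLS Gen2
    return f"abfss://{container}@{account}.dfs.core.windows.net/{relative_path}"

def read_with_linked_service(spark, linked_service: str, path: str):
    # Point storage auth at the linked service; the Synapse SAS provider
    # resolves tokens at read time (class name per the TokenLibrary docs).
    spark.conf.set("spark.storage.synapse.linkedServiceName", linked_service)
    spark.conf.set(
        "fs.azure.sas.token.provider.type",
        "com.microsoft.azure.synapse.tokenlibrary.LinkedServiceBasedSASProvider",
    )
    return spark.read.parquet(path)
```

If your AAD user already has ACL permissions on the storage account, you can usually skip the SAS configuration entirely and read the `abfss://` path with the pool's default passthrough credentials.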
Hi @Moore, Payton E
I wanted to check in with you to see if you have any follow-up questions for me on this issue.
We decided to access the data lake storage directly, and it worked perfectly with the documentation you provided. Thank you! @Samara Soucy - MSFT