Getting all the table details from Azure database to Databricks

asked 2021-11-09T17:16:38.667+00:00
Sandz 66 Reputation points

Hi All,

I don't know Python/Scala; I've just been googling and learning the commands. We have a requirement to read an Azure SQL database from Databricks. I found the information related to the JDBC driver, established a connection, and read one table from SQL.
Is there any way to mount the Azure SQL database (or otherwise expose it in DBFS/Databricks) so I can query and update data?

Basically, I want to view all the Azure SQL tables from Databricks rather than writing a lengthy query for each individual table.


Azure Databricks
An Apache Spark-based analytics platform optimized for Azure.

Accepted answer
  1. answered 2021-11-10T07:07:31.747+00:00
    PRADEEPCHEEKATLA-MSFT 53,096 Reputation points Microsoft Employee

    Hello @Sandz ,

    Thanks for the question and using MS Q&A platform.

    There are a couple of methods to connect to Azure SQL Database from Azure Databricks.

    Method 1: SQL databases using JDBC

    The following Python example shows how to build the JDBC URL, set the connection properties, and push a query down to the database engine:

    # Create the JDBC URL
    jdbcHostname = "<ServerName>.database.windows.net"
    jdbcDatabase = "<DatabaseName>"
    jdbcPort = 1433
    jdbcUsername = "<UserName>"
    jdbcPassword = "<Password>"
    jdbcUrl = "jdbc:sqlserver://{0}:{1};database={2}".format(jdbcHostname, jdbcPort, jdbcDatabase)
    connectionProperties = {
      "user": jdbcUsername,
      "password": jdbcPassword,
      "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    }

    # Push down a query to the database engine
    pushdown_query = "(select * from employees where emp_no < 10008) emp_alias"
    df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)


    Push down query to get the tables details from Azure SQL Database:

    pushdown_query = "(SELECT * FROM sys.Tables) tables"
    df = spark.read.jdbc(url=jdbcUrl, table=pushdown_query, properties=connectionProperties)
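
    To address the original ask of browsing every table without writing a query per table, one option is to read the table list once and register each table as a temporary view. This is a sketch under assumptions and not part of the original answer: the `view_name` helper and the use of `INFORMATION_SCHEMA.TABLES` are my own additions.

```python
# Sketch (assumption, not from the original answer): register every base table
# in the Azure SQL database as a Databricks temp view, so each one can be
# queried with plain SQL instead of a separate JDBC read per table.

def view_name(schema, table):
    # Hypothetical helper: build a safe temp-view name such as "dbo_employees".
    return "{0}_{1}".format(schema, table).replace(".", "_").replace("-", "_")

def register_all_tables(spark, jdbc_url, connection_properties):
    # List base tables via a pushed-down subquery, as in the example above.
    tables_df = spark.read.jdbc(
        url=jdbc_url,
        table="(SELECT TABLE_SCHEMA, TABLE_NAME FROM INFORMATION_SCHEMA.TABLES "
              "WHERE TABLE_TYPE = 'BASE TABLE') t",
        properties=connection_properties)
    for row in tables_df.collect():
        # Each table becomes queryable, e.g. spark.sql("SELECT * FROM dbo_employees")
        spark.read.jdbc(
            url=jdbc_url,
            table="{0}.{1}".format(row.TABLE_SCHEMA, row.TABLE_NAME),
            properties=connection_properties
        ).createOrReplaceTempView(view_name(row.TABLE_SCHEMA, row.TABLE_NAME))
```

    In a notebook where `jdbcUrl` and `connectionProperties` are defined as above, calling `register_all_tables(spark, jdbcUrl, connectionProperties)` makes each table queryable with `spark.sql`. The JDBC reads are lazy, so registering the views is cheap; data is fetched only when a view is actually queried.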


    Method 2: SQL databases using the Apache Spark Connector

    There are two versions of the connector available through Maven: a 2.4.x-compatible version and a 3.0.x-compatible version. Both can be imported into the cluster as a Maven library.

    Prerequisites: You need to install the Apache Spark Connector on the cluster.


    Python code to read from a SQL table using the Apache Spark Connector:

    server_name = "jdbc:sqlserver://<ServerName>"
    database_name = "<DatabaseName>"
    url = server_name + ";" + "databaseName=" + database_name + ";"
    table_name = "<TableName>"
    username = "<UserName>"
    password = "<Password>"  # Please specify password here

    jdbcDF = spark.read \
        .format("com.microsoft.sqlserver.jdbc.spark") \
        .option("url", url) \
        .option("dbtable", table_name) \
        .option("user", username) \
        .option("password", password) \
        .load()
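
    The question also mentions updating data. As a hedged sketch (the `write_table` and `connector_options` helpers are my own assumptions, not from the original answer), the same connector can write a DataFrame back. Spark's save modes here are "append" or "overwrite", so row-level UPDATE statements are not available through this path.

```python
# Sketch (assumption): write a DataFrame back to Azure SQL with the
# Apache Spark Connector. Save modes are "append" (add rows) or
# "overwrite" (replace the table); there is no row-level UPDATE.

def connector_options(url, table_name, username, password):
    # Hypothetical helper collecting the options used in the read example above.
    return {
        "url": url,
        "dbtable": table_name,
        "user": username,
        "password": password,
    }

def write_table(df, url, table_name, username, password, mode="append"):
    (df.write
       .format("com.microsoft.sqlserver.jdbc.spark")
       .mode(mode)  # "append" or "overwrite"
       .options(**connector_options(url, table_name, username, password))
       .save())
```

    For example, after transforming `jdbcDF` in Databricks, `write_table(jdbcDF, url, "dbo.Staging", username, password, mode="overwrite")` would replace the target table with the new data.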


    Hope this helps. Please let us know if you have any further queries.



0 additional answers
