How to connect to a serverless database from a Synapse notebook?

Afroz Shaik 0 Reputation points
2023-10-31T00:36:31.67+00:00

Hi All,

I am trying to connect to a serverless database from a Synapse notebook to read/write data and create external tables with a custom data source.

I tried the JDBC connection approach and the pyodbc connection approach, but I wasn't able to read/write the data.

It feels quite complex; can anyone provide a working solution?

Is Azure Synapse the right choice?

Azure Synapse Analytics
An Azure analytics service that brings together data integration, enterprise data warehousing, and big data analytics. Previously known as Azure SQL Data Warehouse.

1 answer

Sort by: Most helpful
  1. PRADEEPCHEEKATLA 90,651 Reputation points Moderator
    2023-10-31T08:43:54.25+00:00

    @Afroz Shaik - Thanks for the question and for using the MS Q&A platform.

    To connect to a serverless database from a Synapse notebook, you can use the Spark JDBC connector from PySpark. Here is an example code snippet that connects to a serverless database using PySpark:

    # Import the necessary libraries
    from pyspark.sql import SparkSession
    
    # Set the configuration values
    # For a Synapse serverless SQL pool, the endpoint is usually <workspace-name>-ondemand.sql.azuresynapse.net
    jdbcHostname = "<your_server_name>.database.windows.net"
    jdbcDatabase = "<your_database_name>"
    jdbcPort = 1433
    jdbcUsername = "<your_username>"
    jdbcPassword = "<your_password>"
    
    # Build the JDBC connection string
    jdbcUrl = "jdbc:sqlserver://{0}:{1};database={2}".format(jdbcHostname, jdbcPort, jdbcDatabase)
    
    # Set the JDBC driver name and connection properties
    connectionProperties = {
      "user": jdbcUsername,
      "password": jdbcPassword,
      "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
    }
    
    # Create a Spark session
    spark = SparkSession.builder.appName("ServerlessDatabaseConnection").getOrCreate()
    
    # Read data from the serverless database
    df = spark.read.jdbc(url=jdbcUrl, table="<your_table_name>", properties=connectionProperties)
    
    # Write data to the serverless database
    df.write.jdbc(url=jdbcUrl, table="<your_table_name>", mode="overwrite", properties=connectionProperties)
    

    You can replace the placeholders <your_server_name>, <your_database_name>, <your_username>, <your_password>, and <your_table_name> with your actual values.
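
    If you only need the result of a query (for example, a view defined in the serverless database) rather than a whole table, you can push the query down through the same JDBC options. This is a minimal sketch reusing jdbcUrl and connectionProperties from above; the view name dbo.MyView is a hypothetical placeholder:

    # Hypothetical example: read the result of a query instead of a full table
    query = "(SELECT TOP 100 * FROM dbo.MyView) AS src"
    df_view = spark.read.jdbc(url=jdbcUrl, table=query, properties=connectionProperties)
    df_view.show()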

    Regarding your question about whether Azure Synapse is the right choice, it depends on your specific use case and requirements. Azure Synapse is a unified analytics service that brings together data integration, enterprise data warehousing, and big data analytics, along with AI capabilities to help you analyze and gain insights from your data. If your use case involves several of these areas, Azure Synapse could be a good choice. However, if your use case is focused on a single area, such as data warehousing alone or big data processing alone, you may want to consider services that specialize in that area.

    For more details, refer to similar ask:
    https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/query-serverless-sql-pool-from-an-apache-spark-scala-notebook/ba-p/2250968

    https://stackoverflow.com/questions/76436455/execute-serverless-sql-pool-external-table-views-from-synapse-notebook-using-p
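
    Since you also mentioned creating external tables with a custom data source, the second link above covers the pyodbc route for that. Below is a minimal sketch along the same lines, assuming pyodbc and the Microsoft ODBC driver are available on the Spark pool; the server, database, credentials, and object names (dbo.MyExternalTable, MyDataSource, MyParquetFormat) are hypothetical placeholders, and the external data source and file format are assumed to already exist in the serverless database:

    import pyodbc
    
    # Serverless SQL endpoint of the Synapse workspace (replace with your values)
    server = "<your_workspace_name>-ondemand.sql.azuresynapse.net"
    database = "<your_database_name>"
    username = "<your_username>"
    password = "<your_password>"
    
    # autocommit=True so DDL such as CREATE EXTERNAL TABLE does not run inside a transaction
    conn = pyodbc.connect(
        "DRIVER={ODBC Driver 17 for SQL Server};"
        f"SERVER={server};DATABASE={database};UID={username};PWD={password}",
        autocommit=True,
    )
    cursor = conn.cursor()
    
    # CETAS: create an external table over files in the data lake, using an existing
    # external data source and file format (both assumed to be created beforehand)
    cursor.execute("""
    CREATE EXTERNAL TABLE dbo.MyExternalTable
    WITH (
        LOCATION = 'output/my_table/',
        DATA_SOURCE = MyDataSource,
        FILE_FORMAT = MyParquetFormat
    )
    AS
    SELECT *
    FROM OPENROWSET(
        BULK 'input/my_data/*.parquet',
        DATA_SOURCE = 'MyDataSource',
        FORMAT = 'PARQUET'
    ) AS rows;
    """)
    
    cursor.close()
    conn.close()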

    Hope this helps. Do let us know if you have any further queries.


    If this answers your query, do click Accept Answer and Yes for "Was this answer helpful". And, if you have any further queries, do let us know.

