Welcome to Microsoft Q&A platform and thanks for posting your query here.
You can query your Azure SQL Database directly from Azure Synapse Notebooks without needing to create a copy pipeline. Here are the steps, using either an ODBC (pyodbc) or a JDBC (Spark) connection:
- From the Azure Portal, get the JDBC connection string for your Azure SQL Database.
- Use the `pyodbc` library to connect to your Azure SQL Database. Here's an example code snippet:
```python
import pyodbc

# Define the connection string
connection_string = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:your_server.database.windows.net,1433;"
    "Database=your_database;"
    "Encrypt=yes;"
    "TrustServerCertificate=no;"
    "Connection Timeout=60;"
    "Authentication=ActiveDirectoryIntegrated"
)

# Establish the connection
conn = pyodbc.connect(connection_string)

# Create a cursor object
cursor = conn.cursor()

# Execute a query
cursor.execute("SELECT * FROM your_table")

# Fetch the results
rows = cursor.fetchall()

# Print the results
for row in rows:
    print(row)

# Close the connection
conn.close()
```
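If you want to work with the results as plain Python structures, a small helper can pair each row with the column names from `cursor.description`. This is a minimal sketch; the helper name `rows_to_dicts` is illustrative and not part of pyodbc:

```python
def rows_to_dicts(description, rows):
    # `description` is the pyodbc cursor.description sequence, where each
    # entry's first element is the column name; `rows` is cursor.fetchall().
    columns = [col[0] for col in description]
    return [dict(zip(columns, row)) for row in rows]

# With the cursor from the snippet above you would call:
#   records = rows_to_dicts(cursor.description, rows)
```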
- Ensure that the ODBC Driver 18 for SQL Server is installed.
- Verify that your Azure SQL Database allows connections from your Synapse workspace (check the server's firewall rules, or enable **Allow Azure services and resources to access this server**).
- Check that the managed identity or service principal used has the necessary permissions on the Azure SQL Database.
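One caveat on the connection string above: `Authentication=ActiveDirectoryIntegrated` typically requires a domain-joined Windows host and may not work from a Synapse Spark pool. A common alternative is to pass an Azure AD access token to pyodbc directly. Below is a minimal sketch of how the token must be packed; the `mssparkutils` call and the `"DW"` audience in the comments are assumptions to adapt to your environment:

```python
import struct

# pyodbc accepts an Azure AD access token through the pre-connect
# attribute SQL_COPT_SS_ACCESS_TOKEN (1256), encoded as a
# length-prefixed UTF-16-LE byte string.
SQL_COPT_SS_ACCESS_TOKEN = 1256

def build_access_token_struct(token: str) -> bytes:
    token_bytes = token.encode("utf-16-le")
    return struct.pack(f"<I{len(token_bytes)}s", len(token_bytes), token_bytes)

# In a Synapse notebook you would obtain a token and connect roughly like:
#   token = mssparkutils.credentials.getToken("DW")  # audience is an assumption
#   conn = pyodbc.connect(
#       connection_string_without_auth_keyword,  # no Authentication/UID/PWD
#       attrs_before={SQL_COPT_SS_ACCESS_TOKEN: build_access_token_struct(token)},
#   )
```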
If you encounter issues with the `pyodbc` connection, you can also use the `spark.read.jdbc` method to connect to your Azure SQL Database using Spark:
```python
jdbcHostname = "your_server.database.windows.net"
jdbcDatabase = "your_database"
jdbcPort = 1433
jdbcUrl = f"jdbc:sqlserver://{jdbcHostname}:{jdbcPort};database={jdbcDatabase};encrypt=true;trustServerCertificate=false;hostNameInCertificate=*.database.windows.net;loginTimeout=30;"

connectionProperties = {
    "user": "your_username",
    "password": "your_password",
    "driver": "com.microsoft.sqlserver.jdbc.SQLServerDriver"
}

# Wrap the query in a derived table so it can be used as the "table" argument
query = "(SELECT * FROM your_table) AS your_table_alias"

df = spark.read.jdbc(url=jdbcUrl, table=query, properties=connectionProperties)
df.show()
```
This method uses Spark's JDBC capabilities to read data directly from your Azure SQL Database into a DataFrame, which you can then manipulate within your Synapse Notebook.
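For larger tables, the JDBC source can also split the read across executors instead of pulling everything through a single connection. This is a minimal sketch; the partition column `id`, the bounds, and the helper name `read_partitioned` are assumptions to replace with a numeric column and the real min/max values of your table:

```python
jdbc_url = (
    "jdbc:sqlserver://your_server.database.windows.net:1433;"
    "database=your_database;encrypt=true;trustServerCertificate=false;"
)

def read_partitioned(spark):
    # `spark` is the SparkSession already available in a Synapse notebook.
    return (
        spark.read.format("jdbc")
        .option("url", jdbc_url)
        .option("dbtable", "(SELECT * FROM your_table) AS t")
        .option("user", "your_username")
        .option("password", "your_password")
        .option("driver", "com.microsoft.sqlserver.jdbc.SQLServerDriver")
        .option("partitionColumn", "id")   # numeric column to split on
        .option("lowerBound", "1")
        .option("upperBound", "1000000")
        .option("numPartitions", "8")      # one query per partition
        .load()
    )
```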
I hope the above steps resolve the issue. Please do let us know if the issue persists. Thank you.