Note
This page applies to Databricks JDBC driver (Legacy) versions below 3. For Databricks JDBC driver version 3 and above, see Databricks JDBC Driver.
This page describes the special and advanced driver capability settings that the Databricks JDBC Driver provides and how to configure them.
ANSI SQL-92 query support in JDBC
Legacy Spark JDBC drivers accept SQL queries in ANSI SQL-92 dialect and translate them to Databricks SQL before sending them to the server.
If your application generates Databricks SQL directly or uses non-ANSI SQL-92 syntax specific to Azure Databricks, set UseNativeQuery=1 in your connection configuration. This setting passes SQL queries verbatim to Azure Databricks without translation.
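For example, a connection URL with native query passthrough enabled might look like the following sketch, where the server hostname and HTTP path are placeholders for your workspace's values:

```
jdbc:databricks://<server-hostname>:443;httpPath=<http-path>;UseNativeQuery=1
```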
Default catalog and schema
To specify the default catalog and schema, add ConnCatalog=<catalog-name>;ConnSchema=<schema-name> to the JDBC connection URL.
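For example, to make a catalog named main and a schema named sales the defaults (both names are illustrative), append the settings to the connection URL:

```
jdbc:databricks://<server-hostname>:443;httpPath=<http-path>;ConnCatalog=main;ConnSchema=sales
```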
Query tags for tracking
Important
This feature is in Private Preview. To request access, contact your account team.
Attach key-value tags to your SQL queries for tracking and analytics purposes. Query tags appear in the system.query.history table for query identification and analysis.
To add query tags to your connection, include the ssp_query_tags parameter in your JDBC connection URL:
jdbc:databricks://<server-hostname>:443;httpPath=<http-path>;ssp_query_tags=key1:value1,key2:value2
Define query tags as comma-separated key-value pairs, separating each key from its value with a colon. For example, key1:value1,key2:value2.
Extract large query results in JDBC
To achieve the best performance when you extract large query results, use the latest version of the JDBC driver, which includes the following optimizations.
Arrow serialization in JDBC
JDBC driver version 2.6.16 and above supports an optimized query results serialization format that uses Apache Arrow.
Cloud Fetch in JDBC
JDBC driver version 2.6.19 and above supports Cloud Fetch, a capability that fetches query results through the cloud storage configured in your Azure Databricks deployment.
When you run a query, Azure Databricks stores the results in your workspace's cloud storage as Arrow-serialized files of up to 20 MB. After the query completes, the driver sends fetch requests, and Azure Databricks returns shared access signature (SAS) URLs to the result files. The driver then uses these URLs to download results directly from Azure storage.
Cloud Fetch only applies to query results larger than 1 MB. The driver retrieves smaller results directly from Azure Databricks.
Azure Databricks automatically garbage collects accumulated files by marking them for deletion after 24 hours and permanently removing them 24 hours later.
Network prerequisites
If your network is private, you must configure the following settings for Cloud Fetch to work:
- Allow *.blob.core.windows.net and *.store.core.windows.net in your network environment.
- Add the required certificate downloads and revocations to your allow list.
- If firewall support is enabled on your Azure Databricks workspace storage account, configure a virtual network data gateway or an on-premises data gateway to allow private access to the storage account.
To disable Cloud Fetch, set EnableQueryResultDownload=0 in your connection configuration.
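For example, a connection URL that disables Cloud Fetch might look like the following, with placeholders standing in for your workspace's values:

```
jdbc:databricks://<server-hostname>:443;httpPath=<http-path>;EnableQueryResultDownload=0
```

With Cloud Fetch disabled, the driver retrieves all query results directly from Azure Databricks rather than from cloud storage.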
Diagnose slow downloads
Set LogLevel to 4 (INFO) and LogPath to the full path of a log folder to see Cloud Fetch download speed metrics. The driver logs download speed per chunk, so large result sets generate multiple log lines. The driver also logs a warning when speed falls below approximately 1 MB/s. This feature is available in JDBC (Simba) driver versions released after December 2025.
If downloads are slow or stalled, SAS tokens can expire before the driver finishes downloading all result files. Check for bandwidth throttling or network congestion between the client and Azure Blob Storage.
Enable logging
To enable logging in the JDBC driver, set the LogLevel property to a value between 1 (severe events only) and 6 (all driver activity). Set the LogPath property to the full path of the folder where you want to save log files.
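For example, the following connection URL enables INFO-level logging (LogLevel=4) and writes log files to a local folder; the folder path shown is illustrative and should point to an existing directory that the driver can write to:

```
jdbc:databricks://<server-hostname>:443;httpPath=<http-path>;LogLevel=4;LogPath=C:\temp\jdbc-logs
```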
For more information, see Configuring Logging in the Databricks JDBC Driver Guide.