Enable Serverless SQL warehouses
With the Serverless compute version of the Databricks platform architecture, the compute layer exists in the Databricks cloud subscription rather than the customer's cloud subscription. Serverless compute is supported for use with Databricks SQL. Admins can create Serverless SQL warehouses that enable instant compute and are managed by Databricks. Serverless warehouses use compute clusters in the Azure Databricks subscription. Use Serverless warehouses with Databricks SQL queries just as you would with the original customer-hosted SQL warehouses, which are now called Classic SQL warehouses.
If Serverless warehouses are enabled for your workspace:
- New SQL warehouses are Serverless by default when created from the UI or the API, but you can also create new Classic warehouses.
- You can create Serverless warehouses with the UI or API, or convert warehouses to Serverless.
This feature only affects Databricks SQL. It does not affect how Databricks Runtime clusters work with notebooks and jobs in the Data Science & Engineering or Databricks Machine Learning workspace environments.
Databricks Runtime clusters always run in the Classic data plane in your Azure subscription.
Serverless warehouses do not have public IP addresses. For more architectural information, see Serverless compute.
The following procedure requires that you have owner or contributor permissions on the Azure Databricks workspace.
Databricks changed the name from SQL endpoint to SQL warehouse because, in the industry, endpoint refers to either a remote computing device that communicates with the network it is connected to, or an entry point to a cloud service. A data warehouse is a data management system that stores current and historical data from multiple sources in a business-friendly manner for easier insights and reporting. SQL warehouse accurately describes the full capabilities of this compute resource.
- Your Azure Databricks workspace must be on the Premium tier.
- Azure Databricks supports Serverless SQL warehouses in the Azure regions East US (eastus), East US 2 (eastus2), and West Europe (westeurope).
- Notes about Serverless SQL warehouse feature support:
- External Hive legacy metastores are not supported.
- Cluster policies, including spot instance policies, are not supported.
- The Serverless data plane does not use the customer-configurable Azure Private Link connectivity that is used for the Classic data plane.
- Although the Serverless data plane does not use the secure cluster connectivity relay that is used for the Classic data plane, Serverless warehouses do not have public IP addresses.
- VNet Injection is not applicable.
Also note that the Azure Databricks documentation on cluster sizes, instance types, and CPU quotas applies only to Classic warehouses, not to Serverless warehouses.
Step 1: Enable Serverless SQL warehouses for your workspace
1. As an Azure Databricks workspace administrator, go to the SQL admin console in Databricks SQL. If you are in the Data Science & Engineering or Databricks Machine Learning workspace environment, you might need to select SQL from the sidebar (click the icon below the Databricks logo).
2. Click your username in the top bar of the workspace and select SQL Admin Console. If you do not see the SQL Admin Console menu item, your user account is not an admin for this workspace.
3. In the SQL admin console, click the SQL Warehouse Settings tab.
4. Select Serverless SQL Warehouses.
5. Click Save changes.
Step 2: Test usage of Serverless SQL warehouses
- Create or convert a warehouse:
- Create a new Serverless warehouse using the SQL warehouse UI. Note that by default, new SQL warehouses are Serverless.
- Create a new Serverless warehouse using a REST API. Note that by default, new SQL warehouses are Serverless.
- Convert a classic warehouse to a Serverless warehouse.
- Run a query with your new Serverless warehouse.
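As a rough sketch of the REST API path, the following Python snippet builds and sends a create-warehouse request. It assumes the SQL Warehouses endpoint `/api/2.0/sql/warehouses` and the `enable_serverless_compute` request field; check the current REST API reference before relying on these names, and note that the host URL and token below are placeholders.

```python
import json
import urllib.request

# Placeholder workspace URL and personal access token -- replace with your own.
HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
TOKEN = "dapi-example-token"

def build_serverless_warehouse_payload(name, cluster_size="Small", auto_stop_mins=10):
    """Build the request body for POST /api/2.0/sql/warehouses.

    Setting enable_serverless_compute to True requests a Serverless
    warehouse; the workspace must have Serverless enabled first.
    """
    return {
        "name": name,
        "cluster_size": cluster_size,
        "auto_stop_mins": auto_stop_mins,
        "enable_serverless_compute": True,
    }

def create_warehouse(payload):
    """POST the payload to the SQL Warehouses API and return the response."""
    req = urllib.request.Request(
        url=f"{HOST}/api/2.0/sql/warehouses",
        data=json.dumps(payload).encode("utf-8"),
        headers={
            "Authorization": f"Bearer {TOKEN}",
            "Content-Type": "application/json",
        },
        method="POST",
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

payload = build_serverless_warehouse_payload("serverless-test")
print(payload["enable_serverless_compute"])  # True
```

Omitting `enable_serverless_compute` (or setting it to `False`) would create a Classic warehouse instead.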