SQL Warehouses APIs 2.0

Important

To access Databricks REST APIs, you must authenticate.

To configure individual SQL warehouses, use the SQL Warehouses API. To configure all SQL warehouses, use the Global SQL Warehouses API.

Requirements

  • To create SQL warehouses, you must have cluster create permission, which is enabled in the Data Science & Engineering workspace.
  • To manage a SQL warehouse, you must have Can Manage permission in Databricks SQL for the warehouse.

Using the SQL Warehouses API with serverless SQL warehouses

You can use the SQL Warehouses API to manage serverless SQL warehouses. Before you can create serverless SQL warehouses, you must enable the feature for the workspace. For more information about serverless compute, see Serverless compute.

  • If you use Databricks APIs to create a SQL warehouse, it is serverless by default if the serverless feature is enabled for your workspace. To create a pro or classic SQL warehouse, add the enable_serverless_compute parameter to the API request and set it to false. You can add this parameter to both the Create API and Edit API requests.
  • To downgrade a serverless SQL warehouse to a pro or classic SQL warehouse, edit the warehouse and set enable_serverless_compute to false (see the sketch after this list).
  • For serverless SQL warehouses, spot instance policies (spot_instance_policy) are unsupported and ignored.

Note that for the Auto Stop field (auto_stop_mins), the default and minimum values are different for serverless, pro, and classic SQL warehouses.
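
For example, the following Python sketch (not part of the official examples) uses the requests library to downgrade a serverless SQL warehouse through the Edit API described later in this article. The DATABRICKS_HOST and DATABRICKS_TOKEN environment variables, the /api URL prefix, and the warehouse ID are illustrative assumptions.

# Sketch: downgrade a serverless SQL warehouse to a pro SQL warehouse via the Edit API.
# DATABRICKS_HOST, DATABRICKS_TOKEN, and the warehouse ID below are placeholders.
import os
import requests

host = os.environ["DATABRICKS_HOST"]      # for example, https://<workspace-instance-name>
token = os.environ["DATABRICKS_TOKEN"]    # personal access token
warehouse_id = "0123456789abcdef"         # ID of the warehouse to downgrade

resp = requests.post(
    f"{host}/api/2.0/sql/warehouses/{warehouse_id}/edit",
    headers={"Authorization": f"Bearer {token}"},
    json={"enable_serverless_compute": False, "warehouse_type": "PRO"},
)
resp.raise_for_status()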

SQL Warehouses API

Use this API to create, edit, list, get, delete, start, and stop SQL warehouses.

Create

Endpoint HTTP Method
2.0/sql/warehouses/ POST
2.0/sql/endpoints/ (deprecated) POST

Create a SQL warehouse.

Field Name Type Description
name STRING Name of the SQL warehouse. Must be unique. This field is required.
cluster_size STRING The size of the clusters allocated to the warehouse: "2X-Small", "X-Small", "Small", "Medium", "Large", "X-Large", "2X-Large", "3X-Large", "4X-Large". For the mapping from cluster to instance size, see Cluster size. This field is required.
min_num_clusters INT32 Minimum number of clusters available when a SQL warehouse is running. The default is 1.
max_num_clusters INT32 Maximum number of clusters available when a SQL warehouse is running. This field is required. If multi-cluster load balancing is not enabled, this is limited to 1.
auto_stop_mins INT32 Time in minutes until an idle SQL warehouse terminates all clusters and stops. This field is optional. Setting this to 0 disables Auto Stop. For pro and classic SQL warehouses, the default value is 15 and the minimum is 10. For serverless SQL warehouses, the default value is 10 and the minimum is 1. Databricks recommends setting this to 10 minutes for typical use. Lower values (such as 1) cause Databricks to restart the warehouse more often and are not recommended. Note that the minimum is 5 minutes when creating the warehouse in the UI. This setting applies only to idle warehouses in the RUNNING state. If a warehouse is unable to acquire underlying compute resources and stays in the STARTING state, it is automatically stopped after 75 minutes.
tags WarehouseTags Key-value pairs that describe the warehouse. Azure Databricks tags all warehouse resources with these tags. This field is optional.
enable_photon BOOLEAN Whether queries are executed on a native vectorized engine that speeds up query execution. This field is optional. The default is true.
enable_serverless_compute BOOLEAN Whether this SQL warehouse is a serverless SQL warehouse. Databricks strongly recommends that you always explicitly set this field. If serverless SQL warehouses are disabled for the workspace, the default is false. If serverless SQL warehouses are enabled for the workspace, the default is true. IMPORTANT: If you set this to true, you must also set the warehouse_type to pro. See the row for warehouse_type.
warehouse_type WarehouseType The SQL warehouse type of the warehouse. This field is optional. For a non-serverless SQL warehouse, both classic and pro warehouse types are viable options, and the default warehouse type is classic. For a serverless warehouse, this field can only be pro. For a list of valid values, see the "WarehouseType" section later in this article.
channel Channel Whether to use the current SQL warehouse compute version or the preview version. Preview versions let you try out functionality before it becomes the Databricks SQL standard. Typically, preview versions are promoted to the current version two weeks after initial preview release, but some previews may last longer. You can learn about the features in the latest preview version by reviewing the release notes. Databricks does not recommend using preview versions for production workloads. This field is optional. The default is CHANNEL_NAME_CURRENT.
spot_instance_policy WarehouseSpotInstancePolicy The spot policy to use for allocating instances to clusters. This field is optional. This field is not used if the SQL warehouse is a serverless SQL warehouse.

Example request

{
  "name": "My SQL Warehouse",
  "cluster_size": "Medium",
  "min_num_clusters": 1,
  "max_num_clusters": 10,
  "tags": {
    "custom_tags": [
      {
        "key": "mykey",
        "value": "myvalue"
      }
    ]
  },
  "spot_instance_policy":"COST_OPTIMIZED",
  "enable_photon": "true",
  "enable_serverless_compute": "true",
  "warehouse_type": "PRO",
  "channel": {
    "name": "CHANNEL_NAME_CURRENT"
  }
}

Example response

{
  "id": "0123456789abcdef"
}
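
As a minimal sketch, the following Python snippet submits a request body like the example above to the Create endpoint using the requests library. The DATABRICKS_HOST and DATABRICKS_TOKEN environment variables and the /api URL prefix are assumptions for illustration.

# Sketch: create a serverless SQL warehouse by POSTing a request body like the example above.
import os
import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

body = {
    "name": "My SQL Warehouse",
    "cluster_size": "Medium",
    "min_num_clusters": 1,
    "max_num_clusters": 10,
    "enable_serverless_compute": True,
    "warehouse_type": "PRO",
    "channel": {"name": "CHANNEL_NAME_CURRENT"},
}

resp = requests.post(
    f"{host}/api/2.0/sql/warehouses/",
    headers={"Authorization": f"Bearer {token}"},
    json=body,
)
resp.raise_for_status()
warehouse_id = resp.json()["id"]   # matches the "id" field in the example response above
print(warehouse_id)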

Delete

Endpoint HTTP Method
2.0/sql/warehouses/{id} DELETE
2.0/sql/endpoints/{id} (deprecated) DELETE

Delete a SQL warehouse.
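
A minimal Python sketch of the Delete call, with the same illustrative assumptions about host, token, and warehouse ID as the earlier examples:

# Sketch: delete a SQL warehouse by ID.
import os
import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]
warehouse_id = "0123456789abcdef"   # placeholder ID

resp = requests.delete(
    f"{host}/api/2.0/sql/warehouses/{warehouse_id}",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()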

Edit

Endpoint HTTP Method
2.0/sql/warehouses/{id}/edit POST
2.0/sql/endpoints/{id}/edit (deprecated) POST

Modify a SQL warehouse. All fields are optional. Missing fields default to the current values.

Field Name Type Description
id STRING ID of the SQL warehouse.
name STRING Name of the SQL warehouse.
cluster_size STRING The size of the clusters allocated to the warehouse: "2X-Small", "X-Small", "Small", "Medium", "Large", "X-Large", "2X-Large", "3X-Large", "4X-Large". For the mapping from cluster to instance size, see Cluster size.
min_num_clusters INT32 Minimum number of clusters available when a SQL warehouse is running.
max_num_clusters INT32 Maximum number of clusters available when a SQL warehouse is running. If multi-cluster load balancing is not enabled, this is limited to 1.
auto_stop_mins INT32 Time in minutes until an idle SQL warehouse terminates all clusters and stops. Setting this to 0 disables Auto Stop. For pro and classic SQL warehouses, the default value is 15 and the minimum is 10. For serverless SQL warehouses, the default value is 10 and the minimum is 1. Databricks recommends setting this to 10 minutes for typical use. Lower values (such as 1) cause Databricks to restart the warehouse more often and are not recommended. Note that the minimum is 5 minutes when creating the warehouse in the UI. This setting applies only to idle warehouses in the RUNNING state. If a warehouse is unable to acquire underlying compute resources and stays in the STARTING state, it is automatically stopped after 75 minutes.
tags WarehouseTags Key-value pairs that describe the warehouse.
spot_instance_policy WarehouseSpotInstancePolicy The spot policy to use for allocating instances to clusters.
enable_photon BOOLEAN Whether queries are executed on a native vectorized engine that speeds up query execution.
enable_serverless_compute BOOLEAN Whether this SQL warehouse is a serverless SQL warehouse. To use a serverless SQL warehouse, you must enable serverless SQL warehouses for the workspace.
warehouse_type WarehouseType The SQL warehouse type of the warehouse. This field is optional. For a non-serverless SQL warehouse, both classic and pro warehouse types are viable options, and the default warehouse type is classic. For a serverless warehouse, this field can only be pro. For a list of valid values, see the "WarehouseType" section later in this article.
channel Channel Whether to use the current SQL warehouse compute version or the preview version. Preview versions let you try out functionality before it becomes the Databricks SQL standard. Typically, preview versions are promoted to the current version two weeks after initial preview release, but some previews may last longer. You can learn about the features in the latest preview version by reviewing the release notes. Databricks does not recommend using preview versions for production workloads. This field is optional. The default is CHANNEL_NAME_CURRENT.

Example request

{
  "name": "My Edited SQL warehouse",
  "cluster_size": "Large",
  "auto_stop_mins": 60
}
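
As with the earlier sketches, submitting this edit to an existing warehouse with the requests library might look like the following; the host, token, and warehouse ID are placeholders.

# Sketch: apply the example edit above to an existing warehouse.
import os
import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]
warehouse_id = "0123456789abcdef"   # placeholder ID

resp = requests.post(
    f"{host}/api/2.0/sql/warehouses/{warehouse_id}/edit",
    headers={"Authorization": f"Bearer {token}"},
    json={"name": "My Edited SQL warehouse", "cluster_size": "Large", "auto_stop_mins": 60},
)
resp.raise_for_status()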

Get

Endpoint HTTP Method
2.0/sql/warehouses/{id} GET
2.0/sql/endpoints/{id} (deprecated) GET

Retrieve information about a SQL warehouse.

Field Name Type Description
id STRING SQL warehouse ID.
name STRING Name of the SQL warehouse.
cluster_size STRING The size of the clusters allocated to the warehouse: "2X-Small", "X-Small", "Small", "Medium", "Large", "X-Large", "2X-Large", "3X-Large", "4X-Large". For the mapping from cluster to instance size, see Cluster size.
auto_stop_mins INT32 The number of minutes until an idle SQL warehouse terminates all clusters and stops.
spot_instance_policy WarehouseSpotInstancePolicy The spot policy to use for allocating instances to clusters.
num_clusters INT32 Number of clusters allocated to the warehouse.
min_num_clusters INT32 Minimum number of clusters available when a SQL warehouse is running.
max_num_clusters INT32 Maximum number of clusters available when a SQL warehouse is running.
num_active_sessions INT32 Number of active JDBC and ODBC sessions running on the SQL warehouse.
state WarehouseState State of the SQL warehouse.
creator_name STRING Email address of the user that created the warehouse.
creator_id STRING Azure Databricks ID of the user that created the warehouse.
jdbc_url STRING The URL used to submit SQL commands to the SQL warehouse using JDBC.
odbc_params ODBCParams The host, path, protocol, and port information required to submit SQL commands to the SQL warehouse using ODBC.
tags WarehouseTags Key-value pairs that describe the warehouse.
health WarehouseHealth The health of the warehouse.
enable_photon BOOLEAN Whether queries are executed on a native vectorized engine that speeds up query execution.
enable_serverless_compute BOOLEAN Whether this SQL warehouse is a serverless SQL warehouse. To use a serverless SQL warehouse, you must enable serverless SQL warehouses for the workspace. This field is optional. If serverless SQL warehouses are disabled for the workspace, the default is false. If serverless SQL warehouses are enabled for the workspace, the default is true.
warehouse_type WarehouseType The SQL warehouse type of the warehouse. This field is optional. For a non-serverless SQL warehouse, both classic and pro warehouse types are viable options, and the default warehouse type is classic. For a serverless warehouse, this field can only be pro. For a list of valid values, see the "WarehouseType" section later in this article.
channel Channel Whether the SQL warehouse uses the current SQL warehouse compute version or the preview version. Preview versions let you try out functionality before it becomes the Databricks SQL standard. Typically, preview versions are promoted to the current version two weeks after initial preview release, but some previews may last longer. You can learn about the features in the latest preview version by reviewing the release notes. Databricks does not recommend using preview versions for production workloads. This field is optional. The default is CHANNEL_NAME_CURRENT.

Example response

{
  "id": "7f2629a529869126",
  "name": "MyWarehouse",
  "cluster_size": "Small",
  "min_num_clusters": 1,
  "max_num_clusters": 1,
  "auto_stop_mins": 0,
  "auto_resume": true,
  "num_clusters": 0,
  "num_active_sessions": 0,
  "state": "STOPPED",
  "creator_name": "user@example.com",
  "jdbc_url": "jdbc:spark://hostname.staging.cloud.databricks.com:443/default;transportMode=http;ssl=1;AuthMech=3;httpPath=/sql/1.0/warehouses/7f2629a529869126;",
  "odbc_params": {
    "hostname": "hostname.cloud.databricks.com",
    "path": "/sql/1.0/warehouses/7f2629a529869126",
    "protocol": "https",
    "port": 443
  },
  "tags": {
    "custom_tags": [
      {
        "key": "mykey",
        "value": "myvalue"
      }
    ]
  },
  "spot_instance_policy": "COST_OPTIMIZED",
  "enable_photon": true,
  "cluster_size": "Small",
  "enable_serverless_compute": true,
  "warehouse_type": "PRO",
  "channel": {
    "name": "CHANNEL_NAME_CURRENT"
  }
}

List

Endpoint HTTP Method
2.0/sql/warehouses/ GET
2.0/sql/endpoints/ (deprecated) GET

List all SQL warehouses in the workspace.

Example response

{
  "warehouses": [
    { "id": "123456790abcdef", "name": "My SQL warehouse", "cluster_size": "Medium" },
    { "id": "098765321fedcba", "name": "Another SQL warehouse", "cluster_size": "Large" }
  ]
}

Note: If you use the deprecated 2.0/sql/endpoints/ API, the top-level response field would be “endpoints” instead of “warehouses”.
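
A minimal Python sketch that lists the warehouses in the workspace and prints a few fields from each entry; the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables are illustrative assumptions.

# Sketch: list SQL warehouses and print their ID, name, and cluster size.
import os
import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]

resp = requests.get(
    f"{host}/api/2.0/sql/warehouses/",
    headers={"Authorization": f"Bearer {token}"},
)
resp.raise_for_status()
for warehouse in resp.json().get("warehouses", []):
    print(warehouse["id"], warehouse["name"], warehouse["cluster_size"])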

Start

Endpoint HTTP Method
2.0/sql/warehouses/{id}/start POST
2.0/sql/endpoints/{id}/start (deprecated) POST

Start a SQL warehouse.

Stop

Endpoint HTTP Method
2.0/sql/warehouses/{id}/stop POST
2.0/sql/endpoints/{id}/stop (deprecated) POST

Stop a SQL warehouse.
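
A minimal Python sketch that starts a warehouse and later stops it, using the same illustrative host, token, and warehouse ID placeholders as the earlier examples:

# Sketch: start a stopped SQL warehouse, run workloads, then stop it again.
import os
import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]
warehouse_id = "0123456789abcdef"   # placeholder ID
headers = {"Authorization": f"Bearer {token}"}

start = requests.post(f"{host}/api/2.0/sql/warehouses/{warehouse_id}/start", headers=headers)
start.raise_for_status()

# ... run workloads against the warehouse ...

stop = requests.post(f"{host}/api/2.0/sql/warehouses/{warehouse_id}/stop", headers=headers)
stop.raise_for_status()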

Global SQL Warehouses API

Use this API to configure the security policy, data access properties, and configuration parameters for all SQL warehouses.

Get

Endpoint HTTP Method
/2.0/sql/config/warehouses GET
/2.0/sql/config/endpoints (deprecated) GET

Get the configuration for all SQL warehouses.

Field Name Type Description
security_policy WarehouseSecurityPolicy The policy for controlling access to datasets.
data_access_config Array of WarehouseConfPair An array of key-value pairs containing properties for data access configuration.
sql_configuration_parameters RepeatedWarehouseConfPairs SQL configuration parameters.

Example response

{
  "security_policy": "DATA_ACCESS_CONTROL",
  "data_access_config": [
    {
      "key": "spark.sql.hive.metastore.jars",
      "value": "/databricks/hive_metastore_jars/*"
    }
  ],
  "sql_configuration_parameters": {
    "configuration_pairs": [
      {
         "key" : "legacy_time_parser_policy",
         "value": "LEGACY"
      }
    ]
  }
}

Edit

Endpoint HTTP Method
/2.0/sql/config/warehouses PUT
/2.0/sql/config/endpoints (deprecated) PUT

Edit the configuration for all SQL warehouses.

Important

  • All fields are required.
  • Invoking this method restarts all running SQL warehouses.

Field Name Type Description
security_policy WarehouseSecurityPolicy The policy for controlling access to datasets.
data_access_config Array of WarehouseConfPair An array of key-value pairs containing properties for data access configuration.
sql_configuration_parameters RepeatedWarehouseConfPairs SQL configuration parameters.

Example request

{
  "data_access_config": [
    {
      "key": "spark.sql.hive.metastore.jars",
      "value": "/databricks/hive_metastore_jars/*"
    }
  ],
  "sql_configuration_parameters": {
     "configuration_pairs": [
       {
          "key" : "legacy_time_parser_policy",
          "value": "LEGACY"
       }
     ]
  }
}
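
Because all fields are required on this endpoint, one cautious pattern is to read the current configuration with the Get endpoint, modify it, and write the complete object back. The following Python sketch illustrates that pattern with the requests library; the DATABRICKS_HOST and DATABRICKS_TOKEN environment variables are illustrative assumptions, and keep in mind that the PUT restarts all running SQL warehouses.

# Sketch: read the current global configuration, add a data access property,
# and PUT the complete configuration back (all fields are required on PUT).
import os
import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]
headers = {"Authorization": f"Bearer {token}"}

current = requests.get(f"{host}/api/2.0/sql/config/warehouses", headers=headers)
current.raise_for_status()
body = current.json()

# Add a data access property while keeping the rest of the configuration intact.
pairs = body.get("data_access_config") or []
pairs.append({"key": "spark.sql.hive.metastore.jars", "value": "/databricks/hive_metastore_jars/*"})
body["data_access_config"] = pairs

resp = requests.put(f"{host}/api/2.0/sql/config/warehouses", headers=headers, json=body)
resp.raise_for_status()   # note: this call restarts all running SQL warehouses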

Data structures

WarehouseConfPair

Field Name Type Description
key STRING Configuration key name.
value STRING Configuration key value.

WarehouseHealth

Field Name Type Description
status WarehouseStatus Warehouse status.
message STRING A descriptive message about the health status, including information about any errors contributing to the current health status.

WarehouseSecurityPolicy

Option Description
DATA_ACCESS_CONTROL Use data access control to control access to datasets.

WarehouseSpotInstancePolicy

Option Description
COST_OPTIMIZED Use an on-demand instance for the cluster driver and spot instances for cluster executors. The maximum spot price is 100% of the on-demand price. This is the default policy.
RELIABILITY_OPTIMIZED Use on-demand instances for all cluster nodes.

WarehouseState

State of a SQL warehouse. The allowable state transitions are:

  • STARTING -> STARTING, RUNNING, STOPPING, DELETING
  • RUNNING -> STOPPING, DELETING
  • STOPPING -> STOPPED, STARTING
  • STOPPED -> STARTING, DELETING
  • DELETING -> DELETED

State Description
STARTING The warehouse is in the process of starting.
RUNNING The starting process is done and the warehouse is ready to use.
STOPPING The warehouse is in the process of being stopped.
STOPPED The warehouse is stopped. Start it by calling start or by submitting a JDBC or ODBC request.
DELETING The warehouse is in the process of being destroyed.
DELETED The warehouse has been deleted and cannot be recovered.
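
Because a warehouse passes through these states asynchronously, a client that needs a RUNNING warehouse typically polls the Get endpoint. The following Python sketch illustrates one such loop; the host, token, and warehouse ID are illustrative placeholders.

# Sketch: poll the Get endpoint until a warehouse reaches the RUNNING state.
import os
import time
import requests

host = os.environ["DATABRICKS_HOST"]
token = os.environ["DATABRICKS_TOKEN"]
warehouse_id = "0123456789abcdef"   # placeholder ID
headers = {"Authorization": f"Bearer {token}"}

while True:
    resp = requests.get(f"{host}/api/2.0/sql/warehouses/{warehouse_id}", headers=headers)
    resp.raise_for_status()
    state = resp.json()["state"]
    if state == "RUNNING":
        break
    if state in ("STOPPED", "DELETED"):
        # The warehouse will not reach RUNNING from these states on its own.
        raise RuntimeError(f"Warehouse entered state {state}")
    time.sleep(10)   # still STARTING (or STOPPING); wait and check again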

WarehouseStatus

State Description
HEALTHY Warehouse is functioning normally and there are no known issues.
DEGRADED Warehouse might be functional, but there are some known issues. Performance might be affected.
FAILED Warehouse is severely affected and will not be able to serve queries.

WarehouseTags

Field Name Type Description
custom_tags Array of WarehouseTagPair An array of tag key-value pairs.

WarehouseTagPair

Field Name Type Description
key STRING Tag key name.
value STRING Tag key value.

ODBCParams

Field Name Type Description
hostname STRING ODBC server hostname.
path STRING ODBC server path.
protocol STRING ODBC server protocol.
port INT32 ODBC server port.

RepeatedWarehouseConfPairs

Field Name Type Description
configuration_pairs Array of WarehouseConfPair An array of configuration key-value pairs.

Channel

Field Name Type Description
name ChannelName Channel name.

ChannelName

Option Description
CHANNEL_NAME_PREVIEW SQL warehouse is set to the preview channel and uses upcoming functionality.
CHANNEL_NAME_CURRENT SQL warehouse is set to the current channel.

WarehouseType

Option Description
CLASSIC Classic warehouses.
PRO Both non-serverless pro warehouses and serverless warehouses are considered PRO.