Access Databricks tables from Delta clients

This page describes how to use the Unity REST API to create, read, and write to Unity Catalog managed and external tables from external Delta clients. For a full list of supported integrations, see Unity Catalog integrations.

Tip

For information about how to read Azure Databricks data using Microsoft Fabric, see Use Microsoft Fabric to read data that is registered in Unity Catalog.

Create, read, and write using the Unity REST API

Important

Creating and writing to Unity Catalog managed tables from Delta clients is in Beta.

The Unity REST API provides external clients with create, read, and write access to tables registered in Unity Catalog. Configure access using the workspace URL as the endpoint. The following table types are accessible (a minimal API example follows the table):

Table type      Read  Write  Create
Managed Delta   Yes   Yes*   Yes*
External Delta  Yes   Yes    Yes

* Supported for managed Delta tables with catalog commits.
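For example, the following minimal Python sketch reads table metadata through the Unity Catalog tables endpoint (GET /api/2.1/unity-catalog/tables/<full-name>). The workspace URL, token, and table name are placeholders to substitute with your own values:

import requests

# Placeholders: substitute your workspace URL, a personal access token,
# and the three-level name of a table you can access.
WORKSPACE_URL = "https://adb-1234567890123456.12.azuredatabricks.net"
TOKEN = "<token>"
TABLE_FULL_NAME = "my_catalog.my_schema.my_table"

# Fetch the table's metadata from the Unity Catalog REST API.
resp = requests.get(
    f"{WORKSPACE_URL}/api/2.1/unity-catalog/tables/{TABLE_FULL_NAME}",
    headers={"Authorization": f"Bearer {TOKEN}"},
)
resp.raise_for_status()

# Print a few fields, such as the table type and storage location.
info = resp.json()
print(info["table_type"], info.get("storage_location"))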

Requirements

Azure Databricks supports Unity REST API access to tables as part of Unity Catalog. You must have Unity Catalog enabled in your workspace to use these endpoints.

You must also complete the following configuration steps to enable access to tables from Delta clients using the Unity REST API:

Limitations

  • External access to UniForm tables with IcebergCompatV3 is not currently supported. After externally writing to a UniForm table, you must run MSCK REPAIR TABLE in Databricks to generate Iceberg metadata.
  • Schema changes (for example, ALTER TABLE), table property updates, and table feature changes are not currently supported on managed tables from external clients.
  • External clients can't perform table maintenance operations, such as OPTIMIZE, VACUUM, and ANALYZE, on managed Delta tables.
  • External clients cannot create shallow clones.
  • External clients cannot create tables with generated columns, default columns, or constraint columns.
  • When creating external tables, Azure Databricks recommends using Apache Spark to ensure that column definitions are in a format compatible with Apache Spark. The API does not validate the correctness of the column specification. If the specification is not compatible with Apache Spark, then Databricks Runtime might be unable to read the tables.

Access Delta tables with Apache Spark using PAT authentication

PAT authentication for external Spark clients requires:

  • Unity Catalog Spark client version 0.4.1 or above (io.unitycatalog:unitycatalog-spark)
  • Apache Spark 4.0 or above
  • Delta Spark 4.2.0 or above
  • A personal access token for the principal accessing Unity Catalog. See Authorize access to Azure Databricks resources.

The following configuration is required to read or write to Unity Catalog managed and external Delta tables with Apache Spark using PAT authentication:

"spark.sql.extensions": "io.delta.sql.DeltaSparkSessionExtension",
"spark.sql.catalog.spark_catalog": "io.unitycatalog.spark.UCSingleCatalog",
"spark.sql.catalog.<uc-catalog-name>": "io.unitycatalog.spark.UCSingleCatalog",
"spark.sql.catalog.<uc-catalog-name>.uri": "<workspace-url>",
"spark.sql.catalog.<uc-catalog-name>.token": "<token>",
"spark.sql.defaultCatalog": "<uc-catalog-name>",
"spark.jars.packages": "io.delta:delta-spark_4.1_2.13:4.2.0,io.unitycatalog:unitycatalog-spark_2.13:0.4.1,org.apache.hadoop:hadoop-azure:3.4.2"

Substitute the following variables:

  • <uc-catalog-name>: The name of the catalog in Unity Catalog that contains your tables.
  • <token>: Personal access token (PAT) for the principal configuring the integration.
  • <workspace-url>: The Azure Databricks workspace URL, including the workspace ID. For example, adb-1234567890123456.12.azuredatabricks.net.
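
For example, the following minimal PySpark sketch applies this configuration when building the Spark session. The catalog name (my_catalog), schema and table names, workspace URL, and token are placeholders to substitute as described above:

from pyspark.sql import SparkSession

# Placeholders: my_catalog, the workspace URL, and the token must be
# substituted as described in the variable list above.
spark = (
    SparkSession.builder.appName("uc-delta-pat")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog", "io.unitycatalog.spark.UCSingleCatalog")
    .config("spark.sql.catalog.my_catalog", "io.unitycatalog.spark.UCSingleCatalog")
    .config("spark.sql.catalog.my_catalog.uri", "https://adb-1234567890123456.12.azuredatabricks.net")
    .config("spark.sql.catalog.my_catalog.token", "<token>")
    .config("spark.sql.defaultCatalog", "my_catalog")
    .config(
        "spark.jars.packages",
        "io.delta:delta-spark_4.1_2.13:4.2.0,"
        "io.unitycatalog:unitycatalog-spark_2.13:0.4.1,"
        "org.apache.hadoop:hadoop-azure:3.4.2",
    )
    .getOrCreate()
)

# Read a Unity Catalog table through the configured catalog.
spark.table("my_catalog.my_schema.my_table").show()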

Note

The package versions shown above are current as of the last update to this page. Newer versions might be available. Verify that package versions are compatible with your Spark version.

For additional details about configuring Apache Spark for cloud object storage, see the Unity Catalog OSS documentation.

Important

Databricks Runtime 16.4 and above is required to read from, write to, or create tables with catalog commits enabled. Databricks Runtime 18.0 and above is required to enable or disable catalog commits on existing tables.

To create managed Delta tables with catalog commits, use the following SQL:

CREATE TABLE <uc-catalog-name>.<schema-name>.<table-name> (id INT, desc STRING)
USING delta
TBLPROPERTIES ('delta.feature.catalogManaged' = 'supported');

To create external Delta tables, use the following SQL:

CREATE TABLE <uc-catalog-name>.<schema-name>.<table-name> (id INT, desc STRING)
USING delta
LOCATION '<path>';
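
After creating a table, the external client can write to it and read it back. For example, reusing the spark session from the PAT sketch above (the table name is a placeholder):

# Append two rows to the table, then read the result back.
spark.sql(
    "INSERT INTO my_catalog.my_schema.my_table VALUES (1, 'first'), (2, 'second')"
)
spark.table("my_catalog.my_schema.my_table").show()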

Access Delta tables with Apache Spark using OAuth authentication

Azure Databricks also supports OAuth machine-to-machine (M2M) authentication. OAuth automatically handles token and credential renewal for Unity Catalog authentication.

OAuth authentication for external Spark clients requires:

  • Unity Catalog Spark client version 0.4.1 or above (io.unitycatalog:unitycatalog-spark)
  • Apache Spark 4.0 or above
  • Delta Spark 4.2.0 or above
  • An OAuth client ID and client secret for a service principal with access to Unity Catalog. See Authorize access to Azure Databricks resources.

The following configuration is required to create, read, or write to Unity Catalog managed tables and external Delta tables with Apache Spark using OAuth authentication:

"spark.sql.extensions": "io.delta.sql.DeltaSparkSessionExtension",
"spark.sql.catalog.spark_catalog": "io.unitycatalog.spark.UCSingleCatalog",
"spark.sql.catalog.<uc-catalog-name>": "io.unitycatalog.spark.UCSingleCatalog",
"spark.sql.catalog.<uc-catalog-name>.uri": "<workspace-url>",
"spark.sql.catalog.<uc-catalog-name>.auth.type": "oauth",
"spark.sql.catalog.<uc-catalog-name>.auth.oauth.uri": "<oauth-token-endpoint>",
"spark.sql.catalog.<uc-catalog-name>.auth.oauth.clientId": "<oauth-client-id>",
"spark.sql.catalog.<uc-catalog-name>.auth.oauth.clientSecret": "<oauth-client-secret>",
"spark.sql.defaultCatalog": "<uc-catalog-name>",
"spark.jars.packages": "io.delta:delta-spark_4.1_2.13:4.2.0,io.unitycatalog:unitycatalog-spark_2.13:0.4.1,org.apache.hadoop:hadoop-azure:3.4.2"

Substitute the following variables:

  • <uc-catalog-name>: The name of the catalog in Unity Catalog that contains your tables.
  • <workspace-url>: The Azure Databricks workspace URL, including the workspace ID. For example, adb-1234567890123456.12.azuredatabricks.net.
  • <oauth-token-endpoint>: The OAuth token endpoint that issues access tokens for the service principal.
  • <oauth-client-id>: The OAuth client ID of the service principal.
  • <oauth-client-secret>: The OAuth client secret of the service principal.
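
As with PAT authentication, you can apply this configuration when building the Spark session. The following minimal PySpark sketch mirrors the earlier PAT example, replacing the token setting with the four OAuth settings; all bracketed values and my_catalog are placeholders:

from pyspark.sql import SparkSession

# Placeholders: the catalog name, workspace URL, token endpoint, and
# service principal credentials must be substituted as described above.
spark = (
    SparkSession.builder.appName("uc-delta-oauth")
    .config("spark.sql.extensions", "io.delta.sql.DeltaSparkSessionExtension")
    .config("spark.sql.catalog.spark_catalog", "io.unitycatalog.spark.UCSingleCatalog")
    .config("spark.sql.catalog.my_catalog", "io.unitycatalog.spark.UCSingleCatalog")
    .config("spark.sql.catalog.my_catalog.uri", "https://adb-1234567890123456.12.azuredatabricks.net")
    .config("spark.sql.catalog.my_catalog.auth.type", "oauth")
    .config("spark.sql.catalog.my_catalog.auth.oauth.uri", "<oauth-token-endpoint>")
    .config("spark.sql.catalog.my_catalog.auth.oauth.clientId", "<oauth-client-id>")
    .config("spark.sql.catalog.my_catalog.auth.oauth.clientSecret", "<oauth-client-secret>")
    .config("spark.sql.defaultCatalog", "my_catalog")
    .config(
        "spark.jars.packages",
        "io.delta:delta-spark_4.1_2.13:4.2.0,"
        "io.unitycatalog:unitycatalog-spark_2.13:0.4.1,"
        "org.apache.hadoop:hadoop-azure:3.4.2",
    )
    .getOrCreate()
)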

Note

The package versions shown above are current as of the last update to this page. Newer versions might be available. Verify that package versions are compatible with your Spark version.