This page describes the requirements for creating and refreshing standalone materialized views and streaming tables.
You can create and refresh standalone materialized views and streaming tables using a SQL warehouse. To submit CREATE and REFRESH statements, use the SQL editor in the Azure Databricks UI, the Databricks SQL CLI, or the Databricks SQL API.
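For example, a standalone materialized view can be created and refreshed with standard SQL statements submitted from any of these interfaces (the catalog, schema, and table names below are placeholders):

```sql
-- Create a materialized view in a Unity Catalog schema
-- (main.sales.daily_orders and main.sales.orders are hypothetical names)
CREATE MATERIALIZED VIEW main.sales.daily_orders
AS
SELECT order_date, count(*) AS order_count
FROM main.sales.orders
GROUP BY order_date;

-- Later, update it with the latest source data
REFRESH MATERIALIZED VIEW main.sales.daily_orders;
```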
You can also create and refresh standalone materialized views and streaming tables from a notebook running on serverless general compute (Beta, limited regional availability). See Notebooks.
General requirements
The following requirements apply to all standalone pipelines.
You must have:
- An Azure Databricks account with serverless enabled. See Set up serverless SQL warehouses.
- A workspace with Unity Catalog enabled. See Get started with Unity Catalog.
Permissions to create or refresh
The owner (the user who creates the table) must have the following permissions:
- `SELECT` privilege on the base tables.
- `USE CATALOG` and `USE SCHEMA` privileges on the catalog and schema containing the source tables.
- `USE CATALOG` and `USE SCHEMA` privileges on the target catalog and schema.
- `CREATE MATERIALIZED VIEW` privilege on the schema containing the materialized view.
- `CREATE TABLE` privilege on the schema containing the streaming table. Pipelines using legacy publishing mode also require the `CREATE TABLE` privilege for materialized views.
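As an illustration, an administrator could grant these privileges to a creating user with `GRANT` statements like the following (the principal and object names are hypothetical):

```sql
-- Access to the source catalog, schema, and tables
GRANT USE CATALOG ON CATALOG main TO `analyst@example.com`;
GRANT USE SCHEMA ON SCHEMA main.sales TO `analyst@example.com`;
GRANT SELECT ON TABLE main.sales.orders TO `analyst@example.com`;

-- Permission to create the objects in the target schema
GRANT CREATE MATERIALIZED VIEW ON SCHEMA main.sales TO `analyst@example.com`;
GRANT CREATE TABLE ON SCHEMA main.sales TO `analyst@example.com`;
```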
To refresh a standalone materialized view or streaming table:
- You must be in the workspace that created it.
- You must have the `REFRESH` privilege on the table. Owners have this privilege implicitly.
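A sketch of granting the refresh privilege to another user, assuming a table created earlier (the table and principal names are placeholders):

```sql
-- Allow another user to refresh the table
GRANT REFRESH ON TABLE main.sales.daily_orders TO `analyst@example.com`;
```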
Source table requirements
For incremental refresh of materialized views from Delta tables, the source tables must have row tracking enabled.
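Row tracking can be enabled on an existing Delta source table by setting a table property (the table name is a placeholder):

```sql
-- Enable row tracking so downstream materialized views
-- can refresh incrementally
ALTER TABLE main.sales.orders
SET TBLPROPERTIES ('delta.enableRowTracking' = 'true');
```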
SQL warehouses
To create or refresh standalone materialized views and streaming tables using a SQL warehouse, you must have a Unity Catalog-enabled pro or serverless SQL warehouse.
- Your workspace must be in a region that supports serverless SQL warehouses.
Notebooks
You can create and refresh standalone materialized views and streaming tables from a notebook with serverless general compute.
Serverless general compute
Important
Creating and refreshing standalone materialized views and streaming tables from a notebook on serverless general compute is in Beta. This feature is available in select regions only. See Regional availability.
You can create and refresh standalone materialized views and streaming tables from a notebook attached to serverless general compute. This option is useful when you want to define and run materialized views or streaming tables alongside other notebook-based workflows without provisioning a SQL warehouse.
Serverless general compute requirements
- A notebook attached to serverless general compute.
- Databricks Runtime 18.1 or above. Interactive notebooks meet this requirement automatically; jobs pinned to an older version do not.
- Your workspace must be in a supported region.
Limitations
- Only the table owner can refresh the table. To allow another user to refresh, change the owner. See Change the owner of a streaming table and Change the owner of a materialized view.
- Asynchronous refreshes are not supported. Use a synchronous refresh instead.
- The preview channel is not supported. Tables created on serverless general compute use the `current` channel.
- A table can only be refreshed using the compute type it was created with. A table created on a SQL warehouse must be refreshed on a SQL warehouse, and a table created on serverless general compute must be refreshed on serverless general compute. To check the compute type, view the table in Catalog Explorer.
- Cost attribution and control are not available. Use a SQL warehouse if you need per-table cost attribution.
- Vertical autoscaling on out-of-memory errors is not available.
- Retries for schema upgrades are not available.
- Performance mode selection on refresh is not available. See Select a performance mode for scheduled refreshes.
Note
`spark.sql` is supported when running a refresh in a notebook on serverless general compute.
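For instance, a notebook cell on serverless general compute could pass a refresh statement like the following to `spark.sql("...")` (the table name is a placeholder):

```sql
-- Run via spark.sql("...") in a notebook cell
REFRESH MATERIALIZED VIEW main.sales.daily_orders;
```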
Query requirements
To query a standalone materialized view or streaming table, you must be the owner, or you must have SELECT on the table along with USE CATALOG and USE SCHEMA on its parents.
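Once you have these privileges, the table is queried like any other table (the table and column names here are placeholders):

```sql
SELECT *
FROM main.sales.daily_orders
WHERE order_date >= current_date() - INTERVAL 7 DAYS;
```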
You must use one of the following compute resources:
- SQL warehouse
- Lakeflow Spark Declarative Pipelines interfaces
- Standard access mode compute (formerly shared access mode)
- Dedicated access mode compute (formerly single user access mode) on Databricks Runtime 15.4 or above, if the workspace is enabled for serverless compute. See Fine-grained access control on dedicated compute. If you are the owner, you can use dedicated access mode compute running Databricks Runtime 14.3 or above.
For streaming tables on Databricks Runtime 15.3 and below, you can use dedicated compute to query a streaming table only if you own it. Databricks Runtime 15.4 LTS and above support querying pipeline-generated tables on dedicated compute even if you aren't the owner. You might be charged for serverless compute resources when you use dedicated compute to run data filtering operations. See Fine-grained access control on dedicated compute.
Regional availability
Tables created and refreshed using a Databricks SQL warehouse are available in all regions that support serverless Databricks SQL warehouses.
Creating and refreshing standalone materialized views and streaming tables on serverless general compute is available in select regions only.
For the list of supported regions for both compute options, see Serverless availability.