Databricks SQL release notes 2026

The following Databricks SQL features and improvements were released in 2026.

February 2026

Databricks SQL version 2025.40 is rolling out in Current

February 23, 2026

Databricks SQL version 2025.40 is rolling out to the Current channel. See features in 2025.40.

Databricks SQL version 2025.40 is now available in Preview

February 11, 2026

Databricks SQL version 2025.40 is now available in the Preview channel. Review the following section to learn about new features, behavioral changes, and bug fixes.

SQL scripting is generally available

SQL scripting is now generally available. Write procedural logic with SQL, including conditional statements, loops, local variables, and exception handling.
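A minimal illustrative script (not taken from the release notes) showing a local variable, a WHILE loop, and an exception handler:

```sql
-- Illustrative sketch: sums the integers 1 through 5.
BEGIN
  DECLARE total INT DEFAULT 0;
  DECLARE i INT DEFAULT 1;
  -- Generic handler that runs if any statement in the block fails.
  DECLARE EXIT HANDLER FOR SQLEXCEPTION
    BEGIN
      SELECT 'handler ran';
    END;
  WHILE i <= 5 DO
    SET total = total + i;
    SET i = i + 1;
  END WHILE;
  SELECT total;  -- 15
END;
```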

Parameter markers now supported in more SQL contexts

You can now use named (:param) and unnamed (?) parameter markers anywhere a literal value of the appropriate type is allowed. This includes DDL statements such as CREATE VIEW v AS SELECT ? AS c1, column types such as DECIMAL(:p, :s), and comments such as COMMENT ON t IS :comment. This lets you parameterize a wide variety of SQL statements without exposing your code to SQL injection attacks. See Parameter markers.
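For example, the contexts named above can now take parameter markers whose values are bound at execution time:

```sql
-- A named marker in a view definition.
CREATE VIEW v AS SELECT :greeting AS c1;

-- Markers as the precision and scale of a column type.
CREATE TABLE t (amount DECIMAL(:p, :s));

-- A marker as a comment string.
COMMENT ON TABLE t IS :comment;
```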

IDENTIFIER clause expanded to more SQL contexts

The IDENTIFIER clause, which casts strings to SQL object names, is now supported in nearly every context where an identifier is permitted. Combined with expanded parameter marker and literal string coalescing support, you can parameterize anything from column aliases (AS IDENTIFIER(:name)) to column definitions (IDENTIFIER(:pk) BIGINT NOT NULL). See IDENTIFIER clause.
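Putting the two together, object names themselves can be supplied as parameters:

```sql
-- Bind both the table name and the key column name at execution time.
CREATE TABLE IDENTIFIER(:tbl) (
  IDENTIFIER(:pk) BIGINT NOT NULL
);

-- A parameterized column alias.
SELECT 1 AS IDENTIFIER(:name);
```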

Literal string coalescing supported everywhere

Sequential string literals such as 'Hello' ' World' now coalesce into 'Hello World' in any context where string literals are allowed, including COMMENT 'This' ' is a ' 'comment'. See STRING type.
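For example:

```sql
-- Adjacent literals coalesce into a single string.
SELECT 'Hello' ' World' AS greeting;  -- 'Hello World'

-- Coalescing also works in comment clauses.
COMMENT ON TABLE t IS 'This' ' is a ' 'comment';
```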

New BITMAP_AND_AGG function

A new BITMAP_AND_AGG function is now available to complement the existing library of BITMAP functions.
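A hedged sketch of how BITMAP_AND_AGG composes with the existing bitmap functions, assuming all ids fall in a single bitmap bucket (the events table and columns are illustrative):

```sql
-- Intersect per-day membership bitmaps: count the bit positions
-- that are set on every day.
WITH daily AS (
  SELECT day,
         bitmap_construct_agg(bitmap_bit_position(user_id)) AS bm
  FROM events
  GROUP BY day
)
SELECT bitmap_count(bitmap_and_agg(bm)) AS users_active_every_day
FROM daily;
```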

New Theta Sketch functions for approximate distinct counts

A new library of functions for approximate distinct counts and set operations, based on Apache DataSketches Theta sketches, is now available.

New KLL Sketch functions for approximate quantiles

A new library of functions for building KLL sketches for approximate quantile computation is now available.

You can merge multiple KLL sketches in an aggregation context using kll_merge_agg_bigint, kll_merge_agg_double, and kll_merge_agg_float.
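For example, sketches built independently per shard can be combined in one aggregation (the table and column names are illustrative; only the merge function is named in these release notes):

```sql
-- Combine previously built per-shard KLL sketches of BIGINT values
-- into a single sketch covering all shards.
SELECT kll_merge_agg_bigint(sketch) AS merged_sketch
FROM per_shard_sketches;
```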

SQL window functions in metric views

You can now use SQL window functions in metric views to calculate running totals, rankings, and other window-based calculations.

FILTER clause for aggregate functions in metric views

You can now use the FILTER clause with measure aggregate functions in metric views to define per-aggregate filters when referencing metric view measures.
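The FILTER clause follows standard SQL aggregate syntax; a plain-query sketch with illustrative table and column names:

```sql
-- Each aggregate gets its own per-aggregate filter.
SELECT
  SUM(revenue) FILTER (WHERE region = 'EMEA') AS emea_revenue,
  COUNT(*)     FILTER (WHERE status = 'open') AS open_orders
FROM orders;
```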

New geospatial functions

New geospatial functions are now available.

EWKT input support for existing geometry and geography functions

Several existing geometry and geography functions now accept Extended Well-Known Text (EWKT) as input.

Improved geospatial function performance

Spatial join performance is improved with shuffled spatial join support, and additional ST functions now have Photon implementations.

FSCK REPAIR TABLE includes metadata repair by default

FSCK REPAIR TABLE now includes an initial metadata repair step before checking for missing data files, allowing it to work on tables with corrupt checkpoints or invalid partition values. Additionally, the dataFilePath column in the FSCK REPAIR TABLE DRY RUN output schema is now nullable to support new issue types where the data file path is not applicable.
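For example, you can preview the issues a repair would address before running it (the table name is illustrative):

```sql
-- Report issues, including the new metadata checks, without
-- modifying the table.
FSCK REPAIR TABLE main.default.events DRY RUN;

-- Run the repair, which now starts with a metadata repair step.
FSCK REPAIR TABLE main.default.events;
```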

DESCRIBE TABLE output includes metadata column

The output of DESCRIBE TABLE [EXTENDED] now includes a metadata column for all table types. This column contains semantic metadata (display name, format, and synonyms) defined on the table as a JSON string.
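For example (table name illustrative):

```sql
-- The output now includes a metadata column containing the table's
-- display name, format, and synonyms as a JSON string.
DESCRIBE TABLE EXTENDED main.default.events;
```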

NULL structs preserved in MERGE, UPDATE, and streaming write operations

NULL structs are now preserved as NULL in Delta Lake MERGE, UPDATE, and streaming write operations that include struct type casts. Previously, NULL structs were expanded to structs with all fields set to NULL.
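An illustrative sketch of the behavior change, with hypothetical table and column names:

```sql
-- address is a STRUCT column. Assigning NULL now keeps the whole
-- struct NULL; previously it became a struct of all-NULL fields.
UPDATE t SET address = NULL WHERE id = 1;

-- Now evaluates to true; under the old behavior it was false
-- because the struct itself was non-NULL.
SELECT address IS NULL FROM t WHERE id = 1;
```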

Partition columns materialized in Parquet files

Partitioned Delta Lake tables now materialize partition columns in newly written Parquet data files. Previously, partition values were stored only in the Delta Lake transaction log metadata. Workloads that read Parquet files written by Delta Lake directly will see additional partition columns in newly written files.

Timestamp partition values respect session timezone

Timestamp partition values are now correctly adjusted using the spark.sql.session.timeZone configuration. Previously, they were incorrectly converted to UTC using the JVM timezone.
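For example, the session timezone set before a write now determines how timestamp-derived partition values are rendered (the table is illustrative):

```sql
-- Partition values derived from the timestamp below now honor this
-- setting instead of being converted to UTC via the JVM timezone.
SET spark.sql.session.timeZone = 'America/Los_Angeles';
INSERT INTO events_by_ts VALUES (TIMESTAMP'2026-02-11 08:00:00');
```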

Time travel restrictions updated

Azure Databricks now blocks time travel queries beyond the deletedFileRetentionDuration threshold for all tables. The VACUUM command ignores the retention duration argument except when the value is 0 hours. You cannot set deletedFileRetentionDuration larger than logRetentionDuration.
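The retention durations involved are standard Delta table properties; an illustrative configuration consistent with the new constraint:

```sql
-- deletedFileRetentionDuration must not exceed logRetentionDuration.
ALTER TABLE t SET TBLPROPERTIES (
  'delta.deletedFileRetentionDuration' = 'interval 7 days',
  'delta.logRetentionDuration'         = 'interval 30 days'
);

-- Time travel is blocked once the target version falls outside the
-- deleted-file retention window.
SELECT * FROM t TIMESTAMP AS OF '2026-02-01';
```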

SHOW TABLES DROPPED respects LIMIT clause

SHOW TABLES DROPPED now correctly respects the LIMIT clause.
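A brief syntax sketch (the schema name is illustrative):

```sql
-- Returns at most five rows, as the LIMIT clause now takes effect.
SHOW TABLES DROPPED IN main.default LIMIT 5;
```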

January 2026