Applies to: Databricks SQL Databricks Runtime
Adds, modifies, or drops a column in a table or a field in a column in a Delta Lake table.
If you use Unity Catalog you must have `MODIFY` permission on the table to add, alter, drop, or rename columns. All other operations require ownership of the table.
```sql
ALTER TABLE table_name
  { ADD COLUMN clause |
    ALTER COLUMN clause |
    DROP COLUMN clause |
    RENAME COLUMN clause }
```
ADD COLUMN clause

This clause is not supported for JDBC data sources.

Adds one or more columns to the table, or fields to existing columns in a Delta Lake table.
Note

When you add a column to an existing Delta table, you cannot define a `DEFAULT` value. All columns added to Delta tables are treated as `NULL` for existing rows. After adding a column, you can optionally define a default value for the column, but it is only applied to new rows inserted into the table. Use the following syntax:

```sql
ALTER TABLE table_name ALTER COLUMN column_name SET DEFAULT default_expression
```
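For example, assuming a hypothetical Delta table named `orders`, the two-step pattern looks like this:

```sql
-- Add the column; existing rows read it as NULL
ALTER TABLE orders ADD COLUMN discount DECIMAL(5,2);

-- Optionally define a default; it applies only to rows inserted afterwards
ALTER TABLE orders ALTER COLUMN discount SET DEFAULT 0.00;
```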
```sql
{ ADD [ COLUMN | COLUMNS ]
  ( { { column_identifier | field_name } data_type
      [ COLLATE collation_name ]
      [ DEFAULT clause ]
      [ COMMENT comment ]
      [ FIRST | AFTER identifier ]
      [ MASK clause ] } [, ...] ) }
```
column_identifier

The name of the column to be added. The name must be unique within the table.

Unless `FIRST` or `AFTER name` is specified, the column or field is appended at the end.

field_name

The fully qualified name of the field to be added to an existing column. All components of the path to the nested field must exist, and the field name itself must be unique.
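As a sketch (the table, column, and struct names are hypothetical), positioning and nested fields work like this:

```sql
-- Add a column immediately after a specific existing column
ALTER TABLE orders ADD COLUMN order_source STRING AFTER order_id;

-- Add a field to an existing struct column; every path component except
-- the new field (here, shipping_address) must already exist
ALTER TABLE orders ADD COLUMN shipping_address.postal_code STRING;
```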
DEFAULT default_expression

Applies to: Databricks SQL Databricks Runtime 11.3 LTS and above

Defines a `DEFAULT` value for the column, which is used on `INSERT` and `MERGE ... INSERT` when the column is not specified.

Any `STRING` literals and `STRING` functions in the default expression use the `UTF8_BINARY` collation.

If no default is specified, `DEFAULT NULL` is implied for nullable columns.

`default_expression` may be composed of literals and built-in SQL functions or operators, but must not contain any subquery.

`DEFAULT` is supported for `CSV`, `JSON`, `PARQUET`, and `ORC` sources.
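As an illustration of these rules (names hypothetical), a default may combine literals and built-in functions but never a subquery:

```sql
-- Valid: a built-in function as the default
ALTER TABLE events ALTER COLUMN ingest_date SET DEFAULT current_date();

-- Invalid: subqueries are not allowed in default expressions
-- ALTER TABLE events ALTER COLUMN max_id SET DEFAULT (SELECT MAX(id) FROM ids);
```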
data_type

Specifies the data type of the column or field. Not all data types supported by Azure Databricks are supported by all data sources.

COLLATE collation_name

For `data_type` `STRING`, optionally specifies the collation to use with the column or field. If not specified, the `UTF8_BINARY` collation applies.
COMMENT comment

An optional `STRING` literal describing the added column or field.

If you want to add an AI-generated comment for a table or table column managed by Unity Catalog, see Add AI-generated comments to Unity Catalog objects.
FIRST

If specified, the column is added as the first column of the table, or the field is added as the first field of the containing struct.

AFTER identifier

If specified, the column or field is added immediately after the field or column `identifier`.
MASK clause

Applies to: Databricks SQL Databricks Runtime 12.2 LTS and above Unity Catalog only

Important

This feature is in Public Preview.

Adds a column mask function to anonymize sensitive data. All subsequent queries on that column receive the result of evaluating that function over the column in place of the column's original value. This can be useful for fine-grained access control, where the function can inspect the identity or group memberships of the invoking user to decide whether to redact the value.
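A minimal sketch of this pattern, assuming a hypothetical `users` table and an `admins` account group (`is_account_group_member` is a Databricks built-in function):

```sql
-- Masking function: members of 'admins' see the raw value, everyone else a redaction
CREATE FUNCTION ssn_mask(ssn STRING)
  RETURN CASE WHEN is_account_group_member('admins') THEN ssn
              ELSE '***-**-****' END;

-- Attach the mask while adding the column
ALTER TABLE users ADD COLUMN ssn STRING MASK ssn_mask;
```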
ALTER COLUMN clause

Applies to: Databricks SQL Databricks Runtime

Changes a property or the location of a column.
```sql
{ { ALTER | CHANGE } [ COLUMN ] { column_identifier | field_name }
  { COMMENT comment |
    { FIRST | AFTER column_identifier } |
    { SET | DROP } NOT NULL |
    TYPE data_type |
    SET DEFAULT clause |
    DROP DEFAULT |
    SYNC IDENTITY |
    SET { MASK clause } |
    DROP MASK |
    SET TAGS clause |
    UNSET TAGS clause } }
```
column_identifier

The name of the column to be altered.

field_name

The fully qualified name of the field to be altered. All components of the path to the nested field must exist.
COMMENT comment

Changes the description of the `column_name` column. `comment` must be a `STRING` literal.

FIRST or AFTER identifier

Moves the column from its current position to the front (`FIRST`) or immediately `AFTER` the `identifier`.

This clause is only supported if `table_name` is a Delta table.
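For instance (table and column names are hypothetical):

```sql
-- Change a column's description
ALTER TABLE orders ALTER COLUMN status COMMENT 'Current fulfillment state';

-- Reposition columns (Delta tables only)
ALTER TABLE orders ALTER COLUMN order_id FIRST;
ALTER TABLE orders ALTER COLUMN status AFTER order_id;
```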
TYPE data_type

Applies to: Databricks SQL Databricks Runtime 15.2 and above

Changes the data type of the `column_name` column.

This clause is only supported if `table_name` is a Delta table.

The following type changes are supported for all Delta tables:

- Widening a `VARCHAR` column, for example, from `VARCHAR(5)` to `VARCHAR(10)`
- Changing a `CHAR` column to a `VARCHAR`, for example, from `CHAR(5)` to `VARCHAR(5)`
- Changing a `CHAR` or `VARCHAR` column to `STRING`, for example, from `VARCHAR(10)` to `STRING`

The following type changes are supported for Delta tables with `delta.enableTypeWidening` set to `true`:
Important

This feature is in Public Preview in Databricks Runtime 15.2 and above.

| Source type | Supported wider types |
|---|---|
| `BYTE` | `SHORT`, `INT`, `BIGINT`, `DECIMAL`, `DOUBLE` |
| `SHORT` | `INT`, `BIGINT`, `DECIMAL`, `DOUBLE` |
| `INT` | `BIGINT`, `DECIMAL`, `DOUBLE` |
| `BIGINT` | `DECIMAL`, `DOUBLE` |
| `FLOAT` | `DOUBLE` |
| `DECIMAL` | `DECIMAL` with greater precision and scale |
| `DATE` | `TIMESTAMP_NTZ` |

For more detailed information on type widening, see Type widening.
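A sketch of both cases, using a hypothetical `orders` table:

```sql
-- Always supported on Delta: CHAR/VARCHAR to STRING
ALTER TABLE orders ALTER COLUMN sku TYPE STRING;

-- Widening INT to BIGINT requires type widening to be enabled first
ALTER TABLE orders SET TBLPROPERTIES ('delta.enableTypeWidening' = 'true');
ALTER TABLE orders ALTER COLUMN quantity TYPE BIGINT;
```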
SET NOT NULL or DROP NOT NULL

Changes the domain of valid column values to exclude nulls (`SET NOT NULL`) or include nulls (`DROP NOT NULL`).

This option is only supported for Delta Lake tables.

Delta Lake ensures the constraint is valid for all existing and new data.
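For example (names hypothetical):

```sql
-- Fails if any existing row holds NULL in the column
ALTER TABLE orders ALTER COLUMN customer_id SET NOT NULL;

-- Relax the constraint again
ALTER TABLE orders ALTER COLUMN customer_id DROP NOT NULL;
```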
SYNC IDENTITY

Applies to: Databricks SQL Databricks Runtime 10.4 LTS and above

Synchronizes the metadata of an identity column with the actual data. When you write your own values to an identity column, they might not comply with the metadata. This option evaluates the state and updates the metadata to be consistent with the actual data. After this command, the next automatically assigned identity value starts from `start + (n + 1) * step`, where `n` is the smallest value that satisfies `start + n * step >= max()` (for a positive step).

This option is only supported for identity columns on Delta Lake tables.
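To make the arithmetic concrete: with `start = 1` and `step = 2`, if manual writes pushed the column's maximum to 10, the smallest `n` with `1 + n * 2 >= 10` is 5, so the next generated value is `1 + 6 * 2 = 13`. The command itself (table and column names hypothetical):

```sql
ALTER TABLE orders ALTER COLUMN order_id SYNC IDENTITY;
```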
DROP DEFAULT

Applies to: Databricks SQL Databricks Runtime 11.3 LTS and above

Removes the default expression from the column. For nullable columns this is equivalent to `SET DEFAULT NULL`. For columns defined with `NOT NULL` you need to provide a value on every future `INSERT` operation.
SET DEFAULT default_expression

Applies to: Databricks SQL Databricks Runtime 11.3 LTS and above

Defines a `DEFAULT` value for the column, which is used on `INSERT` and `MERGE ... INSERT` when the column is not specified.

If no default is specified, `DEFAULT NULL` is implied for nullable columns.

`default_expression` may be composed of literals and built-in SQL functions or operators, but must not contain a subquery.

`DEFAULT` is supported for `CSV`, `JSON`, `ORC`, and `PARQUET` sources.

When you define the default for a newly added column, the default applies to all pre-existing rows. If the default includes a non-deterministic function such as `rand` or `current_timestamp`, the value is computed once when the `ALTER TABLE` is executed and applied as a constant to pre-existing rows. For newly inserted rows, the default expression runs once per row.

When you set a default using `ALTER COLUMN`, existing rows are not affected by that change.
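A short illustration (names hypothetical):

```sql
-- New inserts that omit the column receive the evaluated default
ALTER TABLE events ALTER COLUMN received_at SET DEFAULT current_timestamp();

-- Remove the default; nullable columns fall back to DEFAULT NULL
ALTER TABLE events ALTER COLUMN received_at DROP DEFAULT;
```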
SET MASK clause

Applies to: Databricks SQL Databricks Runtime 12.2 LTS and above Unity Catalog only

Important

This feature is in Public Preview.

Adds a column mask function to anonymize sensitive data. All subsequent queries on that column receive the result of evaluating that function over the column in place of the column's original value. This can be useful for fine-grained access control, where the function can inspect the identity or group memberships of the invoking user to decide whether to redact the value.
DROP MASK
Applies to: Unity Catalog only
Important
This feature is in Public Preview.
Removes the column mask for this column, if any. Future queries on this column receive the column's original values.
SET TAGS ( { tag_name = tag_value } [, ...] )

Applies to: Databricks SQL Databricks Runtime 13.3 LTS and above

Applies tags to the column. You need to have `APPLY TAG` permission to add tags to the column.

tag_name

A literal `STRING`. The `tag_name` must be unique within the table or column.

tag_value

A literal `STRING`.
UNSET TAGS ( tag_name [, ...] )

Applies to: Databricks SQL Databricks Runtime 13.3 LTS and above

Removes tags from the column. You need to have `APPLY TAG` permission to remove tags from the column.

tag_name

A literal `STRING`. The `tag_name` must be unique within the table or column.
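For example (tag names and values are illustrative):

```sql
ALTER TABLE users ALTER COLUMN email SET TAGS ('pii' = 'true', 'owner' = 'data-gov');
ALTER TABLE users ALTER COLUMN email UNSET TAGS ('owner');
```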
DROP COLUMN clause

Important

This feature is in Public Preview.

Applies to: Databricks SQL Databricks Runtime 11.3 LTS and above

Drops one or more columns or fields in a Delta Lake table.

When you drop a column or field, you must drop dependent check constraints and generated columns.

For requirements, see Rename and drop columns with Delta Lake column mapping.

```sql
DROP [ COLUMN | COLUMNS ] [ IF EXISTS ] ( { column_identifier | field_name } [, ...] )
```
IF EXISTS

When you specify `IF EXISTS`, Azure Databricks ignores an attempt to drop columns that do not exist. Otherwise, dropping non-existing columns causes an error.
column_identifier

The name of the existing column.

field_name

The fully qualified name of an existing field.
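For example (names hypothetical):

```sql
-- Drop a single column; IF EXISTS suppresses the error if it is absent
ALTER TABLE orders DROP COLUMN IF EXISTS (legacy_flag);

-- Drop several columns at once
ALTER TABLE orders DROP COLUMNS (tmp_a, tmp_b);
```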
RENAME COLUMN clause

Important

This feature is in Public Preview.

Applies to: Databricks SQL Databricks Runtime 10.4 LTS and above

Renames a column or field in a Delta Lake table enabled for column mapping.

When you rename a column or field, you also need to change dependent check constraints and generated columns. Any primary keys and foreign keys using the column are dropped. In the case of foreign keys, you must own the table on which the foreign key is defined.

For requirements, and how to enable column mapping, see Rename and drop columns with Delta Lake column mapping.

```sql
RENAME COLUMN { column_identifier TO to_column_identifier |
                field_name TO to_field_identifier }
```
column_identifier

The existing name of the column.

to_column_identifier

The new column identifier. The identifier must be unique within the table.

field_name

The existing fully qualified name of a field.

to_field_identifier

The new field identifier. The identifier must be unique within the local struct.
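For example (names hypothetical):

```sql
-- Rename a top-level column
ALTER TABLE orders RENAME COLUMN o_id TO order_id;

-- Rename a nested field; the new name is relative to the containing struct
ALTER TABLE orders RENAME COLUMN address.zip TO postal_code;
```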
See ALTER TABLE examples.