Manage data quality with Delta Live Tables

You use expectations to define data quality constraints on the contents of a dataset. Expectations allow you to guarantee that data arriving in tables meets data quality requirements and provide insight into data quality for each pipeline update. You apply expectations to queries using Python decorators or SQL constraint clauses.

What are Delta Live Tables expectations?

Expectations are optional clauses you add to Delta Live Tables dataset declarations that apply data quality checks on each record passing through a query.

An expectation consists of three things:

  • A description, which acts as a unique identifier and allows you to track metrics for the constraint.
  • A boolean statement that always returns true or false based on some stated condition.
  • An action to take when a record fails the expectation, meaning the boolean returns false.

The following matrix shows the three actions you can apply to invalid records:

Action          Result
warn (default)  Invalid records are written to the target; failure is reported as a metric for the dataset.
drop            Invalid records are dropped before data is written to the target; failure is reported as a metric for the dataset.
fail            Invalid records prevent the update from succeeding. Manual intervention is required before re-processing.
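
To make the matrix concrete, the following is a hedged, plain-Python sketch of the three action semantics. This is illustrative pseudologic only, not the Delta Live Tables API: DLT evaluates constraints as SQL boolean expressions inside the pipeline engine, which is modeled here with a Python predicate.

```python
# Illustrative sketch of warn/drop/fail semantics (not the DLT API).
# A constraint is modeled as a Python predicate over a record dict.

def apply_expectation(records, predicate, action="warn"):
    """Return (records_written, failed_count) under the given action."""
    passed = [r for r in records if predicate(r)]
    failed = len(records) - len(passed)
    if action == "warn":
        return records, failed   # invalid rows kept; failure counted as a metric
    if action == "drop":
        return passed, failed    # invalid rows removed before the write
    if action == "fail":
        if failed:
            # the whole update stops; nothing is written
            raise ValueError(f"{failed} record(s) violated the expectation")
        return records, 0
    raise ValueError(f"unknown action: {action}")

rows = [{"count": 3}, {"count": 0}]
pred = lambda r: r["count"] > 0
```

For example, `apply_expectation(rows, pred, "drop")` keeps only the record with a positive count while still reporting one failure, mirroring how drop writes valid rows and surfaces the violation count as a metric.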

You can view data quality metrics such as the number of records that violate an expectation by querying the Delta Live Tables event log. See Monitor Delta Live Tables pipelines.

For a complete reference of Delta Live Tables dataset declaration syntax, see Delta Live Tables Python language reference or Delta Live Tables SQL language reference.


While you can apply multiple expectations to any dataset, only Python supports defining a single action across a group of expectations. See Multiple expectations.

Retain invalid records

Use the expect operator when you want to keep records that violate the expectation. Records that violate the expectation are added to the target dataset along with valid records:


Python:

@dlt.expect("valid_timestamp", "timestamp > '2012-01-01'")

SQL:

CONSTRAINT valid_timestamp EXPECT (timestamp > '2012-01-01')

Drop invalid records

Use the expect_or_drop operator to prevent further processing of invalid records. Records that violate the expectation are dropped from the target dataset:


Python:

@dlt.expect_or_drop("valid_current_page", "current_page_id IS NOT NULL AND current_page_title IS NOT NULL")

SQL:

CONSTRAINT valid_current_page EXPECT (current_page_id IS NOT NULL AND current_page_title IS NOT NULL) ON VIOLATION DROP ROW

Fail on invalid records

When invalid records are unacceptable, use the expect_or_fail operator to stop execution immediately when a record fails validation. If the operation is a table update, the system atomically rolls back the transaction:


Python:

@dlt.expect_or_fail("valid_count", "count > 0")

SQL:

CONSTRAINT valid_count EXPECT (count > 0) ON VIOLATION FAIL UPDATE

When a pipeline fails because of an expectation violation, you must fix the pipeline code to handle the invalid data correctly before re-running the pipeline.

Fail expectations modify the Spark query plan of your transformations to track information required to detect and report on violations. For many queries, you can use this information to identify which input record resulted in the violation. The following is an example exception:

Expectation Violated:
{
  "flowName": "a-b",
  "verboseInfo": {
    "expectationsViolated": [
      "x1 is negative"
    ],
    "inputData": {
      "a": {"x1": 1, "y1": "a"},
      "b": {
        "x2": 1,
        "y2": "aa"
      }
    },
    "outputRecord": {
      "x1": 1,
      "y1": "a",
      "x2": 1,
      "y2": "aa"
    },
    "missingInputData": false
  }
}

Multiple expectations

You can define expectations with one or more data quality constraints in Python pipelines using the expect_all, expect_all_or_drop, and expect_all_or_fail decorators. Each decorator accepts a Python dictionary as an argument, where the key is the expectation name and the value is the expectation constraint.

Use expect_all to specify multiple data quality constraints when records that fail validation should be included in the target dataset:

@dlt.expect_all({"valid_count": "count > 0", "valid_current_page": "current_page_id IS NOT NULL AND current_page_title IS NOT NULL"})

Use expect_all_or_drop to specify multiple data quality constraints when records that fail validation should be dropped from the target dataset:

@dlt.expect_all_or_drop({"valid_count": "count > 0", "valid_current_page": "current_page_id IS NOT NULL AND current_page_title IS NOT NULL"})

Use expect_all_or_fail to specify multiple data quality constraints when records that fail validation should halt pipeline execution:

@dlt.expect_all_or_fail({"valid_count": "count > 0", "valid_current_page": "current_page_id IS NOT NULL AND current_page_title IS NOT NULL"})

You can also define a collection of expectations as a variable and pass it to one or more queries in your pipeline:

valid_pages = {"valid_count": "count > 0", "valid_current_page": "current_page_id IS NOT NULL AND current_page_title IS NOT NULL"}

@dlt.table
@dlt.expect_all(valid_pages)
def raw_data():
  # Create raw dataset

@dlt.table
@dlt.expect_all_or_drop(valid_pages)
def prepared_data():
  # Create cleaned and prepared dataset
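
As a hedged illustration of how a rules dictionary maps onto per-expectation metrics, the sketch below evaluates named rules against in-memory records and tallies failures per rule, mirroring the counts DLT reports. The record fields and predicates are hypothetical; DLT constraints are SQL expression strings, modeled here as Python callables.

```python
# Hypothetical sketch: evaluate a dictionary of named expectations per record
# and tally failures per rule, mirroring the per-expectation metrics DLT reports.

def evaluate_expectations(records, rules):
    """rules: dict of name -> predicate. Returns dict of name -> failed count."""
    failures = {name: 0 for name in rules}
    for record in records:
        for name, predicate in rules.items():
            if not predicate(record):
                failures[name] += 1
    return failures

# Stand-ins for the SQL constraints used in the examples above.
valid_pages_sketch = {
    "valid_count": lambda r: r["count"] > 0,
    "valid_current_page": lambda r: r["current_page_id"] is not None,
}

records = [
    {"count": 1, "current_page_id": 10},
    {"count": 0, "current_page_id": None},
]
```

Each rule is checked independently, so one record can contribute to several failure counts, which is also how a single record can violate multiple expectations in a pipeline update.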

Quarantine invalid data

The following example uses expectations in combination with temporary tables and views. This pattern provides you with metrics for records that pass expectation checks during pipeline updates, and provides a way to process valid and invalid records through different downstream paths.

import dlt
from pyspark.sql.functions import expr

rules = {}
rules["valid_website"] = "(Website IS NOT NULL)"
rules["valid_location"] = "(Location IS NOT NULL)"
quarantine_rules = "NOT({0})".format(" AND ".join(rules.values()))

@dlt.table(
  name="raw_farmers_market"
)
def get_farmers_market_data():
  return (
    spark.read.format('csv').option("header", "true")
      .load('/databricks-datasets/data.gov/farmers_markets_geographic_data/data-001/')
  )

@dlt.table(
  name="farmers_market_quarantine",
  temporary=True,
  partition_cols=["is_quarantined"]
)
@dlt.expect_all(rules)
def farmers_market_quarantine():
  return (
    dlt.read("raw_farmers_market")
      .select("MarketName", "Website", "Location", "State",
              "Facebook", "Twitter", "Youtube", "Organic", "updateTime")
      .withColumn("is_quarantined", expr(quarantine_rules))
  )

@dlt.view(name="valid_farmers_market")
def get_valid_farmers_market():
  return (
    dlt.read("farmers_market_quarantine")
      .filter("is_quarantined=false")
  )

@dlt.view(name="invalid_farmers_market")
def get_invalid_farmers_market():
  return (
    dlt.read("farmers_market_quarantine")
      .filter("is_quarantined=true")
  )
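
The quarantine predicate in this pattern is pure string composition: negate the conjunction of all rules. A minimal standalone sketch of that step, with no Spark required:

```python
# Build the quarantine predicate by negating the conjunction of all rules.
# A record is quarantined exactly when at least one rule fails.

rules = {
    "valid_website": "(Website IS NOT NULL)",
    "valid_location": "(Location IS NOT NULL)",
}
quarantine_rules = "NOT({0})".format(" AND ".join(rules.values()))
```

The resulting string is a SQL boolean expression suitable for passing to expr() as an is_quarantined flag column.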

Validate row counts across tables

You can add an additional table to your pipeline that defines an expectation to compare row counts between two live tables. The results of this expectation appear in the event log and the Delta Live Tables UI. The following example validates equal row counts between the tbla and tblb tables:

CREATE OR REFRESH LIVE TABLE count_verification(
  CONSTRAINT no_rows_dropped EXPECT (a_count == b_count)
) AS SELECT * FROM
  (SELECT COUNT(*) AS a_count FROM LIVE.tbla),
  (SELECT COUNT(*) AS b_count FROM LIVE.tblb)

Perform advanced validation with Delta Live Tables expectations

You can define live tables using aggregate and join queries and use the results of those queries as part of your expectation checking. This is useful if you wish to perform complex data quality checks, for example, ensuring a derived table contains all records from the source table or guaranteeing the equality of a numeric column across tables. You can use the TEMPORARY keyword to prevent these tables from being published to the target schema.

The following example validates that all expected records are present in the report table:

CREATE TEMPORARY LIVE TABLE report_compare_tests(
  CONSTRAINT no_missing_records EXPECT (r.key IS NOT NULL)
)
AS SELECT * FROM LIVE.validation_copy v
LEFT OUTER JOIN LIVE.report r ON v.key = r.key

The following example uses an aggregate to ensure the uniqueness of a primary key:

CREATE TEMPORARY LIVE TABLE report_pk_tests(
  CONSTRAINT unique_pk EXPECT (num_entries = 1)
)
AS SELECT pk, count(*) as num_entries
FROM LIVE.report
GROUP BY pk

Make expectations portable and reusable

You can maintain data quality rules separately from your pipeline implementations.

Databricks recommends storing the rules in a Delta table with each rule categorized by a tag. You use this tag in dataset definitions to determine which rules to apply.

The following example creates a table named rules to maintain rules:

CREATE OR REPLACE TABLE
  rules
AS SELECT
  col1 AS name,
  col2 AS constraint,
  col3 AS tag
FROM (
  VALUES
  ("website_not_null","Website IS NOT NULL","validity"),
  ("location_not_null","Location IS NOT NULL","validity"),
  ("state_not_null","State IS NOT NULL","validity"),
  ("fresh_data","to_date(updateTime,'M/d/yyyy h:m:s a') > '2010-01-01'","maintained"),
  ("social_media_access","NOT(Facebook IS NULL AND Twitter IS NULL AND Youtube IS NULL)","maintained")
)

The following Python example defines data quality expectations based on the rules stored in the rules table. The get_rules() function reads the rules from the rules table and returns a Python dictionary containing rules matching the tag argument passed to the function. The dictionary is applied in the @dlt.expect_all_*() decorators to enforce data quality constraints. For example, any records failing the rules tagged with validity will be dropped from the raw_farmers_market table:

import dlt
from pyspark.sql.functions import expr, col

def get_rules(tag):
  """
  loads data quality rules from a table
  :param tag: tag to match
  :return: dictionary of rules that matched the tag
  """
  rules = {}
  df = spark.read.table("rules")
  for row in df.filter(col("tag") == tag).collect():
    rules[row['name']] = row['constraint']
  return rules

@dlt.table(
  name="raw_farmers_market"
)
@dlt.expect_all_or_drop(get_rules('validity'))
def get_farmers_market_data():
  return (
    spark.read.format('csv').option("header", "true")
      .load('/databricks-datasets/data.gov/farmers_markets_geographic_data/data-001/')
  )

@dlt.table(
  name="organic_farmers_market"
)
@dlt.expect_all_or_drop(get_rules('maintained'))
def get_organic_farmers_market():
  return (
    dlt.read("raw_farmers_market")
      .filter(expr("Organic = 'Y'"))
      .select("MarketName", "Website", "State",
        "Facebook", "Twitter", "Youtube", "Organic",
        "updateTime"
      )
  )
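
Stripped of Spark, the tag-filtering logic in get_rules() reduces to building a name-to-constraint dictionary from the rows whose tag matches. A minimal sketch with hypothetical in-memory rows standing in for the rules table:

```python
# Minimal, Spark-free sketch of the get_rules() logic: filter rows by tag and
# build a name -> constraint dictionary. These rows stand in for the rules table.

rows = [
    {"name": "website_not_null", "constraint": "Website IS NOT NULL", "tag": "validity"},
    {"name": "fresh_data", "constraint": "updateTime > '2010-01-01'", "tag": "maintained"},
]

def get_rules_sketch(tag):
    """Return a dict of rules whose tag matches, keyed by rule name."""
    return {row["name"]: row["constraint"] for row in rows if row["tag"] == tag}
```

Because the result is an ordinary dictionary, it plugs directly into the @dlt.expect_all_*() decorators, which is what makes the tag-based rules table portable across pipelines.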