DLT pipeline notebook using UC receiving the following error we cannot find a solution for: [LIVE_REFERENCE_OUTSIDE_QUERY_DEFINITION_CLASSIC] Referencing datasets using `LIVE` virtual schema outside the dataset query definition

Barsness, Scott 0 Reputation points
2024-10-22T14:36:23.1233333+00:00

We are currently working in a UC DLT pipeline notebook and receiving the error below, for which we cannot find a solution. It only occurs in our Dev environment; the same code works successfully in our Test and Prod environments in Azure. The code erroring is below.

  import dlt

  # dest_table, z_order_cols, pks, and next_snapshot_and_version
  # are defined earlier in the notebook.
  dlt.create_target_table(
      name=f"{dest_table}_silver_no_indicators",
      # path = f"{silver_path_prefix}/RM/{dest_table}_silver_no_indicators",
      partition_cols=["__START_AT"],
      table_properties={"pipelines.autoOptimize.zOrderCols": z_order_cols})

  dlt.apply_changes_from_snapshot(
      target=f"{dest_table}_silver_no_indicators",
      snapshot_and_version=next_snapshot_and_version,
      keys=pks,
      stored_as_scd_type="2",
      track_history_except_column_list=["edl_ingestion_dt", "edl_deletedindicator"])
py4j.protocol.Py4JJavaError: An error occurred while calling o580.read.
: com.databricks.pipelines.LiveTableReferenceOutsideQueryDefinition: [LIVE_REFERENCE_OUTSIDE_QUERY_DEFINITION_CLASSIC] Referencing datasets using `LIVE` virtual schema outside the dataset query definition (i.e., @dlt.table annotation) is not supported.
Azure Databricks

1 answer

  1. hossein jalilian 11,055 Reputation points Volunteer Moderator
    2024-10-22T16:47:53+00:00

    Thanks for posting your question in the Microsoft Q&A forum.

    The error you're encountering means that a `LIVE` table is being referenced outside of a DLT dataset query definition (i.e., outside a function decorated with `@dlt.table` or passed to a DLT API). This is not supported in Delta Live Tables.

    • Ensure all LIVE table references are within DLT dataset definitions
    • If you need to reference the table outside of a DLT dataset definition, use the full table name including the catalog and schema, rather than the LIVE schema.
    • Make sure you're not indirectly referencing a LIVE table through a view or a temporary table.
    • Instead of using spark.read.table() or other direct reading methods at the top level of the notebook, use dlt.read() inside a dataset definition to reference tables within your DLT pipeline.
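
    As an illustrative sketch of the points above (the table and column names here are hypothetical, and this only runs inside a Databricks DLT pipeline), the difference between the failing and supported patterns looks like this:

    ```python
    import dlt
    from pyspark.sql import functions as F

    # Failing pattern: a LIVE reference evaluated at module level,
    # outside any dataset query definition, raises
    # LIVE_REFERENCE_OUTSIDE_QUERY_DEFINITION_CLASSIC:
    #
    #   df = spark.read.table("LIVE.customers_bronze")  # not supported here

    # Supported pattern: the LIVE reference sits inside the dataset's
    # query definition, so the pipeline resolves it when building the graph.
    @dlt.table(name="customers_silver")
    def customers_silver():
        # dlt.read() resolves the upstream dataset within the pipeline.
        return dlt.read("customers_bronze").withColumn(
            "ingested_at", F.current_timestamp()
        )
    ```

    Since snapshot sources for apply_changes_from_snapshot are read inside a user-supplied function, the same rule applies there: read the snapshot with a fully qualified catalog.schema.table name rather than the `LIVE` schema.
    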

    Please don't forget to close the thread here by upvoting and accepting this as an answer if it is helpful.
