Append to a Liquid Clustering enabled table is not completing on DBR 15.3

Sudipta Goswami 20 Reputation points
2024-07-13T07:29:40.88+00:00

I am trying to do an analysis comparing a partitioned table and a Liquid Clustered table. As per the Azure Databricks recommendation, I am using DBR 15.2 to execute the code.

I have created a clustered table and am appending to it using the operation specified below.

A few stats: the size of the DataFrame is around 950 GB, with 9 billion rows.

When I run this piece of code on DBR 14.3, it executes in around 12 minutes. But when I use DBR 15.2 or DBR 15.3, it does not complete; I have left the process running for around 2 hours.

Can you please advise why this is not completing on the latest DBR runtimes?

CREATE TABLE ext_IOT_event_clust ( 
	id BIGINT, 
	device_id STRING, 
	country STRING, 
	manufacturer STRING, 
	model_line STRING, 
	event_type STRING, 
	event_ts TIMESTAMP) 
USING delta CLUSTER BY (country) 
LOCATION 'abfss://adls-location' 


# Writing to the clustered table created above

df_raw_event.write.format("delta").option("path","abfss://adls-location2") \
    .mode("append") \
    .saveAsTable("ext_IOT_event_clust")

2 answers

  1. Amira Bedhiafi 19,946 Reputation points
    2024-07-13T12:12:53.4633333+00:00

    Your issue could be due to several factors.

    The first thing that comes to mind is that newer runtime versions may introduce behavioral changes or new optimizations. So you may need to review, and potentially adjust, your cluster configuration to ensure it has sufficient resources for the 950 GB, 9-billion-row dataset.
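    As a quick sanity check (a minimal sketch, assuming a standard Databricks notebook where spark is predefined), you can confirm the parallelism the cluster actually exposes before rerunning the 950 GB append:

    # Basic resource sanity check in a notebook cell; with AQE enabled
    # on Databricks, shuffle partitions may simply report "auto"
    print("Default parallelism:", spark.sparkContext.defaultParallelism)
    print("Shuffle partitions:", spark.conf.get("spark.sql.shuffle.partitions", "<auto>"))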

    Check for any Delta Lake or Spark SQL performance tuning parameters that might help. If you haven't enabled detailed logging and monitoring, you should do so, at least to capture more information about the write operation and identify performance bottlenecks.
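    As an illustration, here is a minimal sketch for comparing write-related settings between your DBR 14.3 and DBR 15.x clusters. The conf names are assumptions based on commonly documented Delta Lake settings and may not all be set on your runtime:

    # Conf names below are illustrative; unset confs print "<not set>"
    for conf in [
        "spark.databricks.delta.optimizeWrite.enabled",
        "spark.databricks.delta.autoCompact.enabled",
        "spark.sql.shuffle.partitions",
    ]:
        print(conf, "=", spark.conf.get(conf, "<not set>"))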

    Review your dataframe transformations and actions for efficiency, and verify that your clustering strategy is appropriate for your query patterns.
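    To verify the clustering setup, you can query the table detail (a sketch; the clusteringColumns field is reported by DESCRIBE DETAIL on recent DBR versions):

    # Inspect the table's clustering columns and current file layout
    detail = spark.sql("DESCRIBE DETAIL ext_IOT_event_clust")
    detail.select("clusteringColumns", "numFiles", "sizeInBytes").show(truncate=False)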

    Steps you can follow to diagnose the issue:

    • Check the Databricks cluster logs for any error messages or warnings related to the write operation. Look for exceptions or stack traces that might indicate the cause of the problem.
    • Use the Spark SQL EXPLAIN command (or DataFrame.explain) to generate the execution plan for your write. This can help you understand how Spark is processing the data and identify potential bottlenecks (see the sketch after this list).
    • Use Databricks' built-in performance profiling tools, such as the Spark UI, to analyze the execution of your job. This can show where the time is being spent during the write operation.
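    For the EXPLAIN step, a minimal sketch, assuming df_raw_event is defined as in the question (this shows the plan of the DataFrame feeding the write, not of the write itself):

    # Print the formatted physical plan; compare the output between
    # the DBR 14.3 and DBR 15.x clusters to spot plan changes
    df_raw_event.explain(mode="formatted")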

  2. PRADEEPCHEEKATLA-MSFT 85,511 Reputation points Microsoft Employee
    2024-07-22T04:43:52.98+00:00

    @Sudipta Goswami - Thanks for the question and for using the MS Q&A platform.

    If the same code works fine on DBR 14.3 and both runtimes use the same cluster configuration, it is possible that changes in the newer versions of DBR are causing this issue.

    To further troubleshoot this issue, I would recommend the following steps:

    • Check the release notes: Review the release notes for DBR 15.2 and DBR 15.3 to see if any changes could be causing this issue. You can find the release notes in the Azure Databricks documentation.
    • Check the performance tuning: See whether any performance tuning changes need to be made for DBR 15.2 and DBR 15.3. The performance tuning recommendations are also in the Azure Databricks documentation.
    • Check the data: Look for changes in the data that could be causing this issue. You can use the Delta Lake table history to see recent changes to the table (see the sketch after this list).
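    For the table history check, a minimal sketch using the table name from the question:

    # Show recent commits on the table; operationMetrics includes
    # rows/bytes written per append, useful for comparing runs
    history = spark.sql("DESCRIBE HISTORY ext_IOT_event_clust")
    history.select("version", "timestamp", "operation", "operationMetrics").show(truncate=False)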

    If you are still unable to identify the issue, you may open a support ticket for further assistance.

    Hope this helps. Do let us know if you have any further queries.


    If this answers your query, do click Accept Answer and Yes for "Was this answer helpful". And if you have any further queries, do let us know.
