Partitioning of delta files in Data Lake

pankaj mahra 0 Reputation points

I am writing data into the data lake as delta files.

The delta files in the data lake are being stored in very small sizes, and the number of files is too large.

I don't have any column on which I can partition the data.

Can someone please suggest how I can partition the files for optimal usage without using a partition column?

Thanks in advance.

Azure Data Lake Storage
An Azure service that provides an enterprise-wide hyper-scale repository for big data analytic workloads and is integrated with Azure Blob Storage.
Azure Synapse Analytics
An Azure analytics service that brings together data integration, enterprise data warehousing, and big data analytics. Previously known as Azure SQL Data Warehouse.
Azure Databricks
An Apache Spark-based analytics platform optimized for Azure.

1 answer

  1. AnnuKumari-MSFT 29,586 Reputation points Microsoft Employee

    Hi pankaj mahra ,

    Welcome to Microsoft Q&A platform and thanks for posting your query here.

    As per my understanding, you are looking for suggestions on how to partition the files for optimal usage without using the partition column. Please let me know if that is not the case.

    Delta Lake supports partitioning data by one or more columns. Partitioning can improve query performance by reducing the amount of data that needs to be scanned, and it helps organize data into more manageable chunks. However, you mentioned that you do not have any column to partition the data on. In this case, you can use the 'repartition' or 'coalesce' functions to control the number of files written out.
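
    As a minimal sketch (assuming a local SparkSession and a synthetic dataset standing in for your data; on Databricks or Synapse a `spark` session already exists), `repartition` redistributes rows into a chosen number of evenly sized partitions with a full shuffle:

    ```python
    from pyspark.sql import SparkSession

    # Illustrative only: in Databricks or Synapse, `spark` is already provided.
    spark = SparkSession.builder.master("local[4]").appName("repartition-demo").getOrCreate()

    df = spark.range(0, 100_000)          # stand-in for your actual data
    evened = df.repartition(8)            # full shuffle into 8 evenly sized partitions
    print(evened.rdd.getNumPartitions())  # 8
    ```

    Each partition becomes one data file per write, so choosing the partition count directly controls the file count.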

    You can use the `coalesce` function to reduce the number of partitions, and therefore the number of output files, to a fixed number. Unlike `repartition`, `coalesce` merges existing partitions without a full shuffle, which makes it cheaper when you only want to reduce the file count.


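    For example, a minimal sketch (the dataset is synthetic and the commented-out output path is a placeholder; writing in `delta` format also assumes the delta-spark package is configured on the cluster):

    ```python
    from pyspark.sql import SparkSession

    # Illustrative only: in Databricks or Synapse, `spark` is already provided.
    spark = SparkSession.builder.master("local[4]").appName("coalesce-demo").getOrCreate()

    df = spark.range(0, 100_000)             # stand-in for your actual data
    compacted = df.coalesce(1)               # merge partitions without a shuffle
    print(compacted.rdd.getNumPartitions())  # 1 -> one data file per write

    # On a Delta-enabled cluster, the compacted frame is then written as usual;
    # "/mnt/datalake/my_table" is a placeholder path:
    # compacted.write.format("delta").mode("overwrite").save("/mnt/datalake/my_table")
    ```

    Note that `coalesce(1)` funnels all data through a single task, so it is best suited to small-to-medium datasets; for larger data, pick a partition count that yields files of roughly 100 MB to 1 GB.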
    Check the below resources for more details:

    Spark - Repartition Or Coalesce

    How to Write Dataframe as single file with specific name in PySpark

    I hope this helps. If so, kindly accept the answer by clicking on the 'Accept answer' button.