Hello Rakesh Kumar,
Thanks for confirming this.
After going through the forum thread linked below, the issue could be due to the way Spark handles data partitioning. When writing a DataFrame, Spark creates one output file per partition. If some partitions are empty, Spark still creates a file for each of them, which results in empty output files.
Can you try repartitioning your DataFrame before writing it and see if that resolves the empty files?
for path, dataframe in dataframes_dict.items():
    # Collapse each DataFrame to a single partition so exactly one CSV file is written
    dataframe = dataframe.repartition(1)
    dataframe.write.mode("overwrite").option("header", "true").csv(path)

(Note: the .format("com.databricks.spark.csv") call in your snippet is the legacy external CSV package; since Spark 2.0, .csv(path) uses the built-in CSV writer, so the format call is redundant.)
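If some of the DataFrames in your dictionary can be completely empty, another option is to skip writing them altogether. Here is a minimal sketch, assuming the same dataframes_dict as above; DataFrame.isEmpty() requires Spark 3.3+, and on older versions dataframe.rdd.isEmpty() behaves the same way:

for path, dataframe in dataframes_dict.items():
    # Skip DataFrames with no rows so no output file is created at all
    if dataframe.isEmpty():
        continue
    # Collapse to a single partition so exactly one CSV file is produced
    dataframe.repartition(1).write.mode("overwrite").option("header", "true").csv(path)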
https://stackoverflow.com/questions/46436077/how-to-avoid-empty-files-while-writing-parquet-files