@Gulhasan.Siddiquee You can use repartition or coalesce to write it back to a single file; sample code is below. Just keep in mind that when you write everything as a single file, you lose the ability to do parallel reads. If the resulting file is small this should be fine, but if it is too big then Spark cannot read it in parallel, which can slow down read queries.
# Read from a folder containing multiple files with the same schema
df = spark.read.parquet("blob_source_address")
# Write it back as a single file (repartition(1) shuffles everything into one partition)
df.repartition(1).write.parquet("blob_destination_address")
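Since the answer also mentions coalesce, here is a minimal sketch of the same write using coalesce(1) instead; it avoids the full shuffle that repartition(1) triggers and reuses the same placeholder destination path from above. The mode("overwrite") call is optional and only shown here as an assumption that you may want to replace an existing output folder.
# Alternative: coalesce(1) merges existing partitions without a full shuffle
df.coalesce(1).write.mode("overwrite").parquet("blob_destination_address")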
Mark as answer if this helps you