databricks - save to .parquet

arkiboys 9,666 Reputation points

Using PySpark, I run a select and then save the result to .parquet.
The problem is that it saves the .parquet file along with other files such as _committed and _SUCCESS, etc.
How can I change the PySpark code to save only the .parquet file and no other files?

df = spark.sql('select * from viewName limit 100')
df.write.parquet('dbfs:/mnt/temp/foldername', mode='overwrite')
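For context: Spark always writes a *directory* of part files plus marker files (`_SUCCESS`, `_committed_*`), and the writer API has no option to emit a single bare file. The usual workaround is to coalesce to one partition, write to a temporary directory, move the lone `part-*.parquet` file out, and delete the directory. A minimal sketch of that move-and-clean step, assuming local filesystem paths (on Databricks you would use `dbutils.fs.ls`, `dbutils.fs.cp`, and `dbutils.fs.rm` instead; the function name is illustrative):

```python
from pathlib import Path
import shutil

def extract_single_parquet(tmp_dir: str, final_path: str) -> None:
    """Move the single part-*.parquet file out of a Spark output
    directory and delete the directory (with _SUCCESS etc.)."""
    tmp = Path(tmp_dir)
    # Spark names its data files like part-00000-<uuid>.snappy.parquet;
    # the marker files (_SUCCESS, _committed_*) have no .parquet suffix.
    part_files = [p for p in tmp.iterdir() if p.suffix == ".parquet"]
    if len(part_files) != 1:
        raise ValueError(f"expected one part file, found {len(part_files)}")
    shutil.move(str(part_files[0]), final_path)
    shutil.rmtree(tmp_dir)
```

Before calling it you would write with a single partition, e.g. `df.coalesce(1).write.parquet(tmp_dir, mode='overwrite')`, so the directory contains exactly one data file.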

Azure Synapse Analytics
Azure Databricks

Accepted answer
  1. Costa, Ana 76 Reputation points

    I have the same problem.
    For anybody looking for a quick fix in the meantime, I use this after creating a file with Python:

    nameFile = [x.name for x in dbutils.fs.ls(f"{path}{fileName}.parquet") if x.name.split('.')[-1] == 'parquet'][0]
    # copy the single part file out, then remove the Spark output folder
    # (the destination name below is illustrative)
    dbutils.fs.cp(f"{path}{fileName}.parquet/{nameFile}", f"{path}{fileName}_single.parquet")
    dbutils.fs.rm(f"{path}{fileName}.parquet", recurse=True)

