
Data format options

Azure Databricks has built-in keyword bindings for all of the data formats natively supported by Apache Spark. Azure Databricks uses Delta Lake as the default protocol for reading and writing data and tables, whereas Apache Spark uses Parquet.

These articles provide an overview of many of the options and configurations available when you query data on Azure Databricks.

The following data formats have built-in keyword configurations in Apache Spark DataFrames and SQL:

  • Delta Lake
  • Parquet
  • ORC
  • JSON
  • CSV
  • Avro
  • Text
  • Binary file

Azure Databricks also provides a custom keyword for loading MLflow experiments.

Data formats with special considerations

Some data formats require additional configuration or special considerations for use:

  • Databricks recommends loading images as binary data.
  • Azure Databricks can directly read compressed files in many file formats. You can also unzip compressed files on Azure Databricks if necessary.

For more information about Apache Spark data sources, see Generic Load/Save Functions and Generic File Source Options.