Hello @KEERTHANA JAYADEVAN
You can use the spark.read.parquet() method to read the Parquet file from a mounted blob container in Azure Databricks.
Here is an example:
# Mount the blob container (replace the storage account name, container, and key with your own)
dbutils.fs.mount(
    source = "wasbs://******@blobstorageaccount.blob.core.windows.net/",
    mount_point = "/mnt/nyctrip",
    extra_configs = {"fs.azure.account.key.blobstorageaccount.blob.core.windows.net": "key"}
)

# Define the path to your Parquet file
parquet_file_path = "/mnt/nyctrip/NYCTripSmall.parquet"

# Read the Parquet file into a DataFrame
df = spark.read.parquet(parquet_file_path)

# Show the DataFrame
df.show()
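If you prefer not to mount the container, a common alternative is to set the storage account key in the Spark session configuration and read the file through its wasbs:// URL directly. This is a sketch using the same placeholder account name, container, and key as above; substitute your own values:

```python
# Alternative sketch: read directly from blob storage without mounting.
# The account name, container (******), and "key" are placeholders.
spark.conf.set(
    "fs.azure.account.key.blobstorageaccount.blob.core.windows.net",
    "key"
)

df = spark.read.parquet(
    "wasbs://******@blobstorageaccount.blob.core.windows.net/NYCTripSmall.parquet"
)
df.show()
```

Note that this configuration applies only to the current Spark session, whereas a mount point remains available to other clusters in the workspace until it is unmounted.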
I hope this answers your question.
If this answers your question, please consider accepting it by clicking Accept answer and up-voting, as this helps the community find answers to similar questions.