I am getting an error while displaying a DataFrame in Azure Databricks

Pawan Singh 0 Reputation points
2024-01-20T04:50:28.2+00:00

df.show() works, but display(df) does not.

df = spark.read.csv("abfss://demo@anshadls2001.dfs.core.windows.net/data/", header=True)
display(df)

(1) Spark Jobs Failed to store the result. Try rerunning the command.Failed to upload command result to DBFS. Error message: PUT request to create file error HttpResponseProxy{HTTP/1.1 404 The specified filesystem does not exist. [Content-Length: 175, Content-Type: application/json;charset=utf-8, Server: Windows-Azure-HDFS/1.0 Microsoft-HTTPAPI/2.0, x-ms-error-code: FilesystemNotFound, x-ms-request-id: 38b330ef-701f-0068-435b-4b399d000000, x-ms-version: 2021-04-10, Date: Sat, 20 Jan 2024 04:43:58 GMT] ResponseEntityProxy{[Content-Type: application/json;charset=utf-8,Content-Length: 175,Chunked: false]}}


2 answers

  1. PRADEEPCHEEKATLA-MSFT 77,901 Reputation points Microsoft Employee
    2024-01-21T22:57:58.3266667+00:00

    @Pawan Singh - Thanks for the question and using MS Q&A platform.

    The error message suggests that the specified filesystem does not exist. This could be due to a few reasons:

    • The path to the file is incorrect. Double-check that the path is correct and that the file exists in the specified location.
    • The file system is not mounted. If you are using Azure Data Lake Storage Gen2, you can mount the file system before accessing the files, using the dbutils.fs.mount() method. Here's an example:
    configs = {
      "fs.azure.account.auth.type": "OAuth",
      "fs.azure.account.oauth.provider.type": "org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider",
      "fs.azure.account.oauth2.client.id": "<application-id>",
      "fs.azure.account.oauth2.client.secret": "<client-secret>",
      "fs.azure.account.oauth2.client.endpoint": "https://login.microsoftonline.com/<tenant-id>/oauth2/token"
    }

    dbutils.fs.mount(
      source="abfss://<file-system-name>@<storage-account-name>.dfs.core.windows.net/",
      mount_point="/mnt/<mount-name>",
      extra_configs=configs
    )
    
    
    

    Replace the placeholders with your own values. Once the file system is mounted, you can access the files using the mount point.

    • The file system is not accessible. Check that you have the necessary permissions to access the file system. You may need to configure access control lists (ACLs) or role-based access control (RBAC) to grant access.
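
    As a quick sanity check on path format (a common cause of FilesystemNotFound errors), note that in an abfss:// URI the container (filesystem) name comes before the @ and the storage account name after it, while mounted data is read through /mnt/... paths. A minimal illustrative sketch; these helper names are hypothetical, not part of the Databricks API:

    # Illustrative helpers (hypothetical, not a Databricks API): build the two
    # path styles discussed above. In an abfss:// URI the container (filesystem)
    # name comes BEFORE the '@' and the storage account name after it.

    def abfss_uri(container: str, account: str, path: str = "") -> str:
        """Direct ADLS Gen2 URI: abfss://<container>@<account>.dfs.core.windows.net/<path>"""
        return f"abfss://{container}@{account}.dfs.core.windows.net/{path.lstrip('/')}"

    def mount_path(mount_name: str, path: str = "") -> str:
        """Workspace-relative path for data under a Databricks mount point."""
        return f"/mnt/{mount_name}/{path.lstrip('/')}"

    # For the storage in the question this gives, e.g.:
    #   abfss_uri("demo", "anshadls2001", "data/")  -> "abfss://demo@anshadls2001.dfs.core.windows.net/data/"
    #   mount_path("demo", "data/")                 -> "/mnt/demo/data/"
    # and in a notebook: df = spark.read.csv(mount_path("demo", "data/"), header=True)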

    Once you have resolved the issue, you should be able to display the DataFrame using the display() method.

    For more details, refer to this MS Q&A thread addressing a similar issue: https://learn.microsoft.com/en-us/answers/questions/1341813/i-am-unable-to-mount-containers-using-databricks-a

    Hope this helps. Do let us know if you have any further queries.


    If this answers your query, please click "Accept Answer" and "Yes" for "Was this answer helpful?". And if you have any further queries, do let us know.


  2. Mojtaba Taslimitehrani 0 Reputation points
    2024-04-26T21:25:36.8266667+00:00

    I had the same problem. For me, the root cause was that I had chosen the Standard tier when creating the Azure Databricks resource, which does not support RBAC. I created another Databricks instance and chose the Premium tier. This solved the problem, and display(df) and df.write now work properly.

