Accessing dataframe created in Scala from Python command

Dimitri B 66 Reputation points
2020-06-09T00:31:27.66+00:00

Is there a way to create a Spark dataframe in Scala command, and then access it in Python, without explicitly writing it to disk and re-reading?

In Databricks I can run dfFoo.createOrReplaceTempView("temp_df_foo") in Scala and then spark.read.table('temp_df_foo') in Python, and Databricks does all the work in the background.

Is something similar possible in Synapse?
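For illustration, the pattern I am after would look roughly like the following pair of notebook cells (the `%%spark` and `%%pyspark` cell magics are how Synapse notebooks switch languages; the DataFrame contents and view name here are made up):

```
%%spark
// Scala cell: build a DataFrame and register it as a session-scoped temp view
val dfFoo = Seq((1, "a"), (2, "b")).toDF("id", "val")
dfFoo.createOrReplaceTempView("temp_df_foo")
```

```
%%pyspark
# Python cell: read the view back through the shared session catalog
df_foo = spark.read.table("temp_df_foo")
df_foo.show()
```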

Tags: Azure Synapse Analytics, Azure Databricks

Accepted answer
  1. PRADEEPCHEEKATLA 90,226 Reputation points
    2020-06-09T06:29:57.323+00:00

    @DimitriB-1079 Welcome to the Microsoft Q&A platform.

You can create an Apache Spark pool in Azure Synapse Analytics and run the same queries that you run in Azure Databricks.

[Attached screenshot: 9511-synapse-sparkreadtable.jpg]

    Reference: Quickstart: Create an Apache Spark pool (preview) in Azure Synapse Analytics using web tools.

Hope this helps. Do let us know if you have any further queries.


Do click "Accept Answer" and upvote the post that helps you; this can be beneficial to other community members.

    1 person found this answer helpful.

1 additional answer

  1. Euan Garden 136 Reputation points
    2020-06-09T03:08:44.45+00:00

The exact same code should work in Synapse Spark.

    1 person found this answer helpful.
