How can I access Apache Spark pool custom properties with Python in Azure Synapse notebooks?

Riccardo Perelli 25 Reputation points
2024-03-13T15:57:43.3433333+00:00

Hi guys, hope this finds you well.

I'm currently working with Azure Synapse Analytics. I created custom properties on my Apache Spark pool, as you can see in the first screenshot: there is a custom property called "test_property".

My question is: how can I access this property from code?

Right now I'm trying to read it in a notebook, as shown in the second screenshot, but "test_property" does not appear among the properties returned.

Thank you for your help.

Azure Synapse Analytics
An Azure analytics service that brings together data integration, enterprise data warehousing, and big data analytics. Previously known as Azure SQL Data Warehouse.

Accepted answer
  1. PRADEEPCHEEKATLA 90,641 Reputation points Moderator
    2024-03-15T08:10:07.5066667+00:00

    @Riccardo Perelli - Thanks for the question and using MS Q&A platform.

    From the description, it looks like the prefix spark. is missing, which could be the reason the custom properties are not being returned.

    When adding custom properties in Synapse, you need to add the prefix: spark.<custom_property_name>

    Note: Make sure you have attached your spark configuration to the Spark pool and have published the changes.


    After publishing the changes, when you start a new Spark session you can run spark.conf.get("spark.<property_name>") to get the value.

    To read the current value of a Spark config property, call spark.conf.get with the property name; an optional second argument supplies a default when the property is not set.

    %python
    
    spark.conf.get("spark.<name-of-property>")
    

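The steps above can be sketched in a notebook cell. The conf_key helper below is illustrative (my name, not part of Synapse or Spark), and spark refers to the session object a Synapse notebook provides:

    ```python
    def conf_key(name: str) -> str:
        """Return the property name with the required 'spark.' prefix."""
        return name if name.startswith("spark.") else "spark." + name

    print(conf_key("test_property"))        # spark.test_property
    print(conf_key("spark.test_property"))  # spark.test_property

    # In a live Synapse session, read the value with:
    #   value = spark.conf.get(conf_key("test_property"))
    # spark.conf.get also accepts an optional second argument used as a
    # default when the property is not set, which avoids an exception.
    # To inspect everything set in the current session, list all pairs:
    #   spark.sparkContext.getConf().getAll()
    ```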

    Hope this helps. Do let us know if you have any further queries.


    If this answers your query, do click Accept Answer and Yes for was this answer helpful.

    1 person found this answer helpful.

1 additional answer

Sort by: Most helpful
  1. Karni Gupta (MSFT) 95 Reputation points Microsoft Employee
    2024-03-14T13:47:54.2633333+00:00

    Hi Riccardo,

    For notebooks, the %%configure magic command is used to set the session configuration. For jobs, you will have to import and initialize the SparkSession.
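    For reference, a %%configure cell takes a JSON body. Below is a hedged sketch using the property name from the question (the value is a placeholder, and the -f flag forces the session to restart with the new configuration):

    ```
    %%configure -f
    {
        "conf": {
            "spark.test_property": "my_value"
        }
    }
    ```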

    I'm listing a few documentation links below that provide details and examples. Please refer to them and let me know if you need any further assistance.

    1. https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-development-using-notebooks
    2. https://techcommunity.microsoft.com/t5/azure-synapse-analytics-blog/how-to-set-spark-pyspark-custom-configs-in-synapse-workspace/ba-p/2114434
    3. https://blog.devgenius.io/spark-configurations-96eab8775e7

    If you find my answer helpful, please consider marking it as the ‘Answer’ and giving it an ‘Upvote’ using the thumbs-up option. This can also benefit other community members who may have the same question.

    Thanks,

    Karni.G

