How to use base parameter within a Synapse Notebook Pipeline activity?

Meghan Munagala 0 Reputation points
2024-06-06T19:32:57.98+00:00

I'm trying to use a base parameter (parameter_test) that I have configured for a notebook activity and reference that parameter within the notebook itself. I'm using PySpark and need to pass this parameter in and assign it to a variable within my notebook, as shown here:

variable = parameter_test

The issue is that I cannot find the syntax for doing this. I know for a fact that the notebook activity is reading in the parameter value correctly, because it appears in the input JSON for the activity; I just need to figure out how to reference it.

Azure Synapse Analytics
An Azure analytics service that brings together data integration, enterprise data warehousing, and big data analytics. Previously known as Azure SQL Data Warehouse.

1 answer

  1. Smaran Thoomu 12,620 Reputation points Microsoft Vendor
    2024-06-07T05:29:44.2066667+00:00

    Hi @Meghan Munagala

    Thanks for the question and for using the MS Q&A platform.

    To use a base parameter within a Synapse Notebook pipeline activity, note that Synapse notebooks (unlike Azure Databricks) do not expose dbutils.widgets. Instead, base parameters are passed in through a parameters cell:

    1. In your notebook, add a cell that defines the parameter with a default value:

        parameter_test = "default_value"

    2. Mark that cell as the parameters cell: open the ... (more commands) menu on the cell and select Toggle parameter cell.
    3. In the pipeline's Notebook activity, add a base parameter whose name matches the variable (parameter_test). At run time, the pipeline injects a new cell after the parameters cell that reassigns parameter_test to the value you passed in, overriding the default.

    After that, parameter_test behaves like any other Python variable in your notebook. Here's an example of how you can use it in your notebook code:

     
    # Parameters cell: toggle "parameter cell" on this cell in Synapse Studio.
    # The pipeline's base parameter overrides this default at run time.
    parameter_test = "default/path/to/file.csv"

    # In a following cell, use the parameter value in your code
    df = spark.read.format("csv").option("header", "true").load(parameter_test)

    # Show the resulting dataframe
    display(df)
    

    In this example, the parameters cell defines parameter_test with a default value. When the pipeline runs, the base parameter value you configured replaces that default, and the spark.read.format("csv").option("header", "true").load(parameter_test) line then loads the CSV file from the path you passed in.
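
    For reference, this is roughly what the injected cell looks like at run time. The path below is a made-up placeholder for whatever value your base parameter actually carries:

    # Cell generated by the pipeline at run time (illustrative; Synapse creates it for you)
    parameter_test = "abfss://data@yourstorageaccount.dfs.core.windows.net/input/sample.csv"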

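    As a side note, if you want to exercise the same parameter handling without running the whole pipeline, you can call the notebook from another Synapse notebook with mssparkutils.notebook.run, which overrides the parameters cell in the same way. A minimal sketch, where the notebook name and path are placeholders:

    # Run the notebook with a 90-second timeout, overriding "parameter_test".
    # "ParameterizedNotebook" and the abfss path are placeholders.
    mssparkutils.notebook.run(
        "ParameterizedNotebook",
        90,
        {"parameter_test": "abfss://data@yourstorageaccount.dfs.core.windows.net/input/sample.csv"}
    )
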
    I hope this helps! Let me know if you have any further questions.
