Thanks for the question and using MS Q&A platform.
To use a base parameter in a Synapse Notebook activity, define a variable with a default value in a notebook cell and mark that cell as a parameters cell (open the cell's "..." menu in Synapse Studio and select "Toggle parameter cell"). When the pipeline runs, any base parameter whose name matches a variable in the parameters cell overrides that variable's default. Note that dbutils.widgets.get() is Databricks syntax and is not available in Synapse notebooks. Here's an example of how you can use a base parameter in your notebook code:
# Parameters cell -- toggle "parameter cell" on this cell in Synapse Studio.
# The default below is a placeholder; the pipeline's base parameter overrides it.
parameter_test = "abfss://..."

from pyspark.sql.functions import *

# Use the parameter value as the path of the CSV file to load
df = spark.read.format("csv").option("header", "true").load(parameter_test)

# Show the resulting dataframe
display(df)
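On the pipeline side, the Notebook activity's base parameter must use the same name as the variable in the parameters cell. A sketch of what the activity definition might look like in the pipeline JSON (the notebook name and storage path here are illustrative, not taken from your pipeline):

```json
{
    "name": "RunNotebook",
    "type": "SynapseNotebook",
    "typeProperties": {
        "notebook": { "referenceName": "my_notebook", "type": "NotebookReference" },
        "parameters": {
            "parameter_test": {
                "value": "abfss://data@mystorage.dfs.core.windows.net/input/sales.csv",
                "type": "string"
            }
        }
    }
}
```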
In this example, the "parameter_test" parameter specifies the path to a CSV file. The default defined in the parameters cell is used when you run the notebook interactively; when the notebook is triggered from a pipeline, Synapse injects the base parameter value after the parameters cell, and the spark.read.format("csv").option("header", "true").load(parameter_test)
line loads the CSV file from that path.
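Conceptually, the override works like sequential assignment: Synapse injects a new cell containing the pipeline's value right after the parameters cell, so the last assignment wins. A minimal sketch of that behavior in plain Python (both values are made up for illustration):

```python
# Parameters cell: the notebook's default value (placeholder)
parameter_test = "default.csv"

# Cell injected by the pipeline run, overriding the default
parameter_test = "sales_2023.csv"

# Later cells see the pipeline's value
print(parameter_test)
```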
I hope this helps! Let me know if you have any further questions.