Thank you for reaching out to the Azure community forum with your query.
As far as I know, it is not possible to update a column in a dedicated SQL pool table directly through the spark.sql API in a Synapse workspace notebook, because the Spark SQL engine cannot issue T-SQL UPDATE statements against the dedicated pool.
As a workaround, you can use PySpark to read the table into a Spark DataFrame via the synapsesql API, update the column in the DataFrame, and then write the updated data back to the table. Here's an example code snippet that you can use:
```python
from pyspark.sql import SparkSession
from pyspark.sql.functions import lit

# Create a SparkSession
spark = SparkSession.builder.appName("UpdateTable").getOrCreate()

# Read data from the dedicated SQL pool table into a Spark DataFrame
df = spark.read.synapsesql("<database>.<schema>.<table>")

# Update the column in the DataFrame
# (wrap a literal value in lit(); a bare Python value is not a Column)
df = df.withColumn("<column_name>", lit(<new_value>))

# Write the updated data back to the table
df.write.synapsesql("<database>.<schema>.<table>", mode="overwrite")
```
Replace `<database>`, `<schema>`, `<table>`, `<column_name>`, and `<new_value>` with the actual database name, schema name, table name, column name, and new value, respectively.
However, please note that this solution may depend on your specific scenario and may not work in all cases.
Reference: Read and write data in Azure Synapse Analytics using Apache Spark
Hope this helps. If this answers your query, do click Accept Answer and Yes for "Was this answer helpful". And if you have any further queries, do let us know.