Edit Apache Spark application job definition through Azure DevOps

Geetanjali Puri

We have created an Apache Spark job definition in Azure Synapse and want to know if we can edit its configuration using an Azure DevOps pipeline.


1 answer

  1. Samara Soucy - MSFT

    Yes, though the approach depends on the scope of your ADO deployment.

    If you just want to update a few items while leaving the rest outside of the pipeline, you can call the Azure Synapse REST API to update the Spark job definition.

    PUT exampleWorkspace.dev.azuresynapse.net/sparkJobDefinitions/exampleSparkJobDefinition?api-version=2019-06-01-preview

    {
      "properties": {
        "description": "A sample spark job definition",
        "targetBigDataPool": {
          "referenceName": "exampleBigDataPool",
          "type": "BigDataPoolReference"
        },
        "requiredSparkVersion": "2.4",
        "jobProperties": {
          "name": "exampleSparkJobDefinition",
          "file": "abfss://test@test.dfs.core.windows.net/artefacts/sample.jar",
          "className": "dev.test.tools.sample.Main",
          "conf": {},
          "args": [],
          "jars": [],
          "pyFiles": [],
          "files": [],
          "archives": [],
          "driverMemory": "28g",
          "driverCores": 4,
          "executorMemory": "28g",
          "executorCores": 4,
          "numExecutors": 2
        }
      }
    }
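    To issue that call from a script (for example inside a pipeline task), a minimal Python sketch could look like the one below. The workspace name, job name, and token here are placeholders, not values from your environment; in practice the bearer token would come from something like `az account get-access-token --resource https://dev.azuresynapse.net`.

    ```python
    # Sketch only: builds the PUT request for the Synapse data-plane API.
    # WORKSPACE, JOB_NAME, and the token passed in are placeholder assumptions.
    import json
    import urllib.request

    WORKSPACE = "exampleWorkspace"          # your Synapse workspace name
    JOB_NAME = "exampleSparkJobDefinition"  # your Spark job definition name
    API_VERSION = "2019-06-01-preview"

    def build_update_request(properties: dict, token: str) -> urllib.request.Request:
        """Build the PUT request that replaces the job definition's properties."""
        url = (f"https://{WORKSPACE}.dev.azuresynapse.net/sparkJobDefinitions/"
               f"{JOB_NAME}?api-version={API_VERSION}")
        body = json.dumps({"properties": properties}).encode("utf-8")
        return urllib.request.Request(
            url,
            data=body,
            method="PUT",
            headers={
                "Content-Type": "application/json",
                "Authorization": f"Bearer {token}",
            },
        )

    # Send with urllib.request.urlopen(req) once a real token is supplied.
    req = build_update_request({"description": "A sample spark job definition"},
                               "<placeholder-token>")
    ```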

    Alternatively, if you want to fully integrate your Synapse workspace with ADO, Synapse has CI/CD capability built in. The current instructions and requirements for this are in the docs: https://learn.microsoft.com/en-us/azure/synapse-analytics/cicd/continuous-integration-deployment
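    If you take the REST route, a pipeline step along these lines could make the call via the Azure CLI; `my-synapse-connection` and `body.json` are placeholder names for your service connection and request-body file, not things Synapse provides.

    ```yaml
    # Hypothetical Azure Pipelines step; adjust names and paths to your setup.
    steps:
      - task: AzureCLI@2
        displayName: Update Spark job definition
        inputs:
          azureSubscription: my-synapse-connection  # service connection with workspace access
          scriptType: bash
          scriptLocation: inlineScript
          inlineScript: |
            az rest --method put \
              --url "https://exampleWorkspace.dev.azuresynapse.net/sparkJobDefinitions/exampleSparkJobDefinition?api-version=2019-06-01-preview" \
              --resource "https://dev.azuresynapse.net" \
              --body @body.json
    ```

    The identity behind the service connection needs a Synapse RBAC role on the workspace that allows editing artifacts.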