Yes, though the approach depends on the scope of your ADO deployment.
If you only want to update a few items and leave the rest outside of the pipeline, you can call the Azure Synapse REST API to update the Spark job definition:
PUT https://exampleWorkspace.dev.azuresynapse.net/sparkJobDefinitions/exampleSparkJobDefinition?api-version=2019-06-01-preview
Body:
{
  "properties": {
    "description": "A sample spark job definition",
    "targetBigDataPool": {
      "referenceName": "exampleBigDataPool",
      "type": "BigDataPoolReference"
    },
    "requiredSparkVersion": "2.4",
    "jobProperties": {
      "name": "exampleSparkJobDefinition",
      "file": "abfss://test@test.dfs.core.windows.net/artefacts/sample.jar",
      "className": "dev.test.tools.sample.Main",
      "conf": {},
      "args": [
        "exampleArg"
      ],
      "jars": [],
      "pyFiles": [],
      "files": [],
      "archives": [],
      "driverMemory": "28g",
      "driverCores": 4,
      "executorMemory": "28g",
      "executorCores": 4,
      "numExecutors": 2
    }
  }
}
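
If you're driving this from a pipeline script rather than by hand, here's a minimal sketch in Python, assuming the azure-identity and requests packages are installed and the pipeline identity has Synapse RBAC rights on the workspace. The workspace and job names are the example values from the request above; substitute your own.

import requests
from azure.identity import DefaultAzureCredential

# Assumed example values, matching the request above.
workspace = "exampleWorkspace"
job_name = "exampleSparkJobDefinition"

# DefaultAzureCredential picks up whatever identity the pipeline exposes
# (service principal environment variables, managed identity, az login, ...).
credential = DefaultAzureCredential()
token = credential.get_token("https://dev.azuresynapse.net/.default").token

url = (
    f"https://{workspace}.dev.azuresynapse.net/"
    f"sparkJobDefinitions/{job_name}?api-version=2019-06-01-preview"
)

body = {
    "properties": {
        "description": "A sample spark job definition",
        "targetBigDataPool": {
            "referenceName": "exampleBigDataPool",
            "type": "BigDataPoolReference",
        },
        "requiredSparkVersion": "2.4",
        "jobProperties": {
            "name": job_name,
            "file": "abfss://test@test.dfs.core.windows.net/artefacts/sample.jar",
            "className": "dev.test.tools.sample.Main",
            "conf": {},
            "args": ["exampleArg"],
            "jars": [],
            "pyFiles": [],
            "files": [],
            "archives": [],
            "driverMemory": "28g",
            "driverCores": 4,
            "executorMemory": "28g",
            "executorCores": 4,
            "numExecutors": 2,
        },
    }
}

response = requests.put(url, json=body, headers={"Authorization": f"Bearer {token}"})
response.raise_for_status()  # fail the pipeline step on a non-2xx response
print(response.json())

The same call works from any HTTP client; the only Synapse-specific parts are the token scope and the URL format.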
Alternatively, if you want to integrate your Synapse workspace fully with ADO, Synapse has CI/CD capability built in. The current instructions and requirements are in the docs: https://learn.microsoft.com/en-us/azure/synapse-analytics/cicd/continuous-integration-deployment