
philmarius-new asked · KranthiPakala-MSFT commented

How to monitor REST API "run submit" job on Azure Databricks?

Reposted here for visibility.

We're moving away from notebooks and putting our PySpark workflows into .py files that are uploaded to DBFS via CI/CD pipelines and then run via the "run submit" API endpoint. However, we're struggling to monitor these jobs for failures.
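For reference, the submission looks roughly like this (workspace URL, token, cluster spec, and DBFS path are placeholders, not our actual values):

```python
import requests

DATABRICKS_HOST = "https://<workspace>.azuredatabricks.net"  # placeholder
TOKEN = "<personal-access-token>"  # placeholder

# One-time run of a .py file on a fresh cluster via the Runs Submit endpoint.
payload = {
    "run_name": "etl-example",
    "new_cluster": {
        "spark_version": "7.3.x-scala2.12",
        "node_type_id": "Standard_DS3_v2",
        "num_workers": 2,
    },
    "spark_python_task": {"python_file": "dbfs:/pipelines/etl_job.py"},
}

resp = requests.post(
    f"{DATABRICKS_HOST}/api/2.0/jobs/runs/submit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json=payload,
)
resp.raise_for_status()
run_id = resp.json()["run_id"]  # the only handle we get back for tracking the run
print(f"Submitted run {run_id}")
```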

We've set up the spark-monitoring scripts on our Azure Databricks instance and they're successfully feeding logs back to Azure Monitor. However, when running deliberately failing jobs, we cannot tell from the logs whether the job has failed or not, which makes monitoring our ETL workflows problematic.
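To be clear about the test setup: the "deliberately failing" job is nothing more than a script that raises, along these lines (a minimal sketch, not our actual workflow):

```python
# deliberately_failing_job.py -- uploaded to DBFS purely to exercise failure reporting
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
spark.range(10).count()  # do a little real work so the run gets past startup

raise RuntimeError("intentional failure to test monitoring")
```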

Has anyone done something similar before, and how did they do it?



azure-databricks

Hi @philmarius-new,

Thanks for reaching out. We are checking with our internal team to get assistance on this query and will keep you posted as soon as we have an update.

Thank you for your patience.


Hi Kranthi,

We're working on an alternative solution where we use Durable Azure Functions to continuously poll the API to retrieve the state of the run.
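Roughly, the orchestrator looks like the sketch below. The "CheckRunState" activity name is our own (it just wraps GET /api/2.0/jobs/runs/get), and the workspace URL and token are placeholders:

```python
from datetime import timedelta

import azure.durable_functions as df
import requests

# Orchestrator: poll until the run reaches a terminal life-cycle state.
def orchestrator_function(context: df.DurableOrchestrationContext):
    run_id = context.get_input()

    while True:
        state = yield context.call_activity("CheckRunState", run_id)

        # life_cycle_state ends in TERMINATED / SKIPPED / INTERNAL_ERROR;
        # result_state (SUCCESS / FAILED / ...) is only set once terminated.
        if state["life_cycle_state"] in ("TERMINATED", "SKIPPED", "INTERNAL_ERROR"):
            return state.get("result_state", state["life_cycle_state"])

        # Durable timer (not time.sleep) so the orchestration replays safely.
        yield context.create_timer(context.current_utc_datetime + timedelta(seconds=30))

main = df.Orchestrator.create(orchestrator_function)


# Activity function body ("CheckRunState"): fetch the run's state object.
def check_run_state(run_id: int) -> dict:
    resp = requests.get(
        "https://<workspace>.azuredatabricks.net/api/2.0/jobs/runs/get",  # placeholder
        headers={"Authorization": "Bearer <personal-access-token>"},  # placeholder
        params={"run_id": run_id},
    )
    resp.raise_for_status()
    return resp.json()["state"]
```

If result_state comes back as FAILED we can raise an alert from the function rather than trying to infer failure from the logs.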


Hi @philmarius-new,

Thanks for the update. Do let us know how it goes so that it can be beneficial to other community members reading this thread.

Also curious to know the reason for switching from Python notebooks to .py files? We'd appreciate it if you could share the specific reason for this change :)

Thank you


0 Answers