You can create and run a job using the Jobs UI, or developer tools such as the Databricks CLI or the REST API. Using the UI or API, you can repair and rerun a failed or canceled job. This article shows how to create, configure, and edit jobs using the Workflows workspace UI. For information about other tools, see the following:
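For example, the following is a minimal sketch of triggering and repairing a run with the Jobs REST API (version 2.1) using the Python requests library. The workspace URL, access token, job ID, and run handling shown here are placeholders, not values from this article.

```python
import requests

# Placeholder values: replace with your workspace URL and a valid token.
HOST = "https://adb-1234567890123456.7.azuredatabricks.net"
HEADERS = {"Authorization": "Bearer <personal-access-token>"}

# Trigger a new run of an existing job (hypothetical job ID).
run = requests.post(
    f"{HOST}/api/2.1/jobs/run-now",
    headers=HEADERS,
    json={"job_id": 123456789},
).json()
print("Started run:", run["run_id"])

# Repair a failed or canceled run by rerunning only the failed tasks.
requests.post(
    f"{HOST}/api/2.1/jobs/runs/repair",
    headers=HEADERS,
    json={"run_id": run["run_id"], "rerun_all_failed_tasks": True},
)
```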
Tip
To view a job as YAML, click the kebab menu to the left of Run now for the job and then click Switch to code version (YAML).
All jobs on Azure Databricks require the following:
This section describes the steps to create a new job with a notebook task and schedule with the workspace UI.
Jobs contain one or more tasks. You create a new job by configuring the first task for that job.
Note
Each task type has dynamic configuration options in the workspace UI. See Configure and edit Databricks tasks.
If your workspace is not enabled for serverless compute for jobs, you must select a Compute option. Databricks recommends always using jobs compute when configuring tasks.
A new job appears in the workspace jobs list with the default name New Job <date> <time>.
You can continue to add more tasks within the same job, if needed for your workflow.
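The same configuration can also be expressed outside the UI. The following sketch creates a job with two notebook tasks through the Jobs REST API; the job name, task keys, and notebook paths are hypothetical, and compute settings are omitted on the assumption that serverless compute for jobs is enabled.

```python
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder

# Create a job with two notebook tasks; the second task runs after the first.
resp = requests.post(
    f"{HOST}/api/2.1/jobs/create",
    headers=HEADERS,
    json={
        "name": "Nightly ETL",
        "tasks": [
            {
                "task_key": "ingest",
                "notebook_task": {"notebook_path": "/Workspace/Users/me@example.com/ingest"},
            },
            {
                "task_key": "transform",
                "depends_on": [{"task_key": "ingest"}],
                "notebook_task": {"notebook_path": "/Workspace/Users/me@example.com/transform"},
            },
        ],
    },
)
print("Created job:", resp.json()["job_id"])
```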
You can decide when your job runs. By default, it runs only when you start it manually, but you can also configure it to run automatically. You can create a trigger to run a job on a schedule or based on an event.
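A schedule can also be set programmatically. The sketch below adds a daily cron trigger to an existing job with the Jobs REST API; the job ID, cron expression, and time zone are placeholder values.

```python
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder

# Add a schedule so the job runs every day at 02:30 in the given time zone.
requests.post(
    f"{HOST}/api/2.1/jobs/update",
    headers=HEADERS,
    json={
        "job_id": 123456789,
        "new_settings": {
            "schedule": {
                "quartz_cron_expression": "0 30 2 * * ?",
                "timezone_id": "UTC",
                "pause_status": "UNPAUSED",
            }
        },
    },
)
```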
When configuring multiple tasks in jobs, you can use specialized tasks to control how the tasks run. See Control the flow of tasks within a Databricks job.
To edit an existing job with the workspace UI, do the following:
Use the jobs UI to do the following:
The side panel contains the Job details. You can change the job trigger, compute configuration, notifications, the maximum number of concurrent runs, configure duration thresholds, and add or change tags. You can also edit job permissions if job access control is enabled.
Parameters configured at the job level are passed to the job’s tasks that accept key-value parameters, including Python wheel files configured to accept keyword arguments. See Parameterize jobs.
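As a sketch of how job-level parameters appear in a job definition and how they can be overridden at run time, the following uses the Jobs REST API with hypothetical parameter names and values.

```python
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder

# Define job-level parameters with default values.
requests.post(
    f"{HOST}/api/2.1/jobs/update",
    headers=HEADERS,
    json={
        "job_id": 123456789,
        "new_settings": {
            "parameters": [
                {"name": "env", "default": "dev"},
                {"name": "catalog", "default": "main"},
            ]
        },
    },
)

# Override a default for a single run.
requests.post(
    f"{HOST}/api/2.1/jobs/run-now",
    headers=HEADERS,
    json={"job_id": 123456789, "job_parameters": {"env": "prod"}},
)
```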
To add labels or key-value attributes to your job, you can add tags when you edit the job. You can use tags to filter jobs in the Jobs list. For example, you can use a department tag to filter all jobs that belong to a specific department.
Note
Because job tags are not designed to store sensitive information such as personally identifiable information or passwords, Databricks recommends using tags for non-sensitive values only.
Tags also propagate to job clusters created when a job is run, allowing you to use tags with your existing cluster monitoring.
Click + Tag in the Job details side panel to add or edit tags. You can add the tag as a label or key-value pair. To add a label, enter the label in the Key field and leave the Value field empty.
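In the job's settings, tags are a simple key-value map, and a label is a key with an empty value. The following is a rough sketch via the Jobs REST API, with hypothetical tag names and a placeholder job ID.

```python
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder

# Add a key-value tag and a label (a key with an empty value) to a job.
requests.post(
    f"{HOST}/api/2.1/jobs/update",
    headers=HEADERS,
    json={
        "job_id": 123456789,
        "new_settings": {"tags": {"department": "finance", "nightly": ""}},
    },
)
```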
Important
This feature is in Public Preview.
If your workspace uses serverless budget policies to attribute serverless usage, you can select your job's serverless budget policy using the Budget policy setting in the Job details side panel. See Attribute usage with serverless budget policies.
To rename a job, go to the jobs UI and click the job name.
You can quickly create a new job by cloning an existing job. Cloning a job creates an identical copy of the job except for the job ID. To clone a job, do the following:
To delete a job, go to the job page, click the kebab menu next to the job name, and select Delete job from the drop-down menu.
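Both operations have API equivalents. A minimal sketch, assuming placeholder job IDs: cloning can be approximated by fetching a job's settings and creating a new job from them, and a job can be deleted by its ID.

```python
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder

# "Clone" a job by fetching its settings and creating a new job from them.
settings = requests.get(
    f"{HOST}/api/2.1/jobs/get",
    headers=HEADERS,
    params={"job_id": 123456789},
).json()["settings"]
settings["name"] = settings["name"] + " (clone)"
requests.post(f"{HOST}/api/2.1/jobs/create", headers=HEADERS, json=settings)

# Delete a job by its job ID.
requests.post(
    f"{HOST}/api/2.1/jobs/delete",
    headers=HEADERS,
    json={"job_id": 987654321},
)
```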
If your job contains any tasks that support using a remote Git provider, the jobs UI contains a Git field and the option to add or edit Git settings.
You can configure the following task types to use a remote Git repository:
All tasks in a job must reference the same commit in the remote repository. You must specify only one of the following for a job that uses a remote repository:
Branch. For example, main.
Tag. For example, release-1.0.0.
Commit. For example, e0056d01.
When a job run begins, Databricks takes a snapshot commit of the remote repository to ensure that the entire job runs against the same version of code.
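In a job definition, the remote repository appears as a git_source block, and each task that uses it sets its source to GIT. The sketch below uses hypothetical values for the repository URL, provider, branch, and notebook path.

```python
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder

# Create a job whose notebook task runs code from a remote Git repository.
# Exactly one of git_branch, git_tag, or git_commit is set.
requests.post(
    f"{HOST}/api/2.1/jobs/create",
    headers=HEADERS,
    json={
        "name": "ETL from Git",
        "git_source": {
            "git_url": "https://github.com/example-org/etl-pipelines",
            "git_provider": "gitHub",
            "git_branch": "main",
        },
        "tasks": [
            {
                "task_key": "ingest",
                "notebook_task": {
                    # Path relative to the repository root.
                    "notebook_path": "notebooks/ingest",
                    "source": "GIT",
                },
            }
        ],
    },
)
```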
When you view the run history of a task that runs code stored in a remote Git repository, the Task run details panel includes Git details, including the commit SHA associated with the run. See View task run history.
Note
Tasks configured to use a remote Git repository cannot write to workspace files. These tasks must write temporary data to ephemeral storage attached to the driver node of the compute configured to run the task and persistent data to a volume or table.
Databricks recommends referencing workspace paths in Git folders only for rapid iteration and testing during development. As you move jobs into staging and production, Databricks recommends configuring those jobs to reference a remote Git repository. To learn more about using a remote Git repository with a Databricks job, see the following section.
The jobs UI has a dialog to configure a remote Git repository. This dialog is accessible from the Job details panel under the Git heading or in any task configured to use a Git provider.
The options displayed to access the dialog vary based on task type and whether or not a git reference has already been configured for the job. Buttons to launch the dialog include Add Git settings, Edit, or Add a git reference.
In the Git Information dialog (labelled just Git if accessed from the Job details panel), enter the following details:
Note
The dialog might prompt you with the following: Git credentials for this account are missing. Add credentials. You must configure a remote Git repository before using it as a reference. See Set up Databricks Git folders (Repos).
Important
Streaming observability for Databricks Jobs is in Public Preview.
You can configure optional thresholds for job run duration or streaming backlog metrics. To configure duration or streaming metric thresholds, click Duration and streaming backlog thresholds in the Job details panel.
To configure job duration thresholds, including expected and maximum completion times for the job, select Run duration in the Metric drop-down menu. Enter a duration in the Warning field to configure the job’s expected completion time. If the job exceeds this threshold, an event is triggered. You can use this event to notify when a job is running slowly. See Configure notifications for slow jobs. To configure a maximum completion time for a job, enter the maximum duration in the Timeout field. If the job does not complete in this time, Azure Databricks sets its status to “Timed Out”.
To configure a threshold for a streaming backlog metric, select the metric in the Metric drop-down menu and enter a value for the threshold. To learn about the specific metrics supported by a streaming source, see View metrics for streaming tasks.
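In the job's settings, these thresholds map to health rules, and the Timeout field maps to a timeout in seconds. The following is a rough sketch via the Jobs REST API; the metric names reflect the Jobs API health rules as I understand them, and the job ID and threshold values are placeholders.

```python
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder

# Warn if a run exceeds 1 hour, alert if the streaming backlog exceeds
# 10 minutes, and time the job out after 2 hours.
requests.post(
    f"{HOST}/api/2.1/jobs/update",
    headers=HEADERS,
    json={
        "job_id": 123456789,
        "new_settings": {
            "health": {
                "rules": [
                    {"metric": "RUN_DURATION_SECONDS", "op": "GREATER_THAN", "value": 3600},
                    {"metric": "STREAMING_BACKLOG_SECONDS", "op": "GREATER_THAN", "value": 600},
                ]
            },
            "timeout_seconds": 7200,
        },
    },
)
```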
If an event is triggered because a threshold is exceeded, you can use the event to send a notification. See Configure notifications for slow jobs.
You can optionally specify duration thresholds for tasks. See Configure thresholds for task run duration or streaming backlog metrics.
Note
Queueing is enabled by default for jobs created through the UI after April 15, 2024.
To prevent runs of a job from being skipped because of concurrency limits, you can enable queueing for the job. When queueing is enabled, the run is queued for up to 48 hours if resources are unavailable for a job run. When capacity is available, the job run is dequeued and run. Queued runs are displayed in the runs list for the job and the recent job runs list.
A run is queued when one of the following limits is reached:
The maximum concurrent active runs in the workspace.
The maximum concurrent Run Job task runs in the workspace.
The maximum concurrent runs of the job.
Queueing is a job-level property that queues runs only for that job.
To enable or disable queueing, click Advanced settings and click the Queue toggle button in the Job details side panel.
By default, the maximum concurrent runs for all new jobs is 1.
Click Edit concurrent runs under Advanced settings to set this job’s maximum number of parallel runs.
Azure Databricks skips the run if the job has already reached its maximum number of active runs when attempting to start a new run.
Set this value higher than 1 to allow multiple concurrent runs of the same job. This is useful, for example, if you trigger your job on a frequent schedule and want to enable consecutive runs to overlap or trigger multiple runs that differ by their input parameters.
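Both queueing and maximum concurrent runs are plain fields in the job's settings. A minimal sketch with a placeholder job ID:

```python
import requests

HOST = "https://adb-1234567890123456.7.azuredatabricks.net"   # placeholder
HEADERS = {"Authorization": "Bearer <personal-access-token>"}  # placeholder

# Enable queueing and allow up to three concurrent runs of the same job.
requests.post(
    f"{HOST}/api/2.1/jobs/update",
    headers=HEADERS,
    json={
        "job_id": 123456789,
        "new_settings": {
            "queue": {"enabled": True},
            "max_concurrent_runs": 3,
        },
    },
)
```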