Jobs access control
Note
Access control is available only in the Premium plan.
Enabling access control for jobs allows job owners to control who can view job results or manage runs of a job. This article describes the individual permissions and how to configure jobs access control.
Before you can use jobs access control, an Azure Databricks admin must enable it for the workspace. See Enable access control.
Job permissions
There are five permission levels for jobs: No Permissions, Can View, Can Manage Run, Is Owner, and Can Manage. Admins are granted the Can Manage permission by default, and they can assign that permission to non-admin users.
Note
The job owner can be changed only by an admin.
The table lists the abilities for each permission.
Ability | No Permissions | Can View | Can Manage Run | Is Owner | Can Manage |
---|---|---|---|---|---|
View job details and settings | x | x | x | x | x |
View results, Spark UI, logs of a job run | | x | x | x | x |
Run now | | | x | x | x |
Cancel run | | | x | x | x |
Edit job settings | | | | x | x |
Modify permissions | | | | x | x |
Delete job | | | | x | x |
Change owner | | | | | |
Note
- The creator of a job has Is Owner permission.
- A job cannot have more than one owner.
- A job cannot have a group as an owner.
- Jobs triggered through Run Now assume the permissions of the job owner and not the user who issued Run Now. For example, even if job A is configured to run on an existing cluster accessible only to the job owner (user A), a user (user B) with Can Manage Run permission can start a new run of the job.
- You can view notebook run results only if you have the Can View or higher permission on the job. This allows jobs access control to remain intact even if the job notebook was renamed, moved, or deleted.
- Jobs access control applies to jobs displayed in the Databricks Jobs UI and their runs. It doesn’t apply to:
  - Runs spawned by modularized or linked code in notebooks, which use the permissions of the notebook. If a notebook workflow is created from a notebook stored in Git, a fresh checkout is created, and files in that checkout have only the permissions of the user the original run was executed as.
  - Runs submitted through the API, whose ACLs are bundled with the notebooks by default. However, the default ACLs can be overridden by setting the access_control_list parameter in the request body, as in the sketch following this note.
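For illustration, here is a minimal sketch of submitting a one-time run with explicit ACLs. It assumes the Jobs API 2.1 runs/submit endpoint; the workspace URL, token, notebook path, and cluster settings are placeholders, not values from this article:

import requests

HOST = "https://<workspace-url>"   # placeholder
TOKEN = "<personal-access-token>"  # placeholder

# Submit a one-time run; access_control_list overrides the default ACLs
# that would otherwise be bundled with the notebook.
response = requests.post(
    f"{HOST}/api/2.1/jobs/runs/submit",
    headers={"Authorization": f"Bearer {TOKEN}"},
    json={
        "run_name": "ad-hoc featurization",
        "tasks": [{
            "task_key": "main",
            "notebook_task": {"notebook_path": "/Production/MakeFeatures"},
            "new_cluster": {
                "spark_version": "13.3.x-scala2.12",  # placeholder
                "node_type_id": "Standard_DS3_v2",    # placeholder
                "num_workers": 2,
            },
        }],
        "access_control_list": [
            {"group_name": "Engineering", "permission_level": "CAN_VIEW"},
        ],
    },
)
response.raise_for_status()
print(response.json()["run_id"])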
Configure job permissions
Note
This section describes how to manage permissions using the UI. You can also use the Permissions API 2.0.
You must have Can Manage or Is Owner permission.
1. Go to the details page for a job.
2. Click the Edit permissions button in the Job details panel.
3. In the pop-up dialog box, assign job permissions via the drop-down menu beside a user's name.
4. Click Save Changes.
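If you prefer the Permissions API 2.0 mentioned in the note above, the following sketch grants a user the Can Manage Run permission and then reads back the job's ACL; the workspace URL, token, job ID, and user name are placeholders:

import requests

HOST = "https://<workspace-url>"   # placeholder
TOKEN = "<personal-access-token>"  # placeholder
JOB_ID = "123"                     # placeholder
headers = {"Authorization": f"Bearer {TOKEN}"}

# PATCH adds or updates the listed entries without touching the job's
# other access control entries; use PUT instead to replace them all.
requests.patch(
    f"{HOST}/api/2.0/permissions/jobs/{JOB_ID}",
    headers=headers,
    json={
        "access_control_list": [
            {"user_name": "user-b@example.com", "permission_level": "CAN_MANAGE_RUN"},
        ]
    },
).raise_for_status()

# Read back the effective permissions on the job.
current = requests.get(f"{HOST}/api/2.0/permissions/jobs/{JOB_ID}", headers=headers)
current.raise_for_status()
print(current.json()["access_control_list"])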
Terraform integration
You can manage permissions in a fully automated setup using the Databricks Terraform provider and the databricks_permissions resource:
resource "databricks_group" "auto" {
display_name = "Automation"
}
resource "databricks_group" "eng" {
display_name = "Engineering"
}
data "databricks_spark_version" "latest" {}
data "databricks_node_type" "smallest" {
local_disk = true
}
resource "databricks_job" "this" {
name = "Featurization"
max_concurrent_runs = 1
new_cluster {
num_workers = 300
spark_version = data.databricks_spark_version.latest.id
node_type_id = data.databricks_node_type.smallest.id
}
notebook_task {
notebook_path = "/Production/MakeFeatures"
}
}
resource "databricks_permissions" "job_usage" {
job_id = databricks_job.this.id
access_control {
group_name = "users"
permission_level = "CAN_VIEW"
}
access_control {
group_name = databricks_group.auto.display_name
permission_level = "CAN_MANAGE_RUN"
}
access_control {
group_name = databricks_group.eng.display_name
permission_level = "CAN_MANAGE"
}
}
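In this configuration, every workspace user can view the job and its run results, the Automation group can additionally start and cancel runs, and the Engineering group can manage the job itself; applying the plan creates the job and its permissions in a single step.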