artifacts
Type: Map
Defines the attributes to build artifacts, where each key is the name of the artifact, and the value is a Map that defines the artifact build settings. For information about the artifacts mapping, see artifacts.
Key
Type
Description
build
String
An optional set of build commands to run locally before deployment.
dynamic_version
Boolean
Whether to patch the wheel version dynamically based on the timestamp of the whl file. If this is set to true, new code can be deployed without having to update the version in setup.py or pyproject.toml. This setting is only valid when type is set to whl. See artifacts.
executable
String
The executable type. Valid values are bash, sh, and cmd.
files
Sequence
The relative or absolute path to the built artifact files. See artifacts.name.files.
path
String
The local path of the directory for the artifact.
type
String
Required if the artifact is a Python wheel. The type of the artifact. Valid values are whl and jar.
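For example, a Python wheel artifact that is rebuilt locally before each deployment might be sketched as follows; the artifact name my_wheel and the build command are illustrative, not required values:

```yaml
# A minimal sketch of an artifacts mapping using the keys described above.
artifacts:
  my_wheel:
    type: whl             # required for Python wheels
    path: .               # local directory containing the wheel project
    build: poetry build   # optional local build command (illustrative)
    dynamic_version: true # patch the wheel version from the .whl timestamp
```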
bundle
Type: Map
The attributes of the bundle. See bundle.
Key
Type
Description
deployment
Map
The definition of the bundle deployment. See bundle.deployment.
git
Map
The Git version control details that are associated with your bundle. For supported attributes, see git. See bundle.git.
name
String
The name of the bundle.
uuid
String
Reserved. A Universally Unique Identifier (UUID) for the bundle that uniquely identifies the bundle in internal Databricks systems. This is generated when a bundle project is initialized using a Databricks template (using the databricks bundle init command).
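A minimal bundle mapping might look like the following sketch; the bundle name and Git details are placeholders:

```yaml
# Illustrative bundle attributes; name, origin_url, and branch are placeholders.
bundle:
  name: my_project
  git:
    origin_url: https://github.com/example/my_project
    branch: main
```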
bundle.deployment
Type: Map
The definition of the bundle deployment.
Key
Type
Description
fail_on_active_runs
Boolean
Whether to fail on active runs. If this is set to true, a deployment that is running can be interrupted.
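For example, to make a deployment fail rather than interrupt runs that are in progress, a sketch:

```yaml
# Sketch: fail the deployment if the bundle has active runs.
bundle:
  deployment:
    fail_on_active_runs: true
```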
experimental
Type: Map
Defines attributes for experimental features.
Key
Type
Description
pydabs
Map
The PyDABs configuration. See experimental.pydabs.
python
Map
Configures loading of Python code defined with the databricks-bundles package. See experimental.python.
python_wheel_wrapper
Boolean
Whether to use a Python wheel wrapper.
scripts
Map
The commands to run.
use_legacy_run_as
Boolean
Whether to use the legacy run_as behavior.
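As a sketch, the experimental mapping could enable the Python wheel wrapper while keeping the legacy run_as behavior off; both values are illustrative:

```yaml
# Illustrative experimental settings.
experimental:
  python_wheel_wrapper: true
  use_legacy_run_as: false
```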
experimental.pydabs
Type: Map
The PyDABs configuration.
Key
Type
Description
enabled
Boolean
Whether PyDABs (Private Preview) is enabled.
import
Sequence
The PyDABs project to import to discover resources, resource generators, and mutators.
venv_path
String
The Python virtual environment path.
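A PyDABs configuration might be sketched as follows; the import entry and virtual environment path are illustrative assumptions, not required values:

```yaml
# Illustrative PyDABs (Private Preview) configuration.
experimental:
  pydabs:
    enabled: true
    import:
      - my_project   # illustrative project to scan for resources
    venv_path: .venv # illustrative virtual environment path
```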
experimental.python
Type: Map
Configures loading of Python code defined with the databricks-bundles package.
Key
Type
Description
mutators
Sequence
Mutators contains a list of fully qualified function paths to mutator functions, such as ["my_project.mutators:add_default_cluster"].
resources
Sequence
Resources contains a list of fully qualified function paths to load resources defined in Python code, such as ["my_project.resources:load_resources"].
venv_path
String
The path to the virtual environment. If set, Python code executes within this environment; otherwise, it defaults to the Python interpreter available in the current shell.
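Putting these keys together, a sketch of the python mapping; the function paths and venv path come from the descriptions above and are illustrative:

```yaml
# Illustrative experimental.python configuration; replace the module
# paths with your own project's resource loaders and mutators.
experimental:
  python:
    venv_path: .venv
    resources:
      - "my_project.resources:load_resources"
    mutators:
      - "my_project.mutators:add_default_cluster"
```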
include
Type: Sequence
Specifies a list of path globs that contain configuration files to include within the bundle. See include.
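For example, to include all resource configuration files under a resources folder (the glob is illustrative):

```yaml
# Sketch: pull additional configuration files into the bundle.
include:
  - resources/*.yml
```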
permissions
Type: Sequence
A Sequence that defines the permissions to apply to experiments, jobs, pipelines, and models defined in the bundle, where each item in the sequence is a permission for a specific entity.
Key
Type
Description
group_name
String
The name of the group that has the permission set in level.
level
String
The allowed permission for the user, group, or service principal defined for this permission.
service_principal_name
String
The name of the service principal that has the permission set in level.
user_name
String
The name of the user that has the permission set in level.
presets
Type: Map
The deployment presets for the bundle.
Key
Type
Description
jobs_max_concurrent_runs
Integer
The maximum concurrent runs for a job.
name_prefix
String
The prefix for job runs of the bundle.
pipelines_development
Boolean
Whether pipeline deployments should be locked in development mode.
source_linked_deployment
Boolean
Whether to link the deployment to the bundle source.
tags
Map
The tags for the bundle deployment.
trigger_pause_status
String
A pause status to apply to all job triggers and schedules. Valid values are PAUSED or UNPAUSED.
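For example, presets for a development deployment might be sketched as follows; all values are illustrative:

```yaml
# Illustrative presets, typically used to give dev deployments safe defaults.
presets:
  name_prefix: "[dev my_user] " # illustrative prefix
  pipelines_development: true
  trigger_pause_status: PAUSED
  jobs_max_concurrent_runs: 4
  tags:
    team: data
```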
resources
Type: Map
A Map that defines the resources for the bundle, where each key is the name of the resource, and the value is a Map that defines the resource. For more information about supported resources and the resource definition reference, see Databricks Asset Bundles resources.
Key
Type
Description
apps
Map
The Databricks app definitions for the bundle, where each key is the name of the app. See app.
clusters
Map
The cluster definitions for the bundle, where each key is the name of a cluster. See cluster.
dashboards
Map
The dashboard definitions for the bundle, where each key is the name of the dashboard. See dashboard.
experiments
Map
The experiment definitions for the bundle, where each key is the name of the experiment. See experiment.
jobs
Map
The job definitions for the bundle, where each key is the name of the job. See job.
model_serving_endpoints
Map
The model serving endpoint definitions for the bundle, where each key is the name of the model serving endpoint. See model_serving_endpoint.
models
Map
The model definitions for the bundle, where each key is the name of the model. See model (legacy).
pipelines
Map
The pipeline definitions for the bundle, where each key is the name of the pipeline. See pipeline.
quality_monitors
Map
The quality monitor definitions for the bundle, where each key is the name of the quality monitor. See quality_monitor (Unity Catalog).
registered_models
Map
The registered model definitions for the bundle, where each key is the name of the Unity Catalog registered model. See registered_model (Unity Catalog).
schemas
Map
The schema definitions for the bundle, where each key is the name of the schema. See schema (Unity Catalog).
volumes
Map
The volume definitions for the bundle, where each key is the name of the volume. See volume (Unity Catalog).
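A sketch of a resources mapping that defines a single notebook job; the job name, task key, and notebook path are illustrative:

```yaml
# Illustrative resources mapping with one job; see the job reference for
# the full set of supported job attributes.
resources:
  jobs:
    my_job:
      name: my-job
      tasks:
        - task_key: main
          notebook_task:
            notebook_path: ./src/notebook.ipynb
```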
targets
Type: Map
Defines one or more deployment targets for the bundle, where each key is the name of the target. See targets.
targets.name.bundle
Type: Map
The bundle attributes when deploying to this target.
Key
Type
Description
deployment
Map
The definition of the bundle deployment. See targets.name.bundle.deployment.
git
Map
The Git version control details that are associated with your bundle. For supported attributes, see git. See targets.name.bundle.git.
name
String
The name of the bundle.
uuid
String
Reserved. A Universally Unique Identifier (UUID) for the bundle that uniquely identifies the bundle in internal Databricks systems. This is generated when a bundle project is initialized using a Databricks template (using the databricks bundle init command).
targets.name.bundle.deployment
Type: Map
The definition of the bundle deployment.
Key
Type
Description
fail_on_active_runs
Boolean
Whether to fail on active runs. If this is set to true, a deployment that is running can be interrupted.
targets.name.permissions
Type: Sequence
The permissions for deploying and running the bundle in the target.
Key
Type
Description
group_name
String
The name of the group that has the permission set in level.
level
String
The allowed permission for the user, group, or service principal defined for this permission.
service_principal_name
String
The name of the service principal that has the permission set in level.
user_name
String
The name of the user that has the permission set in level.
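For example, a permissions sequence that grants view access to a group and manage access to a user; the group name and email are illustrative:

```yaml
# Illustrative permissions; each item sets one level for one principal.
permissions:
  - level: CAN_VIEW
    group_name: users
  - level: CAN_MANAGE
    user_name: someone@example.com
```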
targets.name.presets
Type: Map
The deployment presets for the target.
Key
Type
Description
jobs_max_concurrent_runs
Integer
The maximum concurrent runs for a job.
name_prefix
String
The prefix for job runs of the bundle.
pipelines_development
Boolean
Whether pipeline deployments should be locked in development mode.
source_linked_deployment
Boolean
Whether to link the deployment to the bundle source.
tags
Map
The tags for the bundle deployment.
trigger_pause_status
String
A pause status to apply to all job triggers and schedules. Valid values are PAUSED or UNPAUSED.
targets.name.resources
Type: Map
The resource definitions for the target.
Key
Type
Description
apps
Map
The Databricks app definitions for the bundle, where each key is the name of the app. See app.
clusters
Map
The cluster definitions for the bundle, where each key is the name of a cluster. See cluster.
dashboards
Map
The dashboard definitions for the bundle, where each key is the name of the dashboard. See dashboard.
experiments
Map
The experiment definitions for the bundle, where each key is the name of the experiment. See experiment.
jobs
Map
The job definitions for the bundle, where each key is the name of the job. See job.
model_serving_endpoints
Map
The model serving endpoint definitions for the bundle, where each key is the name of the model serving endpoint. See model_serving_endpoint.
models
Map
The model definitions for the bundle, where each key is the name of the model. See model (legacy).
pipelines
Map
The pipeline definitions for the bundle, where each key is the name of the pipeline. See pipeline.
quality_monitors
Map
The quality monitor definitions for the bundle, where each key is the name of the quality monitor. See quality_monitor (Unity Catalog).
registered_models
Map
The registered model definitions for the bundle, where each key is the name of the Unity Catalog registered model. See registered_model (Unity Catalog).
schemas
Map
The schema definitions for the bundle, where each key is the name of the schema. See schema (Unity Catalog).
volumes
Map
The volume definitions for the bundle, where each key is the name of the volume. See volume (Unity Catalog).
targets.name.run_as
Type: Map
The identity to use to run the bundle.
Key
Type
Description
service_principal_name
String
The application ID of an active service principal. Setting this field requires the servicePrincipal/user role.
user_name
String
The email of an active workspace user. Non-admin users can only set this field to their own email.
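For example, to run the bundle in a target as a service principal; the target name is illustrative and the application ID is a placeholder:

```yaml
# Illustrative run_as configuration for a production target.
targets:
  prod:
    run_as:
      service_principal_name: "<application-id>" # placeholder
```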
targets.name.sync
Type: Map
The local paths to sync to the target workspace when a bundle is run or deployed.
Key
Type
Description
exclude
Sequence
A list of files or folders to exclude from the bundle.
include
Sequence
A list of files or folders to include in the bundle.
paths
Sequence
The local folder paths, which can be outside the bundle root, to synchronize to the workspace when the bundle is deployed.
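A sketch of a sync mapping for a target; all paths are illustrative:

```yaml
# Illustrative sync configuration; include/exclude refine what is uploaded.
targets:
  dev:
    sync:
      paths:
        - ../shared-lib # illustrative folder outside the bundle root
      include:
        - ./extras/*.py
      exclude:
        - ./scratch/*
```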
targets.name.variables
Type: Map
The custom variables for the target.
Key
Type
Description
default
Any
The default value for the variable. If this is not specified, the variable value must be supplied when the bundle is deployed or run.
description
String
The description of the variable.
lookup
Map
The name of the alert, cluster_policy, cluster, dashboard, instance_pool, job, metastore, notification_destination, pipeline, query, service_principal, or warehouse object for which to retrieve an ID. See targets.name.variables.name.lookup.
type
String
The type of the variable.
targets.name.variables.name.lookup
Type: Map
The name of the alert, cluster_policy, cluster, dashboard, instance_pool, job, metastore, notification_destination, pipeline, query, service_principal, or warehouse object for which to retrieve an ID.
Key
Type
Description
alert
String
The name of the alert for which to retrieve an ID.
cluster
String
The name of the cluster for which to retrieve an ID.
cluster_policy
String
The name of the cluster_policy for which to retrieve an ID.
dashboard
String
The name of the dashboard for which to retrieve an ID.
instance_pool
String
The name of the instance_pool for which to retrieve an ID.
job
String
The name of the job for which to retrieve an ID.
metastore
String
The name of the metastore for which to retrieve an ID.
notification_destination
String
The name of the notification_destination for which to retrieve an ID.
pipeline
String
The name of the pipeline for which to retrieve an ID.
query
String
The name of the query for which to retrieve an ID.
service_principal
String
The name of the service_principal for which to retrieve an ID.
warehouse
String
The name of the warehouse for which to retrieve an ID.
targets.name.workspace
Type: Map
The Databricks workspace for the target.
Key
Type
Description
artifact_path
String
The artifact path to use within the workspace for both deployments and workflow runs.
auth_type
String
The authentication type.
azure_client_id
String
The Azure client ID.
azure_environment
String
The Azure environment.
azure_login_app_id
String
The Azure login app ID.
azure_tenant_id
String
The Azure tenant ID.
azure_use_msi
Boolean
Whether to use MSI for Azure.
azure_workspace_resource_id
String
The Azure workspace resource ID.
client_id
String
The client ID for the workspace.
file_path
String
The file path to use within the workspace for both deployments and workflow runs.
google_service_account
String
The Google service account name.
host
String
The Databricks workspace host URL.
profile
String
The Databricks workspace profile name.
resource_path
String
The workspace resource path.
root_path
String
The Databricks workspace root path.
state_path
String
The workspace state path.
variables
Type: Map
Defines a custom variable for the bundle. See variables.
Key
Type
Description
default
Any
The default value for the variable. If this is not specified, the variable value must be supplied when the bundle is deployed or run.
description
String
The description of the variable.
lookup
Map
The name of the alert, cluster_policy, cluster, dashboard, instance_pool, job, metastore, notification_destination, pipeline, query, service_principal, or warehouse object for which to retrieve an ID. See variables.name.lookup.
type
String
The type of the variable.
variables.name.lookup
Type: Map
The name of the alert, cluster_policy, cluster, dashboard, instance_pool, job, metastore, notification_destination, pipeline, query, service_principal, or warehouse object for which to retrieve an ID.
Key
Type
Description
alert
String
The name of the alert for which to retrieve an ID.
cluster
String
The name of the cluster for which to retrieve an ID.
cluster_policy
String
The name of the cluster_policy for which to retrieve an ID.
dashboard
String
The name of the dashboard for which to retrieve an ID.
instance_pool
String
The name of the instance_pool for which to retrieve an ID.
job
String
The name of the job for which to retrieve an ID.
metastore
String
The name of the metastore for which to retrieve an ID.
notification_destination
String
The name of the notification_destination for which to retrieve an ID.
pipeline
String
The name of the pipeline for which to retrieve an ID.
query
String
The name of the query for which to retrieve an ID.
service_principal
String
The name of the service_principal for which to retrieve an ID.
warehouse
String
The name of the warehouse for which to retrieve an ID.
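For example, a variables mapping with one plain default value and one variable that resolves to the ID of an existing warehouse by name; the variable names and the warehouse display name are illustrative:

```yaml
# Illustrative variable definitions: one plain default, one ID lookup.
variables:
  default_catalog:
    description: The catalog to write to.
    default: main
  warehouse_id:
    description: The SQL warehouse to use.
    lookup:
      warehouse: "Shared warehouse" # illustrative warehouse display name
```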
workspace
Type: Map
Defines the Databricks workspace for the bundle. See workspace.
Key
Type
Description
artifact_path
String
The artifact path to use within the workspace for both deployments and workflow runs.
auth_type
String
The authentication type.
azure_client_id
String
The Azure client ID.
azure_environment
String
The Azure environment.
azure_login_app_id
String
The Azure login app ID.
azure_tenant_id
String
The Azure tenant ID.
azure_use_msi
Boolean
Whether to use MSI for Azure.
azure_workspace_resource_id
String
The Azure workspace resource ID.
client_id
String
The client ID for the workspace.
file_path
String
The file path to use within the workspace for both deployments and workflow runs.
google_service_account
String
The Google service account name.
host
String
The Databricks workspace host URL.
profile
String
The Databricks workspace profile name.
resource_path
String
The workspace resource path.
root_path
String
The Databricks workspace root path.
state_path
String
The workspace state path.
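Putting some of these keys together, a top-level workspace mapping might be sketched as follows; the host and root path are placeholders, and ${bundle.name} and ${bundle.target} are standard bundle substitutions:

```yaml
# Illustrative workspace configuration; host and root_path are placeholders.
workspace:
  host: https://my-workspace.cloud.databricks.com
  profile: DEFAULT
  root_path: /Workspace/Users/someone@example.com/.bundle/${bundle.name}/${bundle.target}
```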