Use cluster-scoped init scripts

Cluster-scoped init scripts are init scripts defined in a cluster configuration. They apply both to clusters you create and to clusters created to run jobs.

You can configure cluster-scoped init scripts using the UI, the Databricks CLI, or the Clusters API. This section focuses on performing these tasks using the UI. For the other methods, see the Databricks CLI and the Clusters API.

You can add any number of scripts, and the scripts are executed sequentially in the order provided.
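
For example, here is a minimal sketch of creating a cluster with two init scripts from the command line, assuming a recent version of the Databricks CLI; the cluster name, runtime version, node type, and script paths are placeholders, and the JSON body follows the Clusters API init_scripts field. The scripts run in the order listed:

    databricks clusters create --json '{
      "cluster_name": "init-script-demo",
      "spark_version": "14.3.x-scala2.12",
      "node_type_id": "Standard_DS3_v2",
      "num_workers": 1,
      "init_scripts": [
        {"volumes": {"destination": "/Volumes/<catalog>/<schema>/<volume>/first.sh"}},
        {"volumes": {"destination": "/Volumes/<catalog>/<schema>/<volume>/second.sh"}}
      ]
    }'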

If a cluster-scoped init script returns a non-zero exit code, the cluster launch fails. You can troubleshoot cluster-scoped init scripts by configuring cluster log delivery and examining the init script log. See Init script logging.
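
Because any non-zero exit code aborts the launch, it helps to fail fast and write a clear message to stderr so it shows up in the init script log. A minimal sketch (the package name is a placeholder):

    #!/bin/bash
    # Exit immediately if any command fails, so problems surface as launch failures.
    set -euo pipefail

    # Placeholder setup step: install an OS package on each node.
    if ! apt-get install -y libsnappy-dev; then
      echo "init script failed: could not install libsnappy-dev" >&2
      exit 1  # non-zero exit code makes the cluster launch fail
    fi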

Configure a cluster-scoped init script using the UI

This section contains instructions for configuring a cluster to run an init script using the Azure Databricks UI.

Databricks recommends managing all init scripts as cluster-scoped init scripts. If you are using compute with shared or single user access mode, store init scripts in Unity Catalog volumes. If you are using compute with no-isolation shared access mode, use workspace files for init scripts.

For shared access mode, you must add init scripts to the allowlist. See Allowlist libraries and init scripts on shared compute.

To use the UI to configure a cluster to run an init script, complete the following steps:

  1. On the cluster configuration page, click the Advanced Options toggle.
  2. At the bottom of the page, click the Init Scripts tab.
  3. In the Source drop-down, select the Workspace, Volume, or ABFSS source type.
  4. Specify a path to the init script, such as one of the following examples (a sample script and upload command are sketched after these steps):
    • For an init script stored in your home directory with workspace files: /Users/<user-name>/<script-name>.sh.
    • For an init script stored with Unity Catalog volumes: /Volumes/<catalog>/<schema>/<volume>/<path-to-script>/<script-name>.sh.
    • For an init script stored with object storage: abfss://container-name@storage-account-name.dfs.core.windows.net/path/to/init-script.
  5. Click Add.
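
For reference, here is a hypothetical example of a small init script and one way to upload it to a Unity Catalog volume with the Databricks CLI; the package name and volume path are placeholders:

    #!/bin/bash
    # my-init.sh: install a Python package into the cluster's Python environment.
    set -e
    /databricks/python/bin/pip install <package-name>

Upload the script before referencing it in the cluster configuration:

    databricks fs cp my-init.sh dbfs:/Volumes/<catalog>/<schema>/<volume>/my-init.sh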

In single user access mode, init scripts run using the identity of the assigned principal (a user or service principal).

In shared access mode, init scripts run using the identity of the cluster owner.

Note

No-isolation shared access mode does not support volumes, but uses the same identity assignment as shared access mode.

To remove a script from the cluster configuration, click the trash icon at the right of the script. When you confirm the delete, you are prompted to restart the cluster. Optionally, you can also delete the script file from the location where you uploaded it.

Note

If you configure an init script using the ABFSS source type, you must configure access credentials.

Databricks recommends using Microsoft Entra ID service principals to manage access to init scripts stored in Azure Data Lake Storage Gen2. Use the following linked documentation to complete this setup:

  1. Create a service principal with read and list permissions on your desired blobs. See Access storage using a service principal and Microsoft Entra ID (Azure Active Directory).

  2. Save your credentials using secrets. See Secrets.

  3. Set the properties in the Spark configuration and environment variables while creating a cluster, as in the following example:

    Spark config:

    spark.hadoop.fs.azure.account.auth.type.<storage-account>.dfs.core.windows.net OAuth
    spark.hadoop.fs.azure.account.oauth.provider.type.<storage-account>.dfs.core.windows.net org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
    spark.hadoop.fs.azure.account.oauth2.client.id.<storage-account>.dfs.core.windows.net <application-id>
    spark.hadoop.fs.azure.account.oauth2.client.secret.<storage-account>.dfs.core.windows.net {{secrets/<secret-scope>/<service-credential-key>}}
    spark.hadoop.fs.azure.account.oauth2.client.endpoint.<storage-account>.dfs.core.windows.net https://login.microsoftonline.com/<tenant-id>/oauth2/token
    

    Environment variables:

    SERVICE_CREDENTIAL={{secrets/<secret-scope>/<service-credential-key>}}
    
  4. (Optional) Refactor init scripts using azcopy or the Azure CLI.

    You can reference environment variables set during cluster configuration within your init scripts to pass credentials stored as secrets for validation, as in the sketch that follows.
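
    As an illustration, here is a hedged sketch of that validation step inside an init script. It assumes azcopy is available on the node; the application ID, tenant ID, and storage URL are placeholders. azcopy reads the client secret from the AZCOPY_SPA_CLIENT_SECRET environment variable, which this sketch fills from the SERVICE_CREDENTIAL variable set above:

    #!/bin/bash
    # SERVICE_CREDENTIAL was populated from a secret during cluster configuration;
    # azcopy expects the client secret in AZCOPY_SPA_CLIENT_SECRET.
    export AZCOPY_SPA_CLIENT_SECRET="$SERVICE_CREDENTIAL"

    # Sign in as the service principal (placeholder IDs).
    azcopy login --service-principal \
      --application-id <application-id> \
      --tenant-id <tenant-id>

    # Placeholder copy that confirms the credentials can read the container.
    azcopy copy "https://<storage-account>.blob.core.windows.net/<container>/<path>" /tmp/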

Warning

Cluster-scoped init scripts on DBFS are end-of-life. The DBFS option in the UI exists in some workspaces to support legacy workloads and is not recommended. All init scripts stored in DBFS should be migrated. For migration instructions, see Migrate init scripts from DBFS.

Troubleshooting cluster-scoped init scripts

  • The script must exist at the configured location. If the script doesn’t exist, attempts to start the cluster or scale up the executors result in failure.
  • The init script cannot be larger than 64KB. If a script exceeds that size, the cluster will fail to launch and a failure message will appear in the cluster log.
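
To check a script against this limit before uploading it, something like the following works (the file name is a placeholder):

    # Print the script size in bytes; it must be 65536 (64KB) or less.
    wc -c < my-init.sh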