
Serverless compute for notebooks

This article explains how to use serverless compute for notebooks. For information on using serverless compute for jobs, see Run your Azure Databricks job with serverless compute for workflows.

For pricing information, see Databricks pricing.

Requirements

Your workspace must be enabled for serverless interactive compute.
Attach a notebook to serverless compute

If your workspace is enabled for serverless interactive compute, all users in the workspace have access to serverless compute for notebooks. No additional permissions are required.

To attach to serverless compute, click the Connect drop-down menu in the notebook and select Serverless. For new notebooks, the attached compute automatically defaults to serverless upon code execution if no other resource has been selected.
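
As an illustration, running any cell in a new, unattached notebook is enough to trigger the attachment. The following is a minimal sketch, assuming the default `spark` session that Databricks notebooks provide; the query itself is arbitrary:

```python
# Running any cell in a new notebook with no compute selected attaches it
# to serverless compute. This arbitrary query simply confirms the session
# is live once the attachment completes.
df = spark.range(1_000)   # small in-memory range, no external data needed
print(df.count())         # forces execution, so the notebook attaches and runs
```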

Select a budget policy for your serverless usage

Important

This feature is in Public Preview.

Budget policies allow your organization to apply custom tags on serverless usage for granular billing attribution.

If your workspace uses budget policies to attribute serverless usage, you can select the budget policy you want to apply to the notebook. If you are assigned to only one budget policy, that policy is selected by default.

To select the budget policy before you connect to serverless compute:

  1. In the notebook UI, click the Connect dropdown.
  2. Click More…
  3. Select Serverless, then select the budget policy you want to apply.
  4. Click Start and attach.

Connect to existing compute with budget policy

You can select the budget policy after your notebook is connected to serverless compute by using the Environment side panel:

  1. In the notebook UI, click the Environment side panel.
  2. Under Budget policy, select the budget policy you want to apply to your notebook.
  3. Click Apply.

From that point on, all usage from your notebook will inherit the budget policy’s custom tags.
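
To confirm that those tags are landing on your usage records, you can query the billable usage system table. The following is a rough sketch, assuming you have access to the system.billing.usage system table; the exact tag keys inside custom_tags depend on what your budget policy applies.

```python
# Inspect recent serverless usage records and the custom tags they carry.
# The tag keys inside custom_tags depend on your budget policy; this query
# just surfaces whatever tags were attributed to the usage.
usage = spark.sql("""
    SELECT usage_date,
           sku_name,
           usage_quantity,
           custom_tags
    FROM system.billing.usage
    WHERE usage_date >= current_date() - INTERVAL 7 DAYS
    ORDER BY usage_date DESC
""")
usage.show(truncate=False)
```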

Note

Existing notebooks are assigned the budget policy you last selected the next time they are attached to serverless compute.

For more on budget policies, see Attribute serverless usage with budget policies.

View query insights

Serverless compute for notebooks and jobs uses query insights to assess Spark execution performance. After running a cell in a notebook, you can view insights related to SQL and Python queries by clicking the See performance link.
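
For example, a cell that runs a small Spark aggregation produces a Spark statement with a See performance link under the cell output. A minimal sketch, assuming the default `spark` session (the data is generated in place):

```python
# A small aggregation that produces a Spark statement with query insights.
# After the cell finishes, click "See performance" under the cell output.
from pyspark.sql import functions as F

df = spark.range(1_000_000).withColumn("bucket", F.col("id") % 10)
df.groupBy("bucket").agg(F.count("*").alias("rows")).show()
```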

You can click on any of the Spark statements to view the query metrics. From there you can click See query profile to see a visualization of the query execution. For more information on query profiles, see Query profile.

Note

To view performance insights for your job runs, see View job run query insights.

Query history

All queries that are run on serverless compute will also be recorded on your workspace’s query history page. For information on query history, see Query history.
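
If you prefer to inspect this history from a notebook rather than the UI, one option is the query history system table. The following is a hedged sketch, assuming the system.query.history table is enabled in your workspace and that the columns referenced below are available:

```python
# List recent statements recorded in query history. The column names are
# assumptions based on the system.query.history schema; adjust them if
# your workspace exposes different fields.
history = spark.sql("""
    SELECT start_time,
           execution_status,
           total_duration_ms,
           statement_text
    FROM system.query.history
    WHERE start_time >= current_timestamp() - INTERVAL 1 DAY
    ORDER BY start_time DESC
    LIMIT 20
""")
history.show(truncate=False)
```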

Query insight limitations

  • The query profile is only available after the query execution terminates.
  • Metrics are updated live, but the query profile is not shown while a query is running.
  • Only the following query statuses are covered: RUNNING, CANCELED, FAILED, FINISHED.
  • Running queries cannot be canceled from the query history page. They can be canceled in notebooks or jobs.
  • Verbose metrics are not available.
  • Query Profile download is not available.
  • Access to the Spark UI is not available.
  • The statement text only contains the last line that was run, even though several preceding lines may have run as part of the same statement, as in the sketch below.
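
For example, the following cell contains a single multi-line statement; the recorded statement text may show only its final line, even though the earlier lines ran as part of the same statement. A minimal sketch, assuming the default `spark` session:

```python
# The chained transformations below execute as a single Spark statement,
# but the statement text shown in query insights and query history may
# contain only the last line that was run (".show())").
(spark.range(100)
     .filter("id % 2 = 0")
     .selectExpr("id", "id * id AS id_squared")
     .show())
```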