
Background Jobs - Run On Demand Notebook

Runs an on-demand notebook job instance.

Note

This API is the release version of a beta API that is due to be deprecated on April 1, 2028. When calling this API, callers must set the query parameter beta to the value false.

Required Delegated Scopes

Notebook.Execute.All or Item.Execute.All

Microsoft Entra supported identities

This API supports the Microsoft identities listed in this section.

Identity Support
User Yes
Service principal and Managed identities Yes

Interface

POST https://api.fabric.microsoft.com/v1/workspaces/{workspaceId}/notebooks/{notebookId}/jobs/execute/instances?beta={beta}

URI Parameters

Name In Required Type Description
notebookId
path True

string (uuid)

The notebook item ID.

workspaceId
path True

string (uuid)

The workspace ID.

beta
query True

boolean

Specifies which version of the API to use. Set to false to use the release version.
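As a sketch of how a caller might assemble this endpoint, the helper below builds the POST URL from the two path parameters and the required beta query parameter. The function name is a hypothetical helper, not part of any official SDK, and the IDs shown are placeholders:

```python
# Sketch: constructing the Run On Demand Notebook request URL.
# build_run_notebook_url is a hypothetical helper; the IDs are placeholders.
BASE = "https://api.fabric.microsoft.com/v1"

def build_run_notebook_url(workspace_id: str, notebook_id: str, beta: bool = False) -> str:
    """Build the POST URL; the required beta query parameter is serialized lowercase."""
    return (f"{BASE}/workspaces/{workspace_id}/notebooks/{notebook_id}"
            f"/jobs/execute/instances?beta={str(beta).lower()}")

url = build_run_notebook_url(
    "d9438604-fdf3-472d-93d8-fcb832a1d2b6",
    "5171b288-8487-4d1e-82b3-693edfa14aee",
)
```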

Request Body

Name Type Description
executionData RunNotebookExecutionData:

Optional. The notebook configurations used during execution.

parameters

Parameter[]

The parameter list for the run on-demand job request: per-run, user-defined inputs that tailor this invocation. Note: parameter names are case-insensitive, but the casing must match the parameter name used in the code cell.
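A minimal sketch of assembling the request body client-side. The duplicate check mirrors the case-insensitive uniqueness rule for parameter names; build_request_body is a hypothetical helper, not an official API:

```python
# Sketch: assembling a request body with per-run parameters.
# Parameter names are matched case-insensitively for uniqueness, but the
# casing must still mirror the name used in the notebook's code cell.
def build_request_body(parameters=None, execution_data=None) -> dict:
    body = {}
    if parameters:
        names = [p["name"].lower() for p in parameters]
        if len(names) != len(set(names)):
            raise ValueError("parameter names must be unique (case-insensitive)")
        body["parameters"] = parameters
    if execution_data:
        body["executionData"] = execution_data
    return body

body = build_request_body(
    parameters=[{"name": "param1", "value": "value1", "type": "Text"}],
    execution_data={"compute": "Spark"},
)
```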

Responses

Name Type Description
202 Accepted

Request accepted, job execution is in progress.


Headers:

Location string - The URL to the job instance resource.

Retry-After integer - The number of seconds the client should wait before polling the job status for the first time. Clients must wait at least this many seconds before the first status poll.

429 Too Many Requests

ErrorResponse

The service rate limit was exceeded. The server returns a Retry-After header indicating, in seconds, how long the client must wait before sending additional requests.

Headers

Retry-After: integer

Other Status Codes

ErrorResponse

Common error codes:

  • MissingMinimalPermissions - The caller does not have sufficient permissions to run the job instance.

  • InvalidJobType - The requested job type is invalid.

  • TooManyRequestsForJobs - The caller is making too many run on-demand job requests.

  • ItemNotFound - The requested item ID was not found.

  • ComputeTypeMismatch - The compute type provided in the request does not align with the compute type configured in the notebook content.
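The 202 response and its Retry-After header suggest a client-side polling loop like the sketch below. Here submit and poll are hypothetical stand-ins for the HTTP layer (a real client would POST the execute URL, then GET the Location URL), and the terminal status names are illustrative assumptions, not taken from this document:

```python
# Sketch of a polling loop that honors the Retry-After contract.
# submit() -> (location, retry_after): POSTs the execute URL and returns the
#     Location header and Retry-After value from the 202 response.
# poll(location) -> (status, retry_after): GETs the job instance resource.
# TERMINAL_STATUSES is an illustrative assumption about job states.
import time

TERMINAL_STATUSES = {"Completed", "Failed", "Cancelled"}

def run_and_wait(submit, poll, sleep=time.sleep):
    """Submit the job, then poll its status until it reaches a terminal state."""
    location, retry_after = submit()
    while True:
        sleep(retry_after)  # wait at least Retry-After seconds before each poll
        status, retry_after = poll(location)
        if status in TERMINAL_STATUSES:
            return status
```

Injecting sleep as a parameter keeps the loop testable without real delays.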

Examples

Run Data Warehouse notebook with request body.
Run Jupyter notebook with request body.
Run notebook with no request body.
Run notebook with parameters.
Run Spark notebook with request body.

Run Data Warehouse notebook with request body.

Sample request

POST https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/notebooks/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/execute/instances?beta=false

{
  "executionData": {
    "compute": "DataWarehouse"
  }
}

Sample response

Location: https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/items/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/instances/2d6aa964-5f3a-4c95-a878-cc761ae71391
Retry-After: 60

Run Jupyter notebook with request body.

Sample request

POST https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/notebooks/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/execute/instances?beta=false

{
  "executionData": {
    "compute": "Jupyter",
    "computeConfiguration": {
      "name": "mySessionName",
      "numCores": 4,
      "mountPoints": [
        {
          "source": "abfss://myfilesystem@myaccount.dfs.core.windows.net/mypath",
          "mountPointPath": "/myMountPoint"
        }
      ],
      "defaultLakehouse": {
        "referenceType": "ById",
        "itemId": "2434b3e1-d753-4438-8e72-00cb6703e83a",
        "workspaceId": "d9438604-fdf3-472d-93d8-fcb832a1d2b6"
      },
      "attachedEnvironment": {
        "referenceType": "ById",
        "itemId": "39f73c18-9970-43a4-9c6e-72d22160493d",
        "workspaceId": "d9438604-fdf3-472d-93d8-fcb832a1d2b6"
      }
    }
  }
}

Sample response

Location: https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/items/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/instances/2d6aa964-5f3a-4c95-a878-cc761ae71391
Retry-After: 60

Run notebook with no request body.

Sample request

POST https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/notebooks/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/execute/instances?beta=false

Sample response

Location: https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/items/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/instances/2d6aa964-5f3a-4c95-a878-cc761ae71391
Retry-After: 60

Run notebook with parameters.

Sample request

POST https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/notebooks/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/execute/instances?beta=false

{
  "parameters": [
    {
      "name": "param1",
      "value": "value1",
      "type": "Text"
    },
    {
      "name": "param2",
      "value": true,
      "type": "Boolean"
    }
  ],
  "executionData": {
    "compute": "Spark",
    "computeConfiguration": {
      "highConcurrencyModeOptions": {
        "enabled": true,
        "sessionTag": "userInputSessionTag"
      }
    }
  }
}

Sample response

Location: https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/items/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/instances/2d6aa964-5f3a-4c95-a878-cc761ae71391
Retry-After: 60

Run Spark notebook with request body.

Sample request

POST https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/notebooks/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/execute/instances?beta=false

{
  "executionData": {
    "compute": "Spark",
    "computeConfiguration": {
      "name": "mySessionName",
      "driverMemory": "28g",
      "driverCores": 4,
      "executorMemory": "28g",
      "executorCores": 4,
      "numExecutors": 10,
      "jars": [
        "abfss://myfilesystem@myaccount.dfs.core.windows.net/mypath/myjar.jar"
      ],
      "pyFiles": [
        "abfss://myfilesystem@myaccount.dfs.core.windows.net/mypath/mypy.py"
      ],
      "files": [
        "abfss://myfilesystem@myaccount.dfs.core.windows.net/mypath/myfile.txt"
      ],
      "archives": [
        "abfss://myfilesystem@myaccount.dfs.core.windows.net/mypath/myzip.zip"
      ],
      "sparkProperties": [
        {
          "key": "spark.key1",
          "value": "value1"
        }
      ],
      "instancePool": {
        "name": "poolName",
        "type": "Workspace"
      },
      "mountPoints": [
        {
          "source": "abfss://myfilesystem@myaccount.dfs.core.windows.net/mypath",
          "mountPointPath": "/myMountPoint"
        }
      ],
      "defaultLakehouse": {
        "referenceType": "ById",
        "itemId": "2434b3e1-d753-4438-8e72-00cb6703e83a",
        "workspaceId": "d9438604-fdf3-472d-93d8-fcb832a1d2b6"
      },
      "attachedEnvironment": {
        "referenceType": "ById",
        "itemId": "39f73c18-9970-43a4-9c6e-72d22160493d",
        "workspaceId": "d9438604-fdf3-472d-93d8-fcb832a1d2b6"
      },
      "highConcurrencyModeOptions": {
        "enabled": true,
        "sessionTag": "userInputSessionTag"
      }
    }
  }
}

Sample response

Location: https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/items/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/instances/2d6aa964-5f3a-4c95-a878-cc761ae71391
Retry-After: 60

Definitions

Name Description
ComputeType

Represents the type of the compute. Additional ComputeType types may be added over time.

CustomPoolMemory

Custom pool memory for Spark driver or Spark executor. Additional CustomPoolMemory types may be added over time.

CustomPoolType

Custom pool type. Additional CustomPoolType types may be added over time.

ErrorRelatedResource

The error related resource details object.

ErrorResponse

The error response.

ErrorResponseDetails

The error response details.

HighConcurrencyModeOptions

High concurrency mode options.

InstancePool

The instance pool.

ItemJobParameterType

A string that represents the parameter's type. Additional types may be added over time.

ItemReferenceById

An item reference by ID object.

ItemReferenceType

The item reference type. Additional ItemReferenceType types may be added over time.

JupyterNotebookComputeConfiguration

Jupyter notebook compute configuration.

MountPoint

The storage mount point.

Parameter

An item job parameter.

RunDataWarehouseNotebookExecutionData

Data Warehouse notebook execution data. This compute type does not support compute configuration.

RunJupyterNotebookExecutionData

Jupyter notebook execution data.

RunNotebookRequest

Run notebook request with executionData.

RunSparkNotebookExecutionData

Spark notebook execution data.

SparkNotebookComputeConfiguration

Spark notebook compute configuration.

SparkProperty

A Spark property key and its value.

ComputeType

Represents the type of the compute. Additional ComputeType types may be added over time.

Value Description
Spark

Spark compute type.

Jupyter

Jupyter compute type.

DataWarehouse

Data Warehouse compute type.

CustomPoolMemory

Custom pool memory for Spark driver or Spark executor. Additional CustomPoolMemory types may be added over time.

Value Description
28g

28 GB memory.

56g

56 GB memory.

112g

112 GB memory.

224g

224 GB memory.

400g

400 GB memory.

CustomPoolType

Custom pool type. Additional CustomPoolType types may be added over time.

Value Description
Workspace

Workspace-level custom pool.

Capacity

Capacity-level custom pool.

ErrorRelatedResource

The error related resource details object.

Name Type Description
resourceId

string

The resource ID that's involved in the error.

resourceType

string

The type of the resource that's involved in the error.

ErrorResponse

The error response.

Name Type Description
errorCode

string

A specific identifier that provides information about an error condition, allowing for standardized communication between our service and its users.

isRetriable

boolean

When true, the request can be retried. Use the Retry-After response header to determine the delay, if available.

message

string

A human readable representation of the error.

moreDetails

ErrorResponseDetails[]

List of additional error details.

relatedResource

ErrorRelatedResource

The error related resource details.

requestId

string (uuid)

ID of the request associated with the error.

ErrorResponseDetails

The error response details.

Name Type Description
errorCode

string

A specific identifier that provides information about an error condition, allowing for standardized communication between our service and its users.

message

string

A human readable representation of the error.

relatedResource

ErrorRelatedResource

The error related resource details.

HighConcurrencyModeOptions

High concurrency mode options.

Name Type Description
enabled

boolean

The status of the high concurrency mode. False - Disabled, true - Enabled.

sessionTag

string

Setting the session tag instructs Spark to reuse existing Spark sessions which minimizes startup time. Arbitrary string values can be used for the session tag. If no session exists, a new session is created using the tag value.

InstancePool

The instance pool.

Name Type Description
id

string (uuid)

Instance pool ID.

name

string

Instance pool name.

type

CustomPoolType

Instance pool type.

ItemJobParameterType

A string that represents the parameter's type. Additional types may be added over time.

Value Description
VariableReference

The type of parameter is a variable reference.

Integer

The type of parameter is an integer.

Number

The type of parameter is a number; it accepts both integer and float values.

Text

The type of parameter is text.

Boolean

The type of parameter is a boolean.

DateTime

The type of parameter is a datetime in UTC, using the YYYY-MM-DDTHH:mm:ssZ format.

Guid

The parameter type is a string representation of a GUID, using the 00000000-0000-0000-0000-000000000000 format. See https://learn.microsoft.com/dotnet/api/system.guid.tostring for formatting details, and use the default format, "D".

Automatic

The parameter type is automatically determined. Note: this type may not be supported for all item job types.
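As a sketch of producing parameter values in the formats described above (UTC YYYY-MM-DDTHH:mm:ssZ for DateTime, .NET's default "D" format for Guid), where datetime_param and guid_param are hypothetical helpers:

```python
# Sketch: formatting DateTime and Guid parameter values.
# datetime_param / guid_param are hypothetical helpers, not an official API.
import uuid
from datetime import datetime, timezone

def datetime_param(name: str, dt: datetime) -> dict:
    # Normalize to UTC and emit the YYYY-MM-DDTHH:mm:ssZ format.
    value = dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")
    return {"name": name, "value": value, "type": "DateTime"}

def guid_param(name: str, guid: uuid.UUID) -> dict:
    # Python's str(uuid) matches .NET's default "D" format (hyphenated, lowercase).
    return {"name": name, "value": str(guid), "type": "Guid"}

p = datetime_param("runDate", datetime(2028, 4, 1, 12, 0, 0, tzinfo=timezone.utc))
```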

ItemReferenceById

An item reference by ID object.

Name Type Description
itemId

string (uuid)

The ID of the item.

referenceType string:

ById

The item reference type.

workspaceId

string (uuid)

The workspace ID of the item.

ItemReferenceType

The item reference type. Additional ItemReferenceType types may be added over time.

Value Description
ById

The item is referenced by its ID.

ByVariable

The item is referenced by a variable.

JupyterNotebookComputeConfiguration

Jupyter notebook compute configuration.

Name Type Description
attachedEnvironment

ItemReferenceById

Environment to be used in this session.

defaultLakehouse

ItemReferenceById

Default lakehouse to be used in this session.

mountPoints

MountPoint[]

Mount points to be used in this session.

name

string

The name of this session.

numCores

integer (int32)

The number of cores that this job can consume. Must be one of the following values: 2, 4, 8, 16, 32, 64.

MountPoint

The storage mount point.

Name Type Description
mountPointPath

string

The local path to mount the remote storage to.

source

string

Source storage abfss path.

Parameter

An item job parameter.

Name Type Description
name

string

The parameter name, specified by the caller, must be unique (case-insensitive check) and no longer than 256 characters.

type

ItemJobParameterType

The parameter type.

value

object

The parameter value based on the parameter type.

RunDataWarehouseNotebookExecutionData

Data Warehouse notebook execution data. This compute type does not support compute configuration.

Name Type Description
compute string:

DataWarehouse

The execution engine for the job instance. This value needs to match the language of the notebook.

RunJupyterNotebookExecutionData

Jupyter notebook execution data.

Name Type Description
compute string:

Jupyter

The execution engine for the job instance. This value needs to match the language of the notebook.

computeConfiguration

JupyterNotebookComputeConfiguration

The Jupyter notebook execution configuration.

RunNotebookRequest

Run notebook request with executionData.

Name Type Description
executionData RunNotebookExecutionData:

Optional. The notebook configurations used during execution.

parameters

Parameter[]

The parameter list for the run on-demand job request: per-run, user-defined inputs that tailor this invocation. Note: parameter names are case-insensitive, but the casing must match the parameter name used in the code cell.

RunSparkNotebookExecutionData

Spark notebook execution data.

Name Type Description
compute string:

Spark

The execution engine for the job instance. This value needs to match the language of the notebook.

computeConfiguration

SparkNotebookComputeConfiguration

The Spark notebook execution configuration.

SparkNotebookComputeConfiguration

Spark notebook compute configuration.

Name Type Description
archives

string[]

The list of abfss paths of archives to be used in this session. Archives are extracted into the working directory of each executor. .jar, .tar.gz, .tgz, and .zip are supported. You can specify the directory to unpack into by adding # after the file name, for example, file.zip#directory.

attachedEnvironment

ItemReferenceById

Environment to be used in this session.

defaultLakehouse

ItemReferenceById

Default lakehouse to be used in this session.

driverCores

integer (int32)

Spark driver core. Must be one of the following values: 4, 8, 16, 32, 64.

driverMemory

CustomPoolMemory

Spark driver memory.

executorCores

integer (int32)

Spark executor core. Must be one of the following values: 4, 8, 16, 32, 64.

executorMemory

CustomPoolMemory

Spark executor memory.

files

string[]

The list of abfss paths of files to be used in this session. Files are placed in the working directory of each executor.

highConcurrencyModeOptions

HighConcurrencyModeOptions

High concurrency mode options.

instancePool

InstancePool

Instance pool used to run this notebook.

jars

string[]

The list of abfss paths of jars to be used in this session. Jars are included on the driver and executor classpaths.

mountPoints

MountPoint[]

Mount points to be used in this session.

name

string

The name of this session.

numExecutors

integer (int32)

Number of executors to launch for this session. The minimum value is 1, and the maximum value has to be lower than the instance pool maxNodeCount.

pyFiles

string[]

The list of abfss paths of Python files to be used in this session. .zip, .egg, or .py files to place on the PYTHONPATH for Python apps.

sparkProperties

SparkProperty[]

A list of Spark property key/value pairs.
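Because sparkProperties is sent as a list of {key, value} objects rather than a JSON object, a plain dictionary of Spark settings needs converting before it goes in the request body. A minimal sketch, where to_spark_properties is a hypothetical helper:

```python
# Sketch: converting a dict of Spark settings into the SparkProperty list shape.
# to_spark_properties is a hypothetical helper; values are stringified since
# the SparkProperty value field is a string.
def to_spark_properties(props: dict) -> list:
    return [{"key": k, "value": str(v)} for k, v in sorted(props.items())]

spark_properties = to_spark_properties({
    "spark.key1": "value1",
    "spark.sql.shuffle.partitions": 8,
})
```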

SparkProperty

A Spark property key and its value.

Name Type Description
key

string

The Spark property key.

value

string

The Spark property value.