Background Jobs - Run On Demand Notebook
Run on-demand notebook job instance.
Note
This API is the release version of a beta API that is due to be deprecated on April 1, 2028. When calling this API, callers must set the query parameter `beta` to `false`.
Required Delegated Scopes
Notebook.Execute.All or Item.Execute.All
Microsoft Entra supported identities
This API supports the Microsoft identities listed in this section.
| Identity | Support |
|---|---|
| User | Yes |
| Service principal and Managed identities | Yes |
Interface
POST https://api.fabric.microsoft.com/v1/workspaces/{workspaceId}/notebooks/{notebookId}/jobs/execute/instances?beta={beta}
URI Parameters
| Name | In | Required | Type | Description |
|---|---|---|---|---|
| notebookId | path | True | string (uuid) | The notebook item ID. |
| workspaceId | path | True | string (uuid) | The workspace ID. |
| beta | query | True | boolean | Specifies which version of the API to use. Set to `false`. |
Request Body
| Name | Type | Description |
|---|---|---|
| executionData | RunNotebookExecutionData | Optional. The notebook configurations used during execution. |
| parameters | Parameter[] | The parameter list for the run on-demand job request: per-run, user-defined inputs that tailor this invocation. Note: parameter names are case-insensitive, but the casing must match the parameter name used in the code cell. |
Responses
| Name | Type | Description |
|---|---|---|
| 202 Accepted | | Request accepted; job execution is in progress. Headers: Location: string, Retry-After: integer |
| 429 Too Many Requests | | The service rate limit was exceeded. Headers: Retry-After: integer |
| Other Status Codes | ErrorResponse | Common error codes. |
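The 202 flow above (submit the job, then poll the URL from the Location header after the Retry-After interval) can be sketched as follows. This is a hypothetical client sketch, not part of the API: the helper names are illustrative, and the `requests` calls shown in the trailing comment assume a valid Microsoft Entra bearer token.

```python
# Hypothetical client sketch: build the endpoint URL and honor the
# Retry-After header from a 202 Accepted or 429 Too Many Requests response.
BASE = "https://api.fabric.microsoft.com/v1"

def build_execute_url(workspace_id: str, notebook_id: str) -> str:
    """URL for the run on-demand notebook job endpoint (beta must be false)."""
    return (f"{BASE}/workspaces/{workspace_id}/notebooks/{notebook_id}"
            "/jobs/execute/instances?beta=false")

def poll_delay(headers: dict, default: int = 60) -> int:
    """Seconds to wait before polling, taken from Retry-After when present."""
    try:
        return int(headers.get("Retry-After", default))
    except (TypeError, ValueError):
        return default

# Illustrative flow (requires `requests` and a valid Entra bearer token):
#
#   import time, requests
#   resp = requests.post(build_execute_url(ws_id, nb_id),
#                        headers={"Authorization": f"Bearer {token}"},
#                        json={"executionData": {"compute": "Spark"}})
#   status_url = resp.headers["Location"]   # job instance URL to poll (202)
#   time.sleep(poll_delay(resp.headers))
```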
Examples
- Run Data Warehouse notebook with request body
- Run Jupyter notebook with request body
- Run notebook with no request body
- Run notebook with parameters
- Run Spark notebook with request body
Run Data Warehouse notebook with request body.
Sample request
```http
POST https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/notebooks/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/execute/instances?beta=false

{
  "executionData": {
    "compute": "DataWarehouse"
  }
}
```
Sample response
```http
Location: https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/items/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/instances/2d6aa964-5f3a-4c95-a878-cc761ae71391
Retry-After: 60
```
Run Jupyter notebook with request body.
Sample request
```http
POST https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/notebooks/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/execute/instances?beta=false

{
  "executionData": {
    "compute": "Jupyter",
    "computeConfiguration": {
      "name": "mySessionName",
      "numCores": 4,
      "mountPoints": [
        {
          "source": "abfss://myfilesystem@myaccount.dfs.core.windows.net/mypath",
          "mountPointPath": "/myMountPoint"
        }
      ],
      "defaultLakehouse": {
        "referenceType": "ById",
        "itemId": "2434b3e1-d753-4438-8e72-00cb6703e83a",
        "workspaceId": "d9438604-fdf3-472d-93d8-fcb832a1d2b6"
      },
      "attachedEnvironment": {
        "referenceType": "ById",
        "itemId": "39f73c18-9970-43a4-9c6e-72d22160493d",
        "workspaceId": "d9438604-fdf3-472d-93d8-fcb832a1d2b6"
      }
    }
  }
}
```
Sample response
```http
Location: https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/items/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/instances/2d6aa964-5f3a-4c95-a878-cc761ae71391
Retry-After: 60
```
Run notebook with no request body.
Sample request
```http
POST https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/notebooks/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/execute/instances?beta=false
```
Sample response
```http
Location: https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/items/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/instances/2d6aa964-5f3a-4c95-a878-cc761ae71391
Retry-After: 60
```
Run notebook with parameters.
Sample request
```http
POST https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/notebooks/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/execute/instances?beta=false

{
  "parameters": [
    {
      "name": "param1",
      "value": "value1",
      "type": "Text"
    },
    {
      "name": "param2",
      "value": true,
      "type": "Boolean"
    }
  ],
  "executionData": {
    "compute": "Spark",
    "computeConfiguration": {
      "highConcurrencyModeOptions": {
        "enabled": true,
        "sessionTag": "userInputSessionTag"
      }
    }
  }
}
```
Sample response
```http
Location: https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/items/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/instances/2d6aa964-5f3a-4c95-a878-cc761ae71391
Retry-After: 60
```
Run Spark notebook with request body.
Sample request
```http
POST https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/notebooks/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/execute/instances?beta=false

{
  "executionData": {
    "compute": "Spark",
    "computeConfiguration": {
      "name": "mySessionName",
      "driverMemory": "28g",
      "driverCores": 4,
      "executorMemory": "28g",
      "executorCores": 4,
      "numExecutors": 10,
      "jars": [
        "abfss://myfilesystem@myaccount.dfs.core.windows.net/mypath/myjar.jar"
      ],
      "pyFiles": [
        "abfss://myfilesystem@myaccount.dfs.core.windows.net/mypath/mypy.py"
      ],
      "files": [
        "abfss://myfilesystem@myaccount.dfs.core.windows.net/mypath/myfile.txt"
      ],
      "archives": [
        "abfss://myfilesystem@myaccount.dfs.core.windows.net/mypath/myzip.zip"
      ],
      "sparkProperties": [
        {
          "key": "spark.key1",
          "value": "value1"
        }
      ],
      "instancePool": {
        "name": "poolName",
        "type": "Workspace"
      },
      "mountPoints": [
        {
          "source": "abfss://myfilesystem@myaccount.dfs.core.windows.net/mypath",
          "mountPointPath": "/myMountPoint"
        }
      ],
      "defaultLakehouse": {
        "referenceType": "ById",
        "itemId": "2434b3e1-d753-4438-8e72-00cb6703e83a",
        "workspaceId": "d9438604-fdf3-472d-93d8-fcb832a1d2b6"
      },
      "attachedEnvironment": {
        "referenceType": "ById",
        "itemId": "39f73c18-9970-43a4-9c6e-72d22160493d",
        "workspaceId": "d9438604-fdf3-472d-93d8-fcb832a1d2b6"
      },
      "highConcurrencyModeOptions": {
        "enabled": true,
        "sessionTag": "userInputSessionTag"
      }
    }
  }
}
```
Sample response
```http
Location: https://api.fabric.microsoft.com/v1/workspaces/d9438604-fdf3-472d-93d8-fcb832a1d2b6/items/5171b288-8487-4d1e-82b3-693edfa14aee/jobs/instances/2d6aa964-5f3a-4c95-a878-cc761ae71391
Retry-After: 60
```
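For clients that assemble the Spark request body programmatically, the shape used in the samples above can be built with a small helper. This is a hypothetical sketch: the field names (executionData, compute, computeConfiguration, name, numExecutors, sparkProperties) come from this reference, but the helper itself does not.

```python
# Hypothetical helper assembling a minimal Spark RunNotebookRequest body.

def spark_run_request(session_name, num_executors=1, spark_properties=None):
    """Build a request body for the Spark compute type."""
    if num_executors < 1:
        raise ValueError("numExecutors minimum is 1")
    config = {"name": session_name, "numExecutors": num_executors}
    if spark_properties:
        # sparkProperties is a list of {key, value} objects, not a flat map
        config["sparkProperties"] = [
            {"key": k, "value": v} for k, v in spark_properties.items()
        ]
    return {"executionData": {"compute": "Spark",
                              "computeConfiguration": config}}
```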
Definitions
| Name | Description |
|---|---|
| ComputeType | Represents the type of the compute. Additional ComputeType values may be added over time. |
| CustomPoolMemory | Custom pool memory for Spark driver or Spark executor. Additional CustomPoolMemory values may be added over time. |
| CustomPoolType | Custom pool type. Additional CustomPoolType values may be added over time. |
| ErrorRelatedResource | The error related resource details object. |
| ErrorResponse | The error response. |
| ErrorResponseDetails | The error response details. |
| HighConcurrencyModeOptions | High concurrency mode options. |
| InstancePool | The instance pool. |
| ItemJobParameterType | A string that represents the parameter's type. Additional types may be added over time. |
| ItemReferenceById | An item reference by ID object. |
| ItemReferenceType | The item reference type. Additional ItemReferenceType values may be added over time. |
| JupyterNotebookComputeConfiguration | Jupyter notebook compute configuration. |
| MountPoint | The storage mount point. |
| Parameter | An item job parameter. |
| RunDataWarehouseNotebookExecutionData | Data Warehouse notebook execution data. This compute type does not support compute configuration. |
| RunJupyterNotebookExecutionData | Jupyter notebook execution data. |
| RunNotebookRequest | Run notebook request with executionData. |
| RunSparkNotebookExecutionData | Spark notebook execution data. |
| SparkNotebookComputeConfiguration | Spark notebook compute configuration. |
| SparkProperty | A Spark property key and its value. |
ComputeType
Represents the type of the compute. Additional ComputeType types may be added over time.
| Value | Description |
|---|---|
| Spark | Spark compute type. |
| Jupyter | Jupyter compute type. |
| DataWarehouse | Data Warehouse compute type. |
CustomPoolMemory
Custom pool memory for Spark driver or Spark executor. Additional CustomPoolMemory types may be added over time.
| Value | Description |
|---|---|
| 28g | 28 GB memory. |
| 56g | 56 GB memory. |
| 112g | 112 GB memory. |
| 224g | 224 GB memory. |
| 400g | 400 GB memory. |
CustomPoolType
Custom pool type. Additional CustomPoolType types may be added over time.
| Value | Description |
|---|---|
| Workspace | Workspace-level custom pool. |
| Capacity | Capacity-level custom pool. |
ErrorRelatedResource
The error related resource details object.
| Name | Type | Description |
|---|---|---|
| resourceId | string | The resource ID that's involved in the error. |
| resourceType | string | The type of the resource that's involved in the error. |
ErrorResponse
The error response.
| Name | Type | Description |
|---|---|---|
| errorCode | string | A specific identifier that provides information about an error condition, allowing for standardized communication between our service and its users. |
| isRetriable | boolean | When true, the request can be retried. |
| message | string | A human-readable representation of the error. |
| moreDetails | ErrorResponseDetails[] | List of additional error details. |
| relatedResource | ErrorRelatedResource | The error related resource details. |
| requestId | string (uuid) | ID of the request associated with the error. |
ErrorResponseDetails
The error response details.
| Name | Type | Description |
|---|---|---|
| errorCode | string | A specific identifier that provides information about an error condition, allowing for standardized communication between our service and its users. |
| message | string | A human-readable representation of the error. |
| relatedResource | ErrorRelatedResource | The error related resource details. |
HighConcurrencyModeOptions
High concurrency mode options.
| Name | Type | Description |
|---|---|---|
| enabled | boolean | The status of high concurrency mode: false is disabled, true is enabled. |
| sessionTag | string | Setting the session tag instructs Spark to reuse existing Spark sessions, which minimizes startup time. Arbitrary string values can be used for the session tag. If no session exists, a new session is created using the tag value. |
InstancePool
The instance pool.
| Name | Type | Description |
|---|---|---|
| id | string (uuid) | Instance pool ID. |
| name | string | Instance pool name. |
| type | CustomPoolType | Instance pool type. |
ItemJobParameterType
A string that represents the parameter's type. Additional types may be added over time.
| Value | Description |
|---|---|
| VariableReference | The parameter is a variable reference. |
| Integer | The parameter is an integer. |
| Number | The parameter is a number; it accepts both integer and float values. |
| Text | The parameter is text. |
| Boolean | The parameter is a boolean. |
| DateTime | The parameter is a datetime in UTC, using the YYYY-MM-DDTHH:mm:ssZ format. |
| Guid | The parameter is a string representation of a GUID, using the 00000000-0000-0000-0000-000000000000 format. See https://learn.microsoft.com/dotnet/api/system.guid.tostring for formatting details, and use the default format: "D". |
| Automatic | The parameter type is determined automatically. Note: this type may not be supported for all item job types. |
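The DateTime and Guid formats described above can be produced in Python as follows; the helper names are illustrative, not part of the API.

```python
# Sketch: format values to match the DateTime (YYYY-MM-DDTHH:mm:ssZ) and
# Guid ("D" format) parameter types described above.
from datetime import datetime, timezone
import uuid

def to_datetime_param(dt: datetime) -> str:
    """UTC timestamp in the YYYY-MM-DDTHH:mm:ssZ format."""
    return dt.astimezone(timezone.utc).strftime("%Y-%m-%dT%H:%M:%SZ")

def to_guid_param(g: uuid.UUID) -> str:
    """GUID in the default "D" format (hyphen-separated, no braces)."""
    return str(g)  # Python's str(UUID) matches the "D" format
```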
ItemReferenceById
An item reference by ID object.
| Name | Type | Description |
|---|---|---|
| itemId | string (uuid) | The ID of the item. |
| referenceType | string: ById | The item reference type. |
| workspaceId | string (uuid) | The workspace ID of the item. |
ItemReferenceType
The item reference type. Additional ItemReferenceType types may be added over time.
| Value | Description |
|---|---|
| ById | The item is referenced by its ID. |
| ByVariable | The item is referenced by a variable. |
JupyterNotebookComputeConfiguration
Jupyter notebook compute configuration.
| Name | Type | Description |
|---|---|---|
| attachedEnvironment | ItemReferenceById | Environment to be used in this session. |
| defaultLakehouse | ItemReferenceById | Default lakehouse to be used in this session. |
| mountPoints | MountPoint[] | Mount points to be used in this session. |
| name | string | The name of this session. |
| numCores | integer (int32) | The number of cores that this job can consume. Must be one of the following values: 2, 4, 8, 16, 32, 64. |
MountPoint
The storage mount point.
| Name | Type | Description |
|---|---|---|
| mountPointPath | string | The local path to mount the remote storage to. |
| source | string | Source storage abfss path. |
Parameter
An item job parameter.
| Name | Type | Description |
|---|---|---|
| name | string | The parameter name, specified by the caller. Must be unique (checked case-insensitively) and no longer than 256 characters. |
| type | ItemJobParameterType | The parameter type. |
| value | object | The parameter value, interpreted according to the parameter type. |
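As a sketch of the rules above, a hypothetical helper that maps plain Python values onto this Parameter shape, enforcing the case-insensitive uniqueness and 256-character constraints; the type mapping (and defaulting unknown types to Text) is an assumption of this sketch, not part of the API.

```python
# Hypothetical helper (not part of the API): maps Python values onto the
# Parameter shape in the table above.

_TYPE_MAP = {bool: "Boolean", int: "Integer", float: "Number", str: "Text"}

def build_parameters(values):
    seen, params = set(), []
    for name, value in values.items():
        if name.lower() in seen:
            raise ValueError(f"duplicate parameter name (case-insensitive): {name}")
        if len(name) > 256:
            raise ValueError(f"parameter name exceeds 256 characters: {name}")
        seen.add(name.lower())
        # exact type lookup: bool maps to Boolean, not Integer
        ptype = _TYPE_MAP.get(type(value), "Text")
        params.append({"name": name, "value": value, "type": ptype})
    return params
```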
RunDataWarehouseNotebookExecutionData
Data Warehouse notebook execution data. This compute type does not support compute configuration.
| Name | Type | Description |
|---|---|---|
| compute | string: DataWarehouse | The execution engine for the job instance. This value needs to match the language of the notebook. |
RunJupyterNotebookExecutionData
Jupyter notebook execution data.
| Name | Type | Description |
|---|---|---|
| compute | string: Jupyter | The execution engine for the job instance. This value needs to match the language of the notebook. |
| computeConfiguration | JupyterNotebookComputeConfiguration | The Jupyter notebook execution configuration. |
RunNotebookRequest
Run notebook request with executionData.
| Name | Type | Description |
|---|---|---|
| executionData | RunNotebookExecutionData | Optional. The notebook configurations used during execution. |
| parameters | Parameter[] | The parameter list for the run on-demand job request: per-run, user-defined inputs that tailor this invocation. Note: parameter names are case-insensitive, but the casing must match the parameter name used in the code cell. |
RunSparkNotebookExecutionData
Spark notebook execution data.
| Name | Type | Description |
|---|---|---|
| compute | string: Spark | The execution engine for the job instance. This value needs to match the language of the notebook. |
| computeConfiguration | SparkNotebookComputeConfiguration | The Spark notebook execution configuration. |
SparkNotebookComputeConfiguration
Spark notebook compute configuration.
| Name | Type | Description |
|---|---|---|
| archives | string[] | The list of abfss paths of archives to be used in this session. Archives are extracted into the working directory of each executor. |
| attachedEnvironment | ItemReferenceById | Environment to be used in this session. |
| defaultLakehouse | ItemReferenceById | Default lakehouse to be used in this session. |
| driverCores | integer (int32) | Spark driver cores. Must be one of the following values: 4, 8, 16, 32, 64. |
| driverMemory | CustomPoolMemory | Spark driver memory. |
| executorCores | integer (int32) | Spark executor cores. Must be one of the following values: 4, 8, 16, 32, 64. |
| executorMemory | CustomPoolMemory | Spark executor memory. |
| files | string[] | The list of abfss paths of files to be used in this session. Files are placed in the working directory of each executor. |
| highConcurrencyModeOptions | HighConcurrencyModeOptions | High concurrency mode options. |
| instancePool | InstancePool | Instance pool used to run this notebook. |
| jars | string[] | The list of abfss paths of jars to be used in this session. Jars are included on the driver and executor classpaths. |
| mountPoints | MountPoint[] | Mount points to be used in this session. |
| name | string | The name of this session. |
| numExecutors | integer (int32) | Number of executors to launch for this session. The minimum value is 1, and the maximum value has to be lower than the maximum allowed by the instance pool. |
| pyFiles | string[] | The list of abfss paths of Python files to be used in this session. |
| sparkProperties | SparkProperty[] | A list of Spark property key-value pairs. |
SparkProperty
A Spark property key and its value.
| Name | Type | Description |
|---|---|---|
| key | string | The Spark property key. |
| value | string | The Spark property value. |