The Microsoft Fabric REST API provides a service endpoint for the create, read, update, and delete (CRUD) operations of a Fabric item. This article describes the available environment REST APIs and their usage.
Important
This feature is in preview.
API | Description | Category |
---|---|---|
Create environment | Create a new environment in the workspace. | General |
Get environment | Get the metadata of an environment. The response includes the status of the environment. | General |
Update environment | Update the metadata of an environment, like name and description. | General |
Delete environment | Delete an existing environment. | General |
List environments in workspace | Get the list of environments in a workspace. | General |
Publish environment | Trigger the publish of the environment with the current pending changes. | General |
Publish cancellation | Cancel an ongoing publish of the environment. | General |
Get published Spark compute | Get the Spark compute configurations that are effective. | Spark compute |
Get staging Spark compute | Get the full staging compute configurations. The staging configurations include the published and pending compute configurations. | Spark compute |
Get published libraries | Get the library list that is effective. | Libraries |
Get staging libraries | Get the full staging library list. This list includes the published and pending libraries. | Libraries |
Upload staging libraries | Add one custom library or one or more public libraries to the environment. | Libraries |
Delete staging libraries | Delete one staging custom library or all public libraries. | Libraries |
Learn more about the environment public APIs in Item APIs - Environment.
This section walks you through several common scenarios for working with environments. Replace {WORKSPACE_ID} and {ARTIFACT_ID} in the following examples with appropriate values.
You can create a new empty environment using the following API.
Sample request
POST https://api.fabric.microsoft.com/v1/workspaces/{{WORKSPACE_ID}}/environments
{
"displayName": "Environment_1",
"description": "An environment description"
}
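All the REST calls in this article require a Microsoft Entra access token in the Authorization header. The following sketch shows one way to issue the create request from Python; the requests library and the placeholder token and IDs are illustrative assumptions, and acquiring the token is out of scope here.
# Minimal sketch: create an empty environment with the Fabric REST API.
# Assumes you already have a valid Microsoft Entra access token.
import requests

workspace_id = "<WORKSPACE_ID>"   # replace with your workspace ID
access_token = "<ACCESS_TOKEN>"   # replace with a valid Entra token

response = requests.post(
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}/environments",
    headers={"Authorization": f"Bearer {access_token}"},
    json={
        "displayName": "Environment_1",
        "description": "An environment description",
    },
)
response.raise_for_status()
print(response.json())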
You can use the upload/delete staging libraries APIs to manage the library section of the environment.
Before adding or deleting a library, you can use the get published libraries API to check which libraries are currently effective.
Sample request
GET https://api.fabric.microsoft.com/v1/workspaces/{{WORKSPACE_ID}}/environments/{{ARTIFACT_ID}}/libraries
Sample response
{
"customLibraries": {
"wheelFiles": [
"samplewheel-0.18.0-py2.py3-none-any.whl"
],
"pyFiles": [
"samplepython.py"
],
"jarFiles": [
"samplejar.jar"
],
"rTarFiles": [
"sampleR.tar.gz"
]
},
"environmentYml": "dependencies:\r\n- pip:\r\n - matplotlib==3.4.3"
}
The API for uploading a staging library accepts one file at a time. The supported file types are .whl, .jar, .tar.gz, .py, and environment.yml for public libraries. You can specify the file via the multipart/form-data content type.
Sample request
POST https://api.fabric.microsoft.com/v1/workspaces/{{WORKSPACE_ID}}/environments/{{ARTIFACT_ID}}/staging/libraries
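The following sketch shows one way to send the multipart/form-data request from Python. The requests library, the local file name, and the form field name are illustrative assumptions, not part of the Fabric API contract.
# Minimal sketch: upload one library file to the staging state of an environment.
import requests

workspace_id = "<WORKSPACE_ID>"
artifact_id = "<ARTIFACT_ID>"
access_token = "<ACCESS_TOKEN>"

url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
    f"/environments/{artifact_id}/staging/libraries"
)

# requests builds the multipart/form-data body from the files argument.
# The form field name ("file") and the local file name are assumptions.
with open("samplewheel-0.18.0-py2.py3-none-any.whl", "rb") as library_file:
    response = requests.post(
        url,
        headers={"Authorization": f"Bearer {access_token}"},
        files={"file": library_file},
    )
response.raise_for_status()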
By specifying the full library file name with the type suffix, you can delete one library at a time.
Note
If you specify environment.yml as the file to be deleted, you remove all the public libraries.
Sample request
DELETE https://api.fabric.microsoft.com/v1/workspaces/{{WORKSPACE_ID}}/environments/{{ARTIFACT_ID}}/staging/libraries?libraryToDelete=fuzzywuzzy-0.18.0-py2.py3-none-any.whl
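The same delete call can be issued from Python as shown below; the requests library and the placeholder values are illustrative assumptions.
# Minimal sketch: delete one staging library by its full file name.
import requests

workspace_id = "<WORKSPACE_ID>"
artifact_id = "<ARTIFACT_ID>"
access_token = "<ACCESS_TOKEN>"

url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
    f"/environments/{artifact_id}/staging/libraries"
)

# The libraryToDelete query parameter carries the full file name;
# requests URL-encodes it automatically.
response = requests.delete(
    url,
    headers={"Authorization": f"Bearer {access_token}"},
    params={"libraryToDelete": "fuzzywuzzy-0.18.0-py2.py3-none-any.whl"},
)
response.raise_for_status()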
You can use the update staging Spark compute API to manage the Spark compute configurations.
Before changing the configurations for the environment, you can use the get published Spark compute API to check what Spark compute configurations are currently effective.
Sample request
GET https://api.fabric.microsoft.com/v1/workspaces/{{WORKSPACE_ID}}/environments/{{ARTIFACT_ID}}/sparkcompute
Sample response
{
"instancePool": {
"name": "Starter Pool",
"type": "Workspace"
},
"driverCores": 4,
"driverMemory": "56g",
"executorCores": 4,
"executorMemory": "56g",
"dynamicExecutorAllocation": {
"enabled": false,
"minExecutors": 1,
"maxExecutors": 1
},
"sparkProperties": {
"spark.acls.enable": "false"
},
"runtimeVersion": "1.2"
}
You can update the Spark runtime, switch to another pool, refine the compute configuration, and add or remove Spark properties by editing the request body of this API.
You can switch the attached pool by specifying the pool name and type. Specify the pool name as Starter Pool to switch the pool back to the default settings. To get the full list of available custom pools in the workspace by REST API, see Custom Pools - List Workspace Custom Pools.
If you want to remove an existing Spark property, specify its value as null with the key that you want to remove, as shown in the following example.
Sample request
PATCH https://api.fabric.microsoft.com/v1/workspaces/{{WORKSPACE_ID}}/environments/{{ARTIFACT_ID}}/staging/sparkcompute
{
"instancePool": {
"name": "Starter Pool",
"type": "Workspace"
},
"driverCores": 4,
"driverMemory": "56g",
"executorCores": 4,
"executorMemory": "56g",
"dynamicExecutorAllocation": {
"enabled": false,
"minExecutors": 1,
"maxExecutors": 1
},
"sparkProperties": {
"spark.acls.enable": null
},
"runtimeVersion": "1.2"
}
Use the following set of APIs to publish the changes.
The environment can accept one publish at a time. Before publishing your environment, you can validate the status of the environment and have a final review of the staging changes. Once the environment is published successfully, all configurations in the staging state become effective.
Step 1: get the metadata of the environment
GET https://api.fabric.microsoft.com/v1/workspaces/{{WORKSPACE_ID}}/environments/{{ARTIFACT_ID}}/
In the response body, you can clearly see the state of the environment. Make sure there's no ongoing publish before you move to the next step.
Step 2: get the staging libraries/Spark compute to have a final review
GET https://api.fabric.microsoft.com/v1/workspaces/{{WORKSPACE_ID}}/environments/{{ARTIFACT_ID}}/staging/libraries
GET https://api.fabric.microsoft.com/v1/workspaces/{{WORKSPACE_ID}}/environments/{{ARTIFACT_ID}}/staging/sparkcompute
The changes you make to the staging libraries and Spark compute are cached, but they require publishing to become effective. Use the following example to trigger the publish.
Sample request
POST https://api.fabric.microsoft.com/v1/workspaces/{{WORKSPACE_ID}}/environments/{{ARTIFACT_ID}}/staging/publish
Sample response
{
"publishDetails":
{
"state": "Running",
"targetVersion": "46838a80-5450-4414-bea0-40fb6f3e0c0d",
"startTime": "2024-03-29T14:17:09.0697022Z",
"componentPublishInfo": {
"sparkLibraries": {
"state": "Running"
},
"sparkSettings": {
"state": "Running"
}
}
}
}
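Publishing runs asynchronously; the response above only confirms that the publish started. One way to wait for it to finish (or to verify that no publish is ongoing before triggering a new one) is to poll the get environment API, as in the following sketch. The requests library, the polling interval, and the exact response field path (properties.publishDetails.state) are assumptions based on the sample responses in this article; adjust them for your payloads.
# Minimal sketch: poll the environment metadata until a publish is no longer running.
import time
import requests

workspace_id = "<WORKSPACE_ID>"
artifact_id = "<ARTIFACT_ID>"
access_token = "<ACCESS_TOKEN>"

url = (
    f"https://api.fabric.microsoft.com/v1/workspaces/{workspace_id}"
    f"/environments/{artifact_id}"
)
headers = {"Authorization": f"Bearer {access_token}"}

while True:
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    # The field path below is an assumption; inspect your actual response.
    publish_details = response.json().get("properties", {}).get("publishDetails", {})
    state = publish_details.get("state", "Unknown")
    print(f"Publish state: {state}")
    if state.lower() != "running":
        break
    time.sleep(30)  # polling interval is an arbitrary choice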
During the publish, you can also call the following API to cancel it.
Sample request
POST https://api.fabric.microsoft.com/v1/workspaces/{{WORKSPACE_ID}}/environments/{{ARTIFACT_ID}}/staging/cancelPublish