Interactive cluster update API is removing Unity Catalog access for a cluster that has Unity enabled

Siddharth Choudhary 40 Reputation points
2025-03-25T07:50:13.6533333+00:00

Hi, when I change my interactive cluster's configuration using the Databricks REST API /api/2.1/clusters/edit on a cluster that already has Unity Catalog enabled, the API applies the other config changes, but afterwards the Unity-enabled tag is gone. Why is it removing Unity access?
API body:

{
  "cluster_id": "",
  "cluster_name": "",
  "num_workers": 1,
  "use_photon": true,
  "autotermination_minutes": 10,
  "azure_attributes": {
    "availability": "SPOT_WITH_FALLBACK_AZURE",
    "spot_bid_max_price": "-1.0"
  },
  "autoscale": {
    "min_workers": 1,
    "max_workers": 8
  },
  "spark_version": "14.3.x-scala2.12",
  "node_type_id": "Standard_DS3_v2",
  "driver_node_type_id": "Standard_DS4_v2"
}

Can you please look into this issue?

Azure Databricks
An Apache Spark-based analytics platform optimized for Azure.

Accepted answer
    Venkat Reddy Navari 2,975 Reputation points Microsoft External Staff Moderator
    2025-03-25T11:32:39.8033333+00:00

    Hi Siddharth Choudhary

    Looking at the API body you shared, everything seems pretty standard: updating the number of workers, autoscale settings, Photon, and node types shouldn't, in theory, affect Unity Catalog enablement. Since you mentioned the cluster was Unity-enabled before the API call, it's strange that the Unity-enabled tag is stripped out after the update. It sounds as if the API is resetting a field that Unity Catalog depends on, even though that field isn't explicitly mentioned in your request.

    Here are a few key areas to investigate:

    1. Full cluster config: The /api/2.1/clusters/edit endpoint performs a full replacement of the cluster's configuration, not a partial update. If the Unity Catalog setting, such as "data_security_mode": "SINGLE_USER", isn't included in your API payload, it gets reset to a default (disabled) state.
    2. Cluster access mode: Unity Catalog requires the cluster to be in single-user or shared access mode. Your original payload doesn't specify an access mode, so the edit reverts the cluster to a mode that doesn't support Unity Catalog. The updated payload includes "data_security_mode": "SINGLE_USER" and "single_user_name" to lock it in.
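
    Because edit is a full replacement, a safe general pattern is read-modify-write: fetch the cluster's full current spec with /clusters/get, overlay only the fields you want to change, and resubmit the merged result, so fields like "data_security_mode" are never silently dropped. Here is a minimal Python sketch of that pattern; the host, token, and helper names are illustrative placeholders, not part of the original question:

    ```python
    import json
    import urllib.parse
    import urllib.request


    def merge_cluster_edit(current: dict, changes: dict) -> dict:
        """Overlay the desired edits onto the full current spec.

        /clusters/edit replaces the whole configuration, so any field absent
        from the payload (including Unity Catalog fields such as
        data_security_mode and single_user_name) is reset. Starting from the
        current spec keeps those fields intact.
        """
        return {**current, **changes}


    def edit_cluster(host: str, token: str, cluster_id: str, changes: dict) -> None:
        """Read-modify-write sketch; `host` and `token` are placeholders."""
        headers = {"Authorization": f"Bearer {token}",
                   "Content-Type": "application/json"}

        # 1. Fetch the cluster's full current configuration.
        query = urllib.parse.urlencode({"cluster_id": cluster_id})
        with urllib.request.urlopen(urllib.request.Request(
                f"{host}/api/2.1/clusters/get?{query}", headers=headers)) as resp:
            current = json.load(resp)

        # Note: /clusters/get also returns read-only fields (state, tags, ...);
        # in practice, copy over only the editable fields before resubmitting.

        # 2. Submit the merged spec as the new full configuration.
        payload = merge_cluster_edit(current, changes)
        req = urllib.request.Request(f"{host}/api/2.1/clusters/edit",
                                     data=json.dumps(payload).encode(),
                                     headers=headers)
        urllib.request.urlopen(req).close()
    ```

    The key point is the merge step: any field you don't explicitly change is carried over from the current spec, so the Unity Catalog settings survive the edit.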

    Here’s the updated API body with Unity Catalog settings added:

    {
      "cluster_id": "",
      "cluster_name": "",
      "num_workers": 1,
      "use_photon": true,
      "autotermination_minutes": 10,
      "azure_attributes": {
        "availability": "SPOT_WITH_FALLBACK_AZURE",
        "spot_bid_max_price": "-1.0"
      },
      "autoscale": {
        "min_workers": 1,
        "max_workers": 8
      },
      "spark_version": "14.3.x-scala2.12",
      "node_type_id": "Standard_DS3_v2",
      "driver_node_type_id": "Standard_DS4_v2",
      "data_security_mode": "SINGLE_USER",
      "single_user_name": "<your-username>"
    }

    I hope this information helps. Please do let us know if you have any further queries.

    Kindly consider upvoting the comment if the information provided is helpful. This can assist other community members in resolving similar issues.

    1 person found this answer helpful.
