Delta Live Tables error when starting a pipeline

Pablo Donoso 0 Reputation points
2023-01-18T13:22:31.29+00:00

Hi,

I'm trying to run a Delta Live Tables Pipeline and I get the following error at the 'Waiting for resources' stage:

Error: The operation could not be performed on your account with the following error message: azure_error_code: OperationNotAllowed, azure_error_message: Operation could not be completed as it results in..

Pipeline Settings:

{
    "id": "2486b115-a46a-4b6e-b562-11f4d1a0a46d",
    "clusters": [
        {
            "label": "default",
            "autoscale": {
                "min_workers": 1,
                "max_workers": 4,
                "mode": "ENHANCED"
            }
        }
    ],
    "development": true,
    "continuous": false,
    "channel": "CURRENT",
    "edition": "CORE",
    "photon": false,
    "libraries": [
        {
            "notebook": {
                "path": "/DLT/DLT-Quickstart-Notebook"
            }
        }
    ],
    "name": "DLT-Quickstart-Pipeline",
    "storage": "/DLT/Output"
}

Cluster Settings:


{
    "autoscale": {
        "min_workers": 2,
        "max_workers": 8
    },
    "cluster_name": "Pablo Donoso's Cluster",
    "spark_version": "11.3.x-scala2.12",
    "spark_conf": {
        "spark.databricks.delta.preview.enabled": "true"
    },
    "azure_attributes": {
        "first_on_demand": 1,
        "availability": "ON_DEMAND_AZURE",
        "spot_bid_max_price": -1
    },
    "node_type_id": "Standard_DS3_v2",
    "driver_node_type_id": "Standard_DS3_v2",
    "ssh_public_keys": [],
    "custom_tags": {},
    "spark_env_vars": {
        "PYSPARK_PYTHON": "/databricks/python3/bin/python3"
    },
    "autotermination_minutes": 120,
    "enable_elastic_disk": true,
    "cluster_source": "UI",
    "init_scripts": [],
    "single_user_name": "pabdonoso@gmail.com",
    "data_security_mode": "LEGACY_SINGLE_USER_STANDARD",
    "runtime_engine": "STANDARD",
    "cluster_id": "0118-123051-gasjgdm0"
}

Regards and thanks!


1 answer

  1. KranthiPakala-MSFT 46,442 Reputation points Microsoft Employee
    2023-01-20T18:11:35.31+00:00

    Hi Pablo Donoso,

    Welcome to the Microsoft Q&A platform and thanks for posting your query.

    You will receive this error when you exceed the vCPU core quota for a region. To resolve it, raise a support ticket to increase the core quota for the West Europe region.

    Cause: In general, quotas are applied per resource group, subscription, account, and other scopes. For example, your subscription may be configured to limit the number of cores available in a region. If you attempt to deploy a virtual machine that requires more cores than the remaining quota allows, you receive an error stating that the quota has been exceeded. (Your error message is cut off, but errors like this are usually complaining about insufficient vCPU cores.)
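
    To put rough numbers on it using the settings you posted: assuming the DLT cluster provisions Standard_DS3_v2 nodes (4 vCPUs each, a common default on Azure Databricks), at full autoscale it needs (1 driver + 4 workers) × 4 = 20 vCPUs, and your interactive cluster at full autoscale needs another (1 + 8) × 4 = 36 vCPUs, all counted against the same regional quota.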

    Workaround 1: Request a quota increase. Go to the Azure portal and file a support request asking for an increase in your vCPU quota for the region in which you want to create the VMs.

    How to check Usage + quotas for your subscription:

    1. In the Azure portal, select your subscription.
    2. Under Settings, open Usage + quotas.
    3. Use the filter to select "Microsoft.Compute" and "West Europe".
    4. Check the usage of Total Regional vCPUs.
    5. If the usage is at the limit, click Request Increase to raise the core quota for the region.
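
    If you prefer the command line, you can check the same figures with the Azure CLI (a quick sketch; it assumes you are logged in with az login and set to the right subscription):

        # Show vCPU usage and limits for the region, including
        # "Total Regional vCPUs" and the per-VM-family rows
        az vm list-usage --location westeurope --output table

        # Optionally narrow the output to the DSv2 family used by Standard_DS3_v2
        az vm list-usage --location westeurope \
            --query "[?contains(name.value, 'DSv2')]" --output table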


    OR

    Workaround 2: The issue may be that Databricks is picking a VM type that is already at its quota. To work around this, add an explicit VM type to the cluster definition in the pipeline settings so that DLT uses a node type you know has free quota:

        "clusters": [
            {
                "label": "default",
                "node_type_id": "Standard_DS3_v2",
                "driver_node_type_id": "Standard_DS3_v2",
            }
        ],
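
    For reference, here is roughly what the full pipeline settings could look like with the explicit node types merged in (a sketch based on the settings you posted, not an official template; pick a node type in a family where you have free quota, and consider lowering max_workers to reduce peak vCPU demand):

        {
            "id": "2486b115-a46a-4b6e-b562-11f4d1a0a46d",
            "clusters": [
                {
                    "label": "default",
                    "node_type_id": "Standard_DS3_v2",
                    "driver_node_type_id": "Standard_DS3_v2",
                    "autoscale": {
                        "min_workers": 1,
                        "max_workers": 4,
                        "mode": "ENHANCED"
                    }
                }
            ],
            "development": true,
            "continuous": false,
            "channel": "CURRENT",
            "edition": "CORE",
            "photon": false,
            "libraries": [
                {
                    "notebook": {
                        "path": "/DLT/DLT-Quickstart-Notebook"
                    }
                }
            ],
            "name": "DLT-Quickstart-Pipeline",
            "storage": "/DLT/Output"
        }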
    
    

    For reference, here is an older thread on the same issue from the Databricks community forum: Delta Live Tables failed to launch pipeline cluster.

    If neither of the above workarounds works, I recommend filing a support ticket for deeper investigation to identify the exact root cause and a possible solution.

    Hope this info helps.


    Please don’t forget to Accept Answer and Up-Vote wherever the information provided helps you; this can be beneficial to other community members.
