Thanks for using Microsoft Q&A forum and posting your query.
It seems like the issue is related to how the policy is being enforced and the specific requirements of the linked service configuration. Here are a few additional steps you can try:
Steps to Resolve the Issue
Verify Policy Configuration:
- Ensure that the policy associated with the `policyId` includes both the driver and worker instance pool IDs. The policy should not only specify the pool IDs but also ensure that no conflicting settings are present.

Specify Cluster Information:
- If the policy does not fully cover all required settings, you might need to provide minimal cluster information. This can include specifying the `newClusterVersion` and `newClusterNumOfWorker` directly in the `typeProperties`.
Example Configuration
Here’s an updated example of how your `typeProperties` might look:

```json
{
    "type": "AzureDatabricks",
    "typeProperties": {
        "domain": "https://<databricks-instance>.azuredatabricks.net",
        "accessToken": "<your-databricks-access-token>",
        "policyId": "<your-policy-id>",
        "newClusterVersion": "<runtime-version>",
        "newClusterNumOfWorker": "<number-of-workers>"
    }
}
```
- Policy Enforcement: Ensure that the policy does not forbid specifying `instance_pool_id`. If it does, you might need to adjust the policy settings or work with your Databricks admin to create a policy that aligns with your requirements.
- Permissions: Since you mentioned not having permissions to view the pools, ensure that the policy is correctly set up to enforce the instance pool usage without needing additional visibility.
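For reference, a cluster policy that pins both pools could look like the sketch below. The pool IDs are placeholders, and the exact policy definition is something your Databricks admin would own; using `"type": "fixed"` forces the value while `"hidden": true` keeps it out of the cluster creation form:

```json
{
  "instance_pool_id": {
    "type": "fixed",
    "value": "<worker-pool-id>",
    "hidden": true
  },
  "driver_instance_pool_id": {
    "type": "fixed",
    "value": "<driver-pool-id>",
    "hidden": true
  }
}
```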
Testing
After making these adjustments, test the configuration again by triggering a minimal pipeline. If the issue persists, it might be helpful to work closely with your Databricks admin to ensure the policy and linked service configurations are fully compatible.
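As a sketch of such a minimal pipeline, a single Databricks Notebook activity referencing the linked service is usually enough to exercise the cluster creation path. The pipeline, activity, notebook, and linked service names below are placeholders:

```json
{
  "name": "TestDatabricksPipeline",
  "properties": {
    "activities": [
      {
        "name": "RunTestNotebook",
        "type": "DatabricksNotebook",
        "typeProperties": {
          "notebookPath": "/Shared/test-notebook"
        },
        "linkedServiceName": {
          "referenceName": "AzureDatabricksLinkedService",
          "type": "LinkedServiceReference"
        }
      }
    ]
  }
}
```

If the debug run fails at cluster creation, the activity's error output should indicate which setting conflicts with the policy.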
Hope this helps. Do let us know if you have any further queries.