HTTP repo update from DevOps to Databricks fails with 'invalid access token' error

Anonymous
2023-03-06T09:50:07.01+00:00

I have inconsistent behaviour when updating a Databricks repo from Azure DevOps vs running the same command in Postman.

In Postman, I can successfully PATCH the repo using a PAT token from Databricks:

[screenshot: successful PATCH request in Postman]

However, running the pipeline in Azure DevOps I'm using a Generic Service Connection to store the URL:

[screenshot: Generic service connection configuration]

The pipeline attempts to mimic the Postman call using this generic service connection (the token is shown inline here for debugging; I usually store it in a secret variable):

stages: 
  - stage: deploy_production
    jobs:
    - deployment: deploy_prod_databricks
      displayName: Deploy Production Databricks
      environment: Debug
      pool: server
      strategy:
        runOnce:
          deploy:
            steps:          
            - task: InvokeRESTAPI@1
              displayName: 'Update deployment repo'
              inputs:
                connectionType: 'connectedServiceName'
                serviceConnection: 'Databricks (Prod)'
                method: 'PATCH'
                headers: |
                  {
                  "Accept":"application/json", 
                  "Authorization": "Bearer dapid956<snip>7b7"
                  }
                body: |
                  {
                  "branch": "releases/2023/02"
                  }

However, running the pipeline generates an 'invalid access token' error:

[screenshot: 'Invalid access token' error from the pipeline run]

I've tried re-issuing tokens, but get the same error. I have confirmed that IP access limits are not enabled on the workspace.

I have ascertained that the token itself is fine because it works in Postman. I have mirror pipelines in my Dev/Test environments which work OK.
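For reference, the successful Postman request has roughly the following shape. This is a sketch using Python's `urllib`; the workspace URL, repo ID, and token value are placeholders, and the loop prints exactly which headers would go out before anything is sent:

```python
import json
import urllib.request

# Placeholders -- substitute your workspace URL, repo ID, and PAT.
url = "https://adb-1234567890123456.7.azuredatabricks.net/api/2.0/repos/123456"
token = "dapi-example"  # hypothetical token value

req = urllib.request.Request(
    url,
    data=json.dumps({"branch": "releases/2023/02"}).encode(),
    method="PATCH",
    headers={
        "Accept": "application/json",
        "Authorization": f"Bearer {token}",
        "Content-Type": "application/json",
    },
)

# Inspect exactly what would be sent, without hitting the network.
print(req.get_method(), req.full_url)
for name, value in req.header_items():
    print(f"{name}: {value}")
# urllib.request.urlopen(req)  # uncomment to actually send the request
```

The pipeline's `InvokeRESTAPI@1` task needs to end up producing this same method, path, headers, and body for the call to succeed.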

I'm flummoxed by what's happening here; it's as if the token I'm passing isn't being used at all. Is there a way to debug which headers are actually being sent, or does anyone have other ideas?

Azure Databricks

1 answer

  1. Musaif Shaikh 85 Reputation points
    2023-03-06T09:55:46.95+00:00

    It looks like there may be a problem with the way the access token is being used in the Azure DevOps pipeline. Here are a few things you can try to troubleshoot the issue:

    1. Verify the scope of the token: Make sure the personal access token (PAT) has the necessary scope to perform the operation you're attempting. In Databricks, you can check the token's scope by going to the user settings and clicking on "Access Tokens". Ensure that the token has the necessary scopes to perform the PATCH operation.
    2. Verify the connection: Check the connection between Azure DevOps and Databricks. Go to the "Service connections" page in Azure DevOps and make sure the connection is active and has the correct credentials.
    3. Verify the headers: You can add a task to the pipeline that prints out the headers being sent to Databricks. This can help you identify if the headers are being properly set in the pipeline. You can use a command-line task to run the "curl" command with the appropriate options to print out the headers.
    4. Check the Databricks logs: Check the Databricks logs to see if there are any error messages that indicate why the access token is invalid.
    5. Check for IP restrictions: Ensure that IP access lists are not enabled on the Databricks workspace. If they are, make sure the IP addresses of the Azure DevOps agents are included in the allow list.
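    On point 3, a quick local sanity check is that the task's `headers` input must parse as valid JSON and carry the `Bearer ` prefix, since a stray character there silently breaks authentication. A minimal sketch (the token value is a placeholder):

    ```python
    import json

    # Example value of the InvokeRESTAPI 'headers' input (token is a placeholder).
    headers_input = """
    {
    "Accept": "application/json",
    "Authorization": "Bearer dapi-example-token"
    }
    """

    headers = json.loads(headers_input)  # raises json.JSONDecodeError if malformed

    # The Databricks REST API expects the 'Bearer ' prefix before the PAT.
    assert headers["Authorization"].startswith("Bearer "), "missing 'Bearer ' prefix"
    print("headers parse OK:", sorted(headers))
    ```

    Running this against the exact string pasted into the pipeline will catch malformed JSON or a missing prefix before the request ever leaves Azure DevOps.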

    I hope this helps you troubleshoot the issue. Let me know if you have any further questions!

