Thanks for reaching out to Q&A.
You can install the Simba Spark ODBC Driver to query Azure Databricks over an ODBC connection. However, you won't be able to install the ODBC driver inside Azure Functions because of the Functions sandbox limitation.
You could instead use Windows Containers on Azure App Service, where you have control over which drivers or custom software to install.
To run Functions inside a container, you can use the following Windows container image for Azure Functions v3: mcr.microsoft.com/azure-functions/dotnet:3.0-nanoserver-1809, and run it on App Service: https://learn.microsoft.com/en-us/azure/app-service/quickstart-custom-container?pivots=container-windows&tabs=dotnet
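As a rough sketch of the custom-container approach, a Dockerfile along these lines could bake the ODBC driver into the Functions image. The installer URL and file name below are placeholders (get the real MSI from the Databricks/Simba download page), and note that Nano Server images cannot run MSI installers, so a Server Core-based Functions tag is assumed here — verify which base tags support msiexec before building.

```dockerfile
# Sketch only — base tag, installer URL, and driver file name are assumptions.
# A Server Core-based Azure Functions image is assumed, since Nano Server
# cannot execute MSI installers.
FROM mcr.microsoft.com/azure-functions/dotnet:3.0
SHELL ["powershell", "-Command"]

# Download and silently install the Simba Spark ODBC driver (placeholder URL)
RUN Invoke-WebRequest -Uri https://example.com/SimbaSparkODBC64.msi -OutFile C:\SimbaSparkODBC64.msi ; \
    Start-Process msiexec -ArgumentList '/i','C:\SimbaSparkODBC64.msi','/qn' -Wait ; \
    Remove-Item C:\SimbaSparkODBC64.msi

# Copy the function app into the image
COPY . C:\home\site\wwwroot
```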
Please note that you won't have the portal experience.
I hope this helps!
Please 'Accept as answer' and 'Upvote' if it helped, so that it can help others in the community looking for help on similar topics.
Hi @MughundhanRaveendran-MSFT ,
I will not be able to go with the App Service option. I got a suggestion to use Databricks SQL endpoints. From what I understand, to use Databricks SQL endpoints I would still require the ODBC driver.
I can call the REST API, but it does not have the capability to return the result back to Azure Functions. One option is to write the result to storage and read it from there in my function.
Could you please let me know if my understanding is correct?
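For reference, the ODBC route the question describes is typically a DSN-less connection string passed to pyodbc once the driver is installed. The sketch below only builds the string (no driver needed to run it); the keyword names follow the Simba Spark ODBC driver's configuration keys, but verify them against the driver documentation for your version, and the host/path/token values are placeholders.

```python
# Sketch: build a DSN-less ODBC connection string for a Databricks SQL endpoint.
# Keyword names are taken from the Simba Spark ODBC driver's configuration keys;
# confirm them against the driver documentation for your version.

def databricks_odbc_conn_str(host: str, http_path: str, token: str) -> str:
    """Return an ODBC connection string suitable for pyodbc.connect()."""
    parts = {
        "Driver": "Simba Spark ODBC Driver",
        "Host": host,
        "Port": "443",
        "SSL": "1",
        "ThriftTransport": "2",   # HTTP transport
        "HTTPPath": http_path,    # from the SQL endpoint's connection details
        "AuthMech": "3",          # user/password auth; use "token" as the user
        "UID": "token",
        "PWD": token,
    }
    return ";".join(f"{k}={v}" for k, v in parts.items())

# Placeholder values for illustration only:
conn_str = databricks_odbc_conn_str(
    "adb-1234567890123456.7.azuredatabricks.net",
    "/sql/1.0/endpoints/abcdef1234567890",
    "dapiXXXXXXXXXXXX",
)
# With the driver installed, you would then connect with:
#   import pyodbc; pyodbc.connect(conn_str, autocommit=True)
```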
@Karpagam Gurumurthy, if you write the result to storage, it can be read using a Storage blob trigger or Storage queue trigger function.
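A minimal sketch of the "read the result back" step, assuming the Databricks job writes its query result to a blob as newline-delimited JSON (the schema below is hypothetical). In a real blob-triggered function the bytes would arrive via the trigger's input stream; here a plain `bytes` value stands in so the parsing logic is self-contained.

```python
# Sketch: decode a newline-delimited JSON result blob into rows.
# The blob layout and field names are assumptions for illustration.
import json
from typing import Any, Dict, List

def parse_result_blob(blob_bytes: bytes) -> List[Dict[str, Any]]:
    """Decode a newline-delimited JSON result blob into a list of row dicts."""
    rows = []
    for line in blob_bytes.decode("utf-8").splitlines():
        if line.strip():  # skip blank lines
            rows.append(json.loads(line))
    return rows

# Example payload a Databricks job might have written (hypothetical schema):
sample = b'{"id": 1, "name": "alpha"}\n{"id": 2, "name": "beta"}\n'
rows = parse_result_blob(sample)
# rows -> [{'id': 1, 'name': 'alpha'}, {'id': 2, 'name': 'beta'}]
```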
Were you able to get to a solution for this problem? I am also looking at a potential solution for a similar scenario: SQL endpoints to be consumed in Azure Function apps.
How is the performance overall with this additional layer of writing to and reading from storage?