How to execute Hive query in Databricks?

Suman Dutta 1 Reputation point
2022-02-11T12:21:07.377+00:00

We are calling a ".jar" file from Azure Data Factory using the Databricks JAR activity. In the activity we reference a Databricks linked service that specifies an existing cluster ID.
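For context, a Databricks linked service that pins activities to an existing cluster typically looks like the sketch below; the workspace URL, cluster ID, and token value are placeholders, not values from our setup:

```json
{
  "name": "AzureDatabricksLinkedService",
  "properties": {
    "type": "AzureDatabricks",
    "typeProperties": {
      "domain": "https://<workspace-url>.azuredatabricks.net",
      "existingClusterId": "<cluster-id>",
      "accessToken": {
        "type": "SecureString",
        "value": "<databricks-pat>"
      }
    }
  }
}
```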

On the Databricks cluster we are adding the Spark config below:

spark.hadoop.hive.warehouse.subdir.inherit.perms true
spark.databricks.delta.preview.enabled true
spark.hadoop.dfs.adls.adls2name.hostname adls2name.dfs.core.windows.net
spark.sql.warehouse.dir abfss://clusters@adls2name.blob.core.windows.net/nonprod/warehouse/
spark.hadoop.dfs.adls.adls2name.mountpoint /clusters/nonprod/
spark.hadoop.dfs.adls.home.hostname adls2name.dfs.core.windows.net
spark.hadoop.dfs.adls.home.mountpoint /clusters/foldername/
spark.hadoop.dfs.adls.oauth2.refresh.url https://login.microsoftonline.com/xxxxxxxxxx/oauth2/token
spark.hadoop.dfs.adls.oauth2.client.id
spark.hadoop.dfs.adls.oauth2.credential
spark.hadoop.dfs.adls.oauth2.access.token.provider.type ClientCredential
spark.hadoop.fs.azure.account.key.stagedata01.blob.core.windows.net
spark.hadoop.fs.azure.account.key.adls2name.blob.core.windows.net
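As a side note, the `dfs.adls.oauth2.*` keys above are the OAuth settings for ADLS Gen1 (`adl://` paths), while the hostnames point to a Gen2 account. A Gen2 (`abfss://`) account is usually configured with the `fs.azure.account.*` key family instead; a hedged sketch with placeholder values:

```
spark.hadoop.fs.azure.account.auth.type.adls2name.dfs.core.windows.net OAuth
spark.hadoop.fs.azure.account.oauth.provider.type.adls2name.dfs.core.windows.net org.apache.hadoop.fs.azurebfs.oauth2.ClientCredsTokenProvider
spark.hadoop.fs.azure.account.oauth2.client.id.adls2name.dfs.core.windows.net <application-id>
spark.hadoop.fs.azure.account.oauth2.client.secret.adls2name.dfs.core.windows.net <client-secret>
spark.hadoop.fs.azure.account.oauth2.client.endpoint.adls2name.dfs.core.windows.net https://login.microsoftonline.com/<tenant-id>/oauth2/token
```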

The JAR file is stored in the "stagedata01" ADLS account. The JAR copies files from one location to another in the "adls2name" ADLS account.
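The copy step can be sketched as follows. `abfss_uri` is a hypothetical helper, not part of our JAR; it also illustrates that `abfss://` paths pair with the `dfs.core.windows.net` endpoint rather than `blob.core.windows.net`:

```python
# Minimal sketch of the copy step; abfss_uri is a hypothetical helper.
def abfss_uri(container: str, account: str, path: str) -> str:
    """Build an ADLS Gen2 URI. The abfss:// scheme always pairs with the
    dfs.core.windows.net endpoint, not blob.core.windows.net."""
    return f"abfss://{container}@{account}.dfs.core.windows.net/{path.lstrip('/')}"

src = abfss_uri("clusters", "adls2name", "nonprod/source/data.parquet")
dst = abfss_uri("clusters", "adls2name", "nonprod/target/data.parquet")

# On a Databricks cluster the copy could then be performed with, e.g.:
# dbutils.fs.cp(src, dst)   # dbutils is available only inside Databricks
```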

But the job is failing.

Please help: how can we resolve this?

Tags: Azure Data Lake Storage, Azure HDInsight, Azure Databricks