Merge into a SQL Server table using a DataFrame
How do I use a merge statement with two DataFrames, e.g. df1 = spark.sql("""select col1, col2 from table1"""), where Table2 lives in SQL Server?
Primary key in ADF pipeline
An ADF pipeline is failing because the target table has a primary key and NOT NULL columns. How do I handle this situation in ADF? Note that the primary key only creates a unique constraint.
Merge statement with two DataFrames
Hi, how do I use a merge statement with two DataFrames? df1 = spark.sql("""select col1, col2 from table1"""); df2 = spark.sql("""select col1, col2 from table2"""). Expected result: merge into table2 using table1 on…
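Since the excerpt is cut off before the merge condition, here is a minimal plain-Python sketch of what MERGE INTO does, assuming col1 is the merge key (the question does not say): matched keys take the source value (WHEN MATCHED … UPDATE), unmatched source keys are inserted (WHEN NOT MATCHED … INSERT). In Databricks itself this would be a Delta `MERGE INTO table2 USING table1 ON …` statement rather than this simulation.

```python
def merge(target, source):
    """Upsert source rows into target, keyed by col1.

    Mirrors the two branches of a SQL MERGE:
    - matched keys: target's col2 is updated from source (WHEN MATCHED)
    - unmatched keys: the source row is inserted (WHEN NOT MATCHED)
    """
    merged = dict(target)   # key: col1, value: col2
    merged.update(source)   # update matches, insert the rest
    return merged

table2 = {1: "a", 2: "b"}   # target rows (col1 -> col2)
table1 = {2: "B", 3: "C"}   # source rows
print(merge(table2, table1))  # {1: 'a', 2: 'B', 3: 'C'}
```

The dict names (table1, table2) and the choice of col1 as the key are taken from the question's wording; the actual join condition would come from the truncated `on…` clause.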
How to assign an identity to Azure Databricks
How do I assign an identity to Azure Databricks? The identity option is not available in the portal, contrary to the Microsoft documentation.
How to fetch data from Azure Active Directory (AD) using either ADF or Databricks
How can I fetch data from Azure Active Directory (AD) using either Azure Data Factory (ADF) or Azure Databricks? Please let me know in detail. Thanks.
Downsizing subnets associated with a Databricks workspace
Hi, is it possible to downsize the private and public subnets that currently have nothing attached to them but are used by a Databricks workspace? Would this require the Databricks cluster to be redeployed? Are there any extra steps needed so the…
Can we connect ADX to Databricks without using an App Registration?
I am trying to access ADX data from Databricks but failing to configure the connection between Databricks and ADX. I don't have permission to use App Registrations. Is there any way to do this with a managed identity or another alternative?
SSL handshake error when running SQL queries in an Azure Databricks workspace (SQL warehouse and UC-enabled shared cluster)
Hello team, we have a UC-enabled Azure Databricks workspace with public access and Delta Sharing disabled. When running the SQL query below on a SQL warehouse as well as on a UC-enabled shared cluster, I am receiving an…
How to reduce unnecessarily high memory usage in a Databricks cluster?
We are seeing unnecessarily high memory usage even when nothing is running on the cluster. When the cluster first starts, it's fine, but after I run a script and it finishes executing, the cluster never returns to its idle (initial) state (even hours after nothing…
Guidance on using a service principal with a certificate to authorize an Event Hubs stream read
I found this documentation online on using a service principal with a certificate for Spark stream reads from Event Hubs: https://github.com/Azure/azure-event-hubs-spark/blob/master/docs/use-aad-authentication-to-connect-eventhubs.md. I want to do this…
Set cloudFiles.maxFileAge and cloudFiles.backfillInterval values in Auto Loader
I'm using the following Auto Loader options: .option("cloudFiles.maxFileAge", "90 days") and .option("cloudFiles.backfillInterval", "1 day"). Our data retention policy is 7 years. Should I set maxFileAge to 7 years…
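For context, these options sit in the Auto Loader read stream roughly as sketched below (a non-runnable config fragment; the format and path are placeholders, not from the question). cloudFiles.maxFileAge bounds how long per-file ingestion state is tracked, and cloudFiles.backfillInterval schedules periodic backfill scans.

```python
# Sketch of where the Auto Loader options in question are set.
# "json" and the load path are placeholder assumptions.
df = (spark.readStream
      .format("cloudFiles")
      .option("cloudFiles.format", "json")
      .option("cloudFiles.maxFileAge", "90 days")
      .option("cloudFiles.backfillInterval", "1 day")
      .load("/mnt/source/path"))
```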
How to specify a custom catalog name for an Azure Databricks Delta Lake dataset in ADF
Hello, I am creating an Azure Databricks Delta Lake dataset in ADF, and I am only able to choose a database name linked to Databricks's hive_metastore. How can I specify a custom catalog that I created in Databricks instead of…
Serverless warehouse suddenly fails to start up
Hey all, since today we are suddenly getting the error below when starting a serverless warehouse. Details for the latest failure: Error: Cluster launch timeout. Type: SERVICE_FAULT. Code: K8S_DBR_CLUSTER_LAUNCH_TIMEOUT. Warehouse details: Type:…
Can we run Delta Live Tables with a free-tier Azure account?
While running a Delta Live Tables pipeline in Azure Databricks, I'm getting an error: QuotaExceeded, error message: Operation could not be completed as it results in exceeding approved Total Regional Cores quota. Additional details - Deployment…
How to solve an Invalid SessionHandle error with Azure Databricks?
I am building a chatbot using LangChain's SQLDatabaseChain and GPT-4. I first created this model in an Azure Databricks notebook like this: import json, os, langchain, mlflow; from mlflow.models import…
How do I share all of my Databricks notebooks with all Databricks users?
Hi all, I know I've done this in the distant past, but we have a new Databricks instance and I need a global setting to share all of my notebooks with all Databricks users (read only), so that I don't need to remember to share individual…
Azure Quota Exceeded Exception (SkuNotAvailable) when creating a Databricks cluster
While creating a cluster in Databricks I am getting the following error: Azure Quota Exceeded Exception: Error code: SkuNotAvailable, error message: The requested VM size for resource 'Following SKUs have failed for Capacity Restrictions: Standard_DS3_v2' is…
Array in Databricks
Hi, I have the first row in sorted order. How can I produce rows 1, 2, and 3 below using an array, and how does the reshuffling happen? 1,2,3,4,5 --- sorted order; 5,1,2,3,4 - row 1; 4,5,1,2,3 - row 2; 3,4,5,1,2 - row 3.
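The pattern shown looks like a right rotation of the sorted row, one extra step per row; a minimal plain-Python sketch of that reshuffle (independent of Spark, which the question doesn't require for the rotation logic itself):

```python
def rotate_right(seq, k):
    """Return a copy of seq rotated right by k positions,
    i.e. the last k elements move to the front."""
    k %= len(seq)
    return seq[-k:] + seq[:-k]

sorted_row = [1, 2, 3, 4, 5]
for k in range(1, 4):
    print(f"row {k}:", rotate_right(sorted_row, k))
# row 1: [5, 1, 2, 3, 4]
# row 2: [4, 5, 1, 2, 3]
# row 3: [3, 4, 5, 1, 2]
```

Each row in the question is the previous row rotated right by one more step, which is what `rotate_right(sorted_row, k)` reproduces for k = 1, 2, 3.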
Azure Databricks fails
Hello, in the Databricks notebook provided by Microsoft training classes, when I tried to read data (CSV or JSON) like path = source + "/wikipedia/pagecounts/staging_parquet_en_only_clean/"; files =…
Databricks cluster sizing
Hey, how do I calculate the cluster cores and number of worker nodes for a 10 GB data load every 2 hours? What is the calculation behind this?