
Azure Data Factory - Databricks

Asres Shiferaw 21 Reputation points
2021-05-26T10:18:01.157+00:00

We are trying to handle schema drift for Databricks Delta tables, and it is not working through the Delta Lake connector in ADF.
If we use the Delta Lake API (i.e. the Merge API) via PySpark, it works as expected, but it fails through the ADF Delta connector.
Below is the error we get from the ADF job. Has anyone experienced this problem, and is there a resolution?

Error:


{"StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Sink 'sink1': org.apache.spark.sql.AnalysisException: cannot resolve target.UserId in UPDATE clause given columns target.Id, target.IsActive, target.CreatedById, target.Name, target.LastModifiedById, target.Type, target.CreatedDate, target.SystemModstamp, target.LastModifiedDate;"}

(Attached screenshots: 99738-image.png, 99739-image.png)

Azure Data Factory

An Azure service for ingesting, preparing, and transforming data at scale.


Answer accepted by question author

  Jack Ma 161 Reputation points
  2021-05-27T03:48:37.813+00:00

    Hi @Asres Shiferaw ,

    Yes, this issue is due to a limitation of the internal library used by ADF data flow. Please follow this troubleshooting guidance to resolve it: https://learn.microsoft.com/en-us/azure/data-factory/data-flow-troubleshoot-connector-format#delta

    Thanks,
    Jack



0 additional answers