Question

AsresShiferaw-6083 asked:

Azure Data Factory - Databricks

We are trying to handle schema drift on Databricks Delta tables, and it is not working through the Delta Lake connector in ADF.
If we use the Delta Lake API (i.e. the Merge API) with PySpark, it works fine as expected, but it does not work through the ADF Delta connector.
See the error we are getting from the ADF job below. Has anyone experienced this problem, and is there a resolution?


Error:


StatusCode":"DFExecutorUserError","Message":"Job failed due to reason: at Sink 'sink1': org.apache.spark.sql.AnalysisException: cannot resolve target.UserId in UPDATE clause given columns target.`Id`, target.`IsActive`, target.`CreatedById`, target.`Name`, target.`LastModifiedById`, target.`Type`, target.`CreatedDate`, target.`SystemModstamp`, target.`LastModifiedDate`;





azure-data-factory

Hi @AsresShiferaw-6083,

Just checking in to see if the answer below from @JikaiJackMa-MSFT helped. If it answers your query, please click Accept Answer and up-vote it. If you have any further questions, do let us know.


1 Answer

JikaiJackMa-MSFT answered:

Hi @AsresShiferaw-6083,

Yes, this issue is due to a limitation of the internal library used by ADF data flows. Please follow this troubleshooting guidance to resolve it: https://docs.microsoft.com/en-us/azure/data-factory/data-flow-troubleshoot-connector-format#delta

Thanks,
Jack


WolfLorenz-3847 commented:

Thanks for the link. A fix would be highly appreciated, as it would significantly reduce the additional complexity that comes with the workaround. How can we track the progress of the fix mentioned in the docs?
