Hello @Mayuri Kadam ,
Welcome to the Microsoft Q&A platform.
The ProtocolChangedError occurs when a new table is created concurrently in the same directory, i.e., when multiple streams write output to the same Delta location. Rerunning the same query should succeed, and subsequent runs will not hit this issue. If you are writing to a particular partition in overwrite mode, please set the Spark conf below -
sparkSession.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")
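For context, here is a minimal PySpark sketch of how that setting is typically used when overwriting a single partition of a partitioned Delta table. The path /mnt/delta/events and the date column are placeholders, not from your setup; this is a configuration sketch that assumes a running Spark cluster with Delta Lake support:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("dynamic-overwrite").getOrCreate()

# With "dynamic" mode, an overwrite replaces only the partitions
# present in the incoming DataFrame; other partitions are untouched.
# The default "static" mode would truncate the whole table first.
spark.conf.set("spark.sql.sources.partitionOverwriteMode", "dynamic")

# Hypothetical sample data for one partition.
df = spark.createDataFrame(
    [("2023-01-01", "click"), ("2023-01-01", "view")],
    ["date", "event"],
)

(df.write
   .format("delta")
   .mode("overwrite")
   .partitionBy("date")
   .save("/mnt/delta/events"))  # hypothetical path
```

Only the date=2023-01-01 partition is rewritten here, which narrows the window for conflicting concurrent writers touching other partitions.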
Hope this helps. Do let us know if you have any further queries.
------------
- Please accept an answer if correct. Original posters help the community find answers faster by identifying the correct answer. Here is how.
- Want a reminder to come back and check responses? Here is how to subscribe to a notification.