Hello @Rohit Kulkarni ,
Thanks for the question and using MS Q&A platform.
I tried the code below and it does the job.
%python
# Read the change data feed for commit version 9 only
delta_df = (spark.read.format("delta")
    .option("readChangeFeed", "true")
    .option("startingVersion", 9)
    .option("endingVersion", 9)
    .table("studentstest"))
# Keep only the post-change rows (drop update pre-images and deletes)
delta_df = delta_df.filter(~delta_df["_change_type"].isin("update_preimage", "delete"))
# Drop the change-feed metadata columns, keeping just the data columns
delta_df = delta_df.select("name", "address", "student_id")
# Write the content back to the table
delta_df.write.mode("overwrite").saveAsTable("studentstest")
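If the goal is simply to roll the table back to the state it had at version 9, Delta Lake's time travel may be simpler than reading the change feed. A minimal sketch, assuming your runtime supports the RESTORE command (Databricks Runtime 7.4 and above):

```sql
-- Inspect the table as it was at version 9
SELECT * FROM studentstest VERSION AS OF 9;

-- Roll the table back to that version in place
RESTORE TABLE studentstest TO VERSION AS OF 9;
```

RESTORE records the rollback as a new commit, so the table's history is preserved and the operation itself can be undone.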
Himanshu
Please click "Accept Answer" if the answer provided was useful, so that you can help others in the community looking for remediation for similar issues.