Hello @reddy,
It’s hard to provide a sample code snippet that dynamically transforms all of the array-type columns without understanding the underlying column types present in your dataset.
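That said, here is a minimal sketch of one common approach: inspect the DataFrame schema for ArrayType columns and apply the same element-wise function to each. This assumes Spark 3.1+ (for pyspark.sql.functions.transform) and that the arrays hold strings; the DataFrame `df` and its columns below are hypothetical placeholders, not taken from your dataset.

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F
from pyspark.sql.types import ArrayType

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data: two array-of-string columns.
df = spark.createDataFrame(
    [(1, ["a", "b"], ["x"])],
    "id INT, tags ARRAY<STRING>, labels ARRAY<STRING>",
)

# Collect the names of all ArrayType columns from the schema.
array_cols = [f.name for f in df.schema.fields if isinstance(f.dataType, ArrayType)]

# Apply the same element-wise transformation (here, upper-casing)
# to every array column found above.
for c in array_cols:
    df = df.withColumn(c, F.transform(c, F.upper))

df.show()
```

If your arrays hold structs or other element types, the lambda passed to transform would need to change accordingly, which is exactly why the underlying column types matter.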
While working with nested data types, Delta Lake on Databricks optimizes certain transformations out of the box. The following notebooks contain many examples of how to convert between complex and primitive data types using functions natively supported in Apache Spark SQL.
For more details, refer to “Azure Databricks – Transform complex data types”.
I would request you to kindly go through the notebook below, which walks through several data transformation examples using Spark SQL. Spark SQL supports many built-in transformation functions in the module pyspark.sql.functions, so we will start off by importing that.
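As a quick illustration of the kind of complex-to-primitive conversions those examples cover (the tiny DataFrame here is made up purely for demonstration), explode() flattens an array into one row per element, and to_json() serializes a complex column into a plain JSON string:

```python
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession.builder.getOrCreate()

# Hypothetical sample data with a single array column.
df = spark.createDataFrame([(1, ["a", "b"])], "id INT, tags ARRAY<STRING>")

# explode() produces one output row per array element (complex -> primitive).
df.withColumn("tag", F.explode("tags")).show()

# to_json() serializes the array column into a single JSON string.
df.withColumn("tags_json", F.to_json("tags")).show(truncate=False)
```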
Hope this helps. Do let us know if you have any further queries.
----------------------------------------------------------------------------------------
Do click on "Accept Answer" and upvote the post that helps you, as this can be beneficial to other community members.