Hello SaiSekhar, MahasivaRavi (Philadelphia),
Reading multiple CSV files with PySpark is discussed here: https://sparkbyexamples.com/spark/spark-read-multiple-csv-files/
You can try the code below and let me know how it goes.
```python
# Define the file paths
file_paths = [
    "abfss://test@testsalesdatalake.dfs.core.windows.net/Bronze/properties/2024/01/26/test1.csv",
    "abfss://test@testsalesdatalake.dfs.core.windows.net/Bronze/properties/2024/02/02/test1.csv",
    "abfss://test@testsalesdatalake.dfs.core.windows.net/Bronze/properties/2024/02/03/test2.csv",
]

# Read the CSV files into a single DataFrame
# (DataFrameReader.load accepts a list of paths)
df = (
    spark.read.format("csv")
    .option("header", "true")
    .option("inferSchema", "true")
    .load(file_paths)
)

# Show the DataFrame
df.show()
```