When you connect Power BI to a folder in ADLS Gen2, it lists all the files in the folder rather than interpreting them as a Delta table. Power BI can read Parquet files directly, but it doesn't understand the Delta Lake format or its transaction log.
Delta Lake is a storage layer that brings ACID (Atomicity, Consistency, Isolation, Durability) transactions to Apache Spark and big data workloads. Reading a Delta table correctly means interpreting both the Parquet data files and the `_delta_log` transaction log, which records which of those files belong to the current version of the table. Power BI doesn't natively support that.
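For context, a Delta table folder typically looks something like this (names illustrative); a plain folder listing in Power BI shows all of these files with no indication of which ones are current:

```
/delta/sales/
├── _delta_log/
│   ├── 00000000000000000000.json
│   └── 00000000000000000001.json
├── part-00000-....snappy.parquet
└── part-00001-....snappy.parquet
```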
You could use Azure Synapse Analytics or Azure Databricks to read the Delta table, convert it to a format Power BI can consume, and export it to a location Power BI can access: another set of plain Parquet files, a CSV file, or a direct connection via a SQL endpoint.
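As a minimal sketch of that export step, assuming a Spark environment (Synapse or Databricks) that already has access to the storage account; the account, container, and paths below are placeholders:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# Read the Delta table; Spark resolves the transaction log to the
# current snapshot rather than scanning every file in the folder.
df = spark.read.format("delta").load(
    "abfss://data@youraccount.dfs.core.windows.net/delta/sales"
)

# Write a flat copy as plain Parquet that Power BI can read directly.
(df.write
   .mode("overwrite")
   .parquet("abfss://data@youraccount.dfs.core.windows.net/export/sales"))
```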
Power BI has a built-in connector for Azure Databricks. You can register your Delta table on a Databricks cluster and then have Power BI query the data through that connector.
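One way to make the table visible to the connector is to register the Delta folder in the metastore. A sketch, again with placeholder names for the database, table, and path:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()  # already provided on Databricks

# Run once on the cluster: point a metastore table at the Delta folder
# so it shows up to Power BI through the Databricks connector.
spark.sql("CREATE DATABASE IF NOT EXISTS analytics")
spark.sql("""
    CREATE TABLE IF NOT EXISTS analytics.sales
    USING DELTA
    LOCATION 'abfss://data@youraccount.dfs.core.windows.net/delta/sales'
""")
```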
If you don't need a transactionally consistent view of the data, you could point Power BI directly at the Parquet files inside the Delta table's folder. Be aware, though, that this ignores the transaction log: you get every Parquet file physically present in the folder, including files that updates or compaction have logically removed (until VACUUM deletes them), so the result can contain stale or duplicate rows rather than a clean latest snapshot.
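A quick way to see the discrepancy, sketched in PySpark with a placeholder path, is to compare a log-aware read with a raw read of the same folder:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
path = "abfss://data@youraccount.dfs.core.windows.net/delta/sales"

# Log-aware read: only the files in the current table version.
delta_count = spark.read.format("delta").load(path).count()

# Raw read: every Parquet data file in the folder (Spark skips the
# underscore-prefixed _delta_log directory, but not removed data files).
raw_count = spark.read.parquet(path).count()

# After updates, deletes, or OPTIMIZE, raw_count can exceed delta_count.
print(delta_count, raw_count)
```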
You could also add a step in Azure Data Factory that flattens the Delta table into plain Parquet or CSV files that Power BI can read directly.
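The flattening itself is configured in the ADF UI (for example, a mapping data flow with a Delta inline source and a Parquet sink). If you then want to trigger that pipeline from code, here is a sketch using the Python management SDK; all resource names are placeholders:

```python
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient

# Requires the azure-identity and azure-mgmt-datafactory packages.
client = DataFactoryManagementClient(
    credential=DefaultAzureCredential(),
    subscription_id="<subscription-id>",  # placeholder
)

# Kick off the (already-defined) pipeline that writes the flat copy.
run = client.pipelines.create_run(
    resource_group_name="my-rg",                # placeholder
    factory_name="my-data-factory",             # placeholder
    pipeline_name="flatten_delta_to_parquet",   # placeholder
)
print(run.run_id)
```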