Try using a list comprehension in Python. It won't be the most efficient option for a very large number of files, but it can be a quick fix when the file count is manageable.
# List the directory; dbutils.fs.ls returns FileInfo objects with .path, .name, and .size
files = dbutils.fs.ls("/path/to/your/directory")
# Keep only the files whose name starts with "Energy"
filtered_files = [file for file in files if file.name.startswith("Energy")]
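If you then need just the full paths (for example, to pass to a reader), you can pull them out of the FileInfo objects; a minimal follow-up, assuming this runs in a Databricks notebook where dbutils is available:
# Extract the full paths from the filtered FileInfo objects
paths = [f.path for f in filtered_files]
print(paths)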
If the number of files is extremely large, another approach is to parallelize the listing with Spark: read the directory as a DataFrame and then filter it based on your requirements.
from pyspark.sql import SparkSession

# In a Databricks notebook, spark already exists; getOrCreate() returns that session
spark = SparkSession.builder.appName("FilterFiles").getOrCreate()
# The binaryFile source lists files in parallel; select only "path" to skip the file contents
df = spark.read.format("binaryFile").load("/path/to/your/directory").select("path")
# Match only files whose name (the last path segment) starts with "Energy";
# path.contains("Energy") would also match that string anywhere in the path
filtered_df = df.filter(df.path.rlike("/Energy[^/]*$"))
# collect() brings the matching paths back to the driver
filtered_files = [row.path for row in filtered_df.collect()]
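As a variant of the same idea, the binaryFile source also accepts a pathGlobFilter option, so the prefix match can be pushed into the listing itself instead of being applied afterwards; a minimal sketch, assuming the same directory layout:
# pathGlobFilter restricts the listing to matching file names before any rows are built
df = (spark.read.format("binaryFile")
      .option("pathGlobFilter", "Energy*")
      .load("/path/to/your/directory")
      .select("path"))
filtered_files = [row.path for row in df.collect()]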