Start by creating the linked services in ADF for both the source and sink Azure Blob Storage accounts.
Then create two datasets in ADF - one for the source container (Container_Source) and another for the sink container (Container_Sink). In the source dataset, point at the container (and the folder path, if the files sit in one) and leave the file name empty; the "Energy*.zip" pattern is applied in the Copy activity rather than in the dataset itself.
In the Copy Data activity, set the source to the dataset pointing at Container_Source and choose the wildcard file path option with a wildcard file name of Energy*.zip, so that only files starting with "Energy" and ending with ".zip" are selected. Set the sink to the dataset pointing at Container_Sink.
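If it helps to see the selection logic spelled out, here is a rough Python sketch of what the setup above amounts to, using the azure-storage-blob SDK directly instead of ADF. The connection strings are placeholders, the container names are taken from your scenario, and a SAS token may be needed on the source URL if the container is not publicly readable:

```python
from fnmatch import fnmatch
from azure.storage.blob import BlobServiceClient

# Placeholder connection strings; in ADF these credentials live in the two linked services.
source_service = BlobServiceClient.from_connection_string("<source-connection-string>")
sink_service = BlobServiceClient.from_connection_string("<sink-connection-string>")

# Container clients, roughly what the two datasets point at.
source_container = source_service.get_container_client("Container_Source")
sink_container = sink_service.get_container_client("Container_Sink")

# Equivalent of the wildcard file path "Energy*.zip" in the Copy activity source.
for blob in source_container.list_blobs(name_starts_with="Energy"):
    if not fnmatch(blob.name, "Energy*.zip"):
        continue
    # Server-side copy into the sink container; append a SAS token to this URL
    # if the source container is not publicly readable.
    source_url = f"{source_container.url}/{blob.name}"
    sink_container.get_blob_client(blob.name).start_copy_from_url(source_url)
```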
To implement the incremental copy logic: on the first execution, simply run the pipeline to copy all existing files that match the pattern. To copy only new files afterwards, I recommend using a metadata store (such as an Azure SQL Database table) where you log the files that have already been copied. On each subsequent run, the pipeline checks this store to determine which files are new since the last run.
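Outside of ADF, the metadata-store approach boils down to the sketch below; the local SQLite table is only a stand-in for the Azure SQL Database log table (in the pipeline itself you would typically combine Get Metadata, Lookup, Filter and ForEach activities around the Copy activity to achieve the same check):

```python
import sqlite3
from fnmatch import fnmatch
from azure.storage.blob import BlobServiceClient

# Stand-in for the metadata store (e.g. an Azure SQL Database table of copied files).
log_db = sqlite3.connect("copied_files.db")
log_db.execute("CREATE TABLE IF NOT EXISTS copied_files (name TEXT PRIMARY KEY)")
already_copied = {row[0] for row in log_db.execute("SELECT name FROM copied_files")}

source_service = BlobServiceClient.from_connection_string("<source-connection-string>")
sink_service = BlobServiceClient.from_connection_string("<sink-connection-string>")
source_container = source_service.get_container_client("Container_Source")
sink_container = sink_service.get_container_client("Container_Sink")

for blob in source_container.list_blobs(name_starts_with="Energy"):
    # Skip files that do not match the pattern or were copied in an earlier run.
    if not fnmatch(blob.name, "Energy*.zip") or blob.name in already_copied:
        continue
    source_url = f"{source_container.url}/{blob.name}"
    sink_container.get_blob_client(blob.name).start_copy_from_url(source_url)
    # Record the file so the next run skips it.
    log_db.execute("INSERT INTO copied_files (name) VALUES (?)", (blob.name,))
    log_db.commit()
```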
Alternatively, to filter by last modified date, store the timestamp of the last pipeline run and use it in the Copy activity source's "Filter by last modified" setting (modifiedDatetimeStart), so that only files modified after that timestamp are picked up. You can parameterize the pipeline (for example, the container names, file pattern, and watermark timestamp) to make it more flexible, and schedule a trigger to run it at your desired frequency.
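Here is a minimal sketch of the last-modified watermark idea; the local last_run.txt file is just a stand-in for wherever you persist the previous run's timestamp (a control table, a pipeline parameter, etc.):

```python
from datetime import datetime, timezone
from pathlib import Path
from fnmatch import fnmatch
from azure.storage.blob import BlobServiceClient

# Stand-in for the stored watermark: the timestamp of the last successful run.
WATERMARK_FILE = Path("last_run.txt")
last_run = (datetime.fromisoformat(WATERMARK_FILE.read_text().strip())
            if WATERMARK_FILE.exists()
            else datetime.min.replace(tzinfo=timezone.utc))
run_started = datetime.now(timezone.utc)

source_service = BlobServiceClient.from_connection_string("<source-connection-string>")
sink_service = BlobServiceClient.from_connection_string("<sink-connection-string>")
source_container = source_service.get_container_client("Container_Source")
sink_container = sink_service.get_container_client("Container_Sink")

for blob in source_container.list_blobs(name_starts_with="Energy"):
    # Copy only files modified since the previous run - the role that
    # modifiedDatetimeStart plays in the Copy activity source.
    if not fnmatch(blob.name, "Energy*.zip") or blob.last_modified <= last_run:
        continue
    source_url = f"{source_container.url}/{blob.name}"
    sink_container.get_blob_client(blob.name).start_copy_from_url(source_url)

# Persist the new watermark for the next scheduled run.
WATERMARK_FILE.write_text(run_started.isoformat())
```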