Without more details about your environment, I'll make a few assumptions and answer point by point:
Processing files one at a time is possible, but it tends to be inefficient in a distributed environment like Azure Synapse. Spark is designed for parallel processing, so launching a separate job per file adds a lot of overhead, particularly in your case, where each pipeline execution already takes a significant amount of time (see the sketch below).
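For illustration, here is a minimal sketch of reading several files in one Spark job instead of one job per file. The paths, container names, and CSV format are assumptions about your layout, not your actual setup:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()

# One read call over several paths lets Spark plan a single job that
# processes all of the files in parallel across the pool.
file_paths = [
    "abfss://container@account.dfs.core.windows.net/incoming/file1.csv",
    "abfss://container@account.dfs.core.windows.net/incoming/file2.csv",
]
df = spark.read.option("header", "true").csv(file_paths)
df.write.mode("append").parquet(
    "abfss://container@account.dfs.core.windows.net/curated/output/"
)
```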
A minute per run may seem high, but that's not unusual for Spark. The overhead comes from provisioning resources, initializing the job, and reading and writing data. If you need millisecond-level latency, Spark is probably not the right tool; it excels at processing large batches of data in parallel.
As for the error message you mentioned, it indicates that you are hitting the limit on the number of concurrent pipeline executions. You can either request an increase to that limit or change your approach so that each execution processes more files.
If you're seeing 0-byte files, the trigger is most likely firing before the write is complete. Using a success file, as you mentioned, is a good way to ensure the trigger only fires after the data has been fully written (a sketch follows).
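As a rough sketch of that idea (the paths, the `_SUCCESS` marker name, and the use of `mssparkutils` are assumptions; adapt them to your setup), you would write the marker only after the data write returns and point the storage event trigger at the marker instead of the data files:

```python
from pyspark.sql import SparkSession
from notebookutils import mssparkutils  # utility available in Synapse Spark pools

spark = SparkSession.builder.getOrCreate()

landing = "abfss://container@account.dfs.core.windows.net/incoming/"
output_path = "abfss://container@account.dfs.core.windows.net/curated/batch_001/"

df = spark.read.option("header", "true").csv(landing)
df.write.mode("overwrite").parquet(output_path)  # returns only once the data is written

# Create the marker last; configure the storage event trigger to fire on
# blobs ending in "_SUCCESS" rather than on the data files themselves.
mssparkutils.fs.put(output_path + "_SUCCESS", "done", True)
```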
My recommendation is to process multiple files per run. Consider triggering the pipeline on a schedule (for example, every 10 minutes) and processing all the files that have arrived since the previous run. This lets you take advantage of Spark's parallel processing, reduces the overhead of spinning up individual jobs, and will likely improve overall throughput. A sketch of this pattern is shown below.
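This sketch assumes a schedule trigger runs a notebook like the following every 10 minutes; the folder layout, the archive step, and the `mssparkutils` calls are assumptions you would adapt to your environment:

```python
from pyspark.sql import SparkSession
from notebookutils import mssparkutils

spark = SparkSession.builder.getOrCreate()

landing = "abfss://container@account.dfs.core.windows.net/incoming/"
curated = "abfss://container@account.dfs.core.windows.net/curated/"
archive = "abfss://container@account.dfs.core.windows.net/processed/"  # assumed to exist

# Pick up every non-empty file currently sitting in the landing folder.
pending = [f.path for f in mssparkutils.fs.ls(landing) if f.size > 0]

if pending:
    # A single Spark job reads all pending files in parallel.
    df = spark.read.option("header", "true").csv(pending)
    df.write.mode("append").parquet(curated)

    # Move the inputs out of the landing folder so the next run does not reprocess them.
    for path in pending:
        mssparkutils.fs.mv(path, archive + path.split("/")[-1])
```

Compared to one pipeline run per file, this keeps you well under the concurrent execution limit and amortizes the per-job startup cost over a whole batch of files.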