Hello @rt and welcome to Microsoft Q&A. Thank you for your well-considered question.
At this time, there is no feature to control pipeline concurrency with respect to parameter values.
Other than the option you outlined, I can think of one more possibility: a sort of home-brew custom scheduler. The details depend upon whether you are starting pipelines manually or via triggers.
If you are triggering manually, you may want to build an application outside of Data Factory. This application would receive requests for pipeline runs and query the Data Factory service to either get the status of existing runs or start a new run. It could use the REST API, PowerShell, or any of the SDKs/CLI. This approach has an advantage over the next option: it can provide feedback to the user. A rough sketch of the idea follows.
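Here is a minimal sketch of that external scheduler using the Python SDK (azure-mgmt-datafactory). The subscription, resource group, factory, and pipeline names are placeholders, and the parameter-comparison logic is only illustrative, not a definitive implementation:

```python
from datetime import datetime, timedelta, timezone

from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import RunFilterParameters, RunQueryFilter

# Placeholder values -- substitute your own.
SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY_NAME = "<factory-name>"
PIPELINE_NAME = "<pipeline-name>"

client = DataFactoryManagementClient(DefaultAzureCredential(), SUBSCRIPTION_ID)


def run_if_not_busy(parameters: dict):
    """Start PIPELINE_NAME with `parameters` unless an in-progress run
    already uses the same parameter values. Returns the new run id, or
    None if a conflicting run was found (caller can retry later)."""
    now = datetime.now(timezone.utc)
    query = RunFilterParameters(
        last_updated_after=now - timedelta(hours=24),
        last_updated_before=now,
        filters=[
            RunQueryFilter(operand="PipelineName", operator="Equals", values=[PIPELINE_NAME]),
            RunQueryFilter(operand="Status", operator="In", values=["InProgress", "Queued"]),
        ],
    )
    active = client.pipeline_runs.query_by_factory(RESOURCE_GROUP, FACTORY_NAME, query)

    # Compare the parameters of each active run against the requested ones.
    requested = {k: str(v) for k, v in parameters.items()}
    for run in active.value:
        if run.parameters == requested:
            return None  # a run with these parameters is already in flight

    response = client.pipelines.create_run(
        RESOURCE_GROUP, FACTORY_NAME, PIPELINE_NAME, parameters=parameters
    )
    return response.run_id
```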
While roundabout, it is possible to use the ADF Web activity to query the ADF service and get the list of current or past pipeline runs. Suppose we have a pipeline which takes as input the parameters intended for another pipeline. This pipeline would query the service, check for in-progress runs, and compare parameters. Depending upon the result, it could trigger the desired pipeline and pass the parameters along, wait and check again later, or halt.
Your triggers would point to this "proxy" pipeline. This does necessitate having one "proxy" pipeline for every one "business" pipeline, but hopefully that is a smaller increase than one pipeline per parameter permutation. The sketch below shows the request shape the Web activity would use.
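For reference, the call the proxy pipeline's Web activity would make is the queryPipelineRuns REST endpoint. The sketch below is in Python purely to illustrate the URL and request body; in the Web activity you would use the same URL, method POST, and body, with managed identity authentication against https://management.azure.com/. The names and the time window are placeholders:

```python
import requests
from azure.identity import DefaultAzureCredential

SUBSCRIPTION_ID = "<subscription-id>"
RESOURCE_GROUP = "<resource-group>"
FACTORY_NAME = "<factory-name>"

# Same request the ADF Web activity would issue.
url = (
    "https://management.azure.com/subscriptions/"
    f"{SUBSCRIPTION_ID}/resourceGroups/{RESOURCE_GROUP}"
    f"/providers/Microsoft.DataFactory/factories/{FACTORY_NAME}"
    "/queryPipelineRuns?api-version=2018-06-01"
)
body = {
    "lastUpdatedAfter": "2024-01-01T00:00:00Z",   # placeholder window
    "lastUpdatedBefore": "2024-01-02T00:00:00Z",
    "filters": [
        {"operand": "PipelineName", "operator": "Equals", "values": ["<business-pipeline>"]},
        {"operand": "Status", "operator": "In", "values": ["InProgress", "Queued"]},
    ],
}
token = DefaultAzureCredential().get_token("https://management.azure.com/.default").token
resp = requests.post(url, json=body, headers={"Authorization": f"Bearer {token}"})

# Each entry in the "value" array includes a "parameters" object, which the
# proxy pipeline can inspect (e.g. with an If Condition activity) before
# deciding to trigger, wait, or halt.
print(resp.json()["value"])
```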
Did I communicate effectively?