If your sink is a file-based sink, the most performant setting is usually to leave the partitioning at the default ("Use current partitioning"). This lets Spark determine the best number of partitions based on the number of cores in the worker nodes, so your data flow scales proportionally as you add cores to your integration runtime to scale up execution.
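ADF data flows execute on Spark, so conceptually "current partitioning" just lets whatever partitioning the upstream transformations produced flow straight into the sink write. A minimal PySpark sketch of that idea (the storage paths and account name are hypothetical, purely for illustration):

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("current-partitioning-sketch").getOrCreate()

# Hypothetical source path; Spark derives the initial partition count from the
# source splits and the cores available on the workers.
df = spark.read.parquet("abfss://source@mystorageaccount.dfs.core.windows.net/input/")

# With "current partitioning", no explicit repartition is applied before the
# sink: Spark keeps its existing partitioning and writes one file per partition.
(df.write
   .mode("overwrite")
   .parquet("abfss://sink@mystorageaccount.dfs.core.windows.net/output/"))
```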
For more detail, here is the latest ADF data flows performance guide: https://learn.microsoft.com/en-us/azure/data-factory/concepts-data-flow-performance
I would only manually set the file sink partitioning if you want to control the partitioned file and folder structure (which adds processing time) or if you intentionally want to minimize the number of partitions the data flow can use.
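In raw PySpark terms, those two manual choices correspond roughly to the sketch below; the column names, partition count, and paths are hypothetical, and the ADF sink options only approximately map to these calls:

```python
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("manual-partitioning-sketch").getOrCreate()
df = spark.read.parquet("abfss://source@mystorageaccount.dfs.core.windows.net/input/")

# Option 1: key-based partitioning, roughly what the sink's key partition
# option does. It produces one folder per key value (e.g. year=2023/month=05/),
# at the cost of an extra shuffle to group rows by key.
(df.write
   .mode("overwrite")
   .partitionBy("year", "month")
   .parquet("abfss://sink@mystorageaccount.dfs.core.windows.net/by-date/"))

# Option 2: force a fixed, small number of partitions, roughly what a fixed or
# single-partition sink setting does. Fewer partitions means fewer output
# files, but also less parallelism during the write.
(df.repartition(4)
   .write
   .mode("overwrite")
   .parquet("abfss://sink@mystorageaccount.dfs.core.windows.net/few-files/"))
```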