Hello @Ryan Abbey ,
Thanks for the question and using MS Q&A platform.
Unfortunately, there is no built-in mechanism to prioritize jobs based on file size. However, Azure Synapse does provide automatic scaling out of the box in Apache Spark pools.
Apache Spark pools provide the ability to automatically scale up and down compute resources based on the amount of activity.
- When the autoscale feature is enabled, you can set the minimum and maximum number of nodes to scale.
- When the autoscale feature is disabled, the number of nodes set will remain fixed.
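As a sketch, autoscale can be enabled when creating a Spark pool with the Azure CLI. The resource names, node counts, and Spark version below are placeholders, so adjust them to your environment:

```shell
# Create a Spark pool with autoscale enabled (3 to 10 nodes).
# mysparkpool, myworkspace, and myresourcegroup are example names.
az synapse spark pool create \
  --name mysparkpool \
  --workspace-name myworkspace \
  --resource-group myresourcegroup \
  --spark-version 3.3 \
  --node-size Medium \
  --node-count 3 \
  --enable-auto-scale true \
  --min-node-count 3 \
  --max-node-count 10
```

With `--enable-auto-scale false` (or the flag omitted), the pool stays fixed at the `--node-count` value instead.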
For more details, refer to Apache Spark pool configurations in Azure Synapse Analytics and Automatically scale Azure Synapse Analytics Apache Spark pools.
Hope this helps. Do let us know if you have any further queries.
---------------------------------------------------------------------------
Please "Accept the answer" if the information helped you. This will help us and others in the community as well.