Hello @Shivank.Agarwal,
Thanks for the question and for using the MS Q&A platform.
Every Azure Synapse workspace comes with a default quota of vCores that can be used for Spark. The quota is split between the user quota and the dataflow quota so that neither usage pattern uses up all the vCores in the workspace. The quota differs depending on the type of your subscription but is symmetrical between user and dataflow. However, if you request more vCores than are remaining in the workspace, you will get the error you are seeing, indicating that the request exceeds the available capacity.
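As a rough illustration of why the error appears (all numbers below are assumptions, not your actual quota), a session's demand is roughly the driver's cores plus the cores of all requested executors, and the request fails when that total exceeds what is left of the workspace quota:

```python
# Sketch of how a session's vCore demand compares to the remaining workspace quota.
# Every value here is an illustrative assumption.

workspace_vcore_quota = 50   # e.g. the user quota for your subscription tier
vcores_in_use = 24           # vCores already consumed by other active sessions/jobs

driver_cores = 8             # cores requested for the driver
executor_cores = 8           # cores per executor
num_executors = 4            # executors requested for the new session

requested = driver_cores + executor_cores * num_executors
remaining = workspace_vcore_quota - vcores_in_use

print(f"Requested: {requested} vCores, remaining: {remaining} vCores")
if requested > remaining:
    # This is the situation that produces the quota error described above.
    print("The session request exceeds the remaining workspace quota.")
```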
Is there any way to reduce the number of vcores for the active session?
No. When you define a Spark pool, you are effectively defining a per-user quota for that pool. If you run multiple notebooks or jobs, or a mix of the two, it is possible to exhaust the pool's quota.
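You cannot shrink a session that is already running, but you can start the next session with a smaller footprint so it leaves room for others. As a minimal sketch, a `%%configure` cell at the very top of a Synapse notebook lets you request fewer and smaller executors before the session starts (the `-f` flag forces a restart if a session is already active). The memory and core values below are illustrative assumptions and must match sizes your pool allows:

```
%%configure -f
{
    "driverMemory": "28g",
    "driverCores": 4,
    "executorMemory": "28g",
    "executorCores": 4,
    "numExecutors": 2
}
```

Note that this does not change the quota itself; it only reduces how much of it each session consumes.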
If not, then what should be the standard number of cores to request when increasing the quota?
That depends on your business requirement; you can request any number of vCores.
Important note: if you request 50-100 vCores, the backend team will typically approve the request quickly without asking for a business justification. If you request more than 100 vCores, they may ask for the business requirement, and approval will also depend on the credit limit of your subscription.
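To translate the business requirement into a concrete number, one simple approach is to add up the worst-case demand of the pools and concurrent sessions you expect to run. In the sketch below, the vCores per node size are the standard Synapse Spark node sizes, while the pool list, node counts, and concurrency are hypothetical and should be replaced with your own:

```python
# Back-of-the-envelope estimate of how many vCores to request.
# Node sizes are the standard Synapse Spark sizes; the pools, node counts,
# and concurrency below are illustrative assumptions only.

NODE_VCORES = {"Small": 4, "Medium": 8, "Large": 16, "XLarge": 32, "XXLarge": 64}

pools = [
    # (node size, max nodes with autoscale, expected concurrent sessions)
    ("Medium", 10, 2),  # hypothetical pool used interactively from notebooks
    ("Large", 5, 1),    # hypothetical pool used for scheduled batch jobs
]

worst_case = sum(NODE_VCORES[size] * max_nodes * sessions
                 for size, max_nodes, sessions in pools)

print(f"Worst-case concurrent demand: {worst_case} vCores")
# Request at least this number, plus some headroom for growth.
```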
To resolve this issue, you need to request a capacity increase via the Azure portal by creating a new support ticket.
Step 1: Create a new support ticket and select the issue type Service and subscription limits (quotas) and the quota type Azure Synapse Analytics.
Step 2: In the Details tab, click Enter details, choose the quota type Apache Spark (vCore) per workspace, select the workspace, and enter the requested quota.
Step 3: Select a support method and create the ticket.
For more details, refer to Apache Spark in Azure Synapse Analytics Core Concepts.
Hope this helps. Do let us know if you have any further queries.
---------------------------------------------------------------------------
Please "Accept the answer" if the information helped you. This will help us and others in the community as well.