Greetings & Welcome to Microsoft Q&A forum! Thanks for posting your query!
The error message indicates that the vCores available to your Spark pool are exhausted.
The vCore limit depends on the node size and the number of nodes. To resolve this error, you can scale up the node size or increase the number of nodes in the pool.
For example: if you choose the Small node size (4 vCores / 32 GB) with 6 nodes, the total number of vCores available is 4 * 6 = 24 vCores.
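As a quick way to see the arithmetic, here is a small Python sketch (the node-size-to-vCore mapping below is an assumption based on the standard Synapse node sizes; adjust it for your pool):

```python
# Illustrative vCore arithmetic for a Synapse Spark pool.
# Assumed node sizes: Small = 4 vCores, Medium = 8 vCores, Large = 16 vCores.
NODE_VCORES = {"Small": 4, "Medium": 8, "Large": 16}

def pool_vcores(node_size: str, node_count: int) -> int:
    """Total vCores the pool can consume when all nodes are in use."""
    return NODE_VCORES[node_size] * node_count

print(pool_vcores("Small", 6))   # 4 * 6 = 24 vCores
print(pool_vcores("Medium", 6))  # 8 * 6 = 48 vCores
```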
Regarding your statement: "I thought that each application needed 12 vcores, so for 4 applications I´ll needed 48 vcores, then 50 vcores of quota should be enough"
To run a single notebook (application), the number of vCores it needs depends on its session configuration and workload. Some notebooks may use 12 vCores and others 20 vCores.
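If it helps to reason about this, the vCores one application requests are roughly the driver cores plus executor cores times the executor count. Here is a purely illustrative Python sketch of that arithmetic (in a Synapse notebook these per-session values can usually be tuned with the %%configure magic; the numbers below are only examples):

```python
def app_vcores(driver_cores: int, executor_cores: int, num_executors: int) -> int:
    """Approximate vCores a single Spark application (notebook session) requests."""
    return driver_cores + executor_cores * num_executors

print(app_vcores(driver_cores=4, executor_cores=4, num_executors=2))  # 12 vCores
print(app_vcores(driver_cores=4, executor_cores=4, num_executors=4))  # 20 vCores
```

With a 50 vCore quota, four applications at 12 vCores each would fit, but heavier session configurations would not.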
Here is another example from the documentation (in terms of nodes); a small sketch of this behaviour follows the list:
- You create a Spark pool called SP1; it has a fixed cluster size of 20 nodes.
- You submit a notebook job, J1, that uses 10 nodes; a Spark instance, SI1, is created to process the job.
- You now submit another job, J2, that uses 10 nodes; because there is still capacity in the pool and the instance, J2 is also processed by SI1.
- If J2 had asked for 11 nodes, there would not have been capacity in SP1 or SI1. In this case, if J2 comes from a notebook, then the job will be rejected; if J2 comes from a batch job, then it will be queued.
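To make the accept/reject/queue behaviour above concrete, here is a toy Python model of that decision (purely illustrative; this is not an Azure API):

```python
class SparkPoolModel:
    """Toy model of Spark pool capacity checks, mirroring the SP1 example above."""

    def __init__(self, total_nodes: int):
        self.total_nodes = total_nodes
        self.used_nodes = 0
        self.queue = []

    def submit(self, name: str, nodes: int, source: str) -> str:
        free = self.total_nodes - self.used_nodes
        if nodes <= free:
            self.used_nodes += nodes
            return f"{name}: accepted ({self.used_nodes}/{self.total_nodes} nodes in use)"
        if source == "notebook":
            return f"{name}: rejected (only {free} nodes free)"
        self.queue.append((name, nodes))          # batch jobs wait for capacity
        return f"{name}: queued"

sp1 = SparkPoolModel(total_nodes=20)
print(sp1.submit("J1", 10, "notebook"))  # accepted, 10/20 nodes in use
print(sp1.submit("J2", 10, "notebook"))  # accepted, 20/20 nodes in use
print(sp1.submit("J3", 11, "notebook"))  # rejected: no capacity left
print(sp1.submit("J4", 11, "batch"))     # queued until capacity frees up
```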
Reference document: https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-concepts
For vCore allocation, please refer to this thread: https://learn.microsoft.com/en-us/answers/questions/1163256/how-is-node-allocation-done-for-spark-pools-in-syn
Hope this helps. Do let us know if you have any further queries.
If this answers your query, do click Accept Answer and Yes for "Was this answer helpful".