Synapse notebooks are not using overridden Spark resources configured with %%configure

Raut, Saurabh J 0 Reputation points
2024-10-01T11:20:46.7866667+00:00

We are trying to override the default settings for Synapse Spark notebooks, which use a very fat config for executor memory/cores, using a magic command like the one below. But when the notebook is executed it still demands 12 vCores and gives an out-of-quota error, because the 50-vCore quota is exhausted by other user sessions.


%%configure
{
    "driverMemory": "1g",
    "driverCores": 1,
    "executorMemory": "1g",
    "executorCores": 1
}
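For reference, here is a minimal variant of the same override, assuming the sparkmagic-style behaviour described for Synapse notebooks: %%configure is run as the first cell (before the Spark session starts), and the -f flag forces the session to restart so the new settings take effect. The numExecutors field is not in the original snippet and is shown here only as an illustration of also capping the executor count:

%%configure -f
{
    "driverMemory": "1g",
    "driverCores": 1,
    "executorMemory": "1g",
    "executorCores": 1,
    "numExecutors": 1
}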
Azure Synapse Analytics

2 answers

  1. Chandra Boorla 2,120 Reputation points Microsoft Vendor
    2024-10-01T18:25:46.8466667+00:00

    Hi @Raut, Saurabh J

    Greetings & Welcome to Microsoft Q&A forum! Thanks for posting your query!

    The error message indicates that the vCores in your Spark pool are exhausted.

    The vCore limit depends on the node size and the number of nodes. To resolve this error, you can scale up the node size and/or increase the number of nodes.

    For example, if you choose the Small node size (4 vCores / 32 GB) and 6 nodes, the total number of vCores will be 4 * 6 = 24 vCores.
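    As a rough illustration of that arithmetic (a minimal sketch; the node sizes, session size, and quota below are just the example numbers from this thread, not values read from your workspace):

    # Rough capacity arithmetic for a Synapse Spark pool (illustrative values only).
    NODE_VCORES = {"Small": 4, "Medium": 8, "Large": 16}  # vCores per node

    def pool_vcores(node_size: str, node_count: int) -> int:
        """Total vCores in the pool = vCores per node * number of nodes."""
        return NODE_VCORES[node_size] * node_count

    def sessions_that_fit(total_vcores: int, vcores_per_session: int) -> int:
        """How many sessions of a given size the pool (or quota) can run at once."""
        return total_vcores // vcores_per_session

    print(pool_vcores("Small", 6))        # 4 * 6 = 24 vCores
    print(sessions_that_fit(50, 12))      # a 50-vCore quota fits 4 sessions of 12 vCores each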

    Regarding your statement: "I thought that each application needed 12 vCores, so for 4 applications I'd need 48 vCores, and a quota of 50 vCores should be enough"

    To run a single notebook (application), the number of vCores required depends on the code. Some notebooks may use 12 vCores and some may use 20 vCores, depending on the workload.
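    As a quick check (a minimal sketch; the keys listed are standard Spark configuration names and `spark` is the session object Synapse provides in notebooks), you can print the session's effective resource settings from inside the notebook to confirm whether the %%configure override was actually applied:

    # Inspect the running session's resource settings (PySpark).
    conf = spark.sparkContext.getConf()
    for key in ("spark.driver.memory", "spark.driver.cores",
                "spark.executor.memory", "spark.executor.cores",
                "spark.executor.instances"):
        # get() returns the configured value, or the supplied default if the key is unset
        print(key, "=", conf.get(key, "<not set>"))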


    Here is one more example provided in the documentation (with respect to the nodes):

    • You create a Spark pool called SP1; it has a fixed cluster size of 20 nodes.
    • You submit a notebook job, J1, that uses 10 nodes; a Spark instance, SI1, is created to process the job.
    • You now submit another job, J2, that uses 10 nodes; because there is still capacity in the pool and the instance, J2 is processed by SI1.
    • If J2 had asked for 11 nodes, there would not have been capacity in SP1 or SI1. In this case, if J2 comes from a notebook, the job is rejected; if J2 comes from a batch job, it is queued (see the sketch below).
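    A minimal sketch of that admission logic (the function and numbers below are illustrative, modelling the documented example above, not Synapse's actual scheduler code):

    def admit(pool_nodes: int, nodes_in_use: int, requested: int, source: str) -> str:
        """Decide what happens to a new job given the pool's remaining capacity."""
        if nodes_in_use + requested <= pool_nodes:
            return "accepted (processed by the existing Spark instance)"
        # No capacity left: notebook jobs are rejected, batch jobs are queued.
        return "rejected" if source == "notebook" else "queued"

    # SP1 has 20 nodes and J1 already holds 10 of them:
    print(admit(20, 10, 10, "notebook"))   # J2 asking for 10 nodes -> accepted
    print(admit(20, 10, 11, "notebook"))   # J2 asking for 11 nodes -> rejected
    print(admit(20, 10, 11, "batch"))      # same request from a batch job -> queued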

    Reference document: https://learn.microsoft.com/en-us/azure/synapse-analytics/spark/apache-spark-concepts

    For vCore allocation, please refer to this thread: https://learn.microsoft.com/en-us/answers/questions/1163256/how-is-node-allocation-done-for-spark-pools-in-syn

    Hope this helps. Do let us know if you have any further queries.


    If this answers your query, please click "Accept Answer" and "Yes" for "Was this answer helpful". And if you have any further queries, do let us know.


  2. Deleted

    This answer has been deleted due to a violation of the Code of Conduct. It was reported manually or identified through automated detection before action was taken.
