Hello Manash,
When dynamic allocation is enabled, Spark may acquire more executors than you expect. It also means that your application can give resources back to the cluster when they are no longer being used and request them again later when there is demand.
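If it helps, here is a minimal sketch of how dynamic allocation is typically configured when building the session yourself. The app name and the min/max executor counts are placeholder values you would tune for your own workload:

    import org.apache.spark.sql.SparkSession

    // Sketch only: bounds below are illustrative, not recommendations.
    val spark = SparkSession.builder()
      .appName("dynamic-allocation-example")
      .config("spark.dynamicAllocation.enabled", "true")
      .config("spark.dynamicAllocation.minExecutors", "2")          // floor kept even when idle
      .config("spark.dynamicAllocation.maxExecutors", "20")         // hard cap on scale-up
      .config("spark.dynamicAllocation.executorIdleTimeout", "60s") // release executors idle this long
      .config("spark.shuffle.service.enabled", "true")              // usually needed so shuffle data survives executor removal
      .getOrCreate()

Capping maxExecutors is the usual way to keep the scale-up behavior you are seeing within a predictable bound.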
My understanding is that if the additional executors were created but never actually used during job execution, you will not be charged for them.
Another possible reason for the additional executors is garbage collection. When Spark runs a job, it creates a large number of objects in memory. These objects are managed by the Java Virtual Machine (JVM) and are periodically cleaned up by the garbage collector. If the garbage collector cannot keep up with the rate of object creation, the JVM may run out of memory, and Spark may add additional executors to the pool to handle the increased workload. One mitigation discussed in the post linked below is tuning the executors' garbage collector, for example by switching to G1GC; see the sketch after this paragraph.
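As a rough sketch of the kind of GC tuning that post describes (the exact flags depend on your Spark and JDK versions, so treat these as illustrative):

    import org.apache.spark.sql.SparkSession

    // Sketch only: switch executors to the G1 collector and log GC
    // activity so you can see whether collections are falling behind.
    val spark = SparkSession.builder()
      .appName("gc-tuning-example")
      .config("spark.executor.extraJavaOptions",
        "-XX:+UseG1GC -XX:+PrintGCDetails -XX:+PrintGCDateStamps")
      .getOrCreate()

Checking the resulting GC logs first is usually the quickest way to confirm whether memory pressure is actually what is driving the extra executors.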
Please refer to the document below:
https://www.databricks.com/blog/2015/05/28/tuning-java-garbage-collection-for-spark-applications.html