Batch Parallelism in AX – Part IV

 

Comparison of the three techniques:

Even workload: Invoicing of 100,000 single-line Sales Orders. The bundle size used for this test is 1,000.

[Image: comparison of the three techniques for the even workload test]

 

Uneven workload: Invoicing of 1,000 Sales Orders, where the number of lines per order varies between 1 and 500. The bundle size used is 100.

[Image: comparison of the three techniques for the uneven workload test]

Very large number of work items: Since I wanted to use over a million work items for this test, instead of Sales Order invoicing I used a workload that completes much faster (check the status of a few different things and update my work item table accordingly). I used 2 million work items for this test. The metric below shows that when you create a very large number of work items, neither ‘top picking’ nor creating individual tasks will scale.

[Image: comparison of the three techniques for the very large number of work items test]

Recap:

 

Bundling:

PROS:

  • Works fine for a simple, even workload.
  • No staging table is needed, so there is no extra maintenance in the application code.
  • Does not pollute the batch tables.

CONS:

  • A fixed number of tasks is (usually) created when the job is scheduled. The batch framework is designed so that the number of batch threads can be grown or shrunk, either automatically through the batch server schedule or manually by the admin. Because the tasks are pre-created, the job will not scale up or down along with the batch schedule to use or yield the extra resources.
  • For an uneven workload, you may need a complex algorithm to distribute the work evenly.
  • In some applications, it may not be possible to distribute the workload evenly.

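To make the recap concrete, here is a rough sketch of how the bundling pattern is usually wired up with BatchHeader and RunBaseBatch. The worker class and its parm methods (MyBundleWorker, parmFromBundleIndex, parmToBundleIndex) are placeholders for illustration, not classes from this series; a real job would carve up the actual work identifiers rather than plain index ranges.

    // Sketch only: schedule a fixed number of bundle tasks up front.
    public static void scheduleBundledInvoicing()
    {
        BatchHeader    batchHeader = BatchHeader::construct();
        MyBundleWorker worker;                 // hypothetical RunBaseBatch subclass
        int            bundleCount = 100;      // fixed when the job is scheduled
        int            bundleSize  = 1000;     // e.g. 1,000 orders per bundle
        int            bundle;

        batchHeader.parmCaption("Invoice sales orders - bundled");

        for (bundle = 0; bundle < bundleCount; bundle++)
        {
            worker = MyBundleWorker::construct();

            // Each task owns one contiguous slice of the work. For an uneven
            // workload, computing these slices is where the complexity lives.
            worker.parmFromBundleIndex(bundle * bundleSize + 1);
            worker.parmToBundleIndex((bundle + 1) * bundleSize);

            batchHeader.addTask(worker);
        }

        batchHeader.save();
    }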
 

Individual Task modeling:

PROS:

  • Works fine with an uneven workload.
  • Simple to write.
  • Since the number of tasks is not fixed, the job will scale up or down along with the batch schedule, using or yielding the extra resources.
  • Best fit when you need to create dependencies among the work items.

CONS:

  • Relies fully on the batch framework.
  • When the number of tasks is very large, the overhead of the batch framework impedes performance quite severely.
  • It can negatively affect other batch jobs because it puts pressure on the framework tables.

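For comparison, a sketch of individual task modeling: one batch task is added per work item and the batch framework balances the load. MyInvoiceWorker and parmSalesTableRecId are placeholder names; the point is the one-task-per-item shape, which is also what inflates the framework tables when the item count gets large.

    // Sketch only: add one runtime task per sales order from a running batch task.
    public void scheduleIndividualTasks()
    {
        BatchHeader     batchHeader = this.getCurrentBatchHeader();
        SalesTable      salesTable;
        MyInvoiceWorker worker;                // hypothetical RunBaseBatch subclass

        while select RecId from salesTable
            where salesTable.SalesStatus == SalesStatus::Delivered
        {
            worker = MyInvoiceWorker::construct();
            worker.parmSalesTableRecId(salesTable.RecId);

            // Every work item becomes its own row in the framework tables.
            batchHeader.addRuntimeTask(worker, this.parmCurrentBatch().RecId);
        }

        batchHeader.save();
    }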
Top Picking:

PROS:

  • Works fine with an uneven workload.
  • Simple to write.
  • Does not pollute the batch tables.

CONS:

  • Needs an extra staging table to track the progress and the workload.
  • A fixed number of tasks is (usually) created when the job is scheduled. Because the tasks are pre-created, the job will not scale up or down along with the batch schedule to use or yield the extra resources.
  • When a very large number of short work items needs to be processed, tracking the work items through the staging table hurts performance and throughput.

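And a sketch of the top picking worker loop. MyWorkItem is a placeholder staging table with a Status enum (Ready, InProcess, Done) and a hypothetical processWorkItem helper; the exact claiming and locking strategy (and how contention between threads is minimized) was discussed earlier in this series, so this only shows the overall shape: each of a fixed set of tasks keeps pulling the next unprocessed row until the staging table is drained.

    // Sketch only: the run() of one top-picking worker task.
    public void run()
    {
        MyWorkItem workItem;     // hypothetical staging table
        MyWorkItem doneItem;
        boolean    more = true;

        while (more)
        {
            ttsbegin;

            // Claim the next unprocessed row. The pessimistic lock keeps two
            // threads from taking the same row, at the cost of some blocking.
            select firstonly pessimisticlock workItem
                where workItem.Status == MyWorkItemStatus::Ready;

            if (workItem)
            {
                workItem.Status = MyWorkItemStatus::InProcess;
                workItem.update();
            }
            else
            {
                more = false;    // staging table is drained; this task is done
            }

            ttscommit;

            if (more)
            {
                // Do the real work outside the claiming transaction.
                this.processWorkItem(workItem);   // hypothetical helper

                // This per-row status tracking is the overhead that shows up
                // when millions of very short work items go through the table.
                ttsbegin;
                select firstonly forupdate doneItem
                    where doneItem.RecId == workItem.RecId;
                doneItem.Status = MyWorkItemStatus::Done;
                doneItem.update();
                ttscommit;
            }
        }
    }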
Depending on the nature of the workload and the amount of work that needs to be done on a regular basis, you can choose the technique that suits your needs best.

Comments

  • Anonymous
    March 23, 2012
    If the workload was both very large and uneven, what would the best approach be? I assume bundling with some kind of algorithm to split up the workload evenly. If we were able to do that, is there a max number of batch tasks that should be allowed per batch in order to not stress the batch framework? Is that number 100? 500? 5000? I suppose the number of batch tasks should also depend on the number of cores on the DB server.
  • Anonymous
    March 23, 2012
    How large is your 'very large' workload? Based on the nature of the workload, you can spend a few extra minutes before creating the bundles to find the right size. Combining two of the techniques is also an option. The number of tasks that can be executed in parallel depends, first of all, on the number of batch threads available on your batch servers.
  • Anonymous
    March 23, 2012
    If I understand what you mean by overloading the batch table, wouldn't that be independent of the number of batch threads? For example, let's say 1 million GL journal transactions. If each journal had on average 250 lines, then there would be 4,000 batch tasks if 1 task was created to create 1 journal. We have the ability to go up to 64 threads on 32 cores.
  • Anonymous
    March 23, 2012
    Yes. Overloading of the batch framework tables is independent of the number of batch threads. The example you have given should not pose any problem. But assume you have 2 million journal transactions and each journal has just 2 lines; that means you have 1M journals. If you create 1M batch tasks to handle this load, then irrespective of whether you have 32 batch threads or 64 batch threads, you may see some performance impact.