Batch Parallelism in AX – Part I

 

Dynamics AX 2012 and AX 2009 can break a batch job down into small, manageable fragments and process them independently in parallel. Processing the fragments in parallel is critical to improving the throughput and response time of the batch job, and it can shrink the batch window considerably. There are a few different approaches to breaking a huge batch job into small fragments. The three common ones are:

1. Batch bundling

2. Individual task modeling

3. Top picking

Each one has its own pros and cons. Knowing the approaches and choosing the right one can sometimes get the job done in a fraction of the time instead of hours.

Let us take a peek at each one. This is a four-part article. As a running example, I will wrap a Sales Order posting API in each of the three approaches and invoice different workloads of Sales Orders. In Part IV, I will compare the performance numbers of each approach for the different workloads.

Note: The code used in this article is only an example. Do NOT use it for your Sales Order posting needs. The default AX 2012 Sales Order posting form uses a much more sophisticated and feature-rich way of handling this parallelism.

Batch Bundling

In this model you create a static number of tasks and split the work among them, as evenly as possible, by grouping the work items into bundles. Each worker thread processes a bundle of work items before picking up the next bundle. This works fine if all the tasks take roughly the same amount of time to process each bundle; in the ideal situation, every worker is actively doing the same amount of work. But when the workload is variable, because of data composition or differences in server hardware, this approach is not the most efficient: you may end up waiting for the last few threads working on the bigger bundles to complete while the other threads finished long before.

Here is the sample code.

DemoBatchBundles:

public class DemoBatchBundles extends RunBaseBatch
{
    str 20 fromSalesOrder, toSalesOrder;

    #define.CurrentVersion(1)
    #localmacro.CurrentList
        fromSalesOrder, toSalesOrder
    #endmacro
}

public void new()
{
    super();
}

public container pack()
{
    return [#CurrentVersion, #CurrentList];
}

private void parmEndBlock(str _toSalesOrder)
{
    toSalesOrder = _toSalesOrder;
}

private void parmStartBlock(str _fromSalesOrder)
{
    fromSalesOrder = _fromSalesOrder;
}

void run()
{
    SalesTable      salesTable;
    SalesFormLetter formLetter;
    Map             salesMap;

    info(fromSalesOrder + ':' + toSalesOrder);

    /* Each task knows the range of work items it needs to process.
       This range information was packed when the task was created. */
    while select * from salesTable
        where salesTable.salesId >= fromSalesOrder
           && salesTable.salesId <= toSalesOrder
           && salesTable.documentStatus == DocumentStatus::None
    {
        formLetter = SalesFormLetter::construct(DocumentStatus::Invoice);
        formLetter.getLast();
        formLetter.resetParmListCommonCS();
        formLetter.allowEmptyTable(formLetter.initAllowEmptyTable(true));

        salesMap = new Map(Types::Int64, Types::Record);
        salesMap.insert(salesTable.RecId, salesTable);

        formLetter.parmDataSourceRecordsPacked(salesMap.pack());
        formLetter.createParmUpdateFromParmUpdateRecord(
            SalesFormletterParmData::initSalesParmUpdateFormletter(DocumentStatus::Invoice, formLetter.pack()));
        formLetter.showQueryForm(false);
        formLetter.initLinesQuery();
        formLetter.update(salesTable, systemDateGet(), SalesUpdate::All, AccountOrder::None, false, false);
    }
}

public boolean unpack(container packedClass)
{
    Version version = RunBase::getVersion(packedClass);

    switch (version)
    {
        case #CurrentVersion:
            [version, #CurrentList] = packedClass;
            break;
        default:
            return false;
    }
    return true;
}

public static DemoBatchBundles construct(str _fromSalesOrder, str _toSalesOrder)
{
    DemoBatchBundles c;

    c = new DemoBatchBundles();
    c.parmStartBlock(_fromSalesOrder);
    c.parmEndBlock(_toSalesOrder);
    return c;
}

Job to schedule the above batch:

/* Tasks are created to process work items in groups of the bundle size.
   The range between fromSalesOrder and toSalesOrder is one bundle of work items. */
static void scheduleDemoBundlesJob(Args _args)
{
    int              blockSize = 1000; // my bundle size
    BatchHeader      batchHeader;
    DemoBatchBundles demoBatchBundles;
    SalesTable       salesTable;
    str              fromSalesOrder, toSalesOrder;
    str              lastSalesId;
    BatchInfo        batchInfo;
    int              counter = 0;

    ttsBegin;

    select count(RecId) from salesTable
        where salesTable.salesId >= 'SO-00400001'
           && salesTable.salesId <= 'SO-00500000'
           && salesTable.documentStatus == DocumentStatus::None;

    if (salesTable.RecId > 0)
    {
        batchHeader = BatchHeader::construct();
        batchHeader.parmCaption(strFmt('Batch job for DemoBatchBundles: invoice Sales Orders %1 thru %2', 'SO-00400001', 'SO-00500000'));

        while select salesId from salesTable
            order by salesTable.SalesId
            where salesTable.salesId >= 'SO-00400001'
               && salesTable.salesId <= 'SO-00500000'
               && salesTable.documentStatus == DocumentStatus::None
        {
            counter++;
            if (counter == 1)
            {
                fromSalesOrder = salesTable.salesId;
            }
            if (counter == blockSize)
            {
                toSalesOrder = salesTable.salesId;

                // Each task is created to process one bundle of work items
                // (in this case, a range of Sales Orders).
                demoBatchBundles = DemoBatchBundles::construct(fromSalesOrder, toSalesOrder);
                info(fromSalesOrder + ' : ' + toSalesOrder);
                batchInfo = demoBatchBundles.batchInfo();
                batchInfo.parmCaption(fromSalesOrder + ' : ' + toSalesOrder);
                batchHeader.addTask(demoBatchBundles);
                counter = 0;
            }
            lastSalesId = salesTable.SalesId;
        }

        // Handle the spillover: the number of Sales Orders in this
        // last bundle will be less than the bundle size.
        if (counter > 0)
        {
            toSalesOrder = lastSalesId;
            demoBatchBundles = DemoBatchBundles::construct(fromSalesOrder, toSalesOrder);
            info(fromSalesOrder + ' : ' + toSalesOrder);
            batchInfo = demoBatchBundles.batchInfo();
            batchInfo.parmCaption(fromSalesOrder + ' : ' + toSalesOrder);
            batchHeader.addTask(demoBatchBundles);
        }
        batchHeader.save();
    }

    ttsCommit;
    info('Done');
}

Assuming I am trying to process 100,000 work items:

#Tasks created: 100
#Batch threads (on my test server): 10
#Tasks that can be executed in parallel at any time: 10

Once the first 10 tasks complete, the batch framework will load the next 10 tasks and execute them, and so on; in this case it will load tasks about 10 times overall.
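The arithmetic behind that table can be sketched as a small job. This is just a minimal sketch; the numbers mirror the example above, and your batch thread count will differ:

static void demoBundleArithmetic(Args _args)
{
    int workItems  = 100000; // total Sales Orders to invoice
    int bundleSize = 1000;   // blockSize used by the scheduling job
    int threads    = 10;     // batch threads on the test server

    // Number of tasks = work items divided by bundle size, rounded up.
    int tasks = (workItems + bundleSize - 1) div bundleSize; // 100

    // The framework runs at most 'threads' tasks at once, so the
    // tasks are picked up in roughly this many waves.
    int waves = (tasks + threads - 1) div threads;           // 10

    info(strFmt('%1 tasks, executed in about %2 waves of up to %3 tasks each',
        tasks, waves, threads));
}

A larger bundle size means fewer tasks and less scheduling overhead, but coarser bundles make the uneven-workload problem described above worse.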

Comments

  • Anonymous
    February 14, 2013
    Hi, we tried to post Free Text Invoices using this method, but we get several errors about GeneralJournalEntry table deadlocks. Can you please guide us? Regards, Yakup
  • Anonymous
    May 06, 2013
    How many free text invoices were you posting and how many failed with deadlocks?
  • Anonymous
    October 29, 2013
    Same here. We had a batch of 200 invoices, ran them in 20 separate tasks, and 19 tasks failed due to Subledger errors (locks). The single task that worked posted its invoices just fine.