Improving Performance Accessing Tables in Dexterity

David Meego

Here are a couple of really cool tricks that can boost the performance of your Dexterity code for almost no effort at all. The techniques discussed below can significantly speed up access to table buffers in functions and procedures when looping through a range of records in a table.

The Theory

When Dexterity first accesses a table buffer, it opens a connection to the physical table at the database server or operating system level. It then creates a table buffer which it uses to communicate with the physical table. Other table buffers for the table can be created as needed. A table buffer is destroyed when it loses scope, either because the procedure it was created for has completed or because the form it is associated with closes. When the final table buffer for a table is destroyed, the connection to the physical table is closed.

The process of opening and closing a connection to the external physical table is relatively slow compared to the creation and destruction of an internal table buffer. All the techniques listed below are based on the same concept: open the connection early, hold it open while the code is running, and avoid opening and closing the connection multiple times.

To demonstrate what I am talking about, let's say that external connections take 10ms to open and 5ms to close, and internal table buffers take 2ms to create and 1ms to destroy. Say I have a procedure (X) which processes a table (A) and, for each of the 100 records in table A, calls a child procedure (Y) which updates another table (B).

In the original code below, the parent procedure X passes the table buffer for table A to the child procedure Y. This means that a table buffer for table B will be created (and destroyed) each time procedure Y is executed.

Original Code Example

{ Procedure X }
range clear table A;

get first table A by number 1;
while err() <> EOF do
    call Y, table A;

    get next table A by number 1;
end while;
{ End }

{ Procedure Y }
inout table A; {Positioned}

clear table B;
copy table A to table B;
save table B;
check error;
{ End }

So the time taken would be as follows:

Time taken for Procedure X

= Open Table A + Create Table Buffer A + Destroy Table Buffer A + Close Table A

= 10ms + 2ms + 1ms + 5ms = 18ms

Time taken for Procedure Y

= Open Table B + Create Table Buffer B + Destroy Table Buffer B + Close Table B

= 10ms + 2ms + 1ms + 5ms = 18ms

Total Time

= Procedure X + 100 Records * Procedure Y

= 18ms + 100 * 18ms = 1818ms
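The arithmetic above can be checked with a short cost-model sketch. This is plain Python rather than Dexterity, and the constants simply encode the assumed timings from the example:

```python
# Assumed costs from the example (hypothetical figures, in ms).
OPEN, CLOSE = 10, 5      # external connection open/close
CREATE, DESTROY = 2, 1   # internal table buffer create/destroy
RECORDS = 100

# Original code: procedure X pays the full cost for table A once,
# and procedure Y pays the full cost for table B on every call.
cost_x = OPEN + CREATE + DESTROY + CLOSE   # 18 ms
cost_y = OPEN + CREATE + DESTROY + CLOSE   # 18 ms per call
total = cost_x + RECORDS * cost_y
print(total)  # 1818
```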

So let's look at the ways we can improve on this.

 

Method 1

Open all the tables used in any child procedures from the top level procedure. Just a clear table command is enough to access the table and open the connection.

Method 1 Code Example - add clear table commands

 { Procedure X }

{ use clear table to open connections to tables }
clear table A; 
clear table B;

range clear table A;

get first table A by number 1; 
while err() <> EOF do
    call Y, table A;
    
    get next table A by number 1; 
end while; 
{ End }

{ Procedure Y }
inout table A; {Positioned}

clear table B;
copy table A to table B;
save table B;
check error;
{ End }

So now the calculations are:

Time taken for Procedure X

= Open Table A + Create Table Buffer A + Destroy Table Buffer A + Close Table A
+ Open Table B + Create Table Buffer B + Destroy Table Buffer B + Close Table B

= 10ms + 2ms + 1ms + 5ms + 10ms + 2ms + 1ms + 5ms = 36ms

Time taken for Procedure Y

= Create Table Buffer B + Destroy Table Buffer B

= 2ms + 1ms = 3ms

Total Time

= Procedure X + 100 Records * Procedure Y

= 36ms + 100 * 3ms = 336ms

The time taken now is approx. 18% of the original time.
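Modelling the costs the same way in Python (an illustrative sketch, not Dexterity), Method 1 works out like this:

```python
OPEN, CLOSE = 10, 5      # external connection open/close (assumed, in ms)
CREATE, DESTROY = 2, 1   # table buffer create/destroy (assumed, in ms)
RECORDS = 100

# Method 1: X opens and closes the connections for both tables once,
# so each call to Y only creates and destroys a buffer for table B.
cost_x = 2 * (OPEN + CREATE + DESTROY + CLOSE)   # 36 ms
cost_y = CREATE + DESTROY                        # 3 ms per call
total = cost_x + RECORDS * cost_y
original = 18 + RECORDS * 18                     # 1818 ms for the original code
print(total, round(100 * total / original))  # 336 18
```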

 

Method 2

Open all the tables used in any child procedures from the top level procedure and pass them as inout parameters to the child scripts.

Method 2 Code Example - pass tables as parameters

 { Procedure X }

{ use clear table to open connections to tables }
clear table A; 
clear table B;

range clear table A;

get first table A by number 1; 
while err() <> EOF do
    { Pass table buffer }
    call Y, table A, table B;

    get next table A by number 1;
end while;
{ End }

{ Procedure Y }
inout table A; {Positioned}
inout table B;

clear table B;
copy table A to table B;
save table B;
check error;
{ End }

So now the calculations are:

Time taken for Procedure X

= Open Table A + Create Table Buffer A + Destroy Table Buffer A + Close Table A
+ Open Table B + Create Table Buffer B + Destroy Table Buffer B + Close Table B

= 10ms + 2ms + 1ms + 5ms + 10ms + 2ms + 1ms + 5ms = 36ms

Time taken for Procedure Y

= 0ms (the table buffers are passed in as parameters, so none are created or destroyed)

Total Time

= Procedure X + 100 Records * Procedure Y

= 36ms + 100 * 0ms = 36ms

The time taken now is approx 2% of the original time.
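The same sketch for Method 2 (again illustrative Python, using the assumed timings):

```python
OPEN, CLOSE = 10, 5      # external connection open/close (assumed, in ms)
CREATE, DESTROY = 2, 1   # table buffer create/destroy (assumed, in ms)
RECORDS = 100

# Method 2: X pays for both connections and both buffers once; Y
# receives the buffers as parameters and so adds no cost per call.
cost_x = 2 * (OPEN + CREATE + DESTROY + CLOSE)   # 36 ms
cost_y = 0                                       # buffers are passed in
total = cost_x + RECORDS * cost_y
original = 18 + RECORDS * 18                     # 1818 ms for the original code
print(total, round(100 * total / original))  # 36 2
```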

Note: This could require a long parameter list if a large number of tables is involved. It also means that all scripts will share the same table buffers, so accessing multiple records from the same table will require additional code to position and re-position the table buffer.

 

Summary

I know that these numbers are arbitrary, but they demonstrate the concepts and show how we can improve the performance of our code with some simple techniques. Method 1 requires almost no changes to the scripts; just add a clear table command for each of the tables used anywhere in the scripts. Method 2 gains even more performance, but requires every script to have the table buffers passed to it as parameters.

When working with Report Writer functions, a similar performance issue can occur when a table is accessed for every body record of the report. The following Knowledge Base (KB) article explains a method of using a dummy form to open the tables and hold them open until the report completes.

How to improve the performance of user-defined Report Writer functions in Microsoft Dynamics GP 9.0 or in Microsoft Great Plains (KB 920830)

 

I hope you find this information useful and adopt some of the ideas for your code.

David