Azure Database for PostgreSQL
An Azure managed PostgreSQL database service for app development and deployment.
Explain_Analyze_from_Equivalent_Environment_For_Problematic_insert.txt
Hi team,
We have a table whose total size (including indexes) is 173 GB, with around 132,530,734 records. The table has 28 indexes, and these storage parameters are set on it:
FILLFACTOR = 85,
autovacuum_enabled = TRUE,
autovacuum_analyze_scale_factor = 0.002,
autovacuum_vacuum_cost_delay = 0,
autovacuum_vacuum_scale_factor = 0,
autovacuum_vacuum_threshold = 100000
No partitioning is done on the Transaction table.
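For reference, storage parameters like the ones listed above are applied with ALTER TABLE ... SET; a sketch (the table name `transactions` is a placeholder, since the actual table name is not given in the post):

```sql
-- Hypothetical table name; substitute the real Transaction table.
ALTER TABLE transactions SET (
    fillfactor = 85,
    autovacuum_enabled = true,
    autovacuum_analyze_scale_factor = 0.002,
    autovacuum_vacuum_cost_delay = 0,
    autovacuum_vacuum_scale_factor = 0,
    autovacuum_vacuum_threshold = 100000
);
```

The current values can be inspected via the `reloptions` column of `pg_class` for that table.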
We have a module that processes 5,000 records per batch every 15 s. Within the entire stored procedure, the INSERT into this table alone takes more than 60 s. I have attached the EXPLAIN ANALYZE plan for the insert query. Selecting the data takes only 273 ms, but the insert into the main table takes 60 s. Are there any suggestions for improving batch insert timing on big tables?
Note: work_mem is set to 40MB at the procedure level.
We have tried the following:
1. Insert the data into a temp table, then insert from the temp table into the main table. (This does not improve the timing: inserting into the temp table takes 1 s, but the insert from it into the main table still takes more than 56 s.)
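The staging approach described in item 1 can be sketched as follows (table and column references are placeholders, assuming a table named `transactions`):

```sql
-- Stage the batch in a temp table first (no indexes, so this is fast).
CREATE TEMP TABLE batch_stage
    (LIKE transactions INCLUDING DEFAULTS)
    ON COMMIT DROP;

-- Populate the stage from the batch source (placeholder query).
INSERT INTO batch_stage
SELECT * FROM incoming_batch;   -- hypothetical source; ~1 s in the report above

-- The final insert still maintains all 28 indexes per row,
-- which is where the >56 s is spent.
INSERT INTO transactions
SELECT * FROM batch_stage;
```

The staging step does not help here because the cost is index maintenance on the target table, which is incurred regardless of where the rows come from.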
Kindly suggest any other ways to improve batch insert timing on big tables.
Thanks and regards,
Manimahesh S
@Mani Mahesh S Thank you for reaching out.
My understanding is that you are having issues with insert performance on a large table with 28 indexes.
Assuming this is an incremental load, could you please try the best practices in this doc: https://learn.microsoft.com/en-us/azure/postgresql/flexible-server/how-to-bulk-load-data#best-practices-for-incremental-data-loads
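One technique from that doc worth trying is to drop nonessential indexes before the load and rebuild them afterwards, since each of the 28 indexes is updated for every inserted row. A hedged sketch (index and table names are placeholders, not from the original post):

```sql
-- Placeholder names; repeat for each index that is safe to drop during the load.
DROP INDEX IF EXISTS idx_transactions_customer;

-- ... run the batch INSERT here ...

-- CONCURRENTLY avoids blocking reads/writes while the index is rebuilt,
-- at the cost of a slower build; it cannot run inside a transaction block.
CREATE INDEX CONCURRENTLY idx_transactions_customer
    ON transactions (customer_id);
```

Whether this pays off depends on batch size: for 5,000-row batches every 15 s, rebuilding indexes per batch is likely too expensive, so it fits better for larger, less frequent loads. Reducing the index count itself (dropping unused or redundant indexes permanently) is usually the bigger win for steady-state insert throughput.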
Regards,
Oury