Azure Database for PostgreSQL MemoryParameter blog feedback

2023-05-30T17:14:06.4166667+00:00

Customer is using Azure Database for PostgreSQL Flexible Server (Memory Optimized, E32s_v3, 32 vCores, 256 GiB RAM, 8192 GiB storage). They loaded around 4–5 TB of data and have since used the database as read-only; nothing else runs on this server. The customer would like the highest possible read performance.

Current configuration values are:

  • shared_buffers = 16777216 (× 8 kB = 128 GB = 50% of total RAM)
  • max_connections = 500: there is now no risk that max_connections × work_mem will overflow available memory
  • huge_pages: was off; the customer has changed it to on
  • effective_cache_size = 25165824 (× 8 kB = 192 GB = 75% of total RAM)
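Both parameters are counted in 8 kB pages, so the percentages above can be double-checked with a quick sketch (the page counts are taken from the configuration listed above):

```shell
# shared_buffers and effective_cache_size are expressed in 8 kB pages;
# convert the configured page counts back to gigabytes.
page_kb=8
shared_buffers_pages=16777216
effective_cache_pages=25165824

echo "shared_buffers       = $(( shared_buffers_pages * page_kb / 1024 / 1024 )) GB"   # 128 GB
echo "effective_cache_size = $(( effective_cache_pages * page_kb / 1024 / 1024 )) GB"  # 192 GB
```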

Customer Concerns:

It seems they cannot use the full potential of this machine. Is there a way for PostgreSQL to make use of most of the memory?

Azure Database for PostgreSQL

Accepted answer
  1. GeethaThatipatri-MSFT 29,542 Reputation points Microsoft Employee Moderator
    2023-05-30T20:33:51.1666667+00:00

    Hi. @Naveen Kumar Kakarla (Quadrant Resource LLC) Thanks for posting your question in the Microsoft Q&A forum.

    Can you try the steps below?

    Increase the shared_buffers parameter to 25% of RAM (64 GB in your case), and increase the effective_cache_size parameter to 50% of RAM (128 GB in your case).

    Increase the max_connections parameter to 1000 or more, depending on the number of concurrent connections you expect, and set maintenance_work_mem to a value higher than work_mem.
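    As a sketch, the parameter changes above could be applied with the Azure CLI's `az postgres flexible-server parameter set` command. This is a configuration fragment, not a definitive script: the resource group and server names are placeholders, and the page counts assume the 64 GB / 128 GB values recommended above.

    ```shell
    RG=my-resource-group      # placeholder: your resource group
    SERVER=my-flex-server     # placeholder: your flexible server name

    # shared_buffers: 25% of 256 GiB = 64 GiB = 8388608 eight-kB pages
    az postgres flexible-server parameter set \
      --resource-group "$RG" --server-name "$SERVER" \
      --name shared_buffers --value 8388608

    # effective_cache_size: 50% of 256 GiB = 128 GiB = 16777216 pages
    az postgres flexible-server parameter set \
      --resource-group "$RG" --server-name "$SERVER" \
      --name effective_cache_size --value 16777216

    # raise the connection ceiling as suggested
    az postgres flexible-server parameter set \
      --resource-group "$RG" --server-name "$SERVER" \
      --name max_connections --value 1000
    ```

    Note that shared_buffers is a static parameter on Flexible Server, so the server restarts when it changes.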

    Use a connection pooler such as PgBouncer to manage connections efficiently, reduce idle connections, and reuse existing ones.
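    A minimal PgBouncer configuration sketch for this setup; the host, database name, and pool sizes here are assumptions to adapt, not values from the question:

    ```ini
    [databases]
    ; placeholder connection string for the read-only database
    appdb = host=my-flex-server.postgres.database.azure.com port=5432 dbname=appdb

    [pgbouncer]
    listen_addr = 127.0.0.1
    listen_port = 6432
    auth_type = md5
    auth_file = /etc/pgbouncer/userlist.txt
    ; transaction pooling lets many clients share few server connections
    pool_mode = transaction
    max_client_conn = 1000
    default_pool_size = 50
    ```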

    Regards

    Geetha


0 additional answers
