# IOPS postgresql monitoring - meaning of absolute values in graphs

Hi.

I have a slight problem understanding the metrics for IOPS.

What do the absolute values/percentages in graphs represent?

These are the absolute values:

This is the percentage usage:

Both graphs are from the same time span and with 5-minute granularity.

The server's specs are:

- Azure Database for PostgreSQL flexible server
- General Purpose, D8s_v3, 8 vCores, 32 GiB RAM, 1024 GiB storage
- **5000 IOPS**
- plus one read-only replica with the same specs

If the first graph shows real IOPS (an average per second) and the provisioned limit is 5000, then the peak before 2 PM (10K+) should show as 100% usage (or more).

We can see a peak in the second graph as well, but it stays below 50%, so according to that graph the disk should not be overloaded.
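To make the mismatch concrete, here is a minimal sketch of the utilisation we would expect, assuming the first graph really plots raw per-second IOPS averaged over each 5-minute window (the numbers are approximate readings from the graphs, not exact measurements):

```python
# Hypothetical numbers read off the graphs (assumptions, not exact values).
provisioned_iops = 5000        # from the server spec
observed_peak_iops = 10_000    # approximate peak before 2 PM in the first graph

# If the first graph is real average IOPS, the second graph should show this:
expected_pct = observed_peak_iops / provisioned_iops * 100
print(f"expected utilisation: {expected_pct:.0f}%")
```

This works out to 200%, yet the second graph shows under 50%, which is the contradiction we cannot explain.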

The question is:

What do the values in the first graph represent? **What does the average of absolute IOPS values mean? Are IOPS logged and measured every second, so that the first graph is the real representation?** If so, what does the second graph represent? Why is the percentage usage so low? Which graph is more relevant for determining whether we are experiencing IOPS issues?

We have tried to understand it better using the disk queue depth metric, but the units/measurements are unclear. What do the absolute values for disk queue depth mean?

Were fewer than 40 disk I/O operations waiting in the queue at the peak before 2 PM? How is this measured?
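One way we tried to sanity-check the queue depth number is Little's law, which says the average number of in-flight items equals the arrival rate times the average time each item spends in the system. A sketch with assumed illustrative numbers (the 4 ms latency is our guess, not a value from the graphs):

```python
# Little's law: avg items in system = arrival rate * avg time in system.
# Assumed illustrative numbers, not measurements from our graphs.
iops = 10_000          # I/O operations per second at the peak
avg_latency_s = 0.004  # 4 ms average I/O latency (assumption)

avg_queue_depth = iops * avg_latency_s
print(f"average in-flight/queued I/Os: {avg_queue_depth:.0f}")
```

Under those assumptions the result is about 40, which roughly matches the peak we see in the queue depth graph, but we are not sure whether that is how the metric is actually computed.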