Replication / Sending Servers

max_replication_slots

Category: Replication / Sending Servers
Description: Specifies the maximum number of replication slots that the server can support.
Data type: integer
Default value: 10
Allowed values: 2-262143
Parameter type: static
Documentation: max_replication_slots
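
To see how close a server is to this limit, you can compare the current setting with the slots that already exist. This is a minimal sketch using standard PostgreSQL catalog views; it assumes you can connect with a client such as psql.

```sql
-- Current limit configured on the server.
SHOW max_replication_slots;

-- Slots already created, broken down by type (physical vs. logical).
SELECT slot_type, count(*) AS slots
FROM pg_replication_slots
GROUP BY slot_type;
```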

max_slot_wal_keep_size

Category: Replication / Sending Servers
Description: Sets the maximum WAL size that can be reserved by replication slots.
Data type: integer
Default value: -1 (unlimited)
Allowed values: -1
Parameter type: read-only
Documentation: max_slot_wal_keep_size
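
Although max_slot_wal_keep_size is read-only here (-1 means slots may retain an unlimited amount of WAL), you can still monitor how much WAL each slot is holding back. The following is a minimal sketch using the pg_replication_slots view; the wal_status column assumes PostgreSQL 13 or later.

```sql
-- How much WAL each replication slot is currently retaining.
SELECT slot_name,
       slot_type,
       wal_status,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;
```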

max_wal_senders

Category: Replication / Sending Servers
Description: Sets the maximum number of simultaneously running WAL sender processes.
Data type: integer
Default value: 10
Allowed values: 5-100
Parameter type: static
Documentation: max_wal_senders

Azure-specific notes

The value set for the max_wal_senders server parameter when you provision an instance of Azure Database for PostgreSQL flexible server must never be decreased below 2 (if HA is enabled) + the number of read replicas provisioned + the number of slots used in logical replication.

When considering whether to increase max_wal_senders to a much higher value to cope with logical replication of a substantial number of tables, keep the following important points in mind:

  • Logically replicating a large number of tables doesn't necessarily need a large number of WAL senders.
  • The only reason you need a separate WAL sender per table, or per group of tables, is if you need a separate subscription for each of those tables or groups of tables.
  • However many WAL senders are used for physical and logical replication, they all become active at once whenever any backend writes something to the write-ahead log. When that happens, the WAL senders assigned to logical replication all wake up to:
    1. Decode all new records in the WAL,
    2. Filter out log records they're not interested in,
    3. Replicate the data that's relevant to each of them.
  • WAL senders are similar to connections in the sense that, if they're idle, it doesn't matter how many there are. If they're active, however, they all compete for the same resources and performance can degrade badly. This is especially true for senders doing logical replication, because logical decoding is rather CPU expensive: each sender has to decode the entire WAL even if it only replicates the operations affecting a single table, which might represent a tiny percentage of all the data in the write-ahead log. For physical replication this matters less, because WAL senders don't consume CPU as intensively and tend to be bounded by network bandwidth first.
  • Therefore, in general, it's better not to have many more WAL senders than vCores.
  • It's good practice to add room for a few extra WAL senders to accommodate future growth or temporary spikes in replication connections. The following two examples illustrate the calculation; the query sketch after this list shows how to check the current counts on your server.
    • For a server with 8 vCores, HA disabled, 2 read replicas, and 3 logical replication slots, you might configure max_wal_senders as the sum of physical slots for HA (0) + physical slots for read replicas (2) + logical slots (3) + some extra for future growth, considering available vCores (1) = 6.
    • For a server with 16 vCores, HA enabled, 4 read replicas, and 5 logical replication slots, you might configure max_wal_senders as the sum of physical slots for HA (2) + physical slots for read replicas (4) + logical slots (5) + some extra for future growth, considering available vCores (2) = 13.
  • If you still consider the maximum value allowed for this parameter too low for your needs, contact us, describe your scenario in detail, and explain the minimum value you consider acceptable for your scenario to perform properly.
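
To gather the inputs for the formula above on a running server, the following queries (a sketch using standard PostgreSQL views, not an Azure-specific tool) count the physical and logical slots that exist and the WAL sender processes currently connected; plug those numbers, plus 2 if HA is enabled and a little headroom, into the sizing examples above.

```sql
-- Slots currently defined, split into physical (read replicas) and logical.
SELECT count(*) FILTER (WHERE slot_type = 'physical') AS physical_slots,
       count(*) FILTER (WHERE slot_type = 'logical')  AS logical_slots
FROM pg_replication_slots;

-- WAL sender processes that are connected right now.
SELECT count(*) AS active_wal_senders
FROM pg_stat_replication;

-- Current value of the parameter being sized.
SHOW max_wal_senders;
```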

track_commit_timestamp

Category: Replication / Sending Servers
Description: Collects transaction commit time.
Data type: boolean
Default value: off
Allowed values: on, off
Parameter type: static
Documentation: track_commit_timestamp
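
When track_commit_timestamp is set to on (it's static, so a restart is required), PostgreSQL records a commit timestamp for each transaction, which you can then read back with pg_xact_commit_timestamp(). A minimal sketch follows; the table name some_table is hypothetical.

```sql
-- Commit time of the transaction that inserted each row version;
-- NULL for transactions committed before the setting was enabled.
SELECT pg_xact_commit_timestamp(xmin) AS committed_at,
       *
FROM some_table  -- hypothetical table
LIMIT 10;
```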

wal_keep_size

Category: Replication / Sending Servers
Description: Sets the size of WAL files held for standby servers.
Data type: integer
Default value: 400 (MB)
Allowed values: 400
Parameter type: read-only
Documentation: wal_keep_size

wal_sender_timeout

Category: Replication / Sending Servers
Description: Sets the maximum time to wait for WAL replication.
Data type: integer
Default value: 60000 (milliseconds)
Allowed values: 60000
Parameter type: read-only
Documentation: wal_sender_timeout

wal_keep_segments

Category: Replication / Sending Servers
Description: Sets the number of WAL files held for standby servers.
Data type: integer
Default value: 25
Allowed values: 25
Parameter type: read-only
Documentation:

This parameter exists only on servers running PostgreSQL 12 and earlier; on PostgreSQL 13 and later it's replaced by wal_keep_size.
