- The prolonged restart and subsequent connection failures during Fabric mirroring preparation can be triggered by several factors, such as resource constraints or configuration changes that drove throughput on the server unusually high. If the Fabric capacity was paused or exceeded, mirroring may have stopped or paused, producing unexpected behavior during setup. Persistent resource errors can also disable mirroring entirely, resulting in connection issues.
- To recover from this state without requiring a failover, you can try the following steps:
- Check the Azure metrics for any resource bottlenecks and address them accordingly.
- Execute the following query to identify any hanging transactions that could be causing the unresponsiveness:

  ```sql
  SELECT * FROM pg_stat_activity WHERE state = 'idle in transaction';
  ```

- Review the replication slots on the mirrored PostgreSQL database to confirm that mirroring was properly enabled:

  ```sql
  SELECT * FROM pg_replication_slots;
  ```

- If the database remains unresponsive, contacting Azure support may be necessary for further assistance.
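If you script the `pg_stat_activity` check, the interesting sessions are those that have sat "idle in transaction" for a long time, not ones that just entered that state. Below is a minimal, hedged sketch in Python that filters rows shaped like `pg_stat_activity` output (plain dicts here for illustration; in practice they would come from a driver such as psycopg). The `hanging_transactions` helper and its threshold are assumptions for illustration, not part of any Azure or PostgreSQL API.

```python
from datetime import datetime, timedelta, timezone

def hanging_transactions(rows, max_idle=timedelta(minutes=5), now=None):
    """Return pids of sessions stuck 'idle in transaction' past max_idle.

    `rows` mimics pg_stat_activity rows: dicts with "pid", "state",
    and "state_change" (the time the session entered its current state).
    """
    now = now or datetime.now(timezone.utc)
    return [
        row["pid"]
        for row in rows
        if row["state"] == "idle in transaction"
        and now - row["state_change"] > max_idle
    ]

# Synthetic sample: one healthy session, one stuck for 30 minutes.
now = datetime.now(timezone.utc)
rows = [
    {"pid": 101, "state": "active", "state_change": now},
    {"pid": 102, "state": "idle in transaction",
     "state_change": now - timedelta(minutes=30)},
]
print(hanging_transactions(rows, now=now))  # [102]
```

Sessions flagged this way are candidates for `pg_terminate_backend(pid)`, which may release locks without a full failover.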
- Before attempting the Fabric mirroring setup again, consider the following precautions:
- Ensure that your Azure Database for PostgreSQL flexible server has sufficient capacity to handle the expected load during the mirroring process.
- Monitor the server's performance metrics closely to identify any potential issues before starting the mirroring setup.
- Review the official documentation for any updates or additional recommendations regarding mirroring setup.
- Consider performing the mirroring setup during off-peak hours to minimize the impact on production services.
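The capacity and monitoring precautions above can be folded into a simple pre-flight gate before retrying the setup. The sketch below is a hypothetical example, assuming you have already pulled recent CPU-percent samples from Azure Monitor (for example via `az monitor metrics list`); the function name and thresholds are illustrative, not part of any Azure tooling.

```python
def safe_to_start_mirroring(cpu_samples, threshold=80.0, max_breaches=2):
    """Return True if recent CPU-percent samples suggest spare headroom.

    `cpu_samples` is a list of recent CPU utilization percentages;
    the mirroring attempt is deferred if more than `max_breaches`
    samples exceed `threshold`.
    """
    breaches = sum(1 for sample in cpu_samples if sample > threshold)
    return breaches <= max_breaches

# Off-peak-looking server: proceed.
print(safe_to_start_mirroring([35.0, 42.5, 51.0]))        # True
# Sustained high load: wait for a quieter window.
print(safe_to_start_mirroring([85.0, 92.0, 88.0, 90.0]))  # False
```

A gate like this is deliberately conservative: failing it costs only a delay, while starting mirroring on an overloaded server risks repeating the hang described above.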