Performance degradation - 11/6 - Mitigated

We’ve confirmed that all systems were back to normal as of November 6th, 2018 at 15:45 UTC. Our logs show the incident started on November 6th, 2018 at 14:45 UTC, and during the 60 minutes it took to resolve the issue, a peak of 2,150 users experienced slow and failed commands in the West Europe region. We are sorry for any inconvenience this may have caused.

  • Root Cause: Our current investigation points to a failover at the Azure SQL layer as the cause of this incident (see the sketch after this list for how such a failover typically surfaces to clients).
  • Chance of Re-occurrence: Medium at this point; we only partially understand what happened.
  • Lessons Learned: We are logging a number of repair items and investigating in depth why an Azure SQL failover left our databases in a bad state.
  • Incident Timeline: November 6th, 2018, 14:45 UTC through 15:45 UTC.
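
During an Azure SQL failover, connections to the affected databases drop and new ones are refused or throttled until the replica comes online, which callers observe as slow and failed commands. As a purely illustrative aside (not the Azure DevOps implementation), the sketch below shows the common client-side mitigation for that failure mode: retrying on well-known transient Azure SQL error codes with exponential backoff. The connection string, query, and error-matching heuristic are assumptions made for the example.

```python
import time

import pyodbc

# Azure SQL error numbers commonly treated as transient (failovers, throttling).
TRANSIENT_ERROR_CODES = {4060, 40197, 40501, 40613, 10928, 10929}

# Hypothetical connection string: server, database, and credentials are placeholders.
CONN_STR = (
    "Driver={ODBC Driver 18 for SQL Server};"
    "Server=tcp:example-server.database.windows.net,1433;"
    "Database=example-db;Uid=example-user;Pwd=example-password;"
    "Encrypt=yes;Connection Timeout=30;"
)


def query_with_retry(sql, attempts=5, base_delay=2.0):
    """Run a read-only query, retrying with exponential backoff on transient errors."""
    for attempt in range(1, attempts + 1):
        conn = None
        try:
            conn = pyodbc.connect(CONN_STR)
            cursor = conn.cursor()
            cursor.execute(sql)
            return cursor.fetchall()
        except pyodbc.Error as exc:
            # pyodbc embeds the SQL Server error number in the exception text,
            # e.g. "(40613)"; this string check is a simple heuristic for the sketch.
            message = str(exc)
            transient = any("(%d)" % code in message for code in TRANSIENT_ERROR_CODES)
            if attempt == attempts or not transient:
                raise
            # Back off so a database that is failing over has time to come back.
            time.sleep(base_delay * (2 ** (attempt - 1)))
        finally:
            if conn is not None:
                conn.close()


if __name__ == "__main__":
    print(query_with_retry("SELECT TOP 5 name FROM sys.tables"))
```

Retry logic of this kind only masks short failovers; it does not address the underlying repair items noted above.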

Initial update:

  • We are actively investigating performance issues with Azure DevOps.
  • Some customers may experience slower performance than usual while accessing their accounts.