Subject: Azure Container Apps - Microsoft.App resource provider failure caused ~16-hour
production outage (Feb 12-13, 2026)
Severity: Sev B - Production system down
Subscription: Omniware Azure 2448515
Region: East US 2
Affected Resources:
- Container App: pm-portal (production), Resource Group: client-apps
- Container App: pm-portal-qa (QA), Resource Group: client-apps
- Container App Environment: pm-portal-env, Resource Group: client-apps
Issue Description:
On the evening of Thursday, February 12, 2026, both our production and QA Azure Container
Apps became completely unreachable. Per Application Insights telemetry (Log Analytics
workspace pm-portal-logs, workspace ID 626c2c7b-acd9-4084-904e-04ed73d30a44), the last
successful requests were recorded at approximately 22:35 UTC on Thursday, Feb 12 (this is
the last verified activity; the outage itself may have begun somewhat later). Service was restored
at approximately 14:30 UTC on Friday, Feb 13 after manual intervention on our side, for a total
outage of approximately 16 hours. No changes were made on our side prior to the outage.
Symptoms:
- Both pm-portal (production) and pm-portal-qa (QA) container apps went down simultaneously
with no deployments or configuration changes on our side.
- "az containerapp show" returned provisioningState: Failed with null values for all configuration
properties. Ingress, secrets, and environment variables were all empty.
- Any ACA write operation (e.g., "az containerapp update") failed with SubscriptionIsNotActive
errors, even though the subscription was active.
- The Azure CLI crashed with "NoneType has no len()" when attempting updates, presumably a
CLI bug triggered by the null configuration state.
- No Azure Service Health alerts or advisories were posted for this issue during the outage
window.
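For reference, the commands below show how we observed the failed state. The resource names match those listed above; the behavior described in the comments is what we saw during the outage, not documented Azure behavior:

```shell
# Inspect the app's provisioning state and configuration.
# During the outage this returned provisioningState: Failed
# and null for the entire configuration block.
az containerapp show \
  --name pm-portal \
  --resource-group client-apps \
  --query "{state: properties.provisioningState, config: properties.configuration}"

# Any write operation failed with SubscriptionIsNotActive;
# this invocation also crashed the CLI with "NoneType has no len()".
az containerapp update \
  --name pm-portal \
  --resource-group client-apps \
  --min-replicas 1
```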
Impact:
Our production PM Portal application was completely unavailable for ~16 hours overnight into
the following business day, affecting all end users. No workaround was available through the
Azure CLI or Portal. The apps could not be updated, restarted, or redeployed.
Root Cause and Resolution:
After extensive troubleshooting, we discovered that the Microsoft.App resource provider had
entered a broken state on our subscription. The fix was to manually re-register the Microsoft.App
resource provider from the Azure Portal (Subscription > Settings > Resource providers >
search "Microsoft.App" > Re-register). Apps recovered immediately after re-registration
completed.
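For completeness, the equivalent of the Portal re-registration steps via the Azure CLI (this is what we would run if the issue recurs; it was not the path we used this time):

```shell
# Check the resource provider's registration state.
az provider show --namespace Microsoft.App \
  --query registrationState --output tsv

# Re-register the provider and block until registration completes.
az provider register --namespace Microsoft.App --wait
```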
Questions for Microsoft:
- What caused the Microsoft.App resource provider to enter a broken state on an active
subscription with no changes on our end?
- Why did the subscription show as Enabled while the resource provider was effectively non-
functional?
- Why were no Azure Service Health alerts generated for this issue?
- What safeguards exist to prevent this from recurring? Is there a way to monitor or alert on
resource provider state changes? Note that the provider still showed as Registered before we
re-registered it.
- Can Microsoft provide a Root Cause Analysis (RCA) for this incident?
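While we await guidance on the monitoring question above, we intend to poll the provider state ourselves. A minimal sketch (the scheduling and alert delivery are our own assumptions, not an Azure feature; note this check did not show the problem in our case, since the provider reported Registered throughout):

```shell
# Interim check: flag any Microsoft.App registrationState
# other than "Registered". Intended to run from a scheduled job;
# how the alert is delivered is up to the operator.
state=$(az provider show --namespace Microsoft.App \
  --query registrationState --output tsv)
if [ "$state" != "Registered" ]; then
  echo "ALERT: Microsoft.App registrationState is '$state'" >&2
  exit 1
fi
```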