hi chris! thanks for posting this, it's a tricky one but let's break it down ))
first off, wow, 10k+ ip rules? that's a lot, but hey, if terraform handled it on create, that's a good sign )) for container apps, there's no documented hard limit on ip restriction rules, but performance can get weird with huge batches. microsoft's official stance is 'it depends' on backend processing, which isn't super helpful, i know... check their container apps networking docs here.
about that patch issue: yeah, a 202 with no failure state is frustrating. when you hit this, try splitting updates into smaller chunks, like 500 rules at a time. the api might just be choking on the payload size. also, after patching, poll the provisioning state manually; sometimes the ui lags but the api knows what's up. here's a quick command to check:
az containerapp show --name yourApp --resource-group yourRG --query properties.provisioningState -o tsv
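one gotcha with the chunking approach: as far as i know, an ARM PATCH replaces the whole ipSecurityRestrictions array rather than appending to it, so each batch has to resend everything accepted so far plus the next slice. here's a rough sketch of that batching logic (the 500-per-batch size is a guess to tune, not a documented limit, and the rule shapes are made up to show the format):

```python
# sketch: build cumulative slices of the rules list so each PATCH
# resends the rules already applied plus the next batch.
# assumption: PATCHing ipSecurityRestrictions replaces the full array.
def cumulative_batches(rules, step=500):
    """Yield rules[:step], rules[:2*step], ... up to the full list."""
    for end in range(step, len(rules) + step, step):
        yield rules[:min(end, len(rules))]

# dummy rules just to show the shape your real list should have;
# your actual rules come from terraform state / your source of truth
rules = [
    {"name": f"rule-{i}",
     "ipAddressRange": f"10.{i // 256}.{i % 256}.0/24",
     "action": "Allow"}
    for i in range(10_200)
]

batches = list(cumulative_batches(rules))
print(len(batches), len(batches[0]), len(batches[-1]))  # 21 500 10200
```

each batch would then go out as the body of one PATCH (e.g. via `az rest --method patch` against the container app's ARM id), with a provisioning-state check between batches so you can spot exactly which slice the backend chokes on.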
also, check the revision status separately! container apps can get stuck in revision purgatory. run
az containerapp revision list
if you see a revision stuck in 'Processing', that's your culprit. you might need to roll back or force a new deployment.
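instead of eyeballing the ui, you can wrap that status check in a polling loop that runs until the state settles one way or the other. a minimal sketch (fetch_state is any zero-arg callable; in practice you'd wrap the az command above in subprocess.run and return its stdout):

```python
import time

# poll a state-returning callable until it hits a terminal state.
# terminal-state names are assumed from typical ARM provisioning values.
def wait_for_state(fetch_state, done=("Succeeded", "Failed", "Canceled"),
                   timeout=600, interval=10):
    """Call fetch_state() every `interval` seconds until it returns a
    value in `done`, or raise TimeoutError after `timeout` seconds."""
    deadline = time.monotonic() + timeout
    while True:
        state = fetch_state()
        if state in done:
            return state
        if time.monotonic() >= deadline:
            raise TimeoutError(f"still {state!r} after {timeout}s")
        time.sleep(interval)
```

run this between chunked PATCHes and you'll know exactly which batch the backend silently stalls on, instead of finding out hours later.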
for tracking actual rule states... it's messy. the ip restrictions don't show up clearly in 'az containerapp show' output; you have to dig into the 'configuration.ingress.ipSecurityRestrictions' field. try
az containerapp show --name yourApp --resource-group yourRG --query properties.configuration.ingress.ipSecurityRestrictions
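once you can pull that field, you can diff it against what terraform thinks it applied. a small sketch, assuming rules are objects keyed by a unique "name" (which is how the ipSecurityRestrictions entries are shaped) and with made-up sample data:

```python
# compare desired vs actual ip restriction rules by name.
# "desired" would come from your terraform state; "actual" from the
# az containerapp show --query above, parsed with json.loads.
def diff_rules(desired, actual):
    """Return (missing, unexpected): rule names terraform wants but
    azure lacks, and names azure has that terraform never asked for."""
    want = {r["name"] for r in desired}
    have = {r["name"] for r in actual}
    return sorted(want - have), sorted(have - want)

# sample data only -- replace with your real lists
desired = [{"name": "office", "ipAddressRange": "203.0.113.0/24", "action": "Allow"},
           {"name": "vpn", "ipAddressRange": "198.51.100.0/24", "action": "Allow"}]
actual = [{"name": "office", "ipAddressRange": "203.0.113.0/24", "action": "Allow"}]

missing, unexpected = diff_rules(desired, actual)
print(missing, unexpected)  # ['vpn'] []
```

with 10k+ rules this beats manual inspection: run it after each chunked update and you'll see immediately whether the backend actually persisted the batch.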
avoid giant json patches. use terraform or bicep for incremental changes instead. and yes, this headache applies to other azure services too - app services have similar quirks with big ip rule sets.
ps: microsoft's containerapps team is actively improving this area. their github issues page has some workarounds here. worth subscribing to updates ))
let us know if chunking the updates helps
rgds,
Alex