Container not starting because of read-only mount of /sys/fs/cgroup in App Service

Karl Schmidt 20 Reputation points
2025-05-28T16:07:26.66+00:00

We are running a PHP application in a Web App. Our image is based on Alpine Linux with PHP-FPM, and we run the rsyslog service, which is started in the Docker entrypoint:

rc-service rsyslog start
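
(A minimal entrypoint of this kind looks roughly as follows; this is an illustrative sketch with an assumed php-fpm invocation, not our exact script.)

    #!/bin/sh
    # Illustrative entrypoint sketch: start rsyslog via OpenRC, then run
    # PHP-FPM in the foreground as the container's main process.
    set -e
    rc-service rsyslog start
    # Binary name may differ, e.g. php-fpm83 on Alpine.
    exec php-fpm -F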

Lately we encountered the following problem:

After a new deployment our container was not able to start. The log stream contained these errors:

2024-12-05T20:02:05.4483749Z /lib/rc/sh/openrc-run.sh: line 108: can't create /sys/fs/cgroup/blkio/tasks: Read-only file system

This error was produced by the rsyslog service, which failed to start; because of this, the container was not able to start either.

So we removed the rsyslog start from the Docker entrypoint; the error messages disappeared and the container started successfully again.

We did some research and noticed that /sys/fs/cgroup was mounted read-only in the container. The read-only mount caused OpenRC to fail to start rsyslog, because it uses cgroups.
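
For anyone checking the same thing, the mount mode can be verified from inside the container (e.g. via the App Service SSH console); this is a generic Linux check, nothing App Service specific:

    # Print mount point and options for every cgroup-related mount;
    # "ro" in the options column means it is mounted read-only.
    awk '$2 ~ "^/sys/fs/cgroup" { print $2, $4 }' /proc/mounts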

After some more research we found out that Azure uses cgroups internally to enforce quotas on:

  • CPU
  • Memory
  • Disk I/O
  • Network bandwidth (in some cases)
  • ...

So our first idea was that we were on a plan with shared resources that is limited in terms of resource settings. We upgraded to P0v3 and it worked again. After that, our conclusion was that we had been using a pricing tier with shared resources and that Azure mounts cgroups read-only to prevent apps from altering resource settings.

But now we noticed the same behavior in another App Service, where again the cgroups volume was mounted read-only and the container failed to start. Here we are using the same "Basic B2" plan that we were using for the App Service mentioned above, where the problem first occurred.

Later, the container suddenly started successfully again and there was no read-only mount for cgroups, without any changes on our side.

I remember we had the issue with the read-only mount some time ago and thought we had fixed it, because the problem did not occur anymore.

We dug deeper into the Azure documentation and found out which pricing tiers fall into the "Dedicated compute" category:

https://learn.microsoft.com/en-us/azure/app-service/overview-hosting-plans#pricing-tiers

Our App Service uses the "Basic B2" tier, so it should be in the "Dedicated compute" category.

It looks like upgrading to P0v3 in the first case was not the solution after all; it was just a coincidence that the container was able to start again, and sometimes /sys/fs/cgroup is mounted read-only and sometimes not ... ?

So our question is: Why is the volume /sys/fs/cgroup sometimes mounted read-only by Azure and sometimes not?

I would understand this for a free plan with shared resources, but not for one in the "Dedicated compute" category. And certainly not why it sometimes works and sometimes does not.

Please shed some light on this ...

Accepted answer
  Alekhya Vaddepally 1,670 Reputation points Microsoft External Staff Moderator
    2025-05-28T18:50:07.56+00:00

    Hi Karl Schmidt,
    Azure App Service uses cgroups internally to enforce resource quotas (CPU, memory, I/O, etc.). In some environments, Azure mounts /sys/fs/cgroup read-only to prevent containers from modifying these limits, especially on shared infrastructure or where strict isolation is required.

    This behavior is not consistently documented across pricing tiers. Even in "Dedicated compute" tiers, such as Basic B2, the mount mode may differ.

    Azure App Service runs on a fleet of VMs. Depending on the host OS version and container runtime updates, mount behavior may differ. Azure updates its infrastructure from time to time, and these updates can change mount permissions without prior notice. Some hosts may run different container runtime or Docker versions, which handle the cgroup mount differently. During startup or rapid redeployments, the container can land on hosts with different mount configurations.
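
    One way to see which kind of host the container landed on is to check the cgroup layout from inside the container (a generic Linux check, not specific to App Service). On cgroup v2 hosts there is a single unified hierarchy, so per-controller paths such as /sys/fs/cgroup/blkio do not exist at all:

        # Distinguish cgroup v1 from cgroup v2 inside the container
        if [ -f /sys/fs/cgroup/cgroup.controllers ]; then
            echo "cgroup v2 (unified hierarchy)"
        else
            echo "cgroup v1 (per-controller hierarchies, e.g. /sys/fs/cgroup/blkio)"
        fi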

    Avoid starting rsyslog in App Service containers. Since App Service already captures stdout/stderr logs, running rsyslog inside the container is redundant and problematic. Removing it from the entrypoint is the right approach.
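
    If rsyslog cannot be removed right away, the entrypoint can at least be made defensive so that an optional service does not take down the whole container. This is only a sketch of that idea (the php-fpm invocation is assumed), not an officially documented pattern:

        #!/bin/sh
        # Sketch: do not let an optional service kill the container when
        # the platform mounts /sys/fs/cgroup read-only.
        if ! rc-service rsyslog start; then
            echo "rsyslog could not be started, continuing without it" >&2
        fi
        # Run the main process in the foreground (binary name may differ,
        # e.g. php-fpm83 on Alpine).
        exec php-fpm -F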

    Instead, use Azure's built-in logging:

    Application logs (stdout/stderr), diagnostic settings, and Log Analytics integration.

    This avoids the need for rsyslog and ensures compatibility with the App Service platform.
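
    As a sketch of how to rely on stdout/stderr logging instead of rsyslog, PHP-FPM can be pointed at the container's standard error. The config path and pool name below are assumptions for a typical Alpine PHP-FPM package and may differ in your image:

        # Send PHP-FPM's global error log and worker output to stderr so the
        # App Service log stream picks it up. Path and pool name ([www]) are
        # assumptions; adjust to your PHP version/package.
        printf '%s\n' \
            '[global]' \
            'error_log = /proc/self/fd/2' \
            '[www]' \
            'catch_workers_output = yes' \
            > /etc/php83/php-fpm.d/zz-stderr-logging.conf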

    If you migrate to Azure Kubernetes Service (AKS), you can run logging agents in separate sidecar containers with suitable permissions, to avoid conflicts with your main app container.

    Use Azure Resource Health and App Service diagnostics to track when your app is moved to a new host. This can help correlate mount behavior with host changes.

    If this inconsistency is affecting production reliability, open a support ticket with Microsoft. Provide logs and timestamps to help identify host-level behavior.
    https://learn.microsoft.com/en-us/azure/container-apps/troubleshoot-container-start-failures
    If you have any further concerns or queries, please feel free to reach out to us.

    Please do not forget to click "Accept the answer" and "Yes" wherever the information provided helps you; this can be beneficial to other community members.


0 additional answers
