How to troubleshoot a 502 error on Application Gateway

Rukhsar Afroz 20 Reputation points
2025-04-08T12:59:57.4633333+00:00

I am getting a 502 error from Application Gateway, and the logs show ERRORINFO_NO_ERROR and ERRORINFO_UPSTREAM_CLOSED_CONNECTION.


Accepted answer
  1. Venkat V 1,390 Reputation points Microsoft External Staff
    2025-04-17T05:34:05.0733333+00:00

    Hi Rukhsar Afroz

    The error you encountered, ERRORINFO_UPSTREAM_CLOSED_CONNECTION, means the backend server closed the connection unexpectedly, before the request was fully processed. This indicates that the backend server (NGINX, in your case) is terminating the connection before Application Gateway finishes sending the request or receiving a complete response. Refer to the documentation for more details.
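
    If you want to confirm which requests are returning 502 and with what error code, you can query the Application Gateway access logs. This is a minimal sketch, assuming diagnostics are sent to a Log Analytics workspace and the classic AzureDiagnostics table is used; the workspace ID is a placeholder, and field names such as errorInfo_s may differ if you use the resource-specific tables:

    # List recent 502 responses with their error info (field names are assumptions)
    az monitor log-analytics query \
      --workspace <your-workspace-guid> \
      --analytics-query "AzureDiagnostics
        | where Category == 'ApplicationGatewayAccessLog'
        | where httpStatus_d == 502
        | project TimeGenerated, requestUri_s, serverStatus_s, errorInfo_s
        | order by TimeGenerated desc" \
      --output table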

    Based on your environment and the logs (ERRORINFO_UPSTREAM_CLOSED_CONNECTION), the root cause of the intermittent 502 errors was that the ingress-nginx controller was closing the connection prematurely, especially during POST requests with large payloads. This made Azure Application Gateway believe the backend had failed, resulting in a 502. The ingress controller had lower timeout values for reading and writing client requests, so when Application Gateway sent a request, the ingress controller timed out or closed the connection before a full response was delivered.
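
    To confirm this from the cluster side, you can check the ingress-nginx controller logs around the time of a 502. A quick sketch, assuming the controller runs as the deployment ingress-nginx-controller in the ingress-nginx namespace (the same names used in the patch command further down):

    # Look for premature upstream closes, timeouts, and body-buffering warnings in the last hour
    kubectl logs -n ingress-nginx deploy/ingress-nginx-controller --since=1h \
      | grep -Ei 'prematurely closed|timed out|client request body is buffered'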

    You also saw the following warning in the NGINX logs:

    a client request body is buffered to a temporary file /tmp/nginx/client-body/xxx

    This indicates that NGINX was writing request bodies to disk because the request payload exceeded the default buffer size.
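
    You can also check which timeout and buffer values the controller is actually running with by reading the rendered NGINX configuration inside the controller pod. A sketch, assuming the same deployment name and the default config path /etc/nginx/nginx.conf:

    # Show the effective directives generated by ingress-nginx from the ConfigMap
    kubectl exec -n ingress-nginx deploy/ingress-nginx-controller -- \
      grep -E 'keepalive_timeout|proxy_read_timeout|proxy_send_timeout|proxy_connect_timeout|client_body_buffer_size' \
      /etc/nginx/nginx.conf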

    To fix this, run the following to patch your nginx-configuration ConfigMap and restart the ingress controller:

    kubectl patch configmap nginx-configuration -n ingress-nginx \
      --type merge \
      -p '{"data": {
        "keep-alive-timeout": "300",
        "proxy-read-timeout": "300",
        "proxy-send-timeout": "300",
        "proxy-connect-timeout": "300",
        "client-body-buffer-size": "512k"
      }}'
    kubectl rollout restart deployment ingress-nginx-controller -n ingress-nginx
    
    

    These values keep NGINX connections open long enough for Application Gateway and avoid writing small POST payloads to disk.
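
    To confirm the patch took effect and the restarted controller is ready, something like the following can be used (again assuming the ConfigMap and deployment names from the commands above):

    # Verify the patched keys and wait for the rollout to complete
    kubectl get configmap nginx-configuration -n ingress-nginx -o yaml | grep -E 'timeout|buffer-size'
    kubectl rollout status deployment ingress-nginx-controller -n ingress-nginx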

    On your follow-up question: what does client-body-buffer-size do?

    It increases the in-memory buffer NGINX uses for request bodies such as POST data. Without it, NGINX logs the warning "a client request body is buffered to a temporary file". This is not an error, but increasing the buffer (e.g., 512k or 1m) avoids disk I/O for moderate payloads.

    Avoid setting it to 16k; that is often already the default and will not help with larger payloads.
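
    One way to verify the change is to send a POST that is larger than the old buffer but smaller than the new 512k limit through the gateway, then re-check the controller logs. A rough sketch; the hostname and path are placeholders for your own application:

    # Generate a ~400 KB payload and POST it through Application Gateway
    head -c 409600 /dev/zero | tr '\0' 'a' > /tmp/payload.txt
    curl -s -o /dev/null -w '%{http_code}\n' \
      -X POST --data-binary @/tmp/payload.txt \
      https://<your-app-gateway-hostname>/<your-endpoint>
    # The "buffered to a temporary file" warning should no longer appear for this request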

    If you previously had entries like these in the ConfigMap:

    keep-alive: "on"
    keep-alive-requests: "1000"
    

    Remove them: the keep-alive setting expects a numeric timeout in seconds rather than "on", so this value fails to parse and produces errors like unexpected error merging defaults: cannot parse 'keep-alive'.

    After applying the above steps, the 502 errors were no longer observed: the Application Gateway logs show healthy responses and ingress-nginx handles large POST requests smoothly.
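
    You can also confirm from the Azure side that the gateway now reports the backend as healthy. The resource group and gateway name below are placeholders:

    # Show per-backend health as seen by the Application Gateway health probes
    az network application-gateway show-backend-health \
      --resource-group <your-resource-group> \
      --name <your-app-gateway-name>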

    Reference: Troubleshooting bad gateway errors in Application Gateway

    I hope this helps to resolve your issue. Please feel free to ask any questions if the solution provided isn't helpful.

    Please don’t forget to close the thread by clicking "Accept the answer" wherever the information provided helps you, as this can be beneficial to other community members.

