
502 Bad Gateway error when uploading large files (~5GB) through Azure API Management BasicV2 tier

Diane Bloodworth 20 Reputation points
2026-03-11T16:26:04.1366667+00:00

I'm experiencing a consistent 502 Bad Gateway error when uploading large files (~5GB) through Azure API Management BasicV2 tier. I need help confirming whether this is a known limitation of the BasicV2 tier and if there is any workaround.

Environment:

  • Production: API Management BasicV2 tier
  • QA: API Management Basic tier, platform version stv2.1 — works correctly

Current backend policy (same in both environments):

```xml
<backend>
    <forward-request timeout="1800" buffer-request-body="false" />
</backend>
```

Observed behavior:

  • Large file uploads via POST fail with HTTP 502 after exactly 4 minutes
  • The exact same policy works correctly in QA (Basic/stv2.1)
  • No entry is recorded in ApiManagementGatewayLogs for the failed request — this strongly suggests the 502 is being generated by the underlying infrastructure before reaching the APIM gateway layer
  • The 4-minute failure matches the Azure Load Balancer idle timeout

What we've already investigated:

  • The buffer-request-body="false" attribute is intended to stream the request body directly to the backend without buffering, keeping the TCP connection active and avoiding the idle timeout. This works in Basic/stv2.1 but not in BasicV2.
  • Setting buffer-request-body="true" is not viable because the file is ~5GB and would exhaust the gateway memory.
  • The Azure portal's "Diagnose and solve problems" tool pointed to SNAT port exhaustion documentation, but the failure pattern (exactly 4 minutes, no GatewayLogs entry) points to an infrastructure-level idle timeout rather than SNAT exhaustion.
  • Increasing the timeout value has no effect, since the connection is dropped at the infrastructure level before the policy timeout is reached.

Key question: Does BasicV2 handle buffer-request-body="false" differently than Basic/stv2.1? Is there a known limitation in BasicV2 that prevents large streaming file uploads, and is there any supported configuration to work around it?

Fallback plan: We are aware of the SAS token architecture pattern (client uploads directly to Blob Storage, bypassing APIM entirely) and are considering implementing it. However, we would prefer to confirm first whether this is a fixable configuration issue or a hard platform limitation before committing to an architectural change.
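For context, the SAS fallback we are considering looks roughly like the sketch below. The account, container, and blob names and the `build_sas_upload_url` helper are hypothetical; the SAS token itself would be minted server-side (for example with the azure-storage-blob SDK's `generate_blob_sas`), and only a short-lived URL is handed to the client:

```python
from datetime import datetime, timedelta, timezone

def build_sas_upload_url(account: str, container: str, blob: str, sas_token: str) -> str:
    """Hypothetical helper: combine the blob endpoint with a pre-minted SAS token.

    The SAS token grants time-limited write access, so the client can PUT the
    ~5 GB file straight to Blob Storage without the request ever touching APIM.
    """
    return f"https://{account}.blob.core.windows.net/{container}/{blob}?{sas_token}"

# A short expiry keeps the exposure window small (assumption: 30 minutes suffices).
expiry = datetime.now(timezone.utc) + timedelta(minutes=30)

url = build_sas_upload_url("myaccount", "uploads", "large-file.bin", "sv=...&sig=...")
```

The client then uploads directly against `url`; APIM stays in the path only for the small, fast request that mints the token.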


Azure API Management

An Azure service that provides a hybrid, multi-cloud management platform for APIs.


Answer accepted by question author
  Pravallika KV 12,825 Reputation points Microsoft External Staff Moderator
  2026-03-11T17:54:47.36+00:00

    Hi @Diane Bloodworth ,

    Thanks for reaching out to Microsoft Q&A.

    Even though you have buffer-request-body="false", BasicV2 today does not fully stream the payload end-to-end the same way stv2.1 does. The underlying Azure Load Balancer between APIM and your backend will drop any connection that appears idle for more than 240 seconds, and there’s currently no APIM policy or portal setting to bump that up on the managed BasicV2 tier.

    Here are your main options:

    1. Bypass APIM for large uploads
    • Use a SAS-token pattern so your client posts directly to Blob Storage (no APIM in the middle).
    • This is the recommended large-file pattern for APIM scenarios.
    2. Upgrade or re-architect
    • Move to a tier or hosting model where you control the front-end load balancer or Application Gateway (e.g. deploy APIM into your own VNet behind your own load balancer or App Gateway with a custom idle timeout).
    • Or move to a higher APIM tier that better handles unbuffered streaming (e.g. PremiumV2 in a VNet).
    3. Chunk the upload
    • Split the 5 GB file into smaller parts client-side (e.g. 5 × 1 GB), upload each sequentially, then reassemble server-side.

    Unfortunately, there’s no supported policy tweak in BasicV2 to increase that 4-minute idle timeout. If you’d like to explore a VNet-hosted or higher-tier approach, let us know and we can point you at sample ARM templates and networking docs.

    References:

    buffer-request-body policy (streaming)

    Application Gateway idle-timeout info

    SAS-direct client upload pattern

    Hope this helps!


    If the resolution was helpful, kindly click "Yes" under "Was this answer helpful?". And if you have any further query, do let us know.


0 additional answers
