Slow client-dependent download speeds after upgrade from ASP.Net WebAPI 5 to ASP.Net Core 6

Emanuel 6 Reputation points
2022-11-25T18:58:22.713+00:00

We upgraded one of our applications from .NET Framework 4.7 (and ASP.Net WebAPI 5.2.6) to .NET 6 (and ASP.Net Core 6) a while back. One of the controllers is basically a glorified, authenticated web server: a client logs in, interacts with other controllers of the service, and is eventually handed a download link that points at this specific controller.
As it happens, we didn't notice until recently that we had a massive regression in download speeds; it only became apparent once a client/machine that hadn't used the service before showed up.

Depending on the client operating system, we see varying download speeds after the upgrade compared to the old implementation. A Windows 8.1 machine that was being phased out averaged around 5 MB/s, while a recent Windows 11 client achieves a whopping 0.8 MB/s. A recent Windows 10 client as well as Windows Server 2022 (used as a client in this case) can get up to 50 MB/s, while yet another Windows 11 machine (surprisingly the exact same build as the other one, just different hardware) tops out at about 55 MB/s.
And that's all with the same server (same hardware for all tests, running the same unmodified application), the same file (just a 1 GB test file), the same login for the service, and the same client application (we wrote a small downloader tool to automate this rather than going through varying web browsers; we see the same varying speeds when using HttpClient instead). Sniffing traffic shows no discernible differences between them, the HTTP headers are the same, etc. The only difference appears to be the client OS.
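
For illustration, the measurement boils down to roughly the following (a simplified sketch, not our actual tool; the URL and ticket are placeholders):

using System.Diagnostics;
using System.Net.Http;

// Read the response body to the end without storing it and compute the average throughput.
using var client = new HttpClient();
var url = "https://server.example/api/download/00000000-0000-0000-0000-000000000000"; // placeholder

var stopwatch = Stopwatch.StartNew();
using var response = await client.GetAsync(url, HttpCompletionOption.ResponseHeadersRead);
response.EnsureSuccessStatusCode();

await using var body = await response.Content.ReadAsStreamAsync();
var buffer = new byte[81920];
long totalBytes = 0;
int read;
while ((read = await body.ReadAsync(buffer)) > 0)
{
    totalBytes += read;
}
stopwatch.Stop();

Console.WriteLine($"{totalBytes / (1024.0 * 1024.0) / stopwatch.Elapsed.TotalSeconds:F1} MB/s");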

Comparing this to a plain web server (we tried both IIS and Apache) and to the old application, we get almost the maximum speed that particular server can give us (around 65 MB/s on a good day; to be honest, even that is less than what the 1 Gbit NIC should be capable of... but that's a different story). So we're wondering: where did the performance/speed go? Especially with pretty much every ASP.Net and ASP.Net Core release note boasting extra performance gains and improvements.

We're pretty much stuck there, because we have no idea how to debug this. According to some logging (implemented as a very simple ActionFilter that logs before/after with a Stopwatch), we spend the same time in our controller in both the old and the new code, but what happens beyond that point is vastly different. No dice with ANTS Performance Profiler and PerfView either, mainly because what we're looking for isn't visible in our own code... or more precisely, we don't even know what we should be looking for/at.
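
For reference, the logging filter is conceptually no more than the following sketch (simplified), which also means it only measures the action method itself, not the result execution that actually streams the file to the client:

using System.Diagnostics;
using Microsoft.AspNetCore.Mvc.Filters;

// Simplified sketch of the timing filter: measures the action only,
// not the time ASP.NET Core spends writing the response body afterwards.
public sealed class TimingActionFilter : IActionFilter
{
    public void OnActionExecuting(ActionExecutingContext context)
        => context.HttpContext.Items["__timer"] = Stopwatch.StartNew();

    public void OnActionExecuted(ActionExecutedContext context)
    {
        if (context.HttpContext.Items["__timer"] is Stopwatch stopwatch)
        {
            stopwatch.Stop();
            Console.WriteLine($"{context.ActionDescriptor.DisplayName}: {stopwatch.ElapsedMilliseconds} ms");
        }
    }
}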

The new Controller (method) looks basically like this:

[HttpHead]  
[HttpGet("{downloadTicket}")]  
public async Task<IActionResult> DownloadContent(Guid downloadTicket)  
{  
    var content = await FetchContent(downloadTicket);  
    return File(content.Stream, content.ContentType, content.FileName, enableRangeProcessing: true);  
}  

What FetchContent does is not really important; its implementation didn't change a whole lot (and according to profiling, it has barely any effect on runtime). content.Stream is typed as Stream, but internally it is always a FileStream. The rest is just metadata so we can make the client's life a bit nicer.
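
Since the stream is always a FileStream underneath, a variant like the following might be worth comparing at some point - handing ASP.NET Core the path instead of an already opened stream so the server can use its own file-sending support. This assumes a hypothetical content.FilePath property that FetchContent does not currently expose:

// Hypothetical variant for comparison only: return the file by path instead of by stream.
[HttpHead]
[HttpGet("{downloadTicket}")]
public async Task<IActionResult> DownloadContent(Guid downloadTicket)
{
    var content = await FetchContent(downloadTicket);
    return PhysicalFile(content.FilePath, content.ContentType, content.FileName, enableRangeProcessing: true);
}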

The old Controller looked like this:

[HttpGet]  
[Route("{downloadTicket}")]  
[FileDownload]  
public async Task<HttpResponseMessage> DownloadContent(Guid downloadTicket)  
{  
    var content = await FetchContent(downloadTicket);  
    return Request.GetStreamResponse(content.Stream, content.FileName);  
}  

(the FileDownloadAttribute is just an annotation for Swagger by the way)

TL;DR: both the old and the new service simply called ASP.Net methods to return a file. Nothing too fancy.

There is barely any noteworthy Middleware installed; at least none that should matter (Authentication, Authorization, CORS, Https Redirection, SignalR for Web Client updates, Swagger and an Exception Handler to make things look a bit nicer).
The only noteworthy thing is that during the upgrade, the dev chose to use http.sys rather than Kestrel; we're not too sure why at this point (and we're also not sure how to migrate some of the things done in UseHttpSys over to Kestrel, so we can't try switching at the moment).
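
For context, the hosting and pipeline setup is conceptually along these lines (a simplified sketch rather than our actual Program.cs; the exception handler path, the hub name/path and the authentication setup are placeholders):

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddSignalR();
builder.Services.AddCors();
builder.Services.AddAuthentication();        // actual scheme/configuration omitted
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

builder.WebHost.UseHttpSys(options =>
{
    // The http.sys-specific settings live here; these are the bits we would have to
    // translate to builder.WebHost.ConfigureKestrel(...) in order to compare the two servers.
});

var app = builder.Build();

app.UseExceptionHandler("/error");           // placeholder path
app.UseHttpsRedirection();
app.UseCors();
app.UseAuthentication();
app.UseAuthorization();
app.UseSwagger();
app.UseSwaggerUI();

app.MapControllers();
app.MapHub<UpdatesHub>("/updates");          // hypothetical hub name/path for the web client updates

app.Run();

// Hypothetical hub type, only so the sketch compiles.
public sealed class UpdatesHub : Microsoft.AspNetCore.SignalR.Hub { }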

We did try a variant of this suggestion and registered a custom middleware for this specific endpoint that pretty much only wraps the response body in a BufferedStream - and it has some effect (we couldn't really use a MemoryStream, since downloads may run to many gigabytes, which is far too much to buffer in memory, let alone in a MemoryStream). Most clients that were slower before get a decent speedup, but the faster clients drop down to the same speed. Fiddling around with the buffer size helps a little, but we're still far from native web server performance. Which would be OK as long as we can get close - but at the moment we're barely scratching the 50% mark, and it feels wrong to stop here and just accept that we can't break the 30-35 MB/s barrier on a server that is clearly capable of 65 MB/s.

public async Task InvokeAsync(HttpContext context)
{
    var originalBodyStream = context.Response.Body;
    const int bufferSize = 10 * 1024 * 1024;

    // Wrap the response body so writes are coalesced into larger chunks.
    using var responseBody = new BufferedStream(originalBodyStream, bufferSize);
    context.Response.Body = responseBody;

    await _next(context);

    await responseBody.FlushAsync();
    context.Response.Body = originalBodyStream;
}
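
The middleware is only wired up for the download route, roughly like this (the path and the class name here are placeholders, not the real ones):

// Only branch into the buffering middleware for the download endpoint.
app.UseWhen(
    context => context.Request.Path.StartsWithSegments("/api/download"),
    branch => branch.UseMiddleware<BufferedResponseMiddleware>());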

This makes me question whether there is indeed something wrong with our pipeline, but we don't see anything obvious that would interfere with the response.

Any pointers on what we could try?

Thanks in advance,
Emanuel

ASP.NET Core

1 answer

  1. Bruce (SqlWork.com) 77,476 Reputation points Volunteer Moderator
    2022-11-28T18:47:45.647+00:00

    I suspect (you should use a network trace tool to verify) that your clients are using range requests with the Core version of the service. This will require calling FetchContent() and buffering the response for each range request.

    You might want to use a range request on the FetchContent call, or use some caching scheme.
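
    As a quick sanity check without a full network trace, logging the incoming Range header server-side would also show whether that is what's happening - for example (a sketch; it assumes an injected ILogger named _logger):

    // Sketch: log incoming Range headers to confirm whether range requests are being made.
    [HttpHead]
    [HttpGet("{downloadTicket}")]
    public async Task<IActionResult> DownloadContent(Guid downloadTicket)
    {
        if (Request.Headers.TryGetValue("Range", out var range))
        {
            _logger.LogInformation("Range request: {Range}", range.ToString());
        }

        var content = await FetchContent(downloadTicket);
        return File(content.Stream, content.ContentType, content.FileName, enableRangeProcessing: true);
    }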

