Why adding more memory won't resolve OutOfMemoryExceptions

Quite often I am met with the incorrect assumption that out of memory exceptions can be resolved by adding more memory. I can understand why you'd think that, but in most cases it won't matter at all. Additional RAM may improve performance, but an extra 8 GB of memory won't increase the amount of memory available to a .NET application.

Here's what you need to know:

If you have a 32-bit process then you have 2 GB of memory available.

That's the way it is. It doesn't matter whether you have 512 MB or 8 GB of RAM. A 32-bit system can address 4 GB of virtual memory. 2 GB of that is reserved for the operating system, leaving 2 GB for each user-mode process. If there is no physical RAM available, Windows will use the page file and write infrequently used memory to disk instead. So additional RAM will cause fewer page faults, but the address space available to the process is still only 2 GB.
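
The arithmetic above is easy to check for yourself. Here's a small Python sketch (just an illustration, not .NET code) that shows how pointer size determines the address space, and why the default user-mode share comes out at 2 GB:

```python
import struct

# The pointer size of the *process* determines its address space,
# regardless of how much physical RAM the machine has.
pointer_bits = struct.calcsize("P") * 8
print(pointer_bits)  # 32 in a 32-bit process, 64 in a 64-bit one

addressable = 2 ** 32        # a 32-bit pointer can address 4 GB
user_mode = addressable // 2 # the OS reserves half, leaving 2 GB
print(user_mode / 2 ** 30)   # → 2.0
```

Running this in a 64-bit process prints 64 for the pointer size, which is exactly why moving to 64-bit (not adding RAM) is what raises the limit.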

(Actually you can tune this in boot.ini. The /3GB switch lets you take 1 GB from the OS and raise the limit for user-mode processes from 2 GB to 3 GB. It still requires some additional tweaking though. I suggest looking at https://support.microsoft.com/kb/820108/en-us for more information on this.)
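
For reference, the switch goes on the OS entry in boot.ini. The exact ARC path below is just an illustration and will differ on your machine:

```ini
[operating systems]
multi(0)disk(0)rdisk(0)partition(1)\WINDOWS="Microsoft Windows XP" /fastdetect /3GB
```

Note that the process itself must also be marked large-address-aware before it can actually use the extra gigabyte, which is part of the "additional tweaking" the KB article covers.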

Okay, so now that we know this it's time for the next vital piece of information:

Your process will most likely be out of memory when it reaches ~800 MB

- What!? How is that possible!? Didn't I just say that we have 2 GB available? I did, but there is another thing you really need to know about memory allocations:

Memory allocations need to be contiguous.

If you want to allocate a 100 MB string, then you need 100 MB of contiguous space. Memory allocations don't work the way the file system does. If we were talking about files, we could save part of the file in one place, another part in another place, and so on, but memory != file system. Open up the disk defragmenter and take a look at your hard drive. You may have 100 GB available on your 250 GB hard drive, but how much of that is contiguous? The address space of your process will eventually become just as fragmented as your hard drive. And as your memory becomes more and more fragmented, it gets harder and harder to squeeze in those large allocations.
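
You can see the effect in a toy model. This Python sketch (a hypothetical heap, not how the CLR actually lays out memory) treats the address space as a row of cells and shows how an allocation can fail even when the total free space would be more than enough:

```python
def largest_free_run(heap):
    """Return the length of the largest contiguous run of free (0) cells."""
    best = run = 0
    for cell in heap:
        if cell == 0:      # free cell: extend the current run
            run += 1
            best = max(best, run)
        else:              # allocated cell: the run is broken
            run = 0
    return best

# 0 = free, 1 = allocated. 7 of 10 cells are free in total...
heap = [0, 0, 1, 0, 0, 0, 1, 0, 0, 1]
request = 5

print(heap.count(0) >= request)         # True: enough free space overall
print(largest_free_run(heap) >= request)  # False: the largest hole is only 3
```

Total free space is plenty, but the allocation still fails, which is exactly the restaurant situation described below.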

Let's say you and your party of four walk into a restaurant. The restaurant is packed with four-seat tables, so finding a seat shouldn't be a problem. Unfortunately, there are one to three people sitting at each table. So even though only half the seats are taken, you still can't get a table of your own. The usher gets an OutOfSeatsException as he tries to seat you. :-)

The rule of thumb is that you'll begin seeing OutOfMemoryExceptions at around 800 MB. Of course this isn't set in stone. It depends entirely on the average size of the objects you allocate and how fragmented your memory becomes. You might make it up to 1.2 GB, or you might see the exceptions as early as 550 MB.


So what can be done?

Well, if you're in the planning stage of a web service that will load 4 GB images, process them somehow, and then return them to the client, you'll want to either:

  1. Work with streams and make sure you never load the full image into memory
  2. Set aside the money to invest in a 64 bit system
  3. Rethink
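
Option 1 is usually the cheapest. The idea is to read, process, and write in fixed-size chunks so that only one chunk is ever in memory, no matter how big the input is. A minimal Python sketch (the `transform` function is a hypothetical placeholder for whatever per-chunk processing you need):

```python
import io

def transform(chunk: bytes) -> bytes:
    # Placeholder: stand-in for the real per-chunk processing.
    return chunk

def process_in_chunks(src, dst, chunk_size=64 * 1024):
    """Copy src to dst through transform(), holding at most
    chunk_size bytes in memory at any one time."""
    while True:
        chunk = src.read(chunk_size)
        if not chunk:  # empty read means end of stream
            break
        dst.write(transform(chunk))

# Demo with in-memory streams; in practice src/dst would be
# file or network streams.
src = io.BytesIO(b"x" * 200_000)
dst = io.BytesIO()
process_in_chunks(src, dst)
print(len(dst.getvalue()))  # 200000
```

The same pattern applies in .NET with Stream.Read/Stream.Write: the peak memory use is bounded by the chunk size instead of the payload size.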

I'm currently writing a bigger post on how the managed heap and garbage collector work, so expect more to come on the subject.

/ Johan