I took over a large, complex Windows service in which the original developers reused object references. Their mistake was not testing the code under adverse conditions, which ultimately led to bringing down a critical server.
By adverse conditions I mean the following: the service has two processes, one of which reads in a text file averaging 3,000 lines, keeps those lines in memory, and then sends the resulting 3,000 records to an external entity. When the pandemic hit, the file jumped to between 20,000 and 50,000 lines.
I had to refactor the code to properly handle any number of incoming lines, whether 1 or 100,000. I can’t share the code, as it’s a large code base spanning several class library projects, dependency injection, and the Quartz scheduling library.
In short, my refactor keeps data in memory only until it’s finished with, then disposes of it through better coding practices, and it uses temp database tables that are set up to be removed even if a process were to crash.
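As an illustration of the streaming idea (a minimal sketch in Java rather than the C#/.NET code base described above, since I can’t share the original; all names here are hypothetical): read and hand off lines in bounded batches instead of loading the whole file, and let try-with-resources dispose of the reader even if processing fails mid-file.

```java
import java.io.BufferedReader;
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.ArrayList;
import java.util.List;

public class BatchProcessor {
    // Bounded batch size: memory use stays flat whether the file
    // has 3,000 lines or 100,000.
    private static final int BATCH_SIZE = 1_000;

    // Streams the file line by line; at most one batch is held in memory.
    public static long processFile(Path input) throws IOException {
        long processed = 0;
        List<String> batch = new ArrayList<>(BATCH_SIZE);
        // try-with-resources guarantees the reader is closed
        // even if an exception is thrown partway through.
        try (BufferedReader reader = Files.newBufferedReader(input)) {
            String line;
            while ((line = reader.readLine()) != null) {
                batch.add(line);
                if (batch.size() == BATCH_SIZE) {
                    processed += sendBatch(batch);
                    batch.clear(); // drop references so the GC can reclaim them
                }
            }
            if (!batch.isEmpty()) {
                processed += sendBatch(batch); // final partial batch
            }
        }
        return processed;
    }

    // Hypothetical stand-in for the call to the external entity.
    private static int sendBatch(List<String> records) {
        return records.size();
    }
}
```

The point of the sketch is the shape, not the names: the original all-at-once read made memory usage proportional to file size, while the batched version caps it at one batch regardless of input.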
Final note: it’s critical to have not only unit tests and memory profilers but also load testing on a staging server that mirrors the production server, since development environments may have fewer resources and more open permissions.
Since this service, and all of our services and web applications, serves citizens of the state of Oregon applying for unemployment benefits, we cannot afford more than one hour a week of planned downtime.
Way too many developers never consider memory leakage, and as a result they reap application crashes that could have been prevented by thinking through more than just the business requirements. It’s becoming a lost art: back in the day, those who coded in C had no automatic memory management the way we do today, so they had to reason carefully about what is the developer’s responsibility and what is the GC’s responsibility.
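Even with a GC, one of the most common leak patterns is a long-lived collection that keeps accumulating references the application no longer needs, so the GC can never reclaim them. A hedged sketch in Java (the class and method names are hypothetical, not from the code base described above):

```java
import java.util.HashMap;
import java.util.Map;

public class ResultCache {
    // A static, unbounded map: every entry added here stays reachable
    // for the lifetime of the process, so the GC can never collect it.
    private static final Map<String, String> CACHE = new HashMap<>();

    public static void remember(String key, String value) {
        CACHE.put(key, value); // leak: nothing ever removes entries
    }

    public static String lookup(String key) {
        return CACHE.get(key);
    }

    public static int size() {
        return CACHE.size();
    }

    // One simple remedy: explicitly clear (or evict) once a unit of
    // work completes, rather than assuming the GC will handle it.
    // The GC frees unreachable objects; deciding what should stop
    // being reachable is still the developer's job.
    public static void clearAfterBatch() {
        CACHE.clear();
    }
}
```

In a real service the fix might be a bounded or expiring cache rather than a blanket clear, but the responsibility split is the same: the GC reclaims what you release, not what you forget about.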