File overwriting performance with SSDs
[context]
I have a use case where I write data at a very high data rate (> 4 GB/s) to files. Disk write performance is critical.
The files are used as a circular buffer: when the last file of the pool is full, I overwrite the oldest one, and so on.
Once I had addressed the usual SSD pitfalls needed to get a reliable write speed over the long run (using RAID 0, accounting for the "fast" SSD cache that absorbs the first few GB of data, using async I/O with completion routines to enqueue writes in kernel space...), I had a working, efficient base code; a simplified sketch of the write path follows.
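This is a minimal sketch of that write path (the path, chunk size, and single in-flight write are placeholders; the real code keeps a deep queue of writes in flight):

```cpp
#include <windows.h>
#include <cstdio>

// Completion routine: runs via APC on the issuing thread while it is
// in an alertable wait.
static void CALLBACK OnWriteDone(DWORD err, DWORD bytes, LPOVERLAPPED ov) {
    if (err != 0) std::fprintf(stderr, "write failed: %lu\n", err);
}

int main() {
    // Unbuffered, write-through, overlapped handle: writes queue in
    // kernel space and bypass the system cache.
    HANDLE h = CreateFileW(L"D:\\ring\\file_000.bin", GENERIC_WRITE, 0,
                           nullptr, CREATE_ALWAYS,
                           FILE_FLAG_OVERLAPPED | FILE_FLAG_NO_BUFFERING |
                           FILE_FLAG_WRITE_THROUGH, nullptr);
    if (h == INVALID_HANDLE_VALUE) return 1;

    // FILE_FLAG_NO_BUFFERING requires sector-aligned buffers and sizes;
    // VirtualAlloc returns page-aligned memory, which satisfies that.
    const DWORD kChunk = 1 << 20;  // 1 MiB, placeholder
    void* buf = VirtualAlloc(nullptr, kChunk, MEM_COMMIT | MEM_RESERVE,
                             PAGE_READWRITE);

    OVERLAPPED ov = {};            // Offset/OffsetHigh = 0: write at start
    if (!WriteFileEx(h, buf, kChunk, &ov, OnWriteDone)) return 2;

    SleepEx(INFINITE, TRUE);       // alertable wait so the APC can fire
    CloseHandle(h);
    return 0;
}
```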
But I observed a strange behaviour:
write speed drops when I start overwriting the oldest files. I can understand that; it is presumably related to overwriting non-empty SSD cells, which takes more time. (I know about TRIM and the like.)
[the question]
My question is about the best way to work around this, because I do not fully understand how to tune the OS behaviour here.
Initially, I planned to never close the files and just rewind/truncate them when recycling the oldest one before overwriting it. But that does not help performance. I found that closing/deleting the files and creating new ones on demand is more efficient (both strategies are sketched below). This seems unnatural to me, because it adds filesystem overhead.
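For clarity, the two recycling strategies look roughly like this (the path and flags are placeholders, and I assume all in-flight writes on the handle have already completed):

```cpp
#include <windows.h>

// Strategy A: keep the handle open, rewind to offset 0 and truncate.
void RecycleByTruncate(HANDLE h) {
    LARGE_INTEGER zero = {};
    SetFilePointerEx(h, zero, nullptr, FILE_BEGIN);  // rewind
    SetEndOfFile(h);                                 // truncate to 0 bytes
}

// Strategy B: close and delete the file, then create a fresh one.
// This is the variant that turned out to be faster for me.
HANDLE RecycleByRecreate(HANDLE h, const wchar_t* path) {
    CloseHandle(h);
    DeleteFileW(path);
    return CreateFileW(path, GENERIC_WRITE, 0, nullptr, CREATE_ALWAYS,
                       FILE_FLAG_OVERLAPPED | FILE_FLAG_NO_BUFFERING |
                       FILE_FLAG_WRITE_THROUGH, nullptr);
}
```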
I don't really understand why truncating is not enough: as far as I know, the filesystem can use fresh SSD cells to append content to the newly zero-length file; it does not have to reuse the previous cells, so a new file should not be "more empty" than a truncated one.
I tried issuing FSCTL_FILE_LEVEL_TRIM after the rewind/truncate, but it had no measurable effect.
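For reference, this is roughly the call I issued (the offset/length are placeholders; as I understand it, partial clusters in the range are silently ignored, so it must cover whole clusters to reach the device):

```cpp
#include <windows.h>
#include <winioctl.h>

// Ask the filesystem to forward a TRIM for a byte range of the file.
// Requires Windows 8+ and a filesystem/device that support trim.
BOOL TrimFileRange(HANDLE h, ULONGLONG offset, ULONGLONG length) {
    FILE_LEVEL_TRIM trim = {};       // Key is reserved, left at 0
    trim.NumRanges = 1;
    trim.Ranges[0].Offset = offset;  // placeholder range
    trim.Ranges[0].Length = length;

    FILE_LEVEL_TRIM_OUTPUT out = {};
    DWORD returned = 0;
    return DeviceIoControl(h, FSCTL_FILE_LEVEL_TRIM,
                           &trim, sizeof(trim),
                           &out, sizeof(out),
                           &returned, nullptr);
}
```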
Did I miss something? Is there an advanced filesystem call that could help?