fwrite issues with large data write

Question

Tuesday, September 30, 2008 3:00 PM

I am using fwrite to store a large array of structures. I am doing 64-bit development, and when I pass the number of records as a 64-bit value, fwrite hangs; it cannot handle a large number of records. Is there any solution for doing disk I/O with a large number of records in 64-bit code?

For example, I am trying to write 11489112868 bytes in one shot using fwrite, as follows:

records = 11489112868 / 512
size of each record = 512 bytes

fwrite(buffer, sizeof(record), records, fp);   // fp is the FILE*

This takes a very long time and almost hangs. Since fwrite accepts 64-bit values for both the element size and the number-of-records arguments, why does it hang when I use a 64-bit value for the number of records?

All replies (8)

Tuesday, September 30, 2008 5:19 PM ✅ Answered

Right, I agree; it takes more and more time with no response. So I think the only way is to split the buffer and write it to disk in pieces?
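
For what it's worth, a minimal sketch of that splitting approach (the helper name fwrite_chunked and the 64 MB chunk size are my own choices, not anything from the CRT); each underlying fwrite call then stays far below the 4 GB boundary:

    #include <cstdio>
    #include <algorithm>

    // Write `count` elements of `size` bytes each in bounded chunks, so a
    // single fwrite call never gets anywhere near 4 GB of data.
    size_t fwrite_chunked(const void* buffer, size_t size, size_t count, FILE* fp)
    {
        if (size == 0 || count == 0)
            return 0;

        const size_t max_chunk_bytes = 64ULL * 1024 * 1024;   // 64 MB per call
        size_t elems_per_chunk = max_chunk_bytes / size;
        if (elems_per_chunk == 0)
            elems_per_chunk = 1;                              // element bigger than one chunk

        const char* p = static_cast<const char*>(buffer);
        size_t written = 0;                                   // elements written so far
        while (written < count)
        {
            size_t chunk = std::min(count - written, elems_per_chunk);
            size_t n = std::fwrite(p + written * size, size, chunk, fp);
            written += n;
            if (n < chunk)                                    // short write: error or disk full
                break;
        }
        return written;
    }

It is called in place of the original one-shot call, e.g. fwrite_chunked(Sorting_Buffer, sizeof(OwnStructure), Quotes_In_Sorting_Buffer, Work_File1).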


Tuesday, September 30, 2008 3:18 PM

Can you show your code here?
Thanks


Rupesh Shukla


Tuesday, September 30, 2008 3:21 PM

Out of curiosity, how much memory does your machine have?


Tuesday, September 30, 2008 4:04 PM

15 GB of RAM, and I am sizing the buffer based on the available physical memory:

MEMORYSTATUSEX Mem_Stat;
Mem_Stat.dwLength = sizeof(MEMORYSTATUSEX);
DWORDLONG Memory_To_Use = RAM_GB * 1024ULL * 1024 * 1024 - 1;  // translate gigabytes to bytes (64-bit math avoids int overflow)
if (GlobalMemoryStatusEx(&Mem_Stat))
{
    Memory_To_Use = Mem_Stat.ullAvailPhys;     // amount of available physical memory
    cout << "Available physical memory is " << Memory_To_Use / 1024 << "K\n";
}
Memory_To_Use /= 4;                            // we'll use 3/4 of it
Memory_To_Use *= 3;

Quotes_In_Sorting_Buffer = Memory_To_Use / sizeof(OwnStructure);

Sorting_Buffer = new OwnStructure[Quotes_In_Sorting_Buffer];

fwrite(Sorting_Buffer, sizeof(OwnStructure), Quotes_In_Sorting_Buffer, Work_File1);

If Quotes_In_Sorting_Buffer * sizeof(OwnStructure) goes above 2^32 (4 GB), it hangs.


Tuesday, September 30, 2008 4:12 PM

Trying out your new PC?  It is still the same slow hard drive.  Writing 11.5 Gigabytes to a disk takes a good while, especially since it can never fit in the lazy write-back cache.


Hans Passant.


Monday, November 30, 2009 5:02 AM | 1 vote

I think nobody ever tested fwrite with more than 4 GB of data, because Microsoft's code loops forever.

MSVC 2008, 64-bit project:

    fwrite( p, sizeof(int), num, fout );

num is 1024*1024*1024
sizeof(int) is 4

fwrite locks the stream and calls
size_t __cdecl _fwrite_nolock

There is a nice loop in there, where (bufsize is 4096):

nbytes = ( bufsize ? (unsigned)(count - count % bufsize) : (unsigned)count );

count at this point is 4*1024*1024*1024

so

nbytes = (unsigned)(4*1024*1024*1024) = 0

It tries to write 0 bytes, subtracts that from count (no change, of course), and spins in an infinite tight loop.

If I try to write not 4 GB of data but, say, 5 GB, then

nbytes = ( bufsize ? (unsigned)(count - count % bufsize) : (unsigned)count );

nbytes is now 1 GB; it writes 1 GB, then count is 5 GB - 1 GB = 4 GB, and (unsigned)4 GB is 0: infinite loop.

So no matter what the size is, if it's above 4 GB (I originally tried to write 10 GB), fwrite writes out whatever exceeds the nearest multiple of 4 GB (in the case of 10 GB, it writes the 2 GB above 8 GB). Then count is left as an exact multiple of 4 GB (say 8 GB), and it does

nbytes = (unsigned)(8*1024*1024*1024) = 0

and tight-loops forever.

I suspect that fread may have the same issue....
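
A tiny standalone sketch (not the CRT source) to see the truncating cast described above for yourself:

    #include <cstdio>

    int main()
    {
        unsigned long long count = 4ULL * 1024 * 1024 * 1024;   // 4 GB
        unsigned int nbytes = (unsigned int)count;              // keeps only the low 32 bits
        std::printf("count = %llu, (unsigned)count = %u\n", count, nbytes);  // prints 0
    }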


Monday, October 27, 2014 8:10 AM

So, what's the solution for writing a file larger than 4 GB using fwrite?

I am using fwrite and fread because it's easy to choose the file mode when opening the same file: fresh (rewrite) mode with "wb", or update mode with "rb+". But the large file size is a deep issue for me.


Wednesday, February 11, 2015 7:04 AM

You can write your own function to copy/delete/move the extra-large file. The MS default commands will freeze or choke the network on large files. In my project, I move large media files in small chunks at a time. For example, I regularly use 10 MB sub-chunks (one per iteration) for 23+ GB media files. You can also control the speed of the file transfer by adjusting the sub-chunk size. Use a batch command window to stack up the incoming commands.
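
A minimal sketch of that chunked-copy idea using plain stdio (the function name copy_in_chunks and the 10 MB default are illustrative, not from the post above):

    #include <cstdio>
    #include <vector>

    // Copy src to dst in fixed-size chunks, so no single fread/fwrite call
    // ever approaches the 4 GB boundary.
    bool copy_in_chunks(const char* src, const char* dst,
                        size_t chunk_bytes = 10 * 1024 * 1024)   // 10 MB per iteration
    {
        FILE* in = std::fopen(src, "rb");
        if (!in) return false;
        FILE* out = std::fopen(dst, "wb");
        if (!out) { std::fclose(in); return false; }

        std::vector<char> buf(chunk_bytes);
        bool ok = true;
        size_t n;
        while ((n = std::fread(buf.data(), 1, buf.size(), in)) > 0)
        {
            if (std::fwrite(buf.data(), 1, n, out) != n) { ok = false; break; }
        }
        std::fclose(in);
        std::fclose(out);
        return ok;
    }

Making the chunk size a parameter also lets you throttle the transfer speed, as described above.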