We have already updated the block size, and after a few days of monitoring it seems to help the heap perform better.
Java API: BlobInputStream default values might cause an out-of-memory error
Hi
I have noticed that when using BlobInputStream with the default BlobInputStreamOptions, the JVM might crash with an out-of-memory error.
I use a JVM with a 12 GB heap; it crashed after a couple of days.
From the GC log I see many humongous allocations: the G1 heap region size we use is 4 MB, but the default BlobInputStream block size is also 4 MB, which does not fit within a region of that size.
I wonder whether I need to fine-tune the block size to match my JVM args, or whether there is another Azure Blob Storage Java API I should use instead.
Following is how I use it:
BlobInputStreamOptions biso = new BlobInputStreamOptions();
biso.setRequestConditions(new BlobRequestConditions().setIfNoneMatch((String) moreData));
BlobInputStream blob = remoteBlobClient.openInputStream(biso);
Maybe the better approach is to set the block size explicitly:
BlobInputStreamOptions biso = new BlobInputStreamOptions();
biso.setBlockSize(1024*1024).setRequestConditions(new BlobRequestConditions().setIfNoneMatch((String) moreData));
BlobInputStream blob = remoteBlobClient.openInputStream(biso);
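For context on why the smaller block size avoids the humongous allocations: G1 classifies any single allocation of at least half a region as humongous, so with 4 MB regions the default 4 MB download buffer is always humongous, while a 1 MB buffer is not. The helper below is a minimal sketch of that arithmetic (the class and method names are hypothetical, not part of the Azure SDK), assuming the standard G1 half-region threshold:

```java
public class BlockSizeHelper {
    // G1 treats an allocation as "humongous" when it is at least half a region.
    static boolean isHumongous(long allocationBytes, long regionBytes) {
        return allocationBytes >= regionBytes / 2;
    }

    // Pick the largest power-of-two block size strictly below the
    // humongous threshold, starting from 1 MB.
    static int safeBlockSize(long regionBytes) {
        long limit = regionBytes / 2; // humongous threshold
        int size = 1024 * 1024;
        while ((long) size * 2 < limit) {
            size *= 2;
        }
        return size;
    }

    public static void main(String[] args) {
        long region = 4L * 1024 * 1024; // matches -XX:G1HeapRegionSize=4m
        // The default 4 MB block is humongous for 4 MB regions.
        System.out.println(isHumongous(4L * 1024 * 1024, region)); // true
        // A 1 MB block stays below the 2 MB threshold.
        System.out.println(safeBlockSize(region)); // 1048576
    }
}
```

The value returned for a 4 MB region is exactly the 1024*1024 used in the snippet above, which is why the smaller block size made the humongous allocations disappear from the GC log.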