Hadoop Likes Big Files
One of the frequently overlooked yet essential best practices for Hadoop is to prefer fewer, larger files over many small ones. How small is too small and how many is too many? And how do you stitch together all those small Internet of Things files into files "big enough" for Hadoop to process efficiently?
The Problem
One performance best practice for Hadoop is to have fewer large files rather than large numbers of small files. A related best practice is not to partition “too much”. Part of the reason for not over-partitioning is that it generally leads to larger numbers of smaller files.
Too small is anything smaller than the HDFS block size (chunk size); realistically, even files that are only a few times larger than the chunk size are still on the small side. A very rough rule of thumb is that files should be at least 1GB each and that a table should have no more than somewhere around 10,000 files. These numbers, especially the maximum number of files per table, vary depending on many factors, but they give you a reference point. The 1GB figure is based on multiples of the chunk size, while the file-count figure is honestly a bit of a guess based on a typical small cluster.
Why Is It Important?
One reason for this recommendation is that Hadoop’s name node service keeps track of all the files and of where each file's internal chunks are stored. The more files it has to track, the more memory it needs on the head node and the longer it takes to build a job execution plan. The number and size of files also affect how memory is used on each node.
Let’s say your chunk size is 256MB. That’s the maximum size of each piece of a file that Hadoop will store per node. So if you have 10 nodes and a single 1GB file, it is split into 4 chunks of 256MB each and stored on 4 of those nodes (I’m ignoring the replication factor for this discussion). If you have 1000 files of 1MB each (still a total data size of ~1GB), then every one of those files is a separate chunk and 1000 chunks are spread across those 10 nodes. NOTE: In Azure and WASB this happens somewhat differently behind the scenes – the data isn’t physically chunked up when initially stored but rather chunked up at the time a job runs.
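As a quick illustration of that arithmetic, here is a minimal Java sketch using the example numbers above (256MB chunks, one 1GB file versus 1000 files of 1MB each) to count how many chunks end up being tracked and spread across the nodes. It is just back-of-the-envelope math, not any Hadoop API.

```java
public class ChunkCountExample {
    // Number of chunks (blocks) a file of the given size occupies.
    static long chunks(long fileBytes, long chunkBytes) {
        return Math.max(1, (fileBytes + chunkBytes - 1) / chunkBytes); // ceiling division
    }

    public static void main(String[] args) {
        long chunkSize = 256L * 1024 * 1024;   // 256MB chunk (block) size
        long oneGb = 1024L * 1024 * 1024;
        long oneMb = 1024L * 1024;

        long bigFileChunks   = chunks(oneGb, chunkSize);          // 1 x 1GB file   -> 4 chunks
        long smallFileChunks = 1000 * chunks(oneMb, chunkSize);   // 1000 x 1MB     -> 1000 chunks

        System.out.println("Single 1GB file:  " + bigFileChunks + " chunks to track");
        System.out.println("1000 x 1MB files: " + smallFileChunks + " chunks to track");
    }
}
```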
With the single 1GB file the name node has 5 things to keep track of – the logical file plus the 4 physical chunks and their associated physical locations. With 1000 smaller files the name node has to track 1000 logical files plus 1000 physical chunks and their physical locations. That uses more memory and results in more work when the head node service uses the file location information to build out the plan for how it will split any Hadoop job into tasks across the many nodes. When we’re talking about systems that often hold TBs or PBs of data, the difference between small and large files can add up quickly.
The other problem comes when the data is read by a Hadoop job. When the job runs on each node, it loads the files the task tracker identified for it into memory on that local node (in WASB the chunking is done at this point). When there are more files to read for the same amount of data, each task within each job does more work and runs more slowly. Sometimes you will see hard errors when operating system limits on the number of open files are hit. There is also more internal work involved in reading the larger number of files and combining the data.
Stitching
There are several options for stitching files together.
- Combine the files as they land, using the code that moves the files. This is the most performant and efficient method in most cases (a minimal sketch appears after this list).
- INSERT into new Hive tables (directories), which creates larger files under the covers. The output file size can be controlled with settings such as hive.merge.smallfiles.avgsize and hive.merge.size.per.task.
- Use Pig, which can combine many small input files into bigger splits (the maximum combined split size can be tuned with pig.maxCombinedSplitSize).
- Use the HDFS FileSystem concat API https://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html#concat (see the second sketch after this list).
- Write custom stitching code and package it as a JAR.
- Enable the Hadoop Archive (HAR). This is not very efficient for this scenario, but I am including it for completeness.
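For the first option, here is a minimal sketch, assuming a hypothetical local landing folder and HDFS target path, of stitching many small files into a single large HDFS file as they are moved in, instead of uploading each one separately. It assumes each small file ends with its own record delimiter (for example a newline).

```java
import java.io.File;
import java.io.FileInputStream;
import java.io.InputStream;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FSDataOutputStream;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.IOUtils;

public class StitchOnLanding {
    public static void main(String[] args) throws Exception {
        File landingDir = new File(args[0]);   // local folder of small files (hypothetical)
        Path target = new Path(args[1]);       // one large HDFS file, e.g. /data/iot/2015-06-01.dat

        File[] smallFiles = landingDir.listFiles();
        if (smallFiles == null || smallFiles.length == 0) {
            System.err.println("No files found in " + landingDir);
            return;
        }

        FileSystem fs = FileSystem.get(new Configuration());
        try (FSDataOutputStream out = fs.create(target, true)) {
            for (File f : smallFiles) {
                try (InputStream in = new FileInputStream(f)) {
                    // Append this small file's bytes onto the single large file.
                    IOUtils.copyBytes(in, out, 4096, false);
                }
            }
        }
        System.out.println("Wrote " + smallFiles.length + " small files into " + target);
    }
}
```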
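For the concat API option, here is another minimal sketch, again with hypothetical paths, that asks HDFS to stitch all the files in a directory onto the first file as a metadata-only operation (no data is copied). HDFS places restrictions on concat – the sources typically must share a directory and block size with the target, and older versions expect block-aligned sources – so check the documentation for your version.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class ConcatSmallFiles {
    public static void main(String[] args) throws Exception {
        FileSystem fs = FileSystem.get(new Configuration());

        // Hypothetical HDFS directory containing only the small files to stitch.
        Path dir = new Path(args[0]);
        FileStatus[] parts = fs.listStatus(dir);

        // Use the first file as the target and append the rest onto it.
        Path target = parts[0].getPath();
        Path[] sources = new Path[parts.length - 1];
        for (int i = 1; i < parts.length; i++) {
            sources[i - 1] = parts[i].getPath();
        }

        // concat() stitches the source blocks onto the target in the name node's metadata.
        fs.concat(target, sources);
        System.out.println("Stitched " + sources.length + " files into " + target);
    }
}
```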
There are several writeups out there that address the details of each of these methods, so I won't repeat them here.
- Merging small files on HDInsight https://blogs.msdn.com/b/mostlytrue/archive/2014/04/10/merging-small-files-on-hdinsight.aspx which uses a Java MapReduce JAR https://github.com/mooso/smallfilesmerge.
- Quick Tip for Compressing Many Small Text Files within HDFS via Pig https://dennyglee.com/2014/01/06/quick-tip-for-compressing-many-small-text-files-within-hdfs-via-pig/.
- FileCrush https://github.com/edwardcapriolo/filecrush.
- HDFS FileSystem Concat API https://hadoop.apache.org/docs/current/api/org/apache/hadoop/fs/FileSystem.html#concat
- CombineFileInputFormat (splits). This packs many small files into larger input splits, but it may not work well with really large numbers of files and has to be applied EVERY time a job is run (a minimal sketch appears after this list).
  - https://www.ibm.com/developerworks/library/bd-hadoopcombine/index.html
  - Process Small Files on Hadoop Using CombineFileInputFormat (1) https://www.idryman.org/blog/2013/09/22/process-small-files-on-hadoop-using-combinefileinputformat-1/
- Dealing with Hadoop's small files problem https://snowplowanalytics.com/blog/2013/05/30/dealing-with-hadoops-small-files-problem/ “aggregating with the small files first reduced total processing time from 2 hours 57 minutes to just 9 minutes - of which 3 minutes was the aggregation, and 4 minutes was running our actual Enrichment process. That’s a speedup of 1,867%.”
- The Small Files problem in Hadoop https://piglog4j.blogspot.com/2013/06/the-small-files-problem-in-hadoop.html
- Hadoop Archive: File Compaction for HDFS https://developer.yahoo.com/blogs/hadoop/hadoop-archive-file-compaction-hdfs-461.html
- The Small Files Problem https://blog.cloudera.com/blog/2009/02/the-small-files-problem/ “Reading through files in a HAR is no more efficient than reading through files in HDFS, and in fact may be slower since each HAR file access requires two index file reads as well as the data file read (see diagram). And although HAR files can be used as input to MapReduce, there is no special magic that allows maps to operate over all the files in the HAR co-resident on a HDFS block.”
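To make the CombineFileInputFormat option above concrete, here is a minimal sketch of a map-only MapReduce driver that uses Hadoop 2.x's CombineTextInputFormat (a CombineFileInputFormat subclass for text input) so that many small text files are packed into a few large splits and rewritten as fewer, larger output files. The class names, paths, and the 256MB split cap are illustrative assumptions, not a prescription.

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.NullWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.CombineTextInputFormat;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class CombineSmallFilesJob {

    // Pass-through mapper: re-emits each line so the map-only job rewrites
    // many small inputs as a handful of larger output files.
    public static class PassThroughMapper
            extends Mapper<LongWritable, Text, NullWritable, Text> {
        @Override
        protected void map(LongWritable key, Text value, Context context)
                throws java.io.IOException, InterruptedException {
            context.write(NullWritable.get(), value);
        }
    }

    public static void main(String[] args) throws Exception {
        Job job = Job.getInstance(new Configuration(), "combine-small-files");
        job.setJarByClass(CombineSmallFilesJob.class);

        job.setMapperClass(PassThroughMapper.class);
        job.setNumReduceTasks(0);                 // map-only job
        job.setOutputKeyClass(NullWritable.class);
        job.setOutputValueClass(Text.class);

        // Pack small text files into splits of at most ~256MB (tune to your block size).
        job.setInputFormatClass(CombineTextInputFormat.class);
        CombineTextInputFormat.setMaxInputSplitSize(job, 256L * 1024 * 1024);

        FileInputFormat.addInputPath(job, new Path(args[0]));   // many small files
        FileOutputFormat.setOutputPath(job, new Path(args[1])); // fewer, larger files

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

The downside, as noted above, is that the combining happens per job: every job that reads this data has to use the combining input format (or you run a job like this once to rewrite the data and point later jobs at the output).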
The key here is to work with fewer, larger files as much as possible in Hadoop. The exact steps to get there will vary depending on your specific scenario.
I hope you enjoyed this small bite of big data!
Cindy Gross – Neal Analytics: Big Data and Cloud Technical Fellow
@SQLCindy | @NealAnalytics | CindyG@NealAnalytics.com | https://smallbitesofbigdata.com