Hadoop Binary Streaming and PDF File Inclusion

In a previous post I talked about Hadoop Binary Streaming for processing Microsoft Office Word documents. However, given their popularity, I thought support for Adobe PDF documents would also be beneficial. To this end I have updated the source code to support processing of both “.docx” and “.pdf” documents.


To support reading PDFs I have used the open source library provided by iText (https://itextpdf.com/). iText is a library that allows you to read, create, and manipulate PDF documents (https://itextpdf.com/download.php). The original code was written in Java, but a .NET port, iTextSharp, is also available (https://sourceforge.net/projects/itextsharp/files/).

Of these libraries I use only the PdfReader class, from the core library. This class exposes the page count, and the document author can be read from its Info dictionary.
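As a quick sketch of this usage (the file path here is purely illustrative, not from the original code):

```fsharp
open iTextSharp.text.pdf

// Read the page count and author metadata from a PDF
// (the file path is a hypothetical example)
let reader = new PdfReader(@"C:\Samples\sample.pdf")
let pages = reader.NumberOfPages
let author =
    if reader.Info.ContainsKey("Author") then reader.Info.["Author"]
    else "unknown author"
reader.Close()
```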

To use the library in Hadoop one just has to specify a file property for the iTextSharp core library:

-file "C:\Reference Assemblies\itextsharp.dll"

This assumes the downloaded and extracted DLL has been copied to and referenced from the “Reference Assemblies” folder.

Source Code Changes

To support PDF documents, only two changes to the code were necessary.

Firstly, a new Mapper was defined that processes a PdfReader type and returns the author(s) and page count for the document:

namespace FSharp.Hadoop.MapReduce

open System

open iTextSharp.text
open iTextSharp.text.pdf

// Calculates the pages per author for a Pdf document
module OfficePdfPageMapper =

    let authorKey = "Author"
    let unknownAuthor = "unknown author"

    let getAuthors (document:PdfReader) =
        if document.Info.ContainsKey(authorKey) then
            let creators = document.Info.[authorKey]
            if String.IsNullOrWhiteSpace(creators) then
                [| unknownAuthor |]
            else
                // For PDF documents perform the split on a ","
                creators.Split(',')
                |> Array.map (fun author -> author.Trim())
        else
            [| unknownAuthor |]

    let getPages (document:PdfReader) =
        // return the page count
        document.NumberOfPages

    // Map the data from input name/value to output name/value
    let Map (document:PdfReader) =
        let pages = getPages document
        (getAuthors document)
        |> Seq.map (fun author -> (author, pages))

Secondly, one has to call the correct mapper based on the document type, namely the file extension:

let (|WordDocument|PdfDocument|UnsupportedDocument|) extension =
    if String.Equals(extension, ".docx", StringComparison.InvariantCultureIgnoreCase) then
        WordDocument
    else if String.Equals(extension, ".pdf", StringComparison.InvariantCultureIgnoreCase) then
        PdfDocument
    else
        UnsupportedDocument

try
    // Check we do not have a null document
    if (reader.Length > 0L) then
        match Path.GetExtension(filename) with
        | WordDocument ->
            // Get access to the word processing document from the input stream
            use document = WordprocessingDocument.Open(reader, false)
            // Process the word document with the mapper
            OfficeWordPageMapper.Map document
            |> Seq.iter (fun value -> outputCollector value)
            // document is closed when disposed
        | PdfDocument ->
            // Get access to the pdf document from the input stream
            let document = new PdfReader(reader)
            // Process the pdf document with the mapper
            OfficePdfPageMapper.Map document
            |> Seq.iter (fun value -> outputCollector value)
            // close document
            document.Close()
        | UnsupportedDocument ->
            ()
with
| :? System.IO.FileFormatException ->
    // Ignore invalid file formats
    ()

And that is it.


In Microsoft Word, if one needs to process the actual text of a document, this is relatively straightforward:
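As a sketch (not the original post's listing), the body text can be read with the Open XML SDK's InnerText property; the file path here is a hypothetical example:

```fsharp
open DocumentFormat.OpenXml.Packaging

// Open the document read-only and pull the concatenated body text
// (the file path is purely illustrative)
use document = WordprocessingDocument.Open(@"C:\Samples\sample.docx", false)
let text = document.MainDocumentPart.Document.Body.InnerText
printfn "%s" text
```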


Using iText the text extraction code is a little more complex but still relatively easy. An example can be found here:
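As a minimal sketch of such extraction using iTextSharp's PdfTextExtractor (the file path is purely illustrative):

```fsharp
open iTextSharp.text.pdf
open iTextSharp.text.pdf.parser

// Extract the text of each page in turn
// (the file path is a hypothetical example)
let reader = new PdfReader(@"C:\Samples\sample.pdf")
let text =
    [ for page in 1 .. reader.NumberOfPages ->
        PdfTextExtractor.GetTextFromPage(reader, page) ]
    |> String.concat "\n"
reader.Close()
```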