Document Intelligence bank statement model

The Document Intelligence bank statement model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract data from US bank statements. The API analyzes printed bank statements; extracts key information such as account number, bank details, statement details, transaction details, and fees; and returns a structured JSON data representation.

Feature                Version                      Model ID
Bank statement model   v4.0:2024-07-31 (preview)    prebuilt-bankStatement.us

Bank statement data extraction

A bank statement summarizes an account's activity during a specified period. It's an official record that helps detect fraud, track expenses, catch accounting errors, and document the period's transactions. See how data is extracted using the prebuilt-bankStatement.us model (a code sketch follows the prerequisites). You need the following resources:

  • An Azure subscription—you can create one for free

  • A Document Intelligence instance in the Azure portal. You can use the free pricing tier (F0) to try the service. After your resource deploys, select Go to resource to get your key and endpoint.

    Screenshot of keys and endpoint location in the Azure portal.
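With the key and endpoint in hand, you can also call the model from code rather than through the Studio. The following is a minimal sketch, assuming the azure-ai-documentintelligence Python package (preview), hypothetical environment variable names, and an illustrative document URL; the exact method signature can vary by package version, so check the SDK reference for the version you install.

```python
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.ai.documentintelligence.models import AnalyzeDocumentRequest

# Hypothetical environment variable names; use your own resource's key and endpoint.
endpoint = os.environ["DOCUMENTINTELLIGENCE_ENDPOINT"]
key = os.environ["DOCUMENTINTELLIGENCE_API_KEY"]

client = DocumentIntelligenceClient(endpoint=endpoint, credential=AzureKeyCredential(key))

# Analyze a bank statement with the prebuilt model; the URL is illustrative.
statement_url = "https://example.com/sample-bank-statement.pdf"
poller = client.begin_analyze_document(
    "prebuilt-bankStatement.us",
    AnalyzeDocumentRequest(url_source=statement_url),
)
result = poller.result()
print(result.content)  # recognized text; structured fields are available on result.documents
```

The returned result carries the structured representation described above; the Field extractions section later in this article shows how to read individual fields from it.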

Document Intelligence Studio

  1. On the Document Intelligence Studio home page, select bank statements.

  2. You can analyze the sample bank statement or upload your own files.

  3. Select the Run analysis button and, if necessary, configure the Analyze options (a sketch of the programmatic equivalent follows these steps):

    Screenshot of Run analysis and Analyze options buttons in the Document Intelligence Studio.
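If you script the same analysis instead of using the Studio, the Analyze options map to optional parameters on the analyze call. This is a sketch under the same assumptions as the earlier example (the azure-ai-documentintelligence Python package and the illustrative statement_url); the set of available options depends on the API version you target.

```python
# Rough programmatic counterpart of "Run analysis" with Analyze options.
# `pages` limits the analysis to a page range, similar to the Studio's page selection.
poller = client.begin_analyze_document(
    "prebuilt-bankStatement.us",
    AnalyzeDocumentRequest(url_source=statement_url),
    pages="1-2",
)
result = poller.result()
```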

Input requirements

  • Supported file formats:

    Model                   PDF   Image (JPEG/JPG, PNG, BMP, TIFF, HEIF)   Microsoft Office (Word DOCX, Excel XLSX, PowerPoint PPTX, HTML)
    Read                    ✔     ✔                                         ✔
    Layout                  ✔     ✔                                         ✔ (2024-07-31-preview, 2024-02-29-preview, 2023-10-31-preview)
    General Document        ✔     ✔
    Prebuilt                ✔     ✔
    Custom extraction       ✔     ✔
    Custom classification   ✔     ✔                                         ✔ (2024-07-31-preview, 2024-02-29-preview)
  • For best results, provide one clear photo or high-quality scan per document.

  • For PDF and TIFF, up to 2,000 pages can be processed (with a free tier subscription, only the first two pages are processed).

  • The file size limit for analyzing documents is 500 MB for the paid (S0) tier and 4 MB for the free (F0) tier (see the validation sketch after this list).

  • Image dimensions must be between 50 pixels x 50 pixels and 10,000 pixels x 10,000 pixels.

  • If your PDFs are password-locked, you must remove the lock before submission.

  • The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 pixel image. This dimension corresponds to about 8 point text at 150 dots per inch (DPI).

  • For custom model training, the maximum number of pages for training data is 500 for the custom template model and 50,000 for the custom neural model.

    • For custom extraction model training, the total size of training data is 50 MB for the template model and 1 GB for the neural model.

    • For custom classification model training, the total size of training data is 1 GB with a maximum of 10,000 pages. For 2024-07-31-preview and later, the total size of training data is 2 GB with a maximum of 10,000 pages.
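Some of the limits above can be checked locally before a document is submitted. The sketch below is illustrative only: it assumes the paid (S0) tier size limit and checks only the file extension and size, not every constraint in the list.

```python
from pathlib import Path

# Extensions and size limit taken from the requirements listed above.
# Prebuilt models accept PDF and image formats (per the supported file formats table).
SUPPORTED_EXTENSIONS = {".pdf", ".jpeg", ".jpg", ".png", ".bmp", ".tiff", ".heif"}
MAX_FILE_SIZE_BYTES = 500 * 1024 * 1024  # 500 MB on S0; the free F0 tier allows 4 MB

def precheck(path: str) -> list[str]:
    """Return a list of problems found before sending the file for analysis."""
    problems = []
    file = Path(path)
    if not file.exists():
        return [f"File not found: {path}"]
    if file.suffix.lower() not in SUPPORTED_EXTENSIONS:
        problems.append(f"Unsupported file type: {file.suffix}")
    if file.stat().st_size > MAX_FILE_SIZE_BYTES:
        problems.append("File exceeds the 500 MB limit for the S0 tier")
    return problems

print(precheck("statement.pdf"))  # hypothetical file name
```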

Supported languages and locales

For a complete list of supported languages, see our prebuilt model language support page.

Field extractions

For supported document extraction fields, refer to the bank statement model schema page in our GitHub sample repository.
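To inspect what the model extracted, you can walk the documents and fields on the analysis result. The field names come from the schema linked above, so the loop below simply prints whatever the service returns rather than assuming specific names; `result` is the object returned by `poller.result()` in the earlier sketch.

```python
# Print every extracted field with its content and confidence score.
for document in result.documents or []:
    print("Document type:", document.doc_type)
    for name, field in (document.fields or {}).items():
        print(f"  {name}: {field.content!r} (confidence {field.confidence})")
```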

Supported locales

The prebuilt-bankStatement.us version 2024-07-31 (preview) supports the en-US locale.
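If you pass a locale hint when calling the model from code, en-US is the value to use. A sketch, again assuming the Python SDK and the illustrative statement_url from earlier; verify the keyword name against the SDK reference for your package version.

```python
# Locale hint for the US bank statement model (en-US is the supported locale).
poller = client.begin_analyze_document(
    "prebuilt-bankStatement.us",
    AnalyzeDocumentRequest(url_source=statement_url),
    locale="en-US",
)
result = poller.result()
```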

Next steps