Document Intelligence bank check model

The Document Intelligence bank check model combines powerful Optical Character Recognition (OCR) capabilities with deep learning models to analyze and extract data from US bank checks. The API analyzes printed checks, extracts key information, and returns a structured JSON data representation.

| Feature | Version | Model ID |
|---|---|---|
| Check model | v4.0:2024-07-31 (preview) | prebuilt-check.us |

Check data extraction

A check is a secure way to transfer funds from the payer's account to the payee's account. Businesses use checks to pay their vendors: a signed check instructs the bank to make the payment. See how data, including check details, account details, amount, and memo, is extracted from US bank checks. You need the following resources:

  • An Azure subscription—you can create one for free

  • A Document Intelligence instance in the Azure portal. You can use the free pricing tier (F0) to try the service. After your resource deploys, select Go to resource to get your key and endpoint.

Screenshot of keys and endpoint location in the Azure portal.
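
If you prefer to call the service programmatically rather than through the Studio, the key and endpoint above are all you need. The following is a minimal sketch assuming the azure-ai-documentintelligence Python package; the environment variable names and the document URL are placeholders, and parameter names can differ slightly between SDK versions.

```python
import os

from azure.core.credentials import AzureKeyCredential
from azure.ai.documentintelligence import DocumentIntelligenceClient
from azure.ai.documentintelligence.models import AnalyzeDocumentRequest

# Key and endpoint copied from the resource's Keys and Endpoint page.
endpoint = os.environ["DOCUMENTINTELLIGENCE_ENDPOINT"]
key = os.environ["DOCUMENTINTELLIGENCE_API_KEY"]

client = DocumentIntelligenceClient(endpoint=endpoint, credential=AzureKeyCredential(key))

# Analyze a check with the prebuilt bank check model (model ID from the table above).
poller = client.begin_analyze_document(
    "prebuilt-check.us",
    AnalyzeDocumentRequest(url_source="https://example.com/sample-check.jpg"),  # placeholder URL
)
result = poller.result()

# The service returns a structured JSON representation; the SDK exposes it as an
# AnalyzeResult with one analyzed document per detected check.
for document in result.documents or []:
    print(f"Detected {document.doc_type} with confidence {document.confidence}")
```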

Document Intelligence Studio

Note

Document Intelligence Studio is available with v3.1 and v3.0 APIs.

  1. On the Document Intelligence Studio home page, select the check model.

  2. You can analyze the sample check or upload your own files.

  3. Select the Run analysis button and, if necessary, configure the Analyze options:

    Screenshot of Run analysis and Analyze options buttons in the Document Intelligence Studio.

Input requirements

  • Supported file formats:

    | Model | PDF | Image: JPEG/JPG, PNG, BMP, TIFF, HEIF | Microsoft Office: Word (DOCX), Excel (XLSX), PowerPoint (PPTX), HTML |
    |---|---|---|---|
    | Read | ✔ | ✔ | ✔ |
    | Layout | ✔ | ✔ | ✔ (2024-07-31-preview, 2024-02-29-preview, 2023-10-31-preview) |
    | General Document | ✔ | ✔ | |
    | Prebuilt | ✔ | ✔ | |
    | Custom extraction | ✔ | ✔ | |
    | Custom classification | ✔ | ✔ | ✔ (2024-07-31-preview, 2024-02-29-preview) |
  • For best results, provide one clear photo or high-quality scan per document.

  • For PDF and TIFF, up to 2,000 pages can be processed (with a free tier subscription, only the first two pages are processed).

  • The maximum file size for analyzing documents is 500 MB for the paid (S0) tier and 4 MB for the free (F0) tier (see the validation sketch after this list).

  • Image dimensions must be between 50 pixels x 50 pixels and 10,000 pixels x 10,000 pixels.

  • If your PDFs are password-locked, you must remove the lock before submission.

  • The minimum height of the text to be extracted is 12 pixels for a 1024 x 768 pixel image. This dimension corresponds to about 8 point text at 150 dots per inch (DPI).

  • For custom model training, the maximum number of pages for training data is 500 for the custom template model and 50,000 for the custom neural model.

    • For custom extraction model training, the total size of training data is 50 MB for template model and 1 GB for the neural model.

    • For custom classification model training, the total size of training data is 1 GB with a maximum of 10,000 pages. For 2024-07-31-preview and later, the total size of training data is 2 GB with a maximum of 10,000 pages.
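
Several of these limits (file size and image dimensions) can be checked on the client before a document is submitted. The sketch below is purely illustrative: the helper name is hypothetical, the limits are hard-coded from the list above for the paid (S0) tier, and Pillow is used only to read image dimensions.

```python
from pathlib import Path

from PIL import Image  # pip install pillow

# Limits taken from the list above (paid S0 tier; the free F0 tier allows only 4 MB).
MAX_FILE_SIZE_BYTES = 500 * 1024 * 1024
MIN_DIMENSION_PX, MAX_DIMENSION_PX = 50, 10_000
# HEIF requires the pillow-heif plugin, so it's omitted from this simple check.
IMAGE_SUFFIXES = {".jpeg", ".jpg", ".png", ".bmp", ".tiff"}


def validate_check_input(path: str) -> list[str]:
    """Return a list of problems found before sending the file to the service."""
    problems = []
    file = Path(path)

    if file.stat().st_size > MAX_FILE_SIZE_BYTES:
        problems.append("File exceeds the 500 MB limit for the paid (S0) tier.")

    if file.suffix.lower() in IMAGE_SUFFIXES:
        with Image.open(file) as image:
            width, height = image.size
        if not (MIN_DIMENSION_PX <= width <= MAX_DIMENSION_PX
                and MIN_DIMENSION_PX <= height <= MAX_DIMENSION_PX):
            problems.append(
                f"Image is {width} x {height} px; each dimension must be between "
                f"{MIN_DIMENSION_PX} and {MAX_DIMENSION_PX} px."
            )

    return problems
```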

Supported languages and locales

For a complete list of supported languages, see our prebuilt model language support page.

Field extractions

For supported document extraction fields, refer to the bank check model schema page in our GitHub sample repository.
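
Because the exact field names and types are defined by that schema, the sketch below (continuing the earlier example) simply iterates the returned fields generically; attribute names can vary slightly between SDK versions.

```python
# 'result' is the AnalyzeResult produced by the earlier begin_analyze_document call.
for document in result.documents or []:
    for name, field in (document.fields or {}).items():
        # Each field carries the recognized text plus a confidence score.
        print(f"{name}: {field.content} (confidence: {field.confidence})")
```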

Supported locales

The prebuilt-check.us version 2024-07-31-preview supports the en-us locale.
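
If you want to pass the locale explicitly, the analyze operation accepts a locale hint; this sketch assumes the same client and request objects as above and that your SDK version exposes the REST locale parameter as a keyword argument.

```python
poller = client.begin_analyze_document(
    "prebuilt-check.us",
    AnalyzeDocumentRequest(url_source="https://example.com/sample-check.jpg"),  # placeholder URL
    locale="en-US",  # the only locale listed for the 2024-07-31-preview version
)
result = poller.result()
```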

Next steps