An Azure artificial intelligence service and end-to-end platform for applying computer vision to specific domains.
Hello Sachin Soni,
Welcome to Microsoft Q&A. Thank you for reaching out with a detailed case description.
From the query, it is understood that you are looking at Azure AI and Custom Vision services for the following use cases:
1. 25-Digit Barcode Detection
2. Image Quality Enhancement
3. Image Quality Check / Validation
1. Using Azure AI Document Intelligence for 25-Digit Barcode Detection
· Azure AI Document Intelligence has a "prebuilt-barcode" model for barcode-related use cases.
· Code 128, Code 39, EAN/UPC, PDF417, and additionally QR codes are supported by this model. As 25-digit barcodes fall under the Code 128 category, AI Document Intelligence can be used.
· The supported formats are:
For documents - PDF (.pdf); single-page and multi-page PDFs are supported
For images - JPEG/JPG (.jpg, .jpeg), PNG (.png), BMP (.bmp), TIFF/TIF (.tiff, .tif), HEIF (.heif)
Please refer to the documentation below to get started with Azure AI Document Intelligence:
Add-on capabilities - Document Intelligence - Foundry Tools | Microsoft Learn
Quickstart: Document Intelligence client libraries - Foundry Tools | Microsoft Learn
The suggestion above is an Azure-native way to extract barcodes and text from documents in one pass. This is how it works:
1. Create an Azure AI Document Intelligence resource.
2. Use the prebuilt‑barcode model (or layout/other models with barcode extraction enabled) in your API request.
3. Works via REST API or SDKs (Python, C#, JavaScript, Java).
4. Upload images or scanned PDFs of your packing slips.
5. The API detects and decodes barcodes whether they’re Code 128, Code 39, PDF417, etc., and includes their values in the response.
6. You’ll get a JSON output where the barcodes are listed with their decoded values.
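Once the analyze operation completes, the decoded values can be pulled out of that JSON result. Below is a minimal sketch, assuming the documented response shape (analyzeResult.pages[*].barcodes[*] with "kind", "value", and "confidence" fields; verify against your API version). The 25-digit length check and the sample payload values are illustrative only:

```python
def extract_barcodes(analyze_result: dict, digits: int = 25) -> list[dict]:
    """Collect decoded barcodes from a Document Intelligence analyze result.

    Assumes barcodes are reported per page under "barcodes"; adjust the
    field names if your API version differs.
    """
    found = []
    for page in analyze_result.get("pages", []):
        for barcode in page.get("barcodes", []):
            value = barcode.get("value", "")
            found.append({
                "kind": barcode.get("kind"),
                "value": value,
                "confidence": barcode.get("confidence"),
                # Flag values that match the expected 25-digit length
                "matches_length": len(value) == digits,
            })
    return found

# Example response fragment (illustrative values only)
sample = {
    "pages": [
        {"barcodes": [
            {"kind": "Code128", "value": "1234567890123456789012345", "confidence": 0.98},
            {"kind": "QRCode", "value": "https://example.com", "confidence": 0.99},
        ]}
    ]
}

results = extract_barcodes(sample)
```

Filtering on `matches_length` lets you separate the 25-digit packing-slip codes from any other barcodes that happen to appear on the page.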
2. Image Quality Enhancement
· While there is no single-shot, ready-to-use, direct approach to enhance image quality, Azure AI Vision can be used to handle this use case.
The links below can be used for detailed reference.
Use AI Enrichment With Image and Text Processing - Azure Architecture Center | Microsoft Learn
Suggested approaches would be:
a) Preprocessing the images in Azure (before they reach their final Blob container) by using
Azure Blob Storage trigger → Azure Function
Use a Function that triggers on image upload, enhances the image, then stores the processed version in another container.
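The enhancement step inside such a Function could be as simple as an unsharp mask. Below is a pure-Python sketch on a grayscale pixel grid; in a real Function you would decode the blob's bytes with a library such as Pillow or OpenCV, and the `radius` and `amount` values are assumptions to tune:

```python
def box_blur(img: list[list[float]], radius: int = 1) -> list[list[float]]:
    """Simple box blur over a 2D grayscale grid (0-255 values)."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            total, count = 0.0, 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        total += img[ny][nx]
                        count += 1
            out[y][x] = total / count
    return out

def unsharp_mask(img: list[list[float]], amount: float = 1.0) -> list[list[float]]:
    """Sharpen by adding back the difference between the image and its blur."""
    blurred = box_blur(img)
    return [[min(255.0, max(0.0, p + amount * (p - b)))
             for p, b in zip(row, brow)]
            for row, brow in zip(img, blurred)]
```

Sharpening lowers the dark side of an edge and raises the bright side, which can help downstream barcode decoding and OCR on slightly soft scans.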
b) Building custom ML pipelines with Azure Machine Learning for AI-based enhancement (super-resolution, denoising, restoration). This can be done by training or using a pre-trained model (e.g., ESRGAN, SwinIR, Restormer) and then deploying it as a real-time inference endpoint using one of the below:
· Azure Machine Learning Endpoints
· Azure Kubernetes Service
· Azure Container Apps
Then call that endpoint before uploading the enhanced image to Blob Storage.
The following references might be helpful for enhancing image quality:
Smart-cropped thumbnails - Azure Vision in Foundry Tools - Foundry Tools | Microsoft Learn
Analyze an image - Training | Microsoft Learn
https://learn.microsoft.com/en-us/training/modules/analyze-images/3-analyze-image?pivots=csharp
Image captions - Image Analysis 4.0 - Foundry Tools | Microsoft Learn
3. Image Quality Check / Validation
For blurry images:
Azure AI Vision does not have a dedicated “blur score” API, but low confidence + poor detections + OCR failure are expected outcomes for blurry images.
Blurry images can be indirectly detected using:
· Image Analysis confidence scores
· Object detection confidence
· OCR (Read) confidence / missing text
· Image captions mentioning blur (for example: “a blurry image of…”)
Image quality (blur, lighting, resolution) directly impacts accuracy and confidence.
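Because the service has no dedicated blur score, a cheap local pre-check can reject obviously blurred uploads before any Azure call. A common heuristic is the variance of the Laplacian; in production you would typically compute this with OpenCV (`cv2.Laplacian(img, cv2.CV_64F).var()`), but the pure-Python sketch below shows the idea. The threshold is an assumption to tune on your own images:

```python
def laplacian_variance(img: list[list[float]]) -> float:
    """Variance of the Laplacian over interior pixels; low values suggest blur."""
    h, w = len(img), len(img[0])
    vals = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour Laplacian: strong response at edges, ~0 in flat regions
            lap = (4 * img[y][x] - img[y - 1][x] - img[y + 1][x]
                   - img[y][x - 1] - img[y][x + 1])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def is_blurry(img: list[list[float]], threshold: float = 100.0) -> bool:
    """Threshold is illustrative; calibrate against known-sharp samples."""
    return laplacian_variance(img) < threshold
```

Sharp images are edge-rich and score high; blurred or flat images score near zero, so they can be rejected (or routed to the enhancement step) before being sent to the Vision APIs.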
Please check this reference guide to follow through
· For wrong angle, incomplete object, or poor lighting, metrics such as confidence loss (tags, captions, objects) and detection failure (missing or partial objects) can be used as validations and image quality checks.
A dedicated ML model can be built on top of Azure AI Vision for the above purpose. It would accept or reject incomplete images based on lighting and blur.
The model outline would be as below:
· Use Azure AI Vision – Image Analysis to extract tags, captions, objects, and confidence scores from every uploaded image.
· Treat low average tag/caption confidence as a strong signal for blurry or poorly lit images.
· Use Object Detection to verify expected objects are detected (e.g., ID card, product, face).
· Flag incomplete objects when bounding boxes are too small or cropped relative to the image size.
· Detect wrong angle when expected objects are detected with very low confidence or partial boxes.
· Run Read OCR and flag images when text is missing or word confidence is low (common in blur/low light).
· Combine Vision outputs into features (confidence, box size %, OCR success, object count).
· Train an Azure ML binary classification model (Accept vs Reject) using these features.
· Apply rule-based prechecks (hard failures) before ML scoring for faster rejection.
· Return a clear rejection reason (Blurry / Poor lighting / Wrong angle / Incomplete object) for feedback.
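The rule-based prechecks from the outline above can be sketched as a small gating function. The feature names and thresholds below are illustrative assumptions, not an Azure API shape; images that pass these hard-failure checks would then go on to the trained Azure ML classifier:

```python
def validate_image(features: dict) -> tuple[str, str]:
    """Rule-based prechecks over Vision-derived features.

    Expected keys (illustrative names, to be filled from Image Analysis,
    Object Detection and Read OCR results):
      tag_confidence      - average tag/caption confidence (0-1)
      ocr_word_confidence - average OCR word confidence (0-1)
      object_count        - number of expected objects detected
      largest_box_pct     - largest bounding box as % of image area
      box_touches_edge    - True if the object is cropped at the frame
    Returns (decision, reason).
    """
    if features["tag_confidence"] < 0.4 and features["ocr_word_confidence"] < 0.5:
        return ("Reject", "Blurry / Poor lighting")
    if features["object_count"] == 0:
        return ("Reject", "Expected object not detected")
    if features["box_touches_edge"] or features["largest_box_pct"] < 5.0:
        return ("Reject", "Wrong angle / Incomplete object")
    return ("Accept", "")
```

Running these rules first keeps obvious failures cheap and fast, and the returned reason string maps directly onto the user-facing feedback categories listed above.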
Please refer to the references below for detailed extended reading.
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/overview
https://learn.microsoft.com/en-us/azure/ai-services/computer-vision/
https://github.com/microsoft/computervision-recipes
Please let me know if you have any questions.
Thank you!