Hi there ALAGARSAMY Vanitha,
Thanks for using the Q&A platform.
You can use the Layout model or the Read API from Azure AI Document Intelligence (formerly Form Recognizer). Both return bounding-box (polygon) coordinates for every line of text, which is what lets you recreate the original layout. The Layout model is ideal for extracting text, tables, and spatial data from documents with complex formatting, while the Read API focuses on OCR, extracting text with positional details only. After extracting the text and coordinates, you can use a visualization library such as Pillow (PIL) or Matplotlib to draw the text onto a blank canvas at the same positions and reconstruct the original format.
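Here is a minimal sketch of that flow using the azure-ai-formrecognizer Python SDK and Pillow. The endpoint, key, file name, and the inch-to-pixel scale factor are placeholders/assumptions you would adapt to your own resource and document:

```python
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential
from PIL import Image, ImageDraw

# Placeholders: use your own Document Intelligence endpoint and key
endpoint = "https://<your-resource>.cognitiveservices.azure.com/"
key = "<your-key>"

client = DocumentAnalysisClient(endpoint=endpoint, credential=AzureKeyCredential(key))

# Analyze the document with the Layout model
with open("sample.pdf", "rb") as f:
    poller = client.begin_analyze_document("prebuilt-layout", document=f)
result = poller.result()

# Redraw each page on a blank canvas at the reported coordinates
scale = 100  # assumption: page.unit is "inch" for PDFs; scale inches to pixels
for page_number, page in enumerate(result.pages, start=1):
    canvas = Image.new("RGB", (int(page.width * scale), int(page.height * scale)), "white")
    draw = ImageDraw.Draw(canvas)
    for line in page.lines:
        # polygon[0] is the top-left corner of the line's bounding polygon
        x, y = line.polygon[0].x * scale, line.polygon[0].y * scale
        draw.text((x, y), line.content, fill="black")
    canvas.save(f"reconstructed_page_{page_number}.png")
```

This sketch uses Pillow's default bitmap font; for a closer visual match you could load a TrueType font and size it from each line's polygon height.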
In short:
For documents with complex layouts (e.g., tables or forms), use the Layout model.
For simple text extraction with positional details, the Read API is sufficient; in code, the only difference is the model ID you pass, as shown in the snippet below.
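Assuming the same client as in the sketch above, choosing between the two comes down to the model ID:

```python
# Layout model: text, tables, selection marks, and their positions
poller = client.begin_analyze_document("prebuilt-layout", document=f)

# Read model: plain OCR, text lines and words with positions
poller = client.begin_analyze_document("prebuilt-read", document=f)
```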
Refer to the Form Recognizer Layout API documentation or the Read API documentation for implementation details.
If this helps, kindly accept the answer. Thank you.