Combining Form Recognizer with OpenAI for Improved Table Extraction - Seeking Examples/Documentation
I'm currently working on a project that involves extracting tables from UK financial statements. I have been using Azure Form Recognizer, but I'm encountering challenges due to the diverse layouts and inconsistent table naming across documents. The supervised extraction isn't as efficient as I hoped it would be.
In an effort to overcome these obstacles, I'm considering combining the Form Recognizer with OpenAI. My idea is to use Form Recognizer for the initial extraction process (unsupervised), and then employ OpenAI to find the specific tables I need.
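To make the question concrete, here is a minimal sketch of the glue step I have in mind: flattening a Form Recognizer layout table into markdown text that could be passed to OpenAI for filtering. The cell structure below just mirrors the `row_index` / `column_index` / `content` fields of Form Recognizer's table cells, and the sample data is made up:

```python
# Sketch: render a sparse set of extracted table cells as a markdown table,
# so an LLM prompt can include it and decide whether it is the target table.
# Cell dicts mimic Form Recognizer's DocumentTable cell fields; in real use
# they would come from the layout ("unsupervised") analysis result.

def table_to_markdown(cells, row_count, column_count):
    """Render sparse (row, col, text) cells as a markdown table."""
    grid = [["" for _ in range(column_count)] for _ in range(row_count)]
    for cell in cells:
        grid[cell["row_index"]][cell["column_index"]] = cell["content"]
    lines = ["| " + " | ".join(row) + " |" for row in grid]
    # Markdown needs a header separator after the first row.
    lines.insert(1, "|" + "---|" * column_count)
    return "\n".join(lines)

# Made-up sample cells standing in for a Form Recognizer result.
cells = [
    {"row_index": 0, "column_index": 0, "content": "Item"},
    {"row_index": 0, "column_index": 1, "content": "2023"},
    {"row_index": 1, "column_index": 0, "content": "Revenue"},
    {"row_index": 1, "column_index": 1, "content": "1,200"},
]
print(table_to_markdown(cells, row_count=2, column_count=2))
```

The idea would be to render each candidate table like this and let the model pick out the one I need, but I don't know if this is the recommended pattern, hence the question.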
Has anyone attempted a similar solution before? I'm seeking documentation or examples of Azure implementations where Form Recognizer and OpenAI have been used in tandem, specifically for table extraction from documents. Any guidance on how to train or configure this kind of system would be highly appreciated.
Hello @Millns, Bailey
Thanks for sharing the sample. So, from your side, you don't want to train a custom model on the Form Recognizer side. We will forward this to the product team to see if there are any possible solutions.
My preference is to use unsupervised extraction to pull all elements from the documents, and then leverage OpenAI to filter and identify the income table.
If you extract all elements from the original documents and use OpenAI to filter them, the final result can hardly be a table again. Or do you want to use OpenAI to reconstruct it as a table?
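One way around that, as a sketch only: instruct the model to reply with pure JSON (an array of rows, each row an array of cell strings), then parse that reply back into table rows. The actual OpenAI call is omitted here; `reply` is a made-up stand-in for a model response, and the prompt wording is just an assumption:

```python
import json

# Hypothetical prompt asking the model to return only the target table
# as JSON, so the filtered result can be turned back into a table.
PROMPT_TEMPLATE = (
    "Below are tables extracted from a financial statement, rendered as "
    "markdown. Return ONLY the income statement table, as a JSON array of "
    "rows, where each row is an array of cell strings. Reply with JSON "
    "only, no extra text.\n\n{tables}"
)

def parse_table_reply(reply):
    """Turn the model's JSON reply back into a list of table rows."""
    rows = json.loads(reply)
    if not all(isinstance(row, list) for row in rows):
        raise ValueError("expected a JSON array of rows")
    return rows

# Stand-in for a model reply; a real one would come from the OpenAI API.
reply = '[["Item", "2023"], ["Revenue", "1,200"]]'
rows = parse_table_reply(reply)
```

Whether the model follows the JSON-only instruction reliably is exactly the kind of thing that would need testing on real documents.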