Combining Form Recognizer with OpenAI for Improved Table Extraction - Seeking Examples/Documentation
Hello community,
I'm currently working on a project that involves extracting tables from UK financial statements. I've been using Azure Form Recognizer, but I'm running into challenges because of the diverse layouts and inconsistent table naming across documents, and the supervised (custom model) extraction hasn't been as effective as I'd hoped.
To overcome these obstacles, I'm considering combining Form Recognizer with OpenAI. My idea is to use Form Recognizer for the initial extraction (unsupervised, e.g. the prebuilt layout model), and then employ OpenAI to identify the specific tables I need, roughly along the lines of the sketch below.
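To illustrate what I mean, here is a minimal sketch of the pipeline I have in mind. It assumes the azure-ai-formrecognizer SDK with the prebuilt-layout model and the standard openai Python client; the endpoint, key, file name, model name, and the "profit and loss account" prompt are all placeholders, not a working configuration.

```python
# Hypothetical sketch: extract all tables with Form Recognizer's prebuilt layout
# model, then ask an OpenAI chat model which table matches a description.
from azure.ai.formrecognizer import DocumentAnalysisClient
from azure.core.credentials import AzureKeyCredential
from openai import OpenAI

fr_client = DocumentAnalysisClient(
    endpoint="https://<your-resource>.cognitiveservices.azure.com/",   # placeholder
    credential=AzureKeyCredential("<form-recognizer-key>"),            # placeholder
)

with open("financial_statement.pdf", "rb") as f:
    poller = fr_client.begin_analyze_document("prebuilt-layout", document=f)
result = poller.result()

def table_to_text(table):
    # Flatten a detected table into a simple text grid so it fits in a prompt.
    rows = [["" for _ in range(table.column_count)] for _ in range(table.row_count)]
    for cell in table.cells:
        rows[cell.row_index][cell.column_index] = cell.content
    return "\n".join(" | ".join(row) for row in rows)

tables_text = "\n\n".join(
    f"Table {i}:\n{table_to_text(t)}" for i, t in enumerate(result.tables)
)

# Ask the model to pick out the table of interest (placeholder example).
oai = OpenAI()  # reads OPENAI_API_KEY from the environment
response = oai.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[
        {"role": "system", "content": "You identify tables in UK financial statements."},
        {"role": "user", "content": (
            "Which of the following tables is the profit and loss account? "
            "Reply with the table number only.\n\n" + tables_text
        )},
    ],
)
print(response.choices[0].message.content)
```

(With Azure OpenAI instead of the public API, the client setup and deployment name would differ slightly, but the overall flow would be the same.)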
Has anyone attempted a similar solution before? I'm seeking documentation or examples of Azure implementations where Form Recognizer and OpenAI have been used in tandem, specifically for table extraction from documents. Any guidance on how to train or configure this kind of system would be highly appreciated.
Thank you