Update: It turned out that the reason for the difference was that two different models were being compared. Sorry about that. In any case, is the result shown in Form Recognizer Studio the same as the result returned by the AnalyzeDocumentFromUriAsync method of the DocumentAnalysisClient?
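In case anyone else runs into a similar mismatch: the model that actually produced a result can be checked on the returned AnalyzeResult. A minimal sketch, assuming `operation` is the AnalyzeDocumentOperation from the call shown further below:

```csharp
// AnalyzeResult.ModelId reports the model that produced this result,
// which makes it easy to confirm Studio and the SDK hit the same model.
AnalyzeResult result = operation.Value;
Console.WriteLine($"Model used: {result.ModelId}");
```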
Inconsistent analyze results between Form Recognizer Studio and the Form Recognizer service invoked via the SDK (2023-02-28-preview API version)?
I've trained a custom neural model for some customer-specific invoices using the 2023-02-28-preview API version. The invoices comprise a number of typical line items with price etc. Specific to these invoices, however, is that each group of line items is preceded by a header row where some group information is displayed. I have tackled this by mapping the relevant group information to a specific column.

After training via Form Recognizer Studio and going into the test area, I tested the model with a new invoice and found that the analyze result looks fine in terms of what I would expect: for the test invoice, I expected six rows in total, one header row, two normal line item rows, and then again one header row and two line item rows.

However, when I invoked the Form Recognizer service via the .NET SDK for the same invoice (AnalyzeDocumentFromUriAsync in the DocumentAnalysisClient, see the sketch below), the analyze result comprised only four rows (the four normal line item rows were included, but not the two header rows). In addition, the columns were not correctly identified. This holds true for both SDK versions 4.0.0 and 4.1.0-beta.1. I have also verified that in both cases the 2023-02-28-preview API version was actually called.
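For reference, a minimal sketch of how I call the service and read the table field; the endpoint, key, model ID, document URL, and the "Items" field name are placeholders for my actual setup:

```csharp
using Azure;
using Azure.AI.FormRecognizer.DocumentAnalysis;

// Placeholders - replace with your own resource values.
var endpoint = new Uri("https://<resource-name>.cognitiveservices.azure.com/");
var credential = new AzureKeyCredential("<api-key>");
var client = new DocumentAnalysisClient(endpoint, credential);

// Analyze the invoice with the custom neural model.
AnalyzeDocumentOperation operation = await client.AnalyzeDocumentFromUriAsync(
    WaitUntil.Completed,
    "<custom-model-id>",
    new Uri("https://<storage-account>/invoice.pdf"));

AnalyzeResult result = operation.Value;
AnalyzedDocument document = result.Documents[0];

// "Items" stands in for the table field name defined during labeling.
if (document.Fields.TryGetValue("Items", out DocumentField itemsField)
    && itemsField.FieldType == DocumentFieldType.List)
{
    foreach (DocumentField rowField in itemsField.Value.AsList())
    {
        // Each row is a dictionary mapping column name to cell field.
        foreach (var cell in rowField.Value.AsDictionary())
        {
            Console.WriteLine($"{cell.Key}: {cell.Value.Content}");
        }
    }
}
```

The row count mentioned above is simply the number of entries returned by `itemsField.Value.AsList()`.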
-> What could be the reason for the differences? What does the 'Run analysis' action in Form Recognizer Studio do, and is this different from what the AnalyzeDocumentFromUriAsync method in the SDK does?
-> What can I do to mitigate this and end up with the same positive results when using the SDK?
Thanks in advance for your help!