Thanks for reaching out to us. I can answer some of your questions now and will follow up on the others once I get more information.
The General (compact) domain for Object Detection in Azure Custom Vision requires special postprocessing logic. If you need a model that does not require this postprocessing logic, use General (compact) [S1] instead.
There is no guarantee that the exported models give exactly the same results as the prediction API in the cloud. Slight differences in the runtime platform or in the preprocessing implementation can cause larger differences in the model outputs. For details of the preprocessing logic, please see this document.
The models generated by compact domains can be exported to run locally. In the Custom Vision 3.4 public preview API, you can get a list of the exportable platforms for compact domains by calling the GetDomains API.
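For reference, here is a minimal sketch of calling GetDomains on the 3.4 public preview Training API with Python and `requests`. The endpoint and training key are placeholders for your own Custom Vision training resource, and the field names shown are my assumption of the preview response shape:

```python
# Minimal sketch: list Custom Vision domains and the platforms each compact
# domain can be exported to, via the 3.4 public preview Training API.
# ENDPOINT and TRAINING_KEY are placeholders for your own training resource.
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
TRAINING_KEY = "<your-training-key>"

resp = requests.get(
    f"{ENDPOINT}/customvision/v3.4-preview/training/domains",
    headers={"Training-key": TRAINING_KEY},
)
resp.raise_for_status()

for domain in resp.json():
    # Only compact domains are exportable; the preview response also lists
    # the export platforms supported for each domain.
    if domain.get("exportable"):
        print(domain["name"], domain["type"], domain.get("exportablePlatforms"))
```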
All of the following domains support export in ONNX, TensorFlow, TensorFlow Lite, TensorFlow.js, CoreML, and VAIDK formats, with the exception that the Object Detection General (compact) domain does not support VAIDK.
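As an illustration of requesting one of these export formats, the sketch below kicks off an ONNX export of a trained iteration and polls until the download link is ready. The project ID and iteration ID are placeholders, and the REST paths/field names follow my understanding of the 3.4 preview Training API, so please verify them against the API reference:

```python
# Minimal sketch: request an ONNX export of a trained iteration and poll
# until the download link is ready. PROJECT_ID and ITERATION_ID are
# placeholders for your own project and trained iteration.
import time
import requests

ENDPOINT = "https://<your-resource>.cognitiveservices.azure.com"
TRAINING_KEY = "<your-training-key>"
PROJECT_ID = "<your-project-id>"
ITERATION_ID = "<your-iteration-id>"

base = (f"{ENDPOINT}/customvision/v3.4-preview/training/projects/"
        f"{PROJECT_ID}/iterations/{ITERATION_ID}/export")
headers = {"Training-key": TRAINING_KEY}

# Kick off the export for one of the supported platforms (here: ONNX).
requests.post(base, headers=headers, params={"platform": "ONNX"}).raise_for_status()

# Poll the export list until the ONNX export finishes, then print its download URI.
while True:
    exports = requests.get(base, headers=headers).json()
    onnx = next(e for e in exports if e["platform"].lower() == "onnx")
    if onnx["status"] == "Done":
        print("Download:", onnx["downloadUri"])
        break
    if onnx["status"] == "Failed":
        raise RuntimeError("Export failed")
    time.sleep(5)
```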
Model performance varies by the selected domain. The table below reports the model size and inference time on an Intel desktop CPU and an NVIDIA GPU [1]. These numbers do not include preprocessing and postprocessing time.
Please let me know if you have more questions. If you need more suggestions for your scenario, please share the case you are working on so that we can provide more details.
Regards, Yutong
-Please accept the answer and vote 'Yes' if you found it helpful, to support the community. Thanks a lot.