Difference in Confusion Metrics between Vision Portal & using Python SDK
Hi,
I've been using the Custom Vision API for object detection. I'm currently working with an object detection model that uses Products on Shelves as the domain, and I'm doing the entire implementation (from creating a project, to annotation, to uploading the preprocessed images along with their annotations, to training and evaluating the model) via the Python SDK in a Jupyter notebook.
In parallel, I'm keeping an eye on the Custom Vision portal to make sure what I'm implementing is reflected in Vision Studio. After training my model, I check the performance metrics using the following code (along with the values, please find the image below).
I didn't tweak the probability threshold or the overlap threshold at all. When I checked on the portal, these are the values displayed:
Can you please help me understand why there is a difference in the values between the studio and the Python SDK implementation?