HoloLens2 - integrate custom vision error
Hello, I am having issues with the "Integrate Azure Cloud Services to your Unity project on HoloLens 2" tutorial. Specifically, I cannot get "Exercise - Integrate Azure Custom Vision" to work after deploying my project to the HoloLens 2 with Visual Studio 2022. The problem is with the step under the section titled "Take and upload images", which states: "Once you have enough images, select the Train button to start the model-training process in the cloud. This will upload all images and then start the training. The process can take up to a minute or more. A message inside the menu indicates the current progress. Once it indicates the process is complete, you can stop the application." Unfortunately, once I take the 6 pictures and press the Train button, I get a message that states "Please wait, uploading images", but nothing happens even after waiting more than 30 minutes, and I am given no indication that the process has completed. The Custom Vision project type I used was object detection. If anyone has had the same issue or knows why this is not working properly, I would greatly appreciate help in fixing it. Thanks!
Azure
HoloLens Development
-
romungi-MSFT 46,751 Reputation points • Microsoft Employee
2023-06-29T08:45:46.6333333+00:00 @Matthew O'Leary I understand the steps in this section use a script to pass the pictures to the Custom Vision API through Unity. I do not have experience with HoloLens or Unity, but as per the steps in the document, after uploading the images you should be able to view them in your Custom Vision project in the portal if the upload is successful. Do you see the images in your Custom Vision portal for the project created in the section "Prepare Azure Custom Vision"? If not, then the upload failed, most probably because the keys used in the script did not work or there is no connectivity to the Custom Vision API to upload and train with the images.
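If it helps to rule out connectivity and key problems first, here is a minimal standalone sketch (not part of the tutorial; it assumes the v3.3 training REST API, and every placeholder value is hypothetical) that simply lists the projects visible to the training resource's endpoint and key:

```csharp
// Minimal sketch: check the *training* resource's endpoint and key outside Unity.
// Assumption: Custom Vision training REST API v3.3; placeholders are hypothetical.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class TrainingKeyCheck
{
    static async Task Main()
    {
        var endpoint = "https://<your-training-resource>.cognitiveservices.azure.com"; // hypothetical placeholder
        var trainingKey = "<your-training-key>";                                       // hypothetical placeholder

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Training-Key", trainingKey);

        // 200 OK with a JSON array that includes your project means the key, endpoint,
        // and connectivity are fine; 401/404 points at a wrong key or endpoint.
        var response = await client.GetAsync($"{endpoint}/customvision/v3.3/training/projects");
        Console.WriteLine($"{(int)response.StatusCode} {response.ReasonPhrase}");
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```

If a call like this succeeds but the app still hangs on "Please wait, uploading images", the problem is most likely in how the script is configured rather than in the resource itself.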
-
Matthew O'Leary 20 Reputation points
2023-07-05T16:13:13.0966667+00:00 @romungi-MSFT Thank you for getting back to me! Sorry it took me a while to respond. I'm pretty sure I would have seen a message in my HoloLens 2 project saying "no connectivity" or something along those lines if that were my issue, so I think it may have to do with the keys entered in the C# script in Unity3D. I'm not sure why, because I followed the tutorial's "Retrieve Azure settings credentials" and "Retrieve project settings credentials" steps properly. I took screenshots showing which credentials I input into the C# script in Unity3D; they are circled in yellow. Please let me know whether I did this properly. You can view my screenshots in this PDF: CustomVisionCredentials.pdf.
-
romungi-MSFT 46,751 Reputation points • Microsoft Employee
2023-07-06T04:40:18.6766667+00:00 @Matthew O'Leary I think the 2nd and 3rd screenshots are referring to different resources. I think you are using the resource in the 2nd screenshot, and the one in the 3rd screenshot is the prediction resource. Are you using the keys and endpoint of a single resource (the one used for training) from the second screenshot? If you have mixed up the endpoint and keys from the 2nd and 3rd screenshots, the upload would fail.
To summarize, please use the credentials of the training resource in this script, and then use the prediction resource credentials for prediction later.
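For context, the upload step itself should hit the training resource. Here is a rough sketch of what a single-image upload looks like over REST (assuming the v3.3 training API and its CreateImagesFromData route; this is not the tutorial's ObjectDetectionManager code, and all placeholder values are hypothetical):

```csharp
// Minimal sketch: upload one image to a Custom Vision project using the TRAINING
// resource's endpoint and key ("Training-Key" header).
// Assumption: training REST API v3.3; placeholders are hypothetical.
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class UploadImageSketch
{
    static async Task Main()
    {
        var endpoint = "https://<your-training-resource>.cognitiveservices.azure.com"; // hypothetical
        var trainingKey = "<your-training-key>";                                       // hypothetical
        var projectId = "<your-project-id>";                                           // hypothetical

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Training-Key", trainingKey);

        var imageBytes = await File.ReadAllBytesAsync("sample.jpg");                   // hypothetical local image
        using var content = new ByteArrayContent(imageBytes);
        content.Headers.ContentType =
            new System.Net.Http.Headers.MediaTypeHeaderValue("application/octet-stream");

        // CreateImagesFromData: on success the image should show up in the portal
        // (under Untagged if no tag is supplied).
        var url = $"{endpoint}/customvision/v3.3/training/projects/{projectId}/images";
        var response = await client.PostAsync(url, content);
        Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
    }
}
```

If that works with the same key and endpoint you put under Azure Settings in the script, the credentials themselves are fine.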
-
Matthew O'Leary 20 Reputation points
2023-07-06T15:25:57.31+00:00 I input both the credentials of the Custom Vision resource and the prediction resource into the Unity3D C# script, so this must be why I am running into an error. So, if I'm reading your comment correctly, I should ONLY input the credentials of the Custom Vision resource first in the Unity3D C# script, then after uploading the images I should remove those credentials and input ONLY the prediction credentials, and then redeploy the project from Visual Studio. Am I correct about this?
-
romungi-MSFT 46,751 Reputation points • Microsoft Employee
2023-07-06T18:20:27.0366667+00:00 @Matthew O'Leary I think when you run ObjectDetectionManager it creates another resource with the -Prediction suffix, but I am not really sure since I do not have access to this script. Essentially, what the tutorial is trying to do is use your prediction keys at the final step of the "Detect objects" section. An easy way to understand which keys to use in that section is to open your project in Custom Vision, go to the Prediction tab in the top menu, select the latest iteration, and press the View Endpoint button to load the prediction endpoint and keys. Whichever keys are displayed there, update them in ObjectDetectionManager and follow step 3 of "Detect objects".
Example: [screenshot of the Prediction tab's View Endpoint dialog, not reproduced here]
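To make the split concrete, the detection call at the end of the tutorial goes to the prediction resource, using the Prediction-Key header and the published iteration name shown in the View Endpoint dialog. A minimal sketch (assuming the v3.0 prediction REST API; every placeholder value is hypothetical):

```csharp
// Minimal sketch: call the published object-detection model with the PREDICTION
// resource's endpoint and key ("Prediction-Key" header).
// Assumption: prediction REST API v3.0; placeholders are hypothetical.
using System;
using System.IO;
using System.Net.Http;
using System.Threading.Tasks;

class DetectSketch
{
    static async Task Main()
    {
        // Copy the real values from the View Endpoint dialog in the Custom Vision portal.
        var predictionEndpoint = "https://<your-prediction-resource>.cognitiveservices.azure.com"; // hypothetical
        var predictionKey = "<your-prediction-key>";                                               // hypothetical
        var projectId = "<your-project-id>";                                                       // hypothetical
        var publishedName = "<your-published-iteration-name>";                                     // hypothetical

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Prediction-Key", predictionKey);

        var imageBytes = await File.ReadAllBytesAsync("sample.jpg");                               // hypothetical local image
        using var content = new ByteArrayContent(imageBytes);
        content.Headers.ContentType =
            new System.Net.Http.Headers.MediaTypeHeaderValue("application/octet-stream");

        // DetectImage: returns tags and bounding boxes for an object-detection project.
        var url = $"{predictionEndpoint}/customvision/v3.0/prediction/{projectId}/detect/iterations/{publishedName}/image";
        var response = await client.PostAsync(url, content);
        Console.WriteLine(await response.Content.ReadAsStringAsync());
    }
}
```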
-
Matthew O'Leary 20 Reputation points
2023-07-07T15:15:11.57+00:00 @romungi-MSFT OK, that makes a bit more sense now that you've provided the screenshot. But before the initial deployment from Visual Studio 2022, what do I input into the ObjectDetectionManager script? Before making a Custom Vision project, a Custom Vision resource and a Custom Vision prediction resource are created. Would I input the Custom Vision project's "API Key" and "Project ID" under the "Project Settings" section, and input the credentials of the Custom Vision resource that does NOT have -Prediction at the end of it in the "Azure Settings" section? And then for the "Detect objects" section of the tutorial, would I only change the "API Key" under "Project Settings", or would I also change the Custom Vision resource's credentials to the Custom Vision prediction resource's credentials? Sorry if I am making this more confusing than it should be. Here is another PDF to help convey what I'm asking about: CustomVisionCredentials2.pdf
-
romungi-MSFT 46,751 Reputation points • Microsoft Employee
2023-07-11T06:25:45.7233333+00:00 @Matthew O'Leary You have to use your first Custom Vision resource endpoint and keys, i.e. "Custom Vision resource1" on page 2 of your PDF. These should match the keys on the project settings page in the Custom Vision portal. Hope this helps!!
-
Matthew O'Leary 20 Reputation points
2023-07-11T14:54:11.14+00:00 I tested this exercise again yesterday using the credentials from the first Custom Vision resource and the project ID from the Custom Vision project (exactly as you described in your comment), and I still get the "Please wait, uploading images" message; after waiting about 20 minutes the images never upload to my Custom Vision project from my HoloLens 2. Do you believe this tutorial could be outdated? A few other Microsoft tutorials I had issues with include "Use Azure Spatial Anchors to anchor objects in the real world" and "Add Azure Cognitive Services to your mixed reality project". The Azure Cognitive Services tutorial mentions using a resource called LUIS, which can no longer be used, and integrating it with the Unity3D project for use on HoloLens 2. Those instructions are clearly outdated, so do you think the same could be true for this "Integrate Custom Vision" tutorial as well? I can try again today and let you know my results here. Thank you for trying to help me with this!
-
dluong 10 Reputation points
2023-08-12T08:31:59.5+00:00 I have the same issue as Matthew. I think this tutorial is not presented clearly. Is it possible to provide us with detailed steps for filling in the Azure resource and project information to detect the object?
-
dluong 10 Reputation points
2023-08-12T15:57:33.0566667+00:00 To detail our problem, assume I create a Custom Vision resource named "ExampleCustomVision" with both the training and prediction options, so Azure automatically generates two resources: "ExampleCustomVision" for training and "ExampleCustomVision-Prediction" for prediction. Then I create a classification project named "ExampleProject" assigned to the ExampleCustomVision resource.
Following the tutorial, here are the credentials set in ObjectDetectionManager to take and upload images:
Azure settings:
- Azure Resource Subscription ID: My-Azure-account-ID
- Azure Resource Group Name: My-Group-Name
- Cognitive Service Resource Name: ExampleCustomVision
- Resource Base End Point: End-Point-of-ExampleCustomVision
Project settings:
- API Key: Key-of-ExampleCustomVision
- Project ID: ExampleProject-ID
And here are the ones set to predict the object:
Azure settings:
- Azure Resource Subscription ID: My-Azure-account-ID
- Azure Resource Group Name: My-Group-Name
- Cognitive Service Resource Name: ExampleCustomVision-Prediction
- Resource Base End Point: End-Point-of-ExampleCustomVision-Prediction
Project settings:
- API Key: Key-of-ExampleCustomVision-Prediction
- Project ID: ExampleProject-ID
Could you let me know if the above settings are correct?
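One way I could sanity-check the training-side half of that mapping is to fetch the project directly with the training credentials. A minimal sketch (assuming the v3.3 training REST API and reusing the placeholder names from my post above):

```csharp
// Minimal sketch: confirm that the training key, endpoint, and Project ID belong together.
// Assumption: training REST API v3.3; values below are the placeholders from this thread.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class ProjectCheckSketch
{
    static async Task Main()
    {
        var endpoint = "End-Point-of-ExampleCustomVision";   // e.g. https://<resource>.cognitiveservices.azure.com
        var trainingKey = "Key-of-ExampleCustomVision";
        var projectId = "ExampleProject-ID";

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Training-Key", trainingKey);

        // GetProject: 200 with the ExampleProject JSON confirms the pairing;
        // 401 or 404 means one of the three values comes from the wrong resource.
        var response = await client.GetAsync($"{endpoint}/customvision/v3.3/training/projects/{projectId}");
        Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
    }
}
```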
-
Matthew O'Leary 20 Reputation points
2023-09-06T23:43:56.9366667+00:00 @dluong That looks correct to me and is exactly what I tried in my project's implementation in Unity3D, in that ObjectDetectionManager script.
-
Anna Zielinska 0 Reputation points
2024-02-12T16:30:38.9+00:00 Hi. I have an issue with the same tutorial (https://learn.microsoft.com/en-us/training/modules/azure-cloud-services-tutorials/7-exercise-integrate-azure-custom-vision); however, my problem is that I get the "All images have been uploaded!" message, but the training does not start. I changed the wait time from 3 to 6 minutes (I will try a longer wait time, but I don't think it will change anything). I have tried both classification and object detection projects on Custom Vision; the photos are imported into both projects. But still, I cannot figure out how the training can be started straight from the HoloLens. My API keys, endpoints, etc. look correct. If you have any suggestions, please let me know. Thank you.
Here is the piece of ComputerVisionController where it should change the state after the images have been uploaded:
```csharp
messageLabel.text = "All images have been uploaded!";
var objectTrainingResult = await sceneController.ObjectDetectionManager.TrainProject();
messageLabel.text = "Started training process, please wait for completion.";
sceneController.CurrentProject.CustomVisionIterationId = objectTrainingResult.Id;
await sceneController.DataManager.UpdateProject(sceneController.CurrentProject);

var tries = 0;
while (tries < 360)
{
    await Task.Delay(1000);
    var status = await sceneController.ObjectDetectionManager.GetTrainingStatus(objectTrainingResult.Id);
    if (status.IsCompleted())
    {
        var publishResult = await sceneController.ObjectDetectionManager.PublishTrainingIteration(
            objectTrainingResult.Id, trainingModelPublishingName);
        if (!publishResult)
        {
            messageLabel.text = "Failed to publish, please check the custom vision portal of your project.";
        }
        else
        {
            trackedObject.HasBeenTrained = true;
            await sceneController.DataManager.UploadOrUpdate(trackedObject);
            sceneController.CurrentProject.CustomVisionPublishedModelName = trainingModelPublishingName;
            await sceneController.DataManager.UpdateProject(sceneController.CurrentProject);
            messageLabel.text = "Model training is done and ready for detection.";
            await Task.Delay(1000);
            previousMenu.SetActive(true);
            gameObject.SetActive(false);
        }
        break;
    }
    tries++;
}
SetButtonsInteractiveState(true);
}
```
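One thing I may also try is triggering training outside the app with the same credentials, since the in-app message hides the actual API response. A minimal sketch (assuming the v3.3 training REST API; all placeholder values are hypothetical); a 4xx response body should state why training cannot start, for example if there are not yet enough tagged images for the project type:

```csharp
// Minimal sketch: start training over REST with the same training credentials the
// Unity script uses, to see the raw response instead of the in-app status label.
// Assumption: training REST API v3.3; placeholders are hypothetical.
using System;
using System.Net.Http;
using System.Threading.Tasks;

class TrainProjectSketch
{
    static async Task Main()
    {
        var endpoint = "https://<your-training-resource>.cognitiveservices.azure.com"; // hypothetical
        var trainingKey = "<your-training-key>";                                       // hypothetical
        var projectId = "<your-project-id>";                                           // hypothetical

        using var client = new HttpClient();
        client.DefaultRequestHeaders.Add("Training-Key", trainingKey);

        // TrainProject: a 200 response returns the new iteration; an error response
        // explains why training was refused.
        var response = await client.PostAsync(
            $"{endpoint}/customvision/v3.3/training/projects/{projectId}/train", null);
        Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
    }
}
```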