I created a Python-based web app (using Flask and Gunicorn) that contains a PyTorch-based library which can detect whether the machine has CUDA GPUs. I added the image to Azure Container Registry and then deployed it with an Azure Container Instance with GPU capabilities. However, when checking the logs, it tells me that it was not able to detect any GPUs. What am I doing wrong here?
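For reference, a minimal sketch of the kind of CUDA check the app performs (the route name and layout are illustrative, not the actual code):

```python
# Minimal Flask route reporting what PyTorch can see; names are illustrative.
import torch
from flask import Flask, jsonify

app = Flask(__name__)

@app.route("/gpu")
def gpu_status():
    available = torch.cuda.is_available()
    return jsonify({
        "cuda_available": available,
        "device_count": torch.cuda.device_count() if available else 0,
        "device_name": torch.cuda.get_device_name(0) if available else None,
    })
```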
The Dockerfile used doesn't specify anything related to GPUs. Is that the main problem?
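For comparison, containerized GPU workloads are commonly built from a CUDA-enabled base image so the CUDA libraries are available alongside the driver the host exposes. A minimal Dockerfile sketch, assuming a CUDA runtime base image and a Gunicorn entry point app:app (the image tag, file names, and port are assumptions, not the actual project files):

```dockerfile
# Illustrative only: tag, layout, and entry point are assumptions, not the real project files.
FROM nvidia/cuda:11.8.0-cudnn8-runtime-ubuntu22.04

RUN apt-get update && apt-get install -y --no-install-recommends \
        python3 python3-pip \
    && rm -rf /var/lib/apt/lists/*

WORKDIR /app
COPY requirements.txt .
# requirements.txt is assumed to include torch, flask, and gunicorn
RUN pip3 install --no-cache-dir -r requirements.txt

COPY . .
EXPOSE 8000
CMD ["gunicorn", "--bind", "0.0.0.0:8000", "app:app"]
```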
I created my application on Windows but have been using WSL 2 with Ubuntu to build the image.
I found a solution to this problem!
Please check this link
To run certain compute-intensive workloads on Azure Container Instances, deploy your container groups with GPU resources. The container instances in the group can access one or more NVIDIA Tesla GPUs while running container workloads such as CUDA and deep learning applications.
More details: https://learn.microsoft.com/en-us/azure/container-instances/container-instances-gpu
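The linked page deploys GPU container groups through a YAML file or a Resource Manager template, with the GPU declared under the container's resource requests. A minimal YAML sketch, assuming a V100 SKU and an image already pushed to Azure Container Registry (the names, region, port, and API version are illustrative placeholders):

```yaml
# Illustrative container group spec with a GPU resource request; values are placeholders.
apiVersion: '2021-09-01'
location: eastus
name: gpu-flask-app
properties:
  containers:
  - name: gpu-flask-app
    properties:
      image: myregistry.azurecr.io/gpu-flask-app:latest  # hypothetical image
      ports:
      - port: 8000
      resources:
        requests:
          cpu: 1.0
          memoryInGB: 4.0
          gpu:
            count: 1
            sku: V100
  osType: Linux
  restartPolicy: Always
  ipAddress:
    type: Public
    ports:
    - protocol: TCP
      port: 8000
type: Microsoft.ContainerInstance/containerGroups
```

A group defined this way can be deployed with az container create --resource-group <your-resource-group> --file <file>.yaml; see the linked page for the regions and SKUs that currently support GPU resources.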
If the answer is helpful, please click Accept Answer and up-vote so that it can help others in the community looking for help on similar topics.