Torch model + inference script in a container on Azure Kubernetes

Pedrojfb 41 Reputation points

Hello everyone,

I am somewhat inexperienced with cloud services and am currently trying to find the best way to deploy to Azure an ML model and inference script that I have developed locally.

I have everything running on a container which has the following process:

The model fetches data from the DB server -> if there is a new source of data -> it creates a new
thread that continuously performs inference on that particular source of data and continuously
pushes the results to another server
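To make the scenario above concrete, here is a minimal sketch of that worker-per-source pattern in Python. All names (`fetch_new_sources`, `run_inference`, `publish_result`) are hypothetical placeholders, not your actual code: the DB poll returns a static list, and the Torch forward pass and the result upload are stubbed out locally.

```python
import threading
import time
import queue

def fetch_new_sources():
    # Hypothetical: poll the DB server for data sources.
    # A static list stands in for the real query here.
    return ["source-a", "source-b"]

def run_inference(item):
    # Hypothetical stand-in for the Torch model's forward pass.
    return {"input": item, "score": 0.5}

def publish_result(result, results_sink):
    # Hypothetical: push results to the other server; collected locally here.
    results_sink.put(result)

def inference_worker(source, results_sink, stop_event, poll_interval=0.01):
    """Continuously run inference on one data source until asked to stop."""
    while not stop_event.is_set():
        item = f"{source}-item"  # placeholder for data fetched from this source
        publish_result(run_inference(item), results_sink)
        time.sleep(poll_interval)

def main_loop(cycles=3):
    results = queue.Queue()
    stop = threading.Event()
    workers = {}
    for _ in range(cycles):
        # New source of data -> dedicated worker thread, exactly once per source.
        for source in fetch_new_sources():
            if source not in workers:
                t = threading.Thread(target=inference_worker,
                                     args=(source, results, stop), daemon=True)
                t.start()
                workers[source] = t
        time.sleep(0.05)
    stop.set()
    for t in workers.values():
        t.join()
    return sorted(workers), results.qsize()
```

One design note relevant to your question: if each data source instead became its own Pod (rather than a thread), Kubernetes could do this fan-out for you and scale the sources across nodes.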

I have successfully deployed this container to Container Instances and Container Apps (to experiment and verify that it works), and as a Pod inside a Kubernetes cluster. The main idea is to run it inside Kubernetes and scale it out as the number of data sources it needs to process grows.

My question is: is this the best approach for this scenario, deploying the container as a single image application and scaling it from there?

Thanks for your help in advance!

Azure Container Registry
Azure Kubernetes Service (AKS)