I see that, in preview, we can now select models from the Hugging Face Hub and deploy them on an Azure ML endpoint.
What if my model is a custom model? I see two options:
A) Ask Hugging Face to add it to their model list (if they start doing that for everyone, the catalog will become a huge mess)
B) Deploy on Azure ML via the Hugging Face Hub (only CPU and East US for now, and apparently it requires a separate subscription)
What I'm looking for is a tutorial or method to deploy a custom Hugging Face Hub model on an Azure ML endpoint, selecting my own compute instance (not going through the Hugging Face Hub).
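To make it concrete, here is roughly what I have in mind, expressed with the Azure ML CLI v2 YAML schema for managed online endpoints. The endpoint name, paths, scoring script, and instance type below are placeholders I made up; the point is that I want to bring my own model files and pick the compute SKU myself:

```yaml
# endpoint.yml — create with: az ml online-endpoint create -f endpoint.yml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineEndpoint.schema.json
name: my-custom-hf-endpoint   # placeholder name
auth_mode: key
```

```yaml
# deployment.yml — create with: az ml online-deployment create -f deployment.yml
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-custom-hf-endpoint
model:
  path: ./model              # local folder with my custom Hugging Face model files
code_configuration:
  code: ./src
  scoring_script: score.py   # my own init()/run() scoring script
environment:
  image: mcr.microsoft.com/azureml/openmpi4.1.0-ubuntu20.04
  conda_file: ./environment/conda.yml   # would pin transformers, torch, etc.
instance_type: Standard_DS3_v2          # <-- the part I want to control myself
instance_count: 1
```

Is this the recommended route for custom models, or is there a more direct integration I'm missing?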