Deploying model in Azure ML confusion

2JK 241 Reputation points
2021-09-29T20:06:09.247+00:00

I'm following a tutorial (https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-and-where?tabs=python) on how to deploy a model to Azure, and I have a few questions that have confused me a bit. I have a model that I trained using a notebook in Azure ML and saved (as .h5) in a models folder in my compute directory (Users/username/projectname/models).

1- Can I deploy from the Azure ML Notebook section? So I create a .py file (or can I do it in a .ipynb notebook?), connect to my workspace, and register the model from there? I have my model stored in the models folder, so can I just reference it from an azureml.core.Run object?

2- When I create my entry script and the inference and deployment configurations, do they have to be in separate files, or does that not matter? Same question for the code that deploys the model.

3- What model extensions are supported? Is .h5 fine?

4- When I deploy successfully, do I get an endpoint or URI that I can connect to from anywhere?

I know this is a bit all over the place, but any clarifications would be appreciated.

Azure Machine Learning

Accepted answer
  1. GiftA-MSFT 11,171 Reputation points
    2021-09-30T18:42:24.787+00:00

    Hi, thanks for reaching out. Here's the workflow for deploying a model:

    1. Register the model
    2. Prepare an entry script
    3. Prepare an inference configuration
    4. Deploy the model locally to ensure everything works
    5. Choose a compute target
    6. Re-deploy the model to the cloud
    7. Test the resulting web service

    You can perform all of the above steps through AML notebooks. However, your entry script and deployment configuration need to be in separate files. After deployment, you obtain an endpoint for calling the web service. Models saved with the .h5 extension are supported.
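
    For example, registering the model you already saved on your compute instance can be done straight from a notebook cell (or a .py file). A minimal sketch, where the model path and registered name below are placeholders for your own values:

    from azureml.core import Workspace
    from azureml.core.model import Model

    # connect to the workspace; from_config() works out of the box on an AML compute instance
    ws = Workspace.from_config()

    # register the .h5 file that is already on the notebook file share
    model = Model.register(workspace=ws,
                           model_path='models/model.h5',   # relative path to your saved model
                           model_name='my-keras-model')    # name under which the model is registered

    You do not need a Run object for this; Model.register works directly against the workspace. Run.register_model is only needed when you register a model produced inside a tracked run.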

    You can create a new environment or reference an existing one in your config; here's information on how to create and use software environments. Also, here's another example (the "Deploy the model in ACI" section) of how to create a scoring script. Please review the following document for details on how to save and load Keras models.

    %%writefile score.py  
    import json  
    import numpy as np  
    import os  
    import tensorflow as tf  
      
    from azureml.core.model import Model  
      
    def init():  
        global tf_model  
        # AZUREML_MODEL_DIR points at the folder where the registered model files are mounted
        model_root = os.getenv('AZUREML_MODEL_DIR')
        # the name of the folder in which to look for tensorflow model files
        tf_model_folder = 'model'

        # this example loads a TensorFlow SavedModel folder; for a single .h5 Keras file you
        # would instead call tf.keras.models.load_model(os.path.join(model_root, 'model.h5'))
        tf_model = tf.saved_model.load(os.path.join(model_root, tf_model_folder))
      
    def run(raw_data):  
        data = np.array(json.loads(raw_data)['data'], dtype=np.float32)  
          
        # make prediction  
        out = tf_model(data)  
        y_hat = np.argmax(out, axis=1)  
      
        return y_hat.tolist()  
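
    Once the model is registered and score.py is saved, the remaining pieces (inference configuration, deployment configuration, and the deploy call) can live in the same notebook. A rough sketch, assuming the ws and model objects from the registration step above; the environment and service names are examples, not required values:

    from azureml.core import Environment
    from azureml.core.model import InferenceConfig, Model
    from azureml.core.webservice import AciWebservice

    # pick an environment that has TensorFlow installed; the curated name below is only an
    # example, check which environments are available in your workspace
    env = Environment.get(workspace=ws, name='AzureML-tensorflow-2.4-ubuntu18.04-py37-cpu-inference')

    inference_config = InferenceConfig(entry_script='score.py', environment=env)
    deployment_config = AciWebservice.deploy_configuration(cpu_cores=1, memory_gb=1)

    service = Model.deploy(workspace=ws,
                           name='keras-h5-service',
                           models=[model],
                           inference_config=inference_config,
                           deployment_config=deployment_config)
    service.wait_for_deployment(show_output=True)

    # scoring_uri is a REST endpoint; any client that can POST JSON can call it,
    # subject to the authentication settings on the web service
    print(service.scoring_uri)

    For the local test in step 4, LocalWebservice.deploy_configuration() can be used in place of the ACI configuration, with everything else unchanged.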
    
    1 person found this answer helpful.

1 additional answer

  1. 2JK 241 Reputation points
    2021-10-01T21:01:34.717+00:00

    Hate to bump threads, but if someone can help with my comments to @GiftA-MSFT, that would be appreciated. I'm a bit stuck in some areas.

