Error creating endpoint from MLflow model (TensorFlow job)

Juna Salviati 11 Reputation points MVP

Hello everybody,
I am trying to deploy a real-time endpoint from a registered MLflow model produced by a TensorFlow training job.
In this repository, you will find the training scripts:

The job outputs an MLflow model together with its conda environment YAML file.



When I try to deploy the model to a realtime endpoint, I get the following error:


It seems to be a protobuf-related error raised while loading the model:

```
 File "/opt/miniconda/envs/userenv/lib/python3.8/site-packages/google/protobuf/", line 560, in __new__
TypeError: Descriptors cannot not be created directly.
If this call came from a _pb2.py file, your generated code is out of date and must be regenerated with protoc >= 3.19.0.
If you cannot immediately regenerate your protos, some other possible workarounds are:
 1. Downgrade the protobuf package to 3.20.x or lower.
 2. Set PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION=python (but this will use pure-Python parsing and will be much slower).
More information:
```
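If I understand workaround 2 correctly, it means setting the variable before protobuf is ever imported. A minimal sketch of what that would look like in Python:

```python
import os

# Workaround 2 from the protobuf error message: force the pure-Python
# protobuf implementation. This must be set before anything imports
# google.protobuf (e.g. before importing tensorflow or mlflow).
os.environ["PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION"] = "python"

# Any subsequent protobuf-backed import now uses the slower pure-Python
# parser instead of the mismatched compiled descriptors.
```

The catch is that the scoring script is auto-generated here, so there is no obvious place to put this line.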

The environment is deployed automatically (the scoring script is also generated).
I have also tried different images, with different Python versions (3.7) and TensorFlow versions (2.4), with no luck.

How can I solve this issue?

Thank you in advance for your support.

Azure Machine Learning

2 answers

  1. Juna Salviati

    Seems like the problem is in the azureml-inference-server-http package, where there is a protobuf version mismatch.

    As a workaround, I created a custom managed online deployment via the CLI, specifying the following environment variable:


    and then I was able to publish the endpoint.
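    A sketch of what that deployment YAML can look like (the endpoint, model, and deployment names are placeholders; the assumption, based on the error message above, is that the variable to set is the protobuf implementation override):

```yaml
# deployment.yml — minimal managed online deployment sketch (names illustrative)
$schema: https://azuremlschemas.azureedge.net/latest/managedOnlineDeployment.schema.json
name: blue
endpoint_name: my-endpoint
model: azureml:my-tf-model@latest
instance_type: Standard_DS3_v2
instance_count: 1
environment_variables:
  PROTOCOL_BUFFERS_PYTHON_IMPLEMENTATION: python
```

    The deployment is then created with `az ml online-deployment create -f deployment.yml`.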


  2. Juna Salviati

    I have a Keras model, so I had to develop and upload my own scoring script to override the init() function, loading the model with load_model() for Keras models instead of the default joblib.load(model_path).
    You will probably also have to override the run() function to customize the inference.
