Endpoint not consumable after successful model deployment to Azure container instance (Machine Learning studio - designer)

Tommaso Bassignana 1 Reputation point
2021-07-27T10:30:40.39+00:00

Hi, after I register a model and then deploy it to an Azure container instance via the graphical interface of the Machine Learning studio designer, I cannot test the endpoint with data or consume it, even though its state is "Healthy". These are the logs of the deployment:

2021-07-27 09:28:09,742 | root | INFO | 500
127.0.0.1 - - [27/Jul/2021:09:28:09 +0000] "POST /score?verbose=true HTTP/1.0" 500 37 "-" "Go-http-client/1.1"
Exception in worker process
Traceback (most recent call last):
  File "/azureml-envs/azureml_66791b0cfb22a4c054c681ce0ae95fcf/lib/python3.6/site-packages/gunicorn/arbiter.py", line 589, in spawn_worker
    worker.init_process()
  File "/azureml-envs/azureml_66791b0cfb22a4c054c681ce0ae95fcf/lib/python3.6/site-packages/gunicorn/workers/base.py", line 142, in init_process
    self.run()
  File "/azureml-envs/azureml_66791b0cfb22a4c054c681ce0ae95fcf/lib/python3.6/site-packages/gunicorn/workers/sync.py", line 125, in run
    self.run_for_one(timeout)
  File "/azureml-envs/azureml_66791b0cfb22a4c054c681ce0ae95fcf/lib/python3.6/site-packages/gunicorn/workers/sync.py", line 84, in run_for_one
    self.wait(timeout)
  File "/azureml-envs/azureml_66791b0cfb22a4c054c681ce0ae95fcf/lib/python3.6/site-packages/gunicorn/workers/sync.py", line 36, in wait
    ret = select.select(self.wait_fds, [], [], timeout)
  File "/var/azureml-server/routes_common.py", line 162, in alarm_handler
    raise TimeoutException(error_message)
timeout_exception.TimeoutException
Worker exiting (pid: 90)
worker timeout is set to 300
Booting worker with pid: 330
SPARK_HOME not set. Skipping PySpark Initialization.
Failure while loading azureml_run_type_providers. Failed to load entrypoint azureml.PipelineRun = azureml.pipeline.core.run:PipelineRun._from_dto with exception (azureml-core 1.32.0 (/azureml-envs/azureml_66791b0cfb22a4c054c681ce0ae95fcf/lib/python3.6/site-packages), Requirement.parse('azureml-core~=1.31.0')).
Failure while loading azureml_run_type_providers. Failed to load entrypoint azureml.ReusedStepRun = azureml.pipeline.core.run:StepRun._from_reused_dto with exception (azureml-core 1.32.0 (/azureml-envs/azureml_66791b0cfb22a4c054c681ce0ae95fcf/lib/python3.6/site-packages), Requirement.parse('azureml-core~=1.31.0')).
Failure while loading azureml_run_type_providers. Failed to load entrypoint azureml.StepRun = azureml.pipeline.core.run:StepRun._from_dto with exception (azureml-core 1.32.0 (/azureml-envs/azureml_66791b0cfb22a4c054c681ce0ae95fcf/lib/python3.6/site-packages), Requirement.parse('azureml-core~=1.31.0')).
Initializing logger
2021-07-27 09:29:12,687 | root | INFO | Starting up app insights client
2021-07-27 09:29:12,687 | root | INFO | Starting up request id generator
2021-07-27 09:29:12,687 | root | INFO | Starting up app insight hooks
2021-07-27 09:29:12,687 | root | INFO | Invoking user's init function
2021-07-27 09:29:12,882 | root | INFO | Users's init has completed successfully
2021-07-27 09:29:12,884 | root | INFO | Skipping middleware: dbg_model_info as it's not enabled.
2021-07-27 09:29:12,884 | root | INFO | Skipping middleware: dbg_resource_usage as it's not enabled.
2021-07-27 09:29:12,888 | root | INFO | Scoring timeout is found from os.environ: 60000 ms
2021-07-27 09:55:11,506 | root | INFO | Swagger file not present
2021-07-27 09:55:11,506 | root | INFO | 404
127.0.0.1 - - [27/Jul/2021:09:55:11 +0000] "GET /swagger.json HTTP/1.0" 404 19 "-" "Go-http-client/1.1"

Also, if I try to consume the endpoint, the call fails with error 502; the specific client-side error is JSONDecodeError: Expecting value: line 1 column 1 (char 0)
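(If I understand that error correctly, it just means the response body was empty or not JSON at all: a 502 from the gateway typically returns an empty or HTML body, so decoding it as JSON fails at the very first character. A minimal sketch of parsing the scoring response defensively, to surface what the endpoint actually returned; the function name is illustrative:)

```python
import json

def parse_scoring_response(status_code, body_text):
    # A 502 from the gateway usually carries an empty or HTML body;
    # calling json.loads() on it raises
    # "Expecting value: line 1 column 1 (char 0)".
    if status_code != 200:
        raise RuntimeError(f"scoring failed with HTTP {status_code}: {body_text[:200]!r}")
    try:
        return json.loads(body_text)
    except json.JSONDecodeError as exc:
        raise RuntimeError(f"endpoint returned non-JSON body: {body_text[:200]!r}") from exc
```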

I'm trying to deploy the trained model, but the same thing happens if I try to deploy the inference pipeline.

I'm referring to this documentation:
https://learn.microsoft.com/en-us/azure/machine-learning/how-to-deploy-model-designer

My doubt, really, is this: it seems that I can deploy a model to an Azure container instance directly from the deployment tab, without creating a container instance separately beforehand, since it appears to be created at that moment. The process should be automatic. The deployment state is then "Healthy", so that part is OK, but somewhere during the actual deployment something fails, because I can't consume the endpoint.

Thanks for the support.

Azure Machine Learning
An Azure machine learning service for building and deploying models.

1 answer

Sort by: Most helpful
  1. romungi-MSFT 45,961 Reputation points Microsoft Employee
    2021-07-27T12:16:12.313+00:00

    @TommasoBassignana-0564 Based on the error, I think the input passed to your endpoint is not handled correctly by the entry script that is used. The entry script deployed with the model should contain an init() method that loads your model and a run() method that parses the input JSON and returns the result. You can look up a sample entry script here.
    If there are any dependencies to be installed, you can also add them as part of the conda dependencies file during the deployment. A healthy endpoint indicates successful creation of the endpoint, but it can still error out if the entry script does not handle the inputs. Thanks.

