@Ramr-msft @romungi-MSFT Is there something new here? Or do I have to create a support ticket?
Register a trained .ilearner model and deploy it as a Real-Time Inference Endpoint
Dear Community, for more than two weeks I have been struggling with Azure Machine Learning Studio. Our training pipeline generates a Trained-Best-Model folder containing the following files:

- _meta.yaml
- _samples.json
- _schema.json
- conda_env.yaml
- data.ilearner
- model_spec.yaml
- score.py

In the designer I can simply run my training pipeline, update my inference pipeline, and once that has finished, press the "Deploy" button. This creates some kind of deployment package, registers the model, and deploys it to a Kubernetes cluster. I am completely happy with the setup of my pipeline and the resulting endpoint.

But our customer wants us to automate this so that the endpoint is redeployed on a weekly schedule. All the tutorials and information I have found use other file formats (such as .pkl and .onnx), and not a single Jupyter notebook shows me how to:

a) read my MLTable dataset (type File, containing the folder with the files listed above) and "convert" it to a model,
b) package this model, and
c) deploy it to an existing Kubernetes webservice.

If I could just automate those two clicks from the inference pipeline run in Python, it would all be done. But this issue has already consumed more than two weeks of constant failing. Is it really so hard, or is it just me?

![205506-image.png][1]

[1]: /api/attachments/205506-image.png?platform=QnA
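For context, this is roughly the automation I am hoping for, written as a minimal sketch with the v1 `azureml-core` SDK. All names here (the `Trained_Best_Model` folder path, `designer-best-model`, `my-aks-cluster`, `my-realtime-endpoint`) are placeholders I made up, and I do not know whether the designer-generated `score.py` and `conda_env.yaml` can simply be reused this way for an `.ilearner` model, which is essentially my question:

```python
from azureml.core import Workspace, Model, Environment
from azureml.core.model import InferenceConfig
from azureml.core.compute import AksCompute
from azureml.core.webservice import AksWebservice

# Connect to the workspace (config.json downloaded from the Studio portal)
ws = Workspace.from_config()

# Register the downloaded Trained-Best-Model folder as a model
# ("Trained_Best_Model" is a placeholder for a local copy of the pipeline output)
model = Model.register(
    workspace=ws,
    model_path="Trained_Best_Model",   # folder with data.ilearner, score.py, conda_env.yaml, ...
    model_name="designer-best-model",  # placeholder model name
)

# Build the scoring environment from the conda file the designer generated
env = Environment.from_conda_specification(
    name="designer-inference-env",
    file_path="Trained_Best_Model/conda_env.yaml",
)

# Reuse the generated score.py as the entry script
# (unclear to me whether this works for an .ilearner model)
inference_config = InferenceConfig(
    source_directory="Trained_Best_Model",
    entry_script="score.py",
    environment=env,
)

# Deploy (or overwrite) the real-time webservice on the existing AKS cluster
aks_target = AksCompute(ws, "my-aks-cluster")  # placeholder cluster name
deployment_config = AksWebservice.deploy_configuration(cpu_cores=1, memory_gb=2)

service = Model.deploy(
    workspace=ws,
    name="my-realtime-endpoint",               # placeholder endpoint name
    models=[model],
    inference_config=inference_config,
    deployment_config=deployment_config,
    deployment_target=aks_target,
    overwrite=True,
)
service.wait_for_deployment(show_output=True)
print(service.scoring_uri)
```

Is something along these lines the right direction, or does the designer's "Deploy" button do something that cannot be reproduced with the SDK?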