How to run an AI model in a Blazor frontend
Hi,
What is the best way to run an AI model (written in Python) that has been deployed to Azure as a managed endpoint, in the Blazor frontend of each user of a Blazor website?
Is there a way to run the AI model in the Blazor frontend in a simple and efficient manner, or is it necessary to rewrite the model, e.g. in TensorFlow.js?
Regards,
A
Azure Machine Learning
Azure Virtual Machines
Blazor
.NET Machine learning
-
santoshkc 4,270 Reputation points • Microsoft Vendor
2023-09-22T09:08:35.9633333+00:00 Thank you for reaching out to the Microsoft Q&A forum. I understand that you want to run the Azure hosted AI model in the Blazor frontend. I will be happy to assist you with this.
Running an Azure-hosted AI model from the Blazor frontend can be achieved without rewriting the model in TensorFlow.js. You will need to create a Blazor web application that communicates with the Azure-hosted AI model. Here are the steps to achieve this:
- Create and deploy Azure AI Model:
- First, you need to create and deploy your AI model as a managed endpoint in Azure Machine Learning.
- Make sure the deployed model is accessible via an API endpoint. You might use Azure API Management or Azure Functions to create an API wrapper around your model. Quickstart - Create an Azure API Management instance | Microsoft Learn
- Create a Blazor Web Application:
- Create a Blazor web application.
- Ensure you have the necessary dependencies and configuration to access external APIs from your Blazor application. Blazor Tutorial | Build your first app (microsoft.com)
- Access the Azure AI Model:
- In your Blazor component, add code to make HTTP requests to the Azure API endpoint that hosts your AI model.
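The steps above can be sketched as a small service that a Blazor component would inject and call. This is a minimal sketch, not the schema of any particular deployment: the endpoint URI, the API key, and the `ScoreRequest`/`ScoreResponse` shapes are placeholders that you would replace with the values and scoring schema of your own managed endpoint.

```csharp
using System.Net.Http;
using System.Net.Http.Headers;
using System.Net.Http.Json;
using System.Threading.Tasks;

// Request/response shapes are assumptions -- match them to the scoring
// schema of your own deployment.
public record ScoreRequest(double[] Features);
public record ScoreResponse(double[] Predictions);

public class ModelClient
{
    private readonly HttpClient _http;
    private readonly string _endpoint;
    private readonly string _apiKey;

    public ModelClient(HttpClient http, string endpoint, string apiKey)
        => (_http, _endpoint, _apiKey) = (http, endpoint, apiKey);

    public async Task<ScoreResponse?> ScoreAsync(ScoreRequest input)
    {
        using var request = new HttpRequestMessage(HttpMethod.Post, _endpoint);
        // Managed online endpoints authenticate with a key or token
        // sent as a bearer Authorization header.
        request.Headers.Authorization =
            new AuthenticationHeaderValue("Bearer", _apiKey);
        request.Content = JsonContent.Create(input);

        var response = await _http.SendAsync(request);
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadFromJsonAsync<ScoreResponse>();
    }
}
```

You would register this in `Program.cs` (e.g. via `builder.Services.AddHttpClient`), inject it into a component, and call `ScoreAsync` from an event handler.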
I hope this information helps! Let me know if you have any further questions.
-
Héðinn Steingrímsson 0 Reputation points
2023-09-22T12:50:56.1366667+00:00 Thank you @santoshkc for an informative answer.
A follow-up question: if data is sent from the Blazor client to the ML model (in real time, every 2 to 5 seconds), could the data stay local (never leave the end user's machine) between the Blazor client and the ML model, or would it have to go through the Azure cloud?
Is there a way to run the ML model such that the data never leaves the local machine?
As I understand it, the ML model would be in the Azure cloud even if it were invoked from the Blazor client. Is there a way to configure the setup so that all data from the Blazor client is sent directly to an ML model running in the Blazor client, and the data never goes to the Azure cloud?
-
Bruce (SqlWork.com) 56,526 Reputation points
2023-09-22T16:47:56.8333333+00:00 If you host the AI engine as a web service in Azure, then the data needs to be sent to that web service.
If you are using Blazor Server, then the Blazor server host is making the web service call, and the information has already left the client machine just to reach the Blazor server.
If you are using Blazor WASM, then the client machine is making the call.
If you recode to ML.NET and C#, then Blazor could host the engine: with WASM, the client hosts it; otherwise the Blazor server hosts it.
If you convert to tensorflow.js, then the client hosts the engine, and JS interop is used to call it.
-
Héðinn Steingrímsson 0 Reputation points
2023-09-22T23:30:25.26+00:00 Thank you @Bruce (SqlWork.com)
With Blazor WASM the client is both making the call and hosting the ML model.
For running the ML in Blazor WASM, I would need to recode it to .NET and C# (https://www.tensorflow.org/versions has a C# version; https://pytorch.org/get-started/locally/ has a C++ version).
Am I understanding correctly that if I recode the ML to C# and .NET, then with Blazor WASM I can run the ML in the client without the data ever being sent to Azure (the data stays on the local machine)?
Another question: the above was about how the input is sent to the ML model. Do you also know how exactly I get the output from the ML model back to my Blazor backend (or Blazor application) in each of these scenarios?
-
Bruce (SqlWork.com) 56,526 Reputation points
2023-09-23T21:48:41.66+00:00 Most hosted ML engines expose function calls that take parameters and return a response. What this looks like depends on your ML implementation. See ML.NET.
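Whichever hosting option is chosen, the Blazor app can treat the model as exactly that: a function taking features and returning a prediction. Below is a minimal sketch with hypothetical names (`IModelScorer`, `LocalScorer` are not from any library): the interface lets you swap a web-service-backed implementation (Azure endpoint) for an in-process one (e.g. an ML.NET `PredictionEngine`) without touching the components that consume it.

```csharp
using System.Threading.Tasks;

// Hypothetical abstraction over the hosting options discussed above.
public interface IModelScorer
{
    Task<float[]> ScoreAsync(float[] features);
}

// In-process stand-in. A real ML.NET implementation would wrap a
// PredictionEngine<TInput, TOutput> here; an Azure-backed implementation
// would make the HTTP call to the managed endpoint instead.
public class LocalScorer : IModelScorer
{
    public Task<float[]> ScoreAsync(float[] features)
    {
        // Placeholder "model": doubles each feature, just to show the
        // shape of a call that takes parameters and returns a response.
        var output = new float[features.Length];
        for (int i = 0; i < features.Length; i++)
            output[i] = features[i] * 2f;
        return Task.FromResult(output);
    }
}
```

A Blazor WASM component would receive `IModelScorer` via dependency injection; with an in-process implementation like `LocalScorer`, the features and predictions never leave the browser.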
-
santoshkc 4,270 Reputation points • Microsoft Vendor
2023-09-25T08:14:00.14+00:00 Following up to see if the above suggestion was helpful. If you have any further queries, do let us know.
-
santoshkc 4,270 Reputation points • Microsoft Vendor
2023-09-26T08:56:29.3533333+00:00 I hope the above suggestions are helpful. If you have found a resolution, please share it with the community, as it can be helpful to others. Thank you.