Introduction

You can run a large language model (LLM) locally or through a cloud provider, such as Azure. Opting for a cloud-hosted version gives you the assurance that the provider validates the model's functionality. If you're building a tech startup or integrating an LLM into an existing product, that guaranteed performance helps you earn customer trust. Additionally, when you run an LLM through a cloud provider, your prompts and data stay between you and that provider, which reduces the risk of data leakage.

Scenario: Trying out an LLM for a tech MVP

Imagine you run a tourism tech startup and want to use an LLM for various tasks to run your business more efficiently and improve customer experience. Before you commit to an LLM, you want to run tests to see whether it generates useful responses. You also want to ensure that any prompts you send to the LLM stay secure between you and your cloud provider. You imagine there should be some sort of testing playground to explore these concerns.

What are you going to learn?

In this module, you learn to:

  • Provision an Azure OpenAI cloud resource.
  • Create a deployment for a specific LLM.
  • Experiment with different prompts to see what kinds of results an LLM can provide in different situations, as sketched in the example after this list.
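
As a preview of that last step, here's a minimal sketch of sending a prompt to an Azure OpenAI deployment, assuming the openai Python package (version 1.x); the endpoint, API key, API version, and deployment name are placeholders you would replace with your own values after provisioning the resource and creating a deployment.

```python
from openai import AzureOpenAI

# Placeholder values: point these at your own Azure OpenAI resource.
client = AzureOpenAI(
    azure_endpoint="https://<your-resource>.openai.azure.com/",
    api_key="<your-api-key>",
    api_version="2024-02-01",
)

# "model" refers to your deployment name, not the underlying base model.
response = client.chat.completions.create(
    model="<your-deployment-name>",
    messages=[
        {"role": "system", "content": "You are a helpful assistant for a tourism startup."},
        {"role": "user", "content": "Suggest a three-day family itinerary for Lisbon."},
    ],
)

print(response.choices[0].message.content)
```

You can run the same kind of experiment interactively in the Azure OpenAI Studio playground before writing any code.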

What is the main objective?

Provision and use an LLM deployment in Azure OpenAI Studio.