Intelligent Apps on AKS Ep02: Bring Your Own AI Models to Intelligent Apps on AKS with KAITO

with Paul Yu, Ishaan Sehgal, Steven Murawski

Join us to learn how to run open-source Large Language Models (LLMs) with HTTP-based inference endpoints inside your AKS cluster using the Kubernetes AI Toolchain Operator (KAITO). We’ll walk through the setup and deployment of containerized LLMs on GPU node pools and see how KAITO can reduce the operational burden of provisioning GPU nodes and of tuning model deployment parameters to fit GPU profiles.
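As a sketch of the workflow covered in this episode, KAITO is driven by a Workspace custom resource that declares both the GPU node requirement and the model to deploy; a manifest along these lines (the model preset, VM size, and names here are illustrative, not from the episode itself) is typical:

```yaml
# Illustrative KAITO Workspace: requests a GPU node pool and deploys a preset LLM.
apiVersion: kaito.sh/v1alpha1
kind: Workspace
metadata:
  name: workspace-falcon-7b
resource:
  instanceType: "Standard_NC12s_v3"   # Azure GPU VM size (illustrative choice)
  labelSelector:
    matchLabels:
      apps: falcon-7b
inference:
  preset:
    name: "falcon-7b"                 # an open-source model preset (illustrative)
```

Applying a manifest like this lets the operator provision the matching GPU node pool and stand up the containerized model behind an HTTP inference endpoint, so you don't hand-tune node SKUs or deployment parameters per model.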

Learning objectives

  • Learn how to extend existing microservices with AI capabilities.
  • Understand how to use progressive enhancement to integrate AI capabilities into existing applications.
  • Learn how to use open-source or custom Large Language Models (LLMs) with existing applications.
  • Learn how to run open-source or custom Large Language Models on Azure Kubernetes Service.


Level: Advanced
Roles: Solution Architect, DevOps Engineer, Developer, AI Engineer
Products: Azure Kubernetes Service (AKS), Azure Virtual Machines