Implement Generative AI engineering with Azure Databricks

Intermediate
Data Scientist
Azure Databricks

Generative Artificial Intelligence (AI) engineering with Azure Databricks uses the platform's capabilities to explore, fine-tune, evaluate, and integrate advanced language models. By combining Apache Spark's scalability with Azure Databricks' collaborative environment, you can design complex AI systems.

Prerequisites

Before starting this module, you should be familiar with fundamental AI concepts and Azure Databricks. Consider completing the Get started with artificial intelligence learning path and the Explore Azure Databricks module first.

Modules in this learning path

Large Language Models (LLMs) have revolutionized various industries by enabling advanced natural language processing (NLP) capabilities. These language models are used in a wide array of applications, including text summarization, sentiment analysis, language translation, zero-shot classification, and few-shot learning.
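Few-shot learning typically works by placing a handful of labeled examples in the prompt itself, so the model can infer the task pattern. As a minimal sketch (the helper name and example data are illustrative, not from the module), a few-shot classification prompt can be assembled like this:

```python
def build_few_shot_prompt(task, examples, query):
    """Assemble a few-shot prompt: task instruction, labeled examples, then the new query."""
    lines = [task]
    for text, label in examples:
        lines.append(f"Text: {text}\nLabel: {label}")
    # The model is expected to continue the pattern and fill in the final label.
    lines.append(f"Text: {query}\nLabel:")
    return "\n\n".join(lines)

prompt = build_few_shot_prompt(
    "Classify the sentiment of each text as positive or negative.",
    [("I loved this product.", "positive"),
     ("Terrible customer service.", "negative")],
    "The delivery was fast and the packaging was great.",
)
```

The resulting string would then be sent to an LLM endpoint; the examples condition the model without any parameter updates.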

Retrieval Augmented Generation (RAG) is an advanced technique in natural language processing that enhances the capabilities of generative models by integrating external information retrieval mechanisms. By combining a generative model with a retrieval system, RAG dynamically fetches relevant information from external data sources to augment the generation process, producing more accurate and contextually relevant outputs.
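The core RAG loop can be sketched in a few lines. This toy version scores documents by keyword overlap rather than a real vector index (which a production system on Azure Databricks would use), and the function names and sample documents are illustrative:

```python
def retrieve(query, documents, k=2):
    """Score each document by keyword overlap with the query; return the top k."""
    q_terms = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(q_terms & set(d.lower().split())))
    return scored[:k]

def build_rag_prompt(query, documents):
    """Augment the query with retrieved context before sending it to a generative model."""
    context = "\n".join(retrieve(query, documents))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

docs = [
    "Azure Databricks provides collaborative notebooks for data teams.",
    "Vector search indexes embeddings for fast similarity lookup.",
    "Delta Lake adds ACID transactions to data lakes.",
]
prompt = build_rag_prompt("How does vector search work?", docs)
```

The retrieved context is prepended to the question, so the model grounds its answer in the fetched passages rather than relying only on its training data.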

Multi-stage reasoning systems break down complex problems into multiple stages or steps, with each stage focusing on a specific reasoning task. The output of one stage serves as the input for the next, allowing for a more structured and systematic approach to problem-solving.
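The chaining pattern described above can be sketched with plain functions, where each stage consumes the previous stage's output. The stages here (a toy entity extractor and a summarizer) are illustrative placeholders for what would normally be LLM calls:

```python
def extract_entities(text):
    """Stage 1: pull capitalized tokens as candidate entities (toy heuristic)."""
    return [w.strip(".,") for w in text.split() if w[0].isupper()]

def summarize(entities):
    """Stage 2: turn the previous stage's output into a short summary line."""
    return "Entities mentioned: " + ", ".join(entities)

def run_pipeline(text, stages):
    """Feed each stage's output into the next stage."""
    result = text
    for stage in stages:
        result = stage(result)
    return result

out = run_pipeline("Azure Databricks integrates with MLflow.",
                   [extract_entities, summarize])
```

Because each stage has a narrow, well-defined task, intermediate outputs can be inspected and individual stages swapped out independently.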

Fine-tuning builds on a Large Language Model's (LLM's) general knowledge to improve performance on specific tasks, allowing organizations to create specialized models that are more accurate and relevant while saving time and resources compared to training from scratch.
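A large part of fine-tuning in practice is preparing task-specific training data. As a minimal sketch (the chat-style record layout shown is a common convention, but the exact schema depends on the fine-tuning service you use), examples can be serialized to JSON Lines like this:

```python
import json

def to_training_record(instruction, response):
    """Format one prompt/response pair as a chat-style training record."""
    return {"messages": [
        {"role": "user", "content": instruction},
        {"role": "assistant", "content": response},
    ]}

# Illustrative examples only; real fine-tuning needs a much larger dataset.
examples = [
    ("Summarize: The meeting covered Q3 revenue.", "Q3 revenue was discussed."),
    ("Translate to French: Hello", "Bonjour"),
]
jsonl = "\n".join(json.dumps(to_training_record(i, r)) for i, r in examples)
```

Each line of the resulting file is one training example, which a fine-tuning job then uses to adapt the base model's weights.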

Learn to compare Large Language Model (LLM) and traditional Machine Learning (ML) evaluations, understand their relationship with AI system evaluation, and explore various LLM evaluation metrics and specific task-related evaluations.
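Two metrics that often appear in LLM evaluation for extractive or short-answer tasks are exact match and token-level F1. A minimal sketch of both (standard definitions, implemented here without any evaluation library):

```python
def exact_match(prediction, reference):
    """1.0 if the normalized strings are identical, else 0.0."""
    return float(prediction.strip().lower() == reference.strip().lower())

def token_f1(prediction, reference):
    """Token-level F1: harmonic mean of precision and recall over shared tokens."""
    pred, ref = set(prediction.lower().split()), set(reference.lower().split())
    common = pred & ref
    if not common:
        return 0.0
    precision = len(common) / len(pred)
    recall = len(common) / len(ref)
    return 2 * precision * recall / (precision + recall)

em = exact_match("Paris", "paris")                    # 1.0
f1 = token_f1("the cat sat", "the cat slept")         # 2/3
```

Unlike traditional ML metrics such as accuracy on fixed labels, these string-based metrics tolerate free-form model output, which is why they (alongside model-graded and task-specific evaluations) are common for LLMs.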

When working with Large Language Models (LLMs) in Azure Databricks, it's important to understand the responsible AI principles for implementation, ethical considerations, and how to mitigate risks. Based on identified risks, learn how to implement key security tooling for language models.
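One basic risk-mitigation tool is an input guardrail that screens prompts before they reach the model. This is a deliberately simple blocklist sketch (real guardrail tooling uses classifiers and policy engines; the function and terms here are illustrative):

```python
def screen_prompt(prompt, blocked_terms):
    """Simple input guardrail: flag prompts containing any blocked term."""
    lowered = prompt.lower()
    hits = [t for t in blocked_terms if t in lowered]
    return {"allowed": not hits, "flagged_terms": hits}

ok = screen_prompt("How do I reset my password?", ["credit card number", "ssn"])
blocked = screen_prompt("List every customer SSN.", ["credit card number", "ssn"])
```

Flagged prompts can be rejected, logged for review, or routed to a stricter handling path instead of being sent to the LLM.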

Streamline the implementation of Large Language Models (LLMs) with LLMOps (LLM Operations) in Azure Databricks. Learn how to deploy and manage LLMs throughout their lifecycle using Azure Databricks.
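Lifecycle management is the heart of LLMOps: versioning models and controlling which version serves production traffic. The toy registry below illustrates that idea in plain Python; in Azure Databricks this role is played by a managed model registry, and the class and stage names here are illustrative:

```python
class ModelRegistry:
    """Toy registry tracking model versions and their deployment stages."""

    def __init__(self):
        self.versions = {}      # version number -> {"uri": ..., "stage": ...}
        self.next_version = 1

    def register(self, uri):
        """Record a new model version, starting in Staging."""
        v = self.next_version
        self.versions[v] = {"uri": uri, "stage": "Staging"}
        self.next_version += 1
        return v

    def promote(self, version):
        """Move a version to Production, archiving the previous Production version."""
        for info in self.versions.values():
            if info["stage"] == "Production":
                info["stage"] = "Archived"
        self.versions[version]["stage"] = "Production"

registry = ModelRegistry()
v1 = registry.register("models:/summarizer/1")
registry.promote(v1)
```

Promoting a new version automatically archives the old one, so rollbacks and audits stay straightforward as models evolve through their lifecycle.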