Episode
Run AI models on your local machine with Ollama (Part 5 of 10)
with Yohan Lasorsa
Learn how to seamlessly integrate local AI models into your development workflow using Ollama. You'll see how to download, run, and interact with powerful AI models on your machine while maintaining compatibility with OpenAI's API. We'll explore the Phi-3 model family and discover how you can use it to build prototypes and experiment with AI applications.
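As a rough sketch of what "interacting with a model on your machine" looks like, the snippet below builds a request to Ollama's local HTTP generate endpoint. It assumes Ollama is running on its default port (11434) and that a Phi-3 model has been pulled; the actual network call is left commented out since it needs a live server.

```python
import json
import urllib.request

# Sketch: a request to Ollama's local HTTP API (assumes Ollama is
# running on its default port, 11434, and "phi3" has been pulled).
OLLAMA_URL = "http://localhost:11434/api/generate"

payload = {
    "model": "phi3",          # model name as pulled with `ollama pull phi3`
    "prompt": "Explain recursion in one sentence.",
    "stream": False,          # ask for one JSON response instead of a stream
}

request = urllib.request.Request(
    OLLAMA_URL,
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With a running Ollama server, sending it would look like:
#   with urllib.request.urlopen(request) as response:
#       print(json.load(response)["response"])
```

Because everything runs locally, no API key is involved and nothing leaves your machine.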
Chapters
- 00:00 - Introduction to Local AI Models
- 00:12 - Benefits of Using Local Models
- 00:52 - Overview of Phi-3 Model Family
- 01:30 - Introduction to Ollama
- 02:10 - Installing Ollama and Downloading Models
- 03:10 - Running a UI with Ollama
- 04:20 - Using Ollama's HTTP API
- 05:50 - OpenAI-Compatible API Features
- 06:40 - Next Steps with Ollama and Phi-3
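The OpenAI-compatibility chapter above can be sketched in the same stdlib style: Ollama mirrors OpenAI's API layout under a `/v1` path, so an OpenAI-style chat-completion payload can be posted to the local server. The port and model name below match the earlier assumptions; the send step is commented out since it requires a running server.

```python
import json
import urllib.request

# Sketch: posting an OpenAI-style chat-completion payload to Ollama's
# OpenAI-compatible endpoint (assumes Ollama's default port, 11434).
payload = {
    "model": "phi3",
    "messages": [
        {"role": "user", "content": "Write a haiku about local AI."}
    ],
}

request = urllib.request.Request(
    "http://localhost:11434/v1/chat/completions",
    data=json.dumps(payload).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)

# With a running Ollama server, the response parses like an OpenAI one:
#   with urllib.request.urlopen(request) as resp:
#       print(json.load(resp)["choices"][0]["message"]["content"])
```

This compatibility is what lets code written against OpenAI's API be pointed at a local model by changing only the base URL.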
Recommended resources
Related episodes