Microsoft Foundry on Windows provides multiple ready-to-use local Large Language Models (LLMs) that you can integrate into your Windows applications.
## Ready-to-use LLMs
Your app can use the following local LLMs with just a few lines of code and no machine-learning expertise, typically in under an hour. Microsoft handles distribution of the models, and they are shared across apps, so your app doesn't need to bundle its own copy.
| Model | What is it | Supported devices | Docs |
|---|---|---|---|
| Phi Silica | The same on-device LLM that inbox Windows experiences use | Copilot+ PCs (NPU) | Learn more |
| 20+ open-source LLMs | Choose from over 20 available open-source LLMs | Windows 10+ (performance varies; not all models are available on all devices) | Learn more |
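As a rough illustration of the "few lines of code" claim: Foundry Local serves its open-source models through an OpenAI-compatible chat-completions endpoint on localhost. The sketch below is an assumption-laden example, not the official SDK; the port number and the model alias (`phi-3.5-mini`) are placeholders you would replace with whatever your Foundry Local installation reports.

```python
import json
import urllib.request

# Assumed local endpoint; Foundry Local prints the actual address/port when
# it starts. This is an illustrative placeholder, not a documented default.
ENDPOINT = "http://localhost:5273/v1/chat/completions"


def build_chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions payload for a local model."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }


def ask(model: str, prompt: str) -> str:
    """Send a prompt to the local endpoint and return the model's reply."""
    payload = build_chat_request(model, prompt)
    req = urllib.request.Request(
        ENDPOINT,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        body = json.load(resp)
    return body["choices"][0]["message"]["content"]
```

With Foundry Local running, `ask("phi-3.5-mini", "What is an NPU?")` would return the model's reply as a string; no cloud key or ML tooling is involved.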
## Fine-tune local LLMs
If the ready-to-use LLMs above don't fit your scenario, you can fine-tune an LLM on your own data. This option requires building a fine-tuning training dataset, but it is far less work than training a model from scratch.
- Phi Silica: See LoRA Fine-Tuning for Phi Silica to get started.
## Use LLMs from Hugging Face or other sources
You can run a wide variety of LLMs from Hugging Face or other sources locally on Windows 10+ PCs using Windows ML; model compatibility and performance vary with device hardware. This option is more complex and typically takes more time than the ready-to-use local LLMs.
See find or train models for use with Windows ML to learn more.
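Apps consume Windows ML through WinRT APIs (C#/C++). As a hedged stand-in for that workflow, the sketch below uses ONNX Runtime's Python API, which Windows ML builds on, to show why "performance varies based on device hardware": the runtime picks an execution provider (NPU, GPU, or CPU) from whatever the device actually supports. The model path and provider preference order are illustrative assumptions.

```python
def pick_provider(available: list[str], preferred: list[str]) -> str:
    """Return the first preferred execution provider the device supports.

    This is the source of per-device performance differences: the same
    model runs on an NPU, GPU, or CPU depending on what's available.
    """
    for provider in preferred:
        if provider in available:
            return provider
    return "CPUExecutionProvider"  # CPU is always available as a fallback


def load_model(model_path: str):
    """Create an inference session for an ONNX model on the best provider."""
    import onnxruntime as ort  # requires the onnxruntime package

    provider = pick_provider(
        ort.get_available_providers(),
        # Illustrative preference: NPU (QNN) first, then GPU (DirectML).
        ["QNNExecutionProvider", "DmlExecutionProvider"],
    )
    return ort.InferenceSession(model_path, providers=[provider])
```

On a Copilot+ PC `load_model("model.onnx")` might land on the NPU provider, while an older Windows 10 machine falls back to CPU, which is also why not every model is practical on every device.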