In this quickstart, you create a console application that downloads a local AI model, generates a streaming chat response, and unloads the model. Everything runs on your device with no cloud dependency or Azure subscription.
Prerequisites
- .NET 8.0 SDK or later installed.
Samples repository
The complete sample code for this article is available in the Foundry Local GitHub repository. To clone the repository and navigate to the sample, use:
git clone https://github.com/microsoft/Foundry-Local.git
cd Foundry-Local/samples/cs/native-chat-completions
Install packages
If you're developing or shipping on Windows, select the Windows tab. The Windows package integrates with the Windows ML runtime and provides the same API surface area with a broader range of hardware acceleration.
dotnet add package Microsoft.AI.Foundry.Local.WinML
dotnet add package OpenAI
The C# samples in the GitHub repository are preconfigured projects. If you're building from scratch, see the Foundry Local SDK reference for details on how to set up your C# project with Foundry Local.
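As a rough starting point, a minimal project file might look like the following sketch. This is not the sample's actual project file; the target framework and floating package versions are assumptions you should verify against the SDK reference, and you should pin versions before shipping.

```xml
<!-- Hypothetical minimal project file for this quickstart.
     The target framework and package versions are assumptions;
     verify both against the Foundry Local SDK reference. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <OutputType>Exe</OutputType>
    <TargetFramework>net8.0</TargetFramework>
    <ImplicitUsings>enable</ImplicitUsings>
    <Nullable>enable</Nullable>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Microsoft.AI.Foundry.Local.WinML" Version="*" />
    <PackageReference Include="OpenAI" Version="*" />
  </ItemGroup>
</Project>
```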
Use native chat completions API
Copy and paste the following code into a C# file named Program.cs:
using Microsoft.AI.Foundry.Local;
using Betalgo.Ranul.OpenAI.ObjectModels.RequestModels;

CancellationToken ct = new CancellationToken();

var config = new Configuration
{
    AppName = "foundry_local_samples",
    LogLevel = Microsoft.AI.Foundry.Local.LogLevel.Information
};

// Initialize the singleton instance.
// Utils.GetAppLogger() is provided by the sample project; substitute your own logger if you're building standalone.
await FoundryLocalManager.CreateAsync(config, Utils.GetAppLogger());
var mgr = FoundryLocalManager.Instance;

// Discover available execution providers and their registration status.
var eps = mgr.DiscoverEps();
int maxNameLen = 30;
Console.WriteLine("Available execution providers:");
Console.WriteLine($" {"Name".PadRight(maxNameLen)} Registered");
Console.WriteLine($" {new string('─', maxNameLen)} {"──────────"}");
foreach (var ep in eps)
{
    Console.WriteLine($" {ep.Name.PadRight(maxNameLen)} {ep.IsRegistered}");
}

// Download and register all execution providers with per-EP progress.
// EP packages include dependencies and may be large.
// A download is only required again when a new version of the EP is released.
// Cross-platform builds have no dynamic EP download, so this call returns immediately.
Console.WriteLine("\nDownloading execution providers:");
if (eps.Length > 0)
{
    string currentEp = "";
    await mgr.DownloadAndRegisterEpsAsync((epName, percent) =>
    {
        if (epName != currentEp)
        {
            if (currentEp != "")
            {
                Console.WriteLine();
            }
            currentEp = epName;
        }
        Console.Write($"\r {epName.PadRight(maxNameLen)} {percent,6:F1}%");
    });
    Console.WriteLine();
}
else
{
    Console.WriteLine("No execution providers to download.");
}

// Get the model catalog.
var catalog = await mgr.GetCatalogAsync();

// Get a model using an alias.
var model = await catalog.GetModelAsync("qwen2.5-0.5b") ?? throw new Exception("Model not found");

// Download the model (the method skips the download if the model is already cached).
await model.DownloadAsync(progress =>
{
    Console.Write($"\rDownloading model: {progress:F2}%");
    if (progress >= 100f)
    {
        Console.WriteLine();
    }
});

// Load the model.
Console.Write($"Loading model {model.Id}...");
await model.LoadAsync();
Console.WriteLine("done.");

// Get a chat client.
var chatClient = await model.GetChatClientAsync();

// Create a chat message.
List<ChatMessage> messages = new()
{
    new ChatMessage { Role = "user", Content = "Why is the sky blue?" }
};

// Get a streaming chat completion response.
Console.WriteLine("Chat completion response:");
var streamingResponse = chatClient.CompleteChatStreamingAsync(messages, ct);
await foreach (var chunk in streamingResponse)
{
    Console.Write(chunk.Choices[0].Message.Content);
    Console.Out.Flush();
}
Console.WriteLine();

// Tidy up: unload the model.
await model.UnloadAsync();
Run the code by using the following command:
dotnet run
Note
If you're targeting Windows, use the Windows-specific instructions under the Windows tab for the best performance and experience.
Troubleshooting
- Build errors referencing `net8.0`: Install the .NET 8.0 SDK, then rebuild the app.
- Model not found: Run the optional model listing snippet (a sketch follows this list) to find an alias available on your device, then update the alias passed to `GetModelAsync`.
- Slow first run: Model downloads can take time the first time you run the app.
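The model listing snippet isn't shown in this article. A minimal sketch follows, assuming the catalog exposes a `GetModelsAsync` method and that models carry `Alias` and `Id` properties; verify both against the Foundry Local SDK reference.

```csharp
// Hypothetical sketch: enumerate the catalog to find a valid alias.
// GetModelsAsync, Alias, and Id are assumed names; confirm them in the SDK reference.
var catalog = await FoundryLocalManager.Instance.GetCatalogAsync();
foreach (var m in await catalog.GetModelsAsync())
{
    Console.WriteLine($"{m.Alias} ({m.Id})");
}
```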
Prerequisites
- Node.js 20 or later installed.
Samples repository
The complete sample code for this article is available in the Foundry Local GitHub repository. To clone the repository and navigate to the sample, use:
git clone https://github.com/microsoft/Foundry-Local.git
cd Foundry-Local/samples/js/native-chat-completions
Install packages
If you're developing or shipping on Windows, select the Windows tab. The Windows package integrates with the Windows ML runtime and provides the same API surface area with a broader range of hardware acceleration.
npm install foundry-local-sdk-winml openai
Use native chat completions API
Copy and paste the following code into a JavaScript file named app.js:
import { FoundryLocalManager } from 'foundry-local-sdk';

// Initialize the Foundry Local SDK
console.log('Initializing Foundry Local SDK...');
const manager = FoundryLocalManager.create({
  appName: 'foundry_local_samples',
  logLevel: 'info'
});
console.log('✓ SDK initialized successfully');

// Discover available execution providers and their registration status.
const eps = manager.discoverEps();
const maxNameLen = 30;
console.log('\nAvailable execution providers:');
console.log(` ${'Name'.padEnd(maxNameLen)} Registered`);
console.log(` ${'─'.repeat(maxNameLen)} ──────────`);
for (const ep of eps) {
  console.log(` ${ep.name.padEnd(maxNameLen)} ${ep.isRegistered}`);
}

// Download and register all execution providers with per-EP progress.
// EP packages include dependencies and may be large.
// A download is only required again when a new version of the EP is released.
console.log('\nDownloading execution providers:');
if (eps.length > 0) {
  let currentEp = '';
  await manager.downloadAndRegisterEps((epName, percent) => {
    if (epName !== currentEp) {
      if (currentEp !== '') {
        process.stdout.write('\n');
      }
      currentEp = epName;
    }
    process.stdout.write(`\r ${epName.padEnd(maxNameLen)} ${percent.toFixed(1).padStart(5)}%`);
  });
  process.stdout.write('\n');
} else {
  console.log('No execution providers to download.');
}

// Get the model object. See the listing sketch after this sample
// for a way to discover the aliases available on your device.
const modelAlias = 'qwen2.5-0.5b';
const model = await manager.catalog.getModel(modelAlias);

// Download the model
console.log(`\nDownloading model ${modelAlias}...`);
await model.download((progress) => {
  process.stdout.write(`\rDownloading... ${progress.toFixed(2)}%`);
});
console.log('\n✓ Model downloaded');

// Load the model
console.log(`\nLoading model ${modelAlias}...`);
await model.load();
console.log('✓ Model loaded');

// Create chat client
console.log('\nCreating chat client...');
const chatClient = model.createChatClient();
console.log('✓ Chat client created');

// Example chat completion
console.log('\nTesting chat completion...');
const completion = await chatClient.completeChat([
  { role: 'user', content: 'Why is the sky blue?' }
]);
console.log('\nChat completion result:');
console.log(completion.choices[0]?.message?.content);

// Example streaming completion
console.log('\nTesting streaming completion...');
for await (const chunk of chatClient.completeStreamingChat(
  [{ role: 'user', content: 'Write a short poem about programming.' }]
)) {
  const content = chunk.choices?.[0]?.delta?.content;
  if (content) {
    process.stdout.write(content);
  }
}
console.log('\n');

// Unload the model
console.log('Unloading model...');
await model.unload();
console.log('✓ Model unloaded');
Run the code by using the following command:
node app.js
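The sample hard-codes the `qwen2.5-0.5b` alias. To see which aliases are available on your device, you can enumerate the catalog. The following is a hedged sketch: it assumes the catalog exposes a `getModels()` method and that models carry `alias` and `id` fields, which you should verify against the Foundry Local SDK reference.

```javascript
// Hypothetical sketch: list available model aliases.
// getModels(), alias, and id are assumed names; confirm them in the SDK reference.
const models = await manager.catalog.getModels();
for (const m of models) {
  console.log(`${m.alias} (${m.id})`);
}
```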
Prerequisites
- Python 3.11 or later installed.
Samples repository
The complete sample code for this article is available in the Foundry Local GitHub repository. To clone the repository and navigate to the sample, use:
git clone https://github.com/microsoft/Foundry-Local.git
cd Foundry-Local/samples/python/native-chat-completions
Install packages
If you're developing or shipping on Windows, select the Windows tab. The Windows package integrates with the Windows ML runtime and provides the same API surface area with a broader range of hardware acceleration.
pip install foundry-local-sdk-winml openai
Use native chat completions API
Copy and paste the following code into a Python file named app.py:
from foundry_local_sdk import Configuration, FoundryLocalManager


def main():
    # Initialize the Foundry Local SDK
    config = Configuration(app_name="foundry_local_samples")
    FoundryLocalManager.initialize(config)
    manager = FoundryLocalManager.instance

    # Download and register all execution providers.
    current_ep = ""

    def ep_progress(ep_name: str, percent: float):
        nonlocal current_ep
        if ep_name != current_ep:
            if current_ep:
                print()
            current_ep = ep_name
        print(f"\r {ep_name:<30} {percent:5.1f}%", end="", flush=True)

    manager.download_and_register_eps(progress_callback=ep_progress)
    if current_ep:
        print()

    # Select and load a model from the catalog
    model = manager.catalog.get_model("qwen2.5-0.5b")
    model.download(
        lambda progress: print(
            f"\rDownloading model: {progress:.2f}%",
            end="",
            flush=True,
        )
    )
    print()
    model.load()
    print("Model loaded and ready.")

    # Get a chat client
    client = model.get_chat_client()

    # Create the conversation messages
    messages = [
        {"role": "user", "content": "What is the golden ratio?"}
    ]

    # Stream the response token by token
    print("Assistant: ", end="", flush=True)
    for chunk in client.complete_streaming_chat(messages):
        content = chunk.choices[0].delta.content
        if content:
            print(content, end="", flush=True)
    print()

    # Clean up
    model.unload()
    print("Model unloaded.")


if __name__ == "__main__":
    main()
Run the code by using the following command:
python app.py
Troubleshooting
- `ModuleNotFoundError: No module named 'foundry_local_sdk'`: Install the SDK by running `pip install foundry-local-sdk`.
- Model not found: Run the optional model listing snippet (a sketch follows this list) to find an alias available on your device, then update the alias passed to `get_model`.
- Slow first run: Model downloads can take time the first time you run the app.
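The model listing snippet isn't shown in this article. A minimal sketch follows, assuming the catalog exposes a `get_models()` method and that models carry `alias` and `id` attributes; verify both against the Foundry Local SDK reference.

```python
# Hypothetical sketch: enumerate the catalog to find a valid alias.
# get_models(), alias, and id are assumed names; confirm them in the SDK reference.
for m in manager.catalog.get_models():
    print(f"{m.alias} ({m.id})")
```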
Prerequisites
- Rust and Cargo installed (Rust 1.70.0 or later).
Samples repository
The complete sample code for this article is available in the Foundry Local GitHub repository. To clone the repository and navigate to the sample, use:
git clone https://github.com/microsoft/Foundry-Local.git
cd Foundry-Local/samples/rust/native-chat-completions
Install packages
If you're developing or shipping on Windows, select the Windows tab. The Windows package integrates with the Windows ML runtime and provides the same API surface area with a broader range of hardware acceleration.
cargo add foundry-local-sdk --features winml
cargo add tokio --features full
cargo add tokio-stream anyhow
Use native chat completions API
Replace the contents of src/main.rs with the following code:
// Copyright (c) Microsoft Corporation. All rights reserved.
// Licensed under the MIT License.
use std::io::{self, Write};
use foundry_local_sdk::{
ChatCompletionRequestMessage, ChatCompletionRequestSystemMessage,
ChatCompletionRequestUserMessage, FoundryLocalConfig, FoundryLocalManager,
};
use tokio_stream::StreamExt;
const ALIAS: &str = "qwen2.5-0.5b";
#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
println!("Native Chat Completions");
println!("=======================\n");
// ── 1. Initialise the manager ────────────────────────────────────────
let manager = FoundryLocalManager::create(FoundryLocalConfig::new("foundry_local_samples"))?;
// Download and register all execution providers.
manager
.download_and_register_eps_with_progress(None, {
let mut current_ep = String::new();
move |ep_name: &str, percent: f64| {
if ep_name != current_ep {
if !current_ep.is_empty() {
println!();
}
current_ep = ep_name.to_string();
}
print!("\r {:<30} {:5.1}%", ep_name, percent);
io::stdout().flush().ok();
}
})
.await?;
println!();
// ── 2. Pick a modeland ensure it is downloaded ──────────────────────
let model = manager.catalog().get_model(ALIAS).await?;
println!("Model: {} (id: {})", model.alias(), model.id());
if !model.is_cached().await? {
println!("Downloading model...");
model
.download(Some(|progress: f64| {
print!("\r {progress:.1}%");
io::stdout().flush().ok();
}))
.await?;
println!();
}
println!("Loading model...");
model.load().await?;
println!("✓ Model loaded\n");
// ── 3. Create a chat client──────────────────────────────────────────
let client = model.create_chat_client()
.temperature(0.7)
.max_tokens(256);
// ── 4. Non-streamingchat completion ─────────────────────────────────
let messages: Vec<ChatCompletionRequestMessage> = vec![
ChatCompletionRequestSystemMessage::from("You are a helpful assistant.").into(),
ChatCompletionRequestUserMessage::from("What is Rust's ownership model?").into(),
];
println!("--- Non-streaming completion ---");
let response = client.complete_chat(&messages, None).await?;
if let Some(choice) = response.choices.first() {
if let Some(ref content) = choice.message.content {
println!("Assistant: {content}");
}
}
// ── 5. Streamingchat completion ─────────────────────────────────────
let stream_messages: Vec<ChatCompletionRequestMessage> = vec![
ChatCompletionRequestSystemMessage::from("You are a helpful assistant.").into(),
ChatCompletionRequestUserMessage::from("Explain the borrow checker in two sentences.")
.into(),
];
println!("\n--- Streaming completion ---");
print!("Assistant: ");
let mut stream = client
.complete_streaming_chat(&stream_messages, None)
.await?;
while let Some(chunk) = stream.next().await {
let chunk = chunk?;
if let Some(choice) = chunk.choices.first() {
if let Some(ref content) = choice.delta.content {
print!("{content}");
io::stdout().flush().ok();
}
}
}
println!("\n");
// ── 6. Unloadthe model──────────────────────────────────────────────
println!("Unloading model...");
model.unload().await?;
println!("Done.");
Ok(())
}
Run the code by using the following command:
cargo run
Troubleshooting
- Build errors: Ensure you have Rust 1.70.0 or later installed. Run `rustup update` to get the latest version.
- Model not found: Verify the model alias is correct. Use `manager.catalog().get_models().await?` to list available models (see the sketch after this list).
- Slow first run: Model downloads can take time the first time you run the app.
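For example, the `get_models` call mentioned above can be expanded into a small loop. The `alias()` and `id()` accessors match the ones used in the sample; the iterable return type of `get_models` is an assumption to check against the SDK reference.

```rust
// List the catalog so you can pick a valid alias.
// The return type of get_models() is assumed here; confirm it in the SDK reference.
for model in manager.catalog().get_models().await? {
    println!("{} ({})", model.alias(), model.id());
}
```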