Stored completions allow you to capture the conversation history from chat completions sessions to use as datasets for evaluations and fine-tuning.
Stored completions support
API support
Support was first added in 2024-10-01-preview; use 2025-02-01-preview or later for access to the latest features.
Deployment type
Stored completions are supported for all Azure OpenAI deployment types: standard, global, data zone, and provisioned.
Model and region availability
As long as you're using the chat completions API for inference, you can take advantage of stored completions. Stored completions are supported for all Azure OpenAI models and in all supported regions (including global-only regions).
To enable stored completions for your Azure OpenAI deployment, set the store parameter to True. Use the metadata parameter to enrich your stored completion dataset with additional information.
import os
from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
token_provider = get_bearer_token_provider(
DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
azure_ad_token_provider=token_provider,
api_version="2025-02-01-preview"
)
completion = client.chat.completions.create(
model="gpt-4o", # replace with model deployment name
store=True,
metadata={
"user": "admin",
"category": "docs-test",
},
messages=[
{"role": "system", "content": "Provide a clear and concise summary of the technical content, highlighting key concepts and their relationships. Focus on the main ideas and practical implications."},
{"role": "user", "content": "Ensemble methods combine multiple machine learning models to create a more robust and accurate predictor. Common techniques include bagging (training models on random subsets of data), boosting (sequentially training models to correct previous errors), and stacking (using a meta-model to combine base model predictions). Random Forests, a popular bagging method, create multiple decision trees using random feature subsets. Gradient Boosting builds trees sequentially, with each tree focusing on correcting the errors of previous trees. These methods often achieve better performance than single models by reducing overfitting and variance while capturing different aspects of the data."}
]
)
print(completion.choices[0].message)
Important
Use API keys with caution. Don't include the API key directly in your code, and never post it publicly. If you use an API key, store it securely in Azure Key Vault. For more information about using API keys securely in your apps, see API keys with Azure Key Vault.
For more information about AI services security, see Authenticate requests to Azure AI services.
import os
from openai import AzureOpenAI
client = AzureOpenAI(
api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2025-02-01-preview",
azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
)
completion = client.chat.completions.create(
model="gpt-4o", # replace with model deployment name
store=True,
metadata={
"user": "admin",
"category": "docs-test",
},
messages=[
{"role": "system", "content": "Provide a clear and concise summary of the technical content, highlighting key concepts and their relationships. Focus on the main ideas and practical implications."},
{"role": "user", "content": "Ensemble methods combine multiple machine learning models to create a more robust and accurate predictor. Common techniques include bagging (training models on random subsets of data), boosting (sequentially training models to correct previous errors), and stacking (using a meta-model to combine base model predictions). Random Forests, a popular bagging method, create multiple decision trees using random feature subsets. Gradient Boosting builds trees sequentially, with each tree focusing on correcting the errors of previous trees. These methods often achieve better performance than single models by reducing overfitting and variance while capturing different aspects of the data."}
]
)
print(completion.choices[0].message)
Microsoft Entra ID
curl $AZURE_OPENAI_ENDPOINT/openai/deployments/gpt-4o/chat/completions?api-version=2025-02-01-preview \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AZURE_OPENAI_AUTH_TOKEN" \
-d '{
"model": "gpt-4o",
"store": true,
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
API key
curl $AZURE_OPENAI_ENDPOINT/openai/deployments/gpt-4o/chat/completions?api-version=2025-02-01-preview \
-H "Content-Type: application/json" \
-H "api-key: $AZURE_OPENAI_API_KEY" \
-d '{
"model": "gpt-4o",
"store": true,
"messages": [
{
"role": "system",
"content": "You are a helpful assistant."
},
{
"role": "user",
"content": "Hello!"
}
]
}'
{
"id": "chatcmpl-B4eQ716S5wGUyFpGgX2MXnJEC5AW5",
"choices": [
{
"finish_reason": "stop",
"index": 0,
"logprobs": null,
"message": {
"content": "Ensemble methods enhance machine learning performance by combining multiple models to create a more robust and accurate predictor. The key techniques include:\n\n1. **Bagging (Bootstrap Aggregating)**: Involves training multiple models on random subsets of the data to reduce variance and overfitting. A popular method within bagging is Random Forests, which build numerous decision trees using random subsets of features and data samples.\n\n2. **Boosting**: Focuses on sequentially training models, where each new model attempts to correct the errors made by previous ones. Gradient Boosting is a common boosting technique that builds trees sequentially, concentrating on the mistakes of earlier trees to improve accuracy.\n\n3. **Stacking**: Uses a meta-model to combine predictions from various base models, leveraging their strengths to enhance overall predictions.\n\nThese ensemble methods generally outperform individual models because they effectively handle overfitting, reduce variance, and capture diverse aspects of the data. In practical applications, they are valued for their ability to improve model accuracy and stability.",
"refusal": null,
"role": "assistant",
"audio": null,
"function_call": null,
"tool_calls": null
},
"content_filter_results": {
"hate": {
"filtered": false,
"severity": "safe"
},
"protected_material_code": {
"filtered": false,
"detected": false
},
"protected_material_text": {
"filtered": false,
"detected": false
},
"self_harm": {
"filtered": false,
"severity": "safe"
},
"sexual": {
"filtered": false,
"severity": "safe"
},
"violence": {
"filtered": false,
"severity": "safe"
}
}
}
],
"created": 1740448387,
"model": "gpt-4o-2024-08-06",
"object": "chat.completion",
"service_tier": null,
"system_fingerprint": "fp_b705f0c291",
"usage": {
"completion_tokens": 205,
"prompt_tokens": 157,
"total_tokens": 362,
"completion_tokens_details": {
"accepted_prediction_tokens": 0,
"audio_tokens": 0,
"reasoning_tokens": 0,
"rejected_prediction_tokens": 0
},
"prompt_tokens_details": {
"audio_tokens": 0,
"cached_tokens": 0
}
},
"prompt_filter_results": [
{
"prompt_index": 0,
"content_filter_results": {
"hate": {
"filtered": false,
"severity": "safe"
},
"jailbreak": {
"filtered": false,
"detected": false
},
"self_harm": {
"filtered": false,
"severity": "safe"
},
"sexual": {
"filtered": false,
"severity": "safe"
},
"violence": {
"filtered": false,
"severity": "safe"
}
}
}
]
}
Once stored completions are enabled for an Azure OpenAI deployment, they begin to show up in the Azure AI Foundry portal in the Stored Completions pane.
Distillation
Distillation allows you to turn your stored completions into a fine-tuning dataset. A common use case is to use stored completions with a larger, more powerful model for a particular task, and then use those stored completions to train a smaller model on high-quality examples of model interactions.
Distillation requires a minimum of 10 stored completions, though it's recommended to provide hundreds to thousands of stored completions for the best results.
From the Stored Completions pane in the Azure AI Foundry portal, use the filter options to select the completions you want to train your model with.
To begin distillation, select Distill.
Pick which model you would like to fine-tune with your stored completion dataset.
Confirm which version of the model you want to fine-tune:
A .jsonl file with a randomly generated name is created as a training dataset from your stored completions. Select the file > Next.
Note
Stored completion distillation training files can't be accessed directly and can't be exported externally or downloaded.
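Although the generated file can't be downloaded, each stored completion is written out as one training example. As a rough sketch (the exact serialization is an assumption based on the common chat fine-tuning .jsonl layout, with illustrative, truncated content), a single line of the training file would look like:
{"messages": [{"role": "system", "content": "Provide a clear and concise summary of the technical content..."}, {"role": "user", "content": "Ensemble methods combine multiple machine learning models..."}, {"role": "assistant", "content": "Ensemble methods enhance machine learning performance by combining multiple models..."}]}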
The rest of the steps correspond to the typical Azure OpenAI fine-tuning steps. For more information, see our fine-tuning getting started guide.
Evaluation
Evaluation of large language models is a critical step in measuring their performance across various tasks and dimensions. This is especially important for fine-tuned models, where assessing the performance gains (or losses) from training is crucial. Thorough evaluations can help your understanding of how different versions of the model might impact your application or scenario.
Stored completions can be used as a dataset for running evaluations.
From the Stored Completions pane in the Azure AI Foundry portal, use the filter options to select the completions you want to be part of your evaluation dataset.
To configure the evaluation, select Evaluate. This launches the Evaluations pane with a prepopulated .jsonl file, given a randomly generated name, that is created as an evaluation dataset from your stored completions.
Note
Stored completion evaluation data files can't be accessed directly and can't be exported externally or downloaded.
To learn more about evaluation, see getting started with evaluations.
Stored completions API
To access the stored completions API commands, you might need to upgrade your version of the OpenAI library.
pip install --upgrade openai
List stored completions
Additional parameters:
- metadata: Filter by the key/value pair in the stored completions
- after: Identifier for the last stored completion message from the previous pagination request.
- limit: Number of stored completion messages to retrieve.
- order: Order of the results by index (ascending or descending).
from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
token_provider = get_bearer_token_provider(
DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
azure_endpoint = "https://YOUR-RESOURCE-NAME.openai.azure.com",
azure_ad_token_provider=token_provider,
api_version="2025-02-01-preview"
)
response = client.chat.completions.list()
print(response.model_dump_json(indent=2))
import os
from openai import AzureOpenAI
client = AzureOpenAI(
azure_endpoint = "https://YOUR-RESOURCE-NAME.openai.azure.com",
api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2025-02-01-preview"
)
response = client.chat.completions.list()
print(response.model_dump_json(indent=2))
Microsoft Entra ID
curl https://YOUR-RESOURCE-NAME.openai.azure.com/openai/chat/completions?api-version=2025-02-01-preview \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AZURE_OPENAI_AUTH_TOKEN" \
API key
curl https://YOUR-RESOURCE-NAME.openai.azure.com/openai/chat/completions?api-version=2025-02-01-preview \
-H "Content-Type: application/json" \
-H "api-key: $AZURE_OPENAI_API_KEY" \
{
"data": [
{
"id": "chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u",
"choices": [
{
"finish_reason": null,
"index": 0,
"logprobs": null,
"message": {
"content": "Ensemble methods enhance machine learning performance by combining multiple models to create a more robust and accurate predictor. The key techniques include:\n\n1. **Bagging (Bootstrap Aggregating):** This involves training models on random subsets of the data to reduce variance and prevent overfitting. Random Forests, a popular bagging method, build multiple decision trees using random feature subsets, leading to robust predictions.\n\n2. **Boosting:** This sequential approach trains models to correct the errors of their predecessors, thereby focusing on difficult-to-predict data points. Gradient Boosting is a common implementation that sequentially builds decision trees, each improving upon the prediction errors of the previous ones.\n\n3. **Stacking:** This technique uses a meta-model to combine the predictions of multiple base models, leveraging their diverse strengths to enhance overall prediction accuracy.\n\nThe practical implications of ensemble methods include achieving superior model performance compared to single models by capturing various data patterns and reducing overfitting and variance. These methods are widely used in applications where high accuracy and model reliability are critical.",
"refusal": null,
"role": "assistant",
"audio": null,
"function_call": null,
"tool_calls": null
}
}
],
"created": 1740447656,
"model": "gpt-4o-2024-08-06",
"object": null,
"service_tier": null,
"system_fingerprint": "fp_b705f0c291",
"usage": {
"completion_tokens": 208,
"prompt_tokens": 157,
"total_tokens": 365,
"completion_tokens_details": null,
"prompt_tokens_details": null
},
"request_id": "0000aaaa-11bb-cccc-dd22-eeeeee333333",
"seed": -430976584126747957,
"top_p": 1,
"temperature": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"metadata": {
"user": "admin",
"category": "docs-test"
}
}
],
"has_more": false,
"object": "list",
"total": 1,
"first_id": "chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u",
"last_id": "chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u"
}
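For example, to retrieve only the stored completions that match a particular metadata tag, the filter parameters described above can be passed as keyword arguments on the same client used in the previous examples. This is a minimal sketch and assumes your installed openai library version exposes metadata, limit, and order on chat.completions.list():
# Sketch: list the 20 most recent stored completions tagged with category=docs-test
response = client.chat.completions.list(
    metadata={"category": "docs-test"},  # filter on a stored metadata key/value pair
    limit=20,                            # number of stored completions to retrieve
    order="desc"                         # most recent first
)

for stored_completion in response.data:
    print(stored_completion.id, stored_completion.metadata)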
Get stored completion data
Retrieve a stored completion by ID.
from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
token_provider = get_bearer_token_provider(
DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
azure_endpoint = "https://YOUR-RESOURCE-NAME.openai.azure.com/",
azure_ad_token_provider=token_provider,
api_version="2025-02-01-preview"
)
response = client.chat.completions.retrieve("chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u")
print(response.model_dump_json(indent=2))
import os
from openai import AzureOpenAI
client = AzureOpenAI(
azure_endpoint = "https://YOUR-RESOURCE-NAME.openai.azure.com",
api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2025-02-01-preview"
)
response = client.chat.completions.retrieve("chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u")
print(response.model_dump_json(indent=2))
Microsoft Entra ID
curl https://YOUR-RESOURCE-NAME.openai.azure.com/openai/chat/completions/chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u?api-version=2025-02-01-preview \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AZURE_OPENAI_AUTH_TOKEN" \
API key
curl https://YOUR-RESOURCE-NAME.openai.azure.com/openai/chat/completions/chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u?api-version=2025-02-01-preview \
-H "Content-Type: application/json" \
-H "api-key: $AZURE_OPENAI_API_KEY" \
{
"id": "chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u",
"choices": [
{
"finish_reason": null,
"index": 0,
"logprobs": null,
"message": {
"content": "Ensemble methods enhance machine learning performance by combining multiple models to create a more robust and accurate predictor. The key techniques include:\n\n1. **Bagging (Bootstrap Aggregating):** This involves training models on random subsets of the data to reduce variance and prevent overfitting. Random Forests, a popular bagging method, build multiple decision trees using random feature subsets, leading to robust predictions.\n\n2. **Boosting:** This sequential approach trains models to correct the errors of their predecessors, thereby focusing on difficult-to-predict data points. Gradient Boosting is a common implementation that sequentially builds decision trees, each improving upon the prediction errors of the previous ones.\n\n3. **Stacking:** This technique uses a meta-model to combine the predictions of multiple base models, leveraging their diverse strengths to enhance overall prediction accuracy.\n\nThe practical implications of ensemble methods include achieving superior model performance compared to single models by capturing various data patterns and reducing overfitting and variance. These methods are widely used in applications where high accuracy and model reliability are critical.",
"refusal": null,
"role": "assistant",
"audio": null,
"function_call": null,
"tool_calls": null
}
}
],
"created": 1740447656,
"model": "gpt-4o-2024-08-06",
"object": "chat.completion",
"service_tier": null,
"system_fingerprint": "fp_b705f0c291",
"usage": {
"completion_tokens": 208,
"prompt_tokens": 157,
"total_tokens": 365,
"completion_tokens_details": null,
"prompt_tokens_details": null
},
"request_id": "0000aaaa-11bb-cccc-dd22-eeeeee333333",
"seed": -430976584126747957,
"top_p": 1,
"temperature": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"metadata": {
"user": "admin",
"category": "docs-test"
}
}
Get stored chat completion messages
Additional parameters:
- after: Identifier for the last stored completion message from the previous pagination request.
- limit: Number of stored completion messages to retrieve.
- order: Order of the results by index (ascending or descending).
from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
token_provider = get_bearer_token_provider(
DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
azure_endpoint = "https://YOUR-RESOURCE-NAME.openai.azure.com/",
azure_ad_token_provider=token_provider,
api_version="2025-02-01-preview"
)
response = client.chat.completions.messages.list("chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u", limit=2)
print(response.model_dump_json(indent=2))
import os
from openai import AzureOpenAI
client = AzureOpenAI(
azure_endpoint = "https://YOUR-RESOURCE-NAME.openai.azure.com",
api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2025-02-01-preview"
)
response = client.chat.completions.messages.list("chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u", limit=2)
print(response.model_dump_json(indent=2))
Microsoft Entra ID
curl https://YOUR-RESOURCE-NAME.openai.azure.com/openai/chat/completions/chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u/messages?api-version=2025-02-01-preview \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AZURE_OPENAI_AUTH_TOKEN" \
API key
curl https://YOUR-RESOURCE-NAME.openai.azure.com/openai/chat/completions/chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u/messages?api-version=2025-02-01-preview \
-H "Content-Type: application/json" \
-H "api-key: $AZURE_OPENAI_API_KEY" \
{
"data": [
{
"content": "Provide a clear and concise summary of the technical content, highlighting key concepts and their relationships. Focus on the main ideas and practical implications.",
"refusal": null,
"role": "system",
"audio": null,
"function_call": null,
"tool_calls": null,
"id": "chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u-0"
},
{
"content": "Ensemble methods combine multiple machine learning models to create a more robust and accurate predictor. Common techniques include bagging (training models on random subsets of data), boosting (sequentially training models to correct previous errors), and stacking (using a meta-model to combine base model predictions). Random Forests, a popular bagging method, create multiple decision trees using random feature subsets. Gradient Boosting builds trees sequentially, with each tree focusing on correcting the errors of previous trees. These methods often achieve better performance than single models by reducing overfitting and variance while capturing different aspects of the data.",
"refusal": null,
"role": "user",
"audio": null,
"function_call": null,
"tool_calls": null,
"id": "chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u-1"
}
],
"has_more": false,
"object": "list",
"total": 2,
"first_id": "chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u-0",
"last_id": "chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u-1"
}
Update stored chat completion
Add metadata key:value pairs to an existing stored completion.
from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
token_provider = get_bearer_token_provider(
DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
azure_endpoint = "https://YOUR-RESOURCE-NAME.openai.azure.com/",
azure_ad_token_provider=token_provider,
api_version="2025-02-01-preview"
)
response = client.chat.completions.update(
"chatcmpl-C2dE3fH4iJ5kL6mN7oP8qR9sT0uV1w",
metadata={"fizz": "buzz"}
)
print(response.model_dump_json(indent=2))
import os
from openai import AzureOpenAI
client = AzureOpenAI(
azure_endpoint = "https://YOUR-RESOURCE-NAME.openai.azure.com",
api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2025-02-01-preview"
)
response = client.chat.completions.update(
"chatcmpl-C2dE3fH4iJ5kL6mN7oP8qR9sT0uV1w",
metadata={"fizz": "buzz"}
)
print(response.model_dump_json(indent=2))
Microsoft Entra ID
curl -X POST https://YOUR-RESOURCE-NAME.openai.azure.com/openai/chat/completions/chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u?api-version=2025-02-01-preview \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AZURE_OPENAI_AUTH_TOKEN" \
-d '{
"metadata": {
"fizz": "buzz"
}
}'
API key
curl -X POST https://YOUR-RESOURCE-NAME.openai.azure.com/openai/chat/completions/chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u?api-version=2025-02-01-preview \
-H "Content-Type: application/json" \
-H "api-key: $AZURE_OPENAI_API_KEY" \
-d '{
"metadata": {
"fizz": "buzz"
}
}'
"id": "chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u",
"choices": [
{
"finish_reason": null,
"index": 0,
"logprobs": null,
"message": {
"content": "Ensemble methods enhance machine learning performance by combining multiple models to create a more robust and accurate predictor. The key techniques include:\n\n1. **Bagging (Bootstrap Aggregating):** This involves training models on random subsets of the data to reduce variance and prevent overfitting. Random Forests, a popular bagging method, build multiple decision trees using random feature subsets, leading to robust predictions.\n\n2. **Boosting:** This sequential approach trains models to correct the errors of their predecessors, thereby focusing on difficult-to-predict data points. Gradient Boosting is a common implementation that sequentially builds decision trees, each improving upon the prediction errors of the previous ones.\n\n3. **Stacking:** This technique uses a meta-model to combine the predictions of multiple base models, leveraging their diverse strengths to enhance overall prediction accuracy.\n\nThe practical implications of ensemble methods include achieving superior model performance compared to single models by capturing various data patterns and reducing overfitting and variance. These methods are widely used in applications where high accuracy and model reliability are critical.",
"refusal": null,
"role": "assistant",
"audio": null,
"function_call": null,
"tool_calls": null
}
}
],
"created": 1740447656,
"model": "gpt-4o-2024-08-06",
"object": "chat.completion",
"service_tier": null,
"system_fingerprint": "fp_b705f0c291",
"usage": {
"completion_tokens": 208,
"prompt_tokens": 157,
"total_tokens": 365,
"completion_tokens_details": null,
"prompt_tokens_details": null
},
"request_id": "0000aaaa-11bb-cccc-dd22-eeeeee333333",
"seed": -430976584126747957,
"top_p": 1,
"temperature": 1,
"presence_penalty": 0,
"frequency_penalty": 0,
"metadata": {
"user": "admin",
"category": "docs-test"
"fizz": "buzz"
}
}
Delete stored chat completion
Delete a stored completion by completion ID.
Microsoft Entra ID
from openai import AzureOpenAI
from azure.identity import DefaultAzureCredential, get_bearer_token_provider
token_provider = get_bearer_token_provider(
DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default"
)
client = AzureOpenAI(
azure_endpoint = "https://YOUR-RESOURCE-NAME.openai.azure.com/",
azure_ad_token_provider=token_provider,
api_version="2025-02-01-preview"
)
response = client.chat.completions.delete("chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u")
print(response.model_dump_json(indent=2))
import os
from openai import AzureOpenAI
client = AzureOpenAI(
azure_endpoint = "https://YOUR-RESOURCE-NAME.openai.azure.com",
api_key=os.getenv("AZURE_OPENAI_API_KEY"),
api_version="2025-02-01-preview"
)
response = client.chat.completions.delete("chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u")
print(response.model_dump_json(indent=2))
curl -X DELETE -D - https://YOUR-RESOURCE-NAME.openai.azure.com/openai/chat/completions/chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u?api-version=2025-02-01-preview \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $AZURE_OPENAI_AUTH_TOKEN"
API key
curl -X DELETE -D - https://YOUR-RESOURCE-NAME.openai.azure.com/openai/chat/completions/chatcmpl-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u?api-version=2025-02-01-preview \
-H "Content-Type: application/json" \
-H "api-key: $AZURE_OPENAI_API_KEY"
"id"• "chatcmp1-A1bC2dE3fH4iJ5kL6mN7oP8qR9sT0u",
"deleted": true,
"object": "chat. completion. deleted"
Troubleshooting
Do I need special permissions to use stored completions?
Stored completions access is controlled via two DataActions:
Microsoft.CognitiveServices/accounts/OpenAI/stored-completions/read
Microsoft.CognitiveServices/accounts/OpenAI/stored-completions/action
By default, the Cognitive Services OpenAI Contributor role has access to both of these permissions.
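If you're building a custom role instead of relying on the built-in role, it would need to include both DataActions. A minimal sketch of such a custom role definition, with a placeholder name and subscription scope you would replace:
{
  "Name": "Azure OpenAI Stored Completions Access",
  "IsCustom": true,
  "Description": "Read and write access to Azure OpenAI stored completions.",
  "Actions": [],
  "DataActions": [
    "Microsoft.CognitiveServices/accounts/OpenAI/stored-completions/read",
    "Microsoft.CognitiveServices/accounts/OpenAI/stored-completions/action"
  ],
  "NotDataActions": [],
  "AssignableScopes": [
    "/subscriptions/<your-subscription-id>"
  ]
}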
How do I delete stored data?
Data can be deleted by deleting the associated Azure OpenAI resource. If you only want to delete stored completions data, you must open a case with customer support.
How much stored completions data can I store?
You can store a maximum of 10 GB of data.
Can I prevent stored completions from ever being enabled on a subscription?
You'll need to open a case with customer support to disable stored completions at the subscription level.
TypeError: Completions.create() got an unexpected keyword argument 'store'
This error occurs when you're running an older version of the OpenAI client library that predates the stored completions feature. Run pip install openai --upgrade.
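If you're not sure which version of the library is currently installed, you can check it before upgrading; any release that accepts the store parameter will be a recent 1.x version:
import openai
print(openai.__version__)  # upgrade if this is an older release that predates stored completions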