How to use function calling with Azure OpenAI Service (Preview)
The latest versions of gpt-35-turbo and gpt-4 are fine-tuned to work with functions and are able to determine both when and how a function should be called. If one or more functions are included in your request, the model determines whether any of them should be called based on the context of the prompt. When the model determines that a function should be called, it responds with a JSON object that includes the arguments for the function.
The models formulate API calls and structure data outputs, all based on the functions you specify. It's important to note that while the model can generate these calls, it's up to you to execute them, ensuring you remain in control.
At a high level, you can break down working with functions into three steps:
- Call the chat completions API with your functions and the user's input
- Use the model's response to call your API or function
- Call the chat completions API again, including the response from your function, to get a final response
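As a compact illustration of those three steps, here is a skeletal sketch of the round trip (the complete, runnable examples follow later in this article); client, deployment_name, tools, and call_your_function are placeholders you would define yourself:
# 1) Call the chat completions API with your tools and the user's input
response = client.chat.completions.create(
    model=deployment_name,
    messages=messages,
    tools=tools,
)
assistant_message = response.choices[0].message
messages.append(assistant_message)  # keep the assistant's tool call request in the history

# 2) Use the model's response (tool_calls) to call your own API or function
for tool_call in assistant_message.tool_calls or []:
    result = call_your_function(tool_call)  # placeholder for your own code
    messages.append({
        "tool_call_id": tool_call.id,
        "role": "tool",
        "name": tool_call.function.name,
        "content": result,
    })

# 3) Call the chat completions API again, including the function results, to get the final response
final_response = client.chat.completions.create(model=deployment_name, messages=messages)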
Important
The functions and function_call parameters have been deprecated with the release of the 2023-12-01-preview version of the API. The replacement for functions is the tools parameter. The replacement for function_call is the tool_choice parameter.
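For reference, here is a minimal before/after sketch of that parameter change, assuming an existing client and deployment; the function name and schema are illustrative:
# A minimal JSON schema for the function's parameters (illustrative)
time_parameters = {
    "type": "object",
    "properties": {"location": {"type": "string"}},
    "required": ["location"],
}

# Deprecated style (before API version 2023-12-01-preview): functions / function_call
response = client.chat.completions.create(
    model=deployment_name,
    messages=messages,
    functions=[{"name": "get_current_time", "description": "Get the current time in a given location", "parameters": time_parameters}],
    function_call="auto",
)

# Current style: wrap each function in a "tools" entry and steer it with tool_choice
response = client.chat.completions.create(
    model=deployment_name,
    messages=messages,
    tools=[{"type": "function", "function": {"name": "get_current_time", "description": "Get the current time in a given location", "parameters": time_parameters}}],
    tool_choice="auto",
)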
Parallel function calling is supported with the following models:
- gpt-35-turbo (1106)
- gpt-35-turbo (0125)
- gpt-4 (1106-Preview)
- gpt-4 (0125-Preview)
- gpt-4 (vision-preview)
- gpt-4 (2024-04-09)
- gpt-4o (2024-05-13)
- gpt-4o-mini (2024-07-18)
Support for parallel functions was first added in API version 2023-12-01-preview.
Basic function calling with tools is supported by:
- All models that support parallel function calling
- gpt-4 (0613)
- gpt-4-32k (0613)
- gpt-35-turbo-16k (0613)
- gpt-35-turbo (0613)
First, we'll demonstrate a simple toy function call that checks the time in three hardcoded locations, with a single tool/function defined. We've added print statements to help make the code execution easier to follow:
import os
import json
from openai import AzureOpenAI
from datetime import datetime
from zoneinfo import ZoneInfo
# Initialize the Azure OpenAI client
client = AzureOpenAI(
    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-05-01-preview"
)
# Define the deployment you want to use for your chat completions API calls
deployment_name = "<YOUR_DEPLOYMENT_NAME_HERE>"
# Simplified timezone data
TIMEZONE_DATA = {
"tokyo": "Asia/Tokyo",
"san francisco": "America/Los_Angeles",
"paris": "Europe/Paris"
}
def get_current_time(location):
"""Get the current time for a given location"""
print(f"get_current_time called with location: {location}")
location_lower = location.lower()
for key, timezone in TIMEZONE_DATA.items():
if key in location_lower:
print(f"Timezone found for {key}")
current_time = datetime.now(ZoneInfo(timezone)).strftime("%I:%M %p")
return json.dumps({
"location": location,
"current_time": current_time
})
print(f"No timezone data found for {location_lower}")
return json.dumps({"location": location, "current_time": "unknown"})
def run_conversation():
    # Initial user message
    messages = [{"role": "user", "content": "What's the current time in San Francisco"}] # Single function call
    #messages = [{"role": "user", "content": "What's the current time in San Francisco, Tokyo, and Paris?"}] # Parallel function call with a single tool/function defined

    # Define the function for the model
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_current_time",
                "description": "Get the current time in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city name, e.g. San Francisco",
                        },
                    },
                    "required": ["location"],
                },
            }
        }
    ]

    # First API call: Ask the model to use the function
    response = client.chat.completions.create(
        model=deployment_name,
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )

    # Process the model's response
    response_message = response.choices[0].message
    messages.append(response_message)

    print("Model's response:")
    print(response_message)

    # Handle function calls
    if response_message.tool_calls:
        for tool_call in response_message.tool_calls:
            if tool_call.function.name == "get_current_time":
                function_args = json.loads(tool_call.function.arguments)
                print(f"Function arguments: {function_args}")
                time_response = get_current_time(
                    location=function_args.get("location")
                )
                messages.append({
                    "tool_call_id": tool_call.id,
                    "role": "tool",
                    "name": "get_current_time",
                    "content": time_response,
                })
    else:
        print("No tool calls were made by the model.")

    # Second API call: Get the final response from the model
    final_response = client.chat.completions.create(
        model=deployment_name,
        messages=messages,
    )

    return final_response.choices[0].message.content
# Run the conversation and print the result
print(run_conversation())
Output:
Model's response:
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_pOsKdUlqvdyttYB67MOj434b', function=Function(arguments='{"location":"San Francisco"}', name='get_current_time'), type='function')])
Function arguments: {'location': 'San Francisco'}
get_current_time called with location: San Francisco
Timezone found for san francisco
The current time in San Francisco is 09:24 AM.
If we use a model deployment that supports parallel function calls, we can turn this into a parallel function calling example by changing the messages array to ask for the time in multiple locations instead of one.
To accomplish this, swap which of these two lines is commented out:
messages = [{"role": "user", "content": "What's the current time in San Francisco"}] # Single function call
#messages = [{"role": "user", "content": "What's the current time in San Francisco, Tokyo, and Paris?"}] # Parallel function call with a single tool/function defined
so that they look like this, and then run the code again:
#messages = [{"role": "user", "content": "What's the current time in San Francisco"}] # Single function call
messages = [{"role": "user", "content": "What's the current time in San Francisco, Tokyo, and Paris?"}] # Parallel function call with a single tool/function defined
This generates the following output:
Output:
Model's response:
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_IjcAVz9JOv5BXwUx1jd076C1', function=Function(arguments='{"location": "San Francisco"}', name='get_current_time'), type='function'), ChatCompletionMessageToolCall(id='call_XIPQYTCtKIaNCCPTdvwjkaSN', function=Function(arguments='{"location": "Tokyo"}', name='get_current_time'), type='function'), ChatCompletionMessageToolCall(id='call_OHIB5aJzO8HGqanmsdzfytvp', function=Function(arguments='{"location": "Paris"}', name='get_current_time'), type='function')])
Function arguments: {'location': 'San Francisco'}
get_current_time called with location: San Francisco
Timezone found for san francisco
Function arguments: {'location': 'Tokyo'}
get_current_time called with location: Tokyo
Timezone found for tokyo
Function arguments: {'location': 'Paris'}
get_current_time called with location: Paris
Timezone found for paris
As of now, the current times are:
- **San Francisco:** 11:15 AM
- **Tokyo:** 03:15 AM (next day)
- **Paris:** 08:15 PM
Parallel function calls allow you to perform multiple function calls together, enabling parallel execution and retrieval of results. This reduces the number of calls that need to be made to the API and can improve overall performance.
For example, in our simple time app we retrieved multiple times at once. This resulted in a chat completion message with three function calls in the tool_calls array, each with a unique id. If you wanted to respond to these function calls, you would add three new messages to the conversation, each containing the result of one function call, with a tool_call_id referencing the id from tool_calls.
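For example, the three tool messages appended in the loop above end up looking roughly like this (the id and time values here are illustrative, taken from the sample output):
[
    {"tool_call_id": "call_abc123", "role": "tool", "name": "get_current_time",
     "content": '{"location": "San Francisco", "current_time": "11:15 AM"}'},
    {"tool_call_id": "call_def456", "role": "tool", "name": "get_current_time",
     "content": '{"location": "Tokyo", "current_time": "03:15 AM"}'},
    {"tool_call_id": "call_ghi789", "role": "tool", "name": "get_current_time",
     "content": '{"location": "Paris", "current_time": "08:15 PM"}'},
]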
To force the model to call a specific function, set the tool_choice parameter with a specific function name. You can also force the model to generate a user-facing message by setting tool_choice: "none".
Note
The default behavior (tool_choice: "auto") is for the model to decide on its own whether to call a function and, if so, which function to call.
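For example, here is a sketch of the three tool_choice settings applied to the request from the example above:
# Let the model decide (default behavior)
response = client.chat.completions.create(
    model=deployment_name, messages=messages, tools=tools, tool_choice="auto"
)

# Force the model to call a specific function by name
response = client.chat.completions.create(
    model=deployment_name, messages=messages, tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_current_time"}},
)

# Prevent any function call and force a user-facing message instead
response = client.chat.completions.create(
    model=deployment_name, messages=messages, tools=tools, tool_choice="none"
)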
Now we'll demonstrate another toy function calling example, this time with two different tools/functions defined.
import os
import json
from openai import AzureOpenAI
from datetime import datetime, timedelta
from zoneinfo import ZoneInfo
# Initialize the Azure OpenAI client
client = AzureOpenAI(
    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2024-05-01-preview"
)
# Provide the model deployment name you want to use for this example
deployment_name = "YOUR_DEPLOYMENT_NAME_HERE"
# Simplified weather data
WEATHER_DATA = {
"tokyo": {"temperature": "10", "unit": "celsius"},
"san francisco": {"temperature": "72", "unit": "fahrenheit"},
"paris": {"temperature": "22", "unit": "celsius"}
}
# Simplified timezone data
TIMEZONE_DATA = {
"tokyo": "Asia/Tokyo",
"san francisco": "America/Los_Angeles",
"paris": "Europe/Paris"
}
def get_current_weather(location, unit=None):
"""Get the current weather for a given location"""
print(f"get_current_weather called with location: {location}, unit: {unit}")
for key in WEATHER_DATA:
if key in location_lower:
print(f"Weather data found for {key}")
weather = WEATHER_DATA[key]
return json.dumps({
"location": location,
"temperature": weather["temperature"],
"unit": unit if unit else weather["unit"]
})
print(f"No weather data found for {location_lower}")
return json.dumps({"location": location, "temperature": "unknown"})
def get_current_time(location):
"""Get the current time for a given location"""
print(f"get_current_time called with location: {location}")
location_lower = location.lower()
for key, timezone in TIMEZONE_DATA.items():
if key in location_lower:
print(f"Timezone found for {key}")
current_time = datetime.now(ZoneInfo(timezone)).strftime("%I:%M %p")
return json.dumps({
"location": location,
"current_time": current_time
})
print(f"No timezone data found for {location_lower}")
return json.dumps({"location": location, "current_time": "unknown"})
def run_conversation():
    # Initial user message
    messages = [{"role": "user", "content": "What's the weather and current time in San Francisco, Tokyo, and Paris?"}]

    # Define the functions for the model
    tools = [
        {
            "type": "function",
            "function": {
                "name": "get_current_weather",
                "description": "Get the current weather in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city name, e.g. San Francisco",
                        },
                        "unit": {"type": "string", "enum": ["celsius", "fahrenheit"]},
                    },
                    "required": ["location"],
                },
            }
        },
        {
            "type": "function",
            "function": {
                "name": "get_current_time",
                "description": "Get the current time in a given location",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "location": {
                            "type": "string",
                            "description": "The city name, e.g. San Francisco",
                        },
                    },
                    "required": ["location"],
                },
            }
        }
    ]

    # First API call: Ask the model to use the functions
    response = client.chat.completions.create(
        model=deployment_name,
        messages=messages,
        tools=tools,
        tool_choice="auto",
    )

    # Process the model's response
    response_message = response.choices[0].message
    messages.append(response_message)

    print("Model's response:")
    print(response_message)

    # Handle function calls
    if response_message.tool_calls:
        for tool_call in response_message.tool_calls:
            function_name = tool_call.function.name
            function_args = json.loads(tool_call.function.arguments)
            print(f"Function call: {function_name}")
            print(f"Function arguments: {function_args}")

            if function_name == "get_current_weather":
                function_response = get_current_weather(
                    location=function_args.get("location"),
                    unit=function_args.get("unit")
                )
            elif function_name == "get_current_time":
                function_response = get_current_time(
                    location=function_args.get("location")
                )
            else:
                function_response = json.dumps({"error": "Unknown function"})

            messages.append({
                "tool_call_id": tool_call.id,
                "role": "tool",
                "name": function_name,
                "content": function_response,
            })
    else:
        print("No tool calls were made by the model.")

    # Second API call: Get the final response from the model
    final_response = client.chat.completions.create(
        model=deployment_name,
        messages=messages,
    )

    return final_response.choices[0].message.content
# Run the conversation and print the result
print(run_conversation())
Output:
Model's response:
ChatCompletionMessage(content=None, role='assistant', function_call=None, tool_calls=[ChatCompletionMessageToolCall(id='call_djHAeQP0DFEVZ2qptrO0CYC4', function=Function(arguments='{"location": "San Francisco", "unit": "celsius"}', name='get_current_weather'), type='function'), ChatCompletionMessageToolCall(id='call_q2f1HPKKUUj81yUa3ITLOZFs', function=Function(arguments='{"location": "Tokyo", "unit": "celsius"}', name='get_current_weather'), type='function'), ChatCompletionMessageToolCall(id='call_6TEY5Imtr17PaB4UhWDaPxiX', function=Function(arguments='{"location": "Paris", "unit": "celsius"}', name='get_current_weather'), type='function'), ChatCompletionMessageToolCall(id='call_vpzJ3jElpKZXA9abdbVMoauu', function=Function(arguments='{"location": "San Francisco"}', name='get_current_time'), type='function'), ChatCompletionMessageToolCall(id='call_1ag0MCIsEjlwbpAqIXJbZcQj', function=Function(arguments='{"location": "Tokyo"}', name='get_current_time'), type='function'), ChatCompletionMessageToolCall(id='call_ukOu3kfYOZR8lpxGRpdkhhdD', function=Function(arguments='{"location": "Paris"}', name='get_current_time'), type='function')])
Function call: get_current_weather
Function arguments: {'location': 'San Francisco', 'unit': 'celsius'}
get_current_weather called with location: San Francisco, unit: celsius
Weather data found for san francisco
Function call: get_current_weather
Function arguments: {'location': 'Tokyo', 'unit': 'celsius'}
get_current_weather called with location: Tokyo, unit: celsius
Weather data found for tokyo
Function call: get_current_weather
Function arguments: {'location': 'Paris', 'unit': 'celsius'}
get_current_weather called with location: Paris, unit: celsius
Weather data found for paris
Function call: get_current_time
Function arguments: {'location': 'San Francisco'}
get_current_time called with location: San Francisco
Timezone found for san francisco
Function call: get_current_time
Function arguments: {'location': 'Tokyo'}
get_current_time called with location: Tokyo
Timezone found for tokyo
Function call: get_current_time
Function arguments: {'location': 'Paris'}
get_current_time called with location: Paris
Timezone found for paris
Here's the current information for the three cities:
### San Francisco
- **Time:** 09:13 AM
- **Weather:** 72°C (quite warm!)
### Tokyo
- **Time:** 01:13 AM (next day)
- **Weather:** 10°C
### Paris
- **Time:** 06:13 PM
- **Weather:** 22°C
Is there anything else you need?
Important
The JSON response might not always be valid, so you need to add additional logic to your code to be able to handle errors. For some use cases you may find you need to use fine-tuning to improve function calling performance.
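As one way to guard against malformed arguments, you could wrap the argument parsing from the tool_calls loop in the examples above in a try/except block; a minimal sketch:
try:
    function_args = json.loads(tool_call.function.arguments)
except json.JSONDecodeError:
    # The arguments weren't valid JSON; report the problem back to the model instead of crashing
    function_args = None
    messages.append({
        "tool_call_id": tool_call.id,
        "role": "tool",
        "name": tool_call.function.name,
        "content": json.dumps({"error": "Invalid JSON in function arguments"}),
    })

if function_args is not None:
    # ... dispatch to your function as shown in the earlier examples ...
    pass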
When you define a function as part of your request, the details are injected into the system message using specific syntax that the model has been trained on. This means that functions consume tokens in your prompt and that you can apply prompt engineering techniques to optimize the performance of your function calls. The model uses the full context of the prompt to determine whether a function should be called, including the function definitions, the system message, and the user messages.
If the model is calling a function when you don't expect it, or in a way you don't expect, there are a few things you can try to improve the quality.
Make sure to provide a meaningful function description, and provide descriptions for any parameters that might not be obvious to the model. For example, in the description for the location parameter, you could include extra details and examples on the format of the location:
"location": {
"type": "string",
"description": "The location of the hotel. The location should include the city and the state's abbreviation (i.e. Seattle, WA or Miami, FL)"
},
The system message can also be used to provide more context to the model. For example, if you have a function called search_hotels, you could include a system message like the following to instruct the model to call the function when a user asks for help with finding a hotel:
{"role": "system", "content": "You're an AI assistant designed to help users search for hotels. When a user asks for help finding a hotel, you should call the search_hotels function."}
In some cases, you want the model to ask clarifying questions rather than making assumptions about what values to use with functions. For example, with search_hotels you would want the model to ask a clarifying question if the user's request didn't include details on location. To instruct the model to ask clarifying questions, you could include content like the next example in your system message:
{"role": "system", "content": "Don't make assumptions about what values to use with functions. Ask for clarification if a user request is ambiguous."}
Another area where prompt engineering can be valuable is in reducing errors in function calls. The models are trained to generate function calls matching the schema you define, but they can still produce a function call that doesn't match your schema, or try to call a function that you didn't include.
If you find the model is generating function calls that weren't provided, try including a sentence in the system message that says "Only use the functions you have been provided with."
Like any AI system, using function calling to integrate language models with other tools and systems presents potential risks. It's important to understand the risks that function calling can present and take measures to ensure you use the capabilities responsibly.
Here are a few tips to help you use functions safely and securely:
- Validate function calls: Always verify the function calls generated by the model. This includes checking the parameters and the function being called, and ensuring that the call aligns with the intended action (a minimal validation sketch follows this list).
- Use trusted data and tools: Only use data from trusted and verified sources. Untrusted data in a function's output could be used to instruct the model to write function calls in a way other than you intended.
- Follow the principle of least privilege: Grant only the minimum access necessary for the function to perform its job. This reduces the potential impact if a function is misused or exploited. For example, if you use function calls to query a database, you should only give your application read-only access to the database. You also shouldn't depend solely on excluding capabilities from the function definition as a security control.
- Consider real-world impact: Be aware of the real-world impact of function calls that you plan to execute, especially those that trigger actions such as executing code, updating databases, or sending notifications.
- Implement user confirmation steps: Particularly for functions that take actions, we recommend including a step where the user confirms the action before it's executed.
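For example, here is a minimal sketch of that kind of validation, assuming an allowlist built from the functions defined earlier in this article; how you confirm with the user and dispatch the call is up to your application:
import json

ALLOWED_FUNCTIONS = {"get_current_time", "get_current_weather"}

def validate_tool_call(tool_call):
    """Check that a model-generated tool call targets a known function with usable arguments."""
    if tool_call.function.name not in ALLOWED_FUNCTIONS:
        return False, "function is not on the allowlist"
    try:
        args = json.loads(tool_call.function.arguments)
    except json.JSONDecodeError:
        return False, "arguments are not valid JSON"
    if not isinstance(args.get("location"), str):
        return False, "missing or invalid 'location' argument"
    return True, None

# Example usage inside the tool_calls loop from the earlier examples:
# ok, reason = validate_tool_call(tool_call)
# if not ok:
#     function_response = json.dumps({"error": f"Rejected tool call: {reason}"})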
To learn more about our recommendations on how to use Azure OpenAI models responsibly, see the Overview of Responsible AI practices for Azure OpenAI models.
- Learn more about Azure OpenAI.
- For more examples of working with functions, see the Azure OpenAI samples GitHub repository.
- Get started with the GPT-35-Turbo model with the GPT-35-Turbo quickstart.