
Migrate to OpenAI Python API library 1.x

OpenAI has released a new version of the OpenAI Python API library. This guide is supplemental to the OpenAI migration guide and helps bring you up to speed on the changes specific to Azure OpenAI.

Updates

  • This is a new version of the OpenAI Python API library.
  • Starting on November 6, 2023, pip install openai and pip install openai --upgrade will install version 1.x of the OpenAI Python library.
  • Upgrading from version 0.28.1 to version 1.x is a breaking change, and you'll need to test and update your code.
  • Automatic retry with backoff if there's an error
  • Proper types (for mypy/pyright/editors)
  • You can now instantiate a client, instead of using a global default.
  • Switch to explicit client instantiation
  • Name changes

Known issues

Test before you migrate

Important

Automatic migration of your code with openai migrate is not supported with Azure OpenAI.

As this is a new version of the library with breaking changes, you should test your code extensively against the new version before migrating any production applications to depend on version 1.x. You should also review your code and internal processes to make sure that you're following best practices and pinning your production code only to versions that you have fully tested.

To make the migration process easier, we're updating existing code examples in our Python documentation to a tabbed experience:

pip install openai --upgrade

This provides context for what has changed and allows you to test the new library in parallel while continuing to provide support for version 0.28.1. If you upgrade to 1.x and realize you need to temporarily revert back to the previous version, you can always pip uninstall openai and then reinstall targeted to 0.28.1 with pip install openai==0.28.1.

Chat completions

You need to set the model variable to the deployment name you chose when you deployed the GPT-3.5-Turbo or GPT-4 models. Entering the model name will result in an error unless you chose a deployment name that is identical to the underlying model name.

import os
from openai import AzureOpenAI

client = AzureOpenAI(
  azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT"), 
  api_key=os.getenv("AZURE_OPENAI_API_KEY"),  
  api_version="2024-02-01"
)

response = client.chat.completions.create(
    model="gpt-35-turbo", # model = "deployment_name"
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
        {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
        {"role": "user", "content": "Do other Azure AI services support this too?"}
    ]
)

print(response.choices[0].message.content)

Additional examples can be found in our in-depth Chat completions article.

Completions

import os
from openai import AzureOpenAI
    
client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),  
    api_version="2024-02-01",
    azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
)
    
deployment_name='REPLACE_WITH_YOUR_DEPLOYMENT_NAME' #This will correspond to the custom name you chose for your deployment when you deployed a model. 
    
# Send a completion call to generate an answer
print('Sending a test completion job')
start_phrase = 'Write a tagline for an ice cream shop. '
response = client.completions.create(model=deployment_name, prompt=start_phrase, max_tokens=10) # model = "deployment_name"
print(response.choices[0].text)

Embeddings

import os
from openai import AzureOpenAI

client = AzureOpenAI(
  api_key = os.getenv("AZURE_OPENAI_API_KEY"),  
  api_version = "2024-02-01",
  azure_endpoint =os.getenv("AZURE_OPENAI_ENDPOINT") 
)

response = client.embeddings.create(
    input = "Your text string goes here",
    model= "text-embedding-ada-002"  # model = "deployment_name".
)

print(response.model_dump_json(indent=2))

Additional examples, including how to handle semantic text search without embeddings_utils.py, can be found in our embeddings tutorial.
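With embeddings_utils.py gone, helpers such as its cosine similarity function need a replacement, and the computation is small enough to inline with the standard library. A minimal sketch (the vectors here are toy stand-ins for real embedding output, which for text-embedding-ada-002 has 1,536 dimensions):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Toy vectors standing in for `response.data[0].embedding` values
query = [0.1, 0.2, 0.3]
doc = [0.1, 0.2, 0.25]
print(cosine_similarity(query, doc))
```

Ranking documents by this score against a query embedding is the core of the semantic text search the tutorial walks through.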

Async

OpenAI doesn't support calling asynchronous methods in the module-level client; instead you should instantiate an async client.

import os
import asyncio
from openai import AsyncAzureOpenAI

async def main():
    client = AsyncAzureOpenAI(  
      api_key = os.getenv("AZURE_OPENAI_API_KEY"),  
      api_version = "2024-02-01",
      azure_endpoint = os.getenv("AZURE_OPENAI_ENDPOINT")
    )
    response = await client.chat.completions.create(model="gpt-35-turbo", messages=[{"role": "user", "content": "Hello world"}]) # model = model deployment name

    print(response.model_dump_json(indent=2))

asyncio.run(main())
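The async client pays off when you fan out several requests concurrently rather than awaiting them one at a time. The pattern looks like the following sketch, where fake_completion is a placeholder coroutine standing in for await client.chat.completions.create(...):

```python
import asyncio

async def fake_completion(prompt: str) -> str:
    # Placeholder for `await client.chat.completions.create(...)`
    await asyncio.sleep(0.01)
    return f"answer to: {prompt}"

async def main() -> list[str]:
    prompts = ["Hello world", "What is Azure OpenAI?", "Ping"]
    # asyncio.gather schedules all the calls concurrently on one event loop,
    # so total latency is roughly that of the slowest call, not the sum
    return await asyncio.gather(*(fake_completion(p) for p in prompts))

results = asyncio.run(main())
print(results)
```

asyncio.gather preserves input order in its results, so responses line up with the prompts that produced them.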

Authentication

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default")

api_version = "2024-02-01"
endpoint = "https://my-resource.openai.azure.com"

client = AzureOpenAI(
    api_version=api_version,
    azure_endpoint=endpoint,
    azure_ad_token_provider=token_provider,
)

completion = client.chat.completions.create(
    model="deployment-name",  # model = "deployment_name"
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)
print(completion.model_dump_json(indent=2))

Use your data

For the full configuration steps that are required to make these code examples work, consult the use your data quickstart.

import os
import openai
import dotenv

dotenv.load_dotenv()

endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT")
api_key = os.environ.get("AZURE_OPENAI_API_KEY")
deployment = os.environ.get("AZURE_OPEN_AI_DEPLOYMENT_ID")

client = openai.AzureOpenAI(
    base_url=f"{endpoint}/openai/deployments/{deployment}/extensions",
    api_key=api_key,
    api_version="2023-08-01-preview",
)

completion = client.chat.completions.create(
    model=deployment, # model = "deployment_name"
    messages=[
        {
            "role": "user",
            "content": "How is Azure machine learning different than Azure OpenAI?",
        },
    ],
    extra_body={
        "dataSources": [
            {
                "type": "AzureCognitiveSearch",
                "parameters": {
                    "endpoint": os.environ["AZURE_AI_SEARCH_ENDPOINT"],
                    "key": os.environ["AZURE_AI_SEARCH_API_KEY"],
                    "indexName": os.environ["AZURE_AI_SEARCH_INDEX"]
                }
            }
        ]
    }
)

print(completion.model_dump_json(indent=2))

DALL-E fix

import time
import json
import httpx
import openai


class CustomHTTPTransport(httpx.HTTPTransport):
    def handle_request(
        self,
        request: httpx.Request,
    ) -> httpx.Response:
        if "images/generations" in request.url.path and request.url.params[
            "api-version"
        ] in [
            "2023-06-01-preview",
            "2023-07-01-preview",
            "2023-08-01-preview",
            "2023-09-01-preview",
            "2023-10-01-preview",
        ]:
            request.url = request.url.copy_with(path="/openai/images/generations:submit")
            response = super().handle_request(request)
            operation_location_url = response.headers["operation-location"]
            request.url = httpx.URL(operation_location_url)
            request.method = "GET"
            response = super().handle_request(request)
            response.read()

            timeout_secs: int = 120
            start_time = time.time()
            while response.json()["status"] not in ["succeeded", "failed"]:
                if time.time() - start_time > timeout_secs:
                    timeout = {"error": {"code": "Timeout", "message": "Operation polling timed out."}}
                    return httpx.Response(
                        status_code=400,
                        headers=response.headers,
                        content=json.dumps(timeout).encode("utf-8"),
                        request=request,
                    )

                # Fall back to 10 seconds when no retry-after header is returned
                time.sleep(int(response.headers.get("retry-after") or 10))
                response = super().handle_request(request)
                response.read()

            if response.json()["status"] == "failed":
                error_data = response.json()
                return httpx.Response(
                    status_code=400,
                    headers=response.headers,
                    content=json.dumps(error_data).encode("utf-8"),
                    request=request,
                )

            result = response.json()["result"]
            return httpx.Response(
                status_code=200,
                headers=response.headers,
                content=json.dumps(result).encode("utf-8"),
                request=request,
            )
        return super().handle_request(request)


client = openai.AzureOpenAI(
    azure_endpoint="<azure_endpoint>",
    api_key="<api_key>",
    api_version="<api_version>",
    http_client=httpx.Client(
        transport=CustomHTTPTransport(),
    ),
)
image = client.images.generate(prompt="a cute baby seal")

print(image.data[0].url)

Name changes

Note

All a* methods have been removed; the async client must be used instead.

OpenAI Python 0.28.1 → OpenAI Python 1.x
openai.api_base → openai.base_url
openai.proxy → openai.proxies
openai.InvalidRequestError → openai.BadRequestError
openai.Audio.transcribe() → client.audio.transcriptions.create()
openai.Audio.translate() → client.audio.translations.create()
openai.ChatCompletion.create() → client.chat.completions.create()
openai.Completion.create() → client.completions.create()
openai.Edit.create() → client.edits.create()
openai.Embedding.create() → client.embeddings.create()
openai.File.create() → client.files.create()
openai.File.list() → client.files.list()
openai.File.retrieve() → client.files.retrieve()
openai.File.download() → client.files.retrieve_content()
openai.FineTune.cancel() → client.fine_tunes.cancel()
openai.FineTune.list() → client.fine_tunes.list()
openai.FineTune.list_events() → client.fine_tunes.list_events()
openai.FineTune.stream_events() → client.fine_tunes.list_events(stream=True)
openai.FineTune.retrieve() → client.fine_tunes.retrieve()
openai.FineTune.delete() → client.fine_tunes.delete()
openai.FineTune.create() → client.fine_tunes.create()
openai.FineTuningJob.create() → client.fine_tuning.jobs.create()
openai.FineTuningJob.cancel() → client.fine_tuning.jobs.cancel()
openai.FineTuningJob.delete() → client.fine_tuning.jobs.create()
openai.FineTuningJob.retrieve() → client.fine_tuning.jobs.retrieve()
openai.FineTuningJob.list() → client.fine_tuning.jobs.list()
openai.FineTuningJob.list_events() → client.fine_tuning.jobs.list_events()
openai.Image.create() → client.images.generate()
openai.Image.create_variation() → client.images.create_variation()
openai.Image.create_edit() → client.images.edit()
openai.Model.list() → client.models.list()
openai.Model.delete() → client.models.delete()
openai.Model.retrieve() → client.models.retrieve()
openai.Moderation.create() → client.moderations.create()
openai.api_resources → openai.resources
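Because openai migrate isn't supported with Azure OpenAI, renames like those above have to be applied by hand. One way to find the call sites that need attention is a plain scan over your source text; a minimal sketch (the mapping below is only an illustrative subset of the full table above):

```python
import re

# Subset of 0.28.x names that changed in 1.x (see the table above)
RENAMED = {
    "openai.ChatCompletion.create": "client.chat.completions.create",
    "openai.Completion.create": "client.completions.create",
    "openai.Embedding.create": "client.embeddings.create",
    "openai.Image.create": "client.images.generate",
    "openai.api_base": "openai.base_url",
}

def find_migration_sites(source: str) -> list[tuple[str, str]]:
    """Return (old_name, suggested_new_name) pairs found in the source text."""
    hits = []
    for old, new in RENAMED.items():
        # Escape the dotted name and require a word boundary after it
        if re.search(re.escape(old) + r"\b", source):
            hits.append((old, new))
    return hits

sample = 'response = openai.ChatCompletion.create(engine="gpt-35-turbo")'
print(find_migration_sites(sample))
```

Running this over each file in a project gives a checklist of 0.28-style calls to rewrite against an instantiated client.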

Removed

  • openai.api_key_path
  • openai.app_info
  • openai.debug
  • openai.log
  • openai.OpenAIError
  • openai.Audio.transcribe_raw()
  • openai.Audio.translate_raw()
  • openai.ErrorObject
  • openai.Customer
  • openai.api_version
  • openai.verify_ssl_certs
  • openai.api_type
  • openai.enable_telemetry
  • openai.ca_bundle_path
  • openai.requestssession (OpenAI now uses httpx)
  • openai.aiosession (OpenAI now uses httpx)
  • openai.Deployment (previously used for Azure OpenAI)
  • openai.Engine
  • openai.File.find_matching_files()