OpenAI has released a new version of the OpenAI Python API library. This guide is a companion to the OpenAI migration guide and will help bring you up to speed on the changes specific to Azure OpenAI.
Starting on November 6, 2023, pip install openai and pip install openai --upgrade will install version 1.x of the OpenAI Python library. Upgrading from version 0.28.1 to version 1.x is a breaking change, and you'll need to test and update your code.
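Before running the upgrade, it can help to confirm which version you currently have installed. One way to check from Python using only the standard library (this snippet is ours, not from the OpenAI docs):

```python
from importlib.metadata import PackageNotFoundError, version

# Look up the installed openai package version, if any.
try:
    print("openai", version("openai"))
except PackageNotFoundError:
    print("openai is not installed")
```

Equivalently, pip show openai reports the same information from the command line.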
- Automatic retries with backoff if there's an error
- Proper types (for mypy/pyright/editors)
- You can now instantiate a client, instead of using a global default
- Switch to explicit client instantiation
DALL-E3 is fully supported with the latest 1.x release. DALL-E2 can be used with version 1.x by making the following modifications to your code.
embeddings_utils.py, which was used to provide functionality like cosine similarity for semantic text search, is no longer part of the OpenAI Python API library.
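With the helper gone, cosine similarity is easy to reproduce yourself. A minimal standard-library sketch (the function name here is our own, not part of any OpenAI package):

```python
import math

def cosine_similarity(a, b):
    """Cosine of the angle between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

# Vectors pointing the same way score 1.0; orthogonal vectors score 0.0.
print(cosine_similarity([1.0, 0.0], [2.0, 0.0]))  # 1.0
print(cosine_similarity([1.0, 0.0], [0.0, 1.0]))  # 0.0
```

For production semantic search over many embedding vectors, a vectorized implementation (for example, with numpy) is a better fit than this pure-Python loop.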
You should also check the active GitHub issues for the OpenAI Python library.
Test before you migrate
Azure OpenAI doesn't support automatic migration of your code with openai migrate.
Because this is a new version of the library with breaking changes, you should test your code extensively against the new release before migrating any production applications to rely on version 1.x. You should also review your code and internal processes to make sure that you're following best practices and pinning your production code only to versions that you have fully tested.
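One common way to pin, in line with the advice above, is an exact version in a requirements file. The version number below is purely illustrative; substitute the release you actually validated:

```
# requirements.txt
openai==1.2.3  # illustrative version; pin the 1.x release you fully tested
```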
To make the migration process easier, we're updating existing code examples in our Python docs to a tabbed experience:
OpenAI Python 1.x
OpenAI Python 0.28.1
This provides context for what has changed and lets you test the new library in parallel while continuing to provide support for version 0.28.1. If you upgrade to 1.x and realize you need to temporarily revert to the previous version, you can always pip uninstall openai, then reinstall targeted to 0.28.1 with pip install openai==0.28.1.
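Put together, the rollback described above is two commands:

```shell
pip uninstall openai
pip install openai==0.28.1
```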
Chat completions

OpenAI Python 1.x
OpenAI Python 0.28.1
You need to set the model variable to the deployment name you chose when you deployed the GPT-3.5-Turbo or GPT-4 models. Entering the model name results in an error unless you chose a deployment name that is identical to the underlying model name.
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT"),
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2023-05-15"
)

response = client.chat.completions.create(
    model="gpt-35-turbo",  # model = "deployment_name".
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
        {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
        {"role": "user", "content": "Do other Azure AI services support this too?"}
    ]
)

print(response.choices[0].message.content)
For additional examples, check out the in-depth Chat completions article.
You need to set the engine variable to the deployment name you chose when you deployed the GPT-3.5-Turbo or GPT-4 models. Entering the model name results in an error unless you chose a deployment name that is identical to the underlying model name.
import os
import openai

openai.api_type = "azure"
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")
openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
openai.api_version = "2023-05-15"

response = openai.ChatCompletion.create(
    engine="gpt-35-turbo",  # engine = "deployment_name".
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Does Azure OpenAI support customer managed keys?"},
        {"role": "assistant", "content": "Yes, customer managed keys are supported by Azure OpenAI."},
        {"role": "user", "content": "Do other Azure AI services support this too?"}
    ]
)

print(response)
print(response['choices'][0]['message']['content'])
Completions

OpenAI Python 1.x
OpenAI Python 0.28.1
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2023-12-01-preview",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
)

deployment_name = 'REPLACE_WITH_YOUR_DEPLOYMENT_NAME'  # This will correspond to the custom name you chose for your deployment when you deployed a model.

# Send a completion call to generate an answer
print('Sending a test completion job')
start_phrase = 'Write a tagline for an ice cream shop. '
response = client.completions.create(model=deployment_name, prompt=start_phrase, max_tokens=10)
print(response.choices[0].text)
import os
import openai

openai.api_key = os.getenv("AZURE_OPENAI_API_KEY")
openai.api_base = os.getenv("AZURE_OPENAI_ENDPOINT")  # your endpoint should look like the following https://YOUR_RESOURCE_NAME.openai.azure.com/
openai.api_type = 'azure'
openai.api_version = '2023-05-15'  # this might change in the future

deployment_name = 'REPLACE_WITH_YOUR_DEPLOYMENT_NAME'  # This will correspond to the custom name you chose for your deployment when you deployed a model.

# Send a completion call to generate an answer
print('Sending a test completion job')
start_phrase = 'Write a tagline for an ice cream shop. '
response = openai.Completion.create(engine=deployment_name, prompt=start_phrase, max_tokens=10)
text = response['choices'][0]['text'].replace('\n', '').replace(' .', '.').strip()
print(start_phrase + text)
Embeddings
OpenAI Python 1.x
OpenAI Python 0.28.1
import os
from openai import AzureOpenAI

client = AzureOpenAI(
    api_key=os.getenv("AZURE_OPENAI_API_KEY"),
    api_version="2023-05-15",
    azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
)

response = client.embeddings.create(
    input="Your text string goes here",
    model="text-embedding-ada-002"  # model = "deployment_name".
)

print(response.model_dump_json(indent=2))
Additional examples, including how to handle semantic text search without embeddings_utils.py, can be found in our embeddings tutorial.
import openai

openai.api_type = "azure"
openai.api_key = YOUR_API_KEY
openai.api_base = "https://YOUR_RESOURCE_NAME.openai.azure.com"
openai.api_version = "2023-05-15"

response = openai.Embedding.create(
    input="Your text string goes here",
    engine="YOUR_DEPLOYMENT_NAME"
)

embeddings = response['data'][0]['embedding']
print(embeddings)
Async

OpenAI doesn't support calling asynchronous methods in the module-level client; instead you should instantiate an async client.
import os
import asyncio
from openai import AsyncAzureOpenAI

async def main():
    client = AsyncAzureOpenAI(
        api_key=os.getenv("AZURE_OPENAI_API_KEY"),
        api_version="2023-12-01-preview",
        azure_endpoint=os.getenv("AZURE_OPENAI_ENDPOINT")
    )
    response = await client.chat.completions.create(model="gpt-35-turbo", messages=[{"role": "user", "content": "Hello world"}])
    print(response.model_dump_json(indent=2))

asyncio.run(main())
Authentication

from azure.identity import DefaultAzureCredential, get_bearer_token_provider
from openai import AzureOpenAI

token_provider = get_bearer_token_provider(DefaultAzureCredential(), "https://cognitiveservices.azure.com/.default")

api_version = "2023-12-01-preview"
endpoint = "https://my-resource.openai.azure.com"

client = AzureOpenAI(
    api_version=api_version,
    azure_endpoint=endpoint,
    azure_ad_token_provider=token_provider,
)

completion = client.chat.completions.create(
    model="deployment-name",  # gpt-35-instant
    messages=[
        {
            "role": "user",
            "content": "How do I output all files in a directory using Python?",
        },
    ],
)

print(completion.model_dump_json(indent=2))
Use your data
For the full configuration steps required to make these code examples work, see the Use your data quickstart.
OpenAI Python 1.x
OpenAI Python 0.28.1
import os
import openai

endpoint = os.environ.get("AZURE_OPENAI_ENDPOINT")
api_key = os.environ.get("AZURE_OPENAI_API_KEY")
deployment = os.environ.get("AZURE_OPEN_AI_DEPLOYMENT_ID")

client = openai.AzureOpenAI(
    base_url=f"{endpoint}/openai/deployments/{deployment}/extensions",
    api_key=api_key,
    api_version="2023-08-01-preview",
)

completion = client.chat.completions.create(
    model=deployment,
    messages=[
        {
            "role": "user",
            "content": "How is Azure machine learning different than Azure OpenAI?",
        },
    ],
    extra_body={
        "dataSources": [
            {
                "type": "AzureCognitiveSearch",
                "parameters": {
                    "endpoint": os.environ["AZURE_AI_SEARCH_ENDPOINT"],
                    "key": os.environ["AZURE_AI_SEARCH_API_KEY"],
                    "indexName": os.environ["AZURE_AI_SEARCH_INDEX"]
                }
            }
        ]
    }
)

print(completion.model_dump_json(indent=2))
import os
import requests
import openai

openai.api_base = os.environ.get("AZURE_OPENAI_ENDPOINT")
openai.api_version = "2023-08-01-preview"
openai.api_type = 'azure'
openai.api_key = os.environ.get("AZURE_OPENAI_API_KEY")

def setup_byod(deployment_id: str) -> None:
    """Sets up the OpenAI Python SDK to use your own data for the chat endpoint.

    :param deployment_id: The deployment ID for the model to use with your own data.

    To remove this configuration, simply set openai.requestssession to None.
    """

    class BringYourOwnDataAdapter(requests.adapters.HTTPAdapter):

        def send(self, request, **kwargs):
            request.url = f"{openai.api_base}/openai/deployments/{deployment_id}/extensions/chat/completions?api-version={openai.api_version}"
            return super().send(request, **kwargs)

    session = requests.Session()

    # Mount a custom adapter which will use the extensions endpoint for any call using the given `deployment_id`
    session.mount(
        prefix=f"{openai.api_base}/openai/deployments/{deployment_id}",
        adapter=BringYourOwnDataAdapter()
    )

    openai.requestssession = session

aoai_deployment_id = os.environ.get("AZURE_OPEN_AI_DEPLOYMENT_ID")
setup_byod(aoai_deployment_id)

completion = openai.ChatCompletion.create(
    messages=[{"role": "user", "content": "What are the differences between Azure Machine Learning and Azure AI services?"}],
    deployment_id=os.environ.get("AZURE_OPEN_AI_DEPLOYMENT_ID"),
    dataSources=[  # camelCase is intentional, as this is the format the API expects
        {
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": os.environ.get("AZURE_AI_SEARCH_ENDPOINT"),
                "key": os.environ.get("AZURE_AI_SEARCH_API_KEY"),
                "indexName": os.environ.get("AZURE_AI_SEARCH_INDEX"),
            }
        }
    ]
)
print(completion)
DALL-E fix

DALL-E Fix
DALL-E Fix Async
import time
import json
import httpx
import openai


class CustomHTTPTransport(httpx.HTTPTransport):
    def handle_request(
        self,
        request: httpx.Request,
    ) -> httpx.Response:
        if "images/generations" in request.url.path and request.url.params[
            "api-version"
        ] in [
            "2023-06-01-preview",
            "2023-07-01-preview",
            "2023-08-01-preview",
            "2023-09-01-preview",
            "2023-10-01-preview",
        ]:
            request.url = request.url.copy_with(path="/openai/images/generations:submit")
            response = super().handle_request(request)
            operation_location_url = response.headers["operation-location"]
            request.url = httpx.URL(operation_location_url)
            request.method = "GET"
            response = super().handle_request(request)
            response.read()

            timeout_secs: int = 120
            start_time = time.time()
            while response.json()["status"] not in ["succeeded", "failed"]:
                if time.time() - start_time > timeout_secs:
                    timeout = {"error": {"code": "Timeout", "message": "Operation polling timed out."}}
                    return httpx.Response(
                        status_code=400,
                        headers=response.headers,
                        content=json.dumps(timeout).encode("utf-8"),
                        request=request,
                    )

                time.sleep(int(response.headers.get("retry-after") or 10))
                response = super().handle_request(request)
                response.read()

            if response.json()["status"] == "failed":
                error_data = response.json()
                return httpx.Response(
                    status_code=400,
                    headers=response.headers,
                    content=json.dumps(error_data).encode("utf-8"),
                    request=request,
                )

            result = response.json()["result"]
            return httpx.Response(
                status_code=200,
                headers=response.headers,
                content=json.dumps(result).encode("utf-8"),
                request=request,
            )

        return super().handle_request(request)


client = openai.AzureOpenAI(
    azure_endpoint="<azure_endpoint>",
    api_key="<api_key>",
    api_version="<api_version>",
    http_client=httpx.Client(
        transport=CustomHTTPTransport(),
    ),
)
image = client.images.generate(prompt="a cute baby seal")

print(image.data[0].url)
import asyncio
import time
import json
import httpx
import openai


class AsyncCustomHTTPTransport(httpx.AsyncHTTPTransport):
    async def handle_async_request(
        self,
        request: httpx.Request,
    ) -> httpx.Response:
        if "images/generations" in request.url.path and request.url.params[
            "api-version"
        ] in [
            "2023-06-01-preview",
            "2023-07-01-preview",
            "2023-08-01-preview",
            "2023-09-01-preview",
            "2023-10-01-preview",
        ]:
            request.url = request.url.copy_with(path="/openai/images/generations:submit")
            response = await super().handle_async_request(request)
            operation_location_url = response.headers["operation-location"]
            request.url = httpx.URL(operation_location_url)
            request.method = "GET"
            response = await super().handle_async_request(request)
            await response.aread()

            timeout_secs: int = 120
            start_time = time.time()
            while response.json()["status"] not in ["succeeded", "failed"]:
                if time.time() - start_time > timeout_secs:
                    timeout = {"error": {"code": "Timeout", "message": "Operation polling timed out."}}
                    return httpx.Response(
                        status_code=400,
                        headers=response.headers,
                        content=json.dumps(timeout).encode("utf-8"),
                        request=request,
                    )

                await asyncio.sleep(int(response.headers.get("retry-after") or 10))
                response = await super().handle_async_request(request)
                await response.aread()

            if response.json()["status"] == "failed":
                error_data = response.json()
                return httpx.Response(
                    status_code=400,
                    headers=response.headers,
                    content=json.dumps(error_data).encode("utf-8"),
                    request=request,
                )

            result = response.json()["result"]
            return httpx.Response(
                status_code=200,
                headers=response.headers,
                content=json.dumps(result).encode("utf-8"),
                request=request,
            )

        return await super().handle_async_request(request)


async def dall_e():
    client = openai.AsyncAzureOpenAI(
        azure_endpoint="<azure_endpoint>",
        api_key="<api_key>",
        api_version="<api_version>",
        http_client=httpx.AsyncClient(
            transport=AsyncCustomHTTPTransport(),
        ),
    )
    image = await client.images.generate(prompt="a cute baby seal")

    print(image.data[0].url)

asyncio.run(dall_e())
Removed

All a* methods have been removed; the async client must be used instead.
The following module-level attributes have also been removed:

- openai.ca_bundle_path
- openai.requestssession (OpenAI now uses httpx)
- openai.aiosession (OpenAI now uses httpx)
- openai.Deployment (previously used for Azure OpenAI)
- openai.Engine
- openai.File.find_matching_files()