Azure OpenAI (Preview)
Easily integrate Azure OpenAI's cutting-edge artificial intelligence capabilities into your workflows
This connector is available in the following products and regions:
Service | Class | Regions |
---|---|---|
Logic Apps | Standard | All Logic Apps regions except the following: - Azure Government regions - Azure China regions - US Department of Defense (DoD) |
Power Automate | Premium | All Power Automate regions except the following: - US Government (GCC) - US Government (GCC High) - China Cloud operated by 21Vianet - US Department of Defense (DoD) |
Power Apps | Premium | All Power Apps regions except the following: - US Government (GCC) - US Government (GCC High) - China Cloud operated by 21Vianet - US Department of Defense (DoD) |
Contact | |
---|---|
Name | Microsoft |
URL | https://support.microsoft.com |
Connector Metadata | |
---|---|
Publisher | Abby Hartman |
Website | https://azure.microsoft.com/en-us/products/cognitive-services/openai-service |
Privacy policy | https://learn.microsoft.com/en-us/legal/cognitive-services/openai/data-privacy |
Categories | AI;Business Intelligence |
Creating a connection
The connector supports the following authentication types:
Name | Description | Applicable Regions | Shareable |
---|---|---|---|
Default | Parameters for creating connection. | All regions | Not shareable |
Default
Applicable: All regions
Parameters for creating connection.
This connection is not shareable. If the Power App is shared with another user, that user will be prompted to create a new connection explicitly.
Name | Type | Description | Required |
---|---|---|---|
Azure OpenAI resource name | string | The name of the Azure OpenAI resource that hosts the AI model | True |
Azure OpenAI API key | securestring | The API key to access the Azure OpenAI resource that hosts the AI model | True |
Azure Cognitive Search endpoint URL | string | The URL of the Azure Cognitive Search endpoint indexing your data | |
Azure Cognitive Search API key | securestring | The API key to access the Azure Cognitive Search endpoint indexing your data | |
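The first two connection parameters map directly onto a raw Azure OpenAI REST call. A minimal sketch, assuming the standard `*.openai.azure.com` endpoint pattern; the resource name, deployment ID, and key below are placeholders:

```python
# Sketch: how the connection parameters translate into a direct REST call.
# Resource name, deployment ID, and API key are placeholders.

def build_endpoint(resource_name: str, deployment_id: str, api_version: str) -> str:
    """The connector derives the base URL from the Azure OpenAI resource name."""
    return (
        f"https://{resource_name}.openai.azure.com/openai/deployments/"
        f"{deployment_id}/chat/completions?api-version={api_version}"
    )

def build_headers(api_key: str) -> dict:
    """Azure OpenAI authenticates with an `api-key` header (not a Bearer token)."""
    return {"api-key": api_key, "Content-Type": "application/json"}

url = build_endpoint("contoso-openai", "gpt-35-turbo", "2024-02-15-preview")
headers = build_headers("<your-api-key>")
```

The connector performs the equivalent call for you; the sketch only shows how the resource name and API key are used.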
Throttling Limits
Name | Calls | Renewal Period |
---|---|---|
API calls per connection | 1000 | 60 seconds |
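If a flow can approach the 1000-calls-per-60-seconds limit, a client-side throttle helps avoid rejected calls. An illustrative sliding-window sketch (the limiter class and its names are not part of the connector):

```python
# Illustrative client-side throttle for a calls-per-renewal-period limit.
import time
from collections import deque

class SlidingWindowLimiter:
    def __init__(self, max_calls=1000, period=60.0):
        self.max_calls = max_calls
        self.period = period
        self.calls = deque()  # timestamps of scheduled calls

    def acquire(self, now=None):
        """Record a call; return seconds to wait before sending it (0.0 if none)."""
        now = time.monotonic() if now is None else now
        # Drop timestamps that have left the window.
        while self.calls and now - self.calls[0] >= self.period:
            self.calls.popleft()
        wait = 0.0
        if len(self.calls) >= self.max_calls:
            wait = self.period - (now - self.calls[0])
        self.calls.append(now + wait)
        return wait
```

Before each connector call, sleep for whatever `acquire()` returns.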
Actions
Action | Description |
---|---|
Creates a completion for the chat message | Creates a completion for the chat message |
Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms | Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms |
Using extensions to create a completion for chat messages | Using extensions to create a completion for chat messages |
Creates a completion for the chat message
Creates a completion for the chat message
Parameters

Name | Key | Required | Type | Description |
---|---|---|---|---|
Deployment ID of the deployed model | deployment-id | True | string | Deployment ID of the deployed model |
API version | api-version | True | string | API version |
temperature | temperature | | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. |
top_p | top_p | | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. |
stream | stream | | boolean | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message. |
stop | stop | | array of string | Up to 4 sequences where the API will stop generating further tokens. The returned text will not contain the stop sequence. |
max_tokens | max_tokens | | integer | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens). |
presence_penalty | presence_penalty | | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
frequency_penalty | frequency_penalty | | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
logit_bias | logit_bias | | object | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. |
user | user | | string | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse. |
role | role | True | string | The role of the messages author. |
content | content | True | string | An array of content parts with a defined type, each can be of type `text` or `image_url`. |
type | type | True | string | A representation of configuration data for a single Azure OpenAI chat extension. This will be used by a chat completions request that should use Azure OpenAI chat extensions to augment the response behavior. The use of this configuration is compatible only with Azure OpenAI. |
top_n_documents | top_n_documents | | integer | The configured top number of documents to feature for the configured query. |
in_scope | in_scope | | boolean | Whether queries should be restricted to use of indexed data. |
strictness | strictness | | integer | The configured strictness of the search relevance filtering. Higher strictness raises precision but lowers recall of the answer. |
role_information | role_information | | string | Give the model instructions about how it should behave and any context it should reference when generating a response. You can describe the assistant's personality and tell it how to format responses. There's a 100 token limit for it, and it counts against the overall token limit. |
index_name | index_name | True | string | The name of the index to use as available in the referenced Azure Search resource. |
title_field | title_field | | string | The name of the index field to use as a title. |
url_field | url_field | | string | The name of the index field to use as a URL. |
filepath_field | filepath_field | | string | The name of the index field to use as a filepath. |
content_fields | content_fields | | array of string | The names of index fields that should be treated as content. |
content_fields_separator | content_fields_separator | | string | The separator pattern that content fields should use. |
vector_fields | vector_fields | | array of string | The names of fields that represent vector data. |
query_type | query_type | | string | The type of Azure Search retrieval query that should be executed when using it as an Azure OpenAI chat extension. |
semantic_configuration | semantic_configuration | | string | The additional semantic configuration for the query. |
filter | filter | | string | Search filter. |
type | type | | string | Represents the available sources Azure OpenAI On Your Data can use to configure vectorization of data for use with vector search. |
deployment_name | deployment_name | True | string | Specifies the name of the model deployment to use for vectorization. This model deployment must be in the same Azure OpenAI resource, but On Your Data will use this model deployment via an internal call rather than a public one, which enables vector search even in private networks. |
enabled | enabled | | boolean | |
enabled | enabled | | boolean | |
n | n | | integer | How many chat completion choices to generate for each input message. |
seed | seed | | integer | If specified, our system will make a best effort to sample deterministically, such that repeated requests with the same `seed` and parameters should return the same result. |
logprobs | logprobs | | boolean | Whether to return log probabilities of the output tokens or not. If true, returns the log probabilities of each output token returned in the `content` of `message`. |
top_logprobs | top_logprobs | | integer | An integer between 0 and 5 specifying the number of most likely tokens to return at each token position, each with an associated log probability. |
type | type | | string | Setting to `json_object` enables JSON mode. This guarantees that the message the model generates is valid JSON. |
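The parameters above assemble into a single JSON request body. A minimal sketch of building one, leaving unset options out so the service defaults apply (function and variable names are illustrative):

```python
# Sketch: assemble a chat completion request body from the parameters above.

def chat_completion_body(messages, temperature=None, top_p=None,
                         max_tokens=None, stop=None, user=None):
    """Build the JSON body; optional parameters left as None are omitted
    so the service defaults apply. The docs advise tuning temperature OR
    top_p, not both."""
    body = {"messages": messages}
    optional = {"temperature": temperature, "top_p": top_p,
                "max_tokens": max_tokens, "stop": stop, "user": user}
    body.update({k: v for k, v in optional.items() if v is not None})
    return body

body = chat_completion_body(
    messages=[{"role": "system", "content": "You are a helpful assistant."},
              {"role": "user", "content": "Summarize this ticket."}],
    temperature=0.2, max_tokens=256)
```

In Power Automate the action's input fields correspond one-to-one with these body keys.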
Returns
Get a vector representation of a given input that can be easily consumed by machine learning models and algorithms
Parameters

Name | Key | Required | Type | Description |
---|---|---|---|---|
Deployment ID of the deployed model | deployment-id | True | string | Deployment ID of the deployed model |
API version | api-version | True | string | API version |
input | input | True | string | Input text to get embeddings for, encoded as a string. Input string must not exceed 2048 tokens in length. |
user | user | | string | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse. |
input_type | input_type | | string | The input type of embedding search to use. |
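A sketch of the embeddings request body, plus a cosine-similarity helper for consuming the returned vectors (the helper is an assumption about downstream use, not part of the connector):

```python
# Sketch: body for the embeddings action, and a helper for comparing
# the vectors it returns.
import math

def embeddings_body(text, user=None):
    body = {"input": text}  # input must not exceed 2048 tokens
    if user is not None:
        body["user"] = user
    return body

def cosine_similarity(a, b):
    """Cosine similarity between two embedding vectors of equal length."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)
```

Embeddings from the same model can be compared this way to rank documents by semantic similarity.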
Returns
Using extensions to create a completion for chat messages
Using extensions to create a completion for chat messages
Parameters

Name | Key | Required | Type | Description |
---|---|---|---|---|
Deployment ID of the deployed model | deployment-id | True | string | Deployment ID of the deployed model |
Confirm Deployment ID of the deployed model | deploymentId | True | string | Confirm Deployment ID of the deployed model |
API version | api-version | True | string | API version |
index | index | | integer | The index of the message in the conversation. |
role | role | True | string | The role of the author of this message. |
recipient | recipient | | string | The recipient of the message. Present if and only if the recipient is tool. |
content | content | True | string | The contents of the message |
end_turn | end_turn | | boolean | Whether the message ends the turn. |
type | type | True | string | The data source type. |
parameters | parameters | | object | The parameters to be used for the data source in runtime. |
temperature | temperature | | number | What sampling temperature to use, between 0 and 2. Higher values like 0.8 will make the output more random, while lower values like 0.2 will make it more focused and deterministic. We generally recommend altering this or `top_p` but not both. |
top_p | top_p | | number | An alternative to sampling with temperature, called nucleus sampling, where the model considers the results of the tokens with top_p probability mass. So 0.1 means only the tokens comprising the top 10% probability mass are considered. We generally recommend altering this or `temperature` but not both. |
stream | stream | | boolean | If set, partial message deltas will be sent, like in ChatGPT. Tokens will be sent as data-only server-sent events as they become available, with the stream terminated by a `data: [DONE]` message. |
stop | stop | | array of string | Array minimum size of 1 and maximum of 4 |
max_tokens | max_tokens | | integer | The maximum number of tokens allowed for the generated answer. By default, the number of tokens the model can return will be (4096 - prompt tokens). |
presence_penalty | presence_penalty | | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on whether they appear in the text so far, increasing the model's likelihood to talk about new topics. |
frequency_penalty | frequency_penalty | | number | Number between -2.0 and 2.0. Positive values penalize new tokens based on their existing frequency in the text so far, decreasing the model's likelihood to repeat the same line verbatim. |
logit_bias | logit_bias | | object | Modify the likelihood of specified tokens appearing in the completion. Accepts a JSON object that maps tokens (specified by their token ID in the tokenizer) to an associated bias value from -100 to 100. Mathematically, the bias is added to the logits generated by the model prior to sampling. The exact effect will vary per model, but values between -1 and 1 should decrease or increase likelihood of selection; values like -100 or 100 should result in a ban or exclusive selection of the relevant token. |
user | user | | string | A unique identifier representing your end-user, which can help Azure OpenAI to monitor and detect abuse. |
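The extensions action attaches one or more data sources to the chat request. A hedged sketch of what the body looks like with an Azure Cognitive Search source; the camelCase parameter names and the `dataSources` key are assumptions based on the older On Your Data API versions, and the endpoint, key, and index name are placeholders:

```python
# Sketch (assumed schema): a chat-with-extensions body attaching an
# Azure Cognitive Search data source. All values are placeholders.

def extensions_chat_body(messages, search_endpoint, search_key, index_name,
                         in_scope=True, top_n_documents=5):
    return {
        "messages": messages,
        "dataSources": [{
            "type": "AzureCognitiveSearch",
            "parameters": {
                "endpoint": search_endpoint,
                "key": search_key,
                "indexName": index_name,
                "inScope": in_scope,            # restrict answers to indexed data
                "topNDocuments": top_n_documents,
            },
        }],
    }

body = extensions_chat_body(
    messages=[{"role": "user", "content": "What does our returns policy say?"}],
    search_endpoint="https://contoso.search.windows.net",
    search_key="<search-api-key>",
    index_name="policies")
```

The connector's Azure Cognitive Search connection parameters supply the endpoint and key shown here.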
Returns
The response of the extensions chat completions.
Definitions
Message
A chat message.
Name | Path | Type | Description |
---|---|---|---|
index | index | integer | The index of the message in the conversation. |
role | role | string | The role of the author of this message. |
recipient | recipient | string | The recipient of the message. Present if and only if the recipient is tool. |
content | content | string | The contents of the message |
end_turn | end_turn | boolean | Whether the message ends the turn. |
ExtensionsChatCompletionsResponse
The response of the extensions chat completions.
Name | Path | Type | Description |
---|---|---|---|
id | id | string | |
object | object | string | |
created | created | integer | |
model | model | string | |
prompt_filter_results | prompt_filter_results | promptFilterResults | Content filtering results for zero or more prompts in the request. In a streaming request, results for different prompts may arrive at different times or in different orders. |
prompt_tokens | usage.prompt_tokens | integer | |
completion_tokens | usage.completion_tokens | integer | |
total_tokens | usage.total_tokens | integer | |
choices | choices | array of ExtensionsChatCompletionChoice | |
ExtensionsChatCompletionChoice
Name | Path | Type | Description |
---|---|---|---|
index | index | integer | |
finish_reason | finish_reason | string | |
content_filter_results | content_filter_results | contentFilterResults | Information about the content filtering category (hate, sexual, violence, self_harm), whether it has been detected, the severity level (very_low, low, medium, or high, a scale that determines the intensity and risk level of harmful content), and whether it has been filtered or not. |
messages | messages | array of Message | The list of messages returned by the service. |
contentFilterResult
Name | Path | Type | Description |
---|---|---|---|
severity | severity | string | |
filtered | filtered | boolean | |
contentFilterResults
Information about the content filtering category (hate, sexual, violence, self_harm), whether it has been detected, the severity level (very_low, low, medium, or high, a scale that determines the intensity and risk level of harmful content), and whether it has been filtered or not.

Name | Path | Type | Description |
---|---|---|---|
sexual | sexual | contentFilterResult | |
violence | violence | contentFilterResult | |
hate | hate | contentFilterResult | |
self_harm | self_harm | contentFilterResult | |
error | error | errorBase | |
promptFilterResult
Content filtering results for a single prompt in the request.
Name | Path | Type | Description |
---|---|---|---|
prompt_index | prompt_index | integer | |
content_filter_results | content_filter_results | contentFilterResults | Information about the content filtering category (hate, sexual, violence, self_harm), whether it has been detected, the severity level (very_low, low, medium, or high, a scale that determines the intensity and risk level of harmful content), and whether it has been filtered or not. |
promptFilterResults
Content filtering results for zero or more prompts in the request. In a streaming request, results for different prompts may arrive at different times or in different orders.
Name | Path | Type | Description |
---|---|---|---|
Items | | promptFilterResult | Content filtering results for a single prompt in the request. |
errorBase
Name | Path | Type | Description |
---|---|---|---|
code | code | string | |
message | message | string | |

errorBase_2024Feb15Preview

Name | Path | Type | Description |
---|---|---|---|
code | code | string | |
message | message | string | |
contentFilterSeverityResult_2024Feb15Preview
Name | Path | Type | Description |
---|---|---|---|
filtered | filtered | boolean | |
severity | severity | string | |

contentFilterDetectedResult_2024Feb15Preview

Name | Path | Type | Description |
---|---|---|---|
filtered | filtered | boolean | |
detected | detected | boolean | |
contentFilterDetectedWithCitationResult_2024Feb15Preview
Name | Path | Type | Description |
---|---|---|---|
filtered | filtered | boolean | |
detected | detected | boolean | |
URL | citation.URL | string | |
license | citation.license | string | |

contentFilterIdResult_2024Feb15Preview

Name | Path | Type | Description |
---|---|---|---|
id | id | string | |
filtered | filtered | boolean | |
contentFilterPromptResults_2024Feb15Preview
Information about the content filtering category (hate, sexual, violence, self_harm), whether it has been detected, the severity level (very_low, low, medium, or high, a scale that determines the intensity and risk level of harmful content), and whether it has been filtered or not. Also includes whether jailbreak content and profanity have been detected and filtered, and, if a customer block list filtered the content, its id.

Name | Path | Type | Description |
---|---|---|---|
sexual | sexual | contentFilterSeverityResult_2024Feb15Preview | |
violence | violence | contentFilterSeverityResult_2024Feb15Preview | |
hate | hate | contentFilterSeverityResult_2024Feb15Preview | |
self_harm | self_harm | contentFilterSeverityResult_2024Feb15Preview | |
profanity | profanity | contentFilterDetectedResult_2024Feb15Preview | |
custom_blocklists | custom_blocklists | array of contentFilterIdResult_2024Feb15Preview | |
error | error | errorBase_2024Feb15Preview | |
jailbreak | jailbreak | contentFilterDetectedResult_2024Feb15Preview | |
contentFilterChoiceResults_2024Feb15Preview
Information about the content filtering category (hate, sexual, violence, self_harm), whether it has been detected, the severity level (very_low, low, medium, or high, a scale that determines the intensity and risk level of harmful content), and whether it has been filtered or not. Also includes whether third-party text and profanity have been detected and filtered, and, if a customer block list filtered the content, its id.

Name | Path | Type | Description |
---|---|---|---|
sexual | sexual | contentFilterSeverityResult_2024Feb15Preview | |
violence | violence | contentFilterSeverityResult_2024Feb15Preview | |
hate | hate | contentFilterSeverityResult_2024Feb15Preview | |
self_harm | self_harm | contentFilterSeverityResult_2024Feb15Preview | |
profanity | profanity | contentFilterDetectedResult_2024Feb15Preview | |
custom_blocklists | custom_blocklists | array of contentFilterIdResult_2024Feb15Preview | |
error | error | errorBase_2024Feb15Preview | |
protected_material_text | protected_material_text | contentFilterDetectedResult_2024Feb15Preview | |
protected_material_code | protected_material_code | contentFilterDetectedWithCitationResult_2024Feb15Preview | |
promptFilterResult_2024Feb15Preview
Content filtering results for a single prompt in the request.
Name | Path | Type | Description |
---|---|---|---|
prompt_index | prompt_index | integer | |
content_filter_results | content_filter_results | contentFilterPromptResults_2024Feb15Preview | Information about the content filtering category (hate, sexual, violence, self_harm), whether it has been detected, the severity level (very_low, low, medium, or high, a scale that determines the intensity and risk level of harmful content), and whether it has been filtered or not. Also includes whether jailbreak content and profanity have been detected and filtered, and, if a customer block list filtered the content, its id. |
promptFilterResults_2024Feb15Preview
Content filtering results for zero or more prompts in the request. In a streaming request, results for different prompts may arrive at different times or in different orders.
Name | Path | Type | Description |
---|---|---|---|
Items | | promptFilterResult_2024Feb15Preview | Content filtering results for a single prompt in the request. |
azureChatExtensionsMessageContext_2024Feb15Preview
A representation of the additional context information available when Azure OpenAI chat extensions are involved in the generation of a corresponding chat completions response. This context information is only populated when using an Azure OpenAI request configured to use a matching extension.
Name | Path | Type | Description |
---|---|---|---|
citations | citations | array of citation_2024Feb15Preview | The data source retrieval result, used to generate the assistant message in the response. |
intent | intent | string | The detected intent from the chat history, used to pass to the next turn to carry over the context. |
citation_2024Feb15Preview
Citation information for a chat completions response message.

Name | Path | Type | Description |
---|---|---|---|
content | content | string | The content of the citation. |
title | title | string | The title of the citation. |
url | url | string | The URL of the citation. |
filepath | filepath | string | The file path of the citation. |
chunk_id | chunk_id | string | The chunk ID of the citation. |
createChatCompletionResponse_2024Feb15Preview
Name | Path | Type | Description |
---|---|---|---|
id | id | string | A unique identifier for the chat completion. |
object | object | chatCompletionResponseObject_2024Feb15Preview | The object type. |
created | created | integer | The Unix timestamp (in seconds) of when the chat completion was created. |
model | model | string | The model used for the chat completion. |
usage | usage | completionUsage_2024Feb15Preview | Usage statistics for the completion request. |
system_fingerprint | system_fingerprint | string | Can be used in conjunction with the `seed` request parameter to understand when backend changes have been made that might impact determinism. |
prompt_filter_results | prompt_filter_results | promptFilterResults_2024Feb15Preview | Content filtering results for zero or more prompts in the request. In a streaming request, results for different prompts may arrive at different times or in different orders. |
choices | choices | array of object | |
index | choices.index | integer | |
finish_reason | choices.finish_reason | string | |
message | choices.message | chatCompletionResponseMessage_2024Feb15Preview | A chat completion message generated by the model. |
content_filter_results | choices.content_filter_results | contentFilterChoiceResults_2024Feb15Preview | Information about the content filtering category (hate, sexual, violence, self_harm), whether it has been detected, the severity level (very_low, low, medium, or high, a scale that determines the intensity and risk level of harmful content), and whether it has been filtered or not. Also includes whether third-party text and profanity have been detected and filtered, and, if a customer block list filtered the content, its id. |
logprobs | choices.logprobs | chatCompletionChoiceLogProbs_2024Feb15Preview | Log probability information for the choice. |
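A response shaped like this definition is easy to unpack downstream. A minimal sketch, using a hand-built sample payload (the sample values are illustrative, not real output):

```python
# Sketch: pulling the answer and finish reason out of a chat completions
# response shaped like createChatCompletionResponse_2024Feb15Preview.

def first_answer(response):
    """Return (content, finish_reason) of the first choice."""
    choice = response["choices"][0]
    return choice["message"]["content"], choice["finish_reason"]

sample = {
    "id": "chatcmpl-123", "object": "chat.completion", "created": 1700000000,
    "model": "gpt-35-turbo",
    "choices": [{"index": 0, "finish_reason": "stop",
                 "message": {"role": "assistant", "content": "Hello!"}}],
    "usage": {"prompt_tokens": 9, "completion_tokens": 2, "total_tokens": 11},
}
```

In a flow, checking `finish_reason` (e.g. `stop` vs `length` or `content_filter`) before using the content avoids acting on truncated or filtered answers.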
chatCompletionChoiceLogProbs_2024Feb15Preview
Log probability information for the choice.
Name | Path | Type | Description |
---|---|---|---|
content | content | array of chatCompletionTokenLogprob_2024Feb15Preview | A list of message content tokens with log probability information. |
chatCompletionTokenLogprob_2024Feb15Preview
Name | Path | Type | Description |
---|---|---|---|
token | token | string | The token. |
logprob | logprob | number | The log probability of this token. |
bytes | bytes | array of integer | A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token. |
top_logprobs | top_logprobs | array of object | List of the most likely tokens and their log probability, at this token position. In rare cases, there may be fewer than the number of requested `top_logprobs` returned. |
token | top_logprobs.token | string | The token. |
logprob | top_logprobs.logprob | number | The log probability of this token. |
bytes | top_logprobs.bytes | array of integer | A list of integers representing the UTF-8 bytes representation of the token. Useful in instances where characters are represented by multiple tokens and their byte representations must be combined to generate the correct text representation. Can be null if there is no bytes representation for the token. |
chatCompletionResponseMessage_2024Feb15Preview
A chat completion message generated by the model.
Name | Path | Type | Description |
---|---|---|---|
role | role | chatCompletionResponseMessageRole_2024Feb15Preview | The role of the author of the response message. |
content | content | string | The contents of the message. |
context | context | azureChatExtensionsMessageContext_2024Feb15Preview | A representation of the additional context information available when Azure OpenAI chat extensions are involved in the generation of a corresponding chat completions response. This context information is only populated when using an Azure OpenAI request configured to use a matching extension. |
chatCompletionResponseMessageRole_2024Feb15Preview
The role of the author of the response message.
chatCompletionResponseObject_2024Feb15Preview
completionUsage_2024Feb15Preview
Usage statistics for the completion request.
Name | Path | Type | Description |
---|---|---|---|
prompt_tokens | prompt_tokens | integer | Number of tokens in the prompt. |
completion_tokens | completion_tokens | integer | Number of tokens in the generated completion. |
total_tokens | total_tokens | integer | Total number of tokens used in the request (prompt + completion). |
getSingleEmbeddingsResponse_2024Feb15Preview
Name | Path | Type | Description |
---|---|---|---|
object | object | string | |
model | model | string | |
data | data | array of getEmbeddingsResponseDataItem | |
prompt_tokens | usage.prompt_tokens | integer | |
total_tokens | usage.total_tokens | integer | |
getEmbeddingsResponseDataItem
Name | Path | Type | Description |
---|---|---|---|
index | index | integer | |
object | object | string | |
embedding | embedding | array of number | |
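Reading a vector out of a response shaped like getSingleEmbeddingsResponse_2024Feb15Preview is a one-liner; a sketch with an illustrative sample payload:

```python
# Sketch: extracting the vector from an embeddings response shaped like
# the definitions above. The sample payload is illustrative.

def extract_embedding(response):
    """Return the embedding vector of the first (single-input) data item."""
    return response["data"][0]["embedding"]

sample = {
    "object": "list", "model": "text-embedding-ada-002",
    "data": [{"index": 0, "object": "embedding",
              "embedding": [0.1, -0.2, 0.3]}],
    "usage": {"prompt_tokens": 4, "total_tokens": 4},
}
```

The extracted list of numbers is what you would store in a vector index or compare with cosine similarity.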