Azure OpenAI embeddings input binding for Azure Functions

Important

The Azure OpenAI extension for Azure Functions is currently in preview.

The Azure OpenAI embeddings input binding allows you to generate embeddings for inputs. The binding can generate embeddings from files or raw text inputs.

For information on setup and configuration details of the Azure OpenAI extension, see Azure OpenAI extensions for Azure Functions. To learn more about embeddings in Azure OpenAI Service, see Understand embeddings in Azure OpenAI Service.

Note

For JavaScript, references and examples are provided only for the Node.js v4 programming model. For Python, references and examples are provided only for the v2 programming model. While both C# process models are supported, only isolated worker model examples are provided.

Example

This example shows how to generate embeddings for a raw text string.

[Function(nameof(GenerateEmbeddings_Http_RequestAsync))]
public async Task GenerateEmbeddings_Http_RequestAsync(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "embeddings")] HttpRequestData req,
    [EmbeddingsInput("{RawText}", InputType.RawText, Model = "%EMBEDDING_MODEL_DEPLOYMENT_NAME%")] EmbeddingsContext embeddings)
{
    using StreamReader reader = new(req.Body);
    string request = await reader.ReadToEndAsync();

    EmbeddingsRequest? requestBody = JsonSerializer.Deserialize<EmbeddingsRequest>(request);

    this.logger.LogInformation(
        "Received {count} embedding(s) for input text containing {length} characters.",
        embeddings.Count,
        requestBody?.RawText?.Length);

    // TODO: Store the embeddings into a database or other storage.
}

This example shows how to generate embeddings from the contents of a file at a specified path that is accessible to the function.

[Function(nameof(GetEmbeddings_Http_FilePath))]
public async Task GetEmbeddings_Http_FilePath(
    [HttpTrigger(AuthorizationLevel.Function, "post", Route = "embeddings-from-file")] HttpRequestData req,
    [EmbeddingsInput("{FilePath}", InputType.FilePath, MaxChunkLength = 512, Model = "%EMBEDDING_MODEL_DEPLOYMENT_NAME%")] EmbeddingsContext embeddings)
{
    using StreamReader reader = new(req.Body);
    string request = await reader.ReadToEndAsync();

    EmbeddingsRequest? requestBody = JsonSerializer.Deserialize<EmbeddingsRequest>(request);
    this.logger.LogInformation(
        "Received {count} embedding(s) for input file '{path}'.",
        embeddings.Count,
        requestBody?.FilePath);

    // TODO: Store the embeddings into a database or other storage.
}
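
Both C# examples deserialize the request body into an EmbeddingsRequest type that isn't shown in the snippets. A minimal sketch of such a type, assuming a JSON body that carries RawText and FilePath properties, might look like this:

public class EmbeddingsRequest
{
    // Raw text to generate embeddings for; referenced by the {RawText} binding expression.
    public string? RawText { get; set; }

    // Path to a file whose contents are embedded; referenced by the {FilePath} binding expression.
    public string? FilePath { get; set; }
}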

This example shows how to generate embeddings for a raw text string.

@FunctionName("GenerateEmbeddingsHttpRequest")
public HttpResponseMessage generateEmbeddingsHttpRequest(
    @HttpTrigger(
        name = "req", 
        methods = {HttpMethod.POST},
        authLevel = AuthorizationLevel.ANONYMOUS,
        route = "embeddings")
    HttpRequestMessage<EmbeddingsRequest> request,
    @EmbeddingsInput(name = "Embeddings", input = "{RawText}", inputType = InputType.RawText, model = "%EMBEDDING_MODEL_DEPLOYMENT_NAME%") String embeddingsContext,
    final ExecutionContext context) {

    if (request.getBody() == null)
    {
        throw new IllegalArgumentException("Invalid request body. Make sure that you pass in {\"rawText\": value } as the request body.");
    }

    JSONObject embeddingsContextJsonObject = new JSONObject(embeddingsContext);
    
    context.getLogger().info(String.format("Received %d embedding(s) for input text containing %s characters.",
            embeddingsContextJsonObject.getJSONObject("response")
                    .getJSONArray("data")
                    .getJSONObject(0)
                    .getJSONArray("embedding").length(),
            request.getBody().getRawText().length()));

    // TODO: Store the embeddings into a database or other storage.
    return request.createResponseBuilder(HttpStatus.ACCEPTED)
            .header("Content-Type", "application/json")
            .build();
}

This example shows how to generate embeddings from the contents of a file at a specified path that is accessible to the function.

@FunctionName("GenerateEmbeddingsHttpFilePath")
public HttpResponseMessage generateEmbeddingsHttpFilePath(
    @HttpTrigger(
        name = "req", 
        methods = {HttpMethod.POST},
        authLevel = AuthorizationLevel.ANONYMOUS,
        route = "embeddings-from-file")
    HttpRequestMessage<EmbeddingsRequest> request,
    @EmbeddingsInput(name = "Embeddings", input = "{FilePath}", inputType = InputType.FilePath, maxChunkLength = 512, model = "%EMBEDDING_MODEL_DEPLOYMENT_NAME%") String embeddingsContext,
    final ExecutionContext context) {

    if (request.getBody() == null)
    {
        throw new IllegalArgumentException("Invalid request body. Make sure that you pass in {\"filePath\": value } as the request body.");
    }

    JSONObject embeddingsContextJsonObject = new JSONObject(embeddingsContext);
    
    context.getLogger().info(String.format("Received %d embedding(s) for input file %s.",
            embeddingsContextJsonObject.getJSONObject("response")
                    .getJSONArray("data")
                    .getJSONObject(0)
                    .getJSONArray("embedding").length(),
            request.getBody().getFilePath()));

    // TODO: Store the embeddings into a database or other storage.
    return request.createResponseBuilder(HttpStatus.ACCEPTED)
            .header("Content-Type", "application/json")
            .build();
}


This example shows how to generate embeddings for a raw text string.

import { app, input } from "@azure/functions";

// Assumed shape of the JSON request body; the {RawText} binding expression resolves against it.
interface EmbeddingsHttpRequest {
    RawText: string;
}

const embeddingsHttpInput = input.generic({
    input: '{RawText}',
    inputType: 'RawText',
    type: 'embeddings',
    model: '%EMBEDDING_MODEL_DEPLOYMENT_NAME%'
})

app.http('generateEmbeddings', {
    methods: ['POST'],
    route: 'embeddings',
    authLevel: 'function',
    extraInputs: [embeddingsHttpInput],
    handler: async (request, context) => {
        const requestBody = await request.json() as EmbeddingsHttpRequest;
        let response: any = context.extraInputs.get(embeddingsHttpInput);

        context.log(
            `Received ${response.count} embedding(s) for input text containing ${requestBody.RawText.length} characters.`
        );
        
        // TODO: Store the embeddings into a database or other storage.

        return {status: 202}
    }
});

This example shows how to generate embeddings from the contents of a file at a specified path that is accessible to the function.

import { app, input } from "@azure/functions";

// Assumed shape of the JSON request body; the {FilePath} binding expression resolves against it.
interface EmbeddingsFilePath {
    FilePath: string;
}

const embeddingsFilePathInput = input.generic({
    input: '{FilePath}',
    inputType: 'FilePath',
    type: 'embeddings',
    maxChunkLength: 512,
    model: '%EMBEDDING_MODEL_DEPLOYMENT_NAME%'
})

app.http('getEmbeddingsFilePath', {
    methods: ['POST'],
    route: 'embeddings-from-file',
    authLevel: 'function',
    extraInputs: [embeddingsFilePathInput],
    handler: async (request, context) => {
        const requestBody = await request.json() as EmbeddingsFilePath;
        let response: any = context.extraInputs.get(embeddingsFilePathInput);

        context.log(
            `Received ${response.count} embedding(s) for input file ${requestBody.FilePath}.`
        );
        
        // TODO: Store the embeddings into a database or other storage.

        return {status: 202}
    }
});

This example shows how to generate embeddings for a raw text string.

Here's the function.json file for generating the embeddings:

{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "Request",
      "route": "embeddings",
      "methods": [
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "Response"
    },
    {
      "name": "Embeddings",
      "type": "embeddings",
      "direction": "in",
      "inputType": "RawText",
      "input": "{RawText}",
      "model": "%EMBEDDING_MODEL_DEPLOYMENT_NAME%"
    }
  ]
}

For more information about function.json file properties, see the Configuration section.

Here's the PowerShell code that uses the embeddings returned by the binding:

using namespace System.Net

param($Request, $TriggerMetadata, $Embeddings)

$input = $Request.Body.RawText

Write-Host "Received $($Embeddings.Count) embedding(s) for input text containing $($input.Length) characters."

Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
        StatusCode = [HttpStatusCode]::Accepted
})

This example shows how to generate embeddings for a raw text string.

import json
import logging

import azure.functions as func

app = func.FunctionApp()

@app.function_name("GenerateEmbeddingsHttpRequest")
@app.route(route="embeddings", methods=["POST"])
@app.embeddings_input(arg_name="embeddings", input="{rawText}", input_type="rawText", model="%EMBEDDING_MODEL_DEPLOYMENT_NAME%")
def generate_embeddings_http_request(req: func.HttpRequest, embeddings: str) -> func.HttpResponse:
    user_message = req.get_json()
    embeddings_json = json.loads(embeddings)
    embeddings_request = {
        "raw_text": user_message.get("RawText"),
        "file_path": user_message.get("FilePath")
    }
    logging.info(f'Received {embeddings_json.get("count")} embedding(s) for input text '
        f'containing {len(embeddings_request.get("raw_text"))} characters.')
    # TODO: Store the embeddings into a database or other storage.
    return func.HttpResponse(status_code=200)

Attributes

Apply the EmbeddingsInput attribute to define an embeddings input binding, which supports these parameters:

Input: The input string for which to generate embeddings.
Model: Optional. The ID of the model to use, which defaults to text-embedding-ada-002. You shouldn't change the model for an existing database. For more information, see Usage.
MaxChunkLength: Optional. The maximum number of characters used for chunking the input. For more information, see Usage.
MaxOverlap: Optional. The maximum number of characters to overlap between chunks.
InputType: Optional. The type of the input.
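
The earlier examples set MaxChunkLength but not MaxOverlap. As an illustration only, an attribute that sets both chunking parameters explicitly (assuming the same %EMBEDDING_MODEL_DEPLOYMENT_NAME% app setting) might look like this:

[EmbeddingsInput(
    "{FilePath}",
    InputType.FilePath,
    MaxChunkLength = 512,   // cap each chunk at 512 characters
    MaxOverlap = 128,       // overlap adjacent chunks by up to 128 characters
    Model = "%EMBEDDING_MODEL_DEPLOYMENT_NAME%")]
EmbeddingsContext embeddings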

Annotations

The EmbeddingsInput annotation enables you to define an embeddings input binding, which supports these parameters:

name: The name of the input binding.
input: The input string for which to generate embeddings.
model: Optional. The ID of the model to use, which defaults to text-embedding-ada-002. You shouldn't change the model for an existing database. For more information, see Usage.
maxChunkLength: Optional. The maximum number of characters used for chunking the input. For more information, see Usage.
maxOverlap: Optional. The maximum number of characters to overlap between chunks.
inputType: Optional. The type of the input.

Decorators

During the preview, define the input binding by using the embeddings_input decorator, which supports these parameters:

arg_name: The name of the variable that represents the binding parameter.
input: The input string for which to generate embeddings.
model: Optional. The ID of the model to use, which defaults to text-embedding-ada-002. You shouldn't change the model for an existing database. For more information, see Usage.
max_chunk_length: Optional. The maximum number of characters used for chunking the input. For more information, see Usage.
max_overlap: Optional. The maximum number of characters to overlap between chunks.
input_type: The type of the input.

Configuration

The binding supports these configuration properties that you set in the function.json file.

type: Must be embeddings.
direction: Must be in.
name: The name of the input binding.
input: The input string for which to generate embeddings.
model: Optional. The ID of the model to use, which defaults to text-embedding-ada-002. You shouldn't change the model for an existing database. For more information, see Usage.
maxChunkLength: Optional. The maximum number of characters used for chunking the input. For more information, see Usage.
maxOverlap: Optional. The maximum number of characters to overlap between chunks.
inputType: Optional. The type of the input.

Configuration

The binding supports these properties, which are defined in your code:

input: The input string for which to generate embeddings.
model: Optional. The ID of the model to use, which defaults to text-embedding-ada-002. You shouldn't change the model for an existing database. For more information, see Usage.
maxChunkLength: Optional. The maximum number of characters used for chunking the input. For more information, see Usage.
maxOverlap: Optional. The maximum number of characters to overlap between chunks.
inputType: Optional. The type of the input.

See the Example section for complete examples.

Usage

The default embeddings model is text-embedding-ada-002. Changing the embeddings model changes how embeddings are stored in the vector database, and lookups can start misbehaving when new embeddings don't match the data that was previously ingested into the database.

When calculating the maximum character length for input chunks, consider that the maximum number of input tokens allowed for second-generation embedding models like text-embedding-ada-002 is 8191. A single token is approximately four characters of English text, which translates to roughly 32,000 characters of English input that can fit into a single chunk.
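
As a rough illustration of that arithmetic only (the four-characters-per-token ratio is a heuristic for English text, not an exact tokenizer count), a per-chunk character budget can be estimated from the token limit like this:

// Estimate a per-chunk character budget from a model's input token limit.
const int MaxInputTokens = 8191;          // limit for text-embedding-ada-002
const double ApproxCharsPerToken = 4.0;   // rough heuristic for English text

int approxMaxChars = (int)(MaxInputTokens * ApproxCharsPerToken); // about 32,000 characters

Console.WriteLine($"A single chunk holds roughly {approxMaxChars:N0} characters of English text.");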