Configure Speech service containers

Speech containers enable customers to build one speech application architecture that is optimized to take advantage of both robust cloud capabilities and edge locality.

The Speech container runtime environment is configured by using the docker run command arguments. The container has both required and optional settings. The container-specific settings are the billing settings.

Configuration settings

The container has the following configuration settings:

| Required | Setting | Purpose |
|---|---|---|
| Yes | ApiKey | Tracks billing information. |
| No | ApplicationInsights | Enables adding Azure Application Insights telemetry support to your container. |
| Yes | Billing | Specifies the endpoint URI of the service resource on Azure. |
| Yes | Eula | Indicates that you've accepted the license for the container. |
| No | Fluentd | Writes log and, optionally, metric data to a Fluentd server. |
| No | HTTP Proxy | Configures an HTTP proxy for making outbound requests. |
| No | Logging | Provides ASP.NET Core logging support for your container. |
| No | Mounts | Reads and writes data from the host computer to the container and from the container back to the host computer. |

Important

The ApiKey, Billing, and Eula settings are used together, and you must provide valid values for all three of them; otherwise your container won't start. For more information about using these configuration settings to instantiate a container, see Billing.
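
For example, a minimal docker run command that supplies all three required settings might look like the following; the image location, billing endpoint, and key are placeholders that you replace with your own values:

docker run --rm -it -p 5000:5000 \
--memory 2g --cpus 1 \
<registry-location>/<image-name> \
Eula=accept \
Billing=<endpoint> \
ApiKey=<api-key>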

ApiKey configuration setting

The ApiKey setting specifies the Azure resource key used to track billing information for the container. You must specify a value for the ApiKey and the value must be a valid key for the Speech resource specified for the Billing configuration setting.

This setting can be found in the following place:

  • Azure portal: Speech Resource Management, under Keys
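
Example (the key value is a placeholder for one of your resource's keys):
ApiKey=<api-key>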

ApplicationInsights setting

The ApplicationInsights setting allows you to add Azure Application Insights telemetry support to your container. Application Insights provides in-depth monitoring of your container. You can easily monitor your container for availability, performance, and usage. You can also quickly identify and diagnose errors in your container.

The following table describes the configuration settings supported under the ApplicationInsights section.

| Required | Name | Data type | Description |
|---|---|---|---|
| No | InstrumentationKey | String | The instrumentation key of the Application Insights instance to which telemetry data for the container is sent. For more information, see Application Insights for ASP.NET Core. |

Example:
InstrumentationKey=123456789
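
If you start the container from the command line, this setting can also be passed as a docker run argument. The nested ApplicationInsights:InstrumentationKey form below is a sketch that follows the ASP.NET Core configuration-key convention used elsewhere in this article (for example, Logging:Disk:Format); the key and other values are placeholders:

# Note: the ApplicationInsights:InstrumentationKey argument form is assumed from
# ASP.NET Core configuration conventions; replace the placeholders with your own values.
docker run --rm -it -p 5000:5000 \
--memory 2g --cpus 1 \
<registry-location>/<image-name> \
Eula=accept \
Billing=<endpoint> \
ApiKey=<api-key> \
ApplicationInsights:InstrumentationKey=<instrumentation-key>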

Billing configuration setting

The Billing setting specifies the endpoint URI of the Speech resource on Azure used to meter billing information for the container. You must specify a value for this configuration setting, and the value must be a valid endpoint URI for a Speech resource on Azure. The container reports usage about every 10 to 15 minutes.

This setting can be found in the following place:

  • Azure portal: Labeled Endpoint on the Speech overview page

| Required | Name | Data type | Description |
|---|---|---|---|
| Yes | Billing | String | Billing endpoint URI. For more information on obtaining the billing URI, see billing. For more information and a complete list of regional endpoints, see Custom subdomain names for Azure AI services. |
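
Example (illustrative only; the exact host name depends on your resource's custom subdomain and region, so copy the value shown in the Azure portal rather than constructing it by hand):
Billing=https://<your-resource-name>.cognitiveservices.azure.com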

Eula setting

The Eula setting indicates that you've accepted the license for the container. You must specify a value for this configuration setting, and the value must be set to accept.

| Required | Name | Data type | Description |
|---|---|---|---|
| Yes | Eula | String | License acceptance |

Example:
Eula=accept

Azure AI services containers are licensed under your agreement governing your use of Azure. If you do not have an existing agreement governing your use of Azure, you agree that your agreement governing use of Azure is the Microsoft Online Subscription Agreement, which incorporates the Online Services Terms. For previews, you also agree to the Supplemental Terms of Use for Microsoft Azure Previews. By using the container you agree to these terms.

Fluentd settings

Fluentd is an open-source data collector for unified logging. The Fluentd settings manage the container's connection to a Fluentd server. The container includes a Fluentd logging provider, which allows your container to write logs and, optionally, metric data to a Fluentd server.

The following table describes the configuration settings supported under the Fluentd section.

| Name | Data type | Description |
|---|---|---|
| Host | String | The IP address or DNS host name of the Fluentd server. |
| Port | Integer | The port of the Fluentd server. The default value is 24224. |
| HeartbeatMs | Integer | The heartbeat interval, in milliseconds. If no event traffic has been sent before this interval expires, a heartbeat is sent to the Fluentd server. The default value is 60000 milliseconds (1 minute). |
| SendBufferSize | Integer | The network buffer space, in bytes, allocated for send operations. The default value is 32768 bytes (32 kilobytes). |
| TlsConnectionEstablishmentTimeoutMs | Integer | The timeout, in milliseconds, to establish an SSL/TLS connection with the Fluentd server. The default value is 10000 milliseconds (10 seconds). If UseTLS is set to false, this value is ignored. |
| UseTLS | Boolean | Indicates whether the container should use SSL/TLS for communicating with the Fluentd server. The default value is false. |
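
As a sketch of how these settings can be passed on the command line, the following docker run command enables Fluentd logging; the nested Fluentd:Host and Fluentd:Port argument form is assumed from the same configuration-key convention as the Logging settings, and the host and other values are placeholders:

# Note: the Fluentd:<setting> argument form is assumed from ASP.NET Core
# configuration conventions; replace the placeholders with your own values.
docker run --rm -it -p 5000:5000 \
--memory 2g --cpus 1 \
<registry-location>/<image-name> \
Eula=accept \
Billing=<endpoint> \
ApiKey=<api-key> \
Fluentd:Host=<fluentd-host> \
Fluentd:Port=24224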

HTTP proxy credentials settings

If you need to configure an HTTP proxy for making outbound requests, use these two arguments:

| Name | Data type | Description |
|---|---|---|
| HTTP_PROXY | string | The proxy to use, for example, http://proxy:8888 <proxy-url> |
| HTTP_PROXY_CREDS | string | Any credentials needed to authenticate against the proxy, for example, username:password. This value must be in lower-case. |
| <proxy-user> | string | The user for the proxy. |
| <proxy-password> | string | The password associated with <proxy-user> for the proxy. |

Example:

docker run --rm -it -p 5000:5000 \
--memory 2g --cpus 1 \
--mount type=bind,src=/home/azureuser/output,target=/output \
<registry-location>/<image-name> \
Eula=accept \
Billing=<endpoint> \
ApiKey=<api-key> \
HTTP_PROXY=<proxy-url> \
HTTP_PROXY_CREDS=<proxy-user>:<proxy-password>

Logging settings

The Logging settings manage ASP.NET Core logging support for your container. You can use the same configuration settings and values for your container that you use for an ASP.NET Core application.

The following logging providers are supported by the container:

| Provider | Purpose |
|---|---|
| Console | The ASP.NET Core Console logging provider. All of the ASP.NET Core configuration settings and default values for this logging provider are supported. |
| Debug | The ASP.NET Core Debug logging provider. All of the ASP.NET Core configuration settings and default values for this logging provider are supported. |
| Disk | The JSON logging provider. This logging provider writes log data to the output mount. |

This container command stores logging information in the JSON format to the output mount:

docker run --rm -it -p 5000:5000 \
--memory 2g --cpus 1 \
--mount type=bind,src=/home/azureuser/output,target=/output \
<registry-location>/<image-name> \
Eula=accept \
Billing=<endpoint> \
ApiKey=<api-key> \
Logging:Disk:Format=json \
Mounts:Output=/output

This container command shows debugging information, prefixed with dbug, while the container is running:

docker run --rm -it -p 5000:5000 \
--memory 2g --cpus 1 \
<registry-location>/<image-name> \
Eula=accept \
Billing=<endpoint> \
ApiKey=<api-key> \
Logging:Console:LogLevel:Default=Debug

Disk logging

The Disk logging provider supports the following configuration settings:

| Name | Data type | Description |
|---|---|---|
| Format | String | The output format for log files. Note: This value must be set to json to enable the logging provider. If this value is specified without also specifying an output mount while instantiating a container, an error occurs. |
| MaxFileSize | Integer | The maximum size, in megabytes (MB), of a log file. When the size of the current log file meets or exceeds this value, a new log file is started by the logging provider. If -1 is specified, the size of the log file is limited only by the maximum file size, if any, for the output mount. The default value is 1. |
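
Building on the earlier Disk logging example, the following command caps each JSON log file at 10 MB; the MaxFileSize value is illustrative, and the other placeholders are replaced with your own values:

docker run --rm -it -p 5000:5000 \
--memory 2g --cpus 1 \
--mount type=bind,src=/home/azureuser/output,target=/output \
<registry-location>/<image-name> \
Eula=accept \
Billing=<endpoint> \
ApiKey=<api-key> \
Logging:Disk:Format=json \
Logging:Disk:MaxFileSize=10 \
Mounts:Output=/output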

For more information about configuring ASP.NET Core logging support, see Settings file configuration.

Mount settings

Use bind mounts to read and write data to and from the container. You can specify an input mount or output mount by specifying the --mount option in the docker run command.

The Standard Speech containers don't use input or output mounts to store training or service data. However, custom speech containers rely on volume mounts.

The exact syntax of the host mount location varies depending on the host operating system. Additionally, the host computer's mount location might not be accessible due to a conflict between permissions used by the docker service account and the host mount location permissions.

| Optional | Name | Data type | Description |
|---|---|---|---|
| Not allowed | Input | String | Standard Speech containers don't use this. Custom speech containers use volume mounts. |
| Optional | Output | String | The target of the output mount. The default value is /output. This is the location of the logs, including container logs. |

Example:
--mount type=bind,src=c:\output,target=/output

Volume mount settings

The custom speech containers use volume mounts to persist custom models. You can specify a volume mount by adding the -v (or --volume) option to the docker run command.

Note

The volume mount settings are only applicable for custom speech to text containers.

Custom models are downloaded the first time that a new model is ingested as part of the custom speech container docker run command. Subsequent runs with the same ModelId for a custom speech container use the previously downloaded model. If the volume mount isn't provided, custom models can't be persisted.

The volume mount setting consists of three colon-separated (:) fields:

  1. The first field is the name of the volume on the host machine, for example C:\input.
  2. The second field is the directory in the container, for example /usr/local/models.
  3. The third field (optional) is a comma-separated list of options, for more information, see use volumes.

Here's a volume mount example that mounts the host machine's C:\input directory to the container's /usr/local/models directory.

-v C:\input:/usr/local/models
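
Putting it together, a custom speech to text container run that persists the downloaded model through a volume mount might look like the following sketch; the image location, endpoint, key, and model ID are placeholders, and ModelId identifies the custom model to download on the first run:

# Note: placeholder values; the ModelId argument applies only to custom speech containers.
docker run --rm -it -p 5000:5000 \
--memory 2g --cpus 1 \
-v C:\input:/usr/local/models \
<registry-location>/<image-name> \
Eula=accept \
Billing=<endpoint> \
ApiKey=<api-key> \
ModelId=<model-id>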

Next steps