Create a function on Linux using a custom container
In this tutorial, you create and deploy your code to Azure Functions as a custom Docker container using a Linux base image. You typically use a custom image when your functions require a specific language version or have a specific dependency or configuration that isn't provided by the built-in image.
Azure Functions supports any language or runtime by using custom handlers. For some languages, such as the R programming language used in this tutorial, you need to install the runtime or additional libraries as dependencies, which requires the use of a custom container.
Deploying your function code in a custom Linux container requires a Premium plan or a Dedicated (App Service) plan. Completing this tutorial incurs costs of a few US dollars in your Azure account, which you can minimize by cleaning up resources when you're done.
You can also use a default Azure App Service container as described in Create your first function hosted on Linux. Supported base images for Azure Functions are found in the Azure Functions base images repo.
In this tutorial, you learn how to:
- Create a function app and Dockerfile using the Azure Functions Core Tools.
- Build a custom image using Docker.
- Publish a custom image to a container registry.
- Create supporting resources in Azure for the function app.
- Deploy a function app from Docker Hub.
- Add application settings to the function app.
- Enable continuous deployment.
- Enable SSH connections to the container.
- Add a Queue storage output binding.
You can follow this tutorial on any computer running Windows, macOS, or Linux.
Important
When using custom containers, you're required to keep the base image of your container updated to the latest supported base image. Supported base images for Azure Functions are language-specific and are found in the Azure Functions base image repos for each language.
The Functions team is committed to publishing monthly updates for these base images. Regular updates include the latest minor version updates and security fixes for both the Functions runtime and languages. For custom containers, you should regularly update the base image in the Dockerfile, rebuild, and redeploy updated versions of your custom containers.
Configure your local environment
Before you begin, you must have the following requirements in place:
Azure Functions Core Tools version 4.x.
One of the following tools for creating Azure resources:
Azure CLI version 2.4 or later.
The Azure Az PowerShell module version 5.9.0 or later.
- Node.js, Active LTS and Maintenance LTS versions (16.16.0 and 14.20.0 recommended).
- Python 3.8 (64-bit), Python 3.7 (64-bit), Python 3.6 (64-bit), which are supported by Azure Functions.
The Java Developer Kit version 8 or 11.
Apache Maven version 3.0 or above.
- Development tools for the language you're using. This tutorial uses the R programming language as an example.
If you don't have an Azure subscription, create an Azure free account before you begin.
You also need Docker and a Docker ID (Docker Hub account):
Create and activate a virtual environment
In a suitable folder, run the following commands to create and activate a virtual environment named .venv. Ensure that you use Python 3.8, 3.7, or 3.6, which are supported by Azure Functions.
python -m venv .venv
source .venv/bin/activate
If Python didn't install the venv package on your Linux distribution, run the following command:
sudo apt-get install python3-venv
You run all subsequent commands in this activated virtual environment.
Create and test the local functions project
In a terminal or command prompt, run the following command for your chosen language to create a function app project in the current folder:
func init --worker-runtime dotnet --docker
func init --worker-runtime node --language javascript --docker
func init --worker-runtime powershell --docker
func init --worker-runtime python --docker
func init --worker-runtime node --language typescript --docker
In an empty folder, run the following command to generate the Functions project from a Maven archetype:
mvn archetype:generate -DarchetypeGroupId=com.microsoft.azure -DarchetypeArtifactId=azure-functions-archetype -DjavaVersion=8 -Ddocker
The -DjavaVersion parameter tells the Functions runtime which version of Java to use. Use -DjavaVersion=11 if you want your functions to run on Java 11. When you don't specify -DjavaVersion, Maven defaults to Java 8. For more information, see Java versions.
Important
The JAVA_HOME environment variable must be set to the install location of the correct version of the JDK to complete this article.
Maven asks you for values needed to finish generating the project on deployment. Follow the prompts and provide the following information:
Prompt | Value | Description
---|---|---
groupId | com.fabrikam | A value that uniquely identifies your project across all projects, following the package naming rules for Java.
artifactId | fabrikam-functions | A value that is the name of the jar, without a version number.
version | 1.0-SNAPSHOT | Select the default value.
package | com.fabrikam.functions | A value that is the Java package for the generated function code. Use the default.
Type Y or press Enter to confirm.

Maven creates the project files in a new folder named artifactId, which in this example is fabrikam-functions.
func init --worker-runtime custom --docker
The --docker option generates a Dockerfile for the project, which defines a suitable custom container for use with Azure Functions and the selected runtime.
Navigate into the project folder:
cd fabrikam-functions
No changes are needed to the Dockerfile.
Use the following command to add a function to your project, where the --name argument is the unique name of your function and the --template argument specifies the function's trigger. func new creates a C# code file in your project.
func new --name HttpExample --template "HTTP trigger" --authlevel anonymous
Use the following command to add a function to your project, where the --name argument is the unique name of your function and the --template argument specifies the function's trigger. func new creates a subfolder matching the function name that contains a configuration file named function.json.
func new --name HttpExample --template "HTTP trigger" --authlevel anonymous
In a text editor, create a file in the project folder named handler.R. Add the following code as its content:
library(httpuv)
PORTEnv <- Sys.getenv("FUNCTIONS_CUSTOMHANDLER_PORT")
PORT <- strtoi(PORTEnv, base = 0L)
http_not_found <- list(
status=404,
body='404 Not Found'
)
http_method_not_allowed <- list(
status=405,
body='405 Method Not Allowed'
)
hello_handler <- list(
GET = function (request) {
list(body=paste(
"Hello,",
if(substr(request$QUERY_STRING,1,6)=="?name=")
substr(request$QUERY_STRING,7,40) else "World",
sep=" "))
}
)
routes <- list(
'/api/HttpExample' = hello_handler
)
router <- function (routes, request) {
if (!request$PATH_INFO %in% names(routes)) {
return(http_not_found)
}
path_handler <- routes[[request$PATH_INFO]]
if (!request$REQUEST_METHOD %in% names(path_handler)) {
return(http_method_not_allowed)
}
method_handler <- path_handler[[request$REQUEST_METHOD]]
return(method_handler(request))
}
app <- list(
call = function (request) {
response <- router(routes, request)
if (!'status' %in% names(response)) {
response$status <- 200
}
if (!'headers' %in% names(response)) {
response$headers <- list()
}
if (!'Content-Type' %in% names(response$headers)) {
response$headers[['Content-Type']] <- 'text/plain'
}
return(response)
}
)
cat(paste0("Server listening on :", PORT, "...\n"))
runServer("0.0.0.0", PORT, app)
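To clarify the contract the R handler implements, here's a minimal sketch of the same route-and-method dispatch in Python. It's illustrative only: a request is modeled as a dict with PATH_INFO and REQUEST_METHOD keys, mirroring the fields httpuv exposes to the R handler.

```python
# Illustrative sketch of handler.R's dispatch logic, not production code.
# A "request" here is a dict with PATH_INFO and REQUEST_METHOD keys.

HTTP_NOT_FOUND = {"status": 404, "body": "404 Not Found"}
HTTP_METHOD_NOT_ALLOWED = {"status": 405, "body": "405 Method Not Allowed"}

def hello_get(request):
    # Mirrors hello_handler$GET: greet the caller.
    return {"status": 200, "body": "Hello, World"}

ROUTES = {"/api/HttpExample": {"GET": hello_get}}

def route(routes, request):
    # Unknown path -> 404; known path but unsupported method -> 405.
    path_handler = routes.get(request["PATH_INFO"])
    if path_handler is None:
        return HTTP_NOT_FOUND
    method_handler = path_handler.get(request["REQUEST_METHOD"])
    if method_handler is None:
        return HTTP_METHOD_NOT_ALLOWED
    return method_handler(request)
```

The app wrapper in handler.R then fills in a default status and Content-Type header before httpuv writes the response.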
In host.json, modify the customHandler section to configure the custom handler's startup command.
"customHandler": {
"description": {
"defaultExecutablePath": "Rscript",
"arguments": [
"handler.R"
]
},
"enableForwardingHttpRequest": true
}
To test the function locally, start the local Azure Functions runtime host in the root of the project folder.
func start
npm install
npm start
mvn clean package
mvn azure-functions:run
R -e "install.packages('httpuv', repos='http://cran.rstudio.com/')"
func start
After you see the HttpExample endpoint written to the output, navigate to http://localhost:7071/api/HttpExample?name=Functions. The browser should display a "hello" message that echoes back Functions, the value supplied to the name query parameter.
Press Ctrl+C to stop the host.
Build the container image and test locally
(Optional) Examine the Dockerfile in the root of the project folder. The Dockerfile describes the required environment to run the function app on Linux. The complete list of supported base images for Azure Functions can be found in the Azure Functions base image page.
Examine the Dockerfile in the root of the project folder. The Dockerfile describes the required environment to run the function app on Linux. Custom handler applications use the mcr.microsoft.com/azure-functions/dotnet:3.0-appservice image as their base.
Modify the Dockerfile to install R. Replace the contents of the Dockerfile with the following code:
FROM mcr.microsoft.com/azure-functions/dotnet:3.0-appservice
ENV AzureWebJobsScriptRoot=/home/site/wwwroot \
AzureFunctionsJobHost__Logging__Console__IsEnabled=true
RUN apt update && \
apt install -y r-base && \
R -e "install.packages('httpuv', repos='http://cran.rstudio.com/')"
COPY . /home/site/wwwroot
In the root project folder, run the docker build command, providing the name azurefunctionsimage and the tag v1.0.0. Replace <DOCKER_ID> with your Docker Hub account ID. This command builds the Docker image for the container.
docker build --tag <DOCKER_ID>/azurefunctionsimage:v1.0.0 .
When the command completes, you can run the new container locally.
To test the build, run the image in a local container using the docker run command, replacing <docker_id> again with your Docker Hub account ID, and adding the ports argument -p 8080:80:
docker run -p 8080:80 -it <docker_id>/azurefunctionsimage:v1.0.0
After the image starts in the local container, browse to http://localhost:8080/api/HttpExample?name=Functions, which should display the same "hello" message as before. Because the HTTP-triggered function you created uses anonymous authorization, you can call the function running in the container without having to obtain an access key. For more information, see authorization keys.
After verifying the function app in the container, press Ctrl+C to stop the container.
Push the image to Docker Hub
Docker Hub is a container registry that hosts images and provides image and container services. To share your image, which includes deploying to Azure, you must push it to a registry.
If you haven't already signed in to Docker, do so with the docker login command, replacing <docker_id> with your Docker Hub account ID. This command prompts you for your username and password. A "Login Succeeded" message confirms that you're signed in.

docker login
After you've signed in, push the image to Docker Hub by using the docker push command, again replacing <docker_id> with your Docker Hub account ID.

docker push <docker_id>/azurefunctionsimage:v1.0.0
Depending on your network speed, pushing the image for the first time might take a few minutes (pushing subsequent changes is much faster). While you're waiting, you can proceed to the next section and create Azure resources in another terminal.
Create supporting Azure resources for your function
Before you can deploy your function code to Azure, you need to create three resources:
- A resource group, which is a logical container for related resources.
- A Storage account, which is used to maintain state and other information about your functions.
- A function app, which provides the environment for executing your function code. A function app maps to your local function project and lets you group functions as a logical unit for easier management, deployment, and sharing of resources.
Use the following commands to create these items. Both Azure CLI and PowerShell are supported.
If you haven't already done so, sign in to Azure:
az login
The az login command signs you in to your Azure account.

Create a resource group named AzureFunctionsContainers-rg in your chosen region:

az group create --name AzureFunctionsContainers-rg --location <REGION>
The az group create command creates a resource group. In the above command, replace <REGION> with a region near you, using an available region code returned from the az account list-locations command.

Create a general-purpose storage account in your resource group and region:
az storage account create --name <STORAGE_NAME> --location <REGION> --resource-group AzureFunctionsContainers-rg --sku Standard_LRS
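Storage account names are easy to get wrong. The naming rule described in this section (3 to 24 characters, numbers and lowercase letters only) can be sketched as a quick pre-check; is_valid_storage_name is an illustrative helper, not part of any Azure SDK:

```python
import re

def is_valid_storage_name(name: str) -> bool:
    """Check the storage account naming rule: 3-24 characters,
    lowercase letters and numbers only (illustrative helper)."""
    return re.fullmatch(r"[a-z0-9]{3,24}", name) is not None
```

Running the candidate name through a check like this before calling az storage account create saves a round trip to Azure when the name is invalid.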
The az storage account create command creates the storage account. In the previous example, replace <STORAGE_NAME> with a name that is appropriate to you and unique in Azure Storage. Storage account names must be between 3 and 24 characters long and can contain numbers and lowercase letters only. Standard_LRS specifies a general-purpose account supported by Functions.

Use the following command to create a Premium plan for Azure Functions named myPremiumPlan in the Elastic Premium 1 pricing tier (--sku EP1), in your <REGION>, and on Linux (--is-linux):

az functionapp plan create --resource-group AzureFunctionsContainers-rg --name myPremiumPlan --location <REGION> --number-of-workers 1 --sku EP1 --is-linux
We use the Premium plan here, which can scale as needed. For more information about hosting, see Azure Functions hosting plans comparison. For more information on how to calculate costs, see the Functions pricing page.
The command also creates an associated Azure Application Insights instance in the same resource group, with which you can monitor your function app and view logs. For more information, see Monitor Azure Functions. The instance incurs no costs until you activate it.
Create and configure a function app on Azure with the image
A function app on Azure manages the execution of your functions in your hosting plan. In this section, you use the Azure resources from the previous section to create a function app from an image on Docker Hub and configure it with a connection string to Azure Storage.
Create a function app using the following command:
az functionapp create --name <APP_NAME> --storage-account <STORAGE_NAME> --resource-group AzureFunctionsContainers-rg --plan myPremiumPlan --deployment-container-image-name <DOCKER_ID>/azurefunctionsimage:v1.0.0
In the az functionapp create command, the deployment-container-image-name parameter specifies the image to use for the function app. You can use the az functionapp config container show command to view information about the image used for deployment. You can also use the az functionapp config container set command to deploy from a different image.

Note
If you're using a custom container registry, then the deployment-container-image-name parameter will refer to the registry URL.
In this example, replace <STORAGE_NAME> with the name you used in the previous section for the storage account. Also, replace <APP_NAME> with a globally unique name appropriate to you, and <DOCKER_ID> with your Docker Hub account ID. When you're deploying from a custom container registry, use the deployment-container-image-name parameter to indicate the URL of the registry.

Tip
You can use the DisableColor setting in the host.json file to prevent ANSI control characters from being written to the container logs.

Use the following command to get the connection string for the storage account you created:
az storage account show-connection-string --resource-group AzureFunctionsContainers-rg --name <STORAGE_NAME> --query connectionString --output tsv
The az storage account show-connection-string command returns the connection string for the storage account. Replace <STORAGE_NAME> with the name of the storage account you created earlier.

Use the following command to add the setting to the function app:
az functionapp config appsettings set --name <APP_NAME> --resource-group AzureFunctionsContainers-rg --settings AzureWebJobsStorage=<CONNECTION_STRING>
The az functionapp config appsettings set command creates the setting. In this command, replace <APP_NAME> with the name of your function app and <CONNECTION_STRING> with the connection string from the previous step. The connection string should be a long encoded string that begins with DefaultEndpointsProtocol=. The function can now use this connection string to access the storage account.
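A storage connection string is a semicolon-delimited list of key=value pairs. As a hedged sketch (the parsing helper and the example values are illustrative, not a real account key), you could pre-check one before passing it to the app setting:

```python
def parse_connection_string(conn: str) -> dict:
    """Split an Azure Storage connection string into key/value pairs.
    Only the first '=' in each segment separates key from value, so
    base64 padding in the account key is preserved."""
    pairs = {}
    for segment in conn.split(";"):
        if segment:
            key, _, value = segment.partition("=")
            pairs[key] = value
    return pairs

# Fake example values; a real AccountKey is a long base64 string.
example = ("DefaultEndpointsProtocol=https;AccountName=mystorage;"
           "AccountKey=abc123==;EndpointSuffix=core.windows.net")
parts = parse_connection_string(example)
```

If parse_connection_string doesn't find a DefaultEndpointsProtocol or AccountName key, you probably copied the value incorrectly.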
Note
If you publish your custom image to a private container registry, you must also set the DOCKER_REGISTRY_SERVER_USERNAME and DOCKER_REGISTRY_SERVER_PASSWORD variables. For more information, see Custom containers in the App Service settings reference.
Verify your functions on Azure
With the image deployed to your function app in Azure, you can now invoke the function as before through HTTP requests. In your browser, navigate to the following URL:
https://<APP_NAME>.azurewebsites.net/api/HttpExample?name=Functions
Replace <APP_NAME> with the name of your function app. When you navigate to this URL, the browser should display output similar to what you saw when you ran the function locally.
Enable continuous deployment to Azure
You can enable Azure Functions to automatically update your deployment of an image whenever you update the image in the registry.
Use the following command to enable continuous deployment and to get the webhook URL:
az functionapp deployment container config --enable-cd --query CI_CD_URL --output tsv --name <APP_NAME> --resource-group AzureFunctionsContainers-rg
The az functionapp deployment container config command enables continuous deployment and returns the deployment webhook URL. You can retrieve this URL at any later time by using the az functionapp deployment container show-cd-url command. As before, replace <APP_NAME> with your function app name.

Copy the deployment webhook URL to the clipboard.
Open Docker Hub, sign in, and select Repositories on the navigation bar. Locate and select the image, select the Webhooks tab, specify a Webhook name, paste your URL in Webhook URL, and then select Create.
With the webhook set, Azure Functions redeploys your image whenever you update it in Docker Hub.
Enable SSH connections
SSH enables secure communication between a container and a client. With SSH enabled, you can connect to your container using App Service Advanced Tools (Kudu). For easy connection, Azure Functions provides a base image that has SSH already enabled. You only need to edit your Dockerfile, then rebuild and redeploy the image; you can then connect to the container through Advanced Tools (Kudu).
In your Dockerfile, append the string -appservice to the base image in your FROM instruction:

FROM mcr.microsoft.com/azure-functions/dotnet:3.0-appservice
FROM mcr.microsoft.com/azure-functions/node:2.0-appservice
FROM mcr.microsoft.com/azure-functions/powershell:2.0-appservice
FROM mcr.microsoft.com/azure-functions/python:2.0-python3.7-appservice
FROM mcr.microsoft.com/azure-functions/node:2.0-appservice
Rebuild the image by using the docker build command again, replacing <docker_id> with your Docker Hub account ID:

docker build --tag <docker_id>/azurefunctionsimage:v1.0.0 .
Push the updated image to Docker Hub, which should take considerably less time than the first push. Only the updated segments of the image need to be uploaded now.
docker push <docker_id>/azurefunctionsimage:v1.0.0
Azure Functions automatically redeploys the image to your function app; the process takes less than a minute.
In a browser, open https://<app_name>.scm.azurewebsites.net/ and replace <app_name> with your unique name. This URL is the Advanced Tools (Kudu) endpoint for your function app container.

Sign in to your Azure account, and then select SSH to establish a connection with the container. Connecting might take a few moments if Azure is still updating the container image.
After a connection is established with your container, run the top command to view the currently running processes.
Write to Azure Queue Storage
Azure Functions lets you connect your functions to other Azure services and resources without having to write your own integration code. These bindings, which represent both input and output, are declared within the function definition. Data from bindings is provided to the function as parameters. A trigger is a special type of input binding. Although a function has only one trigger, it can have multiple input and output bindings. For more information, see Azure Functions triggers and bindings concepts.
This section shows you how to integrate your function with Azure Queue Storage. The output binding that you add to this function writes data from an HTTP request to a message in the queue.
Retrieve the Azure Storage connection string
Earlier, you created an Azure Storage account for your function app's use. The connection string for this account is stored securely in app settings in Azure. By downloading the setting into the local.settings.json file, you can use the connection to write to a storage queue in the same account when running the function locally.
From the root of the project, run the following command, replacing <APP_NAME> with the name of your function app from the previous step. This command overwrites any existing values in the file.

func azure functionapp fetch-app-settings <APP_NAME>

Open the local.settings.json file and locate the value named AzureWebJobsStorage, which is the storage account connection string. You use the name AzureWebJobsStorage and the connection string in other sections of this article.
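The downloaded file follows the Core Tools schema, with settings nested under a Values object. A hedged sketch of reading AzureWebJobsStorage back programmatically; to stay self-contained, it writes a throwaway example file instead of touching your real local.settings.json:

```python
import json
from pathlib import Path
from tempfile import TemporaryDirectory

def get_storage_connection(settings_path):
    """Return the AzureWebJobsStorage value from a local.settings.json file."""
    settings = json.loads(Path(settings_path).read_text())
    return settings["Values"]["AzureWebJobsStorage"]

# Demonstrate against a throwaway file using the Core Tools schema;
# the connection string here is a fake placeholder.
with TemporaryDirectory() as tmp:
    path = Path(tmp) / "local.settings.json"
    path.write_text(json.dumps({
        "IsEncrypted": False,
        "Values": {"AzureWebJobsStorage": "DefaultEndpointsProtocol=https;AccountName=example"}
    }))
    conn = get_storage_connection(path)
```

A missing Values key or an empty AzureWebJobsStorage entry usually means the fetch-app-settings step didn't run against the right app.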
Important
Because the local.settings.json file contains secrets downloaded from Azure, always exclude this file from source control. The .gitignore file created with a local functions project excludes the file by default.
Register binding extensions
Except for HTTP and timer triggers, bindings are implemented as extension packages. Run the following dotnet add package command in the Terminal window to add the Storage extension package to your project.
dotnet add package Microsoft.Azure.WebJobs.Extensions.Storage
Now, you can add the storage output binding to your project.
Add an output binding definition to the function
Although a function can have only one trigger, it can have multiple input and output bindings, which lets you connect to other Azure services and resources without writing custom integration code.
The way you declare binding attributes depends on your Python programming model.
You declare these bindings in the function.json file in your function folder. From the previous quickstart, your function.json file in the HttpExample folder contains two bindings in the bindings collection:
{
  "scriptFile": "__init__.py",
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "$return"
    }
  ]
}
Each binding has at least a type, a direction, and a name. In the above example, the first binding is of type httpTrigger with the direction in. For the in direction, name specifies the name of an input parameter that's sent to the function when invoked by the trigger.
The second binding in the collection is of type http with the direction out, in which case the special name of $return indicates that this binding uses the function's return value rather than providing an input parameter.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg, as shown in the code below:
"bindings": [
{
"authLevel": "anonymous",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "$return"
},
{
"type": "queue",
"direction": "out",
"name": "msg",
"queueName": "outqueue",
"connection": "AzureWebJobsStorage"
}
]
In this case, msg is given to the function as an output argument. For a queue type, you must also specify the name of the queue in queueName and provide the name of the Azure Storage connection (from the local.settings.json file) in connection.
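The required fields described above can be checked mechanically. A hedged sketch (check_bindings is illustrative and not part of the Functions tooling): every binding needs type, direction, and name, and a queue output additionally needs queueName and connection.

```python
REQUIRED = {"type", "direction", "name"}

def check_bindings(bindings):
    """Raise ValueError if any binding is missing a required field."""
    for b in bindings:
        missing = REQUIRED - b.keys()
        if missing:
            raise ValueError(f"binding {b.get('name')} missing {sorted(missing)}")
        if b["type"] == "queue" and not {"queueName", "connection"} <= b.keys():
            raise ValueError("queue bindings need queueName and connection")
    return True

# The same three bindings the tutorial builds up in function.json.
bindings = [
    {"authLevel": "anonymous", "type": "httpTrigger", "direction": "in",
     "name": "req", "methods": ["get", "post"]},
    {"type": "http", "direction": "out", "name": "$return"},
    {"type": "queue", "direction": "out", "name": "msg",
     "queueName": "outqueue", "connection": "AzureWebJobsStorage"},
]
```

A check like this catches the most common editing mistake: adding the queue binding but forgetting queueName or connection.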
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "req",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "res"
}
]
"bindings": [
{
"authLevel": "function",
"type": "httpTrigger",
"direction": "in",
"name": "Request",
"methods": [
"get",
"post"
]
},
{
"type": "http",
"direction": "out",
"name": "Response"
}
]
The second binding in the collection is named res. This http binding is an output binding (out) that is used to write the HTTP response.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg, as shown in the code below:
{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "req",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "res"
    },
    {
      "type": "queue",
      "direction": "out",
      "name": "msg",
      "queueName": "outqueue",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
The second binding in the collection is named Response. This http binding is an output binding (out) that is used to write the HTTP response.
To write to an Azure Storage queue from this function, add an out binding of type queue with the name msg, as shown in the code below:
{
  "bindings": [
    {
      "authLevel": "function",
      "type": "httpTrigger",
      "direction": "in",
      "name": "Request",
      "methods": [
        "get",
        "post"
      ]
    },
    {
      "type": "http",
      "direction": "out",
      "name": "Response"
    },
    {
      "type": "queue",
      "direction": "out",
      "name": "msg",
      "queueName": "outqueue",
      "connection": "AzureWebJobsStorage"
    }
  ]
}
In this case, msg is given to the function as an output argument. For a queue type, you must also specify the name of the queue in queueName and provide the name of the Azure Storage connection (from the local.settings.json file) in connection.
In a C# project, the bindings are defined as binding attributes on the function method. Specific definitions depend on whether your app runs in-process (C# class library) or in an isolated worker process.
Open the HttpExample.cs project file and add the following parameter to the Run method definition:
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
The msg parameter is an ICollector<T> type, representing a collection of messages written to an output binding when the function completes. In this case, the output is a storage queue named outqueue. The StorageAccountAttribute sets the connection string for the storage account. This attribute indicates the setting that contains the storage account connection string and can be applied at the class, method, or parameter level. In this case, you could omit StorageAccountAttribute because you're already using the default storage account.
The Run method definition must now look like the following code:
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
[HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
[Queue("outqueue"),StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
ILogger log)
In a Java project, the bindings are defined as binding annotations on the function method. The function.json file is then autogenerated based on these annotations.
Browse to the location of your function code under src/main/java, open the Function.java project file, and add the following parameter to the run method definition:
@QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage") OutputBinding<String> msg
The msg parameter is an OutputBinding<T> type, which represents a collection of strings. These strings are written as messages to an output binding when the function completes. In this case, the output is a storage queue named outqueue. The connection string for the storage account is set by the connection method. You pass the application setting that contains the storage account connection string, rather than passing the connection string itself.
The run method definition must now look like the following example:
@FunctionName("HttpTrigger-Java")
public HttpResponseMessage run(
@HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.FUNCTION)
HttpRequestMessage<Optional<String>> request,
@QueueOutput(name = "msg", queueName = "outqueue", connection = "AzureWebJobsStorage")
OutputBinding<String> msg, final ExecutionContext context) {
...
}
Add code to use the output binding
With the queue binding defined, you can now update your function to write messages to the queue using the binding parameter.
Update HttpExample\__init__.py to match the following code, adding the msg parameter to the function definition and msg.set(name) under the if name: statement:
import logging
import azure.functions as func
def main(req: func.HttpRequest, msg: func.Out[func.QueueMessage]) -> str:
name = req.params.get('name')
if not name:
try:
req_body = req.get_json()
except ValueError:
pass
else:
name = req_body.get('name')
if name:
msg.set(name)
return func.HttpResponse(f"Hello {name}!")
else:
return func.HttpResponse(
"Please pass a name on the query string or in the request body",
status_code=400
)
The msg parameter is an instance of the azure.functions.Out class. The set method writes a string message to the queue. In this case, it's the name passed to the function in the URL query string.
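Because msg is just an object with a set method, you can exercise this logic without a live queue by substituting a test double for the bound parameter. A hedged sketch: FakeOut and handle are illustrative stand-ins (FakeOut mimics azure.functions.Out, and handle mirrors only the binding-relevant branch of the function above).

```python
class FakeOut:
    """Test double for azure.functions.Out: records the last value set."""
    def __init__(self):
        self.value = None

    def set(self, value):
        self.value = value

def handle(name, msg):
    # Mirrors the core branch of __init__.py: write the name to the
    # queue binding, then build the response body.
    if name:
        msg.set(name)
        return f"Hello {name}!"
    return "Please pass a name on the query string or in the request body"

msg = FakeOut()
body = handle("Functions", msg)
```

After the call, msg.value holds exactly the string that would have been enqueued to outqueue.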
Add code that uses the msg output binding object on context.bindings to create a queue message. Add this code before the context.res statement.
// Add a message to the Storage queue,
// which is the name passed to the function.
context.bindings.msg = (req.query.name || req.body.name);
At this point, your function could look as follows:
module.exports = async function (context, req) {
context.log('JavaScript HTTP trigger function processed a request.');
if (req.query.name || (req.body && req.body.name)) {
// Add a message to the Storage queue,
// which is the name passed to the function.
context.bindings.msg = (req.query.name || req.body.name);
context.res = {
// status: 200, /* Defaults to 200 */
body: "Hello " + (req.query.name || req.body.name)
};
}
else {
context.res = {
status: 400,
body: "Please pass a name on the query string or in the request body"
};
}
};
Add code that uses the msg output binding object on context.bindings to create a queue message. Add this code before the context.res statement.
context.bindings.msg = name;
At this point, your function must look as follows:
import { AzureFunction, Context, HttpRequest } from "@azure/functions"

const httpTrigger: AzureFunction = async function (context: Context, req: HttpRequest): Promise<void> {
    context.log('HTTP trigger function processed a request.');
    const name = (req.query.name || (req.body && req.body.name));

    if (name) {
        // Add a message to the storage queue,
        // which is the name passed to the function.
        context.bindings.msg = name;
        // Send a "hello" response.
        context.res = {
            // status: 200, /* Defaults to 200 */
            body: "Hello " + (req.query.name || req.body.name)
        };
    }
    else {
        context.res = {
            status: 400,
            body: "Please pass a name on the query string or in the request body"
        };
    }
};

export default httpTrigger;
Add code that uses the `Push-OutputBinding` cmdlet to write text to the queue using the `msg` output binding. Add this code before you set the OK status in the `if` statement.

$outputMsg = $name
Push-OutputBinding -Name msg -Value $outputMsg
At this point, your function should look like the following example:
using namespace System.Net

# Input bindings are passed in via param block.
param($Request, $TriggerMetadata)

# Write to the Azure Functions log stream.
Write-Host "PowerShell HTTP trigger function processed a request."

# Interact with query parameters or the body of the request.
$name = $Request.Query.Name
if (-not $name) {
    $name = $Request.Body.Name
}

if ($name) {
    # Write the $name value to the queue,
    # which is the name passed to the function.
    $outputMsg = $name
    Push-OutputBinding -Name msg -Value $outputMsg

    $status = [HttpStatusCode]::OK
    $body = "Hello $name"
}
else {
    $status = [HttpStatusCode]::BadRequest
    $body = "Please pass a name on the query string or in the request body."
}

# Associate values to output bindings by calling 'Push-OutputBinding'.
Push-OutputBinding -Name Response -Value ([HttpResponseContext]@{
    StatusCode = $status
    Body       = $body
})
Add code that uses the `msg` output binding object to create a queue message. Add this code before the method returns.
if (!string.IsNullOrEmpty(name))
{
    // Add a message to the output collection.
    msg.Add(name);
}
At this point, your function should look like the following example:
[FunctionName("HttpExample")]
public static async Task<IActionResult> Run(
    [HttpTrigger(AuthorizationLevel.Anonymous, "get", "post", Route = null)] HttpRequest req,
    [Queue("outqueue"), StorageAccount("AzureWebJobsStorage")] ICollector<string> msg,
    ILogger log)
{
    log.LogInformation("C# HTTP trigger function processed a request.");

    string name = req.Query["name"];

    string requestBody = await new StreamReader(req.Body).ReadToEndAsync();
    dynamic data = JsonConvert.DeserializeObject(requestBody);
    name = name ?? data?.name;

    if (!string.IsNullOrEmpty(name))
    {
        // Add a message to the output collection.
        msg.Add(name);
    }

    return name != null
        ? (ActionResult)new OkObjectResult($"Hello, {name}")
        : new BadRequestObjectResult("Please pass a name on the query string or in the request body");
}
Now, you can use the new `msg` parameter to write to the output binding from your function code. Add the following line of code before the success response to add the value of `name` to the `msg` output binding.
msg.setValue(name);
When you use an output binding, you don't have to use the Azure Storage SDK code for authentication, getting a queue reference, or writing data. The Functions runtime and queue output binding do those tasks for you.
Your `run` method should now look like the following example:
public HttpResponseMessage run(
        @HttpTrigger(name = "req", methods = {HttpMethod.GET, HttpMethod.POST}, authLevel = AuthorizationLevel.ANONYMOUS)
        HttpRequestMessage<Optional<String>> request,
        @QueueOutput(name = "msg", queueName = "outqueue",
                connection = "AzureWebJobsStorage") OutputBinding<String> msg,
        final ExecutionContext context) {
    context.getLogger().info("Java HTTP trigger processed a request.");

    // Parse query parameter
    String query = request.getQueryParameters().get("name");
    String name = request.getBody().orElse(query);

    if (name == null) {
        return request.createResponseBuilder(HttpStatus.BAD_REQUEST)
                .body("Please pass a name on the query string or in the request body").build();
    } else {
        // Write the name to the message queue.
        msg.setValue(name);

        return request.createResponseBuilder(HttpStatus.OK).body("Hello, " + name).build();
    }
}
Update the tests
Because the archetype also creates a set of tests, you need to update these tests to handle the new `msg` parameter in the `run` method signature.
Browse to the location of your test code under src/test/java, open the Function.java project file, and replace the line of code under `//Invoke` with the following code:
@SuppressWarnings("unchecked")
final OutputBinding<String> msg = (OutputBinding<String>)mock(OutputBinding.class);
final HttpResponseMessage ret = new Function().run(req, msg, context);
Update the image in the registry
In the root folder, run `docker build` again, and this time update the version in the tag to `v1.0.1`. As before, replace `<docker_id>` with your Docker Hub account ID:
docker build --tag <docker_id>/azurefunctionsimage:v1.0.1 .
Push the updated image back to the repository with `docker push`:
docker push <docker_id>/azurefunctionsimage:v1.0.1
Because you configured continuous delivery, updating the image in the registry again automatically updates your function app in Azure.
View the message in the Azure Storage queue
In a browser, use the same URL as before to invoke your function. The browser should display the same response as before, because you didn't modify that part of the function code. The added code, however, wrote a message using the `name` URL parameter to the `outqueue` storage queue.
You can view the queue in the Azure portal or in the Microsoft Azure Storage Explorer. You can also view the queue in the Azure CLI, as described in the following steps:
Open the function project's local.settings.json file and copy the connection string value. In a terminal or command window, run the following command to create an environment variable named `AZURE_STORAGE_CONNECTION_STRING`, and paste your specific connection string in place of `<MY_CONNECTION_STRING>`. (This environment variable means you don't need to supply the connection string to each subsequent command using the `--connection-string` argument.)
export AZURE_STORAGE_CONNECTION_STRING="<MY_CONNECTION_STRING>"
(Optional) Use the `az storage queue list` command to view the Storage queues in your account. The output from this command should include a queue named `outqueue`, which was created when the function wrote its first message to that queue.
az storage queue list --output tsv
Use the `az storage message get` command to read the message from this queue, which should be the value you supplied when testing the function earlier. The command reads and removes the first message from the queue.
echo `echo $(az storage message get --queue-name outqueue -o tsv --query '[].{Message:content}') | base64 --decode`
Because the message body is stored base64 encoded, the message must be decoded before it's displayed. After you execute `az storage message get`, the message is removed from the queue. If there was only one message in `outqueue`, you won't retrieve a message when you run this command a second time; instead, you get an error.
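The same decoding that the `base64 --decode` pipe performs can also be done with Python's standard library; the encoded value below is just an example (it's the base64 encoding of the name "Azure"), not output from your queue:

```python
import base64

# Queue storage stores message bodies base64 encoded by default.
# "QXp1cmU=" is the base64 encoding of the example name "Azure".
encoded = "QXp1cmU="
decoded = base64.b64decode(encoded).decode("utf-8")
print(decoded)  # -> Azure
```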
Clean up resources
If you want to continue working with Azure Functions using the resources you created in this tutorial, you can leave all those resources in place. Because you created a Premium plan for Azure Functions, you'll incur ongoing costs of one or two USD per day.
To avoid ongoing costs, delete the `AzureFunctionsContainers-rg` resource group to clean up all the resources in that group:
az group delete --name AzureFunctionsContainers-rg
Next steps