JavaScript Tutorial: Upload and analyze a file with Azure Functions and Blob Storage

In this tutorial, you'll learn how to upload an image to Azure Blob Storage and process it using Azure Functions and Computer Vision. You'll also learn how to implement Azure Function triggers and bindings as part of this process. Together, these services will analyze an uploaded image that contains text, extract the text out of it, and then store the text in a database row for later analysis or other purposes.

Azure Blob Storage is Microsoft's massively scalable object storage solution for the cloud. Blob Storage is designed for storing images and documents, streaming media files, managing backup and archive data, and much more. You can read more about Blob Storage on the overview page.

Azure Functions is a serverless compute solution that allows you to write and run small blocks of code as highly scalable, serverless, event-driven functions. You can read more about Azure Functions on the overview page.

In this tutorial, you'll learn how to:

  • Upload images and files to Blob Storage
  • Use an Azure Function event trigger to process data uploaded to Blob Storage
  • Use Cognitive Services to analyze an image
  • Write data to Table Storage using Azure Function output bindings

Prerequisites

Create the storage account and container

The first step is to create the storage account that will hold the uploaded blob data, which in this scenario will be images that contain text. A storage account offers several different services, but this tutorial utilizes Blob Storage and Table Storage.

  1. In Visual Studio Code, select Ctrl + Shift + P to open the command palette.

  2. Search for Azure Storage: Create Storage Account (Advanced).

  3. Use the following table to create the Storage resource.

    Setting Value
    Name Enter msdocsstoragefunction or something similar.
    Resource Group Create a new resource group named msdocs-storage-function.
    Static web hosting No.
    Location Choose the region closest to you.
  4. In Visual Studio Code, select Shift + Alt + A to open the Azure Explorer.

  5. Expand the Storage section, expand your subscription node, and wait for the resource to be created.

Create the container in Visual Studio Code

  1. Still in the Azure Explorer, find your new Storage resource and expand it to see the nodes.
  2. Right-click on Blob Containers and select Create Blob Container.
  3. Enter the name imageanalysis. This creates a private container.

Change from private to public container in Azure portal

This procedure expects a public container. To change that configuration, make the change in the Azure portal.

  1. Right-click on the Storage Resource in the Azure Explorer and select Open in Portal.
  2. In the Data Storage section, select Containers.
  3. Find your container, imageanalysis, and select the ... (ellipsis) at the end of the line.
  4. Select Change access level.
  5. Select Blob (anonymous read access for blobs only), then select OK.
  6. Return to Visual Studio Code.

Retrieve the connection string in Visual Studio Code

  1. In Visual Studio Code, select Shift + Alt + A to open the Azure Explorer.
  2. Right-click on your storage resource and select Copy Connection String.
  3. Paste this somewhere to use later.
  4. Also make note of the storage account name, msdocsstoragefunction, for later.
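If you want to confirm which account a connection string points at, the string is a simple semicolon-delimited list of key/value pairs. The following is a hypothetical helper, not part of the sample project; the AccountKey value below is a placeholder, not a real key.

```javascript
// Hypothetical helper (not part of the sample project): splits an Azure Storage
// connection string into its key/value parts. The key names follow the standard
// connection string format; the AccountKey below is a placeholder, not a real key.
function parseConnectionString(connectionString) {
  const parts = {};
  for (const segment of connectionString.split(';')) {
    if (!segment) continue;
    const separator = segment.indexOf('=');
    parts[segment.slice(0, separator)] = segment.slice(separator + 1);
  }
  return parts;
}

const sample =
  'DefaultEndpointsProtocol=https;AccountName=msdocsstoragefunction;' +
  'AccountKey=PLACEHOLDER==;EndpointSuffix=core.windows.net';

console.log(parseConnectionString(sample).AccountName); // → msdocsstoragefunction
```

Splitting only on the first = in each segment matters because account keys are base64-encoded and often end in = characters.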

Create the Computer Vision service

Next, create the Computer Vision service account that will process our uploaded files. Computer Vision is part of Azure Cognitive Services and offers various features for extracting data out of images. You can learn more about Computer Vision on the overview page.

  1. In the search bar at the top of the portal, search for Computer and select the result labeled Computer vision.

  2. On the Computer vision page, select + Create.

  3. On the Create Computer Vision page, enter the following values:

    • Subscription: Choose your desired Subscription.
    • Resource Group: Use the msdocs-storage-function resource group you created earlier.
    • Region: Select the region that is closest to you.
    • Name: Enter in a name of msdocscomputervision.
    • Pricing Tier: Choose Free if it's available, otherwise choose Standard S1.
    • Check the Responsible AI Notice box if you agree to the terms

    A screenshot showing how to create a new Computer Vision service.

  4. Select Review + Create at the bottom. Azure will take a moment to validate the information you entered. Once the settings are validated, choose Create and Azure will begin provisioning the Computer Vision service, which might take a moment.

  5. When the operation has completed, select Go to Resource.

Retrieve the keys

Next, we need to find the secret key and endpoint URL for the Computer Vision service to use in our Azure Function app.

  1. On the Computer Vision overview page, select Keys and Endpoint.

  2. On the Keys and Endpoint page, copy the Key 1 value and the Endpoint value and paste them somewhere to use later. The endpoint should be in the format https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com/.

A screenshot showing how to retrieve the Keys and URL Endpoint for a Computer Vision service.

Download and configure the sample project

The code for the Azure Function used in this tutorial can be found in this GitHub repository, in the JavaScript subdirectory. You can also clone the project using the commands below.

git clone https://github.com/Azure-Samples/msdocs-storage-bind-function-service.git
cd msdocs-storage-bind-function-service/javascript
code .

The sample project accomplishes the following tasks:

  • Retrieves environment variables to connect to the storage account and Computer Vision service
  • Accepts the uploaded file as a blob parameter
  • Analyzes the blob using the Computer Vision service
  • Sends the analyzed image text to a new table row using output bindings

Once you've downloaded and opened the project, there are a few essential concepts to understand:

Concept Purpose
Function The Azure Function is defined by both the function code and the bindings. The function code is in ./ProcessImageUpload/index.js.
Triggers and bindings The triggers and bindings declare which data is expected to flow into or out of the function and which service sends or receives that data. The triggers and bindings for this function are defined in ./ProcessImageUpload/function.json.

Triggers and bindings

The following function.json file defines the triggers and bindings for this function:

{
  "bindings": [
    {
      "name": "myBlob",
      "type": "blobTrigger",
      "direction": "in",
      "path": "imageanalysis/{name}",
      "connection": "StorageConnection"
    },
    {
      "tableName": "ImageText",
      "connection": "StorageConnection",
      "name": "tableBinding",
      "type": "table",
      "direction": "out"
    }
  ]
}
  • Data In - The BlobTrigger ("type": "blobTrigger") is used to bind the function to the upload event in Blob Storage. The trigger has the following parameters:

    • path: The path the trigger watches for events. The path includes the container name, imageanalysis, and the variable substitution for the blob name. The blob name is retrieved from the name property.
    • name: The name of the uploaded blob. myBlob is the parameter name for the blob passed into the function. Don't change the value myBlob.
    • connection: The connection string of the storage account. The value StorageConnection matches the name in the local.settings.json file.
  • Data Out - The TableBinding ("type": "table") is used to bind the outbound data to a Storage table.

    • tableName: The name of the table to write the parsed image text value returned by the function. The table must already exist.
    • connection: The connection string of the storage account. The value StorageConnection matches the name in the local.settings.json file.
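To illustrate how the {name} token in the trigger path captures the blob name, here is a small sketch. The real substitution is performed by the Functions runtime; this function merely demonstrates the idea with a regular expression.

```javascript
// Illustrative only: mimics how a blobTrigger path pattern such as
// "imageanalysis/{name}" captures the blob name from an uploaded blob's path.
// The real substitution is done by the Functions runtime.
function matchBlobPath(pattern, blobPath) {
  // Turn each {token} into a named capture group
  const regex = new RegExp('^' + pattern.replace(/\{(\w+)\}/g, '(?<$1>.+)') + '$');
  const match = blobPath.match(regex);
  return match ? { ...match.groups } : null;
}

console.log(matchBlobPath('imageanalysis/{name}', 'imageanalysis/receipt.png'));
// → { name: 'receipt.png' }
```

A blob uploaded to a different container, such as other/receipt.png, does not match the pattern, which is why only uploads to imageanalysis trigger the function.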
// ProcessImageUpload/index.js
const { v4: uuidv4 } = require('uuid');
const { ApiKeyCredentials } = require('@azure/ms-rest-js');
const { ComputerVisionClient } = require('@azure/cognitiveservices-computervision');
const sleep = require('util').promisify(setTimeout);

const STATUS_SUCCEEDED = "succeeded";
const STATUS_FAILED = "failed";

async function readFileUrl(context, computerVisionClient, url) {

    try {

        context.log(`uri = ${url}`);

        // To recognize text in a local image, replace client.read() with readTextInStream() as shown:
        let result = await computerVisionClient.read(url);

        // Operation ID is last path segment of operationLocation (a URL)
        let operation = result.operationLocation.split('/').slice(-1)[0];

        // Wait for the read operation to complete or fail
        // result.status is initially undefined, since it's the result of read
        while (result.status !== STATUS_SUCCEEDED && result.status !== STATUS_FAILED) {
            await sleep(1000);
            result = await computerVisionClient.getReadResult(operation);
        }

        let contents = "";

        result.analyzeResult.readResults.forEach((page) => {
            page.lines.forEach(line => {
                contents += line.text + "\n\r";
            });
        });
        return contents;

    } catch (err) {
        console.log(err);
    }
}

module.exports = async function (context, myBlob) {

    try {
        context.log("JavaScript blob trigger function processed blob \n Blob:", context.bindingData.blobTrigger, "\n Blob Size:", myBlob.length, "Bytes");

        const computerVision_ResourceKey = process.env.ComputerVisionKey;
        const computerVision_Endpoint = process.env.ComputerVisionEndPoint;

        const computerVisionClient = new ComputerVisionClient(
            new ApiKeyCredentials({ inHeader: { 'Ocp-Apim-Subscription-Key': computerVision_ResourceKey } }), computerVision_Endpoint);

        // URL must be full path
        const textContext = await readFileUrl(context, computerVisionClient, context.bindingData.uri);

        context.bindings.tableBinding = [];
        context.bindings.tableBinding.push({
            PartitionKey: "Images",
            RowKey: uuidv4().toString(),
            Text: textContext
        });
    } catch (err) {
        context.log(err);
        return;
    }

};

This code also retrieves essential configuration values from environment variables, such as the Blob Storage connection string and Computer Vision key. These environment variables are added to the Azure Function environment after it's deployed.

The default function also utilizes a second method called readFileUrl. This code uses the endpoint URL and key of the Computer Vision account to make a request to Computer Vision to process the image. The request returns all of the text discovered in the image. This text is written to Table Storage, using the outbound binding.

Configure local settings

To run the project locally, enter the environment variables in the ./local.settings.json file. Fill in the placeholder values with the values you saved earlier when creating the Azure resources.

Although the Azure Function code runs locally, it connects to the cloud-based services for Storage, rather than using any local emulators.

{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "",
    "FUNCTIONS_WORKER_RUNTIME": "node",
    "StorageConnection": "your-storage-account-connection-string",
    "StorageAccountName": "your-storage-account-name",
    "StorageContainerName": "your-storage-container-name",    
    "ComputerVisionKey": "your-computer-vision-key",
    "ComputerVisionEndPoint":  "https://REPLACE-WITH-YOUR-RESOURCE-NAME.cognitiveservices.azure.com/"    
  }
}
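Before running locally, it can help to verify that the required settings are present rather than failing midway through a request. This is a minimal sketch; the setting names come from the file above, and the env object passed in is illustrative (in the function you would pass process.env).

```javascript
// Sketch: fail fast if a required setting is missing before calling the cloud
// services. The setting names match local.settings.json; the env object passed
// in below is illustrative (in the function you'd pass process.env).
const REQUIRED_SETTINGS = [
  'StorageConnection',
  'ComputerVisionKey',
  'ComputerVisionEndPoint'
];

function findMissingSettings(env) {
  return REQUIRED_SETTINGS.filter((name) => !env[name]);
}

const missing = findMissingSettings({
  StorageConnection: 'UseDevelopmentStorage=true',
  ComputerVisionKey: 'placeholder-key'
});
console.log(missing); // → [ 'ComputerVisionEndPoint' ]
```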

Create Azure Functions app

You're now ready to deploy the application to Azure using a Visual Studio Code extension.

  1. In Visual Studio Code, select Shift + Alt + A to open the Azure explorer.

  2. In the Functions section, find and right-click the subscription, and select Create Function App in Azure (Advanced).

  3. Use the following table to create the Function resource.

    Setting Value
    Name Enter msdocsprocessimage or something similar.
    Runtime stack Select a Node.js LTS version.
    OS Select Linux.
    Resource Group Choose the msdocs-storage-function resource group you created earlier.
    Location Choose the region closest to you.
    Plan Type Select Consumption.
    Azure Storage Select the storage account you created earlier.
    Application Insights Skip for now.
  4. Azure provisions the requested resources, which will take a few moments to complete.

Deploy Azure Functions app

  1. When the previous resource creation process finishes, right-click the new resource in the Functions section of the Azure explorer, and select Deploy to Function App.
  2. If asked Are you sure you want to deploy..., select Deploy.
  3. When the process completes, a notification appears with a choice that includes Upload settings. Select that option. This copies the values from your local.settings.json file into your Azure Function app. If the notification disappears before you can select it, continue to the next section.

Add app settings for Storage and Computer Vision

If you selected Upload settings in the notification, skip this section.

The Azure Function was deployed successfully, but it can't connect to our Storage account and Computer Vision services yet. The correct keys and connection strings must first be added to the configuration settings of the Azure Functions app.

  1. Find your resource in the Functions section of the Azure explorer, right-click Application Settings, and select Add New Setting.

  2. Enter a new app setting for the following secrets. Copy and paste your secret values from your local project in the local.settings.json file.

    Setting
    StorageConnection
    StorageAccountName
    StorageContainerName
    ComputerVisionKey
    ComputerVisionEndPoint

All of the required environment variables to connect your Azure Function to the different services are now in place.

Upload an image to Blob Storage

You're now ready to test the application! You can upload a blob to the container, and then verify that the text in the image was saved to Table Storage.

  1. In the Azure explorer in Visual Studio Code, find and expand your Storage resource in the Storage section.
  2. Expand Blob Containers and right-click your container name, imageanalysis, then select Upload files.
  3. You can find a few sample images included in the images folder at the root of the downloadable sample project, or you can use one of your own.
  4. For the Destination directory, accept the default value, /.
  5. Wait until the files are uploaded and listed in the container.

View text analysis of image

Next, you can verify that the upload triggered the Azure Function, and that the text in the image was analyzed and saved to Table Storage properly.

  1. In Visual Studio Code, in the Azure Explorer, under the same Storage resource, expand Tables to find your resource.
  2. An ImageText table should now be available. Click on the table to preview the data rows inside of it. You should see an entry for the processed image text of an uploaded file. You can verify this using either the Timestamp, or by viewing the content of the Text column.

Congratulations! You succeeded in processing an image that was uploaded to Blob Storage using Azure Functions and Computer Vision.

Troubleshooting

Use the following table to troubleshoot issues during this procedure.

Issue Resolution
await computerVisionClient.read(url); errors with Only absolute URLs are supported Make sure your ComputerVisionEndPoint endpoint is in the format of https://YOUR-RESOURCE-NAME.cognitiveservices.azure.com/.
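A quick way to catch the endpoint-format mistake above is to check that the value is an absolute HTTPS URL before using it. This is a sketch using Node's global URL class; the isValidEndpoint helper name is illustrative, not part of the sample project.

```javascript
// Sketch: check that an endpoint value is an absolute HTTPS URL, which is what
// computerVisionClient.read() needs ("Only absolute URLs are supported").
// The helper name is illustrative, not part of the sample project.
function isValidEndpoint(endpoint) {
  try {
    return new URL(endpoint).protocol === 'https:';
  } catch {
    return false; // new URL() throws on relative or malformed values
  }
}

console.log(isValidEndpoint('https://msdocscomputervision.cognitiveservices.azure.com/')); // → true
console.log(isValidEndpoint('msdocscomputervision.cognitiveservices.azure.com'));          // → false
```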

Clean up resources

If you're not going to continue to use this application, you can delete the resources you created by removing the resource group.

  1. Select Resource groups from the Azure explorer.
  2. Find and right-click the msdocs-storage-function resource group from the list.
  3. Select Delete. The process to delete the resource group may take a few minutes to complete.