Create a text labeling project and export labels

Learn how to create and run data labeling projects to label text data in Azure Machine Learning. Specify either a single label or multiple labels to be applied to each text item.

You can also use the data labeling tool to create an image labeling project.

Text labeling capabilities

Azure Machine Learning data labeling is a central place to create, manage, and monitor data labeling projects:

  • Coordinate data, labels, and team members to efficiently manage labeling tasks.
  • Track progress and maintain the queue of incomplete labeling tasks.
  • Start and stop the project and control the labeling progress.
  • Review the labeled data and export it as an Azure Machine Learning dataset.

Important

Text data must be available in an Azure blob datastore. (If you do not have an existing datastore, you may upload files during project creation.)

Data formats available for text data:

  • .txt: each file represents one item to be labeled.
  • .csv or .tsv: each row represents one item presented to the labeler. You decide which columns the labeler can see in order to label the row.
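
For example, a minimal .csv input might look like the sketch below, written with Python's standard library. The column names (review_id, review_text) are hypothetical; each row becomes one item in the labeling queue:

```python
import csv

# Hypothetical items; expose only the columns the labeler should see.
rows = [
    {"review_id": "1001", "review_text": "The battery lasts all day."},
    {"review_id": "1002", "review_text": "Stopped working after a week."},
]

with open("items.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["review_id", "review_text"])
    writer.writeheader()
    writer.writerows(rows)
```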

Prerequisites

  • The data that you want to label, either in local files or in Azure blob storage.
  • The set of labels that you want to apply.
  • The instructions for labeling.
  • An Azure subscription. If you don't have an Azure subscription, create a free account before you begin.
  • A Machine Learning workspace. See Create an Azure Machine Learning workspace.
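
If you plan to script any of the later steps with the Azure Machine Learning Python SDK, start by connecting to your workspace. A minimal sketch, assuming a config.json file (downloaded from the Azure portal) in the working directory:

```python
from azureml.core import Workspace

# Reads subscription ID, resource group, and workspace name from config.json.
ws = Workspace.from_config()
print(ws.name)
```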

Create a text labeling project

Labeling projects are administered from Azure Machine Learning. You use the Data Labeling page to manage your projects.

If your data is already in Azure Blob storage, you should make it available as a datastore before you create the labeling project.

  1. To create a project, select Add project. Give the project an appropriate name. The project name can't be reused, even if the project is deleted in the future.

  2. Select Text to create a text labeling project.

    (Screenshot: labeling project creation for text labeling.)

    • Choose Text Classification Multi-class for projects when you want to apply only a single label from a set of labels to each piece of text.
    • Choose Text Classification Multi-label for projects when you want to apply one or more labels from a set of labels to each piece of text.
    • Choose Text Named Entity Recognition (Preview) for projects when you want to apply labels to individual or multiple words of text in each entry.

    Important

    Text Named Entity Recognition is currently in public preview. The preview version is provided without a service level agreement, and it's not recommended for production workloads. Certain features might not be supported or might have constrained capabilities. For more information, see Supplemental Terms of Use for Microsoft Azure Previews.

  3. Select Next when you're ready to continue.

Add workforce (optional)

Select Use a vendor labeling company from Azure Marketplace only if you've engaged a data labeling company from Azure Marketplace. Then select the vendor. If your vendor doesn't appear in the list, unselect this option.

Make sure you first contact the vendor and sign a contract. For more information, see Work with a data labeling vendor company (preview).

Select Next to continue.

Select or create a dataset

If you already created a dataset that contains your data, select it from the Select an existing dataset drop-down list. Or, select Create a dataset to use an existing Azure datastore or to upload local files.

Note

A project cannot contain more than 500,000 files. If your dataset has more, only the first 500,000 files will be loaded.

Create a dataset from an Azure datastore

In many cases, it's fine to just upload local files. But Azure Storage Explorer provides a faster and more robust way to transfer a large amount of data. We recommend Storage Explorer as the default way to move files.

To create a dataset from data that you've already stored in Azure Blob storage:

  1. Select Create a dataset > From datastore.
  2. Assign a Name to your dataset.
  3. Choose the Dataset type:
    • Select Tabular if you're using a .csv or .tsv file, where each row contains a response. Tabular isn't available for Text Named Entity Recognition projects.
    • Select File if you're using separate .txt files for each response.
  4. (Optional) Provide a description for your dataset.
  5. Select Next.
  6. Select the datastore.
  7. If your data is in a subfolder within your blob storage, choose Browse to select the path.
    • Append "/**" to the path to include all the files in subfolders of the selected path.
    • Append "**/." to include all the data in the current container and its subfolders.
  8. Select Next.
  9. Confirm the details. Select Back to modify the settings or Create to create the dataset.
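
These steps can also be scripted with the Azure Machine Learning Python SDK. A minimal sketch, assuming a hypothetical registered datastore named labeling_store that holds a texts/ folder:

```python
from azureml.core import Dataset, Datastore, Workspace

ws = Workspace.from_config()
datastore = Datastore.get(ws, "labeling_store")  # hypothetical datastore name

# File dataset: one .txt file per item. The "/**" glob pulls in all
# subfolders, mirroring the path pattern described in step 7.
file_ds = Dataset.File.from_files(path=(datastore, "texts/**"))
file_ds.register(ws, name="text-items", description="Text items to label")

# Tabular dataset: one row per item (not available for NER projects).
tab_ds = Dataset.Tabular.from_delimited_files(path=(datastore, "texts/items.csv"))
tab_ds.register(ws, name="text-rows", description="Text rows to label")
```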

Create a dataset from uploaded data

To directly upload your data:

  1. Select Create a dataset > From local files.
  2. Assign a Name to your dataset.
  3. Choose the Dataset type:
    • Select Tabular if you're using a .csv or .tsv file, where each row is a response. Tabular isn't available for Text Named Entity Recognition projects.
    • Select File if you're using separate .txt files for each response.
  4. (Optional) Provide a description of your dataset.
  5. Select Next.
  6. (Optional) Select or create a datastore. Or keep the default to upload to the default blob store ("workspaceblobstore") of your Machine Learning workspace.
  7. Select Upload to select the local file(s) or folder(s) to upload.
  8. Select Next.
  9. If uploading .csv or .tsv files:
    • Confirm the settings and the data preview, and then select Next.
    • Include all columns of text that you'd like the labeler to see when classifying that row. If you plan to use ML-assisted labeling, adding numeric columns can degrade the ML-assist model.
    • Select Next.
  10. Confirm the details. Select Back to modify the settings or Create to create the dataset.
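
The upload path has a rough code equivalent as well. A sketch, assuming a local ./data folder of .txt files and the workspace's default blob store:

```python
from azureml.core import Dataset, Workspace

ws = Workspace.from_config()
datastore = ws.get_default_datastore()  # workspaceblobstore

# Push the local files into the default blob store under a target folder.
datastore.upload(src_dir="./data", target_path="labeling-input", overwrite=True)

# Register the uploaded files as a file dataset for the labeling project.
ds = Dataset.File.from_files(path=(datastore, "labeling-input/**"))
ds.register(ws, name="uploaded-text-items")
```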

Configure incremental refresh

If you plan to add new files to your dataset, use incremental refresh to add these new files to your project.

When incremental refresh at regular intervals is enabled, the dataset is checked periodically for new files to be added to a project, based on the labeling completion rate. The check for new data stops when the project contains the maximum 500,000 files.

Select Enable incremental refresh at regular intervals when you want your project to continually monitor for new data in the datastore.

Unselect if you don't want new files in the datastore to automatically be added to your project.

To add more files to your project, use Azure Storage Explorer to upload to the appropriate folder in the blob storage.

After the project is created, use the Details tab to change incremental refresh, view the timestamp for the last refresh, and request an immediate refresh of data.

Note

Incremental refresh isn't available for projects that use tabular (.csv or .tsv) dataset input.

Specify label classes

On the Label classes page, specify the set of classes to categorize your data. Your labelers' accuracy and speed are affected by their ability to choose among the classes. For instance, instead of spelling out the full genus and species for plants or animals, use a field code or abbreviate the genus.

Enter one label per row. Use the + button to add a new row. If you have more than three or four labels but fewer than 10, you may want to prefix the names with numbers ("1: ", "2: ") so the labelers can use the number keys to speed their work.

Describe the text labeling task

It's important to clearly explain the labeling task. On the Labeling instructions page, you can add a link to an external site for labeling instructions, or provide instructions in the edit box on the page. Keep the instructions task-oriented and appropriate to the audience. Consider these questions:

  • What are the labels they'll see, and how will they choose among them? Is there a reference text to refer to?
  • What should they do if no label seems appropriate?
  • What should they do if multiple labels seem appropriate?
  • What confidence threshold should they apply to a label? Do you want their "best guess" if they aren't certain?
  • What should they do after they submit a label if they think they made a mistake?
  • What should they do if there are multiple reviewers who have different opinions on the labels?

Note

Be sure to note that the labelers will be able to select the first 9 labels by using number keys 1-9.

Use ML-assisted data labeling

The ML-assisted labeling page lets you trigger automatic machine learning models to accelerate labeling tasks. ML-assisted labeling is available for both file (.txt) and tabular (.csv) text data inputs. To use ML-assisted labeling:

  • Select Enable ML assisted labeling.
  • Select the Dataset language for the project. The list includes all languages that the TextDNNLanguages class supports.
  • Specify a compute target to use. If you don't have a compute target in your workspace, a compute cluster is created for you and added to your workspace. The cluster is created with a minimum of zero nodes, which means it costs nothing when it's not in use. If you'd rather provision the cluster yourself, see the sketch after this list.
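
A minimal sketch for creating such a cluster; the cluster name and VM size below are illustrative choices, not requirements:

```python
from azureml.core import Workspace
from azureml.core.compute import AmlCompute, ComputeTarget

ws = Workspace.from_config()

# min_nodes=0 lets the cluster scale to zero, so it costs nothing while idle.
config = AmlCompute.provisioning_configuration(
    vm_size="STANDARD_DS3_V2",
    min_nodes=0,
    max_nodes=2,
)
cluster = ComputeTarget.create(ws, "labeling-cluster", config)
cluster.wait_for_completion(show_output=True)
```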

How does ML-assisted labeling work?

At the beginning of your labeling project, the items are shuffled into a random order to reduce potential bias. However, any biases that are present in the dataset will be reflected in the trained model. For example, if 80% of your items are of a single class, then approximately 80% of the data used to train the model will be of that class.

For training the text DNN model that ML-assist uses, the input text per training example is limited to approximately the first 128 words in the document. For tabular input, all text columns are concatenated before this limit is applied. This practical limit allows model training to complete in a timely manner. The actual text in a document (for file input) or set of text columns (for tabular input) can exceed 128 words; the limit pertains only to what the model internally uses during training.
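
As a conceptual illustration only (not the service's actual implementation), the word limit behaves roughly like this:

```python
WORD_LIMIT = 128  # approximate limit described above

def truncate(text: str, limit: int = WORD_LIMIT) -> str:
    """Keep roughly the first `limit` whitespace-delimited words."""
    return " ".join(text.split()[:limit])

# File input: each document is truncated on its own.
doc = "word " * 300  # stand-in for a long document
doc_for_training = truncate(doc)

# Tabular input: text columns are concatenated first, then truncated.
row = {"title": "Battery life", "body": "Lasts all day on a single charge."}
row_for_training = truncate(" ".join(row.values()))
```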

The exact number of labeled items necessary to start assisted labeling isn't fixed. It can vary significantly from one labeling project to another, depending on many factors, including the number of label classes and the label distribution.

Because the final labels still rely on input from the labeler, this technology is sometimes called human-in-the-loop labeling.

Note

ML-assisted data labeling doesn't support default storage accounts that are secured behind a virtual network. You must use a non-default storage account for ML-assisted data labeling. The non-default storage account can be secured behind the virtual network.

Pre-labeling

After enough labels are submitted for training, the trained model is used to predict tags. The labeler now sees pages that contain predicted labels already present on each item. The task is then to review these predictions and correct any mislabeled items before submitting the page.

Once a machine learning model has been trained on your manually labeled data, the model is evaluated on a test set of manually labeled items to determine its accuracy at different confidence thresholds. This evaluation process is used to determine a confidence threshold above which the model is accurate enough to show pre-labels. The model is then evaluated against unlabeled data. Items with predictions more confident than this threshold are used for pre-labeling.
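
As a conceptual sketch of that threshold search (the service's internal evaluation may differ), one simple approach picks the lowest confidence at which the surviving predictions stay accurate enough:

```python
def pick_threshold(predictions, target_accuracy=0.95):
    """predictions: (confidence, is_correct) pairs from a manually labeled
    test set. Returns the lowest threshold whose surviving predictions meet
    the target accuracy, or None if no threshold qualifies."""
    for threshold in sorted({conf for conf, _ in predictions}):
        kept = [ok for conf, ok in predictions if conf >= threshold]
        if kept and sum(kept) / len(kept) >= target_accuracy:
            return threshold
    return None

preds = [(0.4, False), (0.6, True), (0.8, True), (0.9, True)]
print(pick_threshold(preds, target_accuracy=0.9))  # -> 0.6
```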

Initialize the text labeling project

After the labeling project is initialized, some aspects of the project are immutable. You can't change the task type or dataset. You can modify labels and the URL for the task description. Carefully review the settings before you create the project. After you submit the project, you're returned to the Data Labeling homepage, which will show the project as Initializing.

Note

This page might not refresh automatically. After a pause, manually refresh the page to see the project's status as Created.

Run and monitor the project

After you initialize the project, Azure will begin running it. Select the project on the main Data Labeling page to see details of the project.

To pause or restart the project, toggle the Running status on the top right. You can only label data when the project is running.

Dashboard

The Dashboard tab shows the progress of the labeling task.

(Screenshot: text data labeling dashboard.)

The progress chart shows how many items have been labeled, skipped, in need of review, or not yet done. Hover over the chart to see the number of items in each section.

The middle section shows the queue of tasks yet to be assigned. If ML-assisted labeling is on, you'll also see the number of pre-labeled items.

On the right side is a distribution of the labels for the tasks that are complete. Remember that in some project types, an item can have multiple labels, in which case the total number of labels can be greater than the total number of items.

Data tab

On the Data tab, you can see your dataset and review labeled data. Scroll through the labeled data to see the labels. If you see incorrectly labeled data, select it and choose Reject, which will remove the labels and put the data back into the unlabeled queue.

Details tab

View and change details of your project. In this tab you can:

  • View project details and input datasets.
  • Enable or disable incremental refresh at regular intervals, or request an immediate refresh.
  • View details of the storage container that's used to store labeled outputs in your project.
  • Add labels to your project.
  • Edit the instructions you give to your labelers.

Access for labelers

Anyone who has Contributor or Owner access to your workspace can label data in your project.

You can also add users and customize the permissions so that they can access labeling but not other parts of the workspace or your labeling project. For more information, see Add users to your data labeling project.

Add new label class to a project

During the data labeling process, you may want to add more labels to classify your items. For example, you may want to add an "Unknown" or "Other" label to indicate confusion.

Use these steps to add one or more labels to a project:

  1. Select the project on the main Data Labeling page.
  2. At the top right of the page, toggle Running to Paused to stop labelers from their activity.
  3. Select the Details tab.
  4. In the list on the left, select Label classes.
  5. At the top of the list, select + Add labels.
  6. In the form, add your new label. Then choose how to continue the project. Because you've changed the available labels, choose how to treat the data that's already labeled:
    • Start over, removing all existing labels. Choose this option if you want to start labeling from the beginning with the new full set of labels.
    • Start over, keeping all existing labels. Choose this option to mark all data as unlabeled, but keep the existing labels as a default tag for items that were previously labeled.
    • Continue, keeping all existing labels. Choose this option to keep all data already labeled as is, and start using the new label for data not yet labeled.
  7. Modify your instructions page as necessary for the new label(s).
  8. Once you've added all new labels, at the top right of the page toggle Paused to Running to restart the project.

Export the labels

You can export the label data for Machine Learning experimentation at any time. Use the Export button on the Project details page of your labeling project.

For all project types other than Text Named Entity Recognition, you can export:

  • A CSV file. The CSV file is created in the default blob store of the Azure Machine Learning workspace, in a folder within Labeling/export/csv.
  • An Azure Machine Learning dataset with labels.

For Text Named Entity Recognition projects, you can export:

  • An Azure Machine Learning dataset with labels.

  • A CoNLL file. For this export, you also have to assign a compute resource. The export process runs offline and generates the file as part of an experiment run. When the file is ready to download, you'll see a notification on the top right. Select it to open the notification, which includes the link to the file.

    (Screenshot: notification for file download.)
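
CoNLL is a plain-text, token-per-line format: each line carries a token and its tag, with blank lines separating documents or sentences. A minimal parsing sketch, assuming whitespace-separated columns with the tag last (column layouts vary between tools, so adjust to the exported file):

```python
def read_conll(path):
    """Yield one document at a time as a list of (token, tag) pairs."""
    doc = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.split()
            if not parts:          # blank line ends the current document
                if doc:
                    yield doc
                    doc = []
            else:
                doc.append((parts[0], parts[-1]))
    if doc:                        # flush the final document
        yield doc

for doc in read_conll("export.conll"):
    print(doc[:5])
```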

Access exported Azure Machine Learning datasets in the Datasets section of Machine Learning. The dataset details page also provides sample code to access your labels from Python.
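
For example, a minimal sketch with the SDK; the dataset name below is hypothetical, so copy the real name from the Datasets list after you export:

```python
from azureml.core import Dataset, Workspace

ws = Workspace.from_config()

# Hypothetical name; use the name shown in the Datasets section.
labeled = Dataset.get_by_name(ws, name="labels_myproject")

# For tabular exports, the labels appear as a column alongside your text.
df = labeled.to_pandas_dataframe()
print(df.head())
```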


Troubleshooting

Use these tips if you see any of these issues.

Issue: Only datasets created on blob datastores can be used.
Resolution: This is a known limitation of the current release.

Issue: After creation, the project shows "Initializing" for a long time.
Resolution: Manually refresh the page. Initialization should proceed at roughly 20 datapoints per second. The lack of autorefresh is a known issue.

Issue: Newly labeled items aren't visible in data review.
Resolution: To load all labeled items, choose the First button. The First button takes you back to the front of the list, and it loads all labeled data.

Issue: Unable to assign a set of tasks to a specific labeler.
Resolution: This is a known limitation of the current release.

Next steps