Launch a background app with voice commands in Cortana (XAML)

In addition to using voice commands within Cortana to access system features, you can extend Cortana with features and functionality from a background app by using voice commands that specify an action or command to execute within the app. When an app handles a voice command in the background, it can display feedback on the Cortana canvas and communicate with the user by using the Cortana voice.

Note  

A voice command is a single utterance with a specific intent, defined in a Voice Command Definition (VCD) file, directed at an installed app through Cortana.

A voice command definition can vary in complexity. It can support anything from a single, constrained utterance to a collection of more flexible, natural language utterances, all denoting the same intent.

A VCD file defines one or more voice commands, each with a unique intent.

The target app can be launched in the foreground (the app takes focus) or activated in the background (Cortana retains focus but provides results from the app), depending on the complexity of the interaction. For example, voice commands that require additional context or user input (such as sending a message to a specific contact) are best handled in a foreground app, while basic commands can be handled in Cortana through a background app.

 

We demonstrate these features here with a trip planning and management app named Adventure Works.

Here's an overview of the Adventure Works app integrated with the Cortana canvas.

To view an Adventure Works trip without Cortana, a user would launch the app and navigate to the Upcoming trips page.

Using voice commands through Cortana to launch your app in the background, the user can instead just say, "Adventure Works, when is my trip to Las Vegas?". Your app handles the command and Cortana displays results along with your app icon and other app info, if provided. Here's an example of a basic trip query and Cortana result screen that both shows and speaks "Your next trip to Las Vegas is on August 1st".

These are the basic steps to add voice-command functionality and extend Cortana with background functionality from your app using speech or keyboard input:

  1. Create an app service (see Windows.ApplicationModel.AppService) that Cortana invokes in the background.
  2. Create a VCD file. This is an XML document that defines all the spoken commands that the user can say to initiate actions or invoke commands when activating your app. See VCD elements and attributes v1.2.
  3. Register the command sets in the VCD file when the app is launched.
  4. Handle the background activation of the app service and the execution of the voice command.
  5. Display and speak the appropriate feedback to the voice command within Cortana.

Objective: To learn how to extend Cortana with voice commands and background apps.

Prerequisites

If you're new to developing Windows Store apps using C++, C#, or Visual Basic, have a look through these topics to get familiar with the technologies discussed here before completing this tutorial.

User experience guidelines:

See Cortana design guidelines for how to integrate your app with Cortana and Speech design guidelines for helpful tips on designing a useful and engaging speech-enabled app.

Create an app service

  1. In Visual Studio, right-click the Solution name, select Add->New Project, and then select Windows Runtime Component. This is the component that will implement the app service (see Windows.ApplicationModel.AppService).

  2. Type a name for the project (for example, "VoiceCommandService") and click OK.

  3. In Solution Explorer, select the "VoiceCommandService" project and rename the "Class1.cs" file generated by Visual Studio. For this example, we rename the file to "VoiceCommandService.cs".

  4. In "VoiceCommandService.cs", create a new class that implements the IBackgroundTask interface. The Run method is the required entry point, called when Cortana recognizes the voice command.

    Note  The background task class itself, and all other classes in the background task project, need to be sealed public classes.

     

    Here's a basic background task class for the Adventure Works app.

    using Windows.ApplicationModel.Background;
    
    namespace AdventureWorks.VoiceCommands
    {
      public sealed class AdventureWorksVoiceCommandService : IBackgroundTask
      {
        public void Run(IBackgroundTaskInstance taskInstance)
        {
          BackgroundTaskDeferral _deferral = taskInstance.GetDeferral();
    
          //
          // TODO: Insert code 
          //
          //
    
          _deferral.Complete();
        }        
      }
    }
    
  5. In Visual Studio, open your app and declare the background task as an AppService in the app manifest.

    1. In Solution Explorer, right click the "Package.appxmanifest" file and select View Code.

    2. Find the Application element.

    3. Add an Extensions element to the Application element.

    4. Add an Extension element to the Extensions element.

    5. Add a Category attribute to the Extension element and set the value of the Category attribute to "windows.appService".

    6. Add an EntryPoint attribute to the Extension element and set the value of the EntryPoint attribute to the name of the class that implements IBackgroundTask, in this case "AdventureWorks.VoiceCommands.AdventureWorksVoiceCommandService".

    7. Add an AppService element to the Extension element.

    8. Add a Name attribute to the AppService element and set the value of the Name attribute to a name for the app service, in this case "AdventureWorksVoiceCommandService".

      <Package>
        <Applications>
          <Application>
      
            <Extensions>
              <Extension Category="windows.appService" 
                EntryPoint=
                  "AdventureWorks.VoiceCommands.AdventureWorksVoiceCommandService">
                <AppService Name="AdventureWorksVoiceCommandService"/>
              </Extension>
            </Extensions>
      
          </Application>
        </Applications>
      </Package>
      

Create a VCD file

  1. In Visual Studio, right-click the project name, select Add->New Item, and then select Text File.
  2. Type a name for the VCD file and be sure to append the ".xml" file extension. For example, "AdventureWorksCommands.xml". Select Add.
  3. In Solution Explorer, select the VCD file.
  4. In the Properties window, set Build action to Content, and then set Copy to output directory to Copy if newer.

Edit the VCD file

For each language supported by your app, create a CommandSet of voice commands that your app can handle.

Each Command declared in a VCD file must include this information:

  • A command Name used by the application to identify the voice command at runtime.

  • An Example element that contains a phrase describing how a user can invoke the command. Cortana shows this example when the user says "What can I say?" or "Help", or taps See more.

  • A ListenFor element that contains the words or phrases that your app recognizes to initiate a command. Each command needs to have at least one ListenFor element.

  • A Feedback element that contains the text for Cortana to display and speak as the application is launched.

  • A VoiceCommandService element to indicate that the voice command launches the app in the background.

See the VCD elements and attributes v1.2 reference for more detail.

You can specify multiple language versions for the commands used to activate your app and execute a command. You can create multiple CommandSet elements, each with a different xml:lang attribute to allow your app to be used in different markets. For example, an app for the United States might have a CommandSet for English and a CommandSet for Spanish.

Caution  

To activate an app and initiate an action using a voice command, the app must register a VCD file that contains a CommandSet with a language that matches the speech language that the user selected on their device. This language is set by the user on the device Settings > System > Speech > Speech Language screen.

 

Here's a VCD file that defines a voice command for the Adventure Works app.

For this example, CommandPrefix is set to "Adventure Works", Command is set to "whenIsTripToDestination", ListenFor specifies the text that can be recognized (with a reference to a PhraseList element that constrains the recognized destinations), VoiceCommandService indicates that the voice command is handled by an app service in the background, and Feedback specifies what the user will hear when Cortana launches the app service.

The value of the Target attribute of the VoiceCommandService element needs to be the name of the AppService as specified in the package.appxmanifest. In this example, the name of the AppService is "AdventureWorksVoiceCommandService".

The "whenIsTripToDestination" command has a ListenFor element with a reference to a PhraseList for a constrained set of destinations.

ListenFor elements cannot be programmatically modified. However, the PhraseList elements associated with ListenFor elements can be. Applications should modify the content of the PhraseList at runtime based on data generated as the user uses the app (a short sketch follows the VCD example below). See How to dynamically modify VCD phrase lists.

<?xml version="1.0" encoding="utf-8"?>
<VoiceCommands xmlns="http://schemas.microsoft.com/voicecommands/1.2">
  <CommandSet xml:lang="en-us" Name="commandSet_en-us">
    <CommandPrefix> Adventure Works, </CommandPrefix>
    <Example> When is my trip to Las Vegas? </Example>

    <Command Name="whenIsTripToDestination">
      <Example> When is my trip to Las Vegas?</Example>
      <ListenFor> when is [my] trip to {destination} </ListenFor>
      <Feedback> Looking for trip to {destination} </Feedback>
      <VoiceCommandService Target="AdventureWorksVoiceCommandService"/>
    </Command>

    <PhraseList Label="destination">
      <Item> Las Vegas </Item>
      <Item> Dallas </Item>
      <Item> New York </Item>
    </PhraseList>

  </CommandSet>

  <!-- Other CommandSets for other languages -->

</VoiceCommands>
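
As noted above, the content of the "destination" PhraseList can be updated at runtime with VoiceCommandDefinitionManager. Here's a minimal sketch of one way to do that, assuming the "commandSet_en-us" CommandSet Name from the VCD above; the PhraseListUpdater class and UpdateDestinationPhraseListAsync helper are illustrative names, not part of the Adventure Works sample.

using System.Collections.Generic;
using System.Threading.Tasks;
using Windows.ApplicationModel.VoiceCommands;

public static class PhraseListUpdater
{
    // Replaces the contents of the "destination" PhraseList with the user's
    // current trip destinations. Call this after the command sets have been
    // installed, and again whenever the trip data changes.
    public static async Task UpdateDestinationPhraseListAsync(
        IEnumerable<string> destinations)
    {
        VoiceCommandDefinition commandSet;
        if (VoiceCommandDefinitionManager.InstalledCommandDefinitions.TryGetValue(
            "commandSet_en-us", out commandSet))
        {
            await commandSet.SetPhraseListAsync("destination", destinations);
        }
    }
}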

Install the VCD commands

Your app must run at least once to install the command sets defined in the VCD file.

When your app is activated, call InstallCommandSetsFromStorageFileAsync in the OnLaunched handler to register the commands that the system should listen for.

Note  If a device backup occurs and your app reinstalls automatically, voice command data is not preserved. To ensure the voice command data for your app stays intact, consider initializing your VCD file each time your app launches or activates, or store a setting that indicates if the VCD is currently installed and check the setting each time your app launches or activates.

 

Here's an example that shows how to install the commands specified by a VCD file (vcd.xml).

var storageFile = 
  await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(
    new Uri("ms-appx:///AdventureWorksCommands.xml"));
await 
  Windows.ApplicationModel.VoiceCommands.VoiceCommandDefinitionManager.
    InstallCommandSetsFromStorageFileAsync(storageFile);
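
For example, a minimal sketch of an OnLaunched handler in App.xaml.cs that installs the command sets on every launch (as the note above suggests) might look like this; the try/catch block and the Debug output are illustrative assumptions.

protected override async void OnLaunched(LaunchActivatedEventArgs e)
{
    // ... existing window activation and navigation setup ...

    try
    {
        // Install (or reinstall) the voice command sets on every launch so
        // they remain registered after reinstalls and restored backups.
        var storageFile =
            await Windows.Storage.StorageFile.GetFileFromApplicationUriAsync(
                new Uri("ms-appx:///AdventureWorksCommands.xml"));
        await Windows.ApplicationModel.VoiceCommands.VoiceCommandDefinitionManager.
            InstallCommandSetsFromStorageFileAsync(storageFile);
    }
    catch (Exception ex)
    {
        System.Diagnostics.Debug.WriteLine(
            "Installing voice commands failed: " + ex.Message);
    }
}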

Handle the voice command in the app service

Specify in the app service how your app responds to voice-command activations after it has launched and the voice command sets have been installed.

  1. Take a service deferral so your app service is not terminated while handling the voice command.

  2. Confirm that your background task is running as an app service activated by a voice command.

    1. Cast the IBackgroundTaskInstance.TriggerDetails to Windows.ApplicationModel.AppService.AppServiceTriggerDetails.
    2. Check that AppServiceTriggerDetails.Name matches the name of the app service specified in the "Package.appxmanifest" file.
  3. Use IBackgroundTaskInstance.TriggerDetails to create a VoiceCommandServiceConnection to Cortana to retrieve the voice command.

  4. Register an event handler for VoiceCommandServiceConnection.VoiceCommandCompleted to receive notification when the app service is closed due to a user cancellation.

  5. Register an event handler for IBackgroundTaskInstance.Canceled to receive notification when the app service is closed due to an unexpected failure.

  6. Determine the name of the command and what was spoken.

    1. Use the VoiceCommand.CommandName property to determine the name of the voice command.

    2. To determine what the user said, check the value of Text or the semantic properties of the recognized phrase in the SpeechRecognitionSemanticInterpretation dictionary (see the short sketch after this list).

  7. Take the appropriate action in your app service.

  8. Display and speak the feedback to the voice command with Cortana.

    1. Determine the strings that you want Cortana to display and speak to the user in response to the voice command and create a VoiceCommandResponse object. See Cortana design guidelines for guidance on how to select the feedback strings that Cortana shows and speaks.
    2. Use the VoiceCommandServiceConnection instance to report progress or completion to Cortana by calling ReportProgressAsync or ReportSuccessAsync with the VoiceCommandResponse object.
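
    A quick sketch of what step 6 can look like after GetVoiceCommandAsync returns; the "destination" key comes from the PhraseList in the VCD above, and the variable names are illustrative.

    // The raw text that the speech recognizer heard.
    string recognizedText = voiceCommand.SpeechRecognitionResult.Text;

    // The value recognized for the {destination} phrase list.
    string destination =
      voiceCommand.SpeechRecognitionResult.SemanticInterpretation.
        Properties["destination"][0];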

    For this example, we refer back to the VCD defined in the Edit the VCD file section above.

    using System.Collections.Generic;
    using System.Threading.Tasks;
    using Windows.ApplicationModel.AppService;
    using Windows.ApplicationModel.Background;
    using Windows.ApplicationModel.VoiceCommands;

    public sealed class VoiceCommandService : IBackgroundTask
    {
      private BackgroundTaskDeferral serviceDeferral;
      private VoiceCommandServiceConnection voiceServiceConnection;
    
      public async void Run(IBackgroundTaskInstance taskInstance)
      {
        // Take a service deferral so the service isn't terminated.
        this.serviceDeferral = taskInstance.GetDeferral();
    
        taskInstance.Canceled += OnTaskCanceled;
    
        var triggerDetails = 
          taskInstance.TriggerDetails as AppServiceTriggerDetails;
    
        if (triggerDetails != null && 
          triggerDetails.Name == "AdventureWorksVoiceCommandService")
        {
          try
          {
            voiceServiceConnection = 
              VoiceCommandServiceConnection.FromAppServiceTriggerDetails(
                triggerDetails);
    
            voiceServiceConnection.VoiceCommandCompleted += 
              VoiceCommandCompleted;
    
            VoiceCommand voiceCommand = await
            voiceServiceConnection.GetVoiceCommandAsync();
    
            switch (voiceCommand.CommandName)
            {
              case "whenIsTripToDestination":
              {
                var destination = 
                  voiceCommand.Properties["destination"][0];
                SendCompletionMessageForDestination(destination);
                break;
              }
    
              // As a last resort launch the app in the foreground
              default:
                LaunchAppInForeground();
                break;
            }
          }
          finally
          {
            if (this.serviceDeferral != null)
            {
              //Complete the service deferral
              this.serviceDeferral.Complete();
            }
          }
        }
      }
    
      private void VoiceCommandCompleted(
        VoiceCommandServiceConnection sender, 
        VoiceCommandCompletedEventArgs args)
      {
        if (this.serviceDeferral != null)
        {
          // Insert your code here
          //Complete the service deferral
          this.serviceDeferral.Complete();
        }
      }

      private void OnTaskCanceled(IBackgroundTaskInstance sender,
        BackgroundTaskCancellationReason reason)
      {
        // Complete the deferral if the service is canceled (for example,
        // because Cortana timed out) so the service shuts down cleanly.
        if (this.serviceDeferral != null)
        {
          this.serviceDeferral.Complete();
        }
      }
    
      private async void SendCompletionMessageForDestination(
        string destination)
      {
        // Take action and determine when the next trip to the destination occurs.
        // Insert code here.
    
        // Replace the hardcoded strings used here with strings 
        // appropriate for your application.
    
        // First, create the VoiceCommandUserMessage with the strings 
        // that Cortana will show and speak.
        var userMessage = new VoiceCommandUserMessage();
        userMessage.DisplayMessage = "Here’s your trip.";
        userMessage.SpokenMessage = "Your trip to Vegas is on August 3rd.";
    
        // Optionally, present visual information about the answer.
        // For this example, create a VoiceCommandContentTile with an 
        // icon and a string.
        var destinationsContentTiles = new List<VoiceCommandContentTile>();
    
        var destinationTile = new VoiceCommandContentTile();
        destinationTile.ContentTileType = 
          VoiceCommandContentTileType.TitleWith68x68IconAndText;
        // The user can tap on the visual content to launch the app. 
        // Pass in a launch argument to enable the app to deep link to a 
        // page relevant to the item displayed on the content tile.
        destinationTile.AppLaunchArgument = 
          string.Format("destination={0}", "Las Vegas");
        destinationTile.Title = "Las Vegas";
        destinationTile.TextLine1 = "August 3rd 2015";
        destinationsContentTiles.Add(destinationTile);
    
        // Create the VoiceCommandResponse from the userMessage and list    
        // of content tiles.
        var response = 
          VoiceCommandResponse.CreateResponse(
            userMessage, destinationsContentTiles);
    
        // Cortana will present a "Go to app_name" link that the user 
        // can tap to launch the app. 
        // Pass in a launch argument to enable the app to deep link to a 
        // page relevant to the voice command.
        response.AppLaunchArgument = 
          string.Format("destination={0}", "Las Vegas");
    
        // Ask Cortana to display the user message and content tile and 
        // also speak the user message.
        await voiceServiceConnection.ReportSuccessAsync(response);
      }
    
      private async void LaunchAppInForeground()
      {
        var userMessage = new VoiceCommandUserMessage();
        userMessage.SpokenMessage = "Launching Adventure Works";
    
        var response = VoiceCommandResponse.CreateResponse(userMessage);
    
        // When launching the app in the foreground, pass an app 
        // specific launch parameter to indicate what page to show.
        response.AppLaunchArgument = "showAllTrips=true";
    
        await voiceServiceConnection.RequestAppLaunchAsync(response);
      }
    }
    

Note  

Once launched, the app service has 0.5 seconds to call ReportSuccessAsync. Cortana uses the data provided by the app to show and speak the feedback specified in the VCD file. If the app takes longer than 0.5 seconds to make the call, Cortana inserts a hand-off screen. Cortana displays the hand-off screen until the application calls ReportSuccessAsync, or for up to 5 seconds. If the app service doesn't call ReportSuccessAsync, or any of the VoiceCommandServiceConnection methods that provide Cortana with information, the user receives an error message and the app service is canceled.
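
If the lookup takes longer than about half a second, the app service can keep the connection to Cortana open by reporting progress before the final result. Here's a minimal sketch that assumes the voiceServiceConnection field from the example above; the helper name and message strings are illustrative.

private async Task ShowProgressScreenAsync(string message)
{
    var userProgressMessage = new VoiceCommandUserMessage();
    userProgressMessage.DisplayMessage = message;
    userProgressMessage.SpokenMessage = message;

    // Reporting progress keeps the app service connected to Cortana
    // while a longer-running operation completes.
    var response = VoiceCommandResponse.CreateResponse(userProgressMessage);
    await voiceServiceConnection.ReportProgressAsync(response);
}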

 

Summary and next steps

Here, you learned how to implement basic voice commands using VCD files to launch an app in the background from Cortana and provide feedback using the Cortana canvas and voice.

Next, learn how to implement background voice commands that prompt the user, through Cortana, for confirmation and disambiguation. See How to interact with a background app in Cortana.

Cortana interactions

How to define custom recognition constraints

VCD elements and attributes v1.2

Designers

Cortana design guidelines