An accessibility case study - Reading List. Part 6: Using Narrator with touch input only

This post discusses accessibility considerations around the use of touch input to control the Narrator screen reader while interacting with the Reading List app available in the Windows 8.1 Preview.

The ability to use touch alone to interact with the Reading List app when using the Narrator screen reader is a very important scenario. The Reading List team wanted to make sure that the touch experience was not only technically available, but also as efficient as we could make it.

The accessibility features built into the WinJS framework, including those of the ListView, meant that by default the customer using Narrator could interact with their list through touch. For example, by touching a list item, the item's content and position in the list get announced, much as if a keyboard had been used to set focus to the item. In the touch case, Narrator goes on to say "double tap to activate, triple tap to select". And sure enough, if the customer double taps, the item is invoked, and if they triple tap, the item gets selected. Given that so much Narrator touch functionality worked automatically, the Reading List team could focus on whether there was any specific action we could take to make the interaction more efficient.
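
To give a flavour of how little app-specific code is involved, here's a minimal sketch of a WinJS ListView declaration. (This isn't the actual Reading List markup; the ids, data source and options shown here are my own assumptions.)

    <!-- Hypothetical ListView declaration. The WinJS framework supplies the
         UI Automation support that Narrator's touch interaction relies on. -->
    <div id="listView"
        data-win-control="WinJS.UI.ListView"
        data-win-options="{ itemDataSource: MyData.items.dataSource,
                            itemTemplate: select('#itemTemplate'),
                            selectionMode: 'single' }">
    </div>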

On a side note, to learn what specific actions are available on an element, first move the Narrator cursor to the element by touching it. (The “Narrator cursor” is represented by the blue rectangle that appears when Narrator’s running.) Then do a four-finger tap on the screen to invoke a list of item-specific actions. To close that list of actions, touch the “Close” button shown on the list, and then double tap anywhere to invoke that button.

There's one very important point that I'd like to make related to verifying that the app provides a great touch experience with Narrator. When analysing the app experience, it's tempting for a sighted person to touch directly on some visuals shown on the screen, and consider the result of doing that. If the expected text gets spoken, they might think that all is well, and then continue the app verification by touching directly on some other visuals on the screen. While having the expected results spoken here is a necessary condition for providing a great touch experience, it's far from sufficient. Many customers who are blind won't be bringing their fingers down on the screen such that they immediately hit some UI of interest. Some customers may be able to move their finger around the screen until they find a particular element, but that can be time-consuming and impractical if the UI visuals are small. It'll certainly often be frustrating to have to do this. Rather, the customer wants to be able to efficiently move the Narrator cursor from where it currently happens to be, over to the element of interest, and then proceed to work at that element. As such, it's important to allow the customer to rapidly move through the app UI by using flick or swipe gestures.

 

Gestures to efficiently move through the app

Narrator supports a collection of gestures, such as the double tap and triple tap mentioned earlier. The set of supported gestures also includes flicks and swipes, which help the Reading List customer interact efficiently with the app's UI. The Reading List team did not want customers to have to spend time moving their finger around the screen searching for elements. The following sections describe the various flick and swipe gestures that we felt would be valuable while interacting with the app.

(I should be clear that this assumes the customer’s device supports four or more contact points. The gestures below aren’t supported on older touch devices that support fewer contact points.)

 

1. Flick left or right with one finger

Flicking left or right moves the Narrator cursor to what Narrator feels is an appropriate "previous" or "next" element in the app UI. Whether these elements are keyboard focusable is irrelevant. Customers may often perform these gestures while they're getting familiar with an app, in order to learn about all the elements shown on the screen.

Don't ask me why a single-finger “flick” isn't called a "swipe" like the other gestures mentioned below are.

Performing these flick gestures in Reading List will move the Narrator cursor to some elements that have visuals shown on the screen and to some that do not. For example, the customer might hear "Reading List" if they move the Narrator cursor to one of the containers in the app UI structure, or "App page content" if they reach the main section of the app. That main section is defined as follows:

    <section id="mainSection" data-win-res="{attributes: {'aria-label' : 'mainPageContentLabel'}}" role="main">
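
The data-win-res attribute means that the label is pulled from the app's resources.resjson file when WinJS.Resources.processAll() runs. The corresponding entry would look something like this. (The exact string value is my assumption, based on the "App page content" announcement mentioned above.)

    {
        "mainPageContentLabel": "App page content"
    }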

Many of the Reading List app’s UI elements that are reported by the Inspect SDK tool will be reachable when the customer is using the single-finger flick gesture in the app. However, unless the customer is exploring the app's UI as part of getting familiar with the app, moving the Narrator cursor through many of the elements will not be particularly interesting.

Where the single-finger flick left and right gestures are very interesting is when the Narrator cursor is on the list of items. By default, the single-finger flick gesture will move the Narrator cursor item-by-item. So if the Narrator cursor is on a list item, and the customer does a right flick, the Narrator cursor will move to the next item. And because the item's title will be spoken first, the customer can soon determine whether the item is of interest. This means the customer can perform multiple flick gestures in quick succession until they reach an item they want to invoke.
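
Having the title spoken first simply falls out of the DOM order in the item template. The #itemTemplate referenced in the earlier sketch might look something like this (the class names and bound properties are hypothetical):

    <!-- Hypothetical item template: the title element comes first in the
         DOM, so Narrator speaks it before the rest of the item's content. -->
    <div id="itemTemplate" data-win-control="WinJS.Binding.Template">
        <div class="readingListItem">
            <h4 class="itemTitle" data-win-bind="textContent: title"></h4>
            <h6 class="itemDomain" data-win-bind="textContent: domain"></h6>
        </div>
    </div>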

If a right flick is done while the Narrator cursor is on the last item in a group, then the Narrator cursor moves to the heading of the next group. For example, the customer using Narrator might hear "Last week, separator". The term "separator" here indicates to the customer that they're moving between groups.

As it happens, after having moved to a group header, another right flick moves the Narrator cursor on to the text shown in the header. This isn't really of interest to the customer because they already know which header they're at. So they then do another right flick to move on to the first item in the list.
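
Those group headers exist because the ListView is bound to a grouped data source. Here's a hedged sketch of that wiring, shown imperatively since the post doesn't include Reading List's actual code. (The MyData list and the ageGroup grouping key are my assumptions.)

    // Hypothetical: group the items into age buckets such as "Last week".
    // createGrouped is a standard WinJS.Binding.List method.
    var groupedItems = MyData.list.createGrouped(
        function groupKey(item) { return item.ageGroup; },
        function groupData(item) { return { title: item.ageGroup }; });

    var listView = document.getElementById("listView").winControl;
    listView.itemDataSource = groupedItems.dataSource;
    listView.groupDataSource = groupedItems.groups.dataSource;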

Note that if the customer wants to explore the contents of the items in the list in more detail, they can do a single-finger flick down gesture to change the way Narrator navigates. For example, Narrator could move line-by-line or word-by-word through the items. Typically, though, the default item-by-item navigation is used to move through the list content.

So for efficient navigation through the items in a list, the customer can use the single-finger flick left or right gestures.

 

2. Swipe left or right with two fingers

If the list is fairly large, the customer may well not want to use the single-finger flick right gesture to move to every item in turn. Instead, the customer may want to scroll page-by-page through the list for a while. To do this, once the Narrator cursor is on an item in the list, a two-finger swipe left or right will scroll a page of items. Following a scroll like this, Narrator will make a swooshing sound to indicate that the scroll has occurred, and then announce the new scroll position of the list. For example, "Scroll to 83%". Trying to scroll beyond the ends of the list will result in an announcement of "Cannot scroll further left" or "Cannot scroll further right".

It’s important to note that the scroll itself does not move the position of the Narrator cursor. So if the customer scrolls a few times and then wants to move through the items that are now being shown on the screen, they need to touch some item on screen before doing the single-finger right flick gestures.

 

3. Swipe left or right with three fingers

This is where things get really interesting!

Navigating around the list of items item-by-item or page-by-page is great in some situations. But when the customer has many items and they want to find a specific one quickly, it may be most efficient to use the SearchBox. And if the SearchBox is being used, it’s important that our customers can move quickly between the SearchBox and the list. In this situation, the most efficient form of navigation relates to the keyboard focus.

Keyboard focus cycles through three areas in Reading List: the current item in the list, the group header associated with that item, and the SearchBox. So leveraging the path that keyboard focus takes is a great way to navigate through the most important areas in the app, without encountering some of the less interesting elements that would be hit when using the single-finger flick navigation.
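
As a side note, the SearchBox mentioned here is a standard WinJS 2.0 control, and declaring one looks something like this. (The id and placeholder text are my assumptions.)

    <!-- Hypothetical SearchBox declaration. -->
    <div id="searchBox"
        data-win-control="WinJS.UI.SearchBox"
        data-win-options="{ placeholderText: 'Search' }">
    </div>

As the customer types, the control's querychanged event gives the app a chance to filter the list and announce the updated result count, presumably through a live region as discussed in Part 5.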

In order to navigate through those key areas of interest, the customer might choose to do a left or right three-finger swipe.

To illustrate this, I just performed the following actions on my Windows 8.1 Preview device:

- I get to a point where the Narrator cursor is on one of my 52 items. I decide I want to find all items relating to Jasmine, (my dog).
- I do a 3-finger left swipe to move focus, and hear "Search button".
- I do a double tap to invoke the Search button, and hear "SearchBox, search, enter to submit query, e-s-c to clear text, editing" when the Search edit control is shown.
- I do a double tap while the Narrator cursor is in the Search edit control in order to have the Touch Keyboard shown. (The device has no physical keyboard attached). I hear "Insertion point at end of line" because the Search edit control is empty at this time.
- I touch to find the 'j' key on the Touch Keyboard and double tap to invoke it. I hear 'j', and then a couple of seconds later I hear "11 results found”.
- I touch to find the 'a' key and double tap to invoke it. I hear 'a', and then a couple of seconds later I hear "2 results found”.
- I do a right 3-finger swipe to move keyboard focus away from the SearchBox. I hear "Search button". (The Touch keyboard is hidden at this point because the element with keyboard focus is not a text-related field.)
- At this point, the navigation in the Windows 8.1 Preview does not behave as I expect. I would want another 3-finger swipe to move keyboard focus into the list, but instead focus moves back to the Search edit field. So instead I do a single-finger right flick a few times until I hear details of the first search result. After announcing the item content, I hear "1 of 2".
- I then do a right flick to move to the next (and last) item in the list, and double tap to invoke it.

So despite the unexpected need to single-finger flick right a few times to move from the Search UI into the results list, rather than doing fewer 3-finger swipes, performing a search when using Narrator with touch often means our customers can find an item of interest far more quickly than by moving through a long list of items directly.

 

Hit targeting

While all the useful flick and swipe functionality described above worked by default in the Reading List app, there was one explicit change related to touch input that we did have to make. For some UI that appears visually on the screen as a single element, there may be multiple associated elements in the DOM. A key piece of logic involved in controlling a screen reader through touch is deciding exactly which element has been hit by the finger. A customer using Narrator would not want to attempt to touch some visual element, only to find that attempt blocked because Narrator is actually told that some non-interactable, higher-level container element has been hit.

By the way, it's important to remember that not all customers who use Narrator are blind. Customers with different levels of sight can prefer working at the device with Narrator running as an additional tool to help with productivity.

While developing the Reading List app, we found that Narrator could announce details of elements that had a background-color style of "transparent", even though we were attempting to touch other elements shown visually on the screen. It turned out that just because a container element appears transparent on the screen, that doesn't mean it won't be found to be the target of touch input. To fix this issue, we added the following style to the transparent elements that we didn't want to be hit:

    /* Only content that's actually painted should be the target of touch
       input, so this transparent element no longer swallows hit-testing. */
    pointer-events: visiblePainted;

 

With that style in place, Narrator announces all the visual elements as expected when the customer touches them.
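
In context, that declaration lives inside a normal CSS rule. The selector below is hypothetical, since the post doesn't name the affected container elements:

    /* Hypothetical rule for a transparent container that shouldn't be
       reported as the target of touch input. */
    .transparentContainer {
        background-color: transparent;
        pointer-events: visiblePainted;
    }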

It's always interesting to consider how the touch experience could be enhanced. For example, if the customer touches the visual text "Last week" in a group header, Narrator will announce "Last week". But nothing is spoken when the big coloured group header around the text is touched. Given that the big header area is far easier to hit than the small text string, it would be preferable for a hit anywhere in that group header to result in "Last week" being announced. While this optimization is interesting to consider, it's expected that most customers will more commonly be using flicks and swipes to interact with the app, rather than moving their finger around the screen in order to find different elements shown above the list.
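
For what it's worth, one hypothetical way to get a hit anywhere in the header announced, (and to be clear, this isn't something the Reading List app does), would be to give the header's container element an accessible name of its own:

    <!-- Hypothetical enhancement: naming the whole header container gives
         Narrator something to announce when any part of the coloured
         header area is touched, not just the small text element. -->
    <div class="groupHeaderContainer" aria-label="Last week">
        <h2 class="groupHeaderText">Last week</h2>
    </div>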

Sometimes when touching a text string, Narrator speaks an individual word from the string rather than the whole string. And sometimes it follows up with "double tap to place text insertion point, triple tap to enter selection mode". This unexpected announcement is not due to any action taken by the Reading List app, and so isn't something that the app can address. Again, this does not have a big impact on the customer, given that other types of interaction are more common and efficient.

 

Summary

All functionality provided by an app should be accessible to all its customers. So it was important for the Reading List team to consider whether a customer using Narrator with touch is blocked from any app functionality that’s available to a sighted customer.

The only touch-specific change the Reading List team made was to enable hit-testing through a transparent container element. All of the functionality around single-finger flicks and multi-finger swipes worked by default. This was hugely important to the development of the app, as these gestures are critical to providing the most efficient interactions for customers. It meant the Reading List team could spend its time trying out all this gesture interaction and carefully considering the efficiency of the resulting experience.

While it was necessary for us to verify that a touch on an element generated the expected results, it was not sufficient to simply prod an element on the screen, check the result of that prod, and consider our verification done. We had to consider how the customer navigates to and from that element via touch in the most efficient manner possible during commonly performed operations.

Our goal was for the Reading List app to be both fully accessible and efficient to use.

 

And one last comment…

If you have sight, and you’re looking at the screen while verifying that your customers can fully leverage the functionality of your app when using Narrator with touch, you’re not really verifying this at all.

To understand the experience you're shipping, it's necessary to run through end-to-end scenarios without seeing what's on the screen at any point. It may be tempting to take a sneak peek at something while running through a scenario, (say because there was no announcement of something appearing on the screen, and flick or swipe gestures didn't take you to that new UI), but that sneak peek might be an indication that your app is actually unusable. The whole end-to-end scenario might be broken due to one moment where your customer's blocked.

It’s really important to your customers that you run through all your end-to-end scenarios while not looking at the screen at all.

  

An accessibility case study - Reading List. Part 1: Introduction
An accessibility case study - Reading List. Part 2: Colours and Contrast
An accessibility case study - Reading List. Part 3: Keyboard accessibility
An accessibility case study - Reading List. Part 4: Programmatic accessibility
An accessibility case study - Reading List. Part 5: Live Regions
An accessibility case study - Reading List. Part 6: Using Narrator with touch input only
