Special Windows 10 issue 2015

Volume 30 Number 11

Digital Ink - Ink Interaction in Windows 10

By Connor Weins | Windows 2015

In this article, I’ll discuss how you can promote natural user interaction using inking. Digital ink, much like a pen on paper, flows from the tip of your digital pen device, stylus, finger or mouse and is rendered on the screen. Before getting into digital ink and inking in Windows 10, I’ll start by addressing a fundamental question: Why is it important for you to use ink in your app?

Humans have been conveying thoughts and ideas through handwriting for centuries. Despite the invention of the mouse and keyboard, the pen-and-paper approach still plays a key role in our lives—from the sticky notes and whiteboards in our offices, to the notebooks in our schools and the coloring books in our children’s hands.

Pen-and-paper is immediate, freeform and uniquely personal to each of us, which makes it ideal for people to use to express their creativity and emotion. The act of converting thoughts into text and diagrams into notes also makes handwriting better for thinking, remembering and learning, according to a 2014 study published in Psychological Science (bit.ly/1tKDrhv), in which researchers from Princeton and UCLA found that handwritten notes were significantly better than typed notes for long-term comprehension.

With these advantages of ink over traditional typed keyboard input, imagine if you could make inking on devices as easy as pen-and-paper and harness the processing power of the computer to do things you can’t do in the physical world. With digital ink, you can easily vary the color and appearance of the ink just like in the real world, but take it one step further, analyzing the content and shape of the ink to provide metadata, or converting ink into other content such as text, shapes or commands. This provides a whole new dimension to inking that can’t be replicated on your everyday notebook, making digital ink a powerful tool for drawing, note-taking, annotating and interacting in your app. As the pen- and touch-enabled device market continues to expand, inking will become a critical method of interaction for users and app developers.

In Windows 10, it’s easy for you to bring digital inking into your app through the DirectInk platform. DirectInk provides a set of rich and extensible Windows Runtime (WinRT) APIs that allow you to collect, render and manage ink in a Universal Windows Platform app. By using DirectInk, you get the same great ink experience and performance used by the Microsoft Edge browser, Universal OneNote and the Handwriting Panel. Here’s a quick overview of the features DirectInk offers your app:

  • Beautiful Ink: DirectInk uses input smoothing and Bezier rendering algorithms to ensure your ink always looks crisp and beautiful for both touch and pen input.
  • Low Latency, Low Memory: DirectInk uses a high-priority background thread and input prediction to ensure ink is always immediate and responsive to users, and it manages resources effectively to keep your app’s overhead low.
  • Simple and Extensible API Surface: DirectInk offers APIs such as the InkCanvas and InkPresenter that allow you to quickly get started collecting and managing ink, and that expose advanced capabilities for building rich and complex inking features in your app.

Hopefully by now you’re excited to get started using digital ink in your app. I’ll now take a look at how you can leverage the DirectInk platform in your app and give users a great inking experience.

Collecting Ink in Your App

To get started using digital ink, the first step is to set up a surface where input can be collected and rendered as ink. In Windows 8.1 Store apps, bringing inking into your app was an extended process that involved creating a Canvas, listening to input events, and creating and rendering strokes piece-by-piece using your own rendering code. For a Universal Windows app, beginning to collect ink is as simple as dropping an InkCanvas into your app:

<Grid>
  <InkCanvas x:Name="myInkCanvas"/>
</Grid>

As you can see in Figure 1, this single line of code gives you a transparent overlay that begins collecting pen input and rendering that input as a black ballpoint pen. The pen’s eraser button will also erase any collected ink it comes in contact with. While this is great for getting started with inking, what if you want to change how ink is collected or displayed?

Figure 1 Using the InkCanvas to Collect Ink Resembling a Black Ballpoint Pen

Through the InkCanvas, you can access your InkPresenter, which exposes functionality for controlling the appearance and input configuration of your ink. While pen input provides the best UX for inking, many systems don’t come equipped with a pen. The InkPresenter lets you collect ink for any combination of pen, touch and mouse input, and input types that you don’t select will simply be delivered as pointer events to the InkCanvas XAML element. Through the InkPresenter, you can also manage the default drawing attributes of ink collected on the InkCanvas, allowing you to change the brush size, color and more. As an example of these features, your app could configure your InkCanvas to collect ink for pen, touch and mouse input, and emulate a calligraphy brush by doing the following:

InkPresenter myPresenter = myInkCanvas.InkPresenter;
myPresenter.InputDeviceTypes = Windows.UI.Core.CoreInputDeviceTypes.Pen |
                               Windows.UI.Core.CoreInputDeviceTypes.Touch |
                               Windows.UI.Core.CoreInputDeviceTypes.Mouse;
InkDrawingAttributes myAttributes = myPresenter.CopyDefaultDrawingAttributes();
myAttributes.Color = Windows.UI.Colors.Crimson;
myAttributes.PenTip = PenTipShape.Rectangle;
myAttributes.PenTipTransform =
  System.Numerics.Matrix3x2.CreateRotation((float) Math.PI/4);
myAttributes.Size = new Size(2,6);
myPresenter.UpdateDefaultDrawingAttributes(myAttributes);

That would produce the result shown in Figure 2.

Figure 2 Emulating a Calligraphy Brush Using the InkPresenter DrawingAttributes

DirectInk supports many more built-in configurations for input and ink rendering, allowing you to render ink as a highlighter, receive an event when a stroke is collected, access fine-grained input events and ink with multiple pointers in advanced configurations.
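
For instance, here’s a minimal sketch (reusing the myInkCanvas from the earlier snippets) that switches the default drawing attributes to the built-in highlighter rendering and subscribes to the StrokesCollected event; the color and size values are just illustrative:

InkDrawingAttributes highlightAttributes = new InkDrawingAttributes();
highlightAttributes.Color = Windows.UI.Colors.Yellow;
// Render strokes as translucent highlighter ink
highlightAttributes.DrawAsHighlighter = true;
highlightAttributes.Size = new Size(10, 20);
myInkCanvas.InkPresenter.UpdateDefaultDrawingAttributes(highlightAttributes);

// Raised after DirectInk finishes collecting each stroke
myInkCanvas.InkPresenter.StrokesCollected += (presenter, args) =>
{
  // args.Strokes holds the newly collected InkStroke objects
  int newStrokeCount = args.Strokes.Count;
};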

Editing, Saving and Loading Ink

So now that you’ve collected some ink, what can you do with it? Users frequently want the ability to erase or edit the ink they collected, or save that ink to access later. In order to provide this experience to your users, you’ll have to access and modify the ink data for the strokes that DirectInk has rendered on the screen.

When ink is collected on your InkCanvas, DirectInk stores it within an InkStrokeContainer inside the InkPresenter. This InkStrokeContainer holds WinRT objects representing each of the strokes currently on your canvas, and as your app makes changes to this container, those changes are also rendered on the screen. This lets you programmatically add, remove or modify strokes, and allows DirectInk to keep you informed of any changes it makes to the strokes on the screen. Let’s take a look at some of the common user interactions you can implement using the connection between your InkPresenter and its InkStrokeContainer.
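
As a simple illustration of that connection, here’s a minimal sketch (a hypothetical Clear button handler, using the same myInkCanvas as before) that wipes the canvas purely by operating on the container; DirectInk removes the corresponding strokes from the screen for you:

private void ClearAll_Click(object sender, RoutedEventArgs e)
{
  // GetStrokes exposes the InkStroke objects DirectInk currently manages
  IReadOnlyList<InkStroke> currentStrokes =
    myInkCanvas.InkPresenter.StrokeContainer.GetStrokes();

  // Removing strokes from the container also removes them from the screen
  if (currentStrokes.Count > 0)
  {
    myInkCanvas.InkPresenter.StrokeContainer.Clear();
  }
}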

Erasing While the InkCanvas supports erasing with the pen’s eraser button by default, it requires some configuration in the InkPresenter to erase ink for mouse and touch input. DirectInk offers built-in support for erasing ink for any supported input through the Mode property in the InputProcessingConfiguration. Here’s an example of a button that sets Erasing mode: 

private void Eraser_Click(object sender, RoutedEventArgs e)
{
  myInkCanvas.InkPresenter.InputProcessingConfiguration.Mode =
    InkInputProcessingMode.Erasing;
}

When this button is pressed, all input DirectInk collects on the InkCanvas will be treated as an eraser. If the user’s input intersects with a stroke after this mode is set, that stroke will be deleted from the InkPresenter’s InkStrokeContainer and removed from the screen. When using a pen in Inking mode, the eraser button will always be treated as Erase mode.
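
To let the user switch back to drawing, a companion button (a hypothetical Pen_Click handler mirroring Eraser_Click) simply restores Inking mode:

private void Pen_Click(object sender, RoutedEventArgs e)
{
  // Input is once again processed and rendered as ink
  myInkCanvas.InkPresenter.InputProcessingConfiguration.Mode =
    InkInputProcessingMode.Inking;
}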

Selection Unfortunately, DirectInk doesn’t have support for built-in selection at this time, but it does offer a way to develop it yourself through the Unprocessed Input events. Unprocessed events are raised whenever DirectInk receives input that it has been told to listen to, but not to render into ink. This can be done for all input by setting the DirectInk processing configuration mode to None, and can also be configured to happen just for the mouse right button and pen barrel button using the RightDragAction property:

myInkCanvas.InkPresenter.InputProcessingConfiguration.RightDragAction =
  InkInputRightDragAction.LeaveUnprocessed;

As an example, Figure 3 shows how you can use the Unprocessed Input events to build a selection lasso (shown in Figure 4) that selects strokes on the screen.

Figure 3 Using Unprocessed Input Events To Make a Selection Lasso

...
myInkCanvas.InkPresenter.UnprocessedInput.PointerPressed += StartLasso;
myInkCanvas.InkPresenter.UnprocessedInput.PointerMoved += ContinueLasso;
myInkCanvas.InkPresenter.UnprocessedInput.PointerReleased += CompleteLasso;
...
private void StartLasso(
  InkUnprocessedInput sender, Windows.UI.Core.PointerEventArgs args)
{
  selectionLasso = new Polyline()
  {
    Stroke = new SolidColorBrush(Windows.UI.Colors.Black),
    StrokeThickness = 2,
    StrokeDashArray = new DoubleCollection() { 7, 3},
  };
  selectionLasso.Points.Add(args.CurrentPoint.RawPosition);
  AddSelectionLassoToVisualTree();
}
private void ContinueLasso(
  InkUnprocessedInput sender, Windows.UI.Core.PointerEventArgs args)
{
  selectionLasso.Points.Add(args.CurrentPoint.RawPosition);
}
private void CompleteLasso(
  InkUnprocessedInput sender, Windows.UI.Core.PointerEventArgs args)
{
  selectionLasso.Points.Add(args.CurrentPoint.RawPosition);
  bounds =
    myInkCanvas.InkPresenter.StrokeContainer.SelectWithPolyLine(
    selectionLasso.Points);
  DrawBoundingRect(bounds);
}

Figure 4 Selecting Strokes with a Lasso

After you’ve selected strokes, you can use the InkStrokeContainer.MoveSelected method to translate the strokes, or use the InkStroke.PointTransform property to apply an affine transform to the strokes. When a stroke or set of strokes managed by the InkStrokeContainer is transformed in this way, DirectInk will pick up these changes and render them to the screen.
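
Here’s a minimal sketch of both approaches, assuming strokes have already been selected with SelectWithPolyLine as in Figure 3 (the translation and scale values are arbitrary):

// Translate the selected strokes 40 pixels to the right;
// MoveSelected returns the new bounding rectangle of the selection
Rect newBounds = myInkCanvas.InkPresenter.StrokeContainer.MoveSelected(
  new Point(40, 0));

// Apply an affine transform to each selected stroke;
// DirectInk re-renders the transformed strokes automatically
foreach (InkStroke stroke in
  myInkCanvas.InkPresenter.StrokeContainer.GetStrokes())
{
  if (stroke.Selected)
  {
    stroke.PointTransform *=
      System.Numerics.Matrix3x2.CreateScale(1.5f);
  }
}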

Saving and Loading DirectInk natively supports ink saving and loading through the Ink Serialized Format (ISF), which saves ink in a vector format that makes it simple to share and edit. This functionality is accessible through the InkStrokeContainer’s SaveAsync and LoadAsync methods.

SaveAsync takes the stroke data currently stored in the InkStrokeContainer and saves it as a GIF file with embedded ISF data. Figure 5 shows how to save ink from your InkStrokeContainer.

Figure 5 Saving Ink from InkStrokeContainer

var savePicker = new FileSavePicker();
savePicker.SuggestedStartLocation =
  Windows.Storage.Pickers.PickerLocationId.PicturesLibrary;
savePicker.FileTypeChoices.Add("Gif with embedded ISF",
  new System.Collections.Generic.List<string> { ".gif" });
StorageFile file = await savePicker.PickSaveFileAsync();
if (null != file)
{
  try
  {
    using (IRandomAccessStream stream =
      await file.OpenAsync(FileAccessMode.ReadWrite))
    {
      await myInkCanvas.InkPresenter.StrokeContainer.SaveAsync(stream);
    } 
  }
  catch (Exception ex)
  {
    GenerateErrorMessage();
  }
}

LoadAsync will perform the opposite function, clearing the strokes already in your InkStrokeContainer and loading a new set of strokes from an ISF file or GIF file with embedded ISF data. After strokes are loaded into the InkStrokeContainer, DirectInk will automatically render them onto the screen.
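
Here’s a minimal sketch of the loading side, mirroring the save example in Figure 5 with a FileOpenPicker (the file-type filters are just an example):

var openPicker = new FileOpenPicker();
openPicker.SuggestedStartLocation =
  Windows.Storage.Pickers.PickerLocationId.PicturesLibrary;
openPicker.FileTypeFilter.Add(".gif");
openPicker.FileTypeFilter.Add(".isf");
StorageFile file = await openPicker.PickSingleFileAsync();
if (null != file)
{
  using (IRandomAccessStream stream =
    await file.OpenAsync(FileAccessMode.Read))
  {
    // Clears any existing strokes, loads the new ones and renders them
    await myInkCanvas.InkPresenter.StrokeContainer.LoadAsync(stream);
  }
}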

Advanced Inking Functionality

While the ability to edit and manipulate ink on the screen is critical for developing user interaction with ink, it might not be enough to suit all your needs. The more creative the interactions you want your app to support, the more your app will have to break out of the default set of interactions DirectInk provides. Let’s dig into a few of the ways DirectInk enables you to build rich and differentiated inking features:

Ink Recognition Ink is more than just the pixels on the screen. A user’s ink can be interpreted as a picture, diagram, shape or text. Recognizing the content of the user’s ink allows you to associate the ink with its meaning or exchange the ink with the content it represents. For example, if a user is writing text in a note-taking app, your app can recognize the text the ink represents and use that text data for generating search results when the user enters a query into a search bar. Recognizing text in this way is powered by the InkRecognizerContainer. Figure 6 shows how to use the InkRecognizerContainer to interpret ink as Simplified Chinese characters.

Figure 6 Interpreting Ink as Simplified Chinese Characters Using the InkRecognizerContainer

async void OnRecognizeAsync(object sender, RoutedEventArgs e)
{
  InkRecognizerContainer recoContainer = new InkRecognizerContainer();
  IReadOnlyList<InkRecognizer> installedRecognizers =
    recoContainer.GetRecognizers();
  foreach (InkRecognizer recognizer in installedRecognizers)
  {
    if (recognizer.Name.Equals("Microsoft 中文(简体)手写识别器"))
    {
      recoContainer.SetDefaultRecognizer(recognizer);
      break;
    }
  }
  var results = await recoContainer.RecognizeAsync(
    myInkCanvas.InkPresenter.StrokeContainer, InkRecognitionTarget.All);
  if (results.Count > 0)
  {
    string str = "Result:";
    foreach (var r in results)
    {
      str += " " + r.GetTextCandidates()[0];
    }
  }
}

While this will allow you to recognize ink as text, the InkRecognizerContainer does have a limitation in that it currently supports recognizing text from only 33 different language packs. If you wanted to recognize text from another language, or recognize symbols, shapes or other more abstract ink interpretations, you’d have to build that logic from scratch. Fortunately, the InkStroke object offers the GetInkPoints function, which allows you to get the x/y position of each of the input points used to construct the stroke. From there, you can build an algorithm to analyze the input points of a stroke or set of strokes and interpret them however you want—as symbols, shapes, commands or anything your imagination can think of!
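
As a starting point for that kind of custom analysis, here’s a minimal sketch that walks the ink points of every stroke on the canvas and computes a per-stroke bounding box, which your own shape or symbol classifier (not shown) could then consume:

foreach (InkStroke stroke in
  myInkCanvas.InkPresenter.StrokeContainer.GetStrokes())
{
  double minX = double.MaxValue, minY = double.MaxValue;
  double maxX = double.MinValue, maxY = double.MinValue;

  // Each InkPoint carries the x/y position (and pressure) of an input point
  foreach (InkPoint point in stroke.GetInkPoints())
  {
    minX = Math.Min(minX, point.Position.X);
    minY = Math.Min(minY, point.Position.Y);
    maxX = Math.Max(maxX, point.Position.X);
    maxY = Math.Max(maxY, point.Position.Y);
  }

  Rect strokeBounds = new Rect(minX, minY, maxX - minX, maxY - minY);
  // Feed strokeBounds or the raw points into your own recognition logic
}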

Independent Input DirectInk is a powerful engine for rendering ink that operates under a simple set of rules for input: either render ink for a given stroke, or don’t. To make this decision, it looks at the supported input types, along with the mode and right-drag action settings in its input processing configuration. That leaves out a lot of the context your app might want to bring to inking: your app might not allow inking in certain sections of the canvas, or might support a gesture that should stop inking once it’s recognized. To enable you to make decisions like this, DirectInk offers access to input before it begins processing it, through the Independent Input events. These events allow you to inspect the input before DirectInk has rendered it, so if you receive a pressed event in an area where inking isn’t allowed, or a move event that completes a gesture you’ve been looking for, you can simply mark the event as Handled.

When the event is marked as Handled, DirectInk will stop processing the stroke; if a stroke was already in progress, it will be canceled and removed from the screen. You need to be careful when using these events, though. Because they occur on the DirectInk background thread instead of the UI thread, any heavy processing in the event handler, or any waiting on work running on a slower thread such as the UI thread, can introduce lag that affects the responsiveness of your ink.
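
Here’s a minimal sketch of that pattern using CoreInkIndependentInputSource, the Windows.UI.Input.Inking.Core type that surfaces these events; IsInkingAllowed is a hypothetical hit test in your app:

var coreInput =
  CoreInkIndependentInputSource.Create(myInkCanvas.InkPresenter);
coreInput.PointerPressing += (CoreInkIndependentInputSource sender,
  Windows.UI.Core.PointerEventArgs args) =>
{
  // Runs on the DirectInk background thread, so keep this check cheap
  if (!IsInkingAllowed(args.CurrentPoint.Position))
  {
    // Handling the event cancels the stroke before it's rendered
    args.Handled = true;
  }
};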

Custom Dry One of the most complex features of DirectInk is the Custom Drying mode, which allows your app to render and manage completed or “dry” ink strokes on your own DirectX surface, while letting DirectInk handle performant rendering of the in-progress or “wet” ink strokes. Although DirectInk’s default drying mode can handle most scenarios you might want to enable in your app, a few scenarios require you to manage ink independently:

  • Interleaving ink and non-ink content (text, shapes) while maintaining z-order
  • Performant panning and zooming on a large ink canvas with a large number of ink strokes
  • Drying ink synchronously into a DirectX object like a straight line or shape

In Windows 10, Custom Drying mode supports synchronization with a SurfaceImageSource (SIS), or VirtualSurfaceImageSource (VSIS). Both SIS and VSIS provide a DirectX shared surface for your app to draw into and compose, although VSIS provides a virtual surface that’s larger than the screen for performant panning and zooming. Because visual updates to these surfaces are synced to the XAML UI thread, when ink is rendered to an SIS or VSIS it can be removed from the DirectInk wet layer simultaneously. Custom Drying also supports drying ink to a SwapChainPanel, but doesn’t guarantee synchronization. Because SwapChainPanel isn’t synchronized with the UI thread, there will be a small overlap between when the ink is rendered to your SwapChainPanel and when ink is removed from the DirectInk wet ink layer.

When you activate Custom Drying, you gain fine-grained control over much of the functionality DirectInk provides by default, allowing you to build logic for how ink is rendered and erased from your dry surface and to determine how ink stroke data is managed by your app. To help you build this functionality, many of the DirectInk components are available as standalone objects to fill in the gaps. When Custom Drying is activated, DirectInk provides an InkSynchronizer object, which allows you to begin and end the drying process so ink is removed from the DirectInk wet layer in sync with when you add it to your custom dry layer. The default dry-ink rendering logic is also available through InkD2DRenderer, to ensure your ink appearance is consistent between the wet and dry layers. For erasing, you can use the Unprocessed Input events to build erase logic similar to the earlier example.
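
Here’s a minimal sketch of that handshake; RenderDryStrokesToMySurface is a hypothetical method that draws the strokes into your app’s own SIS, VSIS or SwapChainPanel:

// Activate custom drying once, before any strokes are collected
InkSynchronizer inkSync =
  myInkCanvas.InkPresenter.ActivateCustomDrying();

myInkCanvas.InkPresenter.StrokesCollected += (presenter, args) =>
{
  // Take ownership of the completed ("wet") strokes
  IReadOnlyList<InkStroke> strokesToDry = inkSync.BeginDry();

  // Render them into your app's own surface
  RenderDryStrokesToMySurface(strokesToDry);

  // Tell DirectInk to remove them from its wet layer
  inkSync.EndDry();
};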

For more info and examples of using Custom Drying, check out the ComplexInk sample available on GitHub at bit.ly/1NkRjt7.

Start Creating with Ink

Using what you’ve learned so far about the InkCanvas, InkPresenter and InkStrokeContainer, you can now collect ink for different types of input, customize how your ink appears on the screen, access stroke data, and have your changes reflected in the ink strokes DirectInk renders. With that simple level of functionality, you can build a wide range of user interactions, from simple doodling to more scenario-focused features such as note-taking and collecting a user’s signature. You also have the tools to build more complex interactions through the InkRecognizerContainer, Independent Input events and Custom Drying mode.

With these tools at your disposal, your app should be able to leverage all of the benefits that digital ink can provide to give a great inking experience to your users. As the number of pen- and touch-enabled devices continues to increase, providing a great inking experience to your customers will become even more important for user satisfaction and app differentiation. Hopefully you can take some time to think about how digital inking can work in your app and start experimenting with DirectInk.

As a final note, inking continues to be an important area of investment for Microsoft, and one of the biggest keys to improving and expanding the DirectInk platform for future releases is feedback from our developer community. If you have any questions, comments or ideas while developing with DirectInk, please send them to DirectInk@microsoft.com.


Connor Weins is a program manager working on the Pen, Stylus and Inking team in the Windows Developer Ecosystem Platform group. Reach him at conwei@microsoft.com.

Thanks to the following Microsoft technical experts for reviewing this article: Krishnan Menon and Xiao Tu
Krishnan Menon is a senior software engineer on the Windows Core Input Platform team. In Windows 10, he worked on the design and development of the DirectInk platform and the Universal Windows APIs for Pen and Ink.

Xiao Tu is a principal software engineering lead on the Windows Core Input Platform team. For Windows 10, he has been working on delivering Universal Windows Platform APIs for Pen and Ink. He has been with Microsoft since 2004 and has been working on the Win32 ink platform, Windows Presentation Foundation (WPF) ink platform, Windows 7 and 8 multi-touch platform, and IE 11 Pointer Input and high-DPI support.