August 2010

Volume 25 Number 08

UI Frontiers - Multi-Touch Manipulation Events in WPF

By Charles Petzold | August 2010

Just within the past few years, multi-touch has progressed from a futuristic sci-fi film prop to a mainstream UI. Multi-touch displays are now standard on new models of smartphones and tablet computers. Multi-touch is also likely to become ubiquitous on computers in public spaces, such as kiosks or the table computer pioneered by Microsoft Surface.

The only real uncertainty is the popularity of multi-touch on the conventional desktop computer. Perhaps the greatest impediment is the fatigue known as “gorilla arm” associated with moving fingers on vertical screens for long periods of time. My personal hope is that the power of multi-touch will actually provoke a redesign of the desktop display. I can envision a desktop computer with a display resembling the configuration of a drafting table, and perhaps almost as large.

But that’s the future (perhaps). For the present, developers have new APIs to master. The support for multi-touch in Windows 7 has filtered down and settled into various areas of the Microsoft .NET Framework with interfaces both low and high.

Sorting out the Multi-Touch Support

If you consider the complexity of expression that’s possible with the use of multiple fingers on a display, you can perhaps appreciate why nobody seems to know quite yet the “correct” programming interface for multi-touch. This will take some time. Meanwhile, you have several options.

Windows Presentation Foundation (WPF) 4.0 has two multi-touch interfaces available for programs running under Windows 7. For specialized uses of multi-touch, programmers will want to explore the low-level interface consisting of several routed events defined by UIElement named TouchDown, TouchMove, TouchUp, TouchEnter, TouchLeave, with preview versions of the down, move and up events. Obviously these are modeled after the mouse events, except that an integer ID property is necessary to keep track of multiple fingers on the display. Microsoft Surface is built on WPF 3.5, but it supports a more extensive low-level Contact interface that distinguishes types and shapes of touch input.
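To give a flavor of that low-level interface, here’s a sketch of my own (not code from this column’s downloadable projects) that tracks each finger through the Id property of its TouchDevice:

```csharp
using System.Collections.Generic;
using System.Windows;
using System.Windows.Input;

public partial class MainWindow : Window
{
  // Latest known position of each finger, keyed by TouchDevice.Id
  readonly Dictionary<int, Point> fingerPositions =
    new Dictionary<int, Point>();

  protected override void OnTouchDown(TouchEventArgs args)
  {
    fingerPositions[args.TouchDevice.Id] =
      args.GetTouchPoint(this).Position;
    base.OnTouchDown(args);
  }

  protected override void OnTouchMove(TouchEventArgs args)
  {
    fingerPositions[args.TouchDevice.Id] =
      args.GetTouchPoint(this).Position;
    base.OnTouchMove(args);
  }

  protected override void OnTouchUp(TouchEventArgs args)
  {
    fingerPositions.Remove(args.TouchDevice.Id);
    base.OnTouchUp(args);
  }
}
```

The same Id accompanies every event from a given finger until it lifts, which is how a handler keeps multiple simultaneous fingers straight.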

The subject of this column is the high-level multi-touch support in WPF 4.0, which consists of a collection of events whose names begin with the word Manipulation. These Manipulation events perform several crucial multi-touch jobs:

  • consolidating the interaction of two fingers into a single action
  • resolving the movement of one or two fingers into transforms
  • implementing inertia when the fingers leave the screen

A subset of the Manipulation events is listed in the documentation of Silverlight 4, but that’s a bit deceptive. The events are not yet supported by Silverlight itself, but they are supported in Silverlight applications written for Windows Phone 7. The Manipulation events are listed in Figure 1.

Figure 1 The Manipulation Events in Windows Presentation Foundation 4.0

Event                         Supported by Windows Phone 7?
ManipulationStarting          No
ManipulationStarted           Yes
ManipulationDelta             Yes
ManipulationInertiaStarted    No
ManipulationBoundaryFeedback  No
ManipulationCompleted         Yes


Web-based Silverlight 4 applications will continue to use the Touch.FrameReported event that I discussed in the article “Finger Style: Exploring Multi-Touch Support in Silverlight” in the March 2010 issue of MSDN Magazine.

Along with the Manipulation events themselves, the UIElement class in WPF also supports overridable methods such as OnManipulationStarting corresponding to the Manipulation events. In Silverlight for Windows Phone 7, these overridable methods are defined by the Control class.

A Multi-Touch Example

Perhaps the archetypal multi-touch application is a photograph viewer that lets you move photos on a surface, make them larger or smaller with a pair of fingers, and rotate them. These operations are sometimes referred to as pan, zoom and rotate, and they correspond to the standard graphics transforms of translation, scaling and rotation.

Obviously a photograph-viewing program needs to maintain the collection of photos, allow new photos to be added and photos to be removed, and it’s always nice to display the photos in a little graphical frame, but I’m going to ignore all that and just focus on the multi-touch interaction. I was surprised how easy it all becomes with the Manipulation events, and I think you will be as well.

All the source code for this column is in a single downloadable solution named WpfManipulationSamples. The first project is SimpleManipulationDemo, and the MainWindow.xaml file is shown in Figure 2.

Figure 2 The XAML File for SimpleManipulationDemo

<Window x:Class="SimpleManipulationDemo.MainWindow"
        xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
        xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
        Title="Simple Manipulation Demo">
  <Window.Resources>
    <Style TargetType="Image">
      <Setter Property="Stretch" Value="None" />
      <Setter Property="HorizontalAlignment" Value="Left" />
      <Setter Property="VerticalAlignment" Value="Top" />
    </Style>
  </Window.Resources>
  <Grid>
    <Image Source="Images/112-1283_IMG.JPG"
           IsManipulationEnabled="True"
           RenderTransform="0.5 0 0 0.5 100 100" />
    <Image Source="Images/139-3926_IMG.JPG"
           IsManipulationEnabled="True"
           RenderTransform="0.5 0 0 0.5 200 200" />
    <Image Source="Images/IMG_0972.JPG"
           IsManipulationEnabled="True"
           RenderTransform="0.5 0 0 0.5 300 300" />
    <Image Source="Images/IMG_4675.JPG"
           IsManipulationEnabled="True"
           RenderTransform="0.5 0 0 0.5 400 400" />
  </Grid>
</Window>

First notice the setting on all the Image elements:

IsManipulationEnabled="True"

This property is false by default. You must set it to true for any element on which you want to obtain multi-touch input and generate Manipulation events.

The Manipulation events are WPF routed events, meaning that the events bubble up the visual tree. In this program, neither the Grid nor MainWindow has the IsManipulationEnabled property set to true, but you can still attach handlers for the Manipulation events to the Grid and MainWindow elements, or override the OnManipulation methods in the MainWindow class.

Notice also that each of the Image elements has its RenderTransform set to a six-number string:

RenderTransform="0.5 0 0 0.5 100 100"

This is a shortcut that sets the RenderTransform property to an initialized MatrixTransform object. In this particular case, the Matrix object set to the MatrixTransform is initialized to perform a scale of 0.5 (making the photos half their actual size) and a translation of 100 units to the right and down. The code-behind file for the window accesses and modifies this MatrixTransform.
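That attribute shorthand corresponds to constructing the MatrixTransform explicitly in code. This fragment (the helper method is my own, for illustration) is the equivalent:

```csharp
using System.Windows.Controls;
using System.Windows.Media;

static class TransformHelper
{
  // Equivalent of RenderTransform="0.5 0 0 0.5 100 100" in XAML.
  // The Matrix constructor takes M11, M12, M21, M22, OffsetX, OffsetY:
  // scale by 0.5, then offset 100 units right and down.
  public static void ApplyInitialTransform(Image image)
  {
    image.RenderTransform = new MatrixTransform(
      new Matrix(0.5, 0, 0, 0.5, 100, 100));
  }
}
```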

The complete MainWindow.xaml.cs file is shown in Figure 3, and overrides just two methods, OnManipulationStarting and OnManipulationDelta. These methods process the manipulations generated by the Image elements.

Figure 3 The Code-Behind File for SimpleManipulationDemo

using System.Windows;
using System.Windows.Controls;
using System.Windows.Input;
using System.Windows.Media;

namespace SimpleManipulationDemo {
  public partial class MainWindow : Window {
    public MainWindow() {
      InitializeComponent();
    }

    protected override void OnManipulationStarting(
      ManipulationStartingEventArgs args) {
      args.ManipulationContainer = this;

      // Adjust Z-order to bring the touched element to the top
      FrameworkElement element =
        args.Source as FrameworkElement;
      Panel pnl = element.Parent as Panel;

      for (int i = 0; i < pnl.Children.Count; i++)
        Panel.SetZIndex(pnl.Children[i],
          pnl.Children[i] ==
          element ? pnl.Children.Count : i);

      args.Handled = true;
      base.OnManipulationStarting(args);
    }

    protected override void OnManipulationDelta(
      ManipulationDeltaEventArgs args) {
      UIElement element = args.Source as UIElement;
      MatrixTransform xform =
        element.RenderTransform as MatrixTransform;
      Matrix matrix = xform.Matrix;
      ManipulationDelta delta = args.DeltaManipulation;
      Point center = args.ManipulationOrigin;

      matrix.ScaleAt(
        delta.Scale.X, delta.Scale.Y, center.X, center.Y);
      matrix.RotateAt(
        delta.Rotation, center.X, center.Y);
      matrix.Translate(
        delta.Translation.X, delta.Translation.Y);

      xform.Matrix = matrix;
      args.Handled = true;
      base.OnManipulationDelta(args);
    }
  }
}

Manipulation Basics

A manipulation is defined as one or more fingers touching a particular element. A complete manipulation begins with the ManipulationStarting event (followed soon thereafter by ManipulationStarted) and ends with ManipulationCompleted. In between, there might be many ManipulationDelta events.

Each of the Manipulation events is accompanied by its own set of event arguments encapsulated in a class named after the event with EventArgs appended, such as ManipulationStartingEventArgs and ManipulationDeltaEventArgs. These classes derive from the familiar InputEventArgs, which in turn derives from RoutedEventArgs. The classes include Source and OriginalSource properties indicating where the event originated.

In the SimpleManipulationDemo program, Source and OriginalSource will both be set to the Image element generating the Manipulation events. Only an element with its IsManipulationEnabled property set to true will show up as the Source and OriginalSource properties in these Manipulation events.

In addition, each of the event argument classes associated with the Manipulation events includes a property named ManipulationContainer. This is the element within which the multi-touch manipulation is occurring. All coordinates in the Manipulation events are relative to this container.

By default, the ManipulationContainer property is set to the same element as the Source and OriginalSource properties—the element being manipulated—but this is probably not what you want. In general, you don’t want the manipulation container to be the same as the element being manipulated because tricky interactions get involved with dynamically moving, scaling and rotating the same element that’s reporting touch information. Instead, you want the manipulation container to be a parent of the manipulated element, or perhaps an element further up the visual tree.

In most of the Manipulation events, the ManipulationContainer property is get-only. The exception is the very first Manipulation event that an element receives. In ManipulationStarting you have the opportunity to change ManipulationContainer to something more appropriate. In the SimpleManipulationDemo project, this job is a single line of code:

args.ManipulationContainer = this;

In all subsequent events, ManipulationContainer will then be the MainWindow element rather than the Image element, and all coordinates will be relative to that window. This works fine because the Grid containing the Image elements is also aligned with the window.

The remainder of the OnManipulationStarting method is dedicated to bringing the touched Image element to the foreground by resetting the Panel.ZIndex attached properties of all the Image elements in the Grid. This is a simple way of handling ZIndex but probably not the best because it creates sudden changes.

ManipulationDelta and DeltaManipulation

The only other event handled by SimpleManipulationDemo is ManipulationDelta. The ManipulationDeltaEventArgs class defines two properties of type ManipulationDelta. (Yes, the event and the class have the same name.) These properties are DeltaManipulation and CumulativeManipulation. As the names suggest, DeltaManipulation reflects the manipulation that occurred since the last ManipulationDelta event, and CumulativeManipulation is the complete manipulation that began with the ManipulationStarting event.

ManipulationDelta has four properties:

  • Translation of type Vector
  • Scale of type Vector
  • Expansion of type Vector
  • Rotation of type double

The Vector structure defines two properties named X and Y of type double. One of the more significant differences with the Manipulation support under Silverlight for Windows Phone 7 is the absence of the Expansion and Rotation properties.

The Translation property indicates movement (or a pan) in the horizontal and vertical directions. A single finger on an element can generate changes in translation, but translation can also be part of other manipulations.

The Scale and Expansion properties both indicate a change in size (a zoom), which always requires two fingers. Scale is multiplicative and Expansion is additive. Use Scale for setting a scale transform; use Expansion for increasing or decreasing the Width and Height properties of an element by device-independent units.
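To illustrate the difference, a handler that resizes by layout rather than by transform could apply Expansion additively. This is a sketch of my own (it assumes the element has explicit Width and Height values), not code from the sample projects:

```csharp
using System.Windows;
using System.Windows.Input;

public partial class MainWindow : Window
{
  protected override void OnManipulationDelta(
    ManipulationDeltaEventArgs args)
  {
    // Additive resizing: grow or shrink the layout size by
    // device-independent units rather than applying a scale transform
    FrameworkElement element = args.Source as FrameworkElement;
    element.Width += args.DeltaManipulation.Expansion.X;
    element.Height += args.DeltaManipulation.Expansion.Y;
    args.Handled = true;
    base.OnManipulationDelta(args);
  }
}
```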

In WPF 4.0, the X and Y values of the Scale vector are always the same. The Manipulation events do not give you sufficient information to scale an element anisotropically (that is, differently in the horizontal and vertical directions).

By default, Rotation also requires two fingers, although you’ll see later how to enable one-finger rotation. In any particular ManipulationDelta event, all four properties might be set. A pair of fingers might be enlarging an element, and at the same time rotating it and moving it to another location.

Scaling and rotation are always relative to a particular center point. This center is also provided in ManipulationDeltaEventArgs in the property named ManipulationOrigin of type Point. This origin is relative to the ManipulationContainer set in the ManipulationStarting event.

Your job in the ManipulationDelta event is to modify the RenderTransform property of the manipulated object in accordance with the delta values in the following order: scaling first, then rotation, and finally translation. (Actually, because the horizontal and vertical scaling factors are identical, you can switch the order of the scaling and rotation transforms and still get the same result.)

The OnManipulationDelta method in Figure 3 shows a standard approach. The Matrix object is obtained from the MatrixTransform set on the manipulated Image element. It’s modified through calls to ScaleAt and RotateAt (both relative to the ManipulationOrigin) and Translate. Matrix is a structure rather than a class, so you must finish up by replacing the old value in the MatrixTransform with the new one.

It’s possible to vary this code a little. As shown, it scales around a center with this statement:

matrix.ScaleAt(delta.Scale.X, delta.Scale.Y, center.X, center.Y);

This is equivalent to translating to the negative of the center point, scaling and then translating back:

matrix.Translate(-center.X, -center.Y);
matrix.Scale(delta.Scale.X, delta.Scale.Y);
matrix.Translate(center.X, center.Y);

The RotateAt method can likewise be replaced with this:

matrix.Translate(-center.X, -center.Y);
matrix.Rotate(delta.Rotation);
matrix.Translate(center.X, center.Y);

When the ScaleAt and RotateAt calls are both expanded this way, the two adjacent Translate calls in the middle cancel each other out, so the composite is:

matrix.Translate(-center.X, -center.Y);
matrix.Scale(delta.Scale.X, delta.Scale.Y);
matrix.Rotate(delta.Rotation);
matrix.Translate(center.X, center.Y);

It’s probably a little bit more efficient.
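If you want to convince yourself of that equivalence without firing up WPF, you can check it numerically. In this console sketch of mine, the Affine struct is a stand-in for WPF’s Matrix (same row-vector convention, Append-style composition); it compares the expanded ScaleAt/RotateAt sequence against the merged form:

```csharp
using System;

// A stand-in for WPF's Matrix: row-vector 2-D affine transform
struct Affine
{
  public double M11, M12, M21, M22, OffX, OffY;

  public static Affine Identity => new Affine { M11 = 1, M22 = 1 };

  // Compose so that 'o' is applied after 'this' (like Matrix.Append)
  public Affine Append(Affine o) => new Affine
  {
    M11 = M11 * o.M11 + M12 * o.M21,
    M12 = M11 * o.M12 + M12 * o.M22,
    M21 = M21 * o.M11 + M22 * o.M21,
    M22 = M21 * o.M12 + M22 * o.M22,
    OffX = OffX * o.M11 + OffY * o.M21 + o.OffX,
    OffY = OffX * o.M12 + OffY * o.M22 + o.OffY
  };

  public static Affine Translate(double x, double y) =>
    new Affine { M11 = 1, M22 = 1, OffX = x, OffY = y };

  public static Affine Scale(double s) => new Affine { M11 = s, M22 = s };

  public static Affine Rotate(double degrees)
  {
    double r = degrees * Math.PI / 180;
    return new Affine { M11 = Math.Cos(r), M12 = Math.Sin(r),
                        M21 = -Math.Sin(r), M22 = Math.Cos(r) };
  }
}

class Program
{
  static void Main()
  {
    double s = 1.1, degrees = 15, cx = 120, cy = 80;

    // ScaleAt then RotateAt, each expanded as T(-c), op, T(c)
    Affine expanded = Affine.Identity
      .Append(Affine.Translate(-cx, -cy)).Append(Affine.Scale(s))
      .Append(Affine.Translate(cx, cy))
      .Append(Affine.Translate(-cx, -cy)).Append(Affine.Rotate(degrees))
      .Append(Affine.Translate(cx, cy));

    // Merged form: the two inner Translate calls cancel
    Affine merged = Affine.Identity
      .Append(Affine.Translate(-cx, -cy)).Append(Affine.Scale(s))
      .Append(Affine.Rotate(degrees)).Append(Affine.Translate(cx, cy));

    bool same =
      Math.Abs(expanded.M11 - merged.M11) < 1e-9 &&
      Math.Abs(expanded.M12 - merged.M12) < 1e-9 &&
      Math.Abs(expanded.M21 - merged.M21) < 1e-9 &&
      Math.Abs(expanded.M22 - merged.M22) < 1e-9 &&
      Math.Abs(expanded.OffX - merged.OffX) < 1e-9 &&
      Math.Abs(expanded.OffY - merged.OffY) < 1e-9;

    Console.WriteLine(same);  // True
  }
}
```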

Figure 4 shows the SimpleManipulationDemo program in action.

Figure 4 The SimpleManipulationDemo Program


Enabling the Container?

One of the interesting features of the SimpleManipulationDemo program is that you can simultaneously manipulate two Image elements, or even more if you have the hardware support and a sufficient number of fingers. Each Image element generates its own ManipulationStarting event and its own series of ManipulationDelta events. The code effectively distinguishes between the multiple Image elements by the Source property of the event arguments.

For this reason, it’s important not to set any state information in fields that implies that only one element can be manipulated at a time.

The simultaneous manipulation of multiple elements is possible because each of the Image elements has its own IsManipulationEnabled property set to true. Each of them can generate a unique series of Manipulation events.

When approaching these Manipulation events for the first time, you might instead investigate setting IsManipulationEnabled to true only on the MainWindow class or another element serving as a container. This is possible, but it’s somewhat clumsier in practice and not quite as powerful. The only real advantage is that you don’t need to set the ManipulationContainer property in the ManipulationStarting event. The messiness comes later, when you must determine which element is being manipulated by hit-testing the child elements with the ManipulationOrigin property in the ManipulationStarted event.

You would then need to store the element being manipulated as a field for use in future ManipulationDelta events. In this case, it’s safe to store state information in fields because you’ll only be able to manipulate one element in the container at a time.
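That container-centric approach might look like the following sketch of my own (the field name is hypothetical):

```csharp
using System.Windows;
using System.Windows.Input;

public partial class MainWindow : Window
{
  // Only one element at a time can be manipulated when the
  // container itself generates the events
  UIElement manipulatedElement;

  protected override void OnManipulationStarted(
    ManipulationStartedEventArgs args)
  {
    // Hit-test the touch point to find the child being touched
    manipulatedElement =
      InputHitTest(args.ManipulationOrigin) as UIElement;
    base.OnManipulationStarted(args);
  }
}
```

Subsequent ManipulationDelta events would then apply their transforms to manipulatedElement rather than to args.Source.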

The Manipulation Mode

As you saw, one of the crucial properties to set during the ManipulationStarting event is the ManipulationContainer. Another couple of properties are useful to customize the particular manipulation.

You can limit the types of manipulation by initializing the Mode property with a member of the ManipulationModes enumeration. For example, if you were using manipulation solely for scrolling horizontally, you might want to limit the events to just horizontal translation. The ManipulationModeDemo program lets you set the mode dynamically by displaying a list of RadioButton elements listing the options, as shown in Figure 5.

Figure 5 The ManipulationModeDemo Display


Of course, the RadioButton is one of the many controls in WPF 4.0 that respond directly to touch.
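Restricting the manipulation itself is a one-line job in OnManipulationStarting. This sketch of mine (not taken from the ManipulationModeDemo source) limits the events to horizontal translation:

```csharp
using System.Windows;
using System.Windows.Input;

public partial class MainWindow : Window
{
  protected override void OnManipulationStarting(
    ManipulationStartingEventArgs args)
  {
    // Report only horizontal panning; scale, rotation and
    // vertical translation deltas are suppressed
    args.Mode = ManipulationModes.TranslateX;
    args.ManipulationContainer = this;
    args.Handled = true;
    base.OnManipulationStarting(args);
  }
}
```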

The Single Finger Rotation

By default, you need two fingers to rotate an object. However, if a real photo is sitting on a real desk, you can put your finger on the corner and rotate it in a circle. The rotation occurs roughly around the center of the photo.

You can do this with the Manipulation events by setting the Pivot property of ManipulationStartingEventArgs. By default the Pivot property is null; you enable one-finger rotation by setting the property to a ManipulationPivot object. The key property of ManipulationPivot is Center, which you might consider calculating as the center of the element being manipulated:

Point center = new Point(element.ActualWidth / 2, 
                         element.ActualHeight / 2);

But this center point must be relative to the manipulation container, which in the programs I’ve been showing you is the element handling the events. Translating that center point from the element being manipulated to the container is easy:

center = element.TranslatePoint(center, this);

Another little piece of information also needs to be set. If all you’re specifying is a center point, a problem arises when you put your finger right in the center of the element: just a little movement will cause the element to spin around like crazy! For this reason, ManipulationPivot also has a Radius property. Rotation will not occur if the finger is within Radius units of the Center point. The ManipulationPivotDemo program sets this radius to half an inch:

args.Pivot = new ManipulationPivot(center, 48);

Now a single finger can perform a combination of rotation and translation.
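Pulling those pieces together, an OnManipulationStarting override that enables one-finger rotation might look like this sketch:

```csharp
using System.Windows;
using System.Windows.Input;

public partial class MainWindow : Window
{
  protected override void OnManipulationStarting(
    ManipulationStartingEventArgs args)
  {
    args.ManipulationContainer = this;

    // Pivot at the center of the manipulated element,
    // translated into container coordinates
    FrameworkElement element = args.Source as FrameworkElement;
    Point center = new Point(element.ActualWidth / 2,
                             element.ActualHeight / 2);
    center = element.TranslatePoint(center, this);

    // 48 device-independent units is half an inch of "dead zone"
    // so a finger near the center doesn't spin the element wildly
    args.Pivot = new ManipulationPivot(center, 48);
    args.Handled = true;
    base.OnManipulationStarting(args);
  }
}
```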

Beyond the Basics

What you’ve seen here are the basics of using the WPF 4.0 Manipulation events. Of course, there are some variations on these techniques that I’ll show in future columns, as well as the power of manipulation inertia.

You may also want to take a look at the Surface Toolkit for Windows Touch, which provides touch-optimized controls for your apps. The ScatterView control in particular eliminates the need for using the Manipulation events directly for basic stuff like manipulating photos. It has some snazzy effects and behaviors that will make sure your app behaves the same as other touch apps.

Charles Petzold is a longtime contributing editor to MSDN Magazine. He’s currently writing “Programming Windows Phone 7,” which will be published as a free downloadable e-book in the fall of 2010. A preview edition is currently available through his Web site.

Thanks to the following technical experts for reviewing this article: Doug Kramer, Robert Levy and Anson Tsao