Custom Touch Devices

WPF4 allows applications to implement custom touch devices. We added this functionality so that the Surface SDK could be built directly on the WPF multi-touch APIs. As we announced at PDC09, the next version of the Surface SDK will implement WPF touch devices driven by input from the Surface vision system.

A few weeks ago, version 1.5 of the Microsoft Multipoint SDK was released. The Multipoint SDK supports using up to 25 USB mice as input devices on the same PC. While the original intent of the Multipoint SDK was to let multiple students collaborate on and share a single PC, its multiple-mice support is perfect for simulating multi-touch input.

In this post, I will explain how to implement custom touch devices in WPF4 and use the Multipoint SDK integration as an example.

TouchDevice

TouchDevice is an abstract class in the System.Windows.Input namespace. An instance of TouchDevice represents one touch contact. To add a new type of touch input to WPF, you derive from TouchDevice, override its methods to provide the current positions of the touch points, and notify the input system of state changes via the ReportDown/Move/Up and Activate/Deactivate methods on TouchDevice.

Deriving from TouchDevice and TouchPoint

The TouchDevice constructor takes an integer device ID as a parameter. Device IDs should be unique within your set of touch devices; uniqueness is not enforced by the system. Application developers can simply use the TouchDevice instance itself as a key; it is guaranteed to be valid and consistent for the duration of an interaction cycle.
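As a minimal sketch, a custom device might look like this. The class name, ID counter, and Position property are hypothetical, not part of the WPF API; only TouchDevice and its abstract members come from WPF:

```csharp
using System;
using System.Windows;
using System.Windows.Input;

// A minimal sketch of a custom touch device. SimulatedTouchDevice,
// _nextId and Position are hypothetical names for illustration only.
public class SimulatedTouchDevice : TouchDevice
{
    private static int _nextId;

    public SimulatedTouchDevice()
        : base(_nextId++)   // IDs should be unique within this device set
    {
    }

    // Current contact position in window coordinates, set by the input source.
    public Point Position { get; set; }

    public override TouchPoint GetTouchPoint(IInputElement relativeTo)
    {
        // Coordinate transformation is discussed below; this sketch reports
        // the raw window-relative position with a 1x1 contact rectangle.
        return new TouchPoint(this, Position,
            new Rect(Position, new Size(1.0, 1.0)), TouchAction.Move);
    }

    public override TouchPointCollection GetIntermediateTouchPoints(IInputElement relativeTo)
    {
        // Sufficient for devices that call ReportMove() on every sample.
        return new TouchPointCollection();
    }
}
```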

TouchDevice has two public abstract methods that you must override to provide the position of the touch contact.

The TouchPoint GetTouchPoint(IInputElement relativeTo) method returns the current position of the touch contact in the coordinate system of the ‘relativeTo’ element.

To convert from the top-level window’s coordinate system to the ‘relativeTo’ element’s, obtain the transformation via the Visual.TransformToDescendant() method and Transform() the touch point accordingly.

The TouchPointCollection GetIntermediateTouchPoints(IInputElement relativeTo) method returns the collection of intermediate touch points that were collected but not reported since the last touch event. It exists so that a high-frequency sequence of touch positions can be aggregated instead of overwhelming the system with individual touch events. For most applications, it is sufficient to return an empty collection and call ReportMove() for every detected touch movement.
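For a device that does buffer high-frequency samples, a sketch of the override might hand out points accumulated since the last report. The _intermediatePoints field is a hypothetical buffer maintained elsewhere in the device:

```csharp
// Sketch of buffering intermediate samples between ReportMove() calls.
// _intermediatePoints is a hypothetical field the device fills as raw
// samples arrive and clears after each ReportMove().
private readonly List<TouchPoint> _intermediatePoints = new List<TouchPoint>();

public override TouchPointCollection GetIntermediateTouchPoints(IInputElement relativeTo)
{
    // Note: a complete implementation would also transform each buffered
    // point into the coordinate system of 'relativeTo'.
    var collection = new TouchPointCollection();
    foreach (var point in _intermediatePoints)
        collection.Add(point);
    return collection;
}
```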

You can provide additional information about your touch contact by deriving from TouchPoint. For example, Surface also associates elliptical bounds and a tag ID with each TouchPoint; these attributes are not available via the Windows touch device drivers.

Updating TouchDevice States

Each TouchDevice must be associated with a PresentationSource, typically the active top-level window. Next, you have to Activate() the device. To report touch point activity, you use the ReportDown(), ReportMove() and ReportUp() methods. Make sure that each Activate() is accompanied by a Deactivate() call, and that each ReportDown() has a corresponding ReportUp(); otherwise the input system will throw an exception.


Use multiple mice for multi-touch input

In this example we will be using the MultiPoint SDK to provide multi-touch input. The full source code is here.

First, we need to reference the Multipoint assembly from the SDK.


We will create a MultipointTouchWindow that receives the Multipoint mouse events and translates them into touch events. With modifications to the Multipoint SDK we could deliver multi-touch events more efficiently, but that will have to wait for a future release of the SDK.

 public class MultipointTouchWindow : Window, IMultipointMouseEvents
 {
     public MultipointTouchWindow()
     {
         Loaded += HandleLoaded;
     }

     private void HandleLoaded(object sender, RoutedEventArgs args)
     {
         MultipointSdk.Instance.Initialize(this);
         MultipointSdk.Instance.DeviceArrivalEvent += HandleMultipointDeviceArrival;
         foreach (var device in MultipointSdk.Instance.MouseDeviceList)
             device.DeviceVisual.CursorColor = NextColor();
     }

     // HandleMultipointDeviceArrival and NextColor() are omitted here; the
     // arrival handler assigns a cursor color to a newly attached mouse.
 }

The window class has to implement the IMultipointMouseEvents interface for the Multipoint SDK to provide mouse events. Default implementations of the interface’s events are sufficient.

The Multipoint SDK instance is initialized after the WPF window has loaded and the corresponding HWND has been created.

We also assign a different color to the cursor associated with each mouse device. NextColor() is a simple utility method that returns a different color every time it is called.

Multipoint works best with full-screen windows. In WPF, you set the WindowState to Maximized and the WindowStyle to None.
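For example, the full-screen setup could be done in the window’s constructor (or equivalently in XAML); this is a sketch, not part of the sample’s published source:

```csharp
public MultipointTouchWindow()
{
    // Multipoint works best full screen: maximize and remove the chrome.
    WindowState = WindowState.Maximized;
    WindowStyle = WindowStyle.None;
    Loaded += HandleLoaded;
}
```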

Next, we hook up the preview Multipoint mouse events:

 static MultipointTouchWindow()
 {
     EventManager.RegisterClassHandler(
                 typeof(UIElement),
                 MultipointMouseEvents.MultipointPreviewMouseDownEvent,
                 (RoutedEventHandler)HandleMultipointMouseDown,
                 false);
     EventManager.RegisterClassHandler(
                 typeof(UIElement),
                 MultipointMouseEvents.MultipointPreviewMouseMoveEvent,
                 (RoutedEventHandler)HandleMultipointMouseMove,
                 false);
     EventManager.RegisterClassHandler(
                 typeof(UIElement),
                 MultipointMouseEvents.MultipointPreviewMouseUpEvent,
                 (RoutedEventHandler)HandleMultipointMouseUp,
                 false);
 }

The handlers of these events simply delegate to static methods of the MultipointTouchDevice that we’ll discuss next.
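Each handler is just a cast and a static call; a sketch of the mouse-down handler, consistent with the static MouseDown method described in the next section:

```csharp
// Sketch: forward the routed Multipoint event to the static
// MultipointTouchDevice logic that maintains the touch state.
private static void HandleMultipointMouseDown(object sender, RoutedEventArgs args)
{
    MultipointTouchDevice.MouseDown((MultipointMouseEventArgs)args);
}
```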

MultipointTouchDevice

MultipointTouchDevice derives from TouchDevice. It maintains a static mapping from Multipoint device IDs to their corresponding TouchDevice instances. As we handle Multipoint mouse events, we retrieve the corresponding MultipointTouchDevice and update its state.

 public static void MouseDown(MultipointMouseEventArgs e)
 {
     var device = GetDevice(e);
     device.SetActiveSource(PresentationSource.FromVisual(e.CurrentWindow));
     device.Position = GetPosition(e);
     device.Activate();
     device.ReportDown();
 }

The MouseMove and MouseUp methods are equally simple: they update the device’s position and then call ReportMove() or ReportUp(). We deactivate the device after a mouse-up event.
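A sketch of MouseUp, mirroring the MouseDown method above (the actual implementation is in the sample’s full source):

```csharp
public static void MouseUp(MultipointMouseEventArgs e)
{
    var device = GetDevice(e);
    device.Position = GetPosition(e);
    device.ReportUp();
    // Balance the Activate() call made in MouseDown, as the input
    // system requires paired Activate()/Deactivate() calls.
    device.Deactivate();
}
```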

MultipointTouchDevice.Position is a simple property holding the current position in the window coordinate system. GetPosition() is a helper function that retrieves the position from MultipointMouseEventArgs.

 public override TouchPoint GetTouchPoint(IInputElement relativeTo)
 {
     Point pt = Position;
     if (relativeTo != null)
     {
         pt = this.ActiveSource.RootVisual.
              TransformToDescendant((Visual)relativeTo).
              Transform(Position);
     }
  
     var rect = new Rect(pt, new Size(1.0, 1.0));
  
     return new TouchPoint(this, pt, rect, TouchAction.Move);
 }

The GetTouchPoint() implementation simply transforms Position into the appropriate coordinate system and returns the point with a 1.0 by 1.0 pixel size.

Comments

  • Anonymous
    May 10, 2011
    How would you use this with a touchscreen?  Would the touchscreen need to be a multi touch screen?  If so, will the MultiPoint SDK regard each possible touch as a different device?
  • Anonymous
    June 25, 2014
    Please address this :connect.microsoft.com/.../wpf-touch-services-are-badly-broken