What's New

Kinect for Windows 1.5, 1.6, 1.7, 1.8

This section provides release notes for each released version of the Kinect for Windows SDK and the Developer Toolkit.

  • What's New in version 1.8 of the SDK and the Developer Toolkit
  • What's New in version 1.7 of the SDK and the Developer Toolkit
  • What's New in version 1.6 of the SDK and the Developer Toolkit
  • What's New in version 1.5.2 of the Developer Toolkit
  • What's New in version 1.5.1 of the Developer Toolkit
  • What's New in version 1.5 of the SDK and the Developer Toolkit

What's New in version 1.8 of the SDK and the Developer Toolkit

Here's a link to the known issues for this release.

This release provides many new features, all part of the Developer Toolkit 1.8.0. The SDK/Runtime v1.8 itself contains only minor changes.

Kinect Background Removal

The new Background Removal API provides "green screen" capabilities for a single person. The user to keep in the foreground is specified by skeleton tracking ID. The BackgroundRemovedColorStream API uses various image processing techniques to improve the stability and accuracy of the player mask originally contained in each depth frame. The stream can be configured to select any single player as the foreground and remove the remaining color pixels from the scene.
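
The following is a minimal C# sketch (not one of the toolkit samples) of wiring BackgroundRemovedColorStream from Microsoft.Kinect.Toolkit.BackgroundRemoval into a sensor's color, depth, and skeleton data. The helper name and the choice of the nearest tracked skeleton as the foreground player are illustrative only; see the Background Removal samples for the full pattern.

    using System.Linq;
    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit.BackgroundRemoval;

    static class BackgroundRemovalSketch
    {
        public static BackgroundRemovedColorStream Attach(KinectSensor sensor)
        {
            sensor.ColorStream.Enable(ColorImageFormat.RgbResolution640x480Fps30);
            sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
            sensor.SkeletonStream.Enable();

            var removed = new BackgroundRemovedColorStream(sensor);
            removed.Enable(ColorImageFormat.RgbResolution640x480Fps30, DepthImageFormat.Resolution640x480Fps30);

            removed.BackgroundRemovedFrameReady += (s, e) =>
            {
                using (BackgroundRemovedColorFrame frame = e.OpenBackgroundRemovedColorFrame())
                {
                    if (frame == null) return;
                    byte[] foregroundBgra = frame.GetRawPixelData();   // BGRA; background pixels are transparent
                    // ...composite foregroundBgra over your own backdrop for the "green screen" effect
                }
            };

            sensor.AllFramesReady += (s, e) =>
            {
                using (var depth = e.OpenDepthImageFrame())
                    if (depth != null) removed.ProcessDepth(depth.GetRawPixelData(), depth.Timestamp);

                using (var color = e.OpenColorImageFrame())
                    if (color != null) removed.ProcessColor(color.GetRawPixelData(), color.Timestamp);

                using (var skeletonFrame = e.OpenSkeletonFrame())
                {
                    if (skeletonFrame == null) return;
                    var skeletons = new Skeleton[skeletonFrame.SkeletonArrayLength];
                    skeletonFrame.CopySkeletonDataTo(skeletons);
                    removed.ProcessSkeleton(skeletons, skeletonFrame.Timestamp);

                    // Choose the nearest tracked skeleton as the foreground player.
                    Skeleton player = skeletons
                        .Where(sk => sk.TrackingState == SkeletonTrackingState.Tracked)
                        .OrderBy(sk => sk.Position.Z)
                        .FirstOrDefault();
                    if (player != null) removed.SetTrackedPlayer(player.TrackingId);
                }
            };

            return removed;
        }
    }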

  • New Background Removal samples demonstrate basic use of the Background Removal APIs.

    Note

    Previous Green Screen samples in 1.7 have been renamed to CoordinateMappingBasics in 1.8.

  • Native DLLs for Background Removal are KinectBackgroundRemoval180_32.dll and KinectBackgroundRemoval180_64.dll under %KINECT_TOOLKIT_DIR%\Redist.

Webserver for Kinect Data Streams

New web components and samples give HTML5 applications access to Kinect data for interactions and visualization. This is intended to allow HTML5 applications running in a browser to connect to the sensor through a server running on the local machine. Use this to create kiosk applications on dedicated machines. The webserver component is a template that can be used as-is or modified as needed.

Note

The Webserver components and sample require .NET 4.5 (Visual Studio 2012) and Windows 8 (or later) for web socket functionality. The Kinect for Windows SDK JavaScript APIs support Internet Explorer 10 and later, Mozilla Firefox, and Google Chrome.

Microsoft.Samples.Kinect.Webserver is a webserver component that provides:

  • Web socket end points that expose Kinect interactions (hand pointer movement plus push and grip gestures), background removal, user viewer, skeleton data, and so on as data streams and events sent from the server to an HTML5-capable browser client (a minimal client sketch follows this list)
  • A REST end point that supports GET operations to retrieve the current stream configuration and POST operations to modify it
  • A REST end point that acts as a simple file server for static content
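
As an illustration of the web socket end points described above, here is a minimal C# client sketch using .NET 4.5's ClientWebSocket. It is illustrative only: the HTML5 samples use the Kinect-1.8.0.js layer instead, and the URI shown is merely a placeholder for whatever address and path the server is configured to serve.

    using System;
    using System.Net.WebSockets;    // requires .NET 4.5
    using System.Threading;
    using System.Threading.Tasks;

    class StreamClientSketch
    {
        static void Main()
        {
            ReceiveOneMessage().Wait();
        }

        static async Task ReceiveOneMessage()
        {
            // Placeholder URI: substitute the address/port/path WebserverBasics-WPF actually serves.
            var uri = new Uri("ws://localhost:8181/Kinect/stream");

            using (var socket = new ClientWebSocket())
            {
                await socket.ConnectAsync(uri, CancellationToken.None);

                // Each message carries stream data or events serialized by the server.
                var buffer = new ArraySegment<byte>(new byte[64 * 1024]);
                WebSocketReceiveResult result = await socket.ReceiveAsync(buffer, CancellationToken.None);
                Console.WriteLine("Received {0} bytes ({1})", result.Count, result.MessageType);
            }
        }
    }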

WebserverBasics-WPF is a sample application that:

  • Configures the Microsoft.Samples.Kinect.Webserver component to serve Kinect data on a localhost port
  • Serves sample web page and associated static content
  • Provides UI to start/stop the server and view errors encountered while serving data
  • Writes informational messages to a log file

Redistributable web content (Kinect-1.8.0.js, KinectWorker-1.8.0.js, and Kinect-1.8.0.css) provides:

  • An API layer that abstracts communication with the server, which web applications can use to access and manipulate Kinect data
  • Button and cursor controls, with associated styles, which web applications can include in their overall user experience

Color Capture and Camera Pose Finder for Kinect Fusion

Kinect Fusion provides 3D object scanning and model creation using a Kinect for Windows sensor. With this release, users can scan a scene with the Kinect camera (now optionally also capturing low-resolution color) and simultaneously see, and interact with, a detailed 3D model of the scene. There is also support for recovering the camera pose once tracking has been lost: rather than having to return to the very last tracked position or reset the whole reconstruction, the user can move the camera close to any of the camera positions seen during the initial reconstruction. We also added an import API, matching the existing "ExportVolumeBlock" API, to both the original depth reconstruction interface and the new color reconstruction interface; importing is restricted to the resolution of the created volume. Please see the feature documentation for an in-depth discussion of the new functionality.
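
A minimal sketch of creating a color-capable reconstruction volume with the managed Microsoft.Kinect.Toolkit.Fusion API is shown below. Per-frame depth/color integration, the camera pose finder, and the volume block export/import calls are omitted, and the helper name and parameter values are illustrative only.

    using Microsoft.Kinect.Toolkit.Fusion;

    static class FusionColorSketch
    {
        public static ColorReconstruction CreateVolume()
        {
            // 256 voxels per meter with a 384^3 volume covers roughly a 1.5 m cube.
            // Because the volume can integrate color, GPU memory use is roughly
            // double that of a depth-only volume at the same voxel resolution.
            var parameters = new ReconstructionParameters(256, 384, 384, 384);

            return ColorReconstruction.FusionCreateReconstruction(
                parameters,
                ReconstructionProcessor.Amp,   // DirectX 11 GPU via C++ AMP; use Cpu for offline processing
                -1,                            // -1 selects the default GPU device
                Matrix4.Identity);             // initial world-to-camera transform
        }
    }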

Kinect Color Fusion Samples:

  • Kinect Fusion Color Basics-WPF, Kinect Fusion Color Basics-D2D: Demonstrate basic use of the Kinect Fusion APIs for 3D reconstruction with the option of using color.
  • Kinect Fusion Explorer-WPF, Kinect Fusion Explorer-D2D: Similar to previous Kinect Fusion Explorer in 1.7 with an extra feature to additionally capture color while performing 3D reconstruction.
  • Kinect Fusion Explorer Multi Static Cameras-WPF: Demonstrates integrating multiple static Kinect cameras into the same reconstruction volume, given user-defined transformations for each camera.
  • Kinect Fusion Head Scanning-WPF: Demonstrates how to leverage a combination of Kinect Fusion and Face Tracking to scan high resolution models of faces and heads.

Introducing New Samples!

  • Adaptive UI-WPF: This new sample demonstrates the basics of an adaptive UI that is displayed on screen in the appropriate location and size based on the user's height, distance from the screen, and field of view. The sample provides settings for interaction zone boundaries, tracks users, and handles transitions as they move between far range and tactile range.
  • Webserver Basics-WPF: This sample demonstrates how to use the Microsoft.Samples.Kinect.Webserver component to serve Kinect data on a localhost port. The component and sample require .NET 4.5 (Visual Studio 2012) and Windows 8 (or later).
  • Background Removal Basics-D2D, Background Removal Basics-WPF: Demonstrates how to use the Background Removal APIs. This is an improved version of the Coordinate Mapping sample (previously named Green Screen in 1.7).
  • Kinect Fusion Explorer Multi Static Cameras-WPF: Demonstrates integrating multiple static Kinect cameras into the same reconstruction volume, given user-defined transformations for each camera. A new third-person view and basic WPF graphics are also enabled to provide a way for users to visually explore a reconstruction scene during setup and capture.
  • Kinect Fusion Color Basics-D2D, Kinect Fusion Color Basics-WPF: Demonstrates the basics of Kinect Fusion for 3D reconstruction, now including low-resolution color capture.
  • Kinect Fusion Head Scanning-WPF: Demonstrates how to leverage a combination of Kinect Fusion and Face Tracking to scan high resolution models of faces and heads.

Updated Samples

  • Kinect Fusion Explorer-D2D, Kinect Fusion Explorer-WPF: Demonstrates additional features of Kinect Fusion for 3D reconstruction, now including low-resolution color capture. Please review the documentation for the hardware and software requirements. (Note: a suitable DirectX 11 graphics card is required for real-time reconstruction.) The Explorer samples have been updated to always create a volume with the option of capturing color, so GPU memory requirements have doubled compared to the v1.7 Explorer samples at the same voxel resolutions. Kinect Fusion Explorer-D2D also now integrates the Camera Pose Finder for increased robustness to failed tracking.
  • Coordinate Mapping Basics-WPF, Coordinate Mapping Basics-D2D: These samples were named Green Screen in 1.7 and have been renamed Coordinate Mapping Basics in 1.8.

What's New in version 1.7 of the SDK and the Developer Toolkit

Here's a link to the known issues for this release.

This release provides many new features, all part of the Developer Toolkit 1.7.0. The SDK/Runtime v1.7 itself contains only minor changes.

Introducing new Kinect Interactions

We've built a new Interactions framework which provides pre-packaged, reusable components that allow for even more exciting interaction possibilities. These components are supplied in both native and managed packages for maximum flexibility, and are also provided as a set of WPF controls. Among the new features are:

  • Press for Selection. This provides, along with the new KinectInteraction Controls, improved selection capability and faster interactions. If you're familiar with previous Kinect for Windows interaction capabilities, this replaces the hover select concept.
  • Grip and Move for Scrolling. This provides, along with the new KinectInteraction Controls, 1-to-1 manipulation for more precise scrolling, as well as large fast scrolls with a fling motion. If you're familiar with previous Kinect for Windows interaction capabilities, this replaces the hover scroll model.

New interactions work best with the following setup:

  • User stands 1.5 - 2.0 meters away from the sensor
  • Sensor mounted directly above or below the screen showing the application, and centered
  • Screen size < 46 inches
  • Avoid extreme tilt angles
  • As always, avoid lots of natural light and reflective materials for more reliable tracking

Engagement Model Enhancements

The Engagement model determines which user is currently interacting with the Kinect-enabled application.

This has been greatly enhanced to provide more natural interaction when a user starts interacting with the application, and particularly when the sensor detects multiple people. Developers can also now override the supplied engagement model as desired.

APIs, Samples, and DLL Details

A set of WPF interactive controls are provided to make it easy to incorporate these interactions into your applications.

Two samples use these controls: ControlsBasics-WPF and InteractionGallery-WPF. The controls can also be installed in source form via Toolkit Browser -> "Components" -> Microsoft.Kinect.Toolkit.Controls.

  • InteractionGallery - WPF uses the new KinectInteraction Controls in a customized app experience that demonstrates examples of navigation, engagement, article reading, picture viewing, video playback, and panning with grip. It was designed for 1920x1080 resolution screens in landscape layout.

    For those building applications with UI technologies other than WPF, the lower level InteractionStream APIs (native or managed) are available to build on top of. Native DLLs for InteractionStream are Kinect_Interaction170_32.dll and Kinect_Interaction170_64.dll under %KINECT_TOOLKIT_DIR%\Redist. Managed DLL for InteractionStream is Microsoft.Kinect.Toolkit.Interaction.dll found in %KINECT_TOOLKIT_DIR%\Assemblies.

  • There is no dedicated sample of InteractionStream API usage; however, the Microsoft.Kinect.Toolkit.Controls source code (see the information about the controls samples above) is available and is a great example of using InteractionStream. A minimal sketch of driving InteractionStream directly follows this list.
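
Below is a minimal C# sketch of driving the managed InteractionStream directly. The trivial IInteractionClient and the helper class are placeholders; a real application would hit-test its own UI to decide which locations are grip or press targets.

    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit.Interaction;

    // A trivial interaction client: report every location as grip- and press-capable.
    // A real application would hit-test its own UI here instead.
    class DummyInteractionClient : IInteractionClient
    {
        public InteractionInfo GetInteractionInfoAtLocation(
            int skeletonTrackingId, InteractionHandType handType, double x, double y)
        {
            return new InteractionInfo { IsGripTarget = true, IsPressTarget = true };
        }
    }

    static class InteractionSketch
    {
        // Feed depth and skeleton data into the stream; hand-pointer movement,
        // press, and grip events are then surfaced through InteractionFrameReady.
        public static InteractionStream Attach(KinectSensor sensor)
        {
            var stream = new InteractionStream(sensor, new DummyInteractionClient());

            stream.InteractionFrameReady += (s, e) =>
            {
                using (InteractionFrame frame = e.OpenInteractionFrame())
                {
                    // Inspect the frame's user info and hand pointers to drive your own UI.
                }
            };

            sensor.DepthFrameReady += (s, e) =>
            {
                using (var frame = e.OpenDepthImageFrame())
                    if (frame != null) stream.ProcessDepth(frame.GetRawPixelData(), frame.Timestamp);
            };

            sensor.SkeletonFrameReady += (s, e) =>
            {
                using (var frame = e.OpenSkeletonFrame())
                {
                    if (frame == null) return;
                    var skeletons = new Skeleton[frame.SkeletonArrayLength];
                    frame.CopySkeletonDataTo(skeletons);
                    stream.ProcessSkeleton(skeletons, sensor.AccelerometerGetCurrentReading(), frame.Timestamp);
                }
            };

            return stream;
        }
    }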

Kinect Fusion

Kinect Fusion provides 3D object scanning and model creation using a Kinect for Windows sensor. The user can paint a scene with the Kinect camera and simultaneously see, and interact with, a detailed 3D model of the scene. Kinect Fusion can be run at interactive rates on supported GPUs, and can run at non-interactive rates on a variety of hardware. Running at non-interactive rates may allow larger volume reconstructions.

Kinect Fusion Samples:

  • Kinect Fusion Basics - WPF, Kinect Fusion Basics - D2D: Demonstrates basic use of the Kinect Fusion APIs for 3D reconstruction.
  • Kinect Fusion Explorer - WPF, Kinect Fusion Explorer - D2D: Demonstrates advanced 3D reconstruction features of Kinect Fusion, allowing adjustment of many reconstruction parameters, and export of reconstructed meshes.

Kinect Fusion Tech Specs

Kinect Fusion can process data either on a DirectX 11 compatible GPU with C++ AMP, or on the CPU, by setting the reconstruction processor type during reconstruction volume creation. The CPU processor is best suited to offline processing as only modern DirectX 11 GPUs will enable real-time and interactive frame rates during reconstruction.

Minimum Hardware Requirements for GPU-based reconstruction

DirectX 11 compatible graphics card.

Kinect Fusion has been tested on the NVIDIA GeForce GTX560 and the AMD Radeon 6950. These cards, or higher-end cards from the same product lines, are expected to run at interactive rates.

For high-end scenarios, use a desktop PC with a 3 GHz (or better) multi-core processor and a graphics card with 2 GB or more of dedicated on-board memory. Kinect Fusion has been tested for high-end scenarios on an NVIDIA GeForce GTX680 and an AMD Radeon HD 7850.

Note: It is possible to use Kinect Fusion on laptop-class GPU hardware, but this typically runs significantly slower than desktop-class hardware. In general, aim to process at the same frame rate as the Kinect sensor (30 fps) to enable the most robust camera pose tracking.

For those building applications with technologies other than WPF, the lower-level Fusion APIs (native or managed) are available to build on top of. Native DLLs for Fusion are Kinect_Fusion170_32.dll and Kinect_Fusion170_64.dll under %KINECT_TOOLKIT_DIR%\Redist. The managed DLL for Fusion is Microsoft.Kinect.Toolkit.Fusion.dll, found in %KINECT_TOOLKIT_DIR%\Assemblies.

Getting Started with Kinect Fusion (Important!)
  1. Ensure you have compatible hardware (see Tech Specs section above).

  2. Download and install the latest graphics display drivers for your GPU.

Kinect Sensor Chooser - native

The Kinect Sensor Chooser is a native component that allows simplified management of the Kinect Sensor lifetime, and an enhanced user experience when dealing with missing sensors, unpowered sensors, or sensors that get unplugged while an application is running. It provides similar capabilities to the KinectSensorChooser in the Microsoft.Kinect.Toolkit component. A NuiSensorChooserUI control is also provided for use in native applications. It provides a user experience similar to the managed KinectSensorChooserUI.

Introducing New Samples!

  • Controls Basics - WPF: Demonstrates the new KinectInteraction Controls, including hands-free button pressing and scrolling through large lists. This replaces the Basic Interactions sample from previous releases.
  • Interaction Gallery - WPF: Demonstrates basic interactions using the new KinectInteraction Controls.
  • KinectBridge with MATLAB Basics - D2D: Demonstrates how to do image processing with the Kinect sensor using the MATLAB API.
  • KinectBridge with OpenCV Basics - D2D: Demonstrates how to do image processing with the Kinect sensor using the OpenCV API.
  • Kinect Explorer - D2D: Demonstrates how to use the Kinect's ColorImageStream, DepthImageStream, SkeletonStream, and AudioSource with C++ and Direct2D. This replaces the SkeletalViewer C++ sample.
  • Kinect Fusion Basics - WPF, Kinect Fusion Basics - D2D: Demonstrates basic use of the Kinect Fusion APIs for 3D reconstruction.
  • Kinect Fusion Explorer - WPF, Kinect Fusion Explorer - D2D: Demonstrates advanced 3D reconstruction features of Kinect Fusion, allowing adjustment of many reconstruction parameters, and export of reconstructed meshes.

What's New in version 1.6 of the SDK and the Developer Toolkit

Here's a link to the known issues for this release.

Windows 8 Support

Using the Kinect for Windows SDK, you can develop Kinect for Windows desktop applications on Windows 8.

Visual Studio 2012 Support

The SDK supports development with Visual Studio 2012, including the new .NET Framework 4.5.

Accelerometer Data APIs

Data from the sensor's accelerometer is now exposed in the API. This enables detection of the sensor's orientation.
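
For example, a minimal sketch of reading the accelerometer from managed code; the helper name and the pitch calculation are illustrative only.

    using System;
    using Microsoft.Kinect;

    static class AccelerometerSketch
    {
        public static double GetSensorPitch(KinectSensor sensor)
        {
            // The reading is a gravity vector in sensor coordinates (units of g);
            // a level, upright sensor reports roughly (0, -1, 0, 0).
            Vector4 g = sensor.AccelerometerGetCurrentReading();
            return Math.Atan2(g.Z, -g.Y);   // approximate pitch, in radians
        }
    }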

Extended Depth Data Is Now Available

CopyDepthImagePixelDataTo() now provides depth data beyond 4 meters; please note that data quality degrades with distance. In addition to extended depth data, the usability of the depth data API has been improved: bit masking is no longer required.
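
A minimal sketch of reading extended depth data without bit masking; the handler name is illustrative only.

    using Microsoft.Kinect;

    static class ExtendedDepthSketch
    {
        // DepthImagePixel exposes the depth in millimeters and the player index
        // directly, and values beyond 4 meters are reported (with reduced accuracy
        // at long range).
        public static void OnDepthFrameReady(object sender, DepthImageFrameReadyEventArgs e)
        {
            using (DepthImageFrame frame = e.OpenDepthImageFrame())
            {
                if (frame == null) return;

                var pixels = new DepthImagePixel[frame.PixelDataLength];
                frame.CopyDepthImagePixelDataTo(pixels);

                short depthInMillimeters = pixels[0].Depth;   // no shifting by the player-index bits
                short playerIndex = pixels[0].PlayerIndex;    // 0 when no player covers this pixel
            }
        }
    }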

Color Camera Setting APIs

The Color Camera Settings can now be optimized to your environment.

  • You can now fine-tune white balance, contrast, hue, saturation, and other settings.
  • To see the full list of adjustable settings, launch Kinect Explorer from the Developer Toolkit Browser and review the Exposure Settings and Color Settings controls. A minimal sketch of adjusting a few of these settings in code follows this list.
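
The sketch below adjusts a few settings through the managed ColorCameraSettings object; the method name and the specific values are illustrative only.

    using Microsoft.Kinect;

    static class ColorSettingsSketch
    {
        public static void TuneForDimRoom(KinectSensor sensor)
        {
            ColorCameraSettings settings = sensor.ColorStream.CameraSettings;

            settings.AutoWhiteBalance = false;
            settings.WhiteBalance = 4500;        // color temperature, in Kelvin
            settings.Contrast = 1.0;
            settings.Saturation = 1.2;
            settings.Hue = 0.0;

            // Call settings.ResetToDefault() to return every setting to its factory value.
        }
    }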

More Control over Decoding

New RawBayer Resolutions for ColorImageFormat give you the ability to do your own Bayer to RGB conversions on CPU or GPU.

New Coordinate Space Conversion APIs

There are several new APIs to convert data between coordinate spaces: color, depth, and skeleton. There are two sets of APIs: one for converting individual pixels and the other for converting an entire image frame.
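
A minimal sketch of both conversion styles using the sensor's CoordinateMapper; the helper name and the sample point are illustrative only.

    using Microsoft.Kinect;

    static class CoordinateMappingSketch
    {
        public static void MapExamples(KinectSensor sensor, DepthImagePixel[] depthPixels)
        {
            CoordinateMapper mapper = sensor.CoordinateMapper;

            // Per-pixel conversion: find where one depth pixel lands in the color image.
            var depthPoint = new DepthImagePoint { X = 320, Y = 240, Depth = 2000 };
            ColorImagePoint colorPoint = mapper.MapDepthPointToColorPoint(
                DepthImageFormat.Resolution640x480Fps30, depthPoint,
                ColorImageFormat.RgbResolution640x480Fps30);

            // Whole-frame conversion: map every depth pixel to color space in one call.
            var colorPoints = new ColorImagePoint[depthPixels.Length];
            mapper.MapDepthFrameToColorFrame(
                DepthImageFormat.Resolution640x480Fps30, depthPixels,
                ColorImageFormat.RgbResolution640x480Fps30, colorPoints);
        }
    }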

German Language Pack for Speech Recognition

The SDK ships with a German speech recognition language pack that has been optimized for the sensor's microphone array.
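
The SDK speech samples select the Kinect-tuned acoustic model by checking the recognizer's additional info; a minimal sketch of doing the same for the German pack is shown below (the helper name is illustrative).

    using System;
    using System.Linq;
    using Microsoft.Speech.Recognition;   // Microsoft Speech Platform SDK

    static class GermanSpeechSketch
    {
        public static SpeechRecognitionEngine CreateGermanKinectRecognizer()
        {
            // Pick the Kinect-tuned acoustic model for German, if installed.
            RecognizerInfo info = SpeechRecognitionEngine.InstalledRecognizers()
                .FirstOrDefault(r =>
                {
                    string value;
                    return r.AdditionalInfo.TryGetValue("Kinect", out value)
                        && "True".Equals(value, StringComparison.OrdinalIgnoreCase)
                        && "de-DE".Equals(r.Culture.Name, StringComparison.OrdinalIgnoreCase);
                });

            return info == null ? null : new SpeechRecognitionEngine(info.Id);
        }
    }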

Infrared Emitter Control API

The sensor's infrared emitter has previously always been on when the sensor is active, which can cause depth detection degradation in a scenario where multiple sensors are observing the same space. There is a new API (KinectSensor.ForceInfraredEmitterOff) for turning the infrared emitter off.
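
A minimal sketch (the helper name is illustrative):

    using Microsoft.Kinect;

    static class EmitterControlSketch
    {
        // Turn the IR emitter off while another sensor scans the same space,
        // then turn it back on. Supported on Kinect for Windows hardware only.
        public static void PulseEmitter(KinectSensor sensor)
        {
            sensor.ForceInfraredEmitterOff = true;
            // ... let the other sensor capture ...
            sensor.ForceInfraredEmitterOff = false;
        }
    }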

Introducing New Samples!

  • Basic Interactions-WPF: Demonstrates basic gestures, such as targeting and selecting with a cursor, as well as appropriate feedback mechanisms for an optimal user experience.
  • WPF D3D Interop: Demonstrates DirectX 11 interop with WPF, including full WPF composition of DirectX surfaces.
  • Infrared Basics-WPF, Infrared Basics-D2D: Demonstrates using the infrared stream and displaying infrared image data.

Kinect Studio 1.6.0

Kinect Studio has been updated to support the Infrared, RawBayer, Extended Depth Data, and Accelerometer features.

The Infrared Stream Is Now Exposed in the API

The Kinect sensor's infrared stream is now exposed as a new ColorImageFormat. You can use the infrared stream in many scenarios, such as:

  • Calibrating other color cameras to the Kinect's depth sensor
  • Capturing grayscale images in low-light situations

Two infrared samples have been added to the toolkit, and you can also try out infrared in KinectExplorer.

The sensor is not capable of capturing an infrared stream and a color stream simultaneously. However, you can capture an infrared stream and a depth stream simultaneously.
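
A minimal sketch of enabling the infrared stream through the color stream API (the helper name is illustrative):

    using Microsoft.Kinect;

    static class InfraredSketch
    {
        public static void Enable(KinectSensor sensor)
        {
            // Infrared arrives through the color stream as 16 bits per pixel.
            sensor.ColorStream.Enable(ColorImageFormat.InfraredResolution640x480Fps30);

            sensor.ColorFrameReady += (s, e) =>
            {
                using (ColorImageFrame frame = e.OpenColorImageFrame())
                {
                    if (frame == null) return;
                    var pixels = new byte[frame.PixelDataLength];
                    frame.CopyPixelDataTo(pixels);
                    // Each infrared sample is a 16-bit value; render as Gray16 or convert to 8-bit for display.
                }
            };
        }
    }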

Support for Virtual Machines

The Kinect for Windows sensor now works on Windows running in a virtual machine and has been tested with the following VM environments:

  • Microsoft Hyper-V
  • VMware
  • Parallels

This greatly expands the utility of the Kinect for Windows SDK, as it can now be used on any machine whose native OS supports running Windows in one of the VM packages listed above. In particular, this enables several developer scenarios, such as certain automated testing approaches.

Setup and configuration details for using the Kinect with the tested VMs are contained in the Getting Started section of this documentation.

Note that only one Kinect at a time will work with a given VM, and you may experience lower frame rates on lower-end computers as some computing resources are consumed by the VM itself.

 

------------------------------------------------------------------

 

What's New in version 1.5.2 of the Developer Toolkit

Here's a link to the known issues for this release.

New Interoperability Sample

WPFD3Dinterop is a new sample that demonstrates DirectX 11 interoperability with Windows Presentation Foundation (WPF), including full WPF composition of DirectX surfaces. This enables powerful DirectX rendering intermixed with quicker-to-develop WPF user interfaces.

 

------------------------------------------------------------------

 

What's New in version 1.5.1 of the Developer Toolkit

Here's a link to the known issues for this release.

Improvements to Kinect Studio 1.5.1

We made the following performance and stability improvements to Kinect Studio 1.5.1:

  • Reduced CPU usage overhead.
  • Improved toolbar icons in Normal and High Contrast mode.
  • Eliminated distortion and flickering that occurred in the Color Viewer display during playback using some video cards.
  • Fixed hanging that occurred using AMD Radeon HD 5xxx series graphics cards.
  • Fixed crashing that occurred during playback of .xed files that were recently saved to a network share.

Improvements to Face Tracking

We made the following improvements to the Face Tracking feature:

  • The Microsoft.Kinect.Toolkit.FaceTracking project now automatically copies the appropriate native libraries to the output directory in a post-build step.
  • The FaceTracking3D-WPF sample now builds all configurations.

Sample Updates

We made the following fixes to the samples:

  • Audio Explorer – Fixed label captions that were clipped.
  • Avateering – Reference to Microsoft.Kinect.dll is no longer version-specific.
  • Kinect Explorer – Fixed crash that occurred in locales where "," is used as the decimal separator.

Offline Documentation

SDK documentation is now available offline. For details see Kinect for Windows SDK Offline Docs.

Windows 8

We tested SDK 1.5 and Toolkit 1.5.1 on top of Windows 8 Release Preview and found no new problems. Although that is not yet a supported production platform for Kinect for Windows, we continue to track well towards supporting it sometime after Windows 8 ships.

 

------------------------------------------------------------------

 

What's New in version 1.5 of the SDK and the Developer Toolkit

Here's a link to the known issues for this release.

Backward Compatibility

The Kinect for Windows 1.5.0 SDK, driver, and runtime are 100% compatible with Kinect for Windows 1.0 application binaries.

Kinect for Windows Developer Toolkit

Please note the following changes to the developer toolkit:

  • As of this release, the SDK has been divided into a core SDK and a developer toolkit. These are two separate installations.
  • All samples have been moved into the toolkit.
  • We've continued significant sample investments in SDK 1.5.0. There are many new samples in both C++ and C#. In addition, we've included a "Basics" series of samples with language coverage in C++, C#, and Visual Basic. To explore the list of new and updated samples, please launch the Developer Toolkit Browser and explore.
  • We've taken KinectSensorChooser, formerly part of the WpfViewers, and split the logic and UI into two different classes: KinectSensorChooser and KinectSensorChooserUI in Microsoft.Kinect.Toolkit.dll.
  • KinectSensorChooser can be used in non-WPF scenarios because it provides logic only and has no UI.
  • We have significantly improved the user experience for KinectSensorChooserUI (used with a KinectSensorChooser instance). The ShapeGame sample now uses it. A minimal usage sketch of KinectSensorChooser follows this list.
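
A minimal sketch of using KinectSensorChooser from Microsoft.Kinect.Toolkit; the helper name and the particular streams enabled are illustrative only.

    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit;

    static class SensorChooserSketch
    {
        public static KinectSensorChooser Start()
        {
            var chooser = new KinectSensorChooser();

            // The chooser manages the sensor lifetime for you; the handler only
            // (re)configures streams as sensors come and go.
            chooser.KinectChanged += (sender, e) =>
            {
                if (e.OldSensor != null)
                {
                    e.OldSensor.DepthStream.Disable();
                    e.OldSensor.SkeletonStream.Disable();
                }

                if (e.NewSensor != null)
                {
                    e.NewSensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
                    e.NewSensor.SkeletonStream.Enable();
                }
            };

            chooser.Start();   // begin monitoring for sensors being attached or removed
            return chooser;
        }
    }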

Kinect Studio

Kinect Studio is a new tool that allows you to record and replay Kinect data to aid in development. For example, a developer writing a Kinect for Windows application for use in a shopping center can record clips of users in the target environment and then replay those clips at a later time for development and test purposes.

Please note the following when using Kinect Studio:

  • Kinect Studio must be used in conjunction with a Kinect for Windows application. Start the application, then start Kinect Studio, and you can then record and replay clips. When you replay a clip, it will be fed into the application as if it were live Kinect data.
  • Kinect Studio puts additional load on your machine and it may cause the frame rate to drop. Using a faster CPU and memory will improve performance.
  • Your temporary file location (set using Tools > Options) should not be a network location.
  • For more information, see 1.5.0 SDK and Developer Toolkit Known Issues.

Skeletal Tracking in Near Range Now Available

It is now possible to receive skeletal tracking data when the Kinect camera is in the Near Range setting. When using near-range skeletal tracking, please note the following:

  • This setting is disabled by default to ensure backward compatibility with Kinect for Windows 1.0 applications.
  • Enable this setting through the SkeletonStream.EnableTrackingInNearRange property (in managed code) or by including the NUI_SKELETON_TRACKING_FLAG_ENABLE_IN_NEAR_RANGE flag when calling NuiSkeletonTrackingEnable (in native code).
  • This feature works whether SkeletonTrackingMode is set to Default or Seated.
  • We suggest using Seated mode together with Near Range, because in most scenarios the player's body will not be entirely visible. Ensure the torso and head of the players are visible so that tracking can lock on. A minimal sketch of enabling near-range tracking follows this list.
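
A minimal sketch of enabling near-range, seated skeletal tracking in managed code; the helper name is illustrative, and near range requires Kinect for Windows hardware.

    using Microsoft.Kinect;

    static class NearRangeSketch
    {
        public static void EnableNearSeatedTracking(KinectSensor sensor)
        {
            sensor.DepthStream.Enable(DepthImageFormat.Resolution640x480Fps30);
            sensor.SkeletonStream.Enable();

            // Near range is supported only by Kinect for Windows sensors.
            sensor.DepthStream.Range = DepthRange.Near;
            sensor.SkeletonStream.EnableTrackingInNearRange = true;   // disabled by default

            // Seated mode tracks the 10 upper-body joints and works well close up,
            // where the full body is usually not visible.
            sensor.SkeletonStream.TrackingMode = SkeletonTrackingMode.Seated;
        }
    }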

Seated Skeletal Tracking Now Available

Skeletal tracking is now available when Seated mode is selected.

  • This mode tracks a 10-joint skeleton (consisting of head, shoulders, and arms) and ignores the leg and hip joints.
  • This mode can be used regardless of whether the player is standing or sitting.
  • Seated mode has been added to the Skeletal Viewer (C++) and Kinect Explorer (C#) samples. To try out the mode to understand its tracking ability, launch one of those applications and change the tracking setting from Default to Seated.
  • For information on enabling Seated mode, see Natural User Interface for Kinect for Windows.
  • Seated mode skeletal tracking has higher performance requirements than Default mode, especially when tracking two players. You may notice a reduction in frame rate with some system configurations.

Runtime Improvements

We have made the following improvements to the runtime:

  • The performance of the KinectSensor.MapDepthFrameToColorFrame method has been significantly improved. It is now five times faster on average.
  • Depth and color frames are now kept in sync. The Kinect for Windows runtime continuously monitors the depth and color streams and ensures that there is minimal drift between them.
  • In managed code, the frames returned from the KinectSensor.AllFramesReady event will have been captured at nearly the same time and carry timestamps that verify this (see the sketch after this list).
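
For example, a minimal handler that compares the timestamps of a paired color and depth frame (the helper name is illustrative):

    using System;
    using Microsoft.Kinect;

    static class FrameSyncSketch
    {
        public static void OnAllFramesReady(object sender, AllFramesReadyEventArgs e)
        {
            using (ColorImageFrame color = e.OpenColorImageFrame())
            using (DepthImageFrame depth = e.OpenDepthImageFrame())
            {
                if (color == null || depth == null) return;

                // Timestamps (in milliseconds) of the paired frames stay close together.
                long driftMs = Math.Abs(color.Timestamp - depth.Timestamp);
                Console.WriteLine("Color/depth drift: {0} ms", driftMs);
            }
        }
    }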

RGB Image quality

We have made the following improvements to image quality:

  • The RGB camera stream quality has been improved for the RGB 640x480/30fps and YUV 640x480/15fps video modes.
  • The image quality is now sharper and more color-accurate in high and low lighting conditions.

Joint Orientation

The Kinect for Windows runtime provides joint orientation information for the skeletons tracked by the skeletal tracking (ST) pipeline. The joint orientation is provided in two forms:

  • A hierarchical rotation based on a bone relationship defined on the ST joint structure.
  • An absolute orientation in Kinect camera coordinates.

The orientation information is provided in the form of quaternions and rotation matrices. This information can be used in avatar animation scenarios as well as simple pose detection.
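
A minimal sketch of reading both forms for one joint in managed code (the helper name is illustrative):

    using Microsoft.Kinect;

    static class JointOrientationSketch
    {
        public static void ReadElbowOrientation(Skeleton skeleton)
        {
            BoneOrientation elbow = skeleton.BoneOrientations[JointType.ElbowRight];

            // Hierarchical rotation: rotation of this bone relative to its parent bone,
            // handy for driving an avatar's skeleton hierarchy.
            Vector4 relative = elbow.HierarchicalRotation.Quaternion;

            // Absolute rotation: orientation in Kinect camera coordinates,
            // available as both a quaternion and a rotation matrix.
            Matrix4 absolute = elbow.AbsoluteRotation.Matrix;
        }
    }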

New Supported Languages for Speech Recognition

Acoustic models have been created to allow speech recognition in several additional locales. These are runtime components that are packaged individually and are available here. The following locales are now supported:

  • en-AU
  • en-CA
  • en-GB
  • en-IE
  • en-NZ
  • es-ES
  • es-MX
  • fr-CA
  • fr-FR
  • it-IT
  • ja-JP

Face Tracking SDK

We have added a Face Tracking component, which offers the following features:

  • The Face Tracking component tracks face position, orientation, and facial features in real time.
  • A 3D mesh of the tracked face, along with eyebrow position and mouth shape, is animated in real time.
  • Multiple faces can be tracked simultaneously.
  • Face Tracking components can be used from native C++; a managed wrapper is provided for C# and Visual Basic projects (see the sketch below).
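
A minimal sketch of the managed wrapper is shown below. It is illustrative only: in a real application, create the FaceTracker once and reuse it across frames rather than per call, and the depth data passed in is the raw short[] pixel array from the depth stream.

    using Microsoft.Kinect;
    using Microsoft.Kinect.Toolkit.FaceTracking;

    static class FaceTrackingSketch
    {
        public static void TrackFace(
            KinectSensor sensor, byte[] colorPixels, short[] depthPixels, Skeleton skeleton)
        {
            using (var faceTracker = new FaceTracker(sensor))
            {
                FaceTrackFrame frame = faceTracker.Track(
                    sensor.ColorStream.Format, colorPixels,
                    sensor.DepthStream.Format, depthPixels,
                    skeleton);

                if (frame.TrackSuccessful)
                {
                    // Head pose and position, plus animated facial feature coefficients.
                    Vector3DF rotation = frame.Rotation;
                    Vector3DF translation = frame.Translation;
                    var animationUnits = frame.GetAnimationUnitCoefficients();
                }
            }
        }
    }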

Documentation Improvements

We made the following improvements to the documentation:

  • The documentation is now online in MSDN Library. To view the documentation, press F1 in Visual Studio.
  • The SDK Documentation .chm file is no longer distributed by setup. Please use the online documentation.