Laptop Recs for Azure Kinect DK

ArjunTech 6 Reputation points
2021-01-26T23:07:40.66+00:00

Anybody have any laptop recs for the Azure Kinect? I am hoping to run the Body Tracking SDK on at most 3 Kinect sensors. I understand that compatibility with a laptop's native USB host controller may be an issue. I have tried looking up the host controllers in best-in-class models (Razer, ASUS ROG Strix G15, HP Omen), but the manufacturers don't say who supplies them.

Anybody have any advice on what laptop I should get? I am looking for portability as I may need to use the sensors in multiple locations.

Azure Kinect DK
A Microsoft developer kit and peripheral device with advanced artificial intelligence sensors for sophisticated computer vision and speech models.

1 answer

  1. QuantumCache 20,106 Reputation points
    2021-01-29T08:20:29.907+00:00

    Hello @ArjunTech, thanks for reaching out to us with this very useful and important query!

    TL;DR: Hardware requirements depend on your use case. That said, please keep reading the references below; treat them as a guide rather than a fixed specification, as your requirements may grow to the point of upgrading your hardware or replacing it outright.

    1) I hope you have already visited the official MS doc: Azure Kinect sensor SDK system requirements

    2) A recent, similar GitHub discussion on the requirements:

    • I have a new XMG laptop with an RTX 2070 Super Max-P 8 GB at 115 W.
      A single Azure Kinect sensor running body tracking consumes ~70% of the GPU and delivers 30 fps.

    3) A related GitHub issue: "one PC, 4 kinect sensor, The maximum number of body tracking supported?"

    4) Another GitHub issue: "Azure Kinect slowed down & Body Tracking not working on CPU 8 core Ryzen 7 3700x & GPU rtx 3070"

    5) To find more discussions on this topic, please visit github.com/microsoft/Azure-Kinect-Sensor-SDK/

    6) Finally, consider a scenario of one host with 5 Azure Kinect devices. Refer to this GitHub comment, quoted below:

    During the Microsoft ignite conference we created a volumetric capture experience booth in partnership with Scatter and their DepthKit.

    The setup included:

    • 5 Azure Kinect DK devices (tested with 10 with no issues)
    • For USB 3.0 Data extension, the Cable Matters 5 Meter Active USB 3.0 extension
    • For power, the Monoprice 5 Meter USB 2.0 extension and the power adapters that come with the Azure Kinect

    PC

    • Intel i9 9920x Extreme (3.5GHz,12-core / 24-thread)
    • 32GB DDR4 memory (8GB x 4)
    • NVIDIA RTX2080Ti video card
    • Asus X299Tuf MK2 Motherboard
    • 3x StarTech.com 4 Port USB3.0 PCIe Cards with 4 Dedicated 5Gbps Channels

    Data streaming options

    • 720p color
    • 640x576 depth
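
    Those two streams map onto the Sensor SDK's C device configuration roughly as follows. This is a sketch of my own, not code from the quoted setup; 640x576 depth corresponds to the NFOV unbinned mode, and I am assuming MJPG color, which is the SDK's default compressed format:

    ```c
    #include <k4a/k4a.h>

    /* Sketch: a per-device configuration matching the quoted streams --
     * 720p color plus 640x576 depth (NFOV unbinned) at 30 fps. */
    static k4a_device_configuration_t make_config(void)
    {
        k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
        config.color_format     = K4A_IMAGE_FORMAT_COLOR_MJPG;
        config.color_resolution = K4A_COLOR_RESOLUTION_720P;
        config.depth_mode       = K4A_DEPTH_MODE_NFOV_UNBINNED; /* 640x576 */
        config.camera_fps       = K4A_FPS_30;
        config.synchronized_images_only = true; /* only deliver matched pairs */
        return config;
    }
    ```

    You would pass the returned struct to k4a_device_start_cameras() for each opened device.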

    Software running on the PC

    • Sensor SDK
    • Demo app that was built in the Unity game engine
    • Plugin for Unity that supports the Unity VFX graph for particle system effects
    • DepthKit

    CPU/GPU usage
    With 10 devices at these settings, CPU utilization was around 43% and GPU utilization was around 50% with zero dropped frames; with only 5 devices, utilization is roughly half of that. In this scenario the work done by DepthKit is an optimized pipeline that handles frame acquisition from the device, frame synchronization, 3D viewport rendering of the depth and color data, as well as local GPU texture-memory sharing to other applications, like the Unity-based Ignite demo app.

    There were some issues where a device would fail to open properly and cause other devices on the system to "disappear", remaining unrecognized until a physical unplug/replug cycle. Not much testing was done on this, but it is something to be aware of when working with multiple devices.
    By far the biggest expense in this multi-camera processing pipeline is copying video frames from the device SDK buffer to the rest of the pipeline, especially when using color resolutions above 720p.
    As for synchronization, we used the SDK to automatically determine the sync cable connection topology, and assign sync delay offsets to each subordinate in the chain.

    Hope this helps answer any questions about multi-camera synchronization.
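
    The delay-offset assignment described in the quote comes down to simple arithmetic. Microsoft's multi-device sync documentation recommends spacing the depth captures of daisy-chained devices by at least 160 microseconds so their lasers don't interfere; the helper below (`subordinate_delay_usec` is hypothetical, not an SDK call) sketches offsets that satisfy that, which you would then assign to each subordinate's `subordinate_delay_off_master_usec`:

    ```c
    #include <stdio.h>

    /* The multi-device sync docs recommend spacing depth captures of
     * chained devices by at least 160 us to avoid laser interference. */
    #define MIN_DEPTH_OFFSET_USEC 160u

    /* Hypothetical helper: index 0 is the master (zero offset); each
     * subordinate in the sync chain fires one 160 us slot later. */
    static unsigned int subordinate_delay_usec(unsigned int device_index)
    {
        return device_index * MIN_DEPTH_OFFSET_USEC;
    }

    int main(void)
    {
        const unsigned int num_devices = 5; /* as in the booth setup above */
        for (unsigned int i = 0; i < num_devices; i++)
            printf("device %u: delay = %u us\n", i, subordinate_delay_usec(i));
        return 0;
    }
    ```

    With 5 devices this yields offsets of 0, 160, 320, 480, and 640 us, all well within one 33 ms frame period at 30 fps.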

    Please comment on this response if you need more help in this regard.