
Dual graphics card in a setup with two Azure Kinect DK devices

Patrik Svensson 31 Reputation points
28 Jul 2020, 8:52 am

As mentioned here: https://learn.microsoft.com/en-us/azure/kinect-dk/multi-camera-sync#host-computers, you typically use one host computer per Kinect device.

I was wondering if a setup with a single computer and two graphics cards utilising NVIDIA SLI would also work, and whether the Azure Kinect DK could take advantage of both graphics cards in that setup?

Currently I have two Kinects connected to a single computer with one graphics card. If I use lower color/depth/fps modes it works, but at higher settings the graphics card can't keep up.
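
For reference, this is roughly how I configure each device today. It is only a sketch against the Sensor SDK's C API (k4a.h), not my exact code, and the specific mode constants are just examples of the "lower" settings I mean:

    /* Rough sketch of a lower-mode configuration for two devices on one GPU.
     * Uses the Azure Kinect Sensor SDK (k4a.h); error handling is trimmed. */
    #include <k4a/k4a.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t count = k4a_device_get_installed_count();

        for (uint32_t i = 0; i < count && i < 2; i++)
        {
            k4a_device_t device = NULL;
            if (K4A_FAILED(k4a_device_open(i, &device)))
            {
                printf("Failed to open device %u\n", i);
                continue;
            }

            k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
            config.color_format     = K4A_IMAGE_FORMAT_COLOR_MJPG;
            config.color_resolution = K4A_COLOR_RESOLUTION_720P;     /* lower color mode */
            config.depth_mode       = K4A_DEPTH_MODE_NFOV_2X2BINNED; /* lower depth mode */
            config.camera_fps       = K4A_FRAMES_PER_SECOND_15;      /* lower frame rate */

            if (K4A_SUCCEEDED(k4a_device_start_cameras(device, &config)))
            {
                /* ... capture loop elided ... */
                k4a_device_stop_cameras(device);
            }
            k4a_device_close(device);
        }
        return 0;
    }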


Accepted answer
  1. QuantumCache 20,271 Reputation points
    28 Jul 2020, 6:58 pm

    Hello @PatrikSvensson-5712, thanks for reaching out to us!

    Please refer to the related GitHub issue discussion (dated 12 Dec 2019), which describes a similar experience and what to expect when connecting multiple Azure Kinect devices to a single host computer.

    You can build on the discussion below with your own ideas, such as scaling the hardware capacity up or down.

    Let me quote the reply over here for better visibility:

    During the Microsoft Ignite conference we created a volumetric capture experience booth in partnership with Scatter and their DepthKit.

    The setup included:

    • 5 Azure Kinect DK devices (tested with 10 with no issues)
    • For USB 3.0 data extension: the Cable Matters 5-meter active USB 3.0 extension
    • For power: the Monoprice 5-meter USB 2.0 extension and the power adapters that come with the Azure Kinect
    • PC:
      Intel i9 9920X Extreme (3.5 GHz, 12-core / 24-thread)
      32 GB DDR4 memory (8 GB x 4)
      NVIDIA RTX 2080 Ti video card
      Asus X299 TUF MK2 motherboard
      3x StarTech.com 4-port USB 3.0 PCIe cards with 4 dedicated 5 Gbps channels

    • Data streaming options
      720p color
      640x576 depth
    • Software running on the PC
      Sensor SDK
      Demo app that was built in the Unity game engine
      Plugin for Unity that supports the Unity VFX graph for particle system effects
      DepthKit
    • CPU / GPU usage
      With 10 devices at these settings, the CPU utilization was around 43% and the GPU utilization was around 50% with zero dropped frames; with only 5 Devices, utilization is roughly half of that. In this scenario the work being done by Depthkit is an optimized pipeline that handles frame acquisition from the device, frame synchronization, 3D viewport rendering of the depth and color data, as well as local GPU texture memory sharing to other applications, like the Unity based Ignite demo app.

    There were some issues where a device could fail to open properly and cause other devices on the system to "disappear" and not be recognized until a physical unplug/replug cycle. Not much testing was done on this, but it is something to be aware of when working with multiple devices.
    By far the biggest expense in this multi-camera processing pipeline is copying video frames from the device SDK buffer to the rest of the pipeline, especially when using color resolutions above 720p.
    As for synchronization, we used the SDK to automatically determine the sync cable connection topology, and assign sync delay offsets to each subordinate in the chain.

    Hope it helps to answer any of the questions about multicamera synchronization.
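
    To make the synchronization and frame-copy points in the quoted reply a little more concrete, here is a minimal sketch against the Sensor SDK's C API (k4a.h). It is illustrative only, not the code used at the Ignite booth: it infers master/subordinate roles from the sync jack wiring, staggers the subordinate depth captures with a delay offset (the 160 microsecond spacing is my assumption; choose values that suit your rig), and marks the point where frame data leaves the SDK buffers, which the reply identifies as the most expensive step.

      /* Illustrative multi-device sketch, not production code.
       * Assumes the Azure Kinect Sensor SDK (k4a.h); error handling is minimal. */
      #include <k4a/k4a.h>
      #include <stdbool.h>
      #include <stdio.h>

      #define MAX_DEVICES 16

      int main(void)
      {
          k4a_device_t devices[MAX_DEVICES] = { 0 };
          uint32_t count = k4a_device_get_installed_count();
          if (count > MAX_DEVICES)
              count = MAX_DEVICES;

          uint32_t subordinates = 0;

          for (uint32_t i = 0; i < count; i++)
          {
              /* A device that fails to open can make others "disappear" until a
               * physical unplug/replug, so fail loudly instead of continuing. */
              if (K4A_FAILED(k4a_device_open(i, &devices[i])))
              {
                  printf("Device %u failed to open\n", i);
                  return 1;
              }

              /* Read the sync jack wiring to decide this device's role. */
              bool sync_in = false, sync_out = false;
              k4a_device_get_sync_jack(devices[i], &sync_in, &sync_out);

              k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
              config.color_format = K4A_IMAGE_FORMAT_COLOR_MJPG;
              config.color_resolution = K4A_COLOR_RESOLUTION_720P;
              config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;
              config.camera_fps = K4A_FRAMES_PER_SECOND_30;
              config.synchronized_images_only = true;

              if (sync_in)
              {
                  /* Receives a sync pulse: subordinate. Stagger depth captures so
                   * the IR lasers do not interfere (160 us is an assumed spacing). */
                  config.wired_sync_mode = K4A_WIRED_SYNC_MODE_SUBORDINATE;
                  config.subordinate_delay_off_master_usec = 160 * (++subordinates);
              }
              else if (sync_out)
              {
                  /* Only the "out" jack is wired: this device drives the chain. */
                  config.wired_sync_mode = K4A_WIRED_SYNC_MODE_MASTER;
              }

              if (K4A_FAILED(k4a_device_start_cameras(devices[i], &config)))
              {
                  printf("Device %u failed to start\n", i);
                  return 1;
              }
          }

          /* One capture per device. k4a_image_get_buffer() only hands back a
           * pointer into the SDK's buffer; copying that data into your own
           * pipeline or onto the GPU is the expensive step mentioned above. */
          for (uint32_t i = 0; i < count; i++)
          {
              k4a_capture_t capture = NULL;
              if (k4a_device_get_capture(devices[i], &capture, 1000) == K4A_WAIT_RESULT_SUCCEEDED)
              {
                  k4a_image_t color = k4a_capture_get_color_image(capture);
                  if (color)
                  {
                      printf("Device %u color frame: %zu bytes\n", i, k4a_image_get_size(color));
                      k4a_image_release(color);
                  }
                  k4a_capture_release(capture);
              }
          }

          for (uint32_t i = 0; i < count; i++)
          {
              k4a_device_stop_cameras(devices[i]);
              k4a_device_close(devices[i]);
          }
          return 0;
      }

    In a real pipeline the subordinates are usually started before the master so that no sync pulses are missed, and a device with neither jack wired would simply stay in the default standalone mode.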

    Please let us know if you need further info on this matter.

