Hello @PatrikSvensson-5712, thanks for reaching out to us!
Please refer to the related GitHub issue discussion (dated Dec 12, 2019), which describes a similar setup and what to expect when connecting multiple Azure Kinect devices to a single host computer.
You can build your own ideas (such as increasing or decreasing the hardware capacity) on top of the discussion below.
Let me quote the reply here for better visibility:
During the Microsoft Ignite conference we created a volumetric capture experience booth in partnership with Scatter and their DepthKit.
The setup included:
- 5 Azure Kinect DK devices (tested with 10 with no issues)
- For USB 3.0 data extension: the Cable Matters 5-meter active USB 3.0 extension
- For power: the Monoprice 5-meter USB 2.0 extension and the power adapters that come with the Azure Kinect
- PC
  - Intel i9 9920X Extreme (3.5 GHz, 12-core / 24-thread)
  - 32 GB DDR4 memory (8 GB × 4)
  - NVIDIA RTX 2080 Ti video card
  - Asus X299 TUF MK2 motherboard
  - 3× StarTech.com 4-port USB 3.0 PCIe cards with 4 dedicated 5 Gbps channels
- Data streaming options
  - 720p color
  - 640×576 depth
- Software running on the PC
  - Sensor SDK
  - Demo app built in the Unity game engine
  - Plugin for Unity that supports the Unity VFX Graph for particle system effects
  - DepthKit
- CPU/GPU usage
With 10 devices at these settings, CPU utilization was around 43% and GPU utilization was around 50%, with zero dropped frames; with only 5 devices, utilization is roughly half of that. In this scenario, the work done by DepthKit is an optimized pipeline that handles frame acquisition from the devices, frame synchronization, 3D viewport rendering of the depth and color data, and local GPU texture memory sharing with other applications, such as the Unity-based Ignite demo app.

There were some issues where a device could fail to open properly and cause other devices on the system to "disappear" and not be recognized until a physical unplug/replug cycle. Not much testing was done on this, but it is something to be aware of when working with multiple devices.
By far the biggest expense in this multi-camera processing pipeline is copying video frames from the device SDK buffer to the rest of the pipeline, especially when using color resolutions above 720p.
As for synchronization, we used the SDK to automatically determine the sync cable connection topology and assign sync delay offsets to each subordinate in the chain.

Hope this helps answer any questions about multi-camera synchronization.
Please let us know if you need further info on this matter.