Retrieve Azure Kinect image data

This article describes how to retrieve images from the Azure Kinect. It demonstrates how to capture and access images that are coordinated between the device's color and depth cameras. To access images, you must first open and configure the device; only then can you capture images. Before you configure and capture an image, you must first Find and open the device.
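As a quick reference, opening the default device looks like the following sketch, where K4A_DEVICE_DEFAULT is the index of the first connected device:

// Open the default (first) Azure Kinect device
k4a_device_t device = NULL;
if (K4A_RESULT_SUCCEEDED != k4a_device_open(K4A_DEVICE_DEFAULT, &device))
{
    printf("Failed to open device\n");
    goto Exit;
}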

You can also refer to the SDK Streaming Example, which demonstrates how to use the functions in this article.

The following functions are covered:

k4a_device_start_cameras()
k4a_device_stop_cameras()
k4a_device_get_capture()
k4a_capture_get_color_image()
k4a_capture_get_depth_image()
k4a_capture_get_ir_image()
k4a_image_get_buffer()
k4a_capture_release()
k4a_image_release()

Configure and start the device

The two cameras available on your Kinect device, the depth and color cameras, support multiple modes, resolutions, and output formats. For a complete list, refer to the Azure Kinect DK hardware specifications.

The streaming configuration is set using values in the k4a_device_configuration_t structure.

k4a_device_configuration_t config = K4A_DEVICE_CONFIG_INIT_DISABLE_ALL;
config.camera_fps = K4A_FRAMES_PER_SECOND_30;
config.color_format = K4A_IMAGE_FORMAT_COLOR_MJPG;
config.color_resolution = K4A_COLOR_RESOLUTION_2160P;
config.depth_mode = K4A_DEPTH_MODE_NFOV_UNBINNED;

if (K4A_RESULT_SUCCEEDED != k4a_device_start_cameras(device, &config))
{
    printf("Failed to start device\n");
    goto Exit;
}

Once started, the cameras will continue to capture data until k4a_device_stop_cameras() is called or the device is closed.
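For example, a typical shutdown sequence is:

// Stop streaming and close the device when you're finished with it
k4a_device_stop_cameras(device);
k4a_device_close(device);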

Stabilization

When starting up devices that use the multi-device synchronization feature, it's highly recommended to use a fixed exposure setting. With a manual exposure set, it can take up to eight captures from the device before images and framerate stabilize. With auto exposure, it can take up to 20 captures before images and framerate stabilize.
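As a sketch, a fixed exposure can be set with k4a_device_set_color_control() before starting the cameras. The exposure value here (in microseconds) is purely illustrative, not a recommendation:

// Set a fixed (manual) exposure for the color camera before starting it.
// The value is in microseconds; 8000 is only an illustrative choice.
if (K4A_RESULT_SUCCEEDED != k4a_device_set_color_control(device,
                                                         K4A_COLOR_CONTROL_EXPOSURE_TIME_ABSOLUTE,
                                                         K4A_COLOR_CONTROL_MODE_MANUAL,
                                                         8000))
{
    printf("Failed to set manual exposure\n");
}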

Get a capture from the device

Images are captured from the device in a correlated manner. Each capture contains a depth image, an IR image, a color image, or a combination of them.

By default, the API will only return a capture once it has received all of the requested images for the streaming mode. You can configure the API to return partial captures with only depth or color images as soon as they're available by clearing the synchronized_images_only parameter of the k4a_device_configuration_t.
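For example:

// Allow partial captures that may be missing the color or depth image
config.synchronized_images_only = false;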

// Capture a depth frame. This code is intended to run inside a capture loop,
// which is why the timeout case uses continue. TIMEOUT_IN_MS is an
// application-defined wait, for example 1000.
k4a_capture_t capture = NULL;
switch (k4a_device_get_capture(device, &capture, TIMEOUT_IN_MS))
{
case K4A_WAIT_RESULT_SUCCEEDED:
    break;
case K4A_WAIT_RESULT_TIMEOUT:
    printf("Timed out waiting for a capture\n");
    continue;
case K4A_WAIT_RESULT_FAILED:
    printf("Failed to read a capture\n");
    goto Exit;
}

Once the API has successfully returned a capture, you must call k4a_capture_release() when you're finished using the capture object.

Get an image from the capture

To retrieve a captured image, call the appropriate function for each image type. One of:

k4a_capture_get_color_image()
k4a_capture_get_depth_image()
k4a_capture_get_ir_image()

You must call k4a_image_release() on any k4a_image_t handle returned by these functions once you're done using the image.
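For instance, the color and IR images can be retrieved and released like this sketch; each accessor returns NULL if the capture doesn't contain that image type:

// Retrieve the color image, if present in this capture
k4a_image_t color_image = k4a_capture_get_color_image(capture);
if (color_image != NULL)
{
    // ... use the color image ...
    k4a_image_release(color_image);
}

// Retrieve the IR image, if present in this capture
k4a_image_t ir_image = k4a_capture_get_ir_image(capture);
if (ir_image != NULL)
{
    // ... use the IR image ...
    k4a_image_release(ir_image);
}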

Access image buffers

k4a_image_t has many accessor functions to get properties of the image.

To access the image's memory buffer, use k4a_image_get_buffer.

The following example demonstrates how to access a captured depth image. The same principle applies to other image types; just make sure you use the accessor for the correct image type, such as IR or color.

// Access the depth16 image
k4a_image_t image = k4a_capture_get_depth_image(capture);
if (image != NULL)
{
    printf(" | Depth16 res:%4dx%4d stride:%5d\n",
            k4a_image_get_height_pixels(image),
            k4a_image_get_width_pixels(image),
            k4a_image_get_stride_bytes(image));

    // Release the image
    k4a_image_release(image);
}

// Release the capture
k4a_capture_release(capture);
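For completeness, here's a sketch of interpreting the depth buffer's contents; it belongs before the image and capture releases above. For K4A_IMAGE_FORMAT_DEPTH16, each pixel is a uint16_t distance in millimeters, and rows are stride_bytes apart:

// A sketch of reading DEPTH16 pixel values from the buffer.
// Each pixel is a uint16_t depth in millimeters; 0 means no valid depth.
uint8_t *buffer = k4a_image_get_buffer(image);
int width  = k4a_image_get_width_pixels(image);
int height = k4a_image_get_height_pixels(image);
int stride = k4a_image_get_stride_bytes(image); // row size in bytes

for (int y = 0; y < height; y++)
{
    // Rows are stride bytes apart; reinterpret each row as uint16_t pixels
    uint16_t *row = (uint16_t *)(buffer + y * stride);
    for (int x = 0; x < width; x++)
    {
        uint16_t depth_mm = row[x];
        (void)depth_mm; // placeholder for real processing
    }
}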

Next steps

Now you know how to capture and coordinate color and depth images using your Azure Kinect device. You also can: