INuiFusionReconstruction::AlignDepthFloatToReconstruction Method

Aligns a depth float image to the reconstruction volume to calculate the new camera pose.

Syntax

public:
HRESULT AlignDepthFloatToReconstruction(
         const NUI_FUSION_IMAGE_FRAME *pDepthFloatFrame,
         USHORT maxAlignIterationCount,
         const NUI_FUSION_IMAGE_FRAME *pDeltaFromReferenceFrame,
         FLOAT *pAlignmentEnergy,
         const Matrix4 *pWorldToCameraTransform
);

Parameters

  • pDepthFloatFrame
    Type: NUI_FUSION_IMAGE_FRAME
    The depth float frame to be processed.

  • maxAlignIterationCount
    Type: USHORT
    The maximum number of iterations of the algorithm to run. The minimum value is one. Using fewer iterations reduces run time, but the algorithm may not converge to the correct transformation.

  • pDeltaFromReferenceFrame
    Type: NUI_FUSION_IMAGE_FRAME
    A pre-allocated float image frame, to be filled with information about how well each observed pixel aligns with the reconstruction model. This could be processed to create a color rendering, or could be used as input to additional vision algorithms such as object segmentation. These residual values are normalized −1 to 1 and represent the alignment cost/energy for each pixel. Larger magnitude values (either positive or negative) represent more discrepancy, and lower values represent less discrepancy or less information at that pixel.

    Note that if valid depth exists but no reconstruction model exists behind the depth pixels, a value of zero (indicating perfect alignment) is returned for that area. In contrast, where no valid depth exists, a value of one is always returned. Pass NULL for this parameter if you do not need this functionality.
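    The residual semantics above can be sketched with a small helper. Note that `ClassifyResidual` and `ResidualKind` are hypothetical names, not part of the Kinect Fusion API; this is only an illustration of how the per-pixel values might be interpreted.

    ```cpp
    #include <cassert>
    #include <cmath>

    // Hypothetical helper (not part of the Kinect Fusion API): interprets one
    // residual value from the pDeltaFromReferenceFrame buffer, following the
    // semantics described above. Note that a value of exactly 1 is ambiguous:
    // it is always returned where no valid depth exists, but a genuine
    // discrepancy could in principle also reach that magnitude.
    enum class ResidualKind {
        Aligned,      // ~0: good alignment (or no model behind valid depth)
        NoValidDepth, // 1: no valid depth at this pixel
        Discrepancy   // otherwise: alignment cost grows with the magnitude
    };

    ResidualKind ClassifyResidual(float residual)
    {
        const float kEpsilon = 1e-6f;
        if (std::fabs(residual) < kEpsilon)
            return ResidualKind::Aligned;
        if (std::fabs(residual - 1.0f) < kEpsilon)
            return ResidualKind::NoValidDepth;
        return ResidualKind::Discrepancy;
    }
    ```

    A renderer might, for example, map `Discrepancy` magnitudes to a color ramp while drawing `NoValidDepth` pixels as transparent.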

  • pAlignmentEnergy
    Type: FLOAT
    Receives a value in the range [0.0f, 1.0f] describing how well the observed frame aligns to the model with the calculated pose (mean distance between matching points in the point clouds). Lower values indicate better alignment.

  • pWorldToCameraTransform
    Type: Matrix4
    The best guess at the current camera pose. This is usually the camera pose result from the most recent call to the INuiFusionReconstruction::AlignPointClouds or INuiFusionReconstruction::AlignDepthFloatToReconstruction method.

Return value

Type: HRESULT
S_OK if successful; otherwise, returns a failure code.
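A typical tracking step might look like the following sketch. It assumes a previously created reconstruction volume held in `m_pVolume` and a depth float frame `pDepthFloat` already converted with `INuiFusionReconstruction::DepthToDepthFloatFrame`; these names, and the use of the default iteration-count and integration-weight constants, are illustrative assumptions rather than requirements of this method.

```cpp
// Sketch of one tracking step. Assumes m_pVolume is an INuiFusionReconstruction*
// and pDepthFloat is the current frame converted via DepthToDepthFloatFrame.
Matrix4 worldToCamera;
m_pVolume->GetCurrentWorldToCameraTransform(&worldToCamera);

FLOAT alignmentEnergy = 0.0f;
HRESULT hr = m_pVolume->AlignDepthFloatToReconstruction(
    pDepthFloat,
    NUI_FUSION_DEFAULT_ALIGN_ITERATION_COUNT,
    NULL,              // skip the per-pixel residual image
    &alignmentEnergy,
    &worldToCamera);   // best guess: the last known pose

if (SUCCEEDED(hr))
{
    // On success the volume's internal camera pose is updated; fetch it
    // and integrate the depth data at the new pose.
    m_pVolume->GetCurrentWorldToCameraTransform(&worldToCamera);
    hr = m_pVolume->IntegrateFrame(pDepthFloat,
                                   NUI_FUSION_DEFAULT_INTEGRATION_WEIGHT,
                                   &worldToCamera);
}
else
{
    // Tracking failed: skip integration for this frame and consider
    // resetting the reconstruction if failures persist.
}
```

Passing NULL for the residual image, as above, avoids the cost of filling that buffer when only the pose and the alignment energy are needed.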

Requirements

Header: nuikinectfusionvolume.h

Library: TBD