INuiFusionReconstruction::ProcessFrame Method

Processes the specified depth frame through the Kinect Fusion pipeline.


HRESULT ProcessFrame(
         const NUI_FUSION_IMAGE_FRAME *pDepthFloatFrame,
         USHORT maxAlignIterationCount,
         USHORT maxIntegrationWeight,
         FLOAT *pAlignmentEnergy,
         const Matrix4 *pWorldToCameraTransform
);


Parameters

  • pDepthFloatFrame
    Type: NUI_FUSION_IMAGE_FRAME
    The depth float frame to be processed. The maximum resolution of this frame is 640×480.
  • maxAlignIterationCount
    Type: USHORT
    The maximum number of iterations of the algorithm to run. The minimum value is one. Using only a small number of iterations will have a faster run time, but the algorithm may not converge to the correct transformation.
  • maxIntegrationWeight
    Type: USHORT
    A parameter to control the temporal smoothing of depth integration. The minimum value is one. Lower values have more noisy representations, but are suitable for more dynamic environments because moving objects integrate and disintegrate faster. Higher values integrate objects more slowly, but provide finer detail with less noise.
  • pAlignmentEnergy
    Type: FLOAT
    Receives a value in the range [0.0f, 1.0f] that describes how well the observed frame aligns to the model with the calculated pose (mean distance between matching points in the point clouds).
  • pWorldToCameraTransform
    Type: Matrix4
    The best guess at the current camera pose. This is usually the camera pose result from the most recent call to the AlignPointClouds or AlignDepthFloatToReconstruction method.

Return value

Returns S_OK if successful; otherwise, returns a failure HRESULT.


Remarks

This method is equivalent to calling the AlignDepthFloatToReconstruction and IntegrateFrame methods on the specified depth frame. You can call these low-level methods individually for more control over the operation, but calling ProcessFrame completes faster because the two operations are performed together.
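The equivalence can be sketched as follows. This is a hypothetical illustration, not SDK code: the `Reconstruction` struct and both method signatures are simplified stand-ins so the fragment is self-contained (the real interface in the Kinect SDK headers has additional parameters, such as a delta-from-reference frame on AlignDepthFloatToReconstruction), and `E_TRACKING` is an illustrative failure code.

```cpp
#include <cassert>

// Stand-in declarations so this sketch is self-contained; the real types and
// signatures come from the Kinect SDK headers and differ in detail.
typedef int HRESULT;
typedef unsigned short USHORT;
typedef float FLOAT;
const HRESULT S_OK = 0;
const HRESULT E_TRACKING = -1;  // illustrative tracking-failure code
inline bool FAILED(HRESULT hr) { return hr < 0; }

struct NUI_FUSION_IMAGE_FRAME {};  // stand-in
struct Matrix4 {};                 // stand-in

// Simplified stand-in exposing only the two calls that ProcessFrame combines.
struct Reconstruction {
    bool alignSucceeds = true;
    int integrations = 0;
    HRESULT AlignDepthFloatToReconstruction(const NUI_FUSION_IMAGE_FRAME*,
                                            USHORT /*maxAlignIterationCount*/,
                                            FLOAT* pAlignmentEnergy,
                                            const Matrix4*) {
        if (!alignSucceeds) return E_TRACKING;
        if (pAlignmentEnergy) *pAlignmentEnergy = 0.05f;  // illustrative value
        return S_OK;
    }
    HRESULT IntegrateFrame(const NUI_FUSION_IMAGE_FRAME*,
                           USHORT /*maxIntegrationWeight*/,
                           const Matrix4*) {
        ++integrations;
        return S_OK;
    }
};

// Roughly what ProcessFrame does: align first, and only integrate the depth
// data when alignment (camera tracking) succeeded.
HRESULT ProcessFrameEquivalent(Reconstruction& r,
                               const NUI_FUSION_IMAGE_FRAME* depth,
                               USHORT maxAlignIterationCount,
                               USHORT maxIntegrationWeight,
                               FLOAT* pAlignmentEnergy,
                               const Matrix4* worldToCamera) {
    HRESULT hr = r.AlignDepthFloatToReconstruction(
        depth, maxAlignIterationCount, pAlignmentEnergy, worldToCamera);
    if (FAILED(hr)) return hr;  // tracking failed: skip integration
    return r.IntegrateFrame(depth, maxIntegrationWeight, worldToCamera);
}
```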


If a tracking error occurs during the AlignDepthFloatToReconstruction call, no depth data integration will be performed and the camera pose will remain unchanged.
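Because a tracking failure leaves the pose and volume unchanged, callers often watch the ProcessFrame result and recover after repeated failures. The following is one common pattern, not prescribed by the SDK: count consecutive failures and reset the reconstruction volume after a threshold. `Reconstruction`, `TrackingMonitor`, `E_TRACKING`, and the simplified zero-argument `ResetReconstruction` are all illustrative stand-ins (the SDK's ResetReconstruction takes transform parameters).

```cpp
#include <cassert>

// Stand-ins so the sketch is self-contained; real definitions come from the SDK.
typedef int HRESULT;
const HRESULT S_OK = 0;
const HRESULT E_TRACKING = -1;  // illustrative tracking-failure code
inline bool FAILED(HRESULT hr) { return hr < 0; }

struct Reconstruction {
    int resets = 0;
    // Simplified: the SDK method takes initial-pose/volume transforms.
    HRESULT ResetReconstruction() { ++resets; return S_OK; }
};

// Hypothetical helper: after too many consecutive tracking failures, reset
// the reconstruction so tracking can reacquire from scratch.
struct TrackingMonitor {
    int consecutiveFailures = 0;
    int maxFailuresBeforeReset = 100;  // tuning value, illustrative only

    void OnProcessFrameResult(HRESULT hr, Reconstruction& r) {
        if (FAILED(hr)) {
            if (++consecutiveFailures >= maxFailuresBeforeReset) {
                r.ResetReconstruction();
                consecutiveFailures = 0;
            }
        } else {
            consecutiveFailures = 0;  // any success clears the streak
        }
    }
};
```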

If you need a visible output image of the reconstruction, call the CalculatePointCloud method and then call the NuiFusionShadePointCloud function.
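The rendering chain above can be sketched as a two-step sequence: ray-cast the volume into a point cloud at the current camera pose, then shade that point cloud into a displayable image. The stand-in types and the simplified signatures of `CalculatePointCloud` and `NuiFusionShadePointCloud` below are assumptions for illustration; the real functions take additional arguments (for example, a world-to-BGR transform and a surface-normals frame on NuiFusionShadePointCloud).

```cpp
#include <cassert>
#include <string>
#include <vector>

// Stand-ins so the sketch is self-contained; real definitions come from the SDK.
typedef int HRESULT;
const HRESULT S_OK = 0;
inline bool FAILED(HRESULT hr) { return hr < 0; }

struct NUI_FUSION_IMAGE_FRAME {};  // stand-in
struct Matrix4 {};                 // stand-in

std::vector<std::string> callLog;  // records the call sequence for the test

struct Reconstruction {
    // Simplified: ray-casts the volume into a point cloud at the given pose.
    HRESULT CalculatePointCloud(NUI_FUSION_IMAGE_FRAME*, const Matrix4*) {
        callLog.push_back("CalculatePointCloud");
        return S_OK;
    }
};

// Simplified: shades a point cloud into a visible surface image.
HRESULT NuiFusionShadePointCloud(const NUI_FUSION_IMAGE_FRAME*, const Matrix4*,
                                 NUI_FUSION_IMAGE_FRAME* /*pShadedSurfaceFrame*/) {
    callLog.push_back("NuiFusionShadePointCloud");
    return S_OK;
}

// Render-path sketch: point cloud first, then shading.
HRESULT RenderReconstruction(Reconstruction& r, const Matrix4& worldToCamera,
                             NUI_FUSION_IMAGE_FRAME& pointCloud,
                             NUI_FUSION_IMAGE_FRAME& shadedSurface) {
    HRESULT hr = r.CalculatePointCloud(&pointCloud, &worldToCamera);
    if (FAILED(hr)) return hr;
    return NuiFusionShadePointCloud(&pointCloud, &worldToCamera, &shadedSurface);
}
```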


Requirements

Header: nuikinectfusionvolume.h

Library: TBD