I used PhotoCapture to obtain a PhotoCaptureFrame object, from which I was able to extract the extrinsic matrix. Everything works when I hold the HoloLens completely still, but as soon as I rotate or translate it I no longer get good results. Has anyone successfully projected points from 3D to 2D with a HoloLens 2?

I'm using Unity 2022.1.17f1, and the code I use to get the extrinsic and intrinsic matrices is as follows:

```csharp
void OnCapturedPhotoToMemory(PhotoCapture.PhotoCaptureResult result, PhotoCaptureFrame photoCaptureFrame)
{
    if (result.success)
    {
        Debug.Log("Saved Photo to disk!");
        if (photoCaptureFrame.TryGetProjectionMatrix(out Matrix4x4 projectionMatrix))
        {
            StreamWriter sw = new StreamWriter(
                Application.persistentDataPath +
                string.Format("/ProjectionMatrix{0}.txt", count)
            );
            sw.WriteLine(projectionMatrix.ToString());
            sw.Close();
        }
        else
        {
            Debug.Log("Failed to get projection matrix");
        }
        if (photoCaptureFrame.TryGetCameraToWorldMatrix(out Matrix4x4 cameraMatrix))
        {
            // Save the world-to-camera (extrinsic) matrix, i.e. the inverse of camera-to-world
            StreamWriter sw = new StreamWriter(
                Application.persistentDataPath +
                string.Format("/WorldMatrix{0}.txt", count)
            );
            sw.WriteLine(cameraMatrix.inverse.ToString());
            sw.Close();
        }
        else
        {
            Debug.Log("Failed to get camera-to-world matrix");
        }
        StreamWriter s = new StreamWriter(
            Application.persistentDataPath +
            string.Format("/worldToCameraMatrix{0}.txt", count++)
        );
        s.WriteLine(cam.worldToCameraMatrix.inverse.ToString());
        s.Close();
        photoCaptureObject.StopPhotoModeAsync(OnStoppedPhotoMode);
    }
    else
    {
        Debug.Log("Failed to save photo to disk");
    }
}
```
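On the Python side I then parse these text files with a helper called `readMatrix()` (not shown above). Unity's `Matrix4x4.ToString()` prints four whitespace-separated values per row, so a minimal reader — assuming that default output format — could look like this:

```python
import numpy as np

def readMatrix(path):
    """Parse a Matrix4x4 saved via Unity's Matrix4x4.ToString(),
    which prints four whitespace-separated values per row.
    The exact format is an assumption; check one of the saved files."""
    with open(path) as f:
        rows = [line.split() for line in f if line.strip()]
    return np.array([[float(v) for v in row] for row in rows[:4]])
```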

An example of a captured image is shown below:

The red dots were created in Unity, and their world coordinates were saved to CSV files.
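For context, each CSV holds one point per row; a minimal reader for these files — the x, y, z column layout is my assumption — looks roughly like this:

```python
import csv
import numpy as np

def read_vertices(csv_path):
    """Read one 3D point per row (x, y, z columns assumed)
    into an Nx3 float32 array."""
    points = []
    with open(csv_path, newline="") as f:
        for row in csv.reader(f):
            if row:  # skip blank lines
                points.append([float(v) for v in row[:3]])
    return np.array(points, dtype=np.float32)
```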

The main goal is to use the intrinsic matrix and the extrinsic matrix to project the points from 3D to 2D. To do this, I use the following Python code:

```python
import numpy as np
import cv2
import matplotlib.pyplot as plt

# readMatrix() is a helper that reads the Matrix4x4 saved by PhotoCaptureFrame in Unity
extrinsic_matrix = readMatrix(f"{WorldMatrices[image_index]}")
# Extract the 3x3 rotation matrix (top-left block)
rotation_matrix = np.array([row[0:-1] for row in extrinsic_matrix[0:-1]]).copy()
rotation_matrix_ = rotation_matrix.copy()
###########################################################
print("Unity Matrix4x4: ")
print(rotation_matrix_)
# Change the coordinate system axes from OpenGL to OpenCV
# by negating the second and third rows (Y and Z axes)
rotation_matrix_[1][0] *= -1
rotation_matrix_[1][1] *= -1
rotation_matrix_[1][2] *= -1
rotation_matrix_[2][0] *= -1
rotation_matrix_[2][1] *= -1
rotation_matrix_[2][2] *= -1
###########################################################
print("Rotation Matrix: ")
print(rotation_matrix_)
# I extract the translation vector
translation_vector = np.array([row[-1] for row in extrinsic_matrix[0:-1]]).copy()
translation_vector[1] *= -1
translation_vector[2] *= -1
###########################################################
print("Translation Vector:")
print(translation_vector)
# I read the 3D coordinates of the 3D red points and I use the cv2.projectPoints
for key in vertices.keys():
    points, _ = cv2.projectPoints(
        np.float32(vertices[key]),
        rotation_matrix_,
        translation_vector,
        camera_matrix_,
        None)
    for point in points:
        x, y = (point[0][0], point[0][1])
        x = int(x * width / 2 + width / 2)
        y = int(y * height / 2 + height / 2)
        cv2.circle(image, (x, y), radius=20, color=(255, 0, 0), thickness=-1)
###########################################################
plt.imshow(image[...,::-1])
plt.show()
```
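`camera_matrix_` is not shown above; I build it from the saved projection matrix. Assuming the HoloLens projection matrix follows the standard OpenGL clip-space convention (which I have not verified against the driver), one way to derive OpenCV-style pixel-unit intrinsics is:

```python
import numpy as np

def intrinsics_from_projection(P, width, height):
    """Convert an OpenGL-convention projection matrix P (4x4) into an
    OpenCV-style intrinsic matrix for a width x height image.
    The signs of P[0][2] / P[1][2] depend on the convention used by
    the device, so they should be checked against real data."""
    fx = P[0][0] * width / 2.0
    fy = P[1][1] * height / 2.0
    cx = (1.0 - P[0][2]) * width / 2.0
    cy = (1.0 + P[1][2]) * height / 2.0
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])
```

One thing I am unsure about: with a pixel-unit intrinsic matrix like this, `cv2.projectPoints` already returns pixel coordinates, so the extra NDC-to-pixel remapping in my loop would be wrong; that remapping only makes sense if `camera_matrix_` is kept in NDC units. This may be related to my problem.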

The extrinsic-matrix entries and translation components multiplied by -1 are needed to go from the OpenGL-style coordinate system to the one OpenCV expects.
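These sign flips amount to left-multiplying the rotation and translation by `diag(1, -1, -1)`; as a small sketch (example pose values are made up):

```python
import numpy as np

# Flipping the Y and Z axes (OpenGL-style to OpenCV-style) is a
# left-multiplication by diag(1, -1, -1): it negates the second and
# third rows of R and the second and third components of t.
flip_yz = np.diag([1.0, -1.0, -1.0])

# Example world-to-camera rotation and translation (identity pose).
R = np.eye(3)
t = np.array([0.1, 0.2, 0.3])

R_cv = flip_yz @ R  # rows 1 and 2 negated
t_cv = flip_yz @ t  # y and z components negated
```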

An example of the output is shown below:

I expected the blue dots to coincide with the red dots, but that doesn't happen. This is what I meant by "can't get good results".