
Retrieving camera orientation

January 7, 2013 - 7:42pm #1

Forgive me if this has been answered and I wasn't able to find it.

We are trying to incorporate an existing C++ game into a Vuforia app.  The game already handles all of its own graphics primitives, so what we really need to do is pass the camera info into the existing renderer and let it do its thing.  I realize that this duplicates work that Vuforia does, but the quickest and most portable path is to handle it this way.

The existing game world wants to get the camera info in the following form:

  • pos - x, y, z position of camera with center of trackable as origin
  • front - vector of camera lens orientation
  • up - vector which passes bottom to top through the camera orthogonal to front

We are getting the camera position correctly from the trackable pose data, but are having difficulty mapping from the rotation matrix (upper-left 3x3) of the pose data to the desired front and up vectors.

The world coordinates are x rightward, y outward, z upward.  My understanding is that Vuforia pose data is x rightward, y downward, z inward.

Do we need to do some matrix transformations, or can we just map the vectors appropriately, remapping axes and flipping signs as needed?

Fairly urgent request.  Help greatly appreciated.

January 21, 2013 - 1:27am #3

Just for info:

the same technique has been published in the FAQ section:



January 8, 2013 - 1:52am #2

Hi AquaGeneral,

you can follow this approach:

1- Copy SampleMath.h and SampleMath.cpp from the "Dominoes" sample

2- Use the function "static QCAR::Matrix44F Matrix44FInverse(QCAR::Matrix44F& m);" to invert the modelview matrix that you get from QCAR (you also need to transpose the result), i.e.:

    QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(result->getPose());

    QCAR::Matrix44F inverseMV = SampleMath::Matrix44FTranspose( SampleMath::Matrix44FInverse(modelViewMatrix) );

Note: the inverseMV matrix represents the camera position and orientation with respect to your world reference frame (the world being defined by your trackable reference).

3- Get the position of the camera by taking the translation part of the matrix, i.e.:

    float x = inverseMV.data[12];

    float y = inverseMV.data[13];

    float z = inverseMV.data[14];

4- Get the camera direction and up-vector by extracting the rotation parts of the matrix, i.e.:

    // camera direction is the Z-axis of the camera reference frame, expressed in world coordinates:

    float dir_x = inverseMV.data[8];
    float dir_y = inverseMV.data[9];
    float dir_z = inverseMV.data[10];

    // camera UP vector is the negative Y-axis of the camera reference frame, expressed in world coordinates:

    float up_x = -inverseMV.data[4];
    float up_y = -inverseMV.data[5];
    float up_z = -inverseMV.data[6];


Finally, note that the target trackable reference has X and Y aligned with the target surface, and Z orthogonal to the target, so your world reference should be defined accordingly.
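For reference, steps 2-4 can also be done without a general 4x4 inverse: since the pose is a rigid transform (a pure rotation R plus a translation t, with p_camera = R * p_world + t), its inverse is simply (R^T, -R^T * t). Here is a minimal self-contained sketch of that extraction using plain arrays instead of the QCAR types; the row-major layout of R and the "+Z is forward / -Y is up" camera-axis conventions are assumptions based on the description above, so double-check them against your own data:

```cpp
#include <cassert>
#include <cmath>

// Sketch only: plain-array equivalent of steps 2-4 above.
// Assumes the pose's upper-left 3x3 is a pure rotation R (row-major)
// and t is the translation column, so p_camera = R * p_world + t,
// where "world" is the trackable reference frame.
struct CameraPose {
    float pos[3];    // camera position in world coordinates: -R^T * t
    float front[3];  // viewing direction: camera +Z axis expressed in world
    float up[3];     // up vector: camera -Y axis expressed in world
};

CameraPose cameraFromPose(const float R[3][3], const float t[3]) {
    CameraPose c;
    for (int i = 0; i < 3; ++i) {
        // Rows of R are the camera axes expressed in world coordinates
        // (equivalently, columns of R^T).
        c.front[i] = R[2][i];   // camera looks along its +Z axis
        c.up[i]    = -R[1][i];  // camera +Y points down, so up is -Y
        // Rigid inverse: pos = -R^T * t (no general matrix inversion needed).
        c.pos[i] = -(R[0][i] * t[0] + R[1][i] * t[1] + R[2][i] * t[2]);
    }
    return c;
}
```

For example, with the identity rotation and t = (0, 0, -5), this yields a camera at (0, 0, 5) with front (0, 0, 1) and up (0, -1, 0).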


I hope this helps.


