"We offer new support options and therefor the forums are now in read-only mode! Please check out our Support Center for more information." - Vuforia Engine Team

Technical - How do I get the camera position

 

This article explains how to extract the 3D position and the orientation axes of the AR camera with respect to the reference frame defined by a given trackable.

The trackable pose

For each trackable detected and tracked by Vuforia, the SDK provides you with the pose of that trackable; the pose represents the combination of the position and orientation of the trackable's local reference frame with respect to the 3D reference frame of the camera. In Vuforia, the trackable reference frame is defined with the X and Y axes aligned with the plane tangent to the given image target or frame marker, and with the Z axis orthogonal to that plane. The camera reference frame, on the other hand, is defined with the Z axis pointing in the camera viewing direction, and the X and Y axes aligned with the view plane (X pointing to the right and Y pointing downward).

The pose is mathematically represented by a 3-by-4 matrix (QCAR::Matrix34F) and can be obtained in native code via the following API:

                                            const QCAR::Matrix34F & QCAR::TrackableResult::getPose() const; 

As shown in the Image Targets sample code (see ImageTargets.cpp – renderFrame function), the pose matrix can also be converted to a 4-by-4 format (QCAR::Matrix44F), which is immediately suitable for rendering in OpenGL as a model-view matrix.

However, in some cases (for instance when using high-level 3D engines for rendering) it can be useful or necessary to specify the camera position and its orientation represented in the reference frame of the trackable.

Extracting the camera position and orientation

These are the steps needed to extract the camera position and its orientation axes, with coordinates expressed in the reference frame of a given trackable; for simplicity, the process here refers to the Image Targets sample:

·         Copy SampleMath.h and SampleMath.cpp from the “Dominoes” sample (in the /JNI folder) to the Image Targets project

·         Edit the Android.mk of the project and add the SampleMath.cpp file to the list of source files defined by LOCAL_SRC_FILES:

        LOCAL_SRC_FILES:= ImageTargets.cpp SampleUtils.cpp Texture.cpp SampleMath.cpp

·         In ImageTargets.cpp, add an #include directive for “SampleMath.h”

·         Compute the inverse of the model-view matrix, and transpose it:

        const QCAR::TrackableResult* result = state.getTrackableResult(tIdx);

        const QCAR::Trackable& trackable = result->getTrackable();

        QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(result->getPose());       

        QCAR::Matrix44F inverseMV = SampleMath::Matrix44FInverse(modelViewMatrix);

        QCAR::Matrix44F invTranspMV = SampleMath::Matrix44FTranspose(inverseMV);

 

·         Extract the camera position from the last column of the matrix computed above:

        float cam_x = invTranspMV.data[12];

        float cam_y = invTranspMV.data[13];

        float cam_z = invTranspMV.data[14];

 

·         Extract the camera orientation axes (the camera right direction, camera up direction and camera viewing direction):

        float cam_right_x = invTranspMV.data[0];

        float cam_right_y = invTranspMV.data[1];

        float cam_right_z = invTranspMV.data[2];

 

        float cam_up_x = -invTranspMV.data[4];

        float cam_up_y = -invTranspMV.data[5];

        float cam_up_z = -invTranspMV.data[6];

 

        float cam_dir_x = invTranspMV.data[8];

        float cam_dir_y = invTranspMV.data[9];

        float cam_dir_z = invTranspMV.data[10];