In my project I want to use the front-facing camera for tracking, while the augmentation is rendered with the back-facing camera. The augmentation plane and the tracking plane are parallel.
Here is a video of where I got till now: https://www.youtube.com/watch?v=ZPT7Ejce2nM
Note: Only the front camera is used for tracking.
The result is far from perfect. Below is a detailed description of the transformations I use.
I work in the OpenGL coordinate system and my system has 4 transformations: RT_fc_bc (front camera to back camera), RT_fc_tm (front camera to top marker), RT_tm_bm (top marker to bottom marker) and RT_bc_bm (back camera to bottom marker).
RT_fc_bc = RT_fc_tm * RT_tm_bm * inv(RT_bc_bm) => I get the (RT_fc_tm, RT_bc_bm) pair during calibration. I place the device on a stand inside a box (see video) and capture the poses using front- and back-camera tracking. The phone is kept still during calibration.
RT_tm_bm = see the matrix at the bottom of the post.
RT_bc_bm = inverse(convertPose2GLMatrix(QCAR::TrackableResult::getPose())) // only used when calibrating RT_fc_bc
Now I get my modelViewMatrix = inv(RT_bc_bm) = inv(RT_fc_bc) * inv(RT_fc_tm) * inv(RT_tm_bm). Using the projection matrix of the back-facing camera I can render the scene.
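In code, the full chain would look roughly like this (a minimal sketch with hand-rolled column-major 4x4 helpers; the `RT_*` names mirror the post, `mul4`, `invRigid`, `calibrate` and `modelView` are hypothetical helper names, and `invRigid` assumes each pose is a pure rotation + translation):

```cpp
#include <cassert> // used by the self-checks below

// All matrices are column-major 4x4 floats, as OpenGL (and
// convertPose2GLMatrix) use them.

// out = a * b
void mul4(const float a[16], const float b[16], float out[16]) {
    for (int c = 0; c < 4; ++c)
        for (int r = 0; r < 4; ++r) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a[k * 4 + r] * b[c * 4 + k];
            out[c * 4 + r] = s;
        }
}

// Invert a rigid transform [R|t]: the inverse is [R^T | -R^T t].
void invRigid(const float m[16], float out[16]) {
    for (int c = 0; c < 3; ++c)
        for (int r = 0; r < 3; ++r)
            out[c * 4 + r] = m[r * 4 + c];            // R^T
    for (int r = 0; r < 3; ++r)                       // -R^T t
        out[12 + r] = -(out[r] * m[12] + out[4 + r] * m[13] + out[8 + r] * m[14]);
    out[3] = out[7] = out[11] = 0.0f;
    out[15] = 1.0f;
}

// Calibration step (phone held still on the stand):
// RT_fc_bc = RT_fc_tm * RT_tm_bm * inv(RT_bc_bm)
void calibrate(const float RT_fc_tm[16], const float RT_tm_bm[16],
               const float RT_bc_bm[16], float RT_fc_bc[16]) {
    float invBcBm[16], tmp[16];
    invRigid(RT_bc_bm, invBcBm);
    mul4(RT_fc_tm, RT_tm_bm, tmp);
    mul4(tmp, invBcBm, RT_fc_bc);
}

// Runtime step (only the front camera tracks the top marker):
// modelViewMatrix = inv(RT_fc_bc) * inv(RT_fc_tm) * inv(RT_tm_bm)
void modelView(const float RT_fc_bc[16], const float RT_fc_tm[16],
               const float RT_tm_bm[16], float mv[16]) {
    float a[16], b[16], c[16], tmp[16];
    invRigid(RT_fc_bc, a);
    invRigid(RT_fc_tm, b);
    invRigid(RT_tm_bm, c);
    mul4(a, b, tmp);
    mul4(tmp, c, mv);
}
```

One thing that may be worth double-checking: solving the calibration equation for inv(RT_bc_bm) algebraically gives inv(RT_tm_bm) * inv(RT_fc_tm) * RT_fc_bc, which is a different multiplication order than the runtime chain above, so the order should be verified against the convention convertPose2GLMatrix uses.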
Are the transformations correct? If so, what can I do to improve the accuracy of the system? Is there any reason why such a setup would not work with higher precision?
Let me note that the distance between the top and bottom markers is 30 cm; however, in my demo video RT_tm_bm translates in z by only 11 cm (determined experimentally).
When I look at the translation vector of the getPose() matrix (i.e. matrix.data[12], matrix.data[13], matrix.data[14]), I notice a z-translation error (around 20%), while the x and y translations match perfectly with what I measure with my ruler, so the target size in the .xml file is set correctly. Why is this the case? Is this causing the error in my system?
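For reference, in the column-major layout that OpenGL matrices use, the translation sits in elements 12-14; a tiny helper for pulling it out (the function and struct names are my own, not from the SDK):

```cpp
#include <cassert> // used by the self-check below

struct Vec3 { float x, y, z; };

// Translation of a column-major OpenGL 4x4 matrix (elements 12, 13, 14),
// e.g. the matrix produced from QCAR::TrackableResult::getPose().
Vec3 translationOf(const float m[16]) {
    Vec3 t = { m[12], m[13], m[14] };
    return t;
}
```

Logging this vector at several known camera-to-marker distances and comparing against ruler measurements can show whether the z error grows with distance (suggesting a scale or target-size issue) or stays roughly constant (suggesting an offset somewhere in the calibration chain).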
Thank you very much for your help! Greatly appreciated!
RT_tm_bm (column-major):

-1 0 0 0
0 1 0 0
0 0 -1 0
0 0 110 1