I'm trying to use Core Motion to supplement the tracking once the image target has been lost, so that the user can still rotate the device around and view the virtual objects in the 3D environment, albeit with reduced spatial accuracy.
I think I have the maths working on the Core Motion side to extract the information that lets me replicate the position and orientation of the device from the perspective of the tracked object.
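For reference, this is roughly how I'm reading the device attitude (a minimal sketch; the reference frame and update rate are just the values I picked for testing):

```swift
import CoreMotion

let motionManager = CMMotionManager()

func startMotionUpdates() {
    guard motionManager.isDeviceMotionAvailable else { return }
    motionManager.deviceMotionUpdateInterval = 1.0 / 60.0
    // xArbitraryZVertical gives a stable, gravity-aligned reference frame
    // without needing the magnetometer.
    motionManager.startDeviceMotionUpdates(using: .xArbitraryZVertical,
                                           to: .main) { motion, _ in
        guard let attitude = motion?.attitude else { return }
        // Rotation of the device relative to the reference frame; this is
        // what I plan to use to update the view once tracking is lost.
        let r = attitude.rotationMatrix
        _ = r // feed into the render loop here
    }
}
```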
I'm basing this test code on the image tracker example, and I have a teapot rendered nicely on my image.
My next step is to render a second teapot in the same position, but by recreating the modelViewMatrix from the position and view direction of the device. For now I'm deriving that position and direction from the tracking modelViewMatrix itself, so in theory, once the maths is correct, the second teapot should match the original exactly.
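To be concrete, this is the round trip I'm attempting, sketched with simd. The conventions (column-major GL modelview, camera looking down -Z) are my assumptions, which may be exactly where it goes wrong:

```swift
import simd

// Recover the device (camera) pose in target space by inverting the tracking
// modelview. Assumes a column-major GL matrix, camera looking down -Z.
func cameraPose(from modelView: simd_float4x4)
    -> (eye: SIMD3<Float>, forward: SIMD3<Float>, up: SIMD3<Float>) {
    let inv = modelView.inverse
    let eye = SIMD3(inv.columns.3.x, inv.columns.3.y, inv.columns.3.z)
    let forward = -SIMD3(inv.columns.2.x, inv.columns.2.y, inv.columns.2.z)
    let up = SIMD3(inv.columns.1.x, inv.columns.1.y, inv.columns.1.z)
    return (eye, forward, up)
}

// Rebuild a gluLookAt-style modelview from that pose; if the extraction is
// right, this reproduces the original matrix up to floating-point error.
func lookAt(eye: SIMD3<Float>, center: SIMD3<Float>, up: SIMD3<Float>) -> simd_float4x4 {
    let f = simd_normalize(center - eye)        // view direction
    let s = simd_normalize(simd_cross(f, up))   // right
    let u = simd_cross(s, f)                    // orthonormal up
    return simd_float4x4(columns: (
        SIMD4<Float>(s.x, u.x, -f.x, 0),
        SIMD4<Float>(s.y, u.y, -f.y, 0),
        SIMD4<Float>(s.z, u.z, -f.z, 0),
        SIMD4<Float>(-simd_dot(s, eye), -simd_dot(u, eye), simd_dot(f, eye), 1)
    ))
}

// let pose = cameraPose(from: trackedModelView)
// let rebuilt = lookAt(eye: pose.eye, center: pose.eye + pose.forward, up: pose.up)
```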
Once that round trip works, I can substitute the position and direction derived from Core Motion.
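The eventual fallback would then look something like this sketch; the composition order and the device-to-camera axis mapping are assumptions on my part, and probably where the device orientation issue I mention below comes in:

```swift
import CoreMotion
import simd

// Convert a CMRotationMatrix into simd. CMRotationMatrix fields are
// m<row><column>, so each column here is (m1j, m2j, m3j).
func simdMatrix(_ m: CMRotationMatrix) -> simd_float4x4 {
    simd_float4x4(columns: (
        SIMD4<Float>(Float(m.m11), Float(m.m21), Float(m.m31), 0),
        SIMD4<Float>(Float(m.m12), Float(m.m22), Float(m.m32), 0),
        SIMD4<Float>(Float(m.m13), Float(m.m23), Float(m.m33), 0),
        SIMD4<Float>(0, 0, 0, 1)
    ))
}

// Snapshot taken on the last frame where the image target was still tracked.
var lastTrackedModelView = matrix_identity_float4x4
var attitudeAtLoss: CMAttitude?

// While the target is lost, rotate the last known modelview by the device's
// rotation since the loss. Whether it's the delta or its inverse, and how the
// device axes map onto the camera axes, is exactly what I still need to verify.
func extrapolatedModelView(current: CMAttitude) -> simd_float4x4 {
    guard let reference = attitudeAtLoss else { return lastTrackedModelView }
    let delta = current.copy() as! CMAttitude
    delta.multiply(byInverseOf: reference)       // rotation since tracking was lost
    let deltaR = simdMatrix(delta.rotationMatrix)
    return deltaR.inverse * lastTrackedModelView // pre-multiply in camera space
}
```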
What I'm currently struggling with is recreating the modelViewMatrix that is derived from the trackable's pose matrix; effectively, I'm trying to reverse-engineer the matrix transforms made in the getPose() call.
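My working assumption is that getPose() returns a row-major 3x4 [R|t] mapping target coordinates into the camera frame, and that the sample transposes it into a column-major GL modelview along these lines (the function name here is just mine):

```swift
import simd

// Working assumption: p holds the 12 entries of a row-major 3x4 [R|t],
// row by row, and the sample lays it out as a column-major 4x4 GL modelview.
func poseToModelView(_ p: [Float]) -> simd_float4x4 {
    precondition(p.count == 12)
    return simd_float4x4(columns: (
        SIMD4<Float>(p[0], p[4], p[8],  0),   // rotation, column 0
        SIMD4<Float>(p[1], p[5], p[9],  0),   // rotation, column 1
        SIMD4<Float>(p[2], p[6], p[10], 0),   // rotation, column 2
        SIMD4<Float>(p[3], p[7], p[11], 1)    // translation
    ))
}
```

If that's right, reversing it should just be a rigid-body inverse, which is what the extraction sketch above attempts.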
I suspect I'm missing something to do with the device orientation.
Is any of this code available, or failing that, a technical description of the steps involved?