
Using Core Motion after tracking is lost

December 7, 2011 - 4:16am #1

Hi,

I'm trying to use Core Motion to supplement the tracking once the image target has been lost, so that the user can still rotate the device around and view the virtual objects in the 3D environment, albeit with reduced spatial accuracy.

I think I have the maths working to get the appropriate information from the Core Motion side, which lets me replicate the position and orientation of the device from the perspective of the tracked object.

I'm basing this test code on the image tracker example, and I have a teapot rendered nicely on my image.
My next step is to render another teapot in the same position, but by recreating the modelViewMatrix from the position and view direction of the device. Since I'm deriving that position and direction from the tracking modelViewMatrix itself, the result should, once the maths is correct, match the original teapot exactly.
Once that works, I can switch to a position and direction derived from Core Motion instead.
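
For reference, the construction I'm aiming at is roughly this - a standard lookAt build of a column-major (OpenGL-style) modelViewMatrix from an eye position, a view direction and an up hint. It's plain C with my own illustrative names, not code from the sample; note that a position and direction alone don't pin down the roll, so an up vector has to come from somewhere too.

#include <math.h>

/* Minimal 3-vector helpers (illustrative, not from the SDK). */
typedef struct { float x, y, z; } Vec3;

static Vec3  vCross(Vec3 a, Vec3 b) { return (Vec3){ a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static float vDot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  vNorm(Vec3 a)          { float l = sqrtf(vDot(a, a)); return (Vec3){ a.x/l, a.y/l, a.z/l }; }

/* Build a column-major modelview matrix (OpenGL layout) from an eye
 * position, a forward direction and an up hint, gluLookAt-style. */
void buildModelView(Vec3 eye, Vec3 forward, Vec3 upHint, float m[16])
{
    Vec3 f = vNorm(forward);
    Vec3 s = vNorm(vCross(f, upHint));   /* camera right */
    Vec3 u = vCross(s, f);               /* camera up    */

    /* Rotation rows are right / up / -forward; translation is -R*eye. */
    m[0] =  s.x; m[4] =  s.y; m[8]  =  s.z; m[12] = -vDot(s, eye);
    m[1] =  u.x; m[5] =  u.y; m[9]  =  u.z; m[13] = -vDot(u, eye);
    m[2] = -f.x; m[6] = -f.y; m[10] = -f.z; m[14] =  vDot(f, eye);
    m[3] = 0.0f; m[7] = 0.0f; m[11] = 0.0f; m[15] = 1.0f;
}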

What I'm struggling with currently is recreating the modelViewMatrix that is derived from the trackable's pose matrix. I guess I'm effectively trying to reverse engineer the matrix transforms made in the getPose() function call.
I suspect I'm missing something to do with the device orientation.
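
The reverse step I'm attempting - pulling the device position and viewing direction back out of the tracked modelViewMatrix - looks roughly like this. It assumes the upper-left 3x3 is a pure rotation (no scale), so its inverse is its transpose; if the result is off, I suspect an extra screen-orientation rotation applied to the matrix before rendering would be the thing to check.

/* Recover the camera (device) pose from a column-major modelview matrix.
 * eye, forward and up are written out as xyz triples in target space. */
void cameraFromModelView(const float m[16], float eye[3], float forward[3], float up[3])
{
    /* Translation column. */
    float tx = m[12], ty = m[13], tz = m[14];

    /* eye = -R^T * t : the point the matrix maps to the eye-space origin. */
    eye[0] = -(m[0]*tx + m[1]*ty + m[2]*tz);
    eye[1] = -(m[4]*tx + m[5]*ty + m[6]*tz);
    eye[2] = -(m[8]*tx + m[9]*ty + m[10]*tz);

    /* The camera looks down -Z in eye space, so the forward direction in
     * target space is minus the third row of R; up is the second row. */
    forward[0] = -m[2];  forward[1] = -m[6];  forward[2] = -m[10];
    up[0]      =  m[1];  up[1]      =  m[5];  up[2]      =  m[9];
}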

Is any of this code available, or at least a technical description of the steps involved?

many thanks!

Re: Using Core Motion after tracking is lost

December 8, 2011 - 7:13am #3

Note that it may be easier to pass the modified pose matrix back into the view to continue moving the OpenGL image, rather than trying to overlay something else on top, especially as it's in a different 3D domain.

For that you'll just need the pose matrix -> vectors conversion and back again, without worrying about CATransform3D - but you may have your own reasons for using that route.
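
A rough sketch of that pose matrix -> vectors -> pose matrix round trip, in plain C on a column-major matrix (the names are only illustrative, and the exact composition order and axis conventions depend on how you map Core Motion's reference frame onto the target's):

/* Apply an incremental 3x3 rotation (e.g. the change in attitude reported
 * by Core Motion since tracking was lost) to a column-major modelview
 * matrix: split out rotation and translation, compose the delta onto the
 * rotation, and reassemble. deltaR is indexed deltaR[row][col]. */
void applyDeltaRotation(const float m[16], const float deltaR[3][3], float out[16])
{
    /* Split the column-major matrix into R (R[row][col]) and t. */
    float R[3][3];
    for (int c = 0; c < 3; ++c)
        for (int r = 0; r < 3; ++r)
            R[r][c] = m[c*4 + r];
    float t[3] = { m[12], m[13], m[14] };

    /* Compose: R' = deltaR * R rotates in eye space; swapping the order
     * rotates in target space instead - use whichever matches your setup. */
    float Rp[3][3];
    for (int r = 0; r < 3; ++r)
        for (int c = 0; c < 3; ++c)
            Rp[r][c] = deltaR[r][0]*R[0][c] + deltaR[r][1]*R[1][c] + deltaR[r][2]*R[2][c];

    /* Reassemble column-major, keeping the original translation. */
    for (int c = 0; c < 3; ++c)
        for (int r = 0; r < 3; ++r)
            out[c*4 + r] = Rp[r][c];
    out[3] = out[7] = out[11] = 0.0f;
    out[12] = t[0]; out[13] = t[1]; out[14] = t[2]; out[15] = 1.0f;
}

The delta itself could come from CMAttitude - snapshot the attitude at the moment tracking is lost, then use multiplyByInverseOfAttitude: and the rotationMatrix property to get the relative rotation since that point.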

Re: Using Core Motion after tracking is lost

December 8, 2011 - 7:09am #2

Hi nebulusdesign

The usual CGAffineTransform on a layer will only rotate, translate and skew the layer; it won't distort it in the way you need to reproduce the 3D projection of the object.

So you'll need to use CATransform3D (which I guess you're doing already).

You'll also need to calibrate your measured position to the pose whilst the object is still detected, so that you know what the offsets in each dimension should be. Then, as you say, you need to convert the pose matrix into vectors for each dimension, and back again in a form suitable for the CATransform3D. You'll need to take account of the viewpoint, aperture and scene depth somewhere in there.
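
As a very rough sketch of the matrix side of that: CATransform3D is a plain 4x4 struct applied to row vectors, and its flat layout puts the translation in the same slots (m41..m43) as a column-major OpenGL matrix (elements 12..14), so a field-by-field copy of the modelview is a reasonable starting point. The viewpoint/aperture/scene-depth part is usually approximated by putting a perspective term in m34 of a containing layer's sublayerTransform; the distance value below is an assumption you'd have to calibrate against the tracker's projection, not something taken from the SDK.

#include <QuartzCore/CATransform3D.h>

/* Copy a column-major OpenGL modelview matrix into a CATransform3D.
 * CATransform3D acts on row vectors, so the same flat ordering should
 * describe the same transform, with the translation landing in m41..m43. */
CATransform3D transformFromModelView(const float m[16])
{
    CATransform3D t;
    t.m11 = m[0];  t.m12 = m[1];  t.m13 = m[2];  t.m14 = m[3];
    t.m21 = m[4];  t.m22 = m[5];  t.m23 = m[6];  t.m24 = m[7];
    t.m31 = m[8];  t.m32 = m[9];  t.m33 = m[10]; t.m34 = m[11];
    t.m41 = m[12]; t.m42 = m[13]; t.m43 = m[14]; t.m44 = m[15];
    return t;
}

/* Simple perspective for the containing layer: d is an assumed camera
 * distance in points, calibrated by eye or from the projection matrix. */
CATransform3D perspectiveTransform(CGFloat d)
{
    CATransform3D p = CATransform3DIdentity;
    p.m34 = -1.0 / d;
    return p;
}

Typically you'd set perspectiveTransform() on the parent layer's sublayerTransform and the copied modelview on the object layer's transform, then tune d and the per-axis offsets during the calibration step described above.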

I'm not familiar with how to do that myself, but I'll check with those who are - in the meantime it would be worth searching the 3D math forums/posts for such a solution.
