"We offer new support options and therefor the forums are now in read-only mode! Please check out our Support Center for more information." - Vuforia Engine Team

Difference between COORDINATE_SYSTEM_CAMERA and COORDINATE_SYSTEM_WORLD in AR/VR

Could someone explain the camera and world coordinate systems to me in more detail? Unfortunately, there is not much documentation available about them.

My simple application is based on Vuforia’s AR/VR sample: it can either track an image or use the DeviceTracker (IMU) to determine the current pose for rendering the world. When an image is tracked, it is overlaid with a green plane (for this I use the camera coordinate system for the projection matrix together with the unaltered pose of the ImageTargetResult). When the DeviceTracker is used, the plane is rendered somewhere in the world and stabilized by the IMU, so it stays in place (for this I use the world coordinate system and the inverted, transposed pose of the DeviceTrackableResult). I copied both behaviours from Vuforia’s AR/VR sample.

I would like to combine these two modes so that when image-based tracking is lost, my objects are still rendered and tracked by the DeviceTrackable (IMU) in a single coordinate system, e.g. the camera coordinate system. My idea was to keep storing the poses of the DeviceTrackableResult and the ImageTrackableResult for as long as the image is tracked. Once the ImageTarget is lost, I calculate the difference between the last stored DeviceTrackableResult pose and the current DeviceTrackableResult pose and multiply that difference with the last known ImageTrackableResult pose to get the new current pose (see the sketch below).
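To make the idea concrete, here is a minimal sketch of the pose-chaining I am attempting. The Mat4 type and all helper names (onImageTracked, estimatePoseAfterLoss, lastImagePose, lastDevicePose) are my own placeholders, not Vuforia API calls; in the real app the poses come from the ImageTargetResult and DeviceTrackableResult and are converted to 4x4 matrices first. I am also assuming both poses can be expressed in the same coordinate convention before they are combined, which is exactly the part I am unsure about.

```cpp
#include <array>

// Row-major 4x4 rigid transform (rotation + translation), assumed convention.
struct Mat4 { std::array<float, 16> m; };

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col) {
            float s = 0.0f;
            for (int k = 0; k < 4; ++k)
                s += a.m[row * 4 + k] * b.m[k * 4 + col];
            r.m[row * 4 + col] = s;
        }
    return r;
}

// Inverse of a rigid transform: transpose the 3x3 rotation, rotate and negate the translation.
Mat4 invertRigid(const Mat4& p) {
    Mat4 r{};
    for (int row = 0; row < 3; ++row)
        for (int col = 0; col < 3; ++col)
            r.m[row * 4 + col] = p.m[col * 4 + row];           // R^T
    for (int row = 0; row < 3; ++row) {
        float t = 0.0f;
        for (int col = 0; col < 3; ++col)
            t += r.m[row * 4 + col] * p.m[col * 4 + 3];
        r.m[row * 4 + 3] = -t;                                  // -R^T * t
    }
    r.m[15] = 1.0f;
    return r;
}

// Snapshot taken while the ImageTarget is still tracked (hypothetical storage).
Mat4 lastImagePose;    // last known ImageTrackableResult pose
Mat4 lastDevicePose;   // DeviceTrackableResult pose at the same moment
bool haveSnapshot = false;

// While the image is tracked, keep refreshing the snapshot.
void onImageTracked(const Mat4& imagePose, const Mat4& devicePose) {
    lastImagePose = imagePose;
    lastDevicePose = devicePose;
    haveSnapshot = true;
}

// After the image is lost: chain the device motion since the snapshot onto the
// last known image pose. The multiplication order depends on whether the poses
// map world->camera or camera->world, so it may need to be swapped.
Mat4 estimatePoseAfterLoss(const Mat4& currentDevicePose) {
    Mat4 deltaDevice = multiply(currentDevicePose, invertRigid(lastDevicePose));
    return multiply(deltaDevice, lastImagePose);
}
```

This is only how I understand my own approach; if the DeviceTrackableResult pose lives in COORDINATE_SYSTEM_WORLD while the ImageTargetResult pose lives in COORDINATE_SYSTEM_CAMERA, the two matrices above would need an additional conversion before being multiplied, and that conversion is what I am missing.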

However, this isn’t working as expected and I have no idea how to fix it. I suspect the problem lies in the two different coordinate systems. Can someone explain how to combine IMU and image tracking in a single coordinate system?