I recently received a Project Tango prototype and I would like to use its understanding of the device position in my projects.
My idea was to initialize the 3D scene with Vuforia using a marker or something similar, then place the 3D objects at the correct positions, and finally activate the Tango prefabs to track the movement of the user.
However, the two SDKs don't work well together. While the app does not crash, the tracking by the Tango SDK does not work properly if Tango is activated after Vuforia.
Has anyone successfully combined the two SDKs in one project?
Is it even possible?