I am researching technologies to make an app used for tabletop RPGs and am wondering if what I hope to achieve is viable.
The premise is that User 1 will have multiple 2D image targets on a table representing walls, monsters, and player characters, with 3D models rendered on top of these targets. I need to be able to record where each target is relative to the others using the camera, so that the same scene can be replicated and projected onto another surface on another device without these markers. The positions of the image markers relative to each other will be sent to User 2 for rendering.
Is it possible to extract each image target's position relative to the camera from the AR camera? If so, where would I look to get this data?
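To illustrate what I mean, here is a rough sketch of the conversion I have in mind once the per-target camera-space positions are available. This is not Vuforia API code, just the underlying math, and all the names and coordinates are made up: pick one target as an anchor and express every other target relative to it, so the layout no longer depends on where User 1's camera was.

```python
# Hypothetical camera-space positions (x, y, z) in metres for each image
# target, as they might be read from the AR framework's target transforms.
camera_space = {
    "wall_1": (0.10, 0.00, 0.50),
    "goblin": (0.25, 0.00, 0.65),
    "hero":   (0.40, 0.00, 0.55),
}

def relative_to_anchor(positions, anchor):
    """Express every target's position as an offset from one anchor
    target, making the layout independent of the camera's viewpoint."""
    ax, ay, az = positions[anchor]
    return {
        name: (x - ax, y - ay, z - az)
        for name, (x, y, z) in positions.items()
    }

# The resulting layout can be serialised and sent to User 2:
# the anchor sits at the origin and everything else is an offset from it.
layout = relative_to_anchor(camera_space, "wall_1")
```

Note that for a full replication I'd presumably also need each target's rotation relative to the anchor (e.g. as a quaternion), not just its position, but the idea is the same.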
Is it also possible with Vuforia to render models without any image targets in the camera's view, using only the received data about where they should be?
Thanks in advance!