Hello,
I'm currently trying to augment an entire empty room with furniture and linings (almost VR). I can't use dozens of markers that each augment a specific piece of furniture, because the content would pop in and out as tracking is gained and lost. We want everything to be augmented at all times.
Questions and thoughts:
* If we carefully set up the real room with markers and build the virtual room to match it exactly, the camera can move from marker to marker and the virtual room will stay correctly aligned as long as at least one marker is tracked (see the first sketch after this list). But this means everything has to be adjusted before use, for each room. No adaptation.
* Using a Unified Coordinate System, I calibrate the virtual room with their calibration tool to wherever the markers actually ended up (only the main marker has to be placed precisely). More flexibility.
* Using a single marker with extended tracking. Too inaccurate: an empty room gives SLAM almost nothing to track.
* Using a MultiTarget and inverting the sides so that we are inside the cube (see the second sketch after this list). I'd need a big marker on each wall. Is this possible? It also means I couldn't zoom in on a spot where no marker is visible.
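For the first two options, the underlying math is the same: once any one marker is tracked, you know the camera pose relative to that marker, and if you also know (from pre-measurement or a calibration step) where that marker sits in room coordinates, you can place every piece of virtual furniture consistently. A minimal sketch with 4x4 homogeneous transforms in numpy; the names (`marker_in_room`, `furniture_in_room`, the example values, etc.) are mine for illustration, not from any particular SDK:

```python
import numpy as np

def invert(T):
    """Invert a rigid 4x4 transform (rotation + translation)."""
    R, t = T[:3, :3], T[:3, 3]
    Ti = np.eye(4)
    Ti[:3, :3] = R.T
    Ti[:3, 3] = -R.T @ t
    return Ti

# Known from setup/calibration: pose of each marker in room coordinates.
# (Hypothetical values; in practice they come from measuring the room
# or from a calibration tool.)
marker_in_room = {
    "wall_A": np.eye(4),                      # main marker defines the room origin
    "wall_B": np.array([[0, 0, -1, 4.0],
                        [0, 1,  0, 0.0],
                        [1, 0,  0, 2.5],
                        [0, 0,  0, 1.0]]),
}

# Known from setup: pose of each virtual furniture item in room coordinates.
furniture_in_room = {
    "sofa": np.array([[1, 0, 0, 1.2],
                      [0, 1, 0, 0.0],
                      [0, 0, 1, 3.0],
                      [0, 0, 0, 1.0]]),
}

def furniture_in_camera(tracked_id, T_camera_marker):
    """Given the pose of one tracked marker in camera coordinates,
    return the camera-space pose of every furniture item."""
    # camera<-room = (camera<-marker) * (marker<-room)
    T_camera_room = T_camera_marker @ invert(marker_in_room[tracked_id])
    return {name: T_camera_room @ T_room_obj
            for name, T_room_obj in furniture_in_room.items()}
```

The only practical difference between the first two options is where `marker_in_room` comes from: measured by hand once per room, or filled in on site by a calibration tool.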
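For the MultiTarget idea, the geometry is just a box whose faces point inward instead of outward. A hedged sketch of generating the four wall-marker poses from the room dimensions; the Y-up convention and all names here are my assumptions, not a specific target-definition format:

```python
import math
import numpy as np

def wall_pose(position, yaw_deg):
    """4x4 pose: rotation about the vertical (Y) axis, then translation."""
    yaw = math.radians(yaw_deg)
    c, s = math.cos(yaw), math.sin(yaw)
    T = np.eye(4)
    T[:3, :3] = np.array([[ c, 0, s],
                          [ 0, 1, 0],
                          [-s, 0, c]])
    T[:3, 3] = position
    return T

def inside_out_box(width, depth, marker_height):
    """One marker per wall of a width x depth room, each with its local +Z
    normal pointing toward the room centre (the 'inverted cube')."""
    w, d, h = width / 2, depth / 2, marker_height
    return {
        "wall_-z": wall_pose((0, h, -d),   0),   # faces +Z, into the room
        "wall_+z": wall_pose((0, h,  d), 180),   # faces -Z
        "wall_+x": wall_pose(( w, h, 0), -90),   # faces -X
        "wall_-x": wall_pose((-w, h, 0),  90),   # faces +X
    }
```

Whether a tracking SDK accepts a room-sized target like this is exactly my open question; the sketch only shows that the per-wall poses are easy to generate once the room dimensions are known.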
Does anyone want to discuss which approach would be best, including ones I haven't thought of (a space/depth sensor? plugins? ...), or has anyone worked on a similar project? I would appreciate advice, ideas and brainpower.
Thanks,
Antoine.
Hello there,
Is there any progress on this topic?
I am also trying to do the same... If anybody has found a solution, please help.