We have a project involving Cloud Reco; however, we're running into a few snags in our workflow. Since we can't seem to find answers in the documentation, it was suggested that we post here.
Here are our questions:
• Is there any formal documentation that normal users don't have access to?
• Using the Vuforia native SDK (iOS & Android), is it possible to get a dataset (dictionary or whatever) of the image target before the browser is displayed? We've tried using the SDK to do this, but thus far we haven't had any success. The only place we've been able to get the data back is right at tracking time (onVuforiaUpdate).
• Is it possible, or is there a working example of, initializing the AR (onInitARDone) after the camera has started? For example, we would like to create a render object after the camera scans a marker and we get the necessary metadata.
• Currently in the samples, there is only data for a single experience. Is there a way to dynamically load in and swap out these experiences at scan time? Perhaps in onVuforiaUpdate?
• It looks like Vuforia is using the same GL View to render camera frames and experience objects. Is this correct, or is there one GL View for the camera and another GL View for the objects?
• Is it possible to get metadata via the RESTful web services per marker? We have been able to make a target request on an ID, but we can only get the name, ID, width, and a few other things back. We have not been able to get the metadata or date created back. (Date created would be a big deal.)
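For reference, here is roughly the target request we are making, as a minimal Python sketch. The keys and target ID are placeholders, and the signing steps are our reading of the VWS documentation (HMAC-SHA1 over the method, Content-MD5, Content-Type, Date, and request path):

```python
import base64
import hashlib
import hmac


def vws_signature(secret_key: str, method: str, content: bytes,
                  content_type: str, date: str, path: str) -> str:
    """Build the base64 HMAC-SHA1 signature for a VWS request."""
    # Content-MD5 of the request body; for a GET this is the MD5 of an empty body.
    content_md5 = hashlib.md5(content).hexdigest()
    string_to_sign = "\n".join([method, content_md5, content_type, date, path])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    return base64.b64encode(digest).decode()


# Placeholder credentials and target ID from our target database.
ACCESS_KEY = "our-server-access-key"
SECRET_KEY = "our-server-secret-key"
path = "/targets/0123456789abcdef"
# Date header must be the current time in RFC 1123 GMT format.
date = "Sun, 22 Apr 2012 08:49:37 GMT"

sig = vws_signature(SECRET_KEY, "GET", b"", "", date, path)
headers = {
    "Date": date,
    "Authorization": f"VWS {ACCESS_KEY}:{sig}",
}
# We then issue GET https://vws.vuforia.com/targets/<target_id> with these
# headers; the response's target_record gives us name, width, active_flag,
# and so on, but no application metadata or creation date.
```

If there is a different endpoint or response field that exposes the metadata or the creation date, a pointer to it would solve our problem.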
Thank you very much.