Cloud Reco & Native SDKs

November 9, 2016 - 1:21pm #1


We're working on a project involving cloud recognition and variable user experiences. However, we're hitting some speed bumps and have a few questions. Perhaps the fine people here can help us out with the following.


• Is it possible to see a working example of onInitARDone being initialized after the camera has already been started?

• Using the native Vuforia SDK (iOS & Android), is it possible to retrieve cloud database information before the browser is loaded?

• Is it possible to render an experience object after a scan is already complete and we have the metadata from the target?

• Does Vuforia use the same GL view to render both the camera feed and the experience objects, or does it use a separate GL view for each?

• Is there any formal documentation that we don’t have access to?

A final one, though this isn't directly related to cloud reco: 

Vuforia Web Service: 

• We’re unable to retrieve a trackable’s metadata via the Vuforia Web Service. Is it possible to get this data, or perhaps the creation date, via the VWS?
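In case it helps diagnose the problem, here is a minimal sketch of how we're signing the GET /targets/{target_id} request, following the VWS signature scheme as we understand it (HMAC-SHA1 over the string-to-sign, base64-encoded). The access key, secret key, and target ID below are placeholders:

```python
import base64
import hashlib
import hmac
from email.utils import formatdate


def vws_auth_headers(access_key, secret_key, method, path,
                     body=b"", content_type=""):
    """Build the Authorization and Date headers for a VWS request.

    String-to-sign (per the VWS docs as we understand them):
        Method \n Content-MD5 \n Content-Type \n Date \n Request-Path
    Signature = Base64(HMAC-SHA1(secret_key, string_to_sign))
    """
    date = formatdate(usegmt=True)                # RFC 1123 date in GMT
    content_md5 = hashlib.md5(body).hexdigest()   # MD5 of empty body for GET
    string_to_sign = "\n".join([method, content_md5, content_type, date, path])
    digest = hmac.new(secret_key.encode(), string_to_sign.encode(),
                      hashlib.sha1).digest()
    signature = base64.b64encode(digest).decode()
    return {"Authorization": f"VWS {access_key}:{signature}", "Date": date}


# Placeholder credentials and target ID -- substitute real VWS server keys.
headers = vws_auth_headers(
    access_key="my-access-key",
    secret_key="my-secret-key",
    method="GET",
    path="/targets/0123456789abcdef",  # hypothetical target ID
)
# The request itself would then be, e.g.:
#   GET https://vws.vuforia.com/targets/0123456789abcdef  with `headers`
```

The signed request authenticates fine for us; what we can't tell from the docs is whether the target record returned by that endpoint is supposed to include the application metadata at all, which is the crux of the question above.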


Thanks everyone in advance. 