Hi guys,
I've been working for quite a while to integrate an Augmented Reality module into my Cyborg framework for Android. I completed two thirds of the task yesterday, and I'm happy with the results. I've divided the POC into three parts: a camera feed viewer (done!), 3D model loading, rendering, and display (done!), and image recognition (next...).
I planned to use the Vuforia SDK from the get-go, but didn't realize it has an AR implementation for Android (though from the looks of it, that would have been harder to integrate into the framework...).
Since I integrate one feature at a time, and each feature is unique, I would like to encapsulate only the image recognition process. I've noticed the CameraCacheInfo.class, and wondered if I could feed the data to the SDK externally, and receive events upon the detection of a marker... So before I get down to it: is it even possible (reflection is OK as well!)? If so, any pointers would be nice...
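To illustrate the kind of encapsulation I mean: a framework module that hides the SDK behind a listener interface, fires "detected"/"lost" events by diffing the markers reported each frame, and lets the rest of the app stay SDK-agnostic. This is just a sketch of the envisioned wrapper, not Vuforia API — MarkerListener, MarkerEvent names and the per-frame feeding model are all my own assumptions:

```java
import java.util.Collection;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.concurrent.CopyOnWriteArrayList;

/**
 * Hypothetical framework-side wrapper around an image-recognition SDK.
 * The SDK (or a camera callback) would call onFrameResults(...) once per
 * frame with the names of the markers it recognized in that frame; the
 * dispatcher diffs against the previous frame and fires events.
 */
public class MarkerDetectionDispatcher {

    /** Callback the framework module exposes to client code. */
    public interface MarkerListener {
        void onMarkerDetected(String markerName);
        void onMarkerLost(String markerName);
    }

    private final List<MarkerListener> listeners = new CopyOnWriteArrayList<>();
    private final Set<String> visible = new HashSet<>();

    public void addListener(MarkerListener listener) {
        listeners.add(listener);
    }

    public void removeListener(MarkerListener listener) {
        listeners.remove(listener);
    }

    /**
     * Feed the recognition results for one camera frame. Markers that were
     * not visible last frame trigger onMarkerDetected; markers that were
     * visible but are now absent trigger onMarkerLost.
     */
    public void onFrameResults(Collection<String> markersInFrame) {
        // Newly visible markers.
        for (String name : markersInFrame) {
            if (visible.add(name)) {
                for (MarkerListener l : listeners) {
                    l.onMarkerDetected(name);
                }
            }
        }
        // Markers that disappeared since the previous frame.
        visible.removeIf(name -> {
            if (!markersInFrame.contains(name)) {
                for (MarkerListener l : listeners) {
                    l.onMarkerLost(name);
                }
                return true;
            }
            return false;
        });
    }
}
```

The idea being that only this dispatcher (and whatever feeds it) would ever touch the SDK, so swapping Vuforia for something else later wouldn't ripple through the framework.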
Thanks,
Adam.
Using only the Image recognition feature of the SDK
What you're describing can be accomplished using our Cloud Recognition service, which allows you to store images online and retrieve trackables when they are detected. Otherwise, the images will need to be bundled into a dataset and loaded at either build or run time.
I was considering Cloud Recognition; it would serve the purpose I need, but I'm just not sure it's the best solution for a framework module...
How big is the entire SDK (compiled and compressed in the JAR, and in runtime memory)?
update: