
Please explain iOS Cloud Recognition sample application

July 1, 2013 - 12:59am #1

Hello, 

This is my first post to the Vuforia developer forum. For the past couple of months I have been using the Vuforia iOS SDK and have built some iOS apps based on the Image Targets sample app, using my own 3D models and textures. Now I need to learn how to maintain the target images in the cloud rather than in the device database. So I registered for the cloud service, read the developer articles, and downloaded the Cloud Recognition sample app. So far, I have a fair understanding of how to create cloud databases, add image targets to them, download the access keys, and incorporate them into the kAccessKey and kSecretKey constants declared in the CRQCARutils.mm file.
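For context, the access-key step amounts to editing two constants in CRQCARutils.mm. A minimal sketch, with placeholder strings standing in for the real client key pair copied from the Target Manager:

```objc
// CRQCARutils.mm -- client keys for the cloud database
// (placeholders; paste the client access/secret key pair shown
// for your cloud database in the Target Manager)
static NSString * const kAccessKey = @"your_client_access_key";
static NSString * const kSecretKey = @"your_client_secret_key";
```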

But after that, I have no idea how to modify the sample app to use my own image targets and to display my own content when the user points the camera at an image target maintained in the cloud. I find it difficult to understand the execution flow of the sample app. I have read the "https://developer.vuforia.com/forum/unity-3-extension-technical-discussion/please-explain-step-step-how-use-vuforia-cloud" thread, but it is written for Android. Can someone please help by explaining the sample app code? It would be great if you could explain which classes and methods need to be modified to display my own content on top of the sample app's book cover. Forgive me if I am asking too much, and by the way, you guys are doing an amazing job here.
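For what it's worth, the part of the sample that decides which content to display is the cloud-recognition polling loop, which the sample runs each frame in its render callback. Below is a rough sketch of that flow using the native QCAR::TargetFinder API of that SDK generation; the class and method names are from my recollection of those headers and should be checked against the SDK version you actually have:

```cpp
#include <QCAR/TrackerManager.h>
#include <QCAR/ImageTracker.h>
#include <QCAR/TargetFinder.h>
#include <QCAR/TargetSearchResult.h>

// Per frame, inside the render/update callback:
QCAR::TrackerManager& trackerMgr = QCAR::TrackerManager::getInstance();
QCAR::ImageTracker* tracker = static_cast<QCAR::ImageTracker*>(
    trackerMgr.getTracker(QCAR::Tracker::IMAGE_TRACKER));
QCAR::TargetFinder* finder = tracker->getTargetFinder();

// Poll for new cloud search results since the last frame
if (finder->updateSearchResults() ==
        QCAR::TargetFinder::UPDATE_RESULTS_AVAILABLE)
{
    if (finder->getResultCount() > 0)
    {
        const QCAR::TargetSearchResult* result = finder->getResult(0);

        // The target name (and any metadata uploaded with the target)
        // identifies which of your own assets to render for this match
        const char* name     = result->getTargetName();
        const char* metadata = result->getMetaData();

        // Turn the cloud result into a local trackable so pose
        // tracking continues without further network queries
        finder->enableTracking(*result);
    }
}
```

Once enableTracking() has produced a local trackable, swapping in your own content is a matter of branching on the target name (or metadata) in the rendering code and drawing your model instead of the sample's overlay.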

Thank you,

Amila.

Please explain iOS Cloud Recognition sample application

July 1, 2013 - 8:36pm #3

Thanks Nalin,

I'll go through the sample code again and post any questions that remain. Unfortunately, I haven't downloaded or used Unity 3D, but I'll give it a try when time permits.

Please explain iOS Cloud Recognition sample application

July 1, 2013 - 5:31am #2

But after that, I have no idea how to modify the sample app to use my own image targets and to display my own content when the user points the camera at an image target maintained in the cloud. I find it difficult to understand the execution flow of the sample app.

I can empathise with this. However, please note that this is not easy; the native SDK is meant for advanced developers who are able to understand the samples and apply them.

Can I suggest you try the Unity SDK? It offers a far more plug-and-play approach and will save you weeks, if not months, of effort; and Unity Basic for iOS and Android is now free.

 

N