For a research project, I've been trying out the Vuforia API for a few weeks now, but I am struggling to get the cloud recognition service to fit my needs. It seems to me that cloud recognition can only match flat/printed targets, which makes it unsuitable for arbitrary user-defined targets.
At first I thought this was a limitation of the image-registration algorithms, but the User-Defined Targets option of the sample app contradicts that: there you can define almost anything as a target, and recognition works marvelously.
I have tried several modifications of the images before uploading them to the cloud recognition database to enhance their features, but it remains really hard to match any object that is not printed, such as book covers or posters. Looking through the code, I can see that the User-Defined Targets sample app builds the target directly from the camera frame, which probably yields a high-quality template and enables recognition of more arbitrary objects. I was wondering if it is somehow possible to get this matching quality with cloud targets.
Or is there something else I am missing when using the cloud recognition service? My desired workflow is as follows:
1. A user takes a picture (for testing, I currently first check that the image works with the User-Defined Targets option of the sample app).
2. The picture is uploaded through our server to the cloud recognition database using the VWS API.
3. After processing, another user should be able to match the target uploaded by the user from step 1.
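For reference, this is roughly how our server signs the upload request in step 2. It is only a sketch of the documented VWS authentication scheme (HMAC-SHA1 over the verb, the hex MD5 of the body, the content type, an RFC-1123 GMT date, and the request path); the class and method names are my own, and the actual HTTP call is omitted:

```java
import javax.crypto.Mac;
import javax.crypto.spec.SecretKeySpec;
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;
import java.text.SimpleDateFormat;
import java.util.Base64;
import java.util.Date;
import java.util.Locale;
import java.util.TimeZone;

public class VwsSigner {

    // Hex-encoded MD5 of the request body, as VWS expects in the
    // string-to-sign.
    public static String contentMd5Hex(byte[] body) throws Exception {
        MessageDigest md5 = MessageDigest.getInstance("MD5");
        StringBuilder hex = new StringBuilder();
        for (byte b : md5.digest(body)) {
            hex.append(String.format("%02x", b));
        }
        return hex.toString();
    }

    // RFC-1123 date in GMT, sent in the Date header and signed over.
    public static String rfc1123Now() {
        SimpleDateFormat fmt =
                new SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss z", Locale.US);
        fmt.setTimeZone(TimeZone.getTimeZone("GMT"));
        return fmt.format(new Date());
    }

    // Base64(HMAC-SHA1(secretKey, stringToSign)); the result goes into
    // the Authorization header as "VWS <accessKey>:<signature>".
    public static String sign(String secretKey, String verb, byte[] body,
                              String contentType, String date, String path)
            throws Exception {
        String stringToSign = verb + "\n" + contentMd5Hex(body) + "\n"
                + contentType + "\n" + date + "\n" + path;
        Mac hmac = Mac.getInstance("HmacSHA1");
        hmac.init(new SecretKeySpec(
                secretKey.getBytes(StandardCharsets.UTF_8), "HmacSHA1"));
        byte[] sig = hmac.doFinal(
                stringToSign.getBytes(StandardCharsets.UTF_8));
        return Base64.getEncoder().encodeToString(sig);
    }
}
```

The JSON body itself (target name, width, Base64-encoded image) is then POSTed to the `/targets` endpoint with this signature; as far as I can tell this part works, since the targets do appear in the database.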
However, it seems that some amount of manual enhancement, processing, and classifying of the images is required, which prevents this workflow from working with arbitrary targets, even though the sample app's User-Defined Targets option seems to show otherwise.
Any advice or comments from experienced users and developers are welcome. By the way, I'm using the Java API on Android.