"We offer new support options and therefor the forums are now in read-only mode! Please check out our Support Center for more information." - Vuforia Engine Team

Finding the recognition part

Hi, I am trying to make a simple AR app using the ImageTargets sample as my base code. I have been able to render my own target object instead of the teapot. However, I am stuck at one point: I can only display (render) my object when the app detects the stones image (or chips/tarmac). I don't want that, but I also don't intend to upload my own images to the Target Manager and generate the .dat and .xml files.

Instead, I want the target object to be rendered when my program detects a human face. That part is simple and I can already do it using Android's native code alone: I can enable a flag from my Java file when I detect a face and render my object whenever the flag is true. All I need to do is bypass the recognition part in the ImageTargets sample code. Can someone help me with that? Thanks :)
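A minimal sketch of that flag-based bypass, assuming the native (C++/JNI) ImageTargets sample; the JNI function name and Java package below are hypothetical placeholders and must match your own Java class:

    #include <jni.h>

    // Flag set from Java whenever the face detector reports a face.
    static volatile bool faceDetected = false;

    // Hypothetical JNI entry point; the package/class encoded in the name must
    // match the Java class that calls it (here: com.example.ImageTargetsRenderer).
    extern "C" JNIEXPORT void JNICALL
    Java_com_example_ImageTargetsRenderer_setFaceDetected(JNIEnv*, jobject, jboolean detected)
    {
        faceDetected = (detected == JNI_TRUE);
    }

    // In the sample's renderFrame(), instead of looping over the trackables
    // reported by QCAR, draw the model whenever the flag is set:
    //
    //     if (faceDetected)
    //     {
    //         // existing OpenGL code that sets up the matrices and
    //         // draws the 3D model goes here
    //     }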

AlessandroB

Wed, 01/16/2013 - 07:04

Hi buzz,

If I understand correctly, you simply want to use Vuforia to render the video background, and re-use the OpenGL code to render your own 3D model upon face recognition.

Hi, thanks for the clarification.

So you're basically using Vuforia to render the video background and the 3D models in OpenGL.

Is your application (before integrating with Vuforia) able to render the video background alone? (And if so, is it using OpenGL as well?)

 

No, the application was not using OpenGL. What this particular application does is detect faces and draw a rectangle, by extending SurfaceView and implementing SurfaceHolder.Callback and Camera.PreviewCallback.

Well, I am not sure I want to do it that way, or whether it will work at all. However, can you suggest a way for me to stop the camera call from JNI (and start my own camera), but still call just the render method so that my 3D model is rendered while my camera is on? Will that work?

Hey, I was looking for a possible way out. It just came to my mind: is it somehow possible to call Android's FaceDetector class from the ImageTargets sample code, so that it can use the camera feed (provided by Vuforia's camera call)?

Hey,

Sounds good; if you manage to do it that way, I think it's going to simplify your life quite a bit.

So, to get a camera frame image from QCAR you need to do the following (I recommend putting this code in the QCAR_onUpdate() method):
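A rough sketch of that frame-grabbing step, assuming the legacy QCAR native API used by the ImageTargets sample:

    #include <QCAR/QCAR.h>
    #include <QCAR/State.h>
    #include <QCAR/Frame.h>
    #include <QCAR/Image.h>

    // Once, e.g. right after starting the camera, ask QCAR to also deliver
    // frames in RGB565 so they appear in the frame's image list:
    //     QCAR::setFrameFormat(QCAR::RGB565, true);

    // Then, inside the QCAR_onUpdate(QCAR::State& state) callback:
    void grabCameraFrame(QCAR::State& state)
    {
        QCAR::Frame frame = state.getFrame();
        for (int i = 0; i < frame.getNumImages(); ++i)
        {
            const QCAR::Image* image = frame.getImage(i);
            if (image->getFormat() == QCAR::RGB565)
            {
                const void* pixels = image->getPixels();
                int width  = image->getWidth();
                int height = image->getHeight();
                // ... hand pixels/width/height to the face-detection code ...
                break;
            }
        }
    }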

Thanks. Actually, I need the RGB565 image format, so I need the corresponding index number. Are these images stored anywhere in the project? And can I get the RGB565 image in my Java class?

Hi, the index number for a specific format (RGB565 in your case) may vary from one device to another (chances are that it is either at index 0 or index 1 for the high-resolution frame).
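On getting the image into Java: one common approach is to copy the pixels into a Java byte array from native code and invoke a callback, as sketched below; the Java-side method name (onCameraFrame) and its signature are hypothetical here. Note that this needs a valid JNIEnv*, which is one reason this kind of code tends to live in renderFrame() (a JNI call where the environment pointer is available) rather than in the QCAR update callback.

    #include <jni.h>
    #include <QCAR/Image.h>

    // Copies an RGB565 QCAR::Image into a Java byte[] and passes it to a
    // hypothetical Java method: void onCameraFrame(byte[] pixels, int w, int h)
    void passImageToJava(JNIEnv* env, jobject javaObject, const QCAR::Image* image)
    {
        int width  = image->getWidth();
        int height = image->getHeight();
        // RGB565 uses 2 bytes per pixel; the buffer may be padded, so size the
        // copy by the buffer dimensions rather than width/height.
        jsize numBytes = image->getBufferWidth() * image->getBufferHeight() * 2;

        jbyteArray pixelArray = env->NewByteArray(numBytes);
        env->SetByteArrayRegion(pixelArray, 0, numBytes,
                                (const jbyte*) image->getPixels());

        jclass cls = env->GetObjectClass(javaObject);
        jmethodID mid = env->GetMethodID(cls, "onCameraFrame", "([BII)V");
        if (mid != 0)
            env->CallVoidMethod(javaObject, mid, pixelArray, width, height);

        env->DeleteLocalRef(pixelArray);
        env->DeleteLocalRef(cls);
    }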

Hi,
In the thread you mentioned, in the 9th post you placed the code for passing the image from C++ to Java in the renderFrame() method. Why is that?
I am supposed to render the 3D object from the renderFrame() method, aren't I?

I invoked the Java method from JNI in the onUpdate() method, but I keep getting the following error:

invalid use of incomplete type 'struct QCAR::Image'

on the following lines:
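That particular compiler error usually just means that the full definition of QCAR::Image is not visible in the source file (the frame-related headers only forward-declare it), so the usual fix, assuming the standard QCAR SDK header layout, is to add the Image header to that .cpp file:

    // Needed wherever a QCAR::Image pointer is dereferenced:
    #include <QCAR/Image.h>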

Hey, thanks, that solved the problem... I mean, the native code now builds without errors. However, I don't know yet whether the ultimate aim is achieved, and I will get back to you if I encounter other problems.