Topic locked

Finding the recognition part

January 15, 2013 - 10:35pm #1

Hi,
I am trying to make a simple AR app using the ImageTargets sample as my base code.
I have been able to render my own target object instead of the teapot. However, I am stuck at one point: I can only display (render) my object when the app detects the stones image (or chips/tarmac). I don't want that, but I also don't intend to upload my images to the Target Manager and generate the .dat and .xml files. Instead, I want the target object to be rendered when my program detects a human face. That part is simple, and I can already do it using Android's native face detection: I can set a flag from my Java file when a face is detected and render my object whenever the flag is true. All I need to do is bypass the recognition part in the ImageTargets sample code. Can someone help me with that?

Thanks :)

Finding the recognition part

March 5, 2014 - 4:08am #76

Hi Lynnette,

Your post is not related to this thread; please create a new thread.

Also, Vuforia 2.0 is not supported anymore.

The latest SDK version is 2.8.7 (as of today).

image target

March 5, 2014 - 3:18am #75

Hi,

I am trying to build the Image Targets sample project 2-0-7 using ndk-build, and I get this error in Cygwin:

 

Compile++ arm    : ImageTargets <= ImageTargets.cpp
jni/ImageTargets.cpp: In member function 'virtual void ImageTargets_UpdateCallback::QCAR_onUpdate(QCAR::State&)':
jni/ImageTargets.cpp:98:43: error: 'IMAGE_TRACKER' is not a member of 'QCAR::Tracker'
jni/ImageTargets.cpp: In function 'int Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_initTracker(JNIEnv*, jobject)':
jni/ImageTargets.cpp:155:89: error: expected primary-expression before ')' token
jni/ImageTargets.cpp: In function 'void Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_deinitTracker(JNIEnv*, jobject)':
jni/ImageTargets.cpp:174:34: error: 'IMAGE_TRACKER' is not a member of 'QCAR::Tracker'
jni/ImageTargets.cpp: In function 'int Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_loadTrackerData(JNIEnv*, jobject)':
jni/ImageTargets.cpp:186:47: error: 'IMAGE_TRACKER' is not a member of 'QCAR::Tracker'
jni/ImageTargets.cpp: In function 'int Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_destroyTrackerData(JNIEnv*, jobject)':
jni/ImageTargets.cpp:242:35: error: 'IMAGE_TRACKER' is not a member of 'QCAR::Tracker'
jni/ImageTargets.cpp: In function 'void Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_startCamera(JNIEnv*, jobject)':
jni/ImageTargets.cpp:613:93: error: expected primary-expression before ')' token
jni/ImageTargets.cpp: In function 'void Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargets_stopCamera(JNIEnv*, jobject)':
jni/ImageTargets.cpp:626:61: error: 'IMAGE_TRACKER' is not a member of 'QCAR::Tracker'
/cygdrive/d/android/android-ndk-r8e-windows-x86/android-ndk-r8e/build/core/build-binary.mk:272: recipe for target 'obj/local/armeabi/objs/ImageTargets/ImageTargets.o' failed
make: *** [obj/local/armeabi/objs/ImageTargets/ImageTargets.o] Error 1
 
Please help me rectify this problem

Finding the recognition part

March 27, 2013 - 2:37am #74

 

thank you

Finding the recognition part

March 22, 2013 - 5:34am #73

Hi, concerning performance: copying the full image from native memory to a Bitmap can take some time (although this may vary significantly from one device to another).

One thing I would do is avoid creating a new Bitmap every time processCameraImage is called; you could keep the Bitmap in a class member variable, create it just once on the first call, and then only update its pixels in subsequent calls.
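As a rough illustration of that "create once, update afterwards" pattern in plain Java (a byte[] stands in for the Bitmap here, since android.graphics isn't available off-device, and FrameSink is a made-up class name):

```java
// Sketch of the reuse pattern suggested above: allocate the pixel store
// once (or when the size changes), and only update it on later frames.
class FrameSink {
    private byte[] pixels;       // created lazily on the first frame
    private int creations = 0;   // counts how many times we allocated

    void processCameraImage(byte[] buffer, int width, int height) {
        int needed = width * height * 2;   // RGB565: 2 bytes per pixel
        if (pixels == null || pixels.length != needed) {
            pixels = new byte[needed];     // allocate only when needed
            creations++;
        }
        // update in place instead of creating a new object per frame
        System.arraycopy(buffer, 0, pixels, 0, needed);
    }

    int getCreations() { return creations; }
}
```

The same idea applies to the Bitmap: keep it as a field and call copyPixelsFromBuffer on the existing instance each frame.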

Also, the reason the image is not displayed is that you create a new Canvas, but that Canvas is not associated with any View.

The way to do it would be to create a View (for instance through an XML layout) and add that view to the main view of the app (see addContentView(), which is already used in ImageTargets.java, as an example).

Then you could override the onDraw() method of your View and use the Canvas that is passed as an argument to onDraw(), something similar to this sample code snippet:

private class MyBitmapView extends View {

    public MyBitmapView(Context context) {
        super(context);
    }

    @Override
    protected void onDraw(Canvas canvas) {
        canvas.drawBitmap(mBitmap, 0, 0, null);
    }
}

I hope this helps.

 

Finding the recognition part

March 22, 2013 - 3:41am #72

OK, then I tried to use RGB565 and adapt the face detection example:

public void processCameraImage(byte[] buffer, int width, int height) {

    System.out.println("setRGB565CameraImage....intent received...");
    Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
    bitmap.copyPixelsFromBuffer(ByteBuffer.wrap(buffer));
    FaceDetector detector = new FaceDetector(width, height, 2);
    Face[] faces = new Face[2];
    Paint ditherPaint = new Paint();
    Paint drawPaint = new Paint();
    ditherPaint.setDither(true);
    drawPaint.setColor(Color.RED);
    drawPaint.setStyle(Paint.Style.STROKE);
    drawPaint.setStrokeWidth(2);
    Canvas canvas = new Canvas();
    // canvas.setBitmap(bitmap);
    // canvas.drawBitmap(bitmap, 0, 0, ditherPaint);
    int facesFound = detector.findFaces(bitmap, faces);
    PointF midPoint = new PointF();
    float eyeDistance = 0.0f;
    float confidence = 0.0f;
    Log.i("FaceDetector", "Number of faces found: " + facesFound);

    if (facesFound > 0) {
        for (int index = 0; index < facesFound; ++index) {
            faces[index].getMidPoint(midPoint);
            eyeDistance = faces[index].eyesDistance();
            confidence = faces[index].confidence();

            Log.i("FaceDetector", "Confidence: " + confidence
                    + ", Eye distance: " + eyeDistance
                    + ", Mid Point: (" + midPoint.x + ", " + midPoint.y + ")");

            canvas.drawRect(midPoint.x - eyeDistance,
                            midPoint.y - eyeDistance,
                            midPoint.x + eyeDistance,
                            midPoint.y + eyeDistance, drawPaint);
        }
        // mGlView.setRenderer(mRenderer);
    }
}

The video performance is not good, and I can't display the rectangle with the Canvas. Is there another way, using a QCAR function?

Finding the recognition part

March 22, 2013 - 1:44am #71

OK, then this seems to indicate that the device unfortunately does not support that format.

 

Finding the recognition part

March 22, 2013 - 1:05am #70

No, it doesn't work.

It works with

Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);

but not with 

Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);

 

Finding the recognition part

March 21, 2013 - 11:34am #69

Hi, check this code snippet:

if ((javaVM != 0) && (activityObj != 0) && (javaVM->GetEnv((void**)&env, JNI_VERSION_1_4) == JNI_OK)) {

 
        const short* pixels = (const short*) imageRGBA8888->getPixels();
        int width = imageRGBA8888->getWidth();
        int height = imageRGBA8888->getHeight();
        int numPixels = width * height;
 
        jbyteArray pixelArray = env->NewByteArray(numPixels * 4);
        env->SetByteArrayRegion(pixelArray, 0, numPixels * 4, (const jbyte*) pixels);
        jclass javaClass = env->GetObjectClass(activityObj);
        jmethodID method = env->GetMethodID(javaClass, "processCameraImage", "([BII)V");
        env->CallVoidMethod(activityObj, method, pixelArray, width, height);
        env->DeleteLocalRef(pixelArray);
    }

Does it work then ?

Finding the recognition part

March 21, 2013 - 10:06am #68

Thank you, it's clear now that the JNI part invokes the Java function:

jmethodID method = env->GetMethodID(javaClass, "processCameraImage", "([BII)V");

Now I want to work with RGBA8888, so I changed ImageTargets.cpp:

 // Select the default mode:
    if (!QCAR::CameraDevice::getInstance().selectVideoMode(
                                QCAR::CameraDevice::MODE_DEFAULT))
        return;
    QCAR::setFrameFormat(QCAR::RGBA8888, true);
    // Start the camera:
    if (!QCAR::CameraDevice::getInstance().start())
        return;
    QCAR::setFrameFormat(QCAR::RGBA8888, true);

and in startCamera():

if (image->getFormat() == QCAR::RGBA8888) 

I also made changes in ImageTargets.java:

 Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);

but now I can't display the video any more; I'm stuck on the AboutScreen activity.

This is part of the logcat:

03-20 17:38:19.487: I/System.out(15046): setRGB565CameraImage....intent received...
03-20 17:38:19.597: W/dalvikvm(15046): JNI WARNING: JNI method called with exception pending
03-20 17:38:19.597: W/dalvikvm(15046): in Lcom/qualcomm/ar/pl/CameraPreview;.newFrameAvailable:(IIII[B)V (CallIntMethodV)
03-20 17:38:19.597: W/dalvikvm(15046): Pending exception is:
03-20 17:38:19.597: I/dalvikvm(15046): java.lang.RuntimeException: Buffer not large enough for pixels
03-20 17:38:19.597: I/dalvikvm(15046): 	at android.graphics.Bitmap.copyPixelsFromBuffer(Bitmap.java:383)
03-20 17:38:19.597: I/dalvikvm(15046): 	at com.qualcomm.QCARSamples.ImageTargets.ImageTargets.processCameraImage(ImageTargets.java:499)
03-20 17:38:19.597: I/dalvikvm(15046): 	at com.qualcomm.ar.pl.CameraPreview.newFrameAvailable(Native Method)
03-20 17:38:19.597: I/dalvikvm(15046): 	at com.qualcomm.ar.pl.CameraPreview.onPreviewFrame(CameraPreview.java:760)
03-20 17:38:19.597: I/dalvikvm(15046): 	at android.hardware.Camera$EventHandler.handleMessage(Camera.java:705)

I tried to use:

pixelArray = env->NewByteArray(numPixels * 4);

but with no result: a black screen, then a return to the AboutScreen activity.
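The "Buffer not large enough for pixels" exception in the logcat above is typically a bytes-per-pixel mismatch: an RGB565 frame is 2 bytes per pixel, while an ARGB_8888 Bitmap expects 4, so a buffer sized for one format cannot fill a Bitmap of the other. A quick sanity check in plain Java (PixelBufferCheck is a made-up helper name, not part of any SDK):

```java
// copyPixelsFromBuffer throws "Buffer not large enough for pixels"
// when the source buffer holds fewer bytes than the Bitmap requires.
class PixelBufferCheck {
    static int rgb565Bytes(int width, int height)   { return width * height * 2; }
    static int argb8888Bytes(int width, int height) { return width * height * 4; }

    // true if a buffer of bufferLen bytes can fill an ARGB_8888 bitmap
    static boolean fitsArgb8888(int bufferLen, int width, int height) {
        return bufferLen >= argb8888Bytes(width, height);
    }
}
```

So for a 640x480 frame, an RGB565 buffer (614400 bytes) is only half of what an ARGB_8888 Bitmap needs (1228800 bytes), which matches the crash described above.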

Finding the recognition part

March 20, 2013 - 12:24pm #67

That code (Bitmap.createBitmap...) is just an example of a Java function that creates a Bitmap object (in Android, a Bitmap can represent any image) from a buffer.

That function is meant to be called from the native code; it is just an example of how to transfer the camera frame pixels from native to Java.

 

Finding the recognition part

March 20, 2013 - 10:18am #66

 

Hi
but I didn't understand how I can access the camera image. Does the Bitmap represent the camera image?
Bitmap bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.ARGB_8888);
 bitmap.copyPixelsFromBuffer(ByteBuffer.wrap(buffer)); 

 

Finding the recognition part

March 19, 2013 - 8:09am #65

Hi, first check if you have added this code as mentioned in that article:

//global variables
JavaVM* javaVM = 0;
jobject activityObj = 0;
 
JNIEXPORT jint JNICALL
JNI_OnLoad(JavaVM* vm,  void* reserved) {
    LOG("JNI_OnLoad");
    javaVM = vm;
    return JNI_VERSION_1_4;
}

then, if you have added that, activityObj and javaVM should be OK.

Also, make sure that you have added this code at the end of initApplicationNative() (otherwise you will get a runtime crash):

activityObj = env->NewGlobalRef(obj);

then, in QCAR_onUpdate(), make sure that the "state" parameter is uncommented, i.e.:

virtual void QCAR_onUpdate(QCAR::State& state)

it might be that in the original sample code it looks like this:

virtual void QCAR_onUpdate(QCAR::State& /*state*/)

(as you can see, there the parameter name is commented out ...)

Finding the recognition part

March 19, 2013 - 8:04am #64

 

AlessandroB wrote:

Hi, please refer to this thread (which is probably much clearer):

https://developer.vuforia.com/forum/faq/android-how-can-i-access-th-camera-image

 

 

I tried to access the camera as mentioned in that thread, but I got this error:
 
jni/ImageTargets.cpp:111: error: 'javaVM' was not declared in this scope
jni/ImageTargets.cpp:111: error: 'activityObj' was not declared in this scope
make: *** [obj/local/armeabi/objs/ImageTargets/ImageTargets.o] Error 1
 
Do you have any idea how I can fix it?

Finding the recognition part

February 12, 2013 - 5:52am #63

Hi, wasn't it your initial requirement not to use any tracker? If you have no tracker and no trackable, you cannot have any pose matrix; that's by definition.

However, you can set your modelview matrix (which is the "pose matrix" of your 3D model, if you like) any way you want; you are completely free to translate / scale / rotate it as needed.

 

 

Finding the recognition part

February 12, 2013 - 4:15am #62

Hi,

Now the problem is that I am not getting the pose matrix; that's why the 3D model (teapot) is fixed at the center of the screen. So how can I get a pose matrix without trackables?

Finding the recognition part

January 30, 2013 - 4:23am #61

No, that's not a threading error; you just need to adjust the name of your JNI function to match the fully qualified (package + class) name of your Java class:

JNIEXPORT void JNICALL Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargetsRenderer_renderFrame

you need to replace the "Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargetsRenderer_" prefix with one that matches the fully qualified class name of YOUR Java class;

for instance, if your Java class is called com.mycompany.myproject.ImageTargetsRenderer, the name of your JNI "renderFrame" function will have to be:

"Java_com_mycompany_myproject_ImageTargetsRenderer_renderFrame"

Then also make sure to re-run ndk-build and refresh the project in Eclipse, as usual.
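The mapping described above is mechanical: "Java_" is prefixed, and the dots in the package and class name become underscores. A small sketch of that rule (JniNames is a made-up helper; it ignores JNI's extra escaping rules for underscores and non-ASCII characters in class names, which don't occur here):

```java
// Builds the expected JNI symbol name for a native method, e.g.
// jniName("com.mycompany.myproject.ImageTargetsRenderer", "renderFrame")
// -> "Java_com_mycompany_myproject_ImageTargetsRenderer_renderFrame"
class JniNames {
    static String jniName(String fullyQualifiedClass, String method) {
        return "Java_" + fullyQualifiedClass.replace('.', '_') + "_" + method;
    }
}
```

An UnsatisfiedLinkError like the one above almost always means the symbol the VM computed this way was not found in the loaded .so.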

 

Finding the recognition part

January 30, 2013 - 3:59am #60

Now I get an error (I guess it's a thread-related issue):

01-30 17:30:18.098: E/AndroidRuntime(6049): FATAL EXCEPTION: GLThread 14
01-30 17:30:18.098: E/AndroidRuntime(6049): java.lang.UnsatisfiedLinkError: renderFrame
01-30 17:30:18.098: E/AndroidRuntime(6049):  at com.qualcomm.QCARSamples.ImageTargets.ImageTargetsRenderer.renderFrame(Native Method)
01-30 17:30:18.098: E/AndroidRuntime(6049):  at com.qualcomm.QCARSamples.ImageTargets.ImageTargetsRenderer.onDrawFrame(ImageTargetsRenderer.java:77)
01-30 17:30:18.098: E/AndroidRuntime(6049):  at android.opengl.GLSurfaceView$GLThread.guardedRun(GLSurfaceView.java:1372)
01-30 17:30:18.098: E/AndroidRuntime(6049):  at android.opengl.GLSurfaceView$GLThread.run(GLSurfaceView.java:1118)
 

Can you identify what's going wrong?

Finding the recognition part

January 30, 2013 - 2:05am #59

Oh, OK, I just realized from your code that you are using Vuforia SDK 1.5 then.

Is there a reason why you are using that version? I would encourage you to use Vuforia 2.0 (available since December 2012), which contains a lot of improvements and new features.

However, in case you want to keep using 1.5, just remove the code related to mReflection and the code using texSampler2DHandle. That should be sufficient to fix the error.

 

 

Finding the recognition part

January 30, 2013 - 1:28am #58

The texSampler2DHandle variable is not used in the original sample code... and neither was the front/back camera code:

if(QCAR::Renderer::getInstance().getVideoBackgroundConfig().mReflection == QCAR::VIDEO_BACKGROUND_REFLECTION_ON)
 glFrontFace(GL_CW); //Front camera    
    else      
 glFrontFace(GL_CCW);//Rear camera

The above code was not present in the original sample.

And I have included all the header files!

Finding the recognition part

January 30, 2013 - 12:07am #57

Hi, it looks like you might have removed some of the #include statements from the original sample:

can you check that you have ALL of the following:

 

#include <QCAR/QCAR.h>
#include <QCAR/CameraDevice.h>
#include <QCAR/Renderer.h>
#include <QCAR/VideoBackgroundConfig.h>
#include <QCAR/Trackable.h>
#include <QCAR/TrackableResult.h>
#include <QCAR/Tool.h>
#include <QCAR/Tracker.h>
#include <QCAR/TrackerManager.h>
#include <QCAR/ImageTracker.h>
#include <QCAR/CameraCalibration.h>
#include <QCAR/UpdateCallback.h>
#include <QCAR/DataSet.h>
 
check the "texSampler2DHandle" variable in the original sample code and verify all its occurrences; you might have removed it as well, for some reason.
 

Finding the recognition part

January 29, 2013 - 10:06pm #56

Thanks, now there are a few errors:

In function 'void JNICALL Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargetsRenderer_renderFrame(JNIEnv*, _jobject*)':

1. 'const struct QCAR::VideoBackgroundConfig' has no member named 'mReflection'
2. 'VIDEO_BACKGROUND_REFLECTION_ON' is not a member of 'QCAR'
3. 'texSampler2DHandle' was not declared in this scope

Finding the recognition part

January 29, 2013 - 6:39am #55

Just to summarize my previous message with a fully working code snippet, this is the complete code of renderFrame():

JNIEXPORT void JNICALL
Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargetsRenderer_renderFrame(JNIEnv *, jobject)
{
	//LOG("Java_com_qualcomm_QCARSamples_ImageTargets_GLRenderer_renderFrame");

	    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

	    QCAR::State state = QCAR::Renderer::getInstance().begin();

	    QCAR::Renderer::getInstance().drawVideoBackground();

	    glEnable(GL_DEPTH_TEST);

	    glEnable(GL_CULL_FACE);
	    glCullFace(GL_BACK);
	    if(QCAR::Renderer::getInstance().getVideoBackgroundConfig().mReflection == QCAR::VIDEO_BACKGROUND_REFLECTION_ON)
	        glFrontFace(GL_CW);  //Front camera
	    else
	        glFrontFace(GL_CCW);   //Back camera

	    const Texture* const thisTexture = textures[0];

	    QCAR::Matrix44F modelViewMatrix;
	    for (int i = 0; i < 16; ++i) modelViewMatrix.data[i] = 0;
	    modelViewMatrix.data[0] = 1.0f;
	    modelViewMatrix.data[5] = 1.0f;
	    modelViewMatrix.data[10] = 1.0f;
	    modelViewMatrix.data[15] = 1.0f;

	    SampleUtils::translatePoseMatrix(0.0f, 0.0f, 150.0f,
	                                         &modelViewMatrix.data[0]);
	    SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,
	                                     &modelViewMatrix.data[0]);


	    QCAR::Matrix44F modelViewProjection;

	    SampleUtils::multiplyMatrix(&projectionMatrix.data[0],
	                                    &modelViewMatrix.data[0] ,
	                                    &modelViewProjection.data[0]);

	    glUseProgram(shaderProgramID);

	    glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0,
	                              (const GLvoid*) &teapotVertices[0]);
	    glVertexAttribPointer(normalHandle, 3, GL_FLOAT, GL_FALSE, 0,
	                              (const GLvoid*) &teapotNormals[0]);
	    glVertexAttribPointer(textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0,
	                              (const GLvoid*) &teapotTexCoords[0]);

	    glEnableVertexAttribArray(vertexHandle);
	    glEnableVertexAttribArray(normalHandle);
	    glEnableVertexAttribArray(textureCoordHandle);

	    glActiveTexture(GL_TEXTURE0);
	    glBindTexture(GL_TEXTURE_2D, thisTexture->mTextureID);
	    glUniform1i(texSampler2DHandle, 0);
	    glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE,
	                           (GLfloat*)&modelViewProjection.data[0] );

	    glDrawElements(GL_TRIANGLES, NUM_TEAPOT_OBJECT_INDEX, GL_UNSIGNED_SHORT,
	                       (const GLvoid*) &teapotIndices[0]);

	    SampleUtils::checkGlError("ImageTargets renderFrame");

	    glDisable(GL_DEPTH_TEST);

	    glDisableVertexAttribArray(vertexHandle);
	    glDisableVertexAttribArray(normalHandle);
	    glDisableVertexAttribArray(textureCoordHandle);

	    QCAR::Renderer::getInstance().end();
}

 

Finding the recognition part

January 29, 2013 - 6:18am #54

Great. So if you look at your current code (snippet here):

 // Did we find any trackables this frame?
    for(int tIdx = 0; tIdx < state.getNumActiveTrackables(); tIdx++)
    {
        // Get the trackable:
        const QCAR::Trackable* trackable = state.getActiveTrackable(tIdx);
        QCAR::Matrix44F modelViewMatrix =
            QCAR::Tool::convertPose2GLMatrix(trackable->getPose());        
 
        // Choose the texture based on the target name:
        int textureIndex;

You need to adjust it as follows:

  1. remove the " for (int tIdx = 0; ...) " loop   (i.e. just execute the whole OpenGL code which is currently enclosed inside the for loop)
  2. for the modelview matrix, instead of using the < trackable = state.getActiveTrackable(tIdx); > code, set up your own modelview matrix; for instance, if you want the model standing in front of your camera at a fixed location, you could use the following:
QCAR::Matrix44F modelViewMatrix;

// Set the modelview as "identity" matrix
for (int i=0; i < 16; ++i) modelViewMatrix.data[i] = 0;

modelViewMatrix.data[0] = 1.0f; modelViewMatrix.data[5] = 1.0f; modelViewMatrix.data[10] = 1.0f; modelViewMatrix.data[15] = 1.0f; 

// Apply a translation back (move the model away from the camera):
SampleUtils::translatePoseMatrix(0.0f, 0.0f, 100.0f,  &modelViewMatrix.data[0]);
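The effect of that translation on the identity matrix can be sketched in plain Java (column-major 4x4 layout, as OpenGL expects; PoseMath is a made-up name that mimics what SampleUtils does, not the actual SampleUtils code):

```java
class PoseMath {
    // 4x4 identity, column-major (OpenGL convention), as in the snippet above
    static float[] identity() {
        float[] m = new float[16];
        m[0] = m[5] = m[10] = m[15] = 1.0f;
        return m;
    }

    // Post-multiply by a translation (x, y, z); for a column-major matrix
    // the translation accumulates in elements 12..14.
    static void translate(float x, float y, float z, float[] m) {
        m[12] += m[0] * x + m[4] * y + m[8] * z;
        m[13] += m[1] * x + m[5] * y + m[9] * z;
        m[14] += m[2] * x + m[6] * y + m[10] * z;
        m[15] += m[3] * x + m[7] * y + m[11] * z;
    }
}
```

Starting from the identity, translating by (0, 0, 100) simply puts 100 into element 14, which moves the model 100 units along the camera's Z axis.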

 

 
 

Finding the recognition part

January 29, 2013 - 3:24am #53

Yes, renderFrame is executed and I see the LOG.

Finding the recognition part

January 29, 2013 - 2:59am #52

Hi, the renderFrame method should be executed, since it is called from onDrawFrame, so you should at least see this log:

Java_com_qualcomm_QCARSamples_ImageTargets_GLRenderer_renderFrame

If not, it means that something in your sample code prevents onDrawFrame from being called, or from calling the "renderFrame" native function (are you sure you only disabled the "loadTracker" function with respect to the original sample code?)

I would suggest putting a log at every line of onDrawFrame, so you can see clearly what is executed and what is not.

 

Finding the recognition part

January 29, 2013 - 1:53am #51

I think this part is not executed... I have put a LOG in it and it doesn't show... anyway, my renderFrame method is as follows:

 

JNIEXPORT void JNICALL
Java_com_qualcomm_QCARSamples_ImageTargets_ImageTargetsRenderer_renderFrame(JNIEnv *, jobject)
{
    LOG("Java_com_qualcomm_QCARSamples_ImageTargets_GLRenderer_renderFrame");

    // Clear color and depth buffer 
    glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

    // Get the state from QCAR and mark the beginning of a rendering section
    QCAR::State state = QCAR::Renderer::getInstance().begin();
    
    // Explicitly render the Video Background
    QCAR::Renderer::getInstance().drawVideoBackground();
       
#ifdef USE_OPENGL_ES_1_1
    // Set GL11 flags:
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_NORMAL_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);

    glEnable(GL_TEXTURE_2D);
    glDisable(GL_LIGHTING);
        
#endif

    glEnable(GL_DEPTH_TEST);
    glEnable(GL_CULL_FACE);

    // Did we find any trackables this frame?
    for(int tIdx = 0; tIdx < state.getNumActiveTrackables(); tIdx++)
    {
        // Get the trackable:
        const QCAR::Trackable* trackable = state.getActiveTrackable(tIdx);
        QCAR::Matrix44F modelViewMatrix =
            QCAR::Tool::convertPose2GLMatrix(trackable->getPose());        

        // Choose the texture based on the target name:
        int textureIndex;
        if (strcmp(trackable->getName(), "chips") == 0)
        {
            textureIndex = 0;
        }
        else if (strcmp(trackable->getName(), "stones") == 0)
        {
            textureIndex = 1;
        }
        else
        {
            textureIndex = 2;
        }

        const Texture* const thisTexture = textures[textureIndex];

#ifdef USE_OPENGL_ES_1_1
        // Load projection matrix:
        glMatrixMode(GL_PROJECTION);
        glLoadMatrixf(projectionMatrix.data);

        // Load model view matrix:
        glMatrixMode(GL_MODELVIEW);
        glLoadMatrixf(modelViewMatrix.data);
        glTranslatef(0.f, 0.f, kObjectScale);
        glScalef(kObjectScale, kObjectScale, kObjectScale);

        // Draw object:
        glBindTexture(GL_TEXTURE_2D, thisTexture->mTextureID);
        glVertexPointer(3, GL_FLOAT, 0, bananaVerts);
  glNormalPointer(GL_FLOAT, 0, bananaNormals);
  glTexCoordPointer(2, GL_FLOAT, 0, bananaTexCoords);
        glDrawArrays(GL_TRIANGLES, 0, bananaNumVerts);
#else

        QCAR::Matrix44F modelViewProjection;

        SampleUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScale,
                                         &modelViewMatrix.data[0]);
        SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,
                                     &modelViewMatrix.data[0]);
        SampleUtils::multiplyMatrix(&projectionMatrix.data[0],
                                    &modelViewMatrix.data[0] ,
                                    &modelViewProjection.data[0]);

        glUseProgram(shaderProgramID);
         
        glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0,bananaVerts);
        glVertexAttribPointer(normalHandle, 3, GL_FLOAT, GL_FALSE, 0,bananaNormals);
        glVertexAttribPointer(textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0,bananaTexCoords);
        
        glEnableVertexAttribArray(vertexHandle);
        glEnableVertexAttribArray(normalHandle);
        glEnableVertexAttribArray(textureCoordHandle);
        
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, thisTexture->mTextureID);
        glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE,
                           (GLfloat*)&modelViewProjection.data[0] );
        glDrawArrays(GL_TRIANGLES, 0, bananaNumVerts);

        SampleUtils::checkGlError("ImageTargets renderFrame");
#endif

    }

    glDisable(GL_DEPTH_TEST);

#ifdef USE_OPENGL_ES_1_1        
    glDisable(GL_TEXTURE_2D);
    glDisableClientState(GL_VERTEX_ARRAY);
    glDisableClientState(GL_NORMAL_ARRAY);
    glDisableClientState(GL_TEXTURE_COORD_ARRAY);
#else
    glDisableVertexAttribArray(vertexHandle);
    glDisableVertexAttribArray(normalHandle);
    glDisableVertexAttribArray(textureCoordHandle);
#endif

    QCAR::Renderer::getInstance().end();
}


void
configureVideoBackground()
{
    // Get the default video mode:
    QCAR::CameraDevice& cameraDevice = QCAR::CameraDevice::getInstance();
    QCAR::VideoMode videoMode = cameraDevice.
                                getVideoMode(QCAR::CameraDevice::MODE_DEFAULT);


    // Configure the video background
    QCAR::VideoBackgroundConfig config;
    config.mEnabled = true;
    config.mSynchronous = true;
    config.mPosition.data[0] = 0.0f;
    config.mPosition.data[1] = 0.0f;
    
    if (isActivityInPortraitMode)
    {
        //LOG("configureVideoBackground PORTRAIT");
        config.mSize.data[0] = videoMode.mHeight
                                * (screenHeight / (float)videoMode.mWidth);
        config.mSize.data[1] = screenHeight;

        if(config.mSize.data[0] < screenWidth)
        {
            LOG("Correcting rendering background size to handle missmatch between screen and video aspect ratios.");
            config.mSize.data[0] = screenWidth;
            config.mSize.data[1] = screenWidth * 
                              (videoMode.mWidth / (float)videoMode.mHeight);
        }
    }
    else
    {
        //LOG("configureVideoBackground LANDSCAPE");
        config.mSize.data[0] = screenWidth;
        config.mSize.data[1] = videoMode.mHeight
                            * (screenWidth / (float)videoMode.mWidth);

        if(config.mSize.data[1] < screenHeight)
        {
            LOG("Correcting rendering background size to handle missmatch between screen and video aspect ratios.");
            config.mSize.data[0] = screenHeight
                                * (videoMode.mWidth / (float)videoMode.mHeight);
            config.mSize.data[1] = screenHeight;
        }
    }

    LOG("Configure Video Background : Video (%d,%d), Screen (%d,%d), mSize (%d,%d)", videoMode.mWidth, videoMode.mHeight, screenWidth, screenHeight, config.mSize.data[0], config.mSize.data[1]);

    // Set the config:
    QCAR::Renderer::getInstance().setVideoBackgroundConfig(config);
}

 

Please help...

Finding the recognition part

January 29, 2013 - 1:35am #50

Hi, can you confirm that this code (or very similar code) is being executed at the moment:

 

 QCAR::Matrix44F modelViewProjection;
 
        SampleUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScale,
                                         &modelViewMatrix.data[0]);
        SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale,
                                     &modelViewMatrix.data[0]);
        SampleUtils::multiplyMatrix(&projectionMatrix.data[0],
                                    &modelViewMatrix.data[0] ,
                                    &modelViewProjection.data[0]);
?
 
Maybe it's safer if you paste your full renderFrame() code here; then I'll guide you through the necessary modifications.

Finding the recognition part

January 29, 2013 - 1:21am #49

How do I set the modelViewMatrix then? Is that why my object is not rendered properly? What is the way out in that case?

Finding the recognition part

January 29, 2013 - 1:15am #48

OK, then you almost certainly need to set the modelview matrix somehow (in the original sample, the modelview is set from the pose of the trackable, but since you have no trackable, your modelview is probably just not set...)

Could you check that?

 

Finding the recognition part

January 29, 2013 - 1:12am #47

Yes, I cannot see the model; however, the video background is there... you got that right.

Finding the recognition part

January 29, 2013 - 12:42am #46

Hi, I will clarify my previous message:

onDrawFrame is called automatically, if you have attached mGLView to your main activity view (see the variable mGLView in ImageTargets.java) and if you have set the renderer (mGLView.setRenderer).

In the first part ("onDrawFrame is called automatically"), I am just saying that normally you don't need to do anything special to have onDrawFrame called, provided the other conditions (mGLView, etc.) in my message are met;

in the second part ("if you have attached the mGLView to your main activity view..."), I am just asking you to double-check that your "mGLView" code in ImageTargets.java is exactly the same as in the original sample; since you confirm that you did not change that code, that's absolutely fine, so no problem there;

in the third part ("and if you have set the renderer (mGLView.setRenderer)"), I simply mean that there is a line of code in initApplicationAR() that does mGLView.setRenderer(); all I'm saying is to double-check again that you did not remove that line; as you confirmed that the line is still in your initApplicationAR() function, that's fine too. Very important: I am not saying that you need to add another mGLView.setRenderer somewhere else in your code; all I was saying is to keep the existing code (just in case you had removed it for some reason).

 

So, if all of the above is in order, your renderFrame is certainly executed (and you don't need to do anything special for that);

then, the reason why you don't see the 3D model is probably due to some issue in the rendering code (in C++);

in particular, since you don't use the trackables anymore, how do you set the modelview matrix ? 

(I assume you can see the video background, but not the model, correct ?)

 

 

Finding the recognition part

January 28, 2013 - 11:39pm #45

I think I have not been able to make myself understood. My ImageTargets.java file is the same as the sample code except for two differences:

1. I have excluded the loading of trackables in the Java file (so that the corresponding native methods are never called).

2. The setRGB565CameraImage() method has been added, and its body is as follows:

public void setRGB565CameraImage(byte[] buffer, int width, int height) {
    try {
        bitmap = Bitmap.createBitmap(width, height, Bitmap.Config.RGB_565);
        bitmap.copyPixelsFromBuffer(ByteBuffer.wrap(buffer));

        ...
        ...
        ...
        // {Code for face detection}
        if (face detected)
            mGlView.setRenderer(mRenderer);
    } catch (Exception ex) {
        System.out.println("error...." + ex);
        ex.printStackTrace();
    }
}

Now, mGlView.setRenderer(mRenderer) is already called in the initApplicationAR() method, so I am getting an error like this...

 


01-29 12:54:55.143: W/System.err(29033): java.lang.IllegalStateException: setRenderer has already been called for this instance.
01-29 12:54:55.143: W/System.err(29033):  at android.opengl.GLSurfaceView.checkRenderThreadState(GLSurfaceView.java:1623)
01-29 12:54:55.143: W/System.err(29033):  at android.opengl.GLSurfaceView.setRenderer(GLSurfaceView.java:299)
01-29 12:54:55.143: W/System.err(29033):  at com.qualcomm.QCARSamples.ImageTargets.ImageTargets.setRGB565CameraImage(ImageTargets.java:871)
01-29 12:54:55.143: W/System.err(29033):  at com.qualcomm.ar.pl.CameraPreview.newFrameAvailable(Native Method)
01-29 12:54:55.143: W/System.err(29033):  at com.qualcomm.ar.pl.CameraPreview.onPreviewFrame(CameraPreview.java:755)
01-29 12:54:55.143: W/System.err(29033):  at android.hardware.Camera$EventHandler.handleMessage(Camera.java:547)
01-29 12:54:55.143: W/System.err(29033):  at android.os.Handler.dispatchMessage(Handler.java:99)
01-29 12:54:55.143: W/System.err(29033):  at android.os.Looper.loop(Looper.java:130)
01-29 12:54:55.143: W/System.err(29033):  at android.app.ActivityThread.main(ActivityThread.java:3695)
01-29 12:54:55.143: W/System.err(29033):  at java.lang.reflect.Method.invokeNative(Native Method)
01-29 12:54:55.143: W/System.err(29033):  at java.lang.reflect.Method.invoke(Method.java:507)
01-29 12:54:55.143: W/System.err(29033):  at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:842)
01-29 12:54:55.143: W/System.err(29033):  at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:600)
01-29 12:54:55.143: W/System.err(29033):  at dalvik.system.NativeStart.main(Native Method)

What I need to do is render the object as long as I can detect the face. That's all.
I have been trying different ways, but there is no significant output!
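For reference, the usual way out of this error is to leave the single mGLView.setRenderer() call in initApplicationAR() and have the detection callback only flip a flag that the renderer checks on every frame. A minimal plain-Java sketch of the idea (class, field, and method names here are illustrative, not from the sample; drawFrame stands in for onDrawFrame calling the native renderFrame):

```java
// Sketch of the flag-based approach: the detection thread only writes a
// flag; the render loop reads it each frame. setRenderer is never called
// a second time, so the IllegalStateException cannot occur.
public class FlagSketch {
    // volatile: written by the camera/detection thread, read by the GL thread
    static volatile boolean faceDetected = false;

    static String drawFrame() {
        // stands in for onDrawFrame() -> native renderFrame()
        return faceDetected ? "render model" : "render background only";
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println(drawFrame());           // before detection
        Thread detector = new Thread(() -> faceDetected = true);
        detector.start();
        detector.join();
        System.out.println(drawFrame());           // after detection
    }
}
```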

 

Finding the recognition part

January 28, 2013 - 6:09am #44

onDrawFrame is called automatically, if you have attached the mGLView to your main activity view (see variable mGLView in ImageTargets.java) and if you have set the renderer (mGLView.setRenderer).

Please check the original code in ImageTargets (see mGLView and related code)

Finding the recognition part

January 28, 2013 - 5:35am #43

I have the renderFrame() method inside onDrawFrame()... that's fine, but how do I call this method from the ImageTargets class? In order to call onDrawFrame() from ImageTargets.java, I should pass a GL10 variable, but how do I initialize this variable?

Finding the recognition part

January 28, 2013 - 5:20am #42

If you just need to call renderFrame() without passing any info, that's even simpler, then (you can ignore that part).

Just make sure to have renderFrame inside onDrawFrame. That's all.

 

Finding the recognition part

January 28, 2013 - 5:11am #41

Can you elaborate a bit more on this?
What you have said I agree on that, but here my requirement is not regarding any information passing. I need to call the renderframe() method from ImageTargets class.
So, can you explain a bit more?

Finding the recognition part

January 28, 2013 - 4:58am #40

Hi,

thanks for the update on the call-chain;

so, yes, you need to keep the rendering part in the rendering thread (now I also see why you were asking about how to call render_frame() from ImageTargets.java...);

so, what you need to do is to keep the renderFrame() native method call inside onDrawFrame(), like in the original ImageTargets sample code;

so, due to the fact that your RGB565 image processing is done in ImageTargets.java and it is run on a different thread (not the rendering one), you will need to exchange some information between your code in ImageTargets.java and your code in ImageTargetsRenderer.java;

for example, suppose your code in ImageTargets.java computes the position of a 3D object based on some custom algorithm; then you can simply pass that information over to ImageTargetsRenderer and store it into a member variable in the ImageTargetsRenderer class;

then, when onDrawFrame() is executed, you will be able to use for example that member variable...

I hope this makes sense...
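The hand-over described above can be sketched in plain Java (the class and method names are illustrative; in the sample the writer would be the camera/processing thread calling into ImageTargetsRenderer, and the reader would be onDrawFrame() on the GL thread):

```java
// Sketch: handing a position computed on the camera thread over to the
// renderer through a synchronized member variable. The GL thread takes a
// defensive copy so it never sees a half-written update.
public class RendererHandoff {
    private final Object lock = new Object();
    private float[] modelPosition = {0f, 0f, 0f};

    // called from ImageTargets.java (camera/processing thread)
    public void setModelPosition(float x, float y, float z) {
        synchronized (lock) {
            modelPosition = new float[] { x, y, z };
        }
    }

    // called from onDrawFrame() (GL thread)
    public float[] snapshotPosition() {
        synchronized (lock) {
            return modelPosition.clone();
        }
    }

    public static void main(String[] args) {
        RendererHandoff r = new RendererHandoff();
        r.setModelPosition(1f, 2f, -50f);
        float[] p = r.snapshotPosition();
        System.out.println(p[0] + " " + p[1] + " " + p[2]);
    }
}
```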

 

 

Finding the recognition part

January 28, 2013 - 4:31am #39

I am not too sure, but I think that the renderFrame() method is being called from the UI thread.
So, what is happening is that the object is rendered for a very short time, and not always. I am pretty sure this is because of some threading issue.

Though I have already explained my chain of method calls, I will try to elaborate:
1. In ImageTargets.cpp from the onUpdate() method (native) I am calling the setRGBCameraImage() method (java invocation) as suggested by you in another thread in the forum.

2. In setRGBCameraImage() method (in ImageTargets.java), I am performing operations on the RGB565 image and when the face is detected (in setRGBCameraImage() method body itself) I am calling render_method().

3. render_method is in the class ImageTargetsRenderer (I have already given you the code) that is calling the renderframe() method.

Now, I am doing all this from the same thread that is calling the setRGBCameraImage() method. However, I am not sure if that is the right thing to do.

Finding the recognition part

January 28, 2013 - 2:16am #38

Hi, I see, you have correctly implemented what I was suggesting (calling code from different classes); however you also need to pay attention that the rendering is executed from within the OpenGL rendering thread; so, for instance if you do something like the following:

  1. Java: ImageTargetRenderer onDrawFrame() => invokes renderFrame() (native); here you are in the OpenGL rendering thread (because onDrawFrame is executed in such thread);
  2. then from renderFrame (C++) => you pass the camera frame to Java for face detection (here you are still in the rendering thread)
  3. in Java you do some processing and then call back to native method renderFrame();

The sequence above is fine if step 3 is executed right after step 2, so that you are always in the rendering thread; however, if you initiate a call to mRenderer.render_method() from a thread that is not the OpenGL one, then you will have problems (e.g. nothing is rendered, or even crashing)

Could you give a detailed explanation of what you are doing with your code? (i.e. explain the call-chain of your functions)
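The rule above can be illustrated with a single-thread executor standing in for the GL thread: work posted from any other thread is serialized onto that one thread, so GL state is only ever touched from it. On Android, GLSurfaceView.queueEvent(Runnable) serves the same purpose of marshalling work onto the rendering thread. A minimal sketch (plain Java, runnable outside Android):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Sketch of the "stay on the rendering thread" rule: a single-thread
// executor plays the role of the GL thread, and other threads submit
// rendering work to it instead of touching GL state directly.
public class RenderThreadSketch {
    public static void main(String[] args) throws Exception {
        ExecutorService glThread = Executors.newSingleThreadExecutor();
        // Submitted from the main thread, but executed on the "GL" thread.
        Future<String> result = glThread.submit(() ->
                "rendered on " + Thread.currentThread().getName());
        System.out.println(result.get().startsWith("rendered on "));
        glThread.shutdown();
    }
}
```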

 

 

Finding the recognition part

January 27, 2013 - 11:06pm #37

The code snippet in ImageTargets.java in my method :
 

mRenderer.render_method();

And in ImageTargetsRenderer.java, there is a public method :

 public void render_method(){
     renderFrame();
    }

In the renderFrame() method I have a LOG... and in DDMS the log is being printed, which means it enters renderFrame(), but it is not rendering anything; what may be the possible reason?

Finding the recognition part

January 27, 2013 - 11:01pm #36

OK, I tried the way you suggested; now the problem is that I am calling a public method of ImageTargetsRenderer.java (that calls the renderFrame() method) from ImageTargets.java. So it enters the renderFrame method, but does not render anything!

Finding the recognition part

January 25, 2013 - 12:53am #35

Hi, the initApplicationAR() function should only be called once, during the initialization phase; I don't really see why you call that function to render your 3D object (please check the code in the ImageTarget sample to see where and why this function is called at initialization).

Second point, you say:

as I cannot directly call the native _renderframe() method outside the ImageTargetsRenderer class

not sure to understand here why you can't do that; you could for instance create a public method in ImageTargetRenderer to call the native method, and then call that method from your Java class (whatever it is).
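Such a wrapper might look like the following sketch; the native renderFrame() is faked with a plain method here so the snippet runs outside Android, and requestRender is an illustrative name, not from the sample:

```java
// Sketch of the suggested wrapper: a public method in the renderer class
// forwards to the (normally native) renderFrame(), so other classes never
// have to touch the native method directly.
public class ImageTargetsRendererSketch {
    // in the real sample this would be: public native void renderFrame();
    private void renderFrame() { System.out.println("renderFrame called"); }

    // public entry point callable from ImageTargets.java (or anywhere else)
    public void requestRender() { renderFrame(); }

    public static void main(String[] args) {
        new ImageTargetsRendererSketch().requestRender();
    }
}
```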

 

 

Finding the recognition part

January 24, 2013 - 9:26pm #34

What I meant is that when my face is detected, I want to render my 3D object. For that we are calling the initApplicationAR() method (as I cannot directly call the native _renderframe() method outside the ImageTargetsRenderer class) when the face is detected in ImageTargets.java, through the method setRGB565CameraImage(). But when I do that, the camera stops and the program exits without any errors.
So I tried another way: instead of calling the initApplicationAR() method, I tried the two lines of code that I have already mentioned in the last comment, but then there was an error... and the log is in the last comment.
Now what I need to know is: how do I invoke the native _renderframe() method to render my object when a face is detected?

Finding the recognition part

January 24, 2013 - 5:37am #33

Hey, can you be a bit clearer here; what do you mean by "we are essentially doing is calling the initApplicationAR() method" ?

from which function are you calling such method ?

Finding the recognition part

January 24, 2013 - 5:20am #32

Hey,
It worked successfully... however, we are stuck at one last point, and that is invoking the _renderFrame() method from our method... so what we are essentially doing is calling the initApplicationAR() method.
But as soon as the method is called, it exits and the application stops. However, there are no errors!
I tried the following two lines of code:

 

        mRenderer = new ImageTargetsRenderer();
        mGlView.setRenderer(mRenderer);

and there was error...
And the logcat output is as under :

01-24 17:50:05.391: W/System.err(9770): java.lang.IllegalStateException: setRenderer has already been called for this instance.
01-24 17:50:05.391: W/System.err(9770): at android.opengl.GLSurfaceView.checkRenderThreadState(GLSurfaceView.java:1623)
01-24 17:50:05.391: W/System.err(9770): at android.opengl.GLSurfaceView.setRenderer(GLSurfaceView.java:299)
01-24 17:50:05.391: W/System.err(9770): at com.qualcomm.QCARSamples.ImageTargets.ImageTargets.setRGB565CameraImage(ImageTargets.java:867)
01-24 17:50:05.391: W/System.err(9770): at com.qualcomm.ar.pl.CameraPreview.newFrameAvailable(Native Method)
01-24 17:50:05.391: W/System.err(9770): at com.qualcomm.ar.pl.CameraPreview.onPreviewFrame(CameraPreview.java:755)
01-24 17:50:05.391: W/System.err(9770): at android.hardware.Camera$EventHandler.handleMessage(Camera.java:547)
01-24 17:50:05.391: W/System.err(9770): at android.os.Handler.dispatchMessage(Handler.java:99)
01-24 17:50:05.391: W/System.err(9770): at android.os.Looper.loop(Looper.java:130)
01-24 17:50:05.391: W/System.err(9770): at android.app.ActivityThread.main(ActivityThread.java:3695)
01-24 17:50:05.391: W/System.err(9770): at java.lang.reflect.Method.invokeNative(Native Method)
01-24 17:50:05.391: W/System.err(9770): at java.lang.reflect.Method.invoke(Method.java:507)
01-24 17:50:05.391: W/System.err(9770): at com.android.internal.os.ZygoteInit$MethodAndArgsCaller.run(ZygoteInit.java:842)
01-24 17:50:05.391: W/System.err(9770): at com.android.internal.os.ZygoteInit.main(ZygoteInit.java:600)
01-24 17:50:05.391: W/System.err(9770): at dalvik.system.NativeStart.main(Native Method)

Finding the recognition part

January 23, 2013 - 1:39am #31

OK, that means that the JavaVM has not been initialized;

do you have this code in your cpp file:

// global variables
JavaVM* javaVM = 0;
jobject activityObj = 0;

JNIEXPORT jint JNICALL
JNI_OnLoad(JavaVM* vm, void* reserved) {
    LOG("JNI_OnLoad");
    javaVM = vm;
    return JNI_VERSION_1_4;
}

Is the JNI_OnLoad being executed (see LOG "JNI_OnLoad") ?

(Maybe double-check once again the guide here, to see if you have missed some steps:

https://developer.vuforia.com/forum/faq/android-how-can-i-access-th-camera-image)

 

Finding the recognition part

January 23, 2013 - 1:32am #30

Hi,

I am getting JavaVM  is NULL.

Finding the recognition part

January 23, 2013 - 12:43am #29

Hi, can you check (via Log) if javaVM or activityObj are NULL?

if one of them is null, it means that it has not been initialized correctly (or at all)

you can add, just before the check if ((javaVM != 0) && (activityObj != 0) && (javaVM->GetEnv((void**)&env, JNI_VERSION_1_4) ),

something like:

if (!javaVM) LOG("javaVM is null");

if (!activityObj) LOG("activityOBJ is null");

Finding the recognition part

January 22, 2013 - 11:26pm #28

No, it doesn't... does it have something to do with the JNI version check that you are doing?
 

(javaVM->GetEnv((void**)&env, JNI_VERSION_1_4) == JNI_OK)

though I included the blocks, I didn't get it.
Anyway, do you have any ideas why it is not happening?

Finding the recognition part

January 22, 2013 - 10:59pm #27

Hi, what does your log say ?

is it printing the message "About to invoke the java method.......setRGB565CameraImage........."?

 
