
Android NativeActivity Integration

November 25, 2013 - 8:37am #1

Hi,

I'm trying to integrate the Vuforia SDK with our existing Android app. Our app is a large native app with minimal Java code. My approach has been to try to get the camera feed rendering over the top of our GL clear calls by integrating the relevant changes found in either the ImageTargets sample, or as found here...

https://developer.vuforia.com/resources/dev-guide/android-native-activities

At this point I think I've made all the required changes: the sample app works, but my app, while appearing to run, renders nothing more than it usually would.

In brief the things I've done are...

- Modify our build process such that libQCAR.so is packaged with the APK.

- Loaded libQCAR.so inside a static block of Java code.

- Called QCAR.setInitParameters and QCAR.init. We used GL_20, and both calls succeed.

- I've added hooks for onPause and onResume and forwarded to QCAR.

- I've added a Java method named onGLInitialized, called from native code after the window and GL context have been created and from here have called QCAR.onSurfaceCreated and onSurfaceChanged.

- I've added a block of native code to initialize and start the camera, called in the middle of our app's initialisation. This appears to complete without error.

- I've added a call to drawVideoBackground, called once per frame inside begin/end calls on the QCAR renderer instance.

- I've also added manifest permissions for the camera.

I was expecting at this point I'd see the camera feed rendering behind my own 3D graphics, but instead I just see the clear colour.

I'm not sure what I could be missing. I can't see much else relevant in the sample code. I can provide more details on any of the changes I've made if that helps but otherwise if anyone could point out what I'm missing I'd be very appreciative!

Thanks,

Dave.

Android NativeActivity Integration

November 28, 2013 - 9:37am #32

Well,

the FBO part is "optional" (i.e. you can also build it somewhere else);

the important part of what I'm suggesting below is to keep the camera initialisation, the background configuration, and the starting of the camera and tracker together in one "monolithic" block, so that everything related to QCAR and camera initialisation is done in the same thread (as opposed to having the video background configured in the render thread), while keeping the rendering-specific part in the render thread.

Then, the boolean flag would allow you to only render if camera and tracker are correctly initialised in the main thread (otherwise skip).

So, really what I mean is this (forgetting the FBO issue):

** Main Thread - Initialise, configure and start Camera and Tracker (all in one block)  **

QCAR::registerCallback(&updateCallback);
QCAR::TrackerManager& trackerManager = QCAR::TrackerManager::getInstance();
imageTracker = trackerManager.initTracker(QCAR::Tracker::IMAGE_TRACKER);

QCAR::CameraDevice::getInstance().init(QCAR::CameraDevice::CAMERA_DEFAULT);

configureVideoBackground();

QCAR::Renderer::getInstance().setVideoBackgroundConfig(config);

QCAR::CameraDevice::getInstance().selectVideoMode(QCAR::CameraDevice::MODE_DEFAULT);
QCAR::setFrameFormat(QCAR::RGB565, true);
QCAR::CameraDevice::getInstance().start();
QCAR::setFrameFormat(QCAR::RGB565, true);
imageTracker->start();

set some boolean flag like 'cameraAndTrackerStarted' (set it here to TRUE)

** Render Thread - Render the video background (once per frame) **

check boolean flag 'cameraAndTrackerStarted' => if false, skip rendering

else

QCAR::State state = QCAR::Renderer::getInstance().begin();
QCAR::Renderer::getInstance().drawVideoBackground();
QCAR::Renderer::getInstance().end();

etc...
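Since 'cameraAndTrackerStarted' is written by the main thread and read by the render thread, it should be published safely across threads. A minimal sketch of that handshake in C++ (the QCAR calls are elided and the function names are just illustrative):

```cpp
#include <atomic>

// Set by the main thread once camera + tracker initialisation has
// completed; read by the render thread every frame. std::atomic gives
// the cross-thread visibility guarantee without needing a mutex.
static std::atomic<bool> cameraAndTrackerStarted{false};

void mainThreadInit()
{
    // ... QCAR camera/tracker init + configureVideoBackground() here ...
    cameraAndTrackerStarted.store(true, std::memory_order_release);
}

bool renderFrame()
{
    if (!cameraAndTrackerStarted.load(std::memory_order_acquire))
        return false;  // init not finished yet: skip this frame

    // ... begin() / drawVideoBackground() / end() here ...
    return true;
}
```

With this, the render loop simply skips frames until the main thread has finished the monolithic init block.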

 

I have also sent you a PM.

 

Android NativeActivity Integration

November 28, 2013 - 9:24am #31

I'm happy to try different things to make this work, but I'm not sure I understand in this case how the QCAR calls could be influenced in any way by a change to the thread where the FBO is constructed? Could you explain the motivation for this line of thinking?

Android NativeActivity Integration

November 28, 2013 - 9:09am #30

Could you try this scheme?

 

** Main Thread - Initialise, configure and start Camera and Tracker (all in one block)  **

QCAR::registerCallback(&updateCallback);
QCAR::TrackerManager& trackerManager = QCAR::TrackerManager::getInstance();
imageTracker = trackerManager.initTracker(QCAR::Tracker::IMAGE_TRACKER);

QCAR::CameraDevice::getInstance().init(QCAR::CameraDevice::CAMERA_DEFAULT);

configureVideoBackground();

QCAR::Renderer::getInstance().setVideoBackgroundConfig(config);

QCAR::CameraDevice::getInstance().selectVideoMode(QCAR::CameraDevice::MODE_DEFAULT);
QCAR::setFrameFormat(QCAR::RGB565, true);
QCAR::CameraDevice::getInstance().start();
QCAR::setFrameFormat(QCAR::RGB565, true);
imageTracker->start();

set some boolean flag like 'cameraAndTrackerStarted' (set it here to TRUE)

** Render Thread - Render the video background (once per frame) **

check boolean flag 'cameraAndTrackerStarted' => if false, skip rendering

else:

if (first time rendering) => build the FBO 

QCAR::State state = QCAR::Renderer::getInstance().begin();
QCAR::Renderer::getInstance().drawVideoBackground();
QCAR::Renderer::getInstance().end();

etc...

 

 

Android NativeActivity Integration

November 28, 2013 - 8:07am #29

I did try moving the majority of the QCAR calls over to our renderthread, initialising everything just ahead of the first QCAR rendering calls, but this seemed to upset the call to selectVideoMode on the camera for some reason (it started returning false).

I've just tried this configuration, but the results are as per the original configuration, where the call to drawVideoBackground silently fails.

** Main Thread - Initialise the camera **

QCAR::registerCallback(&updateCallback);
QCAR::TrackerManager& trackerManager = QCAR::TrackerManager::getInstance();
imageTracker = trackerManager.initTracker(QCAR::Tracker::IMAGE_TRACKER);
QCAR::CameraDevice::getInstance().init(QCAR::CameraDevice::CAMERA_DEFAULT);

** Render Thread - Initialise the video background **

QCAR::Renderer::getInstance().setVideoBackgroundConfig(config);

** Main Thread - Start the camera/tracker **

QCAR::CameraDevice::getInstance().selectVideoMode(QCAR::CameraDevice::MODE_DEFAULT);
QCAR::setFrameFormat(QCAR::RGB565, true);
QCAR::CameraDevice::getInstance().start();
QCAR::setFrameFormat(QCAR::RGB565, true);
imageTracker->start();

** Render Thread - Render the video background (once per frame) **

QCAR::State state = QCAR::Renderer::getInstance().begin();
QCAR::Renderer::getInstance().drawVideoBackground();
QCAR::Renderer::getInstance().end();

Android NativeActivity Integration

November 28, 2013 - 7:37am #28

Have you tried moving the FBO construction to the render thread itself ?

As you said: 

For QCAR we call configureVideoBackground on the main thread at the same point where we construct our FBOs, and then call into the QCAR render code from the render thread at an appropriate point (currently inside the clear).

You could maybe keep the configureVideoBackground in the main thread, but only build the FBO in the render thread (basically, you can build it just before rendering the first frame).

Not sure if this could play a role in the issue, but perhaps worth trying.

Meanwhile I'm digging into the multi-threading issue; I will get back to you ASAP.

 

Android NativeActivity Integration

November 28, 2013 - 7:26am #27

That's right.

I'm a little surprised by this though. There's no technical reason I know of why we can't call into the GL layer from multiple threads. I assume the QCAR layer is running some validation of the thread in use and refusing to execute the draw call under some circumstance.

Is it the case that all the Renderer calls need to be on our render thread and all the others on our main thread? If so, it looks like I'm going to have to put something fairly complex in place to distribute the QCAR calls between two threads, which is a shame.
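One relatively lightweight way to distribute the calls without restructuring everything is a small command queue: any thread posts a closure, and the render thread drains the queue at the top of its frame. A sketch, using only standard C++ (the class and method names are hypothetical):

```cpp
#include <functional>
#include <mutex>
#include <queue>

// Tasks posted from any thread; executed on the render thread only.
class RenderThreadQueue
{
public:
    void post(std::function<void()> task)
    {
        std::lock_guard<std::mutex> lock(mMutex);
        mTasks.push(std::move(task));
    }

    // Called once per frame at the start of the render loop.
    void drain()
    {
        std::queue<std::function<void()>> pending;
        {
            std::lock_guard<std::mutex> lock(mMutex);
            std::swap(pending, mTasks);
        }
        while (!pending.empty())
        {
            pending.front()();  // e.g. a QCAR::Renderer call
            pending.pop();
        }
    }

private:
    std::mutex mMutex;
    std::queue<std::function<void()>> mTasks;
};
```

The main thread would post, say, the video background configuration as a task, and the render thread would call drain() just before its begin()/drawVideoBackground()/end() sequence, so every Renderer call still runs on the render thread.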

Android NativeActivity Integration

November 28, 2013 - 6:58am #26

Oh, I see. Happy to hear of the progress.

...If I force our rendering process to execute on the main thread, things seem to start working...

So, can you confirm that you can see the video background (and your own content on top) when doing it on the main thread (although I understand that this is not the final solution, as you don't want to put your rendering in the main thread)?

 

 

Android NativeActivity Integration

November 28, 2013 - 6:47am #25

I'm getting somewhere now. I have at least identified something I can change which allows me to get the feed working. The problem seems to be linked to threading.

Our application uses three threads which interface with the GL context. We store a context per thread using thread-local storage. We have a rendering thread which holds the primary context, and our main thread and a loading thread, which each hold a shared context. The main and loading threads are only used for asset creation. The rendering thread is the only thread that executes actual rendering commands.

For QCAR we call configureVideoBackground on the main thread at the same point where we construct our FBOs, and then call into the QCAR render code from the render thread at an appropriate point (currently inside the clear).

If I force our rendering process to execute on the main thread, things seem to start working.

So I could do with working out what aspect of our thread setup is causing problems for the QCAR calls. Moving the render to the main thread is not an option really, but hopefully there is another simple workaround for whatever the problem turns out to be.

Android NativeActivity Integration

November 28, 2013 - 4:53am #24

Hi, concerning the state changes; the page you refer to is basically correct.

It does not mention the binding of a shader explicitly, just because in OpenGL ES 2.0 you always bind a shader when you render something; so the background texture is also rendered with its own 'ad hoc' shader (it could not be otherwise, unless you use OpenGL ES 1.x);

so, the shader program is unbound at the end of the drawVideoBackground call, but this is not relevant, because any subsequent OpenGL rendering operation (e.g. rendering your own 3D model) would anyway require you to bind your own custom shader to render your piece of geometry.

Concerning the Textures and texture units: 

you made a good observation here; actually, drawVideoBackground internally selects Active Texture Unit 0, so the video texture is always rendered exclusively on Unit 0; selecting a different texture unit before calling drawVideoBackground() will not have any effect, because drawVideoBackground will anyway re-select Unit 0 just before binding its video texture. This also explains all your observations.

 

FBOs:

ok, good that you raise this point, as I now make the link with the "render targets" that you were mentioning in a previous message, and I think this is where the culprit may be:

You should definitely be able to bind an FBO, and then the drawVideoBackground() should just render into that FBO (namely into the depth render buffer and the color texture that you have attached to your FBO); 

but could you then confirm that you are following this order of operations:

1. Bind your FBO

2. glClear (+ optional: extra cleaning calls + glViewport)

3. QCAR renderer begin()

4. QCAR drawVideoBackground()

5. QCAR renderer end()

6. your 3D rendering here (bind your shaders, bind your textures, draw your geometry, ...)

7. render your color render target by binding the FBO color texture to a screen-aligned quad 

Does this represent (more or less correctly) your rendering pipeline?

Also, if this is the pipeline that you are implementing, what happens if you remove step 6 (i.e. if you don't render your own geometry)?

 

 

 

Android NativeActivity Integration

November 28, 2013 - 3:33am #23

Hi,

I'm still testing and investigating various things here but I do have a few observations to share.

First, I found a page documenting the GL state changes that occur inside the begin() call here...

https://developer.vuforia.com/resources/dev-guide/opengl-state-changes-video-background-renderer

I find it interesting that it doesn't mention that a new shader may be bound by the call. Is this the full set of state changes?

It does state, though, that the texture bound to unit 0 should change. On this basis I thought I would try to confirm that the texture had been bound. I added some calls as here...

glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);

QCAR::State state = QCAR::Renderer::getInstance().begin();

GLint boundTexture1;
glGetIntegerv(GL_TEXTURE_BINDING_2D, &boundTexture1);

QCAR::Renderer::getInstance().drawVideoBackground();

GLint boundTexture2;
glGetIntegerv(GL_TEXTURE_BINDING_2D, &boundTexture2);

QCAR::Renderer::getInstance().end();

Basically I'm trying to confirm that a new texture was bound, in an attempt to prove that these calls are actually doing some GL work. After this sequence boundTexture1 and boundTexture2 are both set to 0. If I bind some other texture at the top rather than clearing the bound texture to 0, that texture remains bound across the entire sequence.

Something else worth mentioning is that you recommended clearing any FBO bindings. Our engine exclusively uses FBOs, so adding this code to our current setup is likely to break a lot of things. Can I expect drawVideoBackground to be able to draw to the currently bound FBO?

Thanks,

Dave.

 


Android NativeActivity Integration

November 28, 2013 - 3:20am #22

I made a few more tests and double-checked with my team, and it appears you can also call your custom rendering code AFTER the renderer.end(), i.e., you can try this:

  • glClear() (+ some extra OpenGL cleanup statements)
  • QCAR::Renderer::getInstance().begin();
  • QCAR::Renderer::getInstance().drawVideoBackground();
  • QCAR::Renderer::getInstance().end()
  • LAST STEP: render your stuff here, without calling any glClear, so that it will be rendered right on top of the video background

 

As said before, a safer approach would also add some extra OpenGL state cleanup statements before rendering the video, as shown in the following:

// Clear color and depth buffer
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClearDepthf(1.0f);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// optionally: glViewport( ... ) and glScissor( ... )

// Safer to disable depth testing explicitly, before rendering QCAR video
glDisable(GL_DEPTH_TEST);

// Safer to disable face culling explicitly, in case your own engine messes up with it
glDisable(GL_CULL_FACE);

// Safer to cleanup (unbind) all textures and FBOs
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Get the state from QCAR and mark the beginning of a rendering section
QCAR::State state = QCAR::Renderer::getInstance().begin();

// Explicitly render the Video Background
QCAR::Renderer::getInstance().drawVideoBackground();

 

Have you already tried this, including all the OpenGL cleanups?

 

Android NativeActivity Integration

November 27, 2013 - 10:35am #21

I see. Let me check with our team what the OpenGL state issues are with putting code after the renderer.end() call, and get back to you.

Meanwhile, would it be possible for you, just for the sake of testing, to try the sequence I describe but without calling your own rendering code?

i.e. just glClear / begin / drawVideoBackground / end?

(also include the "safe" OpenGL state settings, like glDisable(GL_DEPTH_TEST) etc...)

This would at least prove whether the video background is rendered to the screen.

 

 

 

Android NativeActivity Integration

November 27, 2013 - 10:13am #20

I think we will struggle to make this fit the sequence you describe. From our perspective the QCAR call is the custom rendering code, and we are looking to find the correct location to insert it into our rendering pipeline, rather than the other way around. To fit the structure you describe I think I would need to wrap our entire rendering process (possibly hundreds of draw calls, thousands of OpenGL calls, multiple render target changes, etc.) with begin()/end() calls? At the very least I would need to isolate the main render target and wrap all of the draw calls we issue.

Would there be any negative consequences to the rendering code inside the begin()/end() section taking a long time to complete (e.g. 20-30 ms)?

Also, I'm not sure I can see why our rendering code being either before or after the end() call would make a difference. It should have already filled the screen by then, so either way I would expect to see something on screen.

I suppose I need to experiment with a few different approaches here and see what works.

Android NativeActivity Integration

November 27, 2013 - 8:41am #19

Ok, so, yes, please check that you are not clearing with glClear in more than one place, as this could be an issue (you only need to glClear() right before drawing the video background).

 

Also, the workflow that you describe (drawVideoBackground, then renderer.end(), and then rendering your own stuff) is incorrect;

the correct order of operations should basically be like the following:

  • glClear() 
  • QCAR::Renderer::getInstance().begin();
  • QCAR::Renderer::getInstance().drawVideoBackground();
  • render your stuff here, without calling any glClear, so that it will be rendered right on top of the video background
  • QCAR::Renderer::getInstance().end()

 

For a safer approach, you could also add some extra OpenGL settings cleanup, as shown in the following:

// Clear color and depth buffer
glClearColor(0.0f, 0.0f, 0.0f, 1.0f);
glClearDepthf(1.0f);

glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

// optionally: glViewport( ... ) and glScissor( ... )

// Safer to disable depth testing explicitly, before rendering QCAR video
glDisable(GL_DEPTH_TEST);

// Safer to disable face culling explicitly, in case your own engine messes up with it
glDisable(GL_CULL_FACE);

// Safer to cleanup (unbind) all textures and FBOs
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, 0);

// Get the state from QCAR and mark the beginning of a rendering section
QCAR::State state = QCAR::Renderer::getInstance().begin();

// Explicitly render the Video Background
QCAR::Renderer::getInstance().drawVideoBackground();

Let me know if this solves the issue...
 
 
 
 

Android NativeActivity Integration

November 27, 2013 - 5:46am #18

Yes it feels like we are nearly there. In answer to you questions...

We do call glClear before calling renderer.begin. We actually clear color, depth and stencil I believe.

I believe our viewport and scissor are correct across our clear. I can try adding an explicit glViewport call before we call into the QCAR render, or before glClear, just in case the viewport is not correctly configured. Is there any other state we must set before the QCAR render call?

After calling drawVideoBackground, we then call renderer.end, before going on to render the remainder of our frame. Inside the begin/end section we don't do anything other than call drawVideoBackground.

In this instance we should only be calling glClear once per frame, but I will try to verify that this is definitely the case. Once I move beyond simply testing this I'll of course look to decouple the clear call from the QCAR call as we would usually call glClear many times per frame. What would happen if we tried to call drawVideoBackground more than once per frame?
 

Android NativeActivity Integration

November 27, 2013 - 3:37am #17

All right. So, we have made some nice progress. 

Now the QCAR camera is getting you the right pixels, which solves half of the problem.

The remaining half of the problem is that the video texture is not rendered to the screen for some reason, so there must be some issue at the OpenGL level.

Here I have some questions:

  • do you call glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT ) before calling renderer.begin()?
  • are you setting the GL viewport? Since you are rendering from native code, it is safer to add an explicit glViewport() call right before calling glClear()
  • after calling drawVideoBackground(), are you executing some code with your 3D engine, or are you merely calling drawVideoBackground() followed by renderer.end()?
  • if you are executing your custom engine OpenGL code, can you verify that you are not calling glClear() anywhere in your engine code? (for instance, if you have a custom Camera or Renderer class in your engine, like many 3D engines have, are you sure you are not executing any glClear() call in any of those classes?)

 

 

Android NativeActivity Integration

November 27, 2013 - 1:01am #16

Looks like I was calling setFrameFormat too early, and it was actually failing (returning false). Replacing the call I was making with a call before and after the call to start the camera has made a difference.

I now get 2 formats returned in the callback.

QCAR_onUpdate: frame.getNumImages() = 2   
 image->getFormat() = 4 (GREYSCALE)
 720 x 480 @ 7deee010   
 image->getFormat() = 1 (RGB565)
 720 x 480 @ 7df43010   

which seems like progress, but I still don't see anything on screen.

From what you have said so far, I assume the texture data now looks OK, so can we assume that the camera side is now working correctly?

Android NativeActivity Integration

November 26, 2013 - 2:50pm #15

OK

1. Try setting a different frame format, for instance RGB888 (instead of RGB565), as some devices support one but not the other; so there is a chance that your device may support RGB888 but not RGB565.

2. Make sure to call setFrameFormat() twice, i.e.  once before the CameraDevice::getInstance().start(), and once again right after that.

3. Last thing: could you use this code to check your formats? It sounds weird that you only get GRAYSCALE; usually you should get at least 2 formats (and up to 4).

QCAR::Frame frame = state.getFrame();
for (int i = 0; i < frame.getNumImages(); ++i)
{
    const QCAR::Image *image = frame.getImage(i);

    LOG("------------------");
    LOG("Image %d - size %d %d", i, image->getWidth(), image->getHeight());

    if (image->getFormat() == QCAR::RGB565)
        LOG("RGB565");
    else if (image->getFormat() == QCAR::RGB888)
        LOG("RGB888");
    else if (image->getFormat() == QCAR::RGBA8888)
        LOG("RGBA8888");
    else if (image->getFormat() == QCAR::YUV)
        LOG("YUV");
    else if (image->getFormat() == QCAR::GRAYSCALE)
        LOG("GRAYSCALE");

    LOG("------------------");
}

 

Android NativeActivity Integration

November 26, 2013 - 9:32am #14

Just so you know, I have tried calling QCAR::setFrameFormat(QCAR::RGB565, true) ahead of configuring the video background texture, but this doesn't seem to have made a difference.

Does QCAR in any way query the config that we have selected from OpenGL? Could that have any bearing on this?

Android NativeActivity Integration

November 26, 2013 - 9:13am #13

Ok, let me try the getImage as well and get back to you; you might need to explicitly assign a frame format...

 

Android NativeActivity Integration

November 26, 2013 - 8:44am #12

getVideoMode() does return 720x480, so that seems OK.

Only one image appears to be returned each time the callback triggers, as frame.getNumImages() always returns 1, so I think greyscale is the only format I am able to get back. I don't remember needing to specify the format anywhere, so otherwise I'm not sure where it comes from.

Android NativeActivity Integration

November 26, 2013 - 8:36am #11

Ok, thanks a lot for doing the test.

720 x 480 sounds about right, but please check that videoMode.mWidth and videoMode.mHeight (in the video background configuration code that you showed previously) actually match those values.

Second: Grayscale sounds suspicious; are you using the first image (i.e. frame.getImage(0)) from the frame?

 

 

Android NativeActivity Integration

November 26, 2013 - 8:29am #10

Hi,

I should add that I have not inspected the contents of the buffer that is returned, only the dimensions and format.

Thanks,

Dave.

Android NativeActivity Integration

November 26, 2013 - 8:25am #9

Hi,

I've done that now.

It would seem I do get back frames of data. The image seems to be 720x480 and, interestingly, seems to be flagged as GREYSCALE.

Does that sound right?

Thanks,

Dave.

Android NativeActivity Integration

November 26, 2013 - 5:44am #8

Ok. I think what you are doing here is correct.

I tried removing the configureVideoBackground() code from onSurfaceChanged() in the Java samples, and the only side effect is that the texture appears stretched, but it is still visible and well centered on the screen;

in your case you don't see the texture at all, so we can exclude that this is the issue.

 

Next step would be to verify that the camera actually delivers frames with valid pixel values.

You can do this by following this simple tutorial which explains how to retrieve the camera pixels via the Vuforia API:

https://developer.vuforia.com/forum/faq/android-how-can-i-access-camera-image

Note: you can ignore all the Java and JNI related stuff and simply use the relevant C++ code to get the camera image pixels. The important thing is to register the QCAR_onUpdate() callback with QCAR and call the getPixels() function inside this callback. Then you could log the values of the first few pixels in the pixel buffer, just to see that they are non-zero.

However, to do that, you will need to start the ImageTracker (see the sample code in ImageTargets.cpp), otherwise the QCAR_onUpdate() will not get called.

If the callback reports valid pixels from the camera, then we can say that the problem is really just in the building / rendering of the OpenGL texture and we can focus on this issue. On the other hand, if QCAR does not report valid pixel values for the camera frame, the issue is then somewhere else.
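The "non-zero pixels" sanity check inside the callback can be a tiny helper like the following (a sketch; hasNonZeroPixels is a hypothetical name, and the buffer is treated as raw bytes, so it works regardless of the frame format):

```cpp
#include <cstddef>
#include <cstdint>

// Returns true if any of the first 'count' bytes of the camera image
// buffer is non-zero, i.e. the frame is not uniformly black.
bool hasNonZeroPixels(const std::uint8_t* pixels, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
        if (pixels[i] != 0)
            return true;
    return false;
}
```

Inside QCAR_onUpdate you would call this on the buffer returned by image->getPixels(), checking perhaps the first few hundred bytes.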

 

 

Android NativeActivity Integration

November 26, 2013 - 4:54am #7

This is perhaps where things differ a little between our code and the samples.

We don't call into GL code anywhere from our Java code. We have only one class derived from NativeActivity.

We also don't really distinguish between surface-created and surface-changed. We call both onSurfaceCreated and onSurfaceChanged together, in that order, inside onGLInitialized. This is done once both the window and the GL context have been created. The size passed to onSurfaceChanged is based on our chosen back buffer size, which is usually different from the window size.

From our logs the following calls are executed...

[QCAR] onResume()  

<Window created here>

<OpenGL context created here>

[QCAR] onSurfaceCreated() <-- inside onGLInitialized
[QCAR] onSurfaceChanged(1280,720) <-- inside onGLInitialized

[QCAR] init(QCAR::CameraDevice::CAMERA_DEFAULT)   
[QCAR] setVideoBackgroundConfig(1280 x 853)   
[QCAR] selectVideoMode(QCAR::CameraDevice::MODE_DEFAULT)   
[QCAR] start()   

Based on your comments this doesn't seem quite right. We are not calling setVideoBackgroundConfig before onSurfaceChanged anywhere in this sequence. I assume we likely have some sort of ordering problem here then?

As I said it's hard to describe where our rendering code is called. We don't have any Java code that deals with rendering, so there is no onDrawFrame. Our rendering code is simply triggered inside a loop in the NativeActivity. As a temporary measure I've added the QCAR render calls into our Clear() function, which would usually just call glClear, but now calls glClear, and then tries to render the video feed.

Dave.

Android NativeActivity Integration

November 26, 2013 - 2:57am #6

Ok, the code below looks correct;

however, to be sure that the video background is configured properly, you also need to configure the video background in the OnSurfaceChanged method (right before calling QCAR.onSurfaceChanged(width, height);...) :

(but without re-initializing the camera, just the part handling the videoMode and setting it with QCAR::Renderer::getInstance().setVideoBackgroundConfig(config);)

 

Also, where do you call the rendering code (begin / drawVideobackground / end) ?

in the Java version this would be triggered from onDrawFrame

 

Android NativeActivity Integration

November 26, 2013 - 2:16am #5

We actually initialize the camera and configure the video texture in one block of code.

At the moment we are only supporting landscape rendering, so the code is a little simpler than the sample's. The width and height we feed to this process are taken from the screen resolution but clamped to 1280x720. We then create render targets of that size, which are what we render to. The end result of what we render each frame is scaled to fill the screen.

        if (QCAR::CameraDevice::getInstance().init(QCAR::CameraDevice::CAMERA_DEFAULT))
        {
            QCAR::CameraDevice& cameraDevice = QCAR::CameraDevice::getInstance();
            QCAR::VideoMode videoMode = cameraDevice.getVideoMode(QCAR::CameraDevice::MODE_DEFAULT);

            uint32_t screenWidth = g_renderDevice.GetRenderFrameWidth();
            uint32_t screenHeight = g_renderDevice.GetRenderFrameHeight();

            QCAR::VideoBackgroundConfig config;
            config.mEnabled = true;
            config.mSynchronous = true;
            config.mPosition.data[0] = 0.0f;
            config.mPosition.data[1] = 0.0f;    
            config.mSize.data[0] = screenWidth;
            config.mSize.data[1] = videoMode.mHeight * (screenWidth / (float)videoMode.mWidth);
    
            if(config.mSize.data[1] < screenHeight)
            {
                config.mSize.data[0] = screenHeight * (videoMode.mWidth / (float)videoMode.mHeight);
                config.mSize.data[1] = screenHeight;
            }    
    
            QCAR::Renderer::getInstance().setVideoBackgroundConfig(config);

            bool selectVideoModeResult = QCAR::CameraDevice::getInstance().selectVideoMode(QCAR::CameraDevice::MODE_DEFAULT);
            assert(selectVideoModeResult);

            bool startResult = QCAR::CameraDevice::getInstance().start();
            assert(startResult);
        }

On the device I am using I end up with a video texture size of 1280 x 853.

None of the asserts in this block are tripped.
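The aspect-fill sizing in this block can be checked in isolation with a small pure function (a sketch; computeBackgroundSize is a hypothetical helper mirroring the maths above):

```cpp
// Mirrors the video background sizing logic: scale the video to fill
// the screen width, then switch to filling the height if that would
// leave letterboxing at the top/bottom.
struct BackgroundSize { float width; float height; };

BackgroundSize computeBackgroundSize(float screenWidth, float screenHeight,
                                     float videoWidth, float videoHeight)
{
    BackgroundSize size;
    size.width  = screenWidth;
    size.height = videoHeight * (screenWidth / videoWidth);

    if (size.height < screenHeight)
    {
        size.width  = screenHeight * (videoWidth / videoHeight);
        size.height = screenHeight;
    }
    return size;
}
```

For a 1280x720 render frame and the 720x480 video mode, this gives 1280 x 853 (after truncation), consistent with the video texture size reported above.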

Thanks,

Dave.

Android NativeActivity Integration

November 25, 2013 - 1:46pm #4

OK, thanks for the clarifications. So, if you are merely trying to show the video, indeed the tracker is not needed; we can exclude the Tracker issue.

Another important element to render the video feed properly is to call the function configureVideoBackground();

that function is called in 2 places in the sample:

1. after camera is initialized: see sample code in ImageTargets.cpp in the native function called _startCamera()

2. in the _updateRendering() function (see again ImageTargets.cpp), after setting the screenWidth and screenHeight (see sample code)

This is a key function, as it sets the size of the video background texture, so make sure you are executing it; if you don't, the video background will be assigned a width and height of zero (or undefined) and will not render on screen.

The configureVideoBackground() sample code also logs the values of the videoMode size, so you can verify those values too, if successful.

Could you verify this?

 

Android NativeActivity Integration

November 25, 2013 - 10:11am #3

Hi,

Thanks for getting back to me so quickly.

The code we use to draw the video background is simply...

        QCAR::State state = QCAR::Renderer::getInstance().begin();
        QCAR::Renderer::getInstance().drawVideoBackground();
        QCAR::Renderer::getInstance().end();

I can't realistically share our custom rendering code as we have an awful lot of it, but hopefully it is enough to say that we call glClear() right before the call to getInstance().begin() here, later call eglSwapBuffers, and that what I'm expecting to see is the video feed instead of the cleared screen. I presume the rest of the code in our main loop is not all that relevant, unless there is anything in particular you wanted to check, in which case I'm happy to find and share whatever you need to see.

The sample seemed to perform custom rendering inside the begin()/end() block, which we could do, but not easily, and it would mean our loop would sit inside that block for a long time every frame. I assumed this was not needed, as we should have ownership of our own GL context outside of that block anyway.

I'm not starting the ImageTracker, because at the moment I have nothing to track and am just trying to integrate the rendering component. If the ImageTracker is also required I can try to plant some dummy code in its place to get things going, if you think that is needed?

Thanks,

Dave.

Android NativeActivity Integration

November 25, 2013 - 9:58am #2

Hi Dave,

have you started the ImageTracker as well, right after starting the camera (as shown in our samples)?

Also, could you share the relevant piece of OpenGL code in which you call drawVideoBackground and your custom rendering code ?

 
