"We offer new support options and therefor the forums are now in read-only mode! Please check out our Support Center for more information." - Vuforia Engine Team

camera position

How do I infer the camera position and orientation from the pose of a trackable? I need to know this, i.e., to compute a raycast from the camera into the scene and check whether an object attached to the trackable is in the camera's line of fire. TIA

There are a few ways to approach this problem, but most of them involve both the pose matrix (which is also a base modelview matrix in OpenGL terms) and the projection matrix. You can get the projection matrix using the Tool::getProjectionGL method.
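For reference, a rough sketch of the matrix setup, pieced together from the SDK samples (Matrix44FInverse is assumed to come from the sample's SampleMath helpers or whatever math lib you use; the near/far values are only examples):

[code]
#include <QCAR/Tool.h>
#include <QCAR/Trackable.h>
#include <QCAR/CameraDevice.h>
#include "SampleMath.h"  // or your own 4x4 inverse

// Call from the render loop with an active trackable.
void computeCameraMatrices(const QCAR::Trackable* trackable)
{
    // Pose -> base modelview matrix (maps target space into camera space).
    QCAR::Matrix44F modelViewMatrix =
        QCAR::Tool::convertPose2GLMatrix(trackable->getPose());

    // Projection matrix for the current camera calibration
    // (near/far values here are only examples).
    const QCAR::CameraCalibration& calibration =
        QCAR::CameraDevice::getInstance().getCameraCalibration();
    QCAR::Matrix44F projectionMatrix =
        QCAR::Tool::getProjectionGL(calibration, 0.1f, 100.0f);

    // Inverting the modelview yields the camera pose in target space.
    QCAR::Matrix44F inverseModelView =
        SampleMath::Matrix44FInverse(modelViewMatrix);
}
[/code]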

OK great, will look at this. BTW: I found the math lib in the OGLES20_Shader sample code archive. Q: what's the license for these files? Can I use them freely in my projects? Thanks

If you downloaded it from the Qualcomm developer site then it should be free to use. I used the math libraries included in the Adreno SDK distribution. - Kim

Mhhh, I've tried your sample (using x = 0.0f and y = 0.0f for the center of the camera) but I'm not sure it's working correctly... How should I read those values? Let's say my image target is on a table and my camera is perpendicular to the table, directly above the center of the target. Which values will change?

What are the near/far values that you are feeding to the getProjectionGL method? Also, what is the size of your target in the config.xml file (located in the assets folder)?

My near value is 0.1f and my far value is 100.0f. The target size is 1.0 x 0.7, for a physical size of about a full page print. I don't want to find a 2D point on the target; I want to know where the camera is in object space, and its direction.

Sorry, the linePlaneIntersection method was just for debugging (a version is sketched below for reference). If you could accurately pick out a point on the target plane, you would know that the line you were shooting from the camera was correct. Let's try a different approach.
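For anyone curious, the debugging helper looks roughly like this; the Vec3F helpers are assumed from the sample's SampleMath (adapt to whatever math lib you use):

[code]
#include <math.h>
#include <QCAR/Vectors.h>
#include "SampleMath.h"  // Vec3F helpers from the sample code

// Intersect the line through lineStart/lineEnd with the plane defined by
// pointOnPlane and planeNormal (the target plane is z = 0 with normal
// (0, 0, 1) in target space). Returns false if the line is parallel.
bool linePlaneIntersection(QCAR::Vec3F lineStart, QCAR::Vec3F lineEnd,
                           QCAR::Vec3F pointOnPlane, QCAR::Vec3F planeNormal,
                           QCAR::Vec3F& intersection)
{
    QCAR::Vec3F lineDir = SampleMath::Vec3FSub(lineEnd, lineStart);
    lineDir = SampleMath::Vec3FNormalize(lineDir);

    QCAR::Vec3F planeDir = SampleMath::Vec3FSub(pointOnPlane, lineStart);

    float n = SampleMath::Vec3FDot(planeNormal, planeDir);
    float d = SampleMath::Vec3FDot(planeNormal, lineDir);

    if (fabs(d) < 0.00001f)
        return false;  // parallel, no single intersection point

    float dist = n / d;
    QCAR::Vec3F offset = SampleMath::Vec3FScale(lineDir, dist);
    intersection = SampleMath::Vec3FAdd(lineStart, offset);
    return true;
}
[/code]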

OK, the lookAt vector seems fine to me, but the position vector always returns 0.0, 0.0, 0.0.

Perhaps your inverse modelview matrix is stored in a different format than mine (column vs. row major).

My matrices are ROW_MAJOR. Here are my matrices:

modelViewMatrix:
 0.998775 -0.007211  0.048951  0.000000
-0.006447 -0.999855 -0.015741  0.000000
 0.049058  0.015406 -0.998677  0.000000
 0.085168 -0.049039  1.303295  1.000000

inverseModelViewMatrix:
 0.998775 -0.007211  0.048951 -0.149216
-0.006447 -0.999855 -0.01

So if you look at your inverse modelview matrix, the 3rd column should be the camera lookAt and the 4th column should be the camera position. Pull those values out however you'd like! - Kim
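In code, that extraction looks something like this (the data[] indices assume the row-major layout printed above; swap to data[8..10] and data[12..14] if your lib stores matrices the other way):

[code]
// 3rd column of the inverse modelview = camera lookAt direction.
QCAR::Vec3F camLookAt(inverseModelView.data[2],
                      inverseModelView.data[6],
                      inverseModelView.data[10]);

// 4th column of the inverse modelview = camera position in target space.
QCAR::Vec3F camPosition(inverseModelView.data[3],
                        inverseModelView.data[7],
                        inverseModelView.data[11]);
[/code]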

OK great, thanks for the help. A side question: any idea why the position vector is always 0.0, 0.0, 0.0? Would it matter if my matrices were in column major? ...OK, got it.

There are two factors at play here: how your matrices are stored (row vs. column major) and how your math library performs matrix operations (whether it's set to expect row vs. column matrices).
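If the two conventions don't match, a transpose before or after the inversion sorts it out. A minimal sketch (the sample's SampleMath also ships a Matrix44FTranspose that does the same job):

[code]
// Converting between row-major and column-major storage is just a transpose.
QCAR::Matrix44F transpose(const QCAR::Matrix44F& m)
{
    QCAR::Matrix44F t;
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            t.data[col * 4 + row] = m.data[row * 4 + col];
    return t;
}
[/code]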

OK, thanks for the explanation and help. I just have one more issue (not related to this topic). You said that if I have a target image with a size of 1.0 x 0.6, and a model with a size of 1.0, the model should take the size of the target (makes sense). But in my app, my model (which has a size of 1.

No, the near/far camera parameters just adjust the bounds of the viewing frustum. Objects must be at least "near" units away from the camera, but no further than "far" units away from the camera, to be visible. You need to pick near/far values that work with your target size.
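To connect this with the earlier matrices, a rough sanity check (assuming camPosition was extracted as above; clipping actually happens along the view axis, so distance to the target origin is only a good proxy while the camera is pointed at the target):

[code]
#include <math.h>

// Distance from the camera to the target origin, in target units
// (the same units as the target size in config.xml).
float camDistance = sqrtf(camPosition.data[0] * camPosition.data[0] +
                          camPosition.data[1] * camPosition.data[1] +
                          camPosition.data[2] * camPosition.data[2]);

// Content on the target is only visible while this falls between near/far.
bool withinDepthRange = (camDistance >= 0.1f) && (camDistance <= 100.0f);
[/code]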

In my trackable section I had uploaded an image, and 1) I am finding only a few markings in that trackable (when I analyze the target). How can I place big virtual objects using those markings, since I don't know all the coordinates? Please answer ASAP, it will help me.

Sorry, the question isn't very clear. Are you just trying to line up 3D content with features on your target? You can move an object on the target by changing the X and Y values of the translation (see the sketch below).
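Something like this, using the translatePoseMatrix helper from the ImageTargets sample (the 0.2/0.1 offsets are just illustrative):

[code]
QCAR::Matrix44F modelViewMatrix =
    QCAR::Tool::convertPose2GLMatrix(trackable->getPose());

// Shift the object 0.2 units along X and 0.1 units along Y on the target
// plane, in the same units as the target size in config.xml.
SampleUtils::translatePoseMatrix(0.2f, 0.1f, 0.0f, &modelViewMatrix.data[0]);
[/code]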

I had a question: is it possible to place virtual glasses on a person standing in front of the mobile phone?

Perhaps if you stick a marker on their forehead :) The QCAR SDK cannot be used for face tracking. Please open new threads for new questions; this isn't related to the original topic. - Kim

[QUOTE]If you downloaded it from the Qualcomm developer site then it should be free to use. I used the math libraries included in the Adreno SDK distribution. - Kim [/QUOTE] Are you using FrmMath.h? How did you integrate this?