
camera position

November 14, 2010 - 12:43pm #1

How can I infer the camera position and orientation from the pose of a trackable?

I need this, for example, to cast a ray from the camera into the scene and check whether an object attached to the trackable is in the camera's line of fire.

TIA

Re: camera position

July 31, 2011 - 7:46am #24

Sure, this is a great use for multi targets.

To start, you can create a standard box multi target using the Target Management System, then edit the config.xml file by hand to reconfigure the targets. You aren't limited to six targets; any image target in your config file can be added to the multi target. As a tip, if you're configuring a complex multi target arrangement and want to check the config file, our Unity extension lets you visualize the targets in the Unity editor (you don't need the Android add-on to take advantage of this).
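
For reference, here is a rough sketch of what a hand-edited multi target section might look like. The names, sizes, translations, and rotations are made-up values, and I'm recalling the attribute syntax from a generated box config (rotation is axis-angle, "AD: x y z degrees"), so treat the file the Target Management System generates for you as the authority:

<QCARConfig>
  <Tracking>
    <!-- each part must also be listed as an image target -->
    <ImageTarget name="wall_front" size="100 100" />
    <ImageTarget name="wall_left" size="100 100" />
    <MultiTarget name="room">
      <Part name="wall_front" translation="0 0 50" rotation="AD: 1 0 0 0" />
      <Part name="wall_left" translation="-50 0 0" rotation="AD: 0 1 0 90" />
    </MultiTarget>
  </Tracking>
</QCARConfig>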

With multi targets, all the targets share a unified coordinate system, so you can use the pose to find the camera position whenever any part of the multi target is in view. Here's some code that might help; note that it depends on SampleMath from the Dominoes sample:

// take the inverse of the modelview matrix to find the camera orientation in relation to a target at the origin
QCAR::Matrix44F inverseModelView = SampleMath::Matrix44FTranspose(SampleMath::Matrix44FInverse(modelViewMatrix));

// pull the camera position and look at vectors from this matrix
QCAR::Vec3F cameraPosition(inverseModelView.data[12], inverseModelView.data[13], inverseModelView.data[14]);
QCAR::Vec3F cameraLookAt(inverseModelView.data[8], inverseModelView.data[9], inverseModelView.data[10]);
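
To tie this back to the original line-of-fire question, here is a minimal sketch (not from the samples) of how you might use these two vectors for a ray test, approximating the object attached to the trackable by a bounding sphere. objectPosition and objectRadius are hypothetical values in target units:

// Hypothetical object attached to the trackable, approximated by a bounding sphere
QCAR::Vec3F objectPosition(0.0f, 0.0f, 0.0f);
float objectRadius = 20.0f;

// Ray from the camera along its (normalized) viewing direction
QCAR::Vec3F rayDir = SampleMath::Vec3FNormalize(cameraLookAt);
QCAR::Vec3F toObject = SampleMath::Vec3FSub(objectPosition, cameraPosition);

// Distance along the ray to the point closest to the object center
float along = SampleMath::Vec3FDot(toObject, rayDir);

// Offset from that closest point to the object center
QCAR::Vec3F closest = SampleMath::Vec3FAdd(cameraPosition, SampleMath::Vec3FScale(rayDir, along));
QCAR::Vec3F offset = SampleMath::Vec3FSub(objectPosition, closest);

// In the line of fire if the object is in front of the camera and the ray
// passes within the bounding-sphere radius
bool inLineOfFire = (along > 0.0f) &&
    (SampleMath::Vec3FDot(offset, offset) <= objectRadius * objectRadius);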

- Kim

Re: camera position

July 28, 2011 - 10:21am #23

I'm wondering if it's possible to use Multi Targets configured in an 'open box' arrangement to create a space where the camera position and orientation are known, provided at least one of the Multi Target planes (say six, in this example) is in view.

To clarify, and as an example: would it be possible to know the camera position and orientation by putting Multi Target-defined targets on each wall of a normal room, such that at least one target is in frame at all times? The camera would naturally be in this room. Am I making sense here?

Thanks!

Re: camera position

July 2, 2011 - 8:12pm #22
Quote:

If you downloaded it from the Qualcomm developer site then it should be free to use. I used the math libraries included in the Adreno SDK distribution.

- Kim

Are you using FrmMath.h?

How did you integrate this? I'm trying to include it from the Adreno framework, but there seems to be a problem with the inlining, which I'm guessing has to do with the platform definition.

Update: Never mind, I got it. I found a version for Android.

Re: camera position

January 6, 2011 - 9:23am #21

Perhaps if you stick a marker on their forehead :) The QCAR SDK cannot be used for face tracking.

Please open new threads for new questions; this one isn't related to the original topic.

- Kim

Re: camera position

January 6, 2011 - 8:39am #20

I have a question: is it possible to place virtual glasses on a person standing in front of the mobile phone?

Re: camera position

January 6, 2011 - 6:25am #19

Sorry, the question isn't very clear. Are you just trying to line up 3D content with features on your target? You can move an object on the target by changing the X and Y values of the translation. See the call to translatePoseMatrix in the samples (in the native code).

Note that the center of the target is the origin. Positive X is to the right of the origin, and positive Y points towards the top of the target.

Look at the config.xml file to find the size of the target; it represents the total width and height.
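
For concreteness, here is a minimal sketch using the SampleUtils helper from the native samples; the offsets are made-up values in target units:

// For a target 100 units wide: move the object 25 units right of center and
// 10 units toward the top, keeping it on the target plane (Z = 0)
SampleUtils::translatePoseMatrix(25.0f, 10.0f, 0.0f, &modelViewMatrix.data[0]);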

- Kim

Re: camera position

January 6, 2011 - 3:40am #18

I uploaded an image in my trackable section, and:
1) I am only finding some markings in that trackable (when I am analyzing the target).

How can I place big virtual objects using those markings? I don't know all the coordinates. Please answer as soon as you can; it would really help me.

Re: camera position

November 27, 2010 - 4:31pm #17

No, the near/far camera parameters just adjust the bounds of the viewing frustum. Objects must be at least "near" units away from the camera, but no further than "far" units away, to be visible. You need to pick near/far values that work with your target size. Typically the near value should be smaller than the width of the target, perhaps by a factor of 10; so for your target of width 1.0, you'd use a near value of 0.1.

I suggest adding a scale factor to your model and playing with the values until it looks right on the target. If your model is more or less square, you may want to go with 0.6 to fill the height of the target rather than the width (for your 1.0 x 0.6 target).
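
As a rough sketch of both adjustments together (the 0.1/100.0 clip planes and the 0.6 scale are just the example values from this thread; SampleUtils is the helper from the native samples):

// Projection with near/far chosen relative to the 1.0-unit-wide target
const QCAR::CameraCalibration& cameraCalibration =
    QCAR::CameraDevice::getInstance().getCameraCalibration();
QCAR::Matrix44F projectionMatrix =
    QCAR::Tool::getProjectionGL(cameraCalibration, 0.1f, 100.0f);

// Shrink a unit-sized model to 0.6 so it fills the target's height
SampleUtils::scalePoseMatrix(0.6f, 0.6f, 0.6f, &modelViewMatrix.data[0]);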

- Kim

Re: camera position

November 26, 2010 - 4:55am #16

OK, thanks for the explanation and the help.

I just have one more issue (not related to this topic).

You said that if I have a target image with a size of 1.0 x 0.6, a model with a size of 1.0 should take on the size of the target (makes sense).
But in my app, my model (which has a size of 1.0) is much bigger than the target image.
Could it be because of my projection near/far parameters? What values should I use for them?

TIA

Re: camera position

November 25, 2010 - 7:24am #15

There are two factors at play here: how your matrices are stored (row- vs. column-major) and how your math library performs matrix operations (whether it expects row or column matrices). Sometimes your matrix is in the wrong format for the math library, and a simple transpose fixes the issue. I find that printing out the matrix to see its layout helps. Clearly in this case you should have been pulling out a position with the values -0.149216, -0.027968, 1.298148, so when you were getting 0, 0, 0 you knew a transpose was required.
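
If you ever need to do the conversion yourself, transposing a 4x4 stored as a flat array of 16 floats is just a matter of swapping the row and column indices. A minimal sketch, independent of any particular math library:

// Converts between row-major and column-major storage of a 4x4 matrix:
// element (row, col) moves from index row*4+col to index col*4+row
void transpose4x4(const float src[16], float dst[16])
{
	for (int row = 0; row < 4; ++row)
		for (int col = 0; col < 4; ++col)
			dst[col * 4 + row] = src[row * 4 + col];
}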

- Kim

Re: camera position

November 25, 2010 - 1:28am #14

Ok great thanks for the help.

A side question: any idea why the position vector is always 0.0, 0.0, 0.0?

Would it matter if my matrices were column-major?

OK, got it... as you said, I needed to transpose the inverse matrix before transforming my vectors.
I'm a bit of a noob with transformation matrices (a pity, since I'm trying to do 3D), but could you briefly enlighten me on the "why"?

TIA

Re: camera position

November 24, 2010 - 2:35pm #13

So if you look at your inverse modelview matrix, the 3rd column should be the camera lookAt and the 4th column should be the camera position. Pull those values out however you'd like!
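
For example, with your row-major layout, and assuming inv names your inverse modelview matrix (using the M(row, col) accessor from your print helper):

// 4th column: camera position; 3rd column: camera look-at direction
float camPosX = inv.M(0, 3), camPosY = inv.M(1, 3), camPosZ = inv.M(2, 3);
float lookAtX = inv.M(0, 2), lookAtY = inv.M(1, 2), lookAtZ = inv.M(2, 2);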

- Kim

Re: camera position

November 24, 2010 - 2:30pm #12

My matrices are ROW_MAJOR.

Here are my matrices:

modelViewMatrix
0.998775 -0.007211 0.048951 0.000000
-0.006447 -0.999855 -0.015741 0.000000
0.049058 0.015406 -0.998677 0.000000
0.085168 -0.049039 1.303295 1.000000

inverseModelViewMatrix
0.998775 -0.007211 0.048951 -0.149216
-0.006447 -0.999855 -0.015741 -0.027968
0.049058 0.015406 -0.998677 1.298148
0.000000 0.000000 0.000000 1.000000

Here is the code I'm using to print the matrix:
// Prints a 4x4 matrix row by row, using the M(row, col) accessor
void printMATRIX4X4(MATRIX4X4& matSrcMatrix) {
	MATRIX4X4* s = &matSrcMatrix;
	LOGI("%f %f %f %f", s->M(0,0), s->M(0,1), s->M(0,2), s->M(0,3));
	LOGI("%f %f %f %f", s->M(1,0), s->M(1,1), s->M(1,2), s->M(1,3));
	LOGI("%f %f %f %f", s->M(2,0), s->M(2,1), s->M(2,2), s->M(2,3));
	LOGI("%f %f %f %f", s->M(3,0), s->M(3,1), s->M(3,2), s->M(3,3));
}

Re: camera position

November 24, 2010 - 1:39pm #11

Perhaps your inverse modelview matrix is stored in a different format than mine (column- vs. row-major). Print it out; you might need to transpose the matrix before you can pull out the position (it should be in the 4th column).

If that's the case, your lookAt probably isn't correct right now either; make sure you pull that out of the transposed matrix as well.

- Kim

Re: camera position

November 24, 2010 - 1:02pm #10

OK, the lookAt vector seems fine to me, but the position vector always returns 0.0, 0.0, 0.0.

Re: camera position

November 24, 2010 - 6:51am #9

Great, I'll try this.
Thanks again for your help.

Re: camera position

November 24, 2010 - 6:37am #8

Sorry, the linePlaneIntersection method was just for debugging. If you could accurately pick out a point on the target plane, you would know that your line shooting from the camera was correct.

Let's try a different approach. I believe you can find the camera position and direction from the inverse of the modelview matrix. I just worked through it, and this seems to work for me:

FRMMATRIX4X4 *mvMatrix = (FRMMATRIX4X4 *) modelViewMatrix.data;
FRMMATRIX4X4 inverseModelViewMatrix = FrmMatrixInverse(*mvMatrix);

// The camera sits at the origin of eye space and (in this code) looks down
// the +Z axis; w = 1 marks a point, w = 0 a direction (no translation)
FRMVECTOR4 position(0, 0, 0, 1);
FRMVECTOR4 lookAt(0, 0, 1, 0);

// Transforming by the inverse modelview brings both into object space
position = FrmVector4Transform(position, inverseModelViewMatrix);
lookAt = FrmVector4Transform(lookAt, inverseModelViewMatrix);

FRMVECTOR3 camPosition(position.x, position.y, position.z);
FRMVECTOR3 camLookAt(lookAt.x, lookAt.y, lookAt.z);

Note that the camLookAt here should be quite similar to normalize(lineEnd - lineStart) in the previous examples.

Hope this helps!

- Kim

Re: camera position

November 23, 2010 - 11:59pm #7

My near value is 0.1f and my far value is 100.0f; the target size is 1.0 x 0.7, for a physical size of about a full printed page.
I don't want to find a 2D point on the target.
I want to know where the camera is in object space, and its direction. I need to know that because I want to shoot a bullet coming out of the camera.

Re: camera position

November 23, 2010 - 5:42pm #6

What are the near/far values that you are feeding to the getProjectionGL method? Also, what is the size of your target in the config.xml file (located in the assets folder)? These values will determine the near/far world values, if you use this approach.

Here is the linePlaneIntersection method, if you want to take what you've got and try to find a 2D point on the target. It might be easier to think about that value, and you could also use it to draw a small texture to see if it ends up where you expect.

bool
linePlaneIntersection(FRMVECTOR3 lineStart, FRMVECTOR3 lineEnd,
					  FRMVECTOR3 pointOnPlane, FRMVECTOR3 planeNormal,
					  FRMVECTOR3 &intersection)
{
	FRMVECTOR3 lineDir = lineEnd - lineStart;
	lineDir = FrmVector3Normalize(lineDir);
	
	FRMVECTOR3 planeDir = pointOnPlane - lineStart;
	
	float n = FrmVector3Dot(planeNormal, planeDir);
	float d = FrmVector3Dot(planeNormal, lineDir);
	
	if (fabs(d) < 0.00001) {
		// Line is parallel to plane
		return false;
	}
	
	float dist = n / d;
	
	FRMVECTOR3 offset = FrmVector3Mul(lineDir, dist);
	intersection = lineStart + offset;
	return true;
}

You'll probably want to use a plane center of (0, 0, 0) and a plane normal of (0, 0, 1).
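
A minimal usage sketch under those assumptions, with lineStart and lineEnd coming from the projectScreenPointToPlane sample in this thread:

// Intersect the camera ray with the target plane (origin, +Z normal)
FRMVECTOR3 intersection;
if (linePlaneIntersection(lineStart, lineEnd,
						  FRMVECTOR3(0, 0, 0), FRMVECTOR3(0, 0, 1),
						  intersection)) {
	// intersection now holds the hit point on the target, in object space
}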

- Kim

Re: camera position

November 23, 2010 - 1:03pm #5

Hmm, I've tried your sample (using x = 0.0f and y = 0.0f for the center of the camera), but I'm not sure it's working correctly...
How should I read those values?

Let's say my image target is on a table and my camera is perpendicular to the table, directly above the center of the target.
Which values will change if I lift the camera up (changing its height above the target)?

Here are the values when the target just fits my device screen (camera perpendicular to the target):
nearWorld x=-0.145102, y=-0.096156, z=1.234984
farWorld x=-0.096775, y=-0.064131, z=0.823665

Here are the values when the target takes up 1/4 of my device screen:
nearWorld x=-0.042375, y=0.066976, z=1.244670
farWorld x=-0.028262, y=0.044669, z=0.830126

TIA

Just for reference, here is my code:

float x = 0.0f;
float y = 0.0f;

VECTOR4 ndcNear(x, y, -1, 1);
VECTOR4 ndcFar(x, y, 1, 1);

// Normalized Device Coordinates to Eye Coordinates
MATRIX4X4 *pMatrix = (MATRIX4X4 *) projectionMatrix.data;
MATRIX4X4 inverseProjMatrix = MatrixInverse(*pMatrix);

VECTOR4 pointOnNearPlane = Vector4Transform(ndcNear, inverseProjMatrix);
VECTOR4 pointOnFarPlane = Vector4Transform(ndcFar, inverseProjMatrix);

pointOnNearPlane /= pointOnNearPlane.w;
pointOnFarPlane /= pointOnFarPlane.w;

// Eye Coordinates to Object Coordinates
MATRIX4X4 *mvMatrix = (MATRIX4X4 *) modelViewMatrix.data;
MATRIX4X4 inverseModelViewMatrix = MatrixInverse(*mvMatrix);

VECTOR4 nearWorld = Vector4Transform(pointOnNearPlane, inverseModelViewMatrix);
VECTOR4 farWorld = Vector4Transform(pointOnFarPlane, inverseModelViewMatrix);

LOGI("nearWorld x=%f, y=%f, z=%f", nearWorld.x, nearWorld.y, nearWorld.z);
LOGI("farWorld x=%f, y=%f, z=%f", farWorld.x, farWorld.y, farWorld.z);

Re: camera position

November 15, 2010 - 2:32pm #4

If you downloaded it from the Qualcomm developer site then it should be free to use. I used the math libraries included in the Adreno SDK distribution.

- Kim

Re: camera position

November 15, 2010 - 2:15pm #3

OK great, I'll look at this.

BTW: I found the math lib in the OGLES20_Shader sample code archive.
Q: what's the license for these files? Can I use them freely in my projects?

Thanks

Re: camera position

November 15, 2010 - 10:58am #2

There are a few ways to approach this problem, but most of them involve both the pose matrix (which is also a base modelview matrix, in OpenGL terms) and the projection matrix. You can get the projection matrix using the Tool::getProjectionGL method. One approach is to use the inverses of these matrices to bring screen points (or the central camera point) into object space for intersection testing.

Here is some sample code that creates a line from a touch point on the screen, running from the near plane to the far plane, in object space. You could easily adapt it to create a ray to intersect with your objects, using the center of the screen to represent the camera. The code isn't optimized, but hopefully it will help get you started. Note that the math functions are not included in the SDK, but they should be easy enough to find elsewhere.

void
projectScreenPointToPlane(FRMVECTOR2 point, FRMVECTOR3 planeCenter, FRMVECTOR3 planeNormal,
						  FRMVECTOR3 &intersection, FRMVECTOR3 &lineStart, FRMVECTOR3 &lineEnd)
{
	// Window Coordinates to Normalized Device Coordinates
	VideoBackgroundConfig config = QCAR::Renderer::getInstance().getVideoBackgroundConfig();
	
	float halfScreenWidth = screenWidth / 2.0f;
	float halfScreenHeight = screenHeight / 2.0f;
	
	float halfViewportWidth = config.size.data[0] / 2.0f;
	float halfViewportHeight = config.size.data[1] / 2.0f;
	
	float x = (point.x - halfScreenWidth) / halfViewportWidth;
	float y = (point.y - halfScreenHeight) / halfViewportHeight * -1;
	
	FRMVECTOR4 ndcNear(x, y, -1, 1);
	FRMVECTOR4 ndcFar(x, y, 1, 1);
	
	// Normalized Device Coordinates to Eye Coordinates
	// (projectionMatrix, like modelViewMatrix, screenWidth and screenHeight,
	// is assumed to be available to this function, e.g. as a global)
	FRMMATRIX4X4 *projMatrix = (FRMMATRIX4X4 *) projectionMatrix.data;
	FRMMATRIX4X4 inverseProjMatrix = FrmMatrixInverse(*projMatrix);
	
	FRMVECTOR4 pointOnNearPlane = FrmVector4Transform(ndcNear, inverseProjMatrix);
	FRMVECTOR4 pointOnFarPlane = FrmVector4Transform(ndcFar, inverseProjMatrix);
	pointOnNearPlane /= pointOnNearPlane.w;
	pointOnFarPlane /= pointOnFarPlane.w;
	
	// Eye Coordinates to Object Coordinates
	FRMMATRIX4X4 *mvMatrix = (FRMMATRIX4X4 *) modelViewMatrix.data;
	FRMMATRIX4X4 inverseModelViewMatrix = FrmMatrixInverse(*mvMatrix);
	
	FRMVECTOR4 nearWorld = FrmVector4Transform(pointOnNearPlane, inverseModelViewMatrix);
	FRMVECTOR4 farWorld = FrmVector4Transform(pointOnFarPlane, inverseModelViewMatrix);
	
	lineStart = FRMVECTOR3(nearWorld.x, nearWorld.y, nearWorld.z);
	lineEnd = FRMVECTOR3(farWorld.x, farWorld.y, farWorld.z);
	linePlaneIntersection(lineStart, lineEnd, planeCenter, planeNormal, intersection);
}

- Kim
