Hi there,

I've had a good hunt around on this topic and haven't had a lot of luck.

I've implemented a system where pan gestures (iOS) can be used to move the object around in the plane of the target (x and y). However, I want it to take into account the position of the camera; i.e., I want the user to be able to swipe up with their finger and effectively push the object away from the camera, regardless of the orientation of the target. At the moment this works with one orientation of the target, but if I walk around the target and try again from the side, swiping up pushes the object off the side of the screen.

Can someone point me in the right direction?

Thanks!

Carl.

Hi, I think the problem is that you need to convert your "swipe" motion into a "translation vector" in the world reference frame (i.e. the target reference frame).

So, let's suppose that when you "swipe up" on your device screen, you want the object to move away from you (while also being constrained to its X-Y plane, if I understand right).

First, you need to define a 3D vector (a direction) in your camera reference frame.

We know that the camera reference frame in OpenGL is defined with X pointing to the right of the screen, Y pointing upward, and the Z axis pointing toward you (i.e. out of the screen).

So, we can say that a "move away" direction vector is defined as QCAR::Vec3F(0, 0, -1) in that camera reference frame (note Z = -1, i.e. the direction points "into the screen").

Now you need to convert this "move away" vector from camera coordinates to a representation in world coordinates (i.e. to represent the vector in the target reference frame).

To do this coordinate transformation you must use the inverse of the ModelView matrix.

So, take SampleMath.cpp from the Dominoes sample in the Vuforia sample distribution; you will see there are two functions:

- QCAR::Matrix44F SampleMath::Matrix44FInverse(QCAR::Matrix44F& m)

and

- QCAR::Vec3F SampleMath::Vec3FTransformNormal(QCAR::Vec3F& v, QCAR::Matrix44F& m)

You can use the first function to get the inverse ModelView matrix:

QCAR::Matrix44F inverseModelViewMatrix = SampleMath::Matrix44FInverse(modelViewMatrix);

And the second function to turn your "move away" vector from camera coordinates to world coordinates:

QCAR::Vec3F moveAwayInWorldCoords = SampleMath::Vec3FTransformNormal(moveAwayInCameraCoords, inverseModelViewMatrix);
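In case it helps to see what this transform does under the hood, here is a minimal, self-contained sketch in plain C++ (no QCAR types; the function name and layout assumptions are mine): a direction is multiplied by the upper-left 3x3 of the column-major matrix, and the translation column is ignored.

```cpp
#include <cassert>
#include <cmath>

// Sketch of a Vec3FTransformNormal-style helper: transform a direction
// (not a point) by a column-major 4x4 matrix. Only the rotation/scale part
// (the upper-left 3x3) is used; the translation column m[12..14] is ignored,
// because a direction has an implicit w = 0.
void transformNormal(const float m[16], const float v[3], float out[3]) {
    for (int i = 0; i < 3; ++i) {
        // Column j of the matrix is stored at m[j*4 .. j*4+3].
        out[i] = m[0 * 4 + i] * v[0]
               + m[1 * 4 + i] * v[1]
               + m[2 * 4 + i] * v[2];
    }
}
```

With the identity matrix the direction comes back unchanged, and a pure translation in m[12..14] has no effect on it, which is exactly why this is the right operation for directions rather than positions.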

Also, you may want to normalize this vector to make sure it is a unit length vector (as it just represents a direction):

QCAR::Vec3F normalizedMoveAwayInWorldCoords = SampleMath::Vec3FNormalize(moveAwayInWorldCoords);

Once you have this vector, you can project it on the target XY plane by simply suppressing the Z coordinate of the vector, i.e.:

normalizedMoveAwayInWorldCoords.data[2] = 0.0f; // data[2] is the Z coordinate; zero it so the vector lies in the XY plane

And then normalize the vector once again:

QCAR::Vec3F moveAwayFinal = SampleMath::Vec3FNormalize(normalizedMoveAwayInWorldCoords);

Then you can multiply it by some "motion" factor, depending on how fast you want your object to move in response to the swipe.
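These last few steps (drop Z, renormalize, scale by a speed factor) can be condensed into one small helper. This is just an illustrative sketch in plain C++; the function name and the guard against a zero-length result are my own additions:

```cpp
#include <cassert>
#include <cmath>

// Project a world-space direction onto the target's XY plane, renormalize it,
// and scale it by a per-swipe speed factor. If the direction was (almost)
// parallel to the Z axis, the projection collapses, so we return zero motion.
void projectToXYAndScale(const float dirWorld[3], float speed, float out[3]) {
    float x = dirWorld[0];
    float y = dirWorld[1];               // Z is dropped: the object stays in the plane
    float len = std::sqrt(x * x + y * y);
    if (len < 1e-6f) {
        out[0] = out[1] = out[2] = 0.0f; // degenerate: direction was along the target's Z axis
        return;
    }
    out[0] = speed * x / len;
    out[1] = speed * y / len;
    out[2] = 0.0f;
}
```

The output can then be accumulated into the object's position in target coordinates on each swipe event.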

You can generalize this approach to fit other cases, of course (e.g. "move closer" as opposed to "move away", etc.).

I hope this helps.

Hi Alessandro,

Thanks so much for the detailed reply. I think it's got me on the right track; however, the results aren't quite what I expected (my error, I'm sure!). The effect is tough to explain. I'm happy to send the project for you to try, but I'm wondering if you can spot an obvious error...?

If I swipe up with the target straight on, the model moves to the left. If I turn the target 90 degrees clockwise and swipe up, the model moves to the right. If I turn the target another 90 degrees clockwise, so it is upside down, and swipe up, the model moves to the left.

I have used your example of the swipe up, e.g. a vector (0, 0, -1), to test the system. Once I get the desired effect with this I'll introduce more realistic scenarios. I pass the vector components into a method in EAGLView (setNewOrientationValues, see below) using NSNotifications from my overlay view.

// This method updates the model orientation data and also tells EAGLView which model to show. buttonOverlayViewController provides this information.

// This vector is the desired movement of the model with respect to the camera view, i.e. should the model move away from the camera (-Z axis), or side to side in the view (right is +Y and left -Y).
QCAR::Vec3F movementVectorFromCamerasPerspective = QCAR::Vec3F(0, 0, -1);

// Render video background and retrieve tracking state
QCAR::State state = QCAR::Renderer::getInstance().begin();

for (int i = 0; i < state.getNumActiveTrackables(); ++i) {

    // Get the model view matrix
    const QCAR::Trackable* trackable = state.getActiveTrackable(i);
    QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(trackable->getPose());

    // Convert the movement vector from camera coordinates to a representation in world
    // coordinates (i.e. represent it in the target reference frame), using the inverse
    // of the ModelView matrix.
    QCAR::Matrix44F inverseModelViewMatrix = SampleMath::Matrix44FInverse(modelViewMatrix);
    QCAR::Vec3F moveAwayInWorldCoords = SampleMath::Vec3FTransformNormal(movementVectorFromCamerasPerspective, inverseModelViewMatrix);

    // Normalise this so it just represents a direction.
    QCAR::Vec3F normalizedMoveAwayInWorldCoords = SampleMath::Vec3FNormalize(moveAwayInWorldCoords);

    // Project the vector onto the target XY plane by suppressing its Z coordinate.
    normalizedMoveAwayInWorldCoords.data[2] = 0.0f; // data[2] is the Z coordinate; zero it so the vector lies in the XY plane

    // Normalise again.
    QCAR::Vec3F moveAwayFinal = SampleMath::Vec3FNormalize(normalizedMoveAwayInWorldCoords);

    // Multiply by the swipe speed factor.
    forwardBackFloat = forwardBackFloat + (moveAwayFinal.data[0] * 10);
    sideToSideFloat = sideToSideFloat + (moveAwayFinal.data[1] * 10);
}

[movementVector release];
}

I use the final values in the renderFrameQCAR method as below.

Any obvious mistakes?

As an aside, is there a better way to fetch the modelViewMatrix than the way I've done it here?

Thanks again. Your help is appreciated!

Hi, if I read your code correctly, you compute the ModelView inverse right after getting the modelview matrix from QCAR (i.e. from the pose).

This is correct, provided that you actually render your target with that same modelview matrix; but if you apply further scaling, rotation and/or translation to your targets somewhere, you need to take those into account here as well. I'm referring to code like this, for instance:

SampleUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScale, &modelViewMatrix.data[0]);
SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale, &modelViewMatrix.data[0]);

So, you should really invert your actual modelview matrix (not just the one that comes from the trackable pose).

Could that be the problem?

This could be the issue. I will definitely try this modification. What's the best way to get the actual modelViewMatrix when outside the renderFrameQCAR method?

Thanks!

I would suggest computing the modelview matrix and the inverse modelview matrix both in the render frame method (in practice, just before exiting the render frame method, you can compute the inverse of the modelview matrix that you just used for rendering),

and then storing the inverse matrix in some global variable, so that you can read it back from other methods.
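That frame-to-frame handoff might look something like this sketch (the names here are illustrative, not SDK calls): the matrix lives at file scope, starts as identity so the first frame is well-defined, is overwritten at the end of each render pass, and is read back by the gesture handler.

```cpp
#include <cassert>
#include <cstring>

// File-scope storage for the inverse modelview matrix (column-major, 16 floats),
// shared between the render method and the gesture-handling code.
static float g_inverseModelView[16];

// Call once during renderer setup: start from the identity matrix so the
// gesture math sees a valid transform before the first frame has rendered.
void initInverseModelView() {
    static const float identity[16] = {
        1, 0, 0, 0,
        0, 1, 0, 0,
        0, 0, 1, 0,
        0, 0, 0, 1 };
    std::memcpy(g_inverseModelView, identity, sizeof(identity));
}

// Call at the end of each render pass with the freshly inverted modelview.
void storeInverseModelView(const float inv[16]) {
    std::memcpy(g_inverseModelView, inv, sizeof(g_inverseModelView));
}

// Read back from any other method (e.g. the pan-gesture handler).
const float* currentInverseModelView() { return g_inverseModelView; }
```

If the renderer and the gesture handler run on different threads, you would also want to guard this shared state with a lock, but that is beyond the scope of this sketch.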

Could you let me know if I'm on the right track with calculating the actual modelViewMatrix? I've got this code in my renderFrameQCAR method:

ShaderUtils::scalePoseMatrix(kObjectScale * sizeFloat, kObjectScale * sizeFloat, kObjectScale * sizeFloat, &modelViewMatrix.data[0]);
ShaderUtils::rotatePoseMatrix(0.0f + rotationAngleFloat, 0.0f, 0.0f, 1.0f, &modelViewMatrix.data[0]);

QCAR::Matrix44F actualModelViewMatrix = modelViewMatrix;
actualModelViewMatrix.data[12] = actualModelViewMatrix.data[12] + sideToSideFloat;
actualModelViewMatrix.data[13] = actualModelViewMatrix.data[13] + forwardBackFloat;

// Compute the inverseModelViewMatrix for use in the setNewOrientationValues method.
inverseModelViewMatrix = SampleMath::Matrix44FInverse(actualModelViewMatrix);

It seems like that should update the actualModelViewMatrix with the translation data; however, I'm unsure how to include the rotation and scaling info. I tried the above and there's no improvement.

For anyone else who may follow this thread in the future, I've found a wonderful page that gives some background on this stuff: http://www.songho.ca/opengl/gl_transform.html

Hi, I don't see why you modify .data[12] and .data[13] in actualModelViewMatrix, as that translates the matrix along the X and Y axes of the camera reference frame, not the world reference frame.

For the rest the code looks OK, except for the actualModelViewMatrix (I don't see its purpose).
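To see why that is: with a column-major modelview matrix, eye = M * p, so adding to data[12]/data[13] adds an offset after the rotation has already been applied, i.e. along the camera's X/Y axes, no matter how the target is oriented. A small self-contained check in plain C++ (function name is mine):

```cpp
#include <cassert>
#include <cmath>

// Apply a column-major 4x4 modelview matrix to a point (implicit w = 1).
// The last column m[12..14] is added after the rotation, so it is a
// translation expressed in the camera (eye) frame, not the world frame.
void transformPoint(const float m[16], const float p[3], float out[3]) {
    for (int i = 0; i < 3; ++i) {
        out[i] = m[0 * 4 + i] * p[0]
               + m[1 * 4 + i] * p[1]
               + m[2 * 4 + i] * p[2]
               + m[3 * 4 + i];   // m[12..14]: eye-space translation
    }
}
```

For example, take a modelview that rotates the target 90 degrees about Z and bump data[12] by 5: the world origin lands at eye coordinates (5, 0, 0), i.e. it slides along the camera's X axis regardless of the target's rotation, which matches the left/right drift you are seeing.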

I'm posting here below a code snippet that works, as I just tested it.

It's written for Android, but you can basically use it on iOS too, or adjust your code by comparing it with mine.

To give a bit of explanation to it:

What you need to do is add your "backMotion_InWorldCoordinates" translation at the end of all your "usual" transformations, using a translatePoseMatrix call just like for any other translation; please check how this is done in the code I paste below.

Also, note that inverseModelViewMatrix must be initialized to the identity matrix in the rendering initialization function (otherwise on the first frame the matrix will be undefined or all zeros).

So, here is the code snippet:

for (int tIdx = 0; tIdx < state.getNumTrackableResults(); tIdx++)
{
    // Get the trackable:
    const QCAR::TrackableResult* result = state.getTrackableResult(tIdx);
    const QCAR::Trackable& trackable = result->getTrackable();
    QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(result->getPose());
    QCAR::Matrix44F modelViewProjection;

    // Apply the usual transformations here
    SampleUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScale, &modelViewMatrix.data[0]);
    SampleUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale, &modelViewMatrix.data[0]);

    // NEW: apply our custom backward translation here
    QCAR::Vec3F backMoveCameraRef(0.0f, 0.0f, 1.0f);
    QCAR::Vec3F backMoveWorldRef = SampleMath::Vec3FTransformNormal(backMoveCameraRef, inverseModelViewMatrix);
    backMoveWorldRef = SampleMath::Vec3FNormalize(backMoveWorldRef);

    float speed = 0.2f;
    backTranslation.data[0] += speed * backMoveWorldRef.data[0];
    backTranslation.data[1] += speed * backMoveWorldRef.data[1];
    backTranslation.data[2] = 0.0f;

    SampleUtils::translatePoseMatrix(backTranslation.data[0], backTranslation.data[1], backTranslation.data[2], &modelViewMatrix.data[0]);

    // NEW: update inverseModelViewMatrix
    inverseModelViewMatrix = SampleMath::Matrix44FInverse(modelViewMatrix);

    // Multiply modelview and projection matrix as usual
    SampleUtils::multiplyMatrix(&projectionMatrix.data[0], &modelViewMatrix.data[0], &modelViewProjection.data[0]);

    glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0, (const GLvoid*) &teapotVertices[0]);

    // etc.

Let me know if you have more questions on this.

Perfect. Thanks Alessandro. Very much appreciated!! Hope someone else finds this of use also.

You're welcome.