
projectScreenPointToPlane Issue

March 21, 2014 - 1:06pm #1

Hi,

I have recently been learning the iOS side of the Vuforia SDK (I started with Android). I am trying to replicate an app I made on Android; in that app I use the projectScreenPointToPlane function to translate a touch in 2D screen space into the 3D AR target space. I then check the translated point to determine whether it falls within a certain area on the target.

I have this functionality working perfectly on Android, and I mirrored my implementation in my iOS project. However, when I print the translated values of my touch in iOS (intersection.data[0] and intersection.data[1]), the values are inconsistent.

For instance, if I position my device to view the target straight on, I get a translated value A [i.e. (0.0, 0.0)] when I touch the center of the target through my screen. But as I move the device, the translated coordinates become skewed. If I view the target with my device from a greater distance or a different angle and touch the same point on the target through the screen, I receive a different translated value B [i.e. (50.0, -20.0)]. The translated value becomes so skewed from different viewpoints that it is impossible to run it through any sort of conditionals.

I didn't have this problem on Android; is there something I am missing here? I would really like to resolve this issue, as I am frustrated that what I thought I understood suddenly doesn't work for iOS.

- Alex

 

PS: Here is the function from the code provided in the Video Playback sample. In my app, I simply changed the conditional statement to check for a smaller range than the entire target:

 

- (int)tapInsideTargetWithID
{
    QCAR::Vec3F intersection, lineStart, lineEnd;
    // Get the current projection matrix
    QCAR::Matrix44F projectionMatrix = [vapp projectionMatrix];
    QCAR::Matrix44F inverseProjMatrix = SampleMath::Matrix44FInverse(projectionMatrix);
    CGRect rect = [self bounds];
    int touchInTarget = -1;

    // ----- Synchronise data access -----
    [dataLock lock];

    // The target returns as pose the centre of the trackable.  Thus its
    // dimensions go from -width / 2 to width / 2 and from -height / 2 to
    // height / 2.  The following if statement simply checks that the tap is
    // within this range
    for (int i = 0; i < NUM_TARGETS; ++i) {
        SampleMath::projectScreenPointToPlane(inverseProjMatrix, videoData[i].modelViewMatrix,
                                              rect.size.width, rect.size.height,
                                              QCAR::Vec2F(touchLocation_X, touchLocation_Y),
                                              QCAR::Vec3F(0, 0, 0), QCAR::Vec3F(0, 0, 1),
                                              intersection, lineStart, lineEnd);

        // Print out the translated (target-space) values
        NSLog(@"---->intersection[%i]: x: %f  y: %f  z: %f", i,
              intersection.data[0], intersection.data[1], intersection.data[2]);

        if ((intersection.data[0] >= -videoData[i].targetPositiveDimensions.data[0]) &&
            (intersection.data[0] <= videoData[i].targetPositiveDimensions.data[0]) &&
            (intersection.data[1] >= -videoData[i].targetPositiveDimensions.data[1]) &&
            (intersection.data[1] <= videoData[i].targetPositiveDimensions.data[1])) {
            // The tap is only valid if it is inside an active target
            if (YES == videoData[i].isActive) {
                touchInTarget = i;
                break;
            }
        }
    }

    [dataLock unlock];
    // ----- End synchronise data access -----

    return touchInTarget;
}
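
As an aside, here is a minimal sketch of the narrowed conditional I mentioned in the PS; the 0.5 scale factor that marks out a central hot-spot is a hypothetical value for illustration, not something from the sample:

    // Hypothetical: accept taps only within the central half of the target,
    // instead of the full -dimension..+dimension range used by the sample.
    float hotX = 0.5f * videoData[i].targetPositiveDimensions.data[0];
    float hotY = 0.5f * videoData[i].targetPositiveDimensions.data[1];
    if ((intersection.data[0] >= -hotX) && (intersection.data[0] <= hotX) &&
        (intersection.data[1] >= -hotY) && (intersection.data[1] <= hotY)) {
        // The tap landed inside the smaller hot-spot region
    }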

 

 

 

projectScreenPointToPlane Issue

June 5, 2014 - 8:55am #6

Hello, here at Prisma we have the same problem: our implementation of the tap converted into the target scene works fine on Android, but on iOS the values are always sketchy and quite inconsistent.

I personally checked the Dominoes code and followed your instructions, but ported to my code it doesn't work.

Can you provide an example function for projectScreenPointToPlane on iOS?

Our app is portrait-only; even when rotating the device, the modelView matrix works fine...

The qUtils.inverseProjMatrix is created when the camera is created.

Any guesses?

Here is the one I use on iOS:

void
projectScreenPointToPlane(QCAR::Vec2F point, QCAR::Vec3F planeCenter, QCAR::Vec3F planeNormal,
                          QCAR::Vec3F &intersection, QCAR::Vec3F &lineStart, QCAR::Vec3F &lineEnd)
{
    QCARutils *qUtils = [QCARutils getInstance];

    // Window Coordinates to Normalized Device Coordinates
    QCAR::VideoBackgroundConfig config = QCAR::Renderer::getInstance().getVideoBackgroundConfig();

    float halfScreenWidth = qUtils.viewSize.height / 2.0f; // note use of height for width (portrait)
    float halfScreenHeight = qUtils.viewSize.width / 2.0f; // likewise

    float halfViewportWidth = config.mSize.data[0] / 2.0f;
    float halfViewportHeight = config.mSize.data[1] / 2.0f;

    float x = (qUtils.contentScalingFactor * point.data[0] - halfScreenWidth) / halfViewportWidth;
    float y = (qUtils.contentScalingFactor * point.data[1] - halfScreenHeight) / halfViewportHeight * -1;

    QCAR::Vec4F ndcNear(x, y, -1, 1);
    QCAR::Vec4F ndcFar(x, y, 1, 1);

    // Normalized Device Coordinates to Eye Coordinates
    QCAR::Matrix44F inverseProjMatrix = qUtils.inverseProjMatrix;

    QCAR::Vec4F pointOnNearPlane = SampleMath::Vec4FTransform(ndcNear, inverseProjMatrix);
    QCAR::Vec4F pointOnFarPlane = SampleMath::Vec4FTransform(ndcFar, inverseProjMatrix);
    pointOnNearPlane = SampleMath::Vec4FDiv(pointOnNearPlane, pointOnNearPlane.data[3]);
    pointOnFarPlane = SampleMath::Vec4FDiv(pointOnFarPlane, pointOnFarPlane.data[3]);

    // Eye Coordinates to Object Coordinates
    // NOTE: modelViewMatrix is not a parameter of this function; as in the
    // Dominoes sample, it is assumed to be a file-scope variable updated
    // from the trackable's pose every frame.
    QCAR::Matrix44F inverseModelViewMatrix = SampleMath::Matrix44FInverse(modelViewMatrix);

    QCAR::Vec4F nearWorld = SampleMath::Vec4FTransform(pointOnNearPlane, inverseModelViewMatrix);
    QCAR::Vec4F farWorld = SampleMath::Vec4FTransform(pointOnFarPlane, inverseModelViewMatrix);

    lineStart = QCAR::Vec3F(nearWorld.data[0], nearWorld.data[1], nearWorld.data[2]);
    lineEnd = QCAR::Vec3F(farWorld.data[0], farWorld.data[1], farWorld.data[2]);
    linePlaneIntersection(lineStart, lineEnd, planeCenter, planeNormal, intersection);
}

 

 

 

projectScreenPointToPlane Issue

March 26, 2014 - 4:59am #5

A few ideas:

1 - check your code against the Dominoes sample and see which coordinates get printed out.

2 - check the orientation handling, as portrait and landscape might be getting mixed up.

3 - check the units of the Image Target in the database, i.e. that you are using the same image (and target size) as in your Android sample; see the sketch after this list.
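
For point 3, here is a rough sketch of how the target sizes could be dumped from the active dataset. It assumes a QCAR 2.x-era API (ImageTracker, Tracker::IMAGE_TRACKER) and that the dataset contains only Image Targets, so adjust the accessors to your SDK version:

    QCAR::ImageTracker* tracker = static_cast<QCAR::ImageTracker*>(
        QCAR::TrackerManager::getInstance().getTracker(QCAR::Tracker::IMAGE_TRACKER));
    QCAR::DataSet* dataSet = tracker->getActiveDataSet();
    for (int i = 0; i < dataSet->getNumTrackables(); ++i) {
        // Assumption: every trackable in this dataset is an Image Target
        QCAR::ImageTarget* target = static_cast<QCAR::ImageTarget*>(dataSet->getTrackable(i));
        QCAR::Vec2F size = target->getSize();
        printf("Target '%s': %.1f x %.1f (scene units)\n",
               target->getName(), size.data[0], size.data[1]);
    }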

 


projectScreenPointToPlane Issue

March 24, 2014 - 1:34pm #4

What I am most confused about is why what I implemented for my Android app's 2D->3D conversion isn't able to also work on my iOS device. Is this a known issue? Or is there something specific I need to change that I haven't realized yet?

You can probably answer this question yourself if you compare the mathematical aspects of your implementation with the one explained in the article I am referring to;

for example, one reason might be that the camera frame is rendered in a viewport which partially falls outside the screen (i.e. the viewport origin can be at negative coordinates, and the viewport width or height can be larger than the actual screen width or height; this is to take into account the different aspect ratios of the screen and the camera frame). So it could be this, or maybe other small maths details.
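
To make that concrete, here is a rough sketch of a window-to-NDC conversion that folds in the video-background offset. windowToNDC is a hypothetical helper; it assumes that VideoBackgroundConfig's mPosition holds the offset (in pixels) of the video background centre from the screen centre, and the y sign may need flipping depending on how orientation is handled:

    // Sketch: window coordinates -> NDC, accounting for a video background
    // that can extend beyond the screen edges (mPosition/mSize in pixels).
    static QCAR::Vec2F windowToNDC(float touchX, float touchY,
                                   float screenWidth, float screenHeight,
                                   float contentScalingFactor)
    {
        const QCAR::VideoBackgroundConfig config =
            QCAR::Renderer::getInstance().getVideoBackgroundConfig();

        const float halfViewportWidth  = config.mSize.data[0] / 2.0f;
        const float halfViewportHeight = config.mSize.data[1] / 2.0f;

        // Centre of the video background in screen pixels; this is what the
        // sample code implicitly assumes to be the screen centre.
        const float viewportCentreX = screenWidth  / 2.0f + config.mPosition.data[0];
        const float viewportCentreY = screenHeight / 2.0f + config.mPosition.data[1];

        const float x = (contentScalingFactor * touchX - viewportCentreX) / halfViewportWidth;
        const float y = (contentScalingFactor * touchY - viewportCentreY) / halfViewportHeight * -1.0f;
        return QCAR::Vec2F(x, y);
    }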

 

projectScreenPointToPlane Issue

March 24, 2014 - 9:21am #3

Just checked it out... for what I am trying to do, isn't it better to convert the 2D touch into 3D space and then check whether an object was selected? What would be the benefit of doing it the way the article explains (3D space -> 2D touch)?
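
For concreteness, here is my rough understanding of what the 3D -> 2D direction would look like. This is only a sketch: projectTargetPointToScreen is a hypothetical helper, and the viewportCentre/halfViewport parameters would come from the VideoBackgroundConfig, as in the window-to-NDC step:

    // Sketch: project a target-space point into screen pixels, so the check
    // can be done directly against the 2D touch location.
    static QCAR::Vec2F projectTargetPointToScreen(QCAR::Vec3F objPoint,
                                                  QCAR::Matrix44F modelViewMatrix,
                                                  QCAR::Matrix44F projectionMatrix,
                                                  float viewportCentreX, float viewportCentreY,
                                                  float halfViewportWidth, float halfViewportHeight)
    {
        QCAR::Vec4F obj(objPoint.data[0], objPoint.data[1], objPoint.data[2], 1.0f);
        QCAR::Vec4F eye = SampleMath::Vec4FTransform(obj, modelViewMatrix);
        QCAR::Vec4F clip = SampleMath::Vec4FTransform(eye, projectionMatrix);
        clip = SampleMath::Vec4FDiv(clip, clip.data[3]); // perspective divide -> NDC

        // NDC -> screen pixels: the inverse of the window-to-NDC conversion
        float screenX = clip.data[0] * halfViewportWidth + viewportCentreX;
        float screenY = -clip.data[1] * halfViewportHeight + viewportCentreY;
        return QCAR::Vec2F(screenX, screenY);
    }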

What I am most confused about is why what I implemented for my Android app's 2D->3D conversion isn't able to also work on my iOS device. Is this a known issue? Or is there something specific I need to change that I haven't realized yet?

projectScreenPointToPlane Issue

March 24, 2014 - 3:57am #2