Hi,
I have recently been learning the iOS side of the Vuforia SDK (I started with Android). I am trying to replicate an app I made on Android, and in this app I use the projectScreenPointToPlane function to translate a touch in 2D screen space into the 3D AR target space. I then check the translated point and determine whether it falls within a certain area on the target.
I have this functionality working perfectly on Android, and I mirrored my implementation in my iOS project. However, when I print the translated values of my touch on iOS (intersection.data[0] and intersection.data[1]), the values are inconsistent.
For instance, if I position my device to view the target straight on, I get a translated value A [i.e. (0.0, 0.0)] when I touch the center of the target through my screen. But as I move the device, the translated coordinates become skewed. If I view the target from a greater distance or a different angle and touch the same point on the target through the screen, I receive a different translated value B [i.e. (50.0, -20.0)]. The translated value becomes so skewed from different viewpoints that it is impossible to run it through any sort of conditional.
I didn't have this problem on Android, so is there something I am missing here? I would really like to resolve this issue, as it is frustrating that what I thought I understood suddenly doesn't work on iOS.
- Alex
PS: Here is the function from the code provided in the Video Playback sample. In my app, I simply changed the conditional statement to check for a smaller range than the entire target:
- (int)tapInsideTargetWithID
{
    QCAR::Vec3F intersection, lineStart, lineEnd;

    // Get the current projection matrix and invert it
    QCAR::Matrix44F projectionMatrix = [vapp projectionMatrix];
    QCAR::Matrix44F inverseProjMatrix = SampleMath::Matrix44FInverse(projectionMatrix);
    CGRect rect = [self bounds];
    int touchInTarget = -1;

    // ----- Synchronise data access -----
    [dataLock lock];

    // The target returns as pose the centre of the trackable. Thus its
    // dimensions go from -width / 2 to width / 2 and from -height / 2 to
    // height / 2. The following if statement simply checks that the tap is
    // within this range
    for (int i = 0; i < NUM_TARGETS; ++i) {
        // Project the touch point onto the target plane (z = 0 in target space)
        SampleMath::projectScreenPointToPlane(inverseProjMatrix, videoData[i].modelViewMatrix,
                                              rect.size.width, rect.size.height,
                                              QCAR::Vec2F(touchLocation_X, touchLocation_Y),
                                              QCAR::Vec3F(0, 0, 0), QCAR::Vec3F(0, 0, 1),
                                              intersection, lineStart, lineEnd);

        // Print out the translated values
        NSLog(@"---->intersection[%i]: x: %f y: %f z: %f", i, intersection.data[0], intersection.data[1], intersection.data[2]);

        if ((intersection.data[0] >= -videoData[i].targetPositiveDimensions.data[0]) &&
            (intersection.data[0] <= videoData[i].targetPositiveDimensions.data[0]) &&
            (intersection.data[1] >= -videoData[i].targetPositiveDimensions.data[1]) &&
            (intersection.data[1] <= videoData[i].targetPositiveDimensions.data[1])) {
            // The tap is only valid if it is inside an active target
            if (YES == videoData[i].isActive) {
                touchInTarget = i;
                break;
            }
        }
    }

    [dataLock unlock];

    return touchInTarget;
}
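For reference, touchLocation_X and touchLocation_Y are set in the view's touch handler. A minimal sketch of how I capture them, assuming they are instance variables on the same view class (note that locationInView: returns points, and the sample's projectScreenPointToPlane is what applies the content scaling factor for Retina displays):

- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event
{
    UITouch *touch = [touches anyObject];

    // Touch location in points, in the view's coordinate space
    CGPoint location = [touch locationInView:self];
    touchLocation_X = location.x;
    touchLocation_Y = location.y;
}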
Hello, here at Prisma we have the same problem: our implementation of the tap converted into target space works fine on Android, but on iOS the values are always sketchy and quite inconsistent.
I personally checked the Dominoes code and followed your instructions, but ported to my code it doesn't work.
Can you provide an example function for projectScreenPointToPlane on iOS?
Our app is portrait-only; even when rotating the device, the modelView matrix works fine...
The qUtils.inverseProjMatrix is created when the camera is created.
Any guess?
Here is the one I use on iOS:
void
projectScreenPointToPlane(QCAR::Vec2F point, QCAR::Vec3F planeCenter, QCAR::Vec3F planeNormal,
                          QCAR::Matrix44F modelViewMatrix,
                          QCAR::Vec3F &intersection, QCAR::Vec3F &lineStart, QCAR::Vec3F &lineEnd)
{
    QCARutils *qUtils = [QCARutils getInstance];

    // Window Coordinates to Normalized Device Coordinates
    QCAR::VideoBackgroundConfig config = QCAR::Renderer::getInstance().getVideoBackgroundConfig();

    // The app runs in portrait, so the view's width and height are swapped
    // relative to the (landscape) camera frame
    float halfScreenWidth = qUtils.viewSize.height / 2.0f;
    float halfScreenHeight = qUtils.viewSize.width / 2.0f;

    float halfViewportWidth = config.mSize.data[0] / 2.0f;
    float halfViewportHeight = config.mSize.data[1] / 2.0f;

    // Scale the touch point (in points) to pixels before normalizing
    float x = (qUtils.contentScalingFactor * point.data[0] - halfScreenWidth) / halfViewportWidth;
    float y = (qUtils.contentScalingFactor * point.data[1] - halfScreenHeight) / halfViewportHeight * -1.0f;

    QCAR::Vec4F ndcNear(x, y, -1, 1);
    QCAR::Vec4F ndcFar(x, y, 1, 1);

    // Normalized Device Coordinates to Eye Coordinates
    QCAR::Matrix44F inverseProjMatrix = qUtils.inverseProjMatrix;
    QCAR::Vec4F pointOnNearPlane = SampleMath::Vec4FTransform(ndcNear, inverseProjMatrix);
    QCAR::Vec4F pointOnFarPlane = SampleMath::Vec4FTransform(ndcFar, inverseProjMatrix);
    pointOnNearPlane = SampleMath::Vec4FDiv(pointOnNearPlane, pointOnNearPlane.data[3]);
    pointOnFarPlane = SampleMath::Vec4FDiv(pointOnFarPlane, pointOnFarPlane.data[3]);

    // Eye Coordinates to Object Coordinates
    QCAR::Matrix44F inverseModelViewMatrix = SampleMath::Matrix44FInverse(modelViewMatrix);
    QCAR::Vec4F nearWorld = SampleMath::Vec4FTransform(pointOnNearPlane, inverseModelViewMatrix);
    QCAR::Vec4F farWorld = SampleMath::Vec4FTransform(pointOnFarPlane, inverseModelViewMatrix);

    // Cast a ray from the near plane to the far plane and intersect it
    // with the target plane
    lineStart = QCAR::Vec3F(nearWorld.data[0], nearWorld.data[1], nearWorld.data[2]);
    lineEnd = QCAR::Vec3F(farWorld.data[0], farWorld.data[1], farWorld.data[2]);
    linePlaneIntersection(lineStart, lineEnd, planeCenter, planeNormal, intersection);
}
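The linePlaneIntersection helper it calls is the one from SampleMath in the Vuforia samples; a sketch of what it looks like, assuming the sample's Vec3F helpers are available:

bool
linePlaneIntersection(QCAR::Vec3F lineStart, QCAR::Vec3F lineEnd,
                      QCAR::Vec3F pointOnPlane, QCAR::Vec3F planeNormal,
                      QCAR::Vec3F &intersection)
{
    // Direction of the ray from the near plane to the far plane
    QCAR::Vec3F lineDir = SampleMath::Vec3FSub(lineEnd, lineStart);
    lineDir = SampleMath::Vec3FNormalize(lineDir);

    QCAR::Vec3F planeDir = SampleMath::Vec3FSub(pointOnPlane, lineStart);

    float n = SampleMath::Vec3FDot(planeNormal, planeDir);
    float d = SampleMath::Vec3FDot(planeNormal, lineDir);

    if (fabs(d) < 0.00001f) {
        // Line is parallel to the plane, no intersection
        return false;
    }

    // Distance along the ray to the plane
    float dist = n / d;

    QCAR::Vec3F offset = SampleMath::Vec3FScale(lineDir, dist);
    intersection = SampleMath::Vec3FAdd(lineStart, offset);
    return true;
}

Passing the modelViewMatrix in explicitly (rather than reading it from a global) keeps the function usable per-trackable, the same way the SampleMath version is called in the Video Playback sample.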