4 replies

Hi,

I know this has been asked and answered several times; however, I don't find the presented solutions sufficient.

So what I want to do: I want to know which pixel (i.e. which coordinate on the screen) a 3D point in model space is projected to (basically, if the point is rendered black, the pixel I get should be that rendered black pixel).

There is QCAR::Tool::projectPoint, which as far as I know projects a 3D point onto the camera image, given its pose matrix and the camera calibration (basically to get the projection right?). Now, the camera image is not the screen, so there are a couple of solutions; what I found in the samples and online is the "cameraPointToScreenPoint" function. Using these two methods I get roughly the right result, though the points returned (I tried with a flat cube) do not match the pixels of the rectangle that results from the cube's projection. The result is accurate in terms of proportionality: the size of the rectangle I overlay with iOS does not match, but it scales in the right proportion when moving the camera closer to and away from the target.

The iOS view drawn from the projected results seems to hover over the rendered cube. When holding the camera at a 90° angle towards the target, the points match the region of the rendering but are too large; when lowering the angle and viewing the target from the side, they do not cover the rendered area.

So, as I could not get it to work, I decided to project the point myself, i.e. do what OpenGL does to find the screen pixel:

1. ModelViewProjection = ProjectionMatrix * ModelViewMatrix (the pose matrix). The resulting matrix is also used as input for the OpenGL renderer.

2. Use the ModelViewProjection matrix to transform a point in homogeneous model coordinates to homogeneous clip coordinates.

3. Divide by the homogeneous coordinate w to scale the result to a 3-dimensional vector in the range [-1,1]: the normalized device coordinates.

3.1 Now we have (x, y, depth) in NDC.

4. There is the viewport transform, which is set up on ARInit and which I hope corresponds to glViewport. To account for it, we have to scale the NDC to [0,1] (add one and divide the result by 2), multiply by the viewport width and height, and add the viewport position to x and y.

5. Since I use portrait mode, the resulting x coordinate should actually be the y coordinate and vice versa (as far as I understand, Vuforia always keeps the camera image in landscape). Also, I think the resulting y coordinate needs to be flipped to put the origin at the top left (iOS style).

6. Additionally, I scale the coordinates by screenSize/viewportSize, as the camera view has a different resolution than the screen.

The results do not fit either. What am I missing? Do I need to account for something else?

Here is the complete code I'm using:

```
QCAR::Vec3F SampleMath::projectToScreen2(QCAR::Matrix44F& modelViewProjection, QCAR::Vec3F& modelSpaceCoordinates, struct tagViewport viewPort, float screenScale, QCAR::Vec2F& screenSize)
{
    printf("ScreenSize is: %f, %f\n", screenSize.data[0], screenSize.data[1]);
    printf("Viewport is X,Y: %d,%d  SizeX,SizeY: %d, %d\n", viewPort.posX, viewPort.posY, viewPort.sizeX, viewPort.sizeY);

    QCAR::Vec4F homogeneousCoordinates(modelSpaceCoordinates.data[0], modelSpaceCoordinates.data[1], modelSpaceCoordinates.data[2], 1.0f);

    // step 2: model space -> clip space
    QCAR::Vec4F clipCoordinates = Vec4FTransform(homogeneousCoordinates, modelViewProjection);

    printf("Clip coordinates: %f, %f, %f, %f\n", clipCoordinates.data[0], clipCoordinates.data[1], clipCoordinates.data[2], clipCoordinates.data[3]);

    // step 3: perspective divide by w -> normalized device coordinates
    QCAR::Vec4F ndc = Vec4FDiv(clipCoordinates, clipCoordinates.data[3]);

    printf("NDC [-1, 1]: %f, %f, %f\n", ndc.data[0], ndc.data[1], ndc.data[2]);

    // step 4: viewport transform, bring [-1,1] to [0,1] and scale to the viewport
    QCAR::Vec3F windowCoordinates;
    windowCoordinates.data[0] = ((ndc.data[0] + 1.0f) / 2.0f) * viewPort.sizeX + viewPort.posX;
    windowCoordinates.data[1] = ((ndc.data[1] + 1.0f) / 2.0f) * viewPort.sizeY + viewPort.posY;
    windowCoordinates.data[2] = (ndc.data[2] + 1.0f) / 2.0f;  // depth buffer value

    printf("Windowcoords: %f, %f, %f\n", windowCoordinates.data[0], windowCoordinates.data[1], windowCoordinates.data[2]);

    // step 6: uniform scale from viewport resolution to screen resolution
    float aspectRatio = screenSize.data[1] / viewPort.sizeX;  // screen height / viewport width for portrait

    // step 5: swap x and y for portrait, flip y for a top-left origin
    QCAR::Vec3F screenCoordinates(windowCoordinates.data[1] * aspectRatio,
                                  (viewPort.sizeX - windowCoordinates.data[0]) * aspectRatio,
                                  windowCoordinates.data[2]);

    printf("Screencoordinates: %f, %f, %f\n", screenCoordinates.data[0], screenCoordinates.data[1], screenCoordinates.data[2]);

    return screenCoordinates;
}
```

This example doesn't work with the current SDK. I was able to get it to at least compile, but it's not showing up right at all; I believe that's because it's confusing x and y values due to orientation. Regardless, I can't get it to work accurately. Even getting the center of the target would be great.

My non-working code:

```
- (CGPoint) projectCoord:(CGPoint)coord inView:(const QCAR::CameraCalibration&)cameraCalibration andPose:(QCAR::Matrix34F)pose withOffset:(CGPoint)offset
{
    CGPoint converted;

    QCAR::Vec3F vec(coord.x, coord.y, 0);
    QCAR::Vec2F sc = QCAR::Tool::projectPoint(cameraCalibration, pose, vec);
    // switched x and y here, thinking maybe that'd help. nope.
    converted.y = sc.data[0] - offset.y;
    converted.x = sc.data[1] - offset.x;

    return converted;
}

- (void) calcScreenCoordsOf:(CGSize)target inView:(CGFloat *)matrix inPose:(QCAR::Matrix34F)pose
{
    // 0,0 is at the centre of the target, so the extremities are at w/2,h/2
    CGFloat w = target.width/2;
    CGFloat h = target.height/2;

    // need to account for the orientation on view size
    CGFloat viewWidth = self.frame.size.height;  // Portrait
    CGFloat viewHeight = self.frame.size.width;  // Portrait
    UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;

    // viewWidth = self.frame.size.width;
    // viewHeight = self.frame.size.height;

    // calculate any mismatch of screen to video size
    CGFloat scale = viewWidth/videoMode.mWidth;
    if (videoMode.mHeight * scale < viewHeight)
        scale = viewHeight/videoMode.mHeight;

    CGFloat scaledWidth = videoMode.mWidth * scale;
    CGFloat scaledHeight = videoMode.mHeight * scale;
    CGPoint margin = {(scaledWidth - viewWidth)/2, (scaledHeight - viewHeight)/2};

    // now project the 4 corners of the target
    CGPoint s0 = [self projectCoord:CGPointMake(-w,h) inView:cameraCalibration andPose:pose withOffset:margin];
    CGPoint s1 = [self projectCoord:CGPointMake(-w,-h) inView:cameraCalibration andPose:pose withOffset:margin];
    CGPoint s2 = [self projectCoord:CGPointMake(w,-h) inView:cameraCalibration andPose:pose withOffset:margin];
    CGPoint s3 = [self projectCoord:CGPointMake(w,h) inView:cameraCalibration andPose:pose withOffset:margin];
}
```

https://developer.vuforia.com/forum/faq/technical-how-can-i-project-target-point-screen

Yes, I have looked at the Dominoes sample, but there the reverse direction is done, and in a different way: a point on the screen is projected as a ray through the near and far planes, intersecting the marker. So they project from screen to object, not the other way around.

Have you looked at the Dominoes sample?

N