Target Coordinates To Screen Coordinates

March 7, 2014 - 3:15am #5

Hi,

 

I know this has been asked and answered several times; however, I find the presented solutions insufficient.

 

So here is what I want to do: I want to know which pixel (or coordinate on the screen) a 3D point in model space is projected to (basically, if that point is rendered black, the pixel I get back should be exactly that black rendered pixel).

 

There is QCAR::projectPoint, which as far as I know projects a 3D point, given its pose matrix and the camera calibration (which basically supplies the projection, right?), onto the camera image. Now the camera image is not the screen, so there are a couple of solutions for that last step; what I found in the samples and online is the "cameraPointToScreenPoint" function. Using these two methods I get roughly the right result, but the points returned (I tried with a flat cube) do not match the pixels of the rectangle that results from the cube's projection. The result is accurate in terms of proportionality: the size of the rectangle I overlay with iOS does not match, but it scales in the right proportion when I move the camera closer to or farther from the target.

The iOS view built from the projected results seems to hover over the rendered cube. When I hold the camera at a 90° angle towards the target, the points match the region of the rendering but are too large; when I lower the angle and view the target from the side, they no longer cover the rendered area.
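
For reference, this is roughly how I combine the two calls. The helper below is my own reconstruction from the samples and forum posts (portrait case only), so treat the axis swap, the flip and the aspect-fill scaling as assumptions rather than verified SDK behaviour:

// Rough sketch of the projectPoint + camera-to-screen route (portrait only).
QCAR::Vec2F projectModelPointToScreen(const QCAR::CameraCalibration& calibration,
                                      const QCAR::Matrix34F& pose,
                                      const QCAR::Vec3F& modelPoint,
                                      float screenWidth, float screenHeight)
{
    // 1) Model space -> camera image (landscape video-frame pixels).
    QCAR::Vec2F cameraPoint = QCAR::Tool::projectPoint(calibration, pose, modelPoint);

    QCAR::VideoMode videoMode =
        QCAR::CameraDevice::getInstance().getVideoMode(QCAR::CameraDevice::MODE_DEFAULT);

    // 2) Rotate the landscape frame coordinates 90 degrees for the portrait screen.
    float rotatedX = videoMode.mHeight - cameraPoint.data[1];
    float rotatedY = cameraPoint.data[0];

    // 3) Aspect-fill scale of the rotated frame onto the screen, then remove the crop margin.
    float scale = screenWidth / (float) videoMode.mHeight;
    if (videoMode.mWidth * scale < screenHeight)
        scale = screenHeight / (float) videoMode.mWidth;

    float marginX = (videoMode.mHeight * scale - screenWidth)  / 2.0f;
    float marginY = (videoMode.mWidth  * scale - screenHeight) / 2.0f;

    return QCAR::Vec2F(rotatedX * scale - marginX,
                       rotatedY * scale - marginY);
}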

 

So, as I could not get it to work, I decided to do the projection myself, i.e. do what OpenGL does to find the screen pixel:

1. ModelViewProjection = ProjectionMatrix * ModelViewMatrix (the pose matrix), i.e. the resulting matrix is the same one that is passed to the OpenGL renderer.

2. Use the ModelViewProjection matrix to transform a point given in homogeneous model-space coordinates into homogeneous clip coordinates.

3. Divide by the homogeneous coordinate w to bring the result down to a 3-dimensional vector in the range [-1,1]: the normalized device coordinates.

3.1 Now we have (x, y, depth) in NDC.

4. There is the viewport transform, which is set up on ARInit and which I hope corresponds to glViewport. To account for this, we have to scale the NDC to [0,1] (add one and divide the result by 2), multiply by the viewport size for width and height, and add the viewport position for x and y.

5. Since I use portrait mode, the resulting x coordinate should actually become the y coordinate (as far as I understand, Vuforia always treats the camera image as landscape) and vice versa. I also think the resulting y coordinate needs to be flipped to put the origin at the top left (iOS style).

6. Additionally, I scale the coordinates by screen size / viewport size, as the camera view has a different resolution than the screen.

The results do not fit either. What am I missing? Do I need to account for something else?

 

 

Here is the complete code I'm using:

 


QCAR::Vec3F SampleMath::projectToScreen2(QCAR::Matrix44F& modelViewProjection, QCAR::Vec3F& modelSpaceCoordinates, struct tagViewport viewPort, float screenScale, QCAR::Vec2F& screenSize)
{
    
    printf("ScreenSize is: %f, %f\n", screenSize.data[0],screenSize.data[1]);
    printf("Viewport is X,Y: %d,%d  SizeX,SizeY: %d, %d\n", viewPort.posX, viewPort.posY, viewPort.sizeX, viewPort.sizeY);
    
    QCAR::Vec4F homogeneousCoordinates(modelSpaceCoordinates.data[0],modelSpaceCoordinates.data[1], modelSpaceCoordinates.data[2], 1.0);
    
    QCAR::Vec4F deviceCoordinatesCoordinates = Vec4FTransform(homogeneousCoordinates, modelViewProjection);
    
    printf("Device Coordinates: %f, %f, %f, %f\n", deviceCoordinatesCoordinates.data[0],deviceCoordinatesCoordinates.data[1],deviceCoordinatesCoordinates.data[2],deviceCoordinatesCoordinates.data[3]);
    
    QCAR::Vec4F ndc = Vec4FDiv(deviceCoordinatesCoordinates, deviceCoordinatesCoordinates.data[3]);
    
    printf("NDC [-1, 1]: %f, %f, %f\n", ndc.data[0],ndc.data[1],ndc.data[2]);
    
    
    //transform to screen coordinates
    QCAR::Vec3F windowCoordinates;
    // bring to [0,1] and scale to viewPort
    
    windowCoordinates.data[0] = ((ndc.data[0]+1)/2.0)*viewPort.sizeX + viewPort.posX;
    windowCoordinates.data[1] = ((ndc.data[1]+1)/2.0)*viewPort.sizeY + viewPort.posY;
    windowCoordinates.data[2] = (ndc.data[2]+1)/2.0;  // depth buffer
    
    printf("Windowcoords: %f, %f, %f\n", windowCoordinates.data[0],windowCoordinates.data[1],windowCoordinates.data[2]);
    
    float aspectRatio = screenSize.data[1]/viewPort.sizeX;    // window Height / viewPortHeight  for portrait
    
    
    QCAR::Vec3F screenCoordinates(windowCoordinates.data[1] * aspectRatio, (viewPort.sizeX - windowCoordinates.data[0])*aspectRatio, windowCoordinates.data[2]);
    
    printf("Screencoordinates: %f, %f, %f\n", screenCoordinates.data[0],screenCoordinates.data[1],screenCoordinates.data[2]);
    
    return screenCoordinates;
}
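
For completeness, this is roughly how I call it from the render loop. The viewport struct holds the same x/y/width/height values we pass to glViewport, and the object scale, screen scale and screen size below are just example values, not the exact ones from my project:

// Example call site (illustrative values only; projectionMatrix, trackableResult
// and viewPort come from the usual render-loop setup).
QCAR::Matrix44F modelViewMatrix =
    QCAR::Tool::convertPose2GLMatrix(trackableResult->getPose());

QCAR::Matrix44F modelViewProjection;
SampleUtils::multiplyMatrix(&projectionMatrix.data[0],
                            &modelViewMatrix.data[0],
                            &modelViewProjection.data[0]);

QCAR::Vec3F corner(kObjectScale, kObjectScale, 0.0f);    // one corner of the flat cube
QCAR::Vec2F screenSize(320.0f, 480.0f);                  // view size in points (example)
float screenScale = 2.0f;                                // retina content scale (example)

QCAR::Vec3F onScreen = SampleMath::projectToScreen2(modelViewProjection, corner,
                                                    viewPort, screenScale, screenSize);
// onScreen.data[0]/[1] should be the iOS view coordinates, onScreen.data[2] the depth.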

 

Target Coordinates To Screen Coordinates

March 7, 2014 - 3:20am #4

Have you looked at the Dominoes sample?

 

N

Target Coordinates To Screen Coordinates

March 7, 2014 - 3:29am #3

Yes, I have looked at the Dominoes sample, but there the reverse direction is done, and in a different way: a point on the screen is unprojected as a ray through the near and far planes, which is then intersected with the marker plane. So they project from screen to object, not the other way around.
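
For contrast, this is roughly what the Dominoes code does. I have sketched it from memory against the sample's SampleMath helpers and left out the video-background cropping, so the details may differ from the actual sample:

// Reverse mapping (Dominoes-style): unproject a screen point onto the target plane z = 0.
QCAR::Vec3F screenPointToTargetPlane(QCAR::Vec2F screenPoint,
                                     QCAR::Matrix44F& projectionMatrix,
                                     QCAR::Matrix44F& modelViewMatrix,
                                     float viewWidth, float viewHeight)
{
    // Screen point -> normalized device coordinates (y flipped for the GL convention).
    float ndcX =  2.0f * screenPoint.data[0] / viewWidth  - 1.0f;
    float ndcY = -(2.0f * screenPoint.data[1] / viewHeight - 1.0f);

    QCAR::Vec4F ndcNear(ndcX, ndcY, -1.0f, 1.0f);   // on the near plane
    QCAR::Vec4F ndcFar (ndcX, ndcY,  1.0f, 1.0f);   // on the far plane

    // Unproject both points back into target (model) space via the inverse matrices.
    QCAR::Matrix44F inverseProjection = SampleMath::Matrix44FInverse(projectionMatrix);
    QCAR::Matrix44F inverseModelView  = SampleMath::Matrix44FInverse(modelViewMatrix);

    QCAR::Vec4F nearEye = SampleMath::Vec4FTransform(ndcNear, inverseProjection);
    QCAR::Vec4F farEye  = SampleMath::Vec4FTransform(ndcFar,  inverseProjection);
    nearEye = SampleMath::Vec4FDiv(nearEye, nearEye.data[3]);
    farEye  = SampleMath::Vec4FDiv(farEye,  farEye.data[3]);

    QCAR::Vec4F nearWorld = SampleMath::Vec4FTransform(nearEye, inverseModelView);
    QCAR::Vec4F farWorld  = SampleMath::Vec4FTransform(farEye,  inverseModelView);

    // Intersect the ray (nearWorld -> farWorld) with the marker plane z = 0.
    float t = -nearWorld.data[2] / (farWorld.data[2] - nearWorld.data[2]);
    return QCAR::Vec3F(nearWorld.data[0] + t * (farWorld.data[0] - nearWorld.data[0]),
                       nearWorld.data[1] + t * (farWorld.data[1] - nearWorld.data[1]),
                       0.0f);
}

So this goes from screen to target, while what I need is exactly the opposite direction.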

Target Coordinates To Screen Coordinates

March 7, 2014 - 4:32am #2

Have a look at this article, which addresses your question:

https://developer.vuforia.com/forum/faq/technical-how-can-i-project-target-point-screen

 

Target Coordinates To Screen Coordinates

May 14, 2014 - 2:37pm #1

This example doesn't work with the current SDK. I was able to get it to at least compile, but it's not showing up right at all, I believe because it confuses x and y values due to orientation. Regardless, I can't get it to work accurately. Even getting the center of the target would be great.

 

My non-working code:

- (CGPoint) projectCoord:(CGPoint)coord inView:(const QCAR::CameraCalibration&)cameraCalibration andPose:(QCAR::Matrix34F)pose withOffset:(CGPoint)offset
{
    CGPoint converted;
    
    QCAR::Vec3F vec(coord.x,coord.y,0);
    QCAR::Vec2F sc = QCAR::Tool::projectPoint(cameraCalibration, pose, vec);
//switched this, thinking maybe that'd help. nope.
    converted.y = sc.data[0] - offset.y;
    converted.x = sc.data[1] - offset.x;
    
    return converted;
}

- (void) calcScreenCoordsOf:(CGSize)target inView:(CGFloat *)matrix inPose:(QCAR::Matrix34F)pose
{
    // 0,0 is at centre of target so extremities are at w/2,h/2
    CGFloat w = target.width/2;
    CGFloat h = target.height/2;
    // need to account for the orientation on view size
    CGFloat viewWidth = self.frame.size.height; // Portrait
    CGFloat viewHeight = self.frame.size.width; // Portrait
    UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;

       // viewWidth = self.frame.size.width;
        //viewHeight = self.frame.size.height;
  
    // calculate any mismatch of screen to video size
    QCAR::CameraDevice& cameraDevice = QCAR::CameraDevice::getInstance();
    const QCAR::CameraCalibration& cameraCalibration = cameraDevice.getCameraCalibration();
    QCAR::VideoMode videoMode = cameraDevice.getVideoMode(QCAR::CameraDevice::MODE_DEFAULT);
    CGFloat scale = viewWidth/videoMode.mWidth;
    if (videoMode.mHeight * scale < viewHeight)
        scale = viewHeight/videoMode.mHeight;
    
    CGFloat scaledWidth = videoMode.mWidth * scale;
    CGFloat scaledHeight = videoMode.mHeight * scale;
    CGPoint margin = {(scaledWidth - viewWidth)/2, (scaledHeight - viewHeight)/2};

    
    // now project the 4 corners of the target
    CGPoint s0 = [self projectCoord:CGPointMake(-w,h) inView:cameraCalibration andPose:pose withOffset:margin];
    CGPoint s1 = [self projectCoord:CGPointMake(-w,-h) inView:cameraCalibration andPose:pose withOffset:margin];
    CGPoint s2 = [self projectCoord:CGPointMake(w,-h) inView:cameraCalibration andPose:pose withOffset:margin];
    CGPoint s3 = [self projectCoord:CGPointMake(w,h) inView:cameraCalibration andPose:pose withOffset:margin];
   
}
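
For what it's worth, the minimal case I'd try first is projecting just the target centre (0,0,0) and checking whether that single point lands on the rendered centre, e.g. by reusing a frame-to-screen mapping like the projectModelPointToScreen helper sketched earlier in this thread. The portraitWidth/portraitHeight names below are mine and are just the view's width and height in portrait; all of this is an assumption on my part, not verified against the current SDK:

// Sanity check: does the projected target centre land on the rendered centre?
const QCAR::CameraCalibration& cameraCalibration =
    QCAR::CameraDevice::getInstance().getCameraCalibration();

float portraitWidth  = self.frame.size.width;    // view width in portrait
float portraitHeight = self.frame.size.height;   // view height in portrait

QCAR::Vec2F center = projectModelPointToScreen(cameraCalibration, pose,
                                               QCAR::Vec3F(0.0f, 0.0f, 0.0f),
                                               portraitWidth, portraitHeight);

// If this single point lines up, the corner mismatch above is probably just the
// missing scale factor in projectCoord: the projected point is in video-frame
// pixels, while the margin is in screen points.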

 
