Hi,
I need some confirmation of my thoughts.
OpenGL uses 4 different transformations:
- Modeling Transformation
- Viewing Transformation
- Projection Transformation
- Viewport Transformation
It is usual to combine the modeling and the viewing transformation into a ModelView Matrix.
The Modeling Transformation defines the position of an object and the Viewing Transformation defines the position of the camera. The Projection Transformation defines the near and far clipping planes and whether the view is perspective or orthographic. The Viewport Transformation maps everything to screen coordinates.
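For context, here is a minimal fixed-function OpenGL ES 1.x sketch of where each of the four transformations shows up; the frustum bounds, camera offset and object position are placeholder values, not anything taken from the QCAR SDK:

#include <GLES/gl.h>

void setupTransformations(int screenWidth, int screenHeight)
{
    // Viewport transformation: map normalised device coordinates to window pixels
    glViewport(0, 0, screenWidth, screenHeight);

    // Projection transformation: perspective frustum with near/far clipping planes
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glFrustumf(-1.0f, 1.0f, -1.0f, 1.0f, 2.0f, 2000.0f);

    // Viewing transformation: "position the camera" by applying its inverse pose
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    glTranslatef(0.0f, 0.0f, -10.0f);

    // Modeling transformation: place the object in the world;
    // viewing and modeling end up combined in the single GL_MODELVIEW matrix
    glTranslatef(2.0f, 0.0f, 0.0f);
    // ... draw calls follow ...
}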
To get screen coordinates I have to use the following code snippet:
const QCAR::CameraCalibration& cameraCalibration = QCAR::CameraDevice::getInstance().getCameraCalibration();

// Project the target's origin (0, 0, 0) from the trackable's frame into camera-image coordinates
QCAR::Vec2F screenPoint = QCAR::Tool::projectPoint(cameraCalibration, trackable->getPose(), QCAR::Vec3F(0, 0, 0));

QCAR::VideoMode videoMode = QCAR::CameraDevice::getInstance().getVideoMode(QCAR::CameraDevice::MODE_DEFAULT);
QCAR::VideoBackgroundConfig config = QCAR::Renderer::getInstance().getVideoBackgroundConfig();

// Offsets account for the video background being centred (and possibly cropped) on the screen
float xOffset = ((float) screenWidth - config.mSize.data[0]) / 2.0f + config.mPosition.data[0];
float yOffset = ((float) screenHeight - config.mSize.data[1]) / 2.0f - config.mPosition.data[1];

// Scale from camera-image coordinates to the video background size, then shift by the offsets
float x = screenPoint.data[0] * (config.mSize.data[0] / (float) videoMode.mWidth) + xOffset;
float y = screenPoint.data[1] * (config.mSize.data[1] / (float) videoMode.mHeight) + yOffset;
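A possible follow-up use of the same call, assuming a hypothetical target half-size of 50 units (projectPoint accepts any point in the trackable's coordinate frame, not just the origin):

// Project a corner of the target instead of its centre (0, 0, 0);
// targetHalfWidth/targetHalfHeight are assumed values in the target's own units.
float targetHalfWidth  = 50.0f;
float targetHalfHeight = 50.0f;
QCAR::Vec2F cornerPoint = QCAR::Tool::projectPoint(cameraCalibration, trackable->getPose(),
                                                   QCAR::Vec3F(targetHalfWidth, targetHalfHeight, 0.0f));
// cornerPoint is still in camera-image coordinates and needs the same
// video-background scaling and offsets as screenPoint above.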
For the Projection Transformation I found the following code snippet for the near/far clipping planes:
projectionMatrix = QCAR::Tool::getProjectionGL(cameraCalibration, 2.0f, 2000.0f);
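For what it's worth, the returned QCAR::Matrix44F is a column-major 4x4 that can be loaded directly as the GL projection matrix; the sketch below assumes the fixed-function ES 1.x pipeline (with ES 2.0 you would pass it to your shader as a uniform instead):

QCAR::Matrix44F projectionMatrix = QCAR::Tool::getProjectionGL(cameraCalibration, 2.0f, 2000.0f);

glMatrixMode(GL_PROJECTION);
glLoadMatrixf(projectionMatrix.data);   // column-major 4x4, ready for OpenGL
glMatrixMode(GL_MODELVIEW);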
Which information does the cameraCalibration include?
What about the perspective/orthographic setting?
Finally, a question about the ModelView matrix. How is the information about the camera position and the object position gathered? I know that it is possible to position objects relative to the camera with this matrix. Does the SDK use a default matrix and define the origin as (0,0,0)?
Maybe everything I ask is a big secret :D
http://paulbourke.net/miscellaneous/lens/
The pose returned by the tracker assumes a camera sitting at the origin, pointing in the positive Z direction with X to the right and Y down. Each trackable has its own coordinate frame as defined in this image: https://ar.qualcomm.at/resources/images/coordinateSystems.jpg
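To connect this to the ModelView question: the pose can be converted into an OpenGL modelview matrix with QCAR::Tool::convertPose2GLMatrix, so anything drawn afterwards is positioned relative to the trackable. A minimal sketch, assuming the fixed-function pipeline (the translation value is only an example):

// Convert the 3x4 pose (given in the camera frame described above)
// into a column-major 4x4 OpenGL modelview matrix.
QCAR::Matrix44F modelViewMatrix = QCAR::Tool::convertPose2GLMatrix(trackable->getPose());

glMatrixMode(GL_MODELVIEW);
glLoadMatrixf(modelViewMatrix.data);

// Modeling transformation on top of the pose: e.g. lift the object a few
// units along the target's Z axis (the value here is only an example).
glTranslatef(0.0f, 0.0f, 3.0f);
// ... draw the object here ...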
- Kim