
Playing Video Over the image

August 5, 2011 - 8:35am #1

Hi All,

I understood the concept of tracking an image in the Image Targets sample code.
I am able to track an image that I uploaded using the Target Management Tool.
Whenever the image is tracked, instead of displaying another image on top of it, is it possible to play a video on top of the tracked image using AR?

I tried to play one using the code below:

-(void)renderFrameQCAR{
[self performSelectorOnMainThread:@selector(playVideo) withObject:nil waitUntilDone:NO];
}

-(void)playVideo{
MPMoviePlayerController *videoViewCont = [[MPMoviePlayerController alloc]initWithContentURL:[NSURL fileURLWithPath:@"WP_000019-qcif-180k.mov"]];
[videoViewCont play];
[videoViewCont.view setFrame:self.bounds];
[self addSubview:videoViewCont.view];
}

The method is getting called, but the video is not playing.

Please give your valuable suggestions.

Playing Video Over the image

October 15, 2012 - 12:45am #68

Download vuforia-videoplayback-is-1-0-2.zip and try it; that's the thing you are searching for.

Playing Video Over the image

July 25, 2012 - 7:29am #67

Hi jamesgilmartin,

Can I suggest you try the Video Playback sample released recently, as it should run out of the box.

Video Playback Sample App Posted | Vuforia

Video Playback Sample - guidelines | Vuforia

 

N

Playing Video Over the image

July 25, 2012 - 6:39am #66

Hi,

When I try to use this code I get a:

"Member access into incomplete type 'CALayer'"

error on the line

cornerTrackingV.layer.position = pos;

Can anybody offer any advice?

 

James

Re: Playing Video Over the image

June 11, 2012 - 5:09am #65
MoSR wrote:

For QCAR iOS SDK 1.5 (without the overlaid object autorotating - I'll spin another version later where the corner tracking view is always in the user's orientation, and the coordinates match):

In this sample I'll keep the coordinates in the delegate as it's somewhat easier to get to that than it is for the delegate to navigate up the more complex view hierarchy.

In ImageTargetsAppDelegate.h:

@interface ImageTargetsAppDelegate : NSObject <UIApplicationDelegate> {
    UIWindow* window;
    ARParentViewController* arParentViewController;
    UIImageView *splashV;

    UIView *s0V;
    UIView *s1V;
    UIView *s2V;
    UIView *s3V;
    
@public
    CGPoint s0;
    CGPoint s1;
    CGPoint s2;
    CGPoint s3;

}

@property (nonatomic)     CGPoint s0;
@property (nonatomic)     CGPoint s1;
@property (nonatomic)     CGPoint s2;
@property (nonatomic)     CGPoint s3;

In ImageTargetsAppDelegate.mm synthesize the properties and add the following imports (QuartzCore is needed for the CALayer access further down; without it you get a "member access into incomplete type 'CALayer'" error):

#import <QCAR/CameraDevice.h>
#import <QuartzCore/QuartzCore.h>

In application:didFinishLaunching... insert the following to set up the views:

// Add the EAGLView and the overlay view to the window
arParentViewController = [[ARParentViewController alloc] init];
arParentViewController.arViewRect = screenBounds;
[window insertSubview:arParentViewController.view atIndex:0];

CGRect frame = {0,0,10,10};
s0V = [[UIView alloc] initWithFrame:frame];
s1V = [[UIView alloc] initWithFrame:frame];
s2V = [[UIView alloc] initWithFrame:frame];
s3V = [[UIView alloc] initWithFrame:frame];
s0V.backgroundColor = [UIColor redColor];
s1V.backgroundColor = [UIColor greenColor];
s2V.backgroundColor = [UIColor blueColor];
s3V.backgroundColor = [UIColor yellowColor];

frame = CGRectMake(0, 0, screenBounds.size.height, screenBounds.size.width);
UIView *cornerTrackingV = [[UIView alloc] initWithFrame:frame];
[cornerTrackingV addSubview:s0V];
[cornerTrackingV addSubview:s1V];
[cornerTrackingV addSubview:s2V];
[cornerTrackingV addSubview:s3V];

CGPoint pos;
pos.x = screenBounds.size.width / 2;
pos.y = screenBounds.size.height / 2;
CGAffineTransform rotate = CGAffineTransformMakeRotation(90 * M_PI  / 180);
cornerTrackingV.layer.position = pos;
cornerTrackingV.transform = rotate;

[window addSubview:cornerTrackingV];

[window makeKeyAndVisible];

And add the timer callback method to move the views to the new coordinates:

- (void)moveCorners:(NSTimer*)theTimer
{
    CGRect frame = CGRectMake(s0.x, s0.y, 10, 10);
    s0V.frame = frame;
    frame = CGRectMake(s1.x, s1.y, 10, 10);
    s1V.frame = frame;
    frame = CGRectMake(s2.x, s2.y, 10, 10);
    s2V.frame = frame;
    frame = CGRectMake(s3.x, s3.y, 10, 10);
    s3V.frame = frame;
}

In EAGLView.mm, add the two methods for calculating the target corner projections:

- (CGPoint) projectCoord:(CGPoint)coord inView:(const QCAR::CameraCalibration&)cameraCalibration andPose:(QCAR::Matrix34F)pose withOffset:(CGPoint)offset andScale:(CGFloat)scale
{
    CGPoint converted;
    
    QCAR::Vec3F vec(coord.x,coord.y,0);
    QCAR::Vec2F sc = QCAR::Tool::projectPoint(cameraCalibration, pose, vec);
    converted.x = sc.data[0]*scale - offset.x;
    converted.y = sc.data[1]*scale - offset.y;
    
    return converted;
}

- (void) calcScreenCoordsOf:(CGSize)target inView:(CGFloat *)matrix inPose:(QCAR::Matrix34F)pose
{
    // 0,0 is at centre of target so extremities are at w/2,h/2
    CGFloat w = target.width/2;
    CGFloat h = target.height/2;
    
    // need to account for the orientation on view size
    CGFloat viewWidth = self.frame.size.height; // Portrait
    CGFloat viewHeight = self.frame.size.width; // Portrait    
    UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
    if (UIInterfaceOrientationIsLandscape(orientation))
    {
        viewWidth = self.frame.size.width;
        viewHeight = self.frame.size.height;        
    }
    
    // calculate any mismatch of screen to video size
    QCAR::CameraDevice& cameraDevice = QCAR::CameraDevice::getInstance();
    const QCAR::CameraCalibration& cameraCalibration = cameraDevice.getCameraCalibration();
    QCAR::VideoMode videoMode = cameraDevice.getVideoMode(QCAR::CameraDevice::MODE_DEFAULT);
    
    CGFloat scale = viewWidth/videoMode.mWidth;
    if (videoMode.mHeight * scale < viewHeight)
        scale = viewHeight/videoMode.mHeight;
    CGFloat scaledWidth = videoMode.mWidth * scale;
    CGFloat scaledHeight = videoMode.mHeight * scale;
        
    CGPoint margin = {(scaledWidth - viewWidth)/2, (scaledHeight - viewHeight)/2};
    
    // now project the 4 corners of the target
    ImageTargetsAppDelegate *delegate = (ImageTargetsAppDelegate *)[[UIApplication sharedApplication] delegate];
    delegate.s0 = [self projectCoord:CGPointMake(-w,h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
    delegate.s1 = [self projectCoord:CGPointMake(-w,-h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
    delegate.s2 = [self projectCoord:CGPointMake(w,-h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
    delegate.s3 = [self projectCoord:CGPointMake(w,h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
}

These are invoked from within renderFrameQCAR:

            ShaderUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScale, &modelViewMatrix.data[0]);
            ShaderUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale, &modelViewMatrix.data[0]);
            ShaderUtils::multiplyMatrix(&qUtils.projectionMatrix.data[0], &modelViewMatrix.data[0], &modelViewProjection.data[0]);
            
            CGSize target = {247,173};
            [self calcScreenCoordsOf:target inView:&modelViewProjection.data[0] inPose:trackable->getPose()];
            
            glUseProgram(shaderProgramID);

@MoSR
Hi,
I did the video overlay using AVURLAsset. This works very well with local files, but I need to do it for remote files on a server, so I need to use AVPlayer. I did everything as you said in this post. Now I am using the MPMoviePlayerController class, and the video is being overlaid. The problem now is rotation: how can I rotate the frame of the MPMoviePlayerController? I think I need to calculate it with some mathematical calculations. If you have some idea, please help.

I read that you wrote this in another post:
"Yes it is possible - I did some preliminary research to use the trackable's pose matrix as a CGAffineTransform on a view.
The trick is getting the transformation into the same 'space' as OpenGL - but since you're after an 'impression' and not an exact alignment that should be easier."
So I guess I can use the modelview matrix for the CGAffineTransform. If that is possible, can you please tell me how I can do this?
regards
pankaj bansal

Re: Playing Video Over the image

June 10, 2012 - 10:50pm #64

omkarois wrote:

http://stackoverflow.com/questions/4934886/avurlasset-cannot-load-with-remote-file

but here some people have tried various approaches, and some of them work:
http://stackoverflow.com/questions/6242131/using-avassetreader-to-read-stream-from-a-remote-asset

One more thing that I would like to know: are you playing sound with the video?

Regards
Omkar :)

Hey man,

I also read that to stream we have to use AVPlayerItem. I tried the code MoSR posted in the first few posts, but the video is not getting overlaid on the marker perfectly. With my method I was using just a shader to overlay the video: I was grabbing frames from the video and mapping them onto the marker. Now I am using AVPlayerItem, but its frame is not being set perfectly onto the marker. I am still trying and will tell you if I manage it. I have already gone through the Stack Overflow posts you mention, and they all say we can't use AVURLAsset for a remote file, so I guess I have to use AVPlayerItem.
Regarding sound: with my shader method I was not playing any sound, but with AVPlayerItem the sound also plays. I guess we could play sound with the shader method too; we would just need to make an AVPlayer item and play/stop it. I don't know; it may be difficult and I haven't tried that much. The remote file is my priority right now.

Pankaj Bansal

Re: Playing Video Over the image

June 9, 2012 - 1:27pm #63

http://stackoverflow.com/questions/4934886/avurlasset-cannot-load-with-remote-file

but here some people have tried various approaches, and some of them work:
http://stackoverflow.com/questions/6242131/using-avassetreader-to-read-stream-from-a-remote-asset

One more thing that I would like to know: are you playing sound with the video?

Regards
Omkar :)

Re: Playing Video Over the image

June 9, 2012 - 4:09am #62
MoSR wrote:

Hi shivintu,

To add additional UI, whether video views or UI controls, it's probably best practise to create a new parent view, that parents both the EAGLView and a view that contains the overlaid content.

I've done that here as a test, to play the video in a 2D pop-up overlaying the 3D/Camera view and it works fine.

Remember that the UIScreen bounds will not be in the same orientation as the sub-views in the ImageTargets case as they are forced landscape, so set the EAGLView subview frame to be [0,0,screenBounds.size.height, screenBounds.size.width].

CGRect screenBounds = [[UIScreen mainScreen] bounds];

window = [[UIWindow alloc] initWithFrame: screenBounds];
viewController = [[ARViewController alloc] init];
parentV = [[UIView alloc] initWithFrame: screenBounds];

screenBounds = CGRectMake(0, 0, parentV.frame.size.height, parentV.frame.size.width);
view = [[EAGLView alloc] initWithFrame: screenBounds];
[parentV addSubview:view];

CGRect subScreenBounds = CGRectMake(500, 100, 500, 500);
videoV = [[UIView alloc] initWithFrame:subScreenBounds];
[parentV addSubview:videoV];

[viewController setView: parentV];

Note that to play video within the 3D world you'd need to sync the video frame rate with the camera frame rate and insert each frame into the 3D view as the texture of a displayed object. This is unlikely to yield good performance as you are bypassing all the optimisation in the movie player.

Can you tell me where I should write this code?

Re: Playing Video Over the image

June 7, 2012 - 10:21pm #61
mp3il wrote:

Any chance someone can post an example project, based on ImageTargets, that renders an image or video instead of 3D?

I'm trying to modify the renderFrameQCAR method to draw something else and am having too many problems, which I can't find a good way to debug.

Any tutorial / simple code (probably much simpler than the 3D thing with the teapot)

would be really appreciated. Thanks again.

Hey man,
look at my code in the last 2-3 posts. It worked for me to play a movie on the marker. Hope it will help.

Cheeers.
Pankaj Bansal

Re: Playing Video Over the image

June 7, 2012 - 10:06am #60

Any chance someone can post an example project, based on ImageTargets, that renders an image or video instead of 3D?

I'm trying to modify the renderFrameQCAR method to draw something else and am having too many problems, which I can't find a good way to debug.

Any tutorial / simple code (probably much simpler than the 3D thing with the teapot)

would be really appreciated. Thanks again.

Re: Playing Video Over the image

June 7, 2012 - 5:53am #59
omkarois wrote:

Hi Pankaj,
I suspect that your use of CVImageBufferRef might be the problem. I have tried the same with AVAsset, but was converting the CVImageBufferRef to a UIImage of the desired size.
Here is a code sample:
-(void)loadTextureWithImage:(UIImage *)textureImage withId:(GLuint)textureID
{
    if (textureImage) {
        // 1: get the underlying CGImage
        CGImageRef textureImageRef = textureImage.CGImage;
        if (!textureImageRef) {
            NSLog(@"Failed to load image");
        }

        // 2: allocate an RGBA buffer for the pixel data
        size_t width = CGImageGetWidth(textureImageRef);
        size_t height = CGImageGetHeight(textureImageRef);

        GLubyte *textureData = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));

        // Flip the image vertically for OpenGL
        CGAffineTransform transform = CGAffineTransformIdentity;
        CGSize imageSize = CGSizeMake(CGImageGetWidth(textureImageRef), CGImageGetHeight(textureImageRef));

        transform = CGAffineTransformMakeTranslation(0.0, imageSize.height);
        transform = CGAffineTransformScale(transform, 1.0, -1.0);

        CGContextRef textureContext = CGBitmapContextCreate(textureData, width, height, 8, width * 4, CGImageGetColorSpace(textureImageRef), kCGImageAlphaPremultipliedLast);
        CGContextConcatCTM(textureContext, transform);

        // 3: draw at a power-of-two size so GL can digest it
        CGRect glRect = CGRectMake(0, 0, 512, 256);
        CGContextDrawImage(textureContext, glRect, textureImageRef);

        CGContextRelease(textureContext);

        // 4: upload the pixels as a GL texture
        glActiveTexture(GL_TEXTURE0);
        glGenTextures(1, &textureID);
        glBindTexture(GL_TEXTURE_2D, textureID);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
        free(textureData);

    } else {
        NSLog(@"Invalid Image");
    }
}

You can optimize this by passing a CVImageBufferRef as the parameter rather than a UIImage.

@omkarois
Hey man,
thanks for replying. The video overlay is working for me now; I guess there was some mistake with mapping the vertices.
Now I have another problem. I did the video overlay for a video on my computer; now I need to do it for a video on a server. When I try this I get the error "AVAssetReader cannot be initialized for non-local URLs". Do you have some idea about this? Since I am not using AVPlayer, I think I will have to deal with the streaming myself; with AVPlayer the streaming and all the work is done by the player itself. How can I stream a video and simultaneously play it using AVURLAsset and AVAssetReader?
If you have some idea about this, please let me know.

Thanks again for replying.
pankaj Bansal

Re: Playing Video Over the image

June 6, 2012 - 1:01am #58
pankajbansaljiit wrote:

hey, thanks for replying.
I have tried a texture whose size is a power of two, but it does not work.
glTexSubImage2D gives me an error whenever I use it: GL error 0x0502, which stands for GL_INVALID_OPERATION. I will search on it.
You said you also had this problem; how did you solve it? If you have some working code, can you please share it? I won't copy it, I just want to learn how you did it. I have been stuck here for more than 10 days now. The problem is I don't know what is wrong with my code, so please do share some code if possible.

One thing more: I made a small video out of the teapot texture images that come with Vuforia, and even that didn't work. So I think there is some problem with grabbing the frames, or maybe with making the buffers. I really can't figure out the problem, as the width and height of the frames come out correct. But if I use the teapot texture PNG and create a texture from it in the loop (making and then deleting the texture again and again in renderFrameQCAR), it works. So the problem is with the frames I grab, maybe because the reader grabs them in BGRA. I am using GL_BGRA in glTexImage2D but it is not working. If you have some idea about this, please share.

I will try to do this with glTexSubImage2D. If I am not wrong, I mainly just have to make changes in the renderFrameQCAR function; please correct me if I am wrong. I think people have done this by other methods too; the answers earlier in this thread use other methods for the overlay, but I was not able to understand them, so if you can help me there I would really appreciate it.

Thanks for replying.

@MoSR can you answer my question? I need an answer ASAP.
pankaj Bansal.

Hi Pankaj,
I suspect that your use of CVImageBufferRef might be the problem. I have tried the same with AVAsset, but was converting the CVImageBufferRef to a UIImage of the desired size.
Here is a code sample:
-(void)loadTextureWithImage:(UIImage *)textureImage withId:(GLuint)textureID
{
    if (textureImage) {
        // 1: get the underlying CGImage
        CGImageRef textureImageRef = textureImage.CGImage;
        if (!textureImageRef) {
            NSLog(@"Failed to load image");
        }

        // 2: allocate an RGBA buffer for the pixel data
        size_t width = CGImageGetWidth(textureImageRef);
        size_t height = CGImageGetHeight(textureImageRef);

        GLubyte *textureData = (GLubyte *)calloc(width * height * 4, sizeof(GLubyte));

        // Flip the image vertically for OpenGL
        CGAffineTransform transform = CGAffineTransformIdentity;
        CGSize imageSize = CGSizeMake(CGImageGetWidth(textureImageRef), CGImageGetHeight(textureImageRef));

        transform = CGAffineTransformMakeTranslation(0.0, imageSize.height);
        transform = CGAffineTransformScale(transform, 1.0, -1.0);

        CGContextRef textureContext = CGBitmapContextCreate(textureData, width, height, 8, width * 4, CGImageGetColorSpace(textureImageRef), kCGImageAlphaPremultipliedLast);
        CGContextConcatCTM(textureContext, transform);

        // 3: draw at a power-of-two size so GL can digest it
        CGRect glRect = CGRectMake(0, 0, 512, 256);
        CGContextDrawImage(textureContext, glRect, textureImageRef);

        CGContextRelease(textureContext);

        // 4: upload the pixels as a GL texture
        glActiveTexture(GL_TEXTURE0);
        glGenTextures(1, &textureID);
        glBindTexture(GL_TEXTURE_2D, textureID);

        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);

        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, textureData);
        free(textureData);

    } else {
        NSLog(@"Invalid Image");
    }
}

You can optimize this by passing a CVImageBufferRef as the parameter rather than a UIImage.

Re: Playing Video Over the image

June 4, 2012 - 6:07am #57

http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml
http://stackoverflow.com/questions/6004688/opengl-es-texture-not-power-of-two-iphone

hey, thanks for replying.
I have tried a texture whose size is a power of two, but it does not work.
glTexSubImage2D gives me an error whenever I use it: GL error 0x0502, which stands for GL_INVALID_OPERATION. I will search on it.
You said you also had this problem; how did you solve it? If you have some working code, can you please share it? I won't copy it, I just want to learn how you did it. I have been stuck here for more than 10 days now. The problem is I don't know what is wrong with my code, so please do share some code if possible.

One thing more: I made a small video out of the teapot texture images that come with Vuforia, and even that didn't work. So I think there is some problem with grabbing the frames, or maybe with making the buffers. I really can't figure out the problem, as the width and height of the frames come out correct. But if I use the teapot texture PNG and create a texture from it in the loop (making and then deleting the texture again and again in renderFrameQCAR), it works. So the problem is with the frames I grab, maybe because the reader grabs them in BGRA. I am using GL_BGRA in glTexImage2D but it is not working. If you have some idea about this, please share.

I will try to do this with glTexSubImage2D. If I am not wrong, I mainly just have to make changes in the renderFrameQCAR function; please correct me if I am wrong. I think people have done this by other methods too; the answers earlier in this thread use other methods for the overlay, but I was not able to understand them, so if you can help me there I would really appreciate it.

Thanks for replying.

@MoSR can you answer my question? I need an answer ASAP.
pankaj Bansal.

Re: Playing Video Over the image

May 31, 2012 - 1:35pm #56

http://www.opengl.org/sdk/docs/man/xhtml/glTexImage2D.xml
http://stackoverflow.com/questions/6004688/opengl-es-texture-not-power-of-two-iphone

Re: Playing Video Over the image

May 27, 2012 - 10:46pm #55

Please, can someone answer the question I asked in the post above?
I really need to finish this ASAP.

Re: Playing Video Over the image

May 25, 2012 - 5:54am #54

hello,

I am posting here for the first time.
I am trying the same thing: playing a video over the marker with Vuforia instead of a 3D model.

I have made changes in the ImageTargets project. I have worked with OpenGL and have made a 2D texture. If I use a still image the result comes out fine, but when I get frames from the video and try to map those textures I get a black screen.

Here is the code. I have used AVAsset to get the video frames from the video:

- (void)readMovie
{
    NSString *filePath = [[[NSBundle mainBundle] resourcePath] stringByAppendingPathComponent:@"movie.mp4"];
    NSURL *url = [NSURL fileURLWithPath:filePath];
    asset = [[AVURLAsset alloc] initWithURL:url options:nil];

    NSError *error = nil;
    AVAssetTrack *videoTrack = [[asset tracksWithMediaType:AVMediaTypeVideo] objectAtIndex:0];
    _movieReader = [[AVAssetReader alloc] initWithAsset:asset error:&error];
    NSString *key = (NSString *)kCVPixelBufferPixelFormatTypeKey;
    NSNumber *value = [NSNumber numberWithUnsignedInt:kCVPixelFormatType_32BGRA];
    NSDictionary *videoSettings = [NSDictionary dictionaryWithObject:value forKey:key];
    [_movieReader addOutput:[AVAssetReaderTrackOutput assetReaderTrackOutputWithTrack:videoTrack outputSettings:videoSettings]];
    [_movieReader startReading];
}

- (CVImageBufferRef)readNextMovieFrame
{
    if (_movieReader.status == AVAssetReaderStatusReading)
    {
        AVAssetReaderTrackOutput *output = [_movieReader.outputs objectAtIndex:0];
        CMSampleBufferRef sampleBuffer = [output copyNextSampleBuffer];
        if (sampleBuffer)
        {
            CVImageBufferRef imageBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
            return imageBuffer;
        }
    }
    return 0;
}

In renderFrameQCAR I have made the following changes:

CVImageBufferRef image = [self readNextMovieFrame];
CVPixelBufferLockBaseAddress(image, 0);
GLfloat width = CVPixelBufferGetWidth(image);
GLfloat height = CVPixelBufferGetHeight(image);

NSLog(@"width = %f height = %f", width, height);
glGenTextures(1, &tID);
glBindTexture(GL_TEXTURE_2D, tID);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0, GL_RGBA, GL_UNSIGNED_BYTE, CVPixelBufferGetBaseAddress(image));

glUseProgram(shaderProgramID);

glVertexAttribPointer(vertexHandle, 3, GL_FLOAT, GL_FALSE, 0, vertices);
glVertexAttribPointer(textureCoordHandle, 2, GL_FLOAT, GL_FALSE, 0, texCoor);
glEnableVertexAttribArray(vertexHandle);
glActiveTexture(GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, tID);

glUniformMatrix4fv(mvpMatrixHandle, 1, GL_FALSE, (const GLfloat*)&modelViewProjection.data[0]);
glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, indices);
glDeleteTextures(1, &tID);
CVPixelBufferUnlockBaseAddress(image, 0);

If I use a still image I get the correct result. Also, with the video frames I get the correct width and height of the texture, so the image buffer is delivering the frames. But the textures are not rendering; they come out black.

I made a separate project without AR and the video plays fine there, so the problem is with the AR code.

I have been stuck at this point for 4-5 days now. Please, someone help me.

Thanks in advance.

Re: Playing Video Over the image

May 21, 2012 - 3:56am #53
shivintu wrote:

Thank you very much MoSR, I tried it and it is working. I have another problem: how can we place the video screen exactly on top of the tracked image area, with the same width and height as the tracked image? Also, the image scales: if I move the camera close to the image, the entire phone screen is occupied by it, and when I move away, its share of the screen is reduced. How can we scale the video screen in the same way?

I have tried the code below to place the video screen on top of the tracked image:

const QCAR::Tracker& tracker = QCAR::Tracker::getInstance();
const QCAR::CameraCalibration& cameraCalibration = tracker.getCameraCalibration();
QCAR::Vec2F screenPoint = QCAR::Tool::projectPoint(cameraCalibration, trackable->getPose(), QCAR::Vec3F(0, 0, 0));
targetXPos = screenPoint.data[0];
targetYPos = screenPoint.data[1];
[videoV setFrame:CGRectMake(targetXPos, targetYPos, 237, 150)];

Here I give the width and height statically, because I am not able to get the image width and height. I have tried the QCAR::Image::getWidth() API, but as there is no static function to get an Image class instance, I am not able to get the width and height dynamically.

The video screen is not placed exactly on top of the image; it moves along with the image, but with some rect offset, as only the image's x and y positions are tracked.

I need help on how to place the video screen exactly on top of the tracked image, with the same width and height, and on how to scale it.

Please help me.

Can you tell me in which file you inserted that code?

Re: Playing Video Over the image

May 21, 2012 - 3:48am #52

Hello,
I have just installed the Vuforia SDK and run the sample application "Image Targets". I want to add a video instead of the teapot.
How can I do it?
Please do help me; please reply.

Re: Playing Video Over the image

May 9, 2012 - 2:18am #51

Where should I add this code?

Re: Playing Video Over the image

May 8, 2012 - 1:54am #50

Hi all,
can someone upload some example code? I have tried to implement this, but there are always some problems. :confused:

Re: Playing Video Over the image

April 24, 2012 - 5:28am #49

Hi Sentio,

You need some logic to handle the undetect case. You may want to time out the lack of detection, rather than having the augmentation disappear straight away: reset a timer on every detection, and if it fires then you've lost detection. The length of the timer will depend on the required behaviour, and on whether you want to bridge short periods of non-detection.

You may also want to use the pose to animate your object, so that when you stop detecting the augmentation glides to a stop; or if it redetects it animates to the new position rather than jumping.

Once you've decided that the target is not showing you can message your overlay view code to hide, remove itself from the view hierarchy and/or destruct.
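The timeout logic described above can be sketched in plain C (which drops straight into the sample's .mm files). The struct and function names, and the 0.5-second grace period in the usage note, are illustrative, not anything from the SDK:

```c
#include <stdbool.h>

/* Hypothetical detection-timeout state: refresh the clock on every
   detection, and report "lost" only once no detection has arrived
   for `timeout` seconds. */
typedef struct {
    double lastDetectionTime; /* time of the most recent detection */
    double timeout;           /* grace period before declaring "lost" */
} DetectionTimeout;

/* Call whenever renderFrameQCAR sees the trackable this frame. */
static void detectionSeen(DetectionTimeout *dt, double now) {
    dt->lastDetectionTime = now;
}

/* Call every frame; returns true once the target should be considered lost. */
static bool detectionLost(const DetectionTimeout *dt, double now) {
    return (now - dt->lastDetectionTime) > dt->timeout;
}
```

For example, with a 0.5 s grace period, a detection at t=10.0 keeps the overlay alive at t=10.3 but not at t=10.6; once detectionLost first returns true, hide the overlay view or remove it from the hierarchy.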

Re: Playing Video Over the image

April 22, 2012 - 2:09pm #48

Hello,

after some work I can replace the teapot with a video or image.

My problem is how to remove the points when it is not tracking:

when I turn the camera somewhere else, away from the target, the points and shape stay on screen.

I found the place where the vertex attributes etc. are disabled.

How can I call removeFromSuperview for the view in CornerTracking.m from there?

Thanks for the help.

Re: Playing Video Over the image

April 13, 2012 - 5:06am #47

FYI - I'm about to edit my earlier posted examples to fix a scaling problem on the iPad.

Re: Playing Video Over the image

April 10, 2012 - 9:28am #46

@Supereid86

Your question about the squares not lining up with the target corners: this is probably because you need to change the following to match your target size,

CGSize target = {247,173};

or read the dimensions directly from the target object using QCAR::ImageTarget::getSize().

Re: Playing Video Over the image

April 9, 2012 - 12:49pm #45

I tried the following
-(void)renderFrameQCAR{
[self performSelectorOnMainThread:@selector(playVideo) withObject:nil waitUntilDone:NO];
}

-(void)playVideo{
MPMoviePlayerController *videoViewCont = [[MPMoviePlayerController alloc]initWithContentURL:[NSURL fileURLWithPath:@"testvideo.flv"]];
[videoViewCont play];
[videoViewCont.view setFrame:self.bounds];
[self addSubview:videoViewCont.view];
}

but it gives me an "undefined symbols for architecture armv7" error for MPMoviePlayerController.

Re: Playing Video Over the image

March 27, 2012 - 10:19am #44

In that case I need to point you at Phil Rideout's book "iPhone 3D Programming" (O'Reilly), or at Jeff LaMarche's blog at iPhoneDevelopment.blogspot.com. There are other good OpenGL tutorials out there, but we can't do this by posts on a forum, sorry. :-)

Re: Playing Video Over the image

March 27, 2012 - 10:08am #43
MoSR wrote:

Hi again,

(I think there's a little confusion over the use of 2D here - I effectively mean "on the plane of the screen" whereas I think you mean "on a 2D plane within the 3D world")

If you only want an image to appear fixed to the target, and not an interactive UIView, then you should implement that in OpenGL within renderFrameQCAR. Make your augmentation a 2D plane (that is, replace the teapot with a rectangle) and use your image as its texture.

That is exactly what I want to do. I am a complete OpenGL beginner and don't know how to make my augmentation a 2D plane or add a texture.
Can you point me in the right direction?

Thanks so much MoSR.

Re: Playing Video Over the image

March 27, 2012 - 9:21am #42

Hi again,

(I think there's a little confusion over the use of 2D here - I effectively mean "on the plane of the screen" whereas I think you mean "on a 2D plane within the 3D world")

If you only want an image to appear fixed to the target, and not an interactive UIView, then you should implement that in OpenGL within renderFrameQCAR. Make your augmentation a 2D plane (that is, replace the teapot with a rectangle) and use your image as its texture.
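The geometry for that replacement can be sketched with nothing beyond what the sample already sets up (shader, texture, modelViewProjection): the teapot arrays are swapped for a flat quad with one texture coordinate per vertex. The array names here are illustrative:

```c
/* A flat quad centred on the target, in the target's local coordinates.
   Scale it to the target size with scalePoseMatrix, as the sample
   already does for the teapot. */
static const float quadVertices[] = {
    -0.5f, -0.5f, 0.0f,   /* bottom-left  */
     0.5f, -0.5f, 0.0f,   /* bottom-right */
     0.5f,  0.5f, 0.0f,   /* top-right    */
    -0.5f,  0.5f, 0.0f,   /* top-left     */
};

/* One (u,v) texture coordinate per vertex. */
static const float quadTexCoords[] = {
    0.0f, 1.0f,
    1.0f, 1.0f,
    1.0f, 0.0f,
    0.0f, 0.0f,
};

/* Two triangles covering the quad. */
static const unsigned short quadIndices[] = { 0, 1, 2,  2, 3, 0 };
```

These arrays feed the same glVertexAttribPointer / glDrawElements calls the sample makes for the teapot, ending with glDrawElements(GL_TRIANGLES, 6, GL_UNSIGNED_SHORT, quadIndices).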

Re: Playing Video Over the image

March 27, 2012 - 9:06am #41

@MoSR

Thanks for the help,

I think I am in a good place to ask a question.

I want to create a 2D plane on the tracker area and overlay a 2D image on top that sticks to the tracker image 100%, regardless of camera position, and always covers its entire area. I read in some other threads that this can be done by creating a 2D plane and overlaying a texture on top?
I want to do everything in 2D.

It sounds like what I need to do is your suggestion 3:
3) Arbitrarily rotated with the target, but still 2D.

As you can see in the image below, the corners of the image I overlay do not match the corners of the tracker image.

What I also don't understand is why the other views (the small squares in the corners) don't line up perfectly with the tracker image.

Thanks in advance!

Re: Playing Video Over the image

March 27, 2012 - 8:20am #40

@Supereid86:

There are many possible behaviour models here, amongst which are:

1) Static to the phone screen, rectilinear to the screen, and 2D
2) Auto-rotated with the UI, rectilinear to the screen and 2D
3) Arbitrarily rotated with the target, but still 2D
4) Arbitrarily rotated with the target in 3D (similar to a planar OpenGL augmentation)

1) and 2) are given in the earlier posted examples

3) is where choffman is trying to get to, above, and is just a matter of applying the right geometry.

4) requires a mapping of the pose-Matrix and projection matrix into a CATransform.

To be honest I've not got very far with 4) but not spent a lot of time on it either. I can get the target exhibiting the same movement as the target, but not superimposed precisely over the target. I think I need to understand CATransform a little better.

Anybody who knows how to do this, please pipe up! :-)

Re: Playing Video Over the image

March 27, 2012 - 8:08am #39

@choffman:

First of all does it work okay when you are just scaling, not rotating? Does the view take on the size you want? (Rotation won't alter the size of the view).

If so you then need to think about the rotation centre-point, where 0 degrees is and what constitutes a positive angle. You are using the angle of the s0-s3 line - does that orient the view correctly, or do you need to be adding 90, 180 or 270 degrees?

Re: Playing Video Over the image

March 27, 2012 - 8:02am #38

Sorry, I realized I needed to implement the code from "For QCAR iOS SDK 1.5 (without the overlayed object autorotating)".

... Will keep posted on progress

Re: Playing Video Over the image

March 27, 2012 - 7:54am #37

MoSR:

I'm trying to overlay an image onto the target image that sticks to the corners of the target and stretches/shrinks depending on the angle of the camera.

I used this code to get an orange centre view, but it doesn't fully stick to the corners of the target: when I turn the camera to view the tracker from the side, the orange view stays perpendicular to the camera instead of lying flat on the target.

How do I make the image stick to the target?
I'm using Qualcomm V1.5

Thanks in advance!

Re: Playing Video Over the image

March 26, 2012 - 11:10am #36

http://dl.dropbox.com/u/55338579/Screenshots/IMG_0101.PNG
http://dl.dropbox.com/u/55338579/Screenshots/IMG_0102.PNG

Is there no chance to apply the GL 4x4 pose to the layer? Basically that should be possible (CATransform3D is also a 4x4 matrix), but this also gives strange results!

Any ideas?

Regards,
choffmann

Re: Playing Video Over the image

March 26, 2012 - 3:13am #35

Hi choffman,

It depends what effect you want to achieve.

1) UIViews stay fixed to "landscape left"

2) UIViews stay rectilinear with phone orientation, or

3) UIViews rotate arbitrarily with the target

The second example I've provided does 2), and for 3) you would use the first example but alter the way the corner information is used to position the UIView. If you calculate an angle from the coordinates and apply that to the CATransform of the UIView's CALayer, the view should rotate with the target. I've created a simple scaling calculation in the second sample.

Re: Playing Video Over the image

March 23, 2012 - 12:26pm #34

Ah, I understand that now.

But now that the position is correct, how to apply the correct rotation and the correct scale of the pose to the uiview objects?

Thanks and regards,
choffmann

Re: Playing Video Over the image

March 12, 2012 - 7:43am #33

Hi choffman,

If in both examples you set the block size to 10x30 in moveCorners you'll see the situation more clearly.

One example keeps the augmented corner shapes oriented with the camera view (not the world), the other auto-rotates those shapes with the UI. So if you are showing text or a movie it will keep in an orientation where the user can still appreciate the content.

Re: Playing Video Over the image

March 8, 2012 - 3:15am #32

Hi MoSR,

I copied your code from the posting "For QCAR iOS SDK 1.5 (without the overlayed object autorotating..." and everything works, except that the overlaid objects rotate, just like in the follow-up example ("On QCAR iOS 1.5 with the UIView attached to the target and autorotated").

Any ideas?

Regards,
choffmann

Re: Playing Video Over the image

March 7, 2012 - 2:56am #31

Hi Sentio - maybe you didn't start with the ImageTargets sample app? (And maybe I didn't explicitly say you should, sorry).

Re: Playing Video Over the image

March 5, 2012 - 2:06pm #30

Hello, when I paste the code into my didFinishLaunchingWithOptions I get an error. You can see which error here:

http://www.haai.tv/error.png

Thanks for the help

Re: Playing Video Over the image

March 2, 2012 - 10:37am #29

On QCAR iOS 1.5 with the UIView attached to the target and autorotated:

We're going to demonstrate drawing a view at each of the corner points of a target, and a scaled view central to the target, all of which will stay oriented correctly.

Create a subclass of ARParentViewController (CornerTrackingVC) to insert a new layer between the ARView and the popup Overlay View:

#import <UIKit/UIKit.h>
#import "ARParentViewController.h"

@interface CornerTrackingVC : ARParentViewController
{
    UIView *s0V;
    UIView *s1V;
    UIView *s2V;
    UIView *s3V;
    UIView *centreV;
}

@property (nonatomic, assign) UIView *s0V;
@property (nonatomic, assign) UIView *s1V;
@property (nonatomic, assign) UIView *s2V;
@property (nonatomic, assign) UIView *s3V;
@property (nonatomic, assign) UIView *centreV;

@end

And:

#import <QuartzCore/QuartzCore.h>
#import "CornerTrackingVC.h"
#import "ARViewController.h"
#import "OverlayViewController.h"

@implementation CornerTrackingVC

@synthesize s0V;
@synthesize s1V;
@synthesize s2V;
@synthesize s3V;
@synthesize centreV;

- (void) loadView
{
    NSLog(@"ARParentVC: creating");
    parentView = [[UIView alloc] initWithFrame:arViewRect];
    parentView.autoresizesSubviews = YES;
    
    // Add the EAGLView and the overlay view to the window
    arViewController = [[ARViewController alloc] init];
    
    // need to set size here to setup camera image size for AR
    arViewController.arViewSize = arViewRect.size;
    [parentView addSubview:arViewController.view];
    
    // These are the four corner points
    CGRect frame = {0,0,4,4};
    s0V = [[UIView alloc] initWithFrame:frame];
    s1V = [[UIView alloc] initWithFrame:frame];
    s2V = [[UIView alloc] initWithFrame:frame];
    s3V = [[UIView alloc] initWithFrame:frame];
    
    s0V.backgroundColor = [UIColor redColor];
    s1V.backgroundColor = [UIColor greenColor];
    s2V.backgroundColor = [UIColor blueColor];
    s3V.backgroundColor = [UIColor yellowColor];

    // And this is the centre-piece
    centreV = [[UIView alloc] initWithFrame:CGRectMake(20, 50, 4, 4)]; 
    centreV.backgroundColor = [UIColor orangeColor];
    [parentView addSubview:centreV];
    
    // Create a container that is correctly resized and rotated
    frame = CGRectMake(0, 0, arViewRect.size.width, arViewRect.size.height);
    UIView *cornerTrackingV = [[UIView alloc] initWithFrame:frame];
    [cornerTrackingV addSubview:s0V];
    [cornerTrackingV addSubview:s1V];
    [cornerTrackingV addSubview:s2V];
    [cornerTrackingV addSubview:s3V];
    cornerTrackingV.autoresizingMask = UIViewAutoresizingFlexibleWidth | UIViewAutoresizingFlexibleHeight;
    [parentView addSubview:cornerTrackingV];
     
    // Create an auto-rotating overlay view and its view controller (used for
    // displaying UI objects, such as the camera control menu)
    overlayViewController = [[OverlayViewController alloc] init];
    [parentView addSubview: overlayViewController.view];
    
    self.view = parentView;
}

- (void)dealloc
{
    self.s0V = nil;
    self.s1V = nil;
    self.s2V = nil;
    self.s3V = nil;
    self.centreV = nil;
    
    [super dealloc];
}

In the app delegate, hold the coordinates and ref the subclass:

#import <UIKit/UIKit.h>
@class CornerTrackingVC;

@interface ImageTargetsAppDelegate : NSObject <UIApplicationDelegate> {
    UIWindow* window;
    CornerTrackingVC* cornerTrackingVC;
    UIImageView *splashV;

@public
    CGPoint s0;
    CGPoint s1;
    CGPoint s2;
    CGPoint s3;
    CGPoint centre;

}

@property (nonatomic)     CGPoint s0;
@property (nonatomic)     CGPoint s1;
@property (nonatomic)     CGPoint s2;
@property (nonatomic)     CGPoint s3;
@property (nonatomic)     CGPoint centre;

@end

Set up the subclass in place of ARParentViewController, and add a timer for the animation:

    // Add the EAGLView and the overlay view to the window
    cornerTrackingVC = [[CornerTrackingVC alloc] init];
    cornerTrackingVC.arViewRect = screenBounds;
    [window insertSubview:cornerTrackingVC.view atIndex:0];

    [window makeKeyAndVisible];

    [NSTimer scheduledTimerWithTimeInterval:0.04 target:self selector:@selector(moveCorners:) userInfo:NULL repeats:YES];

And add the method to manipulate the views per time period, using the target coordinates:

- (void)moveCorners:(NSTimer*)theTimer
{
    // move each of the corner 'tags'
    CGRect frame = CGRectMake(s0.x, s0.y, 4, 4);
    cornerTrackingVC.s0V.frame = frame;
    frame = CGRectMake(s1.x, s1.y, 4, 4);
    cornerTrackingVC.s1V.frame = frame;
    frame = CGRectMake(s2.x, s2.y, 4, 4);
    cornerTrackingVC.s2V.frame = frame;
    frame = CGRectMake(s3.x, s3.y, 4, 4);
    cornerTrackingVC.s3V.frame = frame;
    
    // simplistically make the scale the average of the two diagonals
    // we could extract the scale from the pose
    CGFloat diag1 = sqrtf(powf(s0.x-s2.x,2.0) + powf(s0.y-s2.y,2.0));
    CGFloat diag2 = sqrtf(powf(s1.x-s3.x,2.0) + powf(s1.y-s3.y,2.0));

    // draw a 4:3 rectangle scaled to the target 
    frame.size.width = (diag1 + diag2)/2 / 2.0;
    frame.size.height = frame.size.width * 3.0/4.0;
    frame.origin.x = centre.x - frame.size.width/2;
    frame.origin.y = centre.y - frame.size.height/2;
    
    cornerTrackingVC.centreV.frame = frame;
}

In the EAGLView add methods that calculate the correct positions of target points, accounting for the screen orientation:

- (CGPoint) projectCoord:(CGPoint)coord inView:(const QCAR::CameraCalibration&)cameraCalibration andPose:(QCAR::Matrix34F)pose withOffset:(CGPoint)offset andScale:(CGFloat)scale
{
    CGPoint converted;
    
    QCAR::Vec3F vec(coord.x,coord.y,0);
    QCAR::Vec2F sc = QCAR::Tool::projectPoint(cameraCalibration, pose, vec);
    converted.x = sc.data[0]*scale - offset.x;
    converted.y = sc.data[1]*scale - offset.y;
    
    return converted;
}

- (CGPoint) toPortrait:(CGPoint) coord
{
    CGPoint newCoord;
    newCoord.x = self.frame.size.width - coord.y;
    newCoord.y = coord.x;
    
    return newCoord;
}

- (CGPoint) toPortraitUD:(CGPoint) coord
{
    CGPoint newCoord;
    newCoord.x = coord.y;
    newCoord.y = self.frame.size.height - coord.x;
    
    return newCoord;
}

- (CGPoint) toLandscapeLeft:(CGPoint) coord
{
    CGPoint newCoord;
    newCoord.x = self.frame.size.width - coord.x;
    newCoord.y = self.frame.size.height - coord.y;
    
    return newCoord;
}

- (void) calcScreenCoordsOf:(CGSize)target inView:(CGFloat *)matrix inPose:(QCAR::Matrix34F)pose
{
    // 0,0 is at centre of target so extremities are at w/2,h/2
    CGFloat w = target.width/2;
    CGFloat h = target.height/2;
    
    // need to account for the orientation on view size
    CGFloat viewWidth = self.frame.size.height; // Portrait
    CGFloat viewHeight = self.frame.size.width; // Portrait    
    UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
    if (UIInterfaceOrientationIsLandscape(orientation))
    {
        viewWidth = self.frame.size.width;
        viewHeight = self.frame.size.height;        
    }
    
    // calculate any mismatch of screen to video size
    QCAR::CameraDevice& cameraDevice = QCAR::CameraDevice::getInstance();
    const QCAR::CameraCalibration& cameraCalibration = cameraDevice.getCameraCalibration();
    QCAR::VideoMode videoMode = cameraDevice.getVideoMode(QCAR::CameraDevice::MODE_DEFAULT);
    
    CGFloat scale = viewWidth/videoMode.mWidth;
    if (videoMode.mHeight * scale < viewHeight)
        scale = viewHeight/videoMode.mHeight;
    CGFloat scaledWidth = videoMode.mWidth * scale;
    CGFloat scaledHeight = videoMode.mHeight * scale;
        
    CGPoint margin = {(scaledWidth - viewWidth)/2, (scaledHeight - viewHeight)/2};
    
    // now project the 4 corners of the target
    ImageTargetsAppDelegate *delegate = [[UIApplication sharedApplication] delegate];
    delegate.s0 = [self projectCoord:CGPointMake(-w,h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];

... ditto for s1-s3 and centre using (-w,-h), (w,-h), (w,h) and (0,0) ...
    
    // correct the coordinates for screen orientation
    switch (orientation) {
        case UIInterfaceOrientationPortrait:
            delegate.s0 = [self toPortrait:delegate.s0];
            delegate.s1 = [self toPortrait:delegate.s1];
            delegate.s2 = [self toPortrait:delegate.s2];
            delegate.s3 = [self toPortrait:delegate.s3];
            delegate.centre = [self toPortrait:delegate.centre];
            break;
            
        case UIInterfaceOrientationPortraitUpsideDown:
            delegate.s0 = [self toPortraitUD:delegate.s0];
            delegate.s1 = [self toPortraitUD:delegate.s1];
            delegate.s2 = [self toPortraitUD:delegate.s2];
            delegate.s3 = [self toPortraitUD:delegate.s3];
            delegate.centre = [self toPortraitUD:delegate.centre];
            break;
            
        case UIInterfaceOrientationLandscapeLeft:
            delegate.s0 = [self toLandscapeLeft:delegate.s0];
            delegate.s1 = [self toLandscapeLeft:delegate.s1];
            delegate.s2 = [self toLandscapeLeft:delegate.s2];
            delegate.s3 = [self toLandscapeLeft:delegate.s3];
            delegate.centre = [self toLandscapeLeft:delegate.centre];
            break;
            
        default:
            break;
    }
}

And in renderFrameQCAR add the code to the loop over detected trackables:

            CGSize target = {247,173};
            [self calcScreenCoordsOf:target inView:&modelViewProjection.data[0] inPose:trackable->getPose()];

Re: Playing Video Over the image

March 1, 2012 - 7:53am #28

For QCAR iOS SDK 1.5 (without the overlayed object autorotating - I'll spin another version later where the corner tracking view is always in the user's orientation, and the coordinates match):

In this sample I'll keep the coordinates in the delegate as it's somewhat easier to get to that than it is for the delegate to navigate up the more complex view hierarchy.

In ImageTargetsAppDelegate.h:

@interface ImageTargetsAppDelegate : NSObject <UIApplicationDelegate> {
    UIWindow* window;
    ARParentViewController* arParentViewController;
    UIImageView *splashV;

    UIView *s0V;
    UIView *s1V;
    UIView *s2V;
    UIView *s3V;
    
@public
    CGPoint s0;
    CGPoint s1;
    CGPoint s2;
    CGPoint s3;

}

@property (nonatomic)     CGPoint s0;
@property (nonatomic)     CGPoint s1;
@property (nonatomic)     CGPoint s2;
@property (nonatomic)     CGPoint s3;

In ImageTargetsAppDelegate.mm synthesize the properties and add the following imports:

#import <QCAR/CameraDevice.h>

In application:didFinishLaunching... insert the following to set up the views:

// Add the EAGLView and the overlay view to the window
arParentViewController = [[ARParentViewController alloc] init];
arParentViewController.arViewRect = screenBounds;
[window insertSubview:arParentViewController.view atIndex:0];

CGRect frame = {0,0,10,10};
s0V = [[UIView alloc] initWithFrame:frame];
s1V = [[UIView alloc] initWithFrame:frame];
s2V = [[UIView alloc] initWithFrame:frame];
s3V = [[UIView alloc] initWithFrame:frame];
s0V.backgroundColor = [UIColor redColor];
s1V.backgroundColor = [UIColor greenColor];
s2V.backgroundColor = [UIColor blueColor];
s3V.backgroundColor = [UIColor yellowColor];

frame = CGRectMake(0, 0, screenBounds.size.height, screenBounds.size.width);
UIView *cornerTrackingV = [[UIView alloc] initWithFrame:frame];
[cornerTrackingV addSubview:s0V];
[cornerTrackingV addSubview:s1V];
[cornerTrackingV addSubview:s2V];
[cornerTrackingV addSubview:s3V];

CGPoint pos;
pos.x = screenBounds.size.width / 2;
pos.y = screenBounds.size.height / 2;
CGAffineTransform rotate = CGAffineTransformMakeRotation(90 * M_PI  / 180);
cornerTrackingV.layer.position = pos;
cornerTrackingV.transform = rotate;

[window addSubview:cornerTrackingV];

[window makeKeyAndVisible];

And add the timer callback method to move the views to the new coordinates:

- (void)moveCorners:(NSTimer*)theTimer
{
    CGRect frame = CGRectMake(s0.x, s0.y, 10, 10);
    s0V.frame = frame;
    frame = CGRectMake(s1.x, s1.y, 10, 10);
    s1V.frame = frame;
    frame = CGRectMake(s2.x, s2.y, 10, 10);
    s2V.frame = frame;
    frame = CGRectMake(s3.x, s3.y, 10, 10);
    s3V.frame = frame;
}

In EAGLView.mm, add the two methods for calculating the target corner projections:

- (CGPoint) projectCoord:(CGPoint)coord inView:(const QCAR::CameraCalibration&)cameraCalibration andPose:(QCAR::Matrix34F)pose withOffset:(CGPoint)offset andScale:(CGFloat)scale
{
    CGPoint converted;
    
    QCAR::Vec3F vec(coord.x,coord.y,0);
    QCAR::Vec2F sc = QCAR::Tool::projectPoint(cameraCalibration, pose, vec);
    converted.x = sc.data[0]*scale - offset.x;
    converted.y = sc.data[1]*scale - offset.y;
    
    return converted;
}

- (void) calcScreenCoordsOf:(CGSize)target inView:(CGFloat *)matrix inPose:(QCAR::Matrix34F)pose
{
    // 0,0 is at centre of target so extremities are at w/2,h/2
    CGFloat w = target.width/2;
    CGFloat h = target.height/2;
    
    // need to account for the orientation on view size
    CGFloat viewWidth = self.frame.size.height; // Portrait
    CGFloat viewHeight = self.frame.size.width; // Portrait    
    UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
    if (UIInterfaceOrientationIsLandscape(orientation))
    {
        viewWidth = self.frame.size.width;
        viewHeight = self.frame.size.height;        
    }
    
    // calculate any mismatch of screen to video size
    QCAR::CameraDevice& cameraDevice = QCAR::CameraDevice::getInstance();
    const QCAR::CameraCalibration& cameraCalibration = cameraDevice.getCameraCalibration();
    QCAR::VideoMode videoMode = cameraDevice.getVideoMode(QCAR::CameraDevice::MODE_DEFAULT);
    
    CGFloat scale = viewWidth/videoMode.mWidth;
    if (videoMode.mHeight * scale < viewHeight)
        scale = viewHeight/videoMode.mHeight;
    CGFloat scaledWidth = videoMode.mWidth * scale;
    CGFloat scaledHeight = videoMode.mHeight * scale;
        
    CGPoint margin = {(scaledWidth - viewWidth)/2, (scaledHeight - viewHeight)/2};
    
    // now project the 4 corners of the target
    ImageTargetsAppDelegate *delegate = [[UIApplication sharedApplication] delegate];
    delegate.s0 = [self projectCoord:CGPointMake(-w,h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
    delegate.s1 = [self projectCoord:CGPointMake(-w,-h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
    delegate.s2 = [self projectCoord:CGPointMake(w,-h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
    delegate.s3 = [self projectCoord:CGPointMake(w,h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
}

These are invoked from within renderFrameQCAR:

            ShaderUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScale, &modelViewMatrix.data[0]);
            ShaderUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale, &modelViewMatrix.data[0]);
            ShaderUtils::multiplyMatrix(&qUtils.projectionMatrix.data[0], &modelViewMatrix.data[0], &modelViewProjection.data[0]);
            
            CGSize target = {247,173};
            [self calcScreenCoordsOf:target inView:&modelViewProjection.data[0] inPose:trackable->getPose()];
            
            glUseProgram(shaderProgramID);

Re: Playing Video Over the image

March 1, 2012 - 4:30am #27

Following sommeralex's observations I've updated the sample code for tracking the corners of a target to be compatible with the QCAR iOS SDK 1.0 release (rather than the beta), shown below. In the beta there was an additional view controller which was optimised out in the release. I'll create a new sample for QCAR iOS SDK 1.5.

For QCAR iOS SDK 1.0:

We're going to modify the ImageTargets sample app to draw a coloured flag on each corner of the detected target.

ImageTargetsAppDelegate.h add:

    UIView *s0V;
    UIView *s1V;
    UIView *s2V;
    UIView *s3V;

ImageTargetsAppDelegate.mm add:

    // Add the EAGLView and the overlay view to the window
    [window addSubview:view];
    

    // add a mini view for each target corner in a parent view
    UIView *parentV = [[UIView alloc] initWithFrame:viewBounds];
    CGRect frame = {0,0,10,10};
    s0V = [[UIView alloc] initWithFrame:frame];
    s1V = [[UIView alloc] initWithFrame:frame];
    s2V = [[UIView alloc] initWithFrame:frame];
    s3V = [[UIView alloc] initWithFrame:frame];
    s0V.backgroundColor = [UIColor redColor];
    s1V.backgroundColor = [UIColor greenColor];
    s2V.backgroundColor = [UIColor blueColor];
    s3V.backgroundColor = [UIColor yellowColor];
    [parentV addSubview:s0V];
    [parentV addSubview:s1V];
    [parentV addSubview:s2V];
    [parentV addSubview:s3V];
    
    // rotate the parent view to match the camera view orientation
    parentV.layer.position = pos;
    parentV.transform = rotate;
    
    [window addSubview:parentV];
    
    [window addSubview: overlayViewController.view];
    [window makeKeyAndVisible];
    
    // Perform actions on the EAGLView now it has been created
    [view onCreate];

    // for this demo update the target corners on a simple timer 
    [NSTimer scheduledTimerWithTimeInterval:0.04 target:self selector:@selector(moveCorners:) userInfo:NULL repeats:YES];

Plus the timer callback function:

- (void)moveCorners:(NSTimer*)theTimer
{
    CGRect frame = CGRectMake(view.s0.x, view.s0.y, 10, 10);
    s0V.frame = frame;
    frame = CGRectMake(view.s1.x, view.s1.y, 10, 10);
    s1V.frame = frame;
    frame = CGRectMake(view.s2.x, view.s2.y, 10, 10);
    s2V.frame = frame;
    frame = CGRectMake(view.s3.x, view.s3.y, 10, 10);
    s3V.frame = frame;
}

In EAGLView.h add the following properties to the interface in the usual places:

@public
    CGPoint s0;
    CGPoint s1;
    CGPoint s2;
    CGPoint s3;

@property (nonatomic)     CGPoint s0;
@property (nonatomic)     CGPoint s1;
@property (nonatomic)     CGPoint s2;
@property (nonatomic)     CGPoint s3;

And in EAGLView.mm synthesize the properties, import QCAR/Vectors.h, and add the following (noting the change in which dimension is now used for calculating the offset):

- (CGPoint) projectCoord:(CGPoint)coord inView:(const QCAR::CameraCalibration&)cameraCalibration andPose:(QCAR::Matrix34F)pose withOffset:(CGPoint)offset andScale:(CGFloat)scale
{
    CGPoint converted;
    
    QCAR::Vec3F vec(coord.x,coord.y,0);
    QCAR::Vec2F sc = QCAR::Tool::projectPoint(cameraCalibration, pose, vec);
    converted.x = sc.data[0]*scale - offset.x;
    converted.y = sc.data[1]*scale - offset.y;
    
    return converted;
}

- (void) calcScreenCoordsOf:(CGSize)target inView:(CGFloat *)matrix inPose:(QCAR::Matrix34F)pose
{
    // 0,0 is at centre of target so extremities are at w/2,h/2
    CGFloat w = target.width/2;
    CGFloat h = target.height/2;
    
    // need to account for the orientation on view size
    CGFloat viewWidth = self.frame.size.height; // Portrait
    CGFloat viewHeight = self.frame.size.width; // Portrait    
    UIInterfaceOrientation orientation = [UIApplication sharedApplication].statusBarOrientation;
    if (UIInterfaceOrientationIsLandscape(orientation))
    {
        viewWidth = self.frame.size.width;
        viewHeight = self.frame.size.height;        
    }
    
    // calculate any mismatch of screen to video size
    QCAR::CameraDevice& cameraDevice = QCAR::CameraDevice::getInstance();
    const QCAR::CameraCalibration& cameraCalibration = cameraDevice.getCameraCalibration();
    QCAR::VideoMode videoMode = cameraDevice.getVideoMode(QCAR::CameraDevice::MODE_DEFAULT);
    
    CGFloat scale = viewWidth/videoMode.mWidth;
    if (videoMode.mHeight * scale < viewHeight)
        scale = viewHeight/videoMode.mHeight;
    CGFloat scaledWidth = videoMode.mWidth * scale;
    CGFloat scaledHeight = videoMode.mHeight * scale;
        
    CGPoint margin = {(scaledWidth - viewWidth)/2, (scaledHeight - viewHeight)/2};
    
    // now project the 4 corners of the target
    ImageTargetsAppDelegate *delegate = [[UIApplication sharedApplication] delegate];
    delegate.s0 = [self projectCoord:CGPointMake(-w,h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
    delegate.s1 = [self projectCoord:CGPointMake(-w,-h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
    delegate.s2 = [self projectCoord:CGPointMake(w,-h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
    delegate.s3 = [self projectCoord:CGPointMake(w,h) inView:cameraCalibration andPose:pose withOffset:margin andScale:scale];
}

And in renderFrameQCAR, during the tracking loop add:

            ShaderUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScale, &modelViewMatrix.data[0]);
            ShaderUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale, &modelViewMatrix.data[0]);
            ShaderUtils::multiplyMatrix(&projectionMatrix.data[0], &modelViewMatrix.data[0], &modelViewProjection.data[0]);

            CGSize target = {247,173};
            [self calcScreenCoordsOf:target inView:&modelViewProjection.data[0] inPose:trackable->getPose()];

Re: Playing Video Over the image

February 27, 2012 - 12:16pm #26

Hi,

I tried to implement the code shown below in the AR_EAGLView.mm class, but the QCAR::Tracker, QCAR::CameraDevice and QCAR::VideoMode classes are not found. The error from Xcode is: No member named "CameraDevice" in namespace QCAR. I guess I have to add an import for each of these classes, but I don't know which ones.

EDIT:

by including

#import <QCAR/Tracker.h>
#import <QCAR/CameraDevice.h>
#import <QCAR/VideoMode.h>

some errors disappeared. But QCAR::Tracker::getInstance() is undefined, as well as tracker.getCameraCalibration();

EDIT 2: the example from MoSR seems to be outdated, as the API no longer has Tracker::getInstance(). My updated approach:

CGFloat w = target.width/2;
CGFloat h = target.height/2;

// calculate any mismatch of screen to video size
QCAR::CameraDevice& cameraDevice = QCAR::CameraDevice::getInstance();

const QCAR::CameraCalibration& cameraCalibration = cameraDevice.getCameraCalibration();

MoSR wrote:

The following will give you the four coordinates of the target corners within the EAGLView. Using the max/min in each dimension will allow you to specify another UIView that hides the target.

I used this code to display a different coloured 10x10-pixel UIView at each corner of the target. Note the adjustment for the camera view being deeper than the screen (the screen clips the camera view - this may explain why your earlier attempt was offset).

s0 thro' s3 are new public properties of type CGPoint, in EAGLView. Your delegate code can access this to reposition other views.

- (CGPoint) projectCoord:(CGPoint)coord inView:(const QCAR::CameraCalibration&)cameraCalibration andPose:(QCAR::Matrix34F)pose withOffset:(CGPoint)offset
{
    CGPoint converted;
    
    QCAR::Vec3F vec(coord.x,coord.y,0);
    QCAR::Vec2F sc = QCAR::Tool::projectPoint(cameraCalibration, pose, vec);
    converted.x = sc.data[0] - offset.x;
    converted.y = sc.data[1] - offset.y;
    
    return converted;
}

- (void) calcScreenCoordsOf:(CGSize)target inView:(CGFloat *)matrix inPose:(QCAR::Matrix34F)pose
{
    // 0,0 is at centre of target so extremities are at w/2,h/2
    CGFloat w = target.width/2;
    CGFloat h = target.height/2;
    
    const QCAR::Tracker& tracker = QCAR::Tracker::getInstance();
    const QCAR::CameraCalibration& cameraCalibration = tracker.getCameraCalibration();

    // calculate any mismatch of screen to video size
    QCAR::CameraDevice& cameraDevice = QCAR::CameraDevice::getInstance();
    QCAR::VideoMode videoMode = cameraDevice.getVideoMode(QCAR::CameraDevice::MODE_DEFAULT);
    CGPoint margin = {(videoMode.mWidth - self.frame.size.width)/2, (videoMode.mHeight - self.frame.size.height)/2};

    // now project the 4 corners of the target
    s0 = [self projectCoord:CGPointMake(-w,h) inView:cameraCalibration andPose:pose withOffset:margin];
    s1 = [self projectCoord:CGPointMake(-w,-h) inView:cameraCalibration andPose:pose withOffset:margin];
    s2 = [self projectCoord:CGPointMake(w,-h) inView:cameraCalibration andPose:pose withOffset:margin];
    s3 = [self projectCoord:CGPointMake(w,h) inView:cameraCalibration andPose:pose withOffset:margin];
}

----8<---- in renderFrameQCAR ----8<----

ShaderUtils::translatePoseMatrix(0.0f, 0.0f, kObjectScale, &modelViewMatrix.data[0]);
ShaderUtils::scalePoseMatrix(kObjectScale, kObjectScale, kObjectScale, &modelViewMatrix.data[0]);
ShaderUtils::multiplyMatrix(&projectionMatrix.data[0], &modelViewMatrix.data[0], &modelViewProjection.data[0]);
            
 CGSize target = {247,173};
[self calcScreenCoordsOf:target inView:&modelViewProjection.data[0] inPose:trackable->getPose()];

Re: Playing Video Over the image

February 7, 2012 - 3:30am #25

Take a look at iOS AVURLAsset - there are examples from Apple and third parties. Getting the frames is well documented; syncing with renderFrameQCAR isn't, and we've not yet tried it (but will in the future). One way would be to store frames in a pipe (FIFO) so that in renderFrameQCAR you can pull the frames out and show the one that syncs with the actual time.

Re: Playing Video Over the image

February 6, 2012 - 10:19pm #24

hi, question about syncing the video frame rate with the camera frame rate.

MoSR wrote:

Note that to play video within the 3D world you'd need to sync the video frame rate with the camera frame rate and insert each frame into the 3D view as the texture of a displayed object. This is unlikely to yield good performance as you are bypassing all the optimisation in the movie player.

how exactly can this be done?

thanks,
L.

Re: Playing Video Over the image

January 16, 2012 - 5:57am #23

Hi iossif,

Note that the 'performSelectorOnMainThread' doesn't have a 'target' parameter so the method called has to be in the same class.

To call a selector in another class you need to get the class instance and call the method on that. Here's how when that class is the app delegate:

YourDelegate *appDelegate = (YourDelegate *)[[UIApplication sharedApplication] delegate];
[appDelegate performSelectorOnMainThread:... ];

Re: Playing Video Over the image

January 16, 2012 - 4:56am #22

Can you specify how I can store the movie controller in the EAGLView?

I have a method called - (void)showMovieController in my app delegate where I switch on the video, but when I call it via a selector in my EAGLView it crashes because it does not find the method.

Re: Playing Video Over the image

September 2, 2011 - 11:20am #21

sallespro and nm:

It seems like your questions are related (one for Android, one for iOS). In either case, I might set up the video view ahead of time, add it to the view hierarchy, hide it, and wait until a target is found to show the view and play the video. For both Android and iOS you'll need to make sure that you change the view visibility on the main thread. The toast sample (http://ar.qualcomm.at/node/2000032) shows how to do this on Android using a Handler. On iOS, you can use the performSelectorOnMainThread method.

So for the iOS code above, you can add this to the didFinishLaunchingWithOptions method:

view.movieController = video;
video.view.hidden = YES;
video.shouldAutoplay = NO;

(Set up a property in the EAGLView to store the MPMoviePlayerController object.)

Then, in EAGLView.mm you can add something like this:

bool videoPlaying = NO;

- (void)showMovieController
{
    movieController.view.hidden = NO;
}

- (void)renderFrameQCAR
{
    ...
    for (int i = 0; i < state.getNumActiveTrackables(); ++i)
    {
        ...
        if (!videoPlaying)
        {
            [self performSelectorOnMainThread:@selector(showMovieController) withObject:nil waitUntilDone:NO];
            videoPlaying = YES;
        }

        ...
    }
    ...
}

- Kim

Re: Playing Video Over the image

September 2, 2011 - 12:10am #20

Hi ksiva,

Thanks for replying, but with your approach the video starts playing when the app starts. I want to play the video only when the camera focuses on the trackable.

Please help me with this...

Playing Video Over the image in Android

August 28, 2011 - 7:06pm #19

Hi MoSr,

it is good to know you're pushing functionality ahead!

I have already successfully implemented the "Toast" example and called a video intent, but I would like to know how to overlay the AR and video playback (not include the video frames as a texture in the 3D scene).

The problem I foresee is how to call video playback from the JNI renderFrame through the message handler, and let it know which View to use.

thanks, raf
