CameraDevice and AVCaptureSession

November 30, 2011 - 4:01am #1

Greetings!

I want to record video from camera for further processing while tracking image with QCAR Tracker (my purpose is to capture video separately from AR overlay). Having no control over QCAR CameraDevice inputs, I am creating my own AVCaptureSession like this:

self.captureSession = [[AVCaptureSession alloc] init];
self.captureSession.sessionPreset = AVCaptureSessionPresetHigh;

// Attach the default (back) camera as the video input
NSError *error = nil;
AVCaptureDeviceInput *videoInput = [AVCaptureDeviceInput deviceInputWithDevice:[AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo] error:&error];
if (videoInput && [self.captureSession canAddInput:videoInput]) {
    [self.captureSession addInput:videoInput];
}

// Record straight to a movie file
self.movieFileOutput = [[AVCaptureMovieFileOutput alloc] init];
if ([self.captureSession canAddOutput:self.movieFileOutput]) {
    [self.captureSession addOutput:self.movieFileOutput];
}

[self.captureSession startRunning];

But it seems that CameraDevice just doesn't work alongside AVCaptureSession - when one starts running, the other immediately gets stuck (reproduced on an iPhone 3GS with iOS 5.0 and an iPhone 4S with iOS 5.0.1).
I can retrieve images from the camera with AVCaptureSession using AVCaptureVideoDataOutputSampleBufferDelegate. Is there any way to pass them to the Tracker directly without using CameraDevice (as some other AR tracking libraries allow)?

Thank you in advance,
Zmicier Predka

Re: CameraDevice and AVCaptureSession

December 22, 2011 - 12:49pm #9
MoSR wrote:

Hi Dmitry,

We have no difference in our code for iPhone 4S and all the sample apps in QCAR 1.5 beta have been tested on this platform - they wouldn't work if this were the case. I've just tested ImageTargets with QCAR 1.0.0 on a 4S and that works fine. Can you confirm that by running a sample app?

MoSR,

I meant that my code above in this thread for recording video doesn't work on the iPhone 4S. There is no code that creates images with YUV in the samples, right? ;) I'm retrieving images from QCAR in YUV format, and the problem is that the format differs on the 4S. It's probably not your fault; it seems that the pixel format 420YpCbCr8BiPlanarVideoRange is device-specific and is described differently on the 4S (I've found some OpenGL ES-specific keys in its description on the 4S that weren't there on older devices).

It's very weird that pixel buffers from the pool for that format on the 4S have a completely different bytes-per-row count and data size compared to those that QCAR retrieves from AVCaptureVideoDataOutputSampleBufferDelegate with the same format, so we have little chance of reassembling QCAR's YUV pixels into a pixel buffer.

The whole thing looks like an Apple bug, but there could be a straightforward workaround: if QCAR allowed us to get the entire CMSampleBuffer or CVPixelBuffer (the one it receives from the camera) instead of (or along with) just their bytes, that would easily solve the problem of recording video from QCAR images.
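A bytes-per-row mismatch like the one described above usually means the destination plane's rows are padded, in which case a single memcpy of the whole plane shears the image. A row-by-row copy against the reported bytes-per-row works around that; here is a minimal plain-C sketch (the function name is mine, not a QCAR or Core Video API):

#include <string.h>
#include <stdint.h>
#include <stddef.h>

/* Copy a tightly packed source plane (source bytes-per-row == width) into a
 * destination plane whose rows may be padded (dstBytesPerRow >= width), as
 * reported by e.g. CVPixelBufferGetBytesPerRowOfPlane(). */
static void copy_plane(uint8_t *dst, size_t dstBytesPerRow,
                       const uint8_t *src, size_t width, size_t height)
{
    for (size_t row = 0; row < height; ++row) {
        memcpy(dst + row * dstBytesPerRow, src + row * width, width);
    }
}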

Re: CameraDevice and AVCaptureSession

December 21, 2011 - 9:21am #8
Zmicier wrote:

MoSR,

I found another weird thing - my code works fine on the iPhone 3GS and 4, but not on the 4S. It seems that QCAR v1.0.0 returns images in a different pixel format on that model (according to the release notes of v1.5.3b1, we can't retrieve images in YUV with that version, so I have to stay on v1.0.0 until it's fixed).
Could you please shed some light on the format for the 4S (it could be some sort of 422YpCbCr8 or close to that) and the way we should decode it?

Thank you,
Dmitry.

Hi Dmitry,

We have no difference in our code for iPhone 4S and all the sample apps in QCAR 1.5 beta have been tested on this platform - they wouldn't work if this were the case. I've just tested ImageTargets with QCAR 1.0.0 on a 4S and that works fine. Can you confirm that by running a sample app?

Re: CameraDevice and AVCaptureSession

December 21, 2011 - 6:51am #7

MoSR,

I found another weird thing - my code works fine on the iPhone 3GS and 4, but not on the 4S. It seems that QCAR v1.0.0 returns images in a different pixel format on that model (according to the release notes of v1.5.3b1, we can't retrieve images in YUV with that version, so I have to stay on v1.0.0 until it's fixed).
Could you please shed some light on the format for the 4S (it could be some sort of 422YpCbCr8 or close to that) and the way we should decode it?

Thank you,
Dmitry.

Re: CameraDevice and AVCaptureSession

December 20, 2011 - 9:38am #6
Zmicier wrote:

One last question - which of the two formats does QCAR actually use?

Hi Zmicier,

The specific data format is kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange.

Re: CameraDevice and AVCaptureSession

December 19, 2011 - 2:05pm #5

MoSR, thank you for the reply!

I've found out that QCAR's 12-bit YUV pixel format matches neither Apple's 'yuvs' (aka 422YpCbCr8_yuvs) nor 'yuvf' (aka 422YpCbCr8FullRange), but it does match one of the following almost identical formats:

- '420v' (marked with the kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange key);
- '420f' (kCVPixelFormatType_420YpCbCr8BiPlanarFullRange).

These formats ARE supported by the Core Video framework, so I've finally been able to record my video with minimal performance cost. :cool:
Although both of them are bi-planar (each contains two chunks of data), the QCAR::Image getPixels() method returns a single pointer to the bytes. After several attempts I've managed to decode the data and put it together into compatible pixel buffers. Here is the code:

// Setup
self.assetWriter = [[AVAssetWriter alloc] initWithURL:videoUrl fileType:AVFileTypeQuickTimeMovie error:&error];
self.assetWriterInput = [AVAssetWriterInput assetWriterInputWithMediaType:AVMediaTypeVideo outputSettings:videoSettings];
self.assetWriterInput.expectsMediaDataInRealTime = YES;
NSDictionary *bufferAttributes = [NSDictionary dictionaryWithObjectsAndKeys:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange], kCVPixelBufferPixelFormatTypeKey, nil];
self.pixelBufferAdaptor = [AVAssetWriterInputPixelBufferAdaptor assetWriterInputPixelBufferAdaptorWithAssetWriterInput:self.assetWriterInput sourcePixelBufferAttributes:bufferAttributes];
[self.assetWriter addInput:self.assetWriterInput];

// Write picture to video
if (![self.assetWriterInput isReadyForMoreMediaData]) {
    NSLog(@"Writer input is not ready");
    return;
}
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferPoolCreatePixelBuffer(kCFAllocatorDefault, self.pixelBufferAdaptor.pixelBufferPool, &pixelBuffer);
if (status != kCVReturnSuccess) {
    NSLog(@"Error creating pixel buffer: status=%d", status);
    return;
}

CVPixelBufferLockBaseAddress(pixelBuffer, 0);
// Each picture is 480*360*12/8 = 259200 bytes long
// pixels = qcarImage->getPixels()
memcpy(CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 0), pixels, 172800); // First plane is 2/3 of the data size
memcpy(CVPixelBufferGetBaseAddressOfPlane(pixelBuffer, 1), &((int8_t *)pixels)[172800], 86400); // Second plane is the rest
CVPixelBufferUnlockBaseAddress(pixelBuffer, 0);

if (![self.pixelBufferAdaptor appendPixelBuffer:pixelBuffer withPresentationTime:time]) {
    NSLog(@"Unable to append buffer to video");
}
CVPixelBufferRelease(pixelBuffer);

One last question - which of the two formats does QCAR actually use?

Thanks,
Zmicier
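As a sanity check on the plane sizes used in the post above: in a bi-planar 4:2:0 layout, plane 0 is full-resolution luma (Y) and plane 1 is interleaved CbCr at half resolution in both dimensions, giving 12 bits per pixel overall. A small C sketch (helper names are mine, not a QCAR or Core Video API):

#include <stddef.h>

/* Plane sizes for a bi-planar 4:2:0 image ('420v'/'420f'):
 * luma is one byte per pixel; the interleaved CbCr plane holds one
 * Cb,Cr byte pair per 2x2 block of pixels, i.e. half the luma size. */
static size_t biplanar420_luma_size(size_t w, size_t h)   { return w * h; }
static size_t biplanar420_chroma_size(size_t w, size_t h) { return w * h / 2; }
static size_t biplanar420_total_size(size_t w, size_t h)  { return w * h * 3 / 2; }

For the 480x360 frames above this reproduces the 172800 + 86400 = 259200 byte split.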

Re: CameraDevice and AVCaptureSession

December 15, 2011 - 9:26am #4

Hi Zmicier,

For optimal performance you may want to try grabbing YUV via the Image API, then writing some optimized C code to convert it to BGRA. It's possible that requesting RGB888 would be quicker, if QCAR's internal conversion to RGB888 plus shuffling the bytes beats reconstructing BGR from YUV yourself (alpha will always be 1.0, of course).
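A conversion along the lines MoSR suggests could look like the following minimal, unoptimized C sketch for a bi-planar 4:2:0 (video-range, BT.601) source; the function names and the per-pixel loop are mine, and a production version would want fixed-point batching or NEON:

#include <stdint.h>

static uint8_t clamp_u8(int v) { return v < 0 ? 0 : (v > 255 ? 255 : (uint8_t)v); }

/* Convert bi-planar 4:2:0 YCbCr (video range) to BGRA using integer
 * BT.601 coefficients scaled by 256. y_plane is w*h bytes; cbcr_plane
 * is w*h/2 bytes of interleaved Cb,Cr pairs at half resolution. */
static void yuv420bp_to_bgra(const uint8_t *y_plane, const uint8_t *cbcr_plane,
                             uint8_t *bgra, int w, int h)
{
    for (int row = 0; row < h; ++row) {
        for (int col = 0; col < w; ++col) {
            int y  = y_plane[row * w + col] - 16;
            int cb = cbcr_plane[(row / 2) * w + (col & ~1)]     - 128;
            int cr = cbcr_plane[(row / 2) * w + (col & ~1) + 1] - 128;
            uint8_t *px = &bgra[(row * w + col) * 4];
            px[0] = clamp_u8((298 * y + 516 * cb + 128) >> 8);            /* B */
            px[1] = clamp_u8((298 * y - 100 * cb - 208 * cr + 128) >> 8); /* G */
            px[2] = clamp_u8((298 * y + 409 * cr + 128) >> 8);            /* R */
            px[3] = 255;                                                  /* A */
        }
    }
}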

Re: CameraDevice and AVCaptureSession

December 15, 2011 - 4:51am #3

Hi again guys,

I'm facing some serious trouble trying to record video while QCAR is active.

It looks like there is no compatibility between the image formats we can get from QCAR and those we can use for video recording.

To be able to combine images into a video, we have to create pixel buffers similar to this:
CVPixelBufferRef pixelBuffer = NULL;
CVReturn status = CVPixelBufferCreate(kCFAllocatorDefault, 480, 360, kCVPixelFormatType_32BGRA, (CFDictionaryRef)options, &pixelBuffer);

The problem is that pixel buffers work great with the 32BGRA format, but if we use 24-bit RGB (the kCVPixelFormatType_24RGB key) we're unable to append these buffers to a movie. Other formats like RGB565 or 12-bit YUV seem to be unsupported as well.

Having no way to get BGRA from QCAR directly, I'm grabbing it from the EAGL layer. But copying pixels back from video memory halves the frame rate and makes the user experience poor on single-core devices (although it's acceptable on the iPhone 4S).

Is there really no way to get images from the camera more efficiently (and perhaps at better quality than 480x360)?

I really appreciate any help or advice.

Thanks,
Zmicier

P.S. By the way, I've found that setting QCAR hints doesn't affect its performance, e.g. there is no tangible difference between
QCAR::setHint(QCAR::HINT_IMAGE_TARGET_MULTI_FRAME_ENABLED, 1);
QCAR::setHint(QCAR::HINT_IMAGE_TARGET_MILLISECONDS_PER_MULTI_FRAME, 10);
and
QCAR::setHint(QCAR::HINT_IMAGE_TARGET_MULTI_FRAME_ENABLED, 1);
QCAR::setHint(QCAR::HINT_IMAGE_TARGET_MILLISECONDS_PER_MULTI_FRAME, 100);
or even
QCAR::setHint(QCAR::HINT_IMAGE_TARGET_MULTI_FRAME_ENABLED, 0);

Re: CameraDevice and AVCaptureSession

November 30, 2011 - 4:10am #2

and here:
http://ar.qualcomm.at/node/2000721
