I'm currently implementing Vuforia in my native iOS app and I need to enable photo capture while using it (like a regular camera app, but with AR capability). The problem is that I'm stuck on the photo capture feature. I've looked for resources on this but unfortunately haven't found any. How do I implement this, or am I missing something here? Is it not allowed?
I've tried implementing it with a custom camera view, but I don't think that's the right approach, since a custom camera view relies on `AVCapturePhotoCaptureDelegate`, `AVCaptureSession`, `AVCapturePhotoOutput`, and `AVCaptureVideoPreviewLayer`, while Vuforia already controls the camera itself.
Your help would be very much appreciated. Thank you in advance.
Enable Photo Capture
Hi Strasza,
I have tried something very similar to this by grabbing a frame image from the presenting view (example below). However, the Vuforia::Frame only provides the camera background; I need the augmented scene in the same image as well. Is there a way to do that?
- (UIImage *)renderFrameIntoImage
{
    @synchronized(self)
    {
        // Ask Vuforia to deliver camera frames in RGB888 so they can be
        // wrapped in a CGImage without further conversion
        Vuforia::setFrameFormat(Vuforia::RGB888, true);

        Vuforia::State state = Vuforia::Renderer::getInstance().begin();
        Vuforia::Frame frame = state.getFrame();
        int frameCount = frame.getNumImages();
        UIImage *image = nil;

        if (frameCount > 0) {
            // Take the last image delivered with this frame; the NSLog below
            // reports its pixel format
            int lastFrame = frameCount - 1;
            const Vuforia::Image *qcarImage = frame.getImage(lastFrame);
            NSLog(@"Creating frame image %d, pixel format %d", lastFrame, qcarImage->getFormat());
            image = [self createUIImage:qcarImage];
        }

        Vuforia::Renderer::getInstance().end();
        return image;
    }
}
- (UIImage *)createUIImage:(const Vuforia::Image *)qcarImage
{
    int width = qcarImage->getWidth();
    int height = qcarImage->getHeight();
    int bitsPerComponent = 8;
    int bitsPerPixel = Vuforia::getBitsPerPixel(Vuforia::RGB888);
    // The buffer can be wider than the visible image, so derive the row
    // stride from the buffer width rather than the image width
    int bytesPerRow = qcarImage->getBufferWidth() * bitsPerPixel / bitsPerComponent;

    CGColorSpaceRef colorSpaceRef = CGColorSpaceCreateDeviceRGB();
    CGBitmapInfo bitmapInfo = kCGBitmapByteOrderDefault | kCGImageAlphaNone;
    CGColorRenderingIntent renderingIntent = kCGRenderingIntentDefault;

    // Wrap the Vuforia pixel buffer in a data provider without copying it
    CGDataProviderRef provider = CGDataProviderCreateWithData(NULL, qcarImage->getPixels(),
                                                              Vuforia::getBufferSize(width, height, Vuforia::RGB888), NULL);
    CGImageRef imageRef = CGImageCreate(width, height, bitsPerComponent, bitsPerPixel, bytesPerRow,
                                        colorSpaceRef, bitmapInfo, provider, NULL, NO, renderingIntent);

    // UIImage retains the CGImage, so the CG references can be released here
    UIImage *image = [UIImage imageWithCGImage:imageRef];
    CGDataProviderRelease(provider);
    CGColorSpaceRelease(colorSpaceRef);
    CGImageRelease(imageRef);
    return image;
}
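In case it is useful, here is a minimal sketch of how the method above could be wired to a shutter action, assuming it is invoked on the thread that drives Vuforia rendering (the captureCameraFrame name is just for illustration):

- (void)captureCameraFrame
{
    // Grab the current camera frame and save it to the photo library.
    // Saving requires NSPhotoLibraryAddUsageDescription (or
    // NSPhotoLibraryUsageDescription on older iOS versions) in Info.plist.
    UIImage *photo = [self renderFrameIntoImage];
    if (photo != nil) {
        UIImageWriteToSavedPhotosAlbum(photo, nil, NULL, NULL);
    }
}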
Enable Photo Capture
Hello,
This article explains how to get the camera frame in Vuforia: https://library.vuforia.com/content/vuforia-library/en/articles/Solution/Working-with-the-Camera.html#How-To-Access-the-Camera-Image-in-Native
Is this what you are having issues with?
Thanks,
-Vuforia Support
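If it helps anyone else landing on this thread: the pattern that article describes is to request a frame format once, register an update callback, and read the camera image out of the Vuforia::State it delivers. A rough sketch of that pattern, from memory of the legacy native API, so double-check the names and signatures against the article:

#include <Vuforia/Vuforia.h>
#include <Vuforia/UpdateCallback.h>
#include <Vuforia/State.h>
#include <Vuforia/Frame.h>
#include <Vuforia/Image.h>

// Update callback that inspects each camera frame as it arrives
class CameraFrameAccess : public Vuforia::UpdateCallback
{
    virtual void Vuforia_onUpdate(Vuforia::State& state)
    {
        Vuforia::Frame frame = state.getFrame();
        for (int i = 0; i < frame.getNumImages(); ++i) {
            const Vuforia::Image *image = frame.getImage(i);
            if (image->getFormat() == Vuforia::RGB888) {
                // image->getPixels() is only valid during this callback,
                // so copy the data out if it is needed later
            }
        }
    }
};

// During setup, after Vuforia has been initialised:
//     Vuforia::setFrameFormat(Vuforia::RGB888, true);
//     static CameraFrameAccess frameAccess;
//     Vuforia::registerCallback(&frameAccess);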
Enable Photo Capture
Did you manage to find a solution for this? I am trying to do the same thing but struggling to find an approach that works reliably.
The closest I have got so far is to snapshot both the frame and the scene renderer and composite them, but the camera frame sometimes comes out corrupted and is always in landscape dimensions.
I have since managed to get this working, so I am sharing the solution in case anyone else needs to do the same.
I added a method to the VuforiaEAGLView instance that takes the pixels rendered into the buffer and creates an image the same size as the viewport.
- (UIImage *)snapshot
{
    UIImage *outputImage = nil;

    // The GL buffer is in device pixels, so scale the view size accordingly
    CGFloat scale = [[UIScreen mainScreen] scale];
    static const size_t kComponentsPerPixel = 4;
    static const size_t kBitsPerComponent = sizeof(unsigned char) * 8;
    CGRect s = CGRectMake(0, 0, self.frame.size.width * scale, self.frame.size.height * scale);
    glViewport(0, 0, s.size.width, s.size.height);

    // Read back the rendered pixels (camera background plus augmentations)
    size_t bufferLength = s.size.width * s.size.height * kComponentsPerPixel;
    GLubyte *buffer = (GLubyte *)malloc(bufferLength);
    glReadPixels(0, 0, s.size.width, s.size.height, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    // Wrap the raw pixels in a CGImage
    CGColorSpaceRef colorSpace = CGColorSpaceCreateDeviceRGB();
    CGDataProviderRef ref = CGDataProviderCreateWithData(NULL, buffer, bufferLength, NULL);
    CGImageRef iref = CGImageCreate(s.size.width, s.size.height, 8, 32, s.size.width * 4,
                                    colorSpace, kCGBitmapByteOrderDefault, ref, NULL, true,
                                    kCGRenderingIntentDefault);

    size_t width = CGImageGetWidth(iref);
    size_t height = CGImageGetHeight(iref);
    size_t length = width * height * 4;
    uint32_t *pixels = (uint32_t *)malloc(length);
    CGContextRef context = CGBitmapContextCreate(pixels, width, height, 8, width * 4,
                                                 CGImageGetColorSpace(iref),
                                                 kCGImageAlphaNoneSkipFirst | kCGBitmapByteOrder32Big);

    // OpenGL reads pixels bottom-up, so redraw the image flipped vertically
    CGAffineTransform transform = CGAffineTransformMakeTranslation(0.0f, height);
    transform = CGAffineTransformScale(transform, 1.0, -1.0);
    CGContextConcatCTM(context, transform);
    CGContextDrawImage(context, CGRectMake(0.0f, 0.0f, width, height), iref);
    CGImageRef outputRef = CGBitmapContextCreateImage(context);

    // Create the UIImage, then release the intermediate CG objects and buffers
    outputImage = [UIImage imageWithCGImage:outputRef];
    CGDataProviderRelease(ref);
    CGColorSpaceRelease(colorSpace);
    CGImageRelease(iref);
    CGContextRelease(context);
    CGImageRelease(outputRef);
    free(pixels);
    free(buffer);

    return outputImage;
}
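One caveat for anyone adopting this: glReadPixels only returns valid data while the EAGL context that drew the frame is current, so the snapshot should be taken from the render loop after the scene has been drawn and before the renderbuffer is presented. A hypothetical trigger, assuming a shutter button sets a flag that the render method checks (the snapshotRequested property and presentFramebuffer call follow the structure of the Vuforia sample apps, but the names may differ in your project):

    // At the end of the render method, once the frame is fully drawn:
    if (self.snapshotRequested) {   // hypothetical flag set by the shutter button
        self.snapshotRequested = NO;
        UIImage *photo = [self snapshot];
        // Hand the image off to the app here (save, preview, share, etc.)
    }
    [self presentFramebuffer];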