
Processing Preview Frames

This section shows how to process each frame that the camera preview receives.

In ViewController.m we have:

- (void)viewDidLoad
{
    [super viewDidLoad];
    …
    // 1.- Init
    _cvView = [[ImageMatcher alloc] initWithAppKey:API_KEY useDefaultCamera:NO];
 
    …
    // 2.- Set filter
    [_cvView setEnableMedianFilter:YES];
 
    …
}


1- Initializes the Matcher. Notice that we set useDefaultCamera:NO because we will manage the camera ourselves (a sketch of the useDefaultCamera:YES alternative follows this list).
2- The median filter is recommended because we are processing continuous frames.
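
If you prefer to let linkAR drive the camera itself, the same initializer can be called with useDefaultCamera:YES, in which case section 2.1 is not needed. A minimal sketch, assuming YES simply hands capture over to the SDK:

- (void)viewDidLoad
{
    [super viewDidLoad];
    // Assumption: with useDefaultCamera:YES the ImageMatcher opens and
    // manages the device camera itself, so no custom capture pipeline
    // (section 2.1) is required.
    _cvView = [[ImageMatcher alloc] initWithAppKey:API_KEY useDefaultCamera:YES];
    [_cvView setEnableMedianFilter:YES];
}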

2.1.- Initializing our own camera:


In ViewController.h we have:

#import "myCameraCapture.h"@property (nonatomic,strong) myCameraCapture *captureManager;
...
}


In ViewController.m we have:

- (void)viewDidLoad
{
    [super viewDidLoad];
    //1.- Initialize the Capture Manager.
    [self setCaptureManager:[[myCameraCapture alloc] init]];
 
    //2.- Set ourselves as the managing VideoController and add the video input and output.
    [[self captureManager] setVideocontroller:self];
    [[self captureManager] addVideoInput];
    [[self captureManager] addVideoOutput];
 
    //3.- Add Video Preview Layer and set the frame
    [[self captureManager] addVideoPreviewLayer];
    CGRect layerRect = [[[self view] layer] bounds];
    [[[self captureManager] previewLayer] setBounds:layerRect];
    [[[self captureManager] previewLayer] setPosition:CGPointMake(CGRectGetMidX(layerRect),
                                                                  CGRectGetMidY(layerRect))];
    [[[self view] layer] addSublayer:[[self captureManager] previewLayer]];
    [[[self captureManager] captureSession] startRunning];
 
    …
}


1- Initialize Capture Manager.
2- We set ourselves as the managing VideoController to receive preview frame updates.
3- Add Video Preview Layer and set the frame.
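
Since the capture session is started in viewDidLoad, you will normally want to stop it when the view goes away. A minimal sketch of that teardown (this override is our own addition, not part of the sample):

- (void)viewWillDisappear:(BOOL)animated
{
    [super viewWillDisappear:animated];
    // Release the camera while the view is hidden.
    [[[self captureManager] captureSession] stopRunning];
}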


This method is called for every frame. Depending on your output video settings (RGB or YUV), you will need to call processNewCameraFrameRGB or processNewCameraFrameYUV:

-(void)processImageCamera
{
    [_cvView processNewCameraFrameYUV:captureManager.imageReference];
}
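
Note that the addVideoOutput method shown below configures kCVPixelFormatType_32BGRA, in which case processNewCameraFrameRGB would be the matching call. If you want the code to adapt to whatever format the output actually delivers, you can branch on the buffer's pixel format. A minimal sketch (the branching is our own addition; the two process calls are the linkAR methods named above):

-(void)processImageCamera
{
    // Inspect the pixel format actually delivered by the capture output.
    OSType format = CVPixelBufferGetPixelFormatType(captureManager.imageReference);
    if (format == kCVPixelFormatType_32BGRA)
        [_cvView processNewCameraFrameRGB:captureManager.imageReference];
    else
        [_cvView processNewCameraFrameYUV:captureManager.imageReference];
}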

2.2.- The file that manages the camera:


In myCameraCapture.h we have:

#import <UIKit/UIKit.h>
#import <CoreMedia/CoreMedia.h>
#import <AVFoundation/AVFoundation.h>
 
@interface myCameraCapture : NSObject <AVCaptureVideoDataOutputSampleBufferDelegate> {
}
 
@property (strong, nonatomic) AVCaptureVideoPreviewLayer *previewLayer;
@property (strong, nonatomic) AVCaptureSession *captureSession;
// weak avoids a retain cycle: the ViewController already owns this object strongly.
@property (nonatomic, weak) UIViewController *videocontroller;
@property (nonatomic) CVImageBufferRef imageReference;
 
- (void)addVideoInput;
- (void)addVideoOutput;
- (void)addVideoPreviewLayer;
@end


In myCameraCapture.m we have:

// Initializes AVCaptureSession.
- (id)init {
	if ((self = [super init])) {
		[self setCaptureSession:[[AVCaptureSession alloc] init]];
	}
	return self;
}


// Add preview layer.
- (void)addVideoPreviewLayer {
	[self setPreviewLayer:[[AVCaptureVideoPreviewLayer alloc] initWithSession:[self captureSession]]];
	[[self previewLayer] setVideoGravity:AVLayerVideoGravityResizeAspectFill];
 
}


// Add video as device input.
- (void)addVideoInput {
    AVCaptureDevice *videoDevice = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];	
	if (videoDevice) {
		NSError *error = nil;
		AVCaptureDeviceInput *videoIn = [AVCaptureDeviceInput deviceInputWithDevice:videoDevice error:&error];
		if (videoIn) {
			if ([[self captureSession] canAddInput:videoIn])
				[[self captureSession] addInput:videoIn];
			else
				NSLog(@"Couldn't add video input");
		}
		else
			NSLog(@"Couldn't create video input: %@", error);
	}
	else
		NSLog(@"Couldn't create video capture device");
}


// Add video output. This function is called only if the mode is MODEVIDEO.
- (void)addVideoOutput {
 
    // 1
    AVCaptureVideoDataOutput *output = [[AVCaptureVideoDataOutput alloc] init];
    if ([[self captureSession] canAddOutput:output])
        [[self captureSession] addOutput:output];
    else
        NSLog(@"Couldn't add video output");
 
    // 2
    output.videoSettings =
    [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_32BGRA]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey];
 
    // 3
    dispatch_queue_t queue = dispatch_queue_create("myQueue", NULL);
    [output setSampleBufferDelegate:self queue:queue];
    dispatch_release(queue); // omit if dispatch objects are managed by ARC (iOS 6 and later)
 
}


1- Create a VideoDataOutput and add it to the session (checking canAddOutput: first).
2- Specify the pixel format (a YUV alternative is sketched after this list).
3- Configure the serial dispatch queue on which the sample-buffer delegate is called.
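
The snippet above requests BGRA frames. If you intend to feed processNewCameraFrameYUV instead, you would request a YUV format here. A minimal sketch, assuming the matcher accepts the common bi-planar 4:2:0 format:

    // Alternative to step 2: request YUV frames instead of BGRA.
    output.videoSettings =
    [NSDictionary dictionaryWithObject:[NSNumber numberWithInt:kCVPixelFormatType_420YpCbCr8BiPlanarVideoRange]
                                forKey:(id)kCVPixelBufferPixelFormatTypeKey];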


// This method is called for every new frame.
- (void)captureOutput:(AVCaptureOutput *)captureOutput didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer fromConnection:(AVCaptureConnection *)connection
{
    //1- Get the image buffer of the output sample buffer.
    imageReference = CMSampleBufferGetImageBuffer(sampleBuffer);
    //2- Invoke processImageCamera on the ViewController; waitUntilDone:YES keeps
    //   the buffer valid until processing has finished.
    [videocontroller performSelectorOnMainThread:@selector(processImageCamera) withObject:nil waitUntilDone:YES];
}


1- Get the image buffer from the output sample buffer.
2- Invoke the processImageCamera method of the ViewController on the main thread (an asynchronous variant is sketched below).
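
Note that waitUntilDone:YES is what makes this safe: the CVImageBufferRef returned by CMSampleBufferGetImageBuffer is only guaranteed to stay valid while the delegate callback is running, so the capture queue blocks until processing finishes. If you ever process frames asynchronously, retain the buffer first. A minimal sketch of that variant (our own addition, not part of the sample):

    // Asynchronous variant of the delegate body: retain the pixel buffer
    // so it outlives the callback, process it on the main queue, release it.
    CVImageBufferRef buffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CVPixelBufferRetain(buffer);
    dispatch_async(dispatch_get_main_queue(), ^{
        [self setImageReference:buffer];
        [[self videocontroller] performSelectorOnMainThread:@selector(processImageCamera)
                                                 withObject:nil
                                              waitUntilDone:YES];
        CVPixelBufferRelease(buffer);
    });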
