{"id":410,"date":"2012-05-02T16:28:07","date_gmt":"2012-05-02T15:28:07","guid":{"rendered":"http:\/\/www.trappers.tk\/site\/?p=410"},"modified":"2012-05-06T10:57:12","modified_gmt":"2012-05-06T09:57:12","slug":"face-detection-with-core-image-on-live-video","status":"publish","type":"post","link":"https:\/\/jeroentrappers.be\/site\/2012\/05\/02\/face-detection-with-core-image-on-live-video\/","title":{"rendered":"Face detection with Core Image on Live Video"},"content":{"rendered":"<p><a href=\"https:\/\/i0.wp.com\/www.trappers.tk\/site\/wp-content\/uploads\/2012\/05\/IMG_0097.png\"><img data-recalc-dims=\"1\" loading=\"lazy\" decoding=\"async\" class=\"alignright size-medium wp-image-424\" title=\"Mustachio\" src=\"https:\/\/i0.wp.com\/www.trappers.tk\/site\/wp-content\/uploads\/2012\/05\/IMG_0097-200x300.png?resize=200%2C300\" alt=\"\" width=\"200\" height=\"300\" srcset=\"https:\/\/i0.wp.com\/jeroentrappers.be\/site\/wp-content\/uploads\/2012\/05\/IMG_0097.png?resize=200%2C300&amp;ssl=1 200w, https:\/\/i0.wp.com\/jeroentrappers.be\/site\/wp-content\/uploads\/2012\/05\/IMG_0097.png?w=320&amp;ssl=1 320w\" sizes=\"auto, (max-width: 200px) 100vw, 200px\" \/><\/a>In this article I will explain how to do face detection on a live video feed using an iOS 5 device. We will be using Core Image to do the heavy lifting. The code is loosely based on the <a href=\"http:\/\/developer.apple.com\/library\/ios\/#samplecode\/SquareCam\/Introduction\/Intro.html\">SquareCam<\/a> sample code from Apple.<\/p>\n<p>To get started, we need to show the live video of the front facing camera. We use AVFoundation to do this. We start by setting up the AVCaptureSession. We use 640&#215;480 as the capture resolution. Keep in mind that face detection is relatively compute intensive. The less pixels we need to munch, the faster the processing can be done. This is an interactive application, so realtime performance is important. We tell the AVCaptureSession which camera to use as input device.<\/p>\n<p>To show the preview, we create an AVCaptureVideoPreviewLayer and add it to the previewView, that was created in the Xib. Don&#8217;t forget to call [session startRunning]. 
<p>To get started, we need to show the live video from the front-facing camera. We use AVFoundation for this, and begin by setting up the AVCaptureSession with 640×480 as the capture resolution. Keep in mind that face detection is relatively compute-intensive: the fewer pixels we need to munch, the faster the processing can be done, and since this is an interactive application, real-time performance is important. We then tell the AVCaptureSession which camera to use as the input device.</p>

<p>To show the preview, we create an AVCaptureVideoPreviewLayer and add it to the previewView, which was created in the XIB. Don't forget to call [session startRunning]. That was the easy part.</p>

<pre class="brush:c">NSError *error = nil;
AVCaptureSession *session = [[AVCaptureSession alloc] init];
if ([[UIDevice currentDevice] userInterfaceIdiom] == UIUserInterfaceIdiomPhone) {
    [session setSessionPreset:AVCaptureSessionPreset640x480];
} else {
    [session setSessionPreset:AVCaptureSessionPresetPhoto];
}

// Select a video device, make an input
AVCaptureDevice *device = nil;
AVCaptureDevicePosition desiredPosition = AVCaptureDevicePositionFront;
// find the front-facing camera
for (AVCaptureDevice *d in [AVCaptureDevice devicesWithMediaType:AVMediaTypeVideo]) {
    if ([d position] == desiredPosition) {
        device = d;
        self.isUsingFrontFacingCamera = YES;
        break;
    }
}
// fall back to the default camera
if (nil == device) {
    self.isUsingFrontFacingCamera = NO;
    device = [AVCaptureDevice defaultDeviceWithMediaType:AVMediaTypeVideo];
}

// get the input device
AVCaptureDeviceInput *deviceInput = [AVCaptureDeviceInput deviceInputWithDevice:device error:&error];
if (!error) {
    // add the input to the session
    if ([session canAddInput:deviceInput]) {
        [session addInput:deviceInput];
    }

    self.previewLayer = [[AVCaptureVideoPreviewLayer alloc] initWithSession:session];
    self.previewLayer.backgroundColor = [[UIColor blackColor] CGColor];
    self.previewLayer.videoGravity = AVLayerVideoGravityResizeAspect;

    CALayer *rootLayer = [self.previewView layer];
    [rootLayer setMasksToBounds:YES];
    [self.previewLayer setFrame:[rootLayer bounds]];
    [rootLayer addSublayer:self.previewLayer];
    [session startRunning];
}
session = nil; // the preview layer retains the session, so we can drop our local reference

if (error) {
    UIAlertView *alertView = [[UIAlertView alloc] initWithTitle:
                              [NSString stringWithFormat:@"Failed with error %d", (int)[error code]]
                                                        message:[error localizedDescription]
                                                       delegate:nil
                                              cancelButtonTitle:@"Dismiss"
                                              otherButtonTitles:nil];
    [alertView show];
    [self teardownAVCapture];
}</pre>
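<p>The error path above calls a teardownAVCapture method that is not listed in this post. Assuming it simply undoes the capture setup, a minimal version could look like this:</p>
<pre class="brush:c">// Hypothetical cleanup; the original method is not shown in the post.
- (void)teardownAVCapture
{
    [self.previewLayer.session stopRunning];
    [self.videoDataOutput setSampleBufferDelegate:nil queue:NULL];
    self.videoDataOutput = nil;
    [self.previewLayer removeFromSuperlayer];
    self.previewLayer = nil;
}</pre>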
<p>Now for the face detection.</p>

<p>We create the face detector itself in viewDidLoad and keep a reference to it in a property. We use low accuracy, again for performance reasons.</p>

<pre class="brush:c">NSDictionary *detectorOptions = [[NSDictionary alloc] initWithObjectsAndKeys:CIDetectorAccuracyLow, CIDetectorAccuracy, nil];
self.faceDetector = [CIDetector detectorOfType:CIDetectorTypeFace context:nil options:detectorOptions];</pre>

<p>We access the data captured by the camera by creating an AVCaptureVideoDataOutput, using BGRA as the pixel format. We drop frames we cannot process in time. To do the actual processing, we create a separate serial dispatch queue; the frames are delivered to the sample buffer delegate method, which is called for every frame on that queue.</p>

<pre class="brush:c">// Make a video data output
self.videoDataOutput = [[AVCaptureVideoDataOutput alloc] init];

// we want BGRA, both Core Graphics and OpenGL work well with 'BGRA'
NSDictionary *rgbOutputSettings = [NSDictionary dictionaryWithObject:
                                   [NSNumber numberWithInt:kCMPixelFormat_32BGRA] forKey:(id)kCVPixelBufferPixelFormatTypeKey];
[self.videoDataOutput setVideoSettings:rgbOutputSettings];
[self.videoDataOutput setAlwaysDiscardsLateVideoFrames:YES]; // discard if the data output queue is blocked

// create a serial dispatch queue used for the sample buffer delegate
// a serial dispatch queue must be used to guarantee that video frames will be delivered in order
// see the header doc for setSampleBufferDelegate:queue: for more information
self.videoDataOutputQueue = dispatch_queue_create("VideoDataOutputQueue", DISPATCH_QUEUE_SERIAL);
[self.videoDataOutput setSampleBufferDelegate:self queue:self.videoDataOutputQueue];

if ([session canAddOutput:self.videoDataOutput]) {
    [session addOutput:self.videoDataOutput];
}
// enable the output connection used for doing face detection
[[self.videoDataOutput connectionWithMediaType:AVMediaTypeVideo] setEnabled:YES];</pre>

<p>The actual processing happens in the delegate method, which is called on the background queue. First we grab the pixel buffer from the sample buffer and turn it into a CIImage, passing along all the attachments that came with the captured frame. We also hand the detector the EXIF orientation of the image, because it needs to know which side is up. The actual face detection is done by [self.faceDetector featuresInImage:ciImage options:imageOptions].</p>

<pre class="brush:c">- (void)captureOutput:(AVCaptureOutput *)captureOutput
didOutputSampleBuffer:(CMSampleBufferRef)sampleBuffer
       fromConnection:(AVCaptureConnection *)connection
{
    // get the image
    CVPixelBufferRef pixelBuffer = CMSampleBufferGetImageBuffer(sampleBuffer);
    CFDictionaryRef attachments = CMCopyDictionaryOfAttachments(kCFAllocatorDefault, sampleBuffer, kCMAttachmentMode_ShouldPropagate);
    CIImage *ciImage = [[CIImage alloc] initWithCVPixelBuffer:pixelBuffer
                                                      options:(__bridge NSDictionary *)attachments];
    if (attachments) {
        CFRelease(attachments);
    }

    // make sure your device orientation is not locked.
    UIDeviceOrientation curDeviceOrientation = [[UIDevice currentDevice] orientation];

    NSDictionary *imageOptions = [NSDictionary dictionaryWithObject:[self exifOrientation:curDeviceOrientation]
                                                              forKey:CIDetectorImageOrientation];

    NSArray *features = [self.faceDetector featuresInImage:ciImage
                                                   options:imageOptions];

    // get the clean aperture
    // the clean aperture is a rectangle that defines the portion of the encoded pixel dimensions
    // that represents image data valid for display.
    CMFormatDescriptionRef fdesc = CMSampleBufferGetFormatDescription(sampleBuffer);
    CGRect cleanAperture = CMVideoFormatDescriptionGetCleanAperture(fdesc, false /*originIsTopLeft == false*/);

    dispatch_async(dispatch_get_main_queue(), ^(void) {
        [self drawFaces:features
            forVideoBox:cleanAperture
            orientation:curDeviceOrientation];
    });
}</pre>
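<p>The exifOrientation: helper used above is not listed in this post. It has to return an NSNumber, because its result is used directly as the value for CIDetectorImageOrientation. A sketch in the spirit of Apple's SquareCam sample, assuming the front-facing camera, could look like this:</p>
<pre class="brush:c">// Hypothetical helper: maps the device orientation to an EXIF orientation value
// for CIDetectorImageOrientation. This mapping assumes the front-facing camera;
// see Apple's SquareCam sample for the back-camera variants.
- (NSNumber *)exifOrientation:(UIDeviceOrientation)orientation
{
    int exifOrientation;
    switch (orientation) {
        case UIDeviceOrientationPortraitUpsideDown:
            exifOrientation = 8; // 0th row on the left, 0th column at the bottom
            break;
        case UIDeviceOrientationLandscapeLeft:
            exifOrientation = 3; // 0th row at the bottom, 0th column on the right
            break;
        case UIDeviceOrientationLandscapeRight:
            exifOrientation = 1; // 0th row at the top, 0th column on the left
            break;
        case UIDeviceOrientationPortrait:
        default:
            exifOrientation = 6; // 0th row on the right, 0th column at the top
            break;
    }
    return [NSNumber numberWithInt:exifOrientation];
}</pre>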
<p>The last step is to actually draw something on the screen where the face has been detected. The method drawFaces:forVideoBox:orientation: is called on the main thread to do this.</p>

<p>In this method, we draw an image onto a CALayer inside the previewLayer. For each detected face, we create or reuse a layer and set its frame based on the bounds of the detected face. Because the video has been scaled to fit the preview, we also have to apply that scale factor to the coordinates. Then we position the image onto the layer, and finally the layer is rotated into the right orientation, based on the device orientation.</p>

<pre class="brush:c">// called asynchronously as the capture output is capturing sample buffers, this method asks the face detector
// to detect features and for each face draws the border image in a layer with the appropriate orientation
- (void)drawFaces:(NSArray *)features
      forVideoBox:(CGRect)clearAperture
      orientation:(UIDeviceOrientation)orientation
{
    NSArray *sublayers = [NSArray arrayWithArray:[self.previewLayer sublayers]];
    NSInteger sublayersCount = [sublayers count], currentSublayer = 0;
    NSInteger featuresCount = [features count], currentFeature = 0;

    [CATransaction begin];
    [CATransaction setValue:(id)kCFBooleanTrue forKey:kCATransactionDisableActions];

    // hide all the face layers
    for (CALayer *layer in sublayers) {
        if ([[layer name] isEqualToString:@"FaceLayer"])
            [layer setHidden:YES];
    }

    if (featuresCount == 0) {
        [CATransaction commit];
        return; // early bail.
    }

    CGSize parentFrameSize = [self.previewView frame].size;
    NSString *gravity = [self.previewLayer videoGravity];
    BOOL isMirrored = [self.previewLayer isMirrored];
    CGRect previewBox = [ViewController videoPreviewBoxForGravity:gravity
                                                        frameSize:parentFrameSize
                                                     apertureSize:clearAperture.size];

    for (CIFaceFeature *ff in features) {
        // find the correct position for the square layer within the previewLayer
        // the feature box originates in the bottom left of the video frame.
        // (Bottom right if mirroring is turned on)
        CGRect faceRect = [ff bounds];

        // flip preview width and height
        CGFloat temp = faceRect.size.width;
        faceRect.size.width = faceRect.size.height;
        faceRect.size.height = temp;
        temp = faceRect.origin.x;
        faceRect.origin.x = faceRect.origin.y;
        faceRect.origin.y = temp;
        // scale coordinates so they fit in the preview box, which may be scaled
        CGFloat widthScaleBy = previewBox.size.width / clearAperture.size.height;
        CGFloat heightScaleBy = previewBox.size.height / clearAperture.size.width;
        faceRect.size.width *= widthScaleBy;
        faceRect.size.height *= heightScaleBy;
        faceRect.origin.x *= widthScaleBy;
        faceRect.origin.y *= heightScaleBy;

        if (isMirrored)
            faceRect = CGRectOffset(faceRect, previewBox.origin.x + previewBox.size.width - faceRect.size.width - (faceRect.origin.x * 2), previewBox.origin.y);
        else
            faceRect = CGRectOffset(faceRect, previewBox.origin.x, previewBox.origin.y);

        CALayer *featureLayer = nil;

        // re-use an existing layer if possible
        while (!featureLayer && (currentSublayer < sublayersCount)) {
            CALayer *currentLayer = [sublayers objectAtIndex:currentSublayer++];
            if ([[currentLayer name] isEqualToString:@"FaceLayer"]) {
                featureLayer = currentLayer;
                [currentLayer setHidden:NO];
            }
        }

        // create a new one if necessary
        if (!featureLayer) {
            featureLayer = [[CALayer alloc] init];
            featureLayer.contents = (id)self.borderImage.CGImage;
            [featureLayer setName:@"FaceLayer"];
            [self.previewLayer addSublayer:featureLayer];
        }
        [featureLayer setFrame:faceRect];

        switch (orientation) {
            case UIDeviceOrientationPortrait:
                [featureLayer setAffineTransform:CGAffineTransformMakeRotation(DegreesToRadians(0.))];
                break;
            case UIDeviceOrientationPortraitUpsideDown:
                [featureLayer setAffineTransform:CGAffineTransformMakeRotation(DegreesToRadians(180.))];
                break;
            case UIDeviceOrientationLandscapeLeft:
                [featureLayer setAffineTransform:CGAffineTransformMakeRotation(DegreesToRadians(90.))];
                break;
            case UIDeviceOrientationLandscapeRight:
                [featureLayer setAffineTransform:CGAffineTransformMakeRotation(DegreesToRadians(-90.))];
                break;
            case UIDeviceOrientationFaceUp:
            case UIDeviceOrientationFaceDown:
            default:
                break; // leave the layer in its last known orientation
        }
        currentFeature++;
    }

    [CATransaction commit];
}</pre>
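<p>drawFaces:forVideoBox:orientation: relies on two helpers that are not listed here: a DegreesToRadians macro and the class method videoPreviewBoxForGravity:frameSize:apertureSize:, which computes the rectangle the video actually occupies inside the preview view. Both sketches below are assumptions; the second one is simplified to handle only AVLayerVideoGravityResizeAspect, the gravity we configured earlier.</p>
<pre class="brush:c">// Helpers assumed by drawFaces:forVideoBox:orientation:; neither is shown in the post.
#define DegreesToRadians(degrees) ((degrees) * M_PI / 180.0)

// Computes the on-screen rectangle that the video occupies inside the preview view.
// Note that the clean aperture is in landscape coordinates while the view is portrait,
// which is why width and height are swapped in the aperture ratio.
+ (CGRect)videoPreviewBoxForGravity:(NSString *)gravity
                          frameSize:(CGSize)frameSize
                       apertureSize:(CGSize)apertureSize
{
    CGFloat apertureRatio = apertureSize.height / apertureSize.width;
    CGFloat viewRatio = frameSize.width / frameSize.height;

    CGSize size = frameSize;
    if ([gravity isEqualToString:AVLayerVideoGravityResizeAspect]) {
        if (viewRatio > apertureRatio) {
            // view is wider than the video: full height, pillarboxed
            size.width = apertureSize.height * (frameSize.height / apertureSize.width);
            size.height = frameSize.height;
        } else {
            // view is taller than the video: full width, letterboxed
            size.width = frameSize.width;
            size.height = apertureSize.width * (frameSize.width / apertureSize.height);
        }
    }

    // center the video box in the view
    return CGRectMake((frameSize.width - size.width) / 2.0,
                      (frameSize.height - size.height) / 2.0,
                      size.width,
                      size.height);
}</pre>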
<p>There you go. That is the basic principle behind face detection in iOS 5. For the nitty-gritty details, have a look at <a title="the code" href="https://github.com/jeroentrappers/FaceDetectionPOC">the code</a> on GitHub or download <a href="http://www.trappers.tk/share/FaceDetectionPOC.zip">the zip</a>.</p>

<p>There is much more to be explored. Core Image also provides access to the detected positions of the eyes and mouth, which would make it possible to place the mustache more accurately. We could also rotate the image based on the angle of the face on the screen; a rough sketch of that idea follows at the end of the post.</p>

<p>Adios!</p>

<p>Any feedback is appreciated in the comments.</p>
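<p>Here is that sketch. It is not part of the FaceDetectionPOC project; it only shows which CIFaceFeature properties you would read, and the points it produces would still need the same flip/scale/mirror mapping that drawFaces: applies to faceRect before they can be used in the preview layer.</p>
<pre class="brush:c">// Sketch only: CIFaceFeature exposes eye and mouth positions (in the coordinate
// space of the CIImage, origin in the bottom-left corner).
for (CIFaceFeature *ff in features) {
    if (ff.hasLeftEyePosition && ff.hasRightEyePosition && ff.hasMouthPosition) {
        CGPoint leftEye  = ff.leftEyePosition;
        CGPoint rightEye = ff.rightEyePosition;
        CGPoint mouth    = ff.mouthPosition;

        // tilt of the face, derived from the line between the eyes
        CGFloat angle = atan2(rightEye.y - leftEye.y, rightEye.x - leftEye.x);

        // a plausible anchor point for the mustache: halfway between the mouth
        // and the midpoint of the eyes
        CGPoint eyeCenter = CGPointMake((leftEye.x + rightEye.x) / 2.0,
                                        (leftEye.y + rightEye.y) / 2.0);
        CGPoint mustacheCenter = CGPointMake((eyeCenter.x + mouth.x) / 2.0,
                                             (eyeCenter.y + mouth.y) / 2.0);

        // ... map mustacheCenter into the preview box (as done for faceRect in
        // drawFaces:) and apply CGAffineTransformMakeRotation(angle) to the layer.
    }
}</pre>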