FaceMatch: visual search for pictures of missing and found persons during a disaster event

Thursday, October 11, 2012 — Poster Session III

10:00 a.m. – Noon

Natcher Conference Center, Building 45

NLM

IMAG-3

Authors

  • E. Borovikov
  • P. Ghosh
  • S. Vajda
  • G. Thoma
  • S. Antani
  • M. Gill

Abstract

NLM’s People Locator system allows the posting of photos and simple metadata (name, age, location) for persons missing (or found) in the wake of a disaster. To extend the current text-based search to a visual search for people's faces, we developed FaceMatch, a system that matches faces in a query image to those in the stored photos. Face matching is a two-stage process: faces in query photos are first localized using an improved Viola-Jones face detector; image features (SIFT, SURF, ORB, and Haar) are then extracted, combined, and matched against an index of features extracted from the stored photos. Face matching is challenging because of the lack of training data, low-resolution photos, and wide variability in lighting, facial expression, head pose, ethnicity, occlusion, and facial deformation due to injury. Ongoing research, using images collected after the 2010 Haiti earthquake and the Labeled Faces in the Wild dataset, explores more discriminating features, skin-color modeling for more accurate face localization, a Haar wavelet-based technique for eliminating near-duplicate photos, and image normalization. Speed and accuracy results for FaceMatch are presented.
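The second stage described above, matching features from a query face against an index of stored-photo features, can be sketched with binary descriptors such as ORB's, compared by Hamming distance and filtered with a nearest-neighbor ratio test. This is an illustrative sketch, not the FaceMatch implementation; the ratio threshold and descriptor layout are assumptions:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors (uint8 arrays)."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_descriptors(query, index, ratio=0.8):
    """Match each query descriptor to its nearest indexed descriptor.

    A match is kept only if it passes a ratio test: the best distance
    must be clearly smaller than the second-best, which rejects
    ambiguous matches. Returns (query_idx, index_idx, distance) tuples.
    """
    matches = []
    for qi, q in enumerate(query):
        dists = sorted((hamming(q, d), di) for di, d in enumerate(index))
        if len(dists) >= 2 and dists[0][0] < ratio * dists[1][0]:
            matches.append((qi, dists[0][1], dists[0][0]))
    return matches

# Toy 4-byte descriptors; real ORB descriptors are 32 bytes.
index = [np.array([0b11110000] * 4, dtype=np.uint8),
         np.array([0b00001111] * 4, dtype=np.uint8),
         np.array([0b10101010] * 4, dtype=np.uint8)]
# Query descriptor one bit away from index[0].
query = [np.array([0b11110001, 0b11110000, 0b11110000, 0b11110000],
                  dtype=np.uint8)]
print(match_descriptors(query, index))  # → [(0, 0, 1)]
```

In a production pipeline the linear scan over the index would be replaced by an approximate nearest-neighbor structure, and the ratio-tested matches would be aggregated into a per-photo similarity score for ranking.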
