NIH Research Festival
FAES Terrace, CC
BIOENG-22
Histograms of oriented gradients (HOG) are widely employed image descriptors in modern computer-aided diagnosis systems. Built upon a set of local, robust statistics of low-level image gradients, HOG features are usually computed on raw intensity images. In this paper, we explore a learned image transformation scheme for producing higher-level inputs to HOG. Leveraging semantic object boundary cues, our method computes data-driven image feature maps via a supervised boundary detector. Compared with the raw intensity map, boundary cues offer mid-level, more object-specific visual responses that are better suited for subsequent HOG encoding. We validate the integration of several image transformation maps in an application to computer-aided detection of lymph nodes in thoracoabdominal CT images. Our experiments demonstrate that HOG descriptors built on semantic boundary cues complement and enrich those computed on raw intensity alone, and the overall system achieves substantially improved results (∼78% versus 60% recall at 3 FP/volume for the two target regions). The proposed system also moderately outperforms a state-of-the-art deep convolutional neural network (CNN) system in the mediastinum region, without relying on data augmentation and while requiring significantly fewer training samples.
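As a rough illustration of the descriptor pipeline described above, the sketch below computes HOG on both a raw intensity patch and a boundary-cue map, then concatenates the two descriptors. It uses scikit-image's `hog` and a simple Sobel gradient-magnitude map as an assumed stand-in for the supervised, learned boundary detector used in this work; the patch size, HOG parameters, and synthetic input are likewise illustrative assumptions, not the authors' actual configuration.

```python
# Minimal sketch: HOG on a boundary-cue map versus raw intensity, concatenated.
# The Sobel magnitude map is only a simple stand-in for the supervised,
# learned boundary detector described in the abstract.
import numpy as np
from skimage.feature import hog
from skimage.filters import sobel


def hog_features(image_2d):
    """Standard HOG encoding of a single 2D feature map."""
    return hog(
        image_2d,
        orientations=9,
        pixels_per_cell=(8, 8),
        cells_per_block=(2, 2),
        block_norm="L2-Hys",
        feature_vector=True,
    )


# Synthetic 64x64 patch as a placeholder for a real lymph-node candidate ROI.
rng = np.random.default_rng(0)
raw_patch = rng.normal(size=(64, 64)).astype(np.float32)

boundary_map = sobel(raw_patch)             # stand-in boundary-cue map
raw_desc = hog_features(raw_patch)          # HOG on raw intensity
boundary_desc = hog_features(boundary_map)  # HOG on boundary cues

# Concatenate the two descriptor channels for a downstream classifier.
combined = np.concatenate([raw_desc, boundary_desc])
print(raw_desc.shape, boundary_desc.shape, combined.shape)
```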
Scientific Focus Area: Biomedical Engineering and Biophysics