Deep Learning Workflows for Cell Nucleus Segmentation from Fluorescence Microscopy Images

Friday, September 14, 2018 — Poster Session V

12:00 p.m. – 1:30 p.m.
FAES Terrace
NCI
COMPBIO-14

Authors

  • G Zaki
  • P Gudla
  • J Kim
  • S Shachar
  • T Misteli
  • G Pegoraro

Abstract

The segmentation of the cell nucleus based on the DNA stain signal from fluorescence microscopy images is often the first step in high throughput imaging (HTI) applications. For instance, measuring the position of genomic loci using fluorescence in situ hybridization (FISH) requires accurate identification of the nucleus periphery. Similarly, HTI-based phenotypic screens of perturbing agents (e.g., siRNA, chemical libraries) also require accurate detection of nuclei and (sub)cellular objects to quantify the biological effects. Ideally, nuclear segmentation algorithms should be robust with regard to varying distributions of pixel intensities, day-to-day experimental variations, different cell and nucleus morphologies, and overlapping or closely packed cell nuclei. In this work, we describe nuclear segmentation workflows based on Deep Convolutional Neural Networks (D-CNNs) that take advantage of transfer learning and that do not require a large number of labelled training images. In addition, we show how to perform hyperparameter optimization for these D-CNNs on the NIH Biowulf HPC cluster using the CANcer Distributed Learning Environment (CANDLE). To benchmark these D-CNNs, we semi-quantitatively compared the performance of two state-of-the-art D-CNN-based segmentation models, UResNet152 and Mask R-CNN. The former falls under the category of pixel-based semantic image segmentation (i.e., the task of assigning each pixel to a pre-determined nucleus or background class). The latter belongs to the category of multiple-class, multiple-instance image segmentation (i.e., the detection of individual objects of different classes). Preliminary results of tests using both models are very promising, even when the models are trained on a limited number of ground-truth labelled images.
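The semantic-versus-instance distinction above can be sketched with a toy example (this is an illustration, not the poster's code): a semantic model such as UResNet152 outputs a per-pixel nucleus/background map, from which separate nuclei can be recovered, e.g., by connected-component labelling, whereas an instance model such as Mask R-CNN predicts a mask per object directly.

```python
# Illustrative sketch: semantic output (per-pixel classes) vs. a simple
# instance labelling via connected components. Real instance models like
# Mask R-CNN predict per-object masks directly, which also handles
# touching or overlapping nuclei that connected components cannot split.
import numpy as np
from scipy import ndimage

# Toy "semantic segmentation" output: 1 = nucleus pixel, 0 = background.
semantic_mask = np.array([
    [1, 1, 0, 0, 0],
    [1, 1, 0, 1, 1],
    [0, 0, 0, 1, 1],
    [0, 0, 0, 0, 0],
], dtype=np.uint8)

# Connected-component labelling assigns each spatially separate
# nucleus its own integer id (0 remains background).
instance_labels, n_nuclei = ndimage.label(semantic_mask)

print(n_nuclei)         # -> 2 distinct nuclei in this toy mask
print(instance_labels)  # per-pixel instance ids
```

The limitation this sketch exposes — two nuclei that touch would merge into one component — is precisely the motivation for instance-aware models in densely packed fields of cells.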

Category: Computational Biology