NIH Research Festival
FAES Terrace, CC (BIOENG-2)
Segment Anything Model (SAM) is a recently released deep learning foundation model for image segmentation. We evaluated the multi-organ segmentation capabilities of SAM as an interactive, semi-automated annotation tool for magnetic resonance (MR) images. The evaluation used a simulated interactive annotation setup on a publicly available multi-organ MRI dataset (AMOS22). The initial prompt given to SAM was the ground-truth bounding box with added random jitter, from which SAM produced an initial segmentation mask. In subsequent iterations, SAM took point-based prompts, together with all previous prompts and the current segmentation mask, to generate the next refined segmentation mask. This procedure was repeated for 10 iterations to produce the final segmentation. The segmentation results for the 15 target organs were evaluated against the ground-truth masks using the Dice similarity coefficient (DSC) on 2D slices. The mean±std DSC across the 15 organs after the initial bounding-box prompt was 0.777±0.049; after 10 iterative refinement steps it was 0.901±0.056. The three best-segmented organs were the right kidney (0.954±0.034), left kidney (0.951±0.028), and spleen (0.949±0.051); the three worst were the right adrenal gland (0.807±0.113), prostate/uterus (0.819±0.246), and duodenum (0.825±0.156). The mean DSC increased monotonically with the number of prompts, indicating progressively better segmentation. These results demonstrate that, after 10 iterations, SAM provides reasonable segmentation results for most organs in MR images, suggesting that SAM can potentially reduce radiologists' annotation burden for segmenting MR images to just a few mouse clicks.
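As a rough illustration, the simulated interactive loop described above can be sketched with the public segment-anything API (SamPredictor). The jitter magnitude, the rule for picking each correction point from the error region, and the model checkpoint below are illustrative assumptions; the abstract does not specify these details.

```python
# Minimal sketch of the simulated interactive annotation loop, assuming the
# public segment-anything package. Jitter size, point-selection rule, and
# model checkpoint are illustrative assumptions, not details from the study.
import numpy as np
from segment_anything import sam_model_registry, SamPredictor


def jittered_box(gt_mask, rng, jitter=5):
    """Ground-truth bounding box (XYXY) with random jitter in pixels."""
    ys, xs = np.nonzero(gt_mask)
    box = np.array([xs.min(), ys.min(), xs.max(), ys.max()], dtype=float)
    return box + rng.integers(-jitter, jitter + 1, size=4)


def sample_correction_point(pred, gt, rng):
    """One click from the larger error region: positive on a missed pixel
    (false negative), negative on a spurious pixel (false positive).
    This selection rule is an assumed detail, not taken from the abstract."""
    fn, fp = gt & ~pred, pred & ~gt
    if fn.sum() == 0 and fp.sum() == 0:
        return None, None                      # prediction already perfect
    region, label = (fn, 1) if fn.sum() >= fp.sum() else (fp, 0)
    ys, xs = np.nonzero(region)
    i = rng.integers(len(xs))
    return np.array([xs[i], ys[i]]), label


def dice(pred, gt):
    """Dice similarity coefficient on a 2D slice: 2|A∩B| / (|A| + |B|)."""
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())


def simulate_slice(image, gt_mask, checkpoint, n_iters=10, seed=0):
    """Run the bounding-box prompt plus n_iters point refinements; return DSC."""
    rng = np.random.default_rng(seed)
    sam = sam_model_registry["vit_h"](checkpoint=checkpoint)
    predictor = SamPredictor(sam)
    predictor.set_image(image)                 # HxWx3 uint8 RGB slice

    gt = gt_mask.astype(bool)
    box = jittered_box(gt, rng)
    masks, _, logits = predictor.predict(box=box, multimask_output=False)
    pred = masks[0]

    points, labels = [], []
    for _ in range(n_iters):
        pt, lb = sample_correction_point(pred, gt, rng)
        if pt is None:
            break
        points.append(pt)
        labels.append(lb)
        # Each refinement reuses all previous prompts plus the current
        # low-resolution mask logits from the previous call.
        masks, _, logits = predictor.predict(
            point_coords=np.array(points),
            point_labels=np.array(labels),
            box=box,
            mask_input=logits,                 # shape (1, 256, 256)
            multimask_output=False,
        )
        pred = masks[0]
    return dice(pred, gt)
```

In this setup, each call to predict receives the full accumulated prompt history, so the model refines rather than restarts; the per-organ mean DSC is obtained by averaging simulate_slice over all annotated 2D slices of that organ.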
Scientific Focus Area: Biomedical Engineering and Biophysics