Instance Segmentation of Densely Packed Cells Using a Hybrid Model of U-Net and Mask R-CNN

Title: Instance Segmentation of Densely Packed Cells Using a Hybrid Model of U-Net and Mask R-CNN
Publication Type: Conference Paper
Year of Publication: 2020
Authors: Konopczyński, T., Heiman, R., Woźnicki, P., Gniewek, P., Duvernoy, M.-C., Hallatschek, O., Hesser, J.
Editor: Rutkowski, L., Scherer, R., Korytkowski, M., Pedrycz, W., Tadeusiewicz, R., Zurada, J. M.
Conference Name: Artificial Intelligence and Soft Computing
Publisher: Springer International Publishing
Conference Location: Cham
ISBN Number: 978-3-030-61401-0
Keywords: Cell segmentation, Instance segmentation, Mask R-CNN, U-Net

In malignant tumors and microbial infections, cells commonly grow under confinement due to rapid proliferation in limited space. Nonetheless, this effect is poorly documented despite its influence on drug efficacy. Studying budding yeast grown in space-limited micro-environments is a powerful way to investigate this effect, provided that robust cell instance segmentation is available. Due to confinement, cells become densely packed, which impairs traditional segmentation methods. To tackle this problem, we evaluate the performance of Mask R-CNN-based methods on our dataset of budding yeast populations in a space-limited environment. We compare a number of methods, including the pure Mask R-CNN, the 1st- and 2nd-place solutions of the 2018 Kaggle Data Science Bowl, and a watershed ensemble variant of Mask R-CNN and U-Net. Additionally, we propose a Hybrid model that combines a semantic and an instance segmentation module sequentially. In this model, the encoder-decoder architecture used for semantic segmentation produces a segmentation probability map, which is concatenated with the input image and then fed into the Mask R-CNN network to obtain the final instance segmentation result. Consequently, this model is able to efficiently share and reuse information at different levels between the two network modules. Our experiments demonstrate that the proposed model performs best, achieving a mean Average Precision (mAP) of 0.724 and a Dice coefficient of 0.9284 on our dataset.

Citation Key: konopczynski_instance_2020
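
The core idea of the abstract's Hybrid model, a semantic probability map concatenated channel-wise with the input image before the instance module, can be sketched in PyTorch. This is a minimal illustration under stated assumptions: `TinySemanticNet` is a toy stand-in for the paper's U-Net (layer sizes and depth are not taken from the paper), and `hybrid_input` only shows the concatenation step that produces the 4-channel tensor.

```python
import torch
import torch.nn as nn

class TinySemanticNet(nn.Module):
    """Toy encoder-decoder standing in for the paper's semantic module.
    All layer widths here are illustrative assumptions, not the paper's."""
    def __init__(self, in_ch: int = 3):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(in_ch, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.dec = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Per-pixel cell probability in [0, 1].
        return torch.sigmoid(self.dec(self.enc(x)))

def hybrid_input(image: torch.Tensor, semantic: nn.Module) -> torch.Tensor:
    """Concatenate the semantic probability map with the RGB image,
    yielding the 4-channel tensor the instance module would consume."""
    prob = semantic(image)                   # (N, 1, H, W)
    return torch.cat([image, prob], dim=1)   # (N, 4, H, W)

img = torch.rand(1, 3, 64, 64)
x4 = hybrid_input(img, TinySemanticNet())
print(tuple(x4.shape))  # (1, 4, 64, 64)
```

A Mask R-CNN consuming this tensor would need its first convolution widened from 3 to 4 input channels; in torchvision, for example, one would replace `model.backbone.body.conv1` with a `nn.Conv2d(4, 64, kernel_size=7, stride=2, padding=3, bias=False)` and extend the transform's normalization statistics to four channels.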