Deep learning based semantic segmentation and quantification for MRD biochip images
Abstract
Microfluidic platforms offer prominent advantages for the early detection of cancer and for monitoring patient response to therapy. Numerous microfluidic platforms integrating several readout methods have been developed for capturing and quantifying tumor cells. We previously developed a microfluidic platform (MRD Biochip) to capture and quantify leukemia cells. This is the first study to apply deep learning-based segmentation to MRD Biochip images, which contain leukemic cells, immunomagnetic beads, and micropads.
Implementing deep learning algorithms makes two main contributions: first, the quantification performance of the readout method is improved on the unbalanced dataset; second, unlike the previous classical computer vision-based method, no manual parameter tuning is required, which results in a model that generalizes better to variations in object size, color, and noise. As a result of these benefits, the proposed system is promising for providing real-time analysis in microfluidic systems. Moreover, we compare different deep learning-based semantic segmentation algorithms on an image dataset acquired from real patient samples using bright-field microscopy. Without cell staining, the hyper-parameter-optimized and modified U-Net semantic segmentation algorithm yields 98.7% global accuracy, 86.1% mean IoU, 92.2% mean precision, 92.2% mean recall, and 92.2% mean F-1 score on the patient dataset. After segmentation, quantification yields 89% average precision and 97% average recall on the test images. By applying deep learning algorithms, we improve upon our previous results obtained with conventional computer vision methods.
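For reference, the pixel-level metrics reported above (global accuracy, mean IoU, mean precision, mean recall, mean F-1 score) follow their standard definitions and can be computed from a class confusion matrix of the segmentation output. The short sketch below illustrates those definitions only; it is not the authors' evaluation code, and the three-class layout and pixel counts are hypothetical.

    import numpy as np

    def segmentation_metrics(conf):
        # conf[i, j] = number of pixels with true class i predicted as class j
        conf = conf.astype(np.float64)
        tp = np.diag(conf)                  # true positives per class
        fp = conf.sum(axis=0) - tp          # false positives per class
        fn = conf.sum(axis=1) - tp          # false negatives per class

        global_accuracy = tp.sum() / conf.sum()
        iou = tp / (tp + fp + fn)           # per-class intersection over union
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        f1 = 2 * precision * recall / (precision + recall)

        return {
            "global_accuracy": global_accuracy,
            "mean_iou": iou.mean(),
            "mean_precision": precision.mean(),
            "mean_recall": recall.mean(),
            "mean_f1": f1.mean(),
        }

    # Hypothetical 3-class example (e.g., background, bead, cell); counts are illustrative.
    conf = np.array([[980, 10, 10],
                     [ 15, 80,  5],
                     [  8,  4, 88]])
    print(segmentation_metrics(conf))

Averaging the per-class values yields the "mean" metrics quoted in the abstract, which is how class imbalance (e.g., far more background pixels than cell pixels) is kept from dominating the score.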