1
Gao W, Bai Y, Yang Y, Jia L, Mi Y, Cui W, Liu D, Shakoor A, Zhao L, Li J, Luo T, Sun D, Jiang Z. Intelligent sensing for the autonomous manipulation of microrobots toward minimally invasive cell surgery. Appl Phys Rev 2024; 11. DOI: 10.1063/5.0211141
Abstract
The physiology and pathogenesis of biological cells have drawn enormous research interest. Benefiting from the rapid development of microfabrication and microelectronics, miniaturized robots with tool sizes below micrometers have been widely studied for manipulating biological cells in vitro and in vivo. Traditionally, the complex physiological environment and the fragility of biological samples have required manual intervention to fulfill these tasks, posing a high risk of irreversible structural or functional damage and even clinical complications. Intelligent sensing devices and approaches have recently been integrated into robotic systems for environment visualization and interaction-force control. As a consequence, microrobots can be manipulated autonomously with visual and interaction-force feedback, greatly improving accuracy, efficiency, and damage regulation in minimally invasive cell surgery. This review first explores advanced tactile sensing in terms of sensing principles, design methodologies, and underlying physics. It then comprehensively discusses recent progress in visual sensing, summarizing and analyzing imaging instruments and processing methods. Next, it introduces autonomous micromanipulation practices that utilize visual and tactile sensing feedback, and their corresponding applications in minimally invasive surgery. Finally, it highlights the remaining challenges of current robotic micromanipulation and future directions toward clinical trials, providing a valuable reference for the field.
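The closed loop the review describes, vision for positioning plus tactile sensing for damage regulation, can be sketched as a proportional controller with a force cutoff. Everything here (function name, gain, force limit) is an illustrative assumption, not taken from the review:

```python
import numpy as np

def servo_step(tip_xy, target_xy, force_reading, gain=0.5, force_limit=5e-6):
    """One step of a position-based visual-servoing loop (hypothetical setup).

    tip_xy, target_xy : pixel coordinates reported by the vision system.
    force_reading     : interaction force from a tactile sensor, in newtons.
    Returns the commanded displacement; motion is halted when the measured
    force reaches the safety limit, so force feedback overrides vision.
    """
    error = np.asarray(target_xy, dtype=float) - np.asarray(tip_xy, dtype=float)
    step = gain * error                    # proportional control on image error
    if abs(force_reading) >= force_limit:  # tactile safety cutoff
        step = np.zeros_like(step)         # stop to avoid cell damage
    return step

# Tip at (100, 100) moving toward (110, 104) under negligible force:
delta = servo_step((100, 100), (110, 104), force_reading=1e-7)  # -> [5.0, 2.0]
```

In a real system the gain would be replaced by an image-Jacobian-based control law and the cutoff by a compliant force regulator, but the feedback structure is the same.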
Affiliation(s)
- Wendi Gao, Yunfei Bai, Yujie Yang, Yingbiao Mi, Wenji Cui, Dehua Liu, Libo Zhao, Dong Sun, Zhuangde Jiang: State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Lanlan Jia, Junyang Li: Department of Electronic Engineering, Ocean University of China, Qingdao 266400
- Adnan Shakoor: Department of Control and Instrumentation Engineering, King Fahd University of Petroleum and Minerals, Dhahran 31261
- Tao Luo: Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen 361102
- Dong Sun: also Department of Biomedical Engineering, City University of Hong Kong, Hong Kong 999099
2
Zhang B, Wang W, Zhao W, Jiang X, Patnaik LM. An improved approach for automated cervical cell segmentation with PointRend. Sci Rep 2024; 14:14210. PMID: 38902285; PMCID: PMC11189924; DOI: 10.1038/s41598-024-64583-7
Abstract
Regular screening for cervical cancer is one of the best tools to reduce cancer incidence. Automated cell segmentation in screening is an essential task because it provides a better understanding of the characteristics of cervical cells. The main challenge of cell cytoplasm segmentation is that many boundaries in cell clumps are extremely difficult to identify. This paper proposes a new convolutional neural network, based on Mask R-CNN and the PointRend module, to segment overlapping cervical cells. The PointRend head concatenates fine-grained features and coarse features extracted from different feature maps to fine-tune the candidate boundary pixels of cell cytoplasm, which are crucial for precise cell segmentation. The proposed model achieves a 0.97 DSC (Dice similarity coefficient), 0.96 TPRp (pixelwise true positive rate), 0.007 FPRp (pixelwise false positive rate), and 0.006 FNRo (object-level false negative rate) on the ISBI 2014 dataset. Notably, the proposed method outperforms the state-of-the-art result by about 3% in DSC, 1% in TPRp, and 1.4% in FNRo, respectively. On the ISBI 2015 dataset, the performance metrics of the model are slightly better than the average of other approaches. These results indicate that the proposed method could be effective in cytological analysis and help experts correctly identify cervical cell lesions.
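The pixel-level metrics quoted above follow standard definitions and can be computed directly from binary masks. This is a generic sketch of those definitions, not the ISBI challenge's exact object-level matching protocol:

```python
import numpy as np

def pixel_metrics(pred, gt):
    """Pixel-level DSC, TPRp, and FPRp from two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = (pred & gt).sum()    # correctly segmented foreground pixels
    fp = (pred & ~gt).sum()   # background pixels marked as foreground
    fn = (~pred & gt).sum()   # missed foreground pixels
    tn = (~pred & ~gt).sum()  # correctly ignored background pixels
    dsc = 2 * tp / (2 * tp + fp + fn)  # Dice similarity coefficient
    tpr = tp / max(tp + fn, 1)         # pixelwise true positive rate
    fpr = fp / max(fp + tn, 1)         # pixelwise false positive rate
    return dsc, tpr, fpr

pred = np.array([[1, 1], [0, 0]])
gt = np.array([[1, 0], [0, 0]])
dsc, tpr, fpr = pixel_metrics(pred, gt)  # dsc = 2/3, tpr = 1.0, fpr = 1/3
```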
Affiliation(s)
- Baocan Zhang, Wei Zhao, Xiaolu Jiang: Chengyi College, Jimei University, Xiamen 361021, Fujian, China
- Wenfeng Wang: Shanghai Institute of Technology, Shanghai 200235, China; London Institute of Technology, International Academy of Visual Art and Engineering, London CR2 6EQ, UK
3
Hörst F, Rempe M, Heine L, Seibold C, Keyl J, Baldini G, Ugurel S, Siveke J, Grünwald B, Egger J, Kleesiek J. CellViT: Vision Transformers for precise cell segmentation and classification. Med Image Anal 2024; 94:103143. PMID: 38507894; DOI: 10.1016/j.media.2024.103143
Abstract
Nuclei detection and segmentation in hematoxylin and eosin-stained (H&E) tissue images are important clinical tasks and crucial for a wide range of applications. However, the task is challenging due to variance in nuclear staining and size, overlapping boundaries, and nucleus clustering. While convolutional neural networks have been used extensively for this task, we explore the potential of Transformer-based networks combined with large-scale pre-training in this domain. We therefore introduce a new method for automated instance segmentation of cell nuclei in digitized tissue samples using a deep learning architecture based on Vision Transformers, called CellViT. CellViT is trained and evaluated on the PanNuke dataset, one of the most challenging nuclei instance segmentation datasets, consisting of nearly 200,000 nuclei annotated into 5 clinically important classes across 19 tissue types. We demonstrate the superiority of large-scale in-domain and out-of-domain pre-trained Vision Transformers by leveraging the recently published Segment Anything Model and a ViT encoder pre-trained on 104 million histological image patches, achieving state-of-the-art nuclei detection and instance segmentation performance on the PanNuke dataset with a mean panoptic quality of 0.50 and an F1-detection score of 0.83. The code is publicly available at https://github.com/TIO-IKIM/CellViT.
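The panoptic quality (PQ) metric reported above combines detection and segmentation quality in one number. A simplified sketch, assuming predicted and ground-truth nuclei have already been matched one-to-one at IoU > 0.5 (the matching step itself is omitted):

```python
def panoptic_quality(matched_ious, n_pred, n_gt):
    """PQ = (sum of matched IoUs) / (TP + FP/2 + FN/2).

    matched_ious : IoU values of one-to-one matched instance pairs (IoU > 0.5).
    n_pred, n_gt : total counts of predicted and ground-truth instances.
    """
    tp = len(matched_ious)
    fp = n_pred - tp               # unmatched predictions
    fn = n_gt - tp                 # unmatched ground-truth instances
    denom = tp + 0.5 * fp + 0.5 * fn
    return sum(matched_ious) / denom if denom else 0.0

# Two matches (IoU 0.8 and 0.6) out of 3 predictions and 3 ground-truth nuclei:
pq = panoptic_quality([0.8, 0.6], n_pred=3, n_gt=3)  # 1.4 / 3
```

PQ factorizes into segmentation quality (mean matched IoU) times recognition quality (F1 of the matching), which is why it penalizes both sloppy masks and missed or spurious nuclei.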
Affiliation(s)
- Fabian Hörst, Moritz Rempe, Lukas Heine, Jan Egger, Jens Kleesiek: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Cancer Research Center Cologne Essen (CCCE), West German Cancer Center Essen, University Hospital Essen (AöR), 45147 Essen, Germany
- Constantin Seibold: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Clinic for Nuclear Medicine, University Hospital Essen (AöR), 45147 Essen, Germany
- Julius Keyl: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Institute of Pathology, University Hospital Essen (AöR), 45147 Essen, Germany
- Giulia Baldini: Institute for AI in Medicine (IKIM), University Hospital Essen (AöR), 45131 Essen, Germany; Institute of Interventional and Diagnostic Radiology and Neuroradiology, University Hospital Essen (AöR), 45147 Essen, Germany
- Selma Ugurel: Department of Dermatology, University Hospital Essen (AöR), 45147 Essen, Germany; German Cancer Consortium (DKTK, partner site Essen), 69120 Heidelberg, Germany
- Jens Siveke: West German Cancer Center, partner site Essen, a partnership between the German Cancer Research Center (DKFZ) and University Hospital Essen (AöR), 45147 Essen, Germany; Bridge Institute of Experimental Tumor Therapy (BIT) and Division of Solid Tumor Translational Oncology (DKTK), West German Cancer Center Essen, University Hospital Essen (AöR), University of Duisburg-Essen, 45147 Essen, Germany
- Barbara Grünwald: Department of Urology, West German Cancer Center, University Hospital Essen (AöR), 45147 Essen, Germany; Princess Margaret Cancer Centre, Toronto M5G 2M9, Ontario, Canada
- Jens Kleesiek: additionally German Cancer Consortium (DKTK, partner site Essen), 69120 Heidelberg, Germany; Department of Physics, TU Dortmund University, 44227 Dortmund, Germany
4
Yang X, Ding B, Qin J, Guo L, Zhao J, He Y. HVS-Unsup: Unsupervised cervical cell instance segmentation method based on human visual simulation. Comput Biol Med 2024; 171:108147. PMID: 38387385; DOI: 10.1016/j.compbiomed.2024.108147
Abstract
Instance segmentation plays an important role in the automatic diagnosis of cervical cancer. Although deep learning-based instance segmentation methods can achieve outstanding performance, they need large amounts of labeled data, which consumes substantial manpower and material resources. To solve this problem, we propose an unsupervised cervical cell instance segmentation method based on human visual simulation, named HVS-Unsup. Our method simulates the process of human cell recognition and incorporates prior knowledge of cervical cells. First, we utilize prior knowledge to generate three types of pseudo labels for cervical cells, transforming unsupervised instance segmentation into a supervised task. Second, we design a Nucleus Enhanced Module (NEM) and a Mask-Assisted Segmentation module (MAS) to address cell overlapping, adhesion, and even visually indistinguishable cases. NEM accurately locates nuclei via attention feature maps generated from point-level pseudo labels, and MAS reduces interference from impurities by updating the weights of the shallow network through the dice loss. Next, we propose a category-wise droploss (CW-droploss) to reduce cell omissions in lower-contrast images. Finally, we employ an iterative self-training strategy to rectify mislabeled instances. Experimental results on our MS-cellSeg dataset and the public Cx22 and ISBI 2015 datasets demonstrate that HVS-Unsup outperforms existing mainstream unsupervised cervical cell segmentation methods.
Affiliation(s)
- Xiaona Yang, Bo Ding, Jian Qin, Luyao Guo: School of Computer Science and Technology, Harbin University of Science and Technology, Harbin 150080, China
- Jing Zhao: School of Mechanical and Electrical Engineering, Northeast Forestry University, Harbin 150040, China
- Yongjun He: School of Computer Science and Technology, Harbin Institute of Technology, Harbin 150001, China
5
Song Y, Zhang A, Zhou J, Luo Y, Lin Z, Zhou T. Overlapping cytoplasms segmentation via constrained multi-shape evolution for cervical cancer screening. Artif Intell Med 2024; 148:102756. PMID: 38325933; DOI: 10.1016/j.artmed.2023.102756
Abstract
Segmenting overlapping cytoplasms in cervical smear images is a clinically essential task for quantitatively measuring cell-level features to screen cervical cancer. This task, however, remains rather challenging, mainly due to the deficiency of intensity (or color) information in the overlapping region. Although shape prior-based models that compensate for the intensity deficiency by introducing prior shape information about the cytoplasm are firmly established, they often yield visually implausible results, as they model shape priors with only limited shape hypotheses about the cytoplasm, exploit cytoplasm-level shape priors alone, and impose no shape constraint on the resulting cytoplasm shape. In this paper, we present an effective shape prior-based approach, called constrained multi-shape evolution, that segments all overlapping cytoplasms in a clump simultaneously by jointly evolving each cytoplasm's shape guided by the modeled shape priors. We model local (cytoplasm-level) shape priors by an infinitely large shape hypothesis set which contains all possible shapes of the cytoplasm. In the shape evolution, we compensate for the intensity deficiency by introducing not only the modeled local shape priors but also global (clump-level) shape priors modeled by considering the mutual shape constraints of cytoplasms in the clump. We also constrain the resulting shape in each evolution step to lie in the built shape hypothesis set, further reducing implausible segmentation results. We evaluated the proposed method on two typical cervical smear datasets, and the extensive experimental results confirm its effectiveness.
Affiliation(s)
- Youyi Song, Ao Zhang: School of Biomedical Engineering, Shenzhen University, Shenzhen 518060, China
- Jinglin Zhou: School of Philosophy, Fudan University, Shanghai 200433, China
- Yu Luo: School of Computer Science and Technology, Guangdong University of Technology, Guangzhou 510006, China
- Zhizhe Lin: School of Information and Communication Engineering, Hainan University, Haikou 570228, China
- Teng Zhou: School of Cyberspace Security, Hainan University, Haikou 570228, China
6
Wang J, Zhang Z, Wu M, Ye Y, Wang S, Cao Y, Yang H. Nuclei instance segmentation using a transformer-based graph convolutional network and contextual information augmentation. Comput Biol Med 2023; 167:107622. PMID: 39491378; DOI: 10.1016/j.compbiomed.2023.107622
Abstract
Nucleus instance segmentation is an important task in medical image analysis involving cell-level pathological analysis, and it is of great significance for many biomedical applications such as disease diagnosis and drug screening. However, the high density of and tight contact between cells, common features of most cell images, pose a great technical challenge for nucleus instance segmentation. Recent research has focused on CNN-based methods, which typically rely on bounding-box regression and non-maximum suppression to locate nuclei; this frequently yields poor bounding boxes for nuclei that are adhered or clustered together. In response to these challenges, we propose a novel end-to-end nucleus instance segmentation model. Specifically, we first employ the Swin Transformer as the backbone network, which captures global multi-scale information by combining the global modelling capability of transformers with the local modelling capability of convolutional neural networks (CNNs). Additionally, we integrate a graph convolutional feature fusion module (GCFM) that combines deep and shallow features to learn an affinity matrix; the module also adopts graph convolution to guide the network in learning object-level local information. Finally, we design a hybrid dilated convolution module (HDC) and insert it into the backbone network to enhance long-range contextual information. These components assist the network in extracting rich features. Experimental results demonstrate that our algorithm outperforms several state-of-the-art models on the DSB2018 and LIVECell datasets.
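A hybrid dilated convolution stacks kernels with increasing dilation rates so the receptive field grows quickly without gridding artifacts. A naive NumPy sketch of a single dilated layer; the rates 1, 2, 5 used in the receptive-field calculation are a common HDC choice and an assumption here, since the abstract does not state the paper's exact rates:

```python
import numpy as np

def dilated_conv2d(x, kernel, dilation):
    """'Same'-padded 2-D convolution with a dilated k x k kernel (naive sketch)."""
    k = kernel.shape[0]
    pad = dilation * (k // 2)
    xp = np.pad(x, pad)
    out = np.zeros(x.shape, dtype=float)
    # Accumulate each kernel tap as a shifted slice; taps are spaced
    # `dilation` pixels apart, which is what enlarges the receptive field.
    for i in range(k):
        for j in range(k):
            di, dj = i * dilation, j * dilation
            out += kernel[i, j] * xp[di:di + x.shape[0], dj:dj + x.shape[1]]
    return out

# Stacking 3x3 kernels with dilation rates 1, 2, 5 yields a receptive field of
# 1 + 2*(1 + 2 + 5) = 17 pixels per side, with no parameter increase.
rf = 1 + 2 * sum([1, 2, 5])
```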
Affiliation(s)
- Juan Wang, Minghu Wu: School of Electrical and Electronic Engineering, Hubei University of Technology, Hongshan District, Wuhan, Hubei Province, China; Hubei Key Laboratory for High-efficiency Utilization of Solar Energy and Operation Control of Energy Storage System, Hubei University of Technology, China
- Zetao Zhang, Yonggang Ye, Sheng Wang, Ye Cao, Hao Yang: School of Electrical and Electronic Engineering, Hubei University of Technology, Hongshan District, Wuhan, Hubei Province, China
7
Rasheed A, Shirazi SH, Umar AI, Shahzad M, Yousaf W, Khan Z. Cervical cell's nucleus segmentation through an improved UNet architecture. PLoS One 2023; 18:e0283568. PMID: 37788295; PMCID: PMC10547184; DOI: 10.1371/journal.pone.0283568
Abstract
Precise segmentation of the nucleus is vital for computer-aided diagnosis (CAD) in cervical cytology. Automated delineation of the cervical nucleus faces notorious challenges due to clumped cells, color variation, noise, and fuzzy boundaries. Owing to its standout performance in medical image analysis, deep learning has gained attention over other techniques. We propose a deep learning model, C-UNet (Cervical-UNet), to segment cervical nuclei from overlapped, fuzzy, and blurred cervical cell smear images. Cross-scale feature integration based on a bi-directional feature pyramid network (BiFPN) and a wide context unit are used in the encoder of the classic UNet architecture to learn spatial and local features. The decoder of the improved network has two inter-connected decoders that mutually optimize and integrate these features to produce segmentation masks. Each component of the proposed C-UNet is extensively evaluated to judge its effectiveness on a complex cervical cell dataset, and different data augmentation techniques were employed to enhance training. Experimental results show that the proposed model outperformed existing models, i.e., CGAN (conditional generative adversarial network), DeepLabv3, Mask R-CNN (region-based convolutional neural network), and FCN (fully convolutional network), on the dataset used in this study and on the ISBI 2014 and ISBI 2015 datasets. C-UNet achieved an object-level accuracy of 93%, pixel-level accuracy of 92.56%, object-level recall of 95.32%, pixel-level recall of 92.27%, Dice coefficient of 93.12%, and F1-score of 94.96% on the complex cervical image dataset.
Affiliation(s)
- Assad Rasheed, Syed Hamad Shirazi, Arif Iqbal Umar, Muhammad Shahzad, Waqas Yousaf, Zakir Khan: Department of Computer Science & Information Technology, Hazara University Mansehra, Mansehra, Pakistan
8
Chen S, Ding C, Liu M, Cheng J, Tao D. CPP-Net: Context-Aware Polygon Proposal Network for Nucleus Segmentation. IEEE Trans Image Process 2023; 32:980-994. PMID: 37022023; DOI: 10.1109/TIP.2023.3237013
Abstract
Nucleus segmentation is a challenging task due to the crowded distribution and blurry boundaries of nuclei. Recent approaches represent nuclei by means of polygons to differentiate between touching and overlapping nuclei and have accordingly achieved promising performance. Each polygon is represented by a set of centroid-to-boundary distances, which are in turn predicted from features of the centroid pixel of a single nucleus. However, using the centroid pixel alone does not provide sufficient contextual information for robust prediction and thus degrades segmentation accuracy. To handle this problem, we propose a Context-aware Polygon Proposal Network (CPP-Net) for nucleus segmentation. First, we sample a point set rather than one single pixel within each cell for distance prediction; this strategy substantially enhances contextual information and thereby improves the robustness of the prediction. Second, we propose a Confidence-based Weighting Module, which adaptively fuses the predictions from the sampled point set. Third, we introduce a novel Shape-Aware Perceptual (SAP) loss that constrains the shape of the predicted polygons. Here, the SAP loss is based on an additional network that is pre-trained by mapping the centroid probability map and the pixel-to-boundary distance maps to a different nucleus representation. Extensive experiments justify the effectiveness of each component of the proposed CPP-Net. Finally, CPP-Net achieves state-of-the-art performance on three publicly available databases, namely DSB2018, BBBC06, and PanNuke. The code is available at https://github.com/csccsccsccsc/cpp-net.
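The centroid-to-boundary distance representation that CPP-Net builds on can be sketched by ray-marching outward from an instance centroid over a binary mask, StarDist-style. The function name and the half-pixel step size are illustrative choices, not details from the paper:

```python
import numpy as np

def ray_distances(mask, center, n_rays=32):
    """Centroid-to-boundary distances along n_rays equally spaced directions.

    mask   : boolean array, True inside the nucleus.
    center : (row, col) of the instance centroid.
    Returns one distance per ray; together they define a star-convex polygon.
    """
    h, w = mask.shape
    cy, cx = center
    dists = []
    for a in np.linspace(0, 2 * np.pi, n_rays, endpoint=False):
        dy, dx = np.sin(a), np.cos(a)
        r = 0.0
        while True:
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            if not (0 <= y < h and 0 <= x < w) or not mask[y, x]:
                break               # left the instance: boundary reached
            r += 0.5                # march outward in half-pixel steps
        dists.append(r)
    return np.array(dists)
```

CPP-Net's contribution is to predict such distances not from the centroid pixel's features alone but from a sampled point set fused by confidence weighting; this sketch only illustrates the underlying polygon encoding.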
9
Wu Y, Li Q. The Algorithm of Watershed Color Image Segmentation Based on Morphological Gradient. Sensors (Basel) 2022; 22:8202. PMID: 36365898; PMCID: PMC9657866; DOI: 10.3390/s22218202
Abstract
The traditional watershed algorithm suffers from over-segmentation and interference from reflected light. We propose an improved watershed color image segmentation algorithm based on a morphological gradient. The method obtains the component gradient of a color image in a new color space that is not disturbed by reflected light, and reconstructs the gradient image by morphological opening and closing to obtain the final gradient image. The maximum inter-class variance (Otsu) algorithm automatically determines a threshold for the final gradient image. The original gradient image is forcibly calibrated with the obtained binary label image, and the modified gradient image is segmented by watershed. Experimental results show that the proposed method obtains accurate and continuous target contours and achieves a minimal number of segmented regions consistent with human vision. Compared with similar algorithms, it suppresses the meaningless regions generated by reflected light, maintains object edge information well, and improves robustness and applicability. Compared with the region-growing method and the automatic threshold method, the proposed algorithm improves operating efficiency by 10%, and its precision and recall both exceed 0.98. The experimental comparison intuitively illustrates the advantages of the proposed algorithm in object segmentation.
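The core of the pipeline described above, morphological gradient, automatic Otsu thresholding, marker calibration, watershed, can be sketched with SciPy. The color-space conversion and the opening-closing reconstruction are omitted for brevity, so this is a simplified single-channel version under stated assumptions, not the paper's full algorithm:

```python
import numpy as np
from scipy import ndimage as ndi

def marker_watershed(gray):
    """Marker-controlled watershed on a morphological gradient (sketch).

    gray : 2-D uint8 image. Returns an integer label image.
    """
    # Morphological gradient: dilation minus erosion highlights edges.
    grad = ndi.grey_dilation(gray, size=3) - ndi.grey_erosion(gray, size=3)
    # Otsu threshold (maximum inter-class variance) on the gradient histogram.
    hist, edges = np.histogram(grad, bins=256)
    p = hist / hist.sum()
    omega = np.cumsum(p)                       # class-0 probability
    mu = np.cumsum(p * np.arange(256))         # class-0 mean (bin units)
    sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1 - omega) + 1e-12)
    t = edges[np.argmax(sigma_b)]
    # Low-gradient regions become markers; the watershed floods the gradient
    # image from these markers, so region boundaries land on gradient ridges.
    markers, _ = ndi.label(grad <= t)
    return ndi.watershed_ift(grad.astype(np.uint8), markers)
```

Marker-controlled flooding is what tames the over-segmentation of the plain watershed: only one seed survives per flat region instead of one per local minimum.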
10
Das PK, Meher S, Panda R, Abraham A. An Efficient Blood-Cell Segmentation for the Detection of Hematological Disorders. IEEE Trans Cybern 2022; 52:10615-10626. PMID: 33735090; DOI: 10.1109/TCYB.2021.3062152
Abstract
The automatic segmentation of blood cells for detecting hematological disorders is a crucial job, with a vital role in diagnosis, treatment planning, and outcome evaluation. Existing methods suffer from issues such as noise, improper seed-point detection, and over-segmentation, which are solved here using a Laplacian-of-Gaussian (LoG)-based modified high-boosting operation, bounded opening followed by fast radial symmetry (BO-FRS)-based seed-point detection, and hybrid ellipse fitting (EF), respectively. This article proposes a novel hybrid EF-based blood-cell segmentation approach, which may be used for detecting various hematological disorders. Our prime contributions are: 1) more accurate seed-point detection based on BO-FRS; 2) a novel least-squares (LS)-based geometric EF approach; and 3) improved segmentation performance from a hybridized version of geometric and algebraic EF techniques that retains the benefits of both. The approach is computationally efficient since it hybridizes non-iterative geometric and algebraic methods. Moreover, we propose to estimate the minor and major axes based on residue and residue-offset factors; the proposed residue-offset parameter yields more accurate segmentation with proper EF. Compared with state-of-the-art methods, our method outperforms existing EF techniques in terms of Dice similarity, Jaccard score, precision, and F1 score, and may be useful for other medical and cybernetics applications.
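The algebraic half of such a hybrid ellipse-fitting scheme can be sketched as a linear least-squares conic fit. The geometric refinement and the residue-offset axis estimation the paper proposes are omitted, and the function name and normalization (constant term fixed to 1) are illustrative choices:

```python
import numpy as np

def fit_ellipse_ls(x, y):
    """Algebraic LS fit of the conic a*x^2 + b*x*y + c*y^2 + d*x + e*y = 1.

    x, y : 1-D arrays of boundary-point coordinates.
    Returns the conic coefficients (a, b, c, d, e). For an ellipse the
    discriminant b^2 - 4ac is negative; checking that is left to the caller.
    """
    D = np.column_stack([x * x, x * y, y * y, x, y])  # design matrix
    coef, *_ = np.linalg.lstsq(D, np.ones_like(x), rcond=None)
    return coef

# Sanity check on a circle of radius 2: x^2 + y^2 = 4, i.e. a = c = 0.25.
t = np.linspace(0, 2 * np.pi, 50, endpoint=False)
coef = fit_ellipse_ls(2.0 * np.cos(t), 2.0 * np.sin(t))  # ~[0.25, 0, 0.25, 0, 0]
```

Algebraic fits like this are non-iterative and fast but minimize algebraic rather than geometric error, which is exactly the gap a geometric refinement stage is meant to close.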
11
Wu H, Pang KKY, Pang GKH, Au-Yeung RKH. A soft-computing based approach to overlapped cells analysis in histopathology images with genetic algorithm. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.109279]
12
Gou S, Xu Y, Yang H, Tong N, Zhang X, Wei L, Zhao L, Zheng M, Liu W. Automated cervical tumor segmentation on MR images using multi-view feature attention network. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103832]
13
Liu J, Fan H, Wang Q, Li W, Tang Y, Wang D, Zhou M, Chen L. Local Label Point Correction for Edge Detection of Overlapping Cervical Cells. Front Neuroinform 2022; 16:895290. [PMID: 35645753] [PMCID: PMC9133536] [DOI: 10.3389/fninf.2022.895290]
Abstract
Accurate labeling is essential for supervised deep learning methods. However, it is almost impossible to accurately and manually annotate thousands of images, which results in many labeling errors in most datasets. We propose a local label point correction (LLPC) method to improve annotation quality for edge detection and image segmentation tasks. Our algorithm contains three steps: gradient-guided point correction, point interpolation, and local point smoothing. We correct the labels of object contours by moving the annotated points to the pixel gradient peaks. This improves edge localization accuracy, but it also causes unsmooth contours due to the interference of image noise. Therefore, we design a point smoothing method based on local linear fitting to smooth the corrected edge. To verify the effectiveness of our LLPC, we construct the largest overlapping cervical cell edge detection dataset (CCEDD), with higher-precision labels corrected by our label correction method. Our LLPC needs only three parameters to be set, yet yields a 30–40% average precision improvement on multiple networks. The qualitative and quantitative experimental results show that our LLPC can improve the quality of manual labels and the accuracy of overlapping cell edge detection. We hope that our study will give a strong boost to the development of label correction for edge detection and image segmentation. We will release the dataset and code at: https://github.com/nachifur/LLPC.
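The correction and smoothing steps can be sketched generically. This is a simplified stand-in for the paper's method, assuming a grayscale image and (row, col) annotation points; the neighbourhood-mean smoother below replaces the paper's local linear fitting:

```python
import numpy as np

def gradient_magnitude(img):
    """Per-pixel gradient magnitude via central differences."""
    gy, gx = np.gradient(img.astype(float))
    return np.hypot(gx, gy)

def correct_points(img, points, radius=3):
    """Move each annotated (row, col) point to the gradient-magnitude
    peak inside a (2*radius+1)^2 window around it."""
    g = gradient_magnitude(img)
    corrected = []
    for r, c in points:
        r0, r1 = max(r - radius, 0), min(r + radius + 1, g.shape[0])
        c0, c1 = max(c - radius, 0), min(c + radius + 1, g.shape[1])
        win = g[r0:r1, c0:c1]
        dr, dc = np.unravel_index(np.argmax(win), win.shape)
        corrected.append((r0 + dr, c0 + dc))
    return corrected

def smooth_contour(points, k=3):
    """Replace each point by the mean of its k-neighbourhood along the
    (closed) contour, a crude stand-in for local linear fitting."""
    pts = np.asarray(points, float)
    n = len(pts)
    idx = (np.arange(-(k // 2), k // 2 + 1)[:, None] + np.arange(n)) % n
    return pts[idx].mean(axis=0)

# A vertical step edge at column 10: sloppy labels one or two pixels off
# should snap onto the edge (columns 9-10, where the gradient peaks).
img = np.zeros((20, 20))
img[:, 10:] = 1.0
labels = [(5, 8), (10, 12), (15, 9)]
snapped = correct_points(img, labels, radius=3)
```

On the synthetic step edge, every corrected point lands on the high-gradient columns regardless of how far the original label drifted within the search window.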
Affiliation(s)
- Jiawei Liu
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- University of Chinese Academy of Sciences, Beijing, China
- Huijie Fan
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- *Correspondence: Huijie Fan
- Qiang Wang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Key Laboratory of Manufacturing Industrial Integrated, Shenyang University, Shenyang, China
- Wentao Li
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- Yandong Tang
- State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang, China
- Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang, China
- Danbo Wang
- Department of Gynecology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China
- Mingyi Zhou
- Department of Gynecology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China
- Li Chen
- Department of Pathology, Cancer Hospital of China Medical University, Liaoning Cancer Hospital & Institute, Shenyang, China
14
Lu F, Fu C, Zhang G, Shi J. Adaptive multi-scale feature fusion based U-net for fracture segmentation in coal rock images. Journal of Intelligent & Fuzzy Systems 2022. [DOI: 10.3233/jifs-211968]
Abstract
Accurate segmentation of fractures in coal rock CT images is important for the development of coalbed methane. However, due to the large variation of fracture scale and the similarity of gray values between weak fractures and the surrounding matrix, it remains a challenging task. Moreover, there is no published coal rock dataset, which makes the task even harder. In this paper, a novel adaptive multi-scale feature fusion method based on U-net (AMSFF-U-net) is proposed for fracture segmentation in coal rock CT images. Specifically, the encoder and decoder paths consist of residual blocks (ReBlocks). The attention skip concatenation (ASC) module is proposed to capture more representative and distinguishing features by combining the high-level and low-level features of adjacent layers. The adaptive multi-scale feature fusion (AMSFF) module is presented to adaptively fuse feature maps of different scales from the encoder path; it can effectively capture rich multi-scale features. In response to the lack of coal rock fracture training data, we applied a set of comprehensive data augmentation operations to increase the diversity of training samples. Extensive experiments are conducted over seven methods (i.e., FCEM, U-net, Res-Unet, Unet++, MSN-Net, WRAU-Net, and ours). The experimental results demonstrate that the proposed AMSFF-U-net achieves better segmentation performance, particularly for weak and tiny-scale fractures.
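The adaptive multi-scale fusion idea, softly weighting feature maps of different scales after bringing them to a common resolution, can be sketched with NumPy. The logits here are hypothetical stand-ins for the network's learned fusion weights, and nearest-neighbour upsampling replaces whatever interpolation the actual model uses:

```python
import numpy as np

def upsample_nearest(x, factor):
    """Nearest-neighbour upsampling of a (H, W) feature map."""
    return np.repeat(np.repeat(x, factor, axis=0), factor, axis=1)

def adaptive_fusion(features, logits):
    """Softmax-weighted sum of multi-scale feature maps, all brought to the
    finest resolution first. In the paper the weights are learned end-to-end;
    the logits here are free parameters standing in for them."""
    target = features[0].shape[0]          # finest map comes first
    ups = [upsample_nearest(f, target // f.shape[0]) for f in features]
    w = np.exp(logits - np.max(logits))    # numerically stable softmax
    w = w / w.sum()
    return sum(wi * fi for wi, fi in zip(w, ups))

# Three scales of a 16x16 map (16, 8, 4), fused at 16x16 resolution.
feats = [np.ones((16, 16)), 2 * np.ones((8, 8)), 4 * np.ones((4, 4))]
fused = adaptive_fusion(feats, logits=np.array([0.0, 0.0, 0.0]))
```

With equal logits each scale contributes one third, so the fused constant maps average to (1 + 2 + 4)/3.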
Affiliation(s)
- Fengli Lu
- School of Mechanical Electronic and Information Engineering, China University of Mining and Technology, Beijing, China
- Chengcai Fu
- School of Mechanical Electronic and Information Engineering, China University of Mining and Technology, Beijing, China
- Guoying Zhang
- School of Mechanical Electronic and Information Engineering, China University of Mining and Technology, Beijing, China
- Jie Shi
- School of Mechanical Electronic and Information Engineering, China University of Mining and Technology, Beijing, China
15
Wu C, Zhong J, Lin L, Chen Y, Xue Y, Shi P. Segmentation of HE-stained meningioma pathological images based on pseudo-labels. PLoS One 2022; 17:e0263006. [PMID: 35120175] [PMCID: PMC8815980] [DOI: 10.1371/journal.pone.0263006]
Abstract
Biomedical research is inseparable from the analysis of various histopathological images, and hematoxylin-eosin (HE)-stained images are one of the most basic and widely used types. At present, however, machine learning-based approaches to analyzing this kind of image rely heavily on manual labeling of images for training. Fully automated processing of HE-stained images remains a challenging task due to the high degree of uncertainty in the color intensity, size, and shape of the stained cells. For this problem, we propose a fully automatic pixel-wise semantic segmentation method based on pseudo-labels, which significantly reduces the manual cell sketching and labeling work required before machine learning while guaranteeing segmentation accuracy. First, we collect reliable training samples in an unsupervised manner based on K-means clustering results; second, we use a full mixup strategy to augment the training images and obtain a U-Net model for segmenting nuclei from the background. Experimental results on a meningioma pathology image dataset show that the proposed method performs well, and the pathological features obtained statistically from the segmentation results can be used to assist in the clinical grading of meningiomas. Compared with other machine learning strategies, it can more effectively provide a reliable reference for clinical research.
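The first stage, collecting reliable training samples from K-means clustering in an unsupervised manner, might look like the following sketch. The margin rule used to mark pixels as "reliable" is an assumption for illustration, not the paper's exact criterion:

```python
import numpy as np

def kmeans_1d(values, k=2, iters=20):
    """Plain K-means on pixel intensities, with deterministic quantile
    initialisation. Returns (labels, cluster centres)."""
    centres = np.quantile(values, np.linspace(0.0, 1.0, k))
    labels = np.zeros(len(values), dtype=int)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centres[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = values[labels == j].mean()
    return labels, centres

def pseudo_labels(image, margin=0.15):
    """Cluster intensities into nucleus/background and keep only pixels far
    from the decision midpoint as 'reliable' samples; the rest stay
    unlabeled (-1). The darker cluster is taken as nuclei (HE staining)."""
    flat = image.ravel().astype(float)
    labels, centres = kmeans_1d(flat, k=2)
    dark = int(np.argmin(centres))
    mid = centres.mean()
    reliable = np.abs(flat - mid) > margin * abs(centres[1] - centres[0])
    out = np.where(reliable, (labels == dark).astype(int), -1)
    return out.reshape(image.shape)

img = np.zeros((8, 8)) + 0.9   # bright background
img[2:5, 2:5] = 0.1            # dark "nucleus" patch
pl = pseudo_labels(img)
```

On this toy image the dark patch becomes foreground (1) and the background 0; ambiguous pixels near the midpoint would be left as −1 and excluded from training.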
Affiliation(s)
- Chongshu Wu
- College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
- Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fuzhou, Fujian, China
- Jing Zhong
- Radiology and Pathology Department, Fujian Provincial Cancer Hospital, Fuzhou, Fujian, China
- Lin Lin
- Radiology Department, Fujian Medical University Union Hospital, Fuzhou, Fujian, China
- Yanping Chen
- Radiology and Pathology Department, Fujian Provincial Cancer Hospital, Fuzhou, Fujian, China
- Yunjing Xue
- Radiology Department, Fujian Medical University Union Hospital, Fuzhou, Fujian, China
- Peng Shi
- College of Computer and Cyber Security, Fujian Normal University, Fuzhou, Fujian, China
- Digit Fujian Internet-of-Things Laboratory of Environmental Monitoring, Fuzhou, Fujian, China
16
Segmentation of Overlapping Cervical Cells with Mask Region Convolutional Neural Network. Computational and Mathematical Methods in Medicine 2021; 2021:3890988. [PMID: 34646333] [PMCID: PMC8505098] [DOI: 10.1155/2021/3890988]
Abstract
The task of segmenting cytoplasm in cytology images is one of the most challenging in cervix cytological analysis due to the presence of fuzzy and highly overlapping cells. Deep learning-based diagnostic technology has proven effective in segmenting complex medical images. We present a two-stage framework based on Mask RCNN to automatically segment overlapping cells. In stage one, candidate cytoplasm bounding boxes are proposed. In stage two, pixel-to-pixel alignment is used to refine the boundary, and category classification is performed. The performance of the proposed method is evaluated on publicly available datasets from ISBI 2014 and 2015. The experimental results demonstrate that our method outperforms other state-of-the-art approaches with a DSC of 0.92 and an FPRp of 0.0008 at the DSC threshold of 0.8. These results indicate that our Mask RCNN-based segmentation method could be effective in cytological analysis.
17
Tertiary lymphoid structures (TLS) identification and density assessment on H&E-stained digital slides of lung cancer. PLoS One 2021; 16:e0256907. [PMID: 34555057] [PMCID: PMC8460026] [DOI: 10.1371/journal.pone.0256907]
Abstract
Tertiary lymphoid structures (TLS) are ectopic aggregates of lymphoid cells in inflamed, infected, or tumoral tissues that are easily recognized on an H&E histology slide as discrete entities, distinct from lymphocytes. TLS are associated with improved cancer prognosis, but there is no standardised method available to quantify their presence. Previous studies have used immunohistochemistry to determine the presence of specific cells as a marker of the TLS. This has now been proven to be an underestimate of the true number of TLS. Thus, we propose a methodology for the automated identification and quantification of TLS, based on H&E slides. We subsequently determined the mathematical criteria defining a TLS. TLS regions were identified through a deep convolutional neural network, and segmentation of lymphocytes was performed through an ellipsoidal model. This methodology had 92.87% specificity at 95% sensitivity, 88.79% specificity at 98% sensitivity, and 84.32% specificity at 99% sensitivity based on 144 TLS-annotated H&E slides, implying that the automated approach was able to reproduce the histopathologists' assessment with great accuracy. We showed that the minimum number of lymphocytes within a TLS is 45 and the minimum TLS area is 6,245 μm². Furthermore, we have shown that the density of the lymphocytes within TLS is more than 3 times that outside of the TLS. The mean density and standard deviation of lymphocytes within a TLS area are 0.0128/μm² and 0.0026/μm² respectively, compared to 0.004/μm² and 0.001/μm² in non-TLS regions. The proposed methodology shows great potential for automated identification and quantification of TLS density on digital H&E slides.
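The reported density figures can be sanity-checked with simple arithmetic; for example, the in/out density ratio quoted as "more than 3 times" works out to 3.2, and the minimum criteria (45 lymphocytes over 6,245 μm²) imply the lowest density a region can have and still qualify as a TLS:

```python
# Values taken from the abstract above (lymphocytes per square micrometre).
mean_density_tls = 0.0128       # mean density inside TLS
mean_density_non_tls = 0.004    # mean density outside TLS
ratio = mean_density_tls / mean_density_non_tls  # "more than 3 times"

# Minimum-qualifying TLS: 45 lymphocytes over 6245 square micrometres.
min_density = 45 / 6245
```

Note the minimum-qualifying density (~0.0072/μm²) sits between the two reported means, consistent with the TLS/non-TLS separation the criteria are meant to enforce.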
18
Yi J, Wu P, Tang H, Liu B, Huang Q, Qu H, Han L, Fan W, Hoeppner DJ, Metaxas DN. Object-Guided Instance Segmentation With Auxiliary Feature Refinement for Biological Images. IEEE Transactions on Medical Imaging 2021; 40:2403-2414. [PMID: 33945472] [DOI: 10.1109/tmi.2021.3077285]
Abstract
Instance segmentation is of great importance for many biological applications, such as study of neural cell interactions, plant phenotyping, and quantitatively measuring how cells react to drug treatment. In this paper, we propose a novel box-based instance segmentation method. Box-based instance segmentation methods capture objects via bounding boxes and then perform individual segmentation within each bounding box region. However, existing methods can hardly differentiate the target from its neighboring objects within the same bounding box region due to their similar textures and low-contrast boundaries. To deal with this problem, in this paper, we propose an object-guided instance segmentation method. Our method first detects the center points of the objects, from which the bounding box parameters are then predicted. To perform segmentation, an object-guided coarse-to-fine segmentation branch is built along with the detection branch. The segmentation branch reuses the object features as guidance to separate target object from the neighboring ones within the same bounding box region. To further improve the segmentation quality, we design an auxiliary feature refinement module that densely samples and refines point-wise features in the boundary regions. Experimental results on three biological image datasets demonstrate the advantages of our method. The code will be available at https://github.com/yijingru/ObjGuided-Instance-Segmentation.
19
20
Victória Matias A, Atkinson Amorim JG, Buschetto Macarini LA, Cerentini A, Casimiro Onofre AS, De Miranda Onofre FB, Daltoé FP, Stemmer MR, von Wangenheim A. What is the state of the art of computer vision-assisted cytology? A Systematic Literature Review. Comput Med Imaging Graph 2021; 91:101934. [PMID: 34174544] [DOI: 10.1016/j.compmedimag.2021.101934]
Abstract
Cytology is a low-cost and non-invasive diagnostic procedure employed to support the diagnosis of a broad range of pathologies. Cells are harvested from tissues by aspiration or scraping, and the procedure is still predominantly performed manually by medical or laboratory professionals extensively trained for this purpose. It is a time-consuming and repetitive process where many diagnostic criteria are subjective and vulnerable to human interpretation. Computer vision technologies, by automatically generating quantitative and objective descriptions of examinations' contents, can help minimize the chances of misdiagnosis and shorten the time required for analysis. To identify the state of the art of computer vision techniques currently applied to cytology, we conducted a Systematic Literature Review, searching for approaches for the segmentation, detection, quantification, and classification of cells and organelles using computer vision on cytology slides. We analyzed papers published in the last 4 years. The initial search was executed in September 2020 and resulted in 431 articles. After applying the inclusion/exclusion criteria, 157 papers remained, which we analyzed to build a picture of the tendencies and problems present in this research area, highlighting the computer vision methods, staining techniques, evaluation metrics, and the availability of the used datasets and computer code. As a result, we identified that the most used methods in the analyzed works are deep learning-based (70 papers), while fewer works employ classic computer vision only (101 papers). The most recurrent metric for classification and object detection was accuracy (33 papers and 5 papers, respectively), while for segmentation it was the Dice Similarity Coefficient (38 papers). Regarding staining techniques, Papanicolaou was the most employed (130 papers), followed by H&E (20 papers) and Feulgen (5 papers). Twelve of the datasets used in the papers are publicly available, with the DTU/Herlev dataset being the most used. We conclude that there is still a lack of high-quality datasets for many types of stains and that most of the works are not mature enough to be applied in daily clinical diagnostic routine. We also identified a growing tendency towards adopting deep learning-based approaches as the methods of choice.
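The Dice Similarity Coefficient, the most recurrent segmentation metric identified by the review, is straightforward to compute for binary masks:

```python
import numpy as np

def dice(pred, target):
    """Dice Similarity Coefficient between two binary masks:
    2 * |A intersect B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    denom = pred.sum() + target.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, target).sum() / denom

a = np.zeros((4, 4), int); a[:2, :] = 1   # 8 foreground pixels
b = np.zeros((4, 4), int); b[1:3, :] = 1  # 8 pixels, 4 overlapping
score = dice(a, b)                         # 2*4 / (8+8) = 0.5
```

The both-empty convention (returning 1.0) is one common choice; some implementations return 0 or raise instead, so it is worth fixing explicitly when comparing reported scores.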
Affiliation(s)
- André Victória Matias
- Department of Informatics and Statistics, Federal University of Santa Catarina, Florianópolis, Brazil
- Allan Cerentini
- Department of Informatics and Statistics, Federal University of Santa Catarina, Florianópolis, Brazil
- Felipe Perozzo Daltoé
- Department of Pathology, Federal University of Santa Catarina, Florianópolis, Brazil
- Marcelo Ricardo Stemmer
- Automation and Systems Department, Federal University of Santa Catarina, Florianópolis, Brazil
- Aldo von Wangenheim
- Brazilian Institute for Digital Convergence, Federal University of Santa Catarina, Florianópolis, Brazil
21
Hoque IT, Ibtehaz N, Chakravarty S, Rahman MS, Rahman MS. A contour property based approach to segment nuclei in cervical cytology images. BMC Med Imaging 2021; 21:15. [PMID: 33509110] [PMCID: PMC7841885] [DOI: 10.1186/s12880-020-00533-9]
Abstract
Background: Segmentation of nuclei in cervical cytology Pap smear images is a crucial stage in automated cervical cancer screening. The task itself is challenging due to the presence of cervical cells with spurious edges, overlapping cells, neutrophils, and artifacts.
Methods: After the initial preprocessing steps of adaptive thresholding, in our approach, the image passes through a convolution filter to filter out some noise. Contours from the resultant image are then filtered by their distinctive contour properties, followed by a nucleus size recovery procedure based on contour average intensity value.
Results: We evaluate our method on a public benchmark dataset collected from ISBI and also on a private real dataset. The results show that our algorithm outperforms other state-of-the-art methods in nucleus segmentation on the ISBI dataset, with a precision of 0.978 and recall of 0.933. A promising precision of 0.770 and a strong recall of 0.886 on the private real dataset indicate that our algorithm can effectively detect and segment nuclei in real cervical cytology images. By tuning various parameters, the precision can be increased to as high as 0.949 with an acceptable decrease of recall to 0.759. Our method also achieved an Aggregated Jaccard Index of 0.681, outperforming other state-of-the-art methods on the real dataset.
Conclusion: We have proposed a contour property-based approach for segmentation of nuclei. Our algorithm has several tunable parameters and is flexible enough to adapt to real practical scenarios and requirements.
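The contour-property filtering stage can be sketched abstractly: given per-contour measurements, keep only contours whose area and circularity look nucleus-like. The property names, thresholds, and example values below are hypothetical stand-ins; the paper's actual properties and parameters differ:

```python
import math

def circularity(area, perimeter):
    """4*pi*A / P^2: equals 1.0 for a perfect circle, lower for
    elongated or irregular shapes."""
    return 4.0 * math.pi * area / (perimeter ** 2)

def filter_nuclei(contours, min_area=50, max_area=5000, min_circ=0.6):
    """Keep contours whose area and circularity fall in a nucleus-like
    range, discarding debris (too small or irregular) and clumps
    (too large). Each contour is a dict of precomputed measurements."""
    kept = []
    for c in contours:
        circ = circularity(c["area"], c["perimeter"])
        if min_area <= c["area"] <= max_area and circ >= min_circ:
            kept.append(c["id"])
    return kept

# Hypothetical contour measurements: a round nucleus, a speck of debris,
# and a long thin artifact.
contours = [
    {"id": "nucleus", "area": 700, "perimeter": 95},    # circ ~ 0.97
    {"id": "debris", "area": 10, "perimeter": 12},      # fails area bound
    {"id": "artifact", "area": 400, "perimeter": 200},  # circ ~ 0.13
]
kept = filter_nuclei(contours)
```

Only the round, appropriately sized contour survives; a size-recovery pass (as in the paper, based on average intensity) would then revisit borderline rejections.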
Affiliation(s)
- Iram Tazim Hoque
- Department of CSE, BUET, ECE Building, West Palashi, Dhaka, Bangladesh
- Nabil Ibtehaz
- Department of CSE, BUET, ECE Building, West Palashi, Dhaka, Bangladesh
- Saumitra Chakravarty
- Department of Pathology, Bangabandhu Sheikh Mujib Medical University, Shahabag, Dhaka, Bangladesh
- M Saifur Rahman
- Department of CSE, BUET, ECE Building, West Palashi, Dhaka, Bangladesh
- M Sohel Rahman
- Department of CSE, BUET, ECE Building, West Palashi, Dhaka, Bangladesh
22
Glass-cutting medical images via a mechanical image segmentation method based on crack propagation. Nat Commun 2020; 11:5669. [PMID: 33168802] [PMCID: PMC7652839] [DOI: 10.1038/s41467-020-19392-7]
Abstract
Medical image segmentation is crucial in diagnosing and treating diseases, but automatic segmentation of complex images is very challenging. Here we present a method, called the crack propagation method (CPM), based on the principles of fracture mechanics. This unique method converts the image segmentation problem into a mechanical one, extracting the boundary information of the target area by tracing the crack propagation on a thin plate with grooves corresponding to the area edge. The greatest advantage of CPM is in segmenting images involving blurred or even discontinuous boundaries, a task difficult to achieve by existing auto-segmentation methods. The segmentation results for synthesized images and real medical images show that CPM has high accuracy in segmenting complex boundaries. With increasing demand for medical imaging in clinical practice and research, this method will show its unique potential.
23
Mahmood F, Borders D, Chen RJ, Mckay GN, Salimian KJ, Baras A, Durr NJ. Deep Adversarial Training for Multi-Organ Nuclei Segmentation in Histopathology Images. IEEE Transactions on Medical Imaging 2020; 39:3257-3267. [PMID: 31283474] [PMCID: PMC8588951] [DOI: 10.1109/tmi.2019.2927182]
Abstract
Nuclei segmentation is a fundamental task for various computational pathology applications including nuclei morphology analysis, cell type classification, and cancer grading. Deep learning has emerged as a powerful approach to segmenting nuclei, but the accuracy of convolutional neural networks (CNNs) depends on the volume and quality of labeled histopathology data available for training. In particular, conventional CNN-based approaches lack structured prediction capabilities, which are required to distinguish overlapping and clumped nuclei. Here, we present an approach to nuclei segmentation that overcomes these challenges by utilizing a conditional generative adversarial network (cGAN) trained with synthetic and real data. We generate a large dataset of H&E training images with perfect nuclei segmentation labels using an unpaired GAN framework. This synthetic data, along with real histopathology data from six different organs, is used to train a conditional GAN with spectral normalization and gradient penalty for nuclei segmentation. This adversarial regression framework enforces higher-order spatial consistency compared to conventional CNN models. We demonstrate that this nuclei segmentation approach generalizes across different organs, sites, patients, and disease states, and outperforms conventional approaches, especially in isolating individual and overlapping nuclei.
24
Chang YH, Yokota H, Abe K, Tasi MD, Chu SL. Automatic three-dimensional segmentation of mouse embryonic stem cell nuclei by utilising multiple channels of confocal fluorescence images. J Microsc 2020; 281:57-75. [PMID: 32720710] [DOI: 10.1111/jmi.12949]
Abstract
Time-lapse confocal fluorescence microscopy images from mouse embryonic stem cells (ESCs) carrying reporter genes, histone H2B-mCherry and Mvh-Venus, have been used to monitor dynamic changes in the cellular/differentiation characteristics of live ESCs. Accurate cell nucleus segmentation is required to analyse ESC dynamics and differentiation at single-cell resolution. Several methods use concavities on nucleus contours to segment overlapping cell nuclei. Our proposed method evaluates not only the concavities but also the size and shape of every 2D nucleus region to determine whether any of the strait, extrusion, convexity, and large diameter criteria is satisfied to segment overlapping nuclei inside the region. We then use a 3D segmentation method to reconstruct simple, convex, and reasonably sized 3D nuclei along the image stacking direction using the radius and centre of every segmented region in the respective microscopy images. To avoid false concavities on nucleus boundaries, fluorescence images of the H2B-mCherry reporter are used for localisation of cell nuclei and Venus fluorescence images are used for determining the cell colony ranges. We use a series of image preprocessing procedures to remove noise outside and inside cell colonies, and in respective nuclei, and to smooth nucleus boundaries based on the colony ranges. We propose dynamic data structures to record every segmented nucleus region and solid in sets (volumes) of 3D confocal images. The experimental results show that the proposed image preprocessing method preserves the areas of mouse ESC nuclei on microscopy images and that the segmentation method effectively segments out every nucleus with a reasonable size and shape. All 3D nuclei in a set (volume) of confocal microscopy images can be accessed by the dynamic data structures for 3D reconstruction. The 3D nuclei in time-lapse confocal microscopy images can be tracked to calculate cell movement and proliferation in consecutive volumes for understanding the dynamics of the differentiation characteristics of ESCs.
LAY DESCRIPTION: Embryonic stem cells (ESCs) are considered an ideal source for basic cell biology studies and for producing medically useful cells in vitro. This study uses time-lapse confocal fluorescence microscopy images from mouse ESCs carrying reporter genes to monitor dynamic changes in the cellular/differentiation characteristics of live ESCs. To automate analyses of ESC differentiation behaviours, accurate cell nucleus segmentation to distinguish respective cells is required. A series of image preprocessing procedures is implemented to remove noise in live-cell fluorescence images, but these still yield overlapping cell nuclei. A segmentation method that evaluates boundary concavities and the size and shape of every nucleus is then used to determine whether any of the strait, extrusion, convexity, large, and local minimum diameter criteria is satisfied to segment overlapping nuclei. We propose a dynamic data structure to record every newly segmented nucleus. The experimental results show that the proposed image preprocessing method preserves the areas of mouse ESC nuclei and that the segmentation method effectively detects overlapping nuclei. All segmented nuclei in the confocal images can be accessed using the dynamic data structures to be visualised and manipulated for quantitative analyses of ESC differentiation behaviours. One such manipulation is tracking segmented 3D cell nuclei in time-lapse images to calculate the dynamics of their differentiation characteristics.
Affiliation(s)
- Y-H Chang
- Department of Information & Computer Engineering, Chung Yuan Christian University, Chung-Li, Taiwan, ROC
- H Yokota
- RIKEN Center for Advanced Photonics, Wako, Japan
- K Abe
- RIKEN BioResource Research Center, Tsukuba, Japan
- M-D Tasi
- Department of Information & Computer Engineering, Chung Yuan Christian University, Chung-Li, Taiwan, ROC
- S-L Chu
- Department of Information & Computer Engineering, Chung Yuan Christian University, Chung-Li, Taiwan, ROC
25
Huang J, Wang T, Zheng D, He Y. Nucleus segmentation of cervical cytology images based on multi-scale fuzzy clustering algorithm. Bioengineered 2020; 11:484-501. [PMID: 32279589] [PMCID: PMC7161549] [DOI: 10.1080/21655979.2020.1747834]
Abstract
In the screening of cervical cancer cells, accurate identification and segmentation of the nucleus in cell images is a key part of the early diagnosis of cervical cancer. Overlapping, uneven staining, poor contrast, and other factors present challenges to cervical nucleus segmentation. We propose a segmentation method for cervical nuclei based on a multi-scale fuzzy clustering algorithm, which segments cervical cell clump images at different scales. We adopt a novel "interesting degree" measure, based on an area prior, to evaluate each node. The application of these two methods not only solves the problem of selecting the number of categories for the clustering algorithm but also greatly improves nucleus recognition performance. The method is evaluated on the ISBI 2014 and ISBI 2015 public datasets. Experiments show that the proposed algorithm outperforms state-of-the-art cervical nucleus segmentation algorithms and produces highly accurate nucleus segmentation results.
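The clustering at the core of the method is fuzzy clustering; a minimal standard Fuzzy C-Means on 1-D pixel intensities (rather than the paper's multi-scale formulation) looks like this:

```python
import numpy as np

def fuzzy_cmeans(x, c=2, m=2.0, iters=50):
    """Standard Fuzzy C-Means on 1-D data with fuzzifier m > 1.
    Membership update: u[i,j] proportional to d[i,j]^(-2/(m-1));
    centre update: weighted mean with weights u^m.
    Returns (memberships u, cluster centres)."""
    centres = np.quantile(x, np.linspace(0.0, 1.0, c))  # deterministic init
    u = np.full((len(x), c), 1.0 / c)
    for _ in range(iters):
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12  # avoid div by 0
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
        um = u ** m
        centres = (um.T @ x) / um.sum(axis=0)
    return u, centres

# Two intensity populations (dark nuclei ~0.1, bright background ~0.9).
x = np.array([0.08, 0.10, 0.12, 0.88, 0.90, 0.92])
u, centres = fuzzy_cmeans(x, c=2)
hard = u.argmax(axis=1)   # defuzzified labels
```

The soft memberships are what the multi-scale scheme exploits; defuzzifying with `argmax` recovers an ordinary partition when a hard decision is needed.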
Affiliation(s)
- Jinjie Huang
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin, China
- School of Computer Science, Harbin University of Science and Technology, Harbin, China
- Tao Wang
- Key Laboratory of Advanced Manufacturing and Intelligent Technology, Ministry of Education, Harbin, China
- School of Computer Science, Harbin University of Science and Technology, Harbin, China
- Network and Education Technology Center, Harbin University of Commerce, Harbin, China
- Dequan Zheng
- Network and Education Technology Center, Harbin University of Commerce, Harbin, China
- Yongjun He
- School of Computer Science, Harbin University of Science and Technology, Harbin, China
26
Polar coordinate sampling-based segmentation of overlapping cervical cells using attention U-Net and random walk. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.12.036]
27
Accurate segmentation of overlapping cells in cervical cytology with deep convolutional neural networks. Neurocomputing 2019. [DOI: 10.1016/j.neucom.2019.06.086]
28
Conceição T, Braga C, Rosado L, Vasconcelos MJM. A Review of Computational Methods for Cervical Cells Segmentation and Abnormality Classification. Int J Mol Sci 2019; 20:E5114. [PMID: 31618951] [PMCID: PMC6834130] [DOI: 10.3390/ijms20205114]
Abstract
Cervical cancer is one of the most common cancers in women worldwide, affecting around 570,000 new patients each year. Although there have been great improvements over the years, current screening procedures still suffer from long, tedious workflows and ambiguities. The growing interest in computer-aided solutions for cervical cancer screening aims to address these practical difficulties, which are especially frequent in the low-income countries where most deaths caused by cervical cancer occur. This review first gives an overview of the disease and its current screening procedures. It then presents an in-depth analysis of the most relevant computational methods in the literature for cervical cell analysis, focusing on automated quality assessment, segmentation, and classification, with an extensive literature review and critical discussion. Since the major goal of this timely review is to support the development of new automated tools that can facilitate cervical screening procedures, it also offers considerations regarding the next generation of computer-aided diagnosis systems and future research directions.
Affiliation(s)
- Luís Rosado: Fraunhofer Portugal AICOS, 4200-135 Porto, Portugal
29
Cao J, Wong MK, Zhao Z, Yan H. 3DMMS: robust 3D Membrane Morphological Segmentation of C. elegans embryo. BMC Bioinformatics 2019; 20:176. [PMID: 30961566] [PMCID: PMC6454620] [DOI: 10.1186/s12859-019-2720-x]
Abstract
BACKGROUND: Understanding cellular architecture is a fundamental problem in many biological studies. C. elegans is widely used as a model organism in these studies because of its invariant cell fate determination. In recent years, researchers have worked extensively on C. elegans to uncover how genes and proteins regulate cell mobility and communication. Although various algorithms have been proposed to analyze nuclei, cell shape features are not yet well recorded. This paper proposes a method to systematically analyze three-dimensional morphological cellular features.
RESULTS: Three-dimensional Membrane Morphological Segmentation (3DMMS) uses several novel techniques, such as statistical intensity normalization and region filters, to pre-process the cell images, and then segments membrane stacks with watershed algorithms. 3DMMS achieves high robustness and precision across different time points (developmental stages). Compared with two state-of-the-art algorithms, RACE and BCOMS, quantitative analysis shows that 3DMMS performs best, with an average Dice ratio of 97.7% over six time points. In addition, 3DMMS provides time series of internal and external shape features of C. elegans.
CONCLUSION: We have developed the 3DMMS-based technique for embryonic shape reconstruction at the single-cell level. With cells accurately segmented, 3DMMS makes it possible to study cellular shapes and to bridge morphological features and biological expression in embryo research.
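The watershed step at the heart of 3DMMS can be illustrated with a minimal 2-D toy example: two touching "cells" are separated by flooding an inverted distance transform from seed markers. This is a generic sketch using SciPy's `watershed_ift`, not the authors' 3-D pipeline; the marker positions are hand-placed for illustration, whereas a real system would detect them (e.g. from nucleus positions).

```python
import numpy as np
from scipy import ndimage as ndi

# Binary image of two touching circular "cells"
h, w = 80, 120
yy, xx = np.mgrid[:h, :w]
disk1 = (yy - 40) ** 2 + (xx - 45) ** 2 < 20 ** 2
disk2 = (yy - 40) ** 2 + (xx - 75) ** 2 < 20 ** 2
binary = disk1 | disk2

# Elevation map: invert the Euclidean distance transform so the thin
# "neck" between the two cells becomes a ridge separating two basins
dist = ndi.distance_transform_edt(binary)
elevation = dist.max() - dist
elevation = (255 * elevation / elevation.max()).astype(np.uint8)

# Seed markers: one per cell interior, plus a background seed
markers = np.zeros((h, w), dtype=np.int16)
markers[40, 45] = 1   # seed inside cell 1
markers[40, 75] = 2   # seed inside cell 2
markers[0, 0] = 3     # background seed
labels = ndi.watershed_ift(elevation, markers)
```

The flooding stops where the two basins meet at the ridge, so the shared boundary between the cells is recovered even though the binary mask is a single connected blob; 3DMMS applies the same idea to 3-D membrane image stacks after intensity normalization and region filtering.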
Affiliation(s)
- Jianfeng Cao: Department of Electronic Engineering, City University of Hong Kong, Kowloon Tong, Hong Kong
- Ming-Kin Wong: Department of Biology, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Zhongying Zhao: Department of Biology, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Hong Yan: Department of Electronic Engineering, City University of Hong Kong, Kowloon Tong, Hong Kong