1. Gao W, Bai Y, Yang Y, Jia L, Mi Y, Cui W, Liu D, Shakoor A, Zhao L, Li J, Luo T, Sun D, Jiang Z. Intelligent sensing for the autonomous manipulation of microrobots toward minimally invasive cell surgery. Applied Physics Reviews 2024; 11. [DOI: 10.1063/5.0211141]
Abstract
The physiology and pathogenesis of biological cells have drawn enormous research interest. Benefiting from the rapid development of microfabrication and microelectronics, miniaturized robots with tool sizes below the micrometer scale have been widely studied for manipulating biological cells in vitro and in vivo. Traditionally, the complex physiological environment and the fragility of biological samples have required human intervention to fulfill these tasks, resulting in high risks of irreversible structural or functional damage and even clinical risk. Intelligent sensing devices and approaches have recently been integrated into robotic systems for environment visualization and interaction force control. As a consequence, microrobots can be manipulated autonomously with visual and interaction force feedback, greatly improving accuracy, efficiency, and damage regulation in minimally invasive cell surgery. This review first explores advanced tactile sensing in terms of sensing principles, design methodologies, and underlying physics. It then comprehensively discusses recent progress in visual sensing, summarizing and analyzing imaging instruments and processing methods. Next, it introduces autonomous micromanipulation practices that utilize visual and tactile sensing feedback, together with their applications in minimally invasive surgery. Finally, it highlights the remaining challenges of current robotic micromanipulation and future directions toward clinical trials, providing valuable references for this field.
Affiliation(s)
- Wendi Gao: State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Yunfei Bai: State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Yujie Yang: State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Lanlan Jia: Department of Electronic Engineering, Ocean University of China, Qingdao 266400
- Yingbiao Mi: State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Wenji Cui: State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Dehua Liu: State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Adnan Shakoor: Department of Control and Instrumentation Engineering, King Fahd University of Petroleum and Minerals, Dhahran 31261
- Libo Zhao: State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
- Junyang Li: Department of Electronic Engineering, Ocean University of China, Qingdao 266400
- Tao Luo: Pen-Tung Sah Institute of Micro-Nano Science and Technology, Xiamen University, Xiamen 361102
- Dong Sun: State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049; Department of Biomedical Engineering, City University of Hong Kong, Hong Kong 999099
- Zhuangde Jiang: State Key Laboratory for Manufacturing Systems Engineering, International Joint Laboratory for Micro/Nano Manufacturing and Measurement Technologies, Overseas Expertise Introduction Center for Micro/Nano Manufacturing and Nano Measurement Technologies Discipline Innovation, Xi'an Jiaotong University (Yantai) Research Institute for Intelligent Sensing Technology and System, School of Instrument Science and Technology, Xi'an Jiaotong University, Xi'an 710049
2. Alahmari SS, Goldgof D, Hall LO, Mouton PR. A Review of Nuclei Detection and Segmentation on Microscopy Images Using Deep Learning With Applications to Unbiased Stereology Counting. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:7458-7477. [PMID: 36327184] [DOI: 10.1109/tnnls.2022.3213407]
Abstract
The detection and segmentation of stained cells and nuclei are essential prerequisites for quantitative analysis in many disease studies. Recently, deep learning has shown strong performance in many computer vision problems, including solutions for medical image analysis. Furthermore, accurate stereological quantification of microscopic structures in stained tissue sections plays a critical role in understanding human diseases and developing safe and effective treatments. In this article, we review the most recent deep learning approaches for cell (nuclei) detection and segmentation in cancer and Alzheimer's disease, with an emphasis on deep learning approaches combined with unbiased stereology. Major challenges include accurate and reproducible cell detection and segmentation in microscopy images of stained sections. Finally, we discuss potential improvements and future trends in deep learning applied to cell detection and segmentation.
3. Toma TT, Wang Y, Gahlmann A, Acton ST. DeepSeeded: Volumetric Segmentation of Dense Cell Populations with a Cascade of Deep Neural Networks in Bacterial Biofilm Applications. Expert Systems with Applications 2024; 238:122094. [PMID: 38646063] [PMCID: PMC11027476] [DOI: 10.1016/j.eswa.2023.122094]
Abstract
Accurate and automatic segmentation of individual cell instances in microscopy images is a vital step for quantifying cellular attributes, which can subsequently lead to new discoveries in biomedical research. In recent years, data-driven deep learning techniques have shown promising results in this task. Despite their success, many of these techniques fail to accurately segment cells in microscopy images with high cell density and low signal-to-noise ratio. In this paper, we propose DeepSeeded, a novel 3D cell segmentation approach based on a cascaded deep learning architecture that estimates seeds for a classical seeded watershed segmentation. The cascade enhances cell interior and border information using Euclidean distance transforms and detects cell seeds by performing voxel-wise classification. The data-driven seed estimation process proposed here allows segmenting touching cell instances in a dense, intensity-inhomogeneous microscopy image volume. We demonstrate the performance of the proposed method on 3D microscopy images of a particularly dense cell population: bacterial biofilms. Experimental results on synthetic and two real biofilm datasets suggest that the proposed method yields superior segmentation results compared with state-of-the-art deep learning methods and a classical method.
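The classical pipeline that DeepSeeded builds on can be sketched with SciPy alone: derive seeds from local maxima of the Euclidean distance transform, then run a seeded watershed on the inverted distance map. This is a minimal stand-in for the paper's CNN-predicted seeds, not the authors' implementation; the maximum-filter window size here is an arbitrary choice.

```python
import numpy as np
from scipy import ndimage as ndi

def seeded_watershed(mask):
    """Baseline seeded watershed: seeds from local maxima of the
    Euclidean distance transform, basins grown on the inverted map.
    DeepSeeded replaces this heuristic seed step with CNN-predicted
    seeds; this sketch shows the classical pipeline it builds on."""
    dist = ndi.distance_transform_edt(mask)
    # local maxima of the distance transform approximate cell centres
    seeds = (ndi.maximum_filter(dist, size=5) == dist) & mask
    markers, n_cells = ndi.label(seeds)
    # flood the inverted distance map so basins grow out from the seeds
    elevation = (dist.max() - dist).astype(np.uint16)
    labels = ndi.watershed_ift(elevation, markers)
    labels[~mask] = 0  # restrict the labelling to the foreground
    return labels, n_cells
```

On a toy mask with two separated blobs this recovers one label per blob; on real data the heuristic seeds are exactly what fails at high density, which motivates learning them instead.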
Affiliation(s)
- Tanjin Taher Toma: Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, Virginia 22904, USA
- Yibo Wang: Department of Chemistry, University of Virginia, Charlottesville, Virginia 22904, USA
- Andreas Gahlmann: Department of Chemistry, University of Virginia, Charlottesville, Virginia 22904, USA; Department of Molecular Physiology and Biological Physics, University of Virginia, Charlottesville, Virginia 22903, USA
- Scott T. Acton: Department of Electrical and Computer Engineering, University of Virginia, Charlottesville, Virginia 22904, USA
4. Liu B, Zhu Y, Yang Z, Yan HHN, Leung SY, Shi J. Deep Learning-Based 3D Single-Cell Imaging Analysis Pipeline Enables Quantification of Cell-Cell Interaction Dynamics in the Tumor Microenvironment. Cancer Res 2024; 84:517-526. [PMID: 38085180] [DOI: 10.1158/0008-5472.can-23-1100]
Abstract
The three-dimensional (3D) tumor microenvironment (TME) comprises multiple interacting cell types that critically impact tumor pathology and therapeutic response. Efficient 3D imaging assays and analysis tools could facilitate profiling and quantifying distinctive cell-cell interaction dynamics in the TMEs of a wide spectrum of human cancers. Here, we developed a 3D live-cell imaging assay using confocal microscopy of patient-derived tumor organoids and a software tool, SiQ-3D (single-cell image quantifier for 3D), that optimizes deep learning (DL)-based 3D image segmentation, single-cell phenotype classification, and tracking to automatically acquire multidimensional dynamic data for different interacting cell types in the TME. An organoid model of tumor cells interacting with natural killer cells was used to demonstrate the effectiveness of the 3D imaging assay to reveal immuno-oncology dynamics as well as the accuracy and efficiency of SiQ-3D to extract quantitative data from large 3D image datasets. SiQ-3D is Python-based, publicly available, and customizable to analyze data from both in vitro and in vivo 3D imaging. The DL-based 3D imaging analysis pipeline can be employed to study not only tumor interaction dynamics with diverse cell types in the TME but also various cell-cell interactions involved in other tissue/organ physiology and pathology.
Significance: A 3D single-cell imaging pipeline that quantifies cancer cell interaction dynamics with other TME cell types using primary patient-derived samples can elucidate how cell-cell interactions impact tumor behavior and treatment responses.
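The tracking stage of such a pipeline can be illustrated, under strong simplifying assumptions, by greedy nearest-neighbour linking of cell centroids between consecutive frames. SiQ-3D's actual tracker is deep-learning-assisted; this NumPy sketch shows only the linking concept, and `max_dist` is a hypothetical gating parameter, not a documented SiQ-3D setting.

```python
import numpy as np

def link_centroids(frame_a, frame_b, max_dist):
    """Greedy nearest-neighbour linking of cell centroids between two
    time points: a minimal classical stand-in for the tracking stage.
    frame_a / frame_b are (N, D) arrays of centroid coordinates."""
    a = np.asarray(frame_a, dtype=float)
    b = np.asarray(frame_b, dtype=float)
    if a.size == 0 or b.size == 0:
        return []
    # pairwise Euclidean distances between all centroid pairs
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    links = []
    while np.isfinite(d).any():
        i, j = np.unravel_index(np.argmin(d), d.shape)
        if d[i, j] > max_dist:
            break  # remaining candidates too far apart to be the same cell
        links.append((int(i), int(j)))
        d[i, :] = np.inf  # each centroid may be linked at most once
        d[:, j] = np.inf
    return links
```

Real trackers additionally handle divisions, appearances, and disappearances, which this greedy gating deliberately ignores.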
Affiliation(s)
- Bodong Liu: Center for Quantitative Systems Biology, Department of Physics, Hong Kong Baptist University, Hong Kong SAR, P.R. China
- Yanting Zhu: Center for Quantitative Systems Biology, Department of Physics, Hong Kong Baptist University, Hong Kong SAR, P.R. China; Laboratory for Synthetic Chemistry and Chemical Biology Limited, Hong Kong SAR, P.R. China
- Zhenye Yang: MOE Key Laboratory for Cellular Dynamics, The CAS Key Laboratory of Innate Immunity and Chronic Disease, School of Basic Medical Sciences, Division of Life Sciences and Medicine, University of Science and Technology of China, Hefei, P.R. China
- Helen H N Yan: Department of Pathology, School of Clinical Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Queen Mary Hospital, Pokfulam, Hong Kong SAR, P.R. China
- Suet Yi Leung: Department of Pathology, School of Clinical Medicine, Li Ka Shing Faculty of Medicine, The University of Hong Kong, Queen Mary Hospital, Pokfulam, Hong Kong SAR, P.R. China
- Jue Shi: Center for Quantitative Systems Biology, Department of Physics, Hong Kong Baptist University, Hong Kong SAR, P.R. China; Laboratory for Synthetic Chemistry and Chemical Biology Limited, Hong Kong SAR, P.R. China
5. Behnsen JG, Black K, Houghton JE, Worden RH. A Review of Particle Size Analysis with X-ray CT. Materials (Basel, Switzerland) 2023; 16:1259. [PMID: 36770266] [PMCID: PMC9920517] [DOI: 10.3390/ma16031259]
Abstract
Particle size and morphology analysis is a problem common to a wide range of applications, including additive manufacturing, geological and agricultural materials' characterisation, food manufacturing and pharmaceuticals. Here, we review the use of microfocus X-ray computed tomography (X-ray CT) for particle analysis. We give an overview of different sample preparation methods, image processing protocols, the morphology parameters that can be determined, and types of materials that are suitable for analysis of particle sizes using X-ray CT. The main conclusion is that size and shape parameters can be determined for particles larger than approximately 2 to 3 μm, given adequate resolution of the X-ray CT setup. Particles composed of high atomic number materials (Z > 40) require careful sample preparation to ensure X-ray transmission. Problems occur when particles with a broad range of sizes are closely packed together, or when particles are fused (sintered or cemented). The use of X-ray CT for particle size analysis promises to become increasingly widespread, offering measurements of size, shape, and porosity of large numbers of particles within one X-ray CT scan.
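The core size measurement the review describes — per-particle voxel counts converted to equivalent spherical diameters — can be sketched with SciPy's connected-component labelling. This is an illustrative reduction, not any specific tool from the review; `voxel_size` is assumed isotropic, and touching particles appear as one component here, which is exactly the failure mode the review flags for closely packed or sintered material.

```python
import numpy as np
from scipy import ndimage as ndi

def particle_sizes(binary_volume, voxel_size=1.0):
    """Equivalent spherical diameters of particles in a binarised CT
    volume, one value per connected component. voxel_size is the
    isotropic edge length of one voxel in physical units."""
    labels, n = ndi.label(binary_volume)
    # voxel count per particle -> physical volume
    counts = ndi.sum(binary_volume, labels, index=np.arange(1, n + 1))
    volumes = counts * voxel_size ** 3
    # diameter of the sphere with the same volume as each particle
    return np.cbrt(6.0 * volumes / np.pi)
```

Splitting fused particles before this step would require a watershed-style separation, as noted in the abstract.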
Affiliation(s)
- Julia G. Behnsen: School of Engineering, University of Liverpool, Liverpool L69 3GH, UK
- Kate Black: School of Engineering, University of Liverpool, Liverpool L69 3GH, UK
- James E. Houghton: Department of Earth, Ocean and Ecological Science, University of Liverpool, Liverpool L69 3GH, UK
- Richard H. Worden: Department of Earth, Ocean and Ecological Science, University of Liverpool, Liverpool L69 3GH, UK
6. Dogar GM, Shahzad M, Fraz MM. Attention augmented distance regression and classification network for nuclei instance segmentation and type classification in histology images. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104199]
7. Wang X, Wu C, Zhang S, Yu P, Li L, Guo C, Li R. A novel deep learning segmentation model for organoid-based drug screening. Front Pharmacol 2022; 13:1080273. [PMID: 36588731] [PMCID: PMC9794595] [DOI: 10.3389/fphar.2022.1080273]
Abstract
Organoids are self-organized, three-dimensional in vitro cell cultures derived from stem cells. They can recapitulate organ development, tissue regeneration, and disease progression and hence have broad applications in drug discovery. However, the lack of effective image-analysis algorithms for organoid growth has slowed the development of organoid-based drug screening. In this study, we take advantage of a bladder cancer organoid system and develop a deep learning model, the res-double dynamic conv attention U-Net (RDAU-Net), to improve the efficiency and accuracy of organoid-based drug screening. In the RDAU-Net model, dynamic convolution and attention modules are integrated: the feature-extraction capability of the encoder and the utilization of multi-scale information are substantially enhanced, and the semantic gap caused by skip connections is filled, which substantially improves the model's robustness to interference. A total of 200 images of bladder cancer organoids on culture days 1, 3, 5, and 7, with or without drug treatment, were employed for training and testing. Compared with other variants of the U-Net model, RDAU-Net improves segmentation indicators such as Intersection over Union and the Dice similarity coefficient. In addition, the algorithm effectively prevents false and missed identifications while maintaining smooth edge contours in the segmentation results. In summary, we propose a novel deep learning method that can significantly improve the efficiency and accuracy of high-throughput drug screening and evaluation using organoids.
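The two overlap metrics reported for RDAU-Net, Intersection over Union and the Dice similarity coefficient, have simple closed forms on binary masks. A NumPy sketch (the convention of scoring empty-vs-empty masks as 1.0 is our assumption, not the paper's):

```python
import numpy as np

def iou_and_dice(pred, target):
    """Intersection over Union and Dice similarity coefficient for a
    pair of binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    target = np.asarray(target, dtype=bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    total = pred.sum() + target.sum()
    iou = inter / union if union else 1.0
    dice = 2.0 * inter / total if total else 1.0
    return float(iou), float(dice)
```

Note the fixed relationship Dice = 2·IoU/(1+IoU), which is why papers that report both metrics show them moving together.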
Affiliation(s)
- Xiaowen Wang: School of Information, Yunnan University, Kunming, China
- Chunyue Wu: School of Life Science, Yunnan University, Kunming, China
- Shudi Zhang: School of Information, Yunnan University, Kunming, China
- Pengfei Yu: School of Information, Yunnan University, Kunming, China
- Lu Li: School of Life Science, Yunnan University, Kunming, China
- Chunming Guo: School of Life Science, Yunnan University, Kunming, China
- Rui Li: Department of Radiation Oncology, Sichuan Cancer Hospital and Institute, Sichuan Cancer Center, School of Medicine, University of Electronic Science and Technology of China, Chengdu, China
8. Wen T, Tong B, Liu Y, Pan T, Du Y, Chen Y, Zhang S. Review of research on the instance segmentation of cell images. Computer Methods and Programs in Biomedicine 2022; 227:107211. [PMID: 36356384] [DOI: 10.1016/j.cmpb.2022.107211]
Abstract
The instance segmentation of cell images is the basis for cell research and is of great importance for the study and diagnosis of pathologies. To analyze the current state and future development of cell image instance segmentation, this paper first systematically reviews image segmentation methods based on both traditional and deep learning techniques. Then, deep-learning-based cell image segmentation methods are analyzed and summarized from three aspects: weak label extraction from cell images, cell image instance segmentation, and segmentation of internal cell structures. Finally, cell image instance segmentation is summarized, and challenges and future developments are discussed.
Affiliation(s)
- Tingxi Wen: College of Engineering, Huaqiao University, Quanzhou 362021, China
- Binbin Tong: College of Engineering, Huaqiao University, Quanzhou 362021, China
- Yu Liu: College of Engineering, Huaqiao University, Quanzhou 362021, China
- Ting Pan: College of Engineering, Huaqiao University, Quanzhou 362021, China
- Yu Du: College of Engineering, Huaqiao University, Quanzhou 362021, China
- Yuping Chen: College of Engineering, Huaqiao University, Quanzhou 362021, China
- Shanshan Zhang: College of Engineering, Huaqiao University, Quanzhou 362021, China
9. Liu G, Ding Q, Luo H, Sha M, Li X, Ju M. Cx22: A new publicly available dataset for deep learning-based segmentation of cervical cytology images. Comput Biol Med 2022; 150:106194. [PMID: 37859287] [DOI: 10.1016/j.compbiomed.2022.106194]
Abstract
The segmentation of cervical cytology images plays an important role in the automatic analysis of cervical cytology screening. Although deep learning-based segmentation methods are well developed in other image segmentation areas, their application to cervical cytology images is still at an early stage. The most important reason for this slow progress is the lack of publicly available, high-quality datasets: studies of deep learning-based segmentation methods may be hampered by present datasets that are either artificial or plagued by false-negative objects. In this paper, we develop a new dataset of cervical cytology images named Cx22, which consists of completely annotated labels of cellular instances based on the open-source images previously released by our institute. First, we meticulously delineate the contours of 14,946 cellular instances in 1,320 images generated by our proposed ROI-based label cropping algorithm. Then, we propose baseline methods for deep learning-based semantic and instance segmentation tasks on Cx22. Finally, through experiments, we validate the task suitability of Cx22, and the results reveal the impact of false-negative objects on the performance of the baseline methods. Based on our work, Cx22 can provide a foundation for fellow researchers to develop high-performance deep learning-based methods for the segmentation of cervical cytology images. Further details and step-by-step guidance on accessing the dataset are available at https://github.com/LGQ330/Cx22.
Affiliation(s)
- Guangqi Liu: Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110016, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Qinghai Ding: Space Star Technology Co., Ltd., Beijing 100086, China
- Haibo Luo: Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110016, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- Min Sha: Archives of NEU, Northeastern University, Shenyang 110819, China
- Xiang Li: Key Laboratory of Opto-Electronic Information Processing, Chinese Academy of Sciences, Shenyang 110016, China; Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Moran Ju: College of Information Science and Technology, Dalian Maritime University, Dalian 116026, China
10. Lu X, Zhu X. Automatic segmentation of breast cancer histological images based on dual-path feature extraction network. Mathematical Biosciences and Engineering 2022; 19:11137-11153. [PMID: 36124584] [DOI: 10.3934/mbe.2022519]
Abstract
Traditional manual diagnosis of breast cancer from pathological images is time-consuming and labor-intensive, and misdiagnosis is easy. Computer-aided diagnosis of whole-slide images (WSIs) has therefore attracted growing attention. However, the complexity of high-resolution breast cancer pathological images poses a great challenge to automatic diagnosis, and existing algorithms often struggle to balance accuracy and efficiency. To solve these problems, this paper proposes an automatic image segmentation method for breast pathological WSIs based on a dual-path feature extraction network, which achieves good segmentation accuracy. Specifically, inspired by the concept of receptive fields in the human visual system, dilated convolutional networks are introduced to encode rich contextual information. Based on the channel attention mechanism, a feature attention module and a feature fusion module are proposed to effectively filter and combine features. In addition, the method uses a lightweight backbone network and pre-processes the data, greatly reducing the computational complexity of the algorithm. Compared with classic models, it offers improved accuracy and efficiency and is highly competitive.
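The dilated-convolution idea the paper borrows can be made concrete: inserting zeros between the taps of a kernel widens its receptive field without adding parameters. A small NumPy sketch of the kernel expansion, illustrative only and not the paper's network:

```python
import numpy as np

def dilate_kernel(kernel, rate):
    """Expand a 2D convolution kernel for dilated (atrous) convolution
    by inserting rate-1 zeros between taps: a 3x3 kernel at rate 2 then
    covers a 5x5 receptive field with the same nine parameters."""
    kernel = np.asarray(kernel)
    if rate == 1:
        return kernel
    h, w = kernel.shape
    out = np.zeros(((h - 1) * rate + 1, (w - 1) * rate + 1),
                   dtype=kernel.dtype)
    out[::rate, ::rate] = kernel  # original taps land on a strided grid
    return out
```

Convolving an image with the expanded kernel is equivalent to a dilated convolution at that rate, which is how such networks encode wider context at no extra parameter cost.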
Affiliation(s)
- Xi Lu: School of Mechanical Engineering, Southeast University, Nanjing 211189, China
- Xuedong Zhu: School of Mechanical Engineering, Southeast University, Nanjing 211189, China
11. Chand S. Semantic segmentation of human cell nucleus using deep U-Net and other versions of U-Net models. Network (Bristol, England) 2022; 33:167-186. [PMID: 35822269] [DOI: 10.1080/0954898x.2022.2096938]
Abstract
Deep learning models play an essential role in many areas, including medical image analysis, extracting important features without human intervention. In this paper, we propose a deep convolutional neural network, named deep U-Net, for segmentation of the cell nucleus, a critical functional unit that determines the function and structure of the cell. The nucleus contains the DNA, RNA, chromosomes, and genes governing all life activities, and its disorder may lead to diseases such as cancer, heart disease, diabetes, and Alzheimer's. If the nucleus structure is known correctly, diseases due to nucleus disorders may be detected early; knowing the shape and size of the nucleus may also reduce drug discovery time. We evaluate the performance of the proposed model on the nucleus segmentation dataset used in the Data Science Bowl 2018 competition hosted by Kaggle. We compare its performance with that of U-Net, Attention U-Net, R2U-Net, Attention R2U-Net, and both versions of U-Net++ (with and without supervision) in terms of loss, Dice coefficient, Dice loss, intersection over union, and accuracy. Our model performs better than the existing models.
Affiliation(s)
- Satish Chand: School of Computer and Systems Sciences, Jawaharlal Nehru University, New Delhi, India
12. Kaseva T, Omidali B, Hippeläinen E, Mäkelä T, Wilppu U, Sofiev A, Merivaara A, Yliperttula M, Savolainen S, Salli E. Marker-controlled watershed with deep edge emphasis and optimized H-minima transform for automatic segmentation of densely cultivated 3D cell nuclei. BMC Bioinformatics 2022; 23:289. [PMID: 35864453] [PMCID: PMC9306214] [DOI: 10.1186/s12859-022-04827-3]
Abstract
BACKGROUND The segmentation of 3D cell nuclei is essential in many tasks, such as targeted molecular radiotherapies (MRT) for metastatic tumours, toxicity screening, and the observation of proliferating cells. In recent years, one popular method for automatic segmentation of nuclei has been the deep learning-enhanced marker-controlled watershed transform, in which convolutional neural networks (CNNs) create the nuclei masks and markers and the watershed algorithm performs the instance segmentation. We studied whether this method could be improved for the segmentation of densely cultivated 3D nuclei by developing multiple system configurations, in which we examined the effect of edge-emphasizing CNNs and of an optimized H-minima transform for mask and marker generation, respectively.
RESULTS The dataset used for training and evaluation consisted of twelve in vitro cultivated, densely packed 3D human carcinoma cell spheroids imaged using a confocal microscope. With this dataset, evaluation was performed using a cross-validation scheme; four independent datasets were also used for evaluation. The datasets were resampled to near-isotropic resolution for our experiments. The baseline deep learning-enhanced marker-controlled watershed obtained an average of 0.69 Panoptic Quality (PQ) and 0.66 Aggregated Jaccard Index (AJI) over the twelve spheroids. Using a system configuration that was otherwise the same but used 3D edge-emphasizing CNNs and an optimized H-minima transform, the scores increased to 0.76 and 0.77, respectively. On the independent datasets, the best-performing configuration outperformed or equaled the baseline and a set of well-known cell segmentation approaches.
CONCLUSIONS The use of edge-emphasizing U-Nets and an optimized H-minima transform can improve the marker-controlled watershed transform for segmentation of densely cultivated 3D cell nuclei. A novel dataset of twelve spheroids was introduced to the public.
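The H-minima marker generation that the paper optimises can be sketched with SciPy's grayscale morphology: reconstruction by erosion of `elevation + h` over `elevation` suppresses every minimum shallower than depth `h`, and the surviving regional minima become watershed markers. This is a classical sketch, not the authors' optimised variant; the iterative reconstruction is naive and would be slow on large volumes, and `h` is simply a free parameter here rather than the tuned value.

```python
import numpy as np
from scipy import ndimage as ndi

def reconstruct_by_erosion(seed, reference):
    # iterative greyscale reconstruction: erode, then clip from below
    rec, prev = seed.copy(), None
    while prev is None or not np.array_equal(rec, prev):
        prev = rec
        rec = np.maximum(ndi.grey_erosion(rec, size=3), reference)
    return rec

def h_minima_markers(elevation, h):
    """Watershed markers from the H-minima transform: suppress every
    local minimum shallower than depth h, then label those remaining."""
    hmin = reconstruct_by_erosion(elevation + h, elevation)
    # a pixel lies in a regional minimum iff raising it by one level
    # and reconstructing leaves it one level above the original
    minima = reconstruct_by_erosion(hmin + 1, hmin) - hmin == 1
    return ndi.label(minima)
```

Larger `h` merges over-segmented basins at the risk of fusing true nuclei, which is why the paper treats its optimization as a tunable step.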
Affiliation(s)
- Tuomas Kaseva
- HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland
- Bahareh Omidali
- Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- Eero Hippeläinen
- Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland; HUS Medical Imaging Centre, Clinical Physiology and Nuclear Medicine, Helsinki University Hospital and University of Helsinki, Helsinki, Finland
- Teemu Mäkelä
- HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland; Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- Ulla Wilppu
- HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland
- Alexey Sofiev
- HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland; Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- Arto Merivaara
- Division of Pharmaceutical Biosciences, Faculty of Pharmacy, Centre for Drug Research, University of Helsinki, Helsinki, Finland
- Marjo Yliperttula
- Division of Pharmaceutical Biosciences, Faculty of Pharmacy, Centre for Drug Research, University of Helsinki, Helsinki, Finland
- Sauli Savolainen
- HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland; Department of Physics, University of Helsinki, P.O. Box 64, FI-00014, Helsinki, Finland
- Eero Salli
- HUS Medical Imaging Center, Radiology, Helsinki University Hospital and University of Helsinki, P.O. Box 340, FI-00290, Helsinki, Finland
13
He W, Liu T, Han Y, Ming W, Du J, Liu Y, Yang Y, Wang L, Jiang Z, Wang Y, Yuan J, Cao C. A review: The detection of cancer cells in histopathology based on machine vision. Comput Biol Med 2022; 146:105636. [PMID: 35751182 DOI: 10.1016/j.compbiomed.2022.105636] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2021] [Revised: 04/04/2022] [Accepted: 04/28/2022] [Indexed: 12/24/2022]
Abstract
Machine vision is being employed in defect detection, size measurement, pattern recognition, image fusion, target tracking, and 3D reconstruction. Traditional cancer detection is dominated by manual inspection, which wastes time and manpower and relies heavily on the pathologist's skill and experience. Such manual approaches make it difficult to pass on domain knowledge and are ill-suited to the rapid development of medical care in the future. Machine vision can iteratively update and learn the domain knowledge of cancer cell pathology detection to achieve automated, high-precision, and consistent detection. Consequently, this paper reviews the use of machine vision to detect cancer cells in histopathology images, along with the benefits and drawbacks of various detection approaches. First, we review the application of image preprocessing and image segmentation in histopathology for the detection of cancer cells, and compare the benefits and drawbacks of different algorithms. Second, we review research progress on shape, color, and texture features and other methods suited to the characteristics of histopathological cancer cell images. Furthermore, for the classification of histopathological cancer cell images, the benefits and drawbacks of traditional machine vision approaches and deep learning methods are compared and analyzed. Finally, the above research is discussed and the expected future development tendencies are forecast as a guide for future research.
Affiliation(s)
- Wenbin He
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Ting Liu
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yongjie Han
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Wuyi Ming
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China; Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment, Dongguan, 523808, China
- Jinguang Du
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yinxia Liu
- Laboratory Medicine of Dongguan Kanghua Hospital, Dongguan, 523808, China
- Yuan Yang
- Guangdong Provincial Hospital of Chinese Medicine, Guangzhou, 510120, China
- Leijie Wang
- School of Mechanical Engineering, Dongguan University of Technology, Dongguan, 523808, China
- Zhiwen Jiang
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Yongqiang Wang
- Zhengzhou Coal Mining Machinery Group Co., Ltd, Zhengzhou, 450016, China
- Jie Yuan
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China
- Chen Cao
- Henan Key Lab of Intelligent Manufacturing of Mechanical Equipment, Zhengzhou University of Light Industry, Zhengzhou, 450002, China; Guangdong HUST Industrial Technology Research Institute, Guangdong Provincial Key Laboratory of Digital Manufacturing Equipment, Dongguan, 523808, China
14
Devaraj S, Madian N, Suresh S. Mathematical approach for segmenting chromosome clusters in metaspread images. Exp Cell Res 2022; 418:113251. [PMID: 35691379 DOI: 10.1016/j.yexcr.2022.113251] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Revised: 05/15/2022] [Accepted: 06/06/2022] [Indexed: 11/04/2022]
Abstract
Karyotyping is an examination that helps detect chromosomal abnormalities. Chromosome analysis is a challenging task that requires several steps to obtain a karyotype, and its principal difficulties are overlapping and touching chromosomes. The input considered for chromosome analysis is metaspread G-band chromosomes. The proposed work focuses mainly on separating overlapped and touching chromosomes, which is considered the major challenge in karyotyping. Various research contributions to chromosome analysis are in progress, including both low-level (machine learning) and high-level (deep learning) methods. This paper proposes a mathematics-based approach that is very effective for segmenting clustered chromosomes, with segmentation accuracy that is robust compared with high-level approaches.
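The concave-point cue commonly used to split touching chromosomes can be illustrated with a cross-product test on a polygonal contour. This is a minimal sketch of the general idea, not the paper's specific method; the polygon (assumed counter-clockwise and simple) is a hypothetical input:

```python
import numpy as np

def concave_points(poly):
    """Indices of concave (reflex) vertices of a CCW-ordered simple
    polygon. For CCW order, a negative z-component of the cross product
    of successive edge vectors marks a concave vertex."""
    poly = np.asarray(poly, dtype=float)
    prev = np.roll(poly, 1, axis=0)
    nxt = np.roll(poly, -1, axis=0)
    e1 = poly - prev            # incoming edge at each vertex
    e2 = nxt - poly             # outgoing edge at each vertex
    cross = e1[:, 0] * e2[:, 1] - e1[:, 1] * e2[:, 0]
    return np.where(cross < 0)[0]

# Arrow-head shaped CCW polygon: vertex 3 at (2, 1) points inward.
pts = [(0, 0), (4, 0), (4, 3), (2, 1), (0, 3)]
```

Cut paths between such concave points are a standard way to separate a cluster of touching objects into its components.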
Affiliation(s)
- Nirmala Madian
- Department of BME, Dr.N.G.P Institute of Technology, Coimbatore, India
- S Suresh
- Mediscan Systems, Chennai, India
15
Nguyen EH, Yang H, Deng R, Lu Y, Zhu Z, Roland JT, Lu L, Landman BA, Fogo AB, Huo Y. Circle Representation for Medical Object Detection. IEEE TRANSACTIONS ON MEDICAL IMAGING 2022; 41:746-754. [PMID: 34699352 PMCID: PMC8963364 DOI: 10.1109/tmi.2021.3122835] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Box representation has been extensively used for object detection in computer vision. Such representation is efficacious but not necessarily optimized for biomedical objects (e.g., glomeruli), which play an essential role in renal pathology. In this paper, we propose a simple circle representation for medical object detection and introduce CircleNet, an anchor-free detection framework. Compared with the conventional bounding box representation, the proposed bounding circle representation innovates in three ways: (1) it is optimized for ball-shaped biomedical objects; (2) it reduces the degrees of freedom compared with the box representation; and (3) it is naturally more rotation-invariant. When detecting glomeruli and nuclei in pathological images, the proposed circle representation achieved superior detection performance and was more rotation-invariant than the bounding box. The code has been made publicly available: https://github.com/hrlblab/CircleNet.
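The claimed rotation invariance follows from the closed-form circle-circle IoU, which depends only on the radii and the center distance. A small sketch (the circle parameters below are illustrative, not from the paper):

```python
import math

def circle_iou(c1, c2):
    """IoU of two circles given as (x, y, r), computed in closed form
    from the circular-lens intersection area."""
    (x1, y1, r1), (x2, y2, r2) = c1, c2
    d = math.hypot(x2 - x1, y2 - y1)
    if d >= r1 + r2:                      # disjoint
        inter = 0.0
    elif d <= abs(r1 - r2):               # one circle inside the other
        inter = math.pi * min(r1, r2) ** 2
    else:                                 # partial overlap (lens area)
        a1 = r1 * r1 * math.acos((d * d + r1 * r1 - r2 * r2) / (2 * d * r1))
        a2 = r2 * r2 * math.acos((d * d + r2 * r2 - r1 * r1) / (2 * d * r2))
        tri = 0.5 * math.sqrt((-d + r1 + r2) * (d + r1 - r2)
                              * (d - r1 + r2) * (d + r1 + r2))
        inter = a1 + a2 - tri
    union = math.pi * r1 * r1 + math.pi * r2 * r2 - inter
    return inter / union

def rotate(c, theta):
    """Rotate a circle's center about the origin; the radius is unchanged."""
    x, y, r = c
    return (x * math.cos(theta) - y * math.sin(theta),
            x * math.sin(theta) + y * math.cos(theta), r)

a, b = (0.0, 0.0, 1.0), (1.2, 0.5, 0.8)
iou_original = circle_iou(a, b)
iou_rotated = circle_iou(rotate(a, 0.7), rotate(b, 0.7))  # identical IoU
```

Rotating both circles by the same angle leaves their IoU unchanged, whereas axis-aligned boxes around the same objects generally change shape and overlap under rotation.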
16
Kashyap R. Breast Cancer Histopathological Image Classification Using Stochastic Dilated Residual Ghost Model. INTERNATIONAL JOURNAL OF INFORMATION RETRIEVAL RESEARCH 2022. [DOI: 10.4018/ijirr.289655] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
A new deep learning-based classification model called the Stochastic Dilated Residual Ghost (SDRG) was proposed in this work for categorizing histopathology images of breast cancer. The SDRG model used the proposed Multiscale Stochastic Dilated Convolution (MSDC) model, a ghost unit, and stochastic upsampling and downsampling units to categorize breast cancer accurately. This study addresses four primary issues: first, stain normalization was used to manage color divergence, and data augmentation with several factors was used to handle overfitting. The second challenge is extracting and enhancing tiny, low-level information such as edge, contour, and color accuracy; this is done by the proposed multiscale stochastic and dilation unit. The third contribution is the removal of redundant or similar information from the convolutional neural network using a ghost unit. According to the assessment findings, the SDRG model scored an overall accuracy of 95.65 percent in categorizing images, with a precision of 99.17 percent, superior to state-of-the-art approaches.
Affiliation(s)
- Ramgopal Kashyap
- Amity School of Engineering and Technology, Amity University, Raipur, India
17
Duanmu H, Wang F, Teodoro G, Kong J. Foveal blur-boosted segmentation of nuclei in histopathology images with shape prior knowledge and probability map constraints. Bioinformatics 2021; 37:3905-3913. [PMID: 34081103 PMCID: PMC11025700 DOI: 10.1093/bioinformatics/btab418] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2020] [Revised: 04/07/2021] [Accepted: 06/02/2021] [Indexed: 11/13/2022] Open
Abstract
MOTIVATION In most tissue-based biomedical research, the lack of sufficient pathology training images with well-annotated ground truth inevitably limits the performance of deep learning systems. In this study, we propose a convolutional neural network with foveal blur, enriching datasets with multiple local nuclei regions of interest derived from the original pathology images. We further propose a human-knowledge-boosted deep learning system by adding to the convolutional neural network new loss-function terms that capture shape prior knowledge and impose smoothness constraints on the predicted probability maps. RESULTS Our proposed system outperforms all state-of-the-art deep learning and non-deep learning methods by Jaccard coefficient, Dice coefficient, Accuracy, and Panoptic Quality in three independent datasets. The high segmentation accuracy and execution speed suggest its promising potential for automating histopathology nuclei segmentation in biomedical research and clinical settings. AVAILABILITY AND IMPLEMENTATION The code, documentation, and example data are available open source at: https://github.com/HongyiDuanmu26/FovealBoosted. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
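One common way to impose a smoothness constraint on a predicted probability map, in the spirit of the loss terms described here, is an anisotropic total-variation penalty. This is an illustrative formulation, not necessarily the authors' exact term:

```python
import numpy as np

def tv_smoothness(p):
    """Anisotropic total-variation penalty on a 2D probability map:
    the sum of absolute differences between horizontally and
    vertically adjacent predictions. Lower values mean smoother maps."""
    p = np.asarray(p, dtype=float)
    dh = np.abs(np.diff(p, axis=1)).sum()   # horizontal neighbors
    dv = np.abs(np.diff(p, axis=0)).sum()   # vertical neighbors
    return dh + dv

flat = np.full((4, 4), 0.5)                 # perfectly smooth map
noisy = np.zeros((4, 4))
noisy[::2, ::2] = 1.0                       # speckled, non-smooth map
```

Added to a segmentation loss with a small weight, such a term pushes the network toward spatially coherent probability maps rather than isolated noisy responses.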
Affiliation(s)
- Hongyi Duanmu
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA
- Fusheng Wang
- Department of Computer Science, Stony Brook University, Stony Brook, NY 11794, USA
- Department of Biomedical Informatics, Stony Brook University, Stony Brook, NY 11794, USA
- George Teodoro
- Department of Computer Science, Federal University of Minas Gerais, Belo Horizonte 31270-901, Brazil
- Jun Kong
- Department of Mathematics and Statistics and Computer Science, Georgia State University, Atlanta, GA 30303, USA
- Department of Computer Science and Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
18
Wang J, Zhang M, Zhang J, Wang Y, Gahlmann A, Acton ST. Graph-Theoretic Post-Processing of Segmentation With Application to Dense Biofilms. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2021; 30:8580-8594. [PMID: 34613914 PMCID: PMC9159353 DOI: 10.1109/tip.2021.3116792] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
Recent deep learning methods have provided successful initial segmentation results for generalized cell segmentation in microscopy. However, for dense arrangements of small cells with limited ground truth for training, deep learning methods produce both over-segmentation and under-segmentation errors. Post-processing attempts to balance the trade-off between the global goal of cell counting for instance segmentation and local fidelity to the morphology of identified cells. The need for post-processing is especially evident when segmenting 3D bacterial cells in densely packed communities called biofilms. A graph-based recursive clustering approach, m-LCuts, is proposed to automatically detect collinearly structured clusters and is applied to post-process unresolved cells in 3D bacterial biofilm segmentation. The construction of outlier-removed graphs to extract the collinearity feature in the data adds further novelty to m-LCuts. The superiority of m-LCuts is demonstrated in cell counting, with over 90% of cells correctly identified while a lower bound of 0.8 in average single-cell segmentation accuracy is maintained. The proposed method does not need manual specification of the number of cells to be segmented. Furthermore, its broad adaptability to various applications exhibiting data collinearity also makes m-LCuts stand out from other approaches.
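The collinearity cue that m-LCuts exploits can be quantified, for illustration, by the eigenvalue ratio of a point set's covariance; this simple score is a stand-in for the paper's outlier-removed graph construction and recursive cuts:

```python
import numpy as np

def collinearity_score(points):
    """Ratio of the smallest to largest eigenvalue of the 2D covariance
    of a point set: ~0 for collinear layouts, ~1 for isotropic spread."""
    pts = np.asarray(points, dtype=float)
    cov = np.cov(pts.T)                 # 2x2 covariance matrix
    w = np.linalg.eigvalsh(cov)         # eigenvalues in ascending order
    return float(w[0] / w[-1])

line = [(0, 0), (1, 1), (2, 2), (3, 3)]     # perfectly collinear points
square = [(0, 0), (0, 1), (1, 0), (1, 1)]   # isotropic point spread
```

A near-zero score flags a cluster of voxel centroids as a rod-like, collinear structure, which is the signature of a single elongated bacterial cell rather than a merged blob.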
19
Belini VL, Junior OM, Ceccato-Antonini SR, Suhr H, Wiedemann P. Morphometric quantification of a pseudohyphae forming Saccharomyces cerevisiae strain using in situ microscopy and image analysis. J Microbiol Methods 2021; 190:106338. [PMID: 34597736 DOI: 10.1016/j.mimet.2021.106338] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 09/21/2021] [Accepted: 09/21/2021] [Indexed: 11/30/2022]
Abstract
Yeast morphology and counting are highly important in fermentation as they are often associated with productivity and can be influenced by process conditions. At present, time-consuming and offline methods are utilized for routine analysis of yeast morphology and cell counting using a haemocytometer. In this study, we demonstrate the application of an in situ microscope to obtain a fast stream of pseudohyphae images from agitated sample suspensions of a Saccharomyces cerevisiae strain, whose morphology in cell clusters is frequently found in the bioethanol fermentation industry. The large statistics of microscopic images allow for online determination of the principal morphological characteristics of the pseudohyphae, including the number of constituent cells, cell-size, number of branches, and length of branches. The distributions of these feature values are calculated online, constituting morphometric monitoring of the pseudohyphae population. By providing representative data, the proposed system can improve the effectiveness of morphological characterization, which in turn can help to improve the understanding and control of bioprocesses in which pseudohyphal-like morphologies are found.
Affiliation(s)
- Valdinei L Belini
- Department of Electrical Engineering, Universidade Federal de São Carlos, Rodovia Washington Luís, km 235, São Carlos, SP CEP 13565-905, Brazil
- Orides M Junior
- Computing Department, Universidade Federal de São Carlos, Rodovia Washington Luís, km 235, São Carlos, SP CEP 13565-905, Brazil
- Sandra R Ceccato-Antonini
- Department of Agroindustrial Technology and Rural Socio-Economics, Universidade Federal de São Carlos, Via Anhanguera, km 174, Araras, SP CEP 13600-970, Brazil
- Hajo Suhr
- Department of Information Technology, Mannheim University of Applied Sciences, Paul-Wittsack-Straße 10, 68163 Mannheim, Germany
- Philipp Wiedemann
- Department of Biotechnology, Mannheim University of Applied Sciences, Paul-Wittsack-Straße 10, 68163 Mannheim, Germany
20
Rahmon G, Toubal IE, Palaniappan K. Extending U-Net Network for Improved Nuclei Instance Segmentation Accuracy in Histopathology Images. IEEE APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP : [PROCEEDINGS]. IEEE APPLIED IMAGERY PATTERN RECOGNITION WORKSHOP 2021; 2021:10.1109/aipr52630.2021.9762213. [PMID: 35506043 PMCID: PMC9060239 DOI: 10.1109/aipr52630.2021.9762213] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Analysis of the morphometric features of nuclei plays an important role in understanding disease progression and predicting the efficacy of treatment. The first step towards this goal requires segmentation of the individual nuclei within the imaged tissue. Accurate nuclei instance segmentation is one of the most challenging tasks in computational pathology due to the broad morphological variance of individual nuclei and the dense clustering of nuclei with indistinct boundaries. It is extremely laborious and costly to annotate nuclei instances, requiring experienced pathologists to manually draw the contours, which often results in a lack of annotated data. Inevitably, subjective annotation and mislabeling prevent supervised learning approaches from learning from accurate samples and consequently decrease the generalization capacity to robustly segment unseen organ nuclei, leading to over- or under-segmentation. To address these issues, we use a variation of U-Net with squeeze-and-excitation blocks (USE-Net) for robust nuclei segmentation. The squeeze-and-excitation blocks allow the network to perform feature recalibration by emphasizing informative features and suppressing less useful ones. Furthermore, we extend USE-Net to output not only a segmentation mask but also shape markers that allow better separation of nuclei from each other, particularly within dense clusters. The proposed network was trained, tested, and evaluated on the 2018 MICCAI Multi-Organ Nuclei Segmentation (MoNuSeg) challenge dataset. Promising results were obtained on unseen data even though the data used for training USE-Net was comparatively small. The source code of USE-Net is available at https://github.com/CIVA-Lab/USE-Net.
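The squeeze-and-excitation recalibration used in USE-Net can be sketched in NumPy: channels are squeezed by global average pooling, gated by a small two-layer network, and rescaled. The identity weights below stand in for learned parameters; real SE blocks use a learned bottleneck:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def se_block(feat, w1, w2):
    """Squeeze-and-excitation on a (C, H, W) feature map:
    squeeze = global average pool per channel, excitation = a small
    two-layer gate, then channel-wise rescaling of the input."""
    squeeze = feat.mean(axis=(1, 2))                  # (C,)
    gate = sigmoid(w2 @ np.maximum(w1 @ squeeze, 0))  # (C,) in (0, 1)
    return feat * gate[:, None, None]

# Two-channel toy map; identity weights stand in for learned ones.
feat = np.stack([np.zeros((2, 2)), np.full((2, 2), 100.0)])
recalibrated = se_block(feat, np.eye(2), np.eye(2))
# Channel 0 (mean 0) is gated by ~0.5; channel 1 (large mean) passes ~unchanged.
```

The gate is a per-channel scalar in (0, 1), so informative channels are emphasized while weakly responding ones are suppressed without changing the spatial layout.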
Affiliation(s)
- Gani Rahmon
- Dept. of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Imad Eddine Toubal
- Dept. of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
- Kannappan Palaniappan
- Dept. of Electrical Engineering and Computer Science, University of Missouri, Columbia, MO 65211, USA
21
Zou T, Pan T, Taylor M, Stern H. Recognition of overlapping elliptical objects in a binary image. Pattern Anal Appl 2021. [DOI: 10.1007/s10044-020-00951-z] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
Abstract
Recognition of overlapping objects is required in many applications in the field of computer vision. Examples include cell segmentation, bubble detection, and bloodstain pattern analysis. This paper presents a method to identify overlapping objects by approximating them with ellipses. The method is intended to be applied to complex-shaped regions which are believed to be composed of one or more overlapping objects. The method has two primary steps. First, a pool of candidate ellipses is generated by applying the Euclidean distance transform on a compressed image, and the pool is filtered by an overlaying method. Second, the concave points on the contour of the region of interest are extracted by polygon approximation to divide the contour into segments. Then, the optimal ellipses are selected from among the candidates by choosing a minimal subset that best fits the identified segments. We propose the use of the adjusted Rand index, commonly applied in clustering, to compare the fitting result with ground truth. Through a set of computational and optimization efficiencies, we are able to apply our approach to complex images comprised of a number of overlapped regions. Experimental results on a synthetic dataset, two types of cell images, and bloodstain patterns show the superior accuracy and flexibility of our method in ellipse recognition relative to other methods.
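The adjusted Rand index proposed here for scoring ellipse fits against ground truth can be computed directly from the standard pair-counting formula (the labelings below are toy inputs):

```python
from collections import Counter
from math import comb

def adjusted_rand_index(labels_a, labels_b):
    """Adjusted Rand index between two labelings of the same items:
    1 for identical partitions, ~0 for chance-level agreement."""
    n = len(labels_a)
    pair_ab = Counter(zip(labels_a, labels_b))   # contingency counts
    count_a = Counter(labels_a)
    count_b = Counter(labels_b)
    sum_ij = sum(comb(c, 2) for c in pair_ab.values())
    sum_a = sum(comb(c, 2) for c in count_a.values())
    sum_b = sum(comb(c, 2) for c in count_b.values())
    expected = sum_a * sum_b / comb(n, 2)        # chance agreement
    max_index = (sum_a + sum_b) / 2
    return (sum_ij - expected) / (max_index - expected)
```

Because the index is computed over item pairs, it is invariant to label permutation: relabeling cluster 0 as 1 and vice versa still yields a perfect score of 1.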
22
Sun Y, Huang X, Zhou H, Zhang Q. SRPN: similarity-based region proposal networks for nuclei and cells detection in histology images. Med Image Anal 2021; 72:102142. [PMID: 34198042 DOI: 10.1016/j.media.2021.102142] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 05/11/2021] [Accepted: 06/17/2021] [Indexed: 10/21/2022]
Abstract
The detection of nuclei and cells in histology images is of great value in both clinical practice and pathological studies. However, multiple factors, such as the morphological variation of nuclei and cells, make it a challenging task on which conventional object detection methods cannot obtain satisfactory performance in many cases. A detection task consists of two sub-tasks, classification and localization. Under dense object detection, classification is the key to boosting detection performance. Considering this, we propose similarity-based region proposal networks (SRPN) for nuclei and cell detection in histology images. In particular, a customised convolution layer termed the embedding layer is designed for network building. The embedding layer is added to the region proposal networks, enabling the networks to learn discriminative features through similarity learning. Features obtained by similarity learning can significantly boost classification performance compared to conventional methods. SRPN can be easily integrated into standard convolutional neural network architectures such as Faster R-CNN and RetinaNet. We test the proposed approach on multi-organ nuclei detection and signet ring cell detection in histological images. Experimental results show that networks applying similarity learning achieved superior performance on both tasks compared to their counterparts. In particular, the proposed SRPN achieves state-of-the-art performance on the MoNuSeg benchmark for nuclei segmentation and detection, and on the signet ring cell detection benchmark when compared with baselines. The source code is publicly available at: https://github.com/sigma10010/nuclei_cells_det.
Affiliation(s)
- Yibao Sun
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom
- Xingru Huang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom
- Huiyu Zhou
- School of Informatics, University of Leicester, University Road, Leicester, LE1 7RH, United Kingdom
- Qianni Zhang
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom
23
Javed S, Mahmood A, Dias J, Werghi N, Rajpoot N. Spatially Constrained Context-Aware Hierarchical Deep Correlation Filters for Nucleus Detection in Histology Images. Med Image Anal 2021; 72:102104. [PMID: 34242872 DOI: 10.1016/j.media.2021.102104] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2020] [Revised: 05/10/2021] [Accepted: 05/12/2021] [Indexed: 09/30/2022]
Abstract
Nucleus detection in histology images is a fundamental step for cellular-level analysis in computational pathology. In clinical practice, quantitative nuclear morphology can be used for diagnostic decision making, prognostic stratification, and treatment outcome prediction. Nucleus detection is a challenging task because of large variations in the shape of different types of nuclei, as well as nuclear clutter, heterogeneous chromatin distribution, and irregular and fuzzy boundaries. To address these challenges, we aim to accurately detect nuclei using spatially constrained context-aware correlation filters built on hierarchical deep features extracted from multiple layers of a pre-trained network. During training, we extract contextual patches around each nucleus, which are used as negative examples, while the actual nucleus patch is used as a positive example. In order to spatially constrain the correlation filters, we propose to construct a spatial structural graph across different nucleus components encoding pairwise similarities. The correlation filters are constrained to act as eigenvectors of the Laplacian of the spatial graphs, forcing them to capture the nucleus structure. A novel objective function is proposed by embedding graph-based structural information as well as the contextual information within the discriminative correlation filter framework. The learned filters are constrained to be orthogonal to both the contextual patches and the spatial graph-Laplacian basis to improve localization and discriminative performance. The proposed objective function trains a hierarchy of correlation filters on different deep feature layers to capture the heterogeneity in nuclear shape and texture. The proposed algorithm is evaluated on three publicly available datasets and compared with 15 current state-of-the-art methods, demonstrating competitive performance in terms of accuracy, speed, and generalization.
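The spatial-graph ingredient of this formulation can be illustrated by constructing the unnormalized graph Laplacian over pairwise similarities. The toy weights below are hypothetical; the paper additionally constrains its correlation filters to this Laplacian's eigenbasis, which is not reproduced here:

```python
import numpy as np

def graph_laplacian(w):
    """Unnormalized graph Laplacian L = D - W of a symmetric weighted
    adjacency matrix, where D is the diagonal degree matrix."""
    w = np.asarray(w, dtype=float)
    return np.diag(w.sum(axis=1)) - w

# Pairwise-similarity graph over three nucleus components (toy weights).
W = np.array([[0.0, 1.0, 0.5],
              [1.0, 0.0, 0.2],
              [0.5, 0.2, 0.0]])
L = graph_laplacian(W)
eigvals, eigvecs = np.linalg.eigh(L)
# The constant vector is always an eigenvector with eigenvalue 0;
# the remaining eigenvectors encode the graph's structure.
```

Because each row of L sums to zero, L is positive semi-definite with a zero eigenvalue, and its low-order eigenvectors form a smooth basis over the similarity graph, which is what makes them useful as structural constraints.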
Affiliation(s)
- Sajid Javed
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University, Abu Dhabi, UAE; Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Arif Mahmood
- Department of Computer Science, Information Technology University, Lahore, Pakistan
- Jorge Dias
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University, Abu Dhabi, UAE; Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Naoufel Werghi
- Khalifa University Center for Autonomous Robotic Systems (KUCARS), Khalifa University, Abu Dhabi, UAE; Department of Electrical Engineering and Computer Science, Khalifa University, Abu Dhabi, UAE
- Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, CV4 7AL, UK; Department of Pathology, University Hospitals Coventry and Warwickshire, Walsgrave, Coventry, CV2 2DX, UK; The Alan Turing Institute, London, NW1 2DB, UK
24
Kong H, Chen P. Mask R-CNN-based feature extraction and three-dimensional recognition of rice panicle CT images. PLANT DIRECT 2021; 5:e00323. [PMID: 33981945 PMCID: PMC8110429 DOI: 10.1002/pld3.323] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/13/2021] [Revised: 02/26/2021] [Accepted: 04/01/2021] [Indexed: 05/03/2023]
Abstract
The rice panicle seed setting rate is extremely important for calculating rice yield and performing genetic analysis. Unlike machine vision, X-ray computed tomography (CT) imaging is a nondestructive technique that provides direct information on the internal and external structure of rice panicles. However, occlusion and adhesion of panicles and grains in a CT image sequence make these objects difficult to identify, which in turn hinders accurate determination of the seed setting rate of rice panicles. Therefore, this paper proposes a method based on a mask region convolutional neural network (Mask R-CNN) for feature extraction and three-dimensional (3-D) recognition of CT images of rice panicles. X-ray CT feature characterization was combined with the Mask R-CNN algorithm to perform feature extraction and classification of a panicle and grains in each layer of the CT sequence. The Euclidean distance between adjacent layers was minimized to extract the features of a 3-D panicle and grains. The results were used to calculate the rice panicle seed setting rate. The proposed method was experimentally verified using eight sets of different rice panicles. The results showed that the proposed method can efficiently identify and count plump grains and blighted grains to achieve an accuracy above 99% for the seed setting rate.
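The adjacent-layer matching step, which links grains across the CT sequence by minimal Euclidean distance, can be sketched as a nearest-centroid assignment (the centroid coordinates below are illustrative, not from the paper's data):

```python
import numpy as np

def match_adjacent_layers(cents_a, cents_b):
    """For each centroid in layer A, the index of the nearest centroid
    in layer B by Euclidean distance (independent per-centroid match)."""
    a = np.asarray(cents_a, dtype=float)[:, None, :]   # (Na, 1, 2)
    b = np.asarray(cents_b, dtype=float)[None, :, :]   # (1, Nb, 2)
    dists = np.linalg.norm(a - b, axis=2)              # (Na, Nb) distances
    return dists.argmin(axis=1)

# Two grains detected in consecutive CT slices, with slight drift.
layer1 = [(10.0, 10.0), (40.0, 12.0)]
layer2 = [(41.0, 11.0), (9.5, 10.5)]
```

Chaining such matches slice by slice stitches 2D detections into 3D grain instances; a production version would also handle appearing and disappearing grains, which this greedy sketch ignores.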
Affiliation(s)
- Ping Chen
- North University of China, Taiyuan, China
25
Mota SM, Rogers RE, Haskell AW, McNeill EP, Kaunas R, Gregory CA, Giger ML, Maitland KC. Automated mesenchymal stem cell segmentation and machine learning-based phenotype classification using morphometric and textural analysis. J Med Imaging (Bellingham) 2021; 8:014503. [PMID: 33542945 PMCID: PMC7849042 DOI: 10.1117/1.jmi.8.1.014503] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2020] [Accepted: 01/11/2021] [Indexed: 01/22/2023] Open
Abstract
Purpose: Mesenchymal stem cells (MSCs) have demonstrated clinically relevant therapeutic effects for treatment of trauma and chronic diseases. The proliferative potential, immunomodulatory characteristics, and multipotentiality of MSCs in monolayer culture is reflected by their morphological phenotype. Standard techniques to evaluate culture viability are subjective, destructive, or time-consuming. We present an image analysis approach to objectively determine morphological phenotype of MSCs for prediction of culture efficacy. Approach: The algorithm was trained using phase-contrast micrographs acquired during the early and mid-logarithmic stages of MSC expansion. Cell regions are localized using edge detection, thresholding, and morphological operations, followed by cell marker identification using H-minima transform within each region to differentiate individual cells from cell clusters. Clusters are segmented using marker-controlled watershed to obtain single cells. Morphometric and textural features are extracted to classify cells based on phenotype using machine learning. Results: Algorithm performance was validated using an independent test dataset of 186 MSCs in 36 culture images. Results show 88% sensitivity and 86% precision for overall cell detection and a mean Sorensen-Dice coefficient of 0.849 ± 0.106 for segmentation per image. The algorithm exhibited an area under the curve of 0.816 (CI 95 = 0.769 to 0.886) and 0.787 (CI 95 = 0.716 to 0.851) for classifying MSCs according to their phenotype at early and mid-logarithmic expansion, respectively. Conclusions: The proposed method shows potential to segment and classify low and moderately dense MSCs based on phenotype with high accuracy and robustness. It enables quantifiable and consistent morphology-based quality assessment for various culture protocols to facilitate cytotherapy development.
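The Sorensen-Dice coefficient used here to validate segmentation per image can be computed directly from binary masks (the masks below are toy examples):

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Sorensen-Dice overlap between two binary masks:
    2|A intersect B| / (|A| + |B|); 1 means identical, 0 means disjoint."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0   # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom

mask_left = np.zeros((4, 4), dtype=bool)
mask_left[:, :2] = True            # left half of the image
mask_shift = np.zeros((4, 4), dtype=bool)
mask_shift[:, 1:3] = True          # same area, shifted one column
```

Averaging this score over the segmented cells in an image gives the per-image figure reported in the abstract.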
Affiliation(s)
- Sakina M. Mota
- Texas A&M University, Department of Biomedical Engineering, College Station, Texas, United States
- Robert E. Rogers
- Texas A&M Health Science Center, College of Medicine, Bryan, Texas, United States
- Andrew W. Haskell
- Texas A&M Health Science Center, College of Medicine, Bryan, Texas, United States
- Eoin P. McNeill
- Texas A&M Health Science Center, College of Medicine, Bryan, Texas, United States
- Roland Kaunas
- Texas A&M University, Department of Biomedical Engineering, College Station, Texas, United States
- Texas A&M Health Science Center, College of Medicine, Bryan, Texas, United States
- Carl A. Gregory
- Texas A&M Health Science Center, College of Medicine, Bryan, Texas, United States
- Maryellen L. Giger
- University of Chicago, Department of Radiology, Committee on Medical Physics, Chicago, Illinois, United States
- Kristen C. Maitland
- Texas A&M University, Department of Biomedical Engineering, College Station, Texas, United States
26
Objective Diagnosis for Histopathological Images Based on Machine Learning Techniques: Classical Approaches and New Trends. MATHEMATICS 2020. [DOI: 10.3390/math8111863] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/02/2023]
Abstract
Histopathology refers to the examination of biopsy samples by a pathologist. Histopathology images are captured by a microscope to locate, examine, and classify many diseases, such as different cancer types. They provide a detailed view of different types of diseases and their tissue status. These images are an essential resource for defining biological compositions or analyzing cell and tissue structures. This imaging modality is very important for diagnostic applications. The analysis of histopathology images is a prolific and relevant research area supporting disease diagnosis. In this paper, the challenges of histopathology image analysis are evaluated. An extensive review of conventional and deep learning techniques that have been applied in histological image analysis is presented. This review summarizes many current datasets and highlights important challenges and constraints of recent deep learning techniques, alongside possible future research avenues. Despite the progress made in this research area so far, it remains a significant area of open research because of the variety of imaging techniques and disease-specific characteristics.
27
Koyuncu CF, Gunesli GN, Cetin-Atalay R, Gunduz-Demir C. DeepDistance: A multi-task deep regression model for cell detection in inverted microscopy images. Med Image Anal 2020; 63:101720. [PMID: 32438298 DOI: 10.1016/j.media.2020.101720] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/13/2018] [Revised: 02/28/2020] [Accepted: 05/04/2020] [Indexed: 11/25/2022]
Abstract
This paper presents a new deep regression model, called DeepDistance, for cell detection in images acquired with inverted microscopy. This model considers cell detection as the task of finding the most probable locations that suggest cell centers in an image. It represents this main task as a regression task of learning an inner distance metric. However, unlike previously reported regression-based methods, the DeepDistance model approaches its learning as a multi-task regression problem in which multiple tasks are learned using shared feature representations. To this end, it defines a secondary metric, the normalized outer distance, to represent a different aspect of the problem, and defines its learning as complementary to the main cell detection task. To learn these two complementary tasks more effectively, the DeepDistance model designs a fully convolutional network (FCN) with a shared encoder path and trains this FCN end-to-end to learn the tasks concurrently in parallel. For further performance improvement on the main task, this paper also presents an extended version of the DeepDistance model that includes an auxiliary classification task and learns it in parallel to the two regression tasks, also sharing feature representations with them. DeepDistance uses the inner distances estimated by these FCNs in a detection algorithm to locate individual cells in a given image. In addition to this detection algorithm, the paper also suggests a cell segmentation algorithm that employs the estimated maps to find cell boundaries. Our experiments on three different human cell lines reveal that the proposed multi-task learning models, the DeepDistance model and its extended version, successfully identify the locations of cells as well as delineate their boundaries, even for the cell line that was not used in training, and improve on the results of their counterparts.
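The detection step, finding the most probable cell-center locations in an estimated distance map, can be illustrated with a plain local-maxima search. This is a generic sketch, not the paper's detection algorithm; the synthetic Gaussian bumps stand in for FCN output, and all names and thresholds are ours:

```python
import numpy as np
from scipy import ndimage as ndi

def detect_cells(distance_map, min_distance=5, threshold=0.2):
    """Return (row, col) coordinates of local maxima of an estimated
    inner-distance map; each maximum is read as one cell centre."""
    size = 2 * min_distance + 1
    local_max = ndi.maximum_filter(distance_map, size=size)
    peaks = (distance_map == local_max) & (distance_map > threshold)
    return np.argwhere(peaks)

# Synthetic map with two smooth bumps standing in for FCN output
yy, xx = np.mgrid[0:30, 0:30].astype(float)
dist = (np.exp(-((yy - 8) ** 2 + (xx - 8) ** 2) / 20.0)
        + np.exp(-((yy - 22) ** 2 + (xx - 20) ** 2) / 20.0))
print(detect_cells(dist))  # two centres, at/near (8, 8) and (22, 20)
```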
Affiliation(s)
- Gozde Nur Gunesli
- Department of Computer Engineering, Bilkent University, Ankara TR-06800, Turkey
- Rengul Cetin-Atalay
- CanSyL, Graduate School of Informatics, Middle East Technical University, Ankara TR-06800, Turkey
- Cigdem Gunduz-Demir
- Department of Computer Engineering, Bilkent University, Ankara TR-06800, Turkey
- Neuroscience Graduate Program, Bilkent University, Ankara TR-06800, Turkey
28
Boukari F, Makrogiannis S. Automated Cell Tracking Using Motion Prediction-Based Matching and Event Handling. IEEE/ACM TRANSACTIONS ON COMPUTATIONAL BIOLOGY AND BIOINFORMATICS 2020; 17:959-971. [PMID: 30334766 PMCID: PMC6832744 DOI: 10.1109/tcbb.2018.2875684] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Automated cell segmentation and tracking enables the quantification of static and dynamic cell characteristics and is significant for disease diagnosis, treatment, drug development, and other biomedical applications. This paper introduces a method for fully automated cell tracking, lineage construction, and quantification. Cell detection is performed in the joint spatio-temporal domain by a motion diffusion-based Partial Differential Equation (PDE) combined with energy minimizing active contours. In the tracking stage, we adopt a variational joint local-global optical flow technique to determine the motion vector field. We utilize the predicted cell motion jointly with spatial cell features to define a maximum likelihood criterion to find inter-frame cell correspondences assuming Markov dependency. We formulate cell tracking and cell event detection as a graph partitioning problem. We propose a solution obtained by minimization of a global cost function defined over the set of all cell tracks. We construct a cell lineage tree that represents the cell tracks and cell events. Finally, we compute morphological, motility, and diffusivity measures and validate cell tracking against manually generated reference standards. The automated tracking method applied to reference segmentation maps produces an average tracking accuracy score (TRA) of 99 percent, and the fully automated segmentation and tracking system produces an average TRA of 89 percent.
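Inter-frame correspondence under motion prediction can be sketched, in simplified form, as an assignment problem on predicted versus detected positions. The paper formulates this as graph partitioning with a maximum-likelihood criterion, so the Hungarian matching below is only an illustrative stand-in; the function name, toy coordinates, and distance threshold are ours:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_cells(predicted, detected, max_distance=15.0):
    """Match motion-predicted positions (frame t) to detections
    (frame t+1) by minimising total Euclidean distance; pairs farther
    apart than max_distance are rejected and left to event handling
    (division, entering/leaving the field of view)."""
    cost = np.linalg.norm(predicted[:, None, :] - detected[None, :, :], axis=2)
    rows, cols = linear_sum_assignment(cost)
    return [(int(r), int(c)) for r, c in zip(rows, cols)
            if cost[r, c] <= max_distance]

predicted = np.array([[10.0, 10.0], [40.0, 12.0], [70.0, 30.0]])
detected = np.array([[41.0, 13.0], [11.0, 9.0], [200.0, 200.0]])
print(match_cells(predicted, detected))  # [(0, 1), (1, 0)]
```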
29
Kowal M, Żejmo M, Skobel M, Korbicz J, Monczak R. Cell Nuclei Segmentation in Cytological Images Using Convolutional Neural Network and Seeded Watershed Algorithm. J Digit Imaging 2020; 33:231-242. [PMID: 31161430 PMCID: PMC7064474 DOI: 10.1007/s10278-019-00200-8] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023] Open
Abstract
Morphometric analysis of nuclei is crucial in cytological examinations. Unfortunately, nuclei segmentation presents many challenges because nuclei usually form complex clusters in cytological samples. To deal with this problem, we propose an approach that combines a convolutional neural network and the watershed transform to segment nuclei in cytological images of breast cancer. The method first preprocesses images using color deconvolution to highlight hematoxylin-stained objects (nuclei). Next, a convolutional neural network performs semantic segmentation of the preprocessed image, finding nuclei areas, cytoplasm areas, edges of nuclei, and background. All connected components in the binary mask of nuclei are treated as potential nuclei. However, some objects are actually clusters of overlapping nuclei; these are detected by their outlying values of morphometric features. An attempt is then made to separate them using seeded watershed segmentation, and, if the attempt is successful, they are included in the nuclei set. The accuracy of this approach is evaluated against reference, manually segmented images. The degree of matching between reference nuclei and discovered objects is measured with the Jaccard distance and the Hausdorff distance. As part of the study, we verified how using a convolutional neural network instead of intensity thresholding to generate a topographic map for the watershed improves segmentation outcomes. Our results show that the convolutional neural network outperforms Otsu thresholding and adaptive thresholding in most cases, especially in scenarios with many overlapping nuclei.
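As a point of reference for the comparison above, the Otsu thresholding baseline can be written in a few lines of NumPy (a textbook implementation, not the paper's code; the toy bimodal image is ours):

```python
import numpy as np

def otsu_threshold(image):
    """Classic Otsu: choose the grey level maximising the
    between-class variance of the histogram split."""
    hist = np.bincount(np.asarray(image, dtype=np.uint8).ravel(),
                       minlength=256).astype(float)
    prob = hist / hist.sum()
    omega = np.cumsum(prob)                  # class-0 probability
    mu = np.cumsum(prob * np.arange(256))    # class-0 cumulative mean
    with np.errstate(divide="ignore", invalid="ignore"):
        sigma_b = (mu[-1] * omega - mu) ** 2 / (omega * (1.0 - omega))
    sigma_b[~np.isfinite(sigma_b)] = 0.0
    return int(np.argmax(sigma_b))

# Bimodal toy image: background around 30, stained nuclei around 200
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(30, 5, 4000),
                              rng.normal(200, 10, 1000)]), 0, 255).astype(np.uint8)
t = otsu_threshold(img)
print(t)  # lands in the valley between the two modes
```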
Affiliation(s)
- Marek Kowal
- Institute of Control and Computation Engineering, University of Zielona Góra, Szafrana 2, 65-516, Zielona Góra, Poland
- Michał Żejmo
- Institute of Control and Computation Engineering, University of Zielona Góra, Szafrana 2, 65-516, Zielona Góra, Poland
- Marcin Skobel
- Institute of Control and Computation Engineering, University of Zielona Góra, Szafrana 2, 65-516, Zielona Góra, Poland
- Józef Korbicz
- Institute of Control and Computation Engineering, University of Zielona Góra, Szafrana 2, 65-516, Zielona Góra, Poland
- Roman Monczak
- Department of Pathology, University Hospital in Zielona Góra, Zyty 26, 65-046, Zielona Góra, Poland
30
Kowal M, Korbicz J. Refinement of Convolutional Neural Network Based Cell Nuclei Detection Using Bayesian Inference. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2019:7216-7222. [PMID: 31947499 DOI: 10.1109/embc.2019.8857950] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Cytological samples provide useful data for cancer diagnostics, but their visual analysis under a microscope is tedious and time-consuming. Moreover, studies indicate that different pathologists can classify the same sample differently, or that the same pathologist can classify a sample differently after a long interval between examinations. Pathologists can be assisted by tools for the automatic analysis of cellular structures. Unfortunately, cytological samples usually consist of clumped structures, so it is difficult to extract single cells to measure their morphometric parameters. To deal with this problem, we propose a nuclei detection approach that combines a convolutional neural network and Bayesian inference. The input image is preprocessed by a stain separation procedure to extract the blue dye (hematoxylin), which is mainly absorbed by nuclei. Next, a convolutional neural network is trained to provide a semantic segmentation of the image. Finally, the segmentation results are post-processed to detect nuclei: we model the nuclei distribution on a plane using a marked point process and apply Besag's iterated conditional modes to find the configuration of ellipses that best fits the nuclei distribution. This lets us represent clusters of occluded cell nuclei as a set of overlapping ellipses. The accuracy of the proposed method was tested on 50 cytological images of breast cancer; reference data were generated by manually labeling cell nuclei in the images. The effectiveness of the proposed method was compared with the marker-controlled watershed: we applied both methods to detect nuclei in the semantic segmentation maps generated by the convolutional neural network. Detection accuracy is measured as the number of true positive (TP) and false positive (FP) detections. The method correctly detects 93.5% of nuclei (TP) while generating only 6.1% FP, outperforming the marker-controlled watershed both in the number of correctly detected nuclei and in the number of false detections.
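Representing a nucleus by an ellipse can be illustrated with a moment-based fit: for a solid ellipse, the covariance eigenvalues of its pixel coordinates equal the squared semi-axes divided by four. This is a much simpler stand-in for the marked-point-process/ICM optimization used in the paper, with all names and the synthetic mask ours:

```python
import numpy as np

def equivalent_ellipse(mask):
    """Fit an ellipse to a binary region via second-order central
    moments: for a solid ellipse, the covariance eigenvalues of its
    pixel coordinates equal (semi-axis)^2 / 4."""
    pts = np.argwhere(mask).astype(float)        # (row, col) pairs
    centre = pts.mean(axis=0)
    eigvals = np.linalg.eigvalsh(np.cov(pts.T))  # ascending order
    semi_axes = 2.0 * np.sqrt(eigvals)
    return centre, semi_axes[::-1]               # (centre, [major, minor])

# Synthetic solid ellipse with semi-axes 20 and 8
yy, xx = np.mgrid[0:80, 0:80]
mask = ((xx - 40) / 20.0) ** 2 + ((yy - 40) / 8.0) ** 2 <= 1.0
centre, (a, b) = equivalent_ellipse(mask)
print(centre, a, b)  # ~(40, 40), a ~ 20, b ~ 8
```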
31
Image-based size analysis of agglomerated and partially sintered particles via convolutional neural networks. POWDER TECHNOL 2020. [DOI: 10.1016/j.powtec.2019.10.020] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022]
32
Bergh T, Johnstone DN, Crout P, Høgås S, Midgley PA, Holmestad R, Vullum PE, van Helvoort ATJ. Nanocrystal segmentation in scanning precession electron diffraction data. J Microsc 2019; 279:158-167. [DOI: 10.1111/jmi.12850] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2019] [Revised: 11/10/2019] [Accepted: 11/27/2019] [Indexed: 11/28/2022]
Affiliation(s)
- T. Bergh
- Department of Physics, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- D.N. Johnstone
- Department of Materials Science and Metallurgy, University of Cambridge, Cambridge, U.K.
- P. Crout
- Department of Materials Science and Metallurgy, University of Cambridge, Cambridge, U.K.
- S. Høgås
- Department of Physics, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- P.A. Midgley
- Department of Materials Science and Metallurgy, University of Cambridge, Cambridge, U.K.
- R. Holmestad
- Department of Physics, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- P.E. Vullum
- Department of Physics, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
- Department of Materials and Nanotechnology, SINTEF Industry, Trondheim, Norway
- A.T.J. van Helvoort
- Department of Physics, Norwegian University of Science and Technology (NTNU), Trondheim, Norway
33
Conceição T, Braga C, Rosado L, Vasconcelos MJM. A Review of Computational Methods for Cervical Cells Segmentation and Abnormality Classification. Int J Mol Sci 2019; 20:E5114. [PMID: 31618951 PMCID: PMC6834130 DOI: 10.3390/ijms20205114] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2019] [Revised: 10/07/2019] [Accepted: 10/09/2019] [Indexed: 02/07/2023] Open
Abstract
Cervical cancer is one of the most common cancers in women worldwide, affecting around 570,000 new patients each year. Although there have been great improvements over the years, current screening procedures can still suffer from long and tedious workflows and ambiguities. Interest in computer-aided solutions for cervical cancer screening is increasing, with the aim of easing these practical difficulties, which are especially frequent in the low-income countries where most cervical cancer deaths occur. In this review, an overview of the disease and its current screening procedures is first introduced. Furthermore, an in-depth analysis of the most relevant computational methods available in the literature for cervical cell analysis is presented. In particular, this work focuses on topics related to automated quality assessment, segmentation, and classification, including an extensive literature review and respective critical discussion. Since the major goal of this timely review is to support the development of new automated tools that can facilitate cervical screening procedures, this work also provides some considerations regarding the next generation of computer-aided diagnosis systems and future research directions.
Affiliation(s)
- Luís Rosado
- Fraunhofer Portugal AICOS, 4200-135 Porto, Portugal
34
Das DK, Koley S, Bose S, Maiti AK, Mitra B, Mukherjee G, Dutta PK. Computer aided tool for automatic detection and delineation of nucleus from oral histopathology images for OSCC screening. Appl Soft Comput 2019. [DOI: 10.1016/j.asoc.2019.105642] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
35
Tumor Malignancy Detection Using Histopathology Imaging. J Med Imaging Radiat Sci 2019; 50:514-528. [PMID: 31501064 DOI: 10.1016/j.jmir.2019.07.004] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2019] [Revised: 06/27/2019] [Accepted: 07/08/2019] [Indexed: 11/20/2022]
Abstract
Image segmentation and classification in biomedical imaging are of great value for cancer diagnosis and grading. The proposed method classifies images based on a combination of handcrafted features and shape features using a bag of visual words (BoW). The multistage segmentation technique for localizing nuclei in histopathology images includes stain decomposition and histogram equalization to highlight the nucleus region, followed by nuclei key point extraction using the fast radial symmetry transform, nuclei region estimation with a normalized graph cut, and nuclei boundary estimation using a modified gradient. Subsequently, handcrafted features from the localized regions and shape features using BoW are extracted for classification. The experiments use both the handcrafted features and BoW to take advantage of both local nuclei features and global spatial features. Evaluation on the Bisque and BreakHis data sets (average accuracies of 93.87% and 96.96%, respectively) confirms the better diagnostic performance of the proposed method.
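The BoW encoding step, mapping local descriptors to a histogram over a learned codebook, can be sketched as follows (toy codebook and descriptors, not the paper's features; in practice the codebook would come from clustering training descriptors):

```python
import numpy as np

def bow_histogram(descriptors, codebook):
    """Encode local descriptors as a normalised bag-of-visual-words
    histogram: each descriptor votes for its nearest codeword."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    words = d2.argmin(axis=1)                  # nearest codeword per descriptor
    hist = np.bincount(words, minlength=len(codebook)).astype(float)
    return hist / hist.sum()

codebook = np.array([[0.0, 0.0], [10.0, 10.0]])  # two toy visual words
descs = np.array([[0.5, 0.2], [9.0, 11.0], [10.5, 9.5], [1.0, -0.4]])
print(bow_histogram(descs, codebook))  # [0.5 0.5]
```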
36
Zhang P, Wang F, Teodoro G, Liang Y, Roy M, Brat D, Kong J. Effective nuclei segmentation with sparse shape prior and dynamic occlusion constraint for glioblastoma pathology images. J Med Imaging (Bellingham) 2019; 6:017502. [PMID: 30891467 DOI: 10.1117/1.jmi.6.1.017502] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2018] [Accepted: 02/19/2019] [Indexed: 11/14/2022] Open
Abstract
We propose a segmentation method for nuclei in glioblastoma histopathologic images based on a sparse shape prior guided variational level set framework. By spectral clustering and sparse coding, a set of shape priors is exploited to accommodate complicated shape variations. We automate the object contour initialization by a seed detection algorithm and deform contours by minimizing an energy functional that incorporates a shape term in a sparse shape prior representation, an adaptive contour occlusion penalty term, and a boundary term encouraging contours to converge to strong edges. As a result, our approach is able to deal with mutual occlusions and detect contours of multiple intersected nuclei simultaneously. Our method is applied to several whole-slide histopathologic image datasets for nuclei segmentation. The proposed method is compared with other state-of-the-art methods and demonstrates good accuracy for nuclei detection and segmentation, suggesting its promise to support biomedical image-based investigations.
Affiliation(s)
- Pengyue Zhang
- Stony Brook University, Department of Computer Science, Stony Brook, New York, United States
- Fusheng Wang
- Stony Brook University, Department of Biomedical Informatics and Computer Science, Stony Brook, New York, United States
- George Teodoro
- University of Brasília, Department of Computer Science, Brasília, Brazil
- Yanhui Liang
- Google Inc., Mountain View, California, United States
- Mousumi Roy
- Stony Brook University, Department of Computer Science, Stony Brook, New York, United States
- Daniel Brat
- Northwestern University, Department of Pathology, Chicago, Illinois, United States
- Jun Kong
- Emory University, Department of Computer Science and Biomedical Informatics, Atlanta, Georgia, United States
- Georgia State University, Department of Mathematics and Statistics, Atlanta, Georgia, United States
37
Xu J, Gong L, Wang G, Lu C, Gilmore H, Zhang S, Madabhushi A. Convolutional neural network initialized active contour model with adaptive ellipse fitting for nuclear segmentation on breast histopathological images. J Med Imaging (Bellingham) 2019; 6:017501. [PMID: 30840729 DOI: 10.1117/1.jmi.6.1.017501] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/24/2018] [Accepted: 01/07/2019] [Indexed: 11/14/2022] Open
Abstract
Automated detection and segmentation of nuclei from high-resolution histopathological images is a challenging problem owing to the size and complexity of digitized histopathologic images. In the context of breast cancer, morphological and topological nuclear features are highly correlated with the modified Bloom-Richardson grading system. Therefore, to develop a computer-aided prognosis system, automated detection and segmentation of nuclei are critical prerequisite steps. We present a method for automated detection and segmentation of breast cancer nuclei named the convolutional neural network initialized active contour model with adaptive ellipse fitting (CoNNACaeF). The CoNNACaeF model is able to detect and segment nuclei simultaneously and consists of three modules: (1) a convolutional neural network (CNN) for accurate nuclei detection, (2) a region-based active contour (RAC) model for subsequent nuclear segmentation based on the initial CNN-based detection of nuclear patches, and (3) adaptive ellipse fitting for resolving overlap in clumped nuclear regions. The performance of the CoNNACaeF model is evaluated on three different breast histological data sets, comprising a total of 257 H&E-stained images. The model is shown to have improved detection accuracy, with F-measures of 80.18%, 85.71%, and 80.36% and average areas under the precision-recall curve (AveP) of 77%, 82%, and 74% on a total of 3 million nuclei from 204 whole-slide images from three different datasets. Additionally, CoNNACaeF yielded F-measures of 74.01% and 85.36%, respectively, for two different breast cancer datasets. The CoNNACaeF model also outperformed three other state-of-the-art nuclear detection and segmentation approaches: the blue-ratio-initialized, iterative-radial-voting-initialized, and maximally-stable-extremal-region-initialized local region active contour models.
Affiliation(s)
- Jun Xu
- Nanjing University of Information Science and Technology, Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing, China
- Lei Gong
- Nanjing University of Information Science and Technology, Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing, China
- Guanhao Wang
- Nanjing University of Information Science and Technology, Jiangsu Key Laboratory of Big Data Analysis Technique, Nanjing, China
- Cheng Lu
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States
- Hannah Gilmore
- University Hospitals Case Medical Center, Case Western Reserve University, Institute for Pathology, Cleveland, Ohio, United States
- Shaoting Zhang
- University of North Carolina at Charlotte, Department of Computer Science, Charlotte, North Carolina, United States
- Anant Madabhushi
- Case Western Reserve University, Department of Biomedical Engineering, Cleveland, Ohio, United States
- Louis Stokes Cleveland Veterans Administration Medical Center, Cleveland, Ohio, United States
38
CIA-Net: Robust Nuclei Instance Segmentation with Contour-Aware Information Aggregation. LECTURE NOTES IN COMPUTER SCIENCE 2019. [DOI: 10.1007/978-3-030-20351-1_53] [Citation(s) in RCA: 67] [Impact Index Per Article: 11.2] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/03/2022]
39
Bai X, Sun C, Sun C. Cell Segmentation Based on FOPSO Combined With Shape Information Improved Intuitionistic FCM. IEEE J Biomed Health Inform 2019; 23:449-459. [DOI: 10.1109/jbhi.2018.2803020] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
40
Algorithms for 3D Particles Characterization Using X-Ray Microtomography in Proppant Crush Test. J Imaging 2018. [DOI: 10.3390/jimaging4110134] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
We present image processing algorithms for a new technique of ceramic proppant crush resistance characterization. To obtain images of the proppant material before and after the test, we used X-ray microtomography. We propose a watershed-based unsupervised algorithm for segmentation of proppant particles, as well as a set of parameters for the characterization of 3D particle size, shape, and porosity. An effective approach based on central geometric moments is described. The approach is used to calculate the particles' form factor, compactness, equivalent ellipsoid axis lengths, and lengths of projections onto these axes. The obtained grain size distribution and crush resistance fit the results of the conventional sieve-based test. However, our technique has a remarkable advantage over the traditional laboratory method, since it allows destruction to be traced at the level of individual particles and their fragments and permits analysis of the morphological features of fines. We also provide an example describing how the approach can be used to verify statistical hypotheses about the correlation between particles' parameters and their crushing under load.
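The moment-based shape description can be illustrated in 3D: for a solid ellipsoid, each covariance eigenvalue of the voxel coordinates equals the corresponding squared semi-axis divided by five, which yields the equivalent-ellipsoid axes directly. This is a generic sketch (the paper's full parameter set also covers form factor, compactness, and axis projections); the synthetic particle is ours:

```python
import numpy as np

def equivalent_ellipsoid(voxels):
    """Semi-axis lengths (a >= b >= c) of the ellipsoid with the same
    second-order central moments as the voxelised particle; a solid
    ellipsoid has covariance eigenvalue = (semi-axis)^2 / 5."""
    pts = np.argwhere(voxels).astype(float)
    eigvals = np.linalg.eigvalsh(np.cov((pts - pts.mean(axis=0)).T))
    return np.sqrt(5.0 * eigvals)[::-1]

# Voxelised solid ellipsoid with semi-axes 16, 10, and 6
zz, yy, xx = np.mgrid[0:48, 0:48, 0:48]
particle = (((xx - 24) / 16.0) ** 2 + ((yy - 24) / 10.0) ** 2
            + ((zz - 24) / 6.0) ** 2) <= 1.0
print(equivalent_ellipsoid(particle))  # ~[16, 10, 6]
```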
41
Casiraghi E, Huber V, Frasca M, Cossa M, Tozzi M, Rivoltini L, Leone BE, Villa A, Vergani B. A novel computational method for automatic segmentation, quantification and comparative analysis of immunohistochemically labeled tissue sections. BMC Bioinformatics 2018; 19:357. [PMID: 30367588 PMCID: PMC6191943 DOI: 10.1186/s12859-018-2302-3] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/08/2023] Open
Abstract
Background: In clinical practice, the objective quantification of histological results is essential not only to define objective and well-established protocols for diagnosis, treatment, and assessment, but also to ameliorate disease comprehension. Software: The software MIAQuant_Learn presented in this work segments, quantifies, and analyzes markers in histochemical and immunohistochemical images obtained by different biological procedures and imaging tools. MIAQuant_Learn employs supervised learning techniques to customize the marker segmentation process with respect to any marker color appearance. Our software expresses the location of the segmented markers with respect to regions of interest by mean-distance histograms, which are numerically compared by measuring their intersection. When contiguous tissue sections stained with different markers are available, MIAQuant_Learn aligns them and overlaps the segmented markers in a unique image, enabling a visual comparative analysis of the spatial distribution of each marker (the markers' relative location). Additionally, it computes novel measures of marker co-existence in tissue volumes depending on their density. Conclusions: Applications of MIAQuant_Learn in clinical research studies have proven its effectiveness as a fast and efficient tool for the automatic extraction, quantification, and analysis of histological sections. It is robust with respect to several deficits caused by image acquisition systems and produces objective and reproducible results. Thanks to its flexibility, MIAQuant_Learn represents an important tool to be exploited in basic research, where needs are constantly changing.
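The histogram comparison described above, intersection of normalized mean-distance histograms, reduces to a sum of bin-wise minima. A minimal sketch (function name and toy data ours, not MIAQuant_Learn's code):

```python
import numpy as np

def histogram_intersection(h1, h2):
    """Similarity of two normalised histograms as the sum of bin-wise
    minima: 1.0 for identical distributions, 0.0 for disjoint ones."""
    h1 = np.asarray(h1, float); h1 = h1 / h1.sum()
    h2 = np.asarray(h2, float); h2 = h2 / h2.sum()
    return float(np.minimum(h1, h2).sum())

near = np.array([4.0, 3.0, 2.0, 1.0])  # marker counts close to the ROI
far = np.array([1.0, 2.0, 3.0, 4.0])   # marker counts far from it
print(histogram_intersection(near, near))  # ~1.0 (identical)
print(histogram_intersection(near, far))   # ~0.6 (partial overlap)
```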
Affiliation(s)
- Elena Casiraghi
- Department of Computer Science "Giovanni Degli Antoni", Università degli Studi di Milano, Via Celoria 18, 20135, Milan, Italy
- Veronica Huber
- Unit of Immunotherapy of Human Tumors, Department of Experimental Oncology and Molecular Medicine, Fondazione IRCCS Istituto Nazionale dei Tumori, Milan, Italy
- Marco Frasca
- Department of Computer Science "Giovanni Degli Antoni", Università degli Studi di Milano, Via Celoria 18, 20135, Milan, Italy
- Mara Cossa
- Unit of Immunotherapy of Human Tumors, Department of Experimental Oncology and Molecular Medicine, Fondazione IRCCS Istituto Nazionale dei Tumori, Milan, Italy
- Matteo Tozzi
- Department of Medicine and Surgery, Vascular Surgery, University of Insubria Hospital, Varese, Italy
- Licia Rivoltini
- Unit of Immunotherapy of Human Tumors, Department of Experimental Oncology and Molecular Medicine, Fondazione IRCCS Istituto Nazionale dei Tumori, Milan, Italy
- Antonello Villa
- School of Medicine and Surgery, University of Milano Bicocca, Monza, Italy
- Consorzio MIA - Microscopy and Image Analysis, University of Milano Bicocca, Monza, Italy
- Barbara Vergani
- School of Medicine and Surgery, University of Milano Bicocca, Monza, Italy
- Consorzio MIA - Microscopy and Image Analysis, University of Milano Bicocca, Monza, Italy
42
A deep learning-based algorithm for 2-D cell segmentation in microscopy images. BMC Bioinformatics 2018; 19:365. [PMID: 30285608 PMCID: PMC6171227 DOI: 10.1186/s12859-018-2375-z] [Citation(s) in RCA: 107] [Impact Index Per Article: 15.3] [Reference Citation Analysis] [Abstract] [Key Words] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2018] [Accepted: 09/17/2018] [Indexed: 12/04/2022] Open
Abstract
Background: Automatic and reliable characterization of cells in cell cultures is key to several applications such as cancer research and drug discovery. Given the recent advances in light microscopy and the need for accurate and high-throughput analysis of cells, automated algorithms have been developed for segmenting and analyzing the cells in microscopy images. Nevertheless, accurate, generic, and robust whole-cell segmentation remains a persistent need for precisely quantifying cell morphology, phenotypes, and sub-cellular dynamics. Results: We present a single-channel whole-cell segmentation algorithm. We use markers that stain the whole cell, but with less staining in the nucleus, and without using a separate nuclear stain. We show the utility of our approach on microscopy images of cell cultures in a wide variety of conditions. Our algorithm uses a deep learning approach to learn and predict the locations of cells and their nuclei, and combines that with thresholding and watershed-based segmentation. We trained and validated our approach using different sets of images, containing cells stained with various markers and imaged at different magnifications. Our approach achieved an 86% similarity to ground truth segmentation when identifying and separating cells. Conclusions: The proposed algorithm is able to automatically segment cells from single-channel images using a variety of markers and magnifications. Electronic supplementary material: The online version of this article (10.1186/s12859-018-2375-z) contains supplementary material, which is available to authorized users.
43
Mouelhi A, Rmili H, Ali JB, Sayadi M, Doghri R, Mrad K. Fast unsupervised nuclear segmentation and classification scheme for automatic Allred cancer scoring in immunohistochemical breast tissue images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2018; 165:37-51. [PMID: 30337080 DOI: 10.1016/j.cmpb.2018.08.005]
Abstract
BACKGROUND AND OBJECTIVE This paper presents an improved scheme for accurate segmentation and classification of cancer nuclei in immunohistochemical (IHC) breast tissue images, providing a quantitative evaluation of estrogen/progesterone (ER/PR) receptor status to assist pathologists in the cancer diagnostic process. METHODS The proposed segmentation method is based on adaptive local thresholding and an enhanced morphological procedure, which are applied to extract all stained nuclei regions and to split overlapping nuclei. Specifically, a new segmentation approach is presented for cell nuclei detection in IHC images using a modified Laplacian filter and an improved watershed algorithm. Stromal cells are then removed from the segmented image using an adaptive criterion to enable fast tumor nuclei recognition. Finally, unsupervised classification of cancer nuclei is obtained by combining four common color separation techniques for subsequent Allred cancer scoring. RESULTS Experimental results on various IHC tissue images from different cancer patients demonstrate the effectiveness of the proposed scheme when compared with the manual scoring of expert pathologists. A statistical analysis comparing the immuno-scores of the manual and automatic methods was performed on the whole image database, alongside the scores obtained by other state-of-the-art segmentation and classification strategies. The proposed scheme achieved more than 98% accuracy for both nuclei detection and image cancer scoring against the ground truth provided by experienced pathologists, the best correlation with the experts' scores (Pearson's correlation coefficient = 0.993, p-value < 0.005), and the lowest total computation time, 72.3 s/image (±1.9), among the recently studied methods.
CONCLUSIONS The proposed scheme can easily be applied to any histopathological diagnostic process that requires stained-nucleus quantification and cancer grading. Moreover, its reduced processing time and manual interaction facilitate implementation in a real-time device for a fully online evaluation system of IHC tissue images.
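The adaptive local thresholding step this scheme starts from can be sketched in a few lines. This is an illustrative stand-in, not the authors' implementation; the block size, offset, and toy image are made-up parameters:

```python
import numpy as np
from scipy import ndimage as ndi

def adaptive_local_threshold(image, block=15, offset=0.02):
    """Keep a pixel if it exceeds the mean of its block x block
    neighborhood by `offset`; unlike a single global threshold,
    this tolerates uneven staining and illumination."""
    local_mean = ndi.uniform_filter(image.astype(float), size=block)
    return image > local_mean + offset

# Toy image: two faint "nuclei" on a strong illumination gradient,
# where one global threshold would miss one of them.
img = np.tile(np.linspace(0.0, 1.0, 100), (50, 1))  # background ramp
img[10:15, 10:15] += 0.3   # nucleus on the dark side
img[38:43, 88:93] += 0.3   # nucleus on the bright side
mask = adaptive_local_threshold(img)
```

Both spots end up in `mask` while the ramp background does not, because each pixel is only compared against its own neighborhood.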
MESH Headings
- Algorithms
- Breast Neoplasms/classification
- Breast Neoplasms/diagnostic imaging
- Breast Neoplasms/metabolism
- Carcinoma, Ductal, Breast/classification
- Carcinoma, Ductal, Breast/diagnostic imaging
- Carcinoma, Ductal, Breast/metabolism
- Cell Nucleus/classification
- Cell Nucleus/metabolism
- Cell Nucleus/pathology
- Female
- Humans
- Image Interpretation, Computer-Assisted/methods
- Image Interpretation, Computer-Assisted/statistics & numerical data
- Immunohistochemistry/methods
- Immunohistochemistry/statistics & numerical data
- Receptors, Estrogen/metabolism
- Receptors, Progesterone/metabolism
- Staining and Labeling
- Unsupervised Machine Learning
Affiliation(s)
- Aymen Mouelhi
- University of Tunis, ENSIT, LR13ES03 SIME, Montfleury 1008, Tunisia.
- Hana Rmili
- University of Tunis El-Manar, ISTMT, Laboratory of Biophysics and Medical Technologies, Tunisia.
- Jaouher Ben Ali
- University of Tunis, ENSIT, LR13ES03 SIME, Montfleury 1008, Tunisia; FEMTO-ST Institute, AS2M department, UMR CNRS 6174 - UFC / ENSMM / UTBM, Besançon 25000, France.
- Mounir Sayadi
- University of Tunis, ENSIT, LR13ES03 SIME, Montfleury 1008, Tunisia.
- Raoudha Doghri
- Salah Azaiez Institute of Oncology, Morbid Anatomy Service, bd du 9 avril, Bab Saadoun, Tunis 1006, Tunisia.
- Karima Mrad
- Salah Azaiez Institute of Oncology, Morbid Anatomy Service, bd du 9 avril, Bab Saadoun, Tunis 1006, Tunisia.
44
Koyuncu CF, Cetin-Atalay R, Gunduz-Demir C. Object-Oriented Segmentation of Cell Nuclei in Fluorescence Microscopy Images. Cytometry A 2018; 93:1019-1028. [PMID: 30211975 DOI: 10.1002/cyto.a.23594]
Abstract
Cell nucleus segmentation remains an open and challenging problem, especially for nuclei in cell clumps. Splitting a cell clump would be straightforward if the gradients of boundary pixels between nuclei were always higher than those elsewhere. However, imperfections exist: intensity inhomogeneities within a nucleus may lead to spurious boundaries, whereas insufficient intensity differences at the border of overlapping nuclei may cause true boundary pixels to be missed. These imperfections are typically observed at the pixel level, causing local changes in pixel values without changing the semantics on a larger scale. In response to these issues, this article introduces a new nucleus segmentation method that relies on gradient information not at the pixel level but at the object level. To this end, it proposes to decompose an image into smaller homogeneous subregions, define edge-objects at four different orientations to encode the gradient information at the object level, and devise a merging algorithm in which the edge-objects vote for subregion pairs along their orientations and the pairs are iteratively merged if they receive sufficient votes from multiple orientations. Our experiments on fluorescence microscopy images reveal that this high-level representation and the design of a merging algorithm using edge-objects (gradients at the object level) improve the segmentation results.
Affiliation(s)
- Rengul Cetin-Atalay
- Graduate School of Informatics, Middle East Technical University, 06800, Ankara, Turkey
- Cigdem Gunduz-Demir
- Computer Engineering Department, Bilkent University, 06800, Ankara, Turkey; Neuroscience Graduate Program, Bilkent University, 06800, Ankara, Turkey
45
Win KY, Choomchuay S, Hamamoto K, Raveesunthornkiat M. Comparative Study on Automated Cell Nuclei Segmentation Methods for Cytology Pleural Effusion Images. JOURNAL OF HEALTHCARE ENGINEERING 2018; 2018:9240389. [PMID: 30344991 PMCID: PMC6164204 DOI: 10.1155/2018/9240389]
Abstract
Automated cell nuclei segmentation is the most crucial step toward implementing a computer-aided diagnosis system for cancer cells. Studies on the automated analysis of cytology pleural effusion images are few because of the lack of reliable cell nuclei segmentation methods. Therefore, this paper presents a comparative study of twelve nuclei segmentation methods for cytology pleural effusion images. Each method involves three main steps: preprocessing, segmentation, and postprocessing. The preprocessing and segmentation stages enhance the image quality and extract the nuclei regions from the rest of the image, respectively. The postprocessing stage refines the segmented nuclei and removes false findings. The segmentation methods are quantitatively evaluated on 35 cytology images of pleural effusion using five performance metrics. The evaluation results show that the segmentation performances of the Otsu, k-means, mean shift, Chan-Vese, and graph cut methods are 94, 94, 95, 94, and 93%, respectively, with high abnormal nuclei detection rates. The average computational times per image are 1.08, 36.62, 50.18, 330, and 44.03 seconds, respectively. The findings of this study will be useful for current and future studies on cytology images of pleural effusion.
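A minimal version of the threshold-then-split pipeline that the compared methods build on (e.g., the Otsu variant) can be sketched as an Otsu binarization followed by a distance-transform-seeded watershed. This is a toy illustration on synthetic data, not any of the paper's twelve implementations, and the `min_distance` seed spacing is an assumed parameter:

```python
import numpy as np
from scipy import ndimage as ndi
from skimage.filters import threshold_otsu
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_nuclei(image):
    """Binarize with Otsu's threshold, then split touching objects
    with a watershed seeded at local maxima of the distance map."""
    binary = image > threshold_otsu(image)
    distance = ndi.distance_transform_edt(binary)
    # One seed near the center of each nucleus.
    coords = peak_local_max(distance, min_distance=5, labels=binary)
    markers = np.zeros(image.shape, dtype=int)
    markers[tuple(coords.T)] = np.arange(1, len(coords) + 1)
    return watershed(-distance, markers, mask=binary)

# Two overlapping synthetic "nuclei" (disks of radius 10).
yy, xx = np.mgrid[0:60, 0:60]
img = ((((yy - 30) ** 2 + (xx - 20) ** 2) < 100)
       | (((yy - 30) ** 2 + (xx - 38) ** 2) < 100)).astype(float)
labels = segment_nuclei(img)
```

A plain threshold would return the two disks as a single connected blob; the watershed pass assigns the two disk centers to different labels.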
Affiliation(s)
- Khin Yadanar Win
- Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand
- Somsak Choomchuay
- Faculty of Engineering, King Mongkut's Institute of Technology Ladkrabang, Bangkok, Thailand
- Kazuhiko Hamamoto
- School of Information and Telecommunication Engineering, Tokai University, Tokyo, Japan
46
Tareef A, Song Y, Huang H, Feng D, Chen M, Wang Y, Cai W. Multi-Pass Fast Watershed for Accurate Segmentation of Overlapping Cervical Cells. IEEE TRANSACTIONS ON MEDICAL IMAGING 2018; 37:2044-2059. [PMID: 29993863 DOI: 10.1109/tmi.2018.2815013]
Abstract
Segmenting cell nuclei and cytoplasm in Pap smear images is one of the most challenging tasks in automated cervical cytological analysis, due specifically to the presence of overlapping cells. This paper introduces a multi-pass fast watershed-based method (MPFW) that segments both nucleus and cytoplasm from large masses of overlapping cervical cells in three watershed passes. The first pass locates the nuclei with a barrier-based watershed on the gradient-based edge map of a pre-processed image. The next pass segments the isolated, touching, and partially overlapping cells with a watershed transform adapted to cell shape and location. The final pass applies mutual iterative watersheds separately to each nucleus in largely overlapping clusters to estimate the cell shape. In MPFW, the line-shaped contours of the watershed cells are deformed with ellipse fitting and contour adjustment to better represent cell shapes. The performance of the proposed method has been evaluated using synthetic, real extended-depth-of-field, and multi-layer cervical cytology images provided by the first and second overlapping cervical cytology image segmentation challenges at ISBI 2014 and ISBI 2015. The experimental results demonstrate the superior performance of MPFW in terms of segmentation accuracy, detection rate, and time complexity compared with recent peer methods.
47
Nurzynska K, Mikhalkin A, Piorkowski A. CAS: Cell Annotation Software - Research on Neuronal Tissue Has Never Been so Transparent. Neuroinformatics 2018; 15:365-382. [PMID: 28849545 PMCID: PMC5671565 DOI: 10.1007/s12021-017-9340-2]
Abstract
CAS (Cell Annotation Software) is a novel tool for analyzing microscopic images and selecting the cell soma or nucleus, depending on the research objectives in medicine, biology, bioinformatics, etc. It replaces time-consuming and tiresome manual analysis of single images not only with automatic methods for object segmentation based on the Statistical Dominance Algorithm, but also with semi-automatic tools for object selection within a marked region of interest. For each image, a broad set of object parameters is computed, including shape features and optical and topographic characteristics, giving additional insight into the data. Our solution for cell detection and analysis has been verified on microscopic data, and its application to annotation of the lateral geniculate nucleus has been examined in a case study.
Affiliation(s)
- Karolina Nurzynska
- Institute of Informatics, Silesian University of Technology, Gliwice, Poland.
- Aleksandr Mikhalkin
- Laboratory of Neuromorphology, Pavlov Institute of Physiology RAS, St. Petersburg, Russia
- Adam Piorkowski
- Department of Geoinformatics and Applied Computer Science, AGH University of Science and Technology, Cracow, Poland
48
Frei M, Kruis FE. Fully automated primary particle size analysis of agglomerates on transmission electron microscopy images via artificial neural networks. POWDER TECHNOL 2018. [DOI: 10.1016/j.powtec.2018.03.032]
49
Representation learning-based unsupervised domain adaptation for classification of breast cancer histopathology images. Biocybern Biomed Eng 2018. [DOI: 10.1016/j.bbe.2018.04.008]
50
Abstract
Neuronal soma segmentation is essential for morphology quantification analysis. Rapid advances in light-microscope imaging techniques have generated such massive amounts of data that time-consuming manual methods cannot meet high-throughput requirements. However, segmenting touching somata remains a challenge for automatic methods. In this paper, we propose a soma segmentation method that combines the Rayburst sampling algorithm and ellipsoid fitting. The improved Rayburst sampling algorithm detects the soma surface; the ellipsoid fitting method then refines the jagged sampled surface into smooth ellipsoidal shapes for efficient analysis. In experiments, we validated the proposed method on datasets from the fluorescence micro-optical sectioning tomography (fMOST) system. The results indicate that the proposed method is comparable to the manually segmented gold standard, with accurate soma segmentation at relatively high speed. The method can be extended to large-scale image stacks in the future.
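The ray-casting-plus-smooth-fitting idea can be illustrated with a 2-D toy analogue: cast rays from a seed until the intensity drops, then replace the jagged boundary with ellipse axis lengths. The paper's method is 3-D Rayburst sampling with ellipsoid fitting; this simplified 2-D sketch (ray count, step size, and the PCA-based axis estimate) is our own illustration, not the authors' code:

```python
import numpy as np

def rayburst_boundary_2d(image, seed, n_rays=64, thresh=0.5, step=0.5):
    """Cast evenly spaced rays from the seed and record where the
    intensity first falls below `thresh` or leaves the image."""
    cy, cx = seed
    pts = []
    for ang in np.linspace(0.0, 2.0 * np.pi, n_rays, endpoint=False):
        dy, dx = np.sin(ang), np.cos(ang)
        r = 0.0
        while True:
            y, x = int(round(cy + r * dy)), int(round(cx + r * dx))
            if (not (0 <= y < image.shape[0] and 0 <= x < image.shape[1])
                    or image[y, x] < thresh):
                break
            r += step
        pts.append((cy + r * dy, cx + r * dx))
    return np.asarray(pts)

def ellipse_axes(points):
    """Smooth the jagged sampled boundary into full ellipse axis
    lengths via PCA of the boundary points (for uniformly sampled
    ray angles, each eigenvalue is about (semi-axis)^2 / 2)."""
    centered = points - points.mean(axis=0)
    evals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
    return 2.0 * np.sqrt(2.0 * evals)   # (major, minor)

# Synthetic "soma": a bright disk of radius 10.
yy, xx = np.mgrid[0:50, 0:50]
img = (((yy - 25) ** 2 + (xx - 25) ** 2) < 100).astype(float)
boundary = rayburst_boundary_2d(img, seed=(25, 25))
major, minor = ellipse_axes(boundary)
```

For the disk, both recovered axes come out close to the true diameter of 20 pixels, even though the individual ray hits are quantized to the pixel grid.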
Affiliation(s)
- Tianyu Hu
- Britton Chance Center for Biomedical Photonics, School of Engineering Sciences, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, China
- MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Qiufeng Xu
- Britton Chance Center for Biomedical Photonics, School of Engineering Sciences, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, China
- MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Wei Lv
- Britton Chance Center for Biomedical Photonics, School of Engineering Sciences, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, China
- MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, 430074, China
- Qian Liu
- Britton Chance Center for Biomedical Photonics, School of Engineering Sciences, Wuhan National Laboratory for Optoelectronics-Huazhong University of Science and Technology, Wuhan, 430074, China.
- MoE Key Laboratory for Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, 430074, China.