201. Dai D, Tang C, Wang G, Xia S. Building partially understandable convolutional neural networks by differentiating class-related neural nodes. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.04.003

202. An optimal feature selection method for histopathology tissue image classification using adaptive jaya algorithm. Evolutionary Intelligence 2021. DOI: 10.1007/s12065-019-00205-w

203. Huang Y, Zhu H, Duan X, Hong X, Sun H, Lv W, Lu L, Feng Q. GapFill-Recon Net: A Cascade Network for simultaneously PET Gap Filling and Image Reconstruction. Computer Methods and Programs in Biomedicine 2021; 208:106271. PMID: 34274612; DOI: 10.1016/j.cmpb.2021.106271

Abstract
PET image reconstruction from incomplete data is an important and challenging problem in medical imaging; for example, the gaps between adjacent detector blocks generally introduce a partial loss of projection data. This work proposes an efficient convolutional neural network (CNN) framework, called GapFill-Recon Net, that jointly reconstructs PET images and their associated sinogram data. GapFill-Recon Net includes two blocks: the Gap-Filling block first addresses the sinogram gaps, and the Image-Recon block then maps the filled sinogram directly onto the final image. A total of 43,660 pairs of synthetic 2D PET sinograms with gaps and images generated from the MOBY phantom are utilized for network training, testing and validation. Whole-body mouse Monte Carlo (MC) simulated data are also used for evaluation. The experimental results show that the reconstructed image quality of GapFill-Recon Net outperforms filtered back-projection (FBP) and maximum likelihood expectation maximization (MLEM) in terms of the structural similarity index metric (SSIM), relative root mean squared error (rRMSE), and peak signal-to-noise ratio (PSNR). Moreover, the reconstruction speed is equivalent to that of FBP and nearly 83 times faster than that of MLEM. In conclusion, compared with traditional reconstruction algorithms, GapFill-Recon Net achieves relatively optimal performance in image quality and reconstruction speed, effectively striking a balance between efficiency and performance.

Affiliation(s)
- Yanchao Huang: School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China; Nanfang PET Center, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong 510515, China
- Huobiao Zhu, Xiaotong Hong, Hao Sun, Wenbing Lv, Lijun Lu, Qianjin Feng: School of Biomedical Engineering and Guangdong Provincial Key Laboratory of Medical Image Processing, Southern Medical University, Guangzhou, Guangdong 510515, China; Guangdong Province Engineering Laboratory for Medical Imaging and Diagnostic Technology, Southern Medical University, Guangzhou, China
- Xiaoman Duan: Division of Biomedical Engineering, College of Engineering, University of Saskatchewan, Saskatoon, SK S7N5A9, Canada

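To make the cascade design above concrete, here is a minimal PyTorch sketch of the "fill, then reconstruct" idea: a gap-filling block that inpaints the sinogram, and a reconstruction block that maps the filled sinogram to image space, trained jointly with losses on both outputs. All layer sizes, class names, and the residual formulation are illustrative assumptions rather than the authors' published architecture, and the domain transform that a real sinogram-to-image mapping requires is elided.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GapFillBlock(nn.Module):
    """Inpaints the detector-gap regions of a sinogram (residual formulation)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, sino_with_gaps):
        # The block only needs to learn a correction on top of its input.
        return sino_with_gaps + self.net(sino_with_gaps)

class ImageReconBlock(nn.Module):
    """Maps the filled sinogram toward image space. A plain conv stack stands
    in for the learned domain transform a real implementation would need."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1),
        )

    def forward(self, sino):
        return self.net(sino)

class CascadeNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.fill = GapFillBlock()
        self.recon = ImageReconBlock()

    def forward(self, sino_with_gaps):
        filled = self.fill(sino_with_gaps)
        return filled, self.recon(filled)

# Joint training step with placeholder tensors standing in for real data:
model = CascadeNet()
gapped = torch.randn(4, 1, 128, 128)    # sinograms with detector gaps
sino_gt = torch.randn(4, 1, 128, 128)   # gap-free sinograms (placeholder)
img_gt = torch.randn(4, 1, 128, 128)    # reference images (placeholder)
filled, image = model(gapped)
loss = F.mse_loss(filled, sino_gt) + F.mse_loss(image, img_gt)
loss.backward()
```

Supervising both stages at once is what makes the design a cascade rather than two independently trained networks: gradients from the image loss also shape the gap-filling block.
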
204. Freeman K, Geppert J, Stinton C, Todkill D, Johnson S, Clarke A, Taylor-Phillips S. Use of artificial intelligence for image analysis in breast cancer screening programmes: systematic review of test accuracy. BMJ 2021; 374:n1872. PMID: 34470740; PMCID: PMC8409323; DOI: 10.1136/bmj.n1872

Abstract
OBJECTIVE To examine the accuracy of artificial intelligence (AI) for the detection of breast cancer in mammography screening practice. DESIGN Systematic review of test accuracy studies. DATA SOURCES Medline, Embase, Web of Science, and Cochrane Database of Systematic Reviews from 1 January 2010 to 17 May 2021. ELIGIBILITY CRITERIA Studies reporting the test accuracy of AI algorithms, alone or in combination with radiologists, to detect cancer in women's digital mammograms in screening practice, or in test sets. The reference standard was biopsy with histology or follow-up (for screen-negative women). Outcomes included test accuracy and cancer type detected. STUDY SELECTION AND SYNTHESIS Two reviewers independently assessed articles for inclusion and assessed the methodological quality of included studies using the Quality Assessment of Diagnostic Accuracy Studies-2 (QUADAS-2) tool. A single reviewer extracted data, which were checked by a second reviewer. Narrative data synthesis was performed. RESULTS Twelve studies totalling 131,822 screened women were included. No prospective studies measuring the test accuracy of AI in screening practice were found. Studies were of poor methodological quality. Three retrospective studies compared AI systems with the clinical decisions of the original radiologist, including 79,910 women, of whom 1,878 had screen-detected cancer or interval cancer within 12 months of screening. Thirty-four (94%) of 36 AI systems evaluated in these studies were less accurate than a single radiologist, and all were less accurate than the consensus of two or more radiologists. Five smaller studies (1,086 women, 520 cancers) at high risk of bias and of low generalisability to the clinical context reported that all five evaluated AI systems (as standalone replacements for a radiologist or as a reader aid) were more accurate than a single radiologist reading a test set in the laboratory. In three studies, AI used for triage screened out 53%, 45%, and 50% of women at low risk but also 10%, 4%, and 0% of cancers detected by radiologists. CONCLUSIONS Current evidence for AI does not yet allow judgement of its accuracy in breast cancer screening programmes, and it is unclear where on the clinical pathway AI might be of most benefit. AI systems are not sufficiently specific to replace radiologist double reading in screening programmes. Promising results in smaller studies are not replicated in larger studies. Prospective studies are required to measure the effect of AI in clinical practice. Such studies will require clear stopping rules to ensure that AI does not reduce programme specificity. STUDY REGISTRATION Protocol registered as PROSPERO CRD42020213590.

Affiliation(s)
- Karoline Freeman, Julia Geppert, Chris Stinton, Daniel Todkill, Samantha Johnson, Aileen Clarke: Division of Health Sciences, University of Warwick, Coventry, UK

205. Zhang X, Cornish TC, Yang L, Bennett TD, Ghosh D, Xing F. Generative Adversarial Domain Adaptation for Nucleus Quantification in Images of Tissue Immunohistochemically Stained for Ki-67. JCO Clin Cancer Inform 2021; 4:666-679. PMID: 32730116; PMCID: PMC7397778; DOI: 10.1200/cci.19.00108

Abstract
PURPOSE We focus on the problem of scarcity of annotated training data for nucleus recognition in Ki-67 immunohistochemistry (IHC)-stained pancreatic neuroendocrine tumor (NET) images. We hypothesize that deep learning-based domain adaptation is helpful for nucleus recognition when image annotations are unavailable in target data sets. METHODS We considered two different institutional pancreatic NET data sets: one (ie, source) containing 38 cases with 114 annotated images and the other (ie, target) containing 72 cases with 20 annotated images. The gold standards were manually annotated by one pathologist. We developed a novel deep learning-based domain adaptation framework to count different types of nuclei (ie, immunopositive tumor, immunonegative tumor, and nontumor nuclei). We compared the proposed method with several recent fully supervised deep learning models, including fully convolutional network-8s (FCN-8s), U-Net, fully convolutional regression networks A and B (FCRNA and FCRNB), and the fully residual convolutional network (FRCN). We also evaluated the proposed method by learning with a mixture of converted source images and real target annotations. RESULTS Our method achieved F1 scores of 81.3% and 62.3% for nucleus detection and classification in the target data set, respectively. In terms of F1 score, our method outperformed FCN-8s (53.6% and 43.6% for nucleus detection and classification, respectively), U-Net (61.1% and 47.6%), FCRNA (63.4% and 55.8%), and FCRNB (68.2% and 60.6%), and was competitive with FRCN (81.7% and 70.7%). In addition, learning with a mixture of converted source images and only a small set of real target labels could further boost performance. CONCLUSION This study demonstrates that deep learning-based domain adaptation is helpful for nucleus recognition in Ki-67 IHC-stained images when target data annotations are not available. It would improve the applicability of deep learning models designed for downstream supervised learning tasks on different data sets.

Affiliation(s)
- Xuhong Zhang: Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO
- Toby C Cornish: Department of Pathology, University of Colorado Anschutz Medical Campus, Aurora, CO
- Lin Yang: Department of Electrical and Computer Engineering, Department of Computer and Information Science, and Department of Biomedical Engineering, University of Florida, Gainesville, FL
- Tellen D Bennett: Department of Pediatrics, University of Colorado Anschutz Medical Campus, Aurora, CO; The Data Science to Patient Value Initiative, University of Colorado Anschutz Medical Campus, Aurora, CO
- Debashis Ghosh, Fuyong Xing: Department of Biostatistics and Informatics, University of Colorado Anschutz Medical Campus, Aurora, CO; The Data Science to Patient Value Initiative, University of Colorado Anschutz Medical Campus, Aurora, CO

206. Aatresh AA, Yatgiri RP, Chanchal AK, Kumar A, Ravi A, Das D, Bs R, Lal S, Kini J. Efficient deep learning architecture with dimension-wise pyramid pooling for nuclei segmentation of histopathology images. Comput Med Imaging Graph 2021; 93:101975. PMID: 34461375; DOI: 10.1016/j.compmedimag.2021.101975

Abstract
Image segmentation remains one of the most vital tasks in computer vision, and more so in medical image processing. Segmentation quality is often the main metric considered, while memory and computational efficiency are overlooked, limiting the practical use of power-hungry models. In this paper, we propose a novel framework (Kidney-SegNet) that combines the effectiveness of an attention-based encoder-decoder architecture and atrous spatial pyramid pooling with highly efficient dimension-wise convolutions. The segmentation results of the proposed Kidney-SegNet architecture outperform existing state-of-the-art deep learning methods on two publicly available kidney and TNBC breast H&E-stained histopathology image datasets. Further, our simulation experiments also reveal that the computational complexity and memory requirement of our proposed architecture are very efficient compared to existing state-of-the-art deep learning methods for the task of nuclei segmentation of H&E-stained histopathology images. The source code of our implementation will be available at https://github.com/Aaatresh/Kidney-SegNet.

Affiliation(s)
- Anirudh Ashok Aatresh, Rohit Prashant Yatgiri, Amit Kumar Chanchal, Aman Kumar, Akansh Ravi, Devikalyan Das, Raghavendra Bs, Shyam Lal: Department of Electronics and Communication Engineering, National Institute of Technology Karnataka, Surathkal, India
- Jyoti Kini: Department of Pathology, Kasturba Medical College, Mangalore, Manipal Academy of Higher Education, Manipal, India

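Since the abstract leans on atrous spatial pyramid pooling (ASPP), a compact sketch of that module may help. The dilation rates and channel counts below are illustrative, and the paper's dimension-wise convolutions are a separate efficiency device not reproduced here.

```python
import torch
import torch.nn as nn

class ASPP(nn.Module):
    """Parallel atrous convolutions probe the same feature map at several
    receptive-field sizes, then a 1x1 convolution fuses the branches."""
    def __init__(self, in_ch, out_ch, rates=(1, 6, 12, 18)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Conv2d(in_ch, out_ch, 3, padding=r, dilation=r) for r in rates
        )
        self.project = nn.Conv2d(out_ch * len(rates), out_ch, 1)

    def forward(self, x):
        return self.project(torch.cat([b(x) for b in self.branches], dim=1))

x = torch.randn(2, 64, 32, 32)
print(ASPP(64, 32)(x).shape)  # torch.Size([2, 32, 32, 32])
```

Because padding equals dilation for each 3x3 branch, spatial dimensions are preserved, so the branches can be concatenated directly.
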
207. Lee SMW, Shaw A, Simpson JL, Uminsky D, Garratt LW. Differential cell counts using center-point networks achieves human-level accuracy and efficiency over segmentation. Sci Rep 2021; 11:16917. PMID: 34413367; PMCID: PMC8377024; DOI: 10.1038/s41598-021-96067-3

Abstract
Differential cell counting is a challenging task when applying computer vision algorithms to pathology. Existing approaches to training cell recognition require high availability of multi-class segmentation and/or bounding-box annotations, and suffer in performance when objects are tightly clustered. We present the differential count network ("DCNet"), an annotation-efficient modality that utilises keypoint detection to locate, in brightfield images, the centre points of cells (not nuclei) and their cell class. The single centre-point annotation for DCNet lowered the burden on experts generating ground truth data by 77.1% compared to bounding-box labeling. Yet centre-point annotation still enabled high accuracy when training DCNet as a multi-class algorithm on whole-cell features, matching human experts in average precision across all 5 object classes and outperforming humans in consistency. The efficacy and efficiency of the DCNet end-to-end system represent significant progress toward an open-source, fully computational approach to differential cell count-based diagnosis that can be adapted to any pathology need.

Affiliation(s)
- Sarada M W Lee: Perth Machine Learning Group, Perth, WA 6000, Australia; School of Medicine and Public Health, University of Newcastle, Callaghan, NSW 2308, Australia
- Andrew Shaw: Data Institute, University of San Francisco, San Francisco, CA 94117, USA
- Jodie L Simpson: School of Medicine and Public Health, University of Newcastle, Callaghan, NSW 2308, Australia; Priority Research Centre for Healthy Lungs, University of Newcastle, Callaghan, NSW 2308, Australia
- David Uminsky: Department of Computer Science, University of Chicago, Chicago, IL 60637, USA
- Luke W Garratt: Wal-yan Respiratory Research Centre, Telethon Kids Institute, University of Western Australia, Nedlands, WA 6009, Australia

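Center-point detectors of this kind are commonly trained on per-class Gaussian heatmaps rendered at the annotated centre points. The CenterNet-style recipe below is an assumed stand-in for DCNet's exact ground-truth encoding, not a reproduction of it.

```python
import numpy as np

def render_heatmaps(centers, num_classes, shape, sigma=4.0):
    """centers: iterable of (row, col, class_id); returns a (C, H, W) target."""
    heat = np.zeros((num_classes, *shape), dtype=np.float32)
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]]
    for r, c, k in centers:
        g = np.exp(-((rr - r) ** 2 + (cc - c) ** 2) / (2 * sigma ** 2))
        heat[k] = np.maximum(heat[k], g)  # overlapping peaks keep the maximum
    return heat

# Two class-0 cells close together and one class-3 cell:
target = render_heatmaps([(20, 20, 0), (24, 26, 0), (40, 10, 3)],
                         num_classes=5, shape=(64, 64))
```

Taking the element-wise maximum rather than summing keeps nearby peaks of the same class from merging into a single blob, which matters for tightly clustered cells.
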
208. Chen Y, Liang D, Bai X, Xu Y, Yang X. Cell Localization and Counting Using Direction Field Map. IEEE J Biomed Health Inform 2021; 26:359-368. PMID: 34406952; DOI: 10.1109/jbhi.2021.3105545

Abstract
Automatic cell counting in pathology images is challenging due to blurred boundaries, low contrast, and overlap between cells. In this paper, we train a convolutional neural network (CNN) to predict a two-dimensional direction field map and then use it to localize individual cells for counting. Specifically, we define the direction field on each pixel in the cell regions (obtained by dilating the original annotations of cell centers) as a two-dimensional unit vector pointing from the pixel to its corresponding cell center. Direction fields of adjacent pixels in different cells have opposite directions departing from each other, while those in the same cell region point to the same center. This unique property is used to partition overlapping cells for localization and counting. To deal with blurred boundaries or low-contrast cells, we set the direction field of background pixels to zero when generating the ground truth, so that adjacent pixels belonging to cells and background show an obvious difference in the predicted direction field. To further deal with cells of varying density and the overlapping issue, we adopt a geometry-adaptive (varying) radius for cells of different densities when generating the ground-truth direction field map, which guides the CNN model to separate cells of different densities and overlapping cells. Extensive experimental results on three widely used datasets (i.e., the Cell, CRCHistoPhenotype2016, and MBM datasets) demonstrate the effectiveness of the proposed approach.

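The ground-truth encoding described above is easy to state in code. The sketch below uses a fixed radius for simplicity, whereas the paper makes the radius geometry-adaptive; everything else follows the description: pixels within the radius of a centre store the unit vector pointing to their nearest centre, and the background stays zero.

```python
import numpy as np

def direction_field(centers, shape, radius=8):
    """centers: list of (row, col) cell centres; returns a (2, H, W) field."""
    centers = np.asarray(centers, dtype=np.float32)              # (N, 2)
    rr, cc = np.mgrid[0:shape[0], 0:shape[1]].astype(np.float32)
    pix = np.stack([rr, cc], axis=-1)                            # (H, W, 2)
    # Squared distance from every pixel to every centre; keep the nearest.
    d2 = ((pix[None] - centers[:, None, None]) ** 2).sum(-1)     # (N, H, W)
    nearest, dist = d2.argmin(0), np.sqrt(d2.min(0))
    vec = centers[nearest] - pix                # points from pixel to centre
    norm = np.linalg.norm(vec, axis=-1, keepdims=True)
    unit = np.where(norm > 0, vec / np.maximum(norm, 1e-6), 0.0)
    unit[dist > radius] = 0.0                   # background stays zero
    return np.transpose(unit, (2, 0, 1))

field = direction_field([(10, 12), (30, 40)], shape=(64, 64))
```

Along the boundary between two adjacent cells, neighbouring pixels point toward different centres, so the predicted field flips direction there; that sign change is what the method exploits to split touching cells.
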
209. Zhang L, Wang L, Gao J, Risacher SL, Yan J, Li G, Liu T, Zhu D. Deep Fusion of Brain Structure-Function in Mild Cognitive Impairment. Med Image Anal 2021; 72:102082. PMID: 34004495; DOI: 10.1016/j.media.2021.102082

Abstract
Multimodal fusion of different types of neural image data provides an irreplaceable opportunity to take advantage of complementary cross-modal information that may only partially be contained in any single modality. To jointly analyze multimodal data, deep neural networks can be especially useful because many studies have suggested that deep learning strategies are very efficient at revealing complex and non-linear relations buried in the data. However, most deep models, e.g., the convolutional neural network and its numerous extensions, can only operate on regular Euclidean data like voxels in 3D MRI. Interrelated and hidden structures that lie beyond grid neighbors, such as brain connectivity, may be overlooked. Moreover, how to effectively incorporate neuroscience knowledge into multimodal data fusion within a single deep framework is understudied. In this work, we developed a graph-based deep neural network to simultaneously model brain structure and function in Mild Cognitive Impairment (MCI): the topology of the graph is initialized using the structural network (from diffusion MRI) and iteratively updated by incorporating functional information (from functional MRI) to maximize the capability of differentiating MCI patients from elderly normal controls. This results in a new connectome obtained by exploring "deep relations" between brain structure and function in MCI patients, which we name the Deep Brain Connectome. Though the deep brain connectome is learned individually, it shows consistent patterns of alteration compared to the structural network at the group level. With the deep brain connectome, our deep model achieves 92.7% classification accuracy on the ADNI dataset.

Affiliation(s)
- Lu Zhang, Jean Gao, Dajiang Zhu: Department of Computer Science and Engineering, The University of Texas at Arlington, Arlington, TX 76019, USA
- Li Wang: Department of Computer Science and Engineering and Department of Mathematics, The University of Texas at Arlington, Arlington, TX 76019, USA
- Shannon L Risacher: Department of Radiology and Imaging Sciences, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Jingwen Yan: School of Informatics and Computing, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- Gang Li: Biomedical Research Imaging Center and Department of Radiology, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599-7160, USA
- Tianming Liu: Cortical Architecture Imaging and Discovery Lab, Department of Computer Science and Bioimaging Research Center, The University of Georgia, Athens, GA, USA

210. Sarvamangala DR, Kulkarni RV. Grading of Knee Osteoarthritis Using Convolutional Neural Networks. Neural Process Lett 2021. DOI: 10.1007/s11063-021-10529-3

211. Bhatt C, Kumar I, Vijayakumar V, Singh KU, Kumar A. The state of the art of deep learning models in medical science and their challenges. Multimedia Systems 2021; 27:599-613. DOI: 10.1007/s00530-020-00694-1

212. Dogan RO, Dogan H, Bayrak C, Kayikcioglu T. A Two-Phase Approach using Mask R-CNN and 3D U-Net for High-Accuracy Automatic Segmentation of Pancreas in CT Imaging. Computer Methods and Programs in Biomedicine 2021; 207:106141. PMID: 34020373; DOI: 10.1016/j.cmpb.2021.106141

Abstract
BACKGROUND AND OBJECTIVE The size, shape, and position of the pancreas are affected by patient characteristics such as age, sex, and adiposity. Owing to the complex anatomical structure (size, shape, and position) of the pancreas, specialists have difficulty analyzing pancreatic diseases (diabetes, pancreatic cancer, pancreatitis), so treatment requires enormous time and depends on the experience of specialists. To decrease the rate of deaths from pancreatic disease and to assist specialists in the analysis of pancreatic diseases, automatic pancreas segmentation techniques have been actively developed in the research literature for many years. METHODS Despite the rapid growth of deep learning and its satisfactory performance in many medical image processing and computer vision applications, the maximum Dice Similarity Coefficient (DSC) achieved by these techniques for automatic pancreas segmentation is only around 85%, owing to the complex structure of the pancreas and other factors. In contrast to previous techniques, which required significantly more computational power and memory, this paper suggests a novel two-phase approach for high-accuracy automatic pancreas segmentation in computed tomography (CT) imaging. The proposed approach consists of two phases: (1) Pancreas Localization, where the rough pancreas position is detected on the 2D CT slice with a Mask R-CNN model, and (2) Pancreas Segmentation, where the segmented pancreas region is produced by refining the candidate pancreas region with a 3D U-Net on the 2D sub-CT slices generated in the first phase. Both qualitative and quantitative assessments of the proposed approach are performed on the NIH data set. RESULTS To evaluate the proposed approach, a total of 16 different automatic pancreas segmentation techniques reported in the literature are compared using the following performance measures: Dice Similarity Coefficient (DSC), Jaccard Index (JI), Precision (PRE), Recall (REC), Pixel Accuracy (ACC), Specificity (SPE), Receiver Operating Characteristic (ROC), and Area Under the ROC Curve (AUC). The average values of DSC, JI, PRE, REC, and ACC are computed as 86.15%, 75.93%, 86.27%, 86.27%, and 99.95%, respectively, the best values among well-established studies for automatic pancreas segmentation. CONCLUSION It is demonstrated with qualitative and quantitative results that our suggested two-phase approach produces more favorable results than existing approaches.

Affiliation(s)
- Ramazan Ozgur Dogan: Department of Computer Technology, Gumushane University, Turkey; Department of Computer Science & Information Systems, Youngstown State University, USA
- Hulya Dogan: Department of Computer Engineering, Karadeniz Technical University, Turkey; Department of Computer Science & Information Systems, Youngstown State University, USA
- Coskun Bayrak: Department of Computer Science & Information Systems, Youngstown State University, USA
- Temel Kayikcioglu: Department of Electrical & Electronics Engineering, Karadeniz Technical University, Turkey

213. A Computational Tumor-Infiltrating Lymphocyte Assessment Method Comparable with Visual Reporting Guidelines for Triple-Negative Breast Cancer. EBioMedicine 2021; 70:103492. PMID: 34280779; PMCID: PMC8318866; DOI: 10.1016/j.ebiom.2021.103492

Abstract
Background Tumor-infiltrating lymphocytes (TILs) are clinically significant in triple-negative breast cancer (TNBC). Although a standardized methodology for visual TIL assessment (VTA) exists, it has several inherent limitations. We established a deep learning-based computational TIL assessment (CTA) method broadly following the VTA guideline and compared it with VTA for TNBC to determine the prognostic value of the CTA and a reasonable CTA workflow for clinical practice. Methods We trained three deep neural networks for nuclei segmentation, nuclei classification, and necrosis classification to establish a CTA workflow. The automatic TIL (aTIL) score generated was compared with manual TIL (mTIL) scores provided by three pathologists in an Asian (n = 184) and a Caucasian (n = 117) TNBC cohort to evaluate scoring concordance and prognostic value. Findings The intraclass correlations (ICCs) between aTILs and mTILs varied from 0.40 to 0.70 in the two cohorts. Multivariate Cox proportional hazards analysis revealed that the aTIL score was associated with disease-free survival (DFS) in both cohorts, as either a continuous [hazard ratio (HR) = 0.96, 95% CI 0.94-0.99] or dichotomous variable (HR = 0.29, 95% CI 0.12-0.72). A higher C-index was observed in a composite mTIL/aTIL three-tier stratification model than in the dichotomous model using either mTILs or aTILs alone. Interpretation The current study provides a useful tool for stromal TIL assessment and prognosis evaluation for patients with TNBC. A workflow integrating both VTA and CTA may aid pathologists in performing risk management and decision-making tasks.

214. A Cascade Deep Forest Model for Breast Cancer Subtype Classification Using Multi-Omics Data. Mathematics 2021. DOI: 10.3390/math9131574

Abstract
Automated diagnosis systems aim to reduce the cost of diagnosis while maintaining the same efficiency. Many methods have been used for breast cancer subtype classification. Some use a single data source, while others integrate many data sources, which improves accuracy at the cost of computational performance. Breast cancer data, especially biological data, is known for its imbalance and for the lack of extensive histopathological image collections as biological data. Recent studies have shown that the cascade Deep Forest ensemble model achieves competitive classification accuracy compared with alternatives such as general ensemble learning methods and conventional deep neural networks (DNNs), especially for imbalanced training sets, by learning hyper-representations through cascades of ensemble decision trees. In this work, a cascade Deep Forest is employed to classify breast cancer subtypes, IntClust and PAM50, using multi-omics datasets and different configurations; a sketch of the cascade idea follows this entry. The results record an accuracy of 83.45% for 5 subtypes and 77.55% for 10 subtypes. The significance of this work is the demonstration that using gene expression data alone with the cascade Deep Forest classifier achieves accuracy comparable to other techniques with higher computational performance: the time recorded is about 5 s for 10 subtypes and 7 s for 5 subtypes.

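A cascade Deep Forest can be sketched with scikit-learn forests standing in for gcForest's components: each level's class-probability vectors are concatenated onto the raw features and fed to the next level. This simplification skips the out-of-fold probability estimation the original gcForest uses to curb overfitting, so treat it as an illustration of the cascade mechanism only.

```python
import numpy as np
from sklearn.ensemble import ExtraTreesClassifier, RandomForestClassifier

def fit_cascade(X, y, levels=3):
    """Each level is a pair of forests; its class probabilities are appended
    to the raw features to form the next level's input."""
    cascade, feats = [], X
    for _ in range(levels):
        level = [RandomForestClassifier(n_estimators=100).fit(feats, y),
                 ExtraTreesClassifier(n_estimators=100).fit(feats, y)]
        cascade.append(level)
        probs = [m.predict_proba(feats) for m in level]
        feats = np.hstack([X] + probs)
    return cascade

def predict_cascade(cascade, X):
    feats = X
    for level in cascade:
        level_probs = [m.predict_proba(feats) for m in level]
        feats = np.hstack([X] + level_probs)
    # Average the last level's probability vectors and take the argmax.
    return np.mean(level_probs, axis=0).argmax(axis=1)

X = np.random.rand(200, 30)       # toy stand-in for multi-omics features
y = np.random.randint(0, 5, 200)  # toy subtype labels
pred = predict_cascade(fit_cascade(X, y), X)
```
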
215. Multiclassification of Endoscopic Colonoscopy Images Based on Deep Transfer Learning. Computational and Mathematical Methods in Medicine 2021; 2021:2485934. PMID: 34306173; PMCID: PMC8272675; DOI: 10.1155/2021/2485934

Abstract
With the continuous improvement of living standards, dietary habits are constantly changing, bringing with them various bowel problems; among these, the morbidity and mortality rates of colorectal cancer have maintained a significant upward trend. In recent years, the application of deep learning in the medical field has become increasingly broad and deep. In colonoscopy, artificial intelligence based on deep learning is mainly used to assist in the detection of colorectal polyps and the classification of colorectal lesions, but classification can confuse polyps with other diseases. In order to accurately diagnose various intestinal diseases and improve the classification accuracy of polyps, this work proposes a deep learning-based multiclassification method for medical colonoscopy images that distinguishes four conditions: polyps, inflammation, tumor, and normal. In view of the relatively small data set, a network first trained by transfer learning on ImageNet was used as the pretrained model, and the prior knowledge learned from the source-domain task was applied to the intestinal classification task. We then fine-tuned the model on our data sets to make it more suitable for intestinal classification. Finally, the model was applied to the multiclassification of medical colonoscopy images. Experimental results show that the method can significantly improve the recognition rate of polyps while maintaining the classification accuracy of the other categories, assisting the doctor in diagnosis for surgical resection.

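The pretrain-then-fine-tune recipe described above looks roughly like the following in PyTorch (torchvision >= 0.13 for the weights enum); the backbone choice and learning rates are illustrative assumptions, not values from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from ImageNet weights and swap the head for the four target classes.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, 4)  # polyp/inflammation/tumor/normal

# Stage 1: freeze the backbone and train only the new head.
for p in model.parameters():
    p.requires_grad = False
for p in model.fc.parameters():
    p.requires_grad = True
head_optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)

# Stage 2: unfreeze everything and fine-tune at a much lower learning rate,
# so the pretrained features shift gently toward the colonoscopy domain.
for p in model.parameters():
    p.requires_grad = True
fine_tune_optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)
```
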
216. Cherian Kurian N, Sethi A, Reddy Konduru A, Mahajan A, Rane SU. A 2021 update on cancer image analytics with deep learning. WIREs Data Mining and Knowledge Discovery 2021; 11. DOI: 10.1002/widm.1410

Abstract
Deep learning (DL)-based interpretation of medical images has reached a critical juncture: it is expanding beyond research projects into translational ones and is ready to make its way to the clinics. Advances over the last decade in data availability, DL techniques, and computing capabilities have accelerated this journey. Through this journey, we now have a better understanding of the challenges to, and pitfalls of, wider adoption of DL into clinical care, which, in our view, should and will drive the advances in this field over the next few years. The most important of these challenges are the lack of an appropriately digitized environment within healthcare institutions, the lack of adequate open and representative datasets on which DL algorithms can be trained and tested, and the lack of robustness of widely used DL training algorithms to certain pervasive pathological characteristics of medical images and repositories. In this review, we provide an overview of the role of imaging in oncology, the different techniques that are shaping the way DL algorithms are being made ready for clinical use, and the problems that DL techniques still need to address before DL can find a home in clinics. Finally, we also provide a summary of how DL can potentially drive the adoption of digital pathology, vendor-neutral archives, and picture archival and communication systems. We caution that researchers may find the coverage of their own fields to be at a high level; this is by design, as the format is meant to introduce those looking in from outside deep learning and medical research, respectively, to the main concerns and limitations of these two fields rather than to tell them something new about their own.
This article is categorized under:
- Technologies > Artificial Intelligence
- Algorithmic Development > Biological Data Mining

Affiliation(s)
- Nikhil Cherian Kurian, Amit Sethi: Department of Electrical Engineering, Indian Institute of Technology Bombay, Mumbai, India
- Anil Reddy Konduru, Swapnil Ulhas Rane: Department of Pathology, Tata Memorial Center-ACTREC, HBNI, Navi Mumbai, India
- Abhishek Mahajan: Department of Radiology, Tata Memorial Hospital, HBNI, Mumbai, India

217. Gong M, Chen S, Chen Q, Zeng Y, Zhang Y. Generative Adversarial Networks in Medical Image Processing. Curr Pharm Des 2021; 27:1856-1868. PMID: 33238866; DOI: 10.2174/1381612826666201125110710

Abstract
BACKGROUND The emergence of generative adversarial networks (GANs) has provided new technology and a framework for medical image applications. Specifically, a GAN requires little to no labeled data: high-quality data can be generated through competition between the generator and discriminator networks. Therefore, GANs are rapidly proving to be a state-of-the-art foundation, achieving enhanced performance in various medical applications. METHODS In this article, we introduce the principles of GANs and their various variants: the deep convolutional GAN, conditional GAN, Wasserstein GAN, InfoGAN, boundary equilibrium GAN, and CycleGAN. RESULTS All these GAN variants have found success in medical imaging tasks, including medical image enhancement, segmentation, classification, reconstruction, and synthesis. Furthermore, we summarize the data processing methods and evaluation indicators. Finally, we note the limitations of existing methods and the challenges that remain to be addressed in this field. CONCLUSION Although GANs are at an early stage of development in medical image processing, they hold great promise for the future.

Affiliation(s)
- Meiqin Gong: West China Second University Hospital, Sichuan University, Chengdu 610041, China
- Siyu Chen, Qingyuan Chen, Yuanqi Zeng, Yongqing Zhang: School of Computer Science, Chengdu University of Information Technology, Chengdu 610225, China

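The generator-discriminator competition underlying all the GAN variants surveyed here fits in a few lines. The sketch below uses toy fully connected networks and flattened 28x28 images purely for illustration; any of the surveyed variants changes the architectures or losses but keeps this alternating two-player update.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 784), nn.Tanh())
D = nn.Sequential(nn.Linear(784, 128), nn.LeakyReLU(0.2), nn.Linear(128, 1))
bce = nn.BCEWithLogitsLoss()
g_opt = torch.optim.Adam(G.parameters(), lr=2e-4)
d_opt = torch.optim.Adam(D.parameters(), lr=2e-4)

real = torch.randn(32, 784)   # stand-in for a batch of real images
z = torch.randn(32, 64)       # latent noise

# Discriminator step: push real toward 1 and fakes toward 0
# (fakes are detached so this step does not update the generator).
fake = G(z)
d_loss = bce(D(real), torch.ones(32, 1)) + bce(D(fake.detach()), torch.zeros(32, 1))
d_opt.zero_grad()
d_loss.backward()
d_opt.step()

# Generator step: make the discriminator call the fakes real.
g_loss = bce(D(fake), torch.ones(32, 1))
g_opt.zero_grad()
g_loss.backward()
g_opt.step()
```
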
218. Chen Z, Wang S, Jia C, Hu K, Ye X, Li X, Gao X. CRDet: Improving Signet Ring Cell Detection by Reinforcing the Classification Branch. J Comput Biol 2021; 28:732-743. PMID: 34190641; DOI: 10.1089/cmb.2020.0555

Abstract
Detecting signet ring cells in histopathologic images is a critical computer-aided diagnostic task that is highly relevant to cancer grading and patients' survival rates. However, the cells are densely distributed and exhibit diverse and complex visual patterns, and annotations are commonly incomplete, posing significant barriers to accurate detection. In this article, we propose to mitigate the detection difficulty from a model reinforcement point of view. Specifically, we devise a Classification Reinforcement Detection Network (CRDet), which adds a dedicated Classification Reinforcement Branch (CRB) on top of the Cascade R-CNN architecture. The proposed CRB consists of a context pooling module, which produces a more robust feature representation by fully exploiting context information, and a feature enhancement classifier, which generates superior features by leveraging deconvolution and an attention mechanism. With the enhanced features, small cells can be better characterized, and CRDet achieves more accurate signet ring cell identification. We validate our proposal on a large-scale real clinical signet ring cell data set and show that CRDet outperforms several popular convolutional neural network-based object detection models on this particular task.

Affiliation(s)
- Zhineng Chen: School of Computer Science, Fudan University, Shanghai, China; School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China
- Sai Wang: School of Computer and Information Technology, Beijing Jiaotong University, Beijing, China; Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Caiyan Jia: Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Kai Hu: School of Computer Science, Xiangtan University, Xiangtan, China
- Xiongjun Ye: Peking University People's Hospital, Beijing, China
- Xieping Gao: School of Computer Science, Xiangtan University, Xiangtan, China; School of Medical Imaging and Examination, Xiangnan University, Chenzhou, China

219. Ali MAS, Misko O, Salumaa SO, Papkov M, Palo K, Fishman D, Parts L. Evaluating Very Deep Convolutional Neural Networks for Nucleus Segmentation from Brightfield Cell Microscopy Images. SLAS Discovery 2021; 26:1125-1137. PMID: 34167359; PMCID: PMC8458686; DOI: 10.1177/24725552211023214

Abstract
Advances in microscopy have increased output data volumes, and powerful image analysis methods are required to match. In particular, finding and characterizing nuclei from microscopy images, a core cytometry task, remains difficult to automate. While deep learning models have given encouraging results on this problem, the most powerful approaches have not yet been tested on it. Here, we review and evaluate state-of-the-art very deep convolutional neural network architectures and training strategies for segmenting nuclei from brightfield cell images. We tested U-Net as a baseline model; considered U-Net++, Tiramisu, and DeepLabv3+ as the latest instances of advanced families of segmentation models; and propose PPU-Net, a novel lightweight alternative. The deeper architectures outperformed the standard U-Net and results from previous studies on the challenging brightfield images, with balanced pixel-wise accuracies of up to 86%. PPU-Net achieved this performance with 20-fold fewer parameters than the comparably accurate methods. All models perform better on larger nuclei and in sparser images. We further confirmed that in the absence of plentiful training data, augmentation and pretraining on other data improve performance. In particular, using only 16 images with data augmentation is enough to achieve a pixel-wise F1 score within 5% of the one achieved with the full data set, for all models. The remaining segmentation errors are mainly due to missed nuclei in dense regions, overlapping cells, and imaging artifacts, indicating the major outstanding challenges.

Affiliation(s)
- Mohammed A S Ali, Mikhail Papkov, Dmytro Fishman: Department of Computer Science, University of Tartu, Tartu, Estonia
- Oleg Misko: Ukrainian Catholic University, Lviv, L'vìvs'ka, Ukraine
- Kaupo Palo: PerkinElmer Cellular Technologies Germany GmbH, Hamburg, Germany
- Leopold Parts: Department of Computer Science, University of Tartu, Tartu, Estonia; Wellcome Sanger Institute, Hinxton, Cambridgeshire, UK

220. Sun Y, Huang X, Zhou H, Zhang Q. SRPN: similarity-based region proposal networks for nuclei and cells detection in histology images. Med Image Anal 2021; 72:102142. PMID: 34198042; DOI: 10.1016/j.media.2021.102142

Abstract
The detection of nuclei and cells in histology images is of great value in both clinical practice and pathological studies. However, multiple factors, such as the morphological variation of nuclei and cells, make it a challenging task on which conventional object detection methods cannot obtain satisfactory performance in many cases. A detection task consists of two sub-tasks, classification and localization; under the condition of dense object detection, classification is key to boosting detection performance. Considering this, we propose similarity-based region proposal networks (SRPN) for nuclei and cells detection in histology images. In particular, a customised convolution layer, termed an embedding layer, is designed for network building. The embedding layer is added to the region proposal networks, enabling the networks to learn discriminative features through similarity learning. Features obtained by similarity learning can significantly boost classification performance compared to conventional methods. SRPN can easily be integrated into standard convolutional neural network architectures such as Faster R-CNN and RetinaNet. We test the proposed approach on multi-organ nuclei detection and signet ring cell detection in histological images. Experimental results show that networks applying similarity learning achieve superior performance on both tasks compared to their counterparts. In particular, the proposed SRPN achieves state-of-the-art performance on the MoNuSeg benchmark for nuclei segmentation and detection, and on the signet ring cell detection benchmark when compared with baselines. The source code is publicly available at: https://github.com/sigma10010/nuclei_cells_det.

Affiliation(s)
- Yibao Sun, Xingru Huang, Qianni Zhang: School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Road, London, E1 4NS, United Kingdom
- Huiyu Zhou: School of Informatics, University of Leicester, University Road, Leicester, LE1 7RH, United Kingdom

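One common way to realise an embedding layer trained by similarity learning, sketched below, is to replace the plain linear classifier with cosine similarity against learned class prototypes; the feature dimension, class count, and scale factor are assumptions for illustration, not the paper's values.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SimilarityHead(nn.Module):
    """Classifies region features by scaled cosine similarity to one learned
    prototype vector per class, instead of an unconstrained linear layer."""
    def __init__(self, feat_dim=256, num_classes=2, scale=10.0):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(num_classes, feat_dim))
        self.scale = scale

    def forward(self, feats):                  # feats: (N, feat_dim)
        f = F.normalize(feats, dim=1)
        p = F.normalize(self.prototypes, dim=1)
        return self.scale * f @ p.t()          # cosine logits, (N, num_classes)

head = SimilarityHead()
logits = head(torch.randn(8, 256))
loss = F.cross_entropy(logits, torch.randint(0, 2, (8,)))
```

Normalising both features and prototypes means the logit depends only on angular similarity, which encourages features of the same class to cluster tightly in the embedding space.
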
221. Zinchuk V, Grossenbacher-Zinchuk O. Machine Learning for Analysis of Microscopy Images: A Practical Guide. Curr Protoc Cell Biol 2020; 86:e101. PMID: 31904918; DOI: 10.1002/cpcb.101

Abstract
The explosive growth of machine learning has provided scientists with insights into data in ways unattainable using prior research techniques. It has allowed the detection of biological features that were previously unrecognized or overlooked. However, because machine-learning methodology originates from informatics, many cell biology labs have experienced difficulties in implementing this approach. In this article, we target the rapidly expanding audience of cell and molecular biologists interested in exploiting machine learning for the analysis of their research. We discuss the advantages of employing machine learning with microscopy approaches and describe the machine-learning pipeline. We also give practical guidelines for building models of cell behavior using machine learning. We conclude with an overview of the tools required for model creation and share advice on their use.

Affiliation(s)
- Vadim Zinchuk: Department of Neurobiology and Anatomy, Kochi University Faculty of Medicine, Kochi, Japan

222. Zhu X, Li X, Ong K, Zhang W, Li W, Li L, Young D, Su Y, Shang B, Peng L, Xiong W, Liu Y, Liao W, Xu J, Wang F, Liao Q, Li S, Liao M, Li Y, Rao L, Lin J, Shi J, You Z, Zhong W, Liang X, Han H, Zhang Y, Tang N, Hu A, Gao H, Cheng Z, Liang L, Yu W, Ding Y. Hybrid AI-assistive diagnostic model permits rapid TBS classification of cervical liquid-based thin-layer cell smears. Nat Commun 2021; 12:3541. PMID: 34112790; PMCID: PMC8192526; DOI: 10.1038/s41467-021-23913-3

Abstract
Technical advancements have significantly improved the earlier diagnosis of cervical cancer, but accurate diagnosis is still difficult due to various factors. We develop an artificial intelligence assistive diagnostic solution, AIATBS, to improve cervical liquid-based thin-layer cell smear diagnosis according to clinical TBS criteria. We train AIATBS with >81,000 retrospective samples. It integrates YOLOv3 for target detection, Xception and patch-based models to boost target classification, and U-Net for nucleus segmentation. We integrate XGBoost and a logical decision tree with these models to optimize the parameters given by the learning process, and we develop a complete cervical liquid-based cytology smear TBS diagnostic system which also includes a quality control solution. We validate the optimized system with >34,000 multicenter prospective samples and achieve better sensitivity than senior cytologists, while retaining high specificity and achieving a speed of <180 s/slide. Our system is adaptive to sample preparation using different standards, staining protocols, and scanners.

Affiliation(s)
- Xiaohui Zhu, Wenli Zhang, Feifei Wang, Qing Liao, Minmin Liao, Yu Li, Li Liang, Yanqing Ding: Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China; Guangdong Province Key Laboratory of Molecular Tumor Pathology, Guangzhou, Guangdong Province, PR China
- Xiaoming Li: Department of Pathology, Shenzhen Bao'an People's Hospital (Group), Shenzhen, Guangdong Province, PR China
- Kokhaur Ong, Weimiao Yu: Institute of Molecular and Cell Biology, A*STAR, Singapore; Bioinformatics Institute, A*STAR, Singapore
- Wencai Li, Jingjing Xu: The First Affiliated Hospital of Zhengzhou University, Zhengzhou, Henan Province, PR China
- Longjie Li, David Young, Hao Han: Institute of Molecular and Cell Biology, A*STAR, Singapore
- Yongjian Su, Bin Shang, Linggan Peng, Shengnan Li, Linshang Rao, Jinquan Lin, Jianyuan Shi, Zejun You: Guangzhou F.Q.PATHOTECH Co., Ltd, Guangzhou, Guangdong Province, PR China
- Wei Xiong: Guangzhou Kaipu Biotechnology Co., Ltd, Guangzhou, Guangdong Province, PR China
- Yunke Liu: Laboratory Department, Guangzhou Tianhe District Maternal and Child Health Care Hospital, Guangzhou, Guangdong Province, PR China
- Wenting Liao: State Key Laboratory of Oncology in South China, Collaborative Innovation Center for Cancer Medicine, Sun Yat-sen University Cancer Center, Guangzhou, Guangdong Province, PR China
- Wenlong Zhong, Xinrong Liang: Guangzhou Huayin medical inspection center Co., Ltd, Guangzhou, Guangdong Province, PR China
- Yan Zhang: Department of Pathology, Nanfang Hospital and Basic Medical College, Southern Medical University, Guangzhou, Guangdong Province, PR China; Department of Pathology, Shenzhen Longhua District Maternity & Child Healthcare Hospital, Shenzhen, PR China
- Na Tang, Zhiqiang Cheng: Department of Pathology, Shenzhen First People's Hospital, Shenzhen, Guangdong Province, PR China
- Aixia Hu: Department of Pathology, Henan Provincial People's Hospital, Zhengzhou, Henan Province, PR China
- Hongyi Gao: Department of Pathology, Guangdong Provincial Women's and Children's Dispensary, Shenzhen, Guangdong Province, PR China

223. Cao B, Zhang KC, Wei B, Chen L. Status quo and future prospects of artificial neural network from the perspective of gastroenterologists. World J Gastroenterol 2021; 27:2681-2709. PMID: 34135549; PMCID: PMC8173384; DOI: 10.3748/wjg.v27.i21.2681

Abstract
Artificial neural networks (ANNs) are one of the primary types of artificial intelligence and have been rapidly developed and used in many fields. In recent years, there has been a sharp increase in research concerning ANNs in gastrointestinal (GI) diseases. This state-of-the-art technique exhibits excellent performance in diagnosis, prognostic prediction, and treatment. Competitions between ANNs and GI experts suggest that efficiency and accuracy might be compatible by virtue of technical advancements. However, the shortcomings of ANNs are not negligible and may induce alterations in many aspects of medical practice. In this review, we introduce basic knowledge about ANNs and summarize their current achievements in GI diseases from the perspective of gastroenterologists. Existing limitations and future directions are also proposed to optimize the clinical potential of ANNs. In consideration of barriers to interdisciplinary knowledge, sophisticated concepts are discussed using plain words and metaphors to make this review more easily understood by medical practitioners and the general public.

Affiliation(s)
- Bo Cao, Ke-Cheng Zhang, Bo Wei, Lin Chen: Department of General Surgery & Institute of General Surgery, Chinese People's Liberation Army General Hospital, Beijing 100853, China

224
|
Fishman D, Salumaa SO, Majoral D, Laasfeld T, Peel S, Wildenhain J, Schreiner A, Palo K, Parts L. Practical segmentation of nuclei in brightfield cell images with neural networks trained on fluorescently labelled samples. J Microsc 2021; 284:12-24. [PMID: 34081320 DOI: 10.1111/jmi.13038] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2019] [Revised: 05/27/2021] [Accepted: 05/27/2021] [Indexed: 11/28/2022]
Abstract
Identifying nuclei is a standard first step when analysing cells in microscopy images. The traditional approach relies on signal from a DNA stain, or fluorescent transgene expression localised to the nucleus. However, imaging techniques that do not use fluorescence can also carry useful information. Here, we used brightfield and fluorescence images of fixed cells with fluorescently labelled DNA, and confirmed that three convolutional neural network architectures can be adapted to segment nuclei from the brightfield channel, relying on fluorescence signal to extract the ground truth for training. We found that U-Net achieved the best overall performance, Mask R-CNN provided an additional benefit of instance segmentation, and that DeepCell proved too slow for practical application. We trained the U-Net architecture on over 200 dataset variations, established that accurate segmentation is possible using as few as 16 training images, and that models trained on images from similar cell lines can extrapolate well. Acquiring data from multiple focal planes further helps distinguish nuclei in the samples. Overall, our work helps to liberate a fluorescence channel reserved for nuclear staining, thus providing more information from the specimen, and reducing reagents and time required for preparing imaging experiments.
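As a hedged illustration of the labelling strategy described above, the sketch below derives binary nuclei masks from a fluorescence channel (via simple Otsu thresholding; the authors' exact pipeline is not specified here) to supervise a brightfield segmentation network. `mask_from_fluorescence` is a hypothetical helper name.

```python
# Illustrative sketch, not the authors' code: build training masks from the
# fluorescently labelled DNA channel, then pair them with brightfield inputs.
import numpy as np
from skimage.filters import gaussian, threshold_otsu
from skimage.morphology import remove_small_objects

def mask_from_fluorescence(fluo_img, min_size=50):
    """Threshold a fluorescence (DNA stain) image into a binary nuclei mask."""
    smoothed = gaussian(fluo_img.astype(float), sigma=2)
    mask = smoothed > threshold_otsu(smoothed)
    return remove_small_objects(mask, min_size=min_size)

# Training pairs for a U-Net-style model: brightfield image as input,
# fluorescence-derived mask as target, e.g.:
# targets = np.stack([mask_from_fluorescence(f) for f in fluo_stack])
```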
Collapse
Affiliation(s)
- Dmytro Fishman
- Department of Computer Science, University of Tartu, Narva Str 20, Tartu, 51009, Estonia
| | - Sten-Oliver Salumaa
- Department of Computer Science, University of Tartu, Narva Str 20, Tartu, 51009, Estonia
| | - Daniel Majoral
- Department of Computer Science, University of Tartu, Narva Str 20, Tartu, 51009, Estonia
| | - Tõnis Laasfeld
- Department of Computer Science, University of Tartu, Narva Str 20, Tartu, 51009, Estonia.,Chair of Bioorganic Chemistry, Institute of Chemistry, University of Tartu, Ravila, Estonia
| | | | | | | | - Kaupo Palo
- PerkinElmer Cellular Technologies, Germany GmbH, Hamburg, Germany
| | - Leopold Parts
- Department of Computer Science, University of Tartu, Narva Str 20, Tartu, 51009, Estonia.,Wellcome Sanger Institute, Wellcome Genome Campus, Hinxton, Cambridgeshire, UK
| |
Collapse
|
225
|
Mahmood H, Shaban M, Rajpoot N, Khurram SA. Artificial Intelligence-based methods in head and neck cancer diagnosis: an overview. Br J Cancer 2021; 124:1934-1940. [PMID: 33875821 PMCID: PMC8184820 DOI: 10.1038/s41416-021-01386-x] [Citation(s) in RCA: 77] [Impact Index Per Article: 19.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Revised: 03/11/2021] [Accepted: 03/31/2021] [Indexed: 02/02/2023] Open
Abstract
BACKGROUND This paper reviews recent literature employing Artificial Intelligence/Machine Learning (AI/ML) methods for diagnostic evaluation of head and neck cancers (HNC) using automated image analysis. METHODS Electronic database searches using MEDLINE via OVID, EMBASE and Google Scholar were conducted to retrieve articles using AI/ML for diagnostic evaluation of HNC (2009-2020). No restrictions were placed on the AI/ML method or imaging modality used. RESULTS In total, 32 articles were identified. HNC sites included oral cavity (n = 16), nasopharynx (n = 3), oropharynx (n = 3), larynx (n = 2), salivary glands (n = 2), sinonasal (n = 1) and in five studies multiple sites were studied. Imaging modalities included histological (n = 9), radiological (n = 8), hyperspectral (n = 6), endoscopic/clinical (n = 5), infrared thermal (n = 1) and optical (n = 1). Clinicopathologic/genomic data were used in two studies. Traditional ML methods were employed in 22 studies (69%), deep learning (DL) in eight studies (25%) and a combination of these methods in two studies (6%). CONCLUSIONS There is an increasing volume of studies exploring the role of AI/ML to aid HNC detection using a range of imaging modalities. These methods can achieve high degrees of accuracy that can exceed the abilities of human judgement in making data predictions. Large-scale multi-centric prospective studies are required to aid deployment into clinical practice.
Collapse
Affiliation(s)
- Hanya Mahmood
- Academic Unit of Oral & Maxillofacial Surgery, School of Clinical Dentistry, University of Sheffield, Sheffield, UK.
| | - Muhammad Shaban
- Department of Computer Science, University of Warwick, Coventry, UK
| | - Nasir Rajpoot
- Department of Computer Science, University of Warwick, Coventry, UK
| | - Syed A Khurram
- Unit of Oral & Maxillofacial Pathology, School of Clinical Dentistry, University of Sheffield, Sheffield, UK
| |
Collapse
|
226
|
Improved bag-of-features using grey relational analysis for classification of histology images. COMPLEX INTELL SYST 2021. [DOI: 10.1007/s40747-021-00275-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
Developing an efficient method to categorize histopathological images is a challenging research problem. In this paper, an improved bag-of-features approach is presented as an efficient image classification method. In bag-of-features, a large number of keypoints are extracted from histopathological images, which increases the computational cost of the codebook construction step. Therefore, to select a relevant subset of keypoints, a new keypoint selection method is introduced into the bag-of-features method. To validate the performance of the proposed method, an extensive experimental analysis is conducted on two standard histopathological image datasets, namely the ADL and Blue histology datasets. The proposed keypoint selection method reduces the extracted high-dimensional features by 95% and 68% on the ADL and Blue histology datasets, respectively, with less computational time. Moreover, the enhanced bag-of-features method achieves higher classification accuracy than the other classification methods considered.
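To make the pipeline concrete, here is a minimal bag-of-features sketch; the paper's grey relational analysis for keypoint selection is replaced by a simple response-based filter (an assumption for brevity), and ORB stands in for whichever detector the authors used.

```python
# Minimal bag-of-features sketch with a stand-in keypoint selection step.
import cv2
import numpy as np
from sklearn.cluster import KMeans

def selected_descriptors(img, keep_fraction=0.3):
    orb = cv2.ORB_create(nfeatures=2000)
    kps, desc = orb.detectAndCompute(img, None)
    if desc is None:
        return np.empty((0, 32), dtype=np.uint8)
    order = np.argsort([-k.response for k in kps])        # strongest first
    keep = order[: max(1, int(len(order) * keep_fraction))]
    return desc[keep]                                     # reduced keypoint set

def bof_histograms(images, n_words=100):
    all_desc = [selected_descriptors(im) for im in images]
    codebook = KMeans(n_clusters=n_words, n_init=10).fit(
        np.vstack(all_desc).astype(np.float32))
    hists = [np.bincount(codebook.predict(d.astype(np.float32)),
                         minlength=n_words) / max(len(d), 1)
             for d in all_desc]
    return np.array(hists)  # one normalized histogram per image, for any classifier
```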
Collapse
|
227
|
Kobayashi S, Saltz JH, Yang VW. State of machine and deep learning in histopathological applications in digestive diseases. World J Gastroenterol 2021; 27:2545-2575. [PMID: 34092975 PMCID: PMC8160628 DOI: 10.3748/wjg.v27.i20.2545] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/28/2021] [Revised: 03/27/2021] [Accepted: 04/29/2021] [Indexed: 02/06/2023] Open
Abstract
Machine learning (ML)- and deep learning (DL)-based imaging modalities have exhibited the capacity to handle extremely high dimensional data for a number of computer vision tasks. While these approaches have been applied to numerous data types, this capacity can be especially leveraged by application on histopathological images, which capture cellular and structural features with their high-resolution, microscopic perspectives. Already, these methodologies have demonstrated promising performance in a variety of applications like disease classification, cancer grading, structure and cellular localizations, and prognostic predictions. A wide range of pathologies requiring histopathological evaluation exist in gastroenterology and hepatology, indicating these as disciplines highly targetable for integration of these technologies. Gastroenterologists have also already been primed to consider the impact of these algorithms, as development of real-time endoscopic video analysis software has been an active and popular field of research. This heightened clinical awareness will likely be important for future integration of these methods and to drive interdisciplinary collaborations on emerging studies. To provide an overview on the application of these methodologies for gastrointestinal and hepatological histopathological slides, this review will discuss general ML and DL concepts, introduce recent and emerging literature using these methods, and cover challenges moving forward to further advance the field.
Collapse
Affiliation(s)
- Soma Kobayashi
- Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
| | - Joel H Saltz
- Department of Biomedical Informatics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
| | - Vincent W Yang
- Department of Medicine, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
- Department of Physiology and Biophysics, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY 11794, United States
| |
Collapse
|
228
|
Artificial intelligence-based morphological fingerprinting of megakaryocytes: a new tool for assessing disease in MPN patients. Blood Adv 2021; 4:3284-3294. [PMID: 32706893 DOI: 10.1182/bloodadvances.2020002230] [Citation(s) in RCA: 46] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2020] [Accepted: 06/15/2020] [Indexed: 12/14/2022] Open
Abstract
Accurate diagnosis and classification of myeloproliferative neoplasms (MPNs) requires integration of clinical, morphological, and genetic findings. Despite major advances in our understanding of the molecular and genetic basis of MPNs, the morphological assessment of bone marrow trephines (BMT) is critical in differentiating MPN subtypes and their reactive mimics. However, morphological assessment is heavily constrained by a reliance on subjective, qualitative, and poorly reproducible criteria. To improve the morphological assessment of MPNs, we have developed a machine learning approach for the automated identification, quantitative analysis, and abstract representation of megakaryocyte features using reactive/nonneoplastic BMT samples (n = 43) and those from patients with established diagnoses of essential thrombocythemia (n = 45), polycythemia vera (n = 18), or myelofibrosis (n = 25). We describe the application of an automated workflow for the identification and delineation of relevant histological features from routinely prepared BMTs. Subsequent analysis enabled the tissue diagnosis of MPN with a high predictive accuracy (area under the curve = 0.95) and revealed clear evidence of the potential to discriminate between important MPN subtypes. Our method of visually representing abstracted megakaryocyte features in the context of analyzed patient cohorts facilitates the interpretation and monitoring of samples in a manner that is beyond conventional approaches. The automated BMT phenotyping approach described here has significant potential as an adjunct to standard genetic and molecular testing in established or suspected MPN patients, either as part of the routine diagnostic pathway or in the assessment of disease progression/response to treatment.
Collapse
|
229
|
A CNN-based unified framework utilizing projection loss in unison with label noise handling for multiple Myeloma cancer diagnosis. Med Image Anal 2021; 72:102099. [PMID: 34098240 DOI: 10.1016/j.media.2021.102099] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2020] [Revised: 04/26/2021] [Accepted: 04/27/2021] [Indexed: 01/16/2023]
Abstract
Multiple Myeloma (MM) is a malignancy of plasma cells. Similar to other forms of cancer, it demands prompt diagnosis to reduce the risk of mortality. Conventional diagnostic tools are resource-intensive and hence not easily scalable for extending their reach to the masses. Advancements in deep learning have led to rapid developments in affordable, resource-optimized, easily deployable computer-assisted solutions. This work proposes a unified framework for MM diagnosis using microscopic blood cell imaging data that addresses the key challenges of inter-class visual similarity between healthy and cancer cells and of label noise in the dataset. To extract class-distinctive features, we propose a projection loss that maximizes the projection of a sample's activation onto its class vector while imposing orthogonality constraints on the class vectors. This projection loss is used along with the cross-entropy loss to design a dual-branch architecture that helps achieve improved performance and provides scope for targeting the label noise problem. Based on this architecture, two methodologies have been proposed to correct noisy labels. A coupling classifier has also been proposed to resolve conflicts in the dual-branch architecture's predictions. We utilized a large dataset of 72 subjects (26 healthy and 46 MM cancer) containing a total of 74,996 images (34,555 training and 40,441 test cell images). To date, this is the most extensive dataset on Multiple Myeloma reported in the literature. An ablation study has also been carried out. The proposed architecture performs best, with a balanced accuracy of 94.17% on binary cell classification of healthy versus cancer, in a comparative evaluation against ten state-of-the-art architectures. Extensive experiments on two additional publicly available datasets of two different modalities were also used to analyze the label noise handling capability of the proposed methodology. The code will be available at https://github.com/shivgahlout/CAD-MM.
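A hedged sketch of the projection loss idea (maximize each sample's projection onto its class vector, penalize non-orthogonal class vectors) follows; dimensions, initialization, and the orthogonality weighting are assumptions, not the authors' exact formulation.

```python
# Sketch of a projection-style loss with an orthogonality penalty (PyTorch).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ProjectionLoss(nn.Module):
    def __init__(self, n_classes, dim, ortho_weight=0.1):
        super().__init__()
        self.class_vectors = nn.Parameter(torch.randn(n_classes, dim))
        self.ortho_weight = ortho_weight

    def forward(self, embeddings, labels):
        v = F.normalize(self.class_vectors, dim=1)       # unit class vectors
        proj = embeddings @ v.t()                        # projection onto each class
        target_proj = proj[torch.arange(len(labels)), labels]
        gram = v @ v.t() - torch.eye(v.size(0), device=v.device)
        ortho_penalty = (gram ** 2).sum()                # push class vectors apart
        return -target_proj.mean() + self.ortho_weight * ortho_penalty

# In a dual-branch setup this would be combined with cross-entropy, e.g.:
# loss = F.cross_entropy(logits, labels) + proj_loss(embeddings, labels)
```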
Collapse
|
230
|
Valkonen M, Hognas G, Bova GS, Ruusuvuori P. Generalized Fixation Invariant Nuclei Detection Through Domain Adaptation Based Deep Learning. IEEE J Biomed Health Inform 2021; 25:1747-1757. [PMID: 33211668 DOI: 10.1109/jbhi.2020.3039414] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Nucleus detection is a fundamental task in histological image analysis and an important tool for many follow-up analyses. It is known that the sample preparation and scanning procedures for histological slides introduce a great amount of variability into histological images and pose challenges for automated nucleus detection. Here, we studied the effect of histopathological sample fixation on the accuracy of a deep learning based nuclei detection model trained with hematoxylin and eosin stained images. We experimented with training data that include three methods of fixation (PAXgene, formalin, and frozen) and studied the detection accuracy of various convolutional neural networks. Our results indicate that the variability introduced during sample preparation affects the generalization of a model and should be considered when building accurate and robust nuclei detection algorithms. Our dataset includes over 67,000 annotated nuclei locations from 16 patients and three different sample fixation types. The dataset provides an excellent basis for building an accurate and robust nuclei detection model, and combined with unsupervised domain adaptation, the workflow allows generalization to images from unseen domains, including different tissues and images from different labs.
Collapse
|
231
|
Zhang Z, Genc Y, Wang D, Ahsen ME, Fan X. Effect of AI Explanations on Human Perceptions of Patient-Facing AI-Powered Healthcare Systems. J Med Syst 2021; 45:64. [PMID: 33948743 DOI: 10.1007/s10916-021-01743-6] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Accepted: 04/28/2021] [Indexed: 10/21/2022]
Abstract
Ongoing research efforts have been examining how to utilize artificial intelligence technology to help healthcare consumers make sense of their clinical data, such as diagnostic radiology reports. How to promote the acceptance of such novel technology is a heated research topic. Recent studies highlight the importance of providing local explanations of AI predictions and model performance to help users determine whether to trust AI's predictions. Despite some efforts, limited empirical research has quantitatively measured how AI explanations affect healthcare consumers' perceptions of patient-facing, AI-powered healthcare systems. The aim of this study is to evaluate the effects of different AI explanations on people's perceptions of such systems. In this work, we designed and deployed a large-scale experiment (N = 3,423) on Amazon Mechanical Turk (MTurk) to evaluate the effects of AI explanations on people's perceptions in the context of comprehending radiology reports. We created four groups based on two factors, the extent of explanation for the prediction (High vs. Low Transparency) and the model performance (Good vs. Weak AI Model), and randomly assigned participants to one of the four conditions. Participants were instructed to classify a radiology report as describing a normal or abnormal finding, followed by a post-study survey to indicate their perceptions of the AI tool. We found that revealing model performance information can promote people's trust in and perceived usefulness of system outputs, while providing local explanations for the rationale of a prediction can promote understandability but not necessarily trust. We also found that when model performance is low, the more information the AI system discloses, the less people trust the system. Lastly, whether humans agree with AI predictions and whether those predictions are correct can also influence the effect of AI explanations. We conclude by discussing implications for designing AI systems that help healthcare consumers interpret diagnostic reports.
Collapse
Affiliation(s)
- Zhan Zhang
- School of Computer Science and Information Systems, Pace University, New York, USA.
| | - Yegin Genc
- School of Computer Science and Information Systems, Pace University, New York, USA
| | | | - Mehmet Eren Ahsen
- College of Business, University of Illinois At Urbana-Champaign, Champaign, USA
| | - Xiangmin Fan
- The Institute of Software, Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
232
|
van der Laak J, Litjens G, Ciompi F. Deep learning in histopathology: the path to the clinic. Nat Med 2021; 27:775-784. [PMID: 33990804 DOI: 10.1038/s41591-021-01343-4] [Citation(s) in RCA: 361] [Impact Index Per Article: 90.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2020] [Accepted: 03/31/2021] [Indexed: 02/08/2023]
Abstract
Machine learning techniques have great potential to improve medical diagnostics, offering ways to improve accuracy, reproducibility and speed, and to ease workloads for clinicians. In the field of histopathology, deep learning algorithms have been developed that perform similarly to trained pathologists for tasks such as tumor detection and grading. However, despite these promising results, very few algorithms have reached clinical implementation, challenging the balance between hope and hype for these new techniques. This Review provides an overview of the current state of the field, as well as describing the challenges that still need to be addressed before artificial intelligence in histopathology can achieve clinical value.
Collapse
Affiliation(s)
- Jeroen van der Laak
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands; Center for Medical Image Science and Visualization, Linköping University, Linköping, Sweden.
| | - Geert Litjens
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
| | - Francesco Ciompi
- Department of Pathology, Radboud University Medical Center, Nijmegen, the Netherlands
| |
Collapse
|
233
|
Roszkowiak L, Korzynska A, Siemion K, Zak J, Pijanowska D, Bosch R, Lejeune M, Lopez C. System for quantitative evaluation of DAB&H-stained breast cancer biopsy digital images (CHISEL). Sci Rep 2021; 11:9291. [PMID: 33927266 PMCID: PMC8085130 DOI: 10.1038/s41598-021-88611-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Accepted: 04/14/2021] [Indexed: 02/02/2023] Open
Abstract
This study presents CHISEL (Computer-assisted Histopathological Image Segmentation and EvaLuation), an end-to-end system capable of quantitative evaluation of benign and malignant (breast cancer) digitized tissue samples with immunohistochemical nuclear staining of varying intensity and compactness. It stands out with the proposed seamless segmentation based on regions-of-interest cropping, as well as an explicit nuclei cluster splitting step followed by boundary refinement. The system utilizes machine learning and recursive local processing to eliminate distorted (inaccurate) outlines. The method was validated using two labeled datasets, which confirmed the relevance of the achieved results. The evaluation was based on the IISPV dataset of biopsy tissue from breast cancer patients, with markers of T cells, along with the Warwick Beta Cell Dataset of DAB&H-stained tissue from postmortem diabetes patients. Based on the comparison of the ground truth with the detected and classified objects, we conclude that the proposed method achieves results better than or similar to state-of-the-art methods. This system deals with the complex problem of nuclei quantification in digitized images of immunohistochemically stained tissue sections, achieving its best results for DAB&H-stained breast cancer tissue samples. Our method comes with a user-friendly graphical interface and was optimized to fully utilize the available computing power while remaining accessible to users with fewer resources than deep learning techniques require.
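The cluster-splitting step called out above is commonly implemented as a distance-transform watershed; the sketch below shows that generic variant (CHISEL's actual splitting and boundary refinement differ, so treat this as an assumption-laden stand-in).

```python
# Generic nuclei-cluster splitting via distance-transform watershed.
import numpy as np
from scipy import ndimage as ndi
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def split_touching_nuclei(binary_mask, min_distance=10):
    distance = ndi.distance_transform_edt(binary_mask)
    peaks = peak_local_max(distance, min_distance=min_distance,
                           labels=binary_mask)              # one peak per nucleus
    markers = np.zeros(binary_mask.shape, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)
    return watershed(-distance, markers, mask=binary_mask)  # labeled nuclei
```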
Collapse
Affiliation(s)
- Lukasz Roszkowiak
- Nalecz Institute of Biocybernetics and Biomedical Engineering Polish Academy of Sciences, Ks. Trojdena 4 st., 02-109, Warsaw, Poland.
| | - Anna Korzynska
- Nalecz Institute of Biocybernetics and Biomedical Engineering Polish Academy of Sciences, Ks. Trojdena 4 st., 02-109, Warsaw, Poland
| | - Krzysztof Siemion
- Nalecz Institute of Biocybernetics and Biomedical Engineering Polish Academy of Sciences, Ks. Trojdena 4 st., 02-109, Warsaw, Poland
- Medical Pathomorphology Department, Medical University of Bialystok, Białystok, Poland
| | - Jakub Zak
- Nalecz Institute of Biocybernetics and Biomedical Engineering Polish Academy of Sciences, Ks. Trojdena 4 st., 02-109, Warsaw, Poland
| | - Dorota Pijanowska
- Nalecz Institute of Biocybernetics and Biomedical Engineering Polish Academy of Sciences, Ks. Trojdena 4 st., 02-109, Warsaw, Poland
| | - Ramon Bosch
- Pathology Department, Hospital de Tortosa Verge de la Cinta, Institut d'Investigacio Sanitaria Pere Virgili (IISPV), URV, Tortosa, Spain
| | - Marylene Lejeune
- Molecular Biology and Research Section, Hospital de Tortosa Verge de la Cinta, Institut d'Investigacio Sanitaria Pere Virgili (IISPV), URV, Tortosa, Spain
| | - Carlos Lopez
- Molecular Biology and Research Section, Hospital de Tortosa Verge de la Cinta, Institut d'Investigacio Sanitaria Pere Virgili (IISPV), URV, Tortosa, Spain
| |
Collapse
|
234
|
Elkhader J, Elemento O. Artificial intelligence in oncology: From bench to clinic. Semin Cancer Biol 2021; 84:113-128. [PMID: 33915289 DOI: 10.1016/j.semcancer.2021.04.013] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/27/2020] [Revised: 03/22/2021] [Accepted: 04/15/2021] [Indexed: 02/07/2023]
Abstract
In the past few years, Artificial Intelligence (AI) techniques have been applied to almost every facet of oncology, from basic research to drug development and clinical care. In the clinical arena where AI has perhaps received the most attention, AI is showing promise in enhancing and automating image-based diagnostic approaches in fields such as radiology and pathology. Robust AI applications, which retain high performance and reproducibility over multiple datasets, extend from predicting indications for drug development to improving clinical decision support using electronic health record data. In this article, we review some of these advances. We also introduce common concepts and fundamentals of AI and its various uses, along with its caveats, to provide an overview of the opportunities and challenges in the field of oncology. Leveraging AI techniques productively to provide better care throughout a patient's medical journey can fuel the predictive promise of precision medicine.
Collapse
Affiliation(s)
- Jamal Elkhader
- HRH Prince Alwaleed Bin Talal Bin Abdulaziz Alsaud Institute for Computational Biomedicine, Dept. of Physiology and Biophysics, Weill Cornell Medicine, New York, NY 10021, USA; Caryl and Israel Englander Institute for Precision Medicine, Weill Cornell Medicine, New York, NY, 10021, USA; Sandra and Edward Meyer Cancer Center, Weill Cornell Medicine, New York, NY, 10065, USA; Tri-Institutional Training Program in Computational Biology and Medicine, New York, NY, 10065, USA
| | - Olivier Elemento
- HRH Prince Alwaleed Bin Talal Bin Abdulaziz Alsaud Institute for Computational Biomedicine, Dept. of Physiology and Biophysics, Weill Cornell Medicine, New York, NY 10021, USA; Caryl and Israel Englander Institute for Precision Medicine, Weill Cornell Medicine, New York, NY, 10021, USA; Sandra and Edward Meyer Cancer Center, Weill Cornell Medicine, New York, NY, 10065, USA; Tri-Institutional Training Program in Computational Biology and Medicine, New York, NY, 10065, USA.
| |
Collapse
|
235
|
Automatic Detection of Melanins and Sebums from Skin Images Using a Generative Adversarial Network. Cognit Comput 2021. [DOI: 10.1007/s12559-021-09870-5] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
236
|
Lapierre-Landry M, Liu Z, Ling S, Bayat M, Wilson DL, Jenkins MW. Nuclei Detection for 3D Microscopy With a Fully Convolutional Regression Network. IEEE ACCESS : PRACTICAL INNOVATIONS, OPEN SOLUTIONS 2021; 9:60396-60408. [PMID: 35024261 PMCID: PMC8751907 DOI: 10.1109/access.2021.3073894] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Advances in three-dimensional microscopy and tissue clearing are enabling whole-organ imaging with single-cell resolution. Fast and reliable image processing tools are needed to analyze the resulting image volumes, including automated cell detection, cell counting, and cell analytics. Deep learning approaches have shown promising results in two- and three-dimensional nuclei detection tasks; however, detecting overlapping or non-spherical nuclei of different sizes and shapes in the presence of a blurring point spread function remains challenging and often leads to incorrect nuclei merging and splitting. Here we present a new regression-based fully convolutional network that, combined with V-net, a popular three-dimensional semantic-segmentation architecture, locates a thousand nuclei centroids with high accuracy in under a minute. High nuclei detection F1-scores of 95.3% and 92.5% were obtained in two different whole quail embryonic hearts, a tissue type that is difficult to segment because of its high cell density and heterogeneous, elliptical nuclei. Similarly high scores were obtained in the mouse brain stem, demonstrating that this approach transfers well to nuclei of different shapes and intensities. Finally, spatial statistics were performed on the resulting centroids. The spatial distribution of nuclei obtained by our approach most resembles that of manually identified nuclei, indicating that this approach could serve in future spatial analyses of cell organization.
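The regression formulation above can be sketched as rendering Gaussian proximity maps from annotated centroids for training, then reading detections back out as local maxima of the network's prediction; the sigma and thresholds below are illustrative assumptions.

```python
# Sketch: centroid-to-density-map targets and peak readout for 3D volumes.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max

def centroid_target(centroids, shape, sigma=3.0):
    """Render annotated (z, y, x) centroids as a smooth regression target."""
    target = np.zeros(shape, dtype=np.float32)
    for z, y, x in centroids:
        target[int(z), int(y), int(x)] = 1.0
    return gaussian_filter(target, sigma=sigma)

def detected_centroids(prediction, threshold=0.2, min_distance=4):
    """Read centroids back out of a network's predicted map."""
    return peak_local_max(prediction, min_distance=min_distance,
                          threshold_abs=threshold)
```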
Collapse
Affiliation(s)
- Maryse Lapierre-Landry
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
| | - Zexuan Liu
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
| | - Shan Ling
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
| | - Mahdi Bayat
- Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH 44106, USA
| | - David L Wilson
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Department of Radiology, Case Western Reserve University, Cleveland, OH 44106, USA
| | - Michael W Jenkins
- Department of Biomedical Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Department of Pediatrics, Case Western Reserve University, Cleveland, OH 44106, USA
| |
Collapse
|
237
|
Franklin MM, Schultz FA, Tafoya MA, Kerwin AA, Broehm CJ, Fischer EG, Gullapalli RR, Clark DP, Hanson JA, Martin DR. A Deep Learning Convolutional Neural Network Can Differentiate Between Helicobacter Pylori Gastritis and Autoimmune Gastritis With Results Comparable to Gastrointestinal Pathologists. Arch Pathol Lab Med 2021; 146:117-122. [PMID: 33861314 DOI: 10.5858/arpa.2020-0520-oa] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/20/2021] [Indexed: 12/16/2022]
Abstract
CONTEXT.— Pathology studies using convolutional neural networks (CNNs) have focused on neoplasms, while studies in inflammatory pathology are rare. We previously demonstrated that a CNN differentiates reactive gastropathy, Helicobacter pylori gastritis (HPG), and normal gastric mucosa. OBJECTIVE.— To determine whether a CNN can differentiate the following 2 gastric inflammatory patterns: autoimmune gastritis (AG) and HPG. DESIGN.— Gold standard diagnoses were blindly established by 2 gastrointestinal (GI) pathologists. One hundred eighty-seven cases were scanned for analysis by HALO-AI. All levels and tissue fragments per slide were included for analysis. The cases were randomized: 112 (60%; 60 HPG, 52 AG) in the training set and 75 (40%; 40 HPG, 35 AG) in the test set. A HALO-AI correct area distribution (AD) cutoff of 50% or more was required to credit the CNN with the correct diagnosis. The test set was blindly reviewed by pathologists with different levels of GI pathology expertise as follows: 2 GI pathologists, 2 general surgical pathologists, and 2 residents. Each pathologist rendered their preferred diagnosis, HPG or AG. RESULTS.— At the HALO-AI AD cutoff of 50% or more, the CNN results were 100% concordant with the gold standard diagnoses. On average, AG cases had 84.7% HALO-AI AG AD and HPG cases had 87.3% HALO-AI HPG AD. The GI pathologists, general surgical pathologists, and residents were, on average, 100%, 86%, and 57% concordant with the gold standard diagnoses, respectively. CONCLUSIONS.— A CNN can distinguish between cases of HPG and AG with accuracy equal to GI pathologists.
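The 50%-or-more area distribution rule is simple enough to state as code; the sketch below illustrates the scoring logic only (it is not HALO-AI's implementation), with example fractions taken from the averages reported above.

```python
# Scoring rule sketch: credit the CNN only when the winning class covers
# at least the cutoff fraction of classified tissue area.
def cnn_diagnosis(area_by_class, cutoff=0.5):
    label, fraction = max(area_by_class.items(), key=lambda kv: kv[1])
    return label if fraction >= cutoff else None  # None = not credited

assert cnn_diagnosis({"HPG": 0.873, "AG": 0.127}) == "HPG"
assert cnn_diagnosis({"AG": 0.847, "HPG": 0.153}) == "AG"
```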
Collapse
Affiliation(s)
- Michael M Franklin
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
| | - Fred A Schultz
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
| | - Marissa A Tafoya
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
| | - Audra A Kerwin
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
| | - Cory J Broehm
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
| | - Edgar G Fischer
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
| | - Rama R Gullapalli
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
| | - Douglas P Clark
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
| | - Joshua A Hanson
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
| | - David R Martin
- From the Department of Pathology, University of New Mexico School of Medicine, Albuquerque. Hanson and Martin are co-senior authors on the manuscript
| |
Collapse
|
238
|
Liu L, Chen X, Wong KC. Early Cancer Detection from Genome-wide Cell-free DNA Fragmentation via Shuffled Frog Leaping Algorithm and Support Vector Machine. Bioinformatics 2021; 37:3099-3105. [PMID: 33837381 DOI: 10.1093/bioinformatics/btab236] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2020] [Revised: 03/19/2021] [Accepted: 04/08/2021] [Indexed: 01/09/2023] Open
Abstract
MOTIVATION Early cancer detection is important for reducing patient mortality. Although machine learning has been widely employed in this context, deficiencies remain. In this work, we studied different machine learning algorithms for early cancer detection and propose an Adaptive Support Vector Machine (ASVM) method that synergizes the Shuffled Frog Leaping Algorithm (SFLA) and the Support Vector Machine (SVM). RESULTS As ASVM regulates SVM parameter adaptation based on data characteristics, the experimental results demonstrated the robust generalization capability of ASVM on different datasets under different settings; for instance, ASVM enhanced sensitivity by over 10% for early cancer detection compared with SVM. Besides, our proposed ASVM outperformed Grid Search + SVM and Random Search + SVM by significant margins in terms of the area under the ROC curve (AUC) (0.938 vs. 0.922 vs. 0.921). AVAILABILITY The proposed algorithm and dataset are available at https://github.com/ElaineLIU-920/ASVM-for-Early-Cancer-Detection. SUPPLEMENTARY INFORMATION Supplementary data are available at Bioinformatics online.
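A heavily reduced sketch of the SFLA-plus-SVM idea follows: a small population searches (log C, log gamma) space, with the worst member leaping toward the best. The real SFLA partitions frogs into memeplexes; this single-population version is a simplification, not the authors' algorithm.

```python
# Simplified frog-leaping-style search over SVM hyperparameters.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(log_params, X, y):
    C, gamma = np.exp(log_params)                 # search in log-space
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

def sfla_svm(X, y, n_frogs=12, n_iters=20, seed=0):
    rng = np.random.default_rng(seed)
    frogs = rng.uniform(-3, 3, size=(n_frogs, 2))  # (log C, log gamma)
    scores = np.array([fitness(f, X, y) for f in frogs])
    for _ in range(n_iters):
        worst, best = np.argmin(scores), np.argmax(scores)
        candidate = frogs[worst] + rng.random() * (frogs[best] - frogs[worst])
        cand_score = fitness(candidate, X, y)
        if cand_score > scores[worst]:            # accept the leap
            frogs[worst], scores[worst] = candidate, cand_score
        else:                                     # else randomize the worst frog
            frogs[worst] = rng.uniform(-3, 3, size=2)
            scores[worst] = fitness(frogs[worst], X, y)
    return np.exp(frogs[np.argmax(scores)])       # best (C, gamma) found
```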
Collapse
Affiliation(s)
- Linjing Liu
- Department of Computer Science, City University of Hong Kong, Hong Kong, China
| | - Xingjian Chen
- Department of Computer Science, City University of Hong Kong, Hong Kong, China
| | - Ka-Chun Wong
- Department of Computer Science, City University of Hong Kong, Hong Kong, China
| |
Collapse
|
239
|
Stroke Lesion Detection and Analysis in MRI Images Based on Deep Learning. JOURNAL OF HEALTHCARE ENGINEERING 2021. [DOI: 10.1155/2021/5524769] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
Stroke is a cerebrovascular disease that severely damages people's lives and health. Quantitative analysis of brain MRI images plays an important role in the diagnosis and treatment of stroke. Deep neural networks, with their capacity to learn from massive data, provide a powerful tool for lesion detection. To study the properties of stroke lesions and enable intelligent automatic detection, we collaborated with two authoritative hospitals and collected 5,668 brain MRI images from 300 ischemic stroke patients. All lesion regions in the images were accurately labeled by professional doctors to ensure the reliability and validity of the data. Three categories of deep learning object detection networks, Faster R-CNN, YOLOv3, and SSD, were applied to implement automatic lesion detection, with a best precision of 89.77%. Statistical analysis of lesion locations, shapes, and possibly related diseases was also conducted. This research contributes to intelligent assisted diagnosis and to the prevention and treatment of ischemic stroke.
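As a sketch of how one of the three detector families above is typically adapted to a two-class lesion task, the snippet below swaps the box-predictor head of a pretrained torchvision Faster R-CNN; dataset wiring and the training loop are omitted, and this is not the study's code.

```python
# Adapting a pretrained Faster R-CNN to background-vs-lesion detection.
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def lesion_detector(num_classes=2):  # class 0 = background, class 1 = lesion
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model  # fine-tune on (image, {boxes, labels}) pairs
```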
Collapse
|
240
|
Wen Z, Feng R, Liu J, Li Y, Ying S. GCSBA-Net: Gabor-Based and Cascade Squeeze Bi-Attention Network for Gland Segmentation. IEEE J Biomed Health Inform 2021; 25:1185-1196. [PMID: 32780703 DOI: 10.1109/jbhi.2020.3015844] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Colorectal cancer is the second and the third most common cancer in women and men, respectively. Pathological diagnosis is the "gold standard" for tumor diagnosis. Accurate segmentation of glands from tissue images is a crucial step in assisting pathologists in their diagnosis. Typical methods for gland segmentation form a dense image representation while ignoring texture and multi-scale attention information. Therefore, we utilize a Gabor-based module to extract texture information at different scales and orientations in histopathology images. This paper also designs a Cascade Squeeze Bi-Attention (CSBA) module. Specifically, we add an Atrous Cascade Spatial Pyramid (ACSP), a Squeeze Position Attention (SPA) module, and a Squeeze Channel Attention (SCA) module to model semantic correlation and maintain multi-level aggregation on the spatial pyramid with different dilations. In addition, to address imbalanced data distribution and boundary blur, we propose a hybrid loss function that responds better to object boundaries. The experimental results show that the proposed method achieves state-of-the-art performance on the GlaS challenge dataset and the CRAG colorectal adenocarcinoma dataset.
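The Gabor front-end can be illustrated with a fixed filter bank over several scales and orientations, as below; the paper's module learns its Gabor parameters inside the network, so this fixed bank is only an approximation.

```python
# Fixed Gabor filter bank: texture responses at several scales/orientations.
import cv2
import numpy as np

def gabor_bank(gray_img, scales=(7, 11, 15), n_orient=4):
    responses = []
    for ksize in scales:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            kern = cv2.getGaborKernel((ksize, ksize), sigma=ksize / 3.0,
                                      theta=theta, lambd=ksize / 2.0,
                                      gamma=0.5, psi=0)
            responses.append(cv2.filter2D(gray_img, cv2.CV_32F, kern))
    return np.stack(responses)  # (n_scales * n_orient, H, W) feature stack
```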
Collapse
|
241
|
Guy S, Jacquet C, Tsenkoff D, Argenson JN, Ollivier M. Deep learning for the radiographic diagnosis of proximal femur fractures: Limitations and programming issues. Orthop Traumatol Surg Res 2021; 107:102837. [PMID: 33529731 DOI: 10.1016/j.otsr.2021.102837] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/11/2020] [Revised: 08/08/2020] [Accepted: 08/17/2020] [Indexed: 02/03/2023]
Abstract
INTRODUCTION Radiology is one of the domains where artificial intelligence (AI) yields encouraging results, with diagnostic accuracy approaching that of experienced radiologists and physicians. Diagnostic errors in traumatology are rare but can have serious functional consequences. Using AI as a radiological diagnostic aid may be beneficial in the emergency room. Thus, effective, low-cost software that helps with making radiographic diagnoses would be a relevant tool for current clinical practice, although this concept has rarely been evaluated in orthopedics for proximal femur fractures (PFF). This led us to conduct a prospective study with the goals of: 1) programming deep learning software to help make the diagnosis of PFF on radiographs and 2) evaluating its performance. HYPOTHESIS It is possible to program effective deep learning software to help make the diagnosis of PFF based on a limited number of radiographs. METHODS Our database consisted of 1,309 radiographs: 963 with a PFF and 346 without. The sample size was increased 8-fold (resulting in 10,472 radiographs) using a validated technique. Each radiograph was evaluated by an orthopedic surgeon using RectLabel™ software (https://rectlabel.com), differentiating between healthy and fractured zones. Fractures were classified according to the AO system. The deep learning algorithm was programmed with TensorFlow™ (Google Brain, Santa Clara, CA, USA, tensorflow.org). In all, 9,425 annotated radiographs (90%) were used for the training phase and 1,074 (10%) for the test phase. RESULTS The sensitivity of the algorithm was 61% for femoral neck fractures and 67% for trochanteric fractures. The specificity was 67% and 69%, the positive predictive value was 55% and 56%, and the negative predictive value was 74% and 78%, respectively. CONCLUSION Our results are not good enough for the algorithm to be used in current clinical practice. Programming deep learning software with sufficient diagnostic accuracy can only be done with several tens of thousands of radiographs, or by using transfer learning. LEVEL OF EVIDENCE III; Diagnostic studies, Study of nonconsecutive patients, without consistently applied reference "gold" standard.
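The four reported screening metrics derive from a single confusion matrix; the sketch below shows the arithmetic with hypothetical counts (not the study's raw data, which the abstract does not give).

```python
# Screening metrics from a binary confusion matrix.
def screening_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),  # fractures correctly flagged
        "specificity": tn / (tn + fp),  # non-fractures correctly cleared
        "ppv":         tp / (tp + fp),  # flagged cases that are real fractures
        "npv":         tn / (tn + fn),  # cleared cases that are truly healthy
    }

print(screening_metrics(tp=61, fp=33, fn=39, tn=67))  # hypothetical counts
```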
Collapse
Affiliation(s)
- Sylvain Guy
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France.
| | - Christophe Jacquet
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France
| | - Damien Tsenkoff
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France
| | - Jean-Noël Argenson
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France
| | - Matthieu Ollivier
- Institut du Mouvement et de l'appareil Locomoteur, 270, boulevard de Sainte Marguerite, 13009 Marseille, France
| |
Collapse
|
242
|
Sobhani F, Robinson R, Hamidinekoo A, Roxanis I, Somaiah N, Yuan Y. Artificial intelligence and digital pathology: Opportunities and implications for immuno-oncology. Biochim Biophys Acta Rev Cancer 2021; 1875:188520. [PMID: 33561505 PMCID: PMC9062980 DOI: 10.1016/j.bbcan.2021.188520] [Citation(s) in RCA: 16] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2020] [Revised: 01/04/2021] [Accepted: 01/30/2021] [Indexed: 02/08/2023]
Abstract
The field of immuno-oncology has expanded rapidly over the past decade, but key questions remain. How does tumour-immune interaction regulate disease progression? How can we prospectively identify patients who will benefit from immunotherapy? Identifying measurable features of the tumour immune-microenvironment which have prognostic or predictive value will be key to making meaningful gains in these areas. Recent developments in deep learning enable big-data analysis of pathological samples. Digital approaches allow data to be acquired, integrated and analysed far beyond what is possible with conventional techniques, and to do so efficiently and at scale. This has the potential to reshape what can be achieved in terms of volume, precision and reliability of output, enabling data for large cohorts to be summarised and compared. This review examines applications of artificial intelligence (AI) to important questions in immuno-oncology (IO). We discuss general considerations that need to be taken into account before AI can be applied in any clinical setting. We describe AI methods that have been applied to the field of IO to date and present several examples of their use.
Collapse
Affiliation(s)
- Faranak Sobhani
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK.
| | - Ruth Robinson
- Division of Radiotherapy and Imaging, Institute of Cancer Research, The Royal Marsden NHS Foundation Trust, London, UK.
| | - Azam Hamidinekoo
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK.
| | - Ioannis Roxanis
- The Breast Cancer Now Toby Robins Research Centre, The Institute of Cancer Research, London, UK.
| | - Navita Somaiah
- Division of Radiotherapy and Imaging, Institute of Cancer Research, The Royal Marsden NHS Foundation Trust, London, UK.
| | - Yinyin Yuan
- Division of Molecular Pathology, The Institute of Cancer Research, London, UK; Centre for Evolution and Cancer, The Institute of Cancer Research, London, UK.
| |
Collapse
|
243
|
Cheng J, Liu Y, Huang W, Hong W, Wang L, Zhan X, Han Z, Ni D, Huang K, Zhang J. Computational Image Analysis Identifies Histopathological Image Features Associated With Somatic Mutations and Patient Survival in Gastric Adenocarcinoma. Front Oncol 2021; 11:623382. [PMID: 33869007 PMCID: PMC8045755 DOI: 10.3389/fonc.2021.623382] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/30/2020] [Accepted: 03/15/2021] [Indexed: 12/24/2022] Open
Abstract
Computational analysis of histopathological images can identify sub-visual, objective image features that may not be visually distinguishable by the human eye, and hence provides better modeling of disease phenotypes. This study aims to investigate whether specific image features are associated with somatic mutations and patient survival in gastric adenocarcinoma (sample size = 310). An automated image analysis pipeline was developed to extract quantitative morphological features from H&E stained whole-slide images. We found that four frequently somatically mutated genes (TP53, ARID1A, OBSCN, and PIK3CA) were significantly associated with tumor morphological changes. A prognostic model built on the image features significantly stratified patients into low-risk and high-risk groups (log-rank test p-value = 2.6e-4). Multivariable Cox regression showed that the model-predicted risk index was a prognostic factor additional to tumor grade and stage. Gene ontology enrichment analysis showed that the genes whose expression correlated most with the contributing features in the prognostic model were enriched in biological processes such as cell cycle and muscle contraction. These results demonstrate that histopathological image features can reflect underlying somatic mutations and identify high-risk patients who may benefit from more precise treatment regimens. Both the image features and the pipeline are highly interpretable, enabling translational applications.
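The survival-analysis step described above (image features into a Cox model, then risk stratification checked with a log-rank test) can be sketched with lifelines; the column names and median-risk split below are assumptions for illustration.

```python
# Cox regression on image features plus a median-risk log-rank comparison.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.statistics import logrank_test

def risk_stratify(df):
    """df: one row per patient, image-feature columns plus 'time' and 'event'."""
    cph = CoxPHFitter().fit(df, duration_col="time", event_col="event")
    risk = cph.predict_partial_hazard(df)
    high = risk > risk.median()                     # high- vs low-risk groups
    result = logrank_test(df.loc[high, "time"], df.loc[~high, "time"],
                          df.loc[high, "event"], df.loc[~high, "event"])
    return cph, result.p_value
```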
Collapse
Affiliation(s)
- Jun Cheng
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| | - Yuting Liu
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China.,Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
| | - Wei Huang
- Department of Radiation Oncology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, School of Medicine, South China University of Technology, Guangzhou, China
| | - Wenhui Hong
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China.,Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
| | - Lingling Wang
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China.,Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China
| | - Xiaohui Zhan
- School of Basic Medicine, Chongqing Medical University, Chongqing, China
| | - Zhi Han
- Department of Medicine, Indiana University, School of Medicine, Indianapolis, IN, United States
| | - Dong Ni
- National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Shenzhen University, Shenzhen, China; Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
| | - Kun Huang
- Department of Medicine, Indiana University, School of Medicine, Indianapolis, IN, United States
| | - Jie Zhang
- Department of Medical and Molecular Genetics, Indiana University School of Medicine, Indianapolis, IN, United States
| |
Collapse
|
244
|
Alzubaidi L, Zhang J, Humaidi AJ, Al-Dujaili A, Duan Y, Al-Shamma O, Santamaría J, Fadhel MA, Al-Amidie M, Farhan L. Review of deep learning: concepts, CNN architectures, challenges, applications, future directions. JOURNAL OF BIG DATA 2021; 8:53. [PMID: 33816053 PMCID: PMC8010506 DOI: 10.1186/s40537-021-00444-8] [Citation(s) in RCA: 1019] [Impact Index Per Article: 254.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/21/2021] [Accepted: 03/22/2021] [Indexed: 05/04/2023]
Abstract
In the last few years, the deep learning (DL) computing paradigm has been deemed the gold standard in the machine learning (ML) community. It has gradually become the most widely used computational approach in the field of ML, achieving outstanding results on several complex cognitive tasks and matching or even beating human performance. One of the benefits of DL is its ability to learn from massive amounts of data. The DL field has grown rapidly in the last few years and has been used successfully to address a wide range of traditional applications. More importantly, DL has outperformed well-known ML techniques in many domains, e.g., cybersecurity, natural language processing, bioinformatics, robotics and control, and medical information processing, among many others. Although several works have reviewed the state of the art in DL, each of them has tackled only one aspect of it, leading to an overall lack of a holistic view. Therefore, in this contribution we take a more holistic approach, providing a more suitable starting point from which to develop a full understanding of DL. Specifically, this review attempts to give a more comprehensive survey of the most important aspects of DL, including the enhancements recently added to the field. In particular, the paper outlines the importance of DL and presents the types of DL techniques and networks. It then presents convolutional neural networks (CNNs), the most utilized DL network type, and describes the development of CNN architectures together with their main features, starting with the AlexNet network and closing with the High-Resolution Network (HR.Net). Finally, we present the challenges and suggested solutions to help researchers understand the existing research gaps, followed by a list of the major DL applications. Computational tools including FPGAs, GPUs, and CPUs are summarized along with a description of their influence on DL. The paper ends with an evolution matrix, benchmark datasets, and a summary and conclusion.
Collapse
Affiliation(s)
- Laith Alzubaidi
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000 Australia
- AlNidhal Campus, University of Information Technology & Communications, Baghdad, 10001 Iraq
| | - Jinglan Zhang
- School of Computer Science, Queensland University of Technology, Brisbane, QLD 4000 Australia
| | - Amjad J. Humaidi
- Control and Systems Engineering Department, University of Technology, Baghdad, 10001 Iraq
| | - Ayad Al-Dujaili
- Electrical Engineering Technical College, Middle Technical University, Baghdad, 10001 Iraq
| | - Ye Duan
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211 USA
| | - Omran Al-Shamma
- AlNidhal Campus, University of Information Technology & Communications, Baghdad, 10001 Iraq
| | - J. Santamaría
- Department of Computer Science, University of Jaén, 23071 Jaén, Spain
| | - Mohammed A. Fadhel
- College of Computer Science and Information Technology, University of Sumer, Thi Qar, 64005 Iraq
| | - Muthana Al-Amidie
- Faculty of Electrical Engineering & Computer Science, University of Missouri, Columbia, MO 65211 USA
| | - Laith Farhan
- School of Engineering, Manchester Metropolitan University, Manchester, M1 5GD UK
| |
Collapse
|
245
|
Wang KS, Yu G, Xu C, Meng XH, Zhou J, Zheng C, Deng Z, Shang L, Liu R, Su S, Zhou X, Li Q, Li J, Wang J, Ma K, Qi J, Hu Z, Tang P, Deng J, Qiu X, Li BY, Shen WD, Quan RP, Yang JT, Huang LY, Xiao Y, Yang ZC, Li Z, Wang SC, Ren H, Liang C, Guo W, Li Y, Xiao H, Gu Y, Yun JP, Huang D, Song Z, Fan X, Chen L, Yan X, Li Z, Huang ZC, Huang J, Luttrell J, Zhang CY, Zhou W, Zhang K, Yi C, Wu C, Shen H, Wang YP, Xiao HM, Deng HW. Accurate diagnosis of colorectal cancer based on histopathology images using artificial intelligence. BMC Med 2021; 19:76. [PMID: 33752648 PMCID: PMC7986569 DOI: 10.1186/s12916-021-01942-5] [Citation(s) in RCA: 75] [Impact Index Per Article: 18.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/22/2020] [Accepted: 02/16/2021] [Indexed: 02/06/2023] Open
Abstract
BACKGROUND Accurate and robust pathological image analysis for colorectal cancer (CRC) diagnosis is time-consuming and knowledge-intensive, but it is essential for the treatment of CRC patients. The current heavy workload of pathologists in clinics/hospitals may easily lead to unconscious misdiagnosis of CRC in daily image analyses. METHODS Based on a state-of-the-art transfer-learned deep convolutional neural network in artificial intelligence (AI), we proposed a novel patch aggregation strategy for clinical CRC diagnosis using weakly labeled pathological whole-slide image (WSI) patches. This approach was trained and validated using an unprecedentedly large number of patches (170,099) from > 14,680 WSIs of > 9,631 subjects, covering diverse and representative clinical cases from multiple independent sources across China, the USA, and Germany. RESULTS Our AI tool consistently and nearly perfectly agreed with the diagnoses of most experienced expert pathologists (average Kappa statistic 0.896), and often performed better, when tested in diagnosing CRC WSIs from multiple centers. The average area under the receiver operating characteristic curve (AUC) of the AI was greater than that of the pathologists (0.988 vs 0.970) and the best among applications of other AI methods to CRC diagnosis. Our AI-generated heatmaps highlight the image regions of cancer tissue/cells. CONCLUSIONS This generalizable AI system can handle large amounts of WSIs consistently and robustly without the potential bias due to fatigue commonly experienced by clinical pathologists. It will drastically alleviate the heavy clinical burden of daily pathology diagnosis and improve treatment for CRC patients. This tool is generalizable to other cancer diagnoses based on image recognition.
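A minimal version of the patch-aggregation idea is sketched below: per-patch probabilities from the CNN are pooled into one slide-level call. The thresholds are illustrative assumptions; the paper's aggregation strategy is more elaborate.

```python
# Pooling weakly labelled patch predictions into a slide-level diagnosis.
import numpy as np

def slide_prediction(patch_probs, patch_threshold=0.5, slide_threshold=0.1):
    """patch_probs: per-patch cancer probabilities for one whole-slide image."""
    positive_fraction = np.mean(np.asarray(patch_probs) > patch_threshold)
    return "cancer" if positive_fraction > slide_threshold else "benign"

print(slide_prediction([0.9, 0.8, 0.2, 0.1, 0.05]))  # toy example -> "cancer"
```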
Affiliation(s)
- K S Wang: Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China; Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- G Yu: Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- C Xu: Department of Biostatistics and Epidemiology, The University of Oklahoma Health Sciences Center, Oklahoma City, OK, 73104, USA
- X H Meng: Laboratory of Molecular and Statistical Genetics, College of Life Sciences, Hunan Normal University, Changsha, 410081, Hunan, China
- J Zhou: Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China; Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- C Zheng: Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China; Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- Z Deng: Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China; Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- L Shang: Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- R Liu: Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- S Su: Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- X Zhou: Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- Q Li: Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- J Li: Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- J Wang: Department of Pathology, Xiangya Hospital, Central South University, Changsha, 410078, Hunan, China
- K Ma: Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Qi: Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- Z Hu: Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- P Tang: Department of Pathology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Deng: Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA
- X Qiu: Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- B Y Li: Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- W D Shen: Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- R P Quan: Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- J T Yang: Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- L Y Huang: Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- Y Xiao: Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- Z C Yang: Department of Pharmacology, Xiangya School of Pharmaceutical Sciences, Central South University, Changsha, 410078, Hunan, China
- Z Li: School of Life Sciences, Central South University, Changsha, 410013, Hunan, China
- S C Wang: College of Information Science and Engineering, Hunan Normal University, Changsha, 410081, Hunan, China
- H Ren: Department of Pathology, Gongli Hospital, Second Military Medical University, Shanghai, 200135, China; Department of Pathology, the Peace Hospital Affiliated to Changzhi Medical College, Changzhi, 046000, China
- C Liang: Pathological Laboratory of Adicon Medical Laboratory Co., Ltd, Hangzhou, 310023, Zhejiang, China
- W Guo: Department of Pathology, First Affiliated Hospital of Hunan Normal University, The People's Hospital of Hunan Province, Changsha, 410005, Hunan, China
- Y Li: Department of Pathology, First Affiliated Hospital of Hunan Normal University, The People's Hospital of Hunan Province, Changsha, 410005, Hunan, China
- H Xiao: Department of Pathology, the Third Xiangya Hospital, Central South University, Changsha, 410013, Hunan, China
- Y Gu: Department of Pathology, the Third Xiangya Hospital, Central South University, Changsha, 410013, Hunan, China
- J P Yun: Department of Pathology, Sun Yat-Sen University Cancer Center, Guangzhou, 510060, China
- D Huang: Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Z Song: Department of Pathology, Chinese PLA General Hospital, Beijing, 100853, China
- X Fan: Department of Pathology, Nanjing Drum Tower Hospital, the Affiliated Hospital of Nanjing University Medical School, Nanjing, 210008, China
- L Chen: Department of Pathology, The First Affiliated Hospital, Air Force Medical University, Xi'an, 710032, China
- X Yan: Institute of Pathology and Southwest Cancer Center, Southwest Hospital, Third Military Medical University, Chongqing, 400038, China
- Z Li: Department of Pathology, Guangdong Provincial People's Hospital, Guangdong Academy of Medical Sciences, Guangzhou, 510080, China
- Z C Huang: Department of Biomedical Engineering, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Huang: Department of Anatomy and Neurobiology, School of Basic Medical Science, Central South University, Changsha, 410013, Hunan, China
- J Luttrell: School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, 39406, USA
- C Y Zhang: School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, 39406, USA
- W Zhou: College of Computing, Michigan Technological University, Houghton, MI, 49931, USA
- K Zhang: Department of Computer Science, Bioinformatics Facility of Xavier NIH RCMI Cancer Research Center, Xavier University of Louisiana, New Orleans, LA, 70125, USA
- C Yi: Department of Pathology, Ochsner Medical Center, New Orleans, LA, 70121, USA
- C Wu: Department of Statistics, Florida State University, Tallahassee, FL, 32306, USA
- H Shen: Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA; Division of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University School of Medicine, New Orleans, LA, 70112, USA
- Y P Wang: Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA; Department of Biomedical Engineering, Tulane University, New Orleans, LA, 70118, USA
- H M Xiao: Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China
- H W Deng: Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA, 70112, USA; Centers of System Biology, Data Information and Reproductive Health, School of Basic Medical Science, Central South University, Changsha, 410008, Hunan, China; Division of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University School of Medicine, New Orleans, LA, 70112, USA
246
CryoNuSeg: A dataset for nuclei instance segmentation of cryosectioned H&E-stained histological images. Comput Biol Med 2021; 132:104349. [PMID: 33774269 DOI: 10.1016/j.compbiomed.2021.104349] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8]
Abstract
Nuclei instance segmentation plays an important role in the analysis of hematoxylin and eosin (H&E)-stained images. While supervised deep learning (DL)-based approaches represent the state of the art in automatic nuclei instance segmentation, annotated datasets are required to train these models. Two main tissue processing protocols are in use, yielding formalin-fixed paraffin-embedded (FFPE) samples and frozen tissue samples (FS). Although FFPE-derived H&E-stained tissue sections are the most widely used samples, H&E staining of frozen sections derived from FS samples is relevant in intra-operative surgical sessions because it can be performed much more rapidly. Owing to differences in the preparation of these two sample types, the derived images, and in particular the appearance of nuclei, may differ in the acquired whole slide images. Analysis of FS-derived H&E-stained images can be more challenging because rapid preparation, staining, and scanning of FS sections may degrade image quality. In this paper, we introduce CryoNuSeg, the first fully annotated nuclei instance segmentation dataset of FS-derived, cryosectioned, and H&E-stained images. The dataset contains images from 10 human organs not covered by other publicly available datasets and provides three manual mark-ups to allow measurement of intra-observer and inter-observer variability. Moreover, we investigate the effect of the tissue fixation/embedding protocol (i.e., FS or FFPE) on automatic nuclei instance segmentation performance and provide a baseline segmentation benchmark for the dataset that can be used in future research. A step-by-step guide to generating the dataset, the full dataset itself, and other detailed information are available to fellow researchers at https://github.com/masih4/CryoNuSeg.
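For readers who want to use the three manual mark-ups to quantify inter-observer variability, a minimal sketch follows. It assumes the annotations have been rasterized to binary masks; the toy masks below are synthetic stand-ins for the released mark-ups, and the Dice coefficient is just one common choice of agreement measure.

```python
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    """Dice coefficient between two binary segmentation masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    denom = a.sum() + b.sum()
    return 1.0 if denom == 0 else 2.0 * np.logical_and(a, b).sum() / denom

# Toy stand-ins for two annotators' binary nuclei masks of one image tile;
# with the real dataset these would be loaded from the released mark-ups.
rng = np.random.default_rng(42)
annotator_1 = rng.random((512, 512)) > 0.7
annotator_2 = annotator_1 ^ (rng.random((512, 512)) > 0.98)  # mild disagreement
print(f"inter-observer Dice = {dice(annotator_1, annotator_2):.3f}")
```

Instance-level measures such as the aggregated Jaccard index would additionally penalize merged or split nuclei, which pixel-level Dice does not capture.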
247
Lai Y, Fan F, Wu Q, Ke W, Liao P, Deng Z, Chen H, Zhang Y. LCANet: Learnable Connected Attention Network for Human Identification Using Dental Images. IEEE Trans Med Imaging 2021; 40:905-915. [PMID: 33259294 DOI: 10.1109/tmi.2020.3041452] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8]
Abstract
Forensic odontology is an important branch of forensics that identifies individuals from dental evidence. This paper proposes a novel method that uses deep convolutional neural networks to assist human identification by automatically and accurately matching 2-D panoramic dental X-ray images. Designed as a top-down architecture, the network incorporates an improved channel attention module and a learnable connected module to extract better features for matching. By integrating associated features among all channel maps, the channel attention module selectively emphasizes interdependent channel information, which contributes to more precise recognition results. The learnable connected module not only connects different layers in a feed-forward fashion but also searches for the optimal connections for each connected layer, so the connections among layers are learned automatically and adaptively. Extensive experiments demonstrate that our method achieves new state-of-the-art performance in human identification using dental images. Specifically, on a dataset of 1,168 dental panoramic images from 503 subjects, the method reaches 87.21% rank-1 accuracy and 95.34% rank-5 accuracy. Code has been released on GitHub (https://github.com/cclaiyc/TIdentify).
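The channel attention idea can be sketched with a standard squeeze-and-excitation block, shown below in PyTorch. This is a generic illustration of channel attention, not the improved module from LCANet; the reduction ratio and layer sizes are assumptions.

```python
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    """Squeeze-and-excitation style channel attention (illustrative only).

    Global-average-pools each channel, passes the channel descriptor
    through a small bottleneck MLP, and rescales channels by the
    resulting weights, emphasizing interdependent channel information.
    """
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, _, _ = x.shape
        w = self.fc(x.mean(dim=(2, 3)))   # squeeze: (B, C) channel descriptor
        return x * w.view(b, c, 1, 1)     # excite: rescale each channel map

feats = torch.randn(2, 64, 32, 32)
print(ChannelAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```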
248
Narayanan PL, Raza SEA, Hall AH, Marks JR, King L, West RB, Hernandez L, Guppy N, Dowsett M, Gusterson B, Maley C, Hwang ES, Yuan Y. Unmasking the immune microecology of ductal carcinoma in situ with deep learning. NPJ Breast Cancer 2021; 7:19. [PMID: 33649333 PMCID: PMC7921670 DOI: 10.1038/s41523-020-00205-5] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3]
Abstract
Despite increasing evidence supporting the clinical relevance of tumour-infiltrating lymphocytes (TILs) in invasive breast cancer, the spatial variability of TILs within ductal carcinoma in situ (DCIS) samples and its association with progression are not well understood. To characterise the tissue spatial architecture and microenvironment of DCIS, we designed and validated a new deep learning pipeline, UNMaSk. Following automated detection of individual DCIS ducts with a new method, IM-Net, we applied spatial tessellation to create virtual boundaries for each duct. To study local TIL infiltration per duct, we developed DRDIN to map the distribution of TILs. In a dataset comprising grade 2-3 pure DCIS and DCIS adjacent to invasive cancer (adjacent DCIS), pure DCIS cases had more TILs overall than adjacent DCIS. However, the colocalisation of TILs with DCIS ducts was significantly lower in pure DCIS than in adjacent DCIS, which may suggest a more inflamed tissue ecology local to the ducts in adjacent DCIS cases. Our study demonstrates that advances in deep convolutional neural networks and digital pathology enable automated morphological and microenvironmental analysis of DCIS, providing a new way to study the differential immune ecology of individual ducts and to identify new markers of progression.
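The per-duct tessellation step can be approximated in a few lines. The sketch below assigns each TIL to its nearest duct, which is equivalent to a Voronoi tessellation of duct centroids; note this is a simplification, since UNMaSk tessellates full duct masks rather than centroids, and the colocalisation radius here is an arbitrary assumption.

```python
import numpy as np
from scipy.spatial import cKDTree

# Hypothetical centroids of detected DCIS ducts and TILs (x, y in pixels)
duct_centroids = np.array([[100, 120], [400, 380], [250, 700]])
til_positions = np.array([[110, 130], [390, 400], [420, 350], [900, 60]])

# Nearest-centroid assignment is equivalent to a Voronoi tessellation:
# each TIL falls in the region of its closest duct.
tree = cKDTree(duct_centroids)
dist, region = tree.query(til_positions)

# Count TILs per duct region, keeping only those within an assumed
# colocalisation radius so distant cells are not counted as "local".
radius = 150.0
for duct_idx in range(len(duct_centroids)):
    local = np.sum((region == duct_idx) & (dist <= radius))
    print(f"duct {duct_idx}: {local} colocalised TILs")
```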
Affiliation(s)
- Priya Lakshmi Narayanan: Centre for Evolution and Cancer, Institute of Cancer Research, London, UK; Division of Molecular Pathology, Institute of Cancer Research, London, UK
- Shan E Ahmed Raza: Centre for Evolution and Cancer, Institute of Cancer Research, London, UK; Division of Molecular Pathology, Institute of Cancer Research, London, UK
- Allison H Hall: Department of Pathology, Duke University School of Medicine, Durham, NC, USA
- Jeffrey R Marks: Department of Surgery, Duke University School of Medicine, Durham, NC, USA
- Lorraine King: Department of Surgery, Duke University School of Medicine, Durham, NC, USA
- Robert B West: Department of Pathology, Surgical Pathology, Stanford, CA, USA
- Lucia Hernandez: Department of Anatomic Pathology, Hospital Universitario 12 de Octubre, Madrid, Spain
- Naomi Guppy: Breast Cancer Now Histopathology Core, Institute of Cancer Research, London, UK; UCL Advanced Diagnostics, University College London, London, UK
- Mitch Dowsett: The Breast Cancer Now Toby Robins Research Centre, Institute of Cancer Research, London, UK; Academic Department of Biochemistry, Royal Marsden Hospital, London, UK
- Barry Gusterson: Centre for Evolution and Cancer, Institute of Cancer Research, London, UK
- Carlo Maley: Biodesign Center for Personalized Diagnostics and School of Life Sciences, Arizona State University, Tempe, AZ, USA
- E Shelley Hwang: Department of Surgery, Duke University School of Medicine, Durham, NC, USA
- Yinyin Yuan: Centre for Evolution and Cancer, Institute of Cancer Research, London, UK; Division of Molecular Pathology, Institute of Cancer Research, London, UK
249
Dmitriev K, Marino J, Baker K, Kaufman AE. Visual Analytics of a Computer-Aided Diagnosis System for Pancreatic Lesions. IEEE Trans Vis Comput Graph 2021; 27:2174-2185. [PMID: 31613771 DOI: 10.1109/tvcg.2019.2947037] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5]
Abstract
Machine learning is a powerful and effective tool for medical image analysis and computer-aided diagnosis (CAD). Although CAD systems have great potential to improve diagnostic accuracy, they are usually evaluated only by their final accuracy, which leaves the internal decision process opaque, makes insight hard to obtain, and ultimately breeds skepticism among clinicians. We present a visual analytics approach that uncovers the decision-making process of a CAD system for classifying pancreatic cystic lesions. The CAD algorithm consists of two distinct components: a random forest (RF), which classifies a set of predefined features, including demographic features, and a convolutional neural network (CNN), which analyzes radiological (imaging) features of the lesions. We study the class probabilities generated by the RF and the semantic meaning of the features learned by the CNN. We also use an eye tracker to understand which radiological features are particularly useful for a radiologist's diagnosis and to compare them quantitatively with the features that drive the CNN's final classification decision. Additionally, we evaluate the effects and benefits of supplying the CAD system with a case-based visual aid in a second-reader setting.
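To make the two-component design concrete, the sketch below fuses class probabilities from an RF over demographic features with probabilities from an imaging model. The weighted geometric fusion rule, the synthetic data, and all parameter values are illustrative assumptions, not the published algorithm.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hybrid CAD sketch: an RF on demographic/clinical features plus an
# imaging model's class probabilities, fused by a weighted product.
rng = np.random.default_rng(0)
X_demo = rng.normal(size=(200, 4))   # e.g. age, sex, lesion location, size
y = rng.integers(0, 4, size=200)     # four cystic lesion types (toy labels)

rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_demo, y)

def fuse(rf_probs, cnn_probs, w=0.5):
    """Weighted geometric fusion of two class-probability vectors."""
    p = (rf_probs ** w) * (cnn_probs ** (1.0 - w))
    return p / (p.sum() + 1e-12)       # renormalize to a distribution

rf_probs = rf.predict_proba(X_demo[:1])[0]
cnn_probs = np.array([0.1, 0.6, 0.2, 0.1])  # stand-in for a CNN's output
print(fuse(rf_probs, cnn_probs).round(3))
```

Inspecting `rf_probs` and `cnn_probs` separately, as the visual analytics system does, shows which component drives each final decision.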
250
Wollmann T, Rohr K. Deep Consensus Network: Aggregating predictions to improve object detection in microscopy images. Med Image Anal 2021; 70:102019. [PMID: 33730623 DOI: 10.1016/j.media.2021.102019] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8]
Abstract
Detection of cells and particles in microscopy images is a common and challenging task. In recent years, detection approaches in computer vision have achieved remarkable improvements by leveraging deep learning, yet microscopy images still pose difficulties for current approaches: small and clustered objects, low signal-to-noise ratio, and complex shape and appearance. We introduce the Deep Consensus Network, a new deep neural network for object detection in microscopy images based on object centroids. Our network is trainable end-to-end and comprises a Feature Pyramid Network-based feature extractor, a Centroid Proposal Network, and a layer for ensembling detection hypotheses over all image scales and anchors. We propose an anchor regularization scheme that favours prior anchors over regressed locations, as well as a novel loss function based on normalized mutual information, derived within a Bayesian framework, to cope with strong class imbalance. In addition, we introduce an improved non-maximum suppression algorithm that significantly reduces the algorithmic complexity. Experiments on synthetic data provide insights into the properties and robustness of the proposed loss function. We also applied our method to challenging data from the TUPAC16 mitosis detection challenge and the Particle Tracking Challenge, achieving results competitive with or better than the state of the art.
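Since the network detects object centroids rather than bounding boxes, its non-maximum suppression operates on points. The sketch below shows a simple greedy distance-based NMS for centroid detections; it is a simplified stand-in for the paper's reduced-complexity variant, and `min_dist` is an assumed parameter.

```python
import numpy as np

def centroid_nms(centroids, scores, min_dist=10.0):
    """Greedy non-maximum suppression for centroid (point) detections.

    Keeps the highest-scoring detection, discards any remaining candidate
    closer than `min_dist` pixels to it, and repeats until exhausted.
    """
    order = np.argsort(scores)[::-1]                 # best score first
    centroids = np.asarray(centroids, dtype=float)[order]
    keep, suppressed = [], np.zeros(len(centroids), dtype=bool)
    for i in range(len(centroids)):
        if suppressed[i]:
            continue
        keep.append(order[i])                        # original index
        d = np.linalg.norm(centroids[i + 1:] - centroids[i], axis=1)
        suppressed[i + 1:] |= d < min_dist           # drop close neighbours
    return keep

cands = [[10, 10], [12, 11], [50, 48], [52, 50]]
scores = [0.9, 0.7, 0.8, 0.85]
print(centroid_nms(cands, scores))  # duplicates merged, e.g. [0, 3]
```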
Affiliation(s)
- Thomas Wollmann: Biomedical Computer Vision Group, BioQuant, IPMB, Heidelberg University, Im Neuenheimer Feld 267, Heidelberg, Germany
- Karl Rohr: Biomedical Computer Vision Group, BioQuant, IPMB, Heidelberg University, Im Neuenheimer Feld 267, Heidelberg, Germany