151. Learning to count biological structures with raters’ uncertainty. Med Image Anal 2022;80:102500. DOI: 10.1016/j.media.2022.102500
152. CellNet: A Lightweight Model towards Accurate LOC-Based High-Speed Cell Detection. Electronics 2022. DOI: 10.3390/electronics11091407
Abstract
Label-free cell separation and sorting in a microfluidic system, an essential technique for modern cancer diagnosis, has made high-throughput single-cell analysis a reality. However, designing an efficient cell detection model is challenging. Traditional cell detection methods are subject to occlusion boundaries and weak textures, resulting in poor performance. Modern detection models based on convolutional neural networks (CNNs) have achieved promising results at the cost of a large number of both parameters and floating point operations (FLOPs). In this work, we present a lightweight yet powerful cell detection model named CellNet, which includes two efficient modules: CellConv blocks and the h-swish nonlinearity function. CellConv is proposed as an effective feature extractor that substitutes for computationally expensive convolutional layers, whereas the h-swish function is introduced to increase the nonlinearity of the compact model. To boost the prediction and localization ability of the detection model, we re-designed the model's multi-task loss function. In comparison with other efficient object detection methods, our approach achieved a state-of-the-art 98.70% mean average precision (mAP) on our custom sea urchin embryo dataset with only 0.08 M parameters and 0.10 B FLOPs, reducing model size by 39.5× and computational cost by 4.6×. We deployed CellNet on different platforms to verify its efficiency: inference speed on a graphics processing unit (GPU) was 500.0 fps, compared with 87.7 fps on a CPU. Additionally, CellNet is 769.5 times smaller and 420 fps faster than YOLOv3. Extensive experimental results demonstrate that CellNet achieves an excellent efficiency/accuracy trade-off on resource-constrained platforms.
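The h-swish nonlinearity this abstract mentions is a standard, published activation (a piecewise-linear approximation of swish); the vectorized NumPy sketch below is illustrative and is not taken from CellNet's code.

```python
import numpy as np

def h_swish(x):
    # h-swish(x) = x * ReLU6(x + 3) / 6: a cheap, hardware-friendly
    # approximation of swish(x) = x * sigmoid(x).
    return x * np.clip(x + 3.0, 0.0, 6.0) / 6.0
```

For large positive inputs h-swish approaches the identity, for inputs below -3 it is exactly zero, which is what makes it attractive for compact models.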
153. Spatial interplay of lymphocytes and fibroblasts in estrogen receptor-positive HER2-negative breast cancer. NPJ Breast Cancer 2022;8:56. PMID: 35484275; PMCID: PMC9051105; DOI: 10.1038/s41523-022-00416-y
Abstract
In estrogen-receptor-positive, HER2-negative (ER+HER2-) breast cancer, higher levels of tumor-infiltrating lymphocytes (TILs) are often associated with a poor prognosis, a phenomenon that is still poorly understood. Fibroblasts are among the most frequent cells in breast cancer and harbor immunomodulatory capabilities. Here, we evaluate the molecular and clinical impact of the spatial patterns of TILs and fibroblasts in ER+HER2- breast cancer. We used a deep neural network to locate and identify tumor cells, TILs, and fibroblasts on hematoxylin and eosin-stained slides from 179 ER+HER2- breast tumors (ICGC cohort), together with a new density estimation analysis to measure the spatial patterns. We clustered tumors based on their spatial patterns, and gene set enrichment analysis was performed to study their molecular characteristics. We independently assessed the spatial patterns in a second cohort of ER+HER2- breast cancer (N = 630, METABRIC) and studied their prognostic value. The spatial integration of fibroblasts, TILs, and tumor cells leads to a new reproducible spatial classification of ER+HER2- breast cancer and is linked to inflammation, fibroblast meddling, or immunosuppression. ER+HER2- patients with high TILs did not have significantly improved overall survival (HR = 0.76, P = 0.212), except when they had received chemotherapy (HR = 0.447). Poorer survival was observed for patients with abundant fibroblasts that did not show a high level of TILs (HR = 1.661, P = 0.0303). In particular, spatial mixing of fibroblasts and TILs was associated with a good prognosis (HR = 0.464, P = 0.013). Our findings demonstrate a reproducible pipeline for the spatial profiling of TILs and fibroblasts in ER+HER2- breast cancer and suggest that this spatial interplay holds a decisive role in cancer-immune interactions.
154. Improved Deep Convolutional Neural Networks via Boosting for Predicting the Quality of In Vitro Bovine Embryos. Electronics 2022. DOI: 10.3390/electronics11091363
Abstract
Automated diagnosis of the quality of bovine in vitro-derived embryos based on imaging data is an important research problem in developmental biology. By predicting the quality of embryos correctly, embryologists can (1) avoid the time-consuming and tedious work of subjective visual examination to assess embryo quality; (2) automatically perform real-time evaluation of embryos, which accelerates the examination process; and (3) possibly avoid the economic, social, and medical implications caused by poor-quality embryos. While embryo imaging provides an opportunity for such analysis, there is a lack of consistent noninvasive methods utilizing deep learning to assess embryo quality. Hence, designing high-performance deep learning algorithms is crucial for data analysts who work with embryologists. A key goal of this study is to provide advanced deep learning tools to embryologists, who would, in turn, use them as prediction calculators to evaluate embryo quality. The proposed deep learning approaches utilize a modified convolutional neural network, with or without boosting techniques, to improve prediction performance. Experimental results on image data pertaining to in vitro bovine embryos show that our proposed deep learning approaches outperform existing baseline approaches in terms of prediction performance and statistical significance.
155
|
Alom Z, Asari VK, Parwani A, Taha TM. Microscopic nuclei classification, segmentation, and detection with improved deep convolutional neural networks (DCNN). Diagn Pathol 2022; 17:38. [PMID: 35436941 PMCID: PMC9017017 DOI: 10.1186/s13000-022-01189-5] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2021] [Accepted: 12/30/2021] [Indexed: 11/10/2022] Open
Abstract
BACKGROUND Nuclei classification, segmentation, and detection from pathological images are challenging tasks due to cellular heterogeneity in Whole Slide Images (WSI). METHODS In this work, we propose advanced DCNN models for nuclei classification, segmentation, and detection tasks. The Densely Connected Neural Network (DCNN) and Densely Connected Recurrent Convolutional Network (DCRN) models are applied to the nuclei classification tasks. The Recurrent Residual U-Net (R2U-Net) and an R2U-Net-based regression model named the University of Dayton Net (UD-Net) are applied to the nuclei segmentation and detection tasks, respectively. The experiments are conducted on publicly available datasets, including the Routine Colon Cancer (RCC) dataset for classification and detection and the Nuclei Segmentation Challenge 2018 dataset for segmentation. The experimental results were evaluated with five-fold cross-validation, and the average testing results are compared against existing approaches in terms of precision, recall, Dice Coefficient (DC), Mean Squared Error (MSE), F1-score, and overall testing accuracy, using both pixel-level and cell-level analysis. RESULTS The results demonstrate around 2.6% and 1.7% higher performance in terms of F1-score for the nuclei classification and detection tasks, respectively, when compared to a recently published DCNN-based method. For nuclei segmentation, R2U-Net shows around 91.90% average testing accuracy in terms of DC, which is around 1.54% higher than the U-Net model. CONCLUSION The proposed methods demonstrate robustness with better quantitative and qualitative results in three different tasks for analyzing WSI.
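The Dice Coefficient this abstract reports is a standard overlap metric for segmentation masks; a generic NumPy sketch for binary masks (not the authors' code) is:

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    # Pixel-level Dice: 2*|A intersect B| / (|A| + |B|) for two binary masks.
    # eps guards against division by zero when both masks are empty.
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

Identical masks score close to 1, disjoint masks close to 0, which is why Dice is a natural complement to pixel accuracy for sparse nuclei.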
Affiliation(s)
- Zahangir Alom: Department of Pathology, St. Jude Children's Research Hospital, Memphis, TN, USA
- Vijayan K Asari: Department of Electrical and Computer Engineering, University of Dayton, Dayton, OH, USA
- Anil Parwani: Department of Pathology, The Ohio State University, Columbus, OH, USA
- Tarek M Taha: Department of Electrical and Computer Engineering, University of Dayton, Dayton, OH, USA
156. Liang S, Lu H, Zang M, Wang X, Jiao Y, Zhao T, Xu EY, Xu J. Deep SED-Net with interactive learning for multiple testicular cell types segmentation and cell composition analysis in mouse seminiferous tubules. Cytometry A 2022;101:658-674. PMID: 35388957; DOI: 10.1002/cyto.a.24556
Abstract
The development of mouse spermatozoa is a continuous process from spermatogonia, through spermatocytes and spermatids, to mature sperm. These developing germ cells, together with supporting Sertoli cells, are all enclosed inside the seminiferous tubules of the testis; their identification is key to testis histology and pathology analysis. Automated segmentation of all these cells is a challenging task because of their dynamic changes across different stages. Accurate segmentation of testicular cells is critical for developing computerized spermatogenesis staging. In this paper, we present a novel segmentation model, SED-Net, which incorporates a Squeeze-and-Excitation (SE) module and a Dense unit. The SE module optimizes and obtains features from different channels, whereas the Dense unit uses fewer parameters to enhance the use of features. A human-in-the-loop strategy, named deep interactive learning, is developed to achieve better segmentation performance while reducing the workload of manual annotation and time consumption. Across a cohort of 274 seminiferous tubules from Stages VI to VIII, SED-Net achieved a pixel accuracy of 0.930, a mean pixel accuracy of 0.866, a mean intersection over union of 0.710, and a frequency-weighted intersection over union of 0.878 for four types of testicular cell segmentation. There is no significant difference between manually annotated tubules and SED-Net segmentation results in cell composition analysis for tubules from Stages VI to VIII. In addition, we performed cell composition analysis on 2346 segmented seminiferous tubule images from 12 segmented testicular section results, quantifying cells of various testicular cell types across the 12 stages and capturing the tendency of cell variation during the development of mouse spermatozoa. The method not only enables analysis of cell morphology and staging during the development of mouse spermatozoa but could also potentially be applied to the study of reproductive diseases such as infertility.
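The Squeeze-and-Excitation module named above is a published building block that re-weights feature channels. A minimal NumPy sketch follows; the weight matrices `w1`, `w2` and the reduction ratio are illustrative assumptions, not values from SED-Net.

```python
import numpy as np

def se_block(feature_map, w1, w2):
    # feature_map: (C, H, W) activations.
    # w1: (C//r, C) and w2: (C, C//r) are learned bottleneck weights (r = reduction ratio).
    z = feature_map.mean(axis=(1, 2))            # squeeze: global average pool per channel
    s = np.maximum(w1 @ z, 0.0)                  # excitation: bottleneck FC + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ s)))          # sigmoid gate in (0, 1) per channel
    return feature_map * s[:, None, None]        # rescale each channel by its gate
```

Because every gate lies in (0, 1), the block can only attenuate channels, letting the network emphasize informative ones relative to the rest.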
Affiliation(s)
- Shi Liang: Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
- Haoda Lu: Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
- Min Zang: State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing, China
- Xiangxue Wang: Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
- Yiping Jiao: Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
- Tingting Zhao: State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing, China
- Eugene Yujun Xu: State Key Laboratory of Reproductive Medicine, Nanjing Medical University, Nanjing, China; Department of Neurology, Center for Reproductive Sciences, Northwestern University Feinberg School of Medicine, IL, USA
- Jun Xu: Institute for AI in Medicine, School of Artificial Intelligence, Nanjing University of Information Science and Technology, Nanjing, China
157. Tiwari S, Falahkheirkhah K, Cheng G, Bhargava R. Colon Cancer Grading Using Infrared Spectroscopic Imaging-Based Deep Learning. Appl Spectrosc 2022;76:475-484. PMID: 35332784; PMCID: PMC9202565; DOI: 10.1177/00037028221076170
Abstract
Tumor grade assessment is critical to the treatment of cancers. A pathologist typically evaluates grade by examining morphologic organization in tissue using hematoxylin and eosin (H&E) stained tissue sections. Fourier transform infrared spectroscopic (FT-IR) imaging provides an alternate view of tissue in which spatially specific molecular information from unstained tissue can be utilized. Here, we examine the potential of IR imaging for grading colon cancer in biopsy samples. We used a 148-patient cohort to develop a deep learning classifier to estimate the tumor grade using IR absorption. We demonstrate that FT-IR imaging can be a viable tool to determine colorectal cancer grades, which we validated on an independent cohort of surgical resections. This work demonstrates that harnessing molecular information from FT-IR imaging and coupling it with morphometry is a potential path to develop clinically relevant grade prediction models.
Affiliation(s)
- Saumya Tiwari: Department of Medicine, University of California San Diego, San Diego, CA, USA
- Kianoush Falahkheirkhah: Department of Chemical and Biomolecular Engineering and Chemistry and Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Georgina Cheng: Carle Foundation Hospital (Carle Health), Urbana, IL, USA; Cancer Center at Illinois, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Rohit Bhargava: Cancer Center at Illinois, and Departments of Bioengineering, Electrical and Computer Engineering, and Mechanical Science and Engineering, University of Illinois at Urbana-Champaign, Urbana, IL, USA
158. Ghahremani P, Li Y, Kaufman A, Vanguri R, Greenwald N, Angelo M, Hollmann TJ, Nadeem S. Deep Learning-Inferred Multiplex ImmunoFluorescence for Immunohistochemical Image Quantification. Nat Mach Intell 2022;4:401-412. PMID: 36118303; PMCID: PMC9477216; DOI: 10.1038/s42256-022-00471-x
Abstract
Reporting biomarkers assessed by routine immunohistochemical (IHC) staining of tissue is broadly used in diagnostic pathology laboratories for patient care. To date, clinical reporting is predominantly qualitative or semi-quantitative. By creating a multitask deep learning framework referred to as DeepLIIF, we present a single-step solution to stain deconvolution/separation, cell segmentation, and quantitative single-cell IHC scoring. Leveraging a unique de novo dataset of co-registered IHC and multiplex immunofluorescence (mpIF) staining of the same slides, we segment and translate low-cost and prevalent IHC slides to more expensive-yet-informative mpIF images, while simultaneously providing the essential ground truth for the superimposed brightfield IHC channels. Moreover, a new nuclear-envelope stain, LAP2beta, with high (>95%) cell coverage is introduced to improve cell delineation/segmentation and protein expression quantification on IHC slides. By simultaneously translating input IHC images to clean/separated mpIF channels and performing cell segmentation/classification, we show that our model trained on clean IHC Ki67 data can generalize to noisier and artifact-ridden images as well as other nuclear and non-nuclear markers such as CD3, CD8, BCL2, BCL6, MYC, MUM1, CD10, and TP53. We thoroughly evaluate our method on publicly available benchmark datasets as well as against pathologists' semi-quantitative scoring. The code and pre-trained models, along with easy-to-run containerized Docker files and a Google Colab project, are available at https://github.com/nadeemlab/deepliif.
Affiliation(s)
- Parmida Ghahremani: Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Yanyun Li: Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Arie Kaufman: Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Rami Vanguri: Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Noah Greenwald: Department of Pathology, Stanford University, Stanford, CA, USA
- Michael Angelo: Department of Pathology, Stanford University, Stanford, CA, USA
- Travis J Hollmann: Department of Pathology, Memorial Sloan Kettering Cancer Center, New York, NY, USA
- Saad Nadeem: Department of Medical Physics, Memorial Sloan Kettering Cancer Center, New York, NY, USA
159. Thiagarajan P, Khairnar P, Ghosh S. Explanation and Use of Uncertainty Quantified by Bayesian Neural Network Classifiers for Breast Histopathology Images. IEEE Trans Med Imaging 2022;41:815-825. PMID: 34699354; DOI: 10.1109/tmi.2021.3123300
Abstract
Despite the promise of convolutional neural network (CNN) based classification models for histopathological images, it is infeasible to quantify their uncertainties; moreover, CNNs may suffer from overfitting when the data are biased. We show that a Bayesian CNN can overcome these limitations by regularizing automatically and by quantifying uncertainty. We have developed a novel technique to utilize the uncertainties provided by the Bayesian CNN that significantly improves performance on a large fraction of the test data (about 6% improvement in accuracy on 77% of the test data). Further, we provide a novel explanation for the uncertainty by projecting the data into a low-dimensional space through a nonlinear dimensionality reduction technique. This dimensionality reduction enables interpretation of the test data through visualization and reveals the structure of the data in a low-dimensional feature space. We show that the Bayesian CNN can perform much better than a state-of-the-art transfer learning CNN (TL-CNN), reducing false negatives by 11% and false positives by 7.7% for the present data set, and it achieves this performance with only 1.86 million parameters compared with 134.33 million for the TL-CNN. In addition, we modify the Bayesian CNN by introducing a stochastic adaptive activation function. The modified Bayesian CNN performs slightly better than the Bayesian CNN on all performance metrics and significantly reduces the number of false negatives and false positives (3% reduction for both). We also show that these results are statistically significant by performing McNemar's test. This work shows the advantages of the Bayesian CNN against the state of the art, and explains and utilizes the uncertainties for histopathological images. It should find applications in various medical image classifications.
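The idea of acting only on predictions whose uncertainty is low can be sketched generically from stochastic forward passes; the entropy-based selection below is our illustrative assumption, not the paper's exact criterion.

```python
import numpy as np

def predictive_uncertainty(mc_probs):
    # mc_probs: (T, N, C) softmax outputs from T stochastic forward passes
    # (e.g. samples from a Bayesian network's weight posterior).
    mean_p = mc_probs.mean(axis=0)                            # (N, C) predictive mean
    entropy = -(mean_p * np.log(mean_p + 1e-12)).sum(axis=1)  # predictive entropy per sample
    return mean_p, entropy

def keep_most_certain(mean_p, entropy, frac):
    # Keep only the most certain fraction of cases for automated prediction;
    # the remainder would be deferred (e.g. to a pathologist).
    idx = np.argsort(entropy)[: int(len(entropy) * frac)]
    return idx, mean_p[idx].argmax(axis=1)
```

Restricting automated decisions to a low-entropy subset is one way accuracy can rise on that subset, as the abstract's "77% of test data" result illustrates.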
160. Sports video athlete detection based on deep learning. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07077-9
161. Deep Learning on Histopathological Images for Colorectal Cancer Diagnosis: A Systematic Review. Diagnostics (Basel) 2022;12:837. PMID: 35453885; PMCID: PMC9028395; DOI: 10.3390/diagnostics12040837
Abstract
Colorectal cancer (CRC) is the second most common cancer in women and the third most common in men, with an increasing incidence. Pathology diagnosis complemented with prognostic and predictive biomarker information is the first step toward personalized treatment. The increased diagnostic load in the pathology laboratory, combined with the reported intra- and inter-observer variability in the assessment of biomarkers, has prompted the quest for reliable machine-based methods to be incorporated into routine practice. Recently, Artificial Intelligence (AI) has made significant progress in the medical field, showing potential for clinical applications. Herein, we aim to systematically review the current research on AI in CRC image analysis. In histopathology, algorithms based on Deep Learning (DL) have the potential to assist in diagnosis, predict clinically relevant molecular phenotypes and microsatellite instability, identify histological features related to prognosis and correlated with metastasis, and assess the specific components of the tumor microenvironment.
162. Lee J, Warner E, Shaikhouni S, Bitzer M, Kretzler M, Gipson D, Pennathur S, Bellovich K, Bhat Z, Gadegbeku C, Massengill S, Perumal K, Saha J, Yang Y, Luo J, Zhang X, Mariani L, Hodgin JB, Rao A. Unsupervised machine learning for identifying important visual features through bag-of-words using histopathology data from chronic kidney disease. Sci Rep 2022;12:4832. PMID: 35318420; PMCID: PMC8941143; DOI: 10.1038/s41598-022-08974-8
Abstract
Pathologists use visual classification to assess patient kidney biopsy samples when diagnosing the underlying cause of kidney disease. However, the assessment is qualitative, or semi-quantitative at best, and reproducibility is challenging. To discover previously unknown features that predict patient outcomes and to overcome substantial interobserver variability, we developed an unsupervised bag-of-words model. We applied our approach to the C-PROBE cohort of patients with chronic kidney disease (CKD), using 107,471 histopathology images obtained from 161 biopsy cores, and identified important morphological features in biopsy tissue that are highly predictive of the presence of CKD both at the time of biopsy and one year later. To evaluate the performance of our model, we estimated the AUC and its 95% confidence interval. We show that this method is reliable and reproducible and can achieve 0.93 AUC at predicting glomerular filtration rate at the time of biopsy, as well as predicting a loss of function at one year. Additionally, with this method, we ranked the identified morphological features according to their importance as diagnostic markers for chronic kidney disease. In this study, we have demonstrated the feasibility of using an unsupervised machine learning method without human input to predict the level of kidney function in CKD. The results indicate that the visual dictionary, or visual image pattern, obtained from unsupervised machine learning can predict outcomes using machine-derived values that correspond to both known and unknown clinically relevant features.
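A bag-of-words "visual dictionary" of the kind described here is commonly built by clustering patch descriptors and histogramming word assignments. The sketch below is a generic illustration (plain k-means on synthetic feature vectors); the study's actual patch features and clustering settings are not reproduced.

```python
import numpy as np

def build_visual_dictionary(patches, k, iters=10, seed=0):
    # Plain k-means over patch feature vectors; the centroids are the "visual words".
    rng = np.random.default_rng(seed)
    centers = patches[rng.choice(len(patches), size=k, replace=False)].astype(float)
    for _ in range(iters):
        dist = np.linalg.norm(patches[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):          # keep old center if a cluster empties
                centers[j] = patches[labels == j].mean(axis=0)
    return centers

def bow_histogram(patches, centers):
    # Normalized counts of nearest visual word: the bag-of-words descriptor
    # of one biopsy, usable as input to a downstream outcome classifier.
    dist = np.linalg.norm(patches[:, None, :] - centers[None, :, :], axis=2)
    counts = np.bincount(dist.argmin(axis=1), minlength=len(centers)).astype(float)
    return counts / counts.sum()
```

Ranking visual words by their weight in the downstream predictor is then one way to surface the "important morphological features" the abstract refers to.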
Affiliation(s)
- Joonsang Lee: Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Elisa Warner: Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Salma Shaikhouni: Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Markus Bitzer: Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Matthias Kretzler: Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Debbie Gipson: Department of Pediatrics, Pediatric Nephrology, University of Michigan, Ann Arbor, MI, USA
- Subramaniam Pennathur: Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Keith Bellovich: Department of Internal Medicine, Nephrology, St. Clair Nephrology Research, Detroit, MI, USA
- Zeenat Bhat: Department of Internal Medicine, Nephrology, Wayne State University, Detroit, MI, USA
- Crystal Gadegbeku: Department of Internal Medicine, Nephrology, Cleveland Clinic, Cleveland, OH, USA
- Susan Massengill: Department of Pediatrics, Pediatric Nephrology, Levine Children's Hospital, Charlotte, NC, USA
- Kalyani Perumal: Department of Internal Medicine, Nephrology, JH Stroger Hospital, Chicago, IL, USA
- Jharna Saha: Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Yingbao Yang: Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Jinghui Luo: Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Xin Zhang: Department of Computational Medicine and Bioinformatics, University of Michigan, Ann Arbor, MI, USA
- Laura Mariani: Department of Internal Medicine, Nephrology, University of Michigan, Ann Arbor, MI, USA
- Jeffrey B Hodgin: Department of Pathology, University of Michigan, Ann Arbor, MI, USA
- Arvind Rao: Departments of Computational Medicine and Bioinformatics, Biostatistics, Radiation Oncology, and Biomedical Engineering, University of Michigan, Ann Arbor, MI, USA
163. Ginghina O, Hudita A, Zamfir M, Spanu A, Mardare M, Bondoc I, Buburuzan L, Georgescu SE, Costache M, Negrei C, Nitipir C, Galateanu B. Liquid Biopsy and Artificial Intelligence as Tools to Detect Signatures of Colorectal Malignancies: A Modern Approach in Patient's Stratification. Front Oncol 2022;12:856575. PMID: 35356214; PMCID: PMC8959149; DOI: 10.3389/fonc.2022.856575
Abstract
Colorectal cancer (CRC) is the second most frequently diagnosed type of cancer and a major worldwide public health concern. Despite global efforts in the development of modern therapeutic strategies, CRC prognosis is strongly correlated with the stage of the disease at diagnosis. Early detection of CRC has a huge impact on decreasing mortality, while pre-lesion detection significantly reduces the incidence of the pathology. Even though the management of CRC patients is based on robust diagnostic methods such as serum tumor marker analysis, colonoscopy, histopathological analysis of tumor tissue, and imaging methods (computed tomography or magnetic resonance), these strategies still have many limitations and do not fully satisfy clinical needs due to their lack of sensitivity and/or specificity. Therefore, improvements to current practice would substantially impact the management of CRC patients. In this context, liquid biopsy is a promising approach that could help clinicians screen for disease, stratify patients to the best treatment, and monitor treatment response and resistance mechanisms in the tumor in a regular and minimally invasive manner. Liquid biopsies allow the detection and analysis of different tumor-derived circulating markers such as cell-free nucleic acids (cfNA), circulating tumor cells (CTCs), and extracellular vesicles (EVs) in the bloodstream. The major advantage of this approach is its ability to trace and monitor the molecular profile of the patient's tumor and to predict personalized treatment in real time. On the other hand, the prospective use of artificial intelligence (AI) in medicine holds great promise in oncology for the diagnosis, treatment, and prognosis prediction of disease. AI has two main branches in the medical field: (i) a virtual branch that includes medical imaging, clinically assisted diagnosis and treatment, and drug research, and (ii) a physical branch that includes surgical robots. This review summarizes findings relevant to liquid biopsy and AI in CRC for better management and stratification of CRC patients.
Affiliation(s)
- Octav Ginghina: Department II, University of Medicine and Pharmacy “Carol Davila” Bucharest, Bucharest, Romania; Department of Surgery, “Sf. Ioan” Clinical Emergency Hospital, Bucharest, Romania
- Ariana Hudita: Department of Biochemistry and Molecular Biology, University of Bucharest, Bucharest, Romania
- Marius Zamfir: Department of Surgery, “Sf. Ioan” Clinical Emergency Hospital, Bucharest, Romania
- Andrada Spanu: Department of Surgery, “Sf. Ioan” Clinical Emergency Hospital, Bucharest, Romania
- Mara Mardare: Department of Surgery, “Sf. Ioan” Clinical Emergency Hospital, Bucharest, Romania
- Irina Bondoc: Department of Surgery, “Sf. Ioan” Clinical Emergency Hospital, Bucharest, Romania
- Sergiu Emil Georgescu: Department of Biochemistry and Molecular Biology, University of Bucharest, Bucharest, Romania
- Marieta Costache: Department of Biochemistry and Molecular Biology, University of Bucharest, Bucharest, Romania
- Carolina Negrei: Department of Toxicology, University of Medicine and Pharmacy “Carol Davila” Bucharest, Bucharest, Romania
- Cornelia Nitipir: Department II, University of Medicine and Pharmacy “Carol Davila” Bucharest, Bucharest, Romania; Department of Oncology, Elias University Emergency Hospital, Bucharest, Romania
- Bianca Galateanu: Department of Biochemistry and Molecular Biology, University of Bucharest, Bucharest, Romania
164. The Effects of rSO2 and PI Monitoring Images on the Treatment of Premature Infants Based on Deep Learning. Comput Math Methods Med 2022;2022:5671713. PMID: 35242208; PMCID: PMC8888060; DOI: 10.1155/2022/5671713
Abstract
In recent years, owing to the combined effects of individual behavior, psychological factors, environmental exposure, medical conditions, biological factors, and more, the incidence of preterm birth has gradually increased, and with it the incidence of various complications in preterm infants. This article aims to study the treatment of preterm infants and proposes applying deep learning-based rSO2 and PI image monitoring to their care. It introduces deep learning, the blood perfusion index, preterm infants, and other related content in detail, and reports experiments on deep learning-based rSO2 and PI monitoring images in the treatment of preterm infants. The experimental results show that rSO2 and PI monitoring images based on deep learning can greatly assist the treatment of preterm infants and improve treatment efficiency by at least 15%.
165
Optimization of deep learning based segmentation method. Soft Comput 2022. [DOI: 10.1007/s00500-021-06711-3]
166
Sethy PK, Behera SK. Automatic classification with concatenation of deep and handcrafted features of histological images for breast carcinoma diagnosis. Multimedia Tools and Applications 2022; 81:9631-9643. [DOI: 10.1007/s11042-021-11756-5]
167
Wilm F, Benz M, Bruns V, Baghdadlian S, Dexl J, Hartmann D, Kuritcyn P, Weidenfeller M, Wittenberg T, Merkel S, Hartmann A, Eckstein M, Geppert CI. Fast whole-slide cartography in colon cancer histology using superpixels and CNN classification. J Med Imaging (Bellingham) 2022; 9:027501. [PMID: 35300344] [PMCID: PMC8920491] [DOI: 10.1117/1.jmi.9.2.027501]
Abstract
Purpose: Automatic outlining of different tissue types in digitized histological specimens provides a basis for follow-up analyses and can potentially guide subsequent medical decisions. The immense size of whole-slide images (WSIs), however, poses a challenge in terms of computation time. In this regard, the analysis of nonoverlapping patches outperforms pixelwise segmentation approaches but still leaves room for optimization. Furthermore, the division into patches, regardless of the biological structures they contain, is a drawback due to the loss of local dependencies. Approach: We propose to subdivide the WSI into coherent regions prior to classification by grouping visually similar adjacent pixels into superpixels. Afterward, only a random subset of patches per superpixel is classified and patch labels are combined into a superpixel label. We propose a metric for identifying superpixels with an uncertain classification and evaluate two medical applications, namely tumor area and invasive margin estimation and tumor composition analysis. Results: The algorithm has been developed on 159 hand-annotated WSIs of colon resections and its performance is compared with an analysis without prior segmentation. The algorithm shows an average speed-up of 41% and an increase in accuracy from 93.8% to 95.7%. By assigning a rejection label to uncertain superpixels, we further increase the accuracy by 0.4%. While tumor area estimation shows high concordance to the annotated area, the analysis of tumor composition highlights limitations of our approach. Conclusion: By combining superpixel segmentation and patch classification, we designed a fast and accurate framework for whole-slide cartography that is AI-model agnostic and provides the basis for various medical endpoints.
Affiliation(s)
- Frauke Wilm: Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany; Friedrich-Alexander-University Erlangen-Nuremberg, Department of Computer Science, Erlangen, Germany
- Michaela Benz: Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- Volker Bruns: Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- Serop Baghdadlian: Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- Jakob Dexl: Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- David Hartmann: Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- Petr Kuritcyn: Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- Martin Weidenfeller: Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany
- Thomas Wittenberg: Fraunhofer Institute for Integrated Circuits IIS, Image Processing and Medical Engineering Department, Erlangen, Germany; Friedrich-Alexander-University Erlangen-Nuremberg, Department of Computer Science, Erlangen, Germany
- Susanne Merkel: University Hospital Erlangen, Department of Surgery, FAU Erlangen-Nuremberg, Erlangen, Germany; University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany
- Arndt Hartmann: University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany; University Hospital Erlangen, Institute of Pathology, FAU Erlangen-Nuremberg, Erlangen, Germany
- Markus Eckstein: University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany; University Hospital Erlangen, Institute of Pathology, FAU Erlangen-Nuremberg, Erlangen, Germany
- Carol Immanuel Geppert: University Hospital Erlangen, Comprehensive Cancer Center Erlangen-EMN (CCC), FAU Erlangen-Nuremberg, Erlangen, Germany; University Hospital Erlangen, Institute of Pathology, FAU Erlangen-Nuremberg, Erlangen, Germany
168
Jiang H, Li S, Li H. Parallel ‘same’ and ‘valid’ convolutional block and input-collaboration strategy for histopathological image classification. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108417]
169
Ciga O, Xu T, Martel AL. Self supervised contrastive learning for digital histopathology. Machine Learning with Applications 2022. [DOI: 10.1016/j.mlwa.2021.100198]
170
Artificial intelligence-assisted system for precision diagnosis of PD-L1 expression in non-small cell lung cancer. Mod Pathol 2022; 35:403-411. [PMID: 34518630] [DOI: 10.1038/s41379-021-00904-9]
Abstract
Standardized programmed death-ligand 1 (PD-L1) assessment in non-small cell lung cancer (NSCLC) is challenging, owing to inter-observer variability among pathologists and the use of different antibodies. There is a strong demand for the development of an artificial intelligence (AI) system to obtain high-precision scores of PD-L1 expression in clinical diagnostic scenarios. We developed an AI system using whole slide images (WSIs) of the 22c3 assay to automatically assess the tumor proportion score (TPS) of PD-L1 expression based on a deep learning (DL) model of tumor detection. Tests were performed to show the diagnostic ability of the AI system in the 22c3 assay to assist pathologists and the reliability of the application in the SP263 assay. A robust high-performance DL model for automated tumor detection was devised with an accuracy and specificity of 0.9326 and 0.9641, respectively, and a concrete TPS value was obtained after tumor cell segmentation. The TPS comparison test in the 22c3 assay showed strong consistency between the TPS calculated with the AI system and trained pathologists (R = 0.9429-0.9458). AI-assisted diagnosis test confirmed that the repeatability and efficiency of untrained pathologists could be improved using the AI system. The Ventana PD-L1 (SP263) assay showed high consistency in TPS calculations between the AI system and pathologists (R = 0.9787). In conclusion, a high-precision AI system is proposed for the automated TPS assessment of PD-L1 expression in the 22c3 and SP263 assays in NSCLC. Our study also indicates the benefits of using an AI-assisted system to improve diagnostic repeatability and efficiency for pathologists.
171
Küstner T, Hepp T, Seith F. Multiparametric Oncologic Hybrid Imaging: Machine Learning Challenges and Opportunities. Fortschr Röntgenstr 2022; 194:605-612. [PMID: 35211929] [DOI: 10.1055/a-1718-4128]
Abstract
BACKGROUND Machine learning (ML) is considered an important technology for future data analysis in health care. METHODS The inherently technology-driven fields of diagnostic radiology and nuclear medicine will both benefit from ML in terms of image acquisition and reconstruction. Within the next few years, this will lead to accelerated image acquisition, improved image quality, a reduction of motion artifacts and - for PET imaging - reduced radiation exposure and new approaches for attenuation correction. Furthermore, ML has the potential to support decision making by a combined analysis of data derived from different modalities, especially in oncology. In this context, we see great potential for ML in multiparametric hybrid imaging and the development of imaging biomarkers. RESULTS AND CONCLUSION In this review, we describe the basics of ML, present approaches in hybrid imaging of MRI, CT, and PET, and discuss the specific challenges associated with it and the steps ahead to make ML a diagnostic and clinical tool in the future. KEY POINTS · ML provides a viable clinical solution for the reconstruction, processing, and analysis of hybrid imaging obtained from MRI, CT, and PET.
Affiliation(s)
- Thomas Küstner: Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
- Tobias Hepp: Medical Image and Data Analysis (MIDAS.lab), Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
- Ferdinand Seith: Department of Diagnostic and Interventional Radiology, University Hospitals Tübingen, Germany
172
Doan TNN, Song B, Vuong TTL, Kim K, Kwak JT. SONNET: A self-guided ordinal regression neural network for segmentation and classification of nuclei in large-scale multi-tissue histology images. IEEE J Biomed Health Inform 2022; 26:3218-3228. [PMID: 35139032] [DOI: 10.1109/jbhi.2022.3149936]
Abstract
Automated nuclei segmentation and classification are key to analyzing and understanding cellular characteristics and functionality, supporting computer-aided digital pathology in disease diagnosis. However, the task remains challenging due to the intrinsic variations in size, intensity, and morphology of different types of nuclei. Herein, we propose a self-guided ordinal regression neural network for simultaneous nuclear segmentation and classification that can exploit the intrinsic characteristics of nuclei and focus on highly uncertain areas during training. The proposed network formulates nuclei segmentation as ordinal regression learning by introducing a distance-decreasing discretization strategy, which stratifies nuclei so that inner regions forming the regular shape of a nucleus are separated from outer regions forming its irregular shape. It also adopts a self-guided training strategy to adaptively adjust the weights associated with nuclear pixels, depending on the difficulty of the pixels as assessed by the network itself. To evaluate the performance of the proposed network, we employ large-scale multi-tissue datasets with 276,349 exhaustively annotated nuclei. We show that the proposed network achieves state-of-the-art performance in both nuclei segmentation and classification in comparison to several recently developed methods for segmentation and/or classification.
173
Fan Z, Zhang H, Zhang Z, Lu G, Zhang Y, Wang Y. A survey of crowd counting and density estimation based on convolutional neural network. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.02.103]
174
A comprehensive review of computer-aided whole-slide image analysis: from datasets to feature extraction, segmentation, classification and detection approaches. Artif Intell Rev 2022. [DOI: 10.1007/s10462-021-10121-0]
175
Yousef R, Gupta G, Yousef N, Khari M. A holistic overview of deep learning approach in medical imaging. Multimedia Systems 2022; 28:881-914. [PMID: 35079207] [PMCID: PMC8776556] [DOI: 10.1007/s00530-021-00884-5]
Abstract
Medical images are a rich source of invaluable information used by clinicians. Recent technologies have introduced many advances for exploiting this information fully and using it to generate better analyses. Deep learning (DL) techniques have been widely adopted in medical image analysis within computer-assisted imaging contexts, offering many solutions and improvements to the analysis of these images by radiologists and other specialists. In this paper, we present a survey of DL techniques used for a variety of tasks across different medical imaging modalities, providing a critical review of the recent developments in this direction. We have organized our paper to present the significant contributions of deep learning and explain its concepts, which is in turn helpful for non-experts in the medical community. We then present several applications of deep learning (e.g., segmentation, classification, detection, etc.) commonly used for clinical purposes at different anatomical sites, and we also present the main key terms for DL attributes such as basic architectures, data augmentation, transfer learning, and feature selection methods. Medical images as inputs to deep learning architectures will be the mainstream in the coming years, and novel DL techniques are predicted to be the core of medical image analysis. We conclude our paper by addressing some research challenges and the solutions suggested for them in the literature, as well as future promises and directions for further development.
Affiliation(s)
- Rammah Yousef: Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan 173229, Himachal Pradesh, India
- Gaurav Gupta: Yogananda School of AI Computer and Data Sciences, Shoolini University, Solan 173229, Himachal Pradesh, India
- Nabhan Yousef: Electronics and Communication Engineering, Marwadi University, Rajkot, Gujarat, India
- Manju Khari: Jawaharlal Nehru University, New Delhi, India
176
Liang F, Wang S, Zhang K, Liu TJ, Li JN. Development of artificial intelligence technology in diagnosis, treatment, and prognosis of colorectal cancer. World J Gastrointest Oncol 2022; 14:124-152. [PMID: 35116107] [PMCID: PMC8790413] [DOI: 10.4251/wjgo.v14.i1.124]
Abstract
Artificial intelligence (AI) technology has advanced by leaps and bounds since its invention. AI technology can be subdivided into many technologies, such as machine learning and deep learning, whose application scopes and prospects differ substantially. Currently, AI technologies play a pivotal role in the highly complex and wide-ranging medical field, including medical image recognition, biotechnology, auxiliary diagnosis, drug research and development, and nutrition. Colorectal cancer (CRC) is a common gastrointestinal cancer with high mortality, posing a serious threat to human health. Many CRCs are caused by the malignant transformation of colorectal polyps, so early diagnosis and treatment are crucial to CRC prognosis. The methods of diagnosing CRC are divided into imaging diagnosis, endoscopy, and pathology diagnosis; treatment methods are divided into endoscopic treatment, surgical treatment, and drug treatment. AI technology is still in the era of weak AI and lacks communication capabilities, so current AI is mainly used for image recognition and auxiliary analysis without in-depth communication with patients. This article reviews the application of AI in the diagnosis, treatment, and prognosis of CRC and provides prospects for its broader application in CRC.
Affiliation(s)
- Feng Liang: Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
- Shu Wang: Department of Radiotherapy, Jilin University Second Hospital, Changchun 130041, Jilin Province, China
- Kai Zhang: Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
- Tong-Jun Liu: Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
- Jian-Nan Li: Department of General Surgery, The Second Hospital of Jilin University, Changchun 130041, Jilin Province, China
177
Ragab M, Albukhari A. Automated Artificial Intelligence Empowered Colorectal Cancer Detection and Classification Model. Computers, Materials & Continua 2022; 72:5577-5591. [DOI: 10.32604/cmc.2022.026715]
178
Harinee S, Mahendran A. On the Study of Machine Learning Algorithms Towards Healthcare Applications. Studies in Big Data 2022:117-129. [DOI: 10.1007/978-3-030-75855-4_7]
179
Diaz-Flores E, Meyer T, Giorkallos A. Evolution of Artificial Intelligence-Powered Technologies in Biomedical Research and Healthcare. Advances in Biochemical Engineering/Biotechnology 2022; 182:23-60. [DOI: 10.1007/10_2021_189]
180
Salari A, Djavadifar A, Liu XR, Najjaran H. Object recognition datasets and challenges: A review. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.01.022]
181
Pham V, Nguyen H, Pham B, Nguyen T, Nguyen H. Robust Engineering-based Unified Biomedical Imaging Framework for Liver Tumor Segmentation. Curr Med Imaging 2022; 19:37-45. [PMID: 34348633] [DOI: 10.2174/1573405617666210804151024]
Abstract
BACKGROUND Computer vision in general, and semantic segmentation in particular, has seen many achievements in recent years. Consequently, the emergence of medical imaging has provided new opportunities for artificial intelligence research. Since cancer is the second-leading cause of death in the world, early-stage diagnosis is an essential process that directly slows down the progression of cancer. METHODS Deep neural network-based methods are anticipated to reduce diagnosis time for pathologists. RESULTS In this research paper, an approach to liver tumor identification based on two types of medical images is presented: computed tomography scans and whole-slide images. It is constructed by improving the U-Net and GLNet architectures and includes sub-modules that are combined with the segmentation models to boost overall performance during inference. CONCLUSION Based on the experimental results, the proposed unified framework is promising for use in production environments.
Affiliation(s)
- Vuong Pham: Faculty of Information Technology, Sai Gon University, Ho Chi Minh City, Vietnam
- Hai Nguyen: Faculty of Software Engineering, University of Information Technology, Ho Chi Minh City, Vietnam; Vietnam National University, Ho Chi Minh City, Vietnam
- Bao Pham: Faculty of Information Technology, Sai Gon University, Ho Chi Minh City, Vietnam
- Thien Nguyen: Vietnam Aviation Academy, Ho Chi Minh City, Vietnam
- Hien Nguyen: Faculty of Computer Science, University of Information Technology, Ho Chi Minh City, Vietnam; Vietnam National University, Ho Chi Minh City, Vietnam
182
Li H, Kang Y, Yang W, Wu Z, Shi X, Liu F, Liu J, Hu L, Ma Q, Cui L, Feng J, Yang L. A Robust Training Method for Pathological Cellular Detector via Spatial Loss Calibration. Front Med (Lausanne) 2022; 8:767625. [PMID: 34970560] [PMCID: PMC8712578] [DOI: 10.3389/fmed.2021.767625]
Abstract
Computer-aided diagnosis of pathological images usually requires detecting and examining all positive cells for an accurate diagnosis. However, cellular datasets tend to be sparsely annotated due to the challenge of annotating all the cells, and training detectors on sparse annotations may be misled by miscalculated losses, limiting detection performance. Thus, efficient and reliable methods for training cellular detectors on sparse annotations are in higher demand than ever. In this study, we propose a training method that utilizes the spatial information of regression boxes to calibrate the loss and reduce miscalculation. Extensive experimental results show that our method can significantly boost the performance of detectors trained on datasets with varying degrees of sparse annotation. Even if 90% of the annotations are missing, the performance of our method is barely affected. Furthermore, we find that the middle layers of the detector are closely related to generalization performance. More generally, this study could elucidate the link between layers and generalization performance and provide insight for future research, such as designing and applying constraint rules to specific layers according to gradient analysis to achieve “scalpel-level” model training.
Affiliation(s)
- Hansheng Li: School of Information Science and Technology, Northwest University, Xi'an, China
- Yuxin Kang: School of Information Science and Technology, Northwest University, Xi'an, China
- Wentao Yang: Fudan University Shanghai Cancer Center, Shanghai, China
- Zhuoyue Wu: School of Information Science and Technology, Northwest University, Xi'an, China
- Xiaoshuang Shi: Department of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
- Feihong Liu: School of Information Science and Technology, Northwest University, Xi'an, China
- Jianye Liu: School of Information Science and Technology, Northwest University, Xi'an, China
- Qian Ma: AstraZeneca, Shanghai, China
- Lei Cui: School of Information Science and Technology, Northwest University, Xi'an, China
- Jun Feng: School of Information Science and Technology, Northwest University, Xi'an, China
- Lin Yang: School of Information Science and Technology, Northwest University, Xi'an, China
183
Musa IH, Afolabi LO, Zamit I, Musa TH, Musa HH, Tassang A, Akintunde TY, Li W. Artificial Intelligence and Machine Learning in Cancer Research: A Systematic and Thematic Analysis of the Top 100 Cited Articles Indexed in Scopus Database. Cancer Control 2022; 29:10732748221095946. [PMID: 35688650] [PMCID: PMC9189515] [DOI: 10.1177/10732748221095946]
Abstract
INTRODUCTION Cancer is a major public health problem and a leading cause of death globally, and the screening, diagnosis, prediction, survival estimation, and treatment of cancer, as well as control measures, remain major challenges. The rise of Artificial Intelligence (AI) and Machine Learning (ML) techniques and their applications in various fields have brought immense value in providing insights that support cancer control. METHODS A systematic and thematic analysis was performed on the Scopus database to identify the top 100 cited articles in cancer research. Data were analyzed using RStudio and VOSviewer v1.6.6. RESULTS The top 100 articles in AI and ML in cancer received 33,920 citations, with individual counts ranging from 108 to 5,758. Doi Kunio from the USA was the most cited author, with a total number of citations (TNC) of 663. Of the 43 contributing countries, 30% of the top 100 cited articles originated from the USA and 10% from China. Among the 57 peer-reviewed journals, "Expert Systems with Applications" published 8% of the total articles. The results highlight technological advancement through AI and ML via the widespread use of Artificial Neural Networks (ANNs), deep learning and machine learning techniques, mammography-based models, convolutional neural networks (SC-CNN), and text mining techniques in the prediction, diagnosis, and prevention of various types of cancer. CONCLUSIONS This bibliometric study provides a detailed overview of the most cited empirical evidence on AI and ML adoption in cancer research, which could help in designing future research efficiently. These innovations promise greater speed in the detection and control of cancer using AI and ML, improving the patient experience.
Affiliation(s)
- Ibrahim H. Musa: Department of Software Engineering, School of Computer Science and Engineering, Southeast University, Nanjing, China; Key Laboratory of Computer Network and Information Integration, Ministry of Education, Southeast University, Nanjing, China
- Lukman O. Afolabi: Guangdong Immune Cell Therapy Engineering and Technology Research Center, Center for Protein and Cell-Based Drugs, Institute of Biomedicine and Biotechnology, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China; University of Chinese Academy of Sciences, Beijing, China
- Ibrahim Zamit: University of Chinese Academy of Sciences, Beijing, China; Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Taha H. Musa: Biomedical Research Institute, Darfur University College, Nyala, South Darfur, Sudan; Key Laboratory of Environmental Medicine Engineering, Ministry of Education, Department of Epidemiology and Health Statistics, School of Public Health, Southeast University, Nanjing, Jiangsu Province, China
- Hassan H. Musa: Faculty of Medical Laboratory Sciences, University of Khartoum, Khartoum, Sudan
- Andrew Tassang: Faculty of Health Sciences, University of Buea, Cameroon; Buea Regional Hospital, Annex, Cameroon
- Tosin Y. Akintunde: Department of Sociology, School of Public Administration, Hohai University, Nanjing, China
- Wei Li: Department of Quality Management, Children’s Hospital of Nanjing Medical University, Nanjing, China
184
Wang LM, Huang YH, Chou PH, Wang YM, Chen CM, Sun CW. Characteristics of brain connectivity during verbal fluency test: Convolutional neural network for functional near-infrared spectroscopy analysis. Journal of Biophotonics 2022; 15:e202100180. [PMID: 34553833] [DOI: 10.1002/jbio.202100180]
Abstract
The human connectome describes the complex connection matrix of the nervous system in the human brain. It also holds great potential for helping doctors monitor brain injury and recovery in patients. To unravel the enigma of neuronal connections and functions, previous research has striven to uncover the relations between neurons and brain regions. The verbal fluency test (VFT) is a common neuropsychological test that has been used in functional connectivity investigations. In this study, we employed a convolutional neural network (CNN) on maps of brain hemoglobin concentration changes (ΔHB) obtained during the VFT to investigate the connections of activated brain areas under different mental states. Our results show that functional connectivity features can be identified accurately by applying a CNN to ΔHB mapping, which helps improve the understanding of functional brain connections.
Collapse
Affiliation(s)
- Le-Mei Wang
- Biomedical Optical Imaging Lab, Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Yi-Hua Huang
- Department of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
- Po-Han Chou
- Department of Psychiatry, China Medical University Hsinchu Hospital, China Medical University, Hsinchu, Taiwan
- Yi-Min Wang
- Biomedical Optical Imaging Lab, Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
- Chung-Ming Chen
- Department of Biomedical Engineering, National Taiwan University, Taipei, Taiwan
- Chia-Wei Sun
- Biomedical Optical Imaging Lab, Department of Photonics, College of Electrical and Computer Engineering, National Yang Ming Chiao Tung University, Hsinchu, Taiwan
185
Xie X, Wang X, Liang Y, Yang J, Wu Y, Li L, Sun X, Bing P, He B, Tian G, Shi X. Evaluating Cancer-Related Biomarkers Based on Pathological Images: A Systematic Review. Front Oncol 2021; 11:763527. [PMID: 34900711 PMCID: PMC8660076 DOI: 10.3389/fonc.2021.763527] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2021] [Accepted: 10/18/2021] [Indexed: 12/12/2022] Open
Abstract
Many diseases are accompanied by changes in certain biochemical indicators, called biomarkers, in cells or tissues. A variety of biomarkers, including proteins, nucleic acids, antibodies, and peptides, have been identified. Tumor biomarkers have been widely used in cancer risk assessment, early screening, diagnosis, prognosis, treatment, and progression monitoring. For example, the number of circulating tumor cells (CTCs) is a prognostic indicator of overall survival in breast cancer, and tumor mutation burden (TMB) can be used to predict the efficacy of immune checkpoint inhibitors. Currently, clinical methods such as polymerase chain reaction (PCR) and next-generation sequencing (NGS) are mainly adopted to evaluate these biomarkers, which are time-consuming and expensive. Pathological image analysis is an essential tool in medical research, disease diagnosis, and treatment, and works by extracting important physiological and pathological information or knowledge from medical images. Recently, deep learning-based analysis of pathological images and morphology to predict tumor biomarkers has attracted great attention from both the medical imaging and machine learning communities, as this combination not only reduces the burden on pathologists but also saves substantial cost and time. Therefore, it is necessary to summarize the current pipeline for processing pathological images and the key steps and methods used at each stage, including: (1) pre-processing of pathological images, (2) image segmentation, (3) feature extraction, and (4) feature model construction. This will help researchers choose more appropriate medical image processing methods when predicting tumor biomarkers.
Affiliation(s)
- Xiaoliang Xie
- Department of Colorectal Surgery, General Hospital of Ningxia Medical University, Yinchuan, China; College of Clinical Medicine, Ningxia Medical University, Yinchuan, China
- Xulin Wang
- Department of Oncology Surgery, Central Hospital of Jia Mu Si City, Jia Mu Si, China
- Yuebin Liang
- Geneis Beijing Co., Ltd., Beijing, China; Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
- Jingya Yang
- Geneis Beijing Co., Ltd., Beijing, China; Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China; School of Electrical and Information Engineering, Anhui University of Technology, Ma'anshan, China
- Yan Wu
- Geneis Beijing Co., Ltd., Beijing, China; Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
- Li Li
- Beijing Shanghe Jiye Biotech Co., Ltd., Beijing, China
- Xin Sun
- Department of Medical Affairs, Central Hospital of Jia Mu Si City, Jia Mu Si, China
- Pingping Bing
- Academician Workstation, Changsha Medical University, Changsha, China
- Binsheng He
- Academician Workstation, Changsha Medical University, Changsha, China
- Geng Tian
- Geneis Beijing Co., Ltd., Beijing, China; Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China; IBMC-BGI Center, The Cancer Hospital of the University of Chinese Academy of Sciences (Zhejiang Cancer Hospital), Institute of Basic Medicine and Cancer (IBMC), Chinese Academy of Sciences, Hangzhou, China
- Xiaoli Shi
- Geneis Beijing Co., Ltd., Beijing, China; Qingdao Geneis Institute of Big Data Mining and Precision Medicine, Qingdao, China
186
Mehrvar S, Himmel LE, Babburi P, Goldberg AL, Guffroy M, Janardhan K, Krempley AL, Bawa B. Deep Learning Approaches and Applications in Toxicologic Histopathology: Current Status and Future Perspectives. J Pathol Inform 2021; 12:42. [PMID: 34881097 PMCID: PMC8609289 DOI: 10.4103/jpi.jpi_36_21] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Accepted: 07/18/2021] [Indexed: 12/13/2022] Open
Abstract
Whole slide imaging enables the use of a wide array of digital image analysis tools that are revolutionizing pathology. Recent advances in digital pathology and deep convolutional neural networks have created an enormous opportunity to improve workflow efficiency, provide more quantitative, objective, and consistent assessments of pathology datasets, and develop decision support systems. Such innovations are already making their way into clinical practice. However, the progress of machine learning - in particular, deep learning (DL) - has been slower in nonclinical toxicology studies. Histopathology data from toxicology studies, which regulatory bodies require in order to assess drug-related toxicity in laboratory animals and its impact on human safety in clinical trials, are critical during the drug development process. Due to the high volume of slides routinely evaluated, low-throughput or narrowly performing DL methods that may work well in small-scale diagnostic studies or for the identification of a single abnormality are tedious and impractical for toxicologic pathology. Furthermore, regulatory requirements around good laboratory practice are a major hurdle for the adoption of DL in toxicologic pathology. This paper reviews the major DL concepts, emerging applications, and examples of DL in toxicologic pathology image analysis. We end with a discussion of specific challenges and directions for future research.
Affiliation(s)
- Shima Mehrvar
- Preclinical Safety, AbbVie Inc., North Chicago, IL, USA
- Pradeep Babburi
- Business Technology Solutions, AbbVie Inc., North Chicago, IL, USA
187
Verma R, Kumar N, Patil A, Kurian NC, Rane S, Graham S, Vu QD, Zwager M, Raza SEA, Rajpoot N, Wu X, Chen H, Huang Y, Wang L, Jung H, Brown GT, Liu Y, Liu S, Jahromi SAF, Khani AA, Montahaei E, Baghshah MS, Behroozi H, Semkin P, Rassadin A, Dutande P, Lodaya R, Baid U, Baheti B, Talbar S, Mahbod A, Ecker R, Ellinger I, Luo Z, Dong B, Xu Z, Yao Y, Lv S, Feng M, Xu K, Zunair H, Hamza AB, Smiley S, Yin TK, Fang QR, Srivastava S, Mahapatra D, Trnavska L, Zhang H, Narayanan PL, Law J, Yuan Y, Tejomay A, Mitkari A, Koka D, Ramachandra V, Kini L, Sethi A. MoNuSAC2020: A Multi-Organ Nuclei Segmentation and Classification Challenge. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:3413-3423. [PMID: 34086562 DOI: 10.1109/tmi.2021.3085712] [Citation(s) in RCA: 56] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Detecting various types of cells in and around the tumor matrix holds a special significance in characterizing the tumor micro-environment for cancer prognostication and research. Automating the tasks of detecting, segmenting, and classifying nuclei can free up the pathologists' time for higher value tasks and reduce errors due to fatigue and subjectivity. To encourage the computer vision research community to develop and test algorithms for these tasks, we prepared a large and diverse dataset of nucleus boundary annotations and class labels. The dataset has over 46,000 nuclei from 37 hospitals, 71 patients, four organs, and four nucleus types. We also organized a challenge around this dataset as a satellite event at the International Symposium on Biomedical Imaging (ISBI) in April 2020. The challenge saw a wide participation from across the world, and the top methods were able to match inter-human concordance for the challenge metric. In this paper, we summarize the dataset and the key findings of the challenge, including the commonalities and differences between the methods developed by various participants. We have released the MoNuSAC2020 dataset to the public.
188
Peng X, Yang X. Liver tumor detection based on objects as points. Phys Med Biol 2021; 66. [PMID: 34727529 DOI: 10.1088/1361-6560/ac35c7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2021] [Accepted: 11/02/2021] [Indexed: 11/11/2022]
Abstract
The automatic detection of liver tumors by computed tomography is challenging, owing to their wide variations in size and location, as well as to their irregular shapes. Existing detection methods largely rely on two-stage detectors and use CT images marked with bounding boxes for training and detection. In this study, we propose a single-stage detector method designed to accurately detect multiple tumors simultaneously, and provide results demonstrating its increased speed and efficiency compared to prior methods. The proposed model divides CT images into multiple channels to obtain continuity information and implements a bounding box attention mechanism to overcome the limitation of inaccurate prediction of tumor center points and decrease redundant bounding boxes. The model integrates information from various channels using an effective Squeeze-and-Excitation attention module. The proposed model obtained a mean average precision result of 0.476 on the Decathlon dataset, which was superior to that of the prior methods examined for comparison. This research is expected to enable physicians to diagnose tumors very efficiently; particularly, the prediction of tumor center points is expected to enable physicians to rapidly verify their diagnostic judgments. The proposed method is considered suitable for future adoption in clinical practice in hospitals and resource-poor areas because its superior performance does not increase computational cost; hence, the equipment required is relatively inexpensive.
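The Squeeze-and-Excitation attention module this abstract mentions re-weights feature channels via global pooling, a bottleneck, and a sigmoid gate. As a rough illustration of that mechanism only (not the authors' implementation; the shapes, reduction ratio, and random weights are placeholders), a minimal NumPy sketch:

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Squeeze-and-Excitation over a feature map x of shape (C, H, W).

    Squeeze: global average pooling per channel -> (C,).
    Excite:  bottleneck FC + ReLU, then FC + sigmoid -> per-channel gates.
    Scale:   multiply each channel of x by its gate.
    """
    z = x.mean(axis=(1, 2))              # squeeze: (C,)
    s = np.maximum(w1 @ z, 0.0)          # reduction FC + ReLU: (C // r,)
    g = 1.0 / (1.0 + np.exp(-(w2 @ s)))  # expansion FC + sigmoid: (C,)
    return x * g[:, None, None]          # channel-wise re-weighting

rng = np.random.default_rng(0)
C, H, W, r = 8, 4, 4, 2                  # r is the reduction ratio
x = rng.standard_normal((C, H, W))
w1 = rng.standard_normal((C // r, C))    # placeholder weights (untrained)
w2 = rng.standard_normal((C, C // r))
y = squeeze_excite(x, w1, w2)
print(y.shape)  # (8, 4, 4)
```

In a trained network the two weight matrices are learned end-to-end; here they are random purely to exercise the shapes.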
Affiliation(s)
- Xuefeng Peng
- The Faculty of Information, Beijing University of Technology, Beijing, People's Republic of China
- Xinwu Yang
- The Faculty of Information, Beijing University of Technology, Beijing, People's Republic of China
189
Li W, Li J, Polson J, Wang Z, Speier W, Arnold C. High resolution histopathology image generation and segmentation through adversarial training. Med Image Anal 2021; 75:102251. [PMID: 34814059 DOI: 10.1016/j.media.2021.102251] [Citation(s) in RCA: 19] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2020] [Revised: 07/09/2021] [Accepted: 09/20/2021] [Indexed: 12/01/2022]
Abstract
Semantic segmentation of histopathology images can be a vital aspect of computer-aided diagnosis, and deep learning models have been effectively applied to this task with varying levels of success. However, their impact has been limited due to the small size of fully annotated datasets. Data augmentation is one avenue to address this limitation. Generative Adversarial Networks (GANs) have shown promise in this respect, but previous work has focused mostly on classification tasks applied to MR and CT images, both of which have lower resolution and scale than histopathology images. There is limited research that applies GANs as a data augmentation approach for large-scale image semantic segmentation, which requires high-quality image-mask pairs. In this work, we propose a multi-scale conditional GAN for high-resolution, large-scale histopathology image generation and segmentation. Our model consists of a pyramid of GAN structures, each responsible for generating and segmenting images at a different scale. Using semantic masks, the generative component of our model is able to synthesize histopathology images that are visually realistic. We demonstrate that these synthesized images along with their masks can be used to boost segmentation performance, especially in the semi-supervised scenario.
Affiliation(s)
- Wenyuan Li
- Computational Diagnostics Lab, UCLA, Los Angeles, USA; The Department of Electrical and Computer Engineering, UCLA, Los Angeles, USA
- Jiayun Li
- Computational Diagnostics Lab, UCLA, Los Angeles, USA; The Department of Bioengineering, UCLA, Los Angeles, USA
- Jennifer Polson
- Computational Diagnostics Lab, UCLA, Los Angeles, USA; The Department of Bioengineering, UCLA, Los Angeles, USA
- Zichen Wang
- Computational Diagnostics Lab, UCLA, Los Angeles, USA; The Department of Bioengineering, UCLA, Los Angeles, USA
- William Speier
- Computational Diagnostics Lab, UCLA, Los Angeles, USA; The Department of Bioengineering, UCLA, Los Angeles, USA; The Department of Radiological Sciences, UCLA, Los Angeles, USA
- Corey Arnold
- Computational Diagnostics Lab, UCLA, Los Angeles, USA; The Department of Electrical and Computer Engineering, UCLA, Los Angeles, USA; The Department of Bioengineering, UCLA, Los Angeles, USA; The Department of Radiological Sciences, UCLA, Los Angeles, USA; The Department of Pathology & Laboratory Medicine, UCLA, Los Angeles, USA
190
Babu T, Singh T, Gupta D, Hameed S. Colon cancer prediction on histological images using deep learning features and Bayesian optimized SVM. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2021. [DOI: 10.3233/jifs-189850] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023]
Abstract
Colon cancer has one of the highest cancer mortality rates worldwide. However, histopathological analysis relying on the expertise of pathologists is a demanding and time-consuming process. The automated diagnosis of colon cancer from biopsy examination plays an important role in patient care and prognosis. As conventional handcrafted feature extraction requires specialized experience to select realistic features, deep learning approaches have been chosen, since abstract high-level features can be extracted automatically. This paper presents a colon cancer detection system that uses transfer learning architectures to automatically extract high-level features from colon biopsy images for automated diagnosis and prognosis. In this study, image features are extracted from a pre-trained convolutional neural network (CNN) and used to train a Bayesian-optimized Support Vector Machine classifier. Moreover, the AlexNet, VGG-16, and Inception-V3 pre-trained neural networks were compared to determine the best network for colon cancer detection. Furthermore, the proposed framework is evaluated on four datasets: two collected from Indian hospitals (at 4X, 10X, 20X, and 40X magnifications) and two public colon image datasets. Compared with existing classifiers and methods on the public datasets, the test results showed that the Inception-V3 network, with accuracy ranging from 96.5% to 99%, is best suited for the proposed framework.
Affiliation(s)
- Tina Babu
- Department of Computer Science and Engineering, Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, India
- Tripty Singh
- Department of Computer Science and Engineering, Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, India
- Deepa Gupta
- Department of Computer Science and Engineering, Amrita School of Engineering, Bengaluru, Amrita Vishwa Vidyapeetham, India
- Shahin Hameed
- Department of Pathology, MVR Cancer Center and Research Institute, Poolacode, Kerala, India
191
Jose L, Liu S, Russo C, Nadort A, Di Ieva A. Generative Adversarial Networks in Digital Pathology and Histopathological Image Processing: A Review. J Pathol Inform 2021; 12:43. [PMID: 34881098 PMCID: PMC8609288 DOI: 10.4103/jpi.jpi_103_20] [Citation(s) in RCA: 31] [Impact Index Per Article: 7.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Revised: 03/03/2021] [Accepted: 04/23/2021] [Indexed: 12/13/2022] Open
Abstract
Digital pathology is gaining prominence among researchers with developments in advanced imaging modalities and new technologies. Generative adversarial networks (GANs) are a recent development in the field of artificial intelligence and, since their inception, have attracted considerable interest in digital pathology. GANs and their extensions have opened several ways to tackle challenging histopathological image processing problems such as color normalization, virtual staining, ink removal, image enhancement, automatic feature extraction, segmentation of nuclei, domain adaptation, and data augmentation. This paper reviews recent advances in histopathological image processing using GANs, with special emphasis on future perspectives related to the use of such techniques. The papers included in this review were retrieved by conducting a keyword search on Google Scholar and manually selecting papers on H&E-stained digital pathology images for histopathological image processing. In the first part, we describe recent literature that uses GANs in various image preprocessing tasks such as stain normalization, virtual staining, image enhancement, ink removal, and data augmentation. In the second part, we describe literature that uses GANs for image analysis, such as nuclei detection, segmentation, and feature extraction. This review illustrates the role of GANs in digital pathology, with the objective of triggering new research on the application of generative models in digital pathology informatics.
Affiliation(s)
- Laya Jose
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
- Sidong Liu
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Australian Institute of Health Innovation, Centre for Health Informatics, Macquarie University, Sydney, Australia
- Carlo Russo
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Annemarie Nadort
- ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
- Department of Physics and Astronomy, Faculty of Science and Engineering, Macquarie University, Sydney, Australia
- Antonio Di Ieva
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
192
Yu G, Sun K, Xu C, Shi XH, Wu C, Xie T, Meng RQ, Meng XH, Wang KS, Xiao HM, Deng HW. Accurate recognition of colorectal cancer with semi-supervised deep learning on pathological images. Nat Commun 2021; 12:6311. [PMID: 34728629 PMCID: PMC8563931 DOI: 10.1038/s41467-021-26643-8] [Citation(s) in RCA: 62] [Impact Index Per Article: 15.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2020] [Accepted: 10/12/2021] [Indexed: 02/07/2023] Open
Abstract
Machine-assisted pathological recognition has focused on supervised learning (SL), which suffers from a significant annotation bottleneck. We propose a semi-supervised learning (SSL) method based on the mean teacher architecture using 13,111 whole slide images of colorectal cancer from 8803 subjects from 13 independent centers. SSL (~3150 labeled, ~40,950 unlabeled; ~6300 labeled, ~37,800 unlabeled patches) performs significantly better than SL. No significant difference is found between SSL (~6300 labeled, ~37,800 unlabeled) and SL (~44,100 labeled) at patch-level diagnoses (area under the curve (AUC): 0.980 ± 0.014 vs. 0.987 ± 0.008, P value = 0.134) or patient-level diagnoses (AUC: 0.974 ± 0.013 vs. 0.980 ± 0.010, P value = 0.117), which is close to human pathologists (average AUC: 0.969). The evaluation on 15,000 lung and 294,912 lymph node images also confirms that SSL can achieve performance similar to that of SL with massive annotations. SSL dramatically reduces the annotation burden and thus has great potential for building expert-level pathological artificial intelligence platforms in practice.
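The mean teacher architecture referenced in this abstract pairs a student model, trained on labeled data plus a consistency loss against the teacher's predictions on unlabeled data, with a teacher whose weights are an exponential moving average (EMA) of the student's. A toy NumPy sketch of the two update rules on a linear regressor (the model, data, and hyperparameters are illustrative, not the paper's setup):

```python
import numpy as np

def mean_teacher_step(w_s, w_t, x_l, y_l, x_u, lr=0.1, alpha=0.95, lam=0.5):
    """One mean-teacher update for a linear regressor y = w @ x.

    Student: gradient step on labeled squared error plus a consistency
             term pulling its predictions toward the teacher's on
             unlabeled data.
    Teacher: exponential moving average (EMA) of the student weights.
    """
    g_sup = 2 * (w_s @ x_l - y_l) @ x_l.T / x_l.shape[1]
    g_con = 2 * (w_s @ x_u - w_t @ x_u) @ x_u.T / x_u.shape[1]
    w_s = w_s - lr * (g_sup + lam * g_con)
    w_t = alpha * w_t + (1 - alpha) * w_s      # EMA teacher update
    return w_s, w_t

rng = np.random.default_rng(1)
w_true = np.array([[2.0, -1.0]])               # ground-truth weights
x_l = np.array([[1.0, 0.0, 1.0, -1.0],         # tiny labeled batch
                [0.0, 1.0, 1.0, 1.0]])
y_l = w_true @ x_l
x_u = rng.standard_normal((2, 64))             # larger unlabeled batch
w_s, w_t = np.zeros((1, 2)), np.zeros((1, 2))
for _ in range(500):
    w_s, w_t = mean_teacher_step(w_s, w_t, x_l, y_l, x_u)
print(np.round(w_t, 2))                        # approaches w_true
```

The labeled loss anchors the student to the ground truth while the EMA teacher trails it smoothly; in the full method, noise or augmentation on the unlabeled inputs makes the consistency term informative rather than trivial.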
Affiliation(s)
- Gang Yu
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China
- Kai Sun
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China
- Chao Xu
- Department of Biostatistics and Epidemiology, University of Oklahoma Health Sciences Center, Oklahoma City, OK, 73104, USA
- Xing-Hua Shi
- Department of Computer & Information Sciences, College of Science and Technology, Temple University, Philadelphia, PA, 19122, USA
- Chong Wu
- Department of Statistics, Florida State University, Tallahassee, FL, 32306, USA
- Ting Xie
- Department of Biomedical Engineering, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China
- Run-Qi Meng
- Electronic Information Science and Technology, School of Physics and Electronics, Central South University, 410083, Changsha, Hunan, China
- Xiang-He Meng
- Center for System Biology, Data Sciences and Reproductive Health, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China
- Kuan-Song Wang
- Department of Pathology, Xiangya Hospital, School of Basic Medical Science, Central South University, 410078, Changsha, Hunan, China
- Hong-Mei Xiao
- Center for System Biology, Data Sciences and Reproductive Health, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China
- Hong-Wen Deng
- Center for System Biology, Data Sciences and Reproductive Health, School of Basic Medical Science, Central South University, 410013, Changsha, Hunan, China
- Deming Department of Medicine, Tulane Center of Biomedical Informatics and Genomics, Tulane University School of Medicine, New Orleans, LA, 70112, USA
193
Xu Z, Wang X, Zeng S, Ren X, Yan Y, Gong Z. Applying artificial intelligence for cancer immunotherapy. Acta Pharm Sin B 2021; 11:3393-3405. [PMID: 34900525 PMCID: PMC8642413 DOI: 10.1016/j.apsb.2021.02.007] [Citation(s) in RCA: 28] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2020] [Revised: 12/07/2020] [Accepted: 12/21/2020] [Indexed: 02/07/2023] Open
Abstract
Artificial intelligence (AI) is a general term that refers to the use of a machine to imitate intelligent behavior for performing complex tasks with minimal human intervention, such as machine learning; this technology is revolutionizing and reshaping medicine. AI has considerable potential to perfect health-care systems in areas such as diagnostics, risk analysis, health information administration, lifestyle supervision, and virtual health assistance. In terms of immunotherapy, AI has been applied to the prediction of immunotherapy responses based on immune signatures, medical imaging and histological analysis. These features could also be highly useful in the management of cancer immunotherapy given their ever-increasing performance in improving diagnostic accuracy, optimizing treatment planning, predicting outcomes of care and reducing human resource costs. In this review, we present the details of AI and the current progression and state of the art in employing AI for cancer immunotherapy. Furthermore, we discuss the challenges, opportunities and corresponding strategies in applying the technology for widespread clinical deployment. Finally, we summarize the impact of AI on cancer immunotherapy and provide our perspectives about underlying applications of AI in the future.
Key Words
- AI, artificial intelligence
- Artificial intelligence
- CT, computed tomography
- CTLA-4, cytotoxic T lymphocyte-associated antigen 4
- Cancer immunotherapy
- DL, deep learning
- Diagnostics
- ICB, immune checkpoint blockade
- MHC-I, major histocompatibility complex class I
- ML, machine learning
- MMR, mismatch repair
- MRI, magnetic resonance imaging
- Machine learning
- PD-1, programmed cell death protein 1
- PD-L1, PD-1 ligand1
- TNBC, triple-negative breast cancer
- US, ultrasonography
- irAEs, immune-related adverse events
Affiliation(s)
- Zhijie Xu
- Department of Pathology, Xiangya Hospital, Central South University, Changsha 410008, China
- Xiang Wang
- Department of Pharmacy, Xiangya Hospital, Central South University, Changsha 410008, China
- Shuangshuang Zeng
- Department of Pharmacy, Xiangya Hospital, Central South University, Changsha 410008, China
- Xinxin Ren
- Center for Molecular Medicine, Xiangya Hospital, Key Laboratory of Molecular Radiation Oncology of Hunan Province, Central South University, Changsha 410008, China
- Yuanliang Yan
- Department of Pharmacy, Xiangya Hospital, Central South University, Changsha 410008, China
- Institute for Rational and Safe Medication Practices, National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha 410008, China
- Zhicheng Gong
- Department of Pharmacy, Xiangya Hospital, Central South University, Changsha 410008, China
- Institute for Rational and Safe Medication Practices, National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha 410008, China
194
Zhou X, Gu M, Cheng Z. Local Integral Regression Network for Cell Nuclei Detection. ENTROPY 2021; 23:e23101336. [PMID: 34682060 PMCID: PMC8535160 DOI: 10.3390/e23101336] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/15/2021] [Accepted: 10/07/2021] [Indexed: 11/16/2022]
Abstract
Nuclei detection is a fundamental task in histopathology image analysis and remains challenging due to cellular heterogeneity. Recent studies explore convolutional neural networks to either delineate nuclei with sophisticated boundaries (segmentation-based methods) or locate their centroids (counting-based approaches). Although these two families of methods have demonstrated notable success, their fully supervised training demands considerable and laborious pixel-wise annotations manually labeled by pathology experts. To alleviate this tedious effort and reduce the annotation cost, we propose a novel local integral regression network (LIRNet) that allows both fully and weakly supervised learning (FSL/WSL) frameworks for nuclei detection. Furthermore, the LIRNet can output an exquisite density map of nuclei, in which the localization of each nucleus is barely affected by post-processing algorithms. The quantitative experimental results demonstrate that the FSL version of the LIRNet achieves state-of-the-art performance compared to other counterparts. In addition, the WSL version exhibits competitive detection performance and effortless data annotation that requires only 17.5% of the annotation effort.
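The density map that counting-based approaches such as LIRNet regress is typically trained against a target built by placing a normalized Gaussian at each annotated centroid, so that the map sums to the nucleus count. A minimal NumPy sketch of such a target map (hypothetical centroids and kernel width; not the paper's code):

```python
import numpy as np

def density_map(points, shape, sigma=2.0):
    """Target density map for counting-based nucleus detection.

    Each annotated centroid contributes a normalized Gaussian kernel,
    so the whole map sums to the number of nuclei and local sums give
    local counts.
    """
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]
    dmap = np.zeros(shape)
    for cy, cx in points:
        g = np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * sigma ** 2))
        dmap += g / g.sum()              # each nucleus adds exactly 1
    return dmap

points = [(10, 12), (25, 30), (40, 8)]   # hypothetical centroid annotations
dmap = density_map(points, (64, 64))
print(round(dmap.sum(), 3))              # 3.0 -- one unit of mass per nucleus
```

Individual nuclei can then be recovered from a predicted map by local-maximum search, which is the post-processing step the abstract notes has little effect on localization.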
195
Weakly supervised learning for classification of lung cytological images using attention-based multiple instance learning. Sci Rep 2021; 11:20317. [PMID: 34645863 PMCID: PMC8514584 DOI: 10.1038/s41598-021-99246-4] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2021] [Accepted: 09/22/2021] [Indexed: 11/09/2022] Open
Abstract
In cytological examination, suspicious cells are evaluated regarding malignancy and cancer type. To assist this, we previously proposed an automated method based on supervised learning that classifies cells in lung cytological images as benign or malignant. However, it is often difficult to label all cells. In this study, we developed a weakly supervised method for the classification of benign and malignant lung cells in cytological images using attention-based deep multiple instance learning (AD MIL). Images of lung cytological specimens were divided into small patch images and stored in bags. Each bag was then labeled as benign or malignant, and classification was conducted using AD MIL. The distribution of attention weights was also calculated as a color map to confirm the presence of malignant cells in the image. AD MIL using an AlexNet-like convolutional neural network model showed the best classification performance, with an accuracy of 0.916, which was better than that of supervised learning. In addition, an attention map of the entire image based on the attention weights allowed AD MIL to focus on most of the malignant cells. Our weakly supervised method automatically classifies cytological images with accuracy comparable to that of supervised learning, without requiring complex annotations.
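Attention-based MIL pooling of the kind this abstract describes (in the style of Ilse et al.) computes a learned softmax weight per patch and aggregates the patch embeddings into one bag-level embedding; the weights double as the attention color map. A minimal NumPy sketch with random, untrained parameters (shapes are illustrative only):

```python
import numpy as np

def attention_mil_pool(H, V, w):
    """Attention-based MIL pooling over a bag of patch embeddings.

    H: (n_patches, d) patch embeddings from a CNN backbone.
    a_k = softmax_k(w . tanh(V @ h_k))   -- learned attention weights
    z   = sum_k a_k * h_k                -- bag-level embedding
    The weights a also serve as a per-patch attention map.
    """
    scores = np.tanh(H @ V.T) @ w        # (n_patches,)
    scores -= scores.max()               # numerically stable softmax
    a = np.exp(scores) / np.exp(scores).sum()
    z = a @ H
    return z, a

rng = np.random.default_rng(2)
n, d, m = 6, 16, 8                       # 6 patches, 16-dim embeddings
H = rng.standard_normal((n, d))          # stand-in for CNN patch features
V = rng.standard_normal((m, d))          # untrained attention parameters
w = rng.standard_normal(m)
z, a = attention_mil_pool(H, V, w)
print(z.shape, round(a.sum(), 6))        # (16,) 1.0
```

Because only the bag label supervises training, the attention weights learn to concentrate on the patches that drive the bag's classification, which is what lets the method highlight malignant cells.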
196
Vidyarthi A, Patel A. Deep assisted dense model based classification of invasive ductal breast histology images. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05947-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
197
Xing F, Cornish TC, Bennett TD, Ghosh D. Bidirectional Mapping-Based Domain Adaptation for Nucleus Detection in Cross-Modality Microscopy Images. IEEE TRANSACTIONS ON MEDICAL IMAGING 2021; 40:2880-2896. [PMID: 33284750 PMCID: PMC8543886 DOI: 10.1109/tmi.2020.3042789] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Cell or nucleus detection is a fundamental task in microscopy image analysis and has recently achieved state-of-the-art performance by using deep neural networks. However, training supervised deep models such as convolutional neural networks (CNNs) usually requires sufficient annotated image data, which is prohibitively expensive or unavailable in some applications. Additionally, when applying a CNN to new datasets, it is common to annotate individual cells/nuclei in those target datasets for model re-learning, leading to inefficient and low-throughput image analysis. To tackle these problems, we present a bidirectional, adversarial domain adaptation method for nucleus detection on cross-modality microscopy image data. Specifically, the method learns a deep regression model for individual nucleus detection with both source-to-target and target-to-source image translation. In addition, we explicitly extend this unsupervised domain adaptation method to a semi-supervised learning situation and further boost the nucleus detection performance. We evaluate the proposed method on three cross-modality microscopy image datasets, which cover a wide variety of microscopy imaging protocols or modalities, and obtain a significant improvement in nucleus detection compared to reference baseline approaches. In addition, our semi-supervised method is very competitive with recent fully supervised learning models trained with all real target training labels.
|
198
|
Abousamra S, Belinsky D, Van Arnam J, Allard F, Yee E, Gupta R, Kurc T, Samaras D, Saltz J, Chen C. Multi-Class Cell Detection Using Spatial Context Representation. Proc IEEE Int Conf Comput Vis 2021; 2021:3985-3994. [PMID: 38783989] [PMCID: PMC11114143] [DOI: 10.1109/iccv48922.2021.00397]
Abstract
In digital pathology, both detection and classification of cells are important for automatic diagnostic and prognostic tasks. Classifying cells into subtypes, such as tumor cells, lymphocytes, or stromal cells, is particularly challenging. Existing methods focus on the morphological appearance of individual cells, whereas in practice pathologists often infer cell classes from their spatial context. In this paper, we propose a novel method for both detection and classification that explicitly incorporates spatial contextual information. We use a spatial statistical function to describe local density in both a multi-class and a multi-scale manner. Through representation learning and deep clustering techniques, we learn cell representations that capture both appearance and spatial context. On various benchmarks, our method achieves better performance than state-of-the-art methods, especially on the classification task. We also create a new dataset for multi-class cell detection and classification in breast cancer, and we make both our code and data publicly available.
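The multi-class, multi-scale spatial statistic the abstract refers to can be illustrated by counting, for each cell, how many neighbors of every class fall within several radii, a simplified stand-in for a K-function-style descriptor (the radii, class count, and function name below are illustrative assumptions, not the paper's implementation):

```python
import numpy as np

def spatial_context(cells, labels, query_idx, radii=(20.0, 50.0), n_classes=3):
    """For one cell, count neighbors of each class within each radius.
    Returns a (len(radii), n_classes) feature matrix: multi-scale rows,
    multi-class columns."""
    cells = np.asarray(cells, dtype=float)
    labels = np.asarray(labels)
    d = np.linalg.norm(cells - cells[query_idx], axis=1)
    feats = np.zeros((len(radii), n_classes))
    for i, r in enumerate(radii):
        mask = (d <= r) & (d > 0)  # exclude the query cell itself
        for c in range(n_classes):
            feats[i, c] = np.sum(mask & (labels == c))
    return feats

# Four cells (x, y) with classes 0=tumor, 1=lymphocyte, 2=stromal
feats = spatial_context([(0, 0), (10, 0), (0, 30), (100, 100)],
                        [0, 1, 2, 1], query_idx=0)
```

Concatenating such context vectors with per-cell appearance features is one straightforward way to give a classifier the neighborhood information pathologists rely on.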
Affiliation(s)
- Eric Yee
- Stony Brook University, Stony Brook, NY 11794, USA
- Tahsin Kurc
- Stony Brook University, Stony Brook, NY 11794, USA
- Joel Saltz
- Stony Brook University, Stony Brook, NY 11794, USA
- Chao Chen
- Stony Brook University, Stony Brook, NY 11794, USA
|
199
|
Oza P, Sharma P, Patel S, Bruno A. A Bottom-Up Review of Image Analysis Methods for Suspicious Region Detection in Mammograms. J Imaging 2021; 7:190. [PMID: 34564116] [PMCID: PMC8466003] [DOI: 10.3390/jimaging7090190] [Received: 06/09/2021] [Revised: 09/09/2021] [Accepted: 09/14/2021]
Abstract
Breast cancer is one of the most common causes of death among women worldwide. Early detection of breast cancer plays a critical role in increasing the survival rate. Various imaging modalities, such as mammography, breast MRI, ultrasound and thermography, are used to detect breast cancer. Although mammography has achieved considerable success in biomedical imaging, detecting suspicious areas remains a challenge: examination is manual, masses vary in shape, size and other morphological features, and mammography accuracy changes with breast density. Furthermore, analyzing many mammograms per day can be a tedious task for radiologists and practitioners. One of the main objectives of biomedical imaging is to provide radiologists and practitioners with tools that help them identify all suspicious regions in a given image. Computer-aided mass detection in mammograms can serve as a second-opinion tool that helps radiologists avoid oversight errors. The scientific community has made much progress on this topic, and several approaches have been proposed along the way. Following a bottom-up narrative, this paper surveys scientific methodologies and techniques for detecting suspicious regions in mammograms, spanning from methods based on low-level image features to the most recent AI-based approaches. Both theoretical and practical grounds are provided across the paper's sections to highlight the pros and cons of different methodologies. The paper's main scope is to let readers embark on a journey through a fully comprehensive description of techniques, strategies and datasets on the topic.
Affiliation(s)
- Parita Oza
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
- Paawan Sharma
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
- Samir Patel
- Computer Science and Engineering Department, School of Technology, Pandit Deendayal Energy University, Gandhinagar 382007, India
- Alessandro Bruno
- Department of Computing and Informatics, Bournemouth University, Poole, Dorset BH12 5BB, UK
|
200
|
Liu TYA, Wei J, Zhu H, Subramanian PS, Myung D, Yi PH, Hui FK, Unberath M, Ting DSW, Miller NR. Detection of Optic Disc Abnormalities in Color Fundus Photographs Using Deep Learning. J Neuroophthalmol 2021; 41:368-374. [PMID: 34415271] [PMCID: PMC10637344] [DOI: 10.1097/wno.0000000000001358]
Abstract
BACKGROUND To date, deep learning-based detection of optic disc abnormalities in color fundus photographs has mostly been limited to the field of glaucoma. However, many life-threatening systemic and neurological conditions can manifest as optic disc abnormalities. In this study, we aimed to extend the application of deep learning (DL) in optic disc analyses to detect a spectrum of nonglaucomatous optic neuropathies. METHODS Using transfer learning, we trained a ResNet-152 deep convolutional neural network (DCNN) to distinguish between normal and abnormal optic discs in color fundus photographs (CFPs). Our training data set included 944 deidentified CFPs (abnormal 364; normal 580). Our testing data set included 151 deidentified CFPs (abnormal 71; normal 80). Both the training and testing data sets contained a wide range of optic disc abnormalities, including but not limited to ischemic optic neuropathy, atrophy, compressive optic neuropathy, hereditary optic neuropathy, hypoplasia, papilledema, and toxic optic neuropathy. The standard measures of performance (sensitivity, specificity, and area under the receiver operating characteristic curve (AUC-ROC)) were used for evaluation. RESULTS During the 10-fold cross-validation test, our DCNN for distinguishing between normal and abnormal optic discs achieved the following mean performance: AUC-ROC 0.99 (95% CI: 0.98-0.99), sensitivity 94% (95% CI: 91%-97%), and specificity 96% (95% CI: 93%-99%). When evaluated against the external testing data set, our model achieved the following mean performance: AUC-ROC 0.87, sensitivity 90%, and specificity 69%. CONCLUSION In summary, we have developed a deep learning algorithm that is capable of detecting a spectrum of optic disc abnormalities in color fundus photographs, with a focus on neuro-ophthalmological etiologies. As the next step, we plan to validate our algorithm prospectively as a focused screening tool in the emergency department, which, if successful, could be beneficial because current practice patterns and training predict a shortage of neuro-ophthalmologists, and of ophthalmologists in general, in the near future.
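The standard binary-classification metrics reported in abstracts like the one above (sensitivity, specificity, AUC-ROC) can be computed directly from model scores; a small sketch in NumPy, where the 0.5 decision threshold and the function name are illustrative assumptions:

```python
import numpy as np

def sens_spec_auc(scores, y, thresh=0.5):
    """Sensitivity and specificity at a fixed threshold, plus AUC-ROC
    via the rank (Mann-Whitney) statistic: the probability that a random
    positive scores higher than a random negative, with ties counted 0.5."""
    scores = np.asarray(scores, dtype=float)
    y = np.asarray(y, dtype=int)
    pred = scores >= thresh
    sens = np.mean(pred[y == 1])    # true-positive rate
    spec = np.mean(~pred[y == 0])   # true-negative rate
    pos, neg = scores[y == 1], scores[y == 0]
    auc = (np.mean(pos[:, None] > neg[None, :])
           + 0.5 * np.mean(pos[:, None] == neg[None, :]))
    return sens, spec, auc

# Four test images: two abnormal (label 1), two normal (label 0)
s, sp, a = sens_spec_auc([0.9, 0.4, 0.6, 0.2], [1, 1, 0, 0])
```

Note that AUC-ROC is threshold-free, which is why a model can keep a high AUC (0.87 externally here) while sensitivity and specificity shift when the operating threshold is not recalibrated for the new population.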
Affiliation(s)
- T Y Alvin Liu
- Department of Ophthalmology (TYAL, NRM), Wilmer Eye Institute, Johns Hopkins University, Baltimore, Maryland; Department of Biomedical Engineering (JW), Johns Hopkins University, Baltimore, Maryland; Malone Center for Engineering in Healthcare (HZ, MU), Johns Hopkins University, Baltimore, Maryland; Department of Radiology (PHY, FKH), Johns Hopkins University, Baltimore, Maryland; Singapore Eye Research Institute (DSWT), Singapore National Eye Center, Duke-NUS Medical School, National University of Singapore, Singapore; Department of Ophthalmology (PSS), University of Colorado School of Medicine, Aurora, Colorado; and Department of Ophthalmology (DM), Byers Eye Institute, Stanford University, Palo Alto, California
|