1
Wu DY, Vo DT, Seiler SJ. For the busy clinical-imaging professional in an AI world: Gaining intuition about deep learning without math. J Med Imaging Radiat Sci 2025;56:101762. [PMID: 39437625] [DOI: 10.1016/j.jmir.2024.101762]
Abstract
Medical diagnostics comprise recognizing patterns in images, tissue slides, and symptoms. Deep learning algorithms (DLs) are well suited to such tasks, but they are black boxes in various ways. To explain DL Computer-Aided Diagnostic (CAD) results and their accuracy to patients, to manage or drive the direction of future medical DLs, and to make better decisions with CAD, clinical professionals may benefit from hands-on, under-the-hood lessons about medical DL. For those who already have some high-level knowledge about DL, the next step is to gain a more fundamental understanding of DLs, which may help illuminate the inside of the boxes. The objectives of this Continuing Medical Education (CME) article follow from that premise: better understanding can come from relatable medical analogies and from personally experiencing quick simulations that show deep learning in action, akin to the way clinicians are trained to perform other tasks. We developed readily implementable demonstrations and simulation exercises, framed using analogies to breast cancer, with malignancy and cancer stage as example diagnostic applications. The simulations revealed a nuanced relationship between DL output accuracy and the quantity and nature of the data. The simulation results provide lessons learned and implications for the clinical world. Although we focus on DLs for diagnosis, these are similar to DLs for treatment (e.g., radiotherapy), so treatment providers may also benefit from this tutorial.
Affiliation(s)
- Dolly Y Wu: Volunteer Services, UT Southwestern Medical Center, Dallas, TX, USA
- Dat T Vo: Department of Radiation Oncology, UT Southwestern Medical Center, Dallas, TX, USA
- Stephen J Seiler: Department of Radiology, UT Southwestern Medical Center, Dallas, TX, USA
2
Mohseni A, Ghotbi E, Kazemi F, Shababi A, Jahan SC, Mohseni A, Shababi N. Artificial Intelligence in Radiology: What Is Its True Role at Present, and Where Is the Evidence? Radiol Clin North Am 2024;62:935-947. [PMID: 39393852] [DOI: 10.1016/j.rcl.2024.03.008]
Abstract
The integration of artificial intelligence (AI) in radiology has brought about substantial advancements and transformative potential in diagnostic imaging practices. This study presents an overview of the current research on the application of AI in radiology, highlighting key insights from recent studies and surveys. These recent studies have explored the expected impact of AI, encompassing machine learning and deep learning, on the work volume of diagnostic radiologists. The present and future role of AI in radiology holds great promise for enhancing diagnostic capabilities, improving workflow efficiency, and ultimately, advancing patient care.
Affiliation(s)
- Alireza Mohseni: Johns Hopkins University School of Medicine, 600 N. Wolfe Street / Phipps 446, Baltimore, MD 21287, USA
- Elena Ghotbi: Johns Hopkins University School of Medicine, 600 N. Wolfe Street / Phipps 446, Baltimore, MD 21287, USA
- Foad Kazemi: Johns Hopkins University School of Medicine, 600 N. Wolfe Street / Phipps 446, Baltimore, MD 21287, USA
- Amirali Shababi: School of Medicine, Iran University of Medical Sciences, Hemat Highway next to Milad Tower 14535, Tehran, Iran
- Shayan Chashm Jahan: Department of Computer Science, University of Maryland, 8125 Paint Branch Drive, College Park, MD 20742, USA
- Anita Mohseni: Azad University Tehran Medical Branch, Danesh, Shariati Street, Tehran, Iran 19395/1495
- Niloufar Shababi: Johns Hopkins University School of Medicine, 600 N. Wolfe Street / Phipps 446, Baltimore, MD 21287, USA
3
Guo S, Wang H, Agaian S, Han L, Song X. LRENet: a location-related enhancement network for liver lesions in CT images. Phys Med Biol 2024;69:035019. [PMID: 38211307] [DOI: 10.1088/1361-6560/ad1d6b]
Abstract
Objective: Liver cancer is a major global health problem, expected to increase by more than 55% by 2040. Accurate segmentation of liver tumors from computed tomography (CT) images is essential for diagnosis and treatment planning. However, this task is challenging due to variations in liver size, the low contrast between tumor and normal tissue, and noise in the images. Approach: In this study, we propose a novel method called the location-related enhancement network (LRENet), which enhances the contrast of liver lesions in CT images and facilitates their segmentation. LRENet consists of two steps: (1) locating the lesions and the surrounding tissues using a morphological approach and (2) enhancing the lesions and smoothing the other regions using a new loss function. Main results: We evaluated LRENet on two public datasets (LiTS and 3Dircadb01) and one dataset collected from a collaborative hospital (liver cancer dataset), and compared it with state-of-the-art methods on several metrics. The experiments showed that our proposed method outperformed the compared methods on all three datasets across several metrics. We also trained the Swin Transformer network on the enhanced datasets and showed that our method could improve segmentation performance for both the liver and its lesions. Significance: Our method has potential applications in clinical diagnosis and treatment planning, as it can provide more reliable and informative CT images of liver tumors.
Affiliation(s)
- Shuli Guo: State Key Laboratory of Intelligent Control and Decision of Complex Systems, School of Automation, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Hui Wang: State Key Laboratory of Intelligent Control and Decision of Complex Systems, School of Automation, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Sos Agaian: Computer Science Department, College of Staten Island and the Graduate Center, City University of New York, 2800 Victory Boulevard, Staten Island, NY 10314, United States of America
- Lina Han: Department of Cardiology, The Second Medical Center, National Clinical Research Center for Geriatric Diseases, Chinese PLA General Hospital, Beijing 100853, People's Republic of China
- Xiaowei Song: State Key Laboratory of Intelligent Control and Decision of Complex Systems, School of Automation, Beijing Institute of Technology, Beijing 100081, People's Republic of China
4
Leming MJ, Bron EE, Bruffaerts R, Ou Y, Iglesias JE, Gollub RL, Im H. Challenges of implementing computer-aided diagnostic models for neuroimages in a clinical setting. NPJ Digit Med 2023;6:129. [PMID: 37443276] [DOI: 10.1038/s41746-023-00868-x]
Abstract
Advances in artificial intelligence have cultivated a strong interest in developing and validating the clinical utilities of computer-aided diagnostic models. Machine learning for diagnostic neuroimaging has often been applied to detect psychological and neurological disorders, typically on small-scale datasets or data collected in a research setting. With the collection and collation of an ever-growing number of public datasets that researchers can freely access, much work has been done in adapting machine learning models to classify these neuroimages by diseases such as Alzheimer's, ADHD, autism, bipolar disorder, and so on. These studies often come with the promise of being implemented clinically, but despite intense interest in this topic in the laboratory, limited progress has been made in clinical implementation. In this review, we analyze challenges specific to the clinical implementation of diagnostic AI models for neuroimaging data, looking at the differences between laboratory and clinical settings, the inherent limitations of diagnostic AI, and the different incentives and skill sets between research institutions, technology companies, and hospitals. These complexities need to be recognized in the translation of diagnostic AI for neuroimaging from the laboratory to the clinic.
Affiliation(s)
- Matthew J Leming: Center for Systems Biology, Massachusetts General Hospital, Boston, MA, USA; Massachusetts Alzheimer's Disease Research Center, Charlestown, MA, USA
- Esther E Bron: Department of Radiology and Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Rose Bruffaerts: Computational Neurology, Experimental Neurobiology Unit (ENU), Department of Biomedical Sciences, University of Antwerp, Antwerp, Belgium; Biomedical Research Institute, Hasselt University, Diepenbeek, Belgium
- Yangming Ou: Boston Children's Hospital, 300 Longwood Ave, Boston, MA, USA
- Juan Eugenio Iglesias: Center for Medical Image Computing, University College London, London, UK; Martinos Center for Biomedical Imaging, Harvard Medical School, Boston, MA, USA; Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
- Randy L Gollub: Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Hyungsoon Im: Center for Systems Biology, Massachusetts General Hospital, Boston, MA, USA; Massachusetts Alzheimer's Disease Research Center, Charlestown, MA, USA; Department of Radiology, Massachusetts General Hospital, Boston, MA, USA
5
Role of Artificial Intelligence in Radiogenomics for Cancers in the Era of Precision Medicine. Cancers (Basel) 2022;14:2860. [PMID: 35740526] [PMCID: PMC9220825] [DOI: 10.3390/cancers14122860]
Abstract
Simple Summary: Recently, radiogenomics has played a significant role and offered a new understanding of cancer's biology and behavior in response to standard therapy. It also provides a more precise prognosis, investigation, and analysis of the patient's cancer. Over the years, Artificial Intelligence (AI) has provided significant strength to radiogenomics. In this paper, we offer computational and oncological perspectives on the role of AI in radiogenomics, as well as its achievements, opportunities, and limitations in current clinical practice.

Abstract: Radiogenomics, a combination of "Radiomics" and "Genomics" using Artificial Intelligence (AI), has recently emerged as the state-of-the-art science in precision medicine, especially in oncology care. Radiogenomics syndicates large-scale quantifiable data extracted from radiological medical images enveloped with personalized genomic phenotypes. It fabricates a prediction model through various AI methods to stratify patient risk, monitor therapeutic approaches, and assess clinical outcomes. It has recently shown tremendous achievements in prognosis, treatment planning, survival prediction, heterogeneity analysis, recurrence, and progression-free survival in human cancer studies. Although AI has shown immense performance in various clinical aspects of oncology care, it has several challenges and limitations. This review provides an overview of radiogenomics with viewpoints on the role of AI in terms of its promise for computational as well as oncological aspects, and it outlines achievements and opportunities in the era of precision medicine. The review also presents various recommendations to diminish these obstacles.
6
Gerber S, Pospisil L, Sys S, Hewel C, Torkamani A, Horenko I. Co-Inference of Data Mislabelings Reveals Improved Models in Genomics and Breast Cancer Diagnostics. Front Artif Intell 2022;4:739432. [PMID: 35072059] [PMCID: PMC8766632] [DOI: 10.3389/frai.2021.739432]
Abstract
Mislabeling of cases as well as controls in case–control studies is a frequent source of strong bias in prognostic and diagnostic tests and algorithms. Common data processing methods available to researchers in the biomedical community do not allow for consistent and robust treatment of labeled data in situations where both the case and the control groups contain a non-negligible proportion of mislabeled data instances. This is an especially prominent issue in studies regarding late-onset conditions, where individuals who may convert to cases may populate the control group, and for screening studies that often have high false-positive/-negative rates. To address this problem, we propose a method for the simultaneous robust inference of Lasso-reduced discriminative models and of latent group-specific mislabeling risks, not requiring any exactly labeled data. We apply it to a standard breast cancer imaging dataset and infer the mislabeling probabilities (being rates of false-negative and false-positive core-needle biopsies) together with a small set of simple diagnostic rules, outperforming the state-of-the-art BI-RADS diagnostics on these data. The inferred mislabeling rates for breast cancer biopsies agree with published purely empirical studies. Applying the method to human genomic data from a healthy-ageing cohort reveals a previously unreported compact combination of single-nucleotide polymorphisms that are strongly associated with a healthy-ageing phenotype for Caucasians. It determines that 7.5% of Caucasians in the 1000 Genomes dataset (selected as a control group) carry a pattern characteristic of healthy ageing.
Affiliation(s)
- Susanne Gerber: Institute of Human Genetics, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany
- Lukas Pospisil: Faculty of Informatics, Institute of Computational Science, Università Della Svizzera Italiana, Lugano, Switzerland
- Stanislav Sys: Institute of Human Genetics, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany
- Charlotte Hewel: Institute of Human Genetics, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany
- Ali Torkamani: Department of Integrative Structural and Computational Biology, The Scripps Research Institute, La Jolla, CA, United States
- Illia Horenko: Faculty of Informatics, Institute of Computational Science, Università Della Svizzera Italiana, Lugano, Switzerland
7
Li H, Ye J, Liu H, Wang Y, Shi B, Chen J, Kong A, Xu Q, Cai J. Application of deep learning in the detection of breast lesions with four different breast densities. Cancer Med 2021;10:4994-5000. [PMID: 34132495] [PMCID: PMC8290249] [DOI: 10.1002/cam4.4042]
Abstract
Objective: This retrospective study evaluated the model in populations with different breast densities and assessed its performance for malignancy prediction. Methods: A total of 608 mammograms were collected from Northern Jiangsu People's Hospital in Yangzhou City; data from this province had not been used in the training or evaluation datasets. The model consists of three submodules: lesion detection (Mask R-CNN), lesion registration between the craniocaudal and mediolateral oblique views, and a malignancy prediction network (ResNet). The dataset used to train the model was obtained from nine institutions across six cities. Normal cases carried no annotations. We adopted the free-response receiver operating characteristic (FROC) curve as the indicator to evaluate detection performance for all cancers and for triple-negative breast cancer (TNBC). FROC curves are also shown for mass/distortion/asymmetry and typical benign calcification in two kinds of populations with four types of breast density. Results: The sensitivities to mass/distortion/asymmetry for the four breast density types (A, B, C, D) were 0.94, 0.92, 0.89, and 0.72, respectively, at 0.25 false positives per image, while the corresponding values were 1.00, 0.95, 0.92, and 0.90 for amorphous calcification lesions. The sensitivity for cancer was 0.85 at the same false-positive rate. TNBC accounts for about 10%-20% of all breast cancers and is more aggressive, with a poorer prognosis, than other breast cancers; we therefore also evaluated performance on TNBC cases. Our results show that the Yizhun AI model could detect 75% of TNBC lesions at the same false-positive level. Conclusion: The Yizhun AI model used in our work has good diagnostic efficiency across breast types, even for extremely dense breasts, and can help guide the clinical diagnosis of breast cancer. Its performance on mass/distortion/asymmetry is significantly affected by breast density, compared with its performance on amorphous calcification.
Affiliation(s)
- Hongmei Li: Department of Radiology, Subei People's Hospital of Jiangsu Province, Yangzhou, Jiangsu, China
- Jing Ye: Department of Radiology, Subei People's Hospital of Jiangsu Province, Yangzhou, Jiangsu, China
- Hao Liu: Yizhun Medical AI, Beijing, China
- Yichuan Wang: Yizhun Medical AI, Beijing, China; School of Electronics Engineering and Computer Science, Peking University, Beijing, China
- Binbin Shi: Department of Radiology, Subei People's Hospital of Jiangsu Province, Yangzhou, Jiangsu, China
- Juan Chen: Department of Radiology, Subei People's Hospital of Jiangsu Province, Yangzhou, Jiangsu, China
- Aiping Kong: Department of Radiology, Subei People's Hospital of Jiangsu Province, Yangzhou, Jiangsu, China
- Qing Xu: Department of Radiology, Subei People's Hospital of Jiangsu Province, Yangzhou, Jiangsu, China
- Junhui Cai: Department of Radiology, Subei People's Hospital of Jiangsu Province, Yangzhou, Jiangsu, China
8
Gerasimova-Chechkina E, Toner BC, Batchelder KA, White B, Freynd G, Antipev I, Arneodo A, Khalil A. Loss of Mammographic Tissue Homeostasis in Invasive Lobular and Ductal Breast Carcinomas vs. Benign Lesions. Front Physiol 2021;12:660883. [PMID: 34054577] [PMCID: PMC8153084] [DOI: 10.3389/fphys.2021.660883]
Abstract
The 2D wavelet transform modulus maxima (WTMM) method is used to compare the spatial fluctuations of mammographic breast tissue from patients with invasive lobular carcinoma, patients with invasive ductal carcinoma, and patients with benign lesions. We follow a procedure developed and validated in a previous study, in which a sliding-window protocol is used to analyze thousands of small subregions in a given mammogram. These subregions are categorized according to their Hurst exponent values (H): fatty tissue (H ≤ 0.45), dense tissue (H ≥ 0.55), and disrupted tissue potentially linked with tumor-associated loss of homeostasis (0.45 < H < 0.55). Following this categorization scheme, we compare the mammographic tissue composition of the breasts. First, we show that cancerous breasts are significantly different from breasts with a benign lesion (p-value ∼ 0.002). Second, the asymmetry between a patient's cancerous breast and its contralateral counterpart, when compared to the asymmetry from patients with benign lesions, is also statistically significant (p-value ∼ 0.006). Finally, we show that lobular and ductal cancerous breasts show similar levels of disruption and similar levels of asymmetry. This study demonstrates the reproducibility of the WTMM sliding-window approach for helping to detect and characterize tumor-associated breast tissue disruption from standard mammography. Because both lesion types are assessed similarly here, the approach also shows promise to help with the detection of lobular lesions, which go undetected via standard screening mammography at a much higher rate than ductal lesions.
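The tissue-categorization scheme quoted in this abstract can be stated programmatically. The sketch below is illustrative only and is not the authors' code: it assumes per-window Hurst exponent estimates are already available from some estimator (e.g., a 2D WTMM sliding-window analysis, which is not implemented here) and simply buckets them with the thresholds given above.

```python
# Illustrative sketch (not the authors' code): bucket per-window Hurst exponent
# estimates into the fatty / disrupted / dense classes described in the abstract.
import numpy as np

def tissue_composition(hurst_values, fat_max=0.45, dense_min=0.55):
    """Fractions of analysis windows in each tissue category."""
    h = np.asarray(hurst_values, dtype=float)
    fatty = float(np.mean(h <= fat_max))       # H <= 0.45 -> fatty tissue
    dense = float(np.mean(h >= dense_min))     # H >= 0.55 -> dense tissue
    disrupted = 1.0 - fatty - dense            # 0.45 < H < 0.55 -> potentially disrupted
    return {"fatty": fatty, "disrupted": disrupted, "dense": dense}

# Toy example with simulated per-window estimates for one mammogram
rng = np.random.default_rng(0)
print(tissue_composition(rng.uniform(0.2, 0.8, size=5000)))
```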
Affiliation(s)
- Brian C Toner: CompuMAINE Laboratory, University of Maine, Orono, ME, United States
- Basel White: CompuMAINE Laboratory, University of Maine, Orono, ME, United States
- Genrietta Freynd: Department of Pathology, Perm State Medical University Named After Academician E. A. Wagner, Perm, Russia
- Igor Antipev: Department of Pathology, Perm State Medical University Named After Academician E. A. Wagner, Perm, Russia
- Alain Arneodo: Laboratoire Ondes et Matière d'Aquitaine, Université de Bordeaux, Bordeaux, France
- Andre Khalil: CompuMAINE Laboratory, University of Maine, Orono, ME, United States; Department of Chemical and Biomedical Engineering, University of Maine, Orono, ME, United States
9
Huang Q, Hu B, Zhang F. Evolutionary optimized fuzzy reasoning with mined diagnostic patterns for classification of breast tumors in ultrasound. Inf Sci (N Y) 2019. [DOI: 10.1016/j.ins.2019.06.054]
10
Negrão de Figueiredo G, Ingrisch M, Fallenberg EM. Digital Analysis in Breast Imaging. Breast Care (Basel) 2019;14:142-150. [PMID: 31316312] [DOI: 10.1159/000501099]
Abstract
Breast imaging is a multimodal approach that plays an essential role in the diagnosis of breast cancer. Mammography, sonography, magnetic resonance imaging, and image-guided biopsy are techniques used to search for malignant changes in the breast, or precursors of malignant changes, in, e.g., screening programs or follow-ups after breast cancer treatment. However, these methods still have some disadvantages, such as interobserver variability and the limited sensitivity of mammography in women with radiologically dense breasts. In order to overcome these difficulties and decrease the number of false-positive findings, improvements in image analysis with the help of artificial intelligence are constantly being developed and tested. In addition, the extraction of imaging features and their correlation with specific tumor characteristics and patient genetics, in order to obtain more information about treatment response, prognosis, and cancer risk, are coming more and more into focus. The aim of this review is to address recent developments in the digital analysis of images and demonstrate their potential value in multimodal breast imaging.
Affiliation(s)
- Michael Ingrisch: Department of Radiology, Ludwig Maximilian University of Munich - Grosshadern Campus, Munich, Germany
- Eva Maria Fallenberg: Department of Radiology, Ludwig Maximilian University of Munich - Grosshadern Campus, Munich, Germany
11
Zhang Z, Sejdić E. Radiological images and machine learning: Trends, perspectives, and prospects. Comput Biol Med 2019;108:354-370. [PMID: 31054502] [PMCID: PMC6531364] [DOI: 10.1016/j.compbiomed.2019.02.017]
Abstract
The application of machine learning to radiological images is an increasingly active research area that is expected to grow in the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as x-rays, computed tomography, magnetic resonance imaging, and positron emission tomography imaging. In many applications, machine learning-based systems have shown performance comparable to human decision-making. The applications of machine learning are key ingredients of future clinical decision-making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, as well as computer-aided systems, image registration, and content-based image retrieval systems. We also briefly discuss current challenges and future directions regarding the application of machine learning in radiological imaging. By giving insight into how to take advantage of machine learning-powered applications, we expect that clinicians can prevent and diagnose diseases more accurately and efficiently.
Affiliation(s)
- Zhenwei Zhang: Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
- Ervin Sejdić: Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA 15261, USA
12
Vogl WD, Pinker K, Helbich TH, Bickel H, Grabner G, Bogner W, Gruber S, Bago-Horvath Z, Dubsky P, Langs G. Automatic segmentation and classification of breast lesions through identification of informative multiparametric PET/MRI features. Eur Radiol Exp 2019;3:18. [PMID: 31030291] [PMCID: PMC6486931] [DOI: 10.1186/s41747-019-0096-3]
Abstract
Background: Multiparametric positron emission tomography/magnetic resonance imaging (mpPET/MRI) shows clinical potential for detection and classification of breast lesions, yet the contribution of features to computer-aided segmentation and diagnosis (CAD) needs to be better understood. We proposed a data-driven machine learning approach for a CAD system combining dynamic contrast-enhanced (DCE)-MRI, diffusion-weighted imaging (DWI), and 18F-fluorodeoxyglucose (18F-FDG)-PET. Methods: The CAD incorporated a random forest (RF) classifier combined with mpPET/MRI intensity-based features for lesion segmentation, and with shape, kinetic, and spatio-temporal texture features for lesion classification. The CAD pipeline detected and segmented suspicious regions and classified lesions as benign or malignant. The inherent feature selection method of RF and, alternatively, the minimum-redundancy-maximum-relevance feature ranking method were used. Results: In 34 patients, we report a detection rate of 10/12 (83.3%) for benign and 22/22 (100%) for malignant lesions, a Dice similarity coefficient of 0.665 for segmentation, and a classification performance with an area under the curve at receiver operating characteristic analysis of 0.978, a sensitivity of 0.946, and a specificity of 0.936. Segmentation, but not classification, performance of DCE-MRI improved with information from DWI and FDG-PET. Feature ranking revealed that kinetic and spatio-temporal texture features had the highest contribution to lesion classification; 18F-FDG-PET and morphologic features were less predictive. Conclusion: Our CAD enables the assessment of the relevance of mpPET/MRI features to segmentation and classification accuracy. It may aid as a novel computational tool for exploring different modalities/features and their contributions to the detection and classification of breast lesions. Electronic supplementary material: The online version of this article (10.1186/s41747-019-0096-3) contains supplementary material, which is available to authorized users.
Affiliation(s)
- Wolf-Dieter Vogl: Computational Imaging Research Laboratory, Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Waehringer Guertel 18-20, 1090 Vienna, Austria
- Katja Pinker: Division of Molecular and Gender Imaging, Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, 1090 Vienna, Austria; Department of Radiology, Breast Imaging Service, Memorial Sloan Kettering Cancer Center, New York, NY 10065, USA
- Thomas H Helbich: Division of Molecular and Gender Imaging, Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, 1090 Vienna, Austria
- Hubert Bickel: Division of Molecular and Gender Imaging, Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, 1090 Vienna, Austria
- Günther Grabner: MR Center of Excellence, Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, 1090 Vienna, Austria; Department of Radiologic Technology, Carinthia University of Applied Sciences, Klagenfurt, Austria
- Wolfgang Bogner: MR Center of Excellence, Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, 1090 Vienna, Austria
- Stephan Gruber: MR Center of Excellence, Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, 1090 Vienna, Austria
- Peter Dubsky: Department of Surgery, Medical University Vienna, 1090 Vienna, Austria
- Georg Langs: Computational Imaging Research Laboratory, Department of Biomedical Imaging and Image-guided Therapy, Medical University Vienna, Waehringer Guertel 18-20, 1090 Vienna, Austria
13
Ahsen ME, Ayvaci MUS, Raghunathan S. When Algorithmic Predictions Use Human-Generated Data: A Bias-Aware Classification Algorithm for Breast Cancer Diagnosis. Information Systems Research 2019. [DOI: 10.1287/isre.2018.0789]
Affiliation(s)
- Mehmet Eren Ahsen: Genetics and Genomic Sciences, Icahn School of Medicine at Mount Sinai, New York, New York 10029
14
Abstract
Artificial intelligence (AI) algorithms, particularly deep learning, have demonstrated remarkable progress in image-recognition tasks. Methods ranging from convolutional neural networks to variational autoencoders have found myriad applications in the medical image analysis field, propelling it forward at a rapid pace. Historically, in radiology practice, trained physicians visually assessed medical images for the detection, characterization and monitoring of diseases. AI methods excel at automatically recognizing complex patterns in imaging data and providing quantitative, rather than qualitative, assessments of radiographic characteristics. In this Opinion article, we establish a general understanding of AI methods, particularly those pertaining to image-based tasks. We explore how these methods could impact multiple facets of radiology, with a general focus on applications in oncology, and demonstrate ways in which these methods are advancing the field. Finally, we discuss the challenges facing clinical implementation and provide our perspective on how the domain could be advanced.
Affiliation(s)
- Ahmed Hosny: Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- Chintan Parmar: Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA
- John Quackenbush: Department of Biostatistics & Computational Biology, Dana-Farber Cancer Institute, Boston, MA, USA; Department of Cancer Biology, Dana-Farber Cancer Institute, Boston, MA, USA
- Lawrence H Schwartz: Department of Radiology, Columbia University College of Physicians and Surgeons, New York, NY, USA; Department of Radiology, New York Presbyterian Hospital, New York, NY, USA
- Hugo J W L Aerts: Department of Radiation Oncology, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA; Department of Radiology, Dana-Farber Cancer Institute, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
17
Han S, Kang HK, Jeong JY, Park MH, Kim W, Bang WC, Seong YK. A deep learning framework for supporting the classification of breast lesions in ultrasound images. Phys Med Biol 2017;62:7714-7728. [PMID: 28753132] [DOI: 10.1088/1361-6560/aa82ec]
Abstract
In this research, we exploited a deep learning framework to differentiate the distinctive types of lesions and nodules in the breast acquired with ultrasound imaging. A biopsy-proven benchmarking dataset was built from 5151 patient cases containing a total of 7408 ultrasound breast images, representative of semi-automatically segmented lesions associated with masses. The dataset comprised 4254 benign and 3154 malignant lesions. The developed method includes histogram equalization, image cropping, and margin augmentation. The GoogLeNet convolutional neural network was trained on the database to differentiate benign and malignant tumors. Networks were trained on the data with and without augmentation, and both showed an area under the curve of over 0.9. The networks showed an accuracy of about 0.9 (90%), a sensitivity of 0.86, and a specificity of 0.96. Although target regions of interest (ROIs) were selected by radiologists, meaning that radiologists still have to point out the location of the ROI, the classification of malignant lesions showed promising results. If used by radiologists in clinical situations, this method can classify malignant lesions in a short time and support radiologists' diagnoses in discriminating malignant lesions. Therefore, the proposed method can work in tandem with human radiologists to improve performance, which is a fundamental purpose of computer-aided diagnosis.
Affiliation(s)
- Seokmin Han: Korea National University of Transportation, Uiwang-si, Kyunggi-do, Republic of Korea
18
Taghanaki SA, Kawahara J, Miles B, Hamarneh G. Pareto-optimal multi-objective dimensionality reduction deep auto-encoder for mammography classification. Comput Methods Programs Biomed 2017;145:85-93. [PMID: 28552129] [DOI: 10.1016/j.cmpb.2017.04.012]
Abstract
Background and objective: Feature reduction is an essential stage in computer-aided breast cancer diagnosis systems. Multilayer neural networks can be trained to extract relevant features by encoding high-dimensional data into low-dimensional codes. Optimizing traditional auto-encoders works well only if the initial weights are close to a proper solution. They are also trained to only reduce the mean squared reconstruction error (MRE) between the encoder inputs and the decoder outputs, but do not address the classification error. The goal of the current work is to test the hypothesis that extending traditional auto-encoders (which only minimize reconstruction error) to multi-objective optimization for finding Pareto-optimal solutions provides more discriminative features that will improve classification performance when compared to single-objective and other multi-objective approaches (i.e. scalarized and sequential). Methods: In this paper, we introduce a novel multi-objective optimization of deep auto-encoder networks, in which the auto-encoder optimizes two objectives, MRE and mean classification error (MCE), for Pareto-optimal solutions rather than just MRE. These two objectives are optimized simultaneously by a non-dominated sorting genetic algorithm. Results: We tested our method on 949 X-ray mammograms categorized into 12 classes. The results show that the features identified by the proposed algorithm allow a classification accuracy of up to 98.45%, demonstrating favourable accuracy over the results of state-of-the-art methods reported in the literature. Conclusions: We conclude that adding the classification objective to the traditional auto-encoder objective and optimizing for finding Pareto-optimal solutions, using evolutionary multi-objective optimization, results in producing more discriminative features.
Affiliation(s)
- Jeremy Kawahara: Medical Image Analysis Lab, Simon Fraser University, Canada
- Brandon Miles: Medical Image Analysis Lab, Simon Fraser University, Canada
19
Wang Y, Aghaei F, Zarafshani A, Qiu Y, Qian W, Zheng B. Computer-aided classification of mammographic masses using visually sensitive image features. J Xray Sci Technol 2017;25:171-186. [PMID: 27911353] [PMCID: PMC5291799] [DOI: 10.3233/xst-16212]
Abstract
Purpose: To develop a new computer-aided diagnosis (CAD) scheme that computes visually sensitive image features routinely used by radiologists, builds a machine learning classifier, and distinguishes between malignant and benign breast masses detected from digital mammograms. Methods: An image dataset including 301 breast masses was retrospectively selected. From each segmented mass region, we computed image features that mimic five categories of visually sensitive features routinely used by radiologists in reading mammograms. We then selected five optimal features from the five feature categories and applied logistic regression models for classification. A new CAD interface was also designed to show lesion segmentation, computed feature values, and the classification score. Results: Areas under the ROC curve (AUC) were 0.786±0.026 and 0.758±0.027 when classifying mass regions depicted on the two view images, respectively. By fusing the classification scores computed from the two regions, the AUC increased to 0.806±0.025. Conclusion: This study demonstrated a new approach to developing a CAD scheme based on five visually sensitive image features. Combined with a "visual aid" interface, CAD results may be much more easily explainable to observers and may increase their confidence in considering CAD-generated classification results, compared with other conventional CAD approaches that involve many complicated and visually insensitive texture features.
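The two-view score fusion mentioned in this abstract can be illustrated with a small sketch. This is not the authors' code: it uses synthetic stand-ins for the five feature categories, fits one logistic regression model per view, and averages the per-case probabilities, which is one common way such fusion is done.

```python
# Illustrative sketch (not the authors' code): per-view logistic regression
# scores fused by simple averaging, evaluated with ROC AUC on synthetic data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_cases, n_features = 300, 5
y = rng.integers(0, 2, size=n_cases)                              # 0 = benign, 1 = malignant
cc = rng.normal(size=(n_cases, n_features)) + 0.8 * y[:, None]    # CC-view features
mlo = rng.normal(size=(n_cases, n_features)) + 0.8 * y[:, None]   # MLO-view features

p_cc = LogisticRegression().fit(cc, y).predict_proba(cc)[:, 1]
p_mlo = LogisticRegression().fit(mlo, y).predict_proba(mlo)[:, 1]
p_fused = (p_cc + p_mlo) / 2   # average the two view scores per case

# In-sample AUCs shown only to keep the sketch short; a real evaluation
# would score held-out cases.
for name, p in [("CC", p_cc), ("MLO", p_mlo), ("fused", p_fused)]:
    print(f"{name:>5} AUC: {roc_auc_score(y, p):.3f}")
```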
Affiliation(s)
- Yunzhi Wang: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Faranak Aghaei: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Ali Zarafshani: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Yuchen Qiu: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
- Wei Qian: Department of Electrical and Computer Engineering, University of Texas, El Paso, TX 79905, USA; Sino-Dutch Biomedical and Information Engineering School, Northeastern University, Shenyang 110819, China
- Bin Zheng: School of Electrical and Computer Engineering, University of Oklahoma, Norman, OK 73019, USA
20
Lee K, Kim H, Lee JH, Jeong H, Shin SA, Han T, Seo YL, Yoo Y, Nam SE, Park JH, Park YM. Retrospective observation on contribution and limitations of screening for breast cancer with mammography in Korea: detection rate of breast cancer and incidence rate of interval cancer of the breast. BMC Womens Health 2016;16:72. [PMID: 27863517] [PMCID: PMC5116177] [DOI: 10.1186/s12905-016-0351-1]
Abstract
Background: The purpose of this study was to determine the benefits and limitations of screening for breast cancer using mammography. Methods: A descriptive design with follow-up was used. Data from breast cancer screening and health insurance claims were used. The study population consisted of all participants in breast cancer screening from 2009 to 2014. The crude detection rate, positive predictive value, sensitivity, and specificity of breast cancer screening, and the incidence rate of interval cancer of the breast, were calculated. Results: The crude detection rate of breast cancer screening per 100,000 participants increased from 126.3 in 2009 to 182.1 in 2014. The positive predictive value of breast cancer screening per 100,000 positives increased from 741.2 in 2009 to 1,367.9 in 2014. The incidence rate of interval cancer of the breast per 100,000 negatives increased from 51.7 in 2009 to 76.3 in 2014. The sensitivities of screening for breast cancer were 74.6% in 2009 and 75.1% in 2014, and the specificities were 83.1% in 2009 and 85.7% in 2014. Conclusions: To increase the detection rate of breast cancer by mammographic screening, the participation rate should be higher, an environment in which accurate mammography and reading can be performed is needed, and quality control should be reinforced. To reduce the incidence rate of interval cancer of the breast, it will be necessary to educate women from their 20s onward to perform breast self-examination once a month regardless of participation in screening for breast cancer.
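The performance figures reported in this abstract are all simple functions of screening counts. The sketch below is illustrative only and uses arbitrary made-up counts, not the study's data; it shows how the crude detection rate, positive predictive value, interval-cancer incidence, sensitivity, and specificity are typically computed.

```python
# Illustrative sketch with made-up counts (not the study's data): standard
# screening-performance metrics as defined in the abstract.
def screening_metrics(tp, fp, fn, tn):
    """tp: screen-detected cancers, fp: false positives,
    fn: interval cancers among screen-negatives, tn: true negatives."""
    participants = tp + fp + fn + tn
    positives, negatives = tp + fp, fn + tn
    return {
        "crude_detection_rate_per_100k_participants": 1e5 * tp / participants,
        "ppv_per_100k_positives": 1e5 * tp / positives,
        "interval_cancer_rate_per_100k_negatives": 1e5 * fn / negatives,
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
    }

# Toy example: 100,000 screened women with arbitrary counts
print(screening_metrics(tp=150, fp=10_850, fn=50, tn=88_950))
```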
Affiliation(s)
- Kunsei Lee: Department of Preventive Medicine, School of Medicine, Konkuk University, 1 Hwayang-dong, Gwangjin-gu, Seoul 143-701, South Korea
- Hyeongsu Kim: Department of Preventive Medicine, School of Medicine, Konkuk University, 1 Hwayang-dong, Gwangjin-gu, Seoul 143-701, South Korea
- Jung Hyun Lee: Department of Preventive Medicine, School of Medicine, Konkuk University, 1 Hwayang-dong, Gwangjin-gu, Seoul 143-701, South Korea
- Hyoseon Jeong: Department of Preventive Medicine, School of Medicine, Konkuk University, 1 Hwayang-dong, Gwangjin-gu, Seoul 143-701, South Korea
- Soon Ae Shin: Bigdata Steering Department, National Health Insurance Service, Wonju, South Korea
- Taehwa Han: Yonsei University Health System, College of Medicine, Yonsei University, Seoul, South Korea
- Young Lan Seo: Department of Radiology, Kangdong Seong-Sim Hospital, College of Medicine, Hallym University, Seoul, South Korea
- Youngbum Yoo: Department of Surgery, School of Medicine, Konkuk University, Seoul, South Korea
- Sang Eun Nam: Department of Surgery, School of Medicine, Konkuk University, Seoul, South Korea
- Jong Heon Park: Bigdata Steering Department, National Health Insurance Service, Wonju, South Korea
- Yoo Mi Park: Medical and Health Policy Division, Seoul Metropolitan Government, Seoul, South Korea
21
Gerasimova-Chechkina E, Toner B, Marin Z, Audit B, Roux SG, Argoul F, Khalil A, Gileva O, Naimark O, Arneodo A. Comparative Multifractal Analysis of Dynamic Infrared Thermograms and X-Ray Mammograms Enlightens Changes in the Environment of Malignant Tumors. Front Physiol 2016;7:336. [PMID: 27555823] [PMCID: PMC4977307] [DOI: 10.3389/fphys.2016.00336]
Abstract
There is growing evidence that the microenvironment surrounding a tumor plays a special role in cancer development and cancer therapeutic resistance. Tumors arise from the dysregulation and alteration of both the malignant cells and their environment. By providing tumor-repressing signals, the microenvironment can impose and sustain normal tissue architecture. Once tissue homeostasis is lost, the altered microenvironment can create a niche favoring the tumorigenic transformation process. A major challenge in early breast cancer diagnosis is thus to show that these physiological and architectural alterations can be detected with currently used screening techniques. In a recent study, we used a 1D wavelet-based multi-scale method to analyze breast skin temperature temporal fluctuations collected with an IR thermography camera in patients with breast cancer. This study reveals that the multifractal complexity of temperature fluctuations superimposed on cardiogenic and vasomotor perfusion oscillations observed in healthy breasts is lost in malignant tumor foci in cancerous breasts. Here we use a 2D wavelet-based multifractal method to analyze the spatial fluctuations of breast density in the X-ray mammograms of the same panel of patients. As compared to the long-range correlations and anti-correlations in roughness fluctuations, respectively observed in dense and fatty breast areas, some significant change in the nature of breast density fluctuations with some clear loss of correlations is detected in the neighborhood of malignant tumors. This attests to some architectural disorganization that may deeply affect heat transfer and related thermomechanics in breast tissues, corroborating the change to homogeneous monofractal temperature fluctuations recorded in cancerous breasts with the IR camera. These results open new perspectives in computer-aided methods to assist in early breast cancer diagnosis.
Affiliation(s)
- Brian Toner: CompuMAINE Laboratory, Department of Mathematics and Statistics, University of Maine, Orono, ME, USA
- Zach Marin: CompuMAINE Laboratory, Department of Mathematics and Statistics, University of Maine, Orono, ME, USA
- Benjamin Audit: Université Lyon, Ecole Normale Supérieure de Lyon, Université Claude Bernard Lyon 1, Centre National de la Recherche Scientifique, Laboratoire de Physique, Lyon, France
- Stephane G Roux: Université Lyon, Ecole Normale Supérieure de Lyon, Université Claude Bernard Lyon 1, Centre National de la Recherche Scientifique, Laboratoire de Physique, Lyon, France
- Francoise Argoul: Université Lyon, Ecole Normale Supérieure de Lyon, Université Claude Bernard Lyon 1, Centre National de la Recherche Scientifique, Laboratoire de Physique, Lyon, France; Laboratoire Ondes et Matière d'Aquitaine, Centre National de la Recherche Scientifique, Université de Bordeaux, UMR 5798, Talence, France
- Andre Khalil: CompuMAINE Laboratory, Department of Mathematics and Statistics, University of Maine, Orono, ME, USA
- Olga Gileva: Department of Therapeutic and Propedeutic Dentistry, Perm State Medical University, Perm, Russia
- Oleg Naimark: Laboratory of Physical Foundation of Strength, Institute of Continuous Media Mechanics UB RAS, Perm, Russia
- Alain Arneodo: Université Lyon, Ecole Normale Supérieure de Lyon, Université Claude Bernard Lyon 1, Centre National de la Recherche Scientifique, Laboratoire de Physique, Lyon, France; Laboratoire Ondes et Matière d'Aquitaine, Centre National de la Recherche Scientifique, Université de Bordeaux, UMR 5798, Talence, France
22
Arevalo J, González FA, Ramos-Pollán R, Oliveira JL, Guevara Lopez MA. Representation learning for mammography mass lesion classification with convolutional neural networks. Comput Methods Programs Biomed 2016;127:248-257. [PMID: 26826901] [DOI: 10.1016/j.cmpb.2015.12.014]
Abstract
Background and objective: The automatic classification of breast imaging lesions is currently an unsolved problem. This paper describes an innovative representation learning framework for breast cancer diagnosis in mammography that integrates deep learning techniques to automatically learn discriminative features, avoiding the design of specific hand-crafted image-based feature detectors. Methods: A new biopsy-proven benchmarking dataset was built from 344 breast cancer patients' cases containing a total of 736 film mammography (mediolateral oblique and craniocaudal) views, representative of manually segmented lesions associated with masses: 426 benign lesions and 310 malignant lesions. The developed method comprises two main stages: (i) preprocessing to enhance image details and (ii) supervised training for learning both the features and the breast imaging lesions classifier. In contrast to previous works, we adopt a hybrid approach where convolutional neural networks are used to learn the representation in a supervised way instead of designing particular descriptors to explain the content of mammography images. Results: Experimental results using the developed benchmarking breast cancer dataset demonstrated that our method exhibits significantly improved performance when compared to state-of-the-art image descriptors, such as histogram of oriented gradients (HOG) and histogram of the gradient divergence (HGD), increasing the performance from 0.787 to 0.822 in terms of the area under the ROC curve (AUC). Interestingly, this model also outperforms a set of hand-crafted features that take advantage of additional information from segmentation by the radiologist. Finally, the combination of both representations, learned and hand-crafted, resulted in the best descriptor for mass lesion classification, obtaining 0.826 in the AUC score. Conclusions: A novel deep learning-based framework to automatically address the classification of breast mass lesions in mammography was developed.
Affiliation(s)
- John Arevalo: Universidad Nacional de Colombia, Bogotá, Colombia
23
Wavelet-based 3D reconstruction of microcalcification clusters from two mammographic views: new evidence that fractal tumors are malignant and Euclidean tumors are benign. PLoS One 2014;9:e107580. [PMID: 25222610] [PMCID: PMC4164655] [DOI: 10.1371/journal.pone.0107580]
Abstract
The 2D Wavelet-Transform Modulus Maxima (WTMM) method was used to detect microcalcifications (MC) in human breast tissue seen in mammograms and to characterize the fractal geometry of benign and malignant MC clusters. This was done in the context of a preliminary analysis of a small dataset, via a novel way to partition the wavelet-transform space-scale skeleton. For the first time, the estimated 3D fractal structure of a breast lesion was inferred by pairing the information from two separate 2D projected mammographic views of the same breast, i.e. the cranial-caudal (CC) and mediolateral-oblique (MLO) views. As a novelty, we define the “CC-MLO fractal dimension plot”, where a “fractal zone” and “Euclidean zones” (non-fractal) are defined. 118 images (59 cases, 25 malignant and 34 benign) obtained from a digital databank of mammograms with known radiologist diagnostics were analyzed to determine which cases would be plotted in the fractal zone and which cases would fall in the Euclidean zones. 92% of malignant breast lesions studied (23 out of 25 cases) were in the fractal zone while 88% of the benign lesions were in the Euclidean zones (30 out of 34 cases). Furthermore, a Bayesian statistical analysis shows that, with 95% credibility, the probability that fractal breast lesions are malignant is between 74% and 98%. Alternatively, with 95% credibility, the probability that Euclidean breast lesions are benign is between 76% and 96%. These results support the notion that the fractal structure of malignant tumors is more likely to be associated with an invasive behavior into the surrounding tissue compared to the less invasive, Euclidean structure of benign tumors. Finally, based on indirect 3D reconstructions from the 2D views, we conjecture that all breast tumors considered in this study, benign and malignant, fractal or Euclidean, restrict their growth to 2-dimensional manifolds within the breast tissue.
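As a rough illustration of how a Bayesian credibility statement like the one above can be attached to small counts, the sketch below computes equal-tailed Beta-posterior credible intervals from the fractal/Euclidean tallies quoted in this abstract. It assumes a uniform Beta(1, 1) prior; the paper's actual prior and parameterization are not given here, so these intervals are not expected to reproduce the reported 74%-98% and 76%-96% figures exactly.

```python
# Illustrative sketch only: Beta-posterior credible intervals for the
# proportions implied by the abstract's counts, under an assumed uniform prior.
from scipy.stats import beta

def credible_interval(successes, failures, level=0.95, prior=(1.0, 1.0)):
    """Equal-tailed credible interval for a binomial proportion."""
    a, b = prior[0] + successes, prior[1] + failures
    lo, hi = beta.ppf([(1 - level) / 2, (1 + level) / 2], a, b)
    return float(lo), float(hi)

# P(malignant | lesion in the fractal zone): 23 malignant vs 4 benign fractal lesions
print(credible_interval(23, 4))
# P(benign | lesion in the Euclidean zones): 30 benign vs 2 malignant Euclidean lesions
print(credible_interval(30, 2))
```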
24
Cascio D, Fauci F, Iacomi M, Raso G, Magro R, Castrogiovanni D, Filosto G, Ienzi R, Vasile MS. Computer-aided diagnosis in digital mammography: comparison of two commercial systems. Imaging Med 2014. [DOI: 10.2217/iim.13.68]
25
Sharaf-El-Deen DA, Moawad IF, Khalifa ME. A new hybrid case-based reasoning approach for medical diagnosis systems. J Med Syst 2014;38:9. [PMID: 24469683] [DOI: 10.1007/s10916-014-0009-1]
Abstract
Case-Based Reasoning (CBR) has been applied in many different medical applications. Due to the complexities and diversity of this domain, most medical CBR systems become hybrid. Besides, the case adaptation process in CBR is often a challenging issue, as it is traditionally carried out manually by domain experts. In this paper, a new hybrid case-based reasoning approach for medical diagnosis systems is proposed to improve the accuracy of retrieval-only CBR systems. The approach integrates case-based reasoning and rule-based reasoning, and also applies the adaptation process automatically by exploiting adaptation rules. Both adaptation rules and reasoning rules are generated from the case base. After solving a new case, the case base is expanded, and both adaptation and reasoning rules are updated. To evaluate the proposed approach, a prototype was implemented and tested on diagnosing breast cancer and thyroid diseases. The final results show that the proposed approach increases the diagnostic accuracy of retrieval-only CBR systems and provides reliable accuracy compared with current breast cancer and thyroid diagnosis systems.
Affiliation(s)
- Dina A Sharaf-El-Deen: Faculty of Computer and Information Sciences, Ain Shams University, Abbasia, Cairo, Egypt
26
Liu YI, Rubin DL. The role of informatics in health care reform. Acad Radiol 2012;19:1094-1099. [PMID: 22771052] [DOI: 10.1016/j.acra.2012.05.006]
Abstract
Improving health care quality while simultaneously reducing cost has become a high priority of health care reform. Informatics is crucial in tackling this challenge. The American Recovery and Reinvestment Act of 2009 mandates the adoption and "meaningful use" of health information technology. In this review, we highlight several areas in which informatics can make significant contributions, with a focus on radiology. We also discuss informatics related to the increasing imperatives of state and local regulations (such as radiation dose tracking) and quality initiatives.
27
Ayvaci MUS, Alagoz O, Burnside ES. The Effect of Budgetary Restrictions on Breast Cancer Diagnostic Decisions. Manufacturing & Service Operations Management 2012;14:600-617. [PMID: 24027436] [PMCID: PMC3767197] [DOI: 10.1287/msom.1110.0371]
Abstract
We develop a finite-horizon discrete-time constrained Markov decision process (MDP) to model diagnostic decisions after mammography, in which we maximize the total expected quality-adjusted life years (QALYs) of a patient under resource constraints. We use clinical data to estimate the parameters of the MDP model and solve it as a mixed-integer program. By repeating the optimization for a sequence of budget levels, we calculate incremental cost-effectiveness ratios attributable to consecutive levels of funding and compare actual clinical practice with optimal decisions. We prove that the optimal value function is concave in the allocated budget. Compared to actual clinical practice, using optimal thresholds for decision making may result in approximately 22% cost savings without sacrificing QALYs. Our analysis indicates that short-term follow-ups are the immediate target for elimination when the budget becomes a concern. Policy change is more drastic in the older age group as the budget increases, yet the gains in total expected QALYs from larger budgets are seen predominantly in younger women, with modest gains for older women.
Affiliation(s)
- Mehmet U. S. Ayvaci: Department of Industrial and Systems Engineering, University of Wisconsin–Madison, Madison, Wisconsin 53706
- Oguzhan Alagoz: Department of Industrial and Systems Engineering, University of Wisconsin–Madison, Madison, Wisconsin 53706