1. Dale R, Ross N, Howard S, O’Sullivan TD, Dehghani H. Towards real-time diffuse optical tomography with a handheld scanning probe. Biomedical Optics Express 2025;16:1582-1601. PMID: 40322000; PMCID: PMC12047716; DOI: 10.1364/boe.549880.
Abstract
Diffuse optical tomography (DOT) performed using deep learning allows high-speed reconstruction of tissue optical properties and could thereby enable image-guided scanning, e.g., to enhance clinical breast imaging. Previously published models are geometry-specific and therefore require extensive data generation and training for each use case, restricting the scanning protocol at the point of use. A transformer-based architecture that encodes spatially unstructured DOT measurements is proposed to overcome these obstacles, enabling a single trained model to handle arbitrary scanning pathways and measurement densities. The model is demonstrated with breast tissue-emulating simulated and phantom data, yielding, for 24 mm-deep absorption (μ_a) and reduced scattering (μ_s') images respectively, average RMSEs of 0.0095±0.0023 cm⁻¹ and 1.95±0.78 cm⁻¹, Sørensen-Dice coefficients of 0.55±0.12 and 0.67±0.1, and anomaly contrast of 79±10% and 93.3±4.6% of the ground-truth contrast, with an effective imaging speed of 14 Hz. The average absolute μ_a and μ_s' values of homogeneous simulated examples were within 10% of the true values.
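The accuracy metrics quoted above (RMSE against ground truth and the Sørensen-Dice overlap between recovered and true anomaly regions) are standard and straightforward to compute. A minimal sketch follows; the array shapes and thresholding convention are illustrative assumptions, not the paper's pipeline:

```python
import numpy as np

def rmse(pred, truth):
    """Root-mean-square error between a reconstructed optical-property map
    and the ground truth, in the same units (e.g. cm^-1 for mu_a)."""
    pred, truth = np.asarray(pred, float), np.asarray(truth, float)
    return float(np.sqrt(np.mean((pred - truth) ** 2)))

def dice(pred_mask, truth_mask):
    """Sorensen-Dice coefficient between binary anomaly masks
    (e.g. voxels thresholded as 'inside the absorber')."""
    pred_mask = np.asarray(pred_mask, bool)
    truth_mask = np.asarray(truth_mask, bool)
    inter = np.logical_and(pred_mask, truth_mask).sum()
    return 2.0 * inter / (pred_mask.sum() + truth_mask.sum())
```

Both functions apply voxel-wise over a reconstructed volume; Dice is computed after thresholding the recovered map into an anomaly mask.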
Affiliation(s)
- Robin Dale
- University of Birmingham, Medical Imaging Lab, School of Computer Science, University Rd W, Birmingham, B15 2TT, UK
- Nicholas Ross
- University of Notre Dame, Department of Electrical Engineering and Bioengineering Program, 275 Fitzpatrick Hall, Notre Dame, Indiana, 46556, USA
- Scott Howard
- University of Notre Dame, Department of Electrical Engineering and Bioengineering Program, 275 Fitzpatrick Hall, Notre Dame, Indiana, 46556, USA
- Thomas D. O’Sullivan
- University of Notre Dame, Department of Electrical Engineering and Bioengineering Program, 275 Fitzpatrick Hall, Notre Dame, Indiana, 46556, USA
- Hamid Dehghani
- University of Birmingham, Medical Imaging Lab, School of Computer Science, University Rd W, Birmingham, B15 2TT, UK
2. Harnischmacher N, Rodner E, Schmitz CH. Detection of breast cancer using machine learning on time-series diffuse optical transillumination data. Journal of Biomedical Optics 2024;29:115001. PMID: 39529875; PMCID: PMC11552526; DOI: 10.1117/1.jbo.29.11.115001.
Abstract
Significance: Optical mammography as a promising tool for cancer diagnosis has largely fallen behind expectations. Modern machine learning (ML) methods offer ways to improve cancer detection in diffuse optical transmission data. Aim: We aim to quantitatively evaluate the classification of cancer-positive versus cancer-negative patients using ML methods on raw transmission time-series data from bilateral breast scans during subjects' rest. Approach: We use a support vector machine (SVM) with hyperparameter optimization and cross-validation to systematically explore a range of data preprocessing and feature-generation strategies. We also apply an automated ML (AutoML) framework to validate our findings. We use receiver operating characteristics and the corresponding area under the curve (AUC) to quantify classification performance. Results: For the sample group available (N = 63, 18 cancer patients), we demonstrate an AUC score of up to 93.3% for SVM classification and up to 95.0% for the AutoML classifier. Conclusions: ML offers a viable strategy for clinically relevant breast cancer diagnosis using diffuse-optical transmission measurements. The diagnostic performance of ML on raw data can outperform traditional statistical biomarkers derived from reconstructed image time series. To achieve clinically relevant performance, our ML approach requires simultaneous bilateral scanning of the breasts with spatially dense channel coverage.
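The approach described (an SVM with hyperparameter optimization, cross-validation, and AUC scoring) maps directly onto scikit-learn. A hedged sketch on synthetic stand-in data follows; the features, class balance, and hyperparameter grid are illustrative assumptions, since the paper's preprocessing is not reproduced here:

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in for per-subject feature vectors derived from the
# transmission time series (sample size mimics the paper's N = 63).
X, y = make_classification(n_samples=63, n_features=20, weights=[0.7, 0.3],
                           random_state=0)

# Inner loop: hyperparameter optimization; outer loop: unbiased AUC estimate.
svm = make_pipeline(StandardScaler(), SVC())
grid = GridSearchCV(svm, {"svc__C": [0.1, 1.0, 10.0]}, scoring="roc_auc", cv=3)
outer = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
auc_scores = cross_val_score(grid, X, y, scoring="roc_auc", cv=outer)
print(f"mean cross-validated AUC: {auc_scores.mean():.3f}")
```

Nesting the grid search inside the outer cross-validation keeps the reported AUC honest: hyperparameters are never tuned on the fold they are scored on.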
Affiliation(s)
- Nils Harnischmacher
- HTW - University of Applied Sciences Berlin, Faculty II, KI-Werkstatt, Berlin, Germany
- Erik Rodner
- HTW - University of Applied Sciences Berlin, Faculty II, KI-Werkstatt, Berlin, Germany
- Christoph H. Schmitz
- HTW - University of Applied Sciences Berlin, Faculty I - Health Electronics, Biomedical Electronics and Applied Research (BEAR) Labs, Berlin, Germany
3. Xue M, Li S, Zhu Q. Improving diffuse optical tomography imaging quality using APU-Net: an attention-based physical U-Net model. Journal of Biomedical Optics 2024;29:086001. PMID: 39070721; PMCID: PMC11272096; DOI: 10.1117/1.jbo.29.8.086001.
Abstract
Significance: Traditional diffuse optical tomography (DOT) reconstructions are hampered by image artifacts arising from factors such as DOT sources being closer to shallow lesions, poor optode-tissue coupling, tissue heterogeneity, and large high-contrast lesions lacking information in deeper regions (known as the shadowing effect). Addressing these challenges is crucial for improving the quality of DOT images and obtaining robust lesion diagnosis. Aim: We address the limitations of current DOT image reconstruction by introducing an attention-based physical U-Net (APU-Net) model to enhance the image quality of DOT reconstruction, ultimately improving lesion diagnostic accuracy. Approach: We designed an APU-Net model incorporating a contextual transformer attention module to enhance DOT reconstruction. The model was trained on simulation and phantom data, focusing on challenges such as artifact-induced distortions and lesion-shadowing effects, and was then evaluated on clinical data. Results: Transitioning from simulation and phantom data to clinical patients' data, our APU-Net model effectively reduced artifacts, with an average artifact contrast decrease of 26.83%, and improved image quality. In addition, statistical analyses revealed significant contrast improvements in the depth profile, with average contrast increases of 20.28% and 45.31% for the second and third target layers, respectively. These results highlight the efficacy of our approach in breast cancer diagnosis. Conclusions: The APU-Net model improves the image quality of DOT reconstruction by reducing DOT image artifacts and improving the target depth profile.
Affiliation(s)
- Minghao Xue
- Washington University in St. Louis, Biomedical Engineering Department, St. Louis, Missouri, United States
- Shuying Li
- Boston University, Electrical and Computer Engineering Department, Boston, Massachusetts, United States
- Quing Zhu
- Washington University in St. Louis, Biomedical Engineering Department, St. Louis, Missouri, United States
- Washington University in St. Louis, Radiology Department, St. Louis, Missouri, United States
4. Zhao Y, Li X, Zhou C, Peng H, Zheng Z, Chen J, Ding W. A review of cancer data fusion methods based on deep learning. Information Fusion 2024;108:102361. DOI: 10.1016/j.inffus.2024.102361.
5. Islam T, Hoque ME, Ullah M, Islam T, Nishu NA, Islam R. CNN-based deep learning approach for classification of invasive ductal and metastasis types of breast carcinoma. Cancer Med 2024;13:e70069. PMID: 39215495; PMCID: PMC11364780; DOI: 10.1002/cam4.70069.
Abstract
Objective: Breast cancer is one of the leading causes of cancer among women worldwide. It can be classified as invasive ductal carcinoma (IDC) or metastatic cancer. Early detection of breast cancer is challenging due to the lack of early warning signs. Generally, a mammogram is recommended by specialists for screening. Existing approaches are not accurate enough for real-time diagnostic applications and thus require better and smarter cancer diagnostic approaches. This study aims to develop a customized machine-learning framework that gives more accurate predictions for IDC and metastatic cancer classification. Methods: This work proposes a convolutional neural network (CNN) model for classifying IDC and metastatic breast cancer. The study utilized a large-scale dataset of microscopic histopathological images to automatically learn features in a hierarchical manner. Results: Using machine learning techniques significantly (15%-25%) boosts the effectiveness of determining cancer vulnerability, malignancy, and mortality. The results demonstrate excellent performance, with an average accuracy of 95% in classifying metastatic cells against benign ones and 89% in detecting IDC. Conclusions: The results suggest that the proposed model improves classification accuracy and could therefore be applied effectively to classifying IDC and metastatic cancer in comparison to other state-of-the-art models.
Affiliation(s)
- Tobibul Islam
- Department of Biomedical Engineering, Military Institute of Science and Technology, Dhaka, Bangladesh
- Md Enamul Hoque
- Department of Biomedical Engineering, Military Institute of Science and Technology, Dhaka, Bangladesh
- Mohammad Ullah
- Center for Advance Intelligent Materials, Universiti Malaysia Pahang, Kuantan, Malaysia
- Toufiqul Islam
- Department of Surgery, M Abdur Rahim Medical College, Dinajpur, Bangladesh
- Rabiul Islam
- Department of Electrical and Computer Engineering, Texas A&M University, College Station, Texas, USA
6. Ben Yedder H, Cardoen B, Shokoufi M, Golnaraghi F, Hamarneh G. Deep orthogonal multi-wavelength fusion for tomogram-free diagnosis in diffuse optical imaging. Comput Biol Med 2024;178:108676. PMID: 38878395; DOI: 10.1016/j.compbiomed.2024.108676.
Abstract
Novel portable diffuse optical tomography (DOT) devices for breast cancer lesions hold great promise for non-invasive, non-ionizing breast cancer screening. Critical to this capability is not just the identification of lesions but rather the complex problem of discriminating between malignant and benign lesions. To accurately reconstruct the highly heterogeneous tissue of a cancer lesion in healthy breast tissue using DOT, multiple wavelengths can be leveraged to maximize signal penetration while minimizing sensitivity to noise. However, these wavelength responses can overlap, capture common information, and correlate, potentially confounding reconstruction and downstream end tasks. We show that an orthogonal fusion loss regularizes multi-wavelength DOT leading to improved reconstruction and accuracy of end-to-end discrimination of malignant versus benign lesions. We further show that our raw-to-task model significantly reduces computational complexity without sacrificing accuracy, making it ideal for real-time throughput, desired in medical settings where handheld devices have severely restricted power budgets. Furthermore, our results indicate that image reconstruction is not necessary for unbiased classification of lesions with a balanced accuracy of 77% and 66% on the synthetic dataset and clinical dataset, respectively, using the raw-to-task model. Code is available at https://github.com/sfu-mial/FuseNet.
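The orthogonal fusion idea above (penalizing redundancy between per-wavelength feature embeddings) can be illustrated with a regularizer that drives pairwise cosine similarities toward zero. This is a NumPy sketch under assumed embedding shapes, not the paper's FuseNet loss:

```python
import numpy as np

def orthogonal_fusion_penalty(features, eps=1e-12):
    """Sum of mean squared pairwise cosine similarities between the
    per-wavelength embeddings (each a (batch, dim) array). Adding this
    term to the task loss pushes the wavelength branches to encode
    complementary rather than redundant information."""
    def cos(a, b):
        num = np.sum(a * b, axis=1)
        den = np.linalg.norm(a, axis=1) * np.linalg.norm(b, axis=1) + eps
        return num / den
    penalty = 0.0
    for i in range(len(features)):
        for j in range(i + 1, len(features)):
            penalty += float(np.mean(cos(features[i], features[j]) ** 2))
    return penalty
```

The penalty is zero when every pair of wavelength embeddings is orthogonal and grows toward the number of pairs as they become collinear.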
Affiliation(s)
- Hanene Ben Yedder
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, BC, Canada V5A 1S6
- Ben Cardoen
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, BC, Canada V5A 1S6
- Majid Shokoufi
- School of Mechatronic Systems Engineering, Simon Fraser University, BC, Canada V5A 1S6
- Farid Golnaraghi
- School of Mechatronic Systems Engineering, Simon Fraser University, BC, Canada V5A 1S6
- Ghassan Hamarneh
- Medical Image Analysis Lab, School of Computing Science, Simon Fraser University, BC, Canada V5A 1S6
7. Li S, Zhang M, Xue M, Zhu Q. Real-time breast lesion classification combining diffuse optical tomography frequency domain data and BI-RADS assessment. Journal of Biophotonics 2024;17:e202300483. PMID: 38430216; PMCID: PMC11065578; DOI: 10.1002/jbio.202300483.
Abstract
Ultrasound (US)-guided diffuse optical tomography (DOT) has demonstrated potential for breast cancer diagnosis, in which real-time or near real-time diagnosis with high accuracy is desired. However, DOT's relatively slow data processing and image reconstruction have hindered real-time diagnosis. Here, we propose a real-time classification scheme that combines US Breast Imaging Reporting and Data System (BI-RADS) readings and DOT frequency-domain measurements. A convolutional neural network is trained to generate malignancy probability scores from DOT measurements. Subsequently, these scores are integrated with BI-RADS assessments using a support vector machine classifier, which then provides the final diagnostic output. An area under the receiver operating characteristic curve of 0.978 is achieved in distinguishing between benign and malignant breast lesions in patient data without image reconstruction.
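The late-fusion step described above (feeding a learned DOT malignancy score together with the BI-RADS category into an SVM) can be sketched in a few lines. The feature values, labels, and kernel choice below are fabricated for illustration only:

```python
import numpy as np
from sklearn.svm import SVC

# Toy late-fusion set: each row is [CNN malignancy score, BI-RADS category].
# Values are fabricated for illustration, not from the paper.
X = np.array([[0.1, 2], [0.2, 3], [0.3, 3], [0.8, 4], [0.9, 5], [0.7, 4]])
y = np.array([0, 0, 0, 1, 1, 1])  # 0 = benign, 1 = malignant

# A linear SVM combines the two scores into one decision boundary.
fusion = SVC(kernel="linear").fit(X, y)
pred = fusion.predict([[0.15, 2], [0.85, 5]])
```

Because both inputs are already one-dimensional summaries (a network score and a radiologist grade), a simple maximum-margin combiner is enough; no image reconstruction is involved at this stage.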
Affiliation(s)
- Shuying Li
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Menghao Zhang
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Minghao Xue
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Quing Zhu
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Department of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
8. Carriero A, Groenhoff L, Vologina E, Basile P, Albera M. Deep Learning in Breast Cancer Imaging: State of the Art and Recent Advancements in Early 2024. Diagnostics (Basel) 2024;14:848. PMID: 38667493; PMCID: PMC11048882; DOI: 10.3390/diagnostics14080848.
Abstract
The rapid advancement of artificial intelligence (AI) has significantly impacted various aspects of healthcare, particularly in the medical imaging field. This review focuses on recent developments in the application of deep learning (DL) techniques to breast cancer imaging. DL models, a subset of AI algorithms inspired by human brain architecture, have demonstrated remarkable success in analyzing complex medical images, enhancing diagnostic precision, and streamlining workflows. DL models have been applied to breast cancer diagnosis via mammography, ultrasonography, and magnetic resonance imaging. Furthermore, DL-based radiomic approaches may play a role in breast cancer risk assessment, prognosis prediction, and therapeutic response monitoring. Nevertheless, several challenges have limited the widespread adoption of AI techniques in clinical practice, emphasizing the importance of rigorous validation, interpretability, and technical considerations when implementing DL solutions. By examining fundamental concepts in DL techniques applied to medical imaging and synthesizing the latest advancements and trends, this narrative review aims to provide valuable and up-to-date insights for radiologists seeking to harness the power of AI in breast cancer care.
Affiliation(s)
- Léon Groenhoff
- Radiology Department, Maggiore della Carità Hospital, 28100 Novara, Italy; (A.C.); (E.V.); (P.B.); (M.A.)
9. Liu Z, Jia J, Bai F, Ding Y, Han L, Bai G. Predicting rectal cancer tumor budding grading based on MRI and CT with multimodal deep transfer learning: A dual-center study. Heliyon 2024;10:e28769. PMID: 38590908; PMCID: PMC11000007; DOI: 10.1016/j.heliyon.2024.e28769.
Abstract
Objective: To investigate the effectiveness of a multimodal deep learning model in predicting tumor budding (TB) grading in rectal cancer (RC) patients. Materials and methods: A retrospective analysis was conducted on 355 patients with rectal adenocarcinoma from two different hospitals. Among them, 289 patients from our institution were randomly divided into an internal training cohort (n = 202) and an internal validation cohort (n = 87) in a 7:3 ratio, while an additional 66 patients from another hospital constituted an external validation cohort. Various deep learning models were constructed and compared for their performance using T1CE and CT-enhanced images, and the optimal models were selected for the creation of a multimodal fusion model. Based on univariable and multivariable logistic regression, clinical N staging and fecal occult blood were identified as independent risk factors and used to construct the clinical model. A decision-level fusion was employed to integrate these two models into an ensemble model. The predictive performance of each model was evaluated using the area under the curve (AUC), DeLong's test, calibration curves, and decision curve analysis (DCA). Gradient-weighted Class Activation Mapping (Grad-CAM) was performed for model visualization and interpretation. Results: The multimodal fusion model demonstrated superior performance compared to single-modal models, with AUC values of 0.869 (95% CI: 0.761-0.976) for the internal validation cohort and 0.848 (95% CI: 0.721-0.975) for the external validation cohort. The final ensemble model exhibited the best performance, with AUC values of 0.898 (95% CI: 0.820-0.975) for the internal validation cohort and 0.868 (95% CI: 0.768-0.968) for the external validation cohort. Conclusion: Multimodal deep learning models can effectively and non-invasively provide individualized predictions of TB grading in RC patients, offering valuable guidance for treatment selection and prognosis assessment.
Affiliation(s)
- Ziyan Liu
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Jianye Jia
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Fan Bai
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Yuxin Ding
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
- Lei Han
- Department of Medical Imaging, Huaian Hospital Affiliated to Xuzhou Medical University, Huaian, Jiangsu, China
- Genji Bai
- Department of Medical Imaging Center, The Affiliated Huaian No. 1 People's Hospital of Nanjing Medical University, Huaian, Jiangsu, China
10. Yi H, Yang R, Wang Y, Wang Y, Guo H, Cao X, Zhu S, He X. Enhanced model iteration algorithm with graph neural network for diffuse optical tomography. Biomedical Optics Express 2024;15:1910-1925. PMID: 38495688; PMCID: PMC10942675; DOI: 10.1364/boe.509775.
Abstract
Diffuse optical tomography (DOT) employs near-infrared light to reveal the optical parameters of biological tissues. Due to the strong scattering of photons in tissue and the limited surface measurements, DOT reconstruction is severely ill-posed. The Levenberg-Marquardt (LM) algorithm is a popular iterative method for DOT; however, it is computationally expensive and its reconstruction accuracy needs improvement. In this study, we propose a neural-model-based iteration algorithm that combines a graph neural network with Levenberg-Marquardt (GNNLM), using a graph data structure to represent the finite element mesh. To verify the performance of the graph neural network, two GNN variants, namely the graph convolutional network (GCN) and the graph attention network (GAT), were employed in the experiments. The results showed that GCNLM performs best in simulation experiments within the training data distribution. However, GATLM exhibits superior performance in simulation experiments outside the training data distribution and in real experiments with breast-like phantoms. This demonstrates that a GATLM trained with simulation data can generalize well to situations outside the training data distribution without transfer training, offering the possibility of providing more accurate absorption coefficient distributions in clinical practice.
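The LM iteration underlying both the classical solver and the GNNLM hybrid updates the optical parameters by solving a damped normal equation, delta = (J^T J + lambda I)^-1 J^T r. A minimal dense-matrix sketch follows; a real DOT Jacobian comes from a finite-element forward model, which is not reproduced here:

```python
import numpy as np

def lm_step(jacobian, residual, lam):
    """One Levenberg-Marquardt update: solve (J^T J + lam*I) d = J^T r,
    where r = (measured - modeled) data and lam is the damping factor.
    The parameter update is mu <- mu + d."""
    JtJ = jacobian.T @ jacobian
    rhs = jacobian.T @ residual
    return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), rhs)
```

As lam approaches 0 this reduces to the Gauss-Newton step, while a large lam shrinks the update toward a scaled gradient step, which is what keeps the iteration stable on ill-posed problems like DOT.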
Affiliation(s)
- Huangjian Yi
- School of Information Sciences and Technology, Northwest University, Xi’an, Shaanxi 710069, China
- The Xi’an Key Laboratory of Radiomics and Intelligent Perception, No. 1 Xuefu Avenue, 710127 Xi’an, Shaanxi, China
- Ruigang Yang
- School of Information Sciences and Technology, Northwest University, Xi’an, Shaanxi 710069, China
- The Xi’an Key Laboratory of Radiomics and Intelligent Perception, No. 1 Xuefu Avenue, 710127 Xi’an, Shaanxi, China
- Yishuo Wang
- School of Information Sciences and Technology, Northwest University, Xi’an, Shaanxi 710069, China
- The Xi’an Key Laboratory of Radiomics and Intelligent Perception, No. 1 Xuefu Avenue, 710127 Xi’an, Shaanxi, China
- Yihan Wang
- School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710026, China
- Hongbo Guo
- School of Information Sciences and Technology, Northwest University, Xi’an, Shaanxi 710069, China
- The Xi’an Key Laboratory of Radiomics and Intelligent Perception, No. 1 Xuefu Avenue, 710127 Xi’an, Shaanxi, China
- Xu Cao
- School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710026, China
- Shouping Zhu
- School of Life Science and Technology, Xidian University, Xi’an, Shaanxi 710026, China
- Xiaowei He
- School of Information Sciences and Technology, Northwest University, Xi’an, Shaanxi 710069, China
- The Xi’an Key Laboratory of Radiomics and Intelligent Perception, No. 1 Xuefu Avenue, 710127 Xi’an, Shaanxi, China
11. Oyelade ON, Irunokhai EA, Wang H. A twin convolutional neural network with hybrid binary optimizer for multimodal breast cancer digital image classification. Sci Rep 2024;14:692. PMID: 38184742; PMCID: PMC10771515; DOI: 10.1038/s41598-024-51329-8.
Abstract
There is wide application of deep learning techniques to unimodal medical image analysis, with significant classification accuracy observed. However, real-world diagnosis of some chronic diseases such as breast cancer often requires multimodal data streams with different modalities of visual and textual content. Mammography, magnetic resonance imaging (MRI), and image-guided breast biopsy represent a few of the multimodal visual streams considered by physicians in isolating cases of breast cancer. Unfortunately, most studies applying deep learning techniques to classification problems in digital breast images have narrowed their scope to unimodal samples. This is understandable considering the challenging nature of multimodal image abnormality classification, where the fusion of high-dimensional heterogeneous learned features needs to be projected into a common representation space. This paper presents a novel deep learning approach combining a dual/twin convolutional neural network (TwinCNN) framework to address the challenge of breast cancer image classification from multiple modalities. First, modality-based feature learning is achieved by extracting both low- and high-level features using the networks embedded in TwinCNN. Secondly, to address the notorious problem of high dimensionality associated with the extracted features, a binary optimization method is adapted to effectively eliminate non-discriminant features from the search space. Furthermore, a novel feature-fusion method is applied that computationally leverages the ground-truth and predicted labels for each sample to enable multimodality classification. To evaluate the proposed method, digital mammography images and digital histopathology breast biopsy samples from the benchmark MIAS and BreakHis datasets, respectively, were used. Experimental results showed classification accuracy and area under the curve (AUC) for the single modalities of 0.755 and 0.861871 for histology, and 0.791 and 0.638 for mammography. The study also investigated the classification accuracy resulting from the fused-feature method, obtaining 0.977, 0.913, and 0.667 for histology, mammography, and multimodality, respectively. The findings confirm that multimodal image classification based on a combination of image features and predicted labels improves performance. In addition, the study shows that feature dimensionality reduction based on a binary optimizer supports the elimination of non-discriminant features capable of bottlenecking the classifier.
Affiliation(s)
- Olaide N Oyelade
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT9 SBN, UK
- Hui Wang
- School of Electronics, Electrical Engineering and Computer Science, Queen's University Belfast, Belfast, BT9 SBN, UK
12. Xue M, Zhang M, Li S, Zou Y, Zhu Q. Automated pipeline for breast cancer diagnosis using US assisted diffuse optical tomography. Biomedical Optics Express 2023;14:6072-6087. PMID: 38021111; PMCID: PMC10659805; DOI: 10.1364/boe.502244.
Abstract
Ultrasound (US)-guided diffuse optical tomography (DOT) is a portable and non-invasive imaging modality for breast cancer diagnosis and treatment-response monitoring. However, DOT data pre-processing and image reconstruction often require labor-intensive manual processing, which hampers real-time diagnosis. In this study, we aim to provide an automated US-assisted DOT pre-processing, imaging, and diagnosis pipeline to achieve near real-time diagnosis. We have developed an automated DOT pre-processing method including motion detection, mismatch classification using a deep-learning approach, and outlier removal. US lesion information needed for DOT reconstruction was extracted by a semi-automated lesion segmentation approach combined with a US reading algorithm. A deep learning model was used to evaluate the quality of the reconstructed DOT images, and a two-step deep-learning model developed earlier was implemented to provide the final diagnosis based on US imaging features together with DOT measurements and imaging results. The presented US-assisted DOT pipeline accurately processed the DOT measurements and reconstruction and reduced the procedure time to 2 to 3 minutes while maintaining classification results comparable to the manually processed dataset.
Affiliation(s)
- Minghao Xue
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Menghao Zhang
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Shuying Li
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Yun Zou
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Quing Zhu
- Department of Biomedical Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Department of Electrical & Systems Engineering, Washington University in St. Louis, St. Louis, MO 63130, USA
- Department of Radiology, Washington University School of Medicine, St. Louis, MO 63110, USA
13. Nouizi F, Kwong TC, Turong B, Nikkhah D, Sampathkumaran U, Gulsen G. Fast ICCD-based temperature modulated fluorescence tomography. Applied Optics 2023;62:7420-7430. PMID: 37855510; PMCID: PMC11396546; DOI: 10.1364/ao.499281.
Abstract
Fluorescence tomography (FT) has become a powerful preclinical imaging modality with great potential for several clinical applications. Although it has superior sensitivity and utilizes low-cost instrumentation, the highly scattering nature of bio-tissue makes FT in thick samples challenging, resulting in poor resolution and low quantitative accuracy. To overcome the limitations of FT, we previously introduced a novel method, termed temperature modulated fluorescence tomography (TMFT), which is based on two key elements: (1) temperature-sensitive fluorescent agents (ThermoDots) and (2) high-intensity focused ultrasound (HIFU). The fluorescence emission of ThermoDots increases up to a hundredfold with only a several-degree temperature elevation. The exceptional and reversible response of these ThermoDots enables their modulation, which effectively allows their localization using the HIFU. Their localization is then used as functional a priori information during the FT image reconstruction process to resolve their distribution with higher spatial resolution. The previous version of the TMFT system was based on a cooled CCD camera operating in a step-and-shoot mode, which necessitated a long total imaging time even for a small selected region of interest (ROI). In this paper, we present the latest version of our TMFT technology, which uses a much faster continuous HIFU scanning mode based on an intensified CCD (ICCD) camera. To the best of our knowledge, this new version can capture the whole field of view (FOV) of 50×30 mm² at once and reduces the total imaging time to 30 min, while preserving the same high resolution (∼1.3 mm) and superior quantitative accuracy (<7% error) as the previous versions. Therefore, this new method is an important step toward the utilization of TMFT for preclinical imaging.
14. Zhang M, Li S, Xue M, Zhu Q. Two-stage classification strategy for breast cancer diagnosis using ultrasound-guided diffuse optical tomography and deep learning. Journal of Biomedical Optics 2023;28:086002. PMID: 37638108; PMCID: PMC10457211; DOI: 10.1117/1.jbo.28.8.086002.
Abstract
Significance: Ultrasound (US)-guided diffuse optical tomography (DOT) has demonstrated great potential for breast cancer diagnosis, in which real-time or near real-time diagnosis with high accuracy is desired. Aim: We aim to use US-guided DOT to achieve automated, fast, and accurate classification of breast lesions. Approach: We propose a two-stage classification strategy with deep learning. In the first stage, US images and histograms created from DOT perturbation measurements are combined to predict benign lesions. The non-benign suspicious lesions are then passed to the second stage, which combines US image features, DOT histogram features, and 3D DOT reconstructed images for the final diagnosis. Results: The first stage alone identified 73.0% of benign cases without image reconstruction. In distinguishing between benign and malignant breast lesions in patient data, the two-stage classification approach achieved an area under the receiver operating characteristic curve of 0.946, outperforming the diagnoses of all single-modality models and of a single-stage classification model that combines all US images, DOT histogram, and imaging features. Conclusions: The proposed two-stage classification strategy achieves better classification accuracy than single-modality-only models and a single-stage classification model that combines all features. It can potentially distinguish breast cancers from benign lesions in near real-time.
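The triage logic of the two-stage strategy (commit to "benign" when the cheap first-stage score is confident, otherwise defer to the heavier second-stage model) can be sketched as follows; the threshold value and score names are illustrative assumptions, not the paper's:

```python
import numpy as np

def two_stage_predict(stage1_scores, stage2_scores, benign_threshold=0.1):
    """Return final malignancy scores and a mask of cases resolved at
    stage 1. Cases whose stage-1 score falls below the threshold are
    called benign immediately; only the rest use the stage-2 score,
    so the expensive model (and image reconstruction) runs on fewer cases."""
    s1 = np.asarray(stage1_scores, float)
    s2 = np.asarray(stage2_scores, float)
    resolved_at_stage1 = s1 < benign_threshold
    final = np.where(resolved_at_stage1, s1, s2)
    return final, resolved_at_stage1
```

In practice the threshold is chosen for high sensitivity, so that almost no malignant case is ruled out at the first stage.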
Affiliation(s)
- Menghao Zhang
- Washington University in St. Louis, Department of Electrical and Systems Engineering, St. Louis, Missouri, United States
- Shuying Li
- Washington University in St. Louis, Department of Biomedical Engineering, St. Louis, Missouri, United States
- Minghao Xue
- Washington University in St. Louis, Department of Biomedical Engineering, St. Louis, Missouri, United States
- Quing Zhu
- Washington University in St. Louis, Department of Electrical and Systems Engineering, St. Louis, Missouri, United States
- Washington University in St. Louis, Department of Biomedical Engineering, St. Louis, Missouri, United States
- Washington University School of Medicine, Department of Radiology, St. Louis, Missouri, United States