1. Liu X, Zhang R, Chen J, Qin S, Chen L, Yi H, Liu X, Li G, Liu G. Computer-aided diagnosis tool utilizing a deep learning model for preoperative T-staging of rectal cancer based on three-dimensional endorectal ultrasound. Abdom Radiol (NY) 2025. PMID: 40304753. DOI: 10.1007/s00261-025-04966-0.
Abstract
BACKGROUND: The prognosis and treatment outcomes for patients with rectal cancer depend critically on an accurate and comprehensive preoperative evaluation. Three-dimensional endorectal ultrasound (3D-ERUS) has demonstrated high accuracy in the T staging of rectal cancer. We therefore aimed to develop a computer-aided diagnosis (CAD) tool using a deep learning model for preoperative T-staging of rectal cancer with 3D-ERUS.
METHODS: We retrospectively analyzed the data of 216 rectal cancer patients who underwent 3D-ERUS. The patients were randomly assigned to a training cohort (n = 156) or a testing cohort (n = 60). Radiologists interpreted the 3D-ERUS images of the testing cohort with and without the CAD tool. The diagnostic performance of the CAD tool and its impact on the radiologists' interpretations were evaluated.
RESULTS: The CAD tool demonstrated high diagnostic efficacy for rectal cancer tumors of all T stages, with the best diagnostic performance achieved for T1-stage tumors (AUC, 0.85; 95% CI, 0.73-0.93). With assistance from the CAD tool, the AUC for T1 tumors improved from 0.76 (95% CI, 0.63-0.86) to 0.80 (95% CI, 0.68-0.94) (P = 0.020) for junior radiologist 2. For junior radiologist 1, the AUC improved from 0.61 (95% CI, 0.48-0.73) to 0.79 (95% CI, 0.66-0.88) (P = 0.013) for T2 tumors and from 0.73 (95% CI, 0.60-0.84) to 0.84 (95% CI, 0.72-0.92) (P = 0.038) for T3 tumors. Diagnostic consistency (κ value) also improved from 0.31 to 0.64 (P = 0.005) for the junior radiologists and from 0.52 to 0.66 (P = 0.005) for the senior radiologists.
CONCLUSION: A CAD tool utilizing a deep learning model based on 3D-ERUS images showed strong performance in T staging of rectal cancer. This tool could improve the performance of, and consistency between, radiologists in preoperatively assessing rectal cancer patients.
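
Editor's note: the inter-reader consistency figures above are Cohen's κ values. As a point of reference for how such agreement statistics are derived, here is a minimal sketch in plain Python; the stage labels below are hypothetical illustrations, not study data:

```python
from collections import Counter

def cohens_kappa(reader_a, reader_b):
    """Cohen's kappa for two raters labeling the same cases."""
    assert len(reader_a) == len(reader_b)
    n = len(reader_a)
    observed = sum(a == b for a, b in zip(reader_a, reader_b)) / n
    # Expected chance agreement from each rater's marginal label frequencies
    freq_a, freq_b = Counter(reader_a), Counter(reader_b)
    expected = sum(freq_a[k] * freq_b[k] for k in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical T-stage reads for ten cases
junior = ["T1", "T2", "T2", "T3", "T1", "T3", "T2", "T3", "T1", "T2"]
senior = ["T1", "T2", "T3", "T3", "T1", "T3", "T2", "T2", "T1", "T2"]
print(f"kappa = {cohens_kappa(junior, senior):.2f}")  # ~0.70 here
```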
Affiliations
- Xiaoyin Liu: Department of Medical Ultrasonics, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Ruifei Zhang: School of Computer Science, Sun Yat-sen University, Guangzhou, China
- Junzhao Chen: Department of Medical Ultrasonics, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Si Qin: Department of Medical Ultrasonics, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Limei Chen: Department of Medical Ultrasonics, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Hang Yi: Department of Medical Ultrasonics, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Xiaowen Liu: Department of Medical Ultrasonics, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Guanbin Li: School of Computer Science, Sun Yat-sen University, Guangzhou, China
- Guangjian Liu: Department of Medical Ultrasonics, Biomedical Innovation Center, The Sixth Affiliated Hospital, Sun Yat-sen University, Guangzhou, China

2. Tao X, Cao Y, Jiang Y, Wu X, Yan D, Xue W, Zhuang S, Yang X, Huang R, Zhang J, Ni D. Enhancing lesion detection in automated breast ultrasound using unsupervised multi-view contrastive learning with 3D DETR. Med Image Anal 2025; 101:103466. PMID: 39854815. DOI: 10.1016/j.media.2025.103466.
Abstract
The inherent variability of lesions poses challenges in leveraging AI in 3D automated breast ultrasound (ABUS) for lesion detection. Traditional methods based on single scans have fallen short compared to comprehensive evaluations by experienced sonologists using multiple scans. To address this, our study introduces an innovative approach combining the multi-view co-attention mechanism (MCAM) with unsupervised contrastive learning. Rooted in the detection transformer (DETR) architecture, our model employs a one-to-many matching strategy, significantly boosting training efficiency and lesion recall metrics. The model integrates MCAM within the decoder, facilitating the interpretation of lesion data across diverse views. Simultaneously, unsupervised multi-view contrastive learning (UMCL) aligns features consistently across scans, improving detection performance. When tested on two multi-center datasets comprising 1509 patients, our approach outperforms existing state-of-the-art 3D detection models. Notably, our model achieves a 90.3% cancer detection rate with a false positive per image (FPPI) rate of 0.5 on the external validation dataset. This surpasses junior sonologists and matches the performance of seasoned experts.
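
Editor's note: the 90.3% detection rate at 0.5 false positives per image (FPPI) quoted above corresponds to one operating point on the detector's score-threshold sweep. A minimal sketch of that bookkeeping (NumPy, with synthetic match flags; this is not the authors' evaluation code):

```python
import numpy as np

def sensitivity_at_fppi(scores, is_tp, n_images, n_lesions, target_fppi=0.5):
    """Sweep score thresholds; return sensitivity at the loosest threshold
    whose false-positive-per-image rate stays within target_fppi."""
    order = np.argsort(scores)[::-1]          # descending confidence
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    fppi = fp / n_images
    ok = fppi <= target_fppi
    if not ok.any():
        return 0.0
    idx = np.where(ok)[0][-1]                 # last point still within budget
    return tp[idx] / n_lesions

rng = np.random.default_rng(0)
is_tp = rng.random(200) < 0.4                 # synthetic TP/FP flags per candidate
scores = rng.random(200) + 0.5 * is_tp        # true detections tend to score higher
print(sensitivity_at_fppi(scores, is_tp, n_images=100, n_lesions=int(is_tp.sum())))
```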
Affiliations
- Xing Tao: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Yan Cao: Shenzhen RayShape Medical Technology Co., Ltd, Shenzhen, China
- Yanhui Jiang: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Xiaoxi Wu: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Dan Yan: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Wen Xue: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Shulian Zhuang: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Xin Yang: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Ruobing Huang: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China
- Jianxing Zhang: Department of Ultrasound, Remote Consultation Center of ABUS, The Second Affiliated Hospital, Guangzhou University of Chinese Medicine, Guangzhou, China
- Dong Ni: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Lab, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China; School of Biomedical Engineering and Informatics, Nanjing Medical University, Nanjing 211166, China

3. MohammadiNasab P, Khakbaz A, Behnam H, Kozegar E, Soryani M. A multi-task self-supervised approach for mass detection in automated breast ultrasound using double attention recurrent residual U-Net. Comput Biol Med 2025; 188:109829. PMID: 39983360. DOI: 10.1016/j.compbiomed.2025.109829.
Abstract
Breast cancer is the most common and lethal cancer among women worldwide. Early detection using medical imaging technologies can significantly improve treatment outcomes. Automated breast ultrasound (ABUS) offers advantages over traditional mammography and has recently gained considerable attention. However, reviewing hundreds of ABUS slices imposes a high workload on radiologists, increasing review time and potentially leading to diagnostic errors. Consequently, there is a strong need for efficient computer-aided detection (CADe) systems. In recent years, researchers have proposed deep learning-based CADe systems to enhance mass detection accuracy. However, these methods are highly dependent on the number of training samples and often struggle to balance detection accuracy with the false positive rate. To reduce the workload for radiologists and achieve high detection sensitivity with a low false positive rate, this study introduces a novel CADe system based on a self-supervised framework that leverages unannotated ABUS datasets to improve detection results. The proposed framework is integrated into an innovative 3-D convolutional neural network called DATTR2U-Net, which employs a multi-task learning approach to train inpainting and denoising pretext tasks simultaneously. A fully convolutional network is then attached to the DATTR2U-Net for the detection task. The proposed method is validated on the public TDSC-ABUS dataset, demonstrating promising detection results with a recall of 0.7963 and a false positive rate of 5.67 per volume, which signifies its potential to improve detection accuracy while reducing the workload for radiologists. The code is available at: github.com/Pooryamn/SSL_ABUS.
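
Editor's note: the general idea behind joint inpainting-plus-denoising pretraining is a shared encoder with one head per pretext task and a summed reconstruction loss. The PyTorch sketch below illustrates only that idea; the layer sizes, corruption scheme, and equal loss weighting are illustrative assumptions, not the DATTR2U-Net architecture:

```python
import torch
import torch.nn as nn

class TinyMultiTaskSSL(nn.Module):
    # Shared 3-D encoder with separate decoders for the two pretext tasks
    def __init__(self, ch=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(),
                                 nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU())
        self.dec_inpaint = nn.Conv3d(ch, 1, 3, padding=1)
        self.dec_denoise = nn.Conv3d(ch, 1, 3, padding=1)

    def forward(self, x):
        h = self.enc(x)
        return self.dec_inpaint(h), self.dec_denoise(h)

model = TinyMultiTaskSSL()
clean = torch.rand(2, 1, 16, 32, 32)                        # unlabeled volume patches
masked = clean * (torch.rand_like(clean) > 0.25).float()    # random occlusion
noisy = clean + 0.1 * torch.randn_like(clean)               # additive noise

out_inp, _ = model(masked)
_, out_den = model(noisy)
loss = nn.functional.mse_loss(out_inp, clean) + nn.functional.mse_loss(out_den, clean)
loss.backward()   # one self-supervised pretraining step (optimizer omitted)
```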
Affiliations
- Poorya MohammadiNasab: School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran; Research Center for Clinical AI-Research in Omics and Medical Data Science (CAROM), Department of Medicine, Danube Private University, Krems, Austria
- Atousa Khakbaz: School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Hamid Behnam: School of Electrical Engineering, Iran University of Science and Technology, Tehran, Iran
- Ehsan Kozegar: Faculty of Technology and Engineering-East of Guilan, University of Guilan, Rudsar, Guilan, Iran
- Mohsen Soryani: School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran

4. Luo L, Wang X, Lin Y, Ma X, Tan A, Chan R, Vardhanabhuti V, Chu WC, Cheng KT, Chen H. Deep Learning in Breast Cancer Imaging: A Decade of Progress and Future Directions. IEEE Rev Biomed Eng 2025; 18:130-151. PMID: 38265911. DOI: 10.1109/rbme.2024.3357877.
Abstract
Breast cancer has reached the highest incidence rate worldwide among all malignancies since 2020. Breast imaging plays a significant role in early diagnosis and intervention to improve the outcome of breast cancer patients. In the past decade, deep learning has shown remarkable progress in breast cancer imaging analysis, holding great promise in interpreting the rich information and complex context of breast imaging modalities. Considering the rapid improvement in deep learning technology and the increasing severity of breast cancer, it is critical to summarize past progress and identify future challenges to be addressed. This paper provides an extensive review of deep learning-based breast cancer imaging research, covering studies on mammograms, ultrasound, magnetic resonance imaging, and digital pathology images over the past decade. The major deep learning methods and applications on imaging-based screening, diagnosis, treatment response prediction, and prognosis are elaborated and discussed. Drawn from the findings of this survey, we present a comprehensive discussion of the challenges and potential avenues for future research in deep learning-based breast cancer imaging.

5. Gathright R, Mejia I, Gonzalez JM, Hernandez Torres SI, Berard D, Snider EJ. Overview of Wearable Healthcare Devices for Clinical Decision Support in the Prehospital Setting. Sensors (Basel) 2024; 24:8204. PMID: 39771939. PMCID: PMC11679471. DOI: 10.3390/s24248204.
Abstract
Prehospital medical care is a major challenge in both civilian and military settings: resources are limited, yet critical triage and treatment decisions must be made rapidly. Prehospital medicine is further complicated during mass casualty events or in remote applications where more extensive medical treatment must be monitored. On the future battlefield, where air superiority will be contested, prolonged field care is anticipated to extend to as much as 72 h in a prehospital environment. Traditional medical monitoring is not practical in these situations, and wearable sensor technology may therefore help support prehospital medicine. However, sensors alone are not sufficient in the prehospital setting, where limited personnel without specialized medical training must make critical decisions based on physiological signals. Machine learning-based clinical decision support systems can instead be utilized to interpret these signals for diagnosing injuries, making triage decisions, or driving treatments. Here, we summarize the challenges of the prehospital medical setting and review the suitability of wearable sensor technology for this environment, including its use with medical decision support for triage or treatment guidance. Further, we discuss recommendations for developing wearable healthcare devices and medical decision support technology to better serve the prehospital medical setting. With further design improvement and integration with decision support tools, wearable healthcare devices have the potential to simplify and improve medical care in the challenging prehospital environment.
Affiliations
- Eric J. Snider: Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA

6. Li Y, Chen G, Wang G, Zhou Z, An S, Dai S, Jin Y, Zhang C, Zhang M, Yu F. Dominating Alzheimer's disease diagnosis with deep learning on sMRI and DTI-MD. Front Neurol 2024; 15:1444795. PMID: 39211812. PMCID: PMC11358067. DOI: 10.3389/fneur.2024.1444795.
Abstract
Background: Alzheimer's disease (AD) is a progressive and irreversible neurodegenerative disorder that has become one of the major health concerns for the elderly. Computer-aided AD diagnosis can assist doctors in quickly and accurately determining patients' severity and affected regions.
Methods: In this paper, we propose a method called MADNet for computer-aided AD diagnosis using multimodal datasets. The method selects ResNet-10 as the backbone network, with dual-branch parallel extraction of discriminative features for AD classification. It incorporates long-range dependency modeling using attention scores in the decision-making layer and fuses the features based on their importance across modalities. To validate the effectiveness of our proposed multimodal classification method, we construct a multimodal dataset based on the publicly available ADNI dataset and a collected XWNI dataset, which includes examples of AD, Mild Cognitive Impairment (MCI), and Cognitively Normal (CN) cases.
Results: On this dataset, we conduct binary classification experiments of AD vs. CN and MCI vs. CN, and demonstrate that our proposed method outperforms other traditional single-modal deep learning models. Furthermore, this conclusion confirms the necessity of using multimodal sMRI and DTI data for computer-aided AD diagnosis, as these two modalities complement and convey information to each other. We visualize the feature maps extracted by MADNet using Grad-CAM, generating heatmaps that guide doctors' attention to important regions in patients' sMRI that play a crucial role in the development of AD, establishing trust between human experts and machine learning models.
Conclusion: We propose a simple yet effective multimodal deep convolutional neural network model, MADNet, that outperforms traditional deep learning methods that use a single-modality dataset for AD diagnosis.
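
Editor's note: "fusing features based on their importance across modalities" generally means scoring each modality's feature vector, normalizing the scores, and taking a weighted sum. The PyTorch sketch below shows that pattern only; the feature dimensions, scoring head, and two-class output are illustrative assumptions, not the MADNet design:

```python
import torch
import torch.nn as nn

class AttentionFusion(nn.Module):
    """Score each modality's feature vector, softmax the scores,
    and fuse by importance-weighted sum before classification."""
    def __init__(self, dim=256, n_classes=2):
        super().__init__()
        self.score = nn.Linear(dim, 1)
        self.head = nn.Linear(dim, n_classes)

    def forward(self, f_smri, f_dti):
        feats = torch.stack([f_smri, f_dti], dim=1)     # (B, 2, dim)
        attn = torch.softmax(self.score(feats), dim=1)  # (B, 2, 1) importance
        fused = (attn * feats).sum(dim=1)               # (B, dim)
        return self.head(fused), attn.squeeze(-1)

f_smri, f_dti = torch.randn(4, 256), torch.randn(4, 256)  # stand-in branch outputs
logits, weights = AttentionFusion()(f_smri, f_dti)
print(logits.shape, weights[0])   # per-sample modality importances sum to 1
```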
Affiliations
- Yuxia Li: Department of Neurology, Tangshan Central Hospital, Hebei, China
- Guanqun Chen: Department of Neurology, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Guoxin Wang: College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China
- Zhiyi Zhou: JD Health International Inc., Beijing, China
- Shan An: JD Health International Inc., Beijing, China
- Shipeng Dai: College of Science, Northeastern University, Shenyang, China
- Yuxin Jin: JD Health International Inc., Beijing, China
- Chao Zhang: JD Health International Inc., Beijing, China
- Mingkai Zhang: Department of Neurology, XuanWu Hospital of Capital Medical University, Beijing, China
- Feng Yu: College of Biomedical Engineering and Instrument Science, Zhejiang University, Hangzhou, China

7. Barekatrezaei S, Kozegar E, Salamati M, Soryani M. Mass detection in automated three dimensional breast ultrasound using cascaded convolutional neural networks. Phys Med 2024; 124:103433. PMID: 39002423. DOI: 10.1016/j.ejmp.2024.103433.
Abstract
PURPOSE: Early detection of breast cancer has a significant effect on reducing its mortality rate. For this purpose, automated three-dimensional breast ultrasound (3-D ABUS) has recently been used alongside mammography. The 3-D volume produced in this imaging system includes many slices, and the radiologist must review all of them to find a mass, a time-consuming task with a high probability of mistakes. Therefore, many computer-aided detection (CADe) systems have been developed to assist radiologists in this task. In this paper, we propose a novel CADe system for mass detection in 3-D ABUS images.
METHODS: The proposed system includes two cascaded convolutional neural networks. The goal of the first network is to achieve the highest possible sensitivity, and the second network's goal is to reduce false positives while maintaining high sensitivity. In both networks, an improved version of the 3-D U-Net architecture is utilized in which two types of modified Inception modules are used in the encoder section. In the second network, new attention units are also added to the skip connections, which receive the results of the first network as saliency maps.
RESULTS: The system was evaluated on a dataset containing 60 3-D ABUS volumes from 43 patients with 55 masses. A sensitivity of 91.48% and a mean of 8.85 false positives per patient were achieved.
CONCLUSIONS: The suggested mass detection system is fully automatic, without any user interaction. The results indicate that the sensitivity and mean false positives per patient of the CADe system outperform competing techniques.
Affiliations
- Sepideh Barekatrezaei: School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran
- Ehsan Kozegar: Department of Computer Engineering and Engineering Sciences, Faculty of Technology and Engineering, University of Guilan, Rudsar-Vajargah, Guilan, Iran
- Masoumeh Salamati: Department of Reproductive Imaging, Reproductive Biomedicine Research Center, Royan Institute for Reproductive Biomedicine, ACECR, Tehran, Iran
- Mohsen Soryani: School of Computer Engineering, Iran University of Science and Technology, Tehran, Iran

8. Hernandez Torres SI, Holland L, Edwards TH, Venn EC, Snider EJ. Deep learning models for interpretation of point of care ultrasound in military working dogs. Front Vet Sci 2024; 11:1374890. PMID: 38903685. PMCID: PMC11187302. DOI: 10.3389/fvets.2024.1374890.
Abstract
Introduction: Military working dogs (MWDs) are essential for military operations in a wide range of missions. With this pivotal role, MWDs can become casualties requiring specialized veterinary care that may not always be available far forward on the battlefield. Some injuries, such as pneumothorax, hemothorax, or abdominal hemorrhage, can be diagnosed using point of care ultrasound (POCUS) such as the Global FAST® exam. This presents a unique opportunity for artificial intelligence (AI) to aid in the interpretation of ultrasound images. In this article, deep learning classification neural networks were developed for POCUS assessment in MWDs.
Methods: Images were collected in five MWDs under general anesthesia or deep sedation for all scan points in the Global FAST® exam. For representative injuries, a cadaver model was used from which positive and negative injury images were captured. A total of 327 ultrasound clips were captured and split across scan points for training three different AI network architectures: MobileNetV2, DarkNet-19, and ShrapML. Gradient class activation mapping (GradCAM) overlays were generated for representative images to better explain AI predictions.
Results: Performance of the AI models reached over 82% accuracy for all scan points. The model with the highest performance was trained with the MobileNetV2 network for the cystocolic scan point, achieving 99.8% accuracy. Across all trained networks, the diaphragmatic hepatorenal scan point had the best overall performance. However, GradCAM overlays showed that the models with the highest accuracy, like MobileNetV2, were not always identifying relevant features. Conversely, the GradCAM heatmaps for ShrapML showed general agreement with the regions most indicative of fluid accumulation.
Discussion: Overall, the AI models developed can automate POCUS predictions in MWDs. Preliminarily, ShrapML had the strongest performance and prediction rate paired with accurate tracking of fluid accumulation sites, making it the most suitable option for eventual real-time deployment with ultrasound systems. Further integration of this technology with imaging technologies will expand the use of POCUS-based triage of MWDs.
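
Editor's note: GradCAM overlays like those described here weight a convolutional layer's activation maps by the spatially pooled gradients of the predicted class. A minimal, generic PyTorch sketch of the computation follows; the toy CNN and random input are stand-ins, not the study's trained MobileNetV2:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCNN(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU())
        self.fc = nn.Linear(16, n_classes)

    def forward(self, x):
        a = self.conv(x)                       # feature maps used for Grad-CAM
        z = a.mean(dim=(2, 3))                 # global average pooling
        return self.fc(z), a

model = TinyCNN().eval()
x = torch.randn(1, 1, 128, 128)                # stand-in ultrasound frame
logits, acts = model(x)
acts.retain_grad()                             # keep the gradient of the maps
logits[0, int(logits.argmax())].backward()     # gradient of the top class score

w = acts.grad.mean(dim=(2, 3), keepdim=True)   # pooled gradients as channel weights
cam = F.relu((w * acts).sum(dim=1, keepdim=True))
cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)   # heatmap in [0, 1]
print(cam.shape)                               # (1, 1, 128, 128)
```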
Affiliations
- Sofia I. Hernandez Torres: Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Lawrence Holland: Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Thomas H. Edwards: Hemorrhage Control and Vascular Dysfunction Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States; Texas A&M University, School of Veterinary Medicine, College Station, TX, United States
- Emilee C. Venn: Veterinary Support Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States
- Eric J. Snider: Organ Support and Automation Technologies Group, U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX, United States

9. Vijayan S, Panneerselvam R, Roshini TV. Hybrid machine learning-based breast cancer segmentation framework using ultrasound images with optimal weighted features. Cell Biochem Funct 2024; 42:e4054. PMID: 38783623. DOI: 10.1002/cbf.4054.
Abstract
Breast cancer is one of the most dangerous conditions encountered in clinical practice, yet existing diagnostic techniques can be complicated, expensive, and inaccurate. Many interdisciplinary, computerized systems have recently been created to prevent human error in both quantification and diagnosis. Ultrasonography is a crucial imaging technique for cancer detection, so it is essential to develop a system that enables the healthcare sector to detect breast cancer rapidly and effectively. Machine learning is widely employed in the categorization of breast cancer patterns because of its ability to identify crucial features in complicated datasets, but the performance of machine learning models is limited by the absence of an effective feature enhancement strategy, and several issues with traditional breast cancer detection methods remain unaddressed. Thus, a novel breast cancer detection model is designed based on machine learning approaches and ultrasound images. First, ultrasound images acquired from benchmark resources are preprocessed using filtering and contrast enhancement. The preprocessed images are then segmented using Fuzzy C-Means, active contour, and watershed algorithms. Next, pixels are selected from the segmented images by the developed hybrid Conglomerated Aphid with Galactic Swarm Optimization (CAGSO) model to obtain the final segmented pixels, which are fed into the feature extraction phase to obtain shape and textural features. The acquired features then pass through an optimal weighted feature selection phase, in which their weights are tuned by the developed CAGSO. Finally, the optimal weighted features are offered to the breast cancer detection phase. In experimental analysis, the developed breast cancer detection model secured an enhanced performance rate compared with classical approaches.
Affiliations
- Sudharsana Vijayan: Department of Electronics and Communication Engineering, College of Engineering and Technology, SRM Institute of Science and Technology Kattankulathur, Chengalpattu, Tamil Nadu, India
- Radhika Panneerselvam: Department of Electronics and Communication Engineering, College of Engineering and Technology, SRM Institute of Science and Technology Kattankulathur, Chengalpattu, Tamil Nadu, India
- Thundi Valappil Roshini: Department of Electronics and Communication Engineering, Vimal Jyothi Engineering College, Chemperi, Kannur, Kerala, India

10. Geng S, Guo P, Wang J, Zhang Y, Shi Y, Li X, Cao M, Song Y, Zhang H, Zhang Z, Zhang K, Song H, Shi J, Liu J. Ultrasensitive Optical Detection and Elimination of Residual Microtumors with a Postoperative Implantable Hydrogel Sensor for Preventing Cancer Recurrence. Adv Mater 2024; 36:e2307923. PMID: 38174840. DOI: 10.1002/adma.202307923.
Abstract
In vivo optical imaging of trace biomarkers in residual microtumors holds significant promise for cancer prognosis but poses a formidable challenge. Here, a novel hydrogel sensor is designed for ultrasensitive and specific imaging of an elusive biomarker. This hydrogel sensor seamlessly integrates a molecular beacon nanoprobe with fibroblasts, offering both high tissue retention capability and an impressive signal-to-noise ratio for imaging. Signal amplification is accomplished through exonuclease I-mediated biomarker recycling. The resulting hydrogel sensor sensitively detects the biomarker carcinoembryonic antigen with a detection limit of 1.8 pg/mL in test tubes. Moreover, it successfully identifies residual cancer nodules with a median diameter of less than 2 mm in mice bearing partially removed primary triple-negative breast carcinomas (4T1). Notably, this hydrogel sensor proves effective for the sensitive diagnosis of invasive tumors in post-surgical mice with infiltrating 4T1 cells, leveraging the role of fibroblasts in locally enriching tumor cells. Furthermore, the residual microtumor is rapidly ablated by photothermal treatment with a polydopamine-based nanoprobe under visual guidance, achieving ≈100% suppression of tumor recurrence and lung metastasis. This work offers a promising alternative strategy for visually detecting residual microtumors, potentially enhancing the prognosis of cancer patients following surgical interventions.
Affiliations
- Shizhen Geng: School of Pharmaceutical Sciences, Zhengzhou University, Zhengzhou, 450001, China
- Pengke Guo: School of Pharmaceutical Sciences, Zhengzhou University, Zhengzhou, 450001, China
- Jing Wang: School of Pharmaceutical Sciences, Zhengzhou University, Zhengzhou, 450001, China
- Yunya Zhang: School of Pharmaceutical Sciences, Zhengzhou University, Zhengzhou, 450001, China
- Yaru Shi: School of Pharmaceutical Sciences, Zhengzhou University, Zhengzhou, 450001, China
- Xinling Li: School of Pharmaceutical Sciences, Zhengzhou University, Zhengzhou, 450001, China
- Mengnian Cao: School of Pharmaceutical Sciences, Zhengzhou University, Zhengzhou, 450001, China
- Yutong Song: School of Pharmaceutical Sciences, Zhengzhou University, Zhengzhou, 450001, China
- Hongling Zhang: School of Pharmaceutical Sciences, Zhengzhou University, Zhengzhou, 450001, China; Key Laboratory of Targeting Therapy and Diagnosis for Critical Diseases, Key Laboratory of Advanced Drug Preparation Technologies, Ministry of Education, Zhengzhou, 450001, China
- Zhenzhong Zhang: School of Pharmaceutical Sciences, Zhengzhou University, Zhengzhou, 450001, China; Key Laboratory of Targeting Therapy and Diagnosis for Critical Diseases, Key Laboratory of Advanced Drug Preparation Technologies, Ministry of Education, Zhengzhou, 450001, China
- Kaixiang Zhang: School of Pharmaceutical Sciences, Zhengzhou University, Zhengzhou, 450001, China; Key Laboratory of Targeting Therapy and Diagnosis for Critical Diseases, Key Laboratory of Advanced Drug Preparation Technologies, Ministry of Education, Zhengzhou, 450001, China
- Haiwei Song: Department of Biochemistry, National University of Singapore, Singapore 138673, Singapore
- Jinjin Shi: School of Pharmaceutical Sciences, Zhengzhou University, Zhengzhou, 450001, China; Key Laboratory of Targeting Therapy and Diagnosis for Critical Diseases, Key Laboratory of Advanced Drug Preparation Technologies, Ministry of Education, Zhengzhou, 450001, China
- Junjie Liu: School of Pharmaceutical Sciences, Zhengzhou University, Zhengzhou, 450001, China; Key Laboratory of Targeting Therapy and Diagnosis for Critical Diseases, Key Laboratory of Advanced Drug Preparation Technologies, Ministry of Education, Zhengzhou, 450001, China

11. Jakkaladiki SP, Maly F. Integrating hybrid transfer learning with attention-enhanced deep learning models to improve breast cancer diagnosis. PeerJ Comput Sci 2024; 10:e1850. PMID: 38435578. PMCID: PMC10909230. DOI: 10.7717/peerj-cs.1850.
Abstract
Cancer, with its high fatality rate, instills fear in countless individuals worldwide. However, effective diagnosis and treatment can often lead to a successful cure. Computer-assisted diagnostics, especially in the context of deep learning, have become prominent methods for primary screening of various diseases, including cancer. Deep learning, an artificial intelligence technique that enables computers to reason like humans, has recently gained significant attention. This study focuses on training a deep neural network to predict breast cancer. With the advancements in medical imaging technologies such as X-ray, magnetic resonance imaging (MRI), and computed tomography (CT) scans, deep learning has become essential in analyzing and managing extensive image datasets. The objective of this research is to propose a deep-learning model for the identification and categorization of breast tumors. The system's performance was evaluated using the breast cancer identification (BreakHis) classification datasets from the Kaggle repository and the Wisconsin Breast Cancer Dataset (WBC) from the UCI repository. The study's findings demonstrated an impressive accuracy rate of 100%, surpassing other state-of-the-art approaches. The suggested model was thoroughly evaluated using F1-score, recall, precision, and accuracy metrics on the WBC dataset. Training, validation, and testing were conducted using pre-processed datasets, leading to remarkable results of 99.8% recall rate, 99.06% F1-score, and 100% accuracy rate on the BreakHis dataset. Similarly, on the WBC dataset, the model achieved a 99% accuracy rate, a 98.7% recall rate, and a 99.03% F1-score. These outcomes highlight the potential of deep learning models in accurately diagnosing breast cancer. Based on our research, it is evident that the proposed system outperforms existing approaches in this field.
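
Editor's note: the accuracy, precision, recall, and F1 figures quoted above follow directly from confusion-matrix counts. A small reference sketch (plain Python; the labels below are synthetic, not drawn from BreakHis or WBC):

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, precision, recall, and F1 from confusion-matrix counts."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    tn = sum(t == 0 and p == 0 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)               # a.k.a. sensitivity
    f1 = 2 * precision * recall / (precision + recall)
    accuracy = (tp + tn) / len(y_true)
    return accuracy, precision, recall, f1

y_true = [1, 0, 1, 1, 0, 0, 1, 0]          # 1 = malignant, 0 = benign
y_pred = [1, 0, 1, 0, 0, 0, 1, 1]
print(binary_metrics(y_true, y_pred))      # (0.75, 0.75, 0.75, 0.75)
```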
Affiliations
- Sudha Prathyusha Jakkaladiki: Faculty of Informatics and Management, University of Hradec Králové, Hradec Králové, Czech Republic
- Filip Maly: Faculty of Informatics and Management, University of Hradec Králové, Hradec Králové, Czech Republic

12. Chen L, Zeng B, Shen J, Xu J, Cai Z, Su S, Chen J, Cai X, Ying T, Hu B, Wu M, Chen X, Zheng Y. Bone age assessment based on three-dimensional ultrasound and artificial intelligence compared with paediatrician-read radiographic bone age: protocol for a prospective, diagnostic accuracy study. BMJ Open 2024; 14:e079969. PMID: 38401893. PMCID: PMC10895244. DOI: 10.1136/bmjopen-2023-079969.
Abstract
INTRODUCTION: Radiographic bone age (BA) assessment is widely used to evaluate children's growth disorders and predict their future height. Moreover, children are more sensitive and vulnerable to X-ray radiation exposure than adults. The purpose of this study is to develop a new, safer, radiation-free BA assessment method for children using three-dimensional ultrasound (3D-US) and artificial intelligence (AI), and to test the diagnostic accuracy and reliability of this method.
METHODS AND ANALYSIS: This is a prospective, observational study. All participants will be recruited through the Paediatric Growth and Development Clinic and will receive left-hand 3D-US and X-ray examinations at the Shanghai Sixth People's Hospital on the same day; all images will be recorded. These image-related data will be collected and randomly divided into a training set (80%) and a test set (20%). The training set will be used to establish a cascade network for 3D-US skeletal image segmentation and BA prediction, achieving end-to-end prediction from image to BA. The test set will be used to evaluate the accuracy of the 3D-US AI BA model. We have developed a new ultrasonic scanning device that can perform automatic 3D-US scanning of the hand. AI algorithms, such as convolutional neural networks, will be used to identify and segment the skeletal structures in the hand 3D-US images. We will achieve automatic segmentation of hand skeletal 3D-US images, establish a BA prediction model for 3D-US, and test the accuracy of the prediction model.
ETHICS AND DISSEMINATION: The Ethics Committee of Shanghai Sixth People's Hospital approved this study (approval number 2022-019). Written informed consent will be obtained from a parent or guardian of each participant. Final results will be published in peer-reviewed journals and presented at national and international conferences.
TRIAL REGISTRATION NUMBER: ChiCTR2200057236.
Affiliations
- Li Chen: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Bolun Zeng: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jian Shen: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Jiangchang Xu: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Zehang Cai: Shantou Institute of Ultrasonic Instruments Co., Ltd, Shantou, China
- Shudian Su: Shantou Institute of Ultrasonic Instruments Co., Ltd, Shantou, China
- Jie Chen: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaojun Cai: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Tao Ying: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Bing Hu: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Min Wu: Department of Pediatrics, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Xiaojun Chen: Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
- Yuanyi Zheng: Department of Ultrasound in Medicine, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai Institute of Ultrasound in Medicine, Shanghai Jiao Tong University School of Medicine, Shanghai, China

13. Avital G, Hernandez Torres SI, Knowlton ZJ, Bedolla C, Salinas J, Snider EJ. Toward Smart, Automated Junctional Tourniquets-AI Models to Interpret Vessel Occlusion at Physiological Pressure Points. Bioengineering (Basel) 2024; 11:109. PMID: 38391595. PMCID: PMC10885917. DOI: 10.3390/bioengineering11020109.
Abstract
Hemorrhage is the leading cause of preventable death in both civilian and military medicine. Junctional hemorrhages are especially difficult to manage, since traditional tourniquet placement is often not possible. Ultrasound can be used to visualize and guide the caretaker to apply pressure at physiological pressure points to stop hemorrhage. However, this process is technically challenging, requiring the vessel to be properly positioned over rigid bony surfaces and sufficient pressure to be applied to maintain proper occlusion. As a first step toward automating this life-saving intervention, we demonstrate an artificial intelligence algorithm that classifies a vessel as patent or occluded, which can guide a user to apply the pressure required to stop flow. Neural network models were trained using images captured from a custom tissue-mimicking phantom and an ex vivo swine model of the inguinal region as pressure was applied using an ultrasound probe, with and without color Doppler overlays. Using these images, we developed an image classification algorithm suitable for determining patency or occlusion in an ultrasound image containing a color Doppler overlay. Separate AI models for both test platforms were able to detect occlusion status in test-image sets to more than 93% accuracy. In conclusion, this methodology can be utilized for guiding and monitoring proper vessel occlusion, which, when combined with automated actuation and other AI models, can allow for automated junctional tourniquet application.
Affiliations
- Guy Avital: U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA; Israel Defense Forces Medical Corps, Ramat Gan 52620, Israel; Division of Anesthesia, Intensive Care, and Pain Management, Tel-Aviv Medical Center, Affiliated with the Faculty of Medicine, Tel Aviv University, Tel Aviv 64239, Israel
- Zechariah J Knowlton: U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Carlos Bedolla: U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Jose Salinas: U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Eric J Snider: U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA

14. Irmici G, Cè M, Pepa GD, D'Ascoli E, De Berardinis C, Giambersio E, Rabiolo L, La Rocca L, Carriero S, Depretto C, Scaperrotta G, Cellina M. Exploring the Potential of Artificial Intelligence in Breast Ultrasound. Crit Rev Oncog 2024; 29:15-28. PMID: 38505878. DOI: 10.1615/critrevoncog.2023048873.
Abstract
Breast ultrasound has emerged as a valuable imaging modality in the detection and characterization of breast lesions, particularly in women with dense breast tissue or contraindications for mammography. Within this framework, artificial intelligence (AI) has garnered significant attention for its potential to improve diagnostic accuracy in breast ultrasound and revolutionize the workflow. This review article aims to comprehensively explore the current state of research and development in harnessing AI's capabilities for breast ultrasound. We delve into various AI techniques, including machine learning and deep learning, as well as their applications in automating lesion detection, segmentation, and classification tasks. Furthermore, the review addresses the challenges and hurdles faced in implementing AI systems in breast ultrasound diagnostics, such as data privacy, interpretability, and regulatory approval. Ethical considerations pertaining to the integration of AI into clinical practice are also discussed, emphasizing the importance of maintaining a patient-centered approach. The integration of AI into breast ultrasound holds great promise for improving diagnostic accuracy, enhancing efficiency, and ultimately advancing patient care. By examining the current state of research and identifying future opportunities, this review aims to contribute to the understanding and utilization of AI in breast ultrasound and to encourage further interdisciplinary collaboration to maximize its potential in clinical practice.
Affiliations
- Giovanni Irmici: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Maurizio Cè: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Gianmarco Della Pepa: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Elisa D'Ascoli: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Claudia De Berardinis: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Emilia Giambersio: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Lidia Rabiolo: Dipartimento di Biomedicina, Neuroscienze e Diagnostica Avanzata, Policlinico Università di Palermo, Palermo, Italy
- Ludovica La Rocca: Postgraduation School in Radiodiagnostics, Università degli Studi di Napoli
- Serena Carriero: Postgraduation School in Radiodiagnostics, Università degli Studi di Milano, Via Festa del Perdono 7, 20122 Milan, Italy
- Catherine Depretto: Breast Radiology Unit, Fondazione IRCCS, Istituto Nazionale Tumori, Milano, Italy
- Michaela Cellina: Radiology Department, Fatebenefratelli Hospital, ASST Fatebenefratelli Sacco, Piazza Principessa Clotilde 3, 20121 Milan, Italy

15. Oh K, Lee SE, Kim EK. 3-D breast nodule detection on automated breast ultrasound using faster region-based convolutional neural networks and U-Net. Sci Rep 2023; 13:22625. PMID: 38114666. PMCID: PMC10730541. DOI: 10.1038/s41598-023-49794-8.
Abstract
Mammography is currently the most commonly used modality for breast cancer screening. However, its sensitivity is relatively low in women with dense breasts. Dense breast tissue shows a relatively high rate of interval cancers and is at high risk for developing breast cancer. As a supplemental screening tool, ultrasonography is a widely adopted imaging modality alongside standard mammography, especially for dense breasts. Lately, automated breast ultrasound imaging has gained attention due to its advantages over hand-held ultrasound imaging. However, automated breast ultrasound imaging requires considerable time and effort to read because of the lengthy data. Hence, developing a computer-aided nodule detection system for automated breast ultrasound is invaluable and practically impactful. This study proposes a three-dimensional breast nodule detection system based on a simple two-dimensional deep-learning model exploiting automated breast ultrasound. Additionally, we provide several postprocessing steps to reduce false positives. In our experiments using the in-house automated breast ultrasound datasets, a sensitivity of [Formula: see text] with 8.6 false positives is achieved on unseen test data at best.
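
Editor's note: the paper's specific postprocessing steps are not detailed in the abstract. One common family of false-positive-reduction steps for volumetric candidate maps is connected-component filtering; the SciPy sketch below illustrates that generic idea only, with illustrative thresholds that are not the paper's:

```python
import numpy as np
from scipy import ndimage

def filter_candidates(prob_volume, prob_thr=0.5, min_voxels=30):
    """Binarize a 3-D probability map, then drop connected components
    smaller than a plausible nodule size."""
    mask = prob_volume > prob_thr
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
    keep = [i + 1 for i, s in enumerate(sizes) if s >= min_voxels]
    return np.isin(labels, keep)

vol = np.random.rand(64, 128, 128)     # stand-in for a network's output map
print(filter_candidates(vol).sum(), "voxels survive filtering")
```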
Affiliations
- Kangrok Oh: Research Institute of Radiological Science and Center for Clinical Imaging Data Science, Yonsei University College of Medicine, Seoul, 03722, Republic of Korea
- Si Eun Lee: Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin, Gyeonggi-do, 16995, Republic of Korea
- Eun-Kyung Kim: Department of Radiology, Yongin Severance Hospital, Yonsei University College of Medicine, 363, Dongbaekjukjeon-daero, Giheung-gu, Yongin, Gyeonggi-do, 16995, Republic of Korea

16. Liu N, Fenster A, Tessier D, Chun J, Gou S, Chong J. Self-supervised enhanced thyroid nodule detection in ultrasound examination video sequences with multi-perspective evaluation. Phys Med Biol 2023; 68:235007. PMID: 37918343. DOI: 10.1088/1361-6560/ad092a.
Abstract
Objective: Ultrasound is the most commonly used examination for the detection and identification of thyroid nodules. Since manual detection is time-consuming and subjective, attempts to introduce machine learning into this process are ongoing. However, the performance of these methods is limited by the low signal-to-noise ratio and tissue contrast of ultrasound images. To address these challenges, we extend thyroid nodule detection from image-based to video-based using the temporal context information in ultrasound videos.
Approach: We propose a video-based deep learning model with adjacent frame perception (AFP) for accurate and real-time thyroid nodule detection. Compared to image-based methods, AFP can aggregate semantically similar contextual features in the video. Furthermore, considering the cost of medical image annotation for video-based models, a patch-scale self-supervised model (PASS) is proposed. PASS is trained on unlabeled datasets to improve the performance of the AFP model without additional labelling costs.
Main results: The PASS model was trained on 92 videos containing 23,773 frames, of which 60 annotated videos containing 16,694 frames were used to train and evaluate the AFP model. The evaluation is performed from the video, frame, nodule, and localization perspectives. For the localization perspective, we used the average precision metric with the intersection-over-union threshold set to 50% (AP@50), which is the area under the smoothed precision-recall curve. Our proposed AFP improved AP@50 from 0.256 to 0.390, while the PASS-enhanced AFP further improved AP@50 to 0.425. AFP and PASS also improve performance in the evaluations from the other perspectives based on the localization results.
Significance: Our video-based model can mitigate the effects of low signal-to-noise ratio and tissue contrast in ultrasound images and enable accurate detection of thyroid nodules in real time. The evaluation from multiple perspectives in the ablation experiments demonstrates the effectiveness of our proposed AFP and PASS models.
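
Editor's note: AP@50 as defined above is the area under the smoothed precision-recall curve, with IoU ≥ 0.5 deciding whether a detection counts as a true positive. A compact NumPy sketch of that computation, given detections already matched to ground truth (the flags below are synthetic, not the authors' data):

```python
import numpy as np

def average_precision(scores, is_tp, n_gt):
    """Area under the smoothed precision-recall curve (all-point interpolation)."""
    order = np.argsort(scores)[::-1]
    tp = np.cumsum(is_tp[order])
    fp = np.cumsum(~is_tp[order])
    recall = tp / n_gt
    precision = tp / (tp + fp)
    # Smooth: make precision monotonically non-increasing, right to left
    precision = np.maximum.accumulate(precision[::-1])[::-1]
    # Integrate precision over the recall steps
    r = np.concatenate(([0.0], recall))
    return float(np.sum((r[1:] - r[:-1]) * precision))

rng = np.random.default_rng(1)
is_tp = rng.random(50) < 0.6          # IoU >= 0.5 match flags per detection
scores = rng.random(50) + is_tp       # matched detections tend to score higher
print(f"AP@50 ~= {average_precision(scores, is_tp, n_gt=int(is_tp.sum())):.3f}")
```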
Affiliations
- Ningtao Liu: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, 710126, People's Republic of China; Robarts Research Institute, Western University, London, ON, N6A 5B7, Canada
- Aaron Fenster: Robarts Research Institute, Western University, London, ON, N6A 5B7, Canada; Department of Medical Imaging, Western University, London, ON, N6A 5A5, Canada; Department of Medical Biophysics, Western University, London, ON, N6A 5C1, Canada
- David Tessier: Robarts Research Institute, Western University, London, ON, N6A 5B7, Canada
- Jin Chun: Schulich School of Medicine, Western University, London, ON, N6A 5C1, Canada
- Shuiping Gou: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, 710126, People's Republic of China
- Jaron Chong: Department of Medical Imaging, Western University, London, ON, N6A 5A5, Canada

17. Yan S, Li J, Wu W. Artificial intelligence in breast cancer: application and future perspectives. J Cancer Res Clin Oncol 2023; 149:16179-16190. PMID: 37656245. DOI: 10.1007/s00432-023-05337-2.
Abstract
Breast cancer is one of the most common cancers and one of the leading causes of cancer-related deaths in women worldwide. Early diagnosis and treatment are key to a favorable prognosis. The application of artificial intelligence technology in the medical field is increasingly extensive, including image analysis, automated diagnosis, intelligent pharmaceutical systems, personalized treatment, and more. AI-based breast cancer imaging, pathology, and adjuvant therapy technologies can not only reduce the workload of clinicians but also continuously improve the accuracy and sensitivity of breast cancer diagnosis and treatment. This paper reviews the application of AI in breast cancer, looks ahead to its future development for breast cancer detection and therapy, and discusses the challenges involved, so as to provide ideas for future research.
Affiliations
- Shuixin Yan: The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China
- Jiadi Li: The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China
- Weizhu Wu: The Affiliated Lihuili Hospital of Ningbo University, Ningbo, 315000, Zhejiang, China

18. Sun R, Zhang X, Xie Y, Nie S. Weakly supervised breast lesion detection in DCE-MRI using self-transfer learning. Med Phys 2023; 50:4960-4972. PMID: 36820793. DOI: 10.1002/mp.16296.
Abstract
BACKGROUND: Breast cancer is a commonly diagnosed and life-threatening cancer in women. Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is increasingly used for breast lesion detection and diagnosis because of the high resolution of soft tissues, and supervised detection methods have been implemented for breast lesion detection. However, these methods require substantial time and specialized staff to develop the labeled training samples.
PURPOSE: To investigate the potential of weakly supervised deep learning models for breast lesion detection.
METHODS: A total of 1003 breast DCE-MRI studies were collected, including 603 abnormal cases with 770 breast lesions and 400 normal subjects. The proposed model was trained using breast DCE-MRI considering only the image-level labels (normal and abnormal) and was optimized for the classification and detection sub-tasks simultaneously. Ablation experiments were performed to evaluate different convolutional neural network (CNN) backbones (VGG19 and ResNet50) as shared convolutional layers, as well as to evaluate the effect of the preprocessing methods.
RESULTS: Our weakly supervised model performed better with VGG19 than with ResNet50 (p < 0.05). The average precision (AP) of the classification sub-task was 91.7% for abnormal cases and 88.0% for normal samples. The area under the receiver operating characteristic (ROC) curve (AUC) was 0.939 (95% confidence interval [CI]: 0.920-0.941). The weakly supervised detection task AP was 85.7%, and the correct location (CorLoc) rate was 90.2%. A sensitivity of 84.0% at two false positives per image was assessed based on the free-response ROC (FROC) curve.
CONCLUSIONS: The results confirm that a weakly supervised CNN based on self-transfer learning is an effective and promising auxiliary tool for detecting breast lesions.
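
Editor's note: "sensitivity at two false positives per image" is read off the FROC curve; with a handful of measured operating points, interpolation yields the quoted number. A minimal NumPy sketch (the curve points below are made up for illustration):

```python
import numpy as np

# Hypothetical FROC operating points: (false positives per image, sensitivity)
fppi = np.array([0.25, 0.5, 1.0, 2.0, 4.0])
sens = np.array([0.55, 0.65, 0.76, 0.84, 0.90])

def sensitivity_at(fp_per_image, fppi, sens):
    """Linearly interpolate the FROC curve at a chosen FP/image budget."""
    return float(np.interp(fp_per_image, fppi, sens))

print(sensitivity_at(2.0, fppi, sens))   # 0.84 at 2 FPs per image
```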
Affiliation(s)
- Rong Sun: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Xiaobing Zhang: Department of Radiology, Ruijin Hospital Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
- Yuanzhong Xie: Medical Imaging Center, Taian Center Hospital, Shandong, China
- Shengdong Nie: School of Health Science and Engineering, University of Shanghai for Science and Technology, Shanghai, China

19
Hernandez-Torres SI, Hennessey RP, Snider EJ. Performance Comparison of Object Detection Networks for Shrapnel Identification in Ultrasound Images. Bioengineering (Basel) 2023; 10:807. [PMID: 37508834 PMCID: PMC10376403 DOI: 10.3390/bioengineering10070807] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2023] [Revised: 06/20/2023] [Accepted: 06/30/2023] [Indexed: 07/30/2023] Open
Abstract
Ultrasound imaging is a critical tool for triaging and diagnosing subjects, but only if images can be properly interpreted. Unfortunately, in remote or military medicine situations, the expertise to interpret images can be lacking. Machine-learning image interpretation models that are explainable to the end user and deployable in real time with ultrasound equipment have the potential to solve this problem. We have previously shown how a YOLOv3 (You Only Look Once) object detection algorithm can be used for tracking shrapnel, artery, vein, and nerve fiber bundle features in a tissue phantom. However, real-time implementation of an object detection model requires optimizing model inference time. Here, we compare the performance of five object detection deep-learning models with varying architectures and numbers of trainable parameters to determine which is most suitable for this shrapnel-tracking ultrasound image application. We used a dataset of more than 16,000 ultrasound images from gelatin tissue phantoms containing artery, vein, nerve fiber, and shrapnel features to train and evaluate each model. Every object detection model surpassed 0.85 mean average precision except the detection transformer model; the models whose testing performance lagged their training performance were overfitting the data. Overall, the YOLOv7tiny model had the highest mean average precision and the quickest inference time, making it the clear choice for this ultrasound imaging application. Next steps will implement this object detection algorithm for real-time applications, an important step in translating AI models for emergency and military medicine.
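Comparing detectors for real-time use comes down to the accuracy-versus-latency tradeoff the abstract describes. A minimal sketch of the latency half of that comparison, with placeholder backbones standing in for the YOLO variants, might look like this:

```python
# Time each candidate model over repeated forward passes and report the
# mean inference latency. Models and input size are placeholders.
import time
import torch
from torchvision.models import mobilenet_v2, resnet50

def mean_latency_ms(model, n_runs=50, size=(1, 3, 416, 416)):
    model.eval()
    x = torch.randn(size)
    with torch.no_grad():
        for _ in range(5):          # warm-up runs excluded from timing
            model(x)
        t0 = time.perf_counter()
        for _ in range(n_runs):
            model(x)
    return (time.perf_counter() - t0) / n_runs * 1000.0

for name, m in [("mobilenet_v2", mobilenet_v2(weights=None)),
                ("resnet50", resnet50(weights=None))]:
    print(f"{name}: {mean_latency_ms(m):.1f} ms/image")
```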
Affiliation(s)
- Ryan P Hennessey: U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Eric J Snider: U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA

20
Mohamed AAA, Hançerlioğullari A, Rahebi J, Ray MK, Roy S. Colon Disease Diagnosis with Convolutional Neural Network and Grasshopper Optimization Algorithm. Diagnostics (Basel) 2023; 13:1728. [PMID: 37238212 DOI: 10.3390/diagnostics13101728] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Revised: 05/10/2023] [Accepted: 05/11/2023] [Indexed: 05/28/2023] Open
Abstract
This paper presents a robust colon cancer diagnosis method based on feature selection. The proposed method can be divided into three steps. In the first step, image features were extracted with convolutional neural networks: SqueezeNet, ResNet-50, AlexNet, and GoogleNet. The number of extracted features is very large and is not appropriate for training the system directly. For this reason, a metaheuristic method is used in the second step to reduce the number of features: the grasshopper optimization algorithm selects the best features from the feature data. Finally, machine learning methods perform the colon disease diagnosis accurately and successfully. Two classifiers are applied for the evaluation of the proposed method: a decision tree and a support vector machine. Sensitivity, specificity, accuracy, precision, and F1 score were used to evaluate the proposed method. For SqueezeNet with the support vector machine, we obtained 99.34%, 99.41%, 99.12%, 98.91%, and 98.94% for sensitivity, specificity, accuracy, precision, and F1 score, respectively. Finally, we compared the proposed method's performance to that of other methods, including a 9-layer CNN, random forest, a 7-layer CNN, and DropBlock, and demonstrated that our solution outperformed the others.
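The three-step pipeline, deep features, feature-subset selection, then an SVM, can be sketched briefly. Here a simple univariate filter stands in for the grasshopper optimization algorithm, and the data are synthetic placeholders:

```python
# Step 1: CNN feature extraction; step 2: feature-subset selection
# (a univariate filter as a stand-in for the metaheuristic); step 3: SVM.
import numpy as np
import torch
from torchvision import models
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

backbone = models.squeezenet1_0(weights=None)   # SqueezeNet-style backbone
backbone.eval()
images = torch.randn(40, 3, 224, 224)           # stand-in for colon images
with torch.no_grad():
    feats = backbone.features(images).flatten(1).numpy()  # (40, 86528)
labels = np.random.randint(0, 2, size=40)       # synthetic labels

# The raw feature vector is huge; keep only a small, informative subset.
selector = SelectKBest(f_classif, k=100).fit(feats, labels)
subset = selector.transform(feats)

# Classify the selected features with a support vector machine.
print(cross_val_score(SVC(kernel="rbf"), subset, labels, cv=5).mean())
```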
Affiliation(s)
- Amna Ali A Mohamed: Department of Material Science and Engineering, University of Kastamonu, Kastamonu 37150, Turkey
- Javad Rahebi: Department of Software Engineering, Istanbul Topkapi University, Istanbul 34087, Turkey
- Mayukh K Ray: Department of Physics, Amity Institute of Applied Sciences, Amity University, Kolkata 700135, India
- Sudipta Roy: Artificial Intelligence & Data Science, Jio Institute, Navi Mumbai 410206, India

21
Snider EJ, Hernandez-Torres SI, Hennessey R. Using Ultrasound Image Augmentation and Ensemble Predictions to Prevent Machine-Learning Model Overfitting. Diagnostics (Basel) 2023; 13:417. [PMID: 36766522 PMCID: PMC9914871 DOI: 10.3390/diagnostics13030417] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Revised: 01/07/2023] [Accepted: 01/18/2023] [Indexed: 01/26/2023] Open
Abstract
Deep learning predictive models have the potential to simplify and automate medical imaging diagnostics by lowering the skill threshold for image interpretation. However, this requires predictive models that generalize across the subject variability seen clinically. Here, we highlight methods to improve the test accuracy of an image classifier model for shrapnel identification using tissue phantom image sets. Using a previously developed image classifier neural network, termed ShrapML, blind test accuracy was below 70% and varied with the training/test data setup, as determined by a leave-one-subject-out (LOSO) holdout methodology. Introducing affine transformations for image augmentation or MixUp methodologies to generate additional training sets improved model performance, raising overall accuracy to 75%. Further improvements were made by aggregating predictions across five LOSO holdouts, bagging the confidences or predictions either from all LOSOs or from the top-3 LOSO-confidence models for each image prediction. Top-3 LOSO-confidence bagging performed best, improving test accuracy to greater than 85% for two different blind tissue phantoms. Gradient-weighted class activation mapping confirmed that the image classifier was tracking shrapnel in the image sets. Overall, data augmentation and ensemble prediction approaches were suitable for creating more generalized predictive models for ultrasound image analysis, a critical step for real-time diagnostic deployment.
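Of the augmentation strategies named above, MixUp is the simplest to illustrate: new training samples are convex combinations of image pairs and their labels. A minimal sketch, with illustrative shapes and Beta parameter:

```python
# MixUp: blend random image pairs and their one-hot labels with a
# Beta-distributed mixing coefficient to generate extra training data.
import numpy as np

def mixup(x, y, alpha=0.2, rng=np.random.default_rng()):
    """x: (B, H, W, C) images; y: (B, num_classes) one-hot labels."""
    lam = rng.beta(alpha, alpha)
    perm = rng.permutation(len(x))
    x_mix = lam * x + (1.0 - lam) * x[perm]
    y_mix = lam * y + (1.0 - lam) * y[perm]
    return x_mix, y_mix

images = np.random.rand(8, 128, 128, 1)          # stand-in ultrasound frames
labels = np.eye(2)[np.random.randint(0, 2, 8)]   # one-hot: shrapnel vs. none
aug_images, aug_labels = mixup(images, labels)
```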
Affiliation(s)
- Eric J. Snider: U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA

22
Madani M, Behzadi MM, Nabavi S. The Role of Deep Learning in Advancing Breast Cancer Detection Using Different Imaging Modalities: A Systematic Review. Cancers (Basel) 2022; 14:5334. [PMID: 36358753 PMCID: PMC9655692 DOI: 10.3390/cancers14215334] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 10/23/2022] [Accepted: 10/25/2022] [Indexed: 12/02/2022] Open
Abstract
Breast cancer is among the most common and fatal diseases for women, and no permanent treatment has been discovered. Thus, early detection is a crucial step to control and cure breast cancer, and it can save the lives of millions of women. For example, in 2020, more than 65% of breast cancer patients were diagnosed at an early stage of cancer, all of whom survived. Although early detection is the most effective approach for cancer treatment, breast cancer screening conducted by radiologists is very expensive and time-consuming. More importantly, conventional methods of analyzing breast cancer images suffer from high false-detection rates. Different breast cancer imaging modalities are used to extract and analyze the key features affecting the diagnosis and treatment of breast cancer. These imaging modalities can be divided into subgroups such as mammograms, ultrasound, magnetic resonance imaging, and histopathological images, or any combination of them. Radiologists or pathologists analyze images produced by these methods manually, which increases the risk of wrong decisions in cancer detection. Thus, new automatic methods to analyze all kinds of breast screening images and assist radiologists in interpreting them are required. Recently, artificial intelligence (AI) has been widely used to automatically improve the early detection and treatment of different types of cancer, specifically breast cancer, thereby enhancing patients' chances of survival. Advances in AI algorithms, such as deep learning, and the availability of datasets obtained from various imaging modalities have opened an opportunity to surpass the limitations of current breast cancer analysis methods. In this article, we first review breast cancer imaging modalities and their strengths and limitations. Then, we explore and summarize the most recent studies that employed AI in breast cancer detection using various breast imaging modalities. In addition, we report the available datasets for the breast-cancer imaging modalities, which are important for developing AI-based algorithms and training deep learning models. In conclusion, this review tries to provide a comprehensive resource to help researchers working in breast cancer image analysis.
Affiliation(s)
- Mohammad Madani: Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA; Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Mohammad Mahdi Behzadi: Department of Mechanical Engineering, University of Connecticut, Storrs, CT 06269, USA; Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA
- Sheida Nabavi: Department of Computer Science and Engineering, University of Connecticut, Storrs, CT 06269, USA

23
Xiao A, Shen B, Shi X, Zhang Z, Zhang Z, Tian J, Ji N, Hu Z. Intraoperative Glioma Grading Using Neural Architecture Search and Multi-Modal Imaging. IEEE Trans Med Imaging 2022; 41:2570-2581. [PMID: 35404810 DOI: 10.1109/tmi.2022.3166129] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
Glioma grading during surgery can help clinical treatment planning and prognosis, but intraoperative pathological examination of frozen sections is limited by long processing times and complex procedures. Near-infrared fluorescence imaging offers a chance for fast and accurate real-time diagnosis. Recently, deep learning techniques have been actively explored for medical image analysis and disease diagnosis. However, issues with near-infrared fluorescence images, including small scale, noise, and low resolution, increase the difficulty of training a satisfactory network. Multi-modal imaging can provide complementary information to boost model performance, but simultaneously designing a proper network and utilizing the information of multi-modal data is challenging. In this work, we propose a novel neural architecture search method, DLS-DARTS, to automatically search for network architectures to handle these issues. DLS-DARTS has two learnable stems for multi-modal low-level feature fusion and uses a modified perturbation-based derivation strategy to improve performance on the area under the curve and accuracy. White light imaging and fluorescence imaging in the first near-infrared window (650-900 nm) and the second near-infrared window (1,000-1,700 nm) are applied to provide multi-modal information on glioma tissues. In experiments on 1,115 surgical glioma specimens, DLS-DARTS achieved an area under the curve of 0.843 and an accuracy of 0.634, outperforming manually designed convolutional neural networks including ResNet, PyramidNet, and EfficientNet, as well as a state-of-the-art neural architecture search method for multi-modal medical image classification. Our study demonstrates that DLS-DARTS has the potential to help neurosurgeons during surgery, showing high promise for medical image analysis.
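As a loose schematic of the two ideas named above, two learnable stems fusing modalities at the feature level, and a DARTS-style mixed operation whose softmaxed architecture weights choose among candidate ops, consider the toy sketch below; it is not the DLS-DARTS search itself, and all sizes are illustrative.

```python
# Toy two-stem network with one DARTS-style mixed operation (PyTorch).
import torch
import torch.nn as nn

class MixedOp(nn.Module):
    def __init__(self, ch):
        super().__init__()
        self.ops = nn.ModuleList([
            nn.Conv2d(ch, ch, 3, padding=1),
            nn.Conv2d(ch, ch, 5, padding=2),
            nn.Identity(),
        ])
        # Architecture parameters, learned jointly with the weights.
        self.alpha = nn.Parameter(torch.zeros(len(self.ops)))

    def forward(self, x):
        w = torch.softmax(self.alpha, dim=0)
        return sum(wi * op(x) for wi, op in zip(w, self.ops))

class TwoStemNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.stem_wl = nn.Conv2d(3, 16, 3, padding=1)   # white-light stem
        self.stem_nir = nn.Conv2d(1, 16, 3, padding=1)  # fluorescence stem
        self.cell = MixedOp(32)
        self.head = nn.Linear(32, 4)                    # e.g., glioma grades

    def forward(self, wl, nir):
        x = torch.cat([self.stem_wl(wl), self.stem_nir(nir)], dim=1)
        x = self.cell(x).mean(dim=(2, 3))               # global average pool
        return self.head(x)

net = TwoStemNet()
out = net(torch.randn(2, 3, 64, 64), torch.randn(2, 1, 64, 64))
```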

24
Zhang Z, Liu N, Guo Z, Jiao L, Fenster A, Jin W, Zhang Y, Chen J, Yan C, Gou S. Ageing and degeneration analysis using ageing-related dynamic attention on lateral cephalometric radiographs. NPJ Digit Med 2022; 5:151. [PMID: 36168038 PMCID: PMC9515216 DOI: 10.1038/s41746-022-00681-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 08/22/2022] [Indexed: 11/25/2022] Open
Abstract
As the world's population ages, the study of ageing and degeneration of physiological characteristics in human skin, bones, and muscles has become an important topic, and research on the ageing of bones, especially the skull, has received much attention in recent years. In this study, a novel deep learning method representing ageing-related dynamic attention (ARDA) is proposed. The proposed method can quantitatively display the ageing salience of bones and their change patterns with age on lateral cephalometric radiograph (LCR) images containing the craniofacial region and cervical spine. An age-estimation deep learning model trained on 14,142 LCR images from individuals aged 4 to 40 years is used to extract ageing-related features, and from these features ageing salience maps are generated by the Grad-CAM method. All ageing salience maps for the same age are merged into an ARDA map corresponding to that age. Ageing salience maps show that ARDA is mainly concentrated in three regions of LCR images: the teeth, craniofacial, and cervical spine regions. Furthermore, the dynamic distribution of ARDA at different ages and instances in LCR images is quantitatively analyzed. Experimental results on 3014 cases show that ARDA can accurately reflect development and degeneration patterns in LCR images.
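Grad-CAM, used above to turn an age-estimation network into salience maps, weights the last convolutional features by the gradient of the output with respect to them. A minimal sketch with a placeholder backbone and regression head:

```python
# Minimal Grad-CAM for a regression output (PyTorch); backbone is a stand-in.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # age regression head

feats, grads = {}, {}
layer = backbone.layer4                               # last conv stage
layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

x = torch.randn(1, 3, 224, 224)        # stand-in LCR image
backbone(x).sum().backward()           # gradient of the predicted age

w = grads["a"].mean(dim=(2, 3), keepdim=True)         # channel weights
cam = torch.relu((w * feats["a"]).sum(dim=1))         # (1, H', W') salience
cam = cam / (cam.max() + 1e-8)                        # normalize to [0, 1]
```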
Affiliation(s)
- Zhiyong Zhang: Key Laboratory of Shaanxi Province for Craniofacial Precision Medicine Research, College of Stomatology, Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China; College of Forensic Medicine, Xi'an Jiaotong University Health Science Center, Xi'an, 710061, Shaanxi, China; Department of Orthodontics, the Affiliated Stomatological Hospital of Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
- Ningtao Liu: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, 710071, Shaanxi, China; Robarts Research Institute, Western University, London, N6A 3K7, ON, Canada
- Zhang Guo: Academy of Advanced Interdisciplinary Research, Xidian University, Xi'an, 710071, Shaanxi, China
- Licheng Jiao: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, 710071, Shaanxi, China
- Aaron Fenster: Robarts Research Institute, Western University, London, N6A 3K7, ON, Canada
- Wenfan Jin: Department of Radiology, the Affiliated Stomatological Hospital of Xi'an Jiaotong University, Xi'an, 710004, Shaanxi, China
- Yuxiang Zhang: College of Forensic Medicine, Xi'an Jiaotong University Health Science Center, Xi'an, 710061, Shaanxi, China
- Jie Chen: College of Forensic Medicine, Xi'an Jiaotong University Health Science Center, Xi'an, 710061, Shaanxi, China
- Chunxia Yan: College of Forensic Medicine, Xi'an Jiaotong University Health Science Center, Xi'an, 710061, Shaanxi, China
- Shuiping Gou: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, 710071, Shaanxi, China

25
Snider EJ, Hernandez-Torres SI, Avital G, Boice EN. Evaluation of an Object Detection Algorithm for Shrapnel and Development of a Triage Tool to Determine Injury Severity. J Imaging 2022; 8:252. [PMID: 36135417 PMCID: PMC9501864 DOI: 10.3390/jimaging8090252] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2022] [Revised: 09/07/2022] [Accepted: 09/12/2022] [Indexed: 01/25/2023] Open
Abstract
Emergency medicine in austere environments relies on ultrasound imaging as an essential diagnostic tool. Without extensive training, identifying abnormalities such as shrapnel embedded in tissue is challenging, and medical professionals with the appropriate expertise are limited in resource-constrained environments. Incorporating artificial intelligence models to aid interpretation can reduce the skill gap, enabling identification of shrapnel and of its proximity to important anatomical features for improved medical treatment. Here, we apply a deep learning object detection framework, YOLOv3, to shrapnel detection across various sizes and locations with respect to a neurovascular bundle. Ultrasound images were collected in a tissue phantom containing shrapnel, vein, artery, and nerve features. The YOLOv3 framework classifies the object types and identifies their locations. On the testing dataset, the model successfully identified each object class, with a mean Intersection over Union of 0.73 and an average precision of 0.94. Furthermore, a triage tool was developed to quantify shrapnel distance from neurovascular features and notify the end user when a proximity threshold is surpassed, which may warrant evacuation or surgical intervention. Overall, object detection models such as this will be vital for compensating for a lack of expertise in ultrasound interpretation, increasing its availability for emergency and military medicine.
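The triage logic, measuring the distance from detected shrapnel to the nearest neurovascular feature and flagging threshold violations, is straightforward to sketch. Boxes, pixel spacing, and the 5 mm threshold below are invented for illustration:

```python
# Distance-based triage check over detected bounding boxes.
import math

def center(box):                      # box = (x1, y1, x2, y2) in pixels
    return ((box[0] + box[2]) / 2, (box[1] + box[3]) / 2)

def min_distance_mm(shrapnel, features, mm_per_px):
    sx, sy = center(shrapnel)
    return min(math.hypot(sx - fx, sy - fy) * mm_per_px
               for fx, fy in map(center, features))

detections = {"shrapnel": (120, 80, 140, 100),
              "artery": (200, 90, 230, 130),
              "nerve": (150, 60, 180, 95)}
d = min_distance_mm(detections["shrapnel"],
                    [detections["artery"], detections["nerve"]],
                    mm_per_px=0.2)
if d < 5.0:                           # assumed 5 mm proximity threshold
    print(f"ALERT: shrapnel {d:.1f} mm from neurovascular bundle")
```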
Affiliation(s)
- Eric J. Snider: U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA
- Guy Avital: U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA; Trauma & Combat Medicine Branch, Surgeon General's Headquarters, Israel Defense Forces, Ramat-Gan 52620, Israel; Division of Anesthesia, Intensive Care & Pain Management, Tel-Aviv Sourasky Medical Center, Affiliated with the Sackler Faculty of Medicine, Tel-Aviv University, Tel-Aviv 64239, Israel
- Emily N. Boice (correspondence; Tel.: +1-210-539-8721): U.S. Army Institute of Surgical Research, JBSA Fort Sam Houston, San Antonio, TX 78234, USA

26
Ma JJ, Meng S, Dang SJ, Wang JZ, Yuan Q, Yang Q, Song CX. Evaluation of a new method of calculating breast tumor volume based on automated breast ultrasound. Front Oncol 2022; 12:895575. [PMID: 36176389 PMCID: PMC9513394 DOI: 10.3389/fonc.2022.895575] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2022] [Accepted: 08/26/2022] [Indexed: 11/13/2022] Open
Abstract
Objective To evaluate the effectiveness and advantages of a new method for calculating breast tumor volume based on an automated breast ultrasound system (ABUS). Methods A total of 42 patients (18-70 years old) with breast lesions were selected for this study. The Invenia ABUS 2.0 (General Electric Company, USA) was used, with a probe frequency of 6-15 MHz. Adobe Photoshop CS6 software was used to calculate the pixel ratio of each ABUS image and to draw an outline of the tumor cross-section. The resulting area (in pixels) was multiplied by the pixel ratio to yield the area of the tumor cross-section. The Wilcoxon signed rank test and Bland-Altman plots were used to compare mean differences and mean values, respectively, between the two methods. Results There was no significant difference between the tumor volumes calculated by the pixel method and the traditional method (P>0.05), and repeated measurements of the same tumor volume were more consistent with the pixel method. Conclusion The new pixel method is feasible for measuring breast tumor volume and has good validity and measurement stability.
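The pixel method reduces to scaling a traced region's pixel count by the image's calibrated pixel-to-millimeter ratio and summing across slices. A minimal sketch with illustrative values (the slice spacing in particular is an assumption):

```python
# Pixel-count area per slice, scaled by the calibrated ratio, summed to volume.
import numpy as np

mm_per_pixel = 0.1                       # from the image's calibrated scale
slice_spacing_mm = 0.5                   # assumed ABUS inter-slice distance

# One binary mask per ABUS slice, 1 inside the traced tumor outline.
masks = [np.zeros((400, 400), dtype=np.uint8) for _ in range(20)]
masks[10][150:250, 160:240] = 1          # toy tumor cross-section

areas_mm2 = [m.sum() * mm_per_pixel**2 for m in masks]   # pixels -> mm^2
volume_mm3 = sum(areas_mm2) * slice_spacing_mm
print(f"tumor volume ~ {volume_mm3:.1f} mm^3")
```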
Affiliation(s)
- Jing-Jing Ma: Department of Internal Medicine, Xi'an Fifth Hospital, Xi'an, China
- Shan Meng: Department of Hematology, The Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Sha-Jie Dang: Department of Anesthesia, Shaanxi Provincial Cancer Hospital, Affiliated to Xi'an Jiaotong University, Xi'an, China
- Jia-Zhong Wang: Department of General Surgery, The Second Affiliated Hospital of Xi'an Jiaotong University, Xi'an, China
- Quan Yuan: Department of Ultrasound, Shaanxi Provincial Cancer Hospital, Affiliated to Xi'an Jiaotong University, Xi'an, China
- Qi Yang: Department of Surgery, Shaanxi Provincial Cancer Hospital, Affiliated to Xi'an Jiaotong University, Xi'an, China
- Can-Xu Song (corresponding author): Department of Ultrasound, Shaanxi Provincial Cancer Hospital, Affiliated to Xi'an Jiaotong University, Xi'an, China

27
Kumar P, Kumar A, Srivastava S, Padma Sai Y. A novel bi-modal extended Huber loss function based refined mask RCNN approach for automatic multi instance detection and localization of breast cancer. Proc Inst Mech Eng H 2022; 236:1036-1053. [DOI: 10.1177/09544119221095416] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Breast cancer is an extremely aggressive cancer in women. Its abnormalities can be observed in the form of masses, calcifications, and lumps, and to reduce the mortality rate of women it must be detected at an early stage. The present paper proposes a novel bi-modal extended Huber loss function based refined Mask regional convolutional neural network for automatic multi-instance detection and localization of breast cancer. To refine the proposed method and increase its efficacy, three changes are made. First, a pre-processing step is performed on mammogram and ultrasound breast images. Second, the features of the region proposal network are separately mapped for an accurate region of interest. Third, to reduce overfitting and speed convergence, an extended Huber loss function is used in place of Smooth L1(x) in the boundary loss. To extend the functionality of the Huber loss, the delta parameter is set automatically with the aid of the median absolute deviation and a grid search algorithm, providing an optimal value of delta rather than a user-supplied one. The proposed method is compared with existing methods in terms of accuracy, true positive rate, true negative rate, precision, F-score, balanced classification rate, Youden's index, Jaccard index, and Dice coefficient on the CBIS-DDSM and ultrasound databases. The experimental results show that the proposed method is well suited for multi-instance detection, localization, and classification of breast cancer. It can be used as a diagnostic medium for clinical purposes, leading to a precise diagnosis of breast cancer abnormalities.
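The delta-selection idea, deriving the Huber loss's delta from the median absolute deviation (MAD) of the residuals rather than from the user, can be sketched as follows; the paper's extended form and its grid search differ in detail:

```python
# Huber loss with a data-driven delta from the MAD of the residuals.
import numpy as np

def huber(residuals, delta):
    a = np.abs(residuals)
    quad = 0.5 * residuals**2                 # quadratic near zero
    lin = delta * (a - 0.5 * delta)           # linear in the tails
    return np.where(a <= delta, quad, lin).mean()

residuals = np.random.standard_t(df=3, size=1000)   # heavy-tailed residuals
mad = np.median(np.abs(residuals - np.median(residuals)))
delta = 1.4826 * mad        # robust scale estimate; the paper further refines
                            # the choice with a grid search on validation data
print(f"delta = {delta:.3f}, loss = {huber(residuals, delta):.3f}")
```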
Affiliation(s)
- Pradeep Kumar: Department of Electronics and Communication Engineering, National Institute of Technology Patna, Patna, Bihar, India
- Abhinav Kumar: Department of Electronics and Communication Engineering, National Institute of Technology Patna, Patna, Bihar, India
- Subodh Srivastava: Department of Electronics and Communication Engineering, National Institute of Technology Patna, Patna, Bihar, India
- Yarlagadda Padma Sai: Department of Electronics and Communication Engineering, VNR VJIET, Hyderabad, Telangana, India

28
Mask Branch Network: Weakly Supervised Branch Network with a Template Mask for Classifying Masses in 3D Automated Breast Ultrasound. Appl Sci (Basel) 2022. [DOI: 10.3390/app12136332] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/04/2022]
Abstract
Automated breast ultrasound (ABUS) is rapidly being adopted for screening and diagnosing breast cancer. Breast masses, including cancers shown in ABUS scans, often appear as irregular hypoechoic areas that are hard to distinguish from background shading. We propose a novel branch network architecture that incorporates segmentation information about masses into the training process. The branch network is integrated into the classification network, providing a spatial attention effect, and it boosts the performance of existing classifiers by helping them learn meaningful features around the target breast mass. For the segmentation information, we leverage existing radiology reports without additional labeling effort. The reports, which are generated during the image reading process, include characteristics of breast masses such as shape and orientation, from which a template mask can be created in a rule-based manner. Experimental results show that the proposed branch network with a template mask significantly improves the performance of existing classifiers. We also provide a qualitative interpretation of the proposed method by visualizing the attention effect on target objects.
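A rule-based template mask of the kind described, an ellipse whose aspect follows the shape and orientation terms parsed from a report, can be sketched briefly; the report fields and geometry rules here are illustrative:

```python
# Build a rough elliptical template mask from report-derived descriptors.
import numpy as np

def template_mask(h, w, shape="oval", orientation="parallel"):
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = h / 2, w / 2
    # Orientation rule: "parallel" masses are wider than tall, and vice versa.
    ry, rx = (h * 0.2, w * 0.35) if orientation == "parallel" else (h * 0.35, w * 0.2)
    if shape == "round":
        ry = rx = min(h, w) * 0.3
    return (((yy - cy) / ry) ** 2 + ((xx - cx) / rx) ** 2 <= 1.0).astype(np.uint8)

report = {"shape": "oval", "orientation": "parallel"}   # parsed from report text
mask = template_mask(128, 128, **report)                # weak label for the branch
```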

29
Boice EN, Hernandez-Torres SI, Snider EJ. Comparison of Ultrasound Image Classifier Deep Learning Algorithms for Shrapnel Detection. J Imaging 2022; 8:jimaging8050140. [PMID: 35621904 PMCID: PMC9144026 DOI: 10.3390/jimaging8050140] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2022] [Revised: 05/17/2022] [Accepted: 05/18/2022] [Indexed: 02/06/2023] Open
Abstract
Ultrasound imaging is essential in emergency medicine and combat casualty care, where it is often used as a critical triage tool. However, identifying injuries such as shrapnel embedded in tissue or a pneumothorax can be challenging without extensive ultrasonography training, which may not be available in prolonged field care or emergency medicine scenarios. Artificial intelligence can simplify this by automating image interpretation, but only if it can be deployed in real time. We previously developed a deep learning neural network model specifically designed to identify shrapnel in ultrasound images, termed ShrapML. Here, we expand on that work to further optimize the model and compare its performance to that of conventional models trained on the ImageNet database, such as ResNet50. Through Bayesian optimization, the model's parameters were further refined, resulting in an F1 score of 0.98. We compared the proposed model to four conventional models, DarkNet-19, GoogleNet, MobileNetv2, and SqueezeNet, which were down-selected based on speed and testing accuracy. Although MobileNetv2 achieved a higher accuracy than ShrapML, there was a tradeoff between accuracy and speed, with ShrapML being 10x faster than MobileNetv2. In conclusion, real-time deployment of algorithms such as ShrapML can reduce the cognitive load for medical providers in high-stress emergency or military medicine scenarios.

30
Snider EJ, Hernandez-Torres SI, Boice EN. An image classification deep-learning algorithm for shrapnel detection from ultrasound images. Sci Rep 2022; 12:8427. [PMID: 35589931 PMCID: PMC9117994 DOI: 10.1038/s41598-022-12367-2] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2021] [Accepted: 05/06/2022] [Indexed: 01/01/2023] Open
Abstract
Ultrasound imaging is essential for non-invasively diagnosing injuries where advanced diagnostics may not be possible. However, image interpretation remains a challenge, as proper expertise may not be available. In response, artificial intelligence algorithms are being investigated to automate image analysis and diagnosis. Here, we highlight an image classification convolutional neural network for detecting shrapnel in ultrasound images. As an initial application, shrapnel of different types and sizes was embedded first in a tissue-mimicking phantom and then in swine thigh tissue. The algorithm architecture was optimized stepwise by minimizing validation loss and maximizing F1 score. The final algorithm design, trained on tissue phantom image sets, had an F1 score of 0.95 and an area under the ROC curve of 0.95, and it maintained higher than 90% accuracy for each of eight shrapnel types. When trained only on swine image sets, the optimized algorithm had even higher metrics: an F1 score and area under the ROC curve of 0.99. Overall, the algorithm achieved strong classification accuracy for both the tissue phantom and animal tissue. This framework can be applied to other trauma-relevant imaging applications, such as internal bleeding, to further simplify trauma medicine when resources and image interpretation are scarce.
Affiliation(s)
- Eric J Snider: Engineering Technology and Automation Combat Casualty Care Research Team, United States Army Institute of Surgical Research, Ft. Sam Houston, TX, USA
- Sofia I Hernandez-Torres: Engineering Technology and Automation Combat Casualty Care Research Team, United States Army Institute of Surgical Research, Ft. Sam Houston, TX, USA
- Emily N Boice: Engineering Technology and Automation Combat Casualty Care Research Team, United States Army Institute of Surgical Research, Ft. Sam Houston, TX, USA

31
Wang Q, Chen H, Luo G, Li B, Shang H, Shao H, Sun S, Wang Z, Wang K, Cheng W. Performance of novel deep learning network with the incorporation of the automatic segmentation network for diagnosis of breast cancer in automated breast ultrasound. Eur Radiol 2022; 32:7163-7172. [PMID: 35488916 DOI: 10.1007/s00330-022-08836-x] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/10/2022] [Revised: 04/15/2022] [Accepted: 04/21/2022] [Indexed: 11/29/2022]
Abstract
OBJECTIVE To develop novel deep learning networks (DLNs) incorporating an automatic segmentation network (ASN) for morphological analysis, and to determine their performance in diagnosing breast cancer in automated breast ultrasound (ABUS). METHODS A total of 769 breast tumors were enrolled in this study and randomly divided into a training set and a test set (600 vs. 169). The novel DLNs (ResNet34 v2, ResNet50 v2, ResNet101 v2) add a new ASN to the traditional ResNet networks to extract morphological information about breast tumors. The accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), area under the receiver operating characteristic (ROC) curve (AUC), and average precision (AP) were calculated, and the diagnostic performance of the novel DLNs was compared with that of two radiologists with different levels of experience. RESULTS The ResNet34 v2 model had higher specificity (76.81%) and PPV (82.22%) than the other two, the ResNet50 v2 model had higher accuracy (78.11%) and NPV (72.86%), and the ResNet101 v2 model had higher sensitivity (85.00%). According to the AUCs and APs, the novel ResNet101 v2 model produced the best result (AUC 0.85 and AP 0.90) among the six DLNs compared. The novel DLNs outperformed the novice radiologist, increasing the F1 score from 0.77 to 0.78, 0.81, and 0.82, but their diagnostic performance was worse than that of the experienced radiologist. CONCLUSIONS The novel DLNs performed better than traditional DLNs and may help novice radiologists improve their diagnostic performance for breast cancer in ABUS. KEY POINTS • A novel automatic segmentation network to extract morphological information was successfully developed and implemented with ResNet deep learning networks. • The novel deep learning networks in our research performed better than traditional deep learning networks in the diagnosis of breast cancer from ABUS images. • The novel deep learning networks in our research may help novice radiologists improve their diagnostic performance.
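For reference, all of the reported threshold metrics follow from a single 2x2 confusion matrix; the counts below are invented for illustration:

```python
# Diagnostic metrics from true/false positive and negative counts.
def diagnostic_metrics(tp, fp, tn, fn):
    return {
        "accuracy":    (tp + tn) / (tp + fp + tn + fn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV":         tp / (tp + fp),
        "NPV":         tn / (tn + fn),
        "F1":          2 * tp / (2 * tp + fp + fn),
    }

print(diagnostic_metrics(tp=85, fp=18, tn=53, fn=13))
```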
Affiliation(s)
- Qiucheng Wang: Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- He Chen: Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Gongning Luo: School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Bo Li: Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Haitao Shang: Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Hua Shao: Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Shanshan Sun: Department of Breast Surgery, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Zhongshuai Wang: School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Kuanquan Wang: School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Wen Cheng: Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China

32
Balkenende L, Teuwen J, Mann RM. Application of Deep Learning in Breast Cancer Imaging. Semin Nucl Med 2022; 52:584-596. [PMID: 35339259 DOI: 10.1053/j.semnuclmed.2022.02.003] [Citation(s) in RCA: 44] [Impact Index Per Article: 14.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2022] [Revised: 02/15/2022] [Accepted: 02/16/2022] [Indexed: 11/11/2022]
Abstract
This review gives an overview of the current state of deep learning research in breast cancer imaging. Breast imaging plays a major role in detecting breast cancer at an earlier stage, as well as in monitoring and evaluating breast cancer during treatment. The most commonly used modalities for breast imaging are digital mammography, digital breast tomosynthesis, ultrasound, and magnetic resonance imaging. Nuclear medicine imaging techniques are used for the detection and classification of axillary lymph nodes and for distant staging in breast cancer imaging. All of these techniques are currently digitized, making it possible to implement deep learning (DL), a subset of artificial intelligence, in breast imaging. DL is nowadays embedded in a plethora of different tasks, such as lesion classification and segmentation, image reconstruction and generation, cancer risk prediction, and the prediction and assessment of therapy response. Studies show similar and even better performance of DL algorithms compared to radiologists, although it is clear that large trials are needed, especially for ultrasound and magnetic resonance imaging, to determine exactly the added value of DL in breast cancer imaging. Studies on DL in nuclear medicine techniques are only sparsely available, and further research is mandatory. Legal and ethical issues need to be considered before the role of DL can expand to its full potential in clinical breast care practice.
Affiliation(s)
- Luuk Balkenende: Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands
- Jonas Teuwen: Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Radiation Oncology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands
- Ritse M Mann: Department of Radiology, Netherlands Cancer Institute (NKI), Amsterdam, The Netherlands; Department of Medical Imaging, Radboud University Medical Center, Nijmegen, The Netherlands

33
Luo X, Xu M, Tang G, Wang Y, Wang N, Ni D, Li X, Li AH. The lesion detection efficacy of deep learning on automatic breast ultrasound and factors affecting its efficacy: a pilot study. Br J Radiol 2022; 95:20210438. [PMID: 34860574 PMCID: PMC8822545 DOI: 10.1259/bjr.20210438] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/03/2023] Open
Abstract
OBJECTIVES The aim of this study was to investigate the detection efficacy of deep learning (DL) for automatic breast ultrasound (ABUS) and the factors affecting that efficacy. METHODS Females who underwent ABUS and handheld ultrasound from May 2016 to June 2017 (N = 397) were enrolled and divided into training (n = 163 patients with breast cancer and 33 with benign lesions), test (n = 57) and control (n = 144) groups. A convolutional neural network was optimized to detect lesions in ABUS. Sensitivity and false positives (FPs) were evaluated and compared for different breast tissue compositions, lesion sizes, morphologies and echo patterns. RESULTS In the training set, with 688 lesion regions (LRs), the network achieved sensitivities of 93.8%, 97.2% and 100%, based on volume, lesion and patient, respectively, with 1.9 FPs per volume. In the test group, with 247 LRs, the sensitivities were 92.7%, 94.5% and 96.5%, respectively, with 2.4 FPs per volume. The control group, with 900 volumes, showed 0.24 FPs per volume. The sensitivity was 98% for lesions > 1 cm3 but 87% for those ≤1 cm3 (p < 0.05). Similar sensitivities and FPs were observed for different breast tissue compositions (homogeneous, 97.5%, 2.1; heterogeneous, 93.6%, 2.1), lesion morphologies (mass, 96.3%, 2.1; non-mass, 95.8%, 2.0) and echo patterns (homogeneous, 96.1%, 2.1; heterogeneous, 96.8%, 2.1). CONCLUSIONS DL had high detection sensitivity with a low FP rate but was affected by lesion size. ADVANCES IN KNOWLEDGE DL is technically feasible for the automatic detection of lesions in ABUS.
Affiliation(s)
- Yi Wang: National Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China, and the Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen, China
- Na Wang: National Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China, and the Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen, China
- Dong Ni: National Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China, and the Medical UltraSound Image Computing (MUSIC) Lab, Shenzhen, China

34
Wang S, Zhu Y, Lee S, Elton DC, Shen TC, Tang Y, Peng Y, Lu Z, Summers RM. Global-Local attention network with multi-task uncertainty loss for abnormal lymph node detection in MR images. Med Image Anal 2022; 77:102345. [PMID: 35051899 PMCID: PMC8988884 DOI: 10.1016/j.media.2021.102345] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2021] [Revised: 12/27/2021] [Accepted: 12/28/2021] [Indexed: 12/21/2022]
Abstract
Accurate and reliable detection of abnormal lymph nodes in magnetic resonance (MR) images is very helpful for the diagnosis and treatment of numerous diseases. However, it remains a challenging task because abnormal lymph nodes and other tissues have similar appearances. In this paper, we propose a novel network based on an improved Mask R-CNN framework for the detection of abnormal lymph nodes in MR images. Instead of laboriously collecting large-scale pixel-wise annotated training data, pseudo masks generated from RECIST bookmarks already on hand are used as supervision. Different from the standard Mask R-CNN architecture, our proposed network contains two main innovations: 1) global-local attention, which encodes global- and local-scale context for detection and uses a channel attention mechanism to extract more discriminative features, and 2) a multi-task uncertainty loss, which adaptively weights multiple objective loss functions based on the uncertainty of each task to automatically find the optimal solution. For the experiments, we built a new abnormal lymph node dataset with 821 RECIST bookmarks covering 41 different types of abnormal abdominal lymph nodes from 584 patients. The experimental results show the superior performance of our algorithm over compared state-of-the-art approaches.
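The multi-task uncertainty loss named in innovation 2 is typically implemented in the style of Kendall et al., with a learned homoscedastic uncertainty per task; the exact form used in the paper may differ. A minimal sketch:

```python
# Uncertainty-weighted multi-task loss (PyTorch): each task loss is scaled
# by exp(-log_var) and regularized by log_var, so the balance is learned.
import torch
import torch.nn as nn

class UncertaintyWeightedLoss(nn.Module):
    def __init__(self, num_tasks: int):
        super().__init__()
        # log(sigma^2) per task, learned with the network weights.
        self.log_vars = nn.Parameter(torch.zeros(num_tasks))

    def forward(self, task_losses):
        total = 0.0
        for loss, log_var in zip(task_losses, self.log_vars):
            total = total + torch.exp(-log_var) * loss + log_var
        return total

criterion = UncertaintyWeightedLoss(num_tasks=3)  # e.g., cls + box + mask
losses = [torch.tensor(0.9, requires_grad=True),
          torch.tensor(2.1, requires_grad=True),
          torch.tensor(0.4, requires_grad=True)]
total = criterion(losses)
total.backward()
```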
Affiliation(s)
- Shuai Wang: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892, USA; School of Mechanical, Electrical and Information Engineering, Shandong University, Weihai 264209, PR China
- Yingying Zhu: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892, USA
- Sungwon Lee: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892, USA
- Daniel C Elton: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892, USA
- Thomas C Shen: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892, USA
- Youbao Tang: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892, USA
- Yifan Peng: National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
- Zhiyong Lu: National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD 20894, USA
- Ronald M Summers: Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892, USA

35
Liu R, Liu M, Sheng B, Li H, Li P, Song H, Zhang P, Jiang L, Shen D. NHBS-Net: A Feature Fusion Attention Network for Ultrasound Neonatal Hip Bone Segmentation. IEEE Trans Med Imaging 2021; 40:3446-3458. [PMID: 34106849 DOI: 10.1109/tmi.2021.3087857] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Ultrasound is widely used for diagnosing developmental dysplasia of the hip (DDH) because it does not use radiation. Owing to its low cost and convenience, 2-D ultrasound is still the most common examination in DDH diagnosis. In clinical use, the complexity of both ultrasound image standardization and measurement leads to a high error rate for sonographers. Automatic segmentation of key structures in the hip joint can be used to develop a standard plane detection method that helps sonographers decrease their error rate. However, current automatic segmentation methods still face challenges in robustness and accuracy. We therefore propose, for the first time, a neonatal hip bone segmentation network (NHBS-Net) for segmenting seven key structures. We design three improvements, an enhanced dual attention module, a two-class feature fusion module, and a coordinate convolution output head, to help segment the different structures. Compared with current state-of-the-art networks, NHBS-Net shows outstanding accuracy and generalizability in our experiments. Additionally, image standardization is a common need in ultrasonography, and the ability of segmentation-based standard plane detection was tested on a 50-image standard dataset. The experiments show that our method can help healthcare workers decrease their error rate from 6%-10% to 2%. In addition, the segmentation performance on another ultrasound dataset (fetal heart) demonstrates the generalizability of our network.
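The paper's enhanced dual attention module is more elaborate, but the underlying channel-attention mechanism, re-weighting feature channels from their global statistics, can be shown with a minimal squeeze-and-excitation style block:

```python
# Minimal channel attention (squeeze-and-excitation style) in PyTorch.
import torch
import torch.nn as nn

class ChannelAttention(nn.Module):
    def __init__(self, channels: int, reduction: int = 8):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
            nn.Sigmoid(),
        )

    def forward(self, x):                       # x: (B, C, H, W)
        w = self.fc(x.mean(dim=(2, 3)))         # squeeze -> (B, C) weights
        return x * w[:, :, None, None]          # excite: re-weight channels

feat = torch.randn(2, 64, 32, 32)
out = ChannelAttention(64)(feat)
```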

36
Xu J, Xie H, Liu C, Yang F, Zhang S, Chen X, Zhang Y. Hip Landmark Detection With Dependency Mining in Ultrasound Image. IEEE Trans Med Imaging 2021; 40:3762-3774. [PMID: 34264824 DOI: 10.1109/tmi.2021.3097355] [Citation(s) in RCA: 12] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Developmental dysplasia of the hip (DDH) is a common and serious disease in infants, and hip landmark detection plays a critical role in diagnosing neonatal hip development in ultrasound images. However, local confusion and regional weakening make this task challenging. To address these challenges, we exploit the stable structure of the hip and distinguishable local features to provide dependencies for hip landmark detection. In this paper, we propose a novel architecture named Dependency Mining ResNet (DM-ResNet), which performs end-to-end dependency mining for more accurate and much faster hip landmark detection. First, we convert landmark detection into heatmap estimation with a ResNet, building a strong baseline architecture for fast and accurate detection. Second, a dependency mining module mines the dependencies and leverages both local and global information to reduce local confusion and strengthen the weakened regions. Third, we propose a simple but effective local voting algorithm (LVA) that seeks a trade-off between long-range and short-range dependencies in hip ultrasound images. In addition, a dataset of 2000 annotated hip ultrasound images was constructed in this work; it is the first public hip ultrasound dataset for open research. Experimental results show that our method achieves excellent precision in hip landmark detection (average point error of 0.719 mm and a successful detection rate within 1 mm of 79.9%).
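Heatmap-based landmark detection decodes each landmark from its predicted heatmap; the simplest decoder is an argmax, which the paper's local voting algorithm then refines. A minimal sketch with a stand-in network output:

```python
# Decode landmark coordinates from per-landmark heatmaps via argmax.
import numpy as np

def decode_landmarks(heatmaps, mm_per_px=0.1):
    """heatmaps: (K, H, W) array, one map per landmark."""
    points = []
    for hm in heatmaps:
        y, x = np.unravel_index(np.argmax(hm), hm.shape)
        points.append((x * mm_per_px, y * mm_per_px))   # pixel -> mm
    return points

K, H, W = 5, 256, 256
heatmaps = np.random.rand(K, H, W)       # stand-in network output
print(decode_landmarks(heatmaps))
```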

37
Liang CW, Fang PW, Huang HY, Lo CM. Deep Convolutional Neural Networks Detect Tumor Genotype from Pathological Tissue Images in Gastrointestinal Stromal Tumors. Cancers (Basel) 2021; 13:5787. [PMID: 34830948 PMCID: PMC8616403 DOI: 10.3390/cancers13225787] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2021] [Revised: 11/07/2021] [Accepted: 11/16/2021] [Indexed: 11/17/2022] Open
Abstract
Gastrointestinal stromal tumors (GIST) are common mesenchymal tumors, and their effective treatment depends upon the mutational subtype of the KIT/PDGFRA genes. We established deep convolutional neural network (DCNN) models to rapidly predict drug-sensitive mutation subtypes from images of pathological tissue. A total of 5153 pathological images of 365 different GISTs from three different laboratories were collected and divided into training and validation sets. A transfer learning mechanism based on DCNNs was used with four different network architectures to identify cases with drug-sensitive mutations. Accuracy ranged from 75% to 87%, although cross-institutional inconsistency was observed. Using gray-scale images resulted in a 7% drop in accuracy (accuracy 80%, sensitivity 87%, specificity 73%). Using images containing only nuclei (accuracy 81%, sensitivity 87%, specificity 73%) or only cytoplasm (accuracy 79%, sensitivity 88%, specificity 67%) produced drops in accuracy of 6% and 8%, respectively, suggesting buffering effects across subcellular components in DCNN interpretation. The proposed DCNN model successfully inferred cases with drug-sensitive mutations with high accuracy, and the contributions of image color and subcellular components were also revealed. These results will help generate a cheaper and quicker screening method for tumor gene testing.
Affiliation(s)
- Cher-Wei Liang: Department of Pathology, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City 243, Taiwan; School of Medicine, College of Medicine, Fu Jen Catholic University, New Taipei City 242, Taiwan; Graduate Institute of Pathology, College of Medicine, National Taiwan University, Taipei 100, Taiwan
- Pei-Wei Fang: Department of Pathology, Fu Jen Catholic University Hospital, Fu Jen Catholic University, New Taipei City 243, Taiwan
- Hsuan-Ying Huang: Department of Anatomic Pathology, Chang Gung Memorial Hospital, College of Medicine, Chang Gung University, Kaohsiung 833, Taiwan
- Chung-Ming Lo: Graduate Institute of Library, Information and Archival Studies, National Chengchi University, Taipei 116, Taiwan

38
Rahman A, Rahman M, Kundu D, Karim MR, Band SS, Sookhak M. Study on IoT for SARS-CoV-2 with healthcare: present and future perspective. Math Biosci Eng 2021; 18:9697-9726. [PMID: 34814364 DOI: 10.3934/mbe.2021475] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/13/2023]
Abstract
The ever-evolving and contagious nature of the coronavirus (COVID-19) has immobilized the world around us. As the daily number of infected cases increases, containing the spread of this virus is proving to be an overwhelming task, and healthcare facilities around the world are overburdened with the ominous responsibility of combating an ever-worsening scenario. To aid the healthcare system, Internet of Things (IoT) technology provides better solutions, and efficient IoT-based tracing and testing of COVID-19 patients is gaining rapid pace. This study discusses the role of IoT technology in healthcare during the SARS-CoV-2 pandemic. It overviews the research, platforms, services, and products in which IoT is used to combat the COVID-19 pandemic, and considers how IoT and healthcare can be intelligently integrated for COVID-19-related applications. We focus on a wide range of IoT applications for SARS-CoV-2 tracing, testing, and treatment. Finally, we consider further challenges, issues, and directions for IoT to support the healthcare system during COVID-19 and future pandemics.
Affiliation(s)
- Anichur Rahman: Department of Computer Science and Engineering, National Institute of Textile Engineering and Research (NITER), Constituent Institute of Dhaka University, Savar, Dhaka-1350, Bangladesh; Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Muaz Rahman: Department of Electrical and Electronic Engineering, National Institute of Textile Engineering and Research (NITER), Constituent Institute of Dhaka University, Savar, Dhaka-1350, Bangladesh
- Dipanjali Kundu: Department of Computer Science and Engineering, National Institute of Textile Engineering and Research (NITER), Constituent Institute of Dhaka University, Savar, Dhaka-1350, Bangladesh
- Md Razaul Karim: Department of Computer Science and Engineering, Mawlana Bhashani Science and Technology University, Tangail, Bangladesh
- Shahab S Band: Future Technology Research Center, College of Future, National Yunlin University of Science and Technology, 123 University Road, Section 3, Douliou, Yunlin 64002, Taiwan
- Mehdi Sookhak: Dept. of Computer Science, Texas A & M University-Corpus Christi, 6300 Ocean Drive, Corpus Christi, Texas, USA, 78412

39
You H, Yu L, Tian S, Ma X, Xing Y, Xin N, Cai W. MC-Net: Multiple max-pooling integration module and cross multi-scale deconvolution network. Knowl Based Syst 2021. [DOI: 10.1016/j.knosys.2021.107456] [Citation(s) in RCA: 39] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]

40
Ma L, Zhang F. End-to-end predictive intelligence diagnosis in brain tumor using lightweight neural network. Appl Soft Comput 2021. [DOI: 10.1016/j.asoc.2021.107666] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]

41
Fouad M, El Ghany MAA, Huebner M, Schmitz G. A Deep Learning Signal-Based Approach to Fast Harmonic Imaging. 2021 IEEE International Ultrasonics Symposium (IUS), 2021. [DOI: 10.1109/ius52206.2021.9593348] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/01/2023]

42
Xu Y, Wu T, Charlton JR, Gao F, Bennett KM. Small Blob Detector Using Bi-Threshold Constrained Adaptive Scales. IEEE Trans Biomed Eng 2021; 68:2654-2665. [PMID: 33347401 PMCID: PMC8461780 DOI: 10.1109/tbme.2020.3046252] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/17/2022]
Abstract
Recent advances in medical imaging technology hold great promise for medical practice. Imaging biomarkers are being discovered to inform disease diagnosis, prognosis, and treatment assessment, and detecting and segmenting objects in images are often the first steps in quantitative measurement of these biomarkers. The challenges of detecting objects in images, particularly small objects known as blobs, include low image resolution, image noise, and overlap among the blobs. This research proposes a Bi-Threshold Constrained Adaptive Scale (BTCAS) blob detector that exploits the relationship between the U-Net threshold and the Difference of Gaussian (DoG) scale to derive a multi-threshold, multi-scale small blob detector. With lower and upper bounds on the probability thresholds from the U-Net, two binarized maps are rendered to constrain the distance between blob centers. Each blob is transformed to a DoG space with an adaptively identified local optimum scale, a Hessian convexity map is rendered using the adaptive scale, and the under-segmentation typical of the U-Net is resolved. To validate the performance of the proposed BTCAS, a 3D simulated dataset of blobs (n = 20), a 3D MRI dataset of human kidneys, and a 3D MRI dataset of mouse kidneys were studied. BTCAS was compared against four state-of-the-art methods, HDoG, U-Net with standard thresholding, U-Net with optimal thresholding, and UH-DoG, using precision, recall, F-score, Dice and IoU. We conclude that BTCAS statistically outperforms the compared detectors.
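The classical Difference-of-Gaussian (DoG) step that BTCAS builds on, here with fixed rather than adaptive scales and a synthetic image, can be sketched as follows:

```python
# DoG blob detection: band-pass the image at a chosen scale, then keep
# local maxima above a response threshold. Scales and image are synthetic.
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

image = np.zeros((128, 128))
image[40, 40] = image[80, 90] = 1.0                  # two point "blobs"
image = gaussian_filter(image, 3) + 0.01 * np.random.rand(128, 128)

sigma = 3.0                                          # blob scale
dog = gaussian_filter(image, sigma) - gaussian_filter(image, 1.6 * sigma)
peaks = (dog == maximum_filter(dog, size=9)) & (dog > dog.max() * 0.5)
print(np.argwhere(peaks))                            # detected blob centers
```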
Collapse
Affiliation(s)
- Yanzhe Xu
- School of Computing, Informatics and Decision Systems Engineering, and ASU-Mayo Center for Innovative Imaging, Arizona State University, Tempe, AZ, 85281, USA
| | - Teresa Wu
- School of Computing, Informatics and Decision Systems Engineering, and ASU-Mayo Center for Innovative Imaging, Arizona State University, Tempe, AZ, 85281, USA
| | - Jennifer R. Charlton
- Department of Pediatrics, Division of Nephrology, University of Virginia, Charlottesville, VA 22908-0386, USA
| | - Fei Gao
- School of Computing, Informatics and Decision Systems Engineering, and ASU-Mayo Center for Innovative Imaging, Arizona State University, Tempe, AZ, 85281, USA
| | - Kevin M. Bennett
- Department of Radiology, Washington University, St. Louis, MO, 63130, USA
| |
Collapse
|
43
|
Kazemimoghadam M, Chi W, Rahimi A, Kim N, Alluri P, Nwachukwu C, Lu W, Gu X. Saliency-guided deep learning network for automatic tumor bed volume delineation in post-operative breast irradiation. Phys Med Biol 2021; 66:10.1088/1361-6560/ac176d. [PMID: 34298539 PMCID: PMC8639319 DOI: 10.1088/1361-6560/ac176d] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Accepted: 07/23/2021] [Indexed: 11/12/2022]
Abstract
Efficient, reliable, and reproducible target volume delineation is a key step in the effective planning of breast radiotherapy. However, post-operative breast target delineation is challenging because the contrast between the tumor bed volume (TBV) and normal breast tissue is relatively low in CT images. In this study, we propose to mimic the marker-guidance procedure in manual target delineation. We developed a saliency-based deep learning segmentation (SDL-Seg) algorithm for accurate TBV segmentation in post-operative breast irradiation. The SDL-Seg algorithm incorporates saliency information, in the form of markers' location cues, into a U-Net model. The design forces the model to encode location-related features, underscoring regions with high saliency levels and suppressing low-saliency regions. The saliency maps were generated by identifying markers on CT images. The markers' locations were then converted to probability maps using a distance transformation coupled with a Gaussian filter. Subsequently, the CT images and the corresponding saliency maps formed a multi-channel input for the SDL-Seg network. Our in-house dataset comprised 145 prone CT images from 29 post-operative breast cancer patients who received a 5-fraction partial breast irradiation (PBI) regimen on GammaPod. The 29 patients were randomly split into training (19), validation (5), and test (5) sets. The performance of the proposed method was compared against a basic U-Net. Our model achieved a mean (standard deviation) of 76.4% (±2.7%), 6.76 (±1.83) mm, and 1.9 (±0.66) mm for the Dice similarity coefficient, 95th percentile Hausdorff distance, and average symmetric surface distance, respectively, on the test set, with a computation time below 11 seconds per CT volume. SDL-Seg showed superior performance relative to the basic U-Net on all evaluation metrics while preserving a low computation cost. The findings demonstrate that SDL-Seg is a promising approach for improving the efficiency and accuracy of the online treatment planning procedure for PBI, such as GammaPod-based PBI.
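The saliency-map construction described above (marker locations, distance transform, Gaussian-smoothed probability map) can be sketched in a few lines. This is an illustrative reading under stated assumptions, not the authors' code: the exponential decay constant, the smoothing sigma, and the function name `marker_saliency` are all placeholders.

import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def marker_saliency(shape, marker_coords, sigma=5.0):
    markers = np.zeros(shape, dtype=np.uint8)
    for c in marker_coords:
        markers[tuple(c)] = 1
    # distance from every voxel to its nearest marker
    dist = distance_transform_edt(markers == 0)
    # map distances to a [0, 1] saliency map, then smooth it
    saliency = np.exp(-(dist ** 2) / (2 * sigma ** 2))
    return gaussian_filter(saliency, sigma=1.0)

# The CT volume and its saliency map can then be stacked as a multi-channel input:
# x = np.stack([ct_volume, marker_saliency(ct_volume.shape, coords)], axis=0)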
Collapse
Affiliation(s)
- Mahdieh Kazemimoghadam
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
| | - Weicheng Chi
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
- School of Software Engineering, South China University of Technology, Guangzhou, Guangdong 510006, People's Republic of China
| | - Asal Rahimi
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
| | - Nathan Kim
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
| | - Prasanna Alluri
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
| | - Chika Nwachukwu
- Department of Radiation Oncology, University of Texas Southwestern Medical Center, Dallas, TX, United States of America
| | - Xuejun Gu
- Stanford University, Palo Alto, CA, United States of America
| |
Collapse
|
44
|
|
45
|
Zhang P, Ma Z, Zhang Y, Chen X, Wang G. Improved Inception V3 method and its effect on radiologists' performance of tumor classification with automated breast ultrasound system. Gland Surg 2021; 10:2232-2245. [PMID: 34422594 PMCID: PMC8340346 DOI: 10.21037/gs-21-328] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2021] [Accepted: 06/17/2021] [Indexed: 11/06/2022]
Abstract
BACKGROUND The automated breast ultrasound system (ABUS) is recognized as a valuable detection tool in addition to mammography. The purpose of this study was to propose a novel computer-aided diagnosis (CAD) system that extracts textural features from ABUS images and to investigate the efficiency of using this CAD system for breast cancer detection. METHODS This retrospective study involved 149 breast nodules [maximum diameter: mean size 18.89 mm, standard deviation (SD) 10.238, and range 5-59 mm] in 135 patients. We assigned 3 novice readers (<3 years of experience) and 3 experienced readers (≥10 years of experience) to review the imaging data and stratify the 149 breast nodules as either malignant or benign. The Improved Inception V3 (II3) method was developed and used as an assistive tool to help the 6 readers reinterpret the images. RESULTS Our method (II3) achieved a final accuracy of 88.6%. The 3 novice readers had an average accuracy of 71.37%±4.067%, while that of the 3 experienced readers was 83.03%±3.371%, on the first reading. With the help of II3 on the second reading, the average accuracy of the novice readers increased to 84.13%±1.662% and that of the experienced readers increased to 89.50%±0.346%. The areas under the curve (AUCs) were similar when compared with linear algorithms. The mean AUC of the novice readers improved from 0.7751 (without II3) to 0.8232 (with II3), and the mean AUC of the experienced readers improved from 0.8939 (without II3) to 0.9211 (with II3). The mean AUC for all readers improved in the second-reading mode (from 0.8345 to 0.8722, P=0.0081). CONCLUSIONS With the help of II3, the diagnostic accuracy of both groups improved, and II3 was more helpful for novice readers than for experienced readers. Our results show that II3 is valuable in differentiating benign from malignant breast nodules and that it also improves the experience and skill of some novice radiologists. II3 cannot completely replace experience in the diagnostic process and will retain an auxiliary role in the clinic at present.
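A second-reader classifier of this kind is commonly built by fine-tuning a pretrained Inception V3 backbone. The sketch below is a generic transfer-learning setup under that assumption, not the authors' "Improved" variant, whose specific modifications the abstract does not describe; the input size, pooling head, and binary output are illustrative choices.

import tensorflow as tf

# ImageNet-pretrained Inception V3 backbone without its classification head
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", input_shape=(299, 299, 3))
base.trainable = False  # freeze pretrained features; unfreeze later to fine-tune

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(1, activation="sigmoid"),  # benign vs. malignant
])
model.compile(optimizer="adam",
              loss="binary_crossentropy",
              metrics=[tf.keras.metrics.AUC()])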
Collapse
Affiliation(s)
- Panpan Zhang
- Department of Ultrasound, Taizhou Hospital of Zhejiang Province, Zhejiang University, Linhai, China
| | - Zhaosheng Ma
- Department of Ultrasound, Taizhou Hospital of Zhejiang Province, Zhejiang University, Linhai, China
| | - Yingtao Zhang
- Department of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
| | - Xiaodan Chen
- Department of Computer Science and Technology, Harbin Institute of Technology, Harbin, China
| | - Gang Wang
- Department of Ultrasound, Taizhou Hospital of Zhejiang Province, Zhejiang University, Linhai, China
| |
Collapse
|
46
|
A Review of Explainable Deep Learning Cancer Detection Models in Medical Imaging. Appl Sci (Basel) 2021. [DOI: 10.3390/app11104573] [Citation(s) in RCA: 23] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Abstract
In recent years, deep learning has demonstrated remarkable accuracy in analyzing images for cancer detection tasks. The accuracy achieved rivals that of radiologists and is suitable for implementation as a clinical tool. However, a significant problem is that these models are black-box algorithms and are therefore intrinsically unexplainable. This creates a barrier to clinical implementation, owing to the lack of trust and transparency that characterizes black-box algorithms. Additionally, recent regulations prevent the implementation of unexplainable models in clinical settings, which further demonstrates the need for explainability. To mitigate these concerns, recent studies have attempted to overcome these issues by modifying deep learning architectures or by providing after-the-fact explanations. A review of the deep learning explanation literature focused on cancer detection using MR images is presented here. The gap between what clinicians deem explainable and what current methods provide is discussed, and suggestions for closing this gap are provided.
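Gradient-weighted class activation mapping (Grad-CAM) is one widely used after-the-fact explanation method of the kind this review surveys. The sketch below is a minimal, generic implementation for a functional Keras CNN; `model`, `layer_name`, and the single-image input are assumptions made for illustration.

import numpy as np
import tensorflow as tf

def grad_cam(model, image, layer_name, class_index=0):
    # auxiliary model exposing both the chosen conv feature map and the prediction
    grad_model = tf.keras.Model(
        model.inputs, [model.get_layer(layer_name).output, model.output])
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[None, ...])
        score = preds[:, class_index]
    grads = tape.gradient(score, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))              # pool gradients per channel
    cam = tf.reduce_sum(conv_out * weights[:, None, None, :], axis=-1)
    cam = tf.nn.relu(cam)[0]                                  # keep positive evidence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()        # normalize to [0, 1]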
Collapse
|
47
|
Lei Y, He X, Yao J, Wang T, Wang L, Li W, Curran WJ, Liu T, Xu D, Yang X. Breast tumor segmentation in 3D automatic breast ultrasound using Mask scoring R-CNN. Med Phys 2021; 48:204-214. [PMID: 33128230 DOI: 10.1002/mp.14569] [Citation(s) in RCA: 57] [Impact Index Per Article: 14.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Revised: 10/20/2020] [Accepted: 10/20/2020] [Indexed: 12/24/2022] Open
Abstract
PURPOSE Automated breast ultrasound (ABUS) imaging has become an essential tool in breast cancer diagnosis because it provides information complementary to other imaging modalities. Lesion segmentation on ABUS is a prerequisite step in breast cancer computer-aided diagnosis (CAD). This work aims to develop a deep learning-based method for automatic breast tumor segmentation in three-dimensional (3D) ABUS. METHODS For breast tumor segmentation in ABUS, we developed a Mask scoring region-based convolutional neural network (R-CNN) that consists of five subnetworks: a backbone, a region proposal network, a region convolutional neural network head, a mask head, and a mask score head. A network block building a direct correlation between mask quality and region class was integrated into the Mask scoring R-CNN-based framework for the segmentation of new ABUS images with ambiguous regions of interest (ROIs). For segmentation accuracy evaluation, we retrospectively investigated 70 patients with breast tumors confirmed by needle biopsy and manually delineated on ABUS, of whom 40 were used for fivefold cross-validation and 30 for a hold-out test. The agreement between the automatic breast tumor segmentations and the manual contours was quantified by (I) six metrics, namely the Dice similarity coefficient (DSC), Jaccard index, 95% Hausdorff distance (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and center-of-mass distance (CMD); and (II) Pearson correlation analysis and Bland-Altman analysis. RESULTS The mean (median) DSC was 85% ± 10.4% (89.4%) and 82.1% ± 14.5% (85.6%) for cross-validation and the hold-out test, respectively. The corresponding HD95, MSD, RMSD, and CMD of the two tests were 1.646 ± 1.191 and 1.665 ± 1.129 mm, 0.489 ± 0.406 and 0.475 ± 0.371 mm, 0.755 ± 0.755 and 0.751 ± 0.508 mm, and 0.672 ± 0.612 and 0.665 ± 0.729 mm. The volumetric difference (mean ± 1.96 standard deviations) was 0.47 cc ([-0.77, 1.71]) for cross-validation and 0.23 cc ([-0.23, 0.69]) for the hold-out test. CONCLUSION We developed a novel Mask scoring R-CNN approach for automated segmentation of breast tumors in ABUS images and demonstrated its accuracy for breast tumor segmentation. Our learning-based method can potentially assist the clinical CAD of breast cancer using 3D ABUS imaging.
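The overlap and centroid metrics cited above are straightforward to compute for binary masks; the surface-distance metrics (HD95, MSD, RMSD) require nearest-surface-point distances and are omitted here. A minimal sketch, with function names of my own choosing:

import numpy as np

def dice(a, b):
    # Dice similarity coefficient between two binary masks
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum() + 1e-8)

def jaccard(a, b):
    # Jaccard index (intersection over union) between two binary masks
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return inter / (np.logical_or(a, b).sum() + 1e-8)

def center_of_mass_distance(a, b):
    # Euclidean distance between the centroids of two binary masks (CMD)
    ca = np.mean(np.argwhere(a), axis=0)
    cb = np.mean(np.argwhere(b), axis=0)
    return float(np.linalg.norm(ca - cb))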
Collapse
Affiliation(s)
- Yang Lei
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Xiuxiu He
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Jincao Yao
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital
- Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
| | - Tonghe Wang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Lijing Wang
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital
- Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
| | - Wei Li
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital
- Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
| | - Walter J Curran
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Tian Liu
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| | - Dong Xu
- Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital
- Institute of Cancer and Basic Medicine (IBMC), Chinese Academy of Sciences, Hangzhou, 310022, China
| | - Xiaofeng Yang
- Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA
| |
Collapse
|
48
|
Chiu LY, Kuo WH, Chen CN, Chang KJ, Chen A. A 2-Phase Merge Filter Approach to Computer-Aided Detection of Breast Tumors on 3-Dimensional Ultrasound Imaging. J Ultrasound Med 2020; 39:2439-2455. [PMID: 32567133 DOI: 10.1002/jum.15365] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/07/2019] [Revised: 05/13/2020] [Accepted: 05/15/2020] [Indexed: 06/11/2023]
Abstract
OBJECTIVES The role of image analysis in 3-dimensional (3D) automated breast ultrasound (ABUS) is increasingly important because of the modality's widespread use as a screening tool in whole-breast examinations. However, reviewing the large number of images acquired by ABUS is time-consuming and sometimes error-prone. The aim of this study, therefore, was to develop an efficient computer-aided detection (CADe) algorithm to assist the review process. METHODS The proposed CADe algorithm consisted of 4 major steps. First, initial tumor candidates were formed by extracting and merging hypoechoic square cells on 2-dimensional (2D) transverse images. Second, a feature-based classifier was constructed using 2D features to filter out nontumor candidates. Third, the remaining 2D candidates were merged longitudinally into 3D masses. Finally, a 3D feature-based classifier was used to further filter out nontumor masses and obtain the final detected masses. The proposed method was validated with 176 passes of breast images acquired by an Acuson S2000 automated breast volume scanner (Siemens Medical Solutions USA, Inc., Malvern, PA), including 44 normal passes and 132 abnormal passes containing 162 proven lesions (79 benign and 83 malignant). RESULTS The proposed CADe system achieved overall sensitivities of 100% and 90% with 6.71 and 5.14 false positives (FPs) per pass, respectively. Our results also showed that the average number of FPs per normal pass (7.16) was higher than that per abnormal pass (6.56) at 100% sensitivity. CONCLUSIONS The proposed CADe system has great potential to become a companion tool for ABUS imaging by ensuring high sensitivity with a relatively small number of FPs.
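The 4-step candidate pipeline above maps naturally onto a small amount of scaffolding code. The following sketch is schematic only: the cell size, intensity threshold, greedy slice-chaining rule, and the classifier callables all stand in as assumptions for the authors' tuned components.

import numpy as np

def hypoechoic_cells(slice2d, cell=16, thresh=0.35):
    # Step 1: flag dark (hypoechoic) square cells on one 2D transverse image
    h, w = slice2d.shape
    return [(y, x)
            for y in range(0, h - cell + 1, cell)
            for x in range(0, w - cell + 1, cell)
            if slice2d[y:y + cell, x:x + cell].mean() < thresh]

def merge_longitudinally(per_slice_cells):
    # Step 3: chain candidates recurring at the same in-plane cell on
    # consecutive slices into 3D masses (a deliberate simplification)
    masses, open_chains = [], {}
    for z, cells in enumerate(per_slice_cells):
        next_open = {}
        for c in cells:
            next_open[c] = open_chains.pop(c, []) + [(z,) + c]
        masses.extend(open_chains.values())  # chains with no continuation end here
        open_chains = next_open
    masses.extend(open_chains.values())
    return masses

def detect(volume, clf2d, clf3d):
    # Step 2: a 2D feature-based classifier filters out nontumor cells
    per_slice = [[c for c in hypoechoic_cells(sl) if clf2d(sl, c)] for sl in volume]
    # Step 4: a 3D feature-based classifier filters the merged masses
    return [m for m in merge_longitudinally(per_slice) if clf3d(volume, m)]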
Collapse
Affiliation(s)
- Ling-Ying Chiu
- Institute of Industrial Engineering, National Taiwan University, Taipei, Taiwan
| | - Wen-Hung Kuo
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
| | - Chiung-Nien Chen
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
| | - King-Jen Chang
- Department of Surgery, National Taiwan University Hospital and National Taiwan University College of Medicine, Taipei, Taiwan
| | - Argon Chen
- Institute of Industrial Engineering, National Taiwan University, Taipei, Taiwan
- Department of Mechanical Engineering, National Taiwan University, Taipei, Taiwan
| |
Collapse
|
49
|
Kim J, Kim HJ, Kim C, Kim WH. Artificial intelligence in breast ultrasonography. Ultrasonography 2020; 40:183-190. [PMID: 33430577 PMCID: PMC7994743 DOI: 10.14366/usg.20117] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2020] [Accepted: 11/12/2020] [Indexed: 12/13/2022] Open
Abstract
Although breast ultrasonography is the mainstay modality for differentiating between benign and malignant breast masses, it has intrinsic problems with false positives and substantial interobserver variability. Artificial intelligence (AI), particularly with deep learning models, is expected to improve workflow efficiency and serve as a second opinion. AI is highly useful for performing three main clinical tasks in breast ultrasonography: detection (localization/segmentation), differential diagnosis (classification), and prognostication (prediction). This article provides a current overview of AI applications in breast ultrasonography, with a discussion of methodological considerations in the development of AI models and an up-to-date literature review of potential clinical applications.
Collapse
Affiliation(s)
- Jaeil Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Korea
| | - Hye Jung Kim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Korea
| | - Chanho Kim
- School of Computer Science and Engineering, Kyungpook National University, Daegu, Korea
| | - Won Hwa Kim
- Department of Radiology, School of Medicine, Kyungpook National University, Kyungpook National University Chilgok Hospital, Daegu, Korea
| |
Collapse
|
50
|
|