1. Wang DD, Lin S, Lyu GR. Advances in the Application of Artificial Intelligence in the Ultrasound Diagnosis of Vulnerable Carotid Atherosclerotic Plaque. Ultrasound Med Biol 2025;51:607-614. [PMID: 39828500] [DOI: 10.1016/j.ultrasmedbio.2024.12.010]
Abstract
Vulnerable atherosclerotic plaque carries a high risk of mortality in patients with cardiovascular disease. Ultrasound has long been used for carotid atherosclerosis screening and plaque assessment because of its safety, low cost, and non-invasive nature. However, conventional ultrasound techniques have limitations such as subjectivity, operator dependence, and low inter-observer agreement, leading to inconsistent and possibly inaccurate diagnoses. In recent years, the integration of artificial intelligence (AI) into ultrasound imaging has emerged as a promising way to address these limitations. By training AI algorithms on large datasets of ultrasound images, these systems can learn to recognize specific characteristics and patterns associated with vulnerable plaques, allowing a more objective and consistent assessment and improved diagnostic accuracy. This article reviews the application of AI in diagnostic ultrasound, with a particular focus on vulnerable carotid plaques, and discusses the limitations and prospects of AI-assisted ultrasound. The review also aims to deepen understanding of the role of AI in diagnostic ultrasound and to promote further research in the field.
Affiliation(s)
- Dan-Dan Wang
- Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Shu Lin
- Centre of Neurological and Metabolic Research, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China; Group of Neuroendocrinology, Garvan Institute of Medical Research, Sydney, Australia
- Guo-Rong Lyu
- Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China; Departments of Medical Imaging, Quanzhou Medical College, Quanzhou, China.
2. Pasynkov D, Egoshin I, Kolchev A, Kliouchkin I, Pasynkova O, Saad Z, Daou A, Abuzenar EM. Automated Segmentation of Breast Cancer Focal Lesions on Ultrasound Images. Sensors (Basel) 2025;25:1593. [PMID: 40096452] [PMCID: PMC11902609] [DOI: 10.3390/s25051593]
Abstract
Ultrasound (US) remains the main modality for the differential diagnosis of changes revealed by mammography. However, US images are subject to various types of noise and reflection artifacts, which can degrade the quality of their analysis. Deep learning methods also have drawbacks, including models that are often insufficiently substantiated and the difficulty of collecting a representative training database. It is therefore necessary to develop effective algorithms for the segmentation, classification, and analysis of US images. The aim of this work was to develop a method for the automated detection and segmentation of pathological lesions in breast US images. The proposed method comprises two stages of video image processing: (1) searching for a region of interest using a random forest classifier trained to recognize normal tissues, and (2) delineating the lesion contour based on differences in pixel brightness. The test set included 52 ultrasound videos containing histologically proven suspicious lesions. The average per-frame lesion detection rate was 91.89%, and the average contour delineation accuracy by the IoU metric was 0.871. The proposed method can be used to segment a suspicious lesion.
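A two-stage pipeline of this kind can be sketched roughly as follows. This is a hypothetical illustration on synthetic data, not the authors' implementation: the patch size, the simple intensity features fed to the random forest, and the brightness cutoff used for the contour stage are all assumptions.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def patch_features(patch):
    # Simple intensity statistics stand in for texture features.
    return [patch.mean(), patch.std(), np.median(patch)]

def find_roi(image, clf, size=16):
    # Stage 1: score non-overlapping patches with the random forest and
    # return the corner of the most lesion-like patch.
    best, best_score = None, -1.0
    for y in range(0, image.shape[0] - size + 1, size):
        for x in range(0, image.shape[1] - size + 1, size):
            p = image[y:y + size, x:x + size]
            score = clf.predict_proba([patch_features(p)])[0, 1]
            if score > best_score:
                best, best_score = (y, x), score
    return best

def segment_roi(image, roi, size=16):
    # Stage 2: within the ROI, keep pixels darker than the midpoint between
    # the ROI mean and the global mean (lesions are typically hypoechoic).
    y, x = roi
    region = image[y:y + size, x:x + size]
    cutoff = (region.mean() + image.mean()) / 2.0
    return region < cutoff

# Synthetic demo: bright speckle background with one dark square "lesion".
rng = np.random.default_rng(0)
img = rng.normal(200.0, 5.0, (64, 64))
img[16:32, 16:32] = rng.normal(60.0, 5.0, (16, 16))

# Train the patch classifier on labelled synthetic patches (0 = normal, 1 = lesion).
X = [patch_features(rng.normal(200.0, 5.0, (16, 16))) for _ in range(50)]
X += [patch_features(rng.normal(60.0, 5.0, (16, 16))) for _ in range(50)]
labels = [0] * 50 + [1] * 50
clf = RandomForestClassifier(n_estimators=20, random_state=0).fit(X, labels)

roi = find_roi(img, clf)        # corner of the detected lesion patch
mask = segment_roi(img, roi)    # boolean lesion mask inside the ROI
```

The split of responsibilities mirrors the abstract: a learned classifier narrows the search to a region of interest, and a purely brightness-based rule then delineates the contour, which keeps the segmentation step free of training-data requirements.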
Affiliation(s)
- Dmitry Pasynkov
- Medical Institute, Department of Radiology and Oncology, Mari State University, Ministry of Education and Science of Russian Federation, 1 Lenin Square, Yoshkar-Ola 424000, Russia
- Kazan State Medical Academy—Branch Campus of the Federal State Budgetary Educational Institution of Further Professional Education, Russian Medical Academy of Continuous Professional Education, Ministry of Healthcare of the Russian Federation, 36 Butlerov St., Kazan 420012, Russia
- Ivan Egoshin
- Medical Institute, Department of Radiology and Oncology, Mari State University, Ministry of Education and Science of Russian Federation, 1 Lenin Square, Yoshkar-Ola 424000, Russia
- Alexey Kolchev
- Institute of Computational Mathematics and Information Technologies, Kazan Federal University, 18 Kremlevskaya St., Kazan 420008, Russia
- Ivan Kliouchkin
- Pediatric Faculty, Kazan Medical University, Ministry of Health of Russian Federation, 49 Butlerov St., Kazan 420012, Russia
- Olga Pasynkova
- Medical Institute, Department of Radiology and Oncology, Mari State University, Ministry of Education and Science of Russian Federation, 1 Lenin Square, Yoshkar-Ola 424000, Russia
- Zahraa Saad
- Medical Institute, Department of Radiology and Oncology, Mari State University, Ministry of Education and Science of Russian Federation, 1 Lenin Square, Yoshkar-Ola 424000, Russia
- Anis Daou
- Pharmaceutical Sciences Department, College of Pharmacy, QU Health, Qatar University, Doha 2713, Qatar
- Esam Mohamed Abuzenar
- Medical Institute, Department of Radiology and Oncology, Mari State University, Ministry of Education and Science of Russian Federation, 1 Lenin Square, Yoshkar-Ola 424000, Russia
3. Sivanandan R, Jayakumari J. Active contour-based ultrasound tumour segmentation with contour initialization using thresholding based on phase gradients. Imaging Sci J 2022. [DOI: 10.1080/13682199.2022.2141866]
Affiliation(s)
- Revathy Sivanandan
- Department of ECE, Mar Baselios College of Engineering and Technology, APJ Abdul Kalam Technological University, Trivandrum, India
- J. Jayakumari
- Department of ECE, Mar Baselios College of Engineering and Technology, APJ Abdul Kalam Technological University, Trivandrum, India
4. Breast Tumor Ultrasound Image Segmentation Method Based on Improved Residual U-Net Network. Comput Intell Neurosci 2022;2022:3905998. [PMID: 35795762] [PMCID: PMC9252688] [DOI: 10.1155/2022/3905998]
Abstract
To achieve efficient and accurate breast tumor recognition and diagnosis, this paper proposes a breast tumor ultrasound image segmentation method based on the U-Net framework, combined with residual blocks and an attention mechanism. Residual blocks are introduced into the U-Net network to avoid the performance degradation caused by vanishing gradients and to reduce the training difficulty of deep networks. In addition, a fusion attention mechanism combining spatial and channel attention is introduced into the model to improve its ability to capture feature information from ultrasound images and to enable accurate recognition and extraction of breast tumors. Experimental results show that the proposed method achieves a Dice index of 0.921, demonstrating excellent image segmentation performance.
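In rough outline, the building blocks this abstract names — a residual shortcut, a fused channel/spatial attention gate, and the Dice index used for evaluation — could look like the NumPy sketch below. The shapes, gating functions, and weight matrix `w` are assumptions for illustration, not the paper's network.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def channel_attention(x, w):
    # Squeeze: global average pool to one value per channel; excite: gate
    # each channel through a learned (C, C) weight matrix w.
    z = x.mean(axis=(1, 2))                  # (C,)
    gates = sigmoid(w @ z)                   # (C,) per-channel gates
    return x * gates[:, None, None]

def spatial_attention(x):
    # Pool across channels and gate each spatial location by how much it
    # deviates from the mean response.
    m = sigmoid(x.mean(axis=0) - x.mean())   # (H, W) spatial gates
    return x * m[None, :, :]

def residual_attention_block(x, w):
    # Fused channel + spatial attention inside an identity shortcut; the
    # skip connection keeps gradients flowing through a deep network.
    return x + spatial_attention(channel_attention(x, w))

def dice_index(pred, target):
    # Overlap metric reported above: 2|A ∩ B| / (|A| + |B|).
    inter = np.logical_and(pred, target).sum()
    return 2.0 * inter / (pred.sum() + target.sum())

rng = np.random.default_rng(0)
features = rng.normal(size=(8, 32, 32))      # (C, H, W) feature map
w = rng.normal(size=(8, 8)) / 8.0
out = residual_attention_block(features, w)  # same shape as the input
```

In a real U-Net such a block would sit between convolution layers on each resolution level; here it only demonstrates that the attention-gated branch preserves the feature-map shape, which is what allows the identity shortcut.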
5. Wang Q, Chen H, Luo G, Li B, Shang H, Shao H, Sun S, Wang Z, Wang K, Cheng W. Performance of novel deep learning network with the incorporation of the automatic segmentation network for diagnosis of breast cancer in automated breast ultrasound. Eur Radiol 2022;32:7163-7172. [PMID: 35488916] [DOI: 10.1007/s00330-022-08836-x]
Abstract
OBJECTIVE To develop a novel deep learning network (DLN) incorporating an automatic segmentation network (ASN) for morphological analysis, and to determine its performance in diagnosing breast cancer on automated breast ultrasound (ABUS). METHODS A total of 769 breast tumors were enrolled in this study and randomly divided into a training set and a test set (600 vs. 169). The novel DLNs (ResNet34 v2, ResNet50 v2, ResNet101 v2) added a new ASN to the traditional ResNet networks and extracted morphological information of breast tumors. Accuracy, sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), area under the receiver operating characteristic (ROC) curve (AUC), and average precision (AP) were calculated. The diagnostic performance of the novel DLNs was compared with that of two radiologists of different experience levels. RESULTS The ResNet34 v2 model had higher specificity (76.81%) and PPV (82.22%) than the other two models, the ResNet50 v2 model had higher accuracy (78.11%) and NPV (72.86%), and the ResNet101 v2 model had higher sensitivity (85.00%). According to the AUCs and APs, the novel ResNet101 v2 model produced the best result (AUC 0.85, AP 0.90) among the six DLNs. The novel DLNs outperformed the novice radiologist, raising the F1 score from 0.77 to 0.78, 0.81, and 0.82, respectively; however, their diagnostic performance remained below that of the experienced radiologist. CONCLUSIONS The novel DLNs performed better than the traditional DLNs and may help novice radiologists improve their diagnostic performance for breast cancer in ABUS. KEY POINTS • A novel automatic segmentation network that extracts morphological information was developed and implemented with ResNet deep learning networks. • The novel deep learning networks performed better than the traditional deep learning networks in diagnosing breast cancer on ABUS images. • The novel deep learning networks may help novice radiologists improve their diagnostic performance.
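All of the diagnostic indices reported in this entry derive from a 2×2 confusion matrix. A minimal helper makes the definitions explicit; the counts in the example are invented for illustration, not the study's data.

```python
def diagnostic_metrics(tp, fp, tn, fn):
    # Standard indices from a binary confusion matrix
    # (tp/fp/tn/fn = true/false positives/negatives).
    sensitivity = tp / (tp + fn)              # recall on malignant cases
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)                      # positive predictive value (precision)
    npv = tn / (tn + fn)
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    f1 = 2 * ppv * sensitivity / (ppv + sensitivity)
    return {"accuracy": accuracy, "sensitivity": sensitivity,
            "specificity": specificity, "ppv": ppv, "npv": npv, "f1": f1}

# Hypothetical test set: 80 of 100 malignant and 60 of 69 benign lesions correct.
m = diagnostic_metrics(tp=80, fp=9, tn=60, fn=20)
```

Note that the F1 score combining PPV and sensitivity simplifies to 2·tp / (2·tp + fp + fn), which is why it can move independently of accuracy when the class balance shifts.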
Affiliation(s)
- Qiucheng Wang
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- He Chen
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Gongning Luo
- School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Bo Li
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Haitao Shang
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Hua Shao
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Shanshan Sun
- Department of Breast Surgery, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
- Zhongshuai Wang
- School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Kuanquan Wang
- School of Computer Science and Technology, Harbin Institute of Technology, No. 92, Xidazhi Street, Nangang District, Harbin, Heilongjiang Province, China
- Wen Cheng
- Department of Ultrasound, Harbin Medical University Cancer Hospital, No. 150, Haping Road, Nangang District, Harbin, Heilongjiang Province, China
6. Gómez-Flores W, Coelho de Albuquerque Pereira W. A comparative study of pre-trained convolutional neural networks for semantic segmentation of breast tumors in ultrasound. Comput Biol Med 2020;126:104036. [PMID: 33059238] [DOI: 10.1016/j.compbiomed.2020.104036]
Abstract
The automatic segmentation of breast tumors in ultrasound (BUS) has recently been addressed using convolutional neural networks (CNNs). These CNN-based approaches generally modify a previously proposed CNN architecture or design a new architecture using CNN ensembles. Although these methods have reported satisfactory results, the trained CNN architectures are often unavailable for reproducibility purposes. Moreover, these methods commonly learn from small BUS datasets with particular properties, which limits generalization to new cases. This paper evaluates four public CNN-based semantic segmentation models developed by the computer vision community: (1) Fully Convolutional Network (FCN) with the AlexNet network, (2) the U-Net network, (3) SegNet using the VGG16 and VGG19 networks, and (4) DeepLabV3+ using the ResNet18, ResNet50, MobileNet-V2, and Xception networks. By transfer learning, these CNNs are fine-tuned to segment BUS images into normal and tumoral pixels. The goal is to select a potential CNN-based segmentation model for further use in computer-aided diagnosis (CAD) systems. The main significance of this study is the comparison of eight well-established CNN architectures using a more extensive BUS dataset than those used by approaches currently found in the literature. More than 3000 BUS images acquired from seven US machine models are used for training and validation. The F1-score (F1s) and the Intersection over Union (IoU) quantify segmentation performance. The segmentation models based on SegNet and DeepLabV3+ obtain the best results, with F1s > 0.90 and IoU > 0.81. For U-Net, the segmentation performance is F1s = 0.89 and IoU = 0.80, whereas FCN-AlexNet attains the lowest results, with F1s = 0.84 and IoU = 0.73. In particular, ResNet18 obtains F1s = 0.905 and IoU = 0.827 and requires the least training time among the SegNet and DeepLabV3+ networks. Hence, ResNet18 is a potential candidate for implementing fully automated end-to-end CAD systems. The CNN models generated in this study are available to researchers at https://github.com/wgomezf/CNN-BUS-segment, enabling fair comparison with other CNN-based segmentation approaches for BUS images.
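The two scores used throughout this comparison are tightly related for binary masks: F1s = 2·IoU / (1 + IoU). A small sketch with made-up masks (purely illustrative, not the study's data):

```python
import numpy as np

def iou(pred, gt):
    # Intersection over Union of two boolean masks.
    inter = np.logical_and(pred, gt).sum()
    union = np.logical_or(pred, gt).sum()
    return inter / union

def f1_score(pred, gt):
    # Pixel-wise F1 (identical to the Dice coefficient): 2|A ∩ B| / (|A| + |B|).
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

pred = np.zeros((4, 4), dtype=bool)
gt = np.zeros((4, 4), dtype=bool)
pred[0, :2] = True   # predicted tumoral pixels
gt[0, 0] = True      # ground-truth tumoral pixel

j = iou(pred, gt)        # 1/2
d = f1_score(pred, gt)   # 2/3, which equals 2j / (1 + j)
```

Because the mapping between the two metrics is monotonic, a model ranking by F1s and by IoU will agree, which is consistent with the paired values reported above (e.g. 0.905 / 0.827 for ResNet18).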
Affiliation(s)
- Wilfrido Gómez-Flores
- Centro de Investigación y de Estudios Avanzados del Instituto Politécnico Nacional, Unidad Tamaulipas, Ciudad Victoria, Tamaulipas, Mexico.
7. Li S, Wang Z, Visser LC, Wisner ER, Cheng H. Pilot study: Application of artificial intelligence for detecting left atrial enlargement on canine thoracic radiographs. Vet Radiol Ultrasound 2020;61:611-618. [PMID: 32783354] [PMCID: PMC7689842] [DOI: 10.1111/vru.12901]
Abstract
Although deep learning has been explored extensively for computer-aided medical imaging diagnosis in human medicine, very little has been done in veterinary medicine. The goal of this retrospective pilot project was to apply deep learning artificial intelligence techniques to thoracic radiographs for detection of canine left atrial enlargement and to compare results with those of veterinary radiologist interpretations. Seven hundred ninety-two right lateral radiographs from canine patients with thoracic radiographs and contemporaneous echocardiograms were used to train, validate, and test a convolutional neural network algorithm. The accuracy, sensitivity, and specificity for determination of left atrial enlargement were then compared with those of board-certified veterinary radiologists as recorded in radiology reports. The accuracy, sensitivity, and specificity were 82.71%, 68.42%, and 87.09%, respectively, using an accuracy-driven variant of the convolutional neural network algorithm, and 79.01%, 73.68%, and 80.64%, respectively, using a sensitivity-driven variant. By comparison, the accuracy, sensitivity, and specificity achieved by board-certified veterinary radiologists were 82.71%, 68.42%, and 87.09%, respectively. Although the overall accuracy of the accuracy-driven convolutional neural network algorithm and the veterinary radiologists was identical, concordance between the two approaches was 85.19%. This study documents proof of concept for the application of deep learning techniques for computer-aided diagnosis in veterinary medicine.
Affiliation(s)
- Shen Li
- William R. Pritchard Veterinary Medical Teaching Hospital, School of Veterinary Medicine, University of California, Davis, California, USA
- Zigui Wang
- Department of Animal Sciences, University of California, Davis, California, USA
- Lance C. Visser
- Department of Medicine and Epidemiology, School of Veterinary Medicine, University of California, Davis, California, USA
- Erik R. Wisner
- Department of Surgical and Radiological Sciences, School of Veterinary Medicine, University of California, Davis, California, USA
- Hao Cheng
- Department of Animal Sciences, University of California, Davis, California, USA
8. Shiji TP, Remya S, Lakshmanan R, Pratab T, Thomas V. Evolutionary intelligence for breast lesion detection in ultrasound images: A wavelet modulus maxima and SVM based approach. J Intell Fuzzy Syst 2020. [DOI: 10.3233/jifs-179709]
Affiliation(s)
- T. P. Shiji
- Department of Electronics Engineering, Model Engineering College, Kochi, India
- S. Remya
- Department of Electronics Engineering, Model Engineering College, Kochi, India
- Rekha Lakshmanan
- Department of Computer Engineering, KMEA College of Engineering, Kerala, India
- Vinu Thomas
- Department of Electronics Engineering, Model Engineering College, Kochi, India
9. Automatic Identification of Breast Ultrasound Image Based on Supervised Block-Based Region Segmentation Algorithm and Features Combination Migration Deep Learning Model. IEEE J Biomed Health Inform 2020;24:984-993. [DOI: 10.1109/jbhi.2019.2960821]
10.
11. Dalwinder S, Birmohan S, Manpreet K. Simultaneous feature weighting and parameter determination of Neural Networks using Ant Lion Optimization for the classification of breast cancer. Biocybern Biomed Eng 2020. [DOI: 10.1016/j.bbe.2019.12.004]
12. Tao C, Chen K, Han L, Peng Y, Li C, Hua Z, Lin J. New one-step model of breast tumor locating based on deep learning. J Xray Sci Technol 2019;27:839-856. [PMID: 31306148] [DOI: 10.3233/xst-190548]
Affiliation(s)
- Chao Tao
- Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China
- Ke Chen
- Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China
- Lin Han
- Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China
- Yulan Peng
- Department of Ultrasound, West China Hospital of Sichuan University, Chengdu, China
- Cheng Li
- China-Japan Friendship Hospital, Beijing, China
- Zhan Hua
- China-Japan Friendship Hospital, Beijing, China
- Jiangli Lin
- Department of Biomedical Engineering, College of Materials Science and Engineering, Sichuan University, Chengdu, China