1
Cece A, Agresti M, De Falco N, Sperlongano P, Moccia G, Luongo P, Miele F, Allaria A, Torelli F, Bassi P, Sciarra A, Avenia S, Della Monica P, Colapietra F, Di Domenico M, Docimo L, Parmeggiani D. Role of Artificial Intelligence in Thyroid Cancer Diagnosis. J Clin Med 2025;14:2422. PMID: 40217871; PMCID: PMC11989500; DOI: 10.3390/jcm14072422.
Abstract
The progress of artificial intelligence (AI), particularly of its core algorithm families, machine learning (ML) and deep learning (DL), has been significant in the medical field, impacting both scientific research and clinical practice. These algorithms can now analyze and process ultrasound images and provide outcomes, such as determining the benignity or malignancy of thyroid nodules. This integration into ultrasound machines is referred to as computer-aided diagnosis (CAD). The use of such software extends beyond ultrasound to cytopathological and molecular assessments, enhancing the estimation of malignancy risk. AI's considerable potential in cancer diagnosis and prevention is evident. This article provides an overview of AI models based on ML and DL algorithms used in thyroid diagnostics. Recent studies demonstrate their effectiveness and diagnostic role in the ultrasound, pathology, and molecular fields. Notable advancements include content-based image retrieval (CBIR), enhanced saliency CBIR (SE-CBIR), Restore-Generative Adversarial Networks (GANs), and Vision Transformers (ViTs). These new algorithms show remarkable results, indicating their potential as diagnostic and prognostic tools for thyroid pathology. The trend points to these AI systems becoming the preferred choice for thyroid diagnostics.
Affiliation(s)
- Alessio Cece, Massimo Agresti, Nadia De Falco, Pasquale Sperlongano, Giancarlo Moccia, Pasquale Luongo, Francesco Miele, Alfredo Allaria, Francesco Torelli, Paola Bassi, Antonella Sciarra, and Domenico Parmeggiani: Department of Integrated Activities in Surgery, Orthopedy and Hepato-Gastroenterology, Universitary Policlinico “Luigi Vanvitelli”, 80138 Naples, Italy
- Stefano Avenia: Department of Medicine and Surgery, University of Perugia, 06126 Perugia, Italy
- Paola Della Monica, Federica Colapietra, and Marina Di Domenico: Department of Precision Medicine, University of Campania “Luigi Vanvitelli”, 80138 Naples, Italy
- Ludovico Docimo: Department of General and Specialistic Surgery, Universitary Policlinico “Luigi Vanvitelli”, 80138 Naples, Italy
2
Dafni MF, Shih M, Manoel AZ, Yousif MYE, Spathi S, Harshal C, Bhatt G, Chodnekar SY, Chune NS, Rasool W, Umar TP, Moustakas DC, Achkar R, Kumar H, Naz S, Acuña-Chavez LM, Evgenikos K, Gulraiz S, Ali ESM, Elaagib A, Uggh IHP. Empowering cancer prevention with AI: unlocking new frontiers in prediction, diagnosis, and intervention. Cancer Causes Control 2025;36:353-367. PMID: 39672997; DOI: 10.1007/s10552-024-01942-9.
Abstract
Artificial intelligence is rapidly changing our world, and its transformative power has reached important sectors such as healthcare. In the fight against cancer, AI has proved a novel and powerful tool, offering new hope for prevention and early detection. In this review, we comprehensively explore the medical applications of AI, including early cancer detection through pathological and imaging analysis, risk stratification, patient triage, and the development of personalized prevention approaches. Despite this impact, we also discuss the many challenges that stand in the way of optimal AI implementation. One challenge is how best to use AI systemically: obtaining correct, easily understood data remains one of the most significant concerns across all uses, including information sharing. Another is the interpretability of AI models, which are often too complex for people, especially medical professionals, to follow, and this opacity may undermine trust. Other considerations, such as data privacy, algorithm bias, and equitable access to AI tools, have also arisen. Finally, we evaluate possible future directions for this promising field, highlighting AI's capacity to transform preventive cancer care.
Affiliation(s)
- Marianna-Foteini Dafni: School of Medicine, Laboratory of Forensic Medicine and Toxicology, Aristotle University of Thessaloniki, Thessaloniki, Greece
- Mohamed Shih: School of Medicine, Newgiza University, Giza, Egypt
- Agnes Zanotto Manoel: Faculty of Medicine, Federal University of Rio Grande, Rio Grande do Sul, Brazil
- Mohamed Yousif Elamin Yousif: Faculty of Medicine, University of Khartoum, Khartoum, Sudan
- Stavroula Spathi, Dimitrios C Moustakas, and Konstantinos Evgenikos: Faculty of Medicine, National and Kapodistrian University of Athens, Athens, Greece
- Chorya Harshal: Faculty of Medicine, Medical College Baroda, Vadodara, India
- Gaurang Bhatt: All India Institute of Medical Sciences, Rishikesh, India
- Swarali Yatin Chodnekar: Faculty of Medicine, Teaching University Geomedi LLC, Tbilisi, Georgia
- Nicholas Stam Chune: Faculty of Medicine, University of Nairobi, Nairobi, Kenya
- Warda Rasool: Faculty of Medicine, King Edward Medical University, Lahore, Pakistan
- Tungki Pratama Umar: Division of Surgery and Interventional Science, Faculty of Medical Sciences, University College London, London, UK
- Robert Achkar: Faculty of Medicine, Poznan University of Medical Sciences, Poznan, Poland
- Harendra Kumar: Dow University of Health Sciences, Karachi, Pakistan
- Suhaila Naz: Tbilisi State Medical University, Tbilisi, Georgia
- Luis M Acuña-Chavez: Facultad de Medicina de la Universidad Nacional de Trujillo, Trujillo, Peru
- Shaina Gulraiz: Royal Bournemouth Hospital (University Hospitals Dorset), Bournemouth, UK
- Eslam Salih Musa Ali: University of Dongola Faculty of Medicine and Health Science, Dongola, Sudan
- Amna Elaagib: Faculty of Medicine AlMughtaribeen University, Khartoum, Sudan
- Innocent H Peter Uggh: Kilimanjaro Clinical Research Institute, Kilimanjaro, Tanzania
- All authors: Cancer Prevention Research Group in Greece, Kifisias Avenue 44, Marousi, Greece
3
Lin Z, Zheng J, Deng Y, Du L, Liu F, Li Z. Deep learning-aided diagnosis of acute abdominal aortic dissection by ultrasound images. Emerg Radiol 2025;32:233-239. PMID: 39821588; DOI: 10.1007/s10140-025-02311-y.
Abstract
PURPOSE Acute abdominal aortic dissection (AD) is a serious disease. Early detection based on ultrasound (US) can improve the prognosis of AD, especially in emergency settings. We explored the ability of deep learning (DL) to diagnose abdominal AD in US images, which may aid diagnosis by novice radiologists or non-professionals. METHODS We collected 374 US images from patients treated before June 30, 2022, classified as AD-positive or AD-negative. Of these, 90% were used as the training set and 10% as the test set. A DenseNet-169 model and a VGG-16 model were used in this study and compared with two human readers. RESULTS The DL models demonstrated high sensitivity and AUC for diagnosing abdominal AD in US images and generally outperformed the human readers. CONCLUSION Our findings demonstrate the efficacy of DL-aided diagnosis of abdominal AD in US images, which can be helpful in emergency settings.
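The AUC reported for such models is simply the probability that a randomly chosen positive image is scored above a randomly chosen negative one, so it can be checked by hand with a rank-based computation. A minimal Python sketch with made-up labels and scores (not the study's data):

```python
def auc_score(labels, scores):
    """Area under the ROC curve via the Mann-Whitney formulation:
    the probability that a random positive scores higher than a random
    negative, with ties counted as half a win."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    if not pos or not neg:
        raise ValueError("need at least one positive and one negative case")
    wins = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos) * len(neg))

# Illustrative labels (1 = AD-positive) and model scores -- not the paper's data.
labels = [1, 1, 1, 0, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2, 0.1]
print(auc_score(labels, scores))  # one positive is out-ranked by one negative
```

A perfect classifier yields 1.0; a random scorer hovers around 0.5, which is why AUCs near 0.9 are read as strong discrimination.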
Affiliation(s)
- Zhanye Lin, Jian Zheng, and Fan Liu: Ultrasound Department of The Second Affiliated Hospital, School of Medicine, The Chinese University of Hong Kong, Shenzhen & Longgang District People's Hospital of Shenzhen, Shenzhen, China
- Yaohong Deng: Department of Research and Development, Yizhun Medical AI Co. Ltd, Beijing, China
- Lingyue Du: Department of Radiology, Huazhong University of Science and Technology Union Shenzhen Hospital, Shenzhen, China
- Zhengyi Li: Department of Ultrasound, Shenzhen Second People's Hospital, The First Affiliated Hospital of Shenzhen University, Shenzhen, China
4
Oh SY, Lee YM, Kang DJ, Kwon HJ, Chakraborty S, Park JH. Breaking Barriers in Thyroid Cytopathology: Harnessing Deep Learning for Accurate Diagnosis. Bioengineering (Basel) 2025;12:293. PMID: 40150757; PMCID: PMC11939565; DOI: 10.3390/bioengineering12030293.
Abstract
BACKGROUND We address the application of artificial intelligence (AI) techniques in thyroid cytopathology, specifically for diagnosing papillary thyroid carcinoma (PTC), the most common type of thyroid cancer. METHODS Our research introduces deep learning frameworks that analyze cytological images from fine-needle aspiration cytology (FNAC), a key preoperative diagnostic method for PTC. The first framework is a patch-level classifier, referred to as "TCS-CNN", based on a convolutional neural network (CNN) architecture, which predicts thyroid cancer according to the Bethesda System (TBS) category. The second framework is an attention-based deep multiple instance learning (AD-MIL) model, which employs a TCS-CNN feature extractor and an attention mechanism that aggregates features from smaller patch-level regions into predictions for larger patch-level regions, referred to as bag-level predictions in this context. RESULTS The proposed TCS-CNN framework achieves an accuracy of 97% and a recall of 96% for small-patch-level classification, accurately capturing local malignancy information. The AD-MIL framework also achieves approximately 96% accuracy and recall, demonstrating that it maintains comparable performance while expanding diagnostic coverage to larger regions through patch aggregation. CONCLUSIONS This study provides a feasibility analysis for thyroid cytopathology classification and visual interpretability for AI diagnosis, suggesting potential improvements in patient outcomes and reductions in healthcare costs.
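The attention-based aggregation at the heart of AD-MIL-style models can be sketched in a few lines: each patch embedding receives a learned score, the scores are softmaxed across the bag, and the bag embedding is the attention-weighted sum. An illustrative NumPy sketch with random, untrained weights (a simplified stand-in, not the authors' implementation):

```python
import numpy as np

def attention_pool(patch_embeddings, V, w):
    """Attention-based MIL pooling (simplified): score each patch with
    w^T tanh(V h), softmax the scores over the bag, and return the
    attention-weighted bag embedding plus the weights themselves."""
    scores = np.tanh(patch_embeddings @ V.T) @ w   # (n_patches,)
    scores = scores - scores.max()                 # numerical stability
    weights = np.exp(scores) / np.exp(scores).sum()  # softmax over patches
    bag = weights @ patch_embeddings               # (embed_dim,)
    return bag, weights

rng = np.random.default_rng(0)
n_patches, embed_dim, attn_dim = 6, 16, 8
H = rng.normal(size=(n_patches, embed_dim))  # patch features from a CNN
V = rng.normal(size=(attn_dim, embed_dim))   # attention projection (untrained)
w = rng.normal(size=attn_dim)                # attention scoring vector (untrained)

bag, weights = attention_pool(H, V, w)
assert bag.shape == (embed_dim,) and np.isclose(weights.sum(), 1.0)
```

The attention weights double as an interpretability signal: patches with high weight are the ones driving the bag-level prediction, which is how such models localize suspicious regions.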
Affiliation(s)
- Seo Young Oh and Dong Joo Kang: Terenz Co., Ltd., Busan 48060, Republic of Korea
- Yong Moon Lee: Department of Pathology, College of Medicine, Dankook University, Cheonan 31116, Republic of Korea
- Hyeong Ju Kwon: Department of Pathology, Wonju Severance Christian Hospital, Wonju College of Medicine, Yonsei University, Seoul 03722, Republic of Korea
- Jae Hyun Park: Department of Surgery, Wonju Severance Christian Hospital, Wonju College of Medicine, Yonsei University, Wonju 26492, Republic of Korea
5
Xiong S, Liu S, Zhang W, Zeng C, Liao D, Tang T, Wang S, Guo Y. Annotation-free genetic mutation estimation of thyroid cancer using cytological slides from multi-centers. Diagn Pathol 2025;20:22. PMID: 39985045; PMCID: PMC11846261; DOI: 10.1186/s13000-025-01618-1.
Abstract
Thyroid cancer is the most common form of endocrine malignancy, and fine-needle aspiration (FNA) cytology is a reliable method for its clinical diagnosis. Identification of genetic mutation status has proved efficient for accurate diagnosis and prognostic risk stratification. In this study, a dataset of thyroid cytological images was collected, comprising 310 indeterminate (TBS 3 or 4) and 392 PTC (TBS 5 or 6) cases. We introduce a multimodal cascaded network framework to estimate BRAF V600E and RAS mutations directly from thyroid cytological slides. In the external testing set, the model achieved AUCs of 0.902 ± 0.063 for BRAF and 0.801 ± 0.137 for RAS. These results demonstrate that deep neural networks have the potential to predict valuable diagnostic and comprehensive genetic information from cytology.
Affiliation(s)
- Siping Xiong, Shuguang Liu, Wei Zhang, Chao Zeng, Shimin Wang, and Yimin Guo: Department of Pathology, The Eighth Affiliated Hospital of Sun Yat-Sen University, Shenzhen, 518000, Guangdong, China
- Degui Liao and Tian Tang: Department of Pathology, The Second Affiliated Hospital of Guangzhou Medical University, Guangzhou, 510000, Guangdong, China
6
VandeHaar MA, Al-Asi H, Doganay F, Yilmaz I, Alazab H, Xiao Y, Balan J, Dangott BJ, Nassar A, Reynolds JP, Akkus Z. Challenges and Opportunities in Cytopathology Artificial Intelligence. Bioengineering (Basel) 2025;12:176. PMID: 40001695; PMCID: PMC11851434; DOI: 10.3390/bioengineering12020176.
Abstract
Artificial Intelligence (AI) has the potential to revolutionize cytopathology by enhancing diagnostic accuracy, efficiency, and accessibility. However, the implementation of AI in this field presents significant challenges and opportunities. This review paper explores the current landscape of AI applications in cytopathology, highlighting the critical challenges, including data quality and availability, algorithm development, integration and standardization, and clinical validation. We discuss challenges such as the limitation of only one optical section and z-stack scanning, the complexities associated with acquiring high-quality labeled data, the intricacies of developing robust and generalizable AI models, and the difficulties in integrating AI tools into existing laboratory workflows. The review also identifies substantial opportunities that AI brings to cytopathology. These include the potential for improved diagnostic accuracy through enhanced detection capabilities and consistent, reproducible results, which can reduce observer variability. AI-driven automation of routine tasks can significantly increase efficiency, allowing cytopathologists to focus on more complex analyses. Furthermore, AI can serve as a valuable educational tool, augmenting the training of cytopathologists and facilitating global health initiatives by making high-quality diagnostics accessible in resource-limited settings. The review underscores the importance of addressing these challenges to harness the full potential of AI in cytopathology, ultimately improving patient care and outcomes.
Affiliation(s)
- Meredith A. VandeHaar: Cytology, Department of Laboratory Medicine and Pathology, Mayo Clinic, Rochester, MN 55905, USA
- Hussien Al-Asi, Fatih Doganay, Ibrahim Yilmaz, Heba Alazab, Bryan J. Dangott, Aziza Nassar, Jordan P. Reynolds, and Zeynettin Akkus: Computational Pathology and Artificial Intelligence, Department of Laboratory Medicine, Mayo Clinic, Jacksonville, FL 32224, USA
- Yao Xiao and Jagadheshwar Balan: Computational Biology, Quantitative Health Science, Mayo Clinic, Rochester, MN 55905, USA
7
Al-Obeidat F, Hafez W, Rashid A, Jallo MK, Gador M, Cherrez-Ojeda I, Simancas-Racines D. Artificial intelligence for the detection of acute myeloid leukemia from microscopic blood images: a systematic review and meta-analysis. Front Big Data 2025;7:1402926. PMID: 39897067; PMCID: PMC11782132; DOI: 10.3389/fdata.2024.1402926.
Abstract
Background Leukemia is the 11th most prevalent type of cancer worldwide, with acute myeloid leukemia (AML) being the most frequent blood malignancy in adults. Microscopic blood tests are the most common method for identifying leukemia subtypes. Automated optical image-processing systems using artificial intelligence (AI) have recently been applied to facilitate clinical decision-making. Aim To evaluate the performance of AI-based approaches for the detection and diagnosis of acute myeloid leukemia (AML). Methods Medical databases including PubMed, Web of Science, and Scopus were searched up to December 2023. We used the "metafor" and "metagen" libraries in R to analyze the different models used in the studies. Accuracy and sensitivity were the primary outcome measures. Results Ten studies conducted between 2016 and 2023 were included in our review and meta-analysis. Deep-learning models, including convolutional neural networks (CNNs), were the most commonly used. The common- and random-effects models had accuracies of 1.0000 [0.9999; 1.0001] and 0.9557 [0.9312; 0.9802], respectively, and high sensitivities of 1.0000 and 0.8581, respectively, indicating that the machine learning models in this study can accurately detect true-positive leukemia cases. The included studies showed substantial variation in accuracy and sensitivity, as indicated by the Q values and I² statistics. Conclusion Our systematic review and meta-analysis found overall high accuracy and sensitivity of AI models in correctly identifying true-positive AML cases. Future research should focus on unifying reporting methods and performance assessment metrics of AI-based diagnostics. Systematic review registration https://www.crd.york.ac.uk/prospero/#recordDetails, CRD42024501980.
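The common-effect and random-effects estimates quoted here follow standard inverse-variance pooling, with a DerSimonian-Laird estimate of the between-study variance for the random-effects model; this is the default computation behind R meta-analysis routines such as those the review used. A self-contained Python sketch with hypothetical per-study data (not the review's):

```python
def pool(effects, variances):
    """Inverse-variance common-effect pooling plus DerSimonian-Laird
    random-effects pooling. Returns (common, random, Q, I^2 in %)."""
    w = [1.0 / v for v in variances]
    common = sum(wi * e for wi, e in zip(w, effects)) / sum(w)
    # Cochran's Q heterogeneity statistic
    q = sum(wi * (e - common) ** 2 for wi, e in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)          # between-study variance (DL)
    w_re = [1.0 / (v + tau2) for v in variances]
    random_eff = sum(wi * e for wi, e in zip(w_re, effects)) / sum(w_re)
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return common, random_eff, q, i2

# Hypothetical per-study accuracy estimates and variances -- not the review's data.
effects = [0.99, 0.96, 0.93, 0.90]
variances = [0.0001, 0.0004, 0.0009, 0.0016]
common, rand_eff, q, i2 = pool(effects, variances)
print(round(common, 4), round(rand_eff, 4), round(i2, 1))
```

The gap between the two pooled estimates in the review (1.0000 vs. 0.9557) is itself a heterogeneity signal: when studies disagree, the random-effects model down-weights the precise outliers and drifts toward the unweighted center.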
Affiliation(s)
- Feras Al-Obeidat: College of Technological Innovation, Zayed University, Abu Dhabi, United Arab Emirates
- Wael Hafez: Internal Medicine Department, Medical Research and Clinical Studies Institute, The National Research Centre, Cairo, Egypt; NMC Royal Hospital, Abu Dhabi, United Arab Emirates
- Asrar Rashid and Munier Gador: NMC Royal Hospital, Abu Dhabi, United Arab Emirates
- Mahir Khalil Jallo: Department of Clinical Sciences, College of Medicine, Gulf Medical University, Ajman, United Arab Emirates
- Ivan Cherrez-Ojeda: Department of Allergy and Immunology, Universidad Espiritu Santo, Samborondon, Ecuador; Respiralab Research Group, Guayaquil, Ecuador
- Daniel Simancas-Racines: Centro de Investigación de Salud Pública y Epidemiología Clínica (CISPEC), Universidad UTE, Quito, Ecuador
8
Ben Khalifa A, Mili M, Maatouk M, Ben Abdallah A, Abdellali M, Gaied S, Ben Ali A, Lahouel Y, Bedoui MH, Zrig A. Deep Transfer Learning for Classification of Late Gadolinium Enhancement Cardiac MRI Images into Myocardial Infarction, Myocarditis, and Healthy Classes: Comparison with Subjective Visual Evaluation. Diagnostics (Basel) 2025;15:207. PMID: 39857091; PMCID: PMC11765457; DOI: 10.3390/diagnostics15020207.
Abstract
Background/Objectives: To develop a computer-aided diagnosis (CAD) method for the classification of late gadolinium enhancement (LGE) cardiac MRI images into myocardial infarction (MI), myocarditis, and healthy classes using a fine-tuned VGG16 model hybridized with a multi-layer perceptron (MLP) (VGG16-MLP), and to assess our model's performance against various pre-trained base models and MRI readers. Methods: This study included 361 LGE images for MI, 222 for myocarditis, and 254 for the healthy class. The left ventricle was extracted automatically from LGE images using a U-net segmentation model. The fine-tuned VGG16 was used for feature extraction, and a spatial attention mechanism was implemented as part of the neural network architecture. An MLP was used for classification, and evaluation metrics were calculated on a separate test set. To benchmark VGG16's feature extraction, various pre-trained base models were evaluated: VGG19, DenseNet121, DenseNet201, MobileNet, InceptionV3, and InceptionResNetV2. A Support Vector Machine (SVM) classifier was also evaluated and compared to the MLP for the classification task. The performance of the VGG16-MLP model was compared with a subjective visual analysis conducted by two blinded independent readers. Results: The VGG16-MLP model allowed high-performance differentiation between MI, myocarditis, and healthy LGE cardiac MRI images. It outperformed the other tested models with 96% accuracy, 97% precision, 96% sensitivity, and a 96% F1-score, and surpassed the accuracy of Reader 1 by 27% and Reader 2 by 17%. Conclusions: Our study demonstrated that the VGG16-MLP model permits accurate classification of MI, myocarditis, and healthy LGE cardiac MRI images and could be considered a reliable computer-aided diagnosis approach, specifically for radiologists with limited experience in cardiovascular imaging.
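The abstract names a spatial attention mechanism without giving its formulation. A common variant (CBAM-style spatial attention, used here purely as an illustration, not as the authors' exact design) pools the feature map over channels with mean and max, combines the two pooled maps, and rescales each spatial location by a value in (0, 1). A NumPy sketch, with scalar weights standing in for the usual convolution layer:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat, w_mean=1.0, w_max=1.0, bias=0.0):
    """CBAM-style spatial attention, simplified: pool the feature map over
    channels with mean and max, combine the two maps with learned scalars
    (stand-ins for a conv layer), squash to (0, 1), and rescale the features.
    feat has shape (channels, height, width)."""
    mean_map = feat.mean(axis=0)                                # (H, W)
    max_map = feat.max(axis=0)                                  # (H, W)
    attn = sigmoid(w_mean * mean_map + w_max * max_map + bias)  # (H, W)
    return feat * attn[None, :, :], attn

rng = np.random.default_rng(0)
feat = rng.normal(size=(8, 4, 4))   # toy CNN feature map: 8 channels, 4x4 grid
out, attn = spatial_attention(feat)
assert out.shape == feat.shape and attn.min() > 0 and attn.max() < 1
```

The effect is that spatial locations with strong channel responses keep their activations while weak ones are damped, which is why such modules help a classifier focus on the enhancing myocardial regions rather than the background.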
Affiliation(s)
- Amani Ben Khalifa, Manel Mili, Asma Ben Abdallah, and Mohamed Hedi Bedoui: Technology and Medical Imaging Laboratory LR12ES06, Faculty of Medicine of Monastir, University of Monastir, Monastir 5019, Tunisia (Manel Mili also: Faculty of Sciences of Monastir, University of Monastir, Monastir 5019, Tunisia)
- Mezri Maatouk, Mabrouk Abdellali, Sofiene Gaied, Azza Ben Ali, Yassir Lahouel, and Ahmed Zrig: LR18-SP08 Department of Radiology, University Hospital of Monastir, Monastir 5019, Tunisia
9
Parsa AA, Gharib H. Thyroid Nodules: Past, Present, and Future. Endocr Pract 2025;31:114-123. PMID: 38880348; DOI: 10.1016/j.eprac.2024.05.016.
Abstract
BACKGROUND Over the past millennia, the evaluation and management of thyroid nodules has essentially remained the same, with thyroidectomy as the only reliable method to identify malignancy. In the last 30 years, however, technological advances have significantly improved the diagnostic management of thyroid nodules. Advances in imaging have allowed development of reliable risk-based stratification systems to identify nodules at increased risk of malignancy. At the same time, sensitive imaging has caused collateral damage, to the degree that we are now identifying and treating many small, low-risk nodules with little to no clinical relevance. OBJECTIVE To review the history of thyroid nodule evaluation, with emphasis on recent changes and future pathways. METHODS Literature review and discussion. RESULTS Thyroid ultrasound remains the best initial method for evaluating the thyroid gland for nodules. Different societies have developed and introduced risk-of-malignancy protocols and reporting systems, each aiming to improve the recognition of nodules requiring further intervention while minimizing excessive monitoring of those that do not. Once a nodule is identified, cytological evaluation further enhances malignancy identification, with molecular markers assisting in ruling out malignancy in indeterminate nodules and preventing unneeded intervention. All societies have urged avoidance of overdiagnosis and overtreatment of low-risk cancers of little to no clinical relevance. CONCLUSION In this review, we describe advancements in nodule evaluation and management, while emphasizing caution against overdiagnosing and overtreating low-risk lesions without clinical importance.
Collapse
Affiliation(s)
- Alan A Parsa
- John A. Burns School of Medicine, University of Hawai'i at Mānoa, Honolulu, Hawaii.
| | - Hossein Gharib
- Division of Endocrinology, Diabetes, Metabolism, and Nutrition, Mayo Clinic College of Medicine, Rochester, Minnesota
| |
Collapse
|
10
|
Talib MA, Moufti MA, Nasir Q, Kabbani Y, Aljaghber D, Afadar Y. Transfer Learning-Based Classifier to Automate the Extraction of False X-Ray Images From Hospital's Database. Int Dent J 2024; 74:1471-1482. [PMID: 39232939 PMCID: PMC11551570 DOI: 10.1016/j.identj.2024.08.002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2024] [Revised: 07/11/2024] [Accepted: 08/02/2024] [Indexed: 09/06/2024] Open
Abstract
BACKGROUND During preclinical training, dental students take radiographs of acrylic (plastic) blocks containing extracted patient teeth. With the digitisation of medical records, a central archiving system was created to store and retrieve all x-ray images, regardless of whether they were images of teeth on acrylic blocks or those from patients. In the early stage of the digitisation process, and due to the immaturity of the data management system, numerous images were mixed up and stored in random locations within a unified archiving system, including patient record files. Filtering out and expunging the undesired training images is imperative, as manual searching for such images is problematic. Hence the aim of this study was to differentiate intraoral images from artificial images on acrylic blocks. METHODS An artificial intelligence (AI) solution to automatically differentiate between intraoral radiographs taken of patients and those taken of acrylic blocks was utilised in this study. The concept of transfer learning was applied to a dataset provided by a Dental Hospital. RESULTS An accuracy score, F1 score, and recall score of 98.8%, 99.2%, and 100%, respectively, were achieved using a VGG16 pre-trained model. These results were more sensitive than those obtained initially with a baseline model, which achieved accuracy, F1, and recall scores of 96.5%, 97.5%, and 98.9%, respectively. CONCLUSIONS The proposed system using transfer learning was able to accurately identify "fake" radiograph images and distinguish them from the real intraoral images.
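The transfer-learning recipe this abstract describes (freeze a pretrained backbone, retrain only the classification head) can be sketched in miniature. The synthetic arrays below stand in for frozen VGG16 embeddings and binary labels; every name and number here is illustrative, not taken from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for embeddings from a frozen pretrained backbone
# (in a real pipeline these would come from VGG16 with its weights frozen).
n, d = 200, 32
X = rng.normal(size=(n, d))
y = (X @ rng.normal(size=d) > 0).astype(float)  # binary: patient vs acrylic-block image

# Train only the final classifier head (logistic regression) on the fixed features.
w, b, lr = np.zeros(d), 0.0, 0.5
for _ in range(300):
    z = np.clip(X @ w + b, -30, 30)      # clip logits for numerical stability
    p = 1.0 / (1.0 + np.exp(-z))         # sigmoid
    w -= lr * X.T @ (p - y) / n          # gradient of binary cross-entropy
    b -= lr * float(np.mean(p - y))

accuracy = float(np.mean(((X @ w + b) > 0) == (y > 0.5)))
```

Because only the head's ~30 parameters are trained, this regime needs far less labelled data than end-to-end training, which is the point of transfer learning in small medical datasets.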
Collapse
Affiliation(s)
- Manar Abu Talib
- Department of Computer Engineering, College of Computing and Informatics, University of Sharjah, Sharjah, United Arab Emirates
| | - Mohammad Adel Moufti
- Department of Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates.
| | - Qassim Nasir
- Department of Computer Science, College of Computing and Informatics, University of Sharjah, Sharjah, United Arab Emirates
| | - Yousuf Kabbani
- Department of Computer Science, College of Computing and Informatics, University of Sharjah, Sharjah, United Arab Emirates
| | - Dana Aljaghber
- Department of Restorative Dentistry, College of Dental Medicine, University of Sharjah, Sharjah, United Arab Emirates
| | - Yaman Afadar
- Department of Computer Science, College of Computing and Informatics, University of Sharjah, Sharjah, United Arab Emirates
| |
Collapse
|
11
|
Ahadian K, Yudistira N, Rahayudi B, Basori AH, Malebary SJ, Alesawi S, Mansur ABF, Alorfi AS, Barukab OM. Maize disease classification using transfer learning and convolutional neural network with weighted loss. Heliyon 2024; 10:e39569. [PMID: 39524719 PMCID: PMC11543870 DOI: 10.1016/j.heliyon.2024.e39569] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2024] [Revised: 10/08/2024] [Accepted: 10/17/2024] [Indexed: 11/16/2024] Open
Abstract
Maize stands out as a versatile commodity, finding applications in the food and animal feed industries. Notably, half of the total demand for maize is met through its utilization as animal feed. Despite its importance, maize cultivation often grapples with crop failures resulting from delayed disease management or insufficient knowledge about these diseases, impeding timely intervention. The advent of technological advancements, particularly in machine learning, presents solutions to address these challenges. This research focuses on employing a Convolutional Neural Network (CNN) to classify maize plant diseases. Two datasets form the foundation of this study. The first dataset encompasses 4144 images distributed across 4 classes, while the second dataset comprises 5155 images distributed among 7 to 8 classes. The second dataset suffers from imbalanced class distribution, where certain classes possess substantially more data than others. To mitigate this imbalance, the weighted cross-entropy loss method is employed. During experimentation, three distinct architectures (ResNet-18, VGG16, and EfficientNet) are rigorously tested. Additionally, various optimizers are explored, with noteworthy results indicating that both datasets achieve peak accuracy through the use of the SGD (Stochastic Gradient Descent) optimizer. For the first dataset, optimal results are obtained with the VGG16 architecture, leveraging a frozen layer in the classification stage and achieving an impressive accuracy of 97.146%. Shifting the focus to the second dataset, the most favorable outcome is realized by employing the EfficientNet architecture without a frozen layer, coupled with weighted loss to address the class imbalance, resulting in an accuracy of 94.798%.
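The weighted cross-entropy used here against class imbalance can be written directly. The class counts below are hypothetical, and the inverse-frequency weighting is one common choice rather than the paper's exact scheme:

```python
import numpy as np

def weighted_cross_entropy(logits, targets, class_weights):
    """Cross-entropy where each sample is weighted by its true class's weight."""
    z = logits - logits.max(axis=1, keepdims=True)                # stable softmax
    log_probs = z - np.log(np.exp(z).sum(axis=1, keepdims=True))
    w = class_weights[targets]                                    # weight per sample
    losses = -w * log_probs[np.arange(len(targets)), targets]
    return float(losses.sum() / w.sum())                          # weight-normalised mean

# Inverse-frequency weights: rare classes contribute more to the loss.
counts = np.array([4000.0, 900.0, 255.0])        # hypothetical class sizes
class_weights = counts.sum() / (len(counts) * counts)
```

Upweighting the rare classes keeps the optimizer from achieving a low loss simply by fitting the majority class well.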
Collapse
Affiliation(s)
- Krisnanda Ahadian
- Informatics Department, Faculty of Computer Science, Brawijaya University, 65145, Malang, Indonesia
| | - Novanto Yudistira
- Informatics Department, Faculty of Computer Science, Brawijaya University, 65145, Malang, Indonesia
| | - Bayu Rahayudi
- Informatics Department, Faculty of Computer Science, Brawijaya University, 65145, Malang, Indonesia
| | - Ahmad Hoirul Basori
- Faculty of Computing and Information Technology in Rabigh, King Abdulaziz University, Rabigh, 21911, Makkah, Saudi Arabia
| | - Sharaf J. Malebary
- Faculty of Computing and Information Technology in Rabigh, King Abdulaziz University, Rabigh, 21911, Makkah, Saudi Arabia
| | - Sami Alesawi
- Faculty of Computing and Information Technology in Rabigh, King Abdulaziz University, Rabigh, 21911, Makkah, Saudi Arabia
| | - Andi Besse Firdausiah Mansur
- Faculty of Computing and Information Technology in Rabigh, King Abdulaziz University, Rabigh, 21911, Makkah, Saudi Arabia
| | - Almuhannad S. Alorfi
- Faculty of Computing and Information Technology in Rabigh, King Abdulaziz University, Rabigh, 21911, Makkah, Saudi Arabia
| | - Omar M. Barukab
- Faculty of Computing and Information Technology in Rabigh, King Abdulaziz University, Rabigh, 21911, Makkah, Saudi Arabia
| |
Collapse
|
12
|
Hopson JB, Flaus A, McGinnity CJ, Neji R, Reader AJ, Hammers A. Deep Convolutional Backbone Comparison for Automated PET Image Quality Assessment. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2024; 8:893-901. [PMID: 39404656 PMCID: PMC7616552 DOI: 10.1109/trpms.2024.3436697] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2024]
Abstract
Pretraining deep convolutional network mappings using natural images helps with medical imaging analysis tasks; this is important given the limited number of clinically-annotated medical images. Many two-dimensional pretrained backbone networks, however, are currently available. This work compared 18 different backbones from 5 architecture groups (pretrained on ImageNet) for the task of assessing [18F]FDG brain Positron Emission Tomography (PET) image quality (reconstructed at seven simulated doses), based on three clinical image quality metrics (global quality rating, pattern recognition, and diagnostic confidence). Using two-dimensional randomly sampled patches, up to eight patients (at three dose levels each) were used for training, with three separate patient datasets used for testing. Each backbone was trained five times with the same training and validation sets, and with six cross-folds. Training only the final fully connected layer (with ~6,000-20,000 trainable parameters) achieved a test mean-absolute-error of ~0.5 (which was within the intrinsic uncertainty of clinical scoring). To compare "classical" and over-parameterized regimes, the pretrained weights of the last 40% of the network layers were then unfrozen. The mean-absolute-error fell below 0.5 for 14 of the 18 backbones assessed, including two that had previously failed to train. Generally, backbones with residual units (e.g. DenseNets and ResNetV2s) were suited to this task, in terms of achieving the lowest mean-absolute-error at test time (~0.45-0.5). This proof-of-concept study shows that over-parameterization may also be important for automated PET image quality assessments.
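The step from linear probing (training only the final layer) to partial fine-tuning (unfreezing the last 40% of layers) is mostly bookkeeping over which parameter groups receive gradients. A minimal sketch of that split, with hypothetical layer names:

```python
# Hypothetical 10-layer backbone. The study first trained only the final
# fully connected layer, then unfroze the pretrained weights of the last
# 40% of the layers; the first 60% stay frozen.
layers = [f"block{i}" for i in range(1, 11)]
cut = int(len(layers) * 0.6)
frozen, trainable = layers[:cut], layers[cut:]
```

In a deep learning framework this split would translate to setting the trainability flag (e.g. `requires_grad`) per parameter group before building the optimizer.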
Collapse
Affiliation(s)
| | - Anthime Flaus
- King's College London & Guy's and St Thomas' PET Centre, King's College London
| | - Colm J McGinnity
- King's College London & Guy's and St Thomas' PET Centre, King's College London
| | - Radhouene Neji
- Department of Biomedical Engineering, King's College London; Siemens Healthcare Limited
| | | | - Alexander Hammers
- King's College London & Guy's and St Thomas' PET Centre, King's College London
| |
Collapse
|
13
|
Ilyas N, Naseer F, Khan A, Raja A, Lee YM, Park JH, Lee B. CytoNet: an efficient dual attention based automatic prediction of cancer sub types in cytology studies. Sci Rep 2024; 14:25809. [PMID: 39468153 PMCID: PMC11519499 DOI: 10.1038/s41598-024-76512-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2023] [Accepted: 10/14/2024] [Indexed: 10/30/2024] Open
Abstract
Computer-assisted diagnosis (CAD) plays a key role in cancer diagnosis and screening. However, current CAD performs poorly on whole slide image (WSI) analysis and thus fails to generalize well. This research aims to develop an automatic classification system to distinguish between different types of carcinomas. Obtaining rich deep features in multi-class classification while achieving high accuracy is still a challenging problem. The detection and classification of cancerous cells in WSIs are quite challenging due to the misclassification of normal lumps and cancerous cells, caused by cluttering, occlusion, and irregular cell distribution. Researchers in the past mostly used hand-crafted features while neglecting the above-mentioned challenges, which reduced classification accuracy. To mitigate this problem we propose an efficient dual attention-based network (CytoNet). The proposed network is composed of two main modules: (i) Efficient-Net and (ii) a Dual Attention Module (DAM). Efficient-Net achieves higher accuracy and efficiency than existing Convolutional Neural Networks (CNNs), and, having been trained on ImageNet, yields highly generic features. The DAM, in turn, is robust in capturing attention-targeted features while suppressing the background. In this way, the combination of an efficient backbone and an attention module yields robust, intrinsic features and comparable performance. We evaluated the proposed network on two well-known datasets: (i) our generated thyroid dataset and (ii) the Mendeley Cervical dataset (Hussain in Data Brief, 2019), with enhanced performance compared to their counterparts. CytoNet demonstrated a 99% accuracy rate on the thyroid dataset in comparison to its counterparts. The precision, recall, and F1-score values achieved on the Mendeley Cervical dataset are 0.992, 0.985, and 0.977, respectively.
The code implementation is available on GitHub. https://github.com/naveedilyas/CytoNet-An-Efficient-Dual-Attention-based-Automatic-Prediction-of-Cancer-Sub-types-in-Cytol.
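The abstract does not spell out the dual attention module's internals, but the channel-attention half of such modules typically follows the squeeze-and-excitation pattern: pool each channel, pass the result through a small bottleneck, and gate channels with a sigmoid. A toy NumPy version with random (untrained) weights, for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)

def channel_attention(feat, w1, w2):
    """Gate each channel of a (C, H, W) feature map by a learned scalar in (0, 1)."""
    s = feat.mean(axis=(1, 2))                  # squeeze: global average pool -> (C,)
    h = np.maximum(w1 @ s, 0.0)                 # excite: bottleneck + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))      # per-channel sigmoid gate
    return feat * gate[:, None, None]

C, H, W = 8, 4, 4
feat = rng.normal(size=(C, H, W))
w1 = rng.normal(size=(C // 4, C))               # bottleneck down-projection
w2 = rng.normal(size=(C, C // 4))               # up-projection back to C channels
out = channel_attention(feat, w1, w2)
```

Because every gate lies in (0, 1), attention can only suppress channels, which is how such modules de-emphasize background relative to the targeted cells.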
Collapse
Affiliation(s)
- Naveed Ilyas
- Department of Bioengineering, University of California Riverside, Riverside, CA, 92521, USA
| | - Farhat Naseer
- School of Mechanical and Manufacturing Engineering (SMME), National University of Science and Technology (NUST), Islamabad, Pakistan
| | - Anwar Khan
- VIB-KU Leuven Center for Cancer Biology, Katholieke Universiteit Leuven, UZ Leuven, Leuven, Belgium
| | - Aamir Raja
- Department of Physics, Khalifa University of Science and Technology, Abu Dhabi, UAE
| | - Yong-Moon Lee
- Department of Pathology, College of Medicine, Dankook University, Cheonan, South Korea
| | - Jae Hyun Park
- Department of Surgery, Wonju Severance Christian Hospital, Wonju College of Medicine, Yonsei University, Wonju, South Korea
| | - Boreom Lee
- Department of Biomedical Science and Engineering, Gwangju Institute of Science and Technology, Gwangju, South Korea.
| |
Collapse
|
14
|
Fu Z, Xi J, Ji Z, Zhang R, Wang J, Shi R, Pu X, Yu J, Xue F, Liu J, Wang Y, Zhong H, Feng J, Zhang M, He Y. Analysis of anterior segment in primary angle closure suspect with deep learning models. BMC Med Inform Decis Mak 2024; 24:251. [PMID: 39251987 PMCID: PMC11385134 DOI: 10.1186/s12911-024-02658-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2024] [Accepted: 08/29/2024] [Indexed: 09/11/2024] Open
Abstract
OBJECTIVE To analyze the anatomical characteristics of anterior chamber configuration in primary angle closure suspect (PACS) patients, and to establish an artificial intelligence (AI)-aided diagnostic system for PACS screening. METHODS A total of 1668 scans of 839 patients were included in this cross-sectional study. The subjects were divided into two groups: a PACS group and a normal group. With anterior segment optical coherence tomography scans, the anatomical diversity between the two groups was compared, and anterior segment structure features of PACS were extracted. Then, an AI-aided diagnostic system was constructed, based on different algorithms such as classification and regression tree (CART), random forest (RF), logistic regression (LR), VGG-16 and AlexNet. The diagnostic efficiencies of the different algorithms were evaluated and compared with those of junior physicians and experienced ophthalmologists. RESULTS RF [sensitivity (Se) = 0.84; specificity (Sp) = 0.92; positive predictive value (PPV) = 0.82; negative predictive value (NPV) = 0.95; area under the curve (AUC) = 0.90] and CART (Se = 0.76, Sp = 0.93, PPV = 0.85, NPV = 0.92, AUC = 0.90) showed better performance than LR (Se = 0.68, Sp = 0.91, PPV = 0.79, NPV = 0.90, AUC = 0.86). Among the convolutional neural networks (CNNs), AlexNet (Se = 0.83, Sp = 0.95, PPV = 0.92, NPV = 0.87, AUC = 0.85) was better than VGG-16 (Se = 0.84, Sp = 0.90, PPV = 0.85, NPV = 0.90, AUC = 0.79). The performance of the 2 CNN algorithms was better than that of 5 junior physicians, and the mean value of the diagnostic indicators of the 2 CNN algorithms was similar to that of experienced ophthalmologists. CONCLUSION PACS patients have distinct anatomical characteristics compared with healthy controls. AI models for PACS screening are reliable and powerful, equivalent to experienced ophthalmologists.
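All of the Se/Sp/PPV/NPV figures reported above derive from a 2x2 confusion matrix. The helper below computes them; the counts in the example are made up, not the study's:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV and NPV from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),   # Se: diseased cases correctly flagged
        "specificity": tn / (tn + fp),   # Sp: healthy cases correctly cleared
        "ppv": tp / (tp + fp),           # how trustworthy a positive call is
        "npv": tn / (tn + fn),           # how trustworthy a negative call is
    }

m = diagnostic_metrics(tp=84, fp=8, fn=16, tn=92)
```

Note that PPV and NPV, unlike Se and Sp, depend on disease prevalence in the evaluated sample, which is why models with similar Se/Sp can report quite different predictive values.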
Collapse
Affiliation(s)
- Ziwei Fu
- The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, 710038, China
- Xi'an Medical University, Xi'an, Shaanxi, 710021, China
- Xi'an Key Laboratory for the Prevention and Treatment of Eye and Brain Neurological Related Diseases, Xi'an, Shaanxi, 710038, China
| | - Jinwei Xi
- The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, 710038, China
| | - Zhi Ji
- The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, 710038, China
- Xi'an Medical University, Xi'an, Shaanxi, 710021, China
| | - Ruxue Zhang
- School of Mathematics, Northwest University, Xi'an, 710127, China
| | - Jianping Wang
- Shaanxi Provincial People's Hospital, Xi'an, Shaanxi, 710068, China
| | - Rui Shi
- Shaanxi Provincial People's Hospital, Xi'an, Shaanxi, 710068, China
| | - Xiaoli Pu
- Xianyang First People's Hospital, Xianyang, Shaanxi Province, 712000, China
| | - Jingni Yu
- Xi'an People's Hospital, Xi'an, Shaanxi, 712099, China
| | - Fang Xue
- Xi'an Medical University, Xi'an, Shaanxi, 710021, China
| | - Jianrong Liu
- Xi'an People's Hospital, Xi'an, Shaanxi, 712099, China
| | - Yanrong Wang
- Yan'an People's Hospital, Yan'an, Shaanxi, 716099, China
| | - Hua Zhong
- The First Affiliated Hospital of Kunming Medical University, Kunming, Yunnan Province, 650032, China
| | - Jun Feng
- School of Mathematics, Northwest University, Xi'an, 710127, China
| | - Min Zhang
- School of Mathematics, Northwest University, Xi'an, 710127, China.
| | - Yuan He
- The Second Affiliated Hospital of Xi'an Medical University, Xi'an, Shaanxi, 710038, China.
- Xi'an Medical University, Xi'an, Shaanxi, 710021, China.
- Xi'an Key Laboratory for the Prevention and Treatment of Eye and Brain Neurological Related Diseases, Xi'an, Shaanxi, 710038, China.
| |
Collapse
|
15
|
Xu H, Yu Y, Chang J, Hu X, Tian Z, Li O. Precision lung cancer screening from CT scans using a VGG16-based convolutional neural network. Front Oncol 2024; 14:1424546. [PMID: 39228981 PMCID: PMC11369893 DOI: 10.3389/fonc.2024.1424546] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2024] [Accepted: 07/31/2024] [Indexed: 09/05/2024] Open
Abstract
Objective The research aims to develop an advanced and precise lung cancer screening model based on Convolutional Neural Networks (CNN). Methods Based on the health medical big data platform of Shandong University, we developed a VGG16-based CNN lung cancer screening model. This model was trained using the computed tomography scan data of patients from Pingyi Traditional Chinese Medicine Hospital in Shandong Province, from January to February 2023. Data augmentation techniques, including random resizing, cropping, horizontal flipping, color jitter, random rotation and normalization, were applied to improve model generalization. We used five-fold cross-validation to robustly assess performance. The model was fine-tuned with an SGD optimizer (learning rate 0.001, momentum 0.9, and L2 regularization) and a learning rate scheduler. Dropout layers were added to prevent the model from relying too heavily on specific neurons, enhancing its ability to generalize. Early stopping was implemented when validation loss did not decrease over 10 epochs. In addition, we evaluated the model's performance with area under the curve (AUC), classification accuracy, positive predictive value (PPV), negative predictive value (NPV), sensitivity, specificity, and F1 score. External validation used an independent dataset from the same hospital, covering January to February 2022. Results The training and validation loss and accuracy over iterations show that both accuracy metrics peak at over 0.9 by iteration 15, prompting early stopping to prevent overfitting. Based on five-fold cross-validation, the ROC curves for the VGG16-based CNN model demonstrate an AUC of 0.963 ± 0.004, highlighting its excellent diagnostic capability. Confusion matrices provide average metrics with a classification accuracy of 0.917 ± 0.004, PPV of 0.868 ± 0.015, NPV of 0.931 ± 0.003, sensitivity of 0.776 ± 0.01, specificity of 0.962 ± 0.005 and F1 score of 0.819 ± 0.008, respectively.
External validation confirmed the model's robustness across different patient populations and imaging conditions. Conclusion The VGG16-Based CNN lung screening model constructed in this study can effectively identify lung tumors, demonstrating reliability and effectiveness in real-world medical settings, and providing strong theoretical and empirical support for its use in lung cancer screening.
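The training recipe described here (SGD with momentum 0.9 and L2 regularisation, plus early stopping after 10 stagnant epochs) maps onto two small pieces of code. The weight-decay value below is an assumption; the abstract does not give one:

```python
import numpy as np

def sgd_momentum_step(w, grad, v, lr=0.001, momentum=0.9, weight_decay=1e-4):
    """One SGD update with momentum; L2 regularisation enters as weight decay."""
    g = grad + weight_decay * w        # L2 penalty adds w to the gradient
    v = momentum * v + g               # velocity accumulates past gradients
    return w - lr * v, v

class EarlyStopping:
    """Signal a stop when validation loss hasn't improved for `patience` epochs."""
    def __init__(self, patience=10):
        self.patience, self.best, self.bad = patience, float("inf"), 0

    def step(self, val_loss):
        if val_loss < self.best:
            self.best, self.bad = val_loss, 0
        else:
            self.bad += 1
        return self.bad >= self.patience   # True -> stop training
```

In a training loop, `EarlyStopping.step` is called once per epoch on the validation loss, and the loop breaks as soon as it returns True.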
Collapse
Affiliation(s)
- Hua Xu
- Department of Infection Control, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Shandong, Jinan, China
| | - Yuanyuan Yu
- Data Science Institute, Shandong University, Jinan, Shandong, China
- Institute for Medical Dataology, Cheeloo College of Medicine, Shandong University, Jinan, China
| | - Jie Chang
- Institute for Medical Dataology, Cheeloo College of Medicine, Shandong University, Jinan, China
| | - Xifeng Hu
- Institute for Medical Dataology, Cheeloo College of Medicine, Shandong University, Jinan, China
| | - Zitong Tian
- Institute for Medical Dataology, Cheeloo College of Medicine, Shandong University, Jinan, China
| | - Ouwen Li
- International Center, Jinan Foreign Language School, Shandong, Jinan, China
| |
Collapse
|
16
|
AlMohimeed A, Shehata M, El-Rashidy N, Mostafa S, Samy Talaat A, Saleh H. ViT-PSO-SVM: Cervical Cancer Predication Based on Integrating Vision Transformer with Particle Swarm Optimization and Support Vector Machine. Bioengineering (Basel) 2024; 11:729. [PMID: 39061811 PMCID: PMC11273508 DOI: 10.3390/bioengineering11070729] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/06/2024] [Revised: 07/10/2024] [Accepted: 07/11/2024] [Indexed: 07/28/2024] Open
Abstract
Cervical cancer (CCa) is the fourth most prevalent and common cancer affecting women worldwide, with increasing incidence and mortality rates. Hence, early detection of CCa plays a crucial role in improving outcomes. Non-invasive imaging procedures with good diagnostic performance are desirable and have the potential to lessen the degree of intervention associated with the gold standard, biopsy. Recently, artificial intelligence-based diagnostic models such as Vision Transformers (ViT) have shown promising performance in image classification tasks, rivaling or surpassing traditional convolutional neural networks (CNNs). This paper studies the effect of applying a ViT to predict CCa using different image benchmark datasets. A newly developed approach (ViT-PSO-SVM) is presented for boosting the results of the ViT by integrating it with particle swarm optimization (PSO) and a support vector machine (SVM). First, the proposed framework extracts features from the Vision Transformer. Then, PSO is used to reduce the complexity of the extracted features and optimize feature representation. Finally, the softmax classification layer is replaced with an SVM classification model to precisely predict CCa. The models are evaluated using two benchmark cervical cell image datasets, namely SipakMed and Herlev, with different classification scenarios: two, three, and five classes. The proposed approach achieved 99.112% accuracy and 99.113% F1-score for SipakMed with two classes, and 97.778% accuracy and 97.805% F1-score for Herlev with two classes, outperforming other Vision Transformers, CNN models, and pre-trained models. Finally, GradCAM is used as an explainable artificial intelligence (XAI) tool to visualize and understand the regions of a given image that are important for the model's prediction.
The obtained experimental results demonstrate the feasibility and efficacy of the developed ViT-PSO-SVM approach and hold the promise of providing a robust, reliable, accurate, and non-invasive diagnostic tool that will lead to improved healthcare outcomes worldwide.
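The PSO stage of ViT-PSO-SVM searches over feature representations with a particle swarm. Vanilla PSO on a toy objective shows the mechanics; all hyperparameters below are generic textbook defaults, not the paper's settings:

```python
import numpy as np

rng = np.random.default_rng(1)

def pso_minimise(f, dim, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Vanilla PSO: each particle is pulled toward its own best and the swarm's best."""
    x = rng.uniform(-5.0, 5.0, size=(n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_val = np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = x + v
        vals = np.array([f(p) for p in x])
        better = vals < pbest_val
        pbest[better], pbest_val[better] = x[better], vals[better]
        gbest = pbest[pbest_val.argmin()].copy()
    return gbest

# Minimise a simple sphere function as a stand-in for a feature-quality objective.
best = pso_minimise(lambda z: float(np.sum(z ** 2)), dim=4)
```

In the paper's pipeline the objective would instead score a candidate feature representation (e.g. by downstream SVM performance) rather than a closed-form function.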
Collapse
Affiliation(s)
- Abdulaziz AlMohimeed
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 13318, Saudi Arabia;
| | - Mohamed Shehata
- Bioengineering Department, Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
| | - Nora El-Rashidy
- Machine Learning and Information Retrieval Department, Faculty of Artificial Intelligence, Kafrelsheiksh University, Kafrelsheiksh 13518, Egypt;
| | - Sherif Mostafa
- Faculty of Computers and Artificial Intelligence, South Valley University, Hurghada 84511, Egypt;
| | - Amira Samy Talaat
- Computers and Systems Department, Electronics Research Institute, Cairo 12622, Egypt;
| | - Hager Saleh
- Faculty of Computers and Artificial Intelligence, South Valley University, Hurghada 84511, Egypt;
- Insight SFI Research Centre for Data Analytics, Galway University, H91 TK33 Galway, Ireland
- Research Development, Atlantic Technological University, Letterkenny, H91 AH5K Donegal, Ireland
| |
Collapse
|
17
|
Wang J, Zheng N, Wan H, Yao Q, Jia S, Zhang X, Fu S, Ruan J, He G, Chen X, Li S, Chen R, Lai B, Wang J, Jiang Q, Ouyang N, Zhang Y. Deep learning models for thyroid nodules diagnosis of fine-needle aspiration biopsy: a retrospective, prospective, multicentre study in China. Lancet Digit Health 2024; 6:e458-e469. [PMID: 38849291 DOI: 10.1016/s2589-7500(24)00085-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2023] [Revised: 03/25/2024] [Accepted: 04/17/2024] [Indexed: 06/09/2024]
Abstract
BACKGROUND Accurately distinguishing between malignant and benign thyroid nodules through fine-needle aspiration cytopathology is crucial for appropriate therapeutic intervention. However, cytopathologic diagnosis is time consuming and hindered by the shortage of experienced cytopathologists. Reliable assistive tools could improve cytopathologic diagnosis efficiency and accuracy. We aimed to develop and test an artificial intelligence (AI)-assistive system for thyroid cytopathologic diagnosis according to the Thyroid Bethesda Reporting System. METHODS 11 254 whole-slide images (WSIs) from 4037 patients were used to train deep learning models. Among the selected WSIs, cell level was manually annotated by cytopathologists according to The Bethesda System for Reporting Thyroid Cytopathology (TBSRTC) guidelines of the second edition (2017 version). A retrospective dataset of 5638 WSIs of 2914 patients from four medical centres was used for validation. 469 patients were recruited for the prospective study of the performance of AI models and their 537 thyroid nodule samples were used. Cohorts for training and validation were enrolled between Jan 1, 2016, and Aug 1, 2022, and the prospective dataset was recruited between Aug 1, 2022, and Jan 1, 2023. The performance of our AI models was estimated as the area under the receiver operating characteristic (AUROC), sensitivity, specificity, accuracy, positive predictive value, and negative predictive value. The primary outcomes were the prediction sensitivity and specificity of the model to assist cyto-diagnosis of thyroid nodules. 
FINDINGS The AUROC of TBSRTC III+ (which distinguishes benign from TBSRTC classes III, IV, V, and VI) was 0·930 (95% CI 0·921-0·939) for Sun Yat-sen Memorial Hospital of Sun Yat-sen University (SYSMH) internal validation and 0·944 (0·929-0·959), 0·939 (0·924-0·955), and 0·971 (0·938-1·000) for The First People's Hospital of Foshan (FPHF), Sichuan Cancer Hospital & Institute (SCHI), and The Third Affiliated Hospital of Guangzhou Medical University (TAHGMU) medical centres, respectively. The AUROC of TBSRTC V+ (which distinguishes benign from TBSRTC classes V and VI) was 0·990 (95% CI 0·986-0·995) for SYSMH internal validation and 0·988 (0·980-0·995), 0·965 (0·953-0·977), and 0·991 (0·972-1·000) for FPHF, SCHI, and TAHGMU medical centres, respectively. For the prospective study at SYSMH, the AUROC of TBSRTC III+ and TBSRTC V+ was 0·977 and 0·981, respectively. With the assistance of AI, the specificity of junior cytopathologists was boosted from 0·887 (95% CI 0·844-0·922) to 0·993 (0·974-0·999) and the accuracy was improved from 0·877 (0·846-0·904) to 0·948 (0·926-0·965). 186 atypia of undetermined significance samples from 186 patients with BRAF mutation information were collected; 43 of them harbour the BRAFV600E mutation. 91% (39/43) of BRAFV600E-positive atypia of undetermined significance samples were identified as malignant by the AI models. INTERPRETATION In this study, we developed an AI-assisted model named the Thyroid Patch-Oriented WSI Ensemble Recognition (ThyroPower) system, which facilitates rapid and robust cyto-diagnosis of thyroid nodules, potentially enhancing the diagnostic capabilities of cytopathologists. Moreover, it serves as a potential solution to mitigate the scarcity of cytopathologists. FUNDING Guangdong Science and Technology Department. TRANSLATION For the Chinese translation of the abstract see Supplementary Materials section.
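The headline numbers above are AUROCs, which the rank (Mann-Whitney) formulation computes directly from model scores and binary labels. A compact NumPy version:

```python
import numpy as np

def auroc(scores, labels):
    """AUROC = probability that a random positive outscores a random negative."""
    scores = np.asarray(scores, dtype=float)
    labels = np.asarray(labels, dtype=int)
    ranks = np.empty(len(scores))
    ranks[scores.argsort()] = np.arange(1, len(scores) + 1)
    for s in np.unique(scores):            # average ranks across tied scores
        tie = scores == s
        ranks[tie] = ranks[tie].mean()
    n_pos = labels.sum()
    n_neg = len(labels) - n_pos
    # Mann-Whitney U statistic of the positives, normalised to [0, 1].
    return (ranks[labels == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
```

Because it depends only on the ranking of scores, AUROC is unaffected by the classification threshold, which makes it a natural summary for a triage model like TBSRTC III+.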
Collapse
Affiliation(s)
- Jue Wang
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
| | - Nafen Zheng
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
| | - Huan Wan
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
| | - Qinyue Yao
- Cells Vision (Guangzhou) Medical Technology, Guangzhou, China
| | - Shijun Jia
- Department of Pathology, Sichuan Clinical Research Center for Cancer, Sichuan Cancer Hospital & Institute, Sichuan Cancer Center, Affiliated Cancer Hospital of University of Electronic Science and Technology of China, Chengdu, China
| | - Xin Zhang
- Department of Pathology, The First People's Hospital of Foshan, Foshan, China
| | - Sha Fu
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
| | - Jingliang Ruan
- Department of Ultrasound, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
| | - Gui He
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
| | - Xulin Chen
- Cells Vision (Guangzhou) Medical Technology, Guangzhou, China
| | - Suiping Li
- Cells Vision (Guangzhou) Medical Technology, Guangzhou, China
| | - Rui Chen
- Cells Vision (Guangzhou) Medical Technology, Guangzhou, China
| | - Boan Lai
- Department of Pathology, The Third Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
| | - Jin Wang
- Cells Vision (Guangzhou) Medical Technology, Guangzhou, China
| | - Qingping Jiang
- Department of Pathology, The Third Affiliated Hospital, Guangzhou Medical University, Guangzhou, China
| | - Nengtai Ouyang
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
| | - Yin Zhang
- Department of Cellular and Molecular Diagnostics Center, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China; Guangdong Provincial Key Laboratory of Malignant Tumor Epigenetics and Gene Regulation, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China.
| |
Collapse
|
18
|
Rahman Z, Pasam T, Rishab, Dandekar MP. Binary classification model of machine learning detected altered gut integrity in controlled-cortical impact model of traumatic brain injury. Int J Neurosci 2024; 134:163-174. [PMID: 35758006 DOI: 10.1080/00207454.2022.2095271] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2022] [Accepted: 06/23/2022] [Indexed: 10/17/2022]
Abstract
Aim of the study: To examine the effect of controlled cortical impact (CCI), a preclinical model of traumatic brain injury (TBI), on intestinal integrity using a binary classification model of machine learning (ML). Materials and methods: Adult male C57BL/6 mice were subjected to CCI surgery using a stereotaxic impactor (Impact One™). The rotarod and hot-plate tests were performed to assess neurological deficits. Results: Mice that underwent CCI displayed marked neurological deficits, evidenced by a decreased latency to fall in the rotarod test and a shorter paw-withdrawal latency in the hot-plate test. Animals were sacrificed 3 days post-injury (dpi). Colon sections were stained with hematoxylin and eosin (H&E) for integration with ML-based algorithms. Multiple stained colon images were captured to build a dataset for an ML model to discriminate CCI from the sham procedure. The best results were obtained with VGG16 features combined with an SVM RBF kernel, and with VGG16 features topped by stacked fully connected layers. We achieved a test accuracy of 84% and predicted the disrupted gut permeability and colonic epithelial wall in the CCI group relative to sham-operated mice. Conclusion: We suggest that ML may become an important tool in the development of preclinical TBI models and the discovery of new therapeutics.
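The best-performing pipeline above (VGG16 features fed to an RBF-kernel SVM) can be sketched as follows. This is a hedged illustration, not the authors' code: real VGG16 feature extraction from the H&E colon images is replaced by a stand-in feature matrix with an injected class shift, so only the classifier stage is representative.

```python
# Sketch of a transfer-learning pipeline: pooled CNN features -> RBF-kernel SVM.
# The VGG16 backbone is simulated by a random feature matrix (hypothetical data).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_images, n_features = 200, 512              # e.g., pooled VGG16 conv features
X = rng.normal(size=(n_images, n_features))
y = rng.integers(0, 2, size=n_images)        # 0 = sham, 1 = CCI
X[y == 1] += 0.5                             # inject a separable class signal

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_tr, y_tr)
print(f"test accuracy: {clf.score(X_te, y_te):.2f}")
```

In practice the stand-in matrix would be replaced by features extracted from a frozen, ImageNet-pretrained VGG16.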
Affiliation(s)
- Zara Rahman
- Department of Pharmacology & Toxicology, National Institute of Pharmaceutical Education and Research (NIPER), Balanagar, Hyderabad, India
- Tulasi Pasam
- Department of Pharmacology & Toxicology, National Institute of Pharmaceutical Education and Research (NIPER), Balanagar, Hyderabad, India
- Rishab
- Department of Computer Science and Engineering, International Institute of Information Technology (IIIT), Hyderabad, India
- Manoj P Dandekar
- Department of Pharmacology & Toxicology, National Institute of Pharmaceutical Education and Research (NIPER), Balanagar, Hyderabad, India
19
Lee Y, Alam MR, Park H, Yim K, Seo KJ, Hwang G, Kim D, Chung Y, Gong G, Cho NH, Yoo CW, Chong Y, Choi HJ. Improved Diagnostic Accuracy of Thyroid Fine-Needle Aspiration Cytology with Artificial Intelligence Technology. Thyroid 2024; 34:723-734. [PMID: 38874262 DOI: 10.1089/thy.2023.0384] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 06/15/2024]
Abstract
Background: Artificial intelligence (AI) is increasingly being applied in pathology and cytology, showing promising results. We collected a large dataset of whole-slide images (WSIs) of thyroid fine-needle aspiration cytology (FNA), incorporating z-stacking, from institutions across the nation to develop an AI model. Methods: We conducted a multicenter retrospective diagnostic accuracy study using a thyroid FNA dataset from the Open AI Dataset Project, consisting of digitized image samples collected from 3 university hospitals and 215 Korean institutions, with extensive quality checks during case selection, scanning, labeling, and review. Multiple z-layer images were captured using three different scanners, and image patches were extracted from WSIs and resized after focus fusion and color normalization. We pretested six AI models on a subset of the data, determined Inception ResNet v2 to be the best model, and subsequently tested the final model on the full dataset. Additionally, we compared the performance of the AI and cytopathologists using 1,031 randomly selected image patches and reevaluated the cytopathologists' performance after they referred to the AI results. Results: A total of 10,332 image patches from 306 thyroid FNAs, comprising 78 malignant (papillary thyroid carcinoma) and 228 benign cases from 86 institutions, were used for AI training. Inception ResNet v2 achieved accuracies of 99.7%, 97.7%, and 94.9% on the training, validation, and test datasets, respectively (sensitivity 99.9%, 99.6%, and 100%; specificity 99.6%, 96.4%, and 90.4%). In the comparison between AI and humans, the AI model exceeded the average expert cytopathologist in accuracy and specificity by more than two standard deviations (accuracy 99.71% [95% confidence interval (CI), 99.38-100.00%] vs. 88.91% [95% CI, 86.99-90.83%]; sensitivity 99.81% [95% CI, 99.54-100.00%] vs. 87.26% [95% CI, 85.22-89.30%]; specificity 99.61% [95% CI, 99.23-99.99%] vs. 90.58% [95% CI, 88.80-92.36%]). Moreover, after referring to the AI results, the accuracy of all the experts (96%, 95%, and 96%, respectively) and their diagnostic agreement (from 0.64 to 0.84) increased. Conclusions: These results suggest that applying AI technology to thyroid FNA cytology may improve diagnostic accuracy and reduce intra- and inter-observer variability among pathologists. Further confirmatory research is needed.
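The head-to-head metrics quoted above (accuracy, sensitivity, specificity, each with a 95% CI) all derive from confusion-matrix counts. A minimal sketch, using hypothetical counts rather than the study's raw data and a simple Wald (normal-approximation) interval:

```python
# Accuracy / sensitivity / specificity with 95% Wald confidence intervals,
# computed from hypothetical confusion-matrix counts.
import math

def metric_with_ci(successes, total, z=1.96):
    """Point estimate and 95% CI for a binomial proportion (Wald interval)."""
    p = successes / total
    half = z * math.sqrt(p * (1 - p) / total)
    return p, (max(0.0, p - half), min(1.0, p + half))

tp, fn, tn, fp = 520, 5, 490, 16             # hypothetical counts
sens, sens_ci = metric_with_ci(tp, tp + fn)  # TP / (TP + FN)
spec, spec_ci = metric_with_ci(tn, tn + fp)  # TN / (TN + FP)
acc, acc_ci = metric_with_ci(tp + tn, tp + fn + tn + fp)
print(f"sensitivity {sens:.3f} CI {sens_ci}")
print(f"specificity {spec:.3f} CI {spec_ci}")
print(f"accuracy    {acc:.3f} CI {acc_ci}")
```

For proportions near 100%, as in the study, a Wilson or Clopper-Pearson interval would be the more robust choice.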
Affiliation(s)
- Yujin Lee
- Department of Hospital Pathology, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Suwon, Republic of Korea
- Mohammad Rizwan Alam
- Department of Hospital Pathology, Uijeongbu St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Uijeongbu, Republic of Korea
- Hongsik Park
- Department of Hospital Pathology, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Suwon, Republic of Korea
- Kwangil Yim
- Department of Hospital Pathology, Uijeongbu St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Uijeongbu, Republic of Korea
- Kyung Jin Seo
- Department of Hospital Pathology, Uijeongbu St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Uijeongbu, Republic of Korea
- Gyungyub Gong
- Department of Pathology, Asan Medical Center, Seoul, Korea
- Nam Hoon Cho
- Department of Pathology, Yonsei University College of Medicine, Seoul, Korea
- Chong Woo Yoo
- Department of Pathology, National Cancer Center, Ilsan, Republic of Korea
- Yosep Chong
- Department of Hospital Pathology, Uijeongbu St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Uijeongbu, Republic of Korea
- Hyun Joo Choi
- Department of Hospital Pathology, St. Vincent's Hospital, College of Medicine, The Catholic University of Korea, Suwon, Republic of Korea
20
Zhao D, Luo M, Zeng M, Yang Z, Guan Q, Wan X, Wang Y, Zhang H, Wang Y, Lu H, Xiang J. Deep convolutional neural network model ResNeSt for discrimination of papillary thyroid carcinomas and benign nodules in thyroid nodules diagnosed as atypia of undetermined significance. Gland Surg 2024; 13:619-629. [PMID: 38845827 PMCID: PMC11150190 DOI: 10.21037/gs-23-486] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2023] [Accepted: 04/11/2024] [Indexed: 06/09/2024]
Abstract
Background A deep convolutional neural network (DCNN) model was employed to differentiate thyroid nodules diagnosed as atypia of undetermined significance (AUS) according to the 2023 Bethesda System for Reporting Thyroid Cytopathology (TBSRTC). The aim of this study was to investigate the efficiency of ResNeSt in improving the diagnostic accuracy of fine-needle aspiration (FNA) biopsy. Methods Fragmented images were used to train and test DCNN models. A training dataset was built from 1,330 samples diagnosed as papillary thyroid carcinoma (PTC) or benign nodules, and a test dataset was built from 173 samples diagnosed as AUS. ResNeSt was trained and tested to provide the differentiation. For the AUS samples, characteristics of the cell nuclei were compared using the Wilcoxon test. Results The ResNeSt model achieved an accuracy of 92.49% (160/173) on fragmented images and 84.78% (39/46) on a patient-wise basis in discriminating PTC from benign nodules among AUS nodules. The sensitivity and specificity of the ResNeSt model were 95.79% and 88.46%, respectively. The κ value between ResNeSt and the pathological results was 0.847 (P<0.001). Regarding the cell nuclei of AUS nodules, both the area and perimeter of malignant nodules were larger than those of benign ones: 2,340.00 (1,769.00, 2,807.00) vs. 1,941.00 (1,567.50, 2,455.75), P<0.001, and 190.46 (167.64, 208.46) vs. 171.71 (154.95, 193.65), P<0.001, respectively. The grayscale value (0 for black, 255 for white) of malignant lesions was lower than that of benign ones, 37.52 (31.41, 46.67) vs. 45.84 (31.88, 57.36), P<0.001, indicating that nuclear staining of malignant lesions was deeper than that of benign ones. Conclusions In summary, the DCNN model ResNeSt showed great potential in discriminating thyroid nodules diagnosed as AUS. Among those nodules, malignant ones showed larger and more deeply stained nuclei than benign ones.
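Two of the analyses above can be mirrored in a few lines: agreement between model and pathology via Cohen's kappa, and a Wilcoxon rank-sum comparison of nuclear areas between malignant and benign AUS nodules. All values below are synthetic stand-ins, seeded only to resemble the reported figures:

```python
# Cohen's kappa for model-vs-pathology agreement, and a Wilcoxon rank-sum
# test on nuclear areas. Synthetic (hypothetical) data, for illustration only.
import numpy as np
from scipy.stats import ranksums
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(1)
pathology = rng.integers(0, 2, size=173)            # ground truth: 1 = PTC
agree = rng.random(173) < 0.92                      # ~92% raw agreement
model = np.where(agree, pathology, 1 - pathology)   # flip labels where disagreeing
kappa = cohen_kappa_score(pathology, model)

area_malignant = rng.normal(2340, 400, size=100)    # nuclear area, arbitrary units
area_benign = rng.normal(1941, 400, size=100)
stat, p = ranksums(area_malignant, area_benign)     # two-sample Wilcoxon rank-sum
print(f"kappa={kappa:.3f}, rank-sum p={p:.2e}")
```

`ranksums` is the independent-samples form of the Wilcoxon test, which matches a malignant-vs-benign group comparison.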
Affiliation(s)
- Dan Zhao
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Mukun Luo
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Min Zeng
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Department of Nursing Administration, Fudan University Shanghai Cancer Center, Shanghai, China
- Zhou Yang
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Qing Guan
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Xiaochun Wan
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China
- Yu Wang
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Hao Zhang
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, China
- Yunjun Wang
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
- Hongtao Lu
- Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China
- Jun Xiang
- Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, China
- Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, China
21
Zhang L, Wong C, Li Y, Huang T, Wang J, Lin C. Artificial intelligence assisted diagnosis of early tc markers and its application. Discov Oncol 2024; 15:172. [PMID: 38761260 PMCID: PMC11102422 DOI: 10.1007/s12672-024-01017-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/18/2024] [Accepted: 05/06/2024] [Indexed: 05/20/2024] Open
Abstract
Thyroid cancer (TC) is a common endocrine malignancy with an increasing incidence worldwide. Early diagnosis is particularly important for TC patients because it allows treatment to begin as early as possible. Artificial intelligence (AI) offers great advantages to complex healthcare systems by analyzing big data with machine learning. AI is now widely used in the early diagnosis of cancers such as TC. Ultrasound detection and fine-needle aspiration biopsy are the main methods for early diagnosis of TC, and AI has been widely applied to detect malignancy in thyroid nodules from ultrasound images, cytopathology images, and molecular markers, showing great potential as an auxiliary diagnostic tool. The latest clinical trials have shown that the performance of AI models matches the diagnostic efficiency of experienced clinicians, and more efficient AI tools will be developed in the future. Therefore, in this review, we summarize recent advances in the application of AI algorithms to assessing the risk of malignancy in thyroid nodules. The objective of this review is to provide a database for the clinical use of AI-assisted diagnosis in TC, as well as new ideas for the next generation of AI-assisted diagnosis in TC.
Affiliation(s)
- Laney Zhang
- Yale School of Public Health, New Haven, CT, USA
- Chinting Wong
- Department of Nuclear Medicine, The First Hospital of Jilin University, Changchun, Jilin, China
- Yungeng Li
- Department of Nuclear Medicine, The First Hospital of Jilin University, Changchun, Jilin, China
- Jiawen Wang
- Department of Nuclear Medicine, The First Hospital of Jilin University, Changchun, Jilin, China
- Chenghe Lin
- Department of Nuclear Medicine, The First Hospital of Jilin University, Changchun, Jilin, China.
22
Rende PRF, Pires JM, Nakadaira KS, Lopes S, Vale J, Hecht F, Beltrão FEL, Machado GJR, Kimura ET, Eloy C, Ramos HE. Revisiting the utility of identifying nuclear grooves as unique nuclear changes by an object detector model. J Pathol Transl Med 2024; 58:117-126. [PMID: 38684222 PMCID: PMC11106606 DOI: 10.4132/jptm.2024.03.07] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2023] [Revised: 02/12/2024] [Accepted: 03/06/2024] [Indexed: 05/02/2024] Open
Abstract
BACKGROUND Among other structures, nuclear grooves are widely found in papillary thyroid carcinoma (PTC). Considering that the application of artificial intelligence to thyroid cytology has potential for the diagnostic routine, our goal was to develop a new supervised convolutional neural network capable of identifying nuclear grooves in Diff-Quik-stained whole-slide images (WSI) obtained from thyroid fine-needle aspiration. METHODS We selected 22 Diff-Quik-stained cytological slides with a cytological diagnosis of PTC and a concordant histological diagnosis. Each slide was scanned to form a WSI. Images containing the region of interest were obtained, followed by pre-formatting, annotation of the nuclear grooves, and data augmentation. The final dataset was divided into training and validation groups in a 7:3 ratio. RESULTS This is the first artificial intelligence model based on object detection applied to nuclear structures in thyroid cytopathology. A total of 7,255 images were obtained from 22 WSI, totaling 7,242 annotated nuclear grooves. The best model was obtained at the 14th of 15 training epochs, with 67% true positives, 49.8% sensitivity, and 43.1% positive predictive value. CONCLUSIONS The model was able to develop a structure-predictor rule, indicating that applying an object-detection artificial intelligence model to the identification of nuclear grooves is feasible. Together with a reduction in interobserver variability and in time per slide, this demonstrates that nuclear evaluation is one avenue for refining diagnosis through computational models.
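The detector metrics reported above follow directly from raw detection counts. A short sketch with hypothetical counts, chosen only so the results land near the reported 49.8% sensitivity and 43.1% positive predictive value:

```python
# Sensitivity (recall) and positive predictive value (precision) for an
# object detector, from hypothetical true/false positive and false negative counts.
tp, fn, fp = 498, 502, 657              # hypothetical detection counts
sensitivity = tp / (tp + fn)            # recall over annotated grooves
ppv = tp / (tp + fp)                    # precision over predicted grooves
print(f"sensitivity={sensitivity:.3f}, PPV={ppv:.3f}")
```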
Affiliation(s)
- Pedro R. F. Rende
- Bioregulation Department, Health and Science Institute, Federal University of Bahia, Salvador, Brazil
- Sara Lopes
- Endocrinology Department, Hospital de Braga, Braga, Portugal
- João Vale
- Laboratory of Pathology of the Institute of Pathology and Molecular Immunology, University of Porto, Porto, Portugal
- Fabio Hecht
- Department of Biomedical Genetics, University of Rochester, Rochester, New York, USA
- Fabyan E. L. Beltrão
- Bioregulation Department, Health and Science Institute, Federal University of Bahia, Salvador, Brazil
- Gabriel J. R. Machado
- Bioregulation Department, Health and Science Institute, Federal University of Bahia, Salvador, Brazil
- Edna T. Kimura
- Institute of Biomedical Sciences, University of São Paulo, São Paulo, Brazil
- Catarina Eloy
- Laboratory of Pathology of the Institute of Pathology and Molecular Immunology, University of Porto, Porto, Portugal
- Faculty of Medicine, University of Porto, Porto, Portugal
- Helton E. Ramos
- Bioregulation Department, Health and Science Institute, Federal University of Bahia, Salvador, Brazil
- Postgraduate Program in Medicine and Health, Bahia Faculty of Medicine, Federal University of Bahia, Salvador, Brazil
23
Baffa MDFO, Zezell DM, Bachmann L, Pereira TM, Deserno TM, Felipe JC. Deep neural networks can differentiate thyroid pathologies on infrared hyperspectral images. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 247:108100. [PMID: 38442622 DOI: 10.1016/j.cmpb.2024.108100] [Citation(s) in RCA: 2] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/07/2023] [Revised: 02/12/2024] [Accepted: 02/23/2024] [Indexed: 03/07/2024]
Abstract
BACKGROUND AND OBJECTIVE The thyroid is a gland responsible for producing important body hormones. Several pathologies can affect this gland, such as thyroiditis, hypothyroidism, and thyroid cancer. Visual histological analysis of thyroid specimens is a valuable process that enables pathologists to detect diseases with high efficiency, providing the patient with a better prognosis. Existing computer vision systems developed to aid the analysis of histological samples have limitations in distinguishing pathologies with similar characteristics or samples containing multiple diseases. To overcome this challenge, hyperspectral images are being studied to represent biological samples based on their molecular interaction with light. METHODS In this study, we acquire infrared absorbance spectra from each voxel of histological specimens. These data are then used to develop a multiclass fully connected neural network model that discriminates spectral patterns, enabling the classification of voxels as healthy, cancerous, or goiter. RESULTS In experiments using the k-fold cross-validation protocol, we obtained an average accuracy of 93.66%, a sensitivity of 93.47%, and a specificity of 96.93%. Our results demonstrate the feasibility of using infrared hyperspectral imaging to characterize healthy tissue and thyroid pathologies from absorbance measurements. The proposed deep learning model has the potential to improve diagnostic efficiency and enhance patient outcomes.
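The evaluation protocol above can be sketched end to end: a small fully connected network classifying per-voxel absorbance spectra into three classes (healthy / cancerous / goiter), scored with k-fold cross-validation. The spectra here are synthetic stand-ins for real infrared measurements, and the network size is a hypothetical choice:

```python
# K-fold cross-validation of a small fully connected network on synthetic
# per-voxel "absorbance spectra" (hypothetical data standing in for FTIR).
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n_per_class, n_wavenumbers = 120, 64
centers = rng.normal(size=(3, n_wavenumbers))        # one mean spectrum per class
X = np.vstack([c + 0.3 * rng.normal(size=(n_per_class, n_wavenumbers))
               for c in centers])
y = np.repeat([0, 1, 2], n_per_class)                # 0 healthy, 1 cancer, 2 goiter

clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"5-fold accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```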
Affiliation(s)
- Luciano Bachmann
- Department of Physics, University of São Paulo, Ribeirão Preto, SP, Brazil
- Thiago Martini Pereira
- Department of Science and Technology, Federal University of São Paulo, São José dos Campos, SP, Brazil
- Thomas Martin Deserno
- Peter L. Reichertz Institute for Medical Informatics, Technische Universität Braunschweig, Braunschweig, Germany
- Joaquim Cezar Felipe
- Department of Computing and Mathematics, University of São Paulo, Ribeirão Preto, SP, Brazil
24
Kim D, Sundling KE, Virk R, Thrall MJ, Alperstein S, Bui MM, Chen-Yost H, Donnelly AD, Lin O, Liu X, Madrigal E, Michelow P, Schmitt FC, Vielh PR, Zakowski MF, Parwani AV, Jenkins E, Siddiqui MT, Pantanowitz L, Li Z. Digital cytology part 2: artificial intelligence in cytology: a concept paper with review and recommendations from the American Society of Cytopathology Digital Cytology Task Force. J Am Soc Cytopathol 2024; 13:97-110. [PMID: 38158317 DOI: 10.1016/j.jasc.2023.11.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/06/2023] [Revised: 11/28/2023] [Accepted: 11/29/2023] [Indexed: 01/03/2024]
Abstract
Digital cytology and artificial intelligence (AI) are gaining greater adoption in the cytology laboratory. However, peer-reviewed real-world data and literature on the current clinical landscape are lacking. The American Society of Cytopathology, in conjunction with the International Academy of Cytology and the Digital Pathology Association, established a special task force comprising 20 members with expertise and/or interest in digital cytology. The aim of the group was to investigate the feasibility of incorporating digital cytology, specifically cytology whole-slide scanning and AI applications, into the workflow of the laboratory. In turn, the impact on cytopathologists, cytologists (cytotechnologists), and cytology departments was also assessed. The task force reviewed existing literature on digital cytology, conducted a worldwide survey, and held a virtual roundtable discussion on digital cytology and AI with multiple industry corporate representatives. This white paper, presented in 2 parts, summarizes the current state of digital cytology and AI in global cytology practice. Part 1, presented as a separate paper, details a review and best-practice recommendations for incorporating digital cytology into practice. Part 2, presented here, provides a comprehensive review of AI in cytology practice along with best-practice recommendations and legal considerations. Additionally, results of the global cytology survey, highlighting current AI practices and attitudes in various laboratories, are reported.
Affiliation(s)
- David Kim
- Department of Pathology & Laboratory Medicine, Memorial Sloan-Kettering Cancer Center, New York, New York
- Kaitlin E Sundling
- The Wisconsin State Laboratory of Hygiene and Department of Pathology and Laboratory Medicine, University of Wisconsin-Madison, Madison, Wisconsin
- Renu Virk
- Department of Pathology and Cell Biology, Columbia University, New York, New York
- Michael J Thrall
- Department of Pathology and Genomic Medicine, Houston Methodist Hospital, Houston, Texas
- Susan Alperstein
- Department of Pathology and Laboratory Medicine, New York Presbyterian-Weill Cornell Medicine, New York, New York
- Marilyn M Bui
- The Department of Pathology, Moffitt Cancer Center & Research Institute, Tampa, Florida
- Amber D Donnelly
- Diagnostic Cytology Education, University of Nebraska Medical Center, College of Allied Health Professions, Omaha, Nebraska
- Oscar Lin
- Department of Pathology & Laboratory Medicine, Memorial Sloan-Kettering Cancer Center, New York, New York
- Xiaoying Liu
- Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, New Hampshire
- Emilio Madrigal
- Department of Pathology, Massachusetts General Hospital, Boston, Massachusetts
- Pamela Michelow
- Division of Anatomical Pathology, School of Pathology, University of the Witwatersrand, Johannesburg, South Africa; Department of Pathology, National Health Laboratory Services, Johannesburg, South Africa
- Fernando C Schmitt
- Department of Pathology, Medical Faculty of Porto University, Porto, Portugal
- Philippe R Vielh
- Department of Pathology, Medipath and American Hospital of Paris, Paris, France
- Anil V Parwani
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Momin T Siddiqui
- Department of Pathology and Laboratory Medicine, New York Presbyterian-Weill Cornell Medicine, New York, New York
- Liron Pantanowitz
- Department of Pathology, University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania.
- Zaibo Li
- Department of Pathology, The Ohio State University Wexner Medical Center, Columbus, Ohio.
25
Gökmen Inan N, Kocadağlı O, Yıldırım D, Meşe İ, Kovan Ö. Multi-class classification of thyroid nodules from automatic segmented ultrasound images: Hybrid ResNet based UNet convolutional neural network approach. COMPUTER METHODS AND PROGRAMS IN BIOMEDICINE 2024; 243:107921. [PMID: 37950926 DOI: 10.1016/j.cmpb.2023.107921] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/03/2023] [Revised: 10/20/2023] [Accepted: 11/06/2023] [Indexed: 11/13/2023]
Abstract
BACKGROUND AND OBJECTIVES Early detection and diagnosis of thyroid nodule types are important because the nodules can be treated more effectively in their early stages. Thyroid nodule types are generally stated as atypia of undetermined significance/follicular lesion of undetermined significance (AUS/FLUS), benign follicular, and papillary follicular. The risk of malignancy for AUS/FLUS is typically 5-15%, though some studies report a risk as high as 25%. Without complete histology it is difficult to classify nodules, and these diagnostic procedures are costly and risky. To minimize laborious workload and misdiagnosis, various AI-based decision support systems have recently been developed. METHODS In this study, a novel AI-based decision support system was developed for automated segmentation and classification of thyroid nodule types. The system is based on a hybrid deep-learning procedure that performs automatic thyroid-nodule segmentation and then classification. Segmentation is executed with U-Net architectures such as ResUNet and ResUNet++, integrating feature extraction and upsampling with dropout to prevent overfitting. Nodule classification is performed by deep network architectures such as VGG-16, DenseNet121, ResNet-50, and Inception ResNet-v2, evaluated with criteria including Intersection over Union (IoU), Dice coefficient, accuracy, precision, and recall. RESULTS A total of 880 patients aged 10 to 90 years were included, with ultrasound images and demographics. ResUNet++ demonstrated excellent segmentation outcomes, attaining a Dice coefficient of 92.4% and a mean IoU of 89.7%. ResNet-50 and Inception ResNet-v2, trained on the U-Net-segmented images, achieved the highest classification accuracies, 96.6% and 95.0%, respectively, and classified AUS/FLUS from the segmented images with AUCs of 97.0% and 96.0%, respectively. CONCLUSIONS The proposed AI-based decision support system improves the automatic segmentation of AUS/FLUS and outperforms available approaches in the literature with respect to accuracy and the Jaccard and Dice losses. The system has great potential for clinical use by both radiologists and surgeons.
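The two segmentation metrics quoted above have compact definitions. A minimal sketch on toy binary masks (the epsilon term is a common numerical-stability convention, not from the paper):

```python
# Dice coefficient and Intersection over Union (IoU) for binary masks.
import numpy as np

def dice(a, b, eps=1e-7):
    """Dice = 2|A∩B| / (|A| + |B|)."""
    inter = np.logical_and(a, b).sum()
    return (2 * inter + eps) / (a.sum() + b.sum() + eps)

def iou(a, b, eps=1e-7):
    """IoU (Jaccard) = |A∩B| / |A∪B|."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return (inter + eps) / (union + eps)

pred = np.zeros((64, 64), bool); pred[10:40, 10:40] = True  # predicted nodule mask
gt   = np.zeros((64, 64), bool); gt[15:45, 15:45] = True    # ground-truth mask
print(f"Dice={dice(pred, gt):.3f}, IoU={iou(pred, gt):.3f}")
```

Dice weights the overlap more generously than IoU; for the same pair of masks, Dice is always at least as large.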
Affiliation(s)
- Neslihan Gökmen Inan
- College of Engineering, Computer Engineering Department, Koç University, Türkiye
- Ozan Kocadağlı
- Department of Statistics, Faculty of Science and Letters, Mimar Sinan Fine Arts University, Silahsör Cad. No. 81, 34380 Bomonti/Sisli, Istanbul, Türkiye.
- İsmail Meşe
- Department of Radiology, Erenkoy Mental Health and Neurology Training and Research Hospital, Health Sciences University, Türkiye
- Özge Kovan
- Vocational School of Health Services, Medical Imaging Techniques, Acıbadem University, Türkiye
26
Chen L, Chen H, Pan Z, Xu S, Lai G, Chen S, Wang S, Gu X, Zhang Y. ThyroidNet: A Deep Learning Network for Localization and Classification of Thyroid Nodules. COMPUTER MODELING IN ENGINEERING & SCIENCES : CMES 2023; 139:361-382. [PMID: 38566835 PMCID: PMC7615790 DOI: 10.32604/cmes.2023.031229] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 04/04/2024]
Abstract
Aim This study aims to establish an artificial intelligence model, ThyroidNet, to accurately diagnose thyroid nodules using deep learning techniques. Methods A novel method, ThyroidNet, is introduced and evaluated for the localization and classification of thyroid nodules. First, we propose the multitask TransUnet, which combines the TransUnet encoder and decoder with multitask learning. Second, we propose the DualLoss function, tailored to the thyroid nodule localization and classification tasks; it balances the learning of the two tasks to improve the model's generalization ability. Third, we introduce data augmentation strategies. Finally, we present ThyroidNet, a novel deep learning model for accurate detection of thyroid nodules. Results ThyroidNet was evaluated on private datasets and compared with existing methods, including U-Net and TransUnet. Experimental results show that ThyroidNet outperformed these methods in localizing and classifying thyroid nodules, improving accuracy by 3.9% and 1.5%, respectively. Conclusion ThyroidNet significantly improves the clinical diagnosis of thyroid nodules and supports medical image analysis tasks. Future research directions include optimizing the model structure, expanding the dataset, reducing computational complexity and memory requirements, and exploring additional applications of ThyroidNet in medical image analysis.
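The abstract says DualLoss balances the localization and classification objectives but does not give its form, so the following is a generic, hypothetical sketch of such a multitask loss: a weighted sum of a smooth-L1 box-regression term and a cross-entropy term. The weights and both component losses are assumptions, not the paper's definition.

```python
# Hypothetical multitask "dual loss": weighted sum of a localization term
# (smooth L1 on box coordinates) and a classification term (cross-entropy).
import numpy as np

def cross_entropy(logits, label):
    """Numerically stable cross-entropy for a single example."""
    z = logits - logits.max()
    logp = z - np.log(np.exp(z).sum())
    return -logp[label]

def smooth_l1(pred_box, gt_box, beta=1.0):
    """Smooth L1 (Huber-style) loss summed over box coordinates."""
    d = np.abs(pred_box - gt_box)
    return np.where(d < beta, 0.5 * d**2 / beta, d - 0.5 * beta).sum()

def dual_loss(logits, label, pred_box, gt_box, w_loc=1.0, w_cls=1.0):
    return w_loc * smooth_l1(pred_box, gt_box) + w_cls * cross_entropy(logits, label)

loss = dual_loss(np.array([2.0, 0.5]), 0,                 # class logits, true class
                 np.array([0.1, 0.2, 0.5, 0.6]),          # predicted box (x1,y1,x2,y2)
                 np.array([0.0, 0.2, 0.55, 0.6]))         # ground-truth box
print(f"{loss:.4f}")
```

Tuning `w_loc` and `w_cls` is one simple way to "balance the learning of the two tasks" that the abstract describes.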
Affiliation(s)
- Lu Chen
- Ultrasonic Department, Zhongda Hospital Affiliated to Southeast University, Nanjing, 210009, China
- Huaqiang Chen
- School of Physics and Information Engineering, Jiangsu Second Normal University, Nanjing, 211200, China
- Zhikai Pan
- School of Software Engineering, Quanzhou Normal University, Quanzhou, 362000, China
- Sheng Xu
- School of Physics and Information Engineering, Jiangsu Second Normal University, Nanjing, 211200, China
- Guangsheng Lai
- School of Physics and Information Engineering, Jiangsu Second Normal University, Nanjing, 211200, China
- Shuwen Chen
- School of Physics and Information Engineering, Jiangsu Second Normal University, Nanjing, 211200, China
- State Key Laboratory of Millimeter Waves, Southeast University, Nanjing, 210096, China
- Jiangsu Province Engineering Research Center of Basic Education Big Data Application, Jiangsu Second Normal University, Nanjing, 211200, China
- Shuihua Wang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
- Department of Biological Sciences, Xi’an Jiaotong-Liverpool University, Suzhou, 215123, China
- Xiaodong Gu
- School of Physics and Information Engineering, Jiangsu Second Normal University, Nanjing, 211200, China
- Jiangsu Province Engineering Research Center of Basic Education Big Data Application, Jiangsu Second Normal University, Nanjing, 211200, China
- Yudong Zhang
- School of Computing and Mathematical Sciences, University of Leicester, Leicester, LE1 7RH, UK
- Department of Information Systems, Faculty of Computing and Information Technology, King Abdulaziz University, Jeddah, 21589, Saudi Arabia
27
Chen W, Ayoub M, Liao M, Shi R, Zhang M, Su F, Huang Z, Li Y, Wang Y, Wong KK. A fusion of VGG-16 and ViT models for improving bone tumor classification in computed tomography. J Bone Oncol 2023; 43:100508. [PMID: 38021075 PMCID: PMC10654018 DOI: 10.1016/j.jbo.2023.100508] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2023] [Revised: 08/14/2023] [Accepted: 09/20/2023] [Indexed: 12/01/2023] Open
Abstract
Background and Objective Bone tumors present significant challenges in orthopedic medicine due to variations in clinical treatment approaches for different tumor types, which include benign, malignant, and intermediate cases. Convolutional neural networks (CNNs) have emerged as prominent models for tumor classification. However, their limited perception ability hinders the acquisition of global structural information, potentially affecting classification accuracy. To address this limitation, we propose an optimized deep learning algorithm for precise classification of diverse bone tumors. Materials and Methods Our dataset comprises 786 computed tomography (CT) images of bone tumors, featuring sections from two distinct bones, namely the tibia and femur. Sourced from The Second Affiliated Hospital of Fujian Medical University, the dataset was meticulously preprocessed with noise-reduction techniques. We introduce a novel fusion model, VGG16-ViT, leveraging the advantages of the VGG-16 network and the Vision Transformer (ViT) model. Specifically, we select 27 features from the third layer of VGG-16 and input them into the Vision Transformer encoder for comprehensive training. Furthermore, we evaluate the impact of secondary migration using CT images from Xiangya Hospital for validation. Results The proposed fusion model demonstrates notable improvements in classification performance. It reduces training time while achieving a classification accuracy of 97.6%, an improvement of 8% in sensitivity and specificity. Furthermore, the investigation into the effects of secondary migration on experimental outcomes across the three models reveals its potential to enhance system performance. Conclusion Our novel VGG-16 and Vision Transformer joint network exhibits robust classification performance on bone tumor datasets. The integration of these models enables precise and efficient classification, accommodating the diverse characteristics of different bone tumor types. This advancement holds great significance for the early detection and prognosis of bone tumor patients.
Affiliation(s)
- Weimin Chen
- School of Information and Electronics, Hunan City University, Yiyang 413000, China
| | - Muhammad Ayoub
- School of Computer Science and Engineering, Central South University, Changsha 410083, Hunan, China
| | - Mengyun Liao
- School of Computer Science and Engineering, Central South University, Changsha 410083, Hunan, China
| | - Ruizheng Shi
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
| | - Mu Zhang
- Department of Emergency, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
| | - Feng Su
- Department of Emergency, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
| | - Zhiguo Huang
- Department of Emergency, Xiangya Hospital, Central South University, Changsha 410008, Hunan, China
| | - Yuanzhe Li
- Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
| | - Yi Wang
- Department of CT/MRI, The Second Affiliated Hospital of Fujian Medical University, Quanzhou 362000, China
| | - Kevin K.L. Wong
- School of Information and Electronics, Hunan City University, Yiyang 413000, China
- Department of Mechanical Engineering, College of Engineering, University of Saskatchewan, Saskatoon, SK S7N 5A9, Canada
28
Slabaugh G, Beltran L, Rizvi H, Deloukas P, Marouli E. Applications of machine and deep learning to thyroid cytology and histopathology: a review. Front Oncol 2023; 13:958310. PMID: 38023130; PMCID: PMC10661921; DOI: 10.3389/fonc.2023.958310.
Abstract
This review synthesises past research into how machine and deep learning can improve the cyto- and histopathology processing pipelines for thyroid cancer diagnosis. The current gold-standard preoperative technique of fine-needle aspiration cytology has high interobserver variability, often returns indeterminate samples and cannot reliably identify some pathologies; histopathology analysis addresses these issues to an extent, but it requires surgical resection of the suspicious lesions so cannot influence preoperative decisions. Motivated by these issues, as well as by the chronic shortage of trained pathologists, much research has been conducted into how artificial intelligence could improve current pipelines and reduce the pressure on clinicians. Many past studies have indicated the significant potential of automated image analysis in classifying thyroid lesions, particularly for those of papillary thyroid carcinoma, but these have generally been retrospective, so questions remain about both the practical efficacy of these automated tools and the realities of integrating them into clinical workflows. Furthermore, the nature of thyroid lesion classification is significantly more nuanced in practice than many current studies have addressed, and this, along with the heterogeneous nature of processing pipelines in different laboratories, means that no solution has proven itself robust enough for clinical adoption. There are, therefore, multiple avenues for future research: examine the practical implementation of these algorithms as pathologist decision-support systems; improve interpretability, which is necessary for developing trust with clinicians and regulators; and investigate multiclassification on diverse multicentre datasets, aiming for methods that demonstrate high performance in a process- and equipment-agnostic manner.
Affiliation(s)
- Greg Slabaugh
- Digital Environment Research Institute, Queen Mary University of London, London, United Kingdom
| | - Luis Beltran
- Barts Health NHS Trust, The Royal London Hospital, London, United Kingdom
| | - Hasan Rizvi
- Barts Health NHS Trust, The Royal London Hospital, London, United Kingdom
| | - Panos Deloukas
- William Harvey Research Institute, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
| | - Eirini Marouli
- Digital Environment Research Institute, Queen Mary University of London, London, United Kingdom
- Barts Health NHS Trust, The Royal London Hospital, London, United Kingdom
- William Harvey Research Institute, Barts and The London School of Medicine and Dentistry, Queen Mary University of London, London, United Kingdom
29
Esce A, Redemann JP, Olson GT, Hanson JA, Agarwal S, Yenwongfai L, Ferreira J, Boyd NH, Bocklage T, Martin DR. Lymph Node Metastases in Papillary Thyroid Carcinoma can be Predicted by a Convolutional Neural Network: a Multi-Institution Study. Ann Otol Rhinol Laryngol 2023; 132:1373-1379. PMID: 36896865; DOI: 10.1177/00034894231158464.
Abstract
OBJECTIVES The presence of nodal metastases in patients with papillary thyroid carcinoma (PTC) has both staging and treatment implications. However, lymph nodes are often not removed during thyroidectomy. Prior work has demonstrated the capability of artificial intelligence (AI) to predict the presence of nodal metastases in PTC from the primary tumor histopathology alone. This study aimed to replicate these results with multi-institutional data. METHODS Cases of conventional PTC were identified from the records of 2 large academic institutions. Only patients with complete pathology data, including at least 3 sampled lymph nodes, were included in the study. Tumors were designated "positive" if they had at least 5 lymph node metastases. First, algorithms were trained separately on each institution's data and tested independently on the other institution's data. Then, the data sets were combined and new algorithms were developed and tested. The primary tumors were randomized into 2 groups, one to train the algorithm and another to test it. A low level of supervision was used to train the algorithm. Board-certified pathologists annotated the slides. HALO-AI convolutional neural network and image software was used to perform training and testing. Receiver operating characteristic curves and the Youden J statistic were used for primary analysis. RESULTS There were 420 cases used in analyses, 45% of which were negative. The best-performing single-institution algorithm had an area under the curve (AUC) of 0.64, with a sensitivity and specificity of 65% and 61%, respectively, when tested on the other institution's data. The best-performing combined-institution algorithm had an AUC of 0.84, with a sensitivity and specificity of 68% and 91%, respectively. CONCLUSION A convolutional neural network can produce an accurate and robust algorithm capable of predicting nodal metastases from primary PTC histopathology alone, even in the setting of multi-institutional data.
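The Youden J statistic named in the primary analysis above selects the ROC operating point that maximizes sensitivity + specificity - 1. A minimal sketch with made-up labels and scores (the function names and data are illustrative, not the study's):

```python
def youden_j(labels, scores, threshold):
    """Youden J = sensitivity + specificity - 1 at a given score threshold."""
    tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= threshold)
    fn = sum(1 for y, s in zip(labels, scores) if y == 1 and s < threshold)
    tn = sum(1 for y, s in zip(labels, scores) if y == 0 and s < threshold)
    fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= threshold)
    return tp / (tp + fn) + tn / (tn + fp) - 1

def best_threshold(labels, scores):
    """Use the observed scores as candidate thresholds; return the J-maximizing one."""
    return max(sorted(set(scores)), key=lambda t: youden_j(labels, scores, t))
```

For example, with labels [1, 1, 0, 0] and scores [0.9, 0.7, 0.6, 0.2], the threshold 0.7 separates the classes perfectly and yields J = 1.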
Affiliation(s)
- Antoinette Esce
- Department of Surgery, Division of Otolaryngology Head and Neck Surgery, University of New Mexico Health Sciences Center, Albuquerque, NM, USA
| | - Jordan P Redemann
- Department of Pathology, University of New Mexico Health Sciences Center, Albuquerque, NM, USA
| | - Garth T Olson
- Department of Surgery, Division of Otolaryngology Head and Neck Surgery, University of New Mexico Health Sciences Center, Albuquerque, NM, USA
| | - Joshua A Hanson
- Department of Pathology, University of New Mexico Health Sciences Center, Albuquerque, NM, USA
| | - Shweta Agarwal
- Department of Pathology, University of New Mexico Health Sciences Center, Albuquerque, NM, USA
| | - Leonard Yenwongfai
- Department of Pathology, University of Kentucky College of Medicine, Lexington, KY, USA
| | - Juanita Ferreira
- Department of Pathology, University of Kentucky College of Medicine, Lexington, KY, USA
| | - Nathan H Boyd
- Department of Surgery, Division of Otolaryngology Head and Neck Surgery, University of New Mexico Health Sciences Center, Albuquerque, NM, USA
| | - Thèrése Bocklage
- Department of Pathology, University of Kentucky College of Medicine, Lexington, KY, USA
| | - David R Martin
- Department of Pathology, University of New Mexico Health Sciences Center, Albuquerque, NM, USA
30
Jia J, Wei Z, Sun M. EMDL_m6Am: identifying N6,2'-O-dimethyladenosine sites based on stacking ensemble deep learning. BMC Bioinformatics 2023; 24:397. PMID: 37880673; PMCID: PMC10598967; DOI: 10.1186/s12859-023-05543-2.
Abstract
BACKGROUND N6,2'-O-dimethyladenosine (m6Am) is an abundant RNA methylation modification on vertebrate mRNAs and is present in the transcription initiation region of mRNAs. It has recently been shown experimentally to be associated with several human disorders, including obesity and stomach cancer. As a result, the m6Am site will play a crucial part in the regulation of RNA if it can be correctly identified. RESULTS This study proposes a novel deep learning-based m6Am prediction model, EMDL_m6Am, which employs one-hot encoding to express the feature map of the RNA sequence and recognizes m6Am sites by integrating different CNN models via stacking, including DenseNet, an inflated convolutional network (DCNN), and a deep multiscale residual network (MSRN). The sensitivity (Sn), specificity (Sp), accuracy (ACC), Matthews correlation coefficient (MCC), and area under the curve (AUC) of our model on the training data set reach 86.62%, 88.94%, 87.78%, 0.7590, and 0.8778, respectively, and the prediction results on the independent test set are as high as 82.25%, 79.72%, 80.98%, 0.6199, and 0.8211. CONCLUSIONS The experimental results demonstrate that EMDL_m6Am greatly improves the predictive performance for m6Am sites and can provide a valuable reference for subsequent studies. The source code and experimental data are available at: https://github.com/13133989982/EMDL-m6Am.
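One-hot encoding of an RNA sequence, the input representation mentioned above, can be sketched as follows (a generic illustration; the model's exact feature-map layout is not specified here, and the helper name is ours):

```python
RNA_BASES = "ACGU"

def one_hot_rna(seq):
    """Encode an RNA string as a list of 4-dimensional indicator vectors (A, C, G, U)."""
    index = {base: i for i, base in enumerate(RNA_BASES)}
    return [[1 if index[b] == j else 0 for j in range(len(RNA_BASES))]
            for b in seq.upper()]
```

Each position of the sequence becomes one sparse row, so a sequence of length L maps to an L x 4 feature map suitable for a CNN.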
Affiliation(s)
- Jianhua Jia
- School of Information Engineering, Jingdezhen Ceramic University, Jingdezhen, 333403, China.
| | - Zhangying Wei
- School of Information Engineering, Jingdezhen Ceramic University, Jingdezhen, 333403, China.
| | - Mingwei Sun
- School of Information Engineering, Jingdezhen Ceramic University, Jingdezhen, 333403, China
31
Huang X, Chen X, Zhong X, Tian T. The CNN model aided the study of the clinical value hidden in the implant images. J Appl Clin Med Phys 2023; 24:e14141. PMID: 37656066; PMCID: PMC10562019; DOI: 10.1002/acm2.14141.
Abstract
PURPOSE This article aims to construct a new method to evaluate radiographic image identification results based on artificial intelligence, which can complement the limited vision of researchers when studying the effect of various factors on clinical implantation outcomes. METHODS We constructed a convolutional neural network (CNN) model using clinical implant radiographic images and used gradient-weighted class activation mapping (Grad-CAM) to obtain thermal maps presenting identification differences before performing statistical analyses. Subsequently, to verify whether the differences presented by the Grad-CAM algorithm would be of value to clinical practice, we measured the bone thickness around the identified sites. Finally, we analyzed the influence of the implant type on the implantation according to the measurement results. RESULTS (1) The thermal maps showed that the sites with significant differences between Straumann BL and Bicon implants as identified by the CNN model were mainly the thread and neck areas. (2) The heights of the mesial, distal, buccal, and lingual bone of the Bicon implant post-op were greater than those of the Straumann BL (P < 0.05). (3) Between the first and second stages of surgery, the amount of bone-thickness variation at the buccal and lingual sides of the Bicon implant platform was greater than that of the Straumann BL implant (P < 0.05). CONCLUSION We found that the identified neck area of the Bicon implant was placed deeper than that of the Straumann BL implant, and there was more bone resorption on the buccal and lingual sides of the Bicon implant platform between the first and second stages of surgery. In summary, this study proves that the CNN classification model can identify differences that complement our limited vision.
Affiliation(s)
- Xinxu Huang
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
| | - Xingyu Chen
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
| | - Xinnan Zhong
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
| | - Taoran Tian
- State Key Laboratory of Oral Diseases, National Clinical Research Center for Oral Diseases, West China Hospital of Stomatology, Sichuan University, Chengdu, China
32
Ren W, Zhu Y, Wang Q, Song Y, Fan Z, Bai Y, Lin D. Deep learning prediction model for central lymph node metastasis in papillary thyroid microcarcinoma based on cytology. Cancer Sci 2023; 114:4114-4124. PMID: 37574759; PMCID: PMC10551586; DOI: 10.1111/cas.15930.
Abstract
Controversy exists regarding whether patients with low-risk papillary thyroid microcarcinoma (PTMC) should undergo surgery or active surveillance; the inaccuracy of preoperative clinical lymph node status assessment is one of the primary factors contributing to the controversy. It is therefore imperative to accurately predict the lymph node status of PTMC before surgery. We selected 208 preoperative fine-needle aspiration (FNA) liquid-based preparations of PTMC; all of these cases underwent lymph node dissection and, aside from lymph node status, were consistent with low-risk PTMC. We separated them into two groups according to whether the postoperative pathology showed central lymph node metastases. The deep learning model was expected to predict, based on the preoperative thyroid FNA liquid-based preparation, whether PTMC was accompanied by central lymph node metastases. Our deep learning model attained a sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and accuracy of 78.9% (15/19), 73.9% (17/23), 71.4% (15/21), 81.0% (17/21), and 76.2% (32/42), respectively. The area under the receiver operating characteristic curve (AUC) was 0.8503. The predictive performance of the deep learning model was superior to that of traditional clinical evaluation, and further analysis revealed the cell morphologies that played key roles in model prediction. Our study suggests that a deep learning model based on preoperative thyroid FNA liquid-based preparations is a reliable strategy for predicting central lymph node metastases in PTMC, and its performance surpasses that of traditional clinical examination.
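The performance figures quoted above follow directly from the confusion counts given in parentheses (TP = 15, FN = 4, TN = 17, FP = 6); a small helper makes the arithmetic explicit:

```python
def binary_metrics(tp, fn, tn, fp):
    """Standard screening metrics computed from binary confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv": tp / (tp + fp),          # positive predictive value
        "npv": tn / (tn + fn),          # negative predictive value
        "accuracy": (tp + tn) / (tp + fn + tn + fp),
    }

m = binary_metrics(tp=15, fn=4, tn=17, fp=6)  # counts from the abstract above
```

Rounded to one decimal place, these reproduce the abstract's 78.9%, 73.9%, 71.4%, 81.0%, and 76.2%.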
Affiliation(s)
- Wenhao Ren
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
| | - Yanli Zhu
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
| | - Qian Wang
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
| | - Yuntao Song
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Head and Neck Surgery, Peking University Cancer Hospital and Institute, Beijing, China
| | - Zhihui Fan
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Ultrasound, Peking University Cancer Hospital and Institute, Beijing, China
| | - Yanhua Bai
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
| | - Dongmei Lin
- Key Laboratory of Carcinogenesis and Translational Research (Ministry of Education), Department of Pathology, Peking University Cancer Hospital and Institute, Beijing, China
33
Levy JJ, Chan N, Marotti JD, Kerr DA, Gutmann EJ, Glass RE, Dodge CP, Suriawinata AA, Christensen B, Liu X, Vaickus LJ. Large-scale validation study of an improved semiautonomous urine cytology assessment tool: AutoParis-X. Cancer Cytopathol 2023; 131:637-654. PMID: 37377320; PMCID: PMC11251731; DOI: 10.1002/cncy.22732.
Abstract
BACKGROUND Adopting a computational approach for the assessment of urine cytology specimens has the potential to improve the efficiency, accuracy, and reliability of bladder cancer screening, which has heretofore relied on semisubjective manual assessment methods. As rigorous, quantitative criteria and guidelines have been introduced for improving screening practices (e.g., The Paris System for Reporting Urinary Cytology), algorithms to emulate semiautonomous diagnostic decision-making have lagged behind, in part because of the complex and nuanced nature of urine cytology reporting. METHODS In this study, the authors report on the development and large-scale validation of a deep-learning tool, AutoParis-X, which can facilitate rapid, semiautonomous examination of urine cytology specimens. RESULTS The results of this large-scale, retrospective validation study indicate that AutoParis-X can accurately determine urothelial cell atypia and aggregate a wide variety of cell-related and cluster-related information across a slide to yield an atypia burden score, which correlates closely with overall specimen atypia and is predictive of Paris system diagnostic categories. Importantly, this approach accounts for challenges associated with the assessment of overlapping cell cluster borders, which improve the ability to predict specimen atypia and accurately estimate the nuclear-to-cytoplasm ratio for cells in these clusters. CONCLUSIONS The authors developed a publicly available, open-source, interactive web application that features a simple, easy-to-use display for examining urine cytology whole-slide images and determining the level of atypia in specific cells, flagging the most abnormal cells for pathologist review. The accuracy of AutoParis-X (and other semiautomated digital pathology systems) indicates that these technologies are approaching clinical readiness and necessitates full evaluation of these algorithms in head-to-head clinical trials.
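The slide-level "atypia burden score" described above aggregates per-cell atypia estimates into a single number. The aggregation below is a hypothetical stand-in (averaging the most atypical cells), as are the helper name and example scores; the paper's actual scheme combines many more cell- and cluster-level signals:

```python
def atypia_burden(cell_scores, top_k=10):
    """Aggregate per-cell atypia scores into one slide-level burden score by
    averaging the top-k most atypical cells (hypothetical aggregation rule)."""
    if not cell_scores:
        return 0.0
    top = sorted(cell_scores, reverse=True)[:top_k]
    return sum(top) / len(top)
```

Averaging only the most abnormal cells keeps the score sensitive to a small focus of atypia on an otherwise bland slide, which mirrors the flag-worst-cells-for-review workflow described in the abstract.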
Affiliation(s)
- Joshua J. Levy
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Department of Dermatology, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
- Program in Quantitative Biomedical Sciences, Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
| | - Natt Chan
- Program in Quantitative Biomedical Sciences, Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
| | - Jonathan D. Marotti
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
| | - Darcy A. Kerr
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
| | - Edward J. Gutmann
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
| | - Arief A. Suriawinata
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
| | - Brock Christensen
- Department of Epidemiology, Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
- Department of Molecular and Systems Biology, Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
- Department of Community and Family Medicine, Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
| | - Xiaoying Liu
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
| | - Louis J. Vaickus
- Emerging Diagnostic and Investigative Technologies, Department of Pathology and Laboratory Medicine, Dartmouth Hitchcock Medical Center, Lebanon, NH, 03766
- Dartmouth College Geisel School of Medicine, Hanover, NH, 03756
34
Aboumerhi K, Güemes A, Liu H, Tenore F, Etienne-Cummings R. Neuromorphic applications in medicine. J Neural Eng 2023; 20:041004. PMID: 37531951; DOI: 10.1088/1741-2552/aceca3.
Abstract
In recent years, there has been a growing demand for miniaturization, low power consumption, quick treatments, and non-invasive clinical strategies in the healthcare industry. To meet these demands, healthcare professionals are seeking new technological paradigms that can improve diagnostic accuracy while ensuring patient compliance. Neuromorphic engineering, which uses neural models in hardware and software to replicate brain-like behaviors, can help usher in a new era of medicine by delivering low power, low latency, small footprint, and high bandwidth solutions. This paper provides an overview of recent neuromorphic advancements in medicine, including medical imaging and cancer diagnosis, processing of biosignals for diagnosis, and biomedical interfaces, such as motor, cognitive, and perception prostheses. For each section, we provide examples of how brain-inspired models can successfully compete with conventional artificial intelligence algorithms, demonstrating the potential of neuromorphic engineering to meet demands and improve patient outcomes. Lastly, we discuss current struggles in fitting neuromorphic hardware with non-neuromorphic technologies and propose potential solutions for future bottlenecks in hardware compatibility.
Affiliation(s)
- Khaled Aboumerhi
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
| | - Amparo Güemes
- Electrical Engineering Division, Department of Engineering, University of Cambridge, 9 JJ Thomson Ave, Cambridge CB3 0FA, United Kingdom
| | - Hongtao Liu
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
| | - Francesco Tenore
- Research and Exploratory Development Department, The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, United States of America
| | - Ralph Etienne-Cummings
- Department of Electrical and Computer Engineering, The Johns Hopkins University, Baltimore, MD, United States of America
35
Jang J, Kim YH, Westgate B, Zong Y, Hallinan C, Akalin A, Lee K. Screening adequacy of unstained thyroid fine needle aspiration samples using a deep learning-based classifier. Sci Rep 2023; 13:13525. PMID: 37598279; PMCID: PMC10439921; DOI: 10.1038/s41598-023-40652-1.
Abstract
Fine needle aspiration (FNA) biopsy of thyroid nodules is a safe, cost-effective, and accurate diagnostic method for detecting thyroid cancer. However, about 10% of initial FNA biopsy samples are non-diagnostic and require repeated FNA, which delays diagnosis and appropriate care. On-site evaluation of the FNA sample can be performed to filter out non-diagnostic samples. Unfortunately, it involves a time-consuming staining process, and a cytopathologist has to be present at the time of FNA. To bypass the staining process and expert interpretation of FNA specimens in the clinic, we developed a deep learning-based ensemble model, termed FNA-Net, that allows in situ screening of the adequacy of unstained thyroid FNA samples smeared on a glass slide, which can decrease the non-diagnostic rate in thyroid FNA. FNA-Net combines two deep learning models, a patch-based whole-slide image classifier and Faster R-CNN, to detect follicular clusters with high precision. FNA-Net then classifies a slide as non-diagnostic if the total number of detected follicular clusters is below a predetermined threshold. With bootstrapped sampling, FNA-Net achieved a 0.81 F1 score and a 0.84 area under the precision-recall curve for detecting non-diagnostic slides with fewer than six follicular clusters. We expect that FNA-Net can dramatically reduce the diagnostic cost associated with FNA biopsy and improve the quality of patient care.
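After the detection stage, the adequacy rule described above reduces to a simple count threshold. A schematic sketch (the six-cluster cutoff comes from the abstract; the constant and function names are ours, and the cluster count would come from the trained detectors):

```python
MIN_FOLLICULAR_CLUSTERS = 6  # cutoff reported in the abstract

def is_non_diagnostic(num_detected_clusters):
    """Screen a slide as non-diagnostic when fewer than six follicular
    clusters were detected on it by the upstream models."""
    return num_detected_clusters < MIN_FOLLICULAR_CLUSTERS
```

The heavy lifting is in the detector ensemble; the final decision itself is deliberately this simple, which makes the screening behavior easy to audit.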
Affiliation(s)
- Junbong Jang
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA
- Vascular Biology Program, Boston Children's Hospital, Boston, MA, 02115, USA
| | - Young H Kim
- Department of Radiology, University of Massachusetts Medical School, Worcester, MA, 01655, USA.
| | - Brian Westgate
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA
| | - Yang Zong
- Department of Pathology, University of Massachusetts Medical School, Worcester, MA, 01655, USA
| | - Caleb Hallinan
- Vascular Biology Program, Boston Children's Hospital, Boston, MA, 02115, USA
| | - Ali Akalin
- Department of Pathology, University of Massachusetts Medical School, Worcester, MA, 01655, USA.
| | - Kwonmoo Lee
- Department of Biomedical Engineering, Worcester Polytechnic Institute, Worcester, MA, 01609, USA.
- Vascular Biology Program, Boston Children's Hospital, Boston, MA, 02115, USA.
- Department of Surgery, Harvard Medical School, Boston, MA, 02115, USA.
36
Toro-Tobon D, Loor-Torres R, Duran M, Fan JW, Singh Ospina N, Wu Y, Brito JP. Artificial Intelligence in Thyroidology: A Narrative Review of the Current Applications, Associated Challenges, and Future Directions. Thyroid 2023; 33:903-917. PMID: 37279303; PMCID: PMC10440669; DOI: 10.1089/thy.2023.0132.
Abstract
Background: The use of artificial intelligence (AI) in health care has grown exponentially with the promise of facilitating biomedical research and enhancing diagnosis, treatment, monitoring, disease prevention, and health care delivery. We aim to examine the current state, limitations, and future directions of AI in thyroidology. Summary: AI has been explored in thyroidology since the 1990s, and currently, there is an increasing interest in applying AI to improve the care of patients with thyroid nodules (TNODs), thyroid cancer, and functional or autoimmune thyroid disease. These applications aim to automate processes, improve the accuracy and consistency of diagnosis, personalize treatment, decrease the burden for health care professionals, improve access to specialized care in areas lacking expertise, deepen the understanding of subtle pathophysiologic patterns, and accelerate the learning curve of less experienced clinicians. There are promising results for many of these applications. Yet, most are in the validation or early clinical evaluation stages. Only a few are currently adopted for risk stratification of TNODs by ultrasound and determination of the malignant nature of indeterminate TNODs by molecular testing. Challenges of the currently available AI applications include the lack of prospective and multicenter validations and utility studies, small and low diversity of training data sets, differences in data sources, lack of explainability, unclear clinical impact, inadequate stakeholder engagement, and inability to use outside of the research setting, which might limit the value of their future adoption. Conclusions: AI has the potential to improve many aspects of thyroidology; however, addressing the limitations affecting the suitability of AI interventions in thyroidology is a prerequisite to ensure that AI provides added value for patients with thyroid disease.
Affiliation(s)
- David Toro-Tobon
- Division of Endocrinology, Diabetes, Metabolism and Nutrition, Department of Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Ricardo Loor-Torres
- Division of Endocrinology, Diabetes, Metabolism and Nutrition, Department of Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Mayra Duran
- Division of Endocrinology, Diabetes, Metabolism and Nutrition, Department of Medicine, Mayo Clinic, Rochester, Minnesota, USA
- Jungwei W. Fan
- Department of Artificial Intelligence and Informatics, Mayo Clinic, Rochester, Minnesota, USA
- Naykky Singh Ospina
- Division of Endocrinology, Department of Medicine, University of Florida, Gainesville, Florida, USA
- Yonghui Wu
- Department of Health Outcomes and Biomedical Informatics, College of Medicine, University of Florida, Gainesville, Florida, USA
- Juan P. Brito
- Division of Endocrinology, Diabetes, Metabolism and Nutrition, Department of Medicine, Mayo Clinic, Rochester, Minnesota, USA
37
Jia J, Wei Z, Cao X. EMDL-ac4C: identifying N4-acetylcytidine based on ensemble two-branch residual connection DenseNet and attention. Front Genet 2023; 14:1232038. [PMID: 37519885 PMCID: PMC10372626 DOI: 10.3389/fgene.2023.1232038] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2023] [Accepted: 06/29/2023] [Indexed: 08/01/2023] Open
Abstract
Introduction: N4-acetylcytidine (ac4C) is a critical acetylation modification with an essential function in protein translation and an association with a number of human diseases. Methods: Identifying ac4C sites through biological experiments is cumbersome and costly, and the performance of several existing computational models needs improvement. We therefore propose EMDL-ac4C, a new deep learning tool that applies simple one-hot encoding to an unbalanced dataset and uses a downsampled ensemble deep learning network to extract the important features that identify ac4C sites. The base learner of this ensemble model consists of a modified DenseNet and Squeeze-and-Excitation networks. In addition, we innovatively add a convolutional residual structure in parallel with the dense block to achieve two-level feature extraction. Results: The average accuracy (Acc), Matthews correlation coefficient (MCC), and area under the curve (AUC) of EMDL-ac4C on ten independent testing sets are 80.84%, 61.77%, and 87.94%, respectively. Discussion: Multiple experimental comparisons indicate that EMDL-ac4C outperforms existing predictors and greatly improves predictive performance on ac4C sites, providing a valuable reference for follow-up work. The source code and experimental data are available at: https://github.com/13133989982/EMDLac4C.
Affiliation(s)
- Jianhua Jia
- *Correspondence: Jianhua Jia; Zhangying Wei
38
Lee YK, Ryu D, Kim S, Park J, Park SY, Ryu D, Lee H, Lim S, Min HS, Park Y, Lee EK. Machine-learning-based diagnosis of thyroid fine-needle aspiration biopsy synergistically by Papanicolaou staining and refractive index distribution. Sci Rep 2023; 13:9847. [PMID: 37330568 PMCID: PMC10276805 DOI: 10.1038/s41598-023-36951-2] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/06/2023] [Accepted: 06/13/2023] [Indexed: 06/19/2023] Open
Abstract
We developed a machine learning algorithm (MLA) that can classify human thyroid cell clusters by exploiting both Papanicolaou staining and intrinsic refractive index (RI) as correlative imaging contrasts, and evaluated the effects of this combination on diagnostic performance. Thyroid fine-needle aspiration biopsy (FNAB) specimens were analyzed using correlative optical diffraction tomography, which can simultaneously measure both the color brightfield of Papanicolaou staining and the three-dimensional RI distribution. The MLA was designed to classify benign and malignant cell clusters using color images, RI images, or both. We included 1535 thyroid cell clusters (benign:malignant = 1128:407) from 124 patients. The accuracies of the MLA classifiers using color images, RI images, and both were 98.0%, 98.0%, and 100%, respectively. For classification, nucleus size was the main information used in the color images, whereas detailed morphological information about the nucleus was also used in the RI images. We demonstrate that the present MLA and correlative FNAB imaging approach has potential for diagnosing thyroid cancer, and that complementary information from color and RI images can improve the performance of the MLA.
Affiliation(s)
- Young Ki Lee
- Division of Endocrinology and Metabolism, Department of Internal Medicine, National Cancer Center, Goyang, 10408, South Korea
- Seungwoo Kim
- Artificial Intelligence Graduate School, Ulsan National Institute of Science and Technology (UNIST), Ulsan, 44919, South Korea
- Juyeon Park
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, South Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, 34141, South Korea
- Seog Yun Park
- Department of Pathology, National Cancer Center, Goyang, 10408, South Korea
- Donghun Ryu
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, South Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, 34141, South Korea
- Department of Electrical Engineering and Computer Science (EECS), MIT, Cambridge, MA, 02139, USA
- Hayoung Lee
- Division of Endocrinology and Metabolism, Department of Internal Medicine, National Cancer Center, Goyang, 10408, South Korea
- Sungbin Lim
- Department of Statistics, Korea University, Seoul, 02841, South Korea
- YongKeun Park
- Tomocube Inc., Daejeon, 34051, South Korea
- Department of Physics, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, 34141, South Korea
- KAIST Institute for Health Science and Technology, KAIST, Daejeon, 34141, South Korea
- Eun Kyung Lee
- Division of Endocrinology and Metabolism, Department of Internal Medicine, National Cancer Center, Goyang, 10408, South Korea
39
Assaad S, Dov D, Davis R, Kovalsky S, Lee WT, Kahmke R, Rocke D, Cohen J, Henao R, Carin L, Range DE. Thyroid Cytopathology Cancer Diagnosis from Smartphone Images Using Machine Learning. Mod Pathol 2023; 36:100129. [PMID: 36931041 PMCID: PMC10293075 DOI: 10.1016/j.modpat.2023.100129] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/07/2022] [Revised: 01/17/2023] [Accepted: 01/31/2023] [Indexed: 02/17/2023]
Abstract
We examined the performance of deep learning models on the classification of thyroid fine-needle aspiration biopsies using microscope images captured in 2 ways: with a high-resolution scanner and with a mobile phone camera. Our training set consisted of images from 964 whole-slide images captured with a high-resolution scanner. Our test set consisted of 100 slides; 20 manually selected regions of interest (ROIs) from each slide were captured in the 2 ways mentioned above. A baseline machine learning algorithm trained on scanner ROIs deteriorated in performance when applied to the smartphone ROIs (97.8% area under the receiver operating characteristic curve [AUC], CI = [95.4%, 100.0%] for scanner images vs 89.5% AUC, CI = [82.3%, 96.6%] for mobile images, P = .019). Preliminary analysis via histogram matching showed that the baseline model was overly sensitive to slight color variations in the images (specifically, to color differences between mobile and scanner images). Adding color augmentation during training reduces this sensitivity and narrows the performance gap between mobile and scanner images (97.6% AUC, CI = [95.0%, 100.0%] for scanner images vs 96.0% AUC, CI = [91.8%, 100.0%] for mobile images, P = .309), with both modalities on par with human pathologist performance (95.6% AUC, CI = [91.6%, 99.5%]) for malignancy prediction (P = .398 for pathologist vs scanner and P = .875 for pathologist vs mobile). For indeterminate cases (pathologist-assigned Bethesda category of 3, 4, or 5), color augmentations confer some improvement (88.3% AUC, CI = [73.7%, 100.0%] for the baseline model vs 96.2% AUC, CI = [90.9%, 100.0%] with color augmentations, P = .158). In addition, we found that our model's performance levels off after 15 ROIs, a promising indication that ROI data collection would not be time-consuming for our diagnostic system.
Finally, we showed that the model makes sensible Bethesda category (TBS) predictions: the malignancy rate increases with the predicted TBS category, from 0% for predicted TBS 2 to 100% for predicted TBS 6.
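The histogram-matching analysis mentioned in this abstract can be illustrated with a minimal single-channel sketch (an assumption-laden illustration, not the authors' pipeline): each pixel of a "mobile" image is mapped to the intensity at the same quantile of a reference "scanner" image.

```python
def match_histogram(source, reference):
    """Map each source pixel to the reference value at the same quantile
    (empirical CDF matching), on a single flattened intensity channel."""
    n, m = len(source), len(reference)
    src_order = sorted(range(n), key=lambda i: source[i])  # pixel indices by brightness
    ref_sorted = sorted(reference)
    out = [0] * n
    for rank, idx in enumerate(src_order):
        q = rank / (n - 1) if n > 1 else 0.0  # quantile of this pixel in the source
        out[idx] = ref_sorted[round(q * (m - 1))]
    return out

# A hypothetical "mobile" channel that is uniformly darker than the "scanner" reference
mobile = [10, 40, 20, 30]
scanner = [100, 130, 110, 120]
print(match_histogram(mobile, scanner))  # [100, 130, 110, 120]
```

After matching, the mobile pixels take on the scanner image's intensity distribution while preserving their relative ordering, which is what makes such an analysis useful for probing color-sensitivity of a model.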
Affiliation(s)
- Serge Assaad
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina
- David Dov
- I-Medata AI Center, Tel Aviv Sourasky Medical Center, Tel Aviv-Yafo, Israel; Department of Pathology, Duke University Medical Center, Durham, North Carolina
- Richard Davis
- Department of Pathology, Duke University Medical Center, Durham, North Carolina
- Shahar Kovalsky
- Department of Mathematics, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina
- Walter T Lee
- Department of Pathology, Duke University Medical Center, Durham, North Carolina
- Russel Kahmke
- Department of Head and Neck Surgery and Communication Sciences, Duke University Medical Center, Durham, North Carolina
- Daniel Rocke
- Department of Head and Neck Surgery and Communication Sciences, Duke University Medical Center, Durham, North Carolina
- Jonathan Cohen
- Department of Head and Neck Surgery and Communication Sciences, Duke University Medical Center, Durham, North Carolina
- Ricardo Henao
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina; King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
- Lawrence Carin
- Department of Electrical and Computer Engineering, Duke University, Durham, North Carolina; King Abdullah University of Science and Technology, Thuwal, Saudi Arabia
40
Shang J, Chen Y, Nie J. LaserNet: a method of laser stripe center extraction under non-ideal conditions. APPLIED OPTICS 2023; 62:3387-3397. [PMID: 37132839 DOI: 10.1364/ao.486107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
The extraction of the center of a laser stripe is a key step in line-structure measurement, where noise interference and changes in the surface color of an object are the main factors affecting extraction accuracy. To obtain sub-pixel center coordinates under such non-ideal conditions, we propose LaserNet, a novel (to the best of our knowledge) deep-learning-based algorithm that consists of a laser region detection sub-network and a laser position optimization sub-network. The laser region detection sub-network determines potential stripe regions, and the laser position optimization sub-network uses the local image of these regions to obtain the accurate center position of the laser stripe. The experimental results show that LaserNet can eliminate noise interference, handle color changes, and give accurate results under non-ideal conditions. Three-dimensional reconstruction experiments further demonstrate the effectiveness of the proposed method.
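For context, the classical gray-centroid baseline for sub-pixel stripe-center extraction (one of the conventional methods that learning-based approaches such as LaserNet aim to improve on) can be sketched as follows; the function is illustrative, not taken from the paper:

```python
def stripe_center(column):
    """Sub-pixel stripe center of one image column as the intensity centroid."""
    total = sum(column)
    if total == 0:
        return None  # no laser signal in this column
    return sum(i * v for i, v in enumerate(column)) / total

# Symmetric intensity profile peaking at row index 2 -> center 2.0
print(stripe_center([0, 1, 2, 1, 0]))  # 2.0
```

A single bright noise pixel or a surface-color change shifts this centroid, which is exactly the non-ideal condition the paper targets with its learned region-detection and position-optimization sub-networks.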
41
Hopson JB, Neji R, Dunn JT, McGinnity CJ, Flaus A, Reader AJ, Hammers A. Pre-training via Transfer Learning and Pretext Learning a Convolutional Neural Network for Automated Assessments of Clinical PET Image Quality. IEEE TRANSACTIONS ON RADIATION AND PLASMA MEDICAL SCIENCES 2023; 7:372-381. [PMID: 37051163 PMCID: PMC7614424 DOI: 10.1109/trpms.2022.3231702] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Positron emission tomography (PET) using a fraction of the usual injected dose would reduce the amount of radioligand needed, as well as the radiation dose to patients and staff, but would compromise reconstructed image quality. For performing the same clinical tasks with such images, a clinical (rather than numerical) image quality assessment is essential. This process can be automated with convolutional neural networks (CNNs). However, the scarcity of clinical quality readings is a challenge. We hypothesise that exploiting easily available quantitative information in pretext learning tasks, or using established pre-trained networks, could improve CNN performance for predicting clinical assessments with limited data. CNNs were pre-trained to predict injected dose from image patches extracted from eight real patient datasets, reconstructed using between 0.5% and 100% of the available data. Transfer learning with seven different patients was used to predict three clinically scored quality metrics ranging from 0 to 3: global quality rating, pattern recognition, and diagnostic confidence. This was compared to pre-training via a VGG16 network at varying pre-training levels. Pre-training improved test performance for this task: the mean absolute error of 0.53 (compared to 0.87 without pre-training) was within clinical scoring uncertainty. Future work may include using the CNN to assess the performance of novel reconstruction methods.
Affiliation(s)
- Joel T Dunn
- King's College London & Guy's and St Thomas' PET Centre, King's College London
- Colm J McGinnity
- King's College London & Guy's and St Thomas' PET Centre, King's College London
- Anthime Flaus
- King's College London & Guy's and St Thomas' PET Centre, King's College London
- Alexander Hammers
- King's College London & Guy's and St Thomas' PET Centre, King's College London
42
Dey P, Bansal B, Saini T. An emerging era of computational cytology. Diagn Cytopathol 2023; 51:270-275. [PMID: 36633016 DOI: 10.1002/dc.25101] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2022] [Revised: 10/31/2022] [Accepted: 01/02/2023] [Indexed: 01/13/2023]
Abstract
BACKGROUND The significant advancements in digital imaging, data management, computational power, and artificial neural networks have had an immense impact on the field of cytology. The amalgamation of these areas has generated a new discipline known as computational cytology. AIMS AND OBJECTIVE To discuss the various important aspects of computational cytology. MATERIALS AND METHODS We reviewed the studies on computational cytology published in English during the last few years. RESULT Computational cytology is an emerging discipline in pathology that combines patient metadata and digital image data in mathematical models to produce diagnostic interpretations and predictions. The role of the cytologist is changing from that of a purely observational scientist and slide interpreter to that of a dynamic, integrated, multi-parametric, prediction-based scientist. CONCLUSION At this stage, cytologists must understand these developments and have a vision of the complete computational cytology landscape.
Affiliation(s)
- Pranab Dey
- Department of Cytology and Gynecological Pathology, Postgraduate Institute of Medical Education and Research, Chandigarh, India
- Baneet Bansal
- Department of Cytology and Gynecological Pathology, Postgraduate Institute of Medical Education and Research, Chandigarh, India
- Tarunpreet Saini
- Department of Pathology, Postgraduate Institute of Medical Education and Research, Chandigarh, India
43
An Improved Co-Training and Generative Adversarial Network (Diff-CoGAN) for Semi-Supervised Medical Image Segmentation. INFORMATION 2023. [DOI: 10.3390/info14030190] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/19/2023] Open
Abstract
Semi-supervised learning is a technique that utilizes a limited set of labeled data and a large amount of unlabeled data to overcome the challenge of obtaining a perfect dataset in deep learning, especially in medical image segmentation. The accuracy of the labels predicted for the unlabeled data is a critical factor affecting training performance: inaccurate predicted labels reduce segmentation accuracy. To address this issue, a semi-supervised learning method based on the Diff-CoGAN framework was proposed, which incorporates co-training and generative adversarial network (GAN) strategies. The proposed Diff-CoGAN framework employs two generators and one discriminator. The generators work together by providing mutual information guidance to produce predicted maps that are more accurate and closer to the ground truth. To further improve segmentation accuracy, the predicted maps are subjected to an intersection operation to identify a high-confidence region of interest, which reduces boundary segmentation errors. The predicted maps are then fed into the discriminator, and the iterative process of adversarial training enhances the generators' ability to generate more precise maps while also improving the discriminator's ability to distinguish between the predicted maps and the ground truth. This study conducted experiments on the Hippocampus and Spleen images from the Medical Segmentation Decathlon (MSD) dataset using three semi-supervised methods: co-training, semi-GAN, and Diff-CoGAN. The experimental results demonstrated that the proposed Diff-CoGAN approach significantly enhanced segmentation accuracy compared to the other two methods, benefiting from the mutual guidance of the two generators and the adversarial training between the generators and the discriminator. Introducing the intersection operation before the discriminator further reduced boundary segmentation errors.
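The intersection step this abstract describes — keeping only the pixels on which both generators agree — can be illustrated with a minimal sketch on thresholded probability maps; the function name and the 0.5 threshold are illustrative assumptions, not details from the paper:

```python
def high_confidence_region(map_a, map_b, thr=0.5):
    """Binary mask of pixels where both predicted maps exceed the threshold,
    i.e. the intersection of the two thresholded predictions."""
    return [[int(a > thr and b > thr) for a, b in zip(row_a, row_b)]
            for row_a, row_b in zip(map_a, map_b)]

# Two hypothetical 2x2 probability maps from the two generators
gen1 = [[0.9, 0.2], [0.7, 0.6]]
gen2 = [[0.8, 0.9], [0.3, 0.7]]
print(high_confidence_region(gen1, gen2))  # [[1, 0], [0, 1]]
```

Pixels where the generators disagree (here the off-diagonal entries) are excluded from the high-confidence region, which is how such an intersection suppresses uncertain boundary pixels.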
44
Li D, Li X, Li S, Qi M, Sun X, Hu G. Relationship between the deep features of the full-scan pathological map of mucinous gastric carcinoma and related genes based on deep learning. Heliyon 2023; 9:e14374. [PMID: 36942252 PMCID: PMC10023952 DOI: 10.1016/j.heliyon.2023.e14374] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2022] [Revised: 02/28/2023] [Accepted: 03/02/2023] [Indexed: 03/11/2023] Open
Abstract
Background Long-term differential expression of disease-associated genes is a crucial driver of pathological changes in mucinous gastric carcinoma. There should therefore be a correlation between deep features extracted from pathology-based full-scan images using deep learning and disease-associated gene expression. By exploring this correlation, this study sought to provide preliminary evidence that long-term differentially expressed (disease-associated) genes lead to subtle changes in disease pathology, and to offer new ideas for precise pathomic analysis and for combined analysis of pathomics and genomics. Methods Full pathological scans, gene sequencing data, and clinical data of patients with mucinous gastric carcinoma were downloaded from the TCGA database. The VGG-16 network architecture was used to construct a binary classification model to explore the potential of VGG-16 applications and to extract the deep features of the pathology-based full-scan map. Differential gene expression analysis was performed and a protein-protein interaction network was constructed to screen disease-related core genes. Differential, Lasso regression, and extensive correlation analyses were used to screen for valuable deep features. Finally, a correlation analysis was used to determine whether there was a correlation between valuable deep features and disease-related core genes. Result The accuracy of the binary classification model was 0.775 ± 0.129. A total of 24 disease-related core genes were screened, including ASPM, AURKA, AURKB, BUB1, BUB1B, CCNA2, CCNB1, CCNB2, CDCA8, CDK1, CENPF, DLGAP5, KIF11, KIF20A, KIF2C, KIF4A, MELK, PBK, RRM2, TOP2A, TPX2, TTK, UBE2C, and ZWINT. In addition, differential, Lasso regression, and extensive correlation analyses were used to screen eight valuable deep features, namely features 51, 106, 109, 118, 257, 282, 326, and 487.
Finally, the results of the correlation analysis suggested that valuable deep features were either positively or negatively correlated with core gene expression. Conclusion The preliminary results of this study support our hypotheses. Deep learning may be an important bridge for the joint analysis of pathomics and genomics and provides preliminary evidence for long-term abnormal expression of genes leading to subtle changes in pathology.
Affiliation(s)
- Ding Li
- Department of Traditional Chinese Medicine, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Xiaoyuan Li
- Department of Traditional Chinese Medicine, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Shifang Li
- Department of Neurosurgery, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Mengmeng Qi
- Department of Endocrinology, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Xiaowei Sun
- Department of Traditional Chinese Medicine, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
- Guojie Hu
- Department of Traditional Chinese Medicine, The Affiliated Hospital of Qingdao University, Qingdao, Shandong, China
45
Çalışkan A. Detecting human activity types from 3D posture data using deep learning models. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104479] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/05/2022]
46
Su SS, Li LY, Wang Y, Li YZ. Stroke risk prediction by color Doppler ultrasound of carotid artery-based deep learning using Inception V3 and VGG-16. Front Neurol 2023; 14:1111906. [PMID: 36864909 PMCID: PMC9971808 DOI: 10.3389/fneur.2023.1111906] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/30/2022] [Accepted: 01/16/2023] [Indexed: 02/16/2023] Open
Abstract
Purpose This study aims to automatically classify color Doppler images into two categories for stroke risk prediction based on carotid plaque: high-risk vulnerable carotid plaque and stable carotid plaque. Method In this study, we used a deep learning framework based on transfer learning to classify color Doppler images into these two categories. The data were collected from the Second Affiliated Hospital of Fujian Medical University and include both stable and vulnerable cases. A total of 87 patients with risk factors for atherosclerosis at our hospital were selected. We used 230 color Doppler ultrasound images for each category and further divided them into training and test sets in a 70:30 ratio. We implemented the Inception V3 and VGG-16 pre-trained models for this classification task. Results Using the proposed framework, we implemented two transfer deep learning models, Inception V3 and VGG-16, and achieved the highest accuracy of 93.81% by fine-tuning and adjusting hyperparameters for our classification problem. Conclusion In this research, we classified color Doppler ultrasound images into high-risk vulnerable and stable carotid plaques by fine-tuning pre-trained deep learning models on our dataset. Our suggested framework helps prevent incorrect diagnoses caused by low image quality, limited individual experience, and other factors.
Affiliation(s)
- Shan-Shan Su
- Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- *Correspondence: Shan-Shan Su
- Li-Ya Li
- Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- *Correspondence: Li-Ya Li
- Yi Wang
- Department of Computed Tomography and Magnetic Resonance Imaging (CT/MRI), The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
- Yuan-Zhe Li
- Department of Computed Tomography and Magnetic Resonance Imaging (CT/MRI), The Second Affiliated Hospital of Fujian Medical University, Quanzhou, China
47
Jassal K, Koohestani A, Kiu A, Strong A, Ravintharan N, Yeung M, Grodski S, Serpell JW, Lee JC. Artificial Intelligence for Pre-operative Diagnosis of Malignant Thyroid Nodules Based on Sonographic Features and Cytology Category. World J Surg 2023; 47:330-339. [PMID: 36336771 PMCID: PMC9803749 DOI: 10.1007/s00268-022-06798-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/08/2022] [Indexed: 11/08/2022]
Abstract
BACKGROUND Current diagnosis and classification of thyroid nodules are susceptible to subjective factors. Despite widespread use of ultrasonography (USG) and fine needle aspiration cytology (FNAC) to assess thyroid nodules, the interpretation of results is nuanced and requires specialist endocrine surgery input. Using readily available pre-operative data, the aims of this study were to develop artificial intelligence (AI) models to classify nodules into likely benign or malignant and to compare the diagnostic performance of the models. METHODS Patients undergoing surgery for thyroid nodules between 2010 and 2020 were recruited from our institution's database into training and testing groups. Demographics, serum TSH level, cytology, ultrasonography features and histopathology data were extracted. The training group USG images were re-reviewed by a study radiologist experienced in thyroid USG, who reported the relevant features and supplemented with data extracted from existing reports to reduce sampling bias. Testing group USG features were extracted solely from existing reports to reflect real-life practice of a non-thyroid specialist. We developed four AI models based on classification algorithms (k-Nearest Neighbour, Support Vector Machine, Decision Tree, Naïve Bayes) and evaluated their diagnostic performance of thyroid malignancy. RESULTS In the training group (n = 857), 75% were female and 27% of cases were malignant. The testing group (n = 198) consisted of 77% females and 17% malignant cases. Mean age was 54.7 ± 16.2 years for the training group and 50.1 ± 17.4 years for the testing group. Following validation with the testing group, support vector machine classifier was found to perform best in predicting final histopathology with an accuracy of 89%, sensitivity 89%, specificity 83%, F-score 94% and AUROC 0.86. 
CONCLUSION We have developed a first-of-its-kind pilot AI model that can accurately predict malignancy in thyroid nodules using USG features, FNAC, demographics, and serum TSH. A model like this has the potential to be used as a decision-support tool in under-resourced areas, as well as by non-thyroid specialists.
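The accuracy, sensitivity, specificity, and F-score reported in this abstract all derive from a binary confusion matrix; a small sketch of those formulas follows (the counts below are made up for illustration, not the study's data):

```python
def binary_metrics(tp, fp, tn, fn):
    """Standard binary classification metrics from confusion-matrix counts."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)          # recall on the malignant class
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, f1

# Hypothetical counts: 9 true malignant hits, 2 false alarms, 8 true benign, 1 miss
acc, sens, spec, f1 = binary_metrics(tp=9, fp=2, tn=8, fn=1)
print(acc, sens, spec)  # 0.85 0.9 0.8
```

Note that with imbalanced classes (17% malignant in the testing group here), accuracy alone can be misleading, which is why sensitivity, specificity, and AUROC are reported alongside it.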
Affiliation(s)
- Karishma Jassal
- Monash University Endocrine Surgery Unit, The Alfred Hospital, 55 Commercial Road, Melbourne, VIC 3004, Australia
- Department of Surgery, Central Clinical School, Monash University, Melbourne, Australia
- Afsanesh Koohestani
- Monash University Endocrine Surgery Unit, The Alfred Hospital, 55 Commercial Road, Melbourne, VIC 3004, Australia
- Department of Surgery, Central Clinical School, Monash University, Melbourne, Australia
- Andrew Kiu
- Department of Surgery, Central Clinical School, Monash University, Melbourne, Australia
- April Strong
- Department of Surgery, Central Clinical School, Monash University, Melbourne, Australia
- Nandhini Ravintharan
- Department of Surgery, Central Clinical School, Monash University, Melbourne, Australia
- Meei Yeung
- Monash University Endocrine Surgery Unit, The Alfred Hospital, 55 Commercial Road, Melbourne, VIC 3004, Australia
- Department of Surgery, Central Clinical School, Monash University, Melbourne, Australia
- Simon Grodski
- Monash University Endocrine Surgery Unit, The Alfred Hospital, 55 Commercial Road, Melbourne, VIC 3004, Australia
- Department of Surgery, Central Clinical School, Monash University, Melbourne, Australia
- Jonathan W Serpell
- Monash University Endocrine Surgery Unit, The Alfred Hospital, 55 Commercial Road, Melbourne, VIC 3004, Australia
- Department of Surgery, Central Clinical School, Monash University, Melbourne, Australia
- James C Lee
- Monash University Endocrine Surgery Unit, The Alfred Hospital, 55 Commercial Road, Melbourne, VIC 3004, Australia
- Department of Surgery, Central Clinical School, Monash University, Melbourne, Australia
48
Deep learning for computational cytology: A survey. Med Image Anal 2023; 84:102691. [PMID: 36455333 DOI: 10.1016/j.media.2022.102691] [Citation(s) in RCA: 26] [Impact Index Per Article: 13.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/08/2022] [Revised: 10/22/2022] [Accepted: 11/09/2022] [Indexed: 11/16/2022]
Abstract
Computational cytology is a critical, rapidly developing, yet challenging topic in medical image computing, concerned with analyzing digitized cytology images with computer-aided technologies for cancer screening. Recently, an increasing number of deep learning (DL) approaches have made significant achievements in medical image analysis, driving a surge of publications in cytological studies. In this article, we survey more than 120 publications on DL-based cytology image analysis to investigate the advanced methods and comprehensive applications. We first introduce various deep learning schemes, including fully supervised, weakly supervised, unsupervised, and transfer learning. We then systematically summarize public datasets, evaluation metrics, and versatile cytology image analysis applications, including cell classification, slide-level cancer screening, and nucleus or cell detection and segmentation. Finally, we discuss current challenges and potential research directions in computational cytology.
49
Abstract
Background: Artificial intelligence (AI) is broadly defined as the ability of machines to apply human-like reasoning to problem solving. Recent years have seen a rapid growth of AI in many disciplines. This review will focus on AI applications in the assessment of thyroid nodules. Summary: AI encompasses two related computational techniques: machine learning, in which computers learn by observing data provided by humans, and deep learning, which employs neural networks that mimic brain structure and function to analyze data. Some experts believe the way AI systems reach a conclusion should be transparent, or explainable, while others disagree. Most AI platforms in thyroid disease have focused on malignancy risk stratification of nodules. To date, four have been approved by the United States Food and Drug Administration. While the results of validation studies have been mixed, there is ample evidence that AI can exceed the performance of some humans, particularly physicians with less experience. AI has also been applied to assessment of lymph nodes and cytopathology specimens. Conclusions: Adoption of AI in thyroid disease will require vendors to demonstrate that their software works as intended, is readily usable in real-world settings, and is cost effective. AI platforms that perform best in head-to-head comparisons will dominate and spur wider adoption.
Affiliation(s)
- Franklin N Tessler
- Department of Radiology, University of Alabama at Birmingham, Birmingham, Alabama, USA
- Johnson Thomas
- Department of Endocrinology, Mercy, Springfield, Missouri, USA
50
The Use of Artificial Intelligence in the Diagnosis and Classification of Thyroid Nodules: An Update. Cancers (Basel) 2023; 15:cancers15030708. [PMID: 36765671 PMCID: PMC9913834 DOI: 10.3390/cancers15030708] [Citation(s) in RCA: 18] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2022] [Revised: 01/20/2023] [Accepted: 01/20/2023] [Indexed: 01/27/2023] Open
Abstract
The incidence of thyroid nodules diagnosed is increasing every year, leading to a greater risk of unnecessary procedures being performed or wrong diagnoses being made. In our paper, we present the latest knowledge on the use of artificial intelligence in diagnosing and classifying thyroid nodules. We particularly focus on the usefulness of artificial intelligence in ultrasonography for the diagnosis and characterization of pathology, as these are the two most developed fields. In our search for the latest innovations, we reviewed only publications of specific types published from 2018 to 2022. We analyzed 930 papers in total, from which we selected the 33 most relevant to the topic of our work. In conclusion, there is great scope for the use of artificial intelligence in future thyroid nodule classification and diagnosis. In addition to the most typical use of artificial intelligence for cancer differentiation, our review identified several other novel applications.