1
Cetera GE, Tozzi AE, Chiappa V, Castiglioni I, Merli CEM, Vercellini P. Artificial Intelligence in the Management of Women with Endometriosis and Adenomyosis: Can Machines Ever Be Worse Than Humans? J Clin Med 2024; 13:2950. PMID: 38792490; PMCID: PMC11121846; DOI: 10.3390/jcm13102950.
Abstract
Artificial intelligence (AI) is advancing and being integrated across all medical specialties, generating both excitement and concern. This narrative review critically assesses the state of the art of AI in the field of endometriosis and adenomyosis. By enabling automation, AI may speed up routine tasks, reducing gynecologists' risk of burnout while allowing them to spend more time with their patients, which increases efficiency and patients' perception of being cared for. Surgery may also benefit from AI, especially through integration with robotic surgery systems, which may improve the detection of anatomical structures and enhance surgical outcomes by combining intra-operative findings with pre-operative imaging. AI also promises to improve the quality of care by facilitating clinical research: through decision-support tools it can enhance diagnostic assessment and predict treatment effectiveness and side effects, as well as reproductive prognosis and cancer risk. However, the quality of the data used in tool development and compliance with data-sharing guidelines remain crucial concerns, and some professionals worry that AI may render certain specialists obsolete. That said, AI is more likely to become a well-liked team member than a usurper.
Affiliation(s)
- Giulia Emily Cetera: Gynecology Unit, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, 20122 Milan, Italy; Academic Center for Research on Adenomyosis and Endometriosis, Department of Clinical Sciences and Community Health, Università degli Studi di Milano, 20122 Milan, Italy
- Alberto Eugenio Tozzi: Predictive and Preventive Medicine Research Unit, Bambino Gesù Children’s Hospital, IRCCS, 00165 Rome, Italy
- Valentina Chiappa: Gynaecologic Oncology, Fondazione IRCCS Istituto Nazionale dei Tumori, 20133 Milan, Italy
- Camilla Erminia Maria Merli: Gynecology Unit, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, 20122 Milan, Italy
- Paolo Vercellini: Gynecology Unit, Fondazione IRCCS Ca’ Granda Ospedale Maggiore Policlinico, 20122 Milan, Italy; Academic Center for Research on Adenomyosis and Endometriosis, Department of Clinical Sciences and Community Health, Università degli Studi di Milano, 20122 Milan, Italy
2
Wang Z, Luo S, Chen J, Jiao Y, Cui C, Shi S, Yang Y, Zhao J, Jiang Y, Zhang Y, Xu F, Xu J, Lin Q, Dong F. Multi-modality deep learning model reaches high prediction accuracy in the diagnosis of ovarian cancer. iScience 2024; 27:109403. PMID: 38523785; PMCID: PMC10959660; DOI: 10.1016/j.isci.2024.109403.
Abstract
We evaluated the diagnostic performance of a multimodal deep learning (DL) model for the differential diagnosis of ovarian masses. This single-center retrospective study included 1,054 ultrasound (US)-detected ovarian tumors (699 benign and 355 malignant). Patients were randomly divided into training (n = 675), validation (n = 169), and testing (n = 210) sets. Three ResNet-50-based models were proposed for benign-malignant classification of these lesions: a single-modality model using only US images; a dual-modality model using US images and menopausal status; and a multi-modality model integrating US images, menopausal status, and serum indicators. After 5-fold cross-validation, the models were evaluated on the 210 test lesions using the area under the curve (AUC), accuracy, sensitivity, and specificity. The multimodal ResNet-50 model outperformed the single- and dual-modality models in distinguishing benign from malignant ovarian tumors, reaching 93.80% accuracy and an AUC of 0.983.
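The data-splitting protocol above (a fixed train/validation/test partition plus 5-fold cross-validation on the training lesions) can be sketched in plain Python. This is a generic illustration under stated assumptions, not the authors' code; the helper name `k_fold_indices` is hypothetical:

```python
import random

def k_fold_indices(n_samples, k=5, seed=42):
    """Shuffle sample indices and partition them into k disjoint folds.

    Each fold serves once as the validation set while the remaining
    k - 1 folds form the training set, as in standard k-fold CV.
    """
    idx = list(range(n_samples))
    random.Random(seed).shuffle(idx)  # fixed seed for reproducibility
    folds = [idx[i::k] for i in range(k)]  # round-robin assignment
    splits = []
    for i in range(k):
        val = folds[i]
        train = [j for f in folds[:i] + folds[i + 1:] for j in f]
        splits.append((train, val))
    return splits

# 675 training lesions divided into 5 folds, mirroring the study's setup
splits = k_fold_indices(675, k=5)
```

Each of the five (train, validation) index pairs covers all 675 lesions exactly once, so every lesion is validated on precisely one fold.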
Affiliation(s)
- Zimo Wang: Second Clinical College of Jinan University, Department of Ultrasound, Shenzhen People’s Hospital, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen Medical Ultrasound Engineering Center, Shenzhen, Guangdong 518020, China
- Shuyu Luo: Second Clinical College of Jinan University, Department of Ultrasound, Shenzhen People’s Hospital, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen Medical Ultrasound Engineering Center, Shenzhen, Guangdong 518020, China
- Jing Chen: Second Clinical College of Jinan University, Department of Ultrasound, Shenzhen People’s Hospital, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen Medical Ultrasound Engineering Center, Shenzhen, Guangdong 518020, China
- Yang Jiao: Second Clinical College of Jinan University, Department of Ultrasound, Shenzhen People’s Hospital, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen Medical Ultrasound Engineering Center, Shenzhen, Guangdong 518020, China
- Chen Cui: Illuminate, LLC, 6B, Building 5, Tianyu Xiangshan Garden, No. 33, Nongxuan Road, Donghai Community, Xiangmihu Street, Futian District, Shenzhen 518000, China; Microport Prophecy, 1601 ZhangDong Road, ZJHi-Tech Park, Shanghai 201203, China
- Siyuan Shi: Illuminate, LLC, 6B, Building 5, Tianyu Xiangshan Garden, No. 33, Nongxuan Road, Donghai Community, Xiangmihu Street, Futian District, Shenzhen 518000, China; Microport Prophecy, 1601 ZhangDong Road, ZJHi-Tech Park, Shanghai 201203, China
- Yang Yang: Illuminate, LLC, 6B, Building 5, Tianyu Xiangshan Garden, No. 33, Nongxuan Road, Donghai Community, Xiangmihu Street, Futian District, Shenzhen 518000, China; Microport Prophecy, 1601 ZhangDong Road, ZJHi-Tech Park, Shanghai 201203, China
- Junyi Zhao: University of Shanghai for Science and Technology, Shanghai 201203, China
- Yitao Jiang: Illuminate, LLC, 6B, Building 5, Tianyu Xiangshan Garden, No. 33, Nongxuan Road, Donghai Community, Xiangmihu Street, Futian District, Shenzhen 518000, China; Microport Prophecy, 1601 ZhangDong Road, ZJHi-Tech Park, Shanghai 201203, China
- Yujuan Zhang: Second Clinical College of Jinan University, Department of Ultrasound, Shenzhen People’s Hospital, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen Medical Ultrasound Engineering Center, Shenzhen, Guangdong 518020, China
- Fanhua Xu: Second Clinical College of Jinan University, Department of Ultrasound, Shenzhen People’s Hospital, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen Medical Ultrasound Engineering Center, Shenzhen, Guangdong 518020, China
- Jinfeng Xu: Second Clinical College of Jinan University, Department of Ultrasound, Shenzhen People’s Hospital, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen Medical Ultrasound Engineering Center, Shenzhen, Guangdong 518020, China
- Qi Lin: Second Clinical College of Jinan University, Department of Ultrasound, Shenzhen People’s Hospital, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen Medical Ultrasound Engineering Center, Shenzhen, Guangdong 518020, China
- Fajin Dong: Second Clinical College of Jinan University, Department of Ultrasound, Shenzhen People’s Hospital, First Affiliated Hospital of Southern University of Science and Technology, Shenzhen Medical Ultrasound Engineering Center, Shenzhen, Guangdong 518020, China
3
Burla L, Sartoretti E, Mannil M, Seidel S, Sartoretti T, Krentel H, De Wilde RL, Imesch P. MRI-Based Radiomics as a Promising Noninvasive Diagnostic Technique for Adenomyosis. J Clin Med 2024; 13:2344. PMID: 38673617; PMCID: PMC11051471; DOI: 10.3390/jcm13082344.
Abstract
Background: MRI is important in the diagnostic work-up of adenomyosis, especially in cases with inconclusive ultrasound. This study assessed the potential of MRI-based radiomics as a novel tool for differentiating between uteri with and without adenomyosis. Methods: This retrospective proof-of-principle single-center study included nine patients with and six patients without adenomyosis. All patients had preoperative T2-weighted (T2w) MR images, and histological findings served as the reference standard. The uterus of each patient was segmented in 3D using dedicated software, and 884 radiomics features were extracted. After dimension reduction and feature selection, the diagnostic yield of individual and combined features implemented in machine learning (ML) models was assessed by means of receiver operating characteristic analyses. Results: Eleven relevant radiomics features were identified. The diagnostic performance of individual features in differentiating adenomyosis from the control group was high, with areas under the curve (AUCs) ranging from 0.78 to 0.98. The performance of ML models incorporating several features was excellent, with AUC scores of 1.0 and an area under the precision-recall curve of 0.4. Conclusions: The set of radiomics features derived from routine T2w MRI enabled accurate differentiation of uteri with adenomyosis. Radiomics could enhance diagnosis and could furthermore serve as an imaging biomarker to aid in personalizing therapy and monitoring treatment response.
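The per-feature AUCs reported here can be computed without tracing an ROC curve, because the AUC equals the normalized Mann-Whitney rank statistic: the probability that a randomly chosen positive case scores higher than a randomly chosen negative one (ties counted as 1/2). A minimal pure-Python sketch, not the study's implementation, with invented toy feature values:

```python
def auc_from_scores(pos_scores, neg_scores):
    """AUC = P(score_pos > score_neg), with ties counted as 1/2.

    Equivalent to the normalized Mann-Whitney U statistic, so no
    explicit ROC curve needs to be traced.
    """
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Toy radiomic feature values: adenomyosis cases vs. controls
adenomyosis = [0.9, 0.8, 0.75, 0.6]
controls = [0.7, 0.4, 0.3]
auc = auc_from_scores(adenomyosis, controls)
```

With only one overlapping pair, the toy feature yields an AUC of 11/12 ≈ 0.92, in the same range as the best single features in the study.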
Affiliation(s)
- Laurin Burla: Department of Gynecology, University Hospital Zurich, 8091 Zurich, Switzerland; Department of Gynecology and Obstetrics, Hospital of Schaffhausen, 8208 Schaffhausen, Switzerland
- Manoj Mannil: Clinic for Radiology, Muenster University Hospital, 48149 Muenster, Germany
- Stefan Seidel: Institute for Radiology and Nuclear Medicine, Hospital of Schaffhausen, 8208 Schaffhausen, Switzerland
- Harald Krentel: Department of Gynecology, Obstetrics and Gynecological Oncology, Bethesda Hospital Duisburg, 47053 Duisburg, Germany
- Rudy Leon De Wilde: Clinic of Gynecology, Obstetrics and Gynecological Oncology, University Hospital for Gynecology, Pius-Hospital Oldenburg, Medical Campus University of Oldenburg, 26121 Oldenburg, Germany
- Patrick Imesch: Department of Gynecology, University Hospital Zurich, 8091 Zurich, Switzerland; Clinic for Gynecology, Bethanien Clinic, 8044 Zurich, Switzerland
4
Du Y, Guo W, Xiao Y, Chen H, Yao J, Wu J. Ultrasound-based deep learning radiomics model for differentiating benign, borderline, and malignant ovarian tumours: a multi-class classification exploratory study. BMC Med Imaging 2024; 24:89. PMID: 38622546; PMCID: PMC11020982; DOI: 10.1186/s12880-024-01251-2.
Abstract
BACKGROUND Accurate preoperative identification of ovarian tumour subtypes is imperative, as it enables physicians to tailor precise, individualized management strategies. We therefore developed an ultrasound (US)-based multiclass prediction algorithm for differentiating between benign, borderline, and malignant ovarian tumours. METHODS We randomised data from 849 patients with ovarian tumours into training and testing sets in a ratio of 8:2. The regions of interest on the US images were segmented, and handcrafted radiomics features were extracted and screened. We applied the one-versus-rest method for multiclass classification. The best features were input into machine learning (ML) models to construct a radiomic signature (Rad_Sig). US images of the maximum trimmed ovarian tumour sections were input into a pre-trained convolutional neural network (CNN), which generated a predicted probability for each sample, termed the deep transfer learning signature (DTL_Sig). Clinical baseline data were analysed, and statistically significant clinical parameters and US semantic features in the training set were used to construct clinical signatures (Clinic_Sig). The predictions of Rad_Sig, DTL_Sig, and Clinic_Sig for each sample were fused as a new feature set to build the combined model, namely the deep learning radiomic signature (DLR_Sig). The receiver operating characteristic (ROC) curve and the area under the ROC curve (AUC) were used to estimate the performance of the multiclass classification models. RESULTS The training set included 440 benign, 44 borderline, and 196 malignant ovarian tumours; the testing set included 109 benign, 11 borderline, and 49 malignant ovarian tumours. The DLR_Sig three-class prediction model had the best overall and class-specific classification performance, with micro- and macro-average AUCs of 0.90 and 0.84, respectively, on the testing set. Per-class AUCs were 0.84, 0.85, and 0.83 for benign, borderline, and malignant ovarian tumours, respectively. In the confusion matrix, the Clinic_Sig and Rad_Sig classifiers could not recognise borderline ovarian tumours, whereas DLR_Sig identified the highest proportions of borderline and malignant ovarian tumours, at 54.55% and 63.27%, respectively. CONCLUSIONS The US-based DLR_Sig three-class prediction model can discriminate between benign, borderline, and malignant ovarian tumours and may therefore guide clinicians in the differential management of patients with ovarian tumours.
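The one-versus-rest evaluation with micro- and macro-averaged AUCs used above can be illustrated in a few lines of Python. The helper names and toy probabilities are hypothetical; this is a sketch of the averaging scheme, not the authors' pipeline:

```python
def binary_auc(pos, neg):
    """Probability that a positive outranks a negative (ties = 1/2)."""
    total = sum(1.0 if p > n else 0.5 if p == n else 0.0
                for p in pos for n in neg)
    return total / (len(pos) * len(neg))

def ovr_macro_micro_auc(probs, labels, classes=(0, 1, 2)):
    """One-vs-rest AUCs for a multiclass probabilistic classifier.

    probs[i][c] is the predicted probability of class c for sample i.
    Macro: unweighted mean of the per-class binary AUCs.
    Micro: pool every (sample, class) pair into one binary problem.
    """
    per_class, pooled_pos, pooled_neg = [], [], []
    for c in classes:
        pos = [p[c] for p, y in zip(probs, labels) if y == c]
        neg = [p[c] for p, y in zip(probs, labels) if y != c]
        per_class.append(binary_auc(pos, neg))
        pooled_pos += pos
        pooled_neg += neg
    macro = sum(per_class) / len(per_class)
    micro = binary_auc(pooled_pos, pooled_neg)
    return macro, micro

# Toy three-class predictions (benign=0, borderline=1, malignant=2)
probs = [[0.8, 0.1, 0.1], [0.2, 0.6, 0.2], [0.1, 0.2, 0.7], [0.5, 0.3, 0.2]]
labels = [0, 1, 2, 0]
macro, micro = ovr_macro_micro_auc(probs, labels)
```

With heavy class imbalance (44 borderline vs. 440 benign in the study's training set), micro-averaging weights the dominant class more, which is why the paper reports both averages.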
Affiliation(s)
- Yangchun Du: Department of Ultrasound, The First Affiliated Hospital of Guangxi Medical University, No. 6 Shuangyong Road, Qingxiu District, Nanning 530021, China; Department of Ultrasound, The People's Hospital of Guangxi Zhuang Autonomous Region & Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning 530021, China
- Wenwen Guo: Department of Pathology, The People's Hospital of Guangxi Zhuang Autonomous Region & Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning 530021, China
- Yanju Xiao: Department of Ultrasound, The People's Hospital of Guangxi Zhuang Autonomous Region & Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning 530021, China
- Haining Chen: Department of Ultrasound, The People's Hospital of Guangxi Zhuang Autonomous Region & Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning 530021, China
- Jinxiu Yao: Department of Ultrasound, The People's Hospital of Guangxi Zhuang Autonomous Region & Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning 530021, China
- Ji Wu: Department of Ultrasound, The First Affiliated Hospital of Guangxi Medical University, No. 6 Shuangyong Road, Qingxiu District, Nanning 530021, China
5
Du Y, Xiao Y, Guo W, Yao J, Lan T, Li S, Wen H, Zhu W, He G, Zheng H, Chen H. Development and validation of an ultrasound-based deep learning radiomics nomogram for predicting the malignant risk of ovarian tumours. Biomed Eng Online 2024; 23:41. PMID: 38594729; PMCID: PMC11003110; DOI: 10.1186/s12938-024-01234-y.
Abstract
BACKGROUND The timely identification and management of ovarian cancer are critical determinants of patient prognosis. In this study, we developed and validated a deep learning radiomics nomogram (DLR_Nomogram) based on ultrasound (US) imaging to predict the malignant risk of ovarian tumours, and we compared its diagnostic performance with that of the Ovarian-Adnexal Reporting and Data System (O-RADS). METHODS This study comprised two tasks; for both, patients were randomly divided into training and testing sets in an 8:2 ratio. In task 1, we assessed the malignancy risk of 849 patients with ovarian tumours. In task 2, we assessed the malignancy risk of 391 patients with O-RADS 4 and O-RADS 5 ovarian neoplasms. Three models were developed and validated to predict the risk of malignancy. The predicted outcomes of the models for each sample were merged into a new feature set that served as input to a logistic regression (LR) model, yielding a combined model visualised as the DLR_Nomogram. Diagnostic performance was then evaluated using receiver operating characteristic (ROC) curves. RESULTS The DLR_Nomogram demonstrated superior performance in predicting the malignant risk of ovarian tumours, with areas under the ROC curve (AUCs) of 0.985 and 0.928 for the training and testing sets of task 1, respectively. Its testing-set AUC was lower than that of O-RADS, but the difference was not statistically significant. The DLR_Nomogram achieved the highest AUCs in task 2, at 0.955 and 0.869 for the training and testing sets, respectively. It showed satisfactory fit in Hosmer-Lemeshow testing for both tasks, and decision curve analysis demonstrated greater net clinical benefit for predicting malignant ovarian tumours within a specific range of threshold values. CONCLUSIONS The US-based DLR_Nomogram can accurately predict the malignant risk of ovarian tumours, with predictive efficacy comparable to that of O-RADS.
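The fusion step described in the methods, feeding each base model's predicted probability into a logistic regression to form the combined model, is a form of stacking. A minimal gradient-descent sketch with invented toy data (not the study's implementation; `train_stacker` and the sample values are hypothetical):

```python
import math

def train_stacker(X, y, lr=0.5, epochs=500):
    """Fit a logistic-regression combiner over base-model probabilities.

    Each row of X holds the per-sample outputs of the base models;
    the learned weights express how much to trust each signature.
    """
    w = [0.0] * len(X[0])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            z = sum(wj * xj for wj, xj in zip(w, xi)) + b
            p = 1.0 / (1.0 + math.exp(-z))
            g = p - yi  # gradient of log-loss w.r.t. z
            w = [wj - lr * g * xj for wj, xj in zip(w, xi)]
            b -= lr * g
    return w, b

def predict(w, b, xi):
    """Combined malignancy probability for one sample."""
    z = sum(wj * xj for wj, xj in zip(w, xi)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Toy stacked features: [probability from model A, probability from model B]
X = [[0.9, 0.8], [0.8, 0.9], [0.2, 0.3], [0.1, 0.2]]
y = [1, 1, 0, 0]  # 1 = malignant, 0 = benign
w, b = train_stacker(X, y)
```

The fitted intercept and weights are exactly what a nomogram then renders graphically: each base-model probability contributes a weighted number of "points" toward the combined risk.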
Affiliation(s)
- Yangchun Du: Department of Ultrasound, The People's Hospital of Guangxi Zhuang Autonomous Region and Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning, 530021, China
- Yanju Xiao: Department of Ultrasound, The People's Hospital of Guangxi Zhuang Autonomous Region and Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning, 530021, China
- Wenwen Guo: Department of Pathology, The People's Hospital of Guangxi Zhuang Autonomous Region and Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning, 530021, China
- Jinxiu Yao: Department of Ultrasound, The People's Hospital of Guangxi Zhuang Autonomous Region and Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning, 530021, China
- Tongliu Lan: Department of Ultrasound, The People's Hospital of Guangxi Zhuang Autonomous Region and Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning, 530021, China
- Sijin Li: Department of Ultrasound, The People's Hospital of Guangxi Zhuang Autonomous Region and Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning, 530021, China
- Huoyue Wen: Department of Ultrasound, The People's Hospital of Guangxi Zhuang Autonomous Region and Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning, 530021, China
- Wenying Zhu: Department of Ultrasound, The People's Hospital of Guangxi Zhuang Autonomous Region and Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning, 530021, China
- Guangling He: Department of Ultrasound, The People's Hospital of Guangxi Zhuang Autonomous Region and Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning, 530021, China
- Hongyu Zheng: Department of Ultrasound, The People's Hospital of Guangxi Zhuang Autonomous Region and Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning, 530021, China
- Haining Chen: Department of Ultrasound, The People's Hospital of Guangxi Zhuang Autonomous Region and Guangxi Academy of Medical Sciences, No. 6 Taoyuan Road, Qingxiu District, Nanning, 530021, China
6
Liu L, Cai W, Zhou C, Tian H, Wu B, Zhang J, Yue G, Hao Y. Ultrasound radiomics-based artificial intelligence model to assist in the differential diagnosis of ovarian endometrioma and ovarian dermoid cyst. Front Med (Lausanne) 2024; 11:1362588. PMID: 38523908; PMCID: PMC10957533; DOI: 10.3389/fmed.2024.1362588.
Abstract
Background Accurately differentiating between ovarian endometrioma and ovarian dermoid cyst is of clinical significance; however, the ultrasound appearance of these two diseases is variable, occasionally causing confusion and overlap. This study aimed to develop a diagnostic classification model based on ultrasound radiomics to distinguish between the two diseases. Methods We collected ovarian ultrasound images from participants diagnosed with ovarian endometrioma or ovarian dermoid cyst. Feature extraction and selection were performed using the Mann-Whitney U-test, Spearman correlation analysis, and least absolute shrinkage and selection operator (LASSO) regression, and the final features were input into machine learning classifiers for model construction. A nomogram was established by combining the radiomic and clinical signatures. Results A total of 407 participants with 407 lesions were included: 200 in the ovarian endometrioma group and 207 in the dermoid cyst group. In the test cohort, logistic regression (LR) achieved the highest area under the curve (AUC; 0.981, 95% CI: 0.963-1.000), accuracy (94.8%), and sensitivity (95.5%), while LightGBM achieved the highest specificity (97.1%). A nomogram incorporating both clinical and radiomic features achieved the best overall performance (AUC: 0.987, 95% CI: 0.967-1.000; accuracy: 95.1%; sensitivity: 88.0%; specificity: 100.0%; PPV: 100.0%; NPV: 88.0%; precision: 93.6%). No statistically significant difference in diagnostic performance was observed between the radiomic model and the nomogram (P > 0.05). The diagnostic indexes of the radiomic model were comparable to those of senior radiologists and superior to those of the junior radiologist, and the diagnostic performance of junior radiologists improved significantly with the assistance of the model. Conclusion This ultrasound radiomics-based model showed diagnostic performance superior to that of junior radiologists and comparable to that of senior radiologists, and it has the potential to enhance the diagnostic performance of junior radiologists.
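Of the feature-screening steps named in the methods, the Spearman correlation analysis (commonly used to drop one of each pair of highly redundant radiomics features) can be sketched in pure Python. The toy feature values are illustrative assumptions, not study data:

```python
def _ranks(values):
    """Average ranks (1-based); tied values share their mean rank."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # mean of 1-based positions i..j
        for k in order[i:j + 1]:
            ranks[k] = avg
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman rho: Pearson correlation of the rank-transformed data."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Two radiomic features that rise together monotonically: rho == 1.0,
# so a redundancy filter would keep only one of the pair
f1 = [1.0, 2.0, 3.0, 4.0, 5.0]
f2 = [2.1, 3.9, 6.2, 8.0, 9.5]
rho = spearman(f1, f2)
```

Because rho depends only on ranks, it flags monotone redundancy even when the two features are not linearly related, which is why radiomics pipelines prefer it to Pearson correlation at this step.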
Affiliation(s)
- Lu Liu: Department of Ultrasound Medicine, South China Hospital, Medical School, Shenzhen University, Shenzhen, P. R. China
- Wenjun Cai: Department of Ultrasound, Shenzhen University General Hospital, Medical School, Shenzhen University, Shenzhen, P. R. China
- Chenyang Zhou: Department of Information, South China Hospital, Medical School, Shenzhen University, Shenzhen, P. R. China
- Hongyan Tian: Department of Ultrasound Medicine, South China Hospital, Medical School, Shenzhen University, Shenzhen, P. R. China
- Beibei Wu: Department of Ultrasound Medicine, South China Hospital, Medical School, Shenzhen University, Shenzhen, P. R. China
- Jing Zhang: Department of Ultrasound Medicine, South China Hospital, Medical School, Shenzhen University, Shenzhen, P. R. China
- Guanghui Yue: National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen University, Shenzhen, P. R. China
- Yi Hao: Department of Ultrasound Medicine, South China Hospital, Medical School, Shenzhen University, Shenzhen, P. R. China
7
Fischerova D, Smet C, Scovazzi U, Sousa DN, Hundarova K, Haldorsen IS. Staging by imaging in gynecologic cancer and the role of ultrasound: an update of European joint consensus statements. Int J Gynecol Cancer 2024; 34:363-378. PMID: 38438175; PMCID: PMC10958454; DOI: 10.1136/ijgc-2023-004609.
Abstract
In recent years, the role of diagnostic pelvic ultrasound in the diagnosis and staging of gynecological cancers has grown rapidly. Evidence from recent prospective multicenter studies has demonstrated high accuracy for pre-operative locoregional ultrasound staging in gynecological cancers. Therefore, in many leading gynecologic oncology units, ultrasound is implemented alongside pelvic MRI as a first-line imaging modality for gynecological cancer. The work herein is a consensus statement on the role of pre-operative imaging by ultrasound and other modalities in gynecological cancer, following European Society guidelines.
Affiliation(s)
- Daniela Fischerova: Gynecologic Oncology Center, Department of Gynecology, Obstetrics and Neonatology, First Faculty of Medicine, Charles University and General University Hospital in Prague, Prague, Czech Republic
- Carolina Smet: Department of Obstetrics and Gynecology, São Francisco de Xavier Hospital in Lisbon, Lisbon, Portugal
- Umberto Scovazzi: Department of Gynecology and Obstetrics, Ospedale Policlinico San Martino and University of Genoa, Genoa, Italy
- Kristina Hundarova: Department of Gynecology and Obstetrics A, Hospital and University Centre of Coimbra, Coimbra, Portugal
- Ingfrid Salvesen Haldorsen: Mohn Medical Imaging and Visualization Centre (MMIV), Department of Radiology and Department of Clinical Medicine, Haukeland University Hospital and the University of Bergen, Bergen, Norway
8
Wang Y, Lin W, Zhuang X, Wang X, He Y, Li L, Lyu G. Advances in artificial intelligence for the diagnosis and treatment of ovarian cancer (Review). Oncol Rep 2024; 51:46. PMID: 38240090; PMCID: PMC10828921; DOI: 10.3892/or.2024.8705.
Abstract
Artificial intelligence (AI) has emerged as a crucial technique for extracting high-throughput information from various sources, including medical images, pathological images, and genomics, transcriptomics, proteomics and metabolomics data. AI has been widely used in the field of diagnosis, for the differentiation of benign and malignant ovarian cancer (OC), and for prognostic assessment, with favorable results. Notably, AI-based radiomics has proven to be a non-invasive, convenient and economical approach, making it an essential asset in a gynecological setting. The present study reviews the application of AI in the diagnosis, differentiation and prognostic assessment of OC. It is suggested that AI-based multi-omics studies have the potential to improve diagnostic and prognostic prediction in patients with OC, thereby facilitating the realization of precision medicine.
Affiliation(s)
- Yanli Wang: Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Weihong Lin: Department of Obstetrics and Gynecology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Xiaoling Zhuang: Department of Pathology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Xiali Wang: Department of Clinical Medicine, Quanzhou Medical College, Quanzhou, Fujian 362000, P.R. China
- Yifang He: Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Luhong Li: Department of Obstetrics and Gynecology, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China
- Guorong Lyu: Department of Ultrasound, The Second Affiliated Hospital of Fujian Medical University, Quanzhou, Fujian 362000, P.R. China; Department of Clinical Medicine, Quanzhou Medical College, Quanzhou, Fujian 362000, P.R. China
9
Barcroft JF, Linton-Reid K, Landolfo C, Al-Memar M, Parker N, Kyriacou C, Munaretto M, Fantauzzi M, Cooper N, Yazbek J, Bharwani N, Lee SR, Kim JH, Timmerman D, Posma J, Savelli L, Saso S, Aboagye EO, Bourne T. Machine learning and radiomics for segmentation and classification of adnexal masses on ultrasound. NPJ Precis Oncol 2024; 8:41. PMID: 38378773; PMCID: PMC10879532; DOI: 10.1038/s41698-024-00527-8.
Abstract
Ultrasound-based models exist to support the classification of adnexal masses but are subjective and rely upon ultrasound expertise. We aimed to develop an end-to-end machine learning (ML) model capable of automating the classification of adnexal masses. In this retrospective study, transvaginal ultrasound images with linked diagnoses (ultrasound subjective assessment or histology) were extracted and segmented from Imperial College Healthcare, UK (ICH development dataset; n = 577 masses; 1444 images) and Morgagni-Pierantoni Hospital, Italy (MPH external dataset; n = 184 masses; 476 images). A segmentation and classification model was developed using convolutional neural networks and traditional radiomics features. The Dice similarity coefficient (DICE) was used to measure segmentation performance, and the area under the ROC curve (AUC), F1-score, and recall were used for classification performance. The ICH and MPH datasets had median ages of 45 (IQR 35-60) and 48 (IQR 38-57) years and comprised 23.1% and 31.5% malignant cases, respectively. The best segmentation model achieved DICE scores of 0.85 ± 0.01, 0.88 ± 0.01, and 0.85 ± 0.01 in the ICH training, ICH validation, and MPH test sets. The best classification model achieved a recall of 1.00 and F1-scores of 0.88 (AUC: 0.93), 0.94 (AUC: 0.89), and 0.83 (AUC: 0.90) in the ICH training, ICH validation, and MPH test sets, respectively. We have developed an end-to-end radiomics-based model capable of adnexal mass segmentation and classification, with predictive performance (AUC 0.90) comparable to the published performance of expert subjective assessment (the gold standard) and to current risk models. Further prospective evaluation of the classification performance of this ML model against existing methods is required.
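The Dice coefficient used above to score segmentation overlap follows directly from its definition, DICE = 2|A∩B| / (|A| + |B|). A minimal sketch on flattened binary masks; the toy masks are illustrative, not study data:

```python
def dice(mask_a, mask_b):
    """Dice similarity coefficient between two binary masks.

    DICE = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap between
    the predicted and reference segmentations. Two empty masks are
    treated as a perfect match.
    """
    inter = sum(a & b for a, b in zip(mask_a, mask_b))
    size = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / size if size else 1.0

# Flattened toy masks: model prediction vs. expert annotation of a mass
pred = [0, 1, 1, 1, 0, 0, 1, 0]
truth = [0, 1, 1, 0, 0, 0, 1, 1]
score = dice(pred, truth)
```

Here 3 of the 4 predicted foreground pixels overlap the 4 annotated ones, giving DICE = 6/8 = 0.75; the study's models score around 0.85 on this metric.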
Affiliation(s)
- Jennifer F Barcroft: Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK; Department of Obstetrics and Gynaecology, Imperial College Healthcare NHS Trust, London, UK
- Chiara Landolfo: Department of Obstetrics and Gynaecology, Imperial College Healthcare NHS Trust, London, UK
- Maya Al-Memar: Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK; Department of Obstetrics and Gynaecology, Imperial College Healthcare NHS Trust, London, UK
- Nina Parker: Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK; Department of Obstetrics and Gynaecology, Imperial College Healthcare NHS Trust, London, UK
- Chris Kyriacou: Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK; Department of Obstetrics and Gynaecology, Imperial College Healthcare NHS Trust, London, UK
- Maria Munaretto: Department of Obstetrics and Gynaecology, Ospedale Morgagni-Pierantoni, Forli, Italy
- Martina Fantauzzi: Department of Medicine and Surgery, University of Milan-Bicocca, Milan, Italy
- Nina Cooper: Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK; Department of Obstetrics and Gynaecology, Imperial College Healthcare NHS Trust, London, UK
- Joseph Yazbek: Department of Obstetrics and Gynaecology, Imperial College Healthcare NHS Trust, London, UK
- Nishat Bharwani: Department of Radiology, Imperial College Healthcare NHS Trust, London, UK
- Sa Ra Lee: Department of Obstetrics and Gynaecology, Asan Medical Center, Seoul, South Korea
- Ju Hee Kim: Department of Obstetrics and Gynaecology, Asan Medical Center, Seoul, South Korea
- Dirk Timmerman: Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK; Department of Obstetrics and Gynecology, University Hospitals Leuven, Leuven, Belgium; Department of Development and Regeneration, KU Leuven, Leuven, Belgium
- Joram Posma: Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK
- Luca Savelli: Department of Obstetrics and Gynaecology, Ospedale Morgagni-Pierantoni, Forli, Italy
- Srdjan Saso: Department of Obstetrics and Gynaecology, Imperial College Healthcare NHS Trust, London, UK; Department of Surgery and Cancer, Imperial College London, London, UK
- Eric O Aboagye: Department of Surgery and Cancer, Imperial College London, London, UK
- Tom Bourne: Department of Metabolism, Digestion and Reproduction, Imperial College London, London, UK; Department of Obstetrics and Gynaecology, Imperial College Healthcare NHS Trust, London, UK; Department of Development and Regeneration, KU Leuven, Leuven, Belgium
10
Brandão M, Mendes F, Martins M, Cardoso P, Macedo G, Mascarenhas T, Mascarenhas Saraiva M. Revolutionizing Women's Health: A Comprehensive Review of Artificial Intelligence Advancements in Gynecology. J Clin Med 2024; 13:1061. [PMID: 38398374] [PMCID: PMC10889757] [DOI: 10.3390/jcm13041061]
Abstract
Artificial intelligence has yielded remarkably promising results in several medical fields, namely those with a strong imaging component. Gynecology relies heavily on imaging since it offers useful visual data on the female reproductive system, leading to a deeper understanding of pathophysiological concepts. The applicability of artificial intelligence technologies has not been as noticeable in gynecologic imaging as in other medical fields so far. However, due to growing interest in this area, some studies have been performed with exciting results. From urogynecology to oncology, artificial intelligence algorithms, particularly machine learning and deep learning, have shown huge potential to revolutionize the overall healthcare experience for women's reproductive health. In this review, we aim to establish the current status of AI in gynecology and the upcoming developments in this area, and to discuss the challenges facing its clinical implementation, namely the technological and ethical concerns regarding technology development, implementation, and accountability.
Affiliation(s)
- Marta Brandão: Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Francisco Mendes: Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Miguel Martins: Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Pedro Cardoso: Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Guilherme Macedo: Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
- Teresa Mascarenhas: Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; Department of Obstetrics and Gynecology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal
- Miguel Mascarenhas Saraiva: Faculty of Medicine, University of Porto, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; Department of Gastroenterology, São João University Hospital, Alameda Professor Hernâni Monteiro, 4200-427 Porto, Portugal; WGO Gastroenterology and Hepatology Training Center, 4200-427 Porto, Portugal
11
Deslandes A, Avery J, Chen H, Leonardi M, Condous G, Hull ML. Artificial intelligence as a teaching tool for gynaecological ultrasound: A systematic search and scoping review. Australas J Ultrasound Med 2024; 27:5-11. [PMID: 38434541] [PMCID: PMC10902831] [DOI: 10.1002/ajum.12368]
Abstract
Purpose The aim of this study was to investigate the current application of artificial intelligence (AI) tools in the teaching of ultrasound skills as they pertain to gynaecological ultrasound. Methods A scoping review was performed. Eight databases (MEDLINE, EMBASE, EMCARE, CINAHL, Scopus, Web of Science, IEEE Xplore and ACM digital library) were searched in December 2022 using predefined keywords. All types of publications were eligible for inclusion so long as they reported the use of an AI tool, included reference to or discussion of teaching or the improvement of ultrasound skills and pertained to gynaecological ultrasound. Conference abstracts and non-English language papers which could not be adequately translated into English were excluded. Results The initial database search returned 481 articles. After screening against our inclusion and exclusion criteria, two were deemed to meet the inclusion criteria. Neither of the articles included reported original research (one systematic review and one review article). Neither of the included articles explicitly provided details of specific tools developed for the teaching of ultrasound skills for gynaecological imaging but highlighted similar applications within the field of obstetrics which could potentially be expanded. Conclusion Artificial intelligence can potentially assist in the training of sonographers and other ultrasound operators, including in the field of gynaecological ultrasound. This scoping review revealed however that to date, no original research has been published reporting the use or development of such a tool specifically for gynaecological ultrasound.
Affiliation(s)
- Alison Deslandes: Robinson Research Institute, University of Adelaide, Adelaide, South Australia, Australia
- Jodie Avery: Robinson Research Institute, University of Adelaide, Adelaide, South Australia, Australia
- Hsiang-Ting Chen: School of Computer and Mathematical Sciences, University of Adelaide, Adelaide, South Australia, Australia
- Mathew Leonardi: Robinson Research Institute, University of Adelaide, Adelaide, South Australia, Australia; Department of Obstetrics and Gynecology, McMaster University, Hamilton, Ontario, Canada
- George Condous: Robinson Research Institute, University of Adelaide, Adelaide, South Australia, Australia
- M. Louise Hull: Robinson Research Institute, University of Adelaide, Adelaide, South Australia, Australia
12
Sadeghi MH, Sina S, Omidi H, Farshchitabrizi AH, Alavi M. Deep learning in ovarian cancer diagnosis: a comprehensive review of various imaging modalities. Pol J Radiol 2024; 89:e30-e48. [PMID: 38371888] [PMCID: PMC10867948] [DOI: 10.5114/pjr.2024.134817]
Abstract
Ovarian cancer poses a major worldwide health issue, marked by high death rates and a deficiency in reliable diagnostic methods. The precise and prompt detection of ovarian cancer holds great importance in advancing patient outcomes and determining suitable treatment plans. Medical imaging techniques are vital in diagnosing ovarian cancer, but achieving accurate diagnoses remains challenging. Deep learning (DL), particularly convolutional neural networks (CNNs), has emerged as a promising solution to improve the accuracy of ovarian cancer detection. This systematic review explores the role of DL in improving the diagnostic accuracy for ovarian cancer. The methodology involved the establishment of research questions, inclusion and exclusion criteria, and a comprehensive search strategy across relevant databases. The selected studies focused on DL techniques applied to ovarian cancer diagnosis using medical imaging modalities, as well as tumour differentiation and radiomics. Data extraction, analysis, and synthesis were performed to summarize the characteristics and findings of the selected studies. The review emphasizes the potential of DL in enhancing the diagnosis of ovarian cancer by accelerating the diagnostic process and offering more precise and efficient solutions. DL models have demonstrated their effectiveness in categorizing ovarian tissues and achieving comparable diagnostic performance to that of experienced radiologists. The integration of DL into ovarian cancer diagnosis holds the promise of improving patient outcomes, refining treatment approaches, and supporting well-informed decision-making. Nevertheless, additional research and validation are necessary to ensure the dependability and applicability of DL models in everyday clinical settings.
Affiliation(s)
- Sedigheh Sina: Shiraz University, Shiraz, Iran; Radiation Research Center, Shiraz University, Shiraz, Iran
13
Mitchell S, Nikolopoulos M, El-Zarka A, Al-Karawi D, Al-Zaidi S, Ghai A, Gaughran JE, Sayasneh A. Artificial Intelligence in Ultrasound Diagnoses of Ovarian Cancer: A Systematic Review and Meta-Analysis. Cancers (Basel) 2024; 16:422. [PMID: 38275863] [PMCID: PMC10813993] [DOI: 10.3390/cancers16020422]
Abstract
Ovarian cancer is the sixth most common malignancy, with a 35% survival rate across all stages at 10 years. Ultrasound is widely used for ovarian tumour diagnosis, and accurate pre-operative diagnosis is essential for appropriate patient management. Artificial intelligence is an emerging field within gynaecology and has been shown to aid in the ultrasound diagnosis of ovarian cancers. For this study, Embase and MEDLINE databases were searched, and all original clinical studies that used artificial intelligence in ultrasound examinations for the diagnosis of ovarian malignancies were screened. Studies using histopathological findings as the standard were included. The diagnostic performance of each study was analysed, and all the diagnostic performances were pooled and assessed. The initial search identified 3726 papers, of which 63 were suitable for abstract screening. Fourteen studies that used artificial intelligence in ultrasound diagnoses of ovarian malignancies and had histopathological findings as a standard were included in the final analysis, each of which had different sample sizes and used different methods; these studies examined a combined total of 15,358 ultrasound images. The overall sensitivity was 81% (95% CI, 0.80-0.82), and specificity was 92% (95% CI, 0.92-0.93), indicating that artificial intelligence demonstrates good performance in ultrasound diagnoses of ovarian cancer. Further prospective work is required to further validate AI for its use in clinical practice.
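The pooled sensitivity and specificity quoted above are, at their simplest, derived from summed per-study confusion counts; a simplified sketch with invented study counts (a published meta-analysis would typically use a bivariate random-effects model rather than naive pooling):

```python
import numpy as np

# Hypothetical per-study confusion counts: (TP, FN, TN, FP) -- invented values
studies = [
    (80, 20, 180, 15),
    (45, 10, 120, 12),
    (60, 12, 200, 18),
]

# Sum counts across studies, then compute pooled operating characteristics
tp, fn, tn, fp = np.sum(studies, axis=0)
pooled_sensitivity = tp / (tp + fn)   # TP / (TP + FN)
pooled_specificity = tn / (tn + fp)   # TN / (TN + FP)
print(f"sensitivity={pooled_sensitivity:.3f}, specificity={pooled_specificity:.3f}")
```

Sensitivity here is the fraction of malignant masses correctly flagged, and specificity the fraction of benign masses correctly cleared, which is why both are reported with confidence intervals in the abstract.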
Affiliation(s)
- Sian Mitchell: Department of Women’s Health, Guy’s and St Thomas’ Hospital NHS Foundation Trust, London SE1 7EH, UK
- Manolis Nikolopoulos: Department of Women’s Health, Guy’s and St Thomas’ Hospital NHS Foundation Trust, London SE1 7EH, UK
- Alaa El-Zarka: Department of Gynaecology, Alexandria Faculty of Medicine, Alexandria 21433, Egypt
- Avi Ghai: School of Life Course Sciences, Faculty of Life Sciences and Medicine, King’s College London, Strand, London WC2R 2LS, UK
- Jonathan E. Gaughran: Department of Women’s Health, Guy’s and St Thomas’ Hospital NHS Foundation Trust, London SE1 7EH, UK
- Ahmad Sayasneh: Department of Gynaecological Oncology, Surgical Oncology Directorate, Cancer Centre, Guy’s Hospital, Great Maze Pond, London SE1 9RT, UK; School of Life Course Sciences, Faculty of Life Sciences and Medicine, St Thomas Hospital, Westminster Bridge Road, London SE1 7EH, UK
14
Huang DM, Wang SH. In Situ Monitoring and Assessment of Ischemic Skin Flap by High-Frequency Ultrasound and Quantitative Parameters. Sensors (Basel) 2024; 24:363. [PMID: 38257456] [PMCID: PMC10820102] [DOI: 10.3390/s24020363]
Abstract
Skin flap surgery is a critical procedure for treating severe skin injury, after which post-surgery lesions must be well monitored and cared for noninvasively. In the present study, attempts using high-frequency ultrasound imaging, quantitative parameters, and statistical analysis were made to extensively assess variations in the skin flap. Experiments were arranged by incising the dorsal skin of rats to create a skin flap using the chamber model. Measurements, including photographs, 30 MHz ultrasound B-mode images, skin thickness, echogenicity, Nakagami statistics, and histological analysis of the post-surgery skin flap, were performed. Photograph results showed that color variations in different parts of the skin flap may readily correspond to ischemic states of local tissues. Compared to the post-surgery skin flap on day 7, both the integrated backscatter (IB) and Nakagami parameter (m) of the distal part of tissues were increased, and the skin thickness was decreased. Overall, relative skin thickness, IB, and m of the distal part of the post-surgery skin flap varied from 100 to 67%, -66 to -61 dB, and 0.48 to 0.36, respectively. These results demonstrate that this modality and these quantitative parameters can feasibly be applied for long-term, in situ assessment of skin flap tissues.
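The Nakagami parameter m reported above is conventionally estimated from the backscattered envelope by moment matching, m = E[R²]² / Var(R²); a sketch of that standard estimator on synthetic data (not the study's measurements):

```python
import numpy as np

def nakagami_m(envelope: np.ndarray) -> float:
    """Moment-based Nakagami shape parameter: m = E[R^2]^2 / Var(R^2)."""
    r2 = np.asarray(envelope, dtype=float) ** 2
    return r2.mean() ** 2 / r2.var()

# A Rayleigh-distributed envelope (fully developed speckle) should give m near 1;
# pre-Rayleigh scattering (as in the changing flap tissue) gives m < 1.
rng = np.random.default_rng(0)
rayleigh = rng.rayleigh(scale=1.0, size=200_000)
print(round(nakagami_m(rayleigh), 2))  # close to 1.0
```

This is why the drop from 0.48 to 0.36 in the abstract is interpretable: smaller m indicates a more pre-Rayleigh, less dense scatterer arrangement in the ischemic distal tissue.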
Affiliation(s)
- Da-Ming Huang: Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan 70101, Taiwan
- Shyh-Hau Wang: Department of Computer Science and Information Engineering, National Cheng Kung University, Tainan 70101, Taiwan; Institute of Medical Informatics, National Cheng Kung University, Tainan 70101, Taiwan
15
Jiang Y, Wang C, Zhou S. Artificial intelligence-based risk stratification, accurate diagnosis and treatment prediction in gynecologic oncology. Semin Cancer Biol 2023; 96:82-99. [PMID: 37783319] [DOI: 10.1016/j.semcancer.2023.09.005]
Abstract
As a data-driven science, artificial intelligence (AI) has paved a promising path toward an evolving health system teeming with thrilling opportunities for precision oncology. Notwithstanding the tremendous success of oncological AI in such fields as lung carcinoma, breast tumor and brain malignancy, less attention has been devoted to investigating the influence of AI on gynecologic oncology. Hereby, this review sheds light on the ever-increasing contribution of state-of-the-art AI techniques to the refined risk stratification and whole-course management of patients with gynecologic tumors, in particular cervical, ovarian and endometrial cancer, centering on information and features extracted from clinical data (electronic health records), cancer imaging including radiological imaging, colposcopic images, cytological and histopathological digital images, and molecular profiling (genomics, transcriptomics, metabolomics and so forth). However, there are still noteworthy challenges beyond performance validation. Thus, this work further describes the limitations and challenges faced in the real-world implementation of AI models, as well as potential solutions to address these issues.
Affiliation(s)
- Yuting Jiang: Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China; Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Chengdi Wang: Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China; Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
- Shengtao Zhou: Department of Obstetrics and Gynecology, Key Laboratory of Birth Defects and Related Diseases of Women and Children of MOE and State Key Laboratory of Biotherapy, West China Second Hospital, Sichuan University and Collaborative Innovation Center, Chengdu, Sichuan 610041, China; Department of Pulmonary and Critical Care Medicine, State Key Laboratory of Respiratory Health and Multimorbidity, Frontiers Science Center for Disease-related Molecular Network, West China Hospital, Sichuan University, Chengdu, Sichuan 610041, China
16
Jost E, Kosian P, Jimenez Cruz J, Albarqouni S, Gembruch U, Strizek B, Recker F. Evolving the Era of 5D Ultrasound? A Systematic Literature Review on the Applications for Artificial Intelligence Ultrasound Imaging in Obstetrics and Gynecology. J Clin Med 2023; 12:6833. [PMID: 37959298] [PMCID: PMC10649694] [DOI: 10.3390/jcm12216833]
Abstract
Artificial intelligence (AI) has gained prominence in medical imaging, particularly in obstetrics and gynecology (OB/GYN), where ultrasound (US) is the preferred method. It is considered cost effective and easily accessible but is time consuming and hindered by the need for specialized training. To overcome these limitations, AI models have been proposed for automated plane acquisition, anatomical measurements, and pathology detection. This study aims to overview recent literature on AI applications in OB/GYN US imaging, highlighting their benefits and limitations. For the methodology, a systematic literature search was performed in the PubMed and Cochrane Library databases. Matching abstracts were screened based on the PICOS (Participants, Intervention or Exposure, Comparison, Outcome, Study type) scheme. Articles with full text copies were distributed to the sections of OB/GYN and their research topics. As a result, this review includes 189 articles published from 1994 to 2023. Among these, 148 focus on obstetrics and 41 on gynecology. AI-assisted US applications span fetal biometry, echocardiography, or neurosonography, as well as the identification of adnexal and breast masses, and assessment of the endometrium and pelvic floor. To conclude, the applications for AI-assisted US in OB/GYN are abundant, especially in the subspecialty of obstetrics. However, while most studies focus on common application fields such as fetal biometry, this review outlines emerging and still experimental fields to promote further research.
Affiliation(s)
- Elena Jost: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Philipp Kosian: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Jorge Jimenez Cruz: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Shadi Albarqouni: Department of Diagnostic and Interventional Radiology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany; Helmholtz AI, Helmholtz Munich, Ingolstädter Landstraße 1, 85764 Neuherberg, Germany
- Ulrich Gembruch: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Brigitte Strizek: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
- Florian Recker: Department of Obstetrics and Gynecology, University Hospital Bonn, Venusberg Campus 1, 53127 Bonn, Germany
17
Deeparani M, Kalamani M. Gynecological Healthcare: Unveiling Pelvic Masses Classification through Evolutionary Gravitational Neocognitron Neural Network Optimized with Nomadic People Optimizer. Diagnostics (Basel) 2023; 13:3131. [PMID: 37835875] [PMCID: PMC10572945] [DOI: 10.3390/diagnostics13193131]
Abstract
Accurate and early detection of a malignant pelvic mass is important for suitable referral, triage, and further care for women diagnosed with a pelvic mass. Several deep learning (DL) methods have been proposed to detect pelvic masses, but existing methods cannot provide sufficient accuracy and increase the computational time when classifying the pelvic mass. To overcome these issues, in this manuscript, an evolutionary gravitational neocognitron neural network optimized with the nomadic people optimizer is proposed for classifying gynecological abdominal pelvic masses (EGNNN-NPOA-PM-UI). The real-time ultrasound pelvic mass images are augmented using random transformations. The augmented images are then given to a 3D Tsallis entropy-based multilevel thresholding technique for extraction of the ROI region, and its features are further extracted with the help of the fast discrete curvelet transform with wrapping (FDCT-WRP) method. In this work, the EGNNN optimized with the nomadic people optimizer (NPOA) was therefore utilized for classifying gynecological abdominal pelvic masses. The method was implemented in Python and its efficiency was analyzed under several performance metrics. The proposed EGNNN-NPOA-PM-UI method attained 99.8%. Ultrasound image analysis using the proposed EGNNN-NPOA-PM-UI method can accurately predict pelvic masses compared with existing methods.
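Tsallis entropy-based thresholding, one stage of the pipeline described above, picks the threshold that maximises the non-extensive entropies of the two resulting intensity classes; a single-level sketch (the paper uses a 3D multilevel variant, and the entropic index q below is an assumed value, not taken from the study):

```python
import numpy as np

def tsallis_threshold(image: np.ndarray, q: float = 0.8) -> int:
    """Single-level Tsallis-entropy threshold on an 8-bit image histogram."""
    hist = np.bincount(image.ravel(), minlength=256).astype(float)
    p = hist / hist.sum()
    best_t, best_score = 0, -np.inf
    for t in range(1, 255):
        pa, pb = p[:t].sum(), p[t:].sum()
        if pa == 0 or pb == 0:
            continue
        # Tsallis entropies of the background and foreground classes
        sa = (1.0 - ((p[:t] / pa) ** q).sum()) / (q - 1.0)
        sb = (1.0 - ((p[t:] / pb) ** q).sum()) / (q - 1.0)
        # Non-extensive (pseudo-additive) combination rule
        score = sa + sb + (1.0 - q) * sa * sb
        if score > best_score:
            best_t, best_score = t, score
    return best_t

# Bimodal synthetic "image": dark background near 60, bright region near 180
rng = np.random.default_rng(0)
img = np.clip(np.concatenate([rng.normal(60, 10, 5000),
                              rng.normal(180, 10, 5000)]), 0, 255).astype(np.uint8)
t = tsallis_threshold(img)
print("threshold:", t)  # expected to fall between the two modes
```

For q → 1 this reduces to Kapur's maximum-entropy thresholding; multilevel variants simply search over several thresholds jointly.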
Affiliation(s)
- M. Deeparani: Department of Biomedical Engineering, Hindusthan College of Engineering and Technology, Coimbatore 641032, India
- M. Kalamani: Department of Electronics and Communication Engineering, KPR Institute of Engineering and Technology, Coimbatore 641407, India
18
Wan F, He W, Zhang W, Zhang Y, Zhang H, Guang Y. Preoperative prediction of extrathyroidal extension: radiomics signature based on multimodal ultrasound to papillary thyroid carcinoma. BMC Med Imaging 2023; 23:96. [PMID: 37474935] [PMCID: PMC10360306] [DOI: 10.1186/s12880-023-01049-8]
Abstract
BACKGROUND There is a recognized need for additional approaches to improve the accuracy of extrathyroidal extension (ETE) diagnosis in papillary thyroid carcinoma (PTC) before surgery. To date, multimodal ultrasound has been widely applied in disease diagnosis. We investigated the value of radiomic features extracted from multimodal ultrasound in the preoperative prediction of ETE. METHODS We retrospectively reviewed pathologically confirmed PTC lesions in 235 patients treated at our hospital from January 2019 to April 2022, including 45 ETE lesions and 205 non-ETE lesions. MaZda software was employed to obtain radiomics parameters from multimodal sonography. The most valuable radiomics features were selected by the Fisher coefficient, mutual information, probability of classification error and average correlation coefficient methods (F + MI + PA) in combination with the least absolute shrinkage and selection operator (LASSO) method. Finally, the multimodal model was developed by incorporating the clinical records and radiomics features through fivefold cross-validation with a linear support vector machine algorithm. The predictive performance was evaluated by sensitivity, specificity, accuracy, F1 scores and the area under the receiver operating characteristic curve (AUC) in the training and test sets. RESULTS A total of 5972 radiomics features were extracted from multimodal sonography, and the 13 most valuable radiomics features were selected from the training set using the F + MI + PA method combined with LASSO regression. The multimodal prediction model yielded AUCs of 0.911 (95% CI 0.866-0.957) and 0.716 (95% CI 0.522-0.910) in the cross-validation and test sets, respectively. The multimodal model and radiomics model showed good discrimination between ETE and non-ETE lesions. CONCLUSION Radiomics features based on multimodal ultrasonography could play a promising role in detecting ETE before surgery.
Affiliation(s)
- Fang Wan: Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, No. 119 West Road of South 4th Ring Road, Fengtai District, 100160, Beijing, China
- Wen He: Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, No. 119 West Road of South 4th Ring Road, Fengtai District, 100160, Beijing, China
- Wei Zhang: Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, No. 119 West Road of South 4th Ring Road, Fengtai District, 100160, Beijing, China
- Yukang Zhang: Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, No. 119 West Road of South 4th Ring Road, Fengtai District, 100160, Beijing, China
- Hongxia Zhang: Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, No. 119 West Road of South 4th Ring Road, Fengtai District, 100160, Beijing, China
- Yang Guang: Department of Ultrasound, Beijing Tiantan Hospital, Capital Medical University, No. 119 West Road of South 4th Ring Road, Fengtai District, 100160, Beijing, China
19
Wu M, Cui G, Lv S, Chen L, Tian Z, Yang M, Bai W. Deep convolutional neural networks for multiple histologic types of ovarian tumors classification in ultrasound images. Front Oncol 2023; 13:1154200. [PMID: 37427129] [PMCID: PMC10326903] [DOI: 10.3389/fonc.2023.1154200]
Abstract
Objective This study aimed to evaluate and validate the performance of deep convolutional neural networks in discriminating different histologic types of ovarian tumor in ultrasound (US) images. Materials and methods Our retrospective study included 1142 US images from 328 patients seen from January 2019 to June 2021. Two tasks were proposed based on US images. Task 1 was to classify benign tumors and high-grade serous carcinoma in original ovarian tumor US images, in which benign ovarian tumors were divided into six classes: mature cystic teratoma, endometriotic cyst, serous cystadenoma, granulosa-theca cell tumor, mucinous cystadenoma and simple cyst. The US images in task 2 were segmented. Deep convolutional neural networks (DCNN) were applied to classify the different types of ovarian tumors in detail. We used transfer learning on six pre-trained DCNNs: VGG16, GoogleNet, ResNet34, ResNext50, DenseNet121 and DenseNet201. Several metrics were adopted to assess model performance: accuracy, sensitivity, specificity, F1-score and the area under the receiver operating characteristic curve (AUC). Results The DCNNs performed better on labeled US images than on original US images. The best predictive performance came from the ResNext50 model, which had an overall accuracy of 0.952 in directly classifying the seven histologic types of ovarian tumors. It achieved a sensitivity of 90% and a specificity of 99.2% for high-grade serous carcinoma, and a sensitivity of over 90% and a specificity of over 95% in most benign pathological categories. Conclusion DCNNs are a promising technique for classifying different histologic types of ovarian tumors in US images and provide valuable computer-aided information.
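Transfer learning as applied above reuses a pretrained backbone and trains only a new classification head on the target classes; a deliberately simplified stand-in, where the synthetic `embeddings` play the role of features from a frozen pretrained CNN (all shapes, class counts, and noise levels are invented for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Pretend these 512-d vectors came out of a frozen, pretrained CNN backbone;
# 7 classes echoes the six benign types plus high-grade serous carcinoma.
n_classes = 7
centers = rng.normal(size=(n_classes, 512))
labels = rng.integers(0, n_classes, size=1142)
embeddings = centers[labels] + 0.5 * rng.normal(size=(1142, 512))

X_tr, X_te, y_tr, y_te = train_test_split(embeddings, labels, test_size=0.2,
                                          stratify=labels, random_state=0)
# Train only the new "head"; the backbone features stay fixed
head = LogisticRegression(max_iter=2000).fit(X_tr, y_tr)
print(f"held-out accuracy: {accuracy_score(y_te, head.predict(X_te)):.2f}")
```

In the actual study the backbone weights are fine-tuned as well, but the frozen-feature variant shows why pretraining helps: the head needs far fewer labeled images than training a CNN from scratch.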
Collapse
Affiliation(s)
- Meijing Wu
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
| | - Guangxia Cui
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
| | - Shuchang Lv
- The Department of Electronics and Information Engineering, Beihang University, Beijing, China
| | - Lijiang Chen
- The Department of Electronics and Information Engineering, Beihang University, Beijing, China
| | - Zongmei Tian
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
| | - Min Yang
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
| | - Wenpei Bai
- The Department of Gynecology and Obstetrics, Beijing Shijitan Hospital, Capital Medical University, Beijing, China
| |
Collapse
|
20
|
Jan YT, Tsai PS, Huang WH, Chou LY, Huang SC, Wang JZ, Lu PH, Lin DC, Yen CS, Teng JP, Mok GSP, Shih CT, Wu TH. Machine learning combined with radiomics and deep learning features extracted from CT images: a novel AI model to distinguish benign from malignant ovarian tumors. Insights Imaging 2023; 14:68. [PMID: 37093321 PMCID: PMC10126170 DOI: 10.1186/s13244-023-01412-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2023] [Accepted: 03/20/2023] [Indexed: 04/25/2023] Open
Abstract
BACKGROUND To develop an artificial intelligence (AI) model with radiomics and deep learning (DL) features extracted from CT images to distinguish benign from malignant ovarian tumors. METHODS We enrolled 149 patients with pathologically confirmed ovarian tumors. A total of 185 tumors were included and divided into training and testing sets in a 7:3 ratio. All tumors were manually segmented from preoperative contrast-enhanced CT images. CT image features were extracted using radiomics and DL. Five models with different combinations of feature sets were built. Benign and malignant tumors were classified using machine learning (ML) classifiers. The model performance was compared with five radiologists on the testing set. RESULTS Among the five models, the best performing model was the ensemble model combining the radiomics, DL, and clinical feature sets. The model achieved an accuracy of 82%, specificity of 89% and sensitivity of 68%. Compared with the junior radiologists' averaged results, the model had a higher accuracy (82% vs 66%) and specificity (89% vs 65%) with comparable sensitivity (68% vs 67%). With the assistance of the model, the junior radiologists achieved a higher average accuracy (81% vs 66%), specificity (80% vs 65%), and sensitivity (82% vs 67%), approaching the performance of senior radiologists. CONCLUSIONS We developed a CT-based AI model that can differentiate benign and malignant ovarian tumors with high accuracy and specificity. This model significantly improved the performance of less-experienced radiologists in ovarian tumor assessment, and may help guide gynecologists toward better therapeutic strategies for these patients.
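The fusion strategy described here, combining radiomics, DL, and clinical feature sets before a machine-learning classifier, can be sketched as feature-level concatenation. All names, feature dimensions, and the choice of random forest below are illustrative assumptions, not the authors' actual pipeline:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 40                                   # hypothetical number of tumors
radiomics = rng.normal(size=(n, 10))     # handcrafted shape/texture features
deep = rng.normal(size=(n, 8))           # features from a CNN embedding
clinical = rng.normal(size=(n, 3))       # e.g. age, tumor markers (illustrative)
y = rng.integers(0, 2, size=n)           # benign (0) vs malignant (1) labels

# Feature-level fusion: concatenate the three sets, then fit one ML classifier.
X = np.concatenate([radiomics, deep, clinical], axis=1)
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
pred = clf.predict(X)
```

Concatenation keeps each feature set interpretable in the fitted model, which is one reason such ensembles are compared against single-feature-set baselines, as in the five models of this study.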
Collapse
Affiliation(s)
- Ya-Ting Jan
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
| | - Pei-Shan Tsai
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
| | - Wen-Hui Huang
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
| | - Ling-Ying Chou
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
| | - Shih-Chieh Huang
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
| | - Jing-Zhe Wang
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
| | - Pei-Hsuan Lu
- Department of Radiology, MacKay Memorial Hospital, Taipei, Taiwan
- Department of Medicine, MacKay Medical College, New Taipei City, Taiwan
- MacKay Junior College of Medicine, Nursing and Management, New Taipei City, Taiwan
| | - Dao-Chen Lin
- Division of Endocrine and Metabolism, Department of Medicine, Taipei Veterans General Hospital, Taipei, Taiwan
- Department of Radiology, Taipei Veterans General Hospital, Taipei, Taiwan
- School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
| | - Chun-Sheng Yen
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
| | - Ju-Ping Teng
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan
| | - Greta S P Mok
- Biomedical Imaging Laboratory (BIG), Department of Electrical and Computer Engineering, Faculty of Science and Technology, University of Macau, Macau, China
| | - Cheng-Ting Shih
- Department of Biomedical Imaging and Radiological Science, China Medical University, Taichung, 404, Taiwan.
| | - Tung-Hsin Wu
- Department of Biomedical Imaging and Radiological Sciences, National Yang Ming Chiao Tung University, Taipei, 112, Taiwan.
| |
Collapse
|
21
|
Koch AH, Jeelof LS, Muntinga CLP, Gootzen TA, van de Kruis NMA, Nederend J, Boers T, van der Sommen F, Piek JMJ. Analysis of computer-aided diagnostics in the preoperative diagnosis of ovarian cancer: a systematic review. Insights Imaging 2023; 14:34. [PMID: 36790570 PMCID: PMC9931983 DOI: 10.1186/s13244-022-01345-x] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2022] [Accepted: 12/05/2022] [Indexed: 02/16/2023] Open
Abstract
OBJECTIVES Different noninvasive imaging methods to predict the chance of malignancy of ovarian tumors are available. However, their predictive value is limited by reviewer subjectivity. Therefore, more objective prediction models are needed. Computer-aided diagnostics (CAD) could be such a model, since it lacks the bias that comes with currently used models. In this study, we evaluated the available data on CAD in predicting the chance of malignancy of ovarian tumors. METHODS We searched for all published studies investigating the diagnostic accuracy of CAD based on ultrasound, CT and MRI in pre-surgical patients with an ovarian tumor, compared to reference standards. RESULTS In thirty-one included studies, features extracted from three different imaging techniques were used in different mathematical models. All studies assessed CAD based on machine learning applied to ultrasound, CT and MRI images. Per imaging method (ultrasound, CT and MRI, respectively), sensitivities ranged from 40.3-100%, 84.6-100% and 66.7-100%, and specificities ranged from 76.3-100%, 69-100% and 77.8-100%. Results could not be pooled due to broad heterogeneity. Although the majority of studies report high performance, they are at considerable risk of overfitting due to the absence of an independent test set. CONCLUSION Based on this literature review, CAD for ultrasound, CT and MRI seems promising to aid physicians in assessing ovarian tumors through its objective and potentially cost-effective character. However, performance should be evaluated per imaging technique. Prospective and larger datasets with external validation are desired to make the results generalizable.
Collapse
Affiliation(s)
- Anna H. Koch
- Department of Gynaecology and Obstetrics and Catharina Cancer Institute, Catharina Hospital, 5623 EJ Eindhoven, Noord-Brabant, The Netherlands
| | - Lara S. Jeelof
- Department of Gynaecology and Obstetrics and Catharina Cancer Institute, Catharina Hospital, 5623 EJ Eindhoven, Noord-Brabant, The Netherlands
| | - Caroline L. P. Muntinga
- Department of Gynaecology and Obstetrics and Catharina Cancer Institute, Catharina Hospital, 5623 EJ Eindhoven, Noord-Brabant, The Netherlands
| | - T. A. Gootzen
- Department of Gynaecology and Obstetrics and Catharina Cancer Institute, Catharina Hospital, 5623 EJ Eindhoven, Noord-Brabant, The Netherlands
| | - Nienke M. A. van de Kruis
- Department of Gynaecology and Obstetrics and Catharina Cancer Institute, Catharina Hospital, 5623 EJ Eindhoven, Noord-Brabant, The Netherlands
| | - Joost Nederend
- Department of Radiology, Catharina Hospital, 5623 EJ Eindhoven, Noord-Brabant, The Netherlands
| | - Tim Boers
- Department of Electrical Engineering, VCA Group, University of Technology Eindhoven, 5600 MB Eindhoven, Noord-Brabant, The Netherlands
| | - Fons van der Sommen
- Department of Electrical Engineering, VCA Group, University of Technology Eindhoven, 5600 MB Eindhoven, Noord-Brabant, The Netherlands
| | - Jurgen M. J. Piek
- Department of Gynaecology and Obstetrics and Catharina Cancer Institute, Catharina Hospital, 5623 EJ Eindhoven, Noord-Brabant, The Netherlands
| |
Collapse
|
22
|
A Deep Learning Fusion Approach to Diagnosis the Polycystic Ovary Syndrome (PCOS). APPLIED COMPUTATIONAL INTELLIGENCE AND SOFT COMPUTING 2023. [DOI: 10.1155/2023/9686697] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/16/2023] Open
Abstract
One of the leading causes of female infertility is PCOS, a hormonal disorder affecting women of childbearing age. Common symptoms of PCOS include increased acne, irregular periods, increased body hair, and overweight. Early diagnosis of PCOS is essential to manage the symptoms and reduce the associated health risks. Diagnosis is based on the Rotterdam criteria, including a high level of androgen hormones, ovulation failure, and polycystic ovaries on the ultrasound image (PCOM). At present, doctors and radiologists perform PCOM detection manually on ovary ultrasound by counting the number of follicles and determining their volume in the ovaries, which is one of the challenging PCOS diagnostic criteria. Moreover, physicians require further tests and checks for biochemical/clinical signs, in addition to the patient's symptoms, to reach a PCOS diagnosis, and clinicians do not use a single diagnostic test or specific method to examine patients. This paper introduces a data set that includes ovary ultrasound images together with clinical data for patients classified as PCOS and non-PCOS. We first propose a deep learning model that diagnoses PCOM from the ultrasound image, achieving 84.81% accuracy with the Inception model. We then propose a fusion model that combines the ultrasound image with clinical data to diagnose whether a patient has PCOS. The best model achieved 82.46% accuracy by extracting image features with the MobileNet architecture and combining them with clinical features.
Collapse
|
23
|
Evaluating the Risk of Inguinal Lymph Node Metastases before Surgery Using the Morphonode Predictive Model: A Prospective Diagnostic Study in Vulvar Cancer Patients. Cancers (Basel) 2023; 15:cancers15041121. [PMID: 36831462 PMCID: PMC9953890 DOI: 10.3390/cancers15041121] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2023] [Revised: 02/03/2023] [Accepted: 02/05/2023] [Indexed: 02/12/2023] Open
Abstract
Ultrasound examination is an accurate method for the preoperative evaluation of the inguinofemoral lymph nodes when performed by experienced operators. The purpose of the study was to build a robust, multi-modular model based on machine learning to discriminate between metastatic and non-metastatic inguinal lymph nodes in patients with vulvar cancer. One hundred and twenty-seven women were selected at our center from March 2017 to April 2020, and 237 inguinal regions were analyzed (75 were metastatic and 162 non-metastatic at histology). Ultrasound was performed before surgery by experienced examiners. Ultrasound features were defined according to previous studies and collected prospectively. Fourteen informative features were used to train and test the machine to obtain a diagnostic model (Morphonode Predictive Model). The following data classifiers were integrated: (I) random forest classifiers (RFC), (II) binomial regression model (RBM), (III) decision tree (DT), and (IV) similarity profiling (SP). RFC predicted metastatic/non-metastatic lymph nodes with an accuracy of 93.3% and a negative predictive value of 97.1%. DT identified four specific signatures correlated with the risk of metastases, with point risks of 100%, 81%, 16% and 4%, respectively. The Morphonode Predictive Model could be easily integrated into the clinical routine for the preoperative stratification of vulvar cancer patients.
Collapse
|
24
|
Improving the Segmentation Accuracy of Ovarian-Tumor Ultrasound Images Using Image Inpainting. Bioengineering (Basel) 2023; 10:bioengineering10020184. [PMID: 36829679 PMCID: PMC9952248 DOI: 10.3390/bioengineering10020184] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Revised: 01/05/2023] [Accepted: 01/28/2023] [Indexed: 02/04/2023] Open
Abstract
Diagnostic results can be radically influenced by the quality of 2D ovarian-tumor ultrasound images. However, clinically processed 2D ovarian-tumor ultrasound images contain many artificially recognized symbols, such as fingers, crosses, dashed lines, and letters, which assist artificial intelligence (AI) in image recognition. These symbols are widely distributed within the lesion's boundary, which can interfere with the networks' feature extraction and thus decrease the accuracy of lesion classification and segmentation. Image inpainting techniques are used to eliminate noise and objects from images. To solve this problem, we examined the MMOTU dataset and built a 2D ovarian-tumor ultrasound image inpainting dataset by finely annotating the various symbols in the images. A novel framework called mask-guided generative adversarial network (MGGAN) is presented in this paper to remove various symbols from 2D ovarian-tumor ultrasound images. The MGGAN performs to a high standard in corrupted regions by using an attention mechanism in the generator to pay more attention to valid information and ignore symbol information, making lesion boundaries more realistic. Moreover, fast Fourier convolutions (FFCs) and residual networks are used to increase the global field of perception; thus, our model can be applied to high-resolution ultrasound images. The greatest benefit of this algorithm is that it achieves pixel-level inpainting of distorted regions without clean images. Compared with other models, our model achieved better results with only one stage in terms of objective and subjective evaluations. Our model obtained the best results at 256 × 256 and 512 × 512 resolutions. At a resolution of 256 × 256, it achieved 0.9246 for SSIM, 22.66 for FID, and 0.07806 for LPIPS. At a resolution of 512 × 512, it achieved 0.9208 for SSIM, 25.52 for FID, and 0.08300 for LPIPS.
Our method can considerably improve the accuracy of computerized ovarian tumor diagnosis: segmentation accuracy improved from 71.51% to 76.06% for the Unet model and from 61.13% to 66.65% for the PSPnet model in clean images.
Collapse
|
25
|
Dicle O. Artificial intelligence in diagnostic ultrasonography. Diagn Interv Radiol 2023; 29:40-45. [PMID: 36959754 PMCID: PMC10679601 DOI: 10.4274/dir.2022.211260] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2022] [Accepted: 03/28/2022] [Indexed: 01/15/2023]
Abstract
Artificial intelligence (AI) continues to change paradigms in the field of medicine with new applications that extend into daily practice. The field of ultrasonography, which has been developing since the 1950s and remains one of the most powerful diagnostic tools, is also the subject of AI studies, despite its unique problems. It is predicted that many operations, such as appropriate diagnostic tool selection, use of the most relevant parameters, improvement of low-quality images, automatic lesion detection and diagnosis from the image, and classification of pathologies, will be performed using AI tools in the near future. Especially with the use of convolutional neural networks, successful results can be obtained for lesion detection, segmentation, and classification from images. In this review, relevant developments are summarized based on the literature, and examples of the tools used in the field are presented.
Collapse
Affiliation(s)
- Oğuz Dicle
- Department of Radiology, Dokuz Eylül University Faculty of Medicine, İzmir, Turkey
| |
Collapse
|
26
|
Raimondo D, Raffone A, Aru AC, Giorgi M, Giaquinto I, Spagnolo E, Travaglino A, Galatolo FA, Cimino MGCA, Lenzi J, Centini G, Lazzeri L, Mollo A, Seracchioli R, Casadio P. Application of Deep Learning Model in the Sonographic Diagnosis of Uterine Adenomyosis. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2023; 20:ijerph20031724. [PMID: 36767092 PMCID: PMC9914280 DOI: 10.3390/ijerph20031724] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 01/14/2023] [Accepted: 01/16/2023] [Indexed: 06/01/2023]
Abstract
BACKGROUND This study aimed to evaluate the diagnostic performance of a Deep Learning (DL) model for the detection of adenomyosis on uterine ultrasonographic images and compare it to that of trainees with intermediate ultrasound skills. METHODS A prospective observational study was conducted between 1 and 30 April 2022. The transvaginal ultrasound (TVUS) diagnosis of adenomyosis was investigated by an experienced sonographer in 100 fertile-age patients. Videoclips of the uterine corpus were recorded and sequential ultrasound images were extracted. Intermediate ultrasound-skilled trainees and the DL model were asked to make a diagnosis by reviewing the uterine images. We evaluated and compared the accuracy, sensitivity, positive predictive value, F1-score, specificity and negative predictive value of the DL model and the trainees for adenomyosis diagnosis. RESULTS The accuracy of the DL model and the intermediate ultrasound-skilled trainees for the diagnosis of adenomyosis was 0.51 (95% CI, 0.48-0.54) and 0.70 (95% CI, 0.60-0.79), respectively. The sensitivity, specificity and F1-score of the DL model were 0.43 (95% CI, 0.38-0.48), 0.82 (95% CI, 0.79-0.85) and 0.46 (95% CI, 0.42-0.50), respectively, whereas the intermediate ultrasound-skilled trainees had a sensitivity of 0.72 (95% CI, 0.52-0.86), a specificity of 0.69 (95% CI, 0.58-0.79) and an F1-score of 0.55 (95% CI, 0.43-0.66). CONCLUSIONS In this preliminary study, the DL model showed lower accuracy but higher specificity in diagnosing adenomyosis on ultrasonographic images compared to intermediate-skilled trainees.
Collapse
Affiliation(s)
- Diego Raimondo
- Division of Gynecology and Human Reproduction Physiopathology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40126 Bologna, Italy
| | - Antonio Raffone
- Division of Gynecology and Human Reproduction Physiopathology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40126 Bologna, Italy
- Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40126 Bologna, Italy
| | - Anna Chiara Aru
- Division of Gynecology and Human Reproduction Physiopathology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40126 Bologna, Italy
- Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40126 Bologna, Italy
| | - Matteo Giorgi
- Department of Molecular and Developmental Medicine, Obstetrics and Gynecological Clinic, University of Siena, 53100 Siena, Italy
| | - Ilaria Giaquinto
- Department of Obstetrics and Gynecology, Morgagni–Pierantoni Hospital, 47100 Forlì, Italy
| | - Emanuela Spagnolo
- Department of Obstetrics and Gynecology, Hospital Universitario La Paz, Paseo de la Castellana, 28046 Madrid, Spain
| | - Antonio Travaglino
- Pathology Unit, Department of Woman and Child’s Health and Public Health Sciences, Fondazione Policlinico Universitario Agostino Gemelli IRCCS, 00168 Rome, Italy
- Pathology Unit, Department of Advanced Biomedical Sciences, School of Medicine, University of Naples Federico II, 80138 Naples, Italy
| | | | | | - Jacopo Lenzi
- Department of Biomedical and Neuromotor Sciences, University of Bologna, 40126 Bologna, Italy
| | - Gabriele Centini
- Department of Molecular and Developmental Medicine, Obstetrics and Gynecological Clinic, University of Siena, 53100 Siena, Italy
| | - Lucia Lazzeri
- Department of Molecular and Developmental Medicine, Obstetrics and Gynecological Clinic, University of Siena, 53100 Siena, Italy
| | - Antonio Mollo
- Gynecology and Obstetrics Unit, Department of Medicine, Surgery and Dentistry “Schola Medica Salernitana”, University of Salerno, 84084 Baronissi, Italy
| | - Renato Seracchioli
- Division of Gynecology and Human Reproduction Physiopathology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40126 Bologna, Italy
- Department of Medical and Surgical Sciences (DIMEC), University of Bologna, 40126 Bologna, Italy
| | - Paolo Casadio
- Division of Gynecology and Human Reproduction Physiopathology, IRCCS Azienda Ospedaliero-Universitaria di Bologna, 40126 Bologna, Italy
| |
Collapse
|
27
|
Ma L, Huang L, Chen Y, Zhang L, Nie D, He W, Qi X. AI diagnostic performance based on multiple imaging modalities for ovarian tumor: A systematic review and meta-analysis. Front Oncol 2023; 13:1133491. [PMID: 37152032 PMCID: PMC10160474 DOI: 10.3389/fonc.2023.1133491] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/29/2022] [Accepted: 01/30/2023] [Indexed: 05/09/2023] Open
Abstract
Background In recent years, AI has been applied to disease diagnosis in much medical and engineering research. We aimed to explore the diagnostic performance of models based on different imaging modalities for ovarian cancer. Methods PubMed, EMBASE, Web of Science, and the Wanfang Database were searched. The search covered all published Chinese- and English-language literature on AI diagnosis of benign and malignant ovarian tumors. The literature was screened and data extracted according to inclusion and exclusion criteria. QUADAS-2 was used to evaluate the quality of the included literature, STATA 17.0 was used for statistical analysis, and forest plots and funnel plots were drawn to visualize the study results. Results A total of 11 studies were included: 3 modeled on ultrasound, 6 on MRI, and 2 on CT. The pooled AUROCs of studies based on ultrasound, MRI and CT were 0.94 (95% CI 0.88-1.00), 0.82 (95% CI 0.71-0.93) and 0.82 (95% CI 0.78-0.86), respectively. The I2 values were 99.92%, 99.91% and 92.64% for ultrasound, MRI and CT, respectively. Funnel plots suggested no publication bias. Conclusion Models based on ultrasound had the best performance in the diagnosis of ovarian cancer.
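Pooled estimates like the AUROCs above are commonly built by inverse-variance weighting of per-study values; a toy fixed-effect sketch on the logit scale follows. This is a deliberate simplification of what meta-analysis software actually does, and every number below is made up for illustration:

```python
import math

def pool_logit(props, ns):
    """Fixed-effect inverse-variance pooling of proportions on the logit
    scale. props: per-study proportions; ns: per-study sample sizes."""
    num = den = 0.0
    for p, n in zip(props, ns):
        var = 1.0 / (n * p * (1.0 - p))      # approx. variance of logit(p)
        w = 1.0 / var                        # weight = inverse variance
        num += w * math.log(p / (1.0 - p))
        den += w
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform to [0, 1]

# Three hypothetical studies: larger studies pull the pooled value harder.
pooled = pool_logit([0.88, 0.92, 0.85], [120, 200, 90])
```

With the extreme I2 values reported here (over 99% for ultrasound and MRI), a fixed-effect weighting like this would be inappropriate in practice, which is why the review also declined to pool across modalities.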
Collapse
Affiliation(s)
- Lin Ma
- Department of Obstetrics and Gynecology, Chengdu First People's Hospital, Chengdu, China
| | - Liqiong Huang
- Department of Ultrasound, Chengdu First People's Hospital, Chengdu, China
| | - Yan Chen
- Department of Obstetrics and Gynecology, Chengdu First People's Hospital, Chengdu, China
| | - Lei Zhang
- Department of Obstetrics and Gynecology, Chengdu First People's Hospital, Chengdu, China
| | - Dunli Nie
- Department of Obstetrics and Gynecology, Chengdu First People's Hospital, Chengdu, China
| | - Wenjing He
- Big Data Research Center, University of Electronic Science and Technology of China, Chengdu, China
| | - Xiaoxue Qi
- Department of Obstetrics and Gynecology, Chengdu First People's Hospital, Chengdu, China
- *Correspondence: Xiaoxue Qi,
| |
Collapse
|
28
|
Villamanca JJ, Hermogino LJ, Ong KD, Paguia B, Abanilla L, Lim A, Angeles LM, Espiritu B, Isais M, Tomas RC, Albano PM. Predicting the Likelihood of Colorectal Cancer with Artificial Intelligence Tools Using Fourier Transform Infrared Signals Obtained from Tumor Samples. APPLIED SPECTROSCOPY 2022; 76:1412-1428. [PMID: 35821580 DOI: 10.1177/00037028221116083] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
The early and accurate detection of colorectal cancer (CRC) significantly affects its prognosis and clinical management. However, current standard diagnostic procedures for CRC often lack sensitivity and specificity, since most rely on visual examination. Hence, there is a need to develop more accurate methods for its diagnosis. Support vector machine (SVM) and feedforward neural network (FNN) models were designed using the Fourier transform infrared (FT-IR) spectral data of several colorectal tissues that were unanimously identified as either benign or malignant by different unrelated pathologists. The set of samples in which the pathologists had discordant readings was then analyzed using the AI models described above. Between the SVM and FNN models, the FNN model outperformed the SVM model based on their prediction confidence scores. Using the spectral data of the concordant samples as the training set, the FNN was able to predict the histologically diagnosed malignant tissues (n = 118) at 59.9-99.9% confidence (average = 93.5%). Of the 118 samples, 84 (71.18%) were classified with an above-average confidence score, 34 (28.81%) were classified below the average confidence score, and none were misclassified. Moreover, it correctly identified the histologically confirmed benign samples (n = 83) at 51.5-99.7% confidence (average = 91.64%). Of the 83 samples, 60 (72.29%) were classified with an above-average confidence score, 22 (26.51%) were classified below the average confidence score, and only 1 sample (1.20%) was misclassified. The study provides additional proof of the ability of attenuated total reflection (ATR) FT-IR spectroscopy, enhanced by AI tools, to predict the likelihood of CRC without depending on morphological changes in tissues.
Collapse
Affiliation(s)
- John Jerald Villamanca
- Department of Biological Sciences, College of Science, University of Santo Tomas, Manila, Philippines
| | - Lemuel John Hermogino
- Department of Biological Sciences, College of Science, University of Santo Tomas, Manila, Philippines
| | - Katherine Denise Ong
- Department of Biological Sciences, College of Science, University of Santo Tomas, Manila, Philippines
| | - Brian Paguia
- Department of Biological Sciences, College of Science, University of Santo Tomas, Manila, Philippines
| | - Lorenzo Abanilla
- Department of Pathology, Divine Word Hospital, Tacloban City, Philippines
| | - Antonio Lim
- Department of Pathology, Divine Word Hospital, Tacloban City, Philippines
| | - Lara Mae Angeles
- Department of Pathology, University of Santo Tomas Hospital, Manila, Philippines
| | - Bernadette Espiritu
- Department of Pathology, Bulacan Medical Center, Malolos City, Philippines
| | - Maura Isais
- Department of Pathology, Bulacan Medical Center, Malolos City, Philippines
- The Graduate School, University of Santo Tomas, Manila, Philippines
| | - Rock Christian Tomas
- Department of Electrical Engineering, University of the Philippines Los Baños, Los Baños, Philippines
| | - Pia Marie Albano
- Department of Biological Sciences, College of Science, University of Santo Tomas, Manila, Philippines
- Department of Pathology, Divine Word Hospital, Tacloban City, Philippines
- Research Center for the Natural and Applied Sciences, University of Santo Tomas, Manila, Philippines
| |
Collapse
|
29
|
Hsu ST, Su YJ, Hung CH, Chen MJ, Lu CH, Kuo CE. Automatic ovarian tumors recognition system based on ensemble convolutional neural network with ultrasound imaging. BMC Med Inform Decis Mak 2022; 22:298. [PMID: 36397100 PMCID: PMC9673368 DOI: 10.1186/s12911-022-02047-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2022] [Accepted: 11/14/2022] [Indexed: 11/18/2022] Open
Abstract
Background Upon the discovery of ovarian cysts, obstetricians, gynecologists, and ultrasound examiners must address the common clinical challenge of distinguishing between benign and malignant ovarian tumors. Numerous types of ovarian tumors exist, many of which exhibit similar characteristics that increase the ambiguity in clinical diagnosis. Using deep learning technology, we aimed to develop a method that rapidly and accurately assists the different diagnosis of ovarian tumors in ultrasound images. Methods Based on deep learning method, we used ten well-known convolutional neural network models (e.g., Alexnet, GoogleNet, and ResNet) for training of transfer learning. To ensure method stability and robustness, we repeated the random sampling of the training and validation data ten times. The mean of the ten test results was set as the final assessment data. After the training process was completed, the three models with the highest ratio of calculation accuracy to time required for classification were used for ensemble learning pertaining. Finally, the interpretation results of the ensemble classifier were used as the final results. We also applied ensemble gradient-weighted class activation mapping (Grad-CAM) technology to visualize the decision-making results of the models. Results The highest mean accuracy, mean sensitivity, and mean specificity of ten single CNN models were 90.51 ± 4.36%, 89.77 ± 4.16%, and 92.00 ± 5.95%, respectively. The mean accuracy, mean sensitivity, and mean specificity of the ensemble classifier method were 92.15 ± 2.84%, 91.37 ± 3.60%, and 92.92 ± 4.00%, respectively. The performance of the ensemble classifier is better than that of a single classifier in three evaluation metrics. Moreover, the standard deviation is also better which means the ensemble classifier is more stable and robust. 
Conclusion From the comprehensive perspective of data quantity, data diversity, robustness of validation strategy, and overall accuracy, the proposed method outperformed the methods used in previous studies. In future studies, we will continue to increase the number of authenticated images and apply our proposed method in clinical settings to increase its robustness and reliability.
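The ensemble step described above can be sketched as a soft-voting combiner: each of the three selected CNNs outputs per-class probabilities, those probabilities are averaged, and the argmax is taken. A minimal illustration in plain Python; the model outputs and two-class layout below are hypothetical stand-ins, not the study's actual predictions.

```python
def soft_vote(prob_lists):
    """Average per-class probabilities across models; return (argmax, means)."""
    n_models = len(prob_lists)
    n_classes = len(prob_lists[0])
    mean = [sum(p[c] for p in prob_lists) / n_models for c in range(n_classes)]
    best = max(range(n_classes), key=lambda c: mean[c])
    return best, mean

# Hypothetical softmax outputs of three CNNs for one ultrasound image,
# over classes [benign, malignant]:
model_probs = [[0.70, 0.30], [0.40, 0.60], [0.65, 0.35]]
label, mean_probs = soft_vote(model_probs)
```

Soft voting lets a model that is confidently wrong be outvoted by two models that are moderately right, which is one reason the ensemble's variance across resamplings can be lower than any single model's.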
30
Xu HL, Gong TT, Liu FH, Chen HY, Xiao Q, Hou Y, Huang Y, Sun HZ, Shi Y, Gao S, Lou Y, Chang Q, Zhao YH, Gao QL, Wu QJ. Artificial intelligence performance in image-based ovarian cancer identification: A systematic review and meta-analysis. EClinicalMedicine 2022; 53:101662. PMID: 36147628; PMCID: PMC9486055; DOI: 10.1016/j.eclinm.2022.101662.
Abstract
BACKGROUND Accurate identification of ovarian cancer (OC) is of paramount importance to clinical treatment success. Artificial intelligence (AI) is a potentially reliable assistant for medical image recognition. We systematically reviewed, for the first time, articles on the diagnostic performance of AI in OC based on medical imaging. METHODS The Medline, Embase, IEEE, PubMed, Web of Science, and Cochrane Library databases were searched for related studies published until August 1, 2022. Inclusion criteria were studies that developed or used AI algorithms in the diagnosis of OC from medical images. Binary diagnostic accuracy data were extracted to derive the outcomes of interest: sensitivity (SE), specificity (SP), and area under the curve (AUC). The study was registered with PROSPERO (CRD42022324611). FINDINGS Thirty-four eligible studies were identified, of which twenty-eight were included in the meta-analysis, with a pooled SE of 88% (95% CI: 85-90%), SP of 85% (82-88%), and AUC of 0.93 (0.91-0.95). Analysis by algorithm type revealed a pooled SE of 89% (85-92%) and SP of 88% (82-92%) for machine learning, and a pooled SE of 88% (84-91%) and SP of 84% (80-87%) for deep learning. Acceptable diagnostic performance was demonstrated in subgroup analyses stratified by imaging modality (ultrasound, magnetic resonance imaging, or computed tomography), sample size (≤300 or >300), AI algorithms versus clinicians, year of publication (before or after 2020), geographical distribution (Asia or non-Asia), and risk-of-bias level (≥3 domains at low risk or <3 domains at low risk). INTERPRETATION AI algorithms exhibited favorable performance for the diagnosis of OC from medical imaging. More rigorous reporting standards that address the specific challenges of AI research could improve future studies. FUNDING This work was supported by the Natural Science Foundation of China (No. 82073647 to Q-JW and No. 82103914 to T-TG), the LiaoNing Revitalization Talents Program (No. XLYC1907102 to Q-JW), and the 345 Talent Project of Shengjing Hospital of China Medical University (No. M0268 to Q-JW and No. M0952 to T-TG).
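As a simplified illustration of the pooling behind summary estimates like those above: per-study sensitivities can be combined on the logit scale with inverse-variance weights. This is a sketch only; diagnostic-accuracy meta-analyses such as this one typically fit bivariate random-effects models, and the per-study counts below are hypothetical.

```python
import math

def pooled_sensitivity(studies):
    """Fixed-effect inverse-variance pooling of logit(sensitivity).
    studies: list of (true_positive, false_negative) counts."""
    num = den = 0.0
    for tp, fn in studies:
        logit = math.log(tp / fn)      # logit of sensitivity tp/(tp+fn)
        var = 1.0 / tp + 1.0 / fn      # approximate variance of that logit
        num += logit / var             # weight each study by 1/variance
        den += 1.0 / var
    pooled_logit = num / den
    return 1.0 / (1.0 + math.exp(-pooled_logit))  # back-transform

studies = [(88, 12), (45, 5), (170, 30)]  # hypothetical TP/FN counts
pooled = pooled_sensitivity(studies)
```

Larger studies get proportionally more weight because their logit variance is smaller, which is the core idea behind the pooled SE/SP figures reported above.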
Key Words
- AI, Artificial intelligence
- AUC, Area Under the Curve
- Artificial intelligence
- CT, Computed Tomography
- DL, Deep learning
- ML, Machine learning
- MRI, Magnetic Resonance Imaging
- Medical imaging
- Meta-analysis
- OC, Ovarian cancer
- Ovarian cancer
- SE, Sensitivity
- SP, Specificity
- US, Ultrasound
- XAI, Explainable artificial intelligence
Affiliation(s)
- He-Li Xu: Department of Clinical Epidemiology; Clinical Research Center; Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Ting-Ting Gong: Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Fang-Hua Liu: Department of Clinical Epidemiology; Clinical Research Center; Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Hong-Yu Chen: Department of Clinical Epidemiology; Clinical Research Center; Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Qian Xiao: Department of Clinical Epidemiology, Shengjing Hospital of China Medical University, Shenyang, China
- Yang Hou: Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Ying Huang: Department of Ultrasound, Shengjing Hospital of China Medical University, Shenyang, China
- Hong-Zan Sun: Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Yu Shi: Department of Radiology, Shengjing Hospital of China Medical University, Shenyang, China
- Song Gao: Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, Shenyang, China
- Yan Lou: Department of Intelligent Medicine, China Medical University, China
- Qing Chang: Department of Clinical Epidemiology; Clinical Research Center; Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Yu-Hong Zhao: Department of Clinical Epidemiology; Clinical Research Center; Key Laboratory of Precision Medical Research on Major Chronic Disease, Shengjing Hospital of China Medical University, Shenyang, China
- Qing-Lei Gao: National Clinical Research Center for Obstetrics and Gynecology, Cancer Biology Research Centre (Key Laboratory of the Ministry of Education) and Department of Gynecology and Obstetrics, Tongji Hospital, Wuhan, China
- Qi-Jun Wu (corresponding author): Department of Clinical Epidemiology; Clinical Research Center; Key Laboratory of Precision Medical Research on Major Chronic Disease; Department of Obstetrics and Gynecology, Shengjing Hospital of China Medical University, No. 36, San Hao Street, Shenyang, Liaoning 110004, PR China
31
Ovarian tumor diagnosis using deep convolutional neural networks and a denoising convolutional autoencoder. Sci Rep 2022; 12:17024. PMID: 36220853; PMCID: PMC9554195; DOI: 10.1038/s41598-022-20653-2.
Abstract
Discrimination of ovarian tumors is necessary for proper treatment. In this study, we developed a convolutional neural network model with a convolutional autoencoder (CNN-CAE) to classify ovarian tumors. A total of 1613 ultrasound images of ovaries with known pathological diagnoses were pre-processed and augmented for deep learning analysis. We designed a CNN-CAE model that removes unnecessary information (e.g., calipers and annotations) from ultrasound images and classifies ovaries into five classes. We used fivefold cross-validation to evaluate the performance of the CNN-CAE model in terms of accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUC). Gradient-weighted class activation mapping (Grad-CAM) was applied to qualitatively visualize and verify the CNN-CAE model's results. In classifying normal ovaries versus ovarian tumors, the CNN-CAE model achieved 97.2% accuracy, 97.2% sensitivity, and an AUC of 0.9936 with the DenseNet121 CNN architecture. In distinguishing malignant ovarian tumors, it achieved 90.12% accuracy, 86.67% sensitivity, and an AUC of 0.9406 with the DenseNet161 CNN architecture. Grad-CAM showed that the CNN-CAE model recognizes valid texture and morphology features in the ultrasound images and classifies ovarian tumors from these features. CNN-CAE is a feasible diagnostic tool capable of robustly classifying ovarian tumors by eliminating marks on ultrasound images, and it demonstrates important potential for clinical application.
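The accuracy, sensitivity, and specificity figures reported in studies like this one reduce to bookkeeping over a binary confusion matrix. A minimal sketch in plain Python; the labels and predictions below are hypothetical, not the study's data.

```python
def binary_metrics(y_true, y_pred):
    """Accuracy, sensitivity, specificity from binary labels/predictions."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    return {
        "accuracy": (tp + tn) / len(y_true),
        "sensitivity": tp / (tp + fn),   # recall on the positive (malignant) class
        "specificity": tn / (tn + fp),   # recall on the negative (benign) class
    }

# Hypothetical fold of ground-truth labels and model predictions:
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
y_pred = [1, 1, 0, 0, 0, 1, 0, 1]
m = binary_metrics(y_true, y_pred)
```

In a fivefold cross-validation like the one above, these three numbers would be computed once per held-out fold and then averaged.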
32
Shrestha P, Poudyal B, Yadollahi S, Wright DE, Gregory AV, Warner JD, Korfiatis P, Green IC, Rassier SL, Mariani A, Kim B, Laughlin-Tommaso SK, Kline TL. A systematic review on the use of artificial intelligence in gynecologic imaging – Background, state of the art, and future directions. Gynecol Oncol 2022; 166:596-605. PMID: 35914978; DOI: 10.1016/j.ygyno.2022.07.024.
Abstract
OBJECTIVE Machine learning, deep learning, and artificial intelligence (AI) are terms that have made their way into nearly all areas of medicine. In medical imaging, these methods have become the state of the art in nearly every area, from image reconstruction to image processing and automated analysis. In contrast to other areas, such as brain and breast imaging, the impact of AI has not been as strongly felt in gynecologic imaging. In this review article, we: (i) provide a background of clinically relevant AI concepts, (ii) describe methods and approaches in computer vision, and (iii) highlight prior work on image classification tasks utilizing AI approaches in gynecologic imaging. DATA SOURCES A comprehensive English-language search of several databases, from each database's inception to March 18, 2021, was conducted. The databases included Ovid MEDLINE(R) and Epub Ahead of Print, In-Process & Other Non-Indexed Citations, and Daily; Ovid EMBASE; Ovid Cochrane Central Register of Controlled Trials; Ovid Cochrane Database of Systematic Reviews; and ClinicalTrials.gov. METHODS OF STUDY SELECTION We performed an extensive literature review, with 61 articles curated by three reviewers and subsequently sorted by specialists using specific inclusion and exclusion criteria. TABULATION, INTEGRATION, AND RESULTS We summarize the literature grouped by each of the three most common gynecologic malignancies: endometrial, cervical, and ovarian. For each, a brief introduction encapsulating the AI methods, imaging modalities, and clinical parameters in the selected articles is presented. We conclude with a discussion of current developments, trends, and limitations, and suggest directions for future study. CONCLUSION This review article should prove useful for collaborative teams performing research studies targeted at the incorporation of radiological imaging and AI methods into gynecological clinical practice.
33
Lugtu EJ, Ramos DB, Agpalza AJ, Cabral EA, Carandang RP, Dee JE, Martinez A, Jose JE, Santillan A, Bangaoil R, Albano PM, Tomas RC. Artificial neural network in the discrimination of lung cancer based on infrared spectroscopy. PLoS One 2022; 17:e0268329. PMID: 35551276; PMCID: PMC9098097; DOI: 10.1371/journal.pone.0268329.
Abstract
Given the increasing prevalence of lung cancer worldwide, an auxiliary diagnostic method is needed alongside the microscopic examination of biopsy samples, which depends on the skills and experience of pathologists. This study therefore aimed to advance lung cancer diagnosis by developing five artificial neural network (NN) models that can discriminate malignant from benign samples based on infrared spectral data of lung tumors (n = 122; 56 malignant, 66 benign). The NNs were benchmarked against classical machine learning (CML) models. Stratified 10-fold cross-validation was performed to evaluate the NN models, and the performance metrics, namely area under the curve (AUC), accuracy (ACC), positive predictive value (PPV), negative predictive value (NPV), specificity rate (SR), and recall rate (RR), were averaged for comparison. All NNs outperformed the CML models, although the support vector machine performed comparably to the NNs. Among the NNs, the CNN performed best, with an AUC of 92.28% ± 7.36%, ACC of 98.45% ± 1.72%, PPV of 96.62% ± 2.30%, NPV of 90.50% ± 11.92%, SR of 96.01% ± 3.09%, and RR of 89.21% ± 12.93%. In conclusion, NNs can potentially be used as a computational tool in lung cancer diagnosis based on infrared spectroscopy of lung tissues.
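The stratified 10-fold cross-validation mentioned above splits each class separately so that every fold preserves the malignant/benign ratio. A plain-Python sketch; only the class sizes (56 malignant, 66 benign) mirror the study, everything else is illustrative.

```python
import random

def stratified_kfold(labels, k, seed=0):
    """Return k folds of sample indices, stratified by class label."""
    rng = random.Random(seed)
    folds = [[] for _ in range(k)]
    for cls in sorted(set(labels)):
        idx = [i for i, y in enumerate(labels) if y == cls]
        rng.shuffle(idx)
        for j, i in enumerate(idx):
            folds[j % k].append(i)   # deal shuffled indices round-robin per class
    return folds

labels = [1] * 56 + [0] * 66   # 1 = malignant, 0 = benign (study's class sizes)
folds = stratified_kfold(labels, k=10)
```

Each fold then serves once as the held-out test set while the other nine train the model, and the per-fold metrics are averaged as in the abstract.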
Affiliation(s)
- Eiron John Lugtu, Denise Bernadette Ramos, Alliah Jen Agpalza, Erika Antoinette Cabral, Rian Paolo Carandang, Jennica Elia Dee, Angelica Martinez, Julius Eleazar Jose: Department of Medical Technology, Faculty of Pharmacy, University of Santo Tomas, Manila, Philippines
- Abegail Santillan: Research Center for the Natural and Applied Sciences; The Graduate School, University of Santo Tomas, Manila, Philippines
- Ruth Bangaoil: Research Center for the Natural and Applied Sciences; The Graduate School; University of Santo Tomas Hospital, Manila, Philippines
- Pia Marie Albano: Research Center for the Natural and Applied Sciences; The Graduate School; Department of Biological Sciences, College of Science, University of Santo Tomas, Manila, Philippines
- Rock Christian Tomas: Department of Electrical Engineering, University of the Philippines Los Baños, Laguna, Philippines
34
Chen H, Yang BW, Qian L, Meng YS, Bai XH, Hong XW, He X, Jiang MJ, Yuan F, Du QW, Feng WW. Deep Learning Prediction of Ovarian Malignancy at US Compared with O-RADS and Expert Assessment. Radiology 2022; 304:106-113. PMID: 35412367; DOI: 10.1148/radiol.211367.
Abstract
Background Deep learning (DL) algorithms could improve the classification of ovarian tumors assessed with multimodal US. Purpose To develop DL algorithms for the automated classification of benign versus malignant ovarian tumors assessed with US and to compare algorithm performance to Ovarian-Adnexal Reporting and Data System (O-RADS) and subjective expert assessment for malignancy. Materials and Methods This retrospective study included consecutive women with ovarian tumors undergoing gray scale and color Doppler US from January 2019 to November 2019. Histopathologic analysis was the reference standard. The data set was divided into training (70%), validation (10%), and test (20%) sets. Algorithms modified from residual network (ResNet) with two fusion strategies (feature fusion [hereafter, DLfeature] or decision fusion [hereafter, DLdecision]) were developed. DL prediction of malignancy was compared with O-RADS risk categorization and expert assessment by area under the receiver operating characteristic curve (AUC) analysis in the test set. Results A total of 422 women (mean age, 46.4 years ± 14.8 [SD]) with 304 benign and 118 malignant tumors were included; there were 337 women in the training and validation data set and 85 women in the test data set. DLfeature had an AUC of 0.93 (95% CI: 0.85, 0.97) for classifying malignant from benign ovarian tumors, comparable with O-RADS (AUC, 0.92; 95% CI: 0.85, 0.97; P = .88) and expert assessment (AUC, 0.97; 95% CI: 0.91, 0.99; P = .07), and similar to DLdecision (AUC, 0.90; 95% CI: 0.82, 0.96; P = .29). DLdecision, DLfeature, O-RADS, and expert assessment achieved sensitivities of 92%, 92%, 92%, and 96%, respectively, and specificities of 80%, 85%, 89%, and 87%, respectively, for malignancy. 
Conclusion Deep learning algorithms developed by using multimodal US images may distinguish malignant from benign ovarian tumors with diagnostic performance comparable to expert subjective and Ovarian-Adnexal Reporting and Data System assessment. © RSNA, 2022 Online supplemental material is available for this article.
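The AUCs compared above have a direct probabilistic reading: AUC equals the probability that a randomly chosen malignant case receives a higher score than a randomly chosen benign one (the Mann-Whitney statistic). A minimal sketch; the scores below are hypothetical model outputs, not the study's data.

```python
def roc_auc(y_true, scores):
    """ROC AUC via the Mann-Whitney formulation (ties count as 0.5)."""
    pos = [s for y, s in zip(y_true, scores) if y == 1]
    neg = [s for y, s in zip(y_true, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical malignancy scores for 3 malignant (1) and 3 benign (0) cases:
y_true = [1, 1, 1, 0, 0, 0]
scores = [0.9, 0.8, 0.4, 0.5, 0.3, 0.2]
auc = roc_auc(y_true, scores)
```

Because AUC is threshold-free, it can compare a continuous deep learning score, an ordinal O-RADS category, and an expert rating on the same footing, as the study does.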
Affiliation(s)
- Hui Chen, Bo-Wen Yang, Le Qian, Yi-Shuang Meng, Xiang-Hui Bai, Xiao-Wei Hong, Xin He, Mei-Jiao Jiang, Fei Yuan, Qin-Wen Du, Wei-Wei Feng
- From the Department of Obstetrics and Gynecology (H.C., B.W.Y., L.Q., X.H., M.J.J., Q.W.D., W.W.F.) and Department of Pathology (F.Y.), Ruijin Hospital, Shanghai Jiaotong University School of Medicine, 197 Ruijin 2nd Road, Huangpu District, Shanghai 200025, China; and Philips Research Asia Shanghai, Shanghai, China (Y.S.M., X.H.B., X.W.H.)
35
Drukker L. Real-time identification of fetal anomalies on ultrasound using artificial intelligence: what's next? Ultrasound Obstet Gynecol 2022; 59:285-287. PMID: 35239221; DOI: 10.1002/uog.24869.
Affiliation(s)
- L Drukker
- Women's Ultrasound, Department of Obstetrics and Gynecology, Beilinson Medical Center, Sackler Faculty of Medicine, Tel-Aviv University, Tel Aviv, Israel
- Nuffield Department of Women's & Reproductive Health, University of Oxford, John Radcliffe Hospital, Oxford, UK
36
Ghi T, Conversano F, Ramirez Zegarra R, Pisani P, Dall'Asta A, Lanzone A, Lau W, Vimercati A, Iliescu DG, Mappa I, Rizzo G, Casciaro S. Novel artificial intelligence approach for automatic differentiation of fetal occiput anterior and non-occiput anterior positions during labor. Ultrasound Obstet Gynecol 2022; 59:93-99. PMID: 34309926; DOI: 10.1002/uog.23739.
Abstract
OBJECTIVES To describe a newly developed machine-learning (ML) algorithm for the automatic recognition of fetal head position using transperineal ultrasound (TPU) during the second stage of labor and to describe its performance in differentiating between occiput anterior (OA) and non-OA positions. METHODS This was a prospective cohort study including singleton term (> 37 weeks of gestation) pregnancies in the second stage of labor, with a non-anomalous fetus in cephalic presentation. Transabdominal ultrasound was performed to determine whether the fetal head position was OA or non-OA. For each case, one sonographic image of the fetal head was then acquired in an axial plane using TPU and saved for later offline analysis. Using the transabdominal sonographic diagnosis as the gold standard, an ML algorithm based on a pattern-recognition feed-forward neural network was trained on the TPU images to discriminate between OA and non-OA positions. In the training phase, the model tuned its parameters to approximate the training data (i.e. the training dataset) so that it would correctly identify the fetal head position, by exploiting geometric, morphological and intensity-based features of the images. In the testing phase, the algorithm was blinded to the occiput position as determined by transabdominal ultrasound. Using the test dataset, the ability of the ML algorithm to differentiate OA from non-OA fetal positions was assessed in terms of diagnostic accuracy. The F1-score and precision-recall area under the curve (PR-AUC) were calculated to assess the algorithm's performance. Cohen's kappa (κ) was calculated to evaluate the agreement between the algorithm and the gold standard. RESULTS Over a period of 24 months (February 2018 to January 2020), at 15 maternity hospitals affiliated to the International Study group on Labor ANd Delivery Sonography (ISLANDS), we enrolled 1219 women in the second stage of labor. On the basis of transabdominal ultrasound, they were classified as OA (n = 801 (65.7%)) or non-OA (n = 418 (34.3%)). From the entire cohort (OA and non-OA), approximately 70% (n = 824) of the patients were randomly assigned to the training dataset and the rest (n = 395) were used as the test dataset. The ML-based algorithm correctly classified the fetal occiput position in 90.4% (357/395) of the test dataset, including 224/246 cases with OA (91.1%) and 133/149 with non-OA (89.3%) fetal head position. Evaluation of the algorithm's performance gave an F1-score of 88.7% and a PR-AUC of 85.4%. The algorithm showed a balanced performance in the recognition of both OA and non-OA positions. The robustness of the algorithm was confirmed by high agreement with the gold standard (κ = 0.81; P < 0.0001). CONCLUSIONS This newly developed ML-based algorithm for the automatic assessment of fetal head position using TPU can accurately differentiate, in most cases, between OA and non-OA positions in the second stage of labor. This algorithm has the potential to support not only obstetricians but also midwives and accoucheurs in the clinical use of TPU to determine fetal occiput position in the labor ward. © 2021 International Society of Ultrasound in Obstetrics and Gynecology.
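The agreement and classification statistics reported above (Cohen's kappa, F1-score) follow directly from the binary confusion counts. A sketch in plain Python; the counts used in the example are hypothetical, not the study's data.

```python
def cohens_kappa(tp, fp, fn, tn):
    """Chance-corrected agreement between predictions and the gold standard."""
    n = tp + fp + fn + tn
    po = (tp + tn) / n                       # observed agreement
    # expected agreement by chance: P(both say pos) + P(both say neg)
    pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
    return (po - pe) / (1 - pe)

def f1_score(tp, fp, fn):
    """Harmonic mean of precision and recall."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

kappa = cohens_kappa(40, 10, 10, 40)   # hypothetical counts
f1 = f1_score(40, 10, 10)
```

Kappa discounts the agreement expected by chance alone, which is why it is a stricter summary than raw accuracy when one class (here, OA at 65.7%) dominates.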
Affiliation(s)
- T Ghi: Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- F Conversano: National Research Council, Institute of Clinical Physiology, Lecce, Italy
- R Ramirez Zegarra: Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy; Department of Obstetrics and Gynecology, St Joseph Krankenhaus, Berlin, Germany
- P Pisani: National Research Council, Institute of Clinical Physiology, Lecce, Italy
- A Dall'Asta: Department of Medicine and Surgery, Obstetrics and Gynecology Unit, University of Parma, Parma, Italy
- A Lanzone: Obstetrics and High-Risk Unit, Fondazione Policlinico A. Gemelli IRCCS, Rome, Italy
- W Lau: Department of Obstetrics and Gynecology, Kwong Wah Hospital, Kowloon, Hong Kong
- A Vimercati: Department of Obstetrics, Gynecology, Neonatology and Anesthesiology, University Hospital of Bari Consorziale Policlinico, Bari, Italy
- D G Iliescu: University Emergency County Hospital, Craiova, Romania; University of Medicine and Pharmacy, Craiova, Romania
- I Mappa: Division of Maternal and Fetal Medicine, Cristo Re Hospital, University of Rome Tor Vergata, Rome, Italy
- G Rizzo: Division of Maternal and Fetal Medicine, Cristo Re Hospital, University of Rome Tor Vergata, Rome, Italy; Department of Obstetrics and Gynecology, The First I.M. Sechenov Moscow State Medical University, Moscow, Russia
- S Casciaro: National Research Council, Institute of Clinical Physiology, Lecce, Italy
37
Weichert J, Welp A, Scharf JL, Dracopoulos C, Becker WH, Gembicki M. The Use of Artificial Intelligence in Automation in the Fields of Gynaecology and Obstetrics - an Assessment of the State of Play. Geburtshilfe Frauenheilkd 2021; 81:1203-1216. PMID: 34754270; PMCID: PMC8568505; DOI: 10.1055/a-1522-3029.
Abstract
The long-awaited progress in digitalisation is generating huge amounts of medical data every day, and manual analysis and targeted, patient-oriented evaluation of this data is becoming increasingly difficult or even infeasible. This state of affairs and the associated, increasingly complex requirements for individualised precision medicine underline the need for modern software solutions and algorithms across the entire healthcare system. The utilisation of state-of-the-art equipment and techniques in almost all areas of medicine over the past few years has now indeed enabled automation processes to enter - at least in part - into routine clinical practice. Such systems utilise a wide variety of artificial intelligence (AI) techniques, the majority of which have been developed to optimise medical image reconstruction, noise reduction, quality assurance, triage, segmentation, computer-aided detection and classification and, as an emerging field of research, radiogenomics. Tasks handled by AI are completed significantly faster and more precisely, as clearly demonstrated by the annual results of the ImageNet Large Scale Visual Recognition Challenge (ILSVRC), in which error rates have been well below those of humans since 2015. This review article will discuss the potential capabilities and currently available applications of AI in gynaecological-obstetric diagnostics. The article will focus, in particular, on automated techniques in prenatal sonographic diagnostics.
Affiliation(s)
- Jan Weichert: Klinik für Frauenheilkunde und Geburtshilfe, Bereich Pränatalmedizin und Spezielle Geburtshilfe, Universitätsklinikum Schleswig-Holstein, Campus Lübeck, Lübeck, Germany; Zentrum für Pränatalmedizin an der Elbe, Hamburg, Germany
- Amrei Welp, Jann Lennard Scharf, Christoph Dracopoulos, Michael Gembicki: Klinik für Frauenheilkunde und Geburtshilfe, Bereich Pränatalmedizin und Spezielle Geburtshilfe, Universitätsklinikum Schleswig-Holstein, Campus Lübeck, Lübeck, Germany
38
Qi L, Chen D, Li C, Li J, Wang J, Zhang C, Li X, Qiao G, Wu H, Zhang X, Ma W. Diagnosis of Ovarian Neoplasms Using Nomogram in Combination With Ultrasound Image-Based Radiomics Signature and Clinical Factors. Front Genet 2021; 12:753948. PMID: 34650603; PMCID: PMC8505695; DOI: 10.3389/fgene.2021.753948.
Abstract
Objectives: To establish and validate a nomogram integrating radiomics signatures from ultrasound and clinical factors to discriminate between benign, borderline, and malignant serous ovarian tumors. Materials and methods: In this study, a total of 279 pathology-confirmed serous ovarian tumors collected from 265 patients between March 2013 and December 2016 were used. The training cohort was generated by randomly selecting 70% of each of the three types (benign, borderline, and malignant) of tumors, while the remaining 30% formed the validation cohort. Radiomics features were extracted from transabdominal ultrasound scans of the ovarian tumors, and a radiomics score was calculated. The ability of radiomics to differentiate between the grades of ovarian tumors was tested by comparing benign vs borderline and malignant tumors (task 1) and borderline vs malignant tumors (task 2). These results were compared with the diagnostic performance of subjective assessment by junior and senior sonographers. Finally, a clinical-feature-alone model and a combined clinical-radiomics (CCR) model were built using predictive nomograms for the two tasks. Receiver operating characteristic (ROC) analysis, calibration curves, and decision curve analysis (DCA) were performed to evaluate model performance. Results: The US-based radiomics models performed satisfactorily in both tasks, showing especially higher accuracy in the second task by successfully discriminating borderline from malignant ovarian serous tumors compared with the evaluations by senior sonographers (task 1: AUC = 0.789 for seniors and 0.877 for the radiomics model; task 2: AUC = 0.612 for seniors and 0.839 for the radiomics model).
We showed that the CCR model, comprising CA125 level, lesion location, ascites, and radiomics signatures, performed the best (AUC = 0.937, 95%CI 0.905-0.969 in task 1, AUC = 0.924, 95%CI 0.876-0.971 in task 2) in the training as well as in the validation cohorts (AUC = 0.914, 95%CI 0.851-0.976 in task 1, AUC = 0.890, 95%CI 0.794-0.987 in task 2). The calibration curve and DCA analysis of the CCR model more accurately predicted the classification of the tumors than the clinical features alone. Conclusion: This study integrates novel radiomics signatures from ultrasound and clinical factors to create a nomogram to provide preoperative diagnostic information for differentiating between benign, borderline, and malignant ovarian serous tumors, thereby reducing unnecessary and risky biopsies and surgeries.
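The combined clinical-radiomics (CCR) approach described above amounts to feeding a radiomics signature together with a few clinical variables into a single classifier and judging it by ROC AUC on a held-out split. A minimal, hypothetical sketch of that workflow in Python on synthetic data (the feature layout, sample sizes, and logistic model here are illustrative stand-ins, not the authors' pipeline):

```python
# Illustrative CCR-style model: clinical factors + radiomics features
# pooled into one logistic classifier, evaluated by ROC AUC on a 30% hold-out.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

SEED = 42
# Synthetic stand-ins: imagine 3 "clinical" columns (e.g. CA125, location,
# ascites) plus a 5-feature "radiomics signature" per lesion.
X, y = make_classification(n_samples=279, n_features=8, n_informative=5,
                           random_state=SEED)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=SEED)  # 70/30 split as in the study

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(round(auc, 3))
```

In the study, the fitted combined model is then rendered as a nomogram so clinicians can read off a predicted probability for an individual patient without running code.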
Affiliation(s)
- Lisha Qi
- Department of Pathology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China; National Clinical Research Center for Cancer, Tianjin, China; Key Laboratory of Cancer Prevention and Therapy, Tianjin, China; Tianjin's Clinical Research Center for Cancer, Tianjin, China
- Dandan Chen
- Department of Pathology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China; National Clinical Research Center for Cancer, Tianjin, China; Key Laboratory of Cancer Prevention and Therapy, Tianjin, China; Tianjin's Clinical Research Center for Cancer, Tianjin, China
- Chunxiang Li
- National Clinical Research Center for Cancer, Tianjin, China; Key Laboratory of Cancer Prevention and Therapy, Tianjin, China; Tianjin's Clinical Research Center for Cancer, Tianjin, China; Department of Ultrasonographic Diagnosis and Therapy, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- Jinghan Li
- Department of Ultrasonographic Diagnosis and Therapy, Tianjin Ninghe Hospital, Tianjin, China
- Jingyi Wang
- Department of Pathology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China; National Clinical Research Center for Cancer, Tianjin, China; Key Laboratory of Cancer Prevention and Therapy, Tianjin, China; Tianjin's Clinical Research Center for Cancer, Tianjin, China
- Chao Zhang
- National Clinical Research Center for Cancer, Tianjin, China; Key Laboratory of Cancer Prevention and Therapy, Tianjin, China; Tianjin's Clinical Research Center for Cancer, Tianjin, China; Department of Bone and Soft Tissue Tumors, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- Xiaofeng Li
- National Clinical Research Center for Cancer, Tianjin, China; Key Laboratory of Cancer Prevention and Therapy, Tianjin, China; Tianjin's Clinical Research Center for Cancer, Tianjin, China; Department of Molecular Imaging and Nuclear Medicine, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- Ge Qiao
- Department of Pathology, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China; National Clinical Research Center for Cancer, Tianjin, China; Key Laboratory of Cancer Prevention and Therapy, Tianjin, China; Tianjin's Clinical Research Center for Cancer, Tianjin, China
- Haixiao Wu
- National Clinical Research Center for Cancer, Tianjin, China; Key Laboratory of Cancer Prevention and Therapy, Tianjin, China; Tianjin's Clinical Research Center for Cancer, Tianjin, China; Department of Bone and Soft Tissue Tumors, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
- Xiaofang Zhang
- Department of Clinical Laboratory, Tianjin Medical University General Hospital, Tianjin, China
- Wenjuan Ma
- National Clinical Research Center for Cancer, Tianjin, China; Key Laboratory of Cancer Prevention and Therapy, Tianjin, China; Tianjin's Clinical Research Center for Cancer, Tianjin, China; Department of Breast Imaging, Tianjin Medical University Cancer Institute and Hospital, Tianjin, China
39
Westerlund AM, Hawe JS, Heinig M, Schunkert H. Risk Prediction of Cardiovascular Events by Exploration of Molecular Data with Explainable Artificial Intelligence. Int J Mol Sci 2021; 22:10291. [PMID: 34638627 PMCID: PMC8508897 DOI: 10.3390/ijms221910291] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2021] [Revised: 09/17/2021] [Accepted: 09/18/2021] [Indexed: 12/11/2022] Open
Abstract
Cardiovascular diseases (CVD) annually take almost 18 million lives worldwide. Most lethal events occur months or years after the initial presentation. Indeed, many patients experience repeated complications or require multiple interventions (recurrent events). Apart from affecting the individual, this leads to high medical costs for society. Personalized treatment strategies aiming at prediction and prevention of recurrent events rely on early diagnosis and precise prognosis. Complementing the traditional environmental and clinical risk factors, multi-omics data provide a holistic view of the patient and disease progression, enabling studies to probe novel angles in risk stratification. Specifically, predictive molecular markers allow insights into regulatory networks, pathways, and mechanisms underlying disease. Moreover, artificial intelligence (AI) represents a powerful, yet adaptive, framework able to recognize complex patterns in large-scale clinical and molecular data with the potential to improve risk prediction. Here, we review the most recent advances in risk prediction of recurrent cardiovascular events, and discuss the value of molecular data and biomarkers for understanding patient risk in a systems biology context. Finally, we introduce explainable AI which may improve clinical decision systems by making predictions transparent to the medical practitioner.
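As an illustration of the kind of model-agnostic explainability this review points to, permutation importance measures how much a model's performance drops when a single feature is randomly shuffled; the features whose shuffling hurts performance most are the ones the prediction actually relies on, and reporting them makes the model's behaviour inspectable by a clinician. A hedged sketch on synthetic data (not the authors' code or cohort; the classifier choice is arbitrary):

```python
# Minimal permutation-importance sketch: rank which input features a
# fitted classifier depends on, as one building block of explainable AI.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance

# Synthetic stand-in for clinical/molecular predictors of an outcome.
X, y = make_classification(n_samples=400, n_features=6, n_informative=3,
                           random_state=1)
clf = GradientBoostingClassifier(random_state=1).fit(X, y)

# Shuffle each feature in turn and measure the drop in score:
# large mean drops mark features the model genuinely uses.
result = permutation_importance(clf, X, y, n_repeats=10, random_state=1)
ranked = np.argsort(result.importances_mean)[::-1]
print(ranked[:3])  # indices of the three most influential features
```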
Affiliation(s)
- Annie M. Westerlund
- Department of Cardiology, Deutsches Herzzentrum München, Technical University Munich, Lazarettstrasse 36, 80636 Munich, Germany; (A.M.W.); (J.S.H.)
- Institute of Computational Biology, HelmholtzZentrum München, Ingolstädter Landstrasse 1, 85764 Munich, Germany
- Johann S. Hawe
- Department of Cardiology, Deutsches Herzzentrum München, Technical University Munich, Lazarettstrasse 36, 80636 Munich, Germany; (A.M.W.); (J.S.H.)
- Matthias Heinig
- Institute of Computational Biology, HelmholtzZentrum München, Ingolstädter Landstrasse 1, 85764 Munich, Germany
- Department of Informatics, Technical University Munich, Boltzmannstrasse 3, 85748 Garching, Germany
- Heribert Schunkert
- Department of Cardiology, Deutsches Herzzentrum München, Technical University Munich, Lazarettstrasse 36, 80636 Munich, Germany; (A.M.W.); (J.S.H.)
- Deutsches Zentrum für Herz- und Kreislaufforschung (DZHK), Munich Heart Alliance, Biedersteiner Strasse 29, 80802 Munich, Germany
40
Komatsu M, Sakai A, Dozen A, Shozu K, Yasutomi S, Machino H, Asada K, Kaneko S, Hamamoto R. Towards Clinical Application of Artificial Intelligence in Ultrasound Imaging. Biomedicines 2021; 9:720. [PMID: 34201827 PMCID: PMC8301304 DOI: 10.3390/biomedicines9070720] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2021] [Revised: 06/13/2021] [Accepted: 06/18/2021] [Indexed: 12/12/2022] Open
Abstract
Artificial intelligence (AI) is being increasingly adopted in medical research and applications. Medical AI devices have continuously been approved by the Food and Drug Administration in the United States and by the responsible institutions of other countries. Ultrasound (US) imaging is commonly used in an extensive range of medical fields. However, AI-based US imaging analysis and its clinical implementation have not progressed as steadily as in other medical imaging modalities. Characteristic issues of US imaging, arising from its manual operation and from acoustic shadows, make image quality control difficult. In this review, we introduce the global trends of medical AI research in US imaging from both clinical and basic perspectives. We also discuss US image preprocessing, ingenious algorithms suitable for US imaging analysis, AI explainability for obtaining informed consent, the approval process for medical AI devices, and future perspectives on the clinical application of AI-based US diagnostic support technologies.
Affiliation(s)
- Masaaki Komatsu
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Akira Sakai
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan; (A.S.); (S.Y.)
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
- Ai Dozen
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Kanto Shozu
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Suguru Yasutomi
- Artificial Intelligence Laboratory, Research Unit, Fujitsu Research, Fujitsu Ltd., 4-1-1 Kamikodanaka, Nakahara-ku, Kawasaki, Kanagawa 211-8588, Japan; (A.S.); (S.Y.)
- RIKEN AIP—Fujitsu Collaboration Center, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan
- Hidenori Machino
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Ken Asada
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Syuzo Kaneko
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Ryuji Hamamoto
- Cancer Translational Research Team, RIKEN Center for Advanced Intelligence Project, 1-4-1 Nihonbashi, Chuo-ku, Tokyo 103-0027, Japan; (H.M.); (K.A.); (S.K.)
- Division of Medical AI Research and Development, National Cancer Center Research Institute, 5-1-1 Tsukiji, Chuo-ku, Tokyo 104-0045, Japan; (A.D.); (K.S.)
- Biomedical Science and Engineering Track, Graduate School of Medical and Dental Sciences, Tokyo Medical and Dental University, 1-5-45 Yushima, Bunkyo-ku, Tokyo 113-8510, Japan
41
Guerriero S, Pascual M, Ajossa S, Neri M, Musa E, Graupera B, Rodriguez I, Alcazar JL. Artificial intelligence (AI) in the detection of rectosigmoid deep endometriosis. Eur J Obstet Gynecol Reprod Biol 2021; 261:29-33. [PMID: 33873085 DOI: 10.1016/j.ejogrb.2021.04.012] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/06/2021] [Revised: 04/06/2021] [Accepted: 04/11/2021] [Indexed: 12/12/2022]
Abstract
OBJECTIVES The aim of this study was to compare the accuracy of seven classical machine learning (ML) models trained with ultrasound (US) soft markers in raising suspicion of endometriotic bowel involvement. MATERIALS AND METHODS Input data for the models were retrieved from a database of a previously published study on bowel endometriosis performed on 333 patients. The following models were tested: k-nearest neighbors (k-NN), Naive Bayes, neural networks (NNET-neuralnet), support vector machine (SVM), decision tree, random forest, and logistic regression. The complete dataset was randomly split into a training dataset (67% of the original cases) and a test dataset (33%). All models were trained on the training dataset, and their predictions were evaluated on the test dataset. The best model was chosen based on its accuracy on the test dataset. The inputs used in all models were: age; presence of US signs of uterine adenomyosis; presence of an endometrioma; adhesions of the ovary to the uterus; presence of "kissing ovaries"; and absence of the sliding sign. All models were trained using the caret package in R with ten repeats of 10-fold cross-validation. Accuracy, sensitivity, specificity, and positive (PPV) and negative (NPV) predictive values were calculated using a 50% threshold. Intestinal involvement was considered present in all test-dataset cases with an estimated probability greater than 0.5. RESULTS In our previous study, from which the inputs were retrieved, 106 women had a final expert US diagnosis of rectosigmoid endometriosis. In terms of diagnostic accuracy, the best model was the neural network (accuracy, 0.73; sensitivity, 0.72; specificity, 0.73; PPV, 0.52; NPV, 0.86), but without significant differences from the others.
CONCLUSIONS The accuracy of ultrasound soft markers in raising suspicion of rectosigmoid endometriosis using artificial intelligence (AI) models was similar to that of the logistic regression model.
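The workflow described above (several classical classifiers on a handful of ultrasound soft markers, repeated 10-fold cross-validation, model selection by accuracy) can be sketched analogously in Python with scikit-learn; the study itself used R's caret. The data below are synthetic stand-ins for the six inputs, and the repeat count is reduced from the study's ten to three to keep the example quick:

```python
# Analogous model-comparison sketch (the study used R's caret package):
# train several classical classifiers and pick the one with the best
# mean accuracy under repeated stratified 10-fold cross-validation.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import RepeatedStratifiedKFold, cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for the six inputs (age plus five binary US markers).
X, y = make_classification(n_samples=333, n_features=6, n_informative=4,
                           random_state=0)
cv = RepeatedStratifiedKFold(n_splits=10, n_repeats=3, random_state=0)

models = {
    "kNN": KNeighborsClassifier(),
    "NaiveBayes": GaussianNB(),
    "NeuralNet": MLPClassifier(hidden_layer_sizes=(10,), max_iter=1000,
                               random_state=0),
    "SVM": SVC(),
    "DecisionTree": DecisionTreeClassifier(random_state=0),
    "RandomForest": RandomForestClassifier(random_state=0),
    "Logistic": LogisticRegression(max_iter=1000),
}
scores = {name: cross_val_score(m, X, y, cv=cv, scoring="accuracy").mean()
          for name, m in models.items()}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

As in the study, cross-validated accuracy only guides model choice; the reported metrics would still come from a held-out test split.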
Affiliation(s)
- Stefano Guerriero
- Centro Integrato di Procreazione Medicalmente Assistita (PMA) e Diagnostica Ostetrico-Ginecologica, Policlinico Universitario Duilio Casula, Monserrato, Cagliari, Italy; University of Cagliari, Cagliari, Italy.
- MariaAngela Pascual
- Department of Obstetrics, Gynecology, and Reproduction, Hospital Universitari Dexeus, Spain
- Silvia Ajossa
- Department of Obstetrics and Gynecology, University of Cagliari, Policlinico Universitario Duilio Casula, Monserrato, Cagliari, Italy
- Manuela Neri
- Department of Obstetrics and Gynecology, University of Cagliari, Policlinico Universitario Duilio Casula, Monserrato, Cagliari, Italy
- Eleonora Musa
- Department of Obstetrics and Gynecology, University of Cagliari, Policlinico Universitario Duilio Casula, Monserrato, Cagliari, Italy
- Betlem Graupera
- Department of Obstetrics, Gynecology, and Reproduction, Hospital Universitari Dexeus, Spain
- Ignacio Rodriguez
- Unidad Epidemiología y Estadística, Departamento de Obstetricia, Ginecología y Reproducción, Hospital Universitario Quirón Dexeus, Barcelona, Spain
- Juan Luis Alcazar
- Department of Obstetrics and Gynecology, Clínica Universidad de Navarra, School of Medicine, University of Navarra, Pamplona, Spain
42
Odibo AO. UOG now and beyond! ULTRASOUND IN OBSTETRICS & GYNECOLOGY : THE OFFICIAL JOURNAL OF THE INTERNATIONAL SOCIETY OF ULTRASOUND IN OBSTETRICS AND GYNECOLOGY 2021; 57:7-8. [PMID: 33387409 DOI: 10.1002/uog.23567] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]