1
Wang J, Jin Y, Jiang A, Chen W, Shan G, Gu Y, Ming Y, Li J, Yue C, Huang Z, Librach C, Lin G, Wang X, Zhao H, Sun Y, Zhang Z. Testing the generalizability and effectiveness of deep learning models among clinics: sperm detection as a pilot study. Reprod Biol Endocrinol 2024; 22:59. PMID: 38778327; PMCID: PMC11110326; DOI: 10.1186/s12958-024-01232-8. Open access.
Abstract
BACKGROUND Deep learning has been increasingly investigated for assisting clinical in vitro fertilization (IVF). The first technical step in many tasks is to visually detect and locate sperm, oocytes, and embryos in images. For clinical deployment of such deep learning models, different clinics use different image acquisition hardware and different sample preprocessing protocols, raising the concern of whether the accuracy reported by one clinic can be reproduced in another. Here we aim to investigate the effect of each imaging factor on the generalizability of object detection models, using sperm analysis as a pilot example. METHODS Ablation studies were performed using state-of-the-art models for detecting human sperm to quantitatively assess how model precision (false-positive detections) and recall (missed detections) were affected by imaging magnification, imaging mode, and sample preprocessing protocols. The results led to the hypothesis that the richness of image acquisition conditions in a training dataset deterministically affects model generalizability. The hypothesis was tested by first enriching the training dataset with a wide range of imaging conditions, then validating through internal blind tests on new samples and external multi-center clinical validations. RESULTS Ablation experiments revealed that removing subsets of data from the training dataset significantly reduced model precision. Removing raw sample images from the training dataset caused the largest drop in model precision, whereas removing 20x images caused the largest drop in model recall. By incorporating different imaging and sample preprocessing conditions into a rich training dataset, the model achieved an intraclass correlation coefficient (ICC) of 0.97 (95% CI: 0.94-0.99) for precision and an ICC of 0.97 (95% CI: 0.93-0.99) for recall.
Multi-center clinical validation showed no significant differences in model precision or recall across different clinics and applications. CONCLUSIONS The results validated the hypothesis that the richness of data in the training dataset is a key factor impacting model generalizability. These findings highlight the importance of diversity in a training dataset for model evaluation and suggest that future deep learning models in andrology and reproductive medicine should incorporate comprehensive feature sets for enhanced generalizability across clinics.
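The precision and recall figures this abstract reports follow the standard object-detection definitions. A minimal illustrative sketch (the counts are hypothetical, not taken from the study):

```python
# Illustrative sketch of the precision/recall definitions used above.
# Precision falls with false-positive detections; recall falls with misses.
# The counts in the example are hypothetical, not from the study.

def precision_recall(tp: int, fp: int, fn: int) -> tuple[float, float]:
    """Return (precision, recall) from true/false positive and miss counts."""
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return precision, recall

# e.g., 90 sperm correctly detected, 10 spurious boxes, 5 sperm missed:
p, r = precision_recall(tp=90, fp=10, fn=5)
print(f"precision={p:.3f} recall={r:.3f}")  # precision=0.900 recall=0.947
```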
Affiliation(s)
- Jiaqi Wang
- School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China
- Yufei Jin
- School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China
- Aojun Jiang
- Department of Mechanical Engineering, University of Toronto, Toronto, Canada
- Wenyuan Chen
- Department of Mechanical Engineering, University of Toronto, Toronto, Canada
- Guanqiao Shan
- Department of Mechanical Engineering, University of Toronto, Toronto, Canada
- Yifan Gu
- Institute of Reproductive and Stem Cell Engineering, School of Basic Medical Science, Central South University, Changsha, China
- Reproductive & Genetic Hospital of Citic-Xiangya, Changsha, China
- Yue Ming
- School of Medicine, The Chinese University of Hong Kong, Shenzhen, China
- Jichang Li
- School of Medicine, The Chinese University of Hong Kong, Shenzhen, China
- Chunfeng Yue
- Suzhou Boundless Medical Technology Ltd., Co., Suzhou, China
- Zongjie Huang
- Suzhou Boundless Medical Technology Ltd., Co., Suzhou, China
- Ge Lin
- Institute of Reproductive and Stem Cell Engineering, School of Basic Medical Science, Central South University, Changsha, China
- Reproductive & Genetic Hospital of Citic-Xiangya, Changsha, China
- Xibu Wang
- The 3rd Affiliated Hospital of Shenzhen University, Shenzhen, China
- Huan Zhao
- The 3rd Affiliated Hospital of Shenzhen University, Shenzhen, China
- Yu Sun
- Department of Mechanical Engineering, University of Toronto, Toronto, Canada
- Department of Computer Science, University of Toronto, Toronto, Canada
- Institute of Biomedical Engineering, University of Toronto, Toronto, Canada
- Department of Electrical and Computer Engineering, University of Toronto, Toronto, Canada
- Zhuoran Zhang
- School of Science and Engineering, The Chinese University of Hong Kong, Shenzhen, China
2
Fjeldstad J, Qi W, Siddique N, Mercuri N, Nayot D, Krivoi A. Segmentation of mature human oocytes provides interpretable and improved blastocyst outcome predictions by a machine learning model. Sci Rep 2024; 14:10569. PMID: 38719918; PMCID: PMC11078996; DOI: 10.1038/s41598-024-60901-1. Open access.
Abstract
Within the medical field of human assisted reproductive technology, a method for interpretable, non-invasive, and objective oocyte evaluation is lacking. To address this clinical gap, a workflow utilizing machine learning techniques has been developed involving automatic multi-class segmentation of two-dimensional images, morphometric analysis, and prediction of developmental outcomes of mature denuded oocytes based on feature extraction and clinical variables. Two separate models have been developed for this purpose: a model to perform multiclass segmentation, and a classifier model to classify oocytes as likely or unlikely to develop into a blastocyst (Day 5-7 embryo). The segmentation model is highly accurate at segmenting the oocyte, ensuring high-quality segmented images (masks) are utilized as inputs for the classifier model (mask model). The mask model displayed an area under the curve (AUC) of 0.63, a sensitivity of 0.51, and a specificity of 0.66 on the test set. The AUC fell to 0.57 when features extracted from the ooplasm were removed, suggesting the ooplasm holds the information most pertinent to oocyte developmental competence. The mask model was further compared to a deep learning model, which also utilized the segmented images as inputs. The performance of both models combined in an ensemble model was evaluated, showing an improvement (AUC 0.67) compared to either model alone. The results of this study indicate that direct assessments of the oocyte are warranted, providing the first objective insights into key features for developmental competence, a step above the current standard of care, which relies solely on oocyte age as a proxy for quality.
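The ensemble result above combines the predicted probabilities of the two classifiers. A minimal sketch of that idea, plus the AUC statistic the abstract reports; the function names, equal weighting, and data are illustrative assumptions, not the authors' code:

```python
# Sketch: averaging two models' predicted blastocyst probabilities, and
# AUC computed as the rank statistic (probability a random positive
# outscores a random negative). All names and values are illustrative.

def roc_auc(labels: list[int], scores: list[float]) -> float:
    """AUC = P(random positive outscores random negative), ties count 0.5."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def ensemble(p_a: list[float], p_b: list[float], w: float = 0.5) -> list[float]:
    """Weighted average of two models' predicted probabilities."""
    return [w * a + (1 - w) * b for a, b in zip(p_a, p_b)]

labels = [1, 1, 0, 0]                      # 1 = developed to blastocyst
mask_scores = [0.6, 0.4, 0.5, 0.3]         # hypothetical mask-model outputs
deep_scores = [0.7, 0.6, 0.4, 0.5]         # hypothetical deep-model outputs
print(roc_auc(labels, ensemble(mask_scores, deep_scores)))
```

With these toy scores the ensemble separates the classes better than the mask model alone, mirroring the improvement the study reports.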
Affiliation(s)
- Jullin Fjeldstad
- Clinical Embryology and Scientific Operations, Future Fertility, 3 Church St, Toronto, ON, M5E 1A9, Canada
- Weikai Qi
- Data Science, Future Fertility, 3 Church St, Toronto, ON, M5E 1A9, Canada
- Nadia Siddique
- Clinical Embryology and Scientific Operations, Future Fertility, 3 Church St, Toronto, ON, M5E 1A9, Canada
- Natalie Mercuri
- Clinical Embryology and Scientific Operations, Future Fertility, 3 Church St, Toronto, ON, M5E 1A9, Canada
- Dan Nayot
- Chief Medical Officer, Future Fertility, 3 Church St, Toronto, ON, M5E 1A9, Canada
- Alex Krivoi
- Data Science, Future Fertility, 3 Church St, Toronto, ON, M5E 1A9, Canada
3
Pavlovic ZJ, Jiang VS, Hariton E. Current applications of artificial intelligence in assisted reproductive technologies through the perspective of a patient's journey. Curr Opin Obstet Gynecol 2024:00001703-990000000-00122. PMID: 38597425; DOI: 10.1097/gco.0000000000000951.
Abstract
PURPOSE OF REVIEW This review highlights the timely relevance of artificial intelligence in enhancing assisted reproductive technologies (ARTs), particularly in-vitro fertilization (IVF). It underscores artificial intelligence's potential in revolutionizing patient outcomes and operational efficiency by addressing challenges in fertility diagnoses and procedures. RECENT FINDINGS Recent advancements in artificial intelligence, including machine learning and predictive modeling, are making significant strides in optimizing IVF processes such as medication dosing, scheduling, and embryological assessments. Innovations include artificial intelligence augmented diagnostic testing, predictive modeling for treatment outcomes, scheduling optimization, dosing and protocol selection, follicular and hormone monitoring, trigger timing, and improved embryo selection. These developments promise to refine treatment approaches, enhance patient engagement, and increase the accuracy and scalability of fertility treatments. SUMMARY The integration of artificial intelligence into reproductive medicine offers profound implications for clinical practice and research. By facilitating personalized treatment plans, standardizing procedures, and improving the efficiency of fertility clinics, artificial intelligence technologies pave the way for value-based, accessible, and efficient fertility services. Despite the promise, the full potential of artificial intelligence in ART will require ongoing validation and ethical considerations to ensure equitable and effective implementation.
Affiliation(s)
- Zoran J Pavlovic
- Department of Obstetrics and Gynecology/Reproductive Endocrinology and Infertility, University of South Florida, Morsani College of Medicine, Tampa, Florida
- Victoria S Jiang
- Division of Reproductive Endocrinology & Infertility, Vincent Department of Obstetrics and Gynecology, Massachusetts General Hospital/Harvard Medical School, Boston, Massachusetts
- Eduardo Hariton
- Reproductive Science Center of the San Francisco Bay Area, San Ramon, California, USA
4
Si K, Huang B, Jin L. Application of artificial intelligence in gametes and embryos selection. Hum Fertil 2023; 26:757-777. PMID: 37705466; DOI: 10.1080/14647273.2023.2256980.
Abstract
Gamete and embryo quality are critical to the success rate of Assisted Reproductive Technology (ART) cycles, but there remains a lack of methods to accurately measure the quality of sperm, oocytes and embryos. The ability of Artificial Intelligence (AI) technology to analyze large amounts of data, especially video and images, is particularly useful in gamete and embryo assessment and selection. A well-trained model computes quickly and with high accuracy, which can help embryologists perform more objective gamete and embryo selection. Various AI models have been developed for gamete and embryo assessment, some of which exhibit good performance. In this review, we summarize the latest applications of AI technology in semen analysis and in sperm, oocyte and embryo selection, and discuss the existing problems and development directions of AI in this field.
Affiliation(s)
- Keyi Si
- Reproductive Medicine Center, Tongji Hospital, Tongji Medicine College, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Bo Huang
- Reproductive Medicine Center, Tongji Hospital, Tongji Medicine College, Huazhong University of Science and Technology, Wuhan, People's Republic of China
- Lei Jin
- Reproductive Medicine Center, Tongji Hospital, Tongji Medicine College, Huazhong University of Science and Technology, Wuhan, People's Republic of China
5
Tran HP, Diem Tuyet HT, Dang Khoa TQ, Lam Thuy LN, Bao PT, Thanh Sang VN. Microscopic Video-Based Grouped Embryo Segmentation: A Deep Learning Approach. Cureus 2023; 15:e45429. PMID: 37859886; PMCID: PMC10582205; DOI: 10.7759/cureus.45429. Open access.
Abstract
PURPOSE The primary aim of this research is to enhance the utilization of advanced deep learning (DL) techniques in the domain of in vitro fertilization (IVF) by presenting a more refined approach to the segmentation and organization of microscopic embryos. This study also seeks to establish a comprehensive embryo database that can be employed for future research and educational purposes. METHODS This study introduces an advanced methodology for embryo segmentation and organization using DL. The approach comprises three primary steps: Embryo Segmentation Model, Segmented Embryo Image Organization, and Clear and Blur Image Classification. The proposed approach was rigorously evaluated on a sample of 5182 embryos extracted from 362 microscopic embryo videos. RESULTS The study's results show that the proposed method is highly effective in accurately segmenting and organizing embryo images. This is evidenced by the high mean average precision values of 1.0 at an intersection over union threshold of 0.5 and across the range of 0.5 to 0.95, indicating a robust object detection capability that is vital in the IVF process. Segmentation of images based on various factors such as the day of development, patient, growth medium, and embryo facilitates easy comparison and identification of potential issues. Finally, appropriate threshold values for clear and blur image classification are proposed. CONCLUSION The suggested technique represents an indispensable stage of data preparation for IVF training and education. Furthermore, this study provides a solid foundation for future research and adoption of DL in IVF, which is expected to have a significant positive impact on IVF outcomes.
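The mean average precision values above are evaluated against intersection-over-union (IoU) thresholds. A minimal IoU sketch for axis-aligned boxes, illustrative rather than the authors' implementation:

```python
# Illustrative IoU for axis-aligned (x1, y1, x2, y2) boxes: a detection
# counts as correct at threshold 0.5 only when IoU >= 0.5, as in the
# mAP@0.5 figure reported above. Not the authors' implementation.

def iou(a: tuple, b: tuple) -> float:
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)  # overlap area (0 if none)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(iou((0, 0, 10, 10), (0, 0, 10, 10)) >= 0.5)  # True  (perfect overlap)
print(iou((0, 0, 10, 10), (5, 0, 15, 10)) >= 0.5)  # False (IoU = 1/3)
```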
Affiliation(s)
- Huy Phuong Tran
- Department of Infertility, Hung Vuong Hospital, Ho Chi Minh City, VNM
- Le Nhi Lam Thuy
- IC-IP Lab, Faculty of Information and Technology, Saigon University, Ho Chi Minh City, VNM
- Pham The Bao
- IC-IP Lab, Faculty of Information and Technology, Saigon University, Ho Chi Minh City, VNM
- Vu Ngoc Thanh Sang
- IC-IP Lab, Faculty of Information and Technology, Saigon University, Ho Chi Minh City, VNM
6
Payá E, Bori L, Colomer A, Meseguer M, Naranjo V. Automatic characterization of human embryos at day 4 post-insemination from time-lapse imaging using supervised contrastive learning and inductive transfer learning techniques. Comput Methods Programs Biomed 2022; 221:106895. PMID: 35609359; DOI: 10.1016/j.cmpb.2022.106895.
Abstract
BACKGROUND Embryo morphology is a predictive marker for implantation success and ultimately live births. Viability evaluation and quality grading are commonly used to select the embryo with the highest implantation potential. However, the traditional method of manual embryo assessment is time-consuming and highly susceptible to inter- and intra-observer variability. Automation of this process results in more objective and accurate predictions. METHOD In this paper, we propose a novel methodology based on deep learning to automatically evaluate the morphological appearance of human embryos from time-lapse imaging. A supervised contrastive learning framework is implemented to predict embryo viability at day 4 and day 5, and an inductive transfer approach is applied to classify embryo quality at both times. RESULTS Both methods outperformed conventional approaches and improved on state-of-the-art embryology results on an independent test set. The viability models achieved accuracies of 0.8103 and 0.9330, and the quality models reached 0.7500 and 0.8001, for day 4 and day 5, respectively. Furthermore, qualitative results were consistent with clinical interpretation. CONCLUSIONS The proposed methods are in line with the current artificial intelligence literature and have proven promising. Furthermore, our findings represent a breakthrough in the field of embryology in that they study the possibility of embryo selection at day 4. Moreover, the Grad-CAM findings are directly in line with embryologists' decisions. Finally, our results demonstrate excellent potential for inclusion of the models in clinical practice.
Affiliation(s)
- Elena Payá
- Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, Valencia, 46022, Spain; IVI-RMA Valencia, Spain
- Adrián Colomer
- Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, Valencia, 46022, Spain
- Valery Naranjo
- Instituto de Investigación e Innovación en Bioingeniería, Universitat Politècnica de València, Valencia, 46022, Spain
7
Hou X, Shen G, Zhou L, Li Y, Wang T, Ma X. Artificial Intelligence in Cervical Cancer Screening and Diagnosis. Front Oncol 2022; 12:851367. PMID: 35359358; PMCID: PMC8963491; DOI: 10.3389/fonc.2022.851367. Open access.
Abstract
Cervical cancer remains a leading cause of cancer death in women, seriously threatening their physical and mental health. It is an easily preventable cancer with early screening and diagnosis. Although technical advancements have significantly improved the early diagnosis of cervical cancer, accurate diagnosis remains difficult owing to various factors. In recent years, artificial intelligence (AI)-based medical diagnostic applications have been on the rise and have excellent applicability in the screening and diagnosis of cervical cancer. Their benefits include reduced time consumption, reduced need for professional and technical personnel, and no bias owing to subjective factors. We, thus, aimed to discuss how AI can be used in cervical cancer screening and diagnosis, particularly to improve the accuracy of early diagnosis. The application and challenges of using AI in the diagnosis and treatment of cervical cancer are also discussed.
Affiliation(s)
- Xin Hou
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Guangyang Shen
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Liqiang Zhou
- Cancer Centre and Center of Reproduction, Development and Aging, Faculty of Health Sciences, University of Macau, Macau SAR, China
- Yinuo Li
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Tian Wang
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Xiangyi Ma
- Department of Obstetrics and Gynecology, Tongji Medical College, Tongji Hospital, Huazhong University of Science and Technology, Wuhan, China
- Correspondence: Xiangyi Ma
8
Targosz A, Przystałka P, Wiaderkiewicz R, Mrugacz G. Semantic segmentation of human oocyte images using deep neural networks. Biomed Eng Online 2021; 20:40. PMID: 33892725; PMCID: PMC8066497; DOI: 10.1186/s12938-021-00864-w. Open access.
Abstract
BACKGROUND Infertility is a significant problem for humanity. In vitro fertilisation is one of the most effective and frequently applied ART methods. The effectiveness of IVF depends on the assessment and selection of the gametes and embryos with the highest developmental potential. The subjective nature of morphological assessment of oocytes and embryos is still one of the main reasons for seeking effective and objective methods for assessing quality in an automatic manner. The most promising methods for automatic classification of oocytes and embryos are based on image analysis aided by machine learning techniques. Special attention is paid to deep neural networks, which can be used as classifiers solving the problem of automatic assessment of oocytes and embryos. METHODS This paper deals with semantic segmentation of human oocyte images using deep neural networks in order to develop new versions of predefined neural networks. Deep semantic oocyte segmentation networks can be seen as medically oriented predefined networks that understand the content of the image. The research presented in the paper focuses on the performance comparison of different types of convolutional neural networks for semantic oocyte segmentation. In a case study, the merits and limitations of the selected deep neural networks are analysed. RESULTS 71 deep neural models were analysed. The best score was obtained for one of the variants of the DeepLab-v3-ResNet-18 model, whose training accuracy (Acc) reached about 85% for training patterns and 79% for validation ones. The weighted intersection over union (wIoU) and global accuracy (gAcc) for test patterns were calculated as well, reaching 0.897 and 0.93, respectively. CONCLUSION The obtained results show that the proposed approach can be applied to create deep neural models for semantic oocyte segmentation with accuracy high enough to guarantee their usage as predefined networks in other tasks.
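The two test metrics reported here, global accuracy (gAcc) and weighted IoU (wIoU), are pixelwise measures. A minimal sketch of gAcc and the per-class IoU that wIoU is built from, on hypothetical 2x2 label masks (illustrative, not the authors' code):

```python
# Sketch of the segmentation metrics named above, on label masks given
# as nested lists. The masks are hypothetical 2x2 examples; wIoU would
# additionally weight each class IoU by its pixel frequency.

def global_accuracy(pred, truth):
    """Fraction of pixels whose predicted class matches the ground truth."""
    pairs = [(p, t) for rp, rt in zip(pred, truth) for p, t in zip(rp, rt)]
    return sum(p == t for p, t in pairs) / len(pairs)

def class_iou(pred, truth, cls):
    """IoU of one class: shared pixels over pixels in either mask."""
    pairs = [(p == cls, t == cls)
             for rp, rt in zip(pred, truth) for p, t in zip(rp, rt)]
    inter = sum(p and t for p, t in pairs)
    union = sum(p or t for p, t in pairs)
    return inter / union if union else 0.0

pred = [[0, 1], [1, 1]]   # predicted mask (1 = oocyte, 0 = background)
truth = [[0, 1], [0, 1]]  # ground-truth mask
print(global_accuracy(pred, truth))  # 0.75
print(class_iou(pred, truth, 1))     # 2 shared pixels / 3 in the union
```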
Affiliation(s)
- Anna Targosz
- Department of Histology and Embryology, Medical University of Silesia, Faculty of Medical Sciences, 18 Medyków St., 40-752 Katowice, Poland
- Center for Reproductive Medicine Bocian, 26 Akademicka St., 15-267 Białystok, Poland
- Piotr Przystałka
- Department of Fundamentals of Machinery Design, Silesian University of Technology, Faculty of Mechanical Engineering, 18a Konarskiego St., 44-100 Gliwice, Poland
- Ryszard Wiaderkiewicz
- Department of Histology and Embryology, Medical University of Silesia, Faculty of Medical Sciences, 18 Medyków St., 40-752 Katowice, Poland
- Grzegorz Mrugacz
- Center for Reproductive Medicine Bocian, 26 Akademicka St., 15-267 Białystok, Poland