1
Chen Y, Liu Y, Zuo X, Zhao Q, Sun M, Cui M, Zhao X, Du Y. Identification of significant imaging features for sensing oocyte viability. Microsc Res Tech 2023; 86:181-192. [PMID: 36278826] [DOI: 10.1002/jemt.24248]
Abstract
The evaluation of oocyte viability in the laboratory is limited to morphological assessment by the naked eye, but the realization that most normal-appearing oocytes may conceal abnormalities has prompted the search for automated approaches that can detect abnormalities imperceptible to the naked eye. In this study, we developed an image processing pipeline applicable to bright-field microscope images to quantify the causal relationship between quantitative imaging features and the developmental potential of oocytes. We acquired 19 imaging features from approximately 700 oocytes and determined two imaging subtypes, viable and nonviable, that correlated closely with a viability fluorescence indicator and with cleavage rates. The causal relationship between these imaging features and oocyte viability was derived from a viability-oriented Bayesian network developed using the Bayesian information criterion and Tabu search. Our experimental results revealed that entropy, together with mean Gray-Level Co-Occurrence Matrix (GLCM) energy, which describe the uniformity and texture roughness of the cytoplasm, were salient features for the automated selection of promising oocytes with excellent developmental potential.
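The two salient features named in this abstract, histogram entropy and GLCM energy, are standard texture descriptors, so a minimal sketch can make them concrete. The Python functions below are illustrative, not the authors' pipeline; the gray-level quantization depth and the single pixel offset are assumptions.

```python
import numpy as np

def glcm(img, dx=1, dy=0, levels=8):
    """Gray-level co-occurrence matrix for one pixel offset, normalized to probabilities."""
    # Quantize the 8-bit image into `levels` gray bins.
    q = np.clip(np.floor(img.astype(float) / 256.0 * levels).astype(int), 0, levels - 1)
    m = np.zeros((levels, levels))
    h, w = q.shape
    # Count co-occurring gray-level pairs at offset (dy, dx).
    for y in range(h - dy):
        for x in range(w - dx):
            m[q[y, x], q[y + dy, x + dx]] += 1
    return m / m.sum()

def glcm_energy(p):
    """Angular second moment of a normalized GLCM: high for uniform texture."""
    return float((p ** 2).sum())

def image_entropy(img, bins=256):
    """Shannon entropy of the gray-level histogram: high for rough, heterogeneous texture."""
    hist, _ = np.histogram(img, bins=bins, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())
```

A perfectly uniform patch scores energy 1.0 and entropy 0.0; rougher cytoplasm drives entropy up and energy down. Averaging `glcm_energy` over several offsets gives a "mean GLCM energy" in the sense the abstract uses.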
Affiliation(s)
- Yizhe Chen
- Institute of Robotics and Automatic Information System, College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China; Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Tianjin, China
- Yaowei Liu
- Institute of Robotics and Automatic Information System, College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China; Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Tianjin, China
- Xiaoying Zuo
- Institute of Robotics and Automatic Information System, College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China; Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Tianjin, China
- Qili Zhao
- Institute of Robotics and Automatic Information System, College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China; Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Tianjin, China
- Mingzhu Sun
- Institute of Robotics and Automatic Information System, College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China; Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Tianjin, China
- Maosheng Cui
- Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Tianjin, China; Innovation Team of Pig Feeding, Institute of Animal Science and Veterinary of Tianjin, Tianjin, China
- Xin Zhao
- Institute of Robotics and Automatic Information System, College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China; Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Tianjin, China
- Yue Du
- Institute of Robotics and Automatic Information System, College of Artificial Intelligence, Nankai University, Tianjin, China; Tianjin Key Laboratory of Intelligent Robotics, Nankai University, Tianjin, China; Institute of Intelligence Technology and Robotic Systems, Shenzhen Research Institute of Nankai University, Tianjin, China
2
An Image Processing Protocol to Extract Variables Predictive of Human Embryo Fitness for Assisted Reproduction. Appl Sci (Basel) 2022. [DOI: 10.3390/app12073531]
Abstract
Despite new embryo-selection techniques and commercially available equipment such as EmbryoScope® and Geri®, which help in the evaluation of embryo quality, embryologists' classifications remain subjective and are subject to inter- and intra-observer variability, compromising successful implantation of the embryo. Nonetheless, with images acquired through the time-lapse system, it is possible to process these images digitally, providing a better analysis of the embryo and enabling automatic analysis of a large volume of information. An image processing protocol was developed using well-established techniques to segment blastocyst images and extract variables of interest. A total of 33 variables were generated automatically by digital image processing, each representing a different aspect of the embryo and describing a different characteristic of the blastocyst. These variables can be categorized into texture, gray-level average, gray-level standard deviation, modal value, relations, and light level. The automated and directed steps of the proposed processing protocol exclude spurious results, except when image quality (e.g., focus) prevents correct segmentation. The protocol can segment human blastocyst images and automatically extract 33 variables that describe quantitative aspects of the blastocyst's regions, with potential utility in embryo selection for assisted reproductive technology (ART).
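As an illustration of three of the variable categories this abstract lists (gray-level average, gray-level standard deviation, modal value), a minimal sketch of the per-region statistics might look like the following. This is not the authors' 33-variable protocol; the function name and the assumption of an 8-bit image with a boolean region mask are illustrative.

```python
import numpy as np

def region_gray_stats(img, mask):
    """Gray-level average, standard deviation, and modal value inside a segmented region.

    img  : 2-D uint8 grayscale image
    mask : boolean array of the same shape, True inside the blastocyst region
    """
    vals = img[mask].astype(float)
    hist = np.bincount(img[mask], minlength=256)  # gray-level histogram of the region
    return {
        "mean": float(vals.mean()),  # gray-level average
        "std": float(vals.std()),    # gray-level standard deviation
        "mode": int(hist.argmax()),  # modal (most frequent) gray level
    }
```

In a full protocol, statistics like these would be computed separately for each segmented blastocyst region (e.g., inner cell mass, trophectoderm) and concatenated into the feature vector.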
3
Perini G, Rosa E, Friggeri G, Di Pietro L, Barba M, Parolini O, Ciasca G, Moriconi C, Papi M, De Spirito M, Palmieri V. INSIDIA 2.0 High-Throughput Analysis of 3D Cancer Models: Multiparametric Quantification of Graphene Quantum Dots Photothermal Therapy for Glioblastoma and Pancreatic Cancer. Int J Mol Sci 2022; 23:3217. [PMID: 35328638] [PMCID: PMC8948775] [DOI: 10.3390/ijms23063217]
Abstract
Cancer spheroids are in vitro 3D models that have become crucial in nanomaterials science because they allow high-throughput screening of nanoparticles and combined nanoparticle-drug therapies on in vitro models. However, most current spheroid analysis methods involve manual steps; this is a time-consuming process and is highly sensitive to variability between individual operators. For this reason, rapid, user-friendly, ready-to-use, high-throughput image analysis software is necessary. In this work, we report the INSIDIA 2.0 macro, which offers researchers high-throughput, high-content quantitative analysis of in vitro 3D cancer cell spheroids and allows advanced parametrization of the expanding and invading cancer cellular mass. INSIDIA has been implemented to provide in-depth morphologic analysis and has been used to analyze the effect of graphene quantum dot photothermal therapy on glioblastoma (U87) and pancreatic cancer (PANC-1) spheroids. Thanks to INSIDIA 2.0 analysis, two types of effects have been observed: in U87 spheroids, death is accompanied by a decrease in the area of the entire spheroid and a decrease in entropy, due to the generation of a dense, highly uniform spheroid core. In contrast, death of PANC-1 spheroids caused by nanoparticle photothermal disruption is accompanied by an overall increase in area and entropy, due to the progressive loss of integrity and increasing variability of spheroid texture. We have summarized these effects in a quantitative parameter of spheroid disruption, demonstrating that INSIDIA 2.0 multiparametric analysis can quantify cell death in a non-invasive, fast, and high-throughput fashion.
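The per-spheroid readouts this abstract contrasts, projected area and texture entropy, can be sketched in a few lines. This is an illustrative reimplementation, not the INSIDIA macro itself, and the helper names are assumptions.

```python
import numpy as np

def spheroid_metrics(img, mask):
    """Area and texture entropy of a segmented spheroid.

    U87-like death: area and entropy both drop (dense, uniform core).
    PANC-1-like death: area and entropy both rise (loss of integrity).
    """
    area = int(mask.sum())  # projected area in pixels
    hist, _ = np.histogram(img[mask], bins=256, range=(0, 256))
    p = hist / hist.sum()
    p = p[p > 0]
    entropy = float(-(p * np.log2(p)).sum())
    return area, entropy

def relative_change(before, after):
    """Signed relative change of a metric between two time points."""
    return (after - before) / before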
Affiliation(s)
- Giordano Perini
- Dipartimento di Neuroscienze, Università Cattolica del Sacro Cuore, Largo Francesco Vito 1, 00168 Rome, Italy
- Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
- Enrico Rosa
- Dipartimento di Neuroscienze, Università Cattolica del Sacro Cuore, Largo Francesco Vito 1, 00168 Rome, Italy
- Ginevra Friggeri
- Dipartimento di Neuroscienze, Università Cattolica del Sacro Cuore, Largo Francesco Vito 1, 00168 Rome, Italy
- Lorena Di Pietro
- Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
- Dipartimento di Scienze della Vita e Sanità Pubblica, Università Cattolica del Sacro Cuore, Largo Francesco Vito 1, 00168 Rome, Italy
- Marta Barba
- Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
- Dipartimento di Scienze della Vita e Sanità Pubblica, Università Cattolica del Sacro Cuore, Largo Francesco Vito 1, 00168 Rome, Italy
- Ornella Parolini
- Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
- Dipartimento di Scienze della Vita e Sanità Pubblica, Università Cattolica del Sacro Cuore, Largo Francesco Vito 1, 00168 Rome, Italy
- Gabriele Ciasca
- Dipartimento di Neuroscienze, Università Cattolica del Sacro Cuore, Largo Francesco Vito 1, 00168 Rome, Italy
- Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
- Chiara Moriconi
- Theolytics, The Sherard Building, Edmund Halley Road, Oxford Science Park, Oxford OX4 4DQ, UK
- Massimiliano Papi
- Dipartimento di Neuroscienze, Università Cattolica del Sacro Cuore, Largo Francesco Vito 1, 00168 Rome, Italy
- Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
- Marco De Spirito
- Dipartimento di Neuroscienze, Università Cattolica del Sacro Cuore, Largo Francesco Vito 1, 00168 Rome, Italy
- Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
- Valentina Palmieri
- Dipartimento di Neuroscienze, Università Cattolica del Sacro Cuore, Largo Francesco Vito 1, 00168 Rome, Italy
- Fondazione Policlinico Universitario A. Gemelli IRCCS, 00168 Rome, Italy
- Istituto dei Sistemi Complessi, CNR, Via dei Taurini 19, 00185 Rome, Italy
4
Abstract
The efficiency of lung cancer screening for reducing mortality is hindered by the high rate of false positives. Artificial intelligence applied to radiomics could help discard benign cases early in the analysis of CT scans. The limited amount of available data, and the fact that benign cases are a minority, constitute the main challenges for the successful use of state-of-the-art methods (such as deep learning), which can be biased, over-fitted, and lacking in clinical reproducibility. We present a hybrid approach combining the potential of radiomic features to characterize nodules in CT scans with the generalization of feed-forward networks. To obtain maximal reproducibility with minimal training data, we propose an embedding of nodules based on the statistical significance of radiomic features for malignancy detection. This representation space of lesions is the input to a feed-forward network, whose architecture and hyperparameters are optimized using purpose-defined metrics of the diagnostic power of the whole system. Results of the best model on an independent set of patients achieve 100% sensitivity and 83% specificity (AUC = 0.94) for malignancy detection.
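The two-stage idea described here, a significance-based embedding of radiomic features feeding a feed-forward network, can be sketched as follows. The Welch t-statistic ranking, the network shape, and all names are illustrative assumptions, not the paper's exact choices.

```python
import numpy as np

def select_significant(X_benign, X_malig, k):
    """Rank radiomic features by absolute Welch t-statistic and keep the top k.

    Returns indices of the k features that best separate benign from malignant
    nodules; these indices define the low-dimensional embedding of each lesion.
    """
    m1, m2 = X_benign.mean(axis=0), X_malig.mean(axis=0)
    v1 = X_benign.var(axis=0, ddof=1) / len(X_benign)
    v2 = X_malig.var(axis=0, ddof=1) / len(X_malig)
    t = np.abs(m1 - m2) / np.sqrt(v1 + v2 + 1e-12)
    return np.argsort(t)[::-1][:k]

def feedforward(x, W1, b1, W2, b2):
    """One-hidden-layer network: ReLU hidden layer, sigmoid malignancy score."""
    h = np.maximum(0.0, x @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))
```

Restricting the network's input to the statistically significant features is what keeps the model small, which is the mechanism the abstract credits for reproducibility with minimal training data.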
6
Srinivasu PN, SivaSai JG, Ijaz MF, Bhoi AK, Kim W, Kang JJ. Classification of Skin Disease Using Deep Learning Neural Networks with MobileNet V2 and LSTM. Sensors (Basel) 2021; 21:2852. [PMID: 33919583] [PMCID: PMC8074091] [DOI: 10.3390/s21082852]
Abstract
Deep learning models are efficient at learning features that help in understanding complex patterns precisely. This study proposes a computerized process for classifying skin disease through deep learning based on MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved efficient, achieving better accuracy while remaining deployable on lightweight computational devices. The proposed model is efficient in maintaining stateful information for precise predictions. A gray-level co-occurrence matrix is used for assessing the progress of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), Convolutional Neural Networks (CNN), the Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture expanded with a few changes. On the HAM10000 dataset, the proposed method outperformed the other methods with more than 85% accuracy. It recognizes the affected region faster, with almost 2× fewer computations than the conventional MobileNet model, resulting in minimal computational effort. Furthermore, a mobile application was designed for instant and proper action: it helps the patient and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners diagnose skin conditions efficiently and effectively, thereby reducing further complications and morbidity.
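The "stateful information" this abstract refers to is carried by the LSTM's hidden and cell states. A minimal NumPy sketch of one LSTM cell step, applied to a sequence of feature vectors such as a MobileNet V2 backbone would emit, is below; the weight shapes, random initialization, and function names are illustrative, not the paper's trained model.

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step over feature vector x, carrying state (h, c) forward.

    W: (4*d, n) input weights, U: (4*d, d) recurrent weights, b: (4*d,) bias,
    with gates stacked in the order: input, forget, cell candidate, output.
    """
    d = h.shape[0]
    z = W @ x + U @ h + b
    i = 1.0 / (1.0 + np.exp(-z[0:d]))       # input gate
    f = 1.0 / (1.0 + np.exp(-z[d:2*d]))     # forget gate
    g = np.tanh(z[2*d:3*d])                 # candidate cell state
    o = 1.0 / (1.0 + np.exp(-z[3*d:4*d]))   # output gate
    c_new = f * c + i * g                   # stateful memory update
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def run_sequence(xs, d):
    """Feed a sequence of (e.g. CNN) feature vectors through one LSTM cell."""
    n = xs.shape[1]
    rng = np.random.default_rng(0)
    W = rng.normal(0, 0.1, (4 * d, n))
    U = rng.normal(0, 0.1, (4 * d, d))
    b = np.zeros(4 * d)
    h, c = np.zeros(d), np.zeros(d)
    for x in xs:
        h, c = lstm_step(x, h, c, W, U, b)
    return h  # final hidden state summarizes the whole sequence
```

Because `(h, c)` are threaded through every step, the final hidden state depends on the entire input sequence, which is what lets the classifier exploit temporal context rather than a single frame.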
Affiliation(s)
- Parvathaneni Naga Srinivasu
- Department of Computer Science and Engineering, Gitam Institute of Technology, GITAM Deemed to be University, Rushikonda, Visakhapatnam 530045, India
- Muhammad Fazal Ijaz
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Korea
- Akash Kumar Bhoi
- Department of Electrical and Electronics Engineering, Sikkim Manipal Institute of Technology, Sikkim Manipal University, Majitar 737136, India
- Wonjoon Kim
- Division of Future Convergence (HCI Science Major), Dongduk Women’s University, Seoul 02748, Korea
- James Jin Kang
- School of Science, Edith Cowan University, Joondalup 6027, Australia