1. Hosseinimanesh G, Alsheghri A, Keren J, Cheriet F, Guibault F. Personalized dental crown design: A point-to-mesh completion network. Med Image Anal 2025;101:103439. [PMID: 39705822] [DOI: 10.1016/j.media.2024.103439]
Abstract
Designing dental crowns with computer-aided design software in dental laboratories is complex and time-consuming. Using real clinical datasets, we developed an end-to-end deep learning model that automatically generates personalized dental crown meshes. The input context includes the prepared tooth, its adjacent teeth, and the two closest teeth in the opposing jaw. The training set contains this context, the ground truth crown, and the extracted margin line. Our model consists of two components: First, a feature extractor converts the input point cloud into a set of local feature vectors, which are then fed into a transformer-based model to predict the geometric features of the crown. Second, a point-to-mesh module generates a dense array of points with normal vectors, and a differentiable Poisson surface reconstruction method produces an accurate crown mesh. Training is conducted with three losses: (1) a customized margin line loss; (2) a contrastive-based Chamfer distance loss; and (3) a mean square error (MSE) loss to control mesh quality. We compare our method with our previously published method, Dental Mesh Completion (DMC). Extensive testing confirms our method's superiority, achieving a 12.32% reduction in Chamfer distance and a 46.43% reduction in MSE compared to DMC. Margin line loss improves Chamfer distance by 5.59%.
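The Chamfer distance used as the headline metric above is, in one common squared-distance variant, computed as below; this is a minimal sketch for illustration, not the authors' implementation, whose exact formulation may differ.

```python
# Symmetric Chamfer distance between two point clouds (squared-distance
# variant). Illustrative only; the paper's exact formulation may differ.
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(p: np.ndarray, q: np.ndarray) -> float:
    """p: (N, 3) and q: (M, 3) arrays of 3D points."""
    d_pq, _ = cKDTree(q).query(p)  # nearest neighbor in q for each point of p
    d_qp, _ = cKDTree(p).query(q)  # nearest neighbor in p for each point of q
    return float(np.mean(d_pq ** 2) + np.mean(d_qp ** 2))

# Example: distance between a predicted crown point cloud and a ground truth cloud.
pred = np.random.rand(2048, 3)
gt = np.random.rand(2048, 3)
print(chamfer_distance(pred, gt))
```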
Affiliation(s)
- Ammar Alsheghri
- Mechanical Engineering Department, King Fahd University of Petroleum and Minerals (KFUPM), Dhahran, 31261, Kingdom of Saudi Arabia; Interdisciplinary Research Center for Biosystems and Machines, King Fahd University of Petroleum and Minerals (KFUPM), Dhahran, Kingdom of Saudi Arabia
2. Rekik A, Ben-Hamadou A, Smaoui O, Bouzguenda F, Pujades S, Boyer E. TSegLab: Multi-stage 3D dental scan segmentation and labeling. Comput Biol Med 2025;185:109535. [PMID: 39708498] [DOI: 10.1016/j.compbiomed.2024.109535]
Abstract
This study introduces a novel deep learning approach for 3D teeth scan segmentation and labeling, designed to enhance accuracy in computer-aided design (CAD) systems. Our method is organized into three key stages: coarse localization, fine teeth segmentation, and labeling. In the teeth localization stage, we employ a Mask-RCNN model to detect teeth in a rendered three-channel 2D representation of the input scan. For fine teeth segmentation, each detected tooth mesh is isomorphically mapped to a 2D harmonic parameter space and segmented with a Mask-RCNN model for precise crown delineation. Finally, for labeling, we propose a graph neural network that captures both the 3D shape and spatial distribution of the teeth, along with a new data augmentation technique to simulate missing teeth and teeth position variation during training. The method is evaluated using three key metrics: Teeth Localization Accuracy (TLA), Teeth Segmentation Accuracy (TSA), and Teeth Identification Rate (TIR). We tested our approach on the Teeth3DS dataset, consisting of 1800 intraoral 3D scans, and achieved a TLA of 98.45%, TSA of 98.17%, and TIR of 97.61%, outperforming existing state-of-the-art techniques. These results suggest that our approach significantly enhances the precision and reliability of automatic teeth segmentation and labeling in dental CAD applications. Link to the project page: https://crns-smartvision.github.io/tseglab.
Affiliation(s)
- Ahmed Rekik
- Digital Research Center of Sfax, Technopark of Sfax, Sakiet Ezzit, 3021 Sfax, Tunisia; ISSAT, Gafsa University, Sidi Ahmed Zarrouk University Campus, 2112 Gafsa, Tunisia; Laboratory of Signals, systeMs, aRtificial Intelligence and neTworkS, Technopark of Sfax, Sakiet Ezzit, 3021 Sfax, Tunisia
- Achraf Ben-Hamadou
- Digital Research Center of Sfax, Technopark of Sfax, Sakiet Ezzit, 3021 Sfax, Tunisia; Laboratory of Signals, systeMs, aRtificial Intelligence and neTworkS, Technopark of Sfax, Sakiet Ezzit, 3021 Sfax, Tunisia
- Oussama Smaoui
- Udini, 37 BD Aristide Briand, 13100 Aix-En-Provence, France
- Sergi Pujades
- Inria, Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, France
- Edmond Boyer
- Inria, Univ. Grenoble Alpes, CNRS, Grenoble INP, LJK, France
3. Liu Z, Lv Q, Lee CH, Shen L. Segmenting medical images with limited data. Neural Netw 2024;177:106367. [PMID: 38754215] [DOI: 10.1016/j.neunet.2024.106367]
Abstract
While computer vision has proven valuable for medical image segmentation, its application faces challenges such as limited dataset sizes and the complexity of effectively leveraging unlabeled images. To address these challenges, we present a novel semi-supervised, consistency-based approach termed the data-efficient medical segmenter (DEMS). The DEMS features an encoder-decoder architecture and incorporates the developed online automatic augmenter (OAA) and residual robustness enhancement (RRE) blocks. The OAA augments input data with various image transformations, thereby diversifying the dataset to improve the generalization ability. The RRE enriches feature diversity and introduces perturbations to create varied inputs for different decoders, thereby providing enhanced variability. Moreover, we introduce a sensitive loss to further enhance consistency across different decoders and stabilize the training process. Extensive experimental results on both our own and three public datasets affirm the effectiveness of DEMS. Under extreme data shortage scenarios, our DEMS achieves improvements in Dice score of 16.85% and 10.37% compared with the U-Net and the top-performing state-of-the-art method, respectively. Given its superior data efficiency, DEMS could present significant advancements in medical segmentation under small data regimes. The project homepage can be accessed at https://github.com/NUS-Tim/DEMS.
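The Dice score reported above follows the standard definition; a minimal sketch, assuming binary masks (not the authors' code):

```python
# Dice similarity coefficient between binary masks (standard definition;
# illustrative, not the authors' implementation).
import numpy as np

def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float((2.0 * intersection + eps) / (pred.sum() + target.sum() + eps))

mask_a = np.zeros((64, 64), dtype=np.uint8); mask_a[10:40, 10:40] = 1
mask_b = np.zeros((64, 64), dtype=np.uint8); mask_b[15:45, 15:45] = 1
print(dice_score(mask_a, mask_b))
```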
Affiliation(s)
- Zhaoshan Liu
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
- Qiujie Lv
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore; School of Intelligent Systems Engineering, Sun Yat-sen University, No. 66, Gongchang Road, Guangming District, Shenzhen 518107, China
- Chau Hung Lee
- Department of Radiology, Tan Tock Seng Hospital, 11 Jalan Tan Tock Seng, Singapore, 308433, Singapore
- Lei Shen
- Department of Mechanical Engineering, National University of Singapore, 9 Engineering Drive 1, Singapore, 117575, Singapore
4. Zhou Z, Chen Y, He A, Que X, Wang K, Yao R, Li T. NKUT: Dataset and Benchmark for Pediatric Mandibular Wisdom Teeth Segmentation. IEEE J Biomed Health Inform 2024;28:3523-3533. [PMID: 38557613] [DOI: 10.1109/jbhi.2024.3383222]
Abstract
Germectomy is a common surgery in pediatric dentistry to prevent the potential dangers caused by impacted mandibular wisdom teeth. Segmentation of mandibular wisdom teeth is a crucial step in surgery planning. However, manually segmenting teeth and bones from 3D volumes is time-consuming and may cause delays in treatment. Deep learning based medical image segmentation methods have demonstrated the potential to reduce the burden of manual annotations, but they still require a lot of well-annotated data for training. In this paper, we first curated a Cone Beam Computed Tomography (CBCT) dataset, NKUT, for the segmentation of pediatric mandibular wisdom teeth; this marks the first publicly available dataset in this domain. Second, we propose a semantic separation scale-specific feature fusion network named WTNet, which introduces two branches to address the teeth and bones segmentation tasks. In WTNet, we design an Input Enhancement (IE) block and a Teeth-Bones Feature Separation (TBFS) block to solve the feature confusion and semantic blur problems in our task. Experimental results suggest that WTNet performs better on NKUT than previous state-of-the-art segmentation methods (such as TransUnet), with a maximum DSC lead of nearly 16%.
5. Broll A, Rosentritt M, Schlegl T, Goldhacker M. A data-driven approach for the partial reconstruction of individual human molar teeth using generative deep learning. Front Artif Intell 2024;7:1339193. [PMID: 38690195] [PMCID: PMC11058210] [DOI: 10.3389/frai.2024.1339193]
Abstract
Background and objective: Due to the high prevalence of dental caries, fixed dental restorations are regularly required to restore compromised teeth or replace missing teeth while retaining function and aesthetic appearance. The fabrication of dental restorations, however, remains challenging due to the complexity of the human masticatory system as well as the unique morphology of each individual dentition. Adaptation and reworking are frequently required during the insertion of fixed dental prostheses (FDPs), which increase cost and treatment time. This article proposes a data-driven approach for the partial reconstruction of occlusal surfaces based on a data set that comprises 92 3D mesh files of full dental crown restorations. Methods: A Generative Adversarial Network (GAN) is considered for the given task in view of its ability to represent extensive data sets in an unsupervised manner with a wide variety of applications. Having demonstrated good capabilities in terms of image quality and training stability, StyleGAN-2 has been chosen as the main network for generating the occlusal surfaces. A 2D projection method is proposed in order to generate 2D representations of the provided 3D tooth data set for integration with the StyleGAN architecture. The reconstruction capabilities of the trained network are demonstrated by means of 4 common inlay types using a Bayesian image reconstruction method. This involves pre-processing the data to extract the information about the tooth preparations required by the method, as well as modifying the initial reconstruction loss. Results: The reconstruction process yields satisfactory visual and quantitative results for all preparations, with a root mean square error (RMSE) ranging from 0.02 mm to 0.18 mm. When compared against a clinical procedure for CAD inlay fabrication, a group of dentists preferred the GAN-based restorations for 3 of the 4 inlay geometries. Conclusions: This article shows the effectiveness of the StyleGAN architecture with a downstream optimization process for the reconstruction of 4 different inlay geometries. The independence of the reconstruction process from the initial training of the GAN enables the application of the method to arbitrary inlay geometries without time-consuming retraining of the GAN.
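The RMSE reported above is a point-wise surface error; a generic sketch, assuming corresponding surface samples have already been registered (the authors' projection and registration pipeline is not reproduced):

```python
# Root mean square error between corresponding surface height samples.
# Generic sketch; assumes point correspondence has already been established.
import numpy as np

def rmse(reconstructed: np.ndarray, reference: np.ndarray) -> float:
    return float(np.sqrt(np.mean((reconstructed - reference) ** 2)))

recon = np.random.rand(10000) * 0.2                    # heights in mm (dummy data)
ref = recon + np.random.normal(0, 0.05, recon.shape)   # perturbed reference
print(f"RMSE: {rmse(recon, ref):.3f} mm")
```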
Affiliation(s)
- Alexander Broll
- Department of Prosthetic Dentistry, University Hospital Regensburg, Regensburg, Germany
- Faculty of Mechanical Engineering, Ostbayerische Technische Hochschule Regensburg, Regensburg, Germany
- Martin Rosentritt
- Department of Prosthetic Dentistry, University Hospital Regensburg, Regensburg, Germany
- Thomas Schlegl
- Faculty of Mechanical Engineering, Ostbayerische Technische Hochschule Regensburg, Regensburg, Germany
- Markus Goldhacker
- Faculty of Mechanical Engineering, Ostbayerische Technische Hochschule Regensburg, Regensburg, Germany
6. Cho JH, Çakmak G, Yi Y, Yoon HI, Yilmaz B, Schimmel M. Tooth morphology, internal fit, occlusion and proximal contacts of dental crowns designed by deep learning-based dental software: A comparative study. J Dent 2024;141:104830. [PMID: 38163455] [DOI: 10.1016/j.jdent.2023.104830]
Abstract
OBJECTIVES This study compared the tooth morphology, internal fit, occlusion, and proximal contacts of dental crowns automatically generated via two deep learning (DL)-based dental software systems with those manually designed by an experienced dental technician using conventional software. METHODS Thirty partial arch scans of prepared posterior teeth were used. The crowns were designed using two DL-based methods (AA and AD) and a technician-based method (NC). The crown design outcomes were three-dimensionally compared, focusing on tooth morphology, internal fit, occlusion, and proximal contacts, by calculating the geometric relationship. Statistical analysis utilized the independent t-test, Mann-Whitney test, one-way ANOVA, and Kruskal-Wallis test with post hoc pairwise comparisons (α = 0.05). RESULTS The AA and AD groups, with the NC group as a reference, exhibited no significant tooth morphology discrepancies across entire external or occlusal surfaces. The AD group exhibited higher root mean square and positive average values on the axial surface (P < .05). The AD and NC groups exhibited a better internal fit than the AA group (P < .001). The cusp angles were similar across all groups (P = .065). The NC group yielded more occlusal contact points than the AD group (P = .006). Occlusal and proximal contact intensities varied among the groups (both P < .001). CONCLUSIONS Crowns designed using both DL-based software programs exhibited similar morphologies on the occlusal and axial surfaces; however, they differed in internal fit, occlusion, and proximal contacts. Their overall performance was clinically comparable to that of the technician-based method in terms of internal fit and the number of occlusal contact points. CLINICAL SIGNIFICANCE DL-based dental software for crown design can streamline the digital workflow in restorative dentistry, ensuring clinically acceptable outcomes in tooth morphology, internal fit, occlusion, and proximal contacts. It can minimize the need for additional design optimization by a dental technician.
Affiliation(s)
- Jun-Ho Cho
- Department of Prosthodontics, Seoul National University Dental Hospital, Seoul, Republic of Korea
- Gülce Çakmak
- Department of Reconstructive Dentistry and Gerodontology, School of Dental Medicine, University of Bern, Bern, Switzerland
- Yuseung Yi
- Department of Prosthodontics, Seoul National University Dental Hospital, Seoul, Republic of Korea
- Hyung-In Yoon
- Department of Reconstructive Dentistry and Gerodontology, School of Dental Medicine, University of Bern, Bern, Switzerland; Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Republic of Korea
- Burak Yilmaz
- Department of Reconstructive Dentistry and Gerodontology, School of Dental Medicine, University of Bern, Bern, Switzerland; Department of Restorative, Preventive and Pediatric Dentistry, School of Dental Medicine, University of Bern, Bern, Switzerland; Division of Restorative and Prosthetic Dentistry, The Ohio State University, Columbus, OH, USA
- Martin Schimmel
- Department of Reconstructive Dentistry and Gerodontology, School of Dental Medicine, University of Bern, Bern, Switzerland
7. Cho JH, Yi Y, Choi J, Ahn J, Yoon HI, Yilmaz B. Time efficiency, occlusal morphology, and internal fit of anatomic contour crowns designed by dental software powered by generative adversarial network: A comparative study. J Dent 2023;138:104739. [PMID: 37804938] [DOI: 10.1016/j.jdent.2023.104739]
Abstract
OBJECTIVES To evaluate the time efficiency, occlusal morphology, and internal fit of dental crowns designed using generative adversarial network (GAN)-based dental software compared to conventional dental software. METHODS Thirty datasets of partial arch scans of prepared posterior teeth were analyzed. Each crown was designed on each abutment using GAN-based software (AI) and conventional dental software (non-AI). The AI and non-AI groups were compared in terms of time efficiency by measuring the elapsed working time. The difference in the occlusal morphology of the crowns before and after design optimization, and the internal fit of the crown to the prepared abutment, were also evaluated by superimposition for each software program. Data were analyzed using independent t tests or the Mann-Whitney test (α=.05). RESULTS The working time was significantly less for the AI group than the non-AI group at T1, T5, and T6 (P≤.043). The working time with AI was significantly shorter at T1, T3, T5, and T6 for the intraoral scan (P≤.036). Only at T2 (P≤.001) did the cast scan show a significant difference between the two groups. The crowns in the AI group showed less deviation in occlusal morphology and significantly better internal fit to the abutment than those in the non-AI group (both P<.001). CONCLUSIONS Crowns designed by the AI software showed better outcomes than those designed by the non-AI software in terms of time efficiency, deviation in occlusal morphology, and internal fit. CLINICAL SIGNIFICANCE The GAN-based software showed better time efficiency and less deviation in occlusal morphology during the design process than the conventional software, suggesting a higher probability of optimized crown design outcomes.
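The group comparisons above use standard nonparametric testing; a minimal sketch with SciPy on dummy timing data (not the study's measurements):

```python
# Two-sample comparison of design times with the Mann-Whitney U test.
# Dummy data for illustration; not the study's measurements.
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(0)
ai_times = rng.normal(120, 15, 30)      # seconds, AI group (synthetic)
non_ai_times = rng.normal(150, 20, 30)  # seconds, non-AI group (synthetic)

stat, p_value = mannwhitneyu(ai_times, non_ai_times, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p_value:.4f}")  # significant if p < .05
```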
Affiliation(s)
- Jun-Ho Cho
- Department of Prosthodontics, Seoul National University Dental Hospital, Seoul, Republic of Korea
- Yuseung Yi
- Department of Prosthodontics, Seoul National University Dental Hospital, Seoul, Republic of Korea
- Jinhyeok Choi
- Department of Biomedical Sciences, Seoul National University, Seoul, Republic of Korea
- Junseong Ahn
- Department of Computer Science, Korea University, Seoul, Republic of Korea
- Hyung-In Yoon
- Department of Prosthodontics, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Republic of Korea; Department of Reconstructive Dentistry and Gerodontology, School of Dental Medicine, University of Bern, Bern, Switzerland
- Burak Yilmaz
- Department of Reconstructive Dentistry and Gerodontology, School of Dental Medicine, University of Bern, Bern, Switzerland; Department of Restorative, Preventive and Pediatric Dentistry, School of Dental Medicine, University of Bern, Bern, Switzerland; Division of Restorative and Prosthetic Dentistry, The Ohio State University, Columbus, Ohio, United States
8. Tasnadi E, Sliz-Nagy A, Horvath P. Structure preserving adversarial generation of labeled training samples for single-cell segmentation. Cell Rep Methods 2023;3:100592. [PMID: 37725984] [PMCID: PMC10545934] [DOI: 10.1016/j.crmeth.2023.100592]
Abstract
We introduce a generative data augmentation strategy to improve the accuracy of instance segmentation of microscopy data for complex tissue structures. Our pipeline uses regular and conditional generative adversarial networks (GANs) for image-to-image translation to construct synthetic microscopy images along with their corresponding masks, simulating the distribution and shape of the objects as well as their appearance. The synthetic samples are then used for training an instance segmentation network (for example, StarDist or Cellpose). We show on two single-cell-resolution tissue datasets that our method improves the accuracy of downstream instance segmentation tasks compared with traditional training strategies using either the raw data or basic augmentations. We also compare the quality of the object masks with those generated by a traditional cell population simulation method, finding that our synthesized masks are closer to the ground truth in terms of Fréchet inception distance.
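The Fréchet inception distance cited above compares Gaussian fits to two sets of feature embeddings; a minimal closed-form sketch, assuming the embeddings have already been extracted (e.g., by an Inception network):

```python
# Fréchet inception distance between two sets of feature embeddings.
# Assumes features were already extracted; illustrative, not the paper's code.
import numpy as np
from scipy.linalg import sqrtm

def fid(feat_a: np.ndarray, feat_b: np.ndarray) -> float:
    mu_a, mu_b = feat_a.mean(axis=0), feat_b.mean(axis=0)
    cov_a = np.cov(feat_a, rowvar=False)
    cov_b = np.cov(feat_b, rowvar=False)
    covmean = sqrtm(cov_a @ cov_b)
    if np.iscomplexobj(covmean):      # drop tiny imaginary parts from sqrtm
        covmean = covmean.real
    return float(((mu_a - mu_b) ** 2).sum()
                 + np.trace(cov_a + cov_b - 2.0 * covmean))

real_feats = np.random.rand(500, 64)    # e.g., features of real masks
synth_feats = np.random.rand(500, 64)   # features of synthesized masks
print(fid(real_feats, synth_feats))
```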
Affiliation(s)
- Ervin Tasnadi
- Synthetic and Systems Biology Unit, Biological Research Centre, Eötvös Loránd Research Network, 6726 Szeged, Hungary; Doctoral School of Computer Science, University of Szeged, 6720 Szeged, Hungary; Single-Cell Technologies, Ltd, 6726 Szeged, Hungary
- Alex Sliz-Nagy
- Synthetic and Systems Biology Unit, Biological Research Centre, Eötvös Loránd Research Network, 6726 Szeged, Hungary
- Peter Horvath
- Synthetic and Systems Biology Unit, Biological Research Centre, Eötvös Loránd Research Network, 6726 Szeged, Hungary; Single-Cell Technologies, Ltd, 6726 Szeged, Hungary; Institute for Molecular Medicine Finland (FIMM), University of Helsinki, 00014 Helsinki, Finland
9. Xing X, Papanastasiou G, Walsh S, Yang G. Less Is More: Unsupervised Mask-Guided Annotated CT Image Synthesis With Minimum Manual Segmentations. IEEE Trans Med Imaging 2023;42:2566-2576. [PMID: 37030699] [DOI: 10.1109/tmi.2023.3260169]
Abstract
As a pragmatic data augmentation tool, data synthesis has generally returned dividends in performance for deep learning based medical image analysis. However, generating corresponding segmentation masks for synthetic medical images is laborious and subjective. To obtain paired synthetic medical images and segmentations, conditional generative models that use segmentation masks as synthesis conditions were proposed. However, these segmentation mask-conditioned generative models still relied on large, varied, and labeled training datasets, and they could only provide limited constraints on human anatomical structures, leading to unrealistic image features. Moreover, the invariant pixel-level conditions could reduce the variety of synthetic lesions and thus reduce the efficacy of data augmentation. To address these issues, in this work, we propose a novel strategy for medical image synthesis, namely Unsupervised Mask (UM)-guided synthesis, to obtain both synthetic images and segmentations using limited manual segmentation labels. We first develop a superpixel based algorithm to generate unsupervised structural guidance and then design a conditional generative model to synthesize images and annotations simultaneously from those unsupervised masks in a semi-supervised multi-task setting. In addition, we devise a multi-scale multi-task Fréchet Inception Distance (MM-FID) and multi-scale multi-task standard deviation (MM-STD) to harness both fidelity and variety evaluations of synthetic CT images. With multiple analyses on different scales, we could produce stable image quality measurements with high reproducibility. Compared with segmentation mask guided synthesis, our UM-guided synthesis provided high-quality synthetic images with significantly higher fidelity, variety, and utility (by Wilcoxon signed-rank test).
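The unsupervised structural guidance described above is superpixel based; a minimal sketch using scikit-image's SLIC (a standard superpixel algorithm; the authors' exact procedure may differ):

```python
# Unsupervised structural mask from SLIC superpixels, in the spirit of the
# UM-guided synthesis above (the paper's exact algorithm may differ).
import numpy as np
from skimage.segmentation import slic

def unsupervised_mask(image: np.ndarray, n_segments: int = 200) -> np.ndarray:
    """Cluster a grayscale image into superpixels, then bin them by intensity."""
    labels = slic(image, n_segments=n_segments, compactness=10.0,
                  channel_axis=None, start_label=0)  # scikit-image >= 0.19
    uniq = np.unique(labels)
    means = np.array([image[labels == lab].mean() for lab in uniq])
    classes = np.digitize(means, np.quantile(means, [0.33, 0.66]))  # 3 bins
    mask = np.zeros_like(labels)
    for lab, cls in zip(uniq, classes):
        mask[labels == lab] = cls
    return mask

ct_slice = np.random.rand(128, 128)  # stand-in for a CT slice
print(unsupervised_mask(ct_slice).shape)
```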
10. Osuala R, Kushibar K, Garrucho L, Linardos A, Szafranowska Z, Klein S, Glocker B, Diaz O, Lekadir K. Data synthesis and adversarial networks: A review and meta-analysis in cancer imaging. Med Image Anal 2023;84:102704. [PMID: 36473414] [DOI: 10.1016/j.media.2022.102704]
Abstract
Despite technological and medical advances, the detection, interpretation, and treatment of cancer based on imaging data continue to pose significant challenges. These include inter-observer variability, class imbalance, dataset shifts, inter- and intra-tumour heterogeneity, malignancy determination, and treatment effect uncertainty. Given the recent advancements in image synthesis, Generative Adversarial Networks (GANs), and adversarial training, we assess the potential of these technologies to address a number of key challenges of cancer imaging. We categorise these challenges into (a) data scarcity and imbalance, (b) data access and privacy, (c) data annotation and segmentation, (d) cancer detection and diagnosis, and (e) tumour profiling, treatment planning and monitoring. Based on our analysis of 164 publications that apply adversarial training techniques in the context of cancer imaging, we highlight multiple underexplored solutions with research potential. We further contribute the Synthesis Study Trustworthiness Test (SynTRUST), a meta-analysis framework for assessing the validation rigour of medical image synthesis studies. SynTRUST is based on 26 concrete measures of thoroughness, reproducibility, usefulness, scalability, and tenability. Based on SynTRUST, we analyse 16 of the most promising cancer imaging challenge solutions and observe a high validation rigour in general, but also several desirable improvements. With this work, we strive to bridge the gap between the needs of the clinical cancer imaging community and the current and prospective research on data synthesis and adversarial networks in the artificial intelligence community.
Affiliation(s)
- Richard Osuala
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Kaisar Kushibar
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Lidia Garrucho
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Akis Linardos
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Zuzanna Szafranowska
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Stefan Klein
- Biomedical Imaging Group Rotterdam, Department of Radiology & Nuclear Medicine, Erasmus MC, Rotterdam, The Netherlands
- Ben Glocker
- Biomedical Image Analysis Group, Department of Computing, Imperial College London, UK
- Oliver Diaz
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
- Karim Lekadir
- Artificial Intelligence in Medicine Lab (BCN-AIM), Facultat de Matemàtiques i Informàtica, Universitat de Barcelona, Spain
11. Conditional TransGAN-Based Data Augmentation for PCB Electronic Component Inspection. Comput Intell Neurosci 2023;2023:2024237. [PMID: 36660560] [PMCID: PMC9845033] [DOI: 10.1155/2023/2024237]
Abstract
Automatic recognition and positioning of electronic components on PCBs can enhance quality inspection efficiency for electronic products during manufacturing. Efficient PCB inspection requires identification and classification of PCB components as well as defects for better quality assurance. The small size of the electronic component and PCB defect targets means that there are fewer feature areas for the neural network to detect, and the complex grain backgrounds of both datasets can cause significant interference, making the target detection task challenging. Meanwhile, the detection performance of deep learning models is significantly impacted due to the lack of samples. In this paper, we propose conditional TransGAN (cTransGAN), a generative model for data augmentation, which enhances the quantity and diversity of the original training set and further improves the accuracy of PCB electronic component recognition. The design of cTransGAN brings together the merits of both conditional GAN and TransGAN, allowing a trained model to generate high-quality synthetic images conditioned on the class embeddings. To validate the proposed method, we conduct extensive experiments on two datasets, including a self-developed dataset for PCB component detection and an existing dataset for PCB defect detection. Also, we have evaluated three existing object detection algorithms, including Faster R-CNN ResNet101, YOLO V3 DarkNet-53, and SCNet ResNet101, and each is validated under four experimental settings to form an ablation study. Results demonstrate that the proposed cTransGAN can effectively enhance the quality and diversity of the training set, leading to superior performance on both tasks. We have open-sourced the project to facilitate further studies.
12. Nasir ES, Parvaiz A, Fraz MM. Nuclei and glands instance segmentation in histology images: a narrative review. Artif Intell Rev 2022. [DOI: 10.1007/s10462-022-10372-5]
13. Fu Z, Li J, Hua Z. DEAU-Net: Attention networks based on dual encoder for medical image segmentation. Comput Biol Med 2022;150:106197. [PMID: 37859289] [DOI: 10.1016/j.compbiomed.2022.106197]
Abstract
In recent years, variant networks derived from the U-Net architecture have achieved better results in the field of medical image segmentation. However, we found during our experiments that the current mainstream networks still have certain shortcomings in the learning and extraction of detailed features. Therefore, in this paper, we propose a feature attention network based on a dual encoder. In the encoder stage, a dual encoder is used to perform macro feature extraction and micro feature extraction respectively. Feature attention fusion is then performed, resulting in a network that not only performs well in the recognition of macro features but is also significantly improved in the processing of micro features. The network is divided into three stages: (1) learning and capture of macro features and detail features with dual encoders; (2) mutual complementation of macro features and detail features through the residual attention module; (3) fusion of the two feature streams and output of the final prediction result. We conducted experiments with DEAU-Net on two datasets; the comparison results show better performance in the processing of both edge detail features and macro features.
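A dual-encoder design of this kind can be illustrated with a minimal PyTorch sketch (an assumption-laden toy, not the authors' DEAU-Net architecture): a macro branch with a large receptive field, a micro branch preserving detail, and channel attention over the fused features.

```python
# Minimal dual-encoder fusion with channel attention, in the spirit of the
# macro/micro feature design described above (not the authors' architecture).
import torch
import torch.nn as nn

class DualEncoderFusion(nn.Module):
    def __init__(self, in_ch: int = 1, feat_ch: int = 32):
        super().__init__()
        # Macro branch: large kernel and stride for a wide receptive field.
        self.macro = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 7, stride=2, padding=3), nn.ReLU(),
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False))
        # Micro branch: small kernels preserve fine detail.
        self.micro = nn.Sequential(
            nn.Conv2d(in_ch, feat_ch, 3, padding=1), nn.ReLU())
        # Channel attention over the concatenated features.
        self.attn = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(2 * feat_ch, 2 * feat_ch, 1), nn.Sigmoid())

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        f = torch.cat([self.macro(x), self.micro(x)], dim=1)
        return f * self.attn(f)  # reweight channels before decoding

print(DualEncoderFusion()(torch.randn(1, 1, 64, 64)).shape)
```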
Affiliation(s)
- Zhaojin Fu
- School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai 264005, China; School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China
- Jinjiang Li
- School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai 264005, China; School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China; Co-innovation Center of Shandong Colleges and Universities: Future Intelligent Computing, Yantai 264005, China
- Zhen Hua
- School of Information and Electronic Engineering, Shandong Technology and Business University, Yantai 264005, China; School of Computer Science and Technology, Shandong Technology and Business University, Yantai 264005, China; Co-innovation Center of Shandong Colleges and Universities: Future Intelligent Computing, Yantai 264005, China
14. Zhang X, Angelini ED, Haghpanah FS, Laine AF, Sun Y, Hiura GT, Dashnaw SM, Prince MR, Hoffman EA, Ambale-Venkatesh B, Lima JA, Wild JM, Hughes EW, Barr RG, Shen W. Quantification of lung ventilation defects on hyperpolarized MRI: The Multi-Ethnic Study of Atherosclerosis (MESA) COPD study. Magn Reson Imaging 2022;92:140-149. [PMID: 35777684] [PMCID: PMC9957614] [DOI: 10.1016/j.mri.2022.06.016]
Abstract
PURPOSE To develop an end-to-end deep learning (DL) framework to segment ventilation defects on pulmonary hyperpolarized MRI. MATERIALS AND METHODS The Multi-Ethnic Study of Atherosclerosis Chronic Obstructive Pulmonary Disease (COPD) study is a nested longitudinal case-control study in older smokers. Between February 2016 and July 2017, 56 participants (age, mean ± SD, 74 ± 8 years; 34 men) underwent same breath-hold proton (1H) and helium (3He) MRI, which were annotated for non-ventilated, hypo-ventilated, and normal-ventilated lungs. In this retrospective DL study, 820 1H and 3He slices from 42/56 (75%) participants were randomly selected for training, with the remaining 14/56 (25%) reserved for testing. Full lung masks were segmented using a traditional U-Net on 1H MRI and were imported into a cascaded U-Net, which was used to segment ventilation defects on 3He MRI. Models were trained with conventional data augmentation (DA) and generative adversarial network (GAN)-based DA. RESULTS Conventional DA improved 1H and 3He MRI segmentation over the non-DA model (P = 0.007 to 0.03), but GAN-based DA did not yield further improvement. The cascaded U-Net improved non-ventilated lung segmentation (P < 0.005). Dice similarity coefficients (DSCs) between manually and DL-segmented full lung, non-ventilated, hypo-ventilated, and normal-ventilated regions were 0.965 ± 0.010, 0.840 ± 0.057, 0.715 ± 0.175, and 0.883 ± 0.060, respectively. We observed no statistically significant difference in DSCs between participants with and without COPD (P = 0.41, 0.06, and 0.18 for non-ventilated, hypo-ventilated, and normal-ventilated regions, respectively). CONCLUSION The proposed cascaded U-Net framework generated fully-automated segmentation of ventilation defects on 3He MRI among older smokers with and without COPD that is consistent with our reference method.
Affiliation(s)
- Xuzhe Zhang
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
- Elsa D Angelini
- Department of Biomedical Engineering, Columbia University, New York, NY, USA; NIHR Imperial BRC, ITMAT Data Science Group, Department of Metabolism, Digestion and Reproduction, Imperial College, London, UK
- Fateme S Haghpanah
- Department of Computer Science, University of Toronto, Toronto, Ontario, Canada
- Andrew F Laine
- Department of Biomedical Engineering, Columbia University, New York, NY, USA
- Yanping Sun
- Department of Medicine, Columbia University Irving Medical Center, New York, NY, USA
- Grant T Hiura
- Department of Medicine, Columbia University Irving Medical Center, New York, NY, USA
- Stephen M Dashnaw
- Department of Radiology, Columbia University Irving Medical Center, New York, NY, USA
- Martin R Prince
- Department of Radiology, Columbia University Irving Medical Center, New York, NY, USA; Department of Radiology, Weill Cornell Medicine, Cornell University, New York, NY, USA
- Eric A Hoffman
- Department of Radiology, University of Iowa, Iowa City, IA, USA; Department of Biomedical Engineering, University of Iowa, Iowa City, IA, USA; Department of Medicine, University of Iowa, Iowa City, IA, USA
- Joao A Lima
- School of Medicine, Johns Hopkins University, Baltimore, MD, USA
- Jim M Wild
- POLARIS, Department of Infection, Immunity and Cardiovascular Disease, University of Sheffield, Sheffield, UK
- Emlyn W Hughes
- Department of Physics, Columbia University, New York, NY, USA
- R Graham Barr
- Department of Medicine, Columbia University Irving Medical Center, New York, NY, USA; Department of Epidemiology, Columbia University Irving Medical Center, New York, NY, USA
- Wei Shen
- Division of Pediatric Gastroenterology, Hepatology and Nutrition, Columbia University Irving Medical Center, New York, NY, USA; Institute of Human Nutrition, Columbia University Irving Medical Center, New York, NY, USA; Columbia Magnetic Resonance Research Center (CMRRC), Columbia University, New York, NY, USA
15. Iqbal A, Sharif M, Yasmin M, Raza M, Aftab S. Generative adversarial networks and its applications in the biomedical image segmentation: a comprehensive survey. Int J Multimed Inf Retr 2022;11:333-368. [PMID: 35821891] [PMCID: PMC9264294] [DOI: 10.1007/s13735-022-00240-x]
Abstract
Recent advancements with deep generative models have shown significant potential in the tasks of image synthesis, detection, segmentation, and classification. Segmenting medical images is considered a primary challenge in the biomedical imaging field. Various GAN-based models have been proposed in the literature to resolve medical segmentation challenges. Our search identified 151 papers; after twofold screening, 138 papers were selected for the final survey. A comprehensive survey is conducted on the application of GANs to medical image segmentation, primarily focused on the various GAN-based models, performance metrics, loss functions, datasets, augmentation methods, paper implementations, and source codes. Secondly, this paper provides a detailed overview of GAN applications in the segmentation of different human diseases. We conclude our research with a critical discussion, limitations of GANs, and suggestions for future directions. We hope this survey is beneficial and increases awareness of GAN implementations for biomedical image segmentation tasks.
Affiliation(s)
- Ahmed Iqbal
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Muhammad Sharif
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mussarat Yasmin
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Mudassar Raza
- Department of Computer Science, COMSATS University Islamabad, Wah Campus, Pakistan
- Shabib Aftab
- Department of Computer Science, Virtual University of Pakistan, Lahore, Pakistan
16. Zak J, Grzeszczyk MK, Pater A, Roszkowiak L, Siemion K, Korzynska A. Cell image augmentation for classification task using GANs on Pap smear dataset. Biocybern Biomed Eng 2022. [DOI: 10.1016/j.bbe.2022.07.003]
17. Oulefki A, Agaian S, Trongtirakul T, Benbelkacem S, Aouam D, Zenati-Henda N, Abdelli ML. Virtual Reality visualization for computerized COVID-19 lesion segmentation and interpretation. Biomed Signal Process Control 2022;73:103371. [PMID: 34840591] [PMCID: PMC8610934] [DOI: 10.1016/j.bspc.2021.103371]
Abstract
Coronavirus disease (COVID-19) is a severe infectious disease that causes respiratory illness and has had devastating medical and economic consequences globally. Therefore, early and precise diagnosis is critical to control disease progression and management. Compared to the very popular RT-PCR (reverse-transcription polymerase chain reaction) method, chest CT imaging is a more consistent, sensitive, and fast approach for identifying and managing infected COVID-19 patients, specifically in the epidemic area. CT images use computational methods to combine 2D X-ray images and transform them into 3D images. One major drawback of CT scans in diagnosing COVID-19 is the creation of false negatives, especially early in infection. This article aims to combine novel CT imaging tools and Virtual Reality (VR) technology to create an automated system for accurately screening COVID-19 disease and navigating 3D visualizations of medical scenes. The key benefits of this system are: (a) it offers stereoscopic depth perception; (b) it gives better insight and comprehension into the overall imaging data; (c) it allows doctors to visualize the 3D models, manipulate them, study the interior 3D data, and perform several kinds of measurements; and finally (d) it has the capacity for real-time interactivity and accurately visualizes dynamic 3D volumetric data. The tool provides novel visualizations for medical practitioners to identify and analyze changes in the shape of COVID-19 infections. The second objective of this work is to generate, for the first time, an African-patient COVID-19 CT scan dataset containing scans of 224 patients positive for infection and 70 regular patients. Computer simulations demonstrate the proposed method's effectiveness compared with state-of-the-art baseline methods. The results have also been evaluated with medical professionals. The developed system could be used for professional medical education and training and as a telehealth VR platform.
Affiliation(s)
- Adel Oulefki
- Centre de Développement des Technologies Avancées (CDTA), PO. Box 17 Baba Hassen, Algiers 16081, Algeria
- Sos Agaian
- Dept. of Computer Science, College of Staten Island, New York, 2800 Victory Blvd Staten Island, New York 10314, USA
- Thaweesak Trongtirakul
- Faculty of Industrial Education, Rajamangala University of Technology Phra Nakhon, 399 Samsen Rd. Vachira Phayaban, Dusit, Bangkok 10300, Thailand
- Samir Benbelkacem
- Centre de Développement des Technologies Avancées (CDTA), PO. Box 17 Baba Hassen, Algiers 16081, Algeria
- Djamel Aouam
- Centre de Développement des Technologies Avancées (CDTA), PO. Box 17 Baba Hassen, Algiers 16081, Algeria
- Nadia Zenati-Henda
- Centre de Développement des Technologies Avancées (CDTA), PO. Box 17 Baba Hassen, Algiers 16081, Algeria
18. Platscher M, Zopes J, Federau C. Image translation for medical image generation: Ischemic stroke lesion segmentation. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103283]
19. Jose L, Liu S, Russo C, Nadort A, Di Ieva A. Generative Adversarial Networks in Digital Pathology and Histopathological Image Processing: A Review. J Pathol Inform 2021;12:43. [PMID: 34881098] [PMCID: PMC8609288] [DOI: 10.4103/jpi.jpi_103_20]
Abstract
Digital pathology is gaining prominence among researchers with developments in advanced imaging modalities and new technologies. Generative adversarial networks (GANs) are a recent development in the field of artificial intelligence and, since their inception, have attracted considerable interest in digital pathology. GANs and their extensions have opened several ways to tackle many challenging histopathological image processing problems such as color normalization, virtual staining, ink removal, image enhancement, automatic feature extraction, segmentation of nuclei, domain adaptation and data augmentation. This paper reviews recent advances in histopathological image processing using GANs, with special emphasis on future perspectives related to the use of such techniques. The papers included in this review were retrieved by conducting a keyword search on Google Scholar and manually selecting papers on the subject of H&E stained digital pathology images for histopathological image processing. In the first part, we describe recent literature that uses GANs in various image preprocessing tasks such as stain normalization, virtual staining, image enhancement, ink removal, and data augmentation. In the second part, we describe literature that uses GANs for image analysis, such as nuclei detection, segmentation, and feature extraction. This review illustrates the role of GANs in digital pathology with the objective of triggering new research on the application of generative models in future digital pathology informatics.
Affiliation(s)
- Laya Jose
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
- Sidong Liu
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Australian Institute of Health Innovation, Centre for Health Informatics, Macquarie University, Sydney, Australia
- Carlo Russo
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
- Annemarie Nadort
- ARC Centre of Excellence for Nanoscale Biophotonics, Macquarie University, Sydney, Australia
- Department of Physics and Astronomy, Faculty of Science and Engineering, Macquarie University, Sydney, Australia
- Antonio Di Ieva
- Computational NeuroSurgery (CNS) Lab, Macquarie Medical School, Faculty of Medicine, Health and Human Sciences, Macquarie University, Sydney, Australia
20. Li Y, Cui J, Sheng Y, Liang X, Wang J, Chang EIC, Xu Y. Whole brain segmentation with full volume neural network. Comput Med Imaging Graph 2021;93:101991. [PMID: 34634548] [DOI: 10.1016/j.compmedimag.2021.101991]
Abstract
Whole brain segmentation is an important neuroimaging task that segments the whole brain volume into anatomically labeled regions-of-interest. Convolutional neural networks have demonstrated good performance in this task. Existing solutions usually segment the brain image by classifying the voxels, or by labeling the slices or the sub-volumes separately. Their representation learning is based on parts of the whole volume, whereas their labeling result is produced by aggregation of partial segmentations. Learning and inference with incomplete information could lead to sub-optimal final segmentation results. To address these issues, we propose to adopt a full volume framework, which feeds the full volume brain image into the segmentation network and directly outputs the segmentation result for the whole brain volume. The framework makes use of complete information in each volume and can be implemented easily. An effective instance of this framework is given subsequently. We adopt the 3D high-resolution network (HRNet) for learning spatially fine-grained representations and the mixed precision training scheme for memory-efficient training. Extensive experiment results on a publicly available 3D MRI brain dataset show that our proposed model advances the state-of-the-art methods in terms of segmentation performance.
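The mixed precision training scheme mentioned above can be sketched with PyTorch's automatic mixed precision utilities (a generic sketch assuming a CUDA device, not the authors' training code):

```python
# Generic mixed precision training step with torch.cuda.amp, illustrating the
# memory-efficient scheme mentioned above (not the authors' training code).
import torch
import torch.nn as nn

model = nn.Conv3d(1, 4, kernel_size=3, padding=1).cuda()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
scaler = torch.cuda.amp.GradScaler()  # rescales gradients to avoid underflow

volume = torch.randn(1, 1, 32, 64, 64, device="cuda")   # full-volume input
target = torch.randint(0, 4, (1, 32, 64, 64), device="cuda")

optimizer.zero_grad()
with torch.cuda.amp.autocast():       # run forward pass in float16 where safe
    logits = model(volume)
    loss = criterion(logits, target)
scaler.scale(loss).backward()         # backward on the scaled loss
scaler.step(optimizer)
scaler.update()
```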
Affiliation(s)
- Yeshu Li
- Department of Computer Science, University of Illinois at Chicago, Chicago, IL 60607, United States
- Jonathan Cui
- Vacaville Christian Schools, Vacaville, CA 95687, United States
- Yilun Sheng
- Institute for Interdisciplinary Information Sciences, Tsinghua University, Beijing 100084, China; Microsoft Research, Beijing 100080, China
- Xiao Liang
- High School Affiliated to Renmin University of China, Beijing 100080, China
- Yan Xu
- School of Biological Science and Medical Engineering and Beijing Advanced Innovation Centre for Biomedical Engineering, Beihang University, Beijing 100191, China; Microsoft Research, Beijing 100080, China
21. Ding W, Nayak J, Swapnarekha H, Abraham A, Naik B, Pelusi D. Fusion of intelligent learning for COVID-19: A state-of-the-art review and analysis on real medical data. Neurocomputing 2021;457:40-66. [PMID: 34149184] [PMCID: PMC8206574] [DOI: 10.1016/j.neucom.2021.06.024]
Abstract
The unprecedented surge of a novel coronavirus in December 2019, named COVID-19 by the World Health Organization, has caused a serious impact on the health and socioeconomic activities of the public all over the world. Since its origin, the number of infected and deceased cases has been growing exponentially in almost all the affected countries of the world. The rapid spread of the novel coronavirus across the world has resulted in the scarcity of medical resources and overburdened hospitals. As a result, researchers and technocrats are continuously working across the world to develop efficient strategies that may assist governments and healthcare systems in controlling and managing the spread of the COVID-19 pandemic. Therefore, this study provides an extensive review of the ongoing strategies such as diagnosis, prediction, drug and vaccine development, and preventive measures used in combating COVID-19, along with the technologies used and their limitations. Moreover, this review also provides a comparative analysis of the distinct types of data, emerging technologies, approaches used in diagnosis and prediction of COVID-19, statistics of contact tracing apps, and vaccine production platforms used in the COVID-19 pandemic. Finally, the study highlights some challenges and pitfalls observed in the systematic review, which may assist researchers in developing more efficient strategies for controlling and managing the spread of COVID-19.
Affiliation(s)
- Weiping Ding
- School of Information Science and Technology, Nantong University, China
- Janmenjoy Nayak
- Aditya Institute of Technology and Management (AITAM), India
- H Swapnarekha
- Aditya Institute of Technology and Management (AITAM), India
- Veer Surendra Sai University of Technology, India
22. DGFAU-Net: Global feature attention upsampling network for medical image segmentation. Neural Comput Appl 2021. [DOI: 10.1007/s00521-021-05908-9]
23. Zhang R, Lu W, Wei X, Zhu J, Jiang H, Liu Z, Gao J, Li X, Yu J, Yu M, Yu R. A Progressive Generative Adversarial Method for Structurally Inadequate Medical Image Data Augmentation. IEEE J Biomed Health Inform 2021;26:7-16. [PMID: 34347609] [DOI: 10.1109/jbhi.2021.3101551]
Abstract
Generation-based data augmentation methods can overcome, to a certain extent, the challenge caused by the imbalance of medical image data. However, most current research focuses on images with a unified structure, which are easy to learn. Ultrasound images are different in that they are structurally inadequate, making it difficult for the structure to be captured by the generative network and resulting in generated images that lack structural legitimacy. Therefore, a progressive generative adversarial method for structurally inadequate medical image data augmentation is proposed in this paper, comprising a network and a strategy. Our Progressive Texture Generative Adversarial Network alleviates the adverse effect of completely truncating the reconstruction of structure and texture during the generation process and enhances the implicit association between structure and texture. The Image Data Augmentation Strategy based on Mask-Reconstruction overcomes data imbalance from a novel perspective, maintains the legitimacy of the structure in the generated data, and increases the diversity of disease data interpretably. Experiments demonstrate the effectiveness of our method for data augmentation and image reconstruction on structurally inadequate medical images, both qualitatively and quantitatively. Finally, weakly supervised segmentation of the lesion is an additional contribution of our method.
24. Lee H, Lee H, Hong H, Bae H, Lim JS, Kim J. Classification of focal liver lesions in CT images using convolutional neural networks with lesion information augmented patches and synthetic data augmentation. Med Phys 2021;48:5029-5046. [PMID: 34287951] [DOI: 10.1002/mp.15118]
Abstract
PURPOSE We propose a deep learning method that classifies focal liver lesions (FLLs) into cysts, hemangiomas, and metastases from portal phase abdominal CT images. We propose a synthetic data augmentation process to alleviate the class imbalance and the Lesion INformation Augmented (LINA) patch to improve learning efficiency. METHODS A dataset of 502 portal phase CT scans of 1,290 FLLs was used. First, to alleviate the class imbalance and to diversify the training data patterns, we suggest synthetic training data augmentation using DCGAN-based lesion mask synthesis and pix2pix-based mask-to-image translation. Second, to improve the learning efficiency of convolutional neural networks (CNNs) for small lesions, we propose a novel type of input patch termed the LINA patch, which emphasizes the lesion texture information while also maintaining the lesion boundary information in the patches. Third, we construct a multi-scale CNN through a model ensemble of ResNet-18 CNNs trained on LINA patches of various mini-patch sizes. RESULTS The experiments demonstrate that (a) the synthetic data augmentation method shows characteristics that differ from but complement those of conventional real data augmentation in augmenting data distributions, (b) the proposed LINA patches improve classification performance compared with existing types of CNN input patches, owing to the enhanced texture and boundary information in the small lesions, and (c) through an ensemble of LINA patch-trained CNNs with different mini-patch sizes, the multi-scale CNN further improves overall classification performance. As a result, the proposed method achieved an accuracy of 87.30%, showing improvements of 10.81%p and 15.0%p compared with the conventional image patch-trained CNN and the texture feature-trained SVM, respectively. CONCLUSIONS The proposed synthetic data augmentation method shows promising results in improving data diversity and class imbalance, and the proposed LINA patches enhance learning efficiency compared with existing input image patches.
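One plausible reading of the LINA patch idea (an assumption, not the paper's exact construction) is to stack the lesion mask as a second channel of each CT patch so texture and boundary cues enter the CNN together:

```python
# Illustrative lesion-information-augmented patch: stack the CT patch with its
# lesion mask so texture and boundary cues enter the CNN together. This is one
# reading of the LINA idea, not the paper's exact construction.
import numpy as np

def lesion_patch(ct: np.ndarray, mask: np.ndarray, center, size: int = 32):
    """Extract a (2, size, size) patch around a lesion center."""
    r, c = center
    h = size // 2
    img = ct[r - h:r + h, c - h:c + h]
    les = mask[r - h:r + h, c - h:c + h].astype(np.float32)
    return np.stack([img, les])  # channel 0: texture, channel 1: boundary/mask

ct_slice = np.random.rand(256, 256).astype(np.float32)
lesion_mask = np.zeros((256, 256), dtype=np.uint8); lesion_mask[120:140, 120:140] = 1
print(lesion_patch(ct_slice, lesion_mask, center=(128, 128)).shape)  # (2, 32, 32)
```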
Collapse
Affiliation(s)
- Hansang Lee
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
| | - Haeil Lee
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
| | - Helen Hong
- Department of Software Convergence, College of Interdisciplinary Studies for Emerging Industries, Seoul Women's University, Seoul, Republic of Korea
| | - Heejin Bae
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
| | - Joon Seok Lim
- Department of Radiology, Research Institute of Radiological Science, Severance Hospital, Yonsei University College of Medicine, Seoul, Republic of Korea
| | - Junmo Kim
- School of Electrical Engineering, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
| |
Collapse
|
25
|
Tang Y, Zhang J, He D, Miao W, Liu W, Li Y, Lu G, Wu F, Wang S. GANDA: A deep generative adversarial network conditionally generates intratumoral nanoparticles distribution pixels-to-pixels. J Control Release 2021; 336:336-343. [PMID: 34197860 DOI: 10.1016/j.jconrel.2021.06.039] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/17/2021] [Revised: 05/19/2021] [Accepted: 06/25/2021] [Indexed: 12/22/2022]
Abstract
The intratumoral distribution of nanoparticles (NPs) is critical for the success of nanomedicine in imaging and treatment, but computational models to describe NPs distribution remain unavailable because of complex tumor-nano interactions. Here, we develop a Generative Adversarial Network for Distribution Analysis (GANDA) that describes and conditionally generates the intratumoral quantum dots (QDs) distribution after intravenous injection. This deep generative model is trained automatically on 27,775 patches of tumor vessels and cell nuclei decomposed from whole-slide images of 4T1 breast cancer sections. The GANDA model can conditionally generate images of intratumoral QDs distribution under the constraint of given tumor-vessel and cell-nuclei channels at the same spatial resolution (pixels-to-pixels), with minimal loss (mean squared error, MSE = 1.871) and excellent reliability (intraclass correlation, ICC = 0.94). Quantitative analysis of QDs extravasation distance (ICC = 0.95) and subarea distribution (ICC = 0.99) can be performed on the generated images without knowing the real QDs distribution. We believe this deep generative model may provide opportunities to investigate how influencing factors affect NPs distribution in individual tumors and to guide nanomedicine optimization for molecular imaging and personalized treatment.
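The pixels-to-pixels conditioning can be made concrete with a small PyTorch sketch: a generator takes the tumor-vessel and cell-nuclei channels and outputs a QDs-distribution map at the same spatial resolution. The tiny convolutional stack below is an illustrative stand-in, not the published GANDA architecture.

    import torch
    import torch.nn as nn

    class ConditionalGenerator(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(2, 64, 3, padding=1), nn.ReLU(),    # in: vessel + nuclei channels
                nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
                nn.Conv2d(64, 1, 3, padding=1), nn.Sigmoid()  # out: QDs distribution map
            )

        def forward(self, vessels, nuclei):
            # Padding-preserving convolutions keep output pixel-aligned with the input.
            return self.net(torch.cat([vessels, nuclei], dim=1))

In a pix2pix-style setup, training would pair such a generator with a patch discriminator plus an L2/MSE term, consistent with the MSE reported above.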
Collapse
Affiliation(s)
- Yuxia Tang
- Department of Radiology, Jinling Hospital, Nanjing, Jiangsu 210000, Nanjing Medical University, China
| | - Jiulou Zhang
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
| | - Doudou He
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
| | - Wenfang Miao
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
| | - Wei Liu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
| | - Yang Li
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu, China
| | - Guangming Lu
- Department of Radiology, Jinling Hospital, Nanjing, Jiangsu 210000, Nanjing Medical University, China.
| | - Feiyun Wu
- Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu, China.
| | - Shouju Wang
- Department of Radiology, Jinling Hospital, Nanjing, Jiangsu 210000, Nanjing Medical University, China; Department of Radiology, The First Affiliated Hospital of Nanjing Medical University, Nanjing, Jiangsu, China.
| |
Collapse
|
26
|
Lan L, You L, Zhang Z, Fan Z, Zhao W, Zeng N, Chen Y, Zhou X. Generative Adversarial Networks and Its Applications in Biomedical Informatics. Front Public Health 2020; 8:164. [PMID: 32478029 PMCID: PMC7235323 DOI: 10.3389/fpubh.2020.00164] [Citation(s) in RCA: 64] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/30/2019] [Accepted: 04/17/2020] [Indexed: 02/05/2023] Open
Abstract
The basic Generative Adversarial Networks (GAN) model is composed of an input vector, a generator, and a discriminator. The generator and discriminator are implicit functions, usually implemented as deep neural networks. Through adversarial training, a GAN can learn a generative model of any data distribution, often with excellent performance, and it has been widely applied across many areas since it was proposed in 2014. In this review, we introduce the origin, working principle, and development history of GAN; its various applications in digital image processing; Cycle-GAN and its application in medical image analysis; and the latest applications of GAN in medical informatics and bioinformatics.
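For concreteness, a minimal PyTorch sketch of the basic setup the review opens with: a generator and a discriminator implemented as small neural networks and trained adversarially, here on a toy one-dimensional distribution. The architecture and hyperparameters are illustrative only.

    import torch
    import torch.nn as nn

    G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 1))         # noise -> sample
    D = nn.Sequential(nn.Linear(1, 32), nn.LeakyReLU(0.2), nn.Linear(32, 1))  # sample -> logit
    bce = nn.BCEWithLogitsLoss()
    opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

    for step in range(1000):
        real = torch.randn(64, 1) * 0.5 + 2.0   # target distribution: N(2, 0.5)
        fake = G(torch.randn(64, 16))
        # Discriminator step: push real toward 1, generated samples toward 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()
        # Generator step: fool the discriminator into outputting 1 on fakes.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()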
Collapse
Affiliation(s)
- Lan Lan
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu, China
| | - Lei You
- Center for Computational Systems Medicine, School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
| | - Zeyang Zhang
- Department of Computer Science and Technology, College of Electronics and Information Engineering, Tongji University, Shanghai, China
| | - Zhiwei Fan
- Department of Epidemiology and Health Statistics, West China School of Public Health and West China Fourth Hospital, Sichuan University, Chengdu, China
| | - Weiling Zhao
- Center for Computational Systems Medicine, School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
| | - Nianyin Zeng
- Department of Instrumental and Electrical Engineering, Xiamen University, Fujian, China
| | - Yidong Chen
- Department of Computer Science and Technology, College of Computer Science, Sichuan University, Chengdu, China
| | - Xiaobo Zhou
- Center for Computational Systems Medicine, School of Biomedical Informatics, University of Texas Health Science Center at Houston, Houston, TX, United States
| |
Collapse
|