101
Kwon O, Yong TH, Kang SR, Kim JE, Huh KH, Heo MS, Lee SS, Choi SC, Yi WJ. Automatic diagnosis for cysts and tumors of both jaws on panoramic radiographs using a deep convolution neural network. Dentomaxillofac Radiol 2020; 49:20200185. [PMID: 32574113; PMCID: PMC7719862; DOI: 10.1259/dmfr.20200185]
Abstract
OBJECTIVES The purpose of this study was to automatically diagnose odontogenic cysts and tumors of both jaws on panoramic radiographs using deep learning. We proposed a novel framework of a deep convolutional neural network (CNN) with data augmentation for detection and classification of multiple diseases. METHODS We developed a deep CNN modified from YOLOv3 for detecting and classifying odontogenic cysts and tumors of both jaws. Our data set of 1282 panoramic radiographs comprised 350 dentigerous cysts (DCs), 302 periapical cysts (PCs), 300 odontogenic keratocysts (OKCs), 230 ameloblastomas (ABs), and 100 normal jaws with no disease. In addition, the number of radiographs was augmented 12-fold by flip, rotation, and intensity changes. We evaluated the classification performance of the developed CNN by calculating sensitivity, specificity, accuracy, and area under the curve (AUC) for diseases of both jaws. RESULTS The overall classification performance for the diseases improved from 78.2% sensitivity, 93.9% specificity, 91.3% accuracy, and 0.86 AUC using the CNN with the unaugmented data set to 88.9% sensitivity, 97.2% specificity, 95.6% accuracy, and 0.94 AUC using the CNN with the augmented data set. The CNN using the augmented data set had the following sensitivities, specificities, accuracies, and AUCs: 91.4%, 99.2%, 97.8%, and 0.96 for DCs; 82.8%, 99.2%, 96.2%, and 0.92 for PCs; 98.4%, 92.3%, 94.0%, and 0.97 for OKCs; 71.7%, 100%, 94.3%, and 0.86 for ABs; and 100.0%, 95.1%, 96.0%, and 0.97 for normal jaws, respectively. CONCLUSION The CNN method we developed for automatically diagnosing odontogenic cysts and tumors of both jaws on panoramic radiographs using data augmentation showed high sensitivity, specificity, accuracy, and AUC despite the limited number of panoramic images involved.
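The 12-fold augmentation described above (flip, rotation, intensity changes) is straightforward to reproduce. The sketch below is a minimal NumPy/SciPy illustration, assuming two flips x three rotations x two gamma-style intensity changes; the paper does not state its exact angles or intensity factors, so those parameters are placeholders.

```python
import numpy as np
from scipy.ndimage import rotate

def augment_12fold(image, angles=(-10, 0, 10), gammas=(0.8, 1.2)):
    """2 flips x 3 rotations x 2 intensity changes = 12 variants per radiograph."""
    variants = []
    for flipped in (image, np.fliplr(image)):       # original + horizontal flip
        for angle in angles:                        # small in-plane rotations
            rotated = rotate(flipped, angle, reshape=False, mode='nearest')
            norm = rotated.astype(np.float32) / 255.0
            for gamma in gammas:                    # brighten / darken
                variants.append((np.power(norm, gamma) * 255.0).astype(np.uint8))
    return variants                                 # len(variants) == 12
```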
Affiliation(s)
- Odeuk Kwon
- Department of Oral and Maxillofacial Radiology, School of Dentistry, Seoul National University, Seoul, South Korea
- Tae-Hoon Yong
- Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Se-Ryong Kang
- Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, Seoul, South Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, BK21, Seoul National University, Seoul, South Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, BK21, Seoul National University, Seoul, South Korea
- Sam-Sun Lee
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, BK21, Seoul National University, Seoul, South Korea
- Soon-Chul Choi
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, BK21, Seoul National University, Seoul, South Korea
102
Chung M, Lee J, Park S, Lee M, Lee CE, Lee J, Shin YG. Individual tooth detection and identification from dental panoramic X-ray images via point-wise localization and distance regularization. Artif Intell Med 2020; 111:101996. [PMID: 33461689; DOI: 10.1016/j.artmed.2020.101996]
Abstract
Dental panoramic X-ray imaging is a popular diagnostic method owing to its very small dose of radiation. For an automated computer-aided diagnosis system in dental clinics, automatic detection and identification of individual teeth from panoramic X-ray images are critical prerequisites. In this study, we propose a point-wise tooth localization neural network by introducing a spatial distance regularization loss. The proposed network initially performs center point regression for all the anatomical teeth (i.e., 32 points), which automatically identifies each tooth. A novel distance regularization penalty is employed on the 32 points by considering the L2 regularization loss of the Laplacian on spatial distances. Subsequently, teeth boxes are individually localized using a multitask neural network on a patch basis. A multitask offset training is employed on the final output to improve the localization accuracy. Our method successfully localizes not only the existing teeth but also missing teeth; consequently, highly accurate detection and identification are achieved. The experimental results demonstrate that the proposed algorithm outperforms state-of-the-art approaches by increasing the average precision of teeth detection by 15.71% compared with the best performing method. The identification achieved a precision of 0.997 and a recall of 0.972. Moreover, the proposed network does not require any additional identification algorithm owing to the preceding regression of the fixed 32 points regardless of the existence of the teeth.
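The spatial distance regularization loss lends itself to a compact sketch. The PyTorch fragment below is one plausible reading (an assumption, not the authors' code): the 32 predicted center points are taken to be ordered along the dental arch, and the L2 norm of a discrete Laplacian over neighboring-point distances is penalized.

```python
import torch

def distance_regularization(points: torch.Tensor) -> torch.Tensor:
    """points: (32, 2) predicted tooth-center coordinates, ordered along the arch."""
    d = torch.norm(points[1:] - points[:-1], dim=1)   # 31 neighbor distances
    lap = d[:-2] - 2.0 * d[1:-1] + d[2:]              # discrete Laplacian over the distances
    return torch.mean(lap ** 2)                       # L2 penalty keeps spacing smooth
```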
Affiliation(s)
- Minyoung Chung
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea.
- Jusang Lee
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea.
- Sanguk Park
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea.
- Minkyung Lee
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea.
- Chae Eun Lee
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea.
- Jeongjin Lee
- School of Computer Science and Engineering, Soongsil University, 369 Sangdo-Ro, Dongjak-Gu, Seoul, 06978, Republic of Korea.
- Yeong-Gil Shin
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, Republic of Korea.
103
Corbella S, Srinivas S, Cabitza F. Applications of deep learning in dentistry. Oral Surg Oral Med Oral Pathol Oral Radiol 2020; 132:225-238. [PMID: 33303419; DOI: 10.1016/j.oooo.2020.11.003]
Abstract
Over the last few years, translational applications of so-called artificial intelligence in the field of medicine have garnered a significant amount of interest. The present article aims to review existing dental literature that has examined deep learning, a subset of machine learning that has demonstrated the highest performance when applied to image processing and that has been tested as a formidable diagnostic support tool through its automated analysis of radiographic/photographic images. Furthermore, the article will critically evaluate the literature to describe potential methodological weaknesses of the studies and the need for further development. This review includes 28 studies that have described the applications of deep learning in various fields of dentistry. Research into the applications of deep learning in dentistry contains claims of its high accuracy. Nonetheless, many of these studies have substantial limitations and methodological issues (e.g., examiner reliability, the number of images used for training/testing, the methods used for validation) that have significantly limited the external validity of their results. Therefore, future studies that acknowledge the methodological limitations of existing literature will help to establish a better understanding of the usefulness of applying deep learning in dentistry.
Affiliation(s)
- Stefano Corbella
- Department of Biomedical, Surgical and Dental Sciences, Università degli Studi di Milano, Milan, Italy; IRCCS Istituto Ortopedico Galeazzi, Milan, Italy; Department of Oral Surgery, Institute of Dentistry, I. M. Sechenov First Moscow State Medical University, Moscow, Russia.
- Federico Cabitza
- Department of Informatics, Systemics and Communication, University of Milano-Bicocca, Milan, Italy
104
Li D, Fu Z, Xu J. Stacked-autoencoder-based model for COVID-19 diagnosis on CT images. Appl Intell 2020; 51:2805-2817. [PMID: 34764564; PMCID: PMC7652058; DOI: 10.1007/s10489-020-02002-w]
Abstract
With the outbreak of COVID-19, medical imaging such as computed tomography (CT) based diagnosis has proved to be an effective way to fight against the rapid spread of the virus. Therefore, it is important to study computerized models for infection detection based on CT imaging. New deep learning-based approaches have been developed for CT-assisted diagnosis of COVID-19. However, most of the current studies are based on small COVID-19 CT image datasets, as few datasets are publicly available for patient privacy reasons. As a result, the performance of deep learning-based detection models needs to be improved on small datasets. In this paper, a stacked autoencoder detector model is proposed to greatly improve the performance of detection models in terms of precision and recall. Firstly, four autoencoders are constructed as the first four layers of the whole stacked autoencoder detector model to extract better features from CT images. Secondly, the four autoencoders are cascaded together and connected to the dense layer and the softmax classifier to constitute the model. Finally, a new classification loss function is constructed by superimposing a reconstruction loss to enhance the detection accuracy of the model. The experimental results show that our model performs well on a small COVID-19 CT image dataset. Our model achieves an average accuracy, precision, recall, and F1-score of 94.7%, 96.54%, 94.1%, and 94.8%, respectively. The results reflect the ability of our model in discriminating COVID-19 images, which might help radiologists in the diagnosis of suspected COVID-19 patients.
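The cascade of four autoencoders with a superimposed reconstruction loss can be sketched directly. The PyTorch fragment below is a minimal illustration; the layer widths, the weighting factor lam, and the input size are assumptions (inputs are taken to be flattened CT patches scaled to [0, 1]).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SAEDetector(nn.Module):
    """Four stacked encoder layers feed a dense softmax head; the mirrored
    decoder is kept so a reconstruction term can be added to the loss."""
    def __init__(self, in_dim=64 * 64, hidden=(512, 256, 128, 64), n_classes=2):
        super().__init__()
        dims = (in_dim,) + hidden
        self.encoder = nn.Sequential(*[m for i in range(4)
                                       for m in (nn.Linear(dims[i], dims[i + 1]), nn.ReLU())])
        self.decoder = nn.Sequential(*[m for i in range(4, 0, -1)
                                       for m in (nn.Linear(dims[i], dims[i - 1]), nn.ReLU())])
        self.head = nn.Linear(hidden[-1], n_classes)

    def forward(self, x):
        z = self.encoder(x)
        return self.head(z), self.decoder(z)

def combined_loss(logits, recon, x, y, lam=0.1):
    # classification loss with the reconstruction loss superimposed
    return F.cross_entropy(logits, y) + lam * F.mse_loss(recon, x)
```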
Affiliation(s)
- Daqiu Li
- School of Computer and Software, Nanjing University of Information Science & Technology, Nanjing, 210044 China
- Peng Cheng Laboratory, Shenzhen, 518000 China
- Zhangjie Fu
- School of Computer and Software, Nanjing University of Information Science & Technology, Nanjing, 210044 China
- Peng Cheng Laboratory, Shenzhen, 518000 China
- Jun Xu
- School of Automation, Nanjing University of Information Science & Technology, Nanjing, 210044 China
105
Sathya B, Neelaveni R. Transfer Learning Based Automatic Human Identification using Dental Traits - An Aid to Forensic Odontology. J Forensic Leg Med 2020; 76:102066. [PMID: 33032205; DOI: 10.1016/j.jflm.2020.102066]
Abstract
Forensic odontology deals with identifying humans based on their dental traits because of their robust nature. Classical methods of human identification require more manual effort and are difficult to apply to large numbers of images. A novel way of automating the process of human identification using deep learning approaches is proposed in this paper. Transfer learning using AlexNet is applied in three stages: in the first stage, the features of the query tooth image are extracted and its location is identified as either in the upper or lower jaw. In the second stage, the tooth is classified into one of four classes, namely molar, premolar, canine, or incisor. In the last stage, the classified tooth is numbered according to the universal numbering system, and finally the candidate identification is made using distance as the metric. The three-stage transfer learning approach proposed in this work helps to reduce the search space in the process of candidate matching. Also, instead of making the network classify all 32 teeth into 32 different classes, this approach reduces the number of classes assigned to the classification layer in each stage, thereby increasing the performance of the network. This work outperforms classical approaches in terms of both accuracy and precision. The hit rate in human identification is also higher compared with other state-of-the-art methods.
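Each stage is an ordinary transfer-learning step: keep the pre-trained AlexNet features and retrain a small head for that stage's label set. A minimal torchvision sketch, assuming frozen convolutional features and the stage-wise class counts given in the abstract (input handling and training loop omitted):

```python
import torch.nn as nn
from torchvision import models

def alexnet_for_stage(n_classes: int) -> nn.Module:
    net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    for p in net.features.parameters():             # keep ImageNet convolutional features
        p.requires_grad = False
    net.classifier[6] = nn.Linear(4096, n_classes)  # replace the 1000-way head
    return net

jaw_net  = alexnet_for_stage(2)   # stage 1: upper vs. lower jaw
type_net = alexnet_for_stage(4)   # stage 2: molar / premolar / canine / incisor
```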
Affiliation(s)
- Sathya B
- Department of Electrical and Electronics Engineering, PSG College of Technology, Coimbatore, Tamilnadu, 641 004, India.
- Neelaveni R
- Department of Electrical and Electronics Engineering, PSG College of Technology, Coimbatore, Tamilnadu, 641 004, India.
106
Kimura R, Teramoto A, Ohno T, Saito K, Fujita H. Virtual digital subtraction angiography using multizone patch-based U-Net. Phys Eng Sci Med 2020; 43:1305-1315. [PMID: 33026591; DOI: 10.1007/s13246-020-00933-9]
Abstract
Digital subtraction angiography (DSA) is a powerful technique for visualizing blood vessels from X-ray images. However, the subtraction images obtained with this technique suffer from artifacts caused by patient motion. To avoid these artifacts, a new method called "Virtual DSA" is proposed, which generates DSA images directly from a single live image without using a mask image. The proposed Virtual DSA method was developed using the U-Net deep learning architecture. In the proposed method, a virtual DSA image containing only the extracted blood vessels is generated by inputting a single live image into U-Net. To extract the blood vessels more accurately, U-Net operates on each small area via a patch-based process. In addition, a different network is used for each zone to exploit local information. The evaluation of live images of the head confirmed accurate blood vessel extraction without artifacts in the virtual DSA image generated with the proposed method. In this study, the NMSE, PSNR, and SSIM indices were 8.58%, 33.86 dB, and 0.829, respectively. These results indicate that the proposed method can visualize blood vessels without motion artifacts from a single live image.
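For reference, two of the quoted image-quality indices are simple to compute. A minimal NumPy sketch follows, using one common definition of NMSE (error energy normalized by the energy of the reference image); the paper may normalize differently.

```python
import numpy as np

def nmse_percent(pred, ref):
    """Normalized mean squared error, in percent."""
    pred, ref = pred.astype(np.float64), ref.astype(np.float64)
    return 100.0 * np.sum((pred - ref) ** 2) / np.sum(ref ** 2)

def psnr_db(pred, ref, data_range=255.0):
    """Peak signal-to-noise ratio in dB."""
    mse = np.mean((pred.astype(np.float64) - ref.astype(np.float64)) ** 2)
    return 10.0 * np.log10(data_range ** 2 / mse)
```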
Affiliation(s)
- Ryusei Kimura
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan
- Atsushi Teramoto
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan.
- Tomoyuki Ohno
- Fujita Health University Bantane Hospital, 3-6-10 Otobashi Nakagawa-ku, Nagoya-city, Aichi, 454-8509, Japan
- Kuniaki Saito
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan
- Hiroshi Fujita
- Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu-city, Gifu, 501-1194, Japan
107
Automatic Tooth Detection and Numbering Using a Combination of a CNN and Heuristic Algorithm. Appl Sci (Basel) 2020. [DOI: 10.3390/app10165624]
Abstract
Dental panoramic radiography (DPR) is a method commonly used in dentistry for patient diagnosis. This study presents a new technique that combines a regional convolutional neural network (RCNN), a Single Shot Multibox Detector, and heuristic methods to detect and number the teeth and implants with only fixtures in a DPR image. This technique can provide statistical information and support personal identification based on DPR, and it separates the images of individual teeth, which serve as basic data for various DPR-based AI algorithms. As a result, the mAP (@IoU = 0.5) of tooth, implant fixture, and crown detection using the RCNN algorithm was 96.7%, 45.1%, and 60.9%, respectively. Further, the sensitivity, specificity, and accuracy of the tooth numbering algorithm using a convolutional neural network and heuristics were 84.2%, 75.5%, and 84.5%, respectively. Techniques to analyze DPR images, including implants and bridges, were developed, enabling the possibility of applying AI to orthodontic or implant DPR images of patients.
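The mAP figures above are computed at an IoU threshold of 0.5: a detection counts as a true positive only when its box overlaps a ground-truth box with intersection-over-union of at least 0.5. The underlying overlap test, for axis-aligned (x1, y1, x2, y2) boxes:

```python
def iou(box_a, box_b):
    """Intersection over union of two (x1, y1, x2, y2) boxes."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

is_true_positive = iou((10, 10, 50, 90), (12, 8, 52, 88)) >= 0.5
```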
108
Personal identification with orthopantomography using simple convolutional neural networks: a preliminary study. Sci Rep 2020; 10:13559. [PMID: 32782269; PMCID: PMC7419525; DOI: 10.1038/s41598-020-70474-4]
Abstract
Forensic dental examination has played an important role in personal identification (PI). However, PI has essentially been based on traditional visual comparisons of ante- and postmortem dental records and radiographs, and there is no globally accepted PI method based on digital technology. Although many effective image recognition models have been developed, they have been underutilized in forensic odontology. The aim of this study was to verify the usefulness of PI with paired orthopantomographs obtained in a relatively short period using convolutional neural network (CNN) technologies. Thirty pairs of orthopantomographs obtained on different days were analyzed in terms of the accuracy of dental PI based on six well-known CNN architectures: VGG16, ResNet50, Inception-v3, InceptionResNet-v2, Xception, and MobileNet-v2. Each model was trained and tested using paired orthopantomographs, and pretraining and fine-tuning transfer learning methods were validated. Higher validation accuracy was achieved with fine-tuning than with pretraining, and each architecture showed a detection accuracy of 80.0% or more. The VGG16 model achieved the highest accuracy (100.0%) with pretraining and with fine-tuning. This study demonstrated the usefulness of CNN for PI using small numbers of orthopantomographic images, and it also showed that VGG16 was the most useful of the six tested CNN architectures.
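The pretraining-versus-fine-tuning comparison reduces to a single switch in code: whether the pre-trained backbone weights are frozen or updated. A minimal torchvision sketch with VGG16, assuming the task is posed as classifying an orthopantomograph into one of n_classes known individuals (the paper's exact formulation and preprocessing are not described here):

```python
import torch.nn as nn
from torchvision import models

def vgg16_identifier(n_classes: int, fine_tune: bool) -> nn.Module:
    net = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
    if not fine_tune:                    # "pretraining": train the new head only
        for p in net.features.parameters():
            p.requires_grad = False      # "fine-tuning" leaves everything trainable
    net.classifier[6] = nn.Linear(4096, n_classes)  # task-specific head
    return net
```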
109
Fan F, Ke W, Wu W, Tian X, Lyu T, Liu Y, Liao P, Dai X, Chen H, Deng Z. Automatic human identification from panoramic dental radiographs using the convolutional neural network. Forensic Sci Int 2020; 314:110416. [PMID: 32721824; DOI: 10.1016/j.forsciint.2020.110416]
Abstract
Human identification is an important task in mass disaster and criminal investigations. Although several automatic dental identification systems have been proposed, accurate and fast identification from panoramic dental radiographs (PDRs) remains a challenging issue. In this study, an automatic human identification system (DENT-net) was developed using a customized convolutional neural network (CNN). The DENT-net was trained on 15,369 PDRs from 6300 individuals. The PDRs were preprocessed by affine transformation and histogram equalization. The DENT-net took 128 × 128 × 7 square patches as input, including the whole PDR and six details extracted from it. Using the DENT-net, feature extraction took around 10 milliseconds per image, and the running time for retrieval was 33.03 milliseconds in a 2000-individual database, promising application to larger databases. The visualization of the CNN showed that the teeth, maxilla, and mandible all contributed to human identification. The DENT-net achieved a Rank-1 accuracy of 85.16% and a Rank-5 accuracy of 97.74% for human identification. The present results demonstrate that human identification can be achieved from PDRs by a CNN with high accuracy and speed. The present system can be used without any special equipment or knowledge to generate candidate images, although the final decision should be made by human specialists in practice. It is expected to aid human identification in mass disaster and criminal investigations.
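Rank-1 and Rank-5 accuracies measure how often the true individual appears among the nearest gallery matches. A minimal NumPy sketch of the metric, assuming Euclidean distance between the extracted feature vectors (the paper's distance measure is not stated here):

```python
import numpy as np

def rank_k_accuracy(query_feats, query_ids, gallery_feats, gallery_ids, k=5):
    """Fraction of queries whose true identity is among the k nearest gallery items."""
    gallery_ids = np.asarray(gallery_ids)
    hits = 0
    for feat, true_id in zip(query_feats, query_ids):
        dists = np.linalg.norm(gallery_feats - feat, axis=1)  # distance to every gallery PDR
        hits += int(true_id in gallery_ids[np.argsort(dists)[:k]])
    return hits / len(query_ids)
```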
Affiliation(s)
- Fei Fan
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu 610041, China
- Wenchi Ke
- College of Computer Science, Sichuan University, Chengdu 610064, China
- Wei Wu
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu 610041, China
- Xuemei Tian
- Institute of Forensic Science, Ministry of Public Security, Beijing 100038, China
- Tu Lyu
- Institute of Forensic Science, Ministry of Public Security, Beijing 100038, China
- Yuanyuan Liu
- Department of Oral Radiology, West China College of Stomatology, Sichuan University, Chengdu 610041, China
- Peixi Liao
- The Department of Scientific Research and Education, The Sixth People's Hospital of Chengdu, Chengdu 610000, China
- Xinhua Dai
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu 610041, China
- Hu Chen
- College of Computer Science, Sichuan University, Chengdu 610064, China.
- Zhenhua Deng
- West China School of Basic Medical Sciences & Forensic Medicine, Sichuan University, Chengdu 610041, China.
110
Abstract
Objective: To apply deep learning to a small dataset of panoramic images for the detection and segmentation of the mental foramen (MF). Study design: In this study we used an in-house dataset created within the School of Dental Medicine, Tel Aviv University. The dataset contained 112 randomly chosen, anonymized digital panoramic X-ray images and corresponding segmentations of the MF. To solve the task of MF segmentation, we used a single fully convolutional neural network based on U-net, as well as a cascade architecture. 70% of the data were randomly chosen for training, 15% for validation, and accuracy was tested on the remaining 15%. The model was trained using an NVIDIA GeForce GTX 1080 GPU. SPSS software, version 17.0 (Chicago, IL, USA) was used for the statistical analysis. The study was approved by the ethical committee of Tel Aviv University. Results: The best results of the dice similarity coefficient (DSC), precision, recall, MF-wise true positive rate (MFTPR) and MF-wise false positive rate (MFFPR) in single networks were 49.51%, 71.13%, 68.24%, 87.81% and 14.08%, respectively. The cascade of networks showed better results than single networks in recall and MFTPR, which were 88.83% and 93.75%, respectively, while DSC and precision achieved the lowest values, 31.77% and 23.92%, respectively. Conclusions: The U-net, one of the most widely used neural network architectures for biomedical applications, was effectively used in this study. Methods based on deep learning are extremely important for automatic detection and segmentation in radiology and require further development.
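The headline metric here, the Dice similarity coefficient, compares a predicted mask with the ground truth; a minimal NumPy sketch:

```python
import numpy as np

def dice(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum() + eps)  # 1.0 = perfect overlap
```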
111
Kim I, Misra D, Rodriguez L, Gill M, Liberton DK, Almpani K, Lee JS, Antani S. Malocclusion Classification on 3D Cone-Beam CT Craniofacial Images Using Multi-Channel Deep Learning Models. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1294-1298. [PMID: 33018225; PMCID: PMC11977666; DOI: 10.1109/embc44109.2020.9176672]
Abstract
Analyzing and interpreting cone-beam computed tomography (CBCT) images is a complicated and often time-consuming process. In this study, we present two different architectures of multi-channel deep learning (DL) models, "Ensemble" and "Synchronized multi-channel", to automatically identify and classify skeletal malocclusions from 3D CBCT craniofacial images. These multi-channel models combine three individual single-channel base models using a voting scheme and a two-step learning process, respectively, to simultaneously extract and learn a visual representation from three different directional views of 2D images generated from a single 3D CBCT image. We also employ a visualization method called "Class-selective Relevance Mapping" (CRM) to explain the learned behavior of our DL models by localizing and highlighting a discriminative area within an input image. Our multi-channel models achieve significantly better performance overall (accuracy exceeding 93%) compared with single-channel DL models that take only one specific directional view of 2D projected images as input. In addition, CRM visually demonstrates that a DL model based on the sagittal-left view of 2D images outperforms those based on other directional 2D images. Clinical relevance: the proposed method aims at assisting orthodontists in determining the best treatment path for the patient, be it orthodontic or surgical treatment or a combination of both.
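The "Ensemble" variant combines the three single-channel base models with a voting scheme. A minimal NumPy sketch follows; the tie-break (falling back to averaged class probabilities) is an assumption, since the exact scheme is not spelled out here.

```python
import numpy as np

def ensemble_vote(p_axial, p_coronal, p_sagittal):
    """Majority vote over three per-view class-probability vectors."""
    probs = np.stack([p_axial, p_coronal, p_sagittal])
    votes = probs.argmax(axis=1)                   # each view's predicted class
    counts = np.bincount(votes, minlength=probs.shape[1])
    if counts.max() >= 2:                          # some class won at least two views
        return int(counts.argmax())
    return int(probs.mean(axis=0).argmax())        # tie-break: soft voting
```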
112
Hung K, Yeung AWK, Tanaka R, Bornstein MM. Current Applications, Opportunities, and Limitations of AI for 3D Imaging in Dental Research and Practice. Int J Environ Res Public Health 2020; 17:4424. [PMID: 32575560; PMCID: PMC7345758; DOI: 10.3390/ijerph17124424]
Abstract
The increasing use of three-dimensional (3D) imaging techniques in dental medicine has boosted the development and use of artificial intelligence (AI) systems for various clinical problems. Cone beam computed tomography (CBCT) and intraoral/facial scans are potential sources of image data to develop 3D image-based AI systems for automated diagnosis, treatment planning, and prediction of treatment outcome. This review focuses on current developments and performance of AI for 3D imaging in dentomaxillofacial radiology (DMFR) as well as intraoral and facial scanning. In DMFR, machine learning-based algorithms proposed in the literature focus on three main applications, including automated diagnosis of dental and maxillofacial diseases, localization of anatomical landmarks for orthodontic and orthognathic treatment planning, and general improvement of image quality. Automatic recognition of teeth and diagnosis of facial deformations using AI systems based on intraoral and facial scanning will very likely be a field of increased interest in the future. The review is aimed at providing dental practitioners and interested colleagues in healthcare with a comprehensive understanding of the current trend of AI developments in the field of 3D imaging in dental medicine.
Affiliation(s)
- Kuofeng Hung
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong 999077, China
- Andy Wai Kan Yeung
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong 999077, China
- Ray Tanaka
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong 999077, China
- Michael M. Bornstein
- Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong 999077, China
- Department of Oral Health & Medicine, University Center for Dental Medicine Basel UZB, University of Basel, 4058 Basel, Switzerland
- Correspondence: Tel.: +41-(0)61-267-25-45
113
Chang HJ, Lee SJ, Yong TH, Shin NY, Jang BG, Kim JE, Huh KH, Lee SS, Heo MS, Choi SC, Kim TI, Yi WJ. Deep Learning Hybrid Method to Automatically Diagnose Periodontal Bone Loss and Stage Periodontitis. Sci Rep 2020; 10:7531. [PMID: 32372049; PMCID: PMC7200807; DOI: 10.1038/s41598-020-64509-z]
Abstract
We developed an automatic method for staging periodontitis on dental panoramic radiographs using a deep learning hybrid method. A novel hybrid framework was proposed to automatically detect and classify the periodontal bone loss of each individual tooth. The framework is a hybrid of deep learning architecture for detection and conventional CAD processing for classification. Deep learning was used to detect the radiographic bone level (or the CEJ level) as a simple structure for the whole jaw on panoramic radiographs. Next, the percentage rate analysis of the radiographic bone loss combined the tooth long-axis with the periodontal bone and CEJ levels. Using the percentage rate, we could automatically classify the periodontal bone loss. This classification was used for periodontitis staging according to the new criteria proposed at the 2017 World Workshop on the Classification of Periodontal and Peri-Implant Diseases and Conditions. The Pearson correlation coefficient of the automatic method with the diagnoses by radiologists was 0.73 overall for the whole jaw (p < 0.01), and the intraclass correlation coefficient was 0.91 overall for the whole jaw (p < 0.01). The novel hybrid framework that combined deep learning architecture and the conventional CAD approach demonstrated high accuracy and excellent reliability in the automatic diagnosis of periodontal bone loss and staging of periodontitis.
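Once the percentage of radiographic bone loss (RBL) is computed per tooth, mapping it to a stage is essentially a threshold rule. The sketch below uses the commonly cited 2017 World Workshop RBL cut-offs; note that full staging also weighs clinical attachment loss, tooth loss, and case-complexity factors that are not modeled here.

```python
def stage_from_bone_loss(rbl_percent: float) -> str:
    """Map percentage radiographic bone loss to a periodontitis stage."""
    if rbl_percent < 15:
        return "Stage I"      # bone loss confined to the coronal third (< 15%)
    if rbl_percent <= 33:
        return "Stage II"     # 15-33%
    return "Stage III/IV"     # extending to the middle third of the root and beyond
```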
Affiliation(s)
- Hyuk-Joon Chang
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Sang-Jeong Lee
- Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea
- Tae-Hoon Yong
- Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea
- Nan-Young Shin
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Bong-Geun Jang
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Jo-Eun Kim
- Department of Oral and Maxillofacial Radiology, Seoul National University Dental Hospital, Seoul, Korea
- Kyung-Hoe Huh
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Sam-Sun Lee
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Min-Suk Heo
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Soon-Chul Choi
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Tae-Il Kim
- Department of Periodontology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea
- Won-Jin Yi
- Department of Oral and Maxillofacial Radiology, School of Dentistry and Dental Research Institute, Seoul National University, Seoul, Korea.
- Department of Biomedical Radiation Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, Korea.
114
Chen J, Zhang D, Nanehkaran YA, Li D. Detection of rice plant diseases based on deep transfer learning. J Sci Food Agric 2020; 100:3246-3256. [PMID: 32124447; DOI: 10.1002/jsfa.10365]
Abstract
BACKGROUND As the primary food for nearly half of the world's population, rice is cultivated almost all over the world, especially in Asian countries. However, farmers and planting experts have been facing many persistent agricultural challenges for centuries, such as the different diseases of rice. Severe rice diseases may lead to no harvest of grains; therefore, a fast, automatic, inexpensive and accurate method to detect rice diseases is highly desired in the field of agricultural information. RESULTS In this article, we study the deep learning approach for solving the task, since it has shown outstanding performance in image processing and classification problems. Combining the advantages of both, DenseNet pre-trained on ImageNet and the Inception module were selected for use in the network, and this approach presents superior performance with respect to other state-of-the-art methods. It achieves an average prediction accuracy of no less than 94.07% on the public dataset. Even when multiple diseases were considered, the average accuracy reaches 98.63% for the class prediction of rice disease images. CONCLUSIONS The experimental results prove the validity of the proposed approach, and it is accomplished efficiently for rice disease detection. © 2020 Society of Chemical Industry.
Affiliation(s)
- Junde Chen
- School of Informatics, Xiamen University, Xiamen, China
- Defu Zhang
- School of Informatics, Xiamen University, Xiamen, China
- Dele Li
- Fujian College of Water Conservancy and Electric Power, Sanming, China
115
Yuan F, Dai N, Tian S, Zhang B, Sun Y, Yu Q, Liu H. Personalized design technique for the dental occlusal surface based on conditional generative adversarial networks. Int J Numer Method Biomed Eng 2020; 36:e3321. [PMID: 32043311; DOI: 10.1002/cnm.3321]
Abstract
Tooth defects are a frequently occurring problem in the dental clinic. However, traditional manual restoration of a defective tooth needs an especially long treatment time, and dental computer-aided design and manufacture (CAD/CAM) systems fail to restore the personalized anatomical features of natural teeth. Aiming to address the shortcomings of existing methods, this article proposes an intelligent network model for designing the tooth crown surface based on conditional generative adversarial networks. The data set for training the network model is constructed by generating depth maps of 3D tooth models acquired with an intraoral scanner. Through adversarial training, the network model is able to generate the tooth occlusal surface under the constraints of the spatial occlusal relationship, a perceptual loss, and an occlusal groove filter loss. Finally, we carry out assessment experiments on the quality of the occlusal surface and the occlusal relationship with the opposing tooth. The experimental results demonstrate that our method can automatically reconstruct personalized anatomical features on the occlusal surface and shorten the treatment time while restoring the full functionality of the defective tooth.
Affiliation(s)
- Fulai Yuan
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Ning Dai
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Sukun Tian
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Bei Zhang
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
- Yuchun Sun
- Peking University School and Hospital of Stomatology, Beijing, People's Republic of China
- Qing Yu
- Nanjing Stomatological Hospital, Medical School of Nanjing University, Nanjing, People's Republic of China
- Hao Liu
- College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing, People's Republic of China
116
Banar N, Bertels J, Laurent F, Boedi RM, De Tobel J, Thevissen P, Vandermeulen D. Towards fully automated third molar development staging in panoramic radiographs. Int J Legal Med 2020; 134:1831-1841. [PMID: 32239317; DOI: 10.1007/s00414-020-02283-3]
Abstract
Staging third molar development is commonly used for age assessment in sub-adults. Current staging techniques are, at most, semi-automated and rely on manual interactions prone to operator variability. The aim of this study was to fully automate the staging process by employing the full potential of deep learning, using convolutional neural networks (CNNs) in every step of the procedure. The dataset used to train the CNNs consisted of 400 panoramic radiographs (OPGs), with 20 OPGs per developmental stage per sex, staged in consensus between three observers. The concepts of transfer learning, using pre-trained CNNs, and data augmentation were used to mitigate the issues when dealing with a limited dataset. In this work, a three-step procedure was proposed and the results were validated using fivefold cross-validation. First, a CNN localized the geometrical center of the lower left third molar, around which a square region of interest (ROI) was extracted. Second, another CNN segmented the third molar within the ROI. Third, a final CNN used both the ROI and the segmentation to classify the third molar into its developmental stage. The geometrical center of the third molar was found with an average Euclidean distance of 63 pixels. Third molars were segmented with an average Dice score of 93%. Finally, the developmental stages were classified with an accuracy of 54%, a mean absolute error of 0.69 stages, and a linear weighted Cohen's kappa coefficient of 0.79. The entire automated workflow on average took 2.72 s to compute, which is substantially faster than manual staging starting from the OPG. Taking into account the limited dataset size, this pilot study shows that the proposed fully automated approach shows promising results compared with manual staging.
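The three staging metrics quoted above (accuracy, mean absolute error in stages, linearly weighted Cohen's kappa) can be computed as follows; a toy NumPy/scikit-learn sketch with made-up stage labels:

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

stages_true = np.array([3, 5, 7, 2, 8])   # toy developmental-stage labels
stages_pred = np.array([3, 6, 7, 2, 7])

accuracy = np.mean(stages_true == stages_pred)
mae = np.mean(np.abs(stages_true - stages_pred))                       # error in stages
kappa = cohen_kappa_score(stages_true, stages_pred, weights="linear")  # off-by-n aware
```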
Affiliation(s)
- Nikolay Banar
- Computational Linguistics and Psycholinguistics Research Center (CLiPS), University of Antwerp, Antwerp, Belgium
- Jeroen Bertels
- Department of Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium.
- François Laurent
- Department of Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
- Rizky Merdietio Boedi
- Department of Dentistry, Diponegoro University, Semarang, Indonesia
- Department of Imaging and Pathology (Forensic Odontology), KU Leuven, Leuven, Belgium
- Jannick De Tobel
- Department of Imaging and Pathology (Forensic Odontology), KU Leuven, Leuven, Belgium
- Patrick Thevissen
- Department of Imaging and Pathology (Forensic Odontology), KU Leuven, Leuven, Belgium
- Dirk Vandermeulen
- Department of Electrical Engineering (ESAT/PSI), KU Leuven, Leuven, Belgium
- Department of Anatomy, University of Pretoria, Pretoria, South Africa
117
Singh P, Sehgal P. Numbering and Classification of Panoramic Dental Images Using 6-Layer Convolutional Neural Network. Pattern Recognit Image Anal 2020. [DOI: 10.1134/s1054661820010149]
118
Chung M, Lee M, Hong J, Park S, Lee J, Lee J, Yang IH, Lee J, Shin YG. Pose-aware instance segmentation framework from cone beam CT images for tooth segmentation. Comput Biol Med 2020; 120:103720. [PMID: 32250852; DOI: 10.1016/j.compbiomed.2020.103720]
Abstract
Individual tooth segmentation from cone beam computed tomography (CBCT) images is an essential prerequisite for an anatomical understanding of orthodontic structures in several applications, such as tooth reformation planning and implant guide simulations. However, the presence of severe metal artifacts in CBCT images hinders the accurate segmentation of each individual tooth. In this study, we propose a neural network for pixel-wise labeling to exploit an instance segmentation framework that is robust to metal artifacts. Our method comprises three steps: 1) image cropping and realignment by pose regressions, 2) metal-robust individual tooth detection, and 3) segmentation. We first extract the alignment information of the patient by pose regression neural networks to attain a volume-of-interest (VOI) region and realign the input image, which reduces the inter-overlapping area between tooth bounding boxes. Then, individual tooth regions are localized within a VOI-realigned image using a convolutional detector. We improved the accuracy of the detector by employing non-maximum suppression and multiclass classification metrics in the region proposal network. Finally, we apply a convolutional neural network (CNN) to perform individual tooth segmentation by converting the pixel-wise labeling task to a distance regression task. Metal-intensive image augmentation is also employed for robust segmentation of metal artifacts. The results show that our proposed method outperforms other state-of-the-art methods, especially for teeth with metal artifacts. Our method demonstrated 5.68% and 30.30% better accuracy in the F1 score and aggregated Jaccard index, respectively, when compared with the best performing state-of-the-art algorithms. The major implication of the proposed method is two-fold: 1) an introduction of pose-aware VOI realignment followed by robust tooth detection and 2) a metal-robust CNN framework for accurate tooth segmentation.
Affiliation(s)
- Minyoung Chung
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.
- Minkyung Lee
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.
- Jioh Hong
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.
- Sanguk Park
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.
- Jusang Lee
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.
- Jingyu Lee
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.
- Il-Hyung Yang
- Department of Orthodontics, Seoul National University School of Dentistry, 101 Daehak-Ro Jongro-Gu, Seoul, 03080, South Korea.
- Jeongjin Lee
- School of Computer Science and Engineering, Soongsil University, 369 Sangdo-Ro, Dongjak-Gu, Seoul, 06978, South Korea.
- Yeong-Gil Shin
- Department of Computer Science and Engineering, Seoul National University, 1 Gwanak-ro, Gwanak-gu, Seoul, 08826, South Korea.
119
Orhan K, Bayrakdar IS, Ezhov M, Kravtsov A, Özyürek T. Evaluation of artificial intelligence for detecting periapical pathosis on cone-beam computed tomography scans. Int Endod J 2020; 53:680-689. [PMID: 31922612; DOI: 10.1111/iej.13265]
Abstract
AIM To verify the diagnostic performance of an artificial intelligence system based on the deep convolutional neural network method to detect periapical pathosis on cone-beam computed tomography (CBCT) images. METHODOLOGY Images of 153 periapical lesions obtained from 109 patients were included. The specific area of the jaw and teeth associated with the periapical lesions was then determined by a human observer. Lesion volumes were calculated using manual segmentation methods with Fujifilm-Synapse 3D software (Fujifilm Medical Systems, Tokyo, Japan). The neural network was then used to determine (i) whether the lesion could be detected; (ii) if the lesion was detected, where it was localized (maxilla, mandible or specific tooth); and (iii) lesion volume. Manual segmentation and artificial intelligence (AI) (Diagnocat Inc., San Francisco, CA, USA) methods were compared using the Wilcoxon signed rank test and Bland-Altman analysis. RESULTS The deep convolutional neural network system was successful in detecting teeth and numbering specific teeth. Only one tooth was incorrectly identified. The AI system was able to detect 142 of a total of 153 periapical lesions. The reliability of correctly detecting a periapical lesion was 92.8%. The deep convolutional neural network volumetric measurements of the lesions were similar to those with manual segmentation. There was no significant difference between the two measurement methods (P > 0.05). CONCLUSIONS Volume measurements performed by humans and by AI systems were comparable to each other. AI systems based on deep learning methods can be useful for detecting periapical pathosis on CBCT images for clinical application.
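The agreement analysis reported above can be reproduced with a few lines of NumPy; the sketch computes the Bland-Altman bias and 95% limits of agreement between the manual and AI volume measurements.

```python
import numpy as np

def bland_altman(manual_vols, ai_vols):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(ai_vols, float) - np.asarray(manual_vols, float)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)   # 95% limits under normality
    return bias, (bias - half_width, bias + half_width)
```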
Affiliation(s)
- K Orhan
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Ankara University, Ankara, Turkey
- I S Bayrakdar
- Department of Oral and Maxillofacial Radiology, Faculty of Dentistry, Eskisehir Osmangazi University, Eskişehir, Turkey
- M Ezhov
- Diagnocat Inc, San Francisco, CA, USA
- T Özyürek
- Department of Endodontics, Faculty of Dentistry, Istanbul Medeniyet University, Istanbul, Turkey
120
Leite AF, Vasconcelos KDF, Willems H, Jacobs R. Radiomics and Machine Learning in Oral Healthcare. Proteomics Clin Appl 2020; 14:e1900040. [PMID: 31950592; DOI: 10.1002/prca.201900040]
Abstract
The increasing storage of information, data, and forms of knowledge has led to the development of new technologies that can help to accomplish complex tasks in different areas, such as in dentistry. In this context, the role of computational methods, such as radiomics and Artificial Intelligence (AI) applications, has been progressing remarkably for dentomaxillofacial radiology (DMFR). These tools bring new perspectives for diagnosis, classification, and prediction of oral diseases, treatment planning, and for the evaluation and prediction of outcomes, minimizing the possibilities of human errors. A comprehensive review of the state-of-the-art of using radiomics and machine learning (ML) for imaging in oral healthcare is presented in this paper. Although the number of published studies is still relatively low, the preliminary results are very promising and in a near future, an augmented dentomaxillofacial radiology (ADMFR) will combine the use of radiomics-based and AI-based analyses with the radiologist's evaluation. In addition to the opportunities and possibilities, some challenges and limitations have also been discussed for further investigations.
Affiliation(s)
- André Ferreira Leite
- Department of Dentistry, Faculty of Health Sciences, University of Brasília, Brasília, 70910-900, Brazil; Omfsimpath Research Group, Department of Imaging and Pathology, Biomedical Sciences, KU Leuven and Dentomaxillofacial Imaging Department, University Hospitals Leuven, Leuven, 3000, Belgium
- Karla de Faria Vasconcelos
- Omfsimpath Research Group, Department of Imaging and Pathology, Biomedical Sciences, KU Leuven and Dentomaxillofacial Imaging Department, University Hospitals Leuven, Leuven, 3000, Belgium
- Holger Willems
- Relu, Innovatie- en incubatiecentrum KU Leuven, Leuven, 3000, Belgium
- Reinhilde Jacobs
- Omfsimpath Research Group, Department of Imaging and Pathology, Biomedical Sciences, KU Leuven and Dentomaxillofacial Imaging Department, University Hospitals Leuven, Leuven, 3000, Belgium; Department of Dental Medicine, Karolinska Institutet, Huddinge, 17177, Sweden
121
Tooth detection and classification on panoramic radiographs for automatic dental chart filing: improved classification by multi-sized input data. Oral Radiol 2020; 37:13-19. [PMID: 31893343; DOI: 10.1007/s11282-019-00418-w]
Abstract
OBJECTIVES Dental state plays an important role in forensic radiology in cases of large-scale disasters. However, dental information stored in dental clinics is generally not standardized or electronically filed. The purpose of this study is to develop a computerized system to detect and classify teeth in dental panoramic radiographs for automatic structured filing of dental charts. It can also be used as a preprocessing step for computerized image analysis of dental diseases. METHODS One hundred dental panoramic radiographs were employed for training and testing an object detection network using a fourfold cross-validation method. The detected bounding boxes were then classified into four tooth types (incisors, canines, premolars, and molars) and three tooth conditions (nonmetal restored, partially restored, and completely restored) using a classification network. Based on the visualization results, multisized image data were used for the double input layers of a convolutional neural network. The results were evaluated by detection sensitivity, the number of false-positive detections, and classification accuracy. RESULTS The tooth detection sensitivity was 96.4% with 0.5 false positives per case. The classification accuracies for tooth types and tooth conditions were 93.2% and 98.0%, respectively. Using the double-input-layer network, a 6-point increase in classification accuracy was achieved for the tooth types. CONCLUSIONS The proposed method can be useful for the automatic filing of dental charts for forensic identification and as preprocessing for dental disease prescreening.
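The "double input layer" idea, feeding the same tooth at two crop sizes through parallel convolutional branches whose features are concatenated before classification, can be sketched as follows (PyTorch). Branch depths, channel counts, and pooling sizes are assumptions; only the two-branch, concatenate-then-classify structure is taken from the abstract.

```python
import torch
import torch.nn as nn

class DualInputToothNet(nn.Module):
    def __init__(self, n_classes: int = 4):
        super().__init__()
        def branch():  # one small convolutional feature extractor per crop size
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
                nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            )
        self.tight, self.wide = branch(), branch()
        self.head = nn.Sequential(nn.Linear(2 * 32 * 4 * 4, 128), nn.ReLU(),
                                  nn.Linear(128, n_classes))

    def forward(self, x_tight, x_wide):  # tight crop + wider-context crop
        feats = torch.cat([self.tight(x_tight), self.wide(x_wide)], dim=1)
        return self.head(feats)
```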
122
Matsubara N, Teramoto A, Saito K, Fujita H. Bone suppression for chest X-ray image using a convolutional neural filter. Australas Phys Eng Sci Med 2019. [PMID: 31773501; DOI: 10.1007/s13246-019-00822-w]
Abstract
Chest X-rays are used for mass screening for the early detection of lung cancer. However, lung nodules are often overlooked because of bones overlapping the lung fields. Bone suppression techniques based on artificial intelligence have been developed to solve this problem. However, bone suppression accuracy needs improvement. In this study, we propose a convolutional neural filter (CNF) for bone suppression based on a convolutional neural network, which is frequently used in the medical field and has excellent performance in image processing. The CNF outputs a value for the bone component of the target pixel by taking as input the pixel values in the neighborhood of the target pixel. By processing all positions in the input image, a bone-extracted image is generated. Finally, the bone-suppressed image is obtained by subtracting the bone-extracted image from the original chest X-ray image. Bone suppression was most accurate when using a CNF with six convolutional layers, yielding a bone suppression rate of 89.2%. In addition, abnormalities, if present, were effectively imaged by suppressing only bone components and maintaining soft tissue. These results suggest that the chances of missing abnormalities may be reduced by using the proposed method. The proposed method is useful for bone suppression in chest X-ray images.
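The final subtraction step is simple; a NumPy sketch follows, where bone_filter stands in for the trained CNF (a hypothetical callable returning the per-pixel bone-component estimate).

```python
import numpy as np

def suppress_bone(chest_xray: np.ndarray, bone_filter) -> np.ndarray:
    """Subtract the CNF-predicted bone component from the original image."""
    bone = bone_filter(chest_xray)   # bone-extracted image
    soft = chest_xray.astype(np.float32) - bone.astype(np.float32)
    return np.clip(soft, 0, 255).astype(chest_xray.dtype)
```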
Affiliation(s)
- Naoki Matsubara
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan
- Atsushi Teramoto
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan.
- Kuniaki Saito
- Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake-city, Aichi, 470-1192, Japan
- Hiroshi Fujita
- Department of Electrical, Electronic & Computer Engineering, Faculty of Engineering, Gifu University, 1-1 Yanagido, Gifu-city, Gifu, 501-1194, Japan
123
Using deep learning techniques in medical imaging: a systematic review of applications on CT and PET. Artif Intell Rev 2019. [DOI: 10.1007/s10462-019-09788-3]
124
Mine Y, Suzuki S, Eguchi T, Murayama T. Applying deep artificial neural network approach to maxillofacial prostheses coloration. J Prosthodont Res 2019; 64:296-300. [PMID: 31554602; DOI: 10.1016/j.jpor.2019.08.006]
Abstract
PURPOSE Maxillofacial prosthetic rehabilitation replaces missing structures to restore the function and aesthetics lost to facial defects or injuries. Deep learning is rapidly expanding with respect to applications in medical fields. In this study, we applied an artificial neural network (ANN)-based deep learning approach to coloration support for fabricating maxillofacial prostheses. METHODS We compared two machine learning algorithms, ANN-based deep learning and the random forest algorithm, for determining the compounding amounts of pigment. We prepared 52 silicone elastomer specimens of varying colors and measured their CIE 1976 L*a*b* color space values with a spectrophotometer to form the input dataset. The output of these algorithms indicated the compounding amounts of four pigments. According to the algorithms' pigment compounding predictions, we prepared specimens for validation analysis and measured their CIE 1976 L*a*b* values. We determined the color differences between the real skin color of five research participants (22.3 ± 1.7 years) and that of the silicone elastomer specimens fabricated from the algorithm predictions using the CIEDE2000 (ΔE00) color difference formula. RESULTS The color differences (ΔE00 values) between the real skin color and the silicone elastomer validation specimens were 3.45 ± 0.87 (ANN) and 5.54 ± 1.41 (random forest), indicating that the deep ANN approach produced superior results with respect to the ΔE00 value compared with the random forest algorithm. CONCLUSIONS These results suggest that applying a deep ANN is a promising technique for the coloration of maxillofacial prostheses.
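The comparison above maps a three-value CIELAB measurement to four pigment amounts, a small multi-output regression problem. The scikit-learn sketch below shows the comparison pattern; MLPRegressor stands in for the paper's deep ANN, and all numbers are fabricated placeholders, not the study's data.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.neural_network import MLPRegressor

# Stand-in data: 52 specimens, CIELAB inputs, 4 pigment outputs (fabricated).
rng = np.random.default_rng(0)
lab = rng.uniform([40, 0, 5], [75, 25, 30], size=(52, 3))   # L*, a*, b*
pigments = rng.uniform(0.0, 1.0, size=(52, 4))              # compounding amounts

ann = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=5000,
                   random_state=0).fit(lab, pigments)
rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(lab, pigments)

target = np.array([[62.0, 12.0, 16.0]])   # hypothetical measured skin color
print("ANN pigments:", ann.predict(target))
print("RF  pigments:", rf.predict(target))
```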
Affiliation(s)
- Yuichi Mine: Department of Medical System Engineering, Division of Oral Health Sciences, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi Minami-ku, Hiroshima 734-8553, Japan; Translational Research Center, Hiroshima University, 1-2-3 Kasumi Minami-ku, Hiroshima 734-8553, Japan
- Shunsuke Suzuki: Department of Medical System Engineering, Division of Oral Health Sciences, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi Minami-ku, Hiroshima 734-8553, Japan
- Toru Eguchi: Graduate School of Engineering, Hiroshima University, 1-3-2 Kagamiyama, Higashi-hiroshima 739-0046, Japan
- Takeshi Murayama: Department of Medical System Engineering, Division of Oral Health Sciences, Graduate School of Biomedical and Health Sciences, Hiroshima University, 1-2-3 Kasumi Minami-ku, Hiroshima 734-8553, Japan
125
Deep learning in medical image analysis: A third eye for doctors. Journal of Stomatology, Oral and Maxillofacial Surgery 2019; 120:279-288. [DOI: 10.1016/j.jormas.2019.06.002] [Citation(s) in RCA: 90] [Impact Index Per Article: 15.0] [Received: 05/24/2019] [Revised: 06/11/2019] [Accepted: 06/18/2019] [Indexed: 12/22/2022]
126
Wang Y, Guan Q, Lao I, Wang L, Wu Y, Li D, Ji Q, Wang Y, Zhu Y, Lu H, Xiang J. Using deep convolutional neural networks for multi-classification of thyroid tumor by histopathology: a large-scale pilot study. Annals of Translational Medicine 2019; 7:468. [PMID: 31700904 DOI: 10.21037/atm.2019.08.54] [Citation(s) in RCA: 43] [Impact Index Per Article: 7.2] [Indexed: 12/27/2022]
Abstract
Background To explore whether deep convolutional neural networks (DCNNs) have the potential to improve diagnostic efficiency and increase the level of interobserver agreement in the classification of thyroid nodules in histopathological slides. Methods A total of 11,715 fragmented images from 806 patients' original histological images were divided into a training dataset and a test dataset at a ratio of 5:1 for each pathology type. Inception-ResNet-v2 and VGG-19 were trained on the training dataset and tested on the test dataset to determine their diagnostic efficiencies for different histologic types of thyroid nodules, including normal tissue, adenoma, nodular goiter, papillary thyroid carcinoma (PTC), follicular thyroid carcinoma (FTC), medullary thyroid carcinoma (MTC) and anaplastic thyroid carcinoma (ATC). Misdiagnoses were further analyzed. Results On the test set, VGG-19 yielded a better average diagnostic accuracy than Inception-ResNet-v2 (97.34% vs. 94.42%, respectively). Applied to the 7 pathology types, the VGG-19 model showed fragmentation accuracies of 88.33% for normal tissue, 98.57% for ATC, 98.89% for FTC, 100% for MTC, 97.77% for PTC, 100% for nodular goiter and 92.44% for adenoma. It achieved excellent diagnostic efficiencies for all the malignant types; normal tissue and adenoma were the most challenging histological types to classify. Conclusions The DCNN models, especially VGG-19, achieved satisfactory accuracies in differentiating thyroid tumors by histopathology. Analysis of the misdiagnosed cases revealed that normal tissue and adenoma were the most challenging histological types for the DCNN to differentiate, while all the malignant classifications achieved excellent diagnostic efficiencies. These results indicate that DCNN models may have potential for facilitating the histopathologic diagnosis of thyroid disease.
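Both networks in this study are standard ImageNet architectures adapted by transfer learning. A hedged sketch of the usual recipe for VGG-19, assuming a recent torchvision, follows; the 7-class head matches the seven tissue categories, while the feature-freezing strategy is a common choice rather than the authors' reported procedure.

```python
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained VGG-19 with the final layer swapped for a 7-class head
# (normal, adenoma, nodular goiter, PTC, FTC, MTC, ATC).
model = models.vgg19(weights="IMAGENET1K_V1")
model.classifier[6] = nn.Linear(model.classifier[6].in_features, 7)

# One common recipe: freeze the convolutional features and train the head
# first, then unfreeze for end-to-end fine-tuning at a lower learning rate.
for p in model.features.parameters():
    p.requires_grad = False
```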
Affiliation(s)
- Yunjun Wang: Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Qing Guan: Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Iweng Lao: Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China; Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai 200032, China
- Li Wang: Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Yi Wu: Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Duanshu Li: Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Qinghai Ji: Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Yu Wang: Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Yongxue Zhu: Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
- Hongtao Lu: Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai 200240, China
- Jun Xiang: Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai 200032, China
127
Guan Q, Wang Y, Ping B, Li D, Du J, Qin Y, Lu H, Wan X, Xiang J. Deep convolutional neural network VGG-16 model for differential diagnosing of papillary thyroid carcinomas in cytological images: a pilot study. J Cancer 2019; 10:4876-4882. [PMID: 31598159 PMCID: PMC6775529 DOI: 10.7150/jca.28769] [Citation(s) in RCA: 90] [Impact Index Per Article: 15.0] [Received: 07/25/2018] [Accepted: 07/28/2019] [Indexed: 12/22/2022]
Abstract
Objective: In this study, we exploited a VGG-16 deep convolutional neural network (DCNN) model to differentiate papillary thyroid carcinoma (PTC) from benign thyroid nodules using cytological images. Methods: A pathology-proven dataset was built from 279 cytological images of thyroid nodules. The images were cropped into fragmented images and divided into a training dataset and a test dataset. VGG-16 and Inception-v3 DCNNs were trained and tested to make the differential diagnoses. The characteristics of the tumor cell nuclei were quantified as contour count, perimeter, area, and mean pixel intensity and compared using independent Student's t-tests. Results: In the test group, the accuracy rates of the VGG-16 model and Inception-v3 on fragmented images were 97.66% and 92.75%, respectively, and the accuracy rates of VGG-16 and Inception-v3 at the patient level were 95% and 87.5%, respectively. The contour count, perimeter, area, and mean pixel intensity of PTC in fragmented images were all greater than those of the benign nodules (61.01±17.10 vs 47.00±24.08, p=0.000; 134.99±21.42 vs 62.40±29.15, p=0.000; 1770.89±627.22 vs 1157.27±722.23, p=0.013; 165.84±26.33 vs 132.94±28.73, p=0.000, respectively). Conclusion: In summary, after training with a large dataset, the VGG-16 DCNN model showed great potential in facilitating PTC diagnosis from cytological images, and the nuclear measurements of PTC were consistently larger than those of benign nodules.
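The nuclear measurements reported here (contour count, perimeter, area, mean pixel intensity) can be reproduced in spirit with OpenCV. The sketch below uses Otsu thresholding to segment nuclei, which is my assumption rather than the authors' stated method, and compares one feature between groups with SciPy's independent t-test on fabricated stand-in numbers.

```python
import cv2
import numpy as np
from scipy.stats import ttest_ind

def nucleus_features(gray):
    """Contour count, and perimeter/area/mean intensity of the largest
    nucleus-like blob in an 8-bit grayscale fragment (Otsu segmentation)."""
    _, mask = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    c = max(contours, key=cv2.contourArea)
    return (len(contours), cv2.arcLength(c, True), cv2.contourArea(c),
            float(gray[mask > 0].mean()))

# Fabricated per-image perimeter values for the two groups, compared with an
# independent Student's t-test as in the abstract.
ptc = np.random.normal(135, 21, 100)
benign = np.random.normal(62, 29, 100)
t, p = ttest_ind(ptc, benign)
print(f"t = {t:.2f}, p = {p:.3g}")
```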
Affiliation(s)
- Qing Guan: Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Yunjun Wang: Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Bo Ping: Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China; Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Duanshu Li: Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
- Jiajun Du: Department of Computer Science and Engineering, Shanghai Jiaotong University, Shanghai, China
- Yu Qin: Department of Computer Science and Engineering, Shanghai Jiaotong University, Shanghai, China
- Hongtao Lu: Department of Computer Science and Engineering, Shanghai Jiaotong University, Shanghai, China
- Xiaochun Wan: Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China; Department of Pathology, Fudan University Shanghai Cancer Center, Shanghai, 200032, China
- Jun Xiang: Department of Head and Neck Surgery, Fudan University Shanghai Cancer Center, Shanghai, 200032, China; Department of Oncology, Shanghai Medical College, Fudan University, Shanghai, 200032, China
128
Yu C, Xie S, Niu S, Ji Z, Fan W, Yuan S, Liu Q, Chen Q. Hyper-reflective foci segmentation in SD-OCT retinal images with diabetic retinopathy using deep convolutional neural networks. Med Phys 2019; 46:4502-4519. [PMID: 31315159 DOI: 10.1002/mp.13728] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Received: 02/02/2019] [Revised: 07/08/2019] [Accepted: 07/11/2019] [Indexed: 11/07/2022]
Affiliation(s)
- Chenchen Yu: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Sha Xie: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Sijie Niu: School of Information Science and Engineering, University of Jinan, Jinan 250022, China
- Zexuan Ji: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
- Wen Fan: Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing 210029, China
- Songtao Yuan: Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing 210029, China; The Affiliated Jiangsu Shengze Hospital of Nanjing Medical University, Suzhou 215228, China
- Qinghuai Liu: Department of Ophthalmology, The First Affiliated Hospital with Nanjing Medical University, Nanjing 210029, China
- Qiang Chen: School of Computer Science and Engineering, Nanjing University of Science and Technology, Nanjing 210094, China
129
Hung K, Montalvao C, Tanaka R, Kawai T, Bornstein MM. The use and performance of artificial intelligence applications in dental and maxillofacial radiology: A systematic review. Dentomaxillofac Radiol 2019; 49:20190107. [PMID: 31386555 DOI: 10.1259/dmfr.20190107] [Citation(s) in RCA: 165] [Impact Index Per Article: 27.5] [Indexed: 12/13/2022]
Abstract
OBJECTIVES To investigate the current clinical applications and diagnostic performance of artificial intelligence (AI) in dental and maxillofacial radiology (DMFR). METHODS Studies that developed or implemented AI models for applications in DMFR were sought by searching five electronic databases and four selected core journals in the field. Customized assessment criteria based on QUADAS-2 were adapted for the quality analysis of the included studies. RESULTS The initial electronic search yielded 1862 titles, and 50 studies were eventually included. Most studies focused on AI applications for automated localization of cephalometric landmarks, diagnosis of osteoporosis, classification/segmentation of maxillofacial cysts and/or tumors, and identification of periodontitis/periapical disease. The performance of the AI models varied among the different algorithms. CONCLUSION The AI models proposed in the included studies exhibited wide clinical applications in DMFR. Nevertheless, it is still necessary to further verify the reliability and applicability of these models before transferring them into clinical practice.
Affiliation(s)
- Kuofeng Hung: Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Carla Montalvao: Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Ray Tanaka: Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
- Taisuke Kawai: Department of Oral and Maxillofacial Radiology, School of Life Dentistry at Tokyo, Nippon Dental University, Tokyo, Japan
- Michael M Bornstein: Oral and Maxillofacial Radiology, Applied Oral Sciences and Community Dental Care, Faculty of Dentistry, The University of Hong Kong, Hong Kong SAR, China
130
Zhang Z, Sejdić E. Radiological images and machine learning: Trends, perspectives, and prospects. Comput Biol Med 2019; 108:354-370. [PMID: 31054502 PMCID: PMC6531364 DOI: 10.1016/j.compbiomed.2019.02.017] [Citation(s) in RCA: 77] [Impact Index Per Article: 12.8] [Received: 09/25/2018] [Revised: 02/19/2019] [Accepted: 02/19/2019] [Indexed: 01/18/2023]
Abstract
The application of machine learning to radiological images is an increasingly active research area that is expected to grow in the next five to ten years. Recent advances in machine learning have the potential to recognize and classify complex patterns from different radiological imaging modalities such as x-rays, computed tomography, magnetic resonance imaging and positron emission tomography imaging. In many applications, machine learning based systems have shown comparable performance to human decision-making. The applications of machine learning are the key ingredients of future clinical decision making and monitoring systems. This review covers the fundamental concepts behind various machine learning techniques and their applications in several radiological imaging areas, such as medical image segmentation, brain function studies and neurological disease diagnosis, as well as computer-aided systems, image registration, and content-based image retrieval systems. We also briefly discuss current challenges and future directions for the application of machine learning in radiological imaging. By giving insight into how to take advantage of machine learning powered applications, we expect that clinicians can prevent and diagnose diseases more accurately and efficiently.
Affiliation(s)
- Zhenwei Zhang: Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
- Ervin Sejdić: Department of Electrical and Computer Engineering, Swanson School of Engineering, University of Pittsburgh, Pittsburgh, PA, 15261, USA
131
Tuzoff DV, Tuzova LN, Bornstein MM, Krasnov AS, Kharchenko MA, Nikolenko SI, Sveshnikov MM, Bednenko GB. Tooth detection and numbering in panoramic radiographs using convolutional neural networks. Dentomaxillofac Radiol 2019; 48:20180051. [PMID: 30835551 PMCID: PMC6592580 DOI: 10.1259/dmfr.20180051] [Citation(s) in RCA: 186] [Impact Index Per Article: 31.0] [Received: 02/08/2018] [Revised: 01/28/2019] [Accepted: 01/31/2019] [Indexed: 01/18/2023]
Abstract
OBJECTIVES Analysis of dental radiographs is an important part of the diagnostic process in daily clinical practice. Interpretation by an expert includes teeth detection and numbering. In this project, a novel solution based on convolutional neural networks (CNNs) is proposed that performs this task automatically for panoramic radiographs. METHODS A data set of 1352 randomly chosen panoramic radiographs of adults was used to train the system. The CNN-based architectures for both the teeth detection and numbering tasks were analyzed. The teeth detection module processes the radiograph to define the boundaries of each tooth; it is based on the state-of-the-art Faster R-CNN architecture. The teeth numbering module classifies detected teeth images according to the FDI notation; it utilizes the classical VGG-16 CNN together with a heuristic algorithm that improves the results according to rules for the spatial arrangement of teeth. A separate testing set of 222 images was used to evaluate the performance of the system and to compare it to the expert level. RESULTS For the teeth detection task, the system achieves a sensitivity of 0.9941 and a precision of 0.9945. For teeth numbering, its sensitivity is 0.9800 and specificity is 0.9994. Experts detect teeth with a sensitivity of 0.9980 and a precision of 0.9998; their sensitivity for tooth numbering is 0.9893 and specificity is 0.9997. The detailed error analysis showed that the developed software system makes errors caused by factors similar to those affecting experts. CONCLUSIONS The performance of the proposed computer-aided diagnosis solution is comparable to the level of experts. Based on these findings, the method has the potential for practical application and further evaluation for automated dental radiograph analysis. Computer-aided teeth detection and numbering simplifies the process of filling out digital dental charts; automation could help to save time and improve the completeness of electronic dental records.
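A hedged sketch of the two-stage pipeline follows, with torchvision's Faster R-CNN implementation standing in for the paper's detection module and a deliberately simplified left-to-right ordering standing in for the published VGG-16-plus-heuristic numbering step.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Stage 1: detection. Two classes: background + tooth. In practice the model
# would be trained on annotated panoramic radiographs before inference.
detector = fasterrcnn_resnet50_fpn(weights=None, num_classes=2)
detector.eval()

with torch.no_grad():
    pred = detector([torch.rand(3, 600, 1200)])[0]   # one panoramic image

# Stage 2 (toy): keep confident boxes and order them left to right; the
# published system instead classifies each crop with VGG-16 and applies
# spatial-arrangement rules to assign FDI numbers.
boxes = pred["boxes"][pred["scores"] > 0.5]
left_to_right = boxes[torch.argsort(boxes[:, 0])]
```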
Affiliation(s)
- Michael M. Bornstein: Applied Oral Sciences, Faculty of Dentistry, The University of Hong Kong, Hong Kong, China
- Alexey S. Krasnov: Dmitry Rogachev National Research Center of Pediatric Hematology, Oncology and Immunology, Moscow, Russia
132
PSO optimized 1-D CNN-SVM architecture for real-time detection and classification applications. Comput Biol Med 2019; 108:85-92. [PMID: 31003183 DOI: 10.1016/j.compbiomed.2019.03.017] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Received: 09/14/2018] [Revised: 03/17/2019] [Accepted: 03/18/2019] [Indexed: 11/22/2022]
Abstract
In this paper, we propose a novel Particle Swarm Optimized (PSO) One-Dimensional Convolutional Neural Network with Support Vector Machine (1-D CNN-SVM) architecture for the real-time detection and classification of diseases. The performance of the proposed architecture is validated with a novel hardware model for detecting chronic kidney disease (CKD) from saliva samples. For detecting CKD, the urea concentration in the saliva sample is monitored by converting it into ammonia: on hydrolysis in the presence of the urease enzyme, urea produces ammonia, which is then measured using a semiconductor gas sensor. The sensor response is given to the proposed architecture for feature extraction and classification. The performance of the architecture is optimized by tuning the parameter values with a PSO algorithm. The proposed architecture outperforms current conventional methods, as it combines strong feature extraction and classification techniques. Optimal features are extracted directly from the raw signal, reducing computational time and complexity. The proposed architecture achieved an accuracy of 98.25%.
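The hybrid architecture uses the 1-D CNN purely as a learned feature extractor whose output feeds an SVM instead of a softmax layer. A minimal sketch follows; the layer sizes, the 128-sample signal length, and the toy labels are assumptions, and in practice the CNN would be trained (with the PSO tuning its parameters) before the features are exported.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.svm import SVC

class OneDCNN(nn.Module):
    """1-D CNN over a gas-sensor response; the flattened feature map is
    exported to an SVM rather than ending in a softmax layer."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 8, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(8, 16, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Flatten(),
        )

    def forward(self, x):
        return self.features(x)

signals = torch.randn(40, 1, 128)              # stand-in sensor time series
labels = np.random.randint(0, 2, 40)           # toy CKD / healthy labels
feats = OneDCNN()(signals).detach().numpy()    # (40, 512) feature vectors
svm = SVC(kernel="rbf", C=1.0).fit(feats, labels)
```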
133
Hwang JJ, Jung YH, Cho BH, Heo MS. An overview of deep learning in the field of dentistry. Imaging Sci Dent 2019; 49:1-7. [PMID: 30941282 PMCID: PMC6444007 DOI: 10.5624/isd.2019.49.1.1] [Citation(s) in RCA: 141] [Impact Index Per Article: 23.5] [Received: 11/27/2018] [Revised: 12/15/2018] [Accepted: 12/17/2018] [Indexed: 01/23/2023]
Abstract
Purpose Artificial intelligence (AI), represented by deep learning, can be used for real-life problems and is applied across all sectors of society, including the medical and dental fields. The purpose of this study was to review articles on deep learning applied to the field of oral and maxillofacial radiology. Materials and Methods A systematic review was performed using the PubMed, Scopus, and IEEE Xplore databases to identify English-language articles using deep learning. The variables extracted from 25 articles included network architecture, number of training data, evaluation results, pros and cons, study object, and imaging modality. Results Convolutional neural networks (CNNs) were the main network component. The number of published papers and the size of the training datasets tended to increase, covering various fields of dentistry. Conclusion Dental public datasets need to be constructed, and data standardization is necessary for the clinical application of deep learning in the dental field.
Affiliation(s)
- Jae-Joon Hwang: Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Dental Research Institute, Yangsan, Korea
- Yun-Hoa Jung: Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Dental Research Institute, Yangsan, Korea
- Bong-Hae Cho: Department of Oral and Maxillofacial Radiology, School of Dentistry, Pusan National University, Dental Research Institute, Yangsan, Korea
- Min-Suk Heo: Department of Oral and Maxillofacial Radiology and Dental Research Institute, School of Dentistry, Seoul National University, Seoul, Korea
134
Zhang YD, Dong Z, Chen X, Jia W, Du S, Muhammad K, Wang SH. Image based fruit category classification by 13-layer deep convolutional neural network and data augmentation. Multimedia Tools and Applications 2019; 78:3613-3632. [DOI: 10.1007/s11042-017-5243-3] [Citation(s) in RCA: 78] [Impact Index Per Article: 13.0] [Received: 06/20/2017] [Revised: 08/16/2017] [Accepted: 09/20/2017] [Indexed: 08/30/2023]
135
Yurtsever M, Yurtsever U. Use of a convolutional neural network for the classification of microbeads in urban wastewater. Chemosphere 2019; 216:271-280. [PMID: 30384295 DOI: 10.1016/j.chemosphere.2018.10.084] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Received: 05/23/2018] [Revised: 10/08/2018] [Accepted: 10/14/2018] [Indexed: 06/08/2023]
Abstract
Scientists are seeking a practical model that can serve as a standard for sorting, identifying, and characterizing microplastics, which are common in water sources and wastewater. The microbeads (MBs) used in cosmetics and discharged into sewer systems after use cause substantial microplastics pollution in the receiving waters. Today, the use of plastic microbeads in cosmetics is banned, and existing uses are to be discontinued within a few years. Yet there are no restrictions on the use of microbeads in a number of industries, cleaning products, pharmaceuticals, and medical practices. In this context, the determination and classification of the MBs that have been, and continue to be, discharged into water sources represent crucial problems. In this work, we examined a new approach for the classification of MBs based on microscopic images. For classification purposes, a convolutional neural network (CNN), a deep learning algorithm, was employed, with the GoogLeNet architecture serving as the model. The network was built from scratch, trained, and then tested on a total of 42,928 images containing MBs from 5 distinct cleansers. The CNN achieved a classification performance of 89% for MBs in wastewater.
Affiliation(s)
- Meral Yurtsever: Department of Environmental Engineering, Sakarya University, 54187, Sakarya, Turkey
- Ulaş Yurtsever: Department of Computer and Information Engineering, Sakarya University, 54187, Sakarya, Turkey
136
Onishi Y, Teramoto A, Tsujimoto M, Tsukamoto T, Saito K, Toyama H, Imaizumi K, Fujita H. Automated Pulmonary Nodule Classification in Computed Tomography Images Using a Deep Convolutional Neural Network Trained by Generative Adversarial Networks. BioMed Research International 2019; 2019:6051939. [PMID: 30719445 PMCID: PMC6334309 DOI: 10.1155/2019/6051939] [Citation(s) in RCA: 37] [Impact Index Per Article: 6.2] [Received: 09/14/2018] [Revised: 11/24/2018] [Accepted: 12/18/2018] [Indexed: 02/01/2023]
Abstract
Lung cancer is a leading cause of death worldwide. Although computed tomography (CT) examinations are frequently used for lung cancer diagnosis, it can be difficult to distinguish between benign and malignant pulmonary nodules on the basis of CT images alone. Therefore, a bronchoscopic biopsy may be conducted if malignancy is suspected following CT examination. However, biopsies are highly invasive, and patients with benign nodules may undergo many unnecessary biopsies. To prevent this, an imaging diagnosis with high classification accuracy is essential. In this study, we investigate the automated classification of pulmonary nodules in CT images using a deep convolutional neural network (DCNN). We use generative adversarial networks (GANs) to generate additional images when only small amounts of data are available, which is a common problem in medical research, and evaluate whether the classification accuracy is improved by generating a large number of new pulmonary nodule images with the GAN. Using the proposed method, CT images of 60 cases with pathological diagnoses confirmed by biopsy are analyzed. The benign nodules assessed in this study are difficult for radiologists to differentiate because they cannot be rejected as being malignant. A volume of interest centered on the pulmonary nodule is extracted from the CT images, and further images are created using axial sections and augmented data. The DCNN is trained using nodule images generated by the GAN and then fine-tuned using the actual nodule images to allow the DCNN to distinguish between benign and malignant nodules. This pretraining and fine-tuning process makes it possible to distinguish 66.7% of benign nodules and 93.9% of malignant nodules. These results indicate that the proposed method improves the classification accuracy by approximately 20% in comparison with training using only the original images.
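The augmentation idea rests on a standard GAN: a generator maps noise vectors to synthetic nodule patches and a discriminator scores them against real ones. Below is a minimal DCGAN-style pair for 64x64 patches, which is an illustrative assumption rather than the paper's architecture; the adversarial training loop and the subsequent pretrain-then-fine-tune of the classifier are summarized in comments.

```python
import torch
import torch.nn as nn

# Generator: 100-dim noise vector -> 64x64 synthetic nodule patch.
G = nn.Sequential(
    nn.ConvTranspose2d(100, 128, 4, 1, 0), nn.BatchNorm2d(128), nn.ReLU(),
    nn.ConvTranspose2d(128, 64, 4, 2, 1), nn.BatchNorm2d(64), nn.ReLU(),
    nn.ConvTranspose2d(64, 32, 4, 2, 1), nn.BatchNorm2d(32), nn.ReLU(),
    nn.ConvTranspose2d(32, 16, 4, 2, 1), nn.BatchNorm2d(16), nn.ReLU(),
    nn.ConvTranspose2d(16, 1, 4, 2, 1), nn.Tanh(),
)
# Discriminator: patch -> real/fake score.
D = nn.Sequential(
    nn.Conv2d(1, 16, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(16, 32, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, 2, 1), nn.LeakyReLU(0.2),
    nn.Conv2d(64, 1, 8), nn.Sigmoid(),
)

fake = G(torch.randn(4, 100, 1, 1))   # (4, 1, 64, 64) synthetic patches
score = D(fake)                       # (4, 1, 1, 1) real/fake scores

# Workflow per the abstract: train G and D adversarially on real patches,
# pretrain the classifier DCNN on G's outputs, then fine-tune it on the
# real benign/malignant nodule images.
```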
Affiliation(s)
- Yuya Onishi: Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake City, Aichi 470-1192, Japan
- Atsushi Teramoto: Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake City, Aichi 470-1192, Japan
- Masakazu Tsujimoto: Fujita Health University Hospital, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake City, Aichi 470-1192, Japan
- Tetsuya Tsukamoto: School of Medicine, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake City, Aichi 470-1192, Japan
- Kuniaki Saito: Graduate School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake City, Aichi 470-1192, Japan
- Hiroshi Toyama: School of Medicine, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake City, Aichi 470-1192, Japan
- Kazuyoshi Imaizumi: School of Medicine, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake City, Aichi 470-1192, Japan
137
Utilizing Pretrained Deep Learning Models for Automated Pulmonary Tuberculosis Detection Using Chest Radiography. Intelligent Information and Database Systems 2019. [DOI: 10.1007/978-3-030-14802-7_34] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.3] [Indexed: 12/13/2022]
138
Mupparapu M, Chen YC, Hong DK, Wu CW. The Use of Deep Convolutional Neural Networks in Biomedical Imaging: A Review. Journal of Orofacial Sciences 2019. [DOI: 10.4103/jofs.jofs_55_19] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Indexed: 11/04/2022]
139
Ramos LA, van der Steen WE, Sales Barros R, Majoie CBLM, van den Berg R, Verbaan D, Vandertop WP, Zijlstra IJAJ, Zwinderman AH, Strijkers GJ, Olabarriaga SD, Marquering HA. Machine learning improves prediction of delayed cerebral ischemia in patients with subarachnoid hemorrhage. J Neurointerv Surg 2018; 11:497-502. [DOI: 10.1136/neurintsurg-2018-014258] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Received: 07/16/2018] [Revised: 10/09/2018] [Accepted: 10/11/2018] [Indexed: 01/04/2023]
Abstract
Background and purpose: Delayed cerebral ischemia (DCI) is a severe complication in patients with aneurysmal subarachnoid hemorrhage. Several associated predictors have been previously identified. However, their predictive value is generally low. We hypothesize that machine learning (ML) algorithms for the prediction of DCI using a combination of clinical and image data lead to higher predictive accuracy than previously applied logistic regressions. Materials and methods: Clinical and baseline CT image data from 317 patients with aneurysmal subarachnoid hemorrhage were included. Three types of analysis were performed to predict DCI. First, the prognostic value of known predictors was assessed with logistic regression models. Second, ML models were created using all clinical variables. Third, image features were extracted from the CT images using an auto-encoder and combined with clinical data to create ML models. Accuracy was evaluated based on the area under the curve (AUC), sensitivity and specificity with 95% CI. Results: The best AUC of the logistic regression models for known predictors was 0.63 (95% CI 0.62 to 0.63). For the ML algorithms with clinical data there was a small but statistically significant improvement in the AUC to 0.68 (95% CI 0.65 to 0.69). Notably, aneurysm width and height were included in many of the ML models. The AUC was highest for ML models that also included image features: 0.74 (95% CI 0.72 to 0.75). Conclusion: ML algorithms significantly improve the prediction of DCI in patients with aneurysmal subarachnoid hemorrhage, particularly when image features are also included. Our experiments suggest that aneurysm characteristics are also associated with the development of DCI.
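The third analysis combines learned image features with clinical variables. The sketch below shows that pattern: a small convolutional auto-encoder yields a compact code per CT scan, the codes are concatenated with the clinical columns, and a classifier is fitted. All data here are random stand-ins, and logistic regression is used only as a placeholder for the ML models the authors compared.

```python
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression

class AutoEncoder(nn.Module):
    """Compresses a 64x64 CT slice to a 32-value code; after training with a
    reconstruction (MSE) loss, the encoder output serves as image features."""
    def __init__(self, code=32):
        super().__init__()
        self.enc = nn.Sequential(
            nn.Conv2d(1, 8, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(8, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Flatten(), nn.Linear(16 * 16 * 16, code),
        )
        self.dec = nn.Sequential(
            nn.Linear(code, 16 * 16 * 16), nn.ReLU(),
            nn.Unflatten(1, (16, 16, 16)),
            nn.ConvTranspose2d(16, 8, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(8, 1, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.dec(self.enc(x))

ae = AutoEncoder()                        # would be trained before use
scans = torch.rand(317, 1, 64, 64)        # stand-in baseline CT data
codes = ae.enc(scans).detach().numpy()    # learned image features
clinical = np.random.rand(317, 10)        # stand-in clinical variables
y = np.random.randint(0, 2, 317)          # toy DCI outcome labels
X = np.hstack([clinical, codes])
clf = LogisticRegression(max_iter=1000).fit(X, y)
```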
140
da Silva GLF, Valente TLA, Silva AC, de Paiva AC, Gattass M. Convolutional neural network-based PSO for lung nodule false positive reduction on CT images. Computer Methods and Programs in Biomedicine 2018; 162:109-118. [PMID: 29903476 DOI: 10.1016/j.cmpb.2018.05.006] [Citation(s) in RCA: 68] [Impact Index Per Article: 9.7] [Received: 04/01/2017] [Revised: 09/15/2017] [Accepted: 05/03/2018] [Indexed: 06/08/2023]
Abstract
BACKGROUND AND OBJECTIVE Detection of lung nodules is critical in CAD systems: because of their low density and their contrast being similar to that of other structures, numerous false positives (FPs) are generated. Therefore, this study proposes a methodology to reduce the number of FPs using a deep learning technique in conjunction with an evolutionary technique. METHOD The particle swarm optimization (PSO) algorithm was used to optimize the hyperparameters of a convolutional neural network (CNN) in order to enhance the network performance and eliminate the requirement for manual search. RESULTS The methodology was tested on computed tomography (CT) scans from the Lung Image Database Consortium and Image Database Resource Initiative (LIDC-IDRI), achieving a highest accuracy of 97.62%, sensitivity of 92.20%, specificity of 98.64%, and area under the receiver operating characteristic (ROC) curve of 0.955. CONCLUSION The results demonstrate the high potential of the PSO algorithm for identifying optimal CNN hyperparameters for classifying lung nodule candidates into nodules and non-nodules, increasing the sensitivity rates in the FP reduction step of CAD systems.
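PSO itself is compact enough to write out. The sketch below implements a minimal, box-bounded PSO in NumPy; the fitness function shown is a toy stand-in for "train the CNN with these hyperparameters and return validation accuracy", and the inertia and acceleration constants are common defaults, not the paper's values.

```python
import numpy as np

def pso_search(fitness, bounds, n_particles=10, iters=20,
               w=0.7, c1=1.5, c2=1.5):
    """Minimal PSO maximizing `fitness` over a box given by `bounds`,
    a list of (low, high) pairs, one per hyperparameter."""
    lo, hi = np.array(bounds, dtype=float).T
    pos = np.random.uniform(lo, hi, (n_particles, len(bounds)))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_val = np.array([fitness(p) for p in pos])
    gbest = pbest[pbest_val.argmax()].copy()
    for _ in range(iters):
        r1, r2 = np.random.rand(2, *pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        vals = np.array([fitness(p) for p in pos])
        better = vals > pbest_val
        pbest[better], pbest_val[better] = pos[better], vals[better]
        gbest = pbest[pbest_val.argmax()].copy()
    return gbest

# Toy fitness standing in for a CNN validation score; the two dimensions
# could be, e.g., log10 learning rate and dropout rate.
best = pso_search(lambda p: -(p[0] + 3) ** 2 - (p[1] - 0.5) ** 2,
                  bounds=[(-5, -1), (0.0, 0.9)])
print("best hyperparameters:", best)
```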
Affiliation(s)
- Giovanni Lucca França da Silva: Federal University of Maranhão - UFMA, Applied Computing Group - NCA, Av. dos Portugueses, SN, Bacanga, São Luís, MA 65085-580, Brazil
- Thales Levi Azevedo Valente: Pontifical Catholic University of Rio de Janeiro - PUC-Rio, R. São Vicente, 225, Gávea, Rio de Janeiro, RJ 22453-900, Brazil
- Aristófanes Corrêa Silva: Federal University of Maranhão - UFMA, Applied Computing Group - NCA, Av. dos Portugueses, SN, Bacanga, São Luís, MA 65085-580, Brazil
- Anselmo Cardoso de Paiva: Federal University of Maranhão - UFMA, Applied Computing Group - NCA, Av. dos Portugueses, SN, Bacanga, São Luís, MA 65085-580, Brazil
- Marcelo Gattass: Pontifical Catholic University of Rio de Janeiro - PUC-Rio, R. São Vicente, 225, Gávea, Rio de Janeiro, RJ 22453-900, Brazil
141
Automatic classification of ovarian cancer types from cytological images using deep convolutional neural networks. Biosci Rep 2018; 38:BSR20180289. [PMID: 29572387 PMCID: PMC5938423 DOI: 10.1042/bsr20180289] [Citation(s) in RCA: 39] [Impact Index Per Article: 5.6] [Received: 02/22/2018] [Revised: 03/20/2018] [Accepted: 03/20/2018] [Indexed: 12/29/2022]
Abstract
Ovarian cancer is one of the most common gynecologic malignancies. Accurate classification of ovarian cancer types (serous carcinoma, mucinous carcinoma, endometrioid carcinoma, clear cell carcinoma) is an essential part of the differential diagnosis. Computer-aided diagnosis (CADx) can provide useful advice for pathologists in reaching the correct diagnosis. In our study, we employed a deep convolutional neural network (DCNN) based on AlexNet to automatically classify the different types of ovarian cancers from cytological images. The DCNN consists of five convolutional layers, three max pooling layers, and two fully connected layers. We then trained the model on two input datasets separately: the original images, and augmented images produced by image enhancement and image rotation. The testing results, obtained by 10-fold cross-validation, show that the classification accuracy improved from 72.76% to 78.20% when augmented images were used as training data. The developed scheme is useful for classifying ovarian cancers from cytological images.
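The augmentation step (rotation plus image enhancement) maps naturally onto standard torchvision transforms. The sketch below is an assumption about comparable operations, not the authors' exact pipeline; applied on the fly during training, it multiplies the effective dataset without new slides.

```python
from torchvision import transforms

# Rotation plus brightness/contrast jitter standing in for the "image
# enhancement and image rotation" augmentation; parameters are assumptions.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=90),
    transforms.ColorJitter(brightness=0.3, contrast=0.3),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
# Applied to each PIL cytological image anew at every epoch, this multiplies
# the effective training set without collecting new slides.
```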
142
Radiological biomarkers on the cone beam computed tomography basis for the functional asymmetry detection of the maxilla and mandibula in young people. World of Medicine and Biology 2018. [DOI: 10.26724/2079-8334-2018-3-65-167-171] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/17/2022]
143
Teramoto A, Tsukamoto T, Kiriyama Y, Fujita H. Automated Classification of Lung Cancer Types from Cytological Images Using Deep Convolutional Neural Networks. BioMed Research International 2017; 2017:4067832. [PMID: 28884120 PMCID: PMC5572620 DOI: 10.1155/2017/4067832] [Citation(s) in RCA: 107] [Impact Index Per Article: 13.4] [Received: 05/05/2017] [Revised: 06/20/2017] [Accepted: 07/05/2017] [Indexed: 02/08/2023]
Abstract
Lung cancer is a leading cause of death worldwide. Currently, in the differential diagnosis of lung cancer, accurate classification of cancer types (adenocarcinoma, squamous cell carcinoma, and small cell carcinoma) is required. However, improving the accuracy and stability of diagnosis is challenging. In this study, we developed an automated classification scheme for lung cancers presented in microscopic images using a deep convolutional neural network (DCNN), a major deep learning technique. The DCNN used for classification consists of three convolutional layers, three pooling layers, and two fully connected layers. In the evaluation experiments, the DCNN was trained on our original database using a graphics processing unit. Microscopic images were first cropped and resampled to obtain images with a resolution of 256 × 256 pixels, and, to prevent overfitting, the collected images were augmented via rotation, flipping, and filtering. The probabilities of the three types of cancer were estimated using the developed scheme, and its classification accuracy was evaluated using threefold cross-validation. Approximately 71% of the images were classified correctly, which is on par with the accuracy of cytotechnologists and pathologists. Thus, the developed scheme is useful for the classification of lung cancers from microscopic images.
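The stated topology (three convolutional layers, three pooling layers, two fully connected layers, 256 × 256 inputs) is small enough to write out directly. In the PyTorch sketch below, the channel widths, kernel sizes, and dropout are illustrative assumptions; only the layer counts and input size follow the abstract.

```python
import torch
import torch.nn as nn

class CytologyDCNN(nn.Module):
    """Three conv layers, three pooling layers, two fully connected layers,
    for 256x256 RGB inputs and three cancer types."""
    def __init__(self, num_classes=3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(32, 64, 5, padding=2), nn.ReLU(), nn.MaxPool2d(4),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.MaxPool2d(4),
            nn.Flatten(),
            nn.Linear(128 * 4 * 4, 256), nn.ReLU(), nn.Dropout(0.5),
            nn.Linear(256, num_classes),
        )

    def forward(self, x):
        return self.net(x)

probs = torch.softmax(CytologyDCNN()(torch.rand(2, 3, 256, 256)), dim=1)
print(probs.shape)  # torch.Size([2, 3]): adeno / squamous / small cell
```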
Affiliation(s)
- Atsushi Teramoto: School of Health Sciences, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake City, Aichi 470-1192, Japan
- Tetsuya Tsukamoto: School of Medicine, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake City, Aichi 470-1192, Japan
- Yuka Kiriyama: School of Medicine, Fujita Health University, 1-98 Dengakugakubo, Kutsukake-cho, Toyoake City, Aichi 470-1192, Japan
- Hiroshi Fujita: Graduate School of Medicine, Gifu University, 1-1 Yanagido, Gifu 501-1194, Japan
144
Lopes UK, Valiati JF. Pre-trained convolutional neural networks as feature extractors for tuberculosis detection. Comput Biol Med 2017; 89:135-143. [PMID: 28800442 DOI: 10.1016/j.compbiomed.2017.08.001] [Citation(s) in RCA: 100] [Impact Index Per Article: 12.5] [Received: 02/12/2017] [Revised: 08/01/2017] [Accepted: 08/01/2017] [Indexed: 02/07/2023]
Abstract
It is estimated that in 2015, approximately 1.8 million people infected by tuberculosis died, most of them in developing countries. Many of those deaths could have been prevented if the disease had been detected at an earlier stage, but the most advanced diagnosis methods are still cost-prohibitive for mass adoption. One of the most popular tuberculosis diagnosis methods is the analysis of frontal thoracic radiographs; however, the impact of this method is diminished by the need for individual analysis of each radiograph by properly trained radiologists. Significant research exists on automating diagnosis by applying computational techniques to medical images, thereby eliminating the need for individual image analysis and greatly diminishing overall costs. In addition, recent improvements in deep learning have accomplished excellent results in classifying images across diverse domains, but its application to tuberculosis diagnosis remains limited. Thus, the focus of this work is to advance research in the area by presenting three proposals for the application of pre-trained convolutional neural networks as feature extractors to detect the disease. The proposals presented in this work are implemented and compared to the current literature. The results obtained are competitive with published works, demonstrating the potential of pre-trained convolutional networks as medical image feature extractors.
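Using a pre-trained network as a frozen feature extractor means cutting off its classification head and fitting a conventional classifier on the emitted vectors. A hedged sketch follows, assuming a recent torchvision; ResNet-18 is chosen for brevity (the paper evaluates other ImageNet networks), and the data and labels are fabricated stand-ins.

```python
import numpy as np
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import SVC

# Frozen ImageNet backbone as a fixed feature extractor.
backbone = models.resnet18(weights="IMAGENET1K_V1")
backbone.fc = nn.Identity()            # drop the 1000-class head
backbone.eval()

with torch.no_grad():
    xrays = torch.rand(16, 3, 224, 224)    # stand-in radiographs
    feats = backbone(xrays).numpy()        # (16, 512) feature vectors

labels = np.random.randint(0, 2, 16)       # toy TB / normal labels
clf = SVC().fit(feats, labels)             # conventional classifier on top
```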
Affiliation(s)
- U K Lopes: DevGrid, 482, Italia Avenue, Caxias do Sul, RS, Brazil
- J F Valiati: Artificial Intelligence Engineers - AIE, 262, Vieira de Castro Street, Porto Alegre, RS, Brazil
145
Suzuki K. Overview of deep learning in medical imaging. Radiol Phys Technol 2017; 10:257-273. [PMID: 28689314 DOI: 10.1007/s12194-017-0406-5] [Citation(s) in RCA: 399] [Impact Index Per Article: 49.9] [Received: 06/29/2017] [Accepted: 06/29/2017] [Indexed: 02/07/2023]
Abstract
The use of machine learning (ML) has been increasing rapidly in the medical imaging field, including computer-aided diagnosis (CAD), radiomics, and medical image analysis. Recently, an ML area called deep learning emerged in the computer vision field and became very popular in many fields. It started from an event in late 2012, when a deep-learning approach based on a convolutional neural network (CNN) won an overwhelming victory in the best-known worldwide computer vision competition, ImageNet Classification. Since then, researchers in virtually all fields, including medical imaging, have started actively participating in the explosively growing field of deep learning. In this paper, the area of deep learning in medical imaging is overviewed, including (1) what changed in machine learning before and after the introduction of deep learning, (2) what the source of the power of deep learning is, (3) two major deep-learning models: a massive-training artificial neural network (MTANN) and a convolutional neural network (CNN), (4) similarities and differences between the two models, and (5) their applications to medical imaging. This review shows that ML with feature input (or feature-based ML) was dominant before the introduction of deep learning, and that the major and essential difference between ML before and after deep learning is the direct learning of image data without object segmentation or feature extraction; this direct learning is the source of the power of deep learning, although the depth of the model is an important attribute. The class of ML with image input (or image-based ML), which includes deep learning, has a long history, but recently gained popularity due to the new terminology, deep learning. There are two major models in this class of ML in medical imaging, MTANN and CNN, which have similarities as well as several differences. In our experience, MTANNs were substantially more efficient to develop, achieved higher performance, and required fewer training cases than CNNs. "Deep learning", or ML with image input, in medical imaging is an explosively growing, promising field. It is expected that ML with image input will be the mainstream area in the field of medical imaging in the next few decades.
Affiliation(s)
- Kenji Suzuki: Medical Imaging Research Center and Department of Electrical and Computer Engineering, Illinois Institute of Technology, 3440 South Dearborn Street, Chicago, IL, 60616, USA; World Research Hub Initiative (WRHI), Tokyo Institute of Technology, Tokyo, Japan