1
Luengnaruemitchai G, Kaewmahanin W, Munthuli A, Phienphanich P, Puangarom S, Sangchocanonta S, Jariyakosol S, Hirunwiwatkul P, Tantibundhit C. Alzheimer's Together with Mild Cognitive Impairment Screening Using Polar Transformation of Middle Zone of Fundus Images Based Deep Learning. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083188 DOI: 10.1109/embc40787.2023.10340463]
Abstract
Alzheimer's disease (AD) and Mild Cognitive Impairment (MCI) are an increasingly major health problem in the elderly. However, current clinical methods of Alzheimer's detection are expensive and difficult to access, making detection inconvenient and unsuitable for developing countries such as Thailand. Thus, we developed a method for screening AD together with MCI by fine-tuning a pre-trained Densely Connected Convolutional Network (DenseNet-121) model on the middle zone of polar-transformed fundus images. The polar transformation of the middle zone of the fundus is a key factor that helps the model extract features more effectively and enhances model accuracy. The dataset was divided into 2 groups: normal and abnormal (AD and MCI). This method can classify between normal and abnormal patients with 96% accuracy, 99% sensitivity, 90% specificity, 95% precision, and 97% F1 score. The parts of both MCI and AD input images with the greatest impact on the classification score, visualized by Grad-CAM++, lie in the superior and inferior retinal quadrants. Clinical relevance- Polar transformation of the middle zone of retinal fundus images is a key factor that enhances classification accuracy, and Grad-CAM++ localizes the most influential regions of MCI and AD images to the superior and inferior retinal quadrants.
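The paper's implementation is not included here, but the polar resampling of an annular "middle zone" can be sketched with NumPy alone. The function name, zone radii, and grid sizes below are illustrative assumptions, not the authors' published parameters.

```python
import numpy as np

def polar_middle_zone(img, r_inner, r_outer, n_r=64, n_theta=256):
    """Resample an annular 'middle zone' of an image onto a (radius x angle)
    rectangular grid by nearest-neighbour lookup.  Radii, grid sizes, and
    the assumption that the zone is centred are illustrative only."""
    h, w = img.shape[:2]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    radii = np.linspace(r_inner, r_outer, n_r)
    thetas = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)
    rr, tt = np.meshgrid(radii, thetas, indexing="ij")
    ys = np.clip(np.rint(cy + rr * np.sin(tt)).astype(int), 0, h - 1)
    xs = np.clip(np.rint(cx + rr * np.cos(tt)).astype(int), 0, w - 1)
    return img[ys, xs]

# Toy check: in a radial-gradient "image", every output row (a fixed radius)
# should be roughly constant.
y, x = np.mgrid[0:128, 0:128]
radial = np.hypot(y - 63.5, x - 63.5)
polar = polar_middle_zone(radial, r_inner=20, r_outer=50)
```

Rows of the output correspond to fixed radii, so concentric retinal structures become horizontal bands, which is the property that reportedly helps the CNN extract features.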
2
Sangchocanonta S, Ingpochai S, Puangarom S, Munthuli A, Phienphanich P, Itthipanichpong R, Chansangpetch S, Manassakorn A, Ratanawongphaibul K, Tantisevi V, Rojanapongpun P, Tantibundhit C. Donut: Augmentation Technique for Enhancing The Efficacy of Glaucoma Suspect Screening. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. [PMID: 38083547 DOI: 10.1109/embc40787.2023.10341115]
Abstract
Glaucoma is the second most common cause of blindness. A glaucoma suspect has risk factors that increase the possibility of developing glaucoma, and evaluating a patient with suspected glaucoma is challenging. The "donut method" was developed in this study as an augmentation technique for obtaining high-quality fundus images for training the ConvNeXt-Small model. Fundus images from GlauCUTU-DATA, labelled by at least 3 randomly assigned well-trained ophthalmologists (4 in cases with no majority agreement) and grouped by unanimous agreement (3/3) or majority agreement (2/3), were used in the experiment. The experimental results showed that training with the "donut method" increased the sensitivity for glaucoma suspects from 52.94% to 70.59% on the 3/3 data and from 37.78% to 42.22% on the 2/3 data. The method enhanced the efficacy of classifying glaucoma suspects while keeping sensitivity and specificity balanced. Furthermore, three well-trained ophthalmologists agreed that the Grad-CAM++ heatmaps obtained from the model trained with the proposed method highlighted the clinical criteria. Clinical relevance- The donut method augments fundus images with a focus on the optic nerve head region to enhance the efficacy of glaucoma suspect screening, and Grad-CAM++ is used to highlight the clinical criteria.
3
Puangarom S, Twinvitoo A, Sangchocanonta S, Munthuli A, Phienphanich P, Itthipanichpong R, Ratanawongphaibul K, Chansangpetch S, Manassakorn A, Tantisevi V, Rojanapongpun P, Tantibundhit C. 3-LbNets: Tri-Labeling Deep Convolutional Neural Network for the Automated Screening of Glaucoma, Glaucoma Suspect, and No Glaucoma in Fundus Images. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-5. [PMID: 38083236 DOI: 10.1109/embc40787.2023.10340102]
Abstract
Early detection of glaucoma, a widespread visual disease, can prevent vision loss. Unfortunately, ophthalmologists are scarce, and clinical diagnosis requires much time and cost. Therefore, we developed a screening Tri-Labeling deep convolutional neural network (3-LbNets) to identify no glaucoma, glaucoma suspect, and glaucoma cases in global fundus images. 3-LbNets extracts important features from 3 different labeling models and feeds them into an artificial neural network (ANN) to produce the final result. The method was effective, with an AUC of 98.66% for no glaucoma, 97.54% for glaucoma suspect, and 97.19% for glaucoma when analysing 206 fundus images rated with unanimous agreement by 3 well-trained ophthalmologists (3/3). When analysing 178 difficult-to-interpret fundus images (with majority agreement (2/3)), the method had an AUC of 80.80% for no glaucoma, 69.52% for glaucoma suspect, and 82.74% for glaucoma cases. Clinical relevance- This establishes a robust global fundus image screening network based on the ensemble method that can optimize glaucoma screening, alleviating the toll on those with glaucoma and helping prevent glaucoma suspects from developing the disease.
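The fusion step described above (per-labeling network outputs combined by a small ANN) can be sketched as follows. The three CNN backbones are replaced by synthetic probability-like scores, and all sizes, noise levels, and the scikit-learn MLP stand-in are assumptions for illustration, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for the three labeling models: each emits a noisy
# 3-class score for {no glaucoma, suspect, glaucoma}.
n = 300
y = rng.integers(0, 3, size=n)

def noisy_scores(labels, noise=0.4):
    # one-hot ground truth plus Gaussian noise, mimicking imperfect models
    return np.eye(3)[labels] + rng.normal(0.0, noise, size=(len(labels), 3))

# Fusion vector: the three models' outputs concatenated (3 x 3 = 9 dims).
fused = np.hstack([noisy_scores(y) for _ in range(3)])

# The final-stage ANN learns to arbitrate between the three opinions.
ann = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
ann.fit(fused[:200], y[:200])
acc = ann.score(fused[200:], y[200:])
```

The design point is that the arbiter sees all three (possibly disagreeing) score vectors at once, rather than a hard majority vote.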
4
Munthuli A, Pooprasert P, Klangpornkun N, Phienphanich P, Onsuwan C, Jaisin K, Pattanaseri K, Lortrakul J, Tantibundhit C. Classification and analysis of text transcription from Thai depression assessment tasks among patients with depression. PLoS One 2023; 18:e0283095. [PMID: 36996118 PMCID: PMC10062633 DOI: 10.1371/journal.pone.0283095]
Abstract
Depression is a serious mental health disorder that poses a major public health concern in Thailand and has a profound impact on individuals' physical and mental health. In addition, limited access to mental health services and the limited number of psychiatrists in Thailand make depression particularly challenging to diagnose and treat, leaving many individuals with the condition untreated. Recent studies have explored the use of natural language processing to broaden access to depression classification, particularly with a trend toward transfer learning from pre-trained language models. In this study, we evaluated the effectiveness of XLM-RoBERTa, a pre-trained multilingual language model supporting the Thai language, for classifying depression from a limited set of text transcripts of speech responses. Twelve Thai depression assessment questions were developed to collect the transcripts used with XLM-RoBERTa in transfer learning. With transcripts from 80 participants (40 with depression and 40 normal controls), transfer learning using only one question (Q1), "How are you these days?", achieved recall, precision, specificity, and accuracy of 82.5%, 84.65%, 85.00%, and 83.75%, respectively. When the first three questions of the Thai depression assessment tasks (Q1 - Q3) were used, the values increased to 87.50%, 92.11%, 92.50%, and 90.00%, respectively. Local interpretable model explanations were analyzed, with word cloud visualization, to determine which words contributed most to the model's decisions. Our findings were consistent with previously published literature and offer a similar explanation for clinical settings.
It was discovered that the classification model for individuals with depression relied heavily on negative terms such as 'not', 'sad', 'mood', 'suicide', 'bad', and 'bore', whereas normal control participants used neutral-to-positive terms such as 'recently', 'fine', 'normally', 'work', and 'working'. The findings suggest that screening for depression can be facilitated by asking just three questions, making the process more accessible and less time-consuming while reducing the already heavy burden on healthcare workers.
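As a sanity check, the four Q1 metrics above all follow from a single 2x2 confusion matrix. Assuming a balanced 40/40 split, counts of 33 true positives, 7 false negatives, 34 true negatives, and 6 false positives reproduce the reported values almost exactly; these counts are one plausible reconstruction, not taken from the paper.

```python
def metrics(tp, fn, tn, fp):
    """Standard binary-classification metrics from confusion-matrix counts."""
    recall = tp / (tp + fn)          # a.k.a. sensitivity
    precision = tp / (tp + fp)
    specificity = tn / (tn + fp)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return recall, precision, specificity, accuracy

# Counts implied by the reported Q1 results on 80 participants (40 + 40).
r, p, s, a = metrics(tp=33, fn=7, tn=34, fp=6)
print(f"recall={r:.2%} precision={p:.2%} specificity={s:.2%} accuracy={a:.2%}")
# → recall=82.50% precision=84.62% specificity=85.00% accuracy=83.75%
```

Three of the four figures match exactly; the reported 84.65% precision is close to the 84.62% implied by a perfectly balanced split.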
Affiliation(s)
- Adirek Munthuli
  - Department of Electrical and Computer Engineering, Thammasat School of Engineering, Thammasat University (Rangsit Campus), Khlong Luang, Pathum Thani, Thailand
  - Center of Excellence in Intelligent Informatics, Speech, and Language Technology, and Service Innovation (CILS), Thammasat University, Khlong Luang, Pathum Thani, Thailand
- Pakinee Pooprasert
  - Center of Excellence in Intelligent Informatics, Speech, and Language Technology, and Service Innovation (CILS), Thammasat University, Khlong Luang, Pathum Thani, Thailand
- Nittayapa Klangpornkun
  - Department of Electrical and Computer Engineering, Thammasat School of Engineering, Thammasat University (Rangsit Campus), Khlong Luang, Pathum Thani, Thailand
  - Center of Excellence in Intelligent Informatics, Speech, and Language Technology, and Service Innovation (CILS), Thammasat University, Khlong Luang, Pathum Thani, Thailand
- Phongphan Phienphanich
  - Department of Electrical and Computer Engineering, Thammasat School of Engineering, Thammasat University (Rangsit Campus), Khlong Luang, Pathum Thani, Thailand
  - Center of Excellence in Intelligent Informatics, Speech, and Language Technology, and Service Innovation (CILS), Thammasat University, Khlong Luang, Pathum Thani, Thailand
- Chutamanee Onsuwan
  - Center of Excellence in Intelligent Informatics, Speech, and Language Technology, and Service Innovation (CILS), Thammasat University, Khlong Luang, Pathum Thani, Thailand
  - Department of Linguistics, Faculty of Liberal Arts, Thammasat University (Rangsit Campus), Pathum Thani, Thailand
- Kankamol Jaisin
  - Department of Psychiatry, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok Noi, Bangkok, Thailand
- Keerati Pattanaseri
  - Department of Psychiatry, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok Noi, Bangkok, Thailand
- Juthawadee Lortrakul
  - Department of Psychiatry, Faculty of Medicine Siriraj Hospital, Mahidol University, Bangkok Noi, Bangkok, Thailand
- Charturong Tantibundhit
  - Department of Electrical and Computer Engineering, Thammasat School of Engineering, Thammasat University (Rangsit Campus), Khlong Luang, Pathum Thani, Thailand
  - Center of Excellence in Intelligent Informatics, Speech, and Language Technology, and Service Innovation (CILS), Thammasat University, Khlong Luang, Pathum Thani, Thailand
5
Munthuli A, Intanai J, Tossanuch P, Pooprasert P, Ingpochai P, Boonyasatian S, Kittithammo K, Thammarach P, Boonmak T, Khaengthanyakan S, Yaemsuk A, Vanichvarodom P, Phienphanich P, Pongcharoen P, Sakonlaya D, Sitthiwatthanawong P, Wetchawalit S, Chakkavittumrong P, Thongthawee B, Pathomjaruwat T, Tantibundhit C. Extravasation Screening and Severity Prediction from Skin Lesion Image using Deep Neural Networks. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1827-1833. [PMID: 36086628 DOI: 10.1109/embc48229.2022.9871115]
Abstract
Extravasation occurs secondary to leakage of medication from blood vessels into the surrounding tissue during intravenous administration, resulting in significant soft tissue injury and necrosis. If treatment is delayed, invasive management such as surgical debridement, skin grafting, and even amputation may be required. Thus, it is imperative to develop a smartphone application for predicting extravasation severity from skin images. Two Deep Neural Network (DNN) architectures, U-Net and DenseNet-121, were used to segment skin and lesions and to classify extravasation severity. Sensitivity and specificity for discriminating between asymptomatic and abnormal cases were 77.78% and 90.24%, respectively. Among the abnormal cases, mild extravasation attained the highest F1-score of 0.8049, followed by severe extravasation at 0.6429 and moderate extravasation at 0.6250. The F1-score for moderate-to-severe extravasation classification can be improved by applying our proposed rule-based method for multi-class classification. These findings demonstrate a novel and feasible DNN approach for screening extravasation from skin images. The implementation of DNN-based applications on mobile devices has strong potential for clinical application in low-resource countries. Clinical relevance- The application can serve as a valuable tool for monitoring extravasation during intravenous administration. It can also help in scheduling across worksites to reduce the risks associated with working shifts.
6
Klangpornkun N, Ruangritchai M, Munthuli A, Onsuwan C, Jaisin K, Pattanaseri K, Lortrakul J, Thanakulakkarachai P, Anansiripinyo T, Amornlaksananon A, Laohawee S, Tantibundhit C. Classification of Depression and Other Psychiatric Conditions Using Speech Features Extracted from a Thai Psychiatric and Verbal Screening Test. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:651-656. [PMID: 34891377 DOI: 10.1109/embc46164.2021.9629571]
Abstract
Depression is a common and serious mental illness that negatively affects daily functioning. To prevent progression of the illness into severe or long-term consequences, early diagnosis is crucial. We developed an automated speech feature analysis application for depression and other psychiatric disorders based on a newly developed Thai psychiatric and verbal screening test. The screening test includes the Thai versions of the Patient Health Questionnaire-9 (PHQ-9) and the Hamilton Depression Rating Scale (HAM-D), plus 32 additional emotion-inducing questions. A case-control study was conducted on speech features from 66 participants: 27 had depression (DP), 12 had other psychiatric disorders (OP), and 27 were normal controls (NC). Five-fold cross-validation across 6 settings of 5 classifiers, combining PHQ-9 and HAM-D scores with speech features, was examined. Results showed the highest performance from the multilayer perceptron (MLP) classifier, which yielded 83.33% sensitivity, 91.67% specificity, and 83.33% accuracy; negative-emotion questions were the most effective in classification. The automated speech feature analysis showed promising results for screening patients with depression or other psychiatric disorders. The application is accessible through a smartphone, making it a feasible and intuitive setup for low-resource countries such as Thailand.
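The evaluation protocol above (five-fold cross-validation of an MLP over questionnaire scores plus speech features) can be sketched with scikit-learn. The 66-participant split (27 DP / 12 OP / 27 NC) comes from the abstract; the feature dimensionality, separability, and network size below are invented for the demonstration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)

# Synthetic feature table: 66 participants with a 20-dim stand-in for
# PHQ-9/HAM-D scores plus speech features (dimensions are illustrative).
X = rng.normal(size=(66, 20))
y = np.array([0] * 27 + [1] * 12 + [2] * 27)    # DP / OP / NC
X[y == 0] += 1.0                                # make DP separable, for the demo

clf = make_pipeline(
    StandardScaler(),
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=1500, random_state=0),
)
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)      # one accuracy per fold
```

Stratified folds matter here because the OP group (12 participants) is much smaller than the other two; plain k-fold could leave a fold with no OP cases at all.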
7
Munthuli A, Vongsurakrai S, Anansiripinyo T, Ellermann V, Sroykhumpa K, Onsuwan C, Chutichetpong P, Hemrungrojn S, Kosawat K, Tantibundhit C. Thammasat-NECTEC-Chula's Thai Language and Cognition Assessment (TLCA): The Thai Alzheimer's and Mild Cognitive Impairment Screening Test. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:690-694. [PMID: 34891386 DOI: 10.1109/embc46164.2021.9630779]
Abstract
Thammasat-NECTEC-Chula's Thai Language and Cognition Assessment (TLCA) is a paper-based cognitive test consisting of 21 tasks that cover 3 domains: memory, language, and other cognitive abilities. The TLCA follows some aspects of existing tests (the Thai Addenbrooke's Cognitive Examination-Revised (Thai-ACE-R) and the Thai Montreal Cognitive Assessment (Thai-MoCA)), and many parts were reconstructed to better fit Thai culture. Data obtained from the test should make it possible to precisely distinguish between patients with Mild Cognitive Impairment (MCI), Alzheimer's Disease (AD), and Normal healthy Controls (NC). The TLCA was tested on 90 participants (32 on the paper-based version and 58 on the computerized version) using a scoring procedure and, separately, machine learning classification of speech features from verbal responses. The scoring results showed significant differences between non-AD (NC + MCI) and AD participants in the 3 domains and could differentiate between NC and MCI, while machine classification could classify in three settings: NC vs non-NC (MCI + AD), AD vs non-AD, and NC vs MCI vs AD. These promising results suggest that the TLCA could be further verified and used as an efficient assessment for MCI and AD screening in Thais. Clinical relevance- The speech feature analysis of the TLCA showed promising results for screening MCI and AD in Thais.
8
Kunumpol P, Lerthirunvibul N, Phienphanich P, Munthuli A, Tantisevi V, Manassakorn A, Chansangpetch S, Itthipanichpong R, Ratanawongphaibol K, Rojanapongpun P, Tantibundhit C. GlauCUTU: Virtual Reality Visual Field Test. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:7416-7421. [PMID: 34892811 DOI: 10.1109/embc46164.2021.9629827]
Abstract
This study proposed a virtual reality (VR) head-mounted visual field (VF) test system, also known as the GlauCUTU VF test, for reaction time (RT) perimetry with moving visual stimuli that progressively increase in intensity. The test followed the 24-2 VF protocol and was examined in 2 study groups: controls with normal fields and subjects with glaucoma. To collect reaction times, participants were asked to respond to the stimulus by pressing the clicker as fast as possible. Performance of the GlauCUTU VF test was compared to the gold-standard Humphrey Visual Field Analyzer (HFA). Mean test durations differed significantly between GlauCUTU and the HFA, at 254.41 and 609 seconds, respectively [t(16) = 15.273, p<0.05]. Likewise, our system effectively differentiated glaucomatous eyes from normal eyes for both the left and right eyes. Compared to the HFA, the GlauCUTU test produced a significantly shorter average test duration, by 354 seconds, which reduced test-induced eye fatigue. The portable and inexpensive GlauCUTU perimetry system proves to be a promising method for increasing accessibility to glaucoma screening. Clinical relevance- GlauCUTU, an automated head-mounted VR perimetry device for VF testing, is portable, cost-effective, and suitable for low-resource settings. Unlike the conventional HFA test, the GlauCUTU VF test reports results in terms of subjects' RT, which is reportedly higher in glaucoma patients.
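The reported comparison [t(16) = 15.273, p < 0.05] is a paired t-test over 17 paired measurements. A minimal reconstruction with SciPy, using the reported mean durations (254.41 s vs 609 s) but an invented spread, looks like this:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Synthetic paired durations in seconds for 17 subjects (df = 16); the means
# mirror the reported 254.41 s (GlauCUTU) vs 609 s (HFA), the spread is invented.
glaucutu = rng.normal(254.41, 40.0, size=17)
hfa = rng.normal(609.0, 40.0, size=17)

t, p = stats.ttest_rel(glaucutu, hfa)   # paired t-test, df = len(pairs) - 1
```

The sign of t only reflects which condition is listed first, and its magnitude here will not match the paper's 15.273 because the variance is invented; the point is the df = n - 1 relationship and the paired design.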
9
Sangchocanonta S, Vongsurakrai S, Sroykhumpa K, Ellermann V, Munthuli A, Anansiripinyo T, Onsuwan C, Hemrungrojn S, Kosawat K, Tantibundhit C. Development of Thai Picture Description Task for Alzheimer's Screening using Part-of-Speech Tagging. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:2104-2109. [PMID: 34891704 DOI: 10.1109/embc46164.2021.9629861]
Abstract
Alzheimer's Disease (AD) and Mild Cognitive Impairment (MCI) are among the most common health conditions in elderly patients. Currently, methods to diagnose AD and MCI are lengthy, costly, and require specialized staff to operate. A picture description task was developed to speed up the diagnosis; it was designed to be suitable and relatable to Thai culture. In this paper, we present two picture description tasks named Thais-at-Home and Thai Temple Fair. The developed picture set was presented to 90 participants (30 normal controls, 30 MCI patients, and 30 AD patients). The recordings of spontaneous speech were then converted to text. A Part-of-Speech (PoS) tagger was used to categorize words into 7 types (noun, pronoun, adjective, verb, conjunction, preposition, and interjection) according to the Office of the Royal Society of Thailand. Six machine learning algorithms were trained on the PoS patterns and their performances were compared. Results showed that PoS patterns can be used to classify patients (MCI and AD) versus healthy controls using a multilayer perceptron with 90.00% sensitivity, 80.00% specificity, and 86.67% accuracy. Moreover, the findings showed that healthy controls used more conjunctions and verbs but fewer pronouns than the patients. Clinical relevance- The picture description tasks with part-of-speech (PoS) tagging showed promising results in screening Alzheimer's patients.
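A PoS-frequency profile of the kind such classifiers train on can be sketched in a few lines. The tag set follows the 7 categories named above; the tagged toy transcript and the function name are illustrative, not the paper's code.

```python
from collections import Counter

POS_TAGS = ["noun", "pronoun", "adjective", "verb",
            "conjunction", "preposition", "interjection"]

def pos_profile(tagged_tokens):
    """Relative frequency of each of the 7 PoS categories in one transcript:
    a fixed-length vector a classifier can be trained on."""
    counts = Counter(tag for _, tag in tagged_tokens)
    total = sum(counts.values()) or 1
    return [counts.get(tag, 0) / total for tag in POS_TAGS]

# Toy transcript already tagged as (word, PoS) pairs.
transcript = [("mother", "noun"), ("cooks", "verb"), ("and", "conjunction"),
              ("she", "pronoun"), ("smiles", "verb")]
profile = pos_profile(transcript)
```

Normalising by transcript length is what lets profiles from short and long descriptions be compared, which matters when patients produce much less speech than controls.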
10
Munthuli A, Anansiripinyo T, Klangpornkun N, Onsuwan C, Chonchaiya W, Trairatvorakul P, Jitrotjanarak J, Voracharusrungsi P, Atichatthanin N, Tantibundhit C. Development of Computerized Tool for Screening Thai Children at Risk for Learning Disabilities. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:6159-6162. [PMID: 33019377 DOI: 10.1109/embc44109.2020.9175765]
Abstract
A computerized version of "Noo-Khor-Arn" ('May I read?'), a paper-based screening test for Thai children at risk of Learning Disability (LD), was developed, and some core ideas of its development are described in detail. Six test categories with 23 subtests were administered to 110 Thai children aged 7-12 years (Mean = 7.94, SD = 1.45), divided into 50 LD and 60 Typically Developing (TD) children, to determine the test categories and subtests most relevant for classifying between the groups. Two-factor balanced Analysis of Variance (ANOVA) revealed that the computerized version showed significant differences between the TD and LD groups in tasks related to linguistics, decoding, and naming: Phonological Awareness (PA), Morphological Awareness (MA), Decoding (DEC), and Rapid Naming (RN). The remaining test categories showed no significant differences between TD and LD. The results can be used not only for classification but also for streamlining the test categories and subtests to shorten the test tool. Clinical relevance- The subtests related to linguistic and decoding aspects showed promising results in screening children at risk for learning disabilities.
11
Thammarach P, Khaengthanyakan S, Vongsurakrai S, Phienphanich P, Pooprasert P, Yaemsuk A, Vanichvarodom P, Munpolsri N, Khwayotha S, Lertkowit M, Tungsagunwattana S, Vijitsanguan C, Lertrojanapunya S, Noisiri W, Chiawiriyabunya I, Aphikulvanich N, Tantibundhit C. AI Chest 4 All. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1229-1233. [PMID: 33018209 DOI: 10.1109/embc44109.2020.9175862]
Abstract
AIChest4All is the name of the model used to label and screen diseases in our area of focus, Thailand, including heart disease, lung cancer, and tuberculosis. It is aimed at aiding radiologists in Thailand, especially in rural areas with immense staff shortages. Deep learning is used in our methodology to classify chest X-ray images from several datasets: the NIH set, which is separated into 14 observations, and the Montgomery and Shenzhen sets, which contain chest X-ray images of patients with tuberculosis, further supplemented by datasets from Udonthani Cancer Hospital and the National Chest Institute of Thailand. The images are classified into six categories: no finding, suspected active tuberculosis, suspected lung malignancy, abnormal heart and great vessels, intrathoracic abnormal findings, and extrathoracic abnormal findings. A total of 201,527 images were used. Testing showed that the accuracy values for the categories heart disease, lung cancer, and tuberculosis were 94.11%, 93.28%, and 92.32%, respectively, with sensitivity values of 90.07%, 81.02%, and 82.33%, respectively, and specificity values of 94.65%, 94.04%, and 93.54%, respectively. In conclusion, the results have sufficient accuracy, sensitivity, and specificity for use. Currently, AIChest4All is being used to help several of Thailand's government-funded hospitals, free of charge. Clinical relevance- AIChest4All is aimed at aiding radiologists in Thailand, especially in rural areas with immense staff shortages. It is being used to help several of Thailand's government-funded hospitals, free of charge, to screen for heart disease, lung cancer, and tuberculosis with 94.11%, 93.28%, and 92.32% accuracy, respectively.
12
Kingkosol P, Pooprasert P, Choopong P, Hunchangsith B, Laksanaphuk V, Tantibundhit C. Automated Cytomegalovirus Retinitis Screening in Fundus Images. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:1996-2002. [PMID: 33018395 DOI: 10.1109/embc44109.2020.9175461]
Abstract
This work proposes an automated algorithm for classifying retinal fundus images as cytomegalovirus retinitis (CMVR), normal, or other diseases. Adaptive wavelet packet transform (AWPT) was used to extract features. The retinal fundus images were transformed using a 4-level Haar wavelet packet (WP) transform. The first two best trees were obtained using Shannon and log-energy entropy, while the third best tree was obtained using the Daubechies-4 mother wavelet with Shannon entropy. The coefficients of each node were extracted; the feature value of each leaf node of the best tree was the average of the WP coefficients in that node, while those of non-leaf nodes were set to zero. The feature vector was classified using an artificial neural network (ANN). The effectiveness of the algorithm was evaluated using ten-fold cross-validation over a dataset of 1,011 images (310 CMVR, 240 normal, and 461 other diseases). On a test dataset of 101 images (31 CMVR, 24 normal, and 46 other diseases), the AWPT-based ANN had sensitivities of 90.32%, 83.33%, and 91.30% and specificities of 95.71%, 94.81%, and 92.73%, respectively. In conclusion, the proposed algorithm has promising potential in CMVR screening, for which the AWPT-based ANN is applicable with scarce data and limited resources.
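The building block of the 4-level Haar wavelet packet used above is a single 2-D split into four sub-bands, applied recursively to every sub-band. A NumPy sketch follows; the averaging normalisation is one of several conventions (orthonormal Haar divides by the square root of 2 instead), since the abstract does not specify one.

```python
import numpy as np

def haar2d(block):
    """One level of a 2-D Haar wavelet packet split: returns the four
    sub-bands (LL, LH, HL, HH).  Repeating this on every sub-band builds
    the full packet tree that entropy criteria then prune."""
    a = (block[0::2, :] + block[1::2, :]) / 2.0   # row averages
    d = (block[0::2, :] - block[1::2, :]) / 2.0   # row differences
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

img = np.ones((8, 8))
ll, lh, hl, hh = haar2d(img)
# A constant image keeps all its energy in LL; the detail bands vanish.
```

Shannon or log-energy entropy then selects the "best tree" from the full decomposition, and the per-leaf coefficient averages form the feature vector fed to the ANN, as the abstract describes.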
13
Sompawong N, Mopan J, Pooprasert P, Himakhun W, Suwannarurk K, Ngamvirojcharoen J, Vachiramon T, Tantibundhit C. Automated Pap Smear Cervical Cancer Screening Using Deep Learning. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:7044-7048. [PMID: 31947460 DOI: 10.1109/embc.2019.8856369]
Abstract
This study aims to apply the Mask Regional Convolutional Neural Network (Mask R-CNN) to cervical cancer screening using Pap smear histological slides. Based on our literature review, this is the first attempt to use Mask R-CNN to detect and analyze the nucleus of the cervical cell, screening for normal and abnormal nuclear features. The dataset consisted of liquid-based histological slides obtained from Thammasat University (TU) Hospital. The slides contained both cervical cells and various artifacts such as white blood cells, mimicking slides obtained in actual clinical settings. The proposed algorithm achieved a mean average precision (mAP) of 57.8%, accuracy of 91.7%, sensitivity of 91.7%, and specificity of 91.7% per image. To evaluate the efficiency of our algorithm against a single-cell classification algorithm (Zhang et al., IEEE JBHI, vol. 21, no. 6, p. 1633, 2017), we modified our method to also classify single cells on the TU test dataset using Mask R-CNN segmentation. The results obtained had an accuracy of 89.8%, sensitivity of 72.5%, and specificity of 94.3%.
14
Phienphanich P, Tankongchamruskul N, Akarathanawat W, Chutinet A, Nimnual R, Tantibundhit C, Suwanwela NC. Automatic Stroke Screening on Mobile Application: Features of Gyroscope and Accelerometer for Arm Factor in FAST. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:4225-4228. [PMID: 31946801 DOI: 10.1109/embc.2019.8857550]
Abstract
This study focuses on automatic stroke screening of the arm factor in the FAST (Face, Arm, Speech, and Time) stroke screening method. The study provides a methodology for collecting data on specific arm movements, using signals from the gyroscope and accelerometer in mobile devices. Fifty-two subjects were enrolled (20 stroke patients and 32 healthy subjects). Following the instructions in the application, the patients were asked to perform two arm movements, Curl Up and Raise Up. The two exercises were divided into three parts: a curl part, a raise part, and a stable part. Stroke patients were expected to experience difficulty performing both exercises efficiently on the same arm. We proposed 20 handcrafted features from these three parts. Our study achieved an average accuracy of 61.7%-74.2% and an average area under the ROC curve (AUC) of 66.2%-81.5% from the combination of both exercises. Compared to the FAST method used by examiners in a previous study (Kapes et al., 2014), which reported an accuracy of 69%-77% for every age group, our study showed promising results for early stroke identification, given that it is based only on the arm factor.
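The handcrafted-feature idea can be illustrated with a small NumPy function. The statistics chosen here (per-axis mean, standard deviation, and range over a movement segment) are generic examples of motion-sensor features, not the paper's 20 published features.

```python
import numpy as np

def arm_features(gyro, accel):
    """A few illustrative handcrafted statistics of one movement segment,
    computed per axis from gyroscope and accelerometer traces, each of
    shape (n_samples, 3 axes)."""
    feats = []
    for sig in (gyro, accel):
        feats.extend(sig.mean(axis=0))                   # average rate / acceleration
        feats.extend(sig.std(axis=0))                    # movement-smoothness proxy
        feats.extend(sig.max(axis=0) - sig.min(axis=0))  # range of motion
    return np.array(feats)

rng = np.random.default_rng(3)
f = arm_features(rng.normal(size=(200, 3)), rng.normal(size=(200, 3)))
```

Computing such statistics separately for the curl, raise, and stable parts of each exercise, as the study segments them, yields a fixed-length vector regardless of how long the subject takes to complete the movement.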
15
Phasuk S, Tantibundhit C, Poopresert P, Yaemsuk A, Suvannachart P, Itthipanichpong R, Chansangpetch S, Manassakorn A, Tantisevi V, Rojanapongpun P. Automated Glaucoma Screening from Retinal Fundus Image Using Deep Learning. Annu Int Conf IEEE Eng Med Biol Soc 2019; 2019:904-907. [PMID: 31946040 DOI: 10.1109/embc.2019.8857136]
Abstract
Glaucoma is the second leading cause of blindness worldwide. This paper proposes an automated glaucoma screening method using retinal fundus images via an ensemble technique: the results of different classification networks are fused, with the result of each classification network fed as input to a simple artificial neural network (ANN) to obtain the final result. Three public datasets, i.e., ORIGA-650, RIM-ONE R3, and DRISHTI-GS, were used for training and evaluating the performance of the proposed network. The experimental results showed that the proposed network outperformed other state-of-the-art glaucoma screening algorithms with an AUC of 0.94. Our proposed algorithm shows promising potential as a medical support system for glaucoma screening, especially in low-resource countries.
|
16
|
Muengtaweepongsa S, Tantibundhit C. Microembolic signal detection by transcranial Doppler: Old method with a new indication. World J Methodol 2018; 8:40-43. [PMID: 30519538 PMCID: PMC6275557 DOI: 10.5662/wjm.v8.i3.40] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/03/2018] [Revised: 09/30/2018] [Accepted: 10/18/2018] [Indexed: 02/06/2023] Open
Abstract
Transcranial Doppler (TCD) is useful for investigation of intracranial arterial blood flow and can be used to detect a real-time embolic signal. Unfortunately, artefacts can mimic the embolic signal, complicating interpretation and necessitating expert-level opinion to distinguish the two. Resolving this situation is critical to achieve improved accuracy and utility of TCD for patients with disrupted intracranial arterial blood flow, such as stroke victims. A common type of stroke encountered in the clinic is cryptogenic stroke (or stroke with undetermined etiology), and patent foramen ovale (PFO) has been associated with the condition. An early clinical trial of PFO closure effect on secondary stroke prevention failed to demonstrate any benefit for the therapy, and research into the PFO therapy generally diminished. However, the recent publication of large randomized control trials with demonstrated benefit of PFO closure for recurrent stroke prevention has rekindled the interest in PFO in patients with cryptogenic stroke. To confirm that emboli across the PFO can reach the brain, TCD should be applied to detect the air embolic signal after injection of agitated saline bubbles at the antecubital vein. In addition, the automated embolic signal detection method should further facilitate use of TCD for air embolic signal detection after the agitated saline bubbles injection in patients with cryptogenic stroke and PFO.
Affiliation(s)
- Sombat Muengtaweepongsa
- Department of Internal Medicine, Faculty of Medicine, Thammasat University, Pathum Thani 12120, Thailand
- Charturong Tantibundhit
- Department of Internal Medicine, Faculty of Medicine, Thammasat University, Pathum Thani 12120, Thailand
|
17
|
Sombune P, Phienphanich P, Phuechpanpaisal S, Muengtaweepongsa S, Ruamthanthong A, Tantibundhit C. Automated embolic signal detection using Deep Convolutional Neural Network. Conf Proc IEEE Eng Med Biol Soc 2018; 2017:3365-3368. [PMID: 29060618 DOI: 10.1109/embc.2017.8037577] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
This work investigated the potential of deep neural networks for detection of cerebral embolic signals (ES) from transcranial Doppler ultrasound (TCD). The resulting system is intended to couple with TCD devices to diagnose stroke risk in real time with high accuracy. The Adaptive Gain Control (AGC) approach developed in our previous study is employed to capture suspected ESs in real time. Using spectrograms of the same TCD signal dataset as in our previous work as inputs, and the same experimental setup, a deep Convolutional Neural Network (CNN), which learns features during training, was investigated for its ability to bypass the traditional handcrafted feature extraction and selection process. Feature vectors extracted from the suspected ESs are then classified as an ES, artifact (AF), or normal (NR) interval. The effectiveness of the developed system was evaluated on 19 subjects undergoing procedures that generate emboli. The CNN-based system achieved an average of 83.0% sensitivity, 80.1% specificity, and 81.4% accuracy, with considerably less development time. Growing sets of training samples and computational resources should further improve performance. Besides its potential use in various clinical ES monitoring settings, continuation of this promising study will benefit the development of wearable applications by leveraging learnable features to accommodate demographic differences.
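The CNN above consumes spectrograms of the TCD signal. A minimal sketch of how such magnitude spectrogram frames might be computed is shown below; the window and hop sizes are assumptions, and the naive DFT is used only to keep the example self-contained (a library FFT would be used in practice):

```python
import cmath

def spectrogram(signal, win=64, hop=16):
    """Magnitude spectrogram: one list of DFT-bin magnitudes per frame.

    Naive O(n^2) DFT per frame for self-containment; only the first
    win//2 bins are kept, since the input is real-valued.
    """
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        mags = []
        for k in range(win // 2):
            acc = sum(x * cmath.exp(-2j * cmath.pi * k * n / win)
                      for n, x in enumerate(seg))
            mags.append(abs(acc))
        frames.append(mags)
    return frames
```

Each frame then becomes one column of the two-dimensional time-frequency image fed to the CNN.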
|
18
|
Kunumpol P, Umpaipant W, Kanchanaranya N, Charoenpong T, Vongkittirux S, Kupakanjana T, Tantibundhit C. Automated Age-related Macular Degeneration screening system using fundus images. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2017:1469-1472. [PMID: 29060156 DOI: 10.1109/embc.2017.8037112] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
This work proposed an automated screening system for Age-related Macular Degeneration (AMD) that also distinguishes between the wet and dry types of AMD using fundus images, to assist ophthalmologists in eye disease screening and management. The algorithm employs contrast-limited adaptive histogram equalization (CLAHE) for image enhancement. Subsequently, the discrete wavelet transform (DWT) and locality sensitive discriminant analysis (LSDA) were used to extract features for a neural network model to classify the results. The results showed that the proposed algorithm was able to distinguish between normal eyes, dry AMD, and wet AMD with 98.63% sensitivity, 99.15% specificity, and 98.94% accuracy, suggesting promising potential as a medical support system for faster eye disease screening at lower cost.
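As an illustration of the DWT feature stage, one level of a Haar wavelet decomposition is sketched below. The abstract does not specify the wavelet family or decomposition depth, so Haar and a single level are assumptions:

```python
def haar_dwt(samples):
    """One level of the Haar discrete wavelet transform.

    Splits the signal into approximation (low-pass) and detail
    (high-pass) coefficients; deeper DWT levels would recurse on the
    approximation, and statistics of the coefficients would serve as
    features for the downstream classifier.
    """
    approx, detail = [], []
    for i in range(0, len(samples) - 1, 2):
        a, b = samples[i], samples[i + 1]
        approx.append((a + b) / 2 ** 0.5)
        detail.append((a - b) / 2 ** 0.5)
    return approx, detail
```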
|
19
|
Sombune P, Phienphanich P, Phuechpanpaisal S, Muengtaweepongsa S, Ruamthanthong A, Chazal PD, Tantibundhit C. Automated Cerebral Emboli Detection Using Adaptive Threshold and Adaptive Neuro-Fuzzy Inference System. IEEE Access 2018; 6:55361-55371. [DOI: 10.1109/access.2018.2871136] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/03/2023]
|
20
|
Sombune P, Phienphanich P, Muengtaweepongsa S, Ruamthanthong A, Tantibundhit C. Automated embolic signal detection using adaptive gain control and classification using ANFIS. Conf Proc IEEE Eng Med Biol Soc 2017; 2016:3825-3828. [PMID: 28269120 DOI: 10.1109/embc.2016.7591562] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/18/2023]
Abstract
This work proposes an automated system for real-time, high-accuracy detection of cerebral embolic signals (ES) to couple with transcranial Doppler ultrasound (TCD) devices in diagnosing stroke risk. The algorithm employs an Adaptive Gain Control (AGC) approach to capture suspected ESs in real time. Then, the Adaptive Wavelet Packet Transform (AWPT) and Fast Fourier Transform (FFT) are used to extract the features that most efficiently represent an ES, as determined by a Sequential Feature Selection technique. Feature vectors extracted from the suspected ESs are then classified as ES or non-ES intervals by an Adaptive Neuro-Fuzzy Inference System (ANFIS)-based classifier. The effectiveness of the developed system was evaluated on 19 subjects undergoing procedures generating solid and gaseous emboli. The results showed that the proposed algorithm yielded 91.5% sensitivity, 90.0% specificity, and 90.5% accuracy. Cross-validation was performed 20 times on both the proposed algorithm and the High Dimensional Model Representation (HDMR) method (the most efficient algorithm to date) and their performances were compared. A paired t-test showed that the proposed algorithm outperformed the HDMR method in both detection accuracy [t(19, 0.01) = 132.2073, p ~ 0] and sensitivity [t(19, 0.01) = 131.4676, p ~ 0] at 90.0% specificity, suggesting promising potential as a medical support system for ES monitoring in various clinical settings.
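One plausible reading of the AGC capture step is a power threshold that adapts to the slowly varying background blood-flow signal, flagging intervals whose instantaneous power jumps well above that background. The smoothing factor and detection ratio below are assumptions for illustration, not the paper's values:

```python
def agc_detect(powers, alpha=0.95, ratio=3.0):
    """Flag suspected embolic intervals against an adaptive background.

    background tracks the slowly varying signal power via exponential
    smoothing; a sample is flagged when its power exceeds ratio times
    the current background estimate.
    """
    background = powers[0]
    flags = []
    for p in powers:
        flags.append(p > ratio * background)
        background = alpha * background + (1 - alpha) * p
    return flags
```

Intervals flagged this way would then pass to the AWPT/FFT feature extraction and the ANFIS classifier for the final ES/non-ES decision.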
|
21
|
Lueang-on C, Tantibundhit C, Muengtaweepongsa S. Abstract WMP60: Automatic Embolic Signal Detection Using Adaptive Wavelet Packet Transform and Adaptive Neuro-Fuzzy Inference System. Stroke 2013. [DOI: 10.1161/str.44.suppl_1.awmp60] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Introduction:
Transcranial Doppler (TCD) can be used to detect emboli in the cerebral circulation. Classifying the measured TCD signal as an embolic signal (ES) or artifact is usually done by a well-trained physician. However, inter-rater reliability among those physicians is variable. Also, in countries where skilled physicians are scarce, an automatic ES detection system can be useful as a medical support system.
Method:
We propose a two-step algorithm based on the adaptive Wavelet Packet Transform (WPT) and an Adaptive Neuro-Fuzzy Inference System (ANFIS), described as follows. First, the TCD signal is windowed using a 256-sample Gaussian window with 80% overlap. A 3-level WPT is used to transform each windowed TCD signal. The best basis algorithm is applied to find the best binary tree, from which the entropy and normalized energy are calculated. The entropy and normalized energy are used as features to classify each windowed TCD signal as normal or abnormal. Second, the abnormal TCD windows are classified further as ES or artifact. Specifically, the standard deviations of the level-3 WPT coefficients from frequency index 1 to 6 are calculated and used as inputs to the ANFIS, where each standard deviation can be combined with the others, resulting in 64 rules. After training the ANFIS, ES and artifact can be differentiated.
Results:
Six hundred and sixty abnormal signals in vivo are used to evaluate the algorithm, i.e., 176 ESs collected during emboli monitoring from patients undergoing carotid angioplasty with stenting and from patients with patent foramen ovale, 106 artifacts from normal subjects, and 378 artifacts from stroke patients with carotid stenosis. The signals are divided into training and validation sets. The training set is used to train the algorithm and the validation set is used to evaluate the validity of the algorithm. Experimental results show that the algorithm can differentiate abnormal from normal signals efficiently with 100% accuracy. A sensitivity of 96.6% and specificity of 96.5% can be achieved from the validation set.
Conclusions:
The automatic embolic signal detection algorithm has been developed. The experimental results suggested that the algorithm could detect emboli and differentiate artifacts efficiently and could be used as a medical support system.
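The windowing and entropy-feature computation in the Method above can be sketched as follows. The Gaussian window's width parameter is not stated in the abstract, so the sigma ratio here is an assumption, and the entropy is shown over generic coefficients rather than a full WPT:

```python
import math

def gaussian_window(n, sigma_ratio=0.4):
    """256-sample-style Gaussian analysis window.

    sigma_ratio (sigma as a fraction of the half-length) is an
    assumption; the abstract does not specify it. With 80% overlap,
    consecutive windows would advance by n // 5 samples.
    """
    half = (n - 1) / 2
    sigma = sigma_ratio * half
    return [math.exp(-0.5 * ((i - half) / sigma) ** 2) for i in range(n)]

def shannon_entropy(coeffs):
    """Entropy of the normalized coefficient energies.

    One of the two features (with normalized energy) used to label a
    window as normal or abnormal.
    """
    energies = [c * c for c in coeffs]
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)
```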
|