1
Sulaiman A, Anand V, Gupta S, Al Reshan MS, Alshahrani H, Shaikh A, Elmagzoub MA. An intelligent LinkNet-34 model with EfficientNetB7 encoder for semantic segmentation of brain tumor. Sci Rep 2024; 14:1345. [PMID: 38228639] [PMCID: PMC10792164] [DOI: 10.1038/s41598-024-51472-2]
Abstract
A brain tumor is an uncontrolled, abnormal expansion of brain cells, making it one of the deadliest diseases of the nervous system. Segmenting brain tumors for earlier diagnosis is a difficult task in medical image analysis. Previously, brain tumor segmentation was done manually by radiologists, which requires a lot of time and effort, and such manual segmentation is also prone to mistakes due to human intervention. Deep learning models have been shown to outperform human experts in diagnosing brain tumors in MRI images; these algorithms employ a huge number of MRI scans to learn the difficult patterns of brain tumors and segment them automatically and accurately. Here, an encoder-decoder architecture based on a deep convolutional neural network is proposed for semantic segmentation of brain tumors in MRI images. The proposed method focuses on image downsampling in the encoder part. For this, an intelligent LinkNet-34 semantic segmentation model with an EfficientNetB7 encoder is proposed. The performance of the LinkNet-34 model is compared with three other models, namely FPN, U-Net, and PSPNet. Further, the performance of EfficientNetB7 as the encoder in LinkNet-34 is compared with three other encoders, namely ResNet34, MobileNet_V2, and ResNet50. The proposed model is then optimized using three different optimizers: RMSProp, Adamax, and Adam. The LinkNet-34 model with the EfficientNetB7 encoder and the Adamax optimizer performed best, with a Jaccard index of 0.89 and a Dice coefficient of 0.915.
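As a concrete illustration of the setup described above (not the authors' code), the following sketch builds a LinkNet model with an EfficientNetB7 encoder using the segmentation_models_pytorch package and evaluates a prediction with the Jaccard index and Dice coefficient quoted in the abstract; the package choice, pretraining weights, and input sizes are assumptions.

```python
# Hedged sketch, not the authors' implementation: LinkNet with an
# EfficientNetB7 encoder via segmentation_models_pytorch, plus the two
# overlap metrics reported in the abstract.
import torch
import segmentation_models_pytorch as smp

model = smp.Linknet(
    encoder_name="efficientnet-b7",  # encoder compared against ResNet34/50, MobileNet_V2
    encoder_weights="imagenet",      # assumed pre-training
    in_channels=1,                   # assumed single-channel MRI slices
    classes=1,                       # binary tumor mask
)

def jaccard_and_dice(prob: torch.Tensor, target: torch.Tensor, eps: float = 1e-7):
    """Jaccard index (IoU) and Dice coefficient for binary masks."""
    pred = (prob > 0.5).float()
    inter = (pred * target).sum()
    union = pred.sum() + target.sum() - inter
    jaccard = (inter + eps) / (union + eps)
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    return jaccard.item(), dice.item()

x = torch.rand(1, 1, 256, 256)                     # dummy MRI slice
mask = (torch.rand(1, 1, 256, 256) > 0.5).float()  # dummy ground truth
with torch.no_grad():
    prob = torch.sigmoid(model(x))
print(jaccard_and_dice(prob, mask))
```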
Affiliation(s)
- Adel Sulaiman
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
- Vatsala Anand
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India
- Sheifali Gupta
- Chitkara University Institute of Engineering and Technology, Chitkara University, Rajpura, Punjab, 140401, India
- Mana Saleh Al Reshan
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
- Hani Alshahrani
- Department of Computer Science, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
- Asadullah Shaikh
- Department of Information Systems, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
- M A Elmagzoub
- Department of Network and Communication Engineering, College of Computer Science and Information Systems, Najran University, 61441, Najran, Saudi Arabia
2
Shukla PK, Zakariah M, Hatamleh WA, Tarazi H, Tiwari B. AI-Driven Novel Approach for Liver Cancer Screening and Prediction Using Cascaded Fully Convolutional Neural Network. J Healthc Eng 2022; 2022:4277436. [PMID: 35154620] [PMCID: PMC8825667] [DOI: 10.1155/2022/4277436]
Abstract
Automated segmentation of the liver and hepatic lesions is a significant step for studying biomarker characteristics in experimental analysis and computer-aided diagnosis support schemes. Lesion appearance changes from patient to patient, depending on the size and timing of the lesion and on the imaging equipment (such as the contrast settings used). With practical approaches, it is difficult to determine the stage of liver cancer from the segmentation of lesion patterns, and current algorithms face a number of obstacles in some domains with respect to training accuracy. The proposed work describes a system for automatically detecting liver tumours and lesions in abdominal magnetic resonance images using 3D affine-invariant shape parameterization approaches. This point-to-point parameterization addresses the frequent issues associated with concave surfaces by establishing a standard model level for the organ's surface throughout the modelling process. Initially, a geodesic active contour analysis approach is used to separate the liver region from the rest of the body. The segmented tumour area is then used as input to Cascaded Fully Convolutional Neural Networks (CFCNs), and this liver segmentation helps to minimise the error rate during the training procedures. The results are obtained and validated through stage-wise analysis of the datasets, which comprise training and testing images. The accuracy attained by the Cascaded Fully Convolutional Neural Network (CFCN) for liver tumour analysis is 94.21 percent, with a computation time of less than 90 seconds per volume. The trials show an overall training and testing accuracy of 93.85 percent across the various volumes of the 3DIRCAD datasets tested.
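The cascade idea summarized above, a first network isolating the liver and a second network operating only inside that region, can be sketched as follows; the stand-in networks and the threshold are illustrative assumptions rather than the paper's CFCN implementation.

```python
# Illustrative sketch of a two-stage segmentation cascade (not the paper's code):
# a first-stage liver mask restricts where the second-stage lesion prediction
# is allowed to fire.
import numpy as np

def cascade_segment(volume: np.ndarray, liver_net, lesion_net, thr: float = 0.5):
    """Stage 1: liver probability map -> binary liver mask.
       Stage 2: lesion probabilities, kept only inside the liver mask."""
    liver_mask = (liver_net(volume) > thr).astype(np.uint8)
    lesion_prob = lesion_net(volume * liver_mask)          # analyse only liver voxels
    lesion_mask = ((lesion_prob > thr) & (liver_mask == 1)).astype(np.uint8)
    return liver_mask, lesion_mask

# Dummy stand-ins so the sketch runs end to end.
rng = np.random.default_rng(0)

def fake_net(v: np.ndarray) -> np.ndarray:
    return rng.random(v.shape)                             # placeholder "probabilities"

vol = rng.random((64, 64, 16))                             # toy abdominal volume
liver, lesion = cascade_segment(vol, fake_net, fake_net)
print(liver.sum(), lesion.sum())
```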
Affiliation(s)
- Piyush Kumar Shukla
- Computer Science & Engineering Department, University Institute of Technology, Rajiv Gandhi Proudyogiki Vishwavidyalaya, Bhopal 462033, India
- Mohammed Zakariah
- College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Wesam Atef Hatamleh
- Department of Computer Science, College of Computer and Information Sciences, King Saud University, P.O. Box 51178, Riyadh 11543, Saudi Arabia
- Hussam Tarazi
- Department of Computer Science and Informatics, School of Engineering and Computer Science, Oakland University, 318 Meadow Brook Rd, Rochester, MI 48309, USA
- Basant Tiwari
- Department of Information Technology, Hawassa University, Institute of Technology, Hawassa, Ethiopia
3
Abstract
Brain tumors occur owing to uncontrolled and rapid growth of cells and, if not treated at an initial phase, may lead to death. Despite many significant efforts and promising outcomes in this domain, accurate segmentation and classification remain a challenging task. A major challenge for brain tumor detection arises from the variations in tumor location, shape, and size. The objective of this survey is to deliver a comprehensive review of the literature on brain tumor detection through magnetic resonance imaging to help researchers. The survey covers the anatomy of brain tumors, publicly available datasets, enhancement techniques, segmentation, feature extraction, classification, and deep learning, transfer learning, and quantum machine learning for brain tumor analysis. Finally, it gathers the important literature on brain tumor detection, with its advantages, limitations, developments, and future trends.
4
Uthra Devi K, Gomathi R. Convolutional Neural Network Based Brain Tumor Classification Using Robust Background Saliency Detection. J Med Imaging Health Inform 2021. [DOI: 10.1166/jmihi.2021.3849]
Abstract
To assess the tumors found in the brain and plan their treatment, experts manually annotate and identify different Regions of Interest (ROI). To overcome the errors and inconsistencies of this manual process, automated analysis is performed. A technique is proposed to classify the tumor section of the brain from an MRI using saliency-focused image depiction and CNN-based classification with optimization. The MRI images are first pre-processed using the Canny edge detection algorithm, and the images are then represented as saliency maps based on Robust Background Saliency Detection (RBD). This is followed by feature extraction, after which the images are classified using a CNN with ADAM optimization. The implementation is carried out and the results analyzed, showing that the method outperforms previous techniques.
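A minimal sketch of the described pre-processing step, assuming OpenCV for the Canny edge detector; the thresholds and the two-channel stacking are illustrative choices, not taken from the paper.

```python
# Rough sketch of edge-emphasis pre-processing before a CNN (assumed recipe,
# not the authors' exact pipeline).
import cv2
import numpy as np

def preprocess_slice(mri_slice: np.ndarray, low: int = 50, high: int = 150) -> np.ndarray:
    """Normalize an MRI slice to 8-bit and stack it with its Canny edge map."""
    img = cv2.normalize(mri_slice, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    edges = cv2.Canny(img, low, high)            # edge emphasis
    return np.stack([img, edges], axis=-1)       # 2-channel input for the CNN

demo = (np.random.rand(240, 240) * 4095).astype(np.float32)  # dummy MRI slice
print(preprocess_slice(demo).shape)              # (240, 240, 2)
```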
Affiliation(s)
- K. Uthra Devi
- Department of Information Technology, Indra Ganesan College of Engineering, Trichy 620012, TamilNadu, India
- R. Gomathi
- Department of Electronics and Communication Engineering, University College of Engineering-Dindigul Campus, Dindigul 624622, TamilNadu, India
5
Lee DH, Yetisgen M, Vanderwende L, Horvitz E. Predicting severe clinical events by learning about life-saving actions and outcomes using distant supervision. J Biomed Inform 2020; 107:103425. [PMID: 32348850] [DOI: 10.1016/j.jbi.2020.103425]
Abstract
Medical error is a leading cause of patient death in the United States. Among the different types of medical errors, harm caused by doctors missing early signs of deterioration is especially challenging to address due to the heterogeneity of patients' physiological patterns. In this study, we implemented risk prediction models using the gradient boosted tree method to derive risk estimates for acute onset diseases in the near future. The prediction model uses physiological variables as input signals, with the time of administration of outcome-related interventions and discharge diagnoses as labels. We examine four categories of acute onset illness: acute heart failure (AHF), acute lung injury (ALI), acute kidney injury (AKI), and acute liver failure (ALF). To develop and test the model, we consider data from two sources: 23,578 admissions to the Intensive Care Unit (ICU) from the MIMIC-3 dataset (Beth-Israel Hospital) and 16,612 ICU admissions at hospitals affiliated with our institution (University of Washington Medical Center and Harborview Medical Center, the UW-CDR dataset). We systematically identify outcome-related interventions for each acute organ failure, then use them, along with discharge diagnoses, to label proxy events to train gradient boosted trees. The trained models achieve the highest F1 score, 0.6018, when predicting the need for life-saving interventions for ALI within the next 24 h in the MIMIC-3 dataset, with a median F1 score of 0.3850 across all acute organ failures in both datasets. The approach also achieves the highest F1 score, 0.6301, when classifying a patient's ALI status at the time of discharge from the MIMIC-3 dataset, with a median F1 score of 0.4307 across both datasets. This study shows the potential of using the times of outcome-related intervention administrations and discharge diagnoses as labels to train supervised machine learning models that predict the risk of acute onset illnesses.
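The modeling recipe summarized above, gradient boosted trees on physiological features with intervention-derived proxy labels and F1-based evaluation, can be sketched roughly as follows; the synthetic data, feature count, and hyperparameters are assumptions, not the study's configuration.

```python
# Hedged sketch of gradient-boosted-tree risk prediction with proxy labels.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(2000, 12))                  # stand-in physiological variables
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.8, size=2000) > 1.0).astype(int)
# ^ proxy label, standing in for "outcome-related intervention administered"

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = GradientBoostingClassifier(n_estimators=200, max_depth=3, learning_rate=0.1)
clf.fit(X_tr, y_tr)
print("F1:", round(f1_score(y_te, clf.predict(X_te)), 4))
```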
Affiliation(s)
- Dae Hyun Lee
- Biomedical & Health Informatics, School of Medicine, University of Washington, Seattle, WA, USA.
- Meliha Yetisgen
- Biomedical & Health Informatics, School of Medicine, University of Washington, Seattle, WA, USA
- Lucy Vanderwende
- Biomedical & Health Informatics, School of Medicine, University of Washington, Seattle, WA, USA
- Eric Horvitz
- Biomedical & Health Informatics, School of Medicine, University of Washington, Seattle, WA, USA; Microsoft Research, Redmond, WA, USA
6
Chatelain P, Sharma H, Drukker L, Papageorghiou AT, Noble JA. Evaluation of Gaze Tracking Calibration for Longitudinal Biomedical Imaging Studies. IEEE Trans Cybern 2020; 50:153-163. [PMID: 30188843] [DOI: 10.1109/tcyb.2018.2866274]
Abstract
Gaze tracking is a promising technology for studying the visual perception of clinicians during image-based medical exams. It could be used in longitudinal studies to analyze their perceptive process, explore human-machine interactions, and develop innovative computer-aided imaging systems. However, using a remote eye tracker in an unconstrained environment and over time periods of weeks requires a certain guarantee of performance to ensure that collected gaze data are fit for purpose. We report the results of evaluating eye tracking calibration for longitudinal studies. First, we tested the performance of an eye tracker on a cohort of 13 users over a period of one month. For each participant, the eye tracker was calibrated during the first session. The participants were asked to sit in front of a monitor equipped with the eye tracker, but their position was not constrained. Second, we tested the performance of the eye tracker on sonographers positioned in front of a cart-based ultrasound scanner. Experimental results show a decrease of accuracy between calibration and later testing of 0.30° and a further degradation over time at a rate of 0.13° per month. The overall median accuracy was 1.00° (50.9 pixels) and the overall median precision was 0.16° (8.3 pixels). The results from the ultrasonography setting show a decrease of accuracy of 0.16° between calibration and later testing. This slow degradation of gaze tracking accuracy could impact the data quality in long-term studies. Therefore, the results we present here can help in planning such long-term gaze tracking studies.
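For reference, accuracy and precision in this sense can be computed from raw gaze samples roughly as sketched below; the screen geometry, the RMS definition of precision, and the synthetic samples are assumptions, not the authors' protocol.

```python
# Small sketch of gaze-tracking accuracy (median angular offset from a known
# target) and precision (RMS of sample-to-sample angular deviations).
import numpy as np

def pixels_to_degrees(dx_px, dy_px, px_per_cm=38.0, viewing_cm=60.0):
    """Convert an on-screen offset in pixels to visual angle in degrees."""
    offset_cm = np.hypot(dx_px, dy_px) / px_per_cm
    return np.degrees(np.arctan2(offset_cm, viewing_cm))

def accuracy_and_precision(gaze_px, target_px, **geom):
    gaze_px, target_px = np.asarray(gaze_px, float), np.asarray(target_px, float)
    offsets = pixels_to_degrees(*(gaze_px - target_px).T, **geom)
    accuracy = np.median(offsets)                        # median angular error
    steps = np.diff(gaze_px, axis=0)                     # sample-to-sample jitter
    precision = np.sqrt(np.mean(pixels_to_degrees(*steps.T, **geom) ** 2))
    return accuracy, precision

rng = np.random.default_rng(1)
target = np.array([960.0, 540.0])
gaze = target + rng.normal(scale=40.0, size=(200, 2))    # noisy fixation samples
print(accuracy_and_precision(gaze, target))
```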
7
Multi-channeled MR brain image segmentation: A new automated approach combining BAT and clustering technique for better identification of heterogeneous tumors. Biocybern Biomed Eng 2019. [DOI: 10.1016/j.bbe.2019.05.007]
8
Salkowski LR, Russ R. Cognitive processing differences of experts and novices when correlating anatomy and cross-sectional imaging. J Med Imaging (Bellingham) 2018; 5:031411. [PMID: 29795777] [DOI: 10.1117/1.jmi.5.3.031411]
Abstract
The ability to correlate anatomical knowledge and medical imaging is crucial to radiology and, as such, should be a critical component of medical education. However, our ability to teach this skill is hindered because we know very little about what expert practice looks like, and even less about novices' understanding. Using a unique simulation tool, this research conducted cognitive clinical interviews with experts and novices to explore differences in how they engage in this correlation and the underlying cognitive processes involved in doing so. The research supported what is known in the literature: that experts are significantly faster at making decisions on medical imaging than novices. It also offers insight into the spatial ability and reasoning involved in correlating anatomy with medical imaging. There are differences in the cognitive processing of experts and novices with respect to meaningful patterns, organized content knowledge, and flexibility of retrieval. Some novice-expert similarities and differences in image processing are presented. This study investigated the extremes, opening an opportunity to investigate the sequential acquisition of knowledge from student to resident to expert, and where educators can intervene in this learning process.
Affiliation(s)
- Lonie R Salkowski
- University of Wisconsin, School of Medicine and Public Health, Department of Radiology, Madison, Wisconsin, United States
- University of Wisconsin, School of Medicine and Public Health, Department of Medical Physics, Madison, Wisconsin, United States
- Rosemary Russ
- University of Wisconsin, School of Education, Department of Curriculum and Instruction, Madison, Wisconsin, United States
9
Automatic Semantic Segmentation of Brain Gliomas from MRI Images Using a Deep Cascaded Neural Network. J Healthc Eng 2018; 2018:4940593. [PMID: 29755716] [PMCID: PMC5884212] [DOI: 10.1155/2018/4940593]
Abstract
Brain tumors can appear anywhere in the brain and have vastly different sizes and morphology. Additionally, these tumors are often diffuse and poorly contrasted. Consequently, segmenting brain tumors and intratumor subregions from magnetic resonance imaging (MRI) data with minimal human intervention remains a challenging task. In this paper, we present a novel fully automatic segmentation method for MRI data containing in vivo brain gliomas. This approach can not only localize the entire tumor region but can also accurately segment the intratumor structure. The proposed work was based on a cascaded deep learning convolutional neural network consisting of two subnetworks: (1) a tumor localization network (TLN) and (2) an intratumor classification network (ITCN). The TLN, a fully convolutional network (FCN) combined with transfer learning, was used to first process the MRI data; the goal of this first subnetwork was to define the tumor region in an MRI slice. The ITCN was then used to label the defined tumor region into multiple subregions; in particular, the ITCN exploited a convolutional neural network (CNN) with a deeper architecture and smaller kernels. The proposed approach was validated on the multimodal brain tumor segmentation (BRATS 2015) datasets, which contain 220 high-grade glioma (HGG) and 54 low-grade glioma (LGG) cases. Dice similarity coefficient (DSC), positive predictive value (PPV), and sensitivity were used as evaluation metrics. Our experimental results indicated that the method obtains promising segmentation results at a faster segmentation speed. More specifically, the proposed method obtained comparable and overall better DSC values (0.89, 0.77, and 0.80) on the combined (HGG + LGG) testing set than other methods reported in the literature, and was able to complete a segmentation task at a rate of 1.54 seconds per slice.
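The three reported metrics can be computed from a predicted and a ground-truth binary mask as in the short sketch below; the random masks are placeholders.

```python
# Dice similarity coefficient, positive predictive value, and sensitivity
# computed from a pair of binary masks (placeholder data).
import numpy as np

def segmentation_metrics(pred: np.ndarray, truth: np.ndarray, eps: float = 1e-7):
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    dsc = 2 * tp / (2 * tp + fp + fn + eps)      # Dice similarity coefficient
    ppv = tp / (tp + fp + eps)                   # positive predictive value
    sens = tp / (tp + fn + eps)                  # sensitivity (recall)
    return dsc, ppv, sens

rng = np.random.default_rng(3)
pred = rng.random((128, 128)) > 0.6
truth = rng.random((128, 128)) > 0.6
print(segmentation_metrics(pred, truth))
```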
10
Mohan G, Subashini MM. MRI based medical image analysis: Survey on brain tumor grade classification. Biomed Signal Process Control 2018. [DOI: 10.1016/j.bspc.2017.07.007]
11
Efficient visual attention driven framework for key frames extraction from hysteroscopy videos. Biomed Signal Process Control 2017. [DOI: 10.1016/j.bspc.2016.11.011]
12
Brain tumor classification from multi-modality MRI using wavelets and machine learning. Pattern Anal Appl 2017. [DOI: 10.1007/s10044-017-0597-8]
13
Su Q, Bi S, Yang X. Prioritization of liver MRI for distinguishing focal lesions. Sci China Life Sci 2017; 60:28-36. [PMID: 28078508] [DOI: 10.1007/s11427-016-0388-2]
Abstract
Liver cancer is one of the leading causes of cancer-related mortality worldwide. Magnetic resonance imaging (MRI) is a non-invasive imaging technique that is often used by radiologists for diagnosis and surgical planning. Analyzing a large amount of liver MRI data for each patient limits the radiologist's efficiency and may lead to misdiagnoses. The redundant MRI data, especially from dynamic contrast enhanced (DCE) sequences, are also a bottleneck in transmitting the images via the internet or PACS for remote consultancy in a reasonable amount of time. This study included 25 patients (aged between 20 and 70 years) with liver cysts (seven cases), hemangiomas (eight cases), or hepatic cell carcinomas (10 cases). DCE T1WI MRI was performed for all the patients. The diagnosis reference included typical MRI findings and post-surgery pathology. The methods were as follows: (i) MRI sequence pre-processing based on a large-vessel variation level set method to remove non-liver parts from the MRI images; (ii) extraction and fusion of human visual model features (luminance, motion, and contour); (iii) anomaly-based MRI ranking; and (iv) assessment of the methods on the 25 patients' DCE MRI data. The prioritization methods applied to the DCE images could automatically assimilate and determine the content of the medical images, identifying the liver cysts, hemangiomas, and carcinomas. The average uniformity between the radiologists and the proposed prioritization method was 0.805, 0.838, and 0.818 for cysts, hemangiomas, and carcinomas, respectively, indicating that the proposed method is efficient for liver DCE image prioritization.
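A rough sketch of the ranking step, assuming simple min-max normalized per-frame feature scores fused with fixed weights; the features and weights stand in for the paper's human-visual-model cues.

```python
# Hedged sketch of priority ranking by fused per-image scores (placeholder
# features, not the paper's luminance/motion/contour implementation).
import numpy as np

def rank_images(feature_matrix: np.ndarray, weights=(0.4, 0.3, 0.3)):
    """feature_matrix: (n_images, n_features). Returns indices, best first."""
    f = feature_matrix.astype(float)
    f = (f - f.min(axis=0)) / (np.ptp(f, axis=0) + 1e-9)  # min-max normalize each feature
    scores = f @ np.asarray(weights)                      # weighted fusion
    return np.argsort(scores)[::-1], scores               # highest score = highest priority

rng = np.random.default_rng(7)
features = rng.random((30, 3))          # 30 DCE frames, 3 visual features each
order, scores = rank_images(features)
print("top 5 frames:", order[:5])
```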
Affiliation(s)
- Qinghua Su
- School of Information, Beijing Wuzi University, Beijing, 101149, China
- School of Mechanical Engineering & Automation, Beihang University, Beijing, 100191, China
- Shusheng Bi
- School of Mechanical Engineering & Automation, Beihang University, Beijing, 100191, China
- Xuedong Yang
- Department of Radiology, Guang'an Men Hospital, Affiliated to China Academy of Chinese Medical Sciences, Beijing, 100053, China
14
Muhammad K, Ahmad J, Sajjad M, Baik SW. Visual saliency models for summarization of diagnostic hysteroscopy videos in healthcare systems. SpringerPlus 2016; 5:1495. [PMID: 27652068] [PMCID: PMC5013008] [DOI: 10.1186/s40064-016-3171-8]
Abstract
In clinical practice, diagnostic hysteroscopy (DH) videos are recorded in full and stored in long-term video libraries for later inspection of previous diagnoses, for research and training, and as evidence for patients' complaints. However, only a limited number of frames are required for the actual diagnosis, and these can be extracted using video summarization (VS). Unfortunately, general-purpose VS methods are not very effective for DH videos due to their significant level of similarity in terms of color and texture, unedited contents, and lack of shot boundaries. Therefore, in this paper, we investigate visual saliency models for effective abstraction of DH videos by extracting the diagnostically important frames. The objective of this study is to analyze the performance of various visual saliency models with consideration of domain knowledge and to nominate the best saliency model for DH video summarization in healthcare systems. Our experimental results indicate that a hybrid saliency model, comprising motion, contrast, texture, and curvature saliency, is the most suitable saliency model for summarization of DH videos in terms of extracted keyframes and accuracy.
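A toy sketch of the hybrid-saliency summarization idea, assuming the per-cue saliency maps are already available; the random maps and mean-pooling rule are placeholders for the actual motion, contrast, texture, and curvature estimators.

```python
# Fuse several per-frame saliency cues and keep the most salient frames as
# keyframes (placeholder data and pooling rule).
import numpy as np

def keyframes_from_saliency(saliency_maps: np.ndarray, top_k: int = 5):
    """saliency_maps: (n_frames, n_cues, H, W) in [0, 1]."""
    hybrid = saliency_maps.mean(axis=1)              # fuse the cues per frame
    frame_scores = hybrid.mean(axis=(1, 2))          # pool each fused map to a scalar
    return np.argsort(frame_scores)[::-1][:top_k]    # most salient frames first

rng = np.random.default_rng(11)
maps = rng.random((120, 4, 64, 64))   # 120 frames x 4 cues (motion, contrast, ...)
print("keyframes:", keyframes_from_saliency(maps))
```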
Affiliation(s)
- Khan Muhammad
- Intelligent Media Laboratory, Department of Digital Contents, College of Electronics and Information Engineering, Sejong University, Seoul, Republic of Korea
- Jamil Ahmad
- Intelligent Media Laboratory, Department of Digital Contents, College of Electronics and Information Engineering, Sejong University, Seoul, Republic of Korea
- Muhammad Sajjad
- Digital Image Processing Laboratory, Department of Computer Science, Islamia College Peshawar, Khyber Pakhtunkhwa, Pakistan
- Sung Wook Baik
- Intelligent Media Laboratory, Department of Digital Contents, College of Electronics and Information Engineering, Sejong University, Seoul, Republic of Korea
15
Hybrid Tolerance Rough Set–Firefly based supervised feature selection for MRI brain tumor image classification. Appl Soft Comput 2016. [DOI: 10.1016/j.asoc.2016.03.014]
16
Mehmood I, Sajjad M, Rho S, Baik SW. Divide-and-conquer based summarization framework for extracting affective video content. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2015.05.126]
17
GUDM: Automatic Generation of Unified Datasets for Learning and Reasoning in Healthcare. Sensors (Basel) 2015; 15:15772-98. [PMID: 26147731] [PMCID: PMC4541854] [DOI: 10.3390/s150715772]
Abstract
A wide array of biomedical data is generated and made available to healthcare experts. However, due to the diverse nature of the data, it is difficult to predict outcomes from it. It is therefore necessary to combine these diverse data sources into a single unified dataset. This paper proposes a global unified data model (GUDM) to provide a global unified data structure for all data sources and to generate a unified dataset with a “data modeler” tool. The proposed tool implements a user-centric, priority-based approach which can easily resolve the problems of unified data modeling and overlapping attributes across multiple datasets. The tool is illustrated using sample diabetes mellitus data. The diverse data sources used to generate the unified dataset for diabetes mellitus include clinical trial information, a social media interaction dataset, and physical activity data collected using different sensors. To demonstrate the significance of the unified dataset, we adopted a well-known rough set theory based rule creation process to create rules from the unified dataset. The evaluation of the tool on six different sets of locally created diverse datasets shows that the tool, on average, reduces the time and effort of the experts and knowledge engineer in creating unified datasets by 94.1%.
18
Mehmood I, Sajjad M, Baik SW. Mobile-cloud assisted video summarization framework for efficient management of remote sensing data generated by wireless capsule sensors. Sensors (Basel) 2014; 14:17112-45. [PMID: 25225874] [PMCID: PMC4208216] [DOI: 10.3390/s140917112]
Abstract
Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use, especially in remote health-monitoring services. However, during the WCE process, the large amount of captured video data demands a significant amount of computation to analyze and retrieve informative video frames. In order to facilitate efficient WCE data collection and browsing tasks, we present a resource- and bandwidth-aware WCE video summarization framework that extracts the representative keyframes of the WCE video contents by removing redundant and non-informative frames. For redundancy elimination, we use the Jeffrey-divergence between color histograms and inter-frame Boolean series-based correlation of color channels. To remove non-informative frames, multi-fractal texture features are extracted to assist the classification using an ensemble-based classifier. Owing to the limited WCE resources, it is impossible for the WCE system to perform computationally intensive video summarization tasks. To resolve these computational challenges, a mobile-cloud architecture is incorporated, which provides resizable computing capacities by adaptively offloading video summarization tasks between the client and the cloud server. The qualitative and quantitative results are encouraging and show that the proposed framework saves information transmission cost and bandwidth, as well as the valuable time of data analysts in browsing remote sensing data.
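The redundancy test mentioned above can be sketched as the Jeffrey divergence (symmetric Kullback-Leibler divergence) between consecutive frames' color histograms; the histograms and the threshold below are made-up placeholders.

```python
# Jeffrey divergence between two color histograms; frames whose divergence
# from the previous frame falls below a threshold are treated as redundant.
import numpy as np

def jeffrey_divergence(h1: np.ndarray, h2: np.ndarray, eps: float = 1e-12) -> float:
    """Symmetric KL divergence between two histograms (normalized internally)."""
    p = h1 / (h1.sum() + eps) + eps
    q = h2 / (h2.sum() + eps) + eps
    return float(np.sum(p * np.log(p / q) + q * np.log(q / p)))

rng = np.random.default_rng(5)
hist_a = rng.integers(0, 100, size=64).astype(float)   # color histogram, frame t
hist_b = hist_a + rng.integers(0, 5, size=64)          # near-duplicate frame t+1
redundant = jeffrey_divergence(hist_a, hist_b) < 0.05  # assumed threshold
print(jeffrey_divergence(hist_a, hist_b), redundant)
```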
Affiliation(s)
- Irfan Mehmood
- College of Electronics and Information Engineering, Sejong University, Seoul 143-747, Korea.
- Muhammad Sajjad
- College of Electronics and Information Engineering, Sejong University, Seoul 143-747, Korea.
- Sung Wook Baik
- College of Electronics and Information Engineering, Sejong University, Seoul 143-747, Korea.
19
Video summarization based tele-endoscopy: a service to efficiently manage visual data generated during wireless capsule endoscopy procedure. J Med Syst 2014; 38:109. [PMID: 25037715] [DOI: 10.1007/s10916-014-0109-y]
Abstract
Wireless capsule endoscopy (WCE) has great advantages over traditional endoscopy because it is portable and easy to use. More importantly, WCE combined with mobile computing ensures rapid transmission of diagnostic data to hospitals and enables off-site senior gastroenterologists to offer timely decision-making support. However, during the WCE process, video data are produced in huge amounts, but only a limited amount of the data is actually useful for diagnosis. The sharing and analysis of this video data becomes a challenging task due to constraints such as limited memory, energy, and communication capability. In order to facilitate efficient WCE data collection and browsing tasks, we present a video summarization-based tele-endoscopy service that estimates the semantically relevant video frames from the perspective of gastroenterologists. For this purpose, image moments, curvature, and multi-scale contrast are computed and fused to obtain the saliency map of each frame, which is then used to select keyframes. The proposed tele-endoscopy service selects keyframes based on their relevance to the disease diagnosis. This ensures that diagnostically relevant frames are sent to the gastroenterologist instead of all the data, thus saving transmission costs and bandwidth. The proposed framework also saves storage costs, as well as the precious time of doctors in browsing patients' information. The qualitative and quantitative results are encouraging and show that the proposed service provides video keyframes to the gastroenterologists without discarding important information.