1. Zhang J, Fu T, Xiao D, Fan J, Song H, Ai D, Yang J. Bi-Fusion of Structure and Deformation at Multi-Scale for Joint Segmentation and Registration. IEEE Trans Image Process 2024; 33:3676-3691. [PMID: 38837936] [DOI: 10.1109/tip.2024.3407657]
Abstract
Medical image segmentation and registration are two fundamental and highly related tasks. However, current works pursue mutual promotion between the two only at the loss-function level, ignoring the feature information generated by the encoder-decoder networks during task-specific feature mapping and the potential inter-task feature relationships. This paper proposes a unified multi-task joint learning framework based on bi-fusion of structure and deformation at multiple scales, called BFM-Net, which simultaneously produces the segmentation result and the deformation field in a single-step estimation. BFM-Net consists of a segmentation subnetwork (SegNet), a registration subnetwork (RegNet), and a multi-task connection (MTC) module. The MTC module transfers latent feature representations between segmentation and registration at multiple scales and links the two tasks at the network-architecture level; it comprises the spatial attention fusion (SAF) module, the multi-scale spatial attention fusion (MSAF) module, and the velocity field fusion (VFF) module. Extensive experiments on MR, CT, and ultrasound images demonstrate the effectiveness of the approach: the MTC module increases the Dice scores of segmentation by 3.2%, 1.6%, and 2.2%, and those of registration by 6.2%, 4.5%, and 3.0%, respectively. Compared with six state-of-the-art segmentation and registration algorithms, BFM-Net achieves superior performance on images of various modalities, demonstrating both its effectiveness and its generalization.
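The abstract does not describe the internals of the SAF module, but the general idea of spatial-attention fusion between two task branches can be sketched as follows. This is a minimal conceptual illustration, not BFM-Net's actual design: the function name, the sigmoid gating scheme, and the array shapes are all assumptions.

```python
import numpy as np

def spatial_attention_fuse(seg_feat, reg_feat):
    """Blend two same-shape feature maps (C, H, W) from a segmentation
    branch and a registration branch with a shared spatial attention gate."""
    # One saliency map per location, from channel-wise averaging of both branches.
    saliency = 0.5 * (seg_feat.mean(axis=0) + reg_feat.mean(axis=0))  # (H, W)
    gate = 1.0 / (1.0 + np.exp(-saliency))  # sigmoid -> values in (0, 1)
    # Broadcast the gate over channels; each location is a convex blend
    # of the two branches, weighted by the shared spatial attention.
    return gate[None] * seg_feat + (1.0 - gate[None]) * reg_feat
```

In a real network the gate would be learned (e.g., from a convolution over concatenated features) rather than computed from a fixed channel average.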
2. Farahat IS, Sharafeldeen A, Ghazal M, Alghamdi NS, Mahmoud A, Connelly J, van Bogaert E, Zia H, Tahtouh T, Aladrousy W, Tolba AE, Elmougy S, El-Baz A. An AI-based novel system for predicting respiratory support in COVID-19 patients through CT imaging analysis. Sci Rep 2024; 14:851. [PMID: 38191606] [PMCID: PMC10774502] [DOI: 10.1038/s41598-023-51053-9]
Abstract
The proposed AI-based diagnostic system predicts the respiratory support required for COVID-19 patients by analyzing the correlation between COVID-19 lesions and the level of respiratory support provided to the patient. Computed tomography (CT) imaging is used to analyze the three levels of respiratory support received: Level 0 (minimum support), Level 1 (non-invasive support such as soft oxygen), and Level 2 (invasive support such as mechanical ventilation). The system begins by segmenting the COVID-19 lesions from the CT images and creating an appearance model for each lesion using a 2D, rotation-invariant Markov-Gibbs random field (MGRF) model. Three MGRF-based models are created, one for each level of respiratory support, which allows the system to differentiate between different levels of severity in COVID-19 patients. A neural-network-based fusion system then makes the decision for each patient by combining the Gibbs-energy estimates from the three MGRF-based models. The proposed system was assessed on 307 COVID-19-infected patients, achieving an accuracy of [Formula: see text], a sensitivity of [Formula: see text], and a specificity of [Formula: see text], indicating a high level of prediction accuracy.
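The abstract does not specify the fusion network's architecture, but the decision step it describes, combining three per-level Gibbs-energy estimates into a class decision, can be sketched as one forward pass of a small hypothetical network. The layer sizes and weight names below are illustrative assumptions only.

```python
import numpy as np

def fuse_gibbs_energies(energies, W1, b1, W2, b2):
    """Forward pass of a small fusion network: three MGRF Gibbs-energy
    estimates (one per respiratory-support level) -> class probabilities."""
    x = np.asarray(energies, dtype=float)  # shape (3,): one energy per level
    h = np.tanh(W1 @ x + b1)               # hidden layer
    z = W2 @ h + b2                        # one logit per support level
    e = np.exp(z - z.max())                # numerically stable softmax
    return e / e.sum()                     # probabilities over Levels 0-2
```

The predicted support level would be the argmax of the returned probability vector; in the actual system the weights are learned by backpropagation on the training cohort.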
Affiliation(s)
- Ibrahim Shawky Farahat
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Mohammed Ghazal
- Electrical, Computer and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi, UAE
- Norah Saleh Alghamdi
- Department of Computer Sciences, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh, Saudi Arabia
- Ali Mahmoud
- Department of Bioengineering, University of Louisville, Louisville, USA
- James Connelly
- Department of Radiology, University of Louisville, Louisville, USA
- Eric van Bogaert
- Department of Radiology, University of Louisville, Louisville, USA
- Huma Zia
- Electrical, Computer and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi, UAE
- Tania Tahtouh
- College of Health Sciences, Abu Dhabi University, Abu Dhabi, UAE
- Waleed Aladrousy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Ahmed Elsaid Tolba
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- The Higher Institute of Engineering and Automotive Technology and Energy, Kafr El Sheikh, Egypt
- Samir Elmougy
- Department of Computer Science, Faculty of Computers and Information, Mansoura University, Mansoura, Egypt
- Ayman El-Baz
- Department of Bioengineering, University of Louisville, Louisville, USA
3. Saleh GA, Batouty NM, Gamal A, Elnakib A, Hamdy O, Sharafeldeen A, Mahmoud A, Ghazal M, Yousaf J, Alhalabi M, AbouEleneen A, Tolba AE, Elmougy S, Contractor S, El-Baz A. Impact of Imaging Biomarkers and AI on Breast Cancer Management: A Brief Review. Cancers (Basel) 2023; 15:5216. [PMID: 37958390] [PMCID: PMC10650187] [DOI: 10.3390/cancers15215216]
Abstract
Breast cancer stands out as the most frequently identified malignancy, ranking as the fifth leading cause of global cancer-related deaths. The American College of Radiology (ACR) introduced the Breast Imaging Reporting and Data System (BI-RADS) as a standard terminology facilitating communication between radiologists and clinicians; however, an update is now imperative to encompass the latest imaging modalities developed subsequent to the 5th edition of BI-RADS. Within this review article, we provide a concise history of BI-RADS, delve into advanced mammography techniques, ultrasonography (US), magnetic resonance imaging (MRI), PET/CT images, and microwave breast imaging, and subsequently furnish comprehensive, updated insights into Molecular Breast Imaging (MBI), diagnostic imaging biomarkers, and the assessment of treatment responses. This endeavor aims to enhance radiologists' proficiency in catering to the personalized needs of breast cancer patients. Lastly, we explore the augmented benefits of artificial intelligence (AI), machine learning (ML), and deep learning (DL) applications in segmenting, detecting, and diagnosing breast cancer, as well as the early prediction of the response of tumors to neoadjuvant chemotherapy (NAC). By assimilating state-of-the-art computer algorithms capable of deciphering intricate imaging data and aiding radiologists in rendering precise and effective diagnoses, AI has profoundly revolutionized the landscape of breast cancer radiology. Its vast potential holds the promise of bolstering radiologists' capabilities and ameliorating patient outcomes in the realm of breast cancer management.
Affiliation(s)
- Gehad A. Saleh
- Diagnostic and Interventional Radiology Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Nihal M. Batouty
- Diagnostic and Interventional Radiology Department, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Abdelrahman Gamal
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elnakib
- Electrical and Computer Engineering Department, School of Engineering, Penn State Erie, The Behrend College, Erie, PA 16563, USA
- Omar Hamdy
- Surgical Oncology Department, Oncology Centre, Mansoura University, Mansoura 35516, Egypt
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Jawad Yousaf
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Marah Alhalabi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Amal AbouEleneen
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elsaid Tolba
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- The Higher Institute of Engineering and Automotive Technology and Energy, New Heliopolis, Cairo 11829, Egypt
- Samir Elmougy
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Sohail Contractor
- Department of Radiology, University of Louisville, Louisville, KY 40202, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
4. El-Baz A, Giridharan GA, Shalaby A, Mahmoud AH, Ghazal M. Special Issue "Computer Aided Diagnosis Sensors". Sensors (Basel) 2022; 22:8052. [PMID: 36298403] [PMCID: PMC9610085] [DOI: 10.3390/s22208052]
Abstract
Sensors used to diagnose, monitor or treat diseases in the medical domain are known as medical sensors [...].
Affiliation(s)
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ahmed Shalaby
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali H. Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
5. Segmentation of Infant Brain Using Nonnegative Matrix Factorization. Appl Sci (Basel) 2022. [DOI: 10.3390/app12115377]
Abstract
This study develops an atlas-based automated framework for segmenting infant brains from magnetic resonance imaging (MRI). For accurate segmentation of the different structures of an infant's brain at the isointense age (6–12 months), the framework integrates features of diffusion tensor imaging (DTI), e.g., the fractional anisotropy (FA). A brain diffusion tensor (DT) image and its region map are considered samples of a Markov–Gibbs random field (MGRF) that jointly models the visual appearance, shape, and spatial homogeneity of a goal structure. The visual appearance is modeled with an empirical distribution of the probability of the DTI features, fused by their nonnegative matrix factorization (NMF) and allocation to data clusters. Projecting an initial high-dimensional feature space onto a low-dimensional space of the significant fused features with the NMF allows for better separation of the goal structure and its background. The cluster centers in the latter space are determined at the training stage by K-means clustering. To adapt to large infant-brain inhomogeneities and segment the brain images more accurately, both first-order and second-order appearance descriptors are taken into account in the fused NMF feature space. Additionally, a second-order MGRF model describes the appearance based on the voxel intensities and their pairwise spatial dependencies. An adaptive, spatially variant shape prior is constructed from a training set of co-aligned images, forming an atlas database. Moreover, the spatial homogeneity of the shape is described with a spatially uniform second-order 3D MGRF of region labels. In vivo experiments on nine infant datasets showed promising results in terms of accuracy, computed using three metrics: the 95th-percentile modified Hausdorff distance (MHD), the Dice similarity coefficient (DSC), and the absolute volume difference (AVD). Both the quantitative and visual assessments confirm that integrating the proposed NMF-fused DTI feature and intensity MGRF models of visual appearance, the adaptive shape prior, and the shape-homogeneity MGRF model is promising for segmenting infant brain DTI.
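The NMF projection and cluster-assignment steps described above can be sketched with standard algorithms. This is a generic illustration assuming Lee–Seung multiplicative updates and Euclidean nearest-center assignment; the paper's exact NMF variant, ranks, and training details are not given in the abstract.

```python
import numpy as np

def nmf(V, r, iters=500, seed=0):
    """Factor a nonnegative matrix V (features x voxels) as W @ H using
    Lee-Seung multiplicative updates for the Frobenius-norm loss."""
    rng = np.random.default_rng(seed)
    n, m = V.shape
    W = rng.random((n, r)) + 1e-3  # strictly positive init
    H = rng.random((r, m)) + 1e-3
    for _ in range(iters):
        H *= (W.T @ V) / (W.T @ W @ H + 1e-9)  # update coefficients
        W *= (V @ H.T) / (W @ H @ H.T + 1e-9)  # update basis
    return W, H

def assign_clusters(H, centers):
    """Label each voxel (a column of H, shape r x m) with its nearest
    cluster center (centers has shape r x k, learned at training time)."""
    d2 = ((H[:, None, :] - centers[:, :, None]) ** 2).sum(axis=0)  # (k, m)
    return d2.argmin(axis=0)
```

Here the columns of `H` play the role of the low-dimensional fused features, and `assign_clusters` mirrors the allocation of voxels to the K-means clusters found during training.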
6. Fahmy D, Kandil H, Khelifi A, Yaghi M, Ghazal M, Sharafeldeen A, Mahmoud A, El-Baz A. How AI Can Help in the Diagnostic Dilemma of Pulmonary Nodules. Cancers (Basel) 2022; 14:1840. [PMID: 35406614] [PMCID: PMC8997734] [DOI: 10.3390/cancers14071840]
Abstract
Simple Summary: Pulmonary nodules can be a sign of bronchogenic carcinoma; detecting them early can slow their progression and save lives. Lung cancer is the second most common type of cancer in both men and women. This manuscript discusses the current applications of artificial intelligence (AI) in lung segmentation as well as pulmonary nodule segmentation and classification using computed tomography (CT) scans, published in the last two decades, in addition to the limitations and future prospects of AI in this field.
Abstract: Pulmonary nodules are precursors of bronchogenic carcinoma, and their early detection facilitates early treatment, which saves many lives. Unfortunately, pulmonary nodule detection and classification are liable to subjective variation, with a high rate of missed small cancerous lesions, which opens the way for artificial intelligence (AI) and computer-aided diagnosis (CAD) systems. The field of deep learning and neural networks is expanding every day, with new models designed to overcome diagnostic problems and provide more applicable and easily used tools. In this review, we briefly discuss the current applications of AI in lung segmentation and in pulmonary nodule detection and classification.
Affiliation(s)
- Dalia Fahmy
- Diagnostic Radiology Department, Mansoura University Hospital, Mansoura 35516, Egypt
- Heba Kandil
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Information Technology Department, Faculty of Computers and Informatics, Mansoura University, Mansoura 35516, Egypt
- Adel Khelifi
- Computer Science and Information Technology Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Maha Yaghi
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Ahmed Sharafeldeen
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
- Ayman El-Baz
- Bioengineering Department, University of Louisville, Louisville, KY 40292, USA
7. Farahat IS, Sharafeldeen A, Elsharkawy M, Soliman A, Mahmoud A, Ghazal M, Taher F, Bilal M, Abdel Razek AAK, Aladrousy W, Elmougy S, Tolba AE, El-Melegy M, El-Baz A. The Role of 3D CT Imaging in the Accurate Diagnosis of Lung Function in Coronavirus Patients. Diagnostics (Basel) 2022; 12:696. [PMID: 35328249] [PMCID: PMC8947065] [DOI: 10.3390/diagnostics12030696]
Abstract
Early grading of coronavirus disease 2019 (COVID-19), together with timely ventilator support, is a prime way to help the world fight this virus and reduce the mortality rate. To reduce the burden on physicians, we developed an automatic Computer-Aided Diagnostic (CAD) system to grade COVID-19 from Computed Tomography (CT) images. This system segments the lung region from chest CT scans using an unsupervised approach based on an appearance model, followed by 3D rotation-invariant Markov-Gibbs Random Field (MGRF)-based morphological constraints. It then analyzes the segmented lung and generates precise, analytical imaging markers by estimating the MGRF-based analytical potentials. Three Gibbs-energy markers were extracted from each CT scan by tuning the MGRF parameters on each lesion type separately: healthy/mild, moderate, and severe lesions. To represent these markers more reliably, a Cumulative Distribution Function (CDF) was generated, and statistical markers were extracted from it, namely the 10th through 90th CDF percentiles in 10% increments. The three extracted markers were then combined and fed into a backpropagation neural network to make the diagnosis. The developed system was assessed on 76 COVID-19-infected patients using two metrics, accuracy and kappa, and was trained and tested in three ways. In the first approach, the MGRF model was trained and tested on the lungs, achieving 95.83% accuracy and 93.39% kappa. In the second, the MGRF model was trained on the lesions and tested on the lungs, achieving 91.67% accuracy and 86.67% kappa. Finally, the MGRF model was trained and tested on the lesions, achieving 100% accuracy and 100% kappa. These results show the ability of the developed system to accurately grade COVID-19 lesions compared with other machine-learning classifiers, such as k-Nearest Neighbor (KNN), decision tree, naïve Bayes, and random forest.
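The percentile-marker step described above is straightforward to reproduce: from the empirical distribution of per-lesion Gibbs energies, take the 10th through 90th percentiles in 10% increments. The function name and input are illustrative; the paper's energy values themselves are, of course, model-specific.

```python
import numpy as np

def cdf_percentile_markers(energies):
    """Statistical markers from the empirical CDF of Gibbs-energy values:
    the 10th through 90th percentiles in 10% increments (9 markers)."""
    qs = np.arange(10, 100, 10)  # 10, 20, ..., 90
    return np.percentile(np.asarray(energies, dtype=float), qs)
```

The nine resulting values summarize the shape of the energy distribution and would be concatenated across the three lesion models before being fed to the classifier.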
Affiliation(s)
- Ibrahim Shawky Farahat
- BioImaging Laboratory, Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
- Ahmed Sharafeldeen
- BioImaging Laboratory, Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
- Mohamed Elsharkawy
- BioImaging Laboratory, Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
- Ahmed Soliman
- BioImaging Laboratory, Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
- Ali Mahmoud
- BioImaging Laboratory, Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
- Mohammed Ghazal
- Electrical, Computer, and Biomedical Engineering Department, College of Engineering, Abu Dhabi University, Abu Dhabi 59911, United Arab Emirates
- Fatma Taher
- College of Technological Innovation, Zayed University, Dubai 19282, United Arab Emirates
- Maha Bilal
- Department of Diagnostic Radiology, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Ahmed Abdel Khalek Abdel Razek
- Department of Diagnostic Radiology, Faculty of Medicine, Mansoura University, Mansoura 35516, Egypt
- Waleed Aladrousy
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Samir Elmougy
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- Ahmed Elsaid Tolba
- Computer Science Department, Faculty of Computers and Information, Mansoura University, Mansoura 35516, Egypt
- The Higher Institute of Engineering and Automotive Technology and Energy, New Heliopolis 11829, Cairo, Egypt
- Moumen El-Melegy
- Department of Electrical Engineering, Assiut University, Assiut 71511, Egypt
- Ayman El-Baz
- BioImaging Laboratory, Department of Bioengineering, University of Louisville, Louisville, KY 40292, USA
8. Khan MA, Alhaisoni M, Tariq U, Hussain N, Majid A, Damaševičius R, Maskeliūnas R. COVID-19 Case Recognition from Chest CT Images by Deep Learning, Entropy-Controlled Firefly Optimization, and Parallel Feature Fusion. Sensors (Basel) 2021; 21:7286. [PMID: 34770595] [PMCID: PMC8588229] [DOI: 10.3390/s21217286]
Abstract
In healthcare, a multitude of data is collected from medical sensors and devices, such as X-ray machines, magnetic resonance imaging, and computed tomography (CT), that can be analyzed by artificial intelligence methods for early diagnosis of diseases. Recently, the outbreak of the COVID-19 disease caused many deaths. Computer vision researchers support medical doctors by employing deep learning techniques on medical images to diagnose COVID-19 patients. Various methods have been proposed for COVID-19 case classification; here, a new automated technique is proposed using parallel fusion and optimization of deep learning models. The proposed technique starts with contrast enhancement using a combination of top-hat and Wiener filters. Two pre-trained deep learning models (AlexNet and VGG16) are employed and fine-tuned according to the target classes (COVID-19 and healthy). Features are extracted and fused using a parallel fusion approach, parallel positive correlation. Optimal features are selected using the entropy-controlled firefly optimization method, and the selected features are classified using machine-learning classifiers such as a multiclass support vector machine (MC-SVM). Experiments carried out on the Radiopaedia database achieved an accuracy of 98%, and a detailed analysis shows the improved performance of the proposed scheme.
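The entropy-controlled selection step can be illustrated in isolation. The paper couples entropy with a firefly swarm search; the sketch below shows only the entropy-scoring half (the swarm optimization is omitted), and the function names, bin count, and "keep top-k" rule are assumptions for illustration.

```python
import numpy as np

def entropy_scores(F, bins=16):
    """Shannon entropy (bits) of each feature column of F (samples x features).
    Higher entropy is treated here as 'more informative'."""
    out = []
    for col in F.T:
        counts, _ = np.histogram(col, bins=bins)
        p = counts / counts.sum()
        p = p[p > 0]  # drop empty bins so log2 is defined
        out.append(float(-(p * np.log2(p)).sum()))
    return np.array(out)

def select_top_features(F, k, bins=16):
    """Keep the k highest-entropy columns; returns (reduced F, kept indices)."""
    idx = np.sort(np.argsort(entropy_scores(F, bins))[::-1][:k])
    return F[:, idx], idx
```

In the actual method a firefly swarm would search over feature subsets with an entropy-based objective, rather than greedily keeping the top-k columns.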
Affiliation(s)
- Muhammad Attique Khan
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
- Majed Alhaisoni
- College of Computer Science and Engineering, University of Ha’il, Ha’il 55211, Saudi Arabia
- Usman Tariq
- Information Systems Department, College of Computer Engineering and Sciences, Prince Sattam Bin Abdulaziz University, Al Khraj 11942, Saudi Arabia
- Nazar Hussain
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
- Abdul Majid
- Department of Computer Science, HITEC University, Taxila 47080, Pakistan
- Robertas Damaševičius
- Faculty of Applied Mathematics, Silesian University of Technology, 44-100 Gliwice, Poland
- Rytis Maskeliūnas
- Department of Multimedia Engineering, Kaunas University of Technology, 51368 Kaunas, Lithuania