1.
Zakaria R, Abdelmajid H, Dya Z, Hakim A. PelviNet: A Collaborative Multi-agent Convolutional Network for Enhanced Pelvic Image Registration. Journal of Imaging Informatics in Medicine 2025; 38:957-966. [PMID: 39249582; PMCID: PMC11950488; DOI: 10.1007/s10278-024-01249-w]
Abstract
PelviNet introduces a groundbreaking multi-agent convolutional network architecture tailored for enhancing pelvic image registration. This innovative framework leverages shared convolutional layers, enabling synchronized learning among agents and ensuring an exhaustive analysis of intricate 3D pelvic structures. The architecture combines max pooling, parametric ReLU activations, and agent-specific layers to optimize both individual and collective decision-making processes. A communication mechanism efficiently aggregates outputs from these shared layers, enabling agents to make well-informed decisions by harnessing combined intelligence. PelviNet's evaluation centers on both quantitative accuracy metrics and visual representations to elucidate agents' performance in pinpointing optimal landmarks. Empirical results demonstrate PelviNet's superiority over traditional methods, achieving an average image-wise error of 2.8 mm, a subject-wise error of 3.2 mm, and a mean Euclidean distance error of 3.0 mm. These quantitative results highlight the model's efficiency and precision in landmark identification, crucial for medical contexts such as radiation therapy, where exact landmark identification significantly influences treatment outcomes. By reliably identifying critical structures, PelviNet advances pelvic image analysis and offers potential enhancements for broader medical imaging applications, marking a significant step forward in computational healthcare.
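The image-wise, subject-wise, and mean Euclidean distance errors quoted above are all averages of point-to-point Euclidean distances between predicted and reference landmarks. A minimal stdlib Python sketch of that metric, with toy coordinates (not PelviNet's code or data):

```python
import math

def mean_euclidean_error(pred, truth):
    """Mean Euclidean distance between matched predicted and ground-truth 3D landmarks."""
    dists = [math.dist(p, t) for p, t in zip(pred, truth)]
    return sum(dists) / len(dists)

# Toy landmark coordinates in millimetres (illustrative values only)
pred  = [(10.0, 20.0, 30.0), (40.0, 50.0, 60.0)]
truth = [(10.0, 20.0, 33.0), (44.0, 50.0, 60.0)]
print(mean_euclidean_error(pred, truth))  # → 3.5
```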
Affiliation(s)
- Rguibi Zakaria
- LAVETE Laboratory, Hassan First University, Settat, Morocco
- Zitouni Dya
- LAVETE Laboratory, Hassan First University, Settat, Morocco
- Allali Hakim
- LAVETE Laboratory, Hassan First University, Settat, Morocco
2.
Lin N, Shi Y, Ye M, Zhang Y, Jia X. Deep transfer learning radiomics for distinguishing sinonasal malignancies: a preliminary MRI study. Future Oncol 2025; 21:975-982. [PMID: 39991909; PMCID: PMC11938957; DOI: 10.1080/14796694.2025.2469486]
Abstract
PURPOSE This study aimed to assess the diagnostic accuracy of combining MRI hand-crafted (HC) radiomics features with deep transfer learning (DTL) in identifying sinonasal squamous cell carcinoma (SCC), adenoid cystic carcinoma (ACC), and non-Hodgkin's lymphoma (NHL) using various machine learning (ML) models. METHODS A retrospective analysis of 132 patients (50 with SCC, 42 with NHL, 40 with ACC) was conducted. The dataset was split 80/20 into training and testing cohorts. HC radiomics and DTL features were extracted from T2-weighted, ADC, and contrast-enhanced T1-weighted MRI images. ResNet50, a pre-trained convolutional neural network, was used for DTL feature extraction. LASSO regression was applied to select features and create radiomic signatures. Seven ML models were evaluated for classification performance. RESULTS The radiomic signature included 24 HC and 8 DTL features. The support vector machine (SVM) model achieved the highest accuracy (92.6%) in the testing cohort. The SVM model's ROC analysis showed macro-average and micro-average AUC values of 0.98 and 0.99. AUCs for ACC, NHL, and SCC were 0.99, 0.97, and 1.00, respectively. K-nearest neighbors (KNN) and XGBoost also achieved AUC values above 0.90. CONCLUSION Combining MRI-based HC radiomics and DTL features with the SVM model enhanced differentiation between sinonasal SCC, NHL, and ACC.
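An 80/20 split of a three-class cohort like this is typically stratified so both cohorts preserve the class balance. An illustrative stdlib sketch of such a split, using the class sizes from the abstract (the paper's actual splitting procedure is not specified here):

```python
import random

def stratified_split(samples, test_frac=0.2, seed=42):
    """Split (id, label) pairs into train/test cohorts, keeping per-class proportions."""
    by_label = {}
    for sid, label in samples:
        by_label.setdefault(label, []).append(sid)
    rng = random.Random(seed)
    train, test = [], []
    for label, ids in by_label.items():
        rng.shuffle(ids)
        n_test = round(len(ids) * test_frac)
        test += [(i, label) for i in ids[:n_test]]
        train += [(i, label) for i in ids[n_test:]]
    return train, test

# Class sizes from the abstract: 50 SCC, 42 NHL, 40 ACC (patient IDs are made up)
cohort = [(f"p{i}", "SCC") for i in range(50)] \
       + [(f"n{i}", "NHL") for i in range(42)] \
       + [(f"a{i}", "ACC") for i in range(40)]
train, test = stratified_split(cohort)
print(len(train), len(test))  # → 106 26
```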
Affiliation(s)
- Naier Lin
- Department of Radiology, Eye & ENT Hospital, Fudan University, Shanghai, China
- Yiqian Shi
- Department of Radiology, Eye & ENT Hospital, Fudan University, Shanghai, China
- Min Ye
- Department of Pathology, Eye & ENT Hospital, Fudan University, Shanghai, China
- Yiyin Zhang
- Department of Radiology, Eye & ENT Hospital, Fudan University, Shanghai, China
- Xianhao Jia
- Department of Otology and Skull Base Surgery, Eye & ENT Hospital, Fudan University, Shanghai, China
3.
Rostamian R, Shariat Panahi M, Karimpour M, Kashani HG, Abi A. A deep learning-based multi-view approach to automatic 3D landmarking and deformity assessment of lower limb. Sci Rep 2025; 15:534. [PMID: 39747979; PMCID: PMC11697423; DOI: 10.1038/s41598-024-84387-z]
Abstract
Anatomical landmark detection in CT images is widely used in the identification of skeletal disorders. However, the traditional process of manually detecting anatomical landmarks, especially in three dimensions, is both time-consuming and prone to human error. We propose a novel, deep-learning-based approach to automatic detection of 3D landmarks in CT images of the lower limb. We generate multiple view renderings of the scanned limb and then integrate them, using a pyramid-style convolutional neural network, to build a 3D model of the bone and to determine the spatial coordinates of the landmarks. Those landmarks are then used to calculate key anatomical indicators that would enable the reliable diagnosis of skeletal disorders. To evaluate the performance of the proposed approach, we compare its predicted landmark coordinates and resulting anatomical indicators (both 2D and 3D) with those determined by human experts. The average coordinate error (difference between automatically and manually determined coordinates) of the landmarks was 2.05 ± 1.36 mm on test data, whereas the average angular error (difference between automatically and manually calculated angles in three and two dimensions) on the same dataset was 0.53 ± 0.66° and 0.74 ± 0.87°, respectively. Our proposed deep-learning-based approach not only outperforms traditional landmark detection and indicator assessment methods in terms of speed and accuracy but also improves the credibility of the ensuing diagnoses by avoiding manual landmarking errors.
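The angular errors above compare anatomical angles derived from detected landmarks. A small stdlib sketch of how such an angle is computed from three 3D landmarks (generic vector geometry, not the authors' implementation):

```python
import math

def angle_deg(a, b, c):
    """Angle at vertex b (degrees) formed by 3D landmarks a-b-c."""
    v1 = [a[i] - b[i] for i in range(3)]
    v2 = [c[i] - b[i] for i in range(3)]
    dot = sum(x * y for x, y in zip(v1, v2))
    n1 = math.sqrt(sum(x * x for x in v1))
    n2 = math.sqrt(sum(x * x for x in v2))
    return math.degrees(math.acos(dot / (n1 * n2)))

# Three toy landmarks forming a right angle at the origin
print(round(angle_deg((1, 0, 0), (0, 0, 0), (0, 1, 0)), 1))  # → 90.0
```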
Affiliation(s)
- Reyhaneh Rostamian
- School of Mechanical Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Masoud Shariat Panahi
- School of Mechanical Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Morad Karimpour
- School of Mechanical Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Hadi G Kashani
- School of Mechanical Engineering, College of Engineering, University of Tehran, Tehran, Iran
- Amirhossein Abi
- School of Mechanical Engineering, College of Engineering, University of Tehran, Tehran, Iran
4.
Zhang R, Mo H, Hu W, Jie B, Xu L, He Y, Ke J, Wang J. Super-resolution landmark detection networks for medical images. Comput Biol Med 2024; 182:109095. [PMID: 39236661; DOI: 10.1016/j.compbiomed.2024.109095]
Abstract
Craniomaxillofacial (CMF) and nasal landmark detection are fundamental components of computer-assisted surgery. Medical landmark detection methods fall into regression-based and heatmap-based approaches, with heatmap-based methods forming one of the main methodology branches. These methods rely on high-resolution (HR) features, which contain more location information, to reduce the network error caused by sub-pixel localization. Previous studies extracted HR patches around each landmark from downsampled images via object detection and subsequently input them into the network to obtain HR features; such complex multistage pipelines harm accuracy. The network error caused by downsampling and upsampling operations during training, which interpolate low-resolution features to generate HR features or the predicted heatmap, remains significant. We propose standard super-resolution landmark detection networks (SRLD-Net) and a super-resolution UNet (SR-UNet) to effectively reduce network error. SRLD-Net uses a pyramid pooling block, a pyramid fusion block, and a super-resolution fusion block to combine global prior knowledge and multi-scale local features; similarly, SR-UNet adopts a pyramid pooling block and a super-resolution block. These components markedly improve the representation learning ability of our proposed methods. A super-resolution upsampling layer is then used to generate detailed predicted heatmaps. Our proposed networks were compared to state-of-the-art methods on craniomaxillofacial, nasal, and mandibular molar datasets, demonstrating better performance. The mean errors of 18 CMF, 6 nasal, and 14 mandibular landmarks are 1.39 ± 1.04, 1.31 ± 1.09, and 2.01 ± 4.33 mm, respectively. These results indicate that super-resolution methods have great potential in medical landmark detection tasks. This paper provides two effective heatmap-based landmark detection networks; the code is released at https://github.com/Runshi-Zhang/SRLD-Net.
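Heatmap-based detection, as discussed above, predicts a per-landmark response map and reads the landmark off its peak; localization error arises partly because that peak lives on a discrete grid. A minimal stdlib illustration of the encode/decode round trip (toy sizes, unrelated to SRLD-Net's actual architecture):

```python
import math

def gaussian_heatmap(h, w, cx, cy, sigma=2.0):
    """2D Gaussian heatmap peaked at landmark (cx, cy)."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))
             for x in range(w)] for y in range(h)]

def decode(heatmap):
    """Recover the landmark as the argmax of the heatmap."""
    best = max((v, x, y) for y, row in enumerate(heatmap) for x, v in enumerate(row))
    return best[1], best[2]

hm = gaussian_heatmap(16, 16, cx=5, cy=9)
print(decode(hm))  # → (5, 9)
```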
Affiliation(s)
- Runshi Zhang
- School of Mechanical Engineering and Automation, Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
- Hao Mo
- School of Mechanical Engineering and Automation, Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
- Weini Hu
- Peking University Third Hospital, 49 Huayuan North Road, Haidian District, 100191, Beijing, China
- Bimeng Jie
- Peking University School and Hospital of Stomatology, Weigong Village, Haidian District, 100081, Beijing, China
- Lin Xu
- Peking University Third Hospital, 49 Huayuan North Road, Haidian District, 100191, Beijing, China
- Yang He
- Peking University School and Hospital of Stomatology, Weigong Village, Haidian District, 100081, Beijing, China
- Jia Ke
- Peking University Third Hospital, 49 Huayuan North Road, Haidian District, 100191, Beijing, China
- Junchen Wang
- School of Mechanical Engineering and Automation, Beihang University, 37 Xueyuan Road, Haidian District, 100191, Beijing, China
5.
Amiri S, Vrtovec T, Mustafaev T, Deufel CL, Thomsen HS, Sillesen MH, Brandt EGS, Andersen MB, Müller CF, Ibragimov B. Reinforcement learning-based anatomical maps for pancreas subregion and duct segmentation. Med Phys 2024; 51:7378-7392. [PMID: 39031886; DOI: 10.1002/mp.17300]
Abstract
BACKGROUND The pancreas is a complex abdominal organ with many anatomical variations, and therefore automated pancreas segmentation from medical images is a challenging application. PURPOSE In this paper, we present a framework for segmenting individual pancreatic subregions and the pancreatic duct from three-dimensional (3D) computed tomography (CT) images. METHODS A multiagent reinforcement learning (RL) network was used to detect landmarks of the head, neck, body, and tail of the pancreas, and landmarks along the pancreatic duct in a selected target CT image. Using the landmark detection results, an atlas of pancreases was nonrigidly registered to the target image, resulting in anatomical probability maps for the pancreatic subregions and duct. The probability maps were augmented with multilabel 3D U-Net architectures to obtain the final segmentation results. RESULTS To evaluate the performance of our proposed framework, we computed the Dice similarity coefficient (DSC) between the predicted and ground truth manual segmentations on a database of 82 CT images with manually segmented pancreatic subregions and 37 CT images with manually segmented pancreatic ducts. For the four pancreatic subregions, the mean DSC improved from 0.38, 0.44, and 0.39 with standard 3D U-Net, Attention U-Net, and shifted windowing (Swin) U-Net architectures, to 0.51, 0.47, and 0.49, respectively, when utilizing the proposed RL-based framework. For the pancreatic duct, the RL-based framework achieved a mean DSC of 0.70, significantly outperforming the standard approaches and existing methods on different datasets. CONCLUSIONS The resulting accuracy of the proposed RL-based segmentation framework demonstrates an improvement against segmentation with standard U-Net architectures.
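The DSC figures above follow the standard definition 2|A∩B| / (|A| + |B|) over predicted and ground-truth voxel sets. A minimal stdlib sketch of the metric (toy voxel-index sets, not the paper's evaluation code):

```python
def dice(pred, truth):
    """Dice similarity coefficient between two voxel-label sets."""
    pred, truth = set(pred), set(truth)
    if not pred and not truth:
        return 1.0  # two empty segmentations agree perfectly by convention
    return 2 * len(pred & truth) / (len(pred) + len(truth))

# Toy voxel index sets: 3 of 4 voxels overlap
a = {(0, 0, 0), (0, 0, 1), (0, 1, 0), (1, 0, 0)}
b = {(0, 0, 0), (0, 0, 1), (0, 1, 1), (1, 0, 0)}
print(dice(a, b))  # → 0.75
```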
Affiliation(s)
- Sepideh Amiri
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Henrik S Thomsen
- Department of Radiology, Herlev Gentofte Hospital, Copenhagen University Hospital, Copenhagen, Denmark
- Martin Hylleholt Sillesen
- Department of Organ Surgery and Transplantation, and CSTAR, Copenhagen University Hospital Rigshospitalet, Copenhagen, Denmark
- Michael Brun Andersen
- Department of Radiology, Herlev Gentofte Hospital, Copenhagen University Hospital, Copenhagen, Denmark
- Department of Clinical Medicine, Copenhagen University, Copenhagen, Denmark
- Christoph Felix Müller
- Department of Radiology, Herlev Gentofte Hospital, Copenhagen University Hospital, Copenhagen, Denmark
- Bulat Ibragimov
- Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
6.
Zeng B, Wang H, Joskowicz L, Chen X. Fragment distance-guided dual-stream learning for automatic pelvic fracture segmentation. Comput Med Imaging Graph 2024; 116:102412. [PMID: 38943846; DOI: 10.1016/j.compmedimag.2024.102412]
Abstract
Pelvic fracture is a complex and severe injury. Accurate diagnosis and treatment planning require the segmentation of the pelvic structure and the fractured fragments from preoperative CT scans. However, this segmentation is a challenging task, as the fragments from a pelvic fracture typically exhibit considerable variability and irregularity in their morphologies, locations, and quantities. In this study, we propose a novel dual-stream learning framework for the automatic segmentation and category labeling of pelvic fractures. Our method uniquely identifies pelvic fracture fragments in various quantities and locations using a dual-branch architecture that leverages distance learning from bone fragments. Moreover, we develop a multi-size feature fusion module that adaptively aggregates features from diverse receptive fields tailored to targets of different sizes and shapes, thus boosting segmentation performance. Extensive experiments on three pelvic fracture datasets from different medical centers demonstrated the accuracy and generalizability of the proposed method. It achieves a mean Dice coefficient and mean sensitivity of 0.935 ± 0.068 and 0.929 ± 0.058 on the FracCLINIC dataset, and 0.955 ± 0.072 and 0.912 ± 0.125 on the FracSegData dataset, which are superior to those of other competing methods. Our method optimizes the process of pelvic fracture segmentation, potentially serving as an effective tool for preoperative planning in the clinical management of pelvic fractures.
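The reported mean Dice and sensitivity are standard overlap metrics; sensitivity, for instance, is the fraction of ground-truth fragment voxels the model recovers. A stdlib sketch (toy voxel sets, not the FracCLINIC/FracSegData evaluation code):

```python
def sensitivity(pred, truth):
    """Voxel-wise sensitivity (recall): TP / (TP + FN)."""
    pred, truth = set(pred), set(truth)
    tp = len(pred & truth)   # ground-truth voxels the model found
    fn = len(truth - pred)   # ground-truth voxels the model missed
    return tp / (tp + fn) if truth else 1.0

truth = {(i, 0, 0) for i in range(10)}               # 10 fragment voxels
pred  = {(i, 0, 0) for i in range(9)} | {(0, 5, 5)}  # 9 hits plus 1 false positive
print(sensitivity(pred, truth))  # → 0.9
```

Note that the false positive does not affect sensitivity; it would lower precision or Dice instead.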
Affiliation(s)
- Bolun Zeng
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China
- Huixiang Wang
- Department of Orthopedics, National Center for Orthopedics, Shanghai Sixth People's Hospital Affiliated to Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Leo Joskowicz
- School of Computer Science and Engineering and the Edmond and Lily Safra Center for Brain Sciences, Hebrew University of Jerusalem, Jerusalem, Israel
- Xiaojun Chen
- Institute of Biomedical Manufacturing and Life Quality Engineering, State Key Laboratory of Mechanical System and Vibration, School of Mechanical Engineering, Shanghai Jiao Tong University, Shanghai, China; Institute of Medical Robotics, Shanghai Jiao Tong University, Shanghai, China
7.
López Diez P, Sundgaard JV, Margeta J, Diab K, Patou F, Paulsen RR. Deep reinforcement learning and convolutional autoencoders for anomaly detection of congenital inner ear malformations in clinical CT images. Comput Med Imaging Graph 2024; 113:102343. [PMID: 38325245; DOI: 10.1016/j.compmedimag.2024.102343]
Abstract
Detection of abnormalities within the inner ear is a challenging task even for experienced clinicians. In this study, we propose an automated abnormality detection method to support the diagnosis and clinical management of various otological disorders. Our framework for inner ear abnormality detection is based on deep reinforcement learning for landmark detection and is trained exclusively on normative data. In our approach, we derive two abnormality measurements: Dimage and Uimage. The first measurement, Dimage, is based on the variability of the predicted configuration of a well-defined set of landmarks within a subspace formed by the point distribution model of those landmarks' locations in normative data. We create this subspace using Procrustes shape alignment and Principal Component Analysis projection. The second measurement, Uimage, represents the degree of hesitation of the agents when approaching the final location of the landmarks and is based on the distribution of the model's predicted Q-values for the last ten states. Finally, we unify these measurements in a combined anomaly measurement called Cimage. We compare our method's performance with a 3D convolutional autoencoder technique for abnormality detection that uses the patch-based mean squared error between the original and the generated image as the basis for classifying abnormal versus normal anatomies. Our deep reinforcement learning-based method shows better detection performance for abnormal anatomies on both an artificial and a real clinical CT dataset of various inner ear malformations, with an increase of 11.2% in the area under the ROC curve. Our method is also more robust against the heterogeneous quality of the images in our dataset.
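The abstract does not spell out how Dimage and Uimage are unified into Cimage. One simple possibility, shown purely as an assumption-laden sketch (the normalization and averaging rule here is illustrative, not taken from the paper), is to min-max normalize each measurement across images and average them:

```python
def normalize(values):
    """Min-max normalize a list of per-image scores to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) if hi > lo else 0.0 for v in values]

def combined_anomaly(d_scores, u_scores):
    """Cimage-style combination: average of the two normalized measurements (assumed rule)."""
    d_n, u_n = normalize(d_scores), normalize(u_scores)
    return [(d + u) / 2 for d, u in zip(d_n, u_n)]

d = [0.1, 0.2, 0.9]   # shape-variability scores (Dimage), illustrative values
u = [0.3, 0.1, 0.8]   # agent-hesitation scores (Uimage), illustrative values
print(combined_anomaly(d, u))
```

Under this rule the third image, which scores high on both measurements, receives the maximum combined anomaly score.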
Affiliation(s)
- Paula López Diez
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark
- Josefine Vilsbøll Sundgaard
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark; Novo Nordisk A/S, Denmark
- Jan Margeta
- KardioMe, Research & Development, Nova Dubnica, Slovakia; Oticon Medical, Research & Technology, Vallauris, France
- Khassan Diab
- Tashkent International Clinic, Tashkent, Uzbekistan
- François Patou
- Oticon Medical, Research & Technology group, Smørum, Denmark
- Rasmus R Paulsen
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Lyngby, Denmark
8.
Zhai H, Huang J, Li L, Tao H, Wang J, Li K, Shao M, Cheng X, Wang J, Wu X, Wu C, Zhang X, Wang H, Xiong Y. Deep learning-based workflow for hip joint morphometric parameter measurement from CT images. Phys Med Biol 2023; 68:225003. [PMID: 37852280; DOI: 10.1088/1361-6560/ad04aa]
Abstract
Objective. Precise hip joint morphometry measurement from CT images is crucial for successful preoperative arthroplasty planning and biomechanical simulations. Although deep learning approaches have been applied to clinical bone surgery planning, there is still a lack of relevant research on quantifying hip joint morphometric parameters from CT images. Approach. This paper proposes a deep learning workflow for CT-based hip morphometry measurement. For the first step, a coarse-to-fine deep learning model is designed for accurate reconstruction of the hip geometry (3D bone models and key landmark points). Based on the geometric models, a robust measurement method is developed to calculate a full set of morphometric parameters, including the acetabular anteversion and inclination, the femoral neck shaft angle and the inclination, etc. Our methods were validated on two datasets with different imaging protocol parameters and further compared with the conventional 2D x-ray-based measurement method. Main results. The proposed method yields high bone segmentation accuracies (Dice coefficients of 98.18% and 97.85%, respectively) and low landmark prediction errors (1.55 mm and 1.65 mm) on both datasets. The automated measurements agree well with the radiologists' manual measurements (Pearson correlation coefficients between 0.47 and 0.99 and intraclass correlation coefficients between 0.46 and 0.98). This method provides more accurate measurements than the conventional 2D x-ray-based measurement method, reducing the error of acetabular cup size from over 2 mm to less than 1 mm. Moreover, our morphometry measurement method is robust against errors in the preceding bone segmentation step. As we tested different deep learning methods for the prerequisite bone segmentation, our method produced consistent final measurement results, with only a 0.37 mm maximum inter-method difference in the cup size. Significance. This study proposes a deep learning approach with improved robustness and accuracy for pelvis arthroplasty planning.
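The agreement statistics above include Pearson correlation between automated and manual measurements. A stdlib sketch of that statistic (the angle values are hypothetical, not the study's measurements):

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation between paired automated and manual measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Toy neck-shaft angles in degrees (illustrative values only)
auto   = [130.1, 128.4, 135.0, 126.7]
manual = [129.5, 128.9, 134.2, 127.1]
print(round(pearson_r(auto, manual), 3))
```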
Affiliation(s)
- Haoyu Zhai
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian 116024, People's Republic of China
- Jin Huang
- Department of Orthopaedics, Daping Hospital, Army Medical University, Chongqing, People's Republic of China
- Lei Li
- Department of Vascular Surgery, The Second Affiliated Hospital of Dalian Medical University, Dalian 116024, People's Republic of China
- Hairong Tao
- Shanghai Key Laboratory of Orthopaedic Implants, Shanghai 200011, People's Republic of China
- Department of Orthopaedic Surgery, Shanghai Ninth People's Hospital, Shanghai 200011, People's Republic of China
- Shanghai Jiao Tong University School of Medicine, Shanghai 200011, People's Republic of China
- Jinwu Wang
- Department of Orthopaedic Surgery, Shanghai Ninth People's Hospital, Shanghai 200011, People's Republic of China
- Shanghai Jiao Tong University School of Medicine, Department of Orthopaedics & Bone and Joint Research Center, Shanghai 200011, People's Republic of China
- Kang Li
- West China Biomedical Big Data Center, West China Hospital, Sichuan University, Chengdu 610041, People's Republic of China
- Moyu Shao
- Jiangsu Yunqianbai Digital Technology Co., LTD, Xuzhou 221000, People's Republic of China
- Xiaomin Cheng
- Jiangsu Yunqianbai Digital Technology Co., LTD, Xuzhou 221000, People's Republic of China
- Jing Wang
- School of Chemical Engineering and Technology, Xi'an Jiaotong University, Xi'an 710049, People's Republic of China
- Xiang Wu
- School of Medical Information & Engineering, Xuzhou Medical University, Xuzhou 221000, People's Republic of China
- Chuan Wu
- School of Medical Information & Engineering, Xuzhou Medical University, Xuzhou 221000, People's Republic of China
- Xiao Zhang
- School of Medical Information & Engineering, Xuzhou Medical University, Xuzhou 221000, People's Republic of China
- Hongkai Wang
- School of Biomedical Engineering, Faculty of Medicine, Dalian University of Technology, Dalian 116024, People's Republic of China
- Liaoning Key Laboratory of Integrated Circuit and Biomedical Electronic System, Dalian 116024, People's Republic of China
- Yan Xiong
- Department of Orthopaedics, Daping Hospital, Army Medical University, Chongqing, People's Republic of China
9.
Yoon N, Park S, Son M, Cho KH. Automation of membrane capacitive deionization process using reinforcement learning. Water Research 2022; 227:119337. [PMID: 36370591; DOI: 10.1016/j.watres.2022.119337]
Abstract
Capacitive deionization (CDI) is an alternative desalination technology that uses electrochemical ion separation. Although several attempts have been made to maximize the energy efficiency and productivity of CDI with conventional control methods, it is difficult to optimize CDI processes because of the complex correlation between the operational conditions and the composition of the feed water. To address these challenges, we applied deep reinforcement learning (DRL) to automatically control the membrane capacitive deionization (MCDI) process, one of the representative CDI processes, to achieve high energy efficiency while desalinating water. In the DRL model, a numerical model serves as the environment that provides states according to the actions. The feed water conditions, that is, the input state of the DRL, were assumed to have a random salt concentration and a constant foulant concentration. The model was constructed to minimize energy consumption and maximize desalted water volume per cycle. After training for 1,000 episodes, the DRL model achieved a 22.07% reduction in specific energy consumption (from 0.054 to 0.042 kWh m⁻³) and an 11.60% increase in desalted water volume per cycle (from 1.96×10⁻⁵ to 2.19×10⁻⁵ m³), while achieving the desired degree of desalination, compared to the first episode. This improved performance arose because the trained model selected optimized operating conditions of current, voltage, and the number and intensity of flushings. Furthermore, it was possible to train the model according to demand by modifying the reward function of the DRL model. The fundamental principle described in this study for applying the DRL model to MCDI operations can be the cornerstone of a fully automated water desalination process.
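The reported percentage changes can be sanity-checked from the quoted before/after values; note that the three-significant-figure inputs reproduce the reported 22.07% and 11.60% only approximately, presumably because the paper computed them from unrounded values. A stdlib sketch of the check:

```python
def percent_change(before, after):
    """Signed percentage change from a baseline value."""
    return 100 * (after - before) / before

# Figures quoted in the abstract:
# specific energy consumption 0.054 -> 0.042 kWh per m^3,
# desalted volume per cycle 1.96e-5 -> 2.19e-5 m^3
print(round(percent_change(0.054, 0.042), 1))      # → -22.2
print(round(percent_change(1.96e-5, 2.19e-5), 1))  # → 11.7
```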
Affiliation(s)
- Nakyung Yoon
- Department of Urban and Environmental Engineering, Ulsan National Institute of Science and Technology, UNIST-gil 50, Ulsan 44919, Republic of Korea; Center for Water Cycle Research, Korea Institute of Science and Technology, 5 Hwarang-ro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea
- Sanghun Park
- Center for Water Cycle Research, Korea Institute of Science and Technology, 5 Hwarang-ro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea
- Moon Son
- Center for Water Cycle Research, Korea Institute of Science and Technology, 5 Hwarang-ro 14-gil, Seongbuk-gu, Seoul 02792, Republic of Korea; Division of Energy and Environment Technology, KIST-School, University of Science and Technology, Seoul 02792, Republic of Korea
- Kyung Hwa Cho
- Department of Urban and Environmental Engineering, Ulsan National Institute of Science and Technology, UNIST-gil 50, Ulsan 44919, Republic of Korea