1
Wang G, Zhou M, Ning X, Tiwari P, Zhu H, Yang G, Yap CH. US2Mask: Image-to-mask generation learning via a conditional GAN for cardiac ultrasound image segmentation. Comput Biol Med 2024; 172:108282. [PMID: 38503085] [DOI: 10.1016/j.compbiomed.2024.108282]
Abstract
Cardiac ultrasound (US) image segmentation is vital for evaluating clinical indices, but it often demands a large dataset and expert annotations, resulting in high costs for deep learning algorithms. To address this, our study presents a framework utilizing artificial intelligence generation technology to produce multi-class RGB masks for cardiac US image segmentation. The proposed approach directly performs semantic segmentation of the heart's main structures in US images from various scanning modes. Additionally, we introduce a novel learning approach based on conditional generative adversarial networks (CGAN) for cardiac US image segmentation, incorporating a conditional input and paired RGB masks. Experimental results from three cardiac US image datasets with diverse scan modes demonstrate that our approach outperforms several state-of-the-art models, showcasing improvements in five commonly used segmentation metrics, with lower noise sensitivity. Source code is available at https://github.com/energy588/US2mask.
Affiliation(s)
- Gang Wang: School of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing, China; Department of Bioengineering, Imperial College London, London, UK
- Mingliang Zhou: School of Computer Science, Chongqing University, Chongqing, China
- Xin Ning: Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Prayag Tiwari: School of Information Technology, Halmstad University, Halmstad, Sweden
- Guang Yang: Department of Bioengineering, Imperial College London, London, UK; Cardiovascular Research Centre, Royal Brompton Hospital, London, UK; National Heart and Lung Institute, Imperial College London, London, UK
- Choon Hwai Yap: Department of Bioengineering, Imperial College London, London, UK
2
Xu J, Shen J, Jiang Q, Wan C, Zhou F, Zhang S, Yan Z, Yang W. A multi-modal fundus image based auxiliary location method of lesion boundary for guiding the layout of laser spot in central serous chorioretinopathy therapy. Comput Biol Med 2023; 155:106648. [PMID: 36805213] [DOI: 10.1016/j.compbiomed.2023.106648]
Abstract
The lesion boundary of central serous chorioretinopathy (CSCR) guides the ophthalmologist in accurately arranging laser spots, so that this ophthalmopathy can be treated precisely. Currently, manually locating the CSCR lesion boundary in the clinic from a single-modal fundus image is limited by imaging quality and ophthalmologist experience, and suffers from poor repeatability, weak reliability, and low efficiency. Consequently, a multi-modal fundus image-based lesion boundary auxiliary location method is developed. First, an initial location module (ILM) preliminarily locates the key boundary points of the CSCR lesion area on the optical coherence tomography (OCT) B-scan image, followed by a joint location module (JLM), built on reinforcement learning, that further improves location accuracy. Second, a scanning line detection module (SLDM) locates the lesion scanning line on the scanning laser ophthalmoscope (SLO) image to facilitate cross-modal mapping of the key boundary points. Finally, a simple yet effective lesion boundary location module (LBLM) performs the automatic cross-modal mapping of the key boundary points and the final location of the lesion boundary. Extensive experiments show that each module performs well on its sub-task; for example, the JLM raises the correction rate (CR) of the ILM to 92.11%. These results indicate that the method can provide effective lesion boundary guidance to help ophthalmologists precisely arrange laser spots, and they open a new research direction for automatically locating lesion boundaries in other fundus diseases.
Affiliation(s)
- Jianguo Xu: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, PR China
- Jianxin Shen: College of Mechanical and Electrical Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 210016, PR China
- Qin Jiang: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing 210029, PR China
- Cheng Wan: College of Electronic and Information Engineering, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, PR China
- Fen Zhou: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing 210029, PR China
- Shaochong Zhang: Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, PR China
- Zhipeng Yan: The Affiliated Eye Hospital of Nanjing Medical University, Nanjing 210029, PR China
- Weihua Yang: Shenzhen Eye Hospital, Jinan University, Shenzhen 518040, PR China
3
Magic of 5G Technology and Optimization Methods Applied to Biomedical Devices: A Survey. Appl Sci (Basel) 2022. [DOI: 10.3390/app12147096]
Abstract
Wireless networks have gained significant attention and importance in healthcare, as medical devices such as mobile devices, sensors, and remote monitoring equipment must be connected to communication networks. To provide advanced medical treatment, high-performance technologies such as the emerging fifth/sixth generation (5G/6G) networks are required for transferring data to and from medical devices, and the major components of those devices must be developed with improved optimization methods embedded in them. Designing intelligent systems is a challenging task in medical applications, as the design affects the overall behavior of the device. This paper presents a critical review of medical devices and the optimization methods they employ, to pave the way for designers to develop apparatus applicable in the healthcare industry under 5G technology and future 6G wireless networks.
4
Change detection based on unsupervised sparse representation for fundus image pair. Sci Rep 2022; 12:9820. [PMID: 35701500] [PMCID: PMC9197950] [DOI: 10.1038/s41598-022-13754-5]
Abstract
Detecting changes between longitudinal fundus images acquired at different stages, and extracting the changed regions, is an important task in ophthalmology. Illumination variations distract pixel-by-pixel comparison of the change regions. In this paper, a new unsupervised change detection method based on sparse representation classification (SRC) is proposed for fundus image pairs. First, local neighborhood patches are extracted from the reference image to build a dictionary of the local background. Then each patch of the current image is represented sparsely and its background is reconstructed from the dictionary. Finally, change regions are obtained by background subtraction. The SRC method automatically corrects illumination variations through the representation coefficients and effectively filters local contrast and global intensity. In the experiments, the AUC and mAP values of the SRC method are 0.9858 and 0.8647, respectively, for an image pair with small lesions; the AUC and mAP values of the fusion of IRHSF and SRC are 0.9892 and 0.9692, respectively, for an image pair with a large change region. Experiments show that the proposed method is more robust to illumination variations than RPCA and detects change regions more effectively than pixel-wise image differencing.
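The pipeline sketched in this abstract (a dictionary of reference patches, sparse coding of each current patch, background subtraction) can be illustrated roughly as follows. This is a minimal sketch using a greedy orthogonal matching pursuit coder, not the authors' implementation; the function names, sparsity level `k`, and normalization constant are illustrative assumptions.

```python
import numpy as np

def omp(D, y, k):
    """Greedy orthogonal matching pursuit: approximate y using k atoms of D."""
    residual, support = y.astype(float), []
    coef = np.zeros(0)
    for _ in range(k):
        # pick the atom most correlated with the current residual
        j = int(np.argmax(np.abs(D.T @ residual)))
        if j not in support:
            support.append(j)
        # re-fit coefficients on the selected atoms by least squares
        coef, *_ = np.linalg.lstsq(D[:, support], y, rcond=None)
        residual = y - D[:, support] @ coef
    return support, coef

def change_score(reference_patches, current_patch, k=2):
    """Reconstruct the current patch from a dictionary of reference patches;
    the residual norm is large where the scene has changed."""
    # columns of D are L2-normalized reference patches (the local background)
    D = np.stack([p / (np.linalg.norm(p) + 1e-12) for p in reference_patches],
                 axis=1)
    support, coef = omp(D, current_patch.astype(float), k)
    background = D[:, support] @ coef
    return float(np.linalg.norm(current_patch - background))
```

A patch whose content is already spanned by the reference dictionary reconstructs almost perfectly (score near zero), while a patch containing a new lesion leaves a large residual, which is the basis of the change map.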
5
Integrating object detection and image segmentation for detecting the tool wear area on stitched image. Sci Rep 2021; 11:19938. [PMID: 34620900] [PMCID: PMC8497480] [DOI: 10.1038/s41598-021-97610-y]
Abstract
Flank wear is the most common wear in the end milling process, but detecting it is cumbersome. To fully automate detection of the flank wear area of a spiral end milling cutter, this study proposes a novel method that combines template matching and deep learning techniques to expand curved-surface images into panorama images, making it possible to detect flank wear areas without choosing a specific position of the cutting tool image. A You Only Look Once v4 (YOLOv4) model was employed to automatically detect the range of the cutting tips. Then, popular segmentation models, namely U-Net, SegNet, and an autoencoder, were used to extract the tool flank wear areas. Among these models, U-Net obtained the best maximum Dice coefficient score of 0.93. Moreover, the wear areas predicted by the U-Net model are presented in a trend figure, from which the timing of tool changes can be determined based on the tool wear curve. Overall, the experiments show that the proposed methods can effectively extract the tool wear regions of a spiral cutting tool. With the developed system, users can obtain detailed information about the cutting tool and change it in advance, before it is severely worn.
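The Dice coefficient used above to rank the segmentation models can be computed as in this minimal sketch, assuming binary NumPy masks; the function name and the smoothing constant `eps` are illustrative choices, not taken from the paper.

```python
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A intersect B| / (|A| + |B|) for two binary masks."""
    pred, target = pred.astype(bool), target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    # eps avoids division by zero when both masks are empty
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)
```

Identical masks score 1.0, disjoint masks score essentially 0, and partial overlap falls in between, which is why a maximum score of 0.93 indicates close agreement with the annotated wear area.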
6
Gong C, Brunton SL, Schowengerdt BT, Seibel EJ. Intensity-Mosaic: automatic panorama mosaicking of disordered images with insufficient features. J Med Imaging (Bellingham) 2021; 8:054002. [PMID: 34604440] [PMCID: PMC8479456] [DOI: 10.1117/1.jmi.8.5.054002]
Abstract
Purpose: Handling low-quality, few-feature medical images is a challenging task in automatic panorama mosaicking. Current mosaicking methods for disordered input images are based on feature-point matching, whereas in this case intensity-based registration achieves better performance. We propose a mosaicking method that enables mutual information (MI) registration for randomly ordered input images with insufficient features. Approach: Dimensionality reduction maps the disordered input images into a low-dimensional space, in which the global correspondence between images can be recognized efficiently. For adjacent image pairs, we optimize the MI metric for registration, and the panorama is created after image blending. We demonstrate the method on relatively low-cost handheld devices that acquire images of the retina in vivo, kidney ex vivo, and a bladder phantom, all of which contain sparse features. Results: Our method is compared with three baselines: AutoStitch, "dimension reduction + SIFT," and "MI-Only." Compared with the two feature-point-based methods, ours achieves 1.25 times (ex vivo microscopy dataset) to two times (in vivo retina dataset) the mosaic completion rate, and MI-Only has the lowest completion rate across the three datasets. Comparing the resulting complete mosaics, our target registration errors are reduced by factors of 2.2 and 3.8 on the microscopy and bladder phantom datasets. Conclusions: Dimensionality reduction increases the success rate of detecting adjacent images, which makes MI-based registration feasible and narrows the search range of MI optimization. To the best of our knowledge, this is the first mosaicking method that automatically stitches disordered images with intensity-based alignment, providing more robust and accurate results when there are insufficient features for classic mosaicking methods.
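As a rough illustration of the intensity-based metric optimized above, the mutual information between two images can be estimated from their joint intensity histogram. This sketch is a generic MI estimator, not the authors' registration code; the bin count is an arbitrary choice for the example.

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate MI between two equally sized images from a joint histogram.
    MI peaks when the two images are well aligned."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint intensity distribution
    px = pxy.sum(axis=1, keepdims=True)       # marginal of image A
    py = pxy.sum(axis=0, keepdims=True)       # marginal of image B
    nonzero = pxy > 0                         # avoid log(0)
    return float(np.sum(pxy[nonzero] * np.log(pxy[nonzero] / (px @ py)[nonzero])))
```

During registration, one would evaluate this metric over candidate transforms of one image and keep the transform that maximizes it; an image is always more informative about itself than about a scrambled copy.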
Affiliation(s)
- Chen Gong: Department of Mechanical Engineering, University of Washington, Seattle, Washington, United States
- Steven L. Brunton: Department of Mechanical Engineering, University of Washington, Seattle, Washington, United States
- Eric J. Seibel: Department of Mechanical Engineering, University of Washington, Seattle, Washington, United States
7
Pujari A, Saluja G, Agarwal D, Sinha A, P R A, Kumar A, Sharma N. Clinical Role of Smartphone Fundus Imaging in Diabetic Retinopathy and Other Neuro-retinal Diseases. Curr Eye Res 2021; 46:1605-1613. [PMID: 34325587] [DOI: 10.1080/02713683.2021.1958347]
Abstract
Purpose: Many electronic gadgets, including smartphones and smartwatches, have the potential to become invaluable health care devices. The smartphone's role has been highlighted on many occasions in different areas, and smartphones continue to play an immense role in clinical documentation, clinical consultation, and the digitalization of ocular care. In the last decade, many treatable conditions, including diabetic retinopathy, glaucoma, and pediatric retinal diseases, have been imaged using smartphones. Methods: To comprehend this cumulative knowledge, a detailed medical literature search was conducted on PubMed/Medline, Scopus, and Web of Science up to February 2021. Results: The included literature revealed definitive progress in posterior segment imaging. From simple torch-light examination alongside a smartphone to present-day compact handheld devices with artificial intelligence-integrated software, smartphone imaging has changed the perspectives of ocular imaging in ophthalmology. Consistently reproducible results, constantly improving imaging techniques, and, most importantly, affordable costs have established smartphones as effective screening devices in ophthalmology. Moreover, the obtainable field of view, ocular safety, and utility in non-ophthalmic specialties are also growing. Conclusions: Smartphone imaging can now be considered a quick, cost-effective, and digitalized tool for posterior segment screening; however, its definite role in routine ophthalmic clinics is yet to be established.
Affiliation(s)
- Amar Pujari: Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Gunjan Saluja: Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Divya Agarwal: Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Ayushi Sinha: Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Ananya P R: Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Atul Kumar: Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India
- Namrata Sharma: Dr. Rajendra Prasad Centre for Ophthalmic Sciences, All India Institute of Medical Sciences, New Delhi, India