1. Tang C, Li F, He L, Hu Q, Qin Y, Yan X, Ai T. Comparison of continuous-time random walk and fractional order calculus models in characterizing breast lesions using histogram analysis. Magn Reson Imaging 2024; 108:47-58. PMID: 38307375. DOI: 10.1016/j.mri.2024.01.012.
Abstract
OBJECTIVE To compare the diagnostic performance of different mathematical models for diffusion-weighted imaging (DWI) and to explore whether parameters reflecting spatial and temporal heterogeneity can achieve better diagnostic accuracy than the diffusion coefficient parameter in distinguishing benign from malignant breast lesions, using whole-tumor histogram analysis. METHODS This retrospective study was approved by the institutional ethics committee and included 104 malignant and 42 benign cases. All patients underwent breast magnetic resonance imaging (MRI) on a 3.0 T MR scanner using simultaneous multi-slice (SMS) readout-segmented echo-planar imaging (rs-EPI). Histogram metrics of the mono-exponential apparent diffusion coefficient (ADC) and of continuous-time random walk (CTRW)- and fractional order calculus (FROC)-derived parameters were compared between benign and malignant breast lesions, and the diagnostic performance of each diffusion parameter was evaluated. Statistical analysis was performed using the Mann-Whitney U test and receiver operating characteristic (ROC) curve analysis. RESULTS DFROC-median exhibited the highest AUC for distinguishing benign from malignant breast lesions (AUC = 0.965). The temporal heterogeneity parameter αCTRW-median yielded a statistically higher AUC than the spatial heterogeneity parameter βCTRW-median (AUC = 0.850 vs. 0.741; p = 0.047). Finally, the combination of median values of the CTRW parameters displayed a slightly higher AUC than that of the FROC parameters, though without significant difference (AUC = 0.971 vs. 0.965; p = 0.172). CONCLUSIONS The diffusion coefficient parameter exhibited superior diagnostic performance in distinguishing breast lesions compared to the temporal and spatial heterogeneity parameters.
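The ROC analysis above rests on the close relationship between the Mann-Whitney U statistic and the area under the ROC curve: for a single scalar marker such as a histogram median, the AUC equals the probability that a randomly chosen malignant lesion scores higher than a randomly chosen benign one. A minimal sketch of that computation (illustrative only; the values and function names are hypothetical, not the paper's pipeline):

```python
def median(values):
    """Whole-lesion histogram median of a diffusion parameter."""
    s = sorted(values)
    n = len(s)
    return s[n // 2] if n % 2 else 0.5 * (s[n // 2 - 1] + s[n // 2])

def auc_from_groups(pos_scores, neg_scores):
    """AUC as the normalized Mann-Whitney U statistic:
    P(pos > neg) + 0.5 * P(pos == neg) over all pos/neg pairs."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Hypothetical per-lesion voxel values of some diffusion parameter
malignant = [median(v) for v in ([0.9, 1.1, 1.0], [1.2, 1.3, 1.1])]
benign = [median(v) for v in ([0.5, 0.6, 0.7], [0.4, 0.6, 0.5])]
print(auc_from_groups(malignant, benign))  # 1.0: perfect separation
```

This pairwise formulation is exact but quadratic in the number of lesions; rank-based implementations give the same value more efficiently.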
Affiliation(s)
- Caili Tang
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Feng Li
- Department of Radiology, Xiangyang Central Hospital, Affiliated Hospital of Hubei University of Arts and Science, Xiangyang, Hubei 441021, China
- Litong He
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Qilan Hu
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Yanjin Qin
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
- Xu Yan
- MR Research Collaboration Team, Siemens Healthineers Ltd, 278 Zhouzhu Road, Nanhui, Shanghai 201318, China
- Tao Ai
- Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan 430030, China
2. Thiermann R, Sandler M, Ahir G, Sauls JT, Schroeder J, Brown S, Le Treut G, Si F, Li D, Wang JD, Jun S. Tools and methods for high-throughput single-cell imaging with the mother machine. eLife 2024; 12:RP88463. PMID: 38634855. PMCID: PMC11026091. DOI: 10.7554/elife.88463.
Abstract
Despite much progress, image processing remains a significant bottleneck for high-throughput analysis of microscopy data. One popular platform for single-cell time-lapse imaging is the mother machine, which enables long-term tracking of microbial cells under precisely controlled growth conditions. While several mother machine image analysis pipelines have been developed in the past several years, adoption by a non-expert audience remains a challenge. To fill this gap, we implemented our own software, MM3, as a plugin for the multidimensional image viewer napari. napari-MM3 is a complete and modular image analysis pipeline for mother machine data, which takes advantage of the high-level interactivity of napari. Here, we give an overview of napari-MM3 and test it against several well-designed and widely used image analysis pipelines, including BACMMAN and DeLTA. Researchers often analyze mother machine data with custom scripts using varied image analysis methods, but a quantitative comparison of the output of different pipelines has been lacking. To this end, we show that key single-cell physiological parameter correlations and distributions are robust to the choice of analysis method. However, we also find that small changes in thresholding parameters can systematically alter parameters extracted from single-cell imaging experiments. Moreover, we explicitly show that in deep learning-based segmentation, 'what you put is what you get' (WYPIWYG) - that is, pixel-level variation in training data for cell segmentation can propagate to the model output and bias spatial and temporal measurements. Finally, while the primary purpose of this work is to introduce the image analysis software that we have developed over the last decade in our lab, we also provide information for those who want to implement mother machine-based high-throughput imaging and analysis methods in their research.
Affiliation(s)
- Ryan Thiermann
- Department of Physics, University of California, San Diego, La Jolla, United States
- Michael Sandler
- Department of Physics, University of California, San Diego, La Jolla, United States
- Gursharan Ahir
- Department of Physics, University of California, San Diego, La Jolla, United States
- John T Sauls
- Department of Physics, University of California, San Diego, La Jolla, United States
- Jeremy Schroeder
- Department of Biological Chemistry, University of Michigan Medical School, Ann Arbor, United States
- Steven Brown
- Department of Physics, University of California, San Diego, La Jolla, United States
- Fangwei Si
- Department of Physics, Carnegie Mellon University, Pittsburgh, United States
- Dongyang Li
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, United States
- Jue D Wang
- Department of Bacteriology, University of Wisconsin–Madison, Madison, United States
- Suckjoon Jun
- Department of Physics, University of California, San Diego, La Jolla, United States
3. Mikhailov I, Chauveau B, Bourdel N, Bartoli A. A deep learning-based interactive medical image segmentation framework with sequential memory. Comput Methods Programs Biomed 2024; 245:108038. PMID: 38271792. DOI: 10.1016/j.cmpb.2024.108038.
Abstract
BACKGROUND AND OBJECTIVE Image segmentation is an essential component of medical image analysis. The case of 3D images such as MRI is particularly challenging and time-consuming, so interactive or semi-automatic methods are highly desirable. However, existing methods do not exploit the typical sequentiality of real user interactions, because the interaction memory used in these systems discards ordering. In contrast, we argue that the order of the user corrections should be used for training and leads to performance improvements. METHODS We contribute to solving this problem by proposing a general multi-class deep learning-based interactive framework for image segmentation, which embeds a base network in a user interaction loop with a user feedback memory. We propose to model the memory explicitly as a sequence of consecutive system states, from which features can be learned, so that the network learns from the segmentation refinement process as a whole. Training is a major difficulty owing to the network's input being dependent on the previous output. We adapt the network to this loop by introducing a virtual user in the training process, modelled by dynamically simulating the iterative user feedback. RESULTS We evaluated our framework against existing methods on the complex task of multi-class semantic instance segmentation of female pelvis MRI with 5 classes, including up to 27 tumour instances, using a segmentation dataset collected in our hospital, and on liver and pancreas CT segmentation using public datasets. We conducted a user evaluation involving both senior and junior medical personnel in matching and adjacent areas of expertise, and observed an annotation time reduction to 5'56" for our framework, against 25' on average for classical tools. We systematically evaluated the influence of the number of clicks on segmentation accuracy: after a single interaction round, our framework outperforms existing automatic systems with a comparable setup. We provide an ablation study and show that our framework outperforms existing interactive systems. CONCLUSIONS Our framework largely outperforms existing systems in accuracy, with the largest impact on the smallest, most difficult classes, and drastically reduces the average user segmentation time, with fast inference at 47.2±6.2 ms per image.
Affiliation(s)
- Ivan Mikhailov
- EnCoV, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, 63000, France; SurgAR, 22 All. Alan Turing, Clermont-Ferrand, 63000, France
- Benoit Chauveau
- SurgAR, 22 All. Alan Turing, Clermont-Ferrand, 63000, France; CHU de Clermont-Ferrand, Clermont-Ferrand, 63000, France
- Nicolas Bourdel
- EnCoV, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, 63000, France; SurgAR, 22 All. Alan Turing, Clermont-Ferrand, 63000, France; CHU de Clermont-Ferrand, Clermont-Ferrand, 63000, France
- Adrien Bartoli
- EnCoV, Institut Pascal, Université Clermont Auvergne, Clermont-Ferrand, 63000, France; SurgAR, 22 All. Alan Turing, Clermont-Ferrand, 63000, France; CHU de Clermont-Ferrand, Clermont-Ferrand, 63000, France
4. Zhao X, Pan H, Bai W, Li B, Wang H, Zhang M, Li Y, Zhang D, Geng H, Chen M. Interactive segmentation of medical images using deep learning. Phys Med Biol 2024; 69:045006. PMID: 38198729. DOI: 10.1088/1361-6560/ad1cf8.
Abstract
Medical image segmentation algorithms based on deep learning have achieved good results in recent years, but they require a large amount of labeled data. When performing pixel-level labeling on medical images, annotating a target requires marking tens or even hundreds of points along its edge, which incurs substantial time and labor costs. To reduce the labeling cost, we utilize a click-based interactive segmentation method to generate high-quality segmentation labels. However, current interactive segmentation algorithms fuse only the user's click information and the image features as the input of the backbone network (so-called early fusion), and the interaction information is still very sparse at that stage. Furthermore, these algorithms do not take the boundary problem into account, resulting in poor model performance. We therefore propose a combined early- and late-fusion strategy to prevent the interaction information from being diluted prematurely and to make better use of it. At the same time, we propose a decoupled head structure that extracts image boundary information and combines it with a boundary loss function to establish a boundary constraint term, so that the network pays more attention to boundary information, further improving performance. Finally, we conduct experiments on three medical datasets (Chaos, VerSe and Uterine Myoma MRI) to verify the effectiveness of our network. The experimental results show that our network improves greatly over the baseline, with NoC@80 (the number of interactive clicks needed to exceed an 80% IoU threshold) improving by 0.1, 0.1, and 0.2. In particular, we achieve a NoC@80 score of 1.69 on Chaos. According to our statistics, manual annotation takes 25 min to label a case (Uterine Myoma MRI), whereas annotating a medical image with our method can be done in only 2 or 3 clicks, saving more than 50% of the cost.
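For context, NoC@80 counts how many corrective clicks are needed before the prediction's IoU with the ground truth first reaches 80%, capped at a fixed click budget. A minimal sketch of the metric (names and the set-based mask representation are illustrative assumptions, not the authors' implementation):

```python
def iou(pred, gt):
    """Intersection over union of two pixel-coordinate sets."""
    union = pred | gt
    return len(pred & gt) / len(union) if union else 1.0

def noc_at(threshold, iou_per_click, max_clicks=20):
    """Number of clicks until the IoU first reaches `threshold`;
    returns the click budget if it is never reached."""
    for k, score in enumerate(iou_per_click, start=1):
        if score >= threshold:
            return k
    return max_clicks

# Hypothetical IoU trajectory over successive corrective clicks
print(noc_at(0.8, [0.55, 0.72, 0.85, 0.90]))  # 3
```

Lower NoC@80 therefore means fewer interactions to reach acceptable quality, which is what the reported 0.1-0.2 improvements refer to.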
Affiliation(s)
- Xiaoran Zhao
- College of Software, Beihang University, Beijing 100191, People's Republic of China
- Haixia Pan
- College of Software, Beihang University, Beijing 100191, People's Republic of China
- Wenpei Bai
- Department of Obstetrics and Gynecology, Beijing Shijitan Hospital, Capital Medical University, Beijing 100038, People's Republic of China
- Bin Li
- Department of MRI, Beijing Shijitan Hospital, Capital Medical University/Peking University, Ninth Clinical Medical College, Beijing 100038, People's Republic of China
- Hongqiang Wang
- College of Software, Beihang University, Beijing 100191, People's Republic of China
- Meng Zhang
- College of Software, Beihang University, Beijing 100191, People's Republic of China
- Yanan Li
- College of Software, Beihang University, Beijing 100191, People's Republic of China
- Dongdong Zhang
- College of Software, Beihang University, Beijing 100191, People's Republic of China
- Haotian Geng
- College of Software, Beihang University, Beijing 100191, People's Republic of China
- Minghuang Chen
- Department of Obstetrics and Gynecology, Beijing Shijitan Hospital, Capital Medical University, Beijing 100038, People's Republic of China
5. Thiermann R, Sandler M, Ahir G, Sauls JT, Schroeder JW, Brown SD, Le Treut G, Si F, Li D, Wang JD, Jun S. Tools and methods for high-throughput single-cell imaging with the mother machine. bioRxiv 2024:2023.03.27.534286. PMID: 37066401. PMCID: PMC10103947. DOI: 10.1101/2023.03.27.534286.
Abstract
Despite much progress, image processing remains a significant bottleneck for high-throughput analysis of microscopy data. One popular platform for single-cell time-lapse imaging is the mother machine, which enables long-term tracking of microbial cells under precisely controlled growth conditions. While several mother machine image analysis pipelines have been developed in the past several years, adoption by a non-expert audience remains a challenge. To fill this gap, we implemented our own software, MM3, as a plugin for the multidimensional image viewer napari. napari-MM3 is a complete and modular image analysis pipeline for mother machine data, which takes advantage of the high-level interactivity of napari. Here, we give an overview of napari-MM3 and test it against several well-designed and widely used image analysis pipelines, including BACMMAN and DeLTA. Researchers often analyze mother machine data with custom scripts using varied image analysis methods, but a quantitative comparison of the output of different pipelines has been lacking. To this end, we show that key single-cell physiological parameter correlations and distributions are robust to the choice of analysis method. However, we also find that small changes in thresholding parameters can systematically alter parameters extracted from single-cell imaging experiments. Moreover, we explicitly show that in deep learning-based segmentation, "what you put is what you get" (WYPIWYG) - i.e., pixel-level variation in training data for cell segmentation can propagate to the model output and bias spatial and temporal measurements. Finally, while the primary purpose of this work is to introduce the image analysis software that we have developed over the last decade in our lab, we also provide information for those who want to implement mother-machine-based high-throughput imaging and analysis methods in their research.
Affiliation(s)
- Ryan Thiermann
- Department of Physics, University of California San Diego, La Jolla, CA
- Michael Sandler
- Department of Physics, University of California San Diego, La Jolla, CA
- Gursharan Ahir
- Department of Physics, University of California San Diego, La Jolla, CA
- John T. Sauls
- Department of Physics, University of California San Diego, La Jolla, CA
- Jeremy W. Schroeder
- Department of Biological Chemistry, University of Michigan Medical School, Ann Arbor, MI
- Steven D. Brown
- Department of Physics, University of California San Diego, La Jolla, CA
- Fangwei Si
- Department of Physics, Carnegie Mellon University, Pittsburgh, PA
- Dongyang Li
- Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA
- Jue D. Wang
- Department of Bacteriology, University of Wisconsin-Madison, Madison, WI
- Suckjoon Jun
- Department of Physics, University of California San Diego, La Jolla, CA
6. Liu L, Chen D, Shu M, Cohen LD. Grouping Boundary Proposals for Fast Interactive Image Segmentation. IEEE Trans Image Process 2024; 33:793-808. PMID: 38215327. DOI: 10.1109/tip.2024.3349867.
Abstract
Geodesic models are known as an efficient tool for solving various image segmentation problems. Most existing approaches exploit only local pointwise image features to track geodesic paths for delineating the objective boundaries. However, such a segmentation strategy cannot take into account the connectivity of image edge features, increasing the risk of the shortcut problem, especially in complicated scenarios. In this work, we introduce a new image segmentation model based on the minimal geodesic framework in conjunction with an adaptive cut-based circular optimal path computation scheme and a graph-based boundary-proposal grouping scheme. Specifically, the adaptive cut can disconnect the image domain such that the target contours are forced to pass through this cut only once. The boundary proposals comprise precomputed image edge segments, providing connectivity information for our segmentation model. These boundary proposals are then incorporated into the proposed image segmentation model, such that the target segmentation contours are made up of a set of selected boundary proposals and the corresponding geodesic paths linking them. Experimental results show that the proposed model indeed outperforms state-of-the-art minimal paths-based image segmentation approaches.
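The minimal geodesic idea can be illustrated with a plain shortest-path computation: given a local cost map that is low on image edges, the geodesic linking two points is the path of least accumulated cost. A toy sketch using Dijkstra on a 4-connected grid (illustrative only; the paper builds continuous minimal-path solvers, adaptive cuts, and proposal grouping on top of this basic ingredient):

```python
import heapq

def geodesic_path(cost, start, end):
    """Dijkstra on a 4-connected grid; cost[y][x] is the price of
    stepping onto pixel (y, x), so the minimal path hugs low-cost
    (edge-like) pixels."""
    h, w = len(cost), len(cost[0])
    dist = {start: 0.0}
    prev = {}
    pq = [(0.0, start)]
    while pq:
        d, (y, x) = heapq.heappop(pq)
        if (y, x) == end:
            break
        if d > dist.get((y, x), float("inf")):
            continue  # stale queue entry
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w:
                nd = d + cost[ny][nx]
                if nd < dist.get((ny, nx), float("inf")):
                    dist[(ny, nx)] = nd
                    prev[(ny, nx)] = (y, x)
                    heapq.heappush(pq, (nd, (ny, nx)))
    path = [end]
    while path[-1] != start:
        path.append(prev[path[-1]])
    return path[::-1]

# Cheap top row and right column: the geodesic detours around the
# expensive interior instead of cutting straight across.
cost = [[1, 1, 1],
        [9, 9, 1],
        [9, 9, 1]]
print(geodesic_path(cost, (0, 0), (2, 2)))
```

The shortcut problem the abstract mentions arises when such a path cuts across weak-contrast regions; the paper's cut and boundary-proposal schemes constrain the path to follow genuine edge segments instead.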
7. Rußwurm M, Venkatesa SJ, Tuia D. Large-scale detection of marine debris in coastal areas with Sentinel-2. iScience 2023; 26:108402. PMID: 38077146. PMCID: PMC10709011. DOI: 10.1016/j.isci.2023.108402.
Abstract
Detecting and quantifying marine pollution and macroplastics is an increasingly pressing ecological issue that directly impacts ecology and human health. Here, remote sensing can provide reliable estimates of plastic pollution by regularly monitoring and detecting marine debris in coastal areas. In this work, we present a detector for marine debris built on a deep segmentation model that outputs a probability for marine debris at the pixel level. We train this detector with a combination of annotated datasets of marine debris and evaluate it on specifically selected test sites where it is highly probable that plastic pollution is present in the detected marine debris. We integrate data-centric artificial intelligence principles by devising a training strategy with extensive sampling of negative examples and an automated label refinement of coarse hand labels. This yields a deep learning model that achieves higher accuracies on benchmark comparisons than existing detection models trained on previous datasets.
Affiliation(s)
- Marc Rußwurm
- Wageningen University, Geo-information Science and Remote Sensing Laboratory, Droevendaalsesteeg 3, Wageningen, Gelderland 6708 PB, the Netherlands
- École Polytechnique Fédérale de Lausanne (EPFL), Environmental Computational Science and Earth Observation (ECEO) Laboratory, Route des Ronquos 86, Sion, Valais 1950, Switzerland
- Sushen Jilla Venkatesa
- École Polytechnique Fédérale de Lausanne (EPFL), Environmental Computational Science and Earth Observation (ECEO) Laboratory, Route des Ronquos 86, Sion, Valais 1950, Switzerland
- Devis Tuia
- École Polytechnique Fédérale de Lausanne (EPFL), Environmental Computational Science and Earth Observation (ECEO) Laboratory, Route des Ronquos 86, Sion, Valais 1950, Switzerland
8. Lin L, Peng L, He H, Cheng P, Wu J, Wong KKY, Tang X. YoloCurvSeg: You only label one noisy skeleton for vessel-style curvilinear structure segmentation. Med Image Anal 2023; 90:102937. PMID: 37672901. DOI: 10.1016/j.media.2023.102937.
Abstract
Weakly-supervised learning (WSL) has been proposed to alleviate the conflict between data annotation cost and model performance by employing sparsely-grained (i.e., point-, box-, or scribble-wise) supervision, and has shown promising performance, particularly in image segmentation. However, it remains a very challenging task due to the limited supervision, especially when only a small number of labeled samples are available. Additionally, almost all existing WSL segmentation methods are designed for star-convex structures, which are very different from curvilinear structures such as vessels and nerves. In this paper, we propose a novel sparsely annotated segmentation framework for curvilinear structures, named YoloCurvSeg. An essential component of YoloCurvSeg is image synthesis. Specifically, a background generator delivers image backgrounds that closely match the real distributions by inpainting dilated skeletons. The extracted backgrounds are then combined with randomly emulated curves generated by a Space Colonization Algorithm-based foreground generator, via a multilayer patch-wise contrastive learning synthesizer. In this way, a synthetic dataset with both images and curve segmentation labels is obtained, at the cost of only one or a few noisy skeleton annotations. Finally, a segmenter is trained with the generated dataset and, optionally, an unlabeled dataset. The proposed YoloCurvSeg is evaluated on four publicly available datasets (OCTA500, CORN, DRIVE and CHASEDB1), and the results show that it outperforms state-of-the-art WSL segmentation methods by large margins. With only one noisy skeleton annotation (0.14%, 0.03%, 1.40%, and 0.65% of the full annotation, respectively), YoloCurvSeg achieves more than 97% of the fully-supervised performance on each dataset. Code and datasets will be released at https://github.com/llmir/YoloCurvSeg.
Affiliation(s)
- Li Lin
- Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China; Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China; Jiaxing Research Institute, Southern University of Science and Technology, Jiaxing, China
- Linkai Peng
- Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China
- Huaqing He
- Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China; Jiaxing Research Institute, Southern University of Science and Technology, Jiaxing, China
- Pujin Cheng
- Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China; Jiaxing Research Institute, Southern University of Science and Technology, Jiaxing, China
- Jiewei Wu
- Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China
- Kenneth K Y Wong
- Department of Electrical and Electronic Engineering, the University of Hong Kong, Hong Kong, China
- Xiaoying Tang
- Department of Electronic and Electrical Engineering, Southern University of Science and Technology, Shenzhen, China; Jiaxing Research Institute, Southern University of Science and Technology, Jiaxing, China
9. Ju M, Yang J, Lee J, Lee M, Ji J, Kim Y. Pixel Diffuser: Practical Interactive Medical Image Segmentation without Ground Truth. Bioengineering (Basel) 2023; 10:1280. PMID: 38002404. PMCID: PMC10669538. DOI: 10.3390/bioengineering10111280.
Abstract
Medical image segmentation is essential for doctors to diagnose diseases and manage patient status. While deep learning has demonstrated potential in addressing segmentation challenges in the medical domain, obtaining a substantial amount of data with accurate ground truth for training high-performance segmentation models is time-consuming and demands careful attention. While interactive segmentation methods can reduce the cost of acquiring segmentation labels for training supervised models, they often still necessitate considerable amounts of ground truth data, and achieving precise segmentation during the refinement phase requires additional interactions. In this work, we propose an interactive medical segmentation method called PixelDiffuser that requires no medical segmentation ground truth and only a few clicks to obtain high-quality segmentation, using a VGG19-based autoencoder. As the name suggests, PixelDiffuser starts with a small area at the initial click and gradually detects the target segmentation region. Specifically, we segment the image by creating a distortion in the image and repeating it while encoding and decoding the image through the autoencoder. Consequently, PixelDiffuser lets the user click a part of the organ they wish to segment, allowing the segmented region to expand to nearby areas with pixel values similar to the chosen organ. To evaluate the performance of PixelDiffuser, we employed the Dice score, as a function of the number of clicks, comparing the ground truth with the inferred segment. For validation, we leveraged the BTCV dataset, containing CT images of various organs, and the CHAOS dataset, which encompasses both CT and MRI images of the liver, kidneys and spleen. Our model is an efficient and effective tool for medical image segmentation, achieving competitive performance compared to previous work in fewer than five clicks and with very low memory consumption, without additional training.
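The evaluation described above tracks the Dice score between the ground-truth mask and the inferred segment after each click. A minimal sketch of the score itself (masks modelled as sets of pixel coordinates; names are illustrative, not the authors' code):

```python
def dice(pred, gt):
    """Dice similarity coefficient of two pixel-coordinate sets:
    twice the overlap divided by the total mask size."""
    if not pred and not gt:
        return 1.0
    return 2 * len(pred & gt) / (len(pred) + len(gt))

pred = {(0, 0), (0, 1), (1, 1)}
gt = {(0, 1), (1, 1), (1, 0)}
print(dice(pred, gt))  # 2*2/6 ≈ 0.667
```

Plotting this value against the click count gives the click-efficiency curves such papers report.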
Affiliation(s)
- Mingeon Ju
- Major in Bio Artificial Intelligence, Department of Applied Artificial Intelligence, Hanyang University at Ansan, Ansan 15588, Republic of Korea
- Jaewoo Yang
- Major in Bio Artificial Intelligence, Department of Applied Artificial Intelligence, Hanyang University at Ansan, Ansan 15588, Republic of Korea
- Jaeyoung Lee
- Major in Bio Artificial Intelligence, Department of Applied Artificial Intelligence, Hanyang University at Ansan, Ansan 15588, Republic of Korea
- Moonhyun Lee
- Major in Bio Artificial Intelligence, Department of Computer Science & Engineering, Hanyang University at Ansan, Ansan 15588, Republic of Korea
- Junyung Ji
- Major in Bio Artificial Intelligence, Department of Applied Artificial Intelligence, Hanyang University at Ansan, Ansan 15588, Republic of Korea
- Younghoon Kim
- Major in Bio Artificial Intelligence, Department of Applied Artificial Intelligence, Hanyang University at Ansan, Ansan 15588, Republic of Korea
10. Gong X, Wang L, Miao L, Chen N, Li J. PIMedSeg: Progressive interactive medical image segmentation. Comput Methods Programs Biomed 2023; 241:107776. PMID: 37651820. DOI: 10.1016/j.cmpb.2023.107776.
Abstract
BACKGROUND AND OBJECTIVE Accurate object segmentation in medical images is a crucial step in medical diagnosis and other applications. Despite years of research on automatic segmentation approaches, achieving clinically acceptable quality remains challenging. Interactive segmentation is seen as a promising alternative; we therefore propose a new interactive segmentation framework based on a progressive workflow to reduce user effort and provide high-quality results. METHODS First, our approach encodes user-provided region clicks and edge scribbles using our proposed disk and curve transform. This is followed by refinement with a transformer-based module that extracts effective features from the outputs of the convolutional neural network (CNN) and the extra input maps. RESULTS Extensive experiments conducted on various medical images, including ultrasound (US), computerized tomography (CT), and magnetic resonance imaging (MRI), have demonstrated the effectiveness of our new approach over state-of-the-art alternatives. CONCLUSION The proposed framework achieves high-quality segmentation with minimal interactions, without the substantial cost of manual segmentation.
Affiliation(s)
- Xun Gong
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, PR China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, Chengdu 611756, PR China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, Southwest Jiaotong University, Chengdu 611756, PR China
- Li Wang
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, PR China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, Chengdu 611756, PR China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, Southwest Jiaotong University, Chengdu 611756, PR China
- Longlong Miao
- Tangshan Research Institute, Southwest Jiaotong University, Tangshan 063002, PR China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, Chengdu 611756, PR China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, Southwest Jiaotong University, Chengdu 611756, PR China
- Nuo Chen
- School of Computing and Artificial Intelligence, Southwest Jiaotong University, Chengdu 611756, PR China; Engineering Research Center of Sustainable Urban Intelligent Transportation, Ministry of Education, Chengdu 611756, PR China; Manufacturing Industry Chains Collaboration and Information Support Technology Key Laboratory of Sichuan Province, Southwest Jiaotong University, Chengdu 611756, PR China
- Jiao Li
- Department of Gastroenterology, The Third People's Hospital of Chengdu, Affiliated Hospital of Southwest Jiaotong University, Chengdu 610031, PR China
11
Li J, Fan J, Wang Y, Yang Y, Zhang Z. Coarse Mask Guided Interactive Object Segmentation. IEEE Trans Image Process 2023; 32:5808-5822. [PMID: 37824315] [DOI: 10.1109/tip.2023.3322564]
Abstract
Interactive object segmentation aims to produce object masks from user interactions such as clicks, bounding boxes, and scribbles. The click is the most popular interactive cue owing to its efficiency, and related deep learning methods have attracted considerable interest in recent years. Most works encode click points as Gaussian maps and concatenate them with images as the model's input. However, the spatial and semantic information in the Gaussian maps is degraded by multiple convolution layers and is not fully exploited by the top layers for mask prediction. To pass click information to the top layers exactly and efficiently, we propose a coarse mask guided model (CMG), which predicts coarse masks with a coarse module to guide object mask prediction. Specifically, the coarse module encodes user clicks as query features and enriches their semantic information with backbone features through transformer layers; coarse masks are generated from the enriched query features and fed into CMG's decoder. Benefiting from the efficiency of transformers, CMG's coarse module and decoder are lightweight and computationally efficient, making the interaction process smoother. Experiments on several segmentation benchmarks demonstrate the effectiveness of our method, which achieves new state-of-the-art results compared with previous works.
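The Gaussian click-map encoding that most of these works use can be sketched in a few lines. This is an illustrative reconstruction, not code from the paper; the function name and the sigma value are assumptions:

```python
import numpy as np

def gaussian_click_map(height, width, clicks, sigma=10.0):
    """Encode user clicks as a single-channel Gaussian map.

    Each click (row, col) contributes a 2D Gaussian bump centered on it;
    the resulting map is concatenated with the image channels as extra
    network input.
    """
    ys, xs = np.mgrid[0:height, 0:width]
    click_map = np.zeros((height, width), dtype=np.float32)
    for r, c in clicks:
        bump = np.exp(-((ys - r) ** 2 + (xs - c) ** 2) / (2.0 * sigma ** 2))
        click_map = np.maximum(click_map, bump)  # keep the strongest response
    return click_map

# One positive click near the object center; the map peaks at 1.0 there
m = gaussian_click_map(64, 64, [(32, 32)])
```

Positive and negative clicks are typically encoded in separate channels of such maps before concatenation with the image.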
12
Zhang L, Han G, Qiao Y, Xu L, Chen L, Tang J. Interactive Dairy Goat Image Segmentation for Precision Livestock Farming. Animals (Basel) 2023; 13:3250. [PMID: 37893974] [PMCID: PMC10603657] [DOI: 10.3390/ani13203250]
Abstract
Semantic segmentation and instance segmentation based on deep learning play a significant role in intelligent dairy goat farming. However, these algorithms require a large number of pixel-level dairy goat image annotations for model training. At present, users mainly rely on Labelme for pixel-level image annotation, which makes obtaining a high-quality annotation quite inefficient and time-consuming. To reduce the annotation workload for dairy goat images, we propose a novel interactive segmentation model called UA-MHFF-DeepLabv3+, which employs layer-by-layer multi-head feature fusion (MHFF) and upsampling attention (UA) to improve the segmentation accuracy of DeepLabv3+ on object boundaries and small objects. Experimental results show that our proposed model achieved state-of-the-art segmentation accuracy on the validation set of DGImgs compared with four previous state-of-the-art interactive segmentation models, obtaining 1.87 and 4.11 on mNoC@85 and mNoC@90, significantly lower than the previous best results of 3 and 5. Furthermore, to promote the adoption of our algorithm, we designed and developed a dairy goat image-annotation system named DGAnnotation for pixel-level annotation of dairy goat images. In testing, annotating a dairy goat instance takes just 7.12 s with DGAnnotation, five times faster than with Labelme.
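For context, the mNoC@85 and mNoC@90 figures reported above are mean number-of-clicks (NoC) scores: the average number of user clicks needed before the segmentation reaches 85% or 90% IoU. A minimal sketch of the per-instance metric, with the function name and click budget as illustrative assumptions rather than details from the paper:

```python
def noc_at_iou(iou_per_click, target=0.85, max_clicks=20):
    """Number of clicks (NoC) needed to reach a target IoU.

    `iou_per_click[k]` is the IoU achieved after k+1 user clicks; if the
    target is never reached within the budget, the budget itself is
    reported, as is conventional for this metric.
    """
    for k, iou in enumerate(iou_per_click[:max_clicks]):
        if iou >= target:
            return k + 1
    return max_clicks

# IoU trace for one instance: the 0.85 target is first reached at click 3
print(noc_at_iou([0.60, 0.78, 0.88, 0.91]))  # 3
```

The mean of this value over a dataset gives mNoC@85 (or mNoC@90 with `target=0.90`).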
Affiliation(s)
- Lianyue Zhang: College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China
- Gaoge Han: College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China
- Yongliang Qiao: Australian Institute for Machine Learning (AIML), The University of Adelaide, Adelaide 5005, Australia
- Liu Xu: College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China
- Ling Chen: College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China
- Jinglei Tang: College of Information Engineering, Northwest A&F University, Yangling, Xianyang 712100, China; The Key Laboratory of Agricultural Internet of Things, Ministry of Agriculture, Yangling, Xianyang 712100, China; Shaanxi Key Laboratory of Agricultural Information Perception and Intelligent Service, Yangling, Xianyang 712100, China
13
Zhang Z, Peng Q, Fu S, Wang W, Cheung YM, Zhao Y, Yu S, You X. A Componentwise Approach to Weakly Supervised Semantic Segmentation Using Dual-Feedback Network. IEEE Trans Neural Netw Learn Syst 2023; 34:7541-7554. [PMID: 35120009] [DOI: 10.1109/tnnls.2022.3144194]
Abstract
Recent weakly supervised semantic segmentation methods generate pseudolabels to recover the position information lost in weak labels for training the segmentation network. Unfortunately, those pseudolabels often contain mislabeled regions and inaccurate boundaries due to the incomplete recovery of position information, which degrades the resulting semantic segmentation to a certain degree. In this article, we decompose the position information into two components, high-level semantic information and low-level physical information, and develop a componentwise approach to recover each component independently. Specifically, we propose a simple yet effective pseudolabel updating mechanism that iteratively corrects mislabeled regions inside objects to precisely refine high-level semantic information. To reconstruct low-level physical information, we utilize a customized superpixel-based random walk mechanism to trim the boundaries. Finally, we design a novel network architecture, namely a dual-feedback network (DFN), to integrate the two mechanisms into a unified model. Experiments on benchmark datasets show that DFN outperforms existing state-of-the-art methods in terms of mean intersection-over-union (mIoU).
14
Bao C, Gu L, Wang S, Zou K, Zhang Z, Jiang L, Chen L, Fang H. Priority index for asthma (PIA): In silico discovery of shared and distinct drug targets for adult- and childhood-onset disease. Comput Biol Med 2023; 162:107095. [PMID: 37285660] [DOI: 10.1016/j.compbiomed.2023.107095]
Abstract
Asthma is a chronic disease caused by a combination of genetic risks and environmental triggers and can affect both adults and children. Genome-wide association studies have revealed partly distinct genetic architectures for its two age-of-onset subtypes (namely, adult-onset and childhood-onset). We reason that identifying shared and distinct drug targets between these subtypes may inform the development of subtype-specific therapeutic strategies. To this end, we introduce the Priority Index for Asthma (PIA), a genetics-led and network-driven drug target prioritisation tool for asthma. We demonstrate the validity of the tool in improving drug target prioritisation for asthma compared to status quo methods, as well as in capturing the underlying etiology and existing therapeutics for the disease. We also illustrate how PIA can be used to prioritise drug targets for adult- and childhood-onset asthma, as well as to identify shared and distinct pathway crosstalk genes. Shared crosstalk genes are mostly involved in JAK-STAT signaling, with clinical evidence supporting that targeting this pathway may be a promising drug repurposing opportunity for both subtypes. Crosstalk genes specific to childhood-onset asthma are enriched for PI3K-AKT-mTOR signaling, and we identify genes that are already targeted by licensed medications as repurposed drug candidates for this subtype. We make all our results accessible and reproducible at http://www.genetictargets.com/PIA. Collectively, our study has significant implications for asthma computational medicine research and can guide the future development of subtype-specific therapeutic strategies for the disease.
Affiliation(s)
- Chaohui Bao: Shanghai Institute of Hematology, State Key Laboratory of Medical Genomics, National Research Center for Translational Medicine at Shanghai, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Leyao Gu: Shanghai Institute of Hematology, State Key Laboratory of Medical Genomics, National Research Center for Translational Medicine at Shanghai, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; Faculty of Medical Laboratory Science, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Shan Wang: Shanghai Institute of Hematology, State Key Laboratory of Medical Genomics, National Research Center for Translational Medicine at Shanghai, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Kexin Zou: School of Life Sciences, Central South University, Hunan, China
- Zhiqiang Zhang: Shanghai Institute of Hematology, State Key Laboratory of Medical Genomics, National Research Center for Translational Medicine at Shanghai, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China; School of Life Sciences and Biotechnology, Shanghai Jiao Tong University, Shanghai, China
- Lulu Jiang: Translational Health Sciences, University of Bristol, Bristol, UK
- Liye Chen: Nuffield Department of Orthopaedics, Rheumatology and Musculoskeletal Sciences, University of Oxford, Oxford, UK
- Hai Fang: Shanghai Institute of Hematology, State Key Laboratory of Medical Genomics, National Research Center for Translational Medicine at Shanghai, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
15
Zou R, Wang Q, Wen F, Chen Y, Liu J, Du S, Yuan C. An Interactive Image Segmentation Method Based on Multi-Level Semantic Fusion. Sensors (Basel) 2023; 23:6394. [PMID: 37514688] [PMCID: PMC10383896] [DOI: 10.3390/s23146394]
Abstract
Understanding and analyzing 2D/3D sensor data is crucial for a wide range of machine learning-based applications, including object detection, scene segmentation, and salient object detection. In this context, interactive object segmentation is a vital task in image editing and medical diagnosis, involving the accurate separation of the target object from its background based on user annotation information. However, existing interactive object segmentation methods struggle to effectively leverage such information to guide object-segmentation models. To address these challenges, this paper proposes an interactive image-segmentation technique for static images based on multi-level semantic fusion. Our method utilizes user-guidance information both inside and outside the target object to segment it from the static image, making it applicable to both 2D and 3D sensor data. The proposed method introduces a cross-stage feature aggregation module, enabling the effective propagation of multi-scale features from previous stages to the current stage. This mechanism prevents the loss of semantic information caused by multiple upsampling and downsampling of the network, allowing the current stage to make better use of semantic information from the previous stage. Additionally, we incorporate a feature channel attention mechanism to address the issue of rough network segmentation edges. This mechanism captures richer feature details from the feature channel level, leading to finer segmentation edges. In the experimental evaluation conducted on the PASCAL Visual Object Classes (VOC) 2012 dataset, our proposed interactive image segmentation method based on multi-level semantic fusion demonstrates an intersection over union (IOU) accuracy approximately 2.1% higher than the currently popular interactive image segmentation method in static images. The comparative analysis highlights the improved performance and effectiveness of our method. 
Furthermore, our method exhibits potential applications in various fields, including medical imaging and robotics. Its compatibility with other machine learning methods for visual semantic analysis allows for integration into existing workflows. These aspects emphasize the significance of our contributions in advancing interactive image-segmentation techniques and their practical utility in real-world applications.
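The intersection-over-union (IoU) accuracy used in this evaluation is the standard ratio of overlap to union between the predicted and ground-truth masks. A minimal NumPy sketch (an illustration, not the authors' code; the empty-mask convention is an assumption):

```python
import numpy as np

def iou(pred, gt):
    """Intersection over union between two binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return np.logical_and(pred, gt).sum() / union

pred = np.zeros((4, 4), dtype=int); pred[0:2, 0:2] = 1   # 4 pixels
gt = np.zeros((4, 4), dtype=int); gt[0:2, 0:4] = 1       # 8 pixels
print(iou(pred, gt))  # intersection 4, union 8 -> 0.5
```

A reported gain of "approximately 2.1% IoU" is a difference between two such scores averaged over the test set.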
Affiliation(s)
- Ruirui Zou: School of Physics and Mechanical and Electrical Engineering, Longyan University, Longyan 364012, China
- Qinghui Wang: School of Physics and Mechanical and Electrical Engineering, Longyan University, Longyan 364012, China
- Falin Wen: School of Physics and Mechanical and Electrical Engineering, Longyan University, Longyan 364012, China
- Yang Chen: School of Physics and Mechanical and Electrical Engineering, Longyan University, Longyan 364012, China
- Jiale Liu: School of Software Engineering, Xi'an Jiaotong University, Xi'an 710049, China
- Shaoyi Du: Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University, Xi'an 710049, China
- Chengzhi Yuan: Department of Mechanical, Industrial and Systems Engineering, University of Rhode Island, Kingston, RI 02881, USA
16
Li X, Li X. Multimodal brain image fusion based on error texture elimination and salient feature detection. Front Neurosci 2023; 17:1204263. [PMID: 37521686] [PMCID: PMC10372795] [DOI: 10.3389/fnins.2023.1204263]
Abstract
As an important clinically oriented information fusion technology, multimodal medical image fusion integrates useful information from different modal images into a comprehensive fused image. Nevertheless, existing methods routinely consider only energy information when fusing low-frequency or base layers, ignoring the fact that useful texture information may exist in pixels with lower energy values. Thus, erroneous textures may be introduced into the fusion results. To resolve this problem, we propose a novel multimodal brain image fusion algorithm based on error texture removal. A two-layer decomposition scheme is first implemented to generate the high- and low-frequency subbands. We propose a salient feature detection operator based on gradient difference and entropy. The proposed operator integrates the gradient difference and amount of information in the high-frequency subbands to effectively identify clearly detailed information. Subsequently, we detect the energy information of the low-frequency subband by utilizing the local phase feature of each pixel as the intensity measurement and using a random walk algorithm to detect the energy information. Finally, we propose a rolling guidance filtering iterative least-squares model to reconstruct the texture information in the low-frequency components. Through extensive experiments, we successfully demonstrate that the proposed algorithm outperforms some state-of-the-art methods. Our source code is publicly available at https://github.com/ixilai/ETEM.
17
Qin Y, Tang C, Hu Q, Yi J, Yin T, Ai T. Assessment of Prognostic Factors and Molecular Subtypes of Breast Cancer With a Continuous-Time Random-Walk MR Diffusion Model: Using Whole Tumor Histogram Analysis. J Magn Reson Imaging 2023; 58:93-105. [PMID: 36251468] [DOI: 10.1002/jmri.28474]
Abstract
BACKGROUND The continuous-time random-walk (CTRW) diffusion model to evaluate breast cancer prognosis is rarely reported. PURPOSE To investigate the correlations between apparent diffusion coefficient (ADC) and CTRW-specific parameters with prognostic factors and molecular subtypes of breast cancer. STUDY TYPE Retrospective. POPULATION One hundred fifty-seven women (median age, 50 years; range, 26-81 years) with histopathology-confirmed breast cancer. FIELD STRENGTH/SEQUENCE Simultaneous multi-slice readout-segmented echo-planar imaging at 3.0T. ASSESSMENT The histogram metrics of ADC, anomalous diffusion coefficient (D), temporal diffusion heterogeneity (α), and spatial diffusion heterogeneity (β) were calculated for whole-tumor volume. Associations between histogram metrics and prognostic factors (estrogen receptor [ER], progesterone receptor [PR], human epidermal growth factor receptor 2 [HER2], and Ki-67 proliferation index), axillary lymph node metastasis (ALNM), and tumor grade were assessed. The performance of histogram metrics, both alone and in combination, for differentiating molecular subtypes (HER2-positive, Luminal or triple negative) was also assessed. STATISTICAL TESTS Comparisons were made using Mann-Whitney test between different prognostic factor statuses and molecular subtypes. Receiver operating characteristic curve analysis was used to assess the performance of mean and median histogram metrics in differentiating the molecular subtypes. A P value <0.05 was considered statistically significant. RESULTS The histogram metrics of ADC, D, and α differed significantly between ER-positive and ER-negative status, and between PR-positive and PR-negative status. The histogram metrics of ADC, D, α, and β were also significantly different between the HER2-positive and HER2-negative subgroups, and between ALNM-positive and ALNM-negative subgroups. 
The histogram metrics of α and β significantly differed between high and low Ki-67 proliferation subgroups, and between histological grade subgroups. The combination of αmean and βmean achieved the highest performance (AUC = 0.702) in discriminating the Luminal and HER2-positive subtypes. DATA CONCLUSION Whole-tumor histogram analysis of the CTRW model has potential to provide additional information on the prognosis and intrinsic subtyping classification of breast cancer. EVIDENCE LEVEL: 4. TECHNICAL EFFICACY: Stage 2.
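The whole-tumor histogram analysis described above reduces each per-voxel parameter map (ADC, D, α, β) restricted to the segmented tumor volume to first-order summary statistics. A minimal sketch of such metrics (an illustration with assumed metric choices, not the study's actual pipeline):

```python
import numpy as np

def whole_tumor_histogram_metrics(voxel_values):
    """First-order histogram metrics over all voxels in a tumor ROI.

    `voxel_values` stands in for a per-voxel diffusion parameter map
    (e.g. ADC, D, alpha, or beta) masked to the whole-tumor volume.
    """
    v = np.asarray(voxel_values, dtype=float).ravel()
    return {
        "mean": float(np.mean(v)),
        "median": float(np.median(v)),
        "p10": float(np.percentile(v, 10)),   # 10th percentile
        "p90": float(np.percentile(v, 90)),   # 90th percentile
    }

# Toy five-voxel "tumor": median and mean are both 3.0
metrics = whole_tumor_histogram_metrics([1.0, 2.0, 3.0, 4.0, 5.0])
```

Each such metric is then compared between subgroups (e.g. with a Mann-Whitney test) or fed into ROC analysis, as in the study.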
Affiliation(s)
- Yanjin Qin: Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Caili Tang: Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Qilan Hu: Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Jingru Yi: Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Ting Yin: MR Collaborations, Siemens Healthineers Ltd., Chengdu, China
- Tao Ai: Department of Radiology, Tongji Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
18
Kim S, Park J, Kim DH, Sun J, Lee SY. Combined exercise and nutrition intervention for older women with spinal sarcopenia: an open-label single-arm trial. BMC Geriatr 2023; 23:346. [PMID: 37264334] [PMCID: PMC10236709] [DOI: 10.1186/s12877-023-04063-1]
Abstract
PURPOSE Spinal sarcopenia is a multifactorial disorder associated with atrophy and fatty changes in the paraspinal muscles. Interventional studies for spinal sarcopenia are limited. We aimed to evaluate the effectiveness of a combined exercise and nutrition intervention for the treatment of spinal sarcopenia. METHODS Thirty-five community-dwelling older women diagnosed with spinal sarcopenia in a previous cohort study were included. The 12-week combined intervention consisted of back extensor strengthening exercises and protein supplementation. The following outcomes were measured at baseline (week 0), after the intervention (week 12), and at follow-up (week 24): conventional variables of sarcopenia (appendicular skeletal muscle mass, handgrip strength, 6-meter gait speed, and short physical performance battery); lumbar extensor muscle mass; lumbar extensor muscle volume and signal intensity; back extensor isokinetic strength; and back performance scale. We used the intention-to-treat analysis method, and repeated-measures analysis of variance was used to analyze the data. RESULTS Of the 35 potential participants, 26 older women participated in the study (mean age, 72.5 ± 4.0 years). After 12 weeks of the combined exercise and nutrition intervention, there were no changes in appendicular skeletal muscle mass or in lumbar extensor muscle mass, volume, or signal intensity. Handgrip strength and back extensor isokinetic strength did not change significantly. The short physical performance battery score increased significantly (P = 0.042) from 11.46 ± 0.86 to 11.77 ± 0.53 at week 12 and 11.82 ± 0.40 at week 24. The back performance scale sum score also improved significantly (P = 0.034) from 2.68 ± 1.81 to 1.95 ± 1.21 at week 12 and 2.09 ± 1.34 at week 24. CONCLUSION A combined exercise and nutrition intervention for community-dwelling older women with spinal sarcopenia appears feasible and helpful in improving physical performance as well as back performance.
Affiliation(s)
- Seungcheol Kim: Department of Rehabilitation Medicine, Seoul National University College of Medicine, Seoul National University Hospital, Seoul, Republic of Korea
- Jinhee Park: Department of Rehabilitation Medicine, Seoul National University College of Medicine, SMG-SNU Boramae Medical Center, 20 Boramae-ro 5-gil, Dongjak-gu, Seoul, 07061, Republic of Korea
- Dong Hyun Kim: Department of Radiology, Seoul National University College of Medicine, SMG-SNU Boramae Medical Center, Seoul, Republic of Korea
- Jiyu Sun: Integrated Biostatistics Branch, Division of Cancer Data Science, National Cancer Center, Goyang-si, Korea
- Sang Yoon Lee: Department of Rehabilitation Medicine, Seoul National University College of Medicine, SMG-SNU Boramae Medical Center, 20 Boramae-ro 5-gil, Dongjak-gu, Seoul, 07061, Republic of Korea
19
Khan MS, Ali H, Zakarya M, Tirunagari S, Khan AA, Khan R, Ahmed A, Rada L. A convex selective segmentation model based on a piece-wise constant metric-guided edge detector function. Soft Comput 2023. [DOI: 10.1007/s00500-023-08173-1]
20
Kostrykin L, Rohr K. Superadditivity and Convex Optimization for Globally Optimal Cell Segmentation Using Deformable Shape Models. IEEE Trans Pattern Anal Mach Intell 2023; 45:3831-3847. [PMID: 35737620] [DOI: 10.1109/tpami.2022.3185583]
Abstract
Cell nuclei segmentation is challenging due to shape variation and closely clustered or partially overlapping objects. Most previous methods are not globally optimal, are limited to elliptical models, or are computationally expensive. In this work, we introduce a globally optimal approach based on deformable shape models and global energy minimization for cell nuclei segmentation and cluster splitting. We propose an implicit parameterization of deformable shape models and show that it leads to a convex energy. Convex energy minimization yields the global solution independently of the initialization and is fast and robust. To jointly perform cell nuclei segmentation and cluster splitting, we developed a novel iterative global energy minimization method, which leverages the inherent superadditivity of the convex energy. This property exploits the lower bound of the energy of the union of the models and improves computational efficiency. Our method provably determines a solution close to global optimality. In addition, we derive a closed-form solution of the proposed global minimization based on the superadditivity property for non-clustered cell nuclei. We evaluated our method using fluorescence microscopy images of five different cell types comprising various challenges and performed a quantitative comparison with previous methods. Our method achieved state-of-the-art or improved performance.
21
Wang X, Zhang X, Li J, Zhao S, Sun H. Tensor-based multi-feature affinity graph learning for natural image segmentation. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08279-5]
22
Chen J, Liu B, Qu Z, Wang C. Empathy structure in multi-agent system with the mechanism of self-other separation: Design and analysis from a random walk view. Cogn Syst Res 2023. [DOI: 10.1016/j.cogsys.2023.02.003]
23
Abreu de Souza M, Alka Cordeiro DC, de Oliveira J, de Oliveira MFA, Bonafini BL. 3D Multi-Modality Medical Imaging: Combining Anatomical and Infrared Thermal Images for 3D Reconstruction. Sensors (Basel) 2023; 23:1610. [PMID: 36772650] [PMCID: PMC9919921] [DOI: 10.3390/s23031610]
Abstract
Medical thermography provides an overview of the human body with two-dimensional (2D) information that assists the identification of temperature changes, based on the analysis of surface distribution. However, this approach lacks spatial depth information, which can be enhanced by adding multiple images or three-dimensional (3D) systems. Therefore, the methodology applied for this paper generates a 3D point cloud (from thermal infrared images), a 3D geometry model (from CT images), and the segmented inner anatomical structures. Thus, the following computational processing was employed: Structure from Motion (SfM), image registration, and alignment (affine transformation) between the 3D models obtained to combine and unify them. This paper presents the 3D reconstruction and visualization of the respective geometry of the neck/bust and inner anatomical structures (thyroid, trachea, veins, and arteries). Additionally, it shows the whole 3D thermal geometry in different anatomical sections (i.e., coronal, sagittal, and axial), allowing it to be further examined by a medical team, improving pathological assessments. The generation of 3D thermal anatomy models allows for a combined visualization, i.e., functional and anatomical images of the neck region, achieving encouraging results. These 3D models bring correlation of the inner and outer regions, which could improve biomedical applications and future diagnosis with such a methodology.
24
Li Y, Wang T, Ji Z, Fu P, Shen X, Sun Q. Spatiotemporal consistent selection-correction network for deep interactive image segmentation. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08210-y]
25
Zhou P, Kang X, Ming A. Vine Spread for Superpixel Segmentation. IEEE Trans Image Process 2023; PP:878-891. [PMID: 37018702] [DOI: 10.1109/tip.2023.3234700]
Abstract
A superpixel is an over-segmented region of an image whose constituent pixels have similar properties. Although many popular seeds-based algorithms have been proposed to improve the segmentation quality of superpixels, they still suffer from the seed-initialization and pixel-assignment problems. In this paper, we propose Vine Spread for Superpixel Segmentation (VSSS) to form high-quality superpixels. First, we extract image color and gradient features to define a soil model that establishes a "soil" environment for the vines, and we define a vine state model by simulating the vine's "physiological" state. Thereafter, to capture more image details and object twigs, we propose a new seed-initialization strategy that perceives image gradients at the pixel level and without randomness. Next, to balance boundary adherence and superpixel regularity, we define a three-stage "parallel spreading" vine spread process as a novel pixel-assignment scheme, in which the proposed nonlinear velocity for vines helps form superpixels with regular shape and homogeneity, while the crazy spreading mode for vines and the soil-averaging strategy enhance boundary adherence. Finally, a series of experimental results demonstrates that VSSS offers competitive performance among seed-based methods, especially in capturing object details and twigs while balancing boundary adherence and producing regularly shaped superpixels.
|
26
|
Zhou T, Li L, Bredell G, Li J, Unkelbach J, Konukoglu E. Volumetric memory network for interactive medical image segmentation. Med Image Anal 2023; 83:102599. [PMID: 36327652] [DOI: 10.1016/j.media.2022.102599]
Abstract
Despite recent progress in automatic medical image segmentation techniques, fully automatic results usually fail to meet clinically acceptable accuracy and thus typically require further refinement. To this end, we propose a novel Volumetric Memory Network, dubbed VMN, to enable segmentation of 3D medical images in an interactive manner. Given user hints on an arbitrary slice, a 2D interaction network first produces an initial 2D segmentation for the chosen slice. The VMN then propagates the initial segmentation mask bidirectionally to all slices of the entire volume. Subsequent refinement based on additional user guidance on other slices can be incorporated in the same manner. To facilitate smooth human-in-the-loop segmentation, a quality assessment module is introduced to suggest the next slice for interaction based on the segmentation quality of each slice produced in the previous round. Our VMN demonstrates two distinctive features: first, the memory-augmented network design offers the model the ability to quickly encode past segmentation information, which is retrieved later for the segmentation of other slices; second, the quality assessment module enables the model to directly estimate the quality of each segmentation prediction, which allows for an active-learning paradigm in which users preferentially label the lowest-quality slice for multi-round refinement. The proposed network leads to a robust interactive segmentation engine that generalizes well to various types of user annotations (e.g., scribble, bounding box, extreme clicking). Extensive experiments have been conducted on three public medical image segmentation datasets (i.e., MSD, KiTS19, CVC-ClinicDB), and the results clearly confirm the superiority of our approach in comparison with state-of-the-art segmentation models. The code is made publicly available at https://github.com/0liliulei/Mem3D.
Affiliation(s)
- Tianfei Zhou
- Computer Vision Laboratory, ETH Zurich, Switzerland.
- Liulei Li
- School of Computer Science and Technology, Beijing Institute of Technology, China
- Jianwu Li
- School of Computer Science and Technology, Beijing Institute of Technology, China
- Jan Unkelbach
- Department of Radiation Oncology, University Hospital of Zurich, Zurich, Switzerland
|
27
|
Bruzadin A, Boaventura M, Colnago M, Negri RG, Casaca W. Learning Label Diffusion Maps for Semi-Automatic Segmentation of Lung CT Images with COVID-19. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.12.003]
|
28
|
Saito S, Herbster M. Generalizing p-Laplacian: spectral hypergraph theory and a partitioning algorithm. Mach Learn 2022. [DOI: 10.1007/s10994-022-06264-y]
Abstract
For hypergraph clustering, various methods have been proposed to define hypergraph p-Laplacians in the literature. This work proposes a general framework for an abstract class of hypergraph p-Laplacians from a differential-geometric view. This class includes previously proposed hypergraph p-Laplacians and also includes previously unstudied novel generalizations. For this abstract class, we extend current spectral theory by providing an extension of nodal domain theory for the eigenvectors of our hypergraph p-Laplacian. We use this nodal domain theory to provide bounds on the eigenvalues via a higher-order Cheeger inequality. Following our extension of spectral theory, we propose a novel hypergraph partitioning algorithm for our generalized p-Laplacian. Our empirical study shows that our algorithm outperforms spectral methods based on existing p-Laplacians.
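As a reference point for the abstract above: the classical graph p-Laplacian that such frameworks generalize is usually defined through its energy, and a common hypergraph variant replaces pairwise differences with the largest spread within a hyperedge. These are standard definitions from the wider literature, not the paper's abstract class:

```latex
% Graph p-Laplacian energy on a weighted graph G=(V,E,w):
S_p(f) = \sum_{\{u,v\}\in E} w_{uv}\,\lvert f(u)-f(v)\rvert^{p}
% (for p=2 this recovers the Laplacian quadratic form f^{\top} L f)

% A common hypergraph p-Laplacian energy (total-variation style),
% using the maximal spread within each hyperedge e:
Q_p(f) = \sum_{e\in E} w_e \max_{u,v\in e}\,\lvert f(u)-f(v)\rvert^{p}
```

Eigenpairs of such operators arise as critical points of the Rayleigh quotient \(S_p(f)/\lVert f\rVert_p^p\), which is what nodal domain theory and Cheeger-type inequalities are stated over.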
|
29
|
Bui V, Hsu LY, Chang LC, Sun AY, Tran L, Shanbhag SM, Zhou W, Mehta NN, Chen MY. DeepHeartCT: A fully automatic artificial intelligence hybrid framework based on convolutional neural network and multi-atlas segmentation for multi-structure cardiac computed tomography angiography image segmentation. Front Artif Intell 2022; 5:1059007. [PMID: 36483981] [PMCID: PMC9723331] [DOI: 10.3389/frai.2022.1059007]
Abstract
Cardiac computed tomography angiography (CTA) is an emerging imaging modality for assessing the coronary arteries as well as various cardiovascular structures. Recently, deep learning (DL) methods have been successfully applied to many applications of medical image analysis, including cardiac CTA structure segmentation. However, DL requires large amounts of data and high-quality labels for training, which can be burdensome to obtain due to its labor-intensive nature. In this study, we aim to develop a fully automatic artificial intelligence (AI) system, named DeepHeartCT, for accurate and rapid cardiac CTA segmentation based on DL. The proposed system was trained using a large clinical dataset with computer-generated labels to segment various cardiovascular structures, including the left and right ventricles (LV, RV), left and right atria (LA, RA), and LV myocardium (LVM). This new system was trained directly using high-quality computer labels generated from our previously developed multi-atlas based AI system. In addition, a reverse ranking strategy was proposed to assess the segmentation quality in the absence of manual reference labels. This strategy allowed the new framework to assemble optimal computer-generated labels from a large dataset for effective training of a deep convolutional neural network (CNN). A large set of clinical cardiac CTA studies (n = 1,064) was used to train and validate our framework. The trained model was then tested on another independent dataset with manual labels (n = 60). The Dice score, Hausdorff distance, and mean surface distance were used to quantify the segmentation accuracy. The proposed DeepHeartCT framework yields a high median Dice score of 0.90 [interquartile range (IQR), 0.90-0.91], a low median Hausdorff distance of 7 mm (IQR, 4-15 mm), and a low mean surface distance of 0.80 mm (IQR, 0.57-1.29 mm) across all segmented structures. An additional experiment was conducted to evaluate the proposed DL-based AI framework trained with a small vs. large dataset. The results show our framework also performed well when trained on a small optimal training dataset (n = 110) with a significantly reduced training time. These results demonstrate that the proposed DeepHeartCT framework provides accurate and rapid cardiac CTA segmentation that can be readily generalized for handling large-scale medical imaging applications.
Affiliation(s)
- Vy Bui
- National Heart Lung and Blood Institute, National Institutes of Health, Bethesda, MD, United States
- Li-Yueh Hsu
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Bethesda, MD, United States. Correspondence: Li-Yueh Hsu
- Lin-Ching Chang
- Department of Electrical Engineering and Computer Science, Catholic University of America, Washington, DC, United States
- An-Yu Sun
- National Heart Lung and Blood Institute, National Institutes of Health, Bethesda, MD, United States; Department of Electrical Engineering and Computer Science, Catholic University of America, Washington, DC, United States
- Loc Tran
- National Heart Lung and Blood Institute, National Institutes of Health, Bethesda, MD, United States; Department of Electrical Engineering and Computer Science, Catholic University of America, Washington, DC, United States
- Sujata M. Shanbhag
- National Heart Lung and Blood Institute, National Institutes of Health, Bethesda, MD, United States
- Wunan Zhou
- National Heart Lung and Blood Institute, National Institutes of Health, Bethesda, MD, United States
- Nehal N. Mehta
- National Heart Lung and Blood Institute, National Institutes of Health, Bethesda, MD, United States
- Marcus Y. Chen
- National Heart Lung and Blood Institute, National Institutes of Health, Bethesda, MD, United States
|
30
|
Gupta AU, Singh Bhadauria S, Liu J. Multi Level Approach for Segmentation of Interstitial Lung Disease (ILD) Patterns Classification Based on Superpixel Processing and Fusion of K-Means Clusters: SPFKMC. Comput Intell Neurosci 2022; 2022:1-22. [PMID: 36317075] [PMCID: PMC9617705] [DOI: 10.1155/2022/4431817]
Abstract
During the COVID-19 pandemic, a huge number of interstitial lung disease (ILD) lung images has been captured, and efficient segmentation techniques are needed to separate the anatomical structures and ILD patterns for disease and infection-level identification. The effectiveness of disease classification directly depends on the accuracy of initial stages such as preprocessing and segmentation. This paper proposes a hybrid segmentation algorithm designed for ILD images that takes advantage of superpixel and K-means clustering approaches. Segmented superpixel images adapt better to irregular local and spatial neighborhoods, which helps improve the performance of K-means clustering-based ILD image segmentation. To overcome the limitation of multiclass belonging, semiadaptive wavelet-based fusion is applied over selected K-means clusters. The performance of the proposed SPFKMC was compared with that of 3-class Fuzzy C-Means clustering (FCM) and K-means clustering in terms of accuracy, Jaccard similarity index (JSI), and Dice similarity coefficient (DSC). The SPFKMC algorithm gives an accuracy of 99.28%, a DSC of 98.72%, and a JSI of 97.87%. The proposed fused clustering, with its wavelet-based fused cluster results, gives better results than traditional K-means clustering segmentation.
|
31
|
Ameli S, Venkatesh BA, Shaghaghi M, Ghadimi M, Hazhirkarzar B, Rezvani Habibabadi R, Aliyari Ghasabeh M, Khoshpouri P, Pandey A, Pandey P, Pan L, Grimm R, Kamel IR. Role of MRI-Derived Radiomics Features in Determining Degree of Tumor Differentiation of Hepatocellular Carcinoma. Diagnostics (Basel) 2022; 12:2386. [PMID: 36292074] [PMCID: PMC9600274] [DOI: 10.3390/diagnostics12102386]
Abstract
Background: To investigate radiomics ability in predicting hepatocellular carcinoma histological degree of differentiation by using volumetric MR imaging parameters. Methods: Volumetric venous enhancement and apparent diffusion coefficient were calculated on baseline MRI of 171 lesions. Ninety-five radiomics features were extracted, then random forest classification identified the performance of the texture features in classifying tumor degree of differentiation based on their histopathological features. The Gini index was used for split criterion, and the random forest was optimized to have a minimum of nine participants per leaf node. Predictor importance was estimated based on the minimal depth of the maximal subtree. Results: Out of 95 radiomics features, four top performers were apparent diffusion coefficient (ADC) features. The mean ADC and venous enhancement map alone had an overall error rate of 39.8%. The error decreased to 32.8% with the addition of the radiomics features in the multi-class model. The area under the receiver-operator curve (AUC) improved from 75.2% to 83.2% with the addition of the radiomics features for distinguishing well- from moderately/poorly differentiated HCCs in the multi-class model. Conclusions: The addition of radiomics-based texture analysis improved classification over that of ADC or venous enhancement values alone. Radiomics help us move closer to non-invasive histologic tumor grading of HCC.
Affiliation(s)
- Sanaz Ameli
- Department of Radiology, University of Arkansas for Medical Sciences, 4301 W. Markham St., Little Rock, AR 72205, USA
- Mohammadreza Shaghaghi
- Department of Radiology, Johns Hopkins Hospital, 600 N Wolfe St., Baltimore, MD 21287, USA
- Maryam Ghadimi
- Department of Radiology, Johns Hopkins Hospital, 600 N Wolfe St., Baltimore, MD 21287, USA
- Bita Hazhirkarzar
- Department of Radiology, Johns Hopkins Hospital, 600 N Wolfe St., Baltimore, MD 21287, USA
- Roya Rezvani Habibabadi
- Department of Radiology, University of Florida College of Medicine, 1600 SW Archer Rd., Gainesville, FL 32610, USA
- Mounes Aliyari Ghasabeh
- Department of Radiology, Saint Louis University, 1201 S Grand Blvd, St. Louis, MO 63104, USA
- Pegah Khoshpouri
- Department of Radiology, University of Washington Main Hospital, 1959 NE Pacific St., 2nd Floor, Seattle, WA 98195, USA
- Ankur Pandey
- Department of Radiology, University of Maryland Medical Center, 22 S Greene St., Baltimore, MD 21201, USA
- Pallavi Pandey
- Department of Radiology, Johns Hopkins Hospital, 600 N Wolfe St., Baltimore, MD 21287, USA
- Li Pan
- Department of Radiology, Johns Hopkins Hospital, 600 N Wolfe St., Baltimore, MD 21287, USA
- Robert Grimm
- Department of Radiology, Johns Hopkins Hospital, 600 N Wolfe St., Baltimore, MD 21287, USA
- Ihab R. Kamel
- Department of Radiology, Johns Hopkins Hospital, 600 N Wolfe St., Baltimore, MD 21287, USA
- Correspondence:
|
32
|
Li X, Li B, Yin H, Xu B. An Automatic Random Walker Algorithm for Segmentation of Ground Glass Opacity Pulmonary Nodules. J Healthc Eng 2022; 2022:1-15. [PMID: 36212245] [PMCID: PMC9537033] [DOI: 10.1155/2022/6727957]
Abstract
Automatic and accurate segmentation of ground glass opacity (GGO) nodules remains challenging due to inhomogeneous interiors, irregular shapes, and blurred boundaries across patients. Despite successful applications in image processing domains, the random walker has some limitations for segmentation of GGO pulmonary nodules. In this paper, an improved random walker method is proposed for the segmentation of GGO nodules. To calculate a new affinity matrix, intensity, spatial, and texture features are incorporated, which strengthens the discriminative power between two adjacent nodes on the graph. To address the problem of robustness in seed acquisition, the geodesic distance is introduced and a novel local search strategy is presented to automatically acquire reliable seeds. For segmentation, a label constraint term is introduced into the energy function of the original random walker, which alleviates the accumulation of errors caused by the initial seed acquisition. Extensive experiments conducted on the Lung Image Database Consortium (LIDC) dataset demonstrate that the proposed method achieves visually satisfactory results without user interaction. Both qualitative and quantitative evaluations also demonstrate that the proposed method obtains better performance than the conventional random walker and state-of-the-art segmentation methods in terms of overlap score and F-measure.
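The "energy function of the random walker" mentioned above reduces, in the classical formulation, to a Dirichlet problem on the graph Laplacian: probabilities at unseeded nodes solve a linear system driven by the seeds. The toy chain graph below is a minimal sketch of that core step only; the paper's affinity design, automatic seed search, and label-constraint term are omitted, and all names are illustrative:

```python
import numpy as np

def random_walker_1d(weights, seeds):
    """Minimal random-walker segmentation on a chain graph.

    weights[i] is the edge weight between pixels i and i+1 (high weight =
    similar intensity). seeds maps node index -> label (0 or 1). Returns
    per-node probability of reaching a label-1 seed first, plus labels."""
    n = len(weights) + 1
    # Build the combinatorial graph Laplacian L = D - W.
    L = np.zeros((n, n))
    for i, w in enumerate(weights):
        L[i, i] += w
        L[i + 1, i + 1] += w
        L[i, i + 1] -= w
        L[i + 1, i] -= w
    seeded = sorted(seeds)
    free = [i for i in range(n) if i not in seeds]
    x_m = np.array([seeds[i] for i in seeded], dtype=float)
    # Dirichlet problem: L_u x_u = -B x_m, where B couples free to seeded nodes.
    L_u = L[np.ix_(free, free)]
    B = L[np.ix_(free, seeded)]
    x_u = np.linalg.solve(L_u, -B @ x_m)
    prob = np.empty(n)
    prob[seeded] = x_m
    prob[free] = x_u
    return (prob > 0.5).astype(int), prob

# Chain of 6 pixels with a weak edge (an intensity boundary) between nodes 2 and 3.
weights = [1.0, 1.0, 0.01, 1.0, 1.0]
labels, prob = random_walker_1d(weights, seeds={0: 0, 5: 1})
```

Because the solution is harmonic, probabilities vary smoothly within homogeneous regions and jump across the weak edge, which is exactly why the affinity matrix (how `weights` are computed from image features) dominates segmentation quality.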
|
33
|
Li X, Zhao Z, Wang Q. ABSSNet: Attention-Based Spatial Segmentation Network for Traffic Scene Understanding. IEEE Trans Cybern 2022; 52:9352-9362. [PMID: 33531327] [DOI: 10.1109/tcyb.2021.3050558]
Abstract
The locations of roads and lane lines are critically important for autonomous driving and driver assistance, and the detection accuracy of these two elements dramatically affects the reliability and practicality of the whole system. In real applications, the traffic scene can be very complicated, which makes it particularly challenging to obtain the precise location of roads and lane lines. Commonly used deep learning-based object detection models perform well on lane line and road detection tasks, but they still encounter false detections and missed detections frequently. Moreover, existing convolutional neural network (CNN) structures only pay attention to the information flow between layers and cannot fully utilize the spatial information inside the layers. To address those problems, we propose an attention-based spatial segmentation network for traffic scene understanding. We use a convolutional attention module to improve the network's understanding of spatial location distribution, and a Spatial CNN (SCNN), which passes information within a single convolutional layer, to improve the spatial relationship modeling ability of the network. The experimental results demonstrate that this method effectively improves the neural network's use of spatial information, thereby improving traffic scene understanding. Furthermore, a pixel-level road segmentation dataset called the NWPU Road Dataset is built to help improve the process of traffic scene understanding.
|
34
|
Chen J, Liu B, Zhang D, Qu Z, Wang C. Social decision-making in a large-scale MultiAgent system considering the influence of empathy. Appl Intell 2022. [DOI: 10.1007/s10489-022-03933-2]
|
35
|
Zhao S, Chen J, Chen J, Zhang Y, Tang J. Hierarchical label with imbalance and attributed network structure fusion for network embedding. AI Open 2022. [DOI: 10.1016/j.aiopen.2022.07.002]
|
36
|
Pace DF, Dalca AV, Brosch T, Geva T, Powell AJ, Weese J, Moghari MH, Golland P. Learned iterative segmentation of highly variable anatomy from limited data: Applications to whole heart segmentation for congenital heart disease. Med Image Anal 2022; 80:102469. [PMID: 35640385] [PMCID: PMC9617683] [DOI: 10.1016/j.media.2022.102469]
Abstract
Training deep learning models that segment an image in one step typically requires a large collection of manually annotated images that captures the anatomical variability in a cohort. This poses challenges when anatomical variability is extreme but training data is limited, as when segmenting cardiac structures in patients with congenital heart disease (CHD). In this paper, we propose an iterative segmentation model and show that it can be accurately learned from a small dataset. Implemented as a recurrent neural network, the model evolves a segmentation over multiple steps, from a single user click until reaching an automatically determined stopping point. We develop a novel loss function that evaluates the entire sequence of output segmentations, and use it to learn model parameters. Segmentations evolve predictably according to growth dynamics encapsulated by training data, which consists of images, partially completed segmentations, and the recommended next step. The user can easily refine the final segmentation by examining those that are earlier or later in the output sequence. Using a dataset of 3D cardiac MR scans from patients with a wide range of CHD types, we show that our iterative model offers better generalization to patients with the most severe heart malformations.
Affiliation(s)
- Danielle F Pace
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA; A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
- Adrian V Dalca
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA; A.A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Tom Brosch
- Philips Research Laboratories, Hamburg, Germany
- Tal Geva
- Department of Cardiology, Boston Children's Hospital, Boston, MA, USA; Department of Pediatrics, Harvard Medical School, Boston, MA, USA
- Andrew J Powell
- Department of Cardiology, Boston Children's Hospital, Boston, MA, USA; Department of Pediatrics, Harvard Medical School, Boston, MA, USA
- Mehdi H Moghari
- Department of Cardiology, Boston Children's Hospital, Boston, MA, USA; Department of Pediatrics, Harvard Medical School, Boston, MA, USA
- Polina Golland
- Computer Science and Artificial Intelligence Laboratory, Massachusetts Institute of Technology, Cambridge, MA, USA
|
37
|
Lee S, Kume H, Urakubo H, Kasai H, Ishii S. Tri-view two-photon microscopic image registration and deblurring with convolutional neural networks. Neural Netw 2022; 152:57-69. [DOI: 10.1016/j.neunet.2022.04.011]
|
38
|
Wang J, Hu Y, Li Z. A New Coherence Detection Method for Mapping Inland Water Bodies Using CYGNSS Data. Remote Sens 2022; 14:3195. [DOI: 10.3390/rs14133195]
Abstract
Inland water is an important part of the Earth’s water cycle. Mapping inland water is vital for understanding surface hydrology and climate change. Spaceborne global navigation satellite systems reflectometry (GNSS-R) has been proven to be an effective technique to detect inland water bodies. This paper proposes a new method to map inland water bodies using the delay-Doppler map (DDM) measurements provided by the GNSS-R platform Cyclone GNSS (CYGNSS). In this new method, we develop a refined power ratio to identify the coherence in DDM caused by the inland water. Processed with an image segmentation method, the refined power ratio is then applied to discriminate the permanent inland water bodies from the land. Using CYGNSS data over the Amazon Basin and the Congo Basin in 2020, we successfully generated water masks with a spatial resolution of 0.01°. Compared with the reference optical water masks, the overall detection accuracy in the Amazon Basin is 94.48% and the water detection accuracy is 92.23%, and the corresponding accuracies in the Congo Basin are 96.12% and 93.16%, respectively. Compared with the previous DDM power-spread detector (DPSD) method, the new method’s false alarms and misses in the Amazon Basin are reduced by 17.1% and 9.1%, respectively, while the false alarms and misses in the Congo Basin are reduced by 10.2% and 22%, respectively. Moreover, our method is proven to be useful for detecting short-term flood inundation.
|
39
|
Atzeni A, Peter L, Robinson E, Blackburn E, Althonayan J, Alexander DC, Iglesias JE. Deep active learning for suggestive segmentation of biomedical image stacks via optimisation of Dice scores and traced boundary length. Med Image Anal 2022; 81:102549. [DOI: 10.1016/j.media.2022.102549]
|
40
|
Drees D, Eilers F, Jiang X. Hierarchical Random Walker Segmentation for Large Volumetric Biomedical Images. IEEE Trans Image Process 2022; 31:4431-4446. [PMID: 35763479] [DOI: 10.1109/tip.2022.3185551]
Abstract
The random walker method is a popular tool for semi-automatic image segmentation, especially in the biomedical field. However, its linear asymptotic run time and memory requirements make application to 3D datasets of increasing sizes impractical. We propose a hierarchical framework that, to the best of our knowledge, is the first attempt to overcome these restrictions for the random walker algorithm, achieving sublinear run time and constant memory complexity. The goal of this framework is not to improve segmentation quality over the baseline method, but to make interactive segmentation on out-of-core datasets possible. The method is evaluated quantitatively on synthetic data and the CT-ORG dataset, where the expected improvements in algorithm run time while maintaining high segmentation quality are confirmed. The incremental (i.e., interaction update) run time is demonstrated to be in seconds on a standard PC, even for volumes of hundreds of gigabytes in size. In a small case study, the applicability to large real-world data from current biomedical research is demonstrated. An implementation of the presented method is publicly available in version 5.2 of the widely used volume rendering and processing software Voreen (https://www.uni-muenster.de/Voreen/).
|
41
|
Xia D, Lianoglou S, Sandmann T, Calvert M, Suh JH, Thomsen E, Dugas J, Pizzo ME, DeVos SL, Earr TK, Lin CC, Davis S, Ha C, Leung AWS, Nguyen H, Chau R, Yulyaningsih E, Lopez I, Solanoy H, Masoud ST, Liang CC, Lin K, Astarita G, Khoury N, Zuchero JY, Thorne RG, Shen K, Miller S, Palop JJ, Garceau D, Sasner M, Whitesell JD, Harris JA, Hummel S, Gnörich J, Wind K, Kunze L, Zatcepin A, Brendel M, Willem M, Haass C, Barnett D, Zimmer TS, Orr AG, Scearce-Levie K, Lewcock JW, Di Paolo G, Sanchez PE. Novel App knock-in mouse model shows key features of amyloid pathology and reveals profound metabolic dysregulation of microglia. Mol Neurodegener 2022; 17:41. [PMID: 35690868] [PMCID: PMC9188195] [DOI: 10.1186/s13024-022-00547-7]
Abstract
BACKGROUND Genetic mutations underlying familial Alzheimer's disease (AD) were identified decades ago, but the field is still in search of transformative therapies for patients. While mouse models based on overexpression of mutated transgenes have yielded key insights in mechanisms of disease, those models are subject to artifacts, including random genetic integration of the transgene, ectopic expression and non-physiological protein levels. The genetic engineering of novel mouse models using knock-in approaches addresses some of those limitations. With mounting evidence of the role played by microglia in AD, high-dimensional approaches to phenotype microglia in those models are critical to refine our understanding of the immune response in the brain. METHODS We engineered a novel App knock-in mouse model (AppSAA) using homologous recombination to introduce three disease-causing coding mutations (Swedish, Arctic and Austrian) to the mouse App gene. Amyloid-β pathology, neurodegeneration, glial responses, brain metabolism and behavioral phenotypes were characterized in heterozygous and homozygous AppSAA mice at different ages in brain and/or biofluids. Wild type littermate mice were used as experimental controls. We used in situ imaging technologies to define the whole-brain distribution of amyloid plaques and compare it to other AD mouse models and human brain pathology. To further explore the microglial response to AD-relevant pathology, we isolated microglia with fibrillar Aβ content from the brain and performed transcriptomics and metabolomics analyses and in vivo brain imaging to measure energy metabolism and microglial response. Finally, we also characterized the mice in various behavioral assays. RESULTS Leveraging multi-omics approaches, we discovered profound alteration of diverse lipids and metabolites as well as an exacerbated disease-associated transcriptomic response in microglia with high intracellular Aβ content.
The AppSAA knock-in mouse model recapitulates key pathological features of AD such as a progressive accumulation of parenchymal amyloid plaques and vascular amyloid deposits, altered astroglial and microglial responses and elevation of CSF markers of neurodegeneration. Those observations were associated with increased TSPO and FDG-PET brain signals and a hyperactivity phenotype as the animals aged. DISCUSSION Our findings demonstrate that fibrillar Aβ in microglia is associated with lipid dyshomeostasis consistent with lysosomal dysfunction and foam cell phenotypes as well as profound immuno-metabolic perturbations, opening new avenues to further investigate metabolic pathways at play in microglia responding to AD-relevant pathogenesis. The in-depth characterization of pathological hallmarks of AD in this novel and open-access mouse model should serve as a resource for the scientific community to investigate disease-relevant biology.
Affiliation(s)
- Dan Xia
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Steve Lianoglou
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Thomas Sandmann
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Meredith Calvert
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Jung H. Suh
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Elliot Thomsen
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Jason Dugas
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Michelle E. Pizzo
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Sarah L. DeVos
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Timothy K. Earr
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Chia-Ching Lin
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Sonnet Davis
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Connie Ha
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Amy Wing-Sze Leung
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Hoang Nguyen
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Roni Chau
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Ernie Yulyaningsih
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Isabel Lopez
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Hilda Solanoy
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Shababa T. Masoud
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Chun-chi Liang
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Karin Lin
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Giuseppe Astarita
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Nathalie Khoury
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Joy Yu Zuchero
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Robert G. Thorne
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Department of Pharmaceutics, University of Minnesota, 9-177 Weaver-Densford Hall, 308 Harvard St. SE, Minneapolis, MN 55455 USA
- Kevin Shen
- Gladstone Institute of Neurological Disease, San Francisco, CA 94158 USA
- Department of Neurology, University of California, San Francisco, CA 94158 USA
- Stephanie Miller
- Gladstone Institute of Neurological Disease, San Francisco, CA 94158 USA
- Department of Neurology, University of California, San Francisco, CA 94158 USA
- Jorge J. Palop
- Gladstone Institute of Neurological Disease, San Francisco, CA 94158 USA
- Department of Neurology, University of California, San Francisco, CA 94158 USA
- Selina Hummel
- German Center for Neurodegenerative Diseases (DZNE) Munich, 81377 Munich, Germany
- Department of Nuclear Medicine, University Hospital of Munich, LMU Munich, Munich, Germany
- Johannes Gnörich
- German Center for Neurodegenerative Diseases (DZNE) Munich, 81377 Munich, Germany
- Department of Nuclear Medicine, University Hospital of Munich, LMU Munich, Munich, Germany
- Karin Wind
- German Center for Neurodegenerative Diseases (DZNE) Munich, 81377 Munich, Germany
- Department of Nuclear Medicine, University Hospital of Munich, LMU Munich, Munich, Germany
- Lea Kunze
- German Center for Neurodegenerative Diseases (DZNE) Munich, 81377 Munich, Germany
- Department of Nuclear Medicine, University Hospital of Munich, LMU Munich, Munich, Germany
- Artem Zatcepin
- German Center for Neurodegenerative Diseases (DZNE) Munich, 81377 Munich, Germany
- Department of Nuclear Medicine, University Hospital of Munich, LMU Munich, Munich, Germany
- Matthias Brendel
- German Center for Neurodegenerative Diseases (DZNE) Munich, 81377 Munich, Germany
- Department of Nuclear Medicine, University Hospital of Munich, LMU Munich, Munich, Germany
- Michael Willem
- Department of Nuclear Medicine, University Hospital of Munich, LMU Munich, Munich, Germany
- Christian Haass
- German Center for Neurodegenerative Diseases (DZNE) Munich, 81377 Munich, Germany
- Metabolic Biochemistry, Biomedical Center (BMC), Faculty of Medicine, Ludwig-Maximilians-Universität, München, 81377 Munich, Germany
- Munich Cluster for Systems Neurology (SyNergy), 81377 Munich, Germany
- Daniel Barnett
- Appel Alzheimer’s Disease Research Institute, Weill Cornell Medicine, New York, NY USA
- Feil Family Brain and Mind Research Institute, Weill Cornell Medicine, New York, NY USA
- Neuroscience Graduate Program, Weill Cornell Medicine, New York, NY USA
- Till S. Zimmer
- Appel Alzheimer’s Disease Research Institute, Weill Cornell Medicine, New York, NY USA
- Feil Family Brain and Mind Research Institute, Weill Cornell Medicine, New York, NY USA
- Anna G. Orr
- Appel Alzheimer’s Disease Research Institute, Weill Cornell Medicine, New York, NY USA
- Feil Family Brain and Mind Research Institute, Weill Cornell Medicine, New York, NY USA
- Neuroscience Graduate Program, Weill Cornell Medicine, New York, NY USA
- Kimberly Scearce-Levie
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Joseph W. Lewcock
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Gilbert Di Paolo
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
- Pascal E. Sanchez
- Denali Therapeutics, Inc., 161 Oyster Point Blvd, South San Francisco, California, 94080 USA
|
42
|
Sionkowski P, Bełdowski P, Kruszewska N, Weber P, Marciniak B, Domino K. Effect of Ion and Binding Site on the Conformation of Chosen Glycosaminoglycans at the Albumin Surface. Entropy (Basel) 2022; 24:811. [PMID: 35741532 DOI: 10.3390/e24060811] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/11/2022] [Revised: 06/06/2022] [Accepted: 06/08/2022] [Indexed: 12/24/2022]
Abstract
Albumin is one of the major components of synovial fluid. Due to its negative surface charge, it plays an essential role in many physiological processes, including the formation of molecular complexes. In addition, glycosaminoglycans such as hyaluronic acid and chondroitin sulfate are crucial components of synovial fluid involved in the boundary lubrication regime. This study presents the influence of Na+, Mg2+ and Ca2+ ions on human serum albumin–hyaluronan/chondroitin-6-sulfate interactions, examined using molecular docking followed by molecular dynamics simulations. We analyze the binding of the chosen glycosaminoglycans by employing a conformational entropy approach. In addition, several protein–polymer complexes have been studied to check how the binding site and the presence of ions influence affinity. The presence of divalent cations contributes to a decrease of conformational entropy near carboxyl and sulfate groups, which can indicate higher affinity between glycosaminoglycans and albumin. Moreover, domains IIIA and IIIB of albumin show the highest affinity, as these two domains carry a positive net charge that allows binding of the negatively charged glycosaminoglycans. Finally, in the discussion, we suggest research paths for identifying features that carry information about the dynamics of particular types of polymers or ions.
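The conformational-entropy reasoning above lends itself to a generic illustration. The sketch below is not the authors' pipeline; the histogram estimator and the toy angle distributions are assumptions made purely for demonstration. The point it shows: a restrained dihedral distribution (e.g., near a bound divalent cation) yields lower Shannon entropy than an unrestrained one.

```python
import numpy as np

def conformational_entropy(angles_deg, n_bins=36):
    """Shannon entropy (in nats) of a dihedral-angle histogram; lower
    values mean a more restrained (lower-entropy) conformation."""
    counts, _ = np.histogram(angles_deg, bins=n_bins, range=(-180.0, 180.0))
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins so log() is defined
    return float(-np.sum(p * np.log(p)))

# Toy distributions (illustrative only, not simulation data):
rng = np.random.default_rng(0)
free = rng.uniform(-180, 180, 10_000)   # unrestrained dihedral
bound = rng.normal(60, 10, 10_000)      # restrained near 60 degrees
```

With a real trajectory, `angles_deg` would come from per-frame dihedral measurements, and entropies would be compared between ion conditions.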
|
43
|
Li C, He W, Liao N, Gong J, Hou S, Guo B. Superpixels with contour adherence via label expansion for image decomposition. Neural Comput Appl. [DOI: 10.1007/s00521-022-07315-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/10/2023]
|
44
|
Xu X, Sanford T, Turkbey B, Xu S, Wood BJ, Yan P. Shadow-Consistent Semi-Supervised Learning for Prostate Ultrasound Segmentation. IEEE Trans Med Imaging 2022; 41:1331-1345. [PMID: 34971530 PMCID: PMC9709821 DOI: 10.1109/tmi.2021.3139999] [Citation(s) in RCA: 14] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
Prostate segmentation in transrectal ultrasound (TRUS) images is an essential prerequisite for many prostate-related clinical procedures, which, however, remains a long-standing problem due to the challenges caused by low image quality and shadow artifacts. In this paper, we propose a Shadow-consistent Semi-supervised Learning (SCO-SSL) method with two novel mechanisms, namely shadow augmentation (Shadow-AUG) and shadow dropout (Shadow-DROP), to tackle this challenging problem. Specifically, Shadow-AUG enriches training samples by adding simulated shadow artifacts to the images to make the network robust to shadow patterns. Shadow-DROP forces the segmentation network to infer the prostate boundary from the neighboring shadow-free pixels. Extensive experiments were conducted on two large clinical datasets (a public dataset containing 1,761 TRUS volumes and an in-house dataset containing 662 TRUS volumes). In the fully supervised setting, a vanilla U-Net equipped with our Shadow-AUG and Shadow-DROP outperforms the state-of-the-art methods with statistical significance. In the semi-supervised setting, even with only 20% labeled training data, our SCO-SSL method still achieves highly competitive performance, suggesting great clinical value in relieving the labor of data annotation. Source code is released at https://github.com/DIAL-RPI/SCO-SSL.
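The Shadow-AUG idea, adding simulated shadow artifacts so the network learns to cope with them, can be sketched roughly as follows. The Gaussian attenuation profile and all parameters here are illustrative guesses, not the released SCO-SSL code (see the authors' GitHub repository for that):

```python
import numpy as np

def add_simulated_shadow(image, center_col, width, strength=0.8):
    """Darken a vertical band of a 2-D ultrasound image to mimic an
    acoustic shadow. `strength` in (0, 1] is the peak attenuation at
    the band center; `width` controls how wide the band is (columns)."""
    h, w = image.shape
    cols = np.arange(w)
    # Smooth Gaussian attenuation profile across columns.
    profile = 1.0 - strength * np.exp(-0.5 * ((cols - center_col) / width) ** 2)
    return image * profile[np.newaxis, :]

img = np.ones((4, 8))
shadowed = add_simulated_shadow(img, center_col=4, width=1.0, strength=0.9)
```

In an augmentation pipeline, `center_col`, `width`, and `strength` would be randomized per training sample.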
|
45
|
Kong W, Miao Q, Lei Y, Ren C. Guided filter random walk and improved spiking cortical model based image fusion method in NSST domain. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.11.060] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
46
|
Fang H. PiER: web-based facilities tailored for genetic target prioritisation harnessing human disease genetics, functional genomics and protein interactions. Nucleic Acids Res 2022; 50:W583-W592. [PMID: 35610036 PMCID: PMC9252812 DOI: 10.1093/nar/gkac379] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/26/2022] [Revised: 04/19/2022] [Accepted: 05/02/2022] [Indexed: 11/23/2022] Open
Abstract
Integrative prioritisation promotes translational use of disease genetic findings in target discovery. I report ‘PiER’ (http://www.genetictargets.com/PiER), web-based facilities that support ab initio and real-time genetic target prioritisation through integrative use of human disease genetics, functional genomics and protein interactions. By design, PiER features two facilities: elementary and combinatory. The elementary facility is designed to perform specific tasks and includes three online tools: eV2CG, utilising functional genomics to link disease-associated variants (particularly in the non-coding genome) to core genes likely responsible for genetic associations in disease; eCG2PG, using knowledge of protein interactions to ‘network’ core genes with additional peripheral genes, producing a ranked list of core and peripheral genes; and eCrosstalk, exploiting pathway-derived interactions to identify highly ranked genes mediating crosstalk between molecular pathways. The results of each elementary task are sequentially piped into the next. By chaining together elementary tasks, the combinatory facility automates genetics-led and network-based integrative prioritisation of genetic targets at the gene level (cTGene) and at the crosstalk level (cTCrosstalk). Together with a tutorial-like booklet describing how to use them, the PiER facilities meet multi-tasking needs to accelerate computational translational medicine that leverages human disease genetics and genomics for early-stage target discovery and drug repurposing.
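The piping of elementary tasks described above (eV2CG, then eCG2PG, then eCrosstalk) amounts to function composition over ranked gene lists. A minimal sketch, with toy stand-in functions whose names and behavior are invented here for illustration and bear no relation to the actual PiER implementation:

```python
from functools import reduce

def chain(*steps):
    """Compose elementary tasks left to right: the output of each step
    is piped as the input of the next, as in the combinatory facility."""
    return lambda data: reduce(lambda acc, step: step(acc), steps, data)

# Toy stand-ins for the three elementary tasks (illustrative only).
variants_to_core = lambda variants: [f"core:{v}" for v in variants]
core_to_network = lambda genes: genes + ["peripheral:GENE_X"]
rank_crosstalk = lambda genes: sorted(genes)

combinatory = chain(variants_to_core, core_to_network, rank_crosstalk)
result = combinatory(["rs123", "rs456"])
```

The design point is that each stage consumes and produces the same shape of data (a gene list), which is what makes the chaining automatic.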
Affiliation(s)
- Hai Fang
- Shanghai Institute of Hematology, State Key Laboratory of Medical Genomics, National Research Center for Translational Medicine at Shanghai, Ruijin Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200025, China
|
47
|
Feng B, He K. Image Segmentation via Multiscale Perceptual Grouping. Symmetry (Basel) 2022; 14:1076. [DOI: 10.3390/sym14061076] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/10/2022] Open
Abstract
The human eyes observe an image through perceptual units surrounded by symmetrical or asymmetrical object contours at a proper scale, which enables them to quickly extract the foreground of the image. Inspired by this characteristic, a model combined with multiscale perceptual grouping and unit-based segmentation is proposed in this paper. In the multiscale perceptual grouping part, a novel total variation regularization is proposed to smooth the image into different scales, which removes the inhomogeneity and preserves the edges. To simulate perceptual units surrounded by contours, the watershed method is utilized to cluster pixels into groups. The scale of smoothness is determined by the number of perceptual units. In the segmentation part, perceptual units are regarded as the basic element instead of discrete pixels in the graph cut. The appearance models of the foreground and background are constructed by combining the perceptual units. According to the relationship between perceptual units and the appearance model, the foreground can be segmented through a minimum-cut/maximum-flow algorithm. The experiment conducted on the CMU-Cornell iCoseg database shows that the proposed model has a promising performance.
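The unit-based segmentation idea above, operating on perceptual units rather than discrete pixels, can be illustrated with a toy sketch. The flood-fill grouping and nearest-mean classification below are deliberate simplifications assumed for demonstration; the paper itself uses watershed grouping and a minimum-cut/maximum-flow solver over the unit graph:

```python
import numpy as np
from collections import deque

def perceptual_units(image):
    """Group pixels of identical intensity into 4-connected units via
    flood fill -- a toy stand-in for watershed-based perceptual grouping."""
    h, w = image.shape
    labels = -np.ones((h, w), dtype=int)
    next_label = 0
    for sy in range(h):
        for sx in range(w):
            if labels[sy, sx] != -1:
                continue
            queue = deque([(sy, sx)])
            labels[sy, sx] = next_label
            while queue:
                y, x = queue.popleft()
                for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ny, nx = y + dy, x + dx
                    if (0 <= ny < h and 0 <= nx < w and labels[ny, nx] == -1
                            and image[ny, nx] == image[y, x]):
                        labels[ny, nx] = next_label
                        queue.append((ny, nx))
            next_label += 1
    return labels, next_label

def classify_units(image, labels, n_units, fg_mean, bg_mean):
    """Label whole units (not individual pixels) as foreground or
    background by which appearance model their mean intensity is closer to."""
    fg = np.zeros_like(labels, dtype=bool)
    for u in range(n_units):
        mean = image[labels == u].mean()
        if abs(mean - fg_mean) < abs(mean - bg_mean):
            fg[labels == u] = True
    return fg

img = np.array([[0, 0, 9, 9],
                [0, 0, 9, 9],
                [0, 0, 0, 0]])
labels, n = perceptual_units(img)
mask = classify_units(img, labels, n, fg_mean=9.0, bg_mean=0.0)
```

Treating units as graph nodes shrinks the problem dramatically: the min-cut runs over a handful of units instead of every pixel.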
|
48
|
Ansari MY, Abdalla A, Ansari MY, Ansari MI, Malluhi B, Mohanty S, Mishra S, Singh SS, Abinahed J, Al-Ansari A, Balakrishnan S, Dakua SP. Practical utility of liver segmentation methods in clinical surgeries and interventions. BMC Med Imaging 2022; 22:97. [PMID: 35610600 PMCID: PMC9128093 DOI: 10.1186/s12880-022-00825-2] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2022] [Accepted: 05/09/2022] [Indexed: 12/15/2022] Open
Abstract
Clinical imaging (e.g., magnetic resonance imaging and computed tomography) is a crucial adjunct for clinicians, aiding in the diagnosis of diseases and planning of appropriate interventions. This is especially true in malignant conditions such as hepatocellular carcinoma (HCC), where image segmentation (such as accurate delineation of liver and tumor) is the preliminary step taken by clinicians to optimize diagnosis, staging, and treatment planning and intervention (e.g., transplantation, surgical resection, radiotherapy, PVE, embolization, etc.). Thus, segmentation methods could potentially impact diagnosis and treatment outcomes. This paper comprehensively reviews the literature (from 2012 to 2021) for relevant segmentation methods and proposes a broad categorization based on their clinical utility (i.e., surgical and radiological interventions) in HCC. The categorization is based on parameters such as precision, accuracy, and automation.
|
49
|
Yuan J, Kaur D, Zhou Z, Nagle M, Kiddle NG, Doshi NA, Behnoudfar A, Peremyslova E, Ma C, Strauss SH, Li F. Robust High-Throughput Phenotyping with Deep Segmentation Enabled by a Web-Based Annotator. Plant Phenomics 2022; 2022:9893639. [PMID: 36059601 PMCID: PMC9394117 DOI: 10.34133/2022/9893639] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [What about the content of this article? (0)] [Affiliation(s)] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2021] [Accepted: 03/17/2022] [Indexed: 11/24/2022]
Abstract
The abilities of plant biologists and breeders to characterize the genetic basis of physiological traits are limited by their ability to obtain quantitative data representing precise details of trait variation, and particularly to collect such data at a high-throughput scale with low cost. Although deep learning methods have demonstrated unprecedented potential to automate plant phenotyping, these methods commonly rely on large training sets that can be time-consuming to generate. Intelligent algorithms have therefore been proposed to enhance the productivity of these annotations and reduce human effort. We propose a high-throughput phenotyping system which features a Graphical User Interface (GUI) and a novel interactive segmentation algorithm: Semantic-Guided Interactive Object Segmentation (SGIOS). By providing a user-friendly interface and intelligent assistance with annotation, this system offers potential to streamline and accelerate the generation of training sets, reducing the effort required by the user. Our evaluation shows that our proposed SGIOS model requires fewer user inputs than state-of-the-art models for interactive segmentation. As a case study of the GUI applied to genetic discovery in plants, we present an example of results from a preliminary genome-wide association study (GWAS) of in planta regeneration in Populus trichocarpa (poplar). We further demonstrate that the inclusion of a semantic prior map with SGIOS can accelerate the training process for future GWAS, using a sample dataset extracted from a poplar GWAS of in vitro regeneration. Our phenotyping system surpasses unassisted humans in rapidly and precisely phenotyping the traits of interest.
The scalability of this system enables large-scale phenomic screens that would otherwise be time-prohibitive, thereby providing increased power for GWAS, mutant screens, and other studies relying on large sample sizes to characterize the genetic basis of trait variation. Our user-friendly system can be used by researchers lacking a computational background, thus helping to democratize the use of deep segmentation as a tool for plant phenotyping.
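Interactive segmentation tools such as the one described typically encode annotator clicks as extra guidance channels fed to the network alongside the image. The Euclidean distance-map encoding below is a common generic scheme, assumed here for illustration; it is not necessarily how SGIOS ingests its user inputs:

```python
import numpy as np

def click_guidance_maps(shape, pos_clicks, neg_clicks):
    """Encode positive/negative annotator clicks as two Euclidean
    distance maps, stacked as guidance channels for a segmentation net."""
    h, w = shape
    yy, xx = np.mgrid[0:h, 0:w]

    def dist_map(clicks):
        if not clicks:
            # No clicks: a flat map at the image diagonal (max distance).
            return np.full(shape, float(np.hypot(h, w)))
        d = [np.hypot(yy - y, xx - x) for y, x in clicks]
        return np.min(d, axis=0)

    return np.stack([dist_map(pos_clicks), dist_map(neg_clicks)])

maps = click_guidance_maps((4, 4), pos_clicks=[(0, 0)], neg_clicks=[(3, 3)])
```

The two maps would be concatenated with the image channels, so each added click cheaply refines the network's input without retraining.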
Affiliation(s)
- Zheng Zhou
- Oregon State University, Corvallis, OR, USA
- Fuxin Li
- Oregon State University, Corvallis, OR, USA
|
50
|
Abstract
This article analyzes the behavior of a Brownian fluctuation process under a mixed strategic game setup. A variant of a compound Brownian motion has been newly proposed, which is called the Shifted Brownian Fluctuation Process to predict the turning points of a stochastic process. This compound process evolves until it reaches one step prior to the turning point. The Shifted Brownian Fluctuation Game has been constructed based on this new process to find the optimal moment of actions. Analytically tractable results are obtained by using the fluctuation theory and the mixed strategy game theory. The joint functional of the Shifted Brownian Fluctuation Process is targeted for transformation of the first passage time and its index. These results enable us to predict the moment of a turning point and the moment of actions to obtain the optimal payoffs of a game. This research adapts the theoretical framework to implement an autonomous trader for value assets including stocks and cybercurrencies.
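The first-passage machinery underlying these fluctuation-theory results can be illustrated with a Monte Carlo sketch. The paper derives analytic transforms of the first passage time and its index; the simulation below, including the drift, step size, and thresholds, is an assumption made for demonstration only:

```python
import random

def first_passage_time(threshold, mu=0.0, sigma=1.0, dt=0.01,
                       max_steps=100_000, seed=0):
    """Simulate drifted Brownian motion from 0 and return the first step
    index at which the path reaches `threshold` (None if never reached)."""
    rng = random.Random(seed)
    x, t = 0.0, 0
    while t < max_steps:
        # Euler step: drift plus Gaussian fluctuation scaled by sqrt(dt).
        x += mu * dt + sigma * (dt ** 0.5) * rng.gauss(0.0, 1.0)
        t += 1
        if x >= threshold:
            return t
    return None

# Same seed => same sample path, so a higher threshold cannot be
# crossed earlier than a lower one.
t_half = first_passage_time(0.5, mu=0.05)
t_one = first_passage_time(1.0, mu=0.05)
```

An autonomous trader in this framing would act one step before the predicted turning point, i.e., at an index derived from such first-passage statistics rather than from the raw path.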
|