1
Shao L, Fu T, Lin Y, Xiao D, Ai D, Zhang T, Fan J, Song H, Yang J. Facial augmented reality based on hierarchical optimization of similarity aspect graph. Comput Methods Programs Biomed 2024; 248:108108. [PMID: 38461712] [DOI: 10.1016/j.cmpb.2024.108108] [Received: 09/16/2022] [Revised: 02/05/2024] [Accepted: 02/29/2024] [Indexed: 03/12/2024]
Abstract
BACKGROUND Existing face matching methods require a point cloud to be drawn on the real face for registration; irregular deformation of the patient's skin introduces many outliers into this point cloud, resulting in low registration accuracy. METHODS This work proposes a non-contact pose estimation method based on hierarchical optimization of a similarity aspect graph. The method automatically identifies the 2D and 3D feature points of the face and constructs a distance-weighted, triangle-constrained similarity measure to describe the similarity between views. A mutual similarity clustering method is proposed to construct a hierarchical aspect graph whose nodes are 3D poses. A Monte Carlo tree search strategy then searches this hierarchical aspect graph for the optimal pose of the facial 3D model, enabling accurate registration of the facial 3D model to the real face. RESULTS The proposed method was evaluated in accuracy-verification experiments on phantoms and volunteers and compared with four state-of-the-art pose calibration methods. It achieved average fusion errors of 1.13 ± 0.20 mm and 0.92 ± 0.08 mm in the head phantom and volunteer experiments, respectively, the best fusion performance among all compared methods. CONCLUSIONS Our experiments demonstrate the effectiveness of the proposed pose estimation method for facial augmented reality.
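The fusion errors above summarize point-wise overlay accuracy as mean ± standard deviation of distances between corresponding landmarks. A minimal sketch of such an evaluation (hypothetical landmark arrays and function name, not the authors' code):

```python
import numpy as np

def fusion_error(projected_pts, detected_pts):
    """Summarize overlay accuracy as (mean, std) of Euclidean
    distances between corresponding landmark pairs, in mm."""
    d = np.linalg.norm(np.asarray(projected_pts) - np.asarray(detected_pts), axis=1)
    return d.mean(), d.std()

# Hypothetical example: five landmark pairs exactly 1 mm apart along x.
proj = np.zeros((5, 3))
det = proj + np.array([1.0, 0.0, 0.0])
mean_err, std_err = fusion_error(proj, det)
```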
Affiliation(s)
- Long Shao
- School of Computer Science & Technology, Beijing Institute of Technology, Beijing 100081, China
- Tianyu Fu
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Yucong Lin
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Deqiang Xiao
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Tao Zhang
- Department of Stomatology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song
- School of Computer Science & Technology, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
2
Yang S, Yin Y, Sun Y, Ai D, Xia X, Xu X, Song J. AZGP1 Aggravates Macrophage M1 Polarization and Pyroptosis in Periodontitis. J Dent Res 2024:220345241235616. [PMID: 38491721] [DOI: 10.1177/00220345241235616] [Indexed: 03/18/2024] [Open Access]
Abstract
Periodontal tissue destruction in periodontitis is a consequence of the host inflammatory response to periodontal pathogens, which can be aggravated in the presence of type 2 diabetes mellitus (T2DM). Accumulating evidence highlights the intricate involvement of macrophage-mediated inflammation in the pathogenesis of periodontitis under both normal and T2DM conditions. However, the underlying mechanism remains elusive. Alpha-2-glycoprotein 1 (AZGP1), a glycoprotein featuring an MHC-I domain, has been implicated in both inflammation and metabolic disorders. In this study, we found that AZGP1 was primarily colocalized with macrophages in periodontitis tissues. AZGP1 was increased in periodontitis compared with controls, and was further elevated when accompanied by T2DM. Adeno-associated virus-mediated overexpression of Azgp1 in the periodontium significantly enhanced periodontal inflammation and alveolar bone loss, accompanied by elevated M1 macrophages and pyroptosis in murine models of periodontitis and T2DM-associated periodontitis, while Azgp1-/- mice exhibited the opposite effects. In primary bone marrow-derived macrophages stimulated by lipopolysaccharide (LPS) or LPS and palmitic acid (PA), overexpression or knockout of Azgp1 markedly upregulated or suppressed, respectively, the expression of macrophage M1 markers and key components of the NLR Family Pyrin Domain Containing 3 (NLRP3)/caspase-1 signaling pathway. Moreover, conditioned medium from Azgp1-overexpressing macrophages under LPS or LPS+PA stimulation induced higher inflammatory activation and lower osteogenic differentiation in human periodontal ligament stem cells (hPDLSCs). Furthermore, the elevated M1 polarization and pyroptosis in macrophages and the associated detrimental effects on hPDLSCs induced by Azgp1 overexpression could be rescued by NLRP3 or caspase-1 inhibition. Collectively, our study elucidated that AZGP1 aggravates periodontitis by promoting macrophage M1 polarization and pyroptosis through the NLRP3/caspase-1 pathway, an effect accentuated in T2DM-associated periodontitis. This finding deepens the understanding of AZGP1 in the pathogenesis of periodontitis and suggests AZGP1 as a crucial link mediating the adverse effects of diabetes on periodontal inflammation.
Affiliation(s)
- S Yang
- College of Stomatology, Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, China
- Y Yin
- College of Stomatology, Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, China
- Y Sun
- College of Stomatology, Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, China
- D Ai
- College of Stomatology, Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, China
- X Xia
- Department of Endocrinology, The Second Affiliated Hospital, Chongqing Medical University, Chongqing, China
- X Xu
- College of Stomatology, Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, China
- J Song
- College of Stomatology, Chongqing Medical University, Chongqing, China
- Chongqing Key Laboratory of Oral Diseases and Biomedical Sciences, Chongqing, China
- Chongqing Municipal Key Laboratory of Oral Biomedical Engineering of Higher Education, Chongqing, China
3
Li W, Song H, Ai D, Shi J, Wang Y, Wu W, Yang J. Semi-supervised segmentation of orbit in CT images with paired copy-paste strategy. Comput Biol Med 2024; 171:108176. [PMID: 38401453] [DOI: 10.1016/j.compbiomed.2024.108176] [Received: 11/29/2023] [Revised: 02/06/2024] [Accepted: 02/18/2024] [Indexed: 02/26/2024]
Abstract
The segmentation of the orbit in computed tomography (CT) images plays a crucial role in facilitating the quantitative analysis of orbital decompression surgery for patients with Thyroid-associated Ophthalmopathy (TAO). However, orbit segmentation, particularly in postoperative images, remains challenging due to significant shape variation and the limited amount of labeled data. In this paper, we present a two-stage semi-supervised framework for the automatic segmentation of the orbit in both preoperative and postoperative images, consisting of a pseudo-label generation stage and a semi-supervised segmentation stage. A Paired Copy-Paste strategy is introduced to combine features extracted from both preoperative and postoperative images, strengthening the network's ability to discern changes in orbital boundaries. More specifically, we employ a random cropping technique to transfer regions from labeled preoperative images (foreground) onto unlabeled postoperative images (background), as well as unlabeled preoperative images (foreground) onto labeled postoperative images (background). Note that each pair of preoperative and postoperative images belongs to the same patient. The semi-supervised segmentation network (stage 2) processes the two mixed images using a combination of supervisory signals from pseudo labels (stage 1) and ground truth. The proposed method was trained and tested on a CT dataset obtained from the Eye Hospital of Wenzhou Medical University. The experimental results demonstrate that the proposed method achieves a mean Dice similarity coefficient (DSC) of 91.92% with only 5% labeled data, surpassing the current state-of-the-art method by 2.4%.
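The copy-paste idea can be sketched as follows: a random rectangular region from a "foreground" image is pasted onto a "background" image, and the same paste mask is applied to the paired label (or pseudo-label) maps so image and supervision stay consistent. This is an illustrative simplification (single-channel images, a hypothetical function name), not the authors' implementation:

```python
import numpy as np

def paired_copy_paste(fg_img, bg_img, crop_frac=0.5, rng=None):
    """Paste a random rectangular crop of fg_img onto bg_img.

    Returns the mixed image and the boolean paste mask; applying the
    identical mask to the paired label maps mixes supervision the same way."""
    rng = np.random.default_rng(rng)
    h, w = fg_img.shape
    ch, cw = int(h * crop_frac), int(w * crop_frac)
    y = int(rng.integers(0, h - ch + 1))
    x = int(rng.integers(0, w - cw + 1))
    mask = np.zeros((h, w), dtype=bool)
    mask[y:y + ch, x:x + cw] = True
    mixed = bg_img.copy()
    mixed[mask] = fg_img[mask]
    return mixed, mask
```

In the paper both pasting directions are used (labeled preoperative onto unlabeled postoperative and vice versa), always within the same patient's image pair.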
Affiliation(s)
- Wentao Li
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jieliang Shi
- Eye Hospital of Wenzhou Medical University, Wenzhou, 325027, China
- Yuanyuan Wang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Wencan Wu
- Eye Hospital of Wenzhou Medical University, Wenzhou, 325027, China
- Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
4
Zhang Z, Song H, Fan J, Fu T, Li Q, Ai D, Xiao D, Yang J. Dual-correlate optimized coarse-fine strategy for monocular laparoscopic videos feature matching via multilevel sequential coupling feature descriptor. Comput Biol Med 2024; 169:107890. [PMID: 38168646] [DOI: 10.1016/j.compbiomed.2023.107890] [Received: 08/12/2023] [Revised: 12/13/2023] [Accepted: 12/18/2023] [Indexed: 01/05/2024]
Abstract
Feature matching of monocular laparoscopic videos is crucial for visualization enhancement in computer-assisted surgery; the keys to high-quality matches are accurate homography estimation, accurate relative pose estimation, sufficient matches, and fast calculation. However, limited by monocular laparoscopic imaging characteristics such as highlight noise, motion blur, texture interference, and illumination variation, most existing feature matching methods struggle to produce high-quality matches efficiently and in sufficient number. To overcome these limitations, this paper presents a novel sequential coupling feature descriptor to extract and express multilevel feature maps efficiently, and a dual-correlate optimized coarse-fine strategy to establish dense matches at the coarse level and refine pixel-wise matches at the fine level. First, a novel sequential coupling swin transformer layer is designed in the feature descriptor to learn and extract rich multilevel feature representations without increasing complexity. Then, a dual-correlate optimized coarse-fine strategy is proposed to match coarse feature sequences at low resolution, and the correlated fine feature sequences are optimized to refine pixel-wise matches based on the coarse matching priors. Finally, the sequential coupling feature descriptor and dual-correlate optimization are merged into the Sequential Coupling Dual-Correlate Network (SeCo DC-Net) to produce high-quality matches. Evaluation was conducted on two public laparoscopic datasets, SCARED and EndoSLAM, and the experimental results show the proposed network outperforms state-of-the-art methods in homography estimation, relative pose estimation, reprojection error, number of matched pairs, and inference runtime. The source code is publicly available at https://github.com/Iheckzza/FeatureMatching.
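The coarse-then-fine idea, matching at low resolution and then refining pixel-wise around the coarse priors, can be illustrated with a toy translational example. This sketch uses a plain SSD search on a synthetic image (with wrap-around via np.roll as a simplification); it does not reproduce the paper's transformer-based descriptor:

```python
import numpy as np

def downsample(img, f):
    """Average-pool a 2D image by an integer factor f."""
    h, w = img.shape
    return img[:h - h % f, :w - w % f].reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def best_offset(ref, tgt, max_shift):
    """Exhaustive SSD search: find (dy, dx) so np.roll(tgt, (dy, dx)) best matches ref."""
    best, best_off = np.inf, (0, 0)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            err = np.mean((ref - np.roll(tgt, (dy, dx), axis=(0, 1))) ** 2)
            if err < best:
                best, best_off = err, (dy, dx)
    return best_off

def coarse_to_fine(ref, tgt, f=4, coarse_range=4):
    """Coarse match at 1/f resolution, then refine within a +/-f pixel window."""
    cy, cx = best_offset(downsample(ref, f), downsample(tgt, f), coarse_range)
    prior = np.roll(tgt, (f * cy, f * cx), axis=(0, 1))  # apply coarse prior
    ry, rx = best_offset(ref, prior, f)                  # fine refinement
    return f * cy + ry, f * cx + rx
```

The coarse stage shrinks the search space; the fine stage only explores a small window around the upscaled coarse estimate, which is the efficiency argument behind coarse-fine strategies in general.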
Affiliation(s)
- Ziang Zhang
- The School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- The School of Computer Science & Technology, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- The School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Tianyu Fu
- The School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, China
- Qiang Li
- The School of Computer Science & Technology, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- The School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Deqiang Xiao
- The School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jian Yang
- The School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
5
Yang S, Wang Y, Ai D, Geng H, Zhang D, Xiao D, Song H, Li M, Yang J. Augmented Reality Navigation System for Biliary Interventional Procedures With Dynamic Respiratory Motion Correction. IEEE Trans Biomed Eng 2024; 71:700-711. [PMID: 38241137] [DOI: 10.1109/tbme.2023.3316290] [Indexed: 01/21/2024]
Abstract
OBJECTIVE Biliary interventional procedures require physicians to track the interventional instrument tip (Tip) precisely with X-ray images. However, Tip positioning relies heavily on the physician's experience due to the limitations of X-ray imaging and respiratory interference, which leads to biliary damage, prolonged operation time, and increased X-ray radiation. METHODS We construct an augmented reality (AR) navigation system for biliary interventional procedures comprising system calibration, respiratory motion correction, and fusion navigation. First, the magnetic and 3D computed tomography (CT) coordinates are aligned through system calibration. Second, a respiratory motion correction method based on manifold regularization is proposed to correct the misalignment of the two coordinate systems caused by respiratory motion. Third, the virtual biliary tract, liver, and Tip derived from CT are overlaid at the corresponding positions on the patient for dynamic virtual-real fusion. RESULTS Our system was evaluated on phantoms and patients, achieving average alignment errors of 0.75 ± 0.17 mm and 2.79 ± 0.46 mm, respectively. Navigation experiments conducted on phantoms achieved an average Tip positioning error of 0.98 ± 0.15 mm and an average fusion error of 1.67 ± 0.34 mm after correction. CONCLUSION Our system can automatically register the Tip to the corresponding location in CT and dynamically overlay the 3D virtual model onto patients to provide accurate and intuitive AR navigation. SIGNIFICANCE This study demonstrates the clinical potential of our system for assisting physicians during biliary interventional procedures. Our system enables dynamic visualization of the virtual model on patients, reducing the reliance on contrast agents and X-ray usage.
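System calibration here means finding the rigid transform that aligns paired points measured in magnetic-tracker and CT coordinates. A standard least-squares solution for that sub-problem is the Kabsch algorithm, sketched below; this is a generic textbook method offered for illustration, not necessarily the authors' exact calibration procedure:

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst
    points (paired 3-D fiducials), via the Kabsch algorithm."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    H = (src - mu_s).T @ (dst - mu_d)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))     # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_d - R @ mu_s
    return R, t
```

Given the recovered (R, t), any point measured by the magnetic tracker can be mapped into CT space as `R @ p + t`.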
6
Chen S, Fan J, Ding Y, Geng H, Ai D, Xiao D, Song H, Wang Y, Yang J. PEA-Net: A progressive edge information aggregation network for vessel segmentation. Comput Biol Med 2024; 169:107766. [PMID: 38150885] [DOI: 10.1016/j.compbiomed.2023.107766] [Received: 08/11/2023] [Revised: 10/18/2023] [Accepted: 11/21/2023] [Indexed: 12/29/2023]
Abstract
Automatic vessel segmentation is a critical area of research in medical image analysis, as it can greatly assist doctors in accurately and efficiently diagnosing vascular diseases. However, accurately extracting the complete vessel structure from images remains a challenge due to issues such as uneven contrast and background noise. Existing methods primarily focus on segmenting individual pixels and often fail to consider vessel features and morphology. As a result, these methods often produce fragmented results and misidentify vessel-like background noise, leading to missing and outlier points in the overall segmentation. To address these issues, this paper proposes a novel approach called the progressive edge information aggregation network for vessel segmentation (PEA-Net). The proposed method consists of several key components. First, a dual-stream receptive field encoder (DRE) is introduced to preserve fine structural features and mitigate false positive predictions caused by background noise. This is achieved by combining vessel morphological features obtained from different receptive field sizes. Second, a progressive complementary fusion (PCF) module is designed to enhance fine vessel detection and improve connectivity. This module complements the decoding path by combining features from previous iterations and the DRE, incorporating nonsalient information. Additionally, segmentation-edge decoupling enhancement (SDE) modules are employed as decoders to integrate upsampling features with nonsalient information provided by the PCF. This integration enhances both edge and segmentation information. The features in the skip connection and decoding path are iteratively updated to progressively aggregate fine structure information, thereby optimizing segmentation results and reducing topological disconnections. Experimental results on multiple datasets demonstrate that the proposed PEA-Net model and strategy achieve optimal performance in both pixel-level and topology-level metrics.
Affiliation(s)
- Sigeng Chen
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yang Ding
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Haixiao Geng
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Deqiang Xiao
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Yining Wang
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Jian Yang
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
7
Liu S, Fan J, Yang Y, Xiao D, Ai D, Song H, Wang Y, Yang J. Monocular endoscopy images depth estimation with multi-scale residual fusion. Comput Biol Med 2024; 169:107850. [PMID: 38145602] [DOI: 10.1016/j.compbiomed.2023.107850] [Received: 08/04/2023] [Revised: 11/16/2023] [Accepted: 12/11/2023] [Indexed: 12/27/2023]
Abstract
BACKGROUND Monocular depth estimation plays a fundamental role in clinical endoscopic surgery. However, the coherent illumination, smooth surfaces, and texture-less nature of endoscopy images present significant challenges to traditional depth estimation methods, and existing approaches struggle to perceive depth accurately in such settings. METHOD To overcome these challenges, this paper proposes a novel multi-scale residual fusion method for estimating the depth of monocular endoscopy images. Specifically, we address the issue of coherent illumination by leveraging a frequency-domain component space transformation of the image, thereby enhancing the stability of the scene's light source. Moreover, we employ an image radiation intensity attenuation model to estimate the initial depth map. Finally, to refine the accuracy of depth estimation, we utilize a multi-scale residual fusion optimization technique. RESULTS To evaluate the performance of our proposed method, extensive experiments were conducted on public datasets. The structural similarity measures for continuous frames in three distinct clinical data scenes reached 0.94, 0.82, and 0.84, respectively, demonstrating the effectiveness of our approach in capturing the intricate details of endoscopy images. Furthermore, the depth estimation accuracy reached 89.3% and 91.2% for the two models' data, respectively, underscoring the robustness of our method. CONCLUSIONS Overall, the promising results obtained on public datasets highlight the significant potential of our method for clinical applications, facilitating reliable depth estimation and enhancing the quality of endoscopic surgical procedures.
Affiliation(s)
- Shiyuan Liu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- China Center for Information Industry Development, Beijing, 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yun Yang
- Department of General Surgery, Beijing Friendship Hospital, Capital Medical University, National Clinical Research Center for Digestive Diseases, Beijing 100050, China
- Deqiang Xiao
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
8
Yan Q, Xiao D, Jia Y, Ai D, Fan J, Song H, Xu C, Wang Y, Yang J. A multi-dimensional CFD framework for fast patient-specific fractional flow reserve prediction. Comput Biol Med 2024; 168:107718. [PMID: 37988787] [DOI: 10.1016/j.compbiomed.2023.107718] [Received: 06/09/2023] [Revised: 10/01/2023] [Accepted: 11/15/2023] [Indexed: 11/23/2023]
Abstract
Fractional flow reserve (FFR) is considered the gold standard for diagnosing coronary myocardial ischemia. Existing 3D computational fluid dynamics (CFD) methods attempt to predict FFR noninvasively from coronary computed tomography angiography (CTA). However, the accuracy and efficiency of 3D CFD methods in coronary arteries are considerably limited. In this work, we introduce a multi-dimensional CFD framework that improves the accuracy of FFR prediction by estimating 0D patient-specific boundary conditions, and increases efficiency by generating 3D initial conditions. The multi-dimensional CFD models comprise a 3D vascular model for coronary simulation, a 1D vascular model for iterative optimization, and a 0D vascular model for expressing boundary conditions. To improve accuracy, we use clinical parameters to derive 0D patient-specific boundary conditions with an optimization algorithm. To improve efficiency, we evaluate the convergence state using the 1D vascular model and obtain convergence parameters to generate appropriate 3D initial conditions. The 0D patient-specific boundary conditions and the 3D initial conditions are used to predict FFR (FFRC). We conducted a retrospective study involving 40 patients (61 diseased vessels) with invasive FFR and corresponding CTA images. The results demonstrate that FFRC and invasive FFR have a strong linear correlation (r = 0.80, p < 0.001) and high consistency (mean difference: 0.014 ± 0.071). After applying the FFR cut-off value of 0.8, the accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of FFRC were 88.5%, 93.3%, 83.9%, 84.8%, and 92.9%, respectively. Compared with the conventional zero-initial-conditions method, our method improves prediction efficiency by 71.3% per case. Therefore, our multi-dimensional CFD framework significantly improves both the accuracy and efficiency of FFR prediction.
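The diagnostic statistics reported above follow from a per-vessel confusion matrix at the 0.8 cut-off, with FFR ≤ 0.8 treated as positive (ischemic). A minimal reference computation (hypothetical FFR lists; assumes every confusion-matrix cell is non-empty, as in the study):

```python
def diagnostic_metrics(ffr_pred, ffr_true, cutoff=0.8):
    """Per-vessel classification metrics; FFR <= cutoff counts as positive."""
    pairs = list(zip(ffr_pred, ffr_true))
    tp = sum(p <= cutoff and t <= cutoff for p, t in pairs)
    tn = sum(p > cutoff and t > cutoff for p, t in pairs)
    fp = sum(p <= cutoff and t > cutoff for p, t in pairs)
    fn = sum(p > cutoff and t <= cutoff for p, t in pairs)
    return {
        "accuracy": (tp + tn) / (tp + tn + fp + fn),
        "sensitivity": tp / (tp + fn),   # true positive rate
        "specificity": tn / (tn + fp),   # true negative rate
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```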
Affiliation(s)
- Qing Yan
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Deqiang Xiao
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yaosong Jia
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jingfan Fan
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song
- School of Computer Science, Beijing Institute of Technology, Beijing 100081, China
- Cheng Xu
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Yining Wang
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, China
- Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
9
Lin Y, Li J, Xiao H, Zheng L, Xiao Y, Song H, Fan J, Xiao D, Ai D, Fu T, Wang F, Lv H, Yang J. Automatic literature screening using the PAJO deep-learning model for clinical practice guidelines. BMC Med Inform Decis Mak 2023; 23:247. [PMID: 37924054] [PMCID: PMC10625217] [DOI: 10.1186/s12911-023-02328-8] [Received: 02/18/2023] [Accepted: 10/06/2023] [Indexed: 11/06/2023] [Open Access]
Abstract
BACKGROUND Clinical practice guidelines (CPGs) are designed to assist doctors in clinical decision making, and high-quality research articles are important for the development of good CPGs. Commonly used manual screening processes are time-consuming and labor-intensive, and artificial intelligence (AI)-based techniques have been widely used to analyze unstructured data, including texts and images. However, there are currently no effective and efficient AI-based systems for literature screening, so developing one can provide significant advantages. METHODS Using advanced AI techniques, we propose the Paper title, Abstract, and Journal (PAJO) model, which treats article screening as a classification problem. For training, articles appearing in the current CPGs are treated as positive samples; the others are treated as negative samples. The features of the texts (e.g., titles and abstracts) and journal characteristics are then fully utilized by the PAJO model using the pretrained bidirectional-encoder-representations-from-transformers (BERT) model. The resulting text and journal encoders, along with an attention mechanism, are integrated in the PAJO model to complete the task. RESULTS We collected 89,940 articles from PubMed to construct a dataset related to neck pain. Extensive experiments show that the PAJO model surpasses the state-of-the-art baseline by 1.91% (F1 score) and 2.25% (area under the receiver operating characteristic curve). Its prediction performance was also evaluated against subject-matter experts, demonstrating that PAJO can successfully screen high-quality articles. CONCLUSIONS The PAJO model provides an effective solution for automatic literature screening. It can screen high-quality articles on neck pain and significantly improve the efficiency of CPG development. The methodology of PAJO can also be easily extended to other diseases for literature screening.
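The F1 score used to compare models is the harmonic mean of precision and recall over the binary include/exclude screening labels. A minimal reference computation (hypothetical labels; assumes at least one predicted and one true positive):

```python
def f1_score(y_true, y_pred):
    """F1 for binary screening labels (1 = article included in the guideline corpus)."""
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp)  # fraction of flagged articles that are relevant
    recall = tp / (tp + fn)     # fraction of relevant articles that were flagged
    return 2 * precision * recall / (precision + recall)
```

For screening tasks with a heavy class imbalance (few guideline-worthy articles among many candidates), F1 is preferred over plain accuracy because a model that rejects everything would still score high accuracy.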
Affiliation(s)
- Yucong Lin
- School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, China
- Jia Li
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Huan Xiao
- School of Statistics, Renmin University of China, Beijing, 100872, China
- Lujie Zheng
- School of Computer Science & Technology, Beijing Institute of Technology, Beijing, 100081, China
- Ying Xiao
- School of Automation, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science & Technology, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Deqiang Xiao
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Tianyu Fu
- School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, China
- Feifei Wang
- School of Statistics, Renmin University of China, Beijing, 100872, China
- Center for Applied Statistics, Renmin University of China, Beijing, 100872, China
- Han Lv
- Department of Radiology, Beijing Friendship Hospital, Capital Medical University, Beijing, 100050, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
10
Li Q, Song H, Wei Z, Yang F, Fan J, Ai D, Lin Y, Yu X, Yang J. Densely Connected U-Net With Criss-Cross Attention for Automatic Liver Tumor Segmentation in CT Images. IEEE/ACM Trans Comput Biol Bioinform 2023; 20:3399-3410. [PMID: 35984790] [DOI: 10.1109/tcbb.2022.3198425]
Abstract
Automatic liver tumor segmentation plays a key role in radiation therapy of hepatocellular carcinoma. In this paper, we propose a novel densely connected U-Net model with criss-cross attention (CC-DenseUNet) to segment liver tumors in computed tomography (CT) images. The dense interconnections in CC-DenseUNet ensure the maximum information flow between encoder layers when extracting intra-slice features of liver tumors. Moreover, the criss-cross attention is used in CC-DenseUNet to efficiently capture only the necessary and meaningful non-local contextual information of CT images containing liver tumors. We evaluated the proposed CC-DenseUNet on the LiTS dataset and the 3DIRCADb dataset. Experimental results show that the proposed method reaches the state-of-the-art performance for liver tumor segmentation. We further experimentally demonstrate the robustness of the proposed method on a clinical dataset comprising 20 CT volumes.
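The dense interconnections described above mean every encoder layer receives the concatenation of all earlier feature maps. A toy numpy sketch of that connectivity pattern (1-D features and random linear maps stand in for convolutions; not the paper's network):

```python
import numpy as np

def dense_block(x, num_layers=3, growth=4, seed=0):
    """Toy dense connectivity: each layer sees the concatenation of all
    earlier feature maps, and its output is appended to the collection."""
    rng = np.random.default_rng(seed)
    features = [x]
    for _ in range(num_layers):
        inp = np.concatenate(features, axis=-1)        # dense interconnection
        w = rng.standard_normal((inp.shape[-1], growth))
        features.append(np.maximum(inp @ w, 0.0))      # linear map + ReLU
    return np.concatenate(features, axis=-1)

out = dense_block(np.ones((2, 8)))  # batch of 2, 8 input channels
```

With 3 layers of growth 4, the output carries 8 + 3 × 4 = 20 channels, the input included, which is the "maximum information flow" property of dense blocks.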
11
Liu D, Ai D, Fu T, Gao Y, Fan J, Song H, Xiao D, Liang P, Yang J. Local Contractive Registration with Biomechanical Model: Assessing Microwave Ablation after Compensation for Tissue Shrinkage. IEEE J Biomed Health Inform 2023; PP:1-17. [PMID: 37747865] [DOI: 10.1109/jbhi.2023.3318893]
Abstract
Microwave ablation (MWA) is a minimally invasive procedure for the treatment of liver tumors. Accumulating clinical evidence has identified the minimal ablative margin (MAM) as a significant predictor of local tumor progression (LTP). In clinical practice, MAM assessment is typically carried out through image registration of pre- and post-MWA images. However, this process faces two main challenges: the non-homologous match between tumor and coagulation with inconsistent image appearance, and tissue shrinkage caused by thermal dehydration. These challenges result in low precision when traditional registration methods are used for MAM assessment. In this paper, we present a local contractive nonrigid registration method using a biomechanical model (LC-BM) to address these challenges and precisely assess the MAM. The LC-BM contains two consecutive parts: (1) local contractive decomposition (LC-part), which reduces the incorrect match between the tumor and coagulation and quantifies the shrinkage in the external coagulation region, and (2) biomechanical model constraint (BM-part), which compensates for the shrinkage in the internal coagulation region. After quantifying and compensating for tissue shrinkage, the warped tumor is overlaid on the coagulation, and the MAM is then assessed. We evaluated the method using prospectively collected data from 36 patients with 47 liver tumors, comparing LC-BM with 11 state-of-the-art methods. LTP was diagnosed through contrast-enhanced MR follow-up images, serving as the ground truth for tumor recurrence. LC-BM achieved the highest accuracy (97.9%) in predicting LTP, outperforming the other methods. Therefore, our proposed method holds significant potential to improve MAM assessment in MWA surgeries.
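Once the warped tumor is overlaid on the coagulation, the margin is a surface-to-surface distance. A heavily simplified numpy stand-in (brute-force point distances, not the paper's LC-BM pipeline) for reading the MAM off two registered surfaces:

```python
import numpy as np

def minimal_ablative_margin(tumor_pts, coag_pts):
    """Smallest distance from any registered tumor-surface point to the
    coagulation surface -- a brute-force simplification of the MAM."""
    d = np.linalg.norm(tumor_pts[:, None, :] - coag_pts[None, :, :], axis=-1)
    return float(d.min())

# Toy example: tumor points on a unit sphere, coagulation on a radius-2 sphere.
axes = np.array([[1, 0, 0], [-1, 0, 0], [0, 1, 0],
                 [0, -1, 0], [0, 0, 1], [0, 0, -1]], dtype=float)
mam = minimal_ablative_margin(axes, 2.0 * axes)
```

In the toy case every tumor point is 1 unit inside the coagulation boundary, so the computed margin is 1.0.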
12
Geng H, Xiao D, Yang S, Fan J, Fu T, Lin Y, Bai Y, Ai D, Song H, Wang Y, Duan F, Yang J. CT2X-IRA: CT to x-ray image registration agent using domain-cross multi-scale-stride deep reinforcement learning. Phys Med Biol 2023; 68:175024. [PMID: 37549676] [DOI: 10.1088/1361-6560/acede5]
Abstract
Objective.In computer-assisted minimally invasive surgery, the intraoperative x-ray image is enhanced by overlapping it with a preoperative CT volume to improve visualization of vital anatomical structures. Therefore, accurate and robust 3D/2D registration of CT volume and x-ray image is highly desired in clinical practices. However, previous registration methods were prone to initial misalignments and struggled with local minima, leading to issues of low accuracy and vulnerability.Approach.To improve registration performance, we propose a novel CT/x-ray image registration agent (CT2X-IRA) within a task-driven deep reinforcement learning framework, which contains three key strategies: (1) a multi-scale-stride learning mechanism provides multi-scale feature representation and flexible action step size, establishing fast and globally optimal convergence of the registration task. (2) A domain adaptation module reduces the domain gap between the x-ray image and digitally reconstructed radiograph projected from the CT volume, decreasing the sensitivity and uncertainty of the similarity measurement. (3) A weighted reward function facilitates CT2X-IRA in searching for the optimal transformation parameters, improving the estimation accuracy of out-of-plane transformation parameters under large initial misalignments.Main results.We evaluate the proposed CT2X-IRA on both the public and private clinical datasets, achieving target registration errors of 2.13 mm and 2.33 mm with the computation time of 1.5 s and 1.1 s, respectively, showing an accurate and fast workflow for CT/x-ray image rigid registration.Significance.The proposed CT2X-IRA obtains the accurate and robust 3D/2D registration of CT and x-ray images, suggesting its potential significance in clinical applications.
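The target registration error quoted above is the standard accuracy measure for such agents: the residual landmark displacement between the estimated and ground-truth rigid transforms. A minimal numpy illustration (generic definition with toy landmarks, not the paper's evaluation code):

```python
import numpy as np

def target_registration_error(points, R_est, t_est, R_gt, t_gt):
    """Mean distance between landmark positions under the estimated and
    the ground-truth rigid transforms (a standard TRE definition)."""
    est = points @ R_est.T + t_est
    ref = points @ R_gt.T + t_gt
    return float(np.linalg.norm(est - ref, axis=1).mean())

# Toy example: the estimate differs from ground truth by a 1 mm shift in x.
pts = np.array([[0.0, 0, 0], [10, 0, 0], [0, 10, 0]])
I = np.eye(3)
tre = target_registration_error(pts, I, np.array([1.0, 0, 0]), I, np.zeros(3))
```

A pure 1 mm translation offset yields a TRE of exactly 1.0 for every landmark.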
Affiliation(s)
- Haixiao Geng
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Deqiang Xiao
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Shuo Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Jingfan Fan
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Tianyu Fu
- School of Medical Engineering, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Yucong Lin
- School of Medical Engineering, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Yanhua Bai
- Department of Interventional Radiology, The First Medical Center of Chinese PLA General Hospital, Beijing 100853, People's Republic of China
- Danni Ai
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Hong Song
- School of Computer Science, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Yongtian Wang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Feng Duan
- Department of Interventional Radiology, The First Medical Center of Chinese PLA General Hospital, Beijing 100853, People's Republic of China
- Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
13
Mu J, Lin Y, Meng X, Fan J, Ai D, Chen D, Qiu H, Yang J, Gu Y. M-CSAFN: Multi-Color Space Adaptive Fusion Network for Automated Port-Wine Stains Segmentation. IEEE J Biomed Health Inform 2023; 27:3924-3935. [PMID: 37027679] [DOI: 10.1109/jbhi.2023.3247479]
Abstract
Automatic segmentation of port-wine stains (PWS) from clinical images is critical for accurate diagnosis and objective assessment of PWS. However, this is a challenging task due to the color heterogeneity, low contrast, and indistinguishable appearance of PWS lesions. To address these challenges, we propose a novel multi-color space adaptive fusion network (M-CSAFN) for PWS segmentation. First, a multi-branch detection model is constructed based on six typical color spaces, which utilizes rich color texture information to highlight the difference between lesions and surrounding tissues. Second, an adaptive fusion strategy is used to fuse complementary predictions, which addresses the significant differences within lesions caused by color heterogeneity. Third, a structural similarity loss with color information is proposed to measure the detail error between predicted and ground-truth lesions. Additionally, a PWS clinical dataset consisting of 1413 image pairs was established for the development and evaluation of PWS segmentation algorithms. To verify the effectiveness and superiority of the proposed method, we compared it with other state-of-the-art methods on our collected dataset and four publicly available skin lesion datasets (ISIC 2016, ISIC 2017, ISIC 2018, and PH2). The experimental results show that our method achieves remarkable performance on our collected dataset, reaching 92.29% and 86.14% on the Dice and Jaccard metrics, respectively. Comparative experiments on the other datasets also confirmed the reliability and potential capability of M-CSAFN in skin lesion segmentation.
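The Dice and Jaccard figures quoted above are standard overlap measures between a predicted mask and the ground truth; a minimal numpy sketch of both (generic definitions, not the paper's code):

```python
import numpy as np

def dice_jaccard(pred, truth):
    """Dice and Jaccard overlap between two binary segmentation masks."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.sum(pred & truth)
    dice = 2.0 * inter / (pred.sum() + truth.sum())
    jaccard = inter / np.sum(pred | truth)
    return float(dice), float(jaccard)

dice, jac = dice_jaccard([1, 1, 0, 0], [1, 0, 1, 0])
```

For the toy masks the intersection is one pixel out of two predicted and two true pixels, giving Dice 0.5 and Jaccard 1/3.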
14
Li ZP, Liu GQ, Yao WR, Chen ZP, Cheng XL, Sun J, Ai D, Wu RH. [Inhibitor with congenital factor Ⅶ deficiency in a child]. Zhonghua Er Ke Za Zhi 2023; 61:269-271. [PMID: 36849357] [DOI: 10.3760/cma.j.cn112140-20230114-00033]
Affiliation(s)
- Z P Li
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- G Q Liu
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- W R Yao
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- Z P Chen
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- X L Cheng
- Pharmacology Department, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- J Sun
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- D Ai
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- R H Wu
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
15
Han T, Ai D, Li X, Fan J, Song H, Wang Y, Yang J. Coronary artery stenosis detection via proposal-shifted spatial-temporal transformer in X-ray angiography. Comput Biol Med 2023; 153:106546. [PMID: 36641935] [DOI: 10.1016/j.compbiomed.2023.106546]
Abstract
Accurate detection of coronary artery stenosis in X-ray angiography (XRA) images is crucial for the diagnosis and treatment of coronary artery disease. However, stenosis detection remains a challenging task due to complicated vascular structures, poor imaging quality, and variable lesions. Although devoted to accurate stenosis detection, most methods exploit the spatio-temporal information of XRA sequences inefficiently, which limits their performance on the task. To overcome this problem, we propose a new stenosis detection framework built on a Transformer-based module that aggregates proposal-level spatio-temporal features. In this module, a proposal-shifted spatio-temporal tokenization (PSSTT) scheme is devised to gather spatio-temporal region-of-interest (RoI) features and obtain visual tokens within a local window. The Transformer-based feature aggregation (TFA) network then takes the tokens as inputs and enhances the RoI features by learning the long-range spatio-temporal context for final stenosis prediction. The effectiveness of our method was validated by conducting qualitative and quantitative experiments on 233 XRA sequences of the coronary artery. Our method achieves a high F1 score of 90.88%, outperforming 15 other state-of-the-art detection methods, which demonstrates that its strong ability to aggregate spatio-temporal features enables accurate stenosis detection from XRA images.
Affiliation(s)
- Tao Han
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Xinyu Li
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Yining Wang
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Jian Yang
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
16
Wu C, Fu T, Chen X, Xiao J, Ai D, Fan J, Lin Y, Song H, Yang J. Automatic spatial calibration of freehand ultrasound probe with a multilayer N-wire phantom. Ultrasonics 2023; 128:106862. [PMID: 36240539] [DOI: 10.1016/j.ultras.2022.106862]
Abstract
The classic N-wire phantom has been widely used for the calibration of freehand ultrasound probes. One of its main challenges is accurately identifying N-fiducials in ultrasound images, especially with multiple N-wire structures. In this study, a method using a multilayer N-wire phantom for the automatic spatial calibration of ultrasound images is proposed. All dots in the ultrasound image are segmented, scored, and classified according to the unique geometric features of the multilayer N-wire phantom. A recognition method for identifying N-fiducials from the dots is proposed for calibrating the spatial transformation of the ultrasound probe. At depths of 9, 11, 13, and 15 cm, the reconstruction errors of 50 points are 1.24 ± 0.16, 1.09 ± 0.06, 0.95 ± 0.08, and 1.02 ± 0.05 mm, respectively. The reconstruction mockup test shows that the distance accuracy is 1.11 ± 0.82 mm at a depth of 15 cm.
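The underlying N-wire principle (a textbook sketch, not this paper's multilayer scoring pipeline) is that the imaging plane cuts each "N" in three collinear dots, and the collinear ratio of the middle dot fixes its 3D position along the diagonal wire, whose endpoints are known in phantom coordinates:

```python
import numpy as np

def n_fiducial_3d(p1, p2, p3, diag_start, diag_end):
    """Locate the N-fiducial: the middle image dot p2 lies on the diagonal
    wire, at the same fractional position r as in the image line p1-p3."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    r = np.linalg.norm(p2 - p1) / np.linalg.norm(p3 - p1)
    diag_start = np.asarray(diag_start, dtype=float)
    diag_end = np.asarray(diag_end, dtype=float)
    return diag_start + r * (diag_end - diag_start)

# Toy example: image dots at x = 0, 4, 10 give ratio 0.4 along the diagonal.
fid = n_fiducial_3d([0, 0], [4, 0], [10, 0], [0, 0, 0], [10, 0, 5])
```

With ratio 0.4 the fiducial lands at 40% of the way along the diagonal, here (4, 0, 2) in phantom coordinates.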
Affiliation(s)
- Chan Wu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Tianyu Fu
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Xinyu Chen
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jian Xiao
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yucong Lin
- School of Medical Technology, Beijing Institute of Technology, Beijing 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
17
Li W, Fan J, Li S, Zheng Z, Tian Z, Ai D, Song H, Chen X, Yang J. An incremental registration method for endoscopic sinus and skull base surgery navigation: From phantom study to clinical trials. Med Phys 2023; 50:226-239. [PMID: 35997999] [DOI: 10.1002/mp.15941]
Abstract
PURPOSE Surface-based image-to-patient registration in current surgical navigation is mainly achieved with a 3D scanner, which has several limitations in clinical practice, such as an uncontrollable scanning range, complicated operation, and even a high failure rate. An accurate, robust, and easy-to-perform image-to-patient registration method is therefore urgently required. METHODS An incremental point cloud registration method was proposed for surface-based image-to-patient registration. The point cloud in image space was extracted from the computed tomography (CT) image, and a template matching method was applied to remove the redundant points. The corresponding point cloud in patient space was incrementally collected by an optically tracked pointer, while the nearest point distance (NPD) constraint was applied to ensure the uniformity of the collected points. A coarse-to-fine registration method under the constraints of coverage ratio (CR) and outliers ratio (OR) was then proposed to obtain the optimal rigid transformation from image to patient space. The proposed method was integrated into the recently developed endoscopic navigation system, and a phantom study and clinical trials were conducted to evaluate its performance. RESULTS The results of the phantom study revealed that the proposed constraints greatly improved the accuracy and robustness of registration. The comparative experimental results revealed that the proposed registration method significantly outperforms the scanner-based method and achieves accuracy comparable to the fiducial-based method. In the clinical trials, the average registration duration was 1.24 ± 0.43 min, the target registration error (TRE) of 294 marker points (59 patients) was 1.25 ± 0.40 mm, and the lower 97.5% confidence limit of the success rate of positioning marker points exceeded the expected value (97.56% vs. 95.00%), revealing that the accuracy of the proposed method meets the clinical requirement (TRE ⩽ 2 mm, p < 0.05). CONCLUSIONS The proposed method has both high accuracy and convenience, which were absent in the scanner-based and fiducial-based methods. Our findings will help improve the quality of endoscopic sinus and skull base surgery.
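The rigid image-to-patient transform at the core of such a pipeline is typically solved in closed form once correspondences are fixed. A minimal Kabsch/SVD sketch (a generic building block under the assumption of known correspondences, not the paper's full coarse-to-fine method with CR/OR constraints):

```python
import numpy as np

def rigid_align(src, dst):
    """Least-squares rotation R and translation t with R @ p + t ~ q for
    corresponding points p in src, q in dst (Kabsch/SVD)."""
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t

# Toy example: recover a 90-degree z-rotation plus a translation.
src = np.array([[0.0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]])
R_true = np.array([[0.0, -1, 0], [1, 0, 0], [0, 0, 1]])
t_true = np.array([1.0, 2, 3])
dst = src @ R_true.T + t_true
R, t = rigid_align(src, dst)
```

In an iterative (ICP-like) scheme this closed-form step is re-run each time the point correspondences are re-estimated.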
Affiliation(s)
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Shaowen Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhao Zheng
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhaorui Tian
- Ariemedi Medical Technology (Beijing) Co., Ltd., Beijing, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Xiaohong Chen
- Department of Otolaryngology Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
18
Meng X, Fan J, Yu H, Mu J, Li Z, Yang A, Liu B, Lv K, Ai D, Lin Y, Song H, Fu T, Xiao D, Ma G, Yang J, Gu Y. Volume-awareness and outlier-suppression co-training for weakly-supervised MRI breast mass segmentation with partial annotations. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2022.109988]
19
Ai D, Chen ZP, Li G, Yao JF, Ma JY, Ma J, Zhang LQ, Jiang J, Wu RH. [Three cases of von Willebrand type 2B in children]. Zhonghua Er Ke Za Zhi 2022; 60:943-945. [PMID: 36038307] [DOI: 10.3760/cma.j.cn112140-20220220-00133]
Affiliation(s)
- D Ai
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- Z P Chen
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- G Li
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- J F Yao
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- J Y Ma
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- J Ma
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- L Q Zhang
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- J Jiang
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
- R H Wu
- Hematology Center, Beijing Children's Hospital, Capital Medical University, National Center for Children's Health, Beijing 100045, China
20
Shao L, Yang S, Fu T, Lin Y, Geng H, Ai D, Fan J, Song H, Zhang T, Yang J. Augmented reality calibration using feature triangulation iteration-based registration for surgical navigation. Comput Biol Med 2022; 148:105826. [PMID: 35810696] [DOI: 10.1016/j.compbiomed.2022.105826]
Abstract
BACKGROUND Marker-based augmented reality (AR) calibration methods for surgical navigation often require a second computed tomography scan of the patient, and their clinical application is limited by high manufacturing costs and low accuracy. METHODS This work introduces a novel AR calibration framework that combines a Microsoft HoloLens device with a single-camera registration module for surgical navigation. In this framework, a camera gathers multi-view images of a patient for reconstruction. A shape feature matching-based search method is proposed to adjust the size of the reconstructed model. A double clustering-based 3D point cloud segmentation method and a 3D line segment detection method are also proposed to extract the corner points of the image marker, which serve as its registration data. A feature triangulation iteration-based registration method is proposed to quickly and accurately calibrate the pose relationship between the image marker and the patient in virtual and real space. The patient model after registration is wirelessly transmitted to the HoloLens device to display the AR scene. RESULTS The proposed approach was used to conduct accuracy verification experiments on phantoms and volunteers, and was compared with six advanced AR calibration methods. The proposed method obtained average fusion errors of 0.70 ± 0.16 and 0.91 ± 0.13 mm in the phantom and volunteer experiments, respectively, the highest fusion accuracy among all comparison methods. A volunteer liver puncture clinical simulation experiment was also conducted to show clinical feasibility. CONCLUSIONS Our experiments proved the effectiveness of the proposed AR calibration method and revealed considerable potential for improving surgical performance.
Affiliation(s)
- Long Shao
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Shuo Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Tianyu Fu
- School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, China
- Yucong Lin
- School of Medical Technology, Beijing Institute of Technology, Beijing, 100081, China
- Haixiao Geng
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science & Technology, Beijing Institute of Technology, Beijing, 100081, China
- Tao Zhang
- Peking Union Medical College Hospital, Department of Oral and Maxillofacial Surgery, Beijing, 100730, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
21
Han T, Ai D, Wang Y, Bian Y, An R, Fan J, Song H, Xie H, Yang J. Recursive Centerline- and Direction-Aware Joint Learning Network with Ensemble Strategy for Vessel Segmentation in X-ray Angiography Images. Comput Methods Programs Biomed 2022; 220:106787. [PMID: 35436660] [DOI: 10.1016/j.cmpb.2022.106787]
Abstract
BACKGROUND AND OBJECTIVE Automatic vessel segmentation from X-ray angiography (XRA) images is an important research topic for the diagnosis and treatment of cardiovascular disease. The main challenge is how to extract continuous and complete vessel structures from XRA images with poor quality and high complexity. Most existing methods focus on pixel-wise segmentation and overlook geometric features, resulting in broken and missing vessel structures in the segmentation results. To improve the completeness and accuracy of vessel segmentation, we propose a recursive joint learning network embedded with geometric features. METHODS The network joins the centerline- and direction-aware auxiliary tasks with the primary segmentation task, which guides the network to explore the geometric features of vessel connectivity. Moreover, a recursive learning strategy is designed that passes the previous segmentation result into the same network iteratively to improve segmentation. To further enhance connectivity, we present a complementary-task ensemble strategy that fuses the outputs of the three tasks into the final segmentation result with majority voting. RESULTS To validate the effectiveness of our method, we conducted qualitative and quantitative experiments on XRA images of the coronary artery and the aorta, including the aortic arch, thoracic aorta, and abdominal aorta. Our method achieves F1 scores of 85.61±3.48% for the coronary artery, 89.02±2.89% for the aortic arch, 88.22±3.33% for the thoracic aorta, and 83.12±4.61% for the abdominal aorta. CONCLUSIONS Compared with six state-of-the-art methods, our method shows the most complete and accurate vessel segmentation results.
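The majority-voting fusion mentioned in the ensemble strategy is straightforward per pixel; a minimal numpy sketch (a generic illustration of majority voting over three task outputs, not the paper's code):

```python
import numpy as np

def majority_vote(masks):
    """Fuse several binary segmentation masks by per-pixel majority
    voting: a pixel is foreground if more than half the masks agree."""
    stacked = np.stack([np.asarray(m, dtype=bool) for m in masks])
    return stacked.sum(axis=0) * 2 > len(masks)

# Three toy task outputs over four pixels; only the first pixel wins a majority.
fused = majority_vote([[1, 1, 0, 0], [1, 0, 1, 0], [1, 0, 0, 0]])
```

With three voters, a pixel needs at least two votes, so the fused mask keeps only the first pixel here.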
Affiliation(s)
- Tao Han
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yining Wang
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Yonglin Bian
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Ruirui An
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Hongzhi Xie
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing, 100730, China
- Jian Yang
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
22
Li W, Fan J, Li Y, Hao P, Lin Y, Fu T, Ai D, Song H, Yang J. Endoscopy image enhancement method by generalized imaging defect models based adversarial training. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac6724]
Abstract
Objective. Smoke, uneven lighting, and color deviation are common issues in endoscopic surgery that increase surgical risk and can even lead to failure. Approach. In this study, we present a new physics-model-driven semi-supervised learning framework for high-quality pixel-wise endoscopic image enhancement, which generalizes to smoke removal, light adjustment, and color correction. To improve the authenticity of the generated images, and thereby the network performance, we integrated specific physical imaging-defect models with the CycleGAN framework; no paired ground-truth data are required. In addition, we propose a transfer learning framework to address the data scarcity in several endoscope enhancement tasks and improve network performance. Main results. Qualitative and quantitative studies reveal that the proposed network outperforms state-of-the-art image enhancement methods. In particular, it performs much better than the original CycleGAN: in the smoke removal task, for example, structural similarity improved from 0.7925 to 0.8648, feature similarity for color images from 0.8917 to 0.9283, and quaternion structural similarity from 0.8097 to 0.8800. Experimental results of the proposed transfer learning method also reveal its superior performance when trained with small datasets of target tasks. Significance. Experimental results on endoscopic images prove the effectiveness of the proposed network in smoke removal, light adjustment, and color correction, showing excellent clinical usefulness.
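A common physical defect model for surgical smoke is the atmospheric scattering equation I = J·t + A·(1 − t); whether the paper uses exactly this formulation is an assumption, but it illustrates how defect images can be synthesized to drive adversarial training:

```python
import numpy as np

def add_synthetic_smoke(clean, transmission, airlight=0.9):
    """Render smoke on a clean frame via the atmospheric scattering model
    I = J * t + A * (1 - t), with per-pixel transmission t and airlight A
    (all values assumed in [0, 1]). Illustrative sketch only."""
    return clean * transmission + airlight * (1.0 - transmission)

frame = np.full((4, 4), 0.2)                              # toy "clean" image
smoky = add_synthetic_smoke(frame, np.full((4, 4), 0.5))  # 0.2*0.5 + 0.9*0.5
```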
23
Liu S, Fan J, Ai D, Song H, Fu T, Wang Y, Yang J. Feature matching for texture-less endoscopy images via superpixel vector field consistency. Biomed Opt Express 2022; 13:2247-2265. [PMID: 35519251] [PMCID: PMC9045917] [DOI: 10.1364/boe.450259]
Abstract
Feature matching is an important technique for obtaining the surface morphology of soft tissues in intraoperative endoscopy images. Extracting features from clinical endoscopy images is difficult, especially for texture-less images, and the reduction of surface detail makes the problem more challenging. We proposed an adaptive gradient-preserving method to improve the visual features of texture-less images. For feature matching, we first constructed a spatial motion field using superpixel blocks and estimated its information entropy with a motion-consistency algorithm to obtain an initial screening of outlier features. Second, we extended the superpixel spatial motion field to a vector field and constrained it with vector features to optimize the confidence of the initial matching set. Evaluations were performed on public and undisclosed datasets. Compared with the original images, our enhancement increased the number of extracted feature points by an order of magnitude for all three feature point extraction methods. On the public dataset, accuracy and F1-score increased to 92.6% and 91.5%, and the matching score improved by 1.92%. On the undisclosed dataset, the reconstructed surface integrity of the proposed method improved from 30% to 85%. Furthermore, we also present surface reconstruction results for images of different sizes to validate the robustness of our method, which showed high-quality feature matching. Overall, the experimental results prove the effectiveness of the proposed matching method and demonstrate its capability to extract sufficient visual feature points and generate reliable feature matches for 3D reconstruction and meaningful applications in clinical practice.
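The underlying motion-consistency idea (true matches should move like their spatial neighbours) can be sketched with a simple nearest-neighbour filter; this is a simplified stand-in for illustration, not the paper's superpixel vector-field algorithm:

```python
import numpy as np

def motion_consistency_filter(pts1, pts2, k=5, thresh=2.0):
    """Keep a putative match only if its displacement is close to the
    median displacement of its k nearest neighbours (local consensus)."""
    disp = pts2 - pts1
    keep = np.zeros(len(pts1), dtype=bool)
    for i in range(len(pts1)):
        d = np.linalg.norm(pts1 - pts1[i], axis=1)
        nbrs = np.argsort(d)[1:k + 1]          # k nearest, excluding self
        med = np.median(disp[nbrs], axis=0)    # robust local motion estimate
        keep[i] = np.linalg.norm(disp[i] - med) < thresh
    return keep

pts1 = np.array([[float(i), 0.0] for i in range(12)])
pts2 = pts1 + np.array([5.0, 0.0])             # consistent motion (5, 0)
pts2[3] = pts1[3] + np.array([50.0, 50.0])     # one gross outlier
keep = motion_consistency_filter(pts1, pts2)
```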
Affiliation(s)
- Shiyuan Liu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Tianyu Fu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
24
Guo Q, Song H, Fan J, Ai D, Gao Y, Yu X, Yang J. Portal Vein and Hepatic Vein Segmentation in Multi-Phase MR Images Using Flow-Guided Change Detection. IEEE Trans Image Process 2022; 31:2503-2517. [PMID: 35275817] [DOI: 10.1109/tip.2022.3157136]
Abstract
Segmenting the portal vein (PV) and hepatic vein (HV) from magnetic resonance imaging (MRI) scans is important for hepatic tumor surgery. Compared with single-phase methods, multi-phase methods scale better in distinguishing HV from PV by exploiting multi-phase information; however, existing methods only coarsely extract HV and PV from the different phase images. In this paper, we propose a unified framework to automatically and robustly segment 3D HV and PV from multi-phase MR images, which considers both the change and the appearance caused by vascular flow to improve segmentation performance. First, inspired by change detection, flow-guided change detection (FGCD) is designed to detect the voxels related to hepatic venous flow by generating a hepatic venous phase map and clustering it. FGCD handles HV and PV clustering uniformly through the proposed shared clustering, so the appearance correlated with portal venous flow is robustly delineated without increasing framework complexity. Then, to refine the vascular segmentation produced by HV and PV clustering, interclass decision making (IDM) is proposed, combining overlapping-region discrimination with neighborhood direction consistency. Finally, our framework is evaluated on multi-phase clinical MR images from the public TCGA dataset and a local hospital dataset. Quantitative and qualitative evaluations show that our framework outperforms existing methods.
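As a toy analogue of change detection between phases, the voxel-wise intensity change between two registered phase images can be clustered into "changed" and "unchanged" with a 1D two-means split (the paper's FGCD with shared clustering is considerably more involved; this sketch also assumes both clusters stay non-empty):

```python
import numpy as np

def phase_change_map(pre, post, iters=10):
    """Two-means clustering of the voxel-wise change between two phases:
    returns True where the intensity change is assigned to the 'changed'
    (flow-related) cluster. Minimal illustrative sketch."""
    diff = (post - pre).ravel()
    c0, c1 = diff.min(), diff.max()            # initial cluster centres
    for _ in range(iters):
        assign = np.abs(diff - c1) < np.abs(diff - c0)
        c0, c1 = diff[~assign].mean(), diff[assign].mean()
    return assign.reshape(pre.shape)

pre = np.zeros((4, 4))
post = pre.copy()
post[:2, :] = 10.0                             # strong enhancement region
cmap = phase_change_map(pre, post)             # True on the first two rows
```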
25
Wu C, Fu T, Wang Y, Lin Y, Wang Y, Ai D, Fan J, Song H, Yang J. Fusion Siamese network with drift correction for target tracking in ultrasound sequences. Phys Med Biol 2022; 67. [DOI: 10.1088/1361-6560/ac4fa1]
Abstract
Motion tracking techniques can correct the bias arising from respiration-induced motion in radiation therapy; tracking key structures accurately and in real time is necessary for effective motion tracking. In this work, we propose a fusion Siamese network with drift correction for target tracking in ultrasound sequences. Specifically, the network fuses four response maps generated by cross-correlation between convolution layers at different resolutions to reduce up-sampling error. A correction strategy combining local structural similarity and the target trajectory is proposed to revise the target drift predicted by the network. Moreover, a coarse-to-fine strategy is proposed to train the network with a limited number of annotated images, in which an augmented dataset is generated from corner points to learn network features with high generalizability. The proposed method is evaluated on the public dataset of the MICCAI 2015 Challenge on Liver UltraSound Tracking (CLUST) and on our ultrasound image dataset, provided by the Chinese People's Liberation Army General Hospital (CPLAGH). A tracking error of 0.80 ± 1.16 mm is observed for 85 targets across 39 ultrasound sequences in the CLUST dataset, and a tracking error of 0.61 ± 0.36 mm for 20 targets across 10 ultrasound sequences in the CPLAGH dataset. The effectiveness of the proposed fusion and correction strategies is verified via two ablation experiments. Overall, the experimental results demonstrate the effectiveness of the proposed fusion Siamese network with drift correction and reveal its potential in clinical practice.
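Fusing response maps produced at different layer resolutions can be sketched by upsampling each map to a common grid and averaging; the weights, sizes, and nearest-neighbour upsampling below are illustrative choices, not the paper's exact scheme:

```python
import numpy as np

def fuse_response_maps(maps, weights=None, out_size=(16, 16)):
    """Upsample each cross-correlation response map to out_size
    (nearest-neighbour, via np.kron) and form a weighted average.
    Assumes out_size is an integer multiple of each map's size."""
    n = len(maps)
    weights = weights if weights is not None else [1.0 / n] * n
    fused = np.zeros(out_size)
    for m, w in zip(maps, weights):
        ry, rx = out_size[0] // m.shape[0], out_size[1] // m.shape[1]
        fused += w * np.kron(m, np.ones((ry, rx)))
    return fused

low = np.zeros((4, 4)); low[1, 1] = 1.0        # coarse-layer response peak
high = np.zeros((8, 8)); high[2, 2] = 1.0      # fine-layer response peak
fused = fuse_response_maps([low, high])
peak = np.unravel_index(fused.argmax(), fused.shape)  # fused peak location
```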
26
Zhang Y, Li M, Yu B, Lu S, Zhang L, Zhu S, Yu Z, Xia T, Huang H, Jiang W, Zhang S, Sun L, Ye Q, Sun J, Zhu H, Huang P, Hong H, Yu S, Li W, Ai D, Fan J, Li W, Song H, Xu L, Chen X, Chen T, Zhou M, Ou J, Yang J, Li W, Hu Y, Wu W. Cold protection allows local cryotherapy in a clinical-relevant model of traumatic optic neuropathy. eLife 2022; 11:e75070. [PMID: 35352678] [PMCID: PMC9068221] [DOI: 10.7554/elife.75070]
Abstract
Therapeutic hypothermia (TH) is potentially an important therapy for central nervous system (CNS) trauma. However, its clinical application remains controversial, hampered by two major factors: (1) Many of the CNS injury sites, such as the optic nerve (ON), are deeply buried, preventing access for local TH. The alternative is to apply TH systemically, which significantly limits the applicable temperature range. (2) Even with possible access for 'local refrigeration', cold-induced cellular damage offsets the benefit of TH. Here we present a clinically translatable model of traumatic optic neuropathy (TON) by applying clinical trans-nasal endoscopic surgery to goats and non-human primates. This model faithfully recapitulates clinical features of TON such as the injury site (pre-chiasmatic ON), the spatiotemporal pattern of neural degeneration, and the accessibility of local treatments with large operating space. We also developed a computer program to simplify the endoscopic procedure and expand this model to other large animal species. Moreover, applying a cold-protective treatment, inspired by our previous hibernation research, enables us to deliver deep hypothermia (4 °C) locally to mitigate inflammation and metabolic stress (indicated by the transcriptomic changes after injury) without cold-induced cellular damage, and confers prominent neuroprotection both structurally and functionally. Intriguingly, neither treatment alone was effective, demonstrating that in situ deep hypothermia combined with cold protection constitutes a breakthrough for TH as a therapy for TON and other CNS traumas.
Affiliation(s)
- Yikui Zhang
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Mengyun Li
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Bo Yu
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Shengjian Lu
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Lujie Zhang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Senmiao Zhu
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Zhonghao Yu
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Tian Xia
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Haoliang Huang
- Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, United States
- WenHao Jiang
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Si Zhang
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Lanfang Sun
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Qian Ye
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Jiaying Sun
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Hui Zhu
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Pingping Huang
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Huifeng Hong
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Shuaishuai Yu
- School of Laboratory Medicine and Life Sciences, Wenzhou Medical University, Wenzhou, China
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Wentao Li
- School of Computer Science & Technology, Beijing Institute of Technology, Beijing, China
- Hong Song
- School of Computer Science & Technology, Beijing Institute of Technology, Beijing, China
- Lei Xu
- Medical Radiology Department, 2nd Affiliated Hospital, Wenzhou Medical University, Wenzhou, China
- Xiwen Chen
- Animal Facility Center, Wenzhou Medical University, Wenzhou, China
- Tongke Chen
- Animal Facility Center, Wenzhou Medical University, Wenzhou, China
- Meng Zhou
- School of Biomedical Engineering, The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
- Jingxing Ou
- Department of Hepatic Surgery and Liver Transplantation Center of the Third Affiliated Hospital, Guangdong Province Engineering Laboratory for Transplantation Medicine, Guangzhou, China; Guangdong Key Laboratory of Liver Disease Research, the Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Wei Li
- Retinal Neurophysiology Section, National Eye Institute, National Institutes of Health (NIH), Bethesda, United States
- Yang Hu
- Department of Ophthalmology, Stanford University School of Medicine, Palo Alto, United States
- Wencan Wu
- The Eye Hospital, School of Ophthalmology & Optometry, Wenzhou Medical University, Wenzhou, China
27
Liang P, Cheng Z, Yu X, Han Z, Liu F, Yu J, Yang J, Ai D. Percutaneous microwave ablation under ultrasound guidance for renal cell carcinomas at clinical staging T1 in patients aged 65 years and older: A comparative study. J Cancer Res Ther 2022; 18:509-515. [DOI: 10.4103/jcrt.jcrt_531_22]
28
Weng X, Song H, Fu T, Gao Y, Fan J, Ai D, Lin Y, Yang J. An optimal ablation time prediction model based on minimizing the relapse risk. Comput Methods Programs Biomed 2021; 212:106438. [PMID: 34656904] [DOI: 10.1016/j.cmpb.2021.106438]
Abstract
OBJECTIVE Percutaneous microwave ablation is an essential and safe treatment for liver cancer. As part of the therapeutic dose, the ablation time chosen by the physician is crucial to the treatment effect. However, because physicians differ in experience and patients differ greatly between individuals, treatment outcomes also vary, so the ablation times recorded in electronic health records (EHRs) do not follow a single pattern. To solve this problem, we propose a data mining method based on historical treatment data recorded in EHRs, which uses relapse risk as strong supervision to correct the ablation time. The predictions of this method are closer to the ablation times of patients without relapse and can serve as a reference for physicians. METHODS In the proposed method, we introduce an optimization procedure that iteratively minimizes the postoperative relapse risk, using gradient propagation between the risk and the ablation time to correct the latter. We also apply a self-attention mechanism to capture global dependencies between the EHR features and improve the final prediction performance. RESULTS Comparative experiments show that the proposed model outperforms baseline models on the R-square, MAE, and MSE metrics. Ablation experiments show that integrating label correction and the self-attention mechanism improves model performance. CONCLUSIONS Using relapse risk as strong supervision related to the ablation time effectively corrects the deviation of the ablation time as weak supervision, and the self-attention mechanism significantly improves prediction performance.
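The self-attention component is presumably the standard scaled dot-product form; a single-head sketch over EHR feature tokens (the shapes and random weight matrices are placeholders, not the paper's architecture):

```python
import numpy as np

def self_attention(X, Wq, Wk, Wv):
    """Single-head scaled dot-product self-attention: each feature token
    attends to every other token, capturing global dependencies."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(Q.shape[-1])
    w = np.exp(scores - scores.max(axis=-1, keepdims=True))
    w /= w.sum(axis=-1, keepdims=True)        # row-wise softmax
    return w @ V

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 4))                   # 6 EHR feature tokens, dim 4
Wq, Wk, Wv = (rng.normal(size=(4, 4)) for _ in range(3))
out = self_attention(X, Wq, Wk, Wv)           # same shape as X
```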
Affiliation(s)
- Xutao Weng
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Tianyu Fu
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yuanjin Gao
- Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing, 100853, China
- Jingfan Fan
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Danni Ai
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Yucong Lin
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
29
Li W, Fan J, Li S, Tian Z, Ai D, Song H, Yang J. Homography-based robust pose compensation and fusion imaging for augmented reality based endoscopic navigation system. Comput Biol Med 2021; 138:104864. [PMID: 34634638] [DOI: 10.1016/j.compbiomed.2021.104864]
Abstract
BACKGROUND Augmented reality (AR) based fusion imaging in endoscopic surgery relies on the quality of image-to-patient registration and camera calibration, two offline steps that are usually performed independently to obtain the target transformations separately. Each step may be optimal in isolation yet not globally optimal; the residual errors accumulate and eventually lead to inaccurate AR fusion. METHODS After a careful analysis of the principle of AR imaging, a robust online calibration framework was proposed for an endoscopic camera to enable accurate AR fusion. A 2D checkerboard-based homography estimation algorithm was proposed to estimate the local pose of the endoscopic camera, and the least squares method was used to calculate the compensation matrix in combination with the optical tracking system. RESULTS In comparison with conventional methods, the proposed compensation method improved AR fusion performance, reducing physical error by up to 82%, reducing pixel error by up to 83%, and improving target coverage by up to 6%. Experiments simulating mechanical noise revealed that the proposed compensation method effectively corrected the fusion errors caused by rotation of the endoscopic tube without recalibrating the camera. Furthermore, the simulation results revealed the robustness of the proposed compensation method to noise. CONCLUSIONS Overall, the experimental results proved the effectiveness of the proposed compensation method and online calibration framework, and revealed considerable potential for clinical practice.
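A standard building block for this kind of least-squares pose estimation is the Kabsch/Procrustes fit of a rigid transform between corresponding point sets; the sketch below is a generic tool of that family, not the paper's full compensation pipeline:

```python
import numpy as np

def least_squares_rigid_transform(P, Q):
    """Best-fit rotation R and translation t with Q ≈ R @ P + t in the
    least-squares sense (Kabsch algorithm), given corresponding Nx3
    point sets P and Q."""
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                                 # proper rotation, det = +1
    t = cQ - R @ cP
    return R, t
```

A quick round-trip check: generate points, transform them with a known pose, and verify the fit recovers it.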
Affiliation(s)
- Wenjie Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Jingfan Fan
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Shaowen Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Zhaorui Tian
- Ariemedi Medical Technology (Beijing) CO., LTD., Beijing, 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, 100081, China
30
Han T, Ai D, An R, Fan J, Song H, Wang Y, Yang J. Ordered multi-path propagation for vessel centerline extraction. Phys Med Biol 2021; 66. [PMID: 34157702] [DOI: 10.1088/1361-6560/ac0d8e]
Abstract
Vessel centerline extraction from x-ray angiography images is essential for vessel structure analysis in the diagnosis of coronary artery disease. However, complete and continuous centerline extraction remains challenging due to image noise, poor contrast, and the complexity of vessel structure. Thus, an iterative multi-path search framework for automatic vessel centerline extraction is proposed. First, the seed points of the vessel structure are detected and sorted by confidence. With the ordered seed points, the multi-bifurcation centerline is searched through multi-path propagation of a wavefront and accumulated voting. Finally, the centerline is further extended piecewise by wavefront propagation on the basis of keypoint detection. The latter two steps are performed alternately to obtain the final centerline. The proposed method is qualitatively and quantitatively evaluated on 1260 synthetic images and 50 clinical angiography images. The results demonstrate that our method achieves a high F1 score of 87.8% ± 2.7% on the angiography images and produces accurate and continuous vessel centerlines.
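Wavefront propagation over a vessel region can be illustrated with a breadth-first search that records geodesic distance from a seed; the paper's ordered multi-path search adds confidence ordering and accumulated voting on top of this basic idea:

```python
from collections import deque

import numpy as np

def wavefront_distances(mask, seed):
    """Propagate a wavefront (BFS, 4-connectivity) from a seed pixel
    through a binary vessel mask; returns the geodesic distance map,
    with -1 marking unreachable or background pixels."""
    dist = np.full(mask.shape, -1, dtype=int)
    dist[seed] = 0
    q = deque([seed])
    while q:
        y, x = q.popleft()
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if (0 <= ny < mask.shape[0] and 0 <= nx < mask.shape[1]
                    and mask[ny, nx] and dist[ny, nx] < 0):
                dist[ny, nx] = dist[y, x] + 1
                q.append((ny, nx))
    return dist

mask = np.zeros((3, 7), dtype=bool)
mask[1, :] = True                      # a straight toy "vessel"
d = wavefront_distances(mask, (1, 0))  # distance grows along the vessel
```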
Affiliation(s)
- Tao Han
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Danni Ai
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Ruirui An
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Jingfan Fan
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Yining Wang
- Department of Radiology, Peking Union Medical College Hospital, Chinese Academy of Medical Sciences and Peking Union Medical College, Beijing 100730, People's Republic of China
- Jian Yang
- Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
31
Lu WG, Ai D, Song H, Xie Y, Liu S, Zhu W, Yang J. Epidemiological and numerical simulation of rabies spreading from canines to various human populations in mainland China. PLoS Negl Trop Dis 2021; 15:e0009527. [PMID: 34260584] [PMCID: PMC8312940] [DOI: 10.1371/journal.pntd.0009527]
Abstract
BACKGROUND The mortality of humans due to rabies in China has been declining in recent years, but rabies remains a significant public health problem. In line with the global framework, China strives to eliminate human rabies before 2030. METHODS We reviewed the epidemiology of human deaths from rabies in mainland China from 2004 to 2018. We identified high-risk regions, age groups, and occupational groups, and used a continuous deterministic susceptible-exposed-infectious-recovered (SEIR) model with a periodic transmission rate to explore seasonal rabies prevalence in different human populations. The SEIR model was fitted to the human rabies deaths reported by the Chinese Center for Disease Control and Prevention (China CDC). We calculated the relative transmission intensity of rabies from canines to the different human groups, providing a reliable epidemiological basis for further control and prevention of human rabies. RESULTS Human deaths from rabies exhibited regional differences and seasonal characteristics in mainland China. Annual deaths decreased steadily across time in all regions, age groups, and occupational groups; nevertheless, the rates of decrease and the calculated canine R0 values differed among groups. The transmission intensity of rabies from canines to humans was highest in the central regions of China, in people over 45 years old, and in farmers. CONCLUSIONS Although annual human deaths from rabies have decreased steadily since 2007, the proportion of deaths varies with region, age, gender, and occupation. Further enhancing public awareness and immunization in high-risk groups, and blocking the transmission routes from canines to humans, are necessary. The concept of One Health should be followed, considering human, animal, and environmental health simultaneously, to achieve the goal of eliminating human rabies before 2030.
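A deterministic SEIR model with a periodic (seasonal) transmission rate, as used in the study above, can be sketched with forward-Euler integration; all parameter values below are illustrative placeholders, not values fitted to the China CDC data:

```python
import numpy as np

def simulate_seir(beta0=0.3, eps=0.2, sigma=1 / 6, gamma=1 / 14,
                  N=1e6, I0=1.0, days=365, dt=0.1):
    """Forward-Euler integration of an SEIR model whose transmission
    rate oscillates seasonally: beta(t) = beta0 * (1 + eps*cos(2*pi*t/365)).
    Returns a trajectory of (t, S, E, I, R) tuples."""
    S, E, I, R = N - I0, 0.0, I0, 0.0
    traj = []
    for step in range(round(days / dt)):
        t = step * dt
        beta = beta0 * (1.0 + eps * np.cos(2.0 * np.pi * t / 365.0))
        new_exp = beta * S * I / N           # new exposures per unit time
        dS = -new_exp
        dE = new_exp - sigma * E             # 1/sigma: mean incubation period
        dI = sigma * E - gamma * I           # 1/gamma: mean infectious period
        dR = gamma * I
        S, E, I, R = S + dS * dt, E + dE * dt, I + dI * dt, R + dR * dt
        traj.append((t, S, E, I, R))
    return traj

traj = simulate_seir()  # population is conserved: S+E+I+R stays ~N
```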
Affiliation(s)
- Wen-gao Lu
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Danni Ai
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Yuan Xie
- National Institute for Viral Disease Control and Prevention, Chinese Center for Disease Control and Prevention, Beijing, China
- Shuqing Liu
- National Institute for Viral Disease Control and Prevention, Chinese Center for Disease Control and Prevention, Beijing, China
- * E-mail: (SL); (WZ); (JY)
- Wuyang Zhu
- National Institute for Viral Disease Control and Prevention, Chinese Center for Disease Control and Prevention, Beijing, China
- * E-mail: (SL); (WZ); (JY)
- Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- * E-mail: (SL); (WZ); (JY)
32
Dong J, Ai D, Fan J, Deng Q, Song H, Cheng Z, Liang P, Wang Y, Yang J. Local-global active contour model based on tensor-based representation for 3D ultrasound vessel segmentation. Phys Med Biol 2021; 66. [PMID: 33910173] [DOI: 10.1088/1361-6560/abfc92]
Abstract
Three-dimensional (3D) vessel segmentation can provide full spatial information about an anatomic structure, helping physicians gain an increased understanding of vascular structures; it plays a vital role in many medical image-processing and analysis applications. The purpose of this paper is to develop a 3D vessel-segmentation method that can improve segmentation accuracy in 3D ultrasound (US) images. We propose a 3D tensor-based active contour model for accurate 3D vessel segmentation. Our method captures a contrast-independent multiscale bottom-hat tensor representation together with local-global information. This strategy ensures the effective extraction of vessel boundaries from both inhomogeneous and homogeneous regions without being affected by the noise and low contrast of 3D US images. Clinical 3D US data and a public 3D multiphoton microscopy dataset are used for quantitative and qualitative comparison with several state-of-the-art vessel-segmentation methods. Clinical experiments demonstrate that our method achieves a smoother and more accurate vessel boundary than competing methods. The mean SE, SP, and ACC of the proposed method are 0.7768 ± 0.0597, 0.9978 ± 0.0013, and 0.9971 ± 0.0015, respectively. Experiments on the public dataset show that our method can segment complex vessels in medical images with noise and low contrast.
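The multiscale bottom-hat idea behind the tensor representation can be illustrated with plain grey-scale morphology. This is a sketch only: the scale set below is an assumption, and the paper's tensor construction and local-global contour energy are omitted.

```python
import numpy as np
from scipy import ndimage

def multiscale_bottom_hat(volume, scales=(3, 5, 7)):
    """Multiscale morphological bottom-hat response on a 3D volume.

    The bottom-hat (grey closing minus original) highlights dark structures
    such as vessel lumens; taking the maximum over several structuring-element
    sizes gives a crude, contrast-independent multiscale response."""
    responses = []
    for s in scales:
        closed = ndimage.grey_closing(volume, size=(s, s, s))
        responses.append(closed - volume)
    return np.max(np.stack(responses, axis=0), axis=0)
```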
Affiliation(s)
- Jiahui Dong, Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Danni Ai, Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Jingfan Fan, Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Qiaoling Deng, Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Hong Song, School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Zhigang Cheng, Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing 100853, People's Republic of China
- Ping Liang, Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing 100853, People's Republic of China
- Yongtian Wang, Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
- Jian Yang, Laboratory of Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
33
Li W, Fan J, Li S, Tian Z, Zheng Z, Ai D, Song H, Yang J. Calibrating 3D Scanner in the Coordinate System of Optical Tracker for Image-To-Patient Registration. Front Neurorobot 2021; 15:636772. [PMID: 34054454] [PMCID: PMC8160243] [DOI: 10.3389/fnbot.2021.636772]
Abstract
Three-dimensional (3D) scanners have been widely applied in image-guided surgery (IGS) given their potential to solve the image-to-patient registration problem. Reliable calibration between a 3D scanner and an external tracker is especially important for these applications. This study proposes a novel method for calibrating the extrinsic parameters of a 3D scanner in the coordinate system of an optical tracker. We bound an optical marker to a 3D scanner and designed a dedicated 3D benchmark for calibration. We then proposed a two-step calibration method, based on point-set registration and nonlinear optimization, to obtain the extrinsic matrix of the 3D scanner, using the repeat scan registration error (RSRE) as the cost function in the optimization. We evaluated the proposed method on a recaptured verification dataset through RSRE and Chamfer distance (CD). Compared with calibration based on a 2D checkerboard, the proposed method achieved a lower RSRE (1.73 mm vs. 2.10, 1.94, and 1.83 mm) and CD (2.83 mm vs. 3.98, 3.46, and 3.17 mm). We also constructed a surgical navigation system to further explore the application of the tracked 3D scanner in image-to-patient registration, and conducted a phantom study to verify the accuracy of the proposed method and analyze the relationship between calibration accuracy and target registration error (TRE). The proposed scanner-based image-to-patient registration was also compared with fiducial-based registration, with TRE and operation time (OT) used to evaluate the registration results. The proposed method achieved improved registration efficiency (50.72 ± 6.04 vs. 212.97 ± 15.91 s in the head phantom study). Although the TRE of the proposed method met clinical requirements, its accuracy was lower than that of fiducial-based registration (1.79 ± 0.17 mm vs. 0.92 ± 0.16 mm in the head phantom study). We summarize the limitations of scanner-based image-to-patient registration and discuss its possible development.
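The point-set registration step of such a two-step calibration is commonly a closed-form least-squares rigid fit (Kabsch/SVD). The sketch below shows only that step; the paper's subsequent nonlinear refinement of the repeat scan registration error is omitted.

```python
import numpy as np

def rigid_fit(src, dst):
    """Closed-form least-squares rigid transform (Kabsch/SVD) mapping src to dst.

    src, dst: (n, 3) arrays of corresponding points. Returns rot (3x3) and
    t (3,) such that rot @ src_i + t approximates dst_i."""
    src_mean = src.mean(axis=0)
    dst_mean = dst.mean(axis=0)
    h = (src - src_mean).T @ (dst - dst_mean)   # cross-covariance matrix
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))
    rot = vt.T @ np.diag([1.0, 1.0, d]) @ u.T   # guard against reflection
    t = dst_mean - rot @ src_mean
    return rot, t
```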
Affiliation(s)
- Wenjie Li, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jingfan Fan, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Shaowen Li, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Zhaorui Tian, Ariemedi Medical Technology (Beijing) CO., LTD., Beijing, China
- Zhao Zheng, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Danni Ai, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Hong Song, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Jian Yang, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
34
Wu C, Fu T, Gao Y, Liu Y, Fan J, Ai D, Song H, Yang J. Multiple feature-based portal vein classification for liver segment extraction. Med Phys 2021; 48:2354-2373. [PMID: 33529390] [DOI: 10.1002/mp.14745]
Abstract
PURPOSE The liver segments defined by the Couinaud classification are used to understand the functional anatomy of the liver and are significant in hepatic resection surgery. In the Couinaud scheme, each third-order branch of the portal vein (PV) defines the territory supplied to a corresponding liver segment. However, the accuracy of PV reconstruction and classification is affected by the vein's complicated structure. The purpose of this paper is to develop a separation and classification method that can accurately extract the liver segments. METHODS A multiple feature-based method is proposed to obtain liver segments. Because the portal and hepatic veins are usually connected in the vessel segmentation result, the PV is first completely separated using different strategies for minimal node cut based on fused topology and appearance features, and all bifurcation nodes of the PV are detected. The bifurcation nodes are initially ordered through their linkages to classify the branches; the vascular topology is then used to refine the node orders. The refined node orders classify the branches between nodes, yielding the third-order branches of the PV, from which the liver segments are eventually obtained. RESULTS The separation and classification steps are evaluated on CT and MR datasets. The average Dice, Jaccard, Recall, and Precision values obtained by the proposed method are 93.00%, 87.90%, 93.47%, and 93.19%, respectively. Compared with state-of-the-art methods, the separation results obtained by the proposed method are more accurate. The PV branches are classified based on the separation result, and, according to the third-order branches, eight liver segments corresponding to different functional areas are precisely extracted.
CONCLUSIONS The proposed method achieves high accuracy for liver segment extraction, and the extracted liver segments are valuable for preoperative planning of resection surgery.
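The reported Dice, Jaccard, Recall, and Precision figures are standard voxel-overlap metrics, which can be computed as follows (a generic sketch, not the authors' evaluation code):

```python
import numpy as np

def overlap_metrics(pred, gt):
    """Voxel-overlap metrics used to score segmentations.

    pred and gt are arrays of equal shape, interpreted as boolean masks.
    Returns (dice, jaccard, recall, precision)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()       # true positives
    fp = np.logical_and(pred, ~gt).sum()      # false positives
    fn = np.logical_and(~pred, gt).sum()      # false negatives
    dice = 2 * tp / (2 * tp + fp + fn)
    jaccard = tp / (tp + fp + fn)
    recall = tp / (tp + fn)
    precision = tp / (tp + fp)
    return dice, jaccard, recall, precision
```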
Affiliation(s)
- Chan Wu, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Tianyu Fu, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Yuanjin Gao, Department of Interventional Ultrasound, Chinese PLA General Hospital, Beijing 100853, China
- Yuhan Liu, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Jingfan Fan, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song, School of Software, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
35
Fu T, Fan J, Liu D, Song H, Zhang C, Ai D, Cheng Z, Liang P, Yang J. Divergence-Free Fitting-Based Incompressible Deformation Quantification of Liver. IEEE J Biomed Health Inform 2021; 25:720-736. [PMID: 32750981] [DOI: 10.1109/jbhi.2020.3013126]
Abstract
The liver is an incompressible organ that maintains its volume during respiration-induced deformation. Quantifying this deformation under the incompressibility constraint is significant for liver tracking. The constraint can be enforced by retaining the divergence-free field obtained from the deformation decomposition. However, the decomposition process is time-consuming, and removing the non-divergence-free field weakens the deformation. In this study, a divergence-free fitting-based registration method is proposed to quantify the incompressible deformation rapidly and accurately. First, the deformation to be estimated is mapped to a velocity in a diffeomorphic space. Then, this velocity is decomposed by a fast Fourier-based Hodge-Helmholtz decomposition into divergence-free, curl-free, and harmonic fields. The curl-free field is replaced and fitted by the obtained harmonic field with a translation field to generate a new divergence-free velocity. By optimizing this velocity, the final incompressible deformation is obtained. Moreover, a deep learning framework (DLF) is constructed to accelerate the incompressible deformation quantification. An incompressible respiratory motion model is built for the DLF using the proposed registration method and is then used to augment the training data. An encoder-decoder network is introduced to learn the appearance-velocity correlation at patch scale. In the experiments, we compare the proposed registration with three state-of-the-art methods. The results show that the proposed method accurately achieves incompressible registration of the liver with a mean liver overlap ratio of 95.33%. Moreover, the time consumed by the DLF is nearly 15 times shorter than that of the other methods.
36
Huang S, Han X, Fan J, Chen J, Du L, Gao W, Liu B, Chen Y, Liu X, Wang Y, Ai D, Ma G, Yang J. Anterior Mediastinal Lesion Segmentation Based on Two-Stage 3D ResUNet With Attention Gates and Lung Segmentation. Front Oncol 2021; 10:618357. [PMID: 33634027] [PMCID: PMC7901488] [DOI: 10.3389/fonc.2020.618357]
Abstract
OBJECTIVES Anterior mediastinal disease is a common chest disease. Computed tomography (CT), as an important imaging technology, is widely used in the diagnosis of mediastinal diseases. Lesions in CT images are difficult to distinguish because of image artifacts, intensity inhomogeneity, and their similarity with other tissues. Direct segmentation of lesions can help doctors better extract lesion features, thereby improving diagnostic accuracy. METHOD Deep learning, the current trend in image-processing technology, is more accurate in image segmentation than traditional methods. We employ a two-stage 3D ResUNet network combined with lung segmentation to segment CT images. Given that the mediastinum lies between the two lungs, the original image is clipped by the lung mask to remove noise that may affect lesion segmentation. To capture lesion features, we design a two-stage network structure. In the first stage, lesion features are learned from a low-resolution downsampled image, yielding segmentation results at a coarse scale. These results are concatenated with the original image and encoded into the second stage to capture more accurate segmentation information. In addition, attention gates are introduced in the upsampling path of the network; these gates focus on the lesion and act as feature filters. RESULTS The proposed method was verified on 230 patients, and the anterior mediastinal lesions were well segmented, with an average Dice coefficient of 87.73%. Compared with the model without lung segmentation, the model with lung segmentation improved lesion-segmentation accuracy by approximately 9%. The addition of attention gates slightly improved segmentation accuracy.
CONCLUSION The proposed automatic segmentation method achieved good results on clinical data. In clinical application, automatic lesion segmentation can assist doctors in the diagnosis of diseases and may facilitate automated diagnosis in the future.
Affiliation(s)
- Su Huang, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Xiaowei Han, Department of Radiology, The Affiliated Drum Tower Hospital of Nanjing University Medical School, Nanjing, China
- Jingfan Fan, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jing Chen, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Lei Du, Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Wenwen Gao, Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Bing Liu, Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Yue Chen, Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Xiuxiu Liu, Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Yige Wang, Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Danni Ai, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Guolin Ma, Department of Radiology, China-Japan Friendship Hospital, Beijing, China
- Jian Yang, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
37
Zhu J, Li H, Ai D, Yang Q, Fan J, Huang Y, Song H, Han Y, Yang J. Iterative closest graph matching for non-rigid 3D/2D coronary arteries registration. Comput Methods Programs Biomed 2021; 199:105901. [PMID: 33360681] [DOI: 10.1016/j.cmpb.2020.105901]
Abstract
Background and objective: Fusion of preoperative computed tomography angiography and intraoperative X-ray angiography images can considerably enhance the visual perception of physicians during percutaneous coronary interventions. This technique can provide 3D information of the arteries and reduce the uncertainty of 2D guidance images. For this purpose, 3D/2D vascular registration with high accuracy and robustness is crucial for performing accurate surgery. Methods: In this study, we propose an iterative closest graph matching (ICGM) method that utilizes an alternating iteration framework comprising correspondence and transformation phases. A coarse-to-fine matching approach based on redundant graph matching is proposed for the correspondence phase. The transformation phase involves rigid and non-rigid transformations: the rigid transformation is calculated using a closed-form solution, and the non-rigid transformation is achieved using a statistical shape model built from a synthetic deformation dataset. Results: The proposed method is evaluated and compared with nine state-of-the-art methods on simulated and clinical datasets. Experiments demonstrate that our method is insensitive to the initial pose of the data and robust to noise and deformation; it also outperforms the other methods in registering real data. Conclusions: Given its large capture range, the proposed method can register 3D vessels without prior initialization in clinical practice.
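The alternating framework (a correspondence phase followed by a closed-form rigid transformation phase) shares its skeleton with classic ICP. The sketch below substitutes plain nearest-neighbor matching for the paper's redundant graph matching and omits the statistical-shape-model non-rigid stage, so it illustrates only the generic alternation.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_rigid(src, dst, iters=30):
    """Alternate correspondence and closed-form rigid transform (ICP skeleton).

    src, dst: (n, 3) and (m, 3) point sets. Returns the accumulated rotation,
    translation, and the transformed source points."""
    rot = np.eye(3)
    t = np.zeros(3)
    tree = cKDTree(dst)
    cur = src.copy()
    for _ in range(iters):
        _, idx = tree.query(cur)                 # correspondence phase
        matched = dst[idx]
        # transformation phase: Kabsch closed-form rigid solution
        sc = cur - cur.mean(axis=0)
        mc = matched - matched.mean(axis=0)
        u, _, vt = np.linalg.svd(sc.T @ mc)
        d = np.sign(np.linalg.det(vt.T @ u.T))
        r_step = vt.T @ np.diag([1.0, 1.0, d]) @ u.T
        t_step = matched.mean(axis=0) - r_step @ cur.mean(axis=0)
        cur = cur @ r_step.T + t_step
        rot = r_step @ rot
        t = r_step @ t + t_step
    return rot, t, cur
```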
Affiliation(s)
- Jianjun Zhu, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Heng Li, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Qi Yang, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jingfan Fan, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yong Huang, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song, School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China
- Yechen Han, Department of Cardiology, Peking Union Medical College Hospital, Beijing 100730, China
- Jian Yang, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
38
Shao L, Li H, Liu X, Wang Y, Shi L, Ai D, Fan J, Song H, Zhang H, Yang J. Quantitative analysis of bony birth canal for periacetabular osteotomy patient by template fitting. Phys Med Biol 2021; 66:025007. [PMID: 33202400] [DOI: 10.1088/1361-6560/abcb22]
Abstract
Periacetabular osteotomy (PAO) is a joint-preservation procedure for developmental dysplasia of the hip. The procedure requires osteotomy of the medial wall of the acetabulum, which may narrow the bony birth canal and increase the risk of complications during future childbirth. Quantitatively analyzing the bony birth canal to determine this risk remains a challenging task. The purpose of this paper is to explore a new 3D CT measurement method to quantify the narrowest parameters of the bony birth canal of female patients with hip dysplasia before and after unilateral PAO surgery. By analyzing the impact of PAO on the bony birth canal, the patient's risk of childbirth complications can be estimated, and doctors can use this information to judge the impact of unilateral PAO and choose an appropriate delivery method. In this paper, a mean shape of the preoperative pelvises is obtained using a statistical shape model algorithm; this mean shape captures the pelvic shape features of all preoperative pelvises and serves as the standard pelvic template. A bidirectional iterative algorithm is used to generate a standard bony-birth-canal path template. Then, pelvic registration and a principal-plane deformation constraint are used to calculate the optimal bony-birth-canal path. The proposed method is verified on 31 cases of CT data with institutional review board approval; the test data contain preoperative and postoperative CT images. Compared with the benchmark method, the measurement accuracy of the narrowest position and diameter of the bony birth canal is improved by 65% and 78%, respectively, and the processing speed is increased by 32%. Experimental results demonstrate that the proposed method quantifies the bony birth canal with high accuracy and validity.
The proposed method can measure the anatomical parameters of the bony birth canal accurately, and the resulting quantitative analysis can help doctors plan childbirth optimally.
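The statistical-shape-model step (a mean shape plus principal variation modes over corresponding landmarks) can be sketched with PCA. The inputs are assumed to be pre-aligned, corresponded landmark sets; this is a generic SSM sketch, not the paper's pelvic-template pipeline.

```python
import numpy as np

def build_ssm(shapes, n_modes=2):
    """Mean shape and principal modes from corresponding landmark sets.

    shapes: (n_subjects, n_points, 3) array with point correspondence already
    established (real pipelines first Procrustes-align the shapes).
    Returns (mean_shape, modes, variances)."""
    n, p, d = shapes.shape
    flat = shapes.reshape(n, p * d)
    mean = flat.mean(axis=0)
    centered = flat - mean
    # PCA via SVD of the centered data matrix
    _, svals, vt = np.linalg.svd(centered, full_matrices=False)
    modes = vt[:n_modes]
    variances = (svals[:n_modes] ** 2) / max(n - 1, 1)
    return mean.reshape(p, d), modes, variances

def synthesize(mean, modes, coeffs):
    """Instance = mean + sum_i coeffs[i] * modes[i]."""
    flat = mean.reshape(-1) + coeffs @ modes
    return flat.reshape(mean.shape)
```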
Affiliation(s)
- Long Shao, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
39
Fang H, Li H, Song S, Pang K, Ai D, Fan J, Song H, Yu Y, Yang J. Motion-flow-guided recurrent network for respiratory signal estimation of x-ray angiographic image sequences. Phys Med Biol 2020; 65:245020. [PMID: 32590382] [DOI: 10.1088/1361-6560/aba087]
Abstract
Motion compensation can eliminate inconsistencies caused by respiratory movement during image acquisition, enabling precise vascular reconstruction in the clinical diagnosis of vascular disease from x-ray angiographic image sequences. In x-ray-based vascular interventional therapy, motion modeling can simulate organ deformation driven by motion signals, displaying a dynamic organ on angiograms without contrast-agent injection. Automatic respiratory-signal estimation from x-ray angiographic image sequences is essential for motion compensation and modeling. The combined effects of respiratory motion, cardiac impulses, and tremors on structures in the chest and abdomen make it difficult to extract an accurate respiratory signal in isolation. In this study, an end-to-end deep learning framework based on a motion-flow-guided recurrent network is proposed to address this problem. The proposed method utilizes a convolutional neural network to learn the spatial features of each frame and a recurrent neural network to learn the temporal features of the entire sequence; their combination effectively analyzes the image sequence to estimate the respiratory signal. In addition, the motion flow between consecutive frames is introduced as a dynamic constraint on the spatial features, enabling the recurrent network to learn better temporal features from dynamic spatial features than from static ones. We demonstrate the advantages of our approach on designed datasets containing coronary and hepatic angiographic sequences with diaphragm structures, and coronary angiographic sequences without diaphragm structures. Our method improves over state-of-the-art manifold-learning-based methods by 85.7%, 81.5%, and 75.3% in respiratory-signal accuracy on these datasets. The results demonstrate that the proposed method can effectively estimate respiratory signals from multiple motion patterns.
Affiliation(s)
- Huihui Fang, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China
40
Yang J, Zhu J, Sze DY, Cui L, Li X, Bai Y, Ai D, Fan J, Song H, Duan F. Feasibility of Augmented Reality-Guided Transjugular Intrahepatic Portosystemic Shunt. J Vasc Interv Radiol 2020; 31:2098-2103. [PMID: 33261744] [DOI: 10.1016/j.jvir.2020.07.025]
Abstract
PURPOSE To investigate augmented reality (AR)-guided endovascular puncture for facilitating successful transjugular intrahepatic portosystemic shunt (TIPS) creation. MATERIALS AND METHODS An AR navigation system for TIPS was designed. Three-dimensional (3D) liver models including portal and hepatic vein anatomy were extracted from preoperative CT images. The 3D models, intraoperative subjects, and electromagnetic tracking information of the puncture needles were integrated through system calibration. In the AR head-mounted display, the 3D models were overlaid on the subjects: a liver phantom in the first phase and live beagle dogs in the second phase. One life-size liver phantom and 9 beagle dogs were used in the experiments. Imaging after puncture was performed to validate whether the needle tip had successfully accessed the target vein. RESULTS Endovascular punctures of the portal vein of the liver phantom were repeated 30 times under AR guidance, and the puncture needle successfully accessed the target vein on every attempt. In the live canine experiments, the punctures succeeded within 2 attempts in 7 beagle dogs and on the first attempt in the remaining 2 dogs. The needle puncture time from hepatic vein to portal vein was 5-10 s in the phantom experiments and 10-30 s in the canine experiments. CONCLUSIONS The feasibility of AR-based navigation facilitating accurate and successful portal vein access in preclinical models of TIPS was validated.
Affiliation(s)
- Jian Yang, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jianjun Zhu, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Daniel Y Sze, Division of Interventional Radiology, Stanford University School of Medicine, Palo Alto, California
- Li Cui, Department of Interventional Radiology, Chinese PLA General Hospital, Fuxing Road 28, Haidian District, Beijing 100853, China
- Xiaohui Li, Department of Interventional Radiology, Chinese PLA General Hospital, Fuxing Road 28, Haidian District, Beijing 100853, China
- Yanhua Bai, Department of Interventional Radiology, Chinese PLA General Hospital, Fuxing Road 28, Haidian District, Beijing 100853, China
- Danni Ai, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Jingfan Fan, Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing, China
- Hong Song, School of Computer Science and Technology, Beijing Institute of Technology, Beijing, China
- Feng Duan, Department of Interventional Radiology, Chinese PLA General Hospital, Fuxing Road 28, Haidian District, Beijing 100853, China
41
Zhang X, Ai D, Zhao W, Zhao K. Efficacy and Safety of Large-field Postoperative Radiotherapy Using Three-dimensional Radiation Technique for Local Advanced Thoracic Esophageal Squamous Cell Carcinoma: A Phase II Clinical Trial. Int J Radiat Oncol Biol Phys 2020. [DOI: 10.1016/j.ijrobp.2020.07.1795]
42
Zhou R, Hao S, Zeng Y, Ai D, Zhu H, Liu Q, Deng J, Zhao K, Chen Y. NEIL1 rs4462560 Affects Acute Radiation-Induced Lung Injury Via MAPK/JNK Pathway. Int J Radiat Oncol Biol Phys 2020. [DOI: 10.1016/j.ijrobp.2020.07.1602]
43
Ai D, Ye J, Chen Y, Liu Q, Zheng X, Yunhai L, Wei S, Li J, Lin Q, Luo H, Cao J, Zhou J, Huang G, Fan M, Wu K, Yang H, Zhu Z, Zhao W, Li L, Zhao K. Final Results of a Phase III Randomized Trial of Comparison of Three Paclitaxel-based Regimens Concurrent with Radiotherapy for Patients with Local Advanced Esophageal Squamous Cell Carcinoma (ESO-Shanghai2). Int J Radiat Oncol Biol Phys 2020. [DOI: 10.1016/j.ijrobp.2020.07.2158] [Indexed: 11/16/2022]
44
Ai D, Zhao Z, Fan J, Song H, Qu X, Xian J, Yang J. Spatial probabilistic distribution map-based two-channel 3D U-net for visual pathway segmentation. Pattern Recognit Lett 2020. [DOI: 10.1016/j.patrec.2020.09.003] [Indexed: 10/23/2022]
45
Chu Y, Yang X, Li H, Ai D, Ding Y, Fan J, Song H, Yang J. Multi-level feature aggregation network for instrument identification of endoscopic images. Phys Med Biol 2020; 65:165004. [PMID: 32344381 DOI: 10.1088/1361-6560/ab8dda] [Indexed: 11/12/2022]
Abstract
Identification of surgical instruments is crucial for understanding surgical scenarios and providing assistance in endoscopic image-guided surgery. This study proposes a novel multilevel feature-aggregated deep convolutional neural network (MLFA-Net) for identifying surgical instruments in endoscopic images. First, a global feature augmentation layer is created on the top layer of the backbone to improve the localization ability of object identification by boosting high-level semantic information into the feature flow network. Second, a modified interaction path for cross-channel features is proposed to increase the nonlinear combination of features at the same level and improve the efficiency of information propagation. Third, a multiview feature fusion branch is built to aggregate location-sensitive information of the same level across different views, increase the information diversity of features, and enhance object localization. By exploiting this latent information, the proposed multilevel feature aggregation can accomplish multitask instrument identification with a single network. Three tasks are handled: object detection, which classifies the type of instrument and locates its border; mask segmentation, which detects the instrument shape; and pose estimation, which detects the keypoints of instrument parts. Experiments are performed on laparoscopic images from the MICCAI 2017 Endoscopic Vision Challenge, with mean average precision (AP) and average recall (AR) used to quantify the segmentation and pose estimation results. For bounding box regression, the AP and AR are 79.1% and 63.2%; for mask segmentation, 78.1% and 62.1%; and for pose estimation, 67.1% and 55.7%, respectively.
The experiments demonstrate that our method efficiently improves instrument recognition accuracy in endoscopic images and outperforms other state-of-the-art methods.
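The AP figures quoted above come from precision-recall analysis. As a hedged illustration, a basic single-class average precision can be computed from detection scores and ground-truth labels; the function name and the toy inputs below are illustrative, and the challenge evaluation additionally averages over IoU thresholds, which this sketch omits.

```python
import numpy as np

# Minimal sketch of single-class average precision (AP): rank detections by
# confidence, then average the precision measured at the rank of each true
# positive. This is only the core PR-curve computation, not COCO-style AP.
def average_precision(scores, labels):
    order = np.argsort(-np.asarray(scores, dtype=float))
    labels = np.asarray(labels)[order]          # labels in descending-score order
    tp = np.cumsum(labels)                      # true positives found so far
    precision = tp / np.arange(1, len(labels) + 1)
    # Average the precision values at the ranks where a true positive occurs.
    return float((precision * labels).sum() / labels.sum())

# Four detections, one false positive ranked second.
ap = average_precision([0.9, 0.8, 0.7, 0.6], [1, 0, 1, 1])
```

Here the precisions at the three true positives are 1, 2/3, and 3/4, so the AP is their mean, about 0.806.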
Affiliation(s)
- Yakui Chu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, People's Republic of China. Authors contributed equally to this article.
46
Chu Y, Li H, Li X, Ding Y, Yang X, Ai D, Chen X, Wang Y, Yang J. Endoscopic image feature matching via motion consensus and global bilateral regression. Comput Methods Programs Biomed 2020; 190:105370. [PMID: 32036206 DOI: 10.1016/j.cmpb.2020.105370] [Indexed: 06/10/2023]
Abstract
BACKGROUND AND OBJECTIVE Feature matching of endoscopic images is of crucial importance in many clinical applications, such as object tracking and surface reconstruction. However, in the presence of low texture, specular reflections, and deformation, feature matching methods designed for natural scenes face great challenges in minimally invasive surgery (MIS) scenarios. We propose a novel motion consensus-based method for endoscopic image feature matching to address these problems. METHODS Our method starts by correcting radial distortion with a spherical projection model and removing specular reflection regions with an adaptive detection method, which eliminates image distortion and reduces the number of outliers. We solve the matching problem with a two-stage strategy that progressively estimates a consensus of inliers, yielding a precisely smoothed motion field. First, we construct a spatial motion field from candidate feature matches and estimate its maximum posterior with the expectation-maximization algorithm, which is computationally efficient and quickly obtains a smoothed motion field. Second, we extend the smoothed motion field to the affine domain and refine it with bilateral regression to preserve locally subtle motions. True matches can then be identified by checking the difference between each feature's motion and the estimated field. RESULTS Evaluations are conducted on two simulated deformation datasets (218 images) and four different types of endoscopic datasets (1032 images). Our method is compared with three other state-of-the-art methods and achieves the best performance on affine transformation and nonrigid deformation simulations, with inlier ratios of 86.7% and 94.3%, sensitivities of 90.0% and 96.2%, precisions of 88.2% and 93.9%, and F1-scores of 89.1% and 95.0%, respectively.
In evaluations on clinical datasets, the proposed method achieves an average reprojection error of 3.7 pixels and consistent performance in multi-image correspondence across sequential images. Furthermore, we present a surface reconstruction result from rhinoscopic images to validate the reliability of our method, which shows high-quality feature matching. CONCLUSIONS The proposed motion consensus-based feature matching method proves effective and robust for endoscopic image correspondence, demonstrating its capability to generate reliable feature matches for surface reconstruction and other meaningful applications in MIS scenarios.
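The core consensus idea, keeping matches whose motion agrees with a smoothed motion field, can be sketched in a much-reduced form. This is not the paper's EM-plus-bilateral-regression pipeline: a simple Gaussian-weighted field stands in for both stages, and the function name, bandwidth, and threshold are illustrative assumptions.

```python
import numpy as np

# Reduced sketch of motion-consensus matching: estimate a smoothed motion
# field from all candidate matches, then keep matches whose own motion
# vector stays close to the field.
def consensus_inliers(src, dst, sigma=30.0, thresh=10.0):
    """src, dst: (N, 2) arrays of matched keypoint coordinates."""
    motion = dst - src                                  # candidate motion vectors
    d2 = ((src[:, None, :] - src[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))                # spatial neighbourhood weights
    field = (w @ motion) / w.sum(axis=1, keepdims=True) # smoothed motion field
    residual = np.linalg.norm(motion - field, axis=1)
    return residual < thresh                            # True where motion is consistent

# Synthetic example: one smooth global motion plus a gross outlier.
rng = np.random.default_rng(0)
src = rng.uniform(0.0, 100.0, size=(60, 2))
dst = src + np.array([5.0, 2.0])
dst[0] += np.array([60.0, -50.0])                       # inject an outlier match
mask = consensus_inliers(src, dst)
```

The outlier's motion deviates strongly from the local field and is rejected, while matches following the shared motion are retained.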
Affiliation(s)
- Yakui Chu
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Heng Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China.
- Xu Li
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Yuan Ding
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Xilin Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Xiaohong Chen
- Department of Otolaryngology, Head and Neck Surgery, Beijing Tongren Hospital, Beijing 100730, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Electronics, Beijing Institute of Technology, Beijing 100081, China.
47
Fu T, Yang J, Li Q, Ai D, Song H, Jiang Y, Wang Y, Frangi AF. Groupwise registration with global-local graph shrinkage in atlas construction. Med Image Anal 2020; 64:101711. [PMID: 32585570 DOI: 10.1016/j.media.2020.101711] [Indexed: 11/30/2022]
Abstract
Graph-based groupwise registration methods are widely used in atlas construction. Given a group of images, a graph is built whose nodes represent the images and whose edges represent geodesic paths between nodes. The distribution of images on an image manifold is explored through edge traversal in the graph. The final atlas is a mean image at the population center of the distribution on the manifold. Warping all images toward the mean image amounts to dynamic graph shrinkage, in which nodes move closer to each other. Most conventional groupwise registration frameworks construct and shrink a graph without considering the local distribution of images on the dataset manifold or the local structure variations between image pairs. Neglecting this local information fundamentally decreases accuracy and efficiency when population atlases are built for organs with large inter-subject anatomical variability. To overcome this problem, this paper proposes a global-local graph shrinkage approach that can generate an accurate atlas. A connected graph is constructed automatically based on global similarities across the images to explore the global distribution. A local image distribution obtained by image clustering is used to simplify the edges of the constructed graph. Subsequently, local image similarities refine the deformation estimated through global image similarity for each image warped along the graph edges. Through this image warping, the simplified graph shrinks gradually to yield an atlas that respects both global and local features. The proposed method is evaluated on 61 synthetic and 20 clinical liver datasets, and the results are compared with those of six state-of-the-art groupwise registration methods. The experimental results show that the proposed method outperforms non-global-local approaches in terms of accuracy.
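As a toy illustration of the graph view described above, feature vectors can stand in for images and Euclidean distance for image dissimilarity; the "population center" is then the node with the smallest total geodesic distance to all others. The construction below (k-nearest-neighbour graph plus Floyd-Warshall shortest paths) is schematic and not the paper's method; all names are illustrative.

```python
import numpy as np

# Toy sketch of the graph formulation: nodes are images (here, 1-D feature
# vectors), edges connect each node to its k nearest neighbours, and the
# atlas candidate is the node minimising total geodesic distance.
def population_center(features, k=3):
    n = len(features)
    d = np.linalg.norm(features[:, None] - features[None, :], axis=-1)
    g = np.full((n, n), np.inf)
    np.fill_diagonal(g, 0.0)
    for i in range(n):                          # sparse k-NN similarity graph
        for j in np.argsort(d[i])[1:k + 1]:
            g[i, j] = g[j, i] = d[i, j]
    for m in range(n):                          # Floyd-Warshall geodesics
        g = np.minimum(g, g[:, [m]] + g[[m], :])
    return int(np.argmin(g.sum(axis=1)))        # most central node

# Five "images" along a 1-D manifold; the middle one is most central.
center = population_center(np.array([[0.0], [1.0], [2.0], [3.0], [4.0]]))
```

For points spread along a line, the middle node minimises the summed geodesic distances, so it would be selected as the atlas candidate.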
Affiliation(s)
- Tianyu Fu
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China; Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Jian Yang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China.
- Qin Li
- School of Life Science, Beijing Institute of Technology, Beijing 100081, China
- Danni Ai
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Hong Song
- School of Software, Beijing Institute of Technology, Beijing 100081, China
- Yurong Jiang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Yongtian Wang
- Beijing Engineering Research Center of Mixed Reality and Advanced Display, School of Optics and Photonics, Beijing Institute of Technology, Beijing 100081, China
- Alejandro F Frangi
- Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), School of Computing and School of Medicine, University of Leeds, Leeds, UK; Leeds Institute of Cardiovascular and Metabolic Medicine, School of Medicine, University of Leeds, Leeds, UK; Medical Imaging Research Center (MIRC), University Hospital Gasthuisberg, Cardiovascular Sciences and Electrical Engineering Departments, KU Leuven, Leuven, Belgium
48
Wu C, Qiao Z, Zhang N, Li X, Fan J, Song H, Ai D, Yang J, Huang Y. Phase unwrapping based on a residual en-decoder network for phase images in Fourier domain Doppler optical coherence tomography. Biomed Opt Express 2020; 11:1760-1771. [PMID: 32341846 PMCID: PMC7173896 DOI: 10.1364/boe.386101] [Indexed: 06/01/2023]
Abstract
To solve the phase unwrapping problem for phase images in Fourier-domain Doppler optical coherence tomography (DOCT), we propose a deep learning-based residual en-decoder network (REDN) method. In our approach, we reformulate obtaining the true phase as finding an integer multiple of 2π at each pixel by semantic segmentation. The proposed REDN architecture provides recognition performance with pixel-level accuracy. To address the lack of noise-free and wrapping-free phase images from DOCT systems for training, we used simulated images synthesized with the background noise features of DOCT phase images. An evaluation study was performed on simulated images and on DOCT phase images of a phantom (milk flowing in a plastic tube) and a mouse artery. A comparison study was also performed against the recently proposed deep learning-based DeepLabV3+ and PhaseNet methods for signal phase unwrapping and the traditional modified networking programming (MNP) method. Both visual inspection and quantitative evaluation based on accuracy, specificity, sensitivity, root-mean-square error, total variation, and processing time demonstrate the robustness, effectiveness, and superiority of our method. The proposed REDN method will benefit accurate and fast diagnosis and evaluation based on DOCT phase images when the detected phase is wrapped, and will enrich the deep learning-based image processing platform for DOCT images.
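The "integer multiple of 2π" reformulation can be illustrated on a 1-D synthetic signal. The paper's REDN predicts these integer multiples per pixel for 2-D DOCT images via semantic segmentation; the sketch below only demonstrates the underlying relation, using NumPy's classical 1-D unwrapping.

```python
import numpy as np

# The unwrapping relation: true_phase = wrapped_phase + 2*pi*k, with k an
# integer at each sample. np.unwrap recovers it for a smooth 1-D signal.
true_phase = np.linspace(0.0, 6.0 * np.pi, 200)   # smooth ground-truth phase
wrapped = np.angle(np.exp(1j * true_phase))       # wrapped into (-pi, pi]

unwrapped = np.unwrap(wrapped)                    # classical 1-D unwrapping
k = np.round((unwrapped - wrapped) / (2.0 * np.pi)).astype(int)

assert np.allclose(unwrapped, wrapped + 2.0 * np.pi * k)
assert np.allclose(unwrapped, true_phase)
```

Casting the problem as predicting the integer map k (here ranging from 0 to 3) is what lets a segmentation network perform unwrapping with pixel-level accuracy.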
Affiliation(s)
- Chuanchao Wu
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Zhengyu Qiao
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Nan Zhang
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Xiaochen Li
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Jingfan Fan
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Hong Song
- School of Computer Science and Technology, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Danni Ai
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Jian Yang
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
- Yong Huang
- School of Optics and Photonics, Beijing Institute of Technology, No. 5 South Zhongguancun Street, Haidian, Beijing 100081, China
49
Zhu J, Fan J, Guo S, Ai D, Song H, Wang C, Zhou S, Yang J. Heuristic tree searching for pose-independent 3D/2D rigid registration of vessel structures. Phys Med Biol 2020; 65:055010. [DOI: 10.1088/1361-6560/ab6b43] [Indexed: 11/12/2022]
50
Kang R, Ai D, Qu G, Li Q, Li X, Jiang Y, Huang Y, Song H, Wang Y, Yang J. Prior information constrained alternating direction method of multipliers for longitudinal compressive sensing MR imaging. Neurocomputing 2020. [DOI: 10.1016/j.neucom.2019.09.057] [Indexed: 10/26/2022]