1. Xie W, Chen P, Li Z, Wang X, Wang C, Zhang L, Wu W, Xiang J, Wang Y, Zhong D. A two-stage deep learning network for automated femoral segmentation in bilateral lower limb CT scans. Sci Rep 2025;15:9198. [PMID: 40097821] [PMCID: PMC11914536] [DOI: 10.1038/s41598-025-94180-1]
Abstract
This study presents the development of a deep learning-based two-stage network designed for the efficient and precise segmentation of the femur in full lower limb CT images. The proposed network incorporates a dual-phase approach: rapid delineation of regions of interest followed by semantic segmentation of the femur. The experimental dataset comprises 100 samples obtained from a hospital, partitioned into 85 for training, 8 for validation, and 7 for testing. In the first stage, the model achieves an average Intersection over Union of 0.9671 and a mean Average Precision of 0.9656, effectively delineating the femoral region with high accuracy. During the second stage, the network attains an average Dice coefficient of 0.953, sensitivity of 0.965, specificity of 0.998, and pixel accuracy of 0.996, ensuring precise segmentation of the femur. When compared to the single-stage SegResNet architecture, the proposed two-stage model demonstrates faster convergence during training, reduced inference times, higher segmentation accuracy, and overall superior performance. Comparative evaluations against the TransUnet model further highlight the network's notable advantages in accuracy and robustness. In summary, the proposed two-stage network offers an efficient, accurate, and autonomous solution for femur segmentation in large-scale and complex medical imaging datasets. Requiring relatively modest training and computational resources, the model exhibits significant potential for scalability and clinical applicability, making it a valuable tool for advancing femoral image segmentation and supporting diagnostic workflows.
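The two reported stages are scored with different metrics: box-level IoU and mAP for the region-of-interest detection stage, and the Dice coefficient for the pixel-level femur mask. Below is a minimal NumPy sketch of both metrics; the box coordinates, mask shapes and toy values are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def box_iou(box_a, box_b):
    """IoU of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    x2, y2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

def dice_coefficient(pred, target, eps=1e-7):
    """Dice overlap of two binary masks of identical shape."""
    pred, target = pred.astype(bool), target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    return (2.0 * inter + eps) / (pred.sum() + target.sum() + eps)

# Toy example: a predicted ROI box and femur mask versus hypothetical ground truth.
print(box_iou((10, 20, 110, 220), (12, 25, 108, 215)))
print(dice_coefficient(np.ones((64, 64)), np.ones((64, 64))))
```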
Affiliation(s)
- Wenqing Xie
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- Peng Chen
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- Zhigang Li
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- Xiaopeng Wang
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- Chenggong Wang
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- Lin Zhang
- Changzhou Jinse Medical Information Technology Co., Ltd, Changzhou, 213000, Jiangsu, China
- Wenhao Wu
- Changzhou Jinse Medical Information Technology Co., Ltd, Changzhou, 213000, Jiangsu, China
- Junjie Xiang
- Changzhou Jinse Medical Information Technology Co., Ltd, Changzhou, 213000, Jiangsu, China
- Yiping Wang
- Changzhou Jinse Medical Information Technology Co., Ltd, Changzhou, 213000, Jiangsu, China
- Da Zhong
- Department of Orthopedics, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
- National Clinical Research Center for Geriatric Disorders, Xiangya Hospital, Central South University, Changsha, 410008, Hunan, China
2. Guidetti M, Malloy P, Alter TD, Newhouse AC, Nho SJ, Espinoza Orías AA. Noninvasive shape-fitting method quantifies cam morphology in femoroacetabular impingement syndrome: Implications for diagnosis and surgical planning. J Orthop Res 2022;41:1256-1265. [PMID: 36227086] [DOI: 10.1002/jor.25469]
Abstract
There are considerable limitations associated with the standard 2D imaging currently used for the diagnosis and surgical planning of cam-type femoroacetabular impingement syndrome (FAIS). The aim of this study was to determine the accuracy of a new patient-specific shape-fitting method that quantifies cam morphology in 3D based solely on preoperative MRI. Preoperative and postoperative 1.5T MRI scans were performed on n = 15 patients to generate 3D models of the proximal femur, in turn used to create the actual and the virtual cam. The actual cams were reconstructed by subtracting the postoperative from the preoperative 3D model and used as reference, while the virtual cams were generated by subtracting the preoperative 3D model from the virtual shape template produced with the shape-fitting method based solely on preoperative MRI scans. The accuracy of the shape-fitting method was tested on all patients by evaluating the agreement between the metrics of height, surface area, and volume that quantified the virtual and actual cams. The shape-fitting method achieved a 97.8% average level of agreement between these metrics. In conclusion, the shape-fitting technique is a noninvasive and patient-specific tool for the quantification and localization of cam morphology. Future studies will include the implementation of the technique within clinically based software for diagnosis and surgical planning for cam-type FAIS.
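The headline result is the level of agreement between virtual- and actual-cam metrics (height, surface area, volume). The sketch below shows one plausible way to compute such an agreement figure; the symmetric-percent-difference formula and the example metric values are assumptions for illustration, not the authors' definition or data.

```python
import numpy as np

def percent_agreement(virtual, actual):
    """Agreement of paired metrics as 100% minus the symmetric percent difference.
    This formula is an assumption for illustration, not the study's definition."""
    virtual, actual = np.asarray(virtual, float), np.asarray(actual, float)
    sym_diff = np.abs(virtual - actual) / ((np.abs(virtual) + np.abs(actual)) / 2.0)
    return 100.0 * (1.0 - sym_diff)

# Hypothetical cam metrics for one patient: height (mm), surface area (mm^2), volume (mm^3).
virtual_cam = [4.1, 612.0, 980.0]
actual_cam = [4.0, 598.0, 1015.0]
print(percent_agreement(virtual_cam, actual_cam).mean())
```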
Affiliation(s)
- Martina Guidetti
- Section of Young Adult Hip Surgery, Department of Orthopedic Surgery, Division of Sports Medicine, Rush Medical College of Rush University, Rush University Medical Center, Chicago, Illinois, USA
- Philip Malloy
- Section of Young Adult Hip Surgery, Department of Orthopedic Surgery, Division of Sports Medicine, Rush Medical College of Rush University, Rush University Medical Center, Chicago, Illinois, USA
- Department of Physical Therapy, Arcadia University, Glenside, Pennsylvania, USA
- Thomas D Alter
- Section of Young Adult Hip Surgery, Department of Orthopedic Surgery, Division of Sports Medicine, Rush Medical College of Rush University, Rush University Medical Center, Chicago, Illinois, USA
- Alexander C Newhouse
- Section of Young Adult Hip Surgery, Department of Orthopedic Surgery, Division of Sports Medicine, Rush Medical College of Rush University, Rush University Medical Center, Chicago, Illinois, USA
- Shane J Nho
- Section of Young Adult Hip Surgery, Department of Orthopedic Surgery, Division of Sports Medicine, Rush Medical College of Rush University, Rush University Medical Center, Chicago, Illinois, USA
- Alejandro A Espinoza Orías
- Section of Young Adult Hip Surgery, Department of Orthopedic Surgery, Division of Sports Medicine, Rush Medical College of Rush University, Rush University Medical Center, Chicago, Illinois, USA
3. Guidetti M, Malloy P, Alter TD, Newhouse AC, Espinoza Orías AA, Inoue N, Nho SJ. MRI- and CT-based metrics for the quantification of arthroscopic bone resections in femoroacetabular impingement syndrome. J Orthop Res 2022;40:1174-1181. [PMID: 34192370] [DOI: 10.1002/jor.25139]
Abstract
The purpose of this in vitro study was to quantify the bone resected from the proximal femur during hip arthroscopy using metrics generated from magnetic resonance imaging (MRI) and computed tomography (CT) reconstructed three-dimensional (3D) bone models. Seven cadaveric hemi-pelvises underwent both a 1.5 T MRI and a CT scan before and following an arthroscopic proximal femoral osteochondroplasty. The images from MRI and CT were segmented to generate 3D proximal femoral surface models. A validated 3D-3D registration method was used to compare surface-to-surface distances between the 3D models before and following surgery. The new metrics of maximum height, mean height, surface area, and volume were computed to quantify bone resected during osteochondroplasty. Stability of the metrics across imaging modalities was established through paired-sample t-tests and bivariate correlation. Bivariate correlation analyses indicated strong correlations between all metrics (r = 0.728-0.878) computed from MRI- and CT-derived models. There were no differences in the MRI- and CT-based metrics used to quantify bone resected during femoral osteochondroplasty. Preoperative and postoperative MRI- and CT-derived 3D bone models can be used to quantify bone resected during femoral osteochondroplasty, without significant differences between the imaging modalities.
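The cross-modality stability analysis rests on paired-sample t-tests and bivariate (Pearson) correlation between MRI- and CT-derived metrics. A minimal SciPy sketch is shown below; the paired volume values are synthetic stand-ins, not the study data.

```python
import numpy as np
from scipy import stats

# Hypothetical resection volumes (mm^3) from paired MRI- and CT-derived 3D models
# of the same seven specimens; values are illustrative only.
mri_volume = np.array([812.0, 640.0, 955.0, 720.0, 880.0, 605.0, 760.0])
ct_volume = np.array([798.0, 655.0, 940.0, 735.0, 872.0, 618.0, 770.0])

t_stat, p_paired = stats.ttest_rel(mri_volume, ct_volume)  # paired-sample t-test
r, p_corr = stats.pearsonr(mri_volume, ct_volume)          # bivariate correlation

print(f"paired t-test: t={t_stat:.2f}, p={p_paired:.3f}")
print(f"Pearson correlation: r={r:.3f}, p={p_corr:.3f}")
```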
Affiliation(s)
- Martina Guidetti
- Section of Young Adult Hip Surgery, Division of Sports Medicine, Department of Orthopedic Surgery, Rush Medical College of Rush University, Rush University Medical Center, Chicago, Illinois, USA
- Philip Malloy
- Section of Young Adult Hip Surgery, Division of Sports Medicine, Department of Orthopedic Surgery, Rush Medical College of Rush University, Rush University Medical Center, Chicago, Illinois, USA
- Department of Physical Therapy, Arcadia University, Glenside, Pennsylvania, USA
- Thomas D Alter
- Section of Young Adult Hip Surgery, Division of Sports Medicine, Department of Orthopedic Surgery, Rush Medical College of Rush University, Rush University Medical Center, Chicago, Illinois, USA
- Alexander C Newhouse
- Section of Young Adult Hip Surgery, Division of Sports Medicine, Department of Orthopedic Surgery, Rush Medical College of Rush University, Rush University Medical Center, Chicago, Illinois, USA
- Alejandro A Espinoza Orías
- Section of Young Adult Hip Surgery, Division of Sports Medicine, Department of Orthopedic Surgery, Rush Medical College of Rush University, Rush University Medical Center, Chicago, Illinois, USA
- Nozomu Inoue
- Section of Young Adult Hip Surgery, Division of Sports Medicine, Department of Orthopedic Surgery, Rush Medical College of Rush University, Rush University Medical Center, Chicago, Illinois, USA
- Shane J Nho
- Section of Young Adult Hip Surgery, Division of Sports Medicine, Department of Orthopedic Surgery, Rush Medical College of Rush University, Rush University Medical Center, Chicago, Illinois, USA
4. Deng Y, Wang L, Zhao C, Tang S, Cheng X, Deng HW, Zhou W. A deep learning-based approach to automatic proximal femur segmentation in quantitative CT images. Med Biol Eng Comput 2022;60:1417-1429. [PMID: 35322343] [DOI: 10.1007/s11517-022-02529-9]
Abstract
Automatic CT segmentation of the proximal femur has great potential for use in orthopedic diseases, especially in imaging-based assessments of hip fracture risk. In this study, we proposed a deep learning-based approach for the fast and automatic extraction of the periosteal and endosteal contours of the proximal femur in order to differentiate the cortical and trabecular bone compartments. A three-dimensional (3D) end-to-end fully convolutional neural network (CNN), which can better combine the information among neighboring slices and obtain more accurate segmentation results, was developed for our segmentation task. The separation of cortical and trabecular bone derived from the QCT software MIAF-Femur was used as the segmentation reference. Two models with the same network structure were trained, and they achieved a Dice similarity coefficient (DSC) of 97.82% and 96.53% for the periosteal and endosteal contours, respectively. MIAF-Femur takes half an hour to segment a case, whereas our CNN model takes only a few minutes. To verify the performance of our model for proximal femoral segmentation, we measured the volumes of different parts of the proximal femur and compared them with the ground truth; the relative errors of femur volume between the predicted results and the ground truth are all less than 5%. This approach is expected to be helpful for measuring the bone mineral densities of cortical and trabecular bone and for evaluating bone strength based on FEA.
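The reported sub-5% relative volume errors can be reproduced from binary segmentations with a few lines of NumPy, as sketched below; the toy masks and unit voxel volume are assumptions for illustration, not the study's data or pipeline.

```python
import numpy as np

def relative_volume_error(pred_mask, gt_mask, voxel_volume_mm3=1.0):
    """Percent difference between predicted and reference segmentation volumes."""
    v_pred = pred_mask.astype(bool).sum() * voxel_volume_mm3
    v_gt = gt_mask.astype(bool).sum() * voxel_volume_mm3
    return 100.0 * abs(v_pred - v_gt) / v_gt

# Toy 3D masks standing in for a periosteal segmentation and its reference.
gt = np.zeros((40, 40, 40), dtype=bool)
gt[5:35, 5:35, 5:35] = True
pred = np.zeros_like(gt)
pred[5:35, 5:35, 5:34] = True
print(f"relative volume error: {relative_volume_error(pred, gt):.2f}%")
```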
Affiliation(s)
- Yu Deng
- School of Automation, Xi'an University of Posts and Telecommunications, Xi'an, 710121, Shaanxi, China
- Ling Wang
- Department of Radiology, Beijing Jishuitan Hospital, Beijing, 100035, China
- Chen Zhao
- College of Computing, Michigan Technological University, Houghton, MI, 49931, USA
- Shaojie Tang
- School of Automation, Xi'an University of Posts and Telecommunications, Xi'an, 710121, Shaanxi, China
- Xi'an Key Laboratory of Advanced Controlling and Intelligent Processing (ACIP), Xi'an, 71021, Shaanxi, China
- Xiaoguang Cheng
- Department of Radiology, Beijing Jishuitan Hospital, Beijing, 100035, China
- Hong-Wen Deng
- Department of Biomedical Engineering, Tulane University, New Orleans, LA, 70118, USA
- Weihua Zhou
- College of Computing, Michigan Technological University, Houghton, MI, 49931, USA
5. Bekkouch IEI, Maksudov B, Kiselev S, Mustafaev T, Vrtovec T, Ibragimov B. Multi-landmark environment analysis with reinforcement learning for pelvic abnormality detection and quantification. Med Image Anal 2022;78:102417. [PMID: 35325712] [DOI: 10.1016/j.media.2022.102417]
Abstract
Morphological abnormalities of the femoroacetabular (hip) joint are among the most common human musculoskeletal disorders and often develop asymptomatically at early easily treatable stages. In this paper, we propose an automated framework for landmark-based detection and quantification of hip abnormalities from magnetic resonance (MR) images. The framework relies on a novel idea of multi-landmark environment analysis with reinforcement learning. In particular, we merge the concepts of the graphical lasso and Morris sensitivity analysis with deep neural networks to quantitatively estimate the contribution of individual landmark and landmark subgroup locations to the other landmark locations. Convolutional neural networks for image segmentation are utilized to propose the initial landmark locations, and landmark detection is then formulated as a reinforcement learning (RL) problem, where each landmark-agent can adjust its position by observing the local MR image neighborhood and the locations of the most-contributive landmarks. The framework was validated on T1-, T2- and proton density-weighted MR images of 260 patients with the aim to measure the lateral center-edge angle (LCEA), femoral neck-shaft angle (NSA), and the anterior and posterior acetabular sector angles (AASA and PASA) of the hip, and derive the quantitative abnormality metrics from these angles. The framework was successfully tested using the UNet and feature pyramid network (FPN) segmentation architectures for landmark proposal generation, and the deep Q-network (DeepQN), deep deterministic policy gradient (DDPG), twin delayed deep deterministic policy gradient (TD3), and actor-critic policy gradient (A2C) RL networks for landmark position optimization. The resulting overall landmark detection error of 1.5 mm and angle measurement error of 1.4° indicates a superior performance in comparison to existing methods. Moreover, the automatically estimated abnormality labels were in 95% agreement with those generated by an expert radiologist.
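Once landmark positions are fixed, the clinical angles (LCEA, NSA, AASA, PASA) reduce to angles between rays defined by landmark coordinates. The sketch below computes such an angle; the specific landmark choices and coordinates are simplified assumptions, not the paper's definitions.

```python
import numpy as np

def angle_deg(p_vertex, p_a, p_b):
    """Angle at p_vertex (degrees) between the rays toward p_a and p_b."""
    u = np.asarray(p_a, float) - np.asarray(p_vertex, float)
    v = np.asarray(p_b, float) - np.asarray(p_vertex, float)
    cos_theta = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    return np.degrees(np.arccos(np.clip(cos_theta, -1.0, 1.0)))

# Hypothetical 2D landmark positions (mm) on a coronal slice: the neck-shaft angle is
# approximated here as the angle at the neck/shaft junction between the neck axis and
# the shaft axis, a simplified stand-in for the paper's landmark definitions.
neck_shaft_junction = (60.0, 80.0)
femoral_head_centre = (40.0, 60.0)
distal_shaft_point = (62.0, 140.0)
print(f"NSA ~ {angle_deg(neck_shaft_junction, femoral_head_centre, distal_shaft_point):.1f} deg")
```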
Affiliation(s)
- Imad Eddine Ibrahim Bekkouch
- Sorbonne Center for Artificial Intelligence, Sorbonne University, Paris, France; Institute of Data Science and Artificial Intelligence, Innopolis University, Innopolis, Russia
- Bulat Maksudov
- Institute of Data Science and Artificial Intelligence, Innopolis University, Innopolis, Russia; Department of Computer Science, University College Dublin, Dublin, Ireland
- Semen Kiselev
- Institute of Data Science and Artificial Intelligence, Innopolis University, Innopolis, Russia
- Tamerlan Mustafaev
- Institute of Data Science and Artificial Intelligence, Innopolis University, Innopolis, Russia; Public Hospital #2, Department of Radiology, Kazan, Russia
- Tomaž Vrtovec
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia
- Bulat Ibragimov
- Faculty of Electrical Engineering, University of Ljubljana, Ljubljana, Slovenia; Department of Computer Science, University of Copenhagen, Copenhagen, Denmark
6. Facco G, Massetti D, Coppa V, Procaccini R, Greco L, Simoncini M, Mari A, Marinelli M, Gigante A. The use of 3D printed models for the pre-operative planning of surgical correction of pediatric hip deformities: a case series and concise review of the literature. Acta Bio-Medica: Atenei Parmensis 2022;92:e2021221. [PMID: 35075078] [PMCID: PMC8823571] [DOI: 10.23750/abm.v92i6.11703]
Abstract
BACKGROUND AND AIM Three-dimensional (3D) printing is increasingly used in the surgical planning of complex cases. The aim of this study is to describe the use of 3D printed models during surgical planning for the treatment of four pediatric hip deformity cases. In addition, pediatric pelvic deformities analyzed with 3D printed models are the subject of a concise review. METHODS All treated patients were female, with an average age of 5 years. The patients' dysplastic pelvises were 3D printed at real scale using processed files from computed tomography (CT) or magnetic resonance imaging (MRI). Data on 3D printing, surgery time, blood loss and fluoroscopy were recorded. RESULTS Zanoli-Pemberton or Ganz-Paley osteotomies were first performed on the four 3D printed models, and the real surgery was then performed in the operating room. The average time and cost to produce a 3D printed model were 17:26 h and €34.66, respectively. The average surgical duration was about 87.5 min, the average blood loss was 1.9 ml/dl, and the fluoroscopy time was 21 s. The MRI-based model proved inaccurate and more difficult to produce. Ten papers were selected for the concise literature review. CONCLUSIONS 3D printed models proved useful in reducing surgery time, blood loss and ionizing radiation, and they improved surgical outcomes. A 3D printed model is a valid tool for understanding complex anatomy and guiding surgical choices by allowing surgeons to carefully plan the surgery.
Affiliation(s)
- Giulia Facco
- Department of Clinical and Molecular Sciences, Università Politecnica delle Marche, Ancona, Italy
- Daniele Massetti
- Department of Orthopedic and Trauma Surgery, Ospedali Riuniti, Ancona, Italy
- Valentino Coppa
- Clinic of Adult and Paediatric Orthopaedics, Ospedali Riuniti, Ancona, Italy
- Roberto Procaccini
- Clinic of Adult and Paediatric Orthopaedics, Ospedali Riuniti, Ancona, Italy
- Luciano Greco
- Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Ancona, Italy
- Alberto Mari
- Health Physics Department, Ospedali Riuniti, Ancona, Italy
- Mario Marinelli
- Clinic of Adult and Paediatric Orthopaedics, Ospedali Riuniti, Ancona, Italy
- Antonio Gigante
- Department of Clinical and Molecular Sciences, Università Politecnica delle Marche, Ancona, Italy
7. Zeng G, Degonda C, Boschung A, Schmaranzer F, Gerber N, Siebenrock KA, Steppacher SD, Tannast M, Lerch TD. Three-Dimensional Magnetic Resonance Imaging Bone Models of the Hip Joint Using Deep Learning: Dynamic Simulation of Hip Impingement for Diagnosis of Intra- and Extra-articular Hip Impingement. Orthop J Sports Med 2021;9:23259671211046916. [PMID: 34938819] [PMCID: PMC8685729] [DOI: 10.1177/23259671211046916]
Abstract
Background: Dynamic 3-dimensional (3D) simulation of hip impingement enables better understanding of complex hip deformities in young adult patients with femoroacetabular impingement (FAI). Deep learning algorithms may improve magnetic resonance imaging (MRI) segmentation.
Purpose: (1) To evaluate the accuracy of 3D models created using convolutional neural networks (CNNs) for fully automatic MRI bone segmentation of the hip joint, (2) to correlate hip range of motion (ROM) between manual and automatic segmentation, and (3) to compare location of hip impingement in 3D models created using automatic bone segmentation in patients with FAI.
Study Design: Cohort study (diagnosis); Level of evidence, 3.
Methods: The authors retrospectively reviewed 31 hip MRI scans from 26 symptomatic patients (mean age, 27 years) with hip pain due to FAI. All patients had matched computed tomography (CT) and MRI scans of the pelvis and the knee. CT- and MRI-based osseous 3D models of the hip joint of the same patients were compared (MRI: T1 volumetric interpolated breath-hold examination high-resolution sequence; 0.8 mm³ isovoxel). CNNs were used to develop fully automatic bone segmentation of the hip joint, and the 3D models created using this method were compared with manual segmentation of CT- and MRI-based 3D models. Impingement-free ROM and location of hip impingement were calculated using previously validated collision detection software.
Results: The difference between the CT- and MRI-based 3D models was <1 mm, and the difference between fully automatic and manual segmentation of MRI-based 3D models was <1 mm. The correlation of automatic and manual MRI-based 3D models was excellent and significant for impingement-free ROM (r = 0.995; P < .001), flexion (r = 0.953; P < .001), and internal rotation at 90° of flexion (r = 0.982; P < .001). The correlation for impingement-free flexion between automatic MRI-based 3D models and CT-based 3D models was 0.953 (P < .001). The location of impingement was not significantly different between manual and automatic segmentation of MRI-based 3D models, and the location of extra-articular hip impingement was not different between CT- and MRI-based 3D models.
Conclusion: CNN can potentially be used in clinical practice to provide rapid and accurate 3D MRI hip joint models for young patients. The created models can be used for simulation of impingement during diagnosis of intra- and extra-articular hip impingement to enable radiation-free and patient-specific surgical planning for hip arthroscopy and open hip preservation surgery.
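The reported sub-millimetre differences between model pairs are typically computed as surface-to-surface distances between the two meshes. A minimal sketch using nearest-neighbour queries on vertex clouds is shown below; the random point clouds stand in for real model vertices and are an assumption for illustration, not the study's comparison pipeline.

```python
import numpy as np
from scipy.spatial import cKDTree

def surface_distances(points_a, points_b):
    """Symmetric nearest-neighbour distances between two surface point clouds (N x 3)."""
    d_ab, _ = cKDTree(points_b).query(points_a)  # each point in A to its closest point in B
    d_ba, _ = cKDTree(points_a).query(points_b)
    return np.concatenate([d_ab, d_ba])

# Toy stand-ins for vertices of CT- and MRI-based proximal femur models (mm).
rng = np.random.default_rng(0)
ct_vertices = rng.uniform(0, 50, size=(2000, 3))
mri_vertices = ct_vertices + rng.normal(0, 0.3, size=ct_vertices.shape)  # sub-mm offset

d = surface_distances(mri_vertices, ct_vertices)
print(f"mean surface distance: {d.mean():.2f} mm, max: {d.max():.2f} mm")
```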
Affiliation(s)
- Guodong Zeng
- Sitem Center for Translational Medicine and Biomedical Entrepreneurship, University of Bern, Switzerland
- Celia Degonda
- Department of Orthopedic Surgery, Inselspital, University of Bern, Bern, Switzerland
- Adam Boschung
- Department of Orthopedic Surgery, Inselspital, University of Bern, Bern, Switzerland
- Department of Diagnostic, Interventional and Paediatric Radiology, University of Bern, Inselspital, Bern, Switzerland
- Florian Schmaranzer
- Department of Orthopedic Surgery, Inselspital, University of Bern, Bern, Switzerland
- Department of Diagnostic, Interventional and Paediatric Radiology, University of Bern, Inselspital, Bern, Switzerland
- Nicolas Gerber
- Sitem Center for Translational Medicine and Biomedical Entrepreneurship, University of Bern, Switzerland
- Klaus A Siebenrock
- Department of Orthopedic Surgery, Inselspital, University of Bern, Bern, Switzerland
- Simon D Steppacher
- Department of Orthopedic Surgery, Inselspital, University of Bern, Bern, Switzerland
- Moritz Tannast
- Department of Orthopedic Surgery, Inselspital, University of Bern, Bern, Switzerland
- Department of Orthopaedic Surgery and Traumatology, Cantonal Hospital, University of Fribourg, Fribourg, Switzerland
- Till D Lerch
- Department of Orthopedic Surgery, Inselspital, University of Bern, Bern, Switzerland
- Department of Diagnostic, Interventional and Paediatric Radiology, University of Bern, Inselspital, Bern, Switzerland
8. Zhao C, Keyak JH, Tang J, Kaneko TS, Khosla S, Amin S, Atkinson EJ, Zhao LJ, Serou MJ, Zhang C, Shen H, Deng HW, Zhou W. ST-V-Net: incorporating shape prior into convolutional neural networks for proximal femur segmentation. Complex Intell Syst 2021;9:2747-2758. [PMID: 37304840] [PMCID: PMC10256660] [DOI: 10.1007/s40747-021-00427-5]
Abstract
We aim to develop a deep-learning-based method for automatic proximal femur segmentation in quantitative computed tomography (QCT) images. We proposed a spatial transformation V-Net (ST-V-Net), which contains a V-Net and a spatial transform network (STN) to extract the proximal femur from QCT images. The STN incorporates a shape prior into the segmentation network as a constraint and guidance for model training, which improves model performance and accelerates model convergence. Meanwhile, a multi-stage training strategy is adopted to fine-tune the weights of the ST-V-Net. We performed experiments using a QCT dataset of 397 subjects. In experiments on the entire cohort and then on male and female subjects separately, 90% of the subjects were used in ten-fold stratified cross-validation for training and the remaining subjects were used to evaluate model performance. On the entire cohort, the proposed model achieved a Dice similarity coefficient (DSC) of 0.9888, a sensitivity of 0.9966 and a specificity of 0.9988. Compared with V-Net, the Hausdorff distance was reduced from 9.144 to 5.917 mm, and the average surface distance was reduced from 0.012 to 0.009 mm using the proposed ST-V-Net. Quantitative evaluation demonstrated excellent performance of the proposed ST-V-Net for automatic proximal femur segmentation in QCT images. In addition, the proposed ST-V-Net shows how incorporating a shape prior into segmentation can further improve model performance.
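The Hausdorff distance used to compare V-Net and ST-V-Net can be computed directly from the two segmentation surfaces, as in the SciPy sketch below; the random point sets are stand-ins for predicted and reference femur surfaces and are not the study's data.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def hausdorff_mm(surface_a, surface_b):
    """Symmetric Hausdorff distance between two surface point sets (N x 3, in mm)."""
    d_ab = directed_hausdorff(surface_a, surface_b)[0]
    d_ba = directed_hausdorff(surface_b, surface_a)[0]
    return max(d_ab, d_ba)

# Toy point clouds standing in for predicted and reference proximal femur surfaces.
rng = np.random.default_rng(1)
reference = rng.uniform(0, 60, size=(1500, 3))
predicted = reference + rng.normal(0, 0.5, size=reference.shape)
print(f"Hausdorff distance: {hausdorff_mm(predicted, reference):.2f} mm")
```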
Affiliation(s)
- Chen Zhao
- Department of Applied Computing, Michigan Technological University, 1400 Townsend Dr, Houghton, MI 49931, USA
- Joyce H. Keyak
- Department of Radiological Sciences, Department of Mechanical and Aerospace Engineering, Department of Biomedical Engineering, and Chao Family Comprehensive Cancer Center, University of California, Irvine, Irvine, CA 92697, USA
- Jinshan Tang
- Department of Applied Computing, Michigan Technological University, 1400 Townsend Dr, Houghton, MI 49931, USA
- Center of Biocomputing and Digital Health, Institute of Computing and Cybersystems, and Health Research Institute, Michigan Technological University, Houghton, MI 49931, USA
- Tadashi S. Kaneko
- Department of Radiological Sciences, University of California, Irvine, Irvine, CA 92697, USA
- Sundeep Khosla
- Division of Endocrinology, Department of Medicine, Mayo Clinic, Rochester, MN, USA
- Shreyasee Amin
- Division of Epidemiology, Department of Health Sciences Research, and Division of Rheumatology, Department of Medicine, Mayo Clinic, Rochester, MN, USA
- Elizabeth J. Atkinson
- Division of Biomedical Statistics and Informatics, Department of Health Sciences Research, Mayo Clinic, Rochester, MN, USA
- Lan-Juan Zhao
- Division of Biomedical Informatics and Genomics, Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA 70112, USA
- Michael J. Serou
- Department of Radiology, Tulane University School of Medicine, New Orleans, LA 70112, USA
- Chaoyang Zhang
- School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS 39406, USA
- Hui Shen
- Division of Biomedical Informatics and Genomics, Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA 70112, USA
- Hong-Wen Deng
- Division of Biomedical Informatics and Genomics, Tulane Center of Biomedical Informatics and Genomics, Deming Department of Medicine, Tulane University School of Medicine, 1440 Canal Street, Suite 1610, New Orleans, LA 70112, USA
- Weihua Zhou
- Department of Applied Computing, Michigan Technological University, 1400 Townsend Dr, Houghton, MI 49931, USA
- Center of Biocomputing and Digital Health, Institute of Computing and Cybersystems, and Health Research Institute, Michigan Technological University, Houghton, MI 49931, USA
9. Semantic segmentation of the multiform proximal femur and femoral head bones with the deep convolutional neural networks in low quality MRI sections acquired in different MRI protocols. Comput Med Imaging Graph 2020;81:101715. [PMID: 32240933] [DOI: 10.1016/j.compmedimag.2020.101715]
Abstract
Medical image segmentation is one of the most crucial issues in medical image processing and analysis. In general, segmentation of the various structures in medical images is performed for further image analyses such as quantification, assessment, diagnosis, prognosis and classification. This paper presents a study on 2D semantic segmentation of the multiform, both spheric and aspheric, femoral head and proximal femur bones in magnetic resonance imaging (MRI) sections of patients with Legg-Calve-Perthes disease (LCPD) using deep convolutional neural networks (CNNs). The study used bilateral hip MRI sections acquired in the coronal plane. A key characteristic of these sections is that they are low-quality images obtained with different MRI protocols on three different MRI scanners with 1.5 T imaging capability. In the performance evaluations, promising segmentation results were achieved with deep CNNs on low-quality MRI sections acquired with different MRI protocols. A success rate of about 90% was observed for the semantic segmentation of the multiform femoral head and proximal femur bones in a total of 194 MRI sections obtained from 33 MRI sequences of 13 patients.
10. Holistic decomposition convolution for effective semantic segmentation of medical volume images. Med Image Anal 2019;57:149-164. [DOI: 10.1016/j.media.2019.07.003]
11. Cai J, He WG, Wang L, Zhou K, Wu TX. Osteoporosis Recognition in Rats under Low-Power Lens Based on Convexity Optimization Feature Fusion. Sci Rep 2019;9:10971. [PMID: 31358772] [PMCID: PMC6662810] [DOI: 10.1038/s41598-019-47281-7]
Abstract
Considering the poor medical conditions in some regions of China, this paper attempts to develop a simple way to extract and process bone features from blurry medical images and to improve the accuracy of osteoporosis diagnosis as much as possible. After reviewing previous studies on osteoporosis, especially those focusing on texture analysis, a convexity optimization model based on intra-class dispersion was proposed, which combines texture features and shape features. Experimental results show that the proposed model has a broader application scope than Lasso, a popular feature selection method that only supports generalized linear models. The research findings ensure the accuracy of osteoporosis diagnosis and show good potential for clinical application.
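The general idea of ranking texture and shape features by class dispersion can be illustrated with a Fisher-style score, sketched below. This is not the authors' convexity optimization model; the score definition and the synthetic feature matrix are assumptions for illustration only.

```python
import numpy as np

def dispersion_score(features, labels):
    """Per-feature ratio of between-class to within-class variance (Fisher-style score).
    Higher scores indicate features that separate the classes better."""
    features, labels = np.asarray(features, float), np.asarray(labels)
    overall_mean = features.mean(axis=0)
    between = np.zeros(features.shape[1])
    within = np.zeros(features.shape[1])
    for c in np.unique(labels):
        group = features[labels == c]
        between += len(group) * (group.mean(axis=0) - overall_mean) ** 2
        within += ((group - group.mean(axis=0)) ** 2).sum(axis=0)
    return between / (within + 1e-12)

# Hypothetical texture/shape features for osteoporotic (1) vs. control (0) samples.
rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (20, 4)), rng.normal([1.5, 0, 0.8, 0], 1, (20, 4))])
y = np.array([0] * 20 + [1] * 20)
print(dispersion_score(X, y))  # rank features by class separability
```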
Affiliation(s)
- Jie Cai
- School of Information Engineering, Guangdong Medical University, Zhanjiang, 524023, China
- Wen-Guang He
- School of Information Engineering, Guangdong Medical University, Zhanjiang, 524023, China
- Long Wang
- School of Information Engineering, Guangdong Medical University, Zhanjiang, 524023, China
- Ke Zhou
- School of Information Engineering, Guangdong Medical University, Zhanjiang, 524023, China
- Tian-Xiu Wu
- School of Basic Medical Science, Guangdong Medical University, Zhanjiang, 524023, China
12. Cam-type femoroacetabular impingement—correlations between alpha angle versus volumetric measurements and surgical findings. Eur Radiol 2019;29:3431-3440. [DOI: 10.1007/s00330-018-5968-z]
13. Damopoulos D, Lerch TD, Schmaranzer F, Tannast M, Chênes C, Zheng G, Schmid J. Segmentation of the proximal femur in radial MR scans using a random forest classifier and deformable model registration. Int J Comput Assist Radiol Surg 2019;14:545-561. [PMID: 30604143] [DOI: 10.1007/s11548-018-1899-z]
Abstract
BACKGROUND Radial 2D MRI scans of the hip are routinely used for the diagnosis of the cam type of femoroacetabular impingement (FAI) and of avascular necrosis (AVN) of the femoral head, both considered causes of hip joint osteoarthritis in young and active patients. A method for automated and accurate segmentation of the proximal femur from radial MRI scans could be very useful in both clinical routine and biomechanical studies. However, to our knowledge, no such method has been published before. PURPOSE The aims of this study are the development of a system for the segmentation of the proximal femur from radial MRI scans and the reconstruction of its 3D model that can be used for diagnosis and planning of hip-preserving surgery. METHODS The proposed system relies on (a) a random forest classifier and (b) the registration of a 3D template mesh of the femur to the radial slices based on a physically based deformable model. The inputs to the system are the radial slices and the manually specified positions of three landmarks. Our dataset consists of the radial MRI scans of 25 patients symptomatic of FAI or AVN and accompanying manual segmentations of the femur, treated as the ground truth. RESULTS The achieved segmentation of the proximal femur has an average Dice similarity coefficient (DSC) of 96.37 ± 1.55%, an average symmetric mean absolute distance (SMAD) of 0.94 ± 0.39 mm and an average Hausdorff distance of 2.37 ± 1.14 mm. In the femoral head subregion, the average SMAD is 0.64 ± 0.18 mm and the average Hausdorff distance is 1.41 ± 0.56 mm. CONCLUSIONS We validated a semiautomated method for the segmentation of the proximal femur from radial MR scans. A 3D model of the proximal femur is also reconstructed, which can be used for the planning of hip-preserving surgery.
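The first component of the pipeline is a random forest classifier applied to per-pixel features. The sketch below shows only that classification step on synthetic feature vectors; the feature choices, values and labels are assumptions for illustration, and the deformable model registration stage is not reproduced here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical per-pixel feature vectors (e.g. intensity, local mean, gradient magnitude)
# with binary femur/background labels; values are synthetic stand-ins for MRI features.
rng = np.random.default_rng(3)
n = 5000
femur = np.column_stack([rng.normal(0.7, 0.1, n), rng.normal(0.65, 0.1, n), rng.normal(0.2, 0.1, n)])
background = np.column_stack([rng.normal(0.3, 0.1, n), rng.normal(0.35, 0.1, n), rng.normal(0.5, 0.2, n)])
X = np.vstack([femur, background])
y = np.array([1] * n + [0] * n)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, y)
prob_femur = clf.predict_proba(X[:5])[:, 1]  # per-pixel femur probabilities
print(prob_femur)
```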
Affiliation(s)
- Dimitrios Damopoulos
- Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstrasse 78, 3014, Bern, Switzerland
- Till Dominic Lerch
- Department of Orthopaedic Surgery and Traumatology, Inselspital, University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
- Florian Schmaranzer
- Department of Orthopaedic Surgery and Traumatology, Inselspital, University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
- Moritz Tannast
- Department of Orthopaedic Surgery and Traumatology, Inselspital, University of Bern, Freiburgstrasse, 3010, Bern, Switzerland
- Christophe Chênes
- School of Health Sciences - Geneva, HES-SO University of Applied Sciences and Arts Western Switzerland, Avenue de Champel 47, 1206, Geneva, Switzerland
- Guoyan Zheng
- Institute for Surgical Technology and Biomechanics, University of Bern, Stauffacherstrasse 78, 3014, Bern, Switzerland
- Jérôme Schmid
- School of Health Sciences - Geneva, HES-SO University of Applied Sciences and Arts Western Switzerland, Avenue de Champel 47, 1206, Geneva, Switzerland
14. Deniz CM, Xiang S, Hallyburton RS, Welbeck A, Babb JS, Honig S, Cho K, Chang G. Segmentation of the Proximal Femur from MR Images using Deep Convolutional Neural Networks. Sci Rep 2018;8:16485. [PMID: 30405145] [PMCID: PMC6220200] [DOI: 10.1038/s41598-018-34817-6]
Abstract
Magnetic resonance imaging (MRI) has been proposed as a complementary method to measure bone quality and assess fracture risk. However, manual segmentation of MR images of bone is time-consuming, limiting the use of MRI measurements in clinical practice. The purpose of this paper is to present an automatic proximal femur segmentation method based on deep convolutional neural networks (CNNs). This study had institutional review board approval, and written informed consent was obtained from all subjects. A dataset of volumetric structural MR images of the proximal femur from 86 subjects was manually segmented by an expert. We performed experiments by training two different CNN architectures with varying numbers of initial feature maps, layers and dilation rates, and tested their segmentation performance against the gold standard of manual segmentation using four-fold cross-validation. Automatic segmentation of the proximal femur using CNNs achieved a high Dice similarity score of 0.95 ± 0.02, with precision = 0.95 ± 0.02 and recall = 0.95 ± 0.03. The high segmentation accuracy provided by CNNs has the potential to help bring structural MRI measurements of bone quality into clinical practice for the management of osteoporosis.
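Four-fold cross-validation over subjects, as used here to evaluate the CNNs, can be set up in a few lines with scikit-learn. The sketch below only produces the subject-level splits; the subject count matches the abstract, but the split, seed and training call are illustrative assumptions.

```python
import numpy as np
from sklearn.model_selection import KFold

# Subject-level four-fold cross-validation; each fold trains on ~75% of subjects
# and evaluates Dice/precision/recall on the held-out subjects.
subject_ids = np.arange(86)
kfold = KFold(n_splits=4, shuffle=True, random_state=0)

for fold, (train_idx, test_idx) in enumerate(kfold.split(subject_ids)):
    # Train the segmentation CNN on train_idx subjects, evaluate on test_idx subjects.
    print(f"fold {fold}: {len(train_idx)} training subjects, {len(test_idx)} test subjects")
```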
Affiliation(s)
- Cem M Deniz
- Department of Radiology, New York University School of Medicine, New York, NY, 10016, USA
- Bernard and Irene Schwartz Center for Biomedical Imaging, New York University School of Medicine, New York, NY, 10016, USA
- Siyuan Xiang
- Center for Data Science, New York University, New York, NY, 10012, USA
- Arakua Welbeck
- Bernard and Irene Schwartz Center for Biomedical Imaging, New York University School of Medicine, New York, NY, 10016, USA
- James S Babb
- Bernard and Irene Schwartz Center for Biomedical Imaging, New York University School of Medicine, New York, NY, 10016, USA
- Stephen Honig
- Osteoporosis Center, Hospital for Joint Diseases, New York University Langone Medical Center, New York, NY, 10003, USA
- Kyunghyun Cho
- Center for Data Science, New York University, New York, NY, 10012, USA
- Courant Institute of Mathematical Science, New York University, New York, NY, 10012, USA
- Gregory Chang
- Department of Radiology, New York University School of Medicine, New York, NY, 10016, USA
15. Hoving AM, Kraeima J, Schepers RH, Dijkstra H, Potze JH, Dorgelo B, Witjes MJH. Optimisation of three-dimensional lower jaw resection margin planning using a novel Black Bone magnetic resonance imaging protocol. PLoS One 2018;13:e0196059. [PMID: 29677217] [PMCID: PMC5909900] [DOI: 10.1371/journal.pone.0196059]
Abstract
BACKGROUND MRI is the optimal method for sensitive detection of tumour tissue and pre-operative staging in oral cancer. When jawbone resections are necessary, the current standard of care for oral tumour surgery in our hospital is 3D virtual planning from CT data, with 3D printed jawbone cutting guides designed from the CT data. Tumour margins are difficult to visualise on CT, whereas they are clearly visible on MRI scans. The aim of this study was to change the conventional CT-based workflow by developing a method for 3D MRI-based lower jaw models, in which MRI-based visualisation of the tumour aids the planning of bone resection margins. MATERIALS AND FINDINGS A workflow for MRI-based 3D surgical planning with bone cutting guides was developed using a four-step approach. Key MRI parameters were defined (phase 1), followed by application of selected Black Bone MRI sequences in healthy volunteers (phase 2). Three Black Bone MRI sequences were chosen for phase 3: standard, fat saturated, and an out-of-phase sequence. These protocols were validated by applying them to patients (n = 10) and comparing the results to corresponding CT data. The mean deviations between the MRI- and CT-based models were 0.63, 0.59 and 0.80 mm for the three evaluated Black Bone MRI sequences. Phase 4 entailed examination of the clinical value during surgery, using well-fitting printed bone cutting guides designed from MRI-based lower jaw models, in two patients with oral cancer. The mean deviation was 2.3 mm for the resection planes and 3.8 mm for the fibula segments, and the mean axis deviation of the fibula segments was 1.9°. CONCLUSIONS This study offers a method for 3D virtual resection planning and surgery using cutting guides based solely on MRI. No additional CT data are therefore required for 3D virtual planning in oral cancer surgery.
Affiliation(s)
- Astrid M. Hoving
- Department of Oral and Maxillofacial Surgery, University Medical Centre Groningen, Groningen, The Netherlands
- Joep Kraeima
- Department of Oral and Maxillofacial Surgery, University Medical Centre Groningen, Groningen, The Netherlands
- Rutger H. Schepers
- Department of Oral and Maxillofacial Surgery, University Medical Centre Groningen, Groningen, The Netherlands
- Hildebrand Dijkstra
- Department of Radiology, University Medical Centre Groningen, Groningen, The Netherlands
- Jan Hendrik Potze
- Department of Radiology, University Medical Centre Groningen, Groningen, The Netherlands
- Bart Dorgelo
- Department of Radiology, University Medical Centre Groningen, Groningen, The Netherlands
- Max J. H. Witjes
- Department of Oral and Maxillofacial Surgery, University Medical Centre Groningen, Groningen, The Netherlands
16. Latent3DU-net: Multi-level Latent Shape Space Constrained 3D U-net for Automatic Segmentation of the Proximal Femur from Radial MRI of the Hip. Machine Learning in Medical Imaging 2018. [DOI: 10.1007/978-3-030-00919-9_22]
17. Deep Learning-Based Automatic Segmentation of the Proximal Femur from MR Images. Advances in Experimental Medicine and Biology 2018;1093:73-79. [DOI: 10.1007/978-981-13-1396-7_6]
18. Zeng G, Yang X, Li J, Yu L, Heng PA, Zheng G. 3D U-net with Multi-level Deep Supervision: Fully Automatic Segmentation of Proximal Femur in 3D MR Images. Machine Learning in Medical Imaging 2017. [DOI: 10.1007/978-3-319-67389-9_32]