1
Udupa JK, Liu T, Jin C, Zhao L, Odhner D, Tong Y, Agrawal V, Pednekar G, Nag S, Kotia T, Goodman M, Wileyto EP, Mihailidis D, Lukens JN, Berman AT, Stambaugh J, Lim T, Chowdary R, Jalluri D, Jabbour SK, Kim S, Reyhan M, Robinson CG, Thorstad WL, Choi JI, Press R, Simone CB, Camaratta J, Owens S, Torigian DA. Combining natural and artificial intelligence for robust automatic anatomy segmentation: Application in neck and thorax auto-contouring. Med Phys 2022; 49:7118-7149. [PMID: 35833287] [PMCID: PMC10087050] [DOI: 10.1002/mp.15854]
Abstract
BACKGROUND Automatic segmentation of 3D objects in computed tomography (CT) is challenging. Current methods, based mainly on artificial intelligence (AI) and end-to-end deep learning (DL) networks, are weak in garnering high-level anatomic information, which leads to compromised efficiency and robustness. This can be overcome by incorporating natural intelligence (NI) into AI methods via computational models of human anatomic knowledge. PURPOSE We formulate a hybrid intelligence (HI) approach that integrates the complementary strengths of NI and AI for organ segmentation in CT images and illustrate performance in the application of radiation therapy (RT) planning via multisite clinical evaluation. METHODS The system employs five modules: (i) body region recognition, which automatically trims a given image to a precisely defined target body region; (ii) NI-based automatic anatomy recognition object recognition (AAR-R), which performs object recognition in the trimmed image without DL and outputs a localized fuzzy model for each object; (iii) DL-based recognition (DL-R), which refines the coarse recognition results of AAR-R and outputs a stack of 2D bounding boxes (BBs) for each object; (iv) model morphing (MM), which deforms the AAR-R fuzzy model of each object guided by the BBs output by DL-R; and (v) DL-based delineation (DL-D), which employs the object containment information provided by MM to delineate each object. NI from (ii), AI from (i), (iii), and (v), and their combination from (iv) facilitate the HI system. RESULTS The HI system was tested on 26 organs in neck and thorax body regions on CT images obtained prospectively from 464 patients in a study involving four RT centers. Data sets from one separate independent institution involving 125 patients were employed in training/model building for each of the two body regions, whereas 104 and 110 data sets from the 4 RT centers were utilized for testing on neck and thorax, respectively. 
In the testing data sets, 83% of the images had limitations such as streak artifacts, poor contrast, shape distortion, pathology, or implants. The contours output by the HI system were compared to contours drawn in clinical practice at the four RT centers, using an independently established ground-truth set of contours as reference. Three sets of measures were employed: accuracy via Dice coefficient (DC) and Hausdorff boundary distance (HD), subjective clinical acceptability via a blinded reader study, and efficiency via the human contouring time saved by the HI system. Overall, the HI system achieved a mean DC of 0.78 and 0.87 and a mean HD of 2.22 and 4.53 mm for neck and thorax, respectively. It significantly outperformed clinical contouring in accuracy and saved 70% of the human time required by clinical contouring, whereas acceptability scores varied significantly from site to site for both auto-contours and clinically drawn contours. CONCLUSIONS In the contouring task, the HI system exhibits the robustness of an expert human but is vastly more efficient. It appears to draw on NI where image information alone does not suffice: first for correct localization of the object, and then for precise delineation of its boundary.
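The two accuracy measures used in this evaluation, Dice coefficient and Hausdorff boundary distance, can be sketched for binary masks represented as voxel-coordinate sets (a minimal pure-Python illustration, not the authors' implementation; the 2D example masks are hypothetical):

```python
from itertools import product

def dice(a, b):
    """Dice coefficient between two voxel sets: 2|A∩B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b))

def hausdorff(a, b):
    """Symmetric Hausdorff distance between two point sets (Euclidean)."""
    def d(p, q):
        return sum((x - y) ** 2 for x, y in zip(p, q)) ** 0.5
    def directed(u, v):
        # farthest point of u from its nearest neighbor in v
        return max(min(d(p, q) for q in v) for p in u)
    return max(directed(a, b), directed(b, a))

# Two small 2D "masks" given as sets of pixel coordinates
A = {(i, j) for i, j in product(range(4), repeat=2)}     # 4x4 square
B = {(i, j) for i, j in product(range(1, 5), repeat=2)}  # same square shifted by (1, 1)
print(round(dice(A, B), 4))   # → 0.5625 (9 shared pixels out of 16 + 16)
print(hausdorff(A, B))        # → 1.4142135623730951 (corner pixels are √2 apart)
```

In 3D the same definitions apply to voxel sets; practical implementations compute HD on boundary voxels only and often use distance transforms for speed.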
Affiliation(s)
- Jayaram K. Udupa: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Tiange Liu: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA; School of Information Science and Engineering, Yanshan University, Qinhuangdao, China
- Chao Jin: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Liming Zhao: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dewey Odhner: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Yubing Tong: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Vibhu Agrawal: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Gargi Pednekar: Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Sanghita Nag: Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Tarun Kotia: Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- E. Paul Wileyto: Department of Biostatistics and Epidemiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dimitris Mihailidis: Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- John Nicholas Lukens: Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Abigail T. Berman: Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Joann Stambaugh: Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Tristan Lim: Department of Radiation Oncology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Rupa Chowdary: Department of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Dheeraj Jalluri: Department of Medicine, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Salma K. Jabbour: Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Sung Kim: Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Meral Reyhan: Department of Radiation Oncology, Rutgers University, New Brunswick, New Jersey, USA
- Wade L. Thorstad: Department of Radiation Oncology, Washington University, St. Louis, Missouri, USA
- Joe Camaratta: Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Steve Owens: Quantitative Radiology Solutions, Philadelphia, Pennsylvania, USA
- Drew A. Torigian: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania, USA
2
Liu C, Xie H, Zhang S, Mao Z, Sun J, Zhang Y. Misshapen Pelvis Landmark Detection With Local-Global Feature Learning for Diagnosing Developmental Dysplasia of the Hip. IEEE Trans Med Imaging 2020; 39:3944-3954. [PMID: 32746137] [DOI: 10.1109/tmi.2020.3008382]
Abstract
Developmental dysplasia of the hip (DDH) is one of the most common orthopedic disorders in infants and young children. Accurately detecting and identifying the misshapen anatomical landmarks plays a crucial role in the diagnosis of DDH. However, variability during calcification and deformity due to dislocation make the misshapen pelvis landmarks difficult to detect for both human experts and computers. Generally, the anatomical landmarks exhibit stable morphological features in local regions and rigid structural features over long ranges, which provide strong cues for identifying the landmarks. In this paper, we investigate local morphological features and global structural features for misshapen landmark detection with a novel Pyramid Non-local UNet (PN-UNet). First, we mine the local morphological features with a series of convolutional neural network (CNN) stacks, converting the detection of a landmark into the segmentation of the landmark's local neighborhood by UNet. Second, a non-local module is employed to capture the global structural features with high-level structural knowledge. With end-to-end and accurate detection of pelvis landmarks, we realize a fully automatic and highly reliable diagnosis of DDH. In addition, a dataset with 10,000 pelvis X-ray images is constructed in our work. It is the first public dataset for diagnosing DDH and has already been released for open research. To the best of our knowledge, this is the first attempt to apply a deep learning method to the diagnosis of DDH. Experimental results show that our approach achieves excellent precision in landmark detection (average point-to-point error of 0.9286 mm) and outperforms human experts in illness diagnosis. The project is available at http://imcc.ustc.edu.cn/project/ddh/.
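The conversion this abstract describes, turning point-landmark detection into segmentation of the landmark's local neighborhood, can be illustrated in miniature (a hedged toy sketch, not the PN-UNet code; the disk-mask construction and centroid read-out are illustrative assumptions):

```python
def landmark_to_mask(shape, landmark, radius):
    """Turn a point landmark into a binary disk mask so that detection
    can be trained as segmentation of the landmark's neighborhood."""
    h, w = shape
    ly, lx = landmark
    return [[1 if (y - ly) ** 2 + (x - lx) ** 2 <= radius ** 2 else 0
             for x in range(w)] for y in range(h)]

def mask_to_landmark(mask):
    """Recover the landmark from a predicted mask as its centroid."""
    pts = [(y, x) for y, row in enumerate(mask)
           for x, v in enumerate(row) if v]
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

mask = landmark_to_mask((7, 7), landmark=(3, 3), radius=2)
print(mask_to_landmark(mask))   # → (3.0, 3.0): the round trip recovers the point
```

In the paper's setting, the network predicts such neighborhood masks per landmark and the global non-local module constrains their joint configuration.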
3
Bailador G, Ríos‐Sánchez B, Sánchez‐Reillo R, Ishikawa H, Sánchez‐Ávila C. Flooding‐based segmentation for contactless hand biometrics oriented to mobile devices. IET Biometrics 2018. [DOI: 10.1049/iet-bmt.2017.0166]
Affiliation(s)
- Gonzalo Bailador: Group of Biometrics, Biosignals and Security, Universidad Politécnica de Madrid, Edif. CeDInt‐UPM, Campus de Montegancedo, 28223 Pozuelo de Alarcón, Madrid, Spain
- Belén Ríos‐Sánchez: Group of Biometrics, Biosignals and Security, Universidad Politécnica de Madrid, Edif. CeDInt‐UPM, Campus de Montegancedo, 28223 Pozuelo de Alarcón, Madrid, Spain
- Raúl Sánchez‐Reillo: University Group for Identification Technologies, Universidad Carlos III de Madrid, Leganés, Madrid, Spain
- Hiroshi Ishikawa: Department of Computer Science and Engineering, Waseda University, Okubo 3‐4‐1, Shinjuku, Tokyo 169‐8555, Japan
- Carmen Sánchez‐Ávila: Group of Biometrics, Biosignals and Security, Universidad Politécnica de Madrid, Edif. CeDInt‐UPM, Campus de Montegancedo, 28223 Pozuelo de Alarcón, Madrid, Spain
4
Liu X, Yang J, Song S, Cong W, Jiao P, Song H, Ai D, Jiang Y, Wang Y. Sparse intervertebral fence composition for 3D cervical vertebra segmentation. Phys Med Biol 2018; 63:115010. [DOI: 10.1088/1361-6560/aac226]
5
Bernard F, Salamanca L, Thunberg J, Tack A, Jentsch D, Lamecker H, Zachow S, Hertel F, Goncalves J, Gemmar P. Shape-aware surface reconstruction from sparse 3D point-clouds. Med Image Anal 2017; 38:77-89. [DOI: 10.1016/j.media.2017.02.005]
6
Mansoor A, Cerrolaza JJ, Idrees R, Biggs E, Alsharid MA, Avery RA, Linguraru MG. Deep Learning Guided Partitioned Shape Model for Anterior Visual Pathway Segmentation. IEEE Trans Med Imaging 2016; 35:1856-65. [PMID: 26930677] [DOI: 10.1109/tmi.2016.2535222]
Abstract
Analysis of cranial nerve systems, such as the anterior visual pathway (AVP), from MRI sequences is challenging due to their thin, long architecture, structural variations along the path, and low contrast with adjacent anatomic structures. Segmentation of a pathologic AVP (e.g., with low-grade gliomas) poses additional challenges. In this work, we propose a fully automated partitioned shape model segmentation mechanism for the AVP steered by multiple MRI sequences and deep learning features. Employing deep learning feature representation, this framework presents a joint partitioned statistical shape model able to deal with healthy and pathological AVP. The deep learning assistance is particularly useful in poor-contrast regions, such as the optic tracts and pathological areas. Our main contributions are: 1) a fast and robust shape localization method using conditional space deep learning, 2) a volumetric multiscale curvelet transform-based intensity normalization method for a robust statistical model, and 3) optimally partitioned statistical shape and appearance models based on regional shape variations for greater local flexibility. Our method was evaluated on MRI sequences obtained from 165 pediatric subjects. A mean Dice similarity coefficient of 0.779 was obtained for the segmentation of the entire AVP (optic nerve only = 0.791) using leave-one-out validation. Results demonstrated that the proposed localized shape and sparse appearance-based learning approach significantly outperforms current state-of-the-art segmentation approaches and is as robust as manual segmentation.
7
Wang Q, Kang W, Hu H, Wang B. HOSVD-Based 3D Active Appearance Model: Segmentation of Lung Fields in CT Images. J Med Syst 2016; 40:176. [PMID: 27277277] [DOI: 10.1007/s10916-016-0535-0]
Abstract
An Active Appearance Model (AAM) is a computer vision model that can be used to effectively segment lung fields in CT images. However, the fitting result is often inadequate when the lungs are affected by high-density pathologies. To overcome this problem, we propose a Higher-order Singular Value Decomposition (HOSVD)-based three-dimensional (3D) AAM. An evaluation was performed on 310 diseased lungs from the Lung Image Database Consortium Image Collection. Other contemporary AAMs operate directly on patterns represented by vectors, i.e., before applying the AAM to a 3D lung volume, the volume first has to be vectorized into a vector pattern by some technique such as concatenation. However, some implicit structural or local contextual information may be lost in this transformation. Given the nature of the 3D lung volume, HOSVD is introduced to represent and process the lung in tensor space. Our method can not only operate directly on the original 3D tensor patterns but also efficiently reduce computer memory usage. The evaluation yielded an average Dice coefficient of 97.0% ± 0.59%, a mean absolute surface distance error of 1.0403 ± 0.5716 mm, a mean border positioning error of 0.9187 ± 0.5381 pixels, and a Hausdorff distance of 20.4064 ± 4.3855. Experimental results showed that our method delivered significantly better segmentation results than three other model-based lung segmentation approaches, namely 3D Snake, 3D ASM, and 3D AAM.
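The decomposition at the core of this entry, HOSVD, computes one orthonormal factor matrix per tensor mode from the SVD of that mode's unfolding, plus a core tensor. A minimal NumPy sketch of the decomposition itself (assuming NumPy is available; this is not the paper's AAM pipeline, just the tensor algebra it builds on):

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: bring axis `mode` first, then flatten the rest."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def mode_mult(T, M, mode):
    """Mode-n product: multiply matrix M into axis `mode` of tensor T."""
    return np.moveaxis(np.tensordot(M, np.moveaxis(T, mode, 0), axes=1), 0, mode)

def hosvd(T):
    """Factor matrices U_n from each unfolding, plus the core tensor
    S = T x_1 U1^T x_2 U2^T x_3 U3^T ..."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0]
         for n in range(T.ndim)]
    S = T
    for n, Un in enumerate(U):
        S = mode_mult(S, Un.T, n)
    return S, U

rng = np.random.default_rng(0)
T = rng.standard_normal((4, 5, 6))   # a toy 3D "volume"
S, U = hosvd(T)
R = S                                # reconstruct: T ≈ S x_1 U1 x_2 U2 x_3 U3
for n, Un in enumerate(U):
    R = mode_mult(R, Un, n)
print(np.allclose(R, T))             # → True: HOSVD is exact up to floating point
```

Operating on the 3D tensor directly, as the abstract notes, avoids the vectorization step and keeps each mode's structure explicit in its own factor matrix.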
Affiliation(s)
- Qingzhu Wang: School of Information Engineering, Northeast Dianli University, Jilin, 132012, China
- Wanjun Kang: School of Information Engineering, Northeast Dianli University, Jilin, 132012, China
- Haihui Hu: School of Information Engineering, Northeast Dianli University, Jilin, 132012, China
- Bin Wang: Jilin Tumor Hospital, Changchun, China
8
Gao Y, Shao Y, Lian J, Wang AZ, Chen RC, Shen D. Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests. IEEE Trans Med Imaging 2016; 35:1532-43. [PMID: 26800531] [PMCID: PMC4918760] [DOI: 10.1109/tmi.2016.2519264]
Abstract
Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy. The efficacy of radiation treatment highly depends on segmentation accuracy. However, accurate segmentation of male pelvic organs is challenging due to low tissue contrast of CT images, as well as large variations of shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as shape prior can be easily incorporated to regularize the segmentation. Nonetheless, the sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, thus making them robust against arbitrary initializations. Specifically, we learn a displacement regressor, which predicts 3D displacement from any image voxel to the target organ boundary based on the local patch appearance. This regressor provides a non-local external force for each vertex of deformable model, thus overcoming the initialization problem suffered by the traditional deformable models. To learn a reliable displacement regressor, two strategies are particularly proposed. 1) A multi-task random forest is proposed to learn the displacement regressor jointly with the organ classifier; 2) an auto-context model is used to iteratively enforce structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification or regression based methods, and also several other existing methods in CT pelvic organ segmentation.
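The central idea above, regressing a 3D displacement from local patch appearance to the target organ boundary, can be illustrated with a deliberately simplified stand-in: a 1-nearest-neighbor regressor in place of the paper's multi-task random forest (pure Python; the toy feature vectors and displacement labels are hypothetical):

```python
def fit_knn(features, displacements):
    """Store training pairs: patch feature vector -> displacement-to-boundary."""
    return list(zip(features, displacements))

def predict_knn(model, query):
    """Predict the displacement of the closest training feature (1-NN)."""
    def dist2(u, v):
        return sum((x - y) ** 2 for x, y in zip(u, v))
    return min(model, key=lambda pair: dist2(pair[0], query))[1]

# Toy "patch features" with known 3D displacement vectors to the boundary
train_f = [(0.1, 0.2), (0.8, 0.9), (0.5, 0.5)]
train_d = [(3, 0, 0), (-2, 1, 0), (0, 0, 0)]
model = fit_knn(train_f, train_d)
print(predict_knn(model, (0.75, 0.85)))   # → (-2, 1, 0), from the nearest feature
```

Each vertex of the deformable model would then be moved by its predicted displacement; because the prediction works from any voxel, the resulting external force is non-local, which is what makes the method robust to arbitrary initialization.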
Affiliation(s)
- Yaozong Gao: Department of Computer Science and Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA
- Yeqin Shao: Nantong University, Jiangsu 226019, China; Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA
- Jun Lian: Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599, USA
- Andrew Z. Wang: Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599, USA
- Ronald C. Chen: Department of Radiation Oncology, University of North Carolina, Chapel Hill, NC 27599, USA
- Dinggang Shen: Department of Radiology and Biomedical Research Imaging Center, University of North Carolina, Chapel Hill, NC 27599, USA; Department of Brain and Cognitive Engineering, Korea University, Seoul 02841, Republic of Korea
9
Sun K, Udupa JK, Odhner D, Tong Y, Zhao L, Torigian DA. Automatic thoracic anatomy segmentation on CT images using hierarchical fuzzy models and registration. Med Phys 2016; 43:1487-500. [DOI: 10.1118/1.4942486]
Affiliation(s)
- Kaiqiong Sun: School of Mathematics and Computer Science, Wuhan Polytechnic University, Wuhan 430023, China
- Jayaram K. Udupa: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Dewey Odhner: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Yubing Tong: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
- Liming Zhao: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104; Research Center of Intelligent System and Robotics, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Drew A. Torigian: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104
10
Phellan R, Falcão AX, Udupa JK. Medical image segmentation via atlases and fuzzy object models: Improving efficacy through optimum object search and fewer models. Med Phys 2015; 43:401. [DOI: 10.1118/1.4938577]
Affiliation(s)
- Renzo Phellan: LIV, Institute of Computing, University of Campinas, Av. Albert Einstein, 1251, Cidade Universitária "Zeferino Vaz," Campinas, SP 13083-852, Brazil
- Alexandre X. Falcão: LIV, Institute of Computing, University of Campinas, Av. Albert Einstein, 1251, Cidade Universitária "Zeferino Vaz," Campinas, SP 13083-852, Brazil
- Jayaram K. Udupa: Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 423 Guardian Drive, Philadelphia, Pennsylvania 19104-6021
11
Cai Y, Osman S, Sharma M, Landis M, Li S. Multi-Modality Vertebra Recognition in Arbitrary Views Using 3D Deformable Hierarchical Model. IEEE Trans Med Imaging 2015; 34:1676-1693. [PMID: 25594966] [DOI: 10.1109/tmi.2015.2392054]
Abstract
Computer-aided diagnosis of spine problems relies on the automatic identification of spine structures in images. The task of automatic vertebra recognition is to identify the global spine and local vertebra structural information, such as spine shape, vertebra location, and pose. Vertebra recognition is challenging due to the large appearance variations across image modalities/views and the high geometric distortions in spine shape. Existing vertebra recognition methods are usually simplified to vertebra detection, which mainly focuses on identifying vertebra locations and labels but cannot support further quantitative assessment of the spine. In this paper, we propose a vertebra recognition method using a 3D deformable hierarchical model (DHM) to achieve cross-modality identification of local vertebra location and pose with accurate vertebra labeling, together with global 3D spine shape recovery. We recast vertebra recognition as deformable model matching, fitting the input spine images to the 3D DHM via deformations. The 3D model-matching mechanism identifies vertebra location, pose, and label simultaneously, which is more comprehensive than traditional location+label detection, and also provides an articulated 3D mesh model for the input spine section. Moreover, the DHM can perform versatile recognition on volume and multi-slice data, even on a single slice. Experiments show that our method can successfully extract vertebra locations, labels, and poses from multi-slice T1/T2 MR and volume CT, and can reconstruct the 3D spine model for different image views, such as lumbar, cervical, and even whole-spine views. The resulting vertebra information and the recovered shape can be used for quantitative diagnosis of spine problems and can be easily digitized and integrated into modern medical PACS systems.
12
Spina TV, de Miranda PAV, Falcão AX. Hybrid approaches for interactive image segmentation using the live markers paradigm. IEEE Trans Image Process 2014; 23:5756-5769. [PMID: 25376038] [DOI: 10.1109/tip.2014.2367319]
Abstract
Interactive image segmentation methods normally rely on cues about the foreground imposed by the user as region constraints (markers/brush strokes) or boundary constraints (anchor points). These paradigms often have complementary strengths and weaknesses, which can be exploited to improve the interactive experience by reducing the user's effort. We propose a novel hybrid paradigm based on a new form of interaction called live markers, where optimum boundary-tracking segments are turned into internal and external markers for region-based delineation to effectively extract the object. We present four techniques within this paradigm: 1) LiveMarkers; 2) RiverCut; 3) LiveCut; and 4) RiverMarkers. The homonym LiveMarkers couples boundary tracking via live-wire-on-the-fly (LWOF) with optimum seed competition by the image foresting transform (IFT-SC). The IFT-SC can cope with complex object silhouettes but presents a leaking problem on weaker parts of the boundary, which is solved by the effective live markers produced by LWOF. Conversely, in RiverCut, the long boundary segments computed by Riverbed around complex shapes provide markers for Graph Cuts by the Min-Cut/Max-Flow algorithm (GCMF) to complete segmentation on poorly defined sections of the object's border. LiveCut and RiverMarkers further demonstrate that live markers can improve segmentation even when the combined approaches are not complementary (e.g., GCMF's shrinking bias is also dramatically prevented when it is used with LWOF). Moreover, since delineation is always region based, our methodology subsumes both paradigms, representing a new way of extending boundary tracking to the 3D image domain while speeding up the addition of markers close to the object's boundary, a necessary but time-consuming task when done manually. We justify our claims through an extensive experimental evaluation on natural and medical image data sets, using recently proposed robot users for boundary-tracking methods.
13
Compounding local invariant features and global deformable geometry for medical image registration. PLoS One 2014; 9:e105815. [PMID: 25165985] [PMCID: PMC4148338] [DOI: 10.1371/journal.pone.0105815]
Abstract
Using deformable models to register medical images raises problems of model initialization and of robustness and accuracy in matching inter-subject anatomical variability. To tackle these problems, this paper proposes a novel model that compounds local invariant features and global deformable geometry. The model has four steps. First, a set of highly repeatable and highly robust local invariant features, called the Key Features Model (KFM), is extracted by an effective matching strategy. Second, local features can be matched more accurately through the KFM for the purpose of initializing a global deformable model. Third, the positional relationship between the KFM and the global deformable model can be used to precisely pinpoint all landmarks after initialization. Fourth, the final pose of the global deformable model is determined by an iterative process with a lower time cost. The experiments support three conclusions. First, the KFM detects matching feature points well. Second, the precision of landmark locations adjusted by the modeled relationship between the KFM and the global deformable model is greatly improved. Third, regarding fitting accuracy and efficiency, the proposed method improves the fitting accuracy and reduces the computational time by around 50% compared with state-of-the-art methods.
14
Book inner boundary extraction with modified active shape model. Pattern Recognit Lett 2014. [DOI: 10.1016/j.patrec.2014.03.012]
15
Udupa JK, Odhner D, Zhao L, Tong Y, Matsumoto MMS, Ciesielski KC, Falcao AX, Vaideeswaran P, Ciesielski V, Saboury B, Mohammadianrasanani S, Sin S, Arens R, Torigian DA. Body-wide hierarchical fuzzy modeling, recognition, and delineation of anatomy in medical images. Med Image Anal 2014; 18:752-71. [PMID: 24835182] [PMCID: PMC4086870] [DOI: 10.1016/j.media.2014.04.003]
Abstract
To make Quantitative Radiology (QR) a reality in radiological practice, computerized body-wide Automatic Anatomy Recognition (AAR) becomes essential. With the goal of building a general AAR system that is not tied to any specific organ system, body region, or image modality, this paper presents an AAR methodology for localizing and delineating all major organs in different body regions based on fuzzy modeling ideas and a tight integration of fuzzy models with an Iterative Relative Fuzzy Connectedness (IRFC) delineation algorithm. The methodology consists of five main steps: (a) gathering image data for both building models and testing the AAR algorithms from patient image sets existing in our health system; (b) formulating precise definitions of each body region and organ and delineating them following these definitions; (c) building hierarchical fuzzy anatomy models of organs for each body region; (d) recognizing and locating organs in given images by employing the hierarchical models; and (e) delineating the organs following the hierarchy. In Step (c), we explicitly encode object size and positional relationships into the hierarchy and subsequently exploit this information in object recognition in Step (d) and delineation in Step (e). Modality-independent and dependent aspects are carefully separated in model encoding. At the model building stage, a learning process is carried out for rehearsing an optimal threshold-based object recognition method. The recognition process in Step (d) starts from large, well-defined objects and proceeds down the hierarchy in a global to local manner. A fuzzy model-based version of the IRFC algorithm is created by naturally integrating the fuzzy model constraints into the delineation algorithm. The AAR system is tested on three body regions - thorax (on CT), abdomen (on CT and MRI), and neck (on MRI and CT) - involving a total of over 35 organs and 130 data sets (the total used for model building and testing). 
The training and testing data sets are of equal size in all cases except for the neck. Overall, the AAR method achieves a mean accuracy of about 2 voxels in localizing non-sparse blob-like objects and most sparse tubular objects. The delineation accuracy in terms of mean false positive and negative volume fractions is 2% and 8%, respectively, for non-sparse objects, and 5% and 15%, respectively, for sparse objects. The two object groups achieve a mean boundary distance relative to ground truth of 0.9 and 1.5 voxels, respectively. Some sparse objects - the venous system (in the thorax on CT), the inferior vena cava (in the abdomen on CT), and the mandible and nasopharynx (in the neck on MRI, but not on CT) - pose challenges at all levels, leading to poor recognition and/or delineation results. The AAR method fares quite favorably when compared with methods from the recent literature for liver, kidneys, and spleen on CT images. We conclude that separation of modality-independent from dependent aspects, organization of objects in a hierarchy, explicit encoding of object relationship information into the hierarchy, optimal threshold-based recognition learning, and fuzzy model-based IRFC are effective concepts that allowed us to demonstrate the feasibility of a general AAR system that works in different body regions, on a variety of organs, and on different modalities.
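The fuzzy connectedness machinery underlying the IRFC delineation step can be sketched in miniature: a path's strength is the minimum affinity along it, each pixel's connectedness to a seed is the strength of its best path, and both are computable with a Dijkstra-like max-min propagation. A toy 1D sketch of plain fuzzy connectedness (not the full iterative relative variant, nor the fuzzy-model-constrained version described above; the image and affinity are illustrative assumptions):

```python
import heapq

def fuzzy_connectedness(affinity, seeds, n):
    """Max-min path strength from any seed to every node 0..n-1.
    `affinity(u, v)` in [0, 1]; higher means the pair hangs together more."""
    conn = [0.0] * n
    for s in seeds:
        conn[s] = 1.0
    heap = [(-1.0, s) for s in seeds]      # max-heap via negated strengths
    while heap:
        negc, u = heapq.heappop(heap)
        if -negc < conn[u]:                # stale queue entry
            continue
        for v in (u - 1, u + 1):           # 1D chain neighbors
            if 0 <= v < n:
                c = min(conn[u], affinity(u, v))
                if c > conn[v]:
                    conn[v] = c
                    heapq.heappush(heap, (-c, v))
    return conn

# 1D "image": a bright object (pixels 0-4) next to a dark background (5-9)
img = [9, 9, 8, 9, 9, 2, 1, 2, 1, 2]
aff = lambda u, v: 1.0 - abs(img[u] - img[v]) / 10.0
conn = fuzzy_connectedness(aff, seeds=[0], n=len(img))
print([round(c, 1) for c in conn])  # → [1.0, 1.0, 0.9, 0.9, 0.9, 0.3, 0.3, 0.3, 0.3, 0.3]
```

Connectedness stays high inside the homogeneous object and collapses across the weak-affinity edge at the object boundary; IRFC runs competing propagations from object and background seeds and assigns each voxel to the side with the stronger connectedness, with the fuzzy anatomy model folded into the affinity.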
Affiliation(s)
- Jayaram K Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 423 Guardian Drive, Blockley Hall, 4th Floor, Philadelphia, PA 19104, United States.
- Dewey Odhner
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 423 Guardian Drive, Blockley Hall, 4th Floor, Philadelphia, PA 19104, United States
- Liming Zhao
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 423 Guardian Drive, Blockley Hall, 4th Floor, Philadelphia, PA 19104, United States
- Yubing Tong
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 423 Guardian Drive, Blockley Hall, 4th Floor, Philadelphia, PA 19104, United States
- Monica M S Matsumoto
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 423 Guardian Drive, Blockley Hall, 4th Floor, Philadelphia, PA 19104, United States
- Krzysztof C Ciesielski
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 423 Guardian Drive, Blockley Hall, 4th Floor, Philadelphia, PA 19104, United States; Department of Mathematics, West Virginia University, Morgantown, WV 26506-6310, United States
- Alexandre X Falcao
- LIV, Institute of Computing, University of Campinas, Av. Albert Einstein 1251, 13084-851 Campinas, SP, Brazil
- Pavithra Vaideeswaran
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 423 Guardian Drive, Blockley Hall, 4th Floor, Philadelphia, PA 19104, United States
- Victoria Ciesielski
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 423 Guardian Drive, Blockley Hall, 4th Floor, Philadelphia, PA 19104, United States
- Babak Saboury
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 423 Guardian Drive, Blockley Hall, 4th Floor, Philadelphia, PA 19104, United States
- Syedmehrdad Mohammadianrasanani
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, 423 Guardian Drive, Blockley Hall, 4th Floor, Philadelphia, PA 19104, United States
- Sanghun Sin
- Division of Respiratory and Sleep Medicine, Children's Hospital at Montefiore, 3415 Bainbridge Avenue, Bronx, NY 10467, United States
- Raanan Arens
- Division of Respiratory and Sleep Medicine, Children's Hospital at Montefiore, 3415 Bainbridge Avenue, Bronx, NY 10467, United States
- Drew A Torigian
- Department of Radiology, Hospital of the University of Pennsylvania, Philadelphia, PA 19104-4283, United States
16. Ibragimov B, Likar B, Pernuš F, Vrtovec T. Shape representation for efficient landmark-based segmentation in 3-D. IEEE Trans Med Imaging 2014; 33:861-874. [PMID: 24710155] [DOI: 10.1109/tmi.2013.2296976]
Abstract
In this paper, we propose a novel approach to landmark-based shape representation that is based on transportation theory, where landmarks are considered as sources and destinations, all possible landmark connections as roads, and established landmark connections as goods transported via these roads. Landmark connections, which are selectively established, are identified through their statistical properties describing the shape of the object of interest, and indicate the least costly roads for transporting goods from sources to destinations. From this perspective, we introduce three novel shape representations that are combined with an existing landmark detection algorithm based on game theory. To reduce the computational complexity that results from the extension from 2-D to 3-D segmentation, landmark detection is augmented by a concept known in game theory as strategy dominance. The novel shape representations, game-theoretic landmark detection, and strategy dominance are combined into a segmentation framework that was evaluated on 3-D computed tomography images of lumbar vertebrae and femoral heads. The best shape representation yielded symmetric surface distances of 0.75 mm and 1.11 mm, and Dice coefficients of 93.6% and 96.2%, for lumbar vertebrae and femoral heads, respectively. By applying strategy dominance, the computational cost was reduced by up to a factor of three.
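The transportation view of landmark connections can be caricatured as a one-to-one transport problem: assign each source landmark to a destination landmark so that the summed road cost is minimal. The brute-force sketch below uses an invented cost matrix and is only meant to illustrate the flavor of the formulation, not the paper's method:

```python
from itertools import permutations

def cheapest_transport(cost):
    """Brute-force one-to-one transport: choose the assignment of sources
    to destinations (a permutation) that minimizes the total road cost.
    cost[i][j] is the cost of the road from source i to destination j."""
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))

cost = [[4, 1, 3],
        [2, 0, 5],
        [3, 2, 2]]
print(cheapest_transport(cost))  # ([1, 0, 2], 5)
```

Real transportation problems of this kind are solved with polynomial-time assignment algorithms (e.g. the Hungarian method) rather than enumeration; the enumeration keeps the sketch self-contained.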
17. Using classifiers as heuristics to describe local structure in Active Shape Models with small training sets. Pattern Recognit Lett 2013. [DOI: 10.1016/j.patrec.2013.04.026]
18. Wu S, Weinstein SP, Conant EF, Schnall MD, Kontos D. Automated chest wall line detection for whole-breast segmentation in sagittal breast MR images. Med Phys 2013; 40:042301. [PMID: 23556914] [DOI: 10.1118/1.4793255]
Abstract
PURPOSE Breast magnetic resonance imaging (MRI) plays an important role in the clinical management of breast cancer. Computerized analysis is increasingly used to quantify breast MRI features in applications such as computer-aided lesion detection and fibroglandular tissue estimation for breast cancer risk assessment. Automated segmentation of the whole breast as an organ from the other imaged structures is an important step in aiding lesion localization and fibroglandular tissue quantification. For this task, identifying the chest wall line (CWL) is most challenging due to image contrast variations, intensity discontinuities, and bias fields. METHODS In this work, the authors develop and validate a fully automated image processing algorithm for accurate delineation of the CWL in sagittal breast MRI. The CWL detection is based on an integrated scheme of edge extraction and CWL candidate evaluation. The edge extraction consists of applying edge-enhancing filters and an edge linking algorithm. Increased accuracy is achieved by the synergistic use of multiple image inputs for edge extraction, with multiple CWL candidates evaluated by the dynamic time warping algorithm coupled with the construction of a CWL reference. The method is quantitatively validated on a dataset of 60 3D bilateral sagittal breast MRI scans (3360 2D MR slices in total) that span the full American College of Radiology Breast Imaging Reporting and Data System (BI-RADS) breast density range. Agreement with manual segmentation obtained by an experienced breast imaging radiologist is assessed by both volumetric and boundary-based metrics, including four quantitative measures. RESULTS In terms of breast volume agreement with manual segmentation, the overlay percentage expressed by the Dice similarity coefficient is 95.0% and the difference percentage is 10.1%.
More specifically, for the segmentation accuracy of the CWL boundary, the CWL overlay percentage is 92.7% and averaged deviation distance is 2.3 mm. Their method requires ≈ 4.5 min for segmenting each 3D breast MRI scan (56 slices) in comparison to ≈ 35 min required for manual segmentation. Further analysis indicates that the segmentation performance of their method is relatively stable across the different BI-RADS density categories and breast volume, and also robust with respect to a varying range of the major parameters of the algorithm. CONCLUSIONS Their fully automated method achieves high segmentation accuracy in a time-efficient manner. It could support large scale quantitative breast MRI analysis and holds the potential to become integrated into the clinical workflow for breast cancer clinical applications in the future.
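The dynamic time warping step used to score CWL candidates against a reference curve can be illustrated with a minimal 1-D DTW implementation; the sequences below are toy data, not image-derived curves:

```python
def dtw_distance(a, b):
    """Dynamic time warping distance between two 1-D sequences: the minimal
    summed point-wise cost over all monotone alignments of a onto b."""
    n, m = len(a), len(b)
    INF = float("inf")
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # a advances, b element repeats
                                 D[i][j - 1],      # b advances, a element repeats
                                 D[i - 1][j - 1])  # both advance (match)
    return D[n][m]

print(dtw_distance([1, 2, 3], [1, 2, 2, 3]))  # 0.0: the repeated 2 aligns at no cost
```

DTW's tolerance to local stretching is what makes it suitable for comparing candidate boundary curves whose sampling and local shape vary from slice to slice.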
Affiliation(s)
- Shandong Wu
- Department of Radiology, University of Pennsylvania, Philadelphia, Pennsylvania 19104, USA.
19. Jaumard NV, Udupa JK, Siegler S, Schuster JM, Hilibrand AS, Hirsch BE, Borthakur A, Winkelstein BA. Three-dimensional kinematic stress magnetic resonance image analysis shows promise for detecting altered anatomical relationships of tissues in the cervical spine associated with painful radiculopathy. Med Hypotheses 2013; 81:738-44. [PMID: 23942030] [DOI: 10.1016/j.mehy.2013.07.043]
Abstract
For some patients with radiculopathy a source of nerve root compression cannot be identified despite positive electromyography (EMG) evidence. This discrepancy hampers the effective clinical management for these individuals. Although it has been well-established that tissues in the cervical spine move in a three-dimensional (3D) manner, the 3D motions of the neural elements and their relationship to the bones surrounding them are largely unknown even for asymptomatic normal subjects. We hypothesize that abnormal mechanical loading of cervical nerve roots during pain-provoking head positioning may be responsible for radicular pain in those cases in which there is no evidence of nerve root compression on conventional cervical magnetic resonance imaging (MRI) with the neck in the neutral position. This biomechanical imaging proof-of-concept study focused on quantitatively defining the architectural relationships between the neural and bony structures in the cervical spine using measurements derived from 3D MR images acquired in neutral and pain-provoking neck positions for subjects: (1) with radicular symptoms and evidence of root compression by conventional MRI and positive EMG, (2) with radicular symptoms and no evidence of root compression by MRI but positive EMG, and (3) asymptomatic age-matched controls. Function and pain scores were measured, along with neck range of motion, for all subjects. MR imaging was performed in both a neutral position and a pain-provoking position. Anatomical architectural data derived from analysis of the 3D MR images were compared between symptomatic and asymptomatic groups, and the symptomatic groups with and without imaging evidence of root compression. Several differences in the architectural relationships between the bone and neural tissues were identified between the asymptomatic and symptomatic groups. 
In addition, changes in architectural relationships were also detected between the symptomatic groups with and without imaging evidence of nerve root compression. As demonstrated by the data and a case study, the 3D stress MR imaging approach can identify biomechanical relationships between hard and soft tissues that are otherwise undetected by standard clinical imaging methods. This technique offers a promising approach for detecting the source of radiculopathy to inform clinical management of this pathology.
Affiliation(s)
- N V Jaumard
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, United States; Department of Neurosurgery, University of Pennsylvania, Philadelphia, PA, United States
20. Joint graph cut and relative fuzzy connectedness image segmentation algorithm. Med Image Anal 2013; 17:1046-57. [PMID: 23880374] [DOI: 10.1016/j.media.2013.06.006]
Abstract
We introduce an image segmentation algorithm, called GC_sum^max, which combines, in a novel manner, the strengths of two popular algorithms: Relative Fuzzy Connectedness (RFC) and (standard) Graph Cut (GC). We show, both theoretically and experimentally, that GC_sum^max preserves the robustness of RFC with respect to the seed choice (thus avoiding the "shrinking problem" of GC), while keeping GC's stronger control over the problem of "leaking through poorly defined boundary segments." The analysis of GC_sum^max is greatly facilitated by our recent theoretical results showing that RFC can be described within the framework of Generalized GC (GGC) segmentation algorithms. In our implementation of GC_sum^max we use, as a subroutine, a version of the RFC algorithm (based on the Image Foresting Transform) that provably runs in linear time with respect to the image size. As a result, GC_sum^max runs in close to linear time. Experimental comparison of GC_sum^max to GC, an iterative version of RFC (IRFC), and power watershed (PW), based on a variety of medical and non-medical images, indicates superior accuracy of GC_sum^max over these other methods, resulting in a rank ordering of GC_sum^max > PW ∼ IRFC > GC.
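The fuzzy connectedness machinery underlying RFC rests on max-min path strengths: a path's strength is the minimum affinity along it, and a node's connectedness to a seed is the maximum strength over all paths. A minimal seed-based strength propagation, on a toy graph standing in for an image lattice, might look like this (not the authors' linear-time implementation):

```python
import heapq

def fuzzy_connectedness(affinity, seed):
    """Dijkstra-like max-min propagation of connectedness strengths from a
    seed. affinity maps each node to a list of (neighbor, affinity) pairs
    with affinities in [0, 1]."""
    strength = {seed: 1.0}
    heap = [(-1.0, seed)]
    while heap:
        neg, u = heapq.heappop(heap)
        s = -neg
        if s < strength.get(u, 0.0):
            continue  # stale heap entry
        for v, a in affinity[u]:
            cand = min(s, a)  # weakest link of the extended path caps its strength
            if cand > strength.get(v, 0.0):
                strength[v] = cand
                heapq.heappush(heap, (-cand, v))
    return strength

# Toy 4-node graph: node -> list of (neighbor, affinity).
graph = {
    "A": [("B", 0.9), ("C", 0.4)],
    "B": [("A", 0.9), ("C", 0.5)],
    "C": [("A", 0.4), ("B", 0.5), ("D", 0.8)],
    "D": [("C", 0.8)],
}
print(fuzzy_connectedness(graph, "A")["D"])  # 0.5: capped by the A-B-C bottleneck
```

Relative FC then labels each node by whichever object's seed reaches it with the greater strength, which is the robustness-to-seed-choice property the abstract refers to.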
21. Barbu A. Hierarchical object parsing from structured noisy point clouds. IEEE Trans Pattern Anal Mach Intell 2013; 35:1649-1659. [PMID: 23681993] [DOI: 10.1109/tpami.2012.262]
Abstract
Object parsing and segmentation from point clouds are challenging tasks because the relevant data is available only as thin structures along object boundaries or other features, and is corrupted by large amounts of noise. To handle this kind of data, flexible shape models are desired that can accurately follow the object boundaries. Popular models such as active shape and active appearance models (AAMs) lack the necessary flexibility for this task, while recent approaches such as the recursive compositional models make model simplifications to obtain computational guarantees. This paper investigates a hierarchical Bayesian model of shape and appearance in a generative setting. The input data is explained by an object parsing layer which is a deformation of a hidden principal component analysis (PCA) shape model with Gaussian prior. The paper also introduces a novel efficient inference algorithm that uses informed data-driven proposals to initialize local searches for the hidden variables. Applied to the problem of object parsing from structured point clouds such as edge detection images, the proposed approach obtains state-of-the-art parsing errors on two standard datasets without using any intensity information.
Affiliation(s)
- Adrian Barbu
- Department of Statistics, Florida State University, 820 Concord Road, Tallahassee, FL 32306, USA.
22. Chen X, Udupa JK, Alavi A, Torigian DA. GC-ASM: synergistic integration of graph-cut and active shape model strategies for medical image segmentation. Comput Vis Image Underst 2013; 117:513-524. [PMID: 23585712] [PMCID: PMC3622953] [DOI: 10.1016/j.cviu.2012.12.001]
Abstract
Image segmentation methods may be classified into two categories: purely image-based and model-based. Each of these two classes has its own advantages and disadvantages. In this paper, we propose a novel synergistic combination of the image-based graph-cut (GC) method with the model-based ASM method, arriving at the GC-ASM method for medical image segmentation. A multi-object GC cost function is proposed which effectively integrates the ASM shape information into the GC framework. The proposed method consists of two phases: model building and segmentation. In the model building phase, the ASM model is built and the parameters of the GC are estimated. The segmentation phase consists of two main steps: initialization (recognition) and delineation. For initialization, an automatic method is proposed which estimates the pose (translation, orientation, and scale) of the model and obtains a rough segmentation result, which also provides the shape information for the GC method. For delineation, an iterative GC-ASM algorithm is proposed which performs finer delineation based on the initialization results. The proposed methods are implemented to operate on 2D images and evaluated on clinical chest CT, abdominal CT, and foot MRI data sets. The results show the following: (a) An overall delineation accuracy of TPVF > 96% and FPVF < 0.6% can be achieved via GC-ASM for different objects, modalities, and body regions. (b) GC-ASM improves over ASM in accuracy and in robustness to the search region. (c) GC-ASM requires far fewer landmarks (about one-third as many as ASM). (d) GC-ASM achieves full automation in the segmentation step, compared to GC, which requires seed specification, and improves on the accuracy of GC. (e) One disadvantage of GC-ASM is its increased computational expense owing to the iterative nature of the algorithm.
Affiliation(s)
- Xinjian Chen
- School of Electronics and Information Engineering, Soochow University, Suzhou, China 215006
- Jayaram K. Udupa
- Medical Image Processing Group, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104
- Abass Alavi
- Hospital of the University of Pennsylvania, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104
- Drew A. Torigian
- Hospital of the University of Pennsylvania, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104
23. Chen X, Bagci U. 3D automatic anatomy segmentation based on iterative graph-cut-ASM. Med Phys 2011; 38:4610-22. [PMID: 21928634] [DOI: 10.1118/1.3602070]
Abstract
PURPOSE This paper studies the feasibility of developing an automatic anatomy segmentation (AAS) system in clinical radiology and demonstrates its operation on clinical 3D images. METHODS The AAS system the authors are developing consists of two main parts: object recognition and object delineation. For recognition, a hierarchical 3D scale-based multiobject method is used, which incorporates intensity-weighted ball-scale (b-scale) information into the active shape model (ASM). For object delineation, an iterative graph-cut-ASM (IGCASM) algorithm is proposed, which effectively combines the rich statistical shape information embodied in ASM with the globally optimal delineation capability of the GC method. The presented IGCASM algorithm is a 3D generalization of the 2D GC-ASM method that they proposed previously in Chen et al. [Proc. SPIE, 7259, 72590C1-72590C-8 (2009)]. The proposed methods are tested on two datasets comprising clinical abdominal CT scans from 20 patients (10 male and 10 female) and 11 foot magnetic resonance imaging (MRI) scans. The test covers segmentation of four organs (liver, left and right kidneys, and spleen) and five foot bones (calcaneus, tibia, cuboid, talus, and navicular). The recognition and delineation accuracies were evaluated separately. The recognition accuracy was evaluated in terms of translation, rotation, and scale (size) error. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF, FPVF). The efficiency of the delineation method was also evaluated on an Intel Pentium IV PC with a 3.4 GHz CPU. RESULTS The recognition accuracies in terms of translation, rotation, and scale error are about 8 mm, 10 degrees, and 0.03 over all organs, and about 3.57 mm, 0.35 degrees, and 0.025 over all foot bones, respectively. The accuracy of delineation over all organs for all subjects, expressed in TPVF and FPVF, is 93.01% and 0.22%, and over all foot bones for all subjects is 93.75% and 0.28%, respectively. The delineations for the four organs were accomplished quite rapidly, in an average of 78 s, and those for the five foot bones in an average of 70 s. CONCLUSIONS The experimental results showed the feasibility and efficacy of the proposed automatic anatomy segmentation system: (a) the incorporation of shape priors into the GC framework is feasible in 3D, as demonstrated previously for 2D images; (b) the results in 3D confirm the accuracy behavior observed in 2D, and the hybrid IGCASM strategy appears to be more robust and accurate than ASM and GC individually; and (c) delineations within body regions and foot bones of clinical importance can be accomplished quite rapidly, within 1.5 min.
Affiliation(s)
- Xinjian Chen
- Radiology and Imaging Sciences, Clinical Center, National Institutes of Health, Building 10 Room IC515, Bethesda, Maryland 20892-1182, USA.
24. Chen X, Udupa JK, Alavi A, Torigian DA. Automatic anatomy recognition via multiobject oriented active shape models. Med Phys 2010; 37:6390-401. [PMID: 21302796] [PMCID: PMC3003721] [DOI: 10.1118/1.3515751]
Abstract
PURPOSE This paper studies the feasibility of developing an automatic anatomy recognition (AAR) system in clinical radiology and demonstrates its operation on clinical 2D images. METHODS The anatomy recognition method described here consists of two main components: (a) a multiobject generalization of the oriented active shape model (OASM) and (b) object recognition strategies. The OASM algorithm is generalized to multiple objects by including a model for each object and assigning a cost structure specific to each object in the spirit of live wire. The delineation of multiobject boundaries is done in MOASM via a three-level dynamic programming algorithm: the first level operates at the pixel level and aims to find optimal oriented boundary segments between successive landmarks; the second level operates at the landmark level and aims to find optimal locations for the landmarks; and the third level operates at the object level and aims to find an optimal arrangement of object boundaries over all objects. The object recognition strategy attempts to find the pose vector (consisting of translation, rotation, and scale components) for the multiobject model that yields the smallest total boundary cost over all objects. The delineation and recognition accuracies were evaluated separately utilizing routine clinical chest CT, abdominal CT, and foot MRI data sets. The delineation accuracy was evaluated in terms of true and false positive volume fractions (TPVF and FPVF). The recognition accuracy was assessed (1) in terms of the size of the space of pose vectors for the model assembly that yielded high delineation accuracy, (2) as a function of the number, distribution, and size of the objects in the model, (3) in terms of the interdependence between delineation and recognition, and (4) in terms of the closeness of the optimum recognition result to the global optimum. RESULTS When multiple objects are included in the model, the delineation accuracy in terms of TPVF can be improved to 97%-98% with a low FPVF of 0.1%-0.2%. Typically, a recognition accuracy of ≥90% yielded a TPVF ≥95% and an FPVF ≤0.5%. Over the three data sets and over all tested objects, in 97% of the cases the optimal solutions found by the proposed method constituted the true global optimum. CONCLUSIONS The experimental results showed the feasibility and efficacy of the proposed automatic anatomy recognition system. Increasing the number of objects in the model can significantly improve both recognition and delineation accuracy. A more spread-out arrangement of objects in the model can lead to improved recognition and delineation accuracy. Including larger objects in the model also improved recognition and delineation. The proposed method almost always finds globally optimum solutions.
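The recognition strategy, searching over pose vectors for the one minimizing the total boundary cost over all objects, can be caricatured with a toy 1-D translation search; the quadratic per-object cost below is invented purely for illustration:

```python
def total_boundary_cost(translation, preferred):
    """Toy stand-in for the summed boundary cost over all objects: each
    object's cost is quadratic around its own preferred offset."""
    return sum((translation - p) ** 2 for p in preferred)

def recognize_pose(candidates, preferred):
    """Exhaustively pick the candidate pose (here just a 1-D translation)
    with the smallest total boundary cost over all objects."""
    return min(candidates, key=lambda t: total_boundary_cost(t, preferred))

print(recognize_pose(range(-5, 6), [1, 2, 3]))  # 2: minimizer of the summed cost
```

In the actual method the pose vector has translation, rotation, and scale components and the boundary cost comes from the oriented-boundary dynamic programming, but the structure, a search over candidate poses scored by a summed multi-object cost, is the same.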
Affiliation(s)
- Xinjian Chen
- Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Room 1C515, Building 10, Bethesda, Maryland 20892-1182, USA