101. Matsopoulos GK, Asvestas PA, Delibasis KK, Mouravliansky NA, Zeyen TG. Detection of glaucomatous change based on vessel shape analysis. Comput Med Imaging Graph 2008;32:183-92. [DOI: 10.1016/j.compmedimag.2007.11.003]

102. Grisan E, Foracchia M, Ruggeri A. A novel method for the automatic grading of retinal vessel tortuosity. IEEE Trans Med Imaging 2008;27:310-9. [PMID: 18334427] [DOI: 10.1109/tmi.2007.904657]
Abstract
Tortuosity is among the first alterations in the retinal vessel network to appear in many retinopathies, such as those due to hypertension. An automatic evaluation of retinal vessel tortuosity would help the early detection of such retinopathies. Quite a few techniques for tortuosity measurement and classification have been proposed, but they do not always match the clinical concept of tortuosity. This justifies the need for a new definition, able to express in mathematical terms the tortuosity as perceived by ophthalmologists. We propose here a new algorithm for the evaluation of tortuosity in vessels recognized in digital fundus images. It is based on partitioning each vessel into segments of constant-sign curvature and then combining the evaluations of these segments with their number. The algorithm has been compared with other available tortuosity measures on a set of 30 arteries and a set of 30 veins from 60 different images. These vessels had been preliminarily ordered by a retina specialist by increasing perceived tortuosity. The proposed algorithm proved to be the best one in matching the clinically perceived vessel tortuosity.
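The constant-sign-curvature idea lends itself to a compact implementation. The following Python/NumPy sketch assumes the vessel centerline is available as sampled (x, y) coordinates; the way the per-segment evaluations and the segment count are combined here is only one plausible reading of the abstract, not the authors' exact formula.

```python
import numpy as np

def curvature_sign(x, y):
    # Sign of the signed curvature of a sampled planar curve (finite differences).
    dx, dy = np.gradient(x), np.gradient(y)
    ddx, ddy = np.gradient(dx), np.gradient(dy)
    return np.sign(dx * ddy - dy * ddx)

def tortuosity(x, y):
    """Partition the centerline into constant-sign-curvature segments, then
    combine the per-segment arc/chord excess with the number of segments."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    breaks = np.where(np.diff(curvature_sign(x, y)) != 0)[0] + 1
    segments = np.split(np.column_stack([x, y]), breaks)
    total_len = np.sum(np.hypot(np.diff(x), np.diff(y)))
    excess = 0.0
    for seg in segments:
        if len(seg) < 2:
            continue
        arc = np.sum(np.hypot(*np.diff(seg, axis=0).T))
        chord = np.hypot(*(seg[-1] - seg[0]))
        if chord > 0:
            excess += arc / chord - 1.0
    # More sign changes (segments) and larger arc/chord excess => more tortuous.
    return (len(segments) - 1) * excess / max(total_len, 1e-9)
```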
Affiliation(s)
- Enrico Grisan
- Department of Information Engineering, University of Padova, 35131 Padova, Italy

103. Shim DS, Chang S. Sub-pixel retinal vessel tracking and measurement using modified Canny edge detection method. J Imaging Sci Technol 2008. [DOI: 10.2352/j.imagingsci.technol.(2008)52:2(020505)]

104. Automated axon tracking of 3D confocal laser scanning microscopy images using guided probabilistic region merging. Neuroinformatics 2008;5:189-203. [PMID: 17917130] [DOI: 10.1007/s12021-007-0013-4]
Abstract
This paper presents a new algorithm for extracting the centerlines of the axons from a 3D data stack collected by a confocal laser scanning microscope. Recovery of neuronal structures from such datasets is critical for quantitatively addressing a range of neurobiological questions, such as the manner in which the branching pattern of motor neurons changes during synapse elimination. Unfortunately, the data acquired using fluorescence microscopy contains many imaging artifacts, such as blurry boundaries and non-uniform intensities of fluorescent radiation. This makes the centerline extraction difficult. We propose a robust segmentation method based on probabilistic region merging to extract the centerlines of individual axons with minimal user interaction. The 3D model of the extracted axon centerlines in three datasets is presented in this paper. The results are validated against manual tracking, and the robustness of the algorithm is compared with the published repulsive snake algorithm.

105. Al-Kofahi Y, Dowell-Mesfin N, Pace C, Shain W, Turner JN, Roysam B. Improved detection of branching points in algorithms for automated neuron tracing from 3D confocal images. Cytometry A 2008;73:36-43. [DOI: 10.1002/cyto.a.20499]

106. Choe TE, Medioni G, Cohen I, Walsh AC, Sadda SR. 2-D registration and 3-D shape inference of the retinal fundus from fluorescein images. Med Image Anal 2007;12:174-90. [PMID: 18060827] [DOI: 10.1016/j.media.2007.10.002]
Abstract
This study presents methods for 2-D registration of retinal image sequences and 3-D shape inference from fluorescein images. The Y-feature is a robust geometric entity that is largely invariant across modalities as well as across the temporal grey level variations induced by the propagation of the dye in the vessels. We first present a Y-feature extraction method that finds a set of Y-feature candidates using local image gradient information. A gradient-based approach is then used to align an articulated model of the Y-feature to the candidates more accurately while optimizing a cost function. Using mutual information, fitted Y-features are subsequently matched across images, including color and fluorescein angiographic frames, for registration. To reconstruct the retinal fundus in 3-D, the extracted Y-features are used to estimate the epipolar geometry with a plane-and-parallax approach. The proposed solution provides a robust estimation of the fundamental matrix suitable for plane-like surfaces, such as the retinal fundus. The mutual information criterion is used to accurately estimate the dense disparity map. Our experimental results validate the proposed method on a set of difficult fluorescein image pairs.
Affiliation(s)
- Tae Eun Choe
- Institute for Robotics and Intelligent Systems, University of Southern California, 3737 Watt way, Los Angeles, CA 90248, USA.

107. Ricci E, Perfetti R. Retinal blood vessel segmentation using line operators and support vector classification. IEEE Trans Med Imaging 2007;26:1357-65. [PMID: 17948726] [DOI: 10.1109/tmi.2007.898551]
Abstract
In the framework of computer-aided diagnosis of eye diseases, retinal vessel segmentation based on line operators is proposed. A line detector, previously used in mammography, is applied to the green channel of the retinal image. It is based on the evaluation of the average grey level along lines of fixed length passing through the target pixel at different orientations. Two segmentation methods are considered. The first uses the basic line detector whose response is thresholded to obtain unsupervised pixel classification. As a further development, we employ two orthogonal line detectors along with the grey level of the target pixel to construct a feature vector for supervised classification using a support vector machine. The effectiveness of both methods is demonstrated through receiver operating characteristic analysis on two publicly available databases of color fundus images.
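As a rough illustration of the basic line detector, the sketch below (not the authors' code; kernel length and orientation count are assumptions) computes, for each pixel of the green channel, the average grey level along short line kernels at several orientations and subtracts the local window average; the strongest oriented response is the line strength that is thresholded in the unsupervised variant or combined with the orthogonal response and the pixel grey level for the SVM.

```python
import numpy as np
from scipy.ndimage import correlate, uniform_filter

def line_kernel(length, theta):
    """Averaging kernel along a line of the given length and orientation."""
    k = np.zeros((length, length))
    c = (length - 1) / 2
    t = np.arange(length) - c
    rows = np.round(c - t * np.sin(theta)).astype(int)
    cols = np.round(c + t * np.cos(theta)).astype(int)
    k[rows, cols] = 1.0
    return k / k.sum()

def line_strength(green, length=15, n_theta=12):
    """Maximum oriented line average minus the local square-window average."""
    g = np.asarray(green, dtype=float)
    window_avg = uniform_filter(g, size=length)
    responses = [correlate(g, line_kernel(length, t))
                 for t in np.linspace(0, np.pi, n_theta, endpoint=False)]
    return np.max(responses, axis=0) - window_avg
```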
Affiliation(s)
- Elisa Ricci
- Department of Electronic and Information Engineering, University of Perugia, I-06125 Perugia, Italy

108. Winter K, Metz LHW, Kuska JP, Frerich B. Characteristic quantities of microvascular structures in CLSM volume datasets. IEEE Trans Med Imaging 2007;26:1103-14. [PMID: 17695130] [DOI: 10.1109/tmi.2007.900379]
Abstract
A method for fully automated morphological and topological quantification of microvascular structures in confocal laser scanning microscopy (CLSM) volume datasets is presented. Several characteristic morphological and topological quantities are calculated in a series of image-processing steps and can be used to compare single components as well as whole networks of microvascular structures to each other. The effect of the individual image-processing steps is illustrated and characteristic quantities of measured volume datasets are presented and discussed.
Affiliation(s)
- Karsten Winter
- Translational Centre for Regenerative Medicine, University of Leipzig, 04103 Leipzig, Germany.

109. Narasimha-Iyer H, Can A, Roysam B, Tanenbaum HL, Majerovics A. Integrated analysis of vascular and nonvascular changes from color retinal fundus image sequences. IEEE Trans Biomed Eng 2007;54:1436-45. [PMID: 17694864] [DOI: 10.1109/tbme.2007.900807]
Abstract
Algorithms are presented for integrated analysis of both vascular and nonvascular changes observed in longitudinal time-series of color retinal fundus images, extending our prior work. A Bayesian model selection algorithm that combines color change information and the outputs of image understanding systems in a novel manner is used to analyze vascular changes, such as increase/decrease in width and disappearance/appearance of vessels, as well as nonvascular changes, such as appearance/disappearance of different kinds of lesions. The overall system is robust to false changes due to inter-image and intra-image nonuniform illumination, imaging artifacts such as dust particles in the optical path, alignment errors and outliers in the training data. An expert observer validated the algorithms on 54 regions selected from 34 image pairs. The regions were selected such that they represented diverse types of vascular changes of interest, as well as no-change regions. The algorithm achieved a sensitivity of 82% and a 9% false positive rate for vascular changes. For the nonvascular changes, 97% sensitivity and a 10% false positive rate are achieved. The combined system is intended for diverse applications including computer-assisted retinal screening, image-reading centers, quantitative monitoring of disease onset and progression, assessment of treatment efficacy, and scoring clinical trials.

110. Narasimha-Iyer H, Beach JM, Khoobehi B, Roysam B. Automatic identification of retinal arteries and veins from dual-wavelength images using structural and functional features. IEEE Trans Biomed Eng 2007;54:1427-35. [PMID: 17694863] [DOI: 10.1109/tbme.2007.900804]
Abstract
This paper presents an automated method to identify arteries and veins in dual-wavelength retinal fundus images recorded at 570 and 600 nm. Dual-wavelength imaging provides both structural and functional features that can be exploited for identification. The processing begins with automated tracing of the vessels from the 570-nm image. The 600-nm image is registered to this image, and structural and functional features are computed for each vessel segment. We use the relative strength of the vessel central reflex as the structural feature. The central reflex phenomenon, caused by light reflection from vessel surfaces that are parallel to the incident light, is especially pronounced at longer wavelengths for arteries compared to veins. We use a dual-Gaussian to model the cross-sectional intensity profile of vessels. The model parameters are estimated using a robust M-estimator, and the relative strength of the central reflex is computed from these parameters. The functional feature exploits the fact that arterial blood is more oxygenated relative to that in veins. This motivates use of the ratio of the vessel optical densities (ODs) from images at oxygen-sensitive and oxygen-insensitive wavelengths (ODR = OD600/OD570) as a functional indicator. Finally, the structural and functional features are combined in a classifier to identify the type of the vessel. We experimented with four different classifiers and the best result was given by a support vector machine (SVM) classifier. With the SVM classifier, the proposed algorithm achieved true positive rates of 97% for the arteries and 90% for the veins, when applied to a set of 251 vessel segments obtained from 25 dual wavelength images. The ability to identify the vessel type is useful in applications such as automated retinal vessel oximetry and automated analysis of vascular changes without manual intervention.
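The structural feature (relative central reflex strength) can be illustrated with a small least-squares fit of a dual-Gaussian cross-section model. This sketch uses plain curve_fit rather than the robust M-estimator described above, and the exact parameterization is an assumption.

```python
import numpy as np
from scipy.optimize import curve_fit

def dual_gaussian(x, bg, a_vessel, s_vessel, a_reflex, s_reflex):
    """Dark vessel Gaussian plus a narrower bright Gaussian (central reflex)."""
    return (bg
            - a_vessel * np.exp(-x**2 / (2 * s_vessel**2))
            + a_reflex * np.exp(-x**2 / (2 * s_reflex**2)))

def central_reflex_strength(profile):
    """Relative strength of the central reflex from one vessel cross-section."""
    profile = np.asarray(profile, dtype=float)
    x = np.arange(len(profile)) - (len(profile) - 1) / 2
    p0 = [profile.max(), np.ptp(profile), len(profile) / 4,
          0.2 * np.ptp(profile), len(profile) / 10]
    params, _ = curve_fit(dual_gaussian, x, profile, p0=p0, maxfev=10000)
    bg, a_vessel, s_vessel, a_reflex, s_reflex = params
    return a_reflex / a_vessel
```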

111. Smith CM, Cole Smith J, Williams SK, Rodriguez JJ, Hoying JB. Automatic thresholding of three-dimensional microvascular structures from confocal microscopy images. J Microsc 2007;225:244-57. [PMID: 17371447] [DOI: 10.1111/j.1365-2818.2007.01739.x]
Abstract
We have combined confocal microscopy, image processing, and optimization techniques to obtain automated, accurate volumetric measurements of microvasculature. Initially, we made tissue phantoms containing 15-microm FocalCheck microspheres suspended in type I collagen. Using these phantoms we obtained a stack of confocal images and examined the accuracy of various thresholding schemes. Thresholding algorithms from the literature that utilize a unimodal histogram, a bimodal histogram, or an intensity and edge-based algorithm all significantly overestimated the volume of foreground structures in the image stack. Instead, we developed a heuristic technique to automatically determine good-quality threshold values based on the depth, intensity, and (optionally) gradient of each voxel. This method analyzed intensity and gradient threshold methods for each individual image stack, taking into account the intensity attenuation that is seen in deeper images of the stack. Finally, we generated a microvascular construct comprised of rat fat microvessel fragments embedded in collagen I gels and obtained stacks of confocal images. Using our new thresholding scheme we were able to obtain automatic volume measurements of growing microvessel fragments.
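The paper derives its thresholds from an analysis of depth, intensity, and (optionally) gradient; the toy sketch below only conveys the flavor of a depth-dependent threshold (an exponential attenuation fit plus a per-slice offset) and is not the authors' scheme.

```python
import numpy as np

def depth_adaptive_thresholds(stack, k=2.0):
    """Per-slice thresholds that follow intensity attenuation with depth:
    fit an exponential decay to the mean slice intensity, then add k standard
    deviations of each slice. Purely illustrative heuristic."""
    stack = np.asarray(stack, dtype=float)          # shape (z, y, x)
    z = np.arange(stack.shape[0])
    flat = stack.reshape(stack.shape[0], -1)
    means, stds = flat.mean(axis=1), flat.std(axis=1)
    slope, intercept = np.polyfit(z, np.log(means + 1e-9), 1)
    fitted = np.exp(intercept + slope * z)
    return fitted + k * stds

def threshold_stack(stack, k=2.0):
    t = depth_adaptive_thresholds(stack, k)
    return np.asarray(stack, dtype=float) > t[:, None, None]
```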
Affiliation(s)
- Cynthia M Smith
- Biomedical Engineering Program, University of Arizona, Tucson, Arizona 85724, USA

112. Adjeroh DA, Kandaswamy U, Odom JV. Texton-based segmentation of retinal vessels. J Opt Soc Am A Opt Image Sci Vis 2007;24:1384-93. [PMID: 17429484] [DOI: 10.1364/josaa.24.001384]
Abstract
With improvements in fundus imaging technology and the increasing use of digital images in screening and diagnosis, the issue of automated analysis of retinal images is gaining more serious attention. We consider the problem of retinal vessel segmentation, a key issue in automated analysis of digital fundus images. We propose a texture-based vessel segmentation algorithm based on the notion of textons. Using a weak statistical learning approach, we construct textons for retinal vasculature by designing filters that are specifically tuned to the structural and photometric properties of retinal vessels. We evaluate the performance of the proposed approach using a standard database of retinal images. On the DRIVE data set, the proposed method produced an average performance of 0.9568 specificity at 0.7346 sensitivity. This compares well with the best published results on the data set, 0.9773 specificity at 0.7194 sensitivity [Proc. SPIE 5370, 648 (2004)].
Affiliation(s)
- Donald A Adjeroh
- Lane Department of Computer Science and Electrical Engineering, Vido and Image Processing Laboratory, West Virginia University, Morgantown 26506, USA.

113. Wang YP, Ragib H, Huang CM. A wavelet approach for the identification of axonal synaptic varicosities from microscope images. IEEE Trans Inf Technol Biomed 2007;11:296-304. [PMID: 17521079] [DOI: 10.1109/titb.2006.884370]
Abstract
Direct visualization of synapses is a prerequisite to the analysis of the spatial distribution patterns of synaptic systems. Such an analysis is essential to the understanding of synaptic circuitry. In order to facilitate the visualization of individual synapses at the subcellular level from microscope images, we have introduced a wavelet-based approach for the semiautomated recognition of axonal synaptic varicosities. The proposed approach to image analysis employs a family of redundant wavelet representations. They are specifically designed for the recognition of signal peaks, which correspond to the presence of axonal synaptic varicosities. In this paper, the two-dimensional image of an axon together with its synaptic varicosities is first transformed into a one-dimensional (1-D) profile in which the axonal varicosities are represented by peaks in the signal. Next, by decomposing the 1-D profile in the differential wavelet domain, we employ the multi-scale point-wise product to distinguish between peaks and noise. The ability to separate the true signals (due to synaptic varicosities) from noise makes possible a reliable and accurate recognition of axonal synaptic varicosities. The proposed algorithms are also designed with a variable threshold that effectively allows variable sensitivities in varicosity detection. The algorithm has been systematically validated using images containing varicosities (≤30) that have been consistently identified by seven human observers. The proposed algorithm can give high sensitivity and specificity with an appropriate threshold. The results have indicated that the semiautomatic approach is satisfactory for processing a variety of microscopic images of axons under different conditions.
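The multi-scale point-wise product can be sketched with derivative-of-Gaussian responses standing in for the paper's differential wavelet family (an assumption): genuine peaks reinforce across scales, while noise does not survive the product.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def multiscale_peak_score(profile, scales=(1, 2, 4)):
    """Point-wise product of peak-sensitive responses (negative second
    derivative of Gaussian) computed at several scales."""
    p = np.asarray(profile, dtype=float)
    resp = [np.maximum(-gaussian_filter1d(p, s, order=2), 0.0) for s in scales]
    return np.prod(resp, axis=0)

def detect_varicosities(profile, scales=(1, 2, 4), k=3.0):
    """Local maxima of the multiscale product above a tunable threshold,
    mirroring the variable-sensitivity threshold mentioned in the abstract."""
    score = multiscale_peak_score(profile, scales)
    thresh = score.mean() + k * score.std()
    is_peak = ((score > thresh)
               & (score >= np.roll(score, 1))
               & (score >= np.roll(score, -1)))
    return np.where(is_peak)[0]
```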
Affiliation(s)
- Yu-Ping Wang
- School of Computing and Engineering, University of Missouri-Kansas City, Kansas City, MO 64110, USA.

114.
Abstract
Accurate retinal blood vessel detection offers a great opportunity to predict and detect the stages of various ocular and systemic diseases, such as glaucoma, hypertension and congestive heart failure, since the change in width of blood vessels in the retina has been reported as an independent and significant prospective risk factor for such diseases. In large-population studies of disease control and prevention, there exists an overwhelming need for an automatic tool that can reliably and accurately identify and measure retinal vessel diameters. To address requirements in this clinical setting, a vessel detection algorithm is proposed to quantitatively measure the salient properties of retinal vessels and combine the measurements by Bayesian decision to generate a confidence value for each detected vessel segment. The salient properties of vessels provide an alternative approach for retinal vessel detection at a level higher than detection at the pixel level. Experiments show detection performance superior to currently published results on a publicly available data set. More importantly, the proposed algorithm provides a confidence measurement that can be used as an objective criterion to select reliable vessel segments for diameter measurement.
Affiliation(s)
- Ke Huang
- Department of Electrical and Computer Engineering, Michigan State University, East Lansing, MI 48824, USA

115. Grisan E, Pesce A, Giani A, Foracchia M, Ruggeri A. A new tracking system for the robust extraction of retinal vessel structure. Conf Proc IEEE Eng Med Biol Soc 2004;2004:1620-3. [PMID: 17272011] [DOI: 10.1109/iembs.2004.1403491]
Abstract
Identification and measurement of blood vessels in retinal images could allow quantitative evaluation of clinical features, which may allow early diagnosis and effective monitoring of therapies in retinopathy. A new system is proposed for the automatic extraction of the vascular structure in retinal images, based on a sparse tracking technique. After processing pixels on a grid of rows and columns to determine a set of starting points (seeds), the tracking procedure starts. It moves along the vessel by analyzing subsequent vessel cross sections (lines perpendicular to the vessel direction), and extracting the vessel center, calibre and direction. Vessel points in a cross section are found by means of a fuzzy c-means classifier. When tracking stops because of a critical area, e.g. low contrast, bifurcation or crossing, a "bubble technique" module is run. It grows and analyzes circular scan lines around the critical points, allowing the exploration of the vessel structure beyond the critical areas. After tracking the vessels, identified segments are connected by a greedy connection algorithm. Finally, bifurcations and crossings are identified by analyzing vessel end points with respect to the vessel structure. A numerical evaluation of the performance of the system against a human expert is reported.
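The cross-section classification step can be sketched with a minimal one-dimensional fuzzy c-means (two clusters, vessel versus background); the seed detection, bubble technique, and greedy connection stages are omitted.

```python
import numpy as np

def fuzzy_cmeans_1d(values, c=2, m=2.0, n_iter=50):
    """Minimal 1-D fuzzy c-means: returns cluster centers and memberships."""
    v = np.asarray(values, dtype=float)
    centers = np.linspace(v.min(), v.max(), c)
    for _ in range(n_iter):
        d = np.abs(v[None, :] - centers[:, None]) + 1e-9          # (c, n)
        u = 1.0 / np.sum((d[:, None, :] / d[None, :, :]) ** (2 / (m - 1)), axis=1)
        centers = (u ** m @ v) / np.sum(u ** m, axis=1)
    return centers, u

def cross_section_vessel_points(profile):
    """Indices along a cross section assigned to the darker (vessel) cluster."""
    centers, u = fuzzy_cmeans_1d(profile)
    dark = np.argmin(centers)
    return np.where(np.argmax(u, axis=0) == dark)[0]
```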
Affiliation(s)
- Enrico Grisan
- Department of Information Engineering, Padova University, Italy

116. Wang L, Bhalerao A, Wilson R. Analysis of retinal vasculature using a multiresolution Hermite model. IEEE Trans Med Imaging 2007;26:137-52. [PMID: 17304729] [DOI: 10.1109/tmi.2006.889732]
Abstract
This paper presents a vascular representation and segmentation algorithm based on a multiresolution Hermite model (MHM). A two-dimensional Hermite function intensity model is developed which models blood vessel profiles in a quad-tree structure over a range of spatial resolutions. The use of a multiresolution representation simplifies the image modeling and allows for a robust analysis by combining information across scales. Estimation over scale also reduces the overall computational complexity. As well as using MHM for vessel labelling, the local image modeling can accurately represent vessel directions, widths, amplitudes, and branch points which readily enable the global topology to be inferred. An expectation-maximization (EM) type of optimization scheme is used to estimate local model parameters and an information theoretic test is then applied to select the most appropriate scale/feature model for each region of the image. In the final stage, Bayesian stochastic inference is employed for linking the local features to obtain a description of the global vascular structure. After a detailed description and analysis of MHM, experimental results on two standard retinal databases are given that demonstrate its comparative performance. These show MHM to perform comparably with other retinal vessel labelling methods.
Affiliation(s)
- Li Wang
- Department of Computer Science, University of Warwick, Coventry, U.K

117. Narro ML, Yang F, Kraft R, Wenk C, Efrat A, Restifo LL. NeuronMetrics: software for semi-automated processing of cultured neuron images. Brain Res 2007;1138:57-75. [PMID: 17270152] [PMCID: PMC1945162] [DOI: 10.1016/j.brainres.2006.10.094]
Abstract
Using primary cell culture to screen for changes in neuronal morphology requires specialized analysis software. We developed NeuronMetrics for semi-automated, quantitative analysis of two-dimensional (2D) images of fluorescently labeled cultured neurons. It skeletonizes the neuron image using two complementary image-processing techniques, capturing fine terminal neurites with high fidelity. An algorithm was devised to span wide gaps in the skeleton. NeuronMetrics uses a novel strategy based on geometric features called faces to extract a branch number estimate from complex arbors with numerous neurite-to-neurite contacts, without creating a precise, contact-free representation of the neurite arbor. It estimates total neurite length, branch number, primary neurite number, territory (the area of the convex polygon bounding the skeleton and cell body), and Polarity Index (a measure of neuronal polarity). These parameters provide fundamental information about the size and shape of neurite arbors, which are critical factors for neuronal function. NeuronMetrics streamlines optional manual tasks such as removing noise, isolating the largest primary neurite, and correcting length for self-fasciculating neurites. Numeric data are output in a single text file, readily imported into other applications for further analysis. Written as modules for ImageJ, NeuronMetrics provides practical analysis tools that are easy to use and support batch processing. Depending on the need for manual intervention, processing time for a batch of approximately 60 2D images is 1.0-2.5 h, from a folder of images to a table of numeric data. NeuronMetrics' output accelerates the quantitative detection of mutations and chemical compounds that alter neurite morphology in vitro, and will contribute to the use of cultured neurons for drug discovery.
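Two of the listed measures are easy to illustrate from a binary skeleton: total neurite length (counting diagonal steps as sqrt(2)) and territory as the area of the convex hull of the skeleton pixels. This is a generic sketch, not NeuronMetrics code.

```python
import numpy as np
from scipy.spatial import ConvexHull

def skeleton_length(skel):
    """Approximate skeleton length: each 4-connected neighbor pair counts as 1,
    each diagonal pair as sqrt(2)."""
    s = np.asarray(skel, dtype=bool)
    straight = (np.count_nonzero(s[:, :-1] & s[:, 1:])
                + np.count_nonzero(s[:-1, :] & s[1:, :]))
    diagonal = (np.count_nonzero(s[:-1, :-1] & s[1:, 1:])
                + np.count_nonzero(s[:-1, 1:] & s[1:, :-1]))
    return straight + np.sqrt(2) * diagonal

def territory(skel):
    """Area of the convex polygon bounding the skeleton pixels
    (ConvexHull.volume is the enclosed area for 2-D point sets)."""
    points = np.column_stack(np.nonzero(skel))
    return ConvexHull(points).volume
```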
Affiliation(s)
- Martha L. Narro
- ARL Division of Neurobiology, University of Arizona, Tucson, AZ 85721
- Fan Yang
- ARL Division of Neurobiology, University of Arizona, Tucson, AZ 85721
- Robert Kraft
- ARL Division of Neurobiology, University of Arizona, Tucson, AZ 85721
- Carola Wenk
- Department of Computer Science, University of Texas at San Antonio, San Antonio, TX 78249
- Alon Efrat
- Department of Computer Science, University of Arizona, Tucson, AZ 85721
- Linda L. Restifo
- ARL Division of Neurobiology, University of Arizona, Tucson, AZ 85721
- Interdisciplinary Programs in Neuroscience, Genetics and Cognitive Science, University of Arizona, Tucson, AZ 85721
- BIO5 Institute for Collaborative Bioresearch, University of Arizona, Tucson, AZ 85721
- Department of Neurology, Arizona Health Sciences Center, Tucson, AZ 85724
- Author for correspondence: Linda L. Restifo, 611 Gould-Simpson Bldg., University of Arizona, Tucson, AZ 85721-0077

118. Zhang Y, Zhou X, Degterev A, Lipinski M, Adjeroh D, Yuan J, Wong ST. Automated neurite extraction using dynamic programming for high-throughput screening of neuron-based assays. Neuroimage 2007;35:1502-15. [PMID: 17363284] [PMCID: PMC2000820] [DOI: 10.1016/j.neuroimage.2007.01.014]
Abstract
High-throughput screening (HTS) of cell-based assays has recently emerged as an important tool of drug discovery. The analysis and modeling of HTS microscopy neuron images, however, is particularly challenging. In this paper we present a novel algorithm for extraction and quantification of neurite segments from HTS neuron images. The algorithm is designed to detect and link neurites even in images with complex neuronal structures and poor imaging quality. Our proposed algorithm automatically detects initial seed points on a set of grid lines and estimates the ending points of the neurite by iteratively tracing the centerline points along the line path representing the neurite segment. The live-wire method is then applied to link the seed points and the corresponding ending points using dynamic programming techniques, thus enabling the extraction of the centerlines of the neurite segments accurately and robustly against noise, discontinuity, and other image artifacts. A fast implementation of our algorithm using dynamic programming is also provided in the paper. Any thin neurite and its segments with low intensity contrast can be well preserved by detecting the starting and ending points of the neurite. All these properties make the proposed algorithm attractive for high-throughput screening of neuron-based assays.
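The seed-to-end linking is in the spirit of a live-wire/shortest-path search over a local cost image (low cost on bright, tubular pixels). The sketch below uses a plain Dijkstra search as a stand-in for the paper's dynamic-programming formulation; the cost definition and 8-connectivity are assumptions.

```python
import heapq
import numpy as np

def live_wire_path(cost, seed, end):
    """Minimum-cost 8-connected path from seed to end on a per-pixel cost map."""
    h, w = cost.shape
    dist = np.full((h, w), np.inf)
    prev = {}
    dist[seed] = 0.0
    pq = [(0.0, seed)]
    while pq:
        d, (r, c) = heapq.heappop(pq)
        if (r, c) == end:
            break
        if d > dist[r, c]:
            continue
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                rr, cc = r + dr, c + dc
                if (dr or dc) and 0 <= rr < h and 0 <= cc < w:
                    nd = d + cost[rr, cc]
                    if nd < dist[rr, cc]:
                        dist[rr, cc] = nd
                        prev[(rr, cc)] = (r, c)
                        heapq.heappush(pq, (nd, (rr, cc)))
    path, node = [end], end          # backtrack from end to seed
    while node != seed:
        node = prev[node]
        path.append(node)
    return path[::-1]
```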
Affiliation(s)
- Yong Zhang
- Center for Bioinformatics, Harvard Center for Neurodegeneration and Repair, Harvard Medical School, Boston, MA 02215
- Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, West Virginia, 26506
- Xiaobo Zhou
- Center for Bioinformatics, Harvard Center for Neurodegeneration and Repair, Harvard Medical School, Boston, MA 02215
- Functional and Molecular Imaging Center, Department of Radiology, Brigham & Women’s Hospital, Boston, MA 02115
- Alexei Degterev
- Department of Biochemistry, Tufts University School of Medicine, Boston, MA 02111
- Marta Lipinski
- Department of Cell Biology, Harvard Medical School, Boston, MA 02115
- Donald Adjeroh
- Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, West Virginia, 26506
- Junying Yuan
- Department of Cell Biology, Harvard Medical School, Boston, MA 02115
- Stephen T.C. Wong
- Center for Bioinformatics, Harvard Center for Neurodegeneration and Repair, Harvard Medical School, Boston, MA 02215
- Functional and Molecular Imaging Center, Department of Radiology, Brigham & Women’s Hospital, Boston, MA 02115

119. Groher M, Bender F, Hoffmann RT, Navab N. Segmentation-driven 2D-3D registration for abdominal catheter interventions. Med Image Comput Comput Assist Interv 2007;10:527-35. [PMID: 18044609] [DOI: 10.1007/978-3-540-75759-7_64]
Abstract
2D-3D registration of abdominal angiographic data is a difficult problem due to hard time constraints during the intervention, different vessel contrast in volume and image, and motion blur caused by breathing. We propose a novel method for aligning 2D Digitally Subtracted Angiograms (DSA) to Computed Tomography Angiography (CTA) volumes, which requires no user interaction intrainterventionally. In an iterative process, we link 2D segmentation and 2D-3D registration using a probability map, which creates a common feature space where outliers in 2D and 3D are discarded consequently. Unlike other approaches, we keep user interaction low while high capture range and robustness against vessel variability and deformation are maintained. Tests on five patient data sets and a comparison to two recently proposed methods show the good performance of our method.
Affiliation(s)
- Martin Groher
- Computer Aided Medical Procedures (CAMP), TUM, Munich, Germany.

120. Sofka M, Stewart CV. Retinal vessel centerline extraction using multiscale matched filters, confidence and edge measures. IEEE Trans Med Imaging 2006;25:1531-46. [PMID: 17167990] [DOI: 10.1109/tmi.2006.884190]
Abstract
Motivated by the goals of improving detection of low-contrast and narrow vessels and eliminating false detections at nonvascular structures, a new technique is presented for extracting vessels in retinal images. The core of the technique is a new likelihood ratio test that combines matched-filter responses, confidence measures and vessel boundary measures. Matched filter responses are derived in scale-space to extract vessels of widely varying widths. A vessel confidence measure is defined as a projection of a vector formed from a normalized pixel neighborhood onto a normalized ideal vessel profile. Vessel boundary measures and associated confidences are computed at potential vessel boundaries. Combined, these responses form a six-dimensional measurement vector at each pixel. A training technique is used to develop a mapping of this vector to a likelihood ratio that measures the "vesselness" at each pixel. Results comparing this vesselness measure to matched filters alone and to measures based on the Hessian of intensities show substantial improvements, both qualitatively and quantitatively. The Hessian can be used in place of the matched filter to obtain similar but less-substantial improvements or to steer the matched filter by preselecting kernel orientations. Finally, the new vesselness likelihood ratio is embedded into a vessel tracing framework, resulting in an efficient and effective vessel centerline extraction algorithm.
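The vessel confidence measure has a particularly simple form: the inner product of a normalized pixel neighborhood with a normalized ideal vessel profile. A minimal sketch follows; the template construction (a dark Gaussian bar) is an assumption.

```python
import numpy as np

def ideal_vessel_template(size, width):
    """Dark Gaussian-shaped bar replicated along the template length."""
    x = np.arange(size) - (size - 1) / 2
    cross_section = -np.exp(-x**2 / (2 * (width / 2.0) ** 2))
    return np.tile(cross_section, (size, 1))

def vessel_confidence(patch, template):
    """Projection of the normalized neighborhood onto the normalized template."""
    p = np.asarray(patch, dtype=float).ravel()
    q = np.asarray(template, dtype=float).ravel()
    p -= p.mean()
    q -= q.mean()
    p /= np.linalg.norm(p) + 1e-12
    q /= np.linalg.norm(q) + 1e-12
    return float(np.dot(p, q))
```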
Affiliation(s)
- Michal Sofka
- Department of Computer Science, Rensselaer Polytechnic Institute, Troy, NY 12180-3590, USA.

121. Zhang Y, Zhou X, Degterev A, Lipinski M, Adjeroh D, Yuan J, Wong STC. A novel tracing algorithm for high throughput imaging screening of neuron-based assays. J Neurosci Methods 2006;160:149-62. [PMID: 16987551] [DOI: 10.1016/j.jneumeth.2006.07.028]
Abstract
High throughput neuron image processing is an important method for drug screening and quantitative neurobiological studies. The method usually includes detection of neurite structures, feature extraction, quantification, and statistical analysis. In this paper, we present a new algorithm for fast and automatic extraction of neurite structures in microscopy neuron images. The algorithm is based on novel methods for soma segmentation, seed point detection, recursive center-line detection, and 2D curve smoothing. The algorithm is fully automatic without any human interaction, and robust enough for usage on images with poor quality, such as those with low contrast or low signal-to-noise ratio. It is able to completely and accurately extract neurite segments in neuron images with highly complicated neurite structures. Robustness comes from the use of 2D smoothing techniques and the idea of center-line extraction by estimating the surrounding edges. Efficiency is achieved by processing only pixels that are close enough to the line structures, and by carefully chosen stopping conditions. These make the proposed approach suitable for demanding image processing tasks in high throughput screening of neuron-based assays. Detailed results on experimental validation of the proposed method and on its comparative performance with other proposed schemes are included.
Affiliation(s)
- Yong Zhang
- Center for Bioinformatics, Harvard Center for Neurodegeneration and Repair, Harvard Medical School, Boston, MA 02215, United States

122. Soares JVB, Leandro JJG, Cesar Júnior RM, Jelinek HF, Cree MJ. Retinal vessel segmentation using the 2-D Gabor wavelet and supervised classification. IEEE Trans Med Imaging 2006;25:1214-22. [PMID: 16967806] [DOI: 10.1109/tmi.2006.879967]
Abstract
We present a method for automated segmentation of the vasculature in retinal images. The method produces segmentations by classifying each image pixel as vessel or nonvessel, based on the pixel's feature vector. Feature vectors are composed of the pixel's intensity and two-dimensional Gabor wavelet transform responses taken at multiple scales. The Gabor wavelet is capable of tuning to specific frequencies, thus allowing noise filtering and vessel enhancement in a single step. We use a Bayesian classifier with class-conditional probability density functions (likelihoods) described as Gaussian mixtures, yielding a fast classification, while being able to model complex decision surfaces. The probability distributions are estimated based on a training set of labeled pixels obtained from manual segmentations. The method's performance is evaluated on the publicly available DRIVE (Staal et al., 2004) and STARE (Hoover et al., 2000) databases of manually labeled images. On the DRIVE database, it achieves an area under the receiver operating characteristic curve of 0.9614, slightly superior to that reported for state-of-the-art approaches. We are making our implementation available as open source MATLAB scripts for researchers interested in implementation details, evaluation, or development of methods.
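A compact illustration of the per-pixel feature vector: the green-channel intensity plus the maximum Gabor response over orientations at a few scales. The kernel parameterization below is a simplification of the paper's 2-D Gabor wavelet (frequency and scale values are assumptions), and the Gaussian-mixture Bayesian classifier is omitted.

```python
import numpy as np
from scipy.ndimage import convolve

def gabor_kernel(sigma, theta, frequency):
    """Real-valued Gabor kernel: Gaussian envelope times an oriented cosine."""
    size = int(6 * sigma) | 1                     # odd kernel size
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))
    return envelope * np.cos(2 * np.pi * frequency * xr)

def gabor_features(green, scales=(2, 3, 4), n_theta=12, frequency=0.1):
    """Per-pixel feature vectors: intensity plus max oriented response per scale."""
    g = np.asarray(green, dtype=float)
    feats = [g]
    for s in scales:
        resp = np.stack([convolve(g, gabor_kernel(s, t, frequency))
                         for t in np.linspace(0, np.pi, n_theta, endpoint=False)])
        feats.append(resp.max(axis=0))
    return np.stack(feats, axis=-1)               # shape (H, W, 1 + len(scales))
```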
Affiliation(s)
- João V B Soares
- Institute of Mathematics and Statistics, University of São Paulo, 05508-090 Brazil.

123. Melek Z, Mayerich D, Yuksel C, Keyser J. Visualization of fibrous and thread-like data. IEEE Trans Vis Comput Graph 2006;12:1165-72. [PMID: 17080848] [DOI: 10.1109/tvcg.2006.197]
Abstract
Thread-like structures are becoming more common in modern volumetric data sets as our ability to image vascular and neural tissue at higher resolutions improves. The thread-like structures of neurons and micro-vessels pose a unique problem in visualization since they tend to be densely packed in small volumes of tissue. This makes it difficult for an observer to interpret useful patterns from the data or trace individual fibers. In this paper we describe several methods for dealing with large amounts of thread-like data, such as data sets collected using Knife-Edge Scanning Microscopy (KESM) and Serial Block-Face Scanning Electron Microscopy (SBF-SEM). These methods allow us to collect volumetric data from embedded samples of whole-brain tissue. The neuronal and microvascular data that we acquire consists of thin, branching structures extending over very large regions. Traditional visualization schemes are not sufficient to make sense of the large, dense, complex structures encountered. In this paper, we address three methods to allow a user to explore a fiber network effectively. We describe interactive techniques for rendering large sets of neurons using self-orienting surfaces implemented on the GPU. We also present techniques for rendering fiber networks in a way that provides useful information about flow and orientation. Third, a global illumination framework is used to create high-quality visualizations that emphasize the underlying fiber structure. Implementation details, performance, and advantages and disadvantages of each approach are discussed.
Affiliation(s)
- Zeki Melek
- Computer Science, Texas A&M University, USA.

124. Mendonça AM, Campilho A. Segmentation of retinal blood vessels by combining the detection of centerlines and morphological reconstruction. IEEE Trans Med Imaging 2006;25:1200-13. [PMID: 16967805] [DOI: 10.1109/tmi.2006.879955]
Abstract
This paper presents an automated method for the segmentation of the vascular network in retinal images. The algorithm starts with the extraction of vessel centerlines, which are used as guidelines for the subsequent vessel filling phase. For this purpose, the outputs of four directional differential operators are processed in order to select connected sets of candidate points to be further classified as centerline pixels using vessel derived features. The final segmentation is obtained using an iterative region growing method that integrates the contents of several binary images resulting from vessel width dependent morphological filters. Our approach was tested on two publicly available databases and its results are compared with recently published methods. The results demonstrate that our algorithm outperforms other solutions and approximates the average accuracy of a human observer without a significant degradation of sensitivity and specificity.
Affiliation(s)
- Ana Maria Mendonça
- Signal and Image Laboratory, Institute for Biomedical Engineering, University of Porto, Campus da FEUP/DEEC, 4200-465 Porto, Portugal.

125. Narasimha-Iyer H, Can A, Roysam B, Stewart CV, Tanenbaum HL, Majerovics A, Singh H. Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy. IEEE Trans Biomed Eng 2006;53:1084-98. [PMID: 16761836] [DOI: 10.1109/tbme.2005.863971]
Abstract
A fully automated approach is presented for robust detection and classification of changes in longitudinal time-series of color retinal fundus images of diabetic retinopathy. The method is robust to: 1) spatial variations in illumination resulting from instrument limitations and changes both within and between patient visits; 2) imaging artifacts such as dust particles; 3) outliers in the training data; 4) segmentation and alignment errors. Robustness to illumination variation is achieved by a novel iterative algorithm to estimate the reflectance of the retina exploiting automatically extracted segmentations of the retinal vasculature, optic disk, fovea, and pathologies. Robustness to dust artifacts is achieved by exploiting their spectral characteristics, enabling application to film-based, as well as digital imaging systems. False changes from alignment errors are minimized by subpixel accuracy registration using a 12-parameter transformation that accounts for unknown retinal curvature and camera parameters. Bayesian detection and classification algorithms are used to generate a color-coded output that is readily inspected. A multiobserver validation on 43 image pairs from 22 eyes involving nonproliferative and proliferative diabetic retinopathies showed a 97% change detection rate, a 3% miss rate, and a 10% false alarm rate. The performance in correctly classifying the changes was 99.3%. A self-consistency metric and an error factor were developed to measure performance over more than two periods. The average self-consistency was 94% and the error factor was 0.06%. Although this study focuses on diabetic changes, the proposed techniques have broader applicability in ophthalmology.
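The 12-parameter transformation mentioned above is a quadratic mapping with six coefficients per output coordinate, which can absorb the unknown retinal curvature and camera geometry. A minimal sketch of applying and least-squares fitting such a model from matched point pairs (the robust estimation and the rest of the registration pipeline are omitted):

```python
import numpy as np

def _quadratic_basis(points):
    points = np.asarray(points, dtype=float)
    x, y = points[:, 0], points[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])  # (n, 6)

def apply_quadratic(points, theta):
    """theta: 12 parameters, the 6 coefficients for x' followed by the 6 for y'."""
    return _quadratic_basis(points) @ np.reshape(theta, (2, 6)).T

def fit_quadratic(src, dst):
    """Least-squares estimate of the 12 parameters from matched point pairs."""
    solution, *_ = np.linalg.lstsq(_quadratic_basis(src), np.asarray(dst, dtype=float),
                                   rcond=None)        # (6, 2)
    return solution.T.ravel()
```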
Affiliation(s)
- Harihar Narasimha-Iyer
- Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA.

126. Jiang M, Ji Q, McEwen BF. Automated extraction of fine features of kinetochore microtubules and plus-ends from electron tomography volume. IEEE Trans Image Process 2006;15:2035-48. [PMID: 16830922] [DOI: 10.1109/tip.2006.877054]
Abstract
Kinetochore microtubules (KMTs) and the associated plus-ends have been areas of intense investigation in both cell biology and molecular medicine. Though electron tomography opens up new possibilities in understanding their function by imaging their high-resolution structures, the interpretation of the acquired data remains an obstacle because of the complex and cluttered cellular environment. As a result, practical segmentation of the electron tomography data has been dominated by manual operation, which is time consuming and subjective. In this paper, we propose a model-based automated approach to extracting KMTs and the associated plus-ends with a coarse-to-fine scale scheme consisting of volume preprocessing, microtubule segmentation and plus-end tracing. In volume preprocessing, we first apply an anisotropic invariant wavelet transform and a tube-enhancing filter to enhance the microtubules at coarse level for localization. This is followed with a surface-enhancing filter to accentuate the fine microtubule boundary features. The microtubule body is then segmented using a modified active shape model method. Starting from the segmented microtubule body, the plus-ends are extracted with a probabilistic tracing method improved with rectangular window based feature detection and the integration of multiple cues. Experimental results demonstrate that our automated method produces results comparable to manual segmentation but using only a fraction of the manual segmentation time.
Affiliation(s)
- Ming Jiang
- Department of Electrical, Computer and System Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180, USA

127. Tsai CL, Warger W, DiMarzio C. Accurate image registration for quadrature tomographic microscopy. Med Image Comput Comput Assist Interv 2005;8:935-42. [PMID: 16686050] [DOI: 10.1007/11566489_115]
Abstract
This paper presents a robust and fully automated algorithm for registration of Quadrature Tomographic Microscopy (QTM) images; QTM is an optical interferometer. Registration of such images is needed to recognize distinguishing features of viable embryos and to advance the technique for in vitro fertilization. QTM images a sample (live embryo) multiple times with different hardware configurations, each in turn producing 4 images taken by 4 CCD cameras simultaneously. Embryo movement is often present between acquisitions. Our algorithm handles calibration of the multiple cameras using a variant of ICP, and elimination of embryo movement using a hybrid of feature- and intensity-based methods. The algorithm was tested on 20 live mouse embryos containing various cell numbers between 8 and 26. No failures have occurred thus far, and the average alignment error is 0.09 pixels, corresponding to 639-675 nanometers.

128.
Abstract
This work studies retinal image registration in the context of the National Institutes of Health (NIH) Early Treatment Diabetic Retinopathy Study (ETDRS) standard. The ETDRS imaging protocol specifies seven fields of each retina and presents three major challenges for the image registration task. First, small overlaps between adjacent fields lead to inadequate landmark points for feature-based methods. Second, the non-uniform contrast/intensity distributions due to imperfect data acquisition degrade the performance of area-based techniques. Third, high-resolution images contain large homogeneous nonvascular/textureless regions that weaken the capabilities of both feature-based and area-based techniques. In this work, we propose a hybrid retinal image registration approach for ETDRS images that effectively combines both area-based and feature-based methods. Four major steps are involved. First, the vascular tree is extracted by using an efficient local entropy-based thresholding technique. Next, zeroth-order translation is estimated by maximizing mutual information based on the binary image pair (area-based). Then image quality assessment regarding the ETDRS field definition is performed based on the translation model. If the image pair is accepted, higher-order transformations will be involved. Specifically, we use two types of features, landmark points and sampling points, for affine/quadratic model estimation. Three empirical conditions are derived experimentally to control the algorithm progress, so that we can achieve the lowest registration error and the highest success rate. Simulation results on 504 pairs of ETDRS images show the effectiveness and robustness of the proposed algorithm.
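The zeroth-order (translation-only) step can be illustrated as an exhaustive search that maximizes mutual information between the two binary vessel maps; the entropy-based thresholding and the later affine/quadratic stages are omitted, and the circular shift and search range below are simplifications.

```python
import numpy as np

def mutual_information(a, b):
    """MI between two binary images from their 2x2 joint histogram."""
    joint = np.histogram2d(a.ravel().astype(float), b.ravel().astype(float), bins=2)[0]
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1, keepdims=True)
    py = pxy.sum(axis=0, keepdims=True)
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))

def best_translation(fixed, moving, search=20):
    """Exhaustive search over integer shifts maximizing MI of binary vessel maps."""
    best, best_mi = (0, 0), -np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            shifted = np.roll(np.roll(moving, dy, axis=0), dx, axis=1)
            mi = mutual_information(fixed, shifted)
            if mi > best_mi:
                best_mi, best = mi, (dy, dx)
    return best, best_mi
```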
Affiliation(s)
- Thitiporn Chanwimaluang
- School of Electrical and Computer Engineering, Oklahoma State University, Stillwater 74078, USA.

129. Wu D, Zhang M, Liu JC, Bauman W. On the adaptive detection of blood vessels in retinal images. IEEE Trans Biomed Eng 2006;53:341-3. [PMID: 16485764] [DOI: 10.1109/tbme.2005.862571]
Abstract
This paper proposes an automated blood vessel detection scheme based on adaptive contrast enhancement, feature extraction, and tracing. Feature extraction of small blood vessels is performed by using the standard deviation of Gabor filter responses. Tracing of vessels is done via forward detection, bifurcation identification, and backward verification. Tests over twenty images show that for normal images, the true positive rate (TPR) ranges from 80% to 91%, and their corresponding false positive rates (FPR) range from 2.8% to 5.5%. For abnormal images, the TPR ranges from 73.8% to 86.5% and the FPR ranges from 2.1% to 5.3%, respectively. In comparison with two published solution schemes that were also based on the STARE database, our scheme has lower FPR for the reported TPR measure.
Affiliation(s)
- Di Wu
- Computer Science Department, Texas A&M University, College Station 77843-3112, USA.

130. Lin G, Bjornsson CS, Smith KL, Abdul-Karim MA, Turner JN, Shain W, Roysam B. Automated image analysis methods for 3-D quantification of the neurovascular unit from multichannel confocal microscope images. Cytometry A 2006;66:9-23. [PMID: 15934061] [DOI: 10.1002/cyto.a.20149]
Abstract
BACKGROUND: There is a need for integrative and quantitative methods to investigate the structural and functional relations among elements of complex systems, such as the neurovascular unit (NVU), that involve multiple cell types, microvasculatures, and various genomic/proteomic/ionic functional entities. METHODS: Vascular casting and selective labeling enabled simultaneous three-dimensional imaging of the microvasculature, cell nuclei, and cytoplasmic stains. Multidimensional segmentation was achieved by (i) bleed-through removal and attenuation correction; (ii) independent segmentation and morphometry for each corrected channel; and (iii) spatially associative feature computation across channels. The combined measurements enabled cell classification based on nuclear morphometry, cytoplasmic signals, and distance from vascular elements. Specific spatial relations among the NVU elements could be quantified. RESULTS: A software system combining nuclear and vessel segmentation codes and associative features was constructed and validated. Biological variability contributed to misidentified nuclei (9.3%), undersegmentation of nuclei (3.7%), hypersegmentation of nuclei (14%), and missed nuclei (4.7%). Microvessel segmentation errors occurred rarely, mainly due to nonuniform lumen staining. CONCLUSIONS: Associative features across fluorescence channels, in combination with standard features, enable integrative structural and functional analysis of the NVU. By labeling additional structural and functional entities, this method can be scaled up to larger-scale systems biology studies that integrate spatial and molecular information.
Affiliation(s)
- Gang Lin
- Department of Electrical, Computer, and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180-3590, USA

131. Narasimha-Iyer H, Can A, Roysam B, Stern J. Automated change analysis from fluorescein angiograms for monitoring wet macular degeneration. Conf Proc IEEE Eng Med Biol Soc 2006;2006:4714-7. [PMID: 17947113] [DOI: 10.1109/iembs.2006.259738]
Abstract
Detection and analysis of changes from retinal images are important in clinical practice, quantitative scoring of clinical trials, computer-assisted reading centers, and in medical research. This paper presents a fully automated approach for robust detection and classification of changes in longitudinal time-series of fluorescein angiograms (FA). The changes of interest here are related to the development of choroidal neo-vascularization (CNV) in wet macular degeneration. Specifically, the changes in CNV regions as well as the retinal pigment epithelium (RPE) hypertrophic regions are detected and analyzed to study the progression of disease and the effect of treatment. Retinal features including the vasculature, vessel branching/crossover locations, optic disk and location of the fovea are first segmented automatically. The images are then registered to sub-pixel accuracy using a 12-dimensional mapping that accounts for the unknown retinal curvature and camera parameters. Spatial variations in illumination are removed using a surface fitting algorithm that exploits the segmentations of the various features. The changes are identified in the regions of interest and a Bayesian classifier is used to classify the changes into clinically significant classes. The automated change analysis algorithms were found to have a success rate of 83%.

132. Walsh AC, Updike PG, Sadda SR. Quantitative fluorescein angiography. Retina 2006. [DOI: 10.1016/b978-0-323-02598-0.50058-6]

133. Jain AK, Mustafa T, Zhou Y, Burdette C, Chirikjian GS, Fichtinger G. FTRAC--a robust fluoroscope tracking fiducial. Med Phys 2005;32:3185-98. [PMID: 16279072] [DOI: 10.1118/1.2047782]
Abstract
C-arm fluoroscopy is ubiquitous in contemporary surgery, but it lacks the ability to accurately reconstruct three-dimensional (3D) information. A major obstacle in fluoroscopic reconstruction is discerning the pose of the x-ray image in 3D space. Optical/magnetic trackers tend to be prohibitively expensive, intrusive and cumbersome in many applications. We present single-image-based fluoroscope tracking (FTRAC) with the use of an external radiographic fiducial consisting of a mathematically optimized set of ellipses, lines, and points. This is an improvement over contemporary fiducials, which use only points. The fiducial encodes six degrees of freedom in a single image by creating a unique view from any direction. A nonlinear optimizer can rapidly compute the pose of the fiducial using this image. The current embodiment has salient attributes: it has small dimensions (3 x 3 x 5 cm); it need not be close to the anatomy of interest; and it is accurately segmentable. We tested the fiducial and the pose recovery method on synthetic data and also experimentally on a precisely machined mechanical phantom. Pose recovery in phantom experiments had an accuracy of 0.56 mm in translation and 0.33 degrees in orientation. Object reconstruction had a mean error of 0.53 mm with 0.16 mm STD. The method offers accuracies similar to commercial tracking systems, and appears to be sufficiently robust for intraoperative quantitative C-arm fluoroscopy. Simulation experiments indicate that the size can be further reduced to 1 x 1 x 2 cm, with only a marginal drop in accuracy.
Affiliation(s)
- Ameet Kumar Jain
- Department of Computer Science, Johns Hopkins University, Baltimore, Maryland 21218, USA.
|
134
|
Su CL. Chinese-Seal-Print Recognition by Color Image Dilating, Extraction, and Gray Scale Image Geometry Comparison. J INTELL ROBOT SYST 2005. [DOI: 10.1007/s10846-005-9001-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
135
|
Tyrrell JA, Mahadevan V, Tong RT, Brown EB, Jain RK, Roysam B. A 2-D/3-D model-based method to quantify the complexity of microvasculature imaged by in vivo multiphoton microscopy. Microvasc Res 2005; 70:165-78. [PMID: 16239015 DOI: 10.1016/j.mvr.2005.08.005] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2005] [Revised: 04/29/2005] [Accepted: 08/30/2005] [Indexed: 11/30/2022]
Abstract
This paper presents model-based information-theoretic methods to quantify the complexity of tumor microvasculature, taking into account shape, textural, and structural irregularities. The proposed techniques are completely automated, and are applicable to optical slices (3-D) or projection images (2-D). Improvements upon the prior literature include: (i) measuring local (vessel segment) as well as global (entire image) vascular complexity without requiring explicit segmentation or tracing; (ii) focusing on the vessel boundaries in the complexity estimate; and (iii) added robustness to image artifacts common to tumor microvasculature images. Vessels are modeled using a family of super-Gaussian functions that are based on the superquadric modeling primitive common in computer vision. The superquadric generalizes a simple ellipsoid by including shape parameters that allow it to approximate a cylinder with elliptical cross-sections (generalized cylinder). The super-Gaussian is obtained by composing a superquadric with an exponential function giving a form that is similar to a standard Gaussian function but with the ability to produce level sets that approximate generalized cylinders. Importantly, the super-Gaussian is continuous and differentiable so it can be fit to image data using robust non-linear regression. This fitting enables quantification of the intrinsic complexity of vessel data vis-a-vis the super-Gaussian model within a minimum message length (MML) framework. The resulting measures are expressed in units of information (bits). Synthetic and real-data examples are provided to illustrate the proposed measures.
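A hedged sketch of the super-Gaussian idea described above: a Gaussian-like profile whose level sets flatten toward generalized-cylinder cross-sections as the shape exponents grow. The parameterization below is illustrative, not the paper's exact model.

```python
# Illustrative 2-D "super-Gaussian" profile: exponents of 1 give an
# ordinary Gaussian; larger exponents give flatter tops and more
# box-like (generalized-cylinder-like) level sets.
import numpy as np

def super_gaussian_2d(x, y, sx=1.0, sy=1.0, ex=1.0, ey=1.0, amplitude=1.0):
    return amplitude * np.exp(-(np.abs(x / sx) ** (2 * ex)
                                + np.abs(y / sy) ** (2 * ey)))

if __name__ == "__main__":
    xx, yy = np.meshgrid(np.linspace(-3, 3, 7), np.linspace(-3, 3, 7))
    g_ordinary = super_gaussian_2d(xx, yy, ex=1.0, ey=1.0)
    g_boxy = super_gaussian_2d(xx, yy, ex=4.0, ey=4.0)
    print("center values:", g_ordinary[3, 3], g_boxy[3, 3])
    print("off-center value, ordinary vs boxy:", round(g_ordinary[3, 4], 3), round(g_boxy[3, 4], 3))
```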
Affiliation(s)
- James A Tyrrell
- Department of Electrical, Computer and Systems Engineering, Rensselaer Polytechnic Institute, Troy, NY 12180-3590, USA
|
136
|
Narasimha-Iyer H, Beach JM, Khoobehi B, Ning J, Kawano H, Roysam B. Algorithms for automated oximetry along the retinal vascular tree from dual-wavelength fundus images. JOURNAL OF BIOMEDICAL OPTICS 2005; 10:054013. [PMID: 16292973 DOI: 10.1117/1.2113187] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/05/2023]
Abstract
We present an automated method to perform accurate, rapid, and objective measurement of the blood oxygen saturation over each segment of the retinal vascular hierarchy from dual-wavelength fundus images. Its speed and automation (2 s per entire image versus 20 s per segment for manual methods) enable detailed level-by-level measurements over wider areas. An automated tracing algorithm is used to estimate vessel centerlines, thickness, directions, and locations of landmarks such as bifurcations and crossover points. The hierarchical structure of the vascular network is recovered from the trace fragments and landmarks by a novel algorithm. Optical densities (OD) are measured from vascular segments using the minimum reflected intensities inside and outside the vessel. The OD ratio (ODR = OD600/OD570) bears an inverse relationship to systemic HbO2 saturation (SO2). The sensitivity for detecting saturation change when breathing air versus pure oxygen was calculated from measurements made on six subjects and was found to be 0.0226 ODR units, which is in good agreement with previous manual measurements by the dual-wavelength technique, indicating the validity of the automation. A fully automated system for retinal vessel oximetry would prove useful for early assessment of risk for progression of disease conditions associated with oxygen utilization.
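The optical-density-ratio computation lends itself to a short worked example. The sketch below assumes the minimum reflected intensities inside and outside a vessel segment have already been measured at both wavelengths; the intensity values used are arbitrary illustrations.

```python
# Sketch of the ODR computation described above.
import numpy as np

def optical_density(i_vessel_min, i_background_min):
    """OD = log10(I_out / I_in) for one wavelength."""
    return np.log10(i_background_min / i_vessel_min)

def odr(i600_in, i600_out, i570_in, i570_out):
    """ODR = OD600 / OD570; inversely related to SO2 per the abstract."""
    return optical_density(i600_in, i600_out) / optical_density(i570_in, i570_out)

if __name__ == "__main__":
    # Illustrative intensity values only (arbitrary units).
    print("ODR =", round(odr(i600_in=110.0, i600_out=150.0,
                             i570_in=60.0, i570_out=145.0), 3))
```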
|
137
|
Abdul-Karim MA, Roysam B, Dowell-Mesfin NM, Jeromin A, Yuksel M, Kalyanaraman S. Automatic selection of parameters for vessel/neurite segmentation algorithms. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2005; 14:1338-50. [PMID: 16190469 DOI: 10.1109/tip.2005.852462] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
An automated method is presented for selecting optimal parameter settings for vessel/neurite segmentation algorithms using the minimum description length principle and a recursive random search algorithm. It trades off a probabilistic measure of image-content coverage against its conciseness. It enables nonexpert users to select parameter settings objectively, without knowledge of the underlying algorithms, broadening the applicability of the segmentation algorithm and delivering higher morphometric accuracy. It enables adaptation of parameters across batches of images. It simplifies the user interface to just one optional parameter and reduces the cost of technical support. Finally, the method is modular, extensible, and amenable to parallel computation. The method is applied to 223 images of human retinas and cultured neurons, from four different sources, using a single segmentation algorithm with eight parameters. Improvements in segmentation quality compared to default settings using 1000 iterations ranged from 4.7% to 21%. Paired t-tests showed that improvements are statistically significant (p < 0.0005). Most of the improvement occurred in the first 44 iterations. Improvements in description lengths and agreement with the ground truth were strongly correlated (p = 0.78).
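As a sketch of parameter selection by searching over a description-length-style objective (coverage traded against conciseness), the code below uses a plain random search and a toy stand-in objective; it is not the paper's MDL formulation or its recursive random search, and the bounds, iteration count, and toy optimum are assumptions.

```python
# Hedged sketch: pick segmentation parameters by random search over a
# description-length-like objective. Toy objective, illustrative only.
import numpy as np

def description_length(params, run_segmentation):
    """Smaller is better: penalize both missed image content and model size."""
    coverage_bits, model_bits = run_segmentation(params)
    return coverage_bits + model_bits

def random_search(bounds, run_segmentation, iterations=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds).T
    best_p, best_dl = None, np.inf
    for _ in range(iterations):
        p = rng.uniform(lo, hi)
        dl = description_length(p, run_segmentation)
        if dl < best_dl:
            best_p, best_dl = p, dl
    return best_p, best_dl

if __name__ == "__main__":
    # Toy segmentation stand-in with a known optimum near (2.0, 0.5).
    def toy_segmentation(p):
        coverage = 50 * (p[0] - 2.0) ** 2
        conciseness = 80 * (p[1] - 0.5) ** 2
        return coverage, conciseness
    best, dl = random_search([(0.0, 5.0), (0.0, 1.0)], toy_segmentation)
    print("best parameters:", np.round(best, 2), "objective:", round(dl, 3))
```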
|
138
|
Matsopoulos GK, Asvestas PA, Mouravliansky NA, Delibasis KK. Multimodal registration of retinal images using self organizing maps. IEEE TRANSACTIONS ON MEDICAL IMAGING 2004; 23:1557-1563. [PMID: 15575412 DOI: 10.1109/tmi.2004.836547] [Citation(s) in RCA: 33] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/24/2023]
Abstract
In this paper, an automatic method for registering multimodal retinal images is presented. The method consists of three steps: vessel centerline detection and extraction of bifurcation points in the reference image only, automatic correspondence of bifurcation points in the two images using a novel implementation of self-organizing maps, and extraction of the parameters of the affine transform using the previously obtained correspondences. The proposed registration algorithm was tested on 24 multimodal retinal pairs, and the obtained results show an advantageous performance in terms of accuracy with respect to manual registration.
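Once bifurcation-point correspondences are available, the final step reduces to a linear least-squares estimate of the affine parameters. The sketch below shows only that step on synthetic correspondences; the SOM matching itself is not reproduced, and the point coordinates and noise level are assumptions.

```python
# Minimal sketch: estimate a 2-D affine transform from corresponded
# landmark points by linear least squares.
import numpy as np

def estimate_affine(src, dst):
    """Solve dst ≈ [x, y, 1] @ P for the 6 affine parameters."""
    ones = np.ones((len(src), 1))
    X = np.hstack([src, ones])                          # (N, 3)
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)    # (3, 2)
    return params.T                                     # 2x3 affine matrix

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    src = rng.uniform(0, 512, (10, 2))
    A_true = np.array([[0.98, 0.05, 12.0], [-0.04, 1.01, -7.0]])
    dst = src @ A_true[:, :2].T + A_true[:, 2] + rng.normal(0, 0.3, (10, 2))
    print("estimated affine:\n", np.round(estimate_affine(src, dst), 3))
```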
|
139
|
Mahadevan V, Narasimha-Iyer H, Roysam B, Tanenbaum HL. Robust Model-Based Vasculature Detection in Noisy Biomedical Images. IEEE Trans Inf Technol Biomed 2004; 8:360-76. [PMID: 15484442 DOI: 10.1109/titb.2004.834410] [Citation(s) in RCA: 60] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
This paper presents a set of algorithms for robust detection of vasculature in noisy retinal video images. Three methods are studied for effective handling of outliers. The first method is based on Huber's censored likelihood ratio test. The second is based on the use of an alpha-trimmed test statistic. The third is based on robust model selection algorithms. All of these algorithms rely on a mathematical model for the vasculature that accounts for the expected variations in intensity/texture profile, width, orientation, scale, and imaging noise. These unknown parameters are estimated implicitly within a robust detection and estimation framework. The proposed algorithms are also useful as nonlinear vessel enhancement filters. The proposed algorithms were evaluated over carefully constructed phantom images, where the ground truth is known a priori, as well as clinically recorded images for which the ground truth was manually compiled. A comparative evaluation of the proposed approaches is presented. Collectively, these methods outperformed prior approaches based on Chaudhuri et al. (1989) matched filtering, as well as the verification methods used by prior exploratory tracing algorithms, such as the work of Can et al. (1999). The Huber censored likelihood test yielded the best overall improvement, with a 145.7% improvement over the exploratory tracing algorithm, and a 43.7% improvement in detection rates over the matched filter.
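As an illustration of the alpha-trimmed statistic mentioned above, the sketch below discards the most extreme fraction of samples before averaging so that a few outlier pixels cannot dominate a vessel-profile test. It is a generic illustration, not the paper's detector; the profile values and trimming fraction are assumptions.

```python
# Hedged sketch of an alpha-trimmed mean used as a robust test statistic.
import numpy as np

def alpha_trimmed_mean(samples, alpha=0.1):
    """Mean of the samples after removing the lowest and highest alpha fraction."""
    x = np.sort(np.asarray(samples, dtype=float))
    k = int(np.floor(alpha * len(x)))
    trimmed = x[k:len(x) - k] if k > 0 else x
    return trimmed.mean()

if __name__ == "__main__":
    profile = np.array([10, 11, 9, 10, 12, 11, 10, 95.0])   # one gross outlier
    print("ordinary mean:     ", round(profile.mean(), 2))
    print("alpha-trimmed mean:", round(alpha_trimmed_mean(profile, 0.15), 2))
```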
|
140
|
Trelles MA, Allones I, Martín-Vázquez MJ, Trelles O, Vélez M, Mordon S. Long pulse Nd:YAG laser for treatment of leg veins in 40 patients with assessments at 6 and 12 months. Lasers Surg Med 2004; 35:68-76. [PMID: 15278931 DOI: 10.1002/lsm.20038] [Citation(s) in RCA: 29] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
BACKGROUND AND OBJECTIVES This study assessed subjectively and objectively the efficacy of a long-pulsed Nd:YAG laser system in clearing dermal leg veins, successful treatment of which remains problematic. STUDY DESIGN/PATIENTS AND METHODS Forty female patients (24-58 years old, skin types II-IV) with leg veins were treated with synchronized micropulses from a long-pulsed 1,064 nm Nd:YAG laser, 6 mm diameter spot size, 130 and 140 J/cm2. One to three treatments were given at 6-week intervals, with post-treatment assessments at 6 and 12 months. Patients assessed improvement subjectively with a satisfaction index (SI). Objective assessment was based on clinical photography and, in addition, on computer-generated data from a Canny operator-based edge-detection program. RESULTS Patient satisfaction rates at the 6- and 12-month assessments were 42.5% and 57.5%, while the objective assessments were 75% and 82.5%, respectively. CONCLUSIONS The long-pulsed Nd:YAG laser offered efficient treatment of leg veins. Side effects were minimal and transient. The edge-detection program may help patients better appreciate the actual results of the treatment.
|
141
|
Al-Kofahi KA, Can A, Lasek S, Szarowski DH, Dowell-Mesfin N, Shain W, Turner JN, Roysam B. Median-based robust algorithms for tracing neurons from noisy confocal microscope images. IEEE Trans Inf Technol Biomed 2004; 7:302-17. [PMID: 15000357 DOI: 10.1109/titb.2003.816564] [Citation(s) in RCA: 75] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
This paper presents a method to exploit rank statistics to improve fully automatic tracing of neurons from noisy digital confocal microscope images. Previously proposed exploratory tracing (vectorization) algorithms work by recursively following the neuronal topology, guided by responses of multiple directional correlation kernels. These algorithms were found to fail when the data was of lower quality (noisier, less contrast, weak signal, or more discontinuous structures). This type of data is commonly encountered in the study of neuronal growth on microfabricated surfaces. We show that partitioning the correlation kernels in the tracing algorithm into multiple subkernels and using the median of their responses as the guiding criterion improves the tracing precision from 41% to 89% for low-quality data, with a 5% improvement in recall. Improved handling was observed for artifacts such as discontinuities and/or hollowness of structures. The new algorithms require slightly higher amounts of computation, but are still acceptably fast, typically consuming less than 2 seconds on a personal computer (Pentium III, 500 MHz, 128 MB). They produce labeling for all somas present in the field, and a graph-theoretic representation of all dendritic/axonal structures that can be edited. Topological and size measurements such as area, length, and tortuosity are derived readily. The efficiency, accuracy, and fully automated nature of the proposed method makes it attractive for large-scale applications such as high-throughput assays in the pharmaceutical industry, and the study of neuron growth on nano/micro-fabricated structures. A careful quantitative validation of the proposed algorithms is provided against manually derived tracing, using a performance measure that combines the precision and recall metrics.
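The core robustness idea, splitting a directional correlation kernel into subkernels and using the median of their responses so a local discontinuity cannot pull the whole response down, can be sketched as follows. The kernel construction and toy image are simplified illustrations, not the published tracing algorithm.

```python
# Hedged sketch of median-of-subkernel responses along a candidate vessel
# direction. Kernel here is just piecewise intensity averaging on a ray.
import numpy as np

def subkernel_responses(image, center, direction, length=12, n_sub=4):
    """Average intensity along n_sub consecutive pieces of a ray from center."""
    cy, cx = center
    dy, dx = direction
    steps_per_sub = length // n_sub
    responses = []
    for s in range(n_sub):
        vals = []
        for t in range(s * steps_per_sub, (s + 1) * steps_per_sub):
            y = int(round(cy + t * dy))
            x = int(round(cx + t * dx))
            vals.append(image[y, x])
        responses.append(np.mean(vals))
    return np.array(responses)

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[32, 10:50] = 1.0          # a horizontal "vessel"
    img[32, 18:21] = 0.0          # a short discontinuity
    r = subkernel_responses(img, center=(32, 12), direction=(0, 1))
    print("subkernel responses:", np.round(r, 2))
    print("mean-based score:", round(r.mean(), 2), " median-based score:", round(np.median(r), 2))
```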
Affiliation(s)
- Khalid A Al-Kofahi
- ECSE Department Rensselaer Polytechnic Institute, Troy, NY 12180-3590, USA
|
142
|
Dowell-Mesfin NM, Abdul-Karim MA, Turner AMP, Schanz S, Craighead HG, Roysam B, Turner JN, Shain W. Topographically modified surfaces affect orientation and growth of hippocampal neurons. J Neural Eng 2004; 1:78-90. [PMID: 15876626 DOI: 10.1088/1741-2560/1/2/003] [Citation(s) in RCA: 185] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
Abstract
Extracellular matrix molecules provide biochemical and topographical cues that influence cell growth in vivo and in vitro. Effects of topographical cues on hippocampal neuron growth were examined after 14 days in vitro. Neurons from hippocampi of rat embryos were grown on poly-L-lysine-coated silicon surfaces containing fields of pillars with varying geometries. Photolithography was used to fabricate 1 µm high pillar arrays with different widths and spacings. βIII-tubulin and MAP-2 immunocytochemistry and scanning electron microscopy were used to describe neuronal processes. Automated two-dimensional tracing software quantified process orientation and length. Process growth on smooth surfaces was random, while growth on pillared surfaces exhibited the most faithful alignment to pillar geometries with smallest gap sizes. Neurite lengths were significantly longer on pillars with the smallest inter-pillar spacings (gaps) and 2 µm pillar widths. These data indicate that physical cues affect neuron growth, suggesting that extracellular matrix topography may contribute to cell growth and differentiation. These results demonstrate new strategies for directing and promoting neuronal growth that will facilitate studies of synapse formation and function and provide methods to establish defined neural networks.
|
143
|
Tsai CL, Stewart CV, Tanenbaum HL, Roysam B. Model-Based Method for Improving the Accuracy and Repeatability of Estimating Vascular Bifurcations and Crossovers From Retinal Fundus Images. IEEE Trans Inf Technol Biomed 2004; 8:122-30. [PMID: 15217257 DOI: 10.1109/titb.2004.826733] [Citation(s) in RCA: 95] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
A model-based algorithm, termed exclusion region and position refinement (ERPR), is presented for improving the accuracy and repeatability of estimating the locations where vascular structures branch and cross over, in the context of human retinal images. The goal is twofold. First, accurate morphometry of branching and crossover points (landmarks) in neuronal/vascular structure is important to several areas of biology and medicine. Second, these points are valuable as landmarks for image registration, so improved accuracy and repeatability in estimating their locations and signatures leads to more reliable image registration for applications such as change detection and mosaicing. The ERPR algorithm is shown to reduce the median location error from 2.04 pixels down to 1.1 pixels, while improving the median spread (a measure of repeatability) from 2.09 pixels down to 1.05 pixels. Errors in estimating vessel orientations were similarly reduced from 7.2 degrees down to 3.8 degrees.
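One ingredient of landmark position refinement can be illustrated by estimating a branching or crossover location as the least-squares intersection of the vessel centerlines that meet there. This is a generic sketch of that idea, not the ERPR algorithm; the points, directions, and noise are synthetic assumptions.

```python
# Hedged sketch: least-squares intersection of several centerlines as a
# refined landmark estimate. Each line is given by a point and direction.
import numpy as np

def least_squares_intersection(points, directions):
    """points[i] lies on line i; directions[i] is its direction vector."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for p, d in zip(points, directions):
        d = np.asarray(d, dtype=float)
        d /= np.linalg.norm(d)
        P = np.eye(2) - np.outer(d, d)     # projector onto the line's normal
        A += P
        b += P @ np.asarray(p, dtype=float)
    return np.linalg.solve(A, b)

if __name__ == "__main__":
    # Three noisy centerlines nominally meeting near (100, 100).
    pts = [(80.0, 100.5), (100.4, 80.0), (115.0, 114.6)]
    dirs = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
    print("refined landmark:", np.round(least_squares_intersection(pts, dirs), 2))
```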
|
144
|
Tyrrell JA, LaPre JM, Carothers CD, Roysam B, Stewart CV. Efficient Migration of Complex Off-Line Computer Vision Software to Real-Time System Implementation on Generic Computer Hardware. IEEE Trans Inf Technol Biomed 2004; 8:142-53. [PMID: 15217259 DOI: 10.1109/titb.2004.828883] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
This paper addresses the problem of migrating large and complex computer vision code bases that have been developed off-line into efficient real-time implementations, avoiding the need to rewrite the software and the associated costs. Creative linking strategies based on Linux loadable kernel modules are presented to create a simultaneous realization of real-time and off-line frame-rate computer vision systems from a single code base. In this approach, systemic predictability is achieved by inserting time-critical components of a user-level executable directly into the kernel as a virtual device driver. This effectively emulates a single process space model that is nonpreemptable, nonpageable, and has direct access to a powerful set of system-level services. This overall approach is shown to provide the basis for building a predictable frame-rate vision system using commercial off-the-shelf hardware and a standard uniprocessor Linux operating system. Experiments on a frame-rate vision system designed for computer-assisted laser retinal surgery show that this method reduces the variance of observed per-frame central processing unit cycle counts by two orders of magnitude. The conclusion is that when predictable application algorithms are used, it is possible to efficiently migrate to a predictable frame-rate computer vision system.
|
145
|
Wang Q, Zeng YJ, Huo P, Hu JL, Zhang JH. A specialized plug-in software module for computer-aided quantitative measurement of medical images. Med Eng Phys 2004; 25:887-92. [PMID: 14630476 DOI: 10.1016/s1350-4533(03)00114-0] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
Abstract
This paper presents a specialized system for quantitative measurement of medical images. Using Visual C++, we developed computer-aided software based on Image-Pro Plus (IPP), a software development platform. When transferred to the hard disk of a computer by an MVPCI-V3A frame grabber, medical images can be automatically processed by our own IPP plug-in for immunohistochemical analysis, cytomorphological measurement, and blood vessel segmentation. In 34 clinical studies, the system has shown high stability, reliability, and ease of use.
Affiliation(s)
- Q Wang
- Biomedical Engineering Center, Beijing University of Technology, Beijing 100022, China
|
146
|
Meijering E, Jacob M, Sarria JCF, Steiner P, Hirling H, Unser M. Design and validation of a tool for neurite tracing and analysis in fluorescence microscopy images. Cytometry A 2004; 58:167-76. [PMID: 15057970 DOI: 10.1002/cyto.a.20022] [Citation(s) in RCA: 1117] [Impact Index Per Article: 53.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/10/2023]
Abstract
BACKGROUND For the investigation of the molecular mechanisms involved in neurite outgrowth and differentiation, accurate and reproducible segmentation and quantification of neuronal processes are a prerequisite. To facilitate this task, we developed a semiautomatic neurite tracing technique. This article describes the design and validation of the technique. METHODS The technique was compared to fully manual delineation. Four observers repeatedly traced selected neurites in 20 fluorescence microscopy images of cells in culture, using both methods. Accuracy and reproducibility were determined by comparing the tracings to high-resolution reference tracings, using two error measures. Labor intensiveness was measured in numbers of mouse clicks required. The significance of the results was determined by a Student t-test and by analysis of variance. RESULTS Both methods slightly underestimated the true neurite length, but the differences were not unanimously significant. The average deviation from the true neurite centerline was a factor of 2.6 smaller with the developed technique than with fully manual tracing. Intraobserver variability in the respective measures was reduced by factors of 6.0 and 23.2, interobserver variability by factors of 2.4 and 8.8, respectively, and labor intensiveness by a factor of 3.3. CONCLUSIONS Providing similar accuracy in measuring neurite length, significantly improved accuracy in neurite centerline extraction, and significantly improved reproducibility and reduced labor intensiveness, the developed technique may replace fully manual tracing methods.
Affiliation(s)
- E Meijering
- Department of Medical Informatics, Erasmus MC-University Medical Center Rotterdam, Rotterdam, The Netherlands.
|
147
|
Staal J, Abràmoff MD, Niemeijer M, Viergever MA, van Ginneken B. Ridge-based vessel segmentation in color images of the retina. IEEE TRANSACTIONS ON MEDICAL IMAGING 2004; 23:501-9. [PMID: 15084075 DOI: 10.1109/tmi.2004.825627] [Citation(s) in RCA: 1199] [Impact Index Per Article: 57.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/21/2023]
Abstract
A method is presented for automated segmentation of vessels in two-dimensional color images of the retina. This method can be used in computer analyses of retinal images, e.g., in automated screening for diabetic retinopathy. The system is based on extraction of image ridges, which coincide approximately with vessel centerlines. The ridges are used to compose primitives in the form of line elements. With the line elements an image is partitioned into patches by assigning each image pixel to the closest line element. Every line element constitutes a local coordinate frame for its corresponding patch. For every pixel, feature vectors are computed that make use of properties of the patches and the line elements. The feature vectors are classified using a kNN classifier and sequential forward feature selection. The algorithm was tested on a database consisting of 40 manually labeled images. The method achieves an area under the receiver operating characteristic curve of 0.952. The method is compared with two recently published rule-based methods of Hoover et al. and Jiang et al. The results show that our method is significantly better than the two rule-based methods (p < 0.01). The accuracy of our method is 0.944 versus 0.947 for a second observer.
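A minimal sketch of the classification stage: label per-pixel feature vectors as vessel or background with a kNN classifier by majority vote. The ridge/line-element feature extraction and the sequential forward feature selection are not reproduced here; the two-dimensional feature vectors below are random stand-ins.

```python
# Hedged sketch of kNN pixel classification over feature vectors.
import numpy as np

def knn_classify(train_X, train_y, query_X, k=5):
    labels = []
    for q in query_X:
        d = np.linalg.norm(train_X - q, axis=1)      # Euclidean distances
        nearest = train_y[np.argsort(d)[:k]]
        labels.append(np.bincount(nearest).argmax()) # majority vote
    return np.array(labels)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    vessel = rng.normal([1.0, 1.0], 0.3, (50, 2))        # stand-in feature vectors
    background = rng.normal([-1.0, -1.0], 0.3, (50, 2))
    X = np.vstack([vessel, background])
    y = np.array([1] * 50 + [0] * 50)
    queries = np.array([[0.9, 1.1], [-0.8, -1.2], [0.1, 0.0]])
    print("predicted labels:", knn_classify(X, y, queries, k=7))
```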
Affiliation(s)
- Joes Staal
- Image Sciences Institute, University Medical Center Utrecht, Heidelberglaan 100, E.01.335, 3584 CX, Utrecht, The Netherlands.
|
148
|
|
149
|
Stewart CV, Lee YL, Tsai CL. An Uncertainty-Driven Hybrid of Intensity-Based and Feature-Based Registration with Application to Retinal and Lung CT Images. Lect Notes Comput Sci (MICCAI) 2004. [DOI: 10.1007/978-3-540-30135-6_106] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/20/2023]
|
150
|
Lin G, Stewart CV, Roysam B, Fritzsche K, Yang G, Tanenbaum HL. Predictive Scheduling Algorithms for Real-Time Feature Extraction and Spatial Referencing: Application to Retinal Image Sequences. IEEE Trans Biomed Eng 2004; 51:115-25. [PMID: 14723500 DOI: 10.1109/tbme.2003.820332] [Citation(s) in RCA: 25] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Real-time spatial referencing is an important alternative to tracking for designing spatially aware ophthalmic instrumentation for procedures such as laser photocoagulation and perimetry. It requires independent, fast registration of each image frame from a digital video stream (1024 x 1024 pixels) to a spatial map of the retina. Recently, we have introduced a spatial referencing algorithm that works in three primary steps: 1) tracing the retinal vasculature to extract image features (landmarks); 2) invariant indexing to generate hypothesized landmark correspondences and initial transformations; and 3) alignment and verification steps to robustly estimate a 12-parameter quadratic spatial transformation between the image frame and the map. The goal of this paper is to introduce techniques to minimize the amount of computation for successful spatial referencing. The fundamental driving idea is to make feature extraction subservient to registration and, therefore, only produce the information needed for verified, accurate transformations. To this end, the image is analyzed along one-dimensional, vertical and horizontal grid lines to produce a regular sampling of the vasculature, needed for step 3) and to initiate step 1). Tracing of the vasculature is then prioritized hierarchically to quickly extract landmarks and groups (constellations) of landmarks for indexing. Finally, the tracing and spatial referencing computations are integrated so that landmark constellations found by tracing are tested immediately. The resulting implementation is an order of magnitude faster with the same success rate. The average total computation time is 31.2 ms per image on a 2.2-GHz Pentium Xeon processor.
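A 12-parameter quadratic transformation of the kind referred to above maps each output coordinate through a quadratic polynomial in (x, y), six coefficients per coordinate. The sketch below only applies such a transform with illustrative coefficients; estimating them from landmark correspondences is the job of the registration step and is not shown.

```python
# Hedged sketch: apply a 12-parameter quadratic image-to-map transform.
import numpy as np

def quadratic_basis(x, y):
    return np.array([1.0, x, y, x * x, x * y, y * y])

def apply_quadratic(params, points):
    """params: (2, 6) coefficient matrix; points: (N, 2) pixel coordinates."""
    return np.array([params @ quadratic_basis(x, y) for x, y in points])

if __name__ == "__main__":
    # Near-identity transform with a small quadratic correction (illustrative).
    params = np.array([[2.0, 1.0, 0.0, 1e-4, 0.0, 0.0],
                       [-1.5, 0.0, 1.0, 0.0, 0.0, 1e-4]])
    pts = np.array([[0.0, 0.0], [512.0, 512.0], [100.0, 400.0]])
    print(np.round(apply_quadratic(params, pts), 2))
```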
Affiliation(s)
- Gang Lin
- Rensselaer Polytechnic Institute, Troy, NY 12180, USA
|