1. Hou Z, Al-Atabany W, Farag R, Vuong QC, Mokhov A, Degenaar P. A scalable data transmission scheme for implantable optogenetic visual prostheses. J Neural Eng 2020; 17:055001. DOI: 10.1088/1741-2552/abaf2e
2. Costela FM, Woods RL. The Impact of Field of View on Understanding of a Movie Is Reduced by Magnifying Around the Center of Interest. Transl Vis Sci Technol 2020; 9:6. PMID: 32855853. PMCID: PMC7422781. DOI: 10.1167/tvst.9.8.6
Abstract
Purpose Magnification is commonly used to reduce the impact of impaired central vision. However, magnification limits the field of view (FoV), which may make it difficult to follow the story. Most people with normal vision look in about the same place at about the same time, the center of interest (COI), when watching “Hollywood” movies. We hypothesized that, as the FoV was reduced, a view centered on the COI would provide more useful information than either the original image center or an unrelated view location (the COI locations from a different video clip). Methods The FoV was varied between 100% (original) and 3%. To measure video comprehension as the FoV was reduced, subjects described 30-second video clips in response to two open-ended questions. A computational, natural-language approach was used to provide an information acquisition (IA) score. Results IA scores fell as the FoV decreased. When the FoV was centered on the COI, subjects understood the content of the video clips better (higher IA scores) at reduced FoVs than in the other conditions. Thus, magnification around the COI may serve as a better video enhancement approach than simple magnification of the image center. Conclusions These results have implications for future image processing and scene viewing, which may help people with central vision loss view directed dynamic visual content (“Hollywood” movies). Translational Relevance Our results are promising for the use of magnification around the COI as a vision rehabilitation aid for people with central vision loss.
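The COI-centered cropping described in this abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' pipeline: crop a FoV-sized window centred on the COI (clamped to the image bounds), then upscale it back to the original resolution by nearest-neighbour repetition. The function name and parameters are assumptions.

```python
def magnify_around_coi(image, coi_row, coi_col, fov_frac):
    """Crop a window of size fov_frac around the COI and magnify it back
    to the original image size (nearest-neighbour upscaling)."""
    rows, cols = len(image), len(image[0])
    h = max(1, int(rows * fov_frac))
    w = max(1, int(cols * fov_frac))
    # clamp the window so it stays inside the image
    top = min(max(coi_row - h // 2, 0), rows - h)
    left = min(max(coi_col - w // 2, 0), cols - w)
    crop = [r[left:left + w] for r in image[top:top + h]]
    # nearest-neighbour upscale back to the original resolution
    return [[crop[r * h // rows][c * w // cols] for c in range(cols)]
            for r in range(rows)]
```

Compared with magnifying the fixed image center, only the window placement changes; the magnification factor (1 / fov_frac) is the same.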
Affiliation(s)
- Francisco M Costela
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA, USA; Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
- Russell L Woods
- Schepens Eye Research Institute, Massachusetts Eye and Ear, Boston, MA, USA; Department of Ophthalmology, Harvard Medical School, Boston, MA, USA
3. Montazeri L, El Zarif N, Trenholm S, Sawan M. Optogenetic Stimulation for Restoring Vision to Patients Suffering From Retinal Degenerative Diseases: Current Strategies and Future Directions. IEEE Trans Biomed Circuits Syst 2019; 13:1792-1807. PMID: 31689206. DOI: 10.1109/tbcas.2019.2951298
Abstract
Optogenetic strategies for vision restoration involve photosensitizing surviving retinal neurons following retinal degeneration, using emerging optogenetic techniques. This approach opens the door to a minimally-invasive retinal vision restoration approach. Moreover, light stimulation has the potential to offer better spatial and temporal resolution than conventional retinal electrical prosthetics. Although proof-of-concept studies in animal models have demonstrated the possibility of restoring vision using optogenetic techniques, and initial clinical trials are underway, there are still hurdles to pass before such an approach restores naturalistic vision in humans. One limitation is the development of light stimulation devices to activate optogenetic channels in the retina. Here we review recent progress in the design and implementation of optogenetic stimulation devices and outline the corresponding technological challenges. Finally, while most work to date has focused on providing therapy to patients suffering from retinitis pigmentosa, we provide additional insights into strategies for applying optogenetic vision restoration to patients suffering from age-related macular degeneration.
4.
Abstract
SIGNIFICANCE For people with limited vision, wearable displays hold the potential to digitally enhance visual function. As these display technologies advance, it is important to understand their promise and limitations as vision aids. PURPOSE The aim of this study was to test the potential of a consumer augmented reality (AR) device for improving the functional vision of people with near-complete vision loss. METHODS An AR application that translates spatial information into high-contrast visual patterns was developed. Two experiments assessed the efficacy of the application to improve vision: an exploratory study with four visually impaired participants and a main controlled study with participants with simulated vision loss (n = 48). In both studies, performance was tested on a range of visual tasks (identifying the location, pose, and gesture of a person; identifying objects; and moving around in an unfamiliar space). Participants' accuracy and confidence on these tasks were compared with and without augmented vision, along with their subjective reports on ease of mobility. RESULTS In the main study, the AR application was associated with substantially improved accuracy and confidence in object recognition (all P < .001) and, to a lesser degree, in gesture recognition (P < .05). There was no significant change in performance on identifying body poses or in subjective assessments of mobility, as compared with a control group. CONCLUSIONS Consumer AR devices may soon be able to support applications that improve the functional vision of users for some tasks. In our study, both artificially impaired participants and participants with near-complete vision loss performed tasks that they could not do without the AR system. Current limitations in system performance and form factor, as well as the risk of overconfidence, will need to be overcome.
5. Zarif NE, Montazeri L, Sawan M. Real-Time Retinal Processing for High-Resolution Optogenetic Stimulation Device. Annu Int Conf IEEE Eng Med Biol Soc 2018; 2018:5946-5949. PMID: 30441690. DOI: 10.1109/embc.2018.8513692
Abstract
We present in this paper an image processing technique called Circular Distortion and Motion Compensation (CDMC) that performs real-time retinal processing with geometric compensation for the ring-structure arrangement of bipolar and ganglion cells in the retina. The system ran on a Raspberry Pi 3 embedded platform and achieved a respectable 12 frames per second on 640 × 480 live video captured from a webcam. The system emulates biological processes occurring in the retina, such as motion estimation and temporal filtering, while compensating for the radial shift of ganglion and bipolar cells in the human retina. The proposed algorithm is efficient enough to run in real time on battery-powered mobile hardware, making it well suited to high-resolution optogenetic stimulation devices that target the retina.
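One stage this abstract mentions, temporal filtering that emulates the change-sensitive response of retinal cells, can be sketched as a leaky temporal difference: each output frame is the current frame minus a decaying running average of past frames. This is an illustrative assumption, not the CDMC implementation; the `decay` constant and function name are invented for the example.

```python
def temporal_filter(frames, decay=0.7):
    """Return per-frame responses: current frame minus a leaky average of
    the past, so static regions fade and changing regions stand out."""
    avg = [[0.0] * len(frames[0][0]) for _ in frames[0]]
    out = []
    for frame in frames:
        # transient response: difference from the running average
        resp = [[frame[r][c] - avg[r][c] for c in range(len(frame[0]))]
                for r in range(len(frame))]
        # update the leaky average toward the current frame
        avg = [[decay * avg[r][c] + (1 - decay) * frame[r][c]
                for c in range(len(frame[0]))]
               for r in range(len(frame))]
        out.append(resp)
    return out
```

On a static scene the response decays toward zero, mirroring the retina's bias toward change rather than absolute luminance.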
6. Soltan A, Barrett JM, Maaskant P, Armstrong N, Al-Atabany W, Chaudet L, Neil M, Sernagor E, Degenaar P. A head mounted device stimulator for optogenetic retinal prosthesis. J Neural Eng 2018; 15:065002. PMID: 30156188. PMCID: PMC6372131. DOI: 10.1088/1741-2552/aadd55
Abstract
Objective. Our main objective is to demonstrate that compact high-radiance gallium nitride displays can be used with conventional virtual reality optics to stimulate an optogenetic retina. Hence, we aim to introduce a non-invasive approach to restore vision for people with conditions such as retinitis pigmentosa, where there is a remaining viable communication link between the retina and the visual cortex. Approach. We design and implement the headset using a high-density µLED matrix, a Raspberry Pi, a microcontroller from NXP, and a virtual reality lens. A test platform is then developed to evaluate the performance of the headset and the optical system. Furthermore, image simplification algorithms are used to simplify the scene to be sent to the retina. Moreover, in vivo evaluation of the genetically modified retina's response at different light intensities is discussed to prove the reliability of the proposed system. Main results. We demonstrate that, in keeping with regulatory guidance, the headset displays need to limit their luminance to 90 kcd m−2. We demonstrate an optical system with 5.75% efficiency which allows for 0.16 mW mm−2 irradiance on the retina within the regulatory guidance, but which is capable of an average peak irradiance of 1.35 mW mm−2. As this is lower than the commonly accepted threshold for channelrhodopsin-2, we demonstrate efficacy through an optical model of an eye onto a biological retina. Significance. We demonstrate a fully functional 8100-pixel headset system, including software and hardware, which can operate on a standard consumer battery for periods exceeding a 24 h recharge cycle. The headset is capable of delivering enough light to stimulate the genetically modified retinal cells while keeping the amount of light below the regulatory safety threshold.
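The reported power budget lends itself to a small sanity-check helper. The constants below come from this abstract (the 90 kcd m−2 luminance limit and the 5.75% optical efficiency); the linear scaling of retinal irradiance with display output is an illustrative assumption, not the authors' optical model, and the function names are invented.

```python
LUMINANCE_LIMIT_KCD_M2 = 90.0   # regulatory display luminance limit (from the abstract)
OPTICAL_EFFICIENCY = 0.0575     # 5.75% optical-system efficiency (from the abstract)

def retinal_irradiance(display_irradiance_mw_mm2: float) -> float:
    """Estimate retinal irradiance assuming a linear optical path
    (display output scaled by the measured optical efficiency)."""
    return display_irradiance_mw_mm2 * OPTICAL_EFFICIENCY

def within_guidance(display_luminance_kcd_m2: float) -> bool:
    """Check a display setting against the 90 kcd/m^2 guidance limit."""
    return display_luminance_kcd_m2 <= LUMINANCE_LIMIT_KCD_M2
```

A check like this belongs in the stimulator's control loop so that software bugs cannot drive the display past the safety ceiling.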
Affiliation(s)
- Ahmed Soltan
- School of Engineering, Newcastle University, Newcastle upon Tyne, NE1 7RU, United Kingdom
7. Extraspectral Imaging for Improving the Perceived Information Presented in Retinal Prosthesis. J Healthc Eng 2018; 2018:3493826. PMID: 29849997. PMCID: PMC5932423. DOI: 10.1155/2018/3493826
Abstract
Retinal prostheses are steadily improving as a clinical treatment for blindness caused by retinitis pigmentosa. However, despite continued exciting progress, the level of visual return is still very poor, and it is unlikely that users of these devices will stop being legally blind in the near future. It is therefore important to develop methods that maximise the transfer of useful information extracted from the visual scene. Such an approach can be achieved by digitally suppressing less important visual features and textures within the scene, producing a cartoon-like image. Furthermore, extravisual wavelengths such as infrared can be useful in deciding which information is optimal to present. In this paper, we therefore present a processing methodology that utilises information extracted from the infrared spectrum to assist in preprocessing the visual image prior to conversion to retinal information. We demonstrate how this allows for enhanced recognition and how it could be implemented for optogenetic forms of retinal prosthesis. The new approach has been quantitatively evaluated with volunteers, showing a 112% enhancement in object recognition over normal approaches.
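The idea of using the infrared channel to guide suppression can be illustrated with a toy sketch. This is an assumption about the general approach, not the paper's algorithm: an aligned infrared map gates which regions of the visible image survive, and the surviving pixels are quantised to a cartoon-like palette. The threshold, level count, and function name are all invented for the example.

```python
def ir_masked_simplify(visible, infrared, ir_threshold=0.5, quant_levels=4):
    """Keep visible-image regions where the aligned IR map is strong,
    quantising them to a coarse palette; suppress everything else."""
    rows, cols = len(visible), len(visible[0])
    out = []
    for r in range(rows):
        row = []
        for c in range(cols):
            if infrared[r][c] >= ir_threshold:
                # keep this region, but flatten it to a cartoon-like level
                level = min(int(visible[r][c] * quant_levels), quant_levels - 1)
                row.append(level / (quant_levels - 1))
            else:
                row.append(0.0)   # suppress low-importance background
        out.append(row)
    return out
```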
8. Soltan A, McGovern B, Drakakis E, Neil M, Maaskant P, Akhter M, Lee JS, Degenaar P. High Density, High Radiance µLED Matrix for Optogenetic Retinal Prostheses and Planar Neural Stimulation. IEEE Trans Biomed Circuits Syst 2017; 11:347-359. PMID: 28212099. DOI: 10.1109/tbcas.2016.2623949
Abstract
Optical neuron stimulation arrays are important for both in vitro biology and retinal prosthetic biomedical applications. Hence, in this work, we present an 8100-pixel high-radiance photonic stimulator. The chip module vertically combines custom-made gallium nitride µLEDs with a CMOS application-specific integrated circuit. This is designed with active pixels to ensure random access and to allow continuous illumination of all required pixels. The µLEDs were assembled on the chip using a solder-ball flip-chip bonding technique, which allowed for reliable and repeatable manufacture. We have evaluated the performance of the matrix by measuring static and dynamic power consumption, illumination, and the current drawn by each LED. We show that the power consumption is within a range suitable for portable use. Finally, the thermal behavior of the matrix was monitored, and the matrix proved thermally stable.
9. Barnes N, Scott AF, Lieby P, Petoe MA, McCarthy C, Stacey A, Ayton LN, Sinclair NC, Shivdasani MN, Lovell NH, McDermott HJ, Walker JG. Vision function testing for a suprachoroidal retinal prosthesis: effects of image filtering. J Neural Eng 2016; 13:036013. DOI: 10.1088/1741-2560/13/3/036013
10. Moshtael H, Aslam T, Underwood I, Dhillon B. High Tech Aids Low Vision: A Review of Image Processing for the Visually Impaired. Transl Vis Sci Technol 2015; 4:6. PMID: 26290777. DOI: 10.1167/tvst.4.4.6
Abstract
Recent advances in digital image processing provide promising methods for maximizing the residual vision of the visually impaired. This paper seeks to introduce this field to the readership and describe its current state as found in the literature. A systematic search revealed 37 studies that measure the value of image processing techniques for subjects with low vision. The techniques used are categorized according to their effect and the principal findings are summarized. The majority of participants preferred enhanced images over the original for a wide range of enhancement types. Adapting the contrast and spatial frequency content often improved performance at object recognition and reading speed, as did techniques that attenuate the image background and a technique that induced jitter. A lack of consistency in preference and performance measures was found, as well as a lack of independent studies. Nevertheless, the promising results should encourage further research in order to allow their widespread use in low-vision aids.
Affiliation(s)
- Howard Moshtael
- EPSRC Centre for Doctoral Training in Applied Photonics, Heriot-Watt University, UK
- Tariq Aslam
- Institute of Human Development, Faculty of Medical and Human Sciences, University of Manchester, UK; Honorary Professor of Vision Science and Interface Technologies, Heriot-Watt University, UK; Manchester Royal Eye Hospital, Central Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, UK
11. Kiral-Kornek FI, O'Sullivan-Greene E, Savage CO, McCarthy C, Grayden DB, Burkitt AN. Improved visual performance in letter perception through edge orientation encoding in a retinal prosthesis simulation. J Neural Eng 2014; 11:066002. PMID: 25307496. DOI: 10.1088/1741-2560/11/6/066002
Abstract
Objective. Stimulation strategies for retinal prostheses predominantly seek to encode image brightness values directly rather than edge orientations. Recent work suggests that oriented elliptical phosphenes may be generated by controlling interactions between neighboring electrodes. Based on this, we propose a novel stimulation strategy for prosthetic vision that extracts edge orientation information from the intensity image and encodes it as oriented elliptical phosphenes. We test the hypothesis that encoding edge orientation via oriented elliptical phosphenes leads to better alphabetic letter recognition than standard intensity-based encoding. Approach. We conducted a psychophysical study of simulated phosphene vision with 12 normally sighted volunteers. The two stimulation strategies were compared with variations of letter size, electrode drop-out, and spatial offsets of phosphenes. Main results. Mean letter recognition accuracy was significantly better with the proposed stimulation strategy (65%) than with direct grayscale encoding (47%). All examined parameters (stimulus size, phosphene dropout, and location shift) were found to influence performance, with significant two-way interactions between phosphene dropout and stimulus size and between phosphene dropout and phosphene location shift. The analysis delivers a model of perception performance. Significance. Displaying available directional information to an implant user may improve their visual performance. We present a model for designing a stimulation strategy under the constraints of existing retinal prostheses that can be exploited by retinal implant developers to strategically employ oriented phosphenes.
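The orientation-extraction step this abstract describes can be sketched with a standard Sobel operator. This is a minimal illustration, not the authors' encoder: the gradient direction of a local patch is computed, and the phosphene's elongation axis is set perpendicular to it (an edge runs at right angles to its intensity gradient). Function names are assumptions.

```python
import math

def gradient_angle(patch):
    """Sobel gradient direction (radians) for a 3x3 intensity patch."""
    gx = (patch[0][2] + 2 * patch[1][2] + patch[2][2]) \
       - (patch[0][0] + 2 * patch[1][0] + patch[2][0])
    gy = (patch[2][0] + 2 * patch[2][1] + patch[2][2]) \
       - (patch[0][0] + 2 * patch[0][1] + patch[0][2])
    return math.atan2(gy, gx)

def phosphene_axis(patch):
    """Major-axis orientation for the elliptical phosphene: the edge runs
    perpendicular to the gradient, so rotate by 90 degrees (mod pi)."""
    return (gradient_angle(patch) + math.pi / 2) % math.pi
```

For a vertical edge (dark left, bright right) the gradient points along +x, so the encoded ellipse is vertical, matching the underlying contour.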
Affiliation(s)
- F Isabell Kiral-Kornek
- NeuroEngineering Laboratory, Department of Electrical & Electronic Engineering, The University of Melbourne, Australia; Centre for Neural Engineering, The University of Melbourne, Australia; NICTA, c/- Department of Electrical & Electronic Engineering, The University of Melbourne, Australia
12. Barrett JM, Berlinguer-Palmini R, Degenaar P. Optogenetic approaches to retinal prosthesis. Vis Neurosci 2014; 31:345-54. PMID: 25100257. PMCID: PMC4161214. DOI: 10.1017/s0952523814000212
Abstract
The concept of visual restoration via retinal prosthesis arguably started in 1992 with the discovery that some retinal cells remain intact in those with retinitis pigmentosa. Two decades later, the first commercially available devices allow users to identify basic shapes. Such devices are still very far from returning vision beyond the threshold of legal blindness. Thus, there is considerable continued development of electrode materials, structures, and electronic control mechanisms to increase both resolution and contrast. In parallel, the field of optogenetics (the genetic photosensitization of neural tissue) holds particular promise for new approaches. Given that the eye is transparent, photosensitizing the remaining neural layers of the eye and illuminating from the outside could prove less invasive, cheaper, and more effective than present approaches. As we move toward human trials in the coming years, this review explores the core technological and biological challenges related to the gene therapy and the high-radiance optical stimulation requirement.
Affiliation(s)
- John Martin Barrett
- Institute of Neuroscience, Newcastle University, Newcastle upon Tyne, United Kingdom
- Patrick Degenaar
- School of EEE, Newcastle University, Newcastle upon Tyne, United Kingdom
13. Hicks SL, Wilson I, Muhammed L, Worsfold J, Downes SM, Kennard C. A depth-based head-mounted visual display to aid navigation in partially sighted individuals. PLoS One 2013; 8:e67695. PMID: 23844067. PMCID: PMC3701048. DOI: 10.1371/journal.pone.0067695
Abstract
Independent navigation for blind individuals can be extremely difficult due to the inability to recognise and avoid obstacles. Assistive techniques such as white canes, guide dogs, and sensory substitution provide a degree of situational awareness by relying on touch or hearing, but as yet there are no techniques that attempt to make use of any residual vision the individual is likely to retain. Residual vision can be restricted to awareness of the orientation of a light source, so any information presented on a wearable display would have to be limited and unambiguous. For improved situational awareness, i.e. for the detection of obstacles, displaying the size and position of nearby objects, rather than finer surface details, may be sufficient. To test whether a depth-based display could be used to navigate a small obstacle course, we built a real-time head-mounted display with a depth camera and software to detect the distance to nearby objects. Distance was represented as brightness on a low-resolution display positioned close to the eyes, without the benefit of focusing optics. A set of sighted participants were monitored as they learned to use this display to navigate the course. All were able to do so, and time and velocity rapidly improved with practice, with no increase in the number of collisions. In a second experiment, a cohort of severely sight-impaired individuals of varying aetiologies performed a search task using a similar low-resolution head-mounted display. The majority of participants were able to use the display to respond to objects in their central and peripheral fields at a similar rate to sighted controls. We conclude that the skill of using a depth-based display for obstacle avoidance can be rapidly acquired, and the simplified nature of the display may be appropriate for the development of an aid for sight-impaired individuals.
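The depth-to-brightness mapping at the heart of this display can be sketched in one small function. The near/far range and level count below are illustrative assumptions, not the authors' calibration: distances are clamped to a working range and mapped so that nearer objects render brighter on the low-resolution display.

```python
def depth_to_brightness(depth_m, near=0.5, far=4.0, levels=8):
    """Map a distance in metres to a display brightness level
    (levels-1 = brightest at `near`, 0 = dark at or beyond `far`)."""
    d = min(max(depth_m, near), far)      # clamp to the working range
    frac = (far - d) / (far - near)       # 1.0 at `near`, 0.0 at `far`
    return round(frac * (levels - 1))
```

Because the output is a handful of brightness levels rather than a full image, the scheme remains legible even to residual vision limited to coarse light perception.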
Affiliation(s)
- Stephen L Hicks
- The Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
14. Wiecek E, Jackson ML, Dakin SC, Bex P. Visual search with image modification in age-related macular degeneration. Invest Ophthalmol Vis Sci 2012; 53:6600-9. PMID: 22930725. DOI: 10.1167/iovs.12-10012
Abstract
PURPOSE AMD results in loss of central vision and a dependence on low-resolution peripheral vision. While many image enhancement techniques have been proposed, there is a lack of quantitative comparison of the effectiveness of enhancement. We developed a natural visual search task that uses patients' eye movements as a quantitative and functional measure of the efficacy of image modification. METHODS Eye movements of 17 patients (mean age = 77 years) with AMD were recorded while they searched for target objects in natural images. Eight different image modification methods were implemented and included manipulations of local image or edge contrast, color, and crowding. In a subsequent task, patients ranked their preference of the image modifications. RESULTS Within individual participants, there was no significant difference in search duration or accuracy across eight different image manipulations. When data were collapsed across all image modifications, a multivariate model identified six significant predictors for normalized search duration including scotoma size and acuity, as well as interactions among scotoma size, age, acuity, and contrast (P < 0.05). Additionally, an analysis of image statistics showed no correlation with search performance across all image modifications. Rank ordering of enhancement methods based on participants' preference revealed a trend that participants preferred the least modified images (P < 0.05). CONCLUSIONS There was no quantitative effect of image modification on search performance. A better understanding of low- and high-level components of visual search in natural scenes is necessary to improve future attempts at image enhancement for low vision patients. Different search tasks may require alternative image modifications to improve patient functioning and performance.
Affiliation(s)
- Emily Wiecek
- Massachusetts Eye and Ear Infirmary, 20 Staniford Street, Boston, MA 02118, USA
15. Al-Atabany W, McGovern B, Mehran K, Berlinguer-Palmini R, Degenaar P. A processing platform for optoelectronic/optogenetic retinal prosthesis. IEEE Trans Biomed Eng 2011; 60:781-91. PMID: 22127992. DOI: 10.1109/tbme.2011.2177498
Abstract
The field of retinal prosthesis has been steadily developing over the last two decades. Despite the many obstacles, clinical trials for electronic approaches are in progress and already demonstrating some success. Optogenetic/optoelectronic retinal prostheses may prove to have even greater capabilities. Although resolution is now moving beyond recognition of simple shapes, it will nevertheless remain poor compared to normal vision. If we define the aim as returning mobility and natural scene recognition to the patient, it is important to maximize the useful visual information we attempt to transfer. In this paper, we highlight a method to simplify the scene, perform spatial image compression, and then apply spike coding. We then show the potential for translation on standard consumer processors. The algorithms are applicable to all forms of visual prosthesis, but we particularly focus on optogenetic approaches.
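The spike-coding step this abstract mentions could, under a simple rate-code assumption, look like the sketch below: pixel intensity sets a firing rate, which is laid out as evenly spaced spike times within a stimulation window. The window length, maximum rate, and function name are illustrative, not the authors' scheme.

```python
def intensity_to_spike_times(intensity, window_ms=100.0, max_rate_hz=200.0):
    """Rate-code a normalised pixel intensity (0..1) as evenly spaced
    spike times (ms) within one stimulation window."""
    rate = max(0.0, min(1.0, intensity)) * max_rate_hz
    n = int(rate * window_ms / 1000.0)    # spikes expected in the window
    if n == 0:
        return []
    period = window_ms / n
    return [round(i * period, 3) for i in range(n)]
```

Each spike time would then gate a brief optical pulse from the stimulator, so brighter pixels drive more frequent pulses.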
Affiliation(s)
- Walid Al-Atabany
- Department of Biomedical Engineering, Helwan University, Helwan 11421, Egypt
16
|
Al-Atabany WI, Tong T, Degenaar PA. Improved content aware scene retargeting for retinitis pigmentosa patients. Biomed Eng Online 2010; 9:52. [PMID: 20846440 PMCID: PMC2949883 DOI: 10.1186/1475-925x-9-52] [Citation(s) in RCA: 14] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2010] [Accepted: 09/16/2010] [Indexed: 12/03/2022] Open
Abstract
Background In this paper we present a novel scene retargeting technique to reduce the visual scene while maintaining the size of key features. The algorithm is scalable for implementation on portable devices and thus has potential for augmented reality systems that provide visual support for people with tunnel vision. We therefore test the efficacy of our algorithm at shrinking the visual scene into the remaining field of view for these patients. Methods Simple spatial compression of visual scenes makes objects appear further away. We have therefore developed an algorithm that removes low-importance information while maintaining the size of significant features. Previous approaches in this field include seam carving, which removes low-importance seams from the scene, and shrinkability, which dynamically shrinks the scene according to a generated importance map. The former causes significant artifacts and the latter is inefficient. In this work we have developed a new algorithm combining the best aspects of both previous methods. In particular, our approach is to generate a shrinkability importance map using a seam-based approach and then use it to dynamically shrink the scene in a similar fashion to the shrinkability method. Importantly, we have implemented it so that it can be used in real time without prior knowledge of future frames. Results We have evaluated and compared our algorithm to the seam carving and image shrinkability approaches from content preservation and compression quality perspectives. Our technique was also evaluated in a trial with 20 participants with simulated tunnel vision. Results show the robustness of our method at reducing scenes by up to 50% with minimal distortion. We also demonstrate efficacy for those with simulated tunnel vision of 22 degrees of field of view or less.
Conclusions Our approach allows us to perform content-aware video resizing in real time, using only information from previous frames to avoid jitter. Our method also offers a clear benefit over ordinary resizing and even over other image retargeting methods. We show that the benefit derived from this algorithm is significant for patients with fields of view of 20° or less.
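The combined idea of an importance map driving non-uniform shrinking can be illustrated with a toy sketch. This is not the authors' implementation: per-column importance is derived from gradient energy (the quantity seam carving minimises), and the target width is then distributed proportionally to importance, so low-energy columns shrink more. Both function names are invented.

```python
def column_importance(image):
    """Sum of horizontal gradient magnitudes per column (simple energy map)."""
    rows, cols = len(image), len(image[0])
    imp = [0.0] * cols
    for r in range(rows):
        for c in range(cols):
            left = image[r][c - 1] if c > 0 else image[r][c]
            right = image[r][c + 1] if c < cols - 1 else image[r][c]
            imp[c] += abs(right - left)
    return imp

def shrink_widths(importance, target_cols):
    """Distribute the target width across columns proportionally to
    importance: high-energy columns keep their width, flat ones shrink."""
    total = sum(importance) or 1.0
    return [target_cols * w / total for w in importance]
```

Unlike discrete seam removal, the fractional widths shrink the scene smoothly, which is what lets the method avoid seam-carving artifacts while staying cheap enough for frame-by-frame use.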