1. Fan H, Xu L, Luo MR. Optimized principal component analysis for camera spectral sensitivity estimation. J Opt Soc Am A Opt Image Sci Vis 2023; 40:1515-1526. [PMID: 37707107] [DOI: 10.1364/josaa.492929]
Abstract
This paper describes the use of a weighted principal component analysis (PCA) method for camera spectral sensitivity estimation. A comprehensive set of spectral sensitivities of 111 cameras was collected from four publicly available databases. The spectral sensitivities in the database were weighted according to their similarity to those of the test camera, with similarity evaluated as the reciprocal of the predicted camera-response error. A set of dynamic principal components was thus generated from the weighted spectral sensitivity database and served as the basis functions for estimating the spectral sensitivities. The test stimuli included self-luminous colors from a multi-channel LED system and reflective colors from a color chart. The proposed method was tested in both simulated and practical experiments, and the results were compared with the classical PCA method, three commonly used basis-function methods (Fourier, polynomial, and radial bases), and a regularization method. The proposed method significantly improved the accuracy of spectral sensitivity estimation.
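A minimal single-channel sketch of the weighting-plus-PCA idea described above. Synthetic Gaussian curves and random stimuli stand in for the camera database and the LED/chart measurements; the 3-component basis and all curves are illustrative assumptions, not the paper's data.

```python
import numpy as np

rng = np.random.default_rng(0)
wl = np.linspace(400, 700, 31)                      # wavelength grid (nm)

# Synthetic "database" of single-channel spectral sensitivities:
# Gaussian-shaped curves with varying peaks/widths (stand-ins for real cameras).
def gauss(mu, sigma):
    return np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

database = np.array([gauss(mu, sg) for mu in range(500, 620, 10)
                                   for sg in (25, 35, 45)])

s_true = gauss(545, 30)                             # unknown test-camera sensitivity
E = rng.uniform(0, 1, (40, wl.size))                # SPDs of 40 training stimuli
r = E @ s_true                                      # observed camera responses

# Weight each database entry by the reciprocal of its response-prediction error.
err = np.linalg.norm(database @ E.T - r, axis=1)
w = 1.0 / (err + 1e-9)

# Weighted PCA: principal components of the reweighted, centered database.
mean = np.average(database, axis=0, weights=w)
X = (database - mean) * np.sqrt(w)[:, None]
_, _, Vt = np.linalg.svd(X, full_matrices=False)
B = Vt[:3].T                                        # first 3 basis functions

# Solve for the basis coefficients from the camera responses (least squares).
c, *_ = np.linalg.lstsq(E @ B, r - E @ mean, rcond=None)
s_est = mean + B @ c
```

Because the weights concentrate the basis on database entries resembling the test camera, a small number of components suffices for the reconstruction.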
2. Lecca M, Gianini G, Serapioni RP. Mathematical insights into the original Retinex algorithm for image enhancement. J Opt Soc Am A Opt Image Sci Vis 2022; 39:2063-2072. [PMID: 36520703] [DOI: 10.1364/josaa.471953]
Abstract
The Retinex theory, originally developed by Land and McCann as a computational model of human color sensation, has become, with time, a pillar of digital image enhancement. In this area, the Retinex algorithm is widely used to improve the quality of an input image by increasing the visibility of its content and details, enhancing its colorfulness, and weakening, or even removing, some undesired effects of the illumination. The algorithm was originally described by its creators as a sequence of image processing operations and was never fully formalized mathematically. Later works focusing on aspects of the original formulation, and adopting some of its principles, tried to frame the algorithm within a mathematical formalism: each attempt yielded a partial rendering of the model and resulted in several interesting model variants. The purpose of the present work is to fill a gap in the Retinex-related literature by providing a complete mathematical formalization of the original Retinex algorithm. The overarching goals are to provide mathematical insights into the Retinex theory, promote awareness of the use of the model within image enhancement, and enable a better appreciation of the differences and similarities with later models based on Retinex principles. For this purpose, we compare our model with others proposed in the literature, paying particular attention to the work published in 2005 by Provenzi and colleagues.
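For concreteness, a one-dimensional sketch of the ratio-threshold-reset chain at the core of the original path-based algorithm: along a path, successive intensity ratios are chained, small ratios are suppressed (so slow illumination gradients vanish), and the chain resets whenever the running product exceeds 1. The path layout, threshold value, and test signal below are illustrative assumptions.

```python
import numpy as np

def retinex_path(intensity, threshold=0.0):
    """Land-McCann style ratio-threshold-reset along a 1-D path.

    Returns per-pixel lightness estimates in (0, 1]: a chain of successive
    log-ratios, zeroed when below the threshold, and reset whenever the
    accumulated product exceeds 1 (a pixel brighter than the running
    reference becomes the new white)."""
    logI = np.log(intensity)
    out = np.empty_like(logI)
    acc = 0.0                        # log of the running ratio product
    out[0] = 0.0                     # the path start is its own reference
    for k in range(1, len(logI)):
        step = logI[k] - logI[k - 1]
        if abs(step) <= threshold:   # small ratios treated as no change
            step = 0.0
        acc += step
        if acc > 0.0:                # reset: product exceeded 1
            acc = 0.0
        out[k] = acc
    return np.exp(out)

# A reflectance step edge under a slow illumination gradient.
x = np.linspace(0, 1, 100)
reflectance = np.where(x < 0.5, 0.8, 0.4)
illumination = 1.0 - 0.3 * x
L = retinex_path(reflectance * illumination, threshold=0.01)
```

Despite the 30% illumination falloff, the recovered lightness is near 1 on the bright side and near 0.5 on the dark side, i.e., it tracks reflectance, not illumination.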
3. Shi K, Luo MR. Factors affecting colour matching between displays. Opt Express 2022; 30:26841-26855. [PMID: 36236868] [DOI: 10.1364/oe.462242]
Abstract
A colour matching experiment was conducted to study metamerism between different displays. The goals were to investigate the effects of the display primaries (their spectral power distributions (SPDs)), the display types (OLED and LCD), and the colour matching functions (CMFs). The results showed that the CIE 2006 2° CMFs gave better agreement with the visual results, especially for matches between OLED and LCD displays, an effect due mainly to the SPDs of the primaries. The results also showed that a simple colour correction model improved the matching performance between displays, regardless of display type.
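The cross-display comparison rests on ordinary tristimulus integration: two stimuli match for a given observer when their SPDs integrate to the same values under that observer's CMFs, and narrow-band (OLED) versus broad-band (LCD) primaries can disagree across observers. A sketch with stand-in Gaussian CMFs (illustrative lobes, not the CIE tables):

```python
import numpy as np

wl = np.arange(380, 781, 5.0)                    # wavelength samples (nm)
dwl = 5.0

# Stand-in colour matching functions (Gaussian lobes, NOT the CIE data).
def g(mu, sigma, a=1.0):
    return a * np.exp(-0.5 * ((wl - mu) / sigma) ** 2)

cmfs = np.stack([g(600, 40) + 0.3 * g(450, 20),  # x-bar-like
                 g(555, 45),                     # y-bar-like
                 g(450, 25, 1.8)])               # z-bar-like

def tristimulus(spd):
    """XYZ of a stimulus: wavelength-wise product with the CMFs, integrated."""
    return cmfs @ spd * dwl

# A narrow-band OLED-like red primary vs. a broader LCD-like red primary.
oled_red = g(620, 12)
lcd_red = g(615, 30)
xyz_oled, xyz_lcd = tristimulus(oled_red), tristimulus(lcd_red)
```

Tristimulus values are linear in the SPD, so primary mixtures integrate additively; the match predicted by one CMF set need not hold under another, which is the metamerism the experiment probes.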
4. Rouček T, Amjadi AS, Rozsypálek Z, Broughton G, Blaha J, Kusumam K, Krajník T. Self-Supervised Robust Feature Matching Pipeline for Teach and Repeat Navigation. Sensors (Basel) 2022; 22:2836. [PMID: 35458823] [PMCID: PMC9032253] [DOI: 10.3390/s22082836]
Abstract
The performance of deep neural networks and the low cost of computational hardware have made computer vision a popular choice in many robotic systems. An attractive feature of deep-learned methods is their ability to cope with the appearance changes caused by day-night cycles and seasonal variations. However, training deep neural networks typically relies on large numbers of hand-annotated images, which require significant effort to collect and annotate. We present a method that allows autonomous, self-supervised training of a neural network in visual teach-and-repeat (VT&R) tasks, where a mobile robot has to traverse a previously taught path repeatedly. Our method is based on a fusion of two image registration schemes: one based on a Siamese neural network and another on point-feature matching. As the robot traverses the taught paths, it uses the results of feature-based matching to train the neural network, which, in turn, provides coarse registration estimates to the feature matcher. We show that as the neural network gets trained, the accuracy and robustness of the navigation increase, making the robot capable of dealing with significant changes in the environment. The method can significantly reduce the data annotation effort when designing new robotic systems or introducing robots into new environments. Moreover, it provides annotated datasets that can be deployed in other navigation systems. To promote reproducibility, we provide our datasets, code, and trained models online.
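The self-supervision loop hinges on one registration estimate serving as a training label for the other. A toy stand-in for the feature matcher, using cross-correlation of column-intensity profiles to label a horizontally shifted teach/repeat image pair (synthetic data, not the authors' pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

def horizontal_shift(teach, repeat):
    """Estimate the horizontal offset between two images by cross-correlating
    their column-intensity profiles (a stand-in for point-feature matching)."""
    a = teach.mean(axis=0) - teach.mean()
    b = repeat.mean(axis=0) - repeat.mean()
    corr = np.correlate(b, a, mode="full")
    return int(np.argmax(corr)) - (len(a) - 1)

teach = rng.uniform(0, 1, (48, 64))
true_shift = 7
repeat = np.roll(teach, true_shift, axis=1)       # camera displaced sideways

# The estimated offset can serve as a self-supervised training label
# (image pair -> displacement) for a learned registration network.
label = horizontal_shift(teach, repeat)
```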
Affiliation(s)
- Tomáš Rouček, Arash Sadeghi Amjadi, Zdeněk Rozsypálek, George Broughton, Jan Blaha, Tomáš Krajník: Artificial Intelligence Center, Faculty of Electrical Engineering, Czech Technical University in Prague, 166 27 Prague 6, Czech Republic
- Keerthy Kusumam: Department of Computer Science, University of Nottingham, Jubilee Campus, 7301 Wollaton Rd, Lenton, Nottingham NG8 1BB, UK
5. One-net: Convolutional color constancy simplified. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.04.035]
6. Yoo JS, Lee CH, Kim JO. Deep Dichromatic Model Estimation Under AC Light Sources. IEEE Trans Image Process 2021; 30:7064-7073. [PMID: 34351857] [DOI: 10.1109/tip.2021.3100550]
Abstract
The dichromatic reflection model has been widely exploited for computer vision tasks such as color constancy and highlight removal. However, dichromatic model estimation is a severely ill-posed problem, so several assumptions are commonly made, such as a white light source (for highlight removal) or the existence of highlight regions (for color constancy). In this paper, we propose a spatio-temporal deep network to estimate the dichromatic parameters under AC light sources, whose minute illumination variations can be captured with a high-speed camera. The proposed network is composed of two sub-network branches. From high-speed video frames, the branches generate the chromaticity and coefficient matrices that make up the dichromatic image model. The two branches are jointly learned with spatio-temporal regularization. To the best of our knowledge, this is the first work that aims to estimate all dichromatic parameters in computer vision. To validate the estimation accuracy, the model is applied to color constancy and highlight removal; both sets of experimental results show that the dichromatic model can be estimated accurately by the proposed deep network.
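The underlying dichromatic image model factors each pixel into a body (diffuse) and an illuminant (specular) component. A synthetic sketch of the model and of coefficient recovery when the two chromaticities are known; the paper's network estimates all of these quantities jointly, which this sketch does not attempt:

```python
import numpy as np

rng = np.random.default_rng(2)

# Dichromatic model: each RGB pixel is a nonnegative mix of a body (diffuse)
# colour and the illuminant (specular) colour: I = C @ M, with C (3x2) holding
# the two chromaticities and M (2xN) the per-pixel mixing coefficients.
c_body = np.array([0.6, 0.3, 0.1])
c_illum = np.array([0.33, 0.34, 0.33])
C = np.stack([c_body, c_illum], axis=1)

M = rng.uniform(0, 1, (2, 100))          # diffuse/specular weights per pixel
I = C @ M                                # synthetic image (3 x N pixels)

# With known chromaticities the coefficients are recoverable by least squares;
# dropping the specular row then yields a highlight-free rendering.
M_hat, *_ = np.linalg.lstsq(C, I, rcond=None)
highlight_free = np.outer(c_body, M_hat[0])
```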
7. Ji Y, Kwak Y, Park SM, Kim YL. Compressive recovery of smartphone RGB spectral sensitivity functions. Opt Express 2021; 29:11947-11961. [PMID: 33984965] [PMCID: PMC8237928] [DOI: 10.1364/oe.420069]
Abstract
Spectral response (or sensitivity) functions of a three-color image sensor (or trichromatic camera) map spectral stimuli to RGB color values. Like biological photosensors, digital RGB spectral responses are device dependent and vary significantly from model to model. Thus, knowledge of the RGB spectral response functions of a specific device is vital in a variety of computer vision and mobile health (mHealth) applications. In principle, spectral response functions can be measured directly with sophisticated calibration equipment in a specialized laboratory setting, which is not easily accessible to most application developers. As a result, several mathematical methods relying on standard color references have been proposed; typical constrained optimization frameworks are often complicated and require a large number of colors. We report a compressive sensing framework in the frequency domain for accurately predicting RGB spectral response functions with only a few primary colors. Using a scientific camera, we first validate the estimation method against direct spectral sensitivity measurements and ensure that the root mean square errors between the ground truth and the recovered RGB spectral response functions are negligible. We further recover the RGB spectral response functions of smartphones and validate them with an expanded color checker reference. We expect that this simple yet reliable estimation method can easily be applied for color calibration and standardization in machine vision, hyperspectral filters, and mHealth applications that capitalize on the built-in cameras of smartphones.
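A sketch of the frequency-domain idea: a smooth sensitivity curve is expressed with a few low-frequency DCT coefficients, so a handful of known stimuli suffice to solve for it. The basis size, stimuli, and curves below are synthetic assumptions, not the paper's measurements.

```python
import numpy as np

rng = np.random.default_rng(3)
wl = np.linspace(400, 700, 61)
n, k = wl.size, 6                       # 61 spectral samples, 6 coefficients

# Low-frequency DCT basis: smooth sensitivity curves need only a few terms.
t = (np.arange(n) + 0.5) / n
B = np.stack([np.cos(np.pi * j * t) for j in range(k)], axis=1)   # n x k

a_true = np.array([0.8, -0.5, 0.3, -0.2, 0.1, 0.05])
s_true = B @ a_true                     # "unknown" sensitivity (one channel)

# Responses to a handful of primary-colour stimuli (rows = stimulus SPDs).
E = rng.uniform(0, 1, (8, n))
r = E @ s_true

# Recover the 6 coefficients from just 8 responses: solve (E B) a = r.
a_hat, *_ = np.linalg.lstsq(E @ B, r, rcond=None)
s_hat = B @ a_hat
```

Because the unknown lives in a 6-dimensional subspace, 8 measurements already over-determine it, which is the compressive advantage over sampling all 61 wavelengths.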
Affiliation(s)
- Yuhyun Ji, Yunsang Kwak, Sang Mok Park, Young L. Kim: Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN 47907, USA
- Young L. Kim: also Purdue Quantum Science and Engineering Institute; Regenstrief Center for Healthcare Engineering; Purdue University Center for Cancer Research, West Lafayette, IN 47907, USA
8. Prosa M, Bolognesi M, Fornasari L, Grasso G, Lopez-Sanchez L, Marabelli F, Toffanin S. Nanostructured Organic/Hybrid Materials and Components in Miniaturized Optical and Chemical Sensors. Nanomaterials (Basel) 2020; 10:E480. [PMID: 32155993] [PMCID: PMC7153587] [DOI: 10.3390/nano10030480]
Abstract
In the last decade, biochemical sensors have brought a disruptive breakthrough in analytical chemistry and microbiology due to the advent of technologically advanced systems conceived to respond to specific applications. From the design of a multitude of different detection modalities, several classes of sensor have been developed over the years. To date, however, they have hardly been used in point-of-care or in-field applications, where cost and portability are of primary concern. In the present review we report on the use of nanostructured organic and hybrid compounds in optoelectronic, electrochemical and plasmonic components as constituent elements of miniaturized and easy-to-integrate biochemical sensors. We show how the targeted design, synthesis and nanostructuring of organic and hybrid materials have enabled enormous progress, not only in modulating and optimizing sensor capabilities and performance when used as active materials, but also in the architecture of the detection schemes when used as structural/packaging components. With a particular focus on optoelectronic, chemical and plasmonic components for sensing, we highlight that highly integrated architectures designed through a system-engineering approach may enable the full potential of sensing systems in real-setting applications in terms of fast response, high sensitivity and multiplexing at low cost and with ease of portability.
Affiliation(s)
- Mario Prosa, Margherita Bolognesi, Stefano Toffanin: Institute of Nanostructured Materials (ISMN), National Research Council (CNR), via P. Gobetti 101, 40129 Bologna, Italy
- Lucia Fornasari, Laura Lopez-Sanchez: Plasmore s.r.l., viale Vittorio Emanuele II 4, 27100 Pavia, Italy
- Gerardo Grasso: Institute of Nanostructured Materials (ISMN), National Research Council (CNR) c/o Department of Chemistry, 'Sapienza' University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Franco Marabelli: Physics Department, University of Pavia, via A. Bassi 6, 27100 Pavia, Italy
9. Ibarra-Arenado MJ, Tjahjadi T, Pérez-Oria J. Shadow Detection in Still Road Images Using Chrominance Properties of Shadows and Spectral Power Distribution of the Illumination. Sensors (Basel) 2020; 20:1012. [PMID: 32069938] [PMCID: PMC7070959] [DOI: 10.3390/s20041012]
Abstract
A well-known challenge in vision-based driver assistance systems is cast shadows on the road, which make fundamental tasks such as road and lane detection difficult. Inasmuch as shadow detection relies on shadow features, in this paper we propose a set of new chrominance properties of shadows based on the skylight and sunlight contributions to the road surface chromaticity. Six constraints on shadowed and non-shadowed regions are derived from these properties. The chrominance properties and the associated constraints are used as shadow features in an effective shadow detection method intended to be integrated into an onboard road detection system, where the identification of cast shadows on the road is a decisive stage. Onboard systems deal with still outdoor images; thus, the approach focuses on distinguishing shadow boundaries from material changes by considering two illumination sources: sky and sun. A non-shadowed road region is illuminated by both skylight and sunlight, whereas a shadowed one is illuminated by skylight only; thus, their chromaticities differ. The shadow edge detection strategy consists of identifying the image edges that separate shadowed and non-shadowed road regions. The classification is achieved by verifying whether the pixel chrominance values of the regions on both sides of an image edge satisfy the six constraints. Experiments on real traffic scenes demonstrated the effectiveness of our shadow detection system in detecting shadow edges on the road and distinguishing them from material-change edges, outperforming previous shadow detection methods based on physical features and showing the high potential of the new chrominance properties.
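An illustrative reduction of the idea to two constraints (the actual method derives six from the skylight/sunlight chromaticity analysis): across a shadow edge the darker side, lit by skylight only, should be both darker and relatively bluer, whereas a material edge need not be. The thresholds below are assumptions, not the paper's values.

```python
import numpy as np

def is_shadow_edge(rgb_side_a, rgb_side_b):
    """Two-constraint sketch of a shadow-edge test: the darker side of the
    edge must be substantially darker AND have a larger blue fraction than
    the sunlit side (skylight-only illumination is blue-shifted)."""
    a, b = np.asarray(rgb_side_a, float), np.asarray(rgb_side_b, float)
    dark, lit = (a, b) if a.sum() < b.sum() else (b, a)
    blue_frac = lambda p: p[2] / p.sum()
    return bool(dark.sum() < 0.8 * lit.sum() and
                blue_frac(dark) > blue_frac(lit))

sunlit_road = [120, 115, 110]
shadow_road = [45, 48, 60]          # darker and blue-shifted: skylight only
red_paint = [130, 40, 35]           # material change, not a shadow
```

The painted region is darker than the road but not blue-shifted, so the second constraint correctly rejects it.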
Affiliation(s)
- Manuel José Ibarra-Arenado: Department of Electrical and Energy Engineering, University of Cantabria, Avda. Los Castros s/n, 39005 Santander, Spain (correspondence; Tel.: +34-942-201-360; Fax: +34-942-201-385)
- Tardi Tjahjadi: School of Engineering, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL, UK
- Juan Pérez-Oria: Department of Electronic Technology and Automatic Systems, University of Cantabria, Avda. Los Castros s/n, 39005 Santander, Spain
10. Gao SB, Zhang M, Li CY, Li YJ. Improving color constancy by discounting the variation of camera spectral sensitivity. J Opt Soc Am A Opt Image Sci Vis 2017; 34:1448-1462. [PMID: 29036112] [DOI: 10.1364/josaa.34.001448]
Abstract
Recovering the true scene colors from a color-biased image by discounting the effects of both the scene illuminant and the camera spectral sensitivity (CSS) is an ill-posed problem. Most color constancy (CC) models are designed to first estimate the illuminant color, which is then removed from the color-biased image to obtain an image taken under white light, without explicit consideration of the CSS effect on CC. This paper first studies the CSS effect on illuminant estimation arising in inter-dataset-based CC (inter-CC), i.e., training a CC model on one dataset and then testing on another dataset captured with a distinct CSS. We show the clear degradation of existing CC models in the inter-CC setting. We then propose a simple way to overcome this degradation by first quickly learning a transform matrix between the two distinct CSSs (CSS-1 and CSS-2). The learned matrix is used to convert the data (including the illuminant ground truth and the color-biased images) rendered under CSS-1 into CSS-2, so that a CC model can be trained and applied on color-biased images under CSS-2 without the burden of acquiring a training set under CSS-2. Extensive experiments on synthetic and real images show that our method clearly improves the inter-CC performance of traditional CC algorithms. We suggest that, by taking the CSS effect into account, truly color-constant images, invariant to changes of both the illuminant and the camera sensor, are more likely to be obtained.
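A sketch of the central step, learning the CSS-1 to CSS-2 transform matrix by least squares from corresponding colours. The 3x3 map and the data here are synthetic stand-ins for the relation induced by two real sensitivity sets.

```python
import numpy as np

rng = np.random.default_rng(4)

# Ground-truth 3x3 map between two cameras' colour spaces (a synthetic
# stand-in for the relation induced by their spectral sensitivities).
T_true = np.array([[0.9, 0.1, 0.0],
                   [0.05, 1.0, -0.05],
                   [0.0, 0.2, 0.8]])

rgb_css1 = rng.uniform(0, 1, (200, 3))        # responses under CSS-1
rgb_css2 = rgb_css1 @ T_true.T                # same scenes under CSS-2

# Learn the transform from corresponding colours, then use it to convert
# CSS-1 training data (images and illuminant labels) into the CSS-2 space.
T_hat, *_ = np.linalg.lstsq(rgb_css1, rgb_css2, rcond=None)
converted = rgb_css1 @ T_hat                  # ready to train a CSS-2 CC model
```

With real cameras the correspondences would come from shared reflectance targets or rendered spectra rather than an exact linear relation, so the fit is approximate.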
11. Shadow-Based Hierarchical Matching for the Automatic Registration of Airborne LiDAR Data and Space Imagery. Remote Sensing 2016. [DOI: 10.3390/rs8060466]
12. Lecca M, Gottardi M, Farella E, Milosevic B. Always-on low-power optical system for skin-based touchless machine control. J Opt Soc Am A Opt Image Sci Vis 2016; 33:1015-1024. [PMID: 27409427] [DOI: 10.1364/josaa.33.001015]
Abstract
Embedded vision systems are smart, energy-efficient devices that capture and process a visual signal in order to extract high-level information about the surrounding world, capabilities that attract growing interest from research and industry. In this work, we present a novel low-power optical embedded system tailored to detect human skin under various illumination conditions, and we employ it as a smart switch to activate one or more appliances connected to it. The system is composed of an always-on low-power RGB color sensor, a proximity sensor, and an energy-efficient microcontroller (MCU). The architecture of the color sensor allows hardware preprocessing of the RGB signal, which is converted into the rg space directly on chip, reducing the power consumption. The rg signal is delivered to the MCU, where it is classified as skin or non-skin. Each time the signal is classified as skin, the proximity sensor is activated to check the distance of the detected object. If the object lies in the desired proximity range, the system registers the interaction and switches the connected appliances on or off. The experimental validation of the proposed system on a prototype shows that processing both distance and color remarkably improves performance over either component used separately, making the system a promising tool for energy-efficient, touchless machine control.
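The on-chip rg conversion and the downstream skin test can be sketched as follows; the rg bounds are illustrative assumptions, not the system's trained classifier.

```python
def classify_rg(rgb, r_range=(0.35, 0.55), g_range=(0.25, 0.37)):
    """Skin test in rg chromaticity space. The sensor converts RGB to rg in
    hardware; here both the conversion and a simple box classifier are
    sketched in software (the bounds are illustrative)."""
    R, G, B = rgb
    total = R + G + B
    if total == 0:
        return False
    r, g = R / total, G / total       # intensity-normalised chromaticities
    return r_range[0] <= r <= r_range[1] and g_range[0] <= g <= g_range[1]

skin_like = (180, 120, 90)            # r ~ 0.46, g ~ 0.31
blue_mug = (40, 60, 160)              # r ~ 0.15, g ~ 0.23
```

Normalising by total intensity is what gives the classifier its partial robustness to illumination level, since r and g depend only on the colour ratio.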
13. Jansen-van Vuuren RD, Armin A, Pandey AK, Burn PL, Meredith P. Organic Photodiodes: The Future of Full Color Detection and Image Sensing. Adv Mater 2016; 28:4766-4802. [PMID: 27111541] [DOI: 10.1002/adma.201505405]
Abstract
Major growth in the image sensor market is largely a result of the expansion of digital imaging into cameras, whether stand-alone or integrated within smartphones or automotive vehicles. Applications in biomedicine, education, environmental monitoring, optical communications, pharmaceutics and machine vision are also driving the development of imaging technologies. Organic photodiodes (OPDs) are now being investigated for existing imaging technologies, as their properties make them interesting candidates for these applications. OPDs offer cheaper processing methods; devices that are light, flexible and compatible with large (or small) areas; and the ability to tune the photophysical and optoelectronic properties at both the material and device level. Although the concept of OPDs has been around for some time, it is only relatively recently that significant progress has been made, with their performance now beginning to rival that of their inorganic counterparts in a number of criteria, including linear dynamic range, detectivity, and color selectivity. This review covers the progress made in the OPD field, describing their development as well as the challenges and opportunities.
Affiliation(s)
- Ross D Jansen-van Vuuren, Ardalan Armin, Ajay K Pandey, Paul L Burn, Paul Meredith: Center for Organic Photonics & Electronics, The University of Queensland, Queensland 4072, Australia
14. Tian J, Duan Z, Ren W, Han Z, Tang Y. Simple and effective calculations about spectral power distributions of outdoor light sources for computer vision. Opt Express 2016; 24:7266-7286. [PMID: 27137018] [DOI: 10.1364/oe.24.007266]
Abstract
The spectral power distributions (SPDs) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance of a scene to vary and gives rise to common natural illumination phenomena such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or solar zenith angles) and under different atmospheric conditions is therefore of interest to physically based vision. In this paper, we propose a feasible, simple, and effective SPD calculation method for computer vision, based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be applied directly in computer vision tasks, including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of these applications demonstrate the practical value of our calculation method: it establishes a bridge between the image and physical environmental information such as time, location, and weather conditions.
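A deliberately reduced two-term version of such a transmittance calculation, combining Beer-Lambert attenuation by Rayleigh scattering and Angstrom-type aerosol turbidity with a plane-parallel air mass. The coefficients are textbook approximations, not the paper's model.

```python
import numpy as np

wl = np.linspace(400, 700, 31) * 1e-3          # wavelength in micrometres

def direct_sunlight(e0, zenith_deg, beta=0.05, alpha=1.3):
    """Extraterrestrial spectrum attenuated by two Beer-Lambert transmittances:
    Rayleigh scattering (~ lambda^-4) and Angstrom aerosol turbidity
    (beta * lambda^-alpha), both scaled by the relative air mass."""
    m = 1.0 / np.cos(np.radians(zenith_deg))   # plane-parallel air mass
    tau_rayleigh = 0.0088 * wl ** -4.05        # approximate optical thickness
    tau_aerosol = beta * wl ** -alpha
    return e0 * np.exp(-(tau_rayleigh + tau_aerosol) * m)

e0 = np.ones_like(wl)                          # flat toy extraterrestrial SPD
noon, evening = direct_sunlight(e0, 10), direct_sunlight(e0, 70)
```

Because the optical thickness grows sharply toward short wavelengths, a larger zenith angle both dims the spectrum and shifts it toward red, which is the physical origin of twilight colours mentioned above.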
15. Mazin B, Delon J, Gousseau Y. Estimation of illuminants from projections on the Planckian locus. IEEE Trans Image Process 2015; 24:1944-1955. [PMID: 25826801] [DOI: 10.1109/tip.2015.2405414]
Abstract
This paper introduces a new approach for the automatic estimation of illuminants in a digital color image. The method relies on two assumptions. First, the image is assumed to contain at least a small set of achromatic pixels. The second assumption is physical and concerns the set of possible illuminants, assumed to be well approximated by black-body radiators. The proposed scheme is based on a projection of selected pixels onto the Planckian locus in a well-chosen chromaticity space, followed by a voting procedure that yields the illuminant estimate. The approach is very simple and learning-free, and the voting procedure can be extended to detect multiple illuminants when necessary. Experiments on various databases show that the performance of this approach is similar to that of the best learning-based state-of-the-art algorithms.
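A compact sketch of the locus-projection-and-voting scheme: black-body SPDs from Planck's law are mapped to a chromaticity space through stand-in Gaussian sensor curves, and near-achromatic pixels vote for the nearest correlated colour temperature. The sensor curves and CCT grid are assumptions; the paper works in a carefully chosen chromaticity space instead.

```python
import numpy as np

wl = np.linspace(400, 700, 31) * 1e-9          # wavelength (metres)
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(T):
    """Black-body spectral radiance at temperature T over the visible band."""
    return (2 * h * c**2 / wl**5) / np.expm1(h * c / (wl * kB * T))

# Toy camera: three Gaussian sensitivities (stand-ins for real sensor curves).
def g(mu_nm, sigma_nm):
    return np.exp(-0.5 * ((wl * 1e9 - mu_nm) / sigma_nm) ** 2)
sens = np.stack([g(600, 40), g(550, 40), g(460, 30)])

def chromaticity(spd):
    rgb = sens @ spd
    return rgb[:2] / rgb.sum()                 # intensity-free (r, g) pair

# Precompute the Planckian locus in this chromaticity space.
ccts = np.arange(2500, 10001, 100)
locus = np.array([chromaticity(planck(T)) for T in ccts])

def estimate_cct(pixel_spds):
    """Project each near-achromatic pixel onto the nearest locus point and
    let the pixels vote; the winning bin is the illuminant estimate."""
    votes = np.zeros(len(ccts), int)
    for spd in pixel_spds:
        d = np.linalg.norm(locus - chromaticity(spd), axis=1)
        votes[np.argmin(d)] += 1
    return ccts[np.argmax(votes)]

# Three achromatic pixels of different brightness under a 6500 K illuminant.
est = estimate_cct([planck(6500) * s for s in (0.5, 1.0, 2.0)])
```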
16.
17. Qu L, Tian J, Han Z, Tang Y. Pixel-wise orthogonal decomposition for color illumination invariant and shadow-free image. Opt Express 2015; 23:2220-2239. [PMID: 25836092] [DOI: 10.1364/oe.23.002220]
Abstract
In this paper, we propose a novel, effective and fast method to obtain a color illumination invariant and shadow-free image from a single outdoor image. Different from state-of-the-art methods for shadow-free image that either need shadow detection or statistical learning, we set up a linear equation set for each pixel value vector based on physically-based shadow invariants, deduce a pixel-wise orthogonal decomposition for its solutions, and then get an illumination invariant vector for each pixel value vector on an image. The illumination invariant vector is the unique particular solution of the linear equation set, which is orthogonal to its free solutions. With this illumination invariant vector and Lab color space, we propose an algorithm to generate a shadow-free image which well preserves the texture and color information of the original image. A series of experiments on a diverse set of outdoor images and the comparisons with the state-of-the-art methods validate our method.
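The decomposition can be illustrated as a per-pixel orthogonal projection: remove from the pixel vector every component lying in the span of the free solutions, keeping the unique part orthogonal to them. The two "free" directions below are illustrative, and illumination changes are assumed to act additively on the vector (as they would in a log colour space), neither of which is taken from the paper.

```python
import numpy as np

# Directions along which illumination/shadow changes are assumed to move a
# pixel's (log-)colour vector: overall intensity and a blue-yellow shift.
free = np.stack([[1.0, 1.0, 1.0],
                 [-0.3, -0.1, 1.0]], axis=1)     # 3 x 2 "free solutions"
Q, _ = np.linalg.qr(free)                        # orthonormal basis of the span

def invariant(pixel):
    """Pixel-wise orthogonal decomposition: subtract the component in the
    span of the free solutions, keeping the part orthogonal to them."""
    p = np.asarray(pixel, float)
    return p - Q @ (Q.T @ p)

sunlit = np.array([0.8, 0.5, 0.3])
# The same surface in shadow: darker plus a blue shift, i.e. the sunlit
# vector moved only along the free directions.
shadowed = sunlit - 0.9 * free[:, 0] + 0.1 * free[:, 1]
```

The invariant component is untouched by any motion inside the free-solution plane, which is why the same surface yields the same value in and out of shadow.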
18. Vazquez-Corral J, Bertalmío M. Spectral sharpening of color sensors: diagonal color constancy and beyond. Sensors (Basel) 2014; 14:3965-3985. [PMID: 24577523] [PMCID: PMC4003926] [DOI: 10.3390/s140303965]
Abstract
It has now been 20 years since the seminal work by Finlayson et al. on the use of spectral sharpening of sensors to achieve diagonal color constancy. Spectral sharpening is still used today by numerous researchers for goals unrelated to the original one of diagonal color constancy, e.g., multispectral processing, shadow removal, and the location of unique hues. This paper reviews the idea of spectral sharpening through the lens of what is known today in color constancy, describes the different methods used for obtaining a set of sharpening sensors, and presents an overview of the many different uses that have been found for spectral sharpening over the years.
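One classical way to obtain a sharpening transform for a given pair of illuminants is by eigendecomposition of the 3x3 matrix relating the sensor responses, which makes the illuminant change exactly diagonal (von Kries) in the new basis. The data below are synthetic, and this is a sketch of the principle rather than of any specific method surveyed in the review.

```python
import numpy as np

rng = np.random.default_rng(5)

# Responses of 50 surfaces under illuminant B (3 x N), and a 3x3 matrix M
# mapping them to responses under illuminant A. M is built with real
# eigenvalues so an exactly diagonalising transform exists.
W_b = rng.uniform(0.1, 1.0, (3, 50))
V = np.array([[1.0, 0.2, 0.0],
              [0.1, 1.0, 0.3],
              [0.0, 0.2, 1.0]])
M = V @ np.diag([1.4, 1.0, 0.6]) @ np.linalg.inv(V)
W_a = M @ W_b

# Sharpening: estimate M from the data, then take T as the inverse of its
# eigenvector matrix, so that T M T^-1 is diagonal.
M_hat = W_a @ np.linalg.pinv(W_b)
evals, U = np.linalg.eig(M_hat)
T = np.linalg.inv(U)            # rows = sharpened combinations of the sensors

D = np.diag(evals)
sharpened_a, sharpened_b = T @ W_a, T @ W_b   # related channel-wise by D
```

In the sharpened basis the illuminant change reduces to three per-channel gains, which is exactly the diagonal model that von Kries-style color constancy assumes.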
Affiliation(s)
- Javier Vazquez-Corral, Marcelo Bertalmío: Information and Communications Technologies Department, Universitat Pompeu Fabra, Roc Boronat 138, Barcelona 08018, Spain
19. Internal limiting membrane contrast after staining with indocyanine green and brilliant blue G during macular surgery. Retina 2013; 33:812-817. [PMID: 23481454] [DOI: 10.1097/iae.0b013e3182807629]
Abstract
PURPOSE: To evaluate, by digital color contrast ratio (CR) analysis, the difference in color contrast and the resulting visibility of the internal limiting membrane (ILM) when stained with indocyanine green or brilliant blue G (BBG) during macular surgery.
METHODS: The authors analyzed 40 consecutive cases in which vitrectomy with ILM removal was performed to treat a macular hole or an epiretinal membrane. The surgical procedure was performed in 21 patients (21 eyes) after staining with indocyanine green and in 19 patients (19 eyes) after staining with BBG. The color CRs were estimated by digital analysis of the red, green, and blue data of the captured images, and the CRs obtained with the two dyes were compared.
RESULTS: Color contrast analysis was performed in all 40 eyes, and the CR was estimated in every eye. The CR (mean ± SD) obtained with indocyanine green and BBG was 4.3 ± 0.3 and 2.4 ± 0.1, respectively; indocyanine green provided a significantly higher CR than BBG (P = 0.015).
CONCLUSION: Digital color contrast analysis can be used to evaluate the visibility of structures in digital images, and it may be useful when choosing the dye that stains the ILM better.
20
Krüger N, Janssen P, Kalkan S, Lappe M, Leonardis A, Piater J, Rodríguez-Sánchez AJ, Wiskott L. Deep hierarchies in the primate visual cortex: what can we learn for computer vision? IEEE Trans Pattern Anal Mach Intell 2013; 35:1847-1871. [PMID: 23787340] [DOI: 10.1109/tpami.2012.272]
Abstract
Computational modeling of the primate visual system yields insights of potential relevance to some of the challenges that computer vision is facing, such as object recognition and categorization, motion detection and activity recognition, or vision-based navigation and manipulation. This paper reviews some functional principles and structures that are generally thought to underlie the primate visual cortex, and attempts to extract biological principles that could further advance computer vision research. Organized for a computer vision audience, we present functional principles of the processing hierarchies present in the primate visual system considering recent discoveries in neurophysiology. The hierarchical processing in the primate visual system is characterized by a sequence of different levels of processing (on the order of 10) that constitute a deep hierarchy in contrast to the flat vision architectures predominantly used in today's mainstream computer vision. We hope that the functional description of the deep hierarchies realized in the primate visual system provides valuable insights for the design of computer vision algorithms, fostering increasingly productive interaction between biological and computer vision research.
Affiliation(s)
- Norbert Krüger
- Maersk Mc-Kinney Moller Institute, University of Southern Denmark, Campusvej 55, Odense M 5230, Denmark.
23
Yang Q, Tan KH, Ahuja N. Shadow removal using bilateral filtering. IEEE Trans Image Process 2012; 21:4361-4368. [PMID: 22829402] [DOI: 10.1109/tip.2012.2208976]
Abstract
In this paper, we propose a simple but effective shadow removal method using a single input image. We first derive a 2-D intrinsic image from a single RGB camera image based solely on colors, particularly chromaticity. We next present a method to recover a 3-D intrinsic image based on bilateral filtering and the 2-D intrinsic image. The luminance contrast in regions with similar surface reflectance due to geometry and illumination variances is effectively reduced in the derived 3-D intrinsic image, while the contrast in regions with different surface reflectance is preserved. However, the intrinsic image contains incorrect luminance values. To obtain the correct luminance, we decompose the input RGB image and the intrinsic image. Each image is decomposed into a base layer and a detail layer. We obtain a shadow-free image by combining the base layer from the input RGB image and the detail layer from the intrinsic image such that the details of the intrinsic image are transferred to the input RGB image from which the correct luminance values can be obtained. Unlike previous methods, the presented technique is fully automatic and does not require shadow detection.
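The base/detail recombination described above can be sketched as follows; a separable Gaussian blur stands in for the paper's bilateral filter, and all function names and parameters are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def gaussian_blur(img, sigma=2.0):
    # separable Gaussian as a simple stand-in for the edge-preserving
    # bilateral filter used in the paper
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    out = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, out)

def transfer_details(rgb_lum, intrinsic, sigma=2.0):
    """Combine the base layer of the input luminance with the detail
    layer of the intrinsic image (both assumed positive 2-D arrays)."""
    eps = 1e-6
    base_rgb = np.log(gaussian_blur(rgb_lum, sigma) + eps)
    base_int = np.log(gaussian_blur(intrinsic, sigma) + eps)
    detail_int = np.log(intrinsic + eps) - base_int   # detail of intrinsic
    return np.exp(base_rgb + detail_int)              # correct luminance + details
```

With a true bilateral filter in place of the Gaussian, edges survive in the base layer, which is what keeps shadow boundaries out of the transferred detail.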
Affiliation(s)
- Qingxiong Yang
- Department of Computer Science, City University of Hong Kong, Hong Kong.
24
Ratnasingam S, McGinnity TM. Chromaticity space for illuminant invariant recognition. IEEE Trans Image Process 2012; 21:3612-3623. [PMID: 22481826] [DOI: 10.1109/tip.2012.2193135]
Abstract
In this paper, an algorithm is proposed to extract two illuminant-invariant chromaticity features from three image sensor responses. The algorithm extracts these chromaticity features at the pixel level and therefore performs well in scenes with non-uniform illumination. An approach is proposed to use the algorithm with cameras of unknown sensitivity. The algorithm was tested for separability of perceptually similar colours under the International Commission on Illumination (CIE) standard illuminants and obtained good performance. It was also tested for colour-based object recognition by illuminating objects with typical indoor illuminants and outperformed the other existing algorithms investigated in this paper. Finally, the algorithm was tested for skin detection invariant to illuminant, ethnic background, and imaging device. In this investigation, daylight scenes under different weather conditions and scenes illuminated by typical indoor illuminants were used. The proposed algorithm gives better skin detection performance than widely used standard colour spaces. Based on the results presented, the proposed illuminant-invariant chromaticity space can be used for machine vision applications including illuminant-invariant colour-based object recognition and skin detection.
25
Gijsenij A, Lu R, Gevers T. Color constancy for multiple light sources. IEEE Trans Image Process 2012; 21:697-707. [PMID: 21859624] [DOI: 10.1109/tip.2011.2165219]
Abstract
Color constancy algorithms are generally based on the simplifying assumption that the spectral distribution of a light source is uniform across the scene. However, in reality, this assumption is often violated due to the presence of multiple light sources. In this paper, we address more realistic scenarios where the uniform light-source assumption is too restrictive. First, a methodology is proposed to extend existing algorithms by applying color constancy locally to image patches, rather than globally to the entire image. After local (patch-based) illuminant estimation, these estimates are combined into more robust estimations, and a local correction is applied based on a modified diagonal model. Quantitative and qualitative experiments on spectral and real images show that the proposed methodology reduces the influence of two light sources simultaneously present in one scene. If the chromatic difference between these two illuminants is more than 1°, the proposed framework outperforms algorithms based on the uniform light-source assumption (with error reduction up to approximately 30%). Otherwise, when the chromatic difference is less than 1° and the scene can be considered to contain one (approximately) uniform light source, the performance of the proposed framework is similar to that of global color constancy methods.
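The patch-wise strategy can be sketched as below; a plain gray-world estimator and a direct per-patch von Kries correction are assumed here, whereas the paper combines local estimates into more robust ones before correcting:

```python
import numpy as np

def local_gray_world(img, patch=16):
    """Estimate one illuminant per patch with gray-world, then correct
    each patch with a diagonal (von Kries) transform.  `img` is an
    (H, W, 3) float array with non-black patches; the patch size is a
    free parameter, not a value from the paper."""
    out = np.empty_like(img)
    for y in range(0, img.shape[0], patch):
        for x in range(0, img.shape[1], patch):
            block = img[y:y+patch, x:x+patch]
            est = block.reshape(-1, 3).mean(axis=0)   # gray-world estimate
            est /= np.linalg.norm(est)                # keep chromaticity only
            out[y:y+patch, x:x+patch] = block / (est * np.sqrt(3))
    return out
```

Applying a global gray-world estimate instead would leave a residual cast wherever a second light source dominates, which is exactly the failure mode the patch-wise scheme targets.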
26
Lecca M, Messelodi S. Linking the von Kries model to Wien's law for the estimation of an illuminant invariant image. Pattern Recognit Lett 2011. [DOI: 10.1016/j.patrec.2011.08.005]
27
Heo YS, Lee KM, Lee SU. Robust stereo matching using adaptive normalized cross-correlation. IEEE Trans Pattern Anal Mach Intell 2011; 33:807-822. [PMID: 20660949] [DOI: 10.1109/tpami.2010.136]
Abstract
A majority of existing stereo matching algorithms assume that corresponding pixels have similar color values. In practice, however, this is often not the case, as image color values are affected by various radiometric factors such as illumination direction, illuminant color, and imaging device changes. For this reason, the raw color recorded by a camera cannot be relied on completely, and the assumption of color consistency does not hold between stereo images of real scenes. The performance of most conventional stereo matching algorithms can therefore be severely degraded under radiometric variations. In this paper, we present a new stereo matching measure that is insensitive to radiometric variations between left and right images. Unlike most stereo matching measures, we use the color formation model explicitly in our framework and propose a new measure, called Adaptive Normalized Cross-Correlation (ANCC), for robust and accurate correspondence. The advantage of our method is that it is robust to changes in lighting geometry, illuminant color, and camera parameters between left and right images, and it does not suffer from the fattening effect of conventional Normalized Cross-Correlation (NCC). Experimental results show that our method outperforms other state-of-the-art stereo methods under severely different radiometric conditions between stereo images.
Affiliation(s)
- Yong Seok Heo
- Department of Electrical Engineering and Computer Science, Automation and Systems Research Institute, Seoul National University, 599 Gwanak-ro, Gwanak-gu, Seoul 151-744, Korea.
28
Ratnasingam S, Hernández-Andrés J. Illuminant spectrum estimation at a pixel. J Opt Soc Am A 2011; 28:696-703. [PMID: 21478968] [DOI: 10.1364/josaa.28.000696]
Abstract
In this paper, an algorithm is proposed to estimate the spectral power distribution of a light source at a pixel. The first step of the algorithm is forming a two-dimensional illuminant-invariant chromaticity space. In estimating the illuminant spectrum, generalized inverse estimation and Wiener estimation methods were applied. The chromaticity space was divided into small grids, and a weight matrix was used to estimate the illuminant spectrum illuminating the pixels that fall within a grid. The algorithm was tested using different numbers of sensor responses to determine the optimum number of sensors for accurate colorimetric and spectral reproduction. To investigate the performance of the algorithm realistically, the responses were multiplied with Gaussian noise and then quantized to 10 bits. The algorithm was tested with standard and measured data. Based on the results presented, the algorithm can be used with six sensors to obtain a colorimetrically good estimate of the illuminant spectrum at a pixel.
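Wiener estimation, one of the two reconstruction methods mentioned, has a standard closed form; the sketch below assumes a linear response model r = A s + n with a known spectral prior covariance K_s and white noise (all symbols and names are assumptions for illustration, not the paper's notation):

```python
import numpy as np

def wiener_estimator(A, K_s, noise_var):
    """Wiener estimator for a spectrum s from sensor responses
    r = A s + n:  W = K_s A^T (A K_s A^T + sigma^2 I)^{-1}.
    A maps spectra to responses; K_s is the prior spectral covariance;
    noise_var is the per-sensor noise variance sigma^2."""
    n_sensors = A.shape[0]
    M = A @ K_s @ A.T + noise_var * np.eye(n_sensors)
    return K_s @ A.T @ np.linalg.solve(M, np.eye(n_sensors))

# usage (hypothetical sizes): s_hat = wiener_estimator(A, K_s, 1e-4) @ r
```

As the noise variance shrinks, the estimator reproduces the measured responses exactly; the prior covariance then decides how the remaining spectral degrees of freedom are filled in.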
Affiliation(s)
- Sivalogeswaran Ratnasingam
- Intelligent Systems Research Centre, School of Computing and Intelligent Systems, Magee Campus, University of Ulster, Londonderry, Northern Ireland, BT48 7JL, UK.
29
Ratnasingam S, Collins S, Hernández-Andrés J. Extending "color constancy" outside the visible region. J Opt Soc Am A 2011; 28:541-547. [PMID: 21478947] [DOI: 10.1364/josaa.28.000541]
Abstract
In this paper, the results of an investigation of the possibility of extending "color constancy" to obtain illuminant-invariant reflectance features from data in the near-ultraviolet (UV) and near-infrared (IR) wavelength regions are reported. These features are obtained by extending a blackbody-model-based color constancy algorithm proposed by Ratnasingam and Collins [J. Opt. Soc. Am. A27, 286 (2010)] to these additional wavelengths. Ratnasingam and Collins applied the model-based algorithm in the visible region to extract two illuminant-invariant features related to the wavelength-dependent reflectance of a surface from the responses of four sensors. In this paper, this model-based algorithm is extended to extract two illuminant-invariant reflectance features from the responses of sensors that cover the visible and either the near-UV or near-IR wavelength. In this investigation, test reflectance data sets are generated using the goodness-fitness coefficient (GFC). The appropriateness of the GFC for generating the test data sets is demonstrated by comparing the results obtained with these data with those obtained from data sets generated using the CIELab distance. Results based upon the GFC are then presented that suggest that the model-based algorithm can extract useful features from data from the visible and near-IR wavelengths. Finally, results are presented that show that, although the spectrum of daylight in the near UV is very different from a blackbody spectrum, the algorithm can be modified to extract useful features from visible and near-UV wavelengths.
30
Ratnasingam S, Collins S, Hernández-Andrés J. Optimum sensors for color constancy in scenes illuminated by daylight. J Opt Soc Am A 2010; 27:2198-2207. [PMID: 20922010] [DOI: 10.1364/josaa.27.002198]
Abstract
The apparent color of an object within a scene depends on the spectrum of the light illuminating the object. However, recording an object's color independent of the illuminant spectrum is important in many machine vision applications. In this paper the performance of a blackbody-model-based color constancy algorithm that requires four sensors with different spectral responses is investigated under daylight illumination. In this investigation sensor noise was modeled as Gaussian noise, and the responses were quantized using different numbers of bits. A projection-based algorithm whose output is invariant to illuminant is investigated to improve the results that are obtained. The performance of both of these algorithms is then improved by optimizing the spectral sensitivities of the four sensors using freely available CIE standard daylight spectra and a set of lightness-normalized Munsell reflectance data. With the optimized sensors the performance of both algorithms is shown to be comparable to the human visual system. However, results obtained with measured daylight spectra show that the standard daylights may not be sufficiently representative of measured daylight for optimization with the standard daylight to lead to a reliable set of optimum sensor characteristics.
Affiliation(s)
- Sivalogeswaran Ratnasingam
- Institute of Image Communication and Information Processing, Department of Electronic Engineering, Shanghai Jiao Tong University, Shanghai 200240, China.
31
Ratnasingam S, Collins S. Study of the photodetector characteristics of a camera for color constancy in natural scenes. J Opt Soc Am A 2010; 27:286-294. [PMID: 20126240] [DOI: 10.1364/josaa.27.000286]
Abstract
An algorithm is described to extract two features that represent the chromaticity of a surface and that are independent of both the intensity and correlated color temperature of the daylight illuminating a scene. For mathematical convenience this algorithm is derived using the assumptions that each photodetector responds to a single wavelength and that the spectrum of the illumination source can be represented by a blackbody spectrum. Neither of these assumptions will be valid in a real application. A new method is proposed to determine the effect of violating these assumptions. The conclusion reached is that two features can be obtained that are effectively independent of the daylight illuminant if photodetectors with a spectral response whose full width at half maximum is 80 nm or less are used.
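The core of this construction can be illustrated with ideal single-wavelength sensors under a Wien-approximated blackbody illuminant: differences of log responses cancel intensity, and a suitably weighted second difference also cancels the color-temperature term. The sketch below works under exactly those idealized assumptions and is not the paper's exact feature definition:

```python
import numpy as np

C2 = 1.4388e-2  # second radiation constant, m*K

def wien_log_response(reflectance, lam, T, intensity):
    # ideal narrowband sensor at wavelength lam under a Wien-approximated
    # blackbody of temperature T (constants absorbed into `intensity`)
    return np.log(reflectance) + np.log(intensity) - C2 / (lam * T)

def invariant_feature(logs, lams):
    """One intensity- and temperature-invariant feature from three
    narrowband log responses (a simplified illustration)."""
    d12 = logs[0] - logs[1]                     # cancels intensity
    d23 = logs[1] - logs[2]
    a = (1/lams[0] - 1/lams[1]) / (1/lams[1] - 1/lams[2])
    return d12 - a * d23                        # cancels the 1/T term too
```

Real photodetectors integrate over a finite bandwidth, which is why the paper's analysis of the 80 nm full-width-at-half-maximum limit matters: the cancellation above is only approximate once the single-wavelength assumption is relaxed.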
33
Kawakami R, Takamatsu J, Ikeuchi K. Color constancy from blackbody illumination. J Opt Soc Am A 2007; 24:1886-93. [PMID: 17728810] [DOI: 10.1364/josaa.24.001886]
Abstract
We present a theoretical analysis of what we believe to be a new color constancy method that inputs two chromaticities of an identical surface taken under two blackbody illuminations. By using the Planck formula for modeling spectra of outdoor illumination and by assuming that a narrowband camera sensitivity function is sufficiently narrow, surface colors can be estimated mathematically. Experiments with simulation and real data have been conducted to evaluate the effectiveness of the method. The results showed that although this method is a perfect vehicle for simulation data, it produces significant errors with real data. A thorough investigation of the cause of errors indicates how important the assumptions on both blackbody illuminations and narrowband camera sensitivities are to the method. Finally, we discuss the robustness of our method and the limitation of solving color constancy using the illumination constraint.
Affiliation(s)
- Rei Kawakami
- Graduate School of Information Science and Technology, The University of Tokyo, Japan.
34
Finlayson GD, Hordley SD, Lu C, Drew MS. On the removal of shadows from images. IEEE Trans Pattern Anal Mach Intell 2006; 28:59-68. [PMID: 16402619] [DOI: 10.1109/tpami.2006.18]
Abstract
This paper is concerned with the derivation of a progression of shadow-free image representations. First, we show that adopting certain assumptions about lights and cameras leads to a 1D, gray-scale image representation which is illuminant invariant at each image pixel. We show that as a consequence, images represented in this form are shadow-free. We then extend this 1D representation to an equivalent 2D, chromaticity representation. We show that in this 2D representation, it is possible to relight all the image pixels in the same way, effectively deriving a 2D image representation which is additionally shadow-free. Finally, we show how to recover a 3D, full color shadow-free image representation by first (with the help of the 2D representation) identifying shadow edges. We then remove shadow edges from the edge-map of the original image by edge in-painting and we propose a method to reintegrate this thresholded edge map, thus deriving the sought-after 3D shadow-free image.
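The 1D invariant can be sketched as a projection in log-chromaticity space; the invariant direction angle `theta` is just a parameter here, whereas the full method obtains it from camera calibration or entropy minimization (the function name and the toy constants in any usage are assumptions):

```python
import numpy as np

def invariant_grayscale(rgb, theta):
    """1-D illuminant-invariant (shadow-free) image: the per-pixel
    log-chromaticity (log R/G, log B/G) is projected onto the
    direction orthogonal to the illuminant-variation direction,
    which is specified by the angle `theta`."""
    eps = 1e-6
    u = np.log((rgb[..., 0] + eps) / (rgb[..., 1] + eps))  # log R/G
    v = np.log((rgb[..., 2] + eps) / (rgb[..., 1] + eps))  # log B/G
    return u * np.cos(theta) + v * np.sin(theta)
```

Because a shadow is, to first approximation, just a change of illuminant color and intensity, pixels inside and outside the shadow of one surface land on the same value of this projection, which is what makes the representation shadow-free.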
Affiliation(s)
- Graham D Finlayson
- School of Computing Sciences, University of East Anglia, Norwich, NR4 7TJ UK.
35
Lovell PG, Tolhurst DJ, Párraga CA, Baddeley R, Leonards U, Troscianko J, Troscianko T. Stability of the color-opponent signals under changes of illuminant in natural scenes. J Opt Soc Am A 2005; 22:2060-71. [PMID: 16277277] [DOI: 10.1364/josaa.22.002060]
Abstract
Illumination varies greatly both across parts of a natural scene and as a function of time, whereas the spectral reflectance function of surfaces remains more stable and is of much greater relevance when searching for specific targets. This study investigates the functional properties of postreceptoral opponent-channel responses, in particular their stability against spatial and temporal variation in illumination. We studied images of natural scenes obtained in the UK and Uganda with digital cameras calibrated to produce estimated L-, M-, and S-cone responses of trichromatic primates (human) and birds (starling). For both primates and birds we calculated luminance and red-green opponent (RG) responses. We also calculated a primate blue-yellow opponent (BY) response. The BY response varies with changes in illumination, both across time and across the image, rendering it less invariant. The RG response is much more stable than the BY response across such changes in illumination for primates, less so for birds. These differences between species are due to the greater separation of bird L and M cones in wavelength and the narrower bandwidth of the cone action spectra. This greater separation also produces a larger chromatic signal for a given change in spectral reflectance. Thus bird vision seems to suffer a greater degree of spatiotemporal "clutter" than primate vision, but it also enhances differences between targets and background. Therefore, there may be a trade-off between the degree of chromatic clutter in a visual system and the degree of chromatic difference between a target and its background. Primate and bird visual systems have found different solutions to this trade-off.
Affiliation(s)
- P G Lovell
- Department of Experimental Psychology, University of Bristol, UK.
37
Georgescu B, Meer P. Point matching under large image deformations and illumination changes. IEEE Trans Pattern Anal Mach Intell 2004; 26:674-688. [PMID: 18579929] [DOI: 10.1109/tpami.2004.2]
Abstract
To solve the general point correspondence problem in which the underlying transformation between image patches is represented by a homography, a solution based on extensive use of first order differential techniques is proposed. We integrate in a single robust M-estimation framework the traditional optical flow method and matching of local color distributions. These distributions are computed with spatially oriented kernels in the 5D joint spatial/color space. The estimation process is initiated at the third level of a Gaussian pyramid, uses only local information, and the illumination changes between the two images are also taken into account. Subpixel matching accuracy is achieved under large projective distortions significantly exceeding the performance of any of the two components alone. As an application, the correspondence algorithm is employed in oriented tracking of objects.
Affiliation(s)
- Bogdan Georgescu
- Computer Science Department, Rutgers University, 94 Brett Road, Piscataway, NJ 08854-8058, USA.
38
Tan RT, Nishino K, Ikeuchi K. Color constancy through inverse-intensity chromaticity space. J Opt Soc Am A 2004; 21:321-334. [PMID: 15005396] [DOI: 10.1364/josaa.21.000321]
Abstract
Existing color constancy methods cannot handle both uniformly colored surfaces and highly textured surfaces in a single integrated framework. Statistics-based methods require many surface colors and become error prone when there are only a few surface colors. In contrast, dichromatic-based methods can successfully handle uniformly colored surfaces but cannot be applied to highly textured surfaces, since they require precise color segmentation. We present a single integrated method to estimate illumination chromaticity from single-colored and multicolored surfaces. Unlike existing dichromatic-based methods, the proposed method requires only rough highlight regions without segmenting the colors inside them. We show that, by analyzing highlights, a direct correlation between illumination chromaticity and image chromaticity can be obtained. This correlation is clearly described in "inverse-intensity chromaticity space," a novel two-dimensional space that we introduce. In addition, when Hough transform and histogram analysis are utilized in this space, illumination chromaticity can be estimated robustly, even for a highly textured surface.
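The key relation, that the image chromaticity of highlight pixels is linear in inverse total intensity with the illuminant chromaticity as intercept, can be sketched with an ordinary least-squares fit standing in for the paper's Hough-transform and histogram analysis:

```python
import numpy as np

def illuminant_chromaticity(rgb_highlights):
    """Estimate illuminant chromaticity from an (N, 3) array of
    highlight pixels: in inverse-intensity chromaticity space the
    image chromaticity is linear in 1/(R+G+B), and the intercept of
    that line is the illuminant chromaticity."""
    s = rgb_highlights.sum(axis=1)              # total intensity per pixel
    inv = 1.0 / s                               # inverse-intensity axis
    gamma = []
    for c in range(3):
        sigma = rgb_highlights[:, c] / s        # image chromaticity
        slope, intercept = np.polyfit(inv, sigma, 1)
        gamma.append(intercept)
    return np.array(gamma)
```

The least-squares fit assumes the highlight pixels share one diffuse color; the Hough/histogram machinery in the paper is what removes that restriction for textured surfaces.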
Affiliation(s)
- Robby T Tan
- Department of Computer Science, The University of Tokyo, 4-6-1 Komaba, Meguro-ku, Tokyo, 153-8505, Japan
40
Romero J, Valero E, Hernández-Andrés J, Nieves JL. Color-signal filtering in the Fourier-frequency domain. J Opt Soc Am A 2003; 20:1714-1724. [PMID: 12968644] [DOI: 10.1364/josaa.20.001714]
Abstract
We have analyzed the Fourier-frequency content of spectral power distributions derived from three types of illuminants (daylight, incandescent, and fluorescent) and of the color signals from both biochrome and nonbiochrome surfaces lit by these illuminants. For daylight and the incandescent illuminant, after filtering the signals through parabolic (low-pass) filters in the Fourier-frequency domain and then reconstructing them, we found that most of the spectral information was contained below 0.016 cycles/nm. When fluorescent illuminants were involved, we were unable to recover either the original illuminants or the color signals to any satisfactory degree. We also used the spectral modulation sensitivity function, which is related to the color discrimination thresholds of the human visual system, as a Fourier-frequency filter and obtained consistently less reliable results than with low-pass filtering. We provide comparative results for daylight signals recovered by three different methods. We found reconstructions based on linear models to be the most effective.
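The low-pass filter-and-reconstruct step can be sketched with an FFT; a brick-wall cutoff is assumed here in place of the parabolic filters the authors used, and the function name and defaults are illustrative:

```python
import numpy as np

def lowpass_spectrum(spd, wl_step_nm=4.0, cutoff_c_per_nm=0.016):
    """Low-pass a sampled spectral power distribution in the
    Fourier-frequency domain, keeping only components below the
    cutoff (an ideal brick-wall filter, for illustration)."""
    F = np.fft.rfft(spd)
    freqs = np.fft.rfftfreq(len(spd), d=wl_step_nm)   # cycles per nm
    F[freqs > cutoff_c_per_nm] = 0.0                  # discard high frequencies
    return np.fft.irfft(F, n=len(spd))
```

For smooth daylight or incandescent spectra almost nothing survives above the cutoff, so the reconstruction is nearly lossless; the spiky lines of fluorescent spectra live above it, which is why those signals cannot be recovered this way.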
Affiliation(s)
- Javier Romero
- Departamento de Optica, Facultad de Ciencias, Universidad de Granada, 18071 Granada, Spain.
41
Marchant JA, Onyango CM. Spectral invariance under daylight illumination changes. J Opt Soc Am A 2002; 19:840-848. [PMID: 11999960] [DOI: 10.1364/josaa.19.000840]
Abstract
We develop a method for calculating invariant spectra of light reflected from surfaces under changing daylight illumination conditions. A necessary part of the method is representing the illuminant in a suitable form. We represent daylight by a function E(lambda, T) = h(lambda)exp[u(lambda)f(T)], where lambda is the wavelength, T is the color temperature, h(lambda) and u(lambda) are any functions of lambda but not T, and f(T) is any function of T but not lambda. We use an eigenvalue decomposition on the logarithm of the CIE daylight standard at various color temperatures to obtain the necessary functions and show that this gives an extremely good fit to CIE daylight over our experimental range. We obtain experimental data over the range 350-830 nm from a range of standard colored surfaces for 50 daylight conditions covering a wide range of illumination spectra. Despite a considerable variation in the spectra of the reflected light, we show only small variations when the transformation is used. We investigate the possible causes of the residual variation and conclude that using the above approximation to daylight is unlikely to be a major cause. Some variation is caused by local daylight conditions being different from the CIE standard and the rest by measurement and modeling errors.
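The fit of log E(lambda, T) = log h(lambda) + u(lambda) f(T) amounts to a rank-1 eigen-decomposition of the mean-centred log spectra. A minimal sketch, with all function names assumed and synthetic data standing in for the CIE daylight standard:

```python
import numpy as np

def fit_log_linear_daylight(log_spectra):
    """Fit log E(lambda, T) ~ log h(lambda) + u(lambda) * f(T) by a
    rank-1 decomposition (SVD) of the mean-centred log spectra.
    Rows are illuminants (color temperatures), columns wavelengths."""
    log_h = log_spectra.mean(axis=0)            # wavelength-only part
    centred = log_spectra - log_h
    U, S, Vt = np.linalg.svd(centred, full_matrices=False)
    u = Vt[0]                                   # u(lambda)
    f = U[:, 0] * S[0]                          # f(T), one value per illuminant
    return log_h, u, f
```

How much of the centred variance the first singular value captures is exactly the question of how well the one-parameter form fits CIE daylight, which the paper answers in the affirmative over its experimental range.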
Affiliation(s)
- John A Marchant
- Image Analysis and Control Group, Silsoe Research Institute, Bedford, UK.
42
Finlayson GD, Hordley SD, Drew MS. Removing Shadows from Images. Computer Vision — ECCV 2002. [DOI: 10.1007/3-540-47979-1_55]
43
Marchant JA, Onyango CM. Color invariant for daylight changes: relaxing the constraints on illuminants. J Opt Soc Am A 2001; 18:2704-2706. [PMID: 11688860] [DOI: 10.1364/josaa.18.002704]
Abstract
We extend previous work that addressed the problem of color changes on reflective surfaces resulting from changes in the daylight spectrum. In that work, we constrained the illuminants to a family represented by the Wien approximation to Planck's formula in order to derive a function of the three camera outputs that is invariant to daylight changes. In this work, we show that the constraint on the form of the illuminants can be relaxed and that a much more general form is permissible. We use principal components analysis on the logarithm of the illumination to represent the CIE standard in the more general form and show that the result closely represents the standard. We recalculate the exponent used in the invariant for our camera from the extended theory and obtain a result that duplicates the one found by empirical means used in our previous work.
Affiliation(s)
- J A Marchant
- Image Analysis and Control Group, Silsoe Research Institute, Bedford, UK