992 results for illumination conditions


Relevance:

100.00%

Abstract:

PURPOSE. This study was conducted to determine the magnitude of the pupil center shift between the illumination conditions provided by corneal topography measurement (photopic illuminance) and by Hartmann-Shack aberrometry (mesopic illuminance), and to investigate the importance of this shift when calculating corneal aberrations and for the success of wavefront-guided surgical procedures. METHODS. Sixty-two emmetropic subjects underwent corneal topography and Hartmann-Shack aberrometry. Corneal limbus and pupil edges were detected, and the differences between their respective centers were determined for both procedures. Corneal aberrations were calculated using the pupil centers for corneal topography and for Hartmann-Shack aberrometry. Bland-Altman plots and paired t-tests were used to analyze the differences between corneal aberrations referenced to the two pupil centers. RESULTS. The mean magnitude (modulus) of the pupil displacement with the change in illumination conditions was 0.21 ± 0.11 mm. The effect of this pupillary shift was evident for corneal coma aberrations at 5-mm pupils, but the two sets of aberrations calculated with the two pupil positions were not significantly different. Sixty-eight percent of the population had differences in coma smaller than 0.05 µm, and only 4% had differences larger than 0.1 µm. The pupil displacement was not large enough to significantly affect other higher-order Zernike modes. CONCLUSIONS. Estimated corneal aberrations changed slightly between the photopic and mesopic illumination conditions given by corneal topography and Hartmann-Shack aberrometry. However, according to published tolerance ranges, this systematic pupil shift is enough to degrade the optical quality below the theoretically predicted diffraction limit of wavefront-guided corneal surgery.
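
A minimal sketch, in Python with NumPy, of the quantity reported above: the displacement vector between the pupil centres detected under the two illumination conditions and its modulus. The coordinate values below are made up for illustration and are not data from the study.

```python
import numpy as np

# Hypothetical pupil-centre coordinates (mm), expressed relative to the corneal
# limbus centre, as detected under the two illumination conditions.
center_photopic = np.array([0.15, -0.05])   # corneal topography (photopic)
center_mesopic = np.array([0.02, 0.10])     # Hartmann-Shack aberrometry (mesopic)

# Displacement vector and its modulus -- the study reports a mean modulus of 0.21 +/- 0.11 mm.
shift = center_mesopic - center_photopic
modulus = np.linalg.norm(shift)
print(f"pupil-centre shift: {shift} mm, modulus: {modulus:.2f} mm")
```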

Relevance:

100.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Abstract:

The aim of this study was to test the influence of different degrees of additional illumination on visual caries detection using the International Caries Detection and Assessment System (ICDAS). Two calibrated examiners assessed 139 occlusal surfaces of extracted permanent molars using a standard operating lamp with or without an additional headlamp set to three default brightness intensities. Histology served as the gold standard. Pooled data showed no differences in sensitivities, and specificities were not influenced by the additional light. The area under the curve for the Marthaler classification D3 threshold was significantly lower when an additional strong headlamp was used (0.59, compared with 0.69-0.72 at reduced illumination intensities). One of the two examiners also had a significantly lower sensitivity for the D1 threshold when an additional headlamp was used. The use of additional white light led to reduced detection of dentine lesions.
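
A brief sketch, assuming made-up ICDAS scores and histology labels and using scikit-learn only for the ROC area, of how the sensitivity, specificity, and area under the curve quoted above could be computed at a given caries threshold.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical data: histology gold standard (1 = lesion at the chosen threshold)
# and ICDAS scores (0-6) assigned by one examiner to the same occlusal surfaces.
histology = np.array([0, 0, 1, 1, 0, 1, 0, 1, 0, 0])
icdas = np.array([0, 1, 3, 4, 2, 2, 0, 5, 1, 3])

detected = icdas >= 3                          # illustrative cut-off for "lesion present"
tp = np.sum(detected & (histology == 1))
tn = np.sum(~detected & (histology == 0))
fp = np.sum(detected & (histology == 0))
fn = np.sum(~detected & (histology == 1))

sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
auc = roc_auc_score(histology, icdas)          # area under the ROC curve
print(f"sensitivity {sensitivity:.2f}, specificity {specificity:.2f}, AUC {auc:.2f}")
```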

Relevance:

70.00%

Abstract:

Current older adult capability datasets fail to account for the effects of everyday environmental conditions on capability. This article details a study that investigates the effects of everyday ambient illumination conditions (overcast, 6000 lx; in-house lighting, 150 lx; street lighting, 7.5 lx) and contrast (90%, 70%, 50% and 30%) on the near visual acuity (VA) of older adults (n = 38, 65-87 years). VA was measured at a 1-m viewing distance using logarithm of minimum angle of resolution (LogMAR) acuity charts. Results showed that, for all contrast levels tested, VA decreased by 0.2 log units between the overcast and street lighting conditions. On average, in overcast conditions, participants could detect detail around 1.6 times smaller on the LogMAR charts than under street lighting. VA also decreased significantly when contrast was reduced from 70% to 50%, and from 50% to 30%, in each of the ambient illumination conditions. Practitioner summary: This article presents an experimental study investigating the impact of everyday ambient illumination levels and contrast on older adults' VA. Results show that both factors have a significant effect on VA. Findings suggest that environmental conditions need to be accounted for in older adult capability datasets and designs.
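
The reported 1.6-fold figure follows directly from the LogMAR scale; a one-line check in Python, with the 0.2 log-unit difference taken from the results above:

```python
# LogMAR is the base-10 logarithm of the minimum angle of resolution, so a change
# of d log units corresponds to a factor of 10**d in the size of resolvable detail.
d_logmar = 0.2                    # reported VA difference, overcast vs street lighting
size_ratio = 10 ** d_logmar       # ~1.58, i.e. detail roughly 1.6 times smaller
print(f"detail about {size_ratio:.2f} times smaller under overcast conditions")
```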

Relevance:

70.00%

Abstract:

Under normal viewing conditions, humans find it easy to distinguish between objects made of different materials such as plastic, metal, or paper. Untextured materials such as these have different surface reflectance properties, including lightness and gloss. With single isolated images and unknown illumination conditions, the task of estimating surface reflectance is highly underconstrained, because many combinations of reflectance and illumination are consistent with a given image. To work out how humans estimate surface reflectance properties, we asked subjects to match the appearance of isolated spheres taken out of their original contexts. We found that subjects were able to perform the task accurately and reliably without contextual information to specify the illumination. The spheres were rendered under a variety of artificial illuminations, such as a single point light source, and under a number of photographically captured real-world illuminations from both indoor and outdoor scenes. Subjects performed more accurately for stimuli viewed under real-world patterns of illumination than under artificial illuminations, suggesting that subjects use stored assumptions about the regularities of real-world illuminations to solve this ill-posed problem.

Relevance:

70.00%

Abstract:

An improved technique for 3D head tracking under varying illumination conditions is proposed. The head is modeled as a texture-mapped cylinder. Tracking is formulated as an image registration problem in the cylinder's texture map image. To solve the registration problem in the presence of lighting variation and head motion, the residual error of registration is modeled as a linear combination of texture warping templates and orthogonal illumination templates. Fast and stable on-line tracking is then achieved via regularized, weighted least squares minimization of the registration error. The regularization term tends to limit potential ambiguities that arise in the warping and illumination templates and enables stable tracking over extended sequences. Tracking does not require a precise initial fit of the model; the system is initialized automatically using a simple 2D face detector. The only assumption is that the target is facing the camera in the first frame of the sequence. The warping templates are computed at the first frame of the sequence. Illumination templates are precomputed off-line over a training set of face images collected under varying lighting conditions. Experiments in tracking are reported.
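
A minimal numerical sketch, in Python with NumPy, of the kind of regularized, weighted least-squares step described above: the registration residual is modelled as a linear combination of warping and illumination templates, and the coefficients are obtained by minimizing a weighted error plus a regularization term. The matrix sizes and data are placeholders, not the paper's.

```python
import numpy as np

def solve_registration(residual, templates, weights, lam=1e-2):
    """Solve min_c ||W^(1/2) (residual - templates @ c)||^2 + lam * ||c||^2.

    residual : (n,)   flattened registration error in the texture map
    templates: (n, k) columns are warping templates followed by illumination templates
    weights  : (n,)   per-pixel confidence weights
    """
    a = np.sqrt(weights)[:, None] * templates
    b = np.sqrt(weights) * residual
    k = templates.shape[1]
    # Regularized normal equations: (A^T A + lam I) c = A^T b
    return np.linalg.solve(a.T @ a + lam * np.eye(k), a.T @ b)

# Toy example with random data standing in for texture-map images.
rng = np.random.default_rng(0)
n, k = 1000, 8                                  # pixels; e.g. 6 motion + 2 illumination templates
templates = rng.standard_normal((n, k))
residual = templates @ rng.standard_normal(k) + 0.05 * rng.standard_normal(n)
print(solve_registration(residual, templates, np.ones(n)))
```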

Relevance:

70.00%

Abstract:

An improved technique for 3D head tracking under varying illumination conditions is proposed. The head is modeled as a texture-mapped cylinder. Tracking is formulated as an image registration problem in the cylinder's texture map image. The resulting dynamic texture map provides a stabilized view of the face that can be used as input to many existing 2D techniques for face recognition, facial expression analysis, lip reading, and eye tracking. To solve the registration problem in the presence of lighting variation and head motion, the residual error of registration is modeled as a linear combination of texture warping templates and orthogonal illumination templates. Fast and stable on-line tracking is achieved via regularized, weighted least squares minimization of the registration error. The regularization term tends to limit potential ambiguities that arise in the warping and illumination templates and enables stable tracking over extended sequences. Tracking does not require a precise initial fit of the model; the system is initialized automatically using a simple 2D face detector. The only assumption is that the target is facing the camera in the first frame of the sequence. The formulation is tailored to take advantage of texture mapping hardware available in many workstations, PCs, and game consoles. The non-optimized implementation runs at about 15 frames per second on an SGI O2 graphics workstation. Extensive experiments evaluating the effectiveness of the formulation are reported. The sensitivity of the technique to illumination, regularization parameters, errors in the initial positioning, and internal camera parameters is analyzed. Examples and applications of tracking are reported.
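
To complement the least-squares sketch above, a small illustration, with arbitrary camera and cylinder parameters, of the texture-mapped cylinder itself: a surface point, indexed by an angle and a height in the texture map, is projected into the image under the current head pose, giving the image-to-texture correspondence the registration works with. All values are illustrative.

```python
import numpy as np

def cylinder_to_image(theta, h, pose_r, pose_t, focal=500.0, radius=80.0):
    """Project the cylinder surface point at texture coordinates (theta, h) into the
    image under a rigid head pose (rotation pose_r, translation pose_t), using a
    simple pinhole camera. Parameter values here are placeholders."""
    p_head = np.array([radius * np.sin(theta), h, radius * np.cos(theta)])  # head coordinates (mm)
    p_cam = pose_r @ p_head + pose_t                                        # camera coordinates
    return focal * p_cam[0] / p_cam[2], focal * p_cam[1] / p_cam[2]         # pixel coordinates

# Example: head 600 mm from the camera, rotated 10 degrees about the vertical axis.
a = np.deg2rad(10)
pose_r = np.array([[np.cos(a), 0, np.sin(a)], [0, 1, 0], [-np.sin(a), 0, np.cos(a)]])
pose_t = np.array([0.0, 0.0, 600.0])
print(cylinder_to_image(theta=0.3, h=20.0, pose_r=pose_r, pose_t=pose_t))
```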

Relevance:

70.00%

Abstract:

In this paper we demonstrate a simple and novel illumination model that can be used for illumination-invariant facial recognition. This model requires no prior knowledge of the illumination conditions and can be used when there is only a single training image per person. The proposed illumination model separates the effects of illumination over a small area of the face into two components: an additive component modelling the mean illumination, and a multiplicative component modelling the variance within the facial area. Illumination-invariant facial recognition is performed in a piecewise manner by splitting the face image into blocks and then normalizing the illumination within each block based on the new lighting model. The assumptions underlying this novel lighting model have been verified on the YaleB face database. We show that magnitude 2D Fourier features can be used as robust facial descriptors within the new lighting model. Using only a single training image per person, our new method achieves high (in most cases 100%) identification accuracy on the YaleB, extended YaleB, and CMU-PIE face databases.
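
A minimal sketch, assuming the additive/multiplicative model described above, of block-wise illumination normalization followed by magnitude 2D Fourier features. The block size, image size, and normalization details are illustrative choices, not the paper's exact procedure.

```python
import numpy as np

def block_normalized_fourier_features(face, block=16, eps=1e-6):
    """Remove the additive (mean) and multiplicative (variance) illumination component
    from each block of a greyscale face image, then describe each block by the
    magnitude of its 2D Fourier transform."""
    h, w = face.shape
    feats = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            patch = face[y:y + block, x:x + block].astype(float)
            patch = (patch - patch.mean()) / (patch.std() + eps)   # per-block normalization
            feats.append(np.abs(np.fft.fft2(patch)).ravel())       # magnitude Fourier descriptor
    return np.concatenate(feats)

# Toy usage with a random "face"; a real pipeline would compare these feature
# vectors between the single training image and a probe image.
face = np.random.default_rng(1).integers(0, 256, size=(128, 128))
print(block_normalized_fourier_features(face).shape)
```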

Relevance:

70.00%

Abstract:

Humans distinguish materials such as metal, plastic, and paper effortlessly at a glance. Traditional computer vision systems cannot solve this problem at all. Recognizing surface reflectance properties from a single photograph is difficult because the observed image depends heavily on the amount of light incident from every direction. A mirrored sphere, for example, produces a different image in every environment. To make matters worse, two surfaces with different reflectance properties could produce identical images. The mirrored sphere simply reflects its surroundings, so in the right artificial setting it could mimic the appearance of a matte ping-pong ball. Yet humans possess an intuitive sense of what materials typically "look like" in the real world. This thesis develops computational algorithms with a similar ability to recognize reflectance properties from photographs under unknown, real-world illumination conditions. Real-world illumination is complex, with light typically incident on a surface from every direction. We find, however, that real-world illumination patterns are not arbitrary. They exhibit highly predictable spatial structure, which we describe largely in the wavelet domain. Although they differ in several respects from typical photographs, illumination patterns share much of the regularity described in the natural image statistics literature. These properties of real-world illumination lead to predictable image statistics for a surface with given reflectance properties. We construct a system that classifies a surface according to its reflectance from a single photograph under unknown illumination. Our algorithm learns relationships between surface reflectance and certain statistics computed from the observed image. Like the human visual system, we solve the otherwise underconstrained inverse problem of reflectance estimation by taking advantage of the statistical regularity of illumination. For surfaces with homogeneous reflectance properties and known geometry, our system rivals human performance.
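
A rough sketch, in Python with NumPy and PyWavelets (an assumed third-party dependency), of the kind of wavelet-domain image statistics described above: detail-subband coefficients summarized by simple distributional measures that could feed a reflectance classifier. The choice of wavelet, decomposition depth, and statistics is illustrative.

```python
import numpy as np
import pywt

def wavelet_statistics(image, wavelet="db2", levels=3):
    """Return the variance and kurtosis of each wavelet detail subband, as crude
    stand-ins for the statistical regularities of real-world illumination."""
    coeffs = pywt.wavedec2(image.astype(float), wavelet, level=levels)
    stats = []
    for detail in coeffs[1:]:                 # skip the approximation band
        for band in detail:                   # horizontal, vertical, diagonal details
            c = band.ravel()
            var = c.var()
            kurt = ((c - c.mean()) ** 4).mean() / (var ** 2 + 1e-12)
            stats.extend([var, kurt])
    return np.array(stats)

# Toy usage: feature vectors like these, paired with known reflectances and geometry,
# would train the classifier described above.
img = np.random.default_rng(2).random((256, 256))
print(wavelet_statistics(img).shape)          # 3 levels x 3 orientations x 2 statistics
```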

Relevance:

70.00%

Abstract:

Our aim in this paper is to robustly match frontal faces in the presence of extreme illumination changes, using only a single training image per person and a single probe image. In the illumination conditions we consider, which include those with the dominant light source placed behind and to the side of the user, directly above and pointing downwards, or indeed below and pointing upwards, this is a most challenging problem. The presence of sharp cast shadows, large poorly illuminated regions of the face, quantum and quantization noise, and other nuisance effects makes it difficult to extract a sufficiently discriminative yet robust representation. We introduce a representation based on image gradient directions near robust edges that correspond to characteristic facial features. Robust edges are extracted using a cascade of processing steps, each of which seeks to harness further discriminative information or normalize for a particular source of extra-personal appearance variability. The proposed representation was evaluated on the extremely difficult YaleB data set. Unlike most previous work, we include all available illuminations, perform training using a single image per person, and match against a single probe image. In this challenging evaluation setup, the proposed gradient edge map achieved a 0.8% error rate, demonstrating nearly perfect receiver operating characteristic curve behaviour. This is by far the best performance reported in the literature for this setup, the best previously proposed methods attaining error rates of approximately 6–7%.
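
A short sketch, using NumPy and SciPy with an arbitrary magnitude threshold, of the basic ingredient of the representation described above: gradient directions retained only near strong edges. The paper's full cascade of processing steps is not reproduced here.

```python
import numpy as np
from scipy import ndimage

def gradient_direction_map(face, edge_quantile=0.9):
    """Keep gradient directions only at pixels whose gradient magnitude falls in the
    top (1 - edge_quantile) fraction -- a crude proxy for the paper's robust edges."""
    face = face.astype(float)
    gx = ndimage.sobel(face, axis=1)
    gy = ndimage.sobel(face, axis=0)
    magnitude = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)                     # gradient direction in radians
    edge_mask = magnitude >= np.quantile(magnitude, edge_quantile)
    return np.where(edge_mask, direction, np.nan)      # NaN where no reliable edge

face = np.random.default_rng(3).random((64, 64))
dir_map = gradient_direction_map(face)
print(np.count_nonzero(~np.isnan(dir_map)), "edge pixels kept")
```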

Relevance:

70.00%

Abstract:

Illumination invariance remains the most researched, yet the most challenging, aspect of automatic face recognition. In this paper we propose a novel, general recognition framework for efficient matching of individual face images, sets, or sequences. The framework is based on simple image processing filters that compete with the unprocessed greyscale input to yield a single matching score between individuals. It is shown how the discrepancy in illumination conditions between the novel input and the training data set can be estimated and used to weight the contributions of the two competing representations. We describe an extensive empirical evaluation of the proposed method on 171 individuals and over 1300 video sequences with extreme illumination, pose, and head motion variation. On this challenging data set our algorithm consistently demonstrated a dramatic performance improvement over traditional filtering approaches. We demonstrate a reduction of 50-75% in recognition error rates, with the best performing method-filter combination correctly recognizing 96% of the individuals.
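
A minimal sketch of the score-fusion idea described above: a matching score computed on the unprocessed greyscale input and one computed on a filtered representation are blended, with the weight driven by an estimate of the illumination discrepancy between the novel input and the training set. The discrepancy estimate and the logistic weighting below are placeholders, not the paper's formulation.

```python
import numpy as np

def fused_score(score_raw, score_filtered, illumination_discrepancy, steepness=5.0):
    """Blend two matching scores: the larger the estimated illumination discrepancy,
    the more weight the filtered (illumination-normalized) representation receives."""
    w = 1.0 / (1.0 + np.exp(-steepness * (illumination_discrepancy - 0.5)))  # weight in (0, 1)
    return (1.0 - w) * score_raw + w * score_filtered

# Example: with a large estimated discrepancy, the filtered score dominates.
print(fused_score(score_raw=0.40, score_filtered=0.85, illumination_discrepancy=0.9))
```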

Relevance:

70.00%

Abstract:

The ecotoxicology of nano-TiO2 has been extensively studied in recent years; however, few toxicological investigations have considered the photocatalytic properties of the substance, which can increase its toxicity to aquatic biota. The aim of this work was to evaluate the effects on fish exposed to different nano-TiO2 concentrations and illumination conditions. The interaction of these variables was investigated by observing the survival of the organisms, together with biomarkers of biochemical and genetic alterations. Fish (Piaractus mesopotamicus) were exposed for 96 h to 0, 1, 10, and 100 mg/L of nano-TiO2, under visible light or visible light with ultraviolet (UV) light (22.47 J/cm²/h). The following biomarkers of oxidative stress were monitored in the liver: concentrations of lipid hydroperoxide and carbonylated protein, and specific activities of superoxide dismutase, catalase, and glutathione S-transferase. Other biomarkers of physiological function were also studied: the specific activities of acid phosphatase and Na,K-ATPase were analyzed in the liver and brain, respectively, and the concentration of metallothionein was measured in the gills. In addition, micronucleus and comet assays were performed on blood as genotoxic biomarkers. Nano-TiO2 caused no mortality under any of the conditions tested, but it induced sublethal effects that were influenced by the illumination condition. Under both illumination conditions, exposure to 100 mg/L inhibited acid phosphatase activity. Under visible light, there was an increase in metallothionein level in fish exposed to 1 mg/L of nano-TiO2. Under UV light, protein carbonylation was reduced in the groups exposed to 1 and 10 mg/L, while nuclear alterations in erythrocytes were higher in fish exposed to 10 mg/L. As well as improving the understanding of nano-TiO2 toxicity, the findings demonstrate the importance of considering the experimental conditions in nanoecotoxicological tests. This work provides information for the development of protocols to study substances whose toxicity is affected by illumination conditions.

Relevance:

70.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)