2 results for Nonlinear Dynamic Response

in Helda - Digital Repository of the University of Helsinki


Relevance:

80.00%

Publisher:

Abstract:

Background: When we are viewing natural scenes, every saccade abruptly changes both the mean luminance and the contrast structure falling on any given retinal location. Thus it would be useful if the two were independently encoded by the visual system, even when they change simultaneously. Recordings from single neurons in the cat visual system have suggested that contrast information may be quite independently represented in neural responses to simultaneous changes in contrast and luminance. Here we test to what extent this is true in human perception.

Methodology/Principal Findings: Small contrast stimuli were presented together with a 7-fold upward or downward step of mean luminance (between 185 and 1295 Td, corresponding to 14 and 98 cd/m2), either simultaneously or with various delays (50–800 ms). The perceived contrast of the target under the different conditions was measured with an adaptive staircase method. Over the contrast range 0.1–0.45, mainly subtractive attenuation was found. Perceived contrast decreased by 0.052±0.021 (N = 3) when target onset was simultaneous with the luminance increase. The attenuation subsided within 400 ms, and even faster after luminance decreases, where the effect was also smaller. The main results were robust against differences in target types and the size of the field over which luminance changed.

Conclusions/Significance: Perceived contrast is attenuated mainly by a subtractive term when coincident with a luminance change. The effect is of ecologically relevant magnitude and duration; in other words, strict contrast constancy must often fail during normal human visual behaviour. Still, the relative robustness of the contrast signal is remarkable in view of the limited dynamic response range of retinal cones. We propose a conceptual model for how early retinal signalling may allow this.
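An adaptive staircase of the kind mentioned in the methodology converges on a perceptual threshold by adjusting stimulus contrast after each response. The following is a minimal sketch only: the 1-up/2-down rule, the simulated observer, and the step size are illustrative assumptions, not the study's actual protocol.

```python
import random

def staircase_threshold(true_threshold=0.15, start=0.4, step=0.02, n_reversals=8):
    """1-up/2-down adaptive staircase; converges near the 70.7%-correct level.

    The simulated observer and all parameters are illustrative assumptions,
    not the exact procedure used in the study summarised above.
    """
    level = start
    correct_streak = 0
    reversals = []
    direction = None
    while len(reversals) < n_reversals:
        # Simulated observer: sees the target when its contrast exceeds
        # the true threshold, plus a little response noise.
        seen = level + random.gauss(0, 0.02) > true_threshold
        if seen:
            correct_streak += 1
            if correct_streak == 2:      # two correct in a row -> lower contrast
                correct_streak = 0
                if direction == "up":
                    reversals.append(level)
                direction = "down"
                level = max(level - step, 0.0)
        else:
            correct_streak = 0           # one miss -> raise contrast
            if direction == "down":
                reversals.append(level)
            direction = "up"
            level += step
    # Threshold estimate: mean of the last few reversal points.
    tail = reversals[-6:]
    return sum(tail) / len(tail)

random.seed(0)
est = staircase_threshold()
print(round(est, 3))
```

The 1-up/2-down rule tracks the contrast at which roughly 71% of presentations are detected; averaging the final reversal points gives the threshold estimate.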

Relevance:

30.00%

Publisher:

Abstract:

The paradigm of computational vision hypothesizes that any visual function -- such as the recognition of your grandparent -- can be replicated by computational processing of the visual input. What are these computations that the brain performs? What should or could they be? Working on the latter question, this dissertation takes the statistical approach, in which we attempt to learn the suitable computations from natural visual data itself. In particular, we empirically study the computational processing that emerges from the statistical properties of the visual world and from the constraints and objectives specified for the learning process.

This thesis consists of an introduction and 7 peer-reviewed publications; the purpose of the introduction is to present the area of study to a reader who is not familiar with computational vision research. In the introduction, we briefly overview the primary challenges to visual processing, and recall some current opinions on visual processing in the early visual systems of animals. Next, we describe the methodology used in our research and discuss the presented results. In this discussion we have included some additional remarks, speculations and conclusions that were not featured in the original publications.

We present the following results in the publications of this thesis. First, we empirically demonstrate that luminance and contrast are strongly dependent in natural images, contradicting previous theories suggesting that luminance and contrast were processed separately in natural systems due to their independence in the visual data. Second, we show that simple-cell-like receptive fields of the primary visual cortex can be learned in the nonlinear contrast domain by maximization of independence. Further, we provide the first reports of the emergence of conjunctive (corner-detecting) and subtractive (opponent orientation) processing due to nonlinear projection pursuit with simple objective functions related to sparseness and response energy optimization. Then, we show that attempting to extract independent components of nonlinear histogram statistics of a biologically plausible representation leads to projection directions that appear to differentiate between visual contexts. Such processing might be applicable for priming, i.e., the selection and tuning of later visual processing. We continue by showing that a different kind of thresholded low-frequency priming can be learned and used to make object detection faster with little loss in accuracy. Finally, we show that in a computational object detection setting, nonlinearly gain-controlled visual features of medium complexity can be acquired sequentially as images are encountered and discarded. We present two online algorithms to perform this feature selection, and propose the idea that for artificial systems, some processing mechanisms could be selected from the environment without optimizing the mechanisms themselves.

In summary, this thesis explores learning visual processing on several levels. The learning can be understood as an interplay of input data, model structures, learning objectives, and estimation algorithms. The presented work adds to the growing body of evidence showing that statistical methods can be used to acquire intuitively meaningful visual processing mechanisms. The work also presents some predictions and ideas regarding biological visual processing.
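The first result above, the dependence of luminance and contrast in natural images, rests on measuring both quantities locally over an image. A minimal sketch of such a measurement follows; the choice of non-overlapping patches, patch mean as local luminance, and RMS contrast (standard deviation divided by mean) are common conventions assumed here, not necessarily the thesis's exact estimators, and a synthetic gradient-times-texture image stands in for a real natural-image dataset.

```python
import numpy as np

def local_luminance_contrast(img, patch=16):
    """Split an image into non-overlapping patches and return per-patch
    mean luminance and RMS contrast (std / mean)."""
    h, w = img.shape
    lum, con = [], []
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            p = img[i:i + patch, j:j + patch].astype(float)
            m = p.mean()
            lum.append(m)
            con.append(p.std() / m if m > 0 else 0.0)
    return np.array(lum), np.array(con)

# Stand-in image: a smooth illumination gradient modulating random texture.
# (Actually testing the dependence claim requires real natural images.)
rng = np.random.default_rng(0)
grad = np.linspace(0.2, 1.0, 128)[:, None] * np.ones((1, 128))
img = grad * (1.0 + 0.3 * rng.standard_normal((128, 128)))

lum, con = local_luminance_contrast(img)
r = np.corrcoef(lum, con)[0, 1]
print(round(r, 3))
```

Run over a corpus of natural images, the correlation between the two per-patch statistics quantifies the luminance-contrast dependence the abstract refers to; on the synthetic stand-in above the value is not meaningful, only the computation is.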