1000 results for visual explanations


Relevance: 60.00%

Abstract:

The current paper presents a study conducted at CERN, Switzerland, to investigate visitors' and tour guides' use and appreciation of the existing panels at points along the visit itinerary. The results were used to develop a set of recommendations for constructing optimal panels to support the guides' explanations.

Relevance: 30.00%

Abstract:

The overall aim of this dissertation was to study the public's preferences for forest regeneration fellings and field afforestations, and to examine how these preferences relate to landscape management instructions, to ecological healthiness, and to contemporary theories for predicting landscape preferences. The dissertation includes four case studies in Finland, each based on visualizations of management options and on surveys. Guidelines for improving the visual quality of forest regeneration and field afforestation are given based on the case studies. The results show that forest regeneration can be connected to positive images and memories when the regeneration area is small and some time has passed since the felling. Preferences may depend not only on the management alternative itself but also on the viewing distance, the viewing point, and the scene in which the management options are implemented. The current Finnish forest landscape management guidelines, as well as the ecological healthiness of the studied options, are to a large extent compatible with the public's preferences. However, there are some discrepancies. For example, both the landscape management instructions and ecological hypotheses suggest that retention trees should be left in groups, whereas people usually prefer individually placed retention trees to trees left in groups. Information and psycho-evolutionary theories provide some possible explanations for people's preferences for forest regeneration and field afforestation, but the results cannot be consistently explained by these theories. The preferences of the different stakeholder groups were very similar. However, the preference ratings of the groups that make their living from forests (forest owners and forest professionals) differed slightly from those of the others.
These results support the assumption that preferences are largely consistent, at least within one nation, but that knowledge and a reference group may also influence preferences.

Relevance: 30.00%

Abstract:

While searching for objects, we combine information from multiple visual modalities. Classical theories of visual search assume that features are processed independently prior to an integration stage. Based on this, one would predict that features that are equally discriminable in single-feature search should remain so in conjunction search. We test this hypothesis by examining whether search accuracy in feature search predicts accuracy in conjunction search. Subjects searched for objects combining color and orientation or size; eye movements were recorded. Prior to the main experiment, we matched feature discriminability, making sure that in feature search, 70% of saccades were likely to go to the correct target stimulus. In contrast to this symmetric single-feature discrimination performance, the conjunction search task showed an asymmetry in feature discrimination performance: in conjunction search, a similar percentage of saccades went to the correct color as in feature search, but much less often to the correct orientation or size. Therefore, accuracy in feature search is a good predictor of accuracy in conjunction search for color, but not for size and orientation. We propose two explanations for the presence of such asymmetries in conjunction search: the use of conjunctively tuned channels and differential crowding effects for different features.
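The core analysis in the abstract above — comparing first-saccade accuracy for each feature in feature vs. conjunction search — reduces to a proportion computation over saccade landing positions. The sketch below is purely illustrative, with made-up landing data; the function name and numbers are hypothetical, not from the study.

```python
def saccade_accuracy(landings, target_feature):
    """Fraction of first saccades landing on items that share
    target_feature with the search target."""
    return sum(1 for item in landings if target_feature in item) / len(landings)

# Hypothetical first-saccade landings in a color x orientation conjunction
# task; each entry lists the target features the fixated item shared.
landings = [
    {"color"}, {"color", "orientation"}, {"color"}, {"orientation"},
    {"color"}, {"color", "orientation"}, {"color"}, set(), {"color"}, {"color"},
]
color_acc = saccade_accuracy(landings, "color")              # 0.8
orientation_acc = saccade_accuracy(landings, "orientation")  # 0.3
```

Under the independent-features prediction the two accuracies should match the (matched) feature-search baseline; an asymmetry like the one above is the pattern the abstract reports for orientation and size.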

Relevance: 30.00%

Abstract:

This article describes further evidence for a new neural network theory of biological motion perception that is called a Motion Boundary Contour System. This theory clarifies why parallel streams V1 -> V2 and V1 -> MT exist for static form and motion form processing among the areas V1, V2, and MT of visual cortex. The Motion Boundary Contour System consists of several parallel copies, such that each copy is activated by a different range of receptive field sizes. Each copy is further subdivided into two hierarchically organized subsystems: a Motion Oriented Contrast Filter, or MOC Filter, for preprocessing moving images; and a Cooperative-Competitive Feedback Loop, or CC Loop, for generating emergent boundary segmentations of the filtered signals. The present article uses the MOC Filter to explain a variety of classical and recent data about short-range and long-range apparent motion percepts that have not yet been explained by alternative models. These data include split motion; reverse-contrast gamma motion; delta motion; visual inertia; group motion in response to a reverse-contrast Ternus display at short interstimulus intervals; speed-up of motion velocity as interflash distance increases or flash duration decreases; dependence of the transition from element motion to group motion on stimulus duration and size; various classical dependencies between flash duration, spatial separation, interstimulus interval, and motion threshold known as Korte's Laws; and dependence of motion strength on stimulus orientation and spatial frequency.
These results supplement earlier explanations by the model of apparent motion data that other models have not explained; a recently proposed solution of the global aperture problem, including explanations of motion capture and induced motion; an explanation of how parallel cortical systems for static form perception and motion form perception may develop, including a demonstration that these parallel systems are variations on a common cortical design; an explanation of why the geometries of static form and motion form differ, in particular why opposite orientations differ by 90°, whereas opposite directions differ by 180°, and why a cortical stream V1 -> V2 -> MT is needed; and a summary of how the main properties of other motion perception models can be assimilated into different parts of the Motion Boundary Contour System design.

Relevance: 30.00%

Abstract:

As we look around a scene, we perceive it as continuous and stable even though each saccadic eye movement changes the visual input to the retinas. How the brain achieves this perceptual stabilization is unknown, but a major hypothesis is that it relies on presaccadic remapping, a process in which neurons shift their visual sensitivity to a new location in the scene just before each saccade. This hypothesis is difficult to test in vivo because complete, selective inactivation of remapping is currently intractable. We tested it in silico with a hierarchical, sheet-based neural network model of the visual and oculomotor system. The model generated saccadic commands to move a video camera abruptly. Visual input from the camera and internal copies of the saccadic movement commands, or corollary discharge, converged at a map-level simulation of the frontal eye field (FEF), a primate brain area known to receive such inputs. FEF output was combined with eye position signals to yield a suitable coordinate frame for guiding arm movements of a robot. Our operational definition of perceptual stability was "useful stability," quantified as continuously accurate pointing to a visual object despite camera saccades. During training, the emergence of useful stability was correlated tightly with the emergence of presaccadic remapping in the FEF. Remapping depended on corollary discharge, but its timing was synchronized to the updating of eye position. When coupled to predictive eye position signals, remapping served to stabilize the target representation for continuously accurate pointing. Graded inactivations of pathways in the model replicated, and helped to interpret, previous in vivo experiments. The results support the hypothesis that visual stability requires presaccadic remapping, provide explanations for the function and timing of remapping, and offer testable hypotheses for in vivo studies.
We conclude that remapping allows for seamless coordinate frame transformations and quick actions despite visual afferent lags. With visual remapping in place for behavior, it may be exploited for perceptual continuity.
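The coordinate logic this abstract describes — combining a retinotopic target with eye position for pointing, and shifting the retinotopic representation by the corollary discharge just before a saccade — can be sketched numerically. This is a minimal illustration under simplifying assumptions (2-D vectors, purely additive coordinates), not the authors' sheet-based network model; all names and numbers are hypothetical.

```python
def head_centered(target_retinotopic, eye_position):
    """Retinotopic target location + current eye position gives a
    head-centered coordinate suitable for guiding a pointing movement."""
    return tuple(t + e for t, e in zip(target_retinotopic, eye_position))

def presaccadic_remap(target_retinotopic, corollary_discharge):
    """Just before a saccade, shift the retinotopic representation by the
    corollary discharge (an internal copy of the saccade vector), so the
    head-centered readout stays constant across the eye movement."""
    return tuple(t - c for t, c in zip(target_retinotopic, corollary_discharge))

# Target at (5, 2) on the retina, eyes at (0, 0); a saccade of (3, -1).
target, eye, saccade = (5.0, 2.0), (0.0, 0.0), (3.0, -1.0)
new_eye = tuple(e + s for e, s in zip(eye, saccade))
before = head_centered(target, eye)
after = head_centered(presaccadic_remap(target, saccade), new_eye)
# before == after: the world-referenced location is stable across the saccade.
```

The point of the sketch is that remapping with corollary discharge keeps the pointing coordinate constant even before the new eye position is registered, which is how the model achieves "useful stability" despite visual afferent lags.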

Relevance: 30.00%

Abstract:

Recent theories propose that semantic representation and sensorimotor processing have a common substrate via simulation. We tested the prediction that comprehension interacts with perception, using a standard psychophysics methodology. While passively listening to verbs that referred to upward or downward motion, and to control verbs that did not refer to motion, 20 subjects performed a motion-detection task, indicating whether or not they saw motion in visual stimuli containing threshold levels of coherent vertical motion. A signal detection analysis revealed that when verbs were directionally incongruent with the motion signal, perceptual sensitivity was impaired. Word comprehension also affected decision criteria and reaction times, but in different ways. The results are discussed with reference to existing explanations of embodied processing and the potential of psychophysical methods for assessing interactions between language and perception.
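A signal detection analysis of the kind mentioned above typically derives sensitivity (d′) and decision criterion (c) from hit and false-alarm rates. The sketch below is a generic illustration of that standard computation (with a log-linear correction for extreme rates), not the study's actual analysis code; the function name and counts are hypothetical.

```python
from statistics import NormalDist  # standard library (Python 3.8+)

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' and criterion c from trial counts, using a
    log-linear correction so rates of 0 or 1 stay finite."""
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Symmetric performance (40/50 hits, 10/50 false alarms): d' well above
# chance, criterion essentially unbiased.
d, c = dprime_and_criterion(hits=40, misses=10, false_alarms=10,
                            correct_rejections=40)
```

In this framework, incongruent verbs impairing perceptual sensitivity would appear as a lower d′, while effects on decision criteria would appear in c — which is how a sensitivity effect is separated from a response-bias effect.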

Relevance: 30.00%

Abstract:

Graduate program in Movement Sciences (Ciências da Motricidade) - IBRC

Relevance: 30.00%

Abstract:

According to Attentional Control Theory (ACT; Eysenck et al., 2007), performance efficiency decreases in high-anxiety situations because worrying thoughts compete for attentional resources. A repeated-measures design (high/low state anxiety and high/low perceptual task demands) was used to test ACT's explanations. Complex football situations were displayed to expert and non-expert football players in a decision-making task in a controlled laboratory setting. Ratings of state anxiety and pupil-diameter measures were used to check the anxiety manipulation. Dependent variables were verbal response time and accuracy, mental effort ratings, and visual search behavior (e.g., visual search rate). Results confirmed that an increase in anxiety, indicated by higher state-anxiety ratings and larger pupil diameters, reduced processing efficiency for both groups (higher response times and mental effort ratings). Moreover, high task demands reduced the ability to shift attention between different locations for the expert group in the high-anxiety condition only. Because it was precisely the experts, who were expected to rely more on top-down strategies to guide visual attention under high perceptual task demands, who showed fewer attentional shifts in the high- than in the low-anxiety condition, as predicted by ACT, anxiety seems to impair the shifting function by disrupting the balance between top-down and bottom-up processes.

Relevance: 30.00%

Abstract:

Motor-performance-enhancing effects of long final fixations before movement initiation – a phenomenon called Quiet Eye (QE) – have repeatedly been demonstrated. Drawing on the information-processing framework, it is assumed that the QE supports information processing, as revealed by the close link between QE duration and task demands concerning, in particular, response selection and movement parameterisation. However, the question remains whether the suggested mechanism also holds for processes of stimulus identification. Thus, in a series of two experiments, performance in a targeting task was tested as a function of experimentally manipulated visual processing demands as well as experimentally manipulated QE durations. The results support the suggested link, because a performance-enhancing QE effect was found under increased visual processing demands only: whereas QE duration did not affect performance as long as positional information was preserved (Experiment 1), in the comparison of full vs. no target visibility, QE efficiency turned out to depend on information processing time as soon as the interval fell below a certain threshold (Experiment 2). Thus, the results contradict alternative (e.g., posture-based) explanations of QE effects and support the assumption that the crucial mechanism behind the QE phenomenon is rooted in the cognitive domain.


Relevance: 30.00%

Abstract:

People possess different sensory modalities to detect, interpret, and efficiently act upon various events in a complex and dynamic environment (Fetsch, DeAngelis, & Angelaki, 2013). Much empirical work has been done to understand the interplay of modalities (e.g., audio-visual interactions; see Calvert, Spence, & Stein, 2004). On the one hand, integration of multimodal input as a functional principle of the brain enables the versatile and coherent perception of the environment (Lewkowicz & Ghazanfar, 2009). On the other hand, sensory integration does not necessarily mean that input from the modalities is always weighted equally (Ernst, 2008). Rather, when two or more modalities are stimulated concurrently, one often finds one modality dominating another. Studies 1 and 2 of the dissertation addressed the developmental trajectory of sensory dominance. In both studies, 6-year-olds, 9-year-olds, and adults were tested in order to examine sensory (audio-visual) dominance across different age groups. In Study 3, sensory dominance was put into an applied context by examining verbal and visual overshadowing effects among 4- to 6-year-olds performing a face recognition task. The results of Studies 1 and 2 support a default auditory dominance in young children, as proposed by Napolitano and Sloutsky (2004), that persists up to 6 years of age. For 9-year-olds, results on privileged modality processing were inconsistent: whereas visual dominance was revealed in Study 1, privileged auditory processing was revealed in Study 2. Among adults, visual dominance was observed in Study 1, as has also been demonstrated in preceding studies (see Spence, Parise, & Chen, 2012); no sensory dominance was revealed in Study 2 for adults. Potential explanations are discussed. Study 3 concerned verbal and visual overshadowing effects in 4- to 6-year-olds.
The aim was to examine whether verbalization (i.e., verbally describing a previously seen face), or visualization (i.e., drawing the seen face) might affect later face recognition. No effect of visualization on recognition accuracy was revealed. As opposed to a verbal overshadowing effect, a verbal facilitation effect occurred. Moreover, verbal intelligence was a significant predictor for recognition accuracy in the verbalization group but not in the control group. This suggests that strengthening verbal intelligence in children can pay off in non-verbal domains as well, which might have educational implications.