9 results for Visual signals

in CentAUR: Central Archive University of Reading - UK


Relevance: 60.00%

Abstract:

The interactions among the multiple factors regulating predator-prey relationships make predation a more complex process than previously thought. The degree to which substandard individuals are captured disproportionately seems to be a function of the difficulty of prey capture rather than of the hunting technique (coursing vs. ambushing predators). That is, when the capture and killing of a prey species is easy, substandard individuals will be predated in proportion to their occurrence in the prey population. In the present study, we made use of eagle owls Bubo bubo and their main prey, the rabbit Oryctolagus cuniculus, and found that: (a) the brightness of the white tails of rabbits seems to be correlated with the physical condition of individuals; (b) by using the tails of predated rabbits as an index of individual condition, we found that eagle owls seem to prefer substandard individuals (characterized by duller tails); and (c) by using information from continuous radiotracking of 14 individuals, we suggest that the difficulty of rabbit capture could be low. Although the relative benefits of preying on substandard individuals should decrease considerably when a predator is attacking an easy prey, we hypothesise that the eagle owl's preference for substandard individuals could be due to the easy detection of poor-condition individuals through a visual cue, the brightness of the rabbit tail. Several elements lead us to believe that this form of visual communication between a prey and one of its main predators could be more widespread than previously thought. In fact: (a) visual signalling plays a relevant role in intraspecific communication in eagle owls and, consequently, visual signals could also play a role in interspecific interactions; and (b) empirical studies have shown that signals may inform the predator that it has been perceived, or that the prey is in a sufficiently healthy state to elude the predator.

Relevance: 60.00%

Abstract:

Models of perceptual decision making often assume that sensory evidence is accumulated over time in favor of the various possible decisions, until the evidence in favor of one of them outweighs the evidence for the others. Saccadic eye movements are among the most frequent perceptual decisions that the human brain performs. We used stochastic visual stimuli to identify the temporal impulse response underlying saccadic eye movement decisions. Observers performed a contrast search task, with temporal variability in the visual signals. In experiment 1, we derived the temporal filter observers used to integrate the visual information. The integration window was restricted to the first ~100 ms after display onset. In experiment 2, we showed that observers cannot perform the task if there is no useful information to distinguish the target from the distractor within this time epoch. We conclude that (1) observers did not integrate sensory evidence up to a criterion level, (2) observers did not integrate visual information up to the start of the saccadic dead time, and (3) variability in saccade latency does not correspond to variability in the visual integration period. Instead, our results support a temporal filter model of saccadic decision making. The temporal impulse response identified by our methods corresponds well with estimates of integration times of V1 output neurons.
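
To make the contrast between the two decision rules concrete, the following is a minimal Python sketch, purely illustrative: the simulated signal, the threshold, the 100 ms window, and the function names are assumptions, not the authors' implementation. It contrasts an accumulate-to-criterion rule with the fixed temporal-filter rule the abstract argues for, in which evidence arriving after the window is simply ignored.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical noisy contrast evidence, one sample per millisecond.
    t_ms = np.arange(400)                                   # 400 ms of display
    evidence = 0.02 + 0.1 * rng.standard_normal(t_ms.size)  # weak signal + noise

    def accumulate_to_threshold(evidence, threshold=2.0):
        # Classic accumulator: integrate until the criterion is reached.
        cumulative = np.cumsum(evidence)
        crossed = np.flatnonzero(cumulative >= threshold)
        return int(crossed[0]) if crossed.size else None    # decision time in ms

    def temporal_filter(evidence, window_ms=100):
        # Fixed integration window: only the first ~100 ms of evidence counts,
        # regardless of when the saccade is eventually launched.
        return evidence[:window_ms].sum()

    print("accumulator decision time:", accumulate_to_threshold(evidence))
    print("temporal-filter decision variable:", temporal_filter(evidence))

Under the filter rule, variability in saccade latency is decoupled from the integration period, which corresponds to conclusion (3) above.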

Relevance: 30.00%

Abstract:

During locomotion, retinal flow, gaze angle, and vestibular information can contribute to one's perception of self-motion. Their respective roles were investigated during active steering: Retinal flow and gaze angle were biased by altering the visual information during computer-simulated locomotion, and vestibular information was controlled through use of a motorized chair that rotated the participant around his or her vertical axis. Chair rotation was made appropriate for the steering response of the participant or made inappropriate by rotating a proportion of the veridical amount. Large steering errors resulted from selective manipulation of retinal flow and gaze angle, and the pattern of errors provided strong evidence for an additive model of combination. Vestibular information had little or no effect on steering performance, suggesting that vestibular signals are not integrated with visual information for the control of steering at these speeds.
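
As a minimal sketch of what an additive combination model predicts, consider the following Python snippet; the weights and the function name are made up for illustration and are not values from the study. The point is simply that a bias injected into either cue shifts the perceived direction, and hence the steering response, by a fixed proportion of that bias.

    def perceived_direction(retinal_flow_deg, gaze_angle_deg,
                            w_flow=0.5, w_gaze=0.5):
        # Additive model: the two visual signals are summed with fixed weights,
        # so a bias on either cue propagates linearly into the estimate.
        return w_flow * retinal_flow_deg + w_gaze * gaze_angle_deg

    # Biasing retinal flow by +10 deg shifts the estimate by w_flow * 10 deg,
    # independently of the gaze signal - the signature of additive combination.
    error = perceived_direction(10.0, 0.0) - perceived_direction(0.0, 0.0)
    print("predicted steering error:", error, "deg")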

Relevance: 30.00%

Abstract:

The contribution of retinal flow (RF), extraretinal (ER), and egocentric visual direction (VD) information to locomotor control was explored. First, the recovery of heading from RF was examined when ER information was manipulated; results confirmed that ER signals affect heading judgments. Then the task was translated to steering curved paths, and the availability and veracity of VD were manipulated with either degraded or systematically biased RF. Large steering errors resulted from selective manipulation of RF and VD, providing strong evidence for the combination of RF, ER, and VD. The relative weighting applied to RF and VD was estimated. A point-attractor model is proposed that combines redundant sources of information for robust locomotor control with flexible trajectory planning through active gaze.
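
As an illustration of a weighted combination with point-attractor dynamics, here is a small Python sketch; the weights, gains, time step, and function name are assumed for illustration and are not the fitted values from the study.

    def steering_step(rate, heading_error_rf, direction_error_vd,
                      w_rf=0.6, w_vd=0.4, stiffness=2.0, damping=3.0, dt=0.05):
        # Point-attractor idea: the weighted error acts like a spring pulling
        # the steering rate toward the state where both errors are nulled,
        # while damping keeps the trajectory smooth.
        combined_error = w_rf * heading_error_rf + w_vd * direction_error_vd
        accel = stiffness * combined_error - damping * rate
        return rate + accel * dt

    # One control step with 5 deg of retinal-flow error and 2 deg of
    # visual-direction error, starting from a zero steering rate.
    print(steering_step(rate=0.0, heading_error_rf=5.0, direction_error_vd=2.0))

Because the attractor is defined over the combined error, degrading one source of information changes the weighting rather than breaking the controller, which is the sense in which the redundant cues support robust locomotor control.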

Relevance: 30.00%

Abstract:

Visual control of locomotion is essential for most mammals and requires coordination between perceptual processes and action systems. Previous research on the neural systems engaged by self-motion has focused on heading perception, which is only one perceptual subcomponent. For effective steering, it is necessary to perceive an appropriate future path and then bring about the required change to heading. Using functional magnetic resonance imaging in humans, we reveal a role for the parietal eye fields (PEFs) in directing spatially selective processes relating to future path information. A parietal area close to the PEFs appears to be specialized for processing the future path information itself. Furthermore, a separate parietal area responds to visual position error signals, which occur when steering adjustments are imprecise. A network of three areas, the cerebellum, the supplementary eye fields, and dorsal premotor cortex, was found to be involved in generating appropriate motor responses for steering adjustments. This may reflect the demands of integrating visual inputs with the output response for the control device.

Relevance: 30.00%

Abstract:

Embodied theories of cognition propose that neural substrates used in experiencing the referent of a word, for example perceiving upward motion, should be engaged in weaker form when that word, for example ‘rise’, is comprehended. Motivated by the finding that the perception of irrelevant background motion at near-threshold, but not supra-threshold, levels interferes with task execution, we assessed whether interference from near-threshold background motion was modulated by its congruence with the meaning of words (semantic content) when participants completed a lexical decision task (deciding if a string of letters is a real word or not). Reaction times for motion words, such as ‘rise’ or ‘fall’, were slower when the direction of visual motion and the ‘motion’ of the word were incongruent, but only when the visual motion was at near-threshold levels. When motion was supra-threshold, the distribution of error rates, not reaction times, implicated low-level motion processing in the semantic processing of motion words. As the perception of near-threshold signals is not likely to be influenced by strategies, our results support a close contact between semantic information and perceptual systems.

Relevance: 30.00%

Abstract:

Voluntary selective attention can prioritize different features in a visual scene. The frontal eye fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as was shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to the right FEF increased blood oxygen level-dependent (BOLD) signals in visual areas processing the "target feature" but not in "distracter feature"-processing regions. TMS increased BOLD signals in motion-responsive visual cortex (MT+) when motion was attended in a display of moving dots superimposed on face stimuli, but in the face-responsive fusiform face area (FFA) when faces were attended. These TMS effects on BOLD signals in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for a role of the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property.

Relevance: 30.00%

Abstract:

Clinical evidence suggests that a persistent search for solutions for chronic pain may carry costs at the cognitive, affective, and behavioral levels. Specifically, attempts to control pain may fuel hypervigilance and prioritize attention towards pain-related information. This hypothesis was investigated in an experiment with 41 healthy volunteers. Prioritization of attention towards a signal for pain was measured using an adaptation of a visual search paradigm in which participants had to search for a target presented among a varying number of colored circles. One of these colors (Conditioned Stimulus) became a signal for pain (Unconditioned Stimulus: an electrocutaneous stimulus at tolerance level) through a classical conditioning procedure. Intermixed with the visual search task, participants also performed a second task. In the pain-control group, participants were informed that correct and fast responses on trials of this second task would result in avoidance of the Unconditioned Stimulus. In the comparison group, performance on the second task was not instrumental in controlling pain. Results showed that in the pain-control group, attention was more strongly prioritized towards the Conditioned Stimulus than in the comparison group. The theoretical and clinical implications of these results are discussed.

Relevance: 30.00%

Abstract:

During the past decade, brain–computer interfaces (BCIs) have developed rapidly, in both the technological and application domains. However, most of these interfaces rely on the visual modality. Only a few research groups have been studying non-visual BCIs, based primarily on auditory and, sometimes, somatosensory signals. These non-visual BCI approaches are especially useful for severely disabled patients with poor vision. From a broader perspective, multisensory BCIs may offer more versatile and user-friendly paradigms for control and feedback. This chapter describes current systems used within auditory and somatosensory BCI research. Four categories of noninvasive BCI paradigms are employed: (1) P300 evoked potentials, (2) steady-state evoked potentials, (3) slow cortical potentials, and (4) mental tasks. Comparing visual and non-visual BCIs, we propose and discuss different possible multisensory combinations, as well as their pros and cons. We conclude by discussing potential future research directions of multisensory BCIs and related research questions.
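
As one concrete, deliberately simplified illustration of the second paradigm category (steady-state evoked potentials), here is a Python sketch of frequency-tagged target selection; the sampling rate, tagging frequencies, and the simulated single-channel EEG are all assumptions made for illustration and are not taken from the chapter.

    import numpy as np

    fs = 250.0                         # assumed EEG sampling rate in Hz
    t = np.arange(0, 4.0, 1.0 / fs)    # one 4-second epoch
    tag_freqs = [10.0, 12.0, 15.0]     # one tagging frequency per stimulus

    # Simulated single-channel EEG in which the 12 Hz stimulus is attended.
    rng = np.random.default_rng(1)
    eeg = np.sin(2 * np.pi * 12.0 * t) + rng.standard_normal(t.size)

    # Steady-state responses show up as power at the tagging frequencies,
    # so the attended stimulus is the one with the largest spectral peak.
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(t.size, d=1.0 / fs)
    power = [spectrum[np.argmin(np.abs(freqs - f))] for f in tag_freqs]
    print("selected stimulus:", tag_freqs[int(np.argmax(power))], "Hz")

The same frequency-tagging logic applies whether the stimuli are visual flickers or, as in the non-visual BCIs discussed here, auditory or somatosensory streams modulated at distinct rates.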