32 results for Visual stimuli
in CentAUR: Central Archive University of Reading - UK
Abstract:
Event-related desynchronization (ERD) of the electroencephalogram (EEG) from the motor cortex is associated with execution, observation, and mental imagery of motor tasks. Generation of ERD by motor imagery (MI) has been widely used for brain-computer interfaces (BCIs) linked to neuroprosthetics and other motor assistance devices. Control of MI-based BCIs can be acquired by neurofeedback training to reliably induce MI-associated ERD. To develop more effective training conditions, we investigated the effect of static and dynamic visual representations of target movements (a picture of forearms or a video clip of hand grasping movements) during the BCI training. After 4 consecutive training days, the group that performed MI while viewing the video showed significant improvement in generating MI-associated ERD compared with the group that viewed the static image. This result suggests that passively observing the target movement during MI would improve the associated mental imagery and enhance MI-based BCI skills.
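ERD is conventionally quantified as the percent change in band power during the task relative to a pre-task baseline (negative values indicating desynchronization). A minimal sketch of this standard computation follows; the function name and numeric values are illustrative, not taken from the study:

```python
import numpy as np

def erd_percent(task_power, baseline_power):
    """Classic ERD quantification: percent band-power change during
    the task relative to a pre-task reference interval.
    Negative values indicate desynchronization."""
    ref = np.mean(baseline_power)
    return (np.mean(task_power) - ref) / ref * 100.0

# Hypothetical example: mu-band power drops from 10 to 6 uV^2 during MI
baseline = np.array([10.0, 10.0, 10.0])
task = np.array([6.0, 6.0, 6.0])
print(erd_percent(task, baseline))  # -> -40.0
```

In a neurofeedback setting, a value like -40% would be fed back to the participant as an index of how strongly motor imagery modulated the sensorimotor rhythm.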
Abstract:
Perirhinal cortex in monkeys has been thought to be involved in visual associative learning. The authors examined rats' ability to make associations between visual stimuli in a visual secondary reinforcement task. Rats learned 2-choice visual discriminations for secondary visual reinforcement. They showed significant learning of discriminations before any primary reinforcement. Following bilateral perirhinal cortex lesions, rats continued to learn visual discriminations for visual secondary reinforcement at the same rate as before surgery. Thus, this study does not support a critical role of perirhinal cortex in learning for visual secondary reinforcement. Contrasting this result with other positive results, the authors suggest that the role of perirhinal cortex is in "within-object" associations and that it plays a much lesser role in stimulus-stimulus associations between objects.
Abstract:
Recent theories propose that semantic representation and sensorimotor processing have a common substrate via simulation. We tested the prediction that comprehension interacts with perception, using a standard psychophysics methodology. While passively listening to verbs that referred to upward or downward motion, and to control verbs that did not refer to motion, 20 subjects performed a motion-detection task, indicating whether or not they saw motion in visual stimuli containing threshold levels of coherent vertical motion. A signal detection analysis revealed that when verbs were directionally incongruent with the motion signal, perceptual sensitivity was impaired. Word comprehension also affected decision criteria and reaction times, but in different ways. The results are discussed with reference to existing explanations of embodied processing and the potential of psychophysical methods for assessing interactions between language and perception.
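The signal detection analysis mentioned above separates perceptual sensitivity (d') from the decision criterion (c) in a yes/no detection task. A minimal sketch of the standard computation follows; the trial counts and the log-linear correction are illustrative assumptions, not the authors' exact procedure:

```python
from statistics import NormalDist

def dprime_criterion(hits, misses, false_alarms, correct_rejections):
    """Yes/no signal detection indices:
    d' = z(hit rate) - z(false-alarm rate)   (perceptual sensitivity)
    c  = -0.5 * (z(hit rate) + z(FA rate))   (decision criterion)
    A log-linear correction keeps rates away from 0 and 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Hypothetical counts for an incongruent-verb condition
d_incong, c_incong = dprime_criterion(32, 18, 12, 38)
```

Comparing d' across congruent and incongruent verb conditions is what licenses the abstract's claim that incongruent verbs impaired sensitivity rather than merely shifting the criterion.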
Abstract:
Dorsolateral prefrontal cortex (DLPFC) is recruited during visual working memory (WM) when relevant information must be maintained in the presence of distracting information. The mechanism by which DLPFC might ensure successful maintenance of the contents of WM is, however, unclear; it might enhance neural maintenance of memory targets or suppress processing of distracters. To adjudicate between these possibilities, we applied time-locked transcranial magnetic stimulation (TMS) during functional MRI, an approach that permits causal assessment of a stimulated brain region's influence on connected brain regions, and evaluated how this influence may change under different task conditions. Participants performed a visual WM task requiring retention of visual stimuli (faces or houses) across a delay during which visual distracters could be present or absent. When distracters were present, they were always from the opposite stimulus category, so that targets and distracters were represented in distinct posterior cortical areas. We then measured whether DLPFC-TMS, administered in the delay at the time point when distracters could appear, would modulate posterior regions representing memory targets or distracters. We found that DLPFC-TMS influenced posterior areas only when distracters were present and, critically, that this influence consisted of increased activity in regions representing the current memory targets. DLPFC-TMS did not affect regions representing current distracters. These results provide a new line of causal evidence for a top-down DLPFC-based control mechanism that promotes successful maintenance of relevant information in WM in the presence of distraction.
Abstract:
Spontaneous activity of the brain at rest frequently has been considered a mere backdrop to the salient activity evoked by external stimuli or tasks. However, the resting state of the brain consumes most of its energy budget, which suggests a far more important role. An intriguing hint comes from experimental observations of spontaneous activity patterns, which closely resemble those evoked by visual stimulation with oriented gratings, except that cortex appeared to cycle between different orientation maps. Moreover, patterns similar to those evoked by the behaviorally most relevant horizontal and vertical orientations occurred more often than those corresponding to oblique angles. We hypothesize that this kind of spontaneous activity develops at least to some degree autonomously, providing a dynamical reservoir of cortical states, which are then associated with visual stimuli through learning. To test this hypothesis, we use a biologically inspired neural mass model to simulate a patch of cat visual cortex. Spontaneous transitions between orientation states were induced by modest modifications of the neural connectivity, establishing a stable heteroclinic channel. Significantly, the experimentally observed greater frequency of states representing the behaviorally important horizontal and vertical orientations emerged spontaneously from these simulations. We then applied bar-shaped inputs to the model cortex and used Hebbian learning rules to modify the corresponding synaptic strengths. After unsupervised learning, different bar inputs reliably and exclusively evoked their associated orientation state; whereas in the absence of input, the model cortex resumed its spontaneous cycling. We conclude that the experimentally observed similarities between spontaneous and evoked activity in visual cortex can be explained as the outcome of a learning process that associates external stimuli with a preexisting reservoir of autonomous neural activity states. 
Our findings hence demonstrate how cortical connectivity can link the maintenance of spontaneous activity in the brain mechanistically to its core cognitive functions.
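The core learning step in the model is a Hebbian association between external bar inputs and preexisting spontaneous activity states. A toy sketch of that idea follows; the vectors, dimensions, and learning rate are illustrative and do not reproduce the paper's neural mass model:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sketch: associate two 'bar' inputs with two preexisting cortical
# state vectors via plain Hebbian outer-product updates (dW = eta * post * pre).
n_in, n_state = 8, 8
inputs = np.eye(n_in)[:2]                    # two orthogonal bar inputs
states = rng.standard_normal((2, n_state))   # two spontaneous activity states

W = np.zeros((n_state, n_in))
eta = 0.5
for x, s in zip(inputs, states):
    W += eta * np.outer(s, x)                # Hebbian update

# After learning, an input preferentially evokes its paired state:
evoked = W @ inputs[0]
sims = [evoked @ s / (np.linalg.norm(evoked) * np.linalg.norm(s))
        for s in states]
```

Because the two inputs are orthogonal, input 0 evokes a pattern perfectly aligned with its paired state and only incidentally aligned with the other, mirroring the abstract's claim that each bar input "reliably and exclusively" evoked its associated orientation state.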
Abstract:
Virtual Reality (VR) can provide visual stimuli for EEG studies that can be altered in real time and can produce effects that are difficult or impossible to reproduce in a non-virtual experimental platform. As part of this experiment, the Oculus Rift, a commercial-grade, low-cost Head Mounted Display (HMD), was assessed as a visual stimulus platform for experiments recording EEG. The device was then used to investigate the effect of congruent visual stimuli on Event-Related Desynchronisation (ERD) due to motor imagery.
Abstract:
Recent studies have identified a distributed network of brain regions thought to support cognitive reappraisal processes underlying emotion regulation in response to affective images, including parieto-temporal regions and lateral/medial regions of prefrontal cortex (PFC). A number of these commonly activated regions are also known to underlie visuospatial attention and oculomotor control, which raises the possibility that people use attentional redeployment rather than, or in addition to, reappraisal as a strategy to regulate emotion. We predicted that a significant portion of the observed variance in brain activation during emotion regulation tasks would be associated with differences in how participants visually scan the images while regulating their emotions. We recorded brain activation using fMRI and quantified patterns of gaze fixation while participants increased or decreased their affective response to a set of affective images. fMRI results replicated previous findings on emotion regulation with regulation differences reflected in regions of PFC and the amygdala. In addition, our gaze fixation data revealed that when regulating, individuals changed their gaze patterns relative to a control condition. Furthermore, this variation in gaze fixation accounted for substantial amounts of variance in brain activation. These data point to the importance of controlling for gaze fixation in studies of emotion regulation that use visual stimuli.
Abstract:
In this research, a cross-modal paradigm was chosen to test the hypothesis that affective olfactory and auditory cues paired with neutral visual stimuli bearing no resemblance or logical connection to the affective cues can evoke preference shifts in those stimuli. Neutral visual stimuli of abstract paintings were presented simultaneously with liked and disliked odours and sounds, with neutral-neutral pairings serving as controls. The results confirm previous findings that the affective evaluation of previously neutral visual stimuli shifts in the direction of contingently presented affective auditory stimuli. In addition, this research shows the presence of conditioning with affective odours having no logical connection with the pictures.
Abstract:
Several recent hypotheses, including sensory drive and sensory exploitation, suggest that receiver biases may drive selection of biological signals in the context of sexual selection. Here we suggest that a similar mechanism may have led to convergence of patterns in flowers, stingless bee nest entrances, and pitchers of insectivorous plants. A survey of these non-related visual stimuli shows that they share features such as stripes, dark centre, and peripheral dots. Next, we experimentally show that in stingless bees the close-up approach to a flower is guided by dark centre preference. Moreover, in the approach towards their nest entrance, they have a spontaneous preference for entrance patterns containing a dark centre and disrupted ornamentation. Together with existing empirical evidence on the honeybee's and other insects' orientation to flowers, this suggests that the signal receivers of the natural patterns we examined, mainly Hymenoptera, have spontaneous preferences for radiating stripes, dark centres, and peripheral dots. These receiver biases may have evolved in other behavioural contexts in the ancestors of Hymenoptera, but our findings suggest that they have triggered the convergent evolution of visual stimuli in floral guides, stingless bee nest entrances, and insectivorous pitchers.
Abstract:
One of the most common decisions we make is the one about where to move our eyes next. Here we examine the impact that processing the evidence supporting competing options has on saccade programming. Participants were asked to saccade to one of two possible visual targets indicated by a cloud of moving dots. We varied the evidence which supported saccade target choice by manipulating the proportion of dots moving towards one target or the other. The task was found to become easier as the evidence supporting target choice increased. This was reflected in an increase in percent correct and a decrease in saccade latency. The trajectory and landing position of saccades were found to deviate away from the non-selected target reflecting the choice of the target and the inhibition of the non-target. The extent of the deviation was found to increase with amount of sensory evidence supporting target choice. This shows that decision-making processes involved in saccade target choice have an impact on the spatial control of a saccade. This would seem to extend the notion of the processes involved in the control of saccade metrics beyond a competition between visual stimuli to one also reflecting a competition between options.
Abstract:
Models of perceptual decision making often assume that sensory evidence is accumulated over time in favor of the various possible decisions, until the evidence in favor of one of them outweighs the evidence for the others. Saccadic eye movements are among the most frequent perceptual decisions that the human brain performs. We used stochastic visual stimuli to identify the temporal impulse response underlying saccadic eye movement decisions. Observers performed a contrast search task, with temporal variability in the visual signals. In experiment 1, we derived the temporal filter observers used to integrate the visual information. The integration window was restricted to the first ~100 ms after display onset. In experiment 2, we showed that observers cannot perform the task if there is no useful information to distinguish the target from the distractor within this time epoch. We conclude that (1) observers did not integrate sensory evidence up to a criterion level, (2) observers did not integrate visual information up to the start of the saccadic dead time, and (3) variability in saccade latency does not correspond to variability in the visual integration period. Instead, our results support a temporal filter model of saccadic decision making. The temporal impulse response identified by our methods corresponds well with estimates of integration times of V1 output neurons.
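The temporal filter account contrasts with accumulation-to-bound models: only stimulus information falling inside a fixed window (~100 ms from display onset) drives the decision, and later evidence is ignored. A minimal sketch, assuming a box-shaped filter and illustrative timings:

```python
import numpy as np

def windowed_evidence(signal, dt_ms=10, window_ms=100):
    """Temporal-filter sketch: sum the stimulus signal only within a
    fixed window after display onset; evidence arriving later does not
    affect the decision variable. The box filter and 100-ms window are
    illustrative assumptions, not the fitted impulse response."""
    n_samples = int(window_ms / dt_ms)
    return float(signal[:n_samples].sum())

# Target information inside vs outside the integration window
early = np.array([1.0] * 10 + [0.0] * 10)  # informative first 100 ms
late = np.array([0.0] * 10 + [1.0] * 10)   # informative only after 100 ms
```

With this scheme `early` yields a large decision variable while `late` yields none, mirroring experiment 2's finding that the task becomes impossible when the useful information falls outside the first ~100 ms.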
Abstract:
We investigated whether attention shifts and eye movement preparation are mediated by shared control mechanisms, as claimed by the premotor theory of attention. ERPs were recorded in three tasks where directional cues presented at the beginning of each trial instructed participants to direct their attention to the cued side without eye movements (Covert task), to prepare an eye movement in the cued direction without attention shifts (Saccade task) or both (Combined task). A peripheral visual Go/Nogo stimulus that was presented 800 ms after cue onset signalled whether responses had to be executed or withheld. Lateralised ERP components triggered during the cue–target interval, which are assumed to reflect preparatory control mechanisms that mediate attentional orienting, were very similar across tasks. They were also present in the Saccade task, which was designed to discourage any concomitant covert attention shifts. These results support the hypothesis that saccade preparation and attentional orienting are implemented by common control structures. There were however systematic differences in the impact of eye movement programming and covert attention on ERPs triggered in response to visual stimuli at cued versus uncued locations. It is concluded that, although the preparatory processes underlying saccade programming and covert attentional orienting may be based on common mechanisms, they nevertheless differ in their spatially specific effects on visual information processing.
The multisensory attentional consequences of tool use: a functional magnetic resonance imaging study
Abstract:
Background: Tool use in humans requires that multisensory information is integrated across different locations, from objects seen to be distant from the hand, but felt indirectly at the hand via the tool. We tested the hypothesis that using a simple tool to perceive vibrotactile stimuli results in the enhanced processing of visual stimuli presented at the distal, functional part of the tool. Such a finding would be consistent with a shift of spatial attention to the location where the tool is used. Methodology/Principal Findings: We tested this hypothesis by scanning healthy human participants' brains using functional magnetic resonance imaging, while they used a simple tool to discriminate between target vibrations, accompanied by congruent or incongruent visual distractors, on the same or opposite side to the tool. The attentional hypothesis was supported: BOLD response in occipital cortex, particularly in the right hemisphere lingual gyrus, varied significantly as a function of tool position, increasing contralaterally, and decreasing ipsilaterally to the tool. Furthermore, these modulations occurred despite the fact that participants were repeatedly instructed to ignore the visual stimuli, to respond only to the vibrotactile stimuli, and to maintain visual fixation centrally. In addition, the magnitude of multisensory (visual-vibrotactile) interactions in participants' behavioural responses significantly predicted the BOLD response in occipital cortical areas that were also modulated as a function of both visual stimulus position and tool position. Conclusions/Significance: These results show that using a simple tool to locate and to perceive vibrotactile stimuli is accompanied by a shift of spatial attention to the location where the functional part of the tool is used, resulting in enhanced processing of visual stimuli at that location, and decreased processing at other locations. This was most clearly observed in the right hemisphere lingual gyrus. 
Such modulations of visual processing may reflect the functional importance of visuospatial information during human tool use.