36 results for task-determined visual strategy
in CentAUR: Central Archive University of Reading - UK
Abstract:
We investigated the roles of top-down task set and bottom-up stimulus salience for feature-specific attentional capture. Spatially nonpredictive cues preceded search arrays that included a color-defined target. For target-color singleton cues, behavioral spatial cueing effects were accompanied by cue-induced N2pc components, indicative of attentional capture. These effects were only minimally attenuated for nonsingleton target-color cues, underlining the dominance of top-down task set over salience in attentional capture. Nontarget-color singleton cues triggered no N2pc, but instead an anterior N2 component indicative of top-down inhibition. In Experiment 2, inverted behavioral cueing effects of these cues were accompanied by a delayed N2pc to targets at cued locations, suggesting that perceptually salient but task-irrelevant visual events trigger location-specific inhibition mechanisms that can delay subsequent target selection.
Abstract:
The premotor theory of attention claims that attentional shifts are triggered during response programming, regardless of which response modality is involved. To investigate this claim, event-related brain potentials (ERPs) were recorded while participants covertly prepared a left or right response, as indicated by a precue presented at the beginning of each trial. Cues signalled a left or right eye movement in the saccade task, and a left or right manual response in the manual task. The cued response had to be executed or withheld following the presentation of a Go/Nogo stimulus. Although there were systematic differences between ERPs triggered during covert manual and saccade preparation, lateralised ERP components sensitive to the direction of a cued response were very similar for both tasks, and also similar to the components previously found during cued shifts of endogenous spatial attention. This is consistent with the claim that the control of attention and of covert response preparation are closely linked. N1 components triggered by task-irrelevant visual probes presented during the covert response preparation interval were enhanced when these probes were presented close to the cued response hand in the manual task, and at the saccade target location in the saccade task. This demonstrates that both manual and saccade preparation result in spatially specific modulations of visual processing, in line with the predictions of the premotor theory.
Abstract:
Near ground maneuvers, such as hover, approach and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have been tackled conventionally by measuring or estimating the velocity and the height above the ground often using ultrasonic or laser range finders. Near ground maneuvers are naturally mastered by flying birds and insects as objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available vision and vestibular sensory information. In this paper, the time-to-contact (Tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for Unmanned Aerial Vehicles (UAV) relative ground distance control. The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme that implements the Tau strategy is developed employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented on-board an experimental quadrotor UAV and shown not only to successfully land and approach the ground, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.
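As an illustration of the strategy this abstract describes, the sketch below is a minimal, hypothetical Python loop (names such as tau_from_divergence, TauDotRegulator, k, and gain are my own, not from the paper): time to contact tau is read from the divergence of the optic-flow field, and a proportional correction drives its rate of change toward a constant -k, the constant tau-dot landing strategy. It is a sketch of the idea only, not the controller or filter developed in the paper.

```python
# Minimal sketch of a constant tau-dot landing loop (hypothetical names and
# gains, not the paper's controller).  Time to contact tau = gap / closure rate
# is read from the divergence (relative expansion rate) of the optic flow, so
# neither height nor descent rate is needed.  Driving d(tau)/dt toward a
# constant -k with 0 < k < 1 closes the gap with zero touchdown velocity
# (k < 0.5 also brings the deceleration to zero at contact).

def tau_from_divergence(divergence: float) -> float:
    """Time to contact (s) from the image expansion rate (1/s)."""
    return 1.0 / max(divergence, 1e-6)

class TauDotRegulator:
    """Proportional feedback that nudges thrust so tau shrinks at rate k."""

    def __init__(self, k: float = 0.4, gain: float = 1.5, dt: float = 0.02):
        self.k, self.gain, self.dt = k, gain, dt
        self._prev_tau = None

    def thrust_correction(self, tau: float) -> float:
        if self._prev_tau is None:
            self._prev_tau = tau
            return 0.0
        tau_dot = (tau - self._prev_tau) / self.dt  # numerical derivative
        self._prev_tau = tau
        # Descending too fast makes tau_dot more negative than -k, so the
        # error below turns positive and extra upward thrust is commanded.
        return self.gain * (-self.k - tau_dot)

# Toy usage at 50 Hz with a made-up divergence estimate from the vision filter:
regulator = TauDotRegulator(k=0.4)
extra_thrust = regulator.thrust_correction(tau_from_divergence(0.25))
```

Formulated this way, the loop never needs height or descent rate explicitly, which is the property the abstract emphasizes; in the actual system the camera and IMU are fused through the proposed filter to obtain a reliable visual estimate at a high rate.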
Abstract:
Near-ground maneuvers, such as hover, approach, and landing, are key elements of autonomy in unmanned aerial vehicles. Such maneuvers have been tackled conventionally by measuring or estimating the velocity and the height above the ground, often using ultrasonic or laser range finders. Near-ground maneuvers are naturally mastered by flying birds and insects because objects below may be of interest for food or shelter. These animals perform such maneuvers efficiently using only the available vision and vestibular sensory information. In this paper, the time-to-contact (tau) theory, which conceptualizes the visual strategy with which many species are believed to approach objects, is presented as a solution for relative ground distance control for unmanned aerial vehicles. The paper shows how such an approach can be visually guided without knowledge of height and velocity relative to the ground. A control scheme that implements the tau strategy is developed employing only visual information from a monocular camera and an inertial measurement unit. To achieve reliable visual information at a high rate, a novel filtering system is proposed to complement the control system. The proposed system is implemented onboard an experimental quadrotor unmanned aerial vehicle and is shown to not only successfully land and approach the ground, but also to enable the user to choose the dynamic characteristics of the approach. The methods presented in this paper are applicable to both aerial and space autonomous vehicles.
Abstract:
Rats with fornix transection, or with cytotoxic retrohippocampal lesions that removed entorhinal cortex plus ventral subiculum, performed a task that permits incidental learning about either allocentric (Allo) or egocentric (Ego) spatial cues without the need to navigate by them. Rats learned eight visual discriminations among computer-displayed scenes in a Y-maze, using the constant-negative paradigm. Every discrimination problem included two familiar scenes (constants) and many less familiar scenes (variables). On each trial, the rats chose between a constant and a variable scene, with the choice of the variable rewarded. In six problems, the two constant scenes had correlated spatial properties, either Allo (each constant always appeared in the same maze arm) or Ego (each constant always appeared in a fixed direction from the start arm) or both (Allo + Ego). In two No-Cue (NC) problems, the two constants appeared in randomly determined arms and directions. Intact rats learn problems with an added Allo or Ego cue faster than NC problems; this facilitation provides indirect evidence that they learn the associations between scenes and spatial cues, even though that is not required for problem solution. Fornix and retrohippocampal-lesioned groups learned NC problems at a similar rate to sham-operated controls and showed as much facilitation of learning by added spatial cues as did the controls; therefore, both lesion groups must have encoded the spatial cues and have incidentally learned their associations with particular constant scenes. Similar facilitation was seen in subgroups that had short or long prior experience with the apparatus and task. Therefore, neither major hippocampal input-output system is crucial for learning about allocentric or egocentric cues in this paradigm, which does not require rats to control their choices or navigation directly by spatial cues.
Abstract:
Emerging evidence suggests that items held in working memory (WM) might not all be in the same representational state. One item might be privileged over others, making it more accessible and thereby recalled with greater precision. Here, using transcranial magnetic stimulation (TMS), we provide causal evidence in human participants that items in WM are differentially susceptible to disruptive TMS, depending on their state, determined either by task relevance or serial position. Across two experiments, we applied TMS to area MT during the WM retention of two motion directions. In Experiment 1, we used an "incidental cue" to bring one of the two targets into a privileged state. In Experiment 2, we presented the targets sequentially so that the last item was in a privileged state by virtue of recency. In both experiments, recall precision of motion direction was differentially affected by TMS, depending on the state of the memory target at the time of disruption. Privileged items were recalled with less precision, whereas nonprivileged items were recalled with higher precision. Thus, only the privileged item was susceptible to disruptive TMS over MT+. By contrast, precision of the nonprivileged item improved either directly because of facilitation by TMS or indirectly through reduced interference from the privileged item. Our results provide a unique line of evidence, as revealed by TMS over a posterior sensory brain region, for at least two different states of item representation in WM.
Abstract:
Task relevance affects emotional attention in healthy individuals. Here, we investigate whether the association between anxiety and attention bias is affected by the task relevance of emotion during an attention task. Participants completed two visual search tasks. In the emotion-irrelevant task, participants were asked to indicate whether a discrepant face in a crowd of neutral, middle-aged faces was old or young. Irrelevant to the task, target faces displayed angry, happy, or neutral expressions. In the emotion-relevant task, participants were asked to indicate whether a discrepant face in a crowd of middle-aged neutral faces was happy or angry (target faces also varied in age). Trait anxiety was not associated with attention in the emotion-relevant task. However, in the emotion-irrelevant task, trait anxiety was associated with a bias for angry over happy faces. These findings demonstrate that the task relevance of emotional information affects conclusions about the presence of an anxiety-linked attention bias.
Abstract:
Two experiments examined the learning of a set of Greek pronunciation rules through explicit and implicit modes of rule presentation. Experiment 1 compared the effectiveness of implicit and explicit modes of presentation in two modalities, visual and auditory. Subjects in the explicit or rule group were presented with the rule set, and those in the implicit or natural group were shown a set of Greek words, composed of letters from the rule set, linked to their pronunciations. Subjects learned the Greek words to criterion and were then given a series of tests which aimed to tap different types of knowledge. The results showed an advantage of explicit study of the rules. In addition, an interaction was found between mode of presentation and modality. Explicit instruction was more effective in the visual than in the auditory modality, whereas there was no modality effect for implicit instruction. Experiment 2 examined a possible reason for the advantage of the rule groups by comparing different combinations of explicit and implicit presentation in the study and learning phases. The results suggested that explicit presentation of the rules is only beneficial when it is followed by practice at applying them.
Abstract:
The contribution of retinal flow (RF), extraretinal (ER), and egocentric visual direction (VD) information to locomotor control was explored. First, the recovery of heading from RF was examined when ER information was manipulated; results confirmed that ER signals affect heading judgments. Then the task was translated to steering curved paths, and the availability and veracity of VD were manipulated with either degraded or systematically biased RF. Large steering errors resulted from selective manipulation of RF and VD, providing strong evidence for the combination of RF, ER, and VD. The relative weighting applied to RF and VD was estimated. A point-attractor model is proposed that combines redundant sources of information for robust locomotor control with flexible trajectory planning through active gaze.
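To make the modelling idea concrete, here is a generic, hypothetical point-attractor sketch in Python (the weights w_rf, w_er, w_vd and the stiffness and damping gains are illustrative placeholders, not the fitted values from the study): the steering rate is attracted toward zero on an error blended from the retinal-flow, extra-retinal, and visual-direction signals, which is the kind of weighted combination of redundant cues the abstract argues for.

```python
# Generic point-attractor sketch (hypothetical weights and gains, not the
# fitted model from the study): steering is attracted toward zero on an error
# blended from retinal-flow (RF), extra-retinal (ER), and visual-direction (VD)
# signals, illustrating weighted combination of redundant cues.

def steering_update(yaw_rate: float, rf_err: float, er_err: float, vd_err: float,
                    w_rf: float = 0.4, w_er: float = 0.3, w_vd: float = 0.3,
                    stiffness: float = 2.0, damping: float = 1.2,
                    dt: float = 0.05) -> float:
    """One Euler step of a damped attractor over the blended direction error (rad)."""
    blended_error = w_rf * rf_err + w_er * er_err + w_vd * vd_err
    yaw_accel = stiffness * blended_error - damping * yaw_rate
    return yaw_rate + yaw_accel * dt

# Example step: small rightward errors from all three cues pull the yaw rate right.
new_rate = steering_update(yaw_rate=0.0, rf_err=0.05, er_err=0.04, vd_err=0.06)
```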
Abstract:
The efficacy of explicit and implicit learning paradigms was examined during the very early stages of learning the perceptual-motor anticipation task of predicting ball direction from temporally occluded footage of soccer penalty kicks. In addition, the effect of instructional condition on point-of-gaze during learning was examined. A significant improvement in horizontal prediction accuracy was observed in the explicit learning group; however, similar improvement was evident in a placebo group who watched footage of soccer matches. Only the explicit learning intervention resulted in changes in eye movement behaviour and increased awareness of relevant postural cues. Results are discussed in terms of methodological and practical issues regarding the employment of implicit perceptual training interventions.
Abstract:
Defensive behaviors, such as withdrawing your hand to avoid potentially harmful approaching objects, rely on rapid sensorimotor transformations between visual and motor coordinates. We examined the reference frame for coding visual information about objects approaching the hand during motor preparation. Subjects performed a simple visuomanual task while a task-irrelevant distractor ball rapidly approached a location either near to or far from their hand. After the distractor ball appearance, single pulses of transcranial magnetic stimulation were delivered over the subject's primary motor cortex, eliciting motor evoked potentials (MEPs) in their responding hand. MEP amplitude was reduced when the ball approached near the responding hand, both when the hand was on the left and the right of the midline. Strikingly, this suppression occurred very early, at 70-80 ms after ball appearance, and was not modified by visual fixation location. Furthermore, it was selective for approaching balls, since static visual distractors did not modulate MEP amplitude. Together with additional behavioral measurements, we provide converging evidence for automatic hand-centered coding of visual space in the human brain.
Abstract:
The current study investigated a new, easily administered, visual inhibition task for infants termed the Freeze-Frame task. In the new task, 9-month-olds were encouraged to inhibit looks to peripheral distractors. This was done by briefly freezing a central animated stimulus when infants looked to the distractors. Half of the trials presented an engaging central stimulus, and the other half presented a repetitive central stimulus. Three measures of inhibitory function were derived from the task and compared with performance on a set of frontal cortex tasks administered at 9 and 24 months of age. As expected, infants' ability to learn to selectively inhibit looks to the distractors at 9 months predicted performance at 24 months. However, performance differences in the two Freeze-Frame trial types early in the experiment also turned out to be an important predictor. The results are discussed in terms of the validity of the Freeze-Frame task as an early measure of different components of inhibitory function.
Abstract:
Perirhinal cortex in monkeys has been thought to be involved in visual associative learning. The authors examined rats' ability to make associations between visual stimuli in a visual secondary reinforcement task. Rats learned 2-choice visual discriminations for secondary visual reinforcement. They showed significant learning of discriminations before any primary reinforcement. Following bilateral perirhinal cortex lesions, rats continued to learn visual discriminations for visual secondary reinforcement at the same rate as before surgery. Thus, this study does not support a critical role of perirhinal cortex in learning for visual secondary reinforcement. Contrasting this result with other positive results, the authors suggest that the role of perirhinal cortex is in "within-object" associations and that it plays a much lesser role in stimulus-stimulus associations between objects.
Abstract:
Short-term memory (STM) has often been considered to be a central resource in cognition. This study addresses its role in rapid serial visual presentation (RSVP) tasks tapping into temporal attention: the attentional blink (AB). Various STM operations are tested for their impact on performance and, in particular, on the AB. Memory tasks were found to exert considerable impact on general performance but the size of the AB was more or less immune to manipulations of STM load. Likewise, the AB was unaffected by manipulating the match between items held in STM and targets or temporally close distractors in the RSVP stream. The emerging picture is that STM resources, or their lack, play no role in the AB. Alternative accounts assuming serial consolidation, selection for action, and distractor-induced task-set interference are discussed.