853 results for task-determined visual strategy


Relevance: 40.00%

Abstract:

The Rapid Visual Information Processing (RVIP) task, a serial discrimination task in which performance is believed to reflect sustained attention capabilities, is widely used in behavioural research and increasingly in neuroimaging studies. To date, functional neuroimaging research into the RVIP has used block analyses, reflecting the sustained processing involved in the task but not necessarily the transient processes associated with individual trial performance. Furthermore, this research has been limited to young cohorts. This study assessed the behavioural and functional magnetic resonance imaging (fMRI) outcomes of the RVIP task using both block and event-related analyses in a healthy middle-aged cohort (mean age = 53.56 years, n = 16). The results show that the version of the RVIP used here is sensitive to changes in attentional demand, with participants achieving a 43% hit rate in the experimental task compared with 96% accuracy in the control task. As shown by previous research, the block analysis revealed increased activation in a network of frontal, parietal, occipital and cerebellar regions. The event-related analysis showed a similar network of activation but omitted regions involved in the processing of the task (as shown in the block analysis), such as occipital areas and the thalamus, providing an indication of a network of regions involved in correct trial performance. Frontal (superior and inferior frontal gyri), parietal (precuneus, inferior parietal lobe) and cerebellar regions were active in both the block and event-related analyses, suggesting their importance in sustained attention/vigilance. These networks and the differences between them are discussed in detail, as are implications for future research in middle-aged cohorts.

Relevance: 40.00%

Abstract:

There is increasing interest in the use of Unmanned Aerial Vehicles (UAVs) for load transportation, in applications ranging from environmental remote sensing to construction and parcel delivery. One of the main challenges is accurate control of the load position and trajectory. This paper presents an assessment of real flight trials for the control of an autonomous multi-rotor with a suspended slung load, using only visual feedback to determine the load position. The method uses an onboard camera and a common visual marker detection algorithm to robustly detect the load location. The load position is calculated using an onboard processor and transmitted over a wireless network to a ground station, which integrates MATLAB/SIMULINK, the Robot Operating System (ROS) and a Model Predictive Controller (MPC) to control both the load and the UAV. To evaluate the system performance, the position of the load determined by the visual detection system in real flight is compared with data received from a motion tracking system. The multi-rotor position tracking performance is also analyzed by conducting flight trials using perfect load position data and data obtained only from the visual system. Results show very accurate estimation of the load position (~5% offset) using only the visual system and demonstrate that an external motion tracking system is not needed for this task.
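As a concrete illustration of the onboard detection stage, the sketch below localises a fiducial marker attached to the load in the camera frame. The abstract only states that a "common visual marker detection algorithm" was used, so the ArUco marker type, the OpenCV aruco calls (pre-4.7 functional API) and the placeholder calibration values are assumptions for illustration, not the authors' implementation.

```python
# Sketch: estimate the slung-load position from a fiducial marker, assuming
# an ArUco tag on the load and the opencv-contrib aruco module (pre-4.7 API).
import cv2
import numpy as np

# Placeholder intrinsics; a real system would use calibrated values.
K = np.array([[600.0, 0.0, 320.0],
              [0.0, 600.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)
MARKER_SIDE = 0.10  # marker side length in metres (assumed)

# Marker corner coordinates in the marker's own frame (z = 0 plane),
# ordered to match ArUco's top-left, top-right, bottom-right, bottom-left.
obj_pts = np.array([[-MARKER_SIDE / 2,  MARKER_SIDE / 2, 0],
                    [ MARKER_SIDE / 2,  MARKER_SIDE / 2, 0],
                    [ MARKER_SIDE / 2, -MARKER_SIDE / 2, 0],
                    [-MARKER_SIDE / 2, -MARKER_SIDE / 2, 0]], dtype=np.float32)

aruco_dict = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)

def load_position(frame):
    """Return the marker's (x, y, z) in the camera frame, or None if unseen."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    corners, ids, _ = cv2.aruco.detectMarkers(gray, aruco_dict)
    if ids is None:
        return None
    img_pts = corners[0].reshape(4, 2).astype(np.float32)
    ok, rvec, tvec = cv2.solvePnP(obj_pts, img_pts, K, dist)
    return tvec.ravel() if ok else None
```

In the system described above, the resulting position would then be sent over the wireless link to the ground-station controller; that transport layer is omitted here.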

Relevance: 40.00%

Abstract:

Many people suffer from conditions that lead to deterioration of motor control and make access to the computer using traditional input devices difficult. In particular, they may lose control of hand movement to the extent that the standard mouse cannot be used as a pointing device. Most current alternatives use markers or specialized hardware to track and translate a user's movement into pointer movement. These approaches may be perceived as intrusive, for example, wearable devices. Camera-based assistive systems that use visual tracking of features on the user's body often require cumbersome manual adjustment. This paper introduces an enhanced computer-vision-based strategy in which features, for example on a user's face, viewed through an inexpensive USB camera, are tracked and translated into pointer movement. The main contributions of this paper are (1) enhancing a video-based interface with a mechanism for mapping feature movement to pointer movement, which allows users to navigate to all areas of the screen even with very limited physical movement, and (2) providing a customizable, hierarchical navigation framework for human-computer interaction (HCI). This framework provides effective use of the vision-based interface system for accessing multiple applications in an autonomous setting. Experiments with several users show the effectiveness of the mapping strategy and its usage within the application framework as a practical tool for desktop users with disabilities.
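The abstract does not specify the mapping itself; the sketch below shows one common choice, a velocity-style (joystick-like) mapping with a gain and a dead zone, which is how a very small range of physical movement can still reach every point on the screen. The screen size, gain and dead-zone values are illustrative assumptions rather than the paper's parameters.

```python
# Sketch: map tracked feature displacement (e.g. a face feature from a webcam
# tracker) to pointer movement, velocity-style, so limited motion spans the
# whole screen. Feature tracking and OS cursor control are abstracted away.
SCREEN_W, SCREEN_H = 1920, 1080
DEAD_ZONE = 3.0   # pixels of feature motion ignored as jitter
GAIN = 0.15       # cursor speed per pixel of feature offset, per update

class RelativePointerMapper:
    def __init__(self):
        self.rest_x = None        # feature position when the user is at rest
        self.rest_y = None
        self.cursor_x = SCREEN_W / 2
        self.cursor_y = SCREEN_H / 2

    def update(self, feat_x, feat_y):
        """Feed the tracked feature position (pixels); return cursor position."""
        if self.rest_x is None:   # first frame defines the rest pose
            self.rest_x, self.rest_y = feat_x, feat_y
        dx = feat_x - self.rest_x
        dy = feat_y - self.rest_y
        if abs(dx) > DEAD_ZONE:           # holding an offset keeps the cursor
            self.cursor_x += GAIN * dx    # moving, so tiny head movements can
        if abs(dy) > DEAD_ZONE:           # eventually reach any screen point
            self.cursor_y += GAIN * dy
        self.cursor_x = min(max(self.cursor_x, 0), SCREEN_W - 1)
        self.cursor_y = min(max(self.cursor_y, 0), SCREEN_H - 1)
        return self.cursor_x, self.cursor_y
```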

Relevance: 40.00%

Abstract:

Emotional and attentional functions are known to be distributed along ventral and dorsal networks in the brain, respectively. However, the interactions between these systems remain to be specified. The present study used event-related functional magnetic resonance imaging (fMRI) to investigate how attentional focus can modulate the neural activity elicited by scenes that vary in emotional content. In a visual oddball task, aversive and neutral scenes were presented intermittently among circles and squares. The squares were frequent standard events, whereas the other novel stimulus categories occurred rarely. One experimental group (n = 10) was instructed to count the circles, whereas another group (n = 12) counted the emotional scenes. A main effect of emotion was found in the amygdala (AMG) and ventral frontotemporal cortices. In these regions, activation was significantly greater for emotional than neutral stimuli but was invariant to attentional focus. A main effect of attentional focus was found in dorsal frontoparietal cortices, whose activity signaled task-relevant target events irrespective of emotional content. The only brain region that was sensitive to both emotion and attentional focus was the anterior cingulate gyrus (ACG). When circles were task-relevant, the ACG responded equally to circle targets and distracting emotional scenes. The ACG response to emotional scenes increased when they were task-relevant, and the response to circles concomitantly decreased. These findings support and extend prominent network theories of emotion-attention interactions that highlight the integrative role played by the anterior cingulate.

Relevance: 40.00%

Abstract:

This study investigated whether rhesus monkeys show evidence of metacognition in a reduced, visual oculomotor task that is particularly suitable for use in fMRI and electrophysiology. The 2-stage task involved punctate visual stimulation and saccadic eye movement responses. In each trial, monkeys made a decision and then made a bet. To earn maximum reward, they had to monitor their decision and use that information to bet advantageously. Two monkeys learned to base their bets on their decisions within a few weeks. We implemented an operational definition of metacognitive behavior that relied on trial-by-trial analyses and signal detection theory. Both monkeys exhibited metacognition according to these quantitative criteria. Neither external visual cues nor potential reaction time cues explained the betting behavior; the animals seemed to rely exclusively on internal traces of their decisions. We documented the learning process of one monkey. During a 10-session transition phase, betting switched from random to a decision-based strategy. The results reinforce previous findings of metacognitive ability in monkeys and may facilitate the neurophysiological investigation of metacognitive functions.
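The abstract mentions trial-by-trial analyses and signal detection theory without giving the statistic; the sketch below computes one such trial-by-trial measure, the phi correlation between decision accuracy and bet choice, as an illustrative assumption about how "betting advantageously" can be quantified rather than the paper's exact analysis.

```python
# Sketch: phi correlation between decision accuracy and high/low bets.
# Positive phi means high bets follow correct decisions, i.e. metacognitive,
# advantageous betting based on an internal trace of the decision.
import numpy as np

def phi_correlation(correct, high_bet):
    """correct, high_bet: boolean arrays with one entry per trial."""
    correct = np.asarray(correct, dtype=bool)
    high_bet = np.asarray(high_bet, dtype=bool)
    a = np.sum(correct & high_bet)      # correct decision, bet high
    b = np.sum(correct & ~high_bet)     # correct decision, bet low
    c = np.sum(~correct & high_bet)     # wrong decision, bet high
    d = np.sum(~correct & ~high_bet)    # wrong decision, bet low
    denom = np.sqrt((a + b) * (c + d) * (a + c) * (b + d))
    return (a * d - b * c) / denom if denom > 0 else 0.0

# Example session in which bets mostly track decision accuracy.
rng = np.random.default_rng(0)
correct = rng.random(200) < 0.75
high_bet = np.where(rng.random(200) < 0.8, correct, ~correct)
print(phi_correlation(correct, high_bet))
```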

Relevance: 40.00%

Abstract:

We investigated the role of visual feedback in adapting to novel visuomotor environments. Participants produced isometric elbow torques to move a cursor towards visual targets. Following trials with no rotation, participants adapted to a 60° rotation of the visual feedback before returning to the non-rotated condition. Participants received either continuous visual feedback (CF) of cursor position during task execution or post-trial visual feedback (PF). With training, reductions in the angular deviation of the cursor path occurred to a similar extent and at a similar rate for the CF and PF groups. However, upon re-exposure to the non-rotated environment, only CF participants exhibited post-training aftereffects, manifested as increased angular deviation of the cursor path with respect to the pre-rotation trials. These aftereffects occurred despite colour cues permitting identification of the change in environment. The results show that concurrent feedback permits automatic recalibration of the visuomotor mapping, while post-trial feedback permits performance improvement via a cognitive strategy.
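To make the perturbation and the outcome measure concrete, the sketch below applies a 60° rotation to the displayed cursor and computes the signed angular deviation between the cursor path's direction and the target direction. Only the 60° rotation and the angular-deviation measure come from the abstract; the coordinate conventions and sign choices are assumptions.

```python
# Sketch: 60-degree visual rotation of cursor feedback and the angular
# deviation of the cursor path relative to the target direction.
import numpy as np

ROTATION_DEG = 60.0

def rotate(xy, degrees):
    """Rotate a 2-D point about the origin (the feedback perturbation)."""
    th = np.radians(degrees)
    R = np.array([[np.cos(th), -np.sin(th)],
                  [np.sin(th),  np.cos(th)]])
    return R @ np.asarray(xy, dtype=float)

def angular_deviation(cursor_path, target_xy):
    """Signed angle (deg) between the cursor path (first to last sample)
    and the straight line to the target; 0 means a perfectly aimed movement."""
    direction = np.asarray(cursor_path[-1], float) - np.asarray(cursor_path[0], float)
    target = np.asarray(target_xy, dtype=float)
    dev = np.degrees(np.arctan2(direction[1], direction[0])
                     - np.arctan2(target[1], target[0]))
    return (dev + 180.0) % 360.0 - 180.0   # wrap to [-180, 180)

# Example: torque aimed straight at a target on the +x axis, but the displayed
# cursor is rotated by 60 degrees, producing a 60-degree deviation.
hand_path = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
cursor_path = [rotate(p, ROTATION_DEG) for p in hand_path]
print(angular_deviation(cursor_path, target_xy=[1.0, 0.0]))  # ~60.0
```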

Relevance: 40.00%

Abstract:

This thesis proposes a solution to the problem of estimating the motion of an Unmanned Underwater Vehicle (UUV). Our approach is based on the integration of the incremental measurements provided by a vision system. When the vehicle is close to the underwater terrain, it constructs a visual map (a so-called "mosaic") of the area where the mission takes place while, at the same time, localizing itself on this map, following the Concurrent Mapping and Localization strategy.

The proposed methodology is based on a feature-based mosaicking algorithm. A down-looking camera is attached to the underwater vehicle, and as the vehicle moves, a sequence of images of the sea floor is acquired. For every image of the sequence, a set of characteristic features is detected by means of a corner detector, and their correspondences are then found in the next image of the sequence. Solving the correspondence problem in an accurate and reliable way is a difficult task in computer vision. We consider different alternatives to solve this problem by introducing a detailed analysis of the textural characteristics of the image. This is done in two phases: first comparing different texture operators individually, and then selecting those that best characterize the point/matching pair and using them together to obtain a more robust characterization. Various alternatives are also studied to merge the information provided by the individual texture operators, and the best approach in terms of robustness and efficiency is proposed.

After the correspondences have been solved, for every pair of consecutive images we obtain a list of image features in the first image and their matches in the next frame. Our aim is then to recover the apparent motion of the camera from these features. Although an accurate texture analysis is devoted to the matching procedure, some false matches (known as outliers) may still appear among the correct correspondences. For this reason, a robust estimation technique is used to estimate the planar transformation (homography) which explains the dominant motion of the image. This homography is then used to warp the processed image to the common mosaic frame, constructing a composite image formed by every frame of the sequence.

To estimate the position of the vehicle as the mosaic is being constructed, the 3D motion of the vehicle can be computed from the measurements obtained by a sonar altimeter and the incremental motion computed from the homography. Unfortunately, as the mosaic increases in size, local image-alignment errors increase the inaccuracies associated with the position of the vehicle. Occasionally, the trajectory described by the vehicle may cross over itself; in this situation new information is available and the system can readjust the position estimates. Our proposal consists not only of localizing the vehicle, but also of readjusting its trajectory when crossover information is obtained. This is achieved by implementing an Augmented State Kalman Filter (ASKF), since Kalman filtering provides an adequate framework to deal with position estimates and their associated covariances.

Finally, some experimental results are shown. A laboratory setup has been used to analyze and evaluate the accuracy of the mosaicking system; it enables a quantitative measurement of the accumulated errors of the mosaics created in the lab. Then, the results obtained from real sea trials using the URIS underwater vehicle are shown.
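The sketch below illustrates one incremental step of such a pipeline: matching features between consecutive sea-floor frames and robustly estimating the homography that explains the dominant motion, so that outliers are rejected. The thesis characterises corner features with texture operators; ORB descriptors are used here purely as a stand-in for that stage, and the OpenCV-based code is an illustrative assumption rather than the thesis implementation.

```python
# Sketch: feature matching and RANSAC homography between consecutive frames.
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=1000)
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)

def incremental_homography(prev_gray, next_gray):
    """Return the 3x3 homography mapping prev_gray into next_gray,
    estimated with RANSAC so that false matches (outliers) are discarded."""
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(next_gray, None)
    if des1 is None or des2 is None:
        return None
    matches = matcher.match(des1, des2)
    if len(matches) < 4:                 # a homography needs >= 4 point pairs
        return None
    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

# Each new frame would then be warped into the mosaic frame by composing the
# incremental homographies, e.g. with cv2.warpPerspective(frame, H_mosaic, size).
```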

Relevance: 40.00%

Abstract:

Rats with fornix transection, or with cytotoxic retrohippocampal lesions that removed entorhinal cortex plus ventral subiculum, performed a task that permits incidental learning about either allocentric (Allo) or egocentric (Ego) spatial cues without the need to navigate by them. Rats learned eight visual discriminations among computer-displayed scenes in a Y-maze, using the constant-negative paradigm. Every discrimination problem included two familiar scenes (constants) and many less familiar scenes (variables). On each trial, the rats chose between a constant and a variable scene, with the choice of the variable rewarded. In six problems, the two constant scenes had correlated spatial properties, either Allo (each constant always appeared in the same maze arm), Ego (each constant always appeared in a fixed direction from the start arm), or both (Allo + Ego). In two No-Cue (NC) problems, the two constants appeared in randomly determined arms and directions. Intact rats learn problems with an added Allo or Ego cue faster than NC problems; this facilitation provides indirect evidence that they learn the associations between scenes and spatial cues, even though that is not required for problem solution. Fornix-transected and retrohippocampal-lesioned groups learned NC problems at a similar rate to sham-operated controls and showed as much facilitation of learning by added spatial cues as did the controls; therefore, both lesion groups must have encoded the spatial cues and incidentally learned their associations with particular constant scenes. Similar facilitation was seen in subgroups that had short or long prior experience with the apparatus and task. Therefore, neither major hippocampal input-output system is crucial for learning about allocentric or egocentric cues in this paradigm, which does not require rats to control their choices or navigation directly by spatial cues.

Relevance: 40.00%

Abstract:

Emerging evidence suggests that items held in working memory (WM) might not all be in the same representational state. One item might be privileged over others, making it more accessible and thereby recalled with greater precision. Here, using transcranial magnetic stimulation (TMS), we provide causal evidence in human participants that items in WM are differentially susceptible to disruptive TMS, depending on their state, determined either by task relevance or serial position. Across two experiments, we applied TMS to area MT during the WM retention of two motion directions. In Experiment 1, we used an "incidental cue" to bring one of the two targets into a privileged state. In Experiment 2, we presented the targets sequentially so that the last item was in a privileged state by virtue of recency. In both experiments, recall precision of motion direction was differentially affected by TMS, depending on the state of the memory target at the time of disruption. Privileged items were recalled with less precision, whereas nonprivileged items were recalled with higher precision. Thus, only the privileged item was susceptible to disruptive TMS over MT. By contrast, precision of the nonprivileged item improved, either directly because of facilitation by TMS or indirectly through reduced interference from the privileged item. Our results provide a unique line of evidence, as revealed by TMS over a posterior sensory brain region, for at least two different states of item representation in WM.
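The abstract does not spell out how recall precision is computed; the sketch below uses one standard definition for angular reports, the reciprocal of the circular standard deviation of the report errors, as an illustrative assumption rather than the paper's exact analysis.

```python
# Sketch: recall precision of reported motion directions, defined here as
# 1 / circular standard deviation of the angular report errors.
import numpy as np

def recall_precision(reported_deg, target_deg):
    """Precision (1 / circular SD, in 1/rad) of angular report errors."""
    errors = np.radians(np.asarray(reported_deg) - np.asarray(target_deg))
    R = np.abs(np.mean(np.exp(1j * errors)))   # mean resultant length
    circ_sd = np.sqrt(-2.0 * np.log(R))        # circular standard deviation
    return 1.0 / circ_sd if circ_sd > 0 else np.inf

# Example: broader report errors (as for a privileged item disrupted by TMS)
# give lower precision than tighter errors (as for a non-privileged item).
rng = np.random.default_rng(1)
target = np.zeros(100)
print(recall_precision(target + rng.normal(0, 25, 100), target))  # lower
print(recall_precision(target + rng.normal(0, 10, 100), target))  # higher
```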