907 results for Visual control
Abstract:
Visual telepresence seeks to extend existing teleoperative capability by supplying the operator with a 3D interactive view of the remote environment. This is achieved through a stereo camera platform which, through appropriate 3D display devices, provides a distinct image to each eye of the operator and which is slaved directly from the operator's head and eye movements. However, the resolution within current head-mounted displays remains poor, thereby reducing the operator's visual acuity. This paper reports on the feasibility of incorporating eye tracking to increase resolution and investigates the stability and control issues for such a system. Continuous-domain and discrete simulations are presented which indicate that eye tracking provides a stable feedback loop for tracking applications, though some empirical testing (currently being initiated) of such a system will be required to overcome indicated stability problems associated with microsaccades of the human operator.
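The stability question lends itself to a simple discrete-time illustration. The sketch below (Python; the proportional update, gain, and delay values are assumptions for illustration, not the paper's model) simulates a gaze-slaved camera loop with a transport delay, showing how gain and delay interact.

```python
import numpy as np

def simulate_tracking_loop(gain=0.3, delay_steps=2, n_steps=200):
    """Simulate a gaze-slaved camera loop with a transport delay.

    The camera angle is nudged toward the delayed error between the
    operator's gaze angle and the camera angle. Illustrative values only.
    """
    target = np.zeros(n_steps)
    target[20:] = 1.0                      # step change in gaze direction (rad)
    camera = np.zeros(n_steps)
    error_history = np.zeros(n_steps)

    for k in range(1, n_steps):
        # error observed 'delay_steps' samples ago (sensing + transmission lag)
        delayed_error = error_history[k - delay_steps] if k >= delay_steps else 0.0
        camera[k] = camera[k - 1] + gain * delayed_error
        error_history[k] = target[k] - camera[k]

    return camera, error_history

# With these illustrative values the loop settles; raising the gain or the
# delay produces sustained oscillation, mirroring the stability concern
# around rapid microsaccade-driven corrections.
cam, err = simulate_tracking_loop()
print("final tracking error:", err[-1])
```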
Abstract:
Dorsolateral prefrontal cortex (DLPFC) is recruited during visual working memory (WM) when relevant information must be maintained in the presence of distracting information. The mechanism by which DLPFC might ensure successful maintenance of the contents of WM is, however, unclear; it might enhance neural maintenance of memory targets or suppress processing of distracters. To adjudicate between these possibilities, we applied time-locked transcranial magnetic stimulation (TMS) during functional MRI, an approach that permits causal assessment of a stimulated brain region's influence on connected brain regions, and evaluated how this influence may change under different task conditions. Participants performed a visual WM task requiring retention of visual stimuli (faces or houses) across a delay during which visual distracters could be present or absent. When distracters were present, they were always from the opposite stimulus category, so that targets and distracters were represented in distinct posterior cortical areas. We then measured whether DLPFC-TMS, administered in the delay at the time point when distracters could appear, would modulate posterior regions representing memory targets or distracters. We found that DLPFC-TMS influenced posterior areas only when distracters were present and, critically, that this influence consisted of increased activity in regions representing the current memory targets. DLPFC-TMS did not affect regions representing current distracters. These results provide a new line of causal evidence for a top-down DLPFC-based control mechanism that promotes successful maintenance of relevant information in WM in the presence of distraction.
Abstract:
In terms of evolution, the strategy of catching prey would have been an important part of survival in a constantly changing environment. A prediction mechanism would have developed to compensate for any delay in the sensory-motor system. In a previous study, "proactive control" was found, in which the motion of the hands preceded the virtual moving target. These results implied that the positive phase shift of the hand motion represents the proactive nature of the visual-motor control system, which attempts to minimize the brief error in the hand motion when the target changes position unexpectedly. In our study, a visual target moves in a circle (13 cm diameter) on a computer screen, and each subject is asked to track the target's motion with the motion of a cursor. As the frequency of the target increased, a rhythmic component was found in the velocity of the cursor despite the fact that the velocity of the target was constant. The generation of a rhythmic component cannot be explained simply as a feedback mechanism for the phase shifts of the target and cursor in a sensory-motor system. It therefore implies that the rhythmic component was generated to predict the velocity of the target, which is a feed-forward mechanism in the sensory-motor system. Here, we discuss the generation of the rhythmic component and its role in the feed-forward mechanism.
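As a hedged sketch of the kind of analysis implied here: given sampled target and cursor angles on the circle, one can estimate the mean phase lead (proactive control) and inspect the rhythmic residual. All signals and parameters below are synthetic assumptions, not the study's data.

```python
import numpy as np

fs = 100.0                       # sampling rate (Hz), assumed
t = np.arange(0, 20, 1 / fs)     # 20 s of tracking
f_target = 0.5                   # target revolution frequency (Hz), assumed

# Synthetic angles: the cursor leads the target slightly and carries a
# small rhythmic modulation at 1.5 Hz (purely illustrative).
target_angle = 2 * np.pi * f_target * t
cursor_angle = target_angle + 0.05 + 0.15 * np.sin(2 * np.pi * 1.5 * t)

# Mean angular error: positive values indicate a proactive (feed-forward) lead.
angular_error = cursor_angle - target_angle
print(f"mean phase lead: {np.degrees(angular_error.mean()):.2f} degrees")

# The oscillation around the mean is analogous to the rhythmic velocity
# component described above; its dominant frequency can be read from the FFT.
residual = angular_error - angular_error.mean()
spectrum = np.abs(np.fft.rfft(residual))
freqs = np.fft.rfftfreq(len(residual), 1 / fs)
print(f"dominant residual frequency: {freqs[np.argmax(spectrum[1:]) + 1]:.2f} Hz")
```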
Abstract:
Voluntary selective attention can prioritize different features in a visual scene. The frontal eye fields (FEF) are one potential source of such feature-specific top-down signals, but causal evidence for influences on visual cortex (as has been shown for "spatial" attention) has remained elusive. Here, we show that transcranial magnetic stimulation (TMS) applied to the right FEF increased blood oxygen level-dependent (BOLD) signals in visual areas processing the "target feature" but not in "distracter feature"-processing regions. TMS increased BOLD signals in motion-responsive visual cortex (MT+) when motion was attended in a display with moving dots superimposed on face stimuli, but in the face-responsive fusiform face area (FFA) when faces were attended. These TMS effects on BOLD signal in both regions were negatively related to performance (on the motion task), supporting the behavioral relevance of this pathway. Our findings provide new causal evidence for the human FEF in the control of nonspatial "feature"-based attention, mediated by dynamic influences on feature-specific visual cortex that vary with the currently attended property.
Abstract:
Clinical evidence suggests that a persistent search for solutions for chronic pain may bring along costs at the cognitive, affective, and behavioral level. Specifically, attempts to control pain may fuel hypervigilance and prioritize attention towards pain-related information. This hypothesis was investigated in an experiment with 41 healthy volunteers. Prioritization of attention towards a signal for pain was measured using an adaptation of a visual search paradigm in which participants had to search for a target presented in a varying number of colored circles. One of these colors (Conditioned Stimulus) became a signal for pain (Unconditioned Stimulus: electrocutaneous stimulus at tolerance level) using a classical conditioning procedure. Intermixed with the visual search task, participants also performed another task. In the pain-control group, participants were informed that correct and fast responses on trials of this second task would result in an avoidance of the Unconditioned Stimulus. In the comparison group, performance on the second task was not instrumental in controlling pain. Results showed that in the pain-control group, attention was more prioritized towards the Conditioned Stimulus than in the comparison group. The theoretical and clinical implications of these results are discussed.
Abstract:
How is semantic memory influenced by individual differences under conditions of distraction? This question was addressed by observing how visual target words—drawn from a single category—were recalled whilst ignoring spoken distracter words that were either members of the same, or members of a different (single) category. Working memory capacity (WMC) was related to disruption only with synchronous, not asynchronous, presentation and distraction was greater when the words were presented synchronously. Subsequent experiments found greater negative priming of distracters amongst individuals with higher WMC but this may be dependent on targets and distracters being comparable category exemplars. With less dominant category members as distracters, target recall was impaired – relative to control – only amongst individuals with low WMC. The results highlight the role of cognitive control resources in target-distracter selection and the individual-specific cost implications of such cognitive control.
Abstract:
Synesthesia entails a special kind of sensory perception, in which stimulation in one sensory modality leads to an internally generated perceptual experience in another, non-stimulated sensory modality. This phenomenon can be viewed as an abnormal multisensory integration process, as the synesthetic percept is aberrantly fused with the stimulated modality. Indeed, recent synesthesia research has focused on multimodal processing even outside of the specific synesthesia-inducing context and has revealed changed multimodal integration, thus suggesting perceptual alterations at a global level. Here, we focused on audio-visual processing in synesthesia using a semantic classification task with visually or audio-visually presented animated and inanimate objects, shown in an audio-visually congruent or incongruent manner. Fourteen subjects with auditory-visual and/or grapheme-color synesthesia and 14 control subjects participated in the experiment. During presentation of the stimuli, event-related potentials were recorded from 32 electrodes. The analysis of reaction times and error rates revealed no group differences, with best performance for audio-visually congruent stimulation, indicating the well-known multimodal facilitation effect. We found enhanced amplitude of the N1 component over occipital electrode sites for synesthetes compared to controls. The differences occurred irrespective of the experimental condition and therefore suggest a global influence on early sensory processing in synesthetes.
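As an illustration of the sort of measure reported (not the study's actual pipeline), the following sketch averages baseline-corrected epochs for a single occipital electrode and extracts an N1 amplitude; the sampling rate, time window, and data are assumptions.

```python
import numpy as np

def n1_amplitude(epochs, fs=500.0, window=(0.08, 0.15), baseline=(-0.1, 0.0), tmin=-0.1):
    """Mean N1 amplitude from baseline-corrected, averaged ERP epochs.

    epochs: array (n_trials, n_samples) for one electrode, time-locked to
    stimulus onset at t = 0. Window and rates are illustrative assumptions.
    """
    times = tmin + np.arange(epochs.shape[1]) / fs
    erp = epochs.mean(axis=0)                          # average across trials
    base = erp[(times >= baseline[0]) & (times < baseline[1])].mean()
    erp = erp - base                                   # baseline correction
    mask = (times >= window[0]) & (times <= window[1])
    return erp[mask].min()                             # N1 is a negative deflection

# Synthetic single-electrode data: 50 trials, 0.6 s epochs starting at -0.1 s
rng = np.random.default_rng(2)
epochs = rng.standard_normal((50, 300)) * 2.0
print(f"N1 amplitude: {n1_amplitude(epochs):.2f} (arbitrary units)")
```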
Abstract:
Event-related desynchronization (ERD) of the electroencephalogram (EEG) from the motor cortex is associated with execution, observation, and mental imagery of motor tasks. Generation of ERD by motor imagery (MI) has been widely used for brain-computer interfaces (BCIs) linked to neuroprosthetics and other motor assistance devices. Control of MI-based BCIs can be acquired through neurofeedback training to reliably induce MI-associated ERD. To develop more effective training conditions, we investigated the effect of static and dynamic visual representations of target movements (a picture of forearms or a video clip of hand-grasping movements) during BCI training. After 4 consecutive training days, the group that performed MI while viewing the video showed significant improvement in generating MI-associated ERD compared with the group that viewed the static image. This result suggests that passively observing the target movement during MI would improve the associated mental imagery and enhance MI-based BCI skills.
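ERD is conventionally quantified as the relative band-power change from a reference period. The sketch below shows that computation on synthetic data; the band, sampling rate, and signals are assumptions, not the study's processing pipeline.

```python
import numpy as np

def erd_percent(epoch, baseline, fs=250.0, band=(8.0, 12.0)):
    """Event-related desynchronization as percent band-power change.

    ERD% = (A - R) / R * 100, where A is band power during the task epoch
    and R is band power during the reference (baseline) period.
    Negative values indicate desynchronization. Illustrative sketch only.
    """
    def band_power(x):
        freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
        psd = np.abs(np.fft.rfft(x)) ** 2
        mask = (freqs >= band[0]) & (freqs <= band[1])
        return psd[mask].mean()

    return (band_power(epoch) - band_power(baseline)) / band_power(baseline) * 100.0

# Synthetic example: mu-band (10 Hz) power drops during motor imagery
rng = np.random.default_rng(0)
fs = 250.0
t = np.arange(0, 2, 1 / fs)
baseline = np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(len(t))
imagery = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.2 * rng.standard_normal(len(t))
print(f"ERD: {erd_percent(imagery, baseline, fs):.1f}%")
```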
Abstract:
The feedback mechanism used in a brain-computer interface (BCI) forms an integral part of the closed-loop learning process required for successful operation of a BCI. However, the ultimate success of the BCI may depend on the modality of the feedback used. This study explores the use of music tempo as a feedback mechanism in BCI and compares it to the more commonly used visual feedback mechanism. Three different feedback modalities are compared for a kinaesthetic motor imagery BCI: visual, auditory via music tempo, and a combined visual and auditory feedback modality. Visual feedback is provided via the position, on the y-axis, of a moving ball. In the music feedback condition, the tempo of a piece of continuously generated music is dynamically adjusted via a novel music-generation method. All the feedback mechanisms allowed users to learn to control the BCI. However, users were not able to maintain control as stably in the music tempo feedback condition as they could in the visual feedback and combined conditions. Additionally, the combined condition exhibited significantly less inter-user variability, suggesting that multi-modal feedback may lead to more robust results. Finally, common spatial patterns are used to identify participant-specific spatial filters for each of the feedback modalities. The mean optimal spatial filter obtained for the music feedback condition is observed to be more diffuse and weaker than the mean spatial filters obtained for the visual and combined feedback conditions.
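For readers unfamiliar with common spatial patterns, a minimal sketch follows: CSP finds spatial filters that maximize the variance ratio between two classes of EEG trials by solving a generalized eigenvalue problem on the class covariance matrices. This is a generic textbook formulation, not the paper's exact implementation; regularization and preprocessing are omitted.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_filters=3):
    """Common Spatial Patterns for two classes of EEG trials.

    trials_a, trials_b: arrays of shape (n_trials, n_channels, n_samples).
    Returns the n_filters most discriminative filters from each end of the
    eigenvalue spectrum (rows of the returned matrix).
    """
    def mean_cov(trials):
        return np.mean([np.cov(trial) for trial in trials], axis=0)

    cov_a, cov_b = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenvalue problem: cov_a w = lambda (cov_a + cov_b) w
    eigvals, eigvecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(eigvals)
    selected = np.concatenate([order[:n_filters], order[-n_filters:]])
    return eigvecs[:, selected].T

# Synthetic example: 10 trials per class, 8 channels, 500 samples
rng = np.random.default_rng(1)
trials_a = rng.standard_normal((10, 8, 500))
trials_b = rng.standard_normal((10, 8, 500)) * 1.5
print("filter matrix shape:", csp_filters(trials_a, trials_b).shape)   # (6, 8)
```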
Abstract:
Interference from spatially adjacent non-target stimuli evokes ERPs during non-target sub-trials and leads to false positives. This phenomenon is commonly seen in visual attention-based BCIs and degrades the performance of the BCI system. Although users tried to focus on the target stimulus, they could not help being affected by conspicuous changes of the stimuli (flashes or presented images) adjacent to the target stimulus. In view of this, the aim of this study is to reduce the adjacent interference using a new stimulus presentation pattern based on facial expression changes. Positive facial expressions can be changed to negative facial expressions by minor changes to the original facial image. Although the changes are minor, the contrast is large enough to evoke strong ERPs. In this paper, two different conditions (Pattern_1, Pattern_2) were compared on objective measures such as classification accuracy and information transfer rate, as well as on subjective measures. Pattern_1 was a "flash-only" pattern and Pattern_2 was a facial expression change of a dummy face. In the facial expression change pattern, the background is a positive facial expression and the stimulus is a negative facial expression. The results showed that the interference from adjacent stimuli could be reduced significantly (p < 0.05) by using the facial expression change patterns. The online performance of the BCI system using the facial expression change patterns was significantly better than that using the "flash-only" patterns in terms of classification accuracy (p < 0.01), bit rate (p < 0.01), and practical bit rate (p < 0.01). Subjects reported that annoyance and fatigue were significantly decreased (p < 0.05) with the new stimulus presentation pattern presented in this paper.
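The information transfer rate mentioned here is usually computed with the Wolpaw formula, shown in the sketch below. The example numbers are hypothetical, and the paper's "practical bit rate" additionally accounts for error-correction overhead, which this sketch does not model.

```python
import math

def wolpaw_itr(accuracy, n_classes, trial_duration_s):
    """Information transfer rate (bits/min) via the Wolpaw formula.

    Bits per selection:
        B = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1))
    scaled by the number of selections per minute.
    """
    p, n = accuracy, n_classes
    if p <= 1.0 / n:
        return 0.0
    if p == 1.0:
        bits = math.log2(n)
    else:
        bits = math.log2(n) + p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * (60.0 / trial_duration_s)

# Hypothetical example: 36-item speller, 90% accuracy, 10 s per selection
print(f"{wolpaw_itr(0.90, 36, 10.0):.1f} bits/min")
```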
Abstract:
During the past decade, brain–computer interfaces (BCIs) have developed rapidly, in both technological and application domains. However, most of these interfaces rely on the visual modality. Only some research groups have been studying non-visual BCIs, primarily based on auditory and, sometimes, somatosensory signals. These non-visual BCI approaches are especially useful for severely disabled patients with poor vision. From a broader perspective, multisensory BCIs may offer more versatile and user-friendly paradigms for control and feedback. This chapter describes current systems used within auditory and somatosensory BCI research. Four categories of noninvasive BCI paradigms are employed: (1) P300 evoked potentials, (2) steady-state evoked potentials, (3) slow cortical potentials, and (4) mental tasks. Comparing visual and non-visual BCIs, we propose and discuss different possible multisensory combinations, as well as their pros and cons. We conclude by discussing potential future research directions of multisensory BCIs and related research questions.
Abstract:
Dance is a rich source of material for researchers interested in the integration of movement and cognition. The multiple aspects of embodied cognition involved in performing and perceiving dance have inspired scientists to use dance as a means for studying motor control, expertise, and action-perception links. The aim of this review is to present basic research on cognitive and neural processes implicated in the execution, expression, and observation of dance, and to bring into relief contemporary issues and open research questions. The review addresses six topics: 1) dancers’ exemplary motor control, in terms of postural control, equilibrium maintenance, and stabilization; 2) how dancers’ timing and on-line synchronization are influenced by attention demands and motor experience; 3) the critical roles played by sequence learning and memory; 4) how dancers make strategic use of visual and motor imagery; 5) the insights into the neural coupling between action and perception yielded through exploration of the brain architecture mediating dance observation; and 6) a neuroaesthetics perspective that sheds new light on the way audiences perceive and evaluate dance expression. Current and emerging issues are presented regarding future directions that will facilitate the ongoing dialogue between science and dance.
Abstract:
This article explores the way users of an online gay chat room negotiate the exchange of photographs and the conduct of video conferencing sessions and how this negotiation changes the way participants manage their interactions and claim and impute social identities. Different modes of communication provide users with different resources for the control of information, affecting not just what users are able to reveal, but also what they are able to conceal. Thus, the shift from a purely textual mode for interacting to one involving visual images fundamentally changes the kinds of identities and relationships available to users. At the same time, the strategies users employ to negotiate these shifts of mode can alter the resources available in different modes. The kinds of social actions made possible through different modes, it is argued, are not just a matter of the modes themselves but also of how modes are introduced into the ongoing flow of interaction.
Abstract:
Thermochromic windows are able to modulate their transmittance in both the visible and the near-infrared range as a function of their temperature. As a consequence, they make it possible to control solar gains in summer, thus reducing the energy needs for space cooling. However, they may also reduce daylight availability, which increases the energy consumption for indoor artificial lighting. This paper investigates, by means of dynamic simulations, the application of thermochromic windows to an existing office building in terms of energy savings on an annual basis, while also focusing on the effects in terms of daylighting and thermal comfort. In particular, due attention is paid to daylight availability, described through illuminance maps and through calculation of the daylight factor, which in several countries is subject to regulatory thresholds. The study considers both a commercially available thermochromic pane and a series of theoretical thermochromic glazings. The expected performance is compared to static clear and reflective insulating glass units. The simulations are repeated under different climatic conditions, showing that the overall energy savings relative to clear glazing can range from around 5% in cold climates to around 20% in warm climates, without compromising daylight availability. Moreover, the role played by the transition temperature of the pane is examined, pointing out an optimal transition temperature that is independent of the climatic conditions.
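The daylight factor referred to above is the ratio of indoor illuminance to simultaneous outdoor horizontal illuminance under an overcast sky, expressed as a percentage. A minimal sketch, with hypothetical illuminance values not taken from the study:

```python
def daylight_factor(indoor_lux, outdoor_lux):
    """Daylight factor (%) = indoor illuminance / simultaneous outdoor
    horizontal illuminance under an overcast sky, times 100."""
    return 100.0 * indoor_lux / outdoor_lux

# Hypothetical reading: 300 lx indoors against a 10,000 lx overcast sky
df = daylight_factor(300.0, 10_000.0)
print(f"daylight factor: {df:.1f}%")   # 3.0%; national codes often set minima around 2%
```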
Abstract:
Given capacity limits, only a subset of stimuli gives rise to a conscious percept. Neurocognitive models suggest that humans have evolved mechanisms that operate without awareness and prioritize threatening stimuli over neutral stimuli in subsequent perception. In this meta-analysis, we review evidence for this 'standard hypothesis' emanating from three widely used, but rather different, experimental paradigms that have been used to manipulate awareness. We found a small pooled threat-bias effect in the masked visual probe paradigm, a medium effect in the binocular rivalry paradigm, and highly inconsistent effects in the breaking continuous flash suppression paradigm. Substantial heterogeneity was explained by stimulus type: the only threat stimuli that were robustly prioritized across all three paradigms were fearful faces. Meta-regression revealed that anxiety may modulate threat biases, but only under specific presentation conditions. We also found that insufficiently rigorous awareness measures, inadequate control of response biases, and low-level confounds may undermine claims of genuine unconscious threat processing. Considering the data together, we suggest that uncritical acceptance of the standard hypothesis is premature: current behavioral evidence for threat-sensitive visual processing that operates without awareness is weak.
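Pooled effects of the kind reported here are typically obtained with an inverse-variance random-effects model. The sketch below implements the standard DerSimonian-Laird estimator on made-up study values; it does not reproduce the meta-analysis above.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """Pooled effect size via the DerSimonian-Laird random-effects model.

    effects, variances: per-study effect sizes and their sampling variances.
    Returns the pooled estimate and its standard error.
    """
    effects = np.asarray(effects, dtype=float)
    variances = np.asarray(variances, dtype=float)
    w = 1.0 / variances                               # fixed-effect weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)            # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)                     # between-study variance
    w_star = 1.0 / (variances + tau2)                 # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se

# Hypothetical per-study effects and variances
pooled, se = random_effects_pool([0.15, 0.40, 0.05, 0.60], [0.02, 0.05, 0.01, 0.08])
print(f"pooled effect: {pooled:.2f} (SE {se:.2f})")
```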