4 results for Feedback visual
em CentAUR: Central Archive University of Reading - UK
Abstract:
Visual telepresence seeks to extend existing teleoperative capability by supplying the operator with a 3D interactive view of the remote environment. This is achieved through the use of a stereo camera platform which, through appropriate 3D display devices, provides a distinct image to each eye of the operator, and which is slaved directly to the operator's head and eye movements. However, the resolution within current head-mounted displays remains poor, reducing the operator's visual acuity. This paper reports on the feasibility of incorporating eye tracking to increase resolution and investigates the stability and control issues for such a system. Continuous-domain and discrete simulations are presented which indicate that eye tracking provides a stable feedback loop for tracking applications, though some empirical testing (currently being initiated) of such a system will be required to overcome indicated stability problems associated with the microsaccades of the human operator.
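The stability question the abstract raises can be illustrated with a minimal discrete-time sketch (not the authors' model): a camera-platform servo slaved to gaze position under proportional feedback, with microsaccades treated as small random perturbations of the target. The gain, time step, and noise level are illustrative assumptions.

```python
# Minimal sketch (assumptions, not the paper's simulation): a camera
# platform tracking a gaze angle with a proportional feedback loop.
import random

def simulate(gain=0.5, steps=200, saccade_noise=0.0, seed=0):
    """Track a step change in gaze angle; return final absolute error."""
    rng = random.Random(seed)
    target = 10.0          # gaze angle the platform must reach (degrees)
    position = 0.0         # current platform angle (degrees)
    for _ in range(steps):
        # Microsaccades appear as small, rapid perturbations of the target.
        disturbed = target + rng.gauss(0.0, saccade_noise)
        error = disturbed - position
        position += gain * error   # proportional correction each sample
    return abs(target - position)

# With 0 < gain < 2 the loop is stable: the error decays toward zero.
assert simulate(gain=0.5, saccade_noise=0.0) < 1e-6
# With gain > 2 the loop diverges, so the error grows without bound.
assert simulate(gain=2.5, saccade_noise=0.0) > 1e6
```

With microsaccade noise switched on (`saccade_noise > 0`), a nonzero residual error persists even at stable gains, which is the kind of disturbance-driven stability issue the abstract flags for empirical testing.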
Abstract:
The feedback mechanism used in a brain-computer interface (BCI) forms an integral part of the closed-loop learning process required for successful operation of a BCI. However, ultimate success of the BCI may be dependent upon the modality of the feedback used. This study explores the use of music tempo as a feedback mechanism in BCI and compares it to the more commonly used visual feedback mechanism. Three different feedback modalities are compared for a kinaesthetic motor imagery BCI: visual, auditory via music tempo, and a combined visual and auditory feedback modality. Visual feedback is provided via the position, on the y-axis, of a moving ball. In the music feedback condition, the tempo of a piece of continuously generated music is dynamically adjusted via a novel music-generation method. All the feedback mechanisms allowed users to learn to control the BCI. However, users were not able to maintain control as stably in the music tempo feedback condition as they could in the visual and combined conditions. Additionally, the combined condition exhibited significantly less inter-user variability, suggesting that multi-modal feedback may lead to more robust results. Finally, common spatial patterns are used to identify participant-specific spatial filters for each of the feedback modalities. The mean optimal spatial filter obtained for the music feedback condition is observed to be more diffuse and weaker than the mean spatial filters obtained for the visual and combined feedback conditions.
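The two feedback mappings described above can be sketched as follows. This is a hedged illustration, not the study's code: the function names, the normalized classifier score in [-1, 1], and the BPM and screen-height ranges are all assumptions.

```python
# Minimal sketch (assumed names and ranges, not the study's implementation):
# the same normalized motor-imagery classifier score drives either the
# tempo of generated music (auditory feedback) or a ball's vertical
# position (visual feedback).

def output_to_tempo(score, base_bpm=100.0, span_bpm=40.0):
    """Clamp the classifier score to [-1, 1], then map it linearly to BPM."""
    score = max(-1.0, min(1.0, score))
    return base_bpm + span_bpm * score

def output_to_ball_y(score, height=400.0):
    """Visual analogue: the same score sets the ball's y-axis position."""
    score = max(-1.0, min(1.0, score))
    return (score + 1.0) / 2.0 * height

assert output_to_tempo(0.0) == 100.0    # neutral output -> base tempo
assert output_to_tempo(1.0) == 140.0    # strong imagery -> faster music
assert output_to_ball_y(0.0) == 200.0   # neutral output -> mid-screen
```

The design point this illustrates is that both modalities present the same one-dimensional control signal; what differs is the perceptual channel carrying it, which is what the study compares.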
Abstract:
Previous climate model simulations have shown that the configuration of the Earth's orbit during the early to mid-Holocene (approximately 10–5 kyr) can account for the generally warmer-than-present conditions experienced by the high latitudes of the northern hemisphere. New simulations for 6 kyr with two atmospheric/mixed-layer ocean models (Community Climate Model, version 1, CCM1, and Global ENvironmental and Ecological Simulation of Interactive Systems, version 2, GENESIS 2) are presented here and compared with results from two previous simulations with GENESIS 1 that were obtained with and without the albedo feedback due to climate-induced poleward expansion of the boreal forest. The climate model results are summarized in the form of potential vegetation maps obtained with the global BIOME model, which facilitates visual comparisons both among models and with pollen and plant macrofossil data recording shifts of the forest-tundra boundary. A preliminary synthesis shows that the forest limit was shifted 100–200 km north in most sectors. Both CCM1 and GENESIS 2 produced a shift of this magnitude. GENESIS 1, however, produced too small a shift, except when the boreal forest albedo feedback was included. The feedback in this case was estimated to have amplified forest expansion by approximately 50%. The forest limit changes also show meridional patterns (greatest expansion in central Siberia and little or none in Alaska and Labrador) which have yet to be reproduced by models. Further progress in understanding the processes involved in the response of climate and vegetation to orbital forcing will require both the deployment of coupled atmosphere-biosphere-ocean models and the development of more comprehensive observational data sets.
Abstract:
During the past decade, brain–computer interfaces (BCIs) have developed rapidly, in both technological and application domains. However, most of these interfaces rely on the visual modality. Only some research groups have been studying non-visual BCIs, primarily based on auditory and, sometimes, somatosensory signals. These non-visual BCI approaches are especially useful for severely disabled patients with poor vision. From a broader perspective, multisensory BCIs may offer more versatile and user-friendly paradigms for control and feedback. This chapter describes current systems used within auditory and somatosensory BCI research. Four categories of noninvasive BCI paradigms are employed: (1) P300 evoked potentials, (2) steady-state evoked potentials, (3) slow cortical potentials, and (4) mental tasks. Comparing visual and non-visual BCIs, we propose and discuss different possible multisensory combinations, as well as their pros and cons. We conclude by discussing potential future research directions of multisensory BCIs and related research questions.