10 results for Visual feedback

in CentAUR: Central Archive at the University of Reading - UK


Relevance:

70.00%

Publisher:

Abstract:

The feedback mechanism used in a brain-computer interface (BCI) forms an integral part of the closed-loop learning process required for successful operation of a BCI. However, ultimate success of the BCI may be dependent upon the modality of the feedback used. This study explores the use of music tempo as a feedback mechanism in BCI and compares it to the more commonly used visual feedback mechanism. Three different feedback modalities are compared for a kinaesthetic motor imagery BCI: visual, auditory via music tempo, and a combined visual and auditory feedback modality. Visual feedback is provided via the position, on the y-axis, of a moving ball. In the music feedback condition, the tempo of a piece of continuously generated music is dynamically adjusted via a novel music-generation method. All the feedback mechanisms allowed users to learn to control the BCI. However, users were not able to maintain as stable control with the music tempo feedback condition as they could in the visual feedback and combined conditions. Additionally, the combined condition exhibited significantly less inter-user variability, suggesting that multi-modal feedback may lead to more robust results. Finally, common spatial patterns are used to identify participant-specific spatial filters for each of the feedback modalities. The mean optimal spatial filter obtained for the music feedback condition is observed to be more diffuse and weaker than the mean spatial filters obtained for the visual and combined feedback conditions.
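
As a concrete illustration of the last point, common spatial patterns (CSP) filters are typically obtained from the class covariance matrices of band-pass-filtered EEG epochs. The sketch below is a minimal, generic CSP computation on synthetic data; the function name, data shapes, and random inputs are assumptions for illustration, not the study's code.

```python
# Minimal CSP sketch, assuming two classes of band-pass-filtered EEG epochs
# shaped (trials, channels, samples). Illustrative only.
import numpy as np
from scipy.linalg import eigh

def csp_filters(epochs_a, epochs_b):
    """Return spatial filters (rows), ordered by class discriminability."""
    def mean_cov(epochs):
        return np.mean([np.cov(trial) for trial in epochs], axis=0)  # channel covariance
    cov_a, cov_b = mean_cov(epochs_a), mean_cov(epochs_b)
    # Generalized eigenvalue problem: cov_a w = lambda (cov_a + cov_b) w
    eigvals, eigvecs = eigh(cov_a, cov_a + cov_b)
    order = np.argsort(eigvals)[::-1]            # largest variance ratio first
    return eigvecs[:, order].T

# Synthetic example: 20 trials per class, 16 channels, 250 samples per trial.
rng = np.random.default_rng(0)
class_a = rng.standard_normal((20, 16, 250))
class_b = rng.standard_normal((20, 16, 250))
filters = csp_filters(class_a, class_b)
print(filters.shape)   # (16, 16); each row is one spatial filter
```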

Relevance:

60.00%

Publisher:

Abstract:

Visual information is vital for fast and accurate hand movements. It has been demonstrated that allowing free eye movements results in greater accuracy than when the eyes remain centrally fixed. Three explanations as to why free gaze improves accuracy are: shifting gaze to a target allows visual feedback in guiding the hand to the target (feedback loop), shifting gaze generates ocular-proprioception which can be used to update a movement (feedback-feedforward), or efference copy could be used to direct hand movements (feedforward). In this experiment we used a double-step task and manipulated the utility of ocular-proprioceptive feedback from eye to head position by removing the second target during the saccade. We confirm the advantage of free gaze for sequential movements with a double-step pointing task and document eye-hand lead times of approximately 200 ms for both initial and secondary movements. The observation that participants move gaze well ahead of the current hand target dismisses foveal feedback as a major contribution. We argue for a feedforward model based on eye movement efference as the major factor in enabling accurate hand movements. The results with the double-step target task also suggest the need for some buffering of efference and ocular-proprioceptive signals to cope with the situation where the eye has moved to a location ahead of the current target for the hand movement. We estimate that this buffer period may range between 120 and 200 ms without significant impact on hand movement accuracy.
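
Purely to make the quoted timing concrete: the ~200 ms eye-hand lead time amounts to differencing gaze-arrival and hand-arrival timestamps per target. The numbers in the sketch below are invented, not the study's data.

```python
# Hypothetical gaze-arrival and hand-arrival times (ms) for three sequential targets.
import numpy as np

gaze_arrival = np.array([310.0, 905.0, 1490.0])
hand_arrival = np.array([515.0, 1100.0, 1685.0])

lead_times = hand_arrival - gaze_arrival   # positive values mean the eyes lead the hand
print(lead_times)                          # [205. 195. 195.]
print(lead_times.mean())                   # ~198 ms, the order of magnitude quoted above
```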

Relevance:

60.00%

Publisher:

Abstract:

Objective: To evaluate the effect of robot-mediated therapy on arm dysfunction post stroke. Design: A series of single-case studies using a randomized multiple baseline design with ABC or ACB order. Subjects (n = 20) had a baseline length of 8, 9 or 10 data points. They continued measurement during the B - robot-mediated therapy and C - sling suspension phases. Setting: Physiotherapy department, teaching hospital. Subjects: Twenty subjects with varying degrees of motor and sensory deficit completed the study. Subjects attended three times a week, with each phase lasting three weeks. Interventions: In the robot-mediated therapy phase they practised three functional exercises with haptic and visual feedback from the system. In the sling suspension phase they practised three single-plane exercises. Each treatment phase was three weeks long. Main measures: The range of active shoulder flexion, the Fugl-Meyer motor assessment and the Motor Assessment Scale were measured at each visit. Results: Each subject had a varied response to the measurement and intervention phases. The rate of recovery was greater during the robot-mediated therapy phase than in the baseline phase for the majority of subjects. The rate of recovery during the robot-mediated therapy phase was also greater than that during the sling suspension phase for most subjects. Conclusion: The positive treatment effect for both groups suggests that robot-mediated therapy can have a treatment effect greater than the same duration of non-functional exercises. Further studies investigating the optimal duration of treatment in the form of a randomized controlled trial are warranted.
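
One common way to quantify the "rate of recovery" in a multiple-baseline design like this is the slope of the outcome score against visit number within each phase. The sketch below illustrates that comparison with invented scores and phase labels; it is not the study's analysis code.

```python
# Toy phase-by-phase slope comparison for one hypothetical subject.
import numpy as np

def recovery_rate(scores):
    """Least-squares slope of the outcome score against visit index (change per visit)."""
    visits = np.arange(len(scores))
    return np.polyfit(visits, scores, 1)[0]

phases = {
    "baseline (A)":         [12, 12, 13, 12, 13, 13, 14, 13],
    "robot therapy (B)":    [14, 16, 17, 19, 21, 22, 24, 25, 27],
    "sling suspension (C)": [27, 27, 28, 29, 29, 30, 30, 31, 31],
}
for phase, scores in phases.items():
    print(phase, round(recovery_rate(scores), 2))
# A steeper slope in the robot-therapy phase than in baseline or sling suspension
# mirrors the pattern reported for most subjects.
```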

Relevance:

60.00%

Publisher:

Abstract:

Researchers in the rehabilitation engineering community have been designing and developing a variety of passive/active devices to help persons with limited upper extremity function perform essential daily manipulations. Devices range from low-end tools such as head/mouth sticks to sophisticated robots using vision and speech input. While almost all of the high-end equipment developed to date relies on visual feedback alone to guide the user, providing no tactile or proprioceptive cues, the “low-tech” head/mouth sticks deliver better “feel” because of the inherent force feedback through physical contact with the user's body. However, the disadvantage of a conventional head/mouth stick is that it can only function in a limited workspace and its performance is limited by the user's strength. It therefore seems reasonable to attempt to develop a system that exploits the advantages of the two approaches: the power and flexibility of robotic systems with the sensory feedback of a headstick. The system presented in this paper reflects the design philosophy stated above. It contains a pair of master-slave robots, with the master being operated by the user's head and the slave acting as a telestick. Described in this paper are the design, control strategies, implementation and performance evaluation of the head-controlled force-reflecting telestick system.
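
The position-forward / force-back structure behind such a force-reflecting master-slave system can be sketched in a single axis, as below. The gains, signal names, and one-degree-of-freedom setting are assumptions used only to illustrate the idea, not the controller described in the paper.

```python
# One loop iteration of a position-forward / force-back master-slave scheme.
def bilateral_step(x_master, x_slave, f_contact, kp=50.0, k_reflect=0.8):
    """Return (slave command force, force reflected to the master)."""
    # Slave servo: proportional force drives the slave toward the master position.
    f_slave_cmd = kp * (x_master - x_slave)
    # Force reflection: scaled contact force is fed back to the head-operated master.
    f_master_feedback = k_reflect * f_contact
    return f_slave_cmd, f_master_feedback

print(bilateral_step(x_master=0.10, x_slave=0.07, f_contact=2.0))
# ≈ (1.5, 1.6): the slave is pushed toward the master, and the user feels ~80% of the contact force.
```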

Relevance:

60.00%

Publisher:

Abstract:

This paper describes the design, implementation and testing of a high speed controlled stereo “head/eye” platform which facilitates the rapid redirection of gaze in response to visual input. It details the mechanical device, which is based around geared DC motors, and describes hardware aspects of the controller and vision system, which are implemented on a reconfigurable network of general purpose parallel processors. The servo-controller is described in detail and higher level gaze and vision constructs outlined. The paper gives performance figures gained both from mechanical tests on the platform alone, and from closed loop tests on the entire system using visual feedback from a feature detector.
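
To make the closed-loop arrangement concrete, the sketch below shows a generic per-axis gaze servo in which the pixel offset reported by a feature detector drives a saturated proportional velocity command. The gain, loop rate, and saturation limit are illustrative assumptions, not the platform's actual servo-controller.

```python
# Generic per-axis visual servo: map pixel error to a commanded axis velocity.
def axis_velocity(pixel_error, kp=0.004, v_max=2.0):
    """Proportional velocity command (rad/s) with saturation."""
    v = kp * pixel_error
    return max(-v_max, min(v_max, v))

for err in [600, 300, 120, 40, 5]:   # feature converging on the image centre
    print(axis_velocity(err))
# ≈ 2.0, 1.2, 0.48, 0.16, 0.02 — saturated far from centre, gentle near it.
```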

Relevance:

60.00%

Publisher:

Abstract:

The authors demonstrate four real-time reactive responses to movement in everyday scenes using an active head/eye platform. They first describe the design and realization of a high-bandwidth four-degree-of-freedom head/eye platform and visual feedback loop for the exploration of motion processing within active vision. The vision system divides processing into two scales and two broad functions. At a coarse, quasi-peripheral scale, detection and segmentation of new motion occurs across the whole image, and at fine scale, tracking of already detected motion takes place within a foveal region. Several simple coarse-scale motion sensors, which run concurrently at 25 Hz with latencies of around 100 ms, are detailed. The use of these sensors to drive the following real-time responses is discussed: (1) head/eye saccades to moving regions of interest; (2) a panic response to looming motion; (3) an opto-kinetic response to continuous motion across the image; and (4) smooth pursuit of a moving target using motion alone.
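
The coarse-scale stage can be pictured as whole-image frame differencing whose motion centroid becomes the saccade target. The sketch below illustrates that idea on synthetic frames; the threshold, image size, and centroid rule are assumptions, not the sensors detailed in the paper.

```python
# Toy coarse-scale motion sensor: frame differencing plus centroid of moving pixels.
import numpy as np

def motion_saccade_target(prev_frame, frame, threshold=25):
    """Return (row, col) of the motion centroid, or None if too little motion."""
    diff = np.abs(frame.astype(np.int16) - prev_frame.astype(np.int16))
    moving = diff > threshold
    if moving.sum() < 50:                  # ignore sensor noise
        return None
    rows, cols = np.nonzero(moving)
    return rows.mean(), cols.mean()        # gaze would be redirected here

# Synthetic example: a bright patch appears in the lower-right of a 128x128 image.
prev = np.zeros((128, 128), dtype=np.uint8)
curr = prev.copy()
curr[90:110, 100:120] = 200
print(motion_saccade_target(prev, curr))   # ~(99.5, 109.5)
```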

Relevance:

60.00%

Publisher:

Abstract:

Individuals with schizophrenia, particularly those with passivity symptoms, may not feel in control of their actions, believing them to be controlled by external agents. Cognitive operations that contribute to these symptoms may include abnormal processing in agency as well as body representations that deal with body schema and body image. However, these operations in schizophrenia are not fully understood, and the questions of general versus specific deficits in individuals with different symptom profiles remain unanswered. Using the projected-hand illusion (a digital video version of the rubber-hand illusion) with synchronous and asynchronous stroking (500 ms delay), and a hand laterality judgment task, we assessed sense of agency, body image, and body schema in 53 people with clinically stable schizophrenia (with a current, past, and no history of passivity symptoms) and 48 healthy controls. The results revealed a stable trait in schizophrenia with no difference between clinical subgroups (sense of agency) and some quantitative (specific) differences depending on the passivity symptom profile (body image and body schema). Specifically, a reduced sense of self-agency was a common feature of all clinical subgroups. However, subgroup comparisons showed that individuals with passivity symptoms (both current and past) had significantly greater deficits on tasks assessing body image and body schema, relative to the other groups. In addition, patients with current passivity symptoms failed to demonstrate the normal reduction in body illusion typically seen with a 500 ms delay in visual feedback (asynchronous condition), suggesting internal timing problems. Altogether, the results underscore self-abnormalities in schizophrenia, provide evidence for both trait abnormalities and state changes specific to passivity symptoms, and point to a role for internal timing deficits as a mechanistic explanation for external cues becoming a possible source of self-body input.
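
The asynchronous condition amounts to replaying the video of the hand with a fixed 500 ms lag, which a simple frame buffer can provide. The sketch below is a generic illustration under an assumed frame rate, not the apparatus used in the study.

```python
# Generic fixed-delay video buffer, assuming a 30 fps stream.
from collections import deque

class DelayedVideo:
    """Show each incoming frame after a fixed delay by holding it in a FIFO buffer."""
    def __init__(self, delay_ms=500, fps=30):
        self.delay_frames = delay_ms * fps // 1000     # 500 ms at 30 fps = 15 frames
        self.buffer = deque()

    def show(self, frame):
        """Push the live frame; return the frame to display, or None while the buffer fills."""
        self.buffer.append(frame)
        if len(self.buffer) > self.delay_frames:
            return self.buffer.popleft()
        return None

video = DelayedVideo()
displayed = [video.show(i) for i in range(20)]         # frames numbered 0..19
print(displayed)   # None for the first 15 frames, then 0, 1, 2, 3, 4 (a 500 ms lag)
```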

Relevance:

30.00%

Publisher:

Abstract:

Visual telepresence seeks to extend existing teleoperative capability by supplying the operator with a 3D interactive view of the remote environment. This is achieved through the use of a stereo camera platform which, through appropriate 3D display devices, provides a distinct image to each eye of the operator, and which is slaved directly from the operator's head and eye movements. However, the resolution within current head-mounted displays remains poor, thereby reducing the operator's visual acuity. This paper reports on the feasibility of incorporating eye tracking to increase resolution and investigates the stability and control issues for such a system. Continuous-domain and discrete simulations are presented which indicate that eye tracking provides a stable feedback loop for tracking applications, though some empirical testing (currently being initiated) of such a system will be required to overcome the indicated stability problems associated with micro-saccades of the human operator.
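
A flavour of the discrete-domain stability question can be given with a toy delayed proportional loop: the displayed region is corrected toward a gaze step using a measurement that lags by a few samples. The model, gains, and delays below are assumptions chosen only to show how loop gain and latency interact, not the simulations reported in the paper.

```python
# Toy discrete-time loop: delayed proportional correction toward a gaze step.
def simulate(gain, delay_samples, steps=40):
    """Step response where the correcting feedback uses a delayed measurement."""
    gaze = 1.0                                  # gaze jumps to a new location (step input)
    region = [0.0] * (delay_samples + 1)
    for _ in range(steps):
        delayed_measurement = region[-1 - delay_samples]
        region.append(region[-1] + gain * (gaze - delayed_measurement))
    return region

for gain in (0.3, 1.2):
    trace = simulate(gain, delay_samples=2)
    print(gain, [round(x, 2) for x in trace[-4:]])
# The low-gain loop settles near the gaze position; the high-gain loop with the same
# delay oscillates and grows, illustrating the kind of gain-and-latency instability
# the abstract is concerned with.
```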

Relevance:

30.00%

Publisher:

Abstract:

Previous climate model simulations have shown that the configuration of the Earth's orbit during the early to mid-Holocene (approximately 10–5 kyr) can account for the generally warmer-than-present conditions experienced by the high latitudes of the northern hemisphere. New simulations for 6 kyr with two atmospheric/mixed-layer ocean models (Community Climate Model, version 1, CCM1, and Global ENvironmental and Ecological Simulation of Interactive Systems, version 2, GENESIS 2) are presented here and compared with results from two previous simulations with GENESIS 1 that were obtained with and without the albedo feedback due to climate-induced poleward expansion of the boreal forest. The climate model results are summarized in the form of potential vegetation maps obtained with the global BIOME model, which facilitates visual comparisons both among models and with pollen and plant macrofossil data recording shifts of the forest-tundra boundary. A preliminary synthesis shows that the forest limit was shifted 100–200 km north in most sectors. Both CCM1 and GENESIS 2 produced a shift of this magnitude. GENESIS 1, however, produced too small a shift, except when the boreal forest albedo feedback was included. The feedback in this case was estimated to have amplified forest expansion by approximately 50%. The forest limit changes also show meridional patterns (greatest expansion in central Siberia and little or none in Alaska and Labrador) which have yet to be reproduced by models. Further progress in understanding the processes involved in the response of climate and vegetation to orbital forcing will require both the deployment of coupled atmosphere-biosphere-ocean models and the development of more comprehensive observational data sets.

Relevance:

30.00%

Publisher:

Abstract:

During the past decade, brain–computer interfaces (BCIs) have developed rapidly, in both technological and application domains. However, most of these interfaces rely on the visual modality. Only some research groups have been studying non-visual BCIs, primarily based on auditory and, sometimes, somatosensory signals. These non-visual BCI approaches are especially useful for severely disabled patients with poor vision. From a broader perspective, multisensory BCIs may offer more versatile and user-friendly paradigms for control and feedback. This chapter describes current systems used within auditory and somatosensory BCI research. Four categories of noninvasive BCI paradigms are employed: (1) P300 evoked potentials, (2) steady-state evoked potentials, (3) slow cortical potentials, and (4) mental tasks. Comparing visual and non-visual BCIs, we propose and discuss different possible multisensory combinations, as well as their pros and cons. We conclude by discussing potential future research directions of multisensory BCIs and related research questions.
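
As a concrete illustration of category (1), the sketch below averages synthetic EEG epochs time-locked to attended versus ignored stimuli and compares the responses around 300 ms. The sampling rate, window, and data are assumptions, not code from the chapter.

```python
# Toy P300 illustration: epoch averaging reveals a response near 300 ms for attended stimuli.
import numpy as np

fs = 250                                   # assumed sampling rate (Hz)
t = np.arange(0, 0.8, 1 / fs)              # 0.8 s epochs
rng = np.random.default_rng(1)

def make_epochs(n, with_p300):
    """Noise epochs, optionally with a positive deflection near 300 ms."""
    epochs = rng.standard_normal((n, t.size))
    if with_p300:
        epochs += 2.0 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))
    return epochs

attended = make_epochs(40, with_p300=True).mean(axis=0)
ignored = make_epochs(40, with_p300=False).mean(axis=0)
window = (t > 0.25) & (t < 0.35)
print(attended[window].mean() > ignored[window].mean() + 1.0)  # True: P300-like response present
```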