973 results for Visual feedback


Relevance:

80.00%

Abstract:

Utilising advanced technologies, such as virtual environments (VEs), is of importance to training and education. The need to develop and effectively apply interactive, immersive 3D VEs continues to grow. As with any emerging technology, user acceptance of new software and hardware devices is often difficult to measure, and guidelines to introduce and ensure adequate and correct usage of such technologies are lacking. It is therefore imperative to obtain a solid understanding of the important elements that play a role in effective learning through VEs. In particular, 3D VEs may present unusual and varied interaction and adoption considerations. The major contribution of this study is to investigate a complex set of interrelated factors in the relatively new sphere of VEs for training and education. Although many of these factors appear to be important from past research, researchers have not explicitly studied a comprehensive set of interdependent, empirically validated factors in order to understand how VEs aid complex procedural knowledge and motor skill learning. By integrating theory from research on training, human-computer interaction (HCI), ergonomics and cognitive psychology, this research proposes and validates a model that contributes to application-specific VE efficacy formation. The findings of this study show that visual feedback has a significant effect on performance. For tactile/force feedback and auditory feedback, no significant effects were found. For satisfaction, user control is salient for performance. Other factors, such as interactivity and system comfort, as well as level of task difficulty, also showed effects on performance.

Relevance:

70.00%

Abstract:

This paper presents an image-based visual servoing system intended for tracking and obtaining scientific observations of the HIFiRE vehicles. The primary aim of this tracking platform is to acquire and track the thermal signature emitted from the surface of the vehicle during the re-entry phase of the mission using an infrared camera. The implemented visual servoing scheme uses a classical image-based approach to identify and track the target using visual kinematic control. The paper uses simulation and experimental results to show the tracking performance of the system using visual feedback. The current implementation and control techniques to further improve the performance of the system are also discussed.
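
As an illustration of the classical image-based approach mentioned in this abstract, the sketch below computes a camera velocity command from the pixel error of a single tracked point feature. The focal length, feature depth and gain are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of a classical image-based visual servoing (IBVS) law for a
# point feature (e.g. the centroid of the detected thermal signature).
import numpy as np

def ibvs_rate_command(u, v, cx, cy, Z, f, lam=0.5):
    """Camera velocity screw that drives the detected feature (u, v) towards the
    image centre (cx, cy), using the standard point-feature interaction matrix."""
    x, y = (u - cx) / f, (v - cy) / f        # normalised image coordinates of the feature
    e = np.array([x, y])                     # image-plane error (goal: centre the target)
    # Interaction (image Jacobian) matrix for a single point at assumed depth Z.
    L = np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x**2), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y**2, -x * y, -x],
    ])
    return -lam * np.linalg.pinv(L) @ e      # classical IBVS law: v = -lambda * L^+ * e

# Example: feature detected 40 px right of and 25 px above the image centre.
cmd = ibvs_rate_command(u=360.0, v=215.0, cx=320.0, cy=240.0, Z=5.0, f=800.0)
print(cmd)  # 6-element camera velocity command (vx, vy, vz, wx, wy, wz)
```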

Relevance:

70.00%

Abstract:

There is increasing interest in the use of Unmanned Aerial Vehicles for load transportation, in applications ranging from environmental remote sensing to construction and parcel delivery. One of the main challenges is accurate control of the load position and trajectory. This paper presents an assessment of real flight trials for the control of an autonomous multi-rotor with a suspended slung load, using only visual feedback to determine the load position. The method uses an onboard camera and a common visual marker detection algorithm to robustly detect the load location. The load position is calculated by an onboard processor and transmitted over a wireless network to a ground station, which integrates MATLAB/Simulink, the Robot Operating System (ROS) and a Model Predictive Controller (MPC) to control both the load and the UAV. To evaluate the system performance, the position of the load determined by the visual detection system in real flight is compared with data received from a motion tracking system. The multi-rotor position tracking performance is also analyzed by conducting flight trials using perfect load position data and data obtained only from the visual system. Results show very accurate estimation of the load position (~5% offset) using only the visual system and demonstrate that an external motion tracking system is not needed for this task.
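
The paper's own detection pipeline is not reproduced here; the hedged sketch below only illustrates the underlying geometry of recovering a slung-load position from a marker's pixel coordinates seen by a downward-facing onboard camera. The camera intrinsics, cable length and frame alignment are arbitrary assumptions.

```python
# Back-project the marker pixel to a ray in the camera frame and scale it by the
# known cable length to estimate the 3D load position (camera frame ~ body frame).
import numpy as np

K = np.array([[700.0, 0.0, 320.0],      # assumed camera intrinsics (fx, fy, cx, cy)
              [0.0, 700.0, 240.0],
              [0.0, 0.0, 1.0]])
cable_length = 1.5                       # assumed slung-load cable length in metres

def load_position_from_pixel(u, v):
    """Estimate the load position from the marker's pixel coordinates."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    ray /= np.linalg.norm(ray)
    return cable_length * ray            # [x, y, z] estimate of the load position

def swing_angles(p):
    """Longitudinal and lateral swing angles of the cable from the load position."""
    x, y, z = p
    return np.arctan2(x, z), np.arctan2(y, z)

p = load_position_from_pixel(380.0, 260.0)
print(p, swing_angles(p))                # this estimate would be streamed to the MPC
```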

Relevance:

70.00%

Abstract:

Many forms of formative feedback are used in dance training to refine the dancer's spatial and kinaesthetic awareness so that the dancer's sensorimotor intentions and observable danced outcomes might converge. This paper documents the use of smartphones to record and play back movement sequences in ballet and contemporary technique classes. Peers in pairs took turns filming one another and then analysing the playback. This provided immediate visual feedback of the movement sequence as performed by each dancer. This immediacy facilitated the dancer's capacity to associate what they felt as they were dancing with what they looked like during the dance. The often-dissonant realities of self-perception and perception by others were thus guided towards harmony, generating improved performance and knowledge relating to dance technique. An approach is offered for the potential development of peer review activities to support summative progressive assessment in dance technique training.

Relevance:

70.00%

Abstract:

Real-time acquisition of EMG during functional MRI (fMRI) provides a novel method of controlling motor experiments in the scanner using feedback of EMG. Because of the redundancy in the human muscle system, this is not possible from recordings of joint torque and kinematics alone, because these provide no information about individual muscle activation. This is particularly critical during brain imaging, because brain activations are related not only to joint torques and kinematics but also to individual muscle activation. However, EMG collected during imaging is corrupted by large artifacts induced by the varying magnetic fields and radio frequency (RF) pulses in the scanner. Methods proposed in the literature for artifact removal are complex, computationally expensive, and difficult to implement for real-time noise removal. We describe an acquisition system and algorithm that enables real-time acquisition for the first time. The algorithm removes the particular frequencies of the EMG spectrum in which the noise is concentrated. Although this decreases the power content of the EMG, the method provides excellent estimates of EMG with good resolution. Comparisons show that the cleaned EMG obtained with the algorithm is, like actual EMG, very well correlated with joint torque and can thus be used for real-time visual feedback during functional studies.
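
A minimal sketch of the general idea, not the authors' exact algorithm: artifact energy concentrated at known frequencies can be suppressed with cascaded notch filters applied causally, so the cleaning can run block-by-block in real time. The sampling rate, interference frequency and Q factor below are assumptions.

```python
# Suppress narrow frequency bands where scanner-induced noise concentrates by
# cascading IIR notch filters at an assumed interference frequency and its harmonics.
import numpy as np
from scipy.signal import iirnotch, lfilter

fs = 5000.0                # assumed EMG sampling rate (Hz)
f_interference = 550.0     # assumed fundamental of the scanner artifact (Hz)
q_factor = 30.0            # notch sharpness

def clean_emg(emg):
    """Causally filter a 1-D EMG segment, removing the assumed artifact bands."""
    out = np.asarray(emg, dtype=float)
    for harmonic in (1, 2, 3, 4):
        f0 = f_interference * harmonic
        if f0 >= fs / 2:               # skip notches at or above the Nyquist frequency
            break
        b, a = iirnotch(f0, q_factor, fs=fs)
        out = lfilter(b, a, out)       # causal, so usable on streaming data
    return out

# Example on a synthetic segment: broadband EMG-like noise plus a 550 Hz artifact.
t = np.arange(0, 1.0, 1.0 / fs)
segment = np.random.randn(t.size) + 5.0 * np.sin(2 * np.pi * f_interference * t)
print(clean_emg(segment).std())        # artifact power is strongly reduced
```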

Relevance:

70.00%

Abstract:

Virtual assembly environment (VAE) technology has great potential for benefiting manufacturing applications in industry. Usability is an important aspect of the VAE. This paper presents the usability evaluation of a developed multi-sensory VAE. The evaluation addresses three attributes: (a) efficiency of use, (b) user satisfaction, and (c) reliability, assessed through task completion times (TCTs), questionnaires, and human performance error rates (HPERs), respectively. A peg-in-a-hole task and a Sener electronic box assembly task were used in the experiments, with sixteen participants. The outcomes showed that the introduction of 3D auditory and/or visual feedback could improve usability. They also indicated that the integrated feedback (visual plus auditory) offered better usability than either feedback used in isolation. Most participants preferred the integrated feedback to either single-modality feedback (visual or auditory) or no feedback. The participants' comments demonstrated that unrealistic or inappropriate feedback had negative effects on usability and easily made them feel frustrated. The possible reasons behind the outcomes are also analysed. © 2007 ACADEMY PUBLISHER.

Relevance:

70.00%

Abstract:

The purpose of the present study was to determine which augmented sensory modality would best develop the subjective error-detection capabilities of learners performing a spatial-temporal task on a touch screen monitor. Participants were required to learn a 5-digit key-pressing task with a goal time of 2550 ms over 100 acquisition trials on a touch screen. Participants were randomized into 1 of 4 groups: 1) visual feedback (colour change of the button when selected), 2) auditory feedback (click sound when the button was selected), 3) visual-auditory feedback (both colour change and click sound when the button was selected), and 4) no feedback (no colour change or click sound when the button was selected). Following each trial, participants were required to provide a subjective estimate of their performance time in relation to the actual time it took them to complete the 5-digit sequence. A no-KR retention test was conducted approximately 24 hours after the last completed acquisition trial. Results showed that practicing a timing task on a touch screen augmented with both visual and auditory information may have differentially impacted motor skill acquisition, such that removal of one or both sources of augmented feedback did not result in a severe detriment to the timing performance or error-detection capabilities of the learner. The present study reflects the importance of multimodal augmented feedback conditions for maximizing the cognitive abilities needed to develop a stronger motor memory for subjective error detection and correction.

Relevance:

70.00%

Abstract:

The feedback mechanism used in a brain-computer interface (BCI) forms an integral part of the closed-loop learning process required for successful operation of a BCI. However, the ultimate success of the BCI may depend upon the modality of the feedback used. This study explores the use of music tempo as a feedback mechanism in BCI and compares it to the more commonly used visual feedback mechanism. Three different feedback modalities are compared for a kinaesthetic motor imagery BCI: visual, auditory via music tempo, and a combined visual and auditory feedback modality. Visual feedback is provided via the position, on the y-axis, of a moving ball. In the music feedback condition, the tempo of a piece of continuously generated music is dynamically adjusted via a novel music-generation method. All the feedback mechanisms allowed users to learn to control the BCI. However, users were not able to maintain control as stably in the music tempo feedback condition as they could in the visual feedback and combined conditions. Additionally, the combined condition exhibited significantly less inter-user variability, suggesting that multi-modal feedback may lead to more robust results. Finally, common spatial patterns are used to identify participant-specific spatial filters for each of the feedback modalities. The mean optimal spatial filter obtained for the music feedback condition is observed to be more diffuse and weaker than the mean spatial filters obtained for the visual and combined feedback conditions.
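
For readers unfamiliar with the common spatial patterns step, the sketch below shows one standard way to compute CSP filters from two-class, band-passed EEG trials via a generalised eigendecomposition. The array shapes, number of filter pairs and random stand-in data are illustrative assumptions, not details from this study.

```python
# Derive spatial filters that maximise the variance ratio between two classes of EEG trials.
import numpy as np
from scipy.linalg import eigh

def class_covariance(trials):
    """Average normalised spatial covariance over trials shaped (n_trials, n_channels, n_samples)."""
    covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
    return np.mean(covs, axis=0)

def csp_filters(trials_a, trials_b, n_pairs=3):
    """Solve C_a w = lambda (C_a + C_b) w and keep the eigenvectors with the most
    extreme eigenvalues (the most discriminative spatial filters)."""
    c_a, c_b = class_covariance(trials_a), class_covariance(trials_b)
    eigvals, eigvecs = eigh(c_a, c_a + c_b)          # eigenvalues in ascending order
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T                        # (2*n_pairs, n_channels) filters

# Example with random stand-ins for imagery vs. rest trials (20 trials, 22 channels, 500 samples).
rng = np.random.default_rng(0)
W = csp_filters(rng.standard_normal((20, 22, 500)), rng.standard_normal((20, 22, 500)))
print(W.shape)  # (6, 22)
```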

Relevance:

70.00%

Abstract:

AIM:
To conduct a pilot study to explore the potential impact of visual feedback of personal retinal images on diabetes outcomes.

METHODS:
Twenty-five participants with non-proliferative diabetic retinopathy and suboptimal HbA1c (> 53 mmol/mol; > 7%) were randomized to receive visual feedback of their own retinal images or to a control group. At baseline and 3-month follow-up, HbA1c, standard measures of beliefs, diabetes-related distress and self-care activities were assessed.

RESULTS:
In unadjusted models, relative to controls, the intervention group showed significantly greater improvement in HbA1c at 3-month follow-up (–0.6% vs. +0.3%, P < 0.01), as well as enhanced motivation to improve blood glucose management (P < 0.05).

CONCLUSIONS:
This small pilot study provides preliminary evidence that visual feedback of personal retinal images may offer a practical educational strategy for clinicians in eye care services to improve diabetes outcomes in non-target compliant patients. A fully powered randomized controlled trial is required to confirm these findings and determine the optimal use of feedback to produce sustained effects.

Relevance:

70.00%

Abstract:

BACKGROUND:
Clinical decision-making is a complex activity that is critical to patient safety. Simulation, augmented by feedback, affords learners the opportunity to learn critical clinical decision-making skills. More detailed feedback following simulation exercises has the potential to further enhance student learning, particularly in relation to developing improved clinical decision-making skills.

OBJECTIVE:
To investigate the feasibility of head-mounted video camera recordings to augment feedback following acute patient deterioration simulations.

DESIGN:
Pilot study using an observational design.

METHODS:
Ten final-year nursing students participated in three simulation exercises, each focussed on detection and management of patient deterioration. Two observers collected behavioural data, using an adapted version of Gaba's Clinical Simulation Tool, to provide verbal feedback to each participant following each simulation exercise. Participants wore a head-mounted video camera during the second simulation exercise only. Video recordings were replayed to participants to augment feedback following the second simulation exercise. Data were collected on: participant performance (observed and perceived); participant perceptions of feedback methods; and head-mounted video camera recording feasibility and capability for detailed audio-visual feedback.

RESULTS:
Management of patient deterioration improved for six participants (60%). Increased perceptions of confidence (70%) and competence (80%) were reported by the majority of participants. Few participants (20%) agreed that the video recording specifically enhanced their learning. The visual field of the head-mounted video camera was not always synchronised with the participant's field of vision, thus affecting the usefulness of some recordings.

CONCLUSION:
The usefulness of the video recordings for enhancing verbal feedback to participants on detection and management of simulated patient deterioration was inconclusive. Modification of the video camera glasses, to improve synchronisation with participants' actual visual field, is recommended to further explore this technology for enhancing student performance.

Relevance:

70.00%

Abstract:

This work proposes a kinematic control scheme using visual feedback for a robot arm with five degrees of freedom. Using computer vision techniques, a method was developed to determine the Cartesian 3D position and orientation (pose) of the robot arm from a robot image obtained through a camera. A colored triangular label is placed on the robot manipulator tool, and efficient heuristic rules are used to obtain the vertices of that label in the image. The tool pose is obtained from those vertices through numerical methods. A color calibration scheme based on the K-means algorithm was implemented to guarantee the robustness of the vision system in the presence of lighting variations. The extrinsic camera parameters are computed from the image of four coplanar points whose Cartesian 3D coordinates, relative to a fixed frame, are known. Two distinct tool poses obtained from the image, initial and final, are interpolated to generate a desired trajectory in Cartesian space. The error signal in the proposed control scheme is the difference between the desired tool pose and the actual tool pose. Gains are applied to the error signal, and the resulting signal is mapped to joint increments using the pseudoinverse of the manipulator Jacobian matrix. These increments are applied to the manipulator joints, moving the tool to the desired pose.
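
A minimal sketch of this kind of pseudoinverse-based kinematic loop, using a planar three-link arm as a runnable stand-in for the five-DOF manipulator and controlling position only: the Cartesian error is scaled by a gain and mapped to joint increments through the Jacobian pseudoinverse. Link lengths, gain and target are illustrative assumptions.

```python
import numpy as np

L = np.array([0.3, 0.25, 0.15])                     # assumed link lengths (m)

def forward_kinematics(q):
    angles = np.cumsum(q)
    return np.array([np.sum(L * np.cos(angles)), np.sum(L * np.sin(angles))])

def jacobian(q):
    angles = np.cumsum(q)
    J = np.zeros((2, 3))
    for i in range(3):                              # column i: effect of joint i on (x, y)
        J[0, i] = -np.sum(L[i:] * np.sin(angles[i:]))
        J[1, i] = np.sum(L[i:] * np.cos(angles[i:]))
    return J

def kinematic_step(q, pose_desired, gain=0.5):
    """One iteration: Cartesian error -> gain -> pseudoinverse mapping -> joint increments."""
    error = pose_desired - forward_kinematics(q)    # error supplied by the vision system
    dq = np.linalg.pinv(jacobian(q)) @ (gain * error)
    return q + dq

q = np.array([0.2, 0.4, 0.1])                       # current joint configuration
target = np.array([0.35, 0.40])                     # desired tool position on the trajectory
for _ in range(50):
    q = kinematic_step(q, target)
print(forward_kinematics(q))                        # converges towards the target position
```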

Relevance:

70.00%

Abstract:

In conventional robot manipulator control, the desired path is specified in Cartesian space and converted to joint space through an inverse kinematics mapping. The joint references generated by this mapping are used for dynamic control in joint space. Thus, the end-effector position is, in fact, controlled indirectly, in open loop, and the accuracy of grip position control depends directly on the accuracy of the available kinematic model. In this report, a new scheme for redundant manipulator kinematic control, based on visual servoing, is proposed. In the proposed system, a robot image acquired through a CCD camera is processed in order to compute the position and orientation of each link of the robot arm. The robot task is specified as a temporal sequence of reference images of the robot arm. Thus, both the measured pose and the reference pose are specified in the same image space, and their difference is used to generate a Cartesian-space error for kinematic control purposes. The proposed control scheme was applied to a four-degree-of-freedom planar redundant robot arm; experimental results are presented.

Relevance:

70.00%

Abstract:

Single limb stance (SLS) under visual and proprioceptive disturbances is widely used in clinical settings to improve balance in a wide range of functional disabilities. However, the precise roles of vision and proprioception in SLS are not completely understood. The objectives of this study were to test the hypothesis that, when ankle proprioception is perturbed, the role of vision in postural control increases with the difficulty of the standing task, and to test the effect of vision during postural adaptation after withdrawal of the somesthetic perturbation during double and single limb stance. Eleven males were submitted to double limb stance (DLS) and SLS under conditions of normal or reduced vision, both with normal and perturbed proprioception. Center of pressure parameters were analyzed across conditions. Vision had a main effect in SLS, whereas the proprioception perturbation showed effects only during DLS. Baseline stability was promptly achieved, independently of visual input, after proprioception reintegration. In conclusion, the role of vision increases in SLS, and after proprioception reintegration vision does not affect postural recovery. Balance training programs must take this into account. © 2011 Elsevier Ltd.

Relevance:

70.00%

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)