961 results for Visual Feedback
Abstract:
There is increased interest in the use of Unmanned Aerial Vehicles (UAVs) for load transportation, from environmental remote sensing to construction and parcel delivery. One of the main challenges is accurate control of the load position and trajectory. This paper presents an assessment of real flight trials for the control of an autonomous multi-rotor with a suspended slung load, using only visual feedback to determine the load position. The method uses an onboard camera and a common visual marker detection algorithm to robustly detect the load location. The load position is calculated on an onboard processor and transmitted over a wireless network to a ground station that integrates MATLAB/Simulink, the Robot Operating System (ROS), and a Model Predictive Controller (MPC) to control both the load and the UAV. To evaluate system performance, the load position determined by the visual detection system in real flight is compared with data received from a motion tracking system. The multi-rotor position tracking performance is also analyzed by conducting flight trials using perfect load position data and data obtained only from the visual system. Results show very accurate estimation of the load position (~5% offset) using only the visual system and demonstrate that an external motion tracking system is not needed for this task.
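In its simplest form, the marker-based load localisation described above reduces to a pinhole-camera calculation: once the marker's pixel position and apparent size are known, its position in the camera frame follows from the camera intrinsics. A minimal sketch under an ideal pinhole model (the focal lengths, principal point, and marker size below are illustrative, not the paper's calibration, which relies on a marker detection library):

```python
def load_position_from_marker(u, v, marker_px, marker_m, fx, fy, cx, cy):
    """Estimate the marker (load) position in the camera frame from its
    pixel centre (u, v) and apparent side length, under an ideal pinhole
    model with focal lengths (fx, fy) and principal point (cx, cy) in pixels.
    All numeric parameters are illustrative assumptions."""
    # Similar triangles: marker_px / fx ~= marker_m / Z
    z = fx * marker_m / marker_px
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    return x, y, z

# Demo: a 0.10 m marker seen 60 px wide at the image centre of a camera
# with fx = fy = 600 px lies about 1 m in front of the camera, on its axis.
pos = load_position_from_marker(320, 240, 60, 0.10, 600, 600, 320, 240)
```

In practice a marker library also returns orientation, and lens distortion must be removed first; this sketch shows only the depth-from-apparent-size geometry.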
Abstract:
Many forms of formative feedback are used in dance training to refine the dancer's spatial and kinaesthetic awareness, so that the dancer's sensorimotor intentions and observable danced outcomes might converge. This paper documents the use of smartphones to record and play back movement sequences in ballet and contemporary technique classes. Peers in pairs took turns filming one another and then analysing the playback. This provided immediate visual feedback on the movement sequence as performed by each dancer. This immediacy facilitated the dancer's capacity to associate what they felt as they were dancing with what they looked like during the dance. The often-dissonant realities of self-perception and perception by others were thus guided towards harmony, generating improved performance and knowledge relating to dance technique. An approach is offered for the potential development of peer review activities to support summative progressive assessment in dance technique training.
Abstract:
Real-time acquisition of EMG during functional MRI (fMRI) provides a novel method of controlling motor experiments in the scanner using feedback of EMG. Because of the redundancy in the human muscle system, this is not possible from recordings of joint torque and kinematics alone, because these provide no information about individual muscle activation. This is particularly critical during brain imaging because brain activations are related not only to joint torques and kinematics but also to individual muscle activation. However, EMG collected during imaging is corrupted by large artifacts induced by the varying magnetic fields and radio frequency (RF) pulses in the scanner. Methods proposed in the literature for artifact removal are complex, computationally expensive, and difficult to implement for real-time noise removal. We describe an acquisition system and algorithm that enables real-time acquisition for the first time. The algorithm removes the particular frequencies of the EMG spectrum in which the noise is concentrated. Although this decreases the power content of the EMG, the method provides excellent estimates of EMG with good resolution. Comparisons show that the cleaned EMG obtained with the algorithm is, like actual EMG, very well correlated with joint torque and can thus be used for real-time visual feedback during functional studies.
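The artifact-removal idea described above, suppressing the particular frequencies in which scanner noise is concentrated, can be sketched with a naive DFT: zero the offending bins (and their mirror bins, so the reconstruction stays real) and invert. This is an illustrative stand-in, not the authors' implementation, and the bin indices and synthetic signal below are made up:

```python
import cmath
import math

def remove_frequency_bins(signal, bins_to_zero):
    """Suppress selected frequency components of a real signal by zeroing
    the corresponding DFT bins (and their mirror bins), then inverting.
    Naive O(n^2) DFT -- fine for a short illustrative signal."""
    n = len(signal)
    # Forward DFT
    spectrum = [sum(signal[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                    for t in range(n)) for k in range(n)]
    # Zero the noise bins and their conjugate mirrors
    for k in bins_to_zero:
        spectrum[k] = 0
        spectrum[(n - k) % n] = 0
    # Inverse DFT; the imaginary part is numerical noise only
    return [sum(spectrum[k] * cmath.exp(2j * cmath.pi * k * t / n)
                for k in range(n)).real / n for t in range(n)]

# Demo: a 64-sample "EMG" sinusoid (bin 3) corrupted by "scanner noise"
# concentrated at bin 10; zeroing bin 10 recovers the EMG component.
n = 64
noisy = [math.sin(2 * math.pi * 3 * t / n)
         + 0.8 * math.sin(2 * math.pi * 10 * t / n) for t in range(n)]
cleaned = remove_frequency_bins(noisy, [10])
```

A real-time system would use an FFT over sliding windows and remove harmonics of the gradient-switching frequency; the bin-zeroing principle is the same.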
Abstract:
Virtual assembly environment (VAE) technology has great potential to benefit manufacturing applications in industry. Usability is an important aspect of a VAE. This paper presents a usability evaluation of a developed multi-sensory VAE. The evaluation addresses three attributes: (a) efficiency of use; (b) user satisfaction; and (c) reliability, measured via task completion times (TCTs), questionnaires, and human performance error rates (HPERs), respectively. A peg-in-a-hole task and a Sener electronic box assembly task were used in the experiments, with sixteen participants. The outcomes showed that the introduction of 3D auditory and/or visual feedback could improve usability. They also indicated that integrated feedback (visual plus auditory) offered better usability than either feedback modality used in isolation. Most participants preferred the integrated feedback to either single modality (visual or auditory) or no feedback. The participants' comments demonstrated that unrealistic or inappropriate feedback had negative effects on usability and easily made them feel frustrated. The possible reasons behind these outcomes are also analysed. © 2007 ACADEMY PUBLISHER.
Abstract:
The purpose of the present study was to determine which augmented sensory modality would best develop the subjective error-detection capabilities of learners performing a spatial-temporal task on a touch screen monitor. Participants were required to learn a 5-digit key-pressing task in a goal time of 2550 ms over 100 acquisition trials on a touch screen. Participants were randomized into 1 of 4 groups: 1) visual feedback (colour change of the button when selected), 2) auditory feedback (click sound when the button was selected), 3) visual-auditory feedback (both colour change and click sound when the button was selected), and 4) no feedback (no colour change or click sound when the button was selected). Following each trial, participants were required to provide a subjective estimate of their performance time in relation to the actual time it took them to complete the 5-digit sequence. A no-KR retention test was conducted approximately 24 hours after the last completed acquisition trial. Results showed that practicing a timing task on a touch screen augmented with both visual and auditory information may have differentially impacted motor skill acquisition, such that removal of one or both sources of augmented feedback did not result in a severe detriment to the timing performance or error-detection capabilities of the learner. The present study highlights the importance of multimodal augmented feedback conditions for maximizing the cognitive abilities that develop a stronger motor memory for subjective error detection and correction.
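The comparison of produced times against the goal time in studies like this is typically summarised with constant error (signed bias) and absolute error (unsigned accuracy). The abstract does not state which error scores were computed, so the following is an assumed, generic sketch against the 2550 ms goal:

```python
def timing_errors(produced_ms, goal_ms=2550):
    """Constant error (signed bias) and absolute error (unsigned accuracy)
    of produced movement times relative to the goal time, in ms.
    These are standard motor-learning measures, assumed here rather than
    taken from the study."""
    n = len(produced_ms)
    ce = sum(t - goal_ms for t in produced_ms) / n
    ae = sum(abs(t - goal_ms) for t in produced_ms) / n
    return ce, ae

# Demo: three trials around the 2550 ms goal time
ce, ae = timing_errors([2500, 2600, 2650])
```

The same arithmetic applied to (subjective estimate − actual time) would quantify the error-detection capability the study probes.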
Abstract:
The feedback mechanism used in a brain-computer interface (BCI) forms an integral part of the closed-loop learning process required for successful operation of a BCI. However, ultimate success of the BCI may be dependent upon the modality of the feedback used. This study explores the use of music tempo as a feedback mechanism in BCI and compares it to the more commonly used visual feedback mechanism. Three different feedback modalities are compared for a kinaesthetic motor imagery BCI: visual, auditory via music tempo, and a combined visual and auditory feedback modality. Visual feedback is provided via the position, on the y-axis, of a moving ball. In the music feedback condition, the tempo of a piece of continuously generated music is dynamically adjusted via a novel music-generation method. All the feedback mechanisms allowed users to learn to control the BCI. However, users were not able to maintain as stable control with the music tempo feedback condition as they could in the visual feedback and combined conditions. Additionally, the combined condition exhibited significantly less inter-user variability, suggesting that multi-modal feedback may lead to more robust results. Finally, common spatial patterns are used to identify participant-specific spatial filters for each of the feedback modalities. The mean optimal spatial filter obtained for the music feedback condition is observed to be more diffuse and weaker than the mean spatial filters obtained for the visual and combined feedback conditions.
Abstract:
This work proposes a kinematic control scheme using visual feedback for a robot arm with five degrees of freedom. Using computer vision techniques, a method was developed to determine the Cartesian 3D position and orientation (pose) of the robot arm from a robot image obtained through a camera. A colored triangular label is placed on the robot manipulator tool, and efficient heuristic rules are used to locate the vertices of that label in the image. The tool pose is obtained from those vertices through numerical methods. A color calibration scheme based on the K-means algorithm was implemented to guarantee the robustness of the vision system in the presence of lighting variations. The extrinsic camera parameters are computed from the image of four coplanar points whose Cartesian 3D coordinates, relative to a fixed frame, are known. Two distinct tool poses, initial and final, obtained from the image, are interpolated to generate a desired trajectory in Cartesian space. The error signal in the proposed control scheme is the difference between the desired tool pose and the actual tool pose. Gains are applied to the error signal, and the resulting signal is mapped into joint increments using the pseudoinverse of the manipulator Jacobian matrix. These increments are applied to the manipulator joints, moving the tool to the desired pose.
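The control law above, a gained pose error mapped through the pseudoinverse of the Jacobian into joint increments, can be illustrated on a planar two-link arm (the paper's arm has five degrees of freedom; the link lengths and gain here are hypothetical, and for a square, non-singular Jacobian the pseudoinverse reduces to the ordinary inverse):

```python
import math

L1, L2 = 1.0, 1.0  # hypothetical link lengths

def fk(q):
    """Forward kinematics: joint angles -> tool position (x, y)."""
    x = L1 * math.cos(q[0]) + L2 * math.cos(q[0] + q[1])
    y = L1 * math.sin(q[0]) + L2 * math.sin(q[0] + q[1])
    return x, y

def jacobian(q):
    """2x2 Jacobian d(x, y)/d(q0, q1) of the planar two-link arm."""
    s1, c1 = math.sin(q[0]), math.cos(q[0])
    s12, c12 = math.sin(q[0] + q[1]), math.cos(q[0] + q[1])
    return [[-L1 * s1 - L2 * s12, -L2 * s12],
            [L1 * c1 + L2 * c12, L2 * c12]]

def step(q, target, gain=0.5):
    """One kinematic-control step: the gained Cartesian error is mapped
    to joint increments through the inverse Jacobian (square case)."""
    x, y = fk(q)
    ex, ey = gain * (target[0] - x), gain * (target[1] - y)
    J = jacobian(q)
    det = J[0][0] * J[1][1] - J[0][1] * J[1][0]
    dq0 = (J[1][1] * ex - J[0][1] * ey) / det
    dq1 = (-J[1][0] * ex + J[0][0] * ey) / det
    return [q[0] + dq0, q[1] + dq1]

# Demo: iterate the control step to drive the tool to a reachable target
q = [0.3, 0.6]
for _ in range(100):
    q = step(q, (1.2, 0.8))
```

Each step removes roughly half the remaining Cartesian error (gain 0.5), so the tool converges to the target in a few dozen iterations, mirroring the paper's error-driven joint-increment loop.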
Abstract:
In conventional robot manipulator control, the desired path is specified in Cartesian space and converted to joint space through an inverse kinematics mapping. The joint references generated by this mapping are used for dynamic control in joint space. Thus, the end-effector position is, in fact, controlled indirectly, in open loop, and the accuracy of grip position control depends directly on the accuracy of the available kinematic model. In this report, a new scheme for redundant manipulator kinematic control, based on visual servoing, is proposed. In the proposed system, a robot image acquired through a CCD camera is processed to compute the position and orientation of each link of the robot arm. The robot task is specified as a temporal sequence of reference images of the robot arm. Thus, both the measured pose and the reference pose are specified in the same image space, and their difference is used to generate a Cartesian-space error for kinematic control purposes. The proposed control scheme was applied to a four-degree-of-freedom planar redundant robot arm; experimental results are shown.
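For a redundant arm the task Jacobian is non-square, and joint increments are commonly obtained with the right pseudoinverse J+ = J^T (J J^T)^-1. A minimal sketch on a planar three-link arm tracking a 2D point (link lengths and gain are illustrative assumptions; the report's arm has four degrees of freedom and drives its error from image space):

```python
import math

LINKS = (1.0, 1.0, 1.0)  # hypothetical link lengths

def fk3(q):
    """End-effector (x, y) of a planar 3-link arm (redundant for a 2D task)."""
    x = y = a = 0.0
    for qi, li in zip(q, LINKS):
        a += qi
        x += li * math.cos(a)
        y += li * math.sin(a)
    return x, y

def jacobian3(q):
    """2x3 task Jacobian of the planar 3-link arm."""
    angles = [q[0], q[0] + q[1], q[0] + q[1] + q[2]]
    J = [[0.0] * 3 for _ in range(2)]
    for j in range(3):
        # Joint j moves every link from j outward
        J[0][j] = -sum(LINKS[k] * math.sin(angles[k]) for k in range(j, 3))
        J[1][j] = sum(LINKS[k] * math.cos(angles[k]) for k in range(j, 3))
    return J

def step3(q, target, gain=0.5):
    """One control step: dq = J+ * gain * e, with J+ = J^T (J J^T)^-1."""
    x, y = fk3(q)
    ex, ey = gain * (target[0] - x), gain * (target[1] - y)
    J = jacobian3(q)
    # J J^T is 2x2, so it can be inverted directly
    a = sum(v * v for v in J[0])
    b = sum(u * v for u, v in zip(J[0], J[1]))
    d = sum(v * v for v in J[1])
    det = a * d - b * b
    w0 = (d * ex - b * ey) / det
    w1 = (-b * ex + a * ey) / det
    return [q[j] + J[0][j] * w0 + J[1][j] * w1 for j in range(3)]

# Demo: steer the redundant arm's end-effector to a 2D target point
q = [0.2, 0.4, 0.3]
for _ in range(200):
    q = step3(q, (1.5, 1.0))
```

The pseudoinverse picks the minimum-norm joint increment among the infinitely many that produce the same end-effector motion, which is what makes the redundant-arm case well posed.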
Abstract:
Single limb stance (SLS) under visual and proprioceptive disturbances is widely used in clinical settings to improve balance in a wide range of functional disabilities. However, the precise roles of vision and proprioception in SLS are not completely understood. The objectives of this study were to test the hypothesis that, when ankle proprioception is perturbed, the role of vision in postural control increases with the difficulty of the standing task, and to test the effect of vision on postural adaptation after withdrawal of the somesthetic perturbation during double and single limb stance. Eleven males performed double limb stance (DLS) and single limb stance (SLS) under conditions of normal or reduced vision, both with normal and perturbed proprioception. Center of pressure parameters were analyzed across conditions. Vision had a main effect in SLS, whereas proprioception perturbation showed effects only during DLS. Baseline stability was promptly regained, independently of visual input, after proprioception reintegration. In conclusion, the role of vision increases in SLS, and after proprioception reintegration vision does not affect postural recovery. Balance training programs should take this into account. © 2011 Elsevier Ltd.
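Center-of-pressure (CoP) analyses of this kind usually reduce the sampled CoP trajectory to summary measures such as sway path length, mean velocity, and RMS distance from the mean position. The abstract does not list the exact parameters used, so the following is a generic, assumed sketch:

```python
import math

def cop_parameters(xs, ys, fs):
    """Summary sway measures from a CoP trajectory sampled at fs Hz:
    total path length, mean velocity, and RMS distance from the mean
    CoP position (units follow the input coordinates). These particular
    measures are common in posturography, not quoted from the study."""
    path = sum(math.hypot(x1 - x0, y1 - y0)
               for x0, y0, x1, y1 in zip(xs, ys, xs[1:], ys[1:]))
    duration = (len(xs) - 1) / fs
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    rms = math.sqrt(sum((x - mx) ** 2 + (y - my) ** 2
                        for x, y in zip(xs, ys)) / len(xs))
    return path, path / duration, rms

# Demo: a small square-ish sway path sampled at 2 Hz
path, vel, rms = cop_parameters([0.0, 1.0, 1.0, 0.0], [0.0, 0.0, 1.0, 1.0], 2.0)
```

Comparing such measures across the vision × proprioception × stance conditions is what yields the main effects the abstract reports.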
Abstract:
BACKGROUND Efficiently performed basic life support (BLS) after cardiac arrest is proven to be effective. However, cardiopulmonary resuscitation (CPR) is strenuous and rescuers' performance declines rapidly over time. Audio-visual feedback devices reporting CPR quality may prevent this decline. We aimed to investigate the effect of various CPR feedback devices on CPR quality. METHODS In this open, prospective, randomised, controlled trial we compared three CPR feedback devices (PocketCPR, CPRmeter, and the iPhone app PocketCPR) with standard BLS without feedback in a simulated scenario. 240 trained medical students performed single-rescuer BLS on a manikin for 8 min. Effective compression (compressions with correct depth, pressure point, and sufficient decompression) as well as compression rate, flow-time fraction, and ventilation parameters were compared between the four groups. RESULTS Study participants using the PocketCPR performed 17±19% effective compressions, compared to 32±28% with the CPRmeter, 25±27% with the iPhone app PocketCPR, and 35±30% applying standard BLS (PocketCPR vs. CPRmeter p=0.007, PocketCPR vs. standard BLS p=0.001; others: ns). PocketCPR and CPRmeter prevented a decline in effective compression over time, but overall performance in the PocketCPR group was considerably inferior to standard BLS. Compression depth and rate were within the guideline-recommended ranges in all groups. CONCLUSION While we found differences between the investigated CPR feedback devices, overall BLS quality was suboptimal in all groups. Surprisingly, effective compression was not improved by any CPR feedback device compared to standard BLS. All feedback devices caused substantial delay in starting CPR, which may worsen outcome.
Abstract:
BACKGROUND Resuscitation guidelines encourage the use of cardiopulmonary resuscitation (CPR) feedback devices, implying better outcomes after sudden cardiac arrest. Whether effective continuous feedback could also be given verbally by a second rescuer ("human feedback") has not been investigated yet. We therefore compared the effect of human feedback to that of a CPR feedback device. METHODS In an open, prospective, randomised, controlled trial, we compared the CPR performance of three groups of medical students in a two-rescuer scenario. Group "sCPR" was taught standard BLS without continuous feedback, serving as control. Group "mfCPR" was taught BLS with mechanical audio-visual feedback (HeartStart MRx with Q-CPR-Technology™). Group "hfCPR" was taught standard BLS with human feedback. Afterwards, 326 medical students performed two-rescuer BLS on a manikin for 8 min. CPR quality parameters, such as the "effective compression ratio" (ECR: compressions with correct hand position, depth and complete decompression, multiplied by flow-time fraction), and other compression, ventilation and time-related parameters were assessed for all groups. RESULTS ECR was comparable between the hfCPR and the mfCPR group (0.33 vs. 0.35, p = 0.435). The hfCPR group needed less time until starting chest compressions (2 vs. 8 s, p < 0.001) and showed fewer incorrect decompressions (26 vs. 33%, p = 0.044). On the other hand, absolute hands-off time was higher in the hfCPR group (67 vs. 60 s, p = 0.021). CONCLUSIONS The quality of CPR with human feedback or with a mechanical audio-visual feedback device was similar. Further studies should investigate whether extended human feedback training could further increase CPR quality at comparable training costs.
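The effective compression ratio defined above (fraction of compressions with correct hand position, depth, and complete decompression, multiplied by the flow-time fraction) translates directly into code; the record field names below are illustrative, not from the study:

```python
def effective_compression_ratio(compressions, flow_time_fraction):
    """ECR as defined in the study: the fraction of compressions with correct
    hand position, correct depth, and complete decompression, multiplied by
    the flow-time fraction. The dict field names are illustrative."""
    if not compressions:
        return 0.0
    effective = sum(
        1 for c in compressions
        if c['hand_position_ok'] and c['depth_ok'] and c['decompression_ok'])
    return effective / len(compressions) * flow_time_fraction

# Demo: 2 of 4 compressions fully correct at a flow-time fraction of 0.8
demo = [
    {'hand_position_ok': True, 'depth_ok': True, 'decompression_ok': True},
    {'hand_position_ok': True, 'depth_ok': False, 'decompression_ok': True},
    {'hand_position_ok': True, 'depth_ok': True, 'decompression_ok': True},
    {'hand_position_ok': False, 'depth_ok': True, 'decompression_ok': True},
]
ecr = effective_compression_ratio(demo, 0.8)
```

Multiplying by the flow-time fraction penalises hands-off time, which is why the hfCPR group's higher hands-off time matters even with a comparable raw compression quality.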
Abstract:
PURPOSE:
To evaluate the combination of a pressure-indicating sensor film with hydrogel-forming microneedle (MN) arrays as a method of feedback to confirm MN insertion in vivo.
METHODS:
Pilot in vitro insertion studies were conducted using a Texture Analyser to insert MN arrays, coupled with a pressure-indicating sensor film, at varying forces into excised neonatal porcine skin. In vivo studies involved twenty human volunteers, who self-applied two hydrogel-forming MN arrays, one with a pressure-indicating sensor film incorporated and one without. Optical coherence tomography was employed to measure the resulting penetration depth and colorimetric analysis to investigate the associated colour change of the pressure-indicating sensor film.
RESULTS:
Microneedle insertion was achieved in vitro at three different forces, demonstrating the colour change of the pressure-indicating sensor film upon application of increasing pressure. When self-applied in vivo, there was no significant difference in the microneedle penetration depth resulting from each type of array, with a mean depth of 237 μm recorded. When the pressure-indicating sensor film was present, a colour change occurred upon each application, providing evidence of insertion.
CONCLUSIONS:
For the first time, this study shows how the incorporation of a simple, low-cost pressure-indicating sensor film can indicate microneedle insertion in vitro and in vivo, providing visual feedback to assure the user of correct application. Such a strategy may enhance usability of a microneedle device and, hence, assist in the future translation of the technology to widespread clinical use.
Abstract:
Retinal image properties such as contrast and spatial frequency play important roles in the development of normal vision. For example, visual environments composed solely of low contrasts and/or low spatial frequencies induce myopia. The visual image is processed by the retina, which then locally controls eye growth. Among the retinal neurotransmitters that link visual stimuli to eye growth, there is strong evidence for involvement of the retinal dopamine (DA) system. For example, effectively increasing retinal DA levels by using DA agonists can suppress the development of form-deprivation myopia (FDM). However, whether visual feedback controls eye growth by modulating retinal DA release, and/or some other factors, is still being elucidated. This thesis is chiefly concerned with the relationship between the dopaminergic system and retinal image properties in eye growth control. More specifically, it determined whether the amount of retinal DA release decreases as the complexity of the image degrades; for example, we investigated whether the level of retinal DA release decreased as image contrast decreased. In addition, the effects of spatial frequency, spatial energy distribution slope, and spatial phase on retinal DA release and eye growth were examined. When chicks were 8 days old, a cone-lens imaging system was applied monocularly (+30 D, 3.3 cm cone). A short-term treatment period (6 h) and a longer-term treatment period (4.5 days) were used. The short-term treatment tests for an acute reduction in DA release by the visual stimulus, as is seen with diffusers and lenses, whereas the 4.5-day point tests for a reduction in DA release after more prolonged exposure to the visual stimulus. In the contrast study, 1.35 cyc/deg square wave grating targets of 95%, 67%, 45%, 12% or 4.2% contrast were used. Blank (0% contrast) targets were included for comparison.
In the spatial frequency study, both sine and square wave grating targets with fundamental spatial frequencies of either 0.017 cyc/deg or 0.13 cyc/deg and 95% contrast were used. In the spectral slope study, 30% root-mean-squared (RMS) contrast fractal noise targets with spectral fall-off of 1/f^0.5, 1/f and 1/f^2 were used. In the spatial alignment study, a structured Maltese cross (MX) target, a structured circular patterned (C) target, and scrambled versions of these two targets (SMX and SC) were used. Each treatment group comprised six chicks for ocular biometry (refraction and ocular dimension measurements) and four for analysis of retinal DA release. Vitreal dihydroxyphenylacetic acid (DOPAC) was analysed by ion-pair reversed-phase high-performance liquid chromatography with electrochemical detection (HPLC-ED) as a measure of retinal DA release. Comparing retinal DA release with eye growth, large reductions in retinal DA release, possibly due to the decreased light level inside the cone-lens imaging system, were observed across all treated eyes, while only those exposed to the low contrast, low spatial frequency sine wave grating, 1/f^2, C and SC targets had myopic shifts in refraction. Amongst these treatment groups, no acute effect was observed, and longer-term effects were found only in the low contrast and 1/f^2 groups. These findings suggest that retinal DA release does not causally link visual stimulus properties to eye growth, and that these target-induced changes in refractive development are not dependent on the level of retinal DA release. Retinal dopaminergic cells might instead be affected indirectly via other retinal cells that respond immediately to changes in the contrast of the retinal image.
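Grating contrast in vision research is conventionally the Michelson contrast, C = (Lmax − Lmin)/(Lmax + Lmin). Assuming that convention here (the thesis abstract does not state the definition or the mean luminance), a small sketch generating one period of a square-wave luminance profile at a given contrast and verifying the definition:

```python
def square_wave_profile(contrast, mean_lum=0.5, samples=8):
    """One period of a square-wave luminance profile with the given Michelson
    contrast about mean_lum. Mean luminance and sample count are arbitrary
    illustrative choices, not values from the thesis."""
    lmax = mean_lum * (1 + contrast)
    lmin = mean_lum * (1 - contrast)
    half = samples // 2
    return [lmax] * half + [lmin] * half

def michelson_contrast(profile):
    """C = (Lmax - Lmin) / (Lmax + Lmin)."""
    lmax, lmin = max(profile), min(profile)
    return (lmax - lmin) / (lmax + lmin)

# Demo: a 45% contrast target, one of the levels used in the contrast study
profile = square_wave_profile(0.45)
```

Note that scaling luminance about the mean leaves the mean unchanged, so the contrast series (95% down to 4.2%) varies modulation depth without varying average retinal illuminance.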
Abstract:
Acute exercise has been shown to exhibit different effects on human sensorimotor behavior; however, the causes and mechanisms of the responses are often not clear. The primary aim of the present study was to determine the effects of incremental running until exhaustion on sensorimotor performance and adaptation in a tracking task. Subjects were randomly assigned to a running group (RG), a tracking group (TG), or a running followed by tracking group (RTG), with 10 subjects assigned to each group. Treadmill running velocity was initially set at 2.0 m s⁻¹, increasing by 0.5 m s⁻¹ every 5 min until exhaustion. Tracking consisted of 35 episodes (each 40 s) in which the subjects' task was to track a visual target on a computer screen while the visual feedback was veridical (performance) or left-right reversed (adaptation). Resting electroencephalographic (EEG) activity was recorded before and after each experimental condition (running, tracking, rest). Tracking performance and the final amount of adaptation did not differ between groups. However, task adaptation was significantly faster in RTG compared to TG. In addition, increased alpha and beta power were observed following tracking in TG but not RTG, although exhaustive running failed to induce significant changes in these frequency bands. Our results suggest that exhaustive running can facilitate adaptation processes in a manual tracking task. Attenuated cortical activation following tracking in the exercise condition was interpreted to indicate cortical efficiency and exercise-induced facilitation of selective central processes during actual task demands.
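The veridical versus left-right reversed feedback conditions amount to an optional mirroring of the cursor about the vertical screen midline before display. A trivial sketch (the screen geometry is hypothetical, not from the study):

```python
def feedback_position(cursor_x, cursor_y, screen_width, reversed_feedback=False):
    """Displayed feedback position for a tracking cursor: veridical, or
    mirrored about the vertical screen midline in the left-right reversed
    adaptation condition. Screen width is an illustrative assumption."""
    if reversed_feedback:
        return screen_width - cursor_x, cursor_y
    return cursor_x, cursor_y

# Demo: on an 800-px-wide screen, a cursor at x = 100 is displayed at x = 700
# in the reversed condition and at x = 100 in the veridical condition.
mirrored = feedback_position(100, 50, 800, reversed_feedback=True)
veridical = feedback_position(100, 50, 800)
```

Adapting to this reversal requires remapping the visuomotor transformation, which is why the reversed episodes probe adaptation rather than raw tracking performance.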