52 results for audio-visual information
Abstract:
The cerebral cortex contains circuitry for continuously computing properties of the environment and one's body, as well as relations among those properties. The success of complex perceptuomotor performances requires integrated, simultaneous use of such relational information. Ball catching is a good example as it involves reaching and grasping of visually pursued objects that move relative to the catcher. Although integrated neural control of catching has received sparse attention in the neuroscience literature, behavioral observations have led to the identification of control principles that may be embodied in the involved neural circuits. Here, we report a catching experiment that refines those principles via a novel manipulation. Visual field motion was used to perturb velocity information about balls traveling on various trajectories relative to a seated catcher, with various initial hand positions. The experiment produced evidence for a continuous, prospective catching strategy, in which hand movements are planned based on gaze-centered ball velocity and ball position information. Such a strategy was implemented in a new neural model, which suggests how position, velocity, and temporal information streams combine to shape catching movements. The model accurately reproduces the main and interaction effects found in the behavioral experiment and provides an interpretation of recently observed target motion-related activity in the motor cortex during interceptive reaching by monkeys. It functionally interprets a broad range of neurobiological and behavioral data, and thus contributes to a unified theory of the neural control of reaching to stationary and moving targets.
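One way to make the reported strategy concrete is a toy version of the "required velocity" control law from the behavioral catching literature: the hand is continuously accelerated toward the velocity that would close the hand-ball gap in the time remaining before contact, so that position, velocity, and temporal information all enter the command. This is an illustrative sketch under our own assumptions (names, gain, and the Euler simulation are not the paper's neural model):

```python
# Toy "required velocity" control law: continuously drive the hand toward
# the velocity that closes the remaining gap in the time left before contact.
def hand_acceleration(x_ball: float, x_hand: float, v_hand: float,
                      t_remaining: float, gain: float = 20.0) -> float:
    v_required = (x_ball - x_hand) / max(t_remaining, 1e-3)  # m/s needed
    return gain * (v_required - v_hand)                      # first-order lag

# Euler simulation of a lateral catch: the ball drifts rightward at 1 m/s
# and arrives after 0.8 s at x = 0.4 m; the hand starts at rest at x = 0.
dt, t_contact = 0.01, 0.8
x_hand, v_hand = 0.0, 0.0
for step in range(round(t_contact / dt)):
    t = step * dt
    x_ball = -0.4 + 1.0 * t
    a = hand_acceleration(x_ball, x_hand, v_hand, t_contact - t)
    v_hand += a * dt
    x_hand += v_hand * dt
print(round(x_hand, 3))  # ends near the ball's arrival point (0.4 m)
```

Because the gap is recomputed at every time step, a perturbation of perceived ball velocity (as in the visual field motion manipulation above) alters the command continuously rather than only at movement onset.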
Abstract:
To date, the usefulness of stereoscopic visual displays in research on manual interceptive actions has never been examined. In this study, we compared the catching movements of 8 right-handed participants (6 men, 2 women) in a real environment (with suspended balls swinging past the participant, requiring lateral hand movements for interception) with those in a situation in which similar virtual ball trajectories were displayed stereoscopically in a virtual reality system (Cave Automated Virtual Environment [CAVE]; Cruz-Neira, Sandin, DeFanti, Kenyon, & Hart, 1992) with the head fixated. Catching the virtual ball involved grasping a lightweight ball attached to the palm of the hand. The results showed that, compared to real catching, hand movements in the CAVE were (a) initiated later, (b) less accurate, (c) smoother, and (d) aimed more directly at the interception point. Although the latter 3 observations might be attributable to the delayed movement initiation observed in the CAVE, this delayed initiation might itself have resulted from the use of visual displays. This suggests that stereoscopic visual displays such as those present in many virtual reality systems should be used circumspectly in the experimental study of catching, and only to address research questions requiring no detailed analysis of the information-based online control of the catching movements.
Abstract:
Growing evidence suggests that significant motor problems are associated with a diagnosis of Autism Spectrum Disorders (ASD), particularly in catching tasks. Catching is a complex, dynamic skill that involves the ability to synchronise one's own movement to that of a moving target. To successfully complete the task, the participant must pick up and use perceptual information about the moving target to arrive at the catching place at the right time. This study looks at catching ability in children diagnosed with ASD (mean age 10.16 ± 0.9 years) and age-matched non-verbal (9.72 ± 0.79 years) and receptive language (9.51 ± 0.46 years) control groups. Participants were asked to "catch" a ball as it rolled down a fixed ramp. Two ramp heights provided two levels of task difficulty, whilst the sensory information (audio and visual) specifying ball arrival time was varied. Results showed children with ASD performed significantly worse than both the receptive language (p = .02) and non-verbal (p = .02) control groups in terms of total number of balls caught. A detailed analysis of the movement kinematics showed that difficulties with picking up and using the sensory information to guide the action may be the source of the problem.
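The "ball arrival time" information manipulated here is commonly modelled in this literature with a first-order time-to-contact estimate such as Lee's tau: the optical angle subtended by an approaching object divided by that angle's rate of expansion. A minimal sketch for illustration only; the abstract does not state which informational variable the children were assumed to use:

```python
# Lee's tau: first-order time-to-contact estimate from optical expansion.
# For an object of angular size theta expanding at rate theta_dot,
# time to contact ~= theta / theta_dot (exact for a constant approach
# speed and small angles).
def tau(theta: float, theta_dot: float) -> float:
    return theta / theta_dot

# A ball subtending 2 deg and expanding at 4 deg/s is ~0.5 s from arrival.
print(tau(2.0, 4.0))  # 0.5
```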
Abstract:
PURPOSE. To investigate the methods used in contemporary ophthalmic literature to designate visual acuity (VA). METHODS. Papers in all 2005 editions of five ophthalmic journals were considered. Papers were included if (1) VA, vision, or visual function was mentioned in the abstract and (2) the study involved age-related macular degeneration, cataract, or refractive surgery. If a paper was selected on the basis of its abstract, the full text was examined for information on the method of refractive correction during VA testing, the type of chart used to measure VA, specifics concerning chart features, testing protocols, data analysis, and the means of expressing VA in the results. RESULTS. One hundred twenty-eight papers were included. The most common type of chart used was described as logMAR-based. Although most (89.8%) of the studies reported on the method of refractive correction during VA testing, only 58.6% gave the chart design, and less than 12% gave any information on the chart features or measurement procedures used. CONCLUSIONS. The methods used and the approach to analysis were rarely described in sufficient detail to allow others to replicate the study being reported; sufficient detail should be given on VA measurement to enable others to duplicate the research. The authors suggest that charts adhering to Bailey-Lovie design principles always be used to measure vision in prospective studies, and that their use be encouraged in clinical settings. The distinction between logMAR, which is an acuity notation, and Bailey-Lovie or ETDRS, which are chart types, should be maintained more strictly.
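For context on the notation at issue: logMAR is the base-10 logarithm of the minimum angle of resolution (MAR) in arcminutes, so a Snellen fraction d/D corresponds to logMAR = log10(D/d), and each 0.1 logMAR step matches one line of a Bailey-Lovie chart. A minimal illustrative helper (the function name is ours, not from the paper):

```python
import math

def snellen_to_logmar(test_distance: float, letter_distance: float) -> float:
    """Convert a Snellen fraction (test/letter distance) to logMAR."""
    mar = letter_distance / test_distance  # minimum angle of resolution, arcmin
    return math.log10(mar)

print(snellen_to_logmar(20, 20))  # 0.0    (normal acuity, 20/20)
print(snellen_to_logmar(20, 40))  # ~0.301 (20/40, three chart lines worse)
```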
Abstract:
Human listeners seem to be remarkably able to recognise acoustic sound sources based on timbre cues. Here we describe a psychophysical paradigm to estimate the time it takes to recognise a set of complex sounds differing only in timbre cues: both in terms of the minimum duration of the sounds and the inferred neural processing time. Listeners had to respond to the human voice while ignoring a set of distractors. All sounds were recorded from natural sources over the same pitch range and equalised to the same duration and power. In a first experiment, stimuli were gated in time with a raised-cosine window of variable duration and random onset time. A voice/non-voice (yes/no) task was used. Performance, as measured by d', remained above chance for the shortest sounds tested (2 ms); d's above 1 were observed for durations longer than or equal to 8 ms. Then, we constructed sequences of short sounds presented in rapid succession. Listeners were asked to report the presence of a single voice token that could occur at a random position within the sequence. This method is analogous to the "rapid sequential visual presentation" paradigm (RSVP), which has been used to evaluate neural processing time for images. For 500-ms sequences made of 32-ms and 16-ms sounds, d' remained above chance for presentation rates of up to 30 sounds per second. There was no effect of the pitch relation between successive sounds: identical for all sounds in the sequence or random for each sound. This implies that the task was not determined by streaming or forward masking, as both phenomena would predict better performance for the random pitch condition. Overall, the recognition of familiar sound categories such as the voice seems to be surprisingly fast, both in terms of the acoustic duration required and of the underlying neural time constants.
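The sensitivity index d' reported throughout is standardly computed as the difference between the z-transformed hit and false-alarm rates. A minimal sketch under that standard definition (the helper name is ours):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal-detection sensitivity: z(hits) - z(false alarms)."""
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# E.g. 69% hits with 31% false alarms gives d' ~= 1.0, the level the
# authors report for voice targets of 8 ms and longer.
print(round(d_prime(0.69, 0.31), 2))  # 0.99
```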
Abstract:
In this paper, we propose a novel visual tracking framework based on a decision-theoretic online learning algorithm, NormalHedge. To make NormalHedge more robust against noise, we propose an adaptive NormalHedge algorithm, which exploits the historical information of each expert to perform more accurate prediction than the standard NormalHedge. Technically, we use a set of weighted experts to predict the state of the target to be tracked over time. The weight of each expert is learned online by pushing the cumulative regret of the learner towards that of the expert. Our simulation experiments demonstrate the effectiveness of the proposed adaptive NormalHedge compared to the standard NormalHedge method. Furthermore, experimental results on several challenging video sequences show that the proposed tracking method outperforms several state-of-the-art methods.
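For reference, a minimal sketch of the standard NormalHedge update (Chaudhuri, Freund & Hsu, 2009) on which the tracker builds: each expert's weight is derived from its positive cumulative regret via an exponential potential whose scale c is chosen so that the average potential equals e. This is the baseline algorithm only, under our own implementation choices (e.g. bisection for c); the paper's adaptive variant is not specified in this abstract:

```python
import numpy as np

def normalhedge_weights(regrets: np.ndarray) -> np.ndarray:
    """Standard NormalHedge: map cumulative regrets to expert weights."""
    r = np.maximum(regrets, 0.0)            # only positive regret matters
    # Choose the scale c so that mean(exp(r^2 / (2c))) = e, by bisection.
    lo, hi = 1e-12, max(float(r.max()) ** 2, 1.0)
    for _ in range(100):
        c = 0.5 * (lo + hi)
        mean_potential = np.mean(np.exp(np.minimum(r ** 2 / (2.0 * c), 700.0)))
        if mean_potential > np.e:
            lo = c                          # potential too big -> larger scale
        else:
            hi = c
    c = 0.5 * (lo + hi)
    w = (r / c) * np.exp(r ** 2 / (2.0 * c))  # derivative of the potential
    return w / w.sum() if w.sum() > 0 else np.full_like(r, 1.0 / r.size)

# Per frame: regrets[i] += learner_loss - expert_loss[i], then reweight;
# the tracked state estimate is the weighted combination of the experts.
```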
Abstract:
Experience gained in supporting mobile learning with podcast audio is reported. The paper outlines the design, storage, and distribution of the audio via a web site. An initial evaluation of the uptake of the approach in a final-year computing module was undertaken. Audio objects were tailored to meet different pedagogical needs, resulting in a repository of persistent glossary terms and disposable audio lectures distributed by podcasting. An aim of our approach is to document student interest and to evaluate the potential of mobile learning for supplementing revision.