958 results for Auditory Display


Relevance: 70.00%

Publisher:

Abstract:

The paper describes an auditory interface using directional sound as a possible support for pilots during approach in an instrument landing scenario. Several ways of producing directional sounds are illustrated. One, which uses speaker pairs and controls the power distribution between the speakers, is evaluated experimentally. Results show that power alone is insufficient for positioning single isolated sound events, although discrimination in the horizontal plane is better than in the vertical plane. Additional sound parameters to compensate for this are proposed.
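As an illustration of the power-distribution approach described above, here is a minimal sketch of a constant-power (equal-power) pan law applied to a speaker pair. The function name, the speaker aperture, and the example tone are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def equal_power_pan(mono, azimuth, aperture=90.0):
    """Distribute a mono signal across a speaker pair with a constant-power pan law.

    azimuth  -- desired source angle in degrees, from -aperture/2 (left speaker)
                to +aperture/2 (right speaker); values are illustrative.
    Returns (left, right) signals whose summed power stays constant.
    """
    # Map azimuth to a pan position in [0, 1], then to an angle in [0, pi/2]
    pos = np.clip((azimuth + aperture / 2) / aperture, 0.0, 1.0)
    theta = pos * np.pi / 2
    left = np.cos(theta) * mono   # gains satisfy cos^2 + sin^2 = 1
    right = np.sin(theta) * mono
    return left, right

# Example: a 0.5 s, 1 kHz tone positioned 20 degrees to the right of centre
fs = 44100
t = np.arange(int(0.5 * fs)) / fs
tone = 0.3 * np.sin(2 * np.pi * 1000 * t)
left, right = equal_power_pan(tone, azimuth=20.0)
```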

Relevance: 70.00%

Publisher:

Abstract:

The integration of the auditory modality in virtual reality environments is known to promote the sensations of immersion and presence. However, it is also known from psychophysics studies that auditory-visual interactions obey complex rules and that multisensory conflicts may disrupt the participant's engagement with the presented virtual scene. It is thus important to measure the accuracy of the auditory spatial cues reproduced by the auditory display and their consistency with the spatial visual cues. This study evaluates auditory localization performance under various unimodal and auditory-visual bimodal conditions in a virtual reality (VR) setup using a stereoscopic display and binaural reproduction over headphones in static conditions. The auditory localization performance observed in the present study is in line with that reported in real conditions, suggesting that VR gives rise to consistent auditory and visual spatial cues. These results validate the use of VR for future psychophysics experiments with auditory and visual stimuli. They also emphasize the importance of spatially accurate auditory and visual rendering for VR setups.
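Localization accuracy in such studies is commonly summarized as the angular separation between the target direction and the reported direction. The sketch below computes that great-circle angle; this particular metric and the example values are assumptions for illustration, not taken from the study.

```python
import numpy as np

def angular_error(azimuth_target, elevation_target, azimuth_resp, elevation_resp):
    """Great-circle angle (degrees) between a target direction and a reported direction."""
    az_t, el_t, az_r, el_r = np.radians([azimuth_target, elevation_target,
                                         azimuth_resp, elevation_resp])
    cos_angle = (np.sin(el_t) * np.sin(el_r)
                 + np.cos(el_t) * np.cos(el_r) * np.cos(az_t - az_r))
    return np.degrees(np.arccos(np.clip(cos_angle, -1.0, 1.0)))

# Example: target at 30 deg azimuth, response at 38 deg azimuth with 5 deg elevation error
print(angular_error(30, 0, 38, 5))
```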

Relevance: 70.00%

Publisher:

Abstract:

This paper explores the design, development and evaluation of a novel real-time auditory display system for accelerated racing driver skills acquisition. The auditory feedback provides concurrent sensory augmentation and performance feedback using a novel target-matching design. Real-time, dynamic, tonal audio feedback representing lateral G-force (a proxy for tire slip) is delivered to one ear, whilst a target lateral G-force value representing the ‘limit’ of the car, to which the driver aims to drive, is panned to the driver’s other ear; a tonal match across both ears signifies that the ‘limit’ has been reached. An evaluation approach was established to measure the efficacy of the audio feedback in terms of performance, workload and drivers’ assessment of self-efficacy. A preliminary human subject study was conducted in a driving simulator environment. Initial results are encouraging, indicating that there is potential for performance gain and driver confidence enhancement based on the audio feedback.
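A minimal sketch of the target-matching idea follows, assuming lateral G-force is mapped linearly to tone frequency, with the current-G tone in one ear and the target-G tone in the other. The mapping range, channel assignment, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def g_to_freq(lateral_g, g_max=1.2, f_min=220.0, f_max=880.0):
    """Map lateral G-force to a tone frequency in Hz (range is an assumption)."""
    g = np.clip(abs(lateral_g), 0.0, g_max)
    return f_min + (f_max - f_min) * g / g_max

def feedback_frame(current_g, target_g, fs=44100, dur=0.05):
    """One short stereo audio frame: current-G tone left, target-G tone right.

    When the driver reaches the target lateral G, the two frequencies match.
    """
    t = np.arange(int(dur * fs)) / fs
    left = 0.3 * np.sin(2 * np.pi * g_to_freq(current_g) * t)
    right = 0.3 * np.sin(2 * np.pi * g_to_freq(target_g) * t)
    return np.column_stack([left, right])  # one channel per ear

# Example: driver currently at 0.8 g, car 'limit' set at 1.1 g
frame = feedback_frame(current_g=0.8, target_g=1.1)
```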

Relevance: 60.00%

Publisher:

Abstract:

In recent years, sonification of movement has emerged as a viable method for the provision of feedback in motor learning. Despite some experimental validation of its utility, controlled trials to test the usefulness of sonification in a motor learning context are still rare. As such, there are no accepted conventions for dealing with its implementation. This article addresses the question of how continuous movement information should best be presented as sound to be fed back to the learner. It is proposed that to establish effective approaches to using sonification in this context, consideration must be given to the processes that underlie motor learning, in particular the nature of the perceptual information available to the learner for performing the task at hand. Although sonification has much potential in movement performance enhancement, this potential remains largely unrealised, in part due to the lack of a clear framework for sonification mapping: the relationship between movement and sound. By grounding mapping decisions in a firmer understanding of how perceptual information guides learning, and an embodied cognition stance in general, it is hoped that greater advances in the use of sonification to enhance motor learning can be achieved.
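To make the notion of a sonification mapping concrete, here is a minimal parameter-mapping sketch in which a single normalized movement variable (e.g., limb speed) drives the pitch of a continuous tone. The choice of variable, the frequency range, and the mapping are assumptions for illustration only, not a mapping endorsed by the article.

```python
import numpy as np

def sonify_movement(samples, fs_movement=100, fs_audio=44100,
                    f_min=200.0, f_max=1000.0):
    """Parameter-mapping sonification: a movement signal modulates the pitch of a tone.

    samples -- 1-D array of a normalized movement variable in [0, 1]
               (e.g., limb speed), sampled at fs_movement Hz.
    Returns an audio signal whose instantaneous frequency follows the movement.
    """
    n_audio = int(len(samples) / fs_movement * fs_audio)
    t = np.arange(n_audio) / fs_audio
    # Resample the movement signal to audio rate and map it to frequency
    move = np.interp(t, np.arange(len(samples)) / fs_movement, samples)
    freq = f_min + (f_max - f_min) * np.clip(move, 0.0, 1.0)
    phase = 2 * np.pi * np.cumsum(freq) / fs_audio  # integrate frequency to phase
    return 0.3 * np.sin(phase)

# Example: sonify two seconds of a slow sinusoidal movement
movement = 0.5 + 0.5 * np.sin(2 * np.pi * 0.5 * np.arange(200) / 100)
audio = sonify_movement(movement)
```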

Relevance: 60.00%

Publisher:

Abstract:

We discuss the phenomenon of system tailoring in the context of data from an observational study of anaesthesia. We found that anaesthetists tailor their monitoring equipment so that the auditory alarms are more informative. However, the occurrence of tailoring by anaesthetists in the operating theatre was infrequent, even though the flexibility to tailor exists on many of the patient monitoring systems used in the study. We present an influence diagram to explain how alarm tailoring can increase situation awareness in the operating theatre, and why factors inhibiting tailoring prevent its widespread use. Extending the influence diagram, we discuss ways that more informative displays could achieve the results sought by anaesthetists when they tailor their alarm systems. In particular, we argue that we should improve our designs rather than simply provide more flexible tailoring systems, because users often find tailoring a complex task. We conclude that properly designed auditory displays may benefit anaesthetists in achieving greater patient situation awareness, and that designers should consider carefully how factors promoting and inhibiting tailoring will affect the end-users' likelihood of conducting tailoring.

Relevance: 60.00%

Publisher:

Abstract:

More information is now readily available to computer users than at any time in human history; however, much of this information is often inaccessible to people with blindness or low vision, for whom information must be presented non-visually. Currently, screen readers are able to verbalize on-screen text using text-to-speech (TTS) synthesis; however, much of this vocalization is inadequate for browsing the Internet. An auditory interface that incorporates auditory-spatial orientation was created and tested. For information that can be structured as a two-dimensional table, links can be semantically grouped as cells in a row within an auditory table, which provides a consistent structure for auditory navigation. An auditory display prototype was tested. Sixteen legally blind subjects participated in this research study. Results demonstrated that stereo panning was an effective technique for audio-spatially orienting non-visual navigation in a five-row, six-column HTML table as compared to a centered, stationary synthesized voice. These results were based on measuring the time-to-target (TTT), or the amount of time elapsed from the first prompting to the selection of each tabular link. Preliminary analysis of the TTT values recorded during the experiment showed that the populations did not conform to the ANOVA requirements of normality and equality of variances. Therefore, the data were transformed using the natural logarithm. The repeated-measures two-factor ANOVA results show that the logarithmically transformed TTTs were significantly affected by the tonal variation method, F(1,15) = 6.194, p = 0.025. Similarly, the results show that the logarithmically transformed TTTs were marginally affected by the stereo spatialization method, F(1,15) = 4.240, p = 0.057. The results show that the logarithmically transformed TTTs were not significantly affected by the interaction of both methods, F(1,15) = 1.381, p = 0.258. These results suggest that employing both methods simultaneously may cause some confusion for the subject. The significant effect of tonal variation indicates that it actually increased the average TTT; in other words, the presence of preceding tones increased task completion time on average. The marginally significant effect of stereo spatialization decreased the average log(TTT) from 2.405 to 2.264.
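The analysis pipeline reported above (natural-log transform of TTT, then a two-factor repeated-measures ANOVA) can be sketched as follows using statsmodels' AnovaRM. The column names, factor labels, and synthetic data are illustrative assumptions; real data would come from the experiment.

```python
import numpy as np
import pandas as pd
from statsmodels.stats.anova import AnovaRM

rng = np.random.default_rng(0)

# Illustrative (synthetic) data: 16 subjects x 2 tonal-variation levels x 2 panning
# levels, one mean time-to-target (TTT, seconds) per cell.
rows = []
for subject in range(16):
    for tone in ("no_tone", "tone"):
        for pan in ("mono", "stereo"):
            ttt = rng.lognormal(mean=2.3, sigma=0.3)  # TTT is positively skewed
            rows.append({"subject": subject, "tone": tone, "pan": pan, "ttt": ttt})
df = pd.DataFrame(rows)

# Natural-log transform to better satisfy ANOVA's normality/equal-variance assumptions
df["log_ttt"] = np.log(df["ttt"])

# Two-factor repeated-measures ANOVA on the transformed times
res = AnovaRM(df, depvar="log_ttt", subject="subject", within=["tone", "pan"]).fit()
print(res.anova_table)
```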

Relevance: 60.00%

Publisher:

Abstract:

Sound is potentially an effective way of analysing data, and it is possible to simultaneously interpret layers of sounds and identify changes. Multiple attempts to use sound with scientific data have been made, with varying levels of success. On many occasions this was done without including the end user during the development. In this study, a sonified model of the eight planets of our solar system was built and tested using an end-user approach. The sonification was created for the Esplora Planetarium, which is currently being constructed in Malta. The data requirements were gathered from a member of the planetarium staff, and 12 end users, as well as the planetarium representative, tested the sonification. The results suggest that listeners were able to discern various planetary characteristics without requiring any additional information. Three out of eight sound design parameters did not represent characteristics successfully. These issues have been identified, and further development will be conducted in order to improve the model.
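The abstract does not specify the sound-design parameters used in the planetarium model; purely as an illustration of this kind of mapping, the sketch below maps planet radius to pitch (larger planet, lower tone) and orbital distance to stereo position. The data subset and mapping choices are assumptions, not those of the Esplora model.

```python
import numpy as np

# Illustrative planetary data: (mean radius in km, distance from Sun in AU)
PLANETS = {
    "Mercury": (2440, 0.39), "Venus": (6052, 0.72), "Earth": (6371, 1.0),
    "Mars": (3390, 1.52), "Jupiter": (69911, 5.2), "Saturn": (58232, 9.5),
    "Uranus": (25362, 19.2), "Neptune": (24622, 30.1),
}

def planet_tone(radius_km, distance_au, fs=44100, dur=1.0):
    """Map radius to pitch (larger -> lower) and distance to a left-right pan position."""
    freq = np.interp(np.log10(radius_km), [3.3, 5.0], [880.0, 110.0])
    pan = np.interp(np.log10(distance_au), [np.log10(0.39), np.log10(30.1)], [0.0, 1.0])
    t = np.arange(int(dur * fs)) / fs
    tone = 0.3 * np.sin(2 * np.pi * freq * t)
    return np.column_stack([(1 - pan) * tone, pan * tone])  # simple linear pan

audio = {name: planet_tone(r, d) for name, (r, d) in PLANETS.items()}
```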

Relevance: 30.00%

Publisher:

Abstract:

The auditory system can detect occasional changes (deviants) in acoustic regularities without the need for subjects to focus their attention on the sound material. Deviant detection is reflected in the elicitation of the mismatch negativity component (MMN) of the event-related potentials. In the studies presented in this thesis, the MMN is used to investigate the auditory system's abilities for detecting similarities and regularities in sound streams. To investigate the limits of these processes, professional musicians were tested in some of the studies. The results show that auditory grouping is more advanced in musicians than in nonmusicians and that the auditory system of musicians can, unlike that of nonmusicians, detect a numerical regularity of always four tones in a series. These results suggest that sensory auditory processing in musicians is not only a fine-tuning of universal abilities, but is also qualitatively more advanced than in nonmusicians. In addition, the relationship between the auditory change-detection function and perception is examined. It is shown that, contrary to the generally accepted view, MMN elicitation does not necessarily correlate with perception. The outcome of the auditory change-detection function can be implicit, and this implicit knowledge of the sound structure can, after training, be utilized for behaviorally correct intuitive sound detection. These results illustrate the automatic character of the sensory change-detection function.
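MMN studies of this kind typically present an oddball sequence in which frequent standard tones establish a regularity and rare deviant tones violate it. As a minimal illustration of such a stimulus structure (frequencies, deviant probability, and tone count are illustrative values, not those of the thesis):

```python
import numpy as np

def oddball_sequence(n_tones=500, deviant_prob=0.1,
                     std_freq=1000.0, dev_freq=1100.0, seed=0):
    """Generate a frequency-oddball stimulus sequence for an MMN paradigm.

    Frequent 'standard' tones establish a regularity; rare 'deviant' tones violate it.
    Returns the tone frequencies and a boolean mask marking the deviants.
    """
    rng = np.random.default_rng(seed)
    is_deviant = rng.random(n_tones) < deviant_prob
    freqs = np.where(is_deviant, dev_freq, std_freq)
    return freqs, is_deviant

freqs, is_deviant = oddball_sequence()
print(f"{is_deviant.sum()} deviants among {len(freqs)} tones")
```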