898 results for Audio indexing
Abstract:
This is an exploratory study into the effective use of custom-made audiovisual case studies (AVCS) to enhance students' learning experience. The paper describes a project that used AVCS for a large, diverse cohort of undergraduate students enrolled in an International Business course. The study makes a number of key contributions to advancing learning and teaching within the discipline. AVCS provide first-hand reporting of the case material, allowing students to improve their understanding from both verbal and nonverbal cues. The paper demonstrates how AVCS can be embedded in a student-centred teaching approach to capture students' interest and encourage a deep approach to learning by providing authentic, real-world experience.
Abstract:
The aim of spoken term detection (STD) is to find all occurrences of a specified query term in a large audio database. This process is usually divided into two steps: indexing and search. A previous study showed that knowing the topic of an audio document helps improve the accuracy of the indexing step, which in turn yields better overall STD performance. In this paper, we propose using topic information not only in the indexing step but also in the search step. Our experimental results show that topic information can also be exploited at search time to improve STD accuracy.
Abstract:
This paper presents a low-bandwidth multi-robot communication system designed to serve as a backup communication channel in the event that a robot suffers a network device fault. While much research has addressed distributing network communication across multiple robots within a system, individual robots remain susceptible to hardware failure. In the past, such robots would simply be removed from service and their tasks re-allocated to other members. However, there are times when a faulty robot might be crucial to a mission, or could still contribute in a less communication-intensive role. By allowing robots to encode and decode messages as unique sequences of DTMF symbols, called words, our system facilitates continued low-bandwidth communication between robots that have lost access to network communication. Our results show that the system is capable of permitting robots to negotiate task initiation and termination, and is flexible enough to permit a pair of robots to perform a simple turn-taking task.
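As a rough illustration of the word-based encoding idea, the sketch below maps integer message IDs onto fixed-length sequences of DTMF symbols. The 16-symbol alphabet is standard DTMF, but the four-symbol word length, the message vocabulary, and the codec interface are illustrative assumptions rather than the paper's specification.

```python
# A minimal sketch of encoding messages as fixed-length "words" of DTMF
# symbols. Word length and message IDs are assumed for illustration.

DTMF_SYMBOLS = "0123456789ABCD*#"  # the 16 standard DTMF symbols
WORD_LENGTH = 4                    # assumed fixed word length

def encode_message(message_id: int) -> str:
    """Encode an integer message ID as a base-16 DTMF word."""
    if not 0 <= message_id < len(DTMF_SYMBOLS) ** WORD_LENGTH:
        raise ValueError("message ID out of range for the word length")
    word = []
    for _ in range(WORD_LENGTH):
        word.append(DTMF_SYMBOLS[message_id % len(DTMF_SYMBOLS)])
        message_id //= len(DTMF_SYMBOLS)
    return "".join(reversed(word))

def decode_word(word: str) -> int:
    """Decode a DTMF word back to its message ID."""
    message_id = 0
    for symbol in word:
        message_id = message_id * len(DTMF_SYMBOLS) + DTMF_SYMBOLS.index(symbol)
    return message_id

# Example: robot A announces a (hypothetical) task-initiation message,
# robot B decodes it from the received DTMF word.
TASK_INIT = 42
word = encode_message(TASK_INIT)   # -> "002A"
assert decode_word(word) == TASK_INIT
```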
Abstract:
Upgrading Australia's 18,900 passive level crossings is impracticable, as such crossings are often located in remote areas that lack power and carry low road and rail traffic. The rail industry is interested in developing innovative in-vehicle technology to warn motorists of approaching trains directly inside their vehicles. The objective of this study was therefore to evaluate the benefits of introducing such technology. We evaluated the changes in driver performance once the technology was enabled and functioning correctly, as well as the effects of an unsafe failure of the technology. We conducted a driving simulator study in which participants (N=15) were familiarised with an in-vehicle audio warning over an extended period. After familiarisation, the technology started failing, and we tested drivers' reactions with a train approaching. The study showed that at traditional passive crossings with RX2 signage, the majority of drivers complied (70%) and looked for trains on both sides of the rail track. With the introduction of the in-vehicle audio message, drivers did not approach crossings faster, did not reduce their safety margins, and did not reduce their gaze towards the rail tracks. However, participants' compliance at the stop sign decreased by 16.5% with the technology installed in the vehicle. When the in-vehicle audio warning failed, most participants had no difficulty detecting the approaching train even though they received no warning message, showing that participants were still actively looking for trains with the system in their vehicle. However, two participants did not stop and one decided to beat the train when they did not receive the audio message, suggesting potential human factors issues to be considered with such technology.
Abstract:
We propose a novel technique for robust voice activity detection (VAD) in high-noise recordings. We use Gaussian mixture modelling (GMM) to train two generic models: speech and non-speech. We then score smaller segments of a given (unseen) recording against each of these GMMs to obtain two likelihood scores per segment. These scores are used to compute a dissimilarity measure between pairs of segments and to carry out complete-linkage clustering of the segments into speech and non-speech clusters. We compare the accuracy of our method against state-of-the-art and standardised VAD techniques, demonstrating an absolute improvement of 15% in half-total error rate (HTER) over the best-performing baseline system across the QUT-NOISE-TIMIT database. We then apply our approach to the Audio-Visual Database of American English (AVDBAE) to demonstrate the performance of our algorithm using visual, audio-visual, or a proposed fusion of these features.
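A minimal sketch of the two-model scoring and clustering step is given below. The synthetic features, GMM configuration, and Euclidean dissimilarity between score vectors are placeholder assumptions; the paper's own feature extraction and dissimilarity measure may differ.

```python
# Sketch: score segments against speech / non-speech GMMs, then cluster
# segments into two groups by complete-linkage clustering.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
speech_feats = rng.normal(2.0, 1.0, size=(500, 13))      # stand-in MFCC-like features
nonspeech_feats = rng.normal(-2.0, 1.0, size=(500, 13))

# Train the two generic models.
gmm_speech = GaussianMixture(n_components=4, random_state=0).fit(speech_feats)
gmm_nonspeech = GaussianMixture(n_components=4, random_state=0).fit(nonspeech_feats)

def segment_scores(segment: np.ndarray) -> np.ndarray:
    """Log-likelihood of a segment under each GMM -> a 2-D score vector."""
    return np.array([gmm_speech.score(segment), gmm_nonspeech.score(segment)])

# Score short segments of an unseen recording (here: synthetic segments).
segments = [rng.normal(m, 1.0, size=(50, 13)) for m in [2, 2, -2, -2, 2]]
scores = np.vstack([segment_scores(s) for s in segments])

# Complete-linkage clustering of segments by score dissimilarity,
# cut into two clusters (speech vs. non-speech).
dists = pdist(scores, metric="euclidean")   # assumed dissimilarity measure
labels = fcluster(linkage(dists, method="complete"), t=2, criterion="maxclust")
print(labels)   # e.g. [1 1 2 2 1]
```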
Abstract:
Visual information in the form of the speaker's lip movements has been shown to improve the performance of speech recognition and search applications. In our previous work, we proposed cross-database training of synchronous hidden Markov models (SHMMs) to make use of large, publicly available external audio databases in addition to the relatively small audio-visual database at hand. In this work, the cross-database training approach is improved by performing an additional audio adaptation step, which enables audio-visual SHMMs to benefit from the audio observations of the external audio models before the visual modality is added. The proposed approach outperforms the baseline cross-database training approach in clean and noisy environments in terms of both phone recognition accuracy and spoken term detection (STD) accuracy.
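To illustrate the adapt-then-extend idea in general terms, the sketch below shows one standard way to adapt an externally trained model to a smaller in-domain dataset: MAP adaptation of Gaussian means. This is a generic stand-in; the paper's actual audio adaptation step for SHMMs may use a different technique.

```python
# Sketch: MAP adaptation of a Gaussian mean, blending an external (prior)
# model with a small in-domain dataset. Illustrative only; not necessarily
# the adaptation method used in the paper.
import numpy as np

def map_adapt_mean(prior_mean: np.ndarray, data: np.ndarray, tau: float = 10.0) -> np.ndarray:
    """MAP estimate of a Gaussian mean.

    tau controls how strongly the external (prior) model is trusted
    relative to the in-domain observations.
    """
    n = data.shape[0]
    return (tau * prior_mean + n * data.mean(axis=0)) / (tau + n)

# External audio model mean vs. a small audio-visual database's audio frames.
external_mean = np.zeros(13)
rng = np.random.default_rng(3)
in_domain_frames = rng.normal(0.5, 1.0, size=(40, 13))

adapted = map_adapt_mean(external_mean, in_domain_frames)
print(adapted[:3])  # pulled part-way from 0 toward the in-domain mean (~0.5)
```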
Abstract:
Spoken term detection (STD) is the task of looking up a spoken term in a large volume of speech segments. To provide fast search, speech segments are first indexed into an intermediate representation using speech recognition engines, which provide multiple hypotheses for each segment. Approximate matching techniques are usually applied at the search stage to compensate for the poor performance of automatic speech recognition engines during indexing. Recently, using visual information in addition to audio has been shown to improve phone recognition performance, particularly in noisy environments. In this paper, we make use of visual information, in the form of the speaker's lip movements, in the indexing stage and investigate its effect on STD performance. In particular, we investigate whether gains in phone recognition accuracy carry through the approximate matching stage to provide similar gains in the final audio-visual STD system over a traditional audio-only approach. We also investigate the effect of using visual information on STD performance in different noise environments.
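As a rough illustration of the approximate-matching step at search time, the sketch below scores a query phone sequence against indexed phone hypotheses by Levenshtein distance. Plain edit distance is an assumed, illustrative choice; practical systems often weight substitutions by phone confusability.

```python
# Sketch: approximate matching of a query phone sequence against an index
# of phone hypotheses, ranked by edit distance.

def edit_distance(a: list[str], b: list[str]) -> int:
    """Levenshtein distance between two phone sequences."""
    prev = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        curr = [i]
        for j, pb in enumerate(b, 1):
            cost = 0 if pa == pb else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

# Hypothetical indexed phone sequences from the recognition engine.
index = {
    "utt1": ["g", "r", "iy", "t", "ih", "ng"],
    "utt2": ["g", "r", "ey", "d", "ih", "ng"],
}
query = ["g", "r", "iy", "d", "ih", "ng"]

# Rank utterances by distance; small distances are putative detections.
for utt, phones in sorted(index.items(), key=lambda kv: edit_distance(query, kv[1])):
    print(utt, edit_distance(query, phones))
```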
Abstract:
Speech recognition can be improved by using visual information, in the form of the speaker's lip movements, in addition to audio. To date, state-of-the-art techniques for audio-visual speech recognition continue to train their models on audio and visual data from the same database. In this paper, we present a new approach that makes use of one modality of an external dataset in addition to a given audio-visual dataset. This makes it possible to build more powerful models from extensive audio-only databases and adapt them to comparatively smaller multi-stream databases. Results show that the presented approach outperforms the widely adopted synchronous hidden Markov models (HMMs) trained jointly on the audio and visual data of a given audio-visual database by 29% relative in phone recognition. It also outperforms external audio models trained on extensive external audio datasets, and internal audio models, by 5.5% and 46% relative, respectively. We also show that the proposed approach is beneficial in noisy environments where the audio source is affected by environmental noise.
Abstract:
Automated digital recordings are useful for large-scale temporal and spatial environmental monitoring. An important research effort has been the automated classification of calling bird species. In this paper we examine a related task: retrieving from a database of audio recordings the birdcalls most similar to a user-supplied query call. Such a retrieval task can sometimes be more useful than an automated classifier. We compare three approaches to similarity-based birdcall retrieval: spectral ridge features and two kinds of gradient features, the structure tensor and the histogram of oriented gradients. The retrieval accuracy of our spectral ridge method is 94%, compared to 82% for the structure tensor method and 90% for the histogram of oriented gradients method. Additionally, the spectral ridge approach potentially offers a more compact representation and is more computationally efficient.
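The sketch below illustrates the general shape of similarity-based retrieval with one of the compared feature types, the histogram of oriented gradients, computed on log-spectrograms. The spectrogram settings, HOG parameters, and cosine similarity are assumed, illustrative choices, not the paper's configuration.

```python
# Sketch: rank database recordings by HOG-feature similarity to a query call.
import numpy as np
from scipy.signal import spectrogram
from skimage.feature import hog

def call_features(audio: np.ndarray, sr: int = 22050) -> np.ndarray:
    """Spectrogram -> HOG feature vector for one recording."""
    _, _, spec = spectrogram(audio, fs=sr, nperseg=512)
    log_spec = np.log1p(spec)
    return hog(log_spec, orientations=8, pixels_per_cell=(16, 16),
               cells_per_block=(1, 1), feature_vector=True)

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

# Synthetic stand-ins for one-second recordings in the database and a query.
rng = np.random.default_rng(1)
database = {f"rec{i}": rng.standard_normal(22050) for i in range(3)}
query = rng.standard_normal(22050)

# Retrieval: rank database recordings by similarity to the query call.
qf = call_features(query)
ranked = sorted(database,
                key=lambda k: cosine_similarity(qf, call_features(database[k])),
                reverse=True)
print(ranked)
```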
Abstract:
Bioacoustic monitoring has become a significant research topic for species diversity conservation. Owing to developments in sensing techniques, acoustic sensors are widely deployed in the field to record animal sounds over large spatial and temporal scales. With large volumes of collected audio data, it is essential to develop semi-automatic or automatic techniques for analysing the data; this can help ecologists make decisions on how to protect and promote species diversity. This paper presents generic features to characterise a range of bird species for vocalisation retrieval. In the implementation, audio recordings are first converted to spectrograms using the short-time Fourier transform, and a ridge detection method is then applied to the spectrogram to detect points of interest. Based on the detected points, a new region representation is explored for describing various bird vocalisations, and a local descriptor comprising temporal entropy, frequency-bin entropy, and a histogram of counts of four ridge directions is calculated for each sub-region. To speed up the retrieval process, indexing is carried out and the retrieved results are ranked according to similarity scores. The experimental results show that our proposed feature set achieves a retrieval success rate of 0.71, outperforming spectral ridge features alone (0.55) and Mel-frequency cepstral coefficients (0.36).
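A minimal sketch of the per-sub-region descriptor described above follows: temporal entropy, frequency-bin entropy, and a four-bin histogram of ridge directions. Ridge detection itself is stubbed out, and the direction labels are assumed inputs.

```python
# Sketch: local descriptor for one spectrogram sub-region, combining two
# entropy measures with a histogram of four ridge-direction counts.
import numpy as np

def entropy(p: np.ndarray) -> float:
    """Shannon entropy of a non-negative profile, normalised to sum to 1."""
    p = p / (p.sum() + 1e-12)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def region_descriptor(region: np.ndarray, ridge_dirs: np.ndarray) -> np.ndarray:
    """Descriptor for one spectrogram sub-region.

    region:     2-D array (freq bins x time frames) of spectral energy
    ridge_dirs: labels in {0,1,2,3} for detected ridge points in the region
    """
    temporal_entropy = entropy(region.sum(axis=0))   # energy profile over time
    freq_bin_entropy = entropy(region.sum(axis=1))   # energy profile over freq
    dir_hist = np.bincount(ridge_dirs, minlength=4).astype(float)
    return np.concatenate([[temporal_entropy, freq_bin_entropy], dir_hist])

# Example on a synthetic sub-region with a few detected ridge points.
rng = np.random.default_rng(2)
region = rng.random((32, 32))
ridge_dirs = np.array([0, 0, 1, 3, 3, 3])  # e.g. horizontal/vertical/diagonals
print(region_descriptor(region, ridge_dirs))  # 6-D descriptor
```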
Abstract:
Access to transport systems, and the connections they provide to essential economic and social activities, are critical in determining households' transportation disadvantage levels. Despite progress in identifying transportation-disadvantaged groups, the lack of effective policies has allowed the issue to persist as a significant problem. This paper undertakes a pilot case investigation as a test bed for a new approach developed to reduce transportation policy shortcomings. The approach, a 'disadvantage-impedance index', aims to ease transportation disadvantage by employing representative parameters to measure the differences between policy alternatives run in a simulation environment. Implemented in the Japanese town of Arao, the index uses trip-making behaviour and resident stated-preference data. The results reveal that even a slight improvement in accessibility and travel quality indicators makes a significant difference in easing disadvantage. The index, integrated into a four-step model, proves to be highly robust and useful for quick diagnosis in capturing effective actions and developing potentially efficient policies.
Abstract:
Time-frequency analyses of various simulated and experimental signals arising from elastic wave scattering from damage are performed using the wavelet transform (WT) and the Hilbert-Huang transform (HHT), and their performances are compared in the context of quantifying damage. The spectral finite element method is employed for numerical simulation of the wave scattering. An analytical study is carried out to examine the effects of higher-order damage parameters on the wave reflected from a damage site. Based on this study, error bounds are computed for the signals in the spectral domain and also in the time-frequency domain. It is shown how such an error bound can provide an estimate of the error in modelling wave propagation in structures with damage. Measures of damage based on the WT and HHT are derived to quantify the damage information hidden in the signal. The aim of this study is to obtain detailed insights into the problems of (1) identifying localised damage, (2) characterising the dispersion of multi-frequency non-stationary signals after they interact with various types of damage, and (3) quantifying the damage. Sensitivity analysis of the scattered-wave signal based on its time-frequency representation helps to correlate the variation of the damage index measures with damage parameters such as damage size and material degradation factors.
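As a rough illustration of the wavelet-transform step, the sketch below computes a continuous wavelet transform of a simulated non-stationary signal and a crude energy-ratio damage index. The two-component test signal, the Morlet wavelet, and the index definition are illustrative assumptions, not the paper's formulation.

```python
# Sketch: CWT of a simulated signal with a delayed "scattered" burst, plus
# a simple energy-based damage index. Illustrative definitions throughout.
import numpy as np
import pywt

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)

# Simulated signal: an incident tone plus a delayed, weaker scattered burst.
signal = np.sin(2 * np.pi * 50 * t)
signal += 0.4 * np.sin(2 * np.pi * 120 * t) * ((t > 0.6) & (t < 0.7))

scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

# Energy in the time-frequency plane, and a crude damage index: the ratio of
# energy inside the scattered-wave window to total energy.
energy = np.abs(coeffs) ** 2
window = (t > 0.6) & (t < 0.7)
damage_index = energy[:, window].sum() / energy.sum()
print(f"damage index = {damage_index:.3f}")
```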
Abstract:
This research has made contributions to the area of spoken term detection (STD), defined as the process of finding all occurrences of a specified search term in a large collection of speech segments. The use of visual information in the form of the speaker's lip movements in addition to audio, the use of the topic of the speech segments, and the use of the expected frequency of words in the target speech domain are proposed. By using this complementary information, improvements in STD performance have been achieved, enabling efficient search for keywords in large collections of multimedia documents.