Abstract:
This paper introduces a novel technique to directly optimise the Figure of Merit (FOM) for phonetic spoken term detection (STD). The FOM is a popular measure of STD accuracy, making it an ideal candidate for use as an objective function. A simple linear model is introduced to transform the phone log-posterior probabilities output by a phone classifier, producing enhanced log-posterior features that are more suitable for the STD task. Direct optimisation of the FOM is then performed by training the parameters of this model using a non-linear gradient descent algorithm. Substantial FOM improvements of 11% relative are achieved on held-out evaluation data, demonstrating the generalisability of the approach.
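The abstract leaves the model and objective unspecified beyond a linear transform trained by gradient descent, but the core idea can be sketched. The following minimal Python illustration trains a linear scoring weight by gradient ascent on a sigmoid-smoothed surrogate of the FOM (here, the probability that a true term occurrence outscores a false alarm); the data, dimensions, and learning rate are all invented for illustration, not taken from the paper.

```python
import numpy as np

# Hypothetical sketch: learn a linear transform of phone log-posteriors so
# that true term occurrences outscore false alarms, a smoothed stand-in for
# the (non-differentiable) FOM. All data and sizes are illustrative.
rng = np.random.default_rng(0)
n_phones = 10
X_hits = rng.normal(1.0, 1.0, (200, n_phones))   # features at true occurrences
X_fas = rng.normal(0.0, 1.0, (500, n_phones))    # features at false alarms
w = np.zeros(n_phones)                           # linear scoring weights

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def smoothed_fom(w):
    """Pairwise probability that a hit outscores a false alarm (AUC-like)."""
    diffs = (X_hits @ w)[:, None] - (X_fas @ w)[None, :]
    return sigmoid(diffs).mean()

lr = 0.5
for step in range(200):                          # gradient ascent on the surrogate
    d = sigmoid((X_hits @ w)[:, None] - (X_fas @ w)[None, :])
    g = d * (1 - d)                              # derivative of the sigmoid
    grad = (g.sum(axis=1) @ X_hits - g.sum(axis=0) @ X_fas) / d.size
    w += lr * grad

print(f"surrogate FOM after training: {smoothed_fom(w):.3f}")
```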
Abstract:
The cascading appearance-based (CAB) feature extraction technique has established itself as the state of the art in extracting dynamic visual speech features for speech recognition. In this paper, we focus on investigating the effectiveness of this technique for the related speaker verification application. By investigating the speaker verification ability of each stage of the cascade, we demonstrate that the same steps taken to reduce static speaker and environmental information for visual speech recognition also provide similar improvements for visual speaker recognition. A further study compares synchronous HMM (SHMM) based fusion of CAB visual features and traditional perceptual linear predictive (PLP) acoustic features, showing that the higher complexity inherent in the SHMM approach does not appear to provide any improvement in the final audio-visual speaker verification system over simpler utterance-level score fusion.
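As a point of reference, the "simpler utterance-level score fusion" baseline can be sketched in a few lines. The scores, fusion weight, and equal-error-rate estimate below are synthetic stand-ins, not the paper's systems.

```python
import numpy as np

# Minimal sketch of utterance-level score fusion, the simpler alternative the
# abstract compares against SHMM-based fusion. Scores and weights are invented.
rng = np.random.default_rng(1)
n_trials = 1000
labels = rng.integers(0, 2, n_trials)            # 1 = target speaker trial
acoustic = rng.normal(labels * 1.5, 1.0)         # PLP-based system scores
visual = rng.normal(labels * 1.0, 1.0)           # CAB-based system scores

def fuse(a, v, w=0.6):
    """Linear score-level fusion: w * acoustic + (1 - w) * visual."""
    return w * a + (1 - w) * v

fused = fuse(acoustic, visual)
for name, s in [("acoustic", acoustic), ("visual", visual), ("fused", fused)]:
    # Coarse equal-error-rate estimate via a threshold sweep.
    thresholds = np.linspace(s.min(), s.max(), 500)
    best = min(thresholds,
               key=lambda t: abs((s[labels == 0] >= t).mean()     # false accepts
                                 - (s[labels == 1] < t).mean()))  # false rejects
    print(f"{name}: EER ~ {(s[labels == 0] >= best).mean():.3f}")
```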
Abstract:
The traditional searching method for model-order selection in linear regression is a nested, full-parameter-set search over the candidate orders, which we call full-model order selection. A model-selection method, on the other hand, searches for the best sub-model within each order. In this paper, we propose using the model-selection search for model-order selection, which we call partial-model order selection. We show by simulations that the proposed search gives better accuracy than the traditional one, especially at low signal-to-noise ratios, over a wide range of model-order selection criteria (both information-theoretic and bootstrap-based). We also show that for some models the performance of the bootstrap-based criterion improves significantly with the proposed partial-model search.

Index Terms— Model order estimation, model selection, information theoretic criteria, bootstrap

1. INTRODUCTION

Several model-order selection criteria can be applied to find the optimal order. Some of the more commonly used information-theoretic procedures include Akaike's information criterion (AIC) [1], corrected Akaike (AICc) [2], minimum description length (MDL) [3], normalized maximum likelihood (NML) [4], the Hannan-Quinn criterion (HQC) [5], conditional model-order estimation (CME) [6], and the efficient detection criterion (EDC) [7]. From a practical point of view, it is difficult to decide which model-order selection criterion to use. Many of them perform reasonably well when the signal-to-noise ratio (SNR) is high. The discrepancies in their performance, however, become more evident when the SNR is low. In those situations, the performance of a given technique is determined not only by the model structure (say, a polynomial trend versus a Fourier series) but, more importantly, by the relative values of the parameters within the model. This makes comparison between model-order selection algorithms difficult, as within the same model at a given order one can find an example for which a method performs well or fails [6, 8]. Our aim is to improve the performance of model-order selection criteria at low SNR by considering a model-selection search procedure that takes into account not only the full-model order search but also a partial-model search within each order. Understandably, the improvement in model-order estimation performance comes at the expense of additional computational complexity.
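The contrast between the two search strategies is easy to sketch. The toy example below selects a polynomial model by AIC, once with the nested full-model search and once with the proposed partial-model (best-subset-per-order) search; the basis, data, and criterion choice are illustrative assumptions.

```python
import numpy as np
from itertools import combinations

# Hypothetical sketch contrasting full-model and partial-model order
# selection with AIC on a polynomial regression; data are invented.
rng = np.random.default_rng(2)
n = 100
x = np.linspace(-1, 1, n)
y = 2.0 * x + 0.8 * x**3 + rng.normal(0, 0.5, n)  # true model uses powers {1, 3}

def aic(y, yhat, k):
    rss = np.sum((y - yhat) ** 2)
    return n * np.log(rss / n) + 2 * k

def fit(powers):
    A = np.column_stack([x ** p for p in powers])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A @ coef

max_order = 5
# Full-model search: at order k, fit all powers 1..k.
full = min(range(1, max_order + 1),
           key=lambda k: aic(y, fit(range(1, k + 1)), k))
# Partial-model search: at order k, try every subset of size k.
best = min((subset
            for k in range(1, max_order + 1)
            for subset in combinations(range(1, max_order + 1), k)),
           key=lambda s: aic(y, fit(s), len(s)))

print("full-model order:", full)
print("partial-model subset:", best)
```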
Abstract:
The development and use of a virtual assessment tool for a signal processing unit is described. It allows students to take a test from anywhere using a web browser connected to the university server that hosts the test. While student responses are of the multiple-choice type, students have to work out problems to arrive at the answers they enter. CGI programming is used to verify student identification information, record scores, and provide immediate feedback once the test is complete. The tool has been used at QUT for the past three years, and student feedback is discussed. The virtual assessment tool is an efficient alternative to marking written assignment reports, which can often take more of a lecturer's or tutor's time than actual lecture-hall contact. It is especially attractive for the very large first- and second-year classes that are now the norm at many universities.
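A hedged sketch of the kind of server-side check the abstract describes is given below, in the classic Python CGI style. The field names, student IDs, answers, and tolerance are all invented; the abstract does not specify the tool's implementation.

```python
#!/usr/bin/env python
# Minimal sketch in the classic CGI style the abstract describes: verify a
# student ID, check numeric answers, and return immediate feedback.
# All field names, IDs, and answers below are invented placeholders.
import cgi

ANSWERS = {"q1": 3.1416, "q2": 0.5}              # worked-out numeric answers
VALID_IDS = {"n1234567", "n7654321"}             # enrolled student IDs
TOLERANCE = 1e-2

form = cgi.FieldStorage()
student = form.getfirst("student_id", "")
print("Content-Type: text/html\n")
if student not in VALID_IDS:
    print("<p>Unknown student ID.</p>")
else:
    score = sum(
        1 for q, correct in ANSWERS.items()
        if abs(float(form.getfirst(q, "nan") or "nan") - correct) < TOLERANCE
    )
    print(f"<p>{student}: {score}/{len(ANSWERS)} correct.</p>")
```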
Abstract:
This document outlines the system submitted by the Speech and Audio Research Laboratory at the Queensland University of Technology (QUT) for the Speaker Identity Verification: Application task of EVALITA 2009. This competitive submission consisted of a score-level fusion of three component systems: a joint factor analysis GMM system and two SVM systems using GLDS and GMM-supervector kernels. Development evaluation and post-submission results are presented in this study, demonstrating the effectiveness of this fused system approach. The study highlights the challenges associated with system calibration from limited development data, and shows that mismatch between training and testing conditions continues to be a major source of error in speaker verification technology.
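A common recipe for this kind of submission is linear score-level fusion with a logistic-regression calibration backend. The sketch below is a hypothetical illustration of that recipe on synthetic scores for three subsystems; it is not the actual QUT fusion.

```python
import numpy as np

# Sketch of score-level fusion of three verification subsystems with a
# logistic-regression calibration backend; all scores are synthetic.
rng = np.random.default_rng(3)
n = 2000
labels = rng.integers(0, 2, n).astype(float)     # 1 = target trial
# Synthetic scores standing in for JFA-GMM, SVM-GLDS, and GMM-supervector-SVM.
S = np.column_stack([rng.normal(labels * m, 1.0) for m in (1.8, 1.2, 1.4)])

w = np.zeros(S.shape[1])
b = 0.0
lr = 0.1
for _ in range(500):                             # logistic regression by gradient descent
    p = 1.0 / (1.0 + np.exp(-(S @ w + b)))
    w -= lr * S.T @ (p - labels) / n
    b -= lr * np.mean(p - labels)

fused = S @ w + b                                # calibrated log-odds-like score
print("fusion weights:", np.round(w, 3))
```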
Abstract:
This work proposes to improve spoken term detection (STD) accuracy by optimising the Figure of Merit (FOM). In this article, the index takes the form of a phonetic posterior-feature matrix. Accuracy is improved by formulating STD as a discriminative training problem and directly optimising the FOM, through its use as an objective function to train a transformation of the index. The outcome of indexing is then a matrix of enhanced posterior-features directly tailored for the STD task. The technique is shown to improve the FOM by up to 13% on held-out data. Additional analysis explores the effect of the technique on phone recognition accuracy, examines the actual values of the learned transform, and demonstrates that an extended training data set results in further FOM improvement.
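For concreteness, a commonly used simplified form of the FOM itself, the average detection rate as the false alarm count runs from 1 up to 10 per hour of audio, can be computed as in the sketch below; the scores, labels, and assumed hour of audio are synthetic.

```python
import numpy as np

# Sketch of a simplified Figure of Merit: average detection rate at 1..10
# false alarms per hour. Detection scores and labels are invented.
rng = np.random.default_rng(4)
scores = rng.normal(0, 1, 300)                   # scores of putative detections
is_hit = rng.random(300) < 0.2                   # True where a detection is real
hours_of_audio = 1.0
n_true = is_hit.sum()
max_fas = int(10 * hours_of_audio)               # allow up to 10 FAs per hour

order = np.argsort(-scores)                      # sweep threshold high to low
hits, fas, det_rates = 0, 0, []
for i in order:
    if is_hit[i]:
        hits += 1
    else:
        fas += 1
        det_rates.append(hits / n_true)          # detection rate at this FA count
        if fas == max_fas:
            break

print(f"FOM ~ {np.mean(det_rates):.3f}")
```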
Abstract:
An approach to pattern recognition using invariant parameters based on higher-order spectra is presented. In particular, bispectral invariants are used to classify one-dimensional shapes. The bispectrum, which is translation-invariant, is integrated along straight lines passing through the origin in bifrequency space. The phase of the integrated bispectrum is shown to be scale- and amplification-invariant. A minimal set of these invariants is selected as the feature vector for pattern classification. Pattern recognition using higher-order spectral invariants is fast, suited to parallel implementation, and works for signals corrupted by Gaussian noise. The classification technique is shown to distinguish two similar but different bolts given their one-dimensional profiles.
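A minimal sketch of such invariants, integrating the bispectrum along radial lines f2 = a·f1 and retaining the phase of each integral, is given below; the line slopes, test signal, and normalisation details are illustrative assumptions rather than the paper's exact formulation.

```python
import numpy as np

# Sketch of integrated-bispectrum phase invariants: sum the bispectrum
# B(f1, f2) = X(f1) X(f2) X*(f1 + f2) along lines f2 = a * f1 inside the
# principal domain and keep the phase. Slopes and signal are invented.
def bispectral_invariants(x, n_lines=8):
    N = len(x)
    X = np.fft.fft(x)
    half = N // 2
    phases = []
    for a in np.linspace(0.05, 1.0, n_lines):    # slopes of the radial lines
        total = 0j
        for f1 in range(1, half):
            f2 = int(round(a * f1))
            if 0 < f2 <= f1 and f1 + f2 < half:
                total += X[f1] * X[f2] * np.conj(X[f1 + f2])
        phases.append(np.angle(total))           # the phase is the invariant
    return np.array(phases)

rng = np.random.default_rng(5)
x = rng.normal(0, 1, 128) + np.sin(2 * np.pi * 5 * np.arange(128) / 128)
print(np.round(bispectral_invariants(x), 3))
```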
Abstract:
Features derived from the trispectra of DFT magnitude slices are used for multi-font digit recognition. These features are insensitive to translation, rotation, or scaling of the input, and they are also robust to noise. Classification accuracy tests were conducted on a common database of 256×256-pixel bilevel images of digits in 9 fonts, with randomly rotated and translated noisy versions used for training and testing. The results indicate that the trispectral features outperform moment invariants and affine moment invariants: they achieve a classification accuracy of 95%, compared with about 81% for Hu's (1962) moment invariants and 39% for the Flusser and Suk (1994) affine moment invariants on the same data in the presence of 1% impulse noise, using a 1-NN classifier. For comparison, a multilayer perceptron with no normalization for rotations and translations yields 34% accuracy on 16×16-pixel low-pass filtered and decimated versions of the same data.
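The 1-NN evaluation protocol used for these comparisons is straightforward to sketch. In the toy example below, random vectors stand in for trispectral features of digits in 9 fonts; the dimensions and class counts are invented.

```python
import numpy as np

# Minimal 1-NN classifier of the kind used for the accuracy comparisons in
# the abstract; random vectors stand in for trispectral features.
rng = np.random.default_rng(6)
n_classes, n_train, n_test, dim = 9, 50, 20, 12  # e.g. 9 fonts
means = rng.normal(0, 3, (n_classes, dim))
train_X = np.vstack([rng.normal(m, 1, (n_train, dim)) for m in means])
train_y = np.repeat(np.arange(n_classes), n_train)
test_X = np.vstack([rng.normal(m, 1, (n_test, dim)) for m in means])
test_y = np.repeat(np.arange(n_classes), n_test)

# 1-NN: each test vector takes the label of its nearest training vector.
d = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
pred = train_y[np.argmin(d, axis=1)]
print(f"1-NN accuracy: {(pred == test_y).mean():.3f}")
```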
Abstract:
An application of image processing techniques to the recognition of hand-drawn circuit diagrams is presented. The scanned image of a diagram is pre-processed to remove noise and converted to bilevel. Morphological operations are applied to obtain a clean, connected representation using thinned lines. The diagram comprises nodes, connections, and components. Nodes and components are segmented using appropriate thresholds on a spatially varying object-pixel density. Connection paths are traced using a pixel stack. Nodes are classified using syntactic analysis. Components are classified using a combination of invariant moments, scalar pixel-distribution features, and vector relationships between straight lines in polygonal representations. A node recognition accuracy of 82% and a component recognition accuracy of 86% were achieved on a database comprising 107 nodes and 449 components. This recogniser can be used for layout "beautification" or to generate input code for circuit analysis and simulation packages.
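The front end of such a pipeline (binarise, clean with a morphological operation, thin to one-pixel-wide lines) can be sketched with scikit-image, assuming that library is available; the toy image and operation choices below are illustrative, not the paper's exact chain.

```python
import numpy as np
from skimage.morphology import binary_closing, skeletonize

# Sketch of the pre-processing chain the abstract outlines: threshold to
# bilevel, clean with a morphological closing, then thin to a skeleton.
# The toy "diagram" is a synthetic thick line with impulse noise.
img = np.zeros((64, 64), dtype=float)
img[30:34, 5:60] = 1.0                           # a thick horizontal "wire"
img += np.random.default_rng(7).random((64, 64)) < 0.01   # impulse noise

bilevel = img > 0.5                              # convert to binary
cleaned = binary_closing(bilevel)                # fill small gaps, remove specks
thinned = skeletonize(cleaned)                   # one-pixel-wide skeleton
print("object pixels before/after thinning:",
      int(cleaned.sum()), int(thinned.sum()))
```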
Abstract:
There is significant interest in human-computer interaction methods that assist in the design of applications for use by children. Many of these approaches draw upon standard HCI methods, such as personas, scenarios, and probes. However, these techniques often require communication and thinking skills that are designer-centred, which prevents children with Autism Spectrum Disorders (ASD) or other learning and communication disabilities from being able to participate. This study investigates methods that might be used with children with ASD or other learning and communication disabilities to inspire the design of technology-based intervention approaches to support their speech and language development. Like Iversen and Brodersen, we argue that children with ASD should not be treated as being in some way "cognitively incomplete"; rather, they are experts in their everyday lives, and we cannot design future IT without involving them. But how do we involve them? Instead of beginning with HCI methods, we draw upon easy-to-use technologies and methods used in the therapy professions for child engagement, particularly the approaches of Hanen (2011) and Greenspan (1998). These approaches emphasize following the child's lead and ensuring that the child always has a legitimate turn at a detailed level of interaction. In a pilot project, we studied a child's interactions with their parents about activities over which they have control: photos that they have taken at school on an iPad. The iPad was simple enough for this child with ASD to use, and they enjoyed taking and reviewing photos. We use this small case study as an example of a child-led approach for a child with ASD, and we examine interactions from it in order to assess the possibilities and limitations of the child-led approach for supporting the design of technology-based interventions for speech and language development.
Abstract:
The design of artificial intelligence in computer games is an important component of a player's game-play experience. As games become more life-like and interactive, the need for more realistic game AI will increase. This is particularly the case for AI that simulates how human players act, behave, and make decisions. The purpose of this research is to establish a model of player-like behavior that may be used to inform the design of artificial intelligence that more accurately mimics a player's decision-making process. The research uses a qualitative analysis of player opinions and reactions while playing a first-person shooter video game, together with recordings of their in-game actions, speech, and facial characteristics. The initial studies provide player data that has been used to design a model of how a player behaves.
Abstract:
This paper introduces the Weighted Linear Discriminant Analysis (WLDA) technique, based upon the weighted pairwise Fisher criterion, for the purposes of improving i-vector speaker verification in the presence of high intersession variability. By taking advantage of the speaker discriminative information that is available in the distances between pairs of speakers clustered in the development i-vector space, the WLDA technique is shown to provide an improvement in speaker verification performance over traditional Linear Discriminant Analysis (LDA) approaches. A similar approach is also taken to extend the recently developed Source Normalised LDA (SNLDA) into Weighted SNLDA (WSNLDA) which, similarly, shows an improvement in speaker verification performance in both matched and mismatched enrolment/verification conditions. Based upon the results presented within this paper using the NIST 2008 Speaker Recognition Evaluation dataset, we believe that both WLDA and WSNLDA are viable as replacement techniques to improve the performance of LDA and SNLDA-based i-vector speaker verification.
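The heart of WLDA is a weighted pairwise between-class scatter in which nearby class pairs receive larger weights. The sketch below uses a hypothetical 1/d² weighting on synthetic "i-vectors"; the paper's actual weighting function and dimensions are not given in the abstract.

```python
import numpy as np

# Sketch of the weighted pairwise between-class scatter underlying WLDA:
# closer speaker pairs get larger weights, so the projection works harder
# to separate them. The weighting and "i-vectors" are invented stand-ins.
rng = np.random.default_rng(8)
n_spk, per_spk, dim = 20, 30, 50
means = rng.normal(0, 1, (n_spk, dim))
data = [m + rng.normal(0, 0.5, (per_spk, dim)) for m in means]

Sw = sum(np.cov(d.T, bias=True) for d in data) / n_spk   # within-class scatter
Sb = np.zeros((dim, dim))
for i in range(n_spk):
    for j in range(i + 1, n_spk):
        diff = means[i] - means[j]
        w = 1.0 / (diff @ diff)                  # hypothetical weight: 1 / d^2
        Sb += w * np.outer(diff, diff)
Sb /= n_spk * (n_spk - 1) / 2

# LDA directions: leading eigenvectors of inv(Sw) @ Sb.
evals, evecs = np.linalg.eig(np.linalg.solve(Sw, Sb))
W = np.real(evecs[:, np.argsort(-np.real(evals))[:10]])  # top 10 directions
print("projection matrix shape:", W.shape)
```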
Abstract:
Purpose: This chapter investigates an episode where a supervising teacher on playground duty asks two boys to each give an account of their actions over an incident that had just occurred on some climbing equipment in the playground.

Methodology: This chapter employs an ethnomethodological approach using conversation analysis. The data are taken from a corpus of video-recorded interactions of children, aged 7-9 years, and the teacher, in school playgrounds during the lunch recess.

Findings: The findings show the ways that children work up accounts of their playground practices when asked by the teacher. The teacher initially provided interactional space for each child to give their version of the events. Ultimately, the teacher's version of how to act in the playground became the sanctioned one. The children and the teacher formulated particular social orders of behavior in the playground through multi-modal devices, direct reported speech, and scripts. Such public displays of talk work as socialization practices that frame teacher-sanctioned, morally appropriate actions in the playground.

Value of paper: This chapter shows the pervasiveness of the teacher's social order, as she presented an institutional social order of how to interact in the playground, showing clearly the disjunction of adult-child orders between the teacher and children.
Abstract:
Recent findings concerning exhaled aerosol size distributions and the regions of the respiratory tract in which they are generated could have significant implications for human-to-human spread of infections specific to the lower respiratory tract. Even in healthy people, measurable quantities of aerosol are routinely generated from the lower respiratory tract (LRT) during breathing (1-3). We have found that there are at least three modes in the exhaled aerosol size distribution of healthy adults (4) (see Figure 1). These modes each have a characteristic size and arise from different parts of the respiratory tract: the respiratory bronchioles produce aerosol during breathing, the larynx during speech, and the oral cavity also during speech. The resulting model of the droplet size distribution is therefore called the Bronchial Laryngeal Oral (B.L.O.) tri-modal model of expired aerosol.
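A tri-modal model of this kind can be written as a sum of three lognormal modes, one per site of origin. The sketch below does exactly that; the mode diameters, geometric standard deviations, and concentrations are invented placeholders, not the paper's fitted values.

```python
import numpy as np

# Sketch of a tri-modal lognormal mixture of the B.L.O. kind: one mode each
# for bronchiolar, laryngeal, and oral origin. All numbers are invented
# placeholders, not the paper's fitted values.
def lognormal_mode(d, median_um, gsd, number_conc):
    """Number-weighted lognormal size distribution dN/dlog(d)."""
    return (number_conc / (np.sqrt(2 * np.pi) * np.log(gsd))
            * np.exp(-0.5 * (np.log(d / median_um) / np.log(gsd)) ** 2))

d = np.logspace(-1, 3, 200)                      # diameter in micrometres
modes = [
    ("bronchiolar", 0.8, 1.5, 100.0),            # small droplets from breathing
    ("laryngeal", 2.0, 1.6, 10.0),               # speech, larynx
    ("oral", 100.0, 1.8, 1.0),                   # speech, oral cavity
]
total = sum(lognormal_mode(d, cm, gsd, n) for _, cm, gsd, n in modes)
print(f"distribution peaks near {d[np.argmax(total)]:.2f} micrometres")
```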
Abstract:
The rank transform is a non-parametric technique which has recently been proposed for the stereo matching problem. The motivation behind its application to the matching problem is its invariance to certain types of image distortion and noise, as well as its amenability to real-time implementation. This paper derives an analytic expression for the process of matching using the rank transform, and then derives a constraint which must be satisfied for a correct match, dubbed the rank order constraint or simply the rank constraint. Experimental work has shown that this constraint is capable of resolving ambiguous matches, thereby improving matching reliability. The constraint was incorporated into a new algorithm for matching using the rank transform, and this modified algorithm resulted in an increased proportion of correct matches for all test imagery used.
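The rank transform itself, and its use in window-based matching, can be sketched directly: each pixel is replaced by the number of neighbours with lower intensity, and windows of ranks are compared by a sum of absolute differences. The images, window sizes, and disparity search range below are toy values.

```python
import numpy as np

# Sketch of the rank transform and window-based stereo matching over it.
# Images, window sizes, and the disparity range are toy values.
def rank_transform(img, r=2):
    """Replace each pixel by the count of window neighbours less than it."""
    h, w = img.shape
    out = np.zeros_like(img, dtype=int)
    for y in range(r, h - r):
        for x in range(r, w - r):
            win = img[y - r:y + r + 1, x - r:x + r + 1]
            out[y, x] = np.sum(win < img[y, x])
    return out

rng = np.random.default_rng(9)
left = rng.integers(0, 256, (20, 40))
right = np.roll(left, -3, axis=1)                # ground-truth disparity of 3
rl, rr = rank_transform(left), rank_transform(right)

y, x, m = 10, 20, 3                              # match a window around (y, x)
ref = rl[y - m:y + m + 1, x - m:x + m + 1]
costs = [np.abs(ref - rr[y - m:y + m + 1, x - d - m:x - d + m + 1]).sum()
         for d in range(0, 8)]                   # sum of absolute differences
print("estimated disparity:", int(np.argmin(costs)))
```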