999 results for small vocabulary
Abstract:
Visual noise insensitivity is important to audio-visual speech recognition (AVSR). Visual noise can take a number of forms, such as varying frame rate, occlusion, lighting, or speaker variability. We investigate the use of a high-dimensional secondary classifier on the word likelihood scores from both the audio and video modalities for the purposes of adaptive fusion. Preliminary results demonstrate performance above the catastrophic fusion boundary for our confidence measure irrespective of the type of visual noise presented to it. Our experiments were restricted to small vocabulary applications.
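A minimal sketch of the adaptive fusion idea, in Python (this is not the paper's high-dimensional secondary classifier; the weighting rule, function names, and toy scores are illustrative assumptions): per-word log-likelihood scores from the two modalities are combined with a weight driven by a visual confidence measure, so that as visual noise grows the decision degrades toward the audio-only result rather than below it.

```python
import numpy as np

def fuse_word_scores(audio_loglik, video_loglik, video_confidence):
    """Confidence-weighted late fusion of per-word log-likelihood scores.

    audio_loglik, video_loglik: arrays of shape (n_words,), one score per
    vocabulary word from each single-modality recogniser.
    video_confidence: scalar in [0, 1]; low values down-weight the visual
    stream (e.g. under occlusion, poor lighting or a dropped frame rate).
    """
    w_video = 0.5 * video_confidence            # illustrative weighting rule
    fused = (1.0 - w_video) * audio_loglik + w_video * video_loglik
    return int(np.argmax(fused)), fused

# Toy usage: three-word vocabulary, heavily degraded video stream.
audio = np.array([-12.0, -9.5, -14.2])
video = np.array([-11.0, -13.0, -10.5])
best_word, scores = fuse_word_scores(audio, video, video_confidence=0.2)
print(best_word, scores)
```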
Abstract:
Based on biomimetic pattern recognition theory, we propose a novel speaker-independent keyword-spotting algorithm for continuous speech. Without endpoint detection or segmentation, a minimum-distance curve between the continuous speech samples and each keyword-training net is obtained by dynamically searching the feature-extracted continuous speech. The keywords are then counted by examining the depth and the number of the valleys in this curve. Experiments on small vocabulary continuous speech at various speaking rates yielded good recognition results and demonstrated the validity of the algorithm.
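The valley-counting step can be illustrated with a short sketch (a rough stand-in for the paper's biomimetic training nets; the threshold, function names, and toy curve are invented for illustration): keyword occurrences are counted as valleys of the minimum-distance curve that drop below a depth threshold.

```python
import numpy as np
from scipy.signal import find_peaks

def count_keyword_hits(distance_curve, max_valley_depth):
    """Count occurrences of one keyword from its minimum-distance curve.

    distance_curve: distance between each window of the continuous speech
    and the keyword model; a deep valley suggests one occurrence.
    max_valley_depth: valleys whose distance stays below this threshold
    are counted as hits (threshold choice is application-specific).
    """
    # Valleys of the distance curve are peaks of its negation.
    valleys, _ = find_peaks(-distance_curve, height=-max_valley_depth)
    return len(valleys), valleys

# Toy usage: a synthetic curve with two clear valleys.
curve = np.array([5.0, 4.2, 1.1, 4.0, 5.1, 4.8, 0.9, 3.9, 5.2])
n_hits, positions = count_keyword_hits(curve, max_valley_depth=2.0)
print(n_hits, positions)   # -> 2 [2 6]
```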
Abstract:
This paper describes a structured SVM framework suitable for noise-robust medium/large vocabulary speech recognition. Several theoretical and practical extensions to previous work on small vocabulary tasks are detailed. The joint feature space based on word models is extended to allow context-dependent triphone models to be used. Interpreting the structured SVM as a large margin log-linear model shows that there is an implicit assumption that the prior of the discriminative parameters is a zero-mean Gaussian. However, depending on the definition of the likelihood feature space, a non-zero prior may be more appropriate. A general Gaussian prior is incorporated into the large margin training criterion in a form that allows the cutting plane algorithm to be directly applied. To further speed up the training process, the 1-slack algorithm, caching of competing hypotheses, and parallelization strategies are also proposed. The performance of structured SVMs is evaluated on a noise-corrupted medium vocabulary speech recognition task: AURORA 4. © 2011 IEEE.
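The effect of the non-zero-mean Gaussian prior on the large margin criterion can be sketched as follows (plain multiclass subgradient descent standing in for the cutting-plane / 1-slack optimisation, and simple linear scores standing in for the triphone joint feature space; all names, data, and hyperparameters are illustrative assumptions): the prior simply replaces the usual ||W||^2 regulariser with ||W - prior_mean||^2 / prior_var.

```python
import numpy as np

def train_large_margin(X, y, n_classes, prior_mean, prior_var=1.0,
                       C=1.0, lr=0.01, epochs=50):
    """Multiclass large-margin training with a non-zero-mean Gaussian prior.

    Minimises  (1 / (2 * prior_var)) * ||W - prior_mean||^2
             + C * sum_i ( max_c [ margin(c, y_i) + W_c . x_i ] - W_{y_i} . x_i )
    by plain subgradient descent (a toy stand-in for cutting-plane training).
    """
    W = prior_mean.copy()                        # start at the prior mean
    for _ in range(epochs):
        grad = (W - prior_mean) / prior_var      # gradient of the prior term
        for xi, yi in zip(X, y):
            scores = W @ xi + (np.arange(n_classes) != yi)  # margin-augmented scores
            c_hat = int(np.argmax(scores))
            if c_hat != yi:                      # subgradient of the hinge term
                grad[c_hat] += C * xi
                grad[yi] -= C * xi
        W -= lr * grad
    return W

# Toy usage: 2-D features, 3 classes, prior centred away from zero.
rng = np.random.default_rng(0)
X = rng.normal(size=(60, 2)) + np.repeat(np.eye(3, 2) * 3.0, 20, axis=0)
y = np.repeat(np.arange(3), 20)
W = train_large_margin(X, y, n_classes=3, prior_mean=np.ones((3, 2)))
print(((W @ X.T).argmax(axis=0) == y).mean())   # most examples classified correctly
```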
Abstract:
In this thesis, we examine certain properties of distributed word representations and propose a technique to enlarge the vocabulary of neural machine translation systems. First, we consider a well-known analogy-resolution problem and examine the effect of position-dependent weights, the choice of combination function, and the impact of supervised learning. We then show that simple translation-based distributed representations can match or exceed the state of the art on the TOEFL synonym-detection test and on the recent SimLex-999 gold standard. Finally, motivated by the impressive results obtained with distributed representations from small-vocabulary (30,000 words) neural translation systems, we present a GPU-friendly approach to increase the vocabulary size by more than an order of magnitude. Although originally developed only to obtain the distributed representations, we show that this technique works rather well on translation tasks, in particular English to French (WMT'14).
Abstract:
Recently there has been interest in combined generative/discriminative classifiers. In these classifiers, features for the discriminative models are derived from generative kernels. One advantage of using generative kernels is that systematic approaches exist for introducing complex dependencies beyond conditional independence assumptions. Furthermore, by using generative kernels, model-based compensation/adaptation techniques can be applied to make discriminative models robust to noise/speaker conditions. This paper extends previous work with combined generative/discriminative classifiers in several directions. First, it introduces derivative kernels based on context-dependent generative models. Second, it describes how derivative kernels can be incorporated in continuous discriminative models. Third, it addresses the issues associated with the large number of classes and parameters when context-dependent models and high-dimensional features of derivative kernels are used. The approach is evaluated on two noise-corrupted tasks: the small vocabulary AURORA 2 task and the medium-to-large vocabulary AURORA 4 task.
Abstract:
Recently there has been interest in combining generative and discriminative classifiers. In these classifiers, features for the discriminative models are derived from the generative kernels. One advantage of using generative kernels is that systematic approaches exist to introduce complex dependencies into the feature space. Furthermore, as the features are based on generative models, standard model-based compensation and adaptation techniques can be applied to make discriminative models robust to noise and speaker conditions. This paper extends previous work in this framework in several directions. First, it introduces derivative kernels based on context-dependent generative models. Second, it describes how derivative kernels can be incorporated in structured discriminative models. Third, it addresses the issues associated with the large number of classes and parameters when context-dependent models and high-dimensional feature spaces of derivative kernels are used. The approach is evaluated on two noise-corrupted tasks: the small vocabulary AURORA 2 task and the medium-to-large vocabulary AURORA 4 task. © 2011 IEEE.
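As a concrete illustration of a derivative kernel's feature space (a minimal sketch assuming a single diagonal-Gaussian generative model rather than the context-dependent HMMs used in the paper; names and toy values are assumptions): each observation is mapped to its log-likelihood plus the derivatives of that log-likelihood with respect to the generative model's parameters, and the resulting vector is what the discriminative model then operates on.

```python
import numpy as np

def derivative_features(x, mean, var):
    """Derivative (Fisher-score style) features from a diagonal Gaussian.

    Returns [log p(x), d log p / d mean, d log p / d var] for one frame x,
    the kind of feature vector a linear discriminative model can consume.
    """
    diff = x - mean
    log_p = -0.5 * np.sum(np.log(2.0 * np.pi * var) + diff ** 2 / var)
    d_mean = diff / var                                  # gradient w.r.t. the means
    d_var = 0.5 * (diff ** 2 / var ** 2 - 1.0 / var)     # gradient w.r.t. the variances
    return np.concatenate(([log_p], d_mean, d_var))

# Toy usage: one 3-dimensional frame scored against one Gaussian component.
x = np.array([0.2, -1.0, 0.5])
phi = derivative_features(x, mean=np.zeros(3), var=np.ones(3))
print(phi.shape)   # (7,): 1 log-likelihood + 3 mean derivatives + 3 variance derivatives
```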
Abstract:
Headed on the first page with the words "Nomenclatura hebraica," this handwritten volume is a vocabulary with the Hebrew word in the left column and the English translation on the right. While the book is arranged in sections by letter, individual entries do not appear in strict alphabetical order. The small vocabulary varies greatly and includes entries like enigma, excommunication, and martyr, as well as cucumber and maggot. There are translations of the astrological signs at the end of the volume. A poem is written at the bottom of the last page in a different hand: "Women when good the best of saints/ that bright seraphick lovely/ she, who nothing of an angel/ wants but truth & immortality./ Verse 2: Who silken limbs & charming/ face. Keeps nature warm."
Abstract:
Anonymous. By John Kemp.
Abstract:
In this action research study of my 8th grade mathematics classroom, I investigated the effects of self-assessment on student group work. Data were collected to see how self-assessment affected small-group work, the use of precise mathematical vocabulary, and student attitudes toward mathematics. Self-assessment allowed the students to periodically evaluate their own learning and their involvement in math class. I discovered that the vast majority of students enjoy working in small groups, and they feel they are good group members. Evidence regarding the use of precise mathematical vocabulary showed an increased awareness of the importance of its usage. Student attitudes toward mathematics remained positive and unchanged throughout the research. As a result of this research, I plan to continue the use of small-group work and self-assessment. I will continue to emphasize the inclusion of precise mathematical vocabulary as well as training in cooperative learning strategies.
Abstract:
Numerous studies have found a positive connection between learners' motivation towards a foreign language and foreign language achievement. The present study examines the role of motivation in the receptive vocabulary breadth (size) of two groups of Spanish learners of different ages, all with 734 hours of instruction in English as a Foreign Language (EFL): a CLIL (Content and Language Integrated Learning) group in primary education and a non-CLIL (or EFL) group in secondary education. Most students in both groups were found to be highly motivated. The primary CLIL group slightly surpassed the secondary non-CLIL group in mean general motivation, but the difference was not significant. The secondary group significantly surpassed the primary group in receptive vocabulary size. No relationship between receptive vocabulary knowledge and general motivation was found in the primary CLIL group. On the other hand, a significant positive connection, although a very small one, was identified for the secondary non-CLIL group. We discuss the type of test, the age of the students, and the type of instruction as variables that could be influencing the results.