873 results for human language technology
Basque materials from the legacy of Wilhelm von Humboldt: the relevance of Astarloa and the Plan de Lenguas
Abstract:
This article is a translation of the German original «Zum Stellenwert Astarloas und des Plan de Lenguas», published in B. Hurch (ed.), Die baskischen Materialien aus dem Nachlaß Wilhelm von Humboldts. Astarloa, Charpentier, Fréret, Aizpitarte und anderes. Paderborn: Schöningh, pp. 21-42. The Spanish translation is the work of Oroitz Jauregi and was revised by Ricardo Gómez and Bernhard Hurch.
Abstract:
[EUS] Research into the biological roots of language has made great strides in recent years. This research is necessarily interdisciplinary: an interweaving of linguistics, psychology, neuroscience, and genetics makes up «biolinguistics», a new field whose aim is to characterise the neurocognition of language. This article offers an initial outline of this field of knowledge, presenting some of the steps taken in recent years in the genetics and neurocognition of language.
Abstract:
The partially observable Markov decision process (POMDP) provides a popular framework for modelling spoken dialogue. This paper describes how the expectation propagation (EP) algorithm can be used to learn the parameters of the POMDP user model. Various special probability factors applicable to this task are presented, which allow the parameters to be learned even when the structure of the dialogue is complex. No annotations are required: neither the true dialogue state nor the true semantics of user utterances. Parameters optimised using the proposed techniques are shown to improve both offline transcription results and simulated dialogue management performance. ©2010 IEEE.
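For background on what the learned user-model parameters feed into, the core of a POMDP dialogue manager is the belief-state update. The following is a minimal sketch, not the paper's EP algorithm: the states, the single action's transition table, and the observation probabilities are all invented for illustration.

```python
# Minimal sketch of a discrete POMDP belief-state update.
# All states and probabilities below are hypothetical.

STATES = ["want_info", "want_end"]

# P(s' | s, a): invented user-goal transition model for one system action.
TRANS = {
    ("want_info", "want_info"): 0.9,
    ("want_info", "want_end"): 0.1,
    ("want_end", "want_info"): 0.0,
    ("want_end", "want_end"): 1.0,
}

# P(o | s'): invented observation model for the observation "ask_more".
OBS = {"want_info": 0.8, "want_end": 0.2}

def belief_update(belief):
    """b'(s') is proportional to P(o|s') * sum_s P(s'|s,a) * b(s)."""
    new = {}
    for s2 in STATES:
        predicted = sum(TRANS[(s, s2)] * belief[s] for s in STATES)
        new[s2] = OBS[s2] * predicted
    z = sum(new.values())  # normalising constant
    return {s: p / z for s, p in new.items()}

b = belief_update({"want_info": 0.5, "want_end": 0.5})
```

In a real system the tables above are exactly the parameters that must be estimated, which is where the paper's EP-based learning comes in.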
Abstract:
A recent trend in spoken dialogue research is the use of reinforcement learning to train dialogue systems in a simulated environment. Past researchers have shown that the types of errors that are simulated can have a significant effect on simulated dialogue performance. Since modern systems typically receive an N-best list of possible user utterances, it is important to be able to simulate a full N-best list of hypotheses. This paper presents a new method for simulating such errors based on logistic regression, as well as a new method for simulating the structure of N-best lists of semantics and their probabilities, based on the Dirichlet distribution. Offline evaluations show that the new Dirichlet model results in a much closer match to the receiver operating characteristic (ROC) of the live data. Experiments also show that the logistic model gives confusions that are closer to the type of confusions observed in live situations. The hope is that these new error models will be able to improve the resulting performance of trained dialogue systems. © 2012 IEEE.
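The Dirichlet-based idea can be illustrated with a small sketch: draw the probability mass assigned to the hypotheses of a simulated N-best list from a Dirichlet distribution. The concentration parameters below are invented, not the paper's fitted values, and the standard Gamma-normalisation trick stands in for a library sampler.

```python
import random

def sample_dirichlet(alphas, rng):
    """Sample from Dirichlet(alphas) by normalising independent Gamma draws."""
    draws = [rng.gammavariate(a, 1.0) for a in alphas]
    total = sum(draws)
    return [d / total for d in draws]

rng = random.Random(0)
# A hypothetical 4-way N-best list; the first slot (the "correct"
# hypothesis) is given a larger concentration parameter, so it tends
# to receive more probability mass.
probs = sample_dirichlet([5.0, 1.0, 1.0, 1.0], rng)
```

Each call yields one simulated confidence distribution over the N-best entries; fitting the concentration parameters to live data is what shapes the simulated ROC.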
Abstract:
The partially observable Markov decision process (POMDP) has been proposed as a dialogue model that enables automatic improvement of the dialogue policy and robustness to speech understanding errors. It requires, however, a large number of dialogues to train the dialogue policy. Gaussian processes (GP) have recently been applied to POMDP dialogue management optimisation showing an ability to substantially increase the speed of learning. Here, we investigate this further using the Bayesian Update of Dialogue State dialogue manager. We show that it is possible to apply Gaussian processes directly to the belief state, removing the need for a parametric policy representation. In addition, the resulting policy learns significantly faster while maintaining operational performance. © 2012 IEEE.
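As a toy illustration of applying a Gaussian process to a belief-state summary, the sketch below regresses an invented scalar feature (say, the top-hypothesis probability) onto hypothetical observed returns. The kernel, training points, and noise level are all made up; a real system would operate on the full belief and learn online.

```python
import math

def rbf(x1, x2, length=0.5):
    """Squared-exponential kernel on a 1-D input."""
    return math.exp(-((x1 - x2) ** 2) / (2 * length ** 2))

def solve(A, rhs):
    """Gaussian elimination with partial pivoting for a small linear system."""
    n = len(rhs)
    M = [row[:] + [rhs[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

X = [0.1, 0.5, 0.9]      # hypothetical belief summaries from training dialogues
y = [-1.0, 0.2, 1.0]     # hypothetical observed returns
noise = 1e-6             # jitter on the kernel matrix diagonal

K = [[rbf(xi, xj) + (noise if i == j else 0.0)
      for j, xj in enumerate(X)] for i, xi in enumerate(X)]
alpha = solve(K, y)

def gp_mean(x):
    """GP posterior mean: k(x, X) @ K^{-1} y."""
    return sum(rbf(x, xi) * ai for xi, ai in zip(X, alpha))
```

The appeal in the dialogue setting is data efficiency: the kernel generalises across similar belief states, so far fewer training dialogues are needed than with a parametric policy.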
Abstract:
We describe our work on developing a speech recognition system for multi-genre media archives. The high diversity of the data makes this a challenging recognition task, which may benefit from systems trained on a combination of in-domain and out-of-domain data. Working with tandem HMMs, we present Multi-level Adaptive Networks (MLAN), a novel technique for incorporating information from out-of-domain posterior features using deep neural networks. We show that it provides a substantial reduction in WER over other systems, with relative WER reductions of 15% over a PLP baseline, 9% over in-domain tandem features and 8% over the best out-of-domain tandem features. © 2012 IEEE.
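The WER figures above are based on the edit distance between reference and hypothesis word sequences; a minimal self-contained implementation of the standard metric:

```python
def wer(reference, hypothesis):
    """Word error rate: (substitutions + insertions + deletions) / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # Levenshtein distance over words via dynamic programming.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)
```

Note the reductions quoted in the abstract are relative: a 15% relative reduction over a baseline WER w means the new system's WER is 0.85 × w.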
Abstract:
Vocal learning is a critical behavioral substrate for spoken human language. It is a rare trait found in three distantly related groups of birds: songbirds, hummingbirds, and parrots. These avian groups have remarkably similar systems of cerebral vocal nuclei for the control of learned vocalizations that are not found in their more closely related vocal non-learning relatives. These findings led to the hypothesis that brain pathways for vocal learning in different groups evolved independently from a common ancestor but under pre-existing constraints. Here, we suggest one constraint: a pre-existing system for movement control. Using behavioral molecular mapping, we discovered that in songbirds, parrots, and hummingbirds, all cerebral vocal learning nuclei are adjacent to discrete brain areas active during limb and body movements. Similar to the relationships between vocal nuclei activation and singing, activation in the adjacent areas correlated with the amount of movement performed and was independent of auditory and visual input. These same movement-associated brain areas were also present in female songbirds that do not learn vocalizations and have atrophied cerebral vocal nuclei, and in ring doves that are vocal non-learners and do not have cerebral vocal nuclei. A compilation of previous neural tracing experiments in songbirds suggests that the movement-associated areas are connected in a network that runs in parallel with the adjacent vocal learning system. This study is the first global mapping that we are aware of for movement-associated areas of the avian cerebrum, and it indicates that brain systems that control vocal learning in distantly related birds are directly adjacent to brain systems involved in movement control.
Based upon these findings, we propose a motor theory for the origin of vocal learning: namely, that the brain areas specialized for vocal learning in vocal learners evolved as a specialization of a pre-existing motor pathway that controls movement.
Abstract:
While the hominin fossil record cannot inform us on either the presence or extent of social and cognitive abilities that may have paved the way for the emergence of language, studying non-vocal communication among our closest living relatives, the African apes, may provide valuable information about how language originated. Although much has been learned from gestural signaling in non-human primates, we have not yet established how and why gestural repertoires vary across species, what factors influence this variation, and how knowledge of these differences can contribute to an understanding of gestural signaling's contribution to language evolution. In this paper, we review arguments surrounding the theory that language evolved from gestural signaling and suggest some important factors to consider when conducting comparative studies of gestural communication among African apes. Specifically, we propose that social dynamics and positional behavior are critical components that shape the frequency and nature of gestural signaling across species and we argue that an understanding of these factors could shed light on how gestural communication may have been the basis of human language. We outline predictions for the influence of these factors on the frequencies and types of gestures used across the African apes and highlight the importance of including these factors in future gestural communication research with primates.
Abstract:
One thing is (a) to develop a system that handles some task to one's satisfaction, and also has a universally recognized mirthful side to its output. Another thing is (b) to provide an analysis of why you are getting such a byproduct. Yet another thing is (c) to develop a model that incorporates reflection about some phenomenon in humor for its own sake. This paper selects for discussion especially Alibi, going on to describe the preliminaries of Columbus. The former, which fits in (a), is a planner with an explanatory capability. It invents pretexts. It's no legal defense, but it is relevant to evidential thinking in AI & Law. Some of the output pretexts are mirthful. Not in the sense that they are silly: they are not. A key factor seems to be the very alacrity at explaining away detail after detail of globally damning evidence. I attempt a reanalysis of Alibi in respect of (b). As to Columbus, it fits instead in (c). We introduce here the basics of this (unimplemented) model, developed to account for a sample text in parody.
Abstract:
We present the results of exploratory experiments using lexical valence extracted from brain activity using electroencephalography (EEG) for sentiment analysis. We selected 78 English words (36 for training and 42 for testing), presented as stimuli to three native English speakers. EEG signals were recorded from the subjects while they performed a mental imaging task for each word stimulus. Wavelet decomposition was employed to extract EEG features from the time-frequency domain. The extracted features were used as inputs to a sparse multinomial logistic regression (SMLR) classifier for valence classification, after univariate ANOVA feature selection. After mapping EEG signals to sentiment valences, we exploited the lexical polarity extracted from brain data for the prediction of the valence of 12 sentences taken from the SemEval-2007 shared task, and compared it against existing lexical resources.
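As an illustrative stand-in for the valence classifier, the sketch below trains plain binary logistic regression by stochastic gradient descent on invented 2-D "EEG features" (the paper itself uses sparse multinomial logistic regression on wavelet features; the data and hyperparameters here are made up).

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def train(xs, ys, lr=0.5, epochs=500):
    """Stochastic gradient descent on the logistic (cross-entropy) loss."""
    w = [0.0, 0.0]
    bias = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            p = sigmoid(w[0] * x[0] + w[1] * x[1] + bias)
            err = p - y            # gradient of the loss w.r.t. the logit
            w[0] -= lr * err * x[0]
            w[1] -= lr * err * x[1]
            bias -= lr * err
    return w, bias

# Toy training set: positive-valence feature vectors cluster high,
# negative-valence ones low. Entirely synthetic.
xs = [(0.9, 0.8), (0.8, 0.9), (0.1, 0.2), (0.2, 0.1)]
ys = [1, 1, 0, 0]
w, bias = train(xs, ys)

def predict(x):
    """1 = positive valence, 0 = negative valence."""
    return 1 if sigmoid(w[0] * x[0] + w[1] * x[1] + bias) > 0.5 else 0
```

The sparse multinomial variant used in the paper extends this to multiple classes and adds a sparsity-inducing prior on the weights, which performs feature selection alongside the explicit ANOVA step.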