5 results for performativity of speech
in Archivo Digital para la Docencia y la Investigación - Repositorio Institucional de la Universidad del País Vasco
Abstract:
Accurate and fast decoding of speech imagery from electroencephalographic (EEG) data could serve as the basis for a new generation of brain-computer interfaces (BCIs) that are more portable and easier to use. However, decoding speech imagery from EEG is a hard problem due to many factors. In this paper we focus on the analysis of the classification step of speech imagery decoding for a three-class vowel speech imagery recognition problem. We empirically show that different classification subtasks may require different classifiers for accurate decoding, and we obtain a classification accuracy that improves on the best previously published results. We further investigate the relationship between the classifiers and different sets of features selected by the common spatial patterns method. Our results indicate that further improvement of BCIs based on speech imagery could be achieved by carefully selecting an appropriate combination of classifiers for the subtasks involved.
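The classifier-per-subtask idea described in the abstract can be illustrated with a minimal sketch. This is not the paper's actual pipeline: it assumes band-pass filtered, epoched EEG in a NumPy array, uses MNE's CSP implementation for spatial filtering, and, for each pairwise (one-vs-one) vowel subtask, keeps whichever of a few standard classifiers cross-validates best; the candidate classifiers and CSP settings are placeholders.

```python
# Sketch: per-subtask classifier selection on CSP features (not the paper's exact setup).
# Assumes `epochs` is an (n_trials, n_channels, n_samples) float array of band-pass
# filtered EEG and `labels` is a NumPy array of vowel codes {0, 1, 2}.
import numpy as np
from itertools import combinations
from mne.decoding import CSP
from sklearn.pipeline import Pipeline
from sklearn.model_selection import cross_val_score
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.neighbors import KNeighborsClassifier

def best_classifier_per_subtask(epochs, labels, n_csp=4, cv=5):
    """For each pairwise vowel subtask, cross-validate several candidate
    classifiers on CSP features and keep the best-scoring one."""
    candidates = {
        "LDA": LinearDiscriminantAnalysis(),
        "SVM-RBF": SVC(kernel="rbf", C=1.0),
        "kNN": KNeighborsClassifier(n_neighbors=5),
    }
    chosen = {}
    for a, b in combinations(np.unique(labels), 2):
        mask = np.isin(labels, [a, b])
        X, y = epochs[mask], labels[mask]
        scores = {}
        for name, clf in candidates.items():
            pipe = Pipeline([("csp", CSP(n_components=n_csp)), ("clf", clf)])
            scores[name] = cross_val_score(pipe, X, y, cv=cv).mean()
        best = max(scores, key=scores.get)
        chosen[(a, b)] = (best, scores[best])
    return chosen  # e.g. {(0, 1): ("LDA", 0.78), (0, 2): ("SVM-RBF", 0.71), ...}
```

The per-pair winners could then be combined by majority voting over the three binary decisions, which is one common way to assemble a three-class decision from one-vs-one subtasks.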
Abstract:
[EN] This paper examines the syntactic ideas of Pablo Pedro Astarloa (1752-1806) as set out in his Discursos filosóficos sobre la lengua primitiva (1805), and places them in the context of the debate between rationalists and sensualists over whether there is a «natural order» of words. Astarloa developed a system to account for word order in the primitive language of mankind (and hence in the Basque language), founded on three types of «nobleness» and on the principle that the nobler element precedes the less noble one. The first type (nobleza de origen) orders words according to their meaning. The second type (nobleza de ministerio) orders words according to the part of speech they belong to, or the semantic function they have. Finally, the third type (nobleza de mérito or de movilidad) considers the will to communicate, so that word order reflects the information structure. Moreover, Astarloa's three types of nobleness are arranged in a hierarchy of superiority: movilidad > ministerio > origen. Astarloa's syntax thus comes close to the sensualists' conception of word order, because it did not appeal to a fixed natural order of words; instead, he proposed a variable word order based mainly on the communicative process.
Abstract:
Query-by-Example Spoken Term Detection (QbE STD) aims at retrieving data from a speech repository given an acoustic query containing the term of interest as input. It has been receiving much interest lately due to the high volume of information stored in audio or audiovisual format. QbE STD differs from automatic speech recognition (ASR) and keyword spotting (KWS)/spoken term detection (STD) in that ASR is interested in all the terms/words that appear in the speech signal, while KWS/STD relies on a textual transcription of the search term to retrieve the speech data. This paper presents the systems submitted to the ALBAYZIN 2012 QbE STD evaluation, held as part of the ALBAYZIN 2012 evaluation campaign within the context of the IberSPEECH 2012 Conference. The evaluation consists of retrieving the speech files that contain the input queries and indicating their start and end timestamps within the appropriate speech file. Evaluation is conducted on a Spanish spontaneous speech database containing a set of talks from the MAVIR workshops, which amount to about 7 h of speech in total. We present the database, the evaluation metric, and the submitted systems, along with all results and some discussion. Four different research groups took part in the evaluation. The evaluation results show the difficulty of this task, and the limited performance indicates there is still a lot of room for improvement. The best result is achieved by a dynamic time warping-based search over Gaussian posteriorgrams/posterior phoneme probabilities. This paper also compares the systems with the aim of establishing the best technique for dealing with this difficult task and of identifying promising directions for this relatively novel task.
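The winning approach, a dynamic time warping search over posteriorgrams, can be illustrated with a minimal subsequence DTW sketch. This is not any of the submitted systems: it assumes the query and the search utterance are already available as (frames × phoneme-posteriors) arrays, uses the negative log of the frame-wise inner product as local cost, and lets the query match start and end anywhere in the utterance.

```python
# Sketch of subsequence DTW detection over posteriorgrams (illustrative, not an
# ALBAYZIN 2012 submission). `query` and `utt` are (n_frames, n_phones) arrays of
# posterior probabilities whose rows sum to 1.
import numpy as np

def subsequence_dtw_score(query, utt, eps=1e-10):
    """Return the best length-normalized matching cost of `query` inside `utt`
    together with the utterance frame where the match ends."""
    nq, nu = len(query), len(utt)
    # Local cost: -log inner product of posterior vectors (low when frames agree).
    local = -np.log(query @ utt.T + eps)               # shape (nq, nu)
    D = np.full((nq, nu), np.inf)
    D[0, :] = local[0, :]                               # query may start at any utterance frame
    for i in range(1, nq):
        D[i, 0] = D[i - 1, 0] + local[i, 0]
        for j in range(1, nu):
            D[i, j] = local[i, j] + min(D[i - 1, j],      # utterance frame repeated
                                        D[i, j - 1],      # query frame repeated
                                        D[i - 1, j - 1])  # advance both
    end = int(np.argmin(D[-1, :]))                      # query may end at any utterance frame
    # Normalize by query length so scores are comparable across queries.
    return D[-1, end] / nq, end

# A detection is declared where the normalized cost falls below a tuned threshold;
# the start frame (and hence the timestamps) can be recovered by backtracking through D.
```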
Abstract:
[EU] The rise in the development of speech technology, together with the effort to make the everyday life of people with a speech impairment as comfortable as possible, has led the Aholab research group to develop the ZURE TTS project, whose goal is to build a speech synthesizer for people who are affected by a voice impairment or who have completely lost their voice. To obtain the synthesized voice, a natural-speech corpus made up of sentences recorded by voice donors is taken as the basis. For the synthesis process to reach the highest possible quality, it is essential that the speech stored in the database be suitable; for that reason, the project aims to develop a verifier of the recordings' content, which checks whether the sentences read by the user are correct, thereby guaranteeing the quality of the synthesized voice.
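A minimal sketch of the kind of check such a verifier might perform, assuming an ASR transcript of the recording is already available (the actual ZURE TTS verifier is not described here): normalize the prompt and the transcript and accept the recording only when the word error rate against the prompt stays under a threshold.

```python
# Illustrative prompt-vs-recording check (not the actual ZURE TTS verifier).
# Assumes some ASR front end has already produced `hypothesis`, a transcript of
# what the voice donor actually read; `prompt` is the sentence they were shown.
import re
import unicodedata

def normalize(text):
    """Lowercase, strip accents and punctuation, split into words."""
    text = unicodedata.normalize("NFD", text.lower())
    text = "".join(c for c in text if unicodedata.category(c) != "Mn")
    text = re.sub(r"[^\w\s]", " ", text)
    return text.split()

def word_error_rate(ref, hyp):
    """Word-level Levenshtein distance divided by the reference length."""
    r, h = normalize(ref), normalize(hyp)
    d = [[0] * (len(h) + 1) for _ in range(len(r) + 1)]
    for i in range(len(r) + 1):
        d[i][0] = i
    for j in range(len(h) + 1):
        d[0][j] = j
    for i in range(1, len(r) + 1):
        for j in range(1, len(h) + 1):
            cost = 0 if r[i - 1] == h[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution / match
    return d[-1][-1] / max(len(r), 1)

def recording_is_valid(prompt, hypothesis, max_wer=0.2):
    """Accept the recording only if it deviates little from the prompt."""
    return word_error_rate(prompt, hypothesis) <= max_wer
```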
Abstract:
We wished to replicate evidence that an experimental paradigm of speech illusions is associated with psychotic experiences. Fifty-four patients with a first episode of psychosis (FEP) and 150 healthy subjects were examined in an experimental paradigm assessing the presence of speech illusions in neutral white noise. Socio-demographic, cognitive function and family history data were collected. The Positive and Negative Syndrome Scale (PANSS) was administered in the patient group, and the Structured Interview for Schizotypy-Revised (SIS-R) and the Community Assessment of Psychic Experiences (CAPE) in the control group. Patients had a much higher rate of speech illusions (33.3% versus 8.7%, adjusted OR: 5.1, 95% CI: 2.3-11.5), which was only partly explained by differences in IQ (adjusted OR: 3.4, 95% CI: 1.4-8.3). Differences were particularly marked for signals in random noise that were perceived as affectively salient (adjusted OR: 9.7, 95% CI: 1.8-53.9). Speech illusions tended to be associated with positive symptoms in patients (adjusted OR: 3.3, 95% CI: 0.9-11.6), particularly affectively salient illusions (adjusted OR: 8.3, 95% CI: 0.7-100.3). In controls, speech illusions were not associated with positive schizotypy (adjusted OR: 1.1, 95% CI: 0.3-3.4) or self-reported psychotic experiences (adjusted OR: 1.4, 95% CI: 0.4-4.6). Experimental paradigms indexing the tendency to detect affectively salient signals in noise may be used to identify liability to psychosis.
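As a rough check of the headline figure, the crude odds ratio can be reconstructed from the reported proportions: 33.3% of 54 patients is 18 with speech illusions, and 8.7% of 150 controls is about 13. The sketch below computes the unadjusted OR and a Wald 95% confidence interval; the paper reports covariate-adjusted estimates, so this is only an approximation.

```python
# Back-of-the-envelope crude odds ratio from the reported proportions
# (the paper reports covariate-adjusted ORs, so this is only an approximation).
import math

patients_with, patients_without = 18, 36      # 33.3% of 54 patients
controls_with, controls_without = 13, 137     # ~8.7% of 150 controls

odds_ratio = (patients_with * controls_without) / (patients_without * controls_with)

# Wald 95% confidence interval on the log-odds scale.
se_log_or = math.sqrt(1 / patients_with + 1 / patients_without
                      + 1 / controls_with + 1 / controls_without)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

print(f"crude OR = {odds_ratio:.1f}, 95% CI {lo:.1f}-{hi:.1f}")
# Prints roughly "crude OR = 5.3, 95% CI 2.4-11.8", close to the adjusted 5.1 (2.3-11.5).
```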