974 results for Language processing
Abstract:
Background. Among Hispanics, the HPV vaccine has the potential to eliminate disparities in cervical cancer incidence and mortality, but only if optimal vaccination rates are achieved. Media can be an important information source for increasing HPV knowledge and awareness of the vaccine. Very little is known about how media use among Hispanics affects their HPV knowledge and vaccine awareness, and even less about differences in media use and information processing between English- and Spanish-speaking Hispanics.
Aims. Examine the relationships between three health communication variables (media exposure, HPV-specific information scanning and seeking) and three HPV outcomes (knowledge, vaccine awareness and initiation) among English- and Spanish-speaking Hispanics.
Methods. Cross-sectional data from a survey administered to Hispanic mothers in Dallas, Texas were used for univariate and multivariate logistic regression analyses. The analytic sample included 288 mothers of females aged 8-22 recruited from clinics and community events. Dependent variables of interest were HPV knowledge, HPV vaccine awareness and initiation; independent variables were media exposure and HPV-specific information scanning and seeking. Language was tested as an effect modifier of the relationship between the health communication variables and the HPV outcomes.
Results. English-speaking mothers reported more media exposure and HPV-specific information scanning and seeking than Spanish speakers. Scanning for HPV information was associated with greater HPV knowledge (OR = 4.26, 95% CI = 2.41-7.51), vaccine awareness (OR = 10.01, 95% CI = 5.43-18.47) and vaccine initiation (OR = 2.54, 95% CI = 1.09-5.91). Seeking HPV-specific information was associated with greater knowledge (OR = 2.27, 95% CI = 1.23-4.16), awareness (OR = 6.60, 95% CI = 2.74-15.91) and initiation (OR = 4.93, 95% CI = 2.64-9.20). Language moderated the effect of information scanning and seeking on vaccine awareness.
Discussion. Differences in information scanning and seeking behaviors among Hispanic subgroups have the potential to lead to disparities in vaccine awareness.
Conclusion. Findings from this study underscore health communication differences among Hispanics and emphasize the need to target Spanish-language media as well as English-language media aimed at Hispanics to improve knowledge and awareness.
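Odds ratios like those reported above are obtained by exponentiating logistic regression coefficients. A minimal sketch in Python; the coefficient and standard error below are illustrative values chosen to roughly reproduce the scanning-knowledge estimate, not the study's actual data:

```python
import math

def odds_ratio_ci(beta, se, z=1.96):
    """Convert a logistic regression coefficient and its standard
    error into an odds ratio with a 95% confidence interval."""
    or_ = math.exp(beta)
    lower = math.exp(beta - z * se)
    upper = math.exp(beta + z * se)
    return or_, lower, upper

# Hypothetical coefficient for "scanned for HPV information"
beta, se = 1.45, 0.29
or_, lower, upper = odds_ratio_ci(beta, se)
print(f"OR = {or_:.2f}, 95% CI = {lower:.2f} - {upper:.2f}")
```

The confidence limits are symmetric on the log-odds scale, which is why the reported intervals are asymmetric around the odds ratio itself.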
Abstract:
The dHDL language has been defined to improve hardware design productivity. This is achieved through the definition of a better reuse interface (including parameters, attributes and macroports) and the creation of control structures that help the designer in the hardware generation process.
Abstract:
This paper presents a description of our system for the Albayzin 2012 LRE competition. One of the main characteristics of this evaluation was the reduced number of files available for training the system, especially for the empty condition, where no training data set was provided but only a development set. In addition, the whole database was created from online videos, and around one third of the training data was labeled as noisy files. Our primary system was the fusion of three different i-vector-based systems: an acoustic system based on MFCCs, a phonotactic system using trigrams of phone-posteriorgram counts, and another acoustic system based on RPLPs that improved robustness against noise. A contrastive system that included new features based on the glottal source was also presented. Official and post-evaluation results for all the conditions, using both the metrics proposed for the evaluation and the Cavg metric, are presented in the paper.
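Score-level fusion of subsystems like the three above is commonly done as a weighted linear combination of each subsystem's per-language scores. A minimal illustrative sketch; the scores and weights are made up, not the evaluation system's:

```python
import numpy as np

def fuse_scores(score_sets, weights):
    """Linearly fuse per-language scores from several subsystems.
    Each array has one row per trial and one column per language."""
    return sum(w * s for w, s in zip(weights, score_sets))

# Hypothetical scores: 2 trials x 3 target languages, three subsystems
mfcc  = np.array([[1.2, -0.3, 0.1], [-0.5, 0.8, 0.2]])
phono = np.array([[0.9, -0.1, 0.0], [-0.2, 0.6, 0.4]])
rplp  = np.array([[1.0, -0.4, 0.2], [-0.6, 0.7, 0.1]])

fused = fuse_scores([mfcc, phono, rplp], weights=[0.4, 0.3, 0.3])
decisions = fused.argmax(axis=1)  # highest-scoring language per trial
print(decisions)
```

In practice the fusion weights would be trained (e.g. by logistic regression) on a development set rather than fixed by hand.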
Abstract:
This paper presents a methodology for adapting an advanced communication system for deaf people to a new domain. The methodology is a user-centered design approach consisting of four main steps: requirement analysis, parallel corpus generation, technology adaptation to the new domain, and, finally, system evaluation. In this paper, the new domain considered is dialogues at a hotel reception. With this methodology, it was possible to develop the system in a few months, obtaining very good performance: speech recognition and translation rates around 90%, with small processing times.
Abstract:
This paper presents new techniques with relevant improvements over the primary system presented by our group at the Albayzin 2012 LRE competition, where the use of any additional corpora for training or optimizing the models was forbidden. In this work, we present the incorporation of an additional phonotactic subsystem based on phone log-likelihood ratio (PLLR) features extracted from different phonotactic recognizers, which improves the accuracy of the system by 21.4% in terms of Cavg (we also present results for the official metric during the evaluation, Fact). We show how using these features at the phone-state level provides significant improvements when combined with dimensionality reduction techniques, especially PCA. We have also experimented with alternative SDC-like configurations applied to these PLLR features, with additional improvements. Finally, we describe some modifications to the MFCC-based acoustic i-vector system that have also contributed further improvements. The final fused system outperformed the baseline by 27.4% in Cavg.
Abstract:
Over recent years, demand from companies for technologies that enable the monitoring and analysis of large volumes of data in real time has grown. In this regard, Complex Event Processing (CEP) has emerged as a powerful approach, and its use has increased notably in certain sectors, such as the management and automation of business processes, finance, network and application monitoring, and smart sensor networks, the case study on which we focus. CEP is based on an Event Processing Language (EPL), which can be rather difficult for inexperienced users to use. This complexity is a handicap and therefore an obstacle to wider adoption. This final-year degree project (Proyecto Fin de Grado, PFG) aims to solve this problem by bringing CEP technology closer to users through abstraction and modelling techniques. To that end, the project defines a simple, intuitive domain-specific modelling language for inexperienced users, supported by a graphical modelling tool (CEP Modeler) in which CEP queries can be modelled graphically, simply, and in a more accessible way.
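The kind of EPL statement the tool abstracts away can look like the following Esper-style query, shown here inside a small Python sketch; the query, stream name, and fields are hypothetical examples, not taken from the project:

```python
# A hypothetical Esper-style EPL query: average temperature per sensor
# over a 60-second sliding window, reported when it exceeds a threshold.
epl_query = """
select sensorId, avg(temperature) as avgTemp
from TemperatureEvent.win:time(60 sec)
group by sensorId
having avg(temperature) > 40
"""

def leading_keywords(query):
    """List the clause keywords a non-expert user must get right."""
    return [line.split()[0] for line in query.strip().splitlines()]

print(leading_keywords(epl_query))  # ['select', 'from', 'group', 'having']
```

Even this short query mixes window, grouping, and filtering clauses, which illustrates why a graphical modelling layer lowers the entry barrier for new users.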
Abstract:
This article reviews attempts to characterize the mental operations mediated by left inferior prefrontal cortex, especially the anterior and inferior portion of the gyrus, with the functional neuroimaging techniques of positron emission tomography and functional magnetic resonance imaging. Activations in this region occur during semantic, relative to nonsemantic, tasks for the generation of words to semantic cues or the classification of words or pictures into semantic categories. This activation appears in the right prefrontal cortex of people known to be atypically right-hemisphere dominant for language. In this region, activations are associated with meaningful encoding that leads to superior explicit memory for stimuli and deactivations with implicit semantic memory (repetition priming) for words and pictures. New findings are reported showing that patients with global amnesia show deactivations in the same region associated with repetition priming, that activation in this region reflects selection of a response from among numerous relative to few alternatives, and that activations in a portion of this region are associated specifically with semantic relative to phonological processing. It is hypothesized that activations in left inferior prefrontal cortex reflect a domain-specific semantic working memory capacity that is invoked more for semantic than nonsemantic analyses regardless of stimulus modality, more for initial than for repeated semantic analysis of a word or picture, more when a response must be selected from among many than few legitimate alternatives, and that yields superior later explicit memory for experiences.
Abstract:
Cerebral organization during sentence processing in English and in American Sign Language (ASL) was characterized by employing functional magnetic resonance imaging (fMRI) at 4 T. Effects of deafness, age of language acquisition, and bilingualism were assessed by comparing results from (i) normally hearing, monolingual, native speakers of English, (ii) congenitally, genetically deaf, native signers of ASL who learned English late and through the visual modality, and (iii) normally hearing bilinguals who were native signers of ASL and speakers of English. All groups, hearing and deaf, processing their native language, English or ASL, displayed strong and repeated activation within classical language areas of the left hemisphere. Deaf subjects reading English did not display activation in these regions. These results suggest that the early acquisition of a natural language is important in the expression of the strong bias for these areas to mediate language, independently of the form of the language. In addition, native signers, hearing and deaf, displayed extensive activation of homologous areas within the right hemisphere, indicating that the specific processing requirements of the language also in part determine the organization of the language systems of the brain.
Abstract:
This paper surveys some of the fundamental problems in natural language (NL) understanding (syntax, semantics, pragmatics, and discourse) and the current approaches to solving them. Some recent developments in NL processing include increased emphasis on corpus-based rather than example- or intuition-based work, attempts to measure the coverage and effectiveness of NL systems, dealing with discourse and dialogue phenomena, and attempts to use both analytic and stochastic knowledge. Critical areas for the future include grammars that are appropriate to processing large amounts of real language; automatic (or at least semi-automatic) methods for deriving models of syntax, semantics, and pragmatics; self-adapting systems; and integration with speech processing. Of particular importance are techniques that can be tuned to such requirements as full versus partial understanding and spoken language versus text. Portability (the ease with which one can configure an NL system for a particular application) is one of the largest barriers to application of this technology.
Abstract:
Assistive technology involving voice communication is used primarily by people who are deaf, hard of hearing, or who have speech and/or language disabilities. It is also used to a lesser extent by people with visual or motor disabilities. A very wide range of devices has been developed for people with hearing loss. These devices can be categorized not only by the modality of stimulation [i.e., auditory, visual, tactile, or direct electrical stimulation of the auditory nerve (auditory-neural)] but also in terms of the degree of speech processing that is used. At least four such categories can be distinguished: assistive devices (a) that are not designed specifically for speech, (b) that take the average characteristics of speech into account, (c) that process articulatory or phonetic characteristics of speech, and (d) that embody some degree of automatic speech recognition. Assistive devices for people with speech and/or language disabilities typically involve some form of speech synthesis or symbol generation for severe forms of language disability. Speech synthesis is also used in text-to-speech systems for sightless persons. Other applications of assistive technology involving voice communication include voice control of wheelchairs and other devices for people with mobility disabilities.
Abstract:
Three studies investigated the relation between symbolic gestures and words, aiming to discover the neural basis and behavioural features of the lexical-semantic processing and integration of the two communicative signals. The first study aimed to determine whether elaboration of communicative signals (symbolic gestures and words) is always accompanied by their integration with each other and whether, if present, this integration supports the existence of a common control mechanism. Experiment 1 aimed to determine whether and how gesture is integrated with word. Participants performed a lexical decision task within a semantic priming paradigm, pronouncing a target word that was preceded by a meaningful or meaningless prime gesture. When meaningful, the gesture could be either congruent or incongruent with the word meaning. The duration of prime presentation (100, 250, 400 ms) varied randomly. Voice spectra, lip kinematics, and time to response were recorded and analyzed. Formant 1 of the voice spectra and mean velocity of the lip kinematics increased when the prime was meaningful and congruent with the word, as compared to a meaningless gesture. In other words, parameters of voice and movement were magnified by congruence, but this occurred only when prime duration was 250 ms. Time to response to a meaningful gesture was shorter in the congruent than in the incongruent condition. Experiment 2 aimed to determine whether the mechanism of integration of a prime word with a target word is similar to that of a prime gesture with a target word. Formant 1 of the target word increased when the word prime was meaningful and congruent, as compared to a meaningless congruent prime; the increase was, however, present for every prime word duration. In the second study, experiment 3 aimed to determine whether symbolic prime gesture comprehension makes use of motor simulation.
Transcranial Magnetic Stimulation was delivered to the left primary motor cortex 100, 250, or 500 ms after prime gesture presentation. The Motor Evoked Potential of the First Dorsal Interosseus increased when stimulation occurred 100 ms post-stimulus. Thus, the gesture was understood within 100 ms and integrated with the target word within 250 ms. Experiment 4 excluded any hand motor simulation in the comprehension of the prime word. The effect of the prior presentation of a symbolic gesture on congruent target word processing was investigated in study 3. In experiment 5, symbolic gestures were presented as primes, followed by semantically congruent target words or pseudowords. In this case, the lexical-semantic decision was accompanied by motor simulation 100 ms after the onset of the verbal stimuli. Summing up, the same type of integration with a word was present for both prime gesture and prime word. It was probably subsequent to understanding of the signal, which relied on motor simulation for gestures and on direct access to semantics for words. However, gestures and words could be understood at the same motor level through simulation if words were preceded by an adequate gestural context. Results are discussed from the perspective of a continuum between transitive actions and emblems, in parallel with language; the grounded/symbolic content of the different signals evidences a relation between the sensorimotor and linguistic systems, which may interact at different levels.
Abstract:
Cooperative learning has been successfully implemented over the last 60 years in teaching at different educational levels, including higher education, thanks to its solid theoretical foundation, the principles it proposes and its practical applications. The purpose of this article is to offer a proposal for cooperative activities that allow students to work in small groups in a language subject in order to learn not only content but also to put into practice what they learn; that is, they learn by being active. This article discusses how these activities make it possible for students to work with the main principles of cooperative learning: positive interdependence, face-to-face interaction, individual and group accountability, interpersonal and small-group skills, and group processing. Moreover, this research also points out that the proposed activities allow students to acquire some of the social competences required in the labour market, such as leadership, conflict solving and cooperation.
Abstract:
Reading strategies vary across languages according to orthographic depth - the complexity of grapheme-to-phoneme conversion rules - notably at the level of eye movement patterns. We recently demonstrated that a group of early bilinguals, who learned both languages equally before the age of seven, presented a first fixation location (FFL) closer to the beginning of words when reading in German as compared with French. Since German is known to be orthographically more transparent than French, this suggested that different strategies were being engaged depending on the orthographic depth of the language used: opaque languages induce a global reading strategy, while transparent languages force a local/serial strategy. Pseudo-words were processed using a local strategy in both languages, suggesting that the link between word forms and their lexical representations may also play a role in selecting a specific strategy. In order to test whether corresponding effects appear in late bilinguals with low proficiency in their second language (L2), we present a new study in which we recorded eye movements while two groups of late German-French and French-German bilinguals read aloud isolated French and German words and pseudo-words. Since a transparent reading strategy is local and serial, with a high number of fixations per stimulus, and the bilingual participants' L2 proficiency is low, the impact of language opacity should be observed in the L1. We therefore predicted a global reading strategy if the bilinguals' L1 was French (FFL close to the middle of the stimuli, with fewer fixations per stimulus) and a local and serial reading strategy if it was German. The L2 of each group, as well as pseudo-words, should also require a local and serial reading strategy.
Our results confirmed these hypotheses, suggesting that global word processing is only achieved by bilinguals with an opaque L1 when reading in an opaque language; the low level in the L2 gives way to a local and serial reading strategy. These findings stress the fact that reading behavior is influenced not only by the linguistic mode but also by top-down factors, such as readers' proficiency.
Abstract:
Bibliography: p. 19-20.