994 results for Language Functions
Abstract:
Cognitive functions in the child's brain develop in the context of complex adaptive processes, determined by genetic and environmental factors. Little is known about the cerebral representation of cognitive functions during development. In particular, knowledge about the development of right hemispheric (RH) functions is scarce. Considering the dynamics of brain development, localization and lateralization of cognitive functions must be expected to change with age. Twenty healthy subjects (8.6-20.5 years) were examined with fMRI and neuropsychological tests. All participants completed two fMRI tasks known to activate left hemispheric (LH) regions (language tasks) and two tasks known to involve predominantly RH areas (visual search tasks). A laterality index (LI) was computed to determine the asymmetry of activation. Group analysis revealed unilateral activation of the LH language circuitry during language tasks, while visual search tasks induced a more widespread RH activation pattern in frontal, superior temporal, and occipital areas. Laterality of language increased between the ages of 8 and 20 in frontal (r = 0.392, P = 0.049) and temporal (r = 0.387, P = 0.051) areas. The asymmetry of visual search functions increased in frontal (r = -0.525, P = 0.009) and parietal (r = -0.439, P = 0.027) regions. A positive correlation was found between Verbal-IQ and the LI during a language task (r = 0.585, P = 0.028), while visuospatial skills correlated with LIs of visual search (r = -0.621, P = 0.018). To summarize, cognitive development is accompanied by changes in the functional representation of neuronal circuitries, with a strengthening of lateralization not only for LH but also for RH functions. Our data show that age and performance independently account for the increase in laterality with age.
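The abstract reports laterality indices without giving the formula; the usual convention in fMRI studies is LI = (L − R)/(L + R), computed over suprathreshold voxel counts (or summed activation) in homologous left and right regions of interest. A minimal sketch under that assumption; the function name and voxel-count inputs are illustrative, not taken from the study:

```python
def laterality_index(left_voxels, right_voxels):
    """Conventional laterality index: +1 = fully left-lateralized,
    -1 = fully right-lateralized, 0 = symmetric activation."""
    total = left_voxels + right_voxels
    if total == 0:
        raise ValueError("no suprathreshold voxels in either hemisphere")
    return (left_voxels - right_voxels) / total

# Example: 850 suprathreshold voxels in a left frontal ROI, 150 in its
# right homologue -> LI = 0.7, a strongly left-lateralized pattern.
li = laterality_index(850, 150)
```

Under this convention, positive LIs correspond to the left-lateralized language tasks, and negative LIs to right-lateralized visual search, which is consistent with the negative correlation coefficients the abstract reports for the visual search tasks.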
Abstract:
Second Life (SL) is an ideal platform for language learning. It is a so-called Multi-User Virtual Environment, in which users can have a wide variety of learning experiences in life-like settings. Numerous attempts have been made to use SL as a platform for language teaching, and its potential as a means to promote conversational interaction has been reported. However, research so far has largely focused on simply using SL, without further augmentation, for communication between learners or between teachers and learners in a school-like environment. Conversely, not enough attention has been paid to its controllability, which builds on the functions embedded in SL. This study, based on current theories of second language acquisition, especially Task-Based Language Teaching and the Interaction Hypothesis, proposes to design and implement an automatized interactive task space (AITS) in which robotic agents act as interlocutors for learners. This paper presents a design that incorporates these SLA theories into SL, along with an implementation method for constructing AITS that exploits the controllability of SL. It also presents the results of an evaluation experiment conducted on the constructed AITS.
Abstract:
Brain processing of grammatical word class was studied analyzing event-related potential (ERP) brain fields. Normal subjects observed a randomized sequence of single German nouns and verbs on a computer screen, while 20-channel ERP field map series were recorded separately for both word classes. Spatial microstate analysis was applied, based on the observation that series of ERP maps consist of epochs of quasi-stable map landscapes and based on the rationale that different map landscapes must have been generated by different neural generators and thus suggest different brain functions. Space-oriented segmentation of the mean map series identified nine successive, different functional microstates, i.e., steps of brain information processing characterized by quasi-stable map landscapes. In the microstate from 116 to 172 msec, noun-related maps differed significantly from verb-related maps along the left–right axis. The results indicate that different neural populations represent different grammatical word classes in language processing, in agreement with clinical observations. This word class differentiation as revealed by the spatial–temporal organization of neural activity occurred at a time after word input compatible with speed of reading.
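Spatial microstate analysis of the kind described above typically builds on two standard scalar quantities: global field power (GFP), the spatial standard deviation of a map across electrodes, and global map dissimilarity, which compares map landscapes after discarding overall strength. A minimal sketch of these two quantities only, not the study's full space-oriented segmentation pipeline; function names are illustrative:

```python
import numpy as np

def global_field_power(maps):
    """GFP at each time point: spatial standard deviation across electrodes.
    `maps` has shape (time_points, electrodes)."""
    return maps.std(axis=1)

def map_dissimilarity(m1, m2):
    """Global map dissimilarity between two average-referenced, GFP-normalized
    maps: 0 = identical landscape, 2 = polarity-inverted landscape.
    Assumes non-flat maps (nonzero spatial variance)."""
    def norm(m):
        m = m - m.mean()       # average reference
        return m / m.std()     # discard overall field strength
    return np.sqrt(((norm(m1) - norm(m2)) ** 2).mean())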
Abstract:
Schoolbooks convey not only school-relevant knowledge; they also influence the development of stereotypes about different social groups. Particularly during the 1970s and 1980s, many studies analysed schoolbooks and criticised the overall predominance of male persons and of traditional role allocations. Since that time, women’s and men’s occupations and social functions have changed considerably. The present research investigated gender portrayals in schoolbooks for German and mathematics that were recently published in Germany. We examined the proportions of female and male persons in pictures and texts and categorized their activities, occupational and parental roles. Going beyond previous studies, we added two criteria: the use of gender-fair language and the spatial arrangements of persons in pictures. Our results show that schoolbooks for German contained almost balanced depictions of girls and boys, whereas women were less frequently shown than men. In mathematics books, males outnumbered females in general. Across both types of books, female and male persons were engaged in many different activities, not only gender-typed ones; however, male persons were more often described via their profession than females. Use of gender-fair language has found its way into schoolbooks but is not used consistently. Books for German were more gender fair in terms of linguistic forms than books for mathematics. For spatial arrangements, we found no indication of gender bias. The results are discussed with a focus on how schoolbooks can be optimized to contribute to gender equality.
Abstract:
We describe some of the novel aspects and motivations behind the design and implementation of the Ciao multiparadigm programming system. An important aspect of Ciao is that it provides the programmer with a large number of useful features from different programming paradigms and styles, and that the use of each of these features can be turned on and off at will for each program module. Thus, a given module may be using e.g. higher order functions and constraints, while another module may be using objects, predicates, and concurrency. Furthermore, the language is designed to be extensible in a simple and modular way. Another important aspect of Ciao is its programming environment, which provides a powerful preprocessor (with an associated assertion language) capable of statically finding non-trivial bugs, verifying that programs comply with specifications, and performing many types of program optimizations. Such optimizations produce code that is highly competitive with other dynamic languages or, when the highest levels of optimization are used, even with that of static languages, all while retaining the interactive development environment of a dynamic language. The environment also includes a powerful auto-documenter. The paper provides an informal overview of the language and program development environment. It aims at illustrating the design philosophy rather than at being exhaustive, which would be impossible in the format of a paper, pointing instead to the existing literature on the system.
Abstract:
The extraordinary rise of new information technologies, the development of the Internet of Things, electronic commerce, social networks, mobile telephony, and cloud computing and storage has brought great benefits to every area of society. Alongside these benefits come new challenges for the protection and privacy of information and its content, such as identity theft and the loss of confidentiality and integrity of documents and electronic communications. This can be aggravated by the lack of a clear boundary between the personal and professional worlds with respect to access to information. In all these fields of personal and professional activity, Cryptography has played a fundamental role, providing the tools needed to guarantee the confidentiality, integrity and availability of both personal data and information. Biometrics, in turn, has proposed and offered various techniques to authenticate individuals through personal characteristics such as fingerprints, the iris, hand geometry, voice, gait, etc. Each of these two sciences, Cryptography and Biometrics, contributes solutions to specific areas of data protection and user authentication, and both would be greatly strengthened if certain of their characteristics were combined toward common goals. It is therefore imperative to intensify research in these areas, combining the mathematical algorithms and primitives of Cryptography with Biometrics to meet the growing demand for new solutions that are more technical, secure and easy to use and that simultaneously strengthen data protection and user identification.
In this combination, the concept of cancelable biometrics has become a cornerstone of the user authentication and identification process, since it provides revocation and cancellation properties for biometric traits. The contribution of this thesis rests on the main concern of Biometrics, namely the secure and efficient authentication of users through their biometric traits, using three distinct approaches: 1. The design of a fuzzy crypto-biometric scheme that implements the principles of cancelable biometrics to identify users while coping with the problems arising from intra- and inter-user variability. 2. The design of a new Similarity Preserving Hash Function (SPHF). Such functions are currently used in the field of digital forensics to search for similarities in the content of distinct but similar files, so as to determine to what extent those files can be considered the same. The function defined in this research, besides improving on the results of the main functions developed to date, aims to extend their use to the comparison of iris templates. 3. The development of a new iris-template comparison mechanism that treats the templates as signals and then compares them using the Walsh-Hadamard transform. The results obtained are excellent in light of the security and privacy requirements mentioned above. Each of the three schemes designed has been implemented in order to run experiments and test its operational effectiveness in scenarios simulating real situations: the fuzzy crypto-biometric scheme and the SPHF were implemented in the Java language, while the process based on the Walsh-Hadamard transform was implemented in Matlab.
The experiments used a database of iris images (CASIA) to simulate a population of system users. In the particular case of the SPHF, experiments were also carried out to test its usefulness in the field of digital forensics by comparing files and images with similar and with different content. For each of the schemes, the false negative and false positive rates were calculated. ABSTRACT The extraordinary increase of new information technologies, the development of the Internet of Things, electronic commerce, social networks, mobile telephony and cloud computing and storage have provided great benefits in all areas of society. Alongside these benefits, there are new challenges for the protection and privacy of information and its content, such as the loss of confidentiality and integrity of electronic documents and communications. This is exacerbated by the lack of a clear boundary between the personal world and the business world, as their differences are becoming narrower. In both worlds, i.e. the personal and the business one, Cryptography has played a key role by providing the necessary tools to ensure the confidentiality, integrity and availability of both personal data and information. On the other hand, Biometrics has proposed and offered different techniques with the aim of assuring the authentication of individuals through their biometric traits, such as fingerprints, iris, hand geometry, voice, gait, etc. Each of these sciences, Cryptography and Biometrics, provides tools for specific problems of data protection and user authentication, which would be greatly strengthened if certain characteristics of both sciences were combined in pursuit of common objectives.
Therefore, it is imperative to intensify research in this area by combining the basic mathematical algorithms and primitives of Cryptography with Biometrics to meet the growing demand for more secure and usable techniques that improve data protection and user authentication. In this combination, the use of cancelable biometrics is a cornerstone of the user authentication and identification process, since it provides revocation and cancellation properties to biometric traits. The contributions of this thesis involve the main aspect of Biometrics, i.e. the secure and efficient authentication of users through their biometric templates, considered from three different approaches. The first is the design of a fuzzy crypto-biometric scheme that uses cancelable biometric principles to take advantage of the fuzziness of biometric templates while dealing with intra- and inter-user variability, without compromising the biometric templates extracted from legitimate users. The second is the design of a new Similarity Preserving Hash Function (SPHF); such functions are currently widely used in the Digital Forensics field to find similarities among different files and calculate their similarity level. The function designed in this research work, besides improving the results of the two main functions currently in use in this field, aims to extend their use to iris template comparison. Finally, the last approach of this thesis is the development of a new mechanism for handling iris templates that considers them as signals and uses the Walsh-Hadamard transform (complemented with three other algorithms) to compare them. The results obtained are excellent, taking into account the security and privacy requirements mentioned previously.
Each of the three schemes designed has been implemented to test its operational efficacy in situations that simulate real scenarios: the fuzzy crypto-biometric scheme and the SPHF were implemented in the Java language, while the process based on the Walsh-Hadamard transform was implemented in Matlab. The experiments were performed using a database of iris templates (CASIA-IrisV2) to simulate a user population. The case of the newly designed SPHF is special: before being applied to the Biometrics field, it was also tested to determine its applicability in the Digital Forensics field by comparing similar and dissimilar files and images. The efficiency and effectiveness ratios regarding user authentication, i.e. the False Non-Match Rate and False Match Rate, were calculated for each scheme with different parameters and cases to analyse their behaviour.
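The thesis implements its Walsh-Hadamard comparison in Matlab, complemented with three other algorithms not reproduced here; the transform at its core can nevertheless be sketched in a few lines. Assumptions in this sketch: templates are real-valued vectors whose length is a power of two, and the distance measure shown (normalized Euclidean distance between orthonormal WHT spectra) is illustrative rather than the thesis's exact matcher:

```python
import numpy as np

def fwht(signal):
    """Fast Walsh-Hadamard transform with orthonormal scaling.
    Input length must be a power of two."""
    a = np.array(signal, dtype=float)
    n = len(a)
    assert n & (n - 1) == 0, "length must be a power of two"
    h = 1
    while h < n:
        for i in range(0, n, h * 2):
            for j in range(i, i + h):
                x, y = a[j], a[j + h]
                a[j], a[j + h] = x + y, x - y   # butterfly step
        h *= 2
    return a / np.sqrt(n)

def template_distance(t1, t2):
    """Normalized Euclidean distance between the WHT spectra of two
    templates; 0 means identical signals."""
    s1, s2 = fwht(t1), fwht(t2)
    return np.linalg.norm(s1 - s2) / np.sqrt(len(s1))
```

Because the orthonormal WHT preserves Euclidean norms (Parseval), comparing spectra is equivalent to comparing the signals themselves; the practical appeal is that energy concentrated in few coefficients permits truncated, noise-tolerant comparisons.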
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
E. L. DeLosh, J. R. Busemeyer, and M. A. McDaniel (1997) found that when learning a positive, linear relationship between a continuous predictor (x) and a continuous criterion (y), trainees tend to underestimate y on items that ask the trainee to extrapolate. In 3 experiments, the authors examined the phenomenon and found that the tendency to underestimate y is reliable only in the so-called lower extrapolation region, that is, for new values of x that lie between zero and the edge of the training region. Existing models of function learning, such as the extrapolation-association model (DeLosh et al., 1997) and the population of linear experts model (M. L. Kalish, S. Lewandowsky, & J. Kruschke, 2004), cannot account for these results. The authors show that with minor changes, both models can predict the correct pattern of results.
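The extrapolation-association model is often summarized as answering a novel probe with a linear rule anchored at nearby trained exemplars. A deliberately simplified sketch of that idea; this is not the published model, which adds associative exemplar learning and response noise, and the function name and data are hypothetical:

```python
def exam_predict(x_new, trained):
    """EXAM-style response: fit a linear rule through the two trained
    exemplars nearest to the probe and read off its value at x_new.
    `trained` is a list of (x, y) pairs with distinct x values."""
    # two nearest training points by |x - x_new|
    (x1, y1), (x2, y2) = sorted(trained, key=lambda p: abs(p[0] - x_new))[:2]
    slope = (y2 - y1) / (x2 - x1)
    return y1 + slope * (x_new - x1)

# Trained on y = 2x over x in [30, 70]; probe x = 10 lies in the
# lower extrapolation region (between zero and the training edge).
train = [(x, 2 * x) for x in range(30, 71, 10)]
prediction = exam_predict(10, train)  # the pure linear rule gives 20
```

A rule like this extrapolates the trained linear function without bias, which is precisely why the unmodified models cannot reproduce the asymmetric underestimation observed in the lower extrapolation region.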
Abstract:
This book introduces key ideas and current critical debates about how English functions within its social and cultural contexts, and provides practical examples and guidance on how to approach further work in these areas. It introduces core topics of language study: language variation, pragmatics, stylistics, critical discourse analysis, language and gender, and language and education. Each chapter includes case studies providing worked analysis of sample texts, suggestions for further project work, and an annotated further reading section.
Abstract:
Illustrates that translation, as a culture-transcending process, is an important way of positioning cultures. The focus is on the role of translation in the formation of cultural identities, and on the effects of globalization on the translation of advertising.
Abstract:
Germany's latest attempt at unification raises again the question of German nationhood and nationality. The present study examines the links between the development of the German language and the political history of Germany, principally in the nineteenth and twentieth centuries. By examining the role of language in the establishment and exercise of political power and in the creation of national and group solidarity in Germany, the study both provides insights into the nature of language as political action and contributes to the socio-cultural history of the German language. The language-theoretical hypothesis on which the study is based sees language as a central factor in political action, and opposes the notion that language is a reflection of underlying political 'realities' which exist independently of language. Language is viewed as language-in-text which performs identifiable functions. Following Leech, five functions are distinguished, two of which (the regulative and the phatic) are regarded as central to political processes. The phatic function is tested against the role of the German language as a creator and symbol of national identity, with particular attention being paid to concepts of the 'purity' of the language. The regulative function (under which a persuasive function is also subsumed) is illustrated using the examples of German fascist discourse and selected cases from German history post-1945. In addition, the interactions are examined between language change and socio-economic change by postulating that language change is both a condition and consequence of socio-economic change, in that socio-economic change both requires and conditions changes in the communicative environment. 
Finally, three politicolinguistic case studies from the eighth and ninth decades of the twentieth century are introduced in order to demonstrate specific ways in which language has been deployed in an attempt to create political realities, thus verifying the initial hypothesis of the centrality of language to the political process.
Abstract:
This thesis is about the study of relationships between experimental dynamical systems. The basic approach is to fit radial basis function maps between time delay embeddings of manifolds. We have shown that under certain conditions these maps are generically diffeomorphisms, and can be analysed to determine whether or not the manifolds in question are diffeomorphically related to each other. If not, a study of the distribution of errors may provide information about the lack of equivalence between the two. The method has applications wherever two or more sensors are used to measure a single system, or where a single sensor can respond on more than one time scale: their respective time series can be tested to determine whether or not they are coupled, and to what degree. One application which we have explored is the determination of a minimum embedding dimension for dynamical system reconstruction. In this special case the diffeomorphism in question is closely related to the predictor for the time series itself. Linear transformations of delay embedded manifolds can also be shown to have nonlinear inverses under the right conditions, and we have used radial basis functions to approximate these inverse maps in a variety of contexts. This method is particularly useful when the linear transformation corresponds to the delay embedding of a finite impulse response filtered time series. One application of fitting an inverse to this linear map is the detection of periodic orbits in chaotic attractors, using suitably tuned filters. This method has also been used to separate signals with known bandwidths from deterministic noise, by tuning a filter to stop the signal and then recovering the chaos with the nonlinear inverse. The method may have applications to the cancellation of noise generated by mechanical or electrical systems. In the course of this research a sophisticated piece of software has been developed. 
The program allows the construction of a hierarchy of delay embeddings from scalar and multi-valued time series. The embedded objects can be analysed graphically, and radial basis function maps can be fitted between them asynchronously, in parallel, on a multi-processor machine. In addition to a graphical user interface, the program can be driven by a batch mode command language, incorporating the concept of parallel and sequential instruction groups and enabling complex sequences of experiments to be performed in parallel in a resource-efficient manner.
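The two building blocks described above, time-delay embedding and radial basis function maps fitted between embeddings, can be sketched compactly. This is a generic illustration, not the thesis software; the Gaussian kernel, ridge regularization term, and all names are assumptions:

```python
import numpy as np

def delay_embed(series, dim, tau=1):
    """Time-delay embedding: row t is [x(t), x(t+tau), ..., x(t+(dim-1)tau)]."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau : i * tau + n] for i in range(dim)])

def fit_rbf_map(X, Y, width=1.0, reg=1e-8):
    """Fit a Gaussian radial basis function map from embedding X to embedding Y,
    with centres at the data points and a small ridge term for stability.
    Returns a callable that maps new points into Y's space."""
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    Phi = np.exp(-d2 / (2 * width ** 2))
    W = np.linalg.solve(Phi + reg * np.eye(len(X)), Y)
    return lambda Z: np.exp(-((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
                            / (2 * width ** 2)) @ W

# Two "sensors" observing the same oscillator: embeddings of sin and cos.
t = np.linspace(0, 10, 60)
X = delay_embed(np.sin(t), dim=3)
Y = delay_embed(np.cos(t), dim=3)
f = fit_rbf_map(X, Y)   # learned map between the two reconstructions
```

If two sensors measure the same underlying system, a map fitted this way should generalize beyond its training points; large, structured residuals are evidence that the two embeddings are not diffeomorphically related, which is the diagnostic the thesis exploits.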