938 results for Linear Attention, Conditional Language Model, Natural Language Generation, FLAX, Rare diseases
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-06
Abstract:
While the occurrence and management of brainstem tumours in children would not traditionally indicate potential direct structural impact on classical language centres, recent theories have implicated some involvement of the brainstem in a functional language and cognitive neural loop between the cerebellum and the cerebral hemispheres. Thus, the present paper explored the impact of treatment for brainstem tumour on the general and high-level language abilities, as well as the phonological awareness skills, of six children treated for brainstem tumour. Group analysis revealed that the treated children demonstrated intact language and phonological awareness abilities in comparison to an age- and gender-matched control group. Individual analysis revealed that only one of the six children showed evidence of language disturbances, with an additional child demonstrating an isolated, mildly reduced score on one phonological awareness task. The language deficits, identified in a child treated with a combination of radiotherapy and chemotherapy, were noted in the high-level language area of lexical generation. The findings highlighted that no overt language disturbances were evident in children treated for brainstem tumour. However, further analysis of higher-level language skills indicated that both general and high-level language abilities require long-term monitoring in this population.
Abstract:
Technological advances have brought about the ever-increasing utilisation of computer-assisted language learning (CALL) media in the learning of a second language (L2). Computer-mediated communication, for example, provides a practical means for extending the learning of spoken language, a challenging process in tonal languages such as Chinese, beyond the realms of the classroom. In order to effectively improve spoken language competency, however, CALL applications must also reproduce the social interaction that lies at the heart of language learning and language use. This study draws on data obtained from the utilisation of CALL in the learning of L2 Chinese to explore whether this medium can be used to extend opportunities for rapport-building in language teaching beyond the face-to-face interaction of the classroom. Rapport's importance lies in its potential to enhance learning, motivate learners, and reduce learner anxiety. To date, CALL's potential in relation to this facet of social interaction remains a neglected area of research. The results of this exploratory study suggest that CALL may help foster learner-teacher rapport and that scaffolding, such as strategically composing rapport-fostering questions in sound-files, is conducive to this outcome. The study provides an instruction model for this application of CALL.
Abstract:
The study reported in this article is a part of a large-scale study investigating syntactic complexity in second language (L2) oral data in commonly taught foreign languages (English, German, Japanese, and Spanish; Ortega, Iwashita, Rabie, & Norris, in preparation). In this article, preliminary findings of the analysis of the Japanese data are reported. Syntactic complexity, which is referred to as syntactic maturity or the use of a range of forms with degrees of sophistication (Ortega, 2003), has long been of interest to researchers in L2 writing. In L2 speaking, researchers have examined syntactic complexity in learner speech in the context of pedagogic intervention (e.g., task type, planning time) and the validation of rating scales. In these studies complexity is examined using measures commonly employed in L2 writing studies. It is assumed that these measures are valid and reliable, but few studies explain what syntactic complexity measures actually examine. The language studied is predominantly English, and little is known about whether the findings of such studies can be applied to languages that are typologically different from English. This study examines how syntactic complexity measures relate to oral proficiency in Japanese as a foreign language. An in-depth analysis of speech samples from 33 learners of Japanese is presented. The results of the analysis are compared across proficiency levels and cross-referenced with 3 other proficiency measures used in the study. As in past studies, the length of T-units and the number of clauses per T-unit were found to be the best predictors of learner proficiency; these measures also had a significant linear relation with the independent oral proficiency measures. These results are discussed in light of the notion of syntactic complexity and the interfaces between second language acquisition and language testing.
Abstract:
One hundred and twelve university students completed 7 tests assessing word-reading accuracy, print exposure, phonological sensitivity, phonological coding and knowledge of English morphology as predictors of spelling accuracy. Together the tests accounted for 71% of the variance in spelling, with phonological skills and morphological knowledge emerging as strong predictors of spelling accuracy for words with both regular and irregular sound-spelling correspondences. The pattern of relationships was consistent with a model in which, as a function of the learning opportunities that are provided by reading experience, phonological skills promote the learning of individual word orthographies and structural relationships among words.
Abstract:
E. L. DeLosh, J. R. Busemeyer, and M. A. McDaniel (1997) found that when learning a positive, linear relationship between a continuous predictor (x) and a continuous criterion (y), trainees tend to underestimate y on items that ask the trainee to extrapolate. In 3 experiments, the authors examined the phenomenon and found that the tendency to underestimate y is reliable only in the so-called lower extrapolation region, that is, new values of x that lie between zero and the edge of the training region. Existing models of function learning, such as the extrapolation-association model (DeLosh et al., 1997) and the population of linear experts model (M. L. Kalish, S. Lewandowsky, & J. Kruschke, 2004), cannot account for these results. The authors show that with minor changes, both models can predict the correct pattern of results.
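To make the task structure concrete, here is a minimal Python sketch; the slope, intercept, and region boundaries are invented for illustration, and the predictor is a generic Nadaraya-Watson kernel regressor, not the EXAM or POLE model. Because a pure similarity-based predictor outputs weighted averages of the training responses, it can never follow the linear function beyond the training range, which is part of the motivation for adding an explicit extrapolation component in models such as EXAM.

```python
import numpy as np

# Toy illustration of the task structure only: slope, intercept, and region
# boundaries are invented, and this is not the EXAM or POLE model.
slope, intercept = 2.0, 30.0

def f(x):
    return slope * x + intercept                # true positive linear function

x_train = np.linspace(30, 70, 20)               # training region
y_train = f(x_train)
x_lower = np.linspace(0, 29, 10)                # lower extrapolation region (0 .. training edge)
x_upper = np.linspace(71, 100, 10)              # upper extrapolation region

def kernel_predict(x_new, x_tr, y_tr, bandwidth=5.0):
    """Nadaraya-Watson prediction: a similarity-weighted average of training
    responses, so it can never leave the range of the training y values."""
    w = np.exp(-0.5 * ((x_new[:, None] - x_tr[None, :]) / bandwidth) ** 2)
    return (w @ y_tr) / w.sum(axis=1)

for name, xs in [("lower", x_lower), ("upper", x_upper)]:
    bias = np.mean(kernel_predict(xs, x_train, y_train) - f(xs))
    print(f"{name} extrapolation region: mean prediction bias = {bias:+.1f}")
```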
Abstract:
Since Z, being a state-based language, describes a system in terms of its state and potential state changes, it is natural to want to describe properties of a specified system also in terms of its state. One means of doing this is to use Linear Temporal Logic (LTL), in which properties about the state of a system over time can be captured. This, however, raises the question of whether these properties are preserved under refinement. Refinement is observation-preserving, and the state of a specified system is regarded as internal and, hence, non-observable. In this paper, we investigate this issue by addressing the following questions. Given that a Z specification A is refined by a Z specification C, and that P is a temporal logic property which holds for A, what temporal logic property Q can we deduce holds for C? Furthermore, under what circumstances does the property Q preserve the intended meaning of the property P? The paper answers these questions for LTL, but the approach could also be applied to other temporal logics over states such as CTL and the μ-calculus.
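To fix the shape of the question, consider a hypothetical refinement pair (invented here, not taken from the paper): an abstract specification A with a bounded counter, a concrete specification C that stores the counted items in a sequence, and a retrieve relation R linking their states. One can then ask whether an abstract property P can be carried across R to a candidate concrete property Q:

```latex
% Illustrative only: an invented refinement pair, not an example from the paper.
\begin{align*}
P &\;\equiv\; \Box\,(A.count \le \mathit{max})      && \text{LTL property over the abstract state}\\
R &\;\equiv\; (A.count = \#\,C.items)               && \text{retrieve relation between the states}\\
Q &\;\equiv\; \Box\,(\#\,C.items \le \mathit{max})  && \text{candidate property over the concrete state}
\end{align*}
```

Whether such a Q actually holds for C, and whether it preserves the intended meaning of P, is exactly the question the paper addresses; the formulas above only illustrate the problem, not its answer.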
Abstract:
Starting with a UML specification that captures the underlying functionality of some given Java-based concurrent system, we describe a systematic way to construct, from this specification, test sequences for validating an implementation of the system. The approach is to first extend the specification to create UML state machines that directly address those aspects of the system we wish to test. To be specific, the extended UML state machines can capture state information about the number of waiting threads or the number of threads blocked on a given object. Using the SAL model checker, we can generate from the extended UML state machines sequences that cover all the various combinations of events and states. These sequences can then be directly transformed into test sequences suitable for input to a testing tool such as ConAn. As an illustration, the methodology is applied to generate sequences for testing a Java implementation of the producer-consumer system.
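The sketch below illustrates the flavour of the sequence-generation step in Python rather than SAL or Java: a hand-written toy state model of a bounded buffer, whose state records the buffer occupancy and the numbers of blocked producer and consumer threads, is explored exhaustively to obtain a shortest event sequence reaching every state. The capacity, the cap on blocked threads, and the omission of wake-up handling are simplifying assumptions of this illustration, not details of the paper's SAL model or ConAn tests.

```python
from collections import deque

# Toy state model of a bounded buffer: state = (items, blocked producers,
# blocked consumers).  Capacity and blocking caps are invented; wake-up
# handling is deliberately omitted to keep the model tiny.
CAPACITY, MAX_BLOCKED = 2, 2

def step(state, event):
    items, bp, bc = state
    if event == "put":
        if items < CAPACITY:
            return (items + 1, bp, bc)
        return (items, min(bp + 1, MAX_BLOCKED), bc)   # producer blocks on a full buffer
    if items > 0:                                      # event == "get"
        return (items - 1, bp, bc)
    return (items, bp, min(bc + 1, MAX_BLOCKED))       # consumer blocks on an empty buffer

# Breadth-first search over the reachable states, recording a shortest event
# sequence for each -- the role the SAL model checker plays in the approach above.
start = (0, 0, 0)
traces = {start: []}
queue = deque([start])
while queue:
    s = queue.popleft()
    for ev in ("put", "get"):
        t = step(s, ev)
        if t not in traces:
            traces[t] = traces[s] + [ev]
            queue.append(t)

for state, trace in sorted(traces.items()):
    print(state, "reached via", trace)
```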
Abstract:
This paper presents a formal but practical approach for defining and using design patterns. Initially we formalize the concepts commonly used in defining design patterns using Object-Z. We also formalize consistency constraints that must be satisfied when a pattern is deployed in a design model. Then we implement the pattern modeling language and its consistency constraints using an existing modeling framework, EMF, and incorporate the implementation as plug-ins to the Eclipse modeling environment. While the language is defined formally in terms of Object-Z, it is implemented in a practical environment. Using the plug-ins, users can develop precise pattern descriptions without knowing the underlying formalism, and can use the tool to check the validity of the pattern descriptions and pattern usage in design models. In this work, formalism brings precision to the pattern language definition, and the implementation brings practicability to our pattern-based modeling approach.
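As a rough sketch of what such a deployment-time consistency check amounts to, the Python fragment below checks two invented constraints for an Observer-style pattern against a toy design model; the role names and constraints are illustrative assumptions, not the paper's Object-Z definitions or its EMF metamodel.

```python
from dataclasses import dataclass, field

# Toy design-model element: a class with an optional pattern role and a list
# of the names of classes it is associated with.
@dataclass
class ClassNode:
    name: str
    role: str | None = None
    associations: list[str] = field(default_factory=list)

def check_observer_deployment(model: list[ClassNode]) -> list[str]:
    """Return human-readable violations of two sample consistency constraints."""
    errors = []
    subjects = [c for c in model if c.role == "Subject"]
    observers = {c.name for c in model if c.role == "Observer"}
    if not subjects:
        errors.append("pattern deployed but no class plays the Subject role")
    for s in subjects:
        if not observers & set(s.associations):
            errors.append(f"Subject '{s.name}' is not associated with any Observer")
    return errors

model = [
    ClassNode("WeatherData", role="Subject", associations=["Display"]),
    ClassNode("Display", role="Observer"),
]
print(check_observer_deployment(model) or "deployment satisfies the sample constraints")
```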
Abstract:
A 77-year-old man with an 8-year history of progressive language deterioration in the face of grossly intact memory was followed. No acute or chronic physiological or psychological event was associated with symptom onset. CT revealed a small left basal ganglia infarct. Mild atrophy, no lacunar infarcts, and mild diffuse periventricular changes registered on MRI. Gait normal but slow. Speech hesitant and sparse. Affect euthymic; neurobehavioral disturbance absent. MMSE 26/30; clock incorrect, concrete. Neuropsychological testing revealed simple attention intact; complex attention, processing speed impaired. Visuospatial copying and delayed recall of copy average with some perseveration. Apraxia absent. Recall mildly impaired. Mild deficits in planning, organization apparent. Patient severely aphasic, dysarthric without paraphasias. Repetition of automatic speech, recitation moderately impaired; prosody intact. Understanding of written language, nonverbal communication abilities, intact. Frontal release signs developed over last 12 months. Repeated cognitive testing revealed mild deterioration across all domains with significant further decrease in expressive, receptive language. Neurobehavioral changes remain absent to date; he remains interested, engaged and independent in basic ADLs. Speech completely deteriorated; gait and movements appreciably slowed. Although signs of frontal/executive dysfunction are present, the lack of behavioral abnormalities, psychiatric disturbance, or personality change argues against focal or progressive frontal impairment or dementia. The relative intactness of memory and comprehension argues against Alzheimer’s disease. The lack of findings on neuroimaging argues against CVA or tumor. It is possible that the small basal ganglia infarct has resulted in a mild lateral prefrontal syndrome. However, the absence of depression as well as the relatively circumscribed language problem suggests otherwise. The progressive, severe nature of the language impairments, with relatively minor impairments in attention and memory, argues for a possible diagnosis of primary progressive aphasia.
Abstract:
Length- and time-scales vary from a planetary scale and millions of years for convection problems to 100 km and 10 years for fault-system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicolson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM) and the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). Key concepts introduced are Data objects, which hold values on the nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we present the basic concepts of escript and show how it is used to implement a simulation code for interacting fault systems. We show some results of large-scale, parallel simulations on an SGI Altix system. Acknowledgements: Project work is supported by the Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, the Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.
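As a small usage sketch of the two key concepts just described, the following assumes the esys-escript and finley Python modules are installed; the module paths and PDE coefficient names follow the pattern of escript's documented Poisson example, but should be checked against the installed version.

```python
# Minimal escript usage sketch (assumes esys-escript with the finley solver).
from esys.escript import *                      # utility functions such as kronecker, whereZero, inf, sup
from esys.escript.linearPDEs import LinearPDE
from esys.finley import Rectangle

# Data objects hold values on the nodes/elements of this finite element mesh.
domain = Rectangle(n0=40, n1=40, l0=1.0, l1=1.0)
x = domain.getX()

# A linearPDE object: -div(A*grad(u)) = Y, with u fixed to r wherever q > 0
# (here, on the x0 = 0 boundary).
pde = LinearPDE(domain)
pde.setValue(A=kronecker(domain), Y=1.0, q=whereZero(x[0]), r=0.0)

u = pde.getSolution()                           # solved by the underlying finley discretization
print("solution range:", inf(u), sup(u))
```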
Abstract:
In the last decade we have seen an exponential growth of functional imaging studies investigating multiple aspects of language processing. These studies have sparked an interest in applying some of the paradigms to various clinically relevant questions, such as the identification of the cortical regions mediating language function in surgical candidates for refractory epilepsy. Here we present data from a group of adult control participants in order to investigate the potential of using frequency specific spectral power changes in MEG activation patterns to establish lateralisation of language function using expressive language tasks. In addition, we report on a paediatric patient whose language function was assessed before and after a left hemisphere amygdalo-hippocampectomy. Our verb generation task produced left hemisphere decreases in beta-band power accompanied by right hemisphere increases in low beta-band power in the majority of the control group, a previously unreported phenomenon. This pattern of spectral power was also found in the patient's post-surgery data, though not her pre-surgery data. Comparison of pre- and post-operative results also provided some evidence of reorganisation in language-related cortex both inter- and intra-hemispherically following surgery. The differences were not limited to changes in localisation of language specific cortex but also changes in the spectral and temporal profile of frontal brain regions during verb generation. While further investigation is required to establish concordance with invasive measures, our data suggest that the methods described may serve as a reliable lateralisation marker for clinical assessment. Furthermore, our findings highlight the potential utility of MEG for the investigation of cortical language functioning in both healthy development and pathology.
Abstract:
Germany's latest attempt at unification raises again the question of German nationhood and nationality. The present study examines the links between the development of the German language and the political history of Germany, principally in the nineteenth and twentieth centuries. By examining the role of language in the establishment and exercise of political power and in the creation of national and group solidarity in Germany, the study both provides insights into the nature of language as political action and contributes to the socio-cultural history of the German language. The language-theoretical hypothesis on which the study is based sees language as a central factor in political action, and opposes the notion that language is a reflection of underlying political 'realities' which exist independently of language. Language is viewed as language-in-text which performs identifiable functions. Following Leech, five functions are distinguished, two of which (the regulative and the phatic) are regarded as central to political processes. The phatic function is tested against the role of the German language as a creator and symbol of national identity, with particular attention being paid to concepts of the 'purity' of the language. The regulative function (under which a persuasive function is also subsumed) is illustrated using the examples of German fascist discourse and selected cases from German history post-1945. In addition, the interactions are examined between language change and socio-economic change by postulating that language change is both a condition and consequence of socio-economic change, in that socio-economic change both requires and conditions changes in the communicative environment. Finally, three politicolinguistic case studies from the eighth and ninth decades of the twentieth century are introduced in order to demonstrate specific ways in which language has been deployed in an attempt to create political realities, thus verifying the initial hypothesis of the centrality of language to the political process.
Abstract:
The study examines the concept of cultural determinism in relation to the business interview, analysing differences in language use between English, French and West German native speakers. The approach is multi- and inter-disciplinary, combining linguistic and business research methodologies. An analytical model based on pragmatic and speech act theory is developed to analyse language use in telephone market research interviews. The model aims to evaluate behavioural differences between English, French and West German respondents in the interview situation. The empirical research is based on a telephone survey of industrial managers, conducted in the three countries in the national language of each country. The telephone interviews are transcribed and compared across languages to discover how managers from each country use different language functions to reply to questions and requests. These differences are assessed in terms of specific cultural parameters: politeness, self-assuredness and fullness of response. Empirical and descriptive studies of national character are compared with the survey results, providing the basis for an evaluation of the relationship between management culture and national culture on a contrastive and comparative cross-cultural basis. The project conclusions focus on the implications of the findings both for business interviewing and for language teaching.
Abstract:
Database systems have a user interface, one of the components of which will normally be a query language based on a particular data model. Typically data models provide primitives to define, manipulate and query databases. Often these primitives are designed to form self-contained query languages. This thesis describes a prototype implementation of a system which allows users to specify queries against the database in a query language whose primitives are not those provided by the actual model on which the database system is based, but those provided by a different data model. The implementation chosen is the Functional Query Language Front End (FQLFE). This uses the Daplex functional data model and query language. Using FQLFE, users can specify the underlying database (based on the relational model) in terms of Daplex. Queries against this specified view can then be made in Daplex. FQLFE transforms these queries into the query language (Quel) of the underlying target database system (Ingres). The automation of part of the Daplex function definition phase is also described and its implementation discussed.
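As a rough Python sketch of the kind of transformation FQLFE performs, the fragment below maps a miniature functional-query description to a Quel-style retrieve statement. The query representation is an invented stand-in, not Daplex syntax, and the output only suggests the shape of the target language rather than FQLFE's actual translation rules.

```python
from dataclasses import dataclass

# Invented, miniature stand-in for a functional (Daplex-style) query: return
# one attribute of the members of an entity set that satisfy a predicate.
@dataclass
class FunctionalQuery:
    entity: str        # e.g. "student"
    select: str        # attribute/function to return, e.g. "name"
    where_attr: str    # attribute tested in the predicate
    where_op: str      # comparison operator
    where_value: str   # literal to compare against

def to_quel(q: FunctionalQuery, var: str = "t") -> str:
    """Render the miniature query as a Quel-style retrieve statement."""
    return (
        f"range of {var} is {q.entity}\n"
        f"retrieve ({var}.{q.select}) "
        f"where {var}.{q.where_attr} {q.where_op} {q.where_value}"
    )

query = FunctionalQuery(entity="student", select="name",
                        where_attr="grade", where_op=">", where_value="70")
print(to_quel(query))
```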