53 results for Linear Attention, Conditional Language Model, Natural Language Generation, FLAX, Rare diseases
in Aston University Research Archive
Abstract:
This article uses Bernstein’s theory of pedagogic discourse to account for both the processes by which curriculum change occurs and the failure of efforts to introduce meaningful attention to language structure into national curricula.
Abstract:
Context: Subclinical hypothyroidism (SCH) and cognitive dysfunction are both common in the elderly and have been linked. It is important to determine whether T4 replacement therapy in SCH confers cognitive benefit. Objective: Our objective was to determine whether administration of T4 replacement to achieve biochemical euthyroidism in subjects with SCH improves cognitive function. Design and Setting: We conducted a double-blind placebo-controlled randomized controlled trial in the context of United Kingdom primary care. Patients: Ninety-four subjects aged 65 yr and over (57 females, 37 males) with SCH were recruited from a population of 147 identified by screening. Intervention: T4 or placebo was given at an initial dosage of one tablet of either placebo or 25 µg T4 per day for 12 months. Thyroid function tests were performed at 8-weekly intervals, with dosage adjusted in one-tablet increments to achieve TSH within the reference range for subjects in the treatment arm. Fifty-two subjects received T4 (31 females, 21 males; mean age 73.5 yr, range 65–94 yr); 42 subjects received placebo (26 females, 16 males; mean age 74.2 yr, range 66–84 yr). Main Outcome Measures: Mini-Mental State Examination, Middlesex Elderly Assessment of Mental State (covering orientation, learning, memory, numeracy, perception, attention, and language skills), and Trail-Making A and B were administered. Results: Eighty-two percent and 84% of the T4 group achieved euthyroidism at 6 and 12 months, respectively.
Cognitive function scores at baseline and 6 and 12 months were as follows: Mini-Mental State Examination T4 group, 28.26, 28.9, and 28.28, and placebo group, 28.17, 27.82, and 28.25 [not significant (NS)]; Middlesex Elderly Assessment of Mental State T4 group, 11.72, 11.67, and 11.78, and placebo group, 11.21, 11.47, and 11.44 (NS); Trail-Making A T4 group, 45.72, 47.65, and 44.52, and placebo group, 50.29, 49.00, and 46.97 (NS); and Trail-Making B T4 group, 110.57, 106.61, and 96.67, and placebo group, 131.46, 119.13, and 108.38 (NS). Linear mixed-model analysis demonstrated no significant changes in any of the measures of cognitive function over time and no between-group difference in cognitive scores at 6 and 12 months. Conclusions: This RCT provides no evidence for treating elderly subjects with SCH with T4 replacement therapy to improve cognitive function.
Abstract:
This thesis presents a study of interlanguage variability in the use of three tense/aspect forms: the simple present, the simple past, and the present perfect. The need for research in this area arises from problems encountered in the classroom: language performance in one task sometimes does not reflect that in another. How and why this occurs is what this thesis aims to discover. A preliminary study explores the viability of using the Labovian variable model to elicit and explain variability; its difficulties highlight problems which help refine the methodology used in the main study. A review of past research points the direction in which this study should go. Armed with a sample of 17 Chinese Singaporean university students, whose first language is Chinese or a dialect of Chinese, the investigation began with the elicitation of the variability to be found in four tasks. Using the attention-to-speech framework, these four tasks are designed to reflect varying degrees of required attention to language form. The results show that there is variability in the use of tense/aspect in all the tasks. However, the framework on which the tasks are based cannot explain the variability pattern. Further analyses of contextual factors, primarily pragmatic ones, point to a complex interplay of factors affecting the variability found in the results.
Abstract:
Visual perception is dependent not only on low-level sensory input but also on high-level cognitive factors such as attention. In this paper, we sought to determine whether attentional processes can be internally monitored for the purpose of enhancing behavioural performance. To do so, we developed a novel paradigm involving an orientation discrimination task in which observers had the freedom to delay target presentation--by any amount required--until they judged their attentional focus to be complete. Our results show that discrimination performance is significantly improved when individuals self-monitor their level of visual attention and respond only when they perceive it to be maximal. Although target delay times varied widely from trial-to-trial (range 860 ms-12.84 s), we show that their distribution is Gaussian when plotted on a reciprocal latency scale. We further show that the neural basis of the delay times for judging attentional status is well explained by a linear rise-to-threshold model. We conclude that attentional mechanisms can be self-monitored for the purpose of enhancing human decision-making processes, and that the neural basis of such processes can be understood in terms of a simple, yet broadly applicable, linear rise-to-threshold model.
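The linear rise-to-threshold account in this abstract can be illustrated with a short simulation. This is a sketch only: the threshold and rate parameters below are invented, not taken from the study. If a readiness signal rises linearly toward a fixed threshold at a rate that varies from trial to trial as a Gaussian, then latency is threshold divided by rate, and the reciprocals of the latencies come out Gaussian, matching the reported reciprocal-latency result.

```python
import random
import statistics

# Sketch of a linear rise-to-threshold model of decision latency.
# A hypothetical "readiness" signal rises linearly from 0 toward a fixed
# threshold; the rise rate varies across trials as a Gaussian. Latency
# T = threshold / rate, so 1/T (reciprocal latency) is Gaussian.
# All parameter values are illustrative.

def simulate_latencies(n_trials=10_000, threshold=1.0,
                       rate_mean=0.4, rate_sd=0.08, seed=1):
    random.seed(seed)
    latencies = []
    while len(latencies) < n_trials:
        rate = random.gauss(rate_mean, rate_sd)
        if rate <= 0:          # discard non-rising trials
            continue
        latencies.append(threshold / rate)
    return latencies

latencies = simulate_latencies()
recip = [1.0 / t for t in latencies]

# The reciprocal latencies recover the Gaussian rate distribution.
print(round(statistics.mean(recip), 3))   # close to rate_mean = 0.4
print(round(statistics.stdev(recip), 3))  # close to rate_sd = 0.08
```

Plotting `latencies` directly gives the characteristic right-skewed latency distribution; plotting `recip` gives an approximately normal curve, which is the signature the paper reports.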
Abstract:
There is a paucity of literature regarding the construction and operation of corporate identity at the stakeholder group level. This article examines corporate identity from the perspective of an individual stakeholder group, namely front-line employees: a stakeholder group that is central to the development of an organization’s corporate identity, as it spans an organization’s boundaries, frequently interacts with both internal and external stakeholders, and influences a firm’s financial performance by building customer loyalty and satisfaction. The article reviews the corporate identity, branding, services, and social identity literatures to address how corporate identity manifests within the front-line employee stakeholder group, identifying what components comprise front-line employee corporate identity and assessing what contribution front-line employees make to constructing a strong and enduring corporate identity for an organization. In reviewing the literature, the article develops propositions that, in conjunction with a conceptual model, constitute the generation of theory recommended for empirical testing.
Abstract:
Dyslexia and attentional difficulty have often been linked, but little is known of the nature of the supposed attentional disorder. The Sustained Attention to Response Task (SART: Robertson, Manly, Andrade, Baddeley and Yiend, 1997) was designed as a measure of sustained attention and requires the withholding of responses to rare (one in nine) targets. To investigate the nature of the attentional disorder in dyslexia, this paper reports two studies which examined the performance of teenagers with dyslexia and their age-matched controls on the SART, the squiggle SART (a modification of the SART using novel and unlabellable stimuli rather than digits) and the go-gap-stop test of response inhibition (GGST). Teenagers with dyslexia made significantly more errors than controls on the original SART, but not the squiggle SART. There were no group differences on the GGST. After controlling for speed of reaction time in a sequential multiple regression predicting SART false alarms, false alarms on the GGST accounted for up to 22% extra variance in the control groups (although less on the squiggle SART) but negligible amounts of variance in the dyslexic groups. We interpret the results as reflecting a stimulus recognition automaticity deficit in dyslexia, rather than a sustained attention deficit. Furthermore, results suggest that response inhibition is an important component of performance on the standard SART when stimuli are recognised automatically.
Abstract:
Behavioural studies on normal and brain-damaged individuals provide convincing evidence that the perception of objects results in the generation of both visual and motor signals in the brain, irrespective of whether or not there is an intention to act upon the object. In this paper we sought to determine the basis of the motor signals generated by visual objects. By examining how the properties of an object affect an observer's reaction time for judging its orientation, we provide evidence to indicate that directed visual attention is responsible for the automatic generation of motor signals associated with the spatial characteristics of perceived objects.
Abstract:
We present the prototype tool CADS* for the computer-aided development of an important class of self-* systems, namely systems whose components can be modelled as Markov chains. Given a Markov chain representation of the IT components to be included into a self-* system, CADS* automates or aids (a) the development of the artifacts necessary to build the self-* system; and (b) their integration into a fully-operational self-* solution. This is achieved through a combination of formal software development techniques including model transformation, model-driven code generation and dynamic software reconfiguration.
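As a minimal, hypothetical illustration of the Markov-chain component representation the abstract refers to (the states, transition probabilities, and availability interpretation below are invented for illustration; CADS* itself is not modelled here), a two-state IT component can be encoded as a discrete-time Markov chain and its long-run availability obtained by power iteration:

```python
# Hypothetical two-state component (0 = operational, 1 = failed) as a
# discrete-time Markov chain. Row i holds P(next state | current state i).
P = [
    [0.99, 0.01],   # operational -> {stays operational, fails}
    [0.60, 0.40],   # failed -> {repaired, still failed}
]

def step(dist, P):
    """One step of the chain: multiply a distribution by the transition matrix."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Power iteration: repeatedly apply P until the distribution stops changing.
dist = [1.0, 0.0]                    # start in the operational state
for _ in range(1000):
    nxt = step(dist, P)
    if max(abs(a - b) for a, b in zip(nxt, dist)) < 1e-12:
        break
    dist = nxt

# Steady-state availability = long-run probability of the operational state.
print(round(dist[0], 4))             # ≈ 0.9836 (= 0.6 / 0.61)
```

Quantities like this steady-state availability are the kind of property that formal analysis of a Markov-chain component model can expose to a self-* reconfiguration loop.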
Abstract:
This paper investigates vertical economies between generation and distribution of electric power, and horizontal economies between different types of power generation in the U.S. electric utility industry. Our quadratic cost function model includes three generation output measures (hydro, nuclear and fossil fuels), which allows us to analyze the effect that generation mix has on vertical economies. Our results provide (sample mean) estimates of vertical economies of 8.1% and horizontal economies of 5.4%. An extensive sensitivity analysis is used to show how the scope measures vary across alternative model specifications and firm types. © 2012 Blackwell Publishing Ltd and the Editorial Board of The Journal of Industrial Economics.
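The vertical-economies measure reported here is conventionally the proportional cost saving of joint over separate production. The sketch below evaluates that measure on a made-up quadratic cost function; the coefficients and output levels are illustrative only and are not the paper's model or estimates.

```python
# Hypothetical quadratic cost function C(g, d) for generation output g and
# distribution output d. The negative cross term makes joint production
# cheaper than separate production; all coefficients are invented.
def cost(g, d):
    return 10 + 2.0 * g + 1.5 * d + 0.05 * g**2 + 0.04 * d**2 - 0.06 * g * d

g, d = 10.0, 10.0

# Vertical economies: proportional cost saving from joint production,
#   VE = [C(g, 0) + C(0, d) - C(g, d)] / C(g, d)
# A positive VE means producing both outputs in one firm is cheaper than
# splitting generation and distribution across two firms.
ve = (cost(g, 0) + cost(0, d) - cost(g, d)) / cost(g, d)
print(round(ve, 3))
```

With these invented coefficients the duplicated fixed cost and the lost cross-term saving together drive VE well above the paper's 8.1% sample-mean estimate; the point is only the shape of the calculation, not the magnitude.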
Abstract:
This paper extends the smooth transition conditional correlation model by studying for the first time the impact that illiquidity shocks have on stock market return comovement. We show that firms that experience shocks that increase illiquidity are less liquid than firms that experience shocks that decrease illiquidity. Shocks that increase illiquidity have no statistical impact on comovement. However, shocks that reduce illiquidity lead to a fall in comovement, a pattern that becomes stronger as the illiquidity of the firm increases. This discovery is consistent with increased transparency and an improvement in price efficiency. We find that a small number of firms experience a double illiquidity shock. For these firms, at the first shock, a rise in illiquidity reduces comovement while a fall in illiquidity raises comovement. The second shock partly reverses these changes, as a rise in illiquidity is associated with a rise in comovement and a fall in illiquidity is associated with a fall in comovement. These results have important implications for portfolio construction and also for the measurement and evolution of market beta and the cost of capital, as they suggest that investors can achieve higher returns for the same amount of market risk because of the greater diversification benefits that exist. We also find that illiquidity, friction, firm size and the pre-shock correlation are all associated with the magnitude of the correlation change. © 2013 Elsevier B.V.
Abstract:
Natural language understanding aims to specify a computational model that maps sentences to their semantic meaning representations. In this paper, we propose a novel framework to train statistical models without using expensive fully annotated data. In particular, the input of our framework is a set of sentences labeled with abstract semantic annotations. These annotations encode the underlying embedded semantic structural relations without explicit word/semantic-tag alignment. The proposed framework can automatically induce derivation rules that map sentences to their semantic meaning representations. The learning framework is applied to two statistical models: conditional random fields (CRFs) and hidden Markov support vector machines (HM-SVMs). Our experimental results on the DARPA Communicator data show that both CRFs and HM-SVMs outperform the baseline approach, the previously proposed hidden vector state (HVS) model, which is also trained on abstract semantic annotations. In addition, the proposed framework shows superior performance to two other baseline approaches, a hybrid framework combining HVS and HM-SVMs and discriminative training of HVS, with relative error reduction rates of about 25% and 15%, respectively, in F-measure.
Abstract:
The main argument of this paper is that Natural Language Processing (NLP) does, and will continue to, underlie the Semantic Web (SW), including its initial construction from unstructured sources like the World Wide Web (WWW), whether its advocates realise this or not. Chiefly, we argue, such NLP activity is the only way up to a defensible notion of meaning at conceptual levels (in the original SW diagram) based on lower level empirical computations over usage. Our aim is definitely not to claim logic-bad, NLP-good in any simple-minded way, but to argue that the SW will be a fascinating interaction of these two methodologies, again like the WWW (which has been basically a field for statistical NLP research) but with deeper content. Only NLP technologies (and chiefly information extraction) will be able to provide the requisite RDF knowledge stores for the SW from existing unstructured text databases in the WWW, and in the vast quantities needed. There is no alternative at this point, since a wholly or mostly hand-crafted SW is also unthinkable, as is a SW built from scratch and without reference to the WWW. We also assume that, whatever the limitations on current SW representational power we have drawn attention to here, the SW will continue to grow in a distributed manner so as to serve the needs of scientists, even if it is not perfect. The WWW has already shown how an imperfect artefact can become indispensable.
Token codeswitching and language alternation in narrative discourse: a functional-pragmatic approach
Abstract:
This study is concerned with two phenomena of language alternation in biographic narrations in Yiddish and Low German, based on spoken language data recorded between 1988 and 1995. In both phenomena language alternation serves as an additional communicative tool which bilingual speakers can apply to enlarge their set of interactional devices in order to ensure a smoother or more pointed processing of communicative aims. The first phenomenon is a narrative strategy I call Token Codeswitching: in a bilingual narrative culminating in a line of reported speech, a single element of L2 indicates the original language of the reconstructed dialogue – a token for a quote. The second phenomenon concerns directing procedures, carried out by the speaker to guide the hearer's attention, which are frequently realised in L2 and support the hearer's attention at crucial points in the interaction. Both phenomena are analyzed following a model of narrative discourse proposed in the framework of Functional Pragmatics. The model allows the adoption of an integral approach to previous findings in code-switching research.
The effective use of implicit parallelism through the use of an object-oriented programming language
Abstract:
This thesis explores translating well-written sequential programs in a subset of the Eiffel programming language - without syntactic or semantic extensions - into parallelised programs for execution on a distributed architecture. The main focus is on constructing two object-oriented models: a theoretical self-contained model of concurrency which enables a simplified second model for implementing the compiling process. There is a further presentation of principles that, if followed, maximise the potential levels of parallelism. Model of Concurrency. The concurrency model is designed to be a straightforward target for mapping sequential programs onto, thus making them parallel. It aids the compilation process by providing a high level of abstraction, including a useful model of parallel behaviour which enables easy incorporation of message interchange, locking, and synchronization of objects. Further, the model is sufficient such that a compiler can and has been practically built. Model of Compilation. The compilation-model's structure is based upon an object-oriented view of grammar descriptions and capitalises on both a recursive-descent style of processing and abstract syntax trees to perform the parsing. A composite-object view with an attribute grammar style of processing is used to extract sufficient semantic information for the parallelisation (i.e. code-generation) phase. Programming Principles. The set of principles presented are based upon information hiding, sharing and containment of objects and the dividing up of methods on the basis of a command/query division. When followed, the level of potential parallelism within the presented concurrency model is maximised. Further, these principles naturally arise from good programming practice. Summary. In summary this thesis shows that it is possible to compile well-written programs, written in a subset of Eiffel, into parallel programs without any syntactic additions or semantic alterations to Eiffel: i.e. no parallel primitives are added, and the parallel program is modelled to execute with equivalent semantics to the sequential version. If the programming principles are followed, a parallelised program achieves the maximum level of potential parallelisation within the concurrency model.
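The recursive-descent-with-abstract-syntax-tree approach named in the compilation model can be sketched in a few lines. The sketch below is in Python rather than Eiffel, on a toy expression grammar invented for illustration; it shows only the general technique, not the thesis's grammar of Eiffel.

```python
import re

# Toy recursive-descent parser producing an abstract syntax tree for
# expressions like "1+2*3". Grammar (invented for illustration):
#   expr   -> term (('+' | '-') term)*
#   term   -> factor (('*' | '/') factor)*
#   factor -> NUMBER | '(' expr ')'

def tokenize(src):
    return re.findall(r"\d+|[-+*/()]", src)

class Parser:
    def __init__(self, tokens):
        self.tokens, self.pos = tokens, 0

    def peek(self):
        return self.tokens[self.pos] if self.pos < len(self.tokens) else None

    def eat(self):
        tok = self.tokens[self.pos]
        self.pos += 1
        return tok

    def expr(self):                      # one method per grammar rule
        node = self.term()
        while self.peek() in ("+", "-"):
            node = (self.eat(), node, self.term())   # AST node: (op, left, right)
        return node

    def term(self):
        node = self.factor()
        while self.peek() in ("*", "/"):
            node = (self.eat(), node, self.factor())
        return node

    def factor(self):
        if self.peek() == "(":
            self.eat()                   # consume '('
            node = self.expr()
            self.eat()                   # consume ')'
            return node
        return int(self.eat())           # NUMBER leaf

ast = Parser(tokenize("1+2*3")).expr()
print(ast)   # ('+', 1, ('*', 2, 3))
```

The one-method-per-rule structure mirrors the grammar directly, and the tree that falls out is exactly the kind of structure a later semantic or code-generation phase (here, parallelisation) can walk.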
Abstract:
An implementation of a Lexical Functional Grammar (LFG) natural language front-end to a database is presented, and its capabilities demonstrated by reference to a set of queries used in the Chat-80 system. The potential of LFG for such applications is explored. Other grammars previously used for this purpose are briefly reviewed and contrasted with LFG. The basic LFG formalism is fully described, both as to its syntax and semantics, and the deficiencies of the latter for database access applications are shown. Other current LFG implementations are reviewed and contrasted with the LFG implementation developed here specifically for database access. The implementation described here allows a natural language interface to a specific Prolog database to be produced from a set of grammar rules and lexical specifications in an LFG-like notation. In addition, the interface system uses a simple database description to compile metadata about the database for later use in planning the execution of queries. Extensions to LFG's semantic component are shown to be necessary to produce a satisfactory functional analysis and semantic output for querying a database. A diverse set of natural language constructs is analysed using LFG, and the derivation of Prolog queries from the F-structure output of LFG is illustrated. The functional description produced by LFG is proposed as sufficient for resolving many problems of quantification and attachment.