32 results for Statistical language models


Relevance:

30.00%

Publisher:

Abstract:

In relation to motor control, the basal ganglia have been implicated in both the scaling and focusing of movement. Hypokinetic and hyperkinetic movement disorders manifest as a consequence of overshooting and undershooting GPi (globus pallidus internus) activity thresholds, respectively. Recently, models of motor control have been borrowed to explain cognitive processes relating to the overshooting and undershooting of GPi activity, including attention and executive function. Linguistic correlates, however, have yet to be extrapolated in sufficient detail. The aims of the present investigation were to: (1) characterise cognitive-linguistic processes within hypokinetic and hyperkinetic neural systems, as defined by motor disturbances; and (2) investigate the impact of surgically induced GPi lesions upon language abilities. Two Parkinsonian cases with opposing motor symptoms (akinetic versus dystonic/dyskinetic) served as experimental subjects in this research. Assessments were conducted prior to, as well as 3 and 12 months following, bilateral posteroventral pallidotomy (PVP). Across subjects, reliable changes in performance (i.e. both improvements and decrements) were typically restricted to tasks demanding complex linguistic operations. Hyperkinetic motor symptoms were associated with an initial overall improvement in complex language function following bilateral PVP, which diminished over time, suggesting a decrescendo of surgical benefit. In contrast, hypokinetic symptoms were associated with a more stable longitudinal linguistic profile, albeit one defined by a higher proportion of reliable declines than improvements in postoperative assessment scores. These findings endorse the integration of the GPi within cognitive mechanisms involved in the arbitration of complex language functions. In relation to models of motor control, 'focusing' was postulated to represent the neural processes underpinning lexical-semantic manipulation, and 'scaling' the allocation of cognitive resources during the mediation of high-level linguistic tasks. (c) 2005 Elsevier Ltd. All rights reserved.

Relevance:

30.00%

Publisher:

Abstract:

Cognitive functioning has been described as largely impervious to chronic STN-DBS (subthalamic nucleus deep brain stimulation) administered over 12-month periods. In relation to the domain of language, however, the effects of STN-DBS have yet to be thoroughly delineated. In neuropsychological research within this population, verbal fluency tasks have been applied almost exclusively as the index of linguistic proficiency, and comprehensive investigations of the impact of STN-DBS on language function have never been undertaken. The primary focus of this research was the more precise elucidation of the role of the STN in the mediation of language processes, by way of assessments that probe language comprehension and production mechanisms. Longitudinal analysis also afforded consideration of the way in which cognitive-linguistic circuits respond to STN-DBS over time. In both subjects, bilateral STN-DBS primarily effected clinically reliable fluctuations (i.e., both improvements and declines) in performance on tasks demanding cognitive-linguistic flexibility in the formulation and comprehension of complex language. Of particular note, both subjects demonstrated a cumulative increase in the proportion of reliable post-operative improvements achieved over time. The findings of this research lend support to models of subcortical participation in language that endorse a role for the STN, and suggest that bilateral STN-DBS may enhance the proficiency of basal ganglia-thalamocortical linguistic circuits over time.

Relevance:

30.00%

Publisher:

Abstract:

The purpose of this work was to model lung cancer mortality as a function of past exposure to tobacco and to forecast age-sex-specific lung cancer mortality rates. A 3-factor age-period-cohort (APC) model, in which the period variable is replaced by the product of average tar content and adult tobacco consumption per capita, was estimated for the US, UK, Canada and Australia by the maximum likelihood method. Age- and sex-specific tobacco consumption was estimated from historical data on smoking prevalence and total tobacco consumption. Lung cancer mortality was derived from vital registration records. Future tobacco consumption, tar content and the cohort parameter were projected by autoregressive moving average (ARIMA) estimation. The optimal exposure variable was found to be the product of average tar content and adult cigarette consumption per capita, lagged 25-30 years, for both males and females in all 4 countries. The coefficient of the product of average tar content and tobacco consumption per capita differs by age and sex. In all models, there was a statistically significant difference in the coefficient of the period variable by sex. In all countries, male age-standardized lung cancer mortality rates peaked in the 1980s and declined thereafter. Female mortality rates are projected to peak in the first decade of this century. The multiplicative models of age, tobacco exposure and cohort fit the observed data between 1950 and 1999 reasonably well, and the time-series models yield plausible past trends of the relevant variables. Despite a significant reduction in tobacco consumption and the average tar content of cigarettes sold over the past few decades, the effect on lung cancer mortality is delayed by the time lag between exposure and established disease. As a result, the burden of lung cancer among females is only just reaching, or soon will reach, its peak, but has been declining for 1 to 2 decades in men. Future sex differences in lung cancer mortality are likely to be greater in North America than in Australia and the UK due to differences in exposure patterns between the sexes. (c) 2005 Wiley-Liss, Inc.
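
As an orientation aid, the sketch below writes out one plausible form of such a model, with the usual APC period term replaced by the lagged exposure product. The notation (mu, alpha, beta, gamma, L) is ours, not the paper's, so treat it as an illustrative reconstruction rather than the fitted specification.

```latex
% Illustrative reconstruction of the 3-factor model described above;
% all symbols are our own notation, not the paper's.
\[
  \log \mu_{a,s}(t) \;=\; \alpha_{a,s}
    \;+\; \beta_{a,s}\,\log\!\bigl(\mathrm{tar}_{t-L} \times C_{t-L}\bigr)
    \;+\; \gamma_{c(a,t)},
  \qquad L \approx 25\text{--}30 \text{ years},
\]
```

Here mu_{a,s}(t) denotes the lung cancer mortality rate at age a for sex s in year t, tar the average tar content, C adult cigarette consumption per capita, and gamma the effect of birth cohort c(a,t) = t - a; the age- and sex-specific slope beta_{a,s} reflects the reported variation of the exposure coefficient by age and sex.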

Relevance:

30.00%

Publisher:

Abstract:

An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local FDR (false discovery rate) is provided for each gene. An attractive feature of the mixture model approach is that it provides a framework for the estimation of the prior probability that a gene is not differentially expressed, and this probability can subsequently be used in forming a decision rule. The rule can also be formed to take the false negative rate into account. We apply this approach to a well-known publicly available data set on breast cancer, and discuss our findings with reference to other approaches.
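
As a concrete illustration of the local FDR idea described above, here is a minimal sketch, assuming the gene-wise test statistics have been reduced to z-scores and that a two-component Gaussian mixture is adequate; the simulated data, the "component nearest zero is the null" rule, and the 0.2 cutoff are illustrative assumptions, not the paper's procedure.

```python
# Minimal sketch: estimate the local FDR per gene from a two-component
# Gaussian mixture fitted to simulated gene-wise z-scores.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(0)
# 900 null genes ~ N(0,1) plus 100 differentially expressed genes ~ N(3,1).
z = np.concatenate([rng.normal(0.0, 1.0, 900), rng.normal(3.0, 1.0, 100)])

gm = GaussianMixture(n_components=2, random_state=0).fit(z.reshape(-1, 1))
null = int(np.argmin(np.abs(gm.means_.ravel())))   # component nearest 0 = null

# Posterior probability of the null component, pi0 * f0(z) / f(z),
# is exactly the mixture-model estimate of the local FDR.
local_fdr = gm.predict_proba(z.reshape(-1, 1))[:, null]
print("estimated pi0:", round(float(gm.weights_[null]), 3))
print("genes called at local FDR < 0.2:", int((local_fdr < 0.2).sum()))
```

Note how the mixture weight of the null component directly estimates the prior probability that a gene is not differentially expressed, which is the feature the abstract highlights.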

Relevance:

30.00%

Publisher:

Abstract:

An important and common problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. As this problem concerns the selection of significant genes from a large pool of candidate genes, it needs to be carried out within the framework of multiple hypothesis testing. In this paper, we focus on the use of mixture models to handle the multiplicity issue. With this approach, a measure of the local false discovery rate is provided for each gene, and it can be implemented so that the implied global false discovery rate is bounded as with the Benjamini-Hochberg methodology based on tail areas. The latter procedure is too conservative, unless it is modified according to the prior probability that a gene is not differentially expressed. An attractive feature of the mixture model approach is that it provides a framework for the estimation of this probability and its subsequent use in forming a decision rule. The rule can also be formed to take the false negative rate into account.
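
To illustrate how per-gene local FDR values can be turned into a rule that bounds the implied global FDR, here is a minimal sketch; it rests on the standard result that the average local FDR over a selected set estimates that set's global FDR. Function and variable names, and the alpha value, are ours.

```python
# Minimal sketch: select the largest gene set whose average local FDR
# stays below alpha, bounding the implied global FDR in the spirit of
# Benjamini-Hochberg control described above.
import numpy as np

def select_by_local_fdr(local_fdr: np.ndarray, alpha: float = 0.05) -> np.ndarray:
    """Return indices of the largest gene set whose mean local FDR <= alpha."""
    order = np.argsort(local_fdr)                      # most significant first
    running_mean = np.cumsum(local_fdr[order]) / np.arange(1, local_fdr.size + 1)
    k = int((running_mean <= alpha).sum())             # non-decreasing, so a prefix
    return order[:k]

# Toy usage with made-up local FDR values:
fdrs = np.array([0.01, 0.02, 0.04, 0.10, 0.30, 0.80])
print(select_by_local_fdr(fdrs, alpha=0.05))           # -> [0 1 2 3], mean 0.0425
```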

Relevance:

30.00%

Publisher:

Abstract:

Standard factorial designs sometimes may be inadequate for experiments that aim to estimate a generalized linear model, for example, for describing a binary response in terms of several variables. A method is proposed for finding exact designs for such experiments that uses a criterion allowing for uncertainty in the link function, the linear predictor, or the model parameters, together with a design search. Designs are assessed and compared by simulation of the distribution of efficiencies relative to locally optimal designs over a space of possible models. Exact designs are investigated for two applications, and their advantages over factorial and central composite designs are demonstrated.
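
The flavour of such a criterion can be sketched as follows: a pseudo-Bayesian D-criterion for a one-variable logistic model, averaging the log-determinant of the information matrix over prior parameter draws, is used to compare two candidate 8-run exact designs. This is an illustrative stand-in, not the paper's algorithm; the designs, prior, and model are assumptions.

```python
# Sketch: compare two candidate exact designs for a logistic model under
# parameter uncertainty via an averaged D-criterion.
import numpy as np

rng = np.random.default_rng(1)

def log_det_info(X, beta):
    """log det of the logistic information matrix X' W X, W = diag(p(1-p))."""
    p = 1.0 / (1.0 + np.exp(-X @ beta))
    w = p * (1.0 - p)
    return np.linalg.slogdet(X.T @ (w[:, None] * X))[1]

def pseudo_bayes_D(X, beta_draws):
    """Average D-criterion over draws from the prior on (beta0, beta1)."""
    return float(np.mean([log_det_info(X, b) for b in beta_draws]))

x_factorial = np.repeat([-1.0, 1.0], 4)                     # 2-level factorial
x_spread = np.array([-1.0, -0.75, -0.5, -0.25, 0.25, 0.5, 0.75, 1.0])
draws = rng.normal([0.0, 2.0], 0.5, size=(200, 2))          # prior draws

for name, x in [("factorial", x_factorial), ("spread", x_spread)]:
    X = np.column_stack([np.ones_like(x), x])
    print(name, round(pseudo_bayes_D(X, draws), 3))
```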

Relevance:

30.00%

Publisher:

Abstract:

The Leximancer system is a relatively new method for transforming lexical co-occurrence information from natural language into semantic patterns in an unsupervised manner. It employs two stages of co-occurrence information extraction, semantic and relational, using a different algorithm for each stage. The algorithms used are statistical, but they employ nonlinear dynamics and machine learning. This article is an attempt to validate the output of Leximancer, using a set of evaluation criteria taken from content analysis that are appropriate for knowledge discovery tasks.
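
A toy analogue of the first, lexical stage can be sketched with a sliding-window co-occurrence extractor; the corpus and window size are illustrative, and Leximancer's actual algorithms are considerably more elaborate.

```python
# Toy analogue of lexical co-occurrence extraction, not Leximancer's
# algorithm: count unordered word pairs within a small sliding window.
from collections import Counter

def cooccurrences(text: str, window: int = 3) -> Counter:
    """Count unordered token pairs occurring within `window` words."""
    tokens = text.lower().split()
    pairs = Counter()
    for i in range(len(tokens)):
        for j in range(i + 1, min(i + 1 + window, len(tokens))):
            pairs[tuple(sorted((tokens[i], tokens[j])))] += 1
    return pairs

corpus = "statistical models of language learn semantic patterns from language data"
print(cooccurrences(corpus).most_common(3))
```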

Relevance:

30.00%

Publisher:

Abstract:

Pharmacodynamics (PD) is the study of the biochemical and physiological effects of drugs. This paper considers the construction of optimal designs for dose-ranging trials with multiple periods, where the outcome of the trial (the effect of the drug) is a binary response: the success or failure of a drug to bring about a particular change in the subject after a given amount of time. The carryover effect of each dose from one period to the next is assumed to be proportional to its direct effect. It is shown for a logistic regression model that the efficiency of an optimal parallel (single-period) or crossover (two-period) design is substantially greater than that of a balanced design. The optimal designs are also shown to be robust to misspecification of the parameter values. Finally, the parallel and crossover designs are combined to provide the experimenter with greater flexibility.
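
A minimal sketch of the kind of response model described, assuming a logistic link and a carryover effect proportional (factor lam) to the direct effect; all parameter values and the dose scaling are illustrative, not the paper's.

```python
# Sketch of a two-period binary-response model with proportional carryover:
# logit p = beta0 + beta1*d2 + lam*beta1*d1 for the period-2 response.
import numpy as np

rng = np.random.default_rng(2)

def simulate_period2(d1, d2, beta0=-1.0, beta1=1.5, lam=0.3, n=1000):
    """Simulate period-2 success rate under the proportional-carryover model."""
    eta = beta0 + beta1 * d2 + lam * beta1 * d1
    p = 1.0 / (1.0 + np.exp(-eta))
    return rng.binomial(1, p, size=n).mean()

# Crossover sequences (dose in period 1, dose in period 2), doses in [0, 1]:
for d1, d2 in [(0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]:
    print((d1, d2), "observed period-2 success rate:", simulate_period2(d1, d2))
```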

Relevance:

30.00%

Publisher:

Abstract:

In this paper, we compare a well-known semantic space model, Latent Semantic Analysis (LSA), with another model, Hyperspace Analogue to Language (HAL), which is widely used in several areas, especially automatic query refinement. We conduct this comparative analysis to test our hypothesis that, with respect to the ability to extract lexical information from a corpus of text, LSA is quite similar to HAL. We regard HAL and LSA as black boxes. Through a Pearson's correlation analysis of the outputs of these two black boxes, we conclude that LSA correlates highly with HAL, which justifies the view that LSA and HAL can potentially play a similar role in facilitating automatic query refinement. This paper evaluates LSA in a new application area and contributes an effective way to compare different semantic space models.
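
The comparison methodology can be sketched in miniature: build toy LSA and HAL spaces from the same corpus, compute word-pair cosine similarities in each, and Pearson-correlate the two similarity profiles. The corpus, window size, dimensionality, and the simplified HAL construction (row vectors only, rather than concatenated row and column vectors) are all assumptions for illustration.

```python
# Sketch: correlate word-pair similarities from toy LSA and HAL spaces.
import numpy as np
from scipy.stats import pearsonr

docs = ["the cat sat on the mat", "the dog sat on the log",
        "a cat and a dog played", "language models count words"]
vocab = sorted({w for d in docs for w in d.split()})
idx = {w: i for i, w in enumerate(vocab)}

# LSA: truncated SVD of the term-document count matrix.
td = np.zeros((len(vocab), len(docs)))
for j, d in enumerate(docs):
    for w in d.split():
        td[idx[w], j] += 1
U, s, _ = np.linalg.svd(td, full_matrices=False)
lsa = U[:, :2] * s[:2]                            # 2-d LSA word vectors

# HAL (simplified): word-by-word co-occurrence, distance-weighted window of 3.
hal = np.zeros((len(vocab), len(vocab)))
for d in docs:
    ws = d.split()
    for i, w in enumerate(ws):
        for k, c in enumerate(ws[i + 1:i + 4]):
            hal[idx[w], idx[c]] += 3 - k          # weights 3, 2, 1 by distance

def sims(M):
    """Cosine similarities for all word pairs, flattened in a fixed order."""
    unit = M / (np.linalg.norm(M, axis=1, keepdims=True) + 1e-12)
    return (unit @ unit.T)[np.triu_indices(len(vocab), k=1)]

r, _ = pearsonr(sims(lsa), sims(hal))
print("Pearson correlation between LSA and HAL similarities:", round(r, 3))
```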

Relevance:

30.00%

Publisher:

Abstract:

We discuss how integrity consistency constraints between different UML models can be precisely defined at the language level. To do so, we introduce a formal object-oriented metamodeling approach in which integrity consistency constraints between UML models are defined as invariants of the UML model elements used to define the models at the language level. The constraints are formally specified using Object-Z. We demonstrate how integrity consistency constraints for UML models can be precisely defined at the language level; once completed, the formal description of the consistency constraints serves as a precise reference for checking the consistency of UML models as well as for tool development.
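
The idea of a language-level consistency constraint can be made concrete with a small executable analogue: an invariant over model elements, here relating a class diagram and a sequence diagram. The toy metamodel below is ours; the paper states such invariants formally in Object-Z over the UML metamodel itself.

```python
# Toy analogue of an inter-model consistency invariant, not the paper's
# formalization: every lifeline in a sequence diagram must be typed by a
# class declared in the class diagram.
from dataclasses import dataclass, field

@dataclass
class ClassDiagram:
    classes: set[str] = field(default_factory=set)

@dataclass
class SequenceDiagram:
    lifelines: set[str] = field(default_factory=set)   # classes whose instances interact

def consistent(cd: ClassDiagram, sd: SequenceDiagram) -> bool:
    """Invariant: the sequence diagram references only declared classes."""
    return sd.lifelines <= cd.classes

cd = ClassDiagram(classes={"Order", "Customer"})
sd = SequenceDiagram(lifelines={"Order", "Invoice"})   # "Invoice" is undeclared
print(consistent(cd, sd))                              # False: models inconsistent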

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a formal but practical approach for defining and using design patterns. We first formalize the concepts commonly used in defining design patterns, using Object-Z, and also formalize the consistency constraints that must be satisfied when a pattern is deployed in a design model. We then implement the pattern modeling language and its consistency constraints using an existing modeling framework, EMF, and incorporate the implementation as plug-ins to the Eclipse modeling environment. While the language is defined formally in terms of Object-Z, it is implemented in a practical environment. Using the plug-ins, users can develop precise pattern descriptions without knowing the underlying formalism, and can use the tool to check the validity of pattern descriptions and pattern usage in design models. In this work, the formalism brings precision to the pattern language definition, and its implementation brings practicability to our pattern-based modeling approach.
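
A minimal sketch of what such a pattern-deployment consistency check might look like, using a hypothetical Observer pattern whose roles carry required-operation constraints; the paper's actual language is defined in Object-Z and implemented on EMF/Eclipse, so everything here is an illustrative analogue.

```python
# Toy analogue of a pattern-deployment check: each role lists operations
# that any model element bound to it must provide.
from dataclasses import dataclass

@dataclass
class ModelClass:
    name: str
    operations: set[str]

# Role constraints for a hypothetical Observer pattern definition:
ROLE_CONSTRAINTS = {
    "Subject": {"attach", "detach", "notify"},
    "Observer": {"update"},
}

def check_deployment(binding: dict[str, ModelClass]) -> list[str]:
    """Return violations: roles whose bound class lacks required operations."""
    violations = []
    for role, cls in binding.items():
        missing = ROLE_CONSTRAINTS[role] - cls.operations
        if missing:
            violations.append(f"{role} -> {cls.name}: missing {sorted(missing)}")
    return violations

binding = {"Subject": ModelClass("WeatherData", {"attach", "detach", "notify"}),
           "Observer": ModelClass("Display", {"render"})}   # no update()
print(check_deployment(binding))                            # flags the Observer binding
```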