15 results for Pareto model statistics

at University of Queensland eSpace - Australia


Relevance: 80.00%

Abstract:

Parkinson's disease (PD) is associated with disturbances in sentence processing, particularly for noncanonical sentences. The present study aimed to analyse sentence processing in PD patients and healthy control participants, using a word-by-word self-paced reading task and an auditory comprehension task. Both tasks consisted of subject relative (SR) and object relative (OR) sentences, with comprehension accuracy measured for each sentence type. For the self-paced reading task, reading times (RTs) were also recorded for the non-critical and critical processing regions of each sentence. Analysis of RTs using mixed linear model statistics revealed a delayed sensitivity to the critical processing region of OR sentences in the PD group. In addition, only the PD group demonstrated significantly poorer comprehension of OR sentences compared to SR sentences during an auditory comprehension task. These results may be consistent with slower lexical retrieval in PD, and its influence on the processing of noncanonical sentences. (c) 2005 Elsevier Ltd. All rights reserved.
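
A minimal sketch of the kind of analysis described above, assuming a hypothetical data file and column names: reading times modelled with a linear mixed model that has a random intercept per participant and fixed effects for group, sentence type and their interaction.

    # Hypothetical sketch; file and column names are illustrative, not from the study.
    import pandas as pd
    import statsmodels.formula.api as smf

    # rt_ms: reading time for the critical region of a trial
    # group: "PD" or "control"; sentence: "SR" or "OR"; subject: participant ID
    df = pd.read_csv("reading_times.csv")

    # Random intercept per subject; the group x sentence interaction is the
    # term of interest for a delayed sensitivity to OR sentences.
    model = smf.mixedlm("rt_ms ~ group * sentence", df, groups=df["subject"])
    result = model.fit()
    print(result.summary())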

Relevance: 40.00%

Abstract:

We examine the event statistics obtained from two differing simplified models for earthquake faults. The first model is a reproduction of the Block-Slider model of Carlson et al. (1991), a model often employed in seismicity studies. The second model is an elastodynamic fault model based upon the Lattice Solid Model (LSM) of Mora and Place (1994). We performed simulations in which the fault length was varied in each model and generated synthetic catalogs of event sizes and times. From these catalogs, we constructed interval event size distributions and inter-event time distributions. The larger, localised events in the Block-Slider model displayed the same scaling behaviour as events in the LSM; however, the distribution of inter-event times was markedly different. The analysis of both event size and inter-event time statistics is an effective method for comparative studies of differing simplified models for earthquake faults.
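
A minimal sketch of the catalog analysis described above, using a randomly generated stand-in catalog of event sizes and times rather than the models' output.

    import numpy as np

    rng = np.random.default_rng(0)
    sizes = rng.pareto(a=1.0, size=5000) + 1.0          # stand-in event sizes
    times = np.sort(rng.uniform(0.0, 1e4, size=5000))   # stand-in event times

    # Event size distribution (log-binned, density-normalised)
    bins = np.logspace(np.log10(sizes.min()), np.log10(sizes.max()), 30)
    size_hist, _ = np.histogram(sizes, bins=bins, density=True)

    # Inter-event time distribution
    dt = np.diff(times)
    dt_hist, dt_edges = np.histogram(dt, bins=50, density=True)

    print("mean inter-event time:", dt.mean())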

Relevance: 30.00%

Abstract:

A two-component mixture regression model that allows simultaneously for heterogeneity and dependency among observations is proposed. By specifying random effects explicitly in the linear predictor of the mixture probability and the mixture components, parameter estimation is achieved by maximising the corresponding best linear unbiased prediction type log-likelihood. Approximate residual maximum likelihood estimates are obtained via an EM algorithm in the manner of a generalised linear mixed model (GLMM). The method can be extended to a g-component mixture regression model with the component density from the exponential family, leading to the development of the class of finite mixture GLMMs. For illustration, the method is applied to analyse neonatal length of stay (LOS). It is shown that identification of pertinent factors that influence hospital LOS can provide important information for health care planning and resource allocation. (C) 2002 Elsevier Science B.V. All rights reserved.
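
For illustration, a plain two-component mixture of linear regressions fitted by EM; this sketch omits the random effects and residual maximum likelihood aspects that are central to the paper and shows only the basic E- and M-steps.

    import numpy as np

    def em_mixture_regression(X, y, n_iter=100, seed=1):
        """EM for a two-component mixture of normal linear regressions."""
        n, p = X.shape
        rng = np.random.default_rng(seed)
        beta = rng.normal(size=(2, p))            # coefficients per component
        sigma2 = np.array([y.var(), y.var()])     # residual variances
        pi = np.array([0.5, 0.5])                 # mixing proportions
        for _ in range(n_iter):
            # E-step: responsibilities of each component for each observation
            dens = np.empty((n, 2))
            for k in range(2):
                resid = y - X @ beta[k]
                dens[:, k] = (pi[k] * np.exp(-0.5 * resid**2 / sigma2[k])
                              / np.sqrt(2 * np.pi * sigma2[k]))
            tau = dens / dens.sum(axis=1, keepdims=True)
            # M-step: weighted least squares and variance updates per component
            for k in range(2):
                w = tau[:, k]
                beta[k] = np.linalg.solve(X.T @ (w[:, None] * X), X.T @ (w * y))
                resid = y - X @ beta[k]
                sigma2[k] = (w * resid**2).sum() / w.sum()
            pi = tau.mean(axis=0)
        return beta, sigma2, pi

    # Tiny demonstration on synthetic data
    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])
    true = np.where(rng.random(200) < 0.5,
                    X @ np.array([1.0, 2.0]), X @ np.array([-1.0, 0.5]))
    y = true + rng.normal(scale=0.3, size=200)
    print(em_mixture_regression(X, y))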

Relevance: 30.00%

Abstract:

We consider the problem of assessing the number of clusters in a limited number of tissue samples containing gene expressions for possibly several thousands of genes. It is proposed to use a normal mixture model-based approach to the clustering of the tissue samples. One advantage of this approach is that the question of the number of clusters in the data can be formulated in terms of a test on the smallest number of components in the mixture model compatible with the data. This test can be carried out on the basis of the likelihood ratio test statistic, using resampling to assess its null distribution. The effectiveness of this approach is demonstrated on simulated data and on some microarray datasets, as considered previously in the bioinformatics literature. (C) 2004 Elsevier Inc. All rights reserved.
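
A sketch of a resampling-based likelihood ratio test of g versus g + 1 normal mixture components, in the spirit of the approach above; it relies on scikit-learn's GaussianMixture and simplifies details such as covariance structure and the number of EM restarts.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    def lrt_statistic(X, g):
        """-2 log likelihood ratio for g versus g + 1 components."""
        ll_g = GaussianMixture(g, n_init=5, random_state=0).fit(X).score(X) * len(X)
        ll_g1 = GaussianMixture(g + 1, n_init=5, random_state=0).fit(X).score(X) * len(X)
        return 2.0 * (ll_g1 - ll_g)

    def bootstrap_lrt(X, g, n_boot=99, seed=0):
        """P-value for H0: g components against H1: g + 1 components."""
        observed = lrt_statistic(X, g)
        null_model = GaussianMixture(g, n_init=5, random_state=seed).fit(X)
        exceed = 0
        for _ in range(n_boot):
            Xb, _ = null_model.sample(len(X))     # parametric bootstrap under H0
            if lrt_statistic(Xb, g) >= observed:
                exceed += 1
        return (exceed + 1) / (n_boot + 1)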

Relevance: 30.00%

Abstract:

Mixture models implemented via the expectation-maximization (EM) algorithm are being increasingly used in a wide range of problems in pattern recognition such as image segmentation. However, the EM algorithm requires considerable computational time in its application to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be speeded up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
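
A much-simplified sketch of the general idea, not the paper's multiresolution kd-tree algorithm: points are grouped by a shallow median-split tree, and responsibilities are computed once per node centroid and shared by all points in that node.

    import numpy as np

    def build_nodes(X, depth=6):
        """Recursively median-split X; return (centroid, count) per leaf node."""
        if depth == 0 or len(X) <= 1:
            return [(X.mean(axis=0), len(X))]
        axis = int(np.argmax(X.var(axis=0)))
        median = np.median(X[:, axis])
        left, right = X[X[:, axis] <= median], X[X[:, axis] > median]
        if len(left) == 0 or len(right) == 0:
            return [(X.mean(axis=0), len(X))]
        return build_nodes(left, depth - 1) + build_nodes(right, depth - 1)

    def approximate_e_step(nodes, means, variances, weights):
        """Node-level responsibilities, accumulated into effective counts."""
        counts = np.zeros(len(means))
        d = means.shape[1]
        for centroid, n in nodes:
            dens = (weights
                    * np.exp(-0.5 * np.sum((centroid - means) ** 2, axis=1) / variances)
                    / (2 * np.pi * variances) ** (d / 2))
            tau = dens / dens.sum()
            counts += n * tau
        return counts

    rng = np.random.default_rng(0)
    X = rng.normal(size=(100_000, 3))            # stand-in for voxel feature vectors
    nodes = build_nodes(X, depth=8)
    means = rng.normal(size=(4, 3))
    print(len(nodes), "nodes instead of", len(X), "points per E-step")
    print(approximate_e_step(nodes, means, np.ones(4), np.full(4, 0.25)))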

Relevance: 30.00%

Abstract:

We study the distribution of energy level spacings in two models describing coupled single-mode Bose-Einstein condensates. Both models have a fixed number of degrees of freedom, which is small compared to the number of interaction parameters, and is independent of the dimensionality of the Hilbert space. We find that the distribution follows a universal Poisson form independent of the choice of coupling parameters, which is indicative of the integrability of both models. These results complement those for integrable lattice models where the number of degrees of freedom increases with increasing dimensionality of the Hilbert space. Finally, we also show that for one model the inclusion of an additional interaction which breaks the integrability leads to a non-Poisson distribution.
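
An illustrative sketch of the level-spacing analysis, with uncorrelated random numbers standing in for the model spectra; their nearest-neighbour spacings, normalised to unit mean, follow the Poisson form P(s) = exp(-s).

    import numpy as np

    rng = np.random.default_rng(0)
    levels = np.sort(rng.uniform(0.0, 1.0, size=2000))   # stand-in energy levels

    spacings = np.diff(levels)
    s = spacings / spacings.mean()                        # unit mean spacing

    hist, edges = np.histogram(s, bins=40, density=True)
    centres = 0.5 * (edges[:-1] + edges[1:])
    poisson = np.exp(-centres)                            # P(s) = e^{-s}

    print("max deviation from the Poisson form:", np.abs(hist - poisson).max())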

Relevance: 30.00%

Abstract:

When studying genotype × environment interaction in multi-environment trials, plant breeders and geneticists often consider one of the effects, environments or genotypes, to be fixed and the other to be random. However, there are two main formulations for variance component estimation in the mixed model situation, referred to as the unconstrained-parameters (UP) and constrained-parameters (CP) formulations. These formulations give different estimates of genetic correlation and heritability as well as different tests of significance for the random effects factor. The definition of main effects and interactions and the consequences of such definitions should be clearly understood, and the selected formulation should be consistent for both fixed and random effects. A discussion of the practical outcomes of using the two formulations in the analysis of balanced data from multi-environment trials is presented. It is recommended that the CP formulation be used because of the meaning of its parameters and the corresponding variance components. When managed (fixed) environments are considered, users will have more confidence in prediction for them but will not be overconfident in prediction in the target (random) environments. Genetic gain (predicted response to selection in the target environments from the managed environments) is independent of formulation.
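
A minimal sketch of the practical setting, with hypothetical column names: a mixed model for multi-environment trial data in which environments are treated as fixed and genotypes as random. The UP/CP distinction concerns how the main effects and interaction are parameterised and is not resolved by this sketch.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical file with columns: yield_t, genotype, environment
    df = pd.read_csv("met_trial.csv")

    # Environments as fixed effects, genotypes as the random grouping factor
    model = smf.mixedlm("yield_t ~ environment", df, groups=df["genotype"])
    fit = model.fit(reml=True)
    print(fit.cov_re)    # genotype variance component
    print(fit.scale)     # residual variance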

Relevance: 30.00%

Abstract:

Sorghum is the main dryland summer crop in NE Australia and a number of agricultural businesses would benefit from an ability to forecast production likelihood at regional scale. In this study we sought to develop a simple agro-climatic modelling approach for predicting shire (statistical local area) sorghum yield. Actual shire yield data, available for the period 1983-1997 from the Australian Bureau of Statistics, were used to train the model. Shire yield was related to a water stress index (SI) that was derived from the agro-climatic model. The model involved a simple fallow and crop water balance that was driven by climate data available at recording stations within each shire. Parameters defining the soil water holding capacity, maximum number of sowings (MXNS) in any year, planting rainfall requirement, and critical period for stress during the crop cycle were optimised as part of the model fitting procedure. Cross-validated correlations (CVR) ranged from 0.5 to 0.9 at shire scale. When aggregated to regional and national scales, 78-84% of the annual variation in sorghum yield was explained. The model was used to examine trends in sorghum productivity and the approach to using it in an operational forecasting system was outlined. (c) 2005 Elsevier B.V. All rights reserved.
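
A sketch of the final statistical step only, assuming a hypothetical data file: shire yield related to the water stress index (SI) by simple regression, with leave-one-year-out cross-validation giving a cross-validated correlation per shire.

    import numpy as np
    import pandas as pd

    # Hypothetical file with columns: year, shire, yield_t_ha, si
    df = pd.read_csv("shire_yields.csv")

    def loo_cv_correlation(d):
        """Leave-one-year-out cross-validated correlation for one shire."""
        preds, obs = [], []
        for year in d["year"].unique():
            train, test = d[d["year"] != year], d[d["year"] == year]
            slope, intercept = np.polyfit(train["si"], train["yield_t_ha"], 1)
            preds.extend(slope * test["si"] + intercept)
            obs.extend(test["yield_t_ha"])
        return np.corrcoef(obs, preds)[0, 1]

    cvr = df.groupby("shire").apply(loo_cv_correlation)
    print(cvr.describe())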

Relevance: 30.00%

Abstract:

Electron-multiplying charge coupled devices promise to revolutionize ultrasensitive optical imaging. The authors present a simple methodology allowing reliable measurement of camera characteristics and statistics of single-electron events, compare the measurements to a simple theoretical model, and report camera performance in a truly photon-counting regime that eliminates the excess noise related to fluctuations of the multiplication gain.
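
A sketch of the photon-counting step on a simulated frame: with far fewer than one photon per pixel, thresholding a few readout-noise standard deviations above the bias level lets each above-threshold pixel be counted as one photon, avoiding the excess noise of the analogue multiplication gain. All camera values below are illustrative.

    import numpy as np

    rng = np.random.default_rng(0)
    bias, read_noise, gain = 100.0, 2.0, 300.0      # illustrative camera values

    photons = rng.poisson(0.02, size=(512, 512))    # sparse photon arrivals
    frame = (bias
             + rng.normal(0.0, read_noise, photons.shape)
             + gain * rng.exponential(1.0, photons.shape) * photons)

    threshold = bias + 5.0 * read_noise             # 5-sigma threshold
    counts = (frame > threshold).astype(int)
    print("detected:", counts.sum(), "true:", photons.sum())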

Relevance: 30.00%

Abstract:

Motivation: The clustering of gene profiles across some experimental conditions of interest contributes significantly to the elucidation of unknown gene function, the validation of gene discoveries and the interpretation of biological processes. However, this clustering problem is not straightforward as the profiles of the genes are not all independently distributed and the expression levels may have been obtained from an experimental design involving replicated arrays. Ignoring the dependence between the gene profiles and the structure of the replicated data can result in important sources of variability in the experiments being overlooked in the analysis, with the consequent possibility of misleading inferences being made. We propose a random-effects model that provides a unified approach to the clustering of genes with correlated expression levels measured in a wide variety of experimental situations. Our model is an extension of the normal mixture model to account for the correlations between the gene profiles and to enable covariate information to be incorporated into the clustering process. Hence the model is applicable to longitudinal studies with or without replication, for example, time-course experiments by using time as a covariate, and to cross-sectional experiments by using categorical covariates to represent the different experimental classes.

Results: We show that our random-effects model can be fitted by maximum likelihood via the EM algorithm for which the E (expectation) and M (maximization) steps can be implemented in closed form. Hence our model can be fitted deterministically without the need for time-consuming Monte Carlo approximations. The effectiveness of our model-based procedure for the clustering of correlated gene profiles is demonstrated on three real datasets, representing typical microarray experimental designs, covering time-course, repeated-measurement and cross-sectional data. In these examples, relevant clusters of the genes are obtained, which are supported by existing gene-function annotation. A synthetic dataset is considered too.
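
A heavily simplified sketch: a plain normal mixture fitted to a (hypothetical) expression matrix. The model described above additionally accounts for random effects, replicate correlation and covariates, which this sketch does not attempt.

    import numpy as np
    from sklearn.mixture import GaussianMixture

    # Hypothetical file: rows are genes, columns are conditions/time points
    expr = np.loadtxt("expression_matrix.txt")

    gmm = GaussianMixture(n_components=8, covariance_type="diag",
                          n_init=5, random_state=0).fit(expr)
    labels = gmm.predict(expr)                   # cluster membership per gene
    print(np.bincount(labels))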

Relevance: 30.00%

Abstract:

The low-energy properties of the one-dimensional anyon gas with a delta-function interaction are discussed in the context of its Bethe ansatz solution. It is found that the anyonic statistical parameter and the dynamical coupling constant induce Haldane exclusion statistics interpolating between bosons and fermions. Moreover, the anyonic parameter may trigger statistics beyond Fermi statistics for which the exclusion parameter alpha is greater than one. The Tonks-Girardeau and the weak coupling limits are discussed in detail. The results support the universal role of alpha in the dispersion relations.
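
For orientation only, a sketch of generic Haldane exclusion-statistics state counting for a single species, W = C(d + N - 1, N) with d = G - alpha(N - 1); alpha = 0 recovers Bose counting and alpha = 1 Fermi counting, with fractional alpha interpolating between them. This is generic counting, not the Bethe ansatz calculation of the paper.

    from math import comb

    def haldane_states(G, N, alpha):
        """Number of N-particle states in G orbitals with exclusion parameter alpha."""
        d = G - alpha * (N - 1)
        num = 1.0
        for k in range(N):          # generalised rising product so fractional alpha works
            num *= d + N - 1 - k
        den = 1.0
        for k in range(1, N + 1):
            den *= k
        return num / den

    G, N = 10, 4
    print(haldane_states(G, N, 0.0), comb(G + N - 1, N))   # bosons
    print(haldane_states(G, N, 1.0), comb(G, N))           # fermions
    print(haldane_states(G, N, 0.5))                       # intermediate statistics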

Relevance: 30.00%

Abstract:

The paper investigates a Bayesian hierarchical model for the analysis of categorical longitudinal data from a large social survey of immigrants to Australia. Data for each subject are observed on three separate occasions, or waves, of the survey. One of the features of the data set is that observations for some variables are missing for at least one wave. A model for the employment status of immigrants is developed by introducing, at the first stage of a hierarchical model, a multinomial model for the response and then subsequent terms are introduced to explain wave and subject effects. To estimate the model, we use the Gibbs sampler, which allows missing data for both the response and the explanatory variables to be imputed at each iteration of the algorithm, given some appropriate prior distributions. After accounting for significant covariate effects in the model, results show that the relative probability of remaining unemployed diminished with time following arrival in Australia.
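
A toy sketch of the imputation idea only: a Dirichlet-multinomial Gibbs sampler for a single categorical response with values missing at random, alternating between drawing the category probabilities and imputing the missing responses. The hierarchical multinomial model above, with wave, subject and covariate effects, is far richer than this.

    import numpy as np

    rng = np.random.default_rng(0)
    K = 3                                     # e.g. three employment states
    y = rng.integers(0, K, size=500).astype(float)
    missing = rng.random(500) < 0.2           # 20% of responses missing
    y[missing] = rng.integers(0, K, size=missing.sum())    # initial imputation

    prior = np.ones(K)                        # flat Dirichlet prior
    draws = []
    for _ in range(1000):
        counts = np.bincount(y.astype(int), minlength=K)
        theta = rng.dirichlet(prior + counts)                     # draw probabilities
        y[missing] = rng.choice(K, size=missing.sum(), p=theta)   # re-impute missing
        draws.append(theta)

    print(np.mean(draws, axis=0))             # posterior mean category probabilities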

Relevance: 30.00%

Abstract:

The recurrence interval statistics for regional seismicity follows a universal distribution function, independent of the tectonic setting or average rate of activity (Corral, 2004). The universal function is a modified gamma distribution with power-law scaling of recurrence intervals shorter than the average rate of activity and exponential decay for larger intervals. We employ the method of Corral (2004) to examine the recurrence statistics of a range of cellular automaton earthquake models. The majority of models have an exponential distribution of recurrence intervals, the same as that of a Poisson process. One model, the Olami-Feder-Christensen automaton, has recurrence statistics consistent with regional seismicity for a certain range of the conservation parameter of that model. For conservation parameters in this range, the event size statistics are also consistent with regional seismicity. Models whose dynamics are dominated by characteristic earthquakes do not appear to display universality of recurrence statistics.
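
A sketch of the recurrence-interval analysis applied to a synthetic Poisson catalogue: inter-event times are rescaled by the mean rate and a gamma distribution is fitted, whose shape parameter should come out near 1 (the exponential case).

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    event_times = np.cumsum(rng.exponential(scale=10.0, size=5000))

    intervals = np.diff(event_times)
    rate = len(intervals) / (event_times[-1] - event_times[0])
    tau = intervals * rate                    # dimensionless recurrence intervals

    shape, loc, scale = stats.gamma.fit(tau, floc=0.0)
    print("fitted gamma shape:", shape)       # ~1 for Poisson recurrence statistics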