45 results for Multinomial logit models with random coefficients (RCL)

at University of Queensland eSpace - Australia


Relevance:

100.00%

Publisher:

Abstract:

Quantitatively predicting mass transport rates for chemical mixtures in porous materials is important in applications of materials such as adsorbents, membranes, and catalysts. Because directly assessing mixture transport experimentally is challenging, theoretical models that can predict mixture diffusion coefficients using only single-component information would have many uses. One such model was proposed by Skoulidas, Sholl, and Krishna (Langmuir, 2003, 19, 7977), and applications of this model to a variety of chemical mixtures in nanoporous materials have yielded promising results. In this paper, the accuracy of this model for predicting mixture diffusion coefficients in materials that exhibit a heterogeneous distribution of local binding energies is examined. To examine this issue, single-component and binary mixture diffusion coefficients are computed using kinetic Monte Carlo for a two-dimensional lattice model over a wide range of lattice occupancies and compositions. The approach suggested by Skoulidas, Sholl, and Krishna is found to be accurate in situations where the spatial distribution of binding site energies is relatively homogeneous, but is considerably less accurate for strongly heterogeneous energy distributions.
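
As a rough illustration of the kind of lattice simulation described above, the Python sketch below runs a Metropolis-style hopping simulation for a single tracer on a two-dimensional lattice with randomly drawn site binding energies and estimates a self-diffusion coefficient from the mean squared displacement. All values (lattice size, energy spread, number of hops) are made up for illustration; the study itself uses kinetic Monte Carlo at finite occupancies and for binary mixtures, which this single-particle sketch does not reproduce.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: lattice size, spread of binding energies
# (in k_B*T units) and number of attempted hops.
L = 32
sigma_E = 1.0
n_steps = 200_000

# Heterogeneous binding-energy landscape: one (negative) well depth per site.
E = -np.abs(rng.normal(0.0, sigma_E, size=(L, L)))

# Escape probability from each site: activated hopping, deeper wells escape
# less often; normalised so the shallowest well escapes with probability 1.
p_escape = np.exp(E - E.max())

pos = np.array([0, 0])
unwrapped = pos.astype(float)       # displacement without periodic wrapping
moves = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]])

msd_samples = []
for step in range(1, n_steps + 1):
    i, j = pos
    if rng.random() < p_escape[i, j]:
        trial = moves[rng.integers(4)]       # hop to a random neighbour
        pos = (pos + trial) % L              # periodic boundaries
        unwrapped += trial
    if step % 1000 == 0:
        msd_samples.append((step, float(np.sum(unwrapped ** 2))))

# In two dimensions MSD ~ 4 D t; fit a line through the late-time samples.
steps, msd = np.array(msd_samples[20:]).T
D = np.polyfit(steps, msd, 1)[0] / 4.0
print(f"estimated self-diffusion coefficient: {D:.4f} (lattice units/step)")
```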

Relevance:

100.00%

Publisher:

Abstract:

A two-component survival mixture model is proposed to analyse a set of ischaemic stroke-specific mortality data. The survival experience of stroke patients after index stroke may be described by a subpopulation of patients in the acute condition and another subpopulation of patients in the chronic phase. To adjust for the inherent correlation of observations due to random hospital effects, a mixture model of two survival functions with random effects is formulated. Assuming a Weibull hazard in both components, an EM algorithm is developed for the estimation of fixed effect parameters and variance components. A simulation study is conducted to assess the performance of the two-component survival mixture model estimators. Simulation results confirm the applicability of the proposed model in a small sample setting. Copyright (C) 2004 John Wiley & Sons, Ltd.
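
To make the mixture structure concrete, the sketch below fits a two-component Weibull survival mixture to synthetic right-censored data by direct maximum likelihood. It deliberately omits the paper's random hospital effects and EM formulation; the data, starting values and parameterisation (logit mixing proportion, log shapes and scales) are illustrative assumptions only.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(7)

# Synthetic right-censored survival times from an "acute" (short survival)
# and a "chronic" (long survival) subpopulation.  Hypothetical values only.
n = 1000
acute = rng.random(n) < 0.35
t = np.where(acute,
             rng.weibull(1.5, n) * 2.0,        # acute: scale 2, shape 1.5
             rng.weibull(1.2, n) * 15.0)       # chronic: scale 15, shape 1.2
cens = rng.uniform(0, 30, n)
time = np.minimum(t, cens)
event = (t <= cens).astype(float)

def weibull_logpdf(t, shape, scale):
    z = t / scale
    return np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape

def weibull_logsurv(t, shape, scale):
    return -(t / scale) ** shape

def neg_loglik(theta):
    """Observed-data negative log-likelihood of the two-component mixture."""
    pi = expit(theta[0])
    k1, s1, k2, s2 = np.exp(theta[1:])
    dens = (pi * np.exp(weibull_logpdf(time, k1, s1))
            + (1 - pi) * np.exp(weibull_logpdf(time, k2, s2)))
    surv = (pi * np.exp(weibull_logsurv(time, k1, s1))
            + (1 - pi) * np.exp(weibull_logsurv(time, k2, s2)))
    # small floors guard against underflow during the search
    return -np.sum(event * np.log(dens + 1e-300)
                   + (1 - event) * np.log(surv + 1e-300))

start = np.array([0.0, np.log(1.0), np.log(1.0), np.log(1.0), np.log(10.0)])
fit = minimize(neg_loglik, start, method="Nelder-Mead",
               options={"maxiter": 5000})
pi_hat = expit(fit.x[0])
k1, s1, k2, s2 = np.exp(fit.x[1:])
print(f"acute fraction: {pi_hat:.2f}")
print(f"component 1 (shape, scale): {k1:.2f}, {s1:.2f}")
print(f"component 2 (shape, scale): {k2:.2f}, {s2:.2f}")
```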

Relevance:

100.00%

Publisher:

Abstract:

A mixture model incorporating long-term survivors has been adopted in the field of biostatistics where some individuals may never experience the failure event under study. The surviving fractions may be considered as cured. In most applications, the survival times are assumed to be independent. However, when the survival data are obtained from a multi-centre clinical trial, it is conceived that the environmental conditions and facilities shared within a clinic affect the proportion cured as well as the failure risk for the uncured individuals. This necessitates a long-term survivor mixture model with random effects. In this paper, the long-term survivor mixture model is extended for the analysis of multivariate failure time data using the generalized linear mixed model (GLMM) approach. The proposed model is applied to analyse a numerical data set from a multi-centre clinical trial of carcinoma as an illustration. Some simulation experiments are performed to assess the applicability of the model based on the average biases of the estimates formed. Copyright (C) 2001 John Wiley & Sons, Ltd.
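
The following is a minimal sketch of a long-term survivor (cure) mixture model with a covariate-dependent cure probability, fitted by direct maximum likelihood on synthetic data. The random centre effects and GLMM machinery of the paper are not reproduced; the covariate, true parameter values and link choices are assumptions made purely for illustration.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(8)

# Long-term survivor (cure) mixture: a fraction of subjects never fails.
# Hypothetical single-covariate version with independent observations.
n = 1500
x = rng.integers(0, 2, n)                      # e.g. a treatment indicator
p_cure = expit(-0.5 + 1.0 * x)                 # true cure probability
cured = rng.random(n) < p_cure
t_uncured = rng.weibull(1.4, n) * 5.0          # failure times if not cured
cens = rng.uniform(0, 12, n)
t = np.where(cured, np.inf, t_uncured)         # cured subjects never fail
time = np.minimum(t, cens)
event = (t <= cens).astype(float)

def neg_loglik(theta):
    """Cure-mixture likelihood: density (1-pi) f_u(t) for observed failures,
    population survival pi + (1-pi) S_u(t) for censored subjects."""
    b0, b1, log_shape, log_scale = theta
    pi = expit(b0 + b1 * x)                    # covariate-dependent cure prob
    shape, scale = np.exp(log_shape), np.exp(log_scale)
    z = time / scale
    log_f = np.log(shape / scale) + (shape - 1) * np.log(z) - z ** shape
    surv_u = np.exp(-z ** shape)
    ll = np.where(event == 1,
                  np.log(1 - pi) + log_f,
                  np.log(pi + (1 - pi) * surv_u))
    return -ll.sum()

fit = minimize(neg_loglik, np.zeros(4), method="Nelder-Mead",
               options={"maxiter": 4000})
b0, b1, log_shape, log_scale = fit.x
print("estimated cure probability (x=0, x=1):",
      np.round(expit([b0, b0 + b1]), 2))
print(f"Weibull shape, scale for the uncured: "
      f"{np.exp(log_shape):.2f}, {np.exp(log_scale):.2f}")
```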

Relevance:

100.00%

Publisher:

Abstract:

A two-component mixture regression model that allows simultaneously for heterogeneity and dependency among observations is proposed. By specifying random effects explicitly in the linear predictor of the mixture probability and the mixture components, parameter estimation is achieved by maximising the corresponding best linear unbiased prediction type log-likelihood. Approximate residual maximum likelihood estimates are obtained via an EM algorithm in the manner of a generalised linear mixed model (GLMM). The method can be extended to a g-component mixture regression model with the component density from the exponential family, leading to the development of the class of finite mixture GLMM. For illustration, the method is applied to analyse neonatal length of stay (LOS). It is shown that identification of pertinent factors that influence hospital LOS can provide important information for health care planning and resource allocation. (C) 2002 Elsevier Science B.V. All rights reserved.
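
As a simplified analogue of the finite mixture GLMM idea, the sketch below fits a two-component Poisson mixture regression (fixed effects only, with no random effects or REML adjustment) by a basic EM algorithm, using a weighted Poisson M-step solved numerically. The synthetic "length-of-stay" style data and all coefficient values are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(1)

# Synthetic count data from a short-stay and a long-stay component.
n = 500
x = rng.normal(size=n)
z_true = rng.random(n) < 0.6
mu = np.where(z_true, np.exp(0.5 + 0.3 * x), np.exp(2.0 - 0.2 * x))
y = rng.poisson(mu)
X = np.column_stack([np.ones(n), x])

def neg_weighted_loglik(beta, X, y, w):
    """Weighted Poisson log-likelihood used in the M-step (y! term dropped)."""
    eta = X @ beta
    return -np.sum(w * (y * eta - np.exp(eta)))

# EM for a two-component Poisson mixture regression (fixed effects only).
pi = 0.5
betas = [np.zeros(2), np.array([1.0, 0.0])]
for it in range(100):
    # E-step: posterior probability of membership in component 1.
    lam = [np.exp(X @ b) for b in betas]
    f = [np.exp(-l) * l ** y for l in lam]   # Poisson kernels; y! cancels
    tau = pi * f[0] / (pi * f[0] + (1 - pi) * f[1])
    # M-step: update mixing proportion and component regressions.
    pi = tau.mean()
    betas = [
        minimize(neg_weighted_loglik, betas[0], args=(X, y, tau)).x,
        minimize(neg_weighted_loglik, betas[1], args=(X, y, 1 - tau)).x,
    ]

print("mixing proportion:", round(pi, 3))
print("component 1 coefficients:", betas[0].round(3))
print("component 2 coefficients:", betas[1].round(3))
```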

Relevance:

100.00%

Publisher:

Abstract:

Much of the published work regarding the Isotropic Singularity is performed under the assumption that the matter source for the cosmological model is a barotropic perfect fluid, or even a perfect fluid with a gamma-law equation of state. There are, however, some general properties of cosmological models which admit an Isotropic Singularity, irrespective of the matter source. In particular, we show that the Isotropic Singularity is a point-like singularity and that vacuum space-times cannot admit an Isotropic Singularity. The relationships between the Isotropic Singularity, the energy conditions, and the Hubble parameter are explored. A review of work by the authors, regarding the Isotropic Singularity, is presented.

Relevance:

100.00%

Publisher:

Abstract:

Nine classes of integrable open boundary conditions, further extending the one-dimensional U_q(gl(2|2)) extended Hubbard model, have been constructed previously by means of the boundary Z_2-graded quantum inverse scattering method. The boundary systems are now solved by using the algebraic Bethe ansatz method, and the Bethe ansatz equations are obtained for all nine cases.

Relevance:

100.00%

Publisher:

Abstract:

When linear equality constraints are invariant through time they can be incorporated into estimation by restricted least squares. If, however, the constraints are time-varying, this standard methodology cannot be applied. In this paper we show how to incorporate linear time-varying constraints into the estimation of econometric models. The method involves the augmentation of the observation equation of a state-space model prior to estimation by the Kalman filter. Numerical optimisation routines are used for the estimation. A simple example drawn from demand analysis is used to illustrate the method and its application.
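
A minimal sketch of the augmentation idea, under assumed values: a time-varying-parameter regression is filtered with a standard Kalman filter whose observation vector is augmented at each time point with a near-noise-free pseudo-observation encoding a hypothetical time-varying adding-up constraint on the coefficients. The model dimensions, variances and constraint path are invented for illustration and are not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

# Time-varying-parameter regression: y_t = x_t' beta_t + eps_t, with the
# coefficients following a random walk.  A hypothetical time-varying
# adding-up restriction  beta_{1,t} + beta_{2,t} = c_t  is imposed by
# appending it to the observation equation as a (near) noise-free
# pseudo-observation before running the Kalman filter.
T, k = 200, 2
x = rng.normal(size=(T, k))
c = 1.0 + 0.002 * np.arange(T)          # time-varying constraint target
beta_true = np.column_stack([0.3 + 0.001 * np.arange(T), np.zeros(T)])
beta_true[:, 1] = c - beta_true[:, 0]   # truth satisfies the constraint
y = np.einsum("tk,tk->t", x, beta_true) + 0.1 * rng.normal(size=T)

sigma_eps2, sigma_eta2 = 0.1 ** 2, 0.01 ** 2
a = np.zeros(k)                         # state mean
P = np.eye(k) * 10.0                    # vague initial covariance
Q = np.eye(k) * sigma_eta2
filtered = np.zeros((T, k))

for t in range(T):
    # Prediction step (identity, random-walk transition).
    P = P + Q
    # Augmented observation: the data row plus the constraint row.
    Z = np.vstack([x[t], np.ones(k)])           # 2 x k
    obs = np.array([y[t], c[t]])
    H = np.diag([sigma_eps2, 1e-12])            # ~zero variance on constraint
    # Standard Kalman update with the augmented observation vector.
    F = Z @ P @ Z.T + H
    K = P @ Z.T @ np.linalg.inv(F)
    a = a + K @ (obs - Z @ a)
    P = P - K @ Z @ P
    filtered[t] = a

print("final filtered state:", filtered[-1].round(3))
print("constraint check (sum of coefficients vs c_T):",
      round(filtered[-1].sum(), 3), "vs", round(c[-1], 3))
```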

Relevance:

100.00%

Publisher:

Abstract:

Three kinds of integrable Kondo impurity additions to one-dimensional q-deformed extended Hubbard models are studied by means of the boundary Z_2-graded quantum inverse scattering method. The boundary K matrices depending on the local magnetic moments of the impurities are presented as nontrivial realisations of the reflection equation algebras in an impurity Hilbert space. The models are solved by using the algebraic Bethe ansatz method, and the Bethe ansatz equations are obtained.

Relevance:

100.00%

Publisher:

Abstract:

Questionnaire surveys, while more economical, typically achieve poorer response rates than interview surveys. We used data from a national volunteer cohort of young adult twins, who were scheduled for assessment by questionnaire in 1989 and by interview in 1996-2000, to identify predictors of questionnaire non-response. Out of a total of 8536 twins, 5058 completed the questionnaire survey (59% response rate), and 6255 completed a telephone interview survey conducted a decade later (73% response rate). Multinomial logit models were fitted to the interview data to identify socioeconomic, psychiatric and health behavior correlates of non-response in the earlier questionnaire survey. Male gender, education below University level, and being a dizygotic rather than monozygotic twin, all predicted reduced likelihood of participating in the questionnaire survey. Associations between questionnaire response status and psychiatric history and health behavior variables were modest, with history of alcohol dependence and childhood conduct disorder predicting decreased probability of returning a questionnaire, and history of smoking and heavy drinking more weakly associated with non-response. Body-mass index showed no association with questionnaire non-response. Despite a poor response rate to the self-report questionnaire survey, we found only limited sampling biases for most variables. While not appropriate for studies where socioeconomic variables are critical, it appears that survey by questionnaire, with questionnaire administration by telephone to non-responders, will represent a viable strategy for gene-mapping studies requiring that large numbers of relatives be screened.
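
For readers unfamiliar with the model class, the following sketch fits a multinomial logit to a small synthetic dataset with covariates loosely mirroring those in the abstract (gender, education, zygosity), using the MNLogit class in statsmodels. The data-generating values, category definitions and sample size are hypothetical and do not reproduce the study's data or results.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(3)

# Hypothetical synthetic cohort; the real study used ~8500 twins with
# socioeconomic, psychiatric and zygosity covariates, not reproduced here.
n = 2000
df = pd.DataFrame({
    "male": rng.integers(0, 2, n),
    "university": rng.integers(0, 2, n),
    "monozygotic": rng.integers(0, 2, n),
})

# Simulate a three-category response status (0 = non-responder,
# 1 = partial responder, 2 = full responder) from a true multinomial logit.
eta1 = -0.2 + 0.3 * df.university + 0.2 * df.monozygotic - 0.3 * df.male
eta2 = 0.2 + 0.6 * df.university + 0.4 * df.monozygotic - 0.5 * df.male
expo = np.column_stack([np.ones(n), np.exp(eta1), np.exp(eta2)])
probs = expo / expo.sum(axis=1, keepdims=True)
df["status"] = [rng.choice(3, p=p) for p in probs]

# Fit the multinomial logit: category 0 (non-response) is the baseline.
X = sm.add_constant(df[["male", "university", "monozygotic"]])
model = sm.MNLogit(df["status"], X)
result = model.fit(disp=False)
print(result.summary())
```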

Relevance:

100.00%

Publisher:

Abstract:

In this paper we propose a range of dynamic data envelopment analysis (DEA) models which allow information on costs of adjustment to be incorporated into the DEA framework. We first specify a basic dynamic DEA model predicated on a number of simplifying assumptions. We then outline a number of extensions to this model to accommodate asymmetric adjustment costs, non-static output quantities, non-static input prices, non-static costs of adjustment, technological change, quasi-fixed inputs and investment budget constraints. The new dynamic DEA models provide valuable extra information relative to the standard static DEA models: they identify an optimal path of adjustment for the input quantities, and provide a measure of the potential cost savings that result from recognising the costs of adjusting input quantities towards the optimal point. The new models are illustrated using data relating to a chain of 35 retail department stores in Chile. The empirical results illustrate the wealth of information that can be derived from these models, and clearly show that static models overstate potential cost savings when adjustment costs are non-zero.
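
The sketch below computes only the standard static, input-oriented, constant-returns DEA efficiency scores by linear programming with scipy.optimize.linprog; it is the baseline that the paper's dynamic formulations extend with adjustment costs. The store data are randomly generated placeholders, not the Chilean retail data.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(4)

# Hypothetical data: 10 stores, 2 inputs (e.g. labour, floor space), 1 output
# (e.g. sales).  Values are random placeholders for illustration.
n, m, s = 10, 2, 1
X = rng.uniform(5, 20, size=(n, m))     # inputs, one row per store
Y = rng.uniform(50, 150, size=(n, s))   # outputs

def ccr_efficiency(j0):
    """Input-oriented CCR efficiency of unit j0: min theta such that the
    reference technology can produce Y[j0] using at most theta * X[j0]."""
    # Decision variables: [theta, lambda_1, ..., lambda_n]
    c = np.r_[1.0, np.zeros(n)]
    A_ub, b_ub = [], []
    for i in range(m):                   # sum_j lambda_j x_ij <= theta x_i,j0
        A_ub.append(np.r_[-X[j0, i], X[:, i]])
        b_ub.append(0.0)
    for r in range(s):                   # sum_j lambda_j y_rj >= y_r,j0
        A_ub.append(np.r_[0.0, -Y[:, r]])
        b_ub.append(-Y[j0, r])
    res = linprog(c, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun

scores = [ccr_efficiency(j) for j in range(n)]
for j, theta in enumerate(scores):
    print(f"store {j}: technical efficiency = {theta:.3f}")
```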

Relevance:

100.00%

Publisher:

Abstract:

Motivation: The clustering of gene profiles across some experimental conditions of interest contributes significantly to the elucidation of unknown gene function, the validation of gene discoveries and the interpretation of biological processes. However, this clustering problem is not straightforward as the profiles of the genes are not all independently distributed and the expression levels may have been obtained from an experimental design involving replicated arrays. Ignoring the dependence between the gene profiles and the structure of the replicated data can result in important sources of variability in the experiments being overlooked in the analysis, with the consequent possibility of misleading inferences being made. We propose a random-effects model that provides a unified approach to the clustering of genes with correlated expression levels measured in a wide variety of experimental situations. Our model is an extension of the normal mixture model to account for the correlations between the gene profiles and to enable covariate information to be incorporated into the clustering process. Hence the model is applicable to longitudinal studies with or without replication, for example, time-course experiments by using time as a covariate, and to cross-sectional experiments by using categorical covariates to represent the different experimental classes. Results: We show that our random-effects model can be fitted by maximum likelihood via the EM algorithm for which the E (expectation) and M (maximization) steps can be implemented in closed form. Hence our model can be fitted deterministically without the need for time-consuming Monte Carlo approximations. The effectiveness of our model-based procedure for the clustering of correlated gene profiles is demonstrated on three real datasets, representing typical microarray experimental designs, covering time-course, repeated-measurement and cross-sectional data. In these examples, relevant clusters of the genes are obtained, which are supported by existing gene-function annotation. A synthetic dataset is also considered.
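
As a stripped-down illustration of the EM machinery, the sketch below fits an ordinary two-component normal mixture with spherical covariance to synthetic "gene profile" data; both the E-step and the M-step are in closed form, as in the paper, but the gene-specific random effects and covariate terms of the proposed model are omitted. All data and settings are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "gene profiles": 300 genes measured at 5 conditions, drawn from
# two clusters with added noise (hypothetical data, no replication structure).
n, p, g = 300, 5, 2
true_means = np.array([np.linspace(-1, 1, p), np.linspace(1, -1, p)])
labels = rng.integers(0, g, n)
Y = true_means[labels] + 0.5 * rng.normal(size=(n, p))

# EM for a g-component normal mixture with spherical covariance sigma^2 I;
# both the E-step and the M-step are available in closed form.
pi = np.full(g, 1.0 / g)
mu = Y[rng.choice(n, g, replace=False)]
sigma2 = 1.0
for it in range(200):
    # E-step: posterior cluster probabilities tau_{ik}.
    sq = ((Y[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)   # n x g
    logdens = -0.5 * sq / sigma2 - 0.5 * p * np.log(2 * np.pi * sigma2)
    logw = np.log(pi) + logdens
    logw -= logw.max(axis=1, keepdims=True)                    # stabilise
    tau = np.exp(logw)
    tau /= tau.sum(axis=1, keepdims=True)
    # M-step: closed-form updates for proportions, means and variance.
    nk = tau.sum(axis=0)
    pi = nk / n
    mu = (tau.T @ Y) / nk[:, None]
    sq = ((Y[:, None, :] - mu[None, :, :]) ** 2).sum(axis=2)
    sigma2 = (tau * sq).sum() / (n * p)

print("mixing proportions:", pi.round(3))
print("cluster means:\n", mu.round(2))
```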

Relevance:

100.00%

Publisher:

Abstract:

Standard factorial designs sometimes may be inadequate for experiments that aim to estimate a generalized linear model, for example, for describing a binary response in terms of several variables. A method is proposed for finding exact designs for such experiments that uses a criterion allowing for uncertainty in the link function, the linear predictor, or the model parameters, together with a design search. Designs are assessed and compared by simulation of the distribution of efficiencies relative to locally optimal designs over a space of possible models. Exact designs are investigated for two applications, and their advantages over factorial and central composite designs are demonstrated.
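
A minimal sketch of a design search in this spirit: a locally D-optimal exact design for a two-factor logistic regression is found by a crude point-exchange search over a candidate grid. The assumed parameter values, grid, run size and search budget are illustrative only, and the sketch does not implement the paper's criterion that averages over uncertainty in the link, linear predictor and parameters.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(6)

# Locally D-optimal exact design for a two-factor logistic regression,
# found by a simple point-exchange search over a candidate grid.
beta = np.array([0.5, 1.0, -1.0])       # assumed (local) parameter values
grid = np.array(list(product(np.linspace(-1, 1, 11), repeat=2)))
F_grid = np.column_stack([np.ones(len(grid)), grid])   # model matrix rows

def log_det_info(rows):
    """log det of the logistic-regression information for the chosen runs."""
    F = F_grid[rows]
    p = 1.0 / (1.0 + np.exp(-F @ beta))
    W = p * (1.0 - p)                   # GLM weights p(1-p)
    M = F.T @ (W[:, None] * F)
    sign, logdet = np.linalg.slogdet(M)
    return logdet if sign > 0 else -np.inf

n_runs = 12                             # exact design with 12 runs
design = list(rng.choice(len(grid), n_runs, replace=True))
best = log_det_info(design)

# Point-exchange search: repeatedly try swapping one run for a grid point.
for sweep in range(200):
    i = rng.integers(n_runs)
    candidate = design.copy()
    candidate[i] = rng.integers(len(grid))
    val = log_det_info(candidate)
    if val > best:
        design, best = candidate, val

print("log det information of final design:", round(best, 3))
print("support points (x1, x2) and replications:")
pts, counts = np.unique(grid[design], axis=0, return_counts=True)
for pt, ct in zip(pts, counts):
    print(f"  {pt}  x{ct}")
```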