65 results for Random parameter Logit Model
Abstract:
Most cellular solids are random materials, while practically all theoretical structure-property results are for periodic models. To be able to generate theoretical results for random models, the finite element method (FEM) was used to study the elastic properties of solids with a closed-cell cellular structure. We have computed the density (rho) and microstructure dependence of the Young's modulus (E) and Poisson's ratio (PR) for several different isotropic random models based on Voronoi tessellations and level-cut Gaussian random fields. The effect of partially open cells is also considered. The results, which are best described by a power law E ∝ rho^n (1 < n < 2), show the influence of randomness and isotropy on the properties of closed-cell cellular materials, and are found to be in good agreement with experimental data. (C) 2001 Acta Materialia Inc. Published by Elsevier Science Ltd. All rights reserved.
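For reference, the power-law dependence reported above is conventionally written in relative (Gibson-Ashby-type) form; the prefactor C below is an illustrative fitted constant, not a value taken from the study:

```latex
% Relative Young's modulus of a closed-cell foam versus relative density.
% E_s and rho_s are the modulus and density of the solid cell-wall material;
% the abstract reports exponents in the range 1 < n < 2 for random models.
\begin{equation}
  \frac{E}{E_s} \;=\; C \left( \frac{\rho}{\rho_s} \right)^{n}, \qquad 1 < n < 2.
\end{equation}
```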
Abstract:
A mixture model incorporating long-term survivors has been adopted in the field of biostatistics where some individuals may never experience the failure event under study. The surviving fraction may be considered cured. In most applications, the survival times are assumed to be independent. However, when the survival data are obtained from a multi-centre clinical trial, it is conceivable that the environmental conditions and facilities shared within a clinic affect the proportion cured as well as the failure risk for the uncured individuals. This necessitates a long-term survivor mixture model with random effects. In this paper, the long-term survivor mixture model is extended for the analysis of multivariate failure time data using the generalized linear mixed model (GLMM) approach. The proposed model is applied to analyse a numerical data set from a multi-centre clinical trial of carcinoma as an illustration. Some simulation experiments are performed to assess the applicability of the model based on the average biases of the estimates obtained. Copyright (C) 2001 John Wiley & Sons, Ltd.
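A minimal sketch of the kind of long-term survivor (cure) mixture model with clinic-level random effects described above; the link functions and the placement of the random effects here are generic assumptions and may differ from the paper's exact GLMM formulation:

```latex
% Population survival: a cured fraction pi plus uncured survivors S_u(t);
% k indexes clinics, so u_k and v_k are shared clinic random effects on the
% cured proportion and on the failure risk of uncured individuals.
\begin{align}
  S(t \mid \mathbf{x}, u_k, v_k) &= \pi(\mathbf{x}, u_k) + \bigl(1 - \pi(\mathbf{x}, u_k)\bigr)\, S_u(t \mid \mathbf{x}, v_k), \\
  \operatorname{logit}\, \pi(\mathbf{x}, u_k) &= \mathbf{x}^{\top}\boldsymbol{\beta} + u_k, \qquad u_k \sim N(0, \sigma_u^2), \\
  h_u(t \mid \mathbf{x}, v_k) &= h_0(t)\, \exp\bigl(\mathbf{x}^{\top}\boldsymbol{\gamma} + v_k\bigr), \qquad v_k \sim N(0, \sigma_v^2).
\end{align}
```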
Abstract:
Despite its widespread use, the Coale-Demeny model life table system does not capture the extensive variation in age-specific mortality patterns observed in contemporary populations, particularly those of the countries of Eastern Europe and populations affected by HIV/AIDS. Although relational mortality models, such as the Brass logit system, can identify these variations, these models show systematic bias in their predictive ability as mortality levels depart from the standard. We propose a modification of the two-parameter Brass relational model. The modified model incorporates two additional age-specific correction factors (gamma(x) and theta(x)) based on mortality levels among children and adults, relative to the standard. Tests of predictive validity show deviations in age-specific mortality rates predicted by the proposed system to be 30-50 per cent lower than those predicted by the Coale-Demeny system and 15-40 per cent lower than those predicted using the original Brass system. The modified logit system is a two-parameter system, parameterized using values of l(5) and l(60).
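For context, the original two-parameter Brass relational system that the proposal modifies can be written as below; the additional age-specific correction factors gamma(x) and theta(x) enter the modified system, but their exact functional form is specified in the paper and not reproduced here:

```latex
% Brass relational logit life table system: survivorship l(x) is related to a
% standard schedule l_s(x) through a level parameter alpha and a slope beta.
\begin{equation}
  Y(x) \;=\; \alpha + \beta\, Y_s(x),
  \qquad
  Y(x) \;=\; \tfrac{1}{2}\,\ln\!\left(\frac{1 - l(x)}{l(x)}\right).
\end{equation}
```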
Abstract:
Mixture models implemented via the expectation-maximization (EM) algorithm are being increasingly used in a wide range of problems in pattern recognition such as image segmentation. However, the EM algorithm requires considerable computational time in its application to huge data sets such as a three-dimensional magnetic resonance (MR) image of over 10 million voxels. Recently, it was shown that a sparse, incremental version of the EM algorithm could improve its rate of convergence. In this paper, we show how this modified EM algorithm can be speeded up further by adopting a multiresolution kd-tree structure in performing the E-step. The proposed algorithm outperforms some other variants of the EM algorithm for segmenting MR images of the human brain. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
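As a rough illustration of the E-step cost that the kd-tree variant targets, a plain (non-accelerated) Gaussian-mixture E-step over image voxels might look like the sketch below; the multiresolution kd-tree acceleration described in the abstract would replace the per-voxel evaluation with per-node computations shared by groups of similar voxels. Function and variable names are illustrative only.

```python
import numpy as np
from scipy.stats import multivariate_normal

def e_step(intensities, weights, means, covs):
    """Plain E-step of a Gaussian mixture over voxel feature vectors.

    intensities : (n_voxels, d) array of voxel features
    weights     : (k,) mixing proportions
    means, covs : (k, d) and (k, d, d) component parameters
    Returns an (n_voxels, k) array of responsibilities. The kd-tree variant
    avoids visiting every voxel individually by caching sufficient statistics
    at tree nodes whose voxels share nearly identical responsibilities.
    """
    k = len(weights)
    log_resp = np.empty((intensities.shape[0], k))
    for j in range(k):
        log_resp[:, j] = np.log(weights[j]) + multivariate_normal.logpdf(
            intensities, mean=means[j], cov=covs[j])
    # Normalise in log space for numerical stability.
    log_resp -= log_resp.max(axis=1, keepdims=True)
    resp = np.exp(log_resp)
    resp /= resp.sum(axis=1, keepdims=True)
    return resp
```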
Abstract:
Aims: (1) to quantify the random and predictable components of variability for aminoglycoside clearance and volume of distribution; (2) to investigate models for predicting aminoglycoside clearance in patients with low serum creatinine concentrations; (3) to evaluate the predictive performance of initial dosing strategies for achieving an aminoglycoside target concentration.
Methods: Aminoglycoside demographic, dosing and concentration data were collected from 697 adult patients (>=20 years old) as part of standard clinical care using a target concentration intervention approach for dose individualization. It was assumed that aminoglycoside clearance had a renal and a nonrenal component, with the renal component being linearly related to predicted creatinine clearance.
Results: A two-compartment pharmacokinetic model best described the aminoglycoside data. The addition of weight, age, sex and serum creatinine as covariates reduced the random component of between-subject variability (BSVR) in clearance (CL) from 94% to 36% of population parameter variability (PPV). The final pharmacokinetic parameter estimates for the model with the best predictive performance were: CL, 4.7 l h^-1 70 kg^-1; intercompartmental clearance (CLic), 1 l h^-1 70 kg^-1; volume of the central compartment (V1), 19.5 l 70 kg^-1; volume of the peripheral compartment (V2), 11.2 l 70 kg^-1.
Conclusions: Using a fixed dose of aminoglycoside will put 35% of typical patients within 80-125% of the required dose. Covariate-guided predictions increase this to up to 61%. However, because we have shown that random within-subject variability (WSVR) in clearance is less than safe and effective variability (SEV), target concentration intervention can potentially achieve safe and effective doses in 90% of patients.
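A hedged sketch of the clearance structure described in the Methods, written as a single equation; the allometric weight scaling and the log-normal between-subject term are conventional population-pharmacokinetic assumptions and may differ in detail from the fitted model:

```latex
% Clearance as a nonrenal component plus a renal component linear in predicted
% creatinine clearance, scaled to a 70 kg standard, with a log-normally
% distributed between-subject deviation eta_i.
\begin{equation}
  CL_i \;=\; \Bigl( CL_{\mathrm{nonrenal}} + \theta_{\mathrm{renal}} \cdot CLcr_{\mathrm{pred},i} \Bigr)
  \left( \frac{WT_i}{70} \right)^{3/4} e^{\eta_i},
  \qquad \eta_i \sim N(0, \omega^2).
\end{equation}
```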
Abstract:
Optimal sampling times are found for a study in which one of the primary purposes is to develop a model of the pharmacokinetics of itraconazole in patients with cystic fibrosis for both capsule and solution doses. The optimal design is expected to produce reliable estimates of population parameters for two different structural PK models. Data collected at these sampling times are also expected to provide the researchers with sufficient information to reasonably discriminate between the two competing structural models.
Abstract:
Motivation: The clustering of gene profiles across some experimental conditions of interest contributes significantly to the elucidation of unknown gene function, the validation of gene discoveries and the interpretation of biological processes. However, this clustering problem is not straightforward as the profiles of the genes are not all independently distributed and the expression levels may have been obtained from an experimental design involving replicated arrays. Ignoring the dependence between the gene profiles and the structure of the replicated data can result in important sources of variability in the experiments being overlooked in the analysis, with the consequent possibility of misleading inferences being made. We propose a random-effects model that provides a unified approach to the clustering of genes with correlated expression levels measured in a wide variety of experimental situations. Our model is an extension of the normal mixture model to account for the correlations between the gene profiles and to enable covariate information to be incorporated into the clustering process. Hence the model is applicable to longitudinal studies with or without replication, for example, time-course experiments by using time as a covariate, and to cross-sectional experiments by using categorical covariates to represent the different experimental classes.
Results: We show that our random-effects model can be fitted by maximum likelihood via the EM algorithm, for which the E (expectation) and M (maximization) steps can be implemented in closed form. Hence our model can be fitted deterministically without the need for time-consuming Monte Carlo approximations. The effectiveness of our model-based procedure for the clustering of correlated gene profiles is demonstrated on three real datasets, representing typical microarray experimental designs, covering time-course, repeated-measurement and cross-sectional data. In these examples, relevant clusters of the genes are obtained, which are supported by existing gene-function annotation. A synthetic dataset is also considered.
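A minimal sketch of a normal mixture with component-specific random effects of the kind described above; the design matrices and covariance structure shown here are generic assumptions rather than the paper's exact specification:

```latex
% Mixture of h = 1,...,g components; within component h, the profile vector y_j
% of gene j follows a linear mixed model, so the marginal covariance combines
% random-effect and residual terms.
\begin{equation}
  f(\mathbf{y}_j) \;=\; \sum_{h=1}^{g} \pi_h\,
  N\!\bigl(\mathbf{y}_j;\; \mathbf{X}\boldsymbol{\beta}_h,\;
           \mathbf{Z}\mathbf{D}_h\mathbf{Z}^{\top} + \sigma_h^{2}\mathbf{I}\bigr),
\end{equation}
% where X carries covariates such as time or experimental class, Z is the
% random-effects design matrix, and D_h is the component-specific
% random-effects covariance.
```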
Abstract:
The detection of seizure in the newborn is a critical aspect of neurological research. Current automatic detection techniques are difficult to assess due to the problems associated with acquiring and labelling newborn electroencephalogram (EEG) data. A realistic model for newborn EEG would allow confident development, assessment and comparison of these detection techniques. This paper presents a model for newborn EEG that accounts for its self-similar and non-stationary nature. The model consists of background and seizure sub-models. The newborn EEG background model is based on the short-time power spectrum with a time-varying power law. The relationship between the fractal dimension and the power law of a power spectrum is utilized for accurate estimation of the short-time power law exponent. The newborn EEG seizure model is based on a well-known time-frequency signal model. This model addresses all significant time-frequency characteristics of newborn EEG seizure, which include: multiple components or harmonics, piecewise linear instantaneous frequency laws, and harmonic amplitude modulation. Estimates of the parameters of both models are shown to be random and are modelled using the data from a total of 500 background epochs and 204 seizure epochs. The newborn EEG background and seizure models are validated against real newborn EEG data using the correlation coefficient. The results show that the output of the proposed models has a higher correlation with real newborn EEG than currently accepted models (a 10% and 38% improvement for background and seizure models, respectively).
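A hedged sketch of the background sub-model idea: synthesise a short epoch whose power spectrum follows a power law 1/f^beta, where beta can be tied to an estimated fractal dimension (for fractional-Brownian-motion-like signals, beta = 5 - 2D). This is an illustrative construction rather than the authors' exact simulator, and the sampling rate and epoch length are placeholders.

```python
import numpy as np

def power_law_epoch(beta, n_samples=256, fs=64.0, rng=None):
    """Generate one epoch of 1/f**beta noise by spectral shaping.

    beta : spectral exponent of the target power spectrum (power ~ f**-beta).
           For fBm-like signals this relates to the fractal dimension D as
           roughly beta = 5 - 2*D, the relation exploited by the background model.
    """
    rng = np.random.default_rng(rng)
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    amp = np.zeros_like(freqs)
    amp[1:] = freqs[1:] ** (-beta / 2.0)          # amplitude ~ f^(-beta/2)
    phases = rng.uniform(0.0, 2.0 * np.pi, size=freqs.size)
    spectrum = amp * np.exp(1j * phases)
    epoch = np.fft.irfft(spectrum, n=n_samples)
    return epoch / np.std(epoch)                  # unit-variance epoch

# A time-varying power law can be mimicked by concatenating epochs generated
# with a slowly changing beta.
```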
Abstract:
Three main models of parameter setting have been proposed: the Variational model proposed by Yang (2002; 2004), the Structured Acquisition model endorsed by Baker (2001; 2005), and the Very Early Parameter Setting (VEPS) model advanced by Wexler (1998). The VEPS model contends that parameters are set early. The Variational model supposes that children employ statistical learning mechanisms to decide among competing parameter values, so this model anticipates delays in parameter setting when critical input is sparse, and gradual setting of parameters. On the Structured Acquisition model, delays occur because parameters form a hierarchy, with higher-level parameters set before lower-level parameters. Assuming that children freely choose the initial value, they will sometimes mis-set parameters. However, when that happens, the input is expected to trigger a precipitous rise in one parameter value and a corresponding decline in the other value. We will point to the kind of child language data that is needed in order to adjudicate among these competing models.
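As a rough illustration of the statistical learning mechanism attributed to the Variational model, a linear reward-penalty update over two competing values of a single parameter might look like the sketch below; the update rule, learning rate and success test are simplified assumptions, not Yang's exact formulation.

```python
import random

def variational_learner(input_sentences, parses_ok, gamma=0.02, p_init=0.5, seed=0):
    """Toy reward-penalty learner for one binary parameter.

    input_sentences : iterable of input sentences (opaque tokens here)
    parses_ok       : function (sentence, value) -> True if the grammar with
                      that parameter value can analyse the sentence
    gamma           : learning rate; p is the probability of choosing value 1.
    """
    rng = random.Random(seed)
    p = p_init
    for s in input_sentences:
        value = 1 if rng.random() < p else 0
        if parses_ok(s, value):
            # Reward the chosen value: shift probability mass towards it.
            p = p + gamma * (1 - p) if value == 1 else p - gamma * p
        else:
            # Penalise the chosen value: shift mass towards the competitor.
            p = p - gamma * p if value == 1 else p + gamma * (1 - p)
    return p
```

Sparse critical input slows the drift of p towards 0 or 1, which is the sense in which this class of model predicts delayed and gradual parameter setting.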
Abstract:
A large number of models have been derived from the two-parameter Weibull distribution and are referred to as Weibull models. They exhibit a wide range of shapes for the density and hazard functions, which makes them suitable for modelling complex failure data sets. The WPP and IWPP plots allow one to determine in a systematic manner whether one or more of these models are suitable for modelling a given data set. This paper deals with this topic.
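A minimal sketch of the idea behind a WPP plot (assuming WPP refers to the Weibull probability paper plot), on which data from a two-parameter Weibull distribution fall on a straight line; the empirical CDF plotting positions below are a common choice, not necessarily the paper's.

```python
import numpy as np

def wpp_coordinates(failure_times):
    """Return (x, y) coordinates for a Weibull probability paper plot.

    For a two-parameter Weibull with shape k and scale lam,
    F(t) = 1 - exp(-(t/lam)**k), so
    ln(-ln(1 - F(t))) = k*ln(t) - k*ln(lam): a straight line in ln(t).
    Curvature on the plot therefore points towards one of the derived
    Weibull models rather than the basic two-parameter form.
    """
    t = np.sort(np.asarray(failure_times, dtype=float))
    n = t.size
    f_hat = (np.arange(1, n + 1) - 0.3) / (n + 0.4)   # median-rank plotting positions
    x = np.log(t)
    y = np.log(-np.log(1.0 - f_hat))
    return x, y
```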
Abstract:
This paper presents a comprehensive and critical review of the mechanisms and kinetics of NO and N2O reduction reactions with coal chars under fluidised-bed combustion (FBC) conditions. The heterogeneous reactions of NO and N2O with the char/carbon surface have been well recognised as the most important processes in reducing both NOx and N2O in situ in FBC. Compared to NO-carbon reactions in FBC, the reactions of N2O with chars have been relatively less understood and studied. Beginning with the overall reaction schemes for both NO and N2O reduction, the paper extensively discusses the reaction mechanisms, including the effects of active surface sites. Generally, NO- and N2O-carbon reactions follow a series of step reactions. However, questions remain concerning the role of adsorbed phases of NO and N2O, and the behaviour of different surface sites. Important kinetic factors such as the rate expressions and kinetic parameters, as well as the effects of surface area and pore structure, are discussed in detail. The main factors influencing the reduction of NO and N2O under FBC conditions are the chemical and physical properties of chars, and the operating parameters of FBC such as temperature, the presence of CO and O2, and pressure. It is shown that under similar conditions, N2O is more readily reduced on the char surface than NO. Temperature was found to be a very important parameter in both NO and N2O reduction. It is generally agreed that both NO- and N2O-carbon reactions follow first-order reaction kinetics with respect to the NO and N2O concentrations. The kinetic parameters for NO and N2O reduction largely depend on the pore structure of chars. The correlation between the char surface area and the reactivities of NO/N2O-char reactions is considered to be of great importance to the determination of the reaction kinetics. The rate of NO reduction by chars is strongly enhanced by the presence of CO and O2, but these species may not have significant effects on the rate of N2O reduction. However, the presence of these gases in FBC presents difficulties in the study of kinetics since CO cannot be easily eliminated from the carbon surface. In N2O reduction reactions, ash in chars is found to have significant catalytic effects, which must be accounted for in the kinetic models and data evaluation. (C) 1997 Elsevier Science Ltd.
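For reference, the first-order rate form noted above is conventionally written with an Arrhenius rate constant; the symbols below are generic, and the pre-exponential factor and activation energy are char-specific quantities discussed in the review rather than values supplied here:

```latex
% First-order heterogeneous reduction of NO (and analogously N2O) on the char surface.
\begin{equation}
  -r_{\mathrm{NO}} \;=\; k\, C_{\mathrm{NO}},
  \qquad
  k \;=\; A \exp\!\left(-\frac{E_a}{R\,T}\right),
\end{equation}
% where C_NO is the NO concentration, A the pre-exponential factor, E_a the
% activation energy, R the gas constant and T the temperature.
```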
Abstract:
New classes of integrable boundary conditions for the q-deformed (or two-parameter) supersymmetric U model are presented. The boundary systems are solved by using the coordinate space Bethe ansatz technique and Bethe ansatz equations are derived. (C) 1998 Elsevier Science B.V.
Abstract:
A significant problem in the collection of responses to potentially sensitive questions, such as those relating to illegal, immoral or embarrassing activities, is non-sampling error due to refusal to respond or false responses. Eichhorn & Hayre (1983) suggested the use of scrambled responses to reduce this form of bias. This paper considers a linear regression model in which the dependent variable is unobserved but for which its sum or product with a scrambling random variable of known distribution is known. The performance of two likelihood-based estimators is investigated, namely a Bayesian estimator obtained through a Markov chain Monte Carlo (MCMC) sampling scheme and a classical maximum-likelihood estimator. These two estimators and an estimator suggested by Singh, Joarder & King (1996) are compared. Monte Carlo results show that the Bayesian estimator outperforms the classical estimators in almost all cases, and the relative performance of the Bayesian estimator improves as the responses become more scrambled.
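An illustrative sketch of the multiplicative scrambled-response setting: the analyst observes z = y*s, with the scrambling variable s drawn from a known distribution, and a simple moment-based estimator divides out the known mean of s. This is only a baseline for intuition; it is not the Bayesian MCMC or maximum-likelihood estimator studied in the paper, and the simulated distributions are assumptions.

```python
import numpy as np

def moment_estimator(X, z, s_mean):
    """Moment-based estimate of b in y = X b + e when only z = y * s is observed.

    Since s is independent of (X, y) with known mean s_mean,
    E[z | X] = s_mean * X b, so regressing z on X and dividing by s_mean
    recovers b (unbiasedly, though less efficiently than likelihood methods).
    """
    b_scrambled, *_ = np.linalg.lstsq(X, z, rcond=None)
    return b_scrambled / s_mean

# Small simulation under assumed distributions (uniform scrambler, normal errors).
rng = np.random.default_rng(1)
n = 500
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([2.0, -1.5])
y = X @ beta_true + rng.normal(scale=0.5, size=n)
s = rng.uniform(0.5, 1.5, size=n)          # known scrambling distribution, mean 1.0
z = y * s                                   # only the scrambled response is observed
print(moment_estimator(X, z, s_mean=1.0))   # approximately [2.0, -1.5]
```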
Abstract:
A new model for correlated electrons is presented which is integrable in one dimension. The symmetry algebra of the model is the Lie superalgebra gl(2|1), which depends on a continuous free parameter. This symmetry algebra contains the eta pairing algebra as a subalgebra, which is used to show that the model exhibits Off-Diagonal Long-Range Order in any number of dimensions.