97 results for Random effect model
in University of Queensland eSpace - Australia
Abstract:
We investigate here a modification of the discrete random pore model [Bhatia SK, Vartak BJ, Carbon 1996;34:1383], by including an additional rate constant which takes into account the different reactivity of the initial pore surface having attached functional groups and hydrogens, relative to the subsequently exposed surface. It is observed that the relative initial reactivity has a significant effect on the conversion and structural evolution, underscoring the importance of initial surface chemistry. The model is tested against experimental data on chemically controlled char oxidation and steam gasification at various temperatures. It is seen that the variations of the reaction rate and surface area with conversion are better represented by the present approach than earlier random pore models. The results clearly indicate the improvement of model predictions in the low conversion region, where the effect of the initially attached functional groups and hydrogens is more significant, particularly for char oxidation. It is also seen that, for the data examined, the initial surface chemistry is less important for steam gasification as compared to the oxidation reaction. Further development of the approach must also incorporate the dynamics of surface complexation, which is not considered here.
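The classic random pore model that this work modifies predicts the chemically controlled rate as a function of conversion. A minimal sketch of that baseline rate expression (the rate constant k and structural parameter psi below are illustrative, not values fitted in the paper):

```python
import math

def rpm_rate(X, k, psi):
    """Classic random pore model: chemically controlled rate vs conversion X.

    dX/dt = k * (1 - X) * sqrt(1 - psi * ln(1 - X))
    psi is the structural parameter; psi > 2 gives a rate maximum at X > 0.
    """
    return k * (1.0 - X) * math.sqrt(1.0 - psi * math.log(1.0 - X))

# With psi > 2 the rate passes through a maximum at low conversion,
# the region where initial surface chemistry matters most.
rates = [rpm_rate(x / 100, k=1.0, psi=6.0) for x in range(0, 99)]
peak_X = rates.index(max(rates)) / 100
```

The modified model discussed above adds a second rate constant for the initial surface, altering behaviour precisely in this low-conversion region.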
Abstract:
In many occupational safety interventions, the objective is to reduce the injury incidence as well as the mean claims cost once injury has occurred. The claims cost data within a period typically contain a large proportion of zero observations (no claim). The distribution thus comprises a point mass at 0 mixed with a non-degenerate parametric component. Essentially, the likelihood function can be factorized into two orthogonal components. These two components relate respectively to the effect of covariates on the incidence of claims and the magnitude of claims, given that claims are made. Furthermore, the longitudinal nature of the intervention inherently imposes some correlation among the observations. This paper introduces a zero-augmented gamma random effects model for analysing longitudinal data with many zeros. Adopting the generalized linear mixed model (GLMM) approach reduces the original problem to the fitting of two independent GLMMs. The method is applied to evaluate the effectiveness of a workplace risk assessment teams program, trialled within the cleaning services of a Western Australian public hospital.
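The factorization described above can be made concrete: the zero-augmented gamma log-likelihood splits exactly into an incidence (Bernoulli) component and a magnitude (gamma) component, so each can be maximised separately. A minimal sketch of the plain likelihood (the paper fits each component as a GLMM with random effects):

```python
import math

def zag_loglik(y, p, shape, scale):
    """Log-likelihood of a zero-augmented gamma sample, split into its two
    orthogonal components: the incidence of claims (Bernoulli) and the
    magnitude of positive claims (gamma). y contains claim costs, with 0
    meaning no claim; p is the probability of a positive claim."""
    incidence = sum(math.log(p) if yi > 0 else math.log(1.0 - p) for yi in y)
    positives = [yi for yi in y if yi > 0]
    magnitude = sum((shape - 1.0) * math.log(yi) - yi / scale
                    - shape * math.log(scale) - math.lgamma(shape)
                    for yi in positives)
    return incidence, magnitude

inc, mag = zag_loglik([0.0, 2.0, 0.0, 1.5], p=0.5, shape=1.0, scale=1.0)
```

Because the total log-likelihood is simply inc + mag, covariate effects on incidence and on magnitude can be estimated independently.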
Abstract:
Motivation: The clustering of gene profiles across some experimental conditions of interest contributes significantly to the elucidation of unknown gene function, the validation of gene discoveries and the interpretation of biological processes. However, this clustering problem is not straightforward, as the profiles of the genes are not all independently distributed and the expression levels may have been obtained from an experimental design involving replicated arrays. Ignoring the dependence between the gene profiles and the structure of the replicated data can result in important sources of variability in the experiments being overlooked in the analysis, with the consequent possibility of misleading inferences being made. We propose a random-effects model that provides a unified approach to the clustering of genes with correlated expression levels measured in a wide variety of experimental situations. Our model is an extension of the normal mixture model to account for the correlations between the gene profiles and to enable covariate information to be incorporated into the clustering process. Hence the model is applicable to longitudinal studies with or without replication, for example, time-course experiments by using time as a covariate, and to cross-sectional experiments by using categorical covariates to represent the different experimental classes. Results: We show that our random-effects model can be fitted by maximum likelihood via the EM algorithm, for which the E (expectation) and M (maximization) steps can be implemented in closed form. Hence our model can be fitted deterministically without the need for time-consuming Monte Carlo approximations. The effectiveness of our model-based procedure for the clustering of correlated gene profiles is demonstrated on three real datasets, representing typical microarray experimental designs, covering time-course, repeated-measurement and cross-sectional data.
In these examples, relevant clusters of the genes are obtained, which are supported by existing gene-function annotation. A synthetic dataset is also considered.
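The closed-form E and M steps can be illustrated on the base model that the paper extends, a normal mixture without random effects or covariates. A minimal sketch (the cluster count, synthetic data and initialisation below are illustrative):

```python
import numpy as np

def em_normal_mixture(y, k=2, iters=100):
    """Minimal EM for a univariate normal mixture, with closed-form
    E and M steps. The paper's model extends this with random effects
    and covariates for correlated gene profiles; this sketch shows only
    the base mixture machinery."""
    y = np.asarray(y, float)
    pi = np.full(k, 1.0 / k)                       # mixing proportions
    mu = np.quantile(y, np.linspace(0.0, 1.0, k))  # spread-out initial means
    var = np.full(k, y.var())
    for _ in range(iters):
        # E step: posterior probabilities of cluster membership (closed form)
        dens = pi * np.exp(-0.5 * (y[:, None] - mu) ** 2 / var) / np.sqrt(2 * np.pi * var)
        resp = dens / dens.sum(axis=1, keepdims=True)
        # M step: weighted maximum likelihood updates (closed form)
        nk = resp.sum(axis=0)
        pi = nk / len(y)
        mu = (resp * y[:, None]).sum(axis=0) / nk
        var = (resp * (y[:, None] - mu) ** 2).sum(axis=0) / nk
    return pi, mu, var

# Two well-separated synthetic clusters are recovered
rng = np.random.default_rng(1)
data = np.concatenate([rng.normal(0.0, 1.0, 200), rng.normal(8.0, 1.0, 200)])
pi, mu, var = em_normal_mixture(data)
```

Because both steps are closed-form, each iteration is deterministic, which is the property that lets the full model avoid Monte Carlo approximation.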
Abstract:
A new model proposed for the gasification of chars and carbons incorporates features of the turbostratic nanoscale structure that exists in such materials. The model also considers the effect of initial surface chemistry and the different reactivities perpendicular to the edges and to the faces of the underlying crystallite planes comprising the turbostratic structure. It may be more realistic than earlier models based on pore or grain structure idealizations when the carbon contains large amounts of crystallite matter. Shrinkage of the carbon particles in the chemically controlled regime is also possible, due to the random complete gasification of crystallite planes. This mechanism can explain observations in the literature of particle size reduction. Based on the model predictions, both initial surface chemistry and the number of stacked planes in the crystallites strongly influence the reactivity and particle shrinkage. Test results agree well with literature data on the air-oxidation of Spherocarb and show that the model accurately predicts the variation of particle size with conversion. Model parameters are determined entirely from rate measurements.
Estimation of pharmacokinetic parameters from non-compartmental variables using Microsoft Excel®
Abstract:
This study was conducted to develop a method, termed 'back analysis (BA)', for converting non-compartmental variables to compartment-model-dependent pharmacokinetic parameters for both one- and two-compartment models. A Microsoft Excel® spreadsheet was implemented with the use of Solver® and Visual Basic functions. The performance of the BA method in estimating pharmacokinetic parameter values was evaluated by comparing the parameter values obtained with those from a standard modelling software program, NONMEM, using simulated data. The results show that the BA method was reasonably precise and provided low bias in estimating fixed and random effect parameters for both one- and two-compartment models. The pharmacokinetic parameters estimated from the BA method were similar to those of the NONMEM estimation.
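For the one-compartment IV-bolus case, the conversion from non-compartmental variables rests on standard algebraic identities, which the BA method solves numerically in Excel. A hedged sketch of that algebra (the dose, AUC and half-life values below are illustrative):

```python
import math

def one_compartment_from_nca(dose, auc, t_half):
    """Convert non-compartmental variables to one-compartment IV-bolus
    parameters (illustrative algebra; the paper's back analysis solves
    the analogous relations numerically via Solver).

      kel = ln 2 / t_half        elimination rate constant
      CL  = Dose / AUC           clearance
      V   = CL / kel             volume of distribution
    """
    kel = math.log(2) / t_half
    cl = dose / auc
    v = cl / kel
    return {"kel": kel, "CL": cl, "V": v}

params = one_compartment_from_nca(dose=100.0, auc=50.0, t_half=4.0)
```

The two-compartment case has no such closed-form solution for all parameters, which is why a numerical solver is needed there.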
Abstract:
Niche apportionment models have only been applied once to parasite communities. Only the random assortment model (RA), which indicates that species abundances are independent of each other and that interspecific competition is unimportant, provided a good fit to 3 out of 6 parasite communities investigated. The generality of this result needs to be validated, however. In this study we apply 5 niche apportionment models to the parasite communities of 14 fish species from the Great Barrier Reef. We determined which model fitted the data when using either numerical abundance or biomass as an estimate of parasite abundance, and whether the fit of niche apportionment models depends on how the parasite community is defined (e.g. ectoparasites, endoparasites, or all parasites considered together). The RA model provided a good fit for the whole community of parasites in 7 fish species when using biovolume (as a surrogate of biomass) as a measure of species abundance. The RA model also fitted observed data when ecto- and endoparasites were considered separately, using abundance or biovolume, but less frequently. Variation in fish sizes among species was not associated with the probability of a model fitting the data. Total numerical abundance and biovolume of parasites were not related across host species, suggesting that they capture different aspects of abundance. Biovolume is not only a better measure to use with niche-orientated models; it should also be the preferred descriptor for analysing parasite community structure in other contexts. Most of the biological assumptions behind the RA model, i.e. randomness in apportioning niche space, lack of interspecific competition, independence of abundance among different species, and species with variable niches in changeable environments, are in accordance with some previous findings on parasite communities.
Thus, parasite communities may generally be unsaturated with species, with empty niches, and interspecific interactions may generally be unimportant in determining parasite community structure.
Abstract:
Many images consist of two or more 'phases', where a phase is a collection of homogeneous zones. For example, the phases may represent the presence of different sulphides in an ore sample. Frequently, these phases exhibit very little structure, though all connected components of a given phase may be similar in some sense. As a consequence, random set models are commonly used to model such images. The Boolean model and models derived from the Boolean model are often chosen. An alternative approach to modelling such images is to use the excursion sets of random fields to model each phase. In this paper, the properties of excursion sets will first be discussed in terms of modelling binary images. Ways of extending these models to multi-phase images will then be explored. A desirable feature of any model is that it can be fitted to data reasonably well. Different methods for fitting random set models based on excursion sets will be presented, and some of the difficulties with these methods will be discussed.
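The excursion-set idea can be illustrated by thresholding a simulated stationary Gaussian random field: the binary phase is the set of pixels where the field exceeds a level u. A minimal sketch (the spectral-filtering construction and all parameters are illustrative, not the fitting methods of the paper):

```python
import numpy as np

def excursion_set(n=256, corr_len=10.0, level=0.0, seed=0):
    """Binary image as the excursion set {Z >= level} of a stationary
    Gaussian random field (an illustrative sketch; grid size, correlation
    length and level are assumptions, not values from the paper).

    The field is built by low-pass filtering white noise in Fourier
    space with a Gaussian spectrum, then standardizing to unit variance."""
    rng = np.random.default_rng(seed)
    noise = rng.standard_normal((n, n))
    fx = np.fft.fftfreq(n)[:, None]
    fy = np.fft.fftfreq(n)[None, :]
    filt = np.exp(-2.0 * (np.pi * corr_len) ** 2 * (fx ** 2 + fy ** 2))
    z = np.fft.ifft2(np.fft.fft2(noise) * filt).real
    z = (z - z.mean()) / z.std()
    return z >= level

# For a zero-mean, unit-variance field thresholded at level u, the
# expected area fraction of the phase is 1 - Phi(u); at u = 0, one half.
phase = excursion_set(level=0.0)
area_fraction = phase.mean()
```

Raising the level shrinks the phase and breaks it into islands, which is how a single parameter controls both volume fraction and connectivity.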
Abstract:
This paper presents a comprehensive and critical review of the mechanisms and kinetics of NO and N2O reduction reactions with coal chars under fluidised-bed combustion (FBC) conditions. The heterogeneous reactions of NO and N2O with the char/carbon surface have been well recognised as the most important processes for reducing both NOx and N2O in situ in FBC. Compared to NO-carbon reactions in FBC, the reactions of N2O with chars have been less well understood and studied. Beginning with the overall reaction schemes for both NO and N2O reduction, the paper extensively discusses the reaction mechanisms, including the effects of active surface sites. Generally, NO- and N2O-carbon reactions follow a series of step reactions. However, questions remain concerning the role of adsorbed phases of NO and N2O, and the behaviour of different surface sites. Important kinetic factors such as the rate expressions and kinetic parameters, as well as the effects of surface area and pore structure, are discussed in detail. The main factors influencing the reduction of NO and N2O under FBC conditions are the chemical and physical properties of chars, and the operating parameters of FBC such as temperature, the presence of CO and O2, and pressure. It is shown that under similar conditions, N2O is more readily reduced on the char surface than NO. Temperature was found to be a very important parameter in both NO and N2O reduction. It is generally agreed that both NO- and N2O-carbon reactions follow first-order kinetics with respect to the NO and N2O concentrations. The kinetic parameters for NO and N2O reduction largely depend on the pore structure of chars. The correlation between char surface area and the reactivities of NO/N2O-char reactions is considered to be of great importance to the determination of the reaction kinetics.
The rate of NO reduction by chars is strongly enhanced by the presence of CO and O2, but these species may not have significant effects on the rate of N2O reduction. However, the presence of these gases in FBC presents difficulties in the study of kinetics, since CO cannot be easily eliminated from the carbon surface. In N2O reduction reactions, ash in chars is found to have significant catalytic effects, which must be accounted for in the kinetic models and data evaluation. (C) 1997 Elsevier Science Ltd.
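The first-order kinetics and strong temperature dependence noted above can be sketched with an Arrhenius rate constant; the pre-exponential factor and activation energy below are illustrative values, not parameters reported in the review:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def first_order_rate(c_no, A, Ea, T):
    """First-order NO-char reduction rate, r = k(T) * C_NO, with an
    Arrhenius rate constant k = A * exp(-Ea / (R * T)).
    A and Ea are illustrative assumptions, not fitted values."""
    k = A * math.exp(-Ea / (R * T))
    return k * c_no

# A 100 K increase in temperature multiplies the rate severalfold,
# consistent with temperature being a dominant operating parameter.
r_low = first_order_rate(c_no=1.0, A=1e6, Ea=120e3, T=1000.0)
r_high = first_order_rate(c_no=1.0, A=1e6, Ea=120e3, T=1100.0)
```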
Abstract:
This systematic review aimed to collate randomized controlled trials (RCTs) of various interventions used to treat tardive dyskinesia (TD) and, where appropriate, to combine the data for meta-analysis. Clinical trials were identified by electronic searches, handsearches and contact with principal investigators. Data were extracted independently by two reviewers, for outcomes related to improvement, deterioration, side-effects and drop-out rates. Data were pooled using the Mantel-Haenszel odds ratio (fixed effect model). For treatments that had significant effects, the number needed to treat (NNT) was calculated. From 296 controlled clinical trials, data were extracted from 47 trials. For most interventions, we could identify no RCT-derived evidence of efficacy. A meta-analysis showed that baclofen, deanol and diazepam were no more effective than a placebo. Single RCTs demonstrated a lack of evidence of any effect for bromocriptine, ceruletide, clonidine, estrogen, gamma linolenic acid, hydergine, lecithin, lithium, progabide, selegiline and tetrahydroisoxazolopyridinol. The meta-analysis found that five interventions were effective: L-dopa, oxypertine, sodium valproate, tiapride and vitamin E; neuroleptic reduction was marginally significant. Data from single RCTs revealed that insulin, alpha methyl dopa and reserpine were more effective than a placebo. There was a significantly increased risk of adverse events associated with baclofen, deanol, L-dopa, oxypertine and reserpine. Meta-analysis of the impact of placebo (n=485) showed that 37.3% of participants showed an improvement. Interpretation of this systematic review requires caution, as the individual trials identified tended to have small sample sizes. For many compounds, data from only one trial were available, and where meta-analyses were possible, these were based on a small number of trials.
Despite these concerns, the review facilitated the interpretation of the large and diverse range of treatments used for TD. Clinical recommendations for the treatment of TD are made, based on the availability of RCT-derived evidence, the strength of that evidence and the presence of adverse effects. (C) 1999 Elsevier Science B.V. All rights reserved.
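The pooling and NNT calculations used in the review follow standard formulas. A minimal sketch (the 2x2 counts and risks below are hypothetical, not data from the included trials):

```python
def mantel_haenszel_or(tables):
    """Fixed-effect pooled odds ratio across 2x2 trial tables.
    Each table is (a, b, c, d): improved/not-improved counts in the
    treatment arm, then in the placebo arm."""
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den

def number_needed_to_treat(p_treated, p_control):
    """NNT = 1 / absolute risk difference for a beneficial outcome."""
    return 1.0 / (p_treated - p_control)

# Hypothetical trials (counts are illustrative, not from the review)
tables = [(20, 10, 12, 18), (15, 15, 8, 22)]
pooled_or = mantel_haenszel_or(tables)
nnt = number_needed_to_treat(p_treated=0.50, p_control=0.30)
```

The fixed-effect weighting by inverse table size means small trials, the norm in this literature, contribute little individually, which is one reason pooled estimates here rest on few trials.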
Abstract:
The importance of founder events in promoting evolutionary changes on islands has been a subject of long-running controversy. Resolution of this debate has been hindered by a lack of empirical evidence from naturally founded island populations. Here we undertake a genetic analysis of a series of historically documented, natural colonization events by the silvereye species-complex (Zosterops lateralis), a group used to illustrate the process of island colonization in the original founder effect model. Our results indicate that single founder events do not affect levels of heterozygosity or allelic diversity, nor do they result in immediate genetic differentiation between populations. Instead, four to five successive founder events are required before indices of diversity and divergence approach that seen in evolutionarily old forms. A Bayesian analysis based on computer simulation allows inferences to be made on the number of effective founders and indicates that founder effects are weak because island populations are established from relatively large flocks. Indeed, statistical support for a founder event model was not significantly higher than for a gradual-drift model for all recently colonized islands. Taken together, these results suggest that single colonization events in this species complex are rarely accompanied by severe founder effects, and multiple founder events and/or long-term genetic drift have been of greater consequence for neutral genetic diversity.
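The weak per-event loss of diversity from a large founding flock can be sketched with the standard drift result for expected heterozygosity, H_t = H_0 * (1 - 1/(2N))^t after t successive founder events of effective size N; the founder size and starting heterozygosity below are illustrative assumptions, not the silvereye estimates:

```python
def heterozygosity_after_founders(h0, founder_size, n_events):
    """Expected heterozygosity after successive founder events, each
    retaining a fraction (1 - 1/(2N)) of diversity (standard drift
    result; the values used below are illustrative)."""
    return h0 * (1.0 - 1.0 / (2.0 * founder_size)) ** n_events

# A large founding flock (N = 50) loses only 1% of heterozygosity per
# event, so several successive events are needed before the loss is
# noticeable, consistent with the pattern described above.
single = heterozygosity_after_founders(0.80, 50, 1)
five = heterozygosity_after_founders(0.80, 50, 5)
```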
Abstract:
The estimated parameters of output distance functions frequently violate the monotonicity, quasi-convexity and convexity constraints implied by economic theory, leading to estimated elasticities and shadow prices that are incorrectly signed, and ultimately to perverse conclusions concerning the effects of input and output changes on productivity growth and relative efficiency levels. We show how a Bayesian approach can be used to impose these constraints on the parameters of a translog output distance function. Implementing the approach involves the use of a Gibbs sampler with data augmentation. A Metropolis-Hastings algorithm is also used within the Gibbs sampler to simulate observations from truncated pdfs. Our methods are developed for the case where panel data are available and technical inefficiency effects are assumed to be time-invariant. Two models, a fixed effects model and a random effects model, are developed and applied to panel data on 17 European railways. We observe significant changes in estimated elasticities and shadow price ratios when regularity restrictions are imposed. (C) 2004 Elsevier B.V. All rights reserved.
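A Metropolis-Hastings step within a Gibbs sweep of the kind described above can be sketched for the simplest case, a normal density truncated by an inequality restriction (the target and tuning values are illustrative, not the paper's sampler for the translog distance function):

```python
import math
import random

def mh_truncated_normal(mu, sigma, lower, n=30000, seed=0):
    """Random-walk Metropolis sampler for a normal density truncated to
    [lower, inf), the kind of step used within a Gibbs sweep to simulate
    parameters subject to inequality (regularity) restrictions."""
    rng = random.Random(seed)
    x = max(mu, lower) + 0.1        # start inside the support
    draws = []
    for _ in range(n):
        prop = x + rng.gauss(0.0, sigma)
        if prop >= lower:           # proposals outside the support are rejected
            # Symmetric proposal: accept with the ratio of target densities
            log_ratio = ((x - mu) ** 2 - (prop - mu) ** 2) / (2.0 * sigma ** 2)
            if math.log(rng.random()) < log_ratio:
                x = prop
        draws.append(x)
    return draws

draws = mh_truncated_normal(mu=0.0, sigma=1.0, lower=0.0)
```

For a standard normal truncated at zero, the long-run sample mean should approach sqrt(2/pi), roughly 0.798, which provides a simple check of the sampler.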
Abstract:
The Professions in Australia Study is the first longitudinal investigation of the professions in Australia; it spans 33 years. Self-administered questionnaires were distributed on at least eight occasions between 1965 and 1998 to cohorts of students and later practitioners from the professions of engineering, law and medicine. The longitudinal design of this study has allowed for an investigation of individual change over time in three archetypal characteristics of the professions (service, knowledge and autonomy) and two of the benefits of professional work (financial rewards and prestige). A cumulative logit random effects model was used to statistically assess changes in the ordinal response scores measuring the importance of the characteristics and benefits through stages of the career path. Individuals were also classified by average trends in response scores over time, and hence professions are described through their members' tendency to follow a particular path in attitudes, either of change or constancy, in relation to the importance of the five elements (characteristics and benefits). Comparisons in trends are also made between the three professions.
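The cumulative logit model assigns ordered-category probabilities via P(Y <= j) = logistic(theta_j - eta), where the linear predictor eta combines fixed effects with a subject-level random intercept. A minimal sketch of those probabilities (the cutpoints and predictor values are illustrative):

```python
import math

def cumulative_logit_probs(cutpoints, eta):
    """Category probabilities under a cumulative logit (proportional odds)
    model: P(Y <= j) = logistic(theta_j - eta). Cutpoints theta_j must be
    increasing. A sketch of the model family, not the study's fitted model."""
    logistic = lambda z: 1.0 / (1.0 + math.exp(-z))
    cum = [logistic(t - eta) for t in cutpoints] + [1.0]
    return [cum[0]] + [cum[j] - cum[j - 1] for j in range(1, len(cum))]

# Three cutpoints define four ordered importance categories; a larger eta
# shifts probability mass toward the higher categories.
probs = cumulative_logit_probs(cutpoints=[-1.0, 0.0, 1.5], eta=0.5)
probs_hi = cumulative_logit_probs(cutpoints=[-1.0, 0.0, 1.5], eta=2.0)
```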
Abstract:
Many variables that are of interest in social science research are nominal variables with two or more categories, such as employment status, occupation, political preference, or self-reported health status. With longitudinal survey data it is possible to analyse the transitions of individuals between different employment states or occupations (for example). In the statistical literature, models for analysing categorical dependent variables with repeated observations belong to the family of models known as generalized linear mixed models (GLMMs). The specific GLMM for a dependent variable with three or more categories is the multinomial logit random effects model. For these models, the marginal distribution of the response does not have a closed-form solution, and hence numerical integration must be used to obtain maximum likelihood estimates for the model parameters. Techniques for implementing the numerical integration are available but are computationally intensive, requiring a large amount of computer processing time that increases with the number of clusters (or individuals) in the data, and are not always readily accessible to the practitioner in standard software. For the purposes of analysing categorical response data from a longitudinal social survey, there is clearly a need to evaluate the existing procedures for estimating multinomial logit random effects models in terms of accuracy, efficiency and computing time. The computing time required has significant implications for the approach preferred by researchers. In this paper we evaluate statistical software procedures that utilise adaptive Gaussian quadrature and MCMC methods, with specific application to modelling the employment status of women using a GLMM, over three waves of the HILDA survey.
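The Gauss-Hermite quadrature at the heart of the first approach can be sketched for the simplest GLMM, a binary logit with a normal random intercept; this is a non-adaptive, illustrative version, and the multinomial model evaluated in the paper generalises it to several categories:

```python
import numpy as np

def marginal_bernoulli_logit(y, x, beta, sigma, n_nodes=20):
    """Marginal likelihood of one cluster's repeated binary responses under
    a logit model with a N(0, sigma^2) random intercept, integrated out by
    Gauss-Hermite quadrature. The data and parameter values used below are
    illustrative assumptions."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_nodes)
    u = np.sqrt(2.0) * sigma * nodes     # change of variables for N(0, sigma^2)
    eta = u[:, None] + x * beta          # linear predictor at each node
    p = 1.0 / (1.0 + np.exp(-eta))
    lik = np.prod(np.where(np.array(y) == 1, p, 1 - p), axis=1)
    return float(np.sum(weights * lik) / np.sqrt(np.pi))

# One individual observed at three waves with covariate values x
L = marginal_bernoulli_logit(y=[1, 0, 1], x=np.array([0.0, 0.5, 1.0]),
                             beta=0.3, sigma=1.0)
```

Maximum likelihood estimation repeats this quadrature for every cluster at every trial parameter value, which is why computing time grows with the number of individuals.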
Abstract:
Most cellular solids are random materials, while practically all theoretical structure-property results are for periodic models. To be able to generate theoretical results for random models, the finite element method (FEM) was used to study the elastic properties of solids with a closed-cell cellular structure. We have computed the density (ρ) and microstructure dependence of the Young's modulus (E) and Poisson's ratio (PR) for several different isotropic random models based on Voronoi tessellations and level-cut Gaussian random fields. The effect of partially open cells is also considered. The results, which are best described by a power law E ∝ ρ^n (1 < n < 2), show the influence of randomness and isotropy on the properties of closed-cell cellular materials, and are found to be in good agreement with experimental data. (C) 2001 Acta Materialia Inc. Published by Elsevier Science Ltd. All rights reserved.
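Fitting the power law E ∝ ρ^n reduces to linear regression on log-log axes. A minimal sketch on synthetic data generated with a known exponent (the values are illustrative, not the paper's FEM results):

```python
import math

def fit_power_law(rho, E):
    """Least-squares fit of E = C * rho**n on log-log axes; returns (C, n)."""
    lx = [math.log(r) for r in rho]
    ly = [math.log(e) for e in E]
    m = len(lx)
    mx, my = sum(lx) / m, sum(ly) / m
    slope = (sum((a - mx) * (b - my) for a, b in zip(lx, ly))
             / sum((a - mx) ** 2 for a in lx))
    return math.exp(my - slope * mx), slope

# Synthetic moduli generated with exponent n = 1.6 are recovered exactly
rhos = [0.05, 0.1, 0.2, 0.4]
Es = [2.0 * r ** 1.6 for r in rhos]
C, n_exp = fit_power_law(rhos, Es)
```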
Abstract:
Plant cells are characterized by low water content, so the fraction of cell volume (volume fraction) in a vessel is large compared with other cell systems, even if the cell concentrations are the same. Therefore, the concentration of plant cells should preferably be expressed on a liquid-volume basis rather than on a total-vessel-volume basis. In this paper, a new model is proposed to analyze the behavior of a plant cell culture by dividing the cell suspension into biotic and abiotic phases. Using this model, we analyzed cell growth and alkaloid production by Catharanthus roseus. Large errors in the simulated results were observed if the phase segregation was not considered.
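The liquid-volume-basis correction at the core of the phase-segregated view is a one-line rescaling by the abiotic volume fraction. A minimal sketch (the concentration and volume fraction values are illustrative):

```python
def liquid_basis_concentration(c_total, biotic_volume_fraction):
    """Convert a concentration expressed per total vessel volume to the
    liquid (abiotic-phase) volume basis, assuming the solute resides in
    the liquid phase. With phi the biotic volume fraction:
        C_liquid = C_total / (1 - phi)
    A sketch of the phase-segregation idea, not the paper's full model."""
    return c_total / (1.0 - biotic_volume_fraction)

# At 40% cell volume fraction the two bases differ by a factor of 1/0.6,
# the kind of error incurred when phase segregation is ignored.
c_liq = liquid_basis_concentration(c_total=6.0, biotic_volume_fraction=0.4)
```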