898 results for Bayesian shared component model
Finite mixture regression model with random effects: application to neonatal hospital length of stay
Abstract:
A two-component mixture regression model that allows simultaneously for heterogeneity and dependency among observations is proposed. By specifying random effects explicitly in the linear predictor of the mixture probability and the mixture components, parameter estimation is achieved by maximising the corresponding best linear unbiased prediction type log-likelihood. Approximate residual maximum likelihood estimates are obtained via an EM algorithm in the manner of a generalised linear mixed model (GLMM). The method can be extended to a g-component mixture regression model with component densities from the exponential family, leading to the class of finite mixture GLMMs. For illustration, the method is applied to the analysis of neonatal length of stay (LOS). It is shown that identifying the pertinent factors that influence hospital LOS can provide important information for health care planning and resource allocation. (C) 2002 Elsevier Science B.V. All rights reserved.
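For intuition, a minimal sketch of the EM machinery behind such a model is given below for a plain two-component Gaussian mixture regression; it omits the random effects, the BLUP-type likelihood, and the REML refinements that are the paper's contribution, and all names and defaults are illustrative.

```python
# Minimal EM for a two-component Gaussian mixture regression.
# Illustrative only: no random effects or REML, unlike the paper's model.
import numpy as np

def em_mixture_regression(X, y, n_iter=200, seed=0):
    rng = np.random.default_rng(seed)
    n, p = X.shape
    beta = rng.normal(size=(2, p))       # per-component regression coefficients
    sigma2 = np.full(2, y.var())         # per-component residual variances
    mix = np.array([0.5, 0.5])           # mixing proportions
    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] proportional to mix_k * N(y_i | x_i'beta_k, sigma2_k)
        mu = X @ beta.T                                          # shape (n, 2)
        dens = np.exp(-(y[:, None] - mu) ** 2 / (2 * sigma2))
        dens /= np.sqrt(2 * np.pi * sigma2)
        r = mix * dens
        r /= r.sum(axis=1, keepdims=True)
        # M-step: weighted least squares within each component
        for k in range(2):
            w = r[:, k]
            Xw = X * w[:, None]
            beta[k] = np.linalg.solve(Xw.T @ X, Xw.T @ y)        # X'WX beta = X'Wy
            resid = y - X @ beta[k]
            sigma2[k] = (w * resid ** 2).sum() / w.sum()
        mix = r.mean(axis=0)
    return beta, sigma2, mix
```

In the LOS application the two components might capture short-stay and long-stay subpopulations; the paper's extension adds random effects to both the mixing probability and the component linear predictors.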
Abstract:
A mathematical model that describes the operation of a sequential leach bed process for anaerobic digestion of the organic fraction of municipal solid waste (MSW) is developed and validated. This model assumes that ultimate mineralisation of the organic component of the waste occurs in three steps, namely solubilisation of particulate matter, fermentation to volatile organic acids (modelled as acetic acid) along with liberation of carbon dioxide and hydrogen, and methanogenesis from acetate and hydrogen. The model incorporates the ionic equilibrium equations arising from dissolution of carbon dioxide, generation of alkalinity from breakdown of solids and dissociation of acetic acid. Rather than a charge balance, a mass balance on the hydronium and hydroxide ions is used to calculate pH. The flow of liquid through the bed is modelled as occurring through two zones: a permeable zone with high flushing rates and a more stagnant zone. Some of the kinetic parameters for the biological processes were obtained from batch MSW digestion experiments. The parameters for the flow model were obtained from residence time distribution studies conducted using tritium as a tracer. The model was validated using data from leach bed digestion experiments in which a leachate volume equal to 10% of the fresh waste bed volume was sequenced. The model was then tested, without altering any kinetic or flow parameters, by varying the volume of leachate sequenced between the beds. Simulations for sequencing/recirculating 5 and 30% of the bed volume are presented and compared with experimental results. (C) 2002 Elsevier Science B.V. All rights reserved.
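A skeletal version of the three-step kinetics can be written as a small ODE system; the rate laws and constants below are placeholder assumptions for illustration, not the paper's fitted model, and the two-zone flow and ionic-equilibrium parts are omitted.

```python
# Three-step leach-bed kinetics: hydrolysis of particulate solids,
# fermentation of solubles to acetate, methanogenesis from acetate.
# All rate constants are assumed values, not the paper's estimates.
import numpy as np
from scipy.integrate import solve_ivp

K_HYD = 0.05   # 1/d, first-order solubilisation of particulates (assumed)
MU_MAX = 0.3   # 1/d, max growth rate of acetoclastic methanogens (assumed)
K_S = 0.5      # g/L, half-saturation constant for acetate (assumed)
Y = 0.05       # biomass yield on acetate (assumed)

def rhs(t, state):
    particulate, soluble, acetate, biomass, methane = state
    r_hyd = K_HYD * particulate                # solubilisation
    r_ferm = 0.2 * soluble                     # lumped fermentation to acetate (assumed first order)
    mu = MU_MAX * acetate / (K_S + acetate)    # Monod uptake of acetate
    r_meth = mu * biomass / Y                  # acetate consumption rate
    return [-r_hyd,
            r_hyd - r_ferm,
            r_ferm - r_meth,
            mu * biomass,
            (1 - Y) * r_meth]                  # methane COD from consumed acetate

sol = solve_ivp(rhs, (0, 60), [50.0, 0.0, 0.0, 0.1, 0.0])
print(sol.y[4, -1])   # cumulative methane COD at day 60
```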
Abstract:
Formal specifications can precisely and unambiguously define the required behavior of a software system or component. However, formal specifications are complex artifacts that need to be verified to ensure that they are consistent, complete, and validated against the requirements. Specification testing or animation tools exist to assist with this by allowing the specifier to interpret or execute the specification. However, currently little is known about how to do this effectively. This article presents a framework and tool support for the systematic testing of formal, model-based specifications. Several important generic properties that should be satisfied by model-based specifications are first identified. Following the idea of mutation analysis, we then use variants or mutants of the specification to check that these properties are satisfied. The framework also allows the specifier to test application-specific properties. All properties are tested for a range of states that are defined by the tester in the form of a testgraph, which is a directed graph that partially models the states and transitions of the specification being tested. Tool support is provided for the generation of the mutants, for automatically traversing the testgraph and executing the test cases, and for reporting any errors. The framework is demonstrated on a small specification and its application to three larger specifications is discussed. Experience indicates that the framework can be used effectively to test small to medium-sized specifications and that it can reveal a significant number of problems in these specifications.
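The mutation idea can be illustrated in a few lines: a "specification" (here just a bounded counter, not a real model-based notation), a mutant with a seeded boundary fault, and a set of operation paths standing in for testgraph traversals that must distinguish the two.

```python
# Toy mutation analysis: a mutant is "killed" when some test path
# drives the original and the mutant to different final states.
def spec(state, op):
    if op == "inc" and state < 10:
        return state + 1
    if op == "dec" and state > 0:
        return state - 1
    return state

def mutant(state, op):                  # seeded fault: "< 10" became "<= 10"
    if op == "inc" and state <= 10:
        return state + 1
    if op == "dec" and state > 0:
        return state - 1
    return state

def run(model, path, start=0):
    state = start
    for op in path:
        state = model(state, op)
    return state

# Paths a testgraph traversal might generate; the long run reaches the boundary.
paths = [["inc", "dec", "inc"], ["dec", "inc"], ["inc"] * 11]
killed = [run(spec, p) != run(mutant, p) for p in paths]
print(killed)   # [False, False, True]: only the boundary-reaching path kills it
```

This mirrors why testgraph coverage matters: a mutant surviving all traversed states signals either an equivalent mutant or a gap in the tested state space.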
Abstract:
A stress-wave force balance for measurement of thrust, lift, and pitching moment on a large scramjet model (40 kg in mass, 1.165 m in length) in a reflected shock tunnel has been designed, calibrated, and tested. Transient finite element analysis was used to model the performance of the balance. This modeling indicates that good decoupling of signals and low sensitivity of the balance to the distribution of the load can be achieved with a three-bar balance. The balance was constructed and calibrated by applying a series of point loads to the model. Good agreement between finite element analysis and experimental results was obtained, with the finite element analysis aiding in the interpretation of some experimental results. Force measurements were made in a shock tunnel both with and without fuel injection, and the measurements were compared with predictions using simple models of the scramjet and combustion. Results indicate that the balance is capable of resolving lift, thrust, and pitching moments with and without combustion. However, vibrations associated with tunnel operation interfered with the signals, indicating the importance of vibration isolation for accurate measurements.
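A hedged sketch of the signal-processing step common to stress-wave balances: the applied load is recovered by deconvolving the measured strain signal with an impulse response obtained from calibration. The single-channel, regularised least-squares version below uses a synthetic impulse response, not the three-bar balance's calibrated responses.

```python
# Discrete deconvolution y = G u: build the convolution matrix from a
# calibration impulse response g, then recover u by Tikhonov-regularised
# least squares. g, the load, and the noise level are all synthetic.
import numpy as np
from scipy.linalg import toeplitz

n, dt = 400, 1e-5
t = np.arange(n) * dt
g = np.exp(-t / 2e-4) * np.sin(2 * np.pi * 5e3 * t)     # assumed impulse response
u_true = np.where((t > 5e-4) & (t < 2e-3), 100.0, 0.0)  # step-like applied load
G = toeplitz(g, np.zeros(n)) * dt                       # lower-triangular convolution matrix
y = G @ u_true + np.random.default_rng(0).normal(0, 1e-4, n)

GtG = G.T @ G
lam = 1e-3 * np.trace(GtG) / n                          # regularisation scaled to the problem
u_hat = np.linalg.solve(GtG + lam * np.eye(n), G.T @ y)
```

For a multi-component balance the same idea extends to a matrix of impulse responses linking each load component to each strain channel.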
Abstract:
We analyse the relation between local two-atom and total multi-atom entanglements in the Dicke system composed of a large number of atoms. We use concurrence as a measure of entanglement between two atoms in the multi-atom system, and the spin squeezing parameter as a measure of entanglement in the whole n-atom system. In addition, the influence of the squeezing phase and bandwidth on entanglement in the steady-state Dicke system is discussed. It is shown that the introduction of a squeezed field leads to a significant enhancement of entanglement between two atoms, and the entanglement increases with increasing degree of squeezing and bandwidth of the incident squeezed field. In the presence of a coherent field the entanglement exhibits a strong dependence on the relative phase between the squeezed and coherent fields, and can jump quite rapidly from unentangled to strongly entangled values when the phase changes from zero to π. We find that this jump in the degree of entanglement is due to a flip of the spin squeezing from one quadrature component of the atomic spin to the other when the phase changes from zero to π. We also analyse the dependence of the entanglement on the number of atoms and find that, despite the reduction in the degree of entanglement between two atoms, a large entanglement is present in the whole n-atom system, and the degree of entanglement increases as the number of atoms increases.
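The two-atom measure used here, Wootters' concurrence, is straightforward to compute from a two-qubit density matrix; the sketch below evaluates it for a Werner state as a stand-in, since reproducing the Dicke steady state would require the full master-equation solution.

```python
# Wootters concurrence C(rho) = max(0, l1 - l2 - l3 - l4), where l_i are the
# square roots of the eigenvalues of rho * rho_tilde in decreasing order.
import numpy as np

def concurrence(rho):
    sy = np.array([[0, -1j], [1j, 0]])
    R = np.kron(sy, sy)
    rho_tilde = R @ rho.conj() @ R                       # spin-flipped state
    lam = np.sqrt(np.abs(np.linalg.eigvals(rho @ rho_tilde)))
    lam = np.sort(lam.real)[::-1]
    return max(0.0, lam[0] - lam[1] - lam[2] - lam[3])

# Werner state: mixture of a singlet and white noise; C = max(0, (3p - 1)/2).
psi = np.array([0, 1, -1, 0]) / np.sqrt(2)
p = 0.8
rho = p * np.outer(psi, psi) + (1 - p) * np.eye(4) / 4
print(round(concurrence(rho), 3))                        # 0.7
```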
Abstract:
West Nile Virus (WNV) is a mosquito-borne flavivirus with a rapidly expanding global distribution. Infection causes severe neurological disease and fatalities in both human and animal hosts. The West Nile viral protease (NS2B-NS3) is essential for post-translational processing in host-infected cells of a viral polypeptide precursor into structural and functional viral proteins, and its inhibition could represent a potential treatment for viral infections. This article describes the design, expression, and enzymatic characterization of a catalytically active recombinant WNV protease, consisting of a 40-residue component of cofactor NS2B tethered via a noncleavable nonapeptide (G₄SG₄) to the N-terminal 184 residues of NS3. A chromogenic assay using synthetic para-nitroanilide (pNA) hexapeptide substrates was used to identify optimal enzyme-processing conditions (pH 9.5, I < 0.1 M, 30% glycerol, 1 mM CHAPS), preferred substrate cleavage sites, and the first competitive inhibitor (Ac-FASGKR-H, IC50 ≈ 1 μM). A putative three-dimensional structure of WNV protease, created through homology modeling based on the crystal structures of the Dengue-2 and Hepatitis C NS3 viral proteases, provides some valuable insights for structure-based design of potent and selective inhibitors of WNV protease.
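An IC50 of the kind reported for such an inhibitor is typically obtained by fitting a sigmoidal inhibition curve to the assay readout; a generic version of that fit is sketched below with fabricated data points, purely to illustrate the calculation rather than the paper's measurements.

```python
# Generic IC50 fit for a chromogenic inhibition assay.
# Concentrations and activities are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

def hill(conc, ic50, slope):
    """Fractional activity remaining at inhibitor concentration conc (uM)."""
    return 1.0 / (1.0 + (conc / ic50) ** slope)

conc = np.array([0.01, 0.1, 0.3, 1.0, 3.0, 10.0, 100.0])        # uM, fabricated
activity = np.array([0.99, 0.92, 0.76, 0.52, 0.26, 0.09, 0.01])  # fabricated
(ic50, slope), _ = curve_fit(hill, conc, activity, p0=[1.0, 1.0])
print(f"IC50 ~ {ic50:.2f} uM, Hill slope ~ {slope:.2f}")
```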
Abstract:
When studying genotype × environment interaction in multi-environment trials, plant breeders and geneticists often consider one of the effects, environments or genotypes, to be fixed and the other to be random. However, there are two main formulations for variance component estimation for the mixed model situation, referred to as the unconstrained-parameters (UP) and constrained-parameters (CP) formulations. These formulations give different estimates of genetic correlation and heritability as well as different tests of significance for the random effects factor. The definition of main effects and interactions and the consequences of such definitions should be clearly understood, and the selected formulation should be consistent for both fixed and random effects. A discussion of the practical outcomes of using the two formulations in the analysis of balanced data from multi-environment trials is presented. It is recommended that the CP formulation be used because of the meaning of its parameters and the corresponding variance components. When managed (fixed) environments are considered, users will have more confidence in prediction for them but will not be overconfident in prediction in the target (random) environments. Genetic gain (predicted response to selection in the target environments from the managed environments) is independent of formulation.
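For the balanced case the abstract discusses, the usual method-of-moments variance components and entry-mean heritability can be written compactly. The estimators below follow the unconstrained-parameters (UP) expected mean squares; the paper's point is precisely that the CP formulation changes these expectations, and hence the resulting estimates and tests.

```python
# Balanced genotype x environment x replicate ANOVA variance components
# (UP expected mean squares) and broad-sense heritability on an entry-mean
# basis. Requires at least two replicates.
import numpy as np

def variance_components(y):
    """y: array of shape (n_genotypes, n_environments, n_replicates)."""
    g, e, r = y.shape
    grand = y.mean()
    ms_gen = e * r * ((y.mean(axis=(1, 2)) - grand) ** 2).sum() / (g - 1)
    ms_ge = r * ((y.mean(axis=2)
                  - y.mean(axis=(1, 2))[:, None]
                  - y.mean(axis=(0, 2))[None, :] + grand) ** 2).sum() / ((g - 1) * (e - 1))
    ms_err = ((y - y.mean(axis=2, keepdims=True)) ** 2).sum() / (g * e * (r - 1))
    var_err = ms_err
    var_ge = (ms_ge - ms_err) / r               # G x E component
    var_gen = (ms_gen - ms_ge) / (e * r)        # genotypic component
    h2 = var_gen / (var_gen + var_ge / e + var_err / (e * r))
    return var_gen, var_ge, var_err, h2
```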
Abstract:
Many studies on birds focus on the collection of data through an experimental design, suitable for investigation in a classical analysis of variance (ANOVA) framework. Although many findings are confirmed by one or more experts, expert information is rarely used in conjunction with the survey data to enhance the explanatory and predictive power of the model. We explore this neglected aspect of ecological modelling through a study on Australian woodland birds, focusing on the potential impact of different intensities of commercial cattle grazing on bird density in woodland habitat. Using WinBUGS, we examine a number of Bayesian hierarchical random effects models that cater for overdispersion and a high frequency of zeros in the data, and explore the variation between and within different grazing regimes and species. The impact and value of expert information is investigated through the inclusion of priors that reflect the experience of 20 experts in the field of bird responses to disturbance. Results indicate that expert information moderates the survey data, especially in situations where there are few or no data. When experts agreed, credible intervals for predictions were tightened considerably. When experts failed to agree, results were similar to those evaluated in the absence of expert information. Overall, we found that without expert opinion our knowledge was quite weak. The fact that the survey data are, in general, quite consistent with expert opinion shows that we do know something about birds and grazing, and that we could learn much faster if this approach were used more widely in ecology, where data are scarce. Copyright (c) 2005 John Wiley & Sons, Ltd.
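A minimal stand-in for the core of such a model is a zero-inflated Poisson likelihood with an expert-elicited prior on the grazing effect. The sketch below finds a MAP estimate rather than running full MCMC, and the data, prior, and single-covariate structure are invented for illustration; the paper's WinBUGS models are richer hierarchical versions.

```python
# Zero-inflated Poisson regression with an informative (expert) prior on the
# grazing coefficient; MAP estimation by numerical optimisation.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

counts = np.array([0, 0, 3, 1, 0, 5, 0, 2, 0, 0])     # fabricated bird counts
grazing = np.array([1, 1, 1, 1, 1, 0, 0, 0, 0, 0])    # 1 = heavily grazed site
prior_mu, prior_sd = -0.5, 0.3                         # pooled expert prior on beta (assumed)

def neg_log_posterior(theta):
    b0, beta, logit_p0 = theta
    p0 = 1 / (1 + np.exp(-logit_p0))                   # extra-zero probability
    lam = np.exp(b0 + beta * grazing)
    pois = np.exp(-lam) * lam ** counts                # Poisson pmf, constant 1/counts! dropped
    lik = np.where(counts == 0, p0 + (1 - p0) * pois, (1 - p0) * pois)
    return -(np.log(lik).sum() + norm.logpdf(beta, prior_mu, prior_sd))

map_est = minimize(neg_log_posterior, x0=[0.0, 0.0, 0.0]).x
print(map_est)   # intercept, grazing effect, logit of zero-inflation
```

With sparse data the prior term dominates the grazing coefficient, which is the moderating behaviour the abstract describes.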
Abstract:
The estimated parameters of output distance functions frequently violate the monotonicity, quasi-convexity and convexity constraints implied by economic theory, leading to estimated elasticities and shadow prices that are incorrectly signed, and ultimately to perverse conclusions concerning the effects of input and output changes on productivity growth and relative efficiency levels. We show how a Bayesian approach can be used to impose these constraints on the parameters of a translog output distance function. Implementing the approach involves the use of a Gibbs sampler with data augmentation. A Metropolis-Hastings algorithm is also used within the Gibbs sampler to simulate observations from truncated pdfs. Our methods are developed for the case where panel data are available and technical inefficiency effects are assumed to be time-invariant. Two models, a fixed effects model and a random effects model, are developed and applied to panel data on 17 European railways. We observe significant changes in estimated elasticities and shadow price ratios when the regularity restrictions are imposed. (c) 2004 Elsevier B.V. All rights reserved.
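The constraint-imposing step can be illustrated with a single Metropolis-Hastings-within-Gibbs update: when the full conditional is normal and the proposal is drawn from that same normal, a draw inside the constraint region is accepted with probability one and a draw outside is rejected, which targets the truncated conditional exactly. The conditional moments and the constraint below are placeholders, not the translog model.

```python
# One MH-within-Gibbs step targeting a normal full conditional truncated
# to the region where constraint(b) is True. With an independence proposal
# equal to the untruncated conditional, the MH ratio is 1 inside the region
# and 0 outside, so accept-inside / keep-current-outside is exact.
import numpy as np

rng = np.random.default_rng(1)

def constrained_draw(current, cond_mean, cond_sd, constraint):
    proposal = rng.normal(cond_mean, cond_sd)
    return proposal if constraint(proposal) else current

# e.g. monotonicity as a positivity constraint on an elasticity-like parameter
beta = 0.1
for _ in range(1000):
    beta = constrained_draw(beta, cond_mean=0.05, cond_sd=0.2,
                            constraint=lambda b: b >= 0.0)
print(beta)
```

In the paper's setting the constraint region is defined jointly over many translog coefficients, so the accept/reject test evaluates monotonicity and curvature conditions rather than a simple sign.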
Abstract:
Defining the pharmacokinetics of drugs in overdose is complicated. Deliberate self-poisoning is generally impulsive and associated with poor accuracy in the dose history. In addition, early blood samples are rarely collected to characterize the whole plasma concentration-time profile, and the effect of decontamination on the pharmacokinetics is uncertain. The aim of this study was to explore a fully Bayesian methodology for population pharmacokinetic analysis of data that arose from deliberate self-poisoning with citalopram. Prior information on the pharmacokinetic parameters was elicited from 14 published studies on citalopram taken in therapeutic doses. The data set included concentration-time data from 53 patients studied after 63 citalopram overdose events (dose range: 20-1700 mg). Activated charcoal was administered between 0.5 and 4 h after 17 overdose events. The clinical investigator graded the veracity of the patients' dosing histories on a 5-point ordinal scale. Inclusion of informative priors stabilised the pharmacokinetic model, and the population mean values could be estimated well. There were no indications of non-linear clearance after excessive doses. The final model included an estimated uncertainty in the dose amount, which a simulation study showed did not affect the model's ability to characterise the effects of activated charcoal. The effect of activated charcoal on clearance and bioavailability was pronounced, resulting in a 72% increase and a 22% decrease, respectively. These findings suggest that charcoal administration is potentially beneficial after citalopram overdose. The methodology explored seems promising for characterising the dose-exposure relationship in toxicological settings.
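The reported charcoal effects are easy to visualise with a standard one-compartment oral-absorption curve. The 1.72 clearance multiplier and 0.78 bioavailability multiplier below come from the abstract's 72% increase and 22% decrease; all other parameter values are illustrative, not the fitted citalopram estimates.

```python
# One-compartment first-order absorption/elimination (Bateman) curve, with
# and without the charcoal effect on clearance (CL) and bioavailability (F).
# ka, CL, V and the dose are assumed values for illustration.
import numpy as np

def conc(t, dose, ka, cl, v, f=1.0):
    """Plasma concentration at times t (h) for oral dose (mg)."""
    ke = cl / v
    return (f * dose * ka / (v * (ka - ke))) * (np.exp(-ke * t) - np.exp(-ka * t))

t = np.linspace(0, 48, 200)                                       # hours
base = conc(t, dose=400, ka=1.0, cl=20.0, v=1000.0)
charcoal = conc(t, dose=400, ka=1.0, cl=20.0 * 1.72, v=1000.0,    # +72% CL
                f=1.0 - 0.22)                                     # -22% F
print(base.max(), charcoal.max())   # charcoal lowers and shortens exposure
```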
Abstract:
Although cannabis is the most commonly used illicit drug, duration of cannabis use is typically short, with many of those who initiate cannabis use ceasing use by their late twenties. In this paper we analyze data from a volunteer Australian cohort of 6265 male and female twins to examine whether the duration of cannabis use is an informative phenotype for future genetic analyses. Genetic modeling indicated: (a) moderate genetic influences on duration of cannabis use in both males (41%; 95% CI = 31–51) and females (55%; 95% CI = 46–63); (b) strong genetic influences on cannabis dependence in both males (72%, 95% CI = 61–81) and females (62%, 95% CI = 48–74); (c) no evidence of shared environmental influences on duration of cannabis use or on cannabis dependence in either males or females. Importantly, this model fitting indicated that a substantial component of the genetic influences on duration of cannabis use (rg = .90, 95% CI = .77–.99 in males; .70, 95% CI = .57–.83 in females) was shared with those influencing liability to cannabis dependence. While the genetic correlations were high in both women and men, lifetime duration of cannabis use may be uniquely informative in assessing components of liability to cannabis use.
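As a back-of-envelope companion to the formal genetic model fitting, Falconer's formulas recover ACE variance components directly from monozygotic (MZ) and dizygotic (DZ) twin correlations; the correlations below are illustrative, not the study's values.

```python
# Falconer decomposition: A (additive genetic), C (shared environment),
# E (unique environment + error) from MZ and DZ twin correlations.
def ace_from_twin_correlations(r_mz, r_dz):
    a2 = 2 * (r_mz - r_dz)    # heritability estimate
    c2 = 2 * r_dz - r_mz      # shared-environment component
    e2 = 1 - r_mz             # unique environment and measurement error
    return a2, c2, e2

print(ace_from_twin_correlations(r_mz=0.55, r_dz=0.28))  # (0.54, 0.01, 0.45)
```

A C component near zero, as in this illustrative output, corresponds to the abstract's finding of no evidence for shared environmental influences; the genetic correlation rg additionally requires a bivariate model fitted to both phenotypes.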
Abstract:
Standard factorial designs sometimes may be inadequate for experiments that aim to estimate a generalized linear model, for example, for describing a binary response in terms of several variables. A method is proposed for finding exact designs for such experiments that uses a criterion allowing for uncertainty in the link function, the linear predictor, or the model parameters, together with a design search. Designs are assessed and compared by simulation of the distribution of efficiencies relative to locally optimal designs over a space of possible models. Exact designs are investigated for two applications, and their advantages over factorial and central composite designs are demonstrated.
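The robustness criterion can be approximated by averaging a design's D-optimality objective over draws from a prior on the model parameters; a sketch for a one-variable logistic model is given below, with the prior and candidate design invented for illustration.

```python
# Pseudo-Bayesian D-criterion for logistic regression: average the
# log-determinant of the Fisher information over prior draws of beta.
import numpy as np

rng = np.random.default_rng(0)

def expected_log_det(design, prior_draws):
    """design: (n, p) model matrix incl. intercept; prior_draws: (m, p)."""
    total = 0.0
    for beta in prior_draws:
        eta = design @ beta
        w = np.exp(eta) / (1 + np.exp(eta)) ** 2         # logistic GLM weights
        info = design.T @ (design * w[:, None])          # Fisher information
        total += np.linalg.slogdet(info)[1]
    return total / len(prior_draws)

x = np.linspace(-1, 1, 8)                                # candidate 8-run design
design = np.column_stack([np.ones_like(x), x])
prior = rng.normal([0.0, 2.0], [0.5, 0.5], size=(200, 2))
print(expected_log_det(design, prior))
```

A design search would compare this score across candidate designs (including factorial and central composite benchmarks) and keep the best, which is the comparison the paper formalises via simulated efficiency distributions.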
Abstract:
The speculation that climate change may impact on sustainable fish production suggests a need to understand how these effects influence fish catch on a broad scale. With a gross annual value of A$2.2 billion, the fishing industry is a significant primary industry in Australia. Many commercially important fish species use estuarine habitats such as mangroves, tidal flats and seagrass beds as nurseries or breeding grounds and have lifecycles correlated with rainfall and temperature patterns. Correlation of catches of mullet (e.g. Mugil cephalus) and barramundi (Lates calcarifer) with rainfall suggests that fisheries may be sensitive to the effects of climate change. This work reviews key commercial fish and crustacean species and their links to estuaries and climate parameters. A conceptual model demonstrates the ecological and biophysical links of estuarine habitats that influence capture fisheries production. The difficulty involved in explaining the effect of climate change on fisheries, arising from the lack of ecological knowledge, may be overcome by relating climate parameters to long-term fish catch data. Catch per unit effort (CPUE), rainfall, the Southern Oscillation Index (SOI) and catch time series for specific combinations of climate seasons and regions have been explored, and surplus production models have been applied to Queensland's commercial fish catch data with the program CLIMPROD. Results indicate that up to 30% of the variation in Queensland's total fish catch, and up to 80% of the barramundi catch variation for specific regions, can be explained by rainfall, often with a lagged response to rainfall events. Our approach allows an evaluation of the economic consequences of climate parameters on estuarine fisheries, thus highlighting the need to develop forecast models and to manage estuaries for future climate change impact by adjusting quotas for climate-change-sensitive species. Different modelling approaches are discussed with respect to their forecast ability. (c) 2006 Elsevier Ltd. All rights reserved.
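The screening step of relating catch to lagged rainfall can be illustrated in a few lines; the data below are synthetic, and this simple lagged correlation is not a substitute for the climate-dependent surplus production models fitted with CLIMPROD.

```python
# Screen lagged rainfall as a predictor of annual CPUE: compute R^2 of
# CPUE against rainfall shifted by 0-3 years. All data are synthetic.
import numpy as np

rng = np.random.default_rng(2)
rain = rng.gamma(4.0, 250.0, size=30)                    # mm/yr, fabricated
cpue = 0.02 * np.roll(rain, 1) + rng.normal(0, 3, 30)    # responds at lag 1 by construction

for lag in range(4):
    x = np.roll(rain, lag)[lag:]                         # rainfall `lag` years earlier
    y = cpue[lag:]
    r = np.corrcoef(x, y)[0, 1]
    print(f"lag {lag}: R^2 = {r ** 2:.2f}")              # peaks at lag 1
```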