952 results for Mixed-effect models


Relevance: 90.00%

Abstract:

Over broad thermal gradients, the effect of temperature on aerobic respiration and photosynthesis rates explains variation in community structure and function. Yet in local communities, temperature-dependent trophic interactions may dominate the effects of warming. We tested the hypothesis that food chain length modifies the temperature dependence of ecosystem fluxes and community structure. In a multi-generation aquatic food web experiment, increasing temperature strengthened a trophic cascade, altering the effect of temperature on estimated mass-corrected ecosystem fluxes. Compared to consumer-free and 3-level food chains, grazer-algae (2-level) food chains responded most strongly to the temperature gradient. Temperature altered community structure, shifting species composition and reducing zooplankton density and body size. Still, food chain length did not alter the temperature dependence of net ecosystem fluxes. We conclude that, locally, food chain length interacts with temperature to modify community structure, but only temperature, not food chain length, influenced net ecosystem fluxes.
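Temperature dependences of this kind are commonly summarised by a Boltzmann-Arrhenius relation, ln(flux) = ln(b0) - E/(kT), where E is an activation energy. As a minimal sketch (with simulated numbers, not the experiment's data), E can be recovered by regressing log flux on -1/(kT):

```python
import numpy as np

k = 8.617e-5  # Boltzmann constant, eV/K

# Simulated flux data with a known activation energy (hypothetical values):
# ln(flux) = ln(b0) - E/(k*T) + noise
rng = np.random.default_rng(0)
E_true, ln_b0 = 0.6, 2.0             # eV; 0.6 eV is a typical respiration value
T = np.linspace(278, 308, 30)        # 5-35 degrees C, in kelvin
ln_flux = ln_b0 - E_true / (k * T) + rng.normal(0, 0.05, T.size)

# Regress ln(flux) on x = -1/(kT); the slope estimates the activation energy E
x = -1.0 / (k * T)
E_hat, intercept = np.polyfit(x, ln_flux, 1)
print(f"estimated activation energy: {E_hat:.2f} eV")
```

The same slope, estimated separately within each food-chain treatment, is what a test of treatment-dependent temperature sensitivity would compare.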

Relevance: 90.00%

Abstract:

The objective of this study is to test the effect of consumers' variety-seeking behaviour on the distance a tourist is prepared to travel, that is, their willingness to travel further. The empirical application is carried out in Spain, in a context with 26 destinations, using Mixed Logit Models. The results show that variety-seeking behaviour reduces the dissuasive effect of distance.
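A mixed logit model captures this kind of heterogeneity by letting the distance coefficient vary across travellers. The following sketch (hypothetical utilities and coefficients, not the paper's estimates) simulates choice probabilities by averaging logit probabilities over random draws of the distance taste:

```python
import numpy as np

# Mixed logit sketch: utility of destination j is beta_i * dist_j, with the
# distance coefficient varying across travellers, beta_i ~ Normal(mu, sigma^2).
rng = np.random.default_rng(1)
dist = np.array([50.0, 200.0, 400.0])   # km to three hypothetical destinations
mu, sigma = -0.005, 0.002               # mean and spread of distance sensitivity

# Simulated choice probabilities: average the logit formula over taste draws
draws = rng.normal(mu, sigma, size=10_000)
util = draws[:, None] * dist[None, :]                 # (draws, alternatives)
expu = np.exp(util - util.max(axis=1, keepdims=True)) # stable softmax
probs = (expu / expu.sum(axis=1, keepdims=True)).mean(axis=0)
print("simulated choice probabilities:", probs.round(3))
```

Variety-seekers correspond to a less negative beta_i, which flattens the probability decay with distance, i.e. a weaker dissuasive effect.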

Relevance: 90.00%

Abstract:

Thesis (Master's)--University of Washington, 2016-06

Relevance: 90.00%

Abstract:

Background: To determine the pharmacokinetics (PK) of a new i.v. formulation of paracetamol (Perfalgan) in children ≤15 yr of age. Methods: After obtaining written informed consent, children under 16 yr of age were recruited to this study. Blood samples were obtained at 0, 15, and 30 min and 1, 2, 4, 6, and 8 h after administration of a weight-dependent dose of i.v. paracetamol. Paracetamol concentration was measured using a validated high-performance liquid chromatography assay with ultraviolet detection, with a lower limit of quantification (LLOQ) of 900 pg on column and an intra-day coefficient of variation of 14.3% at the LLOQ. Population PK analysis was performed by non-linear mixed-effect modelling using NONMEM. Results: One hundred and fifty-nine blood samples from 33 children aged 1.8–15 yr (weight 13.7–56 kg) were analysed. Data were best described by a two-compartment model. Only body weight as a covariate significantly improved the goodness of fit of the model. The final population models for paracetamol clearance (CL), central volume of distribution (V1), inter-compartmental clearance (Q), and peripheral volume of distribution (V2) were 16.51 × (WT/70)^0.75, 28.4 × (WT/70), 11.32 × (WT/70)^0.75, and 13.26 × (WT/70), respectively (CL and Q in litres per hour, V1 and V2 in litres, WT in kilograms). Conclusions: In children aged 1.8–15 yr, the PK parameters for i.v. paracetamol were not influenced directly by age but by total body weight; under allometric size scaling, weight significantly affected the clearances (CL, Q) and volumes of distribution (V1, V2).
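The final model scales each parameter allometrically from a 70 kg reference individual, with exponent 0.75 for clearances and 1 for volumes. A small sketch of applying the reported population values to an individual child's weight:

```python
def paracetamol_pk_params(wt_kg: float) -> dict:
    """Scale the reported population PK parameters to a child's weight.

    Allometric exponent 0.75 for clearances, 1 for volumes; reference
    weight 70 kg; typical values as given in the abstract.
    """
    f_cl = (wt_kg / 70.0) ** 0.75   # allometric factor for flows
    f_v = wt_kg / 70.0              # linear factor for volumes
    return {
        "CL": 16.51 * f_cl,   # clearance, L/h
        "V1": 28.4 * f_v,     # central volume of distribution, L
        "Q": 11.32 * f_cl,    # inter-compartmental clearance, L/h
        "V2": 13.26 * f_v,    # peripheral volume of distribution, L
    }

print(paracetamol_pk_params(20.0))  # e.g. a hypothetical 20 kg child
```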

Relevance: 90.00%

Abstract:

Objective: To describe the effect of age and body size on the enantiomer-selective pharmacokinetics (PK) of intravenous ketorolac in children using a microanalytical assay. Methods: Blood samples were obtained at 0, 15, and 30 min and at 1, 2, 4, 6, 8, and 12 h after a weight-dependent dose of ketorolac. Enantiomer concentrations were measured using a liquid chromatography tandem mass spectrometry method. Non-linear mixed-effect modelling was used to assess PK parameters. Key findings: Data from 11 children (1.7–15.6 years, weight 10.7–67.4 kg) were best described by a two-compartment model for R(+), S(−), and racemic ketorolac. Only weight (WT) significantly improved the goodness of fit. The final population models were: for R(+), CL = 1.5 × (WT/46)^0.75, V1 = 8.2 × (WT/46), Q = 3.4 × (WT/46)^0.75, V2 = 7.9 × (WT/46); for S(−), CL = 2.98 × (WT/46), V1 = 13.2 × (WT/46), Q = 2.8 × (WT/46)^0.75, V2 = 51.5 × (WT/46); and for racemic ketorolac, CL = 1.1 × (WT/46)^0.75, V1 = 4.9 × (WT/46), Q = 1.7 × (WT/46)^0.75, V2 = 6.3 × (WT/46). Conclusions: Only body weight influenced the PK parameters for R(+) and S(−) ketorolac; allometric size scaling significantly affected the clearances (CL, Q) and volumes of distribution (V1, V2).
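To illustrate what the fitted two-compartment model implies, the sketch below simulates the central concentration after a hypothetical 30 mg i.v. bolus using the racemic parameters at the 46 kg reference weight (CL = 1.1 L/h, V1 = 4.9 L, Q = 1.7 L/h, V2 = 6.3 L). The dose and the simple Euler integration are illustrative assumptions, not part of the study:

```python
import numpy as np

# Two-compartment disposition: amounts a1 (central) and a2 (peripheral)
#   da1/dt = -(CL/V1)*a1 - (Q/V1)*a1 + (Q/V2)*a2
#   da2/dt =  (Q/V1)*a1 - (Q/V2)*a2
CL, V1, Q, V2 = 1.1, 4.9, 1.7, 6.3   # racemic ketorolac at 46 kg
dose = 30.0                          # mg, hypothetical i.v. bolus

dt = 0.001                           # h, forward-Euler step
times = np.arange(0.0, 12.0, dt)
a1, a2 = dose, 0.0
conc = np.empty(times.size)
for i in range(times.size):
    conc[i] = a1 / V1                # central concentration, mg/L
    da1 = (-(CL / V1) * a1 - (Q / V1) * a1 + (Q / V2) * a2) * dt
    da2 = ((Q / V1) * a1 - (Q / V2) * a2) * dt
    a1, a2 = a1 + da1, a2 + da2

print(f"C(0) = {conc[0]:.2f} mg/L, C(12 h) = {conc[-1]:.3f} mg/L")
```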

Relevance: 90.00%

Abstract:

People go through their lives making all kinds of decisions, and some of these decisions affect their demand for transportation: for example, their choices of where to live and where to work, how and when to travel, and which route to take. Transport-related choices are typically time dependent and characterized by large numbers of alternatives that can be spatially correlated. This thesis deals with models that can be used to analyze and predict discrete choices in large-scale networks. The proposed models and methods are highly relevant for, but not limited to, transport applications. We model decisions as sequences of choices within the dynamic discrete choice framework, also known as parametric Markov decision processes. Such models are known to be difficult to estimate and to apply for prediction, because dynamic programming problems need to be solved in order to compute choice probabilities. In this thesis we show that it is possible to exploit the network structure and the flexibility of dynamic programming so that the dynamic discrete choice modeling approach is not only useful for modeling time-dependent choices, but also makes it easier to model large-scale static choices. The thesis consists of seven articles containing a number of models and methods for estimating, applying, and testing large-scale discrete choice models. In the following we group the contributions under three themes: route choice modeling, large-scale multivariate extreme value (MEV) model estimation, and nonlinear optimization algorithms.
Five articles are related to route choice modeling. We propose different dynamic discrete choice models, based on the MEV and mixed logit models, that allow paths to be correlated. The resulting route choice models become expensive to estimate, and we address this challenge by proposing methods that reduce the estimation cost. For example, we propose a decomposition method that not only opens up the possibility of mixing, but also speeds up the estimation of simple logit models, which also has implications for traffic simulation. Moreover, we compare the utility maximization and regret minimization decision rules, and we propose a misspecification test for logit-based route choice models. The second theme concerns the estimation of static discrete choice models with large choice sets. We establish that a class of MEV models can be reformulated as dynamic discrete choice models on the networks of their correlation structures. These dynamic models can then be estimated quickly using dynamic programming techniques and an efficient nonlinear optimization algorithm. Finally, the third theme focuses on structured quasi-Newton techniques for estimating discrete choice models by maximum likelihood. We examine and adapt switching methods that can easily be integrated into the usual optimization algorithms (line search and trust region) to accelerate the estimation process. The proposed dynamic discrete choice models and estimation methods can be used in various discrete choice applications. In the area of big data analytics, models that can deal with large choice sets and sequential choices are important. Our research can therefore be of interest in various demand analysis applications (predictive analytics) or can be integrated with optimization models (prescriptive analytics). Furthermore, our studies indicate the potential of dynamic programming techniques in this context, even for static models, which opens up a variety of future research directions.
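The dynamic programming step at the core of this framework can be sketched on a toy acyclic network (hypothetical link utilities, not the thesis's models): the value of a node is the logsum, over outgoing links, of link utility plus the downstream value, and link choice probabilities are then a logit on those quantities:

```python
import numpy as np

# Toy 4-node network; utility = -travel cost on each link (hypothetical).
links = {                       # node -> [(next_node, utility)]
    0: [(1, -1.0), (2, -1.5)],
    1: [(3, -1.2)],
    2: [(3, -0.5)],
}
V = {3: 0.0}                    # destination node: value 0 by convention

# Backward induction: value of a node = logsum over outgoing links of
# (link utility + value of the downstream node). This is the expected
# maximum utility under i.i.d. extreme value errors.
for node in sorted(links, reverse=True):
    vals = [u + V[nxt] for nxt, u in links[node]]
    V[node] = np.log(np.sum(np.exp(vals)))

# Link choice probabilities at the origin: logit on (utility + downstream value)
vals = np.array([u + V[nxt] for nxt, u in links[0]])
p = np.exp(vals - V[0])
print("values:", {k: round(v, 3) for k, v in V.items()})
print("P(links out of origin):", p.round(3))
```

On cyclic road networks the same value function is defined by a fixed-point system rather than a single backward pass, which is where the estimation cost discussed above comes from.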


Relevance: 90.00%

Abstract:

The aim of this study was to identify within-season differences in basketball players' game-related statistics according to team quality and playing time. The sample comprised 5309 records from 198 players in the Spanish professional basketball league (2007-2008). Factor analysis with principal components was applied to the game-related statistics gathered from the official box-scores, which limited the analysis to five factors (free-throws, 2-point field-goals, 3-point field-goals, passes, and errors) and two variables (defensive and offensive rebounds). A two-step cluster analysis classified the teams as stronger (69 ± 8 winning percentage), intermediate (43 ± 5 winning percentage), and weaker teams (32 ± 5 winning percentage); individual players were classified based on playing time as important players (28 ± 4 min) or less important players (16 ± 4 min). Seasonal variation was analysed monthly in eight periods. A mixed linear model was applied to identify the effects of team quality and playing time within the months of the season on the previously identified factors and game-related statistics. No significant effect of season period was observed. A team quality effect was identified, with stronger teams being superior in terms of 2-point field-goals and passes. The weaker teams were the worst at defensive rebounding (stronger teams: 0.17 ± 0.05; intermediate teams: 0.17 ± 0.06; weaker teams: 0.15 ± 0.03; P = 0.001). While playing time was significant in almost all variables, errors were the most important factor when contrasting important and less important players, with fewer errors being made by important players. The trends identified can help coaches and players to create performance profiles according to team quality and playing time. However, these performance profiles appear to be independent of season period.

Relevance: 90.00%

Abstract:

Background: To identify those characteristics of self-management interventions in patients with heart failure (HF) that are effective in influencing health-related quality of life, mortality, and hospitalizations. Methods and Results: Randomized trials on self-management interventions conducted between January 1985 and June 2013 were identified, and individual patient data were requested for meta-analysis. Generalized mixed-effects models and Cox proportional hazard models including frailty terms were used to assess the relation between characteristics of interventions and health-related outcomes. Twenty randomized trials (5624 patients) were included. Longer intervention duration reduced mortality risk (hazard ratio 0.99, 95% confidence interval [CI] 0.97–0.999 per month increase in duration), risk of HF-related hospitalization (hazard ratio 0.98, 95% CI 0.96–0.99), and HF-related hospitalization at 6 months (risk ratio 0.96, 95% CI 0.92–0.995). Although results were not consistent across outcomes, interventions comprising standardized training of interventionists, peer contact, log keeping, or goal-setting skills appeared less effective than interventions without these characteristics. Conclusion: No specific program characteristics were consistently associated with better effects of self-management interventions, but longer duration seemed to improve the effect of self-management interventions on several outcomes. Future research using factorial trial designs and process evaluations is needed to understand the working mechanism of specific program characteristics of self-management interventions in HF patients.
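A per-month hazard ratio compounds multiplicatively over intervention duration, so the cumulative implication of the reported 0.99 per month can be checked with one line of arithmetic (an illustration of the scale of the effect, not a reanalysis):

```python
# Hazard ratio of 0.99 per additional month of intervention duration:
# a programme 12 months longer implies a hazard ratio of 0.99**12.
hr_per_month = 0.99
months = 12
hr_12 = hr_per_month ** months
print(f"HR for +12 months of duration: {hr_12:.3f}")  # roughly 11% lower hazard
```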

Relevance: 90.00%

Abstract:

BACKGROUND AND AIMS: Previous studies have shown that antidepressants reduce inflammation in animal models of colitis. The present trial aimed to examine whether fluoxetine added to standard therapy for Crohn's disease [CD] maintained remission and improved quality of life [QoL] and/or mental health in people with CD, as compared to placebo. METHODS: A parallel, randomized, double-blind, placebo-controlled trial was conducted. Participants with clinically established CD, with quiescent or only mild disease, were randomly assigned to receive either fluoxetine 20 mg daily or placebo, and followed for 12 months. Participants provided blood and stool samples and completed mental health and QoL questionnaires. Immune functions were assessed by stimulated cytokine secretion [CD3/CD28 stimulation] and flow cytometry for cell type. Linear mixed-effects models were used to compare groups. RESULTS: Of the 26 participants, 14 were randomized to receive fluoxetine and 12 to placebo. Overall, 14 [54%] participants were male. The mean age was 37.4 [SD=13.2] years. Fluoxetine had no effect on inflammatory bowel disease activity measured using either the Crohn's Disease Activity Index [F(3, 27.5)=0.064, p=0.978] or faecal calprotectin [F(3, 32.5)=1.08, p=0.371], but did have modest effects on immune function. There was no effect of fluoxetine on physical, psychological, social, or environmental QoL, anxiety, or depressive symptoms as compared to placebo [all p>0.05]. CONCLUSIONS: In this small pilot clinical trial, fluoxetine was not superior to placebo in maintaining remission or improving QoL. [Trial ID: ACTRN12612001067864]

Relevance: 90.00%

Abstract:

Understanding how virus strains offer protection against closely related emerging strains is vital for creating effective vaccines. For many viruses, including Foot-and-Mouth Disease Virus (FMDV) and the Influenza virus, where multiple serotypes often co-circulate, in vitro testing of large numbers of vaccines can be infeasible. The development of an in silico predictor of cross-protection between strains is therefore important to help optimise vaccine choice. Vaccines will offer cross-protection against closely related strains, but not against those that are antigenically distinct. To be able to predict cross-protection we must understand the antigenic variability within a virus serotype and within distinct lineages of a virus, and identify the antigenic residues and evolutionary changes that cause the variability. In this thesis we present a family of sparse hierarchical Bayesian models for detecting relevant antigenic sites in virus evolution (SABRE), as well as an extended version of the method, the extended SABRE (eSABRE) method, which better takes into account the data collection process. The SABRE methods are a family of sparse Bayesian hierarchical models that use spike and slab priors to identify sites in the viral protein which are important for the neutralisation of the virus. In this thesis we demonstrate how the SABRE methods can be used to identify antigenic residues within different serotypes, and show how the SABRE method outperforms established methods, namely mixed-effects models based on forward variable selection or l1 regularisation, on both synthetic and viral datasets. In addition we test a number of different versions of the SABRE method, comparing conjugate and semi-conjugate prior specifications as well as an alternative to the spike and slab prior: the binary mask model. We also propose novel proposal mechanisms for the Markov chain Monte Carlo (MCMC) simulations, which improve mixing and convergence over those of the established component-wise Gibbs sampler.
The SABRE method is then applied to datasets from FMDV and the Influenza virus in order to identify a number of known antigenic residues and to provide hypotheses about other potentially antigenic residues. We also demonstrate how the SABRE methods can be used to create accurate predictions of the important evolutionary changes of the FMDV serotypes. In this thesis we provide an extended version of the SABRE method, the eSABRE method, based on a latent variable model. The eSABRE method further takes into account the structure of the datasets for FMDV and the Influenza virus through the latent variable model, and improves the modelling of the error. We show how the eSABRE method outperforms the SABRE methods in simulation studies, and propose a new information criterion for selecting the random effects factors that should be included in the eSABRE method: the block integrated Widely Applicable Information Criterion (biWAIC). We demonstrate that biWAIC performs comparably to two other methods for selecting the random effects factors, and combine it with the eSABRE method to apply it to two large Influenza datasets. Inference in these large datasets is computationally infeasible with the SABRE methods, but as a result of the improved structure of the likelihood, the eSABRE method offers a computational improvement that allows it to be used on these datasets. The results of the eSABRE method show that it can be used in a fully automatic manner to identify a large number of antigenic residues on a variety of the antigenic sites of two Influenza serotypes, as well as to make predictions of a number of nearby sites that may also be antigenic and are worthy of further experimental investigation.
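The spike and slab prior at the heart of these models can be sketched in a few lines (illustrative hyperparameters, not the thesis's settings): each residue's coefficient is the product of a Bernoulli inclusion indicator (the spike at zero) and a Gaussian effect size (the slab), so excluded residues are exactly zero:

```python
import numpy as np

# Spike-and-slab prior sketch: beta_j = gamma_j * b_j, where
#   gamma_j ~ Bernoulli(pi)         switches residue j in or out (spike)
#   b_j     ~ Normal(0, slab_sd^2)  is the effect size if included (slab)
rng = np.random.default_rng(2)
n_residues, pi, slab_sd = 200, 0.1, 1.0   # hypothetical hyperparameters

gamma = rng.random(n_residues) < pi           # inclusion indicators
b = rng.normal(0.0, slab_sd, n_residues)      # slab draws
beta = np.where(gamma, b, 0.0)                # sparse coefficient vector

print(f"{int(gamma.sum())} of {n_residues} residues drawn as antigenic")
```

In the full models the posterior over gamma, sampled by MCMC, gives each residue a probability of being antigenically relevant rather than a single draw as here.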

Relevance: 80.00%

Abstract:

We estimate the cost of droughts by matching rainfall data with individual life satisfaction. Our context is Australia over the period 2001 to 2004, which included a particularly severe drought. Using fixed-effect models, we find that a drought in spring has a detrimental effect on life satisfaction equivalent to an annual reduction in income of A$18,000. This effect, however, is only found for individuals living in rural areas. Using our estimates, we calculate that the predicted doubling of the frequency of spring droughts will lead to the equivalent loss in life satisfaction of just over 1% of GDP annually.
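The fixed-effects (within) estimator used here removes time-invariant individual traits by demeaning within each person before regressing. A minimal sketch on simulated panel data (all numbers are made up, not the paper's):

```python
import numpy as np

# Simulated panel: 500 people observed over 4 years (hypothetical values).
rng = np.random.default_rng(3)
n_people, n_years = 500, 4
person_effect = rng.normal(0.0, 1.0, n_people)       # time-invariant traits
drought = rng.random((n_people, n_years)) < 0.3      # spring-drought indicator
beta_true = -0.25                                     # true effect on satisfaction
y = (person_effect[:, None] + beta_true * drought
     + rng.normal(0.0, 0.5, (n_people, n_years)))

# Within transformation: demeaning by person removes person_effect entirely,
# so identification comes only from within-person variation over time.
y_w = y - y.mean(axis=1, keepdims=True)
d_w = drought - drought.mean(axis=1, keepdims=True)
beta_hat = (d_w * y_w).sum() / (d_w ** 2).sum()
print(f"fixed-effects estimate of the drought effect: {beta_hat:.3f}")
```

Dividing such a satisfaction effect by the estimated effect of income on satisfaction is what yields compensating-income figures like the A$18,000 reported above.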

Relevance: 80.00%

Abstract:

Now in its second edition, this book describes tools that are commonly used in transportation data analysis. The first part of the text provides statistical fundamentals while the second part presents continuous dependent variable models. With a focus on count and discrete dependent variable models, the third part features new chapters on mixed logit models, logistic regression, and ordered probability models. The last section provides additional coverage of Bayesian statistical modeling, including Bayesian inference and Markov chain Monte Carlo methods. Data sets are available online to use with the modeling techniques discussed.

Relevance: 80.00%

Abstract:

Motivation: Unravelling the genetic architecture of complex traits requires large amounts of data, sophisticated models and large computational resources. The lack of user-friendly software incorporating all these requisites is delaying progress in the analysis of complex traits. Methods: Linkage disequilibrium and linkage analysis (LDLA) is a high-resolution gene mapping approach based on sophisticated mixed linear models, applicable to any population structure. LDLA can use population history information in addition to pedigree and molecular markers to decompose traits into genetic components. Analyses are distributed in parallel over a large public grid of computers in the UK. Results: We have demonstrated the performance of LDLA through analyses of simulated data. There are real gains in statistical power to detect quantitative trait loci when using historical information compared with traditional linkage analysis. Moreover, the use of a grid of computers significantly increases computational speed, hence allowing analyses that would have been prohibitive on a single computer. © The Author 2009. Published by Oxford University Press. All rights reserved.