20 results for random effect

in CentAUR: Central Archive University of Reading - UK


Relevance:

60.00%

Publisher:

Abstract:

The crude prevalence of antibodies to Babesia bovis infection in cattle was estimated by serology using an indirect ELISA during the period January to April 1999. Sera were obtained from 1395 dairy cattle (of all ages, sexes and breeds) on smallholder farms, the majority kept under a zero-grazing regime. The crude prevalence of antibodies to Babesia bovis was 6% for Tanga and 12% for Iringa. The forces of infection, based on the age-seroprevalence profiles, were estimated at six per 100 cattle-years at risk for Iringa and four for Tanga. Using random effect logistic regression as the analytical method, the factors (variables) of age, source of animals and geographic location were hypothesised to be associated with sero-positivity to Babesia bovis in the two regions.
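The force of infection mentioned above can be estimated from an age-seroprevalence profile with the simple catalytic model, p(a) = 1 − exp(−λa). A minimal grid-search maximum-likelihood sketch (the age classes and counts below are purely illustrative, not data from the study):

```python
import numpy as np

# Simple catalytic model: seroprevalence p(a) = 1 - exp(-lam * a),
# where lam is the force of infection per animal-year at risk.
def seroprevalence(age, lam):
    return 1.0 - np.exp(-lam * age)

# Binomial counts of seropositives per age class (illustrative numbers).
ages = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # age-class midpoints, years
n_tested = np.array([200, 180, 160, 150, 140])
n_pos = np.array([11, 20, 26, 33, 37])

# Grid-search maximum likelihood for lam.
lams = np.linspace(0.001, 0.5, 2000)
loglik = [
    np.sum(n_pos * np.log(seroprevalence(ages, l))
           + (n_tested - n_pos) * np.log(1.0 - seroprevalence(ages, l)))
    for l in lams
]
lam_hat = lams[int(np.argmax(loglik))]
print(f"estimated force of infection: {lam_hat:.3f} per animal-year")
```

With these invented counts the fitted rate comes out near six per 100 animal-years, the same order as the estimates quoted in the abstract.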

Relevance:

60.00%

Publisher:

Abstract:

In survival analysis frailty is often used to model heterogeneity between individuals or correlation within clusters. Typically frailty is taken to be a continuous random effect, yielding a continuous mixture distribution for survival times. A Bayesian analysis of a correlated frailty model is discussed in the context of inverse Gaussian frailty. An MCMC approach is adopted and the deviance information criterion is used to compare models. As an illustration of the approach a bivariate data set of corneal graft survival times is analysed. (C) 2006 Elsevier B.V. All rights reserved.
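A shared continuous frailty induces the within-cluster correlation the abstract describes. A small simulation sketch, assuming an exponential baseline hazard and using NumPy's `wald` sampler for the inverse Gaussian distribution (all parameter values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(42)

# Shared inverse-Gaussian frailty: each cluster (e.g. a pair of corneal
# grafts) draws one frailty Z ~ IG(mean=1, scale), and both members share
# the conditional hazard Z * h0, here with a constant baseline h0.
n_clusters = 5000
h0 = 0.1
frailty = rng.wald(mean=1.0, scale=2.0, size=n_clusters)

# Two correlated survival times per cluster, exponential given Z.
t1 = rng.exponential(1.0 / (frailty * h0))
t2 = rng.exponential(1.0 / (frailty * h0))

rho = np.corrcoef(t1, t2)[0, 1]
print(f"within-cluster correlation of survival times: {rho:.2f}")
```

Marginally the survival times follow a continuous mixture distribution; the positive correlation appears only because cluster members share the same frailty draw.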

Relevance:

60.00%

Publisher:

Abstract:

Objectives: To assess the potential source of variation that the surgeon may add to patient outcome in a clinical trial of surgical procedures. Methods: Two large (n = 1380) parallel multicentre randomized surgical trials were undertaken to compare laparoscopically assisted hysterectomy with conventional methods of abdominal and vaginal hysterectomy, involving 43 surgeons. The primary end point of the trial was the occurrence of at least one major complication. Patients were nested within surgeons, giving the data set a hierarchical structure. A total of 10% of patients had at least one major complication, that is, a sparse binary outcome variable. A linear mixed logistic regression model (with logit link function) was used to model the probability of a major complication, with surgeon fitted as a random effect. Models were fitted by maximum likelihood in SAS. Results: There were many convergence problems. These were resolved using a variety of approaches, including: treating all effects as fixed for the initial model building; modelling the variance of a parameter on a logarithmic scale; and centring continuous covariates. The initial model building indicated no significant 'type of operation' x surgeon interaction effect in either trial; the 'type of operation' term was highly significant in the abdominal trial, and the 'surgeon' term was not significant in either trial. Conclusions: The analysis did not find a surgeon effect, but it is difficult to conclude that there was no difference between surgeons. The statistical test may have lacked sufficient power; the variance estimates were small with large standard errors, indicating that the precision of the variance estimates may be questionable.
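The model structure can be sketched by simulation: a normally distributed random intercept for each surgeon on the logit scale, then binomial outcomes per surgeon. The patient counts per surgeon and the variance value below are illustrative assumptions, not the trial's estimates:

```python
import numpy as np

rng = np.random.default_rng(0)

# Logistic mixed model: logit P(complication) = beta0 + u_j, where
# u_j ~ N(0, sigma_u^2) is the random intercept for surgeon j.
n_surgeons, patients_per_surgeon = 43, 32
beta0 = np.log(0.10 / 0.90)          # baseline odds for a 10% event rate
sigma_u = 0.5                         # between-surgeon SD on the logit scale

u = rng.normal(0.0, sigma_u, size=n_surgeons)
logit_p = beta0 + u                   # one linear predictor per surgeon
p = 1.0 / (1.0 + np.exp(-logit_p))

events = rng.binomial(patients_per_surgeon, p)
overall_rate = events.sum() / (n_surgeons * patients_per_surgeon)
print(f"overall complication rate: {overall_rate:.3f}")
```

With a sparse binary outcome like this, the variance component sigma_u^2 is estimated from only 43 surgeon-level effects, which is one way to see why its standard error can be large.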

Relevance:

30.00%

Publisher:

Abstract:

Matheron's usual variogram estimator can result in unreliable variograms when data are strongly asymmetric or skewed. Asymmetry in a distribution can arise from a long tail of values in the underlying process or from outliers that belong to another population that contaminates the primary process. This paper, the first of two, examines the effects of underlying asymmetry on the variogram and on the accuracy of prediction; the second examines the effects arising from outliers. Standard geostatistical texts suggest ways of dealing with underlying asymmetry; however, this advice is based on informed intuition rather than detailed investigation. To determine whether the methods generally used to deal with underlying asymmetry are appropriate, the effects of different coefficients of skewness on the shape of the experimental variogram and on the model parameters were investigated. Simulated annealing was used to create normally distributed random fields of different size from variograms with different nugget:sill ratios. These data were then modified to give different degrees of asymmetry and the experimental variogram was computed in each case. The effects of standard data transformations on the form of the variogram were also investigated. Cross-validation was used to assess quantitatively the performance of the different variogram models for kriging. The results showed that the shape of the variogram was affected by the degree of asymmetry, and that the effect increased as the size of the data set decreased. Transformations of the data were more effective in reducing the skewness coefficient in the larger sets of data. Cross-validation confirmed that variogram models from transformed data were more suitable for kriging than were those from the raw asymmetric data. The results of this study have implications for the 'standard best practice' in dealing with asymmetry in data for geostatistical analyses. (C) 2007 Elsevier Ltd. All rights reserved.
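Matheron's method-of-moments estimator is gamma(h) = (1 / 2N(h)) * sum of [z(x) - z(x+h)]^2 over the N(h) pairs at lag h. A minimal sketch for a regular one-dimensional transect (the simulated data are illustrative, not the paper's random fields):

```python
import numpy as np

def matheron_variogram(z, max_lag):
    """Matheron's estimator on a regular 1-D transect:
    gamma(h) = (1 / (2 * N(h))) * sum over pairs of (z[i+h] - z[i])**2."""
    z = np.asarray(z, dtype=float)
    gammas = []
    for h in range(1, max_lag + 1):
        d = z[h:] - z[:-h]
        gammas.append(0.5 * np.mean(d * d))
    return np.array(gammas)

# Illustrative data: smooth spatial structure plus small-scale noise
# gives a semivariance that rises with lag distance.
rng = np.random.default_rng(1)
x = np.arange(200)
z = np.sin(x / 20.0) + rng.normal(0.0, 0.1, size=x.size)
gamma = matheron_variogram(z, max_lag=10)
print(np.round(gamma, 4))
```

Because each pair contributes a squared difference, a handful of extreme values can dominate the estimate, which is the sensitivity the paper investigates.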

Relevance:

30.00%

Publisher:

Abstract:

Asymmetry in a distribution can arise from a long tail of values in the underlying process or from outliers that belong to another population that contaminate the primary process. The first paper of this series examined the effects of the former on the variogram and this paper examines the effects of asymmetry arising from outliers. Simulated annealing was used to create normally distributed random fields of different size that are realizations of known processes described by variograms with different nugget:sill ratios. These primary data sets were then contaminated with randomly located and spatially aggregated outliers from a secondary process to produce different degrees of asymmetry. Experimental variograms were computed from these data by Matheron's estimator and by three robust estimators. The effects of standard data transformations on the coefficient of skewness and on the variogram were also investigated. Cross-validation was used to assess the performance of models fitted to experimental variograms computed from a range of data contaminated by outliers for kriging. The results showed that where skewness was caused by outliers the variograms retained their general shape, but showed an increase in the nugget and sill variances and nugget:sill ratios. This effect was only slightly more for the smallest data set than for the two larger data sets and there was little difference between the results for the latter. Overall, the effect of size of data set was small for all analyses. The nugget:sill ratio showed a consistent decrease after transformation to both square roots and logarithms; the decrease was generally larger for the latter, however. Aggregated outliers had different effects on the variogram shape from those that were randomly located, and this also depended on whether they were aggregated near to the edge or the centre of the field. 
The results of cross-validation showed that the robust estimators and the removal of outliers were the most effective ways of dealing with outliers for variogram estimation and kriging. (C) 2007 Elsevier Ltd. All rights reserved.
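One robust estimator commonly set against Matheron's is that of Cressie and Hawkins, which averages square-rooted absolute differences before raising to the fourth power, damping the influence of outlying pairs. Whether this is one of the three robust estimators used in the paper is an assumption; the sketch below (with an invented contaminated series) simply illustrates the idea:

```python
import numpy as np

def cressie_hawkins(z, max_lag):
    """Cressie-Hawkins robust estimator on a regular 1-D transect:
    2 * gamma(h) = (mean |dz|**0.5)**4 / (0.457 + 0.494 / N(h))."""
    z = np.asarray(z, dtype=float)
    gammas = []
    for h in range(1, max_lag + 1):
        d = np.abs(z[h:] - z[:-h])
        n = d.size
        gammas.append(np.mean(np.sqrt(d)) ** 4 / (2.0 * (0.457 + 0.494 / n)))
    return np.array(gammas)

# Contaminate a Gaussian series with a few large, randomly located
# outliers from a "secondary process".
rng = np.random.default_rng(2)
z = rng.normal(0.0, 1.0, size=300)
z[[50, 150, 250]] += 15.0

robust = cressie_hawkins(z, max_lag=5)
matheron = np.array([0.5 * np.mean((z[h:] - z[:-h]) ** 2) for h in range(1, 6)])
print(np.round(robust, 3), np.round(matheron, 3))
```

On this contaminated series the squared-difference estimator is inflated far more than the robust one, mirroring the increase in nugget and sill variances the abstract reports.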

Relevance:

30.00%

Publisher:

Abstract:

Background: The lipid-modulatory effects of high intakes of the fish-oil fatty acids eicosapentaenoic acid (EPA) and docosahexaenoic acid (DHA) are well established and likely to contribute to cardioprotective benefits. Objectives: We aimed to determine the effect of moderate EPA and DHA intakes (< 2 g EPA + DHA/d) on the plasma fatty acid profile, lipid and apolipoprotein concentrations, lipoprotein subclass distribution, and markers of oxidative status. We also aimed to examine the effect of age, sex, and apolipoprotein E (APOE) genotype on the observed responses. Design: Three hundred twelve adults aged 20-70 y, who were prospectively recruited according to age, sex, and APOE genotype, completed a double-blind placebo-controlled crossover study. Participants consumed control oil, 0.7 g EPA + DHA/d (0.7FO), and 1.8 g EPA + DHA/d (1.8FO) capsules in random order, each for an 8-wk intervention period, separated by 12-wk washout periods. Results: In the group as a whole, 8% and 11% lower plasma triacylglycerol concentrations were evident after 0.7FO and 1.8FO, respectively (P < 0.001): significant sex x treatment (P = 0.038) and sex x genotype x treatment (P = 0.032) interactions were observed, and the greatest triacylglycerol-lowering responses (reductions of 15% and 23% after 0.7FO and 1.8FO, respectively) were evident in APOE4 men. Furthermore, lower VLDL-cholesterol (P = 0.026) and higher LDL-cholesterol (P = 0.010), HDL-cholesterol (P < 0.001), and HDL2 (P < 0.001) concentrations were evident after fish-oil intervention. Conclusions: Supplements providing EPA + DHA at doses as low as 0.7 g/d have a significant effect on the plasma lipid profile. The results of the current trial, which used a prospective recruitment approach to examine the responses in population subgroups, are indicative of a greater triacylglycerol-lowering action of long-chain n-3 polyunsaturated fatty acids in males than in females.

Relevance:

30.00%

Publisher:

Abstract:

Accelerated failure time models with a shared random component are described, and are used to evaluate the effect of explanatory factors and different transplant centres on survival times following kidney transplantation. Different combinations of the distribution of the random effects and the baseline hazard function are considered, and the fit of such models to the transplant data is critically assessed. A mixture model that combines short- and long-term components of a hazard function is then developed, which provides a more flexible model for the hazard function. The model can incorporate different explanatory variables and random effects in each component. The model is straightforward to fit using standard statistical software, and is shown to be a good fit to the transplant data. Copyright (C) 2004 John Wiley & Sons, Ltd.
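The shared random component enters an AFT model on the log-time scale: log T_ij = beta * x_ij + b_j + sigma * eps_ij, so all patients in centre j share the acceleration factor exp(b_j). A simulation sketch with illustrative parameter values (a log-normal AFT is assumed here; the paper compares several distributions):

```python
import numpy as np

rng = np.random.default_rng(7)

# Log-linear AFT with a shared random component: for patient i in
# transplant centre j, log T_ij = beta * x_ij + b_j + sigma_e * eps_ij.
n_centres, n_per_centre = 20, 50
beta, sigma_b, sigma_e = 0.4, 0.3, 0.8

b = rng.normal(0.0, sigma_b, size=n_centres)              # centre effects
x = rng.binomial(1, 0.5, size=(n_centres, n_per_centre))  # binary covariate
eps = rng.normal(0.0, 1.0, size=(n_centres, n_per_centre))
log_t = beta * x + b[:, None] + sigma_e * eps

# Patients within a centre share b_j, so centre-mean log-times vary
# more than independent sampling alone would explain.
centre_means = log_t.mean(axis=1)
print(f"SD of centre-mean log survival times: {centre_means.std():.2f}")
```

The excess spread of the centre means over sigma_e / sqrt(n) is exactly the between-centre heterogeneity that the shared random effect is meant to capture.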

Relevance:

30.00%

Publisher:

Abstract:

Two types of poly(epsilon-caprolactone) (CLo)-co-poly(epsilon-caprolactam) (CLa) copolymers were prepared by catalyzed hydrolytic ring-opening polymerization. Both cyclic comonomers were added simultaneously to the reaction medium for the first type of materials, where the copolymers have a random distribution of counits, as evidenced by H-1 and C-13 NMR. For the second type of copolymers, the cyclic comonomers were added sequentially, yielding diblock poly(ester-amides). The materials were characterized by differential scanning calorimetry (DSC), wide- and small-angle X-ray scattering (WAXS and SAXS), and transmission and scanning electron microscopies (TEM and SEM). Their biodegradation in compost was also studied. All copolymers were found to be miscible, as judged by the absence of structure in the melt. TEM revealed that all samples exhibited a crystalline lamellar morphology. DSC and WAXS showed that in a wide composition range (CLo contents from 6 to 55%) only the CLa units were capable of crystallization in the random copolymers. The block copolymer samples experienced only a small reduction of crystallization and melting temperature with composition, and this was attributed to a dilution effect caused by the miscible noncrystalline CLo units. The comparison between block and random copolymers provided a unique opportunity to distinguish the dilution effect of the CLo units on the crystallization and melting of the polyamide phase from the chemical composition effect in the random copolymers, where the CLa sequences are interrupted statistically by the CLo units, making the crystallization of the polyamide strongly composition dependent. Finally, the enzymatic degradation of the copolymers in composted soil indicated a synergistic behavior: degradation was much faster for random copolymers with a CLo content larger than 30% than for neat PCL.

Relevance:

30.00%

Publisher:

Abstract:

Random number generation (RNG) is a functionally complex process that is highly controlled and therefore dependent on Baddeley's central executive. This study addresses this issue by investigating whether key predictions from this framework are compatible with empirical data. In Experiment 1, the effect of increasing task demands by increasing the rate of the paced generation was comprehensively examined. As expected, faster rates affected performance negatively because central resources were increasingly depleted. Next, the effects of participants' exposure were manipulated in Experiment 2 by providing increasing amounts of practice on the task. There was no improvement over 10 practice trials, suggesting that the high level of strategic control required by the task was constant and not amenable to any automatization gain with repeated exposure. Together, the results demonstrate that RNG performance is a highly controlled and demanding process sensitive to additional demands on central resources (Experiment 1) and is unaffected by repeated performance or practice (Experiment 2). These features render the easily administered RNG task an ideal and robust index of executive function that is highly suitable for repeated clinical use.
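Performance in such tasks is usually summarised with randomness indices; one common example is Evans' RNG score, computed from digram (response-pair) frequencies, which is near 0 for well-mixed output and approaches 1 for stereotyped sequences. The choice of this particular index is an assumption for illustration; the abstract does not state which measures were used:

```python
import math
import random
from collections import Counter

def rng_score(seq):
    """Evans' RNG index from digram frequencies:
    sum(n_ij * ln n_ij) / sum(n_i * ln n_i), where n_ij counts
    response pairs and n_i counts single responses."""
    pairs = Counter(zip(seq, seq[1:]))
    singles = Counter(seq[:-1])
    num = sum(n * math.log(n) for n in pairs.values() if n > 1)
    den = sum(n * math.log(n) for n in singles.values() if n > 1)
    return num / den if den else 0.0

# A stereotyped counting sequence scores far higher (less random)
# than a shuffled one over the same response set.
counting = [1, 2, 3, 4, 5] * 40
random.seed(3)
shuffled = [random.randint(1, 5) for _ in range(200)]
print(rng_score(counting), rng_score(shuffled))
```

Because each response pair is scored against the marginal response frequencies, the index directly penalises the habitual runs (counting, repetition avoidance) that executive control must suppress.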

Relevance:

30.00%

Publisher:

Abstract:

A poor representation of cloud structure in a general circulation model (GCM) is widely recognised as a potential source of error in the radiation budget. Here, we develop a new way of representing both horizontal and vertical cloud structure in a radiation scheme. This combines the ‘Tripleclouds’ parametrization, which introduces inhomogeneity by using two cloudy regions in each layer as opposed to one, each with different water content values, with ‘exponential-random’ overlap, in which clouds in adjacent layers are not overlapped maximally, but according to a vertical decorrelation scale. This paper, Part I of two, aims to parametrize the two effects such that they can be used in a GCM. To achieve this, we first review a number of studies for a globally applicable value of fractional standard deviation of water content for use in Tripleclouds. We obtain a value of 0.75 ± 0.18 from a variety of different types of observations, with no apparent dependence on cloud type or gridbox size. Then, through a second short review, we create a parametrization of decorrelation scale for use in exponential-random overlap, which varies the scale linearly with latitude from 2.9 km at the Equator to 0.4 km at the poles. When applied to radar data, both components are found to have radiative impacts capable of offsetting biases caused by cloud misrepresentation. Part II of this paper implements Tripleclouds and exponential-random overlap into a radiation code and examines both their individual and combined impacts on the global radiation budget using re-analysis data.
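The latitude-dependent decorrelation scale quoted above (2.9 km at the Equator, 0.4 km at the poles, varying linearly) is easy to encode; combining it with the standard exponential-random overlap rule, where adjacent cloudy layers are blended between maximum and random overlap with a weight that decays exponentially with separation, gives a sketch like the following (the blending formula is the standard formulation, not a detail stated in the abstract):

```python
import numpy as np

def decorrelation_scale_km(lat_deg):
    """Decorrelation scale varying linearly with latitude, from
    2.9 km at the Equator to 0.4 km at the poles."""
    return 2.9 - (2.9 - 0.4) * abs(lat_deg) / 90.0

def overlap_parameter(dz_km, lat_deg):
    """Exponential-random overlap: cloud in adjacent layers separated
    by dz is combined as alpha * (maximum overlap) +
    (1 - alpha) * (random overlap), with alpha decaying with dz."""
    return np.exp(-dz_km / decorrelation_scale_km(lat_deg))

# For a 1 km layer separation, clouds stay closer to maximal overlap
# in the tropics (large decorrelation scale) than at high latitudes.
for lat in (0.0, 45.0, 90.0):
    print(lat, round(float(overlap_parameter(1.0, lat)), 3))
```

The parametrization thus needs only one global profile, which is why the radiative impact can be evaluated without storing cloud-type-dependent statistics.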

Relevance:

30.00%

Publisher:

Abstract:

Reliably representing both horizontal cloud inhomogeneity and vertical cloud overlap is fundamentally important for the radiation budget of a general circulation model. Here, we build on the work of Part One of this two-part paper by applying a pair of parameterisations that account for horizontal inhomogeneity and vertical overlap to global re-analysis data. These are applied both together and separately in an attempt to quantify the effects of poor representation of the two components on the radiation budget. Horizontal inhomogeneity is accounted for using the "Tripleclouds" scheme, which uses two regions of cloud in each layer of a gridbox as opposed to one; vertical overlap is accounted for using "exponential-random" overlap, which aligns vertically continuous cloud according to a decorrelation height. These are applied to a sample of scenes from a year of ERA-40 data. The largest radiative effect of horizontal inhomogeneity is found in areas of marine stratocumulus; the effect of vertical overlap is fairly uniform, but with larger individual short-wave and long-wave effects in areas of deep, tropical convection. The combined effect of the two parameterisations is found to reduce the magnitude of the net top-of-atmosphere cloud radiative forcing (CRF) by 2.25 W m−2, with shifts of up to 10 W m−2 in areas of marine stratocumulus. The effects of the uncertainty in our parameterisations on the radiation budget are also investigated. It is found that the uncertainty in the impact of horizontal inhomogeneity is of order ±60%, while the uncertainty in the impact of vertical overlap is much smaller. This suggests an insensitivity of the radiation budget to the exact nature of the global decorrelation height distribution derived in Part One.

Relevance:

30.00%

Publisher:

Abstract:

The problem of identification of a nonlinear dynamic system is considered. A two-layer neural network is used for the solution of the problem. Systems disturbed by unmeasurable noise are considered, where the disturbance is known to be a random piecewise polynomial process. Absorption polynomials and nonquadratic loss functions are used to reduce the effect of this disturbance on the estimates of the optimal memory of the neural-network model.
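The pairing of a two-layer network with a nonquadratic loss can be illustrated on a toy identification problem. Here a Huber loss, whose gradient saturates so that large disturbances have bounded influence, stands in for the nonquadratic loss; the system, the loss choice and all values are illustrative assumptions, not the paper's method:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy nonlinear system: y[k] = 0.6*y[k-1] + sin(u[k-1]) + d[k], where
# d[k] is small noise contaminated by occasional large shocks.
n = 400
u = rng.uniform(-2.0, 2.0, size=n)
y = np.zeros(n)
for k in range(1, n):
    y[k] = 0.6 * y[k - 1] + np.sin(u[k - 1])
d = rng.normal(0.0, 0.1, size=n)
mask = rng.random(n) < 0.05
d[mask] += rng.normal(0.0, 5.0, size=mask.sum())
y_noisy = y + d

X = np.column_stack([y_noisy[1:-1], u[1:-1]])  # regressors: y[k-1], u[k-1]
t = y_noisy[2:]                                # one-step-ahead target

# Two-layer network: tanh hidden layer, linear output.
W1 = rng.normal(0.0, 0.5, size=(2, 8)); b1 = np.zeros(8)
w2 = rng.normal(0.0, 0.5, size=8); b2 = 0.0

def predict(X):
    return np.tanh(X @ W1 + b1) @ w2 + b2

def huber_grad(r, delta=1.0):
    # Derivative of the Huber loss: linear near zero, clipped beyond,
    # so outlying residuals cannot dominate the update.
    return np.clip(r, -delta, delta)

err_before = np.median(np.abs(predict(X) - t))
lr = 0.05
for _ in range(500):                  # full-batch gradient descent
    h = np.tanh(X @ W1 + b1)
    g = huber_grad(h @ w2 + b2 - t) / len(t)
    w2 -= lr * h.T @ g; b2 -= lr * g.sum()
    gh = np.outer(g, w2) * (1.0 - h * h)
    W1 -= lr * X.T @ gh; b1 -= lr * gh.sum(axis=0)

err_after = np.median(np.abs(predict(X) - t))
print(f"median absolute one-step error: {err_before:.3f} -> {err_after:.3f}")
```

Replacing `huber_grad` with the raw residual recovers ordinary least-squares training, against which the robust variant can be compared under heavy contamination.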

Relevance:

30.00%

Publisher:

Abstract:

Epidemiological evidence shows that a diet high in monounsaturated fatty acids (MUFA) but low in saturated fatty acids (SFA) is associated with reduced risk of CHD. The hypocholesterolaemic effect of MUFA is known but there has been little research on the effect of test meal MUFA and SFA composition on postprandial lipid metabolism. The present study investigated the effect of meals containing different proportions of MUFA and SFA on postprandial triacylglycerol and non-esterified fatty acid (NEFA) metabolism. Thirty healthy male volunteers consumed three meals containing equal amounts of fat (40 g), but different proportions of MUFA (12, 17 and 24% energy) in random order. Postprandial plasma triacylglycerol, apolipoprotein B-48, cholesterol, HDL-cholesterol, glucose and insulin concentrations and lipoprotein lipase (EC 3.1.1.34) activity were not significantly different following the three meals which varied in their levels of SFA and MUFA. There was a significant difference in the postprandial NEFA response between meals. The incremental area under the curve of postprandial plasma NEFA concentrations was significantly (P = 0.03) lower following the high-MUFA meal. Regression analysis showed that the non-significant difference in fasting NEFA concentrations was the most important factor determining difference between meals, and that the test meal MUFA content had only a minor effect. In conclusion, varying the levels of MUFA and SFA in test meals has little or no effect on postprandial lipid metabolism.
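The incremental area under the curve compared above subtracts the fasting baseline before integrating, typically by the trapezoid rule. A minimal sketch (the NEFA profile below is invented for illustration, not trial data):

```python
import numpy as np

def incremental_auc(times_h, conc, baseline=None):
    """Incremental AUC by the trapezoid rule, with the fasting
    (time-zero) value subtracted; negative increments are retained."""
    t = np.asarray(times_h, dtype=float)
    c = np.asarray(conc, dtype=float)
    d = c - (c[0] if baseline is None else baseline)
    return float(np.sum((d[1:] + d[:-1]) * np.diff(t)) / 2.0)

# Illustrative postprandial NEFA profile (mmol/L) over 6 h: the usual
# suppression then rebound after a mixed meal gives a negative iAUC.
times = [0, 1, 2, 3, 4, 5, 6]
nefa = [0.50, 0.25, 0.20, 0.35, 0.55, 0.65, 0.60]
print(f"iAUC: {incremental_auc(times, nefa):.3f} mmol.h/L")
```

Because the fasting value is subtracted, a between-meal difference in baseline NEFA feeds directly into the iAUC, which is why the regression analysis flagged fasting concentrations as the dominant factor.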

Relevance:

30.00%

Publisher:

Abstract:

This investigation moves beyond traditional studies of word reading to identify how the production complexity of words affects reading accuracy in an individual with deep dyslexia (JO). We examined JO's ability to read words aloud while manipulating both the production complexity of the words and the semantic context. Words were classified as phonetically simple or complex on the basis of the Index of Phonetic Complexity. The semantic context was varied using a semantic blocking paradigm (i.e., semantically blocked and unblocked conditions). In the semantically blocked condition, words were grouped by semantic categories (e.g., table, sit, seat, couch), whereas in the unblocked condition the same words were presented in random order. JO's performance on reading aloud was also compared with her performance on a repetition task using the same items. Results revealed a strong interaction between word complexity and semantic blocking for reading aloud but not for repetition. JO produced the greatest number of errors for phonetically complex words in the semantically blocked condition. This interaction suggests that semantic processes are constrained by output production processes, a constraint that is exaggerated when responses are derived from visual rather than auditory targets. This complex relationship between orthographic, semantic, and phonetic processes highlights the need for word recognition models to account explicitly for production processes.