92 results for Relative Validity
in CentAUR: Central Archive University of Reading - UK
Abstract:
Dietary assessment in older adults can be challenging. The Novel Assessment of Nutrition and Ageing (NANA) method is a touch-screen computer-based food record that enables older adults to record their dietary intakes. The objective of the present study was to assess the relative validity of the NANA method for dietary assessment in older adults. For this purpose, three studies were conducted in which a total of ninety-four older adults (aged 65–89 years) used the NANA method of dietary assessment. On a separate occasion, participants completed a 4 d estimated food diary. Blood and 24 h urine samples were also collected from seventy-six of the volunteers for the analysis of biomarkers of nutrient intake. The results from all three studies were combined, and nutrient intake data collected using the NANA method were compared against the 4 d estimated food diary and biomarkers of nutrient intake. Bland–Altman analysis showed reasonable agreement between the dietary assessment methods for energy and macronutrient intake; however, there were small but significant differences for energy and protein intake, reflecting the tendency for the NANA method to record marginally lower energy intakes. Significant positive correlations were observed between urinary urea and dietary protein intake using both the NANA and the 4 d estimated food diary methods, and between plasma ascorbic acid and dietary vitamin C intake using the NANA method. The results demonstrate the feasibility of computer-based dietary assessment in older adults, and suggest that the NANA method is comparable to the 4 d estimated food diary and could be used as an alternative to the food diary for the short-term assessment of an individual’s dietary intake.
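As a rough illustration of the Bland–Altman comparison described above, the sketch below computes the mean bias and 95% limits of agreement for paired energy intakes from two methods; the intake values are hypothetical, not data from the study.

```python
import numpy as np

# Hypothetical paired daily energy intakes (MJ/d) for the same participants,
# recorded with the NANA method and with a 4 d estimated food diary.
nana = np.array([7.9, 8.4, 6.8, 9.1, 7.2, 8.8, 7.5, 8.0])
diary = np.array([8.3, 8.6, 7.1, 9.0, 7.8, 9.2, 7.6, 8.4])

diff = nana - diary                                 # per-participant difference between methods
bias = diff.mean()                                  # mean bias (NANA minus diary)
sd = diff.std(ddof=1)                               # SD of the differences
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd   # 95% limits of agreement

print(f"mean bias = {bias:.2f} MJ/d")
print(f"95% limits of agreement = {lower:.2f} to {upper:.2f} MJ/d")
```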
Abstract:
The utility of an "ecologically rational" recognition-based decision rule in multichoice decision problems is analyzed, varying the type of judgment required (greater or lesser). The maximum size and range of a counterintuitive advantage associated with recognition-based judgment (the "less-is-more effect") are identified for a range of cue validity values. Greater ranges of the less-is-more effect occur when participants are asked which is the greatest of m choices (m > 2) than when they are asked which is the least. Less-is-more effects also have a greater range for larger values of m. This implies that the classic two-alternative forced-choice task, as studied by Goldstein and Gigerenzer (2002), may not be the most appropriate test case for less-is-more effects.
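The less-is-more effect can be illustrated with the standard two-alternative formulation of Goldstein and Gigerenzer (2002); the sketch below covers only that baseline case (the m > 2 judgments analysed above need a fuller model), with the recognition validity alpha and knowledge validity beta chosen arbitrarily.

```python
from math import comb

def rh_accuracy(n, N=100, alpha=0.8, beta=0.6):
    """Expected proportion correct in a two-alternative comparison when n of
    the N objects are recognised; alpha is the recognition validity, beta the
    knowledge validity (Goldstein & Gigerenzer 2002 formulation)."""
    pairs = comb(N, 2)
    p_one = n * (N - n) / pairs        # exactly one alternative recognised -> follow recognition
    p_none = comb(N - n, 2) / pairs    # neither recognised -> guess
    p_both = comb(n, 2) / pairs        # both recognised -> use further knowledge
    return p_one * alpha + p_none * 0.5 + p_both * beta

# Accuracy peaks at an intermediate n when alpha > beta: the less-is-more effect.
for n in (0, 25, 50, 75, 100):
    print(n, round(rh_accuracy(n), 3))
```

When alpha > beta, the printed accuracies rise and then fall as the number of recognised objects grows, which is the counterintuitive advantage the abstract refers to.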
Abstract:
It is often assumed on the basis of single-parcel energetics that compressible effects and conversions with internal energy are negligible whenever typical displacements of fluid parcels are small relative to the scale height of the fluid (defined as the ratio of the squared speed of sound over gravitational acceleration). This paper shows that the above approach is flawed, however, and that a correct assessment of compressible effects and internal energy conversions requires considering the energetics of at least two parcels, or more generally, of mass-conserving parcel re-arrangements. As a consequence, it is shown that it is the adiabatic lapse rate and its derivative with respect to pressure, rather than the scale height, that control the relative importance of compressible effects and internal energy conversions when considering the global energy budget of a stratified fluid. Only when mass conservation is properly accounted for is it possible to explain why available internal energy can account for up to 40 percent of the total available potential energy in the oceans. This is considerably larger than the prediction of single-parcel energetics, according to which this number should be no more than about 2 percent.
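For reference, the two quantities contrasted above are, in standard notation (the scale-height definition is the one given in the abstract; the lapse-rate identity is the usual thermodynamic one, not a result of this paper):

```latex
\[
H_s = \frac{c_s^2}{g},
\qquad
\Gamma = \left(\frac{\partial T}{\partial p}\right)_{\eta} = \frac{\alpha T}{\rho c_p}
\]
```

where $c_s$ is the speed of sound, $g$ the gravitational acceleration, $\alpha$ the thermal expansion coefficient, $\rho$ the density, $c_p$ the specific heat at constant pressure and $\eta$ the entropy. The single-parcel argument scales compressible effects with displacement over $H_s$; the two-parcel analysis replaces this with a dependence on $\Gamma$ and $\partial\Gamma/\partial p$.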
Abstract:
Chemical methods to predict the bioavailable fraction of organic contaminants are usually validated in the literature by comparison with established bioassays. A soil spiked with polycyclic aromatic hydrocarbons (PAHs) was aged over six months and subjected to butanol, cyclodextrin and Tenax extractions as well as an exhaustive extraction to determine total PAH concentrations at several time points. Earthworm (Eisenia fetida) and rye grass root (Lolium multiflorum) accumulation bioassays were conducted in parallel. Butanol extractions gave the best relationship with earthworm accumulation (r² ≤ 0.54, p ≤ 0.01); cyclodextrin, butanol and acetone–hexane extractions all gave good predictions of accumulation in rye grass roots (r² ≤ 0.86, p ≤ 0.01). However, the profile of the PAHs extracted by the different chemical methods was significantly different (p < 0.01) to that accumulated in the organisms. Biota accumulated a higher proportion of the heavier 4-ringed PAHs. It is concluded that bioaccumulation is a complex process that cannot be predicted by measuring the bioavailable fraction alone. The ability of chemical methods to predict PAH accumulation in Eisenia fetida and Lolium multiflorum was hindered by the varied metabolic fate of the different PAHs within the organisms.
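A minimal sketch of the kind of regression behind the r² values quoted above, using hypothetical concentrations rather than the study's measurements:

```python
import numpy as np

# Hypothetical matched measurements across sampling time points: PAH extracted
# by a mild chemical method (mg/kg soil) vs PAH accumulated by earthworms in
# the parallel bioassay (mg/kg tissue).
extracted = np.array([12.0, 9.5, 7.1, 5.8, 4.2, 3.1])
accumulated = np.array([3.4, 2.9, 2.5, 2.0, 1.9, 1.2])

slope, intercept = np.polyfit(extracted, accumulated, 1)
predicted = slope * extracted + intercept
ss_res = np.sum((accumulated - predicted) ** 2)
ss_tot = np.sum((accumulated - accumulated.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"r^2 = {r2:.2f}")   # proportion of variation in accumulation explained by the extraction
```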
Abstract:
Key climate feedbacks due to water vapor and clouds rest largely on how relative humidity R changes in a warmer climate, yet this has not been extensively analyzed in models. General circulation models (GCMs) from the CMIP3 archive and several higher resolution atmospheric GCMs examined here generally predict a characteristic pattern of R trend with global temperature that has been reported previously in individual models, including increase around the tropopause, decrease in the tropical upper troposphere, and decrease in midlatitudes. This pattern is very similar to that previously reported for cloud cover in the same GCMs, confirming the role of R in controlling changes in simulated cloud. Comparing different models, the trend in each part of the troposphere is approximately proportional to the upward and/or poleward gradient of R in the present climate. While this suggests that the changes simply reflect a shift of the R pattern upward with the tropopause and poleward with the zonal jets, the drying trend in the subtropics is roughly three times too large to be attributable to shifts of subtropical features, and the subtropical R minima deepen in most models. R trends are correlated with horizontal model resolution, especially outside the tropics, where they show signs of convergence and latitudinal gradients become close to available observations for GCM resolutions near T85 and higher. We argue that much of the systematic change in R can be explained by the local specific humidity having been set (by condensation) in remote regions with different temperature changes, hence the gradients and trends each depend on a model’s ability to resolve moisture transport. Finally, subtropical drying trends predicted from the warming alone fall well short of those observed in recent decades. While this discrepancy supports previous reports of GCMs underestimating Hadley Cell expansion, our results imply that shifts alone are not a sufficient interpretation of changes.
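The "specific humidity set by condensation in remote regions" argument can be sketched with the usual last-saturation picture: local relative humidity is roughly the ratio of saturation vapour pressure at the remote source temperature to that at the local temperature, so RH falls wherever local warming outpaces source-region warming. The temperatures and warming increments below are hypothetical, and the saturation formula is the common Bolton approximation, not anything taken from the paper.

```python
import numpy as np

def e_sat(t_c):
    """Saturation vapour pressure (hPa) at temperature t_c in deg C (Bolton 1980 approximation)."""
    return 6.112 * np.exp(17.67 * t_c / (t_c + 243.5))

# Last-saturation picture: subtropical air carries the specific humidity set where it was
# last saturated (a colder, remote region), so local RH ~ e_sat(T_source) / e_sat(T_local).
t_source, t_local = -20.0, 15.0    # hypothetical source and arrival temperatures (deg C)
dt_source, dt_local = 1.0, 2.0     # hypothetical warming of each region (K)

rh_now = e_sat(t_source) / e_sat(t_local)
rh_warm = e_sat(t_source + dt_source) / e_sat(t_local + dt_local)
print(f"RH falls from {rh_now:.3f} to {rh_warm:.3f}")  # drying when local warming exceeds source warming
```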
Abstract:
Critical loads are the basis for policies controlling emissions of acidic substances in Europe and elsewhere. They are assessed by several elaborate and ingenious models, each of which requires many parameters, and have to be applied on a spatially-distributed basis. Often the values of the input parameters are poorly known, calling into question the validity of the calculated critical loads. This paper attempts to quantify the uncertainty in the critical loads due to this "parameter uncertainty", using examples from the UK. Models used for calculating critical loads for deposition of acidity and nitrogen in forest and heathland ecosystems were tested at four contrasting sites. Uncertainty was assessed by Monte Carlo methods. Each input parameter or variable was assigned a value, range and distribution in as objective a fashion as possible. Each model was run 5000 times at each site using parameters sampled from these input distributions. Output distributions of various critical load parameters were calculated. The results were surprising. Confidence limits of the calculated critical loads were typically considerably narrower than those of most of the input parameters. This may be due to a "compensation of errors" mechanism. The range of possible critical load values at a given site is however rather wide, and the tails of the distributions are typically long. The deposition reductions required for a high level of confidence that the critical load is not exceeded are thus likely to be large. The implication for pollutant regulation is that requiring a high probability of non-exceedance is likely to carry high costs. The relative contribution of the input variables to critical load uncertainty varied from site to site: any input variable could be important, and thus it was not possible to identify variables as likely targets for research into narrowing uncertainties. Sites where a number of good measurements of input parameters were available had lower uncertainties, so use of in situ measurement could be a valuable way of reducing critical load uncertainty at particularly valuable or disputed sites. From a restricted number of samples, uncertainties in heathland critical loads appear comparable to those of coniferous forest, and nutrient nitrogen critical loads to those of acidity. It was important to include correlations between input variables in the Monte Carlo analysis, but choice of statistical distribution type was of lesser importance. Overall, the analysis provided objective support for the continued use of critical loads in policy development.
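A minimal sketch of the Monte Carlo procedure described above, using a toy mass-balance critical load with hypothetical input distributions rather than the UK models or their parameters:

```python
import numpy as np

rng = np.random.default_rng(42)
n = 5000   # model runs per site, matching the study design described above

# Toy mass-balance critical load CL = weathering + uptake - leaching, with
# illustrative input distributions (values and names are assumptions).
weathering = rng.lognormal(mean=np.log(0.5), sigma=0.4, size=n)   # keq ha-1 yr-1
uptake     = rng.normal(loc=0.2, scale=0.05, size=n)              # keq ha-1 yr-1
leaching   = rng.uniform(0.05, 0.25, size=n)                      # keq ha-1 yr-1

cl = weathering + uptake - leaching
lo, med, hi = np.percentile(cl, [5, 50, 95])
print(f"critical load: median {med:.2f}, 90% interval [{lo:.2f}, {hi:.2f}] keq ha-1 yr-1")
# Correlated inputs, which the study found important, would be sampled jointly
# (e.g. from a multivariate normal) rather than independently as here.
```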
Abstract:
This study evaluates computer-generated written explanations about drug prescriptions that are based on an analysis of both patient and doctor informational needs. Three experiments examine the effects of varying the type of information given about the possible side effects of the medication, and the order of information within the explanation. Experiment 1 investigated the effects of these two factors on people's ratings of how good they consider the explanations to be and of their perceived likelihood of taking the medication, as well as on their memory for the information in the explanation. Experiment 2 further examined the effects of varying information about side effects by separating out the contribution of number and severity of side effects. It was found that participants in this study did not “like” explanations that described severe side effects, and also judged that they would be less likely to take the medication if given such explanations. Experiment 3 therefore investigated whether information about severe side effects could be presented in such a way as to increase judgements of how good explanations are thought to be, as well as the perceived likelihood of adherence. The results showed some benefits of providing additional explanatory information.
Abstract:
Laboratory animals should be provided with enrichment objects in their cages; however, it is first necessary to test whether the proposed enrichment objects provide benefits that increase the animals’ welfare. The two main paradigms currently used to assess proposed enrichment objects are the choice test, which is limited to determining relative frequency of choice, and consumer demand studies, which can indicate the strength of a preference but are complex to design. Here, we propose a third methodology: a runway paradigm, which can be used to assess the strength of an animal’s motivation for enrichment objects, is simpler to use than consumer demand studies, and is faster to complete than typical choice tests. Time spent with objects in a standard choice test was used to rank several enrichment objects in order to compare with the ranking found in our runway paradigm. The rats ran significantly more times, ran faster, and interacted longer with objects with which they had previously spent the most time. It was concluded that this simple methodology is suitable for measuring rats’ motivation to reach enrichment objects. This can be used to assess the preference for different types of enrichment objects or to measure reward system processes.
Hydrolyzable tannin structures influence relative globular and random coil protein binding strengths
Abstract:
Binding parameters for the interactions of pentagalloyl glucose (PGG) and four hydrolyzable tannins (representing gallotannins and ellagitannins) with gelatin and bovine serum albumin (BSA) have been determined from isothermal titration calorimetry data. Equilibrium binding constants determined for the interaction of PGG and isolated mixtures of tara gallotannins and of sumac gallotannins with gelatin and BSA were of the same order of magnitude for each tannin (in the range of 10⁴–10⁵ M⁻¹ for stronger binding sites when using a binding model consisting of two sets of multiple binding sites). In contrast, isolated mixtures of chestnut ellagitannins and of myrabolan ellagitannins exhibited 3–4 orders of magnitude greater equilibrium binding constants for the interaction with gelatin (~2 × 10⁶ M⁻¹) than for that with BSA (~8 × 10² M⁻¹). Binding stoichiometries revealed that the stronger binding sites on gelatin outnumbered those on BSA by a ratio of at least ~2:1 for all of the hydrolyzable tannins studied. Overall, the data revealed that relative binding constants for the interactions with gelatin and BSA are dependent on the structural flexibility of the tannin molecule.
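A small sketch of the "two sets of independent binding sites" model referred to above, with hypothetical parameters of the order reported; the actual analysis fits this model to the calorimetric heats rather than evaluating it directly.

```python
import numpy as np

def bound_per_protein(l_free, n1, k1, n2, k2):
    """Tannin bound per protein for a model with two sets of independent binding
    sites: n_i sites in class i, each with association constant k_i (M^-1), and
    free ligand concentration l_free (M)."""
    return (n1 * k1 * l_free / (1 + k1 * l_free)
            + n2 * k2 * l_free / (1 + k2 * l_free))

# Hypothetical parameters of the order reported above: a stronger site class with
# K ~ 1e5 M^-1 and a weaker class with K ~ 1e3 M^-1.
l = np.logspace(-7, -3, 5)    # free tannin concentration (M)
print(bound_per_protein(l, n1=2.0, k1=1e5, n2=6.0, k2=1e3))
```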
Abstract:
The Euro-Mediterranean region is an important centre for the diversity of crop wild relatives. Crops such as oats (Avena sativa), sugar beet (Beta vulgaris), apple (Malus domestica), meadow fescue (Festuca pratensis), white clover (Trifolium repens), arnica (Arnica montana), asparagus (Asparagus officinalis), lettuce (Lactuca sativa) and sage (Salvia officinalis) all have wild relatives in the region. The European Community-funded project PGR Forum (www.pgrforum.org) is building an online information system to provide access to crop wild relative data for a broad user community, including plant breeders, protected area managers, policy-makers, conservationists, taxonomists and the wider public. The system will include data on uses, geographical distribution, biology, population and habitat information, threats (including IUCN Red List assessments) and conservation actions. This information is vital for the continued sustainable utilisation and conservation of crop wild relatives. Two major databases have been utilised as the backbone to a Euro-Mediterranean crop wild relative catalogue, which forms the core of the information system: Euro+Med PlantBase (www.euromed.org.uk) and Mansfeld’s World Database of Agricultural and Horticultural Crops (http://mansfeld.ipk-gatersleben.de). By matching the genera found within the two databases, a preliminary list of crop wild relatives has been produced. Around 20,000 of the 30,000+ species listed in Euro+Med PlantBase can be considered crop wild relatives, i.e. those species found within the same genus as a crop. The list is currently being refined by implementing a priority ranking system based on the degree of relatedness of taxa to the associated crop.
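The genus-matching step can be sketched as follows; the names and data structures are illustrative stand-ins, not the actual Euro+Med PlantBase or Mansfeld's records.

```python
# Illustrative stand-ins for records from Mansfeld's database (crop genera) and
# Euro+Med PlantBase (the regional species list); not the actual data.
crop_genera = {"Avena", "Beta", "Malus", "Lactuca"}
flora = ["Avena fatua", "Beta macrocarpa", "Poa annua", "Lactuca serriola"]

# A species is treated as a crop wild relative if its genus also contains a crop.
crop_wild_relatives = [sp for sp in flora if sp.split()[0] in crop_genera]
print(crop_wild_relatives)   # ['Avena fatua', 'Beta macrocarpa', 'Lactuca serriola']
```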
Abstract:
We focus on the comparison of three statistical models used to estimate the treatment effect in meta-analysis when individually pooled data are available. The models are two conventional ones, namely a multi-level model and a model based upon an approximate likelihood, and a newly developed profile likelihood model, which might be viewed as an extension of the Mantel-Haenszel approach. To exemplify these methods, we use results from a meta-analysis of 22 trials to prevent respiratory tract infections. We show that by using the multi-level approach, in the case of baseline heterogeneity, the number of clusters or components is considerably over-estimated. The approximate and profile likelihood methods showed nearly the same pattern for the treatment effect distribution. To provide further evidence, two simulation studies were conducted. The profile likelihood can be considered a clear alternative to the approximate likelihood model. In the case of strong baseline heterogeneity, the profile likelihood method shows superior behaviour when compared with the multi-level model.
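As context for the Mantel-Haenszel approach that the profile likelihood model extends, the sketch below computes the classic Mantel-Haenszel pooled odds ratio from a few hypothetical 2×2 trial tables (not the 22 respiratory tract infection trials).

```python
# Hypothetical 2x2 tables (a, b, c, d) = (events and non-events in the treated arm,
# events and non-events in the control arm) for a few trials.
tables = [
    (12,  88, 20,  80),
    ( 5,  45,  9,  41),
    (30, 170, 45, 155),
]

# Classic Mantel-Haenszel pooled odds ratio: a fixed-effect estimator weighting
# each trial's odds ratio by b*c/n.
num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
print(f"Mantel-Haenszel pooled OR = {num / den:.2f}")
```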