979 results for Asymptotic Mean Squared Errors
Abstract:
Remote sensing from space-borne platforms is often seen as an appealing method of monitoring components of the hydrological cycle, including river discharge, due to its spatial coverage. However, data from these platforms are often less than ideal because the geophysical properties of interest are rarely measured directly and the measurements that are taken can be subject to significant errors. This study assimilated water levels derived from a TerraSAR-X synthetic aperture radar image and digital aerial photography with simulations from a two-dimensional hydraulic model to estimate discharge, inundation extent, depths and velocities at the confluence of the rivers Severn and Avon, UK. An ensemble Kalman filter was used to assimilate spot-height water levels derived by intersecting shorelines from the imagery with a digital elevation model. Discharge was estimated from the ensemble of simulations using state augmentation and then compared with gauge data. Assimilating the real data reduced the error between analyzed mean water levels and levels from three gauging stations to less than 0.3 m, which is less than typically found in post-event water-mark data from the field at these scales. Measurement bias was evident, but the method still provided a means of improving estimates of discharge for high flows where gauge data are unavailable or of poor quality. Posterior estimates of discharge had standard deviations between 52.7 m³ s⁻¹ and 63.3 m³ s⁻¹, which were below 15% of the gauged flows along the reach. Therefore, assuming a roughness uncertainty of 0.03-0.05 and no model structural errors, discharge could be estimated by the EnKF with accuracy similar to that arguably expected from gauging stations during flood events. Quality control prior to assimilation, where measurements were rejected for being in areas of high topographic slope or close to tall vegetation and trees, was found to be essential. The study demonstrates the potential, but also the significant limitations, of currently available imagery to reduce discharge uncertainty in un-gauged or poorly gauged basins when combined with model simulations in a data assimilation framework.
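The analysis step described above is a standard stochastic (perturbed-observation) ensemble Kalman filter in which the inflow discharge is appended to the state vector, so the same gain that corrects the water levels also updates the discharge. A minimal sketch in Python, assuming a linear observation operator H mapping model water levels to the image-derived spot heights; the function name, shapes and values are illustrative, not the study's actual implementation:

```python
import numpy as np

def enkf_update(states, discharges, obs, obs_err_sd, H):
    """One EnKF analysis step with state augmentation (illustrative).

    states:     (n_ens, n_state) ensemble of water levels at model nodes
    discharges: (n_ens,) ensemble of inflow discharges (augmented state)
    obs:        (n_obs,) spot water levels derived from imagery
    obs_err_sd: observation error standard deviation (m)
    H:          (n_obs, n_state) linear observation operator
    """
    # Augment the state with discharge so it is updated by the same gain
    X = np.hstack([states, discharges[:, None]])          # (n_ens, n_aug)
    n_ens = X.shape[0]

    A = X - X.mean(axis=0)                                # ensemble anomalies
    HX = states @ H.T                                     # simulated observations
    HA = HX - HX.mean(axis=0)

    # Sample covariances and Kalman gain
    P_xy = A.T @ HA / (n_ens - 1)                         # (n_aug, n_obs)
    P_yy = HA.T @ HA / (n_ens - 1) + obs_err_sd**2 * np.eye(len(obs))
    K = P_xy @ np.linalg.inv(P_yy)

    # Perturbed-observation update of each member
    rng = np.random.default_rng(0)
    for i in range(n_ens):
        y_pert = obs + rng.normal(0.0, obs_err_sd, size=len(obs))
        X[i] += K @ (y_pert - HX[i])

    return X[:, :-1], X[:, -1]                            # levels, discharge
```

Because the discharge sits in the augmented state, posterior discharge statistics (such as the standard deviations quoted above) fall out of the analysed ensemble directly.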
Abstract:
Process-based integrated modelling of weather and crop yield over large areas is becoming an important research topic. The production of the DEMETER ensemble hindcasts of weather allows this work to be carried out in a probabilistic framework. In this study, ensembles of crop yield (groundnut, Arachis hypogaea L.) were produced for ten 2.5° × 2.5° grid cells in western India using the DEMETER ensembles and the general large-area model (GLAM) for annual crops. Four key issues are addressed by this study. First, crop model calibration methods for use with weather ensemble data are assessed. Calibration using yield ensembles was more successful than calibration using reanalysis data (the European Centre for Medium-Range Weather Forecasts 40-yr reanalysis, ERA40). Secondly, the potential for probabilistic forecasting of crop failure is examined. The hindcasts show skill in the prediction of crop failure, with more severe failures being more predictable. Thirdly, the use of yield ensemble means to predict interannual variability in crop yield is examined and their skill assessed relative to baseline simulations using ERA40. The accuracy of multi-model yield ensemble means is equal to or greater than the accuracy using ERA40. Fourthly, the impact of two key uncertainties, sowing window and spatial scale, is briefly examined. The impact of uncertainty in the sowing window is greater with ERA40 than with the multi-model yield ensemble mean. Subgrid heterogeneity affects model accuracy: where correlations are low on the grid scale, they may be significantly positive on the subgrid scale. The implications of the results of this study for yield forecasting on seasonal time-scales are as follows. (i) There is the potential for probabilistic forecasting of crop failure (defined by a threshold yield value); forecasting of yield terciles shows less potential. (ii) Any improvement in the skill of climate models has the potential to translate into improved deterministic yield prediction. (iii) Whilst model input uncertainties are important, uncertainty in the sowing window may not require specific modelling. The implications of the results of this study for yield forecasting on multidecadal (climate change) time-scales are as follows. (i) The skill in the ensemble mean suggests that the perturbation, within uncertainty bounds, of crop and climate parameters could potentially average out some of the errors associated with mean yield prediction. (ii) For a given technology trend, decadal fluctuations in the yield-gap parameter used by GLAM may be relatively small, implying some predictability on those time-scales.
Abstract:
The importance of temperature in the determination of the yield of an annual crop (groundnut; Arachis hypogaea L. in India) was assessed. Simulations from a regional climate model (PRECIS) were used with a crop model (GLAM) to examine crop growth under simulated current (1961-1990) and future (2071-2100) climates. Two processes were examined: the response of crop duration to mean temperature and the response of seed-set to extremes of temperature. The relative importance of, and interaction between, these two processes was examined for a number of genotypic characteristics, which were represented by using different values of crop model parameters derived from experiments. The impact of mean and extreme temperatures varied geographically, and depended upon the simulated genotypic properties. High temperature stress was not a major determinant of simulated yields in the current climate, but affected the mean and variability of yield under climate change in two regions which had contrasting statistics of daily maximum temperature. Changes in mean temperature had a similar impact on mean yield to that of high temperature stress in some locations and its effects were more widespread. Where the optimal temperature for development was exceeded, the resulting increase in duration in some simulations fully mitigated the negative impacts of extreme temperatures when sufficient water was available for the extended growing period. For some simulations the reduction in mean yield between the current and future climates was as large as 70%, indicating the importance of genotypic adaptation to changes in both means and extremes of temperature under climate change.
Abstract:
A more complete understanding of amino acid (AA) metabolism by the various tissues of the body is required to improve upon current systems for predicting the use of absorbed AA. The objective of this work was to construct and parameterize a model of net removal of AA by the portal-drained viscera (PDV). Six cows were prepared with arterial, portal, and hepatic catheters and infused abomasally with 0, 200, 400, or 600 g of casein daily. Casein infusion increased milk yield quadratically and tended to increase milk protein yield quadratically. Arterial concentrations of a number of essential AA increased linearly with respect to infusion amount. When infused casein was assumed to have a true digestion coefficient of 0.95, the minimum likely true digestion coefficient for noninfused duodenal protein was found to be 0.80. Net PDV use of AA appeared to be linearly related to total supply (arterial plus absorption), and extraction percentages ranged from 0.5 to 7.25% for essential AA. Prediction errors for portal vein AA concentrations ranged from 4 to 9% of the observed mean concentrations. Removal of AA by PDV represented approximately 33% of total postabsorptive catabolic use, including use during absorption but excluding use for milk protein synthesis, and was apparently adequate to support endogenous N losses in feces of 18.4 g/d. As 69% of this use was from arterial blood, increased PDV catabolism of AA in part represents increased absorption of AA in excess of amounts required by other body tissues. Based on the present model, increased anabolic use of AA in the mammary and other tissues would reduce the catabolic use of AA by the PDV.
Abstract:
A study was conducted to estimate variation among laboratories and between manual and automated techniques of measuring pressure on the resulting gas production profiles (GPP). Eight feeds (molassed sugarbeet feed, grass silage, maize silage, soyabean hulls, maize gluten feed, whole crop wheat silage, wheat, glucose) were milled to pass a 1 mm screen and sent to three laboratories (ADAS Nutritional Sciences Research Unit, UK; Institute of Grassland and Environmental Research (IGER), UK; Wageningen University, The Netherlands). Each laboratory measured GPP over 144 h using standardised procedures with manual pressure transducers (MPT) and automated pressure systems (APS). The APS at ADAS used a pressure transducer and bottles in a shaking water bath, while the APS at Wageningen and IGER used a pressure sensor and bottles held in a stationary rack. Apparent dry matter degradability (ADDM) was estimated at the end of the incubation. GPP were fitted to a modified Michaelis-Menten model assuming a single phase of gas production, and GPP were described in terms of the asymptotic volume of gas produced (A), the time to half A (B), the time of maximum gas production rate (t_RM gas) and the maximum gas production rate (R_M gas). There were effects (P<0.001) of substrate on all parameters. However, MPT produced more (P<0.001) gas, but with longer (P<0.001) B and t_RM gas (P<0.05) and lower (P<0.001) R_M gas compared to APS. There was no difference between apparatus in ADDM estimates. Interactions occurred between substrate and apparatus, substrate and laboratory, and laboratory and apparatus. However, when mean values for MPT were regressed across the individual laboratories, relationships were good (i.e., adjusted R² = 0.827 or higher). Good relationships were also observed with APS, although they were weaker than for MPT (i.e., adjusted R² = 0.723 or higher). The relationships between mean MPT and mean APS data were also good (i.e., adjusted R² = 0.844 or higher). Data suggest that, although laboratory and method of measuring pressure are sources of variation in GPP estimation, it should be possible using appropriate mathematical models to standardise data among laboratories so that data from one laboratory could be extrapolated to others. This would allow development of a database of GPP data from many diverse feeds.
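The abstract does not print the fitted equation, so the sketch below uses one common single-phase "modified Michaelis-Menten" form, G(t) = A / (1 + (B/t)^C), in which A is the asymptotic gas volume and B is the time to half of A; the gas readings, starting values and shape parameter C are hypothetical, for illustration only:

```python
import numpy as np
from scipy.optimize import curve_fit

# Single-phase gas production curve; one common "modified Michaelis-Menten"
# parameterisation (the abstract does not give the exact equation, so this
# form is an assumption for illustration).
def gas(t, A, B, C):
    # A: asymptotic gas volume, B: time to half of A, C: shape parameter
    return A / (1.0 + (B / t) ** C)

# Hypothetical cumulative gas readings (mL) over a 144 h incubation
t_obs = np.array([2, 4, 8, 12, 24, 48, 72, 96, 120, 144], dtype=float)
g_obs = np.array([5, 14, 38, 62, 118, 168, 186, 193, 196, 198], dtype=float)

(A, B, C), _ = curve_fit(gas, t_obs, g_obs, p0=[200.0, 20.0, 1.5])

# Maximum rate (R_M gas) and its timing (t_RM gas), found numerically
# from the fitted curve
tt = np.linspace(1.0, 144.0, 2000)
rate = np.gradient(gas(tt, A, B, C), tt)
R_M, t_RM = rate.max(), tt[rate.argmax()]
print(f"A={A:.1f} mL, B={B:.1f} h, t_RM={t_RM:.1f} h, R_M={R_M:.2f} mL/h")
```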
Abstract:
The primary purpose of this study was to model the partitioning of evapotranspiration in a maize-sunflower intercrop at various canopy covers. The Shuttleworth-Wallace (SW) model was extended for intercropping systems to include both crop transpiration and soil evaporation and to allow interaction between the two. To test the accuracy of the extended SW model, two field experiments on a maize-sunflower intercrop were conducted in 1998 and 1999. Plant transpiration and soil evaporation were measured using sap flow gauges and lysimeters, respectively. The mean prediction error (simulated minus measured values) for transpiration was zero, indicating no overall bias in estimation error, and its accuracy was not affected by plant growth stage, although simulated transpiration tended to be slightly underestimated during periods of high measured transpiration. Overall, the predictions of daily soil evaporation were also accurate. Model estimation errors were probably due to the simplified modelling of soil water content, stomatal resistances and soil heat flux, as well as to uncertainties in characterising the micrometeorological conditions. The SW model's prediction of transpiration was most sensitive to parameters most directly related to canopy characteristics, such as the partitioning of captured solar radiation, canopy resistance, and bulk boundary layer resistance.
Abstract:
The problem of estimating the individual probabilities of a discrete distribution is considered. The true distribution of the independent observations is a mixture of a family of power series distributions. First, we ensure identifiability of the mixing distribution under mild conditions. Next, the mixing distribution is estimated by non-parametric maximum likelihood and an estimator for the individual probabilities is obtained from the corresponding marginal mixture density. We establish asymptotic normality for the estimator of the individual probabilities by showing that, under certain conditions, the difference between this estimator and the empirical proportions is asymptotically negligible. Our framework includes Poisson, negative binomial and logarithmic series as well as binomial mixture models. Simulations highlight the benefit of using the proposed marginal mixture density approach instead of the empirical proportions in achieving normality, especially for small sample sizes and/or when interest is in the tail areas. A real data example is given to illustrate the use of the methodology.
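As a concrete illustration for the Poisson case, the sketch below approximates the non-parametric maximum likelihood estimate of the mixing distribution by EM over a fixed grid of support points (a common computational surrogate for the full NPMLE) and then reads off an individual probability from the marginal mixture density; the sample, grid and the value of k are invented for the demonstration:

```python
import numpy as np
from scipy.stats import poisson

def npmle_poisson_mixture(x, grid, n_iter=500):
    """Approximate the NPMLE of the mixing distribution of a Poisson
    mixture by EM over a fixed grid of support points."""
    L = poisson.pmf(x[:, None], grid[None, :])   # (n, m) likelihood matrix
    w = np.full(len(grid), 1.0 / len(grid))      # initial mixing weights
    for _ in range(n_iter):
        post = L * w                             # E-step: responsibilities
        post /= post.sum(axis=1, keepdims=True)
        w = post.mean(axis=0)                    # M-step: update weights
    return w

rng = np.random.default_rng(1)
lam = rng.choice([1.0, 5.0], size=300, p=[0.6, 0.4])   # true two-point mixing
x = rng.poisson(lam)

grid = np.linspace(0.1, 10.0, 60)
w = npmle_poisson_mixture(x, grid)

# Individual probability P(X = k) from the marginal mixture density,
# compared with the raw empirical proportion
k = 3
p_mix = (w * poisson.pmf(k, grid)).sum()
p_emp = (x == k).mean()
print(f"P(X={k}): mixture {p_mix:.4f}, empirical {p_emp:.4f}")
```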
Abstract:
Mean platelet volume (MPV) and platelet count (PLT) are highly heritable and tightly regulated traits. We performed a genome-wide association study for MPV and identified one SNP, rs342293, as having highly significant and reproducible association with MPV (per-G allele effect 0.016 ± 0.001 log fL; P < 1.08 × 10⁻²⁴) and PLT (per-G effect −4.55 ± 0.80 × 10⁹/L; P < 7.19 × 10⁻⁸) in 8586 healthy subjects. Whole-genome expression analysis in the 1-Mb region showed a significant association with platelet transcript levels for PIK3CG (n = 35; P = .047). The G allele at rs342293 was also associated with decreased binding of annexin V to platelets activated with collagen-related peptide (n = 84; P = .003). The region 7q22.3 identifies the first QTL influencing platelet volume, counts, and function in healthy subjects. Notably, the association signal maps to a chromosome region implicated in myeloid malignancies, indicating this site as an important regulatory site for hematopoiesis. The identification of loci regulating MPV by this and other studies will increase our insight into the processes of megakaryopoiesis and proplatelet formation, and it may aid the identification of genes that are somatically mutated in essential thrombocytosis.
Abstract:
This paper presents in detail a theoretical adaptive model of thermal comfort based on the "Black Box" theory, taking into account cultural, climatic, social, psychological and behavioural adaptations, which have an impact on the senses used to detect thermal comfort. The model is called the Adaptive Predicted Mean Vote (aPMV) model. The aPMV model explains, by applying the cybernetics concept, the phenomenon that the Predicted Mean Vote (PMV) is greater than the Actual Mean Vote (AMV) in free-running buildings, which has been revealed by many researchers in field studies. An adaptive coefficient (λ) representing the adaptive factors that affect the sense of thermal comfort has been proposed. The empirical coefficients in warm and cool conditions for the Chongqing area in China have been derived by applying the least squares method to the monitored on-site environmental data and the thermal comfort survey results.
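The abstract does not reproduce the model equation itself; the relation generally reported for the aPMV model, given here as a hedged reconstruction rather than a quotation, links the two votes through the adaptive coefficient:

```latex
% aPMV as a function of PMV and the adaptive coefficient \lambda
% (reconstruction from the aPMV literature, not printed in this abstract)
\[
  \mathrm{aPMV} = \frac{\mathrm{PMV}}{1 + \lambda \cdot \mathrm{PMV}}
\]
```

With λ > 0 in warm conditions the denominator exceeds one for positive PMV, which is consistent with the abstract's observation that PMV overpredicts the Actual Mean Vote in free-running buildings.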
What do we mean when we refer to Bacteroidetes populations in the human gastrointestinal microbiota?
Abstract:
Recent large-scale cloning studies have shown that the ratio of Bacteroidetes to Firmicutes may be important in the obesity-associated gut microbiota, but the species these phyla represent in this ecosystem have not been examined. The Bacteroidetes data from the recent Turnbaugh study were examined to determine those members of the phylum detected in human faecal samples. In addition, FISH analysis was performed on faecal samples from 17 healthy, non-obese donors using probe Bac303, routinely used by gut microbiologists to enumerate Bacteroides-Prevotella populations in faecal samples, and another probe (CFB286) whose target range has some overlap with that of Bac303. Sequence analysis of the Turnbaugh data showed that 23/519 clones were chimeras or erroneous sequences; all good sequences were related to species of the order Bacteroidales, but no one species was present in all donors. FISH analysis demonstrated that approximately one-quarter of the healthy, non-obese donors harboured high numbers of Bacteroidales not detected by probe Bac303. It is clear that Bacteroidales populations in human faecal samples have been underestimated in FISH-based studies. New probes and complementary primer sets should be designed to examine numerical and compositional changes in the Bacteroidales during dietary interventions and in studies of the obesity-associated microbiota in humans and animal model systems.
Abstract:
Objective: To describe the use of a multifaceted strategy for recruiting general practitioners (GPs) and community pharmacists to talk about medication errors that have resulted in preventable drug-related admissions to hospital, a potentially sensitive subject with medicolegal implications. Setting: Four primary care trusts and one teaching hospital in the UK. Method: Letters were mailed to community pharmacists and general practitioners asking for provisional consent to be interviewed and permission to contact them again should a patient be admitted to hospital as a result of a medication error. In addition, GPs were asked for permission to approach their patients should they be admitted to hospital. A multifaceted approach to recruitment was used, including gaining support for the study from professional defence agencies and local champions. Key findings: Eighty-one percent (310/385) of GPs and 62% (93/149) of community pharmacists responded to the letters. Eighty-six percent (266/310) of GPs who responded and 81% (75/93) of community pharmacists who responded gave provisional consent to participate in interviews. All GPs (14 out of 14) and community pharmacists (10 out of 10) who were subsequently asked to participate, when patients were admitted to hospital, agreed to be interviewed. Conclusion: The multifaceted approach to recruitment was associated with a high response rate when asking healthcare professionals to be interviewed about medication errors that have resulted in preventable drug-related morbidity.
Abstract:
Background: Medication errors are an important cause of morbidity and mortality in primary care. The aims of this study are to determine the effectiveness, cost-effectiveness and acceptability of a pharmacist-led, information-technology-based complex intervention compared with simple feedback in reducing the proportions of patients at risk from potentially hazardous prescribing and medicines management in general (family) practice. Methods: Research subject group: "At-risk" patients registered with computerised general practices in two geographical regions in England. Design: Parallel-group pragmatic cluster randomised trial. Interventions: Practices will be randomised to either: (i) computer-generated feedback; or (ii) a pharmacist-led intervention comprising computer-generated feedback, educational outreach and dedicated support. Primary outcome measures: The proportion of patients in each practice at six and 12 months post intervention: (i) with a computer-recorded history of peptic ulcer being prescribed non-selective non-steroidal anti-inflammatory drugs; (ii) with a computer-recorded diagnosis of asthma being prescribed beta-blockers; and (iii) aged 75 years and older receiving long-term prescriptions for angiotensin-converting enzyme inhibitors or loop diuretics without a recorded assessment of renal function and electrolytes in the preceding 15 months. Secondary outcome measures: These relate to a number of other examples of potentially hazardous prescribing and medicines management. Economic analysis: An economic evaluation will be done of the cost per error avoided, from the perspective of the UK National Health Service (NHS), comparing the pharmacist-led intervention with simple feedback. Qualitative analysis: A qualitative study will be conducted to explore the views and experiences of health care professionals and NHS managers concerning the interventions, and to investigate possible reasons why the interventions prove effective, or conversely prove ineffective. Sample size: 34 practices in each of the two treatment arms would provide at least 80% power (two-tailed alpha of 0.05) to demonstrate a 50% reduction in error rates for each of the three primary outcome measures in the pharmacist-led intervention arm, compared with an 11% reduction in the simple feedback arm. Discussion: At the time of submission of this article, 72 general practices have been recruited (36 in each arm of the trial) and the interventions have been delivered. Analysis has not yet been undertaken.
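As a rough illustration of the kind of calculation behind the quoted sample size, the sketch below runs a standard two-proportion power computation and inflates it by a design effect for cluster randomisation. The baseline risk, cluster size and intra-cluster correlation are invented placeholders, not the trial's actual assumptions, so the result will not reproduce the protocol's 34 practices per arm:

```python
import math

from statsmodels.stats.power import NormalIndPower
from statsmodels.stats.proportion import proportion_effectsize

# Invented placeholder assumptions (NOT the trial's actual figures):
p_baseline = 0.10                      # hypothetical proportion of at-risk patients
p_control = p_baseline * (1 - 0.11)    # 11% reduction under simple feedback
p_interv = p_baseline * (1 - 0.50)     # 50% reduction under the intervention
m = 60                                 # hypothetical at-risk patients per practice
icc = 0.05                             # hypothetical intra-cluster correlation

# Individually randomised two-proportion sample size, then inflate by the
# design effect to account for randomising whole practices (clusters)
es = abs(proportion_effectsize(p_interv, p_control))
n_individual = NormalIndPower().solve_power(
    effect_size=es, alpha=0.05, power=0.80, alternative="two-sided"
)
design_effect = 1 + (m - 1) * icc
practices_per_arm = math.ceil(n_individual * design_effect / m)
print(f"patients/arm (individual): {n_individual:.0f}, "
      f"practices/arm (clustered): {practices_per_arm}")
```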
Abstract:
The potential of clarification questions (CQs) to act as a form of corrective input for young children's grammatical errors was examined. Corrective responses were operationalized as those occasions when child speech shifted from erroneous to correct (E -> C) contingent on a clarification question. It was predicted that E -> C sequences would prevail over shifts in the opposite direction (C -> E), as can occur in the case of non-error-contingent CQs. This prediction was tested via a standard intervention paradigm, whereby every 60 s a sequence of two clarification requests (either specific or general) was introduced into conversation with a total of 45 2- and 4-year-old children. For 10 categories of grammatical structure, E -> C sequences predominated over their C -> E counterparts, with levels of E -> C shifts increasing after two clarification questions. Children were also more reluctant to repeat erroneous forms than their correct counterparts following the intervention of CQs. The findings provide support for Saxton's prompt hypothesis, which predicts that error-contingent CQs bear the potential to cue recall of previously acquired grammatical forms.
Abstract:
Purpose. Accommodation can mask hyperopia and reduce the accuracy of non-cycloplegic refraction. It is, therefore, important to minimize accommodation to obtain as accurate a measure of hyperopia as possible. To characterize the parameters required to measure the maximally hyperopic error using photorefraction, we used different target types and distances to determine which target was most likely to maximally relax accommodation and thus more accurately detect hyperopia in an individual. Methods. A PlusoptiX SO4 infra-red photorefractor was mounted in a remote haploscope that presented the targets. All participants were tested with targets at four fixation distances between 0.3 and 2 m containing all combinations of blur, disparity, and proximity/looming cues. Thirty-eight infants (6 to 44 weeks) were studied longitudinally, and 104 children [4 to 15 years (mean 6.4)] and 85 adults, with a range of refractive errors and binocular vision status, were tested once. Cycloplegic refraction data were available for a subset of 59 participants spread across the age range. Results. The maximally hyperopic refraction (MHR) found at any time in the session was most frequently obtained when fixating the most distant targets and those containing disparity and dynamic proximity/looming cues. Presence or absence of blur was less significant, and targets in which only single cues to depth were present were also less likely to produce MHR. MHR correlated closely with cycloplegic refraction (r = 0.93, mean difference 0.07 D, p = n.s., 95% confidence interval within ±0.25 D) after correction by a calibration factor. Conclusions. Maximum relaxation of accommodation occurred for binocular targets receding into the distance. Proximal and disparity cues aid relaxation of accommodation to a greater extent than blur, and thus non-cycloplegic refraction targets should incorporate these cues. This is especially important in screening contexts with a brief opportunity to test for significant hyperopia. MHR in our laboratory was found to be a reliable estimate of cycloplegic refraction.