69 results for Estimating
Abstract:
Objectives: To assess the short- and long-term reproducibility of a short food group questionnaire, and to compare its performance in estimating nutrient intakes against a 7-day diet diary. Design: Participants in the reproducibility study completed the food group questionnaire at two time points, up to 2 years apart. Participants in the performance study completed both the food group questionnaire and a 7-day diet diary a few months apart. Reproducibility was assessed by kappa statistics and percentage change between the two questionnaires; performance was assessed by kappa statistics, rank correlations and the percentages of participants classified into the same and opposite thirds of intake. Setting: A random sample of participants in the Million Women Study, a population-based prospective study in the UK. Subjects: In total, 12 221 women aged 50-64 years. Results: In the reproducibility study, 75% of the food group items showed at least moderate agreement for all four time-point comparisons. Items showing fair agreement or worse tended to be those that few respondents reported eating more than once a week, those consumed in small amounts and those relating to types of fat consumed. Compared with the diet diary, the food group questionnaire showed consistently reasonable performance for the nutrients carbohydrate, saturated fat, cholesterol, total sugars, alcohol, fibre, calcium, riboflavin, folate and vitamin C. Conclusions: The short food group questionnaire used in this study has been shown to be reproducible over time and to perform reasonably well for the assessment of a number of dietary nutrients.
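As an illustration of the agreement statistic used above, here is a minimal sketch (in Python, with made-up responses and an illustrative helper name) of Cohen's kappa for two administrations of a categorical food-group item; it is not the study's analysis code.

```python
import numpy as np

def cohens_kappa(ratings_a, ratings_b, categories):
    """Cohen's kappa for agreement between two administrations of a categorical item."""
    a = np.asarray(ratings_a)
    b = np.asarray(ratings_b)
    # Observed agreement
    p_o = np.mean(a == b)
    # Expected agreement under independence of the two marginal distributions
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in categories)
    return (p_o - p_e) / (1.0 - p_e)

# Hypothetical responses to one food-group item at the two time points
t1 = ["rarely", "rarely", "weekly", "monthly", "weekly", "rarely"]
t2 = ["rarely", "monthly", "weekly", "monthly", "weekly", "rarely"]
print(cohens_kappa(t1, t2, ["rarely", "monthly", "weekly"]))
```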
Abstract:
1. Wildlife managers often require estimates of abundance. Direct methods of estimation are often impractical, especially in closed-forest environments, so indirect methods such as dung or nest surveys are increasingly popular. 2. Dung and nest surveys typically have three elements: surveys to estimate abundance of the dung or nests; experiments to estimate the production (defecation or nest construction) rate; and experiments to estimate the decay or disappearance rate. The last of these is usually the most problematic, and was the subject of this study. 3. The design of experiments to allow robust estimation of mean time to decay was addressed. In most studies to date, dung or nests have been monitored until they disappear. Instead, we advocate that fresh dung or nests are located, with a single follow-up visit to establish whether the dung or nest is still present or has decayed. 4. Logistic regression was used to estimate probability of decay as a function of time, and possibly of other covariates. Mean time to decay was estimated from this function. 5. Synthesis and applications. Effective management of mammal populations usually requires reliable abundance estimates. The difficulty in estimating abundance of mammals in forest environments has increasingly led to the use of indirect survey methods, in which abundance of sign, usually dung (e.g. deer, antelope and elephants) or nests (e.g. apes), is estimated. Given estimated rates of sign production and decay, sign abundance estimates can be converted to estimates of animal abundance. Decay rates typically vary according to season, weather, habitat, diet and many other factors, making reliable estimation of mean time to decay of signs present at the time of the survey problematic. We emphasize the need for retrospective rather than prospective rates, propose a strategy for survey design, and provide analysis methods for estimating retrospective rates.
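A minimal sketch of the analysis advocated in points 3-4, assuming hypothetical follow-up data and using scikit-learn's logistic regression as a stand-in for whatever software the authors used: the probability of decay is modelled as a function of sign age, and mean time to decay is obtained by integrating the fitted survival curve.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical single-revisit data: age of each dung pile or nest (days) at the
# follow-up visit, and whether it had decayed (1) or was still present (0).
age_days = np.array([5, 12, 20, 33, 41, 55, 60, 75, 90, 120], dtype=float)
decayed  = np.array([0,  0,  0,  1,  0,  1,  1,  1,  1,   1])

# Logistic regression of decay probability on time; covariates such as habitat
# or rainfall could be added as extra columns of the design matrix.
model = LogisticRegression(C=1e6)  # very weak regularisation, close to plain MLE
model.fit(age_days.reshape(-1, 1), decayed)

# Mean time to decay = integral of the survival curve S(t) = 1 - P(decayed by t).
t = np.linspace(0.0, 1000.0, 20001)
p_decayed = model.predict_proba(t.reshape(-1, 1))[:, 1]
mean_time_to_decay = np.trapz(1.0 - p_decayed, t)
print(f"Estimated mean time to decay: {mean_time_to_decay:.1f} days")
```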
Abstract:
A method of estimating dissipation rates from a vertically pointing Doppler lidar with high temporal and spatial resolution has been evaluated by comparison with independent measurements derived from a balloon-borne sonic anemometer. This method utilizes the variance of the mean Doppler velocity from a number of sequential samples and requires an estimate of the horizontal wind speed. The noise contribution to the variance can be estimated from the observed signal-to-noise ratio and removed where appropriate. The relative size of the noise variance to the observed variance provides a measure of the confidence in the retrieval. Comparison with in situ dissipation rates derived from the balloon-borne sonic anemometer reveals that this particular Doppler lidar is capable of retrieving dissipation rates over a range of at least three orders of magnitude. This method is most suitable for retrieval of dissipation rates within the convective well-mixed boundary layer, where the scales of motion that the Doppler lidar probes remain well within the inertial subrange. Caution must be applied when estimating dissipation rates in more quiescent conditions. For the particular Doppler lidar described here, the selection of suitably short integration times will permit this method to be applicable in such situations, but at the expense of accuracy in the Doppler velocity estimates. The two case studies presented here suggest that, with profiles every 4 s, reliable estimates of ϵ can be derived to within at least an order of magnitude throughout almost all of the lowest 2 km and, in the convective boundary layer, to within 50%. Increasing the integration time for individual profiles to 30 s can improve the accuracy substantially but potentially confines retrievals to within the convective boundary layer. Therefore, optimization of certain instrument parameters may be required for specific implementations.
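The abstract does not give the retrieval formula, so the sketch below shows the inertial-subrange scaling commonly used for this kind of Doppler-lidar retrieval; the Kolmogorov constant, the length-scale definition and the numerical values are assumptions, not taken from the paper.

```python
import numpy as np

def dissipation_rate(sigma_v2, horiz_wind, n_samples, dwell_time, kolmogorov_const=0.52):
    """Estimate the TKE dissipation rate (m^2 s^-3) from the variance of mean
    Doppler velocities, assuming the sampled scales lie in the inertial subrange.

    sigma_v2   : variance of the mean Doppler velocity, noise removed (m^2 s^-2)
    horiz_wind : horizontal wind speed advecting eddies through the beam (m s^-1)
    n_samples  : number of sequential velocity samples used for the variance
    dwell_time : integration time of a single sample (s)
    """
    # Length of turbulence swept past the vertically pointing beam during averaging
    length_scale = horiz_wind * n_samples * dwell_time
    return 2.0 * np.pi * (2.0 / (3.0 * kolmogorov_const)) ** 1.5 * sigma_v2 ** 1.5 / length_scale

# Illustrative values: 0.2 m^2 s^-2 velocity variance, 8 m/s wind, 10 samples of 4 s each
print(dissipation_rate(0.2, 8.0, 10, 4.0))
```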
Abstract:
A key aspect in designing an efficient decadal prediction system is ensuring that the uncertainty in the ocean initial conditions is sampled optimally. Here, we consider one strategy to address this issue by investigating the growth of optimal perturbations in the HadCM3 global climate model (GCM). More specifically, climatically relevant singular vectors (CSVs) - the small perturbations which grow most rapidly for a specific initial condition - are estimated for decadal timescales in the Atlantic Ocean. It is found that reliable CSVs can be estimated by running a large ensemble of integrations of the GCM. Amplification of the optimal perturbations occurs for more than 10 years, and possibly up to 40 years. The identified regions for growing perturbations are found to be in the far North Atlantic, and these perturbations cause amplification through an anomalous meridional overturning circulation response. Additionally, this type of analysis potentially informs the design of future ocean observing systems by identifying the sensitive regions where small uncertainties in the ocean state can grow maximally. Although these CSVs are expensive to compute, we identify ways in which the process could be made more efficient in the future.
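A minimal sketch of the general idea, with synthetic data: estimate a linear propagator from an ensemble of perturbed integrations and take its leading singular vector as the fastest-growing (optimal) perturbation. The reduced state space, ensemble size and norm are all assumptions for illustration, not the HadCM3 setup.

```python
import numpy as np

# Hypothetical ensemble of initial ocean-state perturbations (columns of X0) and the
# corresponding perturbations after the optimisation interval (X_tau), both projected
# onto a reduced state space of dimension n_state.
rng = np.random.default_rng(0)
n_state, n_members = 50, 200
X0 = rng.standard_normal((n_state, n_members))
M_true = rng.standard_normal((n_state, n_state)) / np.sqrt(n_state)
X_tau = M_true @ X0 + 0.1 * rng.standard_normal((n_state, n_members))

# Estimate the linear propagator from the ensemble by least squares, then take its
# leading singular vector: the initial perturbation that amplifies most over the
# interval (the climatically relevant singular vector in this simple norm).
M_hat, *_ = np.linalg.lstsq(X0.T, X_tau.T, rcond=None)
M_hat = M_hat.T
U, s, Vt = np.linalg.svd(M_hat)
optimal_perturbation = Vt[0]   # initial pattern with the largest growth
amplification_factor = s[0]    # growth factor over the optimisation interval
print(amplification_factor)
```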
Abstract:
A Bayesian method of estimating multivariate sample selection models is introduced and applied to the estimation of a demand system for food in the UK to account for censoring arising from infrequency of purchase. We show how it is possible to impose identifying restrictions on the sample selection equations and that, unlike a maximum likelihood framework, the imposition of adding up at both latent and observed levels is straightforward. Our results emphasise the role played by low incomes and socio-economic circumstances in leading to poor diets and also indicate that the presence of children in a household has a negative impact on dietary quality.
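The paper's model is a multivariate sample selection system; the sketch below illustrates only the underlying data-augmentation idea for censored purchases, using a single-equation Tobit-style Gibbs sampler on simulated household data. Variable names, priors and data are all illustrative assumptions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical household data: expenditure on a food group is zero for many
# households because of infrequent purchase (censoring at zero).
n = 500
X = np.column_stack([np.ones(n), rng.standard_normal(n)])   # intercept + log income
beta_true, sigma_true = np.array([0.5, 1.2]), 1.0
y_star = X @ beta_true + sigma_true * rng.standard_normal(n)
y = np.maximum(y_star, 0.0)                                  # observed, censored demand
censored = y == 0.0

# Gibbs sampler with data augmentation: draw the latent demand of censored
# households, then the coefficients and variance, in turn.
beta, sigma2 = np.zeros(2), 1.0
XtX_inv = np.linalg.inv(X.T @ X)
draws = []
for it in range(2000):
    # 1. Latent demand for censored households ~ normal truncated above at zero
    mu_c = X[censored] @ beta
    z = y.copy()
    z[censored] = stats.truncnorm.rvs(
        a=-np.inf, b=(0.0 - mu_c) / np.sqrt(sigma2),
        loc=mu_c, scale=np.sqrt(sigma2), random_state=rng)
    # 2. Coefficients given the augmented data (flat prior)
    beta_hat = XtX_inv @ X.T @ z
    beta = rng.multivariate_normal(beta_hat, sigma2 * XtX_inv)
    # 3. Variance given the augmented data (non-informative prior)
    resid = z - X @ beta
    sigma2 = stats.invgamma.rvs(a=0.5 * n, scale=0.5 * resid @ resid, random_state=rng)
    if it >= 500:
        draws.append(beta.copy())

print(np.mean(draws, axis=0))   # posterior mean of the coefficients
```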
Abstract:
Observations of a chemical at a point in the atmosphere typically show sudden transitions between episodes of high and low concentration. Often these are associated with a rapid change in the origin of air arriving at the site. Lagrangian chemical models riding along trajectories can reproduce such transitions, but small timing errors from trajectory phase errors dramatically reduce the correlation between modeled concentrations and observations. Here the origin averaging technique is introduced to obtain maps of average concentration as a function of air mass origin for the East Atlantic Summer Experiment 1996 (EASE96, a ground-based chemistry campaign). These maps are used to construct origin averaged time series which enable comparison between a chemistry model and observations with phase errors factored out. The amount of the observed signal explained by trajectory changes can be quantified, as can the systematic model errors as a function of air mass origin. The Cambridge Tropospheric Trajectory model of Chemistry and Transport (CiTTyCAT) can account for over 70% of the observed ozone signal variance during EASE96 when phase errors are side-stepped by origin averaging. The dramatic increase in correlation (from 23% without averaging) cannot be achieved by time averaging. The success of the model is attributed to the strong relationship between changes in ozone along trajectories and their origin and its ability to simulate those changes. The model performs less well for longer-lived chemical constituents because the initial conditions 5 days before arrival are insufficiently well known.
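A minimal sketch of origin averaging with made-up campaign data: each observation is replaced by the mean for its air-mass origin, so trajectory phase errors drop out, and the fraction of observed variance explained by origin changes (and the model-observation correlation after averaging) can be computed. Origin labels and values are hypothetical, not EASE96 data.

```python
import numpy as np
import pandas as pd

# Hypothetical campaign data: for each arrival time, an air-mass origin label
# derived from back-trajectories, plus observed and modelled ozone (ppbv).
rng = np.random.default_rng(2)
origins = rng.choice(["polar", "atlantic", "european", "local"], size=300)
base = {"polar": 25.0, "atlantic": 35.0, "european": 55.0, "local": 45.0}
obs = np.array([base[o] for o in origins]) + 5.0 * rng.standard_normal(300)
mod = np.array([base[o] for o in origins]) + 8.0 * rng.standard_normal(300)
df = pd.DataFrame({"origin": origins, "obs_o3": obs, "mod_o3": mod})

# Origin averaging: replace each point by the mean for its air-mass origin,
# so timing (phase) errors in individual trajectories are factored out.
origin_means = df.groupby("origin").mean()
obs_avg = df["origin"].map(origin_means["obs_o3"])
mod_avg = df["origin"].map(origin_means["mod_o3"])

# Fraction of the observed variance explained by origin changes alone, and the
# model-observation correlation after origin averaging.
explained = obs_avg.var() / df["obs_o3"].var()
r = np.corrcoef(obs_avg, mod_avg)[0, 1]
print(f"variance explained by origin: {explained:.2f}, r^2 after averaging: {r**2:.2f}")
```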
Abstract:
Many well-established statistical methods in genetics were developed in a climate of severe constraints on computational power. Recent advances in simulation methodology now bring modern, flexible statistical methods within the reach of scientists having access to a desktop workstation. We illustrate the potential advantages now available by considering the problem of assessing departures from Hardy-Weinberg (HW) equilibrium. Several hypothesis tests of HW have been established, as well as a variety of point estimation methods for the parameter which measures departures from HW under the inbreeding model. We propose a computational, Bayesian method for assessing departures from HW, which has a number of important advantages over existing approaches. The method incorporates the effects of uncertainty about the nuisance parameters (the allele frequencies) as well as the boundary constraints on f (which are functions of the nuisance parameters). Results are naturally presented visually, exploiting the graphics capabilities of modern computer environments to allow straightforward interpretation. Perhaps most importantly, the method is founded on a flexible, likelihood-based modelling framework, which can incorporate the inbreeding model if appropriate, but also allows the assumptions of the model to be investigated and, if necessary, relaxed. Under appropriate conditions, information can be shared across loci and, possibly, across populations, leading to more precise estimation. The advantages of the method are illustrated by application both to simulated data and to data analysed by alternative methods in the recent literature.
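A minimal grid-approximation sketch of the idea for a single biallelic locus, assuming hypothetical genotype counts and flat priors: the multinomial likelihood under the inbreeding model is evaluated over the allele frequency p and the inbreeding coefficient f, the boundary constraint on f (which depends on p) is imposed, and p is integrated out to give the marginal posterior for f. This is not the paper's full likelihood framework, only an illustration of the Bayesian treatment of the nuisance parameter and boundary.

```python
import numpy as np

# Hypothetical genotype counts at one biallelic locus.
n_AA, n_Aa, n_aa = 30, 43, 27

# Grid over the allele frequency p and inbreeding coefficient f.
p_grid = np.linspace(0.001, 0.999, 400)
f_grid = np.linspace(-0.999, 0.999, 400)
P, F = np.meshgrid(p_grid, f_grid)
Q = 1.0 - P

# Genotype probabilities under the inbreeding model.
pr_AA = P**2 + F * P * Q
pr_Aa = 2.0 * P * Q * (1.0 - F)
pr_aa = Q**2 + F * P * Q

# Boundary constraint on f: it depends on the nuisance parameter p, since all
# three genotype probabilities must be non-negative.
valid = (pr_AA >= 0) & (pr_Aa >= 0) & (pr_aa >= 0)

# Multinomial log-likelihood; flat prior over the valid region.
with np.errstate(divide="ignore", invalid="ignore"):
    loglik = n_AA * np.log(pr_AA) + n_Aa * np.log(pr_Aa) + n_aa * np.log(pr_aa)
loglik = np.where(valid, loglik, -np.inf)
post = np.exp(loglik - loglik.max())
post /= post.sum()

# Marginal posterior for f, integrating out the allele frequency.
post_f = post.sum(axis=1)
print("posterior mean of f:", np.sum(f_grid * post_f))
print("P(f > 0 | data):", post_f[f_grid > 0].sum())
```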
Abstract:
Research on the topic of liquidity has greatly benefited from the improved availability of data. Researchers have addressed questions regarding the factors that influence bid-ask spreads and the relationship between spreads and risk, return and liquidity. Intra-day data have been used to measure the effective spread and researchers have been able to refine the concepts of liquidity to include the price impact of transactions on a trade-by-trade analysis. The growth in the creation of tax-transparent securities has greatly enhanced the visibility of securitized real estate, and has naturally led to the question of whether the increased visibility of real estate has caused market liquidity to change. Although the growth in the public market for securitized real estate has occurred in international markets, it has not been accompanied by universal publication of transaction data. Therefore this paper develops an aggregate daily data-based test for liquidity and applies the test to US data in order to check for consistency with the results of prior intra-day analysis. If the two approaches produce similar results, we can apply the same technique to markets in which less detailed data are available and offer conclusions on the liquidity of a wider set of markets.
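The abstract does not specify the daily liquidity measure, so the sketch below uses an Amihud-style illiquidity ratio as one commonly used aggregate daily proxy, computed from simulated price and volume data; it should not be read as the paper's actual test.

```python
import numpy as np
import pandas as pd

# Hypothetical daily data for a securitised real-estate series (e.g. a REIT):
# closing prices and dollar trading volume.
rng = np.random.default_rng(3)
dates = pd.bdate_range("2020-01-01", periods=250)
prices = 100.0 * np.exp(np.cumsum(0.01 * rng.standard_normal(len(dates))))
dollar_volume = rng.lognormal(mean=15.0, sigma=0.5, size=len(dates))
df = pd.DataFrame({"price": prices, "dollar_volume": dollar_volume}, index=dates)

# Amihud-style illiquidity ratio: absolute daily return per unit of dollar volume;
# higher values indicate a larger price impact per traded dollar (lower liquidity).
df["ret"] = df["price"].pct_change()
df["illiq"] = df["ret"].abs() / df["dollar_volume"]
monthly_illiq = df["illiq"].resample("M").mean()
print(monthly_illiq.head())
```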
Abstract:
Various methods of assessment have been applied to the One Dimensional Time to Explosion (ODTX) apparatus and experiments with the aim of estimating the comparative violence of the explosion event. Non-mechanical methods used were a simple visual inspection, measuring the increase in the void volume of the anvils following an explosion, and measuring the velocity of the sound produced by the explosion over a 1 metre path. Mechanical methods used included monitoring piezo-electric devices inserted in the frame of the machine and measuring the rotational velocity of a rotating bar placed on top of the anvils after it had been displaced by the shock wave. This last method, which resembles the original Hopkinson bar experiments, seemed the easiest to apply and analyse, giving relative rankings of violence and the possibility of calculating a "detonation" pressure.
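As a rough illustration of the kind of calculation the rotating-bar measurement permits (every parameter below is an assumption for illustration, not a value from the study): the bar's measured rotational velocity gives its angular momentum, hence the impulse delivered by the blast, and dividing by an assumed loaded area and pulse duration gives an order-of-magnitude pressure.

```python
# Illustrative (assumed) parameters for a uniform steel bar resting across the anvils.
length = 0.30          # m, bar length
mass = 0.50            # kg, bar mass
omega = 120.0          # rad/s, measured rotational velocity after the explosion
lever_arm = 0.10       # m, distance from the pivot to where the blast loads the bar
loaded_area = 2.0e-4   # m^2, area of the bar face exposed to the blast
pulse_duration = 5e-6  # s, assumed duration of the pressure pulse

# Moment of inertia of a uniform bar rotating about one end: I = m L^2 / 3.
I = mass * length**2 / 3.0

# Impulse delivered to the bar, from its angular momentum: J = I * omega / r.
impulse = I * omega / lever_arm

# Order-of-magnitude "detonation" pressure: impulse per unit area per pulse duration.
pressure = impulse / (loaded_area * pulse_duration)
print(f"impulse ~ {impulse:.2f} N s, pressure ~ {pressure / 1e9:.1f} GPa")
```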