956 results for Tax revenue estimating
Abstract:
None of the current surveillance streams monitoring the presence of scrapie in Great Britain provides a comprehensive and unbiased estimate of the prevalence of the disease at the holding level. Previous work to estimate the under-ascertainment-adjusted prevalence of scrapie in Great Britain applied multiple-list capture–recapture methods. The enforcement of new control measures on scrapie-affected holdings in 2004 stopped the overlap between surveillance sources and, hence, the application of multiple-list capture–recapture models. Alternative methods, still within the capture–recapture methodology but relying on repeated entries in a single list, have been suggested for these situations. In this article, we apply one-list capture–recapture approaches to data held on the Scrapie Notifications Database to estimate the undetected population of scrapie-affected holdings with clinical disease in Great Britain for the years 2002, 2003, and 2004. To do so, we develop a new diagnostic tool for indicating heterogeneity as well as a new understanding of the Zelterman and Chao lower-bound estimators to account for potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a special, locally truncated Poisson likelihood equivalent to a binomial likelihood. This understanding allows the extension of the Zelterman approach by means of logistic regression to include observed heterogeneity in the form of covariates; in the case studied here, these are holding size and country of origin. Our results confirm the presence of substantial unobserved heterogeneity, supporting the application of our two estimators. The total scrapie-affected holding population in Great Britain is around 300 holdings per year. None of the covariates appears to inform the model significantly.
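For readers unfamiliar with the one-list estimators named above, the following minimal sketch (Python, with hypothetical notification counts rather than the Scrapie Notifications Database data) shows how the Zelterman and Chao lower-bound estimates are formed from the counts of holdings notified exactly once (f1) and exactly twice (f2).

```python
# Illustrative sketch (not the paper's code): one-list Zelterman and Chao
# lower-bound estimators of an undetected population from repeat-entry counts.
import math

def zelterman_estimate(n_observed, f1, f2):
    """Zelterman estimator: Poisson rate estimated from singletons/doubletons only.

    n_observed -- number of distinct holdings notified at least once
    f1, f2     -- counts of holdings notified exactly once / exactly twice
    """
    lam = 2.0 * f2 / f1                      # truncated-Poisson rate estimate
    return n_observed / (1.0 - math.exp(-lam))

def chao_lower_bound(n_observed, f1, f2):
    """Chao's lower-bound estimator, robust to unobserved heterogeneity."""
    return n_observed + f1 ** 2 / (2.0 * f2)

# Hypothetical counts, for illustration only.
n, f1, f2 = 120, 80, 25
print(zelterman_estimate(n, f1, f2), chao_lower_bound(n, f1, f2))
```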
Abstract:
This paper presents the method and findings of a contingent valuation (CV) study that aimed to elicit United Kingdom citizens' willingness to pay to support legislation to phase out the use of battery cages for egg production in the European Union (EU). The method takes account of various biases associated with the CV technique, including 'warm glow', 'part-whole' and sample response biases. Estimated mean willingness to pay to support the legislation is used to estimate the annual benefit of the legislation to UK citizens. This is compared with the estimated annual costs of the legislation over a 12-year period, which allows for readjustment by the UK egg industry. The analysis shows that the estimated benefits of the legislation outweigh the costs. The study demonstrates that CV is a potentially useful technique for assessing the likely benefits associated with proposed legislation. However, estimates of CV studies must be treated with caution. It is important that they are derived from carefully designed surveys and that the willingness to pay estimation method allows for various biases. (C) 2003 Elsevier Science B.V. All rights reserved.
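The benefit-cost comparison described above reduces to simple arithmetic: aggregate the mean willingness to pay over the relevant population and compare it with the annualized cost of the legislation. The sketch below illustrates only the form of that calculation; every figure in it is a placeholder assumption, not an estimate from the study.

```python
# Toy sketch of the benefit-cost comparison described above; all figures are
# placeholder assumptions, not results from the study.
mean_wtp_per_household = 10.0        # GBP per year (hypothetical)
n_households = 25_000_000            # UK households (rounded, illustrative)
annual_benefit = mean_wtp_per_household * n_households

annual_industry_cost = 150_000_000   # GBP per year (hypothetical)
print("benefits exceed costs:", annual_benefit > annual_industry_cost)
```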
Abstract:
Technical efficiency is estimated and examined for a cross-section of Australian dairy farms using various frontier methodologies: Bayesian and classical stochastic frontiers, and Data Envelopment Analysis. The results indicate that technical inefficiency is present in the sample data. Statistical differences are also identified between the point estimates of technical efficiency generated by the various methodologies. However, the ranking of farm-level technical efficiency is statistically invariant to the estimation technique employed. Finally, when confidence/credible intervals of technical efficiency are compared, significant overlap is found among the intervals for many of the farms across all frontier methods employed. The results indicate that the choice of estimation methodology may matter, but that the explanatory power of all frontier methods is significantly weaker when interval estimates of technical efficiency are examined.
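As an illustration of one of the frontier methods compared above, the following sketch computes input-oriented DEA (CCR) technical efficiency scores by linear programming with scipy. The farm inputs and outputs are invented, and the formulation is the textbook envelopment form rather than necessarily the specification used in the paper.

```python
# Minimal input-oriented DEA (CCR) sketch: for each farm (DMU) k, find the
# smallest theta such that a convex combination of all farms uses at most
# theta * inputs of k while producing at least the outputs of k.
import numpy as np
from scipy.optimize import linprog

def dea_efficiency(X, Y, k):
    n = X.shape[0]
    c = np.r_[1.0, np.zeros(n)]                     # minimise theta
    rows, rhs = [], []
    for i in range(X.shape[1]):                     # input constraints
        rows.append(np.r_[-X[k, i], X[:, i]])       # sum_j lam_j x_ij <= theta x_ik
        rhs.append(0.0)
    for r in range(Y.shape[1]):                     # output constraints
        rows.append(np.r_[0.0, -Y[:, r]])           # sum_j lam_j y_rj >= y_rk
        rhs.append(-Y[k, r])
    res = linprog(c, A_ub=np.array(rows), b_ub=np.array(rhs),
                  bounds=[(0, None)] * (n + 1), method="highs")
    return res.fun                                  # efficiency score in (0, 1]

# Hypothetical farms: inputs (cows, feed tonnes), output (milk, thousand litres)
X = np.array([[50, 200], [60, 180], [55, 260]], dtype=float)
Y = np.array([[300], [360], [310]], dtype=float)
print([round(dea_efficiency(X, Y, k), 3) for k in range(len(X))])
```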
Abstract:
This investigation assesses how accurately a dynamic mechanistic model estimates methanogenesis, using real data from a respiration trial in which cows were fed a wide range of different carbohydrates included in the concentrates. The model predicted ECM (energy-corrected milk) very well, while the NDF digestibility of fibrous feed was less well predicted. Methane emissions were predicted quite well, with the exception of one diet containing wheat. The mechanistic model is therefore a helpful tool for estimating methanogenesis based on chemical analysis and dry matter intake, but the prediction can still be improved.
Abstract:
The paper concerns the design and analysis of serial dilution assays to estimate the infectivity of a sample of tissue when it is assumed that the sample contains a finite number of indivisible infectious units such that a subsample will be infectious if it contains one or more of these units. The aim of the study is to estimate the number of infectious units in the original sample. The standard approach to the analysis of data from such a study is based on the assumption of independence of aliquots both at the same dilution level and at different dilution levels, so that the numbers of infectious units in the aliquots follow independent Poisson distributions. An alternative approach is based on calculation of the expected value of the total number of samples tested that are not infectious. We derive the likelihood for the data on the basis of the discrete number of infectious units, enabling calculation of the maximum likelihood estimate and likelihood-based confidence intervals. We use the exact probabilities that are obtained to compare the maximum likelihood estimate with those given by the other methods in terms of bias and standard error and to compare the coverage of the confidence intervals. We show that the methods have very similar properties and conclude that for practical use the method that is based on the Poisson assumption is to be recommended, since it can be implemented by using standard statistical software. Finally we consider the design of serial dilution assays, concluding that it is important that neither the dilution factor nor the number of samples that remain untested should be too large.
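A minimal sketch of the Poisson-assumption analysis recommended above for practical use: an aliquot containing a fraction v of the original sample is infectious with probability 1 - exp(-mu * v), where mu is the number of infectious units in the sample, and mu is estimated by maximizing the resulting binomial log-likelihood. The dilution scheme and counts below are hypothetical.

```python
# Sketch of the Poisson-assumption analysis of a serial dilution assay.
# Dilution fractions and infectivity counts below are invented for illustration.
import numpy as np
from scipy.optimize import minimize_scalar

fractions = np.array([1e-2, 1e-3, 1e-4, 1e-5])   # fraction of original sample per aliquot
tested    = np.array([6, 6, 6, 6])               # aliquots tested at each dilution
positive  = np.array([6, 5, 2, 0])               # infectious aliquots observed

def neg_log_lik(log_mu):
    mu = np.exp(log_mu)                           # infectious units in original sample
    p = 1.0 - np.exp(-mu * fractions)             # P(aliquot infectious)
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -np.sum(positive * np.log(p) + (tested - positive) * np.log(1 - p))

fit = minimize_scalar(neg_log_lik, bounds=(0.0, 20.0), method="bounded")
print("estimated infectious units:", round(float(np.exp(fit.x)), 1))
```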
Abstract:
Estimation of whole-grain (WG) food intake in epidemiological and nutritional studies is normally based on general-diet FFQ (food-frequency questionnaires), which are not designed to specifically capture WG intake. To estimate WG cereal intake, we developed a forty-three-item FFQ focused on cereal product intake over the past month. We validated this questionnaire against a 3-day weighed food record (3DWFR) in thirty-one subjects living in the French-speaking part of Switzerland (nineteen female and twelve male). Subjects completed the FFQ on day 1 (FFQ1), the 3DWFR between days 2 and 13, and the FFQ again on day 14 (FFQ2). The subjects provided a fasting blood sample within 1 week of FFQ2. Total cereal intake, total WG intake, intake of individual cereals, intake of different groups of cereal products and alkylresorcinol (AR) intake were calculated from both FFQ and the 3DWFR. Plasma AR, possible biomarkers for WG wheat and rye intake, were also analysed. The total WG intake for the 3DWFR, FFQ1 and FFQ2 was 26 (sd 22), 28 (sd 25) and 21 (sd 16) g/d, respectively. Mean plasma AR concentration was 55.8 (sd 26.8) nmol/l. FFQ1, FFQ2 and plasma AR were correlated with the 3DWFR (r = 0.72, 0.81 and 0.57, respectively). Adjustment for age, sex, BMI and total energy intake did not affect the results. This FFQ appears to give a rapid and adequate estimate of WG cereal intake in free-living subjects.
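A minimal sketch of the kind of validation comparison reported above, correlating FFQ-estimated whole-grain intake with the weighed-record estimate; the intake values are invented and serve only to show the calculation.

```python
# Illustrative validation comparison (invented data): Spearman correlation
# between whole-grain intake from the FFQ and from the 3-day weighed record.
import numpy as np
from scipy.stats import spearmanr

ffq_wg    = np.array([12, 30, 45, 8, 22, 60, 15, 27], dtype=float)  # g/d, hypothetical
record_wg = np.array([10, 35, 40, 5, 25, 55, 18, 30], dtype=float)  # g/d, hypothetical

rho, p = spearmanr(ffq_wg, record_wg)
print(f"Spearman r = {rho:.2f} (p = {p:.3f})")
```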
Abstract:
We focus on the comparison of three statistical models used to estimate the treatment effect in meta-analysis when individually pooled data are available. Two are conventional models, namely a multi-level model and a model based upon an approximate likelihood; the third is a newly developed profile likelihood model, which might be viewed as an extension of the Mantel-Haenszel approach. To exemplify these methods, we use results from a meta-analysis of 22 trials to prevent respiratory tract infections. We show that with the multi-level approach, in the case of baseline heterogeneity, the number of clusters or components is considerably over-estimated. The approximate and profile likelihood methods showed nearly the same pattern for the treatment effect distribution. To provide more evidence, two simulation studies were conducted. The profile likelihood model can be considered a clear alternative to the approximate likelihood model. In the case of strong baseline heterogeneity, the profile likelihood method shows superior behaviour when compared with the multi-level model. Copyright (C) 2006 John Wiley & Sons, Ltd.
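For orientation, the sketch below computes the classical Mantel-Haenszel pooled odds ratio across trials, the approach the profile likelihood model is described as extending; it is not the paper's profile-likelihood estimator, and the 2x2 tables are hypothetical.

```python
# Classical Mantel-Haenszel pooled odds ratio across trials (hypothetical data),
# shown as the baseline approach that the profile likelihood model extends.
import numpy as np

# columns: events_treated, no_event_treated, events_control, no_event_control
trials = np.array([
    [12,  88, 20,  80],
    [ 5,  45, 11,  39],
    [30, 170, 42, 158],
], dtype=float)

a, b, c, d = trials.T
n = a + b + c + d
or_mh = np.sum(a * d / n) / np.sum(b * c / n)
print(f"Mantel-Haenszel pooled OR = {or_mh:.2f}")
```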
Abstract:
Objectives: To assess the short- and long-term reproducibility of a short food group questionnaire, and to compare its performance for estimating nutrient intakes with that of a 7-day diet diary. Design: Participants for the reproducibility study completed the food group questionnaire at two time points, up to 2 years apart. Participants for the performance study completed both the food group questionnaire and a 7-day diet diary a few months apart. Reproducibility was assessed by kappa statistics and percentage change between the two questionnaires; performance was assessed by kappa statistics, rank correlations and percentages of participants classified into the same and opposite thirds of intake. Setting: A random sample of participants in the Million Women Study, a population-based prospective study in the UK. Subjects: In total, 12 221 women aged 50-64 years. Results: In the reproducibility study, 75% of the food group items showed at least moderate agreement for all four time-point comparisons. Items showing fair agreement or worse tended to be those where few respondents reported eating them more than once a week, those consumed in small amounts and those relating to types of fat consumed. Compared with the diet diary, the food group questionnaire showed consistently reasonable performance for the nutrients carbohydrate, saturated fat, cholesterol, total sugars, alcohol, fibre, calcium, riboflavin, folate and vitamin C. Conclusions: The short food group questionnaire used in this study has been shown to be reproducible over time and to perform reasonably well for the assessment of a number of dietary nutrients.
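A minimal sketch of the agreement measures used above: Cohen's kappa for repeat administrations of a food-group item, and the proportion of participants classified into the same third of intake by the questionnaire and the diary. All data in the sketch are simulated placeholders.

```python
# Illustrative agreement measures (simulated data): Cohen's kappa for a repeated
# categorical food-group item, and same-third classification for a nutrient.
import numpy as np
from scipy.stats import rankdata

def cohens_kappa(x, y):
    x, y = np.asarray(x), np.asarray(y)
    cats = np.union1d(x, y)
    po = np.mean(x == y)                                        # observed agreement
    pe = sum(np.mean(x == c) * np.mean(y == c) for c in cats)   # chance agreement
    return (po - pe) / (1 - pe)

def same_third(a, b):
    ta = np.ceil(3 * rankdata(a) / len(a)).astype(int)          # thirds of intake
    tb = np.ceil(3 * rankdata(b) / len(b)).astype(int)
    return np.mean(ta == tb)

freq_t1 = [0, 1, 2, 2, 1, 0, 2, 1]        # categorical frequency responses, time 1
freq_t2 = [0, 1, 2, 1, 1, 0, 2, 1]        # same item, time 2
print("kappa:", round(cohens_kappa(freq_t1, freq_t2), 2))

rng1, rng2 = np.random.default_rng(1), np.random.default_rng(2)
fibre_ffq   = rng1.normal(18, 5, 60)                            # g/d, simulated
fibre_diary = fibre_ffq + rng2.normal(0, 3, 60)
print("same third:", round(same_third(fibre_ffq, fibre_diary), 2))
```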
Abstract:
1. Wildlife managers often require estimates of abundance. Direct methods of estimation are often impractical, especially in closed-forest environments, so indirect methods such as dung or nest surveys are increasingly popular. 2. Dung and nest surveys typically have three elements: surveys to estimate abundance of the dung or nests; experiments to estimate the production (defecation or nest construction) rate; and experiments to estimate the decay or disappearance rate. The last of these is usually the most problematic, and was the subject of this study. 3. The design of experiments to allow robust estimation of mean time to decay was addressed. In most studies to date, dung or nests have been monitored until they disappear. Instead, we advocate that fresh dung or nests are located, with a single follow-up visit to establish whether the dung or nest is still present or has decayed. 4. Logistic regression was used to estimate probability of decay as a function of time, and possibly of other covariates. Mean time to decay was estimated from this function. 5. Synthesis and applications. Effective management of mammal populations usually requires reliable abundance estimates. The difficulty in estimating abundance of mammals in forest environments has increasingly led to the use of indirect survey methods, in which abundance of sign, usually dung (e.g. deer, antelope and elephants) or nests (e.g. apes), is estimated. Given estimated rates of sign production and decay, sign abundance estimates can be converted to estimates of animal abundance. Decay rates typically vary according to season, weather, habitat, diet and many other factors, making reliable estimation of mean time to decay of signs present at the time of the survey problematic. We emphasize the need for retrospective rather than prospective rates, propose a strategy for survey design, and provide analysis methods for estimating retrospective rates.
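A minimal sketch of the retrospective design advocated above: each sign is marked fresh, its status (decayed or not) is recorded at a single follow-up visit, a logistic regression of decay on elapsed time is fitted, and mean time to decay is obtained by integrating the fitted still-present probability over time. The sketch assumes statsmodels and uses simulated data.

```python
# Sketch of the retrospective decay-rate approach: logistic regression of
# "decayed by follow-up" on elapsed time, then mean time to decay from the
# integral of the fitted still-present (survival) curve. Data are simulated.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
days = rng.uniform(5, 120, 200)                        # time from marking to revisit
true_p_decay = 1 / (1 + np.exp(-(days - 60) / 15))     # underlying decay process
decayed = rng.binomial(1, true_p_decay)                # 1 = sign gone at revisit

model = sm.Logit(decayed, sm.add_constant(days)).fit(disp=0)

grid = np.linspace(0, 365, 2000)
p_present = 1 - model.predict(sm.add_constant(grid))   # fitted survival curve
mean_time_to_decay = p_present.sum() * (grid[1] - grid[0])  # Riemann-sum integral
print(f"mean time to decay ~ {mean_time_to_decay:.0f} days")
```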
Abstract:
Traditional resource management has had as its main objective the optimization of throughput, based on parameters such as CPU, memory, and network bandwidth. With the appearance of Grid markets, new variables that determine economic expenditure, benefit and opportunity must be taken into account. The Self-organizing ICT Resource Management (SORMA) project aims at allowing resource owners and consumers to exploit market mechanisms to sell and buy resources across the Grid. SORMA's motivation is to achieve efficient resource utilization by maximizing revenue for resource providers and minimizing the cost of resource consumption within a market environment. An overriding factor in Grid markets is the need to ensure that the desired quality of service levels meet the expectations of market participants. This paper explains the proposed use of an economically enhanced resource manager (EERM) for resource provisioning based on economic models. In particular, this paper describes techniques used by the EERM to support revenue maximization across multiple service level agreements and provides an application scenario to demonstrate its usefulness and effectiveness. Copyright © 2008 John Wiley & Sons, Ltd.
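As a toy illustration of revenue maximization across multiple service level agreements (not SORMA's actual EERM), the sketch below selects the subset of hypothetical SLA offers that maximizes revenue under a fixed capacity budget, using a 0/1 knapsack recursion.

```python
# Toy illustration (not SORMA's EERM): choose the subset of SLA offers that
# maximises revenue under a capacity budget, via 0/1 knapsack dynamic programming.
# All offers and figures are invented.
from dataclasses import dataclass

@dataclass
class SlaOffer:
    name: str
    cpu_hours: int      # capacity the SLA would consume
    revenue: float      # payment minus expected penalty

def max_revenue(offers, capacity):
    best = [0.0] * (capacity + 1)          # best[c] = max revenue using c CPU-hours
    for o in offers:
        for c in range(capacity, o.cpu_hours - 1, -1):
            best[c] = max(best[c], best[c - o.cpu_hours] + o.revenue)
    return best[capacity]

offers = [SlaOffer("gold", 40, 500.0), SlaOffer("silver", 25, 260.0),
          SlaOffer("bronze", 10, 90.0)]
print(max_revenue(offers, capacity=60))    # -> 590.0 (gold + bronze)
```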