139 results for generalized estimating equation
Abstract:
In this paper we consider the estimation of population size from one-source capture–recapture data, that is, a list in which individuals can potentially be found repeatedly and where the question is how many individuals are missed by the list. As a typical example, we provide data from a drug user study in Bangkok from 2001 where the list consists of drug users who repeatedly contact treatment institutions. Drug users with 1, 2, 3, … contacts occur, but drug users with zero contacts are not present, requiring the size of this group to be estimated. Statistically, these data can be considered as stemming from a zero-truncated count distribution. We revisit an estimator for the population size suggested by Zelterman that is known to be robust under potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a locally truncated Poisson likelihood which is equivalent to a binomial likelihood. This result allows the extension of the Zelterman estimator by means of logistic regression to include observed heterogeneity in the form of covariates. We also review an estimator proposed by Chao and explain why we are not able to obtain similar results for this estimator. The Zelterman estimator is applied in two case studies, the first a drug user study from Bangkok, the second an illegal immigrant study in the Netherlands. Our results suggest the new estimator should be used, in particular, if substantial unobserved heterogeneity is present.
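The Zelterman and Chao estimators discussed in this abstract both reduce to plug-in formulas using only the frequencies of individuals seen once (f1) and twice (f2). A minimal sketch, assuming the standard textbook forms (lambda-hat = 2*f2/f1 for Zelterman, n + f1^2/(2*f2) for Chao's lower bound); this is an illustration of the classical estimators, not the covariate-extended method the paper develops:

```python
import math

def zelterman_estimate(counts):
    """Zelterman population-size estimator for zero-truncated count data.

    counts: list of per-individual capture counts (all >= 1).
    Uses only the frequencies of ones (f1) and twos (f2):
        lambda_hat = 2 * f2 / f1,  N_hat = n / (1 - exp(-lambda_hat)).
    """
    n = len(counts)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f1 == 0:
        raise ValueError("f1 = 0: Zelterman estimator is undefined")
    lam = 2.0 * f2 / f1
    return n / (1.0 - math.exp(-lam))

def chao_lower_bound(counts):
    """Chao's lower bound for the population size: n + f1^2 / (2 * f2)."""
    n = len(counts)
    f1 = sum(1 for c in counts if c == 1)
    f2 = sum(1 for c in counts if c == 2)
    if f2 == 0:
        raise ValueError("f2 = 0: Chao's lower bound is undefined")
    return n + f1 ** 2 / (2.0 * f2)
```

With 10 singletons, 5 doubletons and 2 triple captures (n = 17), lambda-hat = 1 and the Zelterman estimate is 17 / (1 - e^-1), about 26.9; Chao's bound gives 27.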
Abstract:
None of the current surveillance streams monitoring the presence of scrapie in Great Britain provide a comprehensive and unbiased estimate of the prevalence of the disease at the holding level. Previous work to estimate the under-ascertainment adjusted prevalence of scrapie in Great Britain applied multiple-list capture–recapture methods. The enforcement of new control measures on scrapie-affected holdings in 2004 has stopped the overlapping between surveillance sources and, hence, the application of multiple-list capture–recapture models. Alternative methods, still under the capture–recapture methodology, relying on repeated entries in one single list have been suggested in these situations. In this article, we apply one-list capture–recapture approaches to data held on the Scrapie Notifications Database to estimate the undetected population of scrapie-affected holdings with clinical disease in Great Britain for the years 2002, 2003, and 2004. To do so, we develop a new diagnostic tool for indication of heterogeneity as well as a new understanding of the Zelterman and Chao's lower bound estimators to account for potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a special, locally truncated Poisson likelihood equivalent to a binomial likelihood. This understanding allows the extension of the Zelterman approach by means of logistic regression to include observed heterogeneity in the form of covariates; in the case studied here, the holding size and country of origin. Our results confirm the presence of substantial unobserved heterogeneity, supporting the application of our two estimators. The total scrapie-affected holding population in Great Britain is around 300 holdings per year. None of the covariates appear to inform the model significantly.
Abstract:
This paper presents the method and findings of a contingent valuation (CV) study that aimed to elicit United Kingdom citizens' willingness to pay to support legislation to phase out the use of battery cages for egg production in the European Union (EU). The method takes account of various biases associated with the CV technique, including 'warm glow', 'part-whole' and sample response biases. Estimated mean willingness to pay to support the legislation is used to estimate the annual benefit of the legislation to UK citizens. This is compared with the estimated annual costs of the legislation over a 12-year period, which allows for readjustment by the UK egg industry. The analysis shows that the estimated benefits of the legislation outweigh the costs. The study demonstrates that CV is a potentially useful technique for assessing the likely benefits associated with proposed legislation. However, estimates of CV studies must be treated with caution. It is important that they are derived from carefully designed surveys and that the willingness to pay estimation method allows for various biases.
Abstract:
A method is proposed to determine the extent of degradation in the rumen involving a two-stage mathematical modeling process. In the first stage, a statistical model shifts (or maps) the gas accumulation profile obtained using a fecal inoculum to a ruminal gas profile. Then, a kinetic model determines the extent of degradation in the rumen from the shifted profile. The kinetic model is presented as a generalized mathematical function, allowing any one of a number of alternative equation forms to be selected. This method might allow the gas production technique to become an approach for determining extent of degradation in the rumen, decreasing the need for surgically modified animals while still maintaining the link with the animal. Further research is needed before the proposed methodology can be used as a standard method across a range of feeds.
Abstract:
Technical efficiency is estimated and examined for a cross-section of Australian dairy farms using various frontier methodologies: Bayesian and Classical stochastic frontiers, and Data Envelopment Analysis. The results indicate technical inefficiency is present in the sample data. Statistical differences are also identified between the point estimates of technical efficiency generated by the various methodologies. However, the ranking of farm-level technical efficiency is statistically invariant to the estimation technique employed. Finally, when confidence/credible intervals of technical efficiency are compared, significant overlap is found for many of the farms' intervals for all frontier methods employed. The results indicate that the choice of estimation methodology may matter, but the explanatory power of all frontier methods is significantly weaker when interval estimates of technical efficiency are examined.
Abstract:
This investigation determines the accuracy of a dynamic mechanistic model in estimating methanogenesis against data from a respiration trial in which cows were fed a wide range of different carbohydrates included in the concentrates. The model was able to predict ECM (energy-corrected milk) very well, while the NDF digestibility of fibrous feed was less well predicted. Methane emissions were predicted quite well, with the exception of one diet containing wheat. The mechanistic model is therefore a helpful tool to estimate methanogenesis based on chemical analysis and dry matter intake, but the prediction can still be improved.
Abstract:
A model was published by Lewis et al. (2002) to predict the mean age at first egg (AFE) for pullets of laying strains reared under non-limiting environmental conditions and exposed to a single change in photoperiod during the rearing stage. Subsequently, Lewis et al. (2003) reported the effects of two opposing changes in photoperiod, which showed that the first change appears to alter the pullet's physiological age so that it responds to the second change as though it had been given at an earlier age (if photoperiod was decreased), or later age (if photoperiod was increased) than the true chronological age. During the construction of a computer model based on these two publications, it became apparent that some of the components of the models needed adjustment. The amendments relate to (1) the standard deviation (S.D.) used for calculating the proportion of a young flock that has attained photosensitivity, (2) the equation for calculating the slope of the line relating AFE to age at transfer from one photoperiod to another, (3) the equation used for estimating the distribution of AFE as a function of the mean value, (4) the point of no return when pullets which have started spontaneous maturation in response to the current photoperiod can no longer respond to a late change in photoperiod and (5) the equations used for calculating the distribution of AFE when the trait is bimodal.
Abstract:
The paper concerns the design and analysis of serial dilution assays to estimate the infectivity of a sample of tissue when it is assumed that the sample contains a finite number of indivisible infectious units such that a subsample will be infectious if it contains one or more of these units. The aim of the study is to estimate the number of infectious units in the original sample. The standard approach to the analysis of data from such a study is based on the assumption of independence of aliquots both at the same dilution level and at different dilution levels, so that the numbers of infectious units in the aliquots follow independent Poisson distributions. An alternative approach is based on calculation of the expected value of the total number of samples tested that are not infectious. We derive the likelihood for the data on the basis of the discrete number of infectious units, enabling calculation of the maximum likelihood estimate and likelihood-based confidence intervals. We use the exact probabilities that are obtained to compare the maximum likelihood estimate with those given by the other methods in terms of bias and standard error and to compare the coverage of the confidence intervals. We show that the methods have very similar properties and conclude that for practical use the method that is based on the Poisson assumption is to be recommended, since it can be implemented by using standard statistical software. Finally we consider the design of serial dilution assays, concluding that it is important that neither the dilution factor nor the number of samples that remain untested should be too large.
Abstract:
Estimation of whole-grain (WG) food intake in epidemiological and nutritional studies is normally based on general diet FFQ, which are not designed to specifically capture WG intake. To estimate WG cereal intake, we developed a forty-three-item FFQ focused on cereal product intake over the past month. We validated this questionnaire against a 3-d weighed food record (3DWFR) in thirty-one subjects living in the French-speaking part of Switzerland (nineteen female and twelve male). Subjects completed the FFQ on day 1 (FFQ1), the 3DWFR between days 2 and 13 and the FFQ again on day 14 (FFQ2). The subjects provided a fasting blood sample within 1 week of FFQ2. Total cereal intake, total WG intake, intake of individual cereals, intake of different groups of cereal products and alkylresorcinol (AR) intake were calculated from both FFQ and the 3DWFR. Plasma AR, possible biomarkers for WG wheat and rye intake, were also analysed. The total WG intake for the 3DWFR, FFQ1 and FFQ2 was 26 (sd 22), 28 (sd 25) and 21 (sd 16) g/d, respectively. Mean plasma AR concentration was 55.8 (sd 26.8) nmol/l. FFQ1, FFQ2 and plasma AR were correlated with the 3DWFR (r 0.72, 0.81 and 0.57, respectively). Adjustment for age, sex, BMI and total energy intake did not affect the results. This FFQ appears to give a rapid and adequate estimate of WG cereal intake in free-living subjects.
Abstract:
With the current concern over climate change, descriptions of how rainfall patterns are changing over time can be useful. Observations of daily rainfall data over the last few decades provide information on these trends. Generalized linear models are typically used to model patterns in the occurrence and intensity of rainfall. These models describe rainfall patterns for an average year but are more limited when describing long-term trends, particularly when these are potentially non-linear. Generalized additive models (GAMs) provide a framework for modelling non-linear relationships by fitting smooth functions to the data. This paper describes how GAMs can extend the flexibility of models to describe seasonal patterns and long-term trends in the occurrence and intensity of daily rainfall, using data from Mauritius from 1962 to 2001. Smoothed estimates from the models provide useful graphical descriptions of changing rainfall patterns over the last 40 years at this location. GAMs are particularly helpful when exploring non-linear relationships in the data. Care is needed to ensure the choice of smooth functions is appropriate for the data and modelling objectives.
Abstract:
We focus on the comparison of three statistical models used to estimate the treatment effect in meta-analysis when individually pooled data are available. The models are two conventional models, namely a multi-level model and a model based upon an approximate likelihood, and a newly developed model, the profile likelihood model, which might be viewed as an extension of the Mantel-Haenszel approach. To exemplify these methods, we use results from a meta-analysis of 22 trials to prevent respiratory tract infections. We show that by using the multi-level approach, in the case of baseline heterogeneity, the number of clusters or components is considerably over-estimated. The approximate and profile likelihood methods showed nearly the same pattern for the treatment effect distribution. To provide more evidence, two simulation studies were conducted. The profile likelihood can be considered as a clear alternative to the approximate likelihood model. In the case of strong baseline heterogeneity, the profile likelihood method shows superior behaviour when compared with the multi-level model.
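Since the profile likelihood model is described as an extension of the Mantel-Haenszel approach, it may help to recall the classic Mantel-Haenszel pooled odds ratio over per-trial 2x2 tables. A minimal sketch with an illustrative table, not data from the 22 trials in the abstract:

```python
def mantel_haenszel_or(tables):
    """Mantel-Haenszel pooled odds ratio across 2x2 tables.

    tables: list of (a, b, c, d) = (treated events, treated non-events,
    control events, control non-events) per trial.
    OR_MH = sum_i(a_i * d_i / n_i) / sum_i(b_i * c_i / n_i), n_i = table total.
    """
    num = sum(a * d / (a + b + c + d) for a, b, c, d in tables)
    den = sum(b * c / (a + b + c + d) for a, b, c, d in tables)
    return num / den
```

For a single trial with 10/100 events on treatment and 20/100 on control, the pooled estimate collapses to the plain odds ratio (10*80)/(90*20), about 0.44.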