4 results for Logistic regression methodology
in CentAUR: Central Archive University of Reading - UK
Abstract:
A statistical technique for fault analysis in industrial printing is reported. The method deals specifically with binary data, where the results of the production process fall into two categories, rejected or accepted. The method, logistic regression, can predict future fault occurrences from current measurements taken by machine-part sensors. Analysing each type of fault individually can determine which parts of the plant have a significant influence on the occurrence of such faults; it is also possible to infer which measurable process parameters have no significant influence on the generation of these faults. Information derived from the analysis can help the operator interpret the current state of the plant, so that appropriate actions may be taken to prevent potential faults from occurring. The algorithm is being implemented as part of an applied self-learning expert system.
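The binary accept/reject setup described in this abstract can be sketched with a minimal logistic-regression fit. This is a toy illustration, not the paper's implementation: the sensor readings, labels, and learning settings below are all invented for demonstration.

```python
import math

def sigmoid(z):
    """Logistic link: maps a real score to a probability in (0, 1)."""
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(xs, ys, lr=0.1, epochs=3000):
    """Fit w, b for P(reject | x) = sigmoid(w*x + b) by gradient descent
    on the logistic loss. xs: sensor readings; ys: 1 = rejected, 0 = accepted."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(epochs):
        gw = gb = 0.0
        for x, y in zip(xs, ys):
            p = sigmoid(w * x + b)
            gw += (p - y) * x   # gradient of the loss w.r.t. w
            gb += (p - y)       # gradient of the loss w.r.t. b
        w -= lr * gw / n
        b -= lr * gb / n
    return w, b

# Invented data: higher sensor readings tend to co-occur with rejections.
xs = [0.2, 0.4, 0.5, 0.9, 1.1, 1.5, 1.7, 2.0]
ys = [0,   0,   0,   0,   1,   1,   1,   1]
w, b = fit_logistic(xs, ys)

p_low = sigmoid(w * 0.3 + b)   # fault probability at a low reading
p_high = sigmoid(w * 1.8 + b)  # fault probability at a high reading
```

Once fitted, the coefficient `w` plays the role the abstract describes: a coefficient near zero for some sensor suggests that process parameter has no significant influence on the fault, while a large coefficient flags a plant part worth the operator's attention.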
Abstract:
None of the current surveillance streams monitoring the presence of scrapie in Great Britain provide a comprehensive and unbiased estimate of the prevalence of the disease at the holding level. Previous work to estimate the under-ascertainment adjusted prevalence of scrapie in Great Britain applied multiple-list capture–recapture methods. The enforcement of new control measures on scrapie-affected holdings in 2004 has stopped the overlapping between surveillance sources and, hence, the application of multiple-list capture–recapture models. Alternative methods, still within the capture–recapture methodology, relying on repeated entries in one single list have been suggested for these situations. In this article, we apply one-list capture–recapture approaches to data held on the Scrapie Notifications Database to estimate the undetected population of scrapie-affected holdings with clinical disease in Great Britain for the years 2002, 2003, and 2004. To do so, we develop a new diagnostic tool for indication of heterogeneity as well as a new understanding of the Zelterman and Chao’s lower bound estimators to account for potential unobserved heterogeneity. We demonstrate that the Zelterman estimator can be viewed as a maximum likelihood estimator for a special, locally truncated Poisson likelihood equivalent to a binomial likelihood. This understanding allows the extension of the Zelterman approach by means of logistic regression to include observed heterogeneity in the form of covariates—in the case studied here, the holding size and country of origin. Our results confirm the presence of substantial unobserved heterogeneity, supporting the application of our two estimators. The total scrapie-affected holding population in Great Britain is around 300 holdings per year. None of the covariates appear to inform the model significantly.
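Both estimators named in this abstract are computed from the frequency counts of a single list: the number of holdings notified exactly once (f1) and exactly twice (f2). A minimal sketch of the standard textbook forms follows; the counts used in the example are invented, not the paper's data, and this does not reproduce the authors' covariate extension.

```python
import math

def chao_lower_bound(n, f1, f2):
    """Chao's lower bound for total population size:
    observed n plus an estimated f1^2 / (2*f2) never-observed units."""
    return n + (f1 * f1) / (2.0 * f2)

def zelterman(n, f1, f2):
    """Zelterman's estimator: a Poisson rate is estimated locally from
    singletons and doubletons, lam = 2*f2/f1, and n is inflated by the
    estimated probability of being observed at least once."""
    lam = 2.0 * f2 / f1
    return n / (1.0 - math.exp(-lam))

# Invented counts: 100 holdings notified, 50 exactly once, 20 exactly twice.
n, f1, f2 = 100, 50, 20
chao = chao_lower_bound(n, f1, f2)   # 162.5
zelt = zelterman(n, f1, f2)          # about 181.6
```

Because the Zelterman rate is estimated only from the lowest counts, it is comparatively robust to the unobserved heterogeneity the abstract discusses, which is what makes its reinterpretation as a binomial maximum-likelihood problem, and hence its logistic-regression extension, possible.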
Abstract:
Purpose – This paper seeks to shed light on the practice of incomplete corporate disclosure of quantitative greenhouse gas (GHG) emissions and investigates whether external stakeholder pressure influences the existence, and separately the completeness, of voluntary GHG emissions disclosures by 431 European companies.
Design/methodology/approach – A classification of reporting completeness is developed with respect to the scope, type and reporting boundary of GHG emissions, based on the guidelines of the GHG Protocol, Global Reporting Initiative and the Carbon Disclosure Project. Logistic regression analysis is applied to examine whether proxies for exposure to climate change concerns from different stakeholder groups influence the existence and/or completeness of quantitative GHG emissions disclosure.
Findings – From 2005 to 2009, on average only 15 percent of companies that disclose GHG emissions report them in a manner that the authors consider complete. Results of regression analyses suggest that external stakeholder pressure is a determinant of the existence but not the completeness of emissions disclosure. Findings are consistent with stakeholder theory arguments that companies respond to external stakeholder pressure to report GHG emissions, but also with legitimacy theory claims that firms can use carbon disclosure, in this case the incomplete reporting of emissions, as a symbolic act to address legitimacy exposures.
Practical implications – Bringing corporate GHG emissions disclosure in line with recommended guidelines will require either more direct stakeholder pressure or, perhaps, a mandated disclosure regime. In the meantime, users of the data will need to consider carefully the relevance of the reported data and develop the necessary competencies to detect and control for its incompleteness. A more troubling concern is that stakeholders may instead grow to accept less than complete disclosure.
Originality/value – The paper represents the first large-scale empirical study into the completeness of companies’ disclosure of quantitative GHG emissions and is the first to analyze these disclosures in the context of stakeholder pressure and its relation to legitimation.