11 results for Bayesian statistical decision
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
The reasons for the development and collapse of Maya civilization remain controversial and historical events carved on stone monuments throughout this region provide a remarkable source of data about the rise and fall of these complex polities. Use of these records depends on correlating the Maya and European calendars so that they can be compared with climate and environmental datasets. Correlation constants can vary up to 1000 years and remain controversial. We report a series of high-resolution AMS C-14 dates on a wooden lintel collected from the Classic Period city of Tikal bearing Maya calendar dates. The radiocarbon dates were calibrated using a Bayesian statistical model and indicate that the dates were carved on the lintel between AD 658-696. This strongly supports the Goodman-Martinez-Thompson (GMT) correlation and the hypothesis that climate change played an important role in the development and demise of this complex civilization.
Abstract:
A first result of the search for ν_μ → ν_e oscillations in the OPERA experiment, located at the Gran Sasso Underground Laboratory, is presented. The experiment looked for the appearance of ν_e in the CNGS neutrino beam using the data collected in 2008 and 2009. Data are compatible with the non-oscillation hypothesis in the three-flavour mixing model. A further analysis of the same data constrains the non-standard oscillation parameters θ_new and Δm²_new suggested by the LSND and MiniBooNE experiments. For large Δm²_new values (>0.1 eV²), the OPERA 90% C.L. upper limit on sin²(2θ_new) based on a Bayesian statistical method reaches the value 7.2 × 10⁻³.
Abstract:
The risk of a financial position is usually summarized by a risk measure. As this risk measure has to be estimated from historical data, it is important to be able to verify and compare competing estimation procedures. In statistical decision theory, risk measures for which such verification and comparison is possible, are called elicitable. It is known that quantile-based risk measures such as value at risk are elicitable. In this paper, the existing result of the nonelicitability of expected shortfall is extended to all law-invariant spectral risk measures unless they reduce to minus the expected value. Hence, it is unclear how to perform forecast verification or comparison. However, the class of elicitable law-invariant coherent risk measures does not reduce to minus the expected value. We show that it consists of certain expectiles.
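The abstract's central notion can be made precise with the standard definitions of elicitability and expectiles (these are the textbook formulations, not reproduced verbatim from the paper):

```latex
% A law-invariant functional $T$ is \emph{elicitable} if there exists a
% scoring function $S$ such that, for every distribution $F$,
T(F) = \arg\min_{x \in \mathbb{R}} \; \mathbb{E}_{Y \sim F}\left[ S(x, Y) \right].
% Example: the $\tau$-expectile $e_\tau(F)$, for $\tau \in (0,1)$, is elicited by
% an asymmetric quadratic score:
e_\tau(F) = \arg\min_{x \in \mathbb{R}} \;
  \mathbb{E}\left[ \tau \, \max(Y - x, 0)^2 + (1-\tau) \, \max(x - Y, 0)^2 \right].
```

For τ = 1/2 the expectile reduces to the mean, which is consistent with the paper's observation that minus the expected value is the boundary case of the elicitable coherent risk measures.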
Abstract:
Vietnam has developed rapidly over the past 15 years. However, progress has not been uniformly distributed across the country. Availability, adequate visualization and analysis of spatially explicit data on socio-economic and environmental aspects can support both research and policy towards sustainable development. Applying appropriate mapping techniques allows gleaning important information from tabular socio-economic data. Spatial analysis of socio-economic phenomena can yield insights into locally-specific patterns and processes that cannot be generated by non-spatial applications. This paper presents techniques and applications that develop and analyze spatially highly disaggregated socio-economic datasets. A number of examples show how such information can support informed decision-making and research in Vietnam.
Abstract:
An appropriate model of recent human evolution is not only important to understand our own history, but it is necessary to disentangle the effects of demography and selection on genome diversity. Although most genetic data support the view that our species originated recently in Africa, it is still unclear if it completely replaced former members of the Homo genus, or if some interbreeding occurred during its range expansion. Several scenarios of modern human evolution have been proposed on the basis of molecular and paleontological data, but their likelihood has never been statistically assessed. Using DNA data from 50 nuclear loci sequenced in African, Asian and Native American samples, we show here by extensive simulations that a simple African replacement model with exponential growth has a higher probability (78%) as compared with alternative multiregional evolution or assimilation scenarios. A Bayesian analysis of the data under this best supported model points to an origin of our species approximately 141 thousand years ago (Kya), an exit out-of-Africa approximately 51 Kya, and a recent colonization of the Americas approximately 10.5 Kya. We also find that the African replacement model explains not only the shallow ancestry of mtDNA or Y-chromosomes but also the occurrence of deep lineages at some autosomal loci, which has been formerly interpreted as a sign of interbreeding with Homo erectus.
Abstract:
The rise of evidence-based medicine as well as important progress in statistical methods and computational power have led to a second birth of the >200-year-old Bayesian framework. The use of Bayesian techniques, in particular in the design and interpretation of clinical trials, offers several substantial advantages over the classical statistical approach. First, in contrast to classical statistics, Bayesian analysis allows a direct statement regarding the probability that a treatment was beneficial. Second, Bayesian statistics allow the researcher to incorporate any prior information in the analysis of the experimental results. Third, Bayesian methods can efficiently handle complex statistical models, which are suited for advanced clinical trial designs. Finally, Bayesian statistics encourage a thorough consideration and presentation of the assumptions underlying an analysis, which enables the reader to fully appraise the authors' conclusions. Both Bayesian and classical statistics have their respective strengths and limitations and should be viewed as being complementary to each other; we do not attempt to make a head-to-head comparison, as this is beyond the scope of the present review. Rather, the objective of the present article is to provide a nonmathematical, reader-friendly overview of the current practice of Bayesian statistics coupled with numerous intuitive examples from the field of oncology. It is hoped that this educational review will be a useful resource to the oncologist and result in a better understanding of the scope, strengths, and limitations of the Bayesian approach.
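The first advantage listed above — a direct probability statement that a treatment was beneficial — can be illustrated with a minimal Monte Carlo sketch. The trial counts and flat Beta(1,1) priors below are hypothetical illustrations, not data from the review:

```python
import random

def prob_treatment_better(s_t, n_t, s_c, n_c, draws=20000, seed=1):
    """Estimate P(p_treatment > p_control | data) under independent
    Beta(1,1) priors on the two response rates (hypothetical example).

    s_t/n_t: responders/patients in the treatment arm
    s_c/n_c: responders/patients in the control arm
    """
    rng = random.Random(seed)
    wins = 0
    for _ in range(draws):
        # Draw one plausible response rate per arm from its Beta posterior.
        p_t = rng.betavariate(1 + s_t, 1 + n_t - s_t)
        p_c = rng.betavariate(1 + s_c, 1 + n_c - s_c)
        if p_t > p_c:
            wins += 1
    return wins / draws
```

A call such as `prob_treatment_better(18, 30, 11, 30)` returns the posterior probability that the treatment's response rate exceeds the control's — exactly the kind of direct statement the classical p-value does not provide.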
Abstract:
OBJECTIVES To evaluate prosthetic parameters in the edentulous anterior maxilla for decision making between fixed and removable implant prostheses using virtual planning software. MATERIAL AND METHODS CT- or DVT-scans of 43 patients (mean age 62 ± 8 years) with an edentulous maxilla were analyzed with the NobelGuide software. Implants (≥3.5 mm diameter, ≥10 mm length) were virtually placed in the optimal three-dimensional prosthetic position of all maxillary front teeth. Anatomical and prosthetic landmarks, including the cervical crown point (C-Point), the acrylic flange border (F-Point), and the implant-platform buccal end (I-Point), were defined in each middle section to determine four measuring parameters: (1) acrylic flange height (FLHeight), (2) mucosal coverage (MucCov), (3) crown-implant distance (CID) and (4) buccal prosthesis profile (ProsthProfile). Based on these parameters, all patients were assigned to one of three classes: (A) MucCov ≤ 0 mm and ProsthProfile ≥ 45°, allowing for a fixed prosthesis; (B) MucCov = 0-5 mm and/or ProsthProfile = 30°-45°, probably allowing for a fixed prosthesis; and (C) MucCov ≥ 5 mm and/or ProsthProfile ≤ 30°, where a removable prosthesis is favorable. Statistical analyses included descriptive methods and non-parametric tests. RESULTS Mean values were 10.0 mm for FLHeight, 5.6 mm for MucCov, 7.4 mm for CID, and 39.1° for ProsthProfile. Seventy percent of patients fulfilled class C criteria (removable), 21% class B (probably fixed), and 2% class A (fixed), while in 7% (three patients) bone volume was insufficient for implant planning. CONCLUSIONS The proposed classification and virtual planning procedure simplify the decision-making process regarding type of prosthesis and increase the predictability of esthetic treatment outcomes. It was demonstrated that in the majority of cases, the space between the prosthetic crown and implant platform had to be filled with prosthetic materials.
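The three-class decision rule can be sketched as a small function. Note that the class boundaries in the abstract overlap at their edges and combine conditions with "and/or", so the resolution below (class C takes precedence, then class A, then B as the remainder) is one plausible reading, not the authors' published algorithm:

```python
def classify_prosthesis(muc_cov_mm, prosth_profile_deg):
    """Assign a planning class from mucosal coverage (mm) and buccal
    prosthesis profile (degrees); one possible reading of the abstract's
    overlapping A/B/C criteria, checked in order C -> A -> B."""
    if muc_cov_mm >= 5.0 or prosth_profile_deg <= 30.0:
        return "C"  # removable prosthesis favorable
    if muc_cov_mm <= 0.0 and prosth_profile_deg >= 45.0:
        return "A"  # fixed prosthesis possible
    return "B"      # fixed prosthesis probably possible
```

For the reported mean patient (MucCov 5.6 mm, ProsthProfile 39.1°) this rule yields class C, consistent with the finding that 70% of patients fell into the removable category.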
Abstract:
The identification of plausible causes for water body status deterioration is much easier if it can build on available, reliable, extensive and comprehensive biogeochemical monitoring data (preferably aggregated in a database). A plausible identification of such causes is a prerequisite for well-informed decisions on which mitigation or remediation measures to take. In this chapter, a rationale for an extended monitoring programme is first provided; it is then compared to the one required by the Water Framework Directive (WFD). This proposal includes a list of relevant parameters that are needed for an integrated, a priori status assessment. Secondly, a few sophisticated statistical tools are described that subsequently allow for the estimation of the magnitude of impairment as well as the likely relative importance of different stressors in a multiply stressed environment. The advantages and restrictions of these rather complicated analytical methods are discussed. Finally, the use of Decision Support Systems (DSS) is advocated with regard to the specific WFD implementation requirements.
Abstract:
In this work we propose the adoption of a statistical framework used in the evaluation of forensic evidence as a tool for evaluating and presenting circumstantial "evidence" of a disease outbreak from syndromic surveillance. The basic idea is to exploit the predicted distributions of reported cases to calculate the ratio of the likelihood of observing n cases given an ongoing outbreak over the likelihood of observing n cases given no outbreak. The likelihood ratio defines the Value of Evidence (V). Using Bayes' rule, the prior odds for an ongoing outbreak are multiplied by V to obtain the posterior odds. This approach was applied to time series on the number of horses showing clinical respiratory symptoms or neurological symptoms. The separation between prior beliefs about the probability of an outbreak and the strength of evidence from syndromic surveillance offers a transparent reasoning process suitable for supporting decision makers. The value of evidence can be translated into a verbal statement, as often done in forensics or used for the production of risk maps. Furthermore, a Bayesian approach offers seamless integration of data from syndromic surveillance with results from predictive modeling and with information from other sources such as disease introduction risk assessments.
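The likelihood-ratio reasoning above can be sketched in a few lines. As a hypothetical stand-in for the paper's predicted case distributions, the example assumes reported counts are Poisson with known baseline and outbreak rates:

```python
import math

def poisson_pmf(n, lam):
    """P(N = n) for a Poisson distribution with mean lam."""
    return math.exp(-lam) * lam**n / math.factorial(n)

def value_of_evidence(n, lam_outbreak, lam_baseline):
    """V = P(n | ongoing outbreak) / P(n | no outbreak),
    here with Poisson predicted distributions as an assumed example."""
    return poisson_pmf(n, lam_outbreak) / poisson_pmf(n, lam_baseline)

def posterior_odds(prior_odds, v):
    """Bayes' rule in odds form: posterior odds = prior odds * V."""
    return prior_odds * v
```

With prior odds of, say, 1:100 for an ongoing outbreak, an observed count well above baseline yields V > 1 and multiplies those odds accordingly, while an unremarkable count yields V < 1 — keeping the prior belief and the strength of the surveillance evidence cleanly separated, as the abstract describes.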