920 results for Bayesian statistical decision theory


Relevance:

30.00%

Publisher:

Abstract:

The first discussion of compositional data analysis is attributable to Karl Pearson, in 1897. However, notwithstanding the recent developments on the algebraic structure of the simplex, more than twenty years after Aitchison's idea of log-transformations of closed data, the scientific literature is again full of statistical treatments of this type of data using traditional methodologies. This is particularly true in environmental geochemistry where, besides the problem of closure, the spatial structure (dependence) of the data has to be considered. In this work we propose the use of log-contrast values, obtained by a simplicial principal component analysis, as indicators of given environmental conditions. The investigation of the log-contrast frequency distributions allows pointing out the statistical laws able to generate the values and to govern their variability. The changes, if compared, for example, with the mean values of the random variables assumed as models, or other reference parameters, allow defining monitors to be used to assess the extent of possible environmental contamination. A case study on running and ground waters from Chiavenna Valley (Northern Italy), using Na+, K+, Ca2+, Mg2+, HCO3-, SO42- and Cl- concentrations, is illustrated.
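As a minimal sketch of the idea (not the authors' implementation; the data and dimensions below are hypothetical), log-contrast scores can be obtained by a centered log-ratio (clr) transform followed by PCA. The clr loadings sum to zero, so each principal component score is a log-contrast of the parts.

```python
import numpy as np

def clr(X):
    """Centered log-ratio transform of compositions (rows sum to a constant)."""
    logX = np.log(X)
    return logX - logX.mean(axis=1, keepdims=True)

# Hypothetical compositional data: rows are water samples, columns the seven
# ion concentrations (Na+, K+, Ca2+, Mg2+, HCO3-, SO42-, Cl-), closed to 1.
rng = np.random.default_rng(0)
raw = rng.lognormal(size=(50, 7))
X = raw / raw.sum(axis=1, keepdims=True)

Z = clr(X)
# PCA on the clr-transformed data: eigenvectors of the covariance matrix.
cov = np.cov(Z, rowvar=False)
eigval, eigvec = np.linalg.eigh(cov)
loadings = eigvec[:, np.argsort(eigval)[::-1]]

# Scores on the first component are log-contrasts: sum_j a_j * log(x_j)
# with sum_j a_j = 0 (clr loadings are orthogonal to the all-ones vector).
log_contrast = Z @ loadings[:, 0]
```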

Relevance:

30.00%

Publisher:

Abstract:

This book gives a general view of sequence analysis, the statistical study of successions of states or events. It includes innovative contributions on life-course studies, transitions into and out of employment, contemporaneous and historical careers, and political trajectories. The approach presented in this book is now central to the life-course perspective and to the study of social processes more generally. This volume promotes the dialogue between approaches to sequence analysis that developed separately, within traditions that differ across regions and disciplines. It includes the latest developments in sequential concepts, coding, atypical datasets and time patterns, optimal matching and alternative algorithms, survey optimization, and visualization. Field studies include original sequential material related to parenting in 19th-century Belgium, higher education and work in Finland and Italy, family formation before and after German reunification, French Jews persecuted in occupied France, long-term trends in electoral participation, and regime democratization. Overall, the book reassesses the classical uses of sequences and promotes new ways of collecting, formatting, representing and processing them. The introduction provides basic sequential concepts and tools, as well as a history of the method. Chapters are presented in a way that is both accessible to the beginner and informative to the expert.
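Since optimal matching is central to the methods surveyed, a minimal sketch may clarify the idea (the state sequences and unit costs are hypothetical): it computes the edit distance between two state sequences by dynamic programming.

```python
def optimal_matching(a, b, sub_cost=2.0, indel_cost=1.0):
    """Edit (optimal matching) distance between two state sequences."""
    n, m = len(a), len(b)
    d = [[0.0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        d[i][0] = i * indel_cost
    for j in range(1, m + 1):
        d[0][j] = j * indel_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            sub = 0.0 if a[i - 1] == b[j - 1] else sub_cost
            d[i][j] = min(d[i - 1][j - 1] + sub,      # substitution / match
                          d[i - 1][j] + indel_cost,   # deletion
                          d[i][j - 1] + indel_cost)   # insertion
    return d[n][m]

# Hypothetical yearly states: E = employed, U = unemployed, S = studying.
print(optimal_matching("SSEEEU", "SEEEEE"))
```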

Relevance:

30.00%

Publisher:

Abstract:

Background: Recent advances in high-throughput technologies have produced a vast number of protein sequences, while the number of high-resolution structures has seen only a limited increase. This has impelled the development of many strategies to build protein structures from their sequences, generating a considerable number of alternative models. The selection of the model closest to the native conformation has thus become crucial for structure prediction. Several methods have been developed to score protein models by energies, by knowledge-based potentials, and by combinations of both. Results: Here we present and demonstrate a theory to split knowledge-based potentials into biologically meaningful scoring terms and to combine them into new scores to predict near-native structures. Our strategy circumvents the problem of defining the reference state. In this approach we give the proof for a simple linear application that can be further improved by optimizing the combination of Z-scores. Using the simplest composite score, we obtained predictions similar to state-of-the-art methods. Besides, our approach has the advantage of identifying the most relevant terms involved in the stability of the protein structure. Finally, we also use the composite Z-scores to assess the conformation of models and to detect local errors. Conclusion: We have introduced a method to split knowledge-based potentials and to solve the problem of defining a reference state. The new scores detect near-native structures as accurately as state-of-the-art methods and have been successful in identifying wrongly modeled regions of many near-native conformations.
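A hedged sketch of the composite-Z-score idea (the term names and data are invented, not the authors' actual energy terms): each scoring term is standardized against the population of alternative models for a target, and the standardized terms are summed into a composite score used to rank models.

```python
import numpy as np

# Hypothetical scoring terms for 100 alternative models of one target:
# rows = models, columns = split knowledge-based potential terms
# (e.g. pairwise, solvation, torsion, contact).
rng = np.random.default_rng(1)
terms = rng.normal(size=(100, 4))

# Z-score each term across the set of models for this target.
z = (terms - terms.mean(axis=0)) / terms.std(axis=0)

# Simplest linear composite: unweighted sum of Z-scores (lower = better here).
composite = z.sum(axis=1)
best_model = int(np.argmin(composite))
```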

Relevance:

30.00%

Publisher:

Abstract:

Almost 30 years ago, Bayesian networks (BNs) were developed in the field of artificial intelligence as a framework to assist researchers and practitioners in applying probability theory to inference problems of more substantive size and, thus, to more realistic and practical problems. Since the late 1980s, Bayesian networks have also attracted researchers in forensic science, and this tendency has considerably intensified throughout the last decade. This review article provides an overview of the scientific literature that describes research on Bayesian networks as a tool for studying, developing and implementing probabilistic procedures to evaluate the probative value of particular items of scientific evidence in forensic science. Primary attention is given to evaluative issues pertaining to forensic DNA profiling evidence, because this is one of the main categories of evidence whose assessment has been studied through Bayesian networks. The scope of topics is large and includes almost any aspect that relates to forensic DNA profiling. Typical examples are inference of source (or 'criminal identification'), relatedness testing, database searching and special trace evidence evaluation (such as mixed DNA stains or stains with low quantities of DNA). The perspective of the review presented here is not exclusively restricted to DNA evidence; it also includes relevant references and discussion on both the concept of Bayesian networks and their general usage in the legal sciences as one among several graphical approaches to evidence evaluation.
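As a toy illustration of the probabilistic reasoning such networks encode (the numbers are invented, not taken from the review), Bayes' theorem in odds form updates the prior odds that a suspect is the source of a stain by the likelihood ratio of the DNA match.

```python
def posterior_odds(prior_odds, likelihood_ratio):
    """Odds-form Bayes: posterior odds = prior odds x likelihood ratio."""
    return prior_odds * likelihood_ratio

# Hypothetical values: P(match | suspect is source) = 1, and a
# random-match probability P(match | someone else is source) = 1e-6.
lr = 1.0 / 1e-6
prior = 1.0 / 10_000          # prior odds that the suspect is the source
post = posterior_odds(prior, lr)
prob = post / (1.0 + post)    # convert odds back to a probability
print(f"posterior probability = {prob:.4f}")
```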

Relevance:

30.00%

Publisher:

Abstract:

We propose a method to evaluate cyclical models which does not require knowledge of the DGP and the exact empirical specification of the aggregate decision rules. We derive robust restrictions in a class of models; use some to identify structural shocks and others to evaluate the model or contrast sub-models. The approach has good size and excellent power properties, even in small samples. We show how to examine the validity of a class of models, sort out the relevance of certain frictions, evaluate the importance of an added feature, and indirectly estimate structural parameters.
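The "robust restrictions" are qualitative implications shared by all models in the class. As a generic illustration of how such restrictions can screen candidate structural shocks (this is standard sign-restriction logic with an invented impact matrix, not the authors' exact procedure), the sketch below keeps only orthogonal rotations of reduced-form shocks whose implied impact responses match a posited sign pattern.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.normal(size=(2, 2))            # hypothetical reduced-form impact matrix

def random_rotation(rng):
    """Draw a random 2x2 orthogonal matrix via QR decomposition."""
    q, r = np.linalg.qr(rng.normal(size=(2, 2)))
    return q * np.sign(np.diag(r))     # fix column signs for a uniform draw

accepted = []
for _ in range(1000):
    Q = random_rotation(rng)
    impact = A @ Q                     # candidate structural impact responses
    # Posited robust restriction: shock 1 moves both variables the same way.
    if impact[0, 0] > 0 and impact[1, 0] > 0:
        accepted.append(impact)
print(f"{len(accepted)} of 1000 rotations satisfy the sign restriction")
```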

Relevance:

30.00%

Publisher:

Abstract:

We present a theory of choice among lotteries in which the decision maker's attention is drawn to (precisely defined) salient payoffs. This leads the decision maker to a context-dependent representation of lotteries in which true probabilities are replaced by decision weights distorted in favor of salient payoffs. By endogenizing decision weights as a function of payoffs, our model provides a novel and unified account of many empirical phenomena, including frequent risk-seeking behavior, invariance failures such as the Allais paradox, and preference reversals. It also yields new predictions, including some that distinguish it from Prospect Theory, which we test.
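A rough sketch of the mechanism (the functional forms and parameter values are illustrative choices, not necessarily the paper's exact specification): each state's salience increases with the contrast between the two lotteries' payoffs in that state, and more salient states receive inflated decision weights.

```python
def salience(x, y, theta=0.1):
    """Contrast-based salience of a state where the lotteries pay x and y."""
    return abs(x - y) / (abs(x) + abs(y) + theta)

def salient_value(payoffs, other, probs, delta=0.7):
    """Lottery value with decision weights distorted toward salient states."""
    s = [salience(x, y) for x, y in zip(payoffs, other)]
    ranks = sorted(range(len(s)), key=lambda i: -s[i])  # most salient first
    w = [0.0] * len(s)
    for rank, i in enumerate(ranks):
        w[i] = probs[i] * delta ** rank  # delta < 1 overweights salient states
    total = sum(w)
    return sum(wi / total * x for wi, x in zip(w, payoffs))

# Hypothetical choice: a 50/50 lottery paying 10 or 0 versus a sure 5.
print(salient_value([10, 0], [5, 5], [0.5, 0.5]))   # lottery, vs. sure thing
print(salient_value([5, 5], [10, 0], [0.5, 0.5]))   # sure thing, vs. lottery
```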

Relevance:

30.00%

Publisher:

Abstract:

The sample size, the types of variables, the measurement format, and the construction of instruments to collect valid and reliable data must be considered during the research process. In the social and health sciences, and more specifically in nursing, data-collection instruments are usually composed of latent variables, i.e., variables that cannot be directly observed. This underscores the importance of deciding how to measure study variables (using an ordinal scale or a Likert or Likert-type scale). Psychometric scales are examples of instruments that are affected by the type of variables that comprise them, which can cause problems with measurement and statistical analysis (parametric versus non-parametric tests). Hence, investigators using these variables must rely on suppositions based on simulation studies, or on recommendations based on scientific evidence, in order to make the best decisions.
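As an illustration of the parametric-versus-nonparametric decision the authors mention (the data are simulated, not from the article), the sketch below compares two groups on a 5-point Likert-type item with both a t-test and its nonparametric counterpart, the Mann-Whitney U test.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
# Simulated 5-point Likert responses for two groups of respondents.
group_a = rng.integers(1, 6, size=40)
group_b = rng.integers(2, 6, size=40)

t, p_t = stats.ttest_ind(group_a, group_b)      # parametric (interval assumption)
u, p_u = stats.mannwhitneyu(group_a, group_b)   # non-parametric (ordinal-safe)
print(f"t-test p = {p_t:.3f}, Mann-Whitney p = {p_u:.3f}")
```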

Relevance:

30.00%

Publisher:

Abstract:

A method to evaluate cyclical models not requiring knowledge of the DGP and the exact specification of the aggregate decision rules is proposed. We derive robust restrictions in a class of models; use some to identify structural shocks in the data and others to evaluate the class or contrast sub-models. The approach has good properties, even in small samples, and when the class of models is misspecified. The method is used to sort out the relevance of a certain friction (the presence of rule-of-thumb consumers) in a standard class of models.

Relevance:

30.00%

Publisher:

Abstract:

Extensive field and experimental evidence in a variety of environments shows that behavior depends on a reference point. This paper provides an axiomatic characterization of this dependence. We proceed by imposing gradually more structure on both choice correspondences and preference relations, requiring increasingly higher levels of rationality and freeing the decision-maker from certain types of inconsistencies. The appropriate degree of behavioral structure will depend on the phenomenon to be modeled. Lastly, we provide two applications of our work: one modeling the status-quo bias, and another modeling addictive behavior.

Relevance:

30.00%

Publisher:

Abstract:

The aim of this research was to evaluate how fingerprint analysts would incorporate information from newly developed tools into their decision making processes. Specifically, we assessed effects using the following: (1) a quality tool to aid in the assessment of the clarity of the friction ridge details, (2) a statistical tool to provide likelihood ratios representing the strength of the corresponding features between compared fingerprints, and (3) consensus information from a group of trained fingerprint experts. The measured variables for the effect on examiner performance were the accuracy and reproducibility of the conclusions against the ground truth (including the impact on error rates) and the analyst accuracy and variation for feature selection and comparison. The results showed that participants using the consensus information from other fingerprint experts demonstrated more consistency and accuracy in minutiae selection. They also demonstrated higher accuracy, sensitivity, and specificity in the decisions reported. The quality tool also affected minutiae selection (which, in turn, had limited influence on the reported decisions); the statistical tool did not appear to influence the reported decisions.

Relevance:

30.00%

Publisher:

Abstract:

In the past 20 years the theory of robust estimation has become an important topic of mathematical statistics. We discuss here some basic concepts of this theory with the help of simple examples. Furthermore we describe a subroutine library for the application of robust statistical procedures, which was developed with the support of the Swiss National Science Foundation.
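The subroutine library itself is not shown here; as a generic example of the kind of procedure such a library covers (my illustration, not the library's API), the sketch below computes a Huber M-estimate of location by iteratively reweighted averaging.

```python
import numpy as np

def huber_location(x, k=1.345, tol=1e-8, max_iter=100):
    """Huber M-estimate of location via iteratively reweighted means."""
    mu = np.median(x)
    scale = np.median(np.abs(x - mu)) / 0.6745            # MAD-based scale
    for _ in range(max_iter):
        r = (x - mu) / scale
        w = np.minimum(1.0, k / np.maximum(np.abs(r), 1e-12))  # Huber weights
        mu_new = np.sum(w * x) / np.sum(w)
        if abs(mu_new - mu) < tol:
            break
        mu = mu_new
    return mu

# One outlier barely moves the estimate, unlike the plain mean.
data = np.array([9.8, 10.1, 10.0, 9.9, 10.2, 55.0])
print(huber_location(data), data.mean())
```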

Relevance:

30.00%

Publisher:

Abstract:

In this paper we present a Bayesian image reconstruction algorithm with an entropy prior (FMAPE) that uses a space-variant hyperparameter. The spatial variation of the hyperparameter allows different degrees of resolution in areas of different statistical characteristics, thus avoiding the large residuals produced by algorithms that use a constant hyperparameter. In the first implementation of the algorithm, we begin by segmenting a Maximum Likelihood Estimator (MLE) reconstruction. The segmentation method is based on a wavelet decomposition and a self-organizing neural network. The result is a predetermined number of extended regions plus a small region for each star or bright object. To assign a different value of the hyperparameter to each extended region and star, we use either feasibility tests or cross-validation methods. Once the set of hyperparameters is obtained, we carry out the final Bayesian reconstruction, leading to a reconstruction with decreased bias and excellent visual characteristics. The method has been applied to data from the non-refurbished Hubble Space Telescope and can also be applied to ground-based images.
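A heavily simplified sketch of what a space-variant hyperparameter does (this is a generic MAP-with-entropy-prior toy with an identity forward model, not the FMAPE algorithm): the objective trades data fidelity against an entropy prior, and the trade-off weight varies per pixel, permitting sharper reconstruction near bright point sources.

```python
import numpy as np

def map_entropy_step(f, data, lam, model=1.0, step=0.1):
    """One gradient-ascent step on a Gaussian log-likelihood plus entropy prior.

    Objective: -0.5 * sum((f - data)**2) - sum(lam * f * log(f / model)),
    where lam is a per-pixel (space-variant) hyperparameter map.
    """
    grad = -(f - data) - lam * (np.log(f / model) + 1.0)
    return np.clip(f + step * grad, 1e-6, None)   # keep intensities positive

rng = np.random.default_rng(4)
data = np.abs(rng.normal(loc=5.0, scale=1.0, size=64))
# Hypothetical segmentation: small lam (less smoothing) in a "star" region.
lam = np.full(64, 0.5)
lam[30:34] = 0.05

f = np.full(64, data.mean())
for _ in range(200):
    f = map_entropy_step(f, data, lam)
```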

Relevance:

30.00%

Publisher:

Abstract:

We study induced aggregation operators. The analysis begins with a review of some basic concepts, such as the induced ordered weighted averaging (IOWA) operator and the induced ordered weighted geometric (IOWG) operator. We then analyze the problem of decision making with the Dempster-Shafer theory of evidence and suggest the use of induced aggregation operators in this setting. We focus on the aggregation step and examine some of its main properties, including the distinction between descending and ascending orders and different families of induced operators. Finally, we present an illustrative example in which the results obtained with different types of aggregation operators can be compared.
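A minimal sketch of the IOWA operator (the weights and order-inducing values are illustrative): the arguments are reordered by an order-inducing variable rather than by their own magnitude, then combined with positional weights.

```python
def iowa(pairs, weights):
    """Induced OWA over pairs of (order-inducing value, argument).

    Arguments are sorted by the inducing value in decreasing order and
    aggregated with the positional weights (which must sum to 1).
    """
    ordered = [a for _, a in sorted(pairs, key=lambda p: -p[0])]
    return sum(w * a for w, a in zip(weights, ordered))

# Hypothetical: aggregate three expert payoffs, induced by expert reliability.
pairs = [(0.9, 60.0), (0.4, 80.0), (0.7, 50.0)]   # (reliability, payoff)
print(iowa(pairs, weights=[0.5, 0.3, 0.2]))
```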
