29 results for: Database search; Evidential value; Bayesian decision theory; Influence diagrams
at Consorci de Serveis Universitaris de Catalunya (CSUC), Spain
Abstract:
In many areas of economics there is a growing interest in how expertise and preferences drive individual and group decision making under uncertainty. Increasingly, we wish to estimate such models to quantify which of these drive decision making. In this paper we propose a new channel through which we can empirically identify expertise and preference parameters by using variation in decisions over heterogeneous priors. Relative to existing estimation approaches, our "Prior-Based Identification" extends the possible environments which can be estimated, and also substantially improves the accuracy and precision of estimates in those environments which can be estimated using existing methods.
Abstract:
Standard practice in wave-height hazard analysis often pays little attention to the uncertainty of the assessed return periods and occurrence probabilities. This fact favours the opinion that, when large events happen, the hazard assessment should change accordingly. However, the uncertainty of the hazard estimates is normally able to hide the effect of those large events. This is illustrated using data from the Mediterranean coast of Spain, where the last years have been extremely disastrous, making it possible to compare the hazard assessment based on data prior to those years with the analysis that includes them. With our approach, no significant change is detected once the statistical uncertainty is taken into account. The hazard analysis is carried out with a standard model. The time-occurrence of events is assumed to be Poisson distributed. The wave height of each event is modelled as a random variable whose upper tail follows a Generalized Pareto Distribution (GPD). Moreover, wave heights are assumed independent from event to event and also independent of their occurrence in time. A threshold for excesses is assessed empirically. The other three parameters (the Poisson rate and the shape and scale parameters of the GPD) are jointly estimated using Bayes' theorem. The prior distribution accounts for physical features of ocean waves in the Mediterranean Sea and for experience with these phenomena. The posterior distribution of the parameters yields posterior distributions of derived quantities such as occurrence probabilities and return periods. Predictive distributions are also available. Computations are carried out using the program BGPE v2.0.
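The peaks-over-threshold model described in this abstract can be sketched as follows. This is not the BGPE program nor the paper's Bayesian joint estimation; it is a minimal frequentist illustration on synthetic data, using an assumed threshold, Poisson rate, and GPD parameters, of how a GPD fit to threshold excesses yields a return level.

```python
# Illustrative sketch (not the BGPE program from the abstract): fit a
# Generalized Pareto Distribution to threshold excesses of synthetic
# wave heights, as in a peaks-over-threshold hazard model.
import numpy as np
from scipy.stats import genpareto

# Hypothetical storm-event wave heights (metres); GPD tail above a threshold.
threshold = 3.0
heights = threshold + genpareto.rvs(c=0.1, scale=0.8, size=500, random_state=0)

excesses = heights - threshold
shape, loc, scale = genpareto.fit(excesses, floc=0.0)  # fix location at 0

# T-year return level given an assumed Poisson event rate lam (events/year):
lam = 4.0             # assumed mean number of threshold exceedances per year
T = 50.0              # return period in years
p = 1.0 / (lam * T)   # per-event exceedance probability of the return level
return_level = threshold + genpareto.ppf(1.0 - p, c=shape, scale=scale)
```

The Bayesian version in the abstract would replace the point fit with a posterior over (rate, shape, scale), propagating it to a posterior distribution of the return level rather than a single number.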
Abstract:
Implement a database that can be integrated into an application for managing a network of libraries and that allows the efficient retrieval of those elements that are valuable for decision making.
Abstract:
A predictive model based on Bayesian networks that identifies the patients at greatest risk of hospital admission from a series of demographic and clinical data attributes.
Abstract:
This paper proposes to estimate the covariance matrix of stock returns by an optimally weighted average of two existing estimators: the sample covariance matrix and the single-index covariance matrix. This method is generally known as shrinkage, and it is standard in decision theory and in empirical Bayesian statistics. Our shrinkage estimator can be seen as a way to account for extra-market covariance without having to specify an arbitrary multi-factor structure. For NYSE and AMEX stock returns from 1972 to 1995, it can be used to select portfolios with significantly lower out-of-sample variance than a set of existing estimators, including multi-factor models.
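The shrinkage idea in this abstract can be sketched in a few lines of numpy: a convex combination of the sample covariance matrix and a single-index (market-model) target. The shrinkage intensity is fixed here purely for illustration; the paper's contribution is deriving the optimal data-driven weight, which this sketch omits, and the equal-weighted market proxy is an assumption of the example.

```python
# Minimal sketch of covariance shrinkage: a convex combination of the
# sample covariance matrix S and a single-index (market model) target F.
# The intensity delta is fixed for illustration; the paper derives an
# optimal data-driven weight instead.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_assets = 120, 10
returns = rng.normal(0.0, 0.02, size=(n_obs, n_assets))  # hypothetical returns

S = np.cov(returns, rowvar=False)  # sample covariance matrix

# Single-index target: market proxy = equal-weighted average of the assets.
market = returns.mean(axis=1)
var_m = market.var(ddof=1)
betas = np.array([np.cov(returns[:, i], market, ddof=1)[0, 1] / var_m
                  for i in range(n_assets)])
F = var_m * np.outer(betas, betas)
np.fill_diagonal(F, np.diag(S))  # keep sample variances on the diagonal

delta = 0.5  # illustrative shrinkage intensity in [0, 1]
Sigma_shrunk = delta * F + (1.0 - delta) * S
```

The target F imposes the low-dimensional market structure, the sample matrix S captures whatever extra-market covariance is present, and shrinkage trades their biases and variances off against each other.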
Abstract:
Linear spaces consisting of σ-finite probability measures and infinite measures (improper priors and likelihood functions) are defined. The commutative group operation, called perturbation, is the updating given by Bayes' theorem; the inverse operation is the Radon-Nikodym derivative. Bayes spaces of measures are sets of classes of proportional measures. In this framework, basic notions of mathematical statistics acquire a simple algebraic interpretation. For example, exponential families appear as affine subspaces with their sufficient statistics as a basis. Bayesian statistics, in particular some well-known properties of conjugate priors and likelihood functions, are revisited and slightly extended.
Abstract:
Health and inequalities in health among the inhabitants of European cities are of major importance for European public health, and there is great interest in how the different health care systems in Europe perform in reducing health inequalities. However, evidence on the spatial distribution of cause-specific mortality across the neighbourhoods of European cities is scarce. This study presents maps of avoidable mortality in European cities and analyses differences in avoidable mortality between neighbourhoods with different levels of deprivation. Methods: We determined the level of mortality from 14 avoidable causes of death for each neighbourhood of 15 large cities in different European regions. To address the problems associated with Standardised Mortality Ratios for small areas, we smoothed them using the Bayesian model proposed by Besag, York and Mollié. Ecological regression analysis was used to assess the association between social deprivation and mortality. Results: Mortality from avoidable causes of death is higher in deprived neighbourhoods, and mortality rate ratios between areas with different levels of deprivation differ between genders and cities. In most cases rate ratios are lower among women. While Eastern and Southern European cities show higher levels of avoidable mortality, the association of mortality with social deprivation tends to be stronger in Northern and weaker in Southern Europe. Conclusions: There are marked differences in the level of avoidable mortality between the neighbourhoods of European cities, and the level of avoidable mortality is associated with social deprivation. There is no systematic difference in the magnitude of this association between European cities or regions. Spatial patterns of avoidable mortality across small city areas can point to possible local problems and to specific strategies for reducing health inequality, which is important for the development of urban areas and the well-being of their inhabitants.
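The small-area problem that motivates the smoothing in this abstract can be illustrated with a toy example. The full Besag-York-Mollié model needs spatially structured priors and MCMC or INLA software; the non-spatial gamma-Poisson smoother below, with made-up counts and an illustrative prior, only shows why raw Standardised Mortality Ratios in small areas are shrunk toward the overall level.

```python
# Sketch: raw Standardised Mortality Ratios (SMR = observed / expected)
# are unstable in small areas; a simple gamma-Poisson smoother shrinks
# them toward the overall level. The BYM model in the abstract adds
# spatially structured priors, which this non-spatial toy omits.
import numpy as np

observed = np.array([2, 15, 40, 1, 8])               # hypothetical deaths per area
expected = np.array([4.0, 12.0, 35.0, 3.0, 9.0])     # age-standardised expectations

smr_raw = observed / expected

# Gamma(a, b) prior on the relative risk; posterior mean = (O + a) / (E + b),
# a convex combination of the raw SMR and the prior mean a / b = 1.
a, b = 10.0, 10.0  # illustrative prior centred on SMR = 1
smr_smoothed = (observed + a) / (expected + b)
```

Areas with small expected counts (few person-years at risk) are pulled most strongly toward 1, which is exactly the stabilising effect the spatial BYM prior provides, with neighbouring areas rather than a global prior supplying the strength.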
Abstract:
(INFINITIVE + CLITIC + AUX) is an evidential configuration in Old Spanish and Old Catalan, whereas (PARTICIPLE + CLITIC + AUX) is an instance of weak or unmarked focus fronting. The evidentiality of mesoclitic structures can be argued for on the basis of three main arguments: a) mesoclisis is not compulsory (i.e., whenever there is a clitic, either mesoclisis or proclisis/enclisis is possible); b) mesoclitic futures and conditionals are attested in interrogative sentences (with wh- elements); and c) they are not found in derived adverbial clauses (which is what one expects if they have an evidential value, since they bring about intervention effects corresponding to the derivational account of conditional and temporal sentences, for example; see Haegeman 2007 and ff.), and they are related to high modal expressions (thus interfering with MoodPIrrealis).
Abstract:
Minimax lower bounds for concept learning state, for example, that for each sample size $n$ and learning rule $g_n$, there exists a distribution of the observation $X$ and a concept $C$ to be learnt such that the expected error of $g_n$ is at least a constant times $V/n$, where $V$ is the VC dimension of the concept class. However, these bounds do not tell anything about the rate of decrease of the error for a fixed distribution-concept pair. In this paper we investigate minimax lower bounds in such a stronger sense. We show that for several natural $k$-parameter concept classes, including the class of linear halfspaces, the class of balls, the class of polyhedra with a certain number of faces, and a class of neural networks, for any sequence of learning rules $\{g_n\}$, there exists a fixed distribution of $X$ and a fixed concept $C$ such that the expected error is larger than a constant times $k/n$ for infinitely many $n$. We also obtain such strong minimax lower bounds for the tail distribution of the probability of error, which extend the corresponding minimax lower bounds.
Abstract:
We obtain minimax lower and upper bounds for the expected distortion redundancy of empirically designed vector quantizers. We show that the mean squared distortion of a vector quantizer designed from $n$ i.i.d. data points using any design algorithm is at least $\Omega(n^{-1/2})$ away from the optimal distortion for some distribution on a bounded subset of ${\cal R}^d$. Together with existing upper bounds, this result shows that the minimax distortion redundancy for empirical quantizer design, as a function of the size of the training data, is asymptotically on the order of $n^{-1/2}$. We also derive a new upper bound for the performance of the empirically optimal quantizer.
Abstract:
The classical binary classification problem is investigated when it is known in advance that the posterior probability function (or regression function) belongs to some class of functions. We introduce and analyze a method which effectively exploits this knowledge. The method is based on minimizing the empirical risk over a carefully selected "skeleton" of the class of regression functions. The skeleton is a covering of the class based on a data-dependent metric, especially fitted for classification. A new scale-sensitive dimension is introduced which is more useful for the studied classification problem than other, previously defined, dimension measures. This fact is demonstrated by performance bounds for the skeleton estimate in terms of the new dimension.
Abstract:
The projects dealing with e-government are currently the principal factor instigating the changes taking place within public administration. This change, which is considered unstoppable, has important implications for the management and conservation of the administrative documents generated by electronic transactions. This article analyses the methodological, legal and cultural challenges that arise when the archival paradigm is included within e-government projects, and the consequences this may have on the archives themselves. In conclusion, the author proposes a set of methodological solutions for identifying those electronic documents with evidential value, determining their life cycle, defining a preservation policy and creating a digital archive.
Abstract:
This paper introduces a mixture model based on the beta distribution, without pre-established means and variances, to analyze a large set of Beauty-Contest data obtained from diverse groups of experiments (Bosch-Domenech et al. 2002). This model gives a better fit to the experimental data, and more precision to the hypothesis that a large proportion of individuals follow a common pattern of reasoning, described as iterated best reply (degenerate), than mixture models based on the normal distribution. The analysis shows that the means of the distributions across the groups of experiments are fairly stable, while the proportions of choices at different levels of reasoning vary across groups.
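A beta mixture with free means and variances, of the kind this abstract describes, can be fitted with a short EM loop. This toy is not the paper's estimator: the data are synthetic, the number of components is fixed at two, and the M-step uses moment matching for the beta shape parameters, a common simplification of the exact maximum-likelihood update.

```python
# Toy EM fit of a two-component beta mixture (no pre-set means/variances),
# in the spirit of the abstract. The M-step moment-matches the beta shape
# parameters instead of solving the exact likelihood equations.
import numpy as np
from scipy.stats import beta

# Hypothetical Beauty-Contest choices rescaled to (0, 1): two clusters.
data = np.concatenate([beta.rvs(20, 40, size=200, random_state=2),
                       beta.rvs(40, 20, size=200, random_state=3)])

def moments_to_beta(m, v):
    """Convert a mean/variance pair to beta shape parameters (method of moments)."""
    common = m * (1.0 - m) / v - 1.0
    return m * common, (1.0 - m) * common

# Initialise two components at different means, equal weights.
params = [moments_to_beta(0.3, 0.01), moments_to_beta(0.7, 0.01)]
weights = np.array([0.5, 0.5])

for _ in range(50):
    # E-step: posterior responsibility of each component for each choice.
    dens = np.stack([w * beta.pdf(data, a, b)
                     for w, (a, b) in zip(weights, params)])
    resp = dens / dens.sum(axis=0)
    # M-step: update weights and moment-matched shapes per component.
    weights = resp.mean(axis=1)
    params = [moments_to_beta(np.average(data, weights=r),
                              np.average((data - np.average(data, weights=r)) ** 2,
                                         weights=r))
              for r in resp]

means = [a / (a + b) for a, b in params]  # fitted component means
```

On these synthetic clusters (true means 1/3 and 2/3), the fitted component means recover the two levels of reasoning and the weights recover the mixing proportions, which is the quantity the paper compares across experimental groups.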
Abstract:
Possible new ways in the pharmacological treatment of bipolar disorder and comorbid alcoholism. Azorin JM, Bowden CL, Garay RP, Perugi G, Vieta E, Young AH. Department of Psychiatry, CHU Sainte Marguerite, Marseilles, France. About half of all bipolar patients have an alcohol abuse problem at some point in their lifetime. However, only one randomized, controlled trial of pharmacotherapy (valproate) in this patient population had been published as of 2006. Therefore, we reviewed clinical trials in this indication from the last four years (using mood stabilizers, atypical antipsychotics, and other drugs). Priority was given to randomized trials comparing drugs with placebo or an active comparator. Published studies were found through a systematic database search (PubMed, Scirus, EMBASE, Cochrane Library, Science Direct). In these last four years, the only randomized, clinically relevant study in bipolar patients with comorbid alcoholism is that of Brown and colleagues (2008), showing that quetiapine therapy decreased depressive symptoms in the early weeks of use without modifying alcohol use. Several other open-label trials have been generally positive and support the efficacy and tolerability of agents from different classes in this patient population. The efficacy of valproate in reducing excessive alcohol consumption in bipolar patients was confirmed, and new controlled studies revealed its therapeutic benefit in preventing relapse in newly abstinent alcoholics and in improving alcohol hallucinosis. Topiramate deserves to be investigated in bipolar patients with comorbid alcoholism, since this compound effectively improves the physical health and quality of life of alcohol-dependent individuals. In conclusion, randomized, controlled research is still needed to provide guidelines for the possible use of valproate and other agents in patients with a dual diagnosis of bipolar disorder and substance abuse or dependence.
Abstract:
Marketing research has studied how long a client stays with an enterprise, since this duration is a key element in estimating the (economic) customer lifetime value (CLV). Existing work is based on deterministic or stochastic models that allow the client's tenure, and hence the CLV, to be estimated. However, when these schemes cannot be applied because the panel data they require are unavailable, the length of a client's relationship with the enterprise is an uncertain quantity. We consider that the value of the present work lies in providing an alternative way to estimate this period from subjective information, drawing on the theory of uncertainty.
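The CLV calculation that the client's tenure feeds into can be sketched in one line. The margin, retention rate, discount rate, and horizon below are all made-up illustrative figures; the abstract's point is precisely that when panel data for estimating the retention behaviour are missing, inputs like the retention rate must come from subjective information instead.

```python
# Sketch of the standard discounted-CLV formula: per-period margin m,
# retention rate r (probability the client survives each period), and
# discount rate d, summed over a finite horizon. All figures are
# hypothetical placeholders for estimates the abstract discusses.
margin, r, d, horizon = 100.0, 0.8, 0.1, 20
clv = sum(margin * r**t / (1.0 + d)**t for t in range(horizon))
```

With these numbers the geometric sum is close to its infinite-horizon limit margin * (1 + d) / (1 + d - r), so the choice of horizon matters little once r / (1 + d) is well below 1.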