964 results for 230201 Probability Theory


Relevance:

80.00%

Publisher:

Abstract:

Using a laboratory experiment, we investigate whether incentive compatibility affects subjective probabilities elicited via the exchangeability method (EM), an elicitation technique consisting of several chained questions. We hypothesize that subjects who are aware of the chaining behave strategically and provide invalid subjective probabilities, while subjects who are not aware of the chaining state their real beliefs and provide valid subjective probabilities. The validity of subjective probabilities is investigated using de Finetti's notion of coherence, under which probability estimates are valid if and only if they obey all axioms of probability theory.
Four experimental treatments are designed and implemented. Subjects are divided into two initial treatment groups: in the first, they are provided with real monetary incentives; in the second, they are not. Each group is further subdivided into two treatment groups: in the first, the chained structure of the experimental design is made clear to the subjects, while in the second it is hidden by randomizing the elicitation questions.
Our results suggest that subjects provided with monetary incentives and randomized questions give valid subjective probabilities, because they are unaware of the chaining that would otherwise undermine the incentive compatibility of the exchangeability method.
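The coherence criterion lends itself to a direct computational check. The sketch below is a minimal illustration, not the authors' procedure: it assumes, for simplicity, that probabilities are elicited over a partition of mutually exclusive, exhaustive events, and tests two basic axioms (each value lies in [0, 1], and the values sum to one).

```python
def is_coherent(probabilities, tol=1e-9):
    """Check two basic coherence axioms for probabilities elicited over a
    partition (mutually exclusive, exhaustive events): each value must lie
    in [0, 1] and the values must sum to one."""
    in_range = all(-tol <= p <= 1 + tol for p in probabilities)
    additive = abs(sum(probabilities) - 1.0) <= tol
    return in_range and additive

# Example: three elicited probabilities for a partition of outcomes.
print(is_coherent([0.2, 0.5, 0.3]))  # True
print(is_coherent([0.4, 0.5, 0.3]))  # False: values sum to 1.2
```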

Relevance:

80.00%

Publisher:

Abstract:

Context awareness, dynamic reconfiguration at runtime, and heterogeneity are key characteristics of future distributed systems, particularly in ubiquitous and mobile computing scenarios. The main contributions of this dissertation are theoretical as well as architectural concepts facilitating information exchange and fusion in heterogeneous and dynamic distributed environments. Our main focus is on bridging heterogeneity issues while, at the same time, considering uncertain, imprecise and unreliable sensor information in information fusion and reasoning approaches. A domain ontology is used to establish a common vocabulary for the exchanged information. We thereby explicitly support different representations for the same kind of information and provide Inter-Representation Operations that convert between them. Special account is taken of the conversion of associated meta-data that express uncertainty and imprecision. The Unscented Transformation, for example, is applied to propagate Gaussian normal distributions across highly non-linear Inter-Representation Operations. Uncertain sensor information is fused using the Dempster-Shafer Theory of Evidence, as it allows explicit modelling of partial and complete ignorance. We also show how to incorporate the Dempster-Shafer Theory of Evidence into probabilistic reasoning schemes such as Hidden Markov Models, so that the uncertainty of sensor information can be taken into account when deriving high-level information from low-level data. For all these concepts we provide architectural support as a guideline for developers of innovative information exchange and fusion infrastructures that are particularly targeted at heterogeneous dynamic environments. Two case studies serve as proof of concept. The first focuses on heterogeneous autonomous robots that have to form a cooperative team spontaneously in order to achieve a common goal. The second is concerned with an approach to user activity recognition which serves as a baseline for a context-aware adaptive application. Both case studies demonstrate the viability and strengths of the proposed solution and emphasize that the Dempster-Shafer Theory of Evidence should be preferred to pure probability theory in applications involving non-linear Inter-Representation Operations.
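As a concrete illustration of the fusion step, the following sketch implements Dempster's rule of combination for two mass functions over a small frame of discernment. The sensor reports and mass values are invented for the example; the dissertation's actual fusion pipeline is of course richer.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Combine two Dempster-Shafer mass functions (dicts mapping frozenset
    focal elements to masses) via Dempster's rule, renormalising away the
    mass assigned to conflicting (empty-intersection) pairs."""
    combined = {}
    conflict = 0.0
    for (a, x), (b, y) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + x * y
        else:
            conflict += x * y
    k = 1.0 - conflict
    if k <= 0.0:
        raise ValueError("total conflict: masses cannot be combined")
    return {s: v / k for s, v in combined.items()}

# Example: two sensors report on activities {walk, run}; mass on the whole
# frame {walk, run} expresses partial ignorance.
m_a = {frozenset({"walk"}): 0.6, frozenset({"walk", "run"}): 0.4}
m_b = {frozenset({"run"}): 0.3, frozenset({"walk", "run"}): 0.7}
print(dempster_combine(m_a, m_b))
```

Assigning mass to non-singleton sets is exactly what pure probability theory cannot express, which is the abstract's argument for preferring evidence theory here.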

Relevance:

80.00%

Publisher:

Abstract:

The increasing interconnection of information and communication systems leads to ever greater complexity and thus also to a further increase in security vulnerabilities. Classical protection mechanisms such as firewall systems and anti-malware solutions have long ceased to offer protection against intrusion attempts into IT infrastructures. Intrusion detection systems (IDS) have established themselves as a highly effective instrument for protection against cyber attacks. Such systems collect and analyse information from network components and computers in order to detect unusual behaviour and security violations automatically. While signature-based approaches can only detect already known attack patterns, anomaly-based IDS are also able to recognize new, previously unknown attacks (zero-day attacks) at an early stage. The core problem of intrusion detection systems, however, lies in the optimal processing of the enormous volume of network data and in the development of an adaptive detection model that works in real time. To meet these challenges, this dissertation provides a framework consisting of two main parts. The first part, called OptiFilter, uses a dynamic queuing concept to process the continuously arriving network data, continuously assembles network connections, and exports structured input data for the IDS. The second part is an adaptive classifier comprising a classifier model based on an Enhanced Growing Hierarchical Self-Organizing Map (EGHSOM), a model of the normal network state (NNB), and an update model. Within OptiFilter, tcpdump and SNMP traps are used to continuously aggregate network packets and host events. These aggregated network packets and host events are further analysed and converted into connection vectors. To improve the detection rate of the adaptive classifier, the artificial neural network GHSOM is studied intensively and substantially extended. Several approaches are proposed and discussed in this dissertation: a classification-confidence margin threshold is defined to uncover unknown malicious connections, the stability of the growing topology is increased by novel approaches to the initialization of the weight vectors and by strengthening the winner neurons, and a self-adaptive procedure is introduced to keep the model continuously up to date. In addition, the main task of the NNB model is to examine further the unknown connections detected by the EGHSOM and to verify whether they are normal. However, network traffic changes constantly owing to the concept-drift phenomenon, which produces non-stationary network data in real time; this phenomenon is handled by the update model. The EGHSOM model detects new anomalies effectively, and the NNB model adapts optimally to changes in the network data. In the experimental evaluation the framework showed promising results. In the first experiment the framework was evaluated in offline mode: OptiFilter was assessed with offline, synthetic and realistic data, and the adaptive classifier was evaluated using 10-fold cross-validation to estimate its accuracy.
In the second experiment the framework was installed on a 1 to 10 GB network link and evaluated online in real time. OptiFilter successfully converted the enormous volume of network data into structured connection vectors, and the adaptive classifier classified them precisely. A comparative study between the developed framework and other well-known IDS approaches shows that the proposed IDS framework outperforms all of them. This can be attributed to the following key points: the processing of the collected network data, the best overall performance achieved (e.g. overall accuracy), the detection of unknown connections, and the development of a real-time intrusion detection model.
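The classification-confidence margin idea can be illustrated with a minimal nearest-prototype sketch. The EGHSOM-specific formulation is not reproduced here; the prototype vectors, labels, and margin value below are hypothetical. A connection vector whose two best-matching units are nearly equidistant is flagged as unknown and passed on for further examination (by the NNB model, in the dissertation's design).

```python
import numpy as np

def classify_with_margin(x, prototypes, labels, margin=0.25):
    """Nearest-prototype classification with a confidence-margin threshold:
    if the best and second-best matching units are nearly equidistant from
    x, the connection vector is flagged as 'unknown'."""
    d = np.linalg.norm(prototypes - x, axis=1)
    best, second = np.argsort(d)[:2]
    # Relative margin between the two best-matching units.
    rel_margin = (d[second] - d[best]) / (d[second] + 1e-12)
    return labels[best] if rel_margin >= margin else "unknown"

# Hypothetical 3-dimensional connection vectors and trained map units.
units = np.array([[0.1, 0.2, 0.0], [0.9, 0.8, 1.0]])
unit_labels = ["normal", "attack"]
print(classify_with_margin(np.array([0.5, 0.5, 0.5]), units, unit_labels))
# -> 'unknown': the vector is ambiguous between both units.
```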

Relevance:

80.00%

Publisher:

Abstract:

The complexity involved in studying the patrimonial liability of the State in the field of medical care makes it necessary to pay attention to certain topics that are especially relevant and that have been settled in the case law of the Honourable Council of State. Accordingly, this work develops salient and novel topics in the matter of imputability, such as proof of medical fault through the doctrine of res ipsa loquitur, and proof of the causal nexus through circumstantial evidence and the theory of preponderant probability. It also studies the various types of unlawful damage that may arise in the provision of medical care by the State, highlighting in particular the infringement of the right to receive timely and effective care, and the loss of an opportunity caused by failure to obtain the patient's informed consent, which in turn curtails the patient's right to choose whether or not to undergo a given treatment after weighing the pros and cons of the therapy suggested by the physician (principle of non-aggravation). Likewise, it analyses hypotheses of unlawful damage arising from diagnostic error, fault by omission of the oversight and control entities, and fault in obstetrics and gynaecology, as well as hypotheses of strict liability of the State for surgical material left in the patient's body (óblito quirúrgico), before finally addressing the novel topic of therapeutic hazard (alea terapéutica), with its particular characteristics and its possible applicability in the Colombian legal system.

Relevance:

80.00%

Publisher:

Abstract:

The objective of this document is to compile some classical results on the existence and uniqueness of solutions of stochastic differential equations with terminal condition (backward stochastic differential equations, BSDEs), with particular emphasis on the case of monotone coefficients, and on their connection with viscosity solutions of systems of semilinear second-order parabolic and elliptic partial differential equations (PDEs).
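For reference, a BSDE with terminal condition $\xi$ and driver $f$ takes the standard form

```latex
\[
  Y_t \;=\; \xi \;+\; \int_t^T f(s, Y_s, Z_s)\,\mathrm{d}s
  \;-\; \int_t^T Z_s\,\mathrm{d}W_s,
  \qquad 0 \le t \le T,
\]
```

where a solution is an adapted pair $(Y, Z)$. The monotone-coefficient case mentioned in the abstract relaxes Lipschitz continuity of $f$ in $y$ to a one-sided monotonicity condition.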

Relevance:

80.00%

Publisher:

Abstract:

Despite the success of studies attempting to integrate remotely sensed data and flood modelling, and the need to provide near-real-time data routinely on a global scale and to set up online data archives, there is to date a lack of spatially and temporally distributed hydraulic parameters to support ongoing modelling efforts. The objective of this project is therefore to provide a global evaluation and benchmark data set of floodplain water stages, with uncertainties, and their assimilation in a large-scale flood model using space-borne radar imagery. An algorithm is developed for the automated retrieval of water stages with uncertainties from a sequence of radar imagery, and the data are assimilated in a flood model using the Tewkesbury 2007 flood event as a feasibility study. The retrieval method we employ is based on possibility theory, an extension of fuzzy set theory that encompasses probability theory. We first identify the main sources of uncertainty in the retrieval of water stages from radar imagery and define physically meaningful ranges of values for each parameter. Possibilities of values are then computed for each parameter using a triangular 'membership' function. This procedure allows the computation of possible values of water stages at maximum flood extent at many different locations along a river. At a later stage in the project these data are used in the assimilation, calibration or validation of a flood model. The application is subsequently extended to the global scale using wide-swath radar imagery and a simple global flood forecasting model, thereby providing improved river discharge estimates with which to update the latter.
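A triangular membership function of the kind described is straightforward to compute. In the sketch below the parameter name and its range are invented for illustration: possibility equals 1 at the most plausible value and falls linearly to 0 at the ends of the physically meaningful range.

```python
def triangular_possibility(x, lo, mode, hi):
    """Possibility of parameter value x under a triangular membership
    function with support [lo, hi] and most-plausible value `mode`."""
    if x <= lo or x >= hi:
        return 0.0
    if x <= mode:
        return (x - lo) / (mode - lo)
    return (hi - x) / (hi - mode)

# Hypothetical range for a radar backscatter threshold parameter (dB).
print(triangular_possibility(-8.0, lo=-12.0, mode=-9.0, hi=-5.0))  # 0.75
```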

Relevance:

80.00%

Publisher:

Abstract:

Scale functions play a central role in the fluctuation theory of spectrally negative Lévy processes and often appear in the context of martingale relations. Establishing these relations often requires excursion theory rather than Itô calculus, because standard Itô calculus is only applicable to functions with a sufficient degree of smoothness, and knowledge of the precise degree of smoothness of scale functions is seemingly incomplete. The aim of this article is to offer new results concerning properties of scale functions in relation to the smoothness of the underlying Lévy measure. We place particular emphasis on spectrally negative Lévy processes with a Gaussian component and on processes of bounded variation. An additional motivation is the very intimate relation of scale functions to renewal functions of subordinators. The results obtained for scale functions have direct implications, offering new results concerning the smoothness of such renewal functions, a topic on which there seems to be very little existing literature.
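For orientation, the $q$-scale function $W^{(q)}$ of a spectrally negative Lévy process $X$ is defined through its Laplace transform:

```latex
\[
  \int_0^\infty e^{-\beta x}\, W^{(q)}(x)\,\mathrm{d}x
  \;=\; \frac{1}{\psi(\beta) - q},
  \qquad \beta > \Phi(q),
\]
```

where $\psi(\beta) = \log \mathbb{E}\, e^{\beta X_1}$ is the Laplace exponent of $X$ and $\Phi(q)$ is its right inverse. The smoothness questions studied in the article concern $W^{(q)}$ as a function of $x$.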

Relevance:

80.00%

Publisher:

Abstract:

We wish to characterize when a Lévy process X_t crosses boundaries b(t), in a two-sided sense, for small times t, where b(t) satisfies very mild conditions. An integral test is furnished for computing the value of c = limsup_{t→0} |X_t|/b(t). In some cases, we also specify a function b(t) in terms of the Lévy triplet such that limsup_{t→0} |X_t|/b(t) = 1.
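Recall that the Lévy triplet $(\gamma, \sigma^2, \Pi)$ referred to here determines the process through the Lévy-Khintchine formula (one common sign convention shown):

```latex
\[
  \mathbb{E}\, e^{i\theta X_t} = e^{-t\,\Psi(\theta)}, \qquad
  \Psi(\theta) = -i\gamma\theta + \tfrac{1}{2}\sigma^2\theta^2
  + \int_{\mathbb{R}} \bigl(1 - e^{i\theta x}
  + i\theta x\,\mathbf{1}_{\{|x| \le 1\}}\bigr)\,\Pi(\mathrm{d}x),
\]
```

so a boundary $b(t)$ built from the triplet is one built from the drift, the Gaussian variance, and the jump measure.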

Relevance:

80.00%

Publisher:

Abstract:

The notions of resolution and discrimination of probability forecasts are revisited. It is argued that the common concept underlying both resolution and discrimination is the dependence (in the sense of probability theory) of forecasts and observations. More specifically, a forecast has no resolution if and only if it has no discrimination, if and only if forecast and observation are stochastically independent. A statistical test for independence is thus also a test for no resolution and, at the same time, for no discrimination. The resolution term in the decomposition of the logarithmic scoring rule and the area under the Receiver Operating Characteristic are investigated in this light.
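Since no resolution and no discrimination both coincide with stochastic independence of forecast and observation, any off-the-shelf independence test applies. A minimal sketch with invented binary data, using a chi-squared test on the forecast-observation contingency table:

```python
import numpy as np
from scipy.stats import chi2_contingency

# Hypothetical binarised probability forecasts and observed outcomes.
forecast_bin = np.array([0, 0, 1, 1, 0, 1, 1, 0, 1, 1])
observation  = np.array([0, 1, 1, 1, 0, 0, 1, 0, 1, 1])

# Contingency table of forecast category vs. observed outcome; under the
# null of independence the forecast has neither resolution nor discrimination.
table = np.zeros((2, 2))
for f, o in zip(forecast_bin, observation):
    table[f, o] += 1
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.3f}, p={p:.3f}")
```

A small p-value is evidence against independence, i.e. evidence that the forecast has some resolution (equivalently, some discrimination).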

Relevance:

80.00%

Publisher:

Abstract:

In this paper we extend the long-term survival model proposed by Chen et al. [Chen, M.-H., Ibrahim, J.G., Sinha, D., 1999. A new Bayesian model for survival data with a surviving fraction. Journal of the American Statistical Association 94, 909-919] via the generating function of a real sequence introduced by Feller [Feller, W., 1968. An Introduction to Probability Theory and Its Applications, third ed., vol. 1. Wiley, New York]. A direct consequence of this new formulation is the unification of the long-term survival models proposed by Berkson and Gage [Berkson, J., Gage, R.P., 1952. Survival curve for cancer patients following treatment. Journal of the American Statistical Association 47, 501-515] and Chen et al. (see citation above). We also show that the long-term survival function formulated in this paper satisfies the proportional hazards property if, and only if, the number of competing causes related to the occurrence of an event of interest follows a Poisson distribution. Furthermore, a more flexible model than the one proposed by Yin and Ibrahim [Yin, G., Ibrahim, J.G., 2005. Cure rate models: a unified approach. The Canadian Journal of Statistics 33, 559-570] is introduced and, motivated by Feller's results, a very useful competing index is defined.
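The unification can be stated compactly. If the number of competing causes N has probability generating function A(s) and each latent cause produces an event time with survival function S(t), the population survival function is

```latex
\[
  S_{\mathrm{pop}}(t) \;=\; \mathbb{E}\bigl[S(t)^N\bigr] \;=\; A\bigl(S(t)\bigr).
\]
```

Taking $N \sim \mathrm{Poisson}(\theta)$ gives $A(s) = e^{-\theta(1-s)}$ and hence $S_{\mathrm{pop}}(t) = e^{-\theta F(t)}$, the Chen et al. model, which is of proportional-hazards form; taking $P(N=1) = \pi = 1 - P(N=0)$ gives $S_{\mathrm{pop}}(t) = 1 - \pi + \pi S(t)$, the Berkson-Gage mixture model.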

Relevance:

80.00%

Publisher:

Abstract:

We consider one-dimensional random walks in random environment which are transient to the right. Our main interest is in the study of the sub-ballistic regime, where at time n the particle is typically at a distance of order O(n^κ) from the origin, κ ∈ (0, 1). We investigate the probabilities of moderate deviations from this behaviour. Specifically, we are interested in quenched and annealed probabilities of slowdown (at time n, the particle is at a distance of order O(n^{ν_0}) from the origin, ν_0 ∈ (0, κ)) and speedup (at time n, the particle is at a distance of order n^{ν_1} from the origin, ν_1 ∈ (κ, 1)), for the current location of the particle and for the hitting times. Also, we study probabilities of backtracking: at time n, the particle is located around −n^ν, thus making an unusual excursion to the left. For the slowdown, our results are valid in the ballistic case as well.

Relevance:

80.00%

Publisher:

Abstract:

The Laplace distribution is one of the earliest distributions in probability theory. Based on it, we propose for the first time the so-called beta Laplace distribution, which extends the Laplace distribution. Various structural properties of the new distribution are derived, including expansions for its moments, its moment generating function, the moments of its order statistics, and so forth. We discuss maximum likelihood estimation of the model parameters and derive the observed information matrix. The usefulness of the new model is illustrated by means of a real data set.
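A plausible reading of the construction, assumed here to be the usual beta-generated family (it is not spelled out in the abstract), applies the regularised incomplete beta function to the Laplace CDF. A minimal sketch:

```python
from scipy.special import betainc
from scipy.stats import laplace

def beta_laplace_cdf(x, a, b, loc=0.0, scale=1.0):
    """CDF of the beta-G family with G the Laplace CDF:
    F(x) = I_{G(x)}(a, b), the regularised incomplete beta function
    evaluated at the Laplace CDF. This is the standard beta-generated
    construction, assumed to match the paper's definition."""
    g = laplace.cdf(x, loc=loc, scale=scale)
    return betainc(a, b, g)

print(beta_laplace_cdf(0.0, a=2.0, b=3.0))  # G(0) = 0.5, so I_{0.5}(2, 3)
```

Setting a = b = 1 recovers the ordinary Laplace CDF, which is the sense in which the new model extends the Laplace distribution.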

Relevance:

80.00%

Publisher:

Abstract:

Solutions to combinatorial optimization problems, such as facility-location problems, frequently rely on heuristics to minimize the objective function. The optimum is sought iteratively, and a criterion is needed to decide when the procedure has (almost) attained it. Pre-setting the number of iterations dominates in OR applications, which implies that the quality of the solution cannot be ascertained. A small, almost dormant, branch of the literature suggests using statistical principles to estimate the minimum and its bounds, as a tool both for deciding when to stop and for evaluating the quality of the solution. In this paper we examine the behaviour of statistical bounds obtained from four different estimators, using simulated annealing on p-median test problems taken from Beasley's OR-Library. We find the Weibull estimator and the second-order Jackknife estimator preferable, and a sample size of about 10 to be sufficient, much less than the current recommendation. However, reliable statistical bounds are found to depend critically on a sample of high-quality heuristic solutions, and we give a simple statistic useful for checking that quality. We end the paper with an illustration of the use of statistical bounds in a problem of locating some 70 distribution centres of the Swedish Post in one Swedish region.
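A minimal sketch of the Weibull estimator of the minimum (the data and the "true" optimum below are invented for illustration): objective values from repeated heuristic runs are treated approximately as a sample from a three-parameter Weibull distribution whose location parameter is the unknown global minimum, so fitting the distribution yields a point estimate of that minimum.

```python
import numpy as np
from scipy.stats import weibull_min

rng = np.random.default_rng(1)
# Hypothetical objective values from 10 simulated-annealing runs on a
# p-median instance; the unknown global minimum is 100 in this toy setup.
values = 100.0 + 5.0 * rng.weibull(2.0, size=10)

# Fit a three-parameter Weibull by maximum likelihood; the location
# estimate is the point estimate of the global minimum.
shape, loc, scale = weibull_min.fit(values)
print(f"estimated minimum (Weibull location): {loc:.2f}")
```

The paper's point that bounds depend critically on solution quality shows up here too: if the runs are poor, the sample sits far from the true minimum and the fitted location inherits that bias.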

Relevance:

80.00%

Publisher:

Abstract:

Background: Linkage mapping is used to identify genomic regions affecting the expression of complex traits. However, when experimental crosses such as F2 populations or backcrosses are used to map regions containing a Quantitative Trait Locus (QTL), the regions identified remain quite large, i.e. 10 Mb or more. Thus, other experimental strategies are needed to refine QTL locations. Advanced Intercross Lines (AIL) are produced by repeated intercrossing of F2 animals and successive generations, which decreases linkage disequilibrium in a controlled manner. Although this approach is seen as promising, both for replicating QTL analyses and for fine-mapping QTL, only a few AIL datasets, all originating from inbred founders, have been reported in the literature.
Methods: We have produced a nine-generation AIL pedigree (n = 1529) from two outbred chicken lines divergently selected for body weight at eight weeks of age. All animals were weighed at eight weeks of age and genotyped for SNPs located in nine genomic regions where significant or suggestive QTL had previously been detected in the F2 population. In parallel, we have developed a novel strategy that uses both genotype and pedigree information from all AIL individuals to replicate the detection of, and fine-map, QTL affecting juvenile body weight.
Results: Five of the nine QTL detected in the original F2 population were confirmed and fine-mapped with the AIL, while for the remaining four only suggestive evidence of their existence was obtained. All confirmed QTL behaved as a single locus except one, which split into two linked QTL.
Conclusions: Our results indicate that many of the QTL that are genome-wide significant or suggestive in analyses of large intercross populations are true effects that can be replicated and fine-mapped using AIL. Key factors for success are the use of large populations and powerful statistical tools. Moreover, we believe that the statistical methods we have developed for efficiently studying outbred AIL populations will increase the number of organisms in which complex traits can be analyzed in depth.