10 results for Bayes' Theorem
in BORIS: Bern Open Repository and Information System - Bern - Switzerland
Abstract:
In this work we propose the adoption of a statistical framework used in the evaluation of forensic evidence as a tool for evaluating and presenting circumstantial "evidence" of a disease outbreak from syndromic surveillance. The basic idea is to exploit the predicted distributions of reported cases to calculate the ratio of the likelihood of observing n cases given an ongoing outbreak to the likelihood of observing n cases given no outbreak. This likelihood ratio defines the Value of Evidence (V). Using Bayes' rule, the prior odds for an ongoing outbreak are multiplied by V to obtain the posterior odds. The approach was applied to time series of the number of horses showing clinical respiratory or neurological symptoms. The separation between prior beliefs about the probability of an outbreak and the strength of evidence from syndromic surveillance offers a transparent reasoning process suitable for supporting decision makers. The value of evidence can be translated into a verbal statement, as is often done in forensics, or used for the production of risk maps. Furthermore, a Bayesian approach offers seamless integration of data from syndromic surveillance with results from predictive modeling and with information from other sources such as disease introduction risk assessments.
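As a rough illustration of the reasoning described in this abstract (not the authors' implementation), the sketch below assumes Poisson-distributed case counts with made-up baseline and outbreak means, computes the value of evidence V, and updates the prior odds:

```python
# Hypothetical sketch: the Poisson means and the prior are illustrative assumptions,
# not values from the study.
from scipy.stats import poisson

def value_of_evidence(n_cases, mean_no_outbreak, mean_outbreak):
    """Likelihood ratio V = P(n | outbreak) / P(n | no outbreak)."""
    return poisson.pmf(n_cases, mean_outbreak) / poisson.pmf(n_cases, mean_no_outbreak)

def posterior_odds(prior_odds, v):
    """Bayes' rule in odds form: posterior odds = prior odds x V."""
    return prior_odds * v

V = value_of_evidence(n_cases=12, mean_no_outbreak=4.0, mean_outbreak=10.0)
odds = posterior_odds(prior_odds=1 / 99, v=V)  # prior probability of an outbreak: 1%
print(f"V = {V:.1f}, posterior odds = {odds:.2f}")
```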
Abstract:
We consider the problem of twenty questions with noisy answers, in which we seek to find a target by repeatedly choosing a set, asking an oracle whether the target lies in this set, and obtaining an answer corrupted by noise. Starting with a prior distribution on the target's location, we seek to minimize the expected entropy of the posterior distribution. We formulate this problem as a dynamic program and show that any policy optimizing the one-step expected reduction in entropy is also optimal over the full horizon. Two such Bayes optimal policies are presented: one generalizes the probabilistic bisection policy due to Horstein and the other asks a deterministic set of questions. We study the structural properties of the latter, and illustrate its use in a computer vision application.
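The following is a minimal, discrete sketch of the kind of noisy Bayesian updating and greedy one-step questioning discussed above; it is an illustrative analogue, not the paper's policies, and the noise level eps is an assumption:

```python
import numpy as np

def bayes_update(prior, in_set, answer_yes, eps=0.1):
    """Posterior after asking 'is the target in this set?' of an oracle that
    answers incorrectly with probability eps."""
    likelihood = np.where(in_set, 1 - eps, eps) if answer_yes else np.where(in_set, eps, 1 - eps)
    posterior = prior * likelihood
    return posterior / posterior.sum()

def bisection_set(prior):
    """Greedy question: a prefix set whose prior mass is as close to 1/2 as possible."""
    cum = np.cumsum(prior)
    k = int(np.argmin(np.abs(cum - 0.5)))
    in_set = np.zeros(prior.size, dtype=bool)
    in_set[: k + 1] = True
    return in_set

prior = np.full(64, 1 / 64)            # uniform prior over 64 candidate locations
question = bisection_set(prior)
posterior = bayes_update(prior, question, answer_yes=True)
```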
Abstract:
When tilted sideways, participants misperceive the visual vertical assessed by means of a luminous line in otherwise complete darkness. A recent modeling approach (De Vrijer et al., 2009) claimed that these typical patterns of errors (known as A- and E-effects) could be explained by assuming that participants behave in a Bayes optimal manner. In this study, we experimentally manipulate participants’ prior information about body-in-space orientation and measure the effect of this manipulation on the subjective visual vertical (SVV). Specifically, we explore the effects of veridical and misleading instructions about body tilt orientations on the SVV. We used a psychophysical 2AFC SVV task at roll tilt angles of 0 degrees, 16 degrees and 4 degrees CW and CCW. Participants were tilted to 4 degrees under different instruction conditions: in one condition, participants received veridical instructions as to their tilt angle, whereas in another condition, participants received the misleading instruction that their body position was perfectly upright. Our results indicate systematic differences between the instruction conditions at 4 degrees CW and CCW. Participants did not simply use an ego-centric reference frame in the misleading condition; instead, participants’ estimates of the SVV seem to lie between their head’s Z-axis and the estimate of the SVV as measured in the veridical condition. All participants displayed A-effects at roll tilt angles of 16 degrees CW and CCW. We discuss our results in the context of the Bayesian model by De Vrijer et al. (2009), and claim that this pattern of results is consistent with a manipulation of the precision of a prior distribution over body-in-space orientations. Furthermore, we introduce a Bayesian Generalized Linear Model for estimating parameters of participants’ psychometric function, which allows us to jointly estimate group-level and individual-level parameters under all experimental conditions simultaneously, rather than relying on the traditional two-step approach to obtaining group-level parameter estimates.
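A minimal sketch of the Gaussian prior-likelihood combination underlying the Bayesian account referenced above (De Vrijer et al., 2009); the noise parameters are assumptions chosen for illustration, not fitted values from either study:

```python
def map_tilt_estimate(true_tilt_deg, sigma_sensory=8.0, sigma_prior=6.0, prior_mean=0.0):
    """MAP body-tilt estimate from a Gaussian sensory likelihood and a Gaussian
    prior centered on upright; underestimating tilt produces an A-effect-like bias."""
    w = sigma_prior**2 / (sigma_prior**2 + sigma_sensory**2)  # weight on the sensory signal
    return w * true_tilt_deg + (1 - w) * prior_mean

print(map_tilt_estimate(16.0))                    # ~5.8 deg with these assumed sigmas
print(map_tilt_estimate(16.0, sigma_prior=3.0))   # ~2.0 deg: a sharper prior pulls harder toward upright
```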
Abstract:
Objective: This study investigated the validity and reliability of different visual dental magnification aids for occlusal caries diagnosis using the International Caries Detection and Assessment System (ICDAS). Material and methods: The occlusal surfaces of 100 extracted teeth were visually examined at a predetermined site by 10 students (5 third-year (bachelor) students and 5 fourth-year (master) students of the School of Dental Medicine, University of Bern) and 4 dentists, and scored according to ICDAS for the presence and depth of a carious lesion. Each tooth was assessed twice with the naked eye, with a Galilean loupe system (2.5x magnification), with a Keplerian loupe system (4.5x magnification) and with an operating microscope (10x magnification), with at least 24 hours between the respective examinations. Histology served as the gold standard. Statistical analysis comprised kappa coefficients for intra- and inter-examiner reliability as well as a Bayesian analysis yielding sensitivity, specificity and the area under the receiver operating characteristic curve (AUC). Results: In the examination rounds performed with dental magnification aids, the number of teeth scored as ICDAS code 0 (sound surface) decreased, while the number of code 3 scores (enamel breakdown) increased drastically at higher magnifications. With increasing magnification, more enamel and dentine caries lesions were correctly detected (higher sensitivity), but in return specificity dropped to a clinically unacceptable level. While the decrease in specificity and AUC values for enamel caries at low magnifications represented only a trend, the deterioration in dentine caries diagnosis with higher magnifications was frequently significant. For example, at the D3 diagnostic threshold the dentists' sensitivity (range) rose from 0.47 (0.17-0.79) with the naked eye to 0.91 (0.83-1.00) with the operating microscope, whereas their specificity (range) fell from 0.78 (0.58-0.95) to 0.30 (0.07-0.55). Optical aids also had a negative effect on inter-examiner reliability, whereas intra-examiner reliability remained unaffected. Personal clinical experience appears to be a major factor, both for the level of agreement in visual caries diagnosis and for the preference in assigning ICDAS codes, and hence for validity. The students achieved the best sensitivity values, whereas the dentists achieved the best specificity. Conclusion: Overall, ICDAS was not designed for the additional use of optical magnification. Because the use of dental magnification aids could lead to more and unnecessary invasive treatment decisions, their use for occlusal caries diagnosis with ICDAS is not recommended.
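For reference, the validity measures reported above are computed as follows; the counts in this sketch are invented for illustration and are not data from the study:

```python
def sensitivity(tp, fn):
    """Proportion of truly carious sites scored as carious."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Proportion of sound sites scored as sound."""
    return tn / (tn + fp)

# Hypothetical D3-level counts for one examiner: 40 dentine lesions, 60 sound sites.
print(sensitivity(tp=36, fn=4))    # 0.90: most lesions detected
print(specificity(tn=18, fp=42))   # 0.30: many sound sites would be treated unnecessarily
```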
Abstract:
Manual counting of bacterial colony forming units (CFUs) on agar plates is laborious and error-prone. We therefore implemented a colony counting system with a novel segmentation algorithm to discriminate bacterial colonies from blood and other agar plates. The colony counter hardware was designed and a novel segmentation algorithm was written in MATLAB. In brief, pre-processing with top-hat filtering to obtain a uniform background was followed by the segmentation step, during which the colony images were extracted from the blood agar and individual colonies were separated. A Bayes classifier was then applied to count the final number of bacterial colonies, as some of the colonies could still be concatenated into larger groups. To assess accuracy and performance of the colony counter, we tested automated colony counting on different agar plates with known CFU numbers of S. pneumoniae, P. aeruginosa and M. catarrhalis and showed excellent performance.
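A rough Python analogue of the pipeline described above (the original algorithm was written in MATLAB); the file name, structuring-element size and the area-based splitting of merged colonies are simplifying assumptions, not the authors' Bayes classifier:

```python
import numpy as np
from skimage import io, morphology, filters, measure

img = io.imread("plate.png", as_gray=True)                # hypothetical plate image
flat = morphology.white_tophat(img, morphology.disk(15))  # top-hat filtering: even out the background
mask = flat > filters.threshold_otsu(flat)                # segment candidate colonies from the agar
labels = measure.label(mask)
regions = measure.regionprops(labels)

# Stand-in for the classification step: estimate how many colonies a blob holds
# from its area relative to the median single-colony area.
areas = np.array([r.area for r in regions])
single = np.median(areas)
counts = np.maximum(1, np.round(areas / single)).astype(int)
print("estimated CFU:", counts.sum())
```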
Abstract:
Activities of daily living (ADL) are important for quality of life. They are indicators of cognitive health status, and their assessment is a measure of independence in everyday living. ADL are difficult to assess reliably using questionnaires due to self-reporting biases. Various sensor-based (wearable, in-home, intrusive) systems have been proposed to recognize and quantify ADL without relying on self-reporting. New classifiers for classifying such sensor data continue to emerge. We propose two ad-hoc classifiers that are based only on non-intrusive sensor data. METHODS: A wireless sensor system with ten sensor boxes was installed in the homes of ten healthy subjects to collect ambient data over 20 consecutive days. A handheld protocol device and a paper logbook were also provided to the subjects. Eight ADL were selected for recognition. We developed two ad-hoc ADL classifiers, namely the rule-based forward chaining inference engine (RBI) classifier and the circadian activity rhythm (CAR) classifier. The RBI classifier finds facts in the data and matches them against the rules. The CAR classifier works within a framework that automatically rates routine activities to detect regular repeating patterns of behavior. For comparison, two state-of-the-art classifiers [Naïve Bayes (NB), Random Forest (RF)] were also used. All classifiers were validated with the collected data sets for classification and recognition of the eight specific ADL. RESULTS: Out of a total of 1,373 ADL, the RBI classifier correctly determined 1,264 (missing 109), while the CAR classifier determined 1,305 (missing 68). The RBI and CAR classifiers recognized activities with an average sensitivity of 91.27% and 94.36%, respectively, outperforming both RF and NB. CONCLUSIONS: The performance of the classifiers varied significantly, showing that the choice of classifier plays an important role in ADL recognition. Both the RBI and the CAR classifier performed better than the existing state-of-the-art classifiers (NB, RF) on all ADL. Of the two ad-hoc classifiers, the CAR classifier was more accurate and is likely better suited than the RBI for distinguishing and recognizing complex ADL.
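For the baseline comparison mentioned above, a scikit-learn sketch is given below; the feature matrix and labels are random placeholders, since the processed ambient-sensor features are not available here:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(1373, 10))     # placeholder: one feature per sensor box
y = rng.integers(0, 8, size=1373)   # placeholder labels for the eight ADL classes

for clf in (GaussianNB(), RandomForestClassifier(n_estimators=100, random_state=0)):
    scores = cross_val_score(clf, X, y, cv=5)
    print(type(clf).__name__, round(scores.mean(), 3))
```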
Abstract:
OBJECTIVE Our aim was to assess the diagnostic and predictive value of several quantitative EEG (qEEG) analysis methods in comatose patients. METHODS In 79 patients, coupling between EEG signals along the left-right (inter-hemispheric) axis and the anterior-posterior (intra-hemispheric) axis was measured with four synchronization measures: relative delta power asymmetry, cross-correlation, symbolic mutual information and transfer entropy directionality. Results were compared with the etiology of coma and clinical outcome. Using cross-validation, the predictive value of measure combinations was assessed with a Bayes classifier with a mixture of Gaussians. RESULTS Five of eight measures showed a statistically significant difference between patients grouped according to outcome; one measure revealed differences between patients grouped according to etiology. Interestingly, a high level of synchrony between the left and right hemispheres was associated with mortality in the intensive care unit, whereas higher synchrony between anterior and posterior brain regions was associated with survival. The combination with the best predictive value reached an area under the curve of 0.875 (for patients with postanoxic encephalopathy: 0.946). CONCLUSIONS EEG synchronization measures can contribute to clinical assessment and provide new approaches for understanding the pathophysiology of coma. SIGNIFICANCE Prognostication in coma remains a challenging task; qEEG could improve current multi-modal approaches.
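One way to realise a "Bayes classifier with a mixture of Gaussians" is to fit a Gaussian mixture per outcome class and combine the class likelihoods with the class priors via Bayes' rule; the sketch below is an assumption about that construction, not the authors' code, and uses synthetic features in place of the EEG synchronization measures:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(1)
X = rng.normal(size=(79, 4))        # placeholder: four synchronization measures per patient
y = rng.integers(0, 2, size=79)     # placeholder outcome: 0 = deceased, 1 = survived

models = {c: GaussianMixture(n_components=2, random_state=0).fit(X[y == c]) for c in (0, 1)}
priors = {c: float(np.mean(y == c)) for c in (0, 1)}

def posterior_survival(x):
    """P(survival | x) from per-class mixture likelihoods and class priors."""
    joint = {c: np.exp(models[c].score_samples(x[None, :])[0]) * priors[c] for c in (0, 1)}
    return joint[1] / (joint[0] + joint[1])

print(posterior_survival(X[0]))
```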
Abstract:
Prediction of psychosis in patients at clinical high risk (CHR) has become a mainstream focus of clinical and research interest worldwide. When CHR instruments are used for clinical purposes, the predicted outcome is only a probability; consequently, any therapeutic action following the assessment is based on probabilistic prognostic reasoning. Yet probabilistic reasoning makes considerable demands on clinicians. We provide here a scholarly practical guide summarising the key concepts to support clinicians with probabilistic prognostic reasoning in the CHR state. We review risk or cumulative incidence of psychosis, person-time rate of psychosis, Kaplan-Meier estimates of psychosis risk, measures of prognostic accuracy, sensitivity and specificity in receiver operating characteristic curves, positive and negative predictive values, Bayes’ theorem, likelihood ratios, and the potentials and limits of real-life applications of prognostic probabilistic reasoning in the CHR state. Understanding the basic measures used for prognostic probabilistic reasoning is a prerequisite for successfully implementing the early detection and prevention of psychosis in clinical practice. Future refinement of these measures for CHR patients may actually influence risk management, especially as regards initiating or withholding treatment.
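As a worked example of the Bayesian updating the guide covers, the sketch below converts a pre-test probability and a likelihood ratio into a post-test probability; the numbers are illustrative, not estimates from the article:

```python
def post_test_probability(pre_test_prob, likelihood_ratio):
    """Convert probability to odds, multiply by the likelihood ratio, convert back."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)
    post_odds = pre_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# e.g. a 20% pre-test risk of transition and a positive result with LR+ = 5
print(post_test_probability(0.20, 5))   # ~0.56 post-test probability
```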
Abstract:
We report on measurements of neutrino oscillation using data from the T2K long-baseline neutrino experiment collected between 2010 and 2013. In an analysis of muon neutrino disappearance alone, we find the following estimates and 68% confidence intervals for the two possible mass hierarchies. Normal hierarchy: sin²θ₂₃ = 0.514 +0.055/−0.056 and ∆m²₃₂ = (2.51 ± 0.10) × 10⁻³ eV²/c⁴. Inverted hierarchy: sin²θ₂₃ = 0.511 ± 0.055 and ∆m²₁₃ = (2.48 ± 0.10) × 10⁻³ eV²/c⁴. The analysis accounts for multi-nucleon mechanisms in neutrino interactions, which were found to introduce negligible bias. We describe our first analyses that combine measurements of muon neutrino disappearance and electron neutrino appearance to estimate four oscillation parameters, |∆m²|, sin²θ₂₃, sin²θ₁₃, δ_CP, and the mass hierarchy. Frequentist and Bayesian intervals are presented for combinations of these parameters, with and without including recent reactor measurements. At 90% confidence level and including reactor measurements, we exclude the region δ_CP = [0.15, 0.83]π for normal hierarchy and δ_CP = [−0.08, 1.09]π for inverted hierarchy. The T2K and reactor data weakly favor the normal hierarchy with a Bayes factor of 2.2. The most probable values and 68% 1D credible intervals for the other oscillation parameters, when reactor data are included, are: sin²θ₂₃ = 0.528 +0.055/−0.038 and |∆m²₃₂| = (2.51 ± 0.11) × 10⁻³ eV²/c⁴.
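To illustrate the strength of the quoted Bayes factor, the snippet below converts it into a posterior probability for the normal hierarchy; the 50/50 prior over the two hierarchies is an assumption made only for this illustration:

```python
def posterior_from_bayes_factor(bayes_factor, prior_prob=0.5):
    """Posterior probability from posterior odds = Bayes factor x prior odds."""
    prior_odds = prior_prob / (1 - prior_prob)
    post_odds = bayes_factor * prior_odds
    return post_odds / (1 + post_odds)

print(posterior_from_bayes_factor(2.2))   # ~0.69: only a weak preference for the normal hierarchy
```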
Abstract:
Blind deconvolution is the problem of recovering a sharp image and a blur kernel from a noisy blurry image. Recently, there has been a significant effort to understand the basic mechanisms behind solving blind deconvolution. While this effort resulted in the deployment of effective algorithms, the theoretical findings generated contrasting views on why these approaches worked. On the one hand, one could observe experimentally that alternating energy minimization algorithms converge to the desired solution. On the other hand, it has been shown that such alternating minimization algorithms should fail to converge and that one should instead use a so-called Variational Bayes approach. To clarify this conundrum, recent work showed that a good image and blur prior is instead what makes a blind deconvolution algorithm work. Unfortunately, this analysis did not apply to algorithms based on total variation regularization. In this manuscript, we provide both analysis and experiments to get a clearer picture of blind deconvolution. Our analysis reveals the very reason why an algorithm based on total variation works. We also introduce an implementation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves performance comparable to the top performing algorithms.
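The sketch below is an illustrative alternating-minimization scheme for blind deconvolution with a smoothed total-variation prior, in the spirit of the algorithms analyzed above; it is not the authors' implementation, and the kernel size, step sizes and circular boundary handling are simplifications:

```python
import numpy as np
from scipy.ndimage import convolve, correlate

def grad_tv(x, eps=1e-2):
    """Gradient of smoothed isotropic total variation (circular forward differences)."""
    dx = np.roll(x, -1, axis=1) - x
    dy = np.roll(x, -1, axis=0) - x
    norm = np.sqrt(dx**2 + dy**2 + eps**2)
    px, py = dx / norm, dy / norm
    # negative divergence is the adjoint of the forward-difference operator
    return -((px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0)))

def blind_deconvolution(y, ksize=9, lam=2e-3, iters=300, step_x=0.2):
    """Alternating gradient steps on 0.5*||k*x - y||^2 + lam*TV(x) in x and k.
    Assumes y is a grayscale image with intensities roughly in [0, 1]."""
    x = y.astype(float)
    k = np.zeros((ksize, ksize))
    c = ksize // 2
    k[c, c] = 1.0                                    # start from the identity kernel
    for _ in range(iters):
        # x-step: data-term gradient (correlation with k) plus the TV gradient
        r = convolve(x, k, mode='wrap') - y
        x -= step_x * (correlate(r, k, mode='wrap') + lam * grad_tv(x))
        # k-step: gradient w.r.t. each kernel entry, then keep k nonnegative with unit sum
        r = convolve(x, k, mode='wrap') - y
        gk = np.empty_like(k)
        for i in range(ksize):
            for j in range(ksize):
                gk[i, j] = np.sum(r * np.roll(x, (i - c, j - c), axis=(0, 1)))
        k = np.maximum(k - gk / (ksize**2 * np.sum(x**2) + 1e-12), 0.0)
        k /= k.sum()
    return x, k
```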