923 results for Bayes' theorem


Relevance:

10.00%

Publisher:

Abstract:

Functional Magnetic Resonance Imaging (fMRI) is a non-invasive technique commonly used to quantify changes in blood oxygenation and flow coupled to neuronal activation. One of the primary goals of fMRI studies is to identify localized brain regions where neuronal activation levels vary between groups. Single-voxel t-tests have commonly been used to determine whether activation related to the protocol differs across groups. Due to the generally limited number of subjects within each study, accurate estimation of variance at each voxel is difficult. Thus, combining information across voxels in the statistical analysis of fMRI data is desirable in order to improve efficiency. Here we construct a hierarchical model and apply an Empirical Bayes framework to the analysis of group fMRI data, employing techniques used in high-throughput genomic studies. The key idea is to shrink residual variances by combining information across voxels, and subsequently to construct an improved test statistic in lieu of the classical t-statistic. This hierarchical model results in a shrinkage of voxel-wise residual sample variances towards a common value. The shrunken estimator for voxel-specific variance components in the group analyses outperforms the classical residual error estimator in terms of mean squared error. Moreover, the shrunken test statistic decreases the false-positive rate when testing differences in brain contrast maps across a wide range of simulation studies. This methodology was also applied to experimental data from a cognitive activation task.
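The shrinkage idea can be sketched as follows. This is a minimal illustration assuming a simple conjugate setup with hypothetical prior parameters s2_0 (common variance) and d0 (prior degrees of freedom), in the spirit of moderated statistics from genomics; it is not the authors' exact estimator:

```python
import math

def shrink_variance(s2, d, s2_0, d0):
    """Shrink a voxel's residual sample variance s2 (d degrees of freedom)
    toward the common prior value s2_0 (d0 prior degrees of freedom)."""
    return (d0 * s2_0 + d * s2) / (d0 + d)

def moderated_t(mean_diff, s2, d, n1, n2, s2_0, d0):
    """Moderated t-statistic: the classical two-sample t with the
    shrunken variance in place of the raw sample variance."""
    s2_tilde = shrink_variance(s2, d, s2_0, d0)
    return mean_diff / math.sqrt(s2_tilde * (1.0 / n1 + 1.0 / n2))

# A voxel with an unusually small sample variance is pulled up toward
# the common value, preventing an inflated t-statistic.
print(shrink_variance(0.2, 10, 1.0, 4))  # between 0.2 and 1.0
```

Setting d0 = 0 recovers the classical estimator, so the prior degrees of freedom directly control how strongly information is pooled across voxels.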


BACKGROUND: It is unclear whether aggressive phototherapy to prevent neurotoxic effects of bilirubin benefits or harms infants with extremely low birth weight (1000 g or less). METHODS: We randomly assigned 1974 infants with extremely low birth weight at 12 to 36 hours of age to undergo either aggressive or conservative phototherapy. The primary outcome was a composite of death or neurodevelopmental impairment determined for 91% of the infants by investigators who were unaware of the treatment assignments. RESULTS: Aggressive phototherapy, as compared with conservative phototherapy, significantly reduced the mean peak serum bilirubin level (7.0 vs. 9.8 mg per deciliter [120 vs. 168 micromol per liter], P<0.01) but not the rate of the primary outcome (52% vs. 55%; relative risk, 0.94; 95% confidence interval [CI], 0.87 to 1.02; P=0.15). Aggressive phototherapy did reduce rates of neurodevelopmental impairment (26%, vs. 30% for conservative phototherapy; relative risk, 0.86; 95% CI, 0.74 to 0.99). Rates of death in the aggressive-phototherapy and conservative-phototherapy groups were 24% and 23%, respectively (relative risk, 1.05; 95% CI, 0.90 to 1.22). In preplanned subgroup analyses, the rates of death were 13% with aggressive phototherapy and 14% with conservative phototherapy for infants with a birth weight of 751 to 1000 g and 39% and 34%, respectively (relative risk, 1.13; 95% CI, 0.96 to 1.34), for infants with a birth weight of 501 to 750 g. CONCLUSIONS: Aggressive phototherapy did not significantly reduce the rate of death or neurodevelopmental impairment. The rate of neurodevelopmental impairment alone was significantly reduced with aggressive phototherapy. This reduction may be offset by an increase in mortality among infants weighing 501 to 750 g at birth. (ClinicalTrials.gov number, NCT00114543.)
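For readers unfamiliar with the reported effect measures, a relative risk and its 95% confidence interval can be computed from 2x2 counts as below. The counts are hypothetical and are not taken from the trial:

```python
import math

def relative_risk(a, n1, b, n2):
    """Relative risk of an event (a/n1 events in one group, b/n2 in the
    other) with a 95% CI from the standard error of log(RR)."""
    rr = (a / n1) / (b / n2)
    se_log_rr = math.sqrt(1 / a - 1 / n1 + 1 / b - 1 / n2)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

# Hypothetical counts: 234 impaired of 900 vs 270 impaired of 900.
rr, lo, hi = relative_risk(234, 900, 270, 900)
print(f"RR = {rr:.2f}, 95% CI {lo:.2f} to {hi:.2f}")
```

A CI that excludes 1.0, as for the neurodevelopmental-impairment outcome above, corresponds to a statistically significant relative risk.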


When tilted sideways, participants misperceive the visual vertical assessed by means of a luminous line in otherwise complete darkness. A recent modeling approach (De Vrijer et al., 2009) claimed that these typical patterns of errors (known as A- and E-effects) could be explained by assuming that participants behave in a Bayes-optimal manner. In this study, we experimentally manipulate participants' prior information about body-in-space orientation and measure the effect of this manipulation on the subjective visual vertical (SVV). Specifically, we explore the effects of veridical and misleading instructions about body tilt orientations on the SVV. We used a psychophysical 2AFC SVV task at roll tilt angles of 0 degrees, 16 degrees and 4 degrees CW and CCW. Participants were tilted to 4 degrees under different instruction conditions: in one condition, participants received veridical instructions as to their tilt angle, whereas in another condition, participants received the misleading instruction that their body position was perfectly upright. Our results indicate systematic differences between the instruction conditions at 4 degrees CW and CCW. Participants did not simply use an ego-centric reference frame in the misleading condition; instead, participants' estimates of the SVV seem to lie between their head's Z-axis and the estimate of the SVV as measured in the veridical condition. All participants displayed A-effects at roll tilt angles of 16 degrees CW and CCW. We discuss our results in the context of the Bayesian model by De Vrijer et al. (2009), and claim that this pattern of results is consistent with a manipulation of the precision of a prior distribution over body-in-space orientations.
Furthermore, we introduce a Bayesian Generalized Linear Model for estimating parameters of participants’ psychometric function, which allows us to jointly estimate group level and individual level parameters under all experimental conditions simultaneously, rather than relying on the traditional two-step approach to obtaining group level parameter estimates.
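The Bayesian account can be illustrated with a Gaussian prior and likelihood, where the posterior estimate of body tilt is a precision-weighted average. The numbers are hypothetical and this toy computation is not the De Vrijer et al. model itself:

```python
def posterior_tilt(prior_mean, prior_sd, obs_tilt, obs_sd):
    """Posterior mean of body-in-space tilt for a Gaussian prior
    (e.g. centered on upright when instructions say 'upright')
    combined with a Gaussian sensory observation."""
    w_prior = 1.0 / prior_sd ** 2  # precision of the prior
    w_obs = 1.0 / obs_sd ** 2      # precision of the observation
    return (w_prior * prior_mean + w_obs * obs_tilt) / (w_prior + w_obs)

# Misleading 'you are upright' instruction ~ a sharper prior at 0 degrees:
# the tilt estimate (and hence the SVV) is pulled toward upright.
print(posterior_tilt(0.0, 2.0, 4.0, 3.0))   # sharp prior: strongly pulled to 0
print(posterior_tilt(0.0, 10.0, 4.0, 3.0))  # diffuse prior: close to the 4-degree observation
```

Sharpening the prior (smaller prior_sd) moves the estimate further from the true 4-degree tilt, which matches the interpretation that the instruction manipulates the precision of the prior over body-in-space orientations.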


Objective: This study investigated the validity and reliability of various dental magnification aids for occlusal caries diagnosis using the International Caries Detection and Assessment System (ICDAS). Materials and methods: The occlusal surfaces of 100 extracted teeth were visually examined at a predetermined site by 10 students (5 third-year (bachelor) students and 5 fourth-year (master) students of the School of Dental Medicine, University of Bern) and 4 dentists, and scored according to ICDAS for the presence and depth of carious lesions. Each tooth was assessed twice with the naked eye, with a Galilean loupe system (2.5x magnification), with a Keplerian loupe system (4.5x magnification), and with an operating microscope (10x magnification), with at least 24 hours between the respective examinations. Histology served as the gold standard. Statistical analysis comprised kappa coefficients for intra- and inter-examiner reliability as well as a Bayesian analysis yielding sensitivity, specificity and the area under the receiver operating characteristic curve (AUC). Results: In the examination runs performed with dental magnification aids, the number of teeth scored as ICDAS code 0 (sound tooth surface) decreased, while the frequency of code 3 (enamel breakdown) increased drastically at higher magnifications. With increasing magnification, more enamel as well as dentin caries lesions were correctly identified (better sensitivity), but in return specificity dropped to a clinically unacceptable level.
Whereas the decline in specificity and AUC values for enamel caries at low magnifications was only a trend, the deterioration in the diagnosis of dentin caries at higher magnifications was often significant. For the dentists, for example, sensitivity values (range) at the D3 diagnostic threshold rose from 0.47 (0.17-0.79) with the naked eye to 0.91 (0.83-1.00) with the operating microscope, while specificity values (range) fell from 0.78 (0.58-0.95) to 0.30 (0.07-0.55). Optical aids also had a negative effect on inter-examiner reliability, whereas intra-examiner reliability remained unaffected. Personal clinical experience appears to be a substantial factor, both for the degree of agreement in visual caries diagnosis and for the preference in assigning ICDAS codes, and hence for the validity values. The students achieved the best sensitivity values, whereas the dentists did so for specificity. Conclusion: Overall, ICDAS was not designed for use with additional optical magnification. Since the use of dental magnification aids could lead to more, and unnecessary, invasive treatment decisions, their use for occlusal caries diagnosis with ICDAS is not recommended.
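The validity and reliability measures reported above can be computed from simple counts. This is a generic illustration with hypothetical counts, not data from the study:

```python
def sens_spec(tp, fn, tn, fp):
    """Sensitivity and specificity against the histological gold standard."""
    return tp / (tp + fn), tn / (tn + fp)

def cohen_kappa(a, b, c, d):
    """Cohen's kappa for two examiners rating caries present/absent:
    a, d = counts of agreement, b, c = counts of disagreement."""
    n = a + b + c + d
    p_obs = (a + d) / n
    # Chance agreement from the marginal rating frequencies.
    p_exp = ((a + b) / n) * ((a + c) / n) + ((c + d) / n) * ((b + d) / n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical counts at the dentin (D3) threshold under high magnification:
print(sens_spec(tp=41, fn=9, tn=30, fp=20))  # high sensitivity, low specificity
print(cohen_kappa(40, 10, 8, 42))
```

The trade-off in the study (sensitivity rising while specificity collapses at higher magnification) corresponds exactly to tp growing at the expense of fp in this computation.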


Manual counting of bacterial colony forming units (CFUs) on agar plates is laborious and error-prone. We therefore implemented a colony counting system with a novel segmentation algorithm to discriminate bacterial colonies from blood and other agar plates. A colony counter hardware was designed and a novel segmentation algorithm was written in MATLAB. In brief, pre-processing with top-hat filtering to obtain a uniform background was followed by the segmentation step, during which the colony images were extracted from the blood agar and individual colonies were separated. A Bayes classifier was then applied to count the final number of bacterial colonies, as some of the colonies could still be concatenated to form larger groups. To assess accuracy and performance of the colony counter, we tested automated colony counting of different agar plates with known CFU numbers of S. pneumoniae, P. aeruginosa and M. catarrhalis and showed excellent performance.
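The final Bayes-classification step can be sketched as follows: each segmented blob is assigned the most probable colony count given its area, assuming (hypothetically) Gaussian class-conditional area distributions estimated from training plates. The original implementation is in MATLAB, so this Python version, with invented parameters, is only illustrative:

```python
import math

def gaussian_pdf(x, mean, sd):
    """Density of a univariate normal distribution."""
    return math.exp(-0.5 * ((x - mean) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def count_colonies(blob_area, classes):
    """Bayes classifier: pick the colony count k maximizing
    prior(k) * p(area | k) over the candidate classes."""
    best_k, best_post = None, -1.0
    for k, (prior, mean, sd) in classes.items():
        post = prior * gaussian_pdf(blob_area, mean, sd)
        if post > best_post:
            best_k, best_post = k, post
    return best_k

# Hypothetical class parameters: (prior, mean area, sd) per colony count.
classes = {1: (0.7, 100.0, 20.0), 2: (0.2, 200.0, 35.0), 3: (0.1, 300.0, 50.0)}
total = sum(count_colonies(area, classes) for area in [95.0, 210.0, 110.0])
print(total)  # total CFU estimate over three blobs
```

Summing the per-blob counts yields the plate's CFU estimate even when touching colonies merge into a single segmented region.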


Activities of daily living (ADL) are important for quality of life. They are indicators of cognitive health status and their assessment is a measure of independence in everyday living. ADL are difficult to assess reliably using questionnaires due to self-reporting biases. Various sensor-based (wearable, in-home, intrusive) systems have been proposed to successfully recognize and quantify ADL without relying on self-reporting. New classifiers for such sensor data continue to emerge. We propose two ad-hoc classifiers that are based only on non-intrusive sensor data. METHODS: A wireless sensor system with ten sensor boxes was installed in the homes of ten healthy subjects to collect ambient data over a duration of 20 consecutive days. A handheld protocol device and a paper logbook were also provided to the subjects. Eight ADL were selected for recognition. We developed two ad-hoc ADL classifiers, namely the rule-based forward chaining inference engine (RBI) classifier and the circadian activity rhythm (CAR) classifier. The RBI classifier finds facts in the data and matches them against the rules. The CAR classifier works within a framework to automatically rate routine activities and detect regular repeating patterns of behavior. For comparison, two state-of-the-art classifiers [Naïve Bayes (NB), Random Forest (RF)] were also used. All classifiers were validated with the collected data sets for classification and recognition of the eight specific ADL. RESULTS: Out of a total of 1,373 ADL, the RBI classifier correctly determined 1,264 while missing 109, and the CAR classifier correctly determined 1,305 while missing 68. The RBI and CAR classifiers recognized activities with an average sensitivity of 91.27% and 94.36%, respectively, outperforming both RF and NB. CONCLUSIONS: The performance of the classifiers varied significantly, which shows that the classifier plays an important role in ADL recognition.
Both the RBI and CAR classifiers performed better than the state-of-the-art classifiers (NB, RF) on all ADL. Of the two ad-hoc classifiers, the CAR classifier was more accurate and is likely to be better suited than the RBI for distinguishing and recognizing complex ADL.
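The rule-based forward-chaining idea behind the RBI classifier can be sketched as a tiny inference engine. The sensor facts and rules below are invented examples for illustration, not the study's actual rule base:

```python
def forward_chain(facts, rules):
    """Forward-chaining inference: repeatedly fire any rule whose
    conditions are all satisfied by the current facts, adding its
    conclusion, until no new facts can be derived (a fixpoint)."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conclusion not in facts and set(conditions) <= facts:
                facts.add(conclusion)
                changed = True
    return facts

# Invented sensor facts and rules: chained rules derive an ADL label.
rules = [
    ({"kitchen_motion", "stove_power"}, "cooking"),
    ({"cooking", "water_tap"}, "preparing_meal"),
]
derived = forward_chain({"kitchen_motion", "stove_power", "water_tap"}, rules)
print("preparing_meal" in derived)  # True
```

Because conclusions can feed later rules, the engine can infer composite activities from low-level ambient sensor events, which is the essence of the RBI approach.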


OBJECTIVE Our aim was to assess the diagnostic and predictive value of several quantitative EEG (qEEG) analysis methods in comatose patients. METHODS In 79 patients, coupling between EEG signals on the left-right (inter-hemispheric) axis and on the anterior-posterior (intra-hemispheric) axis was measured with four synchronization measures: relative delta power asymmetry, cross-correlation, symbolic mutual information and transfer entropy directionality. Results were compared with the etiology of coma and clinical outcome. Using cross-validation, the predictive value of measure combinations was assessed with a Bayes classifier with a mixture of Gaussians. RESULTS Five of eight measures showed a statistically significant difference between patients grouped according to outcome; one measure revealed differences between patients grouped according to etiology. Interestingly, a high level of synchrony between the left and right hemispheres was associated with mortality in the intensive care unit, whereas higher synchrony between anterior and posterior brain regions was associated with survival. The combination with the best predictive value reached an area under the curve of 0.875 (for patients with postanoxic encephalopathy: 0.946). CONCLUSIONS EEG synchronization measures can contribute to clinical assessment and provide new approaches for understanding the pathophysiology of coma. SIGNIFICANCE Prognostication in coma remains a challenging task. qEEG could improve current multi-modal approaches.
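As an illustration of one family of synchronization measures, the normalized zero-lag cross-correlation between two EEG channels can be computed as below. This is a generic sketch of the measure, not the paper's exact pipeline:

```python
import math

def cross_correlation(x, y):
    """Normalized zero-lag cross-correlation (Pearson r) between two
    equally long signals, e.g. a left and a right EEG channel."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Two perfectly coupled channels give r = 1; an inverted copy gives r = -1.
left = [0.1, 0.4, -0.2, 0.3, -0.1]
print(round(cross_correlation(left, [2 * v for v in left]), 6))   # 1.0
print(round(cross_correlation(left, [-v for v in left]), 6))      # -1.0
```

Values near the extremes indicate strong inter-channel coupling, which is the kind of left-right synchrony the study found to be associated with outcome.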


Prediction of psychosis in patients at clinical high risk (CHR) has become a mainstream focus of clinical and research interest worldwide. When using CHR instruments for clinical purposes, the predicted outcome is only a probability; consequently, any therapeutic action following the assessment is based on probabilistic prognostic reasoning. Yet probabilistic reasoning makes considerable demands on clinicians. We provide here a scholarly practical guide summarising the key concepts to support clinicians with probabilistic prognostic reasoning in the CHR state. We review risk or cumulative incidence of psychosis, person-time rate of psychosis, Kaplan-Meier estimates of psychosis risk, measures of prognostic accuracy, sensitivity and specificity in receiver operating characteristic curves, positive and negative predictive values, Bayes' theorem, likelihood ratios, and the potentials and limits of real-life applications of prognostic probabilistic reasoning in the CHR state. Understanding the basic measures used for prognostic probabilistic reasoning is a prerequisite for successfully implementing the early detection and prevention of psychosis in clinical practice. Future refinement of these measures for CHR patients may actually influence risk management, especially as regards initiating or withholding treatment.
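The role of Bayes' theorem in prognostic reasoning can be made concrete: the positive predictive value follows from the pre-test risk together with the instrument's sensitivity and specificity. The numbers below are illustrative only, not CHR estimates from the literature:

```python
def positive_predictive_value(prior, sensitivity, specificity):
    """Bayes' theorem: P(psychosis | positive test) from the pre-test
    risk and the test's sensitivity and specificity."""
    p_positive = sensitivity * prior + (1 - specificity) * (1 - prior)
    return sensitivity * prior / p_positive

def likelihood_ratio_positive(sensitivity, specificity):
    """LR+ = sensitivity / (1 - specificity)."""
    return sensitivity / (1 - specificity)

# Illustrative values: 20% pre-test risk, 80% sensitivity, 70% specificity.
print(round(positive_predictive_value(0.20, 0.80, 0.70), 3))  # 0.4
print(round(likelihood_ratio_positive(0.80, 0.70), 2))        # 2.67
```

Even a test with decent sensitivity and specificity yields a modest PPV when the pre-test risk is low, which is exactly why clinicians need the probabilistic framing discussed above.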


The genomic era brought about by recent advances in next-generation sequencing technology makes genome-wide scans for natural selection a reality. Currently, almost all statistical tests and analytical methods for identifying genes under selection operate on an individual-gene basis. Although these methods have the power to identify genes subject to strong selection, they have limited power to discover genes targeted by moderate or weak selective forces, which are crucial for understanding the molecular mechanisms of complex phenotypes and diseases. The recent availability and rapidly increasing completeness of many gene-network and protein-protein interaction databases open avenues for enhancing the power to discover genes under natural selection. The aim of this thesis is to explore and develop normal-mixture-model-based methods that leverage gene network information to enhance the power of natural-selection target gene discovery. The results show that the developed statistical method, which combines the posterior log odds of the standard normal mixture model with the Guilt-By-Association score of the gene network in a naïve Bayes framework, has the power to discover genes under moderate or weak selection that bridge the genes under strong selection, helping our understanding of the biology underlying complex diseases and related naturally selected phenotypes.
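The combination step can be sketched as follows: under a naïve Bayes (conditional independence) assumption, the log odds from the mixture model and the log likelihood ratio derived from the Guilt-By-Association score simply add. How the GBA score is converted to a likelihood ratio is hypothetical here, not the thesis's calibration:

```python
import math

def combined_log_odds(mixture_log_odds, gba_log_lr):
    """Naive Bayes combination: independent evidence sources contribute
    additively on the log-odds scale."""
    return mixture_log_odds + gba_log_lr

def log_odds_to_prob(log_odds):
    """Convert log odds back to a posterior probability (logistic)."""
    return 1.0 / (1.0 + math.exp(-log_odds))

# A gene with weak individual evidence of selection (log odds -1) but
# strongly selected network neighbors (log LR +2) ends up above even odds.
combined = combined_log_odds(-1.0, 2.0)
print(round(log_odds_to_prob(combined), 3))  # 0.731
```

This additive structure is what lets network evidence rescue genes whose individual selection signal is too weak to pass a genome-wide threshold.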


Using the example of the discovery of the Pythagorean theorem, this article aims to show how the social ideas of a given period, in our case those of fifth-century BC Greek society, intervene in the process of scientific development, whether by facilitating or blocking it. The case of Pythagorean mathematics is significant in this respect; its analysis remains a paradigmatic point in the history of science from the standpoint of sociology.


Source routes and spatial diffusion of capuchin monkeys over the past 6 million years, reconstructed in SPREAD 1.0.6 from the MCC tree. The map shows the 10 regions to which the distinct samples were assigned. The different transmission routes were calculated from the average rate over time. Only rates with a Bayes factor > 3 were considered significantly different from zero. Significant diffusion pathways are highlighted with colors varying from dark brown (the least significant rates) to deep red (the most significant rates).


We report on measurements of neutrino oscillation using data from the T2K long-baseline neutrino experiment collected between 2010 and 2013. In an analysis of muon neutrino disappearance alone, we find the following estimates and 68% confidence intervals for the two possible mass hierarchies:

Normal hierarchy: sin²θ₂₃ = 0.514 (+0.055/−0.056) and ∆m²_32 = (2.51 ± 0.10) × 10⁻³ eV²/c⁴
Inverted hierarchy: sin²θ₂₃ = 0.511 ± 0.055 and ∆m²_13 = (2.48 ± 0.10) × 10⁻³ eV²/c⁴

The analysis accounts for multi-nucleon mechanisms in neutrino interactions, which were found to introduce negligible bias. We describe our first analyses that combine measurements of muon neutrino disappearance and electron neutrino appearance to estimate four oscillation parameters, |∆m²|, sin²θ₂₃, sin²θ₁₃, δCP, and the mass hierarchy. Frequentist and Bayesian intervals are presented for combinations of these parameters, with and without including recent reactor measurements. At the 90% confidence level and including reactor measurements, we exclude the regions δCP = [0.15, 0.83]π for the normal hierarchy and δCP = [−0.08, 1.09]π for the inverted hierarchy. The T2K and reactor data weakly favor the normal hierarchy with a Bayes factor of 2.2. The most probable values and 68% 1D credible intervals for the other oscillation parameters, when reactor data are included, are sin²θ₂₃ = 0.528 (+0.055/−0.038) and |∆m²_32| = (2.51 ± 0.11) × 10⁻³ eV²/c⁴.


This paper estimates the impact of industrial agglomeration on firm-level productivity in Chinese manufacturing sectors. To account for spatial autocorrelation across regions, we formulate a hierarchical spatial model at the firm level and develop a Bayesian estimation algorithm. A Bayesian instrumental-variables approach is used to address the endogeneity bias of agglomeration. Robust to these potential biases, we find that agglomeration of the same industry (i.e. localization) has a productivity-boosting effect, but agglomeration of urban population (i.e. urbanization) has no such effect. Additionally, the localization effects increase with the educational level of employees and the share of intermediate inputs in gross output. These results may suggest that agglomeration externalities occur through knowledge spillovers and input sharing among firms producing similar manufactures.
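The Bayesian flavor of the estimation can be illustrated with the simplest conjugate case: a one-predictor regression with a normal prior on the slope, whose posterior mean shrinks the least-squares estimate toward the prior. This toy example, with invented data, is not the paper's hierarchical spatial model:

```python
def posterior_slope(x, y, sigma2, prior_mean, prior_var):
    """Posterior mean of the slope in y = beta * x + noise (known noise
    variance sigma2) under a normal prior beta ~ N(prior_mean, prior_var)."""
    sxx = sum(v * v for v in x)
    sxy = sum(a * b for a, b in zip(x, y))
    precision = sxx / sigma2 + 1.0 / prior_var
    return (sxy / sigma2 + prior_mean / prior_var) / precision

# Invented data with true slope ~ 2: agglomeration index vs log productivity.
x = [1.0, 2.0, 3.0, 4.0]
y = [2.1, 3.9, 6.2, 8.0]
# A diffuse prior gives (nearly) the least-squares slope; a tight prior
# at zero shrinks the estimate strongly toward zero.
print(round(posterior_slope(x, y, 1.0, 0.0, 100.0), 3))
print(round(posterior_slope(x, y, 1.0, 0.0, 0.01), 3))
```

The full model in the paper adds hierarchical structure, spatial autocorrelation and instrumental variables on top of this basic prior-plus-likelihood mechanic.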


The study of the reliability of components and systems is of great importance in several fields of engineering, and very particularly in computer science. When analysing the lifetimes of the sampled items, one must account for the items that do not fail during the experiment, or that fail for reasons other than the one under study. New sampling schemes have arisen to cover these cases; the most general of them, censored sampling (the model of random censorship), is the one considered in this work. In this model, both the time to failure of a component and the censoring time are random variables. Under the hypothesis that both times are exponentially distributed, Professor Hurt studied the asymptotic behaviour of the maximum likelihood estimator of the reliability function. Bayesian methods seem attractive for reliability studies because they incorporate into the analysis the prior information normally available in real problems. We therefore consider two Bayes estimators of the reliability of an exponential distribution: the mean and the mode of the posterior distribution. We calculate the asymptotic expansion of the mean, variance and mean squared error of both estimators when the censoring distribution is exponential, and we also obtain the asymptotic distribution of the estimators for the more general case of a Weibull censoring distribution. Two types of large-sample confidence intervals are proposed for each estimator. The results are compared with those of the maximum likelihood estimator, and with those of two nonparametric estimators, the product-limit and a Bayesian estimator; one of our estimators shows superior behaviour.
Finally, we verify by simulation that our estimators are robust against the assumed censoring distribution, and that one of the proposed confidence intervals remains valid with small samples. This study also confirms the better behaviour of one of our estimators.
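The posterior-mean estimator of the reliability function has a closed form in the simplest conjugate setting: exponential lifetimes with failure rate lambda and a Gamma(a, b) prior. With d observed failures and total time at risk T (censored units contributing their censoring times), the posterior is Gamma(a + d, b + T), and the posterior mean of R(t) = exp(-lambda * t) follows from the gamma moment generating function. The prior parameters below are hypothetical, and this sketch ignores the randomness of the censoring times that the thesis models explicitly:

```python
def bayes_reliability(t, d, T, a, b):
    """Posterior mean of R(t) = exp(-lambda * t) for exponential lifetimes
    with a Gamma(a, b) prior on lambda, given d failures and total observed
    time T (failure plus censoring times):
        E[exp(-lambda * t) | data] = ((b + T) / (b + T + t)) ** (a + d)."""
    return ((b + T) / (b + T + t)) ** (a + d)

# Example: 8 failures over 100 hours of total observation, weak Gamma(1, 1)
# prior. The estimate lies in (0, 1) and decreases in t, as a reliability
# function must.
print(bayes_reliability(10.0, d=8, T=100.0, a=1.0, b=1.0))
```

Unlike plugging the posterior mean of lambda into exp(-lambda * t), this estimator averages the whole reliability curve over the posterior, which is the construction the thesis compares against the maximum likelihood and nonparametric estimators.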


Concepts are defined and Valverde's representation theorem is applied to write an algorithm that computes bases of similarities. This paper studies some theory and methods for building the basis of a similarity from the bases of its subsimilarities, providing an alternative recursive method to compute the basis of a similarity.