19 results for variational Bayes, Voronoi tessellations

in BORIS: Bern Open Repository and Information System - Bern - Switzerland


Relevance:

100.00%

Publisher:

Abstract:

Blind deconvolution is the problem of recovering a sharp image and a blur kernel from a noisy blurry image. Recently, there has been a significant effort to understand the basic mechanisms behind blind deconvolution. While this effort resulted in effective algorithms, the theoretical findings generated contrasting views on why these approaches work. On the one hand, one could observe experimentally that alternating energy minimization algorithms converge to the desired solution. On the other hand, it has been shown that such alternating minimization algorithms should fail to converge and that one should instead use a so-called Variational Bayes approach. To clarify this conundrum, recent work showed that it is instead a good image and blur prior that makes a blind deconvolution algorithm work. Unfortunately, this analysis did not apply to algorithms based on total variation regularization. In this manuscript, we provide both analysis and experiments to obtain a clearer picture of blind deconvolution. Our analysis reveals the very reason why an algorithm based on total variation works. We also introduce an implementation of this algorithm and show that, in spite of its extreme simplicity, it is very robust and achieves performance comparable to the top-performing algorithms.
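As an illustrative sketch only (not the paper's implementation), alternating minimization for blind deconvolution with a total variation prior can be written in a 1D toy form. The kernel size, step sizes, regularization weight, and flat-kernel initialization below are all assumptions:

```python
import numpy as np

def tv_subgrad(x):
    """Subgradient of the anisotropic total variation sum_i |x[i+1] - x[i]|."""
    d = np.sign(np.diff(x))
    g = np.zeros_like(x)
    g[:-1] -= d
    g[1:] += d
    return g

def objective(x, k, y, lam):
    """0.5 * ||k * x - y||^2 + lam * TV(x)."""
    r = np.convolve(x, k, mode="same") - y
    return 0.5 * r @ r + lam * np.abs(np.diff(x)).sum()

def blind_deconv_tv(y, ksize=5, iters=300, lam=0.01, lr_x=0.02, lr_k=0.001):
    """Alternate small gradient steps on the image x and the blur kernel k."""
    n, c = len(y), ksize // 2
    x = y.copy()                    # initialize the sharp image with the blurry one
    k = np.ones(ksize) / ksize      # flat initial kernel
    for _ in range(iters):
        r = np.convolve(x, k, mode="same") - y
        # image step: data-term gradient is the residual correlated with k
        gx = np.convolve(r, k[::-1], mode="same") + lam * tv_subgrad(x)
        x = x - lr_x * gx
        # kernel step: d/dk[j] of the data term is sum_i r[i] * x[i - j + c]
        xp = np.pad(x, ksize)
        gk = np.array([r @ xp[ksize + c - j: ksize + c - j + n]
                       for j in range(ksize)])
        k = np.maximum(k - lr_k * gk, 0.0)
        k /= k.sum()                # project onto nonnegative, unit-sum kernels
    return x, k
```

The projection step (clip to nonnegative, renormalize to unit sum) enforces the usual blur-kernel constraints; in practice the paper's algorithm is more elaborate than this descent sketch.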

Relevance:

20.00%

Publisher:

Abstract:

In this work we propose the adoption of a statistical framework used in the evaluation of forensic evidence as a tool for evaluating and presenting circumstantial "evidence" of a disease outbreak from syndromic surveillance. The basic idea is to exploit the predicted distributions of reported cases to calculate the ratio of the likelihood of observing n cases given an ongoing outbreak over the likelihood of observing n cases given no outbreak. This likelihood ratio defines the Value of Evidence (V). Using Bayes' rule, the prior odds for an ongoing outbreak are multiplied by V to obtain the posterior odds. This approach was applied to time series on the number of horses showing clinical respiratory symptoms or neurological symptoms. The separation between prior beliefs about the probability of an outbreak and the strength of evidence from syndromic surveillance offers a transparent reasoning process suitable for supporting decision makers. The value of evidence can be translated into a verbal statement, as is often done in forensics, or used for the production of risk maps. Furthermore, a Bayesian approach offers seamless integration of data from syndromic surveillance with results from predictive modeling and with information from other sources such as disease introduction risk assessments.
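The odds-form calculation above can be sketched numerically. The Poisson count model and the rate parameters below are illustrative assumptions, not the authors' fitted surveillance model:

```python
import math

def poisson_pmf(n, lam):
    """P(N = n) for a Poisson-distributed daily case count with mean lam."""
    return math.exp(-lam) * lam ** n / math.factorial(n)

def value_of_evidence(n, lam_outbreak, lam_baseline):
    """V = P(n cases | ongoing outbreak) / P(n cases | no outbreak)."""
    return poisson_pmf(n, lam_outbreak) / poisson_pmf(n, lam_baseline)

# Bayes' rule in odds form: posterior odds = prior odds * V
n_reported = 12                  # hypothetical daily count of symptomatic horses
V = value_of_evidence(n_reported, lam_outbreak=10.0, lam_baseline=5.0)
prior_odds = 0.01                # prior belief: an outbreak is unlikely
posterior_odds = prior_odds * V
```

With these illustrative numbers V is roughly 28, so the observed count shifts the odds substantially toward the outbreak hypothesis even under a skeptical prior.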

Relevance:

20.00%

Publisher:

Abstract:

We consider the problem of twenty questions with noisy answers, in which we seek to find a target by repeatedly choosing a set, asking an oracle whether the target lies in this set, and obtaining an answer corrupted by noise. Starting with a prior distribution on the target's location, we seek to minimize the expected entropy of the posterior distribution. We formulate this problem as a dynamic program and show that any policy optimizing the one-step expected reduction in entropy is also optimal over the full horizon. Two such Bayes optimal policies are presented: one generalizes the probabilistic bisection policy due to Horstein and the other asks a deterministic set of questions. We study the structural properties of the latter, and illustrate its use in a computer vision application.
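A single step of the noisy-answer posterior update can be sketched as follows. The discrete grid, the noise level, and the prefix-style query rule below are illustrative assumptions in the spirit of the probabilistic bisection policy, not the paper's exact policies:

```python
import numpy as np

def bayes_update(p, query_mask, answer_yes, eps):
    """Posterior over target locations after a noisy yes/no answer.

    The oracle answers truthfully with probability 1 - eps and lies with
    probability eps, so the likelihood of "yes" is 1 - eps inside the
    queried set and eps outside it.
    """
    if answer_yes:
        like = np.where(query_mask, 1.0 - eps, eps)
    else:
        like = np.where(query_mask, eps, 1.0 - eps)
    post = p * like
    return post / post.sum()

def entropy_bits(p):
    """Shannon entropy of a discrete distribution, in bits."""
    nz = p > 0
    return -np.sum(p[nz] * np.log2(p[nz]))

# bisection-style query: ask about a prefix holding about half the mass
p = np.full(4, 0.25)
query = np.arange(4) <= np.searchsorted(np.cumsum(p), 0.5)   # set {0, 1}
post = bayes_update(p, query, answer_yes=True, eps=0.1)
```

After a "yes" the mass inside the queried set grows and the posterior entropy drops, which is exactly the one-step reduction the optimal policies maximize.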

Relevance:

20.00%

Publisher:

Relevance:

10.00%

Publisher:

Abstract:

In this paper we propose a variational approach for multimodal image registration based on the diffeomorphic demons algorithm. Diffeomorphic demons has proven to be a robust and efficient method for intensity-based image registration; however, its main drawback is that it cannot deal with multiple modalities. We propose to replace the standard demons similarity metric (image intensity differences) with point-wise mutual information (PMI) in the energy function. By comparing the accuracy of our PMI-based diffeomorphic demons against the B-spline-based free-form deformation approach (FFD) on simulated deformations, we show that the proposed algorithm performs significantly better.
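The point-wise mutual information idea can be sketched on quantized intensities. The binning scheme below is an assumption, and the demons registration machinery itself is omitted entirely:

```python
import numpy as np

def pmi_image(f, m, bins=8):
    """Per-pixel PMI(a, b) = log p(a, b) / (p(a) p(b)) from a joint histogram."""
    # quantize both images into equal-width intensity bins
    fq = np.digitize(f, np.linspace(f.min(), f.max(), bins + 1)[1:-1])
    mq = np.digitize(m, np.linspace(m.min(), m.max(), bins + 1)[1:-1])
    # joint intensity histogram, normalized to a probability table
    joint = np.zeros((bins, bins))
    np.add.at(joint, (fq.ravel(), mq.ravel()), 1.0)
    joint /= joint.sum()
    outer = np.outer(joint.sum(axis=1), joint.sum(axis=0))
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log(joint / outer)
    # look up the PMI of each pixel's intensity pair
    return pmi[fq, mq]
```

Unlike a single global mutual information score, this yields one similarity value per pixel, which is what makes it usable as a drop-in replacement for the demons intensity-difference metric.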

Relevance:

10.00%

Publisher:

Abstract:

Natural methane (CH4) emissions from wet ecosystems are an important part of today's global CH4 budget. Climate affects the exchange of CH4 between ecosystems and the atmosphere by influencing CH4 production, oxidation, and transport in the soil. The net CH4 exchange depends on ecosystem hydrology and on soil and vegetation characteristics. Here, the LPJ-WHyMe global dynamical vegetation model is used to simulate global net CH4 emissions for different ecosystems: northern peatlands (45°–90° N), naturally inundated wetlands (60° S–45° N), rice agriculture and wet mineral soils. Mineral soils are a potential CH4 sink, but can also be a source, with the direction of the net exchange depending on soil moisture content. The geographical and seasonal distributions are evaluated against multi-dimensional atmospheric inversions for 2003–2005, using two independent four-dimensional variational assimilation systems. The atmospheric inversions are constrained by the atmospheric CH4 observations of the SCIAMACHY satellite instrument and global surface networks. Compared to LPJ-WHyMe, the inversions result in a significant reduction in the emissions from northern peatlands and suggest that the maximum annual emissions simulated by LPJ-WHyMe peak about one month too late. The inversions do not put strong constraints on the division of sources between inundated wetlands and wet mineral soils in the tropics. Based on the inversion results we diagnose model parameters in LPJ-WHyMe and simulate the surface exchange of CH4 over the period 1990–2008. Over the whole period we infer an increase of global ecosystem CH4 emissions of +1.11 Tg CH4 yr⁻¹, not considering potential additional changes in wetland extent. The increase in simulated CH4 emissions is attributed to enhanced soil respiration resulting from the observed rise in land temperature and in atmospheric carbon dioxide that were used as input.
The long-term decline of the atmospheric CH4 growth rate from 1990 to 2006 cannot be fully explained by the simulated ecosystem emissions. However, these emissions show an increasing trend of +3.62 Tg CH4 yr⁻¹ over 2005–2008, which can partly explain the renewed increase in atmospheric CH4 concentration during recent years.

Relevance:

10.00%

Publisher:

Abstract:

Similarity measure is one of the main factors that affect the accuracy of intensity-based 2D/3D registration of X-ray fluoroscopy to CT images. Information theory has been used to derive similarity measures for image registration, leading to the introduction of mutual information, an accurate similarity measure for multi-modal and mono-modal image registration tasks. However, it is known that the standard mutual information measure only takes intensity values into account, without considering spatial information, and its robustness is questionable. Previous attempts to incorporate spatial information into mutual information either require computing the entropy of higher-dimensional probability distributions or are not robust to outliers. In this paper, we show how to incorporate spatial information into mutual information without suffering from these problems. Using a variational approximation derived from the Kullback-Leibler bound, spatial information can be effectively incorporated into mutual information via energy minimization. The resulting similarity measure has a least-squares form and can be effectively minimized by a multi-resolution Levenberg-Marquardt optimizer. Experimental results are presented on datasets of two applications: (a) intra-operative patient pose estimation from a few (e.g. 2) calibrated fluoroscopic images, and (b) post-operative cup alignment estimation from a single X-ray radiograph with gonadal shielding.
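The standard histogram-based mutual information measure that the paper extends can be sketched as below. The toy signals and the bin count are assumptions chosen only to show that MI drops under misalignment:

```python
import numpy as np

def mutual_information(a, b, bins=2):
    """MI(A; B) = sum p(a,b) * log( p(a,b) / (p(a) p(b)) ), in nats."""
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    joint /= joint.sum()
    outer = np.outer(joint.sum(axis=1), joint.sum(axis=0))
    nz = joint > 0
    return float(np.sum(joint[nz] * np.log(joint[nz] / outer[nz])))

# MI is maximal when the two images are aligned and drops when one is shifted
fixed = np.array([0.0, 0.0, 1.0, 1.0, 0.0, 0.0, 1.0, 1.0])
moving_aligned = fixed.copy()
moving_shifted = np.roll(fixed, 1)
```

Note that this global score uses only the joint intensity histogram, which is exactly the absence of spatial information the paper's variational approximation addresses.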

Relevance:

10.00%

Publisher:

Abstract:

In variational linguistics, the concept of space has always been a central issue. However, different research traditions dealing with space long coexisted separately. Traditional dialectology focused primarily on the diatopic dimension of linguistic variation, whereas sociolinguistic studies considered diastratic and diaphasic dimensions. For a long time, only very few linguistic investigations tried to combine both research traditions in a two-dimensional design, a gap the contributions of this volume are meant to fill. The articles present findings from empirical studies which take up these different concepts and examine how they relate to one another. Besides dialectological and sociolinguistic concepts, a lay perspective on linguistic space is also considered, a paradigm often referred to as "folk dialectology". Many of the studies in this volume make use of new computational possibilities for processing and cartographically representing large corpora of linguistic data. The empirical studies incorporate findings from different linguistic communities in Europe and aim to shed light on the inter-relationship between the different concepts of space and their relevance to variational linguistics.

Relevance:

10.00%

Publisher:

Abstract:

In this paper we prove a Lions-type compactness embedding result for symmetric unbounded domains of the Heisenberg group H^n. The natural group action on H^n is provided by the unitary group U(n) × {1} and its appropriate subgroups, which will be used to construct subspaces with specific symmetry and compactness properties in the Folland-Stein horizontal Sobolev space HW^{1,2}(H^n). As an application, we study the multiplicity of solutions for a singular subelliptic problem by exploiting a technique of solving the Rubik cube applied to subgroups of U(n) × {1}. In our approach we employ concentration compactness, group-theoretical arguments, and variational methods.

Relevance:

10.00%

Publisher:

Abstract:

When tilted sideways, participants misperceive the visual vertical assessed by means of a luminous line in otherwise complete darkness. A recent modeling approach (De Vrijer et al., 2009) claimed that these typical patterns of errors (known as A- and E-effects) could be explained by assuming that participants behave in a Bayes optimal manner. In this study, we experimentally manipulate participants' prior information about body-in-space orientation and measure the effect of this manipulation on the subjective visual vertical (SVV). Specifically, we explore the effects of veridical and misleading instructions about body tilt orientations on the SVV. We used a psychophysical 2AFC SVV task at roll tilt angles of 0 degrees, 16 degrees and 4 degrees CW and CCW. Participants were tilted to 4 degrees under different instruction conditions: in one condition, participants received veridical instructions as to their tilt angle, whereas in another condition, participants received the misleading instruction that their body position was perfectly upright. Our results indicate systematic differences between the instruction conditions at 4 degrees CW and CCW. Participants did not simply use an ego-centric reference frame in the misleading condition; instead, participants' estimates of the SVV seem to lie between their head's Z-axis and the estimate of the SVV as measured in the veridical condition. All participants displayed A-effects at roll tilt angles of 16 degrees CW and CCW. We discuss our results in the context of the Bayesian model by De Vrijer et al. (2009), and claim that this pattern of results is consistent with a manipulation of the precision of a prior distribution over body-in-space orientations.
Furthermore, we introduce a Bayesian Generalized Linear Model for estimating parameters of participants’ psychometric function, which allows us to jointly estimate group level and individual level parameters under all experimental conditions simultaneously, rather than relying on the traditional two-step approach to obtaining group level parameter estimates.
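This is not the authors' Bayesian GLM, but as a minimal illustration of what fitting a 2AFC psychometric function involves, a cumulative-Gaussian model can be fit by maximum likelihood over a coarse grid. The probe angles, response counts, and grid ranges below are hypothetical:

```python
import math

def p_cw(angle, mu, sigma):
    """P("clockwise" response): cumulative Gaussian psychometric function."""
    return 0.5 * (1.0 + math.erf((angle - mu) / (sigma * math.sqrt(2.0))))

def fit_psychometric(angles, n_cw, n_trials):
    """Grid-search MLE for (mu, sigma) under a binomial likelihood."""
    best, best_ll = None, -math.inf
    for mu10 in range(-50, 51):          # mu in [-5, 5] deg, 0.1 deg steps
        for s10 in range(5, 51):         # sigma in [0.5, 5] deg
            mu, sigma = mu10 / 10.0, s10 / 10.0
            ll = 0.0
            for a, k, n in zip(angles, n_cw, n_trials):
                p = min(max(p_cw(a, mu, sigma), 1e-9), 1.0 - 1e-9)
                ll += k * math.log(p) + (n - k) * math.log(1.0 - p)
            if ll > best_ll:
                best, best_ll = (mu, sigma), ll
    return best

# hypothetical data: counts of "CW" responses out of 20 trials per probe angle
angles = [-8.0, -4.0, -2.0, 0.0, 2.0, 4.0, 8.0]
n_cw = [0, 0, 1, 6, 14, 19, 20]
mu_hat, sigma_hat = fit_psychometric(angles, n_cw, [20] * 7)
```

The fitted mu is the point of subjective equality (the SVV estimate) and sigma its precision; the hierarchical Bayesian GLM in the paper estimates such parameters jointly at group and individual level instead of per-condition as here.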

Relevance:

10.00%

Publisher:

Abstract:

Objective: This study investigated the validity and reliability of various dental magnification aids for occlusal caries detection using the International Caries Detection and Assessment System (ICDAS). Materials and methods: The occlusal surfaces of 100 extracted teeth were visually examined at a predetermined site by 10 students (5 third-year (bachelor) students and 5 fourth-year (master) students of the School of Dental Medicine, University of Bern) and 4 dentists, and scored according to ICDAS for the presence and depth of carious lesions. Each tooth was assessed twice with the naked eye, with a Galilean loupe system (2.5x magnification), with a Keplerian loupe system (4.5x magnification), and with an operating microscope (10x magnification), with at least 24 hours between the respective examinations. Histology served as the gold standard. Statistical analysis comprised kappa coefficients for intra- and inter-examiner reliability, as well as a Bayesian analysis yielding sensitivity, specificity, and the area under the Receiver Operating Characteristic curve (AUC). Results: In the examination runs performed with dental magnification aids, the number of teeth scored as ICDAS code 0 (sound surface) decreased, while the frequency of code 3 (enamel breakdown) increased drastically at higher magnifications. With increasing magnification, more enamel and dentin caries lesions were correctly identified (better sensitivity), but in return specificity dropped to a clinically unacceptable level.
While the decline in specificity and AUC values for enamel caries at low magnifications was only a trend, the deterioration in diagnostic performance for dentin caries at higher magnifications was often significant. For example, the dentists' sensitivity values (range) at the D3 diagnostic level rose from 0.47 (0.17-0.79) with the naked eye to 0.91 (0.83-1.00) with the operating microscope, while their specificity values (range) fell from 0.78 (0.58-0.95) to 0.30 (0.07-0.55). Optical aids likewise had a negative effect on inter-examiner reliability, whereas intra-examiner reliability remained unaffected. Personal clinical experience appears to be a major factor, both in the degree of agreement in visual caries detection and in the preference for assigning particular ICDAS codes, and hence in validity. The students achieved the best sensitivity values, whereas the dentists achieved the best specificity. Conclusion: Overall, ICDAS was not designed for use with additional optical magnification. Since the use of dental magnification aids could lead to more, and unnecessary, invasive treatment decisions, their use for occlusal caries detection with ICDAS is not recommended.
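The sensitivity/specificity trade-off reported here follows directly from confusion-matrix counts. The counts below are hypothetical numbers chosen only to reproduce the reported rates for the dentists at the D3 level:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# naked eye vs. operating microscope (hypothetical counts per 100 teeth each)
sens_eye, spec_eye = sensitivity_specificity(tp=47, fn=53, tn=78, fp=22)
sens_mic, spec_mic = sensitivity_specificity(tp=91, fn=9, tn=30, fp=70)
```

Magnification converts false negatives into true positives but also converts true negatives into false positives, which is why sensitivity rises while specificity collapses.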

Relevance:

10.00%

Publisher:

Abstract:

Manual counting of bacterial colony forming units (CFUs) on agar plates is laborious and error-prone. We therefore implemented a colony counting system with a novel segmentation algorithm to discriminate bacterial colonies from blood and other agar plates. A colony counter hardware was designed and a novel segmentation algorithm was written in MATLAB. In brief, pre-processing with top-hat filtering to obtain a uniform background was followed by the segmentation step, during which the colony images were extracted from the blood agar and individual colonies were separated. A Bayes classifier was then applied to count the final number of bacterial colonies, as some of the colonies could still be concatenated into larger groups. To assess the accuracy and performance of the colony counter, we tested automated colony counting on different agar plates with known CFU numbers of S. pneumoniae, P. aeruginosa and M. catarrhalis and showed excellent performance.
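A minimal sketch of the counting pipeline (top-hat background flattening, thresholding, connected-component labeling), written here in Python with SciPy rather than MATLAB. The synthetic plate, threshold, and structuring-element size are assumptions, and the Bayes-classifier step for splitting merged colonies is omitted:

```python
import numpy as np
from scipy import ndimage

def count_colonies(img, se_size=5, thresh=5.0):
    """Top-hat filter to flatten the background, then threshold and label blobs."""
    flat = ndimage.white_tophat(img, size=se_size)   # removes slow background variation
    mask = flat > thresh                             # keep only bright, small features
    _, n_colonies = ndimage.label(mask)              # count connected components
    return n_colonies

# synthetic "plate": smooth illumination gradient plus two small bright colonies
plate = np.linspace(0.0, 1.0, 400).reshape(20, 20)
plate[3:6, 3:6] += 10.0
plate[12:15, 10:13] += 10.0
```

The top-hat step is what makes a single global threshold workable despite uneven plate illumination; touching colonies would still be merged into one component, which is where the paper's Bayes classifier comes in.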

Relevance:

10.00%

Publisher:

Abstract:

Efforts are ongoing to decrease the noise in the GRACE gravity field models and hence to arrive closer to the GRACE baseline. Among the most significant error sources are untreated errors in the observation data and imperfections in the background models. A recent study (Bandikova & Flury, 2014) revealed that the current release of the star camera attitude data (SCA1B RL02) contains noise systematically higher than expected, by about a factor of 3-4. This is due to an incorrect implementation of the algorithms for quaternion combination in the JPL processing routines. Generating improved SCA data requires that valid data from both star camera heads be available, which is not always the case because the Sun and Moon at times blind one camera. In gravity field modeling, the attitude data are needed for the KBR antenna offset correction and to orient the non-gravitational linear accelerations sensed by the accelerometer. Hence any improvement in the SCA data is expected to be reflected in the gravity field models. In order to quantify the effect on the gravity field, we processed one month of observation data using two different approaches: the celestial mechanics approach (AIUB) and the variational equations approach (ITSG). We show that the noise in the KBR observations and the linear accelerations has effectively decreased. However, the effect on the gravity field on a global scale is hardly evident. We conclude that, at the current level of accuracy, the errors seen in the temporal gravity fields are dominated by errors coming from sources other than the attitude data.

Relevance:

10.00%

Publisher:

Abstract:

Activities of daily living (ADL) are important for quality of life. They are indicators of cognitive health status, and their assessment is a measure of independence in everyday living. ADL are difficult to assess reliably using questionnaires due to self-reporting biases. Various sensor-based (wearable, in-home, intrusive) systems have been proposed to successfully recognize and quantify ADL without relying on self-reporting. New classifiers for such sensor data are on the rise. We propose two ad-hoc classifiers that are based only on non-intrusive sensor data. METHODS: A wireless sensor system with ten sensor boxes was installed in the homes of ten healthy subjects to collect ambient data over 20 consecutive days. A handheld protocol device and a paper logbook were also provided to the subjects. Eight ADL were selected for recognition. We developed two ad-hoc ADL classifiers, namely the rule-based forward chaining inference engine (RBI) classifier and the circadian activity rhythm (CAR) classifier. The RBI classifier finds facts in the data and matches them against the rules. The CAR classifier works within a framework that automatically rates routine activities to detect regularly repeating patterns of behavior. For comparison, two state-of-the-art classifiers [Naïve Bayes (NB), Random Forest (RF)] were also used. All classifiers were validated on the collected data sets for classification and recognition of the eight specific ADL. RESULTS: Out of a total of 1,373 ADL, the RBI classifier correctly determined 1,264 while missing 109, and the CAR classifier correctly determined 1,305 while missing 68. The RBI and CAR classifiers recognized activities with an average sensitivity of 91.27% and 94.36%, respectively, outperforming both RF and NB. CONCLUSIONS: The performance of the classifiers varied significantly and shows that the classifier plays an important role in ADL recognition.
Both the RBI and CAR classifiers performed better than the existing state-of-the-art (NB, RF) on all ADL. Of the two ad-hoc classifiers, the CAR classifier was more accurate and is likely to be better suited than the RBI for distinguishing and recognizing complex ADL.
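The rule-based forward-chaining idea (matching facts against rules and firing conclusions until a fixpoint) can be sketched as below. The sensor facts and rules are invented examples, not the study's actual rule base:

```python
def forward_chain(initial_facts, rules):
    """Fire every rule whose conditions hold, adding conclusions until stable.

    rules: list of (set_of_condition_facts, conclusion_fact) pairs.
    """
    facts = set(initial_facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in rules:
            if conditions <= facts and conclusion not in facts:
                facts.add(conclusion)      # rule fires: conclusion becomes a fact
                changed = True
    return facts

# hypothetical sensor-derived facts and ADL rules
rules = [
    ({"kitchen_motion", "stove_on"}, "cooking"),
    ({"cooking", "cupboard_opened"}, "preparing_meal"),
]
derived = forward_chain({"kitchen_motion", "stove_on", "cupboard_opened"}, rules)
```

Chaining matters here: "preparing_meal" only becomes derivable after "cooking" has itself been inferred in an earlier pass.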