181 results for bayesian learning
Abstract:
Scientific reporting and communication is a challenging topic for which traditional study programs do not offer structured learning activities on a regular basis. This paper reports on the development and implementation of a web application and associated learning activities that intend to raise the awareness of reporting and communication issues among students in forensic science and law. The project covers interdisciplinary case studies based on a library of written reports about forensic examinations. Special features of the web framework, in particular a report annotation tool, support the design of various individual and group learning activities that focus on the development of knowledge and competence in dealing with reporting and communication challenges in the students' future areas of professional activity.
Abstract:
The paper presents the Multiple Kernel Learning (MKL) approach as a modelling and data-exploration tool and applies it to the problem of wind speed mapping. Support Vector Regression (SVR) is used to predict spatial variations of the mean wind speed from terrain features (slopes, terrain curvature, directional derivatives) generated at different spatial scales. Multiple Kernel Learning is applied to learn kernels for individual features and thematic feature subsets, both in the context of feature selection and optimal parameter determination. An empirical study on real-life data confirms the usefulness of MKL as a tool that enhances the interpretability of data-driven models.
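As a rough illustration of the kernel-combination idea (not the paper's implementation or data), a convex combination of per-feature RBF kernels can be weighted by a simple kernel-target-alignment heuristic and fed to a precomputed-kernel SVR; the synthetic features and the weighting rule below are assumptions:

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.metrics.pairwise import rbf_kernel

rng = np.random.default_rng(0)

# Synthetic stand-ins for terrain features: slope, curvature, directional derivative.
X = rng.normal(size=(80, 3))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1] + 0.1 * rng.normal(size=80)

# One RBF kernel per feature ("thematic" subset of size one here).
kernels = [rbf_kernel(X[:, [j]], X[:, [j]]) for j in range(X.shape[1])]

# Crude stand-in for MKL weight learning: centred kernel-target alignment,
# normalised to a convex combination (real MKL solves a joint optimisation).
yc = y - y.mean()
align = np.array([yc @ K @ yc / (np.linalg.norm(K) * (yc @ yc)) for K in kernels])
weights = align / align.sum()   # RBF kernels are PSD, so align >= 0

# Fit SVR on the weighted kernel sum; the learned weights rank feature relevance.
K_combined = sum(w * K for w, K in zip(weights, kernels))
model = SVR(kernel="precomputed").fit(K_combined, y)
```

Larger weights flag the feature (sub)kernels that explain more of the target, which is the interpretability benefit the abstract refers to.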
Abstract:
Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
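The contrast the abstract draws between a single-value regression map and stochastic simulation can be sketched with a toy conditional Gaussian simulation: many equally probable realizations conditioned on a few "hard" measurements, from which an exceedance-probability map is computed. All values and the covariance model below are illustrative assumptions, not Chernobyl data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 1-D transect: a few 'hard' contamination measurements.
x_obs = np.array([0.0, 2.0, 5.0, 9.0])
z_obs = np.array([1.2, 3.5, 0.8, 2.0])   # activity in arbitrary units
x_grid = np.linspace(0, 10, 50)

def k(a, b, ell=1.5):
    """Squared-exponential covariance (a simple spatial model)."""
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / ell ** 2)

# Conditional (kriging) mean and covariance on the prediction grid.
K_oo = k(x_obs, x_obs) + 1e-6 * np.eye(len(x_obs))
K_go = k(x_grid, x_obs)
mean = K_go @ np.linalg.solve(K_oo, z_obs)
cov = k(x_grid, x_grid) - K_go @ np.linalg.solve(K_oo, K_go.T)

# Stochastic simulation: many equally probable realizations, not one map.
L = np.linalg.cholesky(cov + 1e-6 * np.eye(len(x_grid)))
sims = mean[:, None] + L @ rng.normal(size=(len(x_grid), 500))

# Probabilistic mapping: chance of exceeding a decision threshold at each location.
p_exceed = (sims > 2.5).mean(axis=1)
```

The regression answer would stop at `mean`; the simulation ensemble additionally supports the risk quantification the abstract describes.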
Abstract:
We present MBIS (Multivariate Bayesian Image Segmentation tool), a clustering tool based on the mixture of multivariate normal distributions model. MBIS supports multichannel bias field correction based on a B-spline model. A second methodological novelty is the inclusion of graph-cuts optimization for the stationary anisotropic hidden Markov random field model. Along with MBIS, we release an evaluation framework that contains three different experiments on multi-site data. We first validate the accuracy of segmentation and the estimated bias field for each channel. MBIS outperforms a widely used segmentation tool in a cross-comparison evaluation. The second experiment demonstrates the robustness of results on atlas-free segmentation of two image sets from scan-rescan protocols on 21 healthy subjects. Multivariate segmentation is more replicable than the monospectral counterpart on T1-weighted images. Finally, we provide a third experiment to illustrate how MBIS can be used in a large-scale study of tissue volume change with increasing age in 584 healthy subjects. This last result is meaningful as multivariate segmentation performs robustly without the need for prior knowledge.
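The core model MBIS builds on, a mixture of multivariate normal distributions over multichannel intensities, can be sketched with scikit-learn's `GaussianMixture` on synthetic two-channel data (this is only the mixture step; MBIS adds bias-field correction and graph-cuts MRF optimization on top):

```python
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(2)

# Synthetic two-channel "voxels" from three tissue-like clusters
# (stand-ins for e.g. T1/T2 intensities; assumed values, not MBIS data).
means = np.array([[1.0, 4.0], [4.0, 1.0], [6.0, 6.0]])
X = np.vstack([rng.normal(m, 0.4, size=(300, 2)) for m in means])

# Mixture of multivariate normals with full covariance per class.
gmm = GaussianMixture(n_components=3, covariance_type="full",
                      random_state=0).fit(X)

labels = gmm.predict(X)          # hard segmentation
resp = gmm.predict_proba(X)      # posterior responsibilities: soft segmentation
```

The posterior responsibilities are what make the segmentation probabilistic rather than a single labelling.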
Abstract:
This article extends existing discussion in the literature on probabilistic inference and decision making with respect to continuous hypotheses that are prevalent in forensic toxicology. As a main aim, this research investigates the properties of a widely followed approach for quantifying the level of toxic substances in blood samples and compares this procedure with a Bayesian probabilistic approach. As an example, attention is confined to the presence of toxic substances, such as THC, in blood from car drivers. In this context, the interpretation of results from laboratory analyses needs to take into account legal requirements for establishing the 'presence' of target substances in blood. In a first part, the performance of the proposed Bayesian model for the estimation of an unknown parameter (here, the amount of a toxic substance) is illustrated and compared with the currently used method. The model is then used in a second part to approach, in a rational way, the decision component of the problem, that is, judicial questions of the kind 'Is the quantity of THC measured in the blood over the legal threshold of 1.5 μg/l?'. This is illustrated through a practical example.
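The flavour of the Bayesian decision question can be sketched with a minimal conjugate-normal model: replicate measurements with known analytical error yield a Gaussian posterior for the true concentration, and the decision-relevant quantity is the posterior probability of exceeding the legal threshold. The measurement values, the error standard deviation, and the flat prior below are assumptions for illustration, not the paper's model:

```python
import numpy as np
from scipy import stats

# Hypothetical replicate THC measurements from one blood sample (µg/L).
y = np.array([1.7, 1.9, 1.6])
sigma = 0.2          # assumed known analytical standard deviation
threshold = 1.5      # legal limit discussed in the abstract

# Flat prior on the true concentration theta gives a Gaussian posterior:
#   theta | y ~ N(mean(y), sigma^2 / n)
post_mean = y.mean()
post_sd = sigma / np.sqrt(len(y))

# The decision-relevant quantity: Pr(theta > threshold | data).
p_over = 1.0 - stats.norm.cdf(threshold, loc=post_mean, scale=post_sd)
```

A single-value report would state only `post_mean`; the posterior exceedance probability addresses the judicial question directly.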
Abstract:
Significant progress has been made with regard to the quantitative integration of geophysical and hydrological data at the local scale. However, extending the corresponding approaches to the regional scale represents a major, and as-of-yet largely unresolved, challenge. To address this problem, we have developed a downscaling procedure based on a non-linear Bayesian sequential simulation approach. The basic objective of this algorithm is to estimate the value of the sparsely sampled hydraulic conductivity at non-sampled locations based on its relation to the electrical conductivity, which is available throughout the model space. The in situ relationship between the hydraulic and electrical conductivities is described through a non-parametric multivariate kernel density function. This method is then applied to the stochastic integration of low-resolution, regional-scale electrical resistivity tomography (ERT) data in combination with high-resolution, local-scale downhole measurements of the hydraulic and electrical conductivities. Finally, the overall viability of this downscaling approach is tested and verified by performing and comparing flow and transport simulations through the original and the downscaled hydraulic conductivity fields. Our results indicate that the proposed procedure does indeed allow for obtaining remarkably faithful estimates of the regional-scale hydraulic conductivity structure and correspondingly reliable predictions of the transport characteristics over relatively long distances.
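The non-parametric kernel-density step can be sketched as follows: fit a joint kernel density to collocated (electrical, hydraulic) conductivity data, then draw hydraulic-conductivity values from the conditional density given an electrical value. The synthetic data, the grid-based conditional sampler, and `gaussian_kde` as the density estimator are all assumptions standing in for the paper's procedure:

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Synthetic collocated borehole logs: electrical conductivity vs. log
# hydraulic conductivity, loosely correlated (assumed, not the real data).
sigma_e = rng.normal(0.0, 1.0, size=400)
log_K = 0.8 * sigma_e + rng.normal(0.0, 0.5, size=400)

# Non-parametric joint density p(sigma_e, log K).
kde = gaussian_kde(np.vstack([sigma_e, log_K]))

def sample_logK_given_sigma(sig, n=1, grid=np.linspace(-4, 4, 200)):
    """Draw log-K values from the conditional density p(log K | sigma_e)."""
    joint = kde(np.vstack([np.full(grid.size, sig), grid]))
    p = joint / joint.sum()
    return rng.choice(grid, size=n, p=p)

# At a location where only ERT gives sigma_e, simulate hydraulic conductivity.
draws = sample_logK_given_sigma(1.0, n=1000)
```

Repeating such conditional draws along a simulation path is the essence of sequential simulation: each draw honours the data-derived relationship rather than a fitted regression line.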
Abstract:
Avalanche forecasting is a complex process involving the assimilation of multiple data sources to make predictions over varying spatial and temporal resolutions. Numerically assisted forecasting often uses nearest neighbour (NN) methods, which are known to have limitations when dealing with high-dimensional data. We apply Support Vector Machines (SVMs) to a dataset from Lochaber, Scotland, to assess their applicability in avalanche forecasting. SVMs belong to a family of theoretically grounded techniques from machine learning and are designed to deal with high-dimensional data. Initial experiments showed that SVMs gave results comparable with NN for categorical and probabilistic forecasts. Experiments utilising the ability of SVMs to deal with high dimensionality in producing a spatial forecast show promise, but require further work.
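The categorical-versus-probabilistic forecast distinction can be sketched with scikit-learn's `SVC`, which returns both a class label and a class probability when `probability=True`. The toy predictors and labels below are assumptions, not the Lochaber data:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)

# Toy stand-ins for daily forecasting inputs (snowfall, wind, temperature, ...).
X = rng.normal(size=(200, 8))                   # high-dimensional predictors
y = (X[:, 0] + 0.5 * X[:, 1] > 0).astype(int)   # 1 = avalanche day (assumed rule)

# RBF-kernel SVM; probability=True enables calibrated probability estimates.
clf = SVC(kernel="rbf", probability=True, random_state=0).fit(X, y)

labels = clf.predict(X[:5])        # categorical forecast
probs = clf.predict_proba(X[:5])   # probabilistic forecast: P(class 0), P(class 1)
```

The probabilistic output is what makes SVM forecasts directly comparable with NN-based probability estimates.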
Abstract:
In the field of perception, learning is constrained by a distributed functional architecture of highly specialized cortical areas. For example, the learning capacities of patients with visual deficits from cerebral lesions - hemianopia or visual agnosia - are limited by their residual perceptual abilities. Moreover, a visual deficit linked to abnormal perception may be associated with an alteration of representations in long-term (semantic) memory. Furthermore, perception and memory traces rely on parallel processing. This has recently been demonstrated for human audition. Activation studies in normal subjects and psychophysical investigations in patients with focal hemispheric lesions have shown that auditory information relevant to sound recognition and information relevant to sound localisation are processed in parallel, anatomically distinct cortical networks, often referred to as the "What" and "Where" processing streams. Parallel processing may appear counterintuitive from the point of view of a unified perception of the auditory world, but it has advantages, such as rapidity of processing within a single stream, adaptability in perceptual learning, and ease of multisensory interactions. More generally, implicit learning mechanisms are responsible for the non-conscious acquisition of a great part of our knowledge about the world, exploiting our sensitivity to the rules and regularities structuring our environment. Implicit learning is involved in cognitive development, in the generation of emotional processing, and in the acquisition of natural language. Preserved implicit learning abilities have been shown in amnesic patients using paradigms such as serial reaction time and artificial grammar learning tasks, confirming that implicit learning mechanisms are not sustained by the cognitive processes and brain structures that are damaged in amnesia. From a clinical perspective, the assessment of implicit learning abilities in amnesic patients could be critical for building adapted neuropsychological rehabilitation programs.
Abstract:
We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques.
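The regularizer this abstract refers to penalizes embedding distance between examples that a neighbour graph marks as similar. A minimal numpy sketch of that loss term (the graph, embeddings, and weights below are toy assumptions; the paper applies it inside deep-network training):

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy hidden-layer outputs f(x_i) for 6 examples, and a symmetric 0/1
# neighbour graph W encoding which pairs should embed close together
# (the semi-supervised signal, e.g. from unlabelled-data similarity).
F = rng.normal(size=(6, 4))
W = np.zeros((6, 6))
W[0, 1] = W[1, 0] = 1.0
W[2, 3] = W[3, 2] = 1.0

# Embedding regularizer added to the supervised loss at a chosen layer:
#   L_emb = sum_ij W_ij * ||f(x_i) - f(x_j)||^2
diff = F[:, None, :] - F[None, :, :]
L_emb = (W * (diff ** 2).sum(axis=-1)).sum()
```

In training, `L_emb` is added (with a trade-off weight) to the supervised objective, either at the output layer or at every layer, which is the choice the abstract describes.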
Abstract:
BACKGROUND: An auditory perceptual learning paradigm was used to investigate whether implicit memories are formed during general anesthesia. METHODS: Eighty-seven patients who had an American Society of Anesthesiologists physical status of I-III and were scheduled to undergo an elective surgery with general anesthesia were randomly assigned to one of two groups. One group received auditory stimulation during surgery, whereas the other did not. The auditory stimulation consisted of pure tones presented via headphones. The Bispectral Index level was maintained between 40 and 50 during surgery. To assess learning, patients performed an auditory frequency discrimination task after surgery, and comparisons were made between the groups. General anesthesia was induced with thiopental and maintained with a mixture of fentanyl and sevoflurane. RESULTS: There was no difference in the amount of learning between the two groups (mean ± SD improvement: stimulated patients 9.2 ± 11.3 Hz, controls 9.4 ± 14.1 Hz). There was also no difference in initial thresholds (mean ± SD initial thresholds: stimulated patients 31.1 ± 33.4 Hz, controls 28.4 ± 34.2 Hz). These results suggest that perceptual learning was not induced during anesthesia. No correlation between the Bispectral Index and the initial level of performance was found (Pearson r = -0.09, P = 0.59). CONCLUSION: Perceptual learning was not induced by repetitive auditory stimulation during anesthesia. This result may indicate that perceptual learning requires top-down processing, which is suppressed by the anesthetic.