854 results for sparse Bayesian regression
Abstract:
Diffusion MRI is a well-established imaging modality providing a powerful way to probe the structure of the white matter non-invasively. Despite its potential, the intrinsically long scan times of these sequences have hampered their use in clinical practice. For this reason, a large variety of methods have recently been proposed to shorten the acquisition times. Among them, spherical deconvolution approaches have gained a lot of interest for their ability to reliably recover the intra-voxel fiber configuration with a relatively small number of data samples. To overcome the intrinsic instabilities of deconvolution, these methods use regularization schemes generally based on the assumption that the fiber orientation distribution (FOD) to be recovered in each voxel is sparse. The well-known Constrained Spherical Deconvolution (CSD) approach resorts to Tikhonov regularization, based on an ℓ2-norm prior, which promotes a weak version of sparsity. In addition, in the last few years compressed sensing has been advocated to further accelerate the acquisitions, and ℓ1-norm minimization is generally employed as a means to promote sparsity in the recovered FODs. In this paper, we provide evidence that the use of an ℓ1-norm prior to regularize this class of problems is somewhat inconsistent with the fact that the fiber compartments all sum up to unity. To overcome this ℓ1 inconsistency while simultaneously exploiting sparsity more optimally than through an ℓ2 prior, we reformulate the reconstruction problem as a constrained formulation between a data term and a sparsity prior consisting of an explicit bound on the ℓ0-norm of the FOD, i.e. on the number of fibers. The method has been tested on both synthetic and real data. Experimental results show that the proposed ℓ0 formulation significantly reduces modeling errors compared to the state-of-the-art ℓ2 and ℓ1 regularization approaches.
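As a rough illustration of the constrained formulation described in this abstract, a generic ℓ0-bounded spherical deconvolution problem can be written as follows (the symbols Φ, f, y and the bound k are assumed notation for illustration, not taken from the paper):

```latex
% Generic l0-constrained deconvolution of the fiber orientation distribution (FOD)
\hat{\mathbf{f}} \;=\; \arg\min_{\mathbf{f}\,\ge\,0}\;
  \bigl\| \boldsymbol{\Phi}\,\mathbf{f} - \mathbf{y} \bigr\|_2^2
  \quad \text{subject to} \quad \|\mathbf{f}\|_0 \le k ,
```

where y is the diffusion signal measured in a voxel, Φ is a dictionary of single-fiber response functions, f is the discretized FOD (so that the ℓ0-norm counts the fiber compartments) and k is the explicit bound on the number of fibers; the nonnegativity constraint reflects the physical meaning of the FOD.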
Abstract:
Vector Autoregressive Moving Average (VARMA) models have many theoretical properties which should make them popular among empirical macroeconomists. However, they are rarely used in practice due to over-parameterization concerns, difficulties in ensuring identification and computational challenges. With the growing interest in multivariate time series models of high dimension, these problems with VARMAs become even more acute, accounting for the dominance of VARs in this field. In this paper, we develop a Bayesian approach for inference in VARMAs which surmounts these problems. It jointly ensures identification and parsimony in the context of an efficient Markov chain Monte Carlo (MCMC) algorithm. We use this approach in a macroeconomic application involving up to twelve dependent variables. We find our algorithm to work successfully and provide insights beyond those provided by VARs.
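For readers less familiar with the model class, a generic VARMA(p, q) specification for an n-dimensional series y_t is shown below in standard textbook notation (this is not the paper's specific identified parameterisation):

```latex
% Generic VARMA(p, q) model for an n-dimensional vector y_t
y_t \;=\; \sum_{i=1}^{p} A_i\, y_{t-i} \;+\; \varepsilon_t \;+\; \sum_{j=1}^{q} M_j\, \varepsilon_{t-j},
\qquad \varepsilon_t \sim \mathcal{N}(\mathbf{0}, \Sigma),
```

where the autoregressive matrices A_i and moving-average matrices M_j are each n x n. Without further restrictions, different (A_i, M_j) combinations can generate the same likelihood, which is the identification problem that the paper's MCMC scheme tackles jointly with parsimony.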
Abstract:
Lean meat percentage (LMP) is an important carcass quality parameter. The aim of this work is to obtain a calibration equation for Computed Tomography (CT) scans with the Partial Least Squares Regression (PLS) technique in order to predict the LMP of the carcass and of the different cuts, and to study and compare two different methodologies for selecting the variables (Variable Importance for Projection, VIP, and stepwise) to be included in the prediction equation. The root mean square error of prediction with cross-validation (RMSEPCV) of the LMP obtained with PLS and selection based on the VIP value was 0.82%, and for stepwise selection it was 0.83%. The prediction of the LMP scanning only the ham had an RMSEPCV of 0.97%, and if both the ham and the loin were scanned the RMSEPCV was 0.90%. Results indicate that for CT data both VIP and stepwise selection are good methods. Moreover, scanning only the ham allowed us to obtain a good prediction of the LMP of the whole carcass.
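A minimal sketch of the PLS-plus-VIP workflow described above, using scikit-learn (the data files, number of components and the VIP > 1 cutoff are placeholders; the study's actual CT-derived predictors are not reproduced here):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

def vip_scores(pls):
    """Variable Importance for Projection (VIP) for a fitted PLSRegression."""
    W = pls.x_weights_           # (n_features, n_components)
    T = pls.x_scores_            # (n_samples, n_components)
    Q = pls.y_loadings_          # (1, n_components) for a single response
    p, _ = W.shape
    ss = np.sum(T ** 2, axis=0) * Q.ravel() ** 2       # explained sum of squares per component
    weights = (W / np.linalg.norm(W, axis=0)) ** 2     # normalised squared weights
    return np.sqrt(p * (weights @ ss) / ss.sum())

# X: CT-derived density variables per carcass, y: reference lean meat percentage (LMP)
X, y = np.load("ct_features.npy"), np.load("lmp_reference.npy")   # placeholder data files
pls = PLSRegression(n_components=10).fit(X, y)

# Keep only variables with VIP > 1 (a common rule of thumb), refit and cross-validate
selected = vip_scores(pls) > 1.0
y_cv = cross_val_predict(PLSRegression(n_components=10), X[:, selected], y, cv=10).ravel()
rmsepcv = np.sqrt(np.mean((y - y_cv) ** 2))
print(f"RMSEPCV = {rmsepcv:.2f}% LMP")
```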
Abstract:
In this paper, we forecast EU-area inflation with many predictors using time-varying parameter models. The facts that time-varying parameter models are parameter-rich and the time span of our data is relatively short motivate a desire for shrinkage. In constant coefficient regression models, the Bayesian Lasso is gaining increasing popularity as an effective tool for achieving such shrinkage. In this paper, we develop econometric methods for using the Bayesian Lasso with time-varying parameter models. Our approach allows for the coefficient on each predictor to be: i) time varying, ii) constant over time or iii) shrunk to zero. The econometric methodology decides automatically which category each coefficient belongs in. Our empirical results indicate the benefits of such an approach.
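A stylised version of the kind of time-varying parameter (TVP) regression with Lasso-type shrinkage described above is sketched below; the notation is generic and the paper's exact hierarchical priors are not reproduced here:

```latex
% TVP regression with shrinkage on both initial coefficients and state innovations
y_t = \mathbf{x}_t' \boldsymbol{\beta}_t + \varepsilon_t, \qquad
\boldsymbol{\beta}_t = \boldsymbol{\beta}_{t-1} + \boldsymbol{\eta}_t, \qquad
\varepsilon_t \sim \mathcal{N}(0, \sigma^2), \quad
\boldsymbol{\eta}_t \sim \mathcal{N}\!\bigl(\mathbf{0}, \operatorname{diag}(\omega_1^2, \dots, \omega_p^2)\bigr),
```

with Lasso-type (hierarchical Laplace) priors placed both on the initial coefficients and on the state innovation variances ω_j². Shrinking ω_j² towards zero makes coefficient j effectively constant over time, and shrinking its initial value towards zero as well effectively removes predictor j, which corresponds to the three categories (time varying, constant, or zero) described in the abstract.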
Abstract:
Bayesian model averaging (BMA) methods are regularly used to deal with model uncertainty in regression models. This paper shows how to introduce Bayesian model averaging methods in quantile regressions, allowing different predictors to affect different quantiles of the dependent variable. I show that quantile regression BMA methods can help reduce uncertainty regarding outcomes of future inflation by providing superior predictive densities compared to mean regression models with and without BMA.
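As a rough illustration of model averaging across quantile regressions, the sketch below weights all predictor subsets with BIC-style weights derived from the asymmetric-Laplace pseudo-likelihood; this is a simplified stand-in for a fully Bayesian treatment, and all data and settings are placeholders:

```python
import itertools
import numpy as np
import statsmodels.api as sm

def check_loss(u, q):
    """Quantile (check) loss of residuals u at quantile q."""
    return np.sum(u * (q - (u < 0)))

def quantile_bma(y, X, q=0.5):
    """Fit quantile regressions on all predictor subsets and weight them BIC-style."""
    n, p = X.shape
    fits, bics = [], []
    for k in range(p + 1):
        for subset in itertools.combinations(range(p), k):
            cols = list(subset)
            Xs = sm.add_constant(X[:, cols], has_constant="add") if cols else np.ones((n, 1))
            res = sm.QuantReg(y, Xs).fit(q=q)
            sigma_hat = check_loss(y - res.predict(Xs), q) / n
            # BIC from the maximised asymmetric-Laplace pseudo-likelihood (up to a constant)
            bic = 2 * n * np.log(sigma_hat) + (len(cols) + 1) * np.log(n)
            fits.append((subset, res))
            bics.append(bic)
    bics = np.asarray(bics)
    weights = np.exp(-0.5 * (bics - bics.min()))
    return fits, weights / weights.sum()   # approximate posterior model probabilities

# Toy data standing in for inflation (y) and candidate predictors (X)
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 4))
y = 0.5 * X[:, 0] + rng.standard_normal(200)
fits, w = quantile_bma(y, X, q=0.9)
print("Highest-weight predictor subset:", fits[int(np.argmax(w))][0], "weight:", round(float(w.max()), 3))
```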
Abstract:
This paper explores the effects of two main sources of innovation, intramural and external R&D, on the productivity level in a sample of 3,267 Catalonian firms. The data set used is based on the official innovation survey of Catalonia, which was part of the Spanish sample of CIS4, covering the years 2002-2004. We compare empirical results obtained by applying standard OLS and quantile regression techniques in both manufacturing and services industries. In quantile regression, results suggest different patterns for the two innovation sources as we move across conditional quantiles. The elasticity of intramural R&D activities on productivity decreases as we move up to higher productivity levels in both the manufacturing and services sectors, while the effects of external R&D rise in high-technology industries but are more ambiguous in low-technology and knowledge-intensive services. JEL codes: O300, C100, O140. Keywords: innovation sources, R&D, productivity, quantile regression.
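A short sketch of the OLS versus quantile-regression comparison described above, using the statsmodels formula interface (the file and variable names are hypothetical; the actual CIS4 survey variables are not reproduced here):

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical firm-level data in logs: productivity, intramural R&D and external R&D
df = pd.read_csv("cis4_catalonia.csv")   # placeholder file name

ols = smf.ols("log_prod ~ log_intramural_rd + log_external_rd", df).fit()
quantile_fits = {
    q: smf.quantreg("log_prod ~ log_intramural_rd + log_external_rd", df).fit(q=q)
    for q in (0.10, 0.25, 0.50, 0.75, 0.90)
}

# Elasticities (log-log coefficients) across conditional quantiles vs. OLS
coefs = pd.DataFrame({f"q={q}": fit.params for q, fit in quantile_fits.items()})
coefs["OLS"] = ols.params
print(coefs.round(3))
```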
Abstract:
Continuing developments in science and technology mean that the amount of information forensic scientists are able to provide for criminal investigations is ever increasing. The commensurate increase in complexity creates difficulties for scientists and lawyers with regard to evaluation and interpretation, notably with respect to issues of inference and decision. Probability theory, implemented through graphical methods, and specifically Bayesian networks, provides powerful methods to deal with this complexity. Extensions of these methods to elements of decision theory provide further support and assistance to the judicial system. Bayesian Networks for Probabilistic Inference and Decision Analysis in Forensic Science provides a unique and comprehensive introduction to the use of Bayesian decision networks for the evaluation and interpretation of scientific findings in forensic science, and for the support of decision-makers in their scientific and legal tasks. The book includes self-contained introductions to probability and decision theory; develops the characteristics of Bayesian networks, object-oriented Bayesian networks and their extension to decision models; features implementation of the methodology with reference to commercial and academically available software; presents standard networks and their extensions that can be easily implemented and that can assist in the reader's own analysis of real cases; provides a technique for structuring problems and organizing data based on methods and principles of scientific reasoning; contains a method for the construction of coherent and defensible arguments for the analysis and evaluation of scientific findings and for decisions based on them; and is written in a lucid style, suitable for forensic scientists and lawyers with minimal mathematical background, with a foreword by Ian Evett. The clear and accessible style of this second edition makes this book ideal for all forensic scientists, applied statisticians and graduate students wishing to evaluate forensic findings from the perspective of probability and decision analysis. It will also appeal to lawyers and other scientists and professionals interested in the evaluation and interpretation of forensic findings, including decision making based on scientific information.
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."
Abstract:
"Vegeu el resum a l'inici del document del fitxer adjunt."
Abstract:
Calomys callosus, a wild rodent, is a natural host of Trypanosoma cruzi. Twelve C. callosus were infected with 10⁵ trypomastigotes of the F strain (a myotropic strain) of T. cruzi. Parasitemia decreased on the 21st day, becoming negative around the 40th day of infection. All animals survived but had positive parasitological tests until the end of the experiment. The infected animals developed severe inflammation in the myocardium and skeletal muscle. This process was pronounced from the 26th to the 30th day and gradually subsided from the 50th day, becoming absent or residual on the 64th day after infection. Collagen was identified by the picrosirius red method. Fibrogenesis developed early, but regression of fibrosis occurred between the 50th and 64th days. Ultrastructural study disclosed a predominance of macrophages and fibroblasts in the inflammatory infiltrates, with small numbers of lymphocytes. Macrophages showed active phagocytosis and points of contact with altered muscle cells. Different degrees of matrix expansion were present, with granular and fibrillar deposits and collagen bundles. These alterations subsided by the 64th day. Macrophages seem to be the main immune effector cell in the C. callosus model of infection with T. cruzi. The mechanisms involved in the rapid fibrogenesis and its regression deserve further investigation.
Abstract:
Background: Individual signs and symptoms are of limited value for the diagnosis of influenza. Objective: To develop a decision tree for the diagnosis of influenza based on a classification and regression tree (CART) analysis. Methods: Data from two previous similar cohort studies were assembled into a single dataset. The data were randomly divided into a development set (70%) and a validation set (30%). We used CART analysis to develop three models that maximize the number of patients who do not require diagnostic testing prior to treatment decisions. The validation set was used to evaluate overfitting of the model to the training set. Results: Model 1 has seven terminal nodes based on temperature, the onset of symptoms and the presence of chills, cough and myalgia. Model 2 was a simpler tree with only two splits based on temperature and the presence of chills. Model 3 was developed with temperature as a dichotomous variable (≥38°C) and had only two splits based on the presence of fever and myalgia. The areas under the receiver operating characteristic curve (AUROCC) for the development and validation sets, respectively, were 0.82 and 0.80 for Model 1, 0.75 and 0.76 for Model 2, and 0.76 and 0.77 for Model 3. Model 2 classified 67% of patients in the validation group into a high- or low-risk group, compared with only 38% for Model 1 and 54% for Model 3. Conclusions: A simple decision tree (Model 2) classified two-thirds of patients as low or high risk and had an AUROCC of 0.76. After further validation in an independent population, this CART model could support clinical decision making regarding influenza, with low-risk patients requiring no further evaluation for influenza and high-risk patients being candidates for empiric symptomatic or drug therapy.
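A minimal sketch of a CART model in the spirit of the one described above, using scikit-learn's decision tree with a 70/30 split and AUROCC evaluation (the dataset, feature names and tuning choices are placeholders):

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Placeholder dataset with symptom/sign predictors and a lab-confirmed influenza label
df = pd.read_csv("influenza_cohorts.csv")
X = df[["temperature", "onset_hours", "chills", "cough", "myalgia"]]
y = df["influenza"]

X_train, X_val, y_train, y_val = train_test_split(X, y, test_size=0.30, random_state=0)

# Limit the number of terminal nodes to keep the tree clinically interpretable
tree = DecisionTreeClassifier(max_leaf_nodes=7, min_samples_leaf=20, random_state=0)
tree.fit(X_train, y_train)

print(export_text(tree, feature_names=list(X.columns)))
for name, Xs, ys in [("development", X_train, y_train), ("validation", X_val, y_val)]:
    auc = roc_auc_score(ys, tree.predict_proba(Xs)[:, 1])
    print(f"AUROCC ({name}): {auc:.2f}")
```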
Abstract:
Sampling issues represent a topic of ongoing interest to the forensic science community, essentially because of their crucial role in laboratory planning and working protocols. For this purpose, the forensic literature has described thorough (Bayesian) probabilistic sampling approaches, which are now widely implemented in practice. They allow one, for instance, to obtain probability statements that parameters of interest (e.g., the proportion of a seizure of items that present particular features, such as an illegal substance) satisfy particular criteria (e.g., a threshold or an otherwise limiting value). Currently, there are many approaches that allow one to derive probability statements relating to a population proportion, but questions on how a forensic decision maker - typically a client of a forensic examination or a scientist acting on behalf of a client - ought actually to decide about a proportion or a sample size have remained largely unexplored to date. The research presented here addresses methodology from decision theory that may help to cope usefully with the wide range of sampling issues typically encountered in forensic science applications. The procedures explored in this paper enable scientists to address a variety of concepts such as the (net) value of sample information, the (expected) value of sample information, and the (expected) decision loss. All of these aspects relate directly to questions that are regularly encountered in casework. Besides probability theory and Bayesian inference, the proposed approach requires some additional elements from decision theory that may increase the effort needed for practical implementation. In view of this challenge, the present paper emphasises the merits of graphical modelling concepts, such as decision trees and Bayesian decision networks, which can support forensic scientists in applying the methodology in practice. How this may be achieved is illustrated with several examples. The graphical devices invoked here also serve the purpose of supporting the discussion of the similarities, differences and complementary aspects of existing Bayesian probabilistic sampling criteria and the decision-theoretic approach proposed throughout this paper.
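As a toy illustration of two of the decision-theoretic quantities mentioned above, the sketch below computes the expected loss of the optimal decision and the expected value of sample information (EVSI) for a Beta-Binomial model of the proportion of items in a seizure that contain an illegal substance; the prior, threshold and losses are all hypothetical:

```python
import numpy as np
from scipy import stats

# Beta prior on the proportion theta of items containing an illegal substance
a, b = 1.0, 1.0
threshold = 0.5            # legally relevant threshold on the proportion (hypothetical)
L_fp, L_fn = 10.0, 1.0     # losses for wrongly declaring "above" / "below" the threshold

def expected_loss(a, b):
    """Expected loss of the optimal decision under a Beta(a, b) belief about theta."""
    p_above = 1.0 - stats.beta.cdf(threshold, a, b)
    return min(L_fp * (1.0 - p_above),   # declare "above": wrong if theta is actually below
               L_fn * p_above)           # declare "below": wrong if theta is actually above

def evsi(n):
    """Expected value of sample information for inspecting n items (preposterior analysis)."""
    prior_loss = expected_loss(a, b)
    k = np.arange(n + 1)
    pred = stats.betabinom.pmf(k, n, a, b)                          # predictive dist. of positives
    post_loss = np.array([expected_loss(a + ki, b + n - ki) for ki in k])
    return prior_loss - np.sum(pred * post_loss)

for n in (5, 10, 30):
    print(f"EVSI for a sample of {n} items: {evsi(n):.3f}")
```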
Abstract:
This study presents classification criteria for two-class Cannabis seedlings. As the cultivation of drug-type cannabis is forbidden in Switzerland, law enforcement authorities regularly ask laboratories to determine the chemotype of cannabis plants from seized material in order to ascertain whether the plantation is legal or not. In this study, the classification analysis is based on data obtained from the relative proportions of three major leaf compounds measured by gas chromatography interfaced with mass spectrometry (GC-MS). The aim is to discriminate between drug-type (illegal) and fiber-type (legal) cannabis at an early stage of growth. A Bayesian procedure is proposed: a Bayes factor is computed, and classification is performed on the basis of the decision maker's specifications (i.e. prior probability distributions on cannabis type and consequences of classification measured by losses). Classification rates are computed with two statistical models and the results are compared. A sensitivity analysis is then performed to assess the robustness of the classification criteria.
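A simplified sketch of the Bayes-factor-plus-loss classification step described above, assuming the relative compound proportions of each chemotype are modelled with multivariate normal densities; the data files, priors and losses are illustrative, not those used in the study:

```python
import numpy as np
from scipy.stats import multivariate_normal

# Training data: relative proportions of three leaf compounds (GC-MS), per chemotype
drug_train = np.load("drug_type_proportions.npy")     # placeholder files, shape (n, 3)
fiber_train = np.load("fiber_type_proportions.npy")

# Simple generative model per class: multivariate normal fitted by moments
drug_pdf = multivariate_normal(drug_train.mean(0), np.cov(drug_train, rowvar=False))
fiber_pdf = multivariate_normal(fiber_train.mean(0), np.cov(fiber_train, rowvar=False))

def classify(x, prior_drug=0.5, loss_miss_drug=10.0, loss_miss_fiber=1.0):
    """Classify a seedling as drug or fiber type from its compound proportions x."""
    bf = drug_pdf.pdf(x) / fiber_pdf.pdf(x)                  # Bayes factor (drug vs. fiber)
    posterior_odds = bf * prior_drug / (1.0 - prior_drug)
    # Declare 'drug type' when the expected loss of saying 'fiber' exceeds that of saying 'drug'
    return "drug type" if posterior_odds * loss_miss_drug > loss_miss_fiber else "fiber type"

print(classify(np.array([0.70, 0.20, 0.10])))   # hypothetical measurement
```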
Advanced mapping of environmental data: Geostatistics, Machine Learning and Bayesian Maximum Entropy
Abstract:
This book combines geostatistics and global mapping systems to present an up-to-the-minute study of environmental data. Featuring numerous case studies, the reference covers model-dependent (geostatistics) and data-driven (machine learning algorithms) analysis techniques such as risk mapping, conditional stochastic simulations, descriptions of spatial uncertainty and variability, artificial neural networks (ANN) for spatial data, Bayesian maximum entropy (BME), and more.