971 results for Bayesian variable selection

Relevance: 90.00%

Abstract:

Probabilistic modeling is the defining characteristic of estimation of distribution algorithms (EDAs), determining their behavior and performance in optimization. Regularization is a well-known statistical technique for obtaining an improved model by reducing the generalization error of estimation, especially in high-dimensional problems. ℓ1-regularization is a variant of this technique with the appealing variable selection property, which results in sparse model estimations. In this thesis, we study the use of regularization techniques for model learning in EDAs. Several methods for regularized model estimation in continuous domains based on a Gaussian distribution assumption are presented and analyzed from different aspects when used for optimization in a high-dimensional setting, where the population size of the EDA grows logarithmically with the number of variables. The optimization results obtained for a number of continuous problems with an increasing number of variables show that the proposed EDA based on regularized model estimation performs more robust optimization, and is able to achieve significantly better results for larger dimensions than other Gaussian-based EDAs. We also propose a method for learning a marginally factorized Gaussian Markov random field model using regularization techniques and a clustering algorithm. The experimental results show notable optimization performance on continuous additively decomposable problems when using this model estimation method. Our study also covers multi-objective optimization, and we propose joint probabilistic modeling of variables and objectives in EDAs based on Bayesian networks, specifically models inspired by multi-dimensional Bayesian network classifiers. It is shown that with this approach to modeling, two new types of relationships are encoded in the estimated models in addition to the variable relationships captured in other EDAs: objective-variable and objective-objective relationships. An extensive experimental study shows the effectiveness of this approach for multi- and many-objective optimization. With the proposed joint variable-objective modeling, in addition to the Pareto set approximation, the algorithm is also able to obtain an estimation of the multi-objective problem structure. Finally, the study of multi-objective optimization based on joint probabilistic modeling is extended to noisy domains, where the noise in objective values is represented by intervals. A new version of the Pareto dominance relation for ordering the solutions in these problems, namely α-degree Pareto dominance, is introduced and its properties are analyzed. We show that ranking methods based on this dominance relation can result in competitive performance of EDAs with respect to the quality of the approximated Pareto sets. This dominance relation is then used together with a method for joint probabilistic modeling based on ℓ1-regularization for multi-objective feature subset selection in classification, where six different measures of accuracy are considered as objectives with interval values. The individual assessment of the proposed joint probabilistic modeling and solution ranking methods on datasets with small to medium dimensionality, using two different Bayesian classifiers, shows that comparable or better Pareto sets of feature subsets are approximated in comparison to standard methods.
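
As a rough illustration of the core idea above (ℓ1-regularized Gaussian model estimation inside an EDA loop), the following minimal Python sketch uses scikit-learn's GraphicalLasso as the sparse estimator; the sphere objective, population sizes and regularization strength are arbitrary stand-ins, not the thesis's actual algorithm.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)

def sphere(x):                                     # toy continuous objective (minimize)
    return np.sum(x ** 2, axis=1)

dim, pop_size, n_sel = 20, 100, 30
pop = rng.uniform(-5, 5, size=(pop_size, dim))
for gen in range(15):
    elite = pop[np.argsort(sphere(pop))[:n_sel]]   # truncation selection
    model = GraphicalLasso(alpha=0.1).fit(elite)   # l1-regularized Gaussian estimate
    pop = rng.multivariate_normal(elite.mean(axis=0), model.covariance_,
                                  size=pop_size)   # sample the next population
print("best objective value:", sphere(pop).min())
```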

Relevance: 90.00%

Abstract:

Background: Malignancies arising in the large bowel cause the second largest number of deaths from cancer in the Western world. Despite the progress made during the last decades, colorectal cancer remains one of the most frequent and deadly neoplasias in Western countries. Methods: A genomic study of human colorectal cancer was carried out on a total of 31 tumoral samples, corresponding to different stages of the disease, and 33 non-tumoral samples. The study was carried out by hybridisation of the tumour samples against a reference pool of non-tumoral samples using Agilent Human 1A 60-mer oligo microarrays. The results obtained were validated by qRT-PCR. In the subsequent bioinformatics analysis, gene networks were built by means of Bayesian classifiers, variable selection and bootstrap resampling. The consensus among all the induced models produced a hierarchy of dependences and, thus, of variables. Results: After an exhaustive pre-processing stage to ensure data quality (missing value imputation, probe quality control, data smoothing and intraclass variability filtering), the final dataset comprised a total of 8,104 probes. Next, a supervised classification approach and data analysis were carried out to obtain the most relevant genes. Two of them are directly involved in cancer progression and, in particular, in colorectal cancer. Finally, a supervised classifier was induced to classify new unseen samples. Conclusions: We have developed a tentative model for the diagnosis of colorectal cancer based on a biomarker panel. Our results indicate that the gene profile described herein can discriminate between non-cancerous and cancerous samples with 94.45% accuracy using different supervised classifiers (AUC values in the range of 0.955 to 0.997).
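
A hedged sketch of the modeling pipeline described above (bootstrap resampling, per-resample variable selection, a consensus ranking, and a simple Bayesian classifier); the synthetic data and the choices of SelectKBest, GaussianNB and the number of resamples are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB
from sklearn.utils import resample

X, y = make_classification(n_samples=64, n_features=500, n_informative=10,
                           random_state=0)          # stand-in for the 8,104 probes
counts = np.zeros(X.shape[1])
for b in range(100):                                # bootstrap resampling
    Xb, yb = resample(X, y, random_state=b)
    sel = SelectKBest(f_classif, k=20).fit(Xb, yb)  # per-resample selection
    counts[sel.get_support()] += 1
consensus = np.argsort(counts)[::-1][:20]           # most frequently chosen probes
print(cross_val_score(GaussianNB(), X[:, consensus], y, cv=5).mean())
```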

Relevance: 90.00%

Abstract:

Road accidents are a very relevant social phenomenon and one of the main causes of death in industrialized countries. Sophisticated econometric models are applied both in academic work and by public administrations for a better understanding of this very complex phenomenon. This thesis is devoted to the analysis of macro models for road accidents, with application to the Spanish case. The objectives of the thesis may be divided into two blocks: a. To achieve a better understanding of the road accident phenomenon by means of the application and comparison of two of the most frequently used macro models: DRAG (demand for road use, accidents and their gravity) and UCM (unobserved components model); the application was made to van-involved accident data in Spain in the period 2000-2009. The analysis was carried out within the frequentist framework, using the TRIO, SAS and TRAMO/SEATS software packages. b. Concern about the application of the models and about the relevant input variables to be included has driven the research to improve, by theoretical and practical means, the understanding of methodological choice and model selection procedures. The theoretical developments were applied to fatal accidents in the period 2000-2011 and to van-involved road accidents in 2000-2009. This has resulted in the following contributions: a. Insight into the models has been gained through interpretation of the effects of the input variables on the response and through the prediction accuracy of both models; the behaviour of van-involved road accidents has been explained during this process. b1. Development of an input variable selection procedure, which is crucial for an efficient choice of the inputs. Following the results of a), the procedure uses a DRAG-like model whose parameters are estimated within the Bayesian framework; it has been applied to the fatal accident data for the Spanish road network in the period 2000-2011. The results of this novel selection procedure were compared and validated against dynamic regression (DR) models, the most common approach for working with stochastic processes; the results are comparable, and the new proposal optimizes the model selection process at low computational cost. b2. A methodology for theoretical comparison between the two competing models through the joint application of Monte Carlo simulation, computer experiment design and ANOVA.
The models have different structures, and this affects the estimation of the effects of the input variables, so the comparison is carried out in terms of those effects on the response, which are in general different and should be related. Considering the results of b1), this study aims to determine how a stochastic time trend, which a UCM models explicitly, is captured by a DRAG model, which has no specific trend component. The findings are crucial for deciding whether a series with a stochastic trend can be estimated directly through DRAG or whether the data need a certain adjustment (typically differencing) prior to estimation. b3. New algorithms were developed to carry out the methodological exercises, implemented in R, WinBUGS and MATLAB. These objectives and contributions have led to the following findings: 1. The road accident phenomenon has been analyzed by means of two macro models; the effects of the influential input variables may be estimated through either model, but the estimates vary from one model to the other, although prediction accuracy is similar, with a slight superiority of the DRAG methodology. 2. The variable selection methodology provides very practical results as far as the explanation of road accidents is concerned, and prediction accuracy and interpretability have been improved by means of a more efficient input variable and model selection procedure. 3. Insight has been gained into the relationship between the estimates of the effects of the two competing models. A very relevant issue here is the interpretation of the trend by two different models, from which very useful recommendations for analysts in the modeling field have resulted. Overall, the results provide a very satisfactory extension of knowledge about the modeling process and the understanding of van-involved and total fatal accidents in Spain.
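
To make the DRAG/UCM contrast concrete, here is a minimal statsmodels sketch of an unobserved components model with a stochastic (local linear) trend plus explanatory variables; the simulated series, sample size and coefficients are invented for illustration only.

```python
import numpy as np
from statsmodels.tsa.statespace.structural import UnobservedComponents

rng = np.random.default_rng(1)
n = 144                                            # e.g. monthly data, 2000-2011
exog = rng.normal(size=(n, 2))                     # stand-in explanatory variables
trend = np.cumsum(np.cumsum(rng.normal(scale=0.05, size=n)))  # stochastic trend
y = trend + exog @ np.array([0.8, -0.5]) + rng.normal(scale=0.3, size=n)

# UCM with an explicit local linear (stochastic) trend plus regression effects;
# a DRAG-style regression lacks the trend term, so the series would typically
# be differenced before estimation, as discussed above.
res = UnobservedComponents(y, level="local linear trend", exog=exog).fit(disp=False)
print(res.params)
```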

Relevance: 90.00%

Abstract:

Feature selection is important in the medical field for many reasons. However, selecting important variables is a difficult task in the presence of censoring, a unique feature of survival data analysis. This paper proposes an approach to deal with the censoring problem in endovascular aortic repair survival data through Bayesian networks, merged with a hybrid feature selection process that combines Cox univariate analysis with machine learning approaches, such as ensembles of artificial neural networks, to select the most relevant predictive variables. The proposed algorithm was compared with common survival variable selection approaches such as the least absolute shrinkage and selection operator (LASSO) and the Akaike information criterion (AIC). The results showed that it was capable of dealing with high censoring in the datasets. Moreover, ensemble classifiers increased the area under the ROC curves of the two datasets, collected from two separate centers located in the United Kingdom. Furthermore, ensembles constructed with center 1 data enhanced the concordance index of center 2 predictions compared to the model built with a single network. Although the final reduced model using the neural networks and their ensembles is larger than those of the other methods, it outperformed them in both concordance index and sensitivity for center 2 prediction. This indicates that the reduced model is more powerful for cross-center prediction.
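
A minimal sketch of the hybrid screening idea (Cox univariate analysis followed by an ensemble of small neural networks), assuming the lifelines and scikit-learn packages; the synthetic data frame, the p-value-based ranking and the ensemble size are illustrative assumptions, not the paper's algorithm, which additionally embeds Bayesian networks to handle censoring.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(2)
cols = [f"x{i}" for i in range(10)]                # candidate predictors
df = pd.DataFrame(rng.normal(size=(300, 10)), columns=cols)
df["time"] = rng.exponential(10, 300)              # follow-up time
df["event"] = rng.integers(0, 2, 300)              # 0 = censored, 1 = event

# stage 1: Cox univariate screening, keep the most significant predictors
pvals = {}
for col in cols:
    cph = CoxPHFitter().fit(df[[col, "time", "event"]],
                            duration_col="time", event_col="event")
    pvals[col] = cph.summary.loc[col, "p"]
keep = sorted(pvals, key=pvals.get)[:3]

# stage 2: ensemble of small neural networks on the screened variables
nets = [MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=s)
        .fit(df[keep], df["event"]) for s in range(5)]
risk = np.mean([m.predict_proba(df[keep])[:, 1] for m in nets], axis=0)
print(keep, risk[:5])
```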

Relevance: 80.00%

Abstract:

OBJECTIVE: To investigate the prevalence of complementary food consumption and the factors associated with timely complementary feeding in infants under one year of age. METHODS: The study included 1,176 children during the 2003 National Vaccination Campaign in São Bernardo do Campo (SP), Brazil, whose caregivers answered a questionnaire including questions about the child's diet in the preceding 24 hours. The prevalence of complementary food consumption was estimated with an age-adjusted logistic regression model; the median ages at food introduction were estimated by survival analysis, and the factors associated with timely complementary feeding by Poisson regression with robust variance adjustment and hierarchical variable selection. RESULTS: Early introduction of complementary foods was observed: by the fourth month, about one third of the children were receiving fruit juice and one quarter were receiving porridge, fruit or soup, whereas the probability of eating the family diet at eight months was low (48%). The median age of introduction was 266 days (95%CI 256-275) for fruit, 258 days (95%CI 250-264) for vegetable purée, and 292 days (95%CI 287-303) for the family diet. The factors associated with consumption of solid foods before six months of age were: health care system, maternal age, maternal employment, and pacifier use. CONCLUSION: Early consumption of solid foods, a potential risk to child health and to the development of chronic diseases in adulthood, shows the need for programmatic actions to reverse this situation.
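
The association analysis named above can be illustrated with Poisson regression plus a robust variance estimate, a standard way to obtain prevalence ratios for a binary outcome; this statsmodels sketch uses invented covariates and outcomes, not the study's data.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n = 1176
X = sm.add_constant(rng.integers(0, 2, size=(n, 3)).astype(float))
y = rng.integers(0, 2, n)            # early solids before six months (0/1), invented

# Poisson regression on a binary outcome with a robust (sandwich) variance
# estimate yields prevalence ratios with valid standard errors
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit(cov_type="HC0")
print(np.exp(fit.params))            # prevalence ratios
print(fit.bse)                       # robust standard errors
```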

Relevance: 80.00%

Abstract:

The application of laser-induced breakdown spectrometry (LIBS) to the direct analysis of plant materials is a great challenge that still needs efforts for its development and validation. In this way, a series of experimental approaches has been carried out in order to show that LIBS can be used as an alternative to methods based on wet acid digestion for the analysis of agricultural and environmental samples. The large amount of information provided by LIBS spectra for these complex samples increases the difficulty of selecting the most appropriate wavelengths for each analyte. Some applications have suggested that improvements in both accuracy and precision can be achieved by applying multivariate calibration to LIBS data, compared to univariate regression developed with line emission intensities. In the present work, the performance of univariate and multivariate calibration, based on partial least squares regression (PLSR), was compared for the analysis of pellets of plant materials made from an appropriate mixture of cryogenically ground samples with cellulose as the binding agent. The development of a specific PLSR model for each analyte and the selection of spectral regions containing only lines of the analyte of interest were the best conditions for the analysis. In this particular application, the models showed similar performance, but PLSR seemed to be more robust due to a lower occurrence of outliers in comparison to the univariate method. The data suggest that efforts dealing with sample presentation and fitness of standards for LIBS analysis must be made in order to fulfill the boundary conditions for matrix-independent development and validation. (C) 2009 Elsevier B.V. All rights reserved.
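
The univariate-versus-PLSR comparison can be sketched as follows; the Gaussian "emission line" spectra are synthetic, and the chosen wavelength index and number of latent variables are arbitrary assumptions rather than the paper's settings.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(4)
n, wl = 40, 300
conc = rng.uniform(1, 10, n)                        # analyte concentration
line = np.exp(-0.5 * ((np.arange(wl) - 150) / 5) ** 2)
spectra = conc[:, None] * line + rng.normal(scale=0.05, size=(n, wl))

# univariate: regression on a single emission-line intensity;
# multivariate: PLSR on the full spectrum
uni = cross_val_predict(LinearRegression(), spectra[:, [150]], conc, cv=5)
pls = cross_val_predict(PLSRegression(n_components=3), spectra, conc, cv=5)

def rmse(pred):
    return np.sqrt(np.mean((np.ravel(pred) - conc) ** 2))

print(f"univariate RMSE: {rmse(uni):.3f}   PLSR RMSE: {rmse(pls):.3f}")
```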

Relevance: 80.00%

Abstract:

Chlorpheniramine maleate (CLOR) enantiomers were quantified by ultraviolet spectroscopy and partial least squares regression. The CLOR enantiomers were prepared as inclusion complexes with beta-cyclodextrin and 1-butanol, with mole fractions in the range from 50 to 100%. For the multivariate calibration, outliers were detected and excluded, and variable selection was performed by interval partial least squares (iPLS) and a genetic algorithm. Figures of merit showed root mean square errors of calibration and prediction of 3.63 and 2.83% (S)-CLOR, respectively. The elliptical confidence region included the ideal point of slope 1 and intercept 0. Precision and analytical sensitivity were 0.57 and 0.50% (S)-CLOR, respectively. The sensitivity, selectivity, adjustment, and signal-to-noise ratio were also determined. The model was validated by a paired t-test against the results obtained by the high-performance liquid chromatography method proposed by the European Pharmacopoeia and by circular dichroism spectroscopy. The results showed no significant difference between the methods at the 95% confidence level, indicating that the proposed method can be used as an alternative to standard procedures for chiral analysis.
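
A minimal sketch of the iPLS variable selection step: one PLS model is cross-validated per spectral window and the best window is retained. The synthetic spectra, interval width and latent-variable count are illustrative, and the genetic algorithm stage is omitted.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n, wl = 50, 200
y = rng.uniform(50, 100, n)                      # mole fraction of (S)-CLOR (%)
X = np.outer(y, np.sin(np.linspace(0, 3, wl))) + rng.normal(scale=1.0, size=(n, wl))

n_intervals = 10
width = wl // n_intervals
cv_rmse = []
for i in range(n_intervals):                     # one PLS model per window
    window = X[:, i * width:(i + 1) * width]
    score = cross_val_score(PLSRegression(n_components=2), window, y,
                            cv=5, scoring="neg_root_mean_squared_error").mean()
    cv_rmse.append(-score)
best = int(np.argmin(cv_rmse))
print(f"best interval: {best}, RMSECV: {cv_rmse[best]:.2f}")
```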

Relevance: 80.00%

Abstract:

Cannabis sativa, the most widely used illicit drug, has profound effects on levels of anxiety in animals and humans. Although recent studies have helped provide a better understanding of the neurofunctional correlates of these effects, indicating the involvement of the amygdala and cingulate cortex, their reciprocal influence is still mostly unknown. In this study, dynamic causal modelling (DCM) and Bayesian model selection (BMS) were used to explore the effects of pure compounds of C. sativa [600 mg of cannabidiol (CBD) and 10 mg of Δ9-tetrahydrocannabinol (Δ9-THC)] on prefrontal-subcortical effective connectivity in 15 healthy subjects who underwent a double-blind, randomized, placebo-controlled fMRI paradigm while viewing faces that elicited different levels of anxiety. In the placebo condition, BMS identified the best-fitting model as one with driving inputs entering via the anterior cingulate and forward intrinsic connectivity between the amygdala and the anterior cingulate. CBD, but not Δ9-THC, disrupted forward connectivity between these regions during the neural response to fearful faces. This is the first study to show that the disruption of prefrontal-subcortical connectivity by CBD may represent a neurophysiological correlate of its anxiolytic properties.
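
At its core, the BMS step turns (approximate) log model evidences into posterior model probabilities. A minimal numerical sketch with invented evidence values (in practice these come from the variational inversion of each DCM):

```python
import numpy as np

# hypothetical approximate log model evidences for three candidate models
log_evidence = np.array([-310.2, -305.7, -309.1])

# fixed-effects BMS: Bayes' rule with a uniform model prior,
# computed via log-sum-exp for numerical stability
log_post = log_evidence - np.logaddexp.reduce(log_evidence)
print(np.exp(log_post))   # the middle model takes ~0.96 of the posterior mass
```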

Relevance: 80.00%

Abstract:

Tuberculosis (TB) is a worldwide infectious disease that has shown extremely high mortality levels over time. The urgent need to develop new antitubercular drugs is due to the increasing rate of appearance of strains that are multi-drug resistant to the commonly used drugs, and to the longer durations of therapy and recovery, particularly in immuno-compromised patients. The major goal of the present study is the exploration of data from different families of compounds through the use of a variety of machine learning techniques, so that robust QSAR-based models can be developed to further guide the quest for new potent anti-TB compounds. Eight QSAR models were built using various types of descriptors (from the ADRIANA.Code and Dragon software) with two publicly available, structurally diverse data sets, including recent data deposited in PubChem. The QSAR methodologies used were Random Forests and Associative Neural Networks. Predictions for the external evaluation sets obtained accuracies in the range of 0.76-0.88 (for active/inactive classifications) and Q² = 0.66-0.89 for regressions. The models developed in this study can be used to estimate the anti-TB activity of drug candidates at early stages of drug development. (C) 2011 Elsevier B.V. All rights reserved.
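
A hedged sketch of the classification side of such a QSAR workflow using a Random Forest; the random descriptor matrix and labels are stand-ins for the real ADRIANA.Code/Dragon descriptors and measured activities.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)
X = rng.normal(size=(500, 60))               # stand-in molecular descriptors
y = (X[:, :5].sum(axis=1) > 0).astype(int)   # invented active/inactive labels

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X_tr, y_tr)
print("external-set accuracy:", accuracy_score(y_te, rf.predict(X_te)))
```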

Relevance: 80.00%

Abstract:

A Work Project, presented as part of the requirements for the award of a Master's Degree in Finance from the NOVA School of Business and Economics.

Relevance: 80.00%

Abstract:

Software product lines (SPLs) are diverse systems that are developed using a dual engineering process: (a) family engineering defines the commonality and variability among all members of the SPL, and (b) application engineering derives specific products based on the common foundation combined with a variable selection of features. The number of derivable products in an SPL can thus be exponential in the number of features. This inherent complexity poses two main challenges for modelling. Firstly, the formalism used for modelling SPLs needs to be modular and scalable. Secondly, it should ensure that all products behave correctly by providing the ability to analyse and verify complex models efficiently. In this paper we propose to integrate an established modelling formalism (Petri nets) with the domain of software product line engineering. To this end we extend Petri nets to Feature Nets. While Petri nets provide a framework for formally modelling and verifying single software systems, Feature Nets offer the same sort of benefits for software product lines. We show how SPLs can be modelled in an incremental, modular fashion using Feature Nets, provide a Feature Nets variant that supports modelling dynamic SPLs, and propose an analysis method for SPLs modelled as Feature Nets. By facilitating the construction of a single model that includes the various behaviours exhibited by the products in an SPL, we take a significant step towards efficient and practical quality assurance methods for software product lines.
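
A toy rendering of the Feature Net idea, in which each Petri net transition carries a feature guard and is enabled only for products whose feature selection satisfies it; the payment example and the exact data structures are invented for illustration, not the paper's formalism.

```python
from dataclasses import dataclass

@dataclass
class Transition:
    name: str
    pre: dict                        # place -> tokens consumed
    post: dict                       # place -> tokens produced
    guard: frozenset = frozenset()   # features required to enable the transition

def enabled(t, marking, features):
    return t.guard <= features and all(marking.get(p, 0) >= k
                                       for p, k in t.pre.items())

def fire(t, marking):
    m = dict(marking)
    for p, k in t.pre.items():
        m[p] -= k
    for p, k in t.post.items():
        m[p] = m.get(p, 0) + k
    return m

# two product variants: only products with the CardReader feature may pay by card
pay_cash = Transition("pay_cash", {"idle": 1}, {"paid": 1})
pay_card = Transition("pay_card", {"idle": 1}, {"paid": 1}, frozenset({"CardReader"}))

marking, product = {"idle": 1}, frozenset({"CardReader"})
for t in (pay_cash, pay_card):
    print(t.name, "enabled:", enabled(t, marking, product))
print(fire(pay_card, marking))       # marking after paying by card
```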

Relevance: 80.00%

Abstract:

Understanding the genetic underpinnings of adaptive change is a fundamental but largely unresolved problem in evolutionary biology. Drosophila melanogaster, an ancestrally tropical insect that has spread to temperate regions and become cosmopolitan, offers a powerful opportunity for identifying the molecular polymorphisms underlying clinal adaptation. Here, we use genome-wide next-generation sequencing of DNA pools ('pool-seq') from three populations collected along the North American east coast to examine patterns of latitudinal differentiation. Comparing the genomes of these populations is particularly interesting since they exhibit clinal variation in a number of important life history traits. We find extensive latitudinal differentiation, with many of the most strongly differentiated genes involved in major functional pathways such as the insulin/TOR, ecdysone, torso, EGFR, TGFβ/BMP, JAK/STAT, immunity and circadian rhythm pathways. We observe particularly strong differentiation on chromosome 3R, especially within the cosmopolitan inversion In(3R)Payne, which contains a large number of clinally varying genes. While much of the differentiation might be driven by clinal differences in the frequency of In(3R)P, we also identify genes that are likely independent of this inversion. Our results provide genome-wide evidence consistent with pervasive spatially variable selection acting on numerous loci and pathways along the well-known North American cline, with many candidates implicated in life history regulation and exhibiting parallel differentiation along the previously investigated Australian cline.
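
A naive sketch of a differentiation scan between two pools; the binomial read-depth model and the simple FST-like statistic are illustrative assumptions, not the pool-seq estimators used in the study.

```python
import numpy as np

rng = np.random.default_rng(7)
n_snps, depth = 10_000, 40                     # sites and pooled read depth
p_south = rng.uniform(0.05, 0.95, n_snps)      # southern allele frequencies
p_north = p_south.copy()
clinal = rng.choice(n_snps, 200, replace=False)
p_north[clinal] = np.clip(p_south[clinal] + 0.4, 0, 1)   # 200 clinal loci

# allele frequencies estimated from pooled reads at each site
f_south = rng.binomial(depth, p_south) / depth
f_north = rng.binomial(depth, p_north) / depth

# naive per-SNP FST-like differentiation statistic
pbar = (f_south + f_north) / 2
fst = (f_south - f_north) ** 2 / np.maximum(4 * pbar * (1 - pbar), 1e-9)
print("top candidates:", np.argsort(fst)[::-1][:5])
```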

Relevance: 80.00%

Abstract:

The aim of this work is to evaluate the capabilities and limitations of chemometric methods and other mathematical treatments applied to spectroscopic data, and more specifically to paint samples. The uniqueness of the spectroscopic data comes from the fact that they are multivariate (a few thousand variables) and highly correlated. Statistical methods are used to study and discriminate samples. A collection of 34 red paint samples was measured by infrared and Raman spectroscopy. Data pretreatment and variable selection demonstrated that the use of Standard Normal Variate (SNV), together with removal of the noisy variables by restricting the wavenumbers to 650-1830 cm−1 and 2730-3600 cm−1, provided the optimal results for infrared analysis. Principal component analysis (PCA) and hierarchical cluster analysis (HCA) were then used as exploratory techniques to provide evidence of structure in the data, find clusters, or detect outliers. With the FTIR spectra, the principal components (PCs) correspond to binder types and the presence or absence of calcium carbonate; 83% of the total variance is explained by the first four PCs. As for the Raman spectra, we observe six different clusters corresponding to the different pigment compositions when plotting the first two PCs, which account for 37% and 20% of the total variance, respectively. In conclusion, the use of chemometrics for the forensic analysis of paints provides a valuable tool for objective decision-making, a reduction of possible classification errors, and better efficiency, yielding robust results with time-saving data treatments.
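
A minimal sketch of the pretreatment and exploration steps named above (SNV, wavenumber-window selection, PCA); the random "spectra" and index windows are stand-ins for the 34 infrared spectra and the retained 650-1830 and 2730-3600 cm−1 ranges.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(8)
spectra = rng.normal(size=(34, 1500)) + np.linspace(0, 2, 1500)  # 34 "spectra"

# Standard Normal Variate: centre and scale each spectrum individually
snv = (spectra - spectra.mean(axis=1, keepdims=True)) \
      / spectra.std(axis=1, keepdims=True)

# keep only informative windows (indices standing in for the retained
# wavenumber ranges), then explore structure with PCA
keep = np.r_[0:600, 1000:1400]
scores = PCA(n_components=4).fit_transform(snv[:, keep])
print(scores.shape)      # 34 samples x 4 PCs, ready for plotting or HCA
```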

Relevance: 80.00%

Abstract:

OBJECTIVES: The aim of the study was to assess whether prospective follow-up data within the Swiss HIV Cohort Study can be used to predict patients who stop smoking, or, among smokers who stop, those who start smoking again. METHODS: We built prediction models first using clinical reasoning ('clinical models') and then by selecting from numerous candidate predictors using advanced statistical methods ('statistical models'). Our clinical models were based on literature suggesting that motivation drives smoking cessation, while dependence drives relapse in those attempting to stop. Our statistical models were based on automatic variable selection using additive logistic regression with component-wise gradient boosting. RESULTS: Of 4833 smokers, 26% stopped smoking, at least temporarily, because among those who stopped, 48% started smoking again. The predictive performance of our clinical and statistical models was modest. A basic clinical model for cessation, with patients classified into three motivational groups, was nearly as discriminatory as a constrained statistical model with just the most important predictors (the ratio of nonsmoking visits to total visits, alcohol or drug dependence, psychiatric comorbidities, recent hospitalization and age). A basic clinical model for relapse, based on the maximum number of cigarettes per day prior to stopping, was not as discriminatory as a constrained statistical model with just the ratio of nonsmoking visits to total visits. CONCLUSIONS: Predicting smoking cessation and relapse is difficult, so that simple models are nearly as discriminatory as complex ones. Patients with a history of attempting to stop and those known to have stopped recently are the best candidates for an intervention.
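
Component-wise gradient boosting is typically an R (mboost) technique; as a loose Python analogue, boosting with depth-1 trees also selects a single variable per step. A sketch with invented data mirroring the predictors named above, not the cohort data:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(9)
n = 4833
X = np.column_stack([
    rng.uniform(0, 1, n),          # ratio of nonsmoking visits to total visits
    rng.integers(0, 2, n),         # alcohol or drug dependence
    rng.integers(0, 2, n),         # psychiatric comorbidity
    rng.integers(0, 2, n),         # recent hospitalization
    rng.integers(18, 80, n),       # age
])
y = rng.binomial(1, 0.26, n)       # stopped smoking, at least temporarily

# depth-1 trees pick one variable per boosting step, loosely mirroring
# component-wise selection; with random labels the AUC hovers around 0.5
gbm = GradientBoostingClassifier(max_depth=1, n_estimators=200, random_state=0)
print(cross_val_score(gbm, X, y, cv=5, scoring="roc_auc").mean())
```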