86 results for Neural networks and clustering
Abstract:
Many studies have assessed the neural underpinnings of creativity, but have failed to find a clear anatomical localization. We aimed to provide evidence for a multi-componential neural system for creativity. We applied a general activation likelihood estimation (ALE) meta-analysis to 45 fMRI studies. Three individual ALE analyses were performed to assess creativity in different cognitive domains (Musical, Verbal, and Visuo-spatial). The general ALE revealed that creativity relies on clusters of activation in the bilateral occipital, parietal, frontal, and temporal lobes. The individual ALE analyses revealed different maximal activations in the different domains. Musical creativity yields activations in the bilateral medial frontal gyrus, in the left cingulate gyrus, middle frontal gyrus, and inferior parietal lobule, and in the right postcentral and fusiform gyri. Verbal creativity yields activations mainly located in the left hemisphere, in the prefrontal cortex, middle and superior temporal gyri, inferior parietal lobule, postcentral and supramarginal gyri, middle occipital gyrus, and insula; the right inferior frontal gyrus and the lingual gyrus were also activated. Visuo-spatial creativity activates the right middle and inferior frontal gyri, the bilateral thalamus, and the left precentral gyrus. This evidence suggests that creativity relies on multi-componential neural networks and that different creativity domains depend on different brain regions.
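As a rough illustration of the ALE idea described in this abstract, the following is a minimal sketch that models each reported peak as a Gaussian probability blob and combines experiments with the standard union formula. The grid size and kernel width are illustrative placeholders, not the values used in the study.

```python
# Minimal sketch of an activation likelihood estimation (ALE) map, assuming peak
# foci have already been transformed to voxel indices on a common grid.
import numpy as np
from scipy.ndimage import gaussian_filter

GRID = (40, 48, 40)   # illustrative voxel grid
SIGMA = 2.0           # illustrative kernel width in voxels

def modeled_activation(foci, grid=GRID, sigma=SIGMA):
    """Modeled activation (MA) map for one experiment: the probabilistic union
    of a Gaussian blob centred on each reported focus."""
    ma = np.zeros(grid)
    for x, y, z in foci:
        spike = np.zeros(grid)
        spike[x, y, z] = 1.0
        blob = gaussian_filter(spike, sigma=sigma)   # discretised Gaussian kernel
        ma = 1.0 - (1.0 - ma) * (1.0 - blob)         # union across foci
    return ma

def ale_map(experiments):
    """Voxel-wise ALE score across experiments: ALE = 1 - prod_i (1 - MA_i)."""
    ale = np.zeros(GRID)
    for foci in experiments:
        ale = 1.0 - (1.0 - ale) * (1.0 - modeled_activation(foci))
    return ale

# Toy usage: two 'experiments', each a list of (x, y, z) voxel peaks.
print(ale_map([[(10, 20, 15), (30, 25, 18)], [(11, 21, 15)]]).max())
```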
Abstract:
In recent years, the boundaries between e-commerce and social networking have become increasingly blurred. Many e-commerce websites support the mechanism of social login, where users can sign in to the websites using their social network identities such as their Facebook or Twitter accounts. Users can also post their newly purchased products on microblogs with links to the e-commerce product web pages. In this paper, we propose a novel solution for cross-site cold-start product recommendation, which aims to recommend products from e-commerce websites to users at social networking sites in 'cold-start' situations, a problem which has rarely been explored before. A major challenge is how to leverage knowledge extracted from social networking sites for cross-site cold-start product recommendation. We propose to use the linked users across social networking sites and e-commerce websites (users who have social networking accounts and have made purchases on e-commerce websites) as a bridge to map users' social networking features to another feature representation for product recommendation. Specifically, we propose learning both users' and products' feature representations (called user embeddings and product embeddings, respectively) from data collected from e-commerce websites using recurrent neural networks, and then applying a modified gradient boosting trees method to transform users' social networking features into user embeddings. We then develop a feature-based matrix factorization approach which can leverage the learnt user embeddings for cold-start product recommendation. Experimental results on a large dataset constructed from the largest Chinese microblogging service, Sina Weibo, and the largest Chinese B2C e-commerce website, JingDong, demonstrate the effectiveness of our proposed framework.
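A minimal sketch of the mapping step described above, assuming user and product embeddings have already been learned from the e-commerce data. Plain scikit-learn gradient boosting stands in for the paper's modified gradient boosting trees, and a simple inner-product score stands in for the feature-based matrix factorization; all data here are synthetic placeholders.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.multioutput import MultiOutputRegressor

rng = np.random.default_rng(0)
n_linked, n_social_feats, emb_dim, n_products = 500, 30, 16, 100

# Linked users: social-network features paired with embeddings learned elsewhere.
social_feats = rng.normal(size=(n_linked, n_social_feats))
user_embeddings = rng.normal(size=(n_linked, emb_dim))          # placeholder targets
product_embeddings = rng.normal(size=(n_products, emb_dim))     # placeholder catalogue

# Learn the mapping social features -> user embedding (one regressor per dimension).
mapper = MultiOutputRegressor(GradientBoostingRegressor(n_estimators=100))
mapper.fit(social_feats, user_embeddings)

# Cold-start user: only social-network features are available.
cold_user = rng.normal(size=(1, n_social_feats))
predicted_embedding = mapper.predict(cold_user)

# Rank products by inner product with the predicted user embedding.
scores = product_embeddings @ predicted_embedding.ravel()
print(np.argsort(scores)[::-1][:10])   # top-10 recommended product indices
```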
Abstract:
Feature selection is important in the medical field for many reasons. However, selecting important variables is a difficult task in the presence of censoring, a feature unique to survival data analysis. This paper proposes an approach to deal with the censoring problem in endovascular aortic repair survival data through Bayesian networks. The approach is merged and embedded with a hybrid feature selection process that combines Cox's univariate analysis with machine learning approaches, such as ensemble artificial neural networks, to select the most relevant predictive variables. The proposed algorithm was compared with common survival variable selection approaches, namely the least absolute shrinkage and selection operator (LASSO) and Akaike information criterion (AIC) methods. The results showed that it was capable of dealing with the high censoring in the datasets. Moreover, ensemble classifiers increased the area under the ROC curves of the two datasets, which were collected separately from two centers located in the United Kingdom. Furthermore, ensembles constructed with center 1 data enhanced the concordance index of center 2 predictions compared with a model built with a single network. Although the final reduced model obtained with the neural networks and their ensembles is larger than those of the other methods, it outperformed them in both concordance index and sensitivity for center 2 prediction. This indicates that the reduced model is more powerful for cross-center prediction.
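A minimal sketch of the hybrid selection idea: a univariate Cox filter followed by an ensemble of small neural networks scored by AUC. The column names ('time', 'event'), the threshold, and the synthetic data are hypothetical; lifelines and scikit-learn are assumed available.

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import roc_auc_score

def cox_univariate_filter(df, candidates, duration_col="time", event_col="event", alpha=0.1):
    """Keep features whose univariate Cox p-value falls below alpha."""
    kept = []
    for feat in candidates:
        cph = CoxPHFitter()
        cph.fit(df[[feat, duration_col, event_col]],
                duration_col=duration_col, event_col=event_col)
        if cph.summary.loc[feat, "p"] < alpha:
            kept.append(feat)
    return kept

def ensemble_auc(X, y, n_members=5):
    """Average the probabilities of several small neural networks and report AUC
    (in practice this would be cross-validated rather than scored in-sample)."""
    probs = np.zeros(len(y))
    for seed in range(n_members):
        clf = MLPClassifier(hidden_layer_sizes=(8,), max_iter=2000, random_state=seed)
        clf.fit(X, y)
        probs += clf.predict_proba(X)[:, 1] / n_members
    return roc_auc_score(y, probs)

# Toy usage with synthetic data (in practice the labels come from the
# censoring-handled EVAR outcome described above).
rng = np.random.default_rng(0)
df = pd.DataFrame(rng.normal(size=(200, 5)), columns=[f"f{i}" for i in range(5)])
df["time"] = rng.exponential(24, size=200)
df["event"] = rng.integers(0, 2, size=200)
selected = cox_univariate_filter(df, [f"f{i}" for i in range(5)])
if selected:
    print(selected, ensemble_auc(df[selected].to_numpy(), df["event"].to_numpy()))
```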
Abstract:
This paper provides the most comprehensive evidence to date on whether or not monetary aggregates are valuable for forecasting US inflation in the early to mid 2000s. We explore a wide range of different definitions of money, including different methods of aggregation and different collections of included monetary assets. In our forecasting experiment we use two nonlinear techniques, namely recurrent neural networks and kernel recursive least squares regression, techniques that are new to macroeconomics. Recurrent neural networks operate with potentially unbounded input memory, while the kernel regression technique is a finite memory predictor. The two methodologies compete to find the best-fitting US inflation forecasting models and are then compared to forecasts from a naive random walk model. The best models were nonlinear autoregressive models based on kernel methods. Our findings do not provide much support for the usefulness of monetary aggregates in forecasting inflation. Beyond its economic findings, our study is in the tradition of physicists' long-standing interest in the interconnections among statistical mechanics, neural networks, and related nonparametric statistical methods, and suggests potential avenues of extension for such studies. © 2010 Elsevier B.V. All rights reserved.
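To make the forecasting comparison concrete, here is a minimal sketch of a nonlinear autoregressive kernel forecast against a naive random-walk benchmark. Scikit-learn's KernelRidge stands in for the kernel recursive least squares predictor, and the inflation series is synthetic, for illustration only.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(0)
T, lags = 300, 4
inflation = np.cumsum(rng.normal(scale=0.1, size=T)) + 2.0   # synthetic series

# Lagged design matrix: predict y_t from (y_{t-1}, ..., y_{t-lags}).
X = np.column_stack([inflation[lags - k - 1: T - k - 1] for k in range(lags)])
y = inflation[lags:]

split = int(0.8 * len(y))
model = KernelRidge(kernel="rbf", alpha=1.0, gamma=0.5)
model.fit(X[:split], y[:split])

kernel_pred = model.predict(X[split:])
rw_pred = X[split:, 0]        # random walk: forecast equals the previous value

def rmse(a, b):
    return np.sqrt(np.mean((a - b) ** 2))

print("kernel AR RMSE:   ", rmse(kernel_pred, y[split:]))
print("random walk RMSE: ", rmse(rw_pred, y[split:]))
```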
Abstract:
This thesis studies survival analysis techniques that deal with censoring to produce predictive tools for the risk of endovascular aortic aneurysm repair (EVAR) re-intervention. Censoring indicates that some patients do not continue follow-up, so their outcome class is unknown. Existing methods for dealing with censoring have drawbacks and cannot handle the high censoring of the two EVAR datasets collected. Therefore, this thesis presents a new solution to high censoring by modifying an approach that was previously incapable of differentiating between risk groups of aortic complications. Feature selection (FS) becomes complicated with censoring. Most survival FS methods depend on Cox's model; however, machine learning classifiers (MLC) are preferred. Few methods have adopted MLC to perform survival FS, but they cannot be used with high censoring. This thesis proposes two FS methods which use MLC to evaluate features; both use the new solution to deal with censoring. They combine factor analysis with a greedy stepwise FS search which allows eliminated features to re-enter the FS process. The first FS method searches for the best neural network configuration and subset of features. The second combines support vector machines, neural networks, and K-nearest-neighbor classifiers using simple and weighted majority voting to construct a multiple classifier system (MCS) that improves on the performance of the individual classifiers, as sketched below. It presents a new hybrid FS process by using the MCS as a wrapper method and merging it with an iterated feature ranking filter method to further reduce the features. The proposed techniques outperformed FS methods based on Cox's model, such as the Akaike and Bayesian information criteria and the least absolute shrinkage and selection operator, in the log-rank test's p-values, sensitivity, and concordance. This demonstrates that the proposed techniques are more powerful in correctly predicting the risk of re-intervention. Consequently, they enable doctors to set an appropriate future observation plan for each patient.
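A minimal sketch of the multiple classifier system mentioned above: SVM, neural network, and K-nearest-neighbor members combined by simple and weighted majority voting. The dataset is synthetic and the voting weights are illustrative, not those used in the thesis.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import VotingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=20, random_state=0)

members = [
    ("svm", SVC(probability=True, random_state=0)),
    ("nn", MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)),
    ("knn", KNeighborsClassifier(n_neighbors=5)),
]

simple_vote = VotingClassifier(members, voting="hard")                        # simple majority
weighted_vote = VotingClassifier(members, voting="soft", weights=[2, 2, 1])   # weighted voting

for name, mcs in [("simple", simple_vote), ("weighted", weighted_vote)]:
    print(name, cross_val_score(mcs, X, y, cv=5).mean())
```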
Abstract:
In recent years, there has been an increasing interest in learning a distributed representation of word sense. Traditional context clustering based models usually require careful tuning of model parameters, and typically perform worse on infrequent word senses. This paper presents a novel approach which addresses these limitations by first initializing the word sense embeddings through learning sentence-level embeddings from WordNet glosses using a convolutional neural network. The initialized word sense embeddings are used by a context clustering based model to generate the distributed representations of word senses. Our learned representations outperform the publicly available embeddings on 2 out of 4 metrics in the word similarity task, and 6 out of 13 subtasks in the analogical reasoning task.
Abstract:
In recent years, there has been an increasing interest in learning a distributed representation of word sense. Traditional context clustering based models usually require careful tuning of model parameters, and typically perform worse on infrequent word senses. This paper presents a novel approach which addresses these limitations by first initializing the word sense embeddings through learning sentence-level embeddings from WordNet glosses using a convolutional neural network. The initialized word sense embeddings are used by a context clustering based model to generate the distributed representations of word senses. Our learned representations outperform the publicly available embeddings on half of the metrics in the word similarity task and 6 out of 13 subtasks in the analogical reasoning task, and give the best overall accuracy in the word sense effect classification task, which shows the effectiveness of our proposed distributed representation learning model.
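A minimal PyTorch sketch of the gloss-encoding idea in the two abstracts above: a convolutional network encodes a WordNet gloss into a vector that would seed the sense embedding before context clustering. Vocabulary size, filter width, and dimensions are placeholders, not the paper's architecture.

```python
import torch
import torch.nn as nn

class GlossEncoder(nn.Module):
    def __init__(self, vocab_size=10000, word_dim=100, sense_dim=100, kernel_size=3):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, word_dim)
        self.conv = nn.Conv1d(word_dim, sense_dim, kernel_size, padding=1)

    def forward(self, token_ids):              # token_ids: (batch, seq_len)
        x = self.embed(token_ids)              # (batch, seq_len, word_dim)
        x = x.transpose(1, 2)                  # (batch, word_dim, seq_len)
        x = torch.relu(self.conv(x))           # (batch, sense_dim, seq_len)
        return x.max(dim=2).values             # max-over-time pooling -> (batch, sense_dim)

# Toy usage: one gloss of 8 (already indexed) tokens yields a 100-d sense vector,
# which would then initialise the context-clustering model.
encoder = GlossEncoder()
gloss = torch.randint(0, 10000, (1, 8))
print(encoder(gloss).shape)    # torch.Size([1, 100])
```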
Abstract:
In this paper we consider four alternative approaches to complexity control in feed-forward networks based respectively on architecture selection, regularization, early stopping, and training with noise. We show that there are close similarities between these approaches and we argue that, for most practical applications, the technique of regularization should be the method of choice.
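A small illustration of the recommended approach, complexity control by regularization (weight decay), contrasted with an unregularized network of the same architecture. The data, network size, and penalty value are illustrative only.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(60, 1))
y = np.sin(3 * X).ravel() + rng.normal(scale=0.2, size=60)   # noisy training target
X_test = np.linspace(-1, 1, 200).reshape(-1, 1)
y_true = np.sin(3 * X_test).ravel()

for alpha in (0.0, 0.01):   # alpha is the L2 weight-decay coefficient
    net = MLPRegressor(hidden_layer_sizes=(50,), alpha=alpha,
                       max_iter=5000, random_state=0)
    net.fit(X, y)
    err = np.sqrt(np.mean((net.predict(X_test) - y_true) ** 2))
    print(f"alpha={alpha}: test RMSE {err:.3f}")
```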
Abstract:
It is generally assumed when using Bayesian inference methods for neural networks that the input data contains no noise. For real-world (errors in variables) problems this is clearly an unsafe assumption. This paper presents a Bayesian neural network framework which accounts for input noise provided that a model of the noise process exists. In the limit where the noise process is small and symmetric it is shown, using the Laplace approximation, that this method adds an extra term to the usual Bayesian error bar which depends on the variance of the input noise process. Further, by treating the true (noiseless) input as a hidden variable, and sampling this jointly with the network's weights using a Markov chain Monte Carlo method, it is demonstrated that it is possible to infer the regression over the noiseless input. This leads to the possibility of training an accurate model of a system using less accurate, or more uncertain, data. This is demonstrated on both the synthetic noisy sine wave problem and a real problem of inferring the forward model for a satellite radar backscatter system used to predict sea surface wind vectors.
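As a worked illustration of the error-bar statement above, a plausible first-order form under the small, symmetric input-noise assumption is the following; the notation follows the usual Laplace-approximation treatment and is a reconstruction, not the paper's exact expression:

```latex
\sigma_{\mathrm{total}}^{2}(\mathbf{x}) \;\approx\;
\underbrace{\sigma_{\nu}^{2} + \mathbf{g}^{\top}\mathbf{A}^{-1}\mathbf{g}}_{\text{usual Bayesian error bar}}
\;+\;
\underbrace{\left(\frac{\partial y}{\partial \mathbf{x}}\right)^{\!\top}
\boldsymbol{\Sigma}_{x}
\left(\frac{\partial y}{\partial \mathbf{x}}\right)}_{\text{input-noise term}}
```

Here sigma_nu^2 is the output noise variance, g is the gradient of the network output with respect to the weights, A is the Hessian of the regularized error over the weights, and Sigma_x is the covariance of the input noise, so the added term grows with the input-noise variance as stated in the abstract.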
Abstract:
Background - The Met allele of the catechol-O-methyltransferase (COMT) valine-to-methionine (Val158Met) polymorphism is known to affect dopamine-dependent affective regulation within amygdala-prefrontal cortical (PFC) networks. It is also thought to increase the risk of a number of disorders characterized by affective morbidity, including bipolar disorder (BD), major depressive disorder (MDD) and anxiety disorders. The disease risk conferred is small, suggesting that this polymorphism represents a modifier locus. Therefore, our aim was to investigate how the COMT Val158Met polymorphism may contribute to phenotypic variation in clinical diagnosis, using sad facial affect processing as a probe for its neural action. Method - We employed functional magnetic resonance imaging to measure activation in the amygdala, ventromedial PFC (vmPFC) and ventrolateral PFC (vlPFC) during sad facial affect processing in family members with BD (n=40), MDD and anxiety disorders (n=22) or no psychiatric diagnosis (n=25), and in 50 healthy controls. Results - Irrespective of clinical phenotype, the Val158 allele was associated with greater amygdala activation and the Met158 allele with greater signal change in the vmPFC and vlPFC. Signal changes in the amygdala and vmPFC were not associated with disease expression. However, in the right vlPFC the Met158 allele was associated with greater activation in all family members with affective morbidity compared with relatives without a psychiatric diagnosis and healthy controls. Conclusions - Our results suggest that the COMT Val158Met polymorphism has a pleiotropic effect within the neural networks subserving emotional processing. Furthermore, the Met158 allele further reduces cortical efficiency in the vlPFC in individuals with affective morbidity. © 2010 Cambridge University Press.
Abstract:
Background - Lifelong surveillance after endovascular repair (EVAR) of abdominal aortic aneurysms (AAA) is considered mandatory to detect potentially life-threatening endograft complications. A minority of patients require reintervention but cannot be predictively identified by existing methods. This study aimed to improve the prediction of endograft complications and mortality through the application of machine-learning techniques. Methods - Patients undergoing EVAR at 2 centres were studied from 2004 to 2010. Pre-operative aneurysm morphology was quantified and endograft complications were recorded up to 5 years following surgery. An artificial neural network (ANN) approach was used to predict whether patients would be at low or high risk of endograft complications (aortic/limb) or mortality. Centre 1 data were used for training and centre 2 data for validation. ANN performance was assessed by Kaplan-Meier analysis comparing the incidence of aortic complications, limb complications, and mortality in patients predicted to be low-risk versus those predicted to be high-risk. Results - 761 patients aged 75 +/- 7 years underwent EVAR. Mean follow-up was 36 +/- 20 months. An ANN was created from morphological features including angulation/length/areas/diameters/volume/tortuosity of the aneurysm neck/sac/iliac segments. The ANN models predicted endograft complications and mortality with excellent discrimination between a low-risk and high-risk group. In external validation, the 5-year rates of freedom from aortic complications, limb complications and mortality were 95.9% vs 67.9%, 99.3% vs 92.0%, and 87.9% vs 79.3% respectively (p < 0.001). Conclusion - This study presents ANN models that stratify the 5-year risk of endograft complications or mortality using routinely available pre-operative data.
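A minimal sketch of the cross-centre workflow described above: a neural network trained on centre 1 morphology features flags high-risk patients, and the prediction is validated on centre 2 by comparing Kaplan-Meier curves of the two predicted groups. The feature columns and the synthetic data are placeholders for the routinely available pre-operative measurements used in the study.

```python
import numpy as np
import pandas as pd
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test
from sklearn.neural_network import MLPClassifier
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
cols = ["neck_angle", "neck_length", "sac_diameter", "iliac_tortuosity"]  # hypothetical

def synthetic_centre(n):
    X = pd.DataFrame(rng.normal(size=(n, len(cols))), columns=cols)
    complication = (X["sac_diameter"] + rng.normal(scale=0.5, size=n) > 0.5).astype(int)
    time = rng.exponential(36, size=n).clip(max=60)     # months of follow-up
    return X, complication, time

X1, y1, _ = synthetic_centre(400)                       # centre 1: training
X2, y2, t2 = synthetic_centre(300)                      # centre 2: external validation

scaler = StandardScaler().fit(X1)
ann = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
ann.fit(scaler.transform(X1), y1)

# Split centre 2 at the median predicted risk, for illustration only.
probs = ann.predict_proba(scaler.transform(X2))[:, 1]
high_risk = probs >= np.median(probs)

# Kaplan-Meier comparison of freedom from complications in the two predicted groups.
km = KaplanMeierFitter()
for label, mask in [("low-risk", ~high_risk), ("high-risk", high_risk)]:
    km.fit(t2[mask], event_observed=y2[mask], label=label)
    print(label, "survival estimate at last follow-up:",
          km.survival_function_[label].iloc[-1])
print("log-rank p:", logrank_test(t2[~high_risk], t2[high_risk],
                                  y2[~high_risk], y2[high_risk]).p_value)
```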