63 results for New Deal art -- Nebraska


Relevance:

30.00%

Publisher:

Abstract:

Pixel color has proven to be a useful and robust cue for detecting objects of interest such as fire. In this paper, a hybrid intelligent algorithm is proposed to detect fire pixels against the background of an image. The proposed algorithm combines a computational search method based on a swarm intelligence technique with the K-medoids clustering method to form a Fire-based Color Space (FCS); in effect, the new technique converts the RGB color system to FCS through a 3×3 matrix. The algorithm consists of five main stages: (1) extracting fire and non-fire pixels manually from the original image; (2) using K-medoids clustering to define a cost function whose error value is to be minimized; (3) applying Particle Swarm Optimization (PSO) to search for the weight components W that minimize the fitness function; (4) reporting the best matrix of feature weights and using it to convert all the original images in the database to the new color space; and (5) using Otsu's threshold technique to binarize the final images. Compared with several state-of-the-art techniques, the experimental results show the ability and efficiency of the new method to detect fire pixels in color images.
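As an illustration of stages (4) and (5), the sketch below applies a 3×3 weight matrix to the RGB pixels of an image and binarizes one channel of the resulting fire-based color space with Otsu's threshold. The matrix W_example and the demo image are illustrative assumptions only; in the paper, W is learned by PSO guided by the K-medoids clustering cost.

```python
import numpy as np
from skimage.filters import threshold_otsu

def rgb_to_fcs(image_rgb, W):
    """Map an H x W x 3 RGB image into a fire-based color space
    via a 3x3 linear transform (stage 4 of the pipeline)."""
    pixels = image_rgb.reshape(-1, 3).astype(np.float64)
    fcs = pixels @ W.T                      # each pixel multiplied by the 3x3 matrix
    return fcs.reshape(image_rgb.shape)

def binarize_fire_channel(fcs_image, channel=0):
    """Stage 5: Otsu threshold on one FCS channel to obtain a binary fire mask."""
    band = fcs_image[..., channel]
    t = threshold_otsu(band)
    return band > t

# Illustrative weight matrix only -- in the paper W is found by PSO,
# not hand-picked.
W_example = np.array([[ 0.8, -0.3, -0.2],
                      [-0.1,  0.5, -0.4],
                      [ 0.2,  0.1,  0.6]])

rng = np.random.default_rng(0)
demo = rng.integers(0, 256, size=(64, 64, 3))   # stand-in for a database image
mask = binarize_fire_channel(rgb_to_fcs(demo, W_example))
print(mask.shape, mask.dtype)
```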

Relevance:

30.00%

Publisher:

Abstract:

Emerging Electronic Medical Records (EMRs) have transformed modern healthcare. These records have great potential to be used for building clinical prediction models. However, a problem in using them is their high dimensionality. Since much of the information may not be relevant for prediction, the underlying complexity of the prediction models may not be high. A popular way to deal with this problem is to employ feature selection. Lasso and l1-norm based feature selection methods have shown promising results, but in the presence of correlated features these methods select features that change considerably with small changes in the data. This prevents clinicians from obtaining a stable feature set, which is crucial for clinical decision making. Grouping correlated variables together can improve the stability of feature selection; however, such a grouping is usually not known and needs to be estimated for optimal performance. Addressing this problem, we propose a new model that can simultaneously learn the grouping of correlated features and perform stable feature selection. We formulate the model as a constrained optimization problem and provide an efficient solution with guaranteed convergence. Our experiments with both synthetic and real-world datasets show that the proposed model is significantly more stable than Lasso and many existing state-of-the-art shrinkage and classification methods. We further show that, in terms of prediction performance, the proposed method consistently outperforms Lasso and other baselines. Our model can be used for selecting stable risk factors for a variety of healthcare problems, so it can assist clinicians toward accurate decision making.
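The instability of Lasso under correlated features, which motivates the proposed model, can be illustrated with a small sketch (not the authors' method): fit Lasso on bootstrap resamples of the data and measure how much the selected feature sets agree via average pairwise Jaccard similarity. The dataset, regularization strength, and resampling scheme below are illustrative assumptions.

```python
import numpy as np
from sklearn.linear_model import Lasso
from sklearn.datasets import make_regression

def selection_stability(X, y, n_resamples=20, alpha=0.1, seed=0):
    """Average pairwise Jaccard similarity of Lasso support sets
    across bootstrap resamples -- a simple stability measure."""
    rng = np.random.default_rng(seed)
    supports = []
    n = X.shape[0]
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)            # bootstrap sample
        model = Lasso(alpha=alpha).fit(X[idx], y[idx])
        supports.append(frozenset(np.flatnonzero(model.coef_)))
    sims = [len(a & b) / max(len(a | b), 1)
            for i, a in enumerate(supports) for b in supports[i + 1:]]
    return float(np.mean(sims))

# Correlated features (as in EMR data) tend to drive the stability score down.
X, y = make_regression(n_samples=200, n_features=50, n_informative=5,
                       effective_rank=10, noise=5.0, random_state=0)
print("Lasso selection stability:", round(selection_stability(X, y), 3))
```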

Relevance:

30.00%

Publisher:

Abstract:

Learning from a small number of examples is a challenging problem in machine learning. An effective way to improve performance is to exploit knowledge from other related tasks. Multi-task learning (MTL) is one such useful paradigm, aiming to improve performance by jointly modeling multiple related tasks. Although numerous classification and regression models exist in the machine learning literature, most MTL models are built around ridge or logistic regression. There is some limited work proposing multi-task extensions of techniques such as support vector machines and Gaussian processes. However, all these MTL models are tied to specific classification or regression algorithms, and there is no single MTL algorithm that can be used at a meta level for any given learning algorithm. Addressing this problem, we propose a generic, model-agnostic joint modeling framework that can take any classification or regression algorithm of a practitioner's choice (standard or custom-built) and build its MTL variant. The key observation driving our framework is that, due to the small number of examples, the estimates of the task parameters are usually poor, and we show that this leads to an under-estimation of the task relatedness between any two tasks with high probability. We derive an algorithm that brings the tasks closer to their true relatedness by improving the estimates of the task parameters. This is achieved by appropriate sharing of data across tasks. We provide the detailed theoretical underpinning of the algorithm. Through experiments with both synthetic and real datasets, we demonstrate that the multi-task variants of several classifiers and regressors (logistic regression, support vector machines, K-nearest neighbors, random forests, ridge regression, and support vector regression) convincingly outperform their single-task counterparts. We also show that the proposed model performs comparably to or better than many state-of-the-art MTL and transfer learning baselines.
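The flavor of the framework, model-agnostic MTL through data sharing, can be sketched as follows. This is a simplified illustration rather than the paper's algorithm: task relatedness is estimated here as the cosine similarity of preliminary per-task ridge coefficients, and data from sufficiently related tasks is pooled before refitting the practitioner's chosen estimator.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import Ridge

def fit_mtl_by_data_sharing(base_estimator, tasks, share_threshold=0.5):
    """Simplified sketch of model-agnostic multi-task learning:
    estimate pairwise task relatedness from preliminary per-task fits,
    then pool data from sufficiently related tasks before refitting.
    `tasks` is a list of (X, y) pairs; the relatedness estimate
    (cosine similarity of ridge coefficients) is an illustrative choice."""
    prelim = [Ridge(alpha=1.0).fit(X, y).coef_ for X, y in tasks]

    def cosine(a, b):
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

    models = []
    for t, (X_t, y_t) in enumerate(tasks):
        X_pool, y_pool = [X_t], [y_t]
        for s, (X_s, y_s) in enumerate(tasks):
            if s != t and cosine(prelim[t], prelim[s]) >= share_threshold:
                X_pool.append(X_s)          # borrow data from a related task
                y_pool.append(y_s)
        model = clone(base_estimator)       # any estimator of the practitioner's choice
        models.append(model.fit(np.vstack(X_pool), np.concatenate(y_pool)))
    return models

# Usage: two related regression tasks, each with few examples.
rng = np.random.default_rng(0)
w = rng.normal(size=10)
tasks = []
for _ in range(2):
    X = rng.normal(size=(15, 10))
    tasks.append((X, X @ (w + 0.1 * rng.normal(size=10))))
models = fit_mtl_by_data_sharing(Ridge(alpha=1.0), tasks)
print(len(models), "task models trained")
```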