950 results for "distributed combination of classifiers"


Relevance: 100.00%

Abstract:

The Optimum-Path Forest (OPF) classifier is a recent and promising method for pattern recognition, with a fast training algorithm and good accuracy results. Therefore, the investigation of a combining method for this kind of classifier can be important for many applications. In this paper we report a fast method to combine OPF-based classifiers trained with disjoint training subsets. Given a fixed number of subsets, the algorithm chooses random samples, without replacement, from the original training set. Each subset classifier's accuracy is improved by a learning procedure. The final decision is given by majority vote. Experiments with simulated and real data sets showed that the proposed combining method is more efficient and effective than the naive approach, provided certain conditions hold. It was also shown that the OPF training step runs faster on a series of small subsets than on the whole training set. The combining scheme was also designed to support parallel or distributed processing, speeding up the procedure even more. © 2011 Springer-Verlag.
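The combining scheme described in this abstract (disjoint random subsets, one classifier per subset, majority vote) can be sketched as follows. This is a minimal illustration only: a 1-nearest-neighbour rule stands in for the OPF classifier (OPF is also distance-based, but its optimum-path machinery is not reproduced here), and all function names are our own.

```python
import random
from collections import Counter

def train_on_disjoint_subsets(samples, labels, k):
    """Split the training set into k disjoint subsets (random sampling
    without replacement) and train one classifier per subset."""
    idx = list(range(len(samples)))
    random.shuffle(idx)                      # random samples, no replacement
    size = len(idx) // k
    classifiers = []
    for i in range(k):
        part = idx[i * size:(i + 1) * size]
        # Stand-in for OPF training: keep the subset as 1-NN prototypes.
        classifiers.append([(samples[j], labels[j]) for j in part])
    return classifiers

def predict_majority(classifiers, x):
    """Each subset classifier votes; the final decision is the majority."""
    votes = []
    for prototypes in classifiers:
        nearest = min(prototypes, key=lambda p: abs(p[0] - x))  # 1-D distance
        votes.append(nearest[1])
    return Counter(votes).most_common(1)[0][0]
```

Because each subset is a fraction of the training set, each training run is cheaper than one run over the whole set, and the k runs are independent, which is what makes the scheme parallelizable.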

Relevance: 100.00%

Abstract:

Low-cost RGB-D cameras such as the Microsoft Kinect or the Asus Xtion Pro are completely changing the computer vision world, as they are being successfully used in several applications and research areas. Depth data are particularly attractive and suitable for applications based on moving-object detection through foreground/background segmentation approaches; the RGB-D applications proposed in the literature generally employ state-of-the-art foreground/background segmentation techniques based on the depth information, without taking the color information into account. The novel approach that we propose is based on a combination of classifiers that improves background subtraction accuracy with respect to state-of-the-art algorithms by jointly considering color and depth data. In particular, the combination of classifiers is based on a weighted average that adaptively modifies the support of each classifier in the ensemble by considering foreground detections in the previous frames and the depth and color edges. In this way, it is possible to reduce false detections due to critical issues that cannot be tackled by the individual classifiers, such as shadows and illumination changes, color and depth camouflage, moved background objects and noisy depth measurements. Moreover, we propose, to the best of our knowledge, the first publicly available RGB-D benchmark dataset with hand-labeled ground truth of several challenging scenarios for testing background/foreground segmentation algorithms.
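The weighted-average combination can be sketched per pixel as below. This is a hedged simplification: the color and depth classifiers are assumed to emit foreground probabilities, and the weight update is a much-reduced stand-in for the paper's history- and edge-based adaptation; all names are our own.

```python
def fuse_pixel(p_color, p_depth, w_color, w_depth, thr=0.5):
    """Weighted average of the two classifiers' foreground probabilities
    for one pixel; the pixel is labelled foreground if the fused score
    reaches the threshold."""
    score = (w_color * p_color + w_depth * p_depth) / (w_color + w_depth)
    return score, score >= thr

def update_weight(w, p, prev_foreground, lr=0.1, w_min=0.1):
    """Raise a classifier's support when its output agreed with the
    previous frame's decision for this pixel, lower it otherwise (a
    simplified stand-in for the paper's adaptation rule)."""
    agreed = (p >= 0.5) == prev_foreground
    return w + lr if agreed else max(w - lr, w_min)
```

Down-weighting the depth classifier where its measurements are noisy (or the color classifier under shadows) is exactly the mechanism that lets the ensemble suppress errors neither classifier could fix alone.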

Relevance: 100.00%

Abstract:

In many domains, when several competing classifiers are available, we want to synthesize them, or some of them, into a more accurate classifier via a combination function. In this paper we propose a 'class-indifferent' method for combining classifier decisions represented by evidential structures called triplet and quartet, using Dempster's rule of combination. This method is unique in that it distinguishes important elements from trivial ones in representing classifier decisions, makes use of more information than others in calculating the support for class labels, and provides a practical way to apply the theoretically appealing Dempster–Shafer theory of evidence to the problem of ensemble learning. We present a formalism for modelling classifier decisions as triplet mass functions and establish a range of formulae for combining these mass functions in order to arrive at a consensus decision. In addition, we carry out a comparative study with the alternative simplet and dichotomous structures, and also compare two combination methods, Dempster's rule and majority voting, over the UCI benchmark data, to demonstrate the advantage our approach offers. (This continues work in this area previously published in IEEE Transactions on Knowledge and Data Engineering and at conferences.)
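Dempster's rule of combination itself is standard and can be sketched directly: mass products over intersecting focal sets are accumulated, conflicting mass (empty intersections) is discarded, and the remainder is renormalised. In the triplet representation, the focal elements would be restricted to the top-ranked singleton classes plus the whole frame of discernment; the function name below is our own.

```python
def dempster_combine(m1, m2):
    """Dempster's rule for two mass functions whose focal sets are
    frozensets of class labels. Mass assigned to conflicting pairs
    (empty intersection) is removed and the rest renormalised."""
    combined, conflict = {}, 0.0
    for A, mA in m1.items():
        for B, mB in m2.items():
            inter = A & B
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mA * mB
            else:
                conflict += mA * mB
    k = 1.0 - conflict          # normalisation constant
    return {s: v / k for s, v in combined.items()}
```

For example, two classifiers that both favour class 'a' but reserve some mass for the whole frame reinforce each other, so the combined mass on {'a'} exceeds either input mass.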

Relevance: 100.00%

Abstract:

Several real problems involve the classification of data into categories or classes. Given a data set whose classes are known, Machine Learning algorithms can be employed to induce a classifier able to predict the class of new data from the same domain, performing the desired discrimination. Some learning techniques are originally conceived for the solution of problems with only two classes, also known as binary classification problems. However, many problems require the discrimination of examples into more than two categories or classes. This paper presents a survey of the main strategies for generalizing binary classifiers to problems with more than two classes, known as multiclass classification problems. The focus is on strategies that decompose the original multiclass problem into multiple binary subtasks, whose outputs are combined to obtain the final prediction.
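One of the best-known decompositions of this kind is one-vs-rest (one-against-all): train one binary classifier per class and combine the binary outputs by taking the class with the highest score. A minimal sketch follows, using a hypothetical centroid-based binary learner on 1-D data; the learner and all names are our own illustrative assumptions, not from the survey.

```python
def one_vs_rest_train(samples, labels, train_binary):
    """Decompose a multiclass problem into one binary subtask per class:
    classifier c separates class c (positive) from all others."""
    models = {}
    for c in sorted(set(labels)):
        binary = [1 if y == c else 0 for y in labels]
        models[c] = train_binary(samples, binary)
    return models

def one_vs_rest_predict(models, x):
    """Combine the binary outputs: pick the class whose classifier
    assigns x the highest score."""
    return max(models, key=lambda c: models[c](x))

def centroid_trainer(samples, binary):
    """Toy binary learner: score is negative distance to the mean of
    the positive examples."""
    pos = [s for s, y in zip(samples, binary) if y == 1]
    mu = sum(pos) / len(pos)
    return lambda x: -abs(x - mu)
```

Other decompositions surveyed in this line of work (one-vs-one, error-correcting output codes) differ only in how the binary subtasks are defined and how their outputs are recombined.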

Relevance: 100.00%

Abstract:

For certain continuum problems, it is desirable and beneficial to combine two different methods in order to exploit their advantages while evading their disadvantages. In this paper, a bridging transition algorithm is developed for the combination of the meshfree method (MM) with the finite element method (FEM). In this coupled method, the meshfree method is used in the sub-domain where the MM is required to obtain high accuracy, and the finite element method is employed in other sub-domains where FEM is required to improve the computational efficiency. The MM domain and the FEM domain are connected by a transition (bridging) region. A modified variational formulation and the Lagrange multiplier method are used to ensure the compatibility of displacements and their gradients. To improve the computational efficiency and reduce the meshing cost in the transition region, regularly distributed transition particles, which are independent of either the meshfree nodes or the FE nodes, can be inserted into the transition region. The newly developed coupled method is applied to the stress analysis of 2D solids and structures in order to investigate its performance and study its parameters. Numerical results show that the present coupled method is convergent, accurate and stable. The coupled method has promising potential for practical applications, because it can take advantage of both the meshfree method and FEM while overcoming their shortcomings.
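Schematically (in our notation, not necessarily the paper's exact formulation), a Lagrange-multiplier coupling of this kind augments the total potential energy with a term that enforces displacement compatibility on the transition interface Γ:

```latex
\Pi = \Pi_{\mathrm{MM}} + \Pi_{\mathrm{FEM}}
    + \int_{\Gamma} \boldsymbol{\lambda} \cdot
      \left( \mathbf{u}^{\mathrm{MM}} - \mathbf{u}^{\mathrm{FEM}} \right)\,
      \mathrm{d}\Gamma ,
\qquad \delta \Pi = 0 ,
```

so that stationarity yields both the usual equilibrium equations in each sub-domain and the constraint that the MM and FEM displacement fields coincide on Γ.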

Relevance: 100.00%

Abstract:

Background: The expression of biomass-degrading enzymes (such as cellobiohydrolases) in transgenic plants has the potential to reduce the costs of biomass saccharification by providing a source of enzymes to supplement commercial cellulase mixtures. Cellobiohydrolases are the main enzymes in commercial cellulase mixtures. In the present study, a cellobiohydrolase was expressed in transgenic corn stover leaf and assessed as an additive for two commercial cellulase mixtures in the saccharification of sugar cane bagasse pretreated by different processes.

Results: Recombinant cellobiohydrolase was extracted from the senescent leaves of transgenic corn using a simple buffer with no concentration step. The extract significantly enhanced the performance of Celluclast 1.5 L (a commercial cellulase mixture) by up to fourfold, relative to the commercial mixture on its own, on sugar cane bagasse pretreated at the pilot scale by a dilute sulfuric acid steam explosion process. The extracts also enhanced the performance of Cellic CTec2 (a commercial cellulase mixture) by up to fourfold on a range of residues from sugar cane bagasse pretreated at the laboratory scale (using acidified ethylene carbonate/ethylene glycol, 1-butyl-3-methylimidazolium chloride, and ball-milling) and at the pilot scale (dilute sodium hydroxide and glycerol/hydrochloric acid steam explosion). Using tap water as a solvent, under conditions that mimic an industrial process, we demonstrated extraction of about 90% of the recombinant cellobiohydrolase from senescent, transgenic corn stover leaf with minimal tissue disruption.

Conclusions: The accumulation of recombinant cellobiohydrolase in senescent, transgenic corn stover leaf is a viable strategy to reduce the saccharification cost associated with the production of fermentable sugars from pretreated biomass. We envisage an industrial-scale process in which transgenic plants provide both fibre and biomass-degrading enzymes for pretreatment and enzymatic hydrolysis, respectively.

Relevance: 100.00%

Abstract:

This study reports on an original concept of additive manufacturing for the fabrication of tissue-engineered constructs (TEC), offering the possibility of concomitantly manufacturing a customized scaffold and a bioreactor chamber of any size and shape. As a proof of concept towards the development of anatomically relevant TECs, this concept was used to design and fabricate a highly porous sheep tibia scaffold around which a bioreactor chamber of similar shape was simultaneously built. The morphology of the bioreactor/scaffold device was investigated by micro-computed tomography and scanning electron microscopy, confirming the porous architecture of the sheep tibiae as opposed to the non-porous nature of the bioreactor chamber. Additionally, this study demonstrates that both the shape and the inner architecture of the device can significantly impact the perfusion of fluid within the scaffold architecture. Indeed, fluid flow modelling revealed that this is of significant importance for controlling the nutrient flow pattern within the scaffold and the bioreactor chamber, avoiding the formation of stagnant flow regions detrimental to in vitro tissue development. The bioreactor/scaffold device was dynamically seeded with human primary osteoblasts and cultured under bi-directional perfusion for two and six weeks. Primary human osteoblasts were observed to be homogeneously distributed throughout the scaffold and were viable for the six-week culture period. This work demonstrates a novel application of additive manufacturing in the development of scaffolds and bioreactors. Given the intrinsic flexibility of the additive manufacturing technology platform developed, more complex culture systems can be fabricated, contributing to advances in customized and patient-specific tissue engineering strategies for a wide range of applications.

Relevance: 100.00%

Abstract:

Climate change is an important environmental problem and one whose economic implications are many and varied. This paper starts from the presumption that mitigation of greenhouse gases is a necessary policy that has to be designed in a cost-effective way. It is well known that market instruments are the best option for cost effectiveness. But the discussion regarding which of the various market instruments should be used, how they may interact and what combinations of policies should be implemented is still open and very lively. In this paper we propose a combination of instruments: the marketable emission permits already in place in Europe for major economic sectors, and a CO2 tax for economic sectors not included in the emission permit scheme. The study uses an applied general equilibrium model for the Spanish economy to compute the results obtained with the proposed new mix of instruments. As the combination of the market for emission permits and the CO2 tax admits different possibilities, depending on how the mitigation is distributed among the economic sectors, we concentrate on four: cost-effective, egalitarian, proportional-to-emissions and proportional-to-output distributions. Other alternatives to the CO2 tax are also analysed (taxes on energy, on oil and on electricity). Our findings suggest that careful, well-designed policies are needed, as any deviation imposes significant additional costs that increase more than proportionally with the level of emissions reduction targeted by the EU.

Relevance: 100.00%

Abstract:

QENS/WINS 2014 - 11th International Conference on Quasielastic Neutron Scattering and 6th International Workshop on Inelastic Neutron Spectrometers / edited by: Frick, B.; Koza, M.M.; Boehm, M.; Mutka, H.

Relevance: 100.00%

Abstract:

In this paper we present a novel approach to multispectral image contextual classification by combining iterative combinatorial optimization algorithms. The pixel-wise decision rule is defined using a Bayesian approach to combine two MRF models: a Gaussian Markov Random Field (GMRF) for the observations (likelihood) and a Potts model for the a priori knowledge, to regularize the solution in the presence of noisy data. Hence, the classification problem is stated in a Maximum a Posteriori (MAP) framework. In order to approximate the MAP solution, we apply several combinatorial optimization methods with multiple simultaneous initializations, making the solution less sensitive to the initial conditions and reducing both computational cost and time in comparison to Simulated Annealing, which is often infeasible in real image processing applications. Markov Random Field model parameters are estimated by a Maximum Pseudo-Likelihood (MPL) approach, avoiding manual adjustment of the regularization parameters. Asymptotic evaluations assess the accuracy of the proposed parameter estimation procedure. To test and evaluate the proposed classification method, we adopt metrics for quantitative performance assessment (Cohen's Kappa coefficient), allowing a robust and accurate statistical analysis. The obtained results clearly show that combining sub-optimal contextual algorithms significantly improves the classification performance, indicating the effectiveness of the proposed methodology. (C) 2010 Elsevier B.V. All rights reserved.
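To illustrate the kind of sub-optimal combinatorial optimizer involved, here is a minimal Iterated Conditional Modes (ICM) sweep for a Gaussian likelihood regularized by a Potts prior. ICM is our illustrative choice of method (a classic alternative to Simulated Annealing), the class means and noise parameters are assumed known, and no MPL parameter estimation is shown.

```python
def icm_step(image, labels, means, sigma, beta):
    """One synchronous ICM sweep: each pixel takes the label that
    minimises the Gaussian data term plus a Potts penalty (beta) for
    each 4-neighbour it would disagree with."""
    h, w = len(image), len(image[0])
    new = [row[:] for row in labels]
    for i in range(h):
        for j in range(w):
            best, best_cost = None, float('inf')
            for c, mu in enumerate(means):
                data = (image[i][j] - mu) ** 2 / (2 * sigma ** 2)
                prior = 0.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < h and 0 <= nj < w and labels[ni][nj] != c:
                        prior += beta
                if data + prior < best_cost:
                    best, best_cost = c, data + prior
            new[i][j] = best
    return new
```

The Potts term is what "regularizes the solution in the presence of noisy data": an isolated pixel whose intensity weakly favours the wrong class is outvoted by its neighbourhood.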

Relevance: 100.00%

Abstract:

Different data classification algorithms have been developed and applied in various areas to analyze and extract valuable information and patterns from large datasets with noise and missing values. However, none of them consistently performs well over all datasets. To this end, ensemble methods have been suggested as a promising remedy. This paper proposes a novel hybrid algorithm that combines a multi-objective Genetic Algorithm (GA) with an ensemble classifier. While the ensemble classifier, which consists of a decision tree classifier, an Artificial Neural Network (ANN) classifier and a Support Vector Machine (SVM) classifier, is used as the classification committee, the multi-objective Genetic Algorithm is employed as the feature selector, helping the ensemble classifier improve overall classification accuracy while also identifying the most important features in the dataset of interest. The proposed GA-Ensemble method is tested on three benchmark datasets and compared with each individual classifier as well as with methods based on mutual information theory, bagging and boosting. The results suggest that the GA-Ensemble method outperforms the other algorithms in the comparison and is a useful method for classification and feature selection problems.
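The GA-as-feature-selector idea can be sketched in miniature. This is a heavily simplified, single-objective stand-in for the paper's method: a leave-one-out 1-nearest-neighbour accuracy replaces the DT/ANN/SVM committee as the fitness signal, and all names and parameters are our own assumptions.

```python
import random

def fitness(mask, samples, labels):
    """Leave-one-out accuracy of a 1-NN classifier using only the
    features selected by the bitmask (committee stand-in)."""
    if not any(mask):
        return 0.0
    def dist(a, b):
        return sum((x - y) ** 2 for x, y, m in zip(a, b, mask) if m)
    correct = 0
    for i, s in enumerate(samples):
        j = min((k for k in range(len(samples)) if k != i),
                key=lambda k: dist(s, samples[k]))
        correct += labels[j] == labels[i]
    return correct / len(samples)

def ga_select(samples, labels, n_feats, pop=8, gens=20, seed=0):
    """Toy single-objective GA over feature bitmasks: elitist truncation
    selection, one-point crossover, occasional bit-flip mutation. The
    population is seeded with the full feature set."""
    rng = random.Random(seed)
    population = [[1] * n_feats] + [
        [rng.randint(0, 1) for _ in range(n_feats)] for _ in range(pop - 1)]
    for _ in range(gens):
        population.sort(key=lambda m: fitness(m, samples, labels), reverse=True)
        parents = population[: pop // 2]          # elitism: best half survives
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_feats)       # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                # bit-flip mutation
                k = rng.randrange(n_feats)
                child[k] = 1 - child[k]
            children.append(child)
        population = parents + children
    return max(population, key=lambda m: fitness(m, samples, labels))
```

On data where one feature separates the classes and another is noise, the selected bitmask keeps the informative feature and tends to drop the noisy one, which is the dual benefit (accuracy plus feature identification) the abstract describes.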

Relevance: 100.00%

Abstract:

Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions of the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps served as definition of a distributed network of multisensory candidate regions including superior temporal, ventral occipito-temporal, posterior parietal and prefrontal regions. During an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine out of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and using independent datasets to test hypotheses generated from a data-driven analysis.

Relevance: 100.00%

Abstract:

An aerosol time-of-flight mass spectrometer (ATOFMS) was deployed for the measurement of the size resolved chemical composition of single particles at a site in Cork Harbour, Ireland for three weeks in August 2008. The ATOFMS was co-located with a suite of semi-continuous instrumentation for the measurement of particle number, elemental carbon (EC), organic carbon (OC), sulfate and particulate matter smaller than 2.5 μm in diameter (PM2.5). The temporality of the ambient ATOFMS particle classes was subsequently used in conjunction with the semi-continuous measurements to apportion PM2.5 mass using positive matrix factorisation. The synergy of the single particle classification procedure and positive matrix factorisation allowed for the identification of six factors, corresponding to vehicular traffic, marine, long-range transport, various combustion, domestic solid fuel combustion and shipping traffic with estimated contributions to the measured PM2.5 mass of 23%, 14%, 13%, 11%, 5% and 1.5% respectively. Shipping traffic was found to contribute 18% of the measured particle number (20–600 nm mobility diameter), and thus may have important implications for human health considering the size and composition of ship exhaust particles. The positive matrix factorisation procedure enabled a more refined interpretation of the single particle results by providing source contributions to PM2.5 mass, while the single particle data enabled the identification of additional factors not possible with typical semi-continuous measurements, including local shipping traffic.