998 results for Boosting Algorithm
Abstract:
We present a new co-clustering problem of images and visual features. The problem involves a set of non-object images in addition to a set of object images and features to be co-clustered. Co-clustering is performed in a way that maximises discrimination of object images from non-object images, thus emphasizing discriminative features. This provides a way of obtaining perceptual joint-clusters of object images and features. We tackle the problem by simultaneously boosting multiple strong classifiers which compete for images by their expertise. Each boosting classifier is an aggregation of weak-learners, i.e. simple visual features. The obtained classifiers are useful for object detection tasks which exhibit multimodalities, e.g. multi-category and multi-view object detection tasks. Experiments on a set of pedestrian images and a face data set demonstrate that the method yields intuitive image clusters with associated features and is much superior to conventional boosting classifiers in object detection tasks.
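As a rough illustration of the competition idea only (not the paper's algorithm), the sketch below alternates between re-fitting one boosted classifier per cluster against a shared non-object set and reassigning each object image to the classifier that scores it highest; the data, the number of clusters and the AdaBoost configuration are all placeholders.

```python
# Toy sketch of competing boosted classifiers: each strong classifier "claims"
# the object images it scores highest, then is re-trained on its claimed images
# versus the shared non-object set. Data and settings are hypothetical.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

rng = np.random.default_rng(0)
X_obj = rng.normal(size=(200, 16))               # object-image features (placeholder)
X_bg = rng.normal(2.0, 1.0, size=(200, 16))      # non-object image features
K = 2                                            # number of competing classifiers

assign = rng.integers(0, K, size=len(X_obj))     # random initial partition
for _ in range(5):                               # alternate fit / reassign
    clfs = []
    for k in range(K):
        idx = np.flatnonzero(assign == k)
        if idx.size < 2:                         # guard: re-seed a drained cluster
            idx = rng.choice(len(X_obj), 5, replace=False)
            assign[idx] = k
        Xk = np.vstack([X_obj[idx], X_bg])
        yk = np.r_[np.ones(idx.size), np.zeros(len(X_bg))]
        clfs.append(AdaBoostClassifier(n_estimators=50).fit(Xk, yk))
    scores = np.column_stack([c.decision_function(X_obj) for c in clfs])
    assign = scores.argmax(axis=1)               # each image goes to its "expert"
print(np.bincount(assign, minlength=K))          # resulting cluster sizes
```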
Abstract:
The extrinsic tensile strength of glass can be determined explicitly if the characteristics of the critical surface flaw are known, or stochastically if the critical flaw characteristics are unknown. This paper makes contributions to both of these approaches. Firstly, it presents a unified model for determining the strength of glass explicitly, by accounting for both the inert strength limit and the sub-critical crack growth threshold. Secondly, it describes and illustrates the use of a numerical algorithm, based on the stochastic approach, that computes the characteristic tensile strength of float glass by piecewise summation of the surface stresses. The experimental validation and sensitivity analysis reported in this paper show that the proposed computer algorithm provides an accurate and efficient means of determining the characteristic strength of float glass. The algorithm is particularly useful for annealed and thermally treated float glass used in the construction industry. © 2012 Elsevier Ltd.
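The piecewise-summation idea can be illustrated with a Weibull-type surface-flaw model: sum sigma_i^m * A_i over tensile surface elements and map the result to a failure probability. The sketch below uses hypothetical parameters and a made-up stress field; it is not the paper's calibrated algorithm.

```python
# Sketch of piecewise summation of surface stresses under a Weibull-type
# failure model (hypothetical parameters, not the paper's calibrated values).
import numpy as np

m, k0 = 7.0, 2e-53                          # hypothetical Weibull modulus and scale
areas = np.full(100, 1e-3)                  # surface element areas [m^2]
stresses = np.linspace(10e6, 60e6, 100)     # tensile stress in each element [Pa]

# Piecewise sum: only tensile elements contribute to the failure risk integral
risk = np.sum(np.clip(stresses, 0.0, None) ** m * areas)
p_failure = 1.0 - np.exp(-k0 * risk)
print(f"failure probability ~ {p_failure:.2f}")
```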
Abstract:
Most of the manual labor needed to create the geometric building information model (BIM) of an existing facility is spent converting raw point cloud data (PCD) to a BIM description. Automating this process would drastically reduce the modeling cost. Surface extraction from PCD is a fundamental step in this process. Compact modeling of redundant points in PCD as a set of planes leads to smaller file size and fast interactive visualization on cheap hardware. Traditional approaches for smooth surface reconstruction do not explicitly model the sparse scene structure or significantly exploit the redundancy. This paper proposes a method based on sparsity-inducing optimization to address the planar surface extraction problem. Through sparse optimization, points in PCD are segmented according to their embedded linear subspaces. Within each segmented part, plane models can be estimated. Experimental results on a typical noisy PCD demonstrate the effectiveness of the algorithm.
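Once a segment has been isolated, plane estimation within it is a standard least-squares fit; a minimal sketch is given below. The sparse-optimization segmentation itself, which is the paper's contribution, is assumed to have already been done.

```python
# Least-squares plane fit (centroid + SVD) for one segmented group of points;
# the segmentation step from the paper is assumed given, and the data are synthetic.
import numpy as np

def fit_plane(points: np.ndarray):
    """Return (normal, d) of the best-fit plane n.x + d = 0 for an (N, 3) array."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid, full_matrices=False)
    normal = vt[-1]                          # direction of least variance
    return normal, -normal @ centroid

rng = np.random.default_rng(1)
xy = rng.uniform(-1, 1, size=(500, 2))
pts = np.c_[xy, 0.2 * xy[:, 0] - 0.1 * xy[:, 1] + 0.01 * rng.normal(size=500)]
n, d = fit_plane(pts)
print("plane normal ~", np.round(n, 3), " offset ~", round(float(d), 3))
```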
Abstract:
Engineering changes (ECs) are raised throughout the lifecycle of engineering products. A single change to one component produces knock-on effects on others, necessitating additional changes. This change propagation significantly affects development time and cost and determines the product's success. Predicting and managing such ECs is thus essential to companies. Some prediction tools model change propagation by algorithms, a subgroup of which is numerical. Current numerical change propagation algorithms either do not account for the exclusion of cyclic propagation paths or are based on exhaustive searching methods. This paper presents a new matrix-calculation-based algorithm which can be applied directly to a numerical product model to analyze change propagation and support change prediction. The algorithm applies matrix multiplications to mutations of a given design structure matrix, accounting for the exclusion of self-dependences and cyclic propagation paths, and delivers the same results as the exhaustive search-based Trail Counting algorithm. Despite its factorial time complexity, the algorithm proves advantageous because of its straightforward matrix-based calculations, which avoid exhaustive searching. The algorithm can therefore be implemented in established numerical programs such as Microsoft Excel, which promises wider application of the tools within and across companies along with better familiarity, usability, practicality, security, and robustness. © 1988-2012 IEEE.
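Since the matrix-based algorithm is stated to reproduce the results of the exhaustive Trail Counting algorithm, the sketch below shows trail counting on a small, hypothetical DSM: it enumerates propagation paths that exclude self-dependences and cycles.

```python
# Exhaustive trail counting on a small design structure matrix (DSM): count
# cycle-free propagation paths between components. The DSM is hypothetical; the
# paper's matrix-based algorithm reproduces such counts without enumeration.
import numpy as np

dsm = np.array([[0, 1, 0, 1],    # dsm[i, j] = 1 if a change to j propagates to i
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [0, 0, 1, 0]])

def count_trails(src: int, dst: int, visited=frozenset()) -> int:
    """Number of propagation paths src -> dst that revisit no component."""
    if src == dst:
        return 1
    total = 0
    for nxt in np.flatnonzero(dsm[:, src]):      # components affected by src
        if nxt not in visited:
            total += count_trails(int(nxt), dst, visited | {src})
    return total

print(count_trails(0, 3))   # propagation paths from component 0 to component 3
```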
Abstract:
We present a novel filtering algorithm for tracking multiple clusters of coordinated objects. Based on a Markov chain Monte Carlo (MCMC) mechanism, the new algorithm propagates a discrete approximation of the underlying filtering density. A dynamic Gaussian mixture model is utilized for representing the time-varying clustering structure. This involves point process formulations of typical behavioral moves such as birth and death of clusters as well as merging and splitting. For handling complex, possibly large scale scenarios, the sampling efficiency of the basic MCMC scheme is enhanced via the use of a Metropolis within Gibbs particle refinement step. As the proposed methodology essentially involves random set representations, a new type of estimator, termed the probability hypothesis density surface (PHDS), is derived for computing point estimates. It is further proved that this estimator is optimal in the sense of the mean relative entropy. Finally, the algorithm's performance is assessed and demonstrated in both synthetic and realistic tracking scenarios. © 2012 Elsevier Ltd. All rights reserved.
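As a toy stand-in for the Metropolis-within-Gibbs refinement step, the sketch below performs a random-walk Metropolis update of a single cluster mean under a Gaussian likelihood and prior; the cluster dynamics, birth/death/merge/split moves and the PHDS estimator are not reproduced, and all parameters are hypothetical.

```python
# Random-walk Metropolis update of one cluster mean with a Gaussian likelihood
# and Gaussian prior -- a toy illustration of the kind of MCMC move used for
# particle refinement. All parameters and data are hypothetical.
import numpy as np

rng = np.random.default_rng(2)
obs = rng.normal([3.0, -1.0], 0.5, size=(40, 2))      # observations of one cluster
prior_mean, prior_sd, obs_sd, step = np.zeros(2), 10.0, 0.5, 0.2

def log_post(mu):
    return (-0.5 * np.sum((obs - mu) ** 2) / obs_sd**2
            - 0.5 * np.sum((mu - prior_mean) ** 2) / prior_sd**2)

mu = np.zeros(2)
for _ in range(2000):
    prop = mu + step * rng.normal(size=2)
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                                     # accept the proposed mean
print("posterior mean estimate ~", np.round(mu, 2))
```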
Abstract:
This paper presents a preliminary study which describes and evaluates a multi-objective (MO) version of a recently created single-objective (SO) optimization algorithm called the "Alliance Algorithm" (AA). The algorithm is based on the metaphorical idea that several tribes, with certain skills and resource needs, try to conquer an environment for their survival and ally together to improve the likelihood of conquest. The AA has given promising results in several fields to which it has been applied, so the development of an MO variant (MOAA) is a natural extension. Here the MOAA's performance is compared with two well-known MO algorithms: NSGA-II and SPEA-2. The performance measures chosen for this study are the convergence and diversity metrics. The benchmark functions chosen for the comparison are from the ZDT and OKA families together with the main classical MO problems. The results show that the three algorithms have similar overall performance. It is therefore not possible to identify a single best algorithm for all the problems; the three algorithms show a certain complementarity because they offer superior performance on different classes of problems. © 2012 IEEE.
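For concreteness, the sketch below implements the ZDT1 benchmark objectives and a generational-distance-style convergence metric; the optimizers themselves (MOAA, NSGA-II, SPEA-2) are not reproduced.

```python
# ZDT1 benchmark objectives and a generational-distance-style convergence metric;
# evaluated here on a random population only, as a placeholder for an optimizer.
import numpy as np

def zdt1(x: np.ndarray):
    """Two ZDT1 objectives for a decision vector x in [0, 1]^n."""
    f1 = x[0]
    g = 1.0 + 9.0 * x[1:].mean()
    return np.array([f1, g * (1.0 - np.sqrt(f1 / g))])

def generational_distance(front: np.ndarray, true_front: np.ndarray) -> float:
    """Mean distance from each obtained point to its nearest true Pareto point."""
    d = np.linalg.norm(front[:, None, :] - true_front[None, :, :], axis=2)
    return float(d.min(axis=1).mean())

rng = np.random.default_rng(3)
pop = rng.uniform(size=(50, 30))                     # random candidate solutions
front = np.array([zdt1(x) for x in pop])
f1 = np.linspace(0, 1, 200)                          # ZDT1's true Pareto front
true_front = np.c_[f1, 1.0 - np.sqrt(f1)]
print("GD of a random population ~", round(generational_distance(front, true_front), 3))
```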
Abstract:
In this paper we formulate the nonnegative matrix factorisation (NMF) problem as a maximum likelihood estimation problem for hidden Markov models and propose online expectation-maximisation (EM) algorithms to estimate the NMF and the other unknown static parameters. We also propose a sequential Monte Carlo approximation of our online EM algorithm. We show the performance of the proposed method with two numerical examples. © 2012 IFAC.
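As a point of reference only, the sketch below is the classical batch multiplicative-update NMF with a Euclidean cost, not the paper's online EM or sequential Monte Carlo scheme; the data matrix and rank are placeholders.

```python
# Classical batch multiplicative-update NMF (Euclidean cost) as a reference point;
# the paper's online EM / SMC formulation is not reproduced here.
import numpy as np

rng = np.random.default_rng(4)
V = rng.random((60, 40))          # nonnegative data matrix (placeholder)
r = 5                             # factorisation rank (assumed)
W, H = rng.random((60, r)), rng.random((r, 40))
eps = 1e-9                        # avoids division by zero

for _ in range(200):
    H *= (W.T @ V) / (W.T @ W @ H + eps)
    W *= (V @ H.T) / (W @ H @ H.T + eps)

print("reconstruction error:", round(float(np.linalg.norm(V - W @ H)), 3))
```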
Abstract:
In this paper, we present an expectation-maximisation (EM) algorithm for maximum likelihood estimation in multiple target tracking (MTT) models with Gaussian linear state-space dynamics. We show that the estimation of sufficient statistics for EM in a single Gaussian linear state-space model can be extended to the MTT case, together with a Monte Carlo approximation for inference of the unknown associations of targets. The stochastic approximation EM algorithm presented here can be used with any Monte Carlo method developed for tracking in MTT models, such as Markov chain Monte Carlo and sequential Monte Carlo methods. We demonstrate the performance of the algorithm with a simulation. © 2012 ISIF (International Society of Information Fusion).
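In the Gaussian linear case the EM sufficient statistics come from Kalman filtering and smoothing; the sketch below shows a minimal single-target Kalman filter step, with the data association assumed known and hypothetical model matrices.

```python
# Minimal Kalman filter for a single target with known association -- the
# Gaussian linear building block whose smoothed moments supply EM sufficient
# statistics. Model matrices and data are hypothetical.
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity state transition
H = np.array([[1.0, 0.0]])               # position-only observation
Q, R = 0.01 * np.eye(2), np.array([[0.25]])

def kalman_step(m, P, y):
    """One predict/update cycle; returns the filtered mean and covariance."""
    m_pred, P_pred = F @ m, F @ P @ F.T + Q
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    m_new = m_pred + K @ (y - H @ m_pred)
    P_new = (np.eye(2) - K @ H) @ P_pred
    return m_new, P_new

rng = np.random.default_rng(5)
m, P = np.zeros(2), np.eye(2)
for t in range(20):
    y = np.array([0.9 * t + 0.5 * rng.normal()])     # noisy position measurement
    m, P = kalman_step(m, P, y)
print("filtered state ~", np.round(m, 2))
```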
Abstract:
We live in an era of abundant data. This has necessitated the development of new and innovative statistical algorithms to get the most from experimental data. For example, faster algorithms make practical the analysis of larger genomic data sets, allowing us to extend the utility of cutting-edge statistical methods. We present a randomised algorithm that accelerates the clustering of time series data using the Bayesian Hierarchical Clustering (BHC) statistical method. BHC is a general method for clustering any discretely sampled time series data; in this paper we focus on a particular application to microarray gene expression data. We define and analyse the randomised algorithm, before presenting results on both synthetic and real biological data sets. We show that the randomised algorithm leads to substantial gains in speed with minimal loss in clustering quality. The randomised time series BHC algorithm is available as part of the R package BHC, which can be downloaded from Bioconductor (version 2.10 and above) via http://bioconductor.org/packages/2.10/bioc/html/BHC.html. We have also made available a set of R scripts which can be used to reproduce the analyses carried out in this paper; these are available from https://sites.google.com/site/randomisedbhc/.
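The package itself is written in R; purely as a generic illustration of clustering short expression time series (not the BHC model or its randomised variant), the Python sketch below applies agglomerative clustering to synthetic profiles.

```python
# Generic illustration only (not BHC): agglomerative clustering of synthetic
# gene-expression time series with scipy. Data and linkage choice are placeholders.
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(6)
t = np.linspace(0, 1, 12)                              # 12 time points
up = np.sin(2 * np.pi * t)                             # two underlying profiles
down = -up
profiles = np.vstack([up + 0.1 * rng.normal(size=(10, 12)),
                      down + 0.1 * rng.normal(size=(10, 12))])  # 20 "genes"

Z = linkage(profiles, method="average", metric="correlation")
labels = fcluster(Z, t=2, criterion="maxclust")        # cut the tree into 2 clusters
print(labels)
```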
Abstract:
We study the global behaviour of a Newton algorithm on the Grassmann manifold for invariant subspace computation. It is shown that the basins of attraction of the invariant subspaces may collapse in the case of small eigenvalue gaps. A Levenberg-Marquardt-like modification of the algorithm with low numerical cost is proposed. A simple strategy for choosing the parameter is shown to dramatically enlarge the basins of attraction of the invariant subspaces while preserving the fast local convergence.
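For a symmetric matrix, one Newton step on the Grassmann manifold reduces to a Sylvester equation in the orthogonal complement of the current subspace; the sketch below implements that step, and the optional mu*I shift is only an assumed Levenberg-Marquardt-style damping, not necessarily the paper's exact modification.

```python
# One Newton step on the Grassmann manifold for a p-dimensional invariant
# subspace of a symmetric matrix A. The mu*I shift is an assumed
# Levenberg-Marquardt-style damping for illustration only.
import numpy as np
from scipy.linalg import solve_sylvester

def grassmann_newton_step(A, Y, mu=0.0):
    """Y: n x p orthonormal basis; returns an updated orthonormal basis."""
    n, p = Y.shape
    Q = np.linalg.qr(Y, mode="complete")[0]
    Y_perp = Q[:, p:]                        # basis of the orthogonal complement
    B = Y.T @ A @ Y
    A_perp = Y_perp.T @ A @ Y_perp
    C = Y_perp.T @ A @ Y
    # Newton equation restricted to the complement: (A_perp + mu*I) K - K B = -C
    K = solve_sylvester(A_perp + mu * np.eye(n - p), -B, -C)
    Y_new, _ = np.linalg.qr(Y + Y_perp @ K)  # retract back onto the manifold
    return Y_new

rng = np.random.default_rng(7)
M = rng.normal(size=(8, 8)); A = (M + M.T) / 2
Y = np.linalg.qr(rng.normal(size=(8, 2)))[0]
for _ in range(8):
    Y = grassmann_newton_step(A, Y)
res = np.linalg.norm(A @ Y - Y @ (Y.T @ A @ Y))
print("invariant-subspace residual ~", round(float(res), 8))
```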
Abstract:
A novel smoke sensor was used to realize smoke feedback control on a diesel engine. The controller design, based on a combination of a PI control algorithm and engine performance optimization, is described. Experimental results demonstrate how the control system behaves to meet both the speed and smoke requirements during engine transients.
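A minimal discrete-time PI loop on a toy first-order speed model is sketched below; the gains, plant and setpoint are hypothetical, and the paper's smoke-limiting logic is not reproduced.

```python
# Minimal discrete-time PI speed controller on a toy first-order plant --
# gains, plant model and setpoint are hypothetical; the smoke-feedback
# limiting logic of the paper is not shown.
dt, kp, ki = 0.01, 0.8, 2.0
setpoint, speed, integral = 1500.0, 1200.0, 0.0   # rpm

for step in range(500):                           # simulate 5 seconds
    error = setpoint - speed
    integral += error * dt
    fuel = kp * error + ki * integral             # PI control action
    speed += dt * (-0.5 * speed + 0.6 * fuel)     # toy engine-speed dynamics
print(f"speed after 5 s ~ {speed:.0f} rpm")
```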