150 results for PC-algorithm
Abstract:
Changepoint models are widely used to model the heterogeneity of sequential data. We present a novel sequential Monte Carlo (SMC) online Expectation-Maximization (EM) algorithm for estimating the static parameters of such models. The SMC online EM algorithm has a computational cost per time step that is linear in the number of particles, which can be particularly important when the data form a long sequence of observations, since it drastically reduces the computational requirements for implementation. We present an asymptotic analysis of the stability of the SMC estimates used in the online EM algorithm and demonstrate the performance of the scheme on both simulated and real data originating from DNA analysis.
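To make the structure concrete, here is a minimal generic sketch of one SMC online EM step, not the paper's changepoint-specific algorithm: each particle carries a running sufficient statistic updated by stochastic approximation, and the M-step maps the particle average of those statistics to a new parameter estimate. All function arguments (transition, weight, suff, m_step) are hypothetical user-supplied stubs.

```python
import numpy as np

def smc_online_em_step(particles, stats, y, theta, gamma, rng,
                       transition, weight, suff, m_step):
    """One generic SMC online EM step (structural sketch).

    particles: (N, ...) latent-state samples
    stats:     (N, d) per-particle running sufficient statistics
    gamma:     stochastic-approximation step size in (0, 1]
    Cost per time step is linear in the number of particles N.
    """
    n = len(particles)
    # Propagate particles through the (user-supplied) state transition.
    particles = transition(particles, theta, rng)
    # E-step: recursively update the running sufficient statistics.
    stats = (1.0 - gamma) * stats + gamma * suff(particles, y, theta)
    # Weight by observation likelihood and resample (multinomial).
    w = weight(particles, y, theta)
    idx = rng.choice(n, size=n, p=w / w.sum())
    particles, stats = particles[idx], stats[idx]
    # M-step: map the averaged statistics to updated static parameters.
    theta = m_step(stats.mean(axis=0))
    return particles, stats, theta
```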
Abstract:
A control algorithm is presented that addresses the stability issues inherent to the operation of monolithic mode-locked laser diodes. It enables continuous pulse duration tuning without any onset of Q-switching instabilities. The algorithm's performance is demonstrated for two radically different laser diode geometries, with continuous pulse duration tuning achieved from 0.5 ps to 2.2 ps and from 1.2 ps to 10.2 ps, respectively. With practical applications in mind, the algorithm also facilitates control over performance parameters such as output power and wavelength during pulse duration tuning. The developed algorithm enables the user to harness the operational flexibility of such a laser with 'push-button' simplicity.
Abstract:
The extrinsic tensile strength of glass can be determined explicitly if the characteristics of the critical surface flaw are known, or stochastically if the critical flaw characteristics are unknown. This paper makes contributions to both approaches. Firstly, it presents a unified model for determining the strength of glass explicitly, by accounting for both the inert strength limit and the sub-critical crack growth threshold. Secondly, it describes and illustrates the use of a numerical algorithm, based on the stochastic approach, that computes the characteristic tensile strength of float glass by piecewise summation of the surface stresses. The experimental validation and sensitivity analysis reported in this paper show that the proposed computer algorithm provides an accurate and efficient means of determining the characteristic strength of float glass. The algorithm is particularly useful for annealed and thermally treated float glass used in the construction industry. © 2012 Elsevier Ltd.
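For orientation only, here is a minimal sketch of the piecewise-summation idea under a two-parameter Weibull surface-flaw model; the parameter names and the choice of Weibull form are illustrative assumptions, not the paper's unified model.

```python
import math

def failure_probability(stresses, areas, theta, beta):
    """Failure probability of a glass surface by piecewise summation.

    stresses: maximum principal tensile stress (MPa) in each surface patch
    areas:    area (m^2) of each patch
    theta:    Weibull scale (MPa); beta: Weibull shape (both illustrative;
    the paper's model also treats the inert strength limit and the
    sub-critical crack growth threshold, which are omitted here).
    """
    # Each patch contributes area * (sigma/theta)**beta to the risk sum.
    risk = sum(a * (s / theta) ** beta
               for s, a in zip(stresses, areas) if s > 0)
    return 1.0 - math.exp(-risk)

# The characteristic strength is the uniform stress at which the failure
# probability reaches 1 - 1/e (~63.2%): sigma_c = theta * A**(-1/beta).
print(failure_probability([40.0, 55.0, 35.0], [0.5, 0.8, 0.3],
                          theta=60.0, beta=7.0))
```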
Abstract:
Most of the manual labor needed to create the geometric building information model (BIM) of an existing facility is spent converting raw point cloud data (PCD) to a BIM description. Automating this process would drastically reduce the modeling cost. Surface extraction from PCD is a fundamental step in this process. Compactly modeling the redundant points in PCD as a set of planes leads to smaller file sizes and fast interactive visualization on inexpensive hardware. Traditional approaches for smooth surface reconstruction neither explicitly model the sparse scene structure nor significantly exploit the redundancy. This paper proposes a method based on sparsity-inducing optimization to address the planar surface extraction problem. Through sparse optimization, points in the PCD are segmented according to the linear subspaces in which they are embedded. Within each segment, plane models can then be estimated. Experimental results on a typical noisy PCD demonstrate the effectiveness of the algorithm.
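The per-segment plane estimation step can be illustrated with a standard SVD-based least-squares fit; this is a generic building block, not the paper's sparsity-inducing segmentation, which is what actually partitions the PCD into linear subspaces.

```python
import numpy as np

def fit_plane(points):
    """Least-squares plane fit to an (N, 3) array of 3D points.

    Returns (centroid, unit normal). This is only the per-segment
    estimation step; segmentation itself is not shown.
    """
    centroid = points.mean(axis=0)
    # The right singular vector with the smallest singular value of the
    # centered points is the plane normal.
    _, _, vt = np.linalg.svd(points - centroid)
    normal = vt[-1]
    return centroid, normal / np.linalg.norm(normal)

# Example: noisy samples from the plane z = 0.
rng = np.random.default_rng(0)
pts = rng.uniform(-1, 1, size=(500, 3))
pts[:, 2] = 0.01 * rng.standard_normal(500)
c, n = fit_plane(pts)  # n should be close to (0, 0, +/-1)
```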
Abstract:
Engineering changes (ECs) are raised throughout the lifecycle of engineering products. A single change to one component produces knock-on effects on others, necessitating additional changes. This change propagation significantly affects development time and cost and determines the product's success. Predicting and managing such ECs is thus essential to companies. Some prediction tools model change propagation by algorithms, of which a subgroup is numerical. Current numerical change propagation algorithms either do not account for the exclusion of cyclic propagation paths or are based on exhaustive searching methods. This paper presents a new matrix-calculation-based algorithm which can be applied directly to a numerical product model to analyze change propagation and support change prediction. The algorithm applies matrix multiplications to mutations of a given design structure matrix, accounting for the exclusion of self-dependences and cyclic propagation paths, and delivers the same results as the exhaustive search-based Trail Counting algorithm. Despite its factorial time complexity, the algorithm proves advantageous because of its straightforward matrix-based calculations, which avoid exhaustive searching. The algorithm can therefore be implemented in established numerical programs such as Microsoft Excel, which promises wider application of such tools within and across companies, along with better familiarity, usability, practicality, security, and robustness. © 1988-2012 IEEE.
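To make the idea concrete, here is a minimal sketch (not the paper's algorithm) of counting acyclic propagation paths on a DSM by recursively mutating the matrix: deleting a visited component's row and column excludes self-dependences and cyclic paths, mirroring what the exhaustive Trail Counting search enumerates.

```python
import numpy as np

def count_acyclic_paths(dsm, src, dst):
    """Count acyclic change-propagation paths from component src to dst.

    dsm[i, j] == 1 means a change to component j propagates to i.
    Visited components are removed from the matrix, which excludes
    self-dependences and cyclic propagation paths.
    """
    if src == dst:
        return 1
    total = 0
    for k in range(dsm.shape[0]):
        if dsm[k, src]:  # a change to src propagates to k
            # Mutate the DSM: delete src's row and column so that no
            # path can revisit it.
            sub = np.delete(np.delete(dsm, src, axis=0), src, axis=1)
            # Re-index k and dst after the deletion.
            total += count_acyclic_paths(sub, k - (k > src),
                                         dst - (dst > src))
    return total

# Tiny example: a -> b -> c and a -> c give two paths from a to c.
A = np.array([[0, 0, 0],
              [1, 0, 0],
              [1, 1, 0]])
print(count_acyclic_paths(A, 0, 2))  # 2
```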
Abstract:
We present a novel filtering algorithm for tracking multiple clusters of coordinated objects. Based on a Markov chain Monte Carlo (MCMC) mechanism, the new algorithm propagates a discrete approximation of the underlying filtering density. A dynamic Gaussian mixture model is utilized for representing the time-varying clustering structure. This involves point process formulations of typical behavioral moves such as birth and death of clusters as well as merging and splitting. For handling complex, possibly large scale scenarios, the sampling efficiency of the basic MCMC scheme is enhanced via the use of a Metropolis within Gibbs particle refinement step. As the proposed methodology essentially involves random set representations, a new type of estimator, termed the probability hypothesis density surface (PHDS), is derived for computing point estimates. It is further proved that this estimator is optimal in the sense of the mean relative entropy. Finally, the algorithm's performance is assessed and demonstrated in both synthetic and realistic tracking scenarios. © 2012 Elsevier Ltd. All rights reserved.
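For orientation, here is a minimal Metropolis-within-Gibbs update for a single one-dimensional cluster mean, a hedged stand-in for the particle refinement step; the paper's full machinery (birth/death and merge/split moves, random-set representations, the PHDS estimator) is not reproduced.

```python
import numpy as np

def mh_update_mean(points, mu, sigma, prior_mu, prior_sd, step, rng):
    """One Metropolis-within-Gibbs update of a 1-D cluster mean mu given
    its currently assigned points (known noise sigma, Gaussian prior).
    All parameter names are illustrative; this is a generic MH step."""
    prop = mu + step * rng.standard_normal()  # random-walk proposal

    def log_post(m):
        log_lik = -0.5 * np.sum((points - m) ** 2) / sigma ** 2
        log_prior = -0.5 * (m - prior_mu) ** 2 / prior_sd ** 2
        return log_lik + log_prior

    # Accept with probability min(1, posterior ratio).
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        return prop
    return mu

rng = np.random.default_rng(1)
pts = rng.normal(3.0, 1.0, size=100)
mu = 0.0
for _ in range(500):
    mu = mh_update_mean(pts, mu, 1.0, 0.0, 10.0, 0.5, rng)
print(mu)  # drifts toward ~3.0
```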
Abstract:
Language models (LMs) are often constructed by building multiple individual component models that are combined using context independent interpolation weights. By tuning these weights, using either perplexity or discriminative approaches, it is possible to adapt LMs to a particular task. This paper investigates the use of context dependent weighting in both interpolation and test-time adaptation of language models. Depending on the previous word contexts, a discrete history weighting function is used to adjust the contribution from each component model. As this dramatically increases the number of parameters to estimate, robust weight estimation schemes are required. Several approaches are described in this paper. The first approach is based on MAP estimation, where interpolation weights of lower order contexts are used as smoothing priors. The second approach uses training data to ensure robust estimation of LM interpolation weights; this can also serve as a smoothing prior for MAP adaptation. A normalized perplexity metric is proposed to handle the bias of the standard perplexity criterion with respect to corpus size. A range of schemes to combine weight information obtained from training data and test data hypotheses is also proposed to improve robustness during context dependent LM adaptation. In addition, a minimum Bayes' risk (MBR) based discriminative training scheme is proposed. An efficient weighted finite state transducer (WFST) decoding algorithm for context dependent interpolation is also presented. The proposed techniques were evaluated on a state-of-the-art Mandarin Chinese broadcast speech transcription task. Character error rate (CER) reductions of up to 7.3% relative were obtained, along with consistent perplexity improvements. © 2012 Elsevier Ltd. All rights reserved.
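As a rough illustration of context dependent interpolation, here is a toy sketch in which the mixture weights depend on the previous word, with a context independent fallback; it does not reflect the paper's MAP estimation, normalized perplexity, or WFST decoding.

```python
def interpolate(word, history, components, weights):
    """Context dependent linear interpolation of language models.

    components: list of functions p(word, history) -> probability
    weights:    dict mapping a history (here, the previous word) to
                per-component weights summing to one; the entry under
                None is the context independent fallback.
    """
    lam = weights.get(history, weights[None])
    return sum(l * p(word, history) for l, p in zip(lam, components))

# Toy example with two stub "models" and one context dependent entry.
p1 = lambda w, h: 0.1                       # flat unigram-style stub
p2 = lambda w, h: 0.3 if h == "the" else 0.05
weights = {None: [0.5, 0.5], "the": [0.2, 0.8]}
print(interpolate("cat", "the", [p1, p2], weights))  # 0.2*0.1 + 0.8*0.3 = 0.26
```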
Abstract:
This paper presents a preliminary study which describes and evaluates a multi-objective (MO) version of a recently created single-objective (SO) optimization algorithm called the "Alliance Algorithm" (AA). The algorithm is based on the metaphorical idea that several tribes, with certain skills and resource needs, try to conquer an environment for their survival and ally together to improve the likelihood of conquest. The AA has given promising results in the several fields to which it has been applied, so the development of an MO variant (MOAA) is a natural extension. Here the MOAA's performance is compared with two well-known MO algorithms: NSGA-II and SPEA-2. The performance measures chosen for this study are the convergence and diversity metrics, and the benchmark functions chosen for the comparison are from the ZDT and OKA families together with the main classical MO problems. The results show that the three algorithms have similar overall performance: no single algorithm is best on all problems, and the three show a certain complementarity, each offering superior performance on different classes of problem. © 2012 IEEE.
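For reference, a minimal sketch of a convergence metric of the kind used in such comparisons: the mean distance from each obtained solution to the nearest point of a sampled reference Pareto front (lower is better). The exact metric definitions in the paper may differ.

```python
import numpy as np

def convergence_metric(obtained, reference):
    """Mean Euclidean distance from each obtained objective vector to
    the nearest point on a reference Pareto front (lower is better).

    obtained:  (N, M) array of objective vectors from the algorithm
    reference: (K, M) array sampled from the true Pareto-optimal front
    """
    # Pairwise distances, then the minimum over reference points.
    d = np.linalg.norm(obtained[:, None, :] - reference[None, :, :], axis=2)
    return d.min(axis=1).mean()

# Toy check: points lying on the front itself score zero.
front = np.array([[0.0, 1.0], [0.5, 0.5], [1.0, 0.0]])
print(convergence_metric(front, front))  # 0.0
```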