305 results for Parameter Optimization
Abstract:
This paper presents a method for fast and accurate determination of parameters relevant to the characterization of capacitive MEMS resonators, such as the quality factor (Q), the resonant frequency (fn), and equivalent-circuit parameters such as the motional capacitance (Cm). In the presence of a parasitic feedthrough capacitor (CF) appearing across the input and output ports, the transmission characteristic is marked by two resonances: series (S) and parallel (P). By using the series and parallel resonances, close approximations of these circuit parameters are obtained without having to first de-embed the resonator motional current, which is typically buried in the feedthrough. While previous methods with the same objective are well known, we show that they are limited to the condition CF ≪ CmQ. In contrast, this work focuses on moderate capacitive feedthrough levels where CF > CmQ, which are more common in MEMS resonators. The method is applied to data obtained from the measured electrical transmission of fabricated SOI MEMS resonators. Parameter values deduced via direct extraction are then compared against those obtained by a full extraction procedure in which de-embedding is first performed, followed by a Lorentzian fit to the data based on the classical transfer function of a generic LRC series resonant circuit. © 2011 Elsevier B.V. All rights reserved.
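For illustration, below is a minimal Python sketch of the textbook relations linking the two resonances of a series R-L-C motional branch shunted by a feedthrough capacitor to the motional parameters. The frequencies, C_F, and Q used are hypothetical values, and the paper's own extraction procedure for moderate feedthrough (CF > CmQ) is more involved than these closed-form relations.

```python
import numpy as np

# Hypothetical measured values (illustrative only).
f_s = 10.06e6      # series-resonance (peak) frequency, Hz
f_p = 10.09e6      # parallel-resonance (notch) frequency, Hz
C_F = 50e-15       # assumed feedthrough capacitance, F
Q   = 8000.0       # quality factor, e.g. from a -3 dB bandwidth estimate

# Textbook relation for a series R-L-C branch shunted by C_F:
# f_p = f_s * sqrt(1 + C_m / C_F)  =>  C_m = C_F * ((f_p / f_s)**2 - 1)
C_m = C_F * ((f_p / f_s) ** 2 - 1.0)

# Remaining motional elements follow from f_s and Q:
# f_s = 1 / (2*pi*sqrt(L_m*C_m)),  Q = 1 / (2*pi*f_s*R_m*C_m)
L_m = 1.0 / ((2.0 * np.pi * f_s) ** 2 * C_m)
R_m = 1.0 / (2.0 * np.pi * f_s * C_m * Q)

print(f"C_m = {C_m*1e15:.3f} fF, L_m = {L_m:.3f} H, R_m = {R_m/1e3:.1f} kOhm")
```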
Abstract:
Sequential Monte Carlo methods, also known as particle methods, are a widely used set of computational tools for inference in non-linear non-Gaussian state-space models. In many applications it may be necessary to compute the sensitivity, or derivative, of the optimal filter with respect to the static parameters of the state-space model; for instance, in order to obtain maximum likelihood estimates of the model parameters of interest, or to compute the optimal controller in an optimal control problem. In Poyiadjis et al. [2011], an original particle algorithm to compute the filter derivative was proposed, and it was shown using numerical examples that the particle estimate was numerically stable in the sense that it did not deteriorate over time. In this paper we substantiate this claim with a detailed theoretical study. $L_p$ bounds and a central limit theorem for this particle approximation of the filter derivative are presented. It is further shown that under mixing conditions these $L_p$ bounds and the asymptotic variance characterized by the central limit theorem are uniformly bounded with respect to the time index. We demonstrate the performance predicted by theory with several numerical examples. We also use the particle approximation of the filter derivative to perform online maximum likelihood parameter estimation for a stochastic volatility model.
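As a rough illustration of the online maximum-likelihood idea mentioned above, the following Python sketch runs a bootstrap particle filter for a simple stochastic volatility model and carries, for each particle, the derivative of its path's log joint density (the simpler $\mathcal{O}(N)$ path-space variant of the filter-derivative estimate); the parameter is nudged by the estimated score increment at each step. The model parameterization, step sizes, and particle count are illustrative, and this is a sketch rather than the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stochastic volatility model (a common toy parameterization, for illustration):
#   x_t = phi * x_{t-1} + sigma * v_t,   v_t ~ N(0, 1)
#   y_t = beta * exp(x_t / 2) * w_t,     w_t ~ N(0, 1)
# phi is estimated online; sigma and beta are assumed known.
sigma, beta = 0.2, 0.7
phi_true = 0.95

# Simulate synthetic observations.
T = 2000
x = np.zeros(T)
for t in range(1, T):
    x[t] = phi_true * x[t - 1] + sigma * rng.normal()
y = beta * np.exp(x / 2.0) * rng.normal(size=T)

def log_g(y_t, x_t):
    """Log observation density: y_t ~ N(0, beta^2 * exp(x_t))."""
    var = beta ** 2 * np.exp(x_t)
    return -0.5 * (np.log(2 * np.pi * var) + y_t ** 2 / var)

N = 1000
particles = rng.normal(0.0, 1.0, size=N)   # crude initial distribution
alpha = np.zeros(N)      # per-particle d/dphi of the log joint along its path
phi = 0.5                # initial parameter guess
score_prev = 0.0

for t in range(1, T):
    # Propagate with the current parameter estimate (bootstrap proposal).
    x_prev = particles
    particles = phi * x_prev + sigma * rng.normal(size=N)

    # Accumulate d/dphi log f_phi(x_t | x_{t-1}); log g does not depend on phi.
    alpha = alpha + (particles - phi * x_prev) * x_prev / sigma ** 2

    # Weight and form the score estimate S_t = sum_i W_i * alpha_i.
    logw = log_g(y[t], particles)
    w = np.exp(logw - logw.max())
    w /= w.sum()
    score = np.sum(w * alpha)

    # Recursive ML step: the score increment S_t - S_{t-1} approximates
    # d/dphi log p(y_t | y_{1:t-1}); use a slowly decreasing step size.
    gamma = 0.1 / t ** 0.6
    phi = float(np.clip(phi + gamma * (score - score_prev), -0.99, 0.99))
    score_prev = score

    # Multinomial resampling; alpha travels with its ancestor particle.
    idx = rng.choice(N, size=N, p=w)
    particles, alpha = particles[idx], alpha[idx]

print(f"online estimate of phi after {T} steps: {phi:.3f} (true value {phi_true})")
```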
Abstract:
Approximate Bayesian computation (ABC) is a popular technique for analysing data under complex models where the likelihood function is intractable. It uses simulation from the model to approximate the likelihood, and this approximate likelihood is then used to construct an approximate posterior. In this paper, we consider methods that estimate the parameters by maximizing the approximate likelihood used in ABC. We give a theoretical analysis of the asymptotic properties of the resulting estimator. In particular, we derive results analogous to the consistency and asymptotic normality of standard maximum likelihood estimation. We also discuss how sequential Monte Carlo methods provide a natural way to implement our likelihood-based ABC procedures.
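A minimal sketch of likelihood-based ABC on a toy Gaussian problem: the approximate likelihood is estimated by kernel-smoothing the discrepancy between simulated and observed summary statistics, and then maximized over a grid (a stand-in for the sequential Monte Carlo implementation the abstract refers to). All model choices and tuning constants below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy problem: observations y_i ~ N(theta, 1); the summary statistic is the
# sample mean.  In real ABC applications the likelihood is intractable;
# here it is tractable, which makes the example easy to sanity-check.
theta_true = 1.5
y_obs = rng.normal(theta_true, 1.0, size=100)
s_obs = y_obs.mean()

def abc_log_likelihood(theta, M=2000, eps=0.05):
    """Monte Carlo estimate of the ABC (kernel-smoothed) log-likelihood at theta."""
    s_sim = rng.normal(theta, 1.0, size=(M, 100)).mean(axis=1)
    # Gaussian kernel with bandwidth eps on the summary-statistic discrepancy.
    k = np.exp(-0.5 * ((s_sim - s_obs) / eps) ** 2) / (eps * np.sqrt(2 * np.pi))
    return np.log(k.mean() + 1e-300)

# Maximize the approximate likelihood over a parameter grid.
grid = np.linspace(0.0, 3.0, 121)
loglik = np.array([abc_log_likelihood(th) for th in grid])
theta_hat = grid[np.argmax(loglik)]
print(f"ABC maximum-likelihood estimate: {theta_hat:.2f} (true value {theta_true})")
```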
An overview of sequential Monte Carlo methods for parameter estimation in general state-space models
Abstract:
Nonlinear non-Gaussian state-space models arise in numerous applications in control and signal processing. Sequential Monte Carlo (SMC) methods, also known as particle filters, are numerical techniques based on importance sampling for solving the optimal state estimation problem. Calibrating the state-space model is an important problem frequently faced by practitioners, and the observed data may be used to estimate the parameters of the model. The aim of this paper is to present a comprehensive overview of the SMC methods that have been proposed for this task, accompanied by a discussion of their advantages and limitations.
Abstract:
Sequential Monte Carlo (SMC) methods are popular computational tools for Bayesian inference in non-linear non-Gaussian state-space models. For this class of models, we propose SMC algorithms to compute the score vector and observed information matrix recursively in time. We propose two different SMC implementations, one with computational complexity $\mathcal{O}(N)$ and the other with complexity $\mathcal{O}(N^{2})$, where $N$ is the number of importance sampling draws. Although cheaper, the performance of the $\mathcal{O}(N)$ method degrades quickly in time, as it inherently relies on the SMC approximation of a sequence of probability distributions whose dimension increases linearly with time. In particular, even under strong mixing assumptions, the variance of the estimates computed with the $\mathcal{O}(N)$ method increases at least quadratically in time. The $\mathcal{O}(N^{2})$ method is a non-standard SMC implementation that does not suffer from this rapid degradation. We then show how both methods can be used to perform batch and recursive parameter estimation.
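To make the $\mathcal{O}(N)$ recursion concrete, here is a sketch for a linear-Gaussian toy model: each particle carries the derivative of its path's log joint density with respect to the transition coefficient, and the weighted average of these terms gives the score estimate. The model, the parameter values probed, and $N$ are illustrative; the variance growth noted in the abstract is the reason the $\mathcal{O}(N^{2})$ variant exists, and it is not sketched here.

```python
import numpy as np

rng = np.random.default_rng(1)

# Linear-Gaussian toy state-space model:
#   x_t = a * x_{t-1} + sigma_x * v_t,   y_t = x_t + sigma_y * w_t
a_true, sigma_x, sigma_y = 0.8, 1.0, 0.5

T = 200
x = np.zeros(T)
for t in range(1, T):
    x[t] = a_true * x[t - 1] + sigma_x * rng.normal()
y = x + sigma_y * rng.normal(size=T)

def particle_score(a, N=2000):
    """O(N) path-space SMC estimate of d/da log p(y_{1:T}) at parameter a."""
    particles = rng.normal(0.0, 1.0, size=N)
    alpha = np.zeros(N)   # d/da log p(x_{0:t}, y_{1:t}) along each particle path
    score = 0.0
    for t in range(1, T):
        x_prev = particles
        particles = a * x_prev + sigma_x * rng.normal(size=N)
        # Transition term d/da log N(x_t; a*x_{t-1}, sigma_x^2);
        # the observation density does not depend on a.
        alpha = alpha + (particles - a * x_prev) * x_prev / sigma_x ** 2
        logw = -0.5 * ((y[t] - particles) / sigma_y) ** 2
        w = np.exp(logw - logw.max())
        w /= w.sum()
        score = np.sum(w * alpha)   # Fisher-identity score estimate at time t
        # Multinomial resampling; alpha travels with its ancestor particle.
        idx = rng.choice(N, size=N, p=w)
        particles, alpha = particles[idx], alpha[idx]
    return score

for a in (0.6, 0.8, 0.95):
    print(f"a = {a:.2f}: estimated score {particle_score(a):+.1f}")
```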
Abstract:
Simulated annealing is a popular method for approaching the solution of a global optimization problem. Existing results on its performance apply to discrete combinatorial optimization where the optimization variables can assume only a finite set of possible values. We introduce a new general formulation of simulated annealing which allows one to guarantee finite-time performance in the optimization of functions of continuous variables. The results hold universally for any optimization problem on a bounded domain and establish a connection between simulated annealing and up-to-date theory of convergence of Markov chain Monte Carlo methods on continuous domains. This work is inspired by the concept of finite-time learning with known accuracy and confidence developed in statistical learning theory.
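For context, a minimal continuous-domain simulated annealing loop on a bounded box, using Gaussian proposals, a Metropolis acceptance rule, and a logarithmic cooling schedule. The objective, proposal scale, and schedule below are illustrative, and the sketch does not reproduce the paper's specific formulation or its finite-time guarantees.

```python
import numpy as np

rng = np.random.default_rng(3)

def objective(z):
    """Rastrigin function on [-5, 5]^2: many local minima, global minimum at 0."""
    return 10 * z.size + np.sum(z ** 2 - 10 * np.cos(2 * np.pi * z))

lo, hi = -5.0, 5.0
z = rng.uniform(lo, hi, size=2)        # random start in the bounded domain
f = objective(z)
best_z, best_f = z.copy(), f

n_iter = 20000
for k in range(1, n_iter + 1):
    temp = 1.0 / np.log(k + 1)         # slow (logarithmic) cooling schedule
    # Gaussian proposal, clipped back into the bounded domain.
    cand = np.clip(z + 0.5 * rng.normal(size=2), lo, hi)
    f_cand = objective(cand)
    # Metropolis acceptance rule at the current temperature.
    if f_cand < f or rng.random() < np.exp(-(f_cand - f) / temp):
        z, f = cand, f_cand
        if f < best_f:
            best_z, best_f = z.copy(), f

print(f"best value found: {best_f:.4f} at z = {best_z.round(3)}")
```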