981 results for Particle Filter
Abstract:
Dissertation submitted for the degree of Master in Electrotechnical Engineering, branch of Automation and Industrial Electronics
Abstract:
In this paper we introduce a formation control loop that maximizes the performance of the cooperative perception of a tracked target by a team of mobile robots, while maintaining the team in a formation whose geometry is dynamically adjusted as a function of the quality of the team's perception of the target. In the formation control loop, the controller module is a distributed non-linear model predictive controller, and the estimator module fuses local estimates of the target state obtained by a particle filter at each robot. The two modules and their integration are described in detail, including a real-time database associated with a wireless communication protocol that facilitates the exchange of state data while reducing collisions among team members. Simulation and real-robot results for indoor and outdoor teams of different robots are presented. The results highlight how our method successfully enables a team of homogeneous robots to minimize the total uncertainty of the tracked target's cooperative estimate while complying with performance criteria such as keeping a pre-set distance between the teammates and the target and avoiding collisions with teammates and/or surrounding obstacles.
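The per-robot estimator described above is a particle filter over the target state. As a minimal sketch of the generic predict-weight-resample cycle such a filter runs (a scalar toy model with illustrative noise parameters, not the authors' implementation):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation,
                         process_std=0.5, obs_std=1.0):
    """One predict-weight-resample cycle of a bootstrap particle filter
    for a scalar target state under a random-walk motion model."""
    # Predict: propagate each particle through the motion model.
    particles = particles + rng.normal(0.0, process_std, size=particles.shape)
    # Weight: Gaussian likelihood of the observation given each particle.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    # Resample when the effective sample size falls below half the particles.
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights

# Track a stationary target at x = 3 from 50 noisy observations.
particles = rng.normal(0.0, 5.0, size=1000)
weights = np.full(1000, 1.0 / 1000)
for obs in rng.normal(3.0, 1.0, size=50):
    particles, weights = particle_filter_step(particles, weights, obs)
estimate = np.sum(weights * particles)
```

In the paper's setting each robot runs such a filter locally and the fusion module combines the resulting weighted estimates across the team.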
Abstract:
Identification and control of non-linear dynamical systems are challenging problems for control engineers. The topic is equally relevant in communication, weather prediction, biomedical systems and even social systems, where nonlinearity is an integral part of the system behaviour. Most real-world systems are nonlinear in nature, and nonlinear system identification/modelling has wide applications. The basic approach in analysing nonlinear systems is to build a model from known behaviour manifest in the form of the system output. The problem of modelling boils down to computing a suitably parameterized model representing the process. The parameters of the model are adjusted to optimize a performance function based on the error between the given process output and the identified process/model output. While linear system identification is well established with many classical approaches, most of those methods cannot be directly applied to nonlinear system identification. The problem becomes more complex if the system is completely unknown and only the output time series is available; the blind recognition problem is the direct consequence of such a situation, and the thesis concentrates on such problems. The capability of artificial neural networks to approximate many nonlinear input-output maps makes them predominantly suitable for building a function for the identification of nonlinear systems where only the time series is available. The literature is rich with a variety of algorithms to train the neural network model. A comprehensive study of the computation of the model parameters using the different algorithms, together with a comparison among them to choose the best technique, is still a demanding requirement of practical system designers and is not available in a concise form in the literature. The thesis is thus an attempt to develop and evaluate some of the well-known algorithms and to propose some new techniques in the context of blind recognition of nonlinear systems. It also attempts to establish the relative merits and demerits of the different approaches; comprehensiveness is achieved by utilizing the benefits of well-known evaluation techniques from statistics. The study concludes by providing the results of implementing the currently available, modified and newly introduced techniques for nonlinear blind system modelling, followed by a comparison of their performance. It is expected that such a comprehensive study and comparison can be of great relevance in many fields, including chemical, electrical, biological, financial and weather data analysis. Further, the results reported would be of immense help to practical system designers and analysts in selecting the most appropriate method, based on the goodness of the model, for the particular context.
Abstract:
New ways of combining observations with numerical models are discussed, in which the size of the state space can be very large and the model can be highly nonlinear. The observations of the system can also be related to the model variables in highly nonlinear ways, making this data-assimilation (or inverse) problem highly nonlinear. First we discuss the connection between data assimilation and inverse problems, including regularization. We explore the choice of proposal density in a Particle Filter and show how the 'curse of dimensionality' might be beaten. In the standard Particle Filter, ensembles of model runs are propagated forward in time until observations are encountered, rendering it a pure Monte Carlo method. In large-dimensional systems this is very inefficient, and very large numbers of model runs are needed to solve the data-assimilation problem realistically. In our approach we steer all model runs towards the observations, resulting in a much more efficient method. By further 'ensuring almost equal weight' we avoid performing model runs that are useless in the end. Results are shown for the 40- and 1000-dimensional Lorenz 1995 model.
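The steering idea above amounts to choosing a proposal density that nudges each model run towards the observation and then correcting the importance weights by the likelihood-times-transition over proposal ratio. For a scalar linear-Gaussian step this "optimal" proposal is available in closed form; a sketch with illustrative parameters (not the authors' system):

```python
import numpy as np

rng = np.random.default_rng(1)

def gauss_pdf(x, mean, std):
    return np.exp(-0.5 * ((x - mean) / std) ** 2) / (std * np.sqrt(2.0 * np.pi))

def steered_step(particles, obs, q_model=1.0, r_obs=0.5):
    """Move particles with a proposal nudged towards the observation and
    correct the weights by likelihood * transition / proposal."""
    var_p, var_o = q_model ** 2, r_obs ** 2
    gain = var_p / (var_p + var_o)                 # how strongly we steer
    prop_mean = particles + gain * (obs - particles)
    prop_std = np.sqrt(var_p * var_o / (var_p + var_o))
    new = prop_mean + rng.normal(0.0, prop_std, size=particles.shape)
    # Importance weight: w ∝ p(y | x') p(x' | x) / q(x' | x, y).
    w = (gauss_pdf(obs, new, r_obs) * gauss_pdf(new, particles, q_model)
         / gauss_pdf(new, prop_mean, prop_std))
    return new, w / w.sum()

particles = rng.normal(0.0, 3.0, size=500)   # diffuse prior ensemble
particles, weights = steered_step(particles, obs=2.0)
estimate = np.sum(weights * particles)
ess = 1.0 / np.sum(weights ** 2)             # effective sample size
```

Because every particle is pulled into the high-likelihood region, far fewer runs are wasted than with the blind transition proposal of the standard filter.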
Abstract:
Non-Gaussian/non-linear data assimilation is becoming an increasingly important area of research in the Geosciences as the resolution and non-linearity of models are increased and more and more non-linear observation operators are being used. In this study, we look at the effect of relaxing the assumption of a Gaussian prior on the impact of observations within the data assimilation system. Three different measures of observation impact are studied: the sensitivity of the posterior mean to the observations, mutual information and relative entropy. The sensitivity of the posterior mean is derived analytically when the prior is modelled by a simplified Gaussian mixture and the observation errors are Gaussian. It is found that the sensitivity is a strong function of the value of the observation and proportional to the posterior variance. Similarly, relative entropy is found to be a strong function of the value of the observation. However, the errors in estimating these two measures using a Gaussian approximation to the prior can differ significantly. This hampers conclusions about the effect of the non-Gaussian prior on observation impact. Mutual information does not depend on the value of the observation and is seen to be close to its Gaussian approximation. These findings are illustrated with the particle filter applied to the Lorenz '63 system. This article is concluded with a discussion of the appropriateness of these measures of observation impact for different situations.
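For intuition, the three impact measures compared above have closed forms in the scalar Gaussian case: the posterior-mean sensitivity is the Kalman gain, and mutual information and relative entropy follow the standard Gaussian entropy and KL expressions. A sketch (the helper name is ours, not the paper's):

```python
import numpy as np

def gaussian_observation_impact(var_prior, var_obs, innovation):
    """Scalar-Gaussian forms of the three observation-impact measures:
    sensitivity of the posterior mean (the Kalman gain), mutual
    information, and relative entropy (KL of posterior from prior)."""
    gain = var_prior / (var_prior + var_obs)        # d E[x | y] / d y
    var_post = (1.0 - gain) * var_prior
    # Mutual information: expected entropy reduction, independent of y.
    mutual_info = 0.5 * np.log(var_prior / var_post)
    # Relative entropy depends on how far the observation moves the mean.
    mean_shift = gain * innovation
    rel_entropy = 0.5 * (np.log(var_prior / var_post) + var_post / var_prior
                         + mean_shift ** 2 / var_prior - 1.0)
    return gain, mutual_info, rel_entropy
```

This reproduces the qualitative findings quoted above: mutual information ignores the observed value, while relative entropy grows with the innovation.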
Abstract:
The use of Bayesian inference in the estimation of time-frequency representations has, thus far, been limited to offline analysis of signals, using a smoothing-spline-based model of the time-frequency plane. In this paper we introduce a new framework that allows the routine use of Bayesian inference for online estimation of the time-varying spectral density of a locally stationary Gaussian process. The core of our approach is the use of a likelihood inspired by a local Whittle approximation. This choice, along with the use of a recursive algorithm for non-parametric estimation of the local spectral density, permits the use of a particle filter for estimating the time-varying spectral density online. We provide demonstrations of the algorithm through the tracking of chirps and the analysis of musical data.
Abstract:
A potential problem with the Ensemble Kalman Filter is the implicit Gaussian assumption at analysis times. Here we explore the performance of a recently proposed fully nonlinear particle filter, in which the Gaussian assumption is not made, on a high-dimensional but simplified ocean model. The model simulates the evolution of the vorticity field in time, described by the barotropic vorticity equation, in a highly nonlinear flow regime. While common knowledge holds that particle filters are inefficient and need large numbers of model runs to avoid degeneracy, the newly developed particle filter needs only on the order of 10-100 particles on large-scale problems. The crucial new ingredient is that the proposal density can be used not only to ensure that all particles end up in high-probability regions of state space as defined by the observations, but also to ensure that most of the particles have similar weights. Using identical-twin experiments we found that the ensemble mean follows the truth reliably, and the difference from the truth is captured by the ensemble spread. A rank histogram is used to show that the truth run is indistinguishable from any of the particles, demonstrating the statistical consistency of the method.
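The consistency check mentioned at the end, the rank histogram, is simple to compute: at each analysis time, count how many ensemble members lie below the truth; for a statistically consistent ensemble these ranks are uniformly distributed. A minimal sketch on synthetic data (not the vorticity model):

```python
import numpy as np

rng = np.random.default_rng(2)

def rank_histogram(ensemble, truth):
    """Count, at each time, how many ensemble members lie below the truth.

    ensemble: (n_times, n_members); truth: (n_times,).
    A statistically consistent ensemble yields near-uniform counts."""
    ranks = np.sum(ensemble < truth[:, None], axis=1)
    return np.bincount(ranks, minlength=ensemble.shape[1] + 1)

# Synthetic consistency check: truth drawn from the same distribution
# as the ensemble members, so the histogram should come out flat.
n_times, n_members = 5000, 9
ensemble = rng.normal(size=(n_times, n_members))
truth = rng.normal(size=n_times)
counts = rank_histogram(ensemble, truth)
```

A U-shaped histogram would instead indicate an under-dispersive ensemble, and a domed one an over-dispersive ensemble.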
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Direct-sampling and remote-sensing measurements were made at the crater rim of Masaya volcano (Nicaragua) to sample the aerosol plume emanating from the active vent. We report the first measurements of the size distribution of fine silicate particles (d < 10 μm) in Masaya's plume, by automated scanning electron microscopy (QEMSCAN) analysis of a particle filter. The particle size distribution was approximately lognormal with modal d ~ 1.15 μm. The majority of these particles were found to be spherical. These particles are interpreted to be droplets of quenched magma produced by a spattering process. Compositional analyses confirm earlier reports that the fine silicate particles show a range of compositions between that of the degassing magma and nearly pure silica, and that the extent of compositional variability decreases with increasing particle size. These results indicate that fine silicate particles are altered owing to reactions with acidic droplets in the plume. The emission flux of fine silicate particles was estimated as ~10^11 s^-1, equivalent to ~55 kg d^-1. Sun photometry, aerosol spectrometry, and thermal precipitation were used to determine the overall particle size distribution of the plume (0.01 μm < d < 10 μm). Sun photometry and aerosol spectrometry measurements indicate the presence of a large number of particles (assumed to be aqueous) with d ~ 1 μm. Aerosol spectrometry measurements further show an increase in particle size as nighttime approached. The emission flux of particles from Masaya was estimated as ~10^17 s^-1, equivalent to ~5.5 Mg d^-1, for d < 4 μm.
Abstract:
In the first chapter, we consider the joint estimation of objective and risk-neutral parameters for SV option pricing models. We propose a strategy which exploits the information contained in large heterogeneous panels of options, and we apply it to S&P 500 index and index call options data. Our approach breaks the stochastic singularity between contemporaneous option prices by assuming that every observation is affected by measurement error. We evaluate the likelihood function by using an MC-IS strategy combined with a Particle Filter algorithm. The second chapter examines the impact of different categories of traders on market transactions. We estimate a model which takes into account traders' identities at the transaction level, and we find that stock prices follow the direction of institutional trading. These results are obtained with data from an anonymous market. To explain our estimates, we examine the informativeness of a wide set of market variables and find that most of them are unambiguously significant for inferring the identity of traders. The third chapter investigates the relationship between the categories of market traders and three definitions of financial durations. We consider trade, price and volume durations, and we adopt a Log-ACD model in which we include information on traders at the transaction level. For trade durations, we observe an increase in trading frequency when informed traders and the liquidity provider intensify their presence in the market. For price and volume durations, we find that the same effect depends on the state of market activity. The fourth chapter proposes a strategy to express order aggressiveness in quantitative terms. We consider a simultaneous-equation model to examine price and volume aggressiveness at Euronext Paris, and we analyse the impact of a wide set of order book variables on the price-quantity decision.
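The particle-filter likelihood evaluation mentioned in the first chapter rests on a standard identity: the product of the filter's mean incremental weights is an unbiased estimate of the likelihood. A bootstrap-filter sketch for a basic stochastic-volatility model (illustrative parameterization; not the thesis's measurement-error specification for option panels):

```python
import numpy as np

rng = np.random.default_rng(3)

def pf_loglik(observations, n_particles=2000, phi=0.95, sigma=0.2, beta=1.0):
    """Bootstrap-particle-filter log-likelihood for a basic stochastic
    volatility model: h_t = phi * h_{t-1} + sigma * eps_t,
    y_t = beta * exp(h_t / 2) * eta_t, with eps, eta ~ N(0, 1)."""
    # Initialise log-volatility particles from the stationary distribution.
    h = rng.normal(0.0, sigma / np.sqrt(1.0 - phi ** 2), size=n_particles)
    loglik = 0.0
    for y in observations:
        h = phi * h + sigma * rng.normal(size=n_particles)   # propagate
        sd = beta * np.exp(h / 2.0)
        w = np.exp(-0.5 * (y / sd) ** 2) / (sd * np.sqrt(2.0 * np.pi))
        loglik += np.log(w.mean())      # unbiased likelihood increment
        h = h[rng.choice(n_particles, size=n_particles, p=w / w.sum())]
    return loglik

returns = rng.normal(size=100)   # placeholder return series
ll = pf_loglik(returns)
```

This estimate can then be maximized over the parameters, or embedded in an MC-IS scheme as in the chapter.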
Abstract:
In this study the Aerodyne Aerosol Mass Spectrometer (AMS) was used during three laboratory measurement campaigns, FROST1, FROST2 and ACI-03. The FROST campaigns took place at the Leipzig Aerosol Cloud Interaction Simulator (LACIS) at the IfT in Leipzig, and the ACI-03 campaign was conducted at the AIDA facility at the Karlsruhe Institute of Technology (KIT). In all three campaigns, the effect of coatings on mineral dust ice nuclei (IN) was investigated. During the FROST campaigns, Arizona Test Dust (ATD) particles of 200, 300 and 400 nm diameter were coated with thin layers (< 7 nm) of sulphuric acid. With such thin coatings, the AMS was operated close to its detection limits. Until now, it had not been possible to accurately determine AMS detection limits during regular measurements; therefore, the mathematical tools for analysing the detection limits of the AMS have been improved in this work. It is now possible to calculate detection limits of the AMS under operating conditions, without losing precious time by sampling through a particle filter. The instrument was characterised in more detail to enable correct quantification of the sulphate loadings on the ATD particle surfaces. Correction factors for the instrument inlet transmission, the collection efficiency, and the relative ionisation efficiency have been determined. With these corrections it was possible to quantify the sulphate mass per particle on the ATD after the condensation of sulphuric acid on its surface. The AMS results have been combined with the ice nucleus counter results. This revealed that the IN efficiency of ATD is reduced when it is coated with sulphuric acid. The reason for this reduction is a chemical reaction of sulphuric acid with the particle's surface. These reactions increasingly take place when the aerosol is humidified or heated after the coating with sulphuric acid.
A detailed analysis of the solubility and the evaporation temperature of the surface reaction products revealed that most likely aluminium sulphate is produced in these reactions.
Abstract:
In this thesis, recent methodological developments from the field of numerical integration are tested for the approximate computation of state-space models. The resulting algorithms are compared with the popular simulation-based approximation methods in terms of their approximation quality.
Abstract:
Free radicals are present in cigarette smoke and can have a negative effect on human health by attacking lipids, nucleic acids, proteins and other biologically important species. However, because of the complexity of the tobacco smoke system and the dynamic nature of radicals, little is known about the identity of the radicals, and debate continues on the mechanisms by which those radicals are produced. In this study, acetyl radicals were trapped from the gas phase using 3-amino-2,2,5,5-tetramethyl-proxyl (3AP) on solid support to form stable 3AP adducts for later analysis by high performance liquid chromatography (HPLC), mass spectrometry/tandem mass spectrometry (MS-MS/MS) and liquid chromatography-mass spectrometry (LC-MS). Simulations of acetyl radical generation were performed using Matlab and the Master Chemical Mechanism (MCM) programs. A range of 10-150 nmol/cigarette of acetyl radical was measured from gas-phase tobacco smoke of both commercial and research cigarettes under several different smoking conditions. More radicals were detected with the puff smoking method than with continuous flow sampling. Approximately twice as many acetyl radicals were trapped when a GF/F particle filter was placed before the trapping zone. Computational simulations show that NO/NO2 reacts with isoprene, initiating chain reactions that produce a hydroxyl radical, which abstracts hydrogen from acetaldehyde to generate the acetyl radical. With initial concentrations of NO, acetaldehyde, and isoprene typical of a real-world cigarette smoke scenario, these mechanisms can account for the full amount of acetyl radical detected experimentally. This study contributes to the overall understanding of free radical generation in gas-phase cigarette smoke.
Abstract:
The mid-Holocene (6 kyr BP; thousand years before present) is a key period for studying the consistency between model results and proxy-based reconstruction data, as it corresponds to a standard test for models and a reasonable number of proxy-based records is available. Taking advantage of this relatively large amount of information, we have compared a compilation of 50 air and sea surface temperature reconstructions with the results of three simulations performed with general circulation models and one carried out with LOVECLIM, a model of intermediate complexity. The conclusions derived from this analysis confirm that models and data agree on the large-scale spatial pattern, but that the models underestimate the magnitude of some observed changes and that large discrepancies are observed at the local scale. To further investigate the origin of those inconsistencies, we have constrained LOVECLIM to follow the signal recorded by the proxies selected in the compilation, using a data-assimilation method based on a particle filter. In one simulation all 50 proxy-based records are used, while in the other two only the continental or only the oceanic proxy-based records constrain the model results. As expected, data assimilation improves the consistency between model results and the reconstructions. In particular, this is achieved in a robust way in all the experiments through a strengthening of the midlatitude westerlies that warms northern Europe. Furthermore, the comparison of the LOVECLIM simulations with and without data assimilation has also objectively identified 16 proxy-based paleoclimate records whose reconstructed signal is incompatible either with the signal recorded by some other proxy-based records or with the model physics.