977 results for Covariance matrix estimation
Abstract:
The power rating of wind turbines is constantly increasing; however, keeping the voltage rating at the low-voltage level results in currents in the kilo-ampere range. An alternative for increasing the power level without raising the voltage level is provided by multiphase machines. Multiphase machines are used, for instance, in ship propulsion systems, aerospace applications, electric vehicles, and other high-power applications including wind energy conversion systems. A machine model in an appropriate reference frame is required in order to design an efficient control for the electric drive. Modeling of multiphase machines poses a challenge because of the mutual couplings between the phases, which degrade the drive performance unless they are properly considered. In certain multiphase machines there is also a problem of high current harmonics, which are easily generated because of the small impedance of the current paths of the harmonic components. However, multiphase machines offer special characteristics compared with their three-phase counterparts: they have better fault tolerance and are thus more robust. In addition, the controlled power can be divided among more inverter legs by increasing the number of phases. Moreover, the torque pulsation can be decreased, and the harmonic frequency of the torque ripple increased, by an appropriate multiphase configuration. Increasing the number of phases also makes it possible to obtain more torque per RMS ampere for the same volume, and thus to increase the power density. In this doctoral thesis, a decoupled d–q model of double-star permanent-magnet (PM) synchronous machines is derived based on diagonalization of the inductance matrix. The double-star machine is a special type of multiphase machine: its armature consists of two three-phase winding sets, which are commonly displaced by 30 electrical degrees. In this study, the displacement angle between the sets is treated as a parameter. The diagonalization of the inductance matrix results in a simplified model structure in which the mutual couplings between the reference frames are eliminated. Moreover, the current harmonics are mapped into a reference frame in which they can be easily controlled. The work also presents methods to determine the machine inductances by finite-element analysis and by voltage-source inverters on-site. The derived model is validated by experimental results obtained with an example double-star interior PM (IPM) synchronous machine having the sets displaced by 30 electrical degrees. The derived transformation, and consequently the decoupled d–q machine model, are shown to reproduce the behavior of an actual machine with acceptable accuracy. Thus, the proposed model is suitable for the model-based control design of electric drives consisting of double-star IPM synchronous machines.
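To make the decoupling idea concrete, the following minimal sketch diagonalises a hypothetical, symmetric 6×6 stator inductance matrix for two three-phase winding sets using a plain numerical eigendecomposition. The inductance values are illustrative only, and the thesis derives an analytical transformation (and handles saliency and harmonics) rather than relying on a numerical eigensolver.

```python
import numpy as np

# Hypothetical per-phase inductances for a double-star winding (H); real values
# would come from FEA or on-site identification, as described in the thesis.
L_self, M_set, M_between = 5e-3, -1e-3, 1.5e-3

# Symmetric 6x6 stator inductance matrix: two three-phase sets with mutual
# coupling within each set (M_set) and between the sets (M_between).
L = np.full((6, 6), M_between)
L[:3, :3] = L[3:, 3:] = np.full((3, 3), M_set)
np.fill_diagonal(L, L_self)

# Eigendecomposition: the orthogonal matrix T diagonalises L, i.e. T.T @ L @ T
# is diagonal, so the transformed currents see no mutual coupling between subspaces.
eigvals, T = np.linalg.eigh(L)
L_diag = T.T @ L @ T

print("eigen-inductances (H):", np.round(eigvals, 6))
print("largest off-diagonal residue:", np.max(np.abs(L_diag - np.diag(eigvals))))
```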
Abstract:
The attached file was created with Scientific WorkPlace (LaTeX).
Abstract:
My thesis consists of three chapters related to the estimation of state-space and stochastic volatility models. In the first article, we develop a computationally efficient state-smoothing procedure for linear Gaussian state-space models. We show how to exploit the particular structure of state-space models to draw the latent states efficiently. We analyse the computational efficiency of methods based on the Kalman filter, of the Cholesky factor algorithm and of our new method, using operation counts and computational experiments. We show that our method is more efficient in many important cases. The gains are particularly large when the dimension of the observed variables is large, or when repeated draws of the states are required for the same parameter values. As an application, we consider a multivariate Poisson model with time-varying intensities, which is used to analyse transaction count data from financial markets. In the second chapter, we propose a new technique for analysing multivariate stochastic volatility models. The proposed method is based on drawing the volatility efficiently from its conditional density given the parameters and the data. Our methodology applies to models with several types of cross-sectional dependence. We can model time-varying conditional correlation matrices by incorporating factors into the returns equation, where the factors are independent stochastic volatility processes. We can incorporate copulas to allow conditional dependence of the returns given the volatility, permitting different Student-t marginal distributions with return-specific degrees of freedom to capture the heterogeneity of the returns. We draw the volatility as a block in the time dimension and one series at a time in the cross-sectional dimension. We apply the method introduced by McCausland (2012) to obtain a good approximation of the conditional posterior distribution of the volatility of one return given the volatilities of the other returns, the parameters and the dynamic correlations. The model is evaluated using real data for ten exchange rates. We report results for univariate stochastic volatility models and for two multivariate models. In the third chapter, we assess the information contributed by realized volatility measures to the estimation and forecasting of volatility when prices are measured with and without error. We use stochastic volatility models. We take the point of view of an investor for whom volatility is an unknown latent variable and realized volatility is a sample quantity that contains information about it. We employ Bayesian Markov chain Monte Carlo methods to estimate the models, which allow the computation not only of the posterior densities of volatility but also of the predictive densities of future volatility. We compare the volatility forecasts, and the hit rates of forecasts, obtained with and without the information contained in realized volatility.
This approach differs from existing ones in the empirical literature in that the latter are most often limited to documenting the ability of realized volatility to forecast itself. We present empirical applications using daily returns of stock indices and exchange rates. The competing models are applied to the second half of 2008, a notable period in the recent financial crisis.
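The Cholesky-factor idea referred to in the first chapter exploits the fact that, in a linear Gaussian state-space model, the precision matrix of the states given the data is banded. The sketch below illustrates this for a univariate local-level model with known variances, using dense matrices for brevity; it is not the thesis' algorithm, which works with band/block storage and general state dimensions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Local-level model: y_t = x_t + eps_t, x_{t+1} = x_t + eta_t (variances assumed known).
T, s2_eps, s2_eta, s2_x1 = 200, 0.5, 0.1, 10.0
x_true = np.cumsum(rng.normal(0, np.sqrt(s2_eta), T))
y = x_true + rng.normal(0, np.sqrt(s2_eps), T)

# Posterior precision of the state vector x | y is tridiagonal (banded),
# which is what the Cholesky-factor approach exploits.
Omega = np.zeros((T, T))
idx = np.arange(T)
Omega[idx, idx] = 1.0 / s2_eps
Omega[0, 0] += 1.0 / s2_x1
Omega[idx[:-1], idx[:-1]] += 1.0 / s2_eta
Omega[idx[1:], idx[1:]] += 1.0 / s2_eta
Omega[idx[:-1], idx[1:]] -= 1.0 / s2_eta
Omega[idx[1:], idx[:-1]] -= 1.0 / s2_eta
c = y / s2_eps

# One joint draw of all states: x = mean + L^{-T} z, where Omega = L L'.
L = np.linalg.cholesky(Omega)
mean = np.linalg.solve(Omega, c)
draw = mean + np.linalg.solve(L.T, rng.standard_normal(T))
print("RMSE of smoothed mean:", np.sqrt(np.mean((mean - x_true) ** 2)))
```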
Abstract:
In recent years, molecular evolution has sought to characterise the variation and intensity of selection through the ratio of the non-synonymous to the synonymous substitution rate (dN/dS). This measure, dN/dS, has made it possible to study the history of the variation in the intensity of selection over time and to detect episodes of positive selection. The links between selection and variation in effective population size, however, interfere with these measures. Comparative methods, for their part, make it possible to measure correlations between quantitative characters along a phylogeny. They are also used to test hypotheses on the correlated evolution of life-history traits, and can be employed to study the correlations between life-history traits, mass, substitution rates or dN/dS. Here we propose an approach combining a comparative method based on the principle of independent contrasts with a model of molecular evolution, in a Bayesian probabilistic framework. By integrating, along a phylogeny, over the ancestral reconstructions of the traits and of dN/dS, we estimate the covariances between traits as well as between traits and parameters of the molecular evolution model. A hierarchical model was implemented within the coevol software, published during this master's degree. This model allows the simultaneous analysis of several genes without losing the power provided by the full set of sequences. Parallelisation of the computations makes it possible to scale the model up to the genome level. We study here the placental mammals, for which many complete genomes and phenotypic measurements are available. In the light of life-history theory, our method should make it possible to characterise the involvement of groups of genes in the biological processes linked to the phenotypes studied.
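For readers unfamiliar with the comparative method referred to above, the following is a minimal sketch of Felsenstein's independent contrasts on a hypothetical three-taxon tree. The model described in the abstract integrates over ancestral trait and dN/dS reconstructions in a Bayesian framework rather than computing contrasts explicitly, so this is illustrative only.

```python
import numpy as np

def contrast(x1, v1, x2, v2):
    """Standardised contrast between two nodes, plus the implied ancestral
    value and the extra variance added to the ancestor's branch."""
    c = (x1 - x2) / np.sqrt(v1 + v2)
    x_anc = (x1 / v1 + x2 / v2) / (1.0 / v1 + 1.0 / v2)
    v_extra = v1 * v2 / (v1 + v2)
    return c, x_anc, v_extra

# Hypothetical tree ((A:1.0, B:1.0):0.5, C:2.0) with log body-mass values at the tips.
xA, xB, xC = 2.3, 2.9, 4.1
cAB, xP, vP_extra = contrast(xA, 1.0, xB, 1.0)      # contrast between the two sister tips
cPC, _, _ = contrast(xP, 0.5 + vP_extra, xC, 2.0)   # contrast between their ancestor and C
print("standardised contrasts:", cAB, cPC)
```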
Abstract:
Biomechanical analysis of human movement using optoelectronic systems and skin markers treats body segments as rigid bodies. However, the movement of soft tissue relative to the bone, that is of the muscles and adipose tissue, causes the markers to move. This displacement has two components: an individual component corresponding to the random movement of each marker, and an in-unison component causing a common displacement of the skin markers linked to the movement of the underlying masses. While many studies aim to minimise these displacements, simulations have shown that soft tissue movement reduces joint dynamics. This observation has been made only through simulation, because no method is able to dissociate the kinematics of the soft tissue from that of the bone. The main objective of this thesis is to develop a numerical method able to distinguish these two kinematics. The first objective was to evaluate a local optimisation method for estimating the movement of the soft tissue relative to the humerus, obtained with an intracortical pin screwed into the bone in three subjects. The results show that local optimisation underestimates the marker displacement by 50% and that it leads to a different ranking of the markers according to their displacement. The limitation of this method is that it does not take into account all the components of soft tissue movement, in particular the in-unison component. The second objective was to develop a numerical method that considers all the components of soft tissue movement. More precisely, this method had to provide similar kinematics and a larger estimate of marker displacement than conventional methods, and to dissociate these components. The lower limb is modelled with a 10-degree-of-freedom kinematic chain reconstructed by global optimisation using only the markers placed on the pelvis and the medial face of the tibia. Estimating the kinematics without the markers placed on the thigh and calf avoids the influence of their displacement on the reconstruction of the kinematic model. This method, tested on 13 subjects during jumps, yielded up to 2.1 times more marker displacement, depending on the method it was compared with, while producing similar kinematics. A vector approach showed that the marker displacement is mainly due to the in-unison component. A matrix approach combining local optimisation with the kinematic chain showed that the soft tissue moves mainly about the longitudinal axis and along the antero-posterior axis of the bone. The originality of this thesis is to dissociate numerically the kinematics of the bone from that of the soft tissue, and the components of this movement. The methods developed in this thesis increase our knowledge of soft tissue movement and make it possible to envisage studying its effect on joint dynamics.
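The "local optimisation" evaluated in the first objective is, frame by frame, a least-squares rigid fit of a marker cluster to its reference configuration. Below is a minimal sketch with synthetic markers using the standard SVD solution; the marker coordinates, rotation and noise level are invented for illustration and are not the thesis' data.

```python
import numpy as np

def rigid_fit(ref, cur):
    """Least-squares rotation R and translation t mapping ref -> cur
    (both N x 3 marker arrays), via the SVD method."""
    ref_c, cur_c = ref - ref.mean(0), cur - cur.mean(0)
    U, _, Vt = np.linalg.svd(cur_c.T @ ref_c)
    D = np.diag([1.0, 1.0, np.linalg.det(U @ Vt)])   # guard against reflections
    R = U @ D @ Vt
    t = cur.mean(0) - R @ ref.mean(0)
    return R, t

rng = np.random.default_rng(1)
ref = rng.normal(size=(4, 3))                        # marker cluster in the reference pose
theta = np.deg2rad(20.0)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0],
                   [np.sin(theta),  np.cos(theta), 0],
                   [0, 0, 1]])
cur = ref @ R_true.T + np.array([0.05, 0.1, 0.0]) + rng.normal(0, 0.005, (4, 3))

R, t = rigid_fit(ref, cur)
residual = cur - (ref @ R.T + t)                     # what remains is soft-tissue artefact + noise
print("per-marker residual (m):", np.linalg.norm(residual, axis=1))
```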
Abstract:
The objectives of the proposed work are the preparation of ceramic nickel zinc ferrites belonging to the series Ni1-xZnxFe2O4 with x varying from 0 to 1 in steps of 0.2; the structural, magnetic and electrical characterization of Ni1-xZnxFe2O4; the preparation and evaluation of the cure characteristics of Rubber Ferrite Composites (RFCs); the magnetic characterization of RFCs using a vibrating sample magnetometer (VSM); the electrical characterization of RFCs; and the estimation of the magnetostriction constant from HL parameters. The study deals with the structural and magnetic properties of the ceramic fillers; the variation of coercivity with composition and the variation of magnetization for different filler loadings are compared and correlated. The dielectric properties of ceramic Ni1-xZnxFe2O4 and of rubber ferrite composites containing Ni1-xZnxFe2O4 were evaluated, and the ac electrical conductivity (σ_ac) of the ceramic as well as the composite samples can be calculated from the dielectric measurements using a simple relationship of the form σ_ac = 2πf·ε₀·ε_r·tan δ. The results suggest that the ac electrical conductivity is directly proportional to the frequency.
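A hedged sketch of the stated relationship follows, with illustrative (not measured) dielectric values:

```python
import numpy as np

EPS0 = 8.854e-12  # permittivity of free space (F/m)

def sigma_ac(freq_hz, eps_r, tan_delta):
    """AC conductivity from dielectric data: sigma_ac = 2*pi*f*eps0*eps_r*tan(delta)."""
    return 2 * np.pi * freq_hz * EPS0 * eps_r * tan_delta

# Hypothetical dielectric measurements at a few frequencies (values illustrative only).
f = np.array([1e3, 1e4, 1e5, 1e6])
print(sigma_ac(f, eps_r=12.0, tan_delta=0.05))  # S/m, grows linearly with frequency
```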
Abstract:
The thesis covers various aspects of the modeling and analysis of finite-mean time series with symmetric stable distributed innovations. Time series analysis based on the Box-Jenkins methods is the most popular approach, in which the models are linear and the errors are Gaussian. We highlight the limitations of classical time series analysis tools, explore some generalized tools, and organize the approach in parallel to the classical set-up. In the present thesis we mainly study the estimation and prediction of signal-plus-noise models, assuming that the signal and the noise follow models with symmetric stable innovations.

We start the thesis with some motivating examples and application areas of alpha-stable time series models. Classical time series analysis and the corresponding theory based on finite-variance models are discussed extensively in the second chapter, where we also survey the existing theory and methods for infinite-variance models.

In the third chapter we present a linear filtering method for computing the filter weights assigned to the observations when estimating an unobserved signal in a general noisy environment. Here we consider both the signal and the noise as stationary processes with infinite-variance innovations. We derive semi-infinite, doubly infinite and asymmetric signal extraction filters based on a minimum dispersion criterion. Finite-length filters based on Kalman-Levy filters are developed and the pattern of the filter weights is identified. Simulation studies show that the proposed methods are competent in signal extraction for processes with infinite variance.

Parameter estimation of autoregressive signals observed in a symmetric stable noise environment is discussed in the fourth chapter. Here we use higher-order Yule-Walker-type estimation based on the auto-covariation function and exemplify the methods by simulation and by an application to sea surface temperature data. We increase the number of Yule-Walker equations and propose an ordinary least squares estimate of the autoregressive parameters. The singularity problem of the auto-covariation matrix is addressed and a modified version of the generalized Yule-Walker method is derived using singular value decomposition.

In the fifth chapter of the thesis we introduce the partial covariation function as a tool for stable time series analysis, where the covariance or partial covariance is ill-defined. Asymptotic results for the partial auto-covariation are studied and its application to the model identification of stable autoregressive models is discussed. We generalize the Durbin-Levinson algorithm to include infinite-variance models in terms of the partial auto-covariation function and introduce a new information criterion for consistent order estimation of stable autoregressive models.

In chapter six we explore the application of the techniques discussed in the previous chapter to signal processing. Frequency estimation of sinusoidal signals observed in a symmetric stable noisy environment is discussed in this context. Here we introduce a parametric spectrum analysis and a frequency estimate based on the power transfer function, whose estimate is obtained using the modified generalized Yule-Walker approach. Another important problem in statistical signal processing is to identify the number of sinusoidal components in an observed signal; we use a modified version of the proposed information criterion for this purpose.
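As a point of reference for the fourth chapter, the classical finite-variance Yule-Walker step that the thesis generalises can be sketched as follows. The thesis replaces the sample autocovariances with auto-covariations (and adds extra equations, least squares and SVD regularisation) so that symmetric stable innovations can be handled; this sketch does none of that.

```python
import numpy as np

def yule_walker(x, p):
    """Classical Yule-Walker estimate of AR(p) coefficients from sample autocovariances."""
    x = np.asarray(x) - np.mean(x)
    n = len(x)
    gamma = np.array([x[:n - k] @ x[k:] / n for k in range(p + 1)])
    R = np.array([[gamma[abs(i - j)] for j in range(p)] for i in range(p)])
    return np.linalg.solve(R, gamma[1:])

rng = np.random.default_rng(2)
# Simulate an AR(2) with Gaussian innovations; under alpha-stable innovations these
# second moments do not exist, which is why the auto-covariation is used instead.
x = np.zeros(5000)
for t in range(2, 5000):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + rng.standard_normal()
print(yule_walker(x, 2))   # should be close to (0.6, -0.3)
```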
Abstract:
In a seminal paper, Aitchison and Lauder (1985) introduced classical kernel density estimation techniques in the context of compositional data analysis. Indeed, they gave two options for the choice of the kernel to be used in the kernel estimator. One of these kernels is based on the use of the alr transformation on the simplex S^D jointly with the normal distribution on R^(D-1). However, these authors themselves recognized that this method has some deficiencies. A method for overcoming these difficulties, based on recent developments in compositional data analysis and multivariate kernel estimation theory and combining the ilr transformation with the use of the normal density with a full bandwidth matrix, was recently proposed in Martín-Fernández, Chacón and Mateu-Figueras (2006). Here we present an extensive simulation study that compares both methods in practice, thus exploring the finite-sample behaviour of both estimators.
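A minimal sketch of the ilr-plus-multivariate-normal-kernel idea for 3-part compositions follows, with a hand-coded ilr basis and a normal-reference full bandwidth matrix. It is not the exact estimator of Martín-Fernández, Chacón and Mateu-Figueras (2006), and the density is expressed in ilr coordinates (no back-transformation to the simplex).

```python
import numpy as np
from scipy.stats import multivariate_normal

# Orthonormal ilr basis for 3-part compositions (columns span the clr plane).
V = np.array([[ 1/np.sqrt(2),  1/np.sqrt(6)],
              [-1/np.sqrt(2),  1/np.sqrt(6)],
              [ 0.0,          -2/np.sqrt(6)]])

def ilr(comp):
    clr = np.log(comp) - np.log(comp).mean(axis=1, keepdims=True)
    return clr @ V

rng = np.random.default_rng(3)
# Hypothetical 3-part compositional sample (rows sum to 1).
raw = rng.dirichlet([4.0, 2.0, 1.0], size=300)

z = ilr(raw)                                              # data mapped to R^(D-1)
H = len(z) ** (-2 / (z.shape[1] + 4)) * np.cov(z.T)       # normal-reference full bandwidth matrix

def kde(point_comp):
    """Kernel density estimate in ilr coordinates at a given composition."""
    z0 = ilr(np.atleast_2d(point_comp))[0]
    return multivariate_normal(mean=z0, cov=H).pdf(z).mean()

print(kde([0.5, 0.3, 0.2]))
```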
Abstract:
Data assimilation is a sophisticated mathematical technique for combining observational data with model predictions to produce state and parameter estimates that most accurately approximate the current and future states of the true system. The technique is commonly used in atmospheric and oceanic modelling, combining empirical observations with model predictions to produce more accurate and well-calibrated forecasts. Here, we consider a novel application within a coastal environment and describe how the method can also be used to deliver improved estimates of uncertain morphodynamic model parameters. This is achieved using a technique known as state augmentation. Earlier applications of state augmentation have typically employed the 4D-Var, Kalman filter or ensemble Kalman filter assimilation schemes. Our new method is based on a computationally inexpensive 3D-Var scheme, where the specification of the error covariance matrices is crucial for success. A simple 1D model of bed-form propagation is used to demonstrate the method. The scheme is capable of recovering near-perfect parameter values and, therefore, improves the capability of our model to predict future bathymetry. Such positive results suggest the potential for application to more complex morphodynamic models.
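State augmentation can be illustrated with a toy bed-form propagation model and a single uncertain migration speed. The sketch below writes the 3D-Var analysis in its equivalent gain form and prescribes the state-parameter cross-covariance from a finite-difference sensitivity; the model, covariances and observation layout are all invented for illustration and are not those of the authors' morphodynamic system.

```python
import numpy as np

rng = np.random.default_rng(4)

def model(bed, speed, dt=1.0, dx=1.0):
    """Toy bed-form propagation: advect the bed profile by speed*dt (periodic, linear interp)."""
    shift = speed * dt / dx
    n = len(bed)
    idx = (np.arange(n) - shift) % n
    lo = np.floor(idx).astype(int)
    w = idx - lo
    return (1 - w) * bed[lo] + w * bed[(lo + 1) % n]

n = 50
x0 = np.sin(2 * np.pi * np.arange(n) / n)           # initial bathymetry
speed_true, speed_b = 1.3, 0.8                       # true and background parameter values

# Truth, background forecast and sparse noisy observations after one step.
x_t = model(x0, speed_true)
x_b = model(x0, speed_b)
obs_idx = np.arange(0, n, 5)
R = 0.01 * np.eye(len(obs_idx))
y = x_t[obs_idx] + rng.multivariate_normal(np.zeros(len(obs_idx)), R)

# Augmented state z = [bed; speed]. The state-parameter cross-covariance is
# prescribed here from a finite-difference sensitivity of the model to the parameter.
sig_x2, sig_p2, dp = 0.05, 0.25, 1e-3
sens = (model(x0, speed_b + dp) - model(x0, speed_b - dp)) / (2 * dp)
B = np.zeros((n + 1, n + 1))
B[:n, :n] = sig_x2 * np.eye(n) + sig_p2 * np.outer(sens, sens)
B[:n, n] = B[n, :n] = sig_p2 * sens
B[n, n] = sig_p2

H = np.zeros((len(obs_idx), n + 1))
H[np.arange(len(obs_idx)), obs_idx] = 1.0            # observe bed heights only, not the parameter

z_b = np.append(x_b, speed_b)
K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)         # equivalent gain form of the 3D-Var analysis
z_a = z_b + K @ (y - H @ z_b)
print("analysed speed:", z_a[-1], " (truth:", speed_true, ")")
```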
Abstract:
The influence matrix is used in ordinary least-squares applications for monitoring statistical multiple-regression analyses. Concepts related to the influence matrix provide diagnostics on the influence of individual data on the analysis - the analysis change that would occur by leaving one observation out, and the effective information content (degrees of freedom for signal) in any sub-set of the analysed data. In this paper, the corresponding concepts have been derived in the context of linear statistical data assimilation in numerical weather prediction. An approximate method to compute the diagonal elements of the influence matrix (the self-sensitivities) has been developed for a large-dimension variational data assimilation system (the four-dimensional variational system of the European Centre for Medium-Range Weather Forecasts). Results show that, in the boreal spring 2003 operational system, 15% of the global influence is due to the assimilated observations in any one analysis, and the complementary 85% is the influence of the prior (background) information, a short-range forecast containing information from earlier assimilated observations. About 25% of the observational information is currently provided by surface-based observing systems, and 75% by satellite systems. Low-influence data points usually occur in data-rich areas, while high-influence data points are in data-sparse areas or in dynamically active regions. Background-error correlations also play an important role: high correlation diminishes the observation influence and amplifies the importance of the surrounding real and pseudo observations (prior information in observation space). Incorrect specifications of background and observation-error covariance matrices can be identified, interpreted and better understood by the use of influence-matrix diagnostics for the variety of observation types and observed variables used in the data assimilation system. Copyright © 2004 Royal Meteorological Society
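In the ordinary least-squares setting referred to at the start of the abstract, the influence matrix is the hat matrix S = X(XᵀX)⁻¹Xᵀ: its diagonal gives the self-sensitivities and its trace the degrees of freedom for signal. A minimal sketch with synthetic data follows; the paper's contribution is an approximate computation of the analogous diagonal for a large variational assimilation system, which this sketch does not attempt.

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy regression design: 100 observations, intercept plus 3 predictors.
X = np.column_stack([np.ones(100), rng.normal(size=(100, 3))])
y = X @ np.array([1.0, 2.0, -1.0, 0.5]) + rng.normal(0, 0.3, 100)

# Influence (hat) matrix: S = X (X'X)^{-1} X'. Diagonal entries are the
# self-sensitivities d(y_hat_i)/d(y_i); the trace is the degrees of freedom for signal.
S = X @ np.linalg.solve(X.T @ X, X.T)
self_sens = np.diag(S)
print("trace (d.o.f. for signal):", self_sens.sum())          # equals the number of fitted coefficients (4)
print("largest self-sensitivity:", self_sens.max())

# Change in the fitted value at point i if observation i were left out, without refitting.
resid = y - S @ y
loo_change = self_sens * resid / (1 - self_sens)
print("largest leave-one-out change:", np.abs(loo_change).max())
```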
Abstract:
A new robust neurofuzzy model construction algorithm has been introduced for the modeling of a priori unknown dynamical systems from observed finite data sets in the form of a set of fuzzy rules. Based on a Takagi-Sugeno (T-S) inference mechanism, a one-to-one mapping between a fuzzy rule base and a model matrix feature subspace is established. This link enables rule-based knowledge to be extracted from the matrix subspace to enhance model transparency. In order to achieve maximal model robustness and sparsity, a new robust extended Gram-Schmidt (G-S) method has been introduced via two effective and complementary approaches: regularization and D-optimality experimental design. Model rule bases are decomposed into orthogonal subspaces, so as to enhance model transparency with the capability of interpreting the derived rule-base energy levels. A locally regularized orthogonal least squares algorithm, combined with a D-optimality criterion for subspace-based rule selection, has been extended for fuzzy rule regularization and subspace-based information extraction. By using a weighting for the D-optimality cost function, the entire model construction procedure becomes automatic. Numerical examples are included to demonstrate the effectiveness of the proposed new algorithm.
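For context, the T-S inference mechanism on which the rule-base-to-subspace mapping rests combines local linear models through normalised membership functions. Below is a minimal one-input sketch with hypothetical Gaussian memberships and consequents; the paper's contribution is the robust rule selection and regularization, not this inference step.

```python
import numpy as np

# Hypothetical one-input T-S rule base: IF x is A_i THEN y = a_i * x + b_i.
centres = np.array([-2.0, 0.0, 2.0])      # Gaussian membership centres
width = 1.0
a = np.array([0.5, -1.0, 2.0])            # local linear consequent slopes
b = np.array([0.0, 1.0, -3.0])            # local linear consequent offsets

def ts_output(x):
    """Takagi-Sugeno inference: weighted average of the local linear models,
    with weights given by the normalised rule firing strengths."""
    mu = np.exp(-0.5 * ((x - centres) / width) ** 2)
    w = mu / mu.sum()
    return w @ (a * x + b)

for x in (-2.0, 0.0, 1.0, 2.0):
    print(x, ts_output(x))
```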
Abstract:
A sparse kernel density estimator is derived based on the zero-norm constraint, in which the zero-norm of the kernel weights is incorporated to enhance model sparsity. The classical Parzen window estimate is adopted as the desired response for density estimation, and an approximate function of the zero-norm is used to achieve mathematical tractability and algorithmic efficiency. Under the mild condition of a positive definite design matrix, the kernel weights of the proposed density estimator based on the zero-norm approximation can be obtained using the multiplicative nonnegative quadratic programming algorithm. Using the D-optimality based selection algorithm as a preprocessing step to select a small significant subset of the design matrix, the proposed zero-norm based approach offers an effective means for constructing very sparse kernel density estimates with excellent generalisation performance.
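A rough sketch of the two named ingredients follows: the Parzen window estimate as the desired response, and nonnegative kernel weights fitted to reproduce it. Here scipy's nnls stands in for the multiplicative nonnegative quadratic programming updates used in the paper, and no zero-norm approximation is attempted.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.stats import norm

rng = np.random.default_rng(6)
x = np.sort(rng.normal(size=200))          # training sample (1-D for simplicity)
h = 0.3                                    # kernel width (assumed, not optimised)

# Desired response: the classical Parzen window estimate at the training points.
K = norm.pdf((x[:, None] - x[None, :]) / h) / h      # full kernel design matrix
parzen = K.mean(axis=1)

# Nonnegative weights fitted to reproduce the Parzen response; nnls is used here
# in place of the paper's multiplicative nonnegative QP algorithm.
w, _ = nnls(K, parzen)
w /= w.sum()                               # renormalise so the estimate integrates to one
print("kernels with non-zero weight:", np.count_nonzero(w > 1e-8), "of", len(x))

density_at_0 = w @ (norm.pdf((0.0 - x) / h) / h)
print("estimated density at 0:", density_at_0)
```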
Abstract:
This paper derives an efficient algorithm for constructing sparse kernel density (SKD) estimates. The algorithm first selects a very small subset of significant kernels using an orthogonal forward regression (OFR) procedure based on the D-optimality experimental design criterion. The weights of the resulting sparse kernel model are then calculated using a modified multiplicative nonnegative quadratic programming algorithm. Unlike most of the SKD estimators, the proposed D-optimality regression approach is an unsupervised construction algorithm and it does not require an empirical desired response for the kernel selection task. The strength of the D-optimality OFR is owing to the fact that the algorithm automatically selects a small subset of the most significant kernels related to the largest eigenvalues of the kernel design matrix, which account for most of the energy of the kernel training data, and this also guarantees the most accurate kernel weight estimates. The proposed method is also computationally attractive, in comparison with many existing SKD construction algorithms. Extensive numerical investigation demonstrates the ability of this regression-based approach to efficiently construct a very sparse kernel density estimate with excellent test accuracy, and our results show that the proposed method compares favourably with other existing sparse methods, in terms of test accuracy, model sparsity and complexity, for constructing kernel density estimates.
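The selection step can be sketched as an orthogonal forward regression in which, at each stage, the candidate kernel with the largest orthogonalised energy (i.e. the largest increment to the log-determinant of the selected design matrix) is added. A toy version with hypothetical data, without the subsequent weight calculation, follows.

```python
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(7)
x = rng.normal(size=150)
h = 0.4
K = norm.pdf((x[:, None] - x[None, :]) / h) / h      # candidate kernel columns

def d_optimal_ofr(K, n_select):
    """Orthogonal forward selection: at each step pick the candidate column whose
    component orthogonal to the already-selected columns has the largest energy,
    i.e. the largest increment to log det of the selected design matrix."""
    selected, residual = [], K.copy()
    for _ in range(n_select):
        energy = np.sum(residual ** 2, axis=0)
        energy[selected] = -np.inf                   # do not pick a column twice
        j = int(np.argmax(energy))
        q = residual[:, j] / np.linalg.norm(residual[:, j])
        selected.append(j)
        residual = residual - np.outer(q, q @ residual)   # Gram-Schmidt deflation
    return selected

print("selected kernel centres:", np.sort(x[d_optimal_ofr(K, 8)]))
```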
Abstract:
We describe a model-data fusion (MDF) inter-comparison project (REFLEX), which compared various algorithms for estimating carbon (C) model parameters consistent with both measured carbon fluxes and states and a simple C model. Participants were provided with the model and with both synthetic net ecosystem exchange (NEE) of CO2 and leaf area index (LAI) data, generated from the model with added noise, and observed NEE and LAI data from two eddy covariance sites. Participants endeavoured to estimate model parameters and states consistent with the model for all cases over the two years for which data were provided, and generate predictions for one additional year without observations. Nine participants contributed results using Metropolis algorithms, Kalman filters and a genetic algorithm. For the synthetic data case, parameter estimates compared well with the true values. The results of the analyses indicated that parameters linked directly to gross primary production (GPP) and ecosystem respiration, such as those related to foliage allocation and turnover, or temperature sensitivity of heterotrophic respiration, were best constrained and characterised. Poorly estimated parameters were those related to the allocation to and turnover of fine root/wood pools. Estimates of confidence intervals varied among algorithms, but several algorithms successfully located the true values of annual fluxes from synthetic experiments within relatively narrow 90% confidence intervals, achieving >80% success rate and mean NEE confidence intervals <110 gC m−2 year−1 for the synthetic case. Annual C flux estimates generated by participants generally agreed with gap-filling approaches using half-hourly data. The estimation of ecosystem respiration and GPP through MDF agreed well with outputs from partitioning studies using half-hourly data. Confidence limits on annual NEE increased by an average of 88% in the prediction year compared to the previous year, when data were available. Confidence intervals on annual NEE increased by 30% when observed data were used instead of synthetic data, reflecting and quantifying the addition of model error. Finally, our analyses indicated that incorporating additional constraints, using data on C pools (wood, soil and fine roots) would help to reduce uncertainties for model parameters poorly served by eddy covariance data.
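As a flavour of the algorithms compared, the sketch below calibrates a single parameter of a toy respiration model against synthetic flux data with a random-walk Metropolis sampler; the model, prior and data are invented for illustration and bear no relation to the REFLEX set-up.

```python
import numpy as np

rng = np.random.default_rng(8)

def resp(temp, q10):
    """Toy ecosystem respiration model: R = R10 * Q10^((T-10)/10)."""
    return 2.0 * q10 ** ((temp - 10.0) / 10.0)

# Synthetic 'observations' generated with a known Q10 plus noise.
temp = rng.uniform(0, 25, 200)
obs = resp(temp, 2.2) + rng.normal(0, 0.3, 200)

def log_post(q10, sigma=0.3):
    if not 1.0 < q10 < 4.0:                          # flat prior on (1, 4)
        return -np.inf
    return -0.5 * np.sum((obs - resp(temp, q10)) ** 2) / sigma ** 2

# Random-walk Metropolis over the single parameter.
chain, current, lp = [], 1.5, log_post(1.5)
for _ in range(5000):
    prop = current + rng.normal(0, 0.05)
    lp_prop = log_post(prop)
    if np.log(rng.uniform()) < lp_prop - lp:
        current, lp = prop, lp_prop
    chain.append(current)

post = np.array(chain[1000:])
print("posterior mean and 90% interval:", post.mean(), np.percentile(post, [5, 95]))
```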
Abstract:
Eddy covariance has been used in urban areas to evaluate the net exchange of CO2 between the surface and the atmosphere. Typically, only the vertical flux is measured at a height 2–3 times that of the local roughness elements; however, under conditions of relatively low instability, CO2 may accumulate in the airspace below the measurement height. This can result in inaccurate emissions estimates if the accumulated CO2 drains away or is flushed upwards during thermal expansion of the boundary layer. Some studies apply a single height storage correction; however, this requires the assumption that the response of the CO2 concentration profile to forcing is constant with height. Here a full seasonal cycle (7th June 2012 to 3rd June 2013) of single height CO2 storage data calculated from concentrations measured at 10 Hz by open path gas analyser are compared to a data set calculated from a concurrent switched vertical profile measured (2 Hz, closed path gas analyser) at 10 heights within and above a street canyon in central London. The assumption required for the former storage determination is shown to be invalid. For approximately regular street canyons at least one other measurement is required. Continuous measurements at fewer locations are shown to be preferable to a spatially dense, switched profile, as temporal interpolation is ineffective. The majority of the spectral energy of the CO2 storage time series was found to be between 0.001 and 0.2 Hz (500 and 5 s respectively); however, sampling frequencies of 2 Hz and below still result in significantly lower CO2 storage values. An empirical method of correcting CO2 storage values from under-sampled time series is proposed.
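The single-height storage term under discussion is the rate of change of CO2 held in the air column below the measurement height, under the assumption that the column responds as the concentration at the measurement height does. A minimal sketch with an invented concentration record; the units and the assumed air molar density are stated in the comments.

```python
import numpy as np

def single_height_storage(co2_ppm, dt_s, z_m, air_molar_density=41.6):
    """Single-height CO2 storage term (umol m-2 s-1), assuming the concentration
    change measured at z_m applies to the whole (well-mixed) column below it.
    air_molar_density ~ 41.6 mol m-3 for dry air near 20 C and 1013 hPa."""
    dc_dt = np.gradient(co2_ppm, dt_s)          # umol mol-1 s-1
    return dc_dt * air_molar_density * z_m      # umol m-2 s-1

# Hypothetical 30-minute record of 1 Hz CO2 mole fractions at z_m = 45 m.
rng = np.random.default_rng(9)
t = np.arange(0, 1800)
co2 = 400 + 0.002 * t + rng.normal(0, 0.5, t.size)   # slow build-up plus instrument noise
print("mean storage flux:", single_height_storage(co2, 1.0, 45.0).mean(), "umol m-2 s-1")
```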