966 results for "Permutation-Symmetric Covariance"

Relevance: 10.00%

Abstract:

This thesis is concerned with state and parameter estimation in state space models. The estimation of states and parameters is an important task when mathematical modeling is applied to application areas such as global positioning systems, target tracking, navigation, brain imaging, the spread of infectious diseases, biological processes, telecommunications, audio signal processing, stochastic optimal control, machine learning, and physical systems. In Bayesian settings, the estimation of states or parameters amounts to computing the posterior probability density function. Except for a very restricted class of models, it is impossible to compute this density function in closed form, so approximation methods are needed. A state estimation problem involves estimating the states (latent variables) that are not directly observed in the output of the system. In this thesis, we use the Kalman filter, the extended Kalman filter, Gauss–Hermite filters, and particle filters to estimate the states based on the available measurements. Among these, particle filters are numerical methods that approximate the filtering distributions of non-linear, non-Gaussian state space models via Monte Carlo. The performance of a particle filter depends heavily on the chosen importance distribution; an inappropriate choice can even cause the particle filter algorithm to fail to converge. In this thesis, we analyze the theoretical Lᵖ convergence of the particle filter with general importance distributions, where p ≥ 2 is an integer. A parameter estimation problem concerns inferring the model parameters from measurements. For high-dimensional complex models, the parameters can be estimated by Markov chain Monte Carlo (MCMC) methods. In its operation, an MCMC method requires the unnormalized posterior distribution of the parameters and a proposal distribution.
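As a minimal illustration of the bootstrap particle filter mentioned above, the sketch below filters a toy scalar linear-Gaussian model, using the transition density as the importance distribution and multinomial resampling at every step. The model and all numerical values are illustrative assumptions, not taken from the thesis.

```python
import numpy as np

rng = np.random.default_rng(0)

def bootstrap_pf(y, n_particles=500, a=0.9, q=0.5, r=1.0):
    """Bootstrap particle filter for the toy model
    x_k = a*x_{k-1} + N(0, q),  y_k = x_k + N(0, r)."""
    x = rng.normal(0.0, 1.0, n_particles)      # initial particle cloud
    means = []
    for yk in y:
        x = a * x + rng.normal(0.0, np.sqrt(q), n_particles)  # propagate
        logw = -0.5 * (yk - x) ** 2 / r        # log-likelihood weights
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))            # filtering mean estimate
        idx = rng.choice(n_particles, n_particles, p=w)  # resample
        x = x[idx]
    return np.array(means)
```

With the transition density as importance distribution, the weights reduce to the measurement likelihood; a better-tailored importance distribution would incorporate the current observation as well.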
In this thesis, we show how the posterior density function of the parameters of a state space model can be computed by filtering-based methods in which the states are integrated out. This type of computation is then applied to estimate the parameters of stochastic differential equations. Furthermore, we compute the partial derivatives of the log-posterior density function and use the hybrid Monte Carlo and scaled conjugate gradient methods to infer the parameters of stochastic differential equations. The computational efficiency of MCMC methods depends heavily on the chosen proposal distribution. A commonly used proposal distribution is Gaussian, and for this kind of proposal the covariance matrix must be well tuned. To tune it, adaptive MCMC methods can be used. In this thesis, we propose a new way of updating the covariance matrix using the variational Bayesian adaptive Kalman filter algorithm.
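For context, the covariance adaptation that adaptive MCMC performs can be sketched with the classical recursive empirical update of Haario et al.'s adaptive Metropolis; this is the standard baseline, not the variational Bayesian adaptive Kalman filter update proposed in the thesis.

```python
import numpy as np

def update_mean_cov(mean, cov, n, theta):
    """Recursive running mean/covariance update used by adaptive
    Metropolis: after observing sample `theta` (the n-th, 1-indexed),
    return the updated sample mean and (ddof=1) sample covariance."""
    new_mean = mean + (theta - mean) / n
    if n < 2:
        return new_mean, cov
    new_cov = ((n - 2) / (n - 1)) * cov \
        + np.outer(theta - mean, theta - mean) / n   # uses the OLD mean
    return new_mean, new_cov
```

The adaptive proposal covariance is then a scaled version of this running estimate, typically (2.38²/d)·cov plus a small regularizing term.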

Relevance: 10.00%

Abstract:

Distributed storage systems are studied. Interest in such systems has become relatively wide due to the increasing amount of information that needs to be stored in data centers and various kinds of cloud systems. Many solutions exist for storing information on distributed devices, depending on the needs of the system designer. This thesis studies the design of such storage systems as well as their fundamental limits. Specifically, the subjects of interest include heterogeneous distributed storage systems, distributed storage systems with the exact repair property, and locally repairable codes. For distributed storage systems with either functional or exact repair, capacity results are proved. In the case of locally repairable codes, the minimum distance is studied. Constructions are given for exact-repairing codes between the minimum bandwidth regeneration (MBR) and minimum storage regeneration (MSR) points; in many cases these codes exceed the time-sharing line between the extremal points. Other properties of exact-regenerating codes are also studied. For the heterogeneous setup, the main result is that the capacity of such systems is always at most the capacity of a homogeneous system with symmetric repair, average node size, and average repair bandwidth. A randomized construction of a locally repairable code with good minimum distance is given: it is shown that a random linear code of a certain natural type has good minimum distance with high probability. Other properties of locally repairable codes are also studied.
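The MBR and MSR extremal points mentioned above can be written down explicitly for functional repair, following the standard formulas of Dimakis et al. for a file of size M, k systematic nodes, and d repair helpers; the numeric example below is ours, not from the thesis.

```python
from fractions import Fraction

def msr_point(M, k, d):
    """Minimum storage regeneration point: per-node storage alpha = M/k
    and total repair bandwidth gamma = M*d / (k*(d - k + 1))."""
    alpha = Fraction(M, k)
    gamma = Fraction(M * d, k * (d - k + 1))
    return alpha, gamma

def mbr_point(M, k, d):
    """Minimum bandwidth regeneration point: alpha equals gamma,
    both 2*M*d / (k*(2*d - k + 1))."""
    alpha = Fraction(2 * M * d, k * (2 * d - k + 1))
    return alpha, alpha
```

Codes between these two points trade per-node storage against repair bandwidth; the time-sharing line is the straight interpolation between them.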

Relevance: 10.00%

Abstract:

The aim of this work is to invert the ionospheric electron density profile from riometer (relative ionospheric opacity meter) measurements. The new riometer instrument KAIRA (Kilpisjärvi Atmospheric Imaging Receiver Array) is used to measure the cosmic HF radio noise absorption taking place in the D-region ionosphere between 50 and 90 km. In order to invert the electron density profile, synthetic data are used to constrain the unknown parameter Neq with a spline-height method, which parameterizes the electron density profile at different altitudes. Moreover, a smoothing prior is used when sampling from the posterior distribution, by truncating the prior covariance matrix. The smoothing-prior approach makes it easier to explore the posterior with the MCMC (Markov chain Monte Carlo) method.
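One common way to build such a smoothing prior, sketched below, is to derive the covariance from a second-difference (roughness-penalty) operator and truncate its small-variance eigenmodes; this is a hypothetical illustration of the idea, not the code used in the work.

```python
import numpy as np

def smoothing_prior_cov(n, sigma=1.0, keep=None):
    """Covariance of a Gaussian smoothness prior on an n-point profile,
    built from the second-difference operator; optionally truncate to
    the `keep` leading eigenvectors of the covariance."""
    D = np.diff(np.eye(n), 2, axis=0)      # (n-2, n) second differences
    P = D.T @ D + 1e-6 * np.eye(n)         # regularized precision matrix
    C = sigma**2 * np.linalg.inv(P)        # prior covariance
    if keep is not None:
        w, V = np.linalg.eigh(C)           # ascending eigenvalues
        w[:-keep] = 0.0                    # drop small-variance modes
        C = (V * w) @ V.T
    return C
```

Truncation reduces the effective dimension of the sampling problem, since MCMC then only has to explore the retained smooth modes.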

Relevance: 10.00%

Abstract:

This thesis examines the suitability of VaR for foreign exchange rate risk management from the perspective of a European investor. Four different VaR models are evaluated to assess whether VaR is a valuable tool for managing foreign exchange rate risk: the historical method, the historical bootstrap method, the variance-covariance method, and Monte Carlo simulation. The data are divided into emerging and developed market currencies to allow a more detailed analysis, and the foreign exchange rate data cover 31 January 2000 to 30 April 2014. The results show that none of these VaR models should be relied on as the sole tool for foreign exchange rate risk management. The variance-covariance method and Monte Carlo simulation perform worst on both currency portfolios. Both historical methods performed better, but should likewise be treated as an additional tool alongside other, more sophisticated analysis tools. A comparative study of VaR estimates and forward prices is also included in the thesis. It reveals that, despite the expensive hedging cost of emerging market currencies, the risk captured by VaR is more expensive still, and FX forward hedging is therefore recommended.
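Two of the evaluated approaches can be sketched in a few lines for a single-currency position; the confidence level and position size below are illustrative choices, not the thesis's settings.

```python
import numpy as np
from statistics import NormalDist

def parametric_var(returns, alpha=0.99, value=1.0):
    """Variance-covariance (delta-normal) VaR: the loss exceeded with
    probability 1 - alpha under a normal fitted to historical returns."""
    mu, sigma = returns.mean(), returns.std(ddof=1)
    z = NormalDist().inv_cdf(1 - alpha)    # e.g. about -2.326 for alpha=0.99
    return -(mu + z * sigma) * value

def historical_var(returns, alpha=0.99, value=1.0):
    """Historical-simulation VaR: empirical (1 - alpha) quantile of returns."""
    return -np.quantile(returns, 1 - alpha) * value
```

For fat-tailed FX returns the two estimates can diverge substantially, which is one reason the thesis backtests the models against each other.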

Relevance: 10.00%

Abstract:

This thesis models the squeezing of the tube and computes the fluid motion in a peristaltic pump. The simulations were conducted using the COMSOL Multiphysics FSI module. The model is set up as axisymmetric, with several simulation cases to give a clear understanding of the results. The model captures the total displacement of the tube, the velocity magnitude, and the average pressure fluctuation of the fluid motion. The relevant mathematical and physical concepts are reviewed and discussed together with their practical applications. In order to solve the problems and work around the resource constraints, the mass balance and momentum equations, finite element concepts, the arbitrary Lagrangian-Eulerian method, one-way and two-way coupling methods, and the COMSOL Multiphysics simulation setup are studied and briefly described.

Relevance: 10.00%

Abstract:

The contribution of genetic factors to the development of obesity has been widely recognized, but the identity of the genes involved has not yet been fully clarified. Variation in genes involved in adipocyte differentiation and energy metabolism is expected to play a role in the etiology of obesity. We assessed the potential association between a polymorphism in one candidate gene involved in these pathways, peroxisome proliferator-activated receptor-gamma (PPARG), and obesity-related phenotypes in 335 Brazilians of European descent. All individuals included in the sample were adults. Pregnant women, as well as individuals with secondary hyperlipidemia due to renal, liver or thyroid disease, or diabetes, were not invited to participate in the study; all other individuals were included. The gene variant PPARG Pro12Ala was studied by a PCR-based method, and the association between this genetic polymorphism and obesity-related phenotypes was evaluated by analysis of covariance. The variant allele frequency was PPARG Ala12 = 0.09, which is in the same range as described for European and European-derived populations. No statistically significant differences were observed in mean total cholesterol, LDL cholesterol, HDL cholesterol, or triglyceride levels among PPARG genotypes in either gender. In the male sample, an association between the PPARG Pro12Ala variant and body mass index was detected, with male carriers of the Ala variant presenting a higher mean body mass index than wild-type homozygotes (28.3 vs 26.2 kg/m², P = 0.037). No effect of this polymorphism was detected in women. This finding suggests that the PPARG gene has a gender-specific effect and contributes to the susceptibility to obesity in this population.

Relevance: 10.00%

Abstract:

Rough turning is an important method for manufacturing cylinder-symmetric parts. Thus far, increasing the level of automation in rough turning has meant process monitoring methods or adaptive turning control methods that aim to keep the process conditions constant. However, in order to improve process safety, quality, and efficiency, adaptive turning control should be transformed into an intelligent machining system that optimizes the cutting values to match the process conditions, or actively seeks to improve them. In this study, primary and secondary chatter and chip formation are studied to understand how to measure the effect of these phenomena on the process conditions and how to avoid undesired cutting conditions. The concept of cutting state is used to capture the combination of these phenomena together with the current use of the power capacity of the lathe. The severity measures for these phenomena are not derived from physical quantities; instead, they are modelled against expert opinion. Based on the concept of cutting state, an expert-system-style fuzzy control system capable of optimizing the cutting process was created. Important aspects of the system include the capability to adapt to several cutting phenomena appearing at once, even when those phenomena would potentially require conflicting control actions.
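A fuzzy rule base of the general kind described above can be sketched as follows; the membership functions, rule outputs, and the single "chatter severity" input are invented for illustration and are much simpler than the multi-phenomenon system of the study.

```python
def tri(x, a, b, c):
    """Triangular membership function with support [a, c] and peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def chatter_feed_adjustment(severity):
    """Toy Sugeno-style rule base mapping chatter severity in [0, 1] to a
    relative feed-rate change (weighted average of rule outputs)."""
    rules = [
        (tri(severity, -0.5, 0.0, 0.5), 0.0),   # low severity: keep feed
        (tri(severity, 0.0, 0.5, 1.0), -0.1),   # medium: reduce feed 10%
        (tri(severity, 0.5, 1.0, 1.5), -0.3),   # high: reduce feed 30%
    ]
    num = sum(w * out for w, out in rules)
    den = sum(w for w, _ in rules)
    return num / den if den else 0.0
```

Because rule strengths blend continuously, the controller output degrades gracefully between the expert-defined operating points instead of switching abruptly.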

Relevance: 10.00%

Abstract:

This thesis studies the suitability of a recent data assimilation method, the Variational Ensemble Kalman Filter (VEnKF), for real-life fluid dynamic problems in hydrology. VEnKF combines a variational formulation of the data assimilation problem, based on minimizing an energy functional, with an ensemble Kalman filter approximation to the Hessian matrix, which also serves as an approximation to the inverse of the error covariance matrix. One of the distinctive features of VEnKF is very frequent resampling of the ensemble: resampling is done at every observation step. This is taken even further by observation interpolation, which is found beneficial for numerical stability; in that case the ensemble is resampled at every time step of the numerical model. VEnKF is applied in several configurations to data from a real laboratory-scale dam break problem modelled with the shallow water equations. It is also tested on a two-layer quasi-geostrophic atmospheric flow problem. In both cases VEnKF proves to be an efficient and accurate data assimilation method that renders the analysis more realistic than the numerical model alone, and its adaptive nature makes it robust against filter instability.
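The ensemble approximation to the error covariance at the heart of EnKF-type methods can be sketched in a few lines; this is a generic illustration, not the thesis implementation.

```python
import numpy as np

def ensemble_cov(ensemble):
    """Sample covariance from an (n_state, n_members) ensemble of model
    states; in VEnKF-style methods this low-rank estimate stands in for
    the forecast error covariance (and hence the Hessian approximation)."""
    m = ensemble.shape[1]
    anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
    return anomalies @ anomalies.T / (m - 1)
```

The rank of this estimate is at most m - 1, which is why frequent resampling of the ensemble matters for keeping the approximation informative.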

Relevance: 10.00%

Abstract:

Optimization of quantum measurement processes plays a pivotal role in carrying out better (more accurate or less disruptive) measurements and experiments on a quantum system. In particular, convex optimization, i.e., identifying the extreme points of the convex sets and subsets of quantum measuring devices, plays an important part in quantum optimization, since the typical figures of merit for measurement processes are affine functionals. In this thesis, we discuss results determining the extreme quantum devices and their relevance, e.g., in quantum-compatibility-related questions. In particular, we see that a compatible device pair in which one device is extreme can be joined into a single apparatus in an essentially unique way. Moreover, we show that the question of whether a pair of quantum observables can be measured jointly can often be formulated in a weaker form when some of the observables involved are extreme. Another major line of research treated in this thesis deals with convex analysis of special restricted sets of quantum devices: covariance structures or, in particular, generalized imprimitivity systems. Some results on the structure of covariant observables and instruments are presented, as well as results identifying the extreme points of covariance structures in quantum theory. As a special case study, not published elsewhere, we examine the structure of Euclidean-covariant localization observables for spin-0 particles. We also discuss the general form of Weyl-covariant phase-space instruments. Finally, certain optimality measures originating from convex geometry are introduced for quantum devices: boundariness, which measures how 'close' a quantum apparatus is to the algebraic boundary of the device set, and the robustness of incompatibility, which quantifies the level of incompatibility of a quantum device pair by measuring the highest amount of noise the pair tolerates without becoming compatible.
Boundariness is further connected to minimum-error discrimination of quantum devices, and the robustness of incompatibility is shown to behave monotonically under certain compatibility-non-decreasing operations. Moreover, the value of the robustness of incompatibility is given for a few special device pairs.
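The compatibility and noise-robustness notions above have a fully explicit special case for two unbiased noisy qubit observables with effects (I ± s·σ)/2 along orthogonal axes: they are jointly measurable iff s_x² + s_z² ≤ 1 (Busch's criterion). Under the simple noise model s → s/(1 + t), the robustness then has a closed form. This is a known textbook example, not a result of the thesis, and the noise convention is one of several in use.

```python
import math

def noisy_xz_compatible(sx, sz):
    """Joint measurability of unbiased noisy qubit observables along
    x and z with sharpness sx, sz: compatible iff sx^2 + sz^2 <= 1."""
    return sx**2 + sz**2 <= 1.0

def robustness_of_incompatibility_xz(sx, sz):
    """Smallest t >= 0 such that shrinking both sharpnesses to s/(1+t)
    makes the pair compatible: t = sqrt(sx^2 + sz^2) - 1 (0 if the
    pair is already compatible). Valid for this special pair only."""
    return max(0.0, math.sqrt(sx**2 + sz**2) - 1.0)
```

For the sharp pair sx = sz = 1 this gives t = √2 - 1, the maximal value for this family, matching the intuition that σ_x and σ_z are "maximally incompatible" among such pairs.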

Relevance: 10.00%

Abstract:

This thesis concerns the analysis of epidemic models. We adopt the Bayesian paradigm and develop suitable Markov chain Monte Carlo (MCMC) algorithms. As a case study for SEIR epidemic models, we consider the 1995 Ebola outbreak in the Democratic Republic of Congo (former Zaïre). We model the Ebola epidemic deterministically using ODEs and stochastically through SDEs, the latter to account for a possible bias in each compartment. Since the model has unknown parameters, we estimate them with different methods, such as least squares, maximum likelihood, and MCMC. The motivation for choosing MCMC over the other methods in this thesis is its ability to tackle complicated nonlinear problems with a large number of parameters. First, in the deterministic Ebola model, we compute the likelihood function by the sum-of-squared-residuals method and estimate the parameters using least squares and MCMC. We sample parameters and then use them to calculate the basic reproduction number and to study the disease-free equilibrium. From the chain sampled from the posterior, we run convergence diagnostics and confirm the viability of the model. The results show that the Ebola model fits the observed onset data with high precision, and all the unknown model parameters are well identified. Second, we convert the ODE model into an SDE Ebola model. We compute the likelihood function using the extended Kalman filter (EKF) and estimate the parameters again. The motivation for the SDE formulation is to consider the impact of modelling errors; moreover, the EKF approach allows us to formulate a filtered likelihood for the parameters of such a stochastic model. We use the MCMC procedure to obtain the posterior distributions of the parameters of the drift and diffusion parts of the SDE Ebola model. We analyse two cases: (1) the model error covariance matrix of the dynamic noise is close to zero, i.e., only a small amount of stochasticity is added to the model.
The results are then similar to those obtained from the deterministic Ebola model, even though the methods of computing the likelihood function are different. (2) The model error covariance matrix is different from zero, i.e., considerable stochasticity is introduced into the Ebola model. This accounts for the situation where we know that the model is not exact. As a result, we obtain parameter posteriors with larger variances; consequently, the model predictions show larger uncertainties, in accordance with the assumption of an incomplete model.
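For reference, the deterministic backbone of such an analysis is the standard SEIR system; the toy Euler integrator and parameter values below are illustrative only (the thesis fits the actual outbreak data with proper ODE/SDE solvers).

```python
import numpy as np

def seir_step(state, beta, sigma, gamma, dt=0.1):
    """One explicit Euler step of the standard SEIR model.
    state = (S, E, I, R); beta = contact rate, sigma = incubation
    rate, gamma = recovery rate. Total population is conserved."""
    S, E, I, R = state
    N = S + E + I + R
    dS = -beta * S * I / N
    dE = beta * S * I / N - sigma * E
    dI = sigma * E - gamma * I
    dR = gamma * I
    return state + dt * np.array([dS, dE, dI, dR])

def basic_reproduction_number(beta, gamma):
    """R0 = beta / gamma for the SEIR model."""
    return beta / gamma
```

In the Bayesian fit, beta, sigma, and gamma are the unknowns sampled by MCMC, and R0 is a derived quantity whose posterior follows from theirs.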

Relevance: 10.00%

Abstract:

For my Licentiate thesis, I conducted research on risk measures. Continuing that research, I now focus on capital allocation. In the proportional capital allocation principle, the choice of risk measure plays a very important part. In the chapters Introduction and Basic concepts, we introduce three definitions of economic capital, discuss the purpose of capital allocation, give different viewpoints on capital allocation, and present an overview of the relevant literature. Risk measures are defined and the concept of a coherent risk measure is introduced. Examples of important risk measures are given, e.g., Value at Risk (VaR) and Tail Value at Risk (TVaR). We also discuss the implications of dependence and review some important distributions. In the chapter on capital allocation we introduce different principles for allocating capital; we prefer to work with the proportional allocation method. In the chapter Capital allocation based on tails, we focus on insurance business lines with heavy-tailed loss distributions. To emphasize capital allocation based on tails, we define the following risk measures: Conditional Expectation, Upper Tail Covariance, and Tail Covariance Premium Adjusted (TCPA). In the final chapter, an illustrative case study, we simulate two sets of data with five insurance business lines using normal copulas and Cauchy copulas. The proportional capital allocation is calculated using TCPA as the risk measure, and it is compared with the result when VaR is used as the risk measure and with covariance capital allocation. This thesis emphasizes that no single allocation principle is perfect for all purposes. When focusing on the tail of losses, allocation based on TCPA is a good choice, since TCPA in a sense includes features of both TVaR and tail covariance.
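The proportional allocation principle itself is simple to state in code: each line receives capital in proportion to its stand-alone risk. The sketch below uses empirical VaR as the plug-in risk measure; TCPA or any other measure could be substituted, and all numbers are illustrative.

```python
import numpy as np

def var(sample, alpha=0.99):
    """Value at Risk as the empirical alpha-quantile of simulated losses."""
    return np.quantile(sample, alpha)

def proportional_allocation(losses, total_capital, risk_measure=var):
    """Proportional principle: split total_capital across business lines
    (columns of `losses`) in proportion to each line's stand-alone risk."""
    rho = np.array([risk_measure(losses[:, j]) for j in range(losses.shape[1])])
    return total_capital * rho / rho.sum()
```

By construction the allocations sum to the total capital; what changes with the risk measure is only how the total is split, which is exactly the comparison carried out in the case study.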

Relevance: 10.00%

Abstract:

Various studies in the field of econophysics have shown that fluid flow has analogous phenomena in financial market behavior, the typical parallel being drawn between energy in fluids and information in markets. However, the geometry of the manifold on which markets act out their dynamics (corporate space) is not yet known. In this thesis, using a seven-year time series of the prices of stocks used to compute the S&P500 index on the New York Stock Exchange, we have created local charts of the corporate space with the goal of finding standing waves and other soliton-like patterns in the behavior of stock price deviations from the S&P500 index. After first calculating the correlation matrix of normalized stock price deviations from the S&P500 index, we performed a local singular value decomposition over a set of four different time windows as guides to the nature of the patterns that may emerge. It turns out that in almost all cases each singular vector is essentially determined by a relatively small set of companies with large positive or negative weights on that singular vector. Over particular time windows these weights are sometimes strongly correlated with at least one industrial sector, and certain sectors are more prone to fast dynamics, whereas others sustain longer standing waves.
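The core computation, normalizing price deviations from the index over a window and extracting the leading singular vectors, can be sketched as follows; the data layout and function name are our assumptions, not the thesis code.

```python
import numpy as np

def leading_modes(prices, index, n_modes=3):
    """SVD of normalized stock-price deviations from an index over one
    time window (rows = days, columns = stocks). Returns the leading
    singular values and right singular vectors; the large-weight entries
    of each vector pick out the companies that dominate that mode."""
    dev = prices - index[:, None]                     # deviation from index
    dev = (dev - dev.mean(axis=0)) / dev.std(axis=0)  # normalize per stock
    _, s_vals, Vt = np.linalg.svd(dev, full_matrices=False)
    return s_vals[:n_modes], Vt[:n_modes]
```

Repeating this over sliding windows and tracking how the dominant weight patterns persist or decay is what distinguishes long-lived "standing wave" modes from fast sectoral dynamics.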

Relevance: 10.00%

Abstract:

Contains: "This is the valuation of foreign gold and silver coins, according to the weights and assays made of them by us, Jean L'HUILLIER, lord of Boulancourt, president of the Chambre des Comptes, and Jean GROLLIER, lord of Aguisy, treasurer of France, summoned together with us masters Alexandre DE LA TORRETTE, president of the Cour des Monnaies, Guillaume MARILLAC, ordinary master in the said Chambre des Comptes, Charles PREVOST, auditor in that Chamber, Claude MARCEL, assayer general of the said coinage, and Guillaume LE GRAS, merchant, burgher of Paris, pursuant to the closed letters of the king" François II; "made and settled in Paris, the 13th day of April, in the year 1559, before Easter" [1560]; original signed (f. 28) by "Luillier", "Grolier", "de La Tourrete", "Marillac", "Le Prevost", "Marcel" and "Legras"; Treatise on coinage, 17th-century copy: "Coins were established not only to avoid the inconveniences of barter..."

Relevance: 10.00%

Abstract:

Two efficient, regio- and stereo-controlled synthetic approaches to racemic analogs of pancratistatin have been accomplished, and they serve as model systems for the total synthesis of optically active 7-deoxy-pancratistatin. In the Diels-Alder approach, an efficient [4+2] cycloaddition of 3,4-methylenedioxy-β-nitrostyrene with Danishefsky's diene to selectively form an exo-nitro adduct was developed as the key step in the construction of the C-ring of the target molecule. In the Michael addition approach, the key step was conjugate addition of an organozinc cuprate to 3,4-methylenedioxy-β-nitrostyrene, followed by a diastereocontrolled closure to form the cyclohexane C-ring of the target molecule via an intramolecular nitro-aldol cyclization on a neutral alumina surface. A chair-like transition state for this cyclization has been established, and such a chelation-controlled transition state can be useful in predicting diastereoselectivity in other related 6-exo-trig nitro-aldol reactions. Cyclization of the products from both approaches using a Bischler-Napieralski type reaction afforded two lycoricidine derivatives, 38 and 50, in good yields. The initial results from these modeling studies, together with analysis of the synthetic strategy, pointed to a chiral pool approach to the total synthesis of optically active 7-deoxy-pancratistatin. Selective monosilylation and iodination of L-tartaric acid provided a chiral precursor for the proposed key Michael transformation. The outlook for the total synthesis of 7-deoxy-pancratistatin by this approach is very promising. A concise synthesis of novel, optically pure, C2-symmetric disulfonylamide chiral ligands starting from L-tartaric acid has also been achieved. This sequence employs metallation of indole followed by SN2 displacement of a dimesylate as the key step. The activity of this C2-symmetric chiral disulfonamide ligand in catalytic enantioselective reactions has been confirmed by nucleophilic addition to benzaldehyde in the disulfonamide-Ti(O-i-Pr)4-diethylzinc system, with a 48% yield and a 33% e.e. value. Such a ligand tethered to a suitable metal complex should also be applicable towards the total synthesis of 7-deoxy-pancratistatin.

Relevance: 10.00%

Abstract:

The present study evaluated the use of stimulus equivalence in teaching monetary skills to school-aged children with autism. An AB within-subject design with periodic probes was used. At pretest, three participants demonstrated relation DA, an auditory-visual relation (matching dictated coin values to printed coin prices). Using a three-choice match-to-sample procedure with a multi-component intervention package, these participants were taught two trained relations: BA (matching coins to printed prices) and CA (matching coin combinations to printed prices). Two participants achieved positive tests of equivalence, and the third participant demonstrated emergent performances with a symmetric and a transitive relation. In addition, two participants were able to show generalization of the learned skills with a parent in a second, naturalistic setting. The present research replicates and extends the results of previous studies by demonstrating that stimulus equivalence can be used to teach an adaptive skill to children with autism.