904 results for Non-linear behavior


Relevance:

90.00%

Publisher:

Abstract:

Co-supervision: Dr. Gonzalo Lizarralde

Relevance:

90.00%

Publisher:

Abstract:

The objective of this thesis is to present multivariate time series models involving random vectors in which every component is non-negative. We consider vMEM models (vector multiplicative error models with non-negative errors) as presented by Cipollini, Engle and Gallo (2006) and Cipollini and Gallo (2010). These models generalize to the multivariate case the MEM models introduced by Engle (2002), and they find applications notably with financial time series. vMEM models make it possible to model time series of asset volumes, durations and conditional variances, to mention only these applications; they also allow joint modelling and the study of the dynamics between the time series that make up the system under study. In order to model multivariate time series with non-negative components, several specifications of the vector error term have been proposed in the literature. A first approach is to use random vectors whose error distribution is such that every component is non-negative. However, finding a sufficiently flexible multivariate distribution defined on the positive support is rather difficult, at least for the applications cited above. As noted by Cipollini, Engle and Gallo (2006), a possible candidate is a multivariate gamma distribution, which however imposes severe restrictions on the contemporaneous correlations between the variables. Given these limited possibilities, a natural alternative is to use copula theory: marginal distributions with non-negative supports are specified, and a copula function accounts for the dependence between the components. One possible estimation technique is maximum likelihood; an alternative is the generalized method of moments (GMM). The latter has the advantage of being semi-parametric in the sense that, unlike the approach imposing a multivariate law, it does not require specifying a multivariate distribution for the error term. In general, the estimation of vMEM models is complicated: existing algorithms must cope with the large number of parameters and the elaborate form of the likelihood function, and in the case of GMM estimation the system to be solved also requires solvers for non-linear systems. In this thesis, considerable effort was devoted to developing computer code (in the R language) to estimate the various parameters of the model. In the first chapter, we define stationary processes, autoregressive processes, autoregressive conditionally heteroscedastic (ARCH) processes and generalized ARCH (GARCH) processes; we also present ACD duration models and MEM models. In the second chapter, we present the copula theory needed for our work in the framework of vector multiplicative error models (vMEM), and we discuss possible estimation methods. In the third chapter, we discuss simulation results for several estimation methods. In the last chapter, applications to financial series are presented. The R code is provided in an appendix, and a conclusion completes the thesis.
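For readers unfamiliar with the notation, the standard first-order MEM recursion of Engle (2002) and its vector extension can be written as follows (a textbook statement of the model class, not a formula quoted from the thesis):

```latex
\[
x_t = \mu_t \varepsilon_t, \qquad \varepsilon_t \ge 0,\;\; \mathbb{E}[\varepsilon_t \mid \mathcal{F}_{t-1}] = 1, \qquad
\mu_t = \omega + \alpha\, x_{t-1} + \beta\, \mu_{t-1},
\]
\[
\mathbf{x}_t = \boldsymbol{\mu}_t \odot \boldsymbol{\varepsilon}_t, \qquad
\boldsymbol{\mu}_t = \boldsymbol{\omega} + A\,\mathbf{x}_{t-1} + B\,\boldsymbol{\mu}_{t-1}, \qquad
\mathbb{E}[\boldsymbol{\varepsilon}_t \mid \mathcal{F}_{t-1}] = \mathbf{1},
\]
```

where ⊙ denotes the elementwise product and, in the copula approach described above, the joint law of the components of the error vector is built from non-negative marginals tied together by a copula.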

Relevance:

90.00%

Publisher:

Abstract:

To ensure the quality of machined products at minimum machining cost and maximum machining effectiveness, it is very important to select optimum parameters when metal cutting machine tools are employed. Traditionally, the experience of the operator plays a major role in the selection of optimum metal cutting conditions; however, attaining optimum values every time is difficult even for a skilled operator. The non-linear nature of the machining process has compelled engineers to search for more effective methods of optimization. The design objective preceding most engineering design activities is simply to minimize the cost of production or to maximize production efficiency. The main aim of the research work reported here is to build robust optimization algorithms by exploiting ideas that nature has to offer and using them to solve real-world optimization problems in manufacturing processes. In this thesis, after an exhaustive literature review, several optimization techniques used in various manufacturing processes were identified. The selection of optimal cutting parameters, such as depth of cut, feed and speed, is a very important issue for every machining process. Experiments were designed using the Taguchi technique, and dry turning of SS420 was performed on a Kirloskar Turn Master 35 lathe. S/N and ANOVA analyses were performed to find the optimum level and the percentage contribution of each parameter, and the optimum machining parameters were obtained from the experiments using S/N analysis. Optimization algorithms begin with one or more design solutions supplied by the user and then iteratively examine new design solutions in the search space in order to reach the true optimum. A mathematical model for surface roughness was developed using response surface analysis, and the model was validated against published results from the literature. Optimization methodologies such as Simulated Annealing (SA), Particle Swarm Optimization (PSO), the Conventional Genetic Algorithm (CGA) and an Improved Genetic Algorithm (IGA) were applied to optimize the machining parameters for dry turning of SS420 material. All of these algorithms were tested for efficiency, robustness and accuracy, and it was observed that they often outperform conventional optimization methods on difficult real-world problems. The SA, PSO, CGA and IGA codes were developed in MATLAB. For each evolutionary algorithm, optimum cutting conditions are provided to achieve a better surface finish. The computational results using SA clearly demonstrate that the proposed solution procedure is quite capable of solving such complicated problems effectively and efficiently. Particle Swarm Optimization is a relatively recent heuristic search method whose mechanics are inspired by the swarming or collaborative behavior of biological populations; from the results it was observed that PSO provides better results and is also more computationally efficient. Based on the results obtained using CGA and IGA for the optimization of the machining process, the proposed IGA provides better results than the conventional GA. The improved genetic algorithm incorporates a stochastic crossover technique and an artificial initial population scheme to provide a faster search mechanism.
Finally, a comparison among these algorithms was made for the specific example of dry turning of SS420 material, arriving at optimum machining parameters of feed, cutting speed, depth of cut and tool nose radius with minimum surface roughness as the criterion. To summarize, the research work fills conspicuous gaps between research prototypes and industry requirements by simulating the evolutionary procedures that nature uses to optimize its own systems.
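As an illustration of the kind of evolutionary search described above, the sketch below applies a basic particle swarm optimizer to a quadratic response-surface model of surface roughness in cutting speed, feed and depth of cut. The coefficients, bounds and PSO settings are hypothetical placeholders, not the fitted model or tuned parameters reported in the thesis.

```python
# Minimal PSO sketch for minimizing a (hypothetical) surface-roughness model.
import numpy as np

def surface_roughness(p):
    """Toy response surface: p = [cutting speed (m/min), feed (mm/rev), depth of cut (mm)]."""
    v, f, d = p
    return 2.0 - 0.004 * v + 12.0 * f + 0.3 * d + 20.0 * f ** 2 + 0.002 * f * v

bounds = np.array([[40.0, 160.0],    # cutting speed
                   [0.05, 0.30],     # feed
                   [0.25, 1.50]])    # depth of cut

rng = np.random.default_rng(0)
n_particles, n_iter = 30, 200
w, c1, c2 = 0.7, 1.5, 1.5            # inertia, cognitive and social weights

pos = rng.uniform(bounds[:, 0], bounds[:, 1], size=(n_particles, 3))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([surface_roughness(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, bounds[:, 0], bounds[:, 1])   # keep particles in bounds
    vals = np.array([surface_roughness(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("Minimum predicted roughness:", surface_roughness(gbest), "at", gbest)
```

The same objective could be handed to a simulated annealing or genetic algorithm loop for the kind of head-to-head comparison the thesis reports.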

Relevance:

90.00%

Publisher:

Abstract:

Identification and control of non-linear dynamical systems are challenging problems for control engineers. The topic is equally relevant in communication, weather prediction, biomedical systems and even social systems, where nonlinearity is an integral part of the system behavior. Most real-world systems are nonlinear in nature, and nonlinear system identification/modeling has wide applications. The basic approach in analyzing nonlinear systems is to build a model from the known behavior manifest in the form of the system output. The modeling problem boils down to computing a suitably parameterized model representing the process; the parameters of the model are adjusted to optimize a performance function based on the error between the given process output and the identified process/model output. While linear system identification is well established with many classical approaches, most of those methods cannot be directly applied to nonlinear system identification. The problem becomes more complex if the system is completely unknown and only the output time series is available; the blind recognition problem is the direct consequence of such a situation, and this thesis concentrates on such problems. The capability of artificial neural networks to approximate many nonlinear input-output maps makes them predominantly suitable for building a function for the identification of nonlinear systems where only the time series is available. The literature is rich with a variety of algorithms to train the neural network model. A comprehensive study of the computation of the model parameters using the different algorithms, and a comparison among them to choose the best technique, is still a demanding requirement of practical system designers that is not available in a concise form in the literature. The thesis is thus an attempt to develop and evaluate some of the well-known algorithms and to propose some new techniques in the context of blind recognition of nonlinear systems. It also attempts to establish the relative merits and demerits of the different approaches; comprehensiveness is achieved by utilizing well-known evaluation techniques from statistics. The study concludes by providing the results of implementing the currently available and modified versions, as well as the newly introduced techniques, for nonlinear blind system modeling, followed by a comparison of their performance. It is expected that such a comprehensive study and comparison will be of great relevance in many fields including chemical, electrical, biological, financial and weather data analysis. Further, the results reported would be of immense help to practical system designers and analysts in selecting the most appropriate method based on the goodness of the model for the particular context.
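A minimal sketch of the time-series-only identification setting discussed above: lagged output samples are fed to a small feedforward network that learns the one-step-ahead map. The toy process, lag order and network size are assumptions for illustration; the thesis's training algorithms and evaluation protocol are not reproduced here.

```python
# Blind identification sketch: fit a neural map from lagged outputs to the next output.
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)
n, lags = 1500, 3
y = np.zeros(n)
for t in range(2, n):                         # toy nonlinear autoregressive process
    y[t] = 0.6 * y[t - 1] - 0.4 * np.tanh(y[t - 2]) + 0.1 * rng.standard_normal()

# build (lagged inputs, next value) training pairs from the observed series only
X = np.column_stack([y[lags - k - 1:n - k - 1] for k in range(lags)])
target = y[lags:]

model = MLPRegressor(hidden_layer_sizes=(16,), activation="tanh",
                     max_iter=3000, random_state=0)
model.fit(X[:1000], target[:1000])            # identify the system on the first part
pred = model.predict(X[1000:])                # validate one-step-ahead on the rest
print("one-step-ahead RMSE:", np.sqrt(np.mean((pred - target[1000:]) ** 2)))
```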

Relevance:

90.00%

Publisher:

Abstract:

This thesis is divided into 9 chapters and deals with the modification of TiO2 for various applications including photocatalysis, thermal reactions, photovoltaics and non-linear optics. Chapter 1 gives a brief introduction to the topic of study. The applications of modified titania systems in various fields are discussed concisely, and the scope and objectives of the present work are also set out in this chapter. Chapter 2 explains the strategy adopted for the synthesis of the metal, non-metal co-doped TiO2 systems. A hydrothermal technique was employed for the preparation of the co-doped TiO2 systems, where Ti[OCH(CH3)2]4, urea and metal nitrates were used as the sources for TiO2, N and the metals respectively. In all the co-doped systems, urea and Ti[OCH(CH3)2]4 were taken in a 1:1 molar ratio and the concentration of the metals was varied. Five different co-doped catalytic systems were prepared, and for each catalyst three versions were made by varying the metal concentration. A brief account of the physico-chemical techniques used for the characterization of the materials is also presented in this chapter. These include X-ray Diffraction (XRD), Raman Spectroscopy, FTIR analysis, Thermo Gravimetric Analysis, Energy Dispersive X-ray Analysis (EDX), Scanning Electron Microscopy (SEM), UV-Visible Diffuse Reflectance Spectroscopy (UV-Vis DRS), Transmission Electron Microscopy (TEM), BET Surface Area Measurements and X-ray Photoelectron Spectroscopy (XPS). Chapter 3 contains the results and discussion of the characterization techniques used for analyzing the prepared systems. Characterization is an inevitable part of materials research, and determining the physico-chemical properties of the prepared materials with suitable techniques is crucial for finding their exact field of application. It is clear from the XRD patterns that the photocatalytically active anatase phase dominates in the calcined samples, with peaks at 2θ values around 25.4°, 38°, 48.1°, 55.2° and 62.7° corresponding to the (101), (004), (200), (211) and (204) crystal planes (JCPDS 21-1272) respectively. In the case of the Pr-N-Ti sample, however, a new peak was observed at 2θ = 30.8° corresponding to the (121) plane of the polymorph brookite. There are no visible peaks corresponding to the dopants, which may be due to their low concentration or may indicate good dispersion of the impurities in the TiO2. The crystallite size of the samples was calculated from the Scherrer equation using the full width at half maximum (FWHM) of the (101) peak of the anatase phase. The crystallite size of all the co-doped TiO2 samples was found to be lower than that of bare TiO2, which indicates that doping metal ions of larger ionic radius into the TiO2 lattice causes some lattice distortion that suppresses the growth of the TiO2 nanoparticles. The structural identity obtained from the XRD patterns is further confirmed by Raman spectra; anatase has six Raman active modes. The band gap of the co-doped systems was calculated using the Kubelka-Munk equation and was found to be lower than that of pure TiO2. The stability of the prepared systems was assessed by thermo gravimetric analysis. FT-IR was performed to identify the functional groups and to study the surface changes occurring during modification. EDX was used to determine the impurities present in the system, and the EDX spectra of all the co-doped samples show signals directly related to the dopants.
The spectra of all the co-doped systems contain O and Ti as the main components with low concentrations of the doped elements. The morphologies of the prepared systems were obtained from SEM and TEM analysis, and the average particle sizes were derived from histogram data. The electronic structures of the samples were identified from XPS measurements. Chapter 4 describes the photocatalytic degradation of the herbicides Atrazine and Metolachlor using the metal, non-metal co-doped titania systems. The percentage of degradation was analyzed by HPLC. Parameters such as the effect of different catalysts, effect of time, effect of catalyst amount and reusability were studied. Chapter 5 deals with the photo-oxidation of some anthracene derivatives by the co-doped catalytic systems. These anthracene derivatives fall under the category of polycyclic aromatic hydrocarbons (PAH). Due to the presence of stable benzene rings, most PAH show strong resistance to biological degradation and to the common methods employed for their removal, and according to the Environmental Protection Agency most PAH are highly toxic. TiO2 photochemistry has been extensively investigated as a method for the catalytic conversion of such organic compounds, highlighting its potential in green chemistry. There are essentially two ways to remove pollutants from the ecosystem: complete mineralization, or conversion of the toxic compounds into compounds less toxic than the starting material. In this chapter we concentrate on the second approach. The catalysts used were Gd(1wt%)-N-Ti, Pd(1wt%)-N-Ti and Ag(1wt%)-N-Ti. All the PAH were successfully converted to anthraquinone, a compound with diverse applications in industrial as well as medical fields. Substitution of the 10th position of the PAH by a phenyl ring reduces the feasibility of the photoreaction and produces 9-hydroxy-9-phenyl anthrone (9H9PA) as an intermediate species. The products were separated and purified by column chromatography using a 70:30 hexane/DCM mixture as the mobile phase, and the resulting products were characterized thoroughly by 1H NMR, IR spectroscopy and GC-MS analysis. Chapter 6 elucidates the heterogeneous Suzuki coupling reaction over Cu/Pd bimetallic catalysts supported on TiO2. Sol-gel synthesis followed by impregnation was adopted for the preparation of Cu/Pd-TiO2. The prepared system was characterized by XRD, TG-DTG, SEM, EDX, BET surface area and XPS. The product was separated and purified by column chromatography using hexane as the mobile phase. A maximum isolated yield of biphenyl of around 72% was obtained in DMF using Cu(2wt%)-Pd(4wt%)-Ti as the catalyst; the most effective solvent, base and catalyst were found to be DMF, K2CO3 and Cu(2wt%)-Pd(4wt%)-Ti respectively. Chapter 7 gives an idea of the photovoltaic (PV) applications of TiO2-based thin films. Because of the energy crisis, the whole world is looking for new sustainable energy sources, and harnessing solar energy is one of the most promising ways to tackle this issue. The presently dominant photovoltaic technologies are based on inorganic materials, but high material and manufacturing costs and low power conversion efficiency limit their popularization. A lot of research has been directed towards the development of low-cost PV technologies, of which organic photovoltaic (OPV) devices are among the most promising.
Here, two TiO2 thin films of different thickness were prepared by spin coating. The films were characterized by XRD, AFM and conductivity measurements, and their thickness was measured with a stylus profiler. This chapter mainly concentrates on the fabrication of an inverted heterojunction solar cell using the conducting polymer MEH-PPV as the photoactive layer, with TiO2 as the electron transport layer. Thin films of MEH-PPV were also prepared by spin coating. Two fullerene derivatives, PCBM and ICBA, were introduced into the device in order to improve the power conversion efficiency. Effective charge transfer between the conducting polymer and ICBA was confirmed by fluorescence quenching studies. The fabricated inverted heterojunction exhibited a maximum power conversion efficiency of 0.22% with ICBA as the acceptor molecule. Chapter 8 describes the third-order nonlinear optical properties of bare and noble-metal-modified TiO2 thin films. The films were fabricated by spray pyrolysis. Sol-gel derived Ti[OCH(CH3)2]4 in CH3CH2OH/CH3COOH was used as the precursor for TiO2, and the precursors for Au, Ag and Pd were aqueous solutions of HAuCl4, AgNO3 and Pd(NO3)2 respectively. The prepared films were characterized by XRD, SEM and EDX. The nonlinear optical properties of the prepared materials were investigated by the Z-scan technique using an Nd-YAG laser (532 nm, 7 ns, 10 Hz), and the nonlinear coefficients were obtained by fitting the experimental Z-scan traces with theoretical plots. Nonlinear absorption is a nonlinear change (increase or decrease) in absorption with increasing intensity, and is mainly divided into two types: saturable absorption (SA) and reverse saturable absorption (RSA). Depending on the pump intensity and on the absorption cross-section at the excitation wavelength, most molecules show nonlinear absorption. With increasing intensity, if the excited states saturate owing to their long lifetimes, the transmission shows SA characteristics and absorption decreases as intensity increases. If, however, the excited state absorbs more strongly than the ground state, the transmission shows RSA characteristics. In our work most of the materials show SA behavior and some exhibit RSA behavior. Both properties depend purely on the nature of the materials and the alignment of energy states within them, and both SA and RSA have immense applications in electronic devices. The important results obtained from the various studies are presented in Chapter 9.
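For reference, the two standard relations invoked in Chapter 3 for crystallite-size and band-gap estimation are usually written as follows (textbook forms, not expressions quoted from the thesis):

```latex
\[
D = \frac{K\lambda}{\beta\cos\theta}, \qquad
F(R_\infty) = \frac{(1-R_\infty)^2}{2R_\infty},
\]
```

where D is the crystallite size, K ≈ 0.9 the shape factor, λ the X-ray wavelength, β the FWHM of the (101) anatase reflection in radians and θ the Bragg angle; R∞ is the diffuse reflectance and F(R∞) the Kubelka-Munk function, from which the band gap is read off a Tauc-type plot against photon energy.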

Relevance:

90.00%

Publisher:

Abstract:

Various meanings are attached to the term fashion. Fashion is not merely a synonym for clothing; rather, it denotes the constant change of collective behavior and thus forms the counterpart to tradition. The transience of a fashion is as much a defining feature as is behavior that results from the interaction of people. Fashion can be represented either as an exogenous shock or as generated by endogenous processes. In the synergetic model, individuals generate certain order parameters at the macro level that can be interpreted as fashions and to which the individuals in turn orient themselves. In addition to a comprehensive discussion of the term fashion, the state of research in the social sciences and economics on explaining fashion is reviewed. The starting point of the dissertation is the lack of an economic explanation of fashion that captures the many facets of this collective phenomenon in a mathematical model while taking into account the most essential motives of human action. Such a model is developed in this work. Besides socially dependent behavior, which is a fundamental characteristic of fashion, particular attention is paid to the aspect of novelty in the context of fashion. The problem of modelling curiosity behavior and novelty is addressed in detail. It is neither the aim of this work to identify the origin of the concrete content of a fashion, nor to work out the specific circumstances in which novelty arises; rather, the way novelty operates in the context of fashion is analysed in order to represent different ways in which fashion can emerge. In addition, an original empirical study on people's behavior with respect to clothing and its novelty is presented. The survey data are classified using a so-called Kohonen map (self-organizing map), which in particular can take non-linear relationships between the variables into account. Within a synergetic explanatory framework this map is of great interest because it organizes itself and is therefore model-adequate to the synergetic approach.
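The classification step mentioned above relies on a Kohonen self-organizing map. A minimal NumPy sketch of such a map is given below; the survey matrix is random placeholder data, not the clothing/novelty questionnaire analysed in the dissertation.

```python
# Minimal self-organizing (Kohonen) map sketch -- illustrative only.
import numpy as np

rng = np.random.default_rng(0)
survey = rng.random((200, 6))            # 200 respondents, 6 standardized items (hypothetical)

grid_h, grid_w, dim = 8, 8, survey.shape[1]
weights = rng.random((grid_h, grid_w, dim))
# grid coordinates of every map node, used for the neighborhood function
coords = np.stack(np.meshgrid(np.arange(grid_h), np.arange(grid_w), indexing="ij"), axis=-1)

n_iter, lr0, sigma0 = 5000, 0.5, 3.0
for t in range(n_iter):
    x = survey[rng.integers(len(survey))]
    # best-matching unit: node whose weight vector is closest to the sample
    dists = np.linalg.norm(weights - x, axis=-1)
    bmu = np.unravel_index(np.argmin(dists), dists.shape)
    # exponentially decaying learning rate and neighborhood radius
    lr = lr0 * np.exp(-t / n_iter)
    sigma = sigma0 * np.exp(-t / n_iter)
    # Gaussian neighborhood around the BMU on the map grid
    grid_dist2 = np.sum((coords - np.array(bmu)) ** 2, axis=-1)
    h = np.exp(-grid_dist2 / (2 * sigma ** 2))[..., None]
    weights += lr * h * (x - weights)

# each respondent is assigned to the map node that best matches their answers
assignments = [np.unravel_index(np.argmin(np.linalg.norm(weights - x, axis=-1)),
                                (grid_h, grid_w)) for x in survey]
```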

Relevance:

90.00%

Publisher:

Abstract:

Web services from different partners can be combined into applications that realize a more complex business goal. Such applications, built as Web service compositions, define how interactions between Web services take place in order to implement the business logic. Web service compositions not only have to provide the desired functionality but also have to comply with certain Quality of Service (QoS) levels. Maximizing the users' satisfaction, also reflected as Quality of Experience (QoE), is a primary goal to be achieved in a Service-Oriented Architecture (SOA). Unfortunately, in a dynamic environment like SOA, unforeseen situations might appear, such as services not being available or not responding within the desired time frame. In such situations, appropriate actions need to be triggered in order to avoid the violation of QoS and QoE constraints. In this thesis, solutions are developed to manage Web services and Web service compositions with regard to QoS and QoE requirements. The Business Process Rules Language (BPRules) was developed to manage Web service compositions when undesired QoS or QoE values are detected. BPRules provides a rich set of management actions that may be triggered to control the service composition and improve its quality behavior. Regarding the quality properties, BPRules makes it possible to distinguish between the QoS values promised by the service providers, the QoE values assigned by end-users, the monitored QoS measured by our BPR framework, and the predicted QoS and QoE values. BPRules facilitates the specification of user groups characterized by different context properties and allows triggering a personalized, context-aware service selection tailored to the specified user groups. In a service market where a multitude of services with the same functionality but different quality values are available, the right services need to be selected for realizing the service composition. We developed new and efficient heuristic algorithms that are applied to choose high-quality services for the composition. BPRules offers the possibility to integrate multiple service selection algorithms, and the selection algorithms are also applicable to non-linear objective functions and constraints. The BPR framework includes new approaches for context-aware service selection and quality property prediction; we consider the location information of users and services as a context dimension for the prediction of response time and throughput. The BPR framework combines all new features and contributions into a comprehensive management solution. Furthermore, it facilitates flexible monitoring of QoS properties without having to modify the description of the service composition. We show how the different modules of the BPR framework work together in order to execute the management rules. We evaluate our selection algorithms and show that they outperform a genetic algorithm from related research, and the evaluation reveals how context data can be used for a personalized prediction of response time and throughput.
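To make the selection problem concrete, here is a small greedy heuristic that picks one candidate service per task while keeping the aggregated response time and cost of the composition within global limits. It only illustrates the shape of QoS-aware selection; it is not the BPR framework's algorithm, and all candidate values and weights are hypothetical.

```python
# Greedy QoS-aware service selection sketch for a sequential composition.
from typing import Dict, List

candidates: List[List[Dict]] = [
    # one list of candidate services per abstract task: response time (ms), cost, reliability
    [{"rt": 120, "cost": 4, "rel": 0.99}, {"rt": 60, "cost": 9, "rel": 0.97}],
    [{"rt": 200, "cost": 2, "rel": 0.95}, {"rt": 90, "cost": 6, "rel": 0.99}],
    [{"rt": 80, "cost": 5, "rel": 0.98}, {"rt": 150, "cost": 3, "rel": 0.96}],
]
MAX_RT, MAX_COST = 400, 18      # global constraints on the whole composition

def utility(s: Dict) -> float:
    # simple weighted score: prefer fast, cheap, reliable services
    return -0.01 * s["rt"] - 0.5 * s["cost"] + 10 * s["rel"]

selection, total_rt, total_cost = [], 0, 0
for task in candidates:
    feasible = [s for s in task
                if total_rt + s["rt"] <= MAX_RT and total_cost + s["cost"] <= MAX_COST]
    chosen = max(feasible or task, key=utility)   # fall back if nothing stays feasible
    selection.append(chosen)
    total_rt += chosen["rt"]
    total_cost += chosen["cost"]

print(selection, total_rt, total_cost)
```

A non-linear objective (e.g. multiplying reliabilities across tasks) can be handled by the same loop simply by changing the scoring function, which is the flexibility the abstract alludes to.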

Relevance:

90.00%

Publisher:

Abstract:

A modeling study of hippocampal pyramidal neurons is described. This study is based on simulations using HIPPO, a program which simulates the somatic electrical activity of these cells. HIPPO is based on a) descriptions of eleven non-linear conductances that have been either reported for this class of cell in the literature or postulated in the present study, and b) an approximation of the electrotonic structure of the cell that is derived in this thesis from data on the linear properties of these cells. HIPPO is used a) to integrate empirical data from a variety of sources on the electrical characteristics of this type of cell, b) to investigate the functional significance of the various elements that underlie the electrical behavior, and c) to provide a tool for the electrophysiologist that supplements direct observation of these cells and offers a method of testing speculations regarding parameters that are not accessible.
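The generic current-balance form used by conductance-based somatic models of this kind is (a standard formulation, not HIPPO's specific parameterization):

```latex
\[
C_m \frac{dV}{dt} = -\sum_i \bar{g}_i\, m_i^{p_i} h_i^{q_i}\,(V - E_i) + I_{\mathrm{inj}}, \qquad
\frac{dm_i}{dt} = \frac{m_{i,\infty}(V) - m_i}{\tau_{m_i}(V)},
\]
```

with one such pair of activation/inactivation equations for each of the non-linear conductances in the model.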

Relevance:

90.00%

Publisher:

Abstract:

Since robots are typically designed with an individual actuator at each joint, the control of these systems is often difficult and non-intuitive. This thesis explains a more intuitive control scheme called Virtual Model Control, and demonstrates its simplicity and ease of use by applying it to a simulated walking hexapod. Virtual Model Control uses imagined mechanical components to create virtual forces, which are applied through the joint torques of real actuators. This method produces a straightforward means of controlling joint torques to produce a desired robot behavior. Due to the intuitive nature of this control scheme, the design of a virtual model controller is similar to the design of a controller built from basic mechanical components. The ease of this control scheme also facilitates the use of a high-level control system, operating above the low-level virtual model controllers, to modulate the parameters of the imaginary mechanical components. In order to apply Virtual Model Control to parallel mechanisms, a solution to the force distribution problem is required. This thesis uses an extension of Gardner's Partitioned Force Control method which allows for the specification of constrained degrees of freedom. This virtual model control technique was applied to a simulated hexapod robot. Although the hexapod is a highly non-linear, parallel mechanism, the virtual models allowed textbook control solutions to be used while the robot was walking. Using a simple linear control law, the robot walked while simultaneously balancing a pendulum and tracking an object.
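The core of the scheme is the Jacobian-transpose mapping from virtual Cartesian forces to joint torques. The sketch below attaches a virtual spring-damper to the foot of a planar two-link leg; link lengths, gains and the commanded foot position are hypothetical, and this illustrates the idea rather than the thesis's hexapod controller.

```python
# Virtual model control sketch: virtual spring-damper force mapped to joint torques.
import numpy as np

l1, l2 = 0.4, 0.4                       # link lengths (m)

def forward_kinematics(q):
    q1, q2 = q
    x = l1 * np.cos(q1) + l2 * np.cos(q1 + q2)
    y = l1 * np.sin(q1) + l2 * np.sin(q1 + q2)
    return np.array([x, y])

def jacobian(q):
    q1, q2 = q
    s1, c1 = np.sin(q1), np.cos(q1)
    s12, c12 = np.sin(q1 + q2), np.cos(q1 + q2)
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

K = np.diag([400.0, 400.0])             # virtual spring stiffness (N/m)
D = np.diag([40.0, 40.0])               # virtual damper coefficient (N s/m)
x_des = np.array([0.3, -0.5])           # desired foot position in the body frame

def virtual_model_torques(q, q_dot):
    x = forward_kinematics(q)
    x_dot = jacobian(q) @ q_dot
    force = K @ (x_des - x) - D @ x_dot # virtual spring-damper force on the foot
    return jacobian(q).T @ force        # joint torques realizing the virtual force

print(virtual_model_torques(np.array([-1.0, 1.2]), np.array([0.0, 0.0])))
```

A higher-level controller would then modulate K, D and x_des over the gait cycle, which is the role the abstract assigns to the high-level layer.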

Relevance:

90.00%

Publisher:

Abstract:

Associative memory networks such as Radial Basis Functions, Neurofuzzy and Fuzzy Logic networks used for modelling nonlinear processes suffer from the curse of dimensionality (COD), in that as the input dimension increases the parameterization, computation cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for the construction of optimal piecewise locally linear models over a Delaunay partition of the input space, which overcomes the COD and generates locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture-of-experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilized to search for a globally optimal solution of the Delaunay input space partition. A benchmark non-linear time series is used to demonstrate the new approach.
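A minimal sketch of the piecewise locally linear idea, assuming a fixed (here random) set of partition vertices: SciPy's Delaunay triangulation assigns each training sample to a simplex, and a separate affine model is fitted per simplex. The VFSR optimization of the vertex positions and the mixture-of-experts training rule described in the abstract are deliberately omitted.

```python
# Piecewise locally linear modelling over a Delaunay partition of the input space.
import numpy as np
from scipy.spatial import Delaunay

rng = np.random.default_rng(1)
X = rng.uniform(-1, 1, size=(500, 2))                                 # training inputs
y = np.sin(3 * X[:, 0]) * X[:, 1] + 0.05 * rng.standard_normal(500)   # toy target

vertices = rng.uniform(-1, 1, size=(12, 2))   # partition vertices (fixed here, optimized in the paper)
tri = Delaunay(vertices)
simplex_of = tri.find_simplex(X)              # which simplex each sample falls in

models = {}
for s in np.unique(simplex_of):
    if s == -1:                               # points outside the triangulation
        continue
    mask = simplex_of == s
    A = np.hstack([X[mask], np.ones((mask.sum(), 1))])   # local affine regressor [x1, x2, 1]
    coef, *_ = np.linalg.lstsq(A, y[mask], rcond=None)
    models[s] = coef                          # one linear model per simplex

def predict(x):
    s = int(tri.find_simplex(x[None, :])[0])
    if s not in models:
        return np.nan                         # outside the partition / empty cell
    return np.hstack([x, 1.0]) @ models[s]

print(predict(np.array([0.2, -0.4])))
```

Because each local model is affine, any of these cells can be handed directly to a linear controller or estimator, which is the point made in the abstract.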

Relevance:

90.00%

Publisher:

Abstract:

This paper shows that a wavelet network and a linear term can be advantageously combined for the purpose of non-linear system identification. The theoretical foundation of this approach is laid by proving that radial wavelets are orthogonal to linear functions. A constructive procedure for building such nonlinear regression structures, termed linear-wavelet models, is described. For illustration, simulation data are used to identify a model for a two-link robotic manipulator. The results show that the introduction of wavelets does improve the prediction ability of a linear model.
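For illustration, a linear-wavelet regression of the kind described can be assembled as a least-squares fit over a regressor matrix containing a linear term plus radial wavelet units. The Mexican-hat-type wavelet, centres, dilation and toy data below are assumptions; the paper's constructive selection procedure is not shown.

```python
# Sketch of a "linear-wavelet" regression: linear term plus radial wavelet units.
import numpy as np

rng = np.random.default_rng(2)
X = rng.uniform(-2, 2, size=(400, 2))
y = 0.5 * X[:, 0] - 0.3 * X[:, 1] + np.sin(2 * X[:, 0]) * np.exp(-X[:, 1] ** 2)

def radial_wavelet(r):
    # Mexican-hat-type radial wavelet (one common choice of radial wavelet)
    return (1.0 - r ** 2) * np.exp(-0.5 * r ** 2)

centres = rng.uniform(-2, 2, size=(15, 2))    # wavelet centres (here random)
dilation = 1.0

# regressor matrix: [bias | linear terms | wavelet activations]
r = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=-1) / dilation
Phi = np.hstack([np.ones((len(X), 1)), X, radial_wavelet(r)])
theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)

y_hat = Phi @ theta
print("RMS error:", np.sqrt(np.mean((y - y_hat) ** 2)))
```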

Relevance:

90.00%

Publisher:

Abstract:

Non-Gaussian/non-linear data assimilation is becoming an increasingly important area of research in the Geosciences as the resolution and non-linearity of models are increased and more and more non-linear observation operators are being used. In this study, we look at the effect of relaxing the assumption of a Gaussian prior on the impact of observations within the data assimilation system. Three different measures of observation impact are studied: the sensitivity of the posterior mean to the observations, mutual information and relative entropy. The sensitivity of the posterior mean is derived analytically when the prior is modelled by a simplified Gaussian mixture and the observation errors are Gaussian. It is found that the sensitivity is a strong function of the value of the observation and proportional to the posterior variance. Similarly, relative entropy is found to be a strong function of the value of the observation. However, the errors in estimating these two measures using a Gaussian approximation to the prior can differ significantly. This hampers conclusions about the effect of the non-Gaussian prior on observation impact. Mutual information does not depend on the value of the observation and is seen to be close to its Gaussian approximation. These findings are illustrated with the particle filter applied to the Lorenz ’63 system. This article is concluded with a discussion of the appropriateness of these measures of observation impact for different situations.
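The three measures of observation impact studied can be written, for prior p(x), observation y and posterior p(x | y), in their usual forms (standard definitions consistent with, but not copied from, the article):

```latex
\[
S = \frac{\partial\, \mathbb{E}[x \mid y]}{\partial y}, \qquad
MI = \iint p(x,y)\,\ln\frac{p(x,y)}{p(x)\,p(y)}\,\mathrm{d}x\,\mathrm{d}y, \qquad
RE = \int p(x \mid y)\,\ln\frac{p(x \mid y)}{p(x)}\,\mathrm{d}x .
\]
```

Sensitivity and relative entropy depend on the realized observation value, whereas mutual information is an expectation over observations, which is why it behaves differently in the comparison above.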

Relevance:

90.00%

Publisher:

Abstract:

This paper presents and implements a number of tests for non-linear dependence and a test for chaos using transactions prices on three LIFFE futures contracts: the Short Sterling interest rate contract, the Long Gilt government bond contract, and the FTSE 100 stock index futures contract. While previous studies of high frequency futures market data use only those transactions which involve a price change, we use all of the transaction prices on these contracts whether they involve a price change or not. Our results provide irrefutable evidence of non-linearity in two of the three contracts, although we find no evidence of a chaotic process in any of the series. We are also able to provide some indications of the effect of the duration of the trading day on the degree of non-linearity of the underlying contract. The trading day for the Long Gilt contract was extended in August 1994, and prior to this date there is no evidence of any structure in the return series. However, after the extension of the trading day we do find evidence of a non-linear return structure.
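The main ingredient of BDS-type tests for non-linear dependence is the correlation integral over m-dimensional histories of the return series. The sketch below computes it for simulated returns (the study itself uses LIFFE transaction prices); the full BDS standardization and the chaos diagnostics are not reproduced here.

```python
# Correlation integral C_m(eps), the building block of BDS-type tests
# (and of Grassberger-Procaccia chaos diagnostics).
import numpy as np

rng = np.random.default_rng(3)
returns = rng.standard_normal(800)           # placeholder for futures returns

def correlation_integral(x, m, eps):
    """Fraction of pairs of m-histories of x lying within eps (sup norm)."""
    n = len(x) - m + 1
    emb = np.column_stack([x[i:i + n] for i in range(m)])     # m-dimensional histories
    # pairwise sup-norm distances between all embedded vectors
    d = np.max(np.abs(emb[:, None, :] - emb[None, :, :]), axis=-1)
    iu = np.triu_indices(n, k=1)
    return np.mean(d[iu] < eps)

eps = 0.5 * np.std(returns)
c1 = correlation_integral(returns, 1, eps)
c2 = correlation_integral(returns, 2, eps)
# under i.i.d. data C_2(eps) is approximately C_1(eps)**2; the BDS statistic
# standardizes the difference C_m(eps) - C_1(eps)**m
print(c2 - c1 ** 2)
```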