913 results for Non-linear fiber
Abstract:
The objective of this thesis is to present multivariate time series models involving random vectors whose components are all non-negative. We consider the vMEM models (vector multiplicative error models with non-negative errors) presented by Cipollini, Engle and Gallo (2006) and Cipollini and Gallo (2010). These models generalize to the multivariate case the MEM models introduced by Engle (2002), and they find applications notably in financial time series. vMEM models can describe time series of asset volumes, durations and conditional variances, to name only these applications. They also allow a joint modelling of, and a study of the dynamics between, the time series forming the system under study. To model multivariate time series with non-negative components, several specifications of the vector error term have been proposed in the literature. A first approach uses random vectors whose error distribution is such that each component is non-negative. However, finding a sufficiently flexible multivariate distribution defined on the positive support is rather difficult, at least for the applications cited above. As noted by Cipollini, Engle and Gallo (2006), one possible candidate is a multivariate gamma distribution, which however imposes severe restrictions on the contemporaneous correlations between the variables. Given these limited possibilities, an alternative is to rely on copula theory: marginal distributions with non-negative supports are specified, and a copula function accounts for the dependence between the components.
One possible estimation technique is maximum likelihood. An alternative is the generalized method of moments (GMM). The latter has the advantage of being semi-parametric, in the sense that, unlike the approach imposing a multivariate law, it does not require specifying a multivariate distribution for the error term. In general, estimating vMEM models is difficult: existing algorithms must handle the large number of parameters and the elaborate form of the likelihood function, and GMM estimation additionally requires solvers for non-linear systems. In this thesis, considerable effort was devoted to developing computer code (in the R language) to estimate the various model parameters. In the first chapter, we define stationary processes, autoregressive processes, autoregressive conditionally heteroskedastic (ARCH) processes and generalized ARCH (GARCH) processes; we also present the ACD duration models and the MEM models. In the second chapter, we present the copula theory needed for our work, in the framework of vector multiplicative error models with non-negative errors (vMEM), and discuss the possible estimation methods. In the third chapter, we discuss simulation results for several estimation methods. In the last chapter, applications to financial series are presented. The R code is provided in an appendix. A conclusion completes the thesis.
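As a minimal illustration of the univariate MEM building block underlying the vMEM models discussed above (a hedged sketch, not the thesis's R code, which is in its appendix), a MEM(1,1) with unit-mean gamma errors can be simulated as follows; all parameter values are hypothetical:

```python
import numpy as np

def simulate_mem(omega=0.1, alpha=0.2, beta=0.7, n=1000, shape=4.0, seed=0):
    """Simulate a univariate MEM(1,1): x_t = mu_t * eps_t, with
    mu_t = omega + alpha * x_{t-1} + beta * mu_{t-1} and
    eps_t ~ Gamma(shape, scale=1/shape), so that E[eps_t] = 1."""
    rng = np.random.default_rng(seed)
    mu = np.empty(n)
    x = np.empty(n)
    mu[0] = omega / (1.0 - alpha - beta)     # unconditional mean
    x[0] = mu[0] * rng.gamma(shape, 1.0 / shape)
    for t in range(1, n):
        mu[t] = omega + alpha * x[t - 1] + beta * mu[t - 1]
        x[t] = mu[t] * rng.gamma(shape, 1.0 / shape)
    return x, mu

x, mu = simulate_mem()
```

With these parameters the unconditional mean is omega / (1 - alpha - beta) = 1, and every simulated observation is non-negative by construction, which is the defining feature of the model class.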
Abstract:
Associative memory networks such as Radial Basis Function, Neurofuzzy and Fuzzy Logic networks used for modelling nonlinear processes suffer from the curse of dimensionality (COD): as the input dimension increases, the parameterization, computational cost, training data requirements, etc. increase exponentially. Here a new algorithm is introduced for constructing optimal piecewise locally linear models based on a Delaunay partition of the input space, which overcomes the COD and yields locally linear models directly amenable to linear control and estimation algorithms. The training of the model is configured as a new mixture-of-experts network with a new fast decision rule derived using convex set theory. A very fast simulated reannealing (VFSR) algorithm is utilized to search for a globally optimal Delaunay input-space partition. A benchmark non-linear time series is used to demonstrate the new approach.
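The global search step can be illustrated with a plain simulated-annealing sketch (a simplified stand-in for VFSR, which uses a faster reannealing schedule); the toy objective, step proposal and cooling schedule below are all hypothetical:

```python
import math
import random

def anneal(cost, neighbor, x0, t0=1.0, cooling=0.995, iters=2000, seed=7):
    """Minimal simulated annealing: accept a worse move with
    probability exp(-delta/T) under a geometric cooling schedule,
    and track the best solution seen so far."""
    rng = random.Random(seed)
    x, fx = x0, cost(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(iters):
        y = neighbor(x, rng)
        fy = cost(y)
        if fy < fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# toy multimodal objective with its global minimum near x = 2
f = lambda x: (x - 2.0) ** 2 + 0.3 * math.sin(10.0 * x)
step = lambda x, rng: x + rng.gauss(0.0, 0.5)
best, fbest = anneal(f, step, x0=-5.0)
```

The sine ripple creates many local minima that a pure descent method could get trapped in; the thermal acceptance rule is what lets the search escape them.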
Abstract:
This paper shows that a wavelet network and a linear term can be advantageously combined for the purpose of non-linear system identification. The theoretical foundation of this approach is laid by proving that radial wavelets are orthogonal to linear functions. A constructive procedure for building such nonlinear regression structures, termed linear-wavelet models, is described. For illustration, simulation data are used to identify a model for a two-link robotic manipulator. The results show that the introduction of wavelets does improve the prediction ability of a linear model.
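A linear-wavelet regression of the kind described can be sketched by augmenting a linear least-squares design with radial "Mexican hat" wavelet regressors; the target function, wavelet centers and scale below are hypothetical, not the paper's manipulator example:

```python
import numpy as np

def mexican_hat(r):
    # Radial "Mexican hat" wavelet: (1 - r^2) * exp(-r^2 / 2)
    return (1.0 - r**2) * np.exp(-r**2 / 2.0)

def fit_linear_wavelet(x, y, centers, scale=0.5):
    """Least-squares fit of y ~ w0 + w1*x + sum_i c_i * psi(|x - t_i| / scale)."""
    cols = [np.ones_like(x), x]
    cols += [mexican_hat(np.abs(x - t) / scale) for t in centers]
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return A, coef

rng = np.random.default_rng(1)
x = np.linspace(-3, 3, 200)
y = 0.5 * x + np.sin(2 * x) + 0.05 * rng.standard_normal(x.size)

# linear-only baseline vs. linear + wavelet model
A_lin = np.column_stack([np.ones_like(x), x])
coef_lin, *_ = np.linalg.lstsq(A_lin, y, rcond=None)
rmse_lin = np.sqrt(np.mean((y - A_lin @ coef_lin) ** 2))

A_lw, coef_lw = fit_linear_wavelet(x, y, np.linspace(-3, 3, 13))
rmse_lw = np.sqrt(np.mean((y - A_lw @ coef_lw) ** 2))
```

The wavelet terms absorb the sinusoidal part the linear fit cannot represent, so the combined model's residual error drops well below the linear baseline, mirroring the paper's conclusion.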
Abstract:
Non-Gaussian/non-linear data assimilation is becoming an increasingly important area of research in the Geosciences as the resolution and non-linearity of models are increased and more and more non-linear observation operators are being used. In this study, we look at the effect of relaxing the assumption of a Gaussian prior on the impact of observations within the data assimilation system. Three different measures of observation impact are studied: the sensitivity of the posterior mean to the observations, mutual information and relative entropy. The sensitivity of the posterior mean is derived analytically when the prior is modelled by a simplified Gaussian mixture and the observation errors are Gaussian. It is found that the sensitivity is a strong function of the value of the observation and proportional to the posterior variance. Similarly, relative entropy is found to be a strong function of the value of the observation. However, the errors in estimating these two measures using a Gaussian approximation to the prior can differ significantly. This hampers conclusions about the effect of the non-Gaussian prior on observation impact. Mutual information does not depend on the value of the observation and is seen to be close to its Gaussian approximation. These findings are illustrated with the particle filter applied to the Lorenz ’63 system. This article is concluded with a discussion of the appropriateness of these measures of observation impact for different situations.
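For the fully Gaussian special case, the two information measures mentioned above have closed forms, which makes the abstract's contrast concrete: mutual information is independent of the observed value, while relative entropy depends on it. A sketch for a scalar state with hypothetical numbers (not the study's Gaussian-mixture setting):

```python
import math

def gaussian_mutual_information(sigma_b2, sigma_o2):
    """MI between state and observation for y = x + eps,
    x ~ N(0, sigma_b2), eps ~ N(0, sigma_o2):
    MI = 0.5 * ln(1 + sigma_b2 / sigma_o2) (nats).
    Note it does not involve the observed value y."""
    return 0.5 * math.log(1.0 + sigma_b2 / sigma_o2)

def gaussian_relative_entropy(mu_b, sigma_b2, mu_a, sigma_a2):
    """KL divergence D(N(mu_a, sigma_a2) || N(mu_b, sigma_b2)),
    i.e. posterior relative to prior."""
    return 0.5 * (math.log(sigma_b2 / sigma_a2)
                  + (sigma_a2 + (mu_a - mu_b) ** 2) / sigma_b2 - 1.0)

# posterior for a single observation y (scalar Kalman update)
sigma_b2, sigma_o2, y = 1.0, 0.5, 2.0
k = sigma_b2 / (sigma_b2 + sigma_o2)            # Kalman gain
mu_a = k * y
sigma_a2 = (1.0 - k) * sigma_b2
mi = gaussian_mutual_information(sigma_b2, sigma_o2)
re = gaussian_relative_entropy(0.0, sigma_b2, mu_a, sigma_a2)
```

Evaluating `re` at y = 0 instead of y = 2 gives a much smaller value, while `mi` is unchanged, which is exactly the distinction the study exploits.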
Abstract:
This paper presents and implements a number of tests for non-linear dependence and a test for chaos using transactions prices on three LIFFE futures contracts: the Short Sterling interest rate contract, the Long Gilt government bond contract, and the FTSE 100 stock index futures contract. While previous studies of high frequency futures market data use only those transactions which involve a price change, we use all of the transaction prices on these contracts whether they involve a price change or not. Our results indicate irrefutable evidence of non-linearity in two of the three contracts, although we find no evidence of a chaotic process in any of the series. We are also able to provide some indications of the effect of the duration of the trading day on the degree of non-linearity of the underlying contract. The trading day for the Long Gilt contract was extended in August 1994, and prior to this date there is no evidence of any structure in the return series. However, after the extension of the trading day we do find evidence of a non-linear return structure.
Abstract:
A number of tests for non-linear dependence in time series are presented and implemented on a set of 10 daily sterling exchange rates covering the entire post-Bretton-Woods era until the present day. Irrefutable evidence of non-linearity is shown in many of the series, but most of this dependence can apparently be explained by reference to the GARCH family of models. It is suggested that the literature in this area has reached an impasse, with the presence of ARCH effects clearly demonstrated in a large number of papers, but with the tests for non-linearity which are currently available being unable to classify any additional non-linear structure.
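The ARCH/GARCH dependence referred to above can be reproduced with a minimal GARCH(1,1) simulation: the returns themselves are serially uncorrelated, while their squares are not, which is the signature the non-linearity tests pick up. Parameter values are hypothetical:

```python
import numpy as np

def simulate_garch(omega=0.05, alpha=0.1, beta=0.85, n=5000, seed=42):
    """Simulate a GARCH(1,1): r_t = sigma_t * z_t, with
    sigma_t^2 = omega + alpha * r_{t-1}^2 + beta * sigma_{t-1}^2."""
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n)
    sigma2 = np.empty(n)
    r = np.empty(n)
    sigma2[0] = omega / (1.0 - alpha - beta)   # unconditional variance
    r[0] = np.sqrt(sigma2[0]) * z[0]
    for t in range(1, n):
        sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
        r[t] = np.sqrt(sigma2[t]) * z[t]
    return r, sigma2

r, sigma2 = simulate_garch()
ac_r = np.corrcoef(r[:-1], r[1:])[0, 1]          # ~ 0: no linear dependence
ac_r2 = np.corrcoef(r[:-1] ** 2, r[1:] ** 2)[0, 1]  # > 0: volatility clustering
```

A linear autocorrelation test sees nothing in `r`, yet the squared series is clearly dependent; this is precisely the non-linear structure that the GARCH family captures.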
Abstract:
In this paper, the laminar flow of Newtonian and non-Newtonian aqueous solutions in a tubular membrane is numerically studied. The mathematical formulation, with associated initial and boundary conditions in cylindrical coordinates, comprises the mass conservation, momentum conservation and mass transfer equations. These equations are discretized by using the finite-difference technique on a staggered grid system. Comparisons of three upwinding schemes for discretization of the non-linear (convective) terms are presented. The effects of several physical parameters on the concentration profile are investigated. The numerical results compare favorably with experimental data and analytical solutions.
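A first-order upwind discretization of a convective term, the simplest member of the family of upwinding schemes compared in such studies, can be sketched on the 1-D linear advection equation (a toy problem, not the paper's membrane model):

```python
import numpy as np

def upwind_advection(u0, c, dx, dt, steps):
    """First-order upwind scheme for u_t + c * u_x = 0 with c > 0 and
    periodic boundaries: u_i^{n+1} = u_i^n - (c*dt/dx) * (u_i^n - u_{i-1}^n)."""
    cfl = c * dt / dx
    assert cfl <= 1.0, "CFL condition violated"
    u = u0.copy()
    for _ in range(steps):
        u = u - cfl * (u - np.roll(u, 1))   # backward difference (upwind side)
    return u

nx = 100
dx = 1.0 / nx
x = np.arange(nx) * dx
u0 = np.exp(-200 * (x - 0.3) ** 2)          # Gaussian pulse
u = upwind_advection(u0, c=1.0, dx=dx, dt=0.5 * dx, steps=100)
```

The scheme is conservative (the total of `u` is preserved exactly by the update) and transports the pulse at the right speed, but its first-order accuracy shows up as numerical diffusion: the peak height decays as the pulse travels. Higher-order upwind schemes trade this diffusion for added complexity.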
The elasticity of the medial collateral ligament of the canine elbow joint does not derive from elastin
Abstract:
The literature reports that ligaments consist of dense connective tissue composed of water, type I and type III collagen, several proteoglycans, little elastin and various other substances. Moreover, when tested in vitro under longitudinal, unidirectional tension, ligaments exhibit non-linear mechanical behaviour: the collagen fibres are stretched gradually, losing their wavy pattern, until all of them reach their maximum tensile limit and begin to rupture. The present study therefore evaluated the presence of elastic fibres (elastin) in the medial collateral ligament of the elbow of adult dogs, in order to assess whether the elasticity of this ligament is due to the presence of elastic fibres, to the elastic properties of collagen, or to a combination of both. Four joints were used, from males and females in equal proportion, from which samples of the middle portions of the medial collateral ligaments were obtained for routine histology. The sections were stained by Weigert's technique, and no elastic fibres detectable by this technique under light microscopy were observed. It was concluded that the elasticity of the medial collateral ligament of the canine elbow is due mainly to the wavy pattern of the collagen fibres, given the minimal amount, or even absence, of elastic fibres in this structure.
Abstract:
The objective of this work was to develop non-linear programming models for land levelling, applicable to areas with a regular shape, that minimize earth movement, using the GAMS software for the computations. These models were compared with the Generalized Least Squares method developed by Scaloppi & Willardson (1986), the evaluation parameter being the volume of earth moved. It was concluded that both non-linear programming models developed in this research proved adequate for application to regular areas and yielded smaller earth-movement volumes than the least squares method.
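The least-squares benchmark can be illustrated with a plane fitted to grid elevations: a least-squares design plane balances cut and fill exactly, whereas the non-linear programming models in the study minimize the moved volume directly. The station grid and elevations below are hypothetical:

```python
import numpy as np

def leveling_plane(xs, ys, zs):
    """Least-squares design plane z = a + b*x + c*y over surveyed stations;
    the cut (+) or fill (-) at each station is z - plane."""
    A = np.column_stack([np.ones_like(xs), xs, ys])
    coef, *_ = np.linalg.lstsq(A, zs, rcond=None)
    return coef, zs - A @ coef

# hypothetical 5x5 grid of stations at 10 m spacing
gx, gy = np.meshgrid(np.arange(5) * 10.0, np.arange(5) * 10.0)
xs, ys = gx.ravel(), gy.ravel()
zs = 100.0 + 0.002 * xs - 0.001 * ys + 0.01 * np.sin(xs / 7.0)  # elevations (m)

coef, cutfill = leveling_plane(xs, ys, zs)
cut = cutfill[cutfill > 0].sum()
fill = -cutfill[cutfill < 0].sum()
```

Because the design matrix contains a column of ones, the residuals sum to zero, so total cut equals total fill; minimizing the moved volume (the sum of |cut/fill|) instead is what requires the non-linear programming formulation.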
Abstract:
This work presents a modelling and identification method for a wheeled mobile robot, including the actuator dynamics. Instead of the classic modelling approach, in which the robot position coordinates (x, y) are used as state variables (resulting in a non-linear model), the proposed discrete model is based on the travelled distance increment Delta_l. The resulting model is thus linear and time-invariant, and it can be identified through classical methods such as Recursive Least Mean Squares. This approach has one problem: Delta_l cannot be directly measured. In this paper, that problem is solved using an estimate of Delta_l based on a second-order polynomial approximation. Experimental data were collected and the proposed method was used to identify the model of a real robot.
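The identification step can be sketched with a standard recursive least-squares (RLS) loop applied to a hypothetical first-order linear model (not the robot's actual dynamics):

```python
import numpy as np

def rls_identify(phi, y, lam=1.0, delta=1e3):
    """Recursive least-squares estimate of theta in y_k = phi_k' theta + e_k,
    with forgetting factor lam and initial covariance delta * I."""
    n = phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)
    for p, yk in zip(phi, y):
        Pp = P @ p
        k = Pp / (lam + p @ Pp)              # gain vector
        theta = theta + k * (yk - p @ theta) # update with prediction error
        P = (P - np.outer(k, Pp)) / lam      # covariance update
    return theta

# hypothetical plant: y_k = a * y_{k-1} + b * u_{k-1} + noise
rng = np.random.default_rng(0)
a_true, b_true = 0.8, 0.5
u = rng.standard_normal(500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1] + 0.01 * rng.standard_normal()

phi = np.column_stack([y[:-1], u[:-1]])      # regressors
theta = rls_identify(phi, y[1:])
```

Because the Delta_l-based model is linear and time-invariant, exactly this kind of recursive estimator can run online on the robot, which is the practical payoff of the reformulation.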
Abstract:
Slugging is a well-known phenomenon in multiphase flow that may cause problems such as pipeline vibration and high liquid levels in the separator. It can be classified according to the place of its occurrence. The most severe type, known as riser slugging, occurs in the vertical pipe that feeds the platform. Also known as severe slugging, it can cause severe pressure fluctuations in the process flow, excessive vibration, flooding of separator tanks, limited production and unscheduled production stops, among other negative aspects that motivated this work. A feasible way to deal with this problem is to design an effective method for removing or reducing the oscillations: a controller. According to the literature, a conventional PID controller does not produce good results, due to the high degree of nonlinearity of the process, which has fuelled the development of advanced control techniques. Among these is the model predictive controller (MPC), in which the control action results from the solution of an optimization problem; it is robust and can incorporate physical and/or safety constraints. The objective of this work is to apply a non-conventional non-linear model predictive control technique to severe slugging, in which the amount of liquid mass in the riser is controlled by the production valve, indirectly suppressing the flow and pressure oscillations, while seeking environmental and economic benefits. The proposed strategy is based on linear approximations of the model and on repeatedly solving a quadratic optimization problem, providing solutions that improve at each iteration. When the algorithm converges, the predicted values of the process variables coincide with those obtained from the original nonlinear model, ensuring that the constraints are satisfied along the prediction horizon.
A mathematical model recently published in the literature, capable of representing the characteristics of severe slugging in a real oil well, is used both for simulation and for the design of the proposed controller, whose performance is compared to that of a linear MPC.
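The receding-horizon idea behind MPC can be sketched in its simplest, unconstrained linear form: stack the predictions over the horizon, minimize a quadratic cost in closed form, and apply only the first move. The scalar plant below is hypothetical and far simpler than the riser model:

```python
import numpy as np

def mpc_control(A, B, x0, x_ref, N=10, q=1.0, r=0.1):
    """Unconstrained linear MPC: stack x = F x0 + G u over horizon N,
    minimize sum_k q*||x_k - x_ref||^2 + r*||u_k||^2 in closed form,
    and return only the first control move (receding horizon)."""
    n, m = B.shape
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(N)])
    G = np.zeros((N * n, N * m))
    for i in range(N):
        for j in range(i + 1):
            G[i*n:(i+1)*n, j*m:(j+1)*m] = np.linalg.matrix_power(A, i - j) @ B
    ref = np.tile(x_ref, N)
    H = q * (G.T @ G) + r * np.eye(N * m)    # QP Hessian
    g = q * G.T @ (F @ x0 - ref)             # QP gradient at u = 0
    u = np.linalg.solve(H, -g)               # closed-form minimizer
    return u[:m]

# hypothetical scalar plant x_{k+1} = 0.9 x_k + 0.5 u_k, driven to x_ref = 1
A = np.array([[0.9]]); B = np.array([[0.5]])
x = np.array([0.0]); x_ref = np.array([1.0])
for _ in range(30):
    u = mpc_control(A, B, x, x_ref)
    x = A @ x + B @ u
```

Adding inequality constraints turns the closed-form solve into a true QP at every step; the strategy described in the abstract additionally relinearizes the nonlinear model around each predicted trajectory and iterates.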
Abstract:
Nowadays, optical fiber is one of the most widely used communication media, mainly because the data transmission rates of these systems exceed those of all other means of digital communication. Despite this great advantage, there are problems that prevent full utilization of the optical channel: as the transmission speed and the distances involved increase, the data are subject to non-linear intersymbol interference caused by dispersion phenomena in the fiber. Adaptive equalizers can be used to solve this problem: they compensate for non-ideal channel responses in order to restore the transmitted signal. This work proposes an equalizer based on artificial neural networks and evaluates its performance in optical communication systems. The proposal is validated on a simulated optical channel and through comparison with other adaptive equalization techniques.
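As a baseline of the kind neural equalizers are typically compared against, a linear LMS adaptive equalizer can be sketched as follows; the channel taps, noise level and training setup are all hypothetical:

```python
import numpy as np

def lms_equalize(received, desired, n_taps=7, mu=0.01):
    """LMS adaptive FIR equalizer: at each step, filter the last n_taps
    received samples and update the weights as w <- w + mu * e * x."""
    w = np.zeros(n_taps)
    out = np.zeros(len(received))
    err = np.zeros(len(received))
    for k in range(n_taps, len(received)):
        x = received[k - n_taps:k][::-1]     # newest sample first
        out[k] = w @ x
        err[k] = desired[k] - out[k]
        w += mu * err[k] * x
    return w, out, err

rng = np.random.default_rng(3)
symbols = rng.choice([-1.0, 1.0], size=4000)        # BPSK training symbols
channel = np.array([1.0, 0.4, 0.2])                 # dispersive channel (ISI)
received = np.convolve(symbols, channel)[:4000]
received += 0.01 * rng.standard_normal(4000)        # additive noise

delay = 1                                           # equalization delay
desired = np.concatenate([np.zeros(delay), symbols[:-delay]])
w, out, err = lms_equalize(received, desired)
mse_tail = np.mean(err[-500:] ** 2)
```

A linear filter like this can only invert linear ISI; the non-linear distortions mentioned in the abstract are what motivate replacing the FIR combiner with a neural network while keeping the same adaptive training loop.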
Abstract:
Composites based on PEEK + PTFE + carbon fiber + graphite (G_CFRP) have increasing application in leading industries, meeting aerospace, aeronautical, petroleum, biomedical, mechanical and electronics engineering challenges. A commercially available G_CFRP was heated to three different levels of thermal energy to identify the main damage mechanisms and some evidence of their intrinsic transitions. An experimental test rig for applying a systematic heat flux, based on the Joule effect, was developed in this dissertation. It was built using an isothermal container, an internal heat source and a real-time measurement system, testing one sample at a time. A standard conical-cylindrical tip was inserted into a commercially available soldering iron at different levels of nominal electrical power, 40 W (manufacturer A), 40 W (manufacturer B), 100 W and 150 W, selected after screening tests: these heat-source power levels, after one hour of heating and one hour of cooling in situ, produced three different degradation zones on the composite surface. The bench was instrumented with twelve thermocouples, a wattmeter and a video camera. The twelve specimens tested suffered different degradation mechanisms, analyzed by Differential Scanning Calorimetry (DSC) and Thermogravimetry (TG) techniques, Scanning Electron Microscopy (SEM) and Energy-Dispersive X-ray (EDX) analysis. Before and after each test, the Rockwell M (HRM) hardness of the sample was measured. Excellent correlations (R² = 1) were obtained in the plots of the evaporated area after one hour of heating and one hour of cooling in situ versus (1) the power of the heat source and (2) the central temperature of the sample.
However, as a result of the differential degradation of G_CFRP and its anisotropy, confirmed by its variable thermal, viscoelastic and plastic properties, both linear and non-linear behaviour were observed between the temperature field and the Rockwell M hardness measured in the radial and circumferential directions of the samples. Some morphological features of the damaged zones are presented and discussed, such as the crazing and skeletonization mechanisms of G_CFRP.