986 results for Geometrical non-linearity
Abstract:
This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variation of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-)parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of systems which vary in dimensionality and non-linearity: the Ornstein–Uhlenbeck process, whose exact likelihood can be computed analytically; the univariate, highly non-linear stochastic double-well system; and the multivariate chaotic stochastic Lorenz ’63 (3D) model. As a special case, the algorithm is also applied to the 40-dimensional stochastic Lorenz ’96 system. In our investigation we compare this new approach with a variety of other well-known methods, such as hybrid Monte Carlo, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and analyse empirically their asymptotic behaviour as the observation density or the length of the time window increases. In particular, we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) parts of the model evolution equations using our new methods.
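The Ornstein–Uhlenbeck baseline mentioned in the abstract is easy to reproduce. The sketch below is a hypothetical illustration only (it is not the paper's variational algorithm): it simulates dX = -θX dt + σ dW with Euler–Maruyama and recovers the drift parameter θ by least squares on the discretised dynamics, which is possible precisely because the OU transition density is Gaussian.

```python
import numpy as np

# Illustrative only: simulate an Ornstein-Uhlenbeck process and estimate the
# drift parameter theta from the discretised dynamics. Parameter values are
# invented for the demonstration.
rng = np.random.default_rng(0)
theta, sigma, dt, n = 2.0, 0.5, 0.01, 200_000

x = np.empty(n)
x[0] = 0.0
noise = rng.normal(0.0, np.sqrt(dt), n - 1)
for i in range(n - 1):
    # Euler-Maruyama step: dX = -theta * X dt + sigma dW
    x[i + 1] = x[i] - theta * x[i] * dt + sigma * noise[i]

# Least-squares estimate of theta from X_{t+dt} - X_t ~ -theta * X_t * dt
dx = np.diff(x)
theta_hat = -np.sum(x[:-1] * dx) / (np.sum(x[:-1] ** 2) * dt)
print(theta_hat)
```

With a long enough trajectory the estimate lands close to the true θ = 2; a small downward discretisation bias of order θ·dt/2 remains, which is one reason exact-likelihood baselines like the OU process are useful for validating approximate schemes.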
Abstract:
This paper presents forecasting techniques for day-ahead energy demand and price prediction. These techniques combine the wavelet transform (WT) with fixed and adaptive machine learning/time series models (multi-layer perceptron (MLP), radial basis functions, linear regression, or GARCH). To create an adaptive model, we use an extended Kalman filter or particle filter to update the parameters continuously on the test set. The adaptive GARCH model is a new contribution, broadening the applicability of GARCH methods. We empirically compared two approaches to combining the WT with prediction models: multicomponent forecasts and direct forecasts. These techniques are applied to large sets of real data (both stationary and non-stationary) from the UK energy markets, so as to provide comparative results that are statistically stronger than those previously reported. The results show that forecasting accuracy is significantly improved by using the WT and adaptive models. The best models for the electricity demand and gas price forecasts are the adaptive MLP and adaptive GARCH, respectively, with the multicomponent forecast; their MSEs are 0.02314 and 0.15384.
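The multicomponent idea described above can be sketched in a few lines: decompose the series with a wavelet transform, forecast each component separately, then invert the transform. The sketch below is a minimal stand-in, not the paper's pipeline; it hand-rolls a one-level Haar transform and uses naive persistence in place of the MLP/GARCH component models, and all function names are illustrative.

```python
import numpy as np

def haar_decompose(x):
    """One-level Haar transform: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    approx = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    detail = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return approx, detail

def haar_reconstruct(approx, detail):
    """Invert the one-level Haar transform."""
    x = np.empty(2 * len(approx))
    x[0::2] = (approx + detail) / np.sqrt(2.0)
    x[1::2] = (approx - detail) / np.sqrt(2.0)
    return x

# Toy demand series (invented numbers, not UK market data)
demand = np.array([3.0, 3.2, 3.1, 3.3, 3.4, 3.6, 3.5, 3.7])
a, d = haar_decompose(demand)

# "Forecast" each component with persistence and rebuild the next pair of
# values; a real system would fit a separate adaptive model per component.
forecast = haar_reconstruct(np.array([a[-1]]), np.array([d[-1]]))
print(forecast)
```

The appeal of the multicomponent route is that each wavelet component is smoother or more regular than the raw series, so simpler models per component can outperform one model on the raw data.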
Abstract:
Changes in modern structural design have created a demand for products which are light but possess high strength. The objective is a reduction in fuel consumption and in the weight of materials, to satisfy both economic and environmental criteria. Cold roll forming has the potential to fulfil this requirement. The bending process is controlled by the shape of the profile machined on the periphery of the rolls. A CNC lathe can machine complicated profiles to a high standard of precision, but the expertise of a numerical control programmer is required. A computer program was developed during this project, using the expert system concept, to calculate tool paths and consequently to expedite the procurement of the machine control tapes whilst removing the need for a skilled programmer. Codifying human expertise and encapsulating that knowledge within computer memory removes the dependency on highly trained people, whose services can be costly, inconsistent and unreliable. A successful cold roll forming operation, where the product is geometrically correct and free from visual defects, is not easy to attain. The geometry of the sheet after travelling through the rolling mill depends on the residual strains generated by the elastic-plastic deformation. Accurate evaluation of the residual strains can provide the basis for predicting the geometry of the section. A study of geometric and material non-linearity, yield criteria, material hardening and stress-strain relationships was undertaken in this research project. The finite element method was chosen to provide a mathematical model of the bending process and, to ensure efficient manipulation of the large stiffness matrices, the frontal solution was applied. A series of experimental investigations provided data to compare with corresponding values obtained from the theoretical modelling.
A computer simulation capable of predicting, prior to the manufacture of the rolls, that a design will be satisfactory would allow effort to be concentrated on devising an optimum design in which costs are minimised.
Abstract:
The development of ultra-long (UL) cavity (hundreds of metres to several kilometres) mode-locked fibre lasers for the generation of high-energy light pulses with relatively low (sub-megahertz) repetition rates has emerged as a new, rapidly advancing area of laser physics. The first demonstration of a high-pulse-energy laser of this type was followed by a number of publications from many research groups on long-cavity Ytterbium and Erbium lasers featuring a variety of configurations with rather different mode-locked operations. The substantial interest in this new approach is stimulated both by non-trivial underlying physics and by the potential of high-pulse-energy laser sources with unique parameters for a range of applications in industry, bio-medicine, metrology and telecommunications. It is well known that pulse generation regimes in mode-locked fibre lasers are determined by the intra-cavity balance between the effects of dispersion and non-linearity and the processes of energy attenuation and amplification. The highest per-pulse energy has been achieved in normal-dispersion UL fibre lasers mode-locked through nonlinear polarization evolution (NPE) for self-mode-locking operation. Such lasers generate the so-called dissipative optical solitons. The uncompensated net normal dispersion in long-cavity resonators usually leads to a very high chirp and, consequently, to a relatively long duration of the generated pulses. This thesis presents the results of research on Er-doped ultra-long (more than 1 km cavity length) fibre lasers mode-locked via NPE. A self-mode-locked erbium-based 3.5-km-long all-fibre laser with 1.7 µJ pulse energy at a wavelength of 1.55 µm was developed as part of this research. It has resulted in direct generation of short laser pulses with an ultralow repetition rate of 35.1 kHz. The laser cavity has net normal dispersion and has been fabricated from commercially available telecom fibres and optical-fibre elements.
Its unconventional linear-ring design with compensation for polarization instability ensures high reliability of the self-mode-locking operation, despite the use of non-polarization-maintaining fibres. The single-pulse generation regime in an all-fibre erbium mode-locked laser based on NPE with a record cavity length of 25 km was demonstrated. Mode-locked lasers with such a long cavity have never been studied before. Our result shows the feasibility of stable mode-locked operation even for an ultra-long cavity length. A new design of fibre laser cavity, the “y-configuration”, which offers a range of new functionalities for optimization and stabilization of mode-locked lasing regimes, was proposed. This novel cavity configuration has been successfully implemented in a long-cavity normal-dispersion self-mode-locked Er-fibre laser. In particular, it features compensation for polarization instability, suppression of ASE, reduction of pulse duration, prevention of in-cavity wave breaking, and stabilization of the lasing wavelength. This laser, along with a specially designed double-pass EDFA, has allowed us to demonstrate an environmentally stable all-fibre laser system able to deliver sub-nanosecond high-energy pulses with a low level of ASE noise.
Abstract:
2002 Mathematics Subject Classification: 65C05
Abstract:
Pythagoras, Plato and Euclid paved the way for Classical Geometry. The idea of shapes that can be mathematically defined by equations led to the creation of great structures of ancient and modern civilizations, and to milestones in mathematics and science. However, classical geometry fails to explain the complexity of non-linear shapes abundant in nature, such as the curvature of a flower or the wings of a butterfly. Such non-linearity can be explained by fractal geometry, which creates shapes that emulate those found in nature with remarkable accuracy. This phenomenon raises the question of an architectural origin for biological existence within the universe. While a unifying equation of life has yet to be discovered, the Fibonacci sequence may establish an origin for such a development. The observation that the Fibonacci sequence is present in almost all aspects of life, ranging from the leaves of a fern tree to architecture and even paintings, makes it highly unlikely to be a stochastic phenomenon. Despite its widespread occurrence, the Fibonacci series and the Rule of Golden Proportions have not been widely documented in the human body. This paper serves to review the documented observations of the Fibonacci sequence in the human body.
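The link between the Fibonacci sequence and the Rule of Golden Proportions mentioned above can be checked directly: the ratio of consecutive Fibonacci numbers converges to the golden ratio φ = (1 + √5)/2. A minimal sketch:

```python
# Consecutive Fibonacci ratios converge to the golden ratio phi.
def fibonacci(n):
    """Return the first n Fibonacci numbers, starting 1, 1, 2, 3, ..."""
    seq = [1, 1]
    while len(seq) < n:
        seq.append(seq[-1] + seq[-2])
    return seq[:n]

fib = fibonacci(20)
phi = (1 + 5 ** 0.5) / 2
ratio = fib[-1] / fib[-2]  # already agrees with phi to ~8 decimal places
print(ratio, phi)
```

The convergence is rapid because the error shrinks geometrically with each term, which is why even short anatomical measurement series are compared against φ in this literature.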
Abstract:
In this paper, we will demonstrate the possibility of opening a new telecommunications transmission window around the 2 μm wavelength, in order to exploit the potential low loss of hollow-core photonic bandgap fibers, with the benefits of significantly lower non-linearity and latency. We will present recent efforts in developing a dense wavelength division multiplexing testbed in this waveband, with 100 GHz-spaced wavelength channels and a total capacity of 105 Gbit/s achieved.
Abstract:
In recent years, modern numerical methods have been employed in the design of Wave Energy Converters (WECs); however, the high computational costs associated with their use make it prohibitive to undertake simulations involving statistically relevant numbers of wave cycles. Experimental tests in wave tanks could also be performed more efficiently and economically if short time traces, consisting of only a few wave cycles, could be used to evaluate the hydrodynamic characteristics of a particular device or design modification. Ideally, accurate estimates of device performance could be made using results obtained from investigations with a relatively small number of wave cycles. The difficulty here is that many WECs, such as the Oscillating Wave Surge Converter (OWSC), exhibit significant non-linearity in their response. It is therefore challenging to make accurate predictions of annual energy yield for a given spectral sea state using short-duration realisations of that sea, because the non-linear device response to particular phase couplings of the sinusoidal components within those time traces might influence the estimate of mean power capture obtained. As a result, it is generally accepted that the most appropriate estimate of mean power capture for a sea state is obtained over many hundreds (or thousands) of wave cycles, which ensures that the potential influence of phase locking is negligible in comparison to the predictions made. In this paper, potential methods of providing reasonable estimates of relative variations in device performance using short-duration sea states are introduced. The aim of the work is to establish how short a sea state can be while still providing statistically significant estimates of the mean power capture of a particular type of Wave Energy Converter. The results show that carefully selected wave traces can be used to reliably assess variations in power output due to changes in the hydrodynamic design or wave climate.
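The phase-locking issue described above can be illustrated numerically: the same wave spectrum, realised with different random phase sets, gives noticeably different short-window mean power estimates, while long windows converge. The sketch below is a hedged toy model, not the paper's method; the spectral shape, the sampling, and the use of mean-square surface elevation as a proxy for the absorbed power of a (purely notional, linear) device are all invented for illustration.

```python
import numpy as np

freqs = np.linspace(0.05, 0.3, 25)           # component frequencies, Hz
amps = np.exp(-((freqs - 0.1) / 0.05) ** 2)  # toy spectral shape

def mean_power(duration, seed):
    """Mean-square elevation over a window, for one random phase set."""
    t = np.arange(0.0, duration, 0.5)
    phases = np.random.default_rng(seed).uniform(0, 2 * np.pi, freqs.size)
    eta = np.sum(
        amps[:, None] * np.cos(2 * np.pi * freqs[:, None] * t + phases[:, None]),
        axis=0,
    )
    return np.mean(eta ** 2)  # proxy for mean absorbed power

short = [mean_power(60.0, s) for s in range(40)]    # only a few wave cycles
long_ = [mean_power(3600.0, s) for s in range(40)]  # many hundreds of cycles
print(np.std(short) / np.mean(short), np.std(long_) / np.mean(long_))
```

The coefficient of variation across phase realisations is much larger for the short windows, which is exactly why naive short-trace estimates of mean power capture are unreliable for non-linear devices.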
Abstract:
Empirical evidence has demonstrated the benefits of using simulation games to enhance learning, especially in terms of cognitive gains. This is to be expected, as the dynamism and non-linearity of simulation games are more cognitively demanding. However, the other effects of simulation games, specifically on learners’ emotions, have not been given much attention and are under-investigated. This study aims to demonstrate that simulation games stimulate positive emotions in learners that help to enhance learning. The study finds that the affect-based constructs of interest, engagement and appreciation are positively correlated with learning. A stepwise multiple regression analysis shows that a model involving interest and engagement is significantly associated with learning. The emotions of learners should therefore be considered in the development of curricula and in the delivery of learning and teaching, as positive emotions enhance learning.
Abstract:
Reinforced concrete creep is a phenomenon of great importance. Despite being identified as the main cause of several pathologies, its effects are still considered in a simplified way by structural designers. In addition to studying the phenomenon in reinforced concrete structures and how it is currently accounted for in structural analysis, this paper compares creep strains in simply supported reinforced concrete beams, obtained analytically and experimentally, with finite element method (FEM) simulation results. The strains and deflections obtained analytically were calculated following the recommendations of the Brazilian code NBR 6118 (2014) and the simplified method of CEB-FIP 90, and the experimental results were extracted from tests available in the literature. Finite element simulations are performed with the ANSYS Workbench software, using its 3D SOLID186 elements and the symmetry of the structure. Convergence analyses using 2D PLANE183 elements are also carried out. It is concluded that the FEM analyses are quantitatively and qualitatively efficient for estimating this non-linearity, and that the method used to obtain the creep coefficient values is sufficiently accurate.
Abstract:
This dissertation applies time-series methods to the modelling of the FTSE100 financial index. Based on the return series, stationarity was examined with the Phillips-Perron test, normality with the Jarque-Bera test, and independence with the autocorrelation function and the Ljung-Box test; GARCH models were then used to model and forecast the conditional variance (volatility) of the financial series under study. Financial time series exhibit peculiar characteristics, with some periods more volatile than others. These periods are distributed in clusters, suggesting a degree of dependence over time. Given the presence of such volatility clusters (non-linearity), conditional heteroscedastic models are required, that is, models which assume that the conditional variance of a time series is not constant and depends on time. In view of the large variability of financial time series over time, the ARCH models (Engle, 1982) and their generalisation, GARCH (Bollerslev, 1986), prove the most suitable for the study of volatility. In particular, these non-linear models have a random conditional variance, and through their study it is possible to estimate and forecast the future volatility of the series. Finally, an empirical study is presented, based on a proposed modelling and forecasting of a set of real data from the FTSE100 financial index.
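The conditional-variance recursion at the heart of the GARCH models discussed above is compact enough to sketch directly. The example below simulates a GARCH(1,1) process, σ²ₜ = ω + α·r²ₜ₋₁ + β·σ²ₜ₋₁; the parameter values are illustrative, not estimates for the FTSE100.

```python
import numpy as np

# GARCH(1,1) simulation: volatility clustering from the variance recursion.
rng = np.random.default_rng(42)
omega, alpha, beta = 0.05, 0.08, 0.90  # illustrative parameters
n = 5000

r = np.empty(n)       # simulated returns
sigma2 = np.empty(n)  # conditional variances
sigma2[0] = omega / (1 - alpha - beta)  # unconditional variance
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t - 1] ** 2 + beta * sigma2[t - 1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

# Large shocks raise the next period's variance, producing volatile clusters
print(np.var(r), omega / (1 - alpha - beta))
```

Since α + β = 0.98 here, shocks to the variance decay slowly, which reproduces the persistent volatility clusters that motivate fitting GARCH to return series in the first place.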
Abstract:
Master’s dissertation in Systems and Computing Engineering - Control Systems area, Faculdade de Ciências e Tecnologia, Univ. do Algarve, 2001
Abstract:
This exploratory work studies the political movement Mesa de la Unidad Democrática (MUD), created to oppose the existing socialist government in Venezuela. The critique made in this document proceeds from the standpoint of Complexity Science. Some key concepts of complex systems are used to explain the functioning and organisation of the MUD, with the aim of producing a comprehensive diagnosis of the problems it faces and of revealing new insights into the harmful behaviours the party currently exhibits. The complexity approach is intended to help better understand the context surrounding the party and, finally, to contribute a set of solutions to the cohesion problems it presents.
Abstract:
Analog In-memory Computing (AIMC) has been proposed in the context of Beyond Von Neumann architectures as a valid strategy to reduce the energy consumption and latency of internal data transfers and to improve compute efficiency. The aim of AIMC is to perform computations within the memory unit, typically leveraging the physical features of memory devices. Among resistive Non-volatile Memories (NVMs), Phase-change Memory (PCM) has become a promising technology due to its intrinsic capability to store multilevel data. Hence, PCM technology is currently being investigated to enhance the possibilities and applications of AIMC. This thesis aims at exploring the potential of new PCM-based architectures as in-memory computational accelerators. In a first step, a preliminary experimental characterization of PCM devices has been carried out from an AIMC perspective. PCM cell non-idealities, such as time drift, noise, and non-linearity, have been studied in order to develop a dedicated multilevel programming algorithm. Measurement-based simulations have then been employed to evaluate the feasibility of PCM-based operations in the fields of Deep Neural Networks (DNNs) and Structural Health Monitoring (SHM). Moreover, a first testchip has been designed and tested to evaluate the hardware implementation of Multiply-and-Accumulate (MAC) operations employing PCM cells. This prototype experimentally demonstrates the possibility of reaching 95% MAC accuracy with circuit-level compensation of the cells’ time drift and non-linearity. Finally, empirical circuit behaviour models have been included in simulations to assess the use of this technology in specific DNN applications and to enhance the potential of this innovative computation approach.
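The effect of the cell non-idealities on MAC accuracy described above can be illustrated with a toy model: weights "stored" in analog cells are read back with conductance noise and a mild non-linearity, and the result is compared with the ideal dot product. The noise level and distortion form below are invented for illustration, not measured PCM figures, and `pcm_mac` is a hypothetical name.

```python
import numpy as np

rng = np.random.default_rng(7)

def pcm_mac(x, w, noise_std=0.02, nl=0.05):
    """MAC with noisy, slightly non-linear analog weight read-out (toy model)."""
    w_read = w + rng.normal(0.0, noise_std, w.shape)  # read noise
    w_read = w_read - nl * w_read ** 3                # odd non-linear distortion
    return float(x @ w_read)

x = rng.uniform(-1, 1, 64)  # input activations
w = rng.uniform(-1, 1, 64)  # stored weights
ideal = float(x @ w)
analog = pcm_mac(x, w)
print(ideal, analog, abs(analog - ideal))
```

Because the per-cell errors are small and partially cancel across the 64 products, the analog result stays close to the ideal one; circuit-level compensation of drift and non-linearity, as in the testchip above, shrinks this gap further.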
Abstract:
We present a new quantum description for the Oppenheimer-Snyder model of gravitational collapse of a ball of dust. Starting from the geodesic equation for dust in spherical symmetry, we introduce a time-independent Schrödinger equation for the radius of the ball. The resulting spectrum is similar to that of the Hydrogen atom and Newtonian gravity. However, the non-linearity of General Relativity implies that the ground state is characterised by a principal quantum number proportional to the square of the ADM mass of the dust. For a ball with ADM mass much larger than the Planck scale, the collapse is therefore expected to end in a macroscopically large core and the singularity predicted by General Relativity is avoided. Mathematical properties of the spectrum are investigated and the ground state is found to have support essentially inside the gravitational radius, which makes it a quantum model for the matter core of Black Holes. In fact, the scaling of the ADM mass with the principal quantum number agrees with the Bekenstein area law and the corpuscular model of Black Holes. Finally, the uncertainty on the size of the ground state is interpreted within the framework of an Uncertainty Principle.