923 results for Complex non-linear paradigm, Non-linearity
Abstract:
-scale vary from a planetary scale and millions of years for convection problems to 100 km and 10 years for fault system simulations. Various techniques are in use to deal with the time dependency (e.g. Crank-Nicolson), with the non-linearity (e.g. Newton-Raphson) and with weakly coupled equations (e.g. non-linear Gauss-Seidel). Besides these high-level solution algorithms, discretization methods (e.g. the finite element method (FEM) and the boundary element method (BEM)) are used to deal with spatial derivatives. Typically, large-scale, three-dimensional meshes are required to resolve geometrical complexity (e.g. in the case of fault systems) or features in the solution (e.g. in mantle convection simulations). The modelling environment escript allows the rapid implementation of new physics as required for the development of simulation codes in the earth sciences. Its main objective is to provide a programming language in which the user can define new models and rapidly develop high-level solution algorithms. The current implementation is linked with the finite element package finley as a PDE solver. However, the design is open, and other discretization technologies such as finite differences and boundary element methods could be included. escript is implemented as an extension of the interactive programming environment python (see www.python.org). The key concepts introduced are Data objects, which hold values on the nodes or elements of the finite element mesh, and linearPDE objects, which define linear partial differential equations to be solved by the underlying discretization technology. In this paper we present the basic concepts of escript and show how it is used to implement a simulation code for interacting fault systems. We also present results of large-scale, parallel simulations on an SGI Altix system.
Acknowledgements: Project work is supported by Australian Commonwealth Government through the Australian Computational Earth Systems Simulator Major National Research Facility, Queensland State Government Smart State Research Facility Fund, The University of Queensland and SGI.
Abstract:
This research reflects on questions of contemporary ethics in advertising aimed at the female audience. The discussion of these questions centres on the deontological (conviction-based) strand of ethics. The aim of the study is to investigate how advertisements published in the magazines Claudia and Nova articulate such ethical questions. Content analysis was therefore used to verify whether the advertisements followed the principles set out in the Brazilian Advertising Self-Regulation Code (Código Brasileiro de Auto-Regulamentação Publicitária). In a second stage, discourse analysis was used to investigate how the advertisements were constructed with respect to ethics and to women's place in today's society. It was concluded that representations of deontological ethics in advertising aimed at women occur in a non-linear and fragmented way. The non-linearity refers to the failure of some of the analysed advertisements to comply with ethical principles. The fragmentation concerns the way women are portrayed and products are presented in the advertisements, on the basis of different standards of conduct (principles) and diverse values. At times the advertisements present products truthfully or not; at times women appear under a lens based on contemporary values, at others on traditional values.
Abstract:
This thesis is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here two new extended frameworks are derived and presented that are based on basis function expansions and local polynomial approximations of a recently proposed variational Bayesian algorithm. It is shown that the new extensions converge to the original variational algorithm and can be used for state estimation (smoothing). However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new methods are numerically validated on a range of systems which vary in dimensionality and non-linearity. These are the Ornstein-Uhlenbeck process, for which the exact likelihood can be computed analytically, the univariate, highly non-linear stochastic double-well system and the multivariate chaotic stochastic Lorenz '63 (3-dimensional) model. The algorithms are also applied to the 40-dimensional stochastic Lorenz '96 system. In this investigation these new approaches are compared with a variety of other well-known methods, such as the ensemble Kalman filter/smoother, a hybrid Monte Carlo sampler, the dual unscented Kalman filter (for jointly estimating the system states and model parameters) and full weak-constraint 4D-Var. An empirical analysis of their asymptotic behaviour as the observation density or the length of the time window increases is also provided.
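The Ornstein-Uhlenbeck process mentioned above is the one benchmark whose exact likelihood is available in closed form, because its transition density is Gaussian. A minimal sketch of that fact, with illustrative parameter values (not the thesis's settings): simulate a discretely observed OU path exactly and recover the drift parameter by maximising the exact log-likelihood over a grid.

```python
import numpy as np

# For dX = -theta*X dt + sigma dW, the transition density over a step dt is
#   X_{t+dt} | X_t ~ N( X_t * exp(-theta*dt),  sigma^2 (1 - exp(-2 theta dt)) / (2 theta) ),
# so the likelihood of a discretely observed path is a product of Gaussians.

rng = np.random.default_rng(0)
theta, sigma, dt, n = 2.0, 1.0, 0.1, 5000

# Simulate with the exact discretization (no Euler bias)
a = np.exp(-theta * dt)
s2 = sigma**2 * (1 - a**2) / (2 * theta)   # conditional variance
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    x[i] = a * x[i-1] + np.sqrt(s2) * rng.standard_normal()

def ou_loglik(th, x, sigma, dt):
    """Exact log-likelihood of an OU path observed at spacing dt."""
    a = np.exp(-th * dt)
    v = sigma**2 * (1 - a**2) / (2 * th)
    r = x[1:] - a * x[:-1]
    return -0.5 * np.sum(np.log(2 * np.pi * v) + r**2 / v)

grid = np.linspace(0.5, 4.0, 71)
th_hat = grid[np.argmax([ou_loglik(t, x, sigma, dt) for t in grid])]
print(th_hat)  # near the true drift parameter theta = 2.0
```

For the double-well and Lorenz systems no such closed form exists, which is exactly why the variational approximations studied in the thesis are needed.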
Abstract:
In this paper we explore the practical use of neural networks for controlling complex non-linear systems. The system used to demonstrate this approach is a simulation of a gas turbine engine typical of those used to power commercial aircraft. The novelty of the work lies in the requirement for multiple controllers which are used to maintain system variables in safe operating regions as well as governing the engine thrust.
Abstract:
The object of this thesis is to develop a method for calculating the losses developed in steel conductors of circular cross-section, at temperatures below 100 °C, by the direct passage of a sinusoidally alternating current. Three cases are considered: 1. an isolated solid or tubular conductor; 2. a concentric arrangement of a tube and a solid return conductor; 3. a concentric arrangement of two tubes. These cases find applications in process temperature maintenance of pipelines, resistance heating of bars and the design of bus-bars. The problems associated with the non-linearity of steel are examined. Resistance heating of bars and methods of surface heating of pipelines are briefly described. Magnetic-linear solutions based on Maxwell's equations are critically examined, and the conditions under which various formulae apply are investigated. The conditions under which a tube is electrically equivalent to a solid conductor and to a semi-infinite plate are derived. Existing solutions for the calculation of losses in isolated steel conductors of circular cross-section are reviewed, evaluated and compared. Two methods of solution are developed for the three cases considered. The first is based on the magnetic-linear solutions and offers an alternative to the available methods, which are not universal. The second solution extends the existing B/H step-function approximation method to small-diameter conductors and to tubes in isolation or in a concentric arrangement. A comprehensive experimental investigation is presented for cases 1 and 2 above, which confirms the validity of the proposed methods of solution. These are further supported by experimental results reported in the literature. Good agreement is obtained between measured and calculated loss values for surface field strengths beyond the linear part of the d.c. magnetisation characteristic.
It is also shown that there is a difference in the electrical behaviour of a small diameter conductor or thin tube under resistance or induction heating conditions.
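The magnetic-linear solutions discussed above rest on the classical skin effect: under the linear approximation the field decays into the conductor with skin depth delta = sqrt(2 / (omega * mu * sigma)). The material constants below are illustrative round numbers for mild steel, not values from the thesis (steel's permeability is field-dependent, which is precisely the non-linearity the thesis addresses).

```python
import math

# Classical skin depth under the magnetic-linear approximation.
mu0 = 4 * math.pi * 1e-7      # vacuum permeability, H/m
mu_r = 300.0                  # assumed relative permeability (linearised)
sigma = 6.0e6                 # assumed conductivity of mild steel, S/m
f = 50.0                      # supply frequency, Hz

omega = 2 * math.pi * f
delta = math.sqrt(2.0 / (omega * mu0 * mu_r * sigma))
print(delta)  # skin depth in metres, of the order of a millimetre
```

A millimetre-scale skin depth is why a tube can be electrically equivalent to a solid conductor once the wall is several skin depths thick.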
Abstract:
This work is concerned with approximate inference in dynamical systems, from a variational Bayesian perspective. When modelling real-world dynamical systems, stochastic differential equations appear as a natural choice, mainly because of their ability to model the noise of the system by adding a variant of some stochastic process to the deterministic dynamics. Hence, inference in such processes has drawn much attention. Here a new extended framework is derived that is based on a local polynomial approximation of a recently proposed variational Bayesian algorithm. The paper begins by showing that the new extension of this variational algorithm can be used for state estimation (smoothing) and converges to the original algorithm. However, the main focus is on estimating the (hyper-) parameters of these systems (i.e. drift parameters and diffusion coefficients). The new approach is validated on a range of systems which vary in dimensionality and non-linearity. These are the Ornstein–Uhlenbeck process, the exact likelihood of which can be computed analytically, the univariate, highly non-linear stochastic double well and the multivariate chaotic stochastic Lorenz ’63 (3D) model. As a special case the algorithm is also applied to the 40-dimensional stochastic Lorenz ’96 system. In our investigation we compare this new approach with a variety of other well-known methods, such as hybrid Monte Carlo, the dual unscented Kalman filter and the full weak-constraint 4D-Var algorithm, and empirically analyse their asymptotic behaviour as the observation density or the length of the time window increases. In particular, we show that we are able to estimate parameters in both the drift (deterministic) and the diffusion (stochastic) parts of the model evolution equations using our new methods.
Abstract:
This paper presents some forecasting techniques for energy demand and price prediction, one day ahead. These techniques combine wavelet transform (WT) with fixed and adaptive machine learning/time series models (multi-layer perceptron (MLP), radial basis functions, linear regression, or GARCH). To create an adaptive model, we use an extended Kalman filter or particle filter to update the parameters continuously on the test set. The adaptive GARCH model is a new contribution, broadening the applicability of GARCH methods. We empirically compared two approaches of combining the WT with prediction models: multicomponent forecasts and direct forecasts. These techniques are applied to large sets of real data (both stationary and non-stationary) from the UK energy markets, so as to provide comparative results that are statistically stronger than those previously reported. The results showed that the forecasting accuracy is significantly improved by using the WT and adaptive models. The best models on the electricity demand/gas price forecast are the adaptive MLP/GARCH with the multicomponent forecast; their MSEs are 0.02314 and 0.15384 respectively.
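A hedged sketch of the "multicomponent forecast" idea described above: decompose the series with a one-level Haar wavelet transform, forecast each component separately (a bare AR(1) stands in here for the paper's MLP/RBF/GARCH models), and invert the transform to recombine. The synthetic demand series below is an assumption for illustration only, not the UK market data.

```python
import numpy as np

# Decompose-forecast-recombine with a one-level Haar wavelet transform.
rng = np.random.default_rng(1)
t = np.arange(256)
demand = 10 + np.sin(2 * np.pi * t / 48) + 0.1 * rng.standard_normal(256)

# One-level Haar DWT: approximation (smooth trend) and detail coefficients
even, odd = demand[0::2], demand[1::2]
approx = (even + odd) / np.sqrt(2)
detail = (even - odd) / np.sqrt(2)

def ar1_forecast(c):
    """One-step forecast from an AR(1) fitted to the centred component."""
    x = c - c.mean()
    phi = np.dot(x[1:], x[:-1]) / np.dot(x[:-1], x[:-1])
    return c.mean() + phi * x[-1]

# Forecast each component, then invert the Haar step to get the next two samples
a_next, d_next = ar1_forecast(approx), ar1_forecast(detail)
next_even = (a_next + d_next) / np.sqrt(2)
next_odd = (a_next - d_next) / np.sqrt(2)
print(next_even, next_odd)
```

The paper's adaptive variants would additionally update the component models on-line (extended Kalman or particle filtering) as new observations arrive.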
Abstract:
The development of ultra-long (UL) cavity (hundreds of metres to several kilometres) mode-locked fibre lasers for the generation of high-energy light pulses with relatively low (sub-megahertz) repetition rates has emerged as a new, rapidly advancing area of laser physics. The first demonstration of a high-pulse-energy laser of this type was followed by a number of publications from many research groups on long-cavity Ytterbium and Erbium lasers featuring a variety of configurations with rather different mode-locked operation. The substantial interest in this new approach is stimulated both by non-trivial underlying physics and by the potential of high-pulse-energy laser sources with unique parameters for a range of applications in industry, bio-medicine, metrology and telecommunications. It is well known that pulse generation regimes in mode-locked fibre lasers are determined by the intra-cavity balance between the effects of dispersion and non-linearity and the processes of energy attenuation and amplification. The highest per-pulse energy has been achieved in normal-dispersion UL fibre lasers mode-locked through nonlinear polarization evolution (NPE) for self-mode-locking operation. Such lasers generate so-called dissipative optical solitons. The uncompensated net normal dispersion in long-cavity resonators usually leads to a very high chirp and, consequently, to a relatively long duration of the generated pulses. This thesis presents the results of research on Er-doped ultra-long (more than 1 km cavity length) fibre lasers mode-locked through NPE. A self-mode-locked erbium-based 3.5-km-long all-fibre laser with 1.7 µJ pulse energy at a wavelength of 1.55 µm was developed as part of this research. It resulted in the direct generation of short laser pulses with an ultralow repetition rate of 35.1 kHz. The laser cavity has net normal dispersion and was fabricated from commercially available telecom fibres and optical-fibre elements.
Its unconventional linear-ring design with compensation for polarization instability ensures high reliability of the self-mode-locking operation, despite the use of non-polarization-maintaining fibres. The single-pulse generation regime in an all-fibre erbium mode-locked laser based on NPE with a record cavity length of 25 km was demonstrated. Mode-locked lasers with such a long cavity have never been studied before. Our result shows the feasibility of stable mode-locked operation even for an ultra-long cavity length. A new fibre laser cavity design, the “y-configuration”, which offers a range of new functionalities for the optimization and stabilization of mode-locked lasing regimes, was proposed. This novel cavity configuration has been successfully implemented in a long-cavity normal-dispersion self-mode-locked Er-fibre laser. In particular, it features compensation for polarization instability, suppression of ASE, reduction of pulse duration, prevention of in-cavity wave breaking, and stabilization of the lasing wavelength. This laser, along with a specially designed double-pass EDFA, has allowed us to demonstrate an environmentally stable all-fibre laser system able to deliver sub-nanosecond high-energy pulses with a low level of ASE noise.
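As a rough cross-check of the scaling behind the repetition rates quoted above: the repetition rate of a mode-locked laser is the inverse of the cavity round-trip time, f_rep = c/(nL) for a simple ring of length L. The group index below is an assumed value for silica telecom fibre, and the effective round-trip length depends on the actual linear-ring geometry, so this only confirms that kilometre-scale cavities give repetition rates in the kHz range rather than reproducing the reported 35.1 kHz exactly.

```python
import math

# One pulse per cavity round trip: f_rep = c / (n * L) for a ring cavity.
c = 2.998e8      # speed of light in vacuum, m/s
n = 1.468        # assumed group index of standard single-mode silica fibre

for L in (3.5e3, 25e3):     # cavity lengths from the thesis, m
    f_rep = c / (n * L)
    print(L, f_rep)         # kHz-scale repetition rates
```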
Abstract:
2002 Mathematics Subject Classification: 65C05
Abstract:
Pythagoras, Plato and Euclid paved the way for Classical Geometry. The idea of shapes that can be mathematically defined by equations led to the creation of the great structures of modern and ancient civilizations, and to milestones in mathematics and science. However, classical geometry fails to explain the complexity of the non-linear shapes replete in nature, such as the curvature of a flower or the wings of a butterfly. Such non-linearity can be explained by fractal geometry, which creates shapes that emulate those found in nature with remarkable accuracy. This phenomenon raises the question of an architectural origin for biological existence within the universe. While a unifying equation of life has yet to be discovered, the Fibonacci sequence may establish an origin for such a development. The fact that the Fibonacci sequence can be observed in almost all aspects of life, ranging from the leaves of a fern tree to architecture and even paintings, makes it highly unlikely to be a stochastic phenomenon. Despite its widespread occurrence, the Fibonacci series and the Rule of Golden Proportions have not been widely documented in the human body. This paper serves to review the documented observations of the Fibonacci sequence in the human body.
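A minimal numerical illustration of the Rule of Golden Proportions mentioned above: ratios of consecutive Fibonacci numbers converge to the golden ratio phi = (1 + sqrt(5))/2 ≈ 1.618, the proportion the paper looks for in the human body.

```python
import math

# Ratios of consecutive Fibonacci numbers converge to the golden ratio.
fib = [1, 1]
for _ in range(30):
    fib.append(fib[-1] + fib[-2])

phi = (1 + math.sqrt(5)) / 2
ratio = fib[-1] / fib[-2]
print(ratio, phi)  # the ratio approaches phi
```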
Abstract:
This qualitative study explores the barriers and dilemmas faced by beginning and novice mentors in post-compulsory education in the southeast of England. It analyses critical incidents (Tripp, 2012) taken from the everyday practice of mentors who were supporting new teachers and lecturers in the southeast of England. It categorises the different types of critical incidents that mentors encountered and describes the strategies and rationales mentors used to support mentees and (indirectly) their learners and colleagues. The study explores ways in which mentors' own values, beliefs and life experiences affected their mentoring practice.

Methodology: As part of a specialist master’s-level professional development module, 21 mentors wrote about two critical incidents (Tripp, 2012) taken from their own professional experiences, which aimed to demonstrate their support for their mentee’s range of complex needs. These critical incidents were written up as short case studies, which justified the rationale for their interventions and demonstrated the mentors' own professional development in mentoring. Critical incidents were used as units of analysis and categorised thematically by topic, sector and the mentoring strategies used.

Findings: The research demonstrated the complex nature of decision-making and the potential for professional learning within a mentoring dyad. The study of these critical incidents found that mentors most frequently cited the controversial nature of teaching observations, the mentor’s role in mediating professional relationships, the importance of inculcating professional dispositions in education, and the need to support new teachers so that they can use effective behaviour-management strategies. This study contributes to our understanding of the central importance of mentoring for professional growth within teacher education.
It identifies common dilemmas that novice mentors face in post-compulsory education, justifies the rationale for their interventions and mentoring strategies, and helps to identify ways in which mentors' professional development needs can be met. It demonstrates that mentoring is complex, non-linear and mediated by mentors’ motivation and values.
Abstract:
In recent years modern numerical methods have been employed in the design of Wave Energy Converters (WECs); however, the high computational costs associated with their use make it prohibitive to undertake simulations involving statistically relevant numbers of wave cycles. Experimental tests in wave tanks could also be performed more efficiently and economically if short time traces, consisting of only a few wave cycles, could be used to evaluate the hydrodynamic characteristics of a particular device or design modification. Ideally, accurate estimates of device performance could be made using results obtained from investigations with a relatively small number of wave cycles. The difficulty, however, is that many WECs, such as the Oscillating Wave Surge Converter (OWSC), exhibit significant non-linearity in their response. It is therefore challenging to make accurate predictions of annual energy yield for a given spectral sea state using short-duration realisations of that sea, because the non-linear device response to particular phase couplings of the sinusoidal components within those time traces may influence the estimate of mean power capture obtained. As a result, it is generally accepted that the most appropriate estimate of mean power capture for a sea state is obtained over many hundreds (or thousands) of wave cycles, which ensures that the potential influence of phase locking is negligible in comparison with the predictions made. In this paper, potential methods of providing reasonable estimates of relative variations in device performance using short-duration sea states are introduced. The aim of the work is to establish the shortest sea-state duration required to provide statistically significant estimates of the mean power capture of a particular type of Wave Energy Converter. The results show that carefully selected wave traces can be used to reliably assess variations in power output due to changes in the hydrodynamic design or wave climate.
Abstract:
The use of human brain electroencephalography (EEG) signals for automatic person identification has been investigated for a decade. It has been found that the performance of an EEG-based person identification system depends strongly on which features are extracted from multi-channel EEG signals. Linear methods such as Power Spectral Density and the Autoregressive Model have been used to extract EEG features. However, these methods assume that EEG signals are stationary. In fact, EEG signals are complex, non-linear, non-stationary and random in nature. In addition, other factors such as brain condition or human characteristics may have an impact on performance; however, these factors have not been investigated and evaluated in previous studies. It has been found in the literature that entropy is used to measure the randomness of non-linear time series data. Entropy is also used to measure the level of chaos of brain-computer interface systems. Therefore, this thesis proposes to study the role of entropy in the non-linear analysis of EEG signals to discover new features for EEG-based person identification. Five different entropy methods, including Shannon Entropy, Approximate Entropy, Sample Entropy, Spectral Entropy, and Conditional Entropy, are proposed to extract entropy features that are used to evaluate the performance of EEG-based person identification systems and the impacts of epilepsy, alcohol, age and gender characteristics on these systems. Experiments were performed on the Australian EEG and Alcoholism datasets. Experimental results have shown that, in most cases, the proposed entropy features yield very fast person identification, yet with comparable accuracy, because the feature dimension is low. In real-life security operation, timely response is critical. The experimental results have also shown that epilepsy, alcohol, age and gender characteristics have impacts on EEG-based person identification systems.
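Of the five entropy methods listed above, Shannon entropy of the amplitude distribution is the simplest to sketch. The histogram binning and the synthetic stand-in signals below are illustrative assumptions, not the thesis's EEG data or exact estimator settings.

```python
import numpy as np

# Shannon entropy of a histogram of signal amplitudes: a signal whose
# amplitudes are spread over many bins scores high; a signal dominated by a
# single amplitude scores low.

def shannon_entropy(signal, bins=32):
    counts, _ = np.histogram(signal, bins=bins)
    p = counts / counts.sum()
    p = p[p > 0]                        # 0 * log(0) is taken as 0
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(3)
noisy = rng.standard_normal(4096)            # irregular signal, amplitudes spread out
spiky = np.zeros(4096)
spiky[::100] = 1.0                           # mostly flat signal, one dominant bin
print(shannon_entropy(noisy), shannon_entropy(spiky))
```

Because each channel reduces to a handful of scalar entropy values, the resulting feature vector is low-dimensional, which is what makes the identification step fast.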
Abstract:
This dissertation presents an application of time series analysis to the modelling of the FTSE 100 financial index. Based on the series of returns, stationarity was examined with the Phillips-Perron test, normality with the Jarque-Bera test, and independence through the autocorrelation function and the Ljung-Box test; GARCH models were then used to model and forecast the conditional variance (volatility) of the financial series under study. Financial time series have peculiar characteristics, with some periods more volatile than others. These periods are distributed in clusters, suggesting a degree of temporal dependence. Given the presence of such volatility clusters (non-linearity), conditional heteroskedastic models are required, that is, models which assume that the conditional variance of a time series is not constant and depends on time. Given the great variability of financial time series over time, the ARCH models (Engle, 1982) and their generalisation GARCH (Bollerslev, 1986) prove the most suitable for studying volatility. In particular, these non-linear models have a random conditional variance, and their study makes it possible to estimate and forecast the future volatility of the series. Finally, an empirical study is presented, based on a proposed modelling and forecasting of a set of real data from the FTSE 100 index.
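The conditional-variance recursion at the heart of the GARCH(1,1) model named above can be sketched as follows: sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]. The parameter values are illustrative round numbers, not estimates from the FTSE 100 data.

```python
import numpy as np

# Simulate a GARCH(1,1) return series; alpha + beta < 1 gives a stationary
# process with unconditional variance omega / (1 - alpha - beta), and the
# recursion produces the volatility clustering described in the abstract.
rng = np.random.default_rng(4)
omega, alpha, beta = 0.05, 0.08, 0.9
n = 10000
r = np.empty(n)
sigma2 = np.empty(n)
sigma2[0] = omega / (1 - alpha - beta)   # start at the unconditional variance
r[0] = np.sqrt(sigma2[0]) * rng.standard_normal()
for t in range(1, n):
    sigma2[t] = omega + alpha * r[t-1]**2 + beta * sigma2[t-1]
    r[t] = np.sqrt(sigma2[t]) * rng.standard_normal()

print(np.var(r), omega / (1 - alpha - beta))  # sample vs unconditional variance
```

Fitting omega, alpha and beta to observed returns (by maximum likelihood) and iterating the same recursion forward is what yields the volatility forecasts studied in the dissertation.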
Abstract:
Master's dissertation in Systems and Computer Engineering (Control Systems), Faculdade de Ciências e Tecnologia, Universidade do Algarve, 2001.