884 results for Piecewise linear systems
Abstract:
The use of special units for logarithmic ratio quantities is reviewed. The neper is used with a natural logarithm (logarithm to the base e) to express the logarithm of the amplitude ratio of two pure sinusoidal signals, particularly in the context of linear systems where it is desired to represent the gain or loss in amplitude of a single-frequency signal between the input and output. The bel, and its more commonly used submultiple, the decibel, are used with a decadic logarithm (logarithm to the base 10) to measure the ratio of two power-like quantities, such as a mean square signal or a mean square sound pressure in acoustics. Thus two distinctly different quantities are involved. In this review we define the quantities first, without reference to the units, as is standard practice in any system of quantities and units. We show that two different definitions of the quantity power level, or logarithmic power ratio, are possible. We show that this leads to two different interpretations for the meaning and numerical values of the units bel and decibel. We review the question of which of these alternative definitions is actually used, or is used by implication, by workers in the field. Finally, we discuss the relative advantages of the alternative definitions.
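The two conventions can be summarised in the usual way. Assuming the conventional relation \(P \propto F^2\) between a power quantity \(P\) and a field (amplitude) quantity \(F\), the standard definitions read:

```latex
L_F = \ln\frac{F}{F_0}\;\mathrm{Np} = 20\,\log_{10}\frac{F}{F_0}\;\mathrm{dB},
\qquad
L_P = \tfrac{1}{2}\ln\frac{P}{P_0}\;\mathrm{Np} = 10\,\log_{10}\frac{P}{P_0}\;\mathrm{dB},
```

from which \(1\,\mathrm{Np} = (20/\ln 10)\,\mathrm{dB} \approx 8.686\,\mathrm{dB}\). The review's point is that dropping the factor \(\tfrac{1}{2}\) in the definition of \(L_P\) yields a second, equally self-consistent interpretation of the bel and decibel.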
Abstract:
Liquid chromatography-mass spectrometry (LC-MS) datasets can be compared or combined following chromatographic alignment. Here we describe a simple solution to the specific problem of aligning one LC-MS dataset and one LC-MS/MS dataset, acquired on separate instruments from an enzymatic digest of a protein mixture, using feature extraction and a genetic algorithm. First, the LC-MS dataset is searched within a few ppm of the calculated theoretical masses of peptides confidently identified by LC-MS/MS. A piecewise linear function is then fitted to these matched peptides using a genetic algorithm with a fitness function that is insensitive to incorrect matches but sufficiently flexible to adapt to the discrete shifts common when comparing LC datasets. We demonstrate the utility of this method by aligning ion trap LC-MS/MS data with accurate LC-MS data from an FTICR mass spectrometer and show how hybrid datasets can improve peptide and protein identification by combining the speed of the ion trap with the mass accuracy of the FTICR, similar to using a hybrid ion trap-FTICR instrument. We also show that the high resolving power of FTICR can improve precision and linear dynamic range in quantitative proteomics. The alignment software, msalign, is freely available as open source.
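The two building blocks described above — matching observed masses to theoretical peptide masses within a ppm tolerance, and evaluating a piecewise linear retention-time map — can be sketched as follows (hypothetical helper names; the actual msalign implementation and its genetic-algorithm fitting differ):

```python
def within_ppm(observed, theoretical, tol_ppm=5.0):
    # relative mass error in parts per million
    return abs(observed - theoretical) / theoretical * 1e6 <= tol_ppm

def piecewise_linear(x, knots_x, knots_y):
    """Evaluate a piecewise linear function defined by sorted knots
    (knots_x ascending); extrapolates with the nearest segment's slope."""
    if x <= knots_x[0]:
        i = 0
    elif x >= knots_x[-1]:
        i = len(knots_x) - 2
    else:
        i = max(j for j in range(len(knots_x) - 1) if knots_x[j] <= x)
    x0, x1 = knots_x[i], knots_x[i + 1]
    y0, y1 = knots_y[i], knots_y[i + 1]
    return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
```

In the paper's setting the knot positions are the parameters the genetic algorithm optimises, with a fitness function robust to incorrect mass matches.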
Nonlinear system identification using particle swarm optimisation tuned radial basis function models
Abstract:
A novel particle swarm optimisation (PSO) tuned radial basis function (RBF) network model is proposed for identification of non-linear systems. At each stage of the orthogonal forward regression (OFR) model construction process, PSO is adopted to tune one RBF unit's centre vector and diagonal covariance matrix by minimising the leave-one-out (LOO) mean square error (MSE). This PSO-aided OFR automatically determines how many tunable RBF nodes are sufficient for modelling. Compared with the state-of-the-art local regularisation assisted orthogonal least squares algorithm based on the LOO MSE criterion for constructing fixed-node RBF network models, the PSO tuned RBF model construction produces more parsimonious RBF models with better generalisation performance and is often more efficient in model construction. The effectiveness of the proposed PSO-aided OFR algorithm for constructing tunable node RBF models is demonstrated using three real data sets.
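The PSO update at the heart of this approach can be sketched minimally. This is the canonical global-best PSO on a toy objective, not the paper's OFR-integrated, covariance-tuning variant; all parameter values here are illustrative:

```python
import random

def pso(f, dim, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5,
        lo=-5.0, hi=5.0, seed=0):
    """Minimise f over R^dim with a basic global-best particle swarm."""
    rng = random.Random(seed)
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]               # personal best positions
    pval = [f(x) for x in xs]                # personal best values
    g = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]       # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                # inertia + cognitive pull + social pull
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval
```

In the paper this inner loop would minimise the LOO MSE over one RBF unit's centre and diagonal covariance rather than a fixed test function.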
Abstract:
Data assimilation algorithms are a crucial part of operational systems in numerical weather prediction, hydrology and climate science, but are also important for dynamical reconstruction in medical applications and quality control for manufacturing processes. Usually, a variety of diverse measurement data are employed to determine the state of the atmosphere or of a wider system including land and oceans. Modern data assimilation systems use more and more remote sensing data, in particular radiances measured by satellites, radar data and integrated water vapor measurements via GPS/GNSS signals. The inversion of some of these measurements is ill-posed in the classical sense, i.e. the inverse of the operator H which maps the state onto the data is unbounded. In this case, the use of such data can lead to significant instabilities of data assimilation algorithms. The goal of this work is to provide a rigorous mathematical analysis of the instability of well-known data assimilation methods. Here, we will restrict our attention to particular linear systems, in which the instability can be explicitly analyzed. We investigate three-dimensional variational assimilation and four-dimensional variational assimilation. A theory for the instability is developed using the classical theory of ill-posed problems in a Banach space framework. Further, we demonstrate by numerical examples that instabilities can and will occur, including an example from dynamic magnetic tomography.
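For reference, three-dimensional variational assimilation minimises the standard cost functional (standard notation, assuming Gaussian background and observation errors with covariances B and R):

```latex
J(x) = \tfrac{1}{2}\,(x - x_b)^{\mathsf T} B^{-1} (x - x_b)
     + \tfrac{1}{2}\,(y - Hx)^{\mathsf T} R^{-1} (y - Hx),
```

where \(x_b\) is the background state, \(y\) the observations and \(H\) the observation operator. The instability analysed in the paper arises when \(H^{-1}\) is unbounded, so small perturbations in \(y\) can produce large changes in the minimiser.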
Abstract:
Total ozone trends are typically studied using linear regression models that assume a first-order autoregression of the residuals [so-called AR(1) models]. We consider total ozone time series over 60°S–60°N from 1979 to 2005 and show that most latitude bands exhibit long-range correlated (LRC) behavior, meaning that ozone autocorrelation functions decay by a power law rather than exponentially as in AR(1). At such latitudes the uncertainties of total ozone trends are greater than those obtained from AR(1) models and the expected time required to detect ozone recovery is correspondingly longer. We find no evidence of LRC behavior in southern middle and high subpolar latitudes (45°–60°S), where the long-term ozone decline attributable to anthropogenic chlorine is the greatest. We thus confirm an earlier prediction based on an AR(1) analysis that this region (especially the highest latitudes, and especially the South Atlantic) is the optimal location for the detection of ozone recovery, with a statistically significant ozone increase attributable to chlorine likely to be detectable by the end of the next decade. In northern middle and high latitudes, on the other hand, there is clear evidence of LRC behavior. This increases the uncertainties on the long-term trend attributable to anthropogenic chlorine by about a factor of 1.5 and lengthens the expected time to detect ozone recovery by a similar amount (from ∼2030 to ∼2045). If the long-term changes in ozone are instead fit by a piecewise-linear trend rather than by stratospheric chlorine loading, then the strong decrease of northern middle- and high-latitude ozone during the first half of the 1990s and its subsequent increase in the second half of the 1990s projects more strongly on the trend and makes a smaller contribution to the noise.
This both increases the trend and weakens the LRC behavior at these latitudes, to the extent that ozone recovery (according to this model, and in the sense of a statistically significant ozone increase) is already on the verge of being detected. The implications of this rather controversial interpretation are discussed.
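The distinction between AR(1) and long-range correlated residuals comes down to how the autocorrelation function decays with lag. A minimal illustration (the AR coefficient and power-law exponent below are arbitrary, not fitted ozone values):

```python
def ar1_acf(phi, k):
    # autocorrelation of an AR(1) process at lag k decays exponentially:
    # rho(k) = phi**k
    return phi ** k

def power_law_acf(alpha, k):
    # long-range correlated (LRC) behaviour: rho(k) ~ k**(-alpha),
    # with 0 < alpha < 1, decaying far more slowly at large lags
    return float(k) ** (-alpha)
```

At small lags the exponential may dominate, but at large lags any power law eventually exceeds it, which is why trend uncertainties (and detection times) grow under LRC noise.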
Abstract:
With the prospect of exascale computing, computational methods requiring only local data become especially attractive. Consequently, the typical domain decomposition of atmospheric models means horizontally-explicit vertically-implicit (HEVI) time-stepping schemes warrant further attention. In this analysis, Runge-Kutta implicit-explicit schemes from the literature are analysed for their stability and accuracy using a von Neumann stability analysis of two linear systems. Attention is paid to the numerical phase to indicate the behaviour of phase and group velocities. Where the analysis is tractable, analytically derived expressions are considered. For more complicated cases, amplification factors have been numerically generated and the associated amplitudes and phase diagnosed. Analysis of a system describing acoustic waves has necessitated attributing the three resultant eigenvalues to the three physical modes of the system. To do so, a series of algorithms has been devised to track the eigenvalues across the frequency space. The result enables analysis of whether the schemes exactly preserve the non-divergent mode; and whether there is evidence of spurious reversal in the direction of group velocities or asymmetry in the damping for the pair of acoustic modes. Frequency ranges that span next-generation high-resolution weather models to coarse-resolution climate models are considered; and a comparison is made of errors accumulated from multiple stability-constrained shorter time-steps from the HEVI scheme with a single integration from a fully implicit scheme over the same time interval. Two schemes, “Trap2(2,3,2)” and “UJ3(1,3,2)”, both already used in atmospheric models, are identified as offering consistently good stability and representation of phase across all the analyses. Furthermore, according to a simple measure of computational cost, “Trap2(2,3,2)” is the least expensive.
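The von Neumann machinery can be illustrated on the scalar oscillation equation dy/dt = iωy, whose exact amplification factor per step has modulus one. A sketch for two textbook schemes (these are illustrations only, not the paper's Trap2(2,3,2) or UJ3(1,3,2) IMEX schemes):

```python
def amp_forward_euler(omega_dt):
    # explicit step y_{n+1} = y_n + i*omega*dt*y_n:
    # |A| = sqrt(1 + (omega*dt)^2) > 1, so the mode is amplified (unstable)
    return abs(1 + 1j * omega_dt)

def amp_trapezoidal(omega_dt):
    # implicit trapezoidal rule y_{n+1} = y_n + i*omega*dt*(y_n + y_{n+1})/2:
    # |A| = 1 for all omega*dt, i.e. neutrally stable, errors appear in phase
    return abs((1 + 0.5j * omega_dt) / (1 - 0.5j * omega_dt))
```

The paper's analysis does the analogous computation for coupled linear systems, where the extra work is attributing each numerical eigenvalue to a physical mode.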
Abstract:
Wave solutions to a mechanochemical model for cytoskeletal activity are studied and the results applied to the waves of chemical and mechanical activity that sweep over an egg shortly after fertilization. The model takes into account the calcium-controlled presence of actively contractile units in the cytoplasm, and consists of a viscoelastic force equilibrium equation and a conservation equation for calcium. Using piecewise linear caricatures, we obtain analytic solutions for travelling waves on a strip and demonstrate that the full nonlinear system behaves as predicted by the analytic solutions. The equations are solved on a sphere and the numerical results are similar to the analytic solutions. We indicate how the speed of the waves can be used as a diagnostic tool with which the chemical reactivity of the egg surface can be measured.
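The idea of a piecewise linear caricature is to replace a smooth excitable nonlinearity by a step, so the problem becomes linear on each side of the wave front. A generic McKean-type example (not necessarily the caricature used in this paper):

```latex
\frac{\partial u}{\partial t} = D\,\frac{\partial^2 u}{\partial x^2} + f(u),
\qquad
f(u) = u(1-u)(u-a) \;\longrightarrow\; f(u) \approx -u + H(u-a),
```

where \(H\) is the Heaviside step function. For a travelling wave \(u(x - ct)\) the equation is then linear with constant coefficients on either side of the point where \(u = a\), so the solution is a sum of exponentials matched at the front, yielding an explicit wave speed \(c\).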
Abstract:
This work develops a mathematical foundation for digital signal processing from the point of view of interval mathematics. It addresses the open problem of precision and representation of data in digital systems through an interval version of signal representation. Signal processing is a rich and complex area, so this work restricts its focus to linear time-invariant systems. A vast literature exists in the area, but some concepts in interval mathematics need to be redefined or elaborated in order to construct a solid theory of interval signal processing. We build the basic foundations of signal processing in the interval setting: basic properties such as linearity, stability and causality, and an interval version of linear systems and its properties. Interval versions of the convolution and of the Z-transform are presented. Convergence of systems is analysed using the interval Z-transform, an essentially interval distance, and interval complex numbers, with an application to an interval filter.
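The basic interval operations underlying an interval convolution can be sketched as follows (a minimal illustration, not the thesis's formal definitions):

```python
class Interval:
    """Closed interval [lo, hi] with the arithmetic needed for an
    interval-valued discrete convolution."""
    def __init__(self, lo, hi):
        assert lo <= hi
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        # [a,b] + [c,d] = [a+c, b+d]
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __mul__(self, other):
        # [a,b] * [c,d] = [min of products, max of products]
        ps = [self.lo * other.lo, self.lo * other.hi,
              self.hi * other.lo, self.hi * other.hi]
        return Interval(min(ps), max(ps))

    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

def interval_convolve(x, h):
    """Discrete convolution of two finite interval-valued sequences."""
    y = [Interval(0.0, 0.0) for _ in range(len(x) + len(h) - 1)]
    for n, xn in enumerate(x):
        for k, hk in enumerate(h):
            y[n + k] = y[n + k] + xn * hk
    return y
```

Each output sample is then itself an interval that encloses every pointwise result compatible with the input uncertainty.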
Abstract:
This work proposes a computational environment for teaching control systems, called ModSym. The software implements a graphical interface for modelling linear physical systems and shows, step by step, the processing required to obtain mathematical models for these systems. A physical system can be represented in the software in three different ways: as a graphical diagram built from elements of the electrical, translational mechanical, rotational mechanical and hydraulic domains; as a bond graph; or as a signal-flow diagram. Once the system is represented, ModSym can compute its transfer functions in symbolic form using Mason's rule. The software also computes transfer functions in numerical form, as well as parametric sensitivity functions. The work further proposes an algorithm for obtaining the signal-flow diagram of a physical system from its bond graph. This algorithm, together with the system analysis methodology known as the Network Method, allows Mason's rule to be used to compute transfer functions of the systems modelled in the software.
Abstract:
In this work a modification of the ANFIS (Adaptive Network Based Fuzzy Inference System) structure is proposed as a systematic method for the identification and control of nonlinear plants with a large operational range, using local linear systems: models and controllers. The method is based on the multiple-model approach: local linear models are obtained and then combined by the proposed neurofuzzy structure. A metric that allows a satisfactory combination of these models is obtained after training the structure, yielding a global identification of the plant. A controller is designed for each local model, and global control is obtained by mixing the local controllers' signals through the modified ANFIS. The modification of the ANFIS architecture allows the two neurofuzzy structures to share knowledge, so the same metric obtained to combine models can be used to combine controllers. Two case studies are used to validate the new ANFIS structure; knowledge sharing is evaluated in the second one, showing that a single modified ANFIS structure suffices both to combine linear models to identify a nonlinear plant and to combine linear controllers to control it. The proposed method allows any identification and control technique to be used for obtaining the local models and local controllers, and it reduces the complexity of using ANFIS for identification and control. This work has prioritized simpler identification and control techniques in order to simplify the use of the method.
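The multiple-model combination can be sketched as follows. Gaussian validity functions are an assumption made here purely for illustration; in the paper the combination metric is obtained by training the modified ANFIS:

```python
import math

def membership(y, centre, width):
    # hypothetical Gaussian validity function for one local model
    return math.exp(-((y - centre) / width) ** 2)

def blended_output(y, u, local_models, centres, width=1.0):
    """Normalised weighted mix of local linear models y_next = a*y + b*u.

    local_models: list of (a, b) pairs; centres: operating points at which
    each local model is most valid."""
    ws = [membership(y, c, width) for c in centres]
    total = sum(ws)
    ws = [w / total for w in ws]
    return sum(w * (a * y + b * u) for w, (a, b) in zip(ws, local_models))
```

The same normalised weights can be reused to blend the local controllers' signals, which is exactly the knowledge sharing the modified architecture enables.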
Abstract:
The present work presents the study and implementation of an adaptive compensated bilinear generalized predictive controller. It uses conventional predictive-control techniques and incorporates adaptive-control techniques for better results. To address control problems frequently found in the chemical industry, bilinear models are used to represent the dynamics of the studied systems. Bilinear models are simpler than general nonlinear models, yet they can represent the intrinsic nonlinearities of industrial processes. Linearization of the model by a time-step quasilinear approach is used to allow the application of the generalized predictive controller (GPC) equations. Such linearization, however, generates a prediction error, which is minimized through a compensation term. This term is implemented in an adaptive form, owing to the nonlinear relationship between the input signal and the prediction error. Simulation results show the efficiency of the adaptive bilinear predictive controller in comparison with the conventional one.
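The time-step quasilinearisation, and the prediction error it introduces, can be made concrete on a first-order bilinear model (all coefficient values here are illustrative, not taken from the paper):

```python
def bilinear_step(y, u, a=0.8, b=0.5, c=0.2):
    # true bilinear plant: y(k+1) = a*y(k) + b*u(k) + c*y(k)*u(k)
    return a * y + b * u + c * y * u

def quasilinear_predict(y, u, u_prev, a=0.8, b=0.5, c=0.2):
    # time-step quasilinearisation: freeze the bilinear term at the
    # previous input, giving a linear-in-(y, u) model the GPC can use
    return (a + c * u_prev) * y + b * u

# the prediction error a compensation term would have to absorb:
# err = c * y * (u - u_prev), nonlinear in the input change
y, u_prev, u = 1.0, 0.3, 0.7
err = bilinear_step(y, u) - quasilinear_predict(y, u, u_prev)
```

Because `err` depends on the product of the state and the input change, a fixed offset cannot cancel it, which motivates the adaptive compensation term.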
Abstract:
The objective of this paper is the numerical study of the behavior of reinforced concrete beams and columns by means of nonlinear numerical simulations. The numerical analysis is based on the finite element method as implemented in CASTEM 2000. This program uses a perfect elastoplastic constitutive model for the steel, the Drucker-Prager model for the concrete, and the Newton-Raphson method for the solution of the nonlinear systems. The work concentrates on the determination of equilibrium curves for the beams and force-strain curves for the columns. The numerical responses are compared with experimental results from the literature in order to check the reliability of the numerical analyses.
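A minimal sketch of the Newton-Raphson iteration used for such nonlinear systems, written out for a 2×2 system with an explicit Jacobian (CASTEM 2000's implementation is of course far more general):

```python
def newton_raphson(F, J, x0, tol=1e-10, max_iter=50):
    """Solve F(x, y) = (0, 0) given the residual F and Jacobian entries
    J(x, y) = (a, b, c, d) for the matrix [[a, b], [c, d]]."""
    x, y = x0
    for _ in range(max_iter):
        f1, f2 = F(x, y)
        if max(abs(f1), abs(f2)) < tol:
            break
        a, b, c, d = J(x, y)
        det = a * d - b * c
        # solve J * delta = -F by Cramer's rule
        dx = (-f1 * d + f2 * b) / det
        dy = (-f2 * a + f1 * c) / det
        x, y = x + dx, y + dy
    return x, y
```

In a finite element setting, F is the out-of-balance force vector and J the tangent stiffness matrix, updated at each load step to trace the equilibrium curve.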
Abstract:
This paper deals with approaches for sparse matrix substitutions using vector processing. Many publications have used the W-matrix method to solve the forward/backward substitutions on vector computers. Recently a different approach has been presented using a dependency-based substitution algorithm (DBSA). In this paper the focus is on new algorithms able to exploit the sparsity of the vectors. The efficiency is tested using linear systems from power systems with 118, 320, 725 and 1729 buses. The tests were performed on a CRAY Y-MP2E/232. The speedups for a fast-forward/fast-backward substitution using the 1729-bus system are near 19 and 14 for real and complex arithmetic operations, respectively. When full forward/backward substitution is employed, the speedups are about 8 and 6 for the same simulations.
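The "fast-forward" idea — skipping work wherever entries of the right-hand side (and hence of the evolving solution) are zero — can be sketched for a unit lower-triangular factor stored by columns. This is a simplified scalar version; the paper's vectorized W-matrix and DBSA schemes are considerably more elaborate:

```python
def forward_substitution(L_cols, b):
    """Solve L x = b for unit lower-triangular L.

    L_cols[j] is a dict {row: value} of the strictly-lower entries in
    column j.  Columns whose solution entry is zero are skipped entirely,
    which is what makes the substitution fast for sparse right-hand sides."""
    x = b[:]
    for j in range(len(b)):
        if x[j] == 0:          # sparsity propagates: nothing to eliminate
            continue
        for i, v in L_cols[j].items():
            x[i] -= v * x[j]
    return x
```

For power-system right-hand sides, which are typically very sparse, most columns are skipped, which is the source of the large speedups reported.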
Abstract:
Relaxed conditions for the stability of nonlinear continuous-time systems given by fuzzy models are presented. A theoretical analysis shows that the proposed method provides better, or at least the same, results as the methods presented in the literature. Digital simulations exemplify this fact. This result is also used for fuzzy regulator design. The nonlinear systems are represented by the fuzzy models proposed by Takagi and Sugeno. The stability analysis and the design of controllers are described by LMIs (Linear Matrix Inequalities), which can be solved efficiently using convex programming techniques.
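A sufficient condition behind such LMI-based results is the existence of a common quadratic Lyapunov function V(x) = xᵀPx with AᵢᵀP + PAᵢ < 0 for every local model of the Takagi-Sugeno system. A small sketch that checks this for 2×2 candidates via Sylvester's criterion (a hand-rolled check for illustration; in practice the LMIs are solved with convex-programming tools):

```python
def ts_blend(A_list, weights):
    """Blended 2x2 system matrix sum_i w_i * A_i of a Takagi-Sugeno model,
    with weights normalised to sum to one."""
    s = sum(weights)
    w = [wi / s for wi in weights]
    return [[sum(wi * A[r][c] for wi, A in zip(w, A_list)) for c in range(2)]
            for r in range(2)]

def is_neg_definite_sym(M):
    # Sylvester's criterion for a symmetric 2x2 matrix M < 0:
    # leading minor negative and determinant positive
    return M[0][0] < 0 and M[0][0] * M[1][1] - M[0][1] * M[1][0] > 0

def common_lyapunov_ok(A_list, P):
    """Check A_i^T P + P A_i < 0 for every local model, which is
    sufficient for quadratic stability of the blended system."""
    for A in A_list:
        Q = [[sum(A[k][r] * P[k][c] + P[r][k] * A[k][c] for k in range(2))
              for c in range(2)] for r in range(2)]
        if not is_neg_definite_sym(Q):
            return False
    return True
```

The "relaxed conditions" of the paper weaken exactly this kind of requirement, reducing conservatism while the verification remains an LMI feasibility problem.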
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)