942 results for nonlinear function
Abstract:
Sufficient conditions are given for the L2-stability of a class of feedback systems consisting of a linear operator G and a nonlinear gain function, either odd monotone or restricted by a power-law, in cascade in a negative feedback loop. The criterion takes the form of a frequency-domain inequality, Re{[1 + Z(jω)] G(jω)} ≥ δ > 0 for all ω ∈ (−∞, +∞), where Z(jω) is given by Z(jω) = β[Y1(jω) + Y2(jω)] + (1 − β)[Y3(jω) − Y3(−jω)], with 0 ≤ β ≤ 1 and the functions y1(·), y2(·) and y3(·) satisfying the time-domain inequalities ∫_{−∞}^{+∞} |y1(t) + y2(t)| dt ≤ 1 − ε, with y1(t) = 0 for t < 0, y2(t) = 0 for t > 0 and ε > 0, together with a further inequality in which c2 is a constant depending on the order of the power-law restricting the nonlinear function. The criterion is derived using Zames' passive operator theory and is shown to be more general than the existing criteria.
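As a rough illustration (not taken from the paper), the frequency-domain condition can be checked numerically once G(jω) and a candidate multiplier Z(jω) are specified; the transfer functions below are hypothetical placeholders, and the grid only approximates the full frequency axis.

# Minimal sketch (assumption: illustrative transfer functions, not from the paper):
# numerically check the criterion  Re{[1 + Z(jw)] G(jw)} >= delta > 0  over a frequency grid.
import numpy as np

def G(jw):
    # example linear operator: a stable second-order transfer function (hypothetical)
    return 1.0 / (jw**2 + 1.4 * jw + 1.0)

def Z(jw):
    # example multiplier built from causal and anticausal parts (hypothetical choice)
    return 0.5 / (jw + 2.0) + 0.5 / (-jw + 2.0)

w = np.linspace(-100.0, 100.0, 200001)          # grid approximating (-inf, +inf)
vals = np.real((1.0 + Z(1j * w)) * G(1j * w))
delta = vals.min()
print("infimum over the grid of Re{[1+Z]G}:", delta)
print("criterion satisfied on this grid" if delta > 0 else "criterion not satisfied on this grid")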
Abstract:
Pyruvate conversion to acetyl-CoA by the pyruvate dehydrogenase (PDH) multienzyme complex is known as a key node in affecting the metabolic fluxes of animal cell culture. However, its possible role in causing nonlinear dynamic behavior such as oscillations and multiplicity in animal cells has received little attention. In this work, the kinetic and dynamic behavior of PDH of eucaryotic cells has been analyzed by using both in vitro and simplified in vivo models. With the in vitro model the overall reaction rate (v(1)) of PDH is shown to be a nonlinear function of pyruvate concentration, leading to oscillations under certain conditions. All enzyme components affect v(1) and the nonlinearity of PDH significantly, with the protein X and the core enzyme dihydrolipoamide acyltransferase (E2) being predominant. By considering the synthesis rates of pyruvate and PDH components the in vitro model is expanded to emulate in vivo conditions. Analysis using the in vivo model reveals another interesting kinetic feature of the PDH system, namely, multiple steady states. Depending on the pyruvate and enzyme levels or the operation mode, either a steady state with a high pyruvate decarboxylation rate or a steady state with a significantly lower decarboxylation rate can be achieved under otherwise identical conditions. In general, the more efficient steady state is associated with a lower pyruvate concentration. A possible time delay in the substrate supply and enzyme synthesis can also affect the steady state to be achieved and lead to oscillations under certain conditions. Overall, the predictions of multiplicity for the PDH system agree qualitatively well with recent experimental observations in animal cell cultures. The model analysis gives some hints for improving pyruvate metabolism in animal cell culture.
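As a generic illustration of how such multiplicity can arise (this is not the authors' PDH model), a substrate-inhibited rate law balanced against a constant supply and a small first-order drain already yields more than one steady state; all kinetic constants below are hypothetical.

# Minimal sketch (assumption: a generic Haldane-type substrate-inhibited rate law, a constant
# pyruvate supply and a small first-order drain; not the authors' PDH model).
import numpy as np
from scipy.optimize import brentq

Vmax, Km, Ki = 1.0, 0.1, 0.5          # hypothetical kinetic constants
supply, kd = 0.45, 0.005              # constant pyruvate supply and first-order drain

def v(S):                             # substrate-inhibited decarboxylation rate
    return Vmax * S / (Km + S + S**2 / Ki)

def residual(S):                      # steady state: supply - decarboxylation - drain = 0
    return supply - v(S) - kd * S

grid = np.linspace(1e-6, 200.0, 40001)
r = residual(grid)
roots = [brentq(residual, grid[i], grid[i + 1])
         for i in range(len(grid) - 1) if r[i] * r[i + 1] < 0]
for S in roots:
    print(f"steady state: pyruvate = {S:.3f}, decarboxylation rate = {v(S):.3f}")

With these numbers the balance has three roots: a low-pyruvate state with a high decarboxylation rate, a high-pyruvate state with a much lower rate, and an unstable state in between, qualitatively matching the observation above that the more efficient state has the lower pyruvate concentration.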
Abstract:
We find that the Rashba spin splitting is intrinsically a nonlinear function of the momentum, and the linear Rashba model may overestimate it significantly, especially in narrow-gap semiconductors. A nonlinear Rashba model is proposed, which is in good agreement with the numerical results from the eight-band k·p theory. Using this model, we find pronounced suppression of the D'yakonov-Perel' spin relaxation rate at large electron densities, and a nonmonotonic dependence of the resonance peak position of the electron spin lifetime on the electron density in [111]-oriented quantum wells, both in qualitative disagreement with the predictions of the linear Rashba model.
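For reference, the linear Rashba model that this work argues against is the standard textbook form (background knowledge, not taken from the abstract):

\[
H_R = \alpha_R \left( \sigma_x k_y - \sigma_y k_x \right), \qquad \Delta E(k) = 2\,\alpha_R\,|k|,
\]

i.e. a spin splitting strictly proportional to the in-plane momentum; the reported nonlinearity is the deviation of the actual splitting from this linear-in-k behavior at larger momenta.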
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
The ability of neural networks to realize complex nonlinear functions makes them attractive for system identification. This paper describes a novel method using artificial neural networks to solve robust parameter estimation problems for nonlinear models with unknown-but-bounded errors and uncertainties. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points. A solution for the robust estimation problem with unknown-but-bounded error corresponds to an equilibrium point of the network. Simulation results are presented as an illustration of the proposed approach.
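The idea of letting a recurrent network settle into an equilibrium that solves a constrained estimation problem can be sketched as a gradient flow on an error energy with the parameters clipped to known bounds; the valid-subspace computation of the internal parameters described here is not reproduced, and all data and bounds below are hypothetical.

# Minimal sketch (assumption: a continuous Hopfield-style network realized as discretized
# gradient flow on E(theta) = ||y - A theta||^2 with box-bounded parameters; a generic
# illustration, not the authors' valid-subspace formulation).
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))                      # regression matrix (hypothetical model)
theta_true = np.array([1.0, -2.0, 0.5])
y = A @ theta_true + rng.uniform(-0.1, 0.1, 50)   # unknown-but-bounded errors in [-0.1, 0.1]

lo, hi = -5.0, 5.0                                # parameter bounds (hypothetical)
theta = np.zeros(3)
eta = 1e-3
for _ in range(5000):                             # network dynamics: discretized gradient flow
    grad = -2.0 * A.T @ (y - A @ theta)
    theta = np.clip(theta - eta * grad, lo, hi)   # equilibrium point = constrained estimate
print("estimated parameters:", theta)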
Abstract:
The objective of this work was to evaluate Nelore beef cattle growth curve parameters using the Von Bertalanffy function in a nested Bayesian procedure that allowed estimation of the joint posterior distribution of growth curve parameters, their (co)variance components, and the environmental and additive genetic components affecting them. A hierarchical model was applied; each individual had a growth trajectory described by the nonlinear function, and each parameter of this function was considered to be affected by genetic and environmental effects that were described by an animal model. Random samples of the posterior distributions were drawn using Gibbs sampling and Metropolis-Hastings algorithms. The data set consisted of a total of 145,961 BW recorded from 15,386 animals. Even though the curve parameters were estimated for animals with few records, given that the information from related animals and the structure of systematic effects were considered in the curve fitting, all mature BW predicted were suitable. A large additive genetic variance for mature BW was observed. The parameter a of growth curves, which represents asymptotic adult BW, could be used as a selection criterion to control increases in adult BW when selecting for growth rate. The effect of maternal environment on growth was carried through to maturity and should be considered when evaluating adult BW. Other growth curve parameters showed small additive genetic and maternal effects. Mature BW and parameter k, related to the slope of the curve, presented a large, positive genetic correlation. The results indicated that selection for growth rate would increase adult BW without substantially changing the shape of the growth curve. Selection to change the slope of the growth curve without modifying adult BW would be inefficient because their genetic correlation is large. However, adult BW could be considered in a selection index with its corresponding economic weight to improve the overall efficiency of beef cattle production.
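The nonlinear function at the core of this evaluation can be illustrated with the common three-parameter Von Bertalanffy form fitted to simulated weights; this is not the authors' Bayesian hierarchical model, and the ages, weights and starting values below are hypothetical.

# Minimal sketch (assumption: W(t) = a*(1 - b*exp(-k*t))**3 with simulated data; a is the
# asymptotic adult BW and k the maturation-rate parameter discussed in the abstract).
import numpy as np
from scipy.optimize import curve_fit

def von_bertalanffy(t, a, b, k):
    return a * (1.0 - b * np.exp(-k * t)) ** 3

rng = np.random.default_rng(1)
t = np.linspace(0.0, 60.0, 30)                                   # age in months (hypothetical)
w = von_bertalanffy(t, 450.0, 0.6, 0.07) + rng.normal(0, 8.0, t.size)

params, _ = curve_fit(von_bertalanffy, t, w, p0=[400.0, 0.5, 0.05])
print("estimated (a, b, k):", params)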
Abstract:
The ability of neural networks to realize complex nonlinear functions makes them attractive for system identification. This paper describes a novel barrier method using artificial neural networks to solve robust parameter estimation problems for nonlinear models with unknown-but-bounded errors and uncertainties. This problem can be represented by a typical constrained optimization problem. More specifically, a modified Hopfield network is developed and its internal parameters are computed using the valid-subspace technique. These parameters guarantee the convergence of the network to the equilibrium points. A solution for the robust estimation problem with unknown-but-bounded error corresponds to an equilibrium point of the network. Simulation results are presented as an illustration of the proposed approach.
Abstract:
Artificial Neural Networks are widely used in various engineering applications, such as the solution of nonlinear problems. The implementation of this technique in reconfigurable devices is a great challenge to researchers due to several factors, such as floating-point precision, the nonlinear activation function, performance and the area used in the FPGA. The contribution of this work is the approximation of a nonlinear function used in ANNs, the popular hyperbolic tangent activation function. The system architecture is composed of several scenarios that provide a tradeoff between performance, precision and area used in the FPGA. The results are compared across the different scenarios and with the current literature in terms of error analysis, area and system performance. © 2013 IEEE.
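One common way to approximate tanh for hardware is a lookup table with interpolation; the software sketch below (with a hypothetical table size, not necessarily the architecture evaluated in this work) makes the precision side of the tradeoff concrete by reporting the maximum absolute error.

# Minimal sketch (assumption: a uniform LUT with linear interpolation over [-4, 4], clipping
# outside that range where tanh saturates; table size N is a hypothetical design choice).
import numpy as np

N = 64
xs = np.linspace(-4.0, 4.0, N)           # breakpoints
table = np.tanh(xs)

def tanh_approx(x):
    x = np.clip(x, -4.0, 4.0)
    return np.interp(x, xs, table)       # piecewise-linear interpolation of the LUT

test = np.linspace(-8.0, 8.0, 100001)
err = np.max(np.abs(np.tanh(test) - tanh_approx(test)))
print(f"max abs error with a {N}-entry table: {err:.2e}")

Re-running with different values of N gives a quick feel for the precision-versus-area tradeoff the abstract refers to.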
Abstract:
In this paper the dynamical interactions of a double pendulum arm and an electromechanical shaker are investigated. The double pendulum is a three-degree-of-freedom system coupled to an RLC-circuit-based nonlinear shaker through a magnetic field, and the capacitor voltage is a nonlinear function of the instantaneous electric charge. Numerical simulations show the existence of chaotic behavior for some regions in the parameter space, and this behavior is characterized by the power spectral density and Lyapunov exponents. The bifurcation diagram is constructed to explore the qualitative behavior of the system. This kind of electromechanical system is frequently found in robotic systems, and in order to suppress the chaotic motion, the State-Dependent Riccati Equation (SDRE) control and the Nonlinear Saturation Control (NSC) techniques are analyzed. The robustness of these two controllers is tested by a sensitivity analysis to parametric uncertainties.
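The power-spectral-density characterization of chaos mentioned here can be sketched on a much simpler system; the forced Duffing oscillator below is a generic stand-in, not the authors' coupled pendulum-shaker model, and its parameters are a standard chaotic setting.

# Minimal sketch (assumption: a forced Duffing oscillator in a well-known chaotic regime;
# the PSD of the response is broadband, which is the chaos signature referred to above).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.signal import welch

def duffing(t, s, delta=0.3, alpha=-1.0, beta=1.0, gamma=0.5, omega=1.2):
    x, v = s
    return [v, -delta * v - alpha * x - beta * x**3 + gamma * np.cos(omega * t)]

t_eval = np.arange(0.0, 1000.0, 0.05)
sol = solve_ivp(duffing, (0.0, 1000.0), [0.1, 0.0], t_eval=t_eval, rtol=1e-8)

x = sol.y[0][len(t_eval) // 2:]                  # discard the transient
f, Pxx = welch(x, fs=20.0, nperseg=4096)
# for the chaotic regime the power is spread over a broad band rather than a few sharp lines
print("dominant spectral peak at %.3f Hz" % f[np.argmax(Pxx)])
print("fraction of total power in the single largest bin: %.3f" % (Pxx.max() / Pxx.sum()))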
Abstract:
This paper is devoted to the quantification of the degree of nonlinearity of the relationship between two biological variables when one of the variables is a complex nonstationary oscillatory signal. An example of the situation is the indicial responses of pulmonary blood pressure (P) to step changes of oxygen tension (ΔpO2) in the breathing gas. For a step change of ΔpO2 beginning at time t1, the pulmonary blood pressure is a nonlinear function of time and ΔpO2, which can be written as P(t−t1 | ΔpO2). An effective method does not exist to examine the nonlinear function P(t−t1 | ΔpO2). A systematic approach is proposed here. The definitions of mean trends and oscillations about the means are the keys. With these keys a practical method of calculation is devised. We fit the mean trends of blood pressure with analytic functions of time, whose nonlinearity with respect to the oxygen level is clarified here. The associated oscillations about the mean can be transformed into a Hilbert spectrum. An integration of the square of the Hilbert spectrum over frequency yields a measure of oscillatory energy, which is also a function of time, whose mean trends can be expressed by analytic functions. The degree of nonlinearity of the oscillatory energy with respect to the oxygen level is also clarified here. Theoretical extension of the experimental nonlinear indicial functions to arbitrary history of hypoxia is proposed. Application of the results to tissue remodeling and tissue engineering of blood vessels is discussed.
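The oscillatory-energy measure described above can be illustrated with the analytic-signal (Hilbert) envelope of a detrended signal; the signal and trend below are synthetic placeholders, and the full Hilbert-spectrum construction of the paper is not reproduced.

# Minimal sketch (assumption: a synthetic nonstationary oscillation about a known mean trend;
# |hilbert(oscillation)|^2 gives the instantaneous oscillatory energy, i.e. the frequency
# integral of the squared Hilbert spectrum).
import numpy as np
from scipy.signal import hilbert

t = np.linspace(0.0, 10.0, 4001)
trend = 1.0 - 0.5 * np.exp(-t / 3.0)                        # smooth mean trend (analytic in time)
signal = trend + 0.2 * np.sin(2 * np.pi * (1.0 + 0.1 * t) * t)  # drifting-frequency oscillation

oscillation = signal - trend                                # here the trend is known; in practice it is fitted
analytic = hilbert(oscillation)
energy = np.abs(analytic) ** 2                              # oscillatory energy as a function of time
print("mean oscillatory energy:", energy.mean())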
Abstract:
A novel surrogate model is proposed in lieu of Computational Fluid Dynamics (CFD) solvers, for fast nonlinear aerodynamic and aeroelastic modeling. A nonlinear function is identified on selected interpolation points by a discrete empirical interpolation method (DEIM). The flow field is then reconstructed using a least-squares approximation of the flow modes extracted by proper orthogonal decomposition (POD). The aeroelastic reduced-order model (ROM) is completed by introducing a nonlinear mapping function between displacements and the DEIM points. The proposed model is investigated to predict the aerodynamic forces due to forced motions using a NACA 0012 airfoil undergoing a prescribed pitching oscillation. To investigate aeroelastic problems at transonic conditions, pitch/plunge airfoil and cropped delta wing aeroelastic models are built using linear structural models. The presence of shock waves triggers the appearance of limit cycle oscillations (LCO), which the model is able to predict. For all cases tested, the new ROM shows the ability to replicate the nonlinear aerodynamic forces and structural displacements and to reconstruct the complete flow field with sufficient accuracy at a fraction of the cost of the full-order CFD model.
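The two building blocks named here, POD mode extraction and DEIM point selection, can be sketched compactly; the snapshot data below are random placeholders standing in for sampled flow fields, and the aeroelastic coupling and nonlinear mapping to displacements are not included.

# Minimal sketch (assumption: synthetic snapshots; shows POD via SVD and the standard greedy
# DEIM index-selection algorithm, not the authors' full aeroelastic ROM).
import numpy as np

rng = np.random.default_rng(2)
n, m = 500, 20
snapshots = rng.normal(size=(n, m))            # stand-in for sampled nonlinear flow-field snapshots

# POD: left singular vectors of the snapshot matrix are the flow modes
U, s, _ = np.linalg.svd(snapshots, full_matrices=False)
modes = U[:, :10]                              # keep the most energetic modes

# DEIM: greedily pick interpolation indices from the modes
idx = [int(np.argmax(np.abs(modes[:, 0])))]
for i in range(1, modes.shape[1]):
    c = np.linalg.solve(modes[np.ix_(idx, list(range(i)))], modes[idx, i])
    r = modes[:, i] - modes[:, :i] @ c         # residual of interpolating the next mode
    idx.append(int(np.argmax(np.abs(r))))
print("DEIM interpolation points (row indices):", idx)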
Abstract:
A novel surrogate model is proposed in lieu of computational fluid dynamics (CFD) code for fast nonlinear aerodynamic modeling. First, a nonlinear function is identified on selected interpolation points defined by the discrete empirical interpolation method (DEIM). The flow field is then reconstructed by a least-squares approximation of the flow modes extracted by proper orthogonal decomposition (POD). The proposed model is applied in the prediction of limit cycle oscillations for a plunge/pitch airfoil and a delta wing with linear structural models; the results are validated against a time-accurate CFD-FEM code. The results show the model is able to replicate the aerodynamic forces and flow fields with sufficient accuracy while requiring a fraction of the CFD cost.
Abstract:
This thesis is devoted to the study of linear relationships in symmetric block ciphers. A block cipher is designed so that the ciphertext is produced as a nonlinear function of the plaintext and secret master key. However, linear relationships within the cipher can still exist if the texts and components of the cipher are manipulated in a number of ways, as shown in this thesis. There are four main contributions of this thesis.
The first contribution is the extension of the applicability of integral attacks from word-based to bit-based block ciphers. Integral attacks exploit the linear relationship between texts at intermediate stages of encryption. This relationship can be used to recover subkey bits in a key recovery attack. In principle, integral attacks can be applied to bit-based block ciphers. However, specific tools to define the attack on these ciphers are not available. This problem is addressed in this thesis by introducing a refined set of notations to describe the attack. The bit pattern-based integral attack is successfully demonstrated on reduced-round variants of the block ciphers Noekeon, Present and Serpent.
The second contribution is the discovery of a very small system of equations that describes the LEX-AES stream cipher. LEX-AES is based heavily on the 128-bit-key (16-byte) Advanced Encryption Standard (AES) block cipher. In one instance, the system contains 21 equations and 17 unknown bytes. This is very close to the upper limit for an exhaustive key search, which is 16 bytes. One only needs to acquire 36 bytes of keystream to generate the equations. Therefore, the security of this cipher depends on the difficulty of solving this small system of equations.
The third contribution is the proposal of an alternative method to measure diffusion in the linear transformation of Substitution-Permutation-Network (SPN) block ciphers. Currently, the branch number is widely used for this purpose. It is useful for estimating the possible success of differential and linear attacks on a particular SPN cipher. However, the measure does not give information on the number of input bits that are left unchanged by the transformation when producing the output bits. The new measure introduced in this thesis is intended to complement the current branch number technique. The measure is based on fixed points and simple linear relationships between the input and output words of the linear transformation. The measure represents the average fraction of input words to a linear diffusion transformation that are not effectively changed by the transformation. This measure is applied to the block ciphers AES, ARIA, Serpent and Present. It is shown that except for Serpent, the linear transformations used in the block ciphers examined do not behave as expected for a random linear transformation.
The fourth contribution is the identification of linear paths in the nonlinear round function of the SMS4 block cipher. The SMS4 block cipher is used as a standard in the Chinese Wireless LAN Wired Authentication and Privacy Infrastructure (WAPI) and hence, the round function should exhibit a high level of nonlinearity. However, the findings in this thesis on the existence of linear relationships show that this is not the case. It is shown that in some exceptional cases, the first four rounds of SMS4 are effectively linear. In these cases, the effective number of rounds for SMS4 is reduced by four, from 32 to 28.
The findings raise questions about the security provided by SMS4, and might provide clues on the existence of a flaw in the design of the cipher.
Abstract:
The main goal of this research is to design an efficient compression algorithm for fingerprint images. The wavelet transform technique is the principal tool used to reduce interpixel redundancies and to obtain a parsimonious representation for these images. A specific fixed decomposition structure is designed to be used by the wavelet packet in order to save on the computation, transmission, and storage costs. This decomposition structure is based on analysis of information packing performance of several decompositions, two-dimensional power spectral density, effect of each frequency band on the reconstructed image, and the human visual sensitivities. This fixed structure is found to provide the "most" suitable representation for fingerprints, according to the chosen criteria. Different compression techniques are used for different subbands, based on their observed statistics. The decision is based on the effect of each subband on the reconstructed image according to the mean square criteria as well as the sensitivities in human vision.
To design an efficient quantization algorithm, a precise model for the distribution of the wavelet coefficients is developed. The model is based on the generalized Gaussian distribution. A least squares algorithm on a nonlinear function of the distribution model shape parameter is formulated to estimate the model parameters. A noise shaping bit allocation procedure is then used to assign the bit rate among subbands. To obtain high compression ratios, vector quantization is used. In this work, the lattice vector quantization (LVQ) is chosen because of its superior performance over other types of vector quantizers. The structure of a lattice quantizer is determined by its parameters known as truncation level and scaling factor. In lattice-based compression algorithms reported in the literature the lattice structure is commonly predetermined, leading to a nonoptimized quantization approach. In this research, a new technique for determining the lattice parameters is proposed. In the lattice structure design, no assumption about the lattice parameters is made and no training and multi-quantizing is required. The design is based on minimizing the quantization distortion by adapting to the statistical characteristics of the source in each subimage.
Since LVQ is a multidimensional generalization of uniform quantizers, it produces minimum distortion for inputs with uniform distributions. In order to take advantage of the properties of LVQ and its fast implementation, while considering the i.i.d. nonuniform distribution of wavelet coefficients, the piecewise-uniform pyramid LVQ algorithm is proposed. The proposed algorithm quantizes almost all of the source vectors without the need to project these on the lattice outermost shell, while it properly maintains a small codebook size. It also resolves the wedge region problem commonly encountered with sharply distributed random sources. These represent some of the drawbacks of the algorithm proposed by Barlaud [26]. The proposed algorithm handles all types of lattices, not only the cubic lattices, as opposed to the algorithms developed by Fischer [29] and Jeong [42]. Furthermore, no training and multiquantizing (to determine lattice parameters) is required, as opposed to Powell's algorithm [78]. For coefficients with high-frequency content, the positive-negative mean algorithm is proposed to improve the resolution of reconstructed images.
For coefficients with low-frequency content, a lossless predictive compression scheme is used to preserve the quality of reconstructed images. A method to reduce bit requirements of necessary side information is also introduced. Lossless entropy coding techniques are subsequently used to remove coding redundancy. The algorithms result in high quality reconstructed images with better compression ratios than other available algorithms. To evaluate the proposed algorithms, their objective and subjective performance comparisons with other available techniques are presented. The quality of the reconstructed images is important for a reliable identification. Enhancement and feature extraction on the reconstructed images are also investigated in this research. A structural-based feature extraction algorithm is proposed in which the unique properties of fingerprint textures are used to enhance the images and improve the fidelity of their characteristic features. The ridges are extracted from enhanced grey-level foreground areas based on the local ridge dominant directions. The proposed ridge extraction algorithm properly preserves the natural shape of grey-level ridges as well as precise locations of the features, as opposed to the ridge extraction algorithm in [81]. Furthermore, it is fast and operates only on foreground regions, as opposed to the adaptive floating average thresholding process in [68]. Spurious features are subsequently eliminated using the proposed post-processing scheme.
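The generalized Gaussian modelling step mentioned above can be illustrated with the classical moment-ratio estimator of the shape parameter (a standard alternative to the least-squares formulation used in the thesis); the coefficient data below are synthetic stand-ins for wavelet subband coefficients.

# Minimal sketch (assumption: the moment-ratio estimator for the generalized Gaussian shape
# parameter, applied to synthetic Laplacian data, which is a GGD with shape beta = 1).
import numpy as np
from scipy.special import gamma
from scipy.optimize import brentq

def ggd_ratio(beta):
    # E|x| / sqrt(E[x^2]) for a zero-mean generalized Gaussian with shape parameter beta
    return gamma(2.0 / beta) / np.sqrt(gamma(1.0 / beta) * gamma(3.0 / beta))

rng = np.random.default_rng(3)
coeffs = rng.laplace(scale=1.0, size=100000)       # stand-in for subband wavelet coefficients

r = np.mean(np.abs(coeffs)) / np.sqrt(np.mean(coeffs**2))
beta_hat = brentq(lambda b: ggd_ratio(b) - r, 0.1, 5.0)
print("estimated GGD shape parameter:", beta_hat)  # should be close to 1 for Laplacian data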
Abstract:
Existing algebraic analyses of the ZUC cipher indicate that the cipher should be secure against algebraic attacks. In this paper, we present an alternative algebraic analysis method for the ZUC stream cipher, where a combiner is used to represent the nonlinear function and to derive equations representing the cipher. Using this approach, the initial states of ZUC can be recovered from 2^97 observed words of keystream, with a complexity of 2^282 operations. This method is more successful when applied to a modified version of ZUC, where the number of output words per clock is increased. If the cipher outputs 120 bits of keystream per clock, the attack can succeed with 219 observed keystream bits and 2^47 operations. Therefore, the security of ZUC against algebraic attacks could be significantly reduced if its throughput were to be increased for efficiency.