898 results for NONLINEAR-ANALYSIS
Abstract:
"Research was supported by the United States Air Force through the Air Force Office of Scientific Research, Air Research and Development Command."
Abstract:
Photocopy of original: Berkeley: Structural Engineering Laboratory, University of California, 1974.
Abstract:
This paper investigates the performance analysis of the separation of mutually independent sources in nonlinear models. Nonlinear mappings in which an unsupervised linear mixture is followed by an unknown, invertible nonlinear distortion are found in many signal processing settings. In general, blind separation of sources from their nonlinear mixtures is rather difficult. We propose using a kernel density estimator, combined with an equivariant gradient analysis, to separate sources subject to nonlinear distortion. The parameters of the kernel density estimator are iteratively updated to minimize the output dependence, expressed through a mutual information criterion. The equivariant gradient algorithm has the form of a nonlinear decorrelation, which is used to perform the convergence analysis. Experiments are presented to illustrate these results.
Abstract:
This paper investigates the performance of the EASI algorithm and the proposed EKENS algorithm for linear and nonlinear mixtures. The proposed EKENS algorithm is based on a modified equivariant algorithm and kernel density estimation. The theory and characteristics of both algorithms are discussed for the blind source separation model. The separation structure for nonlinear mixtures consists of a nonlinear stage followed by a linear stage. Simulations with artificial and natural data demonstrate the feasibility and good performance of the proposed EKENS algorithm.
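The equivariant update at the core of EASI can be sketched in a few lines of numpy. This is a minimal illustration, not the EKENS algorithm itself: the kernel-density stage is omitted, and the step size, the nonlinearity g(y) = tanh(y), the mixing matrix and the source distributions are all illustrative assumptions.

```python
import numpy as np

def easi_step(W, x, mu=0.01, g=np.tanh):
    """One EASI-style equivariant (relative-gradient) update.

    x : (n,) observation vector; W : (n, n) separating matrix.
    The relative-gradient form makes the update equivariant: its
    behaviour does not depend on the unknown mixing matrix.
    """
    y = W @ x
    I = np.eye(len(y))
    # Linear part enforces decorrelation; nonlinear part drives independence.
    delta = (np.outer(y, y) - I) + (np.outer(g(y), y) - np.outer(y, g(y)))
    return W - mu * delta @ W

rng = np.random.default_rng(0)
# Two independent non-Gaussian sources, linearly mixed (illustrative setup).
S = rng.uniform(-1, 1, size=(2, 5000)) ** 3
A = np.array([[1.0, 0.6], [0.4, 1.0]])
X = A @ S

W = np.eye(2)
for t in range(X.shape[1]):
    W = easi_step(W, X[:, t])

Y = W @ X
print(np.round(np.corrcoef(Y), 3))  # off-diagonals should shrink toward 0
```

The (y yᵀ − I) term alone performs adaptive whitening; the antisymmetric nonlinear term is what separates sources beyond mere decorrelation.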
Abstract:
We investigate the feasibility of simultaneously suppressing amplification noise and nonlinearity, which represent the most fundamental limiting factors in modern optical communication. To accomplish this task we developed a general design optimisation technique based on the concepts of noise and nonlinearity management. We demonstrate the efficiency of the novel approach by applying it to the design optimisation of transmission lines with periodic dispersion compensation using Raman and hybrid Raman-EDFA amplification. Moreover, we show, using nonlinearity management considerations, that the optimal performance in high bit-rate dispersion-managed fibre systems with hybrid amplification is achieved for a certain amplifier spacing, which differs from the commonly known optimal noise performance corresponding to fully distributed amplification. Complete knowledge of the signal statistics, required for an accurate estimation of the bit error rate, is crucial for modern transmission links with strong inherent nonlinearity. We therefore implemented the advanced multicanonical Monte Carlo (MMC) method, acknowledged for its efficiency in estimating distribution tails. We have accurately computed marginal probability density functions for soliton parameters by numerically modelling the Fokker-Planck equation with the MMC simulation technique. Moreover, applying the powerful MMC method, we have studied the BER penalty caused by deviations from the optimal decision level in systems employing in-line 2R optical regeneration. We have demonstrated that in such systems an analytical linear approximation that better fits the central part of the regenerator's nonlinear transfer function produces a more accurate approximation of the BER and BER penalty.
We also present a statistical analysis of the RZ-DPSK optical signal at a direct detection receiver with Mach-Zehnder interferometer demodulation.
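The BER penalty from a suboptimal decision level mentioned above can be illustrated with a toy two-Gaussian (mark/space) receiver model. This is only a sketch: the actual thesis works with the full non-Gaussian statistics of the regenerated signal, and the means, noise level and thresholds below are illustrative assumptions.

```python
import math

def ber(threshold, mu0=0.0, mu1=1.0, sigma=0.15):
    """BER for equiprobable Gaussian 'space' (mu0) and 'mark' (mu1) levels.

    P(error) = 0.5 * P(x > d | space) + 0.5 * P(x < d | mark),
    with each Gaussian tail written via the complementary error function.
    """
    p01 = 0.5 * math.erfc((threshold - mu0) / (sigma * math.sqrt(2)))  # space read as mark
    p10 = 0.5 * math.erfc((mu1 - threshold) / (sigma * math.sqrt(2)))  # mark read as space
    return 0.5 * (p01 + p10)

optimal = ber(0.5)   # midpoint is optimal for equal-variance Gaussians
offset = ber(0.35)   # deviating from the optimal level incurs a BER penalty
print(optimal, offset)
```

For equal noise variances the BER is minimised at the midpoint between the two levels; any deviation raises one tail probability faster than it lowers the other.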
Abstract:
This thesis presents the results of an investigation into the merits of analysing magnetoencephalographic (MEG) data in the context of dynamical systems theory. MEG is the study both of methods for measuring the minute magnetic flux variations at the scalp resulting from neuro-electric activity in the neocortex, and of the techniques required to process and extract useful information from these measurements. As a result of its unique mode of action - directly measuring neuronal activity via the resulting magnetic field fluctuations - MEG possesses a number of useful qualities which could potentially make it a powerful addition to any brain researcher's arsenal. Unfortunately, MEG research has so far failed to fulfil its early promise, hindered in its progress by a variety of factors. Conventionally, the analysis of MEG has been dominated by the search for activity in certain spectral bands - the so-called alpha, delta, beta, etc. bands commonly referred to in both academic and lay publications. Other efforts have centred on generating optimal fits of "equivalent current dipoles" that best explain the observed field distribution. Many of these approaches carry the implicit assumption that the dynamics which give rise to the observed time series are linear, despite a variety of reasons to suspect that nonlinearity might be present in MEG recordings. By using methods that allow for nonlinear dynamics, the research described in this thesis avoids these restrictive linearity assumptions. A crucial concept underpinning this project is the belief that MEG recordings are mere observations of the evolution of the true underlying state, which is unobservable and is assumed to reflect some abstract cognitive brain state. Further, we maintain that it is unreasonable to expect these processes to be adequately described in the traditional way: as a linear sum of a large number of frequency generators.
One of the main objectives of this thesis is to show that a much more effective and powerful analysis of MEG can be achieved if one assumes the presence of both linear and nonlinear characteristics from the outset. Our position is that the combined action of a relatively small number of these generators, coupled with external and dynamic noise sources, is more than sufficient to account for the complexity observed in MEG recordings. Another problem that has plagued MEG researchers is the extremely low signal-to-noise ratio obtained. As the magnetic flux variations resulting from actual cortical processes can be extremely minute, the measuring devices used in MEG are necessarily extremely sensitive. The unfortunate side-effect is that even commonplace phenomena such as the earth's geomagnetic field can easily swamp the signals of interest. This problem is commonly addressed by averaging over a large number of recordings. However, this has a number of notable drawbacks. In particular, it is difficult to synchronise high-frequency activity which might be of interest, and these signals are often cancelled out by the averaging process. Other problems are the high cost and low portability of state-of-the-art multichannel machines, with the result that the use of MEG has hitherto been restricted to large institutions able to afford the costs associated with the procurement and maintenance of these machines. In this project, we seek to address these issues by working almost exclusively with single-channel, unaveraged MEG data. We demonstrate the applicability of a variety of methods originating in the fields of signal processing, dynamical systems, information theory and neural networks to the analysis of MEG data.
It is noteworthy that while modern signal processing tools such as independent component analysis, topographic maps and latent variable modelling have enjoyed extensive success in a variety of research areas from financial time series modelling to the analysis of sun spot activity, their use in MEG analysis has thus far been extremely limited. It is hoped that this work will help to remedy this oversight.
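A dynamical-systems analysis of a single-channel recording typically begins by reconstructing a state space from the scalar time series via a time-delay embedding. The sketch below shows the idea on a synthetic signal; the delay tau, the embedding dimension m, and the surrogate signal are illustrative choices, not values from the thesis.

```python
import numpy as np

def delay_embed(x, m=3, tau=5):
    """Time-delay embedding: map a scalar series x into m-dimensional
    state vectors [x(t), x(t+tau), ..., x(t+(m-1)*tau)] (Takens-style),
    so that geometry in the reconstructed space reflects the dynamics."""
    n = len(x) - (m - 1) * tau
    return np.column_stack([x[i * tau : i * tau + n] for i in range(m)])

# Surrogate single-channel signal: a noisy oscillation (illustrative only).
t = np.arange(2000)
x = np.sin(0.07 * t) + 0.1 * np.random.default_rng(1).standard_normal(len(t))
states = delay_embed(x, m=3, tau=5)
print(states.shape)  # (1990, 3)
```

Each row of `states` is one reconstructed state vector; nonlinear measures (correlation dimension, Lyapunov exponents, nonlinear prediction error) are then computed on this point cloud rather than on the raw series.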
Abstract:
A nonlinear dynamic model of microbial growth is established based on the theories of the diffusion response of thermodynamics and the chemotactic response of biology. In addition to the two traditional variables, i.e. the density of bacteria and the concentration of attractant, the pH value, a crucial factor influencing microbial growth, is also considered in this model. The pH effect on microbial growth is taken as a Gaussian function G0·exp(−(f − fc)²/G1), where G0, G1 and fc are constants, f represents the pH value and fc represents the critical pH value best suited for microbial growth. To study the effects of the reproduction rate of the bacteria and the pH value on the stability of the system, three parameters a, G0 and G1 are studied in detail, where a denotes the reproduction rate of the bacteria, G0 denotes the intensity of the pH effect on microbial growth and G1 denotes the bacterial adaptability to the pH value. When the effect of the pH value of the solution in which the microorganisms live is ignored in the governing equations of the model, the microbial system is more stable with larger a. When the effect of bacterial chemotaxis is ignored, the microbial system is more stable with larger G1 and more unstable with larger G0 for f0 > fc; however, the stability of the microbial system is almost unaffected by variations of G0 and G1, and it is always stable for f0 < fc under the conditions assumed in this paper. In the whole system model, the system is more unstable with larger G1 and more stable with larger G0 for f0 < fc, and more stable with larger G1 and more unstable with larger G0 for f0 > fc. Moreover, the system is more unstable with larger a for f0 < fc, while its stability is almost unaffected by a for f0 > fc. The results obtained in this study provide a biophysical insight into the growth and stability behavior of microorganisms.
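The Gaussian pH factor described above is simple enough to sketch directly. The parameter values below (G0, G1 and fc ≈ 7) are illustrative placeholders; the paper treats them as free model parameters.

```python
import numpy as np

def ph_factor(f, G0=1.0, G1=0.5, fc=7.0):
    """Gaussian pH modulation of microbial growth: G0*exp(-(f-fc)^2/G1).

    G0 sets the impact intensity of pH on growth, G1 the bacterial
    adaptability (the width of the tolerated pH band), and fc the
    critical pH value best suited for growth. All values here are
    illustrative, not taken from the paper."""
    return G0 * np.exp(-((f - fc) ** 2) / G1)

ph = np.linspace(4, 10, 7)
print(np.round(ph_factor(ph), 4))
```

The factor peaks at G0 when f = fc and decays symmetrically on either side; larger G1 flattens the curve, modelling bacteria that tolerate a wider pH range.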
Abstract:
The standard reference clinical score quantifying average Parkinson's disease (PD) symptom severity is the Unified Parkinson's Disease Rating Scale (UPDRS). At present, UPDRS is determined by the subjective clinical evaluation of the patient's ability to adequately cope with a range of tasks. In this study, we extend recent findings that UPDRS can be objectively assessed to clinically useful accuracy using simple, self-administered speech tests, without requiring the patient's physical presence in the clinic. We apply a wide range of known speech signal processing algorithms to a large database (approx. 6000 recordings from 42 PD patients, recruited to a six-month, multi-centre trial) and propose a number of novel, nonlinear signal processing algorithms which reveal pathological characteristics in PD more accurately than existing approaches. Robust feature selection algorithms select the optimal subset of these algorithms, which is fed into non-parametric regression and classification algorithms, mapping the signal processing algorithm outputs to UPDRS. We demonstrate rapid, accurate replication of the UPDRS assessment with clinically useful accuracy (about 2 UPDRS points difference from the clinicians' estimates, p < 0.001). This study supports the viability of frequent, remote, cost-effective, objective, accurate UPDRS telemonitoring based on self-administered speech tests. This technology could facilitate large-scale clinical trials into novel PD treatments.
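The final stage of the pipeline above maps signal-processing features to UPDRS via non-parametric regression. The sketch below illustrates that stage with a plain k-nearest-neighbour regressor on synthetic data; the features, target function and k are all hypothetical stand-ins, and the study itself uses robust feature selection and more sophisticated learners.

```python
import numpy as np

def knn_regress(X_train, y_train, X_query, k=5):
    """Non-parametric regression: predict the mean target of the k
    nearest training points (Euclidean distance in feature space)."""
    preds = []
    for q in X_query:
        d = np.linalg.norm(X_train - q, axis=1)
        idx = np.argsort(d)[:k]
        preds.append(y_train[idx].mean())
    return np.array(preds)

rng = np.random.default_rng(42)
# Synthetic stand-ins: two 'speech features' mapped to a smooth severity score.
X = rng.uniform(0, 1, size=(500, 2))
y = 50 * X[:, 0] + 10 * X[:, 1]          # hypothetical UPDRS-like target
Xq = rng.uniform(0.1, 0.9, size=(20, 2))
yq = 50 * Xq[:, 0] + 10 * Xq[:, 1]
err = np.abs(knn_regress(X, y, Xq) - yq).mean()
print(round(err, 2))  # mean absolute error of the nonparametric fit
```

The appeal of non-parametric regression here is that no functional form is imposed on the feature-to-UPDRS mapping, which is important when the relationship between voice pathology measures and symptom severity is unknown.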
Abstract:
One major drawback of coherent optical orthogonal frequency-division multiplexing (CO-OFDM) that hitherto remains unsolved is its vulnerability to nonlinear fiber effects due to its high peak-to-average power ratio. Several digital signal processing techniques have been investigated for the compensation of fiber nonlinearities, e.g., digital back-propagation, nonlinear pre- and post-compensation and nonlinear equalizers (NLEs) based on the inverse Volterra-series transfer function (IVSTF). Alternatively, nonlinearities can be mitigated using nonlinear decision classifiers such as artificial neural networks (ANNs) based on a multilayer perceptron. In this paper, an ANN-NLE is presented for a 16QAM CO-OFDM system. The capability of the proposed approach to compensate the fiber nonlinearities is numerically demonstrated for up to 100 Gb/s and over 1000 km, and compared to the benchmark IVSTF-NLE. Results show that, in terms of Q-factor, for 100 Gb/s at 1000 km of transmission, the ANN-NLE outperforms linear equalization and the IVSTF-NLE by 3.2 dB and 1 dB, respectively.
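The principle behind an ANN-based nonlinear equalizer can be illustrated with a tiny one-hidden-layer perceptron trained by gradient descent to invert a memoryless cubic distortion. This is a toy stand-in: the paper's ANN-NLE operates on received 16QAM CO-OFDM symbols, and the distortion model, network size and learning rate below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(7)
# Toy distortion: y = x + 0.3*x^3 (memoryless stand-in for fiber nonlinearity).
x = rng.uniform(-1, 1, size=(2000, 1))
y = x + 0.3 * x ** 3

# One-hidden-layer perceptron learning the inverse map y -> x.
W1 = rng.normal(0, 0.5, size=(1, 16)); b1 = np.zeros(16)
W2 = rng.normal(0, 0.5, size=(16, 1)); b2 = np.zeros(1)
lr = 0.05
for epoch in range(300):
    h = np.tanh(y @ W1 + b1)          # hidden layer
    out = h @ W2 + b2                 # linear output layer
    err = out - x
    # Backpropagation of the mean-squared error.
    gW2 = h.T @ err / len(x); gb2 = err.mean(0)
    dh = (err @ W2.T) * (1 - h ** 2)  # tanh'(z) = 1 - tanh(z)^2
    gW1 = y.T @ dh / len(x); gb1 = dh.mean(0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(((np.tanh(y @ W1 + b1) @ W2 + b2 - x) ** 2).mean())
print(mse)
```

A linear equalizer can only rescale the received signal; the tanh hidden layer is what lets the network approximate the nonlinear inverse, which is the same advantage the ANN-NLE holds over linear equalization in the paper.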
Abstract:
Small errors proved catastrophic. Our purpose is to remark that a very small cause which escapes our notice can determine a considerable effect that we cannot fail to see, and then we say that the effect is due to chance. Small differences in the initial conditions produce very great ones in the final phenomena; a small error in the former produces an enormous error in the latter. When dealing with any kind of electrical device specification, it is important to note that a pair of test conditions defines a test: the forcing function and the limit. Forcing functions define the external operating constraints placed upon the device under test; the actual test defines how well the device responds to these constraints. Forcing inputs to threshold, for example, represents the most difficult testing because it puts those inputs as close as possible to the actual switching critical points and guarantees that the device will meet the input-output specifications. Prediction becomes impossible by the classical analytical analysis bounded by Newton and Euclid. We have found that nonlinear dynamic characteristics are the natural state of all circuits and devices, and that opportunities exist for effective error detection in a nonlinear dynamics and chaos environment. Nowadays a set of linear limits is established around every aspect of digital and analog circuits, outside of which devices are considered bad after failing the test. Deterministic chaos in circuits is a fact, not a possibility, as revived by our Ph.D. research. In practice, for standard linear informational methodologies, this chaotic data product is usually undesirable, and we are educated to be interested in obtaining a more regular stream of output data. This Ph.D. research explored the possibility of taking the foundation of a very well known simulation and modeling methodology and introducing nonlinear dynamics and chaos precepts to produce a new error detector instrument able to put together streams of data scattered in space and time, thereby mastering deterministic chaos and changing the bad reputation of chaotic data as a potential risk for practical system status determination.
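The sensitive dependence on initial conditions invoked above (a tiny cause producing a large visible effect) is classically demonstrated with the logistic map. The sketch below is illustrative, not the circuit model from the dissertation; r = 4 and the initial conditions are conventional textbook choices.

```python
# Sensitive dependence on initial conditions in the logistic map
# x_{n+1} = r * x_n * (1 - x_n), with r = 4 (fully chaotic regime).
def logistic_orbit(x0, r=4.0, steps=60):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.2)
b = logistic_orbit(0.2 + 1e-10)   # perturb the tenth decimal place
gap = [abs(x - y) for x, y in zip(a, b)]
print(max(gap[-10:]))  # the initial error grows by many orders of magnitude
```

After a few dozen iterations the two orbits are completely decorrelated even though they started indistinguishably close, which is exactly why classical point-prediction fails and why chaos-aware methods are needed for error detection.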
Abstract:
The present work consists of a detailed numerical analysis of a 4-way joint made of a precast column and two partially precast beams. The structure was previously built and experimentally analyzed through a series of cyclic loads at the Laboratory of Tests on Structures (Laboratorio di Prove su Strutture, La.P.S.) of the University of Bologna. The aim of this work is to build a 3D model of the joint and then apply the techniques of nonlinear finite element analysis (FEA) to computationally reproduce the behavior of the structure under cyclic loads. Once the model has been calibrated to correctly emulate the joint, it can yield new insights useful for understanding and explaining the physical phenomena observed in the laboratory and for describing the properties of the structure, such as the cracking patterns, the force-displacement and moment-curvature relations, and the deformations and displacements of the various elements composing the joint.
Abstract:
In this work the split-field finite-difference time-domain method (SF-FDTD) is extended to the analysis of two-dimensionally periodic structures with third-order nonlinear media. The accuracy of the method is verified by comparison with the nonlinear Fourier Modal Method (FMM). Once the formalism has been validated, examples of one- and two-dimensional nonlinear gratings are analysed. For the 2D case, the shifting of resonant waveguides is corroborated. Not only is the scalar Kerr effect considered; the tensorial nature of the third-order nonlinear susceptibility is also included. The use of nonlinear materials in this kind of device makes it possible to design tunable components such as variable band filters. However, the third-order nonlinear susceptibility is usually small, and high intensities are needed to trigger the nonlinear effect. Here, a one-dimensional CBG is analysed in both the linear and nonlinear regimes, and the shifting of the resonance peaks is obtained numerically for both TE and TM polarisations. Applying a numerical method based on the finite-difference time-domain method makes it possible to study this problem in the time domain, so bistability curves are also computed numerically. These curves show how the nonlinear effect modifies the properties of the structure as a function of a variable input pump field. When the nonlinear behaviour is taken into account, the estimation of the electric field components becomes more challenging. In this paper, we therefore present a set of acceleration strategies based on parallel software and hardware solutions.
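The scalar Kerr effect underlying the resonance shifting discussed above amounts to a refractive index that grows linearly with optical intensity. A minimal sketch follows; the n0 and n2 values are typical silica-like textbook figures used purely for illustration, not parameters from the paper.

```python
# Scalar Kerr effect: the refractive index depends on intensity,
# n(I) = n0 + n2 * I, which detunes the resonance of a grating or
# resonant waveguide as the pump intensity is raised.
def kerr_index(intensity, n0=1.45, n2=2.6e-20):
    """n0: linear refractive index (silica-like); n2 [m^2/W]: Kerr
    coefficient. Both are illustrative textbook values."""
    return n0 + n2 * intensity

low = kerr_index(1e12)    # modest intensity: negligible index change
high = kerr_index(1e16)   # high pump intensity needed to trigger the effect
print(low, high)
```

Because n2 is tiny, the index change (and hence the resonance shift) only becomes appreciable at very high intensities, which is exactly why the abstract notes that high intensities are needed to trigger the nonlinear effect.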