898 results for Nonlinear analysis
Abstract:
The phenomenon of patterned pH distribution near the cell membrane of the alga Chara corallina upon illumination is well known. In this paper, we develop a mathematical model, based on a detailed kinetic analysis of proton fluxes across the cell membrane, to explain this phenomenon. The model yields two coupled nonlinear partial differential equations which describe the spatial dynamics of proton concentration changes and transmembrane potential generation. The experimentally observed pH pattern formation, its period and amplitude of oscillation, and its hysteresis in response to changing illumination are all reproduced by our model. A comparison of experimental results and the predictions of our theory is made. Finally, a mechanism for pattern formation in Chara corallina is proposed.
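The abstract does not spell out the model's kinetic terms, but the structure it describes, two coupled nonlinear reaction-diffusion equations for proton concentration and transmembrane potential, can be sketched numerically. Below is a minimal illustration with hypothetical FitzHugh-Nagumo-like kinetics f and g standing in for the paper's proton-flux kinetics; all parameter values are assumptions.

```python
import numpy as np

# Minimal sketch: two coupled nonlinear reaction-diffusion equations on a 1-D
# membrane, integrated with explicit Euler finite differences. The kinetic
# terms f and g below are hypothetical stand-ins (FitzHugh-Nagumo-like); the
# paper's actual proton-flux kinetics are not given in the abstract.

L, N, T, dt = 100.0, 200, 500.0, 0.01
dx = L / N
Dh, Dv = 1.0, 10.0           # diffusivities of proton concentration h, potential v

def f(h, v):                 # hypothetical proton-flux kinetics
    return h - h**3 - v

def g(h, v):                 # hypothetical potential dynamics
    return 0.05 * (h - 0.5 * v)

rng = np.random.default_rng(0)
h = 0.1 * rng.standard_normal(N)     # small random perturbation around rest
v = np.zeros(N)

def laplacian(u):            # periodic boundary conditions
    return (np.roll(u, 1) - 2 * u + np.roll(u, -1)) / dx**2

for _ in range(int(T / dt)):
    h += dt * (Dh * laplacian(h) + f(h, v))
    v += dt * (Dv * laplacian(v) + g(h, v))

print("final pattern amplitude:", h.max() - h.min())
```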
Abstract:
The ability to predict the properties of magnetic materials in a device is essential to ensuring correct operation and optimization of the design, as well as the device behavior over a wide range of input frequencies. Typically, the development and simulation of wide-bandwidth models require detailed, physics-based simulations that consume significant computational resources. Balancing the trade-offs between model computational overhead and accuracy can be cumbersome, especially when the nonlinear effects of saturation and hysteresis are included in the model. This study focuses on the development of a system for analyzing magnetic devices in cases where model accuracy and computational intensity must be carefully and easily balanced by the engineer. A method for adjusting model complexity and the corresponding level of detail while incorporating the nonlinear effects of hysteresis is presented that builds upon recent work in loss analysis and magnetic equivalent circuit (MEC) modeling. The approach uses MEC models in conjunction with linearization and model-order reduction techniques to process magnetic devices based on geometry and core type. The validity of steady-state permeability approximations is also discussed.
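For readers unfamiliar with MEC modeling, the core idea is to treat flux paths as a circuit of reluctances driven by magnetomotive-force sources. A minimal sketch for a gapped core follows; the geometry, material values, and linear (non-hysteretic) permeability are illustrative assumptions, not the study's models.

```python
import numpy as np

# Minimal MEC sketch: a gapped C-core driven by an N-turn winding, modeled as
# a loop of reluctances (core leg + air gap) with an MMF source. Geometry and
# material values are illustrative; saturation and hysteresis are ignored here.

mu0 = 4e-7 * np.pi
mur = 5000.0                 # linear core relative permeability
A   = 1e-4                   # core cross-section, m^2
l_core, l_gap = 0.20, 1e-3   # magnetic path lengths, m
N_turns, I = 100, 2.0        # winding turns and current

R_core = l_core / (mur * mu0 * A)   # core reluctance, A-turns/Wb
R_gap  = l_gap  / (mu0 * A)         # air-gap reluctance dominates
mmf    = N_turns * I                # magnetomotive force, A-turns

flux  = mmf / (R_core + R_gap)      # single-loop "magnetic Ohm's law"
B     = flux / A
L_est = N_turns * flux / I          # inductance estimate from flux linkage

print(f"flux = {flux:.3e} Wb, B = {B:.3f} T, L = {L_est*1e3:.2f} mH")
```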
Abstract:
Microsecond-long Molecular Dynamics (MD) trajectories of biomolecular processes are now possible due to advances in computer technology. Soon, trajectories long enough to probe dynamics over many milliseconds will become available. Since these timescales match the physiological timescales over which many small proteins fold, all-atom MD simulations of protein folding are now becoming popular. To distill features of such large folding trajectories, we must develop methods that both compress trajectory data to enable visualization and lend themselves to further analysis, such as finding collective coordinates and reducing the dynamics. Conventionally, clustering has been the most popular MD trajectory analysis technique, followed by principal component analysis (PCA). Simple clustering used in MD trajectory analysis suffers from serious drawbacks, namely, (i) it is not data driven, (ii) it is unstable to noise and changes in cutoff parameters, and (iii) since it does not take into account interrelationships among data points, the separation of data into clusters can often be artificial. Usually, partitions generated by clustering techniques are validated visually, but such validation is not possible for MD trajectories of protein folding, as the underlying structural transitions are not well understood. Rigorous cluster validation techniques may be adapted, but it is more crucial to reduce the dimensions in which MD trajectories reside while still preserving their salient features. PCA has often been used for dimension reduction, and while it is computationally inexpensive, being a linear method it does not achieve good data compression. In this thesis, I propose a different method, a nonmetric multidimensional scaling (nMDS) technique, which achieves superior data compression by virtue of being nonlinear, and which also provides clear insight into the structural processes underlying MD trajectories. I illustrate the capabilities of nMDS by analyzing three complete villin headpiece folding trajectories and six norleucine mutant (NLE) folding trajectories simulated by Freddolino and Schulten [1]. Using these trajectories, I compare nMDS, PCA and clustering to demonstrate the superiority of nMDS. The three villin headpiece trajectories showed great structural heterogeneity. Apart from a few trivial features, such as the early formation of secondary structure, no commonalities between trajectories were found. No units of residues or atoms were found moving in concert across the trajectories. A flipping transition, corresponding to the flipping of helix 1 relative to the plane formed by helices 2 and 3, was observed towards the end of the folding process in all trajectories, when nearly all native contacts had been formed. However, the transition occurred through a different series of steps in each trajectory, indicating that it may not be a common transition in villin folding. All trajectories showed competition between local structure formation/hydrophobic collapse and global structure formation. Our analysis of the NLE trajectories confirms the notion that a tight hydrophobic core inhibits correct 3-D rearrangement. Only one of the six NLE trajectories folded, and it showed no flipping transition. All the other trajectories became trapped in hydrophobically collapsed states. The NLE residues were found to be buried deep in the core, compared with the corresponding lysines in the villin headpiece, making the core tighter and harder to undo for 3-D rearrangement. Our results suggest that NLE may not be the fast folder that experiments suggest. The tightness of the hydrophobic core may be a very important factor in the folding of larger proteins. It is likely that chaperones like GroEL act to undo the tight hydrophobic core of proteins, after most secondary structure elements have formed, so that global rearrangement is easier. I conclude by presenting facts about chaperone-protein complexes and propose further directions for the study of protein folding.
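The thesis' own nMDS implementation is not described in the abstract, but the general workflow, pairwise inter-frame dissimilarities followed by a rank-preserving low-dimensional embedding, can be sketched with scikit-learn's nonmetric MDS. The trajectory data below are synthetic placeholders.

```python
import numpy as np
from sklearn.manifold import MDS

# Sketch of nonmetric multidimensional scaling (nMDS) for an MD trajectory:
# embed frames into 2-D so that the rank order of pairwise distances (e.g.,
# inter-frame RMSD) is preserved. Random coordinates stand in for real
# trajectory frames; the thesis' own nMDS implementation may differ.

rng = np.random.default_rng(0)
frames = rng.standard_normal((300, 60))      # 300 frames x 60 pseudo-coordinates

# Pairwise Euclidean distances as a stand-in for inter-frame RMSD.
diff = frames[:, None, :] - frames[None, :, :]
D = np.sqrt((diff**2).sum(-1))

embedding = MDS(n_components=2, metric=False, dissimilarity="precomputed",
                random_state=0)
X = embedding.fit_transform(D)
print("stress:", embedding.stress_, "embedded shape:", X.shape)
```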
Abstract:
This thesis focuses on digital equalization of nonlinear fiber impairments for coherent optical transmission systems. Building on well-known physical models of signal propagation in single-mode optical fibers, novel nonlinear equalization techniques are proposed, numerically assessed, and experimentally demonstrated. The structure of the proposed algorithms is strongly driven by optimization of the performance-versus-complexity tradeoff, envisioning near-future practical application in commercial real-time transceivers. The work initially focuses on the mitigation of intra-channel nonlinear impairments, relying on the concept of digital backpropagation (DBP) associated with Volterra-based filtering. After a comprehensive analysis of the third-order Volterra kernel, a set of critical simplifications is identified, culminating in the development of reduced-complexity nonlinear equalization algorithms formulated in both the time and frequency domains. The implementation complexity of the proposed techniques is analytically described in terms of computational effort and processing latency, by determining the number of real multiplications per processed sample and the number of serial multiplications, respectively. The equalization performance is numerically and experimentally assessed through bit error rate (BER) measurements. Finally, the problem of inter-channel nonlinear compensation is addressed within the context of 400 Gb/s (400G) superchannels for long-haul and ultra-long-haul transmission. Different superchannel configurations and nonlinear equalization strategies are experimentally assessed, demonstrating that inter-subcarrier nonlinear equalization can provide an enhanced signal reach while requiring only marginal added complexity.
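As background, DBP conceptually propagates the received field through a virtual fiber with sign-inverted parameters, usually via the split-step Fourier method. A minimal single-channel sketch follows; parameter values are typical textbook numbers rather than those of the proposed reduced-complexity algorithms, and the signs depend on the chosen form of the nonlinear Schrödinger equation.

```python
import numpy as np

# Background sketch of digital backpropagation (DBP): run the received field
# through a "virtual fiber" with sign-inverted dispersion, Kerr nonlinearity
# and loss, via the symmetric split-step Fourier method. One step per span;
# all parameter values are illustrative textbook numbers.

n, fs = 4096, 64e9                       # samples, sampling rate (Hz)
beta2 = -21.7e-27                        # GVD, s^2/m (typical SMF)
gamma = 1.3e-3                           # Kerr coefficient, 1/(W*m)
alpha = 0.2e-3 * np.log(10) / 10         # power loss, Np/m (0.2 dB/km)
span_m, n_spans = 80e3, 10
h = span_m                               # one DBP step per span

omega = 2 * np.pi * np.fft.fftfreq(n, d=1 / fs)

def half_linear(E):
    # undo dispersion over half a step (sign-inverted linear operator)
    return np.fft.ifft(np.fft.fft(E) * np.exp(-0.5j * beta2 * omega**2 * (h / 2)))

def dbp(E):
    for _ in range(n_spans):
        E = half_linear(E) * np.exp(alpha * h / 4)      # also undo half the loss
        E = E * np.exp(-1j * gamma * np.abs(E)**2 * h)  # inverse Kerr phase
        E = half_linear(E) * np.exp(alpha * h / 4)
    return E

rng = np.random.default_rng(0)
rx = (rng.standard_normal(n) + 1j * rng.standard_normal(n)) * np.sqrt(0.5e-3)
print("output mean power (W):", np.mean(np.abs(dbp(rx))**2))
```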
Abstract:
We consider a periodic problem driven by the scalar $p$-Laplacian with a jumping (asymmetric) reaction. We prove two multiplicity theorems. The first concerns the nonlinear problem ($1 < p < \infty$) [...] Our methods of analysis are variational and Morse theoretic.
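For reference, the periodic scalar $p$-Laplacian problem of the type considered typically takes the following form; the precise hypotheses on the reaction $f$ are given in the paper.

```latex
% A typical formulation; the precise hypotheses on f are in the paper.
\begin{equation*}
\begin{cases}
-\bigl(|u'(t)|^{p-2}\,u'(t)\bigr)' = f\bigl(t,u(t)\bigr) & \text{for a.a. } t\in[0,b],\\[2pt]
u(0)=u(b),\quad u'(0)=u'(b), & 1<p<\infty.
\end{cases}
\end{equation*}
```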
Abstract:
The flow rates of the drying and nebulizing gases, the heat block and desolvation line temperatures, and the interface voltage are electrospray ionization parameters that may enhance the sensitivity of the mass spectrometer. The conditions giving higher sensitivity for 13 pharmaceuticals were explored. First, a Plackett-Burman design was implemented to screen significant factors, and it was concluded that interface voltage and nebulizing gas flow were the only factors that influence the intensity signal for all pharmaceuticals. This fractional factorial design was then projected onto a full 2^2 factorial design with center points. The lack-of-fit test proved to be significant. Then, a central composite face-centered design was conducted. Finally, stepwise multiple linear regression and subsequent optimization were carried out. Two main drug clusters were found concerning the signal intensities of all runs of the augmented factorial design. p-Aminophenol, salicylic acid, and nimesulide constitute one cluster, as a result of showing much higher sensitivity than the remaining drugs. The other cluster is more homogeneous, with some sub-clusters comprising one pharmaceutical and its respective metabolite. It was observed that the instrumental signal increased when both significant factors increased, with the maximum signal occurring when both codified factors are set at level +1. It was also found that, for most of the pharmaceuticals, interface voltage influences the intensity of the instrument more than the nebulizing gas flow rate. The only exceptions are nimesulide, where the relative importance of the factors is reversed, and salicylic acid, where both factors equally influence the instrumental signal.
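The screening-then-optimization workflow can be illustrated at the 2^2 full factorial stage: fit a first-order model with interaction to the coded factors, and compare the center points against the factorial mean as a crude curvature check (the study's formal lack-of-fit test found significant curvature). The response values below are made up for illustration, not the study's measurements.

```python
import numpy as np

# Sketch of a 2^2 full factorial with center points and a first-order
# response-surface fit by least squares. Factors: x1 = interface voltage,
# x2 = nebulizing gas flow (coded -1/+1). Responses are made-up numbers.

# coded design matrix: 4 factorial runs + 3 center points
x1 = np.array([-1, +1, -1, +1, 0, 0, 0], dtype=float)
x2 = np.array([-1, -1, +1, +1, 0, 0, 0], dtype=float)
y  = np.array([10.2, 15.1, 12.0, 18.3, 13.6, 13.9, 13.4])  # fake intensities

# model: y = b0 + b1*x1 + b2*x2 + b12*x1*x2
X = np.column_stack([np.ones_like(x1), x1, x2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
b0, b1, b2, b12 = coef
print(f"b0={b0:.2f}  b1={b1:.2f}  b2={b2:.2f}  b12={b12:.2f}")

# crude curvature check: do the center points sit near the factorial mean?
print("factorial mean:", y[:4].mean(), " center mean:", y[4:].mean())
```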
Abstract:
Wood is considered an ideal solution for floor and roof construction, due to its mechanical and thermal properties and its acoustic behavior. Such constructions provide good sound absorption and heat insulation and have relevant architectural characteristics. They are used in many civil applications: concert and conference halls, auditoriums, ceilings, and walls. However, the high vulnerability of wooden elements under fire conditions requires an accurate evaluation of their structural behaviour. The main objective of this work is to present a numerical model to assess the fire resistance of wooden cellular slabs with different perforations. The thermal behaviour of the wooden slabs is also compared for different insulation materials, of different sizes, inside the cavities. A transient thermal analysis with nonlinear material behaviour is solved using the ANSYS® program. This study makes it possible to verify the fire resistance, the temperature evolution, and the char layer throughout a wooden cellular slab with perforations, considering the insulation effect inside the cavities.
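The kind of transient nonlinear thermal computation involved can be illustrated in one dimension: explicit finite differences through a slab thickness with temperature-dependent conductivity and the usual 300 °C char-front criterion for wood. The property values are rough illustrative numbers; the study's actual ANSYS model is multi-dimensional, with perforations, cavities, and insulation.

```python
import numpy as np

# Minimal 1-D analogue of a transient nonlinear thermal analysis: explicit
# finite differences through a wood slab thickness with a temperature-
# dependent conductivity k(T). Property values are illustrative only.

Lz, N = 0.10, 50                    # slab thickness (m), nodes
dz = Lz / (N - 1)
rho, cp = 450.0, 1500.0             # density (kg/m^3), specific heat (J/kg.K)

def k(T):                           # conductivity rising with temperature
    return 0.12 + 2e-4 * (T - 20.0) # W/m.K, illustrative

T = np.full(N, 20.0)                # initial temperature, deg C
T_fire = 800.0                      # fire-exposed face
dt = 0.1                            # s; small enough for explicit stability

for step in range(int(1800 / dt)):  # 30 minutes of exposure
    T[0] = T_fire                   # Dirichlet on the exposed face
    T[-1] = T[-2]                   # insulated (zero-flux) unexposed face
    km = k(0.5 * (T[:-1] + T[1:]))  # interface conductivities
    q = -km * np.diff(T) / dz       # heat flux between nodes
    T[1:-1] -= dt / (rho * cp * dz) * (q[1:] - q[:-1])

char_front = np.argmax(T < 300.0) * dz   # crude 300 deg C char criterion
print(f"char depth after 30 min ~ {char_front*1e3:.1f} mm")
```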
Abstract:
We find approximations to travelling breather solutions of the one-dimensional Fermi-Pasta-Ulam (FPU) lattice. Both bright breather and dark breather solutions are found. We find that the existence of localised (bright) solutions depends upon the coefficients of the cubic and quartic terms of the potential energy, generalising an earlier inequality derived by James [C. R. Acad. Sci. Paris 332, 581 (2001)]. We use the method of multiple scales to reduce the equations of motion for the lattice to a nonlinear Schrödinger equation at leading order and hence construct an asymptotic form for the breather. We show that in the absence of a cubic potential energy term, the lattice supports combined breathing-kink waveforms. The amplitude of breathing-kinks can be arbitrarily small, as opposed to traditional monotone kinks, which have a nonzero minimum amplitude in such systems. We also present numerical simulations of the lattice, verifying the shape and velocity of the travelling waveforms and confirming the long-lived nature of all such modes.
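The lattice itself is straightforward to integrate directly. A sketch with velocity Verlet follows, using an FPU interaction potential V(x) = x^2/2 + a x^3/3 + b x^4/4 and a small sech-modulated carrier as a breather-like initial condition; the parameters and initial data are illustrative, not the paper's.

```python
import numpy as np

# Sketch of an FPU lattice with cubic (a) and quartic (b) interaction terms,
# integrated with velocity Verlet. The initial condition is a small-amplitude
# sech-modulated carrier, mimicking an NLS-type breather ansatz; the paper's
# exact parameters and asymptotic analysis are not reproduced here.

N, a, b = 512, 0.0, 1.0           # a = 0: the regime supporting breathing-kinks
dt, steps = 0.05, 40000

def Vp(x):                        # V'(x) for V = x^2/2 + a x^3/3 + b x^4/4
    return x + a * x**2 + b * x**3

def accel(q):
    d = np.diff(q, append=q[:1])  # bond stretches q_{n+1} - q_n, periodic
    f = Vp(d)
    return f - np.roll(f, 1)      # V'(right bond) - V'(left bond)

n = np.arange(N)
k, eps = 2 * np.pi * 20 / N, 0.05
q = eps / np.cosh(eps * (n - N / 2)) * np.cos(k * n)   # modulated carrier
p = np.zeros(N)

acc = accel(q)
for _ in range(steps):            # velocity Verlet
    p += 0.5 * dt * acc
    q += dt * p
    acc = accel(q)
    p += 0.5 * dt * acc

print("max |q| after run:", np.abs(q).max())
```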
Abstract:
We investigate the structure of strongly nonlinear Rayleigh–Bénard convection cells in the asymptotic limit of large Rayleigh number and fixed, moderate Prandtl number. Unlike the flows analyzed in prior theoretical studies of infinite Prandtl number convection, our cellular solutions exhibit dynamically inviscid constant-vorticity cores. By solving an integral equation for the cell-edge temperature distribution, we are able to predict, as a function of cell aspect ratio, the value of the core vorticity, details of the flow within the thin boundary layers and rising/falling plumes adjacent to the edges of the convection cell, and, in particular, the bulk heat flux through the layer. The results of our asymptotic analysis are corroborated using full pseudospectral numerical simulations and confirm that the heat flux is maximized for convection cells that are roughly square in cross section.
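For context, a standard nondimensional form of the Boussinesq equations governing Rayleigh–Bénard convection reads as follows; the paper's exact scaling conventions may differ.

```latex
% Standard nondimensional Boussinesq system (thermal-diffusion scaling);
% the paper's scaling conventions may differ.
\begin{aligned}
\frac{1}{Pr}\Bigl(\partial_t\mathbf{u} + \mathbf{u}\cdot\nabla\mathbf{u}\Bigr)
  &= -\nabla p + Ra\,\theta\,\hat{\mathbf{z}} + \nabla^{2}\mathbf{u},\\
\partial_t\theta + \mathbf{u}\cdot\nabla\theta &= \nabla^{2}\theta,
\qquad \nabla\cdot\mathbf{u}=0.
\end{aligned}
```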
Abstract:
We study a climatologically important interaction of two of the main components of the geophysical system by adding an energy balance model for the averaged atmospheric temperature, as a dynamic boundary condition, to a diagnostic ocean model having an additional spatial dimension. In this work, we give deeper insight than previous papers in the literature, mainly with respect to the pioneering 1990 model by Watts and Morantine. We take into consideration the latent heat of the two-phase ocean as well as a possible delayed term. Non-uniqueness for the initial boundary value problem, uniqueness under a non-degeneracy condition, and the existence of multiple stationary solutions are proved here. These multiplicity results suggest that an S-shaped bifurcation diagram should be expected to occur in this class of models generalizing previous energy balance models. The numerical method applied to the model is based on a finite volume scheme with nonlinear weighted essentially non-oscillatory (WENO) reconstruction and Runge–Kutta total variation diminishing (TVD) time integration.
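The time integrator named here, TVD Runge–Kutta, is the classic third-order Shu–Osher scheme; a sketch follows, applied to 1-D linear advection with a first-order upwind flux standing in for the paper's WENO reconstruction.

```python
import numpy as np

# Sketch of third-order TVD (strong-stability-preserving) Runge-Kutta time
# integration, the Shu-Osher scheme, applied to 1-D linear advection. A
# first-order upwind flux stands in for the paper's WENO reconstruction.

N, c = 200, 1.0
dx = 1.0 / N
dt = 0.5 * dx / c                       # CFL condition

def rhs(u):                             # -c du/dx with upwind differencing
    return -c * (u - np.roll(u, 1)) / dx

def tvd_rk3_step(u):
    u1 = u + dt * rhs(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2))

x = np.linspace(0.0, 1.0, N, endpoint=False)
u = np.where(np.abs(x - 0.3) < 0.1, 1.0, 0.0)   # discontinuous initial data

tv0 = np.abs(np.diff(u)).sum()
for _ in range(200):
    u = tvd_rk3_step(u)
print("total variation before/after:", tv0, np.abs(np.diff(u)).sum())
```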
Abstract:
Metamaterials are 1D, 2D or 3D arrays of artificial atoms. The artificial atoms, called "meta-atoms", can be any components with tailorable electromagnetic properties, such as resonators, LC circuits, nanoparticles, and so on. By designing the properties of individual meta-atoms and the interactions created by putting them in a lattice, one can create a metamaterial with intriguing properties not found in nature. My Ph.D. work examines meta-atoms based on radio frequency superconducting quantum interference devices (rf-SQUIDs); their tunability with dc magnetic field, rf magnetic field, and temperature is studied. The rf-SQUIDs are superconducting split-ring resonators in which the usual capacitance is supplemented with a Josephson junction, which introduces strong nonlinearity in the rf properties. At relatively low rf magnetic field, a magnetic field tunability of the resonant frequency of up to 80 THz/gauss by dc magnetic field is observed, and a total frequency tunability of 100% is achieved. The macroscopic quantum superconducting metamaterial also shows manipulable self-induced broadband transparency due to a qualitatively novel nonlinear mechanism that is different from conventional electromagnetically induced transparency (EIT) or its classical analogs. A nearly complete disappearance of resonant absorption under a range of applied rf flux is observed experimentally and explained theoretically. The transparency arises from the intrinsic bi-stability and can be switched on and off easily by altering the rf and dc magnetic fields, temperature, and history. Hysteretic in situ 100% tunability of transparency paves the way for auto-cloaking metamaterials, intensity-dependent filters, and fast-tunable power limiters. An rf-SQUID metamaterial is shown to have qualitatively the same behavior as a single rf-SQUID with regard to dc flux, rf flux and temperature tuning. The two-tone response of self-resonant rf-SQUID meta-atoms and metamaterials is then studied via intermodulation (IM) measurements over a broad range of tone frequencies and tone powers. A sharp onset followed by a surprising, strongly suppressed IM region near the resonance is observed. This behavior can be understood employing methods of nonlinear dynamics; the sharp onset and the gap in IM are due to sudden state jumps during a beat of the two-tone sum input signal. The theory predicts that the IM can be manipulated with tone power, center frequency, frequency difference between the two tones, and temperature. This quantitative understanding potentially allows for the design of rf-SQUID metamaterials with either very low or very high IM response.
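A common model for a single rf-SQUID meta-atom is the resistively-and-capacitively-shunted-junction (RCSJ) loop equation for the gauge-invariant phase; a sketch of its driven dynamics follows. The equation form is the standard one in the rf-SQUID literature, but all parameter values are illustrative rather than the thesis' device values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Sketch of a single rf-SQUID meta-atom in the standard RCSJ loop model: in
# units of the geometric resonance 1/sqrt(LC), the gauge-invariant phase
# delta obeys  delta'' + G*delta' + delta + beta*sin(delta) = 2*pi*phi_ext,
# with beta = 2*pi*L*Ic/Phi0. The sin(delta) term is the Josephson
# nonlinearity; parameter values below are illustrative.

G, beta = 0.01, 0.8                        # damping, screening parameter
phi_dc, phi_rf, Omega = 0.2, 0.05, 0.95    # dc flux, rf amplitude, drive freq.

def rcsj(t, y):
    delta, ddelta = y
    phi_ext = phi_dc + phi_rf * np.sin(Omega * t)
    return [ddelta,
            2 * np.pi * phi_ext - delta - beta * np.sin(delta) - G * ddelta]

sol = solve_ivp(rcsj, (0.0, 2000.0), [0.0, 0.0], max_step=0.1,
                rtol=1e-8, atol=1e-10)
delta_ss = sol.y[0][sol.t > 1500.0]        # discard the transient
print("steady-state oscillation amplitude:", np.ptp(delta_ss))
```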
Abstract:
The well-known degrees-of-freedom problem, originally introduced by Nikolai Bernstein (1967), results from the high abundance of degrees of freedom in the musculoskeletal system. Such abundance in motor control has two sides: i) because it is unlikely that the Central Nervous System controls each degree of freedom independently, the complexity of the control needs to be reduced, and ii) because there are many options for performing a movement, a repetition of a given movement is never the same. This leads to two main topics in motor control and biomechanics: motor coordination and motor variability. The present thesis aimed to understand how motor systems behave and adapt under specific conditions. This thesis comprises three studies that focused on three topics of major interest in the field of sports sciences and medicine: expertise, injury risk and fatigue. The first study (expertise) focused on muscle coordination, to further investigate the effect of expertise on the muscle synergistic organization, which ultimately may represent the underlying neural strategies. Studies 2 (excessive medial knee displacement) and 3 (fatigue) both aimed to better understand the impact of these factors on dynamic local stability. The main findings of the present thesis suggest: 1) there is great robustness in muscle synergistic organization between swimmers at different levels of expertise (study 1, chapter II), which ultimately indicates that differences in muscle coordination are mainly explained by peripheral adaptations; 2) injury risk factors such as excessive medial knee displacement (study 2, chapter III) and fatigue (study 3, chapter IV) alter the dynamic local stability of the neuromuscular system towards a more unstable state. This change in dynamic local stability represents a loss of adaptability in the neuromuscular system, reducing the flexibility to adapt to a perturbation.
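Whether this thesis extracted synergies with non-negative matrix factorization is not stated in the abstract, but NMF is the most common approach to muscle synergy analysis; a sketch on synthetic EMG envelopes follows.

```python
import numpy as np
from sklearn.decomposition import NMF

# Sketch of a common muscle-synergy extraction: factor a non-negative EMG
# envelope matrix (muscles x time) into synergy weights W and activation
# coefficients H with non-negative matrix factorization. Whether the thesis
# used NMF is not stated in the abstract; the data below are synthetic.

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 500)
H_true = np.vstack([np.clip(np.sin(2 * np.pi * t), 0, None),          # 2 synergies
                    np.clip(np.sin(2 * np.pi * t + np.pi), 0, None)])
W_true = rng.random((8, 2))                                           # 8 muscles
emg = W_true @ H_true + 0.01 * rng.random((8, 500))

model = NMF(n_components=2, init="nndsvda", max_iter=2000, random_state=0)
W = model.fit_transform(emg)          # synergy weights (muscles x synergies)
H = model.components_                 # activation coefficients over time
vaf = 1 - np.linalg.norm(emg - W @ H)**2 / np.linalg.norm(emg)**2
print(f"variance accounted for: {vaf:.3f}")
```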
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach to two- and multi-stage optimization problems under uncertainty is scenario analysis. To do so, the uncertainty in some of the problem data is modeled by random vectors with finite, stage-specific supports. Each of these realizations represents a scenario. Using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to better track the progress of the algorithm. Numerical experiments on instances of multistage stochastic linear problems suggest that most existing techniques can exhibit premature convergence to a suboptimal solution, or converge to the optimal solution but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all our experiments and was the fastest in most cases. For the question of handling the quadratic term, we review existing techniques and propose the idea of replacing the quadratic term with a linear one. Although our method remains to be tested, we have the intuition that it will reduce some numerical and theoretical difficulties of the progressive hedging method.
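The structure of a progressive hedging iteration (scenario subproblem solves, averaging to an implementable point, multiplier update) can be shown on a toy two-stage problem whose augmented-Lagrangian subproblems have closed-form solutions. This only illustrates the algorithm's skeleton; the fixed penalty parameter below is exactly what the thesis' adaptive strategy would replace.

```python
import numpy as np

# Toy progressive hedging sketch on a problem with one first-stage variable
# x and scenario subproblems  min_x 0.5*(x - c_s)^2  with probability p_s,
# whose augmented-Lagrangian steps have closed forms.

c = np.array([1.0, 2.0, 6.0])          # scenario data
p = np.array([0.5, 0.3, 0.2])          # scenario probabilities
rho = 1.0                              # fixed penalty parameter

x = c.copy()                           # initialize by solving scenarios alone
w = np.zeros_like(c)                   # nonanticipativity multipliers

for it in range(100):
    xbar = p @ x                       # implementable (scenario-averaged) point
    w += rho * (x - xbar)              # multiplier update
    # scenario subproblem: min 0.5*(x-c_s)^2 + w_s*x + rho/2*(x-xbar)^2
    # first-order condition: (x - c_s) + w_s + rho*(x - xbar) = 0
    x = (c - w + rho * xbar) / (1 + rho)
    if np.abs(x - xbar).max() < 1e-10:
        break

print(f"converged in {it + 1} iterations; x* = {x.mean():.4f}")
# the true optimum of sum_s p_s*0.5*(x-c_s)^2 is the weighted mean:
print("expected:", p @ c)
```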
Abstract:
Renewable or sustainable energy (SE) sources have attracted the attention of many countries because the power generated is environmentally friendly and the sources are not subject to the instability of price and availability. This dissertation presents new trends in the DC-AC converters (inverters) used in renewable energy sources, particularly for photovoltaic (PV) energy systems. A review of the existing technologies is performed for both single-phase and three-phase systems, and the pros and cons of the best candidates are investigated. In many modern energy conversion systems, a DC voltage, which is provided by a SE source or energy storage device, must be boosted and converted to an AC voltage with fixed amplitude and frequency. A novel switching pattern based on the concept of the conventional space-vector pulse-width-modulated (SVPWM) technique is developed for single-stage boost-inverters using the topology of current source inverters (CSI). The six main switching states and two zero states, with three switches conducting at any given instant in conventional SVPWM techniques, are modified herein into three charging states and six discharging states with only two switches conducting at any given instant. The charging states are necessary in order to boost the DC input voltage. It is demonstrated that the CSI topology, in conjunction with the developed switching pattern, is capable of providing the required residential AC voltage from the low DC voltage of one PV panel at its rated power, for both linear and nonlinear loads. In a micro-grid, the control of active and reactive power, and consequently voltage regulation, is one of the main requirements. Therefore, the capability of the single-stage boost-inverter to control the active power and provide reactive power is investigated. It is demonstrated that the injected active and reactive power can be independently controlled through two modulation indices introduced in the proposed switching algorithm. The system is capable of injecting a desirable level of reactive power, while maximum power point tracking (MPPT) dictates the desired active power. The developed switching pattern is experimentally verified on a laboratory-scale three-phase 200 W boost-inverter for both grid-connected and stand-alone cases, and the results are presented.
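As background, the conventional SVPWM dwell-time computation that the proposed CSI switching pattern builds upon can be sketched as follows; the reference values in the example are illustrative.

```python
import numpy as np

# Sketch of the conventional space-vector PWM (SVPWM) dwell-time calculation
# for a two-level three-phase inverter, the starting point that the proposed
# CSI switching pattern modifies. Given a reference vector (magnitude, angle),
# compute the sector and the dwell times of the two adjacent active vectors
# and the zero vector. Values in the example are illustrative.

def svpwm_dwell(v_ref, theta, v_dc, T_s):
    """Return (sector, T1, T2, T0) for one switching period T_s."""
    sector = int(theta // (np.pi / 3)) % 6          # sectors 0..5
    alpha = theta - sector * np.pi / 3              # angle inside the sector
    k = np.sqrt(3) * T_s * v_ref / v_dc
    T1 = k * np.sin(np.pi / 3 - alpha)              # first adjacent vector
    T2 = k * np.sin(alpha)                          # second adjacent vector
    T0 = T_s - T1 - T2                              # zero-vector time
    return sector, T1, T2, T0

# example: 60 % of the linear-range maximum at 45 electrical degrees
sector, T1, T2, T0 = svpwm_dwell(v_ref=0.6 * 400 / np.sqrt(3), theta=np.pi / 4,
                                 v_dc=400.0, T_s=1e-4)
print(f"sector {sector}: T1={T1*1e6:.1f} us  T2={T2*1e6:.1f} us  T0={T0*1e6:.1f} us")
```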