944 results for Nonlinear Schrodinger model
Abstract:
The design of shell and spatial structures represents an important challenge even with the use of modern computer technology. If we concentrate on concrete shell structures, many problems must be faced, such as the conceptual and structural disposition, optimal shape design, analysis, construction methods, details, etc., and all these problems are interconnected. As an example, shape optimization requires the use of several disciplines, such as structural analysis, sensitivity analysis, optimization strategies and geometrical design concepts. Similar comments apply to other space structures, such as steel trusses with single or double shape and tension structures. Regarding the analysis, the Finite Element Method appears to be the most widespread and versatile technique used in practice. In the application of this method several issues arise. First, either a pertinent shell theory or, alternatively, the degenerated 3-D solid approach has to be chosen. According to this choice, a suitable FE model has to be adopted, i.e., a displacement, stress or mixed formulated element. The good behavior of shell structures under dead loads, which are carried towards the supports mainly by compressive stresses, is impaired by the high imperfection sensitivity usually exhibited by these structures. This last effect is particularly important if large deformations and material nonlinearities of the shell interact unfavorably, as can be the case for thin reinforced shells. In this respect, the study of the stability of the shell represents a compulsory step in the analysis. Therefore, there are currently very active fields of research, such as the different descriptions of consistent nonlinear shell models given by Simo, Fox and Rifai, Matzenmiller, and Büchter and Ramm, among others; the consistent formulation of efficient tangent stiffness, as presented by Ortiz and by Schweizerhof and Wriggers, with applications to concrete shells exhibiting creep behavior given by Scordelis and coworkers; and finally the development of numerical techniques needed to trace the nonlinear response of the structure. The objective of this paper concentrates on this last research aspect, i.e., the presentation of a state of the art of the existing solution techniques for the nonlinear analysis of structures. In this presentation the following excellent reviews on this subject will be mainly used.
Abstract:
The monkey anterior intraparietal area (AIP) encodes visual information about three-dimensional object shape that is used to shape the hand for grasping. In robotics a similar role has been played by modules that fit point cloud data to the superquadric family of shapes and its various extensions. We developed a model of shape tuning in AIP based on cosine tuning to superquadric parameters. However, the model did not fit the data well, and we also found that it was difficult to accurately reproduce these parameters using neural networks with the appropriate inputs (modelled on the caudal intraparietal area, CIP). The latter difficulty was related to the fact that there are large discontinuities in the superquadric parameters between very similar shapes. To address these limitations we adopted an alternative shape parameterization based on an Isomap nonlinear dimension reduction. The Isomap was built using gradients and curvatures of object surface depth. This alternative parameterization was low-dimensional (like superquadrics), but data-driven (similar to an alternative clustering approach that is also sometimes used in robotics) and lacked large discontinuities. Isomaps with 16 or more dimensions reproduced the AIP data fairly well. Moreover, we found that the Isomap parameters could be approximated from CIP-like input much more accurately than the superquadric parameters. We conclude that Isomaps, or perhaps alternative dimension reductions of CIP signals, provide a promising model of AIP tuning. We have now started to integrate our model with a robot hand, to explore the efficacy of Isomap shape reductions in grasp planning. Future work will consider dynamics of spike responses and integration with related visual and motor area models.
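The shape-parameterization step described in this abstract can be sketched in a few lines. The following is a minimal, hypothetical illustration (assuming scikit-learn's Isomap, synthetic depth maps, and invented feature shapes), not the authors' pipeline:

```python
# Minimal sketch: embed gradient/curvature features of depth maps with Isomap.
# Sample counts, map sizes and neighborhood size are illustrative assumptions.
import numpy as np
from sklearn.manifold import Isomap

rng = np.random.default_rng(0)
depth = rng.standard_normal((200, 32, 32))   # hypothetical depth maps, one per object

# First-order gradients and a crude curvature proxy (second differences).
gx, gy = np.gradient(depth, axis=(1, 2))
curv = np.gradient(gx, axis=1) + np.gradient(gy, axis=2)
features = np.concatenate(
    [gx.reshape(200, -1), gy.reshape(200, -1), curv.reshape(200, -1)], axis=1)

# A 16-dimensional embedding, the dimensionality the abstract reports as
# sufficient to reproduce the AIP data fairly well.
embedding = Isomap(n_neighbors=10, n_components=16).fit_transform(features)
print(embedding.shape)   # (200, 16)
```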
Abstract:
This work is concerned with the numerical solution of the evolution equations of thermomechanical systems, in such a way that the scheme itself satisfies the laws of thermodynamics. Within this framework, we present a novel integration scheme for the dynamics of viscoelastic continuum bodies in isothermal conditions. This method intrinsically satisfies the laws of thermodynamics arising from the continuum, as well as the possible additional symmetries. The resulting solutions are physically accurate since they preserve the fundamental physical properties of the model. Furthermore, the method gives excellent performance with respect to robustness and stability. Proofs of these claims, as well as numerical examples that illustrate the performance of the novel scheme, are provided.
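As a hedged illustration of this class of structure-preserving integrators, the sketch below applies a discrete-gradient scheme to a single-degree-of-freedom oscillator with a Duffing-type potential (an assumption for illustration; the paper treats viscoelastic continua). The mean-value discrete gradient makes the discrete energy increment vanish identically, so the printed drift is limited only by the tolerance of the fixed-point solve:

```python
# Discrete-gradient (energy-consistent) integrator for q'' = -dV/dq.
# Potential, time step and iteration count are illustrative assumptions.
import numpy as np

def V(q):                      # nonlinear (Duffing-type) potential energy
    return 0.5 * q**2 + 0.25 * q**4

def dV(q):
    return q + q**3

def discrete_gradient(q0, q1):
    # Mean-value gradient: (V(q1) - V(q0)) / (q1 - q0) enforces exact
    # discrete energy balance for the scheme below.
    if abs(q1 - q0) < 1e-12:
        return dV(0.5 * (q0 + q1))
    return (V(q1) - V(q0)) / (q1 - q0)

def step(q, p, dt, m=1.0, iters=50):
    q_new, p_new = q, p
    for _ in range(iters):     # fixed-point solve of the implicit scheme
        p_new = p - dt * discrete_gradient(q, q_new)
        q_new = q + dt * 0.5 * (p + p_new) / m
    return q_new, p_new

q, p, dt = 1.0, 0.0, 0.05
E0 = 0.5 * p**2 + V(q)
for _ in range(2000):
    q, p = step(q, p, dt)
print(abs(0.5 * p**2 + V(q) - E0))   # energy drift stays at solver tolerance
```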
Abstract:
The arch bridge structural solution has been known for centuries; in fact, the simple nature of the arch, which requires low tension and shear strength, was an advantage when simple materials like stone and brick were the only options available in ancient times. With the passage of time, and especially after the industrial revolution, new materials were adopted in the construction of arch bridges to reach longer spans. Nowadays a long-span arch bridge is made of steel, concrete, or a combination of the two ("CFST"); as a result of using these high-strength materials, very long spans can be achieved. The current record for the longest arch belongs to the Chaotianmen bridge over the Yangtze river in China, with a 552-meter steel span, and the longest reinforced concrete arch is the Wanxian bridge, which also crosses the Yangtze river with a 420-meter span. Today the designer is no longer limited by span length wherever the arch bridge is the most applicable solution among the alternatives; i.e., cable-stayed and suspension bridges are more reasonable if a very long span is desired. As with any large structure, the economic and architectural aspects of the construction of a bridge are extremely important; in other words, just as a narrower bridge has a better appearance, it also requires a smaller volume of material, which makes the design more economical. The design of such a bridge requires, besides high-strength materials, precise structural analysis approaches capable of integrating the material behaviour, the complex geometry of the structure, and the various types of loads which may be applied to the bridge during its service life. Depending on the design strategy, the analysis may evaluate only the linear elastic behaviour of the structure or consider its nonlinear properties as well. Although most structures in the past were designed to act in their elastic range, the rapid increase in computational capacity allows us to consider different sources of nonlinearity in order to achieve more realistic evaluations where the dynamic behaviour of the bridge is important, especially in seismic zones, where large movements may occur or the structure may experience the P-Δ effect during an earthquake. This type of analysis is computationally expensive and very time consuming, and in recent years several methods have been proposed to address this problem. The discussion of recent developments in these methods and their application to long-span concrete arch bridges is the main goal of this research. Accordingly, existing long-span concrete arch bridges have been studied to gather critical information about their geometry and material properties. Based on this information, several concrete arch bridges with main spans ranging from 100 to 400 meters were designed for further study. The structural analysis methods implemented in this study are the following. Elastic analysis: Direct Response History Analysis (DRHA): this method solves the equation of motion directly over the time history of the applied acceleration or imposed load in the linear elastic range. Modal Response History Analysis (MRHA): similar to DRHA, this method is also based on the time history, but the equation of motion is reduced to single-degree-of-freedom modal systems and the response of each mode is calculated independently; performing this analysis requires less time than DRHA.
Modal Response Spectrum Analysis (MRSA): as its name indicates, this method calculates the peak response of the structure for each mode and combines them using modal combination rules based on the introduced ground-motion spectra; it is expected to be the fastest of the elastic analyses. Inelastic analysis: Nonlinear Response History Analysis (NL-RHA): the most accurate strategy to address significant nonlinearities in structural dynamics is undoubtedly the nonlinear response history analysis, which is similar to DRHA but extended to the inelastic range by updating the stiffness matrix at every iteration. This onerous task clearly increases the computational cost, especially for unsymmetrical buildings that must be analyzed in a full 3D model to take torsional effects into consideration. Modal Pushover Analysis (MPA): the Modal Pushover Analysis is basically MRHA extended to the inelastic stage. MRHA by itself cannot solve the dynamic system in the inelastic stage because the resisting force f_s(u, u̇) is then unknown; the solution adopted by MPA is to use a previously recorded f_s to evaluate the dynamic system. Extended Modal Pushover Analysis (EMPA): the extended modal pushover is one of the most recently proposed methods; it evaluates the response of the structure under multi-directional excitation using the modal pushover strategy. For a given mode, the original pushover neglects the contributions of directions other than the characteristic one; this is reasonable in a regular symmetric building, but a structure with a complex shape, like a long-span arch bridge, may exhibit strong modal coupling. This method intends to account for modal coupling while taking the same computation time as MPA. Coupled Nonlinear Static Pushover Analysis (CNSP): the EMPA adds the contribution of the non-characteristic directions to the formal MPA procedure; however, the static pushovers in EMPA are performed individually for every mode, so the results from different modes can only be combined validly in the elastic phase. As soon as any element in the structure starts yielding, the neutral axis of that section is no longer fixed for both responses during the earthquake, meaning the longitudinal deflection unavoidably affects the transverse one, and vice versa. To overcome this drawback, CNSP suggests executing the pushover analysis for the governing modes of both directions at the same time. This strategy is estimated to be more accurate than MPA and EMPA; moreover, the calculation time is reduced because only one pushover analysis is required. Regardless of the strategy, the accuracy of the structural analysis is highly dependent on the modelling and numerical integration approaches used within each method; therefore, the widely used Finite Element Method is employed in all the analyses performed in this research. Chapter 2 starts with the information gathered about constructed long-span arch bridges and continues with the geometrical and material definitions of the new models. Chapter 3 provides detailed information about the structural analysis strategies, and a step-by-step description of each method is available in Appendix A. The document ends with the description of the results and the conclusions in Chapter 4.
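For a flavor of the fastest of the elastic methods, the sketch below performs the core MRSA combination step with the SRSS rule; the modal properties and the toy design spectrum are invented placeholders, not data from the thesis:

```python
# SRSS combination of peak modal responses read from a design spectrum.
# Periods, participation factors and the spectrum shape are illustrative.
import numpy as np

periods = np.array([2.0, 0.7, 0.3])          # modal periods, s (hypothetical)
participation = np.array([0.8, 0.4, 0.2])    # modal participation factors

def spectrum(T):
    # Toy pseudo-acceleration spectrum (in g); a real design uses code spectra.
    return np.where(T < 0.5, 1.0, 0.5 / T)

Sa = spectrum(periods)
peak_modal = participation * Sa              # peak response of each mode
srss = np.sqrt(np.sum(peak_modal**2))        # SRSS modal combination rule
print(srss)
```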
Abstract:
In this paper, fuzzy feedback linearization is used to control nonlinear systems described by Takagi-Sugeno (T-S) fuzzy models. An optimal controller is designed using the linear quadratic regulator (LQR). The well-known weighting-parameters approach is applied to optimize the local and global approximation and modelling capability of the T-S fuzzy model, improving the choice of the performance index and minimizing it. The approach used here can be considered a generalized version of the T-S method. Simulation results indicate the potential, simplicity and generality of the estimation method and the robustness of the proposed optimal LQR algorithm.
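A minimal sketch of the LQR design step for one local linear model of a T-S system, assuming SciPy's continuous-time algebraic Riccati solver and invented system matrices and weights:

```python
# LQR gain for one local linear model; A, B, Q, R are placeholders.
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # hypothetical local model
B = np.array([[0.0], [1.0]])
Q = np.diag([10.0, 1.0])                    # state weighting
R = np.array([[1.0]])                       # input weighting

P = solve_continuous_are(A, B, Q, R)        # solve the Riccati equation
K = np.linalg.solve(R, B.T @ P)             # optimal feedback u = -K x
print(K)
```

In a full T-S design, one such gain would be computed per local model and the applied control blended through the fuzzy membership functions.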
Abstract:
Nowadays robots have made their way into real applications that were prohibitive and unthinkable thirty years ago. This is mainly due to the increase in computational power and to the evolution of the theoretical fields of robotics and control. Even though there is plenty of information in the current literature on these topics, it is not easy to find clear guidance on how to proceed in order to design and implement a controller for a robot. In general, the design of a controller requires a complete understanding and knowledge of the system to be controlled; therefore, for advanced control techniques, the system must first be identified. This particular objective is cumbersome and never straightforward, requiring great expertise, and some criteria must be adopted. On the other hand, the problem of designing a controller is even more complex when dealing with Parallel Manipulators (PM), since their closed-loop structures give rise to highly nonlinear systems. On this basis the current work is developed; it intends to summarize and gather all the concepts and experience involved in the control of a hydraulic parallel manipulator. The main objective of this thesis is to provide a guide covering all the steps involved in the design of advanced control techniques for PMs. The analysis of the PM under study is broken down to the core of the mechanism: the hydraulic actuators. The actuators are modeled and experimentally identified. Additionally, some considerations regarding traditional PID controllers are presented, and an adaptive controller is finally implemented. From a macro perspective, the kinematic and dynamic models of the PM are presented. Based on the model of the system, and extending the adaptive controller of the actuators, a control strategy for the PM is developed and its performance is analyzed through simulation.
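As a rough, generic illustration of the adaptive-control flavor described here (not the thesis's hydraulic controller), the sketch below adapts a feedforward gain with the classical MIT rule for a first-order plant whose gain is unknown; all constants are invented:

```python
# MIT-rule gain adaptation: plant y' = -a*y + k*u with unknown k,
# reference model ym' = -a*ym + k0*r. theta converges to k0/k.
dt, a, k = 1e-3, 1.0, 3.0      # hypothetical plant constants
k0 = 1.0                        # reference-model gain
gamma = 2.0                     # adaptation rate
theta, y, ym = 0.0, 0.0, 0.0

for n in range(40000):
    r = 1.0 if (n * dt) % 4 < 2 else -1.0   # square-wave reference
    u = theta * r                           # adjustable feedforward gain
    y  += dt * (-a * y + k * u)             # plant
    ym += dt * (-a * ym + k0 * r)           # reference model
    theta += dt * (-gamma * (y - ym) * ym)  # MIT-rule gradient update

print(theta)   # approaches k0/k ≈ 0.33
```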
Abstract:
The purpose of this project is, first and foremost, to introduce the topic of nonlinear vibrations and oscillations in mechanical systems, and namely nonlinear normal modes (NNMs), to a greater audience of researchers and technicians. To do so, the dynamical behavior and properties of nonlinear mechanical systems are first outlined through the analysis of a pair of exemplary models with the harmonic balance method, and the conclusions drawn are contrasted with linear vibration theory. Then, it is argued how the nonlinear normal modes could, in spite of their limitations, predict the frequency response of a mechanical system. After discussing those introductory concepts, I present a Matlab package called 'NNMcont', developed by a group of researchers from the University of Liege. This package allows the analysis of nonlinear normal modes of vibration in a range of mechanical systems as extensions of the linear modes; it relies on numerical methods and a continuation algorithm for the computation of the nonlinear normal modes of a conservative mechanical system. In order to prove its functionality, a two-degree-of-freedom mechanical system with elastic nonlinearities is analyzed. This model comprises a mass suspended on a foundation by means of a spring and viscous damper (analogous to a very simplified model of most suspended structures and machines), to which a mass damper is attached as a passive vibration control system. The results of the computation are displayed on frequency-energy plots showing the NNM branches, along with modal curves and time-series plots for each normal mode. Finally, a critical analysis of the results is carried out with an eye to discerning what they can tell the researcher about the dynamical properties of the system.
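A one-harmonic harmonic-balance calculation of the kind used in the introductory analysis can be written in a few lines. For the undamped Duffing oscillator x'' + x + alpha*x^3 = 0, substituting x(t) = a*cos(w*t) and balancing the cos(w*t) terms yields the backbone w(a)^2 = 1 + (3/4)*alpha*a^2; the sketch below tabulates this frequency-energy relation (alpha and the amplitude range are illustrative assumptions):

```python
# Single-harmonic balance backbone of a hardening Duffing oscillator.
import numpy as np

alpha = 0.5
amplitudes = np.linspace(0.0, 2.0, 50)
omega = np.sqrt(1.0 + 0.75 * alpha * amplitudes**2)           # backbone w(a)
energy = 0.5 * amplitudes**2 + 0.25 * alpha * amplitudes**4   # V at the turning point

for a, w, E in list(zip(amplitudes, omega, energy))[::10]:
    print(f"a={a:.2f}  omega={w:.3f}  E={E:.3f}")   # frequency-energy pairs
```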
Abstract:
Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and we present new methodologies for each of them. Techniques based on extensions of intervals have allowed accurate models of signal and quantization noise propagation to be obtained in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the statistical moments of the system from these partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by as little as 0.04% with respect to the simulation-based reference values. A known drawback of techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted system grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals of the system, introduces the noise sources for each group independently, and then combines the results. In this way, the number of noise sources in the system at any given time is controlled and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible. This thesis also covers the development of word-length optimization methodologies based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that explore the reduction of the execution time from two different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence interval in the final results of the optimization process, we can use more relaxed confidence levels, which in turn imply a considerably smaller number of simulation runs, in the initial stages of the search, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to x240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular quantization framework that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through a simple example, the way new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
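The costly inner step that the interpolative and incremental methods accelerate is, in essence, a Monte-Carlo evaluation of one word-length assignment. The sketch below estimates the round-off noise of a toy fixed-point datapath (the polynomial, word-lengths and sample count are invented for illustration):

```python
# Monte-Carlo round-off noise estimate for one fixed-point word-length choice.
import numpy as np

def quantize(x, frac_bits):
    # Truncation onto a two's-complement fixed-point grid with 2^-frac_bits step.
    step = 2.0 ** -frac_bits
    return np.floor(x / step) * step

rng = np.random.default_rng(1)
x = rng.uniform(-1, 1, 100_000)

ref = 0.5 * x + 0.25 * x**3                           # double-precision reference
xq = quantize(x, 8)                                   # quantized input
yq = quantize(0.5 * xq + 0.25 * quantize(xq**3, 8), 8)  # quantized datapath

noise_power = np.mean((yq - ref) ** 2)
print(10 * np.log10(noise_power))                     # round-off noise, dB
```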
Abstract:
A form of two-dimensional (2D) vibrational spectroscopy, which uses two ultrafast IR laser pulses, is used to examine the structure of a cyclic penta-peptide in solution. Spectrally resolved cross peaks occur in the off-diagonal region of the 2D IR spectrum of the amide I region, analogous to those in 2D NMR spectroscopy. These cross peaks measure the coupling between the different amide groups in the structure. Their intensities and polarizations relate directly to the three-dimensional structure of the peptide. With the help of a model coupling Hamiltonian, supplemented by density functional calculations, the spectra of this penta-peptide can be regenerated from the known solution phase structure. This 2D-IR measurement, with an intrinsic time resolution of less than 1 ps, could be used in all time regimes of interest in biology.
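The model coupling Hamiltonian mentioned here is, in its simplest exciton form, a small symmetric matrix: local amide-I frequencies on the diagonal and pairwise couplings off it, whose eigenvalues give the delocalized modes that set the cross-peak positions. A hypothetical five-site example (all wavenumbers invented, not the paper's values):

```python
# Toy amide-I exciton Hamiltonian for a five-residue cyclic peptide.
import numpy as np

site_freqs = np.array([1640.0, 1650.0, 1655.0, 1660.0, 1675.0])  # cm^-1, illustrative
H = np.diag(site_freqs)
couplings = {(0, 1): 7.0, (1, 2): -4.0, (2, 3): 6.0, (3, 4): -5.0, (0, 4): 3.0}
for (i, j), beta in couplings.items():
    H[i, j] = H[j, i] = beta            # pairwise amide-amide couplings, cm^-1

eigvals, eigvecs = np.linalg.eigh(H)
print(eigvals)   # normal-mode frequencies; their differences map to cross peaks
```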
Abstract:
At the level of the cochlear nucleus (CN), the auditory pathway divides into several parallel circuits, each of which provides a different representation of the acoustic signal. Here, the representation of the power spectrum of an acoustic signal is analyzed for two CN principal cells—chopper neurons of the ventral CN and type IV neurons of the dorsal CN. The analysis is based on a weighting function model that relates the discharge rate of a neuron to first- and second-order transformations of the power spectrum. In chopper neurons, the transformation of spectral level into rate is a linear (i.e., first-order) or nearly linear function. This transformation is a predominantly excitatory process involving multiple frequency components, centered in a narrow frequency range about best frequency, that usually are processed independently of each other. In contrast, type IV neurons encode spectral information linearly only near threshold. At higher stimulus levels, these neurons are strongly inhibited by spectral notches, a behavior that cannot be explained by level transformations of first- or second-order. Type IV weighting functions reveal complex excitatory and inhibitory interactions that involve frequency components spanning a wider range than that seen in choppers. These findings suggest that chopper and type IV neurons form parallel pathways of spectral information transmission that are governed by two different mechanisms. Although choppers use a predominantly linear mechanism to transmit tonotopic representations of spectra, type IV neurons use highly nonlinear processes to signal the presence of wide-band spectral features.
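The first-order part of the weighting-function model amounts to a linear regression of discharge rate on per-band spectral levels. A sketch with synthetic data standing in for the recordings (baseline rate, band count and weighting profile are invented):

```python
# Least-squares fit of a first-order spectral weighting function:
# rate ≈ r0 + sum_k w_k * S_k, with S_k the level in frequency band k.
import numpy as np

rng = np.random.default_rng(2)
n_stim, n_bands = 500, 20
S = rng.normal(0, 5, (n_stim, n_bands))   # per-band spectral levels (dB re ref)
true_w = np.exp(-0.5 * ((np.arange(n_bands) - 10) / 2.0) ** 2)  # excitatory peak at BF
rate = 50 + S @ true_w + rng.normal(0, 1, n_stim)               # synthetic rates

X = np.column_stack([np.ones(n_stim), S])
coef, *_ = np.linalg.lstsq(X, rate, rcond=None)
r0, w = coef[0], coef[1:]
print(r0, np.round(w, 2))   # recovered baseline and weighting function
```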
Abstract:
Electrogram recordings of ventricular fibrillation appear complex and possibly chaotic. However, sequences of beat-to-beat intervals obtained from these recordings are generally short, making it difficult to explicitly demonstrate nonlinear dynamics. Motivated by the work of Sugihara on atmospheric dynamics and the Durbin-Watson test for nonlinearity, we introduce a new statistical test that recovers significant dynamical patterns from smoothed lag plots. This test is used to show highly significant nonlinear dynamics in a stable canine model of ventricular fibrillation.
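The paper's test is built on smoothed lag plots; as a loose, hypothetical illustration of the underlying idea (not the authors' statistic), one can compare how much better a smoothed lag-plot predictor does than a straight-line fit on data with known nonlinear dynamics:

```python
# Smoothed lag plot vs. linear fit on noisy logistic-map data (illustrative).
import numpy as np

rng = np.random.default_rng(3)
x = np.empty(400); x[0] = 0.3
for t in range(399):                       # logistic map: plainly nonlinear
    x[t + 1] = 3.9 * x[t] * (1 - x[t])
x += rng.normal(0, 0.01, 400)              # measurement noise

prev, nxt = x[:-1], x[1:]                  # lag-plot coordinates

a, b = np.polyfit(prev, nxt, 1)            # straight-line fit on the lag plot
err_lin = np.mean((nxt - (a * prev + b)) ** 2)

order = np.argsort(prev)                   # smoothed lag plot: moving average
sm = np.convolve(nxt[order], np.ones(15) / 15, mode="same")
err_smooth = np.mean((nxt[order] - sm) ** 2)

print(err_lin, err_smooth)   # the smoothed fit is far better -> nonlinear structure
```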
Abstract:
This paper introduces a new mathematical model for the simultaneous synthesis of heat exchanger networks (HENs), wherein the manipulation of process stream pressures is used to enhance heat integration. The proposed approach combines generalized disjunctive programming (GDP) and a mixed-integer nonlinear programming (MINLP) formulation in order to minimize the total annualized cost, composed of operational and capital expenses. A multi-stage superstructure is developed for the HEN synthesis, assuming constant heat-capacity flow rates and isothermal mixing, and allowing for stream splits. In this model, the pressure and temperature of the streams must be treated as optimization variables, further increasing the complexity and difficulty of the problem. In addition, the model allows for the coupling of compressors and turbines to save energy. A case study is performed to verify the accuracy of the proposed model. In this example, the optimal integration between heat and work decreases the need for thermal utilities in the HEN design. As a result, the total annualized cost is also reduced, owing to the decrease in the operational expenses related to the heating and cooling of the streams.
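The trade-off the MINLP objective captures can be seen already in a single hot/cold match: recovering more heat shrinks the utility bill but grows the exchanger area through the LMTD. The toy scan below uses invented prices, coefficients and stream data, not the paper's case study:

```python
# Annualized cost of one counter-current match vs. recovered duty Q.
import numpy as np

FCp_hot, FCp_cold = 10.0, 8.0                     # heat-capacity flow rates, kW/K
Th_in, Tc_in, Th_target = 450.0, 300.0, 350.0     # stream temperatures, K
U = 0.8                                           # overall coefficient, kW/(m^2 K)

def annual_cost(Q):
    Th_out = Th_in - Q / FCp_hot
    Tc_out = Tc_in + Q / FCp_cold
    dT1, dT2 = Th_in - Tc_out, Th_out - Tc_in     # terminal temperature differences
    lmtd = dT1 if abs(dT1 - dT2) < 1e-9 else (dT1 - dT2) / np.log(dT1 / dT2)
    area = Q / (U * lmtd)
    capital = 1200.0 * area**0.6                  # annualized capital, $/yr
    utility = 80.0 * max(0.0, FCp_hot * (Th_out - Th_target))  # residual cooling duty
    return capital + utility

Qs = np.linspace(100.0, 1150.0, 106)              # candidate duties, kW
best = min(Qs, key=annual_cost)
print(best, annual_cost(best))                    # interior optimum balancing both terms
```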
Abstract:
Regular vine copulas are multivariate dependence models constructed from pair-copulas (bivariate copulas). In this paper, we allow the dependence parameters of the pair-copulas in a D-vine decomposition to be potentially time-varying, following a nonlinear restricted ARMA(1,m) process, in order to obtain a very flexible dependence model for applications to multivariate financial return data. We investigate the dependence among the broad stock market indexes from Germany (DAX), France (CAC 40), Britain (FTSE 100), the United States (S&P 500) and Brazil (IBOVESPA) both in a crisis and in a non-crisis period. We find evidence of stronger dependence among the indexes in bear markets. Surprisingly, though, the dynamic D-vine copula indicates the occurrence of a sharp decrease in dependence between the indexes FTSE and CAC in the beginning of 2011, and also between CAC and DAX during mid-2011 and in the beginning of 2008, suggesting the absence of contagion in these cases. We also evaluate the dynamic D-vine copula with respect to Value-at-Risk (VaR) forecasting accuracy in crisis periods. The dynamic D-vine outperforms the static D-vine in terms of predictive accuracy for our real data sets.
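A sketch of the dynamic ingredient of the model: one pair-copula whose correlation parameter evolves through a restricted ARMA(1,m)-type recursion on a transformed scale, in the spirit of the paper's specification (coefficients and data are placeholders; the real model works inside a D-vine on PIT-transformed returns):

```python
# Time-varying correlation of a Gaussian pair-copula driven by a restricted
# ARMA(1,m)-style recursion; omega, beta, alpha and the data are illustrative.
import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(4)
T, m = 1000, 10
omega, beta, alpha = 0.03, 0.85, 0.2
u = rng.uniform(size=(T, 2))               # stand-in for PIT-transformed returns
z = norm.ppf(np.clip(u, 1e-6, 1 - 1e-6))   # Gaussian scores

rho, theta = np.empty(T), 0.0
for t in range(T):
    rho[t] = np.tanh(theta)                # keeps the parameter inside (-1, 1)
    if t > 0:
        lo = max(0, t - m)
        forcing = np.mean(z[lo:t, 0] * z[lo:t, 1])     # mean product of last m scores
        theta = omega + beta * theta + alpha * forcing # ARMA(1,m)-type update

print(rho.min(), rho.max())
```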