943 results for Predictor-corrector primal-dual nonlinear rescaling method


Relevance:

30.00%

Abstract:

The purpose of this project is, first and foremost, to introduce the topic of nonlinear vibrations and oscillations in mechanical systems, and in particular nonlinear normal modes (NNMs), to a wider audience of researchers and technicians. To that end, the dynamical behavior and properties of nonlinear mechanical systems are first outlined through the analysis of a pair of exemplary models with the harmonic balance method. The conclusions drawn are contrasted with linear vibration theory. It is then argued how nonlinear normal modes can, in spite of their limitations, predict the frequency response of a mechanical system. After discussing those introductory concepts, I present a Matlab package called 'NNMcont', developed by a group of researchers at the University of Liège. This package allows the analysis of nonlinear normal modes of vibration in a range of mechanical systems as extensions of the linear modes; it relies on numerical methods and a continuation algorithm to compute the nonlinear normal modes of a conservative mechanical system. To demonstrate its functionality, a two-degree-of-freedom mechanical system with elastic nonlinearities is analyzed. This model comprises a mass suspended on a foundation by a spring-viscous damper mechanism (analogous to a very simplified model of most suspended structures and machines) to which a mass damper is attached as a passive vibration control system. The results of the computation are displayed on frequency-energy plots showing the NNM branches, along with modal curves and time-series plots for each normal mode. Finally, a critical analysis of the results is carried out with an eye to what they can tell the researcher about the dynamical properties of the system.

Relevance:

30.00%

Abstract:

Using fixed-point arithmetic is one of the most common design choices for systems where area, power or throughput are heavily constrained. In order to produce implementations where the cost is minimized without negatively impacting the accuracy of the results, a careful assignment of word-lengths is required. The problem of finding the optimal combination of fixed-point word-lengths for a given system is a combinatorial NP-hard problem to which developers devote between 25 and 50% of the design-cycle time. Reconfigurable hardware platforms such as FPGAs also benefit from the advantages of fixed-point arithmetic, as it compensates for the slower clock frequencies and less efficient area utilization of these platforms with respect to ASICs. As FPGAs become commonly used for scientific computation, designs constantly grow larger and more complex, up to the point where they cannot be handled efficiently by current signal and quantization noise modelling and word-length optimization methodologies. In this Ph.D. thesis we explore different aspects of the quantization problem and present new methodologies for each of them.

Techniques based on extensions of intervals have made it possible to obtain accurate models of signal and quantization noise propagation in systems with non-linear operations. We take this approach a step further by introducing elements of Multi-Element Generalized Polynomial Chaos (ME-gPC) and combining them with a state-of-the-art Statistical Modified Affine Arithmetic (MAA) based methodology in order to model systems that contain control-flow structures. Our methodology produces the different execution paths automatically, determines the regions of the input domain that will exercise them, and extracts the system statistical moments from the partial results. We use this technique to estimate both the dynamic range and the round-off noise in systems with the aforementioned control-flow structures, and we show the accuracy of our approach, which in some case studies with non-linear operators deviates by only 0.04% with respect to the simulation-based reference values. A known drawback of the techniques based on extensions of intervals is the combinatorial explosion of terms as the size of the targeted systems grows, which leads to scalability problems. To address this issue we present a clustered noise injection technique that groups the signals in the system, introduces the noise terms in each group independently, and then combines the results at the end. In this way, the number of noise sources in the system at a given time is kept under control and, because of this, the combinatorial explosion is minimized. We also present a multi-way partitioning algorithm aimed at minimizing the deviation of the results due to the loss of correlation between noise terms, in order to keep the results as accurate as possible.

This thesis also covers the development of methodologies for word-length optimization based on Monte-Carlo simulations that run in reasonable times. We do so by presenting two novel techniques that approach the reduction of execution time from different angles. First, the interpolative method applies a simple but precise interpolator to estimate the sensitivity of each signal, which is later used to guide the optimization effort. Second, the incremental method revolves around the fact that, although we strictly need to guarantee a given confidence level for the final results of the optimization process, we can use more relaxed levels, which imply a considerably smaller number of samples per simulation, in the initial stages of the process, when we are still far from the optimized solution. Through these two approaches we demonstrate that the execution time of classical greedy search algorithms can be accelerated by factors of up to x240 for small/medium-sized problems. Finally, this book introduces HOPLITE, an automated, flexible and modular framework for quantization that includes the implementation of the previous techniques and is publicly available. Its aim is to offer developers and researchers a common ground for easily prototyping and verifying new techniques for system modelling and word-length optimization. We describe its workflow, justify the design decisions taken, explain its public API, and give a step-by-step demonstration of its execution. We also show, through a simple example, how new extensions should be connected to the existing interfaces in order to expand and improve the capabilities of HOPLITE.
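As a toy illustration of the kind of greedy word-length search whose acceleration is discussed above (a sketch under simplifying assumptions: a three-signal datapath y = a*b + c, an error budget, and plain Monte-Carlo error estimation; not the thesis's algorithm or the HOPLITE API):

```python
import random

def quantize(value, frac_bits):
    """Round a value to a fixed-point grid with 'frac_bits' fractional bits."""
    step = 2.0 ** -frac_bits
    return round(value / step) * step

def output_error(frac_bits, samples):
    """Monte-Carlo estimate of the mean absolute error of y = a*b + c
    when each input signal is quantized to its assigned fractional bits."""
    wa, wb, wc = frac_bits
    total = 0.0
    for a, b, c in samples:
        exact = a * b + c
        approx = quantize(a, wa) * quantize(b, wb) + quantize(c, wc)
        total += abs(exact - approx)
    return total / len(samples)

def greedy_wordlength(samples, start_bits=16, min_bits=2, budget=1e-3):
    """Greedily lower one signal's word-length at a time for as long as
    the Monte-Carlo error estimate stays within the budget."""
    bits = [start_bits] * 3
    improved = True
    while improved:
        improved = False
        for i in range(3):
            if bits[i] > min_bits:
                trial = bits[:]
                trial[i] -= 1
                if output_error(trial, samples) <= budget:
                    bits = trial
                    improved = True
    return bits

rng = random.Random(0)
samples = [(rng.uniform(-1, 1), rng.uniform(-1, 1), rng.uniform(-1, 1))
           for _ in range(200)]
print(greedy_wordlength(samples))
```

Each accepted reduction requires a fresh Monte-Carlo error estimate, which is precisely why techniques that cut the cost of those estimates, like the interpolative and incremental methods described above, pay off.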

Relevance:

30.00%

Abstract:

Long-term potentiation (LTP) of excitatory transmission is an important candidate cellular mechanism for the storage of memories in the mammalian brain. The subcellular phenomena that underlie the persistent increase in synaptic strength, however, are incompletely understood. A potentially powerful method to detect a presynaptic increase in glutamate release is to examine the effect of LTP induction on the rate at which the use-dependent blocker MK-801 attenuates successive N-methyl-d-aspartic acid (NMDA) receptor-mediated synaptic signals. This method, however, has given apparently contradictory results when applied in hippocampal CA1. The inconsistency could be explained if NMDA receptors were opened by glutamate not only released from local presynaptic terminals, but also diffusing from synapses on neighboring cells where LTP was not induced. Here we examine the effect of pairing-induced LTP on the MK-801 blocking rate in two afferent inputs to dentate granule cells. LTP in the medial perforant path is associated with a significant increase in the MK-801 blocking rate, implying a presynaptic increase in glutamate release probability. An enhanced MK-801 blocking rate is not seen, however, in the lateral perforant path. This result still could be compatible with a presynaptic contribution to LTP in the lateral perforant path if intersynaptic cross-talk occurred. In support of this hypothesis, we show that NMDA receptors consistently sense more quanta of glutamate than do α-amino-3-hydroxy-5-methyl-4-isoxazolepropionic acid receptors. In the medial perforant path, in contrast, there is no significant difference in the number of quanta mediated by the two receptors. These results support a presynaptic contribution to LTP and imply that differences in intersynaptic cross-talk can complicate the interpretation of experiments designed to detect changes in transmitter release.

Relevance:

30.00%

Abstract:

We propose a general procedure for solving incomplete-data estimation problems. The procedure can be used to find the maximum likelihood estimate or to solve estimating equations in difficult cases such as estimation with the censored or truncated regression model, the nonlinear structural measurement error model, and the random effects model. The procedure is based on the general principle of stochastic approximation and the Markov chain Monte-Carlo method. Applying the theory of adaptive algorithms, we derive conditions under which the proposed procedure converges. Simulation studies also indicate that the proposed procedure consistently converges to the maximum likelihood estimate for the structural measurement error logistic regression model.
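As a minimal illustration of the stochastic-approximation principle the procedure builds on (the authors combine it with MCMC sampling of the missing data; this sketch uses directly observable draws instead), the Robbins-Monro iteration solves an estimating equation E[h(θ, Y)] = 0 using only noisy evaluations:

```python
import random

def robbins_monro(sample, h, theta0=0.0, n_iter=5000, a=1.0):
    """Robbins-Monro iteration for solving E[h(theta, Y)] = 0:

        theta_{k+1} = theta_k + a_k * h(theta_k, Y_k),   a_k = a / (k + 1),

    with Y_k drawn afresh from the data-generating distribution at each step."""
    theta = theta0
    for k in range(n_iter):
        y = sample()
        theta = theta + (a / (k + 1)) * h(theta, y)
    return theta

# Toy example: estimate the mean of N(2, 1) via h(theta, y) = y - theta,
# whose root E[Y] - theta = 0 is the true mean.
rng = random.Random(42)
estimate = robbins_monro(lambda: rng.gauss(2.0, 1.0),
                         lambda theta, y: y - theta)
print(round(estimate, 2))
```

With this gain sequence the iterate is exactly the running sample mean; the general theory allows far more complex h, such as score equations evaluated on imputed data.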

Relevance:

30.00%

Abstract:

A definite diagnosis of prion diseases such as Creutzfeldt–Jakob disease (CJD) relies on the detection of pathological prion protein (PrPSc). However, no test for PrPSc in cerebrospinal fluid (CSF) has been available thus far. Based on a setup for confocal dual-color fluorescence correlation spectroscopy, a technique suitable for single molecule detection, we developed a highly sensitive detection method for PrPSc. Pathological prion protein aggregates were labeled by specific antibody probes tagged with fluorescent dyes, resulting in intensely fluorescent targets, which were measured by dual-color fluorescence intensity distribution analysis in a confocal scanning setup. In a diagnostic model system, PrPSc aggregates were detected down to a concentration of 2 pM PrPSc, corresponding to an aggregate concentration of approximately 2 fM, which was more than one order of magnitude more sensitive than Western blot analysis. A PrPSc-specific signal could also be detected in a number of CSF samples from patients with CJD but not in control samples, providing the basis for a rapid and specific test for CJD and other prion diseases. Furthermore, this method could be adapted to the sensitive detection of other disease-associated amyloid aggregates such as in Alzheimer's disease.

Relevance:

30.00%

Abstract:

This paper is devoted to the quantification of the degree of nonlinearity of the relationship between two biological variables when one of the variables is a complex nonstationary oscillatory signal. An example of this situation is the indicial response of pulmonary blood pressure (P) to step changes of oxygen tension (ΔpO2) in the breathing gas. For a step change of ΔpO2 beginning at time t1, the pulmonary blood pressure is a nonlinear function of time and ΔpO2, which can be written as P(t-t1 | ΔpO2). No effective method exists to examine the nonlinear function P(t-t1 | ΔpO2), so a systematic approach is proposed here. The definitions of mean trends and of oscillations about the means are the keys; with these keys a practical method of calculation is devised. We fit the mean trends of blood pressure with analytic functions of time, whose nonlinearity with respect to the oxygen level is clarified here. The associated oscillations about the mean can be transformed into a Hilbert spectrum. An integration of the square of the Hilbert spectrum over frequency yields a measure of oscillatory energy, which is also a function of time and whose mean trends can be expressed by analytic functions. The degree of nonlinearity of the oscillatory energy with respect to the oxygen level is also clarified here. A theoretical extension of the experimental nonlinear indicial functions to an arbitrary history of hypoxia is proposed, and application of the results to tissue remodeling and tissue engineering of blood vessels is discussed.
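The oscillatory-energy step described above can be sketched numerically: integrating the squared Hilbert spectrum over frequency reduces, at each instant, to the squared envelope of the analytic signal. A minimal NumPy sketch (illustrative only; the authors' analysis additionally extracts mean trends and fits them with analytic functions of time):

```python
import numpy as np

def analytic_signal(x):
    """Analytic signal via the FFT: zero the negative frequencies and
    double the positive ones (the discrete Hilbert-transform construction)."""
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(X * h)

def oscillatory_energy(x):
    """Squared instantaneous amplitude |analytic signal|^2 as a function
    of time, i.e. the frequency-integrated squared Hilbert spectrum."""
    return np.abs(analytic_signal(x)) ** 2

t = np.linspace(0, 1, 1024, endpoint=False)
x = np.sin(2 * np.pi * 30 * t)          # pure tone: envelope should be ~1
energy = oscillatory_energy(x)
print(energy[100:-100].mean().round(3))
```

For a pure tone the energy is constant; for a nonstationary signal like the blood-pressure oscillations, its time-varying mean trend is what the paper models.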

Relevance:

30.00%

Abstract:

The analysis of heart rate variability (HRV) uses time series containing the distances between successive heartbeats in order to assess autonomic regulation of the cardiovascular system. These series are obtained from analysis of the electrocardiogram (ECG) signal, which can be affected by different types of artifacts, leading to incorrect interpretations of the HRV signals. The classic approach to dealing with these artifacts is the use of correction methods, some of them based on interpolation, substitution or statistical techniques. However, few studies show the accuracy and performance of these correction methods on real HRV signals. This study aims to determine the performance of some linear and nonlinear correction methods on HRV signals with induced artifacts, by quantification of their linear and nonlinear HRV parameters. As part of the methodology, ECG signals of rats measured by telemetry were used to generate real, error-free heart rate variability signals. Missing points (beats) were then simulated in these series, in different quantities, in order to emulate a real experimental situation as accurately as possible. In order to compare recovery efficiency, deletion (DEL), linear interpolation (LI), cubic spline interpolation (CI), moving average window (MAW) and nonlinear predictive interpolation (NPI) were used as correction methods for the series with induced artifacts. The accuracy of each correction method was assessed through the mean value of the series (AVNN), the standard deviation (SDNN), the root mean square of successive differences between heartbeats (RMSSD), Lomb's periodogram (LSP), detrended fluctuation analysis (DFA), multiscale entropy (MSE) and symbolic dynamics (SD), measured on each HRV signal with and without artifacts. The results show that, at low levels of missing points, the performance of all correction techniques is very similar, with very close values for each HRV parameter. However, at higher levels of losses, only the NPI method yields HRV parameters with low error values and few significant differences with respect to the values calculated for the same signals without missing points.
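A toy numerical illustration of the comparison above (synthetic data and simplified versions of the DEL and LI methods; the study itself uses telemetric rat ECG series and also evaluates CI, MAW and NPI): corrupt one interval of an RR series, then compare how deletion and linear interpolation restore RMSSD:

```python
import math

def rmssd(rr):
    """Root mean square of successive differences of an RR series (ms)."""
    diffs = [b - a for a, b in zip(rr, rr[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

def delete_beat(rr, i):
    """DEL: simply drop the corrupted interval."""
    return rr[:i] + rr[i + 1:]

def linear_interp_beat(rr, i):
    """LI: replace the corrupted interval by the mean of its neighbours."""
    fixed = rr[:]
    fixed[i] = 0.5 * (rr[i - 1] + rr[i + 1])
    return fixed

# Synthetic RR series (ms) with a mild oscillation, and one missed beat
# simulated as a doubled interval at index 10.
clean = [800 + 20 * math.sin(0.3 * k) for k in range(60)]
corrupted = clean[:]
corrupted[10] = corrupted[10] * 2          # missed-beat artifact

for name, series in [("none", corrupted),
                     ("DEL", delete_beat(corrupted, 10)),
                     ("LI", linear_interp_beat(corrupted, 10))]:
    print(f"{name:4s} RMSSD = {rmssd(series):8.2f}  (clean = {rmssd(clean):.2f})")
```

A single uncorrected artifact inflates RMSSD by an order of magnitude, while either correction brings it back near the clean value; the differences between methods only become decisive at higher loss levels, as the study reports.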

Relevance:

30.00%

Abstract:

The subject of this thesis is the real-time implementation of algebraic derivative estimators as observers in the nonlinear control of magnetic levitation systems. These estimators are based on operational calculus and are implemented as FIR filters, resulting in a feasible real-time implementation. The algebraic method provides fast, non-asymptotic state estimation. For magnetic levitation systems, the algebraic estimators may replace the standard asymptotic observers while ensuring very good performance and robustness. To validate the estimators as observers in closed-loop control, several nonlinear controllers are proposed and implemented on an experimental magnetic levitation prototype. The results show excellent performance of the proposed control laws together with the algebraic estimators.
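As a sketch of how a non-asymptotic derivative estimate reduces to an FIR filter over a finite window of samples (a simple least-squares slope here, not the thesis's operational-calculus estimator):

```python
def derivative_fir_taps(n, dt):
    """FIR taps returning the slope of the least-squares line through the
    last n uniformly spaced samples: a finite-window, non-asymptotic
    derivative estimate, in the same spirit as the algebraic estimators."""
    # Sample times relative to the newest sample: t_k = -(n-1-k) * dt.
    ts = [-(n - 1 - k) * dt for k in range(n)]
    t_mean = sum(ts) / n
    denom = sum((t - t_mean) ** 2 for t in ts)
    return [(t - t_mean) / denom for t in ts]

def estimate_derivative(samples, dt):
    """Apply the FIR taps to the latest window of samples."""
    taps = derivative_fir_taps(len(samples), dt)
    return sum(w * x for w, x in zip(taps, samples))

# On a noiseless ramp x(t) = 3t the estimate is exact:
dt = 0.01
ramp = [3.0 * k * dt for k in range(20)]
print(round(estimate_derivative(ramp, dt), 6))
```

Because the estimate only ever touches the last n samples, it has no asymptotic transient: after n samples it is fully converged, which is the property that lets such filters replace asymptotic observers.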

Relevance:

30.00%

Abstract:

The Municipality of Anchorage (MOA) is required to better manage, operate and control municipal solid waste (MSW) after the Anchorage Assembly instituted a Zero Waste Policy. Two household curbside recycling programs (CRPs), pay-as-you-throw (PAYT) and single-stream, were compared and evaluated to determine an optimal municipal solid waste diversion method for households within the MOA. The analyses find that: (1) a CRP must be designed from comprehensive analysis, models and data correlation that combine demographic and psychographic variables; and (2) CRPs can be easily adjusted towards community-specific goals using technology such as Geographic Information Systems (GIS) and Radio Frequency Identification (RFID). Combining the resources of policy-makers, businesses, and other viable actors is a necessary component of producing a sustainable, economically viable curbside recycling program.

Relevance:

30.00%

Abstract:

Information on crop phenology is essential for evaluating crop productivity. In previous work, we determined phenological stages from remote sensing data using a dynamic-system framework and an extended Kalman filter (EKF) approach. In this paper, we demonstrate that the particle filter is a more reliable method than the EKF for inferring any phenological stage, and we discuss the improvements achieved with this approach. In addition, the methodology enables the estimation of key cultivation dates, thus providing a practical product for many applications. The dates of some important stages, such as the sowing date and the day when the crop reaches the panicle initiation stage, have been chosen to show the potential of this technique.
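For readers unfamiliar with the machinery, a generic bootstrap particle filter for a scalar random-walk state observed in Gaussian noise can be sketched as follows (illustrative only: the model, noise levels and variable names are assumptions, not the paper's phenology model):

```python
import math
import random

def bootstrap_particle_filter(observations, n_particles=500,
                              proc_std=0.1, obs_std=0.5, seed=0):
    """Bootstrap particle filter for a scalar random-walk state
    x_k = x_{k-1} + process noise, observed as y_k = x_k + obs noise.
    Returns the posterior-mean state estimate at each step."""
    rng = random.Random(seed)
    particles = [rng.gauss(0.0, 1.0) for _ in range(n_particles)]
    estimates = []
    for y in observations:
        # Propagate each particle through the state transition.
        particles = [p + rng.gauss(0.0, proc_std) for p in particles]
        # Weight by the Gaussian observation likelihood.
        weights = [math.exp(-0.5 * ((y - p) / obs_std) ** 2)
                   for p in particles]
        total = sum(weights)
        weights = [w / total for w in weights]
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # Multinomial resampling to refocus particles on likely states.
        particles = rng.choices(particles, weights=weights, k=n_particles)
    return estimates

# Track a slowly drifting "stage" variable from noisy observations.
rng = random.Random(1)
truth = [0.02 * k for k in range(100)]
obs = [x + rng.gauss(0.0, 0.5) for x in truth]
est = bootstrap_particle_filter(obs)
print(round(est[-1], 2))
```

Unlike the EKF, nothing here requires linearizing the transition or observation models, which is what makes the particle filter attractive for strongly nonlinear phenology dynamics.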

Relevance:

30.00%

Abstract:

This article provides results guaranteeing that the optimal value of a given convex infinite optimization problem and that of its corresponding surrogate Lagrangian dual coincide, and that the primal optimal value is attainable. The conditions ensuring converse strong Lagrangian (in short, minsup) duality involve the weak inf-(local) compactness of suitable functions and the linearity or relative closedness of some sets depending on the data. Applications are given in different areas of convex optimization, including an extension of the Clark-Duffin Theorem for ordinary convex programs.
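In commonly used notation (a sketch; the symbols below are illustrative and not the article's exact setting), the convex infinite program, its Lagrangian dual, and the minsup form of strong duality read:

```latex
% (P): convex infinite program; the index set T may be infinite.
(P)\quad \inf_{x \in X} f(x) \quad \text{s.t.}\quad g_t(x) \le 0,\ t \in T
% (D): its Lagrangian dual, over multipliers with finite support.
(D)\quad \sup_{\lambda \ge 0}\ \inf_{x \in X}
  \Big( f(x) + \sum_{t \in T} \lambda_t\, g_t(x) \Big)
% Weak duality always gives v(D) <= v(P); converse strong (minsup)
% duality is the case of zero gap with the primal value attained:
\min_{x \in X}\ \sup_{\lambda \ge 0} L(x, \lambda)
  \;=\; \sup_{\lambda \ge 0}\ \inf_{x \in X} L(x, \lambda)
```

The compactness and closedness conditions mentioned above are what upgrade the always-valid weak inequality to this equality with primal attainment.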

Relevance:

30.00%

Abstract:

Dual-phase-lagging (DPL) models constitute a family of non-Fourier models of heat conduction that allow for the presence of time lags in the heat flux and the temperature gradient. These lags may need to be considered when modeling microscale heat transfer, and DPL models have thus found application in recent years in a wide range of theoretical and technical heat transfer problems. Consequently, analytical solutions and methods for computing numerical approximations have been proposed for particular DPL models in different settings. In this work, a compact difference scheme for second-order DPL models is developed, providing higher-order precision than a previously proposed method. The scheme is shown to be unconditionally stable and convergent, and its accuracy is illustrated with numerical examples.
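For reference, the lagging constitutive law behind DPL models can be written in its commonly quoted form (shown here first order in the lags; the second-order models treated in the article retain one further Taylor term in each lag):

```latex
% Lagging constitutive law: the flux at time t + \tau_q responds to the
% temperature gradient at time t + \tau_T.
\mathbf{q}(\mathbf{x},\, t + \tau_q) = -k\, \nabla T(\mathbf{x},\, t + \tau_T)
% First-order Taylor expansion in both lags:
\mathbf{q} + \tau_q \frac{\partial \mathbf{q}}{\partial t}
  = -k \left( \nabla T + \tau_T\, \frac{\partial}{\partial t} \nabla T \right)
% Combined with the energy balance \rho c\, \partial T/\partial t
% = -\nabla \cdot \mathbf{q} (no heat sources), this yields
\frac{\partial T}{\partial t} + \tau_q \frac{\partial^2 T}{\partial t^2}
  = \alpha \left( \Delta T + \tau_T\, \frac{\partial}{\partial t} \Delta T \right),
  \qquad \alpha = \frac{k}{\rho c}
```

Setting both lags to zero recovers the classical Fourier heat equation, which is the sense in which DPL models are "non-Fourier".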

Relevance:

30.00%

Abstract:

Thesis--University of Illinois at Urbana-Champaign.

Relevance:

30.00%

Abstract:

Photocopy of original: Berkeley : Structural Engineering Laboratory, University of California, 1974.

Relevance:

30.00%

Abstract:

This paper investigates the nonlinear vibration of imperfect shear-deformable laminated rectangular plates comprising a homogeneous substrate and two layers of functionally graded materials (FGMs). A theoretical formulation based on Reddy's higher-order shear deformation plate theory is presented in terms of the deflection, the mid-plane rotations, and the stress function. A semi-analytical method, which makes use of the one-dimensional differential quadrature method, the Galerkin technique, and an iteration process, is used to obtain the vibration frequencies for plates with various boundary conditions. Material properties are assumed to be temperature-dependent. Special attention is given to the effects of sine-type, localized, and global imperfections on linear and nonlinear vibration behavior. Numerical results are presented in both dimensionless tabular and graphical forms for laminated plates with graded silicon nitride/stainless steel layers. It is shown that the vibration frequencies depend strongly on the vibration amplitude and on the imperfection mode and its magnitude. While most of the imperfect laminated plates show the well-known hard-spring vibration, those with free edges can display soft-spring vibration behavior at certain imperfection levels. The influences of material composition, the temperature dependence of material properties, and the side-to-thickness ratio are also discussed. (C) 2004 Elsevier Ltd. All rights reserved.