942 results for Nonlinear Static Analysis
Abstract:
The purpose of this work is to propose a structure for simulating power systems using behavioral models of nonlinear DC to DC converters implemented through a look-up table of gains. This structure is especially designed for converters whose output impedance depends on the load current level, e.g. quasi-resonant converters. The proposed model is a generic one whose parameters can be obtained by directly measuring the transient response at different operating points. It also includes optional functionalities for modeling converters with current limitation and current-sharing characteristics when operated in parallel. The proposed structure also allows including additional characteristics of the DC to DC converter, such as the efficiency as a function of the input voltage and the output current, or overvoltage and undervoltage protections. In addition, the proposed model is valid for overdamped and underdamped situations.
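As a rough illustration of the kind of model described (all component values below are hypothetical, not taken from the paper), a gain/impedance look-up table indexed by load current can be interpolated at each operating point:

```python
import numpy as np

# Minimal sketch of a look-up-table-based behavioral converter model:
# DC gain and output impedance are stored per load-current operating
# point, as would be measured from transient responses.
lut_current = np.array([0.5, 1.0, 2.0, 4.0])      # load current levels [A]
lut_gain    = np.array([0.98, 0.96, 0.93, 0.88])  # measured DC gains
lut_zout    = np.array([0.05, 0.08, 0.14, 0.25])  # output impedance [ohm]

def converter_output(v_in, i_load, i_limit=5.0):
    """Static output voltage of the behavioral model at one operating point."""
    i = min(i_load, i_limit)                       # optional current limitation
    gain = np.interp(i, lut_current, lut_gain)     # interpolate between
    zout = np.interp(i, lut_current, lut_zout)     # measured points
    return gain * v_in - zout * i

print(converter_output(v_in=12.0, i_load=1.5))
```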
Abstract:
Earthquakes constitute one of the most important sources of dynamic loads acting on structures and foundations. When an earthquake occurs, the liberated energy generates seismic waves that can give rise to structural vibrations, settlements of building foundations, pressures on retaining walls, and possible sliding, uplifting or even overturning of structures. The soil can also liquefy, losing its capacity of support. The study of the effects of earthquakes on structures involves, because of its soil-structure interaction character, diverse disciplines such as Structural Analysis, Soil Mechanics and Earthquake Engineering. Some aspects that have been the subject of limited research in relation to the behavior of structures subjected to earthquakes are the effects of nonlinear soil behavior and geometric nonlinearities such as sliding and uplifting of foundations. This Thesis starts with the study of the seismic pressures and potential displacements of retaining walls, comparing the predictions of two types of formulations and assessing their range of applicability and limitations: pseudo-static methods as proposed by Mononobe-Okabe (1929), with the contribution of Whitman-Liao (1985), and analytical formulations such as the one developed by Veletsos and Younan (1994) for rigid walls.
The Thesis deals next with the effects of nonlinear soil behavior on the dynamic stiffness of circular mat foundations, like those of the chimney of a Thermal Power Station or the reactor building of a Nuclear Power Plant, as a function of frequency and level of forces. Finally, the seismic response of these two structures, accounting for the potential sliding and uplifting of the foundation under a given earthquake, is studied, following an approach suggested by Wolf (1988). In order to carry out these studies a number of special-purpose computer programs were developed (MUROSIS, VELETSOS, INTESES and SEPARSE); their listings and details are included in the Appendices. The conclusions derived from these studies and recommendations for future work are presented in Chapter 6.
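A minimal sketch of the pseudo-static Mononobe-Okabe calculation referred to above, using the standard expression for the active seismic earth-pressure coefficient (the parameter values in the example are hypothetical):

```python
from math import atan, cos, sin, sqrt, radians

def mononobe_okabe_kae(phi, delta, beta, i, kh, kv=0.0):
    """Mononobe-Okabe active seismic earth-pressure coefficient K_AE.
    phi: soil friction angle, delta: wall-soil friction angle,
    beta: wall batter from vertical, i: backfill slope (all in degrees);
    kh, kv: horizontal/vertical seismic coefficients."""
    p, d, b, s = map(radians, (phi, delta, beta, i))
    t = atan(kh / (1.0 - kv))                      # seismic inertia angle
    num = cos(p - t - b) ** 2
    den = (cos(t) * cos(b) ** 2 * cos(d + b + t) *
           (1 + sqrt(sin(p + d) * sin(p - t - s) /
                     (cos(d + b + t) * cos(s - b)))) ** 2)
    return num / den

# Example: phi = 35 deg, delta = 17.5 deg, vertical wall, level backfill, kh = 0.2
print(mononobe_okabe_kae(35, 17.5, 0, 0, 0.2))
```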
Abstract:
Static analyses of object-oriented programs usually rely on intermediate representations that respect the original semantics while having a more uniform and basic syntax. Most of the work involving object-oriented languages and abstract interpretation either omits the description of that intermediate language or just refers to the Control Flow Graph (CFG) it represents. However, this lack of formalization results, on one hand, in an absence of assurances regarding the correctness of the transformation and, on the other, typically couples the analysis strongly to the source language. In this work we present a framework for the analysis of object-oriented languages in which, in a first phase, we transform the input program into a representation based on Horn clauses. This allows, on one hand, proving the transformation correct by attending to a simple condition and, on the other, applying an existing analyzer for (constraint) logic programming to automatically derive a safe approximation of the semantics of the original program. The approach is flexible in the sense that the first phase decouples the analyzer from most language-dependent features, and correct because the set of Horn clauses returned by the transformation phase safely approximates the standard semantics of the input program. The resulting analysis is also reasonably scalable due to the use of mature, modular (C)LP-based analyzers. The overall approach allows us to report results for medium-sized programs.
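To give the flavor of a Horn-clause representation, here is a minimal, purely illustrative sketch (the encoding is hypothetical, not the paper's actual transformation): a straight-line method is encoded as propositional Horn clauses, and a naive bottom-up fixpoint derives the set of reachable program points, a safe over-approximation of the concrete semantics:

```python
# Each clause is (head, body): head holds if all body atoms hold.
clauses = [
    (("reach", "entry"), []),                       # fact: entry is reachable
    (("reach", "assign_x"), [("reach", "entry")]),  # x := ...
    (("reach", "call_m"), [("reach", "assign_x")]), # o.m(...)
    (("reach", "exit"), [("reach", "call_m")]),
]

def fixpoint(clauses):
    """Naive bottom-up evaluation: iterate until no new fact is derivable."""
    facts = set()
    changed = True
    while changed:
        changed = False
        for head, body in clauses:
            if all(b in facts for b in body) and head not in facts:
                facts.add(head)
                changed = True
    return facts

print(fixpoint(clauses))   # all four program points are reachable
```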
Abstract:
We propose a general framework for assertion-based debugging of constraint logic programs. Assertions are linguistic constructions for expressing properties of programs. We define several assertion schemas for writing (partial) specifications for constraint logic programs using quite general properties, including user-defined programs. The framework is aimed at detecting deviations of the program behavior (symptoms) with respect to the given assertions, either at compile-time (i.e., statically) or run-time (i.e., dynamically). We provide techniques for using information from global analysis both to detect at compile-time assertions which do not hold in at least one of the possible executions (i.e., static symptoms) and assertions which hold for all possible executions (i.e., statically proved assertions). We also provide program transformations which introduce tests in the program for checking at run-time those assertions whose status cannot be determined at compile-time. Both the static and the dynamic checking are provably safe in the sense that all errors flagged are definite violations of the specifications. Finally, we report briefly on the currently implemented instances of the generic framework.
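The run-time checking transformation can be pictured with a small sketch (in Python rather than a CLP language, and with hypothetical names): assertions left undecided at compile time become pre/post tests wrapping the original code, so that any error flagged is a definite violation:

```python
def check(pre, post):
    """Wrap a function with run-time pre/postcondition tests."""
    def transform(f):
        def wrapped(*args):
            if not pre(*args):
                raise AssertionError(f"precondition violated: {args}")
            result = f(*args)
            if not post(*args, result):
                raise AssertionError(f"postcondition violated: {result}")
            return result
        return wrapped
    return transform

@check(pre=lambda x: x >= 0,
       post=lambda x, r: abs(r * r - x) < 1e-9)
def sqrt_approx(x):
    return x ** 0.5

sqrt_approx(4.0)     # passes; sqrt_approx(-1.0) would flag a definite violation
```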
Abstract:
We present a method for the static resource usage analysis of MiniZinc models. The analysis can infer upper bounds on the usage that a MiniZinc model will make of some resources such as the number of constraints of a given type (equality, disequality, global constraints, etc.), the number of variables (search variables or temporary variables), or the size of the expressions before calling the solver. These bounds are obtained from the models independently of the concrete input data (the instance data) and are in general functions of sizes of such data. In our approach, MiniZinc models are translated into Ciao programs which are then analysed by the CiaoPP system. CiaoPP includes a parametric analysis framework for resource usage in which the user can define resources and express the resource usage of library procedures (and certain program constructs) by means of a language of assertions. We present the approach and report on a preliminary implementation, which shows its feasibility and provides encouraging results.
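As an example of the kind of bound such an analysis infers (the model is hypothetical, not one from the paper): a MiniZinc model posting an all_different over n search variables, decomposed into pairwise disequalities, admits the closed-form bound below, a function of the instance size n rather than of concrete instance data:

```python
def disequality_bound(n):
    """Upper bound on the number of '!=' constraints generated when an
    all_different over n variables is decomposed pairwise: n*(n-1)/2."""
    return n * (n - 1) // 2

for n in (10, 100, 1000):
    print(n, disequality_bound(n))
```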
Abstract:
Automatic cost analysis of programs has been traditionally concentrated on a reduced number of resources such as execution steps, time, or memory. However, the increasing relevance of analysis applications such as static debugging and/or certification of user-level properties (including for mobile code) makes it interesting to develop analyses for resource notions that are actually application-dependent. This may include, for example, bytes sent or received by an application, number of files left open, number of SMSs sent or received, number of accesses to a database, money spent, energy consumption, etc. We present a fully automated analysis for inferring upper bounds on the usage that a Java bytecode program makes of a set of application programmer-definable resources. In our context, a resource is defined by programmer-provided annotations which state the basic consumption that certain program elements make of that resource. From these definitions our analysis derives functions which return an upper bound on the usage that the whole program (and individual blocks) make of that resource for any given set of input data sizes. The analysis proposed is independent of the particular resource. We also present some experimental results from a prototype implementation of the approach covering a significant set of interesting resources.
Abstract:
Automatic cost analysis of programs has been traditionally studied in terms of a number of concrete, predefined resources such as execution steps, time, or memory. However, the increasing relevance of analysis applications such as static debugging and/or certification of user-level properties (including for mobile code) makes it interesting to develop analyses for resource notions that are actually application-dependent. This may include, for example, bytes sent or received by an application, number of files left open, number of SMSs sent or received, number of accesses to a database, money spent, energy consumption, etc. We present a fully automated analysis for inferring upper bounds on the usage that a Java bytecode program makes of a set of application programmer-definable resources. In our context, a resource is defined by programmer-provided annotations which state the basic consumption that certain program elements make of that resource. From these definitions our analysis derives functions which return an upper bound on the usage that the whole program (and individual blocks) make of that resource for any given set of input data sizes. The analysis proposed is independent of the particular resource. We also present some experimental results from a prototype implementation of the approach covering an ample set of interesting resources.
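A minimal sketch of the idea of programmer-definable resources (the annotation API and numbers are hypothetical, not the paper's): annotations state the basic consumption of primitive operations, and the analysis composes them into an upper-bound function of the input data sizes:

```python
# Programmer-provided annotations: basic consumption per program element,
# here measured in the user-defined resource "SMSs sent".
basic_cost = {"send_sms": 1, "open_file": 0}

def sms_bound(n_recipients):
    """Derived closed-form bound for a method that sends one SMS per
    recipient inside a single loop: cost(n) = n * cost(send_sms)."""
    return n_recipients * basic_cost["send_sms"]

print(sms_bound(25))   # at most 25 SMSs sent for an input of 25 recipients
```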
Abstract:
After the experience gained during the past years, it seems clear that nonlinear analysis of bridges is very important to compute ductility demands and to localize potential hinges. This is especially true for irregular bridges, in which it is not clear whether or not it is possible to use a linear computation followed by a correction using a behaviour factor. To simplify the numerical effort, several approximate methods have been proposed. Among them, the so-called Dynamic Plastic Hinge Method, in which an evolutionary shape function is used to reduce the structure to a single-degree-of-freedom system, seems to maintain a good balance between accuracy and simplicity. This paper presents results obtained in a parametric study conducted under the auspices of the PREC-8 European research program.
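As a rough sketch of the reduction to a single degree of freedom (using a fixed shape vector for simplicity, whereas the Dynamic Plastic Hinge Method lets the shape evolve; all numbers are hypothetical):

```python
import numpy as np

# Lumped masses and an assumed displacement shape for a 3-DOF bridge model.
m   = np.array([200e3, 180e3, 160e3])      # masses [kg]
phi = np.array([0.35, 0.70, 1.00])         # shape, normalized at control node

gamma  = (m @ phi) / (m @ phi**2)          # modal participation factor
m_star = m @ phi                           # equivalent SDOF mass [kg]

# A point (V, d_ctrl) of the pushover curve maps to the SDOF system as:
V, d_ctrl = 2.5e6, 0.12                    # base shear [N], control disp. [m]
f_star, d_star = V / gamma, d_ctrl / gamma
print(gamma, m_star, f_star, d_star)
```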
Abstract:
Algorithms for distributed agreement are a powerful means for formulating distributed versions of existing centralized algorithms. We present a toolkit for this task and show how it can be used systematically to design fully distributed algorithms for static linear Gaussian models, including principal component analysis, factor analysis, and probabilistic principal component analysis. These algorithms do not rely on a fusion center, require only low-volume local (1-hop neighborhood) communications, and are thus efficient, scalable, and robust. We show how they are also guaranteed to asymptotically converge to the same solution as the corresponding existing centralized algorithms. Finally, we illustrate the functioning of our algorithms on two examples, and examine the inherent cost-performance tradeoff.
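The core of such distributed agreement can be illustrated with a minimal averaging-consensus sketch (graph, weights and values are hypothetical): each node repeatedly replaces its value with a weighted average over its 1-hop neighborhood, and with doubly stochastic weights all nodes converge to the global mean without any fusion center:

```python
import numpy as np

# Doubly stochastic weight matrix for a ring of 4 nodes (only neighbors
# exchange values; zeros mark non-adjacent pairs).
W = np.array([[0.50, 0.25, 0.00, 0.25],
              [0.25, 0.50, 0.25, 0.00],
              [0.00, 0.25, 0.50, 0.25],
              [0.25, 0.00, 0.25, 0.50]])
x = np.array([1.0, 5.0, 3.0, 7.0])         # local measurements

for _ in range(100):
    x = W @ x                              # one round of 1-hop communication
print(x, x.mean())                         # all entries -> global mean 4.0
```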
Abstract:
The study of lateral dynamics of running trains on bridges is of importance mainly for the safety of the traffic, and may be relevant for laterally compliant bridges. These studies require 3D coupled vehicle-bridge models and consideration of wheel-rail contact, a phenomenon which is complex and costly to model in detail. We present here a fully nonlinear coupled model, defined in absolute coordinates and incorporated into a commercial finite element framework. Two applications are presented: firstly, a vehicle subject to a strong wind gust traversing a bridge, showing the relevance of the nonlinear wheel-rail contact model as well as the interaction between bridge and vehicle; secondly, a real viaduct in a high-speed line, with a long continuous deck and tall piers with high lateral compliance. The results show the safety of the traffic as well as the relevance of considering the wind action and the nonlinear response.
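One common way to model the nonlinear normal component of wheel-rail contact (not necessarily the formulation used in the paper) is a Hertzian force-penetration law; a minimal sketch with a hypothetical stiffness constant:

```python
# Hertzian-type nonlinear contact: F = C * delta^(3/2), with contact lost
# (zero force) on separation. C is a hypothetical wheel-rail value.
C_HERTZ = 1.0e11   # [N/m^1.5]

def normal_contact_force(delta):
    """Normal contact force [N] for penetration delta [m]."""
    return C_HERTZ * delta ** 1.5 if delta > 0.0 else 0.0

print(normal_contact_force(1e-4))   # ~1e5 N, a plausible wheel load
```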
Abstract:
In this paper fault detection and isolation (FDI) schemes are applied in the context of the surveillance of emerging faults in an electrical circuit. The FDI problem is studied on a noisy nonlinear circuit, where both abrupt and incipient faults in the voltage source are considered. A rigorous analysis of fault detectability precedes the application of the fault detection (FD) scheme; then, the fault isolation (FI) phase is accomplished with two alternative FI approaches, proposed as new extensions of that FD approach. Numerical simulations illustrate the applicability of the mentioned schemes.
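A minimal residual-based fault detection sketch in the spirit of the FD scheme (signal model, noise level and threshold are hypothetical): the residual compares the measured source voltage against its nominal value, and an incipient (slowly growing) fault is flagged once the residual exceeds a threshold set above the noise:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)
v_nominal = 5.0 * np.ones_like(t)
v_meas = v_nominal + 0.05 * rng.standard_normal(t.size)   # sensor noise
v_meas[t > 6.0] += 0.1 * (t[t > 6.0] - 6.0)               # incipient ramp fault

residual = np.abs(v_meas - v_nominal)
threshold = 0.2                      # well above the 0.05 noise sigma
alarm = residual > threshold
print("fault detected at t =", t[alarm][0] if alarm.any() else None)
```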
Abstract:
A seismic hazard study of the island of “La Hispaniola” is presented in connection with the land tenure situation in the region, in order to define high-risk priority areas for which some land management recommendations are proposed. The seismic hazard assessment has been carried out following the probabilistic method with a seismogenic zonation and including the major faults of the region as independent units. In order to identify the priority areas, besides the seismic hazard study, the map of changes of static Coulomb failure stress and the landslide hazard map have been taken into account.
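The probabilistic hazard statement behind such an assessment is usually Poissonian; a minimal sketch of the standard exceedance-probability relation (the numbers in the example are the conventional 475-year/50-year case, not results from this study):

```python
from math import exp

def exceedance_probability(lam, t_years):
    """Probability of at least one exceedance in t_years, for a ground
    motion with annual exceedance rate lam (Poisson assumption)."""
    return 1.0 - exp(-lam * t_years)

# 475-year return period motion over a 50-year exposure: ~10 %
print(exceedance_probability(1.0 / 475.0, 50.0))
```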
Abstract:
The analysis of complex nonlinear systems is often carried out using simpler piecewise linear representations of them. A principled and practical technique is proposed to linearize and evaluate arbitrary continuous nonlinear functions using polygonal (continuous piecewise linear) models under the L1 norm. A thorough error analysis is developed to guide an optimal design of two kinds of polygonal approximations in the asymptotic case of a large budget of evaluation subintervals N. The method allows the user to obtain the level of linearization (N) for a target approximation error and vice versa. It is suitable for, but not limited to, an efficient implementation in modern Graphics Processing Units (GPUs), allowing real-time performance of computationally demanding applications. The quality and efficiency of the technique have been measured in detail on two nonlinear functions that are widely used in many areas of scientific computing and are expensive to evaluate.
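A minimal sketch of a polygonal approximation and its L1 error (the interpolant shown is the simplest uniform-knot variant, chosen for illustration; N and the test function are arbitrary):

```python
import numpy as np

f = np.exp                                  # a nonlinear function to linearize
a, b, N = 0.0, 1.0, 16                      # domain and subinterval budget

knots = np.linspace(a, b, N + 1)
x = np.linspace(a, b, 100001)               # dense evaluation grid
approx = np.interp(x, knots, f(knots))      # continuous piecewise-linear model

l1_error = np.trapz(np.abs(f(x) - approx), x)
print(f"L1 error with N={N}: {l1_error:.2e}")  # shrinks roughly as 1/N^2
```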
Abstract:
This paper presents a numerical implementation of the cohesive crack model for the analysis of quasibrittle materials, based on the strong discontinuity approach in the framework of the finite element method. A simple central force model is used for the stress versus crack opening curve. The additional degrees of freedom defining the crack opening are determined at the crack level, thus avoiding the need for performing a static condensation at the element level. The need for a tracking algorithm is avoided by using a consistent procedure for the selection of the separated nodes. The model, which takes into account the anisotropy of the material, is then implemented into a commercial program by means of a user subroutine and contrasted with experimental results. Numerical simulations of well-known experiments are presented to show the ability of the proposed model to simulate the fracture of quasibrittle materials such as mortar, concrete and masonry.
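A minimal sketch of a stress versus crack opening (traction-separation) law of the kind such models use; the linear softening shape and the material constants below are assumptions for illustration, not the paper's calibration:

```python
F_T = 3.0e6    # tensile strength [Pa] (hypothetical, mortar-like)
W_C = 1.2e-4   # critical crack opening [m] (hypothetical)

def cohesive_traction(w):
    """Normal traction [Pa] transmitted across a crack of opening w [m]:
    full strength before opening, linear softening down to zero at W_C."""
    if w <= 0.0:
        return F_T
    return max(F_T * (1.0 - w / W_C), 0.0)

print(cohesive_traction(6e-5))   # half of W_C -> half the tensile strength
```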
Abstract:
Nitrous oxide emissions from a network of agricultural experiments in Europe were used to explore the relative importance of site and management controls of emissions. At each site, a selection of management interventions were compared within replicated experimental designs in plot-based experiments. Arable experiments were conducted at Beano in Italy, El Encin in Spain, Foulum in Denmark, Logarden in Sweden, Maulde in Belgium, Paulinenaue in Germany, and Tulloch in the UK. Grassland experiments were conducted at Crichton, Nafferton and Peaknaze in the UK, Godollo in Hungary, Rzecin in Poland, Zarnekow in Germany and Theix in France. Nitrous oxide emissions were measured at each site over a period of at least two years using static chambers. Emissions varied widely between sites and as a result of manipulation treatments. Average site emissions (throughout the study period) varied between 0.04 and 21.21 kg N2O-N ha−1 yr−1, with the largest fluxes and variability associated with the grassland sites. Total nitrogen addition was found to be the single most important determinant of emissions, accounting for 15 % of the variance (using linear regression) in the data from the arable sites (p<0.0001), and 77 % in the grassland sites. The annual emissions from arable sites were significantly greater than those that would be predicted by IPCC default emission factors. Variability of N2O emissions within sites that occurred as a result of manipulation treatments was greater than that resulting from site-to-site and year-to-year variation, highlighting the importance of management interventions in contributing to greenhouse gas mitigation.
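The variance-explained figures quoted above come from linear regression; a minimal sketch of that computation (the data below are synthetic stand-ins, not the network measurements):

```python
import numpy as np

n_added = np.array([0.0, 50.0, 100.0, 150.0, 200.0, 250.0])  # kg N ha-1 yr-1
n2o     = np.array([0.3, 0.9, 1.4, 2.6, 3.1, 4.4])           # kg N2O-N ha-1 yr-1

slope, intercept = np.polyfit(n_added, n2o, 1)               # linear regression
pred = slope * n_added + intercept
r2 = 1.0 - np.sum((n2o - pred) ** 2) / np.sum((n2o - n2o.mean()) ** 2)
print(f"R^2 = {r2:.2f}")   # fraction of variance explained by N addition
```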