990 results for Numerical Problems
Abstract:
The goal of this simulation thesis is to present a tool for studying and eliminating various numerical problems observed when analyzing the behavior of a MIND cable during fast voltage polarity reversal. The tool is built in the MATLAB environment, where several simulations were run to achieve oscillation-free results. This thesis adds to earlier research on HVDC cables subjected to polarity reversals. Initially, the code performs numerical simulations to analyze the electric field and charge density behavior of a MIND cable in specific scenarios: before, during, and after polarity reversal. The primary goal, however, is to remove numerical oscillations from the charge density profile. The generated code is notable for its use of the Arithmetic Mean Approach and the Non-Uniform Field Approach for filtering and minimizing oscillations even under time and temperature variations.
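The abstract does not spell out the Arithmetic Mean Approach, but if it amounts to replacing each grid value by a local average, a minimal sketch might look like the following (Python standing in for the thesis's MATLAB; the profile data are invented for illustration):

```python
import numpy as np

def arithmetic_mean_filter(rho, passes=1):
    """Damp grid-scale oscillations in a 1-D charge-density profile by
    replacing each interior node with the arithmetic mean of itself
    and its two neighbours (endpoints are left untouched)."""
    rho = np.asarray(rho, dtype=float)
    for _ in range(passes):
        out = rho.copy()
        out[1:-1] = (rho[:-2] + rho[1:-1] + rho[2:]) / 3.0
        rho = out
    return rho

# Illustrative profile: a smooth trend plus a spurious +/- oscillation
# of the kind described near a polarity reversal (data invented here)
x = np.linspace(0.0, 1.0, 101)
rho = x + 0.1 * (-1.0) ** np.arange(101)
filtered = arithmetic_mean_filter(rho, passes=5)
```

Each pass multiplies the amplitude of the alternating (grid-frequency) component by 1/3 in the interior while leaving smooth trends almost unchanged, which is why a few passes suffice.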
Abstract:
We derive a new class of iterative schemes for accelerating the convergence of the EM algorithm by exploiting the connection between fixed point iterations and extrapolation methods. First, we present a general formulation of one-step iterative schemes, which are obtained by cycling with the extrapolation methods. We then square the one-step schemes to obtain the new class of methods, which we call SQUAREM. Squaring a one-step iterative scheme simply means applying it twice within each cycle of the extrapolation method. Here we focus on first-order, or rank-one, extrapolation methods for two reasons: (1) simplicity and (2) computational efficiency. In particular, we study two first-order extrapolation methods, the reduced rank extrapolation (RRE1) and minimal polynomial extrapolation (MPE1). The convergence of the new schemes, both one-step and squared, is non-monotonic with respect to the residual norm. The first-order one-step and SQUAREM schemes are linearly convergent, like the EM algorithm, but with a faster rate of convergence. We demonstrate, through five different examples, the effectiveness of the first-order SQUAREM schemes, SqRRE1 and SqMPE1, in accelerating the EM algorithm. The SQUAREM schemes are also shown to be vastly superior to their one-step counterparts, RRE1 and MPE1, in terms of computational efficiency. The proposed extrapolation schemes can fail due to the numerical problems of stagnation and near breakdown. We have developed a new hybrid iterative scheme that combines the RRE1 and MPE1 schemes in such a manner that it overcomes both stagnation and near breakdown. The squared first-order hybrid scheme, SqHyb1, emerges as the iterative scheme of choice based on our numerical experiments: it combines the fast convergence of SqMPE1, while avoiding near breakdowns, with the stability of SqRRE1, while avoiding stagnations. The SQUAREM methods can be incorporated very easily into an existing EM algorithm.
They require only the basic EM step for their implementation and no other auxiliary quantities such as the complete-data log-likelihood or its gradient or Hessian. They are an attractive option in problems with a very large number of parameters, and in problems where the statistical model is complex, the EM algorithm is slow, and each EM step is computationally demanding.
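As a concrete illustration of "squaring" a one-step scheme, here is a hedged sketch of a first-order SQUAREM iteration. The EM map is stood in for by a generic contraction F (here cos); the steplength used below is the squared-scheme choice alpha = -||r||/||v||, and the RRE1/MPE1 variants described above differ only in how alpha is formed from r and v:

```python
import numpy as np

def squarem(F, x0, tol=1e-10, max_iter=100):
    """First-order SQUAREM acceleration of a fixed-point map F
    (a sketch, not the dissertation's exact algorithm).  Each cycle
    takes two basic steps, extrapolates with a scalar steplength,
    and applies one more basic step for stability."""
    x = np.atleast_1d(np.asarray(x0, dtype=float))
    for _ in range(max_iter):
        x1 = F(x)                      # first basic (EM-like) step
        x2 = F(x1)                     # second basic step
        r = x1 - x                     # first difference
        v = (x2 - x1) - r              # second difference
        if np.linalg.norm(r) < tol:
            return x1
        if np.linalg.norm(v) < tol:
            return x2
        alpha = -np.linalg.norm(r) / np.linalg.norm(v)
        x_new = x - 2.0 * alpha * r + alpha**2 * v   # squared update
        x_new = F(x_new)               # stabilising extra map step
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

# Stand-in fixed-point problem: x = cos(x) (EM is such a map)
root = squarem(np.cos, np.array([1.0]))   # ~0.739085
```

Only F itself is needed, which mirrors the point above: no likelihood, gradient, or Hessian enters the update.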
Abstract:
Mihail Konstantinov, Vesela Pasheva, Petko Petkov - Several numerical problems arising when the computer system MATLAB is used in teaching are examined: evaluation of trigonometric functions, raising a matrix to a power, spectral analysis of low-order integer matrices, and computation of the roots of algebraic equations. The causes of the numerical difficulties encountered can be explained by the peculiarities of the binary floating-point arithmetic used.
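Two of the listed pitfalls are easy to reproduce in any binary floating-point environment; the sketch below uses Python/NumPy standing in for MATLAB, which uses the same IEEE 754 double-precision arithmetic:

```python
import numpy as np

# sin(pi) is not exactly zero: the stored "pi" is already rounded to
# 53 binary digits, so sin() is evaluated slightly off the true pi
residual = np.sin(np.pi)          # ~1.22e-16 rather than 0.0

# Roots of (x - 1)^3 = x^3 - 3x^2 + 3x - 1: a triple root amplifies
# coefficient rounding of order eps into root errors of order eps**(1/3)
roots = np.roots([1.0, -3.0, 3.0, -1.0])
root_error = np.max(np.abs(roots - 1.0))   # orders of magnitude above eps
```

The second example is the generic ill-conditioning of multiple roots; the specific matrix-power and integer-matrix cases discussed by the authors are not reproduced here.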
Abstract:
Steel fiber reinforced concrete (SFRC) is widely applied in the construction industry. Numerical elastoplastic analysis of its macroscopic behavior is complex, typically involving a piecewise linear failure curve with corner singularities. This paper presents a single smooth biaxial failure curve for SFRC based on a semianalytical approximation. Convexity of the proposed model is guaranteed, so numerical problems are avoided. The model has sufficient flexibility to closely match experimental results. The failure curve is also suitable for modeling plain concrete under biaxial loading. Since this model is capable of simulating the failure states in all stress regimes with a single envelope, the elastoplastic formulation is very concise and simple. A finite element implementation is developed to demonstrate the conciseness and effectiveness of the model. The computed results display good agreement with published experimental data.
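The paper's semianalytical curve is not given in the abstract, but as a generic illustration of the idea of replacing a piecewise-linear failure criterion (a max of linear functions, with corners) by a single smooth convex envelope, one standard device is log-sum-exp smoothing; the planes below are hypothetical, not the SFRC model:

```python
import numpy as np

def piecewise_failure(s1, s2, planes):
    """Piecewise-linear criterion f = max_i (a_i*s1 + b_i*s2 - c_i);
    f = 0 is the failure envelope, with corners where planes meet."""
    return np.max([a * s1 + b * s2 - c for a, b, c in planes], axis=0)

def smooth_failure(s1, s2, planes, rho=50.0):
    """Log-sum-exp smoothing of the same criterion: smooth, convex in
    (s1, s2), and everywhere within log(n)/rho of the piecewise form."""
    vals = np.array([a * s1 + b * s2 - c for a, b, c in planes])
    m = vals.max(axis=0)
    return m + np.log(np.exp(rho * (vals - m)).sum(axis=0)) / rho

# Hypothetical failure planes (not the paper's SFRC parameters)
planes = [(1.0, 0.2, 1.0), (0.2, 1.0, 1.0), (-1.0, -1.0, 1.5)]
```

Convexity is inherited automatically because log-sum-exp of affine functions is convex, which is exactly the property the abstract highlights as preventing numerical problems in the return-mapping.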
Abstract:
Research report based on a stay at the Aerospace Computational Design Laboratory at the Massachusetts Institute of Technology (MIT), United States, between November 2006 and August 2007. Aerodynamics is a branch of fluid dynamics concerned with the study of the motion of liquids and gases, whose main goal is to predict the aerodynamic forces on an aircraft or any other type of vehicle, including automobiles. The Navier-Stokes equations represent a dynamic equilibrium state of the forces acting on any given region of the fluid. They are one of the most useful systems of equations because they describe the physics of a great number of phenomena, such as ocean currents, flows around an airfoil, etc. In the context of a doctoral thesis, a viscous, incompressible flow is being studied by solving the incompressible Navier-Stokes equations in an efficient manner. During the stay at MIT, a discontinuous Galerkin method was used to solve the incompressible Navier-Stokes equations, using either a penalty parameter to ensure the continuity of fluxes between elements, or a compact discontinuous Galerkin method. Both methods have given good results, and several numerical examples have been simulated to validate the good behaviour of the developed methods. Particular elements, the Raviart-Thomas elements, were also studied; these could be used in a mixed formulation to obtain an efficient algorithm for solving complex numerical problems.
Abstract:
The focus of this dissertation is to develop finite elements based on the absolute nodal coordinate formulation. The absolute nodal coordinate formulation is a nonlinear finite element formulation introduced to meet special requirements in the field of flexible multibody dynamics. In this formulation, a special definition of element rotation is employed to ensure that the formulation does not suffer from singularities due to large rotations. The absolute nodal coordinate formulation can be used for analyzing the dynamics of beam, plate and shell type structures. The improvements to the formulation are mainly concentrated on the description of transverse shear deformation. Additionally, the formulation is verified against conventional isoparametric solid finite elements and geometrically exact beam theory. Previous claims about especially high eigenfrequencies are studied by introducing beam elements based on the absolute nodal coordinate formulation within the framework of the large rotation vector approach. Additionally, the same high-eigenfrequency problem is studied by using constraints for transverse deformation. It was determined that the improvements for shear deformation in the transverse direction lead to clear improvements in computational efficiency. This was especially true when comparative stress must be defined, for example when using an elasto-plastic material. Furthermore, the developed plate element can be used to avoid certain numerical problems, such as shear and curvature locking. In addition, it was shown that, compared to conventional solid elements or elements based on nonlinear beam theory, elements based on the absolute nodal coordinate formulation do not lead to an especially stiff system of equations of motion.
Abstract:
The objective of this dissertation is to improve the dynamic simulation of fluid power circuits. A fluid power circuit is a typical way to implement power transmission in mobile working machines, e.g. cranes, excavators, etc. Dynamic simulation is an essential tool in developing controllability and energy-efficient solutions for mobile machines, and efficient dynamic simulation is the basic requirement for real-time simulation. In the real-time simulation of fluid power circuits, numerical problems arise from the software and methods used for modelling and integration. A simulation model of a fluid power circuit is typically created using differential and algebraic equations. Efficient numerical methods are required since the differential equations must be solved in real time; unfortunately, simulation software packages offer only a limited selection of numerical solvers. Numerical problems introduce noise into the results, which in many cases causes the simulation run to fail. Mathematically, fluid power circuit models are stiff systems of ordinary differential equations. The numerical solution of stiff systems can be improved by two alternative approaches. The first is to develop numerical solvers suitable for stiff systems. The second is to decrease the model stiffness itself by introducing models and algorithms that either decrease the highest eigenvalues or neglect them by introducing steady-state solutions of the stiff parts of the models. The thesis proposes novel methods using the latter approach. The study aims to develop practical methods usable in the dynamic simulation of fluid power circuits with explicit fixed-step integration algorithms. In this thesis, two mechanisms which make the system stiff are studied. These are the pressure drop approaching zero in the turbulent orifice model and the volume approaching zero in the equation of pressure build-up.
These are the critical areas for which alternative modelling and numerical simulation methods are proposed. Generally, in hydraulic power transmission systems the orifice flow is clearly in the turbulent regime. The flow becomes laminar, as the pressure drop over the orifice approaches zero, only in rare situations: for example when a valve is closed, when an actuator is driven against an end stop, or when an external force makes the actuator switch direction during operation. This means that, in terms of accuracy, a description of laminar flow is not necessary. Unfortunately, when a purely turbulent description of the orifice is used, numerical problems occur as the pressure drop approaches zero, since the first derivative of flow with respect to pressure drop approaches infinity there. Furthermore, the second derivative becomes discontinuous, which causes numerical noise and an infinitely small integration step when a variable-step integrator is used. A numerically efficient model for the orifice flow is proposed, using a cubic spline function to describe the flow in the laminar and transition regions. The parameters of the cubic spline are selected such that its first derivative equals the first derivative of the purely turbulent orifice flow model at the boundary. In the dynamic simulation of fluid power circuits, a trade-off exists between accuracy and calculation speed; this investigation is made for the two-regime orifice flow model. Especially inside many types of valves, as well as between them, there exist very small volumes. The integration of pressures in small fluid volumes causes numerical problems in fluid power circuit simulation, and particularly in real-time simulation these numerical problems are a great weakness: the system stiffness approaches infinity as the fluid volume approaches zero.
If fixed-step explicit algorithms for solving ordinary differential equations (ODEs) are used, system stability is easily lost when integrating pressures in small volumes. To solve the problem caused by small fluid volumes, a pseudo-dynamic solver is proposed. Instead of integrating the pressure in a small volume, the pressure is solved as a steady-state pressure obtained in a separate cascade loop by numerical integration. The hydraulic capacitance V/Be of the parts of the circuit whose pressures are solved by the pseudo-dynamic method should be orders of magnitude smaller than that of the parts whose pressures are integrated. The key advantage of this novel method is that the numerical problems caused by the small volumes are completely avoided. Also, the method is freely applicable regardless of the integration routine used. The strength of both above-mentioned methods is that they are suited for use together with the semi-empirical modelling method, which does not necessarily require any geometrical data of the valves and actuators to be modelled; most of the needed component information can be taken from the manufacturer's nominal graphs. This thesis introduces the methods and shows several numerical examples to demonstrate how the proposed methods improve the dynamic simulation of various hydraulic circuits.
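The cubic regularization of the orifice law can be sketched as follows (Python, with hypothetical coefficient K and transition pressure dp_tr; the thesis's exact spline and parameter choices are not given in the abstract). Matching the odd cubic q = a*dp + b*dp**3 to the turbulent law q = K*sign(dp)*sqrt(|dp|) in both value and first derivative at dp_tr gives a = 5K/(4*sqrt(dp_tr)) and b = -K/(4*dp_tr**2.5), so the slope at dp = 0 is the finite value a instead of infinity:

```python
import numpy as np

def orifice_flow(dp, K=2e-8, dp_tr=1e5):
    """Two-regime orifice model: pure turbulent law outside
    |dp| <= dp_tr, replaced inside by an odd cubic whose value and
    first derivative match the turbulent law at dp = +/-dp_tr.
    K and dp_tr are illustrative, not from the thesis."""
    a = 5.0 * K / (4.0 * np.sqrt(dp_tr))
    b = -K / (4.0 * dp_tr**2.5)
    dp = np.asarray(dp, dtype=float)
    turbulent = K * np.sign(dp) * np.sqrt(np.abs(dp))
    cubic = a * dp + b * dp**3
    return np.where(np.abs(dp) <= dp_tr, cubic, turbulent)
```

With the infinite slope at zero removed, a fixed-step explicit integrator no longer sees the stiffness spike when the pressure drop crosses zero.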
Abstract:
This article discusses three possible ways to derive time-domain boundary integral representations for elastodynamics. The discussion points out difficulties that may arise when using those formulations in practical applications, gives recommendations for selecting the most convenient integral representation for elastodynamic problems, and opens the possibility of deriving simplified schemes. The proper way to take into account initial conditions applied to the body is an interesting topic also presented; it illustrates the main differences between the discussed boundary integral representations, their singularities and possible numerical problems. The correct way to use collocation points outside the analyzed domain is carefully described. Some applications are shown at the end of the paper in order to demonstrate the capabilities of the technique when properly used.
Abstract:
Although there is evidence that exact calculation recruits left hemisphere perisylvian language systems, recent work has shown that exact calculation can be retained despite severe damage to these networks. In this study, we sought to identify a “core” network for calculation and hence to determine the extent to which left hemisphere language areas are part of this network. We examined performance on addition and subtraction problems in two modalities: one using conventional two-digit problems that can be easily encoded into language; the other using novel shape representations. With regard to numerical problems, our results revealed increased left fronto-temporal activity in addition, and increased parietal activity in subtraction, potentially reflecting retrieval of linguistically encoded information during addition. The shape problems elicited activations of occipital, parietal and dorsal temporal regions, reflecting visual reasoning processes. A core activation common to both calculation types involved the superior parietal lobule bilaterally, right temporal sub-gyral area, and left lateralized activations in inferior parietal (BA 40), frontal (BA 6/8/32) and occipital (BA 18) regions. The large bilateral parietal activation could be attributed to visuo-spatial processing in calculation. The inferior parietal region, and particularly the left angular gyrus, was part of the core calculation network. However, given its activation in both shape and number tasks, its role is unlikely to reflect linguistic processing per se. A possibility is that it serves to integrate right hemisphere visuo-spatial and left hemisphere linguistic and executive processing in calculation.
Abstract:
Graduate Program in Mechanical Engineering - FEIS
Abstract:
Flat or worn wheels rolling on rough or corrugated tracks can provoke airborne noise and ground-borne vibration, which can be a serious concern for nearby neighbours of urban rail transit lines. Among the various treatments used to reduce vibration and noise, resilient wheels play an important role. In conventional resilient wheels, a slightly prestressed V-shaped rubber ring is mounted between the steel wheel centre and tyre. The elastic layer enhances rolling noise and vibration suppression, as well as impact reduction on the track. In this paper the effectiveness of resilient wheels in underground lines, in comparison to monobloc ones, is assessed. The analysed resilient wheel is able to carry greater loads than standard resilient wheels used for light vehicles. It also presents greater radial resiliency and higher axial stiffness than conventional V-wheels. The finite element method was used in this study. A quarter-car model was defined, in which the wheelset was modelled as an elastic body. Several simulations were performed in order to assess the vibrational behaviour of elastic wheels, including modal, harmonic and random vibration analyses, the latter allowing the introduction of realistic vertical track irregularities as well as the influence of the running speed. Due to numerical problems, some simplifications were needed. Parametric variations were also performed, in which the sensitivity of the whole system to variations of rubber prestress and Poisson's ratio of the elastic material was assessed. Results are presented in the frequency domain, showing a better performance of the resilient wheels for frequencies over 200 Hz. This result reveals the ability of the analysed design to mitigate rolling noise, but not structural vibrations, which are primarily found in the lower frequency range.
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures, which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. Starting in the mid-80s there has been significant progress in the development of parallelizing compilers for logic programming (and more recently, constraint programming), resulting in quite capable parallelizers. The typical applications of these paradigms frequently involve irregular computations and make heavy use of dynamic data structures with pointers, since logical variables represent in practice a well-behaved form of pointers. This arguably makes the techniques used in these compilers potentially interesting. In this paper, we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs and provide pointers to some of the significant progress made in the area. In particular, this work has resulted in a series of achievements in the areas of inter-procedural pointer aliasing analysis for independence detection, cost models and cost analysis, cactus-stack memory management, and techniques for managing speculative and irregular computations through task granularity control and dynamic task allocation (such as work-stealing schedulers), etc.
Abstract:
Irregular computations pose some of the most interesting and challenging problems in automatic parallelization. Irregularity appears in certain kinds of numerical problems and is pervasive in symbolic applications. Such computations often use dynamic data structures which make heavy use of pointers. This complicates all the steps of a parallelizing compiler, from independence detection to task partitioning and placement. In the past decade there has been significant progress in the development of parallelizing compilers for logic programming and, more recently, constraint programming. The typical applications of these paradigms frequently involve irregular computations, which arguably makes the techniques used in these compilers potentially interesting. In this paper we introduce in a tutorial way some of the problems faced by parallelizing compilers for logic and constraint programs. These include the need for inter-procedural pointer aliasing analysis for independence detection and having to manage speculative and irregular computations through task granularity control and dynamic task allocation. We also provide pointers to some of the progress made in these areas. In the associated talk we demonstrate representatives of several generations of these parallelizing compilers.
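As a toy illustration of task granularity control, one of the techniques listed above (the threshold value and the task itself are invented for the example): recursive work is only handed to a scheduler when the chunk is large enough to repay the scheduling overhead, otherwise it runs sequentially.

```python
import concurrent.futures as cf

THRESHOLD = 2000   # hypothetical granularity cutoff (problem-dependent)

def sequential_total(items):
    return sum(items)

def parallel_total(items, pool):
    # Granularity control: only pay scheduling overhead when the task
    # is large enough; below the threshold, run sequentially.
    if len(items) <= THRESHOLD:
        return sequential_total(items)
    mid = len(items) // 2
    left = pool.submit(parallel_total, items[:mid], pool)  # spawn half
    right = parallel_total(items[mid:], pool)              # keep half
    return left.result() + right

with cf.ThreadPoolExecutor(max_workers=4) as pool:
    result = parallel_total(list(range(10_000)), pool)
```

With CPython's GIL this shows the control structure rather than a real speedup; the same threshold logic is what granularity control applies, guided by cost analysis, in the parallel logic-programming runtimes discussed above.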
Abstract:
Today no one disputes the importance of predicting the vibroacoustic behaviour of structures (buildings, vehicles, aircraft, satellites). It has also become clear over time that the spectral range in which the response matters has shifted towards high frequencies in practically every field, so high-frequency analysis methods have gained importance and relevance. One of the most widespread methods for this purpose is Statistical Energy Analysis, SEA, which has been shown to provide a good balance between computational power, accuracy and reliability. In SEA, the system (structure, cavities or surrounding air) is modelled by a matrix of coefficients that depend directly on the loss factors of the different parts of the system. Formally, SEA is a very convenient and intuitive analysis method whose greatest difficulty is precisely the determination of those loss factors. The catalogue of analytical or numerical expressions for determining them is not sufficiently broad, so experimental tools are usually needed, whether to obtain the loss factors or to check the values used. Experimental determination is not free of problems either: it requires large and complex experimental setups with potentially very demanding requirements, and it involves numerical problems related to the values of the loss factors themselves, their relative magnitudes, and the characteristics of the matrices they form. 
The present work studies the characterization of vibroacoustic systems by Statistical Energy Analysis, focusing on the accurate determination of the loss factor values. Given the problems such an experimental setup can present, the first part studies the influence of all the quantities involved in the determination of the loss factors through a relative uncertainty analysis which, by means of normalized sensitivity coefficients, indicates the importance of each input quantity (essentially energies and input powers) in the results. This part provides an overview of which measurands deserve the most attention and which problems are most likely to cause instability (or incoherence) in the results. It also provides an uncertainty model valid for the cases studied, which has allowed assessing the error of a commonly used loss-factor characterization method, the two-subsystem approximation. The second part builds on these conclusions in two directions. One is aimed at a sufficiently faithful determination of the input power, allowing the experimental configuration to be simplified as much as possible. The other is based on a detailed analysis of the properties of the matrix that characterizes a SEA model, leading to a proposed method for its robust determination based on Monte Carlo filtering; this also shows that the numerical problems of the SEA matrix need not be as insurmountable as the literature suggests. Finally, a solution is proposed for the case in which not all the subsystems into which the system is divided can be excited. The proposed method does not yield the complete set of coefficients needed to define a system, but the mere ability to obtain partial sets is already an important advance and, above all, opens the door to the development of methods that substantially relax the requirements of the experimental determination of SEA matrices.
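A hedged sketch of the experimental SEA problem and a Monte Carlo filter of the kind described (all numbers are hypothetical, and the thesis's actual filtering criteria are richer): the power balance P = omega * L * E is inverted for the loss-factor matrix L, and perturbed samples are kept only when the resulting matrix is physically admissible.

```python
import numpy as np

rng = np.random.default_rng(0)
omega = 2.0 * np.pi * 1000.0          # band centre frequency [rad/s]

# Hypothetical 2-subsystem loss-factor matrix (positive diagonal,
# negative coupling terms) and energy matrix (column j = subsystem
# energies measured when subsystem j is excited)
L_true = np.array([[2e-2, -1e-3],
                   [-2e-3, 3e-2]])
E = np.array([[1.0, 0.2],
              [0.3, 1.5]])
P = omega * L_true @ E                # SEA power balance P = omega*L*E

def estimate_L(P, E):
    """Invert the power balance for the loss-factor matrix."""
    return (P @ np.linalg.inv(E)) / omega

# Monte Carlo filtering: perturb the "measurements" within a 5 %
# uncertainty and keep only samples yielding a physical matrix
accepted = []
for _ in range(2000):
    Ep = E * (1.0 + 0.05 * rng.standard_normal(E.shape))
    Pp = P * (1.0 + 0.05 * rng.standard_normal(P.shape))
    L = estimate_L(Pp, Ep)
    if np.all(np.diag(L) > 0) and L[0, 1] <= 0 and L[1, 0] <= 0:
        accepted.append(L)
L_hat = np.mean(accepted, axis=0)
```

Because the small off-diagonal (coupling) terms are easily swamped by measurement noise, many raw samples are unphysical; discarding them is precisely what makes the filtered estimate robust.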
Abstract:
Over the past decade, the numerical modeling of the magnetic field evolution in astrophysical scenarios has become an increasingly important field. In the crystallized crust of neutron stars the evolution of the magnetic field is governed by the Hall induction equation. In this equation the relative contribution of the two terms (the Hall term and Ohmic dissipation) varies depending on the local temperature and magnetic field strength. This results in a transition from the purely parabolic character of the equations to the hyperbolic regime as the magnetic Reynolds number increases, which presents severe numerical problems. Up to now, most attempts to study this problem were based on spectral methods, but they failed to represent the transition to large magnetic Reynolds numbers. We present a new code based on upwind finite-difference techniques that can handle situations with arbitrarily low magnetic diffusivity and is suitable for studying the formation of sharp current sheets during the evolution. The code is thoroughly tested in different limits and used to illustrate the evolution of the crustal magnetic field of a neutron star in some representative cases. Our code, coupled to cooling codes, can be used to perform long-term simulations of the magneto-thermal evolution of neutron stars.
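The upwind idea can be illustrated on the simplest hyperbolic model problem; this is a generic sketch of first-order upwinding, not the paper's actual Hall-MHD scheme:

```python
import numpy as np

def upwind_step(u, a, dx, dt):
    """One first-order upwind step for the model advection equation
    u_t + a u_x = 0 on a periodic grid.  The one-sided difference is
    taken from the side the flow comes from, which keeps the explicit
    scheme stable (for |a|*dt/dx <= 1) even with zero physical
    diffusivity -- the regime that defeats naive spectral approaches."""
    c = a * dt / dx
    if a >= 0.0:
        return u - c * (u - np.roll(u, 1))
    return u - c * (np.roll(u, -1) - u)
```

For 0 <= c <= 1 the update is a convex combination of neighbouring values, so no new extrema (spurious oscillations) are created even at sharp fronts, at the cost of some numerical diffusion.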