905 results for Finite element analyze
Abstract:
A method for formulating and algorithmically solving the equations of finite element problems is presented. The method starts with a parametric partition of the domain into juxtaposed strips that permits sweeping the whole region by a sequential addition (or removal) of adjacent strips. The solution of the difference equations constructed over that grid proceeds along with the addition or removal of strips in a manner resembling the transfer matrix approach, except that different rules of composition, which lead to numerically stable algorithms, are used for the stiffness matrices of the strips. Dynamic programming and invariant imbedding ideas underlie the construction of such rules of composition. Among other features of interest, the present methodology gives the analyst some control over the type and quantity of data to be computed. In particular, the one-sweep method presented in Section 9, with no apparent counterpart in standard methods, appears to be very efficient insofar as time and storage are concerned. The paper ends with the presentation of a numerical example.
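The merging of adjacent strips described above can be illustrated with static condensation (a Schur complement) of the shared interface. This is only a minimal sketch with scalar interface DOFs, not the paper's actual composition rule:

```python
# Sketch: two adjacent "strips" are merged by statically condensing
# (Schur complement) their shared interface DOF. Scalar interface DOFs
# are assumed here for clarity.

def compose(A, B):
    """Merge strip A (DOFs l, m) with strip B (DOFs m, r) into one strip (l, r).

    A and B are 2x2 symmetric stiffness matrices given as nested lists.
    The shared interface DOF m is eliminated by static condensation.
    """
    s = A[1][1] + B[0][0]          # assembled stiffness at the shared interface
    k_ll = A[0][0] - A[0][1] ** 2 / s
    k_lr = -A[0][1] * B[0][1] / s
    k_rr = B[1][1] - B[0][1] ** 2 / s
    return [[k_ll, k_lr], [k_lr, k_rr]]

def spring(k):
    """Stiffness matrix of a single axial spring (the simplest 'strip')."""
    return [[k, -k], [-k, k]]

# Sanity check: two springs in series must condense to k1*k2/(k1+k2).
K = compose(spring(2.0), spring(3.0))
```

Repeating `compose` strip after strip sweeps the whole region while keeping only interface-sized matrices, which is the numerically stable alternative to chaining transfer matrices.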
Abstract:
Two mathematical models are used to simulate pollution in the Bay of Santander. The first is a hydrodynamic model that provides the velocity field and the height of the water. The second provides the resulting pollutant concentration field. Both models are formulated as two-dimensional equations. Linear triangular finite elements are used in a Galerkin procedure for the spatial discretization, and a finite difference scheme is used for the time integration. At each time step the calculated results of the first model are input to the second model as field data. The efficiency and accuracy of the models are tested through their application to a simple illustrative example. Finally, a case study simulating the evolution of pollution in the Bay of Santander is presented.
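The basic building block of the Galerkin spatial discretisation mentioned above is the element matrix of a linear (P1) triangle. A generic textbook sketch for the Laplace operator follows (an illustration, not the authors' pollution-model code):

```python
# Element stiffness of a linear triangle for the Laplace operator.

def tri_stiffness(p1, p2, p3):
    """3x3 matrix K_ij = integral of grad(Ni).grad(Nj) over the triangle."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    # Gradients of the barycentric shape functions: grad(Ni) = (bi, ci) / (2A)
    b = [y2 - y3, y3 - y1, y1 - y2]
    c = [x3 - x2, x1 - x3, x2 - x1]
    area2 = (x2 - x1) * (y3 - y1) - (x3 - x1) * (y2 - y1)  # = 2A (signed)
    return [[(b[i] * b[j] + c[i] * c[j]) / (2.0 * abs(area2))
             for j in range(3)] for i in range(3)]

K = tri_stiffness((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))  # reference triangle
```

Each row of K sums to zero, reflecting that a constant field produces no flux — a useful assembly-time check.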
Abstract:
A consistent Finite Element formulation was developed for four classical 1-D beam models. This formulation is based upon the solution of the homogeneous differential equation (or equations) associated with each model. Results such as the shape functions, stiffness matrices and consistent force vectors for the constant-section beam were found. Some of these results were compared with the corresponding ones obtained by the standard Finite Element Method (i.e. using polynomial expansions for the field variables). Some of the difficulties reported in the literature concerning some of these models may be avoided by this technique, and some numerical sensitivity analyses on this subject are presented.
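For the Euler-Bernoulli model, the homogeneous equation EI w'''' = 0 has cubic solutions, so the shape functions obtained this way coincide with the Hermite cubics of the standard formulation. A minimal sketch (constant-section beam assumed):

```python
# Hermite cubic shape functions of a 2-node Euler-Bernoulli beam element,
# i.e. the solutions of EI * w'''' = 0 matching unit nodal DOFs.

def hermite_shapes(xi, L):
    """Shape functions at xi in [0, 1] for DOFs (w1, theta1, w2, theta2)."""
    N1 = 1 - 3 * xi**2 + 2 * xi**3        # transverse displacement at node 1
    N2 = L * (xi - 2 * xi**2 + xi**3)     # rotation at node 1
    N3 = 3 * xi**2 - 2 * xi**3            # transverse displacement at node 2
    N4 = L * (-xi**2 + xi**3)             # rotation at node 2
    return N1, N2, N3, N4
```

The displacement shapes form a partition of unity (N1 + N3 = 1), so rigid-body translation is represented exactly — one reason the exact-solution basis avoids some of the sensitivity issues mentioned above.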
Abstract:
The existing seismic isolation systems are based on well-known and accepted physical principles, but they still have some functional drawbacks. As an attempt at improvement, the Roll-N-Cage (RNC) isolator has recently been proposed. It is designed to achieve a balance between controlling isolator displacement demands and structural accelerations. It provides in a single unit all the necessary functions of vertical rigid support, horizontal flexibility with enhanced stability, resistance to low service loads and minor vibration, and hysteretic energy dissipation. It is characterized by two unique features: a self-braking (buffer) mechanism and a self-recentering mechanism. This paper presents an advanced representation of the main and unique features of the RNC isolator using an available finite element code called SAP2000. The validity of the obtained SAP2000 model is then checked against experimental, numerical and analytical results. The paper then investigates the merits and demerits of activating the built-in buffer mechanism for both structural pounding mitigation and isolation efficiency. It addresses the problem of passive alleviation of possible inner pounding within the RNC isolator, which may arise from the activation of its self-braking mechanism under severe excitations such as near-fault earthquakes. The results show that the obtained finite element model can closely match and accurately predict the overall behavior of the RNC isolator with small errors. Moreover, the inherent buffer mechanism of the RNC isolator can mitigate or even eliminate direct structure-to-structure pounding under severe excitation given limited separation gaps between adjacent structures.
In addition, increasing the inherent hysteretic damping of the RNC isolator can efficiently limit its peak displacement together with the severity of any inner pounding and, therefore, alleviate or even eliminate the possible negative effects of the buffer mechanism on the overall RNC-isolated structural responses.
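The hysteretic energy dissipation of isolators of this kind is often illustrated with the generic Bouc-Wen model. The sketch below is a phenomenological stand-in with arbitrary parameters, not the RNC isolator's actual characteristics or SAP2000's link-element formulation:

```python
# Bouc-Wen hysteretic variable z driven by a displacement history.
import math

def bouc_wen(x_hist, A=1.0, beta=0.5, gamma=0.5, n=1.0):
    """Integrate dz = (A - |z|^n * (gamma + beta*sign(dx*z))) * dx."""
    z, zs = 0.0, []
    for i in range(1, len(x_hist)):
        dx = x_hist[i] - x_hist[i - 1]
        dz = (A - abs(z) ** n * (gamma + beta * math.copysign(1.0, dx * z))) * dx
        z += dz
        zs.append(z)
    return zs

# Sinusoidal displacement cycles; z stays bounded by (A/(beta+gamma))**(1/n) = 1.
N = 4000
x = [0.05 * math.sin(2 * math.pi * k / (N / 2)) for k in range(N)]
zs = bouc_wen(x)
```

Plotting restoring force against displacement for such a history traces the familiar hysteresis loop whose enclosed area is the energy dissipated per cycle.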
Abstract:
Civil buildings are not specifically designed to support blast loads, but it is important to take these potential scenarios into account because of their catastrophic effects on people and structures. A practical way to consider explosions in reinforced concrete structures is necessary. With this objective we propose a methodology to evaluate blast loads on large concrete buildings, using the LS-DYNA code with Lagrangian finite elements and explicit time integration. The methodology has three steps. First, individual structural elements of the building, such as columns and slabs, are studied using 3D continuum element models subjected to blast loads. In these models reinforced concrete is represented with high precision, using advanced material models such as the CSCM_CONCRETE model and segregated rebars constrained within the continuum mesh. However, this approach cannot be used for large structures because of its excessive computational cost. Second, models based on structural elements are developed, using shell and beam elements. In these models concrete is represented with the CONCRETE_EC2 model and segregated rebars with an offset formulation, calibrated against the continuum element models from step one to obtain the same structural response: displacement, velocity, acceleration, damage and erosion. Third, the models based on structural elements are used to build large models of complete buildings, which are used to study the global response of buildings subjected to blast loads and progressive collapse. This article describes the techniques needed to properly calibrate the models based on shell and beam elements, so that they provide results of sufficient accuracy at moderate computational cost.
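Blast loads of the kind applied in such models are commonly idealised with the Friedlander waveform. The sketch below only illustrates that pressure history; the peak overpressure, positive-phase duration and decay coefficient are hypothetical values, not data from the article:

```python
# Friedlander idealisation of a blast overpressure history.
import math

def friedlander(t, p0, td, b):
    """Overpressure at time t: p0 * (1 - t/td) * exp(-b*t/td), zero for t < 0."""
    if t < 0.0:
        return 0.0
    return p0 * (1.0 - t / td) * math.exp(-b * t / td)

p_peak = friedlander(0.0, 500e3, 0.01, 1.5)    # 500 kPa peak, 10 ms duration
p_end = friedlander(0.01, 500e3, 0.01, 1.5)    # zero at end of positive phase
```

For t > td the expression goes negative, representing the suction of the negative phase; many practical models simply truncate it at the positive-phase duration.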
Abstract:
In the present thesis we develop a framework for the numerical simulation of the mechanical behaviour of the human aorta using non-linear finite element models. Special attention is paid to three key aspects related to the biomechanics of soft tissues. First, the modelling of the characteristic anisotropic behaviour of the soft tissue due to the collagen fibre families. Secondly, the modelling of the damage-related softening that blood vessels exhibit when subjected to loads beyond their physiological range. And finally, the inclusion of the residual stresses in the simulations in accordance with the opening-angle experiment. The modelling of damage is addressed with two major and different approaches. In the first approach a continuum local damage formulation with regularisation is presented. This formulation has two principal ingredients. On the one hand, it makes use of the principles of the smeared crack theory to avoid the mesh-size dependence of the structural response in softening. On the other hand, it uses a Hodge-Petruska bidimensional model to describe the fibrils as staggered arrays of tropocollagen molecules, and from this mesoscopic model the macroscopic material properties of the collagen fibres are obtained using a homogenisation process. In the second approach a non-local gradient-enhanced damage formulation is introduced. The model is built around the enhancement of the free energy function by means of a term that contains the referential gradient of the non-local damage variable. The inclusion of this term ensures an implicit regularisation of the finite element implementation, yielding mesh-objective simulation results. The applicability of the latter model to biomechanically related problems is studied by means of the simulation of a typical surgical procedure, namely balloon angioplasty.
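The anisotropic collagen-fibre contribution in arterial models is often taken from the Holzapfel-Gasser-Ogden (HGO) energy. A minimal sketch of that fibre term follows; the constants k1 and k2 are hypothetical placeholders, not the thesis' material data, and the thesis' own fibre law is derived from the Hodge-Petruska homogenisation instead:

```python
# HGO-type collagen fibre strain energy as a function of the fourth
# invariant I4 (squared fibre stretch).
import math

def psi_fibre(I4, k1=1.0, k2=10.0):
    """Fibre strain energy; fibres only contribute in tension (I4 > 1)."""
    if I4 <= 1.0:
        return 0.0
    return k1 / (2.0 * k2) * (math.exp(k2 * (I4 - 1.0) ** 2) - 1.0)
```

The exponential stiffening with stretch is what produces the characteristic J-shaped response of arterial tissue; damage models such as those in the thesis then degrade this energy beyond the physiological range.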
Abstract:
This paper presents a Finite Element Model which has been used for forecasting the diffusion of innovations in time and space. Unlike conventional models in the diffusion literature, this model accounts for spatial heterogeneity. The implementation steps of the model are explained by applying it to the diffusion of photovoltaic systems in a local region in southern Germany. The applied model is based on a parabolic partial differential equation that describes the diffusion ratio of photovoltaic systems in a given region over time. The results of the application show that the Finite Element Model constitutes a powerful tool for better understanding the diffusion of an innovation as a simultaneous space-time process. Model limitations and possible extensions for future research are also discussed.
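A parabolic space-time diffusion-of-innovation equation of the kind described above can be caricatured in 1-D as dp/dt = D * p_xx + r * p * (1 - p), i.e. spatial diffusion plus logistic adoption. The sketch below uses a simple explicit finite-difference step; D, r and the grid are illustrative choices, not the paper's calibrated model:

```python
# 1-D reaction-diffusion toy model of innovation adoption ratio p in [0, 1].

def step(p, D, r, dx, dt):
    """One explicit Euler step; zero-flux boundaries."""
    q = p[:]
    for i in range(len(p)):
        left = p[i - 1] if i > 0 else p[i]
        right = p[i + 1] if i < len(p) - 1 else p[i]
        lap = (left - 2 * p[i] + right) / dx**2
        q[i] = p[i] + dt * (D * lap + r * p[i] * (1 - p[i]))
    return q

p = [0.0] * 50
p[0] = 0.1                      # adoption starts at one edge of the region
for _ in range(2000):
    p = step(p, D=0.5, r=1.0, dx=1.0, dt=0.01)
```

The adoption ratio spreads as a travelling front: locations near the seed saturate first while distant ones lag, which is exactly the simultaneous space-time behaviour the paper's FE model captures.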
Abstract:
In the thin-film photovoltaic industry, achieving high light scattering at one or more of the cell interfaces is one of the strategies that allow an enhancement of light absorption inside the cell and, therefore, better device behavior and efficiency. Although chemical etching is the standard method of texturing surfaces for that scattering improvement, laser processing has emerged as a new way of texturing different materials, maintaining good control of the final topography with a unique, clean, and quite precise process. In this work AZO films with different texture parameters are fabricated. The typical parameters used to characterize them, such as the root mean square roughness or the haze factor, are discussed, and, for a deeper understanding of the scattering mechanisms, the light behavior in the films is simulated using a finite element method code. This method gives information about the light intensity at each point of the system, allowing a precise characterization of the scattering behavior near the film surface, and it can also be used to calculate a simulated haze factor that can be compared with experimental measurements. A discussion of the validation of the numerical code, based on a comprehensive comparison with experimental data, is included.
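The haze factor mentioned above is the ratio of diffusely transmitted light to total transmitted light. The sketch below estimates it from a sampled angular intensity distribution by excluding a small specular cone; the distribution and the 5-degree cone are illustrative assumptions, not the paper's definition or data:

```python
# Haze factor as diffuse/total transmitted power over scattering angle.
import math

def haze(angles_deg, intensity, specular_cone_deg=5.0):
    """Trapezoidal estimate of diffuse/total transmitted power."""
    def integrate(pairs):
        s = 0.0
        for (a0, i0), (a1, i1) in zip(pairs, pairs[1:]):
            s += 0.5 * (i0 + i1) * (a1 - a0)
        return s
    data = sorted(zip(angles_deg, intensity))
    total = integrate(data)
    diffuse = integrate([(a, i if a > specular_cone_deg else 0.0)
                         for a, i in data])
    return diffuse / total

# A narrow specular peak plus a broad, weak scattered background:
angles = [0.5 * k for k in range(181)]              # 0..90 degrees
I = [math.exp(-(a / 2.0) ** 2) + 0.05 for a in angles]
h = haze(angles, I)
```

A simulated angular intensity field, such as the FE code above provides, can be fed through the same reduction to yield a simulated haze factor comparable with the measured one.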
Abstract:
In a Finite Element (FE) analysis of elastic solids several items are usually considered, namely, the type and shape of the elements, the number of nodes per element, the node positions, the FE mesh, and the total number of degrees of freedom (dof), among others. In this paper a method to improve a given FE mesh used for a particular analysis is described. For the improvement criterion different objective functions have been chosen (total potential energy and average quadratic error), and the number of nodes and dofs of the new mesh remains constant and equal to that of the initial FE mesh. In order to find the mesh producing the minimum of the selected objective function, the steepest descent technique has been applied as the optimization algorithm. However, this technique has the drawback of demanding large computational power. Extensive application of this methodology to different 2-D elasticity problems leads to the conclusion that isometric isostatic meshes (ii-meshes) produce better results than the reasonable regular initial meshes normally used in practice. This conclusion seems to be independent of the objective function used for comparison. These ii-meshes are obtained by placing FE nodes along the isostatic lines, i.e. curves tangent at each point to the principal direction lines of the elastic problem to be solved, and they should be regularly spaced in order to build regular elements. That means ii-meshes are usually obtained by iteration: the elastic analysis is first carried out on the initial FE mesh; from its results the net of isostatic lines can be drawn and a first ii-mesh built; this first ii-mesh can be improved, if necessary, by analysing the problem again and generating a new, improved ii-mesh from the FE results. Typically, two tentative ii-meshes are sufficient to produce good FE results from the elastic analysis. Several examples of this procedure are presented.
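The mesh-improvement idea can be illustrated with a 1-D toy analogue: for -u'' = f on (0,1) with u(0) = u(1) = 0 and two linear elements, the interior node position is moved by steepest descent on the total potential energy. A numerical gradient is used; this is a sketch of the optimization loop, not the paper's 2-D procedure:

```python
# Steepest descent on the total potential energy of a 2-element 1-D mesh.

def potential_energy(x, f=1.0):
    """Total potential energy of the FE solution with the interior node at x."""
    h1, h2 = x, 1.0 - x
    K = 1.0 / h1 + 1.0 / h2          # assembled stiffness at the free node
    F = f * (h1 + h2) / 2.0          # consistent load at the free node
    u = F / K
    return 0.5 * K * u * u - F * u   # equals -F**2 / (2*K)

def descend(x0, steps=300, lr=0.5, eps=1e-6):
    """Move the node down the numerical energy gradient."""
    x = x0
    for _ in range(steps):
        g = (potential_energy(x + eps) - potential_energy(x - eps)) / (2 * eps)
        x -= lr * g
    return x

x_opt = descend(0.2)   # for uniform f the optimal mesh is the symmetric one
```

For a uniform load the energy minimum sits at x = 0.5, so the descent recovers the expected regular mesh; in 2-D the same loop moves many nodes at once, which is what makes the technique expensive.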
Abstract:
The thermal buckling behavior of automotive clutch and brake discs is studied using the finite element method. It is found that the temperature distribution along the radius and through the thickness affects the critical buckling load considerably. The results indicate that a monotonic temperature profile with the highest temperature located at the inner radius leads to a coning mode, whereas a temperature profile with the maximum temperature located in the middle leads to a dominant non-axisymmetric buckling mode, which results in a much higher buckling temperature. A periodic variation of temperature cannot lead to buckling. The temperature along the thickness can be simplified by the mean temperature method in the single-material model. The thermal buckling analysis of friction discs with a friction material layer, cone angle geometry and fixed-teeth boundary conditions is also studied in detail. The cone angle geometry and the fixed teeth can improve the buckling temperature significantly. Young's modulus has no effect when a single material is used, in either the free or the restricted condition; several equations are derived to validate this result. The Young's modulus ratio is a useful factor when the clutch has several material layers. The findings of this paper are useful for designing automotive clutch and brake discs against structural instability induced by thermal buckling.
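A classical 1-D analogue makes the single-material observation above concrete: a straight bar held between rigid supports buckles when the thermal stress reaches the Euler stress, giving dT_cr = pi^2 * t^2 / (12 * alpha * L^2) for a pinned rectangular section. Young's modulus cancels out, consistent with the abstract's remark. This is a textbook sanity check, not the paper's disc analysis; the dimensions below are arbitrary:

```python
# Critical uniform temperature rise of an axially restrained, pinned-pinned
# bar of rectangular section: E*alpha*dT = pi^2*E*I/(A*L^2) => E cancels.
import math

def critical_dT(t, L, alpha):
    """Critical temperature rise (thickness t, length L, expansion alpha)."""
    return math.pi ** 2 * t ** 2 / (12.0 * alpha * L ** 2)

dT = critical_dT(t=0.01, L=1.0, alpha=1.2e-5)   # 10 mm steel strip, 1 m span
```

Even a modest temperature rise of a few degrees buckles such a slender restrained member, which is why thermal buckling matters for thin friction discs.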
Abstract:
Most analytical models devoted to determining the acoustic properties of a rigid perforated panel consider the acoustic impedance of a single hole and then use the porosity to determine the impedance of the whole panel. However, in the case of non-homogeneous hole distributions or more complex configurations this approach is no longer valid. This work explores some of these limitations and proposes a finite element methodology that implements the linearized Navier-Stokes equations in the frequency domain to analyse the acoustic performance of perforated panel absorbers under normal incidence. Preliminary results for a homogeneous perforated panel show that the sound absorption coefficient derived from the Maa analytical model does not match that obtained from the simulations. These differences are mainly attributed to the finite-geometry effect and to the spatial distribution of the perforations in the numerical case. In order to confirm these statements, the acoustic field in the vicinity of the perforations is analysed for a more complex perforated panel configuration. Additionally, experimental studies are carried out in an impedance tube for the same configuration and compared with the previous methods. The proposed methodology is shown to be in better agreement with the laboratory measurements than the analytical approach.
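Both the analytical and the numerical routes above ultimately yield a surface impedance, from which the normal-incidence absorption coefficient follows. A sketch of that last step; the impedance values below are arbitrary placeholders, not Maa-model evaluations:

```python
# Normal-incidence absorption coefficient from a complex surface impedance Z:
# alpha = 1 - |R|^2, with reflection factor R = (Z - rho*c)/(Z + rho*c).

RHO_C = 415.0   # characteristic impedance of air, approx. (Pa*s/m)

def absorption(Z):
    """Sound absorption coefficient for surface impedance Z (complex)."""
    R = (Z - RHO_C) / (Z + RHO_C)
    return 1.0 - abs(R) ** 2

alpha_matched = absorption(415.0 + 0j)     # matched impedance: full absorption
alpha_stiff = absorption(5000.0 + 800j)    # nearly rigid surface: low alpha
```

Comparing alpha-versus-frequency curves computed this way from the Maa impedance, the FE impedance and the impedance-tube data is exactly the comparison the paper reports.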
Abstract:
Subsidence is a hazard of natural or anthropogenic origin that can cause significant economic losses. The area of Murcia city (SE Spain) has been affected by subsidence due to groundwater overexploitation since 1992. The main historical piezometric level declines occurred in the periods 1982-1984, 1992-1995 and 2004-2008 and showed a close correlation with the temporal evolution of ground displacements. Since 2008, the pressure recovery in the aquifer has led to an uplift of the ground surface that has been detected by extensometers. In the present work an elastic hydro-mechanical finite element code has been used to compute subsidence time series for 24 geotechnical boreholes, prescribing the measured groundwater table evolution. The results have been compared with the displacements estimated through an advanced DInSAR technique and measured by the extensometers. These spatio-temporal comparisons show that, in spite of the limited geomechanical data available, the model satisfactorily reproduces the subsidence phenomenon affecting Murcia city. The model will allow the prediction of future induced deformations and of the consequences of any piezometric level variation in the study area.
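The elastic part of such head-driven deformation can be caricatured with a 1-D skeletal-storage relation: compaction = Ssk * b * (head decline), evaluated along the head time series. The head values and parameters below are hypothetical, not the Murcia data, and the real hydro-mechanical model is far richer:

```python
# Elastic compaction/uplift of a compressible layer of thickness b (m) with
# skeletal specific storage Ssk (1/m), driven by piezometric head changes.

def compaction_series(heads, Ssk, b):
    """Cumulative elastic compaction (m) relative to the initial head."""
    out, h0 = [], heads[0]
    for h in heads:
        out.append(Ssk * b * (h0 - h))    # positive = subsidence
    return out

heads = [50.0, 48.0, 45.0, 45.5, 47.0]    # decline, then partial recovery
s = compaction_series(heads, Ssk=1e-4, b=30.0)
```

Because the relation is elastic, a head recovery partially reverses the computed subsidence, mirroring the uplift detected by the extensometers after 2008.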
Abstract:
Numerical modelling methodologies are important for their application to engineering and scientific problems, because some processes cannot be modelled by analytical mathematical expressions. When the only available information is a set of experimental values for the variables that determine the state of the system, the modelling problem is equivalent to determining the hyper-surface that best fits the data. This paper presents a methodology based on the Galerkin formulation of the finite element method to obtain representations of relationships, defined a priori, between a set of variables: y = z(x1, x2, ..., xd). These representations are generated from the values of the variables in the experimental data. The piecewise approximation is an element of a Sobolev space and has derivatives defined in a generalised sense in this space. This approach requires inverting a linear system whose structure allows a fast solver algorithm. The algorithm can be used in a variety of fields, being a multidisciplinary tool. The validity of the methodology is studied on two real applications: a problem in hydrodynamics and an engineering problem involving fluids, heat and transport in an energy generation plant. A test of the predictive capacity of the methodology is also performed using cross-validation.
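In 1-D the idea reduces to a least-squares fit of scattered data with a hat-function (linear FE) basis. A minimal sketch follows, with a uniform grid and a dense solve; the fast solver mentioned in the abstract would instead exploit the sparse banded structure of the system:

```python
# Least-squares fit of scattered data (xs, ys) with piecewise-linear
# finite element basis functions on a uniform node grid.

def hat(i, x, nodes):
    """Piecewise-linear basis function of node i evaluated at x."""
    h = nodes[1] - nodes[0]
    return max(0.0, 1.0 - abs(x - nodes[i]) / h)

def fit(xs, ys, nodes):
    """Solve the normal equations sum_k Ni(xk)*Nj(xk)*c_j = sum_k Ni(xk)*yk."""
    n = len(nodes)
    A = [[sum(hat(i, x, nodes) * hat(j, x, nodes) for x in xs)
          for j in range(n)] for i in range(n)]
    b = [sum(hat(i, x, nodes) * y for x, y in zip(xs, ys)) for i in range(n)]
    # Plain Gauss-Jordan elimination (the system is small in this sketch).
    for k in range(n):
        p = A[k][k]
        for j in range(k, n):
            A[k][j] /= p
        b[k] /= p
        for i in range(n):
            if i != k and A[i][k] != 0.0:
                f = A[i][k]
                for j in range(k, n):
                    A[i][j] -= f * A[k][j]
                b[i] -= f * b[k]
    return b   # nodal coefficients c_i

nodes = [0.0, 0.25, 0.5, 0.75, 1.0]
xs = [k / 40.0 for k in range(41)]
ys = [2.0 * x + 1.0 for x in xs]          # data lying on a straight line
c = fit(xs, ys, nodes)                    # should reproduce 2x + 1 at nodes
```

Because the data lie in the span of the basis, the fit reproduces the nodal values exactly; with noisy data the same normal equations return the best piecewise-linear approximation in the least-squares sense.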
Abstract:
National Highway Traffic Safety Administration, Washington, D.C.