998 results for 2D elasticity problems


Relevance: 100.00%

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance: 90.00%

Abstract:

We study preconditioning techniques for discontinuous Galerkin discretizations of isotropic linear elasticity problems in primal (displacement) formulation. We propose subspace correction methods, based on a splitting of the vector-valued piecewise linear discontinuous finite element space, that are optimal with respect to the mesh size and the Lamé parameters. The pure displacement, the mixed, and the traction-free problems are discussed in detail. We present a convergence analysis of the proposed preconditioners and include numerical examples that validate the theory and assess their performance.

Relevance: 90.00%

Abstract:

This paper presents an HP-adaptive procedure with a hierarchical formulation for the Boundary Element Method in 2-D elasticity problems. First, the H, P and HP formulations are defined. Then the hierarchical concept, which allows a substantial reduction in the dimension of the equation system, is introduced. The error estimator used is based on the residual computed at each node inside an element. Finally, the HP strategy is defined and applied to two examples.

Relevance: 90.00%

Abstract:

We propose a discontinuous-Galerkin-based immersed boundary method for elasticity problems. The resulting numerical scheme does not require boundary-fitting meshes and avoids boundary locking by switching the elements intersected by the boundary to a discontinuous Galerkin approximation. Special emphasis is placed on the construction of a method that retains an optimal convergence rate in the presence of non-homogeneous essential and natural boundary conditions. The role of each of the approximations introduced is illustrated by analyzing an analogous problem in one spatial dimension. Finally, extensive two- and three-dimensional numerical experiments on linear and nonlinear elasticity problems verify that the proposed method leads to optimal convergence rates under combinations of essential and natural boundary conditions. (C) 2009 Elsevier B.V. All rights reserved.

Relevance: 90.00%

Abstract:

As is well known, the B.E.M. works efficiently for a broad class of potential and elasticity problems. In this paper we present the results of several runs performed with linear elements in plane potential theory, examining the importance of singularities and the influence of the pattern and size of the elements used in the boundary discretization.

Relevance: 90.00%

Abstract:

A general theory describing the linear B.I.E. approximation in potential and elasticity problems is developed. A method to treat the Dirichlet condition at sharp vertices is presented. Though the study is developed for linear elements, its extension to higher-order interpolation is straightforward. Finally, a new direct assembly procedure for the global system of equations to be solved is shown.

Relevance: 80.00%

Abstract:

In this thesis two aspects of boundary value problems in linear elasticity theory are investigated: the approximation of solutions on unbounded domains, and the change of symmetry classes under special transformations. The starting point of the dissertation is the procedure introduced by Specovius-Neugebauer and Nazarov in "Artificial boundary conditions for Petrovsky systems of second order in exterior domains and in other domains of conical type" (Math. Meth. Appl. Sci., 2004; 27) for studying second-order Petrovsky systems in exterior domains and domains with conical exits by means of the method of artificial boundary conditions. To compute solutions of the boundary value problems, the unbounded domains are truncated by intersecting them with a ball, and an artificial boundary condition is constructed in order to approximate the solution of the problem as well as possible. The procedure is modified so that the truncating domain is a polyhedron, since for solving the approximation problem with standard finite element discretizations it is advantageous if the domain to be triangulated has a polygonal boundary. The thesis begins with a presentation of the most important functional-analytic notions and results from the theory of elliptic differential operators. This is followed by the main part of the thesis, which is divided into three sections. First, a formal construction of the artificial boundary conditions is given for truncating polyhedral domains. Then the existence and uniqueness of the solution of the approximate boundary value problem on the truncated domain is proved, and subsequently an estimate for the resulting truncation error is derived. The theoretical exposition is followed by a discussion of applications: plane crack problems and polarization matrices of three-dimensional exterior problems in elasticity theory are treated.
The last section deals with the second aspect of the thesis, the area of algebraic equivalences. Here the transformation of symmetry classes is considered, in order to exploit the knowledge of the fundamental solution of elasticity problems for transversely isotropic media also for media that are not of transversely isotropic structure. A general description of all classes could not be given here. As an example of the approach, a class of orthotropic media in the three-dimensional case is presented which can be reduced to the case of transverse isotropy.

Relevance: 80.00%

Abstract:

Topological optimization problems based on stress criteria are solved using two techniques in this paper. The first technique is the conventional Evolutionary Structural Optimization (ESO), which is known as hard kill because the material is discretely removed; that is, the elements under low stress that are being inefficiently utilized have their constitutive matrix suddenly reduced. The second technique, proposed in a previous paper, is a variant of the ESO procedure called Smooth ESO (SESO), which is based on the philosophy that if an element is not really necessary for the structure, its contribution to the structural stiffness will gradually diminish until it no longer influences the structure; its removal is thus performed smoothly. This procedure is known as soft kill; that is, not all of the elements removed from the structure using the ESO criterion are discarded. Thus, the elements returned to the structure must yield a well-conditioned system to be solved in the next iteration, and they are considered important to the optimization process. To evaluate elasticity problems numerically, finite element analysis is applied, but instead of conventional quadrilateral finite elements, a plane-stress triangular finite element with high-order modes was implemented for solving complex geometric problems. A number of typical examples demonstrate that the proposed approach is effective for solving problems of two-dimensional elasticity. (C) 2014 Elsevier Ltd. All rights reserved.
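The soft-kill idea described above can be sketched in a few lines. This is a minimal illustration under assumptions not taken from the paper: per-element von Mises stresses are precomputed, and the rejection ratio `rr` and weakening step `eta` are hypothetical parameter names.

```python
import numpy as np

def seso_update(stress, factors, rr=0.3, eta=0.125):
    """One SESO-like soft-kill step: instead of deleting low-stress elements
    outright (hard kill), their stiffness factors are reduced gradually and
    may recover if the element becomes efficient again."""
    limit = rr * stress.max()                 # rejection threshold
    low = stress < limit                      # inefficiently utilized elements
    # weaken gradually, keeping a small floor so the system stays solvable
    factors[low] = np.maximum(factors[low] - eta, 1e-4)
    # let efficient elements recover toward full stiffness
    factors[~low] = np.minimum(factors[~low] + eta, 1.0)
    return factors
```

Multiplying each element's constitutive matrix by its factor gives the "smooth" removal; elements whose factor climbs back toward 1 are the ones re-admitted to the optimization process.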

Relevance: 80.00%

Abstract:

This paper presents solutions of the NURISP VVER lattice benchmark using APOLLO2, TRIPOLI4 and COBAYA3 pin-by-pin. The main objective is to validate MOC-based calculation schemes for pin-by-pin cross-section generation with APOLLO2 against TRIPOLI4 reference results. A specific objective is to test the APOLLO2-generated cross-sections and interface discontinuity factors in COBAYA3 pin-by-pin calculations with unstructured mesh. The VVER-1000 core consists of large hexagonal assemblies with 2 mm inter-assembly water gaps, which require the use of unstructured meshes in the pin-by-pin core simulators. The considered 2D benchmark problems include 19-pin clusters, fuel assemblies and 7-assembly clusters. APOLLO2 calculation schemes with the step characteristic method (MOC) and the higher-order Linear Surface MOC have been tested. The comparison of APOLLO2 vs. TRIPOLI4 results shows a very close agreement. The 3D lattice solver in COBAYA3 uses a transport-corrected multi-group diffusion approximation with interface discontinuity factors of GET or Black Box Homogenization type. The COBAYA3 pin-by-pin results in 2, 4 and 8 energy groups are close to the reference solutions when side-dependent interface discontinuity factors are used.

Relevance: 80.00%

Abstract:

In a Finite Element (FE) analysis of elastic solids several items are usually considered, namely the type and shape of the elements, the number of nodes per element, the node positions, the FE mesh and the total number of degrees of freedom (dof), among others. In this paper a method to improve a given FE mesh used for a particular analysis is described. For the improvement criterion different objective functions have been chosen (total potential energy and average quadratic error), while the number of nodes and dofs of the new mesh remain constant and equal to those of the initial FE mesh. In order to find the mesh producing the minimum of the selected objective function, the steepest-descent gradient technique has been applied as the optimization algorithm. This efficient technique has the drawback, however, of demanding large computational power. Extensive application of this methodology to different 2-D elasticity problems leads to the conclusion that isometric isostatic meshes (ii-meshes) produce better results than the reasonably regular initial meshes used in practice. This conclusion seems to be independent of the objective function used for comparison. These ii-meshes are obtained by placing FE nodes along the isostatic lines, i.e. curves tangent at each point to the principal direction lines of the elastic problem to be solved, and the nodes should be regularly spaced in order to build regular elements. This means ii-meshes are usually obtained by iteration: the elastic analysis is carried out with the initial FE mesh; from the results of this analysis the net of isostatic lines can be drawn and a first trial ii-mesh can be built. This first ii-mesh can be improved, if necessary, by analyzing the problem again and generating, after the FE analysis, a new and improved ii-mesh. Typically, the first two tentative ii-meshes are sufficient to produce good FE results from the elastic analysis. Several examples of this procedure are presented.
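The node-relocation loop the abstract describes (fixed node count, steepest descent on a chosen objective) can be sketched as follows. The objective and its gradient are supplied by the caller (in the paper they come from a full FE solve), and the step-halving rule here is an illustrative simplification, not the paper's scheme.

```python
import numpy as np

def improve_mesh(nodes, objective, grad, step=1e-2, n_iter=50):
    """Steepest-descent relocation of FE nodes: the number of nodes (and
    hence dofs) stays fixed; only the node coordinates change."""
    x = nodes.copy()
    for _ in range(n_iter):
        trial = x - step * grad(x)        # move nodes against the gradient
        if objective(trial) < objective(x):
            x = trial                     # accept the improved mesh
        else:
            step *= 0.5                   # otherwise shrink the step
    return x
```

With the total potential energy as the objective, every gradient evaluation requires an FE solve, which is the large computational cost the abstract points to.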

Relevance: 80.00%

Abstract:

Mathematics Subject Classification 2010: 45D05, 45E05, 78A45.

Relevance: 80.00%

Abstract:

Recent advances in airborne Light Detection and Ranging (LIDAR) technology allow rapid and inexpensive measurements of topography over large areas. Airborne LIDAR systems usually return a 3-dimensional cloud of point measurements from reflective objects scanned by the laser beneath the flight path. This technology is becoming a primary method for extracting information about different kinds of geometrical objects, such as high-resolution digital terrain models (DTMs), buildings and trees. In the past decade, LIDAR has attracted growing interest from researchers in the fields of remote sensing and GIS. Compared with traditional data sources, such as aerial photography and satellite images, LIDAR measurements are not influenced by sun shadow and relief displacement. However, the voluminous data pose a new challenge for automated extraction of geometrical information from LIDAR measurements, because many raster image processing techniques cannot be directly applied to irregularly spaced LIDAR points.
In this dissertation, a framework is proposed to extract information about different kinds of geometrical objects, such as terrain and buildings, from LIDAR data automatically. These objects are essential to numerous applications such as flood modeling, landslide prediction and hurricane animation. The framework consists of several intuitive algorithms. First, a progressive morphological filter was developed to detect non-ground LIDAR measurements. By gradually increasing the window size and elevation-difference threshold of the filter, the measurements of vehicles, vegetation and buildings are removed, while ground data are preserved. Then, building measurements are identified from the non-ground measurements using a region-growing algorithm based on a plane-fitting technique.
Raw footprints of the segmented building measurements are derived by connecting boundary points and are further simplified and adjusted by several proposed operations to remove the noise caused by irregularly spaced LIDAR measurements. To reconstruct 3D building models, the raw 2D topology of each building is first extracted and then adjusted. Since the adjusting operations for simple building models do not work well on 2D topology, a 2D snake algorithm is proposed to adjust the topology. The 2D snake algorithm consists of newly defined energy functions for topology adjustment and a linear algorithm to find the minimal energy value of 2D snake problems. Data sets from urbanized areas including large institutional, commercial and small residential buildings were employed to test the proposed framework. The results demonstrate that the proposed framework achieves very good performance.
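The progressive morphological filter described above can be sketched on a rasterized elevation grid. This is a minimal illustration, not the dissertation's implementation: the point cloud is assumed to have been gridded beforehand, and the window sizes and elevation-difference thresholds are example values.

```python
import numpy as np
from scipy.ndimage import grey_opening

def progressive_filter(z, windows=(3, 5, 9), thresholds=(0.3, 0.6, 1.2)):
    """Flag non-ground cells: at each pass the window grows, and cells rising
    above the morphologically opened surface by more than the current
    elevation-difference threshold are marked as non-ground."""
    nonground = np.zeros(z.shape, dtype=bool)
    surface = z.copy()
    for w, t in zip(windows, thresholds):
        opened = grey_opening(surface, size=(w, w))  # removes features narrower than w cells
        nonground |= (surface - opened) > t
        surface = opened                             # carry the ground estimate forward
    return nonground
```

Small objects such as vehicles fall out at small windows; buildings are removed only once the window exceeds their footprint, which is why the elevation-difference threshold must grow along with the window.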

Relevance: 80.00%

Abstract:

Mechanical fatigue is a failure phenomenon that occurs due to the repeated application of mechanical loads. Very High Cycle Fatigue (VHCF) refers to fatigue lives greater than 10 million load cycles. Increasing numbers of structural components have service lives in the VHCF regime, for instance in automotive and high-speed train transportation, gas turbine disks, and components of paper production machinery. Safe and reliable operation of these components depends on knowledge of their VHCF properties. In this thesis both experimental tools and theoretical modelling were utilized to develop a better understanding of the VHCF phenomena. In the experimental part, ultrasonic fatigue testing at 20 kHz of cold-rolled and hot-rolled stainless steel grades was conducted and fatigue strengths in the VHCF regime were obtained. The mechanisms of fatigue crack initiation and short crack growth were investigated using electron microscopes. For the cold-rolled stainless steels, crack initiation and early growth occurred through the formation of the Fine Granular Area (FGA), observed on the fracture surface and in TEM observations of cross-sections. Crack growth in the FGA appears to control more than 90% of the total fatigue life. For the hot-rolled duplex stainless steels, fatigue crack initiation occurred due to the accumulation of plastic fatigue damage at the external surface, and early crack growth proceeded through a crystallographic growth mechanism. Theoretical modelling of complex cracks involving kinks and branches in an elastic half-plane under static loading was carried out using the Distributed Dislocation Dipole Technique (DDDT). The technique was implemented for 2D crack problems, and both fully open and partially closed crack cases were analyzed. The main aim of the development of the DDDT was to compute stress intensity factors. An accuracy of 2% was attainable in the computations, compared with solutions obtained by the Finite Element Method.

Relevance: 80.00%

Abstract:

In the traceless Oldroyd viscoelastic model, the viscoelastic extra stress tensor is decomposed into its traceless (deviatoric) and spherical parts, leading to a reformulation of the classical Oldroyd model. The equivalence of the two models is established by comparing model predictions for simple test cases. The new model is validated using several 2D benchmark problems. The structure and behavior of the new model are discussed, and its future use is envisioned from both the theoretical and numerical perspectives.
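The decomposition underlying the model is the standard split of a tensor into spherical and deviatoric (traceless) parts; a minimal sketch, with an illustrative function name:

```python
import numpy as np

def traceless_split(tau):
    """Split a stress tensor into spherical and deviatoric (traceless) parts:
    tau = (tr(tau)/d) * I + dev(tau), with tr(dev(tau)) = 0."""
    d = tau.shape[0]
    spherical = (np.trace(tau) / d) * np.eye(d)
    deviatoric = tau - spherical
    return spherical, deviatoric
```

Evolving the two parts separately is what yields the reformulated, traceless form of the constitutive model.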

Relevance: 80.00%

Abstract:

A traceless variant of the Johnson-Segalman viscoelastic model is presented. The viscoelastic extra stress tensor is decomposed into its traceless (deviatoric) and spherical parts, leading to a reformulation of the classical Johnson-Segalman model. The equivalence of the two models is established by comparing model predictions for simple test cases. The new model is validated using several 2D benchmark problems. The structure and behavior of the new model are discussed.