967 results for boundary element


Relevance:

30.00%

Publisher:

Abstract:

In the last decades, research on knowledge economies has taken centre stage. Within this broader field, research on the role of digital technologies and the creative industries has become increasingly important for researchers, academics and policy makers, with particular focus on their development, supply chains and models of production. Furthermore, many have recognised that, despite the important role played by digital technologies and innovation in the development of the creative industries, these dynamics are hard to capture and quantify. Digital technologies are embedded in the production and market structures of the creative industries yet are also partially distinct and discernible from them. They also seem to play a key role in innovation in the access to and delivery of creative content. This chapter assesses the role played by digital technologies, focusing on a key element of their implementation and application: human capital. Using student micro-data collected by the Higher Education Statistics Agency (HESA) in the United Kingdom, we explore the characteristics and location patterns of graduates who entered the creative industries, specifically comparing graduates in the creative arts with graduates from digital technology subjects. We highlight patterns of geographical specialisation, but also how different contexts are better able to integrate creativity and innovation in their workforce. The chapter deals specifically with understanding whether these skills are uniformly embedded across the creative sector or are concentrated in specific sub-sectors of the creative industries. Furthermore, it explores the role that these graduates play in different sub-sectors of the creative economy, their economic rewards and their geographical determinants.

Relevance:

30.00%

Publisher:

Abstract:

We propose a discontinuous-Galerkin-based immersed boundary method for elasticity problems. The resulting numerical scheme does not require boundary-fitted meshes and avoids boundary locking by switching the elements intersected by the boundary to a discontinuous Galerkin approximation. Special emphasis is placed on the construction of a method that retains an optimal convergence rate in the presence of non-homogeneous essential and natural boundary conditions. The role of each of the approximations introduced is illustrated by analyzing an analogous problem in one spatial dimension. Finally, extensive two- and three-dimensional numerical experiments on linear and nonlinear elasticity problems verify that the proposed method leads to optimal convergence rates under combinations of essential and natural boundary conditions.

Relevance:

30.00%

Publisher:

Abstract:

A numerical method to approximate partial differential equations on meshes that do not conform to the domain boundaries is introduced. The proposed method is conceptually simple and free of user-defined parameters. Starting with a conforming finite element mesh, the key ingredient is to switch those elements intersected by the Dirichlet boundary to a discontinuous-Galerkin approximation and impose the Dirichlet boundary conditions strongly. By virtue of relaxing the continuity constraint at those elements, boundary locking is avoided and optimal-order convergence is achieved. This is shown through numerical experiments on reaction-diffusion problems.
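The element-switching step can be pictured with a small sketch. The Python fragment below is only an illustration of how cut elements might be flagged, assuming the Dirichlet boundary is described by a level-set function that changes sign across it; it is not the authors' implementation.

```python
# Minimal sketch: flag the elements intersected by the Dirichlet boundary,
# assuming the boundary is given implicitly by a level-set function.
import numpy as np

def classify_elements(nodes, elements, level_set):
    """Return a boolean mask: True for elements cut by the boundary.

    nodes     : (n_nodes, dim) array of coordinates
    elements  : (n_elem, nodes_per_elem) connectivity array
    level_set : callable, negative inside the domain, positive outside
    """
    phi = np.array([level_set(x) for x in nodes])   # level-set value at each node
    phi_e = phi[elements]                            # values gathered per element
    cut = (phi_e.min(axis=1) < 0.0) & (phi_e.max(axis=1) > 0.0)
    return cut

# Example: a unit square split into two triangles, boundary given by x = 0.5
nodes = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
elements = np.array([[0, 1, 2], [0, 2, 3]])
cut = classify_elements(nodes, elements, lambda x: x[0] - 0.5)
print(cut)   # both triangles straddle x = 0.5 -> [ True  True ]
```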

Relevance:

30.00%

Publisher:

Abstract:

The aim of this study was to evaluate the influence of the platform-switching technique on stress distribution in the implant, abutment, and peri-implant tissues through a 3-dimensional finite element study. Three 3-dimensional mandibular models were fabricated using the SolidWorks 2006 and InVesalius software. Each model was composed of a bone block with one implant 10 mm long and of different diameters (3.75 and 5.00 mm). The UCLA abutments ranged in diameter from 4.1 mm to 5.00 mm. After the geometries were obtained, the models were transferred to the software FEMAP 10.0 for pre- and postprocessing of finite elements to generate the mesh, loading, and boundary conditions. A total load of 200 N was applied in axial (0 degrees), oblique (45 degrees), and lateral (90 degrees) directions. The models were solved by the software NeiNastran 9.0 and transferred back to FEMAP 10.0 to obtain the results, which were visualized through von Mises and maximum principal stress maps. Model A (implant with 3.75 mm/abutment with 4.1 mm) exhibited the largest area of stress concentration under all loadings (axial, oblique, and lateral) for the implant and the abutment. All models presented stress areas at the abutment level and at the implant/abutment interface. Models B (implant with 5.0 mm/abutment with 5.0 mm) and C (implant with 5.0 mm/abutment with 4.1 mm) presented smaller areas of stress concentration and a similar distribution pattern. For the cortical bone, lower stress concentration was observed in the peri-implant region for models B and C in comparison to model A. The trabecular bone exhibited low stress that was well distributed in models B and C; model A presented the highest stress concentration. Model B exhibited a better stress distribution. There was no significant difference between the large-diameter implants (models B and C).
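For reference, the von Mises equivalent stress used in such stress maps can be computed from a local stress tensor as in the short Python sketch below; the tensor values are purely illustrative and are not taken from the study.

```python
# Hedged sketch: von Mises equivalent stress from a 3-D Cauchy stress tensor.
import numpy as np

def von_mises(sigma):
    """sigma: symmetric 3x3 Cauchy stress tensor (MPa)."""
    s = sigma - np.trace(sigma) / 3.0 * np.eye(3)   # deviatoric part
    return np.sqrt(1.5 * np.tensordot(s, s))        # sqrt(3/2 * s:s)

sigma = np.array([[120.0, 15.0,  0.0],
                  [ 15.0, 40.0,  0.0],
                  [  0.0,  0.0, 10.0]])              # illustrative values only
print(f"von Mises stress: {von_mises(sigma):.1f} MPa")
```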

Relevance:

30.00%

Publisher:

Abstract:

Objective: The non-homogeneous aspect of the periodontal ligament (PDL) has been examined using finite element analysis (FEA) to better simulate PDL behavior. The aim of this study was to assess, by 2-D FEA, the influence of a non-homogeneous PDL on the stress distribution when the free-end saddle removable partial denture (RPD) is partially supported by an osseointegrated implant. Material and Methods: Six finite element (FE) models of a partially edentulous mandible were created to represent two types of PDL (non-homogeneous and homogeneous) and two types of RPD (conventional RPD, supported by tooth and fibromucosa; and modified RPD, supported by tooth and implant [10.00 x 3.75 mm]). Two additional FE models without RPD were used as control models. The non-homogeneous PDL was modeled using beam elements to simulate the crest, horizontal, oblique and apical fibers. The load (50 N) was applied on each cusp simultaneously. Regarding boundary conditions, the border of the alveolar ridge was fixed along the x axis. The FE software (Ansys 10.0) was used to compute the stress fields, and the von Mises stress criterion (σvM) was applied to analyze the results. Results: The peak of σvM in the non-homogeneous PDL was higher than that for the homogeneous condition. The benefits of the implant were enhanced for the non-homogeneous PDL condition, with a drastic σvM reduction on the posterior half of the alveolar ridge. The implant did not reduce the stress on the supporting tooth for either PDL condition. Conclusion: The PDL modeled in the non-homogeneous form increased the benefits of the osseointegrated implant in comparison with the homogeneous condition. With the non-homogeneous PDL, the presence of the osseointegrated implant did not reduce the stress on the supporting tooth.

Relevance:

30.00%

Publisher:

Abstract:

An improvement to a quality two-dimensional Delaunay mesh generation algorithm, combining the mesh refinement strategies of Ruppert and Shewchuk, is proposed in this research. The developed technique uses the diametral lens criterion, introduced by L. P. Chew, with the purpose of eliminating extremely obtuse triangles along the boundary of the mesh. The method splits the boundary segments and obtains an initial pre-refinement, thus reducing the number of iterations necessary to generate a high-quality sequential triangulation. Moreover, it decreases the intensity of communication and synchronization between subdomains in parallel mesh refinement.
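A small sketch of the encroachment test behind the diametral lens criterion may help. It assumes the common characterization that a point lies inside the diametral lens of a segment when the angle it subtends with the segment endpoints is at least 120 degrees; this is an assumption of the sketch, not the paper's exact formulation.

```python
# Minimal sketch of a diametral-lens encroachment test (assumed 120-degree rule).
import numpy as np

def encroaches_lens(a, b, p, angle_deg=120.0):
    """Return True if point p lies inside the diametral lens of segment a-b."""
    a, b, p = map(np.asarray, (a, b, p))
    u, v = a - p, b - p
    cos_apb = np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))
    angle = np.degrees(np.arccos(np.clip(cos_apb, -1.0, 1.0)))
    return angle >= angle_deg

a, b = (0.0, 0.0), (1.0, 0.0)
print(encroaches_lens(a, b, (0.5, 0.10)))  # True: point close to the segment
print(encroaches_lens(a, b, (0.5, 0.45)))  # False: point outside the lens
```

A segment whose lens contains another vertex would then be split, which is the pre-refinement step mentioned above.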

Relevance:

30.00%

Publisher:

Abstract:

Objectives: The aim of this study was to analyze the stress distribution at the dentin/adhesive interface (d/a) through a 3-D finite element analysis (FEA), varying the number and diameter of the dentin tubule orifices according to dentin depth while keeping the hybrid layer (HL) thickness and tag length constant. Materials and Methods: Three models were built with the SolidWorks software: SD - a specimen simulating superficial dentin (41 x 41 x 82 μm), with a 3 μm thick HL, a 17 μm long tag, and 8 tubules with a 0.9 μm diameter, restored with composite resin; MD - similar to SD but with 12 tubules of 1.2 μm diameter, simulating medium dentin; DD - similar to SD but with 16 tubules of 2.5 μm diameter, simulating deep dentin. Two further models were built in which the diameter was kept constant at 2.5 μm: MS - similar to SD, with 8 tubules; and MM - similar to MD, with 12 tubules. The boundary condition was applied to the base surface of each specimen. A tensile load (0.03 N) was applied on the top surface of the composite resin. The stress field (maximum principal stress in tension, σMAX) was obtained using Ansys Workbench 10.0. Results: The peaks of σMAX (MPa) were similar between SD (110) and MD (106), and higher for DD (134). The stress distribution pathway was similar for all models, running from the peritubular dentin to the adhesive layer, the intertubular dentin and the hybrid layer. The peaks of σMAX (MPa) for those structures were, respectively: 134 (DD), 56.9 (SD), 45.5 (DD), and 36.7 (MD). Conclusions: The number of dentin tubules had no influence on σMAX at the dentin/adhesive interface. Peritubular and intertubular dentin showed higher stress under the condition with larger dentin tubule orifices. The σMAX in the hybrid layer and adhesive layer decreased from superficial to deeper dentin. In a failure scenario, the hybrid layer in contact with the peritubular dentin and the adhesive layer is the first region in which the adhesion would break down.
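As a reminder, the maximum principal stress criterion used here is simply the largest eigenvalue of the local stress tensor; the Python sketch below illustrates the computation with hypothetical values, not data from the models.

```python
# Illustrative sketch: maximum principal (most tensile) stress of a 3x3 tensor.
import numpy as np

def max_principal_stress(sigma):
    """Largest principal stress of a symmetric 3x3 stress tensor (MPa)."""
    return np.linalg.eigvalsh(sigma)[-1]   # eigenvalues come back in ascending order

sigma = np.array([[80.0, 20.0,  0.0],
                  [20.0, 30.0, 10.0],
                  [ 0.0, 10.0,  5.0]])      # hypothetical stress state
print(f"sigma_MAX = {max_principal_stress(sigma):.1f} MPa")
```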

Relevance:

30.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

30.00%

Publisher:

Abstract:

The main feature of partition of unity methods such as the generalized or extended finite element method is their ability to utilize a priori knowledge about the solution of a problem in the form of enrichment functions. However, analytical derivation of enrichment functions with good approximation properties is mostly limited to two-dimensional linear problems. This paper presents a procedure to numerically generate proper enrichment functions for three-dimensional problems with confined plasticity where plastic evolution is gradual. The procedure involves the solution of boundary value problems around local regions exhibiting nonlinear behavior and the enrichment of the global solution space with the local solutions through the partition of unity method framework. This approach can produce accurate nonlinear solutions with a reduced computational cost compared to standard finite element methods, since computationally intensive nonlinear iterations can be performed on coarse global meshes after the creation of enrichment functions that properly describe the localized nonlinear behavior. Several three-dimensional nonlinear problems based on the rate-independent J2 plasticity theory with isotropic hardening are solved using the proposed procedure to demonstrate its robustness, accuracy and computational efficiency.
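The partition-of-unity idea behind the enrichment can be illustrated in one dimension: enriched basis functions are products of the standard hat functions and an enrichment function, which in the global-local procedure described above would come from a numerically solved local boundary value problem. The sketch below is schematic only; the enrichment function is a placeholder analytic function, not a computed local solution.

```python
# Schematic 1-D partition-of-unity enrichment (placeholder enrichment function).
import numpy as np

def hat(x, xi, h):
    """Standard linear hat function centered at node xi with spacing h."""
    return np.clip(1.0 - np.abs(x - xi) / h, 0.0, None)

nodes = np.linspace(0.0, 1.0, 5)            # simple 1-D mesh with 5 nodes
h = nodes[1] - nodes[0]
psi = lambda x: np.tanh(20.0 * (x - 0.5))   # placeholder "local solution"

x = np.linspace(0.0, 1.0, 201)
standard = [hat(x, xi, h) for xi in nodes]            # N_i(x)
enriched = [hat(x, xi, h) * psi(x) for xi in nodes]   # N_i(x) * psi(x)
print(len(standard) + len(enriched), "basis functions in the enriched space")
```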

Relevance:

30.00%

Publisher:

Abstract:

The generalized finite element method (GFEM) is applied to a nonconventional hybrid-mixed stress formulation (HMSF) for plane analysis. In the HMSF, three approximation fields are involved: stresses and displacements in the domain, and displacement fields on the static boundary. The GFEM-HMSF shape functions are then generated by the product of a partition of unity associated with each field and polynomial enrichment functions. In principle, the enrichment can be conducted independently over each of the HMSF approximation fields. However, the stability and convergence features of the resulting numerical method can be affected, mainly by spurious modes generated when the enrichment is arbitrarily applied to the displacement fields. With the aim of efficiently exploring the enrichment possibilities, an extension to GFEM-HMSF of the conventional Zienkiewicz patch test is proposed as a necessary condition to ensure numerical stability. Finally, once the extended patch test is satisfied, some numerical analyses focusing on selective enrichment over distorted meshes formed by bilinear quadrilateral finite elements are presented, showing the performance of the GFEM-HMSF combination.

Relevance:

30.00%

Publisher:

Abstract:

In this paper, a sample of planetary nebulae in the Galaxy's inner-disk and bulge is used to find the galactocentric distance that optimally separates these two populations in terms of their abundances. Statistical distance scales were used to investigate the distribution of abundances across the disk–bulge interface, while a Kolmogorov–Smirnov test was used to find the distance at which the chemical properties of these regions separate optimally. The statistical analysis indicates that, on average, the inner population is characterized by lower abundances than the outer component. Additionally, for the α-element abundances, the inner population does not follow the disk's radial gradient toward the Galactic Center. Based on our results, we suggest a bulge–disk interface at 1.5 kpc, marking the transition between the bulge and the inner disk of the Galaxy as defined by the intermediate-mass population.
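A minimal sketch, on synthetic data, of how such an optimal separation radius could be located with a two-sample Kolmogorov-Smirnov test; the actual sample, abundances and the 1.5 kpc result of the paper are not reproduced here.

```python
# Sketch: scan candidate galactocentric radii and keep the one at which the
# two-sample KS statistic between inner and outer abundances is largest.
import numpy as np
from scipy.stats import ks_2samp

rng = np.random.default_rng(0)
# Synthetic sample: galactocentric distance (kpc) and an alpha-element abundance
r = rng.uniform(0.0, 6.0, 400)
abundance = np.where(r < 1.5,
                     rng.normal(8.2, 0.2, 400),   # inner population (lower abundance)
                     rng.normal(8.6, 0.2, 400))   # outer population

best_r, best_stat = None, 0.0
for r_cut in np.arange(0.5, 5.5, 0.1):
    inner, outer = abundance[r < r_cut], abundance[r >= r_cut]
    if len(inner) < 20 or len(outer) < 20:
        continue                                   # skip poorly populated splits
    res = ks_2samp(inner, outer)
    if res.statistic > best_stat:
        best_r, best_stat = r_cut, res.statistic

print(f"optimal separation radius ~ {best_r:.1f} kpc (KS statistic {best_stat:.2f})")
```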

Relevance:

30.00%

Publisher:

Abstract:

Hermite interpolation is increasingly proving to be a powerful numerical tool, as applied to different kinds of second-order boundary value problems. In this work we present two Hermite finite element methods to solve viscous incompressible flow problems, in both two- and three-dimensional space. In the two-dimensional case we use the Zienkiewicz triangle to represent the velocity field, and in the three-dimensional case an extension of this element to tetrahedra, still called a Zienkiewicz element. Taking the Stokes system as a model, the pressure is approximated with continuous functions, either piecewise linear or piecewise quadratic, according to the version of the Zienkiewicz element in use, that is, with either incomplete or complete cubics. The methods employ either the standard Galerkin formulation or the Petrov–Galerkin formulation first proposed in Hughes et al. (1986) [18], based on the addition of a balance-of-force term. A priori error analyses point to optimal convergence rates for the Petrov–Galerkin approach, and for the Galerkin formulation too, at least in some particular cases. From the point of view of both accuracy and the global number of degrees of freedom, the new methods are shown to have a favorable cost-benefit ratio compared with velocity Lagrange finite elements of the same order, especially if the Galerkin approach is employed.
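As a reminder of the underlying idea (this is a generic one-dimensional illustration, not the Zienkiewicz triangle or tetrahedron of the paper), cubic Hermite interpolation matches both nodal values and nodal derivatives:

```python
# Sketch: cubic Hermite basis on the reference interval [0, 1].
import numpy as np

def hermite_basis(t):
    """Cubic Hermite basis: value/slope functions at t = 0 and t = 1."""
    return np.array([2*t**3 - 3*t**2 + 1,   # interpolates the value at t = 0
                     t**3 - 2*t**2 + t,     # interpolates the derivative at t = 0
                     -2*t**3 + 3*t**2,      # interpolates the value at t = 1
                     t**3 - t**2])          # interpolates the derivative at t = 1

# Interpolate a function with values (1, 2) and derivatives (0, -1) at the endpoints
coeffs = np.array([1.0, 0.0, 2.0, -1.0])
t = np.linspace(0.0, 1.0, 5)
print(hermite_basis(t).T @ coeffs)
```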

Relevance:

30.00%

Publisher:

Abstract:

The formation of mid-ocean ridge basalts (MORB) is one of the most important material fluxes on Earth. Every year, more than 20 km³ of new magmatic crust is formed along the 75,000 km long mid-ocean ridge system, roughly 90 percent of global magma production. Although ocean ridges and MORB are among the most intensively studied geological topics, several controversies remain. Among the most important are the role of geodynamic boundary conditions, such as spreading rate or proximity to hotspots or transform faults, as well as the absolute degree of melting and the depth at which melting begins beneath the ridges. This dissertation addresses these topics on the basis of major and trace element compositions of minerals in oceanic mantle rocks. Geochemical characteristics of MORB indicate that the oceanic mantle begins to melt in the stability field of garnet peridotite. Recent experiments, however, show that the heavy rare earth elements (REE) are compatible in clinopyroxene (Cpx). Because of this garnet-like property of Cpx, garnet is no longer required to explain the MORB data, which shifts the onset of melting to lower pressures. It is therefore important to test whether this hypothesis can be reconciled with data from abyssal peridotites. These mantle fragments, exposed on the ocean floor, represent the residues of the melting process, and their mineral chemistry carries information about the conditions under which the magmas formed. Major and trace element compositions of peridotite samples from the Central Indian Ridge (CIR) were determined by electron microprobe and ion probe and compared with published data. Cpx in the CIR peridotites shows low ratios of middle to heavy REE and high absolute concentrations of the heavy REE. Melting models for a spinel peridotite using conventional, incompatible partition coefficients (Kd's) cannot reproduce the measured middle-to-heavy REE fractionations. Applying the new Kd's, which predict compatible behavior of the heavy REE in Cpx, gives better results but still cannot explain the most strongly fractionated samples; moreover, very high degrees of melting would be required, which cannot be reconciled with the major element data. Low (~3-5%) degrees of melting in the garnet peridotite stability field, followed by further melting of spinel peridotite, can, however, largely explain the observations. Garnet must therefore still be regarded as an important phase in the genesis of MORB (Chapter 1).

A further obstacle to a quantitative understanding of melting processes beneath mid-ocean ridges is the lack of correlation between major and trace elements in residual abyssal peridotites. The Cr/(Cr+Al) ratio (Cr#) in spinel is generally regarded as a good qualitative indicator of the degree of melting. The mineral chemistry of the CIR peridotites and published data from other abyssal peridotites show that the heavy REE correlate very well (r² ~ 0.9) with the Cr# of the coexisting spinel. Evaluating this correlation yields a quantitative melting indicator for residues based on spinel chemistry, so that the degree of melting can be expressed as a function of Cr# in spinel: F = 0.10×ln(Cr#) + 0.24 (Hellebrand et al., Nature, in review; Chapter 2). Applying this indicator to mantle samples for which no ion probe data are available makes it possible to link geochemical and geophysical data.

From a geodynamic perspective, the Gakkel Ridge in the Arctic Ocean is of great importance for understanding melting processes, since it has the lowest spreading rate worldwide and lacks large transform faults. Published basalt data point to an extremely low degree of melting, consistent with global correlations. Strongly altered mantle peridotites from one locality along the sparsely sampled Gakkel Ridge were therefore examined for primary minerals. Only in one sample are oxidized spinel pseudomorphs with traces of primary spinel preserved. Their Cr# is significantly higher than that of some peridotites from faster-spreading ridges, and their degree of melting is thus higher than the basalt compositions would suggest. The degree of melting obtained with the indicator mentioned above allows the crustal thickness at the Gakkel Ridge to be calculated. This thickness is substantially greater than the thickness derived from gravity data, or than the seismically determined crustal thickness implied by the global correlation with spreading rate. This unexpected result may reflect compositional heterogeneities at low degrees of melting, or an overall greater depletion of the mantle beneath the Gakkel Ridge (Hellebrand et al., Chem. Geol., in review; Chapter 3). Additional information on the modelling and the analytical methods is given in Appendices A-C.
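A worked example of the spinel-based melting indicator quoted above; the Cr# value is illustrative only.

```python
# Worked example of F = 0.10 * ln(Cr#) + 0.24 for an illustrative spinel Cr#.
import math

def melting_degree(cr_number):
    """Fractional degree of melting F from the spinel Cr# = Cr/(Cr+Al)."""
    return 0.10 * math.log(cr_number) + 0.24

print(f"Cr# = 0.30  ->  F = {melting_degree(0.30):.2f}")   # ~0.12, i.e. about 12 % melting
```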

Relevance:

30.00%

Publisher:

Abstract:

In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the Energy-Transport models that are later simulated in 2D. In this class of models, the flow of charged particles - negatively charged electrons and so-called holes, which are quasi-particles of positive charge - as well as their energy distributions are described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, as well as by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary - the contacts. The numerical method considered here uses mixed finite elements as trial functions for the discrete solution. The continuous discretization of the normal fluxes is the most important property of this discretization from the user's perspective. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators, and a comparison of different estimators is performed. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements. For a model problem we show how this method can deliver promising results even when standard error estimators fail completely to reduce the error in an iterative mesh refinement process.
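The overall adaptive strategy follows the usual solve - estimate - mark - refine cycle. The sketch below is a generic illustration of that loop with placeholder callbacks and a fixed-fraction marking rule; it is not the mixed finite element implementation or the error estimators of the thesis.

```python
# Generic adaptive refinement loop with placeholder solver, estimator and refiner.
import numpy as np

def adaptive_loop(solve, estimate, refine, mesh, tol=1e-3, frac=0.3, max_it=10):
    for it in range(max_it):
        solution = solve(mesh)
        eta = estimate(mesh, solution)            # local a posteriori indicators
        total = np.sqrt(np.sum(eta**2))
        print(f"iteration {it}: estimated error {total:.3e}")
        if total < tol:
            break
        # Fixed-fraction marking: refine the elements with the largest indicators.
        n_mark = max(1, int(frac * len(eta)))
        marked = np.argsort(eta)[-n_mark:]
        mesh = refine(mesh, marked)
    return mesh, solution

# Dummy problem: "mesh" is just a cell count and the indicators shrink as it grows,
# purely to show how the loop drives the estimated error down.
dummy_solve = lambda mesh: None
dummy_estimate = lambda mesh, sol: np.full(mesh, 1.0 / mesh)
dummy_refine = lambda mesh, marked: mesh + len(marked)
adaptive_loop(dummy_solve, dummy_estimate, dummy_refine, mesh=10)
```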

Relevance:

30.00%

Publisher:

Abstract:

This thesis deals with the study of optimal control problems for the incompressible magnetohydrodynamics (MHD) equations. Particular attention to these problems arises from several applications in science and engineering, such as nuclear fission reactors with liquid metal coolant and aluminum casting in metallurgy. In such applications it is of great interest to achieve control of the fluid state variables through the action of the magnetic Lorentz force. In this thesis we investigate a class of boundary optimal control problems in which the flow is controlled through the boundary conditions of the magnetic field. Due to their complexity, these problems present various challenges in the definition of an adequate solution approach, from both a theoretical and a computational point of view. We propose a new boundary control approach, based on lifting functions of the boundary conditions, which yields both theoretical and numerical advantages. With the introduction of lifting functions, boundary control problems can be formulated as extended distributed problems. We consider a systematic mathematical formulation of these problems in terms of the minimization of a cost functional constrained by the MHD equations. The existence of a solution to the flow equations and to the optimal control problem is shown. The Lagrange multiplier technique is used to derive an optimality system from which candidate solutions of the control problem can be obtained. In order to achieve the numerical solution of this system, a finite element approximation is considered for the discretization, together with an appropriate gradient-type algorithm. A finite element object-oriented library has been developed to obtain a parallel and multigrid computational implementation of the optimality system based on a multiphysics approach. Numerical results of two- and three-dimensional computations show that a possible minimum for the control problem can be computed in a robust and accurate manner.
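The gradient-type algorithm mentioned above typically alternates between solving the state equations, solving the adjoint equations obtained from the optimality system, and updating the control with a descent step. The following Python fragment is only a schematic illustration of that loop with placeholder solvers and a toy quadratic cost; it is not the multigrid finite element implementation of the thesis.

```python
# Schematic gradient-type loop for a reduced optimal control problem.
import numpy as np

def optimize_control(solve_state, solve_adjoint, gradient, control,
                     step=0.1, tol=1e-6, max_it=50):
    for it in range(max_it):
        state = solve_state(control)               # forward (state) solve
        adjoint = solve_adjoint(state, control)    # adjoint solve
        g = gradient(state, adjoint, control)      # reduced gradient
        if np.linalg.norm(g) < tol:
            break
        control = control - step * g               # steepest-descent update
    return control

# Toy stand-in for the reduced cost functional: J(c) = 0.5 * ||c - target||^2,
# whose gradient is simply c - target (no MHD physics involved here).
target = np.array([1.0, -2.0, 0.5])
ctrl = optimize_control(lambda c: c,
                        lambda s, c: None,
                        lambda s, a, c: c - target,
                        control=np.zeros(3))
print(ctrl)   # converges toward the target control
```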