993 results for 2D elasticity problems
Plane wave discontinuous Galerkin methods for the 2D Helmholtz equation: analysis of the $p$-version
Abstract:
Plane wave discontinuous Galerkin (PWDG) methods are a class of Trefftz-type methods for the spatial discretization of boundary value problems for the Helmholtz operator $-\Delta-\omega^2$, $\omega>0$. They include the so-called ultra weak variational formulation from [O. Cessenat and B. Després, SIAM J. Numer. Anal., 35 (1998), pp. 255–299]. This paper is concerned with the a priori convergence analysis of PWDG in the case of $p$-refinement, that is, the study of the asymptotic behavior of relevant error norms as the number of plane wave directions in the local trial spaces is increased. For convex domains in two space dimensions, we derive convergence rates, employing mesh skeleton-based norms, duality techniques from [P. Monk and D. Wang, Comput. Methods Appl. Mech. Engrg., 175 (1999), pp. 121–136], and plane wave approximation theory.
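For context, PWDG trial spaces are spanned element-wise by propagative plane waves, each of which satisfies the homogeneous Helmholtz equation exactly (the Trefftz property). A standard choice, sketched here under the assumption of equispaced directions (the abstract does not fix a particular direction set), is:

```latex
% Local Trefftz trial space on an element K: p plane waves with
% equispaced propagation directions d_j on the unit circle.
V_p(K) = \operatorname{span}\left\{ e^{i\omega\, d_j \cdot x} \;:\;
   d_j = (\cos\theta_j,\ \sin\theta_j),\quad
   \theta_j = \tfrac{2\pi j}{p},\quad j = 0,\dots,p-1 \right\},
\qquad (-\Delta - \omega^2)\, e^{i\omega\, d_j \cdot x} = 0 .
```

$p$-refinement in this setting means enlarging the number of directions $p$ per element while keeping the mesh fixed.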
Abstract:
We extend extreme learning machine (ELM) classifiers to complex Reproducing Kernel Hilbert Spaces (RKHS) where the input/output variables as well as the optimization variables are complex-valued. A new family of classifiers, called complex-valued ELM (CELM) and suitable for complex-valued multiple-input–multiple-output processing, is introduced. In the proposed method, the associated Lagrangian is computed using induced RKHS kernels, adopting a Wirtinger calculus approach formulated as a constrained optimization problem, similarly to the conventional ELM classifier formulation. When training the CELM, the Karush–Kuhn–Tucker (KKT) theorem is used to solve the dual optimization problem, which consists of simultaneously satisfying the smallest-training-error and smallest-output-weight-norm criteria. The proposed formulation also addresses aspects of quaternary classification within a Clifford algebra context. For 2D complex-valued inputs, user-defined complex-coupled hyper-planes divide the classifier input space into four partitions. For 3D complex-valued inputs, the formulation generates three pairs of complex-coupled hyper-planes through orthogonal projections. The six hyper-planes then divide the 3D space into eight partitions. It is shown that the CELM problem formulation is equivalent to solving six real-valued ELM tasks, which are induced by projecting the chosen complex kernel across the different user-defined coordinate planes. A classification example of powdered samples on the basis of their terahertz spectral signatures is used to demonstrate the advantages of the CELM classifiers compared to their SVM counterparts. The proposed classifiers retain the advantages of their ELM counterparts, in that they can perform multiclass classification with lower computational complexity than SVM classifiers. Furthermore, because of their ability to perform classification tasks fast, the proposed formulations are of interest for real-time applications.
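As background for the Wirtinger-calculus step mentioned above (the identities below are standard; their role in deriving the CELM dual is summarized from the abstract): for $z = x + iy$, the Wirtinger derivatives treat $z$ and $\bar z$ as independent variables,

```latex
\frac{\partial f}{\partial z} = \frac{1}{2}\!\left(\frac{\partial f}{\partial x} - i\,\frac{\partial f}{\partial y}\right),
\qquad
\frac{\partial f}{\partial \bar z} = \frac{1}{2}\!\left(\frac{\partial f}{\partial x} + i\,\frac{\partial f}{\partial y}\right),
```

and a real-valued cost $J(w,\bar w)$ is stationary exactly when $\partial J / \partial \bar w = 0$, which is the usual route to KKT conditions in complex-valued constrained optimization.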
Abstract:
Given a fixed set of identical or different-sized circular items, the problem we deal with consists in finding the smallest object within which the items can be packed. Circular, triangular, square, rectangular and also strip objects are considered. Moreover, 2D and 3D problems are treated. Twice-differentiable models for all these problems are presented. A strategy to reduce the complexity of evaluating the models is employed and, as a consequence, instances with a large number of items can be considered. Numerical experiments show the flexibility and reliability of the new unified approach.
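A minimal sketch of such a twice-differentiable model, for the simplest case of packing circles into a smallest enclosing circle (item radii, solver choice and starting point are illustrative, not taken from the paper):

```python
# Sketch (not the paper's exact models): smallest enclosing circle for
# fixed circular items, posed as a smooth, twice-differentiable NLP.
import numpy as np
from scipy.optimize import minimize

radii = np.array([1.0, 1.0, 0.6, 0.4])   # hypothetical item radii
n = len(radii)

def unpack(v):
    # decision vector: item centres (x_i, y_i) followed by container radius R
    return v[:-1].reshape(n, 2), v[-1]

def objective(v):
    return v[-1]                          # minimise container radius R

def constraints(v):
    c, R = unpack(v)
    cons = []
    # containment: |c_i|^2 <= (R - r_i)^2, smooth form (assumes R >= r_i)
    for i in range(n):
        cons.append((R - radii[i])**2 - c[i] @ c[i])
    # non-overlap: |c_i - c_j|^2 >= (r_i + r_j)^2
    for i in range(n):
        for j in range(i + 1, n):
            d = c[i] - c[j]
            cons.append(d @ d - (radii[i] + radii[j])**2)
    return np.array(cons)

v0 = np.concatenate([np.random.uniform(-1.0, 1.0, 2 * n), [4.0]])
res = minimize(objective, v0, method='SLSQP',
               constraints={'type': 'ineq', 'fun': constraints})
centres, R = unpack(res.x)
print("container radius:", R)
```

Because every constraint is polynomial in the unknowns, the model is twice differentiable, which is what allows second-order NLP solvers to be applied.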
Abstract:
Due to health problems and the negative externalities associated with cigarette consumption, many governments try to discourage cigarette consumption by increasing its price through taxation. However, cigarettes, like other addictive goods, are often viewed as insensitive to demand rules and market forces. This study analyses the effect of price increases on cigarette consumption, using Swedish time series data from 1970 to 2010. Our results reveal that although cigarettes are an addictive substance, their demand is sensitive to changes in price. Estimates from this study indicate a short-run price elasticity of -0.29 and a long-run price elasticity of -0.47.
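For reference, the price elasticity of demand is the percentage change in consumption per one-percent change in price. In a constant-elasticity (log-log) demand specification with partial adjustment, a common way to obtain such short- and long-run estimates (the abstract does not state the exact model used), the coefficient on log price is the short-run elasticity:

```latex
\varepsilon = \frac{\partial \ln Q}{\partial \ln P},
\qquad
\ln Q_t = \alpha + \varepsilon \ln P_t + \lambda \ln Q_{t-1} + u_t,
\qquad
\varepsilon_{\text{long run}} = \frac{\varepsilon}{1 - \lambda}.
```

Read this way, a short-run elasticity of -0.29 means a 10% price increase reduces consumption by roughly 2.9% in the short run, with the effect growing to about 4.7% in the long run.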
Abstract:
A constraint satisfaction problem is a classical artificial intelligence paradigm characterized by a set of variables (each variable with an associated domain of possible values) and a set of constraints that specify relations among subsets of these variables. Solutions are assignments of values to all variables that satisfy all the constraints. Many real world problems may be modelled by means of constraints. The range of problems that can use this representation is very diverse and embraces areas like resource allocation, scheduling, timetabling or vehicle routing. Constraint programming is a form of declarative programming in the sense that, instead of specifying a sequence of steps to execute, it relies on properties of the solutions to be found, which are explicitly defined by constraints. The idea of constraint programming is to solve problems by stating constraints which must be satisfied by the solutions. Constraint programming is based on specialized constraint solvers that take advantage of constraints to search for solutions. The success and popularity of complex problem solving tools can be greatly enhanced by the availability of friendly user interfaces. User interfaces cover two fundamental areas: receiving information from the user and communicating it to the system; and getting information from the system and delivering it to the user. Despite its potential impact, adequate user interfaces are uncommon in constraint programming in general. The main goal of this project is to develop a graphical user interface that allows the user to intuitively represent constraint satisfaction problems. The idea is to visually represent the variables of the problem, their domains and the problem constraints, and to enable the user to interact with an adequate constraint solver to process the constraints and compute the solutions. Moreover, the graphical interface should be capable of configuring the solver's parameters and presenting solutions in an appealing interactive way. As a proof of concept, the developed application – GraphicalConstraints – focuses on continuous constraint programming, which deals with real valued variables and numerical constraints (equations and inequalities). RealPaver, a state-of-the-art solver in continuous domains, was used in the application. The graphical interface supports all stages of constraint processing, from the design of the constraint network to the presentation of the resulting feasible-space solutions as 2D or 3D boxes.
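To illustrate the kind of continuous constraint solving that such a solver performs (this is a toy branch-and-prune sketch, not RealPaver's algorithm or API), feasible boxes for a numerical constraint can be computed by recursively bisecting variable domains and discarding boxes whose interval evaluation proves infeasibility:

```python
# Toy branch-and-prune over 2D boxes for the constraint x^2 + y^2 <= 1.
# Interval arithmetic is reduced to the minimum needed for this example.

def sq_range(lo, hi):
    """Interval image of t -> t^2 on [lo, hi]."""
    cands = [lo * lo, hi * hi]
    return (0.0 if lo <= 0.0 <= hi else min(cands)), max(cands)

def solve(box, eps=0.05, out=None):
    out = [] if out is None else out
    (xl, xu), (yl, yu) = box
    lo = sq_range(xl, xu)[0] + sq_range(yl, yu)[0]
    hi = sq_range(xl, xu)[1] + sq_range(yl, yu)[1]
    if lo > 1.0:                     # whole box infeasible: prune
        return out
    if hi <= 1.0:                    # whole box feasible: keep as a solution box
        out.append(box)
        return out
    if max(xu - xl, yu - yl) < eps:  # undecided but small: keep as boundary box
        out.append(box)
        return out
    if xu - xl >= yu - yl:           # bisect the widest variable domain
        m = 0.5 * (xl + xu)
        solve(((xl, m), (yl, yu)), eps, out)
        solve(((m, xu), (yl, yu)), eps, out)
    else:
        m = 0.5 * (yl + yu)
        solve(((xl, xu), (yl, m)), eps, out)
        solve(((xl, xu), (m, yu)), eps, out)
    return out

boxes = solve(((-2.0, 2.0), (-2.0, 2.0)))
print(len(boxes), "feasible/boundary boxes")  # the 2D boxes a GUI would draw
```

The boxes returned by such a solver are exactly the kind of 2D/3D feasible-space output that the GraphicalConstraints interface is meant to display interactively.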
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Statement of problem. Two problems found in prostheses with soft liners are bond failure to the acrylic resin base and loss of elasticity due to material aging. Purpose. This in vitro study evaluated the effect of thermocycling on the bond strength and elasticity of 4 long-term soft denture liners to acrylic resin bases. Material and methods. Four soft lining materials (Molloplast-B, Flexor, Permasoft, and Pro Tech) and 2 acrylic resins (Classico and Lucitone 199) were processed for testing according to manufacturers' instructions. Twenty rectangular specimens (10 × 10-mm cross-sectional area) and twenty cylindrical specimens (12.7-mm diameter × 19.0-mm height) for each liner/resin combination were used for the tensile and deformation tests, respectively. Specimen shape and liner thickness were standardized. Samples were divided into a test group that was thermocycled 3000 times and a control group that was stored for 24 hours in water at 37°C. Mean bond strength, expressed in megapascals (MPa), was determined in the tensile test with the use of a universal testing machine at a crosshead speed of 5 mm/min. Elasticity, expressed as percent of permanent deformation, was calculated with an instrument for measuring permanent deformation described in ADA/ANSI specification 18. Data from both tests were examined with 1-way analysis of variance and a Tukey test, with calculation of a Scheffé interval at a 95% confidence level. Results. In the tensile test under control conditions, Molloplast-B (1.51 ± 0.28 MPa [mean ± SD]) and Pro Tech (1.44 ± 0.27 MPa) liners had higher bond strength values than the others (P < .05). With regard to the permanent deformation test, the lowest values were observed for Molloplast-B (0.48% ± 0.19%) and Flexor (0.44% ± 0.14%) (P < .05). Under thermocycling conditions, the highest bond strength occurred with Molloplast-B (1.37 ± 0.24 MPa) (P < .05). With regard to the deformation test, Flexor (0.46% ± 0.13%) and Molloplast-B (0.44% ± 0.17%) liners had lower deformation values than the others (P < .05). Conclusion. The results of this in vitro study indicated that bond strength and permanent deformation values of the 4 soft denture liners tested varied according to their chemical composition. These tests are not completely valid for application to dental restorations because the forces they encounter are more closely related to shear and tear. However, the above protocol serves as a good method of investigation to evaluate differences between thermocycled and control groups.
Abstract:
This paper is concerned with an overview of upwinding schemes, and further nonlinear applications of a recently introduced high-resolution upwind differencing scheme, namely ADBQUICKEST [V.G. Ferreira, F.A. Kurokawa, R.A.B. Queiroz, M.K. Kaibara, C.M. Oishi, J.A. Cuminato, A.F. Castelo, M.F. Tomé, S. McKee, Assessment of a high-order finite difference upwind scheme for the simulation of convection-diffusion problems, International Journal for Numerical Methods in Fluids 60 (2009) 1-26]. The ADBQUICKEST scheme is a new TVD version of the QUICKEST scheme [B.P. Leonard, A stable and accurate convective modeling procedure based on quadratic upstream interpolation, Computer Methods in Applied Mechanics and Engineering 19 (1979) 59-98] for solving nonlinear balance laws. The scheme is based on the normalized variable (NV) and TVD formalisms and satisfies a convective boundedness criterion. The accuracy of the scheme is compared with other popular convective upwinding schemes (see, for example, Roe (1985) [19], Van Leer (1974) [18] and Arora & Roe (1997) [17]) for solving nonlinear conservation laws (for example, the Buckley-Leverett, shallow water and Euler equations). The ADBQUICKEST scheme is then used to solve six types of fluid flow problems of increasing complexity: namely, 2D aerosol filtration by fibrous filters; axisymmetric flow in a tubular membrane; 2D two-phase flow in a fluidized bed; the 2D compressible Orszag-Tang MHD vortex; an axisymmetric jet onto a flat surface at low Reynolds number; and full 3D incompressible flows involving moving free surfaces. The numerical simulations indicate that this convective upwinding scheme is a good generic alternative for solving complex fluid dynamics problems.
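The ADBQUICKEST formula itself is not given in the abstract; as a generic illustration of the TVD ingredient, here is a minimal slope-limited upwind scheme (minmod limiter, MUSCL-type) for 1D linear advection, a stand-in for, not a reproduction of, the ADBQUICKEST scheme:

```python
# Generic TVD upwind scheme for u_t + a u_x = 0, a > 0, periodic BCs.
# Illustrates flux limiting only; this is NOT the ADBQUICKEST formula.
import numpy as np

def minmod(a, b):
    return np.where(a * b > 0.0,
                    np.sign(a) * np.minimum(np.abs(a), np.abs(b)), 0.0)

def tvd_step(u, a, dx, dt):
    """One step of a MUSCL/minmod scheme; requires nu = a*dt/dx <= 1."""
    nu = a * dt / dx
    slope = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)  # limited cell slopes
    flux = a * (u + 0.5 * (1.0 - nu) * slope)              # F_{i+1/2}
    return u - (dt / dx) * (flux - np.roll(flux, 1))       # conservative update

# square-wave test: the limiter keeps the profile bounded (no over/undershoot)
x = np.linspace(0.0, 1.0, 200, endpoint=False)
u = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)
for _ in range(100):
    u = tvd_step(u, a=1.0, dx=x[1] - x[0], dt=0.4 * (x[1] - x[0]))
print(u.min(), u.max())  # stays within [0, 1]
```

The convective boundedness criterion mentioned above plays the same role as the limiter here: it prevents the high-order flux from creating new extrema near discontinuities.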
Abstract:
The Gaussian Beam (GB) is an asymptotic solution of the elastodynamic equation in the paraxial vicinity of a central ray, which approximates the wavefield better than the zeroth-order approximation of ray theory. The regularity of the GB in describing the wavefield, as well as its high accuracy in some singular regions of the propagation medium, makes it a strong alternative for seismic imaging. This dissertation presents a new true-amplitude pre-stack depth migration procedure that combines the flexibility of Kirchhoff-type migration with the robustness of migration based on Gaussian Beams for the representation of the wavefield. The proposed migration algorithm consists of two stacking processes: the first is a beam stack applied to subsets of seismic data multiplied by a weight function defined so that the stacking operator has the same form as the Gaussian Beam superposition integral; the second stack corresponds to a Kirchhoff migration whose input is the data resulting from the first stack. Hence the name Kirchhoff-Gaussian-Beam (KGB) migration. In order to compare the Kirchhoff and KGB methods with respect to their sensitivity to the discretization length, we applied four velocity grids (60 m, 80 m, 100 m and 150 m) to the well-known Marmousi 2-D dataset. As a result, both methods produce a much better image for the smallest discretization interval of the velocity grid. The amplitude spectrum of the migrated sections provides the spatial frequency content of the resulting image sections.
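As a schematic of the second stage only (the Kirchhoff summation), here is a constant-velocity, zero-offset diffraction stack; the weighted beam-stack preprocessing and the true-amplitude weights of the actual KGB method are omitted, and all names and parameters are illustrative:

```python
# Schematic Kirchhoff (diffraction-stack) depth migration for zero-offset
# data with constant velocity v. In the KGB method described above, the
# input would be the output of the weighted beam stack.
import numpy as np

def kirchhoff_zo(data, dt, dx, v, nz, dz):
    """data[ix, it]: zero-offset traces; returns image[iz, ix]."""
    nx, nt = data.shape
    image = np.zeros((nz, nx))
    xs = np.arange(nx) * dx
    for iz in range(1, nz):
        z = iz * dz
        for ix_img, x_img in enumerate(xs):
            # two-way traveltime from surface point xs to image point (x_img, z)
            t = 2.0 * np.sqrt((xs - x_img) ** 2 + z ** 2) / v
            it = np.rint(t / dt).astype(int)
            ok = it < nt
            # sum amplitudes along the diffraction hyperbola
            image[iz, ix_img] = data[np.arange(nx)[ok], it[ok]].sum()
    return image
```

In the full method, replacing the raw traces by beam-stacked data is what gives the operator the form of the Gaussian Beam superposition integral.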
Abstract:
This paper deals with topology optimization of plane linear-elastic problems, considering the influence of self-weight on the efforts in structural elements. For this purpose, a numerical technique called SESO (Smooth ESO) is used, which is based on a progressive reduction of the stiffness contribution of inefficient elements at lower stresses until they no longer have any influence. SESO is applied with the finite element method, using a high-order triangular finite element. This paper extends the SESO technique to account for self-weight: when computing the volume and specific weight, the program automatically generates an equivalent concentrated force at each node of the element. The evaluation concludes with the definition of a strut-and-tie model resulting from the regions of stress concentration. Examples are presented in which optimal topologies are obtained.
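A minimal sketch of the kind of rejection step involved, assuming a generic ESO-style loop with a smoothed removal rule (the exact SESO weighting and the FE solve are not given in the abstract; the function below is a hypothetical illustration):

```python
# Schematic ESO-style rejection step with a smoothed (SESO-like) ramp.
# Element stresses would come from a finite element solve, stubbed out here.
import numpy as np

def seso_like_update(stress, density, rr=0.02, smooth_band=0.25):
    """Progressively reduce the contribution of low-stress elements.
    stress:      von Mises stress per element (from an FE solve)
    density:     current element 'presence' factors in [0, 1]
    rr:          rejection ratio relative to the maximum stress
    smooth_band: fraction of the threshold over which removal is smoothed
    """
    threshold = rr * stress.max()
    # classical ESO would zero the density below the threshold; a smooth
    # variant ramps the factor down linearly inside a band instead
    ramp = np.clip((stress - (1.0 - smooth_band) * threshold)
                   / (smooth_band * threshold), 0.0, 1.0)
    return np.where(stress >= threshold, density, density * ramp)
```

Iterating such updates with repeated FE analyses (and, per the extension above, nodal forces regenerated from the current volume and specific weight) drives the structure toward the strut-and-tie-like optimum.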
Abstract:
A direct reconstruction algorithm for complex conductivities in $W^{2,\infty}(\Omega)$, where $\Omega$ is a bounded, simply connected Lipschitz domain in $\mathbb{R}^2$, is presented. The framework is based on the uniqueness proof by Francini (2000 Inverse Problems 16 107-19), but equations relating the Dirichlet-to-Neumann map to the scattering transform and the exponentially growing solutions are not present in that work, and are derived here. The algorithm constitutes the first D-bar method for the reconstruction of conductivities and permittivities in two dimensions. Reconstructions of numerically simulated chest phantoms with discontinuities at the organ boundaries are included.
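For orientation, in the real-conductivity setting the D-bar method reconstructs via the equation below, which couples the exponentially growing (complex geometrical optics) solutions to the scattering transform $\mathbf{t}(k)$; the complex-conductivity case treated here uses a first-order system in Francini's framework, but the structure is analogous:

```latex
\overline{\partial}_{k}\,\mu(x,k)
  = \frac{\mathbf{t}(k)}{4\pi \bar{k}}\, e_{-x}(k)\, \overline{\mu(x,k)},
\qquad
e_{x}(k) := e^{\,i\,(kx + \bar{k}\bar{x})},
```

where $x$ and $k$ are identified with points of the complex plane and $\mu$ is the CGO solution normalized so that $\mu \to 1$ at infinity.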
A 2D BEM-FEM approach for time harmonic fluid-structure interaction analysis of thin elastic bodies.
Abstract:
This paper deals with two-dimensional time harmonic fluid-structure interaction problems when the fluid is at rest and the elastic bodies have small thicknesses. A BEM-FEM numerical approach is used, where the BEM is applied to the fluid and the structural FEM is applied to the thin elastic bodies.
Abstract:
In this work we develop and analyze an adaptive numerical scheme for simulating a class of macroscopic semiconductor models. First, the numerical modelling of semiconductors is reviewed in order to classify the energy-transport models for semiconductors that are later simulated in 2D. In this class of models, the flow of charged particles (negatively charged electrons and so-called holes, which are quasi-particles of positive charge) as well as their energy distributions is described by a coupled system of nonlinear partial differential equations. A considerable difficulty in simulating these convection-dominated equations is posed by the nonlinear coupling, as well as by the fact that local phenomena such as "hot electron effects" are only partially assessable through the given data. The primary variables used in the simulations are the particle density and the particle energy density. The user of these simulations is mostly interested in the current flow through parts of the domain boundary - the contacts. The numerical method considered here utilizes mixed finite elements as trial functions for the discrete solution. The continuous discretization of the normal fluxes is the most important property of this discretization from the user's perspective. It is proven that, under certain assumptions on the triangulation, the particle density remains positive in the iterative solution algorithm. Connected to this result, an a priori error estimate for the discrete solution of linear convection-diffusion equations is derived. The local charge transport phenomena are resolved by an adaptive algorithm based on a posteriori error estimators, and a comparison of different estimators is performed. Additionally, a method to effectively estimate the error in local quantities derived from the solution, so-called "functional outputs", is developed by transferring the dual weighted residual method to mixed finite elements. For a model problem we show how this method can deliver promising results even where standard error estimators fail completely to reduce the error in an iterative mesh refinement process.
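For background, the dual weighted residual (DWR) method estimates the error in a functional output $J$ (here, e.g., the contact current) by weighting the primal residual with the solution $z$ of a dual (adjoint) problem; schematically:

```latex
J(u) - J(u_h) \;\approx\; \rho(u_h)(z - z_h)
  \;=\; \sum_{K \in \mathcal{T}_h} \rho_K(u_h)\,\omega_K(z),
```

where $\rho_K$ are element residuals and $\omega_K$ the dual weights; elements with large products $\rho_K\,\omega_K$ are the ones refined, so the mesh is adapted to the output of interest rather than to a global norm.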
Abstract:
The Factorization Method localizes inclusions inside a body from measurements on its surface. Without a priori knowledge of the physical parameters inside the inclusions, the points belonging to them can be characterized using the range of an auxiliary operator. The method relies on a range characterization that relates the range of the auxiliary operator to the measurements, and this characterization is only known for very particular applications. In this work we develop a general framework for the method by considering symmetric and coercive operators between abstract Hilbert spaces. We show that the important range characterization holds if the difference between the inclusions and the background medium satisfies a coerciveness condition, which can immediately be translated into a condition on the coefficients of a given real elliptic problem. We demonstrate how several known applications of the Factorization Method are covered by our general results and deduce the range characterization for a new example in linear elasticity.
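As a schematic of the range characterization in, for instance, the electrical impedance setting (notation assumed for illustration, not taken from the abstract): a point $z$ belongs to an inclusion $D$ if and only if the boundary trace of a dipole potential at $z$ lies in the range of the square root of the measurement operator,

```latex
z \in D
\quad\Longleftrightarrow\quad
\Phi_z \in \mathcal{R}\!\left(\,\bigl|\Lambda - \Lambda_0\bigr|^{1/2}\right),
```

where $\Lambda$ and $\Lambda_0$ are the boundary measurement operators with and without the inclusions and $\Phi_z$ is the test-dipole trace. The abstract framework above identifies the coerciveness condition under which this equivalence holds.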
Abstract:
The aim of this work is to present various aspects of the numerical simulation of particle and radiation transport for industrial and environmental protection applications, enabling the analysis of complex physical processes in a fast, reliable, and efficient way. In the first part we deal with the speed-up of the numerical simulation of neutron transport for nuclear reactor core analysis. The convergence properties of the source iteration scheme of the Method of Characteristics applied to heterogeneous structured geometries have been enhanced by means of Boundary Projection Acceleration, enabling the study of 2D and 3D geometries with transport theory without spatial homogenization. The computational performance has been verified with the C5G7 2D and 3D benchmarks, showing a substantial reduction of iterations and CPU time. The second part is devoted to the study of temperature-dependent elastic scattering of neutrons for heavy isotopes near the thermal zone. A numerical computation of the Doppler convolution of the elastic scattering kernel based on the gas model is presented, for a general energy-dependent cross section and scattering law in the center-of-mass system. The range of integration has been optimized by employing a numerical cutoff, allowing a faster numerical evaluation of the convolution integral. Legendre moments of the transfer kernel are subsequently obtained by direct quadrature, and a numerical analysis of the convergence is presented. In the third part we turn our attention to remote sensing applications of radiative transfer employed to investigate the Earth's cryosphere. The photon transport equation is applied to simulate the reflectivity of glaciers, varying the age of the layer of snow or ice, its thickness, the presence or absence of other underlying layers, and the degree of dust included in the snow, creating a framework able to decipher spectral signals collected by orbiting detectors.
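For reference, the source iteration scheme mentioned in the first part solves the transport equation by lagging the scattering source; schematically, with transport sweep $L^{-1}$, scattering operator $S$ and fixed source $Q$:

```latex
\psi^{(\ell+1)} = L^{-1}\!\left(S\,\psi^{(\ell)} + Q\right),
```

whose convergence degrades as the scattering ratio $c = \Sigma_s / \Sigma_t$ approaches 1; this is precisely why acceleration techniques such as the Boundary Projection Acceleration used here are needed to keep iteration counts low in optically thick, scattering-dominated regions.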