884 results for 010301 Numerical Analysis
Abstract:
Centrifugal pumps are widely used in many industrial applications. Knowledge of how these components behave under various circumstances is crucial for the development of more efficient and, therefore, less expensive pumping installations. The combination of multiple impellers, vaned diffusers and a volute may introduce several complex flow characteristics that deviate markedly from regular inviscid pump flow theory. Computational Fluid Dynamics can be very helpful for extracting information about which physical phenomena are involved in such flows. In this sense, this work performs a numerical study of the flow in a two-stage centrifugal pump (Imbil ITAP 65-330/2) with a vaned diffuser and a volute. The flow in the pump is modeled using the software Ansys CFX, by means of a multi-block, transient rotor-stator technique, with structured grids for all pump parts. The simulations were performed using water and a mixture of water and glycerin as working fluids. Several viscosities were considered, in a range between 87 and 720 cP. Comparisons between experimental data obtained by Amaral (2007) and numerical head curves showed good agreement, with an average deviation of 6.8% for water. The behavior of the velocity, pressure and turbulence kinetic energy fields was evaluated for several operating conditions. In general, the results obtained in this work achieved the proposed goals and are a significant contribution to the understanding of the flow studied.
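The reported average deviation between numerical and experimental head curves can be computed as a mean absolute percentage deviation. A minimal sketch, assuming the deviation is defined relative to the experimental values (the sample head values below are illustrative, not the thesis data):

```python
def average_deviation(experimental, numerical):
    """Mean absolute percentage deviation of numerical head values
    relative to the experimental reference, in percent."""
    pairs = list(zip(experimental, numerical))
    return 100.0 * sum(abs(n - e) / e for e, n in pairs) / len(pairs)

# Illustrative head values in meters (not the thesis data).
h_exp = [33.0, 31.5, 29.0, 25.0]
h_num = [34.0, 30.0, 29.5, 26.0]
dev = average_deviation(h_exp, h_num)  # a few percent for these samples
```

Averaging per-point relative errors in this way is what makes a single figure such as "6.8% for water" comparable across operating points with very different head magnitudes.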
Abstract:
In this work, we introduce a new class of numerical schemes for rarefied gas dynamics problems described by collisional kinetic equations. The idea consists in reformulating the problem using a micro-macro decomposition and then solving the microscopic part by means of asymptotic-preserving Monte Carlo methods. We consider two types of decompositions: the first leads to the Euler system of gas dynamics for the macroscopic part, while the second leads to the Navier-Stokes equations. In addition, the particle method which solves the microscopic part is designed in such a way that the global scheme becomes computationally less expensive as the solution approaches the equilibrium state, in contrast to standard methods for kinetic equations, whose computational cost increases with the number of interactions. At the same time, the statistical error due to the particle part of the solution decreases as the system approaches the equilibrium state. This causes the method to degenerate, in the limit of an infinite number of collisions, to solving only the macroscopic hydrodynamic equations (Euler or Navier-Stokes). In the last part, we show the behavior of this new approach in comparison with standard Monte Carlo techniques for solving the kinetic equation, testing it on different problems which typically arise in rarefied gas dynamics simulations.
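The micro-macro decomposition at the heart of such schemes can be written generically as follows (standard notation, not taken verbatim from this work):

```latex
% Decompose the distribution function into its local Maxwellian
% (macroscopic) part and a microscopic remainder g:
f(x,v,t) = \mathcal{M}[f](x,v,t) + g(x,v,t), \qquad
\mathcal{M}[f] = \frac{\rho}{(2\pi T)^{d/2}}
\exp\!\left(-\frac{|v-u|^{2}}{2T}\right),
```

where \(\rho\), \(u\) and \(T\) are the local density, mean velocity and temperature, and the remainder carries no mass, momentum or energy, \(\int g\,(1, v, |v|^2/2)\,dv = 0\). The Maxwellian part is evolved by the macroscopic equations (Euler or Navier-Stokes), while \(g\) is sampled by the asymptotic-preserving Monte Carlo particles; as \(g \to 0\) near equilibrium, the particle workload vanishes, which is the cost behavior the abstract describes.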
Abstract:
This work is concerned with the design and analysis of hp-version discontinuous Galerkin (DG) finite element methods for boundary-value problems involving the biharmonic operator. The first part extends the unified approach of Arnold, Brezzi, Cockburn & Marini (SIAM J. Numer. Anal. 39, 5 (2001/02), 1749-1779), developed for the Poisson problem, to the design of DG methods via an appropriate choice of numerical flux functions for fourth-order problems; as an example, we retrieve the interior penalty DG method developed by Süli & Mozolevski (Comput. Methods Appl. Mech. Engrg. 196, 13-16 (2007), 1851-1863). The second part of this work is concerned with a new a priori error analysis of the hp-version interior penalty DG method, when the error is measured in terms of both the energy norm and the L2-norm, as well as certain linear functionals of the solution, for elemental polynomial degrees $p\ge 2$. Moreover, provided that the solution is piecewise analytic in an open neighbourhood of each element, exponential convergence is proven for the p-version of the DG method. The sharpness of the theoretical developments is illustrated by numerical experiments.
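The boundary-value problem in question is the standard biharmonic model problem; a generic statement (notation schematic, not the paper's):

```latex
% Fourth-order model problem with essential boundary conditions:
\Delta^{2} u = f \quad \text{in } \Omega, \qquad
u = g_{\mathrm{D}}, \quad
\frac{\partial u}{\partial n} = g_{\mathrm{N}}
\quad \text{on } \partial\Omega .
```

Because conforming discretizations of \(\Delta^2\) require \(C^1\)-continuity, DG methods are attractive here: inter-element continuity of both the solution and its normal derivative is enforced weakly through the numerical fluxes and penalty terms.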
Abstract:
Accurate estimation of road pavement geometry and layer material properties through the use of proper nondestructive testing and sensor technologies is essential for evaluating a pavement's structural condition and determining options for maintenance and rehabilitation. For these purposes, pavement deflection basins measured by the nondestructive Falling Weight Deflectometer (FWD) test are commonly used. The FWD test drops weights on the pavement to simulate traffic loads and measures the resulting pavement deflection basins. Backcalculation of pavement geometry and layer properties from FWD deflections is a difficult inverse problem, and its solution with conventional mathematical methods is often challenging due to the ill-posed nature of the problem. In this dissertation, a hybrid algorithm was developed to seek robust and fast solutions to this inverse problem. The algorithm is based on soft computing techniques, mainly Artificial Neural Networks (ANNs) and Genetic Algorithms (GAs), as well as numerical analysis techniques to properly simulate the geomechanical system. A widely used layered pavement analysis program, ILLI-PAVE, was employed to analyze flexible pavements of various types, including full-depth asphalt and conventional flexible pavements, built on either lime-stabilized soils or untreated subgrade. Nonlinear properties of the subgrade soil and the base course aggregate as transportation geomaterials were also considered. A computer program, the Soft Computing Based System Identifier (SOFTSYS), was developed. In SOFTSYS, ANNs are used as surrogate models to provide faster solutions in place of the nonlinear finite element program ILLI-PAVE. The deflections obtained from FWD tests in the field were matched with the predictions obtained from the numerical simulations to develop the SOFTSYS models.
The solution of the inverse problem for multi-layered pavements is computationally hard and often not feasible due to field variability and the quality of the collected data. The primary difficulty in the analysis arises from the substantial increase in the degree of non-uniqueness of the mapping from the pavement layer parameters to the FWD deflections. The insensitivity of some layer properties lowered the performance of the SOFTSYS models. Still, the SOFTSYS models were shown to work effectively with synthetic data obtained from ILLI-PAVE finite element solutions; in general, SOFTSYS solutions very closely matched the ILLI-PAVE mechanistic pavement analysis results. For validation, field-collected FWD data were successfully used to predict pavement layer thicknesses and layer moduli of in-service flexible pavements. Some of the most promising SOFTSYS results indicated average absolute errors on the order of 2%, 7%, and 4% for the Hot Mix Asphalt (HMA) thickness estimation of full-depth asphalt pavements, full-depth pavements on lime-stabilized soils, and conventional flexible pavements, respectively. The field validations of SOFTSYS also produced meaningful results: thickness data obtained from Ground Penetrating Radar testing matched reasonably well with the predictions of the SOFTSYS models. The differences observed in the HMA and lime-stabilized soil layer thicknesses were attributed to deflection data variability in the FWD tests. The backcalculated asphalt concrete layer thicknesses matched better for full-depth asphalt flexible pavements built on lime-stabilized soils than for conventional flexible pavements. Overall, SOFTSYS was capable of producing reliable thickness estimates despite the variability of field-constructed asphalt layer thicknesses.
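The surrogate-plus-GA idea behind such backcalculation can be illustrated in miniature. Below, a toy two-layer forward model stands in for the ANN surrogate trained on ILLI-PAVE runs; every function name, parameter and numeric range here is an illustrative assumption, not part of SOFTSYS:

```python
import random

def surrogate_deflections(moduli):
    """Toy stand-in for an ANN surrogate of the forward model: maps
    two layer moduli (MPa) to two deflections (arbitrary units).
    A real surrogate would be trained on ILLI-PAVE solutions."""
    e1, e2 = moduli
    return [1000.0 / e1 + 500.0 / e2, 800.0 / e1 + 900.0 / e2]

def misfit(candidate, measured):
    """Sum of squared differences between predicted and measured
    deflection basins -- the fitness the GA minimizes."""
    return sum((p - m) ** 2
               for p, m in zip(surrogate_deflections(candidate), measured))

def backcalculate(measured, pop_size=60, generations=150, seed=1):
    """Minimal GA: elitism, blend crossover, decaying Gaussian
    mutation over a fixed moduli search range."""
    rng = random.Random(seed)
    lo, hi = 50.0, 5000.0
    pop = [[rng.uniform(lo, hi), rng.uniform(lo, hi)]
           for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=lambda c: misfit(c, measured))
        elite = pop[:pop_size // 4]
        sigma = 200.0 * 0.97 ** gen  # shrink mutation as we converge
        children = list(elite)       # elites survive unchanged
        while len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            w = rng.random()
            children.append([min(hi, max(lo, w * x + (1 - w) * y
                                         + rng.gauss(0.0, sigma)))
                             for x, y in zip(a, b)])
        pop = children
    return min(pop, key=lambda c: misfit(c, measured))

# Recover moduli from deflections generated by the toy model itself.
true_moduli = [2000.0, 400.0]
measured = surrogate_deflections(true_moduli)
estimate = backcalculate(measured)
```

On this noise-free toy problem the GA drives the basin misfit close to zero; real backcalculation is harder precisely because of the non-uniqueness and field data variability discussed above.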
Abstract:
We propose a pre-processing mesh redistribution algorithm based upon harmonic maps, employed in conjunction with discontinuous Galerkin approximations of advection-diffusion-reaction problems. Extensive two-dimensional numerical experiments with different choices of monitor functions, including monitor functions derived from goal-oriented a posteriori error indicators, are presented. The examples clearly demonstrate the capabilities and benefits of combining our pre-processing mesh movement algorithm with uniform, as well as adaptive isotropic and anisotropic, mesh refinement.
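In one dimension, monitor-function-driven mesh redistribution reduces to equidistribution: nodes are placed so that each cell carries an equal share of the integrated monitor. A dependency-free sketch of this generic de Boor-style procedure (not the harmonic-map algorithm of the abstract):

```python
import math

def equidistribute(monitor, n_cells, n_quad=2000):
    """Place n_cells+1 nodes in [0, 1] so that each cell carries an
    equal share of the integral of the (positive) monitor function."""
    # Cumulative midpoint-rule integral on a fine background grid.
    xs = [i / n_quad for i in range(n_quad + 1)]
    cum = [0.0]
    for i in range(n_quad):
        cum.append(cum[-1] + monitor(0.5 * (xs[i] + xs[i + 1])) / n_quad)
    total = cum[-1]
    # Invert the cumulative integral at equal increments.
    nodes, j = [0.0], 0
    for k in range(1, n_cells):
        target = total * k / n_cells
        while cum[j + 1] < target:
            j += 1
        # Linear interpolation within background cell j.
        frac = (target - cum[j]) / (cum[j + 1] - cum[j])
        nodes.append(xs[j] + frac / n_quad)
    nodes.append(1.0)
    return nodes

# A monitor concentrating nodes near a sharp layer at x = 0.5.
mesh = equidistribute(
    lambda x: 1.0 + 50.0 * math.exp(-200.0 * (x - 0.5) ** 2), 10)
```

With a constant monitor the routine returns a uniform mesh; with the layer monitor above, cells cluster around x = 0.5, which is the one-dimensional analogue of what a monitor function achieves inside a harmonic-map redistribution.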
Abstract:
Nowadays, the adhesive bonding of complex structures that could not be manufactured in a single piece, or not easily so, is increasingly common. Adhesive joints have been replacing many other joining methods, such as bolted, riveted or welded connections, due to their ease of fabrication, superior strength and ability to join different materials. For this reason, adhesive joints are increasingly applied in industries such as aerospace, aeronautics, automotive, naval and footwear. The type of adhesive to use in a given application is chosen mainly according to its mechanical characteristics and the intended response to the imposed loads. An example of a strong but brittle adhesive is the Araldite® AV138; the Araldite® 2015, on the other hand, is less strong but exhibits greater ductility and flexibility. Besides the commercial Araldite® adhesives, there are polyurethane adhesives that combine high strength with great ductility and flexibility, such as the Sikaforce® 7752. This dissertation aims to study, experimentally and numerically by means of cohesive zone models (CZM), the behavior of different T-joint configurations under peel loading. The adhesives mentioned above are considered in order to test the joints with different adhesive types. The T-joint consists of two aluminium L-shaped adherends and an aluminium base adherend, joined by an adhesive layer. Experimentally, the joint strength is studied as a function of the thickness of the L-shaped adherends (tP2). The numerical analysis addresses stress distributions, damage evolution, failure modes and strength.
In addition, a numerical study was carried out on the effect of the presence or absence of filler adhesive in the curved region of the L-shaped adherends on the stresses and the strength of the joint. It was shown that the variation of the L-adherend geometry, the presence of filler adhesive and the type of adhesive have a direct influence on the joint strength. The experimental tests validated the numerical results and allowed the conclusion that CZM are an accurate technique for the study of T-joint geometries.
Abstract:
We introduce a residual-based a posteriori error indicator for discontinuous Galerkin discretizations of the biharmonic equation with essential boundary conditions. We show that the indicator is both reliable and efficient with respect to the approximation error measured in terms of a natural energy norm, under minimal regularity assumptions. We validate the performance of the indicator within an adaptive mesh refinement procedure and show its asymptotic exactness for a range of test problems.
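Reliability and efficiency, the two properties claimed for the indicator, can be stated generically as follows (a standard sketch in a natural energy norm; constants and oscillation terms are schematic, not those of the paper):

```latex
% eta: computable residual-based indicator; osc(f): data oscillation.
\underbrace{\|u - u_h\|_{E} \le C_{\mathrm{rel}}\,\eta}_{\text{reliability}},
\qquad
\underbrace{\eta \le C_{\mathrm{eff}}\bigl(\|u - u_h\|_{E}
  + \mathrm{osc}(f)\bigr)}_{\text{efficiency}} .
```

Together these bounds make \(\eta\) equivalent, up to constants and oscillation, to the true error; asymptotic exactness is the stronger statement that the ratio \(\eta / \|u - u_h\|_{E}\) tends to one under refinement.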
Abstract:
Master's dissertation — Universidade de Brasília, Faculdade de Tecnologia, Departamento de Engenharia Civil e Ambiental, 2016.
Abstract:
We shall consider the weak formulation of a linear elliptic model problem with discontinuous Dirichlet boundary conditions. Since such problems are typically not well-defined in the standard H^1-H^1 setting, we will introduce a suitable saddle point formulation in terms of weighted Sobolev spaces. Furthermore, we will discuss the numerical solution of such problems. Specifically, we employ an hp-discontinuous Galerkin method and derive an L^2-norm a posteriori error estimate. Numerical experiments demonstrate the effectiveness of the proposed error indicator in both the h- and hp-version setting. Indeed, in the latter case exponential convergence of the error is attained as the mesh is adaptively refined.
Abstract:
Given a 2-manifold triangular mesh \(M \subset {\mathbb {R}}^3\) with border, a parameterization of \(M\) is a FACE or trimmed surface \(F=\{S,L_0,\ldots, L_m\}\): \(F\) is a connected subset or region of a parametric surface \(S\), bounded by a set of LOOPs \(L_0,\ldots ,L_m\) such that each \(L_i \subset S\) is a closed 1-manifold having no intersection with the other LOOPs \(L_j\). The parametric surface \(S\) is a statistical fit of the mesh \(M\); \(L_0\) is the outermost LOOP bounding \(F\), and \(L_i\) is the LOOP of the i-th hole in \(F\) (if any). The problem of parameterizing triangular meshes is relevant for reverse engineering, tool path planning, feature detection, redesign, etc. State-of-the-art mesh procedures parameterize a rectangular mesh \(M\). To improve on such procedures, we report here the implementation of an algorithm which parameterizes meshes \(M\) presenting holes and concavities. We synthesize a parametric surface \(S \subset {\mathbb {R}}^3\) which approximates a superset of the mesh \(M\); then we compute a set of LOOPs trimming \(S\), thereby completing the FACE \(F=\{S,L_0,\ldots ,L_m\}\). Our algorithm gives satisfactory results for \(M\) having low Gaussian curvature (i.e., \(M\) being quasi-developable or developable). This assumption is a reasonable one, since \(M\) is the product of manifold segmentation preprocessing. Our algorithm computes: (1) a manifold learning mapping \(\phi : M \rightarrow U \subset {\mathbb {R}}^2\); (2) an inverse mapping \(S: W \subset {\mathbb {R}}^2 \rightarrow {\mathbb {R}}^3\), with \(W\) being a rectangular grid containing and surpassing \(U\). To compute \(\phi\) we test IsoMap, Laplacian Eigenmaps and Hessian local linear embedding (with the best results given by HLLE). For the back mapping (NURBS) \(S\), the crucial step is to find a control polyhedron \(P\), which is an extrapolation of \(M\). We calculate \(P\) by extrapolating radial basis functions that interpolate points inside \(\phi (M)\). We successfully test our implementation with several datasets presenting concavities and holes and being extremely non-developable. Ongoing work is devoted to manifold segmentation, which facilitates mesh parameterization.
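Radial basis function interpolation of scattered samples, which the control-polyhedron step relies on, can be sketched generically as follows (Gaussian RBFs and a dense direct solve; an illustrative example, not the authors' implementation):

```python
import math

def rbf_fit(points, values, eps=1.0):
    """Solve for Gaussian-RBF weights w, phi(r) = exp(-(eps*r)^2),
    so that sum_j w_j * phi(|x_i - x_j|) = values[i] at each sample.
    Plain Gaussian elimination keeps the sketch dependency-free."""
    n = len(points)
    a = [[math.exp(-(eps * math.dist(p, q)) ** 2) for q in points]
         for p in points]
    b = list(values)
    for col in range(n):                      # partial pivoting
        piv = max(range(col, n), key=lambda r: abs(a[r][col]))
        a[col], a[piv] = a[piv], a[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, n):
            f = a[r][col] / a[col][col]
            for c in range(col, n):
                a[r][c] -= f * a[col][c]
            b[r] -= f * b[col]
    w = [0.0] * n
    for r in range(n - 1, -1, -1):            # back substitution
        w[r] = (b[r] - sum(a[r][c] * w[c]
                           for c in range(r + 1, n))) / a[r][r]
    return w

def rbf_eval(points, w, x, eps=1.0):
    return sum(wi * math.exp(-(eps * math.dist(p, x)) ** 2)
               for wi, p in zip(w, points))

# Interpolate heights z = f(u, v) over scattered parameter samples.
pts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (0.5, 0.5)]
zs = [0.0, 1.0, 1.0, 2.0, 1.0]
w = rbf_fit(pts, zs)
```

Evaluating `rbf_eval` at each sample reproduces the prescribed value exactly (up to solver tolerance); away from the samples the Gaussian kernel decays, which is why the shape parameter matters when the fitted surface must extend beyond the data, as in the extrapolated control polyhedron above.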
Abstract:
The behavior of fluid flow in oil fields is influenced by different factors and has a large impact on the recovery of hydrocarbons. There is a need to evaluate and adapt current technology to the reality of reservoirs worldwide, not only in exploration (reservoir discoveries) but also in the development of those already discovered but not yet produced. In situ combustion (ISC) is a suitable technique for this recovery of hydrocarbons, although it remains complex to implement. The main objective of this research was to study the application of ISC as an enhanced oil recovery technique through a parametric analysis of the process, using vertical wells within a semi-synthetic reservoir with characteristics of the Brazilian northwest, in order to determine which parameters influence the process and to verify the technical and economic viability of the method in the oil industry. For this analysis, a commercial reservoir simulation program for thermal processes was used: the Steam, Thermal, and Advanced Processes Reservoir Simulator (STARS) from the Computer Modelling Group (CMG). Through numerical analysis, this study seeks results that improve the interpretation and comprehension of the main problems related to the ISC method, which are not yet mastered. The results showed that the influence of the ISC thermal process on oil recovery is very important, with production rates and cumulative production positively affected by the application of the method. The application of the method improves oil mobility as a function of the heating produced when the combustion front forms inside the reservoir. Among all the analyzed parameters, the activation energy had the greatest influence: the lower the activation energy, the larger the fraction of recovered oil, owing to the increased speed of the chemical reactions.
It was also verified that the higher the enthalpy of the reaction, the larger the fraction of recovered oil, due to the greater amount of energy released into the system, which aids the ISC. The reservoir parameters porosity and permeability showed little influence on the ISC. Among the operational parameters analyzed, the injection rate had the strongest influence on the ISC method: the higher the injection rate, the better the result obtained, mainly because it helps maintain the combustion front. Regarding the oxygen concentration, an increase in this parameter translates into a higher fraction of recovered oil, because of the greater quantity of fuel burned, helping the advance and the maintenance of the combustion front for a longer period of time. As for the economic analysis, the ISC method proved economically feasible when evaluated through the net present value (NPV) for the injection rates considered: the higher the injection rate, the higher the financial returns of the final project.
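Net present value, the criterion used in the economic analysis, discounts each period's net cash flow back to the present. A minimal sketch with illustrative numbers (not the thesis figures):

```python
def npv(rate, cash_flows):
    """Net present value: cash_flows[0] occurs now (t = 0), and
    each later entry one period further in the future."""
    return sum(cf / (1.0 + rate) ** t for t, cf in enumerate(cash_flows))

# Illustrative project: upfront investment, then yearly net revenues.
flows = [-1000.0, 400.0, 400.0, 400.0]
project_is_viable = npv(0.05, flows) > 0.0
```

A project is accepted when its NPV is positive at the chosen discount rate; different injection rates change the cash-flow profile and hence the NPV, which is the comparison the abstract reports.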
Abstract:
In this work, simulations of deep excavations in soils of alluvial origin in the city of Sabaneta are carried out using finite element models built with the software PLAXIS®. The computed horizontal displacements are compared with measurements from inclinometers installed behind the anchored diaphragm wall of the Centro Comercial Mayorca Fase III project, located in the municipality of Sabaneta, Antioquia. Finally, conclusions are drawn about the sensitivity of the most relevant parameters according to the constitutive model employed, and about the feasibility of its application to the solution of the problem evaluated.
Abstract:
Oil production and exploration techniques have evolved in recent decades in order to increase fluid flow rates and optimize how the required equipment is used. The basic principle of the Electric Submersible Pumping (ESP) lift method is the use of an electric downhole motor to drive a centrifugal pump and transport the fluids to the surface. Electric Submersible Pumping is an option that has been gaining ground among artificial lift methods due to its ability to handle large liquid flow rates in onshore and offshore environments. The performance of a well equipped with an ESP system is intrinsically related to the operation of the centrifugal pump, as it is the pump that converts the motor power into head. In the present work, a computer model to analyze the three-dimensional flow in a centrifugal pump used in Electric Submersible Pumping has been developed. Using the commercial program ANSYS® CFX®, initially with water as the working fluid, the geometry and simulation parameters were defined in order to approximate the flow that occurs inside the channels of the pump impeller and diffuser. Three different geometry conditions were first tested to determine which is most suitable for solving the problem. After choosing the most appropriate geometry, three mesh conditions were analyzed and the obtained values were compared to the experimental characteristic head curve provided by the manufacturer. The results approached the experimental curve, and the simulation time and model convergence were satisfactory considering that the studied problem involves numerical analysis. After the tests with water, oil was used in the simulations, and the results were compared to a methodology used in the petroleum industry to correct for viscosity effects.
In general, for the models with water and oil, the single-phase results were consistent with the experimental curves and, through three-dimensional computer models, provide a preliminary basis for the analysis of two-phase flow inside the channels of centrifugal pumps used in ESP systems.
Abstract:
In the past few years, human facial age estimation has drawn a lot of attention in the computer vision and pattern recognition communities because of its important applications in age-based image retrieval, security control and surveillance, biometrics, human-computer interaction (HCI) and social robotics. In connection with these investigations, estimating the age of a person from the numerical analysis of his or her face image is a relatively new topic. Moreover, in problems such as image classification, deep neural networks have given the best results in several areas, including age estimation. In this work we use three hand-crafted features as well as five deep features that can be obtained from pre-trained deep convolutional neural networks, and we present a comparative study of the age estimation results obtained with these features.
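As an example of the kind of hand-crafted feature such studies rely on (the abstract does not name its three features, so this is an assumed, classic choice rather than the paper's method), a Local Binary Pattern histogram can be computed as follows:

```python
def lbp_image(img):
    """8-neighbor Local Binary Pattern codes for the interior
    pixels of a grayscale image given as a list of lists."""
    h, w = len(img), len(img[0])
    offs = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
            (1, 1), (1, 0), (1, -1), (0, -1)]
    out = []
    for y in range(1, h - 1):
        row = []
        for x in range(1, w - 1):
            center = img[y][x]
            code = 0
            for bit, (dy, dx) in enumerate(offs):
                if img[y + dy][x + dx] >= center:
                    code |= 1 << bit  # set bit when neighbor >= center
            row.append(code)
        out.append(row)
    return out

def lbp_histogram(img):
    """256-bin normalized histogram of LBP codes: a classic
    hand-crafted texture descriptor for face analysis."""
    codes = [c for row in lbp_image(img) for c in row]
    hist = [0.0] * 256
    for c in codes:
        hist[c] += 1.0 / len(codes)
    return hist
```

The fixed-length 256-bin histogram can be fed to any regressor or classifier; deep features would instead be taken from the activations of a pre-trained convolutional network, as the abstract describes.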
Abstract:
In this study we set out to dissociate the developmental time course of automatic symbolic number processing and cognitive control functions in grade 1-3 British primary school children. Event-related potential (ERP) and behavioral data were collected in a physical size discrimination numerical Stroop task. Task-irrelevant numerical information was already processed automatically in grade 1. Weakening interference and strengthening facilitation indicated the parallel development of general cognitive control and automatic number processing. Relationships among the ERP and behavioral effects suggest that control functions play a larger role in younger children and that the automaticity of number processing increases from grade 1 to grade 3.