993 results for Numerical Algorithms
Abstract:
This paper deals with the numerical assessment of the influence of parameters such as pre-compression level, aspect ratio, vertical and horizontal reinforcement ratios and boundary conditions on the lateral strength of masonry walls under in-plane loading. The numerical study is performed with the software DIANA (R), based on the Finite Element Method. The numerical model is validated against a database of available experimental results on masonry walls tested under cyclic lateral loading. Numerical results revealed that boundary conditions play a central role in the lateral behavior of masonry walls under in-plane loading and determine the influence of the level of pre-compression, as well as of the reinforcement ratio, on the wall strength. The lateral capacity of the walls decreases with increasing aspect ratio and with decreasing pre-compression. Vertical steel bars appear to have almost no influence on the shear strength of masonry walls, and horizontal reinforcement only increases the lateral strength of masonry walls if the shear response of the walls is determinant for failure, which is directly related to the boundary conditions. (C) 2011 Elsevier Ltd. All rights reserved.
Abstract:
We consider a class of two-dimensional problems in classical linear elasticity for which material overlapping occurs in the absence of singularities. Of course, material overlapping is not physically realistic, and one possible way to prevent it uses a constrained minimization theory. In this theory, a minimization problem consists of minimizing the total potential energy of a linear elastic body subject to the constraint that the deformation field must be locally invertible. Here, we use an interior and an exterior penalty formulation of the minimization problem together with both a standard finite element method and classical nonlinear programming techniques to compute the minimizers. We compare both formulations by solving a plane problem numerically in the context of the constrained minimization theory. The problem has a closed-form solution, which is used to validate the numerical results. This solution is regular everywhere, including the boundary. In particular, we show numerical results which indicate that, for a fixed finite element mesh, the sequences of numerical solutions obtained with both the interior and the exterior penalty formulations converge to the same limit function as the penalization is enforced. This limit function yields an approximate deformation field to the plane problem that is locally invertible at all points in the domain. As the mesh is refined, this field converges to the exact solution of the plane problem.
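The penalty idea described above can be written schematically as follows. The notation here is an assumption for illustration (the paper's exact functionals may differ): E is the total potential energy, F the deformation gradient, and epsilon the penalty parameter.

```latex
% Constrained problem: minimize the potential energy subject to local invertibility
\min_{u}\; E(u) \quad \text{subject to} \quad \det F > 0, \qquad F = I + \nabla u .

% Exterior penalty: constraint violations are allowed but penalized as \varepsilon \to 0
E^{\mathrm{ext}}_{\varepsilon}(u) = E(u) + \frac{1}{\varepsilon}\int_{\Omega}\bigl[\min(\det F,\,0)\bigr]^{2}\, dx .

% Interior (barrier) penalty: iterates are kept strictly feasible
E^{\mathrm{int}}_{\varepsilon}(u) = E(u) - \varepsilon \int_{\Omega}\log(\det F)\, dx .
```

As the penalization is enforced, both families of minimizers are expected to approach the same limit, which is consistent with the convergence behavior reported in the abstract.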
Abstract:
This paper presents a strategy for the planning of WDM optical networks, specifically the problem of Routing and Wavelength Allocation (RWA) with the objective of minimizing the number of wavelengths used, in which case the problem is known as Min-RWA. Two meta-heuristics (Tabu Search and Simulated Annealing) are applied to obtain solutions of good quality and high performance. The key point is trading off the maximum load on the virtual links against the minimization of the number of wavelengths used; the objective is to find a good compromise between the metric of the virtual topology (load in Gb/s) and that of the physical topology (number of wavelengths). The simulations suggest good results when compared to some existing in the literature.
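Simulated annealing, one of the two meta-heuristics mentioned, can be sketched generically. The skeleton below and its toy objective are illustrative assumptions, not the paper's implementation or its Min-RWA cost function:

```python
import math
import random

def simulated_annealing(initial, neighbor, cost, t0=1.0, cooling=0.95, steps=500, seed=0):
    """Generic simulated-annealing skeleton (illustrative, not the paper's code)."""
    rng = random.Random(seed)
    current, current_cost = initial, cost(initial)
    best, best_cost = current, current_cost
    t = t0
    for _ in range(steps):
        cand = neighbor(current, rng)
        c = cost(cand)
        # Accept downhill moves always; uphill moves with Boltzmann probability.
        if c < current_cost or rng.random() < math.exp((current_cost - c) / max(t, 1e-12)):
            current, current_cost = cand, c
            if c < best_cost:
                best, best_cost = cand, c
        t *= cooling  # geometric cooling schedule
    return best, best_cost

# Toy usage: minimize |x - 7| over the integers with unit-step neighbors.
sol, val = simulated_annealing(
    initial=0,
    neighbor=lambda x, rng: x + rng.choice([-1, 1]),
    cost=lambda x: abs(x - 7),
)
```

In a Min-RWA setting, the state would be a routing/wavelength assignment and the cost a weighted combination of wavelength count and maximum virtual-link load, matching the compromise described in the abstract.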
Abstract:
The results of an experimental and a numerical study of SFRCP are presented. Eighteen pipes with an internal diameter of 600 mm and fibre dosages of 10, 20 and 40 kg/m(3) were manufactured and tested, and several technological conclusions were drawn from these tests. Likewise, a parameterized numerical model was implemented, with which the resistant behaviour of SFRCP can be simulated. The experimental results were contrasted with those obtained by means of MAP, reaching very satisfactory correlations. Taking this into account, the numerical model can be considered a useful tool for the optimal design of SFRCP fibre dosages, avoiding the need for the systematic use of testing as an indirect design method. Consequently, the use of this model would reduce the overall cost of the pipes and give fibres a boost as a solution for this structural typology.
Abstract:
The continuous growth of peer-to-peer networks has made them responsible for a considerable portion of current Internet traffic. For this reason, improvements in the usage of P2P network resources are of central importance. One effective approach for addressing this issue is the deployment of locality algorithms, which allow the system to optimize the peer selection policy for different network situations and thus maximize performance. To date, several locality algorithms have been proposed for use in P2P networks. However, they usually adopt heterogeneous criteria for measuring the proximity between peers, which hinders a coherent comparison between the different solutions. In this paper, we present a thorough review of popular locality algorithms, based on three main characteristics: the adopted network architecture, the distance metric, and the resulting peer selection algorithm. As a result of this study, we propose a novel and generic taxonomy for locality algorithms in peer-to-peer networks, aiming to enable a better and more coherent evaluation of any individual locality algorithm.
Abstract:
In this paper, a computational implementation of an evolutionary algorithm (EA) is presented to tackle the problem of reconfiguring radial distribution systems. The developed module considers power quality indices, such as long-duration interruptions and customer process disruptions due to voltage sags, by using the Monte Carlo simulation method. Power quality costs are modeled in the mathematical problem formulation and are added to the cost of network losses. As for the proposed EA codification, a decimal representation is used. The EA operators considered for the reconfiguration algorithm, namely selection, recombination and mutation, are analyzed herein. Several selection procedures are examined, namely tournament, elitism and a mixed technique using both elitism and tournament. The recombination operator was developed by considering a chromosome structure representation that maps the network branches and system radiality, and another structure that takes into account the network topology and the feasibility of network operation to exchange genetic material. The topologies of the initial population are randomly generated, so that radial configurations are produced through the Prim and Kruskal algorithms, which rapidly build minimum spanning trees. (C) 2009 Elsevier B.V. All rights reserved.
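The generation of random radial configurations via Prim-style tree growth can be sketched as follows. This is a generic illustration under assumed inputs (node list and branch list); the weights and operational feasibility checks of the actual module are not reproduced:

```python
import random

def random_spanning_tree(nodes, edges, seed=None):
    """Grow a spanning tree Prim-style, picking frontier edges at random.

    A spanning tree of the branch graph corresponds to a radial
    configuration of the distribution network (illustrative sketch).
    """
    rng = random.Random(seed)
    adj = {n: [] for n in nodes}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    start = rng.choice(list(nodes))
    in_tree = {start}
    tree_edges = []
    frontier = [(start, v) for v in adj[start]]
    while frontier:
        u, v = frontier.pop(rng.randrange(len(frontier)))  # random frontier edge
        if v in in_tree:
            continue  # would close a loop and break radiality
        in_tree.add(v)
        tree_edges.append((u, v))
        frontier.extend((v, w) for w in adj[v] if w not in in_tree)
    return tree_edges

# Toy usage: a 4-bus ring network yields a 3-branch radial tree.
tree = random_spanning_tree([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)], seed=1)
```

Drawing a different random seed per individual gives a diverse initial population of radial topologies, as the abstract describes.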
Abstract:
The line-start permanent magnet motor (LSPMM) is a very attractive alternative to induction motors due to its very high efficiency and constant-speed operation under load variations. However, designing this kind of hybrid motor is challenging and requires a good understanding of motor behavior. The calculation of the load angle is an important step in motor design and cannot be neglected. This paper uses the finite element method to present a simple methodology for calculating the load angle of a three-phase LSPMM, combining dynamic and steady-state simulations. The methodology is applied to the analysis of a three-phase LSPMM.
Abstract:
This paper presents a study of the stationary phenomenon of superheated or metastable liquid jets, flashing into a two-dimensional axisymmetric domain, while in the two-phase region. In general, the phenomenon starts off when a high-pressure, high-temperature liquid jet emerges from a small nozzle or orifice, expanding into a low-pressure chamber below its saturation pressure taken at the injection temperature. As the process evolves, crossing the saturation curve, one observes that the fluid remains in the liquid phase, reaching a superheated condition. Then, the liquid undergoes an abrupt phase change by means of an oblique evaporation wave. Across this phase change the superheated liquid becomes a two-phase high-speed mixture in various directions, expanding to supersonic velocities. In order to reach the downstream pressure, the supersonic fluid continues to expand, crossing a complex bow shock wave. The balance equations that govern the phenomenon are mass conservation, momentum conservation, and energy conservation, plus an equation of state for the substance. A false-transient model is implemented using the dispersion-controlled dissipative (DCD) shock-capturing scheme to calculate the flow conditions until the steady-state condition is reached. Numerical results obtained with the computational code DCD-2D vI have been analyzed. Copyright (C) 2009 John Wiley & Sons, Ltd.
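The balance equations mentioned can be written schematically in inviscid conservation form. This is the generic compressible (Euler) system, not the paper's exact axisymmetric formulation:

```latex
\frac{\partial \rho}{\partial t} + \nabla \cdot (\rho \mathbf{u}) = 0
\qquad \text{(mass)}

\frac{\partial (\rho \mathbf{u})}{\partial t}
  + \nabla \cdot \bigl(\rho\, \mathbf{u} \otimes \mathbf{u} + p\,\mathbf{I}\bigr) = 0
\qquad \text{(momentum)}

\frac{\partial (\rho E)}{\partial t}
  + \nabla \cdot \bigl[(\rho E + p)\,\mathbf{u}\bigr] = 0
\qquad \text{(energy)}

p = p(\rho, e)
\qquad \text{(equation of state)}
```

Written in this conservative form, the system is suitable for shock-capturing schemes such as the DCD scheme used in the paper, since discontinuities (the evaporation wave and the bow shock) are handled without explicit tracking.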
Abstract:
The objective of the present work is to propose a numerical and statistical approach, using computational fluid dynamics, for the study of atmospheric pollutant dispersion. Modifications in the standard k-epsilon turbulence model and additional equations for the calculation of the variance of concentration are introduced to enhance the prediction of the flow field and scalar quantities. The flow field, the mean concentration and the variance of a flow over a two-dimensional triangular hill, with a finite-size point pollutant source, are calculated by a finite volume code and compared with published experimental results. A modified low-Reynolds k-epsilon turbulence model was employed in this work, with the model constant C(mu) = 0.03 to take into account the inactive atmospheric turbulence. The numerical results for the velocity profiles and the position of the reattachment point are in good agreement with the experimental results. The results for the mean and the variance of the concentration are also in good agreement with experimental results from the literature. (C) 2009 Elsevier Ltd. All rights reserved.
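For reference, the constant cited above enters the k-epsilon model through the eddy-viscosity relation; the standard value is 0.09, so the abstract's 0.03 is a deliberate reduction:

```latex
\nu_t = C_{\mu}\,\frac{k^{2}}{\varepsilon},
\qquad
C_{\mu} = 0.03 \;\;\text{(this work)}
\quad \text{vs.} \quad
C_{\mu} = 0.09 \;\;\text{(standard model)}
```

Lowering C_mu reduces the modeled eddy viscosity for a given k and epsilon, which is how the model accounts for the inactive part of atmospheric turbulence mentioned in the abstract.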
Abstract:
A two-dimensional numerical simulator is developed to predict the nonlinear, convective-reactive, oxygen mass exchange in a cross-flow hollow fiber blood oxygenator. The simulator also calculates the carbon dioxide mass exchange, as hemoglobin's affinity for oxygen is affected by the local pH value, which depends mostly on the local carbon dioxide content in blood. Blood pH inside the oxygenator is calculated by the simultaneous solution of an equation that takes into account the blood buffering capacity and the classical Henderson-Hasselbalch equation. The modeling of the mass transfer conductance in the blood comprises a global factor, which is a function of the Reynolds number, and a local factor, which takes into account the amount of oxygen bound to hemoglobin. The simulator is calibrated against experimental data for an in-line fiber bundle. The results are: (i) the calibration process allows the precise determination of the mass transfer conductance for both oxygen and carbon dioxide; (ii) very alkaline pH values occur in the blood path at the gas inlet side of the fiber bundle; (iii) the parametric analysis of the effect of the blood base excess (BE) shows that V(CO2) is similar in the case of blood metabolic alkalosis, metabolic acidosis, or normal BE, for a similar blood inlet P(CO2), although the condition of metabolic alkalosis is the worst case, as the pH in the vicinity of the gas inlet is the most alkaline; (iv) the parametric analysis of the effect of the gas flow to blood flow ratio (Q(G)/Q(B)) shows that the variation of V(CO2) with the gas flow is almost linear up to Q(G)/Q(B) = 2.0. V(O2) is not affected by the gas flow: increasing the gas flow up to eight times raises V(O2) by only 1%. The mass exchange of carbon dioxide uses the full length of the hollow fiber only if Q(G)/Q(B) > 2.0, as only in this condition does the local variation of pH and blood P(CO2) span the whole fiber bundle.
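The classical Henderson-Hasselbalch relation for the bicarbonate buffer system can be sketched with textbook constants. This is a minimal illustration; the paper couples it with a blood-buffering-capacity equation that is not reproduced here:

```python
import math

def blood_ph(hco3_meq_l, pco2_mmhg, pk=6.1, s=0.03):
    """Henderson-Hasselbalch equation for the bicarbonate buffer.

    hco3_meq_l : bicarbonate concentration [mEq/L]
    pco2_mmhg  : CO2 partial pressure [mmHg]
    pk, s      : textbook pK and CO2 solubility coefficient [mEq/L/mmHg]
    """
    return pk + math.log10(hco3_meq_l / (s * pco2_mmhg))

# Normal arterial values: HCO3- = 24 mEq/L, PCO2 = 40 mmHg gives pH close to 7.40.
ph = blood_ph(24.0, 40.0)
```

The dependence of pH on the local P(CO2) through this relation is what couples the CO2 field to hemoglobin's oxygen affinity in the simulator described above.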
Abstract:
This paper presents first material tests on HDPE and PVC, then impact tests on plates made of the same materials. Finally, numerical simulations of the plate impact tests are compared with the experimental results. A rather comprehensive series of mechanical material tests was performed to disclose the behaviour of PVC and HDPE in tension and compression. Quasi-static tests were carried out at three rates in compression and two in tension. Digital image correlation (DIC) was used to measure the in-plane strains, yielding true stress-strain curves and allowing analysis of strain-rate sensitivity and the isotropy of Poisson's ratio. In addition, dynamic compression tests were carried out in a split-Hopkinson pressure bar. Quasi-static and dynamic tests were also performed on clamped plates made of the same PVC and HDPE materials, using an optical technique to measure the full-field out-of-plane deformations. These tests, together with the material data, were used as a basis of comparison for a finite element analysis. A reasonable agreement between experimental and numerical results was achieved. (C) 2010 Elsevier Ltd. All rights reserved.
Abstract:
A key issue in the design of tyres is their capability to sustain intense impact loads. Hence, the development of a reliable experimental database, against which numerical models can be compared, is important. Experimental data on tyre impact is somewhat rare in the open literature. In this article, a specially designed rig was developed for tyre impact tests. It holds the test piece in a given position, allowing a drop mass with a round indenter to hit pressurised tyres with different impact energies. A high-speed camera and a laser velocimeter were used to track the impact event. From the laser measurement it was possible to obtain the impact force and the local indentation. A finite element study was then conducted using material properties from the open literature. By comparing the experimental measurements with the numerical results, it became evident that the model was capable of predicting the major features of the impact of a mass on a tyre. This model is therefore of value for the assessment of the performance of a tyre in extreme cases of mass impact. (C) 2009 Elsevier Ltd. All rights reserved.
Abstract:
This paper presents a family of algorithms for approximate inference in credal networks (that is, models based on directed acyclic graphs and set-valued probabilities) that contain only binary variables. Such networks can represent incomplete or vague beliefs, lack of data, and disagreements among experts; they can also encode models based on belief functions and possibilistic measures. All algorithms for approximate inference in this paper rely on exact inferences in credal networks based on polytrees with binary variables, as these inferences have polynomial complexity. We are inspired by approximate algorithms for Bayesian networks; thus the Loopy 2U algorithm resembles Loopy Belief Propagation, while the Iterated Partial Evaluation and Structured Variational 2U algorithms are, respectively, based on Localized Partial Evaluation and variational techniques. (C) 2007 Elsevier Inc. All rights reserved.