233 results for Computer Science Applications
Abstract:
This paper presents a study of the stationary (steady-state) flashing of superheated, or metastable, liquid jets into a two-dimensional axisymmetric domain within the two-phase region. In general, the phenomenon starts when a high-pressure, high-temperature liquid jet emerges from a small nozzle or orifice and expands into a low-pressure chamber whose pressure is below the saturation pressure taken at the injection temperature. As the process evolves and crosses the saturation curve, the fluid remains in the liquid phase and reaches a superheated condition. The liquid then undergoes an abrupt phase change by means of an oblique evaporation wave. Across this phase change, the superheated liquid becomes a high-speed two-phase mixture expanding in various directions and reaching supersonic velocities. In order to reach the downstream pressure, the supersonic fluid continues to expand, crossing a complex bow shock wave. The balance equations that govern the phenomenon are the conservation of mass, momentum, and energy, complemented by an equation of state for the substance. A false-transient model is implemented using the dispersion-controlled dissipative (DCD) shock-capturing scheme, which is used to calculate the flow conditions until the steady state is reached. Numerical results obtained with the computational code DCD-2D vI are analyzed. Copyright (C) 2009 John Wiley & Sons, Ltd.
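The abstract does not reproduce the governing equations; as an illustrative sketch with assumed, generic symbols, the balance laws for a two-dimensional axisymmetric compressible flow can be written in conservative form as

\[
\frac{\partial U}{\partial t} + \frac{\partial F(U)}{\partial z} + \frac{\partial G(U)}{\partial r} = S(U),
\qquad
U = \begin{pmatrix} \rho \\ \rho u \\ \rho v \\ \rho E \end{pmatrix},
\]

where F and G collect the convective and pressure fluxes in the axial and radial directions, S is the geometric source term arising from axisymmetry, and closure is provided by an equation of state p = p(\rho, e) for the (possibly two-phase) substance. The exact formulation solved by the DCD-2D code may differ from this generic form.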
Abstract:
In this paper a bond graph methodology is used to model incompressible fluid flows with viscous and thermal effects. The distinctive characteristic of these flows is the role of pressure, which does not behave as a state variable but as a function that must act in such a way that the resulting velocity field has zero divergence. Velocity and entropy per unit volume are used as independent variables for a single-phase, single-component flow. Time-dependent nodal values and interpolation functions are introduced to represent the flow field, from which nodal vectors of velocity and entropy are defined as state variables. The system of momentum and continuity equations coincides with the one obtained by applying the Galerkin method to the weak formulation of the problem in finite elements. The integral incompressibility constraint is derived from the integral conservation of mechanical energy. The weak formulation of the thermal energy equation is modeled with true bond graph elements in terms of nodal vectors of temperature and entropy rates, resulting in a Petrov-Galerkin method. The resulting bond graph shows the coupling between the mechanical and thermal energy domains through the viscous dissipation term. All kinds of boundary conditions are handled consistently and can be represented as generalized effort or flow sources. A procedure for causality assignment is derived for the resulting graph, satisfying the second principle of thermodynamics. (C) 2007 Elsevier B.V. All rights reserved.
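For reference, the momentum and incompressibility part of the Galerkin weak formulation mentioned above can be sketched, with generic symbols not taken from the paper, as: find u and p such that, for all admissible test functions w and q,

\[
\int_\Omega \mathbf{w}\cdot\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) d\Omega
+ \int_\Omega \nabla\mathbf{w} : \boldsymbol{\tau}\, d\Omega
- \int_\Omega p\,(\nabla\cdot\mathbf{w})\, d\Omega
= \int_{\Gamma} \mathbf{w}\cdot\mathbf{t}\, d\Gamma,
\qquad
\int_\Omega q\,(\nabla\cdot\mathbf{u})\, d\Omega = 0 .
\]

In this sketch the pressure plays the role of a Lagrange multiplier enforcing the divergence-free constraint, which is precisely the non-state-variable role described in the abstract; the bond graph construction and the thermal energy equation are not reproduced here.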
Abstract:
The arteriovenous fistula (AVF) is characterized by enhanced blood flow and is the most widely used vascular access for chronic haemodialysis (Sivanesan et al., 1998). A large proportion of late AVF failures are related to local haemodynamics (Sivanesan et al., 1999a). As in the AVF, blood flow dynamics plays an important role in the growth, rupture, and surgical treatment of aneurysms. Several techniques have been used to study the flow patterns in simplified models of vascular anastomoses and aneurysms. In the present investigation, Computational Fluid Dynamics (CFD) is used to analyze the flow patterns in the AVF and the aneurysm using the velocity waveforms obtained from experimental surgeries in dogs (Galego et al., 2000), intra-operative blood flow recordings of patients with radiocephalic AVF (Sivanesan et al., 1999b), and physiological pulses (Aires, 1991), respectively. The flow patterns in the AVF obtained from the dog and patient surgical data are qualitatively similar. Perturbation, recirculation, and separation zones appeared during the cardiac cycle, and these were intensified during the diastolic phase in both the AVF and aneurysm models. The wall shear stress values found for the AVF and aneurysm models oscillated within a range that can both damage endothelial cells and promote the development of atherosclerosis.
Abstract:
By means of continuous topology optimization, this paper discusses the influence of material gradation and layout on the overall stiffness behavior of functionally graded structures. The formulation incorporates symmetry and pattern repetition constraints, including material gradation effects at both global and local levels. For instance, constraints associated with pattern repetition are applied by considering material gradation either on the global structure or locally over the specific pattern. By means of pattern repetition, we recover previous results in the literature which were obtained using homogenization and optimization of cellular materials.
Abstract:
Load cells are used extensively in engineering fields. This paper describes a novel structural optimization method for single- and multi-axis load cell structures. First, we briefly explain topology optimization based on the solid isotropic material with penalization (SIMP) approach. Next, we clarify the mechanical requirements and design specifications of the single- and multi-axis load cell structures, which are formulated as an objective function. In the case of multi-axis load cell structures, a methodology based on singular value decomposition is used. The sensitivities of the objective function with respect to the design variables are then formulated. On the basis of these formulations, an optimization algorithm is constructed using finite element methods and the method of moving asymptotes (MMA). Finally, we examine the characteristics of the optimization formulations and the resultant optimal configurations. We confirm the usefulness of our proposed methodology for the optimization of single- and multi-axis load cell structures.
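The load-cell objective function and the SVD-based multi-axis formulation are specific to the paper and are not reproduced here; the Python sketch below, with hypothetical parameter names and values, only illustrates the standard SIMP ingredient, namely the density-to-stiffness interpolation and the corresponding compliance-type sensitivity that a SIMP/MMA loop would use.

import numpy as np

def simp_youngs_modulus(x, E0=1.0, Emin=1e-9, p=3.0):
    # SIMP interpolation: element stiffness scales with density**p,
    # with a small Emin to avoid a singular stiffness matrix.
    return Emin + x**p * (E0 - Emin)

def simp_sensitivity(x, elem_strain_energy, E0=1.0, Emin=1e-9, p=3.0):
    # Derivative of compliance with respect to each element density,
    # where elem_strain_energy = u_e^T k0 u_e comes from the FE solve.
    return -p * x**(p - 1) * (E0 - Emin) * elem_strain_energy

In a full implementation these sensitivities would be passed to an MMA update of the design variables at each iteration.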
Abstract:
This work deals with the problem of minimizing the waste of space that occurs in the rotational placement of a set of irregular two-dimensional polygons inside a two-dimensional container. This problem is approached with a heuristic based on simulated annealing. Traditional "external penalization" techniques are avoided through the application of the no-fit polygon, which determines the collision-free area for each polygon before its placement. The simulated annealing controls the rotation applied, the placement, and the sequence of placement of the polygons. For each polygon that cannot be placed, a limited-depth binary search is performed to find a scale factor that, when applied to the polygon, allows it to be fitted in the container. A crystallization heuristic is proposed in order to increase the number of accepted solutions. The bottom-left and larger-first deterministic heuristics were also studied. The proposed process is suited to non-convex polygons and containers, and the containers can have holes inside. (C) 2009 Elsevier Ltd. All rights reserved.
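A minimal sketch of the two generic ingredients named above, a simulated annealing acceptance loop and a limited-depth binary search for a feasible scale factor, is given below in Python. The functions cost, neighbour and fits are hypothetical placeholders; the actual method relies on no-fit polygons and a crystallization heuristic that are not reproduced here.

import math
import random

def binary_search_scale(fits, lo=0.0, hi=1.0, depth=8):
    # Limited-depth binary search for the largest scale factor in [lo, hi]
    # for which fits(scale) is True (stand-in for the no-fit-polygon test).
    best = lo
    for _ in range(depth):
        mid = 0.5 * (lo + hi)
        if fits(mid):
            best, lo = mid, mid
        else:
            hi = mid
    return best

def simulated_annealing(cost, neighbour, x0, t0=1.0, alpha=0.95, iters=1000):
    # Generic SA loop: accept worse layouts with Boltzmann probability,
    # cooling the temperature geometrically.
    x, fx, t = x0, cost(x0), t0
    for _ in range(iters):
        y = neighbour(x)          # perturb rotation, position or placement order
        fy = cost(y)
        if fy < fx or random.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
        t *= alpha
    return x, fx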
Abstract:
The following papers constitute a selection of the best papers presented at the Ninth IEEE/IAS International Conference on Industry Applications (INDUSCON), held in Sao Paulo from the 8th to the 10th of November, 2010. This event gathered a significant number of people from academia and industry interested in applications of electrical and electronic engineering to industry.
Abstract:
The ability to control both the minimum size of holes and the minimum size of structural members is an essential requirement in the topology optimization design process for manufacturing. This paper addresses both requirements by means of a unified approach involving mesh-independent projection techniques. An inverse projection is developed to control the minimum hole size, while a standard direct projection scheme is used to control the minimum length of structural members. In addition, a heuristic scheme combining both contrasting requirements simultaneously is discussed. Two topology optimization implementations are contributed: one in which the projection (either inverse or direct) is used at each iteration, and another in which a two-phase scheme is explored. In the first phase, the compliance minimization is carried out without any projection until convergence. In the second phase, the chosen projection scheme is applied iteratively until a solution is obtained that satisfies either the minimum member size or the minimum hole size. Examples demonstrate the various features of the projection-based techniques presented.
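The inverse projection is specific to the paper and is not reproduced here; the Python sketch below only illustrates the standard direct projection idea, in a simplified form where design variables and elements share the same set of center points and linear "hat" weights are assumed.

import numpy as np

def direct_projection(design_vars, centers, rmin):
    # Mesh-independent direct projection: each element density is a
    # distance-weighted average of the design variables lying within
    # the projection radius rmin of the element center.
    rho = np.zeros(len(centers))
    for j, cj in enumerate(centers):
        d = np.linalg.norm(centers - cj, axis=1)
        w = np.maximum(0.0, rmin - d)       # linear weight, zero outside rmin
        rho[j] = np.dot(w, design_vars) / np.sum(w)
    return rho

Because the projected density of an element depends on design variables within a fixed physical radius, the resulting minimum member size is controlled by rmin rather than by the mesh resolution.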
Abstract:
The computational design of a composite whose constituent properties change gradually within a unit cell can be successfully achieved by means of a material design method that combines topology optimization with homogenization. This is an iterative numerical method which changes the composite material unit cell until the desired properties (or performance) are obtained. This method has been applied to several types of materials in the last few years. In this work, the objective is to extend the material design method to obtain functionally graded material architectures, i.e. materials that are graded at the local (e.g. microstructural) level. Consistent with this goal, a continuum distribution of the design variable inside the finite element domain is considered, representing a fully continuous material variation during the design process. Thus the topology optimization naturally leads to a smoothly graded material system. To illustrate the theoretical and numerical approaches, numerical examples are provided. The homogenization method is verified by considering one-dimensional material gradation profiles for which analytical solutions for the effective elastic properties are available. The verification of the homogenization method is extended to two dimensions considering a trigonometric material gradation and a material variation with discontinuous derivatives. These are also used as benchmark examples to verify the optimization method for functionally graded material cell design. Finally, the influence of material gradation on extreme materials is investigated, including materials with near-zero shear modulus and materials with negative Poisson's ratio.
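As an example of the kind of analytical solution available for one-dimensional gradation profiles (ignoring Poisson mismatch effects, and with generic notation not taken from the paper), a unit cell graded along a single coordinate x in [0, 1] with modulus E(x) has the classical Reuss and Voigt effective moduli

\[
E_{\text{eff}}^{\,\text{series}} = \left( \int_0^1 \frac{dx}{E(x)} \right)^{-1}
\qquad \text{and} \qquad
E_{\text{eff}}^{\,\text{parallel}} = \int_0^1 E(x)\, dx ,
\]

for uniaxial loading along the gradation direction and transverse to it, respectively. Closed-form expressions of this type are what allow the homogenization code to be verified against analytical results.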
Abstract:
Piezoresistive materials, whose resistivity changes when they are subjected to mechanical stress, are widely utilized in many industries as sensors, including pressure sensors, accelerometers, inclinometers, and load cells. Basic piezoresistive sensors consist of piezoresistive devices bonded to a flexible structure, such as a cantilever or a membrane; the flexible structure transmits pressure, force, or the inertial force due to acceleration, thereby causing a stress that changes the resistivity of the piezoresistive devices. By applying a voltage to a piezoresistive device, its resistivity can be measured and correlated with the amplitude of the applied pressure or force. The performance of a piezoresistive sensor is closely related to the design of its flexible structure. In this research, we propose a generic topology optimization formulation for the design of piezoresistive sensors where the primary aim is high response. First, the concept of topology optimization is briefly discussed. Next, design requirements are clarified, and the corresponding objective functions and optimization problem are formulated. An optimization algorithm is constructed based on these formulations. Finally, several design examples of piezoresistive sensors are presented to confirm the usefulness of the proposed method.
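A standard first-order model of the stress-to-resistance coupling described above (not necessarily the exact formulation used in the paper) relates the relative resistance change of a piezoresistor to the stress components it experiences,

\[
\frac{\Delta R}{R} \approx \pi_l \,\sigma_l + \pi_t \,\sigma_t ,
\]

where \pi_l and \pi_t are the longitudinal and transverse piezoresistive coefficients and \sigma_l, \sigma_t the corresponding stress components. Since the flexible structure determines the stresses seen by the piezoresistors, maximizing the sensor response amounts to shaping that structure, which is the role of the topology optimization formulation.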
Abstract:
Systems of distributed artificial intelligence can be powerful tools in a wide variety of practical applications. Their most surprising characteristic, emergent behavior, is also the one most responsible for the difficulty in designing these systems. This work proposes a tool capable of generating individual strategies for the elements of a multi-agent system, thereby providing the group with the means to obtain the desired results while working in a coordinated and cooperative manner. As an application example, we consider a problem in which a group of predators must catch its prey in a continuous three-dimensional environment. A strategy-synthesis system was implemented whose internal mechanism integrates simulators with the Particle Swarm Optimization (PSO) algorithm, a Swarm Intelligence technique. The system was tested in several simulation settings and was able to automatically synthesize successful hunting strategies, substantiating that the developed tool, provided it works with well-elaborated patterns, can produce satisfactory solutions to problems of a complex nature that are difficult to solve through analytical approaches. (c) 2007 Elsevier Ltd. All rights reserved.
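The canonical PSO update, which is the generic building block behind the strategy synthesis described above, is sketched below in Python. The coupling with the predator-prey simulator and the encoding of hunting strategies are specific to the paper and are not reproduced; parameter names and values are illustrative.

import numpy as np

def pso_step(x, v, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    # One canonical PSO update: inertia + cognitive + social terms.
    # x, v, pbest have shape (n_particles, n_dims); gbest has shape (n_dims,).
    r1 = np.random.rand(*x.shape)
    r2 = np.random.rand(*x.shape)
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return x + v, v

After each step, pbest and gbest would be refreshed by evaluating the fitness of the updated particles, here the simulated success of the candidate hunting strategies.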
Abstract:
Simulated annealing (SA) is an optimization technique that can handle cost functions with arbitrary degrees of nonlinearity, discontinuity, and stochasticity. It can also handle arbitrary boundary conditions and constraints imposed on these cost functions. The SA technique is applied to the problem of robot path planning. Three representations of the path are considered here: a polyline, a Bezier curve, and a spline-interpolated curve. In the proposed SA algorithm, the sensitivity of each continuous parameter is evaluated at each iteration, increasing the number of accepted solutions. The sensitivity of each parameter is associated with its probability distribution when defining the next candidate solution. (C) 2010 Elsevier Ltd. All rights reserved.
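A rough Python stand-in for the sensitivity-driven proposal mechanism described above is sketched below; the cost function, the path parameterization (polyline, Bezier, or spline control points), and the exact probability distributions are placeholders, not the authors' formulation.

import random

def sensitivity_scaled_proposal(cost, x, fx, base_step, eps=1e-3):
    # Generate the next SA candidate: the proposal width of each continuous
    # parameter shrinks when the cost is locally sensitive to it, so that
    # sensitive parameters receive smaller, more acceptable moves.
    y = list(x)
    for i in range(len(x)):
        xp = list(x)
        xp[i] += eps
        sens = abs(cost(xp) - fx) / eps      # finite-difference sensitivity
        width = base_step[i] / (1.0 + sens)
        y[i] = x[i] + random.uniform(-width, width)
    return y

The candidate y would then be accepted or rejected with the usual Metropolis criterion of simulated annealing.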
Abstract:
Here, we study the stable integration of real-time optimization (RTO) with model predictive control (MPC) in a three-layer structure. The intermediate layer is a quadratic programming problem whose objective is to compute reachable targets for the MPC layer that lie at the minimum distance from the optimal set points produced by the RTO layer. The lower layer is an infinite-horizon MPC with guaranteed stability and additional constraints that enforce the feasibility and convergence of the target calculation layer. The case in which there is polytopic uncertainty in the steady-state model used in the target calculation is also considered. The dynamic part of the MPC model is also considered unknown, but it is assumed to be represented by one member of a discrete set of models. The efficiency of the methods presented here is illustrated with the simulation of a low-order system. (C) 2010 Elsevier Ltd. All rights reserved.
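An illustrative form of the intermediate target-calculation layer (with assumed symbols; the paper additionally handles polytopic model uncertainty and reachability constraints imposed by the MPC layer) is the quadratic program

\[
\min_{x_s,\,u_s}\ \left\| C x_s - y_{\mathrm{RTO}} \right\|_W^2
\quad \text{subject to} \quad
x_s = A x_s + B u_s, \qquad u_{\min} \le u_s \le u_{\max},
\]

where (A, B, C) is the linear steady-state model used by the controller and y_RTO is the optimal set point produced by the RTO layer. The resulting target is the reachable point closest to the RTO optimum, which is then passed to the infinite-horizon MPC.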
Abstract:
In this study, the concept of cellular automata is applied in an innovative way to simulate the separation of phases in a water/oil emulsion. The velocity of the water droplets is calculated from the balance of forces acting on a pair of droplets in a group, and a cellular automaton is used to simulate the whole group of droplets. Thus, it is possible to solve the problem stochastically and to show the sequence of droplet collisions and coalescence phenomena. This methodology enables the calculation of the amount of water that can be separated from the emulsion under different operating conditions, thus enabling the process to be optimized. Comparisons between the results obtained from the developed model and the operational performance of an actual desalting unit are carried out. The accuracy observed shows that the developed model is a good representation of the actual process. (C) 2010 Published by Elsevier Ltd.
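A deliberately simplified Python illustration of a settling-and-coalescence cellular automaton is given below; the update rule, the probability parameter, and the one-dimensional geometry are assumptions for illustration only and do not reproduce the force-balance velocities or rule set of the paper.

import random

def ca_settling_step(cells, settle_prob=0.5):
    # One cellular-automaton step for a vertical column of cells.
    # Each cell holds a droplet volume (0 = no droplet). A droplet moves one
    # cell downward with a fixed probability (a crude stand-in for a
    # force-balance velocity); droplets landing in an occupied cell coalesce,
    # i.e. their volumes add.
    new = [0.0] * len(cells)
    for i, vol in enumerate(cells):
        if vol == 0.0:
            continue
        j = i + 1 if i + 1 < len(cells) and random.random() < settle_prob else i
        new[j] += vol
    return new

Iterating such a step stochastically produces a sequence of collisions and coalescence events, and the volume accumulated at the bottom of the column is a proxy for the separated water.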
Abstract:
This paper studies a simplified methodology to integrate the real-time optimization (RTO) of a continuous system into the model predictive controller in a one-layer strategy. The gradient of the economic objective function is included in the cost function of the controller. The optimal steady-state conditions of the process are sought using a rigorous nonlinear process model, while the trajectory to be followed is predicted with a linear dynamic model obtained from a plant step test. The main advantage of the proposed strategy is that the resulting control/optimization problem can still be solved with a quadratic programming routine at each sampling step. Simulation results show that the proposed approach is comparable to the strategy that solves the full economic optimization problem inside the MPC controller, where the resulting control problem becomes a nonlinear programming problem with a much higher computational load. (C) 2010 Elsevier Ltd. All rights reserved.
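A generic illustration of the one-layer cost described above (with assumed notation; the exact formulation in the paper may differ) is

\[
\min_{\Delta u}\ \sum_{j=1}^{p} \left\| y_{k+j} - y^{sp} \right\|_Q^2
+ \sum_{j=0}^{m-1} \left\| \Delta u_{k+j} \right\|_R^2
+ w \,\nabla f_{\mathrm{eco}}\!\left(u_{k-1}\right)^{\top} u_{k+m-1},
\]

where the gradient of the economic objective, \nabla f_{eco}, is evaluated with the rigorous nonlinear steady-state model at the current operating point, the predictions y_{k+j} come from the linear dynamic model, and w is a weighting factor. Because the added economic term is linear in the inputs, the overall problem remains a quadratic program, consistent with the claim that a QP routine suffices at each sampling step.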