958 results for Cost Optimization
Abstract:
In this work, we explore simultaneous geometry design and material selection for statically determinate trusses by posing it as a continuous optimization problem. The underlying principles of our approach are structural optimization and Ashby's procedure for material selection from a database. For simplicity and ease of initial implementation, only static loads are considered, with the objectives of maximizing stiffness, minimizing weight/cost, and ensuring safety against failure. The safety of tension and compression members in the truss is treated differently, to guard against yield and buckling failures, respectively. Geometry variables such as the lengths and orientations of members are taken to be the design variables in an assumed layout. The cross-sectional areas of the members are determined so that the failure constraint is satisfied in each member. Along the lines of Ashby's material indices, a new design index is derived for trusses. The design index helps in choosing the most suitable material for any geometry of the truss. Using the design index, both the design space and the material database are searched simultaneously using gradient-based optimization algorithms. The important feature of our approach is that the formulated optimization problem is continuous, although material selection from a database is an inherently discrete problem. A few illustrative examples are included. It is observed that the method is capable of determining the optimal topology, in addition to the optimal geometry, when the assumed layout contains more links than are necessary for optimality.
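A note on the background: the design index derived in this paper is specific to trusses, but it follows the pattern of the classical Ashby material indices, which can be stated compactly. The block below recalls two standard indices, for a strength-limited tie and a buckling-limited strut; it is background only, not the paper's new index.

```latex
% Classical Ashby material indices (background only, not the paper's new index).
% A tie of fixed length L carrying force F needs area A >= F/\sigma_f, so its
% mass m = \rho A L is minimized by the material that maximizes
\[
  M_{\mathrm{tie}} \;=\; \frac{\sigma_f}{\rho}.
\]
% A slender pin-ended strut sized against Euler buckling (P_{cr} = \pi^2 E I/L^2,
% solid circular section) is lightest for the material that maximizes
\[
  M_{\mathrm{strut}} \;=\; \frac{E^{1/2}}{\rho}.
\]
```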
Abstract:
We present a generic method/model for multi-objective design optimization of laminated composite components based on the vector evaluated particle swarm optimization (VEPSO) algorithm. VEPSO is a novel, co-evolutionary multi-objective variant of the popular particle swarm optimization (PSO) algorithm. In the current work, a modified version of the VEPSO algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and the total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, their stacking sequence (the orientation of the layers) and the thickness of each layer. Classical lamination theory is used to determine the stresses in the component, and the design is evaluated against three failure criteria: the failure-mechanism-based criterion, the maximum stress criterion and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial and bending loads. The design optimization has been carried out both for variable stacking sequences and for fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. (C) 2007 Elsevier Ltd. All rights reserved.
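The abstract does not spell out how the discrete variables are handled inside VEPSO; as a rough orientation only, the sketch below shows a two-swarm, vector-evaluated PSO loop in which a continuous particle position is rounded to a ply count and stacking sequence, with placeholder weight and cost functions standing in for the lamination analysis and failure checks. The angle set, objective functions and constants are assumptions, not taken from the paper; the VEPSO feature kept here is that each swarm optimizes one objective and is guided by the best particle of the other swarm.

```python
import random

# Placeholder objectives: weight grows with the number of plies, cost penalizes
# off-axis plies. The real model would use classical lamination theory and the
# three failure criteria named above. All constants are assumptions.
PLY_ANGLES = [0, 45, -45, 90]          # allowed orientations (assumed)
PLY_THICKNESS = 0.125                  # mm per ply (assumed)
MAX_LAYERS = 16

def weight(layers):
    return len(layers) * PLY_THICKNESS

def cost(layers):
    return sum(1.0 + 0.2 * abs(a) / 90.0 for a in layers)

def decode(position):
    """Round a continuous particle position to (ply count, stacking sequence)."""
    n = max(1, min(MAX_LAYERS, int(round(position[0]))))
    return [PLY_ANGLES[int(round(x)) % len(PLY_ANGLES)] for x in position[1:n + 1]]

def vepso(n_particles=20, iters=100):
    dim = 1 + MAX_LAYERS
    objectives = [weight, cost]
    swarms = []
    for _ in objectives:               # one swarm per objective (the VEPSO idea)
        pos = [[random.uniform(0, MAX_LAYERS) for _ in range(dim)]
               for _ in range(n_particles)]
        swarms.append({"pos": pos,
                       "vel": [[0.0] * dim for _ in range(n_particles)],
                       "best": [list(p) for p in pos],
                       "bestf": [float("inf")] * n_particles})
    gbest = [None] * len(objectives)
    for _ in range(iters):
        for s, swarm in enumerate(swarms):           # evaluate own objective
            for i in range(n_particles):
                val = objectives[s](decode(swarm["pos"][i]))
                if val < swarm["bestf"][i]:
                    swarm["bestf"][i] = val
                    swarm["best"][i] = list(swarm["pos"][i])
            b = min(range(n_particles), key=swarm["bestf"].__getitem__)
            gbest[s] = list(swarm["best"][b])
        for s, swarm in enumerate(swarms):           # move, guided by the *other*
            guide = gbest[1 - s]                     # swarm's best (co-evolution)
            for i in range(n_particles):
                for d in range(dim):
                    r1, r2 = random.random(), random.random()
                    swarm["vel"][i][d] = (0.7 * swarm["vel"][i][d]
                        + 1.5 * r1 * (swarm["best"][i][d] - swarm["pos"][i][d])
                        + 1.5 * r2 * (guide[d] - swarm["pos"][i][d]))
                    swarm["pos"][i][d] += swarm["vel"][i][d]
    return [decode(g) for g in gbest]

print(vepso())
```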
Abstract:
Modern database systems incorporate a query optimizer to identify the most efficient "query execution plan" for executing the declarative SQL queries submitted by users. A dynamic-programming-based approach is used to exhaustively enumerate the combinatorially large search space of plan alternatives and, using a cost model, to identify the optimal choice. While dynamic programming (DP) works very well for moderately complex queries with up to around a dozen base relations, it usually fails to scale beyond this stage due to its inherent exponential space and time complexity. Therefore, DP becomes practically infeasible for complex queries with a large number of base relations, such as those found in current decision-support and enterprise management applications. To address the above problem, a variety of approaches have been proposed in the literature. Some completely jettison the DP approach and resort to alternative techniques such as randomized algorithms, whereas others have retained DP by using heuristics to prune the search space to computationally manageable levels. In the latter class, a well-known strategy is "iterative dynamic programming" (IDP) wherein DP is employed bottom-up until it hits its feasibility limit, and then iteratively restarted with a significantly reduced subset of the execution plans currently under consideration. The experimental evaluation of IDP indicated that by appropriate choice of algorithmic parameters, it was possible to almost always obtain "good" (within a factor of two of the optimal) plans, and in the few remaining cases, mostly "acceptable" (within an order of magnitude of the optimal) plans, and rarely, a "bad" plan. While IDP is certainly an innovative and powerful approach, we have found that there are a variety of common query frameworks wherein it can fail to consistently produce good plans, let alone the optimal choice. This is especially so when star or clique components are present, increasing the complexity of the join graphs. Worse, this shortcoming is exacerbated when the number of relations participating in the query is scaled upwards.
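For readers unfamiliar with the plan-search strategies being compared, the sketch below illustrates, under a deliberately toy cost model (uniform join selectivity, cost equal to the sum of intermediate result sizes, cross products allowed), both exhaustive bottom-up DP over join subsets and an IDP-like restart that freezes the best plan for a block of relations into a single composite when full DP would be infeasible. It is an illustration of the general idea only, not the IDP variant, block-selection policy or cost model evaluated in this work.

```python
from itertools import combinations

SELECTIVITY = 0.01    # assumed uniform join selectivity (toy cost model)

def dp_best_plan(cards):
    """Exhaustive bottom-up DP over every subset of relations in `cards`
    (name -> cardinality). Cost of a plan = sum of intermediate result sizes.
    Returns (cost, output_size, plan_tree)."""
    rels = sorted(cards)
    best = {frozenset([r]): (0.0, float(cards[r]), r) for r in rels}
    for k in range(2, len(rels) + 1):
        for subset in combinations(rels, k):
            s = frozenset(subset)
            options = []
            for m in range(1, k):                     # every split into two sides
                for left in combinations(subset, m):
                    l = frozenset(left)
                    cl, sl, pl = best[l]
                    cr, sr, pr = best[s - l]
                    size = sl * sr * SELECTIVITY
                    options.append((cl + cr + size, size, (pl, pr)))
            best[s] = min(options, key=lambda opt: opt[0])
    return best[frozenset(rels)]

def idp_like_plan(cards, block_size=3):
    """IDP-flavoured heuristic: while full DP is 'too big', run DP on a block of
    relations, freeze its best plan as one composite relation, and repeat."""
    cards = dict(cards)
    frozen_cost = 0.0
    while len(cards) > block_size:
        block = sorted(cards)[:block_size]            # naive block choice
        cost, size, _tree = dp_best_plan({r: cards[r] for r in block})
        frozen_cost += cost
        composite = "(" + "*".join(block) + ")"
        for r in block:
            del cards[r]
        cards[composite] = size                       # composite stands in for the block
    cost, size, tree = dp_best_plan(cards)
    return frozen_cost + cost, size, tree

catalog = {"A": 1000, "B": 200, "C": 5000, "D": 100, "E": 800, "F": 60}
print("full DP  cost:", round(dp_best_plan(catalog)[0], 1))
print("IDP-like cost:", round(idp_like_plan(catalog)[0], 1))
```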
Abstract:
There are a number of large networks that occur in many problems dealing with the flow of power, communication signals, water, gas, transportable goods, etc. Both the design and the planning of these networks involve optimization problems. The first part of this paper introduces the common characteristics of a nonlinear network (the network may be linear, the objective function may be nonlinear, or both may be nonlinear). The second part develops a mathematical model that brings together the important constraints, based on the abstraction of a general network. The third part deals with solution procedures: it converts the network to a matrix-based system of equations, gives the characteristics of the matrix and suggests two solution procedures, one of them new. The fourth part handles spatially distributed networks and develops a number of decomposition techniques so that the problem can be solved with the help of a distributed computer system. Algorithms for parallel processors and spatially distributed systems are described.

There are a number of common features that pertain to networks. A network consists of a set of nodes and arcs. At every node there is the possibility of an input (power, water, a message, goods, etc.), an output, or neither. The network equations describe the flows among nodes through the arcs; these equations couple the variables associated with the nodes. Variables pertaining to arcs are invariably constants, and the required result is the flows through the arcs. In the base problem, we are given the input flows at nodes, the output flows at nodes and certain physical constraints on the other node variables, and we must find the flows through the network (the variables at nodes will be referred to as across variables).

The optimization problem consists of selecting the inputs at nodes so as to optimize an objective function; the objective may be a cost function based on the inputs to be minimized, a loss function or an efficiency function. This mathematical model can be solved using the Lagrange multiplier technique, since the equality constraints dominate the inequalities. The Lagrange multiplier technique divides the solution procedure into two stages per iteration: stage one calculates the problem variables and stage two the multipliers λ. It is shown that the Jacobian matrix used in stage one (for solving a nonlinear system of necessary conditions) also occurs in stage two.

A second solution procedure, called the total residue approach, has been embedded into the first. It modifies the equality constraints so that the iterations converge faster. Both solution procedures are found to converge in 3 to 7 iterations for a sample network.

The availability of distributed computer systems, both LAN and WAN, suggests the need for algorithms to solve these optimization problems on them. Two types of algorithms have been proposed: one based on the physics of the network and the other on a property of the Jacobian matrix. Three algorithms have been devised, one of them for the local-area case: the regional distributed algorithm, the hierarchical regional distributed algorithm (both using the physical properties of the network), and the locally distributed algorithm (a multiprocessor-based approach with a local area network configuration). The aim was to define algorithms that are fast and use minimal communication. These algorithms are found to converge at the same rate as the non-distributed (unitary) case.
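The two-stage Lagrange multiplier iteration described above has the following generic structure, with the paper's specific network equations abstracted into g(x) = b; the split into stages is paraphrased from the text, not reproduced from the paper.

```latex
% Generic statement of the formulation sketched above: choose the node inputs x
% to minimize a cost subject to the (possibly nonlinear) network equations.
\[
  \min_{x}\; f(x) \quad \text{subject to} \quad g(x) = b,
  \qquad
  \mathcal{L}(x,\lambda) = f(x) + \lambda^{\top}\bigl(g(x) - b\bigr).
\]
% First-order necessary conditions, solved by alternating an update of the
% problem variables x and an update of the multipliers \lambda:
\[
  \nabla f(x) + J_g(x)^{\top}\lambda = 0,
  \qquad
  g(x) = b,
\]
% where the same Jacobian J_g(x) enters both updates, as noted in the text.
```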
Abstract:
In this paper, we present a generic method/model for multi-objective design optimization of laminated composite components, based on the Vector Evaluated Artificial Bee Colony (VEABC) algorithm. VEABC is a parallel, vector evaluated, swarm-intelligence multi-objective variant of the Artificial Bee Colony (ABC) algorithm. In the current work, a modified version of the VEABC algorithm for discrete variables has been developed and implemented successfully for the multi-objective design optimization of composites. The problem is formulated with the multiple objectives of minimizing the weight and the total cost of the composite component while achieving a specified strength. The primary optimization variables are the number of layers, their stacking sequence (the orientation of the layers) and the thickness of each layer. Classical lamination theory is used to determine the stresses in the component, and the design is evaluated against three failure criteria: the failure-mechanism-based criterion, the maximum stress criterion and the Tsai-Wu criterion. The optimization method is validated for a number of different loading configurations: uniaxial, biaxial and bending loads. The design optimization has been carried out both for variable stacking sequences and for fixed standard stacking schemes, and a comparative study of the different design configurations evolved is presented. Finally, the performance is evaluated against other nature-inspired techniques, including Particle Swarm Optimization (PSO), the Artificial Immune System (AIS) and the Genetic Algorithm (GA). The performance of ABC is on par with that of PSO, AIS and GA for all the loading configurations. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
Heat exchanger design is a complex task involving the selection of a large number of interdependent design parameters. There are no established general techniques for optimizing the design, though a few earlier attempts provide computer software based on gradient methods, case study methods, etc. The authors felt that it would be useful to determine the nature of the optimal and near-optimal feasible designs to devise an optimization technique. Therefore, in this article they have obtained a large number of feasible designs of shell and tube heat exchangers, intended to perform a given heat duty, by an exhaustive search method. They have studied how their capital and operating costs varied. The study reveals several interesting aspects of the dependence of capital and total costs on various design parameters. The authors considered a typical shell and tube heat exchanger used in an oil refinery. Its heat duty, inlet temperature and other details are given.
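The abstract does not list the design variables or cost correlations used; purely to illustrate the exhaustive-search pattern, the sketch below enumerates a small grid of shell-and-tube parameters and keeps the cheapest design that meets a required heat-transfer area, with placeholder capital- and operating-cost formulas where the real thermal-hydraulic correlations would go. All numbers and formulas are assumptions.

```python
import math
from itertools import product

# Placeholder design grid and economics (assumed, not from the article).
TUBE_DIAMETERS = [0.016, 0.020, 0.025]      # m
TUBE_LENGTHS   = [2.44, 3.66, 4.88, 6.10]   # m
TUBE_COUNTS    = range(100, 1001, 100)
TUBE_PASSES    = [1, 2, 4]
REQUIRED_AREA  = 250.0                      # m^2 needed to meet the duty (assumed)
HOURS_PER_YEAR = 8000
POWER_COST     = 0.07                       # $/kWh (assumed)
HORIZON_YEARS  = 10

def area(d, length, n):
    return math.pi * d * length * n         # outside tube surface area

def capital_cost(d, length, n, passes):
    # Placeholder correlation: rises with surface area and number of passes.
    return 30000.0 + 750.0 * area(d, length, n) ** 0.81 * (1 + 0.1 * passes)

def operating_cost_per_year(d, length, n, passes):
    # Placeholder pumping-power model: more passes / longer tubes raise the
    # pressure drop; more and larger tubes lower it.
    power_kw = 5.0 * passes * length / (100.0 * n * d)
    return power_kw * HOURS_PER_YEAR * POWER_COST

best = None
for d, length, n, p in product(TUBE_DIAMETERS, TUBE_LENGTHS, TUBE_COUNTS, TUBE_PASSES):
    if area(d, length, n) < REQUIRED_AREA:  # infeasible: cannot meet the duty
        continue
    total = (capital_cost(d, length, n, p)
             + HORIZON_YEARS * operating_cost_per_year(d, length, n, p))
    if best is None or total < best[0]:
        best = (total, d, length, n, p)

print("cheapest feasible design (total cost, d, L, n_tubes, passes):", best)
```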
Abstract:
This work is a survey of the average cost control problem for discrete-time Markov processes. The authors have attempted to put together a comprehensive account of the considerable research on this problem over the past three decades. The exposition ranges from finite to Borel state and action spaces and includes a variety of methodologies to find and characterize optimal policies. The authors have included a brief historical perspective of the research efforts in this area and have compiled a substantial yet not exhaustive bibliography. The authors have also identified several important questions that are still open to investigation.
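For orientation, the criterion the surveyed policies are judged by, and the optimality equation that characterizes optimal policies in the simplest finite state-and-action setting, can be written as follows (standard forms, included for context only).

```latex
% Long-run average cost of a stationary policy \pi started at state x:
\[
  J(x,\pi) \;=\; \limsup_{n\to\infty}\,\frac{1}{n}\,
  \mathbb{E}^{\pi}_{x}\!\left[\sum_{t=0}^{n-1} c(x_t,a_t)\right].
\]
% In the finite state/action case an optimal policy is characterized by the
% average cost optimality equation (ACOE): find a constant \rho^* and a bias
% function h with
\[
  \rho^* + h(x) \;=\; \min_{a\in A(x)}
  \Bigl[\, c(x,a) + \sum_{y} p(y\mid x,a)\,h(y) \Bigr]
  \quad\text{for all } x;
\]
% any minimizing action at each state then defines an optimal stationary policy.
```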
Abstract:
We develop four algorithms for simulation-based optimization under multiple inequality constraints. Both the cost and the constraint functions are considered to be long-run averages of certain state-dependent single-stage functions. We pose the problem in the simulation optimization framework by using the Lagrange multiplier method. Two of our algorithms estimate only the gradient of the Lagrangian, while the other two estimate both its gradient and its Hessian. In the process, we also develop various new estimators for the gradient and Hessian. All our algorithms use two simulations each. Two of these algorithms are based on the smoothed functional (SF) technique, while the other two are based on the simultaneous perturbation stochastic approximation (SPSA) method. We prove the convergence of our algorithms and present numerical experiments on a setting involving an open Jackson network. The Newton-based SF algorithm is seen to show the best overall performance.
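The paper specifies its algorithms precisely; as a loose illustration of the kind of two-simulation SPSA step applied to a Lagrangian, the sketch below uses a synthetic noisy cost and a single inequality constraint in place of the long-run averages of a simulated Jackson network. The functions, step sizes and multiplier update are assumptions for illustration only.

```python
import random

def simulate(theta, noise=0.05):
    """One 'simulation': noisy estimates of the cost and the constraint value.
    Synthetic stand-ins; in the paper these are long-run averages of
    state-dependent single-stage functions of a simulated network."""
    cost = (theta[0] - 1.0) ** 2 + (theta[1] + 0.5) ** 2 + random.gauss(0, noise)
    cons = theta[0] + theta[1] - 0.8 + random.gauss(0, noise)   # want cons <= 0
    return cost, cons

def spsa_constrained(iters=2000, a=0.05, delta=0.1, b=0.005):
    theta, lam = [0.0, 0.0], 0.0
    for _ in range(iters):
        # Simultaneous +/- perturbation with Bernoulli(+1/-1) components.
        d = [random.choice((-1.0, 1.0)) for _ in theta]
        plus  = [t + delta * di for t, di in zip(theta, d)]
        minus = [t - delta * di for t, di in zip(theta, d)]
        cost_p, cons_p = simulate(plus)     # simulation 1
        cost_m, cons_m = simulate(minus)    # simulation 2
        lag_p = cost_p + lam * cons_p       # Lagrangian samples
        lag_m = cost_m + lam * cons_m
        # SPSA gradient estimate of the Lagrangian; descent step in theta.
        theta = [t - a * (lag_p - lag_m) / (2.0 * delta * di)
                 for t, di in zip(theta, d)]
        # Slower multiplier ascent on the constraint estimate, kept >= 0.
        lam = max(0.0, lam + b * 0.5 * (cons_p + cons_m))
    return theta, lam

print(spsa_constrained())
```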
Abstract:
We address the optimal control problem of a very general stochastic hybrid system with both autonomous and impulsive jumps. The planning horizon is infinite and we use the discounted-cost criterion for performance evaluation. Under certain assumptions, we show the existence of an optimal control. We then derive the quasivariational inequalities satisfied by the value function and establish well-posedness. Finally, we prove the usual verification theorem of dynamic programming.
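For concreteness, the discounted-cost criterion mentioned above has the standard form below, written here with a running cost plus impulse costs charged at the intervention times; the exact cost structure assumed in the paper may differ.

```latex
% Discounted-cost criterion with discount rate \alpha > 0: running cost c along
% the trajectory plus impulse costs k charged at the intervention times \tau_i
% (the precise cost structure in the paper may differ).
\[
  V(x) \;=\; \inf_{u(\cdot)}\;
  \mathbb{E}_{x}\!\left[\,\int_{0}^{\infty} e^{-\alpha t}\, c(X_t,u_t)\,dt
  \;+\; \sum_{i} e^{-\alpha \tau_i}\, k\bigl(X_{\tau_i^-},\xi_i\bigr)\right].
\]
```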
Abstract:
A natural velocity field method for shape optimization of reinforced concrete (RC) flexural members has been demonstrated. The possibility of shape optimization by modifying the shape of an initially rectangular section, in addition to varying the breadth and depth along the length, has been explored. The necessary shape changes have been computed using the sequential quadratic programming (SQP) technique. A genetic algorithm (Goldberg and Samtani 1986) has been used to optimize the diameter and number of the main reinforcement bars. A limit-state design approach has been adopted for the nonprismatic RC sections. Relevant issues such as the formulation of the optimization problem, finite-element modeling, and the solution procedure are described. Three design examples (a simply supported beam, a cantilever beam, and a two-span continuous beam, all under uniformly distributed loads) have been optimized. The results show significant savings (40-56%) in material and cost and also yield aesthetically pleasing structures. The procedure should lead to considerable cost savings, particularly for mass-produced precast members and heavy cast-in-place members such as bridge girders.
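The genetic-algorithm step for the main reinforcement is described only at a high level; the sketch below shows the kind of discrete search involved, selecting a bar diameter and count that supply at least a hypothetical required steel area at minimum steel cost. The bar sizes, unit cost and required area are assumptions, and the real procedure couples this selection with the SQP shape optimization and limit-state checks.

```python
import math
import random

BAR_DIAMETERS = [12, 16, 20, 25, 32]       # mm, typical stock sizes
COST_PER_MM2_PER_M = 0.006                 # currency units per mm^2 per m (assumed)
SPAN = 6.0                                 # m (assumed)
AS_REQUIRED = 1800.0                       # mm^2 of steel demanded by flexure (assumed)

def steel_area(diameter, count):
    return count * math.pi * diameter ** 2 / 4.0

def fitness(ind):
    d, n = ind
    if n < 2 or steel_area(d, n) < AS_REQUIRED:
        return float("inf")                # infeasible: too few bars / too little steel
    return steel_area(d, n) * SPAN * COST_PER_MM2_PER_M   # cost of provided steel

def genetic_search(pop_size=30, generations=60, p_mut=0.2):
    pop = [(random.choice(BAR_DIAMETERS), random.randint(2, 12)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness)
        parents = pop[: pop_size // 2]                    # elitist selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = (a[0], b[1])                          # one-point crossover
            if random.random() < p_mut:                   # mutation
                child = (random.choice(BAR_DIAMETERS),
                         max(2, child[1] + random.choice((-1, 1))))
            children.append(child)
        pop = parents + children
    return min(pop, key=fitness)

best = genetic_search()
print("bars (diameter mm, count):", best,
      "area:", round(steel_area(*best), 1), "mm^2")
```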
Abstract:
The focus of this paper is on designing useful high-aspect-ratio compliant micro-mechanisms that can be microfabricated by the cost-effective wet etching of (110)-orientation silicon (Si) wafers. Wet etching of (110) Si imposes constraints on the geometry of the realized mechanisms because it allows etch-through only in the form of slots parallel to the wafer's flat, with a certain minimum length. In this paper, we incorporate this constraint into the topology optimization and obtain compliant designs that meet the specifications on the desired motion for given input forces. Using this design technique and wet etching, we show that high-aspect-ratio compliant micro-mechanisms can be realized. For a (110) Si wafer of 250 µm thickness, the minimum length of the etch opening needed to obtain a slot is found to be 866 µm. The minimum achievable width of the slot is limited by the resolution of the lithography process and can therefore be very small; this is studied by conducting trials with different mask layouts on a (110) Si wafer. These constraints are handled by using a suitable design parameterization rather than by imposing them explicitly. Topology optimization, as is well known, produces designs using only the essential design specifications. In this work, we show that our technique also yields manufacturable mechanism designs along with the lithography mask layouts. Some of the designs obtained are transferred to lithography masks and the mechanisms are fabricated on (110) Si wafers.
Abstract:
Frequent accesses to the register file make it one of the major sources of energy consumption in ILP architectures. The large number of functional units connected to a large unified register file in VLIW architectures makes power dissipation in the register file even worse because of the need for a large number of ports. High power dissipation in the relatively small area occupied by a register file leads to a high power density, making the register file one of the prime hot-spots and highly susceptible to a catastrophic heatstroke. This in turn impacts performance and cost, because of the need for periodic cool-down and for sophisticated packaging and cooling techniques, respectively. Clustered VLIW architectures partition the register file among clusters of functional units and reduce the number of ports required, thereby reducing the power dissipation. However, we observe that the aggregate accesses to register files in clustered VLIW architectures (and the associated energy consumption) become very high compared to centralized VLIW architectures, which can be attributed to the large number of explicit inter-cluster communications. Snooping-based clustered VLIW architectures provide a very limited but very fast form of inter-cluster communication by allowing some of the functional units to directly read some of the operands from the register files of other clusters. In this paper, we propose instruction scheduling algorithms that exploit this limited snooping capability to reduce register file energy consumption on average by 12% and 18%, and to improve the overall performance by 5% and 11%, for a 2-clustered and a 4-clustered machine respectively, over an earlier state-of-the-art clustered scheduling algorithm, when evaluated in the context of snooping-based clustered VLIW architectures.
Abstract:
We present two efficient discrete parameter simulation optimization (DPSO) algorithms for the long-run average cost objective. One of these algorithms uses the smoothed functional approximation (SFA) procedure, while the other is based on simultaneous perturbation stochastic approximation (SPSA). The use of SFA for DPSO had not been proposed previously in the literature. Further, both algorithms adopt an interesting technique of random projections that we present here for the first time. We give a proof of convergence of our algorithms. Next, we present detailed numerical experiments on a problem of admission control with dependent service times. We consider two different settings involving parameter sets of moderate and large size, respectively. On the first setting, we also show performance comparisons with the well-studied optimal computing budget allocation (OCBA) algorithm and with the equal allocation algorithm. Note to Practitioners: Even though SPSA and SFA were devised in the literature for continuous optimization problems, our results indicate that they can be powerful techniques even when adapted to discrete optimization settings. OCBA is widely recognized as one of the most powerful methods for discrete optimization when the parameter sets are of small or moderate size. On a setting involving a parameter set of size 100, we observe that when the computing budget is small, SPSA and OCBA show similar performance and are better than SFA; however, as the computing budget is increased, SPSA and SFA show better performance than OCBA. Both our algorithms also show good performance when the parameter set has a size of 10^8. SFA is seen to show the best overall performance. Unlike most other DPSO algorithms in the literature, an advantage of our algorithms is that they are easily implementable regardless of the size of the parameter sets and show good performance in both scenarios.
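The random-projection technique is defined in the paper itself; purely as one plausible illustration of how a continuous SPSA iterate can be mapped back onto a discrete parameter grid, the snippet below projects each coordinate onto a neighbouring grid point with probability proportional to proximity. This projection rule, the grid and the toy cost are assumptions, not necessarily the paper's scheme.

```python
import random

GRID = [float(v) for v in range(11)]       # admissible discrete values (assumed)

def random_project(x):
    """Map a continuous coordinate to one of its two neighbouring grid points,
    chosen randomly with probability proportional to closeness (assumed rule)."""
    x = min(max(x, GRID[0]), GRID[-1])
    lo = max(g for g in GRID if g <= x)
    hi = min(g for g in GRID if g >= x)
    if lo == hi:
        return lo
    return hi if random.random() < (x - lo) / (hi - lo) else lo

def cost(theta):
    """Noisy stand-in for a long-run average cost (e.g. of an admission policy)."""
    return (theta[0] - 6.0) ** 2 + (theta[1] - 3.0) ** 2 + random.gauss(0, 0.5)

def discrete_spsa(iters=3000, a=0.05, delta=1.0):
    theta = [5.0, 5.0]                     # continuous working iterate
    for _ in range(iters):
        d = [random.choice((-1.0, 1.0)) for _ in theta]
        plus  = [random_project(t + delta * di) for t, di in zip(theta, d)]
        minus = [random_project(t - delta * di) for t, di in zip(theta, d)]
        cp, cm = cost(plus), cost(minus)   # two simulations per iteration
        theta = [min(max(t - a * (cp - cm) / (2.0 * delta * di), GRID[0]), GRID[-1])
                 for t, di in zip(theta, d)]
    return [random_project(t) for t in theta]

print(discrete_spsa())
```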
Abstract:
In this work, we explore simultaneous design and material selection by posing it as an optimization problem. The underlying principles of our approach are Ashby's material selection procedure and structural optimization. For simplicity and ease of initial implementation of the general procedure, truss structures under static load are considered, in view of maximizing stiffness, minimizing weight/cost and ensuring safety against failure. Along the lines of Ashby's material indices, a new design index is derived for trusses. This helps in choosing the most suitable material for any design of a truss. Using this index, both the design space and the material database are searched simultaneously using optimization algorithms. The important feature of our approach is that the formulated optimization problem is continuous, even though material selection is an inherently discrete problem.
Abstract:
Topology optimization methods have been shown to have extensive application in the design of microsystems. However, their utility in practical situations is restricted to predominantly planar configurations due to the limitations of most microfabrication techniques in realizing structures with arbitrary topologies in the direction perpendicular to the substrate. This study addresses the problem of synthesizing optimal topologies in the out-of-plane direction while obeying the constraints imposed by surface micromachining. A new formulation that achieves this by defining a design space that implicitly obeys the manufacturing constraints with a continuous design parameterization is presented in this paper. This is in contrast to including manufacturing cost in the objective function or constraints. The resulting solutions of the new formulation obtained with gradient-based optimization directly provide the photolithographic mask layouts. Two examples that illustrate the approach for the case of stiff structures are included.