841 results for Linearly constrained optimization
Abstract:
This paper addresses the unit commitment problem, considering not only the economic perspective but also the environmental one. We propose a bi-objective approach to handle the problem with conflicting profit and emission objectives. Numerical results based on the standard IEEE 30-bus test system illustrate the effectiveness of the proposed approach.
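As a rough illustration of the kind of bi-objective profit/emission formulation involved, one might write the following; all symbols (on/off variables u_{i,t}, outputs p_{i,t}, price λ_t, fuel cost C_i, start-up cost SU_i, emission function E_i, demand D_t) are generic assumptions and not the paper's exact model:

```latex
\begin{aligned}
\max_{u,p}\;& \sum_{t=1}^{T}\sum_{i=1}^{N}\Big[\lambda_t\,p_{i,t}-u_{i,t}\,C_i(p_{i,t})-SU_i\,u_{i,t}\,(1-u_{i,t-1})\Big],
\qquad
\min_{u,p}\; \sum_{t=1}^{T}\sum_{i=1}^{N}u_{i,t}\,E_i(p_{i,t}),\\
\text{s.t.}\;& \sum_{i=1}^{N}u_{i,t}\,p_{i,t}\ \ge\ D_t,\qquad
u_{i,t}\,P_i^{\min}\ \le\ p_{i,t}\ \le\ u_{i,t}\,P_i^{\max},\qquad u_{i,t}\in\{0,1\}.
\end{aligned}
```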
Abstract:
In this paper, a solution to a highly constrained and non-convex economic dispatch (ED) problem with a meta-heuristic technique named Sensing Cloud Optimization (SCO) is presented. The proposed meta-heuristic is based on a cloud of particles whose central point represents the objective function value, while the remaining particles act as sensors that "fill" the search space and "guide" the central particle so that it moves in the best direction. To demonstrate its performance, a case study with multi-fuel units and valve-point effects is presented.
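A purely schematic reading of the cloud-of-sensors idea is sketched below in Java; the sampling rule, step control and parameters are assumptions for illustration and do not reproduce the authors' SCO algorithm:

```java
import java.util.Random;
import java.util.function.Function;

// Illustrative sketch only: a central point surrounded by sensor points that
// probe the neighbourhood and pull the centre towards the best sampled direction.
public class CloudSearchSketch {
    public static double[] minimize(Function<double[], Double> objective,
                                    double[] centre, int sensors,
                                    double radius, int iterations) {
        Random rng = new Random(42);
        double bestValue = objective.apply(centre);
        for (int it = 0; it < iterations; it++) {
            double[] bestSensor = null;
            for (int s = 0; s < sensors; s++) {
                double[] probe = centre.clone();
                for (int d = 0; d < probe.length; d++) {
                    probe[d] += radius * (2.0 * rng.nextDouble() - 1.0); // random sensor around the centre
                }
                double value = objective.apply(probe);
                if (value < bestValue) {          // a sensor found a better region
                    bestValue = value;
                    bestSensor = probe;
                }
            }
            if (bestSensor != null) {
                centre = bestSensor;              // move the centre towards the best sensor
            } else {
                radius *= 0.5;                    // shrink the cloud if no sensor improved
            }
        }
        return centre;
    }

    public static void main(String[] args) {
        double[] x = minimize(p -> p[0] * p[0] + p[1] * p[1], new double[]{3, -2}, 20, 1.0, 200);
        System.out.printf("x = (%.4f, %.4f)%n", x[0], x[1]);
    }
}
```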
Abstract:
Penalty and Barrier methods are commonly used to solve constrained Nonlinear Optimization Problems. Such problems appear in areas such as engineering and are often characterised by the fact that the functions involved (objective and constraints) are non-smooth and/or their derivatives are not known, which means that derivative-based optimization methods cannot be used. A Java-based API was implemented, including only derivative-free optimization methods, to solve both constrained and unconstrained problems; it includes Penalty and Barrier methods. In this work a new penalty function, based on Fuzzy Logic, is presented. This function imposes a progressive penalization on solutions that violate the constraints: a low penalization when the violation of the constraints is small and a heavy penalization when the violation is large. The value of the penalization is not known beforehand; it is the outcome of a fuzzy inference engine. Numerical results comparing the proposed function with two of the classic penalty/barrier functions are presented. From these results one can conclude that the proposed penalty function, besides being very robust, also exhibits very good performance.
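The Java sketch below illustrates what such a progressive, fuzzy-style penalty might look like; the membership breakpoints and severity weights are assumptions made for this example and are not the paper's fuzzy inference engine:

```java
// Illustrative progressive penalty: small violations get a light penalty,
// large violations a heavy one. The membership breakpoints and weights are
// assumptions for this sketch, not the paper's fuzzy inference engine.
public class FuzzyPenaltySketch {

    // Triangular membership of the violation in the "low", "medium" and "high" sets.
    static double[] memberships(double v) {
        double low  = Math.max(0.0, 1.0 - v);                        // peaks at v = 0
        double med  = Math.max(0.0, 1.0 - Math.abs(v - 1.0));        // peaks at v = 1
        double high = Math.min(1.0, Math.max(0.0, v - 1.0));         // saturates for v >= 2
        return new double[]{low, med, high};
    }

    // Defuzzified penalty weight: each set votes for a different severity.
    static double penaltyWeight(double violation) {
        double[] m = memberships(violation);
        double[] severity = {1.0, 10.0, 100.0};   // light, moderate, heavy penalisation
        double num = 0.0, den = 0.0;
        for (int i = 0; i < m.length; i++) { num += m[i] * severity[i]; den += m[i]; }
        return den > 0 ? num / den : severity[2];
    }

    // Penalised objective for a single constraint g(x) <= 0.
    static double penalised(double f, double g) {
        double violation = Math.max(0.0, g);
        return f + penaltyWeight(violation) * violation * violation;
    }

    public static void main(String[] args) {
        System.out.println(penalised(2.0, -0.5)); // feasible: no penalty
        System.out.println(penalised(2.0, 0.3));  // small violation: light penalty
        System.out.println(penalised(2.0, 3.0));  // large violation: heavy penalty
    }
}
```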
Abstract:
Search optimization methods are needed to solve optimization problems in which the objective function and/or constraint functions may be non-differentiable or non-convex, or for which it may not be possible to determine their analytical expressions, either due to their complexity or their cost (monetary, computational, time, ...). Many optimization problems in engineering and other fields have these characteristics, because function values can result from experimental or simulation processes, can be modelled by functions with complex expressions or by noisy functions, and it is impossible or very difficult to calculate their derivatives. Direct search optimization methods use only function values and do not need derivatives or approximations of them. In this work we present a Java API that includes several derivative-free methods and algorithms to solve constrained and unconstrained optimization problems. Traditional API access, by installing it on the developer's and/or user's computer, and remote access to it using Web Services are both presented. Remote access to the API has the advantage of always providing access to its latest version. For users who simply want a tool to solve Nonlinear Optimization Problems and do not want to integrate these methods into applications, two applications were also developed: a standalone Java application and a Web-based application, both using the developed API.
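A minimal sketch of how client code might call a derivative-free method of this kind is given below; the class and method names are hypothetical, and the embedded coordinate search is only a stand-in for the methods offered by the API:

```java
import java.util.function.Function;

// Hypothetical client code for a derivative-free optimization API of this kind;
// the names are assumptions for illustration, not the actual API.
public class DirectSearchSketch {

    // A minimal coordinate-search method: probe each axis, keep improving points,
    // and shrink the step when no axis yields progress.
    static double[] coordinateSearch(Function<double[], Double> f, double[] x, double step, double tol) {
        double fx = f.apply(x);
        while (step > tol) {
            boolean improved = false;
            for (int d = 0; d < x.length; d++) {
                for (double sign : new double[]{+1.0, -1.0}) {
                    double[] trial = x.clone();
                    trial[d] += sign * step;
                    double ft = f.apply(trial);
                    if (ft < fx) { x = trial; fx = ft; improved = true; }
                }
            }
            if (!improved) step *= 0.5;   // no progress on any axis: refine the mesh
        }
        return x;
    }

    public static void main(String[] args) {
        // Objective known only through its values, as in simulation-based problems.
        Function<double[], Double> objective = p -> Math.pow(p[0] - 1, 2) + Math.pow(p[1] + 2, 2);
        double[] sol = coordinateSearch(objective, new double[]{0, 0}, 1.0, 1e-6);
        System.out.printf("minimum near (%.4f, %.4f)%n", sol[0], sol[1]);
    }
}
```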
Abstract:
Constrained nonlinear optimization problems can be solved using penalty or barrier functions. This strategy, based on solving unconstrained problems obtained from the original problem, has been shown to be effective, particularly when used with direct search methods. An alternative for solving such problems is the filters method. The filters method, introduced by Fletcher and Leyffer in 2002, has been widely used to solve problems of the type mentioned above. These methods use a strategy different from that of barrier or penalty functions: the latter define a new function that combines the objective function and the constraints, while the filters method treats the optimization problem as a bi-objective problem that minimizes the objective function and a function that aggregates the constraints. Motivated by the work of Audet and Dennis in 2004, which used the filters method with derivative-free algorithms, the authors developed works in which other direct search methods were used, combining their potential with the filters method. More recently, a new variant of these methods was presented, in which alternative ways of aggregating the constraints for the construction of the filters were proposed. This paper presents a variant of the filters method, more robust than the previous ones, implemented with a safeguard procedure in which the values of the objective function and of the constraints are interlinked and not treated completely independently.
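In the usual notation of filter methods, with h aggregating the constraint violations, the generic acceptance rule (not the specific variant proposed here) reads:

```latex
h(x) \;=\; \big\|\max\{0,\,g(x)\}\big\|,\qquad
x \ \text{is accepted by the filter}\ \mathcal{F}
\iff
\forall\,(h_j,f_j)\in\mathcal{F}:\ \ h(x) < h_j \ \ \text{or}\ \ f(x) < f_j .
```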
Abstract:
Nonlinear Optimization Problems are common in many engineering fields. Due to their characteristics, the objective function of some problems might not be differentiable, or its derivatives might have complex expressions. There are even cases where an analytical expression of the objective function cannot be determined, either due to its complexity or its cost (monetary, computational, time, ...). In these cases, nonlinear optimization methods must be used. An API including several methods and algorithms to solve constrained and unconstrained optimization problems was implemented. This API can be accessed not only in the traditional way, by installing it on the developer's and/or user's computer, but also remotely using Web Services. As long as there is a network connection to the server where the API is installed, applications always access the latest API version. A Web-based application using the proposed API was also developed; it is intended for users who do not want to integrate the methods into their own applications and simply want a tool to solve Nonlinear Optimization Problems.
Abstract:
We directly visualize the response of nematic liquid crystal drops of toroidal topology threaded in cellulosic fibers, suspended in air, to an AC electric field and at different temperatures over the N-I transition. This new liquid crystal system can exhibit non-trivial point defects, which can be energetically unstable against expanding into ring defects depending on the fiber constraining geometries. The director anchoring tangentially near the fiber surface and homeotropically at the air interface makes a hybrid shell distribution that in turn causes a ring disclination line around the main axis of the fiber at the center of the droplet. Upon application of an electric field, E, the disclination ring first expands and moves along the fiber main axis, followed by the appearance of a stable "spherical particle" object orbiting around the fiber at the center of the liquid crystal drop. The rotation speed of this particle was found to vary linearly with the applied voltage. This constrained liquid crystal geometry seems to meet the essential requirements in which soliton-like deformations can develop and exhibit stable orbiting in three dimensions upon application of an external electric field. On changing the temperature the system remains stable and allows the study of the defect evolution near the nematic-isotropic transition, showing qualitatively different behaviour on cooling and heating processes. The necklaces of such liquid crystal drops constitute excellent systems for the study of topological defects and their evolution and open new perspectives for application in microelectronics and photonics.
Abstract:
The demand response concept has been gaining importance, and the success of several recent implementations makes the benefits of this resource unquestionable. This happens in a power systems operation environment that also considers an intensive use of distributed generation. However, more adequate approaches and models are needed to address the aggregation of small-size consumers and producers, while taking these resources' goals into account. The present paper focuses on the management of demand response programs and distributed generation resources by a Virtual Power Player that aims to minimize its operation costs while taking consumption shifting constraints into account. The impact of the consumption shifting on the distributed generation resources schedule is also considered. The methodology is applied to three scenarios based on 218 consumers and 4 types of distributed generation, over a time frame of 96 periods.
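A generic sketch of the kind of cost minimization with consumption-shifting constraints described above could be written as follows; all symbols are illustrative assumptions rather than the paper's model, with L_{c,t} the baseline consumption of consumer c and ΔL_{c,t} the shifted amount:

```latex
\begin{aligned}
\min_{P^{DG},\,\Delta L}\;& \sum_{t=1}^{T}\Big[\sum_{g} c^{DG}_{g}\,P^{DG}_{g,t}
\;+\;\sum_{c} c^{DR}_{c}\,\big|\Delta L_{c,t}\big|\Big]\\
\text{s.t.}\;& \sum_{g} P^{DG}_{g,t} \;=\; \sum_{c}\big(L_{c,t}+\Delta L_{c,t}\big)\quad\forall t,
\qquad
\sum_{t=1}^{T}\Delta L_{c,t}\;=\;0\quad\forall c,
\end{aligned}
```

where the last constraint expresses that consumption shifted out of one period must be recovered within the horizon.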
Abstract:
The authors propose a mathematical model to minimize the total project cost when there are multiple resources constrained by maximum availability. They assume that the resources are renewable and that the activities can use any subset of resources, requiring any quantity from a bounded real interval. The stochastic nature is introduced by means of a stochastic work content, defined per resource within an activity and following a known distribution; the total cost is the sum of the resource allocation cost and a tardiness cost or earliness bonus, incurred when the project finishes after or before the due date, respectively. The model was computationally implemented relying on an interchange of two global optimization metaheuristics: the electromagnetism-like mechanism and evolutionary strategies. Two experiments were conducted, testing the implementation on projects with single and multiple resources, with and without maximum availability constraints. The collected results show good behavior in general and provide a tool to further assist project managers' decision making in the planning phase.
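One possible way to write the total cost described above, with all symbols assumed for illustration, is:

```latex
C \;=\; \sum_{j\in A}\sum_{r\in R} c_{r}\,W_{j,r}
\;+\; c_{T}\,\max\{0,\;C_{\max}-\delta\}
\;-\; b_{E}\,\max\{0,\;\delta-C_{\max}\},
```

where W_{j,r} is the (stochastic) work content of resource r in activity j, c_r the unit allocation cost, C_max the project completion time, δ the due date, c_T the unit tardiness cost and b_E the unit earliness bonus.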
Abstract:
The goal of the present work was to assess the feasibility of using a pseudo-inverse and null-space optimization approach in the modeling of shoulder biomechanics. The method was applied to a simplified musculoskeletal shoulder model. The mechanical system consisted of the arm, and the external forces were the arm weight, 6 scapulo-humeral muscles and the reaction at the glenohumeral joint, which was considered a spherical joint. Muscle wrapping was considered around the humeral head, which was assumed spherical. The dynamical equations were solved with a Lagrangian approach. The mathematical redundancy of the mechanical system was resolved in two steps: a pseudo-inverse optimization to minimize the square of the muscle stress, and a null-space optimization to restrict the muscle forces to physiological limits. Several movements were simulated. The mathematical and numerical aspects of the constrained redundancy problem were efficiently solved by the proposed method. The prediction of muscle moment arms was consistent with cadaveric measurements, and the joint reaction force was consistent with in vivo measurements. This preliminary work demonstrated that the developed algorithm has great potential for more complex musculoskeletal modeling of the shoulder joint. In particular, it could be further applied to a non-spherical joint model, allowing for the natural translation of the humeral head in the glenoid fossa.
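In simplified generic notation, with A the matrix of muscle moment arms, f the muscle forces and b the required joint torques, the two-step procedure can be sketched as:

```latex
A\,f = b,\qquad
f_{0} \;=\; A^{+}b \;=\; \arg\min_{A f = b}\ \|f\|^{2},\qquad
f \;=\; f_{0} + \big(I - A^{+}A\big)\,z,
```

where the null-space term (I - A⁺A)z leaves the joint equilibrium Af = b unchanged, so z can be adjusted until every muscle force stays within its physiological bounds 0 ≤ f_i ≤ f_i^max; weighting the minimized norm by the physiological cross-sectional areas gives the stress-minimizing variant used here.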
Abstract:
BACKGROUND: Iterative reconstruction (IR) techniques reduce image noise in multidetector computed tomography (MDCT) imaging. They can therefore be used to reduce radiation dose while maintaining diagnostic image quality nearly constant. However, CT manufacturers offer several strength levels of IR to choose from. PURPOSE: To determine the optimal strength level of IR in low-dose MDCT of the cervical spine. MATERIAL AND METHODS: Thirty consecutive patients investigated by low-dose cervical spine MDCT were prospectively studied. Raw data were reconstructed using filtered back-projection and sinogram-affirmed IR (SAFIRE, strength levels 1 to 5) techniques. Image noise, signal-to-noise ratio (SNR), and contrast-to-noise ratio (CNR) were measured at C3-C4 and C6-C7 levels. Two radiologists independently and blindly evaluated various anatomical structures (both dense and soft tissues) using a 4-point scale. They also rated the overall diagnostic image quality using a 10-point scale. RESULTS: As IR strength levels increased, image noise decreased linearly, while SNR and CNR both increased linearly at C3-C4 and C6-C7 levels (P < 0.001). For the intervertebral discs, the content of neural foramina and dural sac, and for the ligaments, subjective image quality scores increased linearly with increasing IR strength level (P ≤ 0.03). Conversely, for the soft tissues and trabecular bone, the scores decreased linearly with increasing IR strength level (P < 0.001). Finally, the overall diagnostic image quality scores increased linearly with increasing IR strength level (P < 0.001). CONCLUSION: The optimal strength level of IR in low-dose cervical spine MDCT depends on the anatomical structure to be analyzed. For the intervertebral discs and the content of neural foramina, high strength levels of IR are recommended.
Abstract:
Canonical correspondence analysis and redundancy analysis are two methods of constrained ordination regularly used in the analysis of ecological data when several response variables (for example, species abundances) are related linearly to several explanatory variables (for example, environmental variables, spatial positions of samples). In this report I demonstrate the advantages of the fuzzy coding of explanatory variables: first, nonlinear relationships can be diagnosed; second, more variance in the responses can be explained; and third, in the presence of categorical explanatory variables (for example, years, regions) the interpretation of the resulting triplot ordination is unified because all explanatory variables are measured at a categorical level.
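A minimal Java sketch of one common three-category fuzzy coding scheme for a continuous explanatory variable is given below; the hinge points are illustrative assumptions:

```java
// Minimal sketch of three-category fuzzy coding: each value of a continuous
// explanatory variable is replaced by membership degrees in "low", "medium"
// and "high" categories that sum to one. Hinge points are assumptions.
public class FuzzyCodingSketch {

    static double[] fuzzyCode(double x, double low, double mid, double high) {
        if (x <= low)  return new double[]{1, 0, 0};
        if (x >= high) return new double[]{0, 0, 1};
        if (x <= mid) {
            double m = (x - low) / (mid - low);
            return new double[]{1 - m, m, 0};     // between the "low" and "medium" hinges
        }
        double m = (x - mid) / (high - mid);
        return new double[]{0, 1 - m, m};         // between the "medium" and "high" hinges
    }

    public static void main(String[] args) {
        double[] coded = fuzzyCode(12.5, 5.0, 15.0, 25.0);
        System.out.printf("low=%.2f medium=%.2f high=%.2f%n", coded[0], coded[1], coded[2]);
    }
}
```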
Abstract:
The purpose of this study was to simulate and optimize an integrated gasification combined cycle (IGCC) for power generation and hydrogen (H2) production using low-grade Thar lignite coal and cotton stalk. Lignite coal is rich in moisture and ash; the idea of adding cotton stalk is to increase the mass of combustible material per mass of feed used in the process, to reduce coal consumption, and to use cotton stalk efficiently in the IGCC process. Aspen Plus software is used to simulate the process with different mass ratios of coal to cotton stalk; for the optimization, process efficiencies, net power generation, H2 production, etc. are considered, while hazardous environmental emissions are kept to an acceptable level. With the addition of cotton stalk to the feed, process efficiencies started to decline along with the net power production. H2 production initially increased, but after 40% cotton stalk addition it also started to decline. The addition of cotton stalk also negatively affects hazardous emissions, and the mass of emissions per unit of net power production increases linearly with the addition of cotton stalk to the feed mixture. In summary, the overall effect of adding cotton stalk appears to be negative, and it becomes more pronounced beyond 40% addition. It is therefore concluded that, to obtain maximum process efficiencies and high production, a small amount of cotton stalk in the feed is preferable, with the maximum level of addition estimated at 40%. The gasification temperature should be kept low, around 1140 °C, and the preferred technique for the studied feed in IGCC is a fluidized-bed gasifier (ash in dry form) rather than an ash-slagging gasifier.
Abstract:
The last decade has shown that the global paper industry needs new processes and products in order to reassert its position. As the paper markets in Western Europe and North America have stabilized, competition has tightened. Along with the development of more cost-effective processes and products, new process design methods are also required to break the old molds and create new ideas. This thesis discusses the development of a process design methodology based on simulation and optimization methods. A bi-level optimization problem and a solution procedure for it are formulated and illustrated. Computational models and simulation are used to illustrate the phenomena inside a real process, and mathematical optimization is exploited to find the best process structures and control principles for the process. Dynamic process models are used inside the bi-level optimization problem, which is assumed to be dynamic and multiobjective due to the nature of papermaking processes. The numerical experiments show that the bi-level optimization approach is useful for different kinds of problems related to process design and optimization. Here, the design methodology is applied to a constrained process area of a papermaking line. However, the same methodology is applicable to all types of industrial processes, e.g., the design of biorefiners, because it is fully general and can be easily modified.
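In its generic (static, single-objective) form, a bi-level design problem of this kind can be sketched as follows, where x would collect the upper-level design decisions (process structure) and y the lower-level operational or control decisions evaluated through the dynamic process simulator; the paper's own formulation is dynamic and multiobjective:

```latex
\min_{x\in X}\; F\big(x,\,y^{*}(x)\big)
\qquad\text{s.t.}\qquad
y^{*}(x)\;\in\;\arg\min_{y\in Y(x)}\; f(x,y).
```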
Abstract:
This work presents a formulation of contact with friction between elastic bodies. This is a nonlinear problem due to the unilateral constraints (preventing inter-penetration of the bodies) and friction. The solution of this problem can be found using optimization concepts, modelling the problem as a constrained minimization problem. The Finite Element Method is used to construct the approximation spaces. The minimization problem has the total potential energy of the elastic bodies as the objective function; the non-penetration conditions are represented by inequality constraints, and equality constraints are used to deal with the friction. Due to the presence of two friction conditions (stick and slip), specific equality constraints are present or not according to the current condition. Since the Coulomb friction condition depends on the normal and tangential contact stresses related to the constraints of the problem, a condition-dependent constrained minimization problem is devised. An Augmented Lagrangian Method for constrained minimization is employed to solve this problem. When applied to a contact problem, this method produces Lagrange multipliers that have the physical meaning of contact forces, which makes it possible to check the friction condition at each iteration. These concepts make it possible to devise a computational scheme that leads to good numerical results.
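In generic notation (not the paper's exact discrete formulation), the normal-contact part of such an augmented Lagrangian, with total potential energy Π(u), gap constraint g(u) ≥ 0, multiplier λ playing the role of the contact force and penalty parameter r > 0, can be sketched as:

```latex
L_{A}(u,\lambda) \;=\; \Pi(u)
\;+\; \frac{1}{2r}\Big[\max\{0,\ \lambda - r\,g(u)\}^{2} - \lambda^{2}\Big],
\qquad
\lambda^{k+1} \;=\; \max\{0,\ \lambda^{k} - r\,g(u^{k})\}.
```

At convergence the multiplier approximates the normal contact force, which is what allows the friction condition to be checked against it at each iteration.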