949 results for optimization, heuristic, solver, operations research
Abstract:
After an aggregated problem has been solved, it is often desirable to estimate the accuracy loss that results from having solved a simpler problem than the original one. One way of measuring this loss is the difference in objective function values. To bound this difference, Zipkin (Operations Research 1980;28:406) assumed that a simple (knapsack-type) localization of an original optimal solution is known. Various extensions of Zipkin's bound have since been proposed, but under the same assumption. Here a method is proposed for computing bounds for variable aggregation in convex problems, based on a general localization of the original solution. For some classes of the original problem it is shown how to construct the localization. Examples illustrate the main constructions, and a small numerical study is presented.
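The accuracy loss the abstract describes can be made concrete on a toy problem. The sketch below (not Zipkin's bound itself; the problem and data are illustrative assumptions) solves a small separable convex program exactly and then with all variables aggregated into one, and measures the loss as the difference in objective values:

```python
# Toy convex problem: minimize sum(c_i * x_i**2) subject to sum(x_i) = 1,
# x_i >= 0. All data here is hypothetical illustration, not from the paper.

def solve_original(c):
    """Closed-form optimum: Lagrange conditions give x_i proportional to 1/c_i."""
    total = sum(1.0 / ci for ci in c)
    return [(1.0 / ci) / total for ci in c]

def solve_aggregated(c):
    """Aggregated problem: one aggregate variable, i.e. all x_i forced equal."""
    n = len(c)
    return [1.0 / n] * n

def objective(c, x):
    return sum(ci * xi * xi for ci, xi in zip(c, x))

c = [1.0, 2.0, 4.0]
f_opt = objective(c, solve_original(c))    # optimal value of the original
f_agg = objective(c, solve_aggregated(c))  # value of the disaggregated solution
loss = f_agg - f_opt                       # accuracy loss, >= 0 by optimality
```

A bound of the kind discussed in the abstract would estimate `loss` without knowing `f_opt`, using only a localization of the optimal solution.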
Abstract:
A comparative study of aggregation error bounds for the generalized transportation problem is presented. A priori and a posteriori error bounds were derived, and a computational study was performed to (a) test the correlation between the a priori bound, the a posteriori bound, and the actual error and (b) quantify how far the error bounds deviate from the actual error. Based on the results, we conclude that calculating the a priori error bound is a useful strategy for selecting the appropriate aggregation level, while the a posteriori error bound provides a good quantitative measure of the actual error.
Abstract:
The capacitated p-median problem (CPMP) seeks the optimal location of p facilities, considering distances and the capacity of the service to be given by each median. In this paper we present a column generation approach to the CPMP. The restricted master problem optimizes the covering of 1-median clusters subject to the capacity constraints, and new columns are generated by solving knapsack subproblems. The Lagrangean/surrogate relaxation has recently been used to accelerate subgradient-like methods. In this work, the Lagrangean/surrogate relaxation is identified directly from the master problem dual and provides new bounds and new productive columns through a modified knapsack subproblem. The overall column generation process is accelerated, even when multiple pricing is used. Computational tests are presented using instances built from real data from the city of São José dos Campos.
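The knapsack pricing step in such a column generation scheme can be sketched as follows (a hedged illustration, not the paper's modified subproblem: the dual prices and demands are made-up data). Each candidate column selects customers whose total demand fits the median's capacity while maximizing the sum of their dual values:

```python
def knapsack(profits, weights, capacity):
    """0/1 knapsack by dynamic programming over capacities; returns best profit."""
    best = [0.0] * (capacity + 1)
    for p, w in zip(profits, weights):
        for cap in range(capacity, w - 1, -1):  # reverse scan: each item used once
            best[cap] = max(best[cap], best[cap - w] + p)
    return best[capacity]

duals = [6.0, 10.0, 12.0]  # hypothetical dual prices of three customers
demands = [1, 2, 3]        # their demands
capacity = 5               # the median's capacity
best_dual_value = knapsack(duals, demands, capacity)
```

A column whose total dual value exceeds its cost has negative reduced cost and enters the restricted master problem.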
Abstract:
As the number of simulation experiments increases, the validation and verification of these models demand special attention from simulation practitioners. A review of the current scientific literature shows that the descriptions of operational validation presented in many papers disagree both on the importance assigned to this process and on the techniques applied, whether subjective or objective. To guide professionals, researchers, and students in simulation, this article compiles statistical techniques for the operational validation of discrete simulation models into a practical guide. The guide's applicability was evaluated on two case studies representing manufacturing cells, one from the automobile industry and the other from a Brazilian tech company. For each application, the guide prescribed distinct steps, owing to the different characteristics of the analyzed distributions. © 2011 Brazilian Operations Research Society.
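One objective technique such a guide typically includes is comparing real-system and simulation-output means through a confidence interval. A minimal sketch, with hypothetical data (for small samples a t critical value should replace the 1.96 normal quantile used here):

```python
import statistics as st

def mean_diff_ci(real, sim, z=1.96):
    """Approximate 95% CI for the difference between the two sample means."""
    d = st.mean(real) - st.mean(sim)
    se = (st.variance(real) / len(real) + st.variance(sim) / len(sim)) ** 0.5
    return d - z * se, d + z * se

real = [10.2, 9.8, 10.5, 10.1, 9.9, 10.3]  # measured cycle times (illustrative)
sim = [10.0, 10.4, 9.7, 10.2, 10.1, 9.9]   # simulated cycle times (illustrative)
lo, hi = mean_diff_ci(real, sim)
validated = lo <= 0.0 <= hi  # zero inside the CI: no evidence of invalidity
```

If zero lies outside the interval, the model's output differs significantly from the real system and operational validity is questioned.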
Abstract:
This paper proposes a new strategy to reduce the combinatorial search space of a mixed integer linear programming (MILP) problem. The construction phase of a greedy randomized adaptive search procedure (GRASP-CP) is employed to reduce the domain of the integer variables of the transportation model of the transmission expansion planning (TM-TEP) problem. This problem is a MILP and is very difficult to solve, especially for large-scale systems. The branch and bound (BB) algorithm is used to solve the problem in both the full and the reduced search space. The proposed method may be useful for reducing the search space of MILP problems for which a fast heuristic algorithm is available to find locally optimal solutions. Results obtained on real test systems show the efficiency of the proposed method. © 2012 Springer-Verlag.
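The GRASP construction phase mentioned above can be sketched generically (a hedged illustration on a toy set-cover instance, not the TM-TEP model): at each step a restricted candidate list (RCL) of near-greedy choices is built and one is picked at random, so repeated runs yield diverse good solutions whose support can be used to restrict the integer-variable domains before branch and bound:

```python
import random

def grasp_construct(universe, sets, alpha=0.3, rng=random.Random(1)):
    """One GRASP construction pass: greedy gain, randomized pick from the RCL."""
    uncovered, solution = set(universe), []
    while uncovered:
        gains = {i: len(s & uncovered) for i, s in sets.items() if s & uncovered}
        g_max, g_min = max(gains.values()), min(gains.values())
        threshold = g_max - alpha * (g_max - g_min)  # near-greedy cutoff
        rcl = [i for i, g in gains.items() if g >= threshold]
        pick = rng.choice(rcl)
        solution.append(pick)
        uncovered -= sets[pick]
    return solution

universe = range(6)
sets = {0: {0, 1, 2}, 1: {2, 3}, 2: {3, 4, 5}, 3: {0, 5}}  # illustrative data
cover = grasp_construct(universe, sets)
```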
Abstract:
In this paper we present a mixed integer model that integrates lot sizing and lot scheduling decisions for the production planning of a soft drink company. The main contribution of the paper is a model that differs from others in the literature in the constraints related to the scheduling decisions. The proposed strategy is compared to other strategies presented in the literature.
Abstract:
The use of QoS parameters to evaluate the quality of service in a mesh network is essential, especially when providing multimedia services. This paper proposes an algorithm for planning wireless mesh networks so as to satisfy a set of QoS parameters, given a set of test points (TPs) and potential access points (APs). Examples of QoS parameters include the probability of packet loss and the mean delay in responding to a request. The proposed algorithm uses a mathematical programming model to determine an adequate topology for the network and Monte Carlo simulation to verify whether the QoS parameters are satisfied. The results obtained show that the proposed algorithm is able to find satisfactory solutions.
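The Monte Carlo verification step can be sketched as follows (a minimal illustration, assuming independent per-hop losses; the loss probabilities and QoS threshold are hypothetical, not from the paper):

```python
import random

def simulate_loss(link_loss_probs, trials=100_000, rng=random.Random(42)):
    """Estimate end-to-end packet-loss probability over a route by simulation."""
    lost = 0
    for _ in range(trials):
        # A packet is lost if any hop on the route drops it.
        if any(rng.random() < p for p in link_loss_probs):
            lost += 1
    return lost / trials

route = [0.01, 0.02, 0.015]  # illustrative per-hop loss probabilities
analytic = 1 - (1 - 0.01) * (1 - 0.02) * (1 - 0.015)
estimate = simulate_loss(route)
meets_qos = estimate <= 0.05  # hypothetical QoS target
```

In the planning loop, a candidate topology whose estimate violates the QoS target would be rejected and the mathematical programming model re-solved with added cuts.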
Abstract:
Graduate Program in Mathematics - IBILCE
Reformulations and Lagrangian relaxation for the multi-plant lot-sizing problem
Abstract:
Graduate Program in Mathematics - IBILCE
Abstract:
Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)
Abstract:
This work quantifies, according to the ABEPRO areas, the number of course conclusion works (TCCs) and the course hours (CH) of the Production Engineering program at UNESP Guaratinguetá. Based on this quantification, significant discrepancies were found between TCCs and CH. Quality and Logistics show the largest discrepancies, with 26% and 17% of the TCCs, respectively, while each of these areas accounts for only 6% of the course hours. Data on researchers registered with the National Council for Scientific and Technological Development (CNPq) in the areas of Production Engineering were also used: in Quality and Logistics, researchers account for 8% and 5%, respectively, whereas Operations Research has the most prominent share of researchers, with 37%. This overview can help the department organize the curriculum, develop teaching projects, methods, and means of education, and inform future hiring.
Abstract:
This paper addresses the problem of survivable lightpath provisioning in wavelength-division-multiplexing (WDM) mesh networks, taking into consideration optical-layer protection and some realistic optical signal quality constraints. The investigated networks use sparsely placed optical–electrical–optical (O/E/O) modules for regeneration and wavelength conversion. Given a fixed network topology with a number of sparsely placed O/E/O modules and a set of connection requests, a pair of link-disjoint lightpaths is established for each connection. Due to physical impairments and wavelength continuity, both the working and protection lightpaths need to be regenerated at some intermediate nodes to overcome signal quality degradation and wavelength contention. In the present paper, resource-efficient provisioning solutions are achieved with the objective of maximizing resource sharing. The authors propose a resource-sharing scheme that supports three kinds of resource-sharing scenarios: a conventional wavelength-link sharing scenario, which shares wavelength links between protection lightpaths, and two new scenarios, which share O/E/O modules between protection lightpaths and between working and protection lightpaths. An integer linear programming (ILP)-based solution approach is used to find optimal solutions. The authors also propose a local optimization heuristic approach and a tabu search heuristic approach to solve this problem for real-world, large mesh networks. Numerical results show that the proposed solution approaches work well under a variety of network settings and achieve high resource-sharing rates (over 60% for O/E/O modules and over 30% for wavelength links), which translate into great savings in network costs.
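A tabu search skeleton of the kind mentioned above can be sketched on a toy problem (a hedged illustration only: the actual lightpath-provisioning neighborhood is far richer, and all data below is made up). The essential mechanics are the same: evaluate one-flip neighbors, forbid recently used moves for a few iterations, allow a tabu move only if it improves on the best solution found (aspiration):

```python
def tabu_search(values, weights, capacity, iters=200, tenure=3):
    """Tabu search on a toy selection problem: flip one item per move."""
    n = len(values)
    x = [0] * n
    best_x, best_val = x[:], 0
    tabu = {}  # item index -> iteration at which it stops being tabu
    for it in range(iters):
        best_move, best_move_val = None, None
        for i in range(n):
            y = x[:]
            y[i] ^= 1
            if sum(w for w, b in zip(weights, y) if b) > capacity:
                continue  # infeasible neighbor
            val = sum(v for v, b in zip(values, y) if b)
            if tabu.get(i, -1) > it and val <= best_val:
                continue  # tabu and no aspiration
            if best_move is None or val > best_move_val:
                best_move, best_move_val = i, val
        if best_move is None:
            break  # all neighbors tabu or infeasible
        x[best_move] ^= 1
        tabu[best_move] = it + tenure
        if best_move_val > best_val:
            best_x, best_val = x[:], best_move_val
    return best_x, best_val

sol, val = tabu_search([6, 10, 12, 7], [1, 2, 3, 2], capacity=5)
```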
Abstract:
INVESTIGATION INTO CURRENT EFFICIENCY FOR PULSE ELECTROCHEMICAL MACHINING OF NICKEL ALLOY
Yu Zhang, M.S., University of Nebraska, 2010. Adviser: Kamlakar P. Rajurkar
Electrochemical machining (ECM) is a nontraditional manufacturing process that can machine difficult-to-cut materials. In ECM, material is removed by controlled electrochemical dissolution of an anodic workpiece in an electrochemical cell. ECM has extensive applications in the automotive, petroleum, aerospace, textile, medical, and electronics industries. Improving current efficiency is a challenging task for any electro-physical or electrochemical machining process. Current efficiency is defined as the ratio of the observed amount of metal dissolved to the theoretical amount predicted from Faraday's law, for the same specified conditions of electrochemical equivalent, current, etc. [1]. In macro ECM, electrolyte conductivity greatly influences the current efficiency of the process. Since there is a limit to how far the conductivity of the electrolyte can be enhanced, a process innovation is needed for further improvement in current efficiency in ECM. Pulse electrochemical machining (PECM) is one such approach, in which electrolyte conductivity is improved by electrolyte flushing during the pulse off-time. The aim of this research is to study the influence of major factors on current efficiency in a macro-scale pulse electrochemical machining process and to develop a linear regression model for predicting the current efficiency of the process. An in-house designed electrochemical cell was used for machining a nickel alloy (ASTM B435) by PECM. The effects of current density, type of electrolyte, and electrolyte flow rate on current efficiency were studied under different experimental conditions. Results indicated that current efficiency depends on the electrolyte, the electrolyte flow rate, and the current density.
Linear regression models of current efficiency were compared with twenty new data points, graphically and quantitatively. The models developed were close enough to the actual results to be considered reliable. In addition, an attempt was made in this work to consider factors in PECM that had not been investigated in earlier works, by simulating the process in the COMSOL software. However, the results of this attempt were not substantially different from earlier reported studies.
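The current-efficiency definition above is a direct application of Faraday's law and can be worked through numerically (the machining values below are illustrative assumptions, not measurements from the study):

```python
FARADAY = 96485.0  # Faraday constant, C/mol

def theoretical_mass(current_a, time_s, molar_mass, valence):
    """Mass (g) predicted by Faraday's law: m = I * t * M / (n * F)."""
    return current_a * time_s * molar_mass / (valence * FARADAY)

# Nickel: molar mass M = 58.69 g/mol, typical dissolution valence n = 2.
m_theory = theoretical_mass(current_a=10.0, time_s=60.0,
                            molar_mass=58.69, valence=2)
m_observed = 0.155                  # hypothetical measured mass removed (g)
efficiency = m_observed / m_theory  # current efficiency as a fraction
```

With 10 A applied for 60 s, Faraday's law predicts roughly 0.18 g of nickel dissolved; the regression models in the study predict how the measured fraction of this varies with electrolyte, flow rate, and current density.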
Abstract:
PREPARATION OF COATED MICROTOOLS FOR ELECTROCHEMICAL MACHINING APPLICATIONS
Ajaya K. Swain, M.S., University of Nebraska, 2010. Advisor: K.P. Rajurkar
Coated tools have improved the performance of both traditional and nontraditional machining processes, resulting in higher material removal, better surface finish, and increased wear resistance. However, the performance of coated tools in micromachining has not yet been adequately studied. One possible reason is the difficulty of preparing coated microtools. Besides the technical requirements, the economic and environmental aspects of the material and the coating technique also play a significant role in coating microtools; this restricts the range of coating materials and the type of coating process. Handling is another major issue for microtools, purely because of their miniature size. This research focuses on the preparation of coated microtools for pulse electrochemical machining by electrodeposition. The motivation derives from the fact that, although there are reports of improved machining using insulating coatings on ECM tools, particularly in ECM drilling operations, little literature was found on the use of metallic coating materials in other ECM process types. An ideal ECM tool should be a good thermal and electrical conductor, corrosion resistant, electrochemically stable, and stiff enough to withstand the electrolyte pressure. Tungsten has almost all the properties desired in an ECM tool material except electrochemical stability: it can be oxidized during machining, resulting in poor machining quality. The electrochemical stability of a tungsten ECM tool can be improved by electroplating it with nickel, which has superior electrochemical resistance. Moreover, a tungsten tool can be coated in situ, reducing tool handling and breakage frequency.
The tungsten microtool was electroplated with nickel using both direct and pulse current. The effect of the various input parameters on the coating characteristics was studied, and the performance of the coated microtool was evaluated in pulse ECM. The coated tool removed more material (about 28%) than the uncoated tool under similar conditions and was more electrochemically stable. It was concluded that a nickel-coated tungsten microtool can improve pulse ECM performance.
Abstract:
Real Options Analysis (ROA) has become a complementary tool for engineering economics. It has become popular due to the limitations of conventional engineering valuation methods, specifically their assumptions about uncertainty. Industry seeks to quantify the value of engineering investments under uncertainty. One problem with conventional tools is that they may assume cash flows are certain, thereby dismissing the possibility that future values are uncertain. Real options analysis addresses this problem but has been used sparingly by practitioners. This paper presents a new model, the Beta Distribution Real Options Pricing Model (BDROP), which addresses these limitations and can be easily used by practitioners. The positive attributes of this new model include unconstrained market assumptions, robust representation of the underlying asset's uncertainty, and an uncomplicated methodology. The research demonstrates the use of the model to evaluate the use of automation for inventory control.
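The general idea of valuing an option on a beta-distributed underlying can be sketched by simulation. This is emphatically not the paper's BDROP model, just an illustrative Monte Carlo valuation in which the uncertain project value follows a scaled beta distribution, as the abstract suggests; every parameter below is hypothetical:

```python
import random

def beta_option_value(alpha, beta, low, high, cost, rate, years,
                      trials=100_000, rng=random.Random(7)):
    """Discounted expected payoff of the option to invest only if worthwhile."""
    discount = (1 + rate) ** -years
    payoff_sum = 0.0
    for _ in range(trials):
        # Project value drawn from a beta distribution scaled to [low, high].
        v = low + (high - low) * rng.betavariate(alpha, beta)
        payoff_sum += max(v - cost, 0.0)  # exercise only when value exceeds cost
    return discount * payoff_sum / trials

value = beta_option_value(alpha=2.0, beta=3.0, low=50.0, high=200.0,
                          cost=120.0, rate=0.08, years=2)
```

The bounded support and flexible shape of the beta distribution are what make it attractive for representing an asset's uncertainty when market data is scarce.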