998 results for "pesquisa operacional" (operational research)


Abstract:

Graduate Program in Electrical Engineering - FEIS

Abstract:

Graduate Program in Agronomy (Irrigation and Drainage) - FCA

Abstract:

Operational Research (OR) offers tools, based on mathematical models, that describe real-world situations faced by companies and governments. These models make it possible to maximize profits or minimize costs through Linear Programming (LP), regarded as one of the greatest discoveries of twentieth-century applied mathematics. Its applicability is vast and its computational implementation is simple: software such as Excel can find solutions to problems involving a large number of variables, thereby supporting decision makers. LP has figured in the work of several Nobel laureates in Economics, among them Leonid Kantorovich, Leonid Hurwicz, Tjalling Koopmans, Kenneth J. Arrow, Robert Dorfman, Paul Samuelson, and Robert Solow. Queueing Theory, the OR tool used in this work, involves probability distributions and makes it possible to study the arrival and service of customers given a certain number of available service channels. The case under analysis is the express-checkout queue of the Oba Hortifruti supermarket in the city of Indaiatuba, São Paulo. The queue has M/M/1 characteristics: the first M denotes that customer arrivals follow a Poisson distribution, the second M denotes that service times follow an Exponential distribution, and the 1 means that there is a single service channel. Applying this tool suggests an optimization of the service so that it becomes stable (qualitative objective), increasing customer satisfaction with the service and potentially raising the profit margin of the establishment under study (quantitative objective).
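
As a hedged illustration (not part of the study itself), the sketch below computes the standard steady-state measures of an M/M/1 queue; the arrival rate lam and service rate mu are hypothetical values chosen only for the example.

```python
# Minimal sketch: steady-state measures of an M/M/1 queue.
# lam (arrivals/hour) and mu (services/hour) are hypothetical example values.

def mm1_measures(lam: float, mu: float) -> dict:
    """Return the classic M/M/1 steady-state performance measures."""
    if lam >= mu:
        raise ValueError("Queue is unstable: requires lam < mu (rho < 1).")
    rho = lam / mu                 # server utilization
    L = rho / (1 - rho)            # expected number of customers in the system
    Lq = rho**2 / (1 - rho)        # expected number waiting in the queue
    W = 1 / (mu - lam)             # expected time in the system (hours)
    Wq = rho / (mu - lam)          # expected waiting time in the queue (hours)
    return {"rho": rho, "L": L, "Lq": Lq, "W": W, "Wq": Wq}

if __name__ == "__main__":
    # e.g. 30 customers/hour arriving at one cashier who serves 40/hour
    for name, val in mm1_measures(lam=30.0, mu=40.0).items():
        print(f"{name} = {val:.3f}")
```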

Abstract:

Operational Research (OR) is an established discipline for business competitiveness, and the power of today's algorithms and spreadsheets allows people to apply it at lower cost and with less complexity. However, spreadsheets combined with OR techniques are still little explored to their full potential when applied to real problems. To make better use of them, this article employs Microsoft Office Excel to solve a practical optimization and decision-making problem in machining subcontracting. Although it is a frequent problem, optimizing the mix of in-house production versus outsourcing is not easy to solve: given the restrictions and the resources available, it usually requires investment in specialized software. This research therefore develops a software tool called SOSU (Optimization Software for Machining Subcontracting). SOSU should present the best mix of internal and subcontracted machining for n types of parts that, over a given period of time t, maximizes capacity utilization and meets all demand at the lowest possible cost. The methodology follows the bibliographic references, and it is assumed that the necessary data come from mathematical modeling of the production areas and from an already structured cost system. The nature of the problem justifies the application of Linear Programming (LP); Visual Basic for Applications (VBA) is used for the computational implementation and the user interface, and the Solver add-in is used to find the solution. The analysis of the experiments shows that SOSU optimizes resources and improves the decision-making process; besides being easy to operate, it can be implemented or quickly adapted without the need for large investments.
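
A minimal sketch of the underlying make-or-subcontract LP is given below. It is not the SOSU tool itself (which uses Excel, VBA and Solver); SciPy's HiGHS-based linprog stands in for Solver, and all costs, capacities and demands are hypothetical.

```python
# Minimal sketch of the make-or-subcontract LP with hypothetical data.
# Decision variables: x[i] parts machined in-house, y[i] parts subcontracted.
import numpy as np
from scipy.optimize import linprog

cost_in = np.array([10.0, 14.0, 8.0])     # in-house unit cost per part type
cost_out = np.array([13.0, 15.0, 12.0])   # subcontracted unit cost
hours = np.array([0.5, 0.8, 0.3])         # machine-hours per in-house part
capacity = 400.0                          # machine-hours available in period t
demand = np.array([300.0, 250.0, 500.0])  # demand per part type

n = len(demand)
c = np.concatenate([cost_in, cost_out])   # objective: total production cost

# Capacity: sum_i hours[i] * x[i] <= capacity  (subcontracting uses no hours)
A_ub = np.concatenate([hours, np.zeros(n)]).reshape(1, -1)
b_ub = [capacity]

# Demand: x[i] + y[i] = demand[i]
A_eq = np.hstack([np.eye(n), np.eye(n)])
b_eq = demand

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, method="highs")
x, y = res.x[:n], res.x[n:]
print("in-house:", x.round(1), "subcontract:", y.round(1), "cost:", round(res.fun, 2))
```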

Abstract:

This work aimed to develop an optimization methodology for reservoir sizing in rainwater harvesting systems in order to increase the economic viability of projects in this area. To this end, concepts from Operations Research were used to formulate mathematical programming problems that minimize the life-cycle cost and maximize efficiency. The results obtained with different sizing methods are presented through a case study, emphasizing the importance of tools that provide a more accurate analysis and can significantly increase the economic viability of rainwater harvesting systems.
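
As a hedged illustration of how candidate reservoir volumes can be compared, the sketch below runs a daily mass-balance (yield-after-spillage) simulation; the inflow and demand series are synthetic and the model is a common textbook formulation, not necessarily the one used in the paper.

```python
# Minimal sketch: evaluate rainwater-tank volumes by daily mass balance
# (yield-after-spillage convention); inflow/demand series are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
inflow = rng.gamma(shape=0.5, scale=400.0, size=365)  # daily roof runoff, liters
demand = np.full(365, 120.0)                          # daily non-potable demand, liters

def efficiency(volume: float) -> float:
    """Fraction of total demand met by the tank over the series."""
    stored, supplied = 0.0, 0.0
    for q_in, q_dem in zip(inflow, demand):
        stored = min(stored + q_in, volume)   # fill the tank, spill the excess
        release = min(stored, q_dem)          # yield after spillage
        stored -= release
        supplied += release
    return supplied / demand.sum()

for v in [500.0, 1000.0, 2000.0, 5000.0]:
    print(f"volume {v:7.0f} L -> demand met {efficiency(v):.1%}")
```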

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Abstract:

Graduate Program in Production Engineering - FEB

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Abstract:

This article describes a real-world production planning and scheduling problem occurring at an integrated pulp and paper (P&P) mill, which manufactures paper for cardboard out of the pulp it produces. During the cooking of wood chips in the digester, two by-products are obtained: the pulp itself (virgin fibers) and a waste stream known as black liquor. The former is mixed with recycled fibers and processed in a paper machine; there, because of significant sequence-dependent setups in paper-type changeovers, lot sizing and lot sequencing must be decided simultaneously in order to use capacity efficiently. The latter is converted into electrical energy using a set of evaporators, recovery boilers and counter-pressure turbines. The planning challenge is to synchronize the material flow as it moves through the pulp mill, the paper mill and the energy plant, maximizing the customer demand met (backlogging is allowed) and minimizing operating costs. Because P&P production is capital-intensive, the output of the digester must be maximized. As the production bottleneck is not fixed, we propose a new model that, for the first time, integrates the critical production units of the pulp mill, the paper mill and the energy plant. Simple stochastic local search heuristics based on mixed integer programming are developed to obtain good feasible solutions, and the benefits of integrating the three stages are discussed. The proposed approaches are tested on real-world data. This work may help P&P companies increase their competitiveness and responsiveness in dealing with oscillations in demand patterns.
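
For readers unfamiliar with the kind of core model such heuristics operate on, the sketch below states a toy single-machine capacitated lot-sizing MIP in PuLP; the data are hypothetical and the model is far simpler than the paper's integrated P&P formulation.

```python
# Minimal sketch: a toy single-machine capacitated lot-sizing MIP, the kind of
# building block that MIP-based local search heuristics work over.
# All data are hypothetical; this is not the paper's integrated P&P model.
from pulp import LpProblem, LpMinimize, LpVariable, lpSum, LpBinary, value

T = 4                                   # planning periods
demand = [80, 120, 60, 100]             # demand per period
cap = 150                               # production capacity per period
setup_cost, hold_cost = 500.0, 2.0

prob = LpProblem("lot_sizing", LpMinimize)
x = [LpVariable(f"x{t}", lowBound=0) for t in range(T)]    # production quantity
s = [LpVariable(f"s{t}", lowBound=0) for t in range(T)]    # end-of-period inventory
y = [LpVariable(f"y{t}", cat=LpBinary) for t in range(T)]  # setup indicator

prob += lpSum(setup_cost * y[t] + hold_cost * s[t] for t in range(T))
for t in range(T):
    prev = s[t - 1] if t > 0 else 0
    prob += prev + x[t] == demand[t] + s[t]   # inventory balance, no backlog
    prob += x[t] <= cap * y[t]                # production only after a setup
prob.solve()
print("lots:", [value(v) for v in x], "cost:", value(prob.objective))
```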

Abstract:

The integrated production scheduling and lot-sizing problem in a flow shop environment consists of establishing production lot sizes and allocating machines to process them, within a planning horizon, on a production line with machines arranged in series. The problem requires that demands be met without backlogging, that machine capacities be respected, and that sequence-dependent machine setups be preserved between periods of the planning horizon. The objective is to determine a production schedule that minimises setup, production and inventory costs. A mathematical model from the literature is presented, as well as procedures for obtaining feasible solutions; however, some of these procedures struggle to find feasible solutions for large problem instances. We therefore address the problem using different versions of the Asynchronous Team (A-Team) approach, and compare them with literature heuristics based on Mixed Integer Programming. The proposed A-Team procedures outperformed the literature heuristics, especially on large instances. The developed methodologies and the results obtained are presented.
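
The sketch below illustrates the A-Team scheme itself on a toy travelling-salesman instance: independent agents asynchronously read and write a shared memory of candidate solutions. It is only a schematic stand-in, not the paper's flow-shop A-Team; the instance, agents and parameters are hypothetical.

```python
# Minimal sketch of an Asynchronous Team (A-Team): agents share a memory of
# candidate solutions; a constructor adds random tours, improvers apply 2-opt
# style moves, and a destroyer prunes the worst. Toy TSP, hypothetical data.
import random
import threading

random.seed(1)
N = 12
pts = [(random.random(), random.random()) for _ in range(N)]

def tour_len(t):
    return sum(((pts[a][0] - pts[b][0])**2 + (pts[a][1] - pts[b][1])**2) ** 0.5
               for a, b in zip(t, t[1:] + t[:1]))

memory, lock = [], threading.Lock()

def constructor(iters=200):
    for _ in range(iters):
        t = random.sample(range(N), N)
        with lock:
            memory.append(t)

def improver(iters=2000):
    for _ in range(iters):
        with lock:
            if not memory:
                continue
            t = list(random.choice(memory))
        i, j = sorted(random.sample(range(N), 2))
        t2 = t[:i] + t[i:j + 1][::-1] + t[j + 1:]   # segment reversal
        if tour_len(t2) < tour_len(t):
            with lock:
                memory.append(t2)

def destroyer(iters=500, cap=30):
    for _ in range(iters):
        with lock:
            if len(memory) > cap:
                memory.sort(key=tour_len)
                del memory[cap:]                    # drop the worst solutions

agents = [threading.Thread(target=f) for f in (constructor, improver, improver, destroyer)]
for a in agents: a.start()
for a in agents: a.join()
print("best tour length:", round(min(map(tour_len, memory)), 3))
```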

Abstract:

In this paper, a general scheme for generating extra cuts during the execution of a Benders decomposition algorithm is presented. These cuts are based on feasible and infeasible master problem solutions generated by means of a heuristic. The article includes general guidelines and a case study with a fixed-charge network design problem. Computational tests with instances of this problem show the efficiency of the strategy. The most important aspect of the proposed ideas is their generality, which allows them to be used in virtually any Benders decomposition implementation.
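
A minimal, self-contained sketch of the idea follows, on a tiny facility-location instance rather than fixed-charge network design: besides the cut from the optimal master solution, an extra cut is generated from a heuristically perturbed master solution at each iteration. The instance, the dummy-facility device that keeps the subproblem feasible, and the bit-flip heuristic are all illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of Benders decomposition with extra, heuristically generated
# cuts, on a tiny hypothetical facility-location instance.
import itertools
import random
import numpy as np
from scipy.optimize import linprog

random.seed(0)
f = np.array([60.0, 50.0, 80.0])            # facility opening costs
cap = np.array([70.0, 60.0, 90.0])          # facility capacities
d = np.array([30.0, 25.0, 35.0, 20.0])      # client demands
c = np.array([[4.0, 6.0, 3.0, 5.0],         # transport costs, facility x client
              [5.0, 4.0, 6.0, 3.0],
              [3.0, 5.0, 4.0, 6.0]])
BIG = 1e3                                    # dummy supply keeps the LP feasible

def subproblem(y):
    """Solve the transport LP for fixed y; return its value and a Benders cut."""
    m, n = c.shape
    cost = np.concatenate([c.ravel(), np.full(n, BIG)])  # one dummy column per client
    A_eq = np.zeros((n, m * n + n))
    for j in range(n):
        A_eq[j, j::n] = 1.0                  # sum_i x_ij + dummy_j = d_j
    A_ub = np.zeros((m, m * n + n))
    for i in range(m):
        A_ub[i, i * n:(i + 1) * n] = 1.0     # sum_j x_ij <= cap_i * y_i
    res = linprog(cost, A_ub=A_ub, b_ub=cap * y, A_eq=A_eq, b_eq=d, method="highs")
    v, u = res.eqlin.marginals, res.ineqlin.marginals    # LP dual values
    # The dual point (v, u) stays feasible for every y, so the cut
    # Q(y') >= d.v + sum_i cap_i * u_i * y'_i is valid for all y'.
    return res.fun, (float(d @ v), cap * u)

cuts, UB = [], np.inf
for it in range(30):
    # Master problem: brute-force the 8 binary vectors against the current cuts.
    LB, y = np.inf, None
    for yy in itertools.product([0, 1], repeat=3):
        yv = np.array(yy, dtype=float)
        eta = max([0.0] + [a + float(g @ yv) for a, g in cuts])
        if float(f @ yv) + eta < LB:
            LB, y = float(f @ yv) + eta, yv
    Q, cut = subproblem(y)
    cuts.append(cut)
    UB = min(UB, float(f @ y) + Q)
    if UB - LB < 1e-6:
        break
    # The article's idea: an extra cut from a heuristically perturbed master
    # solution (here a simple random bit flip, purely illustrative).
    y_h = y.copy()
    k = random.randrange(3)
    y_h[k] = 1.0 - y_h[k]
    cuts.append(subproblem(y_h)[1])

print(f"done after {it + 1} iterations: LB={LB:.2f}, UB={UB:.2f}, open={y.astype(int)}")
```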

Abstract:

Industrial recurrent event data, in which an event of interest can be observed more than once in a single sample unit, arise in several areas, such as engineering, manufacturing and industrial reliability. Such data provide information about the number of events, the times of their occurrence and their costs. Nelson (1995) presents a methodology for obtaining asymptotic confidence intervals for the cost and the number of cumulative recurrent events. Although this is a standard procedure, it may not perform well in some situations, in particular when the available sample size is small. In this context, computer-intensive methods such as the bootstrap can be used to construct confidence intervals. In this paper, we propose a bootstrap-based technique for obtaining interval estimates of the cost and the number of cumulative events. One advantage of the proposed methodology is its applicability in several areas and its easy computational implementation. In addition, according to Monte Carlo simulations, it can be a better alternative than asymptotic methods for calculating confidence intervals. An example from engineering illustrates the methodology.
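
A hedged sketch of the bootstrap idea follows: sample units are resampled with replacement and a percentile confidence interval is formed for the mean cumulative number of events by a chosen time. The synthetic data, time point and confidence level are illustrative; this is not the paper's exact procedure.

```python
# Minimal sketch of the bootstrap for recurrent-event data: resample the units
# and form a percentile CI for the mean cumulative number of events by t0.
import numpy as np

rng = np.random.default_rng(42)
# Synthetic event times for 10 units (e.g., repair times of 10 machines):
units = [np.sort(rng.uniform(0, 100, rng.poisson(4))) for _ in range(10)]
t0 = 60.0                                    # evaluate the mean cumulative function here

def mcf_at(sample, t):
    """Mean cumulative number of events per unit up to time t."""
    return np.mean([np.sum(u <= t) for u in sample])

B = 2000
boot = np.empty(B)
for b in range(B):
    idx = rng.integers(0, len(units), len(units))   # resample units with replacement
    boot[b] = mcf_at([units[i] for i in idx], t0)

lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"MCF({t0}) = {mcf_at(units, t0):.2f}, 95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```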

Abstract:

In this paper, a procedure for the on-line process control of variables is proposed. It consists of inspecting the m-th item of every m produced items and deciding, at each inspection, whether the process is out of control. Two sets of limits are used: warning (µ0 ± W) and control (µ0 ± C). If the value of the monitored statistic falls beyond the control limits, or if a sequence of h observations falls between the warning limits and the control limits, production is stopped for adjustment; otherwise, production goes on. The properties of an ergodic Markov chain are used to obtain an expression for the average cost per item. The parameters (the sampling interval m, the warning and control limit widths W and C, with W < C, and the sequence length h) are optimized by minimizing the cost function. A numerical example illustrates the proposed procedure.
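
The sketch below simulates the decision rule just described, estimating the expected number of items produced before a signal for an in-control and a shifted process. The parameter values and shift size are hypothetical; the paper itself evaluates the rule analytically via an ergodic Markov chain rather than by simulation.

```python
# Minimal sketch of the monitoring rule: inspect every m-th item; stop if an
# observation falls outside mu0 +/- C, or if h consecutive observations fall
# between the warning (mu0 +/- W) and control (mu0 +/- C) limits.
import numpy as np

rng = np.random.default_rng(7)
mu0, sigma = 0.0, 1.0
m, W, C, h = 5, 1.0, 3.0, 3                  # hypothetical design parameters

def items_until_signal(shift=0.0, max_items=10**6):
    """Number of items produced before the rule signals, for a given mean shift."""
    run, produced = 0, 0
    while produced < max_items:
        produced += m                        # m items pass; the m-th is inspected
        x = rng.normal(mu0 + shift, sigma)
        dev = abs(x - mu0)
        if dev > C:
            return produced                  # beyond the control limits: signal
        run = run + 1 if dev > W else 0      # streak inside the warning zone
        if run >= h:
            return produced                  # h consecutive warnings: signal
    return produced

for shift in (0.0, 1.5):
    runs = [items_until_signal(shift) for _ in range(2000)]
    print(f"shift={shift}: mean items until signal = {np.mean(runs):.1f}")
```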

Abstract:

In this work we compared estimates of the parameters of ARCH models obtained using a complete Bayesian method and an empirical Bayesian method, adopting a non-informative prior distribution and an informative prior distribution, respectively. We also considered a reparameterization of these models that maps the parameter space into real space, which permits choosing normal prior distributions for the transformed parameters. The posterior summaries were obtained using Markov chain Monte Carlo (MCMC) methods. The methodology was evaluated on the Telebras series from the Brazilian financial market. The results show that both methods are able to fit ARCH models with different numbers of parameters, and that the empirical Bayesian method provided a more parsimonious model and a better fit to the data than the complete Bayesian method.
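
As a hedged illustration of the reparameterization idea, the sketch below fits an ARCH(1) model with omega = exp(a) and alpha = logistic(b), normal priors on (a, b), and a random-walk Metropolis sampler. The simulated data, prior variances and proposal scale are assumptions; the paper uses the Telebras series and its own MCMC scheme.

```python
# Minimal sketch: ARCH(1) with omega = exp(a), alpha = logistic(b), so normal
# priors on the real-valued (a, b) are legitimate; random-walk Metropolis MCMC.
import numpy as np

rng = np.random.default_rng(3)

# Simulate ARCH(1) data: y_t = sqrt(h_t) e_t, h_t = omega + alpha * y_{t-1}^2
omega_true, alpha_true, T = 0.5, 0.4, 1000
y = np.zeros(T)
for t in range(1, T):
    h = omega_true + alpha_true * y[t - 1] ** 2
    y[t] = np.sqrt(h) * rng.standard_normal()

def log_post(a, b):
    omega, alpha = np.exp(a), 1.0 / (1.0 + np.exp(-b))
    h = omega + alpha * y[:-1] ** 2                # h_t for t = 1..T-1
    loglik = -0.5 * np.sum(np.log(2 * np.pi * h) + y[1:] ** 2 / h)
    logprior = -0.5 * (a**2 + b**2) / 10.0         # N(0, 10) priors (assumed)
    return loglik + logprior

# Random-walk Metropolis on the transformed parameters (a, b)
a, b, draws = 0.0, 0.0, []
lp = log_post(a, b)
for i in range(20000):
    a2, b2 = a + 0.1 * rng.standard_normal(), b + 0.1 * rng.standard_normal()
    lp2 = log_post(a2, b2)
    if np.log(rng.uniform()) < lp2 - lp:
        a, b, lp = a2, b2, lp2
    if i >= 5000:                                  # discard burn-in
        draws.append((np.exp(a), 1.0 / (1.0 + np.exp(-b))))

post = np.array(draws)
print("posterior means: omega ~= %.3f, alpha ~= %.3f" % tuple(post.mean(axis=0)))
```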

Abstract:

Many engineering sectors are challenged by multi-objective optimization problems. Even though the idea behind these problems is simple and well established, implementing a procedure to solve them is not a trivial task. The use of evolutionary algorithms to find candidate solutions is widespread; they usually supply a discrete picture of the non-dominated solutions, the Pareto set. Although knowing the non-dominated solutions is valuable in itself, an additional criterion is needed to select the one solution to be deployed. To better support the design process, this paper presents a new method for solving non-linear multi-objective optimization problems by adding a control function that guides the optimization process over a Pareto set that never needs to be found explicitly. The proposed methodology differs from the classical methods that combine the objective functions into a single scalar function, and is based on a single run of non-linear single-objective optimizers.
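
The sketch below conveys how a single-objective run can be steered across a Pareto set without enumerating it, using a standard weighted Chebyshev (achievement) scalarization as a stand-in for the paper's control-function method; the objectives, weights and reference point are illustrative.

```python
# Minimal sketch: steering single-objective runs over a Pareto set via a
# weighted Chebyshev (achievement) scalarization. Toy bi-objective problem.
import numpy as np
from scipy.optimize import minimize

def f1(x): return float((x[0] - 1.0) ** 2 + x[1] ** 2)
def f2(x): return float((x[0] + 1.0) ** 2 + x[1] ** 2)

def scalarized(x, w, z=(0.0, 0.0), rho=1e-4):
    """Weighted Chebyshev distance to the ideal point z, plus a small sum term."""
    g = (w[0] * (f1(x) - z[0]), w[1] * (f2(x) - z[1]))
    return max(g) + rho * sum(g)

# Each weight vector steers one single-objective run to a different
# non-dominated solution; the Pareto set itself is never built explicitly.
for w in [(0.9, 0.1), (0.5, 0.5), (0.1, 0.9)]:
    res = minimize(scalarized, x0=np.zeros(2), args=(w,), method="Nelder-Mead")
    print(f"w={w}: x*={np.round(res.x, 3)}, f1={f1(res.x):.3f}, f2={f2(res.x):.3f}")
```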