69 results for Operations Management


Relevance: 60.00%

Abstract:

Iterated Local Search has many of the desirable features of a metaheuristic: it is simple, easy to implement, robust, and highly effective. The essential idea of Iterated Local Search lies in focusing the search not on the full space of solutions but on a smaller subspace defined by the solutions that are locally optimal for a given optimization engine. The success of Iterated Local Search lies in the biased sampling of this set of local optima. How effective this approach turns out to be depends mainly on the choice of the local search, the perturbations, and the acceptance criterion. So far, in spite of its conceptual simplicity, it has led to a number of state-of-the-art results without the use of much problem-specific knowledge. With further work so that the different modules are well adapted to the problem at hand, Iterated Local Search can often become a competitive or even state-of-the-art algorithm. The purpose of this review is both to give a detailed description of this metaheuristic and to show where it stands in terms of performance.
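
As a concrete illustration of the loop described above, here is a minimal, generic sketch of Iterated Local Search in Python; the local search, perturbation and acceptance criterion are toy placeholders, and everything problem-specific below is our own assumption rather than part of the original text.

```python
import random

def iterated_local_search(initial, local_search, perturb, cost, iterations=100):
    """Generic ILS skeleton: sample the set of local optima by alternating
    perturbation and local search, keeping the best solution found."""
    current = local_search(initial)
    best = current
    for _ in range(iterations):
        candidate = local_search(perturb(current))
        # Acceptance criterion (here: accept only non-worsening local optima).
        if cost(candidate) <= cost(current):
            current = candidate
        if cost(candidate) < cost(best):
            best = candidate
    return best

# Toy usage: minimise a one-dimensional quadratic over the integers.
cost = lambda x: (x - 7) ** 2

def hill_climb(x):
    """Best-improvement descent with a +/-1 neighbourhood."""
    while True:
        neighbour = min((x - 1, x + 1), key=cost)
        if cost(neighbour) >= cost(x):
            return x
        x = neighbour

perturb = lambda x: x + random.randint(-5, 5)
print(iterated_local_search(0, hill_climb, perturb, cost))
```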

Relevance: 60.00%

Abstract:

Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex on the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has larger index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999), "Restless bandits, partial conservation laws, and indexability," forthcoming in Advances in Applied Probability Vol. 33, No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL), which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
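
For the linear-cost special case mentioned above, classical Smith's rule simply sequences jobs in nonincreasing order of the ratio of holding-cost rate to processing time; a minimal illustration follows (the job data are hypothetical).

```python
# Smith's rule (1956): on a single machine, sequencing jobs in nonincreasing
# order of c_i / p_i minimises the total linear holding cost sum_i c_i * C_i.
jobs = [  # (name, holding-cost rate c_i, processing time p_i)
    ("A", 3.0, 2.0),
    ("B", 1.0, 4.0),
    ("C", 5.0, 1.0),
]
sequence = sorted(jobs, key=lambda job: job[1] / job[2], reverse=True)

time, total_cost = 0.0, 0.0
for name, c, p in sequence:
    time += p                  # completion time C_i of the job just finished
    total_cost += c * time
print([name for name, _, _ in sequence], total_cost)  # ['C', 'A', 'B'] 21.0
```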

Relevance: 60.00%

Abstract:

To understand whether retailers should consider consumer returns when merchandising, we study how the optimal assortment of a price-taking retailer is influenced by its return policy. The retailer selects its assortment from an exogenous set of horizontally differentiated products. Consumers make purchase and keep/return decisions in nested multinomial logit fashion. Our main finding is that the optimal assortment has a counterintuitive structure for relatively strict return policies: it is optimal to offer a mix of the most popular and most eccentric products when the refund amount is sufficiently low, which can be viewed as a form of risk sharing between the retailer and consumers. In contrast, if the refund is sufficiently high, or when returns are disallowed, the optimal assortment is composed of only the most popular products (a common finding in the literature). We provide preliminary empirical evidence for one of the key drivers of our results: more eccentric products have a higher probability of return conditional on purchase. In light of our analytical findings and managerial insights, we conclude that retailers should take their return policies into account when merchandising.
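
As a rough, self-contained illustration of the ingredients in this setting (not the paper's exact nested-logit formulation or result), the sketch below scores assortments by MNL purchase probabilities combined with product-specific return probabilities; every number and name is hypothetical.

```python
# Toy two-stage choice: a consumer buys via MNL over the offered assortment
# (plus a no-purchase option), then may return the product. The retailer
# earns the price if the item is kept and price - refund if it is returned.
def expected_revenue(assortment, price, refund, appeal, p_return, v_outside=1.0):
    denom = v_outside + sum(appeal[i] for i in assortment)
    return sum(appeal[i] / denom * (price - refund * p_return[i])
               for i in assortment)

appeal   = {"popular": 2.0, "eccentric": 0.7}    # hypothetical attraction values
p_return = {"popular": 0.05, "eccentric": 0.35}  # eccentric items are returned more often
for refund in (3.0, 9.0):
    for assortment in (["popular"], ["eccentric"], ["popular", "eccentric"]):
        print(refund, assortment,
              round(expected_revenue(assortment, 10.0, refund, appeal, p_return), 2))
```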

Relevance: 60.00%

Abstract:

In today’s competitive markets, the importance of good scheduling strategies in manufacturing companies leads to the need to develop efficient methods for solving complex scheduling problems. In this paper, we study two production scheduling problems with sequence-dependent setup times. Setup times are one of the most common complications in scheduling problems, and are usually associated with cleaning operations and changing tools and shapes in machines. The first problem considered is single-machine scheduling with release dates, sequence-dependent setup times and delivery times, where the performance measure is the maximum lateness. The second problem is a job-shop scheduling problem with sequence-dependent setup times where the objective is to minimize the makespan. We present several priority dispatching rules for both problems, followed by a study of their performance. Finally, conclusions and directions for future research are presented.
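
A minimal sketch of one such priority dispatching rule for the first problem (single machine, release dates, sequence-dependent setups, delivery times): whenever the machine becomes free, schedule an available job with the largest delivery time. The rule, data and names below are illustrative assumptions, not the specific rules studied in the paper.

```python
def dispatch_largest_delivery(jobs, setup):
    """jobs: name -> (release r_j, processing p_j, delivery q_j);
    setup[(prev, j)]: sequence-dependent setup time.
    Returns max_j (C_j + q_j), i.e. the maximum lateness up to a constant."""
    t, prev, objective = 0.0, None, 0.0
    unscheduled = set(jobs)
    while unscheduled:
        available = [j for j in unscheduled if jobs[j][0] <= t] or list(unscheduled)
        j = max(available, key=lambda k: jobs[k][2])   # priority: largest q_j
        r, p, q = jobs[j]
        t = max(t, r) + setup.get((prev, j), 0.0) + p  # wait, set up, process
        objective = max(objective, t + q)
        unscheduled.remove(j)
        prev = j
    return objective

jobs = {"J1": (0, 3, 5), "J2": (1, 2, 8), "J3": (4, 2, 1)}
setup = {(None, "J1"): 1, (None, "J2"): 1, ("J1", "J2"): 2, ("J2", "J1"): 1,
         ("J1", "J3"): 1, ("J3", "J1"): 2, ("J2", "J3"): 2, ("J3", "J2"): 1}
print(dispatch_largest_delivery(jobs, setup))
```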

Relevance: 60.00%

Abstract:

We address the problem of scheduling a multiclass $M/M/m$ queue with Bernoulli feedback on $m$ parallel servers to minimize time-average linear holding costs. We analyze the performance of a heuristic priority-index rule, which extends Klimov's optimal solution to the single-server case: servers select preemptively customers with larger Klimov indices. We present closed-form suboptimality bounds (approximate optimality) for Klimov's rule, which imply that its suboptimality gap is uniformly bounded above with respect to (i) external arrival rates, as long as they stay within system capacity; and (ii) the number of servers. It follows that its relative suboptimality gap vanishes in a heavy-traffic limit, as external arrival rates approach system capacity (heavy-traffic optimality). We obtain simpler expressions for the special no-feedback case, where the heuristic reduces to the classical $c \mu$ rule. Our analysis is based on comparing the expected cost of Klimov's rule to the value of a strong linear programming (LP) relaxation of the system's region of achievable performance of mean queue lengths. In order to obtain this relaxation, we derive and exploit a new set of work decomposition laws for the parallel-server system. We further report on the results of a computational study on the quality of the $c \mu$ rule for parallel scheduling.
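
In the no-feedback special case mentioned above, the heuristic reduces to the classical $c \mu$ rule: servers give preemptive priority to customer classes in decreasing order of the product of holding-cost rate and service rate. A tiny sketch of that ordering, with hypothetical class data:

```python
# c-mu rule: rank classes by c_k * mu_k and always serve waiting customers
# of the highest-ranked class (preemptively) on every free server.
classes = {  # class -> (holding-cost rate c_k, service rate mu_k)
    "gold":   (4.0, 1.0),
    "silver": (2.0, 3.0),
    "bronze": (1.0, 2.0),
}
priority = sorted(classes, key=lambda k: classes[k][0] * classes[k][1], reverse=True)
print(priority)  # ['silver', 'gold', 'bronze']
```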

Relevance: 60.00%

Abstract:

This paper discusses the role of deterministic components in the DGP and in the auxiliary regression model which underlies the implementation of the Fractional Dickey-Fuller (FDF) test for I(1) against I(d) processes with $d \in [0, 1)$. This is an important test in many economic applications because I(d) processes with $d < 1$ are mean-reverting although, when $0.5 \le d < 1$, like I(1) processes, they are nonstationary. We show how simple the implementation of the FDF test is in these situations, and argue that it has better properties than LM tests. A simple testing strategy entailing only asymptotically normally distributed tests is also proposed. Finally, an empirical application is provided where the FDF test allowing for deterministic components is used to test for long memory in the per capita GDP of several OECD countries, an issue that has important consequences for discriminating between growth theories, and on which there is some controversy.
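
A minimal sketch of the basic FDF regression as we read it from the abstract (without the deterministic components that are the paper's actual focus): regress $\Delta y_t$ on the fractional difference $\Delta^d y_{t-1}$, built from the usual binomial-expansion weights, and examine the t-statistic on that regressor. Everything below is an illustrative assumption, not the authors' exact procedure.

```python
import numpy as np

def frac_diff(y, d):
    """Fractional difference (1 - L)^d y_t via the binomial expansion,
    truncated at the start of the sample."""
    w = np.zeros(len(y))
    w[0] = 1.0
    for j in range(1, len(y)):
        w[j] = w[j - 1] * (j - 1 - d) / j
    return np.array([w[: t + 1][::-1] @ y[: t + 1] for t in range(len(y))])

def fdf_tstat(y, d):
    """t-statistic on phi in the regression  Delta y_t = phi * Delta^d y_{t-1} + u_t."""
    dy = np.diff(y)
    x = frac_diff(y, d)[:-1]                 # Delta^d y_{t-1}
    phi = (x @ dy) / (x @ x)
    resid = dy - phi * x
    se = np.sqrt((resid @ resid) / (len(dy) - 1) / (x @ x))
    return phi / se

rng = np.random.default_rng(0)
y = np.cumsum(rng.standard_normal(500))      # a pure random walk: I(1) under the null
print(fdf_tstat(y, d=0.6))
```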

Relevance: 60.00%

Abstract:

In this work we discuss some ideas and opinions related to teaching Metaheuristics in Business Schools. The main purpose of the work is to initiate a discussion and collaboration on this topic, with the final objective of improving the teaching and publicity of the area. The main topics discussed are the environment and focus of this teaching. We also present a SWOT analysis which leads us to the conclusion that the area of Metaheuristics can only gain from the presentation and discussion of metaheuristics and related topics in Business Schools, since they constitute excellent Decision Support tools for future potential users.

Relevance: 60.00%

Abstract:

Public transportation is gaining importance every year, basically due to population growth, environmental policies, and route and street congestion. To enable efficient management of all the resources related to public transportation, several techniques from different areas are being applied, and several projects in Transportation Planning Systems are being developed in different countries. In this work, we present the GIST Planning Transportation Systems, a Portuguese project involving two universities and six public transportation companies. We describe in detail one of the most relevant modules of this project, the crew-scheduling module. The crew-scheduling module is based on the application of metaheuristics, in particular GRASP, tabu search and genetic algorithms, to solve the bus-driver-scheduling problem. The metaheuristics have been successfully incorporated in the GIST Planning Transportation Systems and are currently used by several companies.
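
To give a flavour of one of the metaheuristics mentioned, here is a small GRASP-style constructive sketch for a set-covering view of the bus-driver-scheduling problem (each duty covers a set of trips); the data, names and parameter choices are ours, not the GIST module's actual implementation.

```python
import random

def grasp_cover(trips, duties, cost, alpha=0.3, starts=20, seed=0):
    """Greedy-randomised construction (GRASP): build several covers of the
    trips, choosing each duty from a restricted candidate list; keep the best."""
    rng = random.Random(seed)
    best, best_cost = None, float("inf")
    for _ in range(starts):
        uncovered, chosen, total = set(trips), [], 0.0
        while uncovered:
            # Score candidate duties by cost per newly covered trip.
            scored = sorted((cost[d] / len(duties[d] & uncovered), d)
                            for d in duties if duties[d] & uncovered)
            cutoff = scored[0][0] + alpha * (scored[-1][0] - scored[0][0])
            _, d = rng.choice([s for s in scored if s[0] <= cutoff])
            chosen.append(d)
            total += cost[d]
            uncovered -= duties[d]
        if total < best_cost:
            best, best_cost = chosen, total
    return best, best_cost

duties = {"D1": {1, 2}, "D2": {2, 3, 4}, "D3": {1, 4}, "D4": {3}}
cost = {"D1": 5.0, "D2": 8.0, "D3": 6.0, "D4": 3.0}
print(grasp_cover({1, 2, 3, 4}, duties, cost))
```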

Relevance: 60.00%

Abstract:

Highly competitive environments are leading companies to implement Supply Chain Management (SCM) to improve performance and gain a competitive advantage. SCM involves integration, co-ordination and collaboration across organisations and throughout the supply chain. It means that SCM requires internal (intraorganisational) and external (interorganisational) integration. This paper examines the Logistics-Production and Logistics-Marketing interfaces and their relation with the external integration process. The study also investigates the causal impact of these internal and external relationships on the company's logistical service performance. To analyse this, an empirical study was conducted in the Spanish Fast Moving Consumer Goods (FMCG) sector.

Relevance: 60.00%

Abstract:

This paper proposes an explanation as to why some mergers fail, based on the interaction between the pre- and post-merger processes. We argue that failure may stem from informational asymmetries arising from the pre-merger period, and problems of cooperation and coordination within recently merged firms. We show that a partner may optimally agree to merge and abstain from putting forth any post-merger effort, counting on the other partner to make the necessary efforts. If both follow the same course of action, the merger goes ahead but fails. Our unique equilibrium allows us to make predictions on which mergers are more likely to fail.

Relevance: 60.00%

Abstract:

The Maximum Capture problem (MAXCAP) is a decision model that addresses the issue of location in a competitive environment. This paper presents a new approach to determine which store attributes (other than distance) should be included in the new Market Capture Models and how they ought to be reflected using the Multiplicative Competitive Interaction model. The methodology involves the design and development of a survey, and the application of factor analysis and ordinary least squares. The methodology has been applied to the supermarket sector in two different scenarios: Milton Keynes (Great Britain) and Barcelona (Spain).
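
For reference, the Multiplicative Competitive Interaction model underlying this approach expresses a store's capture probability as its attribute product, raised to estimated sensitivities, divided by the sum over competing stores. A small numerical sketch follows; all attributes, names and coefficients are hypothetical.

```python
# MCI model: P(store j) = prod_k A_jk^beta_k / sum_j' prod_k A_j'k^beta_k.
# In practice the beta_k are estimated by OLS after a log-centering transform.
stores = {  # hypothetical attributes: (floor space m2, 1/distance km, image score)
    "S1": (1200, 1 / 0.5, 0.9),
    "S2": (800,  1 / 1.2, 1.1),
    "S3": (2000, 1 / 2.0, 1.0),
}
betas = (0.6, 1.0, 1.5)  # hypothetical sensitivities for the three attributes

def attraction(attrs):
    value = 1.0
    for a, b in zip(attrs, betas):
        value *= a ** b
    return value

total = sum(attraction(a) for a in stores.values())
print({name: round(attraction(a) / total, 3) for name, a in stores.items()})
```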