54 results for integrated lot sizing and scheduling models
in the Biblioteca Digital da Produção Intelectual da Universidade de São Paulo (BDPI/USP)
Abstract:
A lot sizing and scheduling problem prevalent in small market-driven foundries is studied. There are two related decision levels: (1) the furnace scheduling of metal alloy production, and (2) moulding machine planning, which specifies the type and size of production lots. A mixed integer programming (MIP) formulation of the problem is proposed, but it is impractical to solve in reasonable computing time for all but small instances. As a result, a faster relax-and-fix (RF) approach is developed that can also be used on a rolling horizon basis where only immediate-term schedules are implemented. In addition to a MIP method to solve the basic RF approach, three variants of a local search method are developed and tested using instances based on the literature. Finally, foundry-based tests with a real order book resulted in a very substantial reduction of delivery delays and finished inventory, better use of capacity, and much faster schedule definition compared to the foundry's own practice. (c) 2006 Elsevier Ltd. All rights reserved.
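The relax-and-fix idea described above can be sketched on a toy single-item capacitated lot-sizing instance. This is a minimal illustration, not the paper's model: the demands, costs, window size, and the zero-cost treatment of relaxed setups (a crude stand-in for a true LP relaxation) are all assumptions.

```python
from itertools import product

def plan_cost(y, fixed_upto, d, C, S, h):
    """Evaluate a setup pattern y. Setups at or before period `fixed_upto` pay
    the full setup cost S; later ('relaxed') setups are counted at zero cost,
    a crude stand-in for the LP relaxation of the binary variables."""
    T = len(d)
    cap = [C if y[t] else 0 for t in range(T)]
    holding = 0.0
    for t in range(T):
        need = d[t]
        for s in range(t, -1, -1):          # allocate from the latest open period
            take = min(need, cap[s])
            cap[s] -= take
            holding += h * (t - s) * take
            need -= take
            if need == 0:
                break
        if need > 0:
            return float('inf')             # demand cannot be met
    setups = sum(S for t in range(T) if y[t] and t <= fixed_upto)
    return setups + holding

def relax_and_fix(d, C, S, h, w=2):
    """Fix setup variables window by window, enumerating each window exactly
    while keeping later windows' setups relaxed (open, zero setup cost)."""
    T = len(d)
    y = [1] * T                              # all setups relaxed initially
    for start in range(0, T, w):
        end = min(start + w, T)
        best, best_combo = float('inf'), None
        for combo in product([0, 1], repeat=end - start):
            trial = y[:start] + list(combo) + [1] * (T - end)
            c = plan_cost(trial, end - 1, d, C, S, h)
            if c < best:
                best, best_combo = c, combo
        y[start:end] = best_combo
    return y, plan_cost(y, T - 1, d, C, S, h)
```

On this assumed instance (demands [3, 0, 5, 2], capacity 6, setup cost 10, unit holding cost 1), the method fixes the first window, then the second, and ends with the pattern [1, 0, 1, 0].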
Abstract:
An important production programming problem in the paper industry couples multiple-machine scheduling with cutting stock. On the scheduling side, the quantities of large rolls of paper of each type to be produced must be determined; these rolls are then cut to meet the demand for items. A schedule that minimizes setup and production costs may yield rolls that increase waste in the cutting process; conversely, the number of rolls that minimizes cutting waste may lead to high setup costs. In this paper, coupled models and heuristic methods are proposed, and computational experiments are presented.
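The trade-off described above can be made concrete with a tiny cutting-stock computation (all numbers assumed for illustration): a plan using two cutting patterns wastes little material but needs two setups, while a single-pattern plan needs one setup but wastes far more.

```python
ROLL = 100                       # width of a large (jumbo) roll -- assumed toy value
demand = {45: 4, 30: 6}          # item width -> required number of pieces

def plan_waste(runs):
    """runs: list of (pattern, roll count); pattern maps item width to pieces
    cut per roll.  Waste = trim loss of every roll plus the total width of
    over-produced pieces."""
    produced = {w: 0 for w in demand}
    trim = 0
    for pattern, count in runs:
        trim += count * (ROLL - sum(w * n for w, n in pattern.items()))
        for w, n in pattern.items():
            produced[w] += count * n
    over = sum((produced[w] - demand[w]) * w for w in demand)
    return trim + over

plan_a = [({45: 2}, 2), ({30: 3}, 2)]   # two patterns (two setups), waste 40
plan_b = [({45: 1, 30: 1}, 6)]          # one pattern (one setup), waste 240
```

With a per-pattern setup cost S, plan A costs 40 + 2S and plan B costs 240 + S, so the low-setup plan only wins when S exceeds 200 — exactly the coupling between setups and waste that the abstract describes.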
Abstract:
Production processes involving both lot-sizing and cutting stock problems are common in many industrial settings. However, they are usually treated separately, which can lead to costly production plans. In this paper, a coupled mathematical model is formulated and a heuristic method based on Lagrangian relaxation is proposed. Computational results demonstrate its effectiveness. (C) 2009 Elsevier B.V. All rights reserved.
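The Lagrangian-relaxation idea can be sketched on a tiny assumed problem: minimize x1 + 2*x2 subject to the coupling constraint x1 + x2 >= 4 with integer x in [0, 3]. Dualizing the coupling constraint splits the problem into trivial subproblems, and subgradient steps on the multiplier tighten the lower bound.

```python
def solve_subproblem(cost_coeff, ub=3):
    """min cost_coeff * x for integer x in [0, ub] -- trivial once decoupled."""
    return ub if cost_coeff <= 0 else 0

def lagrangian_bound(lam, c=(1.0, 2.0), rhs=4):
    """Dual function value and minimizer for multiplier lam >= 0."""
    x = [solve_subproblem(ci - lam) for ci in c]
    value = sum(ci * xi for ci, xi in zip(c, x)) + lam * (rhs - sum(x))
    return value, x

lam, best_bound = 0.0, float('-inf')
for k in range(1, 200):
    value, x = lagrangian_bound(lam)
    best_bound = max(best_bound, value)
    g = 4 - sum(x)                          # subgradient of the dual function
    lam = max(0.0, lam + (1.0 / k) * g)     # diminishing step size
```

By weak duality every bound stays at or below the primal optimum (5 here, at x = (3, 1)); the subgradient iterates drive the bound close to it.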
Abstract:
In this paper we present a genetic algorithm with new components to tackle capacitated lot sizing and scheduling problems with sequence dependent setups that appear in a wide range of industries, from soft drink bottling to food manufacturing. Finding a feasible solution to highly constrained problems is often a very difficult task. Various strategies have been applied to deal with infeasible solutions throughout the search. We propose a new scheme that classifies individuals into nested domains according to their level of infeasibility, which in our case represents bands of additional production hours (overtime). Within each band, individuals are differentiated only by their fitness function. As iterations proceed, the widths of the bands are dynamically adjusted to improve the convergence of the individuals into the feasible domain. The numerical experiments on highly capacitated instances show the effectiveness of this computationally tractable approach to guide the search toward the feasible domain. Our approach outperforms other state-of-the-art approaches and commercial solvers. (C) 2009 Elsevier Ltd. All rights reserved.
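The nested-band selection scheme can be sketched as follows. The toy individuals, band width, shrink factor, and the jitter standing in for real crossover/mutation operators are all assumptions; only the ranking rule (band index first, fitness second) and the dynamic band tightening mirror the idea above.

```python
import math
import random

def band_index(overtime, width):
    """Nested-domain class: 0 = feasible, k = overtime in ((k-1)*width, k*width]."""
    return 0 if overtime <= 0 else math.ceil(overtime / width)

def rank_key(individual, width):
    cost, overtime = individual
    return (band_index(overtime, width), cost)   # band first, fitness second

random.seed(1)
# toy individuals: (production cost, overtime hours); overtime <= 0 means feasible
pop = [(random.uniform(0, 100), random.uniform(-5, 20)) for _ in range(30)]

width = 8.0
for generation in range(10):
    pop.sort(key=lambda ind: rank_key(ind, width))
    survivors = pop[:15]
    # jittered copies stand in for real crossover/mutation operators
    pop = survivors + [(c + random.uniform(-5, 5), o + random.uniform(-3, 1))
                       for c, o in survivors]
    width = max(1.0, width * 0.7)    # shrink bands to raise feasibility pressure
best = min(pop, key=lambda ind: rank_key(ind, width))
```

Note how the key makes any feasible individual outrank any infeasible one regardless of cost, while individuals in the same band compete on fitness alone.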
Abstract:
This paper addresses the independent multi-plant, multi-period, and multi-item capacitated lot sizing problem where transfers between the plants are allowed. This is an NP-hard combinatorial optimization problem, and few solution methods have been proposed to solve it. We develop a GRASP (Greedy Randomized Adaptive Search Procedure) heuristic as well as a path-relinking intensification procedure to find cost-effective solutions for this problem. In addition, the proposed heuristic is used to solve some instances of the capacitated lot sizing problem with parallel machines. The results of the computational tests show that the proposed heuristics outperform other heuristics previously described in the literature. The results are confirmed by statistical tests. (C) 2009 Elsevier B.V. All rights reserved.
Abstract:
This paper addresses the capacitated lot sizing problem (CLSP) with a single stage composed of multiple plants, items and periods, with setup carry-over between periods. The CLSP is well studied and many heuristics have been proposed to solve it. Nevertheless, few studies have explored the multi-plant capacitated lot sizing problem (MPCLSP), meaning that few solution methods have been proposed for it. Furthermore, to our knowledge, no study of the MPCLSP with setup carry-over exists in the literature. This paper presents a mathematical model and a GRASP (Greedy Randomized Adaptive Search Procedure) with path relinking for the MPCLSP with setup carry-over. This solution method is an extension and adaptation of a previously adopted methodology without setup carry-over. Computational tests showed that incorporating setup carry-over yields a significant improvement in solution value at a small increase in computational time.
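The GRASP-with-path-relinking template used in the two papers above can be sketched on a toy 0/1 selection problem. A small knapsack stands in for the lot-sizing instance; the data, the RCL parameter alpha, and the two-solution elite set are all assumptions.

```python
import random

# toy knapsack stands in for the lot-sizing instance (all numbers assumed)
values  = [10, 7, 6, 5, 4, 3]
weights = [ 5, 4, 3, 3, 2, 2]
CAP = 10

def total(sol):
    """Objective value; -1 flags a capacity-infeasible selection."""
    if sum(weights[i] for i in sol) > CAP:
        return -1
    return sum(values[i] for i in sol)

def construct(alpha=0.5):
    """Greedy randomized construction: pick from a restricted candidate list."""
    sol, cand = set(), set(range(len(values)))
    while cand:
        used = sum(weights[j] for j in sol)
        fits = sorted((i for i in cand if used + weights[i] <= CAP),
                      key=lambda i: values[i] / weights[i], reverse=True)
        if not fits:
            break
        i = random.choice(fits[:max(1, int(len(fits) * alpha))])
        sol.add(i)
        cand.discard(i)
    return sol

def path_relink(start, guide):
    """Walk from one elite solution toward another, keeping the best point seen."""
    best, cur = set(start), set(start)
    for i in sorted(start ^ guide):      # flip one differing element at a time
        cur ^= {i}
        if total(cur) > total(best):
            best = set(cur)
    return best

random.seed(0)
elite = [max((construct() for _ in range(20)), key=total) for _ in range(2)]
best = max(elite + [path_relink(elite[0], elite[1])], key=total)
```

Path relinking can only improve on the elite pair it connects, which is why it works well as an intensification step after the randomized constructions.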
Abstract:
Foundries can be found all over Brazil and they are very important to its economy. In 2008, a mixed integer programming model for small market-driven foundries was published, attempting to minimize delivery delays. We undertook a study of that model. Here, we present a new approach based on the decomposition of the problem into two sub-problems: production planning of alloys and production planning of items. Both sub-problems are solved using a Lagrangian heuristic based on transferences. An important aspect of the proposed heuristic is its ability to take into account a secondary practical objective: minimizing furnace waste. Computational tests show that the approach proposed here is able to generate good quality solutions that outperform prior results. Journal of the Operational Research Society (2010) 61, 108-114. doi:10.1057/jors.2008.151
Abstract:
This article addresses the interactions of the synthetic antimicrobial peptide dermaseptin 01 (GLWSTIKQKGKEAAIAAAKAAGQAALGAL-NH2, DS 01) with phospholipid (PL) monolayers comprising (i) a lipid-rich extract of Leishmania amazonensis (LRE-La), (ii) a zwitterionic PL (dipalmitoylphosphatidylcholine, DPPC), and (iii) a negatively charged PL (dipalmitoylphosphatidylglycerol, DPPG). The degree of interaction of DS 01 with the different biomembrane models was quantified from equilibrium and dynamic liquid-air interface parameters. At low peptide concentrations, interactions between DS 01 and the zwitterionic PL, as well as with the LRE-La monolayers, were very weak, whereas with the negatively charged PL the interactions were stronger. For peptide concentrations above 1 µg/ml, a considerable expansion of the negatively charged monolayers occurred. In the case of DPPC, it was possible to return to the original lipid area in the condensed phase, suggesting that the peptide was expelled from the monolayer. However, in the case of DPPG, the average area per lipid molecule in the presence of DS 01 was higher than that of the pure PL even at high surface pressures, suggesting that at least part of DS 01 remained incorporated in the monolayer. For the LRE-La monolayers, DS 01 also remained in the monolayer. This is the first report on the antiparasitic activity of AMPs using Langmuir monolayers of a natural lipid extract from L. amazonensis. Copyright (C) 2011 European Peptide Society and John Wiley & Sons, Ltd.
Abstract:
A novel technique for selecting the poles of orthonormal basis functions (OBF) in Volterra models of any order is presented. It is well known that the usually large number of parameters required to describe the Volterra kernels can be significantly reduced by representing each kernel using an appropriate basis of orthonormal functions. Such a representation results in the so-called OBF Volterra model, which has a Wiener structure consisting of linear dynamics generated by the orthonormal basis followed by a nonlinear static mapping given by the Volterra polynomial series. Aiming at optimizing the poles that fully parameterize the orthonormal bases, the exact gradients of the outputs of the orthonormal filters with respect to their poles are computed analytically by using a back-propagation-through-time technique. The expressions relative to the Kautz basis and to generalized orthonormal bases of functions (GOBF) are addressed; the ones related to the Laguerre basis follow straightforwardly as a particular case. The main innovation here is that the dynamic nature of the OBF filters is fully considered in the gradient computations. These gradients provide exact search directions for optimizing the poles of a given orthonormal basis. Such search directions can, in turn, be used as part of an optimization procedure to locate the minimum of a cost function that takes into account the error of estimation of the system output. The Levenberg-Marquardt algorithm is adopted here as the optimization procedure. Unlike previous related work, the proposed approach relies solely on input-output data measured from the system to be modeled, i.e., no information about the Volterra kernels is required. Examples are presented to illustrate the application of this approach to the modeling of dynamic systems, including a real magnetic levitation system with nonlinear oscillatory behavior.
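The pole-selection idea can be sketched for the linear (first-kernel) case: filter the input through a Laguerre basis, fit the static coefficients by least squares, and pick the pole minimizing the output error. A grid search stands in here for the paper's analytical gradients and Levenberg-Marquardt steps, and the first-order test system with pole 0.6 is an assumption.

```python
import math
import random

def laguerre_outputs(u, a, n_funcs=2):
    """Filter input u through a discrete Laguerre basis with pole a (|a| < 1).
    First section: sqrt(1-a^2)/(1 - a*z^-1); subsequent sections apply the
    all-pass (z^-1 - a)/(1 - a*z^-1) in cascade."""
    gain = math.sqrt(1.0 - a * a)
    outs = [[0.0] * len(u) for _ in range(n_funcs)]
    for k in range(len(u)):
        prev = outs[0][k - 1] if k else 0.0
        outs[0][k] = a * prev + gain * u[k]
        for j in range(1, n_funcs):
            xj_prev = outs[j][k - 1] if k else 0.0
            xi_prev = outs[j - 1][k - 1] if k else 0.0
            outs[j][k] = a * xj_prev + xi_prev - a * outs[j - 1][k]
    return outs

def fit_mse(u, y, a):
    """Least-squares fit of y on two Laguerre outputs; returns mean squared error."""
    x1, x2 = laguerre_outputs(u, a)
    s11 = sum(v * v for v in x1); s22 = sum(v * v for v in x2)
    s12 = sum(p * q for p, q in zip(x1, x2))
    b1 = sum(p * q for p, q in zip(x1, y)); b2 = sum(p * q for p, q in zip(x2, y))
    det = s11 * s22 - s12 * s12
    c1 = (b1 * s22 - b2 * s12) / det
    c2 = (s11 * b2 - s12 * b1) / det
    return sum((yi - c1 * p - c2 * q) ** 2 for yi, p, q in zip(y, x1, x2)) / len(y)

random.seed(3)
u = [random.gauss(0, 1) for _ in range(300)]
y = []
for k, uk in enumerate(u):               # true system: y[k] = 0.6*y[k-1] + u[k]
    y.append(0.6 * (y[k - 1] if k else 0.0) + uk)

poles = [0.3, 0.4, 0.5, 0.6, 0.7, 0.8]
best_a = min(poles, key=lambda a: fit_mse(u, y, a))
```

When the basis pole matches the system pole, the first Laguerre function alone reproduces the system exactly, so the error collapses — which is why pole optimization matters.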
Abstract:
This study investigates the numerical simulation of three-dimensional time-dependent viscoelastic free surface flows using the Upper-Convected Maxwell (UCM) constitutive equation and an algebraic explicit model. This investigation was carried out to develop a simplified approach that can be applied to the extrudate swell problem. The relevant physics of this flow phenomenon is discussed in the paper and an algebraic model to predict the extrudate swell problem is presented. It is based on an explicit algebraic representation of the non-Newtonian extra-stress through a kinematic tensor formed with the scaled dyadic product of the velocity field. The elasticity of the fluid is governed by a single transport equation for a scalar quantity which has the dimension of strain rate. The mass and momentum conservation equations, together with the constitutive equation (UCM or algebraic model), were solved by a three-dimensional time-dependent finite difference method. The free surface of the fluid was modeled using a marker-and-cell approach. The algebraic model was validated by comparing the numerical predictions with analytic solutions for pipe flow. In comparison with the classical UCM model, one advantage of this approach is that the computational workload is substantially reduced: the UCM model employs six differential equations while the algebraic model uses only one. The results showed stable flows with very large extrudate growths beyond those usually obtained with standard differential viscoelastic models. (C) 2010 Elsevier Ltd. All rights reserved.
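As a toy analogue of solving a single scalar transport equation with an explicit finite-difference scheme (the actual solver above is 3D, time-dependent, and free-surface; the 1D grid, CFL number, and initial pulse here are assumptions):

```python
# 1D linear advection dq/dt + c*dq/dx = 0, first-order upwind, explicit in time
c, dx, dt, steps, n = 1.0, 0.1, 0.05, 40, 100   # CFL = c*dt/dx = 0.5 (stable)
q = [1.0 if 10 <= i < 20 else 0.0 for i in range(n)]   # square pulse
for _ in range(steps):
    prev = q[:]
    for i in range(1, n):
        # upwind difference: information travels from the left for c > 0
        q[i] = prev[i] - c * dt / dx * (prev[i] - prev[i - 1])
    # q[0] keeps its inflow value (0) as a simple upstream boundary condition
```

The scheme conserves the total amount of the scalar and transports its center of mass at exactly CFL cells per step, while the first-order upwinding smears the pulse (numerical diffusion).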
Abstract:
Mixed models may be defined with or without reference to sampling, and can be used to predict realized random effects, as when estimating the latent values of study subjects measured with response error. When the model is specified without reference to sampling, a simple mixed model includes two random variables, one stemming from an exchangeable distribution of latent values of study subjects and the other, from the study subjects` response error distributions. Positive probabilities are assigned to both potentially realizable responses and artificial responses that are not potentially realizable, resulting in artificial latent values. In contrast, finite population mixed models represent the two-stage process of sampling subjects and measuring their responses, where positive probabilities are only assigned to potentially realizable responses. A comparison of the estimators over the same potentially realizable responses indicates that the optimal linear mixed model estimator (the usual best linear unbiased predictor, BLUP) is often (but not always) more accurate than the comparable finite population mixed model estimator (the FPMM BLUP). We examine a simple example and provide the basis for a broader discussion of the role of conditioning, sampling, and model assumptions in developing inference.
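The BLUP discussed above can be illustrated for the simplest one-way mixed model, where it shrinks a subject's mean deviation toward the population mean by a reliability weight. The data and the assumption of known variance components are purely illustrative.

```python
import statistics

def blup_effect(subject_mean, mu, var_b, var_e, n_obs):
    """BLUP of a realized random effect b_i in y_ij = mu + b_i + e_ij:
    shrink the subject's mean deviation toward zero by a reliability weight."""
    weight = var_b / (var_b + var_e / n_obs)   # between / (between + within/n)
    return weight * (subject_mean - mu)

# one subject's repeated measurements; variance components assumed known
y = [5.2, 4.8, 5.5, 5.1]
mu, var_b, var_e = 4.0, 1.0, 0.5
b_hat = blup_effect(statistics.mean(y), mu, var_b, var_e, len(y))
```

More observations or a smaller response-error variance push the weight toward 1, so the predictor trusts the subject's own mean more and shrinks less.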
Abstract:
The peritrophic membrane (PM) is an anatomical structure surrounding the food bolus in most insects. Rejecting the idea that the PM evolved from coating mucus merely to play the same protective role, novel functions were proposed and experimentally tested. The theoretical principles underlying the digestive enzyme recycling mechanism were described and used to develop an algorithm to calculate enzyme distributions along the midgut and to infer secretory and absorptive sites. The activity of a Spodoptera frugiperda microvillar aminopeptidase decreases by 50% if placed in the presence of midgut contents. S. frugiperda trypsin preparations placed into dialysis bags in stirred and unstirred media have activities of 210 and 160%, respectively, over the activities of samples in a test tube. The ectoperitrophic fluid (EF) present in the midgut caeca of Rhynchosciara americana may be collected. If the enzymes restricted to this fluid are assayed in the presence of PM contents (PMC), their activities decrease by at least 58%. The lack of PM caused by calcofluor feeding impairs growth due to an increase in the metabolic cost associated with the conversion of food into body mass. This probably results from an increase in digestive enzyme excretion and a futile homeostatic attempt to re-establish the destroyed midgut gradients. The experimental models support the view that the PM enhances digestive efficiency by: (a) prevention of non-specific binding of undigested material onto the cell surface; (b) prevention of excretion by allowing enzyme recycling powered by an ectoperitrophic counterflux of fluid; (c) removal from inside the PM of the oligomeric molecules that may inhibit the enzymes involved in initial digestion; (d) restriction of oligomer hydrolases to the ectoperitrophic space (ECS) to avoid probable partial inhibition by non-dispersed undigested food. Finally, PM functions are discussed regarding insects feeding on any diet. (C) 2008 Elsevier Ltd. All rights reserved.
Abstract:
Strawberries represent the main source of ellagic acid derivatives in the Brazilian diet, corresponding to more than 50% of all phenolic compounds found in the fruit. There is a particular interest in the determination of the ellagic acid content in fruits because of possible chemopreventive benefits. In the present study, the potential health benefits of purified ellagitannins from strawberries were evaluated in relation to the antiproliferative activity and in vitro inhibition of alpha-amylase, alpha-glucosidase, and angiotensin I-converting enzyme (ACE) relevant for potential management of hyperglycemia and hypertension. Therefore, a comparison among ellagic acid, purified ellagitannins, and a strawberry extract was done to evaluate the possible synergistic effects of phenolics. In relation to the antiproliferative activity, it was observed that ellagic acid had the highest percentage inhibition of cell proliferation. The strawberry extract had lower efficacy in inhibiting the cell proliferation, indicating that in the case of this fruit there is no synergism. Purified ellagitannins had high alpha-amylase and ACE inhibitory activities. However, these compounds had low alpha-glucosidase inhibitory activity. These results suggested that the ellagitannins and ellagic acid have good potential for the management of hyperglycemia and hypertension linked to type 2 diabetes. However, further studies with animal and human models are needed to advance the in vitro assay-based biochemical rationale from this study.
Abstract:
We construct and compare in this work a variety of simple models for strange stars, namely, hypothetical self-bound objects made of a cold stable version of the quark-gluon plasma. Exact, quasi-exact and numerical models are examined to find the most economical description for these objects. A simple and successful parametrization of them is given in terms of the central density, and the differences among the models are explicitly shown and discussed. In particular, we present a model starting with a Gaussian ansatz for the density profile that provides a very accurate and almost complete analytical integration of the problem, modulo a small difference for one of the metric potentials.
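The appeal of a Gaussian ansatz can be illustrated in a Newtonian toy setting: the enclosed-mass integral of a Gaussian density profile has a closed form in the error function, which is what makes an almost fully analytical treatment possible. Dimensionless units with rho_c = a = 1 are assumed; the paper's models are relativistic, so this only shows the integrability, not the stellar structure.

```python
import math

def mass_analytic(r, rho_c=1.0, a=1.0):
    """Enclosed mass for rho(r) = rho_c * exp(-r^2/a^2):
    m(r) = 4*pi*rho_c * [ sqrt(pi)*a^3/4 * erf(r/a) - a^2*r/2 * exp(-r^2/a^2) ]."""
    return 4.0 * math.pi * rho_c * (
        math.sqrt(math.pi) * a ** 3 / 4.0 * math.erf(r / a)
        - a * a * r / 2.0 * math.exp(-(r / a) ** 2))

def mass_numeric(r, rho_c=1.0, a=1.0, n=20000):
    """Trapezoidal check of m(r) = integral of 4*pi*r'^2 * rho(r') dr'."""
    h = r / n
    f = lambda x: 4.0 * math.pi * x * x * rho_c * math.exp(-(x / a) ** 2)
    return h * (0.5 * (f(0.0) + f(r)) + sum(f(i * h) for i in range(1, n)))
```

For r much larger than a, the enclosed mass saturates at pi**1.5 * rho_c * a**3, the total mass of the profile.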
Abstract:
The kinematic expansion history of the universe is investigated by using the 307 type Ia supernovae from the Union Compilation set. Three simple parameterizations for the deceleration parameter (constant, linear and abrupt transition) and two different models that are explicitly parametrized by the cosmic jerk parameter (constant and variable) are considered. Likelihood and Bayesian analyses are employed to find best-fit parameters and to compare the models among themselves and with the flat ΛCDM model. Analytical expressions and estimates for the deceleration and cosmic jerk parameters today (q0 and j0) and for the transition redshift (zt) between a past phase of cosmic deceleration and a current phase of acceleration are given. All models characterize an accelerated expansion for the universe today and largely indicate that it was decelerating in the past, with a transition redshift around 0.5. The cosmic jerk is not strongly constrained by the present supernova data. For the most realistic kinematic models the 1σ confidence limits imply the following ranges of values: q0 ∈ [-0.96, -0.46], j0 ∈ [-3.2, -0.3] and zt ∈ [0.36, 0.84], which are compatible with the ΛCDM predictions q0 = -0.57 ± 0.04, j0 = -1 and zt = 0.71 ± 0.08. We find that even very simple kinematic models describe the data as well as the concordance ΛCDM model, and that the current observations are not powerful enough to discriminate among all of them.
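The transition redshift zt quoted above can be illustrated for flat ΛCDM with assumed density parameters Omega_m = 0.3 and Omega_Lambda = 0.7: q(z) changes sign where Omega_m*(1+z)^3 = 2*Omega_Lambda, and a simple bisection recovers that redshift.

```python
def q_lcdm(z, om=0.3, ol=0.7):
    """Deceleration parameter for flat LCDM:
    q(z) = (om*(1+z)^3 / 2 - ol) / (om*(1+z)^3 + ol)."""
    a3 = om * (1.0 + z) ** 3
    return (a3 / 2.0 - ol) / (a3 + ol)

def transition_redshift(qfunc, lo=0.0, hi=3.0, tol=1e-12):
    """Bisection for the redshift where q changes sign (onset of acceleration)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if qfunc(mid) < 0.0:     # still accelerating at mid -> transition is later
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

zt = transition_redshift(q_lcdm)
```

With these assumed parameters q0 = -0.55 and zt ≈ 0.67, inside the abstract's 1σ interval zt ∈ [0.36, 0.84].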