84 results for problem-oriented methodology
Abstract:
This paper addresses the non-preemptive single machine scheduling problem of minimizing total tardiness. We are interested in the online version of this problem, in which orders arrive at the system at random times and jobs must be scheduled without knowledge of the jobs that will arrive afterwards. The processing times and due dates become known only when an order is placed, and orders are released only at the beginning of periodic intervals. A customized approximate dynamic programming method is introduced for this problem, and numerical experiments are presented that assess the reliability of the new approach and show that it outperforms a myopic policy.
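As a point of reference for the myopic policy used as a baseline above, the following is a minimal Python sketch (not the authors' method) of an earliest-due-date dispatching rule for the online setting: released jobs are scheduled non-preemptively and the total tardiness of the resulting schedule is accumulated. The job data and the dispatching rule are illustrative assumptions.

```python
# Minimal sketch of a myopic (earliest-due-date) dispatching policy for
# online single-machine scheduling; total tardiness is the objective.
# Job data and the release mechanism are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Job:
    release: float      # time the order becomes available to the scheduler
    processing: float   # processing time, revealed at release
    due: float          # due date, revealed at release

def myopic_total_tardiness(jobs):
    """Schedule jobs non-preemptively, always picking the released job
    with the earliest due date, and return the total tardiness."""
    pending = sorted(jobs, key=lambda j: j.release)
    t, total_tardiness, queue, i = 0.0, 0.0, [], 0
    while i < len(pending) or queue:
        # move newly released jobs into the queue
        while i < len(pending) and pending[i].release <= t:
            queue.append(pending[i]); i += 1
        if not queue:                     # machine idles until the next release
            t = pending[i].release
            continue
        queue.sort(key=lambda j: j.due)   # myopic rule: earliest due date first
        job = queue.pop(0)
        t += job.processing
        total_tardiness += max(0.0, t - job.due)
    return total_tardiness

# Example: three orders arriving over time
print(myopic_total_tardiness([Job(0, 4, 6), Job(1, 2, 3), Job(5, 3, 7)]))
```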
Abstract:
In this paper, we consider a real-life heterogeneous fleet vehicle routing problem with time windows and split deliveries that arises in a major Brazilian retail group. A single depot serves 519 stores of the group, distributed across 11 Brazilian states. To find good solutions to this problem, we propose heuristics to construct initial solutions and a scatter search (SS) approach to improve them. The resulting solutions are then compared with the routes actually covered by the company. Our results show that the total distribution cost can be reduced significantly when such methods are used. Experimental testing with benchmark instances is also used to assess the merit of our proposed procedure.
Abstract:
Modern Integrated Circuit (IC) design is characterized by a strong trend towards Intellectual Property (IP) core integration into complex system-on-chip (SOC) architectures. These cores require thorough verification of their functionality to avoid erroneous behavior in the final device. Formal verification methods are capable of detecting any design bug; however, due to state explosion, their use remains limited to small circuits. Alternatively, simulation-based verification can explore hardware descriptions of any size, although the corresponding stimulus generation, as well as the functional coverage definition, must be carefully planned to guarantee its efficacy. In general, static input space optimization methodologies have shown better efficiency and results than, for instance, Coverage Directed Verification (CDV) techniques, although the two act on different facets of the monitored system and are not mutually exclusive. This work presents a constrained-random simulation-based functional verification methodology in which, on the basis of the Parameter Domains (PD) formalism, irrelevant and invalid test case scenarios are removed from the input space. For this purpose, a tool that automatically generates PD-based stimuli sources was developed. Additionally, we developed a second tool to generate functional coverage models that fit the PD-based input space exactly. Both the input stimuli and the coverage model enhancements resulted in a notable increase in testbench efficiency compared to testbenches with traditional stimulation and coverage scenarios: a 22% simulation time reduction when generating stimuli with our PD-based stimuli sources (still with a conventional coverage model), and a 56% simulation time reduction when combining our stimuli sources with their corresponding, automatically generated, coverage models.
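To make the idea of pruning irrelevant and invalid test-case scenarios from a constrained-random input space more concrete, here is a minimal Python sketch; it is not the Parameter Domains tooling itself, and the parameter names, domains, and validity constraint are hypothetical.

```python
# Minimal sketch of constrained-random stimulus generation with the input
# space pruned by an explicit validity constraint. The parameter names,
# domains, and the constraint below are hypothetical, not the PD formalism.
import itertools
import random

# Hypothetical parameter domains for a DUT interface
DOMAINS = {
    "burst_len": [1, 4, 8, 16],
    "addr_align": [1, 2, 4],
    "mode": ["read", "write"],
}

def is_valid(scenario):
    # Hypothetical constraint: long bursts require word-aligned addresses
    return not (scenario["burst_len"] >= 8 and scenario["addr_align"] < 4)

# Enumerate the cross product of the domains and keep only valid scenarios;
# this pruned set doubles as the bin list of a matching coverage model.
VALID_SPACE = [
    dict(zip(DOMAINS, combo))
    for combo in itertools.product(*DOMAINS.values())
    if is_valid(dict(zip(DOMAINS, combo)))
]

def random_stimulus(rng=random):
    """Draw one stimulus uniformly from the pruned (valid) input space."""
    return rng.choice(VALID_SPACE)

if __name__ == "__main__":
    covered = set()
    for _ in range(50):
        s = random_stimulus()
        covered.add(tuple(sorted(s.items())))
    print(f"valid scenarios: {len(VALID_SPACE)}, covered: {len(covered)}")
```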
Abstract:
In this paper, we devise a separation principle for the finite horizon quadratic optimal control problem of continuous-time Markovian jump linear systems driven by a Wiener process and with partial observations. We assume that the output variable and the jump parameters are available to the controller. The goal is to design a dynamic Markovian jump controller such that the closed loop system minimizes the quadratic functional cost of the system over a finite horizon. As in the case with no jumps, we show that an optimal controller can be obtained from two coupled Riccati differential equations, one associated with the optimal control problem when the state variable is available, and the other associated with the optimal filtering problem. This constitutes a separation principle for the finite horizon quadratic optimal control problem for continuous-time Markovian jump linear systems. For the case in which the matrices are all time-invariant, we analyze the asymptotic convergence of the solution of the derived interconnected Riccati differential equations to the solution of the associated set of coupled algebraic Riccati equations, as well as the mean square stabilizing property of this limiting solution. When there is only one mode of operation, our results coincide with the traditional ones for the LQG control of continuous-time linear systems.
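For reference, the control half of such a separation result typically rests on mode-coupled Riccati differential equations of roughly the following form (a sketch in assumed standard MJLS notation, not necessarily the paper's exact formulation), where i = 1, ..., N indexes the operation modes and the lambda_ij are the transition rates of the Markov chain:

```latex
% Sketch of the coupled control Riccati differential equations for a
% Markovian jump linear system (assumed standard notation):
% one equation per mode i = 1, ..., N, solved backwards from a terminal
% condition P_i(T), with the mode-dependent gain K_i(t) given below.
-\dot{P}_i(t) = A_i^{\top} P_i(t) + P_i(t) A_i
                - P_i(t) B_i R_i^{-1} B_i^{\top} P_i(t)
                + Q_i + \sum_{j=1}^{N} \lambda_{ij} P_j(t),
\qquad
K_i(t) = -R_i^{-1} B_i^{\top} P_i(t)
```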
Abstract:
We consider in this paper the optimal stationary dynamic linear filtering problem for continuous-time linear systems subject to Markovian jumps in the parameters (LSMJP) and additive noise (a Wiener process). It is assumed that only an output of the system is available, and therefore the values of the jump parameter are not accessible. It is a well-known fact that in this setting the optimal nonlinear filter is infinite dimensional, which makes linear filtering a natural, numerically treatable choice. The goal is to design a dynamic linear filter such that the closed loop system is mean square stable and minimizes the stationary expected value of the mean square estimation error. It is shown that an explicit analytical solution to this optimal filtering problem is obtained from the stationary solution associated with a certain Riccati equation. It is also shown that the problem can be formulated using a linear matrix inequalities (LMI) approach, which can be extended to consider convex polytopic uncertainties on the parameters of the possible modes of operation of the system and on the transition rate matrix of the Markov process. As far as the authors are aware, this is the first time that this stationary filtering problem (in its exact and robust versions) for LSMJP with no knowledge of the Markov jump parameters is considered in the literature. Finally, we illustrate the results with an example.
Abstract:
Hub-and-spoke networks are widely studied in the area of location theory. They arise in several contexts, including passenger airlines, postal and parcel delivery, and computer and telecommunication networks. Hub location problems usually involve three simultaneous decisions: the optimal number of hub nodes, their locations, and the allocation of the non-hub nodes to the hubs. In the uncapacitated single allocation hub location problem (USAHLP), hub nodes have no capacity constraints and non-hub nodes must be assigned to exactly one hub. In this paper, we propose three variants of a simple and efficient multi-start tabu search heuristic, as well as a two-stage integrated tabu search heuristic, to solve this problem. With the multi-start heuristics, several different initial solutions are constructed and then improved by tabu search, while in the two-stage integrated heuristic tabu search is applied to improve both the locational and the allocational part of the problem. Computational experiments using typical benchmark problems (the Civil Aeronautics Board (CAB) and Australian Post (AP) data sets), as well as new and modified instances, show that our approaches consistently return the optimal or best-known results in very short CPU times, thus making it possible to efficiently solve larger instances of the USAHLP than those found in the literature. We also report the integer optimal solutions for all 80 CAB data set instances and the 12 AP instances with up to 100 nodes, as well as for the corresponding newly generated AP instances with reduced fixed costs.
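To illustrate the cost structure underlying the USAHLP (not the tabu search heuristics themselves), the sketch below evaluates a candidate single-allocation solution under the usual model with discounted inter-hub transfers; the distances, flows, fixed costs, and the discount factor are invented for illustration.

```python
# Minimal sketch of evaluating a single-allocation hub location solution:
# total cost = fixed hub costs + flow-weighted collection, (discounted)
# transfer, and distribution costs. All data below are illustrative.
import numpy as np

def usahlp_cost(dist, flow, fixed_cost, hubs, alloc, chi=1.0, alpha=0.75, delta=1.0):
    """dist[i][j]: unit distance, flow[i][j]: demand from i to j,
    alloc[i]: hub assigned to node i (single allocation),
    chi/alpha/delta: collection / transfer / distribution cost factors."""
    n = len(dist)
    cost = sum(fixed_cost[h] for h in hubs)
    for i in range(n):
        for j in range(n):
            k, m = alloc[i], alloc[j]           # hubs of origin and destination
            unit = (chi * dist[i][k]            # collection leg
                    + alpha * dist[k][m]        # discounted inter-hub transfer
                    + delta * dist[m][j])       # distribution leg
            cost += flow[i][j] * unit
    return cost

# Tiny 4-node example: nodes 0 and 2 act as hubs
rng = np.random.default_rng(0)
d = rng.integers(1, 10, size=(4, 4))
dist = d + d.T                                  # symmetric illustrative distances
np.fill_diagonal(dist, 0)
flow = rng.integers(0, 5, size=(4, 4))
print(usahlp_cost(dist, flow, fixed_cost=[20, 25, 18, 30],
                  hubs={0, 2}, alloc=[0, 0, 2, 2]))
```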
Abstract:
In this paper we propose a new two-parameter lifetime distribution with increasing failure rate. The new distribution arises from a latent complementary risk problem. The properties of the proposed distribution are discussed, including a formal proof of its probability density function and explicit algebraic formulae for its reliability and failure rate functions, quantiles, and moments, including the mean and variance. A simple EM-type algorithm for iteratively computing maximum likelihood estimates is presented. The Fisher information matrix is derived analytically in order to obtain the asymptotic covariance matrix. The methodology is illustrated on a real data set.
Abstract:
This paper addresses the single machine scheduling problem with a common due date, aiming to minimize earliness and tardiness penalties. Due to its complexity, most of the previous studies in the literature deal with this problem using heuristic and metaheuristic approaches. With the intention of contributing to the study of this problem, a branch-and-bound algorithm is proposed. Lower bounds and pruning rules that exploit properties of the problem are introduced. The proposed approach is examined through a comparative computational study with 280 problems involving different due date scenarios. In addition, the values of optimal solutions for small problems from a known benchmark are provided.
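As an illustration of the objective treated here (not of the branch-and-bound algorithm itself), the following Python sketch computes the total earliness-tardiness penalty of a given job sequence around a common due date; the processing times, due date, and penalty weights are invented.

```python
# Minimal sketch of the common-due-date earliness/tardiness objective:
# for a fixed job sequence, jobs finishing before the due date pay an
# earliness penalty and jobs finishing after it pay a tardiness penalty.
# Processing times, the due date, and the weights below are illustrative.
def earliness_tardiness(sequence, processing, due_date, alpha, beta):
    """sequence: order of job indices; alpha/beta: per-job earliness and
    tardiness weights. Jobs start at time 0 with no inserted idle time."""
    t, penalty = 0, 0
    for j in sequence:
        t += processing[j]                          # completion time of job j
        penalty += alpha[j] * max(0, due_date - t)  # earliness
        penalty += beta[j] * max(0, t - due_date)   # tardiness
    return penalty

processing = [4, 2, 6, 3]
alpha = [1, 1, 2, 1]   # earliness weights
beta = [3, 2, 4, 2]    # tardiness weights
print(earliness_tardiness([1, 0, 3, 2], processing, due_date=8,
                          alpha=alpha, beta=beta))
```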
Abstract:
Probable consequences of the mitigation of the citrus canker eradication methodology in Sao Paulo state. Recently, the Sao Paulo state government mitigated the citrus canker eradication methodology it had adopted since 1999. In April 2009, at least 99.8% of commercial sweet orange orchards in Sao Paulo state were free of citrus canker. Consequently, the mitigation of the eradication methodology reduced the high level of safety and the competitiveness of the citrus production sector in Sao Paulo state, Brazil. We therefore suggest the re-adoption of the same citrus canker eradication methodology applied in Sao Paulo from 1999 to 2009, or the adoption of a new methodology that is effective for citrus canker suppression, because in new sample surveys citrus canker was detected in more than 0.36% of orchards. This incidence threshold was calculated by using the Duncan test (P <= 0.05) to compare the yearly sample surveys conducted in Sao Paulo state to estimate citrus canker incidence between 1999 and 2009. The minimum significant level calculated among the sample surveys was 0.28%, and the lowest citrus canker incidence in Sao Paulo state was 0.08%, recorded in 2001. Thus, as an alternative, we suggest the adoption of a new eradication methodology for citrus canker suppression whenever a new sample survey detects more than 0.36% of orchards affected in Sao Paulo state, Brazil.
Abstract:
The objective of this study was to develop a dessert containing soy protein (SP) (1%, 2%, 3%) and guava juice (GJ) (22%, 27%, 32%), using Response Surface Methodology (RSM) as the optimisation technique. Water activity, physical stability, colour, acidity, pH, iron, and carotenoid contents were analysed. Affective tests were performed to determine the degree of liking of colour, creaminess, and acceptability. The results showed that GJ increased the values of redness, hue angle, chromaticity, acidity, and carotenoid content, while SP reduced water activity. Optimisation suggested that a dessert containing 32% GJ and 1.17% SP was the best proportion of these components. This sample was considered a source of fibres, ascorbic acid, copper, and iron, and garnered scores above the level of "slightly liked" for the sensory attributes. Moreover, RSM was shown to be an adequate approach for modelling the physicochemical parameters and the degree of liking of the creaminess of the desserts.
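For readers unfamiliar with RSM, the sketch below fits a generic two-factor second-order response surface by ordinary least squares, of the kind that could relate SP and GJ levels to a measured response; the design points and response values are invented for illustration and are not the study's data.

```python
# Minimal sketch of fitting a two-factor second-order response surface
# y = b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2 by least squares.
# The design points and responses below are invented for illustration only.
import numpy as np

# Coded levels for the two factors (e.g. soy protein and guava juice)
x1 = np.array([-1, -1,  1,  1,  0,  0,  0, -1,  1])
x2 = np.array([-1,  1, -1,  1,  0, -1,  1,  0,  0])
y  = np.array([5.2, 6.1, 5.8, 7.0, 6.9, 6.0, 6.8, 5.9, 6.5])  # fictitious scores

# Build the second-order model matrix and solve for the coefficients
X = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

labels = ["b0", "b1", "b2", "b11", "b22", "b12"]
print(dict(zip(labels, np.round(coef, 3))))

# Predicted response at a new coded point, e.g. x1 = 0.2, x2 = 0.8
x_new = np.array([1, 0.2, 0.8, 0.2**2, 0.8**2, 0.2 * 0.8])
print("predicted:", float(x_new @ coef))
```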
Abstract:
Desserts made with soy cream, which are oil-in-water emulsions, are widely consumed by lactose-intolerant individuals in Brazil. In this regard, this study aimed at using response surface methodology (RSM) to optimize the sensory attributes of a soy-based emulsion over a range of pink guava juice (GJ: 22% to 32%) and soy protein (SP: 1% to 3%) contents. Water-holding capacity (WHC) and backscattering were analyzed after 72 h of storage at 7 degrees C. Furthermore, a rating test was performed to determine the degree of liking of color, taste, creaminess, appearance, and overall acceptability. The data showed that the samples were stable against gravity and storage. The models developed by RSM adequately described the creaminess, taste, and appearance of the emulsions. The response surface of the desirability function was successfully used in the optimization of the sensory properties of the dairy-free emulsions, suggesting that a product with 30.35% GJ and 3% SP was the best combination of these components. The optimized sample presented suitable sensory properties, in addition to being a source of dietary fiber, iron, copper, and ascorbic acid.
Abstract:
This study describes an accurate, sensitive, and specific chromatographic method for the simultaneous quantitative determination of lamivudine and zidovudine in human blood plasma, using stavudine as an internal standard. The chromatographic separation was performed on a C8 column (150 x 4.6 mm, 5 µm), with ultraviolet absorbance detection at 270 nm and gradient elution. Two mobile phases were used: phase A contained 10 mM potassium phosphate and 3% acetonitrile, whereas phase B contained methanol. A linear gradient was used, with the proportion of phases A and B varying from 98:2 to 72:28. The drug extraction was performed with two 4 mL aliquots of ethyl acetate.
Abstract:
Exposure to oxygen may impair the functionality of probiotic dairy foods because probiotic bacteria have an anaerobic metabolism, so oxygen compromises the maintenance of their viability during storage and, consequently, their ability to provide benefits to consumer health. Glucose oxidase can constitute a potential alternative for increasing the survival of probiotic bacteria in yogurt because it consumes the oxygen that permeates into the pot during storage, making it possible to avoid the use of chemical additives. This research aimed to optimize the processing of probiotic yogurt supplemented with glucose oxidase using response surface methodology and to determine, by means of the desirability function, the levels of glucose and glucose oxidase that minimize the concentration of dissolved oxygen and maximize the Bifidobacterium longum count. The response surface methodology mathematical models adequately described the process, with adjusted coefficients of determination of 83% for the dissolved oxygen and 94% for the B. longum count. Linear and quadratic effects of glucose oxidase were observed in the oxygen model, whereas the B. longum count model showed a linear effect of glucose oxidase, followed by a quadratic effect of glucose and a quadratic effect of glucose oxidase. The desirability function indicated that 62.32 ppm of glucose oxidase and 4.35 ppm of glucose were the best combination of these components for the optimization of probiotic yogurt processing. An additional validation experiment was performed, and the results showed an acceptable error between the predicted and experimental values.
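Since the desirability function is central to this optimization (minimizing dissolved oxygen while maximizing the B. longum count), here is a minimal sketch of Derringer-style individual desirabilities combined through a geometric mean and maximized by a grid search; the fitted response surfaces, ranges, and grid are illustrative assumptions, not the authors' models.

```python
# Minimal sketch of a two-response desirability optimization: one response
# should be minimized (dissolved oxygen) and one maximized (B. longum count).
# The response models, ranges, and grid below are illustrative assumptions.
import numpy as np

def d_minimize(y, lo, hi):
    """Desirability = 1 at or below lo, 0 at or above hi, linear in between."""
    return np.clip((hi - y) / (hi - lo), 0.0, 1.0)

def d_maximize(y, lo, hi):
    """Desirability = 0 at or below lo, 1 at or above hi, linear in between."""
    return np.clip((y - lo) / (hi - lo), 0.0, 1.0)

# Fictitious fitted response surfaces in glucose oxidase (gox) and glucose (glu)
def oxygen(gox, glu):
    return 8.0 - 0.08 * gox + 0.0005 * gox**2 + 0.05 * glu

def b_longum(gox, glu):
    return 7.0 + 0.02 * gox - 0.0001 * gox**2 + 0.1 * glu - 0.01 * glu**2

# Grid search for the settings with the highest overall desirability
best = max(
    ((gox, glu) for gox in np.linspace(0, 100, 101) for glu in np.linspace(0, 10, 51)),
    key=lambda p: np.sqrt(d_minimize(oxygen(*p), 1.0, 8.0)
                          * d_maximize(b_longum(*p), 6.0, 8.5)),
)
print("best (glucose oxidase ppm, glucose ppm):", best)
```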
Abstract:
Despite the increase in the use of natural compounds in place of synthetic derivatives as antioxidants in food products, the extent of this substitution is limited by cost constraints. Thus, the objective of this study was to explore synergism in the antioxidant activity of natural compounds, for further application in food products. Three hydrosoluble compounds ($x_1$ = caffeic acid, $x_2$ = carnosic acid, and $x_3$ = glutathione) and three liposoluble compounds ($x_1$ = quercetin, $x_2$ = rutin, and $x_3$ = genistein) were mixed according to a centroid simplex design. The antioxidant activity of the mixtures was analyzed by the ferric reducing antioxidant power (FRAP) and oxygen radical absorbance capacity (ORAC) methodologies, and activity was also evaluated in an oxidized mixed micelle prepared with linoleic acid (LAOX). Cubic polynomial models with predictive capacity were obtained when the mixtures were submitted to the LAOX methodology ($\hat{y} = 0.56x_1 + 0.59x_2 + 0.04x_3 + 0.41x_1x_2 - 0.41x_1x_3 - 1.12x_2x_3 - 4.01x_1x_2x_3$) for the hydrosoluble compounds, and to the FRAP methodology ($\hat{y} = 3.26x_1 + 2.39x_2 + 0.04x_3 + 1.51x_1x_2 + 1.03x_1x_3 + 0.29x_2x_3 + 3.20x_1x_2x_3$) for the liposoluble compounds. Optimization of the models suggested that a mixture containing 47% caffeic acid + 53% carnosic acid and a mixture containing 67% quercetin + 33% rutin were potential synergistic combinations for further evaluation in a food matrix.
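To clarify how such fitted mixture models are used, the sketch below evaluates the hydrosoluble LAOX model quoted above over the simplex of three-component mixtures and locates the proportions with the highest predicted response (assuming that a larger predicted value is the desirable direction); the grid search itself is only an illustrative device.

```python
# Minimal sketch: evaluate the cubic mixture model reported above for the
# hydrosoluble compounds under the LAOX methodology and scan the simplex
# of proportions for the mixture with the highest predicted response
# (assuming that a larger predicted value is the desirable direction).
import numpy as np

def laox_hydro(x1, x2, x3):
    """Cubic mixture model with the coefficients quoted in the abstract:
    x1 = caffeic acid, x2 = carnosic acid, x3 = glutathione (proportions)."""
    return (0.56 * x1 + 0.59 * x2 + 0.04 * x3
            + 0.41 * x1 * x2 - 0.41 * x1 * x3 - 1.12 * x2 * x3
            - 4.01 * x1 * x2 * x3)

# Scan the simplex x1 + x2 + x3 = 1 on a 1% grid
best, best_y = None, -np.inf
for x1 in np.arange(0.0, 1.0001, 0.01):
    for x2 in np.arange(0.0, 1.0001 - x1, 0.01):
        x3 = 1.0 - x1 - x2
        y = laox_hydro(x1, x2, x3)
        if y > best_y:
            best, best_y = (round(x1, 2), round(x2, 2), round(x3, 2)), y

print("best proportions (caffeic, carnosic, glutathione):", best)
print("predicted response:", round(best_y, 3))
```

On a 1% grid this lands at roughly 46-47% caffeic acid and 53-54% carnosic acid with no glutathione, in line with the optimized hydrosoluble mixture reported above.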
Abstract:
Purpose - Using Brandenburger and Nalebuff's 1995 co-opetition model as a reference, the purpose of this paper is to develop a tool that, based on the tenets of classical game theory, enables scholars and managers to identify which games may be played in response to the different conflict of interest situations faced by companies in their business environments. Design/methodology/approach - The literature on game theory and business strategy is reviewed and a conceptual model, the strategic games matrix (SGM), is developed. Two novel games are described and modeled. Findings - The co-opetition model is not sufficient to realistically represent most of the conflict of interest situations faced by companies. The paper seeks to address this problem through the development of the SGM, which expands upon Brandenburger and Nalebuff's model by providing a broader perspective, through the incorporation of an additional dimension (the power ratio between players) and three novel player postures (rival, individualistic, and associative). Practical implications - The proposed model, based on the concepts of game theory, should be used to train decision- and policy-makers to better understand, interpret and formulate conflict management strategies. Originality/value - A practical and original tool for using game models in conflict of interest situations is generated. Basic classical games, such as Nash, Stackelberg, Pareto, and Minimax, are mapped onto the SGM to suggest in which situations they could be useful. Two innovative games are described to fit four different types of conflict situations that so far have no corresponding game in the literature. A test application of the SGM to a classic Intel Corporation strategic management case, in the complex personal computer industry, shows that the proposed method is able to describe, interpret, analyze, and prescribe optimal competitive and/or cooperative strategies for each conflict of interest situation.
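As a minimal computational illustration of the classical games that the SGM maps (here, a pure-strategy Nash equilibrium check on a 2x2 bimatrix game), the sketch below uses hypothetical payoffs that are unrelated to the Intel case discussed above.

```python
# Minimal sketch: find pure-strategy Nash equilibria of a 2x2 bimatrix game
# by checking mutual best responses. The payoff numbers are hypothetical.
import numpy as np

# payoff_a[i, j] / payoff_b[i, j]: payoffs to players A and B when A plays
# strategy i and B plays strategy j (e.g. 0 = cooperate, 1 = compete)
payoff_a = np.array([[3, 0],
                     [5, 1]])
payoff_b = np.array([[3, 5],
                     [0, 1]])

def pure_nash(pa, pb):
    """Return all (i, j) where neither player gains by deviating unilaterally."""
    equilibria = []
    for i in range(pa.shape[0]):
        for j in range(pa.shape[1]):
            a_best = pa[i, j] >= pa[:, j].max()   # A cannot improve given j
            b_best = pb[i, j] >= pb[i, :].max()   # B cannot improve given i
            if a_best and b_best:
                equilibria.append((i, j))
    return equilibria

print("pure-strategy Nash equilibria:", pure_nash(payoff_a, payoff_b))
```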