40 results for capacitated arc-routing problem, column generation, branch-and-price, dual-optimal inequalities
Abstract:
In this article we propose a 0-1 optimization model to determine a crop rotation schedule for each plot in a cropping area. The rotations have the same duration in all the plots, and the crops are selected to maximize plot occupation. The crops may have different production times and planting dates. The problem includes planting constraints for adjacent plots and for sequences of crops in the rotations. Moreover, crops cultivated for green manuring and fallow periods are scheduled into each plot. As the model generally has a large number of constraints and variables, we propose a heuristic based on column generation. To evaluate the performance of the model and the method, computational experiments using real-world data were performed. The solutions obtained indicate that the method generates good results.
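The article's formulation is not reproduced in the abstract; the sketch below only illustrates the column generation pattern the heuristic builds on, using toy data and a pricing step that scans a small hypothetical pool of candidate rotation columns (a real implementation would solve a pricing subproblem instead). All names and numbers are illustrative assumptions.

    import numpy as np
    from scipy.optimize import linprog

    # Toy instance: 4 plots; each candidate column is a 0-1 vector saying which
    # plots a rotation plan occupies, together with an occupation score.
    pool = [  # (coverage vector, occupation score) -- made-up data
        (np.array([1, 1, 0, 0]), 5.0),
        (np.array([0, 0, 1, 1]), 4.0),
        (np.array([1, 0, 1, 0]), 6.0),
        (np.array([0, 1, 0, 1]), 6.5),
        (np.array([1, 1, 1, 1]), 9.0),
    ]
    cols = [0]                                  # restricted master starts with one column
    while True:
        A = np.column_stack([pool[j][0] for j in cols])
        c = np.array([-pool[j][1] for j in cols])   # maximize occupation = minimize its negative
        res = linprog(c, A_ub=A, b_ub=np.ones(4), method="highs")
        y = res.ineqlin.marginals                   # duals of the plot constraints
        # Pricing: reduced cost of an unused column j (minimization convention).
        rc = {j: -pool[j][1] - y @ pool[j][0]
              for j in range(len(pool)) if j not in cols}
        best = min(rc, key=rc.get) if rc else None
        if best is None or rc[best] > -1e-9:
            break                                   # no improving column: LP optimal
        cols.append(best)
    print("occupation =", -res.fun, "columns used:", cols)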
Abstract:
Two fundamental processes usually arise in the production planning of many industries. The first consists of deciding how many final products of each type have to be produced in each period of a planning horizon, the well-known lot sizing problem. The other consists of cutting raw materials in stock in order to produce smaller parts used in the assembly of final products, the well-studied cutting stock problem. In this paper the decision variables of these two problems are made dependent on each other in order to obtain a globally optimal solution. Setups that are typically present in lot sizing problems are relaxed together with integer frequencies of cutting patterns in the cutting problem. Therefore, a large-scale linear optimization problem arises, which is solved exactly by a column generation technique. It is worth noting that this new combined problem still captures the trade-off between storage costs (for final products and the parts) and trim losses (in the cutting process). We present several sets of computational tests, analyzed over three different scenarios. These results show that, by combining the problems and using an exact method, it is possible to obtain significant gains when compared to the usual industrial practice, which solves them in sequence. (C) 2010 The Franklin Institute. Published by Elsevier Ltd. All rights reserved.
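Because the combined model is solved exactly by column generation, the cutting side prices new cutting patterns against the master duals, in the classical Gilmore-Gomory fashion. A minimal, self-contained sketch of that mechanism for a pure cutting stock instance (toy data; the paper's combined master also carries lot sizing variables and inventory balance rows) is:

    import numpy as np
    from scipy.optimize import linprog

    W = 100                                  # raw roll width (made-up data)
    w = np.array([45, 36, 31, 14])           # piece widths
    d = np.array([40, 50, 30, 70])           # demands

    def price(y):
        """Unbounded knapsack DP: most valuable pattern w.r.t. duals y."""
        best = np.zeros(W + 1)
        take = [[0] * len(w) for _ in range(W + 1)]
        for cap in range(1, W + 1):
            best[cap], take[cap] = best[cap - 1], take[cap - 1][:]
            for i, wi in enumerate(w):
                if wi <= cap and best[cap - wi] + y[i] > best[cap]:
                    best[cap] = best[cap - wi] + y[i]
                    take[cap] = take[cap - wi][:]
                    take[cap][i] += 1
        return best[W], np.array(take[W])

    # Start from homogeneous patterns (one piece type per roll).
    cols = [np.eye(len(w), dtype=int)[i] * (W // wi) for i, wi in enumerate(w)]
    while True:
        A = np.column_stack(cols)
        res = linprog(np.ones(len(cols)), A_ub=-A, b_ub=-d, method="highs")
        y = -res.ineqlin.marginals           # duals of the >= demand rows
        value, pattern = price(y)
        if value <= 1 + 1e-9:                # reduced cost 1 - y.a >= 0 for all patterns
            break
        cols.append(pattern)
    print("LP bound on number of rolls:", res.fun)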
Abstract:
In this paper, we present a mathematically rigorous quantum-mechanical treatment of the one-dimensional motion of a particle in the Calogero potential αx⁻². Although the problem is quite old and well studied, we believe that our consideration, based on a uniform approach to constructing a correct quantum-mechanical description for systems with singular potentials and/or boundaries proposed in our previous works, adds some new points to its solution. To demonstrate that a consideration of the Calogero problem requires mathematical accuracy, we discuss some 'paradoxes' inherent in the 'naive' quantum-mechanical treatment. Using the self-adjoint extension method, we construct and study all possible self-adjoint operators (self-adjoint Hamiltonians) associated with the formal differential expression for the Calogero Hamiltonian. In particular, we discuss a spontaneous scale-symmetry breaking associated with self-adjoint extensions. A complete spectral analysis of all self-adjoint Hamiltonians is presented.
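The abstract does not reproduce the operator itself; for orientation, the formal Calogero Hamiltonian and the standard coupling thresholds (textbook facts about this potential, not claims taken from this particular paper) can be summarized as

    \[ \check{H} = -\frac{d^2}{dx^2} + \frac{\alpha}{x^2}, \qquad x > 0 \quad (\hbar = 2m = 1), \]

where for \alpha \ge 3/4 the operator is essentially self-adjoint (a unique Hamiltonian); for -1/4 \le \alpha < 3/4 the deficiency indices are (1,1) and there is a one-parameter U(1) family of self-adjoint extensions; and for \alpha < -1/4 ("fall to the center") each extension has an infinite sequence of bound states with geometrically spaced energies, so the continuous scale symmetry of the formal expression survives only as a discrete subgroup.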
Abstract:
For a fixed family F of graphs, an F-packing in a graph G is a set of pairwise vertex-disjoint subgraphs of G, each isomorphic to an element of F. Finding an F-packing that maximizes the number of covered edges is a natural generalization of the maximum matching problem, which is just F = {K_2}. In this paper we provide new approximation algorithms and hardness results for the K_r-packing problem, where K_r = {K_2, K_3, ..., K_r}. We show that already for r = 3 the K_r-packing problem is APX-complete, and, in fact, we show that it remains so even for graphs with maximum degree 4. On the positive side, we give an approximation algorithm with approximation ratio at most 2 for every fixed r. For r = 3, 4, 5 we obtain better approximations. For r = 3 we obtain a simple 3/2-approximation, achieving a known ratio that follows from a more involved algorithm of Halldorsson. For r = 4, we obtain a (3/2 + epsilon)-approximation, and for r = 5 we obtain a (25/14 + epsilon)-approximation. (C) 2008 Elsevier B.V. All rights reserved.
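The F = {K_2} base case singled out above is ordinary maximum matching, which is solvable exactly in polynomial time; a one-line illustration with networkx (toy graph, not code from the paper):

    import networkx as nx

    G = nx.petersen_graph()                               # toy instance
    M = nx.max_weight_matching(G, maxcardinality=True)    # optimal {K_2}-packing
    print(len(M), "disjoint edges in a maximum matching of the Petersen graph")

For r >= 3 no such exact polynomial algorithm is expected, which is what the APX-completeness result makes precise.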
Abstract:
The objective of this work was to develop and validate a rapid reversed-phase high-performance liquid chromatography method for the quantification of 3,5,3′-triiodothyroacetic acid (TRIAC) in nanoparticle delivery systems prepared in different polymeric matrices. Special attention was given to developing a reliable, reproducible technique for the pretreatment of the samples. Chromatographic runs were performed on an Agilent 1200 Series HPLC with an RP Phenomenex® Gemini C18 column (150 × 4.6 mm i.d., 5 µm) using acetonitrile and 0.1% triethylamine (TEA) buffer (40:60 v/v) as the mobile phase in an isocratic elution, pH 5.6, at a flow rate of 1 mL min⁻¹. TRIAC was detected at a wavelength of 220 nm. The injection volume was 20 µL and the column temperature was maintained at 35 °C. The validation characteristics included accuracy, precision, specificity, linearity, recovery, and robustness. The standard curve was found to be linear (r² = 0.9996) over the analytical range of 5-100 µg mL⁻¹. The detection and quantitation limits were 1.3 and 3.8 µg mL⁻¹, respectively. The recovery and loaded TRIAC in the colloidal delivery system were nearly 100% and 98%, respectively. The method was successfully applied to polycaprolactone, polyhydroxybutyrate, and polymethylmethacrylate nanoparticles.
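The abstract reports the detection and quantitation limits but not how they are obtained; a common route (ICH-style, LOD = 3.3σ/S and LOQ = 10σ/S from the calibration residuals) can be sketched as follows, with made-up calibration data standing in for the paper's:

    import numpy as np

    conc = np.array([5, 10, 25, 50, 75, 100], dtype=float)   # ug/mL (hypothetical)
    area = np.array([52, 101, 255, 498, 747, 1001], dtype=float)  # peak areas (hypothetical)

    slope, intercept = np.polyfit(conc, area, 1)   # least-squares calibration line
    pred = slope * conc + intercept
    r2 = 1 - np.sum((area - pred) ** 2) / np.sum((area - area.mean()) ** 2)

    sigma = np.sqrt(np.sum((area - pred) ** 2) / (len(conc) - 2))  # residual SD
    lod, loq = 3.3 * sigma / slope, 10.0 * sigma / slope
    print(f"r2 = {r2:.4f}, LOD = {lod:.2f} ug/mL, LOQ = {loq:.2f} ug/mL")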
Abstract:
This paper presents a new approach to the transmission loss allocation problem in a deregulated system. This approach belongs to the set of incremental methods. It treats all the constraints of the network, i.e., control, state and functional constraints. The approach is based on the perturbation of the optimum theorem. From a given optimal operating point obtained by the optimal power flow, the loads are perturbed and a new optimal operating point that satisfies the constraints is determined by sensitivity analysis. This solution is used to obtain the loss allocation coefficients for the generators and loads of the network. Numerical results compare the proposed approach with other methods on the well-known IEEE 14-bus transmission network. Another test emphasizes the importance of considering the operational constraints of the network. Finally, the approach is applied to an actual Brazilian equivalent network composed of 787 buses and compared with the technique currently used by the Brazilian Control Center. (c) 2007 Elsevier Ltd. All rights reserved.
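The abstract gives no formulas; in incremental methods of this kind the allocation coefficients are typically first-order loss sensitivities evaluated at the optimal power flow solution x*, in the spirit of

    \[ \Delta P_{\mathrm{loss}} \approx \sum_i \left. \frac{\partial P_{\mathrm{loss}}}{\partial P_i} \right|_{x^*} \Delta P_i, \]

with the derivative for each bus injection P_i serving as the loss-allocation coefficient of the corresponding generator or load. The paper's contribution is to compute the perturbed optimum while enforcing all control, state and functional constraints, which plain sensitivity coefficients ignore.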
Abstract:
A general transition criterion is proposed in order to locate the core-annular flow pattern in horizontal and vertical oil-water flows. It is based on a rigorous one-dimensional two-fluid model of liquid-liquid two-phase flow and considers the existence of critical interfacial wave numbers related to a non-negligible interfacial tension term to which the linear stability theory still applies. The viscous laminar-laminar flow problem is fully resolved, and turbulence effects on the stability are analyzed through experimentally obtained shape factors. The proposed general transition criterion includes in its formulation the inviscid Kelvin-Helmholtz discriminator. If a theoretical maximum wavelength is considered as a necessary condition for stability, a stability criterion in terms of the Eötvös number is obtained. Effects of interfacial tension, viscosity ratio, density difference, and shape factors on the stability of core-annular flow are analyzed in detail. The more complete modeling allowed for the analysis of the neutral-stability wave number, and the results strongly suggest that the interfacial tension term plays an indispensable role in the correct prediction of the stable region of the core-annular flow pattern. The incorporation of a theoretical minimum wavelength into the transition model produced significantly better results. The criterion predictions were compared with recent data from the literature and the agreement is encouraging. (C) 2007 American Institute of Chemical Engineers.
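For reference, the inviscid Kelvin-Helmholtz discriminator embedded in the criterion has the classical textbook form for deep stratified layers (the article generalizes it to the viscous core-annular geometry with shape factors): a disturbance of wave number k is neutrally stable when

    \[ \frac{\rho_1 \rho_2}{\rho_1 + \rho_2}\, k\, (U_1 - U_2)^2 < g\,(\rho_2 - \rho_1) + \sigma k^2, \]

where \sigma is the interfacial tension. The \sigma k^2 term stabilizes short waves, which is why critical wave numbers and a minimum wavelength enter the transition model.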
Abstract:
This paper presents an investigation of design code provisions for steel-concrete composite columns. The study covers the national building codes of the United States, Canada and Brazil, and the transnational EUROCODE. The study is based on experimental results of 93 axially loaded concrete-filled tubular steel columns, including 36 unpublished, full-scale experimental results by the authors and 57 results from the literature. The error of the resistance models is determined by comparing experimental ultimate loads with code-predicted column resistances. Regression analysis is used to describe the variation of model error with column slenderness and to describe model uncertainty. The paper shows that the Canadian and European codes are able to predict mean column resistance, since the resistance models of these codes present detailed formulations for concrete confinement by the steel tube. The ANSI/AISC and Brazilian codes have limited allowance for concrete confinement and become very conservative for short columns. Reliability analysis is used to evaluate the safety level of the code provisions; it includes model error and other random problem parameters such as steel and concrete strengths and dead and live loads. Design code provisions are evaluated in terms of sufficient and uniform reliability criteria. Results show that the four design codes studied provide broadly uniform reliability, with the Canadian code being best in achieving this goal, the result of a well-balanced code both in terms of load combinations and resistance model. The European code is less successful in providing uniform reliability, a consequence of the partial factors used in load combinations. The paper also shows that reliability indices of columns designed according to the European code can be as low as 2.2, which is well below the target reliability levels of EUROCODE. (C) 2009 Elsevier Ltd. All rights reserved.
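The reliability index mentioned at the end is the usual one: for a resistance R and load effect S that are independent and normal, the first-order index and failure probability are

    \[ \beta = \frac{\mu_R - \mu_S}{\sqrt{\sigma_R^2 + \sigma_S^2}}, \qquad P_f \approx \Phi(-\beta), \]

so the reported \beta = 2.2 corresponds to a failure probability of roughly \Phi(-2.2) \approx 1.4%, which makes the gap to typical EUROCODE targets (around \beta = 3.8 for a 50-year reference period) concrete.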
Abstract:
Compliant mechanisms achieve a specified motion as a mechanism without relying on the use of joints and pins. They have broad application in precision mechanical devices and Micro-Electro-Mechanical Systems (MEMS), but may lose accuracy and produce undesirable displacements when subjected to temperature changes. These undesirable effects can be reduced by using sensors in combination with control techniques and/or by applying special design techniques at the design stage, a process generally termed "design for precision". This paper describes a design for precision method based on the topology optimization method (TOM) for compliant mechanisms that includes thermal compensation features. The optimization problem emphasizes actuator accuracy and is formulated to yield optimal compliant mechanism configurations that maximize the desired output displacement when a force is applied, while minimizing undesirable thermal effects. To demonstrate the effectiveness of the method, two-dimensional compliant mechanisms are designed considering thermal compensation, and their performance is compared with compliant mechanism designs that do not consider it. (C) 2010 Elsevier B.V. All rights reserved.
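The abstract states the goal but not the formulation; in density-based topology optimization a multi-criteria objective of this kind is often posed along the lines of the following schematic sketch (not the paper's exact functional):

    \[ \max_{\rho} \; w\, u_{\mathrm{out}}(\mathbf{F}) - (1 - w)\, \big| u_{\mathrm{out}}(\Delta T) \big| \quad \text{s.t.} \quad \sum_e \rho_e v_e \le V_{\max}, \quad 0 < \rho_{\min} \le \rho_e \le 1, \]

where u_out(F) is the output-port displacement under the applied force, u_out(ΔT) the displacement produced by a uniform temperature change, w a weighting factor, and the equilibrium equations for both load cases enter as additional constraints on the design densities \rho_e.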
Abstract:
We consider in this paper the optimal stationary dynamic linear filtering problem for continuous-time linear systems subject to Markovian jumps in the parameters (LSMJP) and additive noise (Wiener process). It is assumed that only an output of the system is available, and therefore the values of the jump parameter are not accessible. It is a well-known fact that in this setting the optimal nonlinear filter is infinite dimensional, which makes linear filtering a natural, numerically treatable choice. The goal is to design a dynamic linear filter such that the closed-loop system is mean-square stable and minimizes the stationary expected value of the mean-square estimation error. It is shown that an explicit analytical solution to this optimal filtering problem is obtained from the stationary solution associated with a certain Riccati equation. It is also shown that the problem can be formulated using a linear matrix inequality (LMI) approach, which can be extended to consider convex polytopic uncertainties on the parameters of the possible modes of operation of the system and on the transition rate matrix of the Markov process. As far as the authors are aware, this is the first time that this stationary filtering problem (exact and robust versions) for LSMJP with no knowledge of the Markov jump parameters is considered in the literature. Finally, we illustrate the results with an example.
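The paper's Riccati equation for the jump case is not given in the abstract; as a baseline, the single-mode (no-jump) stationary filtering problem reduces to a standard algebraic Riccati equation, which SciPy can solve directly. The matrices below are illustrative assumptions; the paper's stationary Riccati solution plays the analogous role for the jump system.

    import numpy as np
    from scipy.linalg import solve_continuous_are

    A = np.array([[0.0, 1.0], [-2.0, -0.5]])   # stable dynamics (hypothetical)
    C = np.array([[1.0, 0.0]])                  # measured output
    Q = np.diag([0.1, 0.1])                     # process noise intensity
    R = np.array([[0.05]])                      # measurement noise intensity

    # Filtering ARE: A P + P A' - P C' R^-1 C P + Q = 0
    P = solve_continuous_are(A.T, C.T, Q, R)
    L = P @ C.T @ np.linalg.inv(R)              # stationary filter gain
    print("stationary error covariance:\n", P)
    print("filter gain:\n", L)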
Abstract:
This work proposes the use of evolutionary computation to jointly solve the maximum-likelihood multiuser channel estimation (MuChE) and detection problems in direct-sequence code-division multiple access (DS/CDMA) systems. The effectiveness of the proposed heuristic approach is demonstrated by comparing performance and complexity figures of merit with those obtained by traditional methods found in the literature. Simulation results for a genetic algorithm (GA) applied to multipath DS/CDMA MuChE and multiuser detection (MuD) show that the proposed genetic algorithm multiuser channel estimation (GAMuChE) yields a normalized mean square estimation error (nMSE) below 11% under slowly varying multipath fading channels, a large range of Doppler frequencies and medium system load, while exhibiting lower complexity than both maximum-likelihood multiuser channel estimation (MLMuChE) and the gradient descent method (GrdDsc). A near-optimum multiuser detector based on the genetic algorithm (GAMuD), also proposed in this work, provides a significant reduction in computational complexity compared to the optimum multiuser detector (OMuD). In addition, the complexity of the GAMuChE and GAMuD algorithms was analyzed jointly in terms of the number of operations necessary to reach convergence, and compared to other joint MuChE and MuD strategies. The joint GAMuChE-GAMuD scheme can be regarded as a promising alternative for implementing third-generation (3G) and fourth-generation (4G) wireless systems in the near future. Copyright (C) 2010 John Wiley & Sons, Ltd.
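The abstract describes the GA at a high level only; the toy sketch below shows the basic mechanics (truncation selection, one-point crossover, bit-flip mutation) applied to the standard synchronous-CDMA ML detection metric 2b'y - b'Rb over b in {-1,+1}^K. The model, sizes and rates are illustrative assumptions, not the paper's settings.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy synchronous CDMA model: y = R b + n, with crosscorrelation matrix R;
    # the optimum detector maximizes 2 b'y - b'R b over b in {-1,+1}^K.
    K = 8
    R = np.full((K, K), 0.2) + 0.8 * np.eye(K)
    b_true = rng.choice([-1, 1], K)
    y = R @ b_true + 0.3 * rng.standard_normal(K)

    def fitness(b):
        return 2 * b @ y - b @ R @ b          # ML metric (up to constants)

    pop = rng.choice([-1, 1], (40, K))        # random initial population
    for _ in range(60):
        scores = np.array([fitness(b) for b in pop])
        parents = pop[np.argsort(scores)[-20:]]          # keep the best half
        cut = rng.integers(1, K, 20)                     # crossover points
        children = np.array([np.concatenate((parents[i][:c], parents[(i + 1) % 20][c:]))
                             for i, c in enumerate(cut)])
        flips = rng.random(children.shape) < 0.05        # bit-flip mutation
        children[flips] *= -1
        pop = np.vstack((parents, children))

    best = pop[np.argmax([fitness(b) for b in pop])]
    print("bit errors vs. true bits:", int(np.sum(best != b_true)))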
Abstract:
The flowshop scheduling problem with blocking in-process is addressed in this paper. In this environment there are no buffers between successive machines; therefore, intermediate queues of jobs waiting in the system for their next operations are not allowed. Heuristic approaches are proposed to minimize the total tardiness criterion. A constructive heuristic that explores specific characteristics of the problem is presented. Moreover, a GRASP-based heuristic is proposed and coupled with a path relinking strategy to search for better outcomes. Computational tests are presented, and the comparisons made with an adaptation of the NEH algorithm and with a branch-and-bound algorithm indicate that the new approaches are promising. (c) 2007 Elsevier Ltd. All rights reserved.
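All of these heuristics need to evaluate candidate sequences under blocking; the standard departure-time recurrence such an evaluation uses (with toy data, not the paper's instances) can be sketched as:

    import numpy as np

    # p[j][k]: processing time of job j on machine k; due[j]: due date (made-up data).
    p = np.array([[4, 3, 2],
                  [2, 5, 1],
                  [3, 2, 4],
                  [1, 4, 3]])
    due = np.array([9, 12, 15, 18])

    def total_tardiness(seq):
        m = p.shape[1]
        d_prev = np.zeros(m + 1)          # departure times of the previous job
        tard = 0
        for j in seq:
            d = np.zeros(m + 1)
            d[0] = d_prev[1]              # machine 1 free when previous job leaves it
            for k in range(1, m):
                # job is blocked on machine k until machine k+1 is free
                d[k] = max(d[k - 1] + p[j, k - 1], d_prev[k + 1])
            d[m] = d[m - 1] + p[j, m - 1] # last machine: no downstream blocking
            tard += max(0, d[m] - due[j])
            d_prev = d
        return tard

    print(total_tardiness([0, 1, 2, 3]))  # tardiness of one candidate permutation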
Abstract:
Sibutramine hydrochloride monohydrate, chemically 1-(4-chlorophenyl)-N,N-dimethyl-α-(2-methylpropyl)cyclobutanemethanamine hydrochloride monohydrate (SB·HCl·H2O), was approved by the U.S. Food and Drug Administration for the treatment of obesity. The objective of this study was to develop, validate, and compare methods using UV-derivative spectrophotometry (UVDS) and reversed-phase high-performance liquid chromatography (HPLC) for the determination of SB·HCl·H2O in pharmaceutical drug products. The UVDS and HPLC methods were found to be rapid, precise, and accurate. Statistically, there was no significant difference between the proposed UVDS and HPLC methods. The enantiomeric separation of SB was obtained on an α1-acid glycoprotein column. The R- and S-sibutramine enantiomers were eluted in < 5 min with baseline separation of the chromatographic peaks (α = 1.9 and resolution = 1.9).
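The reported separation figures follow the standard chromatographic definitions: with dead time t_0, retention times t_R1 < t_R2 and baseline peak widths w_1, w_2,

    \[ \alpha = \frac{k_2}{k_1} = \frac{t_{R2} - t_0}{t_{R1} - t_0}, \qquad R_s = \frac{2\,(t_{R2} - t_{R1})}{w_1 + w_2}, \]

and a resolution of 1.9 is comfortably above the R_s \ge 1.5 usually taken to indicate baseline separation, consistent with the chromatograms described.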
Abstract:
A three-phase hollow-fiber liquid-phase microextraction method for the analysis of rosiglitazone and its metabolites N-desmethyl rosiglitazone and p-hydroxy rosiglitazone in microsomal preparations is described for the first time. HPLC determination of the drug and metabolites was carried out using an X-Terra RP-18 column at 22 °C. The mobile phase was composed of water, acetonitrile and acetic acid (85:15:0.5, v/v/v), and detection was performed at 245 nm. The hollow-fiber liquid-phase microextraction procedure was optimized using multifactorial experiments, and the following optimal condition was established: sample agitation at 1750 rpm, extraction for 30 min, 0.01 mol/L hydrochloric acid as the acceptor phase, 1-octanol as the organic phase, and donor phase pH adjusted to 8.0. The recovery rates, obtained by using 1 mL of microsomal preparation, were 47-70%. The method presented LOQs of 50 ng/mL and was linear over the concentration range of 50-6000 ng/mL, with correlation coefficients (r) higher than 0.9960 for all analytes. The validated method was employed to study the in vitro biotransformation of rosiglitazone using the rat liver microsomal fraction.
Abstract:
A three-phase LPME (liquid-phase microextraction) method for the enantioselective analysis of the venlafaxine (VF) metabolites O-desmethylvenlafaxine (ODV) and N-desmethylvenlafaxine (NDV) in microsomal preparations is described for the first time. The assay involves the chiral HPLC separation of the drug and metabolites using a Chiralpak AD column under the normal-phase mode of elution, with detection at 230 nm. The LPME procedure was optimized using multifactorial experiments, and the following optimal condition was established: sample agitation at 1750 rpm, extraction for 20 min, 0.1 mol/L acetic acid as the acceptor phase, 1-octanol as the organic phase, and donor phase pH adjusted to 10.0. Under these conditions, the mean recoveries were 41% and 42% for (-)-(R)-ODV and (+)-(S)-ODV, respectively, and 47% and 48% for (-)-(R)-NDV and (+)-(S)-NDV, respectively. The method presented quantification limits of 200 ng/mL and was linear over the concentration range of 200-5,000 ng/mL for all analytes. The validated method was employed to study the in vitro biotransformation of VF using the rat liver microsomal fraction. The results demonstrated the enantioselective biotransformation of VF.