951 results for Mixed-integer linear programming


Relevance: 100.00%

Abstract:

Revenue management practices often include overbooking capacity to account for customers who make reservations but do not show up. In this paper, we consider the network revenue management problem with no-shows and overbooking, where the show-up probabilities are specific to each product. No-show rates differ significantly by product (for instance, each itinerary and fare combination for an airline), as sale restrictions and demand characteristics vary by product. However, models that consider no-show rates for each individual product are difficult to handle, as the state space in dynamic programming formulations (or the variable space in approximations) increases significantly. In this paper, we propose a randomized linear program to jointly make the capacity control and overbooking decisions with product-specific no-shows. We establish that our formulation gives an upper bound on the optimal expected total profit, and that our upper bound is tighter than a deterministic linear programming upper bound that appears in the existing literature. Furthermore, we show that our upper bound is asymptotically tight in a regime where the leg capacities and the expected demand are scaled linearly at the same rate. We also describe how the randomized linear program can be used to obtain a bid-price control policy. Computational experiments indicate that our approach is quite fast, scales to industrial-size problems, and can provide significant improvements over standard benchmarks.
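As a rough illustration of the deterministic linear programming (DLP) benchmark that the randomized formulation is compared against, the sketch below allocates expected demand to products subject to leg capacities. All data (fares, capacities, product-to-leg incidence) are hypothetical, and the paper's randomized model itself is not reproduced here.

```python
# A minimal sketch of the DLP benchmark: allocate expected demand to
# products subject to leg capacities. All numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

fares = np.array([400.0, 250.0, 300.0])    # revenue per product (hypothetical)
exp_demand = np.array([40.0, 60.0, 30.0])  # expected demand per product
leg_cap = np.array([80.0, 50.0])           # seats available on each flight leg
A = np.array([[1, 1, 0],                   # product-to-leg incidence matrix:
              [0, 1, 1]])                  # product 2 uses both legs

# linprog minimizes, so negate the fares; x_j = seats allocated to product j.
res = linprog(-fares, A_ub=A, b_ub=leg_cap,
              bounds=[(0, d) for d in exp_demand])
print("DLP upper bound on expected revenue:", -res.fun)
```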

Relevance: 100.00%

Abstract:

Most research on single machine scheduling has assumed the linearity of job holding costs, which is arguably not appropriate in some applications. This motivates our study of a model for scheduling $n$ classes of stochastic jobs on a single machine, with the objective of minimizing the total expected holding cost (discounted or undiscounted). We allow general holding cost rates that are separable, nondecreasing and convex in the number of jobs in each class. We formulate the problem as a linear program over a certain greedoid polytope, and establish that it is solved optimally by a dynamic (priority) index rule, which extends the classical Smith's rule (1956) for the linear case. Unlike Smith's indices, defined for each class, our new indices are defined for each extended class, consisting of a class and a number of jobs in that class, and yield an optimal dynamic index rule: work at each time on a job whose current extended class has larger index. We further show that the indices possess a decomposition property, as they are computed separately for each class, and interpret them in economic terms as marginal expected cost rate reductions per unit of expected processing time. We establish the results by deploying a methodology recently introduced by us [J. Niño-Mora (1999). "Restless bandits, partial conservation laws, and indexability." Forthcoming in Advances in Applied Probability Vol. 33 No. 1, 2001], based on the satisfaction by performance measures of partial conservation laws (PCL), which extend the generalized conservation laws of Bertsimas and Niño-Mora (1996): PCL provide a polyhedral framework for establishing the optimality of index policies with special structure in scheduling problems under admissible objectives, which we apply to the model of concern.
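For the linear-cost special case, the classical Smith's rule mentioned above reduces to sequencing jobs in nonincreasing order of the ratio of holding cost rate to expected processing time. A minimal sketch with hypothetical job data:

```python
# Smith's rule (1956) for linear holding costs: order jobs by the index
# c_i / p_i (cost rate over expected processing time). Data are hypothetical.
jobs = [  # (name, holding_cost_rate, expected_processing_time)
    ("A", 4.0, 2.0),
    ("B", 9.0, 3.0),
    ("C", 2.0, 1.0),
]
schedule = sorted(jobs, key=lambda j: j[1] / j[2], reverse=True)
print([name for name, c, p in schedule])  # ['B', 'A', 'C']
```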

Relevance: 100.00%

Abstract:

We address the problem of scheduling a multiclass $M/M/m$ queue with Bernoulli feedback on $m$ parallel servers to minimize time-average linear holding costs. We analyze the performance of a heuristic priority-index rule, which extends Klimov's optimal solution to the single-server case: servers preemptively select customers with larger Klimov indices. We present closed-form suboptimality bounds (approximate optimality) for Klimov's rule, which imply that its suboptimality gap is uniformly bounded above with respect to (i) external arrival rates, as long as they stay within system capacity; and (ii) the number of servers. It follows that its relative suboptimality gap vanishes in a heavy-traffic limit, as external arrival rates approach system capacity (heavy-traffic optimality). We obtain simpler expressions for the special no-feedback case, where the heuristic reduces to the classical $c \mu$ rule. Our analysis is based on comparing the expected cost of Klimov's rule to the value of a strong linear programming (LP) relaxation of the system's region of achievable performance of mean queue lengths. In order to obtain this relaxation, we derive and exploit a new set of work decomposition laws for the parallel-server system. We further report on the results of a computational study on the quality of the $c \mu$ rule for parallel scheduling.
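In the no-feedback case the heuristic reduces to the classical $c\mu$ rule, which can be stated in a few lines; the class parameters below are hypothetical:

```python
# The c-mu index in the no-feedback case: each of the m servers preemptively
# picks a waiting customer from the class with the largest c_k * mu_k.
classes = {  # class -> (holding cost rate c_k, service rate mu_k); hypothetical
    1: (3.0, 0.5),
    2: (1.0, 2.0),
    3: (2.0, 1.0),
}
priority = sorted(classes, key=lambda k: classes[k][0] * classes[k][1],
                  reverse=True)
print("serve classes in order:", priority)  # [2, 3, 1] (2 and 3 tie at 2.0)
```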

Relevance: 100.00%

Abstract:

Purpose: Cardiac 18F-FDG PET is considered the gold standard for assessing myocardial metabolism and infarct size. The myocardial demand for glucose can be influenced by fasting and/or pharmacological preparation. In the rat, it has previously been shown that fasting combined with preconditioning with acipimox, a nicotinic acid derivative and lipid-lowering agent, dramatically increases 18F-FDG uptake in the myocardium. Strategies aimed at reducing infarct scar are evaluated in a variety of mouse models, and PET would be particularly useful for assessing cardiac viability in the mouse. However, prior knowledge of the best preparation protocol is a prerequisite for accurate measurement of glucose uptake in mice. Therefore, we studied the effect of different protocols on 18F-FDG uptake in the mouse heart. Methods: Mice (n = 15) were separated into three treatment groups according to preconditioning and underwent an 18F-FDG PET scan. Group 1: no preconditioning (n = 3); Group 2: overnight fasting (n = 8); Group 3: overnight fasting and acipimox (25 mg/kg SC) (n = 4). MicroPET images were processed with PMOD to determine the mean 18F-FDG standard uptake value (SUV) at 30 min for the whole left ventricle (LV) and for each region of the 17-segment AHA model. For comparisons, we used the Mann-Whitney test and multilevel mixed-effects linear regression (Stata 11.0). Results: In total, 27 microPET scans were performed successfully in 15 animals. Overnight fasting led to a dramatic increase in LV-SUV compared to mice without preconditioning (8.6±0.7 g/mL vs. 3.7±1.1 g/mL, P<0.001). In addition, LV-SUV was slightly but not significantly higher in animals treated with acipimox than in animals with overnight fasting alone (10.2±0.5 g/mL, P = 0.06). Fasting increased segmental SUV by 5.1±0.5 g/mL compared with free-feeding mice (from 3.7±0.8 g/mL to 8.8±0.4 g/mL, P<0.001); segmental SUV also increased significantly after administration of acipimox (from 8.8±0.4 g/mL to 10.1±0.4 g/mL, P<0.001). Conclusion: Overnight fasting leads to myocardial glucose deprivation and increases 18F-FDG myocardial uptake. Additional administration of acipimox enhances myocardial 18F-FDG uptake, at least at the segmental level. Thus, preconditioning with acipimox may provide better image quality, which may help in assessing segmental myocardial metabolism.

Relevance: 100.00%

Abstract:

We show that if performance measures in a stochastic scheduling problem satisfy a set of so-called partial conservation laws (PCL), which extend previously studied generalized conservation laws (GCL), then the problem is solved optimally by a priority-index policy for an appropriate range of linear performance objectives, where the optimal indices are computed by a one-pass adaptive-greedy algorithm based on Klimov's. We further apply this framework to investigate the indexability property of restless bandits introduced by Whittle, obtaining the following results: (1) we identify a class of restless bandits (PCL-indexable) which are indexable; membership in this class is tested through a single run of the adaptive-greedy algorithm, which also computes the Whittle indices when the test is positive; this provides a tractable sufficient condition for indexability; (2) we further identify the class of GCL-indexable bandits, which includes classical bandits, having the property that they are indexable under any linear reward objective. The analysis is based on the so-called achievable region method, as the results follow from new linear programming formulations for the problems investigated.

Relevance: 100.00%

Abstract:

The Network Revenue Management problem can be formulated as a stochastic dynamic program (DP, with "optimal" solution value V*) whose exact solution is computationally intractable. Consequently, a number of heuristics have been proposed in the literature, the most popular of which are the deterministic linear programming (DLP) model and a simulation-based method, the randomized linear programming (RLP) model. Both methods give upper bounds on the optimal solution value (the DLP and PHLP bounds, respectively). These bounds are used to provide control values that can be used in practice to make accept/deny decisions for booking requests. Recently, Adelman [1] and Topaloglu [18] have proposed alternative upper bounds, the affine relaxation (AR) bound and the Lagrangian relaxation (LR) bound respectively, and showed that their bounds are tighter than the DLP bound. Tight bounds are of great interest, as it appears from empirical studies and practical experience that models giving tighter bounds also lead to better controls (better in the sense that they lead to more revenue). In this paper we give tightened versions of three bounds, calling them sAR (strong Affine Relaxation), sLR (strong Lagrangian Relaxation) and sPHLP (strong Perfect Hindsight LP), and show relations between them. Specifically, we show that the sPHLP bound is tighter than the sLR bound, and the sAR bound is tighter than the LR bound. The techniques for deriving the sLR and sPHLP bounds can potentially be applied to other instances of weakly coupled dynamic programming.
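For concreteness, here is a rough sketch of the simulation-based RLP idea: replace the expected demand in the DLP with sampled demand vectors, solve the perfect-hindsight allocation LP for each sample, and average the optimal values. Problem data are hypothetical, and this is only the baseline RLP, not the strengthened bounds the paper derives.

```python
# Baseline RLP sketch: average the optimal values of per-sample
# perfect-hindsight allocation LPs. All problem data are hypothetical.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
fares = np.array([400.0, 250.0, 300.0])
mean_demand = np.array([40.0, 60.0, 30.0])
leg_cap = np.array([80.0, 50.0])
A = np.array([[1, 1, 0], [0, 1, 1]])   # product-to-leg incidence

values = []
for _ in range(200):                   # number of demand samples
    d = rng.poisson(mean_demand)       # one demand realization
    res = linprog(-fares, A_ub=A, b_ub=leg_cap,
                  bounds=[(0, di) for di in d])
    values.append(-res.fun)
print("RLP upper-bound estimate:", np.mean(values))
```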

Relevance: 100.00%

Abstract:

We address the problem of scheduling a multi-station multiclass queueing network (MQNET) with server changeover times to minimize steady-state mean job holding costs. We present new lower bounds on the best achievable cost that emerge as the values of mathematical programming problems (linear, semidefinite, and convex) over relaxed formulations of the system's achievable performance region. The constraints on achievable performance defining these formulations are obtained by formulating the system's equilibrium relations. Our contributions include: (1) a flow conservation interpretation and closed formulae for the constraints previously derived by the potential function method; (2) new work decomposition laws for MQNETs; (3) new constraints (linear, convex, and semidefinite) on the performance region of first and second moments of queue lengths for MQNETs; (4) a fast bound for an MQNET with N customer classes computed in N steps; (5) two heuristic scheduling policies: a priority-index policy, and a policy extracted from the solution of a linear programming relaxation.

Relevance: 100.00%

Abstract:

General Summary. Although the chapters of this thesis address a variety of issues, the principal aim is common: to test economic ideas in an international economic context. The intention has been to supply empirical findings using the largest suitable data sets and the most appropriate empirical techniques. The thesis can roughly be divided into two parts: the first, corresponding to the first two chapters, investigates the link between trade and the environment; the second, the last three chapters, is related to economic geography issues. Environmental problems are omnipresent in the daily press nowadays, and one argument put forward is that globalisation causes severe environmental problems through the reallocation of investments and production to countries with less stringent environmental regulations. A measure of the amplitude of this undesirable effect is provided in the first part. The third and fourth chapters explore the productivity effects of agglomeration. The computed spillover effects between different sectors indicate how cluster formation might enhance productivity. The last chapter is not about how to better understand the world but how to measure it, and it was just a great pleasure to work on it. "The Economist" writes every week about the impressive population and economic growth observed in China and India, and everybody agrees that the world's center of gravity has shifted. But by how much and how fast did it shift? An answer is given in the last part, which proposes a global measure for the location of world production and allows us to visualize our results in Google Earth. A short summary of each of the five chapters is provided below.

The first chapter, entitled "Unraveling the World-Wide Pollution-Haven Effect", investigates the relative strength of the pollution haven effect (PH: comparative advantage in dirty products due to differences in environmental regulation) and the factor endowment effect (FE: comparative advantage in dirty, capital-intensive products due to differences in endowments). We compute the pollution content of imports using the IPPS coefficients (for three pollutants, namely biological oxygen demand, sulphur dioxide and toxic pollution intensity, for all manufacturing sectors) provided by the World Bank, and use a gravity-type framework to isolate the two above-mentioned effects. Our study covers 48 countries, classified into 29 Southern and 19 Northern countries, and uses the lead content of gasoline as a proxy for environmental stringency. For North-South trade we find significant PH and FE effects going in the expected, opposite directions and of similar magnitude. However, when looking at world trade, the effects become very small because of the high North-North trade share, where we have no a priori expectations about the signs of these effects. Popular fears about the trade effects of differences in environmental regulations might therefore be exaggerated.

The second chapter is entitled "Is Trade Bad for the Environment? Decomposing Worldwide SO2 Emissions, 1990-2000". First we construct a novel and large database containing reasonable estimates of SO2 emission intensities per unit of labor that vary across countries, periods and manufacturing sectors. We then use these original data (covering 31 developed and 31 developing countries) to decompose worldwide SO2 emissions into the three well-known dynamic effects (scale, technique and composition). We find that the positive scale effect (+9.5%) and the negative technique effect (-12.5%) are the main driving forces of emission changes. Composition effects between countries and sectors are smaller, both negative and of similar magnitude (-3.5% each). Given that trade matters via the composition effects, this means that trade reduces total emissions. We next construct, in a first experiment, a hypothetical world where no trade happens, i.e. each country produces its imports at home and no longer produces its exports. The difference between the actual world and this no-trade world allows us (neglecting price effects) to compute a static first-order trade effect. The latter increases total world emissions because it allows, on average, dirty countries to specialize in dirty products. However, this effect is smaller in 2000 (3.5%) than in 1990 (10%), in line with the negative dynamic composition effect identified in the previous exercise. We then propose a second experiment, comparing effective emissions with the maximum or minimum possible levels of SO2 emissions. These hypothetical levels are obtained by reallocating labour across sectors within each country (under country-employment and world industry-production constraints). Using linear programming techniques, we show that emissions are reduced by 90% with respect to the worst case, but that they could still be reduced by another 80% if emissions were minimized. The findings from this chapter accord with those of chapter one in the sense that trade-induced composition effects do not seem to be the main source of pollution, at least in the recent past.

Turning to the economic geography part of this thesis, the third chapter, entitled "A Dynamic Model with Sectoral Agglomeration Effects", is a short note that derives the theoretical model estimated in the fourth chapter. The derivation is based directly on the multi-regional framework of Ciccone (2002) but extends it to include sectoral disaggregation and a temporal dimension. This allows us to formally write present productivity as a function of past productivity and other contemporaneous and past control variables.

The fourth chapter, entitled "Sectoral Agglomeration Effects in a Panel of European Regions", takes the final equation derived in chapter three to the data. We investigate the empirical link between density and labour productivity based on regional data (245 NUTS-2 regions over the period 1980-2003). Dynamic panel techniques allow us to control for the possible endogeneity of density and for region-specific effects. We find a positive long-run elasticity of labour productivity with respect to density of about 13%. At the sectoral level, positive cross-sector and negative own-sector externalities appear to be present in manufacturing, while financial services display strong positive own-sector effects.

The fifth and last chapter, entitled "Is the World's Economic Center of Gravity Already in Asia?", computes the world economic, demographic and geographic centers of gravity for 1975-2004 and compares them. Based on data for the largest cities in the world and using the physical concept of center of mass, we find that the world's economic center of gravity is still located in Europe, even though there is a clear shift towards Asia.

To sum up, this thesis makes three main contributions. First, it provides new estimates of the orders of magnitude of the role of trade in the globalisation-and-environment debate. Second, it computes reliable and disaggregated elasticities for the effect of density on labour productivity in European regions. Third, it allows us, in a geometrically rigorous way, to track the path of the world's economic center of gravity.
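The center-of-gravity computation in the fifth chapter rests on the physical concept of center of mass; one natural implementation is sketched below. The city coordinates and GDP weights are purely illustrative, and the chapter's actual data and projection choices may differ.

```python
# Sketch of a spherical center-of-mass computation: map each city to a point
# on the unit sphere, weight by economic output, average in 3-D, and read off
# where the resulting vector points. All values below are illustrative.
import numpy as np

cities = [  # (latitude, longitude, GDP weight) -- hypothetical values
    (40.7, -74.0, 1.5),   # New York
    (51.5,  -0.1, 1.2),   # London
    (35.7, 139.7, 1.3),   # Tokyo
    (31.2, 121.5, 1.1),   # Shanghai
]
lat = np.radians([c[0] for c in cities])
lon = np.radians([c[1] for c in cities])
w = np.array([c[2] for c in cities])
xyz = np.column_stack([np.cos(lat) * np.cos(lon),
                       np.cos(lat) * np.sin(lon),
                       np.sin(lat)])
cog = (w[:, None] * xyz).sum(axis=0) / w.sum()   # 3-D center of mass
print("lat: %.1f  lon: %.1f"
      % (np.degrees(np.arcsin(cog[2] / np.linalg.norm(cog))),
         np.degrees(np.arctan2(cog[1], cog[0]))))
```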

Relevance: 100.00%

Abstract:

AIMS: In patients with alcohol dependence, health-related quality of life (QOL) is reduced compared with that of a normal healthy population. The objective of the current analysis was to describe the evolution of health-related QOL in adults with alcohol dependence during a 24-month period after initial assessment for alcohol-related treatment in a routine practice setting, and its relation to drinking pattern, evaluated across clusters based on the predominant pattern of alcohol use and set against the influence of baseline variables. METHODS: The Medical Outcomes Study 36-Item Short-Form Survey (MOS-SF-36) was used to measure QOL at baseline and quarterly for 2 years among participants in CONTROL, a prospective observational study of patients initiating treatment for alcohol dependence. The sample consisted of 160 adults with alcohol dependence (65.6% males) with a mean (SD) age of 45.6 (12.0) years. Alcohol use data were collected using TimeLine Follow-Back. Based on the participants' reported alcohol use, three clusters were identified: 52 (32.5%) mostly abstainers, 64 (40.0%) mostly moderate drinkers and 44 (27.5%) mostly heavy drinkers. Mixed-effect linear regression analysis was used to identify factors that were potentially associated with the mental and physical summary MOS-SF-36 scores at each time point. RESULTS: The mean (SD) MOS-SF-36 mental component summary score (range 0-100, norm 50) was 35.7 (13.6) at baseline [mostly abstainers: 40.4 (14.6); mostly moderate drinkers: 35.6 (12.4); mostly heavy drinkers: 30.1 (12.1)]. The score improved to 43.1 (13.4) at 3 months [mostly abstainers: 47.4 (12.3); mostly moderate drinkers: 44.2 (12.7); mostly heavy drinkers: 35.1 (12.9)], to 47.3 (11.4) at 12 months [mostly abstainers: 51.7 (9.7); mostly moderate drinkers: 44.8 (11.9); mostly heavy drinkers: 44.1 (11.3)], and to 46.6 (11.1) at 24 months [mostly abstainers: 49.2 (11.6); mostly moderate drinkers: 45.7 (11.9); mostly heavy drinkers: 43.7 (8.8)]. Mixed-effect linear regression multivariate analyses indicated a significant association between a lower 2-year follow-up MOS-SF-36 mental score and being a mostly heavy drinker (-6.97, P < 0.001) or mostly moderate drinker (-3.34 points, P = 0.018) [compared to mostly abstainers], being female (-3.73, P = 0.004), and having a baseline Beck Inventory scale score ≥8 (-6.54, P < 0.001). The mean (SD) MOS-SF-36 physical component summary score was 48.8 (10.6) at baseline, remained stable over the follow-up and did not differ across the three clusters. Mixed-effect linear regression univariate analyses found that the average 2-year follow-up MOS-SF-36 physical score was increased (compared with mostly abstainers) in mostly heavy drinkers (+4.44, P = 0.007); no other variables tested influenced the MOS-SF-36 physical score. CONCLUSION: Among individuals with alcohol dependence, a rapid improvement was seen in the mental dimension of QOL following treatment initiation, which was maintained over 24 months. Improvement was associated with the pattern of alcohol use, becoming close to the general population norm in patients classified as mostly abstainers, improving substantially in mostly moderate drinkers and improving only slightly in mostly heavy drinkers. The physical dimension of QOL was generally in the normal range but was not associated with drinking patterns.

Relevance: 100.00%

Abstract:

A linear programming model is used to optimally assign highway segments to highway maintenance garages using existing facilities. The model is also used to determine the possible operational savings or losses associated with four alternatives for expanding, closing and/or relocating some of the garages in a study area. The study area contains 16 highway maintenance garages and 139 highway segments. The study recommends alternative No. 3 (close the Tama and Blairstown garages and locate a new garage at the junction of U.S. 30 and Iowa 21) at an annual operational savings of approximately $16,250. These operational savings, however, are only guidelines for decision-makers and are subject to the assumptions of the model used and the limitations of the study.
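A toy version of such a segment-to-garage assignment model can be written as a transportation-type LP; the costs and capacities below are hypothetical, not the study's data.

```python
# Assign highway segments to garages to minimize total maintenance cost,
# subject to garage workload capacity. All numbers are hypothetical.
import numpy as np
from scipy.optimize import linprog

cost = np.array([[4.0, 6.0],     # cost[i][j]: segment i served by garage j
                 [5.0, 3.0],
                 [7.0, 4.0]])
n_seg, n_gar = cost.shape
cap = np.array([2.0, 2.0])       # workload capacity per garage

# Variables x[i, j], flattened row-wise: each segment fully assigned (eq),
# each garage kept within capacity (ub).
A_eq = np.kron(np.eye(n_seg), np.ones(n_gar))   # sum_j x[i, j] = 1
A_ub = np.kron(np.ones(n_seg), np.eye(n_gar))   # sum_i x[i, j] <= cap[j]
res = linprog(cost.ravel(), A_ub=A_ub, b_ub=cap,
              A_eq=A_eq, b_eq=np.ones(n_seg), bounds=(0, 1))
print("min total cost:", res.fun)
print("assignment:\n", res.x.reshape(n_seg, n_gar).round(2))
```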

Relevance: 100.00%

Abstract:

OBJECTIVE: Previous studies suggest that arginine vasopressin may play a role in metabolic syndrome (MetS) and diabetes by altering liver glycogenolysis, insulin and glucagon secretion, and pituitary ACTH release. We tested whether plasma copeptin, the stable C-terminal fragment of the arginine vasopressin prohormone, was associated with insulin resistance and MetS in a Swiss population-based study. DESIGN AND METHODS: We analyzed data from the population-based Swiss Kidney Project on Genes in Hypertension. Copeptin was assessed by an immunoluminometric assay. Insulin resistance was derived from the HOMA model and calculated as (FPI x FPG)/22.5, where FPI is the fasting plasma insulin concentration (mU/L) and FPG the fasting plasma glucose (mmol/L). Subjects were classified as having MetS according to the National Cholesterol Education Program Adult Treatment Panel III criteria. Mixed multivariate linear regression models were built to explore the association of insulin resistance with copeptin. In addition, multivariate logistic regression models were built to explore the association between MetS and copeptin. In both analyses, adjustment was made for age, gender, center, tobacco and alcohol consumption, socioeconomic status, physical activity, intake of fruits and vegetables, and 24 h urine flow rate. Copeptin was log-transformed for the analyses. RESULTS: Among the 1,089 subjects included in this analysis, 47% were male. Mean (SD) age and body mass index were 47.4 (17.6) years and 25.0 (4.5) kg/m2, respectively. The prevalence of MetS was 10.5%. HOMA-IR was higher in men (median 1.3, IQR 0.7-2.1) than in women (median 1.0, IQR 0.5-1.6, P < 0.0001). Plasma copeptin was higher in men (median 5.2, IQR 3.7-7.8 pmol/L) than in women (median 3.0, IQR 2.2-4.3 pmol/L), P < 0.0001. HOMA-IR was positively associated with log-copeptin after full adjustment (β (95% CI) 0.19 (0.09-0.29), P < 0.001). MetS was not associated with copeptin after full adjustment (P = 0.92). CONCLUSIONS: Insulin resistance, but not MetS, was associated with higher copeptin levels. Further studies should examine whether pharmacologically modifying the arginine vasopressin system might improve insulin resistance, thereby providing insight into the causal nature of this association.
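The HOMA formula quoted in the abstract is easy to state as code; the example values below are illustrative, not taken from the study.

```python
# The HOMA-IR formula from the abstract, as a one-line function.
def homa_ir(fpi_mu_per_l: float, fpg_mmol_per_l: float) -> float:
    """(fasting insulin [mU/L] x fasting glucose [mmol/L]) / 22.5."""
    return fpi_mu_per_l * fpg_mmol_per_l / 22.5

print(homa_ir(6.0, 5.0))  # 1.33..., near the male median of 1.3 reported above
```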

Relevance: 100.00%

Abstract:

Global warming mitigation has recently become a priority worldwide. A large body of literature dealing with energy-related problems has focused on reducing greenhouse gas emissions at an engineering scale. In contrast, the minimization of climate change at a wider macroeconomic level has so far received much less attention. We investigate here how to mitigate global warming by making changes to an economy. To this end, we use a systematic tool that combines three methods: linear programming, environmentally extended input-output models, and life cycle assessment principles. The problem of identifying key economic sectors that contribute significantly to global warming is posed in mathematical terms as a bi-criteria linear program that seeks to simultaneously optimize the total economic output and the total life cycle CO2 emissions. We have applied this approach to the European Union economy, finding that significant reductions in global warming potential can be attained by regulating specific economic sectors. Our tool is intended to aid policymakers in the design of more effective public policies for achieving the environmental and economic targets sought.
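One common way to handle such a bi-criteria LP is weighted-sum scalarization over a Leontief input-output model; the sketch below assumes that approach (the paper may use a different multi-objective technique), and the 3-sector technology matrix, emission intensities and demand caps are all hypothetical.

```python
# Weighted-sum scalarization of a bi-criteria LP on a Leontief economy:
# trade off total gross output 1'x against total CO2 emissions e'x, where
# x = (I - A)^(-1) y and y is final demand. All data are hypothetical.
import numpy as np
from scipy.optimize import linprog

A = np.array([[0.1, 0.2, 0.0],   # interindustry requirements (Leontief)
              [0.0, 0.1, 0.3],
              [0.2, 0.0, 0.1]])
e = np.array([0.8, 0.3, 0.1])    # CO2 emitted per unit of sectoral output
y_max = np.array([50.0, 80.0, 60.0])   # cap on final demand per sector
lam = 0.7                        # weight on output vs. (1 - lam) on emissions

L = np.linalg.inv(np.eye(3) - A)                 # Leontief inverse
c = -(lam * L.sum(axis=0) - (1 - lam) * (e @ L)) # minimize the negative
res = linprog(c, bounds=[(0, ym) for ym in y_max])
x = L @ res.x                                    # implied gross output
print("final demand:", res.x.round(1),
      " output:", round(x.sum(), 1), " emissions:", round(e @ x, 1))
```

Sweeping `lam` from 0 to 1 traces out the trade-off frontier between the two objectives.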

Relevance: 100.00%

Abstract:

The perception of the young economics student is that working through exercises is all that needs to be learned. This perception can be changed with Linear Programming, since we unite theory and practice and, at the same time, improve the ability to model economic situations; moreover, we emphasize the use of mathematics as an effective tool for improving one's own activities.

Relevance: 100.00%

Abstract:

Techniques for evaluating the risks arising from the uncertainties inherent in agricultural activity should accompany planning studies. Risk analysis should be carried out by risk simulation, using techniques such as the Monte Carlo method. This study was carried out to develop a computer program, called P-RISCO, for running risk simulations on linear programming models, to apply it to a case study, and to test the results against the @RISK program. In the risk analysis it was observed that the mean of the output variable total net present value, U, was considerably lower than the maximum U value obtained from the linear programming model. It was also verified that the enterprise faces a considerable risk of water shortage in the month of April, which does not occur for the cropping pattern obtained by minimizing the irrigation requirement in April in all four years. The scenario analysis indicated that the sale price of the passion fruit crop exerts a strong influence on the financial performance of the enterprise. In the comparative analysis, the equivalence of the P-RISCO and @RISK programs in executing the risk simulation for the considered scenario was verified.
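The risk-simulation idea behind P-RISCO can be sketched in a few lines: draw the uncertain sale price, re-solve the cropping-pattern LP for each draw, and inspect the distribution of the objective. The crop data below are hypothetical, and the LP is far simpler than the planning models the study targets.

```python
# Monte Carlo risk simulation over a linear programming model: sample the
# uncertain price, re-solve the LP, collect the objective. Data hypothetical.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
water_req = np.array([3.0, 5.0])     # water per hectare for crops 1, 2
water_cap, land_cap = 120.0, 30.0

npvs = []
for _ in range(1000):
    price = rng.triangular(1.0, 1.5, 2.5)     # uncertain sale price
    margins = np.array([price * 10.0, 12.0])  # net margin per hectare
    res = linprog(-margins,
                  A_ub=[water_req, [1.0, 1.0]],
                  b_ub=[water_cap, land_cap])
    npvs.append(-res.fun)
print("mean NPV: %.1f  5%% quantile: %.1f"
      % (np.mean(npvs), np.quantile(npvs, 0.05)))
```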

Relevance: 100.00%

Abstract:

An option is a financial contract that gives its holder the right (but not the obligation) to sell or buy something (for example, a share) to or from the seller of the option at a certain price at a specified future date. Whoever sells the option commits to going through with this future transaction should the option holder later decide to exercise the option. The seller of the option thus takes on the risk that the future transaction the option holder can force him to carry out turns out to be unfavourable for him. The question of how the seller can protect himself against this risk leads to interesting optimization problems, where the goal is to find an optimal hedging strategy under certain given conditions. Such optimization problems have been studied extensively in mathematical finance. The thesis "The knapsack problem approach in solving partial hedging problems of options" introduces a further viewpoint into this discussion: in a relatively simple (finite and complete) market model, certain partial hedging problems can be described as so-called knapsack problems. The latter are well known in a branch of mathematics called operations research. The thesis shows how hedging problems previously solved by other means can alternatively be solved with methods developed for knapsack problems. The procedure is also applied to entirely new hedging problems connected with so-called American options.
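The knapsack primitive that the thesis maps partial hedging problems onto is the standard 0/1 dynamic program; a minimal sketch with hypothetical numbers:

```python
# Standard 0/1 knapsack dynamic program: choose items of given weights to
# maximize total value within a capacity. Numbers are hypothetical.
def knapsack(values, weights, capacity):
    """Return the maximum total value achievable within the weight capacity."""
    best = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):   # descending: each item used once
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack(values=[60, 100, 120], weights=[10, 20, 30], capacity=50))  # 220
```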