17 results for Multiperiod mixed-integer convex model
Abstract:
This paper investigates a cross-layer design approach for minimizing energy consumption and maximizing network lifetime (NL) of a multiple-source and single-sink (MSSS) WSN with energy constraints. With the adoption of time division multiple access (TDMA) in the medium access control (MAC) layer, the optimization problem for the MSSS WSN can be formulated as a mixed-integer convex optimization problem, and it becomes a convex problem by relaxing the integer constraint on time slots. The impacts of data rate, link access and routing are jointly taken into account in the optimization problem formulation. Both linear and planar network topologies are considered for NL maximization (NLM). With linear MSSS and planar single-source and single-sink (SSSS) topologies, we successfully use the Karush-Kuhn-Tucker (KKT) optimality conditions to derive analytical expressions for the optimal NL when all nodes are exhausted simultaneously. The problem for the planar MSSS topology is more complicated, and a decomposition and combination (D&C) approach is proposed to compute suboptimal solutions. An analytical expression for the suboptimal NL is derived for a small-scale planar network. To deal with larger-scale planar networks, an iterative algorithm is proposed for the D&C approach. Numerical results show that the upper bounds on the network lifetime obtained by our proposed optimization models are tight. Important insights into the NL and the benefits of cross-layer design for WSN NLM are obtained.
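To illustrate the kind of convex program involved, here is a minimal flow-based lifetime-maximization LP in cvxpy. This is a generic sketch in the spirit of such models, not the paper's exact formulation: it uses the classic change of variables F = fT (total data volume) to remove the lifetime-flow product, omits the TDMA slot variables, and the topology, rates and energy figures are invented.

```python
# A minimal sketch of a flow-based network-lifetime LP (illustrative data;
# the paper's TDMA slot variables and exact model are not reproduced here).
import cvxpy as cp
import numpy as np

n = 4                                    # nodes 0..2 are sources, node 3 is the sink
E = np.array([1.0, 1.0, 1.0, np.inf])    # initial energy budgets [J] (assumed)
s = np.array([0.1, 0.1, 0.1])            # source data rates [Mb/s] at nodes 0..2
e_tx, e_rx = 0.5, 0.1                    # energy per Mb transmitted / received

F = cp.Variable((n, n), nonneg=True)     # total data volume moved over the lifetime
T = cp.Variable(nonneg=True)             # network lifetime

cons = [cp.diag(F) == 0]                 # no self-loops
for i in range(n - 1):                   # flow conservation at every source node
    cons.append(cp.sum(F[i, :]) - cp.sum(F[:, i]) == s[i] * T)
for i in range(n - 1):                   # energy budget at every source node
    cons.append(e_tx * cp.sum(F[i, :]) + e_rx * cp.sum(F[:, i]) <= E[i])

prob = cp.Problem(cp.Maximize(T), cons)
prob.solve()
print("lifetime upper bound:", T.value)
```

The change of variables is what makes the relaxed problem an LP; reintroducing integer TDMA slot counts is what turns it back into a mixed-integer program.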
Abstract:
Integer-valued data envelopment analysis (DEA) with alternative returns-to-scale technologies was introduced and developed recently by Kuosmanen and Kazemi Matin. The proportionality assumption of their "natural augmentability" axiom in constant and non-decreasing returns-to-scale technologies makes it possible to achieve feasible decision-making units (DMUs) of arbitrarily large size. In many real-world applications it is not possible to achieve such production plans, since some of the input and output variables are bounded above. In this paper, we extend the axiomatic foundation of integer-valued DEA models to include bounded output variables. Some model variants are obtained by introducing a new axiom of "boundedness" over the selected output variables. A mixed-integer linear programming (MILP) formulation is also introduced for computing efficiency scores in the associated production set.
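For concreteness, here is a hedged PuLP sketch of an input-oriented integer DEA envelopment model under constant returns to scale, in the spirit of Kuosmanen and Kazemi Matin: integer input targets dominate a convex combination of observed DMUs. The toy data are assumptions, and the paper's bounded-output axiom would add upper-bound constraints on the output targets, which are not reproduced here.

```python
# A sketch of input-oriented integer DEA (CRS); data and the omission of the
# bounded-output extension are illustrative assumptions, not the paper's model.
import pulp

X = [[2, 3], [4, 1], [3, 2]]   # integer inputs, one row per DMU
Y = [[1], [1], [1]]            # integer outputs
o = 0                          # DMU under evaluation
n, m, s = len(X), len(X[0]), len(Y[0])

prob = pulp.LpProblem("integer_DEA", pulp.LpMinimize)
theta = pulp.LpVariable("theta", lowBound=0)
lam = [pulp.LpVariable(f"lam_{j}", lowBound=0) for j in range(n)]
xt = [pulp.LpVariable(f"xt_{i}", lowBound=0, cat="Integer") for i in range(m)]

prob += theta                                            # radial efficiency score
for i in range(m):
    # integer input target must dominate the composite, and sit within theta*x_o
    prob += pulp.lpSum(lam[j] * X[j][i] for j in range(n)) <= xt[i]
    prob += xt[i] <= theta * X[o][i]
for r in range(s):
    prob += pulp.lpSum(lam[j] * Y[j][r] for j in range(n)) >= Y[o][r]

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("efficiency score:", pulp.value(theta))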
Abstract:
Purpose – A binary integer programming model for the simple assembly line balancing problem (SALBP), well known as SALBP-1, was formulated more than 30 years ago. Since then, a number of researchers have extended the model to variants of the assembly line balancing problem. The model is still prevalent nowadays, mainly because of the lower and upper bounds it places on task assignment, which avoid a significant increase in the number of decision variables. The purpose of this paper is to use an example to show that the model may lead to a confusing solution. Design/methodology/approach – The paper provides a remedial constraint set for the model to rectify the disordered sequence problem. Findings – The paper presents proof that the assembly line balancing model formulated by Patterson and Albracht may lead to a confusing solution. Originality/value – No previous work has found that this commonly used model can be incorrect.
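As background, a station-indexed binary IP for SALBP-1 can be sketched as below. This is a generic textbook-style formulation, not the Patterson-Albracht model verbatim: the earliest/latest-station bounds the abstract highlights and the paper's remedial constraint set are omitted, and the task times, cycle time and precedence pairs are toy data.

```python
# A minimal SALBP-1 binary IP sketch (toy data; bounds on task assignment
# and the paper's remedial constraints are deliberately left out).
import pulp

t = {1: 3, 2: 4, 3: 2, 4: 5}            # task times (assumed)
prec = [(1, 3), (2, 3), (3, 4)]         # (h, i): h must be assigned no later than i
c, K = 6, 4                             # cycle time, upper bound on stations

prob = pulp.LpProblem("SALBP1", pulp.LpMinimize)
x = {(i, k): pulp.LpVariable(f"x_{i}_{k}", cat="Binary")
     for i in t for k in range(1, K + 1)}
y = {k: pulp.LpVariable(f"y_{k}", cat="Binary") for k in range(1, K + 1)}

prob += pulp.lpSum(y[k] for k in y)                        # minimise open stations
for i in t:                                                # each task on one station
    prob += pulp.lpSum(x[i, k] for k in range(1, K + 1)) == 1
for k in range(1, K + 1):                                  # cycle-time capacity
    prob += pulp.lpSum(t[i] * x[i, k] for i in t) <= c * y[k]
for h, i in prec:                                          # precedence on station indices
    prob += pulp.lpSum(k * x[h, k] for k in range(1, K + 1)) <= \
            pulp.lpSum(k * x[i, k] for k in range(1, K + 1))

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("stations used:", int(pulp.value(prob.objective)))
```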
Abstract:
This paper re-assesses three independently developed approaches aimed at solving the problem of zero weights or non-zero slacks in Data Envelopment Analysis (DEA): weights-restricted, non-radial and extended-facet DEA models. Weights-restricted DEA models are dual to envelopment DEA models with restrictions on the dual variables (DEA weights) aimed at avoiding zero values for those weights; non-radial DEA models are envelopment models which avoid non-zero slacks in the input-output constraints. Finally, extended-facet DEA models recognize that only projections on facets of full dimension correspond to well-defined rates of substitution/transformation between all inputs/outputs, which in turn correspond to non-zero weights in the multiplier version of the DEA model. We demonstrate how these methods are equivalent, not only in their aim but also in the solutions they yield. In addition, we show that the aforementioned methods modify the production frontier by extending existing facets or creating unobserved facets. Further, we propose a new approach that uses weight restrictions to extend existing facets. This approach has some advantages in computational terms, because extended-facet models normally make use of mixed-integer programming models, which are computationally demanding.
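For readers unfamiliar with the multiplier form, a generic weights-restricted model of the kind discussed can be written as follows. This is a textbook sketch with assurance-region ratio bounds, not the paper's specific proposal; the bounds \(\alpha_{ik}, \beta_{ik}\) are illustrative.

```latex
\begin{align*}
\max_{u,\,v}\quad & u^{\top} y_o \\
\text{s.t.}\quad  & v^{\top} x_o = 1, \\
                  & u^{\top} y_j - v^{\top} x_j \le 0, \qquad j = 1,\dots,n, \\
                  & \alpha_{ik}\, v_k \le v_i \le \beta_{ik}\, v_k \qquad \text{(linearized assurance-region bounds)}, \\
                  & u \ge 0, \; v \ge 0.
\end{align*}
```

Choosing strictly positive \(\alpha_{ik}\) keeps every input weight away from zero, which is exactly the zero-weight problem these restrictions target.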
Abstract:
One of the major challenges in measuring efficiency in terms of resources and outcomes is the assessment of the evolution of units over time. Although Data Envelopment Analysis (DEA) has been applied to time-series datasets, DEA models, by construction, form the reference set for inefficient units (lambda values) based on their distance from the efficient frontier, that is, in a spatial manner. However, when dealing with temporal datasets, the proximity in time between units should also be taken into account, since it reflects the structural resemblance among time periods of a unit that evolves. In this paper, we propose a two-stage spatiotemporal DEA approach, which captures both the spatial and the temporal dimension through a multi-objective programming model. In the first stage, DEA is solved iteratively, extracting for each unit only previous DMUs as peers in its reference set. In the second stage, the lambda values derived from the first stage are fed to a multi-objective mixed-integer linear programming model, which filters peers in the reference set based on weights assigned to the spatial and temporal dimensions. The approach is demonstrated on a real-world example drawn from software development.
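The first stage can be pictured as a loop that re-solves DEA for each period, admitting only earlier periods as candidate peers. The sketch below is a hedged illustration of that idea (CRS input orientation and the single-input, single-output data are assumptions, not details from the paper):

```python
# First-stage sketch: score each period of a unit against its own past.
import pulp

X = [[5.0], [4.5], [4.0], [4.2]]   # one input per period (t = 0..3), toy data
Y = [[1.0], [1.1], [1.2], [1.2]]   # one output per period

def dea_previous_peers(t):
    """Input-oriented CRS score of period t; candidate peers are periods 0..t
    (earlier periods plus the evaluated period itself, so t = 0 is feasible)."""
    peers = range(t + 1)
    prob = pulp.LpProblem(f"dea_t{t}", pulp.LpMinimize)
    theta = pulp.LpVariable("theta", lowBound=0)
    lam = {j: pulp.LpVariable(f"lam_{j}", lowBound=0) for j in peers}
    prob += theta
    for i in range(len(X[0])):
        prob += pulp.lpSum(lam[j] * X[j][i] for j in peers) <= theta * X[t][i]
    for r in range(len(Y[0])):
        prob += pulp.lpSum(lam[j] * Y[j][r] for j in peers) >= Y[t][r]
    prob.solve(pulp.PULP_CBC_CMD(msg=False))
    return pulp.value(theta), {j: pulp.value(lam[j]) for j in peers}

for t in range(len(X)):
    score, lambdas = dea_previous_peers(t)
    print(t, round(score, 3), lambdas)   # the lambdas feed the second-stage MOMILP
```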
Abstract:
Firms worldwide are taking major initiatives to reduce the carbon footprint of their supply chains in response to growing governmental and consumer pressures. In real life, these supply chains face stochastic and non-stationary demand, but most studies on the inventory lot-sizing problem with emission concerns consider deterministic demand. In this paper, we study the inventory lot-sizing problem under non-stationary stochastic demand with emission and cycle-service-level constraints, considering a carbon cap-and-trade regulatory mechanism. Using a mixed-integer linear programming model, this paper investigates the effects of emission parameters and of product- and system-related features on supply chain performance through extensive computational experiments designed to cover general business settings rather than a specific scenario. Results show that the cycle service level and the demand coefficient of variation have significant impacts on total cost and emission irrespective of the level of demand variability, while the impact of the product's demand pattern is significant only at lower levels of demand variability. Finally, results also show that an increasing carbon price reduces total cost, total emission and total inventory, and that the scope for emission reduction by increasing the carbon price is greater at higher levels of cycle service level and demand coefficient of variation. The analysis of the results helps supply chain managers make the right decisions in different demand and service-level situations.
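To show the shape of such a model, here is a deliberately stripped-down single-item lot-sizing MILP with a cap-and-trade term. It is an illustration only: demand is deterministic toy data, the paper's stochastic demand and cycle-service-level machinery are omitted, and allowance selling is ignored (buy-only).

```python
# Illustrative lot-sizing MILP with a carbon cap-and-trade term (toy data).
import pulp

d = [20, 30, 25, 40]                     # demand per period (assumed)
K, h, p = 100, 1.0, 2.0                  # setup cost, holding cost, carbon price
e_setup, e_hold, cap = 10.0, 0.1, 30.0   # emission factors and horizon cap (assumed)
M = sum(d)                               # big-M for the setup linking constraint
T = len(d)

prob = pulp.LpProblem("lot_sizing_carbon", pulp.LpMinimize)
q = [pulp.LpVariable(f"q_{t}", lowBound=0) for t in range(T)]    # order quantity
I = [pulp.LpVariable(f"I_{t}", lowBound=0) for t in range(T)]    # end inventory
y = [pulp.LpVariable(f"y_{t}", cat="Binary") for t in range(T)]  # setup indicator
b = pulp.LpVariable("bought_allowances", lowBound=0)             # allowances bought

for t in range(T):
    prev = I[t - 1] if t > 0 else 0
    prob += prev + q[t] - d[t] == I[t]    # inventory balance
    prob += q[t] <= M * y[t]              # order only in setup periods

emissions = pulp.lpSum(e_setup * y[t] + e_hold * I[t] for t in range(T))
prob += emissions <= cap + b              # cap-and-trade: exceedance must be bought
prob += pulp.lpSum(K * y[t] + h * I[t] for t in range(T)) + p * b   # objective

prob.solve(pulp.PULP_CBC_CMD(msg=False))
print("cost:", pulp.value(prob.objective))
```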
Abstract:
As microblog services such as Twitter become a fast and convenient communication approach, the identification of trendy topics in microblog services has great academic and business value. However, detecting trendy topics is very challenging due to the huge number of users and short-text posts in microblog diffusion networks. In this paper we introduce a trendy-topic detection system that operates under computation and communication resource constraints. In stark contrast to retrieving and processing the whole microblog content, we develop the idea of selecting a small set of microblog users and processing only their posts, achieving an overall acceptable trendy-topic coverage without exceeding the resource budget for detection. We formulate the selection of this subset of users as mixed-integer optimization problems, and develop heuristic algorithms to compute approximate solutions. The proposed system is evaluated with real-time test data retrieved from Sina Weibo, the dominant microblog service provider in China. We show that by monitoring 500 out of 1.6 million microblog users and tracking their microposts (about 15,000 daily) with our system, nearly 65% of trendy topics can be detected, on average 5 hours before they appear in Sina Weibo's official trends.
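The abstract does not specify which heuristic is used, but one plausible rule for this kind of budgeted user selection is greedy marginal-coverage-per-cost, sketched below under stated assumptions (the coverage sets, costs and budget are toy data, and `covers`/`cost` are hypothetical names, not the paper's notation):

```python
# A hedged sketch of a greedy budgeted maximum-coverage heuristic for
# selecting which users to monitor; purely illustrative of the idea.
def greedy_select(users, covers, cost, budget):
    """users: iterable of user ids; covers[u]: set of topics u's posts surface;
    cost[u]: processing cost of monitoring u; budget: total cost allowed."""
    chosen, covered, spent = [], set(), 0.0
    while True:
        best, best_gain = None, 0.0
        for u in users:
            if u in chosen or spent + cost[u] > budget:
                continue
            gain = len(covers[u] - covered) / cost[u]   # marginal coverage per cost
            if gain > best_gain:
                best, best_gain = u, gain
        if best is None:
            return chosen, covered
        chosen.append(best)
        covered |= covers[best]
        spent += cost[best]

covers = {1: {"a", "b"}, 2: {"b", "c", "d"}, 3: {"e"}}
cost = {1: 1.0, 2: 2.0, 3: 1.0}
print(greedy_select(list(covers), covers, cost, budget=3.0))
```

Greedy rules of this type come with classical approximation guarantees for coverage objectives, which is why they are a natural fit when the exact mixed-integer problem is too large to solve.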
Abstract:
We propose a cost-effective hot-event detection system over the Sina Weibo platform, currently the dominant microblogging service provider in China. The problem of finding a proper subset of microbloggers under resource constraints is formulated as a mixed-integer problem, for which heuristic algorithms are developed to compute approximate solutions. Preliminary results show that by tracking about 500 out of 1.6 million candidate microbloggers and processing 15,000 microposts daily, 62% of hot events can be detected, on average five hours earlier than they are published by Weibo.
Abstract:
The Dirichlet process mixture model (DPMM) is a ubiquitous, flexible Bayesian nonparametric statistical model. However, full probabilistic inference in this model is analytically intractable, so computationally intensive techniques such as Gibbs sampling are required. As a result, DPMM-based methods, which have considerable potential, are restricted to applications in which computational resources and time for inference are plentiful. For example, they would not be practical for digital signal processing on embedded hardware, where computational resources are at a serious premium. Here, we develop a simplified yet statistically rigorous approximate maximum a-posteriori (MAP) inference algorithm for DPMMs. This algorithm is as simple as DP-means clustering and solves the MAP problem as well as Gibbs sampling does, while requiring only a fraction of the computational effort. (Freely available code implementing the MAP-DP algorithm for Gaussian mixtures can be found at http://www.maxlittle.net/.) Unlike related small-variance asymptotics (SVA), our method is non-degenerate and so inherits the "rich get richer" property of the Dirichlet process. It also retains a non-degenerate closed-form likelihood, which enables out-of-sample calculations and the use of standard tools such as cross-validation. We illustrate the benefits of our algorithm on a range of examples and contrast it with variational, SVA and sampling approaches, both from a computational-complexity perspective and in terms of clustering performance. We demonstrate the wide applicability of our approach by presenting an approximate MAP inference method for the infinite hidden Markov model, whose performance contrasts favorably with a recently proposed hybrid SVA approach. Similarly, we show how our algorithm can be applied to a semiparametric mixed-effects regression model where the random-effects distribution is modelled using an infinite mixture model, as used in longitudinal progression modelling in population health science. Finally, we propose directions for future research on approximate MAP inference in Bayesian nonparametrics.
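The flavour of the algorithm can be conveyed by a heavily simplified sketch: iterated conditional-modes assignments under a CRP prior, here with a fixed spherical variance so the cluster costs have a closed form. This is our own simplification, it approximates the predictive by plugging in the empirical cluster mean and all hyperparameters are assumptions; the authors' released code at http://www.maxlittle.net/ is the authoritative implementation.

```python
# A heavily simplified MAP-DP-style sketch for spherical Gaussians (toy version).
import numpy as np

def map_dp(X, alpha=1.0, sigma2=0.1, sigma02=1.0, iters=20):
    n, d = X.shape
    mu0 = np.zeros(d)                      # prior mean over cluster centres (assumed)
    z = np.zeros(n, dtype=int)             # start with everything in one cluster
    for _ in range(iters):
        for i in range(n):
            others = np.arange(n) != i
            ks = np.unique(z[others])
            costs = []
            for k in ks:                   # cost of joining existing cluster k:
                members = X[(z == k) & others]
                mean_k = members.mean(axis=0)
                # -log[ N_k * N(x_i | mean_k, sigma2 I) ], constants dropped
                costs.append(np.sum((X[i] - mean_k) ** 2) / (2 * sigma2)
                             - np.log(len(members)))
            s = sigma2 + sigma02           # cost of opening a new cluster:
            costs.append(np.sum((X[i] - mu0) ** 2) / (2 * s)
                         + d / 2 * np.log(s / sigma2) - np.log(alpha))
            choice = int(np.argmin(costs))
            z[i] = ks[choice] if choice < len(ks) else z.max() + 1
        _, z = np.unique(z, return_inverse=True)   # relabel clusters compactly
    return z

X = np.vstack([np.random.randn(30, 2) * 0.3, np.random.randn(30, 2) * 0.3 + 3])
print(np.bincount(map_dp(X)))              # cluster sizes; expect roughly [30, 30]
```

Note the `- log(N_k)` term: it is the CRP "rich get richer" effect that degenerate small-variance limits such as DP-means lose.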
Abstract:
Purpose – The purpose of this research is to develop a holistic approach that maximizes the customer service level while minimizing the logistics cost, using an integrated multiple-criteria decision making (MCDM) method for the contemporary transshipment problem. Unlike the prevalent optimization techniques, this paper proposes an integrated approach which considers both quantitative and qualitative factors in order to maximize the benefits of service deliverers and customers under uncertain environments. Design/methodology/approach – This paper proposes a fuzzy-based integer linear programming model, based on the existing literature and validated with an example case. The model integrates a newly developed fuzzy modification of the analytic hierarchy process (FAHP) and solves the multi-criteria transshipment problem. Findings – This paper provides several novel insights about how to transform a company from a cost-based model to a service-dominated model by using an integrated MCDM method. It suggests that the contemporary customer-driven supply chain maintains and increases its competitiveness from two aspects: optimizing the cost and providing the best service simultaneously. Research limitations/implications – This research used one illustrative industry case to exemplify the developed method. Considering the generalization of the research findings and the complexity of the transshipment service network, more cases across multiple industries are necessary to further enhance the validity of the research output. Practical implications – The paper includes implications for the evaluation and selection of transshipment service suppliers, and for the construction and management of the optimal transshipment network. Originality/value – The major advantages of this generic approach are that both quantitative and qualitative factors under a fuzzy environment are considered simultaneously, and that the viewpoints of both service deliverers and customers are addressed. Therefore, it is believed to be useful and applicable for transshipment service network design.
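As an illustration of the FAHP weighting step that would feed such an integer program, here is a sketch of one standard method: Buckley's geometric-mean approach with triangular fuzzy numbers. The paper develops its own FAHP modification, so this is a generic stand-in, and the pairwise comparison matrix is toy data.

```python
# Buckley-style FAHP weights from triangular fuzzy pairwise comparisons (toy data).
import numpy as np

# pairwise comparisons of two criteria (e.g. cost vs. service) as TFNs (l, m, u)
A = [[(1, 1, 1), (2, 3, 4)],
     [(1/4, 1/3, 1/2), (1, 1, 1)]]
n = len(A)

# fuzzy geometric mean of each row, computed component-wise
r = [tuple(np.prod([A[i][j][c] for j in range(n)]) ** (1 / n) for c in range(3))
     for i in range(n)]
L = sum(x[0] for x in r); M = sum(x[1] for x in r); U = sum(x[2] for x in r)

# fuzzy weight r_i (x) (sum of r)^(-1), defuzzified by the centroid of (l, m, u)
w = [(x[0] / U + x[1] / M + x[2] / L) / 3 for x in r]
total = sum(w)
w = [x / total for x in w]
print(w)   # crisp criterion weights that would feed the integer programming stage
```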
Abstract:
This thesis is concerned with the inventory control of items that can be considered independent of one another. The decisions of when to order and in what quantity are the controllable or independent variables in cost expressions which are minimised. The four systems considered are referred to as (Q, R), (nQ, R, T), (M, T) and (M, R, T). With (Q, R), a fixed quantity Q is ordered each time the order cover (i.e. stock in hand plus on order) equals or falls below R, the re-order level. With the other three systems, reviews are made only at intervals of T. With (nQ, R, T), an order for nQ is placed if on review the inventory cover is less than or equal to R, where n, an integer, is chosen at the time so that the new order cover just exceeds R. In (M, T), each order increases the order cover to M. Finally, in (M, R, T), when on review the order cover does not exceed R, enough is ordered to increase it to M. The (Q, R) system is examined at several levels of complexity, so that the theoretical savings in inventory costs obtained with more exact models could be compared with the increases in computational costs. Since the exact model was preferable for the (Q, R) system, only exact models were derived for the other three systems. Several methods of optimization were tried, but most were found inappropriate for the exact models because of non-convergence; however, one method did work for each of the exact models. Demand is considered continuous and, with one exception, the distribution assumed is the normal distribution truncated so that demand is never less than zero. Shortages are assumed to result in backorders, not lost sales. However, the shortage cost is a function of three items, one of which, the backorder cost, may be a linear, quadratic or exponential function of the length of time of a backorder, with or without a period of grace. Lead times are assumed constant or gamma-distributed. Lastly, the actual supply quantity is allowed to follow a probability distribution. All the sets of equations were programmed for a KDF 9 computer, and the computed performances of the four inventory control procedures are compared under each assumption.
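To make the policy definitions concrete, the ordering rules for two of the four systems translate directly into code. The sketch below is a paraphrase of the definitions above, with illustrative parameters:

```python
# Ordering rules for the (Q, R) and (nQ, R, T) systems described above.
import math

def qr_order(order_cover, Q, R):
    """(Q, R): order a fixed Q whenever the order cover falls to R or below."""
    return Q if order_cover <= R else 0

def nqrt_order(order_cover, Q, R):
    """(nQ, R, T): at a review, order the smallest integer multiple of Q
    that lifts the order cover strictly above R."""
    if order_cover > R:
        return 0
    n = math.floor((R - order_cover) / Q) + 1
    return n * Q

print(qr_order(order_cover=18, Q=50, R=20))     # -> 50
print(nqrt_order(order_cover=-30, Q=50, R=20))  # -> 100 (n = 2)
```

The (M, T) and (M, R, T) rules are order-up-to variants: order `M - order_cover` at every review, or only at reviews where the cover is at or below R.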
Abstract:
Cell-wall components (cellulose, hemicellulose (oat spelt xylan) and lignin (Organosolv)) and model compounds (levoglucosan, an intermediate product of cellulose decomposition, and chlorogenic acid, structurally similar to lignin polymer units) have been investigated to probe in detail the influence of potassium on their pyrolysis behaviour as well as on their uncatalysed decomposition reactions. Cellulose and lignin were pretreated with hydrochloric acid to remove salts and metals, and the demineralised samples were impregnated with 1% of potassium as potassium acetate. Levoglucosan, xylan and chlorogenic acid were mixed with CH3COOK to introduce 1% K. Characterisation was performed using thermogravimetric analysis (TGA) and differential thermal analysis (DTA). In addition to the TGA pyrolysis, pyrolysis-gas chromatography-mass spectrometry (PY-GC-MS) analysis was used to examine the reaction products. Potassium catalysis has a huge influence on the char-formation stage and increases the char yields considerably (from 7.7% for raw cellulose to 27.7% for potassium-impregnated cellulose; from 5.7% for raw levoglucosan to 20.8% for levoglucosan with CH3COOK added). Major changes in the pyrolytic decomposition pathways were observed for cellulose, levoglucosan and chlorogenic acid. The results for cellulose and levoglucosan are consistent with a base-catalysed route in the presence of the potassium salt, which promotes complete decomposition of the glucosidic units by a heterolytic mechanism and favours their direct depolymerization and fragmentation to low-molecular-weight components (e.g. acetic acid, formic acid, glyoxal, hydroxyacetaldehyde and acetol). Base-catalysed polymerization reactions increase the char yield. The effect of potassium on lignin pyrolysis is very significant: the temperature of maximum conversion shifts to lower temperature by 70 K, and catalysed polymerization reactions increase the char yield from 37% to 51%. A similar trend is observed for the model compound, chlorogenic acid. The addition of potassium does not produce a dramatic change in the tar product distribution, although its addition to chlorogenic acid promoted the generation of cyclohexane and phenol derivatives. Postulated thermal decomposition schemes for chlorogenic acid are presented.
Abstract:
What form is small business activity taking among new migrants in the UK? This question is addressed by examining the case of Somalis in the English city of Leicester. We apply a novel synthesis of Nee and Sanders' (2001) 'forms of capital' model with the 'mixed embeddedness' approach (Rath, 2000) to enterprises established by newly arrived immigrant communities, combining agency and structure perspectives. Data are drawn from business owners (and workers) themselves, rather than from community representatives. Face-to-face in-depth interviews were held with 25 business owners and 25 employees/'helpers', supplemented by 3 focus-group encounters with different segments of the Somali business population. The findings indicate that a reliance solely on social-capital explanations is not sufficient. An adequate understanding of business dynamics requires an appreciation of how Somalis mobilize different forms of capital within a given political, social and economic context.
Abstract:
In this letter, we propose an analytical approach to modelling uplink intercell interference (ICI) in hexagonal-grid-based orthogonal frequency division multiple access (OFDMA) cellular networks. The key idea is that the uplink ICI from each individual cell is approximated by a lognormal distribution whose statistical parameters are determined analytically. Accordingly, the aggregate uplink ICI is approximated by another lognormal distribution, whose statistical parameters can be determined from those of the individual cells using the Fenton-Wilkinson method. Analytical expressions for the uplink ICI are derived for two traditional frequency reuse schemes, namely integer frequency reuse with factor 1 (IFR-1) and with factor 3 (IFR-3). Uplink fractional power control and lognormal shadowing are modeled. System performance in terms of signal-to-interference-plus-noise ratio (SINR) and spectrum efficiency is also derived. The proposed model has been validated by simulations.
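The Fenton-Wilkinson step is standard moment matching: approximate a sum of independent lognormal terms by a single lognormal whose first two moments agree with those of the sum. The sketch below shows the generic computation; the per-cell parameters are placeholders, not values from the letter.

```python
# Fenton-Wilkinson approximation of a sum of independent lognormals.
import numpy as np

def fenton_wilkinson(mu, sigma):
    """mu, sigma: natural-log parameters of each lognormal term.
    Returns (mu_Z, sigma_Z) of the lognormal approximating their sum."""
    mu, sigma = np.asarray(mu), np.asarray(sigma)
    m1 = np.exp(mu + sigma**2 / 2)                            # term means
    var = (np.exp(sigma**2) - 1) * np.exp(2 * mu + sigma**2)  # term variances
    mean_sum, var_sum = m1.sum(), var.sum()                   # independence assumed
    sigma_z2 = np.log(1 + var_sum / mean_sum**2)              # match 2nd moment
    mu_z = np.log(mean_sum) - sigma_z2 / 2                    # match 1st moment
    return mu_z, np.sqrt(sigma_z2)

# e.g. aggregate ICI from six interfering cells with assumed per-cell parameters
print(fenton_wilkinson(mu=[-1.0] * 6, sigma=[0.8] * 6))
```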
Abstract:
Nitration of tyrosine in proteins and peptides is a post-translational modification that occurs under conditions of oxidative stress. It is implicated in a variety of medical conditions, including neurodegenerative and cardiovascular diseases. However, monitoring tyrosine nitration and understanding its role in modifying biological function remains a major challenge. In this work, we investigate the use of electron-vibration-vibration (EVV) two-dimensional infrared (2DIR) spectroscopy for the study of tyrosine nitration in model peptides. We demonstrate the ability of EVV 2DIR spectroscopy to differentiate between the neutral and deprotonated states of 3-nitrotyrosine, and we characterize their spectral signatures using information obtained from quantum chemistry calculations and simulated EVV 2DIR spectra. To test the sensitivity of the technique, we use mixed-peptide samples containing various levels of tyrosine nitration, and we use mass spectrometry to independently verify the level of nitration. We conclude that EVV 2DIR spectroscopy is able to provide detailed spectroscopic information on peptide side-chain modifications and to detect nitration levels down to 1%. We further propose that lower nitration levels could be detected by introducing a resonant Raman probe step to increase the detection sensitivity of EVV 2DIR spectroscopy.