887 results for network cost models


Relevance: 30.00%

Abstract:

The constrained compartmentalized knapsack problem can be seen as an extension of the constrained knapsack problem. However, the items are grouped into different classes, so the overall knapsack has to be divided into compartments, and each compartment is loaded with items from the same class. Moreover, building a compartment incurs a fixed cost and a fixed loss of capacity in the original knapsack, and the compartments are lower and upper bounded. The objective is to maximize the total value of the items loaded in the overall knapsack minus the cost of the compartments. This problem has been formulated as an integer non-linear program, and in this paper we reformulate the non-linear model as an integer linear master problem with a large number of variables. Some heuristics based on the solution of the restricted master problem are investigated. A new, more compact integer linear model is also presented; it can be solved by a commercial branch-and-bound solver, which found most of the optimal solutions for the constrained compartmentalized knapsack problem. On the other hand, the heuristics provide good solutions with low computational effort. (C) 2011 Elsevier B.V. All rights reserved.
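
To make the objective and constraints concrete, here is a toy brute-force sketch in Python with made-up instance data; it only illustrates the problem statement, not the paper's column-generation reformulation or its compact ILP.

```python
from itertools import chain, combinations

W = 20                                  # overall knapsack capacity
items = [(0, 3, 8), (0, 4, 9),          # (class, weight, value)
         (1, 5, 12), (1, 2, 5), (1, 6, 13)]
fixed_cost = {0: 2, 1: 3}               # cost of building each compartment
capacity_loss = {0: 1, 1: 1}            # capacity consumed by the compartment itself
bounds = {0: (2, 10), 1: (2, 12)}       # (lower, upper) compartment load bounds

def powerset(seq):
    return chain.from_iterable(combinations(seq, r) for r in range(len(seq) + 1))

best = (0, ())
for subset in powerset(range(len(items))):
    load, value = {}, 0
    for i in subset:
        k, w, v = items[i]
        load[k] = load.get(k, 0) + w
        value += v
    # a compartment is built for every class actually used
    if any(not bounds[k][0] <= load[k] <= bounds[k][1] for k in load):
        continue                        # compartment bound violated
    if sum(load[k] + capacity_loss[k] for k in load) > W:
        continue                        # overall capacity violated
    profit = value - sum(fixed_cost[k] for k in load)
    best = max(best, (profit, subset))

print("best profit:", best[0], "items:", list(best[1]))
```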

Relevance: 30.00%

Abstract:

Solving multicommodity capacitated network design problems is a hard task that requires the use of several strategies, like relaxing some constraints and strengthening the model with valid inequalities. In this paper, we compare three sets of inequalities that have been widely used in this context: Benders, metric and cutset inequalities. We show that Benders inequalities associated with extreme rays are metric inequalities. We also show how to strengthen Benders inequalities associated with non-extreme rays to obtain metric inequalities. We show that cutset inequalities are Benders inequalities, but not necessarily metric inequalities. We give a necessary and sufficient condition for a cutset inequality to be a metric inequality. Computational experiments show the effectiveness of strengthening Benders and cutset inequalities to obtain metric inequalities.
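
For reference, the generic cutset inequality in this setting states that enough capacity must be installed on the arcs crossing any node cut to carry the demand that has to cross it (notation here is ours, not the paper's):

$$\sum_{a \in \delta(S)} u_a \, y_a \;\ge\; \sum_{k \,:\, O(k) \in S,\ D(k) \notin S} d_k,$$

where $S$ is a node set, $\delta(S)$ the set of arcs leaving $S$, $u_a$ the capacity of one module installed on arc $a$, $y_a$ the integer number of modules installed, and $d_k$ the demand of commodity $k$ with origin $O(k)$ and destination $D(k)$.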

Relevance: 30.00%

Abstract:

Large-scale simulations of parts of the brain using detailed neuronal models to improve our understanding of brain functions are becoming a reality with the use of supercomputers and large clusters. However, the high acquisition and maintenance cost of these computers, including the physical space, air conditioning, and electrical power, limits the number of simulations of this kind that scientists can perform. Modern commodity graphics cards, based on the CUDA platform, contain graphics processing units (GPUs) composed of hundreds of processors that can simultaneously execute thousands of threads and thus constitute a low-cost solution for many high-performance computing applications. In this work, we present a CUDA algorithm that enables the execution, on multiple GPUs, of simulations of large-scale networks composed of biologically realistic Hodgkin-Huxley neurons. The algorithm represents each neuron as a CUDA thread, which solves the set of coupled differential equations that model each neuron. Communication among neurons located in different GPUs is coordinated by the CPU. We obtained speedups of 40 for the simulation of 200k neurons that received random external input, and speedups of 9 for a network with 200k neurons and 20M neuronal connections, on a single computer with two graphics boards with two GPUs each, when compared with a modern quad-core CPU. Copyright (C) 2010 John Wiley & Sons, Ltd.
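
The per-thread workload is just the numerical integration of the Hodgkin-Huxley equations for one neuron. Below is a minimal serial Python sketch of a single forward-Euler step with the classic squid-axon parameters; it is a stand-in for the body of such a CUDA kernel, not the authors' code.

```python
import math

# Classic Hodgkin-Huxley constants (mV, ms, mS/cm^2, uF/cm^2)
C_m, g_Na, g_K, g_L = 1.0, 120.0, 36.0, 0.3
E_Na, E_K, E_L = 50.0, -77.0, -54.4

def alpha_m(V): return 0.1 * (V + 40.0) / (1.0 - math.exp(-(V + 40.0) / 10.0))
def beta_m(V):  return 4.0 * math.exp(-(V + 65.0) / 18.0)
def alpha_h(V): return 0.07 * math.exp(-(V + 65.0) / 20.0)
def beta_h(V):  return 1.0 / (1.0 + math.exp(-(V + 35.0) / 10.0))
def alpha_n(V): return 0.01 * (V + 55.0) / (1.0 - math.exp(-(V + 55.0) / 10.0))
def beta_n(V):  return 0.125 * math.exp(-(V + 65.0) / 80.0)

def hh_step(V, m, h, n, I_ext, dt=0.01):
    """One forward-Euler step for a single neuron (the per-thread workload)."""
    I_ion = (g_Na * m**3 * h * (V - E_Na)
             + g_K * n**4 * (V - E_K)
             + g_L * (V - E_L))
    V += dt * (I_ext - I_ion) / C_m
    m += dt * (alpha_m(V) * (1 - m) - beta_m(V) * m)
    h += dt * (alpha_h(V) * (1 - h) - beta_h(V) * h)
    n += dt * (alpha_n(V) * (1 - n) - beta_n(V) * n)
    return V, m, h, n

# Example: integrate one neuron for 5 ms with a constant input current
state = (-65.0, 0.05, 0.6, 0.32)       # resting V and gating variables
for _ in range(500):
    state = hh_step(*state, I_ext=10.0)
print("V after 5 ms: %.2f mV" % state[0])
```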

Relevance: 30.00%

Abstract:

We present an efficient numerical methodology for the 3D computation of incompressible multi-phase flows described by conservative phase-field models. We focus here on the case of density-matched fluids with different viscosity (Model H). The numerical method employs adaptive mesh refinement (AMR) in concert with an efficient semi-implicit time discretization strategy and a linear, multi-level multigrid solver to relax high-order stability constraints and to capture the flow's disparate scales at optimal cost. Only five linear solves are needed per time step. Moreover, all the adaptive methodology is constructed from scratch to allow a systematic investigation of the key aspects of AMR in a conservative phase-field setting. We validate the method and demonstrate its capabilities and efficacy with important examples of drop deformation, Kelvin-Helmholtz instability, and flow-induced drop coalescence. (C) 2010 Elsevier Inc. All rights reserved.
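
For orientation, Model H couples an advected Cahn-Hilliard equation for the order parameter $\phi$ to the incompressible Navier-Stokes equations; a standard statement of the system (our notation, not necessarily the paper's exact formulation) is

$$\frac{\partial \phi}{\partial t} + \mathbf{u} \cdot \nabla \phi = \nabla \cdot \big( M \nabla \mu \big), \qquad \mu = f'(\phi) - \epsilon^2 \Delta \phi,$$

$$\rho \left( \frac{\partial \mathbf{u}}{\partial t} + \mathbf{u} \cdot \nabla \mathbf{u} \right) = -\nabla p + \nabla \cdot \big( \eta(\phi)\,( \nabla \mathbf{u} + \nabla \mathbf{u}^{\mathsf{T}} ) \big) + \mu \nabla \phi, \qquad \nabla \cdot \mathbf{u} = 0,$$

with constant density $\rho$ (the fluids are density matched), mobility $M$, interface thickness $\epsilon$, double-well potential $f$, and a composition-dependent viscosity $\eta(\phi)$ accounting for the viscosity contrast.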

Relevance: 30.00%

Abstract:

The peritrophic membrane (PM) is an anatomical structure surrounding the food bolus in most insects. Rejecting the idea that the PM evolved from the coating mucus to play the same protective role, novel functions were proposed and experimentally tested. The theoretical principles underlying the digestive enzyme recycling mechanism were described and used to develop an algorithm to calculate enzyme distributions along the midgut and to infer secretory and absorptive sites. The activity of a Spodoptera frugiperda microvillar aminopeptidase decreases by 50% if placed in the presence of midgut contents. S. frugiperda trypsin preparations placed into dialysis bags in stirred and unstirred media have activities of 210% and 160%, respectively, over the activities of samples in a test tube. The ectoperitrophic fluid (EF) present in the midgut caeca of Rhynchosciara americana may be collected. If the enzymes restricted to this fluid are assayed in the presence of PM contents (PMC), their activities decrease by at least 58%. The lack of a PM caused by calcofluor feeding impairs growth due to an increase in the metabolic cost associated with the conversion of food into body mass. This probably results from an increase in digestive enzyme excretion and a useless homeostatic attempt to reestablish the destroyed midgut gradients. The experimental models support the view that the PM enhances digestive efficiency by: (a) preventing non-specific binding of undigested material onto the cell surface; (b) preventing enzyme excretion by allowing enzyme recycling powered by an ectoperitrophic counterflux of fluid; (c) removing from inside the PM the oligomeric molecules that may inhibit the enzymes involved in initial digestion; (d) restricting oligomer hydrolases to the ectoperitrophic space (ECS) to avoid probable partial inhibition by non-dispersed undigested food. Finally, PM functions are discussed regarding insects feeding on any diet. (C) 2008 Elsevier Ltd. All rights reserved.

Relevance: 30.00%

Abstract:

GPS technology is now embedded in portable, low-cost electronic devices to track the movements of mobile objects. This development has greatly impacted the transportation field by creating a novel and rich source of traffic data on the road network. Although GPS devices promise to overcome problems such as underreporting, respondent fatigue, inaccuracies and other human errors in data collection, the technology is still relatively new and raises many issues for potential users. These issues tend to revolve around three areas: reliability, data processing and the related applications. This thesis studies GPS tracking from the methodological, technical and practical aspects. It first evaluates the reliability of GPS-based traffic data, using data from an experiment covering three different traffic modes (car, bike and bus) traveling along the road network. It then outlines the general procedure for processing GPS tracking data and discusses related issues uncovered using real-world GPS tracking data from 316 cars. Thirdly, it investigates the influence of road network density on finding optimal locations for enhancing travel efficiency and decreasing travel cost. The results show that the geographical positioning is reliable. Velocity is slightly underestimated, whereas altitude measurements are unreliable. Post-processing techniques with auxiliary information are found to be necessary and important for resolving the inaccuracy of GPS data. The density of the road network influences the choice of optimal locations; the influence stabilizes at a certain level and does not deteriorate as node density grows higher.
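
A routine step in processing such GPS traces is computing the great-circle distance between consecutive fixes to derive speed. A minimal sketch using the standard haversine formula (with made-up coordinates; the thesis's actual processing pipeline is not reproduced here):

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes."""
    R = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * R * math.asin(math.sqrt(a))

# Speed between two fixes recorded 1 s apart (illustrative coordinates)
d = haversine_m(60.6749, 17.1413, 60.6750, 17.1415)
print("distance: %.1f m, speed: %.1f m/s" % (d, d / 1.0))
```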

Relevance: 30.00%

Abstract:

Location models are used for planning the location of multiple service centers in order to serve a geographically distributed population. A cornerstone of such models is the measure of distance between the service center and a set of demand points, viz. the location of the population (customers, pupils, patients and so on). Theoretical as well as empirical evidence supports the current practice of using the Euclidean distance in metropolitan areas. In this paper, we argue and provide empirical evidence that such a measure is misleading once location models are applied to rural areas with heterogeneous transport networks. This paper stems from the problem of finding an optimal allocation of a pre-specified number of hospitals in a large Swedish region with a low population density. We conclude that the Euclidean distance and network distances based on a homogeneous network (equal travel costs in the whole network) give approximately the same optima. However, network distances calculated from a heterogeneous network (different travel costs in different parts of the network) give widely different optima as the number of hospitals increases. In terms of accessibility, we find that the recent closure of hospitals and the suboptimal location of the remaining ones have increased the average travel distance for the population by 75%. Finally, aggregating the population misplaces the hospitals by 10 km on average.
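
The paper's point can be reproduced in miniature: choose facility locations on a weighted road graph using network (shortest-path) distances and a population-weighted p-median objective. A toy sketch with made-up data (pure-Python Dijkstra to stay self-contained; not the authors' model):

```python
import heapq
from itertools import combinations

# Toy road network: node -> [(neighbor, travel cost)], heterogeneous costs
graph = {
    0: [(1, 2.0), (2, 9.0)], 1: [(0, 2.0), (3, 3.0)],
    2: [(0, 9.0), (3, 1.0), (4, 7.0)], 3: [(1, 3.0), (2, 1.0), (4, 8.0)],
    4: [(2, 7.0), (3, 8.0)],
}
population = {0: 120, 1: 40, 2: 300, 3: 25, 4: 80}  # demand per node

def dijkstra(src):
    """Shortest-path travel costs from src to every node."""
    dist = {src: 0.0}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(pq, (d + w, v))
    return dist

dist = {u: dijkstra(u) for u in graph}   # all-pairs network distances

def total_travel(facilities):
    # population-weighted distance to the nearest facility (p-median objective)
    return sum(pop * min(dist[f][u] for f in facilities)
               for u, pop in population.items())

p = 2                                    # pre-specified number of hospitals
best = min(combinations(graph, p), key=total_travel)
print("best facility nodes:", best, "cost:", total_travel(best))
```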

Relevance: 30.00%

Abstract:

Climate model projections show that climate change will further increase the risk of flooding in many regions of the world. There is a need for climate adaptation, but building new infrastructure or additional retention basins has its limits, especially in densely populated areas where open spaces are limited. Another solution is the more efficient use of the existing infrastructure. This research investigates a method for real-time flood control by means of existing gated weirs and retention basins. The method was tested on the specific study area of the Demer basin in Belgium but is generally applicable. Today, retention basins along the Demer River are controlled by means of adjustable gated weirs based on fixed logic rules. However, because of the high complexity of the system, these rules achieve only suboptimal results. By making use of precipitation forecasts and combined hydrological-hydraulic river models, the state of the river network can be predicted. To speed up the computation, a conceptual river model was used. The conceptual model was combined with a Model Predictive Control (MPC) algorithm and a Genetic Algorithm (GA). The MPC algorithm predicts the state of the river network depending on the positions of the adjustable weirs in the basin. The GA generates these positions in a semi-random way. Cost functions based on water levels were introduced to evaluate the fitness of each generation in terms of flood damage minimization. In the final phase of this research, the influence of the most important MPC and GA parameters was investigated by means of a sensitivity study. The results show that the MPC-GA algorithm manages to reduce the total flood volume during the historical event of September 1998 by 46% in comparison with the current regulation. Based on the MPC-GA results, some recommendations could be formulated to improve the logic rules.
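
The GA side of such a controller can be sketched compactly: candidate gate settings are sampled semi-randomly, each candidate is scored by a water-level-based cost function over the prediction horizon, and the best survive. A minimal toy sketch (the Demer models and the authors' actual cost functions are not reproduced; `simulate` below is a made-up stand-in for the conceptual river model):

```python
import random

HORIZON, N_GATES = 6, 3            # prediction steps, adjustable weirs

def simulate(gates, inflow):
    """Stand-in for the conceptual river model: water levels over the
    horizon for fixed gate openings in [0, 1]."""
    level, levels = 2.0, []
    for t in range(HORIZON):
        release = 0.4 * sum(gates)          # more opening, more release
        level = max(0.0, level + inflow[t] - release)
        levels.append(level)
    return levels

def cost(levels, flood_level=3.0):
    # flood-damage proxy: penalize only water above the flooding threshold
    return sum(max(0.0, l - flood_level) ** 2 for l in levels)

def ga(inflow, pop_size=40, generations=30, p_mut=0.2):
    pop = [[random.random() for _ in range(N_GATES)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=lambda g: cost(simulate(g, inflow)))
        survivors = pop[: pop_size // 2]
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]      # crossover
            if random.random() < p_mut:                      # mutation
                i = random.randrange(N_GATES)
                child[i] = min(1.0, max(0.0, child[i] + random.gauss(0, 0.1)))
            children.append(child)
        pop = survivors + children
    return min(pop, key=lambda g: cost(simulate(g, inflow)))

inflow = [1.5, 2.0, 2.5, 1.0, 0.5, 0.5]      # forecast inflow per step
best = ga(inflow)
print("best gate openings:", [round(g, 2) for g in best])
```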

Relevance: 30.00%

Abstract:

As the service life of a water supply network (WSN) grows, the aging of the pipe network becomes increasingly serious. Because an urban water supply network is a hidden underground asset, it is difficult for monitoring staff to classify faults in the pipe network directly by means of modern detection technology. In this paper, based on basic property data (e.g. diameter, material, pressure, distance to pump, distance to tank, load, etc.) of a water supply network, a decision tree algorithm (C4.5) was applied to classify the condition of water supply pipelines. Part of the historical data was used to build the decision tree classification model, and the remaining historical data was used to validate it. Statistical methods were used to assess the decision tree model, including basic statistics, Receiver Operating Characteristic (ROC) curves, and Recall-Precision Curves (RPC). These methods were successfully used to assess the accuracy of the established classification model of the water pipe network. The purpose of the classification model is to classify the specific condition of the water pipe network; the pipeline can then be maintained according to the classification results: asset unserviceable (AU), near perfect condition (NPC) and serious deterioration (SD). This research thus focuses on pipe classification, which plays a significant role in the future maintenance of water supply networks.
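
The modeling step is routine to reproduce: train a decision tree on pipe attributes, hold out part of the history for validation, and assess it with ROC-style metrics. A minimal sketch with synthetic data (scikit-learn's CART rather than C4.5, made-up features and a binary condition label for brevity, so only the shape of the workflow matches the paper):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report, roc_auc_score

rng = np.random.default_rng(0)
n = 1000
# synthetic stand-ins for the pipe attributes named in the abstract
X = np.column_stack([
    rng.choice([100, 150, 200, 300], n),   # diameter (mm)
    rng.integers(0, 60, n),                # age (years)
    rng.uniform(2, 8, n),                  # pressure (bar)
    rng.uniform(0, 5000, n),               # distance to pump (m)
])
# toy rule: old, high-pressure pipes deteriorate more often
y = ((X[:, 1] > 40) & (X[:, 2] > 5)).astype(int)  # 1 = serious deterioration

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = DecisionTreeClassifier(max_depth=4, random_state=0).fit(X_tr, y_tr)

print(classification_report(y_te, clf.predict(X_te)))
print("ROC AUC:", roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1]))
```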

Relevance: 30.00%

Abstract:

This paper describes the formulation of a Multi-objective Pipe Smoothing Genetic Algorithm (MOPSGA) and its application to the least-cost water distribution network design problem. Evolutionary algorithms have been widely utilised for the optimisation of both theoretical and real-world non-linear optimisation problems, including water system design and maintenance problems. In this work we present a pipe-smoothing-based approach to the creation and mutation of chromosomes which utilises engineering expertise with a view to increasing the performance of the algorithm whilst promoting engineering feasibility within the population of solutions. MOPSGA is based upon the standard Non-dominated Sorting Genetic Algorithm-II (NSGA-II) and incorporates a modified population initialiser and mutation operator which directly target elements of a network with the aim of increasing network smoothness (in terms of progression from one diameter to the next) using network element awareness and an elementary heuristic. The pipe smoothing heuristic used in this algorithm is based upon a fundamental principle employed by water system engineers when designing water distribution pipe networks: the diameter of any pipe is never greater than the sum of the diameters of the pipes directly upstream, resulting in a transition from large to small diameters from the source to the extremities of the network. MOPSGA is assessed on a number of water distribution network benchmarks from the literature, including some large-scale systems based on real-world networks. The performance of MOPSGA is directly compared to that of NSGA-II with regard to solution quality, engineering feasibility (network smoothness) and computational efficiency. MOPSGA is shown to promote both engineering and hydraulic feasibility whilst attaining good infrastructure costs compared to NSGA-II.
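
The smoothing principle is easy to state as a predicate: a pipe's diameter should not exceed the sum of the diameters of the pipes directly upstream of it. A minimal sketch of that feasibility check on a toy network (our own illustration, not the MOPSGA code):

```python
# network topology: pipe -> list of pipes directly upstream of it
upstream = {
    "p1": [],            # source main
    "p2": ["p1"],
    "p3": ["p1"],
    "p4": ["p2", "p3"],  # fed by two branches
}

def smoothness_violations(diameters, upstream):
    """Pipes whose diameter exceeds the sum of their upstream diameters."""
    bad = []
    for pipe, ups in upstream.items():
        if ups and diameters[pipe] > sum(diameters[u] for u in ups):
            bad.append(pipe)
    return bad

diameters = {"p1": 300, "p2": 200, "p3": 150, "p4": 400}  # mm
print(smoothness_violations(diameters, upstream))  # ['p4']: 400 > 200 + 150
```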

Relevance: 30.00%

Abstract:

The cluster provides a greater commercial relationship between the companies that comprise it. This encourages companies to adopt competitive structures that allow them to solve problems they could hardly solve alone (Lubeck et al., 2011). Accordingly, this paper aims to describe the coopetition between companies operating in a planned commercial cluster, from the point of view of retailers, taking as a basis the theoretical models proposed by Bengtsson and Kock (1999) and Leon (2005), operationalized by means of Social Network Analysis (SNA). Data collection consisted of two phases: the first was exploratory, to identify the actors; the second was descriptive, aiming to describe the coopetition among the enterprises. As a result, we identified the companies that cooperate and compete simultaneously (coopetition), firms that only compete, companies that only cooperate, and businesses that neither compete nor cooperate (coexistence).

Relevance: 30.00%

Abstract:

Using vector autoregressive (VAR) models and Monte Carlo simulation methods, we investigate the potential gains for forecasting accuracy and estimation uncertainty of two commonly used restrictions arising from economic relationships. The first reduces the parameter space by imposing long-term restrictions on the behavior of economic variables, as discussed by the literature on cointegration, and the second reduces the parameter space by imposing short-term restrictions, as discussed by the literature on serial-correlation common features (SCCF). Our simulations cover three important issues in model building, estimation, and forecasting. First, we examine the performance of standard and modified information criteria in choosing the lag length for cointegrated VARs with SCCF restrictions. Second, we compare the forecasting accuracy of fitted VARs when only cointegration restrictions are imposed and when cointegration and SCCF restrictions are jointly imposed. Third, we propose a new estimation algorithm in which short- and long-term restrictions interact to estimate the cointegrating and cofeature spaces, respectively. We have three basic results. First, ignoring SCCF restrictions has a high cost in terms of model selection, because standard information criteria too frequently choose inconsistent models with too small a lag length; criteria selecting lag and rank simultaneously have a superior performance in this case. Second, this translates into a superior forecasting performance of the restricted VECM over the VECM, with important improvements in forecasting accuracy, reaching more than 100% in extreme cases. Third, the new algorithm proposed here fares very well in terms of parameter estimation, even for the long-term parameters, opening up the discussion of joint estimation of short- and long-term parameters in VAR models.
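
The first of these exercises, lag selection by information criteria, is straightforward to reproduce on simulated data. A minimal sketch with statsmodels (an unrestricted VAR only; the paper's SCCF-restricted criteria and the joint lag-rank criteria are not implemented here):

```python
import numpy as np
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
T, k = 400, 2
# simulate a simple bivariate VAR(2) with known coefficients
A1 = np.array([[0.5, 0.1], [0.0, 0.4]])
A2 = np.array([[0.2, 0.0], [0.1, 0.2]])
y = np.zeros((T, k))
for t in range(2, T):
    y[t] = A1 @ y[t - 1] + A2 @ y[t - 2] + rng.normal(0, 1, k)

model = VAR(y)
sel = model.select_order(maxlags=8)   # AIC, BIC, HQIC, FPE over lag lengths
print(sel.summary())
fit = model.fit(maxlags=8, ic="aic")  # fit with the AIC-selected lag
print("chosen lag length:", fit.k_ar)
```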

Relevance: 30.00%

Abstract:

This paper demonstrates that the applied monetary models - the Sidrauski-type models and the cash-in-advance models, augmented with a banking sector that supplies money-substitute services - imply trajectories which are Pareto optimal restricted to a given path of the real quantity of money. As a consequence, three results follow. First, Bailey's formula to evaluate the welfare cost of inflation is indeed accurate if the long-run capital stock does not depend on the inflation rate and if the compensated demand is considered. Second, the relevant money demand concept for this issue - the impact of inflation on welfare - is the monetary base. Third, if the long-run capital stock depends on the inflation rate, this dependence has a second-order impact on welfare and, conceptually, it is not a distortion from the social point of view. These three implications moderate some evaluations of the welfare cost of perfectly predicted inflation.
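
For reference, Bailey's formula measures the welfare cost of a nominal interest rate $i$ as the lost consumer surplus under the money demand curve; in the standard statement (our notation; the paper's point is that $m$ should be the monetary base and the demand the compensated one):

$$w(i) \;=\; \int_0^{i} m(u)\,du \;-\; i\,m(i),$$

where $m(i)$ is real money demand as a function of the nominal interest rate and $w(i)$ is the deadweight loss relative to a zero nominal rate.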

Relevance: 30.00%

Abstract:

The study presents the results and recommendations deriving from the application of two supply chain management analysis models, as proposed by the Supply Chain Council (SCOR, version 10.0) and by Lambert (1997, Framework for Supply Chain Management), to the logistics of cash transfers in Brazil. Cash transfers consist of the transportation of notes to and from each node of the complex network formed by bank branches, ATMs, armored transportation providers, the government custodian, the Brazilian Central Bank, and financial institutions. Although the logistics sustaining these operations are wide-ranging (country-scale), complex, and subject to many financial regulations and security procedures, it was detected that they were probably not fully integrated. Through primary and secondary data research and analysis, using the above-mentioned models, the study concludes with propositions to strongly improve operational efficiency.