974 results for Simulated annealing (Mathematics)


Relevance: 80.00%

Abstract:

The p-median problem is often used to locate p service centers by minimizing their distances to a geographically distributed demand of n points. The optimal locations are sensitive to the geographical context, such as the road network and the demand points, especially when these are asymmetrically distributed in the plane. Most studies focus on evaluating the performance of the p-median model when p and n vary. To our knowledge, the effect of altering the road network is not a well-studied problem, especially in a real-world context. The aim of this study is to analyze how the optimal location solutions of the p-median model vary when the density of the road network is altered. The investigation is conducted by means of a case study in Dalecarlia, a region in Sweden with an asymmetrically distributed population (15,000 weighted demand points). To locate 5 to 50 service centers we use the national transport administration's official road network (NVDB), which consists of 1.5 million nodes. To find the optimal locations we start with 500 candidate nodes in the network and increase the number of candidate nodes in steps up to 67,000. To find the optimal solution we use a simulated annealing algorithm with adaptive tuning of the temperature. The results show that there is only a limited improvement in the optimal solutions when the number of nodes in the road network increases and p is low. When p is high, the improvements are larger. The results also show that the choice of the best network depends on p: the larger p is, the denser the network needs to be.
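The abstract describes the p-median objective and a simulated annealing search, but gives no formulation. The following is a minimal illustrative sketch, not the authors' implementation: all names, the toy distance data, and the geometric cooling schedule are assumptions (the study uses adaptive temperature tuning on a real road network).

```python
import math
import random

def pmedian_cost(facilities, demand, dist):
    # Total weighted distance from each demand point to its nearest open facility.
    return sum(w * min(dist[i][j] for j in facilities) for i, w in demand.items())

def simulated_annealing(candidates, demand, dist, p, temp=1.0, cooling=0.95,
                        iters=2000, seed=0):
    # Minimal SA for the p-median problem: propose swapping one open facility
    # for a closed candidate; accept worse solutions with probability
    # exp(-delta / temp), with a simple geometric cooling schedule.
    rng = random.Random(seed)
    current = rng.sample(candidates, p)
    cost = pmedian_cost(current, demand, dist)
    best, best_cost = list(current), cost
    for _ in range(iters):
        out = rng.choice(current)
        inn = rng.choice([c for c in candidates if c not in current])
        trial = [c for c in current if c != out] + [inn]
        trial_cost = pmedian_cost(trial, demand, dist)
        delta = trial_cost - cost
        if delta < 0 or rng.random() < math.exp(-delta / max(temp, 1e-9)):
            current, cost = trial, trial_cost
            if cost < best_cost:
                best, best_cost = list(current), cost
        temp *= cooling  # geometric cooling; the paper tunes this adaptively
    return best, best_cost
```

On the toy instance in the sketch the swap neighborhood is tiny, so SA behaves like hill climbing once the temperature has decayed; on a 67,000-candidate network the acceptance of worsening swaps is what lets the search escape local optima.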

Relevance: 80.00%

Abstract:

A customer is presumed to gravitate to a facility according to the distance to it and its attractiveness. However, regarding the location of the facility, the presumption is that the customer opts for the shortest route to the nearest facility. This paradox was recently resolved by the introduction of the gravity p-median model. The model has yet to be implemented and tested empirically. We implemented the model in an empirical problem of locating locksmiths, vehicle inspections, and retail stores of vehicle spare-parts, and we compared the solutions with those of the p-median model. We found the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to those of the p-median model, or it gives unstable solutions due to a non-concave objective function.

Relevance: 80.00%

Abstract:

The p-median model is used to locate P facilities to serve a geographically distributed population. Conventionally, it is assumed that the population patronizes the nearest facility and that the distance between the resident and the facility may be measured by the Euclidean distance. Carling, Han, and Håkansson (2012) compared two network distances with the Euclidean distance in a rural region with a sparse, heterogeneous network and a non-symmetric distribution of the population. For a coarse network and small P, they found, in contrast to the literature, the Euclidean distance to be problematic. In this paper we extend their work by using a refined network and systematically study the case where P varies in size (2-100 facilities). We find that the network distance gives as good a solution as the travel-time network. The Euclidean distance gives solutions some 2-7 per cent worse than the network distances, and the solutions deteriorate with increasing P. Our conclusions extend to intra-urban location problems.

Relevance: 80.00%

Abstract:

The p-median model is commonly used to find optimal locations of facilities for geographically distributed demands. So far, few studies have considered the importance of the road network in the model. However, Han, Håkansson, and Rebreyend (2013) examined the solutions of the p-median model with densities of the road network varying from 500 to 70,000 nodes. They found that, as the density went beyond some 10,000 nodes, the solutions showed no further improvement but gradually worsened. The aim of this study is to check their findings by using an alternative heuristic, vertex substitution, as a complement to their simulated annealing. We reject the findings in Han et al. (2013): the solutions do not improve further as the number of nodes exceeds 10,000, but neither do they deteriorate.
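The vertex substitution heuristic mentioned here can be sketched as a Teitz-Bart style local search: repeatedly swap one open facility for a closed candidate whenever the swap lowers the objective, and stop at a local optimum. This is a minimal illustrative sketch under toy data, not the authors' implementation.

```python
def vertex_substitution(candidates, demand, dist, start):
    # Teitz-Bart style vertex substitution: repeatedly swap one open facility
    # for a closed candidate whenever the swap lowers the total weighted
    # distance, and stop when no single substitution improves the solution.
    def cost(fac):
        return sum(w * min(dist[i][j] for j in fac) for i, w in demand.items())

    current, current_cost = list(start), cost(start)
    improved = True
    while improved:
        improved = False
        for out in list(current):
            for inn in candidates:
                if inn in current:
                    continue
                trial = [c for c in current if c != out] + [inn]
                trial_cost = cost(trial)
                if trial_cost < current_cost:
                    current, current_cost = trial, trial_cost
                    improved = True
                    break
            if improved:
                break
    return current, current_cost
```

Unlike simulated annealing, this search is deterministic and only ever accepts improving swaps, which is why it can be used as an independent check on stochastic results.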

Relevance: 80.00%

Abstract:

The p-median problem is often used to locate P service facilities for a geographically distributed population. Important for the performance of such a model is the distance measure, which can vary with the accuracy of the road network. The first aim of this study is to analyze how the optimal location solutions of the p-median model vary when the road network is altered. It is hard to find an exact optimal solution for p-median problems; therefore, two heuristics are applied in this study, simulated annealing and a classic heuristic. The secondary aim is to compare the optimal location solutions obtained with the different algorithms for large p-median problems. The investigation is conducted by means of a case study in Dalecarlia, a rural region with an asymmetrically distributed population. The study shows that the use of more accurate road networks gives better solutions for optimal locations, regardless of which algorithm is used and of how many service facilities are optimized for. It is also shown that the simulated annealing algorithm is not only much faster than the classic heuristic used here, but in most cases also gives better location solutions.

Relevance: 80.00%

Abstract:

Good data quality with high complexity is often seen as important. Intuition says that the more accurate and complex the data are, the better the analytic solutions become, provided the increased computing time can be handled. However, for most practical computational problems, highly complex data mean that computation times become too long or that the heuristics used to solve the problem have difficulty reaching good solutions. This is stressed even further when the size of the combinatorial problem increases. Consequently, we often need simplified data to deal with complex combinatorial problems. In this study we address the question of how the complexity and accuracy of a network affect the quality of heuristic solutions for different sizes of the combinatorial problem. We evaluate this question by applying the commonly used p-median model, which is used to find optimal locations in a network of p supply points that serve n demand points. To evaluate this, we vary both the accuracy (the number of nodes) of the network and the size of the combinatorial problem (p). The investigation is conducted by means of a case study in Dalecarlia, a region in Sweden with an asymmetrically distributed population (15,000 weighted demand points). To locate 5 to 50 supply points we use the national transport administration's official road network (NVDB), which consists of 1.5 million nodes. To find the optimal locations we start with 500 candidate nodes in the network and increase the number of candidate nodes in steps up to 67,000 (aggregated from the 1.5 million nodes). To find the optimal solution we use a simulated annealing algorithm with adaptive tuning of the temperature. The results show that there is only a limited improvement in the optimal solutions when the accuracy of the road network increases and the combinatorial problem is simple (low p). When the combinatorial problem is complex (large p), the improvements from increasing the accuracy of the road network are much larger. The results also show that the choice of the best accuracy of the network depends on the complexity of the combinatorial problem (varying p).

Relevance: 80.00%

Abstract:

Regarding the location of a facility, the presumption in the widely used p-median model is that the customer opts for the shortest route to the nearest facility. However, this assumption is problematic on free markets, since the customer is presumed to gravitate to a facility according to the distance to it and its attractiveness. The recently introduced gravity p-median model offers an extension to the p-median model that accounts for this. The model is therefore potentially interesting, although it has not yet been implemented and tested empirically. In this paper, we implement the model in an empirical problem of locating vehicle inspections, locksmiths, and retail stores of vehicle spare-parts, for the purpose of investigating its superiority to the p-median model. We found, however, the gravity p-median model to be of limited use for the problem of locating facilities, as it either gives solutions similar to those of the p-median model, or it gives unstable solutions due to a non-concave objective function.
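The gravity p-median model replaces the all-or-nothing nearest-facility assignment with probabilistic patronage. The sketch below assumes a Huff-style choice rule with exponential distance decay; the exact decay function, parameter values, and data structures are illustrative assumptions, not the paper's specification.

```python
import math

def huff_probabilities(attract, dists, beta=0.5):
    # Huff-style choice probabilities: a customer splits patronage over the
    # open facilities in proportion to attractiveness * exp(-beta * distance).
    utils = [a * math.exp(-beta * d) for a, d in zip(attract, dists)]
    total = sum(utils)
    return [u / total for u in utils]

def gravity_pmedian_cost(facilities, demand, dist, attract, beta=0.5):
    # Expected weighted travel distance under probabilistic (gravity) choice,
    # instead of the nearest-facility assignment of the ordinary p-median.
    cost = 0.0
    for i, w in demand.items():
        dists = [dist[i][j] for j in facilities]
        probs = huff_probabilities([attract[j] for j in facilities], dists, beta)
        cost += w * sum(p * d for p, d in zip(probs, dists))
    return cost
```

Because the choice probabilities vary smoothly but not concavely with facility positions, small changes in the candidate set can flip the optimum, which is consistent with the instability the abstract reports.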

Relevance: 80.00%

Abstract:

Bin planning (the arrangement of storage bins) is a key factor in the timber industry. Improper planning of the storage bins may lead to inefficient transportation of resources, which threatens the overall efficiency and thereby limits the profit margins of sawmills. To address this challenge, a simulation model has been developed. However, as numerous alternatives are available for arranging bins, simulating all possibilities would take an enormous amount of time and is computationally infeasible. A discrete-event simulation model incorporating meta-heuristic algorithms has therefore been investigated in this study. Preliminary investigations indicate that the results achieved by the GA-based simulation model are promising and better than those of the other meta-heuristic algorithm. Further, a sensitivity analysis has been performed on the GA-based optimal arrangement, which contributes to gaining insights and knowledge about the real system and ultimately leads to improved efficiency in sawmill yards. It is expected that the results achieved in this work will support timber industries in making optimal decisions with respect to the arrangement of storage bins in a sawmill yard.

Relevance: 80.00%

Abstract:

This paper provides a procedure to address all three phases of the design for cellular manufacturing, namely parts/machine grouping and intra-cell and inter-cell layout design, concurrently. It provides a platform to investigate the impact of the cell formation method on intra-cell and inter-cell layout designs, and vice versa, by generating multiple efficient layout designs for different cell partitioning strategies. This approach enables the decision maker to have a wider choice with regard to the number of cells and to assess various criteria, such as travelling cost, duplication of machines, and space requirements, against each alternative. The performance of the model is demonstrated by applying it to an example selected from the literature.

Relevance: 80.00%

Abstract:

Short-term load forecasting is fundamental for the reliable and efficient operation of power systems. Despite its importance, accurate prediction of loads remains problematic. Uncertainties often significantly degrade the performance of load forecasting models, and no index is available to indicate the reliability of predicted values. The objective of this study is to construct prediction intervals for future loads instead of forecasting their exact values. The delta technique is applied for constructing prediction intervals for the outcomes of neural network models. Some statistical measures are developed for quantitative and comprehensive evaluation of prediction intervals. Based on these measures, a new cost function is designed for shortening the length of prediction intervals without compromising their coverage probability. Simulated annealing is used for minimization of this cost function and adjustment of the neural network parameters. The results clearly show that the proposed method for constructing prediction intervals outperforms the traditional delta technique, and that it yields prediction intervals that are practically more reliable and useful than exact point predictions.

Relevance: 80.00%

Abstract:

The bootstrap method is one of the most widely used methods in the literature for the construction of confidence and prediction intervals. This paper proposes a new method for improving the quality of bootstrap-based prediction intervals. The core of the proposed method is a prediction interval-based cost function, which is used for training neural networks. A simulated annealing method is applied for minimization of the cost function and neural network parameter adjustment. The developed neural networks are then used for estimation of the target variance. Through experiments and simulations it is shown that the proposed method can be used to construct better-quality bootstrap-based prediction intervals. The optimized prediction intervals have narrower widths with a greater coverage probability compared to traditional bootstrap-based prediction intervals.
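Prediction interval-based cost functions of the kind minimized in these studies typically trade interval width against empirical coverage. The following is an illustrative, coverage-width style measure; the constants, the penalty shape, and the normalization are assumptions for illustration, not the paper's exact criterion.

```python
import math

def pi_quality(lower, upper, targets, mu=0.9, eta=50.0):
    # Coverage-width style cost for prediction intervals: penalize mean
    # normalized width, and multiply in a steep penalty when the empirical
    # coverage probability (PICP) falls below the nominal level mu.
    # Assumes at least one interval and a non-degenerate overall range.
    n = len(targets)
    covered = sum(1 for lo, up, t in zip(lower, upper, targets) if lo <= t <= up)
    picp = covered / n
    r = max(upper) - min(lower)  # overall range used to normalize widths
    mean_width = sum(up - lo for lo, up in zip(lower, upper)) / (n * r)
    penalty = math.exp(-eta * (picp - mu)) if picp < mu else 1.0
    return mean_width * penalty  # lower is better
```

A cost of this shape is non-differentiable in the network parameters (coverage changes in jumps), which is why a derivative-free optimizer such as simulated annealing is used to minimize it.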

Relevance: 80.00%

Abstract:

Inferring transcriptional regulatory networks from high-throughput biological data is a major challenge in bioinformatics today. To address this challenge, we developed TReNGO (Transcriptional Regulatory Networks reconstruction based on Global Optimization), a global and threshold-free algorithm with simulated annealing for inferring regulatory networks through the integration of ChIP-chip and expression data. In contrast to existing methods, TReNGO is designed to find the optimal structure of transcriptional regulatory networks without any arbitrary thresholds or a predetermined number of transcriptional modules (TMs). TReNGO was applied to both synthetic data and real yeast data on the rapamycin response. In these applications, we demonstrated an improved functional coherence of TMs and TF (transcription factor)-target predictions by TReNGO when compared to GRAM, COGRIM, or analyzing ChIP-chip data alone. We also demonstrated the ability of TReNGO to discover unexpected biological processes that TFs may be involved in and to identify interesting novel combinations of TFs.

Relevance: 80.00%

Abstract:

This paper proposes an innovative optimized parametric method for the construction of prediction intervals (PIs) for uncertainty quantification. The mean-variance estimation (MVE) method employs two separate neural network (NN) models to estimate the mean and variance of the targets. A new training method is developed in this study that adjusts the parameters of the NN models through minimization of a PI-based cost function. A simulated annealing method is applied for minimization of the nonlinear, non-differentiable cost function. The performance of the proposed method for PI construction is examined using monthly data sets taken from a wind farm in Australia. PIs for the wind farm power generation are constructed at five confidence levels between 50% and 90%. The results indicate that valid PIs constructed using the optimized MVE method are of much better quality than the traditional MVE-based PIs.
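Under the MVE approach, once the two networks have produced estimates of the conditional mean and variance, an interval follows from a distributional assumption. A minimal sketch, assuming normality and a standard two-sided z-value (both illustrative; the paper's exact construction may differ):

```python
import math

def mve_interval(mean, variance, confidence_z=1.645):
    # Mean-variance estimation PI: given NN estimates of the conditional mean
    # and variance of the target, a two-sided interval under normality is
    # mean +/- z * sqrt(variance). z = 1.645 corresponds to a nominal 90% level.
    half = confidence_z * math.sqrt(variance)
    return mean - half, mean + half
```

Varying `confidence_z` is what yields the family of intervals at the five confidence levels between 50% and 90% that the abstract mentions.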

Relevance: 80.00%

Abstract:

In this paper, prediction interval (PI)-based modelling techniques are introduced and applied to capture the nonlinear dynamics of a polystyrene batch reactor system. Traditional NN models are developed using experimental datasets with and without disturbances. Simulation results indicate that traditional NNs cannot properly handle disturbances in the reactor data and show poor forecasting performance, with an average MAPE of 22% in the presence of disturbances. The lower upper bound estimation (LUBE) method is applied for the construction of PIs to quantify the uncertainties associated with the forecasts. The simulated annealing optimization technique is employed to adjust the NN parameters so as to minimize an innovative PI-based cost function. The simulation results reveal that the LUBE method generates quality PIs without requiring prohibitive computation. As both the calibration and the sharpness of the PIs are practically and theoretically satisfactory, the constructed PIs can be used as part of the decision-making and control process of polymerization reactors. © 2014 The Institution of Chemical Engineers.