950 results for Dynamic programming.
Abstract:
Segmentation of anatomical and pathological structures in ophthalmic images is crucial for the diagnosis and study of ocular diseases. However, manual segmentation is often a time-consuming and subjective process. This paper presents an automatic approach for segmenting retinal layers in Spectral Domain Optical Coherence Tomography images using graph theory and dynamic programming. Results show that the method accurately segments eight retinal layer boundaries in normal adult eyes, agreeing with an expert grader more closely than a second expert grader does.
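The abstract does not detail the implementation, but graph-based layer segmentation of this kind is commonly realised as a minimum-cost left-to-right path search solved by dynamic programming over a gradient-derived cost image. The Python sketch below illustrates only that general idea; the cost construction, the three-pixel transition neighbourhood and the function name trace_boundary are illustrative assumptions, not the authors' published method.

```python
import numpy as np

def trace_boundary(cost):
    """Trace a minimum-cost left-to-right path through a 2-D cost image.

    cost: array of shape (rows, cols); low values mark likely boundary pixels.
    Returns one row index per column describing the recovered boundary.
    """
    rows, cols = cost.shape
    acc = np.full((rows, cols), np.inf)      # accumulated path cost
    back = np.zeros((rows, cols), dtype=int) # back-pointers for backtracking
    acc[:, 0] = cost[:, 0]
    for c in range(1, cols):
        for r in range(rows):
            # A path may arrive from the pixel directly to the left or a diagonal neighbour.
            prev = min(range(max(0, r - 1), min(rows, r + 2)),
                       key=lambda rr: acc[rr, c - 1])
            acc[r, c] = cost[r, c] + acc[prev, c - 1]
            back[r, c] = prev
    boundary = np.zeros(cols, dtype=int)
    boundary[-1] = int(np.argmin(acc[:, -1]))  # cheapest end point in the last column
    for c in range(cols - 1, 0, -1):           # follow back-pointers column by column
        boundary[c - 1] = back[boundary[c], c]
    return boundary

# Toy image: a dark horizontal band near row 5 should be recovered as the boundary.
image = np.ones((10, 12))
image[5, :] = 0.0
print(trace_boundary(image))
```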
Abstract:
The paper considers the open shop scheduling problem to minimize the makespan, provided that one of the machines has to process the jobs according to a given sequence. We show that in the preemptive case the problem is polynomially solvable for an arbitrary number of machines. If preemption is not allowed, the problem is NP-hard in the strong sense if the number of machines is variable, and is NP-hard in the ordinary sense in the case of two machines. For the latter case we give a heuristic algorithm that runs in linear time and produces a schedule with a makespan that is at most 5/4 times the optimal value. We also show that the two-machine problem in the nonpreemptive case is solvable in pseudopolynomial time by a dynamic programming algorithm, and that the algorithm can be converted into a fully polynomial approximation scheme. © 1998 John Wiley & Sons, Inc. Naval Research Logistics 45: 705–731, 1998
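The recursion behind the pseudopolynomial result is not given in the abstract. As a rough illustration of what a pseudopolynomial dynamic programming algorithm looks like in scheduling, the sketch below enumerates the workloads reachable on one of two machines in O(n * P) time, where P is the sum of processing times; the two-machine load-splitting view and the function name reachable_loads are simplifying assumptions, not the paper's algorithm.

```python
def reachable_loads(proc_times):
    """Pseudopolynomial DP: which total workloads can be placed on one machine?

    Runs in O(n * P) time, where P is the sum of processing times, i.e. the
    running time is polynomial in the numeric value of the input rather than
    in its encoding length.
    """
    total = sum(proc_times)
    reachable = [False] * (total + 1)
    reachable[0] = True
    for p in proc_times:
        # Scan loads downwards so each job is assigned at most once.
        for load in range(total, p - 1, -1):
            if reachable[load - p]:
                reachable[load] = True
    return [load for load, ok in enumerate(reachable) if ok]

# Example: split four jobs across two machines as evenly as possible.
jobs = [7, 4, 5, 6]
loads = reachable_loads(jobs)
best = min(loads, key=lambda load: abs(sum(jobs) - 2 * load))
print(best, sum(jobs) - best)  # 11 11, so a perfectly balanced split exists
```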
Abstract:
This paper considers the problem of processing n jobs in a two-machine non-preemptive open shop to minimize the makespan, i.e., the maximum completion time. One of the machines is assumed to be non-bottleneck. It is shown that, unlike its flow shop counterpart, the problem is NP-hard in the ordinary sense. On the other hand, the problem is shown to be solvable by a dynamic programming algorithm that requires pseudopolynomial time. The latter algorithm can be converted into a fully polynomial approximation scheme. An O(n log n) approximation algorithm is also designed which finds a schedule with makespan at most 5/4 times the optimal value, and this bound is tight.
Abstract:
We develop a fully polynomial-time approximation scheme (FPTAS) for minimizing the weighted total tardiness on a single machine, provided that all due dates are equal. The FPTAS is obtained by converting a specially designed pseudopolynomial dynamic programming algorithm.
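The abstract states only that the FPTAS is obtained by converting a pseudopolynomial dynamic program. The standard conversion rounds the numeric data before running the exact DP; the sketch below demonstrates that trick on a simple two-machine load-balancing surrogate. The surrogate problem, the scaling factor K and the stated error bound follow the textbook construction and are assumptions here, not the paper's specific scheme.

```python
def fptas_balanced_split(proc_times, eps):
    """Textbook 'round, then run the exact DP' conversion of a pseudopolynomial
    dynamic program into an approximation scheme, illustrated on a two-machine
    load-balancing surrogate.

    Processing times are scaled by K = eps * P / (2n) and truncated, so the DP
    only ever sees O(n / eps) distinct load values, while the makespan of the
    returned split exceeds the optimum by at most a factor of (1 + eps).
    """
    n, total = len(proc_times), sum(proc_times)
    K = max(1.0, eps * total / (2 * n))
    scaled = [int(p // K) for p in proc_times]

    # Exact subset-sum DP on the rounded instance, remembering one subset per load.
    achievers = {0: []}
    for idx, s in enumerate(scaled):
        for load, subset in list(achievers.items()):
            achievers.setdefault(load + s, subset + [idx])

    # Take the rounded split closest to half the rounded total, then measure it
    # with the original (unrounded) processing times.
    half = sum(scaled) / 2
    subset = min(achievers.values(),
                 key=lambda sub: abs(sum(scaled[i] for i in sub) - half))
    load1 = sum(proc_times[i] for i in subset)
    return max(load1, total - load1)

print(fptas_balanced_split([7, 4, 5, 6], eps=0.1))  # 11 on this toy instance
```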
Abstract:
Single machine scheduling problems are considered in which the processing times of jobs depend on their positions in the schedule and the due-dates are assigned either according to the CON rule (a due-date common to all jobs is chosen) or according to the SLK rule (the due-dates are computed by increasing the actual processing time of each job by a slack common to all jobs). Polynomial-time dynamic programming algorithms are proposed for the problems with objective functions that include the cost of assigning the due-dates, the total cost of discarded jobs (which are not scheduled) and, possibly, the total earliness of the scheduled jobs.
Abstract:
We consider a problem of scheduling jobs on m parallel machines. The machines are dedicated, i.e., for each job the processing machine is known in advance. We mainly concentrate on the model in which at any time there is one unit of an additional resource. Any job may be assigned the resource and this reduces its processing time. A job that is given the resource uses it at each time of its processing. No two jobs are allowed to use the resource simultaneously. The objective is to minimize the makespan. We prove that the two-machine problem is NP-hard in the ordinary sense, describe a pseudopolynomial dynamic programming algorithm and convert it into an FPTAS. For the problem with an arbitrary number of machines we present an algorithm with a worst-case ratio close to 3/2, and close to 3, if a job can be given several units of the resource. For the problem with a fixed number of machines we give a PTAS. Virtually all algorithms rely on a certain variant of the linear knapsack problem (maximization, minimization, multiple-choice, bicriteria). © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008
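The abstract notes that virtually all of the proposed algorithms rely on a variant of the linear knapsack problem but does not reproduce it. For reference, a minimal 0/1 knapsack dynamic program of the kind such algorithms typically build on is sketched below; the reading of the capacity as a resource budget and the sample numbers are illustrative assumptions only.

```python
def knapsack_max(values, weights, capacity):
    """Classical 0/1 knapsack dynamic program, O(n * capacity) time.

    dp[c] = best total value achievable with total weight at most c.
    """
    dp = [0] * (capacity + 1)
    for v, w in zip(values, weights):
        # Iterate capacities downwards so each item is used at most once.
        for c in range(capacity, w - 1, -1):
            dp[c] = max(dp[c], dp[c - w] + v)
    return dp[capacity]

# Hypothetical reading: which jobs should receive the single resource unit to
# save the most processing time within a limited resource-availability window?
time_saved = [3, 5, 4]        # reduction in processing time if a job gets the resource
resource_time = [2, 4, 3]     # how long each job would hold the resource
print(knapsack_max(time_saved, resource_time, capacity=5))  # 7 (jobs 1 and 3)
```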
Abstract:
We consider single machine scheduling and due date assignment problems in which the processing time of a job depends on its position in a processing sequence. The objective functions include the cost of changing the due dates, the total cost of discarded jobs that cannot be completed by their due dates and, possibly, the total earliness of the scheduled jobs. We present polynomial-time dynamic programming algorithms in the case of two popular due date assignment methods: CON and SLK. The considered problems are related to mathematical models of cooperation between the manufacturer and the customer in supply chain scheduling.
Abstract:
This paper derives optimal life histories for fishes or other animals in relation to the size spectrum of the ecological community in which they are both predators and prey. Assuming log-linear size spectra and well-known scaling laws for feeding and mortality, we first construct the energetics of the individual. From these we find, using dynamic programming, the optimal allocation of energy between growth and reproduction as well as the trade-off between offspring size and numbers. Optimal strategies were found to be strongly dependent on the size spectrum slope. For steep size spectra (numbers declining rapidly with size), determinate growth was optimal and allocation to somatic growth increased rapidly with increasing slope. However, restricting reproduction to a fixed mating season changed optimal allocations to give indeterminate growth approximating a von Bertalanffy trajectory. The optimal offspring size was as small as possible given other restrictions such as newborn starvation mortality. For shallow size spectra, a finite optimal maturity size required a decline in fitness at large size or age. All the results are compared with observed size spectra of fish communities to show their consistency and relevance.
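As a rough illustration of the dynamic programming step (backward induction over age and size, trading somatic growth against reproduction), here is a deliberately simplified sketch. The binary grow-or-reproduce decision, the discrete size classes and all parameter values are assumptions made for illustration and are much cruder than the paper's model.

```python
def optimal_policy(max_age, fecundity, survival):
    """Backward induction over (age, size class): in each season the animal
    either grows one size class or reproduces, and the policy maximising
    expected lifetime reproductive output is recovered age by age, last first.
    """
    n = len(fecundity)
    future = [0.0] * n          # fitness-to-go after the final season
    policy = []                 # policy[age][size] in {"grow", "reproduce"}
    for age in range(max_age - 1, -1, -1):
        value, choice = [], []
        for s in range(n):
            grow = survival[s] * future[min(s + 1, n - 1)]
            reproduce = fecundity[s] + survival[s] * future[s]
            if grow > reproduce:
                value.append(grow)
                choice.append("grow")
            else:
                value.append(reproduce)
                choice.append("reproduce")
        future = value
        policy.insert(0, choice)
    return policy, future

# Three size classes; larger individuals are far more fecund per season.
policy, fitness_at_birth = optimal_policy(
    max_age=5, fecundity=[0.2, 1.0, 5.0], survival=[0.8, 0.8, 0.8])
for age, decisions in enumerate(policy):
    print(age, decisions)
```

On this toy instance the recovered policy is to grow until the largest size class is reached and then reproduce, which loosely mirrors the determinate-growth behaviour the abstract describes for steep size spectra.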
Abstract:
Electric vehicles (EVs) do not emit tailpipe exhaust fumes in the same manner as internal combustion engine vehicles. Optimal benefits can only be achieved if EVs are deployed effectively, so that the tailpipe emissions are not substituted by additional emissions in the electricity sector. This paper examines the potential contributions that plug-in hybrid electric vehicles (PHEVs) can make in reducing carbon dioxide emissions. The paper presents the results of a generation expansion model for Northern Ireland and the Republic of Ireland built using the dynamic programming based long-term generation expansion planning tool, the Wien Automatic System Planning IV tool. The model optimizes power dispatch using hourly electricity demand curves for each year up to 2020, while incorporating generator characteristics and operational requirements such as energy not served and loss of load probability, and satisfying constraints on environmental emissions, fuel availability and generator operational and maintenance costs. In order to simulate the effect of PHEVs, two distinct charging scenarios are applied based on a peak tariff and an off-peak tariff. The importance and influence of the charging regime on the amount of energy used and the gaseous emissions displaced is determined and discussed.
Abstract:
In this paper, we exploit the analogy between protein sequence alignment and image pair correspondence to design a bioinformatics-inspired framework for stereo matching based on dynamic programming. This approach also led to the creation of a meaningfulness graph, which helps to predict matching validity according to image overlap and pixel similarity. Finally, we propose a procedure to estimate all matching parameters automatically. This work is evaluated qualitatively and quantitatively using a standard benchmarking dataset and by conducting stereo matching experiments between images captured at different resolutions. Results confirm the validity of the computer vision/bioinformatics analogy to develop a versatile and accurate low-complexity stereo matching algorithm.
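The abstract does not reproduce the matching formulation; the sketch below shows the underlying analogy in its simplest form, a Needleman-Wunsch-style dynamic program that aligns a single pair of scanlines and treats occluded pixels as alignment gaps. The squared-difference matching cost, the fixed occlusion penalty and the function name match_scanline are illustrative assumptions, not the authors' algorithm.

```python
import numpy as np

def match_scanline(left, right, occlusion_cost=2.0):
    """Align one left/right scanline pair with a Needleman-Wunsch-style DP.

    Matched pixel pairs pay a squared intensity difference; skipped (occluded)
    pixels pay a fixed penalty, exactly as gaps do in sequence alignment.
    Returns the matched (left_index, right_index) pairs.
    """
    n, m = len(left), len(right)
    cost = np.zeros((n + 1, m + 1))
    cost[:, 0] = np.arange(n + 1) * occlusion_cost
    cost[0, :] = np.arange(m + 1) * occlusion_cost
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = cost[i - 1, j - 1] + (left[i - 1] - right[j - 1]) ** 2
            cost[i, j] = min(match,
                             cost[i - 1, j] + occlusion_cost,   # left pixel occluded
                             cost[i, j - 1] + occlusion_cost)   # right pixel occluded
    # Backtrack; the disparity of a pair is left_index - right_index.
    pairs, i, j = [], n, m
    while i > 0 and j > 0:
        if cost[i, j] == cost[i - 1, j - 1] + (left[i - 1] - right[j - 1]) ** 2:
            pairs.append((i - 1, j - 1))
            i, j = i - 1, j - 1
        elif cost[i, j] == cost[i - 1, j] + occlusion_cost:
            i -= 1
        else:
            j -= 1
    return list(reversed(pairs))

# The right scanline is the left one shifted by one pixel, with a new sample at the end.
left = np.array([1.0, 2.0, 5.0, 9.0, 5.0, 2.0, 1.0])
right = np.array([2.0, 5.0, 9.0, 5.0, 2.0, 1.0, 7.0])
print(match_scanline(left, right))  # every matched pair has disparity 1
```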
Abstract:
Although pumped hydro storage is seen as a strategic key asset by grid operators, financing it is complicated in newly liberalised markets. It could be argued that the optimum generation portfolio is now determined by the economic viability of generators based on a short- to medium-term return on investment. This has meant that capital-intensive projects such as pumped hydro storage are less attractive for wholesale electricity companies because the payback periods are too long. In tandem, a significant amount of wind power has entered the generation mix, which has resulted in operating and planning integration issues due to wind's inherently uncertain and spatially and temporally varying nature. These integration issues can be overcome using fast-acting gas peaking plant or energy storage. Most analysis of wind power integration using storage to date has used stochastic optimisation for power system balancing or arbitrage modelling to examine techno-economic viability. In this research a deterministic dynamic programming long-term generation expansion model is employed to optimise the generation mix, total system costs and total carbon dioxide emissions, and, unlike other studies, to calculate the reserve needed to firm wind power. The key finding of this study is that the incentive to build capital-intensive pumped hydro storage to firm wind power is limited unless exogenous market costs come very strongly into play. Furthermore, it was demonstrated that reserve increases with increasing wind power, showing the importance of ancillary services in future power systems. © 2014 Elsevier Ltd. All rights reserved.
Abstract:
Traditional internal combustion engine vehicles are a major contributor to global greenhouse gas emissions and other air pollutants, such as particulate matter and nitrogen oxides. If the tailpipe point emissions could be managed centrally without reducing the commercial and personal user functionalities, then one of the most attractive solutions for achieving a significant reduction of emissions in the transport sector would be the mass deployment of electric vehicles. Though electric vehicle sales are still hindered by battery performance, cost and a few other technological bottlenecks, focused commercialisation and support from government policies are encouraging large-scale electric vehicle adoption. The mass proliferation of plug-in electric vehicles is likely to bring a significant additional electric load onto the grid, creating a highly complex operational problem for power system operators. Electric vehicle batteries also have the ability to act as energy storage points on the distribution system. This dual charging and storage impact of many uncontrollable small kW loads (consumers will want maximum flexibility) on a distribution system that was not originally designed for such operation has the potential to be detrimental to grid balancing. Intelligent scheduling methods, if established correctly, could smoothly integrate electric vehicles onto the grid. Such methods will help to avoid cycling of large combustion plants and the use of expensive fossil-fuel peaking plant, match renewable generation to electric vehicle charging, and prevent overloading of the distribution system that would reduce power quality. In this paper, state-of-the-art scheduling methods to integrate plug-in electric vehicles are reviewed, examined and categorised based on their computational techniques. Thus, various existing approaches covering analytical scheduling, conventional optimisation methods (e.g. linear, non-linear mixed integer programming and dynamic programming), game theory, and meta-heuristic algorithms including genetic algorithms and particle swarm optimisation are all comprehensively surveyed, offering a systematic reference for grid scheduling considering intelligent electric vehicle integration.
Abstract:
Under the European Union Renewable Energy Directive each Member State is mandated to ensure that 10% of transport energy (excluding aviation and marine transport) comes from renewable sources by 2020. The Irish Government intends to achieve this target with a number of policies, including ensuring that 10% of all vehicles in the transport fleet are powered by electricity by 2020. This paper investigates the impact of the 10% electric vehicle target in Ireland in 2020 using a dynamic programming based long-term generation expansion planning model. The model developed optimizes power dispatch using hourly electricity demand curves up to 2020, while incorporating generator characteristics and operational requirements such as energy not served and loss of load probability, and satisfying constraints on environmental emissions, fuel availability and generator operational and maintenance costs. Two distinct scenarios based on peak and off-peak charging regimes are analysed in order to simulate the effects of electric vehicle charging in 2020. The importance and influence of the charging regimes on the amount of energy used and the tailpipe emissions displaced is then determined.
Abstract:
The introduction of the Tesla in 2008 demonstrated to the public the potential of electric vehicles in terms of reducing fuel consumption and greenhouse gas emissions from the transport sector. It brought electric vehicles back into the spotlight worldwide at a moment when fossil fuel prices were reaching unexpected highs due to increased demand and strong economic growth. The energy storage capability of fleets of electric vehicles, as well as their potentially random discharging and charging, poses challenges to the grid in terms of operation and control. Optimal scheduling strategies are key to integrating large numbers of electric vehicles into the smart grid. In this paper, state-of-the-art optimization methods for scheduling the grid integration of electric vehicles are reviewed. The paper starts with a concise introduction to analytical charging strategies, followed by a review of a number of classical numerical optimization methods, including linear programming, non-linear programming and dynamic programming, as well as some other means such as queuing theory. Meta-heuristic techniques are then discussed to deal with the complex, high-dimensional and multi-objective scheduling problem associated with the stochastic charging and discharging of electric vehicles. Finally, future research directions are suggested.
Abstract:
Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is to limit diffusive processes spreading over the network, for example mitigating pandemic disease or computer virus spread. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard and the number of constraints is cubic in the number of vertices, making very large-scale problems impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming and evolutionary algorithms are unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing a vertex in sequential fashion impacts the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
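The DFSH ranking function itself is not reproduced in the abstract, but the objective it optimises, residual pairwise connectivity, is simple to evaluate: after the chosen vertices are removed, each connected component of size s contributes s(s-1)/2 still-connected pairs. The sketch below evaluates that objective with a single depth-first traversal; the adjacency-list representation and the toy path graph are illustrative assumptions.

```python
from collections import defaultdict

def pairwise_connectivity(adjacency, removed):
    """Objective of the critical node detection problem: the number of vertex
    pairs that remain connected after the removed vertices are deleted.

    Each connected component of size s contributes s * (s - 1) / 2 pairs, so the
    value follows from one depth-first traversal of the residual graph.
    """
    removed = set(removed)
    seen = set(removed)
    total = 0
    for start in adjacency:
        if start in seen:
            continue
        # Iterative DFS to measure the size of this component.
        stack, size = [start], 0
        seen.add(start)
        while stack:
            v = stack.pop()
            size += 1
            for w in adjacency[v]:
                if w not in seen:
                    seen.add(w)
                    stack.append(w)
        total += size * (size - 1) // 2
    return total

# A path a-b-c-d-e: removing the middle vertex c splits it into two 2-vertex parts.
graph = defaultdict(list)
for u, v in [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]:
    graph[u].append(v)
    graph[v].append(u)
print(pairwise_connectivity(graph, removed=[]))     # 10, i.e. C(5, 2)
print(pairwise_connectivity(graph, removed=["c"]))  # 2, only two connected pairs remain
```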