950 results for Dynamic programming (DP)


Relevance: 80.00%

Abstract:

Single machine scheduling problems are considered in which the processing of jobs depends on the positions of the jobs in a schedule, and the due-dates are assigned either according to the CON rule (a due-date common to all jobs is chosen) or according to the SLK rule (the due-dates are computed by increasing the actual processing time of each job by a slack common to all jobs). Polynomial-time dynamic programming algorithms are proposed for the problems with objective functions that include the cost of assigning the due-dates, the total cost of discarded jobs (which are not scheduled) and, possibly, the total earliness of the scheduled jobs.
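
As a quick illustration of the two due-date rules (not the paper's algorithm), the sketch below computes due dates under CON and SLK for a fixed schedule; the function names and parameters are invented for this example, and the paper's dynamic programme additionally chooses the schedule, the due-date parameters and the set of discarded jobs.

```python
# Illustrative only: the CON and SLK due-date rules for a fixed schedule with
# known actual (position-dependent) processing times. The paper's DP also
# chooses the schedule and which jobs to discard; that is not reproduced here.

def con_due_dates(n_jobs: int, common_due_date: float) -> list[float]:
    """CON rule: every job receives the same due date."""
    return [common_due_date] * n_jobs

def slk_due_dates(actual_proc_times: list[float], slack: float) -> list[float]:
    """SLK rule: due date = actual processing time + a slack common to all jobs."""
    return [p + slack for p in actual_proc_times]

print(con_due_dates(4, common_due_date=10.0))     # [10.0, 10.0, 10.0, 10.0]
print(slk_due_dates([3.0, 5.0, 2.0], slack=4.0))  # [7.0, 9.0, 6.0]
```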

Relevance: 80.00%

Abstract:

We consider a problem of scheduling jobs on m parallel machines. The machines are dedicated, i.e., for each job the processing machine is known in advance. We mainly concentrate on the model in which at any time there is one unit of an additional resource. Any job may be assigned the resource and this reduces its processing time. A job that is given the resource uses it at each time of its processing. No two jobs are allowed to use the resource simultaneously. The objective is to minimize the makespan. We prove that the two-machine problem is NP-hard in the ordinary sense, describe a pseudopolynomial dynamic programming algorithm and convert it into an FPTAS. For the problem with an arbitrary number of machines we present an algorithm with a worst-case ratio close to 3/2, and close to 3 if a job can be given several units of the resource. For the problem with a fixed number of machines we give a PTAS. Virtually all algorithms rely on a certain variant of the linear knapsack problem (maximization, minimization, multiple-choice, bicriteria). © 2008 Wiley Periodicals, Inc. Naval Research Logistics, 2008
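
Since the abstract points to the linear knapsack problem as the common building block, the following minimal sketch shows the standard pseudopolynomial 0/1 knapsack dynamic programme (maximization form only); it is a generic textbook routine, not the paper's specific variant.

```python
# Generic pseudopolynomial 0/1 knapsack DP (maximization form); the paper's
# algorithms use several knapsack variants, which are not reproduced here.

def knapsack_max(values: list[int], weights: list[int], capacity: int) -> int:
    """Maximum total value of a subset of items whose weight fits in capacity."""
    best = [0] * (capacity + 1)               # best[c] = max value with capacity c
    for v, w in zip(values, weights):
        for c in range(capacity, w - 1, -1):  # reverse scan keeps each item 0/1
            best[c] = max(best[c], best[c - w] + v)
    return best[capacity]

print(knapsack_max([6, 10, 12], [1, 2, 3], capacity=5))   # -> 22 (items 2 and 3)
```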

Relevance: 80.00%

Abstract:

We consider single machine scheduling and due date assignment problems in which the processing time of a job depends on its position in a processing sequence. The objective functions include the cost of changing the due dates, the total cost of discarded jobs that cannot be completed by their due dates and, possibly, the total earliness of the scheduled jobs. We present polynomial-time dynamic programming algorithms in the case of two popular due date assignment methods: CON and SLK. The considered problems are related to mathematical models of cooperation between the manufacturer and the customer in supply chain scheduling.

Relevance: 80.00%

Abstract:

This paper derives optimal life histories for fishes or other animals in relation to the size spectrum of the ecological community in which they are both predators and prey. Assuming log-linear size spectra and well-known scaling laws for feeding and mortality, we first construct the energetics of the individual. From these we find, using dynamic programming, the optimal allocation of energy between growth and reproduction as well as the trade-off between offspring size and numbers. Optimal strategies were found to be strongly dependent on the slope of the size spectrum. For steep size spectra (numbers declining rapidly with size), determinate growth was optimal and allocation to somatic growth increased rapidly with increasing slope. However, restricting reproduction to a fixed mating season changed the optimal allocations to give indeterminate growth approximating a von Bertalanffy trajectory. The optimal offspring size was as small as possible given other restrictions such as newborn starvation mortality. For shallow size spectra, a finite optimal size at maturity required a decline in fitness at large size or age. All the results are compared with observed size spectra of fish communities to show their consistency and relevance.
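
To make the dynamic-programming step concrete, here is a toy backward-induction sketch of a state-variable life-history model in which an individual of a given size splits its energy budget between growth and reproduction; all functional forms, parameters and the discretization are invented for illustration and are not the authors' model.

```python
# Toy backward-induction model: invented scaling laws, parameters and grid;
# not the authors' model, only the general shape of the dynamic programme.
import numpy as np

T = 30                              # number of ages
sizes = np.arange(1, 51)            # discretized body sizes
alloc = np.linspace(0.0, 1.0, 11)   # fraction of energy allocated to reproduction

def intake(w):                      # assumed feeding scaling with size
    return 0.5 * w ** 0.75

def survival(w):                    # assumed survival probability per time step
    return np.exp(-1.0 / w ** 0.25)

V = np.zeros((T + 1, sizes.size))   # V[t, i] = expected future reproduction
policy = np.zeros((T, sizes.size))  # optimal allocation fraction

for t in range(T - 1, -1, -1):
    for i, w in enumerate(sizes):
        energy = intake(w)
        repro = alloc * energy                       # current reproduction
        growth = (1.0 - alloc) * energy              # somatic growth
        w_next = np.clip(np.round(w + growth).astype(int), 1, sizes[-1])
        vals = repro + survival(w) * V[t + 1, w_next - 1]
        policy[t, i] = alloc[vals.argmax()]
        V[t, i] = vals.max()

print("optimal allocation at age 0 for size 10:", policy[0, 9])
```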

Relevance: 80.00%

Abstract:

Electric vehicles (EVs) do not emit tailpipe exhaust fumes in the same manner as internal combustion engine vehicles. Optimal benefits can only be achieved if EVs are deployed effectively, so that the tailpipe emissions are not substituted by additional emissions in the electricity sector. This paper examines the potential contributions that plug-in hybrid electric vehicles (PHEVs) can make in reducing carbon dioxide emissions. The paper presents the results of a generation expansion model for Northern Ireland and the Republic of Ireland built using the dynamic-programming-based long-term generation expansion planning tool Wien Automatic System Planning IV (WASP-IV). The model optimizes power dispatch using hourly electricity demand curves for each year up to 2020, incorporating generator characteristics and operational requirements such as energy not served and loss of load probability, while satisfying constraints on environmental emissions, fuel availability and generator operational and maintenance costs. In order to simulate the effect of PHEVs, two distinct charging scenarios are applied, based on a peak tariff and an off-peak tariff. The importance and influence of the charging regime on the amount of energy used and the gaseous emissions displaced is determined and discussed.
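
As a much simplified illustration of dynamic-programming-based generation expansion planning (a toy, not WASP-IV), the sketch below chooses how many identical candidate units to build each year so that peak demand is met at minimum cost; all figures are invented.

```python
# Toy capacity-expansion DP (not WASP-IV): all figures are invented.
from functools import lru_cache

years = [2016, 2017, 2018, 2019, 2020]
peak_demand = {2016: 6.0, 2017: 6.4, 2018: 6.9, 2019: 7.3, 2020: 7.8}  # GW
existing_gw = 6.2          # existing installed capacity, GW
unit_size_gw = 0.4         # size of one candidate unit, GW
unit_capex = 500.0         # cost of building one unit (arbitrary units)
shortfall_penalty = 1e6    # penalty if peak demand cannot be met

@lru_cache(maxsize=None)
def cost_to_go(year_idx: int, units_built: int) -> float:
    """Minimum cost of serving peak demand from this year onward."""
    if year_idx == len(years):
        return 0.0
    year = years[year_idx]
    best = float("inf")
    for new_units in range(4):                       # build 0-3 units this year
        total = units_built + new_units
        capacity = existing_gw + total * unit_size_gw
        penalty = 0.0 if capacity >= peak_demand[year] else shortfall_penalty
        cost = new_units * unit_capex + penalty + cost_to_go(year_idx + 1, total)
        best = min(best, cost)
    return best

print("minimum expansion cost:", cost_to_go(0, 0))   # builds 4 units by 2020
```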

Relevance: 80.00%

Abstract:

In this paper, we exploit the analogy between protein sequence alignment and image pair correspondence to design a bioinformatics-inspired framework for stereo matching based on dynamic programming. This approach also led to the creation of a meaningfulness graph, which helps to predict matching validity according to image overlap and pixel similarity. Finally, we propose a procedure to estimate all matching parameters automatically. This work is evaluated qualitatively and quantitatively using a standard benchmarking dataset and by conducting stereo matching experiments between images captured at different resolutions. The results confirm the validity of the computer vision/bioinformatics analogy for developing a versatile and accurate low-complexity stereo matching algorithm.
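
The sequence-alignment analogy can be illustrated with a Needleman-Wunsch-style scanline alignment, where a gap penalty stands in for occlusions; the scoring and parameters below are illustrative, not those of the paper.

```python
# Needleman-Wunsch-style alignment of two scanlines; the gap penalty stands in
# for occlusions. Scoring and parameters are illustrative, not the paper's.
import numpy as np

def align_scanlines(left: np.ndarray, right: np.ndarray, gap: float = 8.0) -> float:
    """Minimum alignment cost between two 1-D intensity rows."""
    n, m = len(left), len(right)
    D = np.zeros((n + 1, m + 1))
    D[:, 0] = np.arange(n + 1) * gap              # leading unmatched left pixels
    D[0, :] = np.arange(m + 1) * gap              # leading unmatched right pixels
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            match = abs(float(left[i - 1]) - float(right[j - 1]))
            D[i, j] = min(D[i - 1, j - 1] + match,  # pixel i-1 corresponds to j-1
                          D[i - 1, j] + gap,        # left pixel unmatched (occluded)
                          D[i, j - 1] + gap)        # right pixel unmatched (occluded)
    return float(D[n, m])   # disparities would be read off by backtracking D

left = np.array([10, 12, 50, 52, 54], dtype=float)
right = np.array([10, 50, 52, 54, 90], dtype=float)
print("scanline alignment cost:", align_scanlines(left, right))
```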

Relevance: 80.00%

Abstract:

Although pumped hydro storage is seen as a strategic key asset by grid operators, financing it is complicated in newly liberalised markets. It could be argued that the optimum generation portfolio is now determined by the economic viability of generators based on a short- to medium-term return on investment. This has meant that capital-intensive projects such as pumped hydro storage are less attractive for wholesale electricity companies because the payback periods are too long. In tandem, a significant amount of wind power has entered the generation mix, which has resulted in operating and planning integration issues due to wind's inherently uncertain and spatially and temporally varying nature. These integration issues can be overcome using fast-acting gas peaking plant or energy storage. Most analysis of wind power integration using storage to date has used stochastic optimisation for power system balancing or arbitrage modelling to examine techno-economic viability. In this research a deterministic dynamic programming long-term generation expansion model is employed to optimise the generation mix, total system costs and total carbon dioxide emissions, and, unlike other studies, it calculates the reserve required to firm wind power. The key finding of this study is that the incentive to build capital-intensive pumped hydro storage to firm wind power is limited unless exogenous market costs come very strongly into play. Furthermore, it was demonstrated that reserve increases with increasing wind power, showing the importance of ancillary services in future power systems. © 2014 Elsevier Ltd. All rights reserved.

Relevance: 80.00%

Abstract:

Traditional internal combustion engine vehicles are a major contributor to global greenhouse gas emissions and other air pollutants, such as particulate matter and nitrogen oxides. If tailpipe point emissions could be managed centrally without reducing commercial and personal user functionality, then one of the most attractive solutions for achieving a significant reduction of emissions in the transport sector would be the mass deployment of electric vehicles. Though electric vehicle sales are still hindered by battery performance, cost and a few other technological bottlenecks, focused commercialisation and supportive government policies are encouraging large-scale electric vehicle adoption. The mass proliferation of plug-in electric vehicles is likely to bring a significant additional electric load onto the grid, creating a highly complex operational problem for power system operators. Electric vehicle batteries also have the ability to act as energy storage points on the distribution system. This combined charging and storage impact of many uncontrollable small kW loads (since consumers will want maximum flexibility) on a distribution system that was not originally designed for such operations has the potential to be detrimental to grid balancing. Intelligent scheduling methods, if established correctly, could smoothly integrate electric vehicles onto the grid: they can help to avoid cycling of large combustion plant, avoid the use of expensive fossil-fuel peaking plant, match renewable generation to electric vehicle charging, and prevent overloading of the distribution system that would reduce power quality. In this paper, state-of-the-art scheduling methods for integrating plug-in electric vehicles are reviewed, examined and categorised based on their computational techniques. Thus, in addition to various existing approaches covering analytical scheduling, conventional optimisation methods (e.g. linear, non-linear mixed integer programming and dynamic programming) and game theory, meta-heuristic algorithms including genetic algorithms and particle swarm optimisation are all comprehensively surveyed, offering a systematic reference for grid scheduling that considers intelligent electric vehicle integration.
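
As a concrete example of one of the surveyed techniques, the toy dynamic programme below schedules the charging of a single vehicle so that the battery reaches a required level at minimum energy cost; prices, battery size and charge rate are invented for illustration.

```python
# Toy DP for charging one vehicle at minimum cost; prices, battery size and
# charge rate are invented for illustration.
from functools import lru_cache

prices = [0.22, 0.20, 0.12, 0.08, 0.07, 0.09, 0.15, 0.24]   # price per kWh, 8 hours
charge_per_hour = 5      # kWh added during an hour of charging
required_kwh = 20        # energy that must be in the battery at the end

@lru_cache(maxsize=None)
def min_cost(hour: int, kwh: int) -> float:
    """Cheapest way to finish charging from this hour with `kwh` already stored."""
    if hour == len(prices):
        return 0.0 if kwh >= required_kwh else float("inf")
    stay_idle = min_cost(hour + 1, kwh)
    charge_now = prices[hour] * charge_per_hour + min_cost(hour + 1, kwh + charge_per_hour)
    return min(stay_idle, charge_now)

print("cheapest charging cost:", round(min_cost(0, 0), 2))
# The optimum charges in the four cheapest hours: (0.12 + 0.08 + 0.07 + 0.09) * 5 kWh = 1.80
```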

Relevance: 80.00%

Abstract:

Under the European Union Renewable Energy Directive each Member State is mandated to ensure that 10% of transport energy (excluding aviation and marine transport) comes from renewable sources by 2020. The Irish Government intends to achieve this target with a number of policies, including ensuring that 10% of all vehicles in the transport fleet are powered by electricity by 2020. This paper investigates the impact of the 10% electric vehicle target in Ireland in 2020 using a dynamic-programming-based long-term generation expansion planning model. The model optimizes power dispatch using hourly electricity demand curves up to 2020, incorporating generator characteristics and operational requirements such as energy not served and loss of load probability, while satisfying constraints on environmental emissions, fuel availability and generator operational and maintenance costs. Two distinct scenarios, based on peak and off-peak charging regimes, are analysed in order to simulate the effects of electric vehicle charging in 2020. The importance and influence of the charging regimes on the amount of energy used and the tailpipe emissions displaced is then determined.

Relevance: 80.00%

Abstract:

The introduction of the Tesla in 2008 demonstrated to the public the potential of electric vehicles in terms of reducing fuel consumption and greenhouse gas emissions from the transport sector. It brought electric vehicles back into the spotlight worldwide at a moment when fossil fuel prices were reaching unexpected highs due to increased demand and strong economic growth. The energy storage capabilities of fleets of electric vehicles, as well as their potentially random discharging and charging, offer challenges to the grid in terms of operation and control. Optimal scheduling strategies are key to integrating large numbers of electric vehicles into the smart grid. In this paper, state-of-the-art optimization methods for scheduling the grid integration of electric vehicles are reviewed. The paper starts with a concise introduction to analytical charging strategies, followed by a review of a number of classical numerical optimization methods, including linear programming, non-linear programming and dynamic programming, as well as other means such as queuing theory. Meta-heuristic techniques are then discussed to deal with the complex, high-dimensional and multi-objective scheduling problem associated with stochastic charging and discharging of electric vehicles. Finally, future research directions are suggested.

Relevance: 80.00%

Abstract:

Complex networks have recently attracted a significant amount of research attention due to their ability to model real-world phenomena. One important problem often encountered is to limit diffusive processes spreading over the network, for example mitigating pandemic disease or computer virus spread. A number of problem formulations have been proposed that aim to solve such problems based on desired network characteristics, such as maintaining the largest network component after node removal. The recently formulated critical node detection problem aims to remove a small subset of vertices from the network such that the residual network has minimum pairwise connectivity. Unfortunately, the problem is NP-hard and the number of constraints is cubic in the number of vertices, making very large scale problems impossible to solve with traditional mathematical programming techniques. Even approximation strategies such as dynamic programming or evolutionary algorithms are unusable for networks that contain thousands to millions of vertices. A computationally efficient and simple approach is required in such circumstances, but none currently exists. In this thesis, such an algorithm is proposed. The methodology is based on a depth-first search traversal of the network and a specially designed ranking function that considers information local to each vertex. Due to the variety of network structures, a number of characteristics must be taken into consideration and combined into a single rank that measures the utility of removing each vertex. Since removing a vertex in sequential fashion impacts the network structure, an efficient post-processing algorithm is also proposed to quickly re-rank vertices. Experiments on a range of common complex network models with varying numbers of vertices are considered, in addition to real-world networks. The proposed algorithm, DFSH, is shown to be highly competitive and often outperforms existing strategies such as Google PageRank for minimizing pairwise connectivity.
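
For readers unfamiliar with the objective, the sketch below computes the pairwise connectivity of a residual network after a candidate set of nodes is removed (the sum over connected components of |C| choose 2); the graph is illustrative and the thesis's DFSH ranking heuristic itself is not reproduced.

```python
# Pairwise connectivity of the residual network after removing a node set:
# the number of vertex pairs that can still reach each other. The graph and
# removal set are invented; the DFSH ranking itself is not reproduced here.
from collections import defaultdict

def pairwise_connectivity(edges, removed):
    adj, nodes = defaultdict(set), set()
    for u, v in edges:
        nodes.update((u, v))
        if u not in removed and v not in removed:
            adj[u].add(v)
            adj[v].add(u)
    nodes -= removed
    seen, total = set(), 0
    for start in nodes:                      # DFS over each residual component
        if start in seen:
            continue
        stack, comp_size = [start], 0
        seen.add(start)
        while stack:
            node = stack.pop()
            comp_size += 1
            for nxt in adj[node] - seen:
                seen.add(nxt)
                stack.append(nxt)
        total += comp_size * (comp_size - 1) // 2   # |C| choose 2 pairs
    return total

edges = [(1, 2), (2, 3), (3, 4), (4, 5), (3, 6)]
print(pairwise_connectivity(edges, removed={3}))   # 2: pairs (1,2) and (4,5) remain
```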

Relevance: 80.00%

Abstract:

In a recent paper, Bai and Perron (1998) considered theoretical issues related to the limiting distribution of estimators and test statistics in the linear model with multiple structural changes. In this companion paper, we consider practical issues for the empirical application of the procedures. We first address the problem of estimating the break dates and present an efficient algorithm to obtain global minimizers of the sum of squared residuals. This algorithm is based on the principle of dynamic programming and requires at most least-squares operations of order O(T^2) for any number of breaks. Our method can be applied to both pure and partial structural-change models. Second, we consider the problem of forming confidence intervals for the break dates under various hypotheses about the structure of the data and the errors across segments. Third, we address the issue of testing for structural changes under very general conditions on the data and the errors. Fourth, we address the issue of estimating the number of breaks. We present simulation results pertaining to the behavior of the estimators and tests in finite samples. Finally, a few empirical applications are presented to illustrate the usefulness of the procedures. All methods discussed are implemented in a GAUSS program available upon request for non-profit academic use.
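
A minimal sketch of the dynamic-programming idea, for the simplest pure structural-change model (a mean shift only), is given below; it precomputes the sum of squared residuals of every admissible segment and then combines optimal partitions recursively, ignoring the paper's minimum-segment-length and trimming details.

```python
# Toy DP for global minimization of the total SSR with m breaks, for a
# mean-shift model only; not the authors' GAUSS implementation.
import numpy as np

def segment_ssr(y: np.ndarray) -> np.ndarray:
    """ssr[i, j] = SSR of fitting a constant to y[i..j] (inclusive)."""
    T = len(y)
    ssr = np.zeros((T, T))
    for i in range(T):
        for j in range(i, T):
            seg = y[i:j + 1]
            ssr[i, j] = np.sum((seg - seg.mean()) ** 2)
    return ssr

def optimal_breaks(y: np.ndarray, m: int):
    """Global minimizer of the total SSR with m breaks (m + 1 segments)."""
    T, ssr = len(y), segment_ssr(y)
    cost = np.full((m + 1, T), np.inf)   # cost[k, t]: y[0..t] split into k+1 segments
    last = np.zeros((m + 1, T), dtype=int)
    cost[0] = ssr[0]
    for k in range(1, m + 1):
        for t in range(k, T):
            cands = cost[k - 1, k - 1:t] + ssr[k:t + 1, t]
            best = int(np.argmin(cands))
            cost[k, t] = cands[best]
            last[k, t] = best + k        # start index of the last segment
    breaks, t = [], T - 1                # backtrack the break dates
    for k in range(m, 0, -1):
        start = last[k, t]
        breaks.append(start - 1)         # break just before the segment start
        t = start - 1
    return sorted(breaks), cost[m, T - 1]

y = np.concatenate([np.zeros(20), np.ones(20) * 3, np.ones(20) * -2])
print(optimal_breaks(y, m=2))            # -> ([19, 39], ~0.0)
```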

Relevance: 80.00%

Abstract:

This thesis considers a set of methods that allow statistical learning algorithms to better handle the sequential nature of financial portfolio management problems. We begin with the general problem of composing learning algorithms that must handle sequential tasks, in particular that of efficiently updating training sets within a sequential validation framework. We enumerate the desiderata that composition primitives must satisfy and highlight the difficulty of meeting them rigorously and efficiently. We then present a set of algorithms that achieve these objectives and a case study of a complex financial decision-making system that uses these techniques. We next describe a general method for transforming a non-Markovian sequential decision problem into a supervised learning problem by employing a search algorithm based on the K best paths. We address a portfolio-management application in which we train a learning algorithm to directly optimize a Sharpe ratio (or another non-additive criterion incorporating risk aversion). We illustrate the approach with a thorough experimental study, proposing a neural network architecture specialized for portfolio management and comparing it to several alternatives. Finally, we introduce a functional representation of time series that allows forecasts to be made over a variable horizon while using an information set revealed progressively. The approach is based on Gaussian processes, which provide a full covariance matrix between all points for which a forecast is requested. This information is put to good use by an algorithm that actively trades price spreads between commodity futures contracts. The proposed approach yields a significant out-of-sample risk-adjusted return, after transaction costs, on a portfolio of 30 assets.
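
For reference, the Sharpe ratio mentioned above is the usual risk-adjusted performance criterion; in its standard textbook form (the thesis may optimize a variant of it):

```latex
% Textbook form of the Sharpe ratio; the thesis optimizes this or a related
% non-additive risk-adjusted criterion.
S = \frac{\mathbb{E}[R_p - R_f]}{\sigma_p}
```

Here R_p is the portfolio return, R_f the risk-free rate and sigma_p the standard deviation of the excess return. Because both the mean and the standard deviation are computed over the whole trading period, the criterion does not decompose into per-step rewards, which is what makes its direct optimization non-trivial.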

Relevance: 80.00%

Abstract:

Thesis carried out under a cotutelle (joint supervision) agreement between the Université de Montréal and the Université de Technologie de Troyes.

Relevance: 80.00%

Abstract:

This thesis studies models of high-dimensional sequences based on recurrent neural networks (RNNs) and their application to music and speech. Although in principle RNNs can represent the long-term dependencies and complex temporal dynamics of sequences of interest such as video, audio and natural language, they have not been used to their full potential since their introduction by Rumelhart et al. (1986a) because of the difficulty of training them efficiently by gradient descent. Recently, the successful application of Hessian-free optimization and other advanced training techniques has led to a resurgence of their use in several state-of-the-art systems. The work in this thesis is part of that development. The central idea is to exploit the flexibility of RNNs to learn a probabilistic description of sequences of symbols, i.e. high-level information associated with the observed signals, which in turn can serve as a prior to improve the accuracy of information retrieval. For example, by modelling the evolution of groups of notes in polyphonic music, of chords in a harmonic progression, of phonemes in a spoken utterance, or of individual sources in an audio mixture, we can significantly improve methods for polyphonic transcription, chord recognition, speech recognition and audio source separation, respectively. The practical application of our models to these tasks is detailed in the last four articles presented in this thesis. In the first article, we replace the output layer of an RNN with conditional restricted Boltzmann machines to describe much richer multimodal output distributions. In the second article, we evaluate and propose advanced methods for training RNNs. In the last four articles, we examine different ways of combining our symbolic models with deep networks and with non-negative matrix factorization, notably via products of experts, input/output architectures and generative frameworks generalizing hidden Markov models. We also propose and analyze efficient inference methods for these models, such as greedy chronological search, high-dimensional beam search, pruned beam search and gradient descent. Finally, we address the issues of label bias, teacher forcing, temporal smoothing, regularization and pre-training.
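
As a small illustration of one of the inference strategies mentioned above (beam search over a sequential probabilistic model), the sketch below decodes a short symbol sequence from a toy conditional model; the vocabulary and probabilities are invented for illustration and do not come from the thesis.

```python
# Toy beam search over an invented conditional model of symbol sequences;
# illustrative of the inference strategy only, not of the thesis's models.
import math

vocab = ["C", "E", "G"]

def log_prob(prev: str, sym: str) -> float:
    # toy conditional model: favour repeating the previous symbol
    return math.log(0.6 if sym == prev else 0.2)

def beam_search(length: int, beam_width: int = 2):
    # each beam is (sequence, cumulative log-probability)
    beams = [([s], math.log(1.0 / len(vocab))) for s in vocab]
    beams = sorted(beams, key=lambda b: -b[1])[:beam_width]
    for _ in range(length - 1):
        candidates = []
        for seq, score in beams:
            for sym in vocab:
                candidates.append((seq + [sym], score + log_prob(seq[-1], sym)))
        beams = sorted(candidates, key=lambda b: -b[1])[:beam_width]
    return beams

for seq, score in beam_search(4):
    print("".join(seq), round(score, 3))
```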