968 results for Multi-stage programming
Abstract:
Yani Chaushev, Miloslav Sredkov, Krasimir Manev - Every programming contest uses a number of software tools to manage its processes during the competition. Although these tools usually cover the specifics of the particular type of contest satisfactorily, they rarely address the difficulties of long-term data storage and interoperability. This article presents a software tool that addresses these problems. Rather than a complex system covering all aspects of a contest, CORE is a centralized repository for storing and maintaining the data needed by various contests. Its main elements, the current state of the implementation, and the prospects for the system's further development are presented.
Abstract:
Abstract not available
Development of new scenario decomposition techniques for linear and nonlinear stochastic programming
Abstract:
A classical approach to dealing with two-stage and multi-stage optimization problems under uncertainty is scenario analysis. To this end, the uncertainty in some of the problem data is modeled by random vectors with finite, stage-specific supports. Each of these realizations represents a scenario. By using scenarios, it is possible to study simpler versions (subproblems) of the original problem. As a scenario decomposition technique, the progressive hedging algorithm is one of the most popular methods for solving multistage stochastic programming problems. Despite the complete decomposition by scenario, the efficiency of the progressive hedging method is very sensitive to certain practical aspects, such as the choice of the penalty parameter and the handling of the quadratic term in the augmented Lagrangian objective function. For the choice of the penalty parameter, we examine some of the popular methods and propose a new adaptive strategy that aims to better track the progress of the algorithm. Numerical experiments on instances of multistage linear stochastic problems suggest that most existing techniques may either converge prematurely to a suboptimal solution or converge to the optimal solution, but at a very slow rate. In contrast, the new strategy appears robust and efficient: it converged to optimality in all of our experiments and was the fastest in most cases. Regarding the handling of the quadratic term, we review existing techniques and propose the idea of replacing the quadratic term with a linear one. Although the method remains to be tested, our intuition is that it will reduce some of the numerical and theoretical difficulties of the progressive hedging method.
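For readers unfamiliar with the method, here is a minimal sketch of the progressive hedging iteration on a toy two-stage problem with a crude multiplicative penalty update; the scenario data, the closed-form quadratic subproblem, and the rho *= 1.05 rule are illustrative assumptions, not the adaptive strategy proposed in the thesis.

# Minimal progressive hedging sketch for a toy two-stage problem.
# Each scenario s has cost f_s(x) = 0.5 * (x - d_s)^2; the first-stage
# decision x must be common to all scenarios (nonanticipativity).
# NOTE: the penalty update below is a simple illustrative heuristic,
# not the adaptive strategy proposed in the thesis.

import numpy as np

d = np.array([1.0, 2.0, 4.0])      # scenario data (assumed toy values)
p = np.array([0.5, 0.3, 0.2])      # scenario probabilities
rho = 1.0                          # initial penalty parameter

x = d.copy()                       # scenario-wise minimizers of f_s
w = np.zeros_like(d)               # dual multipliers

for it in range(100):
    x_bar = float(p @ x)                         # implementable (averaged) decision
    w += rho * (x - x_bar)                       # multiplier update
    # Closed-form solution of min_x f_s(x) + w_s*x + (rho/2)*(x - x_bar)^2
    x = (d - w + rho * x_bar) / (1.0 + rho)
    residual = float(np.max(np.abs(x - x_bar)))  # nonanticipativity gap
    if residual < 1e-8:
        break
    rho *= 1.05                                  # crude penalty increase (assumption)

print(f"converged in {it + 1} iterations, x = {x_bar:.4f}")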
Abstract:
Digital oscilloscopes are used in many fields of knowledge and have become indispensable instruments in electronic engineering. Thanks to the advent of Field Programmable Gate Arrays (FPGAs), reconfigurable measurement instruments, given their advantages, i.e., high performance, low cost and high flexibility, are increasingly an alternative to the instruments traditionally used in laboratories. With the goal of standardizing the access to and control of this type of instrument, this thesis describes the design and implementation of a reconfigurable digital oscilloscope based on the IEEE 1451.0 standard. Defined according to an architecture based on this standard, the oscilloscope's characteristics are described in a data structure called a Transducer Electronic Data Sheet (TEDS), and it is controlled using a set of standardized commands. The oscilloscope implements a set of basic features and functionalities, all verified experimentally. These include a bandwidth of 575 kHz, a measurement range of 0.4 V to 2.9 V, and the ability to define a set of horizontal scales, the trigger level and slope, and the coupling mode with the circuit under analysis. Architecturally, the oscilloscope consists of a module specified in the Verilog hardware description language (HDL) and an interface developed in the Java® programming language. The module is embedded in an FPGA and performs all of the oscilloscope's processing. The interface allows the oscilloscope to be controlled and the measured signal to be displayed. The project used an Analog-to-Digital (A/D) converter with a maximum sampling frequency of 1.5 MHz and 14 bits of resolution whose limitations required the implementation of a multi-stage interpolation system with digital filters.
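As a rough illustration of the kind of multi-stage interpolation mentioned at the end of the abstract, the sketch below upsamples a toy signal by a factor of 4 in two stages of 2 using FIR anti-imaging filters; the factors, filter lengths and cutoff frequencies are guesses for illustration, not the thesis design.

# Two-stage interpolation with FIR filters, applied to a toy signal sampled
# at the 1.5 MHz ADC rate cited in the abstract. Factors, filter lengths and
# cutoffs are illustrative assumptions.

import numpy as np
from scipy.signal import firwin, upfirdn

fs = 1.5e6                                   # ADC sample rate from the abstract
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 100e3 * t)            # toy 100 kHz test signal

def interpolate_by_2(signal, fs_in, numtaps=63):
    """Upsample by 2 and low-pass filter to suppress the spectral image."""
    h = firwin(numtaps, cutoff=0.45 * fs_in, fs=2 * fs_in)  # anti-imaging filter
    return 2 * upfirdn(h, signal, up=2), 2 * fs_in          # gain of 2 restores amplitude

# Overall interpolation by 4, split into two cheaper stages of 2.
y, fs1 = interpolate_by_2(x, fs)
y, fs2 = interpolate_by_2(y, fs1)
print(len(x), len(y), fs2)                   # 4x more samples at 6 MHz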
Abstract:
To date, the most efficient Cu(In,Ga)Se2 thin film solar cells have been prepared using a rather complex growth process often referred to as three-stage or multi-stage. This family of processes is mainly characterized by a first step deposited with only In, Ga and Se flux to form a first layer. Cu is added in a second step until the film becomes slightly Cu-rich, after which the film is converted to its final Cu-poor composition in a third stage, again with no or very little addition of Cu. In this paper, a comparison is made between solar cells prepared with the three-stage process and with a one-stage/in-line process with the same composition, thickness, and solar cell stack. The one-stage process is easier to use at an industrial scale and does not involve Cu-rich transitions. The samples were analyzed using glow discharge optical emission spectroscopy, scanning electron microscopy, X-ray diffraction, current-voltage-temperature, capacitance-voltage, external quantum efficiency, transmission/reflection, and photoluminescence measurements. It was concluded that, in spite of differences in the texturing, morphology and Ga gradient, the electrical performance of the two types of samples is quite similar, as demonstrated by the similar J-V behavior, quantum spectral response, and estimated recombination losses.
Abstract:
One major component of power system operation is generation scheduling. The objective of this work is to develop efficient control strategies for power scheduling problems through Reinforcement Learning approaches. The three important active power scheduling problems are Unit Commitment, Economic Dispatch and Automatic Generation Control. Numerical solution methods proposed for power scheduling are insufficient for handling large and complex systems. Soft computing methods like Simulated Annealing, Evolutionary Programming, etc., are efficient in handling complex cost functions, but are limited in handling the stochastic data present in a practical system. Also, the learning steps have to be repeated for each load demand, which increases the computation time.

Reinforcement Learning (RL) is a method of learning through interactions with an environment. The main advantage of this approach is that it does not require a precise mathematical formulation. It can learn either by interacting with the environment or by interacting with a simulation model. Several optimization and control problems have been solved through Reinforcement Learning approaches, but applications of Reinforcement Learning in the field of power systems have been few. The objective is to introduce and extend Reinforcement Learning approaches for the active power scheduling problems in an implementable manner. The main objectives can be enumerated as: (i) evolve Reinforcement Learning based solutions to the Unit Commitment problem; (ii) find suitable solution strategies through Reinforcement Learning approaches for Economic Dispatch; (iii) extend the Reinforcement Learning solution to Automatic Generation Control with a different perspective; (iv) check the suitability of the scheduling solutions for one of the existing power systems.

The first part of the thesis is concerned with the Reinforcement Learning approach to the Unit Commitment problem. The Unit Commitment problem is formulated as a multi-stage decision process. A Q-learning solution is developed to obtain the optimum commitment schedule. A method of state aggregation is used to formulate an efficient solution considering the minimum up time / down time constraints. The performance of the algorithms is evaluated for different systems and compared with other stochastic methods like Genetic Algorithms.

The second stage of the work is concerned with solving the Economic Dispatch problem. A simple and straightforward decision-making strategy is first proposed in the Learning Automata algorithm. Then, to solve the scheduling task for systems with a large number of generating units, the problem is formulated as a multi-stage decision-making task. The solution obtained is extended to incorporate the transmission losses in the system. To make the Reinforcement Learning solution more efficient and to handle a continuous state space, a function approximation strategy is proposed. The performance of the developed algorithms is tested on several standard test cases. The proposed method is compared with other recent methods like the Partition Approach Algorithm, Simulated Annealing, etc.

As the final step of implementing the active power control loops in a power system, Automatic Generation Control is also taken into consideration. Reinforcement Learning has already been applied to the Automatic Generation Control loop. The RL solution is extended to take up the approach of a common frequency for all the interconnected areas, more similar to practical systems. The performance of the RL controller is also compared with that of the conventional integral controller.

In order to prove the suitability of the proposed methods for practical systems, the second plant of Neyveli Thermal Power Station (NTPS II) is taken as a case study. The performance of the Reinforcement Learning solution is found to be better than that of the other existing methods, which provides a promising step towards RL based control schemes for the practical power industry. Reinforcement Learning is applied to solve scheduling problems in the power industry and is found to give satisfactory performance. The proposed solution provides scope for greater profit, as the economic schedule is obtained instantaneously. Since the Reinforcement Learning method can take the stochastic cost data obtained from time to time from a plant, it gives an implementable method. As a further step, with suitable methods to interface with online data, economic scheduling can be achieved instantaneously in a generation control center. Also, power scheduling of systems with different sources such as hydro, thermal, etc. can be looked into and Reinforcement Learning solutions can be achieved.
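To make the "multi-stage decision process" formulation concrete, here is a minimal tabular Q-learning sketch for a toy two-unit, four-hour unit commitment instance; the unit data, demand profile, dispatch rule and penalty are invented for the example and do not reproduce the thesis algorithms.

# Tiny tabular Q-learning sketch for a two-unit, four-hour unit commitment toy.
# State = (hour, previous commitment); action = on/off decision per unit;
# Q stores expected cost-to-go, so the greedy action minimizes Q.

import itertools
import random
from collections import defaultdict

random.seed(0)

demand = [100, 150, 200, 120]                  # MW per hour (toy values)
capacity = [120, 100]                          # max output of each unit
run_cost = [20.0, 30.0]                        # $/MWh, unit 0 is cheaper
start_cost = [200.0, 100.0]                    # startup cost per unit

actions = list(itertools.product([0, 1], repeat=2))   # commitment decisions

def stage_cost(prev, act, hour):
    """Cost of one hour: startup plus fuel for a simple in-order dispatch."""
    cost = sum(start_cost[i] for i in range(2) if act[i] and not prev[i])
    cap = sum(capacity[i] for i in range(2) if act[i])
    if cap < demand[hour]:
        return cost + 10000.0                  # penalty for unserved demand
    served = 0
    for i in range(2):                         # dispatch in index order (unit 0 is cheaper)
        if act[i]:
            g = min(capacity[i], demand[hour] - served)
            cost += run_cost[i] * g
            served += g
    return cost

Q = defaultdict(float)
alpha, gamma, eps = 0.1, 1.0, 0.2

for episode in range(20000):
    state = (0, (0, 0))                        # (hour, previous commitment)
    while state[0] < len(demand):
        hour, prev = state
        if random.random() < eps:
            act = random.choice(actions)       # epsilon-greedy exploration
        else:
            act = min(actions, key=lambda a: Q[(state, a)])
        cost = stage_cost(prev, act, hour)
        nxt = (hour + 1, act)
        future = min(Q[(nxt, a)] for a in actions) if nxt[0] < len(demand) else 0.0
        Q[(state, act)] += alpha * (cost + gamma * future - Q[(state, act)])
        state = nxt

# Greedy commitment schedule extracted from the learned Q-table.
state, schedule = (0, (0, 0)), []
while state[0] < len(demand):
    act = min(actions, key=lambda a: Q[(state, a)])
    schedule.append(act)
    state = (state[0] + 1, act)
print(schedule)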
Abstract:
Predictors of random effects are usually based on the popular mixed effects (ME) model developed under the assumption that the sample is obtained from a conceptual infinite population; such predictors are employed even when the actual population is finite. Two alternatives that incorporate the finite nature of the population are obtained from the superpopulation model proposed by Scott and Smith (1969. Estimation in multi-stage surveys. J. Amer. Statist. Assoc. 64, 830-840) or from the finite population mixed model recently proposed by Stanek and Singer (2004. Predicting random effects from finite population clustered samples with response error. J. Amer. Statist. Assoc. 99, 1119-1130). Predictors derived under the latter model with the additional assumptions that all variance components are known and that within-cluster variances are equal have smaller mean squared error (MSE) than the competitors based on either the ME or Scott and Smith's models. As population variances are rarely known, we propose method of moments estimators to obtain empirical predictors and conduct a simulation study to evaluate their performance. The results suggest that the finite population mixed model empirical predictor is more stable than its competitors since, in terms of MSE, it is either the best or the second best and, when second best, its performance lies within acceptable limits. When both cluster and unit intra-class correlation coefficients are very high (e.g., 0.95 or more), the performance of the empirical predictors derived under the three models is similar. (c) 2007 Elsevier B.V. All rights reserved.
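For context, the sketch below computes the classical (infinite-population) mixed-model empirical predictor of cluster means with ANOVA method-of-moments variance components, i.e., the ME-model baseline the paper compares against; the balanced one-way layout and simulated data are assumptions of the illustration, and this is not the finite population mixed model predictor itself.

# Empirical predictor of cluster means under the standard ME model, with
# method-of-moments (ANOVA) variance component estimates on a balanced
# one-way layout. Data are simulated for illustration.

import numpy as np

rng = np.random.default_rng(1)
k, m = 8, 5                                    # clusters and units per cluster
b = rng.normal(0.0, 2.0, size=k)               # cluster random effects
y = 10.0 + b[:, None] + rng.normal(0.0, 1.0, size=(k, m))   # k x m responses

ybar_i = y.mean(axis=1)                        # cluster means
ybar = y.mean()                                # grand mean

# Method-of-moments (ANOVA) estimators of the variance components.
msb = m * np.sum((ybar_i - ybar) ** 2) / (k - 1)
msw = np.sum((y - ybar_i[:, None]) ** 2) / (k * (m - 1))
sigma2_e = msw
sigma2_b = max((msb - msw) / m, 0.0)

# Empirical predictor: shrink each cluster mean toward the grand mean.
shrink = sigma2_b / (sigma2_b + sigma2_e / m)
predictor = ybar + shrink * (ybar_i - ybar)
print(np.round(predictor, 2))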
Abstract:
This paper proposes strategies to reduce the number of variables and the combinatorial search space of the multistage transmission expansion planning (TEP) problem. The concept of the binary numeral system (BNS) is used to reduce the number of binary and continuous variables related to the candidate transmission lines and the network constraints associated with them. The construction phase of the greedy randomized adaptive search procedure (GRASP-CP) and additional constraints, obtained from power flow equilibrium in an electric power system, are employed to further reduce the search space. The multistage TEP problem is modeled as a mixed binary linear programming problem and solved using a commercial solver with low computational time. The results for one test system and two real systems are presented in order to show the efficiency of the proposed solution technique. © 1969-2012 IEEE.
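The short sketch below illustrates the binary numeral system idea referred to in the abstract: the number of circuits added in a corridor is encoded with ceil(log2(n_max + 1)) binary variables instead of one binary variable per candidate line; the corridor data are illustrative, and the extra bound constraint noted in the comment is needed whenever n_max + 1 is not a power of two.

# Binary numeral system (BNS) encoding of the number of added circuits per
# corridor, compared with the one-binary-per-candidate-line formulation.
# Corridor sizes are illustrative.

import math

def bns_encoding(n_max):
    """Return the powers of two whose weighted sum can represent 0..n_max."""
    n_bits = max(1, math.ceil(math.log2(n_max + 1)))
    return [2 ** j for j in range(n_bits)]

corridors = {"A-B": 3, "B-C": 7, "C-D": 15}    # max candidate circuits per corridor

for name, n_max in corridors.items():
    weights = bns_encoding(n_max)
    # In the MILP: n_added = sum(weights[j] * u[j]) with u[j] in {0, 1},
    # plus the bound n_added <= n_max when n_max + 1 is not a power of two.
    print(f"{name}: {n_max} per-line binaries -> {len(weights)} BNS binaries "
          f"(weights {weights})")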
Abstract:
Graduate Program in Electrical Engineering - FEIS
Abstract:
In this thesis a mathematical model was derived that describes the charge and energy transport in semiconductor devices such as transistors. Moreover, numerical simulations of these physical processes are performed. To accomplish this, methods of theoretical physics, functional analysis, numerical mathematics and computer programming are applied. After an introduction to the status quo of semiconductor device simulation methods and a brief review of historical facts, attention is shifted to the construction of a model, which serves as the basis of the subsequent derivations in the thesis. The starting point is an important equation from the theory of dilute gases. From this equation the model equations are derived and specified by means of a series expansion method. This is done in a multi-stage derivation process, which is mainly taken from a scientific paper and does not constitute the focus of this thesis. In the following phase we specify the mathematical setting and make the model assumptions precise, making use of methods of functional analysis. Since the equations we deal with are coupled, we are concerned with a nonstandard problem; by contrast, the theory of scalar elliptic equations is by now well established. Subsequently, we turn to the numerical discretization of the equations. A special finite-element method is used for the discretization; this special approach is required to make the numerical results appropriate for practical application. By a series of transformations of the discrete model we derive a system of algebraic equations suitable for numerical evaluation. Using custom-written computer programs we solve the equations to obtain approximate solutions. These programs are based on new and specialized iteration procedures that were developed and thoroughly tested within the framework of this research work. Due to their importance and novelty, they are explained and demonstrated in detail. We compare these new iterations with a standard method that is complemented by a feature adapting it to the current context. A further innovation is the computation of solutions in three-dimensional domains, which are still rare. Special attention is paid to the applicability of the 3D simulation tools, and the programs are designed to have reasonable computational cost. The simulation results of some models of contemporary semiconductor devices are shown and detailed comments on the results are given. Finally, we give an outlook on future development and enhancements of the models and of the algorithms used.
Abstract:
This paper presents a multi-stage algorithm for the dynamic condition monitoring of a gear. The algorithm provides information regarding the gear status (faulty or normal condition) and estimates the mesh stiffness per shaft revolution in case any abnormality is detected. In the first stage, the analysis of coefficients generated through the discrete wavelet transform (DWT) is proposed as a fault detection and localization tool. The second stage consists of establishing the mesh stiffness reduction associated with local failures by applying a supervised learning approach coupled with analytical models. To do this, a multi-layer perceptron neural network has been configured using as input features statistical parameters that are sensitive to torsional stiffness decrease and derived from wavelet transforms of the response signal. The proposed method is applied to gear condition monitoring, and results show that it can update the mesh dynamic properties of the gear online.
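As a rough sketch of the two-stage idea described above, the code below extracts simple statistics of DWT detail coefficients from a synthetic vibration signal and trains a multi-layer perceptron to regress the mesh stiffness reduction; the synthetic signal model, feature set and network size are assumptions for illustration, not the paper's configuration.

# DWT-based features feeding an MLP regressor for stiffness reduction,
# demonstrated on synthetic gear vibration signals.

import numpy as np
import pywt
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def synthetic_gear_signal(stiffness_drop, n=2048):
    """Toy vibration signal: meshing tone plus a localized burst that grows with the fault."""
    t = np.linspace(0.0, 1.0, n, endpoint=False)
    signal = np.sin(2 * np.pi * 60 * t) + 0.1 * rng.standard_normal(n)
    burst = np.exp(-((t - 0.5) ** 2) / 1e-4) * np.sin(2 * np.pi * 400 * t)
    return signal + stiffness_drop * burst

def wavelet_features(signal, wavelet="db4", level=4):
    """RMS and kurtosis of each DWT detail band."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    feats = []
    for c in coeffs[1:]:                       # detail coefficients, coarsest to finest
        rms = np.sqrt(np.mean(c ** 2))
        kurt = np.mean((c - c.mean()) ** 4) / (np.var(c) ** 2 + 1e-12)
        feats.extend([rms, kurt])
    return feats

# Training set with known stiffness reductions (targets in [0, 0.5]).
targets = rng.uniform(0.0, 0.5, size=200)
X = np.array([wavelet_features(synthetic_gear_signal(d)) for d in targets])

model = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
model.fit(X, targets)

test_drop = 0.3
estimate = model.predict([wavelet_features(synthetic_gear_signal(test_drop))])[0]
print(f"true reduction {test_drop:.2f}, estimated {estimate:.2f}")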