970 results for current solution
Abstract:
This work presents the evolution of the petroleum refining industry in Brazil from its origins, tracing its development over the years and detailing the changes in production profile, in the processed feedstock, and in the complexity of Brazilian refineries. It also outlines the next steps for national petroleum refining, the challenges posed by the production of heavy and acidic crudes, and the impacts of the need to produce derivatives with increasingly restrictive specifications and lower environmental impact. Hydrorefining was identified as the first major step for the coming years, and it is concluded that units for the hydrotreatment of intermediate streams, or even of final products, will play a fundamental role in future refining schemes. Another important aspect analyzed was the need to increase conversion; the currently chosen path of installing Delayed Coking Units will be exhausted at the beginning of the next decade, opening the way for residue hydroconversion technology. Regarding gasoline and diesel quality, a refining scheme is proposed that allows increasingly stringent specifications to be met.
Abstract:
Using first-principles methods based on density functional theory (DFT), the equation of state (EOS) and elastic constants of both periclase and ferropericlase are calculated, and the effects of pressure and iron doping on the elastic constants of ferropericlase are investigated systematically. Firstly, we calculate the elastic constants of periclase and compare the results with experimental data and other theoretical calculations; the encouraging agreement demonstrates the practicability of first-principles methods. Secondly, by adding iron to the periclase crystal model, we build ferropericlase models with iron contents ranging from 0 to 25 mol%. The corresponding elastic constants are calculated over a large pressure range (0–120 GPa). Notably, the strong correlation of 3d electrons in transition elements such as iron has long been difficult to treat with first-principles methods; the current solution is to apply an additional correction. During the initial stage of this study, the strong correlation of the 3d electrons in iron was not considered, and we observed that adding iron decreases the volume of ferropericlase, which flatly contradicts the experimental data. By applying the LDA+U approximation to handle the strongly correlated 3d electrons of iron, we observed the expected volume expansion with iron content. On the basis of the LDA+U approximation, the elastic constants of ferropericlase are calculated. After a detailed analysis of the calculated data, we have reached the following conclusions: (1) pressure has a positive effect on all elastic constants, the magnitude of the effect being C11 > C12 > C44. (2) Iron has no distinctive effect on C11 and C12, although some fluctuations are observed around 60 GPa. However, iron clearly softens C44, and this softening intensifies as pressure increases.
Above 100 GPa the softening grows strongly, even surpassing the positive pressure effect in ferropericlase models with iron contents of 12.5%, 18.75% and 25%. (3) As to the moduli derived from the elastic constants, iron has no effect on the adiabatic bulk modulus BS apart from a small fluctuation around 60 GPa, but it softens the shear modulus G. (4) Compared with low iron contents, the elastic constants fluctuate markedly as the iron content approaches 25 mol%, which may be caused by limitations of the LDA+U approximation itself. (5) We investigate the effects of pressure and Fe doping on the elastic anisotropy factor (A = (2C44 + C12 - C11)/C11) of ferropericlase and find that iron lowers the critical isotropic pressure. Below the isotropic pressure iron lowers the anisotropy factor; above it, iron increases the anisotropy factor.
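The anisotropy factor defined in conclusion (5) is straightforward to evaluate from the elastic constants; a minimal sketch (the Cij values below are illustrative, not data from this study):

```python
def anisotropy_factor(c11, c12, c44):
    """Elastic anisotropy factor A = (2*C44 + C12 - C11) / C11 of a cubic crystal.
    A = 0 corresponds to elastic isotropy (2*C44 = C11 - C12)."""
    return (2.0 * c44 + c12 - c11) / c11

# Illustrative values in GPa (not taken from the study):
A = anisotropy_factor(c11=300.0, c12=95.0, c44=160.0)
```

A positive A indicates that shear along <100> is stiffer relative to the isotropic case; the critical isotropic pressure in the abstract is the pressure at which A crosses zero.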
Abstract:
The scarcity and diversity of resources among the devices of heterogeneous computing environments may affect their ability to perform services with specific Quality of Service constraints, particularly in dynamic distributed environments where the characteristics of the computational load cannot always be predicted in advance. Our work addresses this problem by allowing resource-constrained devices to cooperate with more powerful neighbour nodes, opportunistically taking advantage of global distributed resources and processing power. Rather than assuming that the dynamic configuration of this cooperative service executes until it computes its optimal output, the paper proposes an anytime approach that is able to trade off deliberation time for solution quality. Extensive simulations demonstrate that the proposed anytime algorithms quickly find a good initial solution and effectively optimise the rate at which the quality of the current solution improves at each iteration, with an overhead that can be considered negligible.
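The anytime behaviour described above, always holding a feasible incumbent whose quality only improves, can be sketched generically (the objective, neighbour move and deadline below are hypothetical stand-ins, not the paper's cooperative service):

```python
import random
import time

def anytime_minimise(objective, initial, neighbour, deadline_s=0.05):
    """Generic anytime local search: it always holds a feasible incumbent,
    so it can be interrupted at any moment and still return the best
    solution found so far; quality improves monotonically over time."""
    best = initial
    best_val = objective(best)
    stop = time.monotonic() + deadline_s
    while time.monotonic() < stop:
        cand = neighbour(best)
        val = objective(cand)
        if val < best_val:  # keep only strict improvements
            best, best_val = cand, val
    return best, best_val

# Toy usage: minimise (x - 3)^2 by random perturbation of the incumbent.
random.seed(0)
sol, val = anytime_minimise(lambda x: (x - 3.0) ** 2, 10.0,
                            lambda x: x + random.uniform(-1, 1))
```

The tradeoff the abstract describes corresponds to the `deadline_s` budget: a larger budget buys more iterations and therefore (weakly) better incumbents.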
Abstract:
Master's final project submitted to obtain the degree of Master in Mechanical Engineering, Energy, Refrigeration and Air Conditioning profile.
Abstract:
The conventional Newton and fast decoupled power flow (FDPF) methods have been considered inadequate to obtain the maximum loading point of power systems due to ill-conditioning problems at and near this critical point. It is well known that the PV and Q-theta decoupling assumptions of the fast decoupled power flow formulation no longer hold in the vicinity of the critical point. Moreover, the Jacobian matrix of the Newton method becomes singular at this point. However, the maximum loading point can be efficiently computed through parameterization techniques of continuation methods. In this paper it is shown that by using either theta or V as a parameter, the new fast decoupled power flow versions (XB and BX) become adequate for the computation of the maximum loading point with only a few small modifications. The possible use of reactive power injection in a selected PV bus (Q(PV)) as continuation parameter (mu) for the computation of the maximum loading point is also shown. A trivial secant predictor, the modified zero-order polynomial, which uses the current solution and a fixed increment in the parameter (V, theta, or mu) as an estimate for the next solution, is used in the predictor step. These new versions are compared to each other with the purpose of pointing out their features, as well as the influence of reactive power and transformer tap limits. The results obtained with the new approach for the IEEE test systems (14, 30, 57 and 118 buses) are presented and discussed in the companion paper. The results show that the characteristics of the conventional method are enhanced and the region of convergence around the singular solution is enlarged. In addition, it is shown that parameters can be switched during the tracing process in order to efficiently determine all the PV curve points with few iterations. (C) 2003 Elsevier B.V. All rights reserved.
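The trivial secant predictor described above can be sketched on a toy one-dimensional continuation problem; the scalar Newton corrector and the test curve below are illustrative stand-ins for the power flow equations:

```python
def zero_order_predictor(x, lam, d_lam):
    """Modified zero-order polynomial (trivial secant) predictor: reuse the
    current solution x unchanged and advance the continuation parameter
    lam by a fixed increment d_lam."""
    return x, lam + d_lam

def newton_corrector(f, x0, lam, tol=1e-10, max_iter=50):
    """Correct the predicted point back onto the solution curve f(x, lam) = 0
    (scalar Newton iteration with a finite-difference derivative)."""
    x = x0
    for _ in range(max_iter):
        fx = f(x, lam)
        if abs(fx) < tol:
            break
        h = 1e-7
        x -= fx / ((f(x + h, lam) - fx) / h)
    return x

# Toy continuation: trace the curve x**2 - lam = 0 from lam = 1 in steps of 0.5.
f = lambda x, lam: x * x - lam
x, lam = 1.0, 1.0
for _ in range(5):
    x, lam = zero_order_predictor(x, lam, 0.5)   # predictor step
    x = newton_corrector(f, x, lam)              # corrector step
# x now approximates sqrt(lam) for lam = 3.5
```

The point of the zero-order predictor is its cost: no derivative or secant history is needed, at the price of a cruder initial estimate for the corrector.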
Abstract:
The parameterized fast decoupled power flow (PFDPF), versions XB and BX, using either theta or V as a parameter, has been proposed by the authors in Part I of this paper. The use of the reactive power injection of a selected PV bus (Q(PV)) as the continuation parameter for the computation of the maximum loading point (MLP) was also investigated. In this paper, the proposed versions, obtained with only small modifications of the conventional one, are used for the computation of the MLP of the IEEE test systems (14, 30, 57 and 118 buses). These new versions are compared to each other with the purpose of pointing out their features, as well as the influence of reactive power and transformer tap limits. The results obtained with the new approaches are presented and discussed. The results show that the characteristics of the conventional FDPF method are enhanced and the region of convergence around the singular solution is enlarged. In addition, it is shown that these versions can be switched during the tracing process in order to efficiently determine all the PV curve points with few iterations. A trivial secant predictor, the modified zero-order polynomial, which uses the current solution and a fixed increment in the parameter (V, theta, or mu) as an estimate for the next solution, is used for the predictor step. (C) 2003 Elsevier B.V. All rights reserved.
Abstract:
This paper presents an efficient tabu search algorithm (TSA) to solve the problem of feeder reconfiguration of distribution systems. The main characteristics that make the proposed TSA particularly efficient are a) the way in which the neighborhood of the current solution is defined; b) the way in which the objective function value is estimated; and c) the reduction of the neighborhood using heuristic criteria. Four electrical systems, described in detail in the specialized literature, were used to test the proposed TSA. The results demonstrate that it is computationally very fast and finds the best solutions known in the specialized literature. © 2012 IEEE.
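A minimal tabu search skeleton illustrating the ingredients above, a neighborhood of the current solution and a short-term tabu memory; the toy objective and move set are assumptions for illustration, not the paper's feeder reconfiguration encoding:

```python
from collections import deque

def tabu_search(objective, start, neighbours, tenure=5, iters=200):
    """Minimal tabu search: explore the neighbourhood of the current solution,
    move to the best non-tabu neighbour (even if it is worse, to escape local
    optima), and remember recently visited solutions in a bounded tabu list."""
    current = start
    best, best_val = start, objective(start)
    tabu = deque(maxlen=tenure)          # short-term memory
    for _ in range(iters):
        cands = [n for n in neighbours(current) if n not in tabu]
        if not cands:
            break
        current = min(cands, key=objective)   # best admissible move
        tabu.append(current)
        val = objective(current)
        if val < best_val:
            best, best_val = current, val
    return best, best_val

# Toy usage: minimise a bumpy 1-D function over the integers with +/-1 moves.
f = lambda x: (x - 7) ** 2 + 3 * (x % 3)
best, val = tabu_search(f, start=0, neighbours=lambda x: [x - 1, x + 1])
```

The paper's neighborhood reduction corresponds to pruning `cands` with heuristic criteria before evaluating them, which is where most of the speedup comes from in practice.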
Abstract:
Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (such as scheduling, project planning, transportation, telecommunications, economics and finance, timetabling, etc.) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research has dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community remains very active in trying to answer them. As a consequence, a huge number of papers are continuously produced and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first arises when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general-purpose techniques. The second arises when mixed integer programming is used to address a somehow structured problem. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special-purpose techniques. This thesis tries to give some insights into both of the above situations. The first part of the work is focused on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers.
Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature on disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers have drawn attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. The chapter is still a work in progress and simply presents a possible way of generating two-row cuts from lattice-free triangles in the simplex tableau, together with some preliminary computational results. The second part of the thesis focuses instead on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs).
The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general-purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of time windows, which leads to a stronger formulation and to stronger valid inequalities that are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (in particular, the usage of general-purpose cutting planes) can be used to improve on the branch-and-cut methods proposed in the literature.
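The destroy-and-repair loop can be sketched on a toy makespan-minimisation instance; the greedy repair below is a simple stand-in for the thesis's MIP-based neighbourhood exploration, and the jobs and machine count are hypothetical:

```python
import random

def destroy(assignment, k, rng):
    """Destroy step: randomly remove k jobs from the current solution."""
    removed = rng.sample(list(assignment), k)
    kept = {i: m for i, m in assignment.items() if i not in removed}
    return kept, removed

def repair(kept, removed, durations, n_machines):
    """Repair step: reinsert each removed job on the least-loaded machine.
    (The thesis solves this step exactly as an integer program; greedy
    reinsertion is a stand-in.)  Returns the new assignment and its makespan."""
    loads = [0] * n_machines
    for i, m in kept.items():
        loads[m] += durations[i]
    assignment = dict(kept)
    for i in sorted(removed, key=lambda j: -durations[j]):
        m = loads.index(min(loads))
        assignment[i] = m
        loads[m] += durations[i]
    return assignment, max(loads)

# Destroy-and-repair loop on a toy instance: 8 jobs, 3 machines.
rng = random.Random(1)
durations = [7, 5, 9, 3, 6, 4, 8, 2]
best = {i: i % 3 for i in range(len(durations))}
best_cost = max(sum(durations[i] for i in best if best[i] == m) for m in range(3))
for _ in range(100):
    kept, removed = destroy(best, k=3, rng=rng)
    cand, cost = repair(kept, removed, durations, 3)
    if cost < best_cost:
        best, best_cost = cand, cost
```

The "exponential neighborhood" of the thesis corresponds to all ways of reinserting the removed jobs at once, which is why an integer program, rather than the greedy rule above, is used to explore it.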
Abstract:
The purpose of this study is to design, develop and integrate a Compressed Natural Gas (CNG) tank that will have a conformable shape for efficient storage in a light-duty pick-up truck. The CNG tank will be a simple rectangular box geometry to demonstrate the capability of non-cylindrical shapes. Using CAD drawings of the truck, a conformable tank will be designed to fit under the pick-up bed. The intent of the non-cylindrical CNG tank is to demonstrate an improvement in size over the current solution, which is a large cylinder in the box of a pick-up truck. The geometry of the tank's features is critical to its size and strength. The optimized tank design will be simulated with Finite Element Analysis (FEA) to determine critical stress regions, and appropriate design changes will be made to reduce stress concentrations. Following the American National Standards Institute (ANSI) guide, different aluminum alloys will be compared to obtain the best possible result for the CNG tank.
Abstract:
Stochastic differential equations arise naturally in a range of contexts, from financial to environmental modeling. Current solution methods are limited in their representation of the posterior process in the presence of data. In this work, we present a novel Gaussian process approximation to the posterior measure over paths for a general class of stochastic differential equations in the presence of observations. The method is applied to two simple problems: the Ornstein-Uhlenbeck process, for which the exact solution is known and can serve as a reference, and the double-well system, for which standard approaches such as the ensemble Kalman smoother fail to provide a satisfactory result. Experiments show that our variational approximation is viable, and the results are very promising, as the variational approximate solution outperforms standard Gaussian process regression for non-Gaussian Markov processes.
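The Ornstein-Uhlenbeck process used as the first test case can be simulated directly with the Euler-Maruyama scheme; the parameters below are illustrative, not those of the paper's experiments:

```python
import numpy as np

def simulate_ou(theta, mu, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama simulation of the Ornstein-Uhlenbeck SDE
        dX_t = theta * (mu - X_t) dt + sigma dW_t,
    where theta is the mean-reversion rate, mu the long-run mean and
    sigma the diffusion coefficient."""
    x = np.empty(n_steps + 1)
    x[0] = x0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))          # Brownian increment
        x[k + 1] = x[k] + theta * (mu - x[k]) * dt + sigma * dw
    return x

# Illustrative parameters: the path starts at 2.0 and mean-reverts toward mu = 0.
rng = np.random.default_rng(0)
path = simulate_ou(theta=1.5, mu=0.0, sigma=0.3, x0=2.0,
                   dt=0.01, n_steps=2000, rng=rng)
```

The OU process is the natural benchmark here precisely because its posterior under Gaussian observations is available in closed form, so approximation quality can be measured exactly.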
Abstract:
Costs related to inventory usually make up a significant share of a company's total assets. Despite this, companies in general pay little attention to inventory, even though the benefits of effective inventory management are obvious: less tied-up capital, increased customer satisfaction and a better working environment. Permobil AB, Timrå is in an intense period of revenue and growth; the production unit is aiming for a 30% increase in output over the next two years. To make this possible the company has to improve the way it distributes and handles material. The purpose of the study is to provide useful information and concrete proposals for action, so that the company can build a strategy for an effective and sustainable inventory-management solution. Alternative forecasting methods are suggested in order to reach a more nuanced perception of the different articles and how they should be managed. The Analytic Hierarchy Process (AHP) was used to let specially selected persons decide the criteria by which an article should be valued; the criteria they agreed on were annual volume value, lead time, frequency rate and purchase price. The other proposed method was a two-dimensional model in which annual volume value and frequency determine the class in which an article is placed. Both methods resulted in significant changes compared with the current solution. For the spare-part inventory, different forecast methods were tested and compared with the current solution. It turned out that the current forecast method performed worse than both moving average and exponential smoothing with trend. The small sample of ten random articles is not large enough to reject the current solution, but the result is still reason enough for the company to monitor the quality of its forecasts.
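The two forecast methods that outperformed the current one, moving average and exponential smoothing with trend, can be sketched as follows; the smoothing constants and the demand series are hypothetical, not the study's data:

```python
def moving_average(series, window):
    """One-step-ahead forecast: the mean of the last `window` observations."""
    return sum(series[-window:]) / window

def exp_smoothing_trend(series, alpha=0.3, beta=0.1):
    """Holt's exponential smoothing with trend: maintains a smoothed level and
    a smoothed trend, and returns the one-step-ahead forecast level + trend."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    return level + trend

# Hypothetical monthly demand for one article (clearly trending upward):
demand = [40, 42, 45, 43, 48, 50, 53, 55]
f_ma = moving_average(demand, window=3)   # lags behind the upward trend
f_es = exp_smoothing_trend(demand)        # extrapolates the trend
```

For trending articles the trend-aware method forecasts higher than the moving average, which is exactly the systematic lag that makes plain averages perform poorly on growing demand.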
Abstract:
This paper presents a technique called Improved Squeaky Wheel Optimisation (ISWO) for driver scheduling problems. It improves the original Squeaky Wheel Optimisation's (SWO) effectiveness and execution speed by incorporating two additional steps, Selection and Mutation, which implement evolution within a single solution. In the ISWO, a cycle of Analysis-Selection-Mutation-Prioritization-Construction continues until stopping conditions are reached. The Analysis step first computes the fitness of the current solution to identify troublesome components. The Selection step then discards these troublesome components probabilistically using the fitness measure, and the Mutation step follows to discard a further small number of components at random. After these steps, the input solution has become partial and needs to be repaired. The repair is carried out by the Prioritization step, which produces priorities that determine the order in which the following Construction step schedules the remaining components. Optimisation in the ISWO is therefore achieved by solution disruption, iterative improvement and an iterative constructive repair process. Encouraging experimental results are reported.
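The Analysis-Selection-Mutation-Prioritization-Construction cycle can be illustrated on first-fit bin packing, a classic SWO demonstration problem; the blame rule, the mutation rate and the instance are illustrative assumptions, and the discard-and-repair of blamed components is compressed here into SWO-style re-prioritization:

```python
import random

def construct(order, sizes, cap):
    """Construction step: first-fit pack items in the given priority order."""
    bins, place = [], {}
    for i in order:
        for b, load in enumerate(bins):
            if load + sizes[i] <= cap:
                bins[b] += sizes[i]
                place[i] = b
                break
        else:                       # no open bin fits: open a new one
            bins.append(sizes[i])
            place[i] = len(bins) - 1
    return place, len(bins)

def iswo_bin_packing(sizes, cap, rounds=60, seed=0):
    """Schematic ISWO cycle: Analysis blames items sitting in lightly filled
    bins, Mutation blames a small random set too, Prioritization moves blamed
    (and larger) items to the front, and Construction repacks. The best
    packing seen over all rounds is kept."""
    rng = random.Random(seed)
    n = len(sizes)
    priority = list(range(n))
    best_place, best_bins = construct(priority, sizes, cap)
    for _ in range(rounds):
        place, n_bins = construct(priority, sizes, cap)
        loads = [0] * n_bins
        for i, b in place.items():
            loads[b] += sizes[i]
        # Analysis + Selection: items in under-filled bins are "troublesome".
        blame = [loads[place[i]] <= 0.6 * cap for i in range(n)]
        # Mutation: additionally blame a small random set of items.
        for i in rng.sample(range(n), max(1, n // 10)):
            blame[i] = True
        # Prioritization: blamed (and larger) items are packed first next round.
        priority = sorted(range(n), key=lambda i: (not blame[i], -sizes[i]))
        if n_bins < best_bins:
            best_place, best_bins = place, n_bins
    return best_place, best_bins

# Toy usage: plain first-fit in index order needs 4 bins here; the cycle finds 3.
best_place, best_bins = iswo_bin_packing([4, 4, 4, 6, 6, 6], cap=10)
```

As in the paper, the search operates on priorities rather than directly on the solution: a bad solution "squeaks", its troublesome components gain priority, and the constructor gets another chance at them.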
Abstract:
The rolling stock circulation depends on two different problems, the rolling stock assignment and the train routing problems, which up to now have been solved sequentially. We propose a new approach to obtain better and more robust circulations of the rolling stock train units, solving the rolling stock assignment while accounting for the train routing problem. Here robustness means that difficult shunting operations are selectively penalized, and that propagated delays, together with the need for human resources, are minimized. This new integrated approach results in a very large model, which we solve using Benders decomposition, where the main decision is the rolling stock assignment and the train routing sits at the second level. For computational reasons we propose a heuristic based on Benders decomposition. Computational experiments show how the current solution operated by RENFE (the main Spanish train operator) can be improved: more robust and efficient solutions are obtained.
Abstract:
Polycrystalline gold electrodes of the kind that are routinely used in analysis and catalysis in aqueous media are often regarded as exhibiting relatively simple double-layer charging/discharging and monolayer oxide formation/removal in the positive potential region. Application of the large-amplitude Fourier transformed alternating current (FT-ac) voltammetric technique, which allows the faradaic current contribution of fast electron-transfer processes to be emphasized in the higher harmonic components, has revealed the presence of well-defined faradaic (premonolayer oxidation) processes at positive potentials in the double-layer region in acidic and basic media which are enhanced by electrochemical activation. These underlying quasi-reversible interfacial electron-transfer processes may mediate the course of electrocatalytic oxidation reactions of hydrazine, ethylene glycol, and glucose on gold electrodes in aqueous media. The observed responses support key assumptions associated with the incipient hydrous oxide adatom mediator (IHOAM) model of electrocatalysis.