47 results for Lot sizing problem


Relevance:

20.00%

Publisher:

Abstract:

Combinatorial Optimization is a fundamental area for companies seeking competitive advantages in the various productive sectors. The Asymmetric Travelling Salesman Problem, classified as one of the most important problems in this area, both for belonging to the NP-hard class and for having many practical applications, has drawn growing interest from researchers in the development of ever more efficient metaheuristics for its resolution. One such approach is the Memetic Algorithm, an evolutionary algorithm that combines genetic operators with a local search procedure. This work explores the technique of Viral Infection in a Memetic Algorithm, in which the infection replaces the mutation operator in order to obtain a fast evolution or extinction of species (KANOH et al., 1996), providing a way to accelerate and improve the solution. To this end, four variants of Viral Infection applied to the Memetic Algorithm were developed for the resolution of the Asymmetric Travelling Salesman Problem, in which agent and virus undergo a symbiosis process that favored the attainment of a hybrid, computationally viable evolutionary algorithm.
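As a rough illustration of the idea, the sketch below replaces mutation with a viral infection step inside a minimal memetic loop on a small asymmetric instance. The operator shown is one plausible variant, not necessarily any of the four developed in the thesis, and all names and parameters are illustrative:

```python
import random

def tour_cost(tour, d):
    """Cost of a directed tour over an asymmetric distance matrix d."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def extract_virus(tour, length):
    """Take a random contiguous segment of a good tour as the virus."""
    i = random.randrange(len(tour) - length + 1)
    return tour[i:i + length]

def infect(host, virus):
    """Viral infection in place of mutation: transcribe the virus segment
    into the host tour at a random point (one plausible variant)."""
    rest = [c for c in host if c not in virus]
    pos = random.randrange(len(rest) + 1)
    return rest[:pos] + list(virus) + rest[pos:]

def local_search(tour, d):
    """City-reinsertion improvement, safe for asymmetric instances
    (segment reversal does not preserve cost structure in ATSP)."""
    best, best_c = tour[:], tour_cost(tour, d)
    for i in range(len(tour)):
        for j in range(len(tour)):
            if i != j:
                cand = best[:i] + best[i + 1:]
                cand.insert(j, best[i])
                c = tour_cost(cand, d)
                if c < best_c:
                    best, best_c = cand, c
    return best

def memetic_step(population, d, virus_len=3):
    """One generation: infect every tour with a virus taken from the current
    best, apply local search, and keep whichever of parent/child is better."""
    best = min(population, key=lambda t: tour_cost(t, d))
    virus = extract_virus(best, virus_len)
    out = []
    for tour in population:
        child = local_search(infect(tour, virus), d)
        out.append(min(tour, child, key=lambda t: tour_cost(t, d)))
    return out
```

Because each individual is replaced only when the infected-and-improved child is better, the best cost in the population is non-increasing across generations.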

Relevance:

20.00%

Publisher:

Abstract:

The usual programs for load flow calculation were, in general, developed with the aim of simulating electric energy transmission, subtransmission and distribution systems. However, the mathematical methods and algorithms used by these formulations were mostly based only on the characteristics of transmission systems, which were the main focus of engineers and researchers. The physical characteristics of these systems, though, are quite different from those of distribution systems. In transmission systems, the voltage levels are high and the lines are generally very long. These aspects make the capacitive and inductive effects that appear in the system exert a considerable influence on the quantities of interest, which is why they must be taken into consideration. Also in transmission systems, the loads have a macro nature, such as cities, neighborhoods, or big industries. These loads are generally close to balanced, which reduces the need for three-phase methodologies in load flow calculation. Distribution systems, on the other hand, present different characteristics: the voltage levels are low in comparison with transmission levels, which all but cancels the capacitive effects of the lines. The loads, in this case, are transformers whose secondaries feed small consumers, often single-phase ones, so the probability of finding an unbalanced circuit is high. The use of three-phase methodologies thus takes on an important dimension. Moreover, equipment such as voltage regulators, which simultaneously use the concepts of phase and line voltage in their operation, requires a three-phase methodology in order to allow the simulation of its real behavior. For these reasons, a method for three-phase load flow calculation was initially developed in the scope of this work, in order to simulate the steady-state behavior of distribution systems.
To achieve this goal, the Power Summation Algorithm was used as a basis for developing the three-phase method. This algorithm had already been widely tested and approved by researchers and engineers in the simulation of radial electric energy distribution systems, mainly in single-phase representation. In our formulation, lines are modeled as three-phase circuits, considering the magnetic coupling between the phases, while the earth effect is accounted for through the Carson reduction. It is important to point out that, although loads are normally connected to the transformer secondaries, the hypothesis of star- or delta-connected loads on the primary circuit was also considered. To simulate voltage regulators, a new model was used that allows the simulation of various configuration types, according to their real operation. Finally, the possibility of representing switches with current measurement at various points of the feeder was considered. The loads are adjusted during the iterative process so that the current in each switch converges to the measured value specified in the input data. In a second stage of the work, sensitivity parameters were derived from the described load flow, with the objective of supporting later optimization processes. These parameters are found by calculating the partial derivatives of one variable with respect to another, in general voltages, losses and reactive powers. After describing the calculation of the sensitivity parameters, the Gradient Method is presented, which uses these parameters to optimize an objective function defined for each type of study. The first study concerns the reduction of technical losses in a medium voltage feeder through the installation of capacitor banks; the second concerns the correction of the voltage profile through the installation of capacitor banks or voltage regulators.
In the case of loss reduction, the objective function is the sum of the losses in all parts of the system. For the correction of the voltage profile, the objective function is the sum of the squared voltage deviations at each node with respect to the rated voltage. At the end of the work, results of applying the described methods to some feeders are presented, to give insight into their performance and accuracy.
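The Power Summation Algorithm belongs to the family of backward/forward sweep methods for radial feeders. The sketch below shows a single-phase, current-summation variant of that sweep structure only; the thesis's method is three-phase with Carson reduction and sums powers, and the network data here are made up for illustration:

```python
def sweep_load_flow(lines, loads, v_src=1.0 + 0j, tol=1e-10, iters=100):
    """Backward/forward sweep on a radial feeder (single-phase sketch).

    lines: list of (from_node, to_node, impedance) ordered from the source out;
    loads: {node: complex power S}, per unit; node 0 is the source bus.
    """
    v = {0: v_src}
    for frm, to, z in lines:
        v[to] = v_src  # flat start
    for _ in range(iters):
        # backward sweep: accumulate load currents toward the source,
        # I = conj(S / V) at each node
        inode = {n: (loads.get(n, 0) / v[n]).conjugate() for n in v}
        ibr = {}
        for frm, to, z in reversed(lines):
            ibr[(frm, to)] = inode[to]
            inode[frm] += inode[to]
        # forward sweep: update voltages away from the source
        maxdiff = 0.0
        for frm, to, z in lines:
            vnew = v[frm] - z * ibr[(frm, to)]
            maxdiff = max(maxdiff, abs(vnew - v[to]))
            v[to] = vnew
        if maxdiff < tol:
            break
    return v
```

Processing the line list in reverse order during the backward sweep guarantees that each branch current already includes the currents of all branches downstream of it.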

Relevance:

20.00%

Publisher:

Abstract:

Combinatorial optimization problems have engaged a large number of researchers in the search for approximate solutions, since it is generally accepted that they cannot be solved in polynomial time. Initially, these solutions were based on heuristics; currently, metaheuristics are more often used for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of a heuristic called "Operon" for the construction of the information chains necessary for the implementation of transgenetic (evolutionary) algorithms, mainly using statistical methodology, namely Cluster Analysis and Principal Component Analysis; and the use of statistical analyses that are adequate for evaluating the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains that promote an "intelligent" search in the solution space. The Traveling Salesman Problem (TSP) is the intended application, using a transgenetic algorithm known as ProtoG. A strategy is also proposed for renewing part of the chromosome population, triggered by adopting a minimum limit on the coefficient of variation of the fitness function of the individuals, calculated over the population. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms and a Simulated Annealing algorithm. Three performance analyses of these algorithms are proposed. The first is carried out through Logistic Regression, based on the probability that the algorithm under test finds an optimal solution for a TSP instance. The second is carried out through Survival Analysis, based on the probability distribution of the execution time observed until an optimal solution is achieved.
The third is carried out by means of a non-parametric Analysis of Variance, considering the Percent Error of the Solution (PES), the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted, applied to sixty-one instances of the Euclidean TSP with sizes of up to 1,655 cities. The first two experiments deal with the adjustment of four parameters used in the ProtoG algorithm, in an attempt to improve its performance. The last four were undertaken to evaluate the performance of ProtoG in comparison with the three algorithms adopted. For these sixty-one instances, it was concluded on the grounds of statistical tests that there is evidence that ProtoG performs better than these three algorithms in fifty instances. In addition, for the thirty-six instances considered in the last three trials, in which the performance of the algorithms was evaluated through the PES, the average PES obtained with ProtoG was less than 1% in almost half of these instances, reaching its greatest value, 3.52%, for one instance of 1,173 cities. Therefore, ProtoG can be considered a competitive algorithm for solving the TSP, since it is not rare to find average PES values greater than 10% reported in the literature for instances of this size.
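The PES metric and the coefficient-of-variation renewal criterion described above can be sketched as follows; the threshold value is an assumed parameter, not one taken from the thesis:

```python
import statistics

def percent_error(found, best_known):
    """PES: percentage by which the found cost exceeds the best known cost."""
    return 100.0 * (found - best_known) / best_known

def needs_renewal(fitnesses, cv_min=0.05):
    """Signal renewal of part of the population when the coefficient of
    variation of the fitness values drops below a minimum limit,
    i.e. when the population has become too homogeneous.
    cv_min is an illustrative threshold."""
    mean = statistics.mean(fitnesses)
    cv = statistics.pstdev(fitnesses) / mean
    return cv < cv_min
```

For example, a run that returns cost 103.52 on an instance whose best known cost is 100 has a PES of 3.52%, and a population with identical fitness values (coefficient of variation zero) triggers renewal.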

Relevance:

20.00%

Publisher:

Abstract:

Metaheuristic techniques are known for solving optimization problems classified as NP-complete and are successful in obtaining good-quality solutions. They use non-deterministic approaches to generate solutions close to the optimum, without any guarantee of finding the global optimum. Motivated by the difficulties in solving these problems, this work proposes the development of parallel hybrid methods combining reinforcement learning with the metaheuristics GRASP and Genetic Algorithms. With these techniques, we aim to contribute to improved efficiency in obtaining good solutions. Instead of using the Q-learning reinforcement learning algorithm merely as a technique for generating the initial solutions of the metaheuristics, we use it in cooperative and competitive approaches with the Genetic Algorithm and GRASP, in a parallel implementation. In this context, it was possible to verify that the implementations in this study showed satisfactory results under both strategies, that is, in cooperation and competition between the methods and in cooperation and competition between groups. For some instances the global optimum was found; for others, the implementations came close to it. A performance analysis of the proposed approach was carried out, and it shows good results on the metrics that attest to the efficiency and speedup (gain in speed with parallel processing) of the implementations.
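A minimal tabular Q-learning sketch of the reinforcement learning component is shown below; the toy environment and all parameters are made up for illustration, and the cooperative/competitive parallel coupling with GRASP and the Genetic Algorithm is not reproduced here:

```python
import random

def q_learning(step, n_states, n_actions, episodes=300,
               alpha=0.2, gamma=0.9, eps=0.2, seed=0):
    """Tabular Q-learning; `step(s, a) -> (reward, next_state, done)`.
    Epsilon-greedy exploration over a Q-table initialised to zero."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]
    for _ in range(episodes):
        s, done = 0, False
        while not done:
            if rng.random() < eps:
                a = rng.randrange(n_actions)          # explore
            else:
                a = max(range(n_actions), key=lambda x: Q[s][x])  # exploit
            r, s2, done = step(s, a)
            target = r if done else r + gamma * max(Q[s2])
            Q[s][a] += alpha * (target - Q[s][a])     # temporal-difference update
            s = s2
    return Q
```

On a simple chain environment where action 1 moves toward a rewarding terminal state, the learned Q-table ends up preferring the forward action in every state.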

Relevance:

20.00%

Publisher:

Abstract:

We revisit the visibility problem, which is to determine the set of primitives potentially visible in geometry data represented by a data structure such as a mesh of polygons or triangles, and we propose a solution for speeding up three-dimensional visualization processing in applications. We introduce a lean structure, in the sense of data abstraction and reduction, which can be used in online and interactive applications. The visibility problem is especially important in the 3D visualization of scenes represented by large volumes of data, when it is not worthwhile, or even possible, to keep all the polygons of the scene in memory, which would imply greater rendering time. In these cases, given a viewing position and direction, the main objective is to determine and load a minimum amount of primitives (polygons) of the scene, to accelerate the rendering step. For this purpose, our algorithm culls primitives using a hybrid paradigm based on three known techniques: the scene is divided into a grid of cells, each cell is associated with the primitives that belong to it, and finally the set of potentially visible primitives is determined. The novelty is the use of the Ja1 triangulation to create the subdivision grid. We chose this structure because of its relevant characteristics of adaptivity and algebraic simplicity (ease of calculation). The results show a substantial improvement over the traditional methods when applied separately. The method introduced in this work can be used in devices with little or no dedicated processing power, and also for viewing data over the Internet, as in virtual museum applications.
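The cell-grid culling step can be sketched as follows. This simplified 2D version uses a uniform grid and a view cone instead of the Ja1 triangulation and the full hybrid test described above; all names and parameters are illustrative:

```python
import math
from collections import defaultdict

def build_grid(triangles, cell_size):
    """Associate each triangle (by centroid) with a 2D grid cell."""
    grid = defaultdict(list)
    for tri in triangles:
        cx = sum(p[0] for p in tri) / 3.0
        cy = sum(p[1] for p in tri) / 3.0
        grid[(int(cx // cell_size), int(cy // cell_size))].append(tri)
    return grid

def potentially_visible(grid, cell_size, eye, view_dir, max_dist, fov_cos=0.5):
    """Collect primitives from cells inside a simplified view cone.
    view_dir must be a unit vector; fov_cos is the cosine of the half-angle."""
    visible = []
    for (i, j), tris in grid.items():
        cx = (i + 0.5) * cell_size  # cell centre
        cy = (j + 0.5) * cell_size
        dx, dy = cx - eye[0], cy - eye[1]
        dist = math.hypot(dx, dy)
        if dist > max_dist:
            continue  # too far away
        if dist > 0:
            dot = (dx * view_dir[0] + dy * view_dir[1]) / dist
            if dot < fov_cos:
                continue  # outside the view cone
        visible.extend(tris)
    return visible
```

Only the primitives returned by `potentially_visible` would then be loaded and sent to the rendering step.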

Relevance:

20.00%

Publisher:

Abstract:

The purpose of this study was to develop a pilot plant whose main goal is to emulate a pressure peak in the flow entering a separation vessel, an effect similar to that caused by slug flow in production wells equipped with the plunger lift artificial lift method. The motivation for its development was the need to test, in a smaller-scale plant, a new technique developed to estimate the gas flow in production wells equipped with plunger lift. To develop it, studies were carried out on multiphase flow effects, the operation of the plunger lift artificial lift method, industrial instrumentation elements, control valves, separation vessel sizing and measurement systems. The methodology consisted of defining the process flowcharts, their parameters, and how the effects needed for the success of the experiments would be generated. On this basis, the control valves, the design and construction of the vessels and the acquisition of the other equipment were defined. One of the vessels works as a compressed air tank connected to the separation vessel, generating pulses of gas controlled by an on/off valve. With the emulator system ready, several control experiments were carried out, the main ones being the control of the generated pressure peaks and the flow measurement, which confirmed the suitability of the plant for the problem that motivated it. It was concluded that the system is capable of generating flow effects with pressure peaks in a primary separation vessel. Studies such as the estimation of gas flow at the vessel outlet, as well as other academic studies, can be carried out and tested on a smaller scale and then applied to real plants, avoiding waste of time and money.

Relevance:

20.00%

Publisher:

Abstract:

In this work, a new online algorithm for solving the k-Server Problem (KSP) is proposed. Its performance is compared with that of other algorithms in the literature, namely the Harmonic and Work Function algorithms, which have been shown to be competitive, making them meaningful baselines. An algorithm that performs efficiently against them tends to be competitive as well, although this fact must, of course, be proved; such a proof, however, is beyond the scope of the present work. The algorithm presented for solving the KSP is based on reinforcement learning techniques. To this end, the problem was modeled as a multi-stage decision process, to which the Q-learning algorithm is applied, one of the most popular solution methods for establishing optimal policies in this type of decision problem. It should be observed, however, that the size of the storage structure used by reinforcement learning to obtain the optimal policy grows with the number of states and actions, which in turn is proportional to the number n of nodes and k of servers. Analyzing this growth, one sees that it is exponential, limiting the application of the method to smaller problems, where the numbers of nodes and servers are small. This problem, called the curse of dimensionality, was introduced by Bellman and means that an algorithm cannot be executed for certain instances of a problem because the computational resources needed to obtain its output are exhausted. To prevent the proposed solution, based exclusively on reinforcement learning, from being restricted to small applications, an alternative solution is proposed for more realistic problems involving larger numbers of nodes and servers.
This alternative solution is hierarchical and uses two methods to solve the KSP: reinforcement learning, applied to a reduced number of nodes obtained through an aggregation process, and a greedy method, applied to the subsets of nodes resulting from the aggregation, in which the criterion for scheduling the servers is the smallest distance to the demand location.
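The greedy component of the hierarchical solution, moving the server closest to each demand location, can be sketched as follows (an illustrative sketch of the smallest-distance criterion, not the thesis's implementation):

```python
def greedy_k_server(positions, demands, dist):
    """Greedy policy for the k-server problem: serve each demand with the
    nearest server and move it there. `dist(a, b)` is the metric;
    `positions` holds the current server locations."""
    positions = list(positions)
    cost = 0.0
    for d in demands:
        # pick the server with the smallest distance to the demand
        i = min(range(len(positions)), key=lambda j: dist(positions[j], d))
        cost += dist(positions[i], d)
        positions[i] = d  # the chosen server moves to the demand location
    return cost, positions
```

For two servers at points 0 and 10 on a line and the demand sequence 2, 9, 1, the greedy policy pays 2 + 1 + 1 = 4 and leaves the servers at 1 and 9.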

Relevance:

20.00%

Publisher:

Abstract:

In multi-robot systems, both the control architecture and the work strategy represent a challenge for researchers. It is important to have a robust architecture that can be easily adapted to changing requirements, and a work strategy that allows robots to complete tasks efficiently, considering that robots interact directly in environments with humans. In this context, this work explores two approaches to robot soccer team coordination for the development of cooperative tasks. Both approaches are based on a combination of imitation learning and reinforcement learning. In the first approach, we developed a control architecture, a fuzzy inference engine for recognizing situations in robot soccer games, software for the narration of robot soccer games based on the inference engine, and an implementation of learning by imitation from the observation and analysis of other robotic teams. Moreover, state abstraction was efficiently implemented in reinforcement learning applied to the robot soccer standard problem. Finally, reinforcement learning was implemented in a form in which actions are explored only in some states (for example, states where a specialist robot system used them), in contrast to the traditional form, where actions have to be tried in all states. In the second approach, reinforcement learning was implemented with function approximation, for which an algorithm called RBF-Sarsa(λ) was created. In both approaches, batch reinforcement learning algorithms were implemented and imitation learning was used as a seed for reinforcement learning; learning from robotic teams controlled by humans was also explored. The proposals in this work proved efficient in the robot soccer standard problem and, when implemented in other robotic systems, should allow those systems to carry out assigned tasks efficiently and effectively, with a high capability to adapt to changes in requirements and environment.
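The tabular core of Sarsa(λ), with accumulating eligibility traces, can be sketched as below. The thesis's RBF-Sarsa(λ) replaces the table with a radial-basis-function approximator, which is not reproduced here; the environment interface and parameters are illustrative:

```python
import random

def sarsa_lambda(step, n_states, n_actions, episodes=200,
                 alpha=0.1, gamma=0.95, lam=0.8, eps=0.1, seed=0):
    """Tabular Sarsa(lambda); `step(s, a) -> (reward, next_state, done)`."""
    rng = random.Random(seed)
    Q = [[0.0] * n_actions for _ in range(n_states)]

    def policy(s):
        if rng.random() < eps:
            return rng.randrange(n_actions)
        return max(range(n_actions), key=lambda a: Q[s][a])

    for _ in range(episodes):
        e = [[0.0] * n_actions for _ in range(n_states)]  # eligibility traces
        s, a = 0, policy(0)
        done = False
        while not done:
            r, s2, done = step(s, a)
            a2 = policy(s2) if not done else 0
            delta = r + (0.0 if done else gamma * Q[s2][a2]) - Q[s][a]
            e[s][a] += 1.0  # accumulating trace
            # credit all recently visited state-action pairs, then decay
            for si in range(n_states):
                for ai in range(n_actions):
                    Q[si][ai] += alpha * delta * e[si][ai]
                    e[si][ai] *= gamma * lam
            s, a = s2, a2
    return Q
```

The traces let a single reward update every state-action pair visited earlier in the episode, which speeds up learning compared with one-step updates.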

Relevance:

20.00%

Publisher:

Abstract:

This work analyzes, in a simulated environment, the length of counterweight cable connected to ground rods that is needed to avoid the back-flashover phenomenon on the insulator strings of transmission lines built with concrete structures, when they are subjected to standardized lightning impulses, for given resistivity values of several soil types and geometric arrangements of the grounding systems of the structures.

Relevance:

20.00%

Publisher:

Abstract:

The disposal of sludge from Wastewater Treatment Plants (WTPs) is a problem for any municipality; for this reason, the amount of sludge produced is now a key issue in selecting treatment methods. Because of the restrictions imposed by environmental agencies, it is necessary to investigate new applications for this type of waste. The raw materials used in red ceramics are generally very heterogeneous and can therefore tolerate the presence of different types of waste. In Rio Grande do Norte, roof tile production corresponds to 60.61% of all ceramic units produced. Given the importance of the roof tile ceramic industry to the state, together with the environmental problem of sludge disposal, the objective of this work was to verify the possibility of incorporating sewage sludge into the ceramic body used for roof tile production. In the research, sludge from the drying beds of the WTP of the Central Campus of UFRN and clays from a ceramic industry in Goianinha/RN were used. The raw materials were characterized by the following techniques: particle size distribution by laser diffraction; real density; consistency limits; chemical analysis by X-ray fluorescence; mineralogical analysis by X-ray diffraction; organic matter; and solids content. Five batches of roof tiles were manufactured with approximate sludge dosages of 2%, 4%, 6%, 8% and 10%. To evaluate the properties of each final product, tests of water absorption, impermeability, bending strength, leachability and solubility were carried out. The roof tiles manufactured with sludge presented characteristics similar to those without sludge with respect to environmental risk. The results showed that it is possible to use up to approximately 4% of sludge in ceramic bodies for roof tile production.
However, the high amount of organic matter (71%) present in the sludge proves to be a factor that limits its incorporation into ceramic bodies, worsening the quality of the roof tiles. It is necessary to use mixtures of different raw materials, from the point of view of granulometry and of the other chemical and mineralogical properties, to obtain a satisfactory mass for the production of ceramic roof tiles.

Relevance:

20.00%

Publisher:

Abstract:

Eutrophication is a growing process in the water sources of northeastern Brazil. Among the main consequences of these changes in the trophic level of a water source is the added complexity of the treatment needed to meet drinking water standards. In view of this, this study aimed to define, on a laboratory scale, products and operational conditions to be applied in the treatment steps of raw water from the Gargalheiras dam, RN, Brazil. This dam shows a high number of cyanobacteria, with a concentration of cells/mL above that established by Decree No. 518/04 of the Ministry of Health, and the reservoir was also classified by the state environmental agency in 2009 as hypereutrophic. The static tests developed in this research simulated direct filtration (laboratory filters), pre-oxidation with chlorine and adsorption on powdered activated carbon (PAC). The research included the evaluation of the coagulants aluminum hydrochloride (HCA) and aluminum sulfate (SA). The investigation covered the rapid mixing conditions, the coagulant dosages and the coagulation pH, through the drawing of coagulation diagrams. The influence of the filtration rate and of the grain size of the filtering media was evaluated, and different contact times were tested with chlorine and activated carbon. The characterization of the raw water showed a high pH (7.34). The true color was significant (29 uH) in relation to the apparent color and turbidity (66 uH and 13.60 NTU), which was reflected in the organic matter measurements: NOM (8.41 mg/L) and Abs254 (0.065 cm-1). The optimization of the rapid mixing set a time of 17 s with a velocity gradient of 700 s-1 for coagulation with HCA, and a time of 20 s with a velocity gradient of 800 s-1 for SA. Smaller grain sizes of the sand filtering media improved the treatment, and the variation in filtration rate did not significantly affect the efficiency of the process.
The evaluation of the treatment steps verified compliance with the color and turbidity standards of Decree No. 518/04 of the Ministry of Health, considering the average values found in the raw water. In the treatment using HCA in direct filtration, the potability standard for apparent color can be achieved with a dose of 25 mg/L. With the addition of a pre-oxidation step, the standard was achieved with the dose reduced to 12 mg/L of HCA. The turbidity standard was obtained by direct filtration with doses above 25 mg/L of HCA; with pre-oxidation, the dose can be reduced to 20 mg/L. The addition of PAC adsorption produced drinking water for both parameters with an even lower dose, 13 mg/L of HCA. With coagulation using SA, the required removal of apparent color was achieved with pre-oxidation and 22 mg/L of SA. Despite the satisfactory results of treatment with aluminum sulfate, it was not possible to produce water with turbidity below 1.00 NTU, even using all the treatment stages.

Relevance:

20.00%

Publisher:

Abstract:

This work, whose title is "The transcendental arguments: Kant and Hume's problem", has as its main objective to interpret Kant's answer to Hume's problem in the light of the conjunction of the themes of causality and induction, which is equivalent to a skeptical-naturalist reading of the latter. In this sense, this initiative complements the treatment given in our master's dissertation, where the same issue was discussed on the basis of a merely skeptical reading that Kant drew from Hume's thought, and only causality was examined. Among the specific objectives, we list the following: a) critical philosophy fulfills three basic functions: a founding one, a negative one, and one that would support the practical use of reason, here named defensive; b) the Kantian solution of Hume's problem in the first Critique would fulfill the founding and negative functions of the critique of reason; c) the Kantian treatment of the theme of induction in the other Critiques would fulfill the defensive function of the critique of reason; d) the evidence for Kant's answer to Hume's problem is more consistent when these three functions or moments of the critique are satisfied.
The basic structure of the work consists of three parts. In the first, on the genesis of Hume's problem, our intention is to reconstruct Hume's problem, analyzing it from the perspective of the two definitions of cause, where the dilution of the first definition into the second corresponds to the reduction of causal knowledge to probability, following the so-called naturalization of causal relations. In the second, Legality and Causality, it is argued that, when Hume is considered in the skeptical-naturalist option, Kant is not entitled to respond with the transcendental argument A→B; A⊢B of the Second Analogy, a claim rooted in the position of contemporary thinkers such as Strawson and Allison. In the third part, Purpose and Induction, it is admitted that Kant responds to Hume on the level of the regulative use of reason, although the development of this test exceeds the limits of the founding function of the critique; this is articulated, both in the Introduction and in the Concluding Remarks, with the defensive [and negative] function of the critique. In this context, based on the use of the so-called transcendental arguments that run throughout the critical trilogy, we provide a solution to a recurring issue that reappears at several points in our text, concerning the "existence and/or necessity of empirical causal laws". In this light, our thesis is that transcendental arguments provide an apodictic solution to Hume's skeptical-naturalist problem only when a practical project is at stake in which the interest of reason is ensured, as will, in short, be shown in our final considerations.

Relevance:

20.00%

Publisher:

Abstract:

The following work interprets and analyzes the problem of induction from a standpoint founded on set theory and probability theory, as a basis for resolving its negative philosophical implications for systems of inductive logic in general. Given the importance of the problem and the relatively recent developments in these fields of knowledge (early 20th century), as well as the visible relations between them and the process of inductive inference, a relatively unexplored and promising field of possibilities has been opened. The key point of the study consists in modeling the information acquisition process using concepts of set theory, followed by a treatment using probability theory. Throughout the study, two major obstacles to the probabilistic justification were identified: the problem of defining the concept of probability and that of defining rationality, as well as the subtle connection between the two. This finding called for greater care in choosing the criterion of rationality to be considered, in order to facilitate the treatment of the problem through specific situations, without losing its original characteristics, so that the conclusions can be extended to classic cases such as the question of whether the sun will continue to rise.
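The closing example about the sunrise is classically handled by Laplace's rule of succession, which follows from assigning a uniform prior to the unknown success probability; a minimal sketch (our illustration, not a construction taken from the text):

```python
from fractions import Fraction

def rule_of_succession(successes, trials):
    """Laplace's rule of succession: after observing `successes` out of
    `trials` independent outcomes, with a uniform prior on the unknown
    probability, P(next outcome is a success) = (s + 1) / (n + 2)."""
    return Fraction(successes + 1, trials + 2)
```

With no observations at all the rule gives 1/2, and after n sunrises out of n days it gives (n + 1)/(n + 2), approaching but never reaching certainty.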

Relevance:

20.00%

Publisher:

Abstract:

Lithium (Li) is the chemical element with atomic number 3 and is among the lightest known elements in the universe. In nature, lithium is generally found in the form of two stable isotopes, 6Li and 7Li. The latter is dominant and accounts for about 93% of the Li found in the Universe. Because of its fragility, this element is widely used in astrophysics, especially for understanding the physical processes that have occurred since the Big Bang, through the evolution of galaxies and stars. For the primordial nucleosynthesis at the moment of the Big Bang (BBN), theoretical calculation predicts the production of Li along with the other light elements, such as deuterium and beryllium. For Li, BBN theory predicts a primordial abundance of log ε(Li) = 2.72 dex on a logarithmic scale relative to H. The abundance of Li found in metal-poor stars, or pop II stars, is taken as the primordial Li abundance and is measured as log ε(Li) = 2.27 dex. In the ISM (interstellar medium), which reflects the current value, the abundance of lithium is log ε(Li) = 3.2 dex. This value is of great importance for our comprehension of the chemical evolution of the galaxy. The process responsible for the increase over the primordial Li value is still not clearly understood. There is indeed a real contribution of Li from low-mass giant stars, and this contribution needs to be well constrained if we want to understand our galaxy. The main obstacle in this logical sequence is the appearance of some low-mass giant stars of spectral types G and K whose atmospheres are highly enriched in Li.
Such elevated values are exactly the opposite of what is expected for the typical abundances of low-mass giants, whose convective envelopes deepen in mass so that all the Li should be diluted, leading to abundances around log ε(Li) ∼ 1.4 dex according to the standard model of stellar evolution. In the literature, three suggestions are found that try to reconcile the theoretical and observed values of the Li abundance in these Li-rich giants, but none of them brings conclusive answers. In the present work, we propose a qualitative study of the evolutionary state of the Li-rich stars in the literature, along with the recently discovered first Li-rich star observed by the Kepler satellite. The main objective of this work is to promote a solid discussion of the evolutionary state, based on the characteristics obtained from the seismic analysis of the object observed by Kepler. We used evolutionary tracks and simulations done with the population synthesis code TRILEGAL, intending to evaluate as precisely as possible the evolutionary state and internal structure of these groups of stars. The results indicate a characteristic time that is very short compared with the evolutionary scale associated with the enrichment of these stars.
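The dex values quoted above are decadic logarithms, so differences between them translate directly into abundance ratios; for instance, the growth from the primordial BBN value to the ISM value corresponds to a factor of about 10^(3.2 − 2.72) ≈ 3. A minimal sketch of this arithmetic:

```python
def enrichment_factor(log_eps_final, log_eps_initial):
    """Ratio of number abundances between two values on the dex scale,
    where log eps(X) = log10(N_X / N_H) + 12."""
    return 10 ** (log_eps_final - log_eps_initial)
```

So the ISM holds roughly 3 times more lithium than produced in the BBN, and roughly 8.5 times more than observed in pop II stars.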

Relevance:

20.00%

Publisher:

Abstract:

This work presents improvement strategies for a successful evolutionary metaheuristic for solving the Asymmetric Traveling Salesman Problem, namely a Memetic Algorithm designed mainly for this problem. Basically, the improvement applies the optimization techniques known as Path-Relinking and Vocabulary Building; the latter was used in two different ways, in order to evaluate the effects of the improvement on the evolutionary metaheuristic. The methods were implemented in C++ and the experiments were carried out on instances from the TSPLIB library, and it was possible to observe that the proposed procedures were successful in the tests performed.
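A minimal Path-Relinking sketch for tours, stepping from an initial solution toward a guiding solution one position at a time and keeping the best intermediate solution visited, could look like this (illustrative only; the thesis's implementation is in C++ and considerably richer):

```python
def tour_cost(tour, d):
    """Cost of a directed tour over an asymmetric distance matrix d."""
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def path_relink(start, target, cost):
    """Walk from `start` to `target` by placing one city at a time into its
    target position (a swap per step), returning the cheapest tour seen."""
    current = list(start)
    best, best_cost = list(current), cost(current)
    for i in range(len(target)):
        if current[i] != target[i]:
            j = current.index(target[i])
            current[i], current[j] = current[j], current[i]
            c = cost(current)
            if c < best_cost:
                best, best_cost = list(current), c
    return best, best_cost
```

Since the path ends at the guiding solution, the result is never worse than either endpoint, and intermediate tours sometimes improve on both.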