82 results for Loss optimization
Abstract:
Combinatorial optimization problems aim to maximize or minimize functions defined over a finite domain. Metaheuristics are methods designed to find good solutions in this finite domain, sometimes the optimal solution, using a subordinate heuristic modeled for each particular problem. This work presents algorithms based on particle swarm optimization (a metaheuristic) applied to two combinatorial optimization problems: the Traveling Salesman Problem and the Multicriteria Degree-Constrained Minimum Spanning Tree Problem. The first problem optimizes a single objective, while the second deals with many objectives. To evaluate the performance of the proposed algorithms, they are compared, in terms of the quality of the solutions found, to other approaches.
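The abstract above applies particle swarm optimization to the TSP. The sketch below is only a minimal illustration, not the thesis's actual algorithm: it shows one common way to discretize PSO for permutations, where the "velocity" is a list of swaps that moves a particle toward its personal and global bests and is applied probabilistically (all parameter values and helper names here are assumptions).

```python
import random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def swaps_towards(current, target):
    """Swap sequence transforming `current` into `target` (a common discrete-PSO 'velocity')."""
    cur, swaps = current[:], []
    for i in range(len(cur)):
        if cur[i] != target[i]:
            j = cur.index(target[i])
            cur[i], cur[j] = cur[j], cur[i]
            swaps.append((i, j))
    return swaps

def discrete_pso_tsp(dist, n_particles=20, iters=200, seed=0):
    rng = random.Random(seed)
    n = len(dist)
    swarm = [rng.sample(range(n), n) for _ in range(n_particles)]
    pbest = [tour[:] for tour in swarm]
    gbest = min(pbest, key=lambda t: tour_length(t, dist))[:]
    for _ in range(iters):
        for k, tour in enumerate(swarm):
            # Probabilistically apply the swaps that move the particle towards its
            # personal best and the global best (the discrete "velocity" update).
            for i, j in swaps_towards(tour, pbest[k]) + swaps_towards(tour, gbest):
                if rng.random() < 0.5:
                    tour[i], tour[j] = tour[j], tour[i]
            if tour_length(tour, dist) < tour_length(pbest[k], dist):
                pbest[k] = tour[:]
        gbest = min(pbest + [gbest], key=lambda t: tour_length(t, dist))[:]
    return gbest, tour_length(gbest, dist)
```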
Abstract:
This work presents an algorithmic study of the optimization of a conformal radiotherapy treatment plan. Initially we give an overview of cancer, radiotherapy, and the physics of the interaction of ionizing radiation with matter. A proposal for the optimization of a radiotherapy treatment plan is then developed in a systematic way. We present the multicriteria problem paradigm and the concepts of Pareto optimality and Pareto dominance. A generic optimization model for radiotherapy treatment is proposed. We construct the input of the model, estimate the dose delivered by the radiation using the dose matrix, and present the objective function of the model. The complexity of optimization models in radiotherapy treatment is typically NP, which justifies the use of heuristic methods. We propose three distinct methods: MOGA, MOSA and MOTS. The design of these three metaheuristic procedures is presented; for each procedure we give a brief motivation, the algorithm itself, and the method for tuning its parameters. The three methods are applied to a concrete case and their performances are compared. Finally, for each method we analyze the quality of the Pareto sets, some of the solutions, and the respective Pareto curves.
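Since the abstract relies on the notions of Pareto optimality and Pareto dominance, a minimal sketch of a dominance test and non-dominated filtering (for minimization objectives) is given below; the example objective vectors are purely illustrative.

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (all objectives to be minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points if not any(dominates(q, p) for q in points if q != p)]

# Example: each tuple could be (tumour underdose, healthy-tissue overdose), hypothetical values.
plans = [(0.10, 0.30), (0.20, 0.20), (0.15, 0.25), (0.30, 0.10), (0.25, 0.35)]
print(pareto_front(plans))  # the last plan is dominated by (0.20, 0.20) and is filtered out
```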
Abstract:
This work proposes and evaluates a modification to Ant Colony Optimization based on the results of experiments performed on the Selective Ride Robot problem (PRS), a new problem also proposed in this work. Four metaheuristics are implemented, GRASP, VNS and two versions of Ant Colony Optimization, and their results are analyzed by running the algorithms over 32 instances created during this work. The metaheuristics also have their results compared to an exact approach. The results show that the algorithm implemented with the GRASP metaheuristic performs well, and that the multicolony version of the ant colony algorithm, proposed and evaluated in this work, obtains the best results.
Abstract:
Committees of classifiers may be used to improve the accuracy of classification systems; in other words, different classifiers used to solve the same problem can be combined into a system of greater accuracy, called a committee of classifiers. For this to succeed, the classifiers must make mistakes on different objects of the problem, so that the errors of one classifier are outweighed by the other, correct classifiers when the committee's combination method is applied. This characteristic of classifiers erring on different objects is called diversity. However, most diversity measures fail to capture this property. Recently, two diversity measures (good and bad diversity) were proposed with the aim of helping to generate more accurate committees. This work performs an experimental analysis of these measures applied directly to the construction of committees of classifiers. The construction method adopted is modeled as a search, over the feature sets of the problem databases and the set of committee members, for the committee of classifiers that produces the most accurate classification. This problem is solved by metaheuristic optimization techniques, in their mono- and multi-objective versions. Analyses are performed to verify whether using or adding the good and bad diversity measures as optimization objectives produces more accurate committees. Thus, the contribution of this study is to determine whether good diversity and bad diversity can be used in mono-objective and multi-objective optimization techniques as optimization objectives to build committees of classifiers more accurate than those built by the same process using only classification accuracy as the optimization objective.
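As a concrete illustration of the good/bad diversity idea discussed above, the sketch below computes both quantities from classifier votes on a validation set, assuming the common decomposition in which member errors on instances the committee classifies correctly count as good diversity, while member correctness on instances the committee misclassifies counts as bad diversity; this is a simplified reading, not necessarily the exact measures used in the thesis.

```python
from collections import Counter

def good_bad_diversity(votes, labels):
    """votes[i][j] = prediction of classifier j on instance i; labels[i] = true class.
    Good diversity: average fraction of members that err when the majority vote is correct.
    Bad diversity: average fraction of members that are correct when the majority vote is wrong."""
    good = bad = 0.0
    for preds, y in zip(votes, labels):
        majority = Counter(preds).most_common(1)[0][0]
        wrong = sum(p != y for p in preds) / len(preds)
        if majority == y:
            good += wrong          # members may disagree without hurting the committee
        else:
            bad += 1.0 - wrong     # correct members wasted on a wrong committee decision
    n = len(labels)
    return good / n, bad / n

# Toy validation set with three instances and a committee of three classifiers.
votes = [["A", "A", "B"], ["B", "A", "A"], ["B", "B", "A"]]
labels = ["A", "A", "A"]
print(good_bad_diversity(votes, labels))
```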
Abstract:
This work addresses the optimization problem in high-dose-rate brachytherapy for the treatment of cancer patients, aiming at the definition of the set of dwell times. The solution technique adopted was Computational Transgenetics supported by the L-BFGS method. The developed algorithm was employed to generate non-dominated solutions whose dose distributions were able to eliminate the cancer while preserving the normal regions.
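A minimal sketch of the dwell-time optimization step is given below, using L-BFGS through SciPy on a quadratic dose-deviation objective; the dose matrix, prescribed doses and objective are illustrative stand-ins and do not reproduce the thesis's transgenetic/L-BFGS hybrid.

```python
import numpy as np
from scipy.optimize import minimize

# Illustrative data: dose_matrix[i, j] = dose delivered to voxel i per unit dwell time at position j.
rng = np.random.default_rng(42)
dose_matrix = rng.uniform(0.0, 1.0, size=(200, 20))
prescribed = np.full(200, 10.0)          # target dose per voxel (hypothetical units)

def objective(t):
    """Quadratic deviation between delivered and prescribed dose."""
    delivered = dose_matrix @ t
    return np.sum((delivered - prescribed) ** 2)

def gradient(t):
    return 2.0 * dose_matrix.T @ (dose_matrix @ t - prescribed)

t0 = np.ones(dose_matrix.shape[1])
result = minimize(objective, t0, jac=gradient, method="L-BFGS-B",
                  bounds=[(0.0, None)] * dose_matrix.shape[1])  # dwell times must be non-negative
print(result.fun, result.x[:5])
```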
Optimization of the synthesis of AlSBA-15 for biodiesel production by transesterification of coconut oil
Abstract:
Incentives encouraging the production and consumption of biodiesel support environmental preservation policy, contributing to the reduction of greenhouse gases and mitigating climate change. The current research trend in this field focuses on improving these processes with the use of heterogeneous catalysts, which have significant advantages such as low contamination of the products, easy separation of the catalyst from the reaction medium, the possibility of catalyst reuse, and fewer corrosion problems. The objective of this research was to optimize the synthesis of AlSBA-15 for the production of biodiesel through ethyl-route transesterification. For the optimization of the hydrothermal synthesis of the AlSBA-15 catalyst, a 2³ factorial experimental matrix with eleven trials was assembled. The stoichiometric amounts of the starting materials were varied according to different Si/Al ratios, one of the factors in the experimental design, in addition to the aging time and temperature of the synthesis gel. The material with the best characterization results (SBET = 591.7 m²/g, Vp = 0.83 cm³/g, Dp = 5.59 nm, w = 6.48 nm) was synthesized at 100 °C for 24 hours with a ratio Si/Al = 10. This material was applied as a heterogeneous catalyst in the ethyl transesterification of raw coconut oil in natura. Coconut oil proved suitable for obtaining biodiesel via the ethyl route. The visual aspects and physico-chemical characteristics of the reaction products show that the AlSBA-15 catalyst favored the reaction. According to the physico-chemical analyses, the order of oxidative stability of the transesterification products was: catalytic reaction at 150 °C > non-catalytic reaction at 100 °C > catalytic reaction at 100 °C > catalytic reaction at 200 °C. The results of oxidative stability and kinematic viscosity show that the biodiesel produced in the catalytic reaction held at 150 °C remained within the limits of ABNT NBR 7148, ABNT NBR 10441 and EN 14112.
Abstract:
Sulfated polysaccharides (SP) from the edible red seaweed Gracilaria birdiae were obtained using different extraction conditions (GB1: water; GB1p: water/proteolysis; GB1s: water/sonication; GB1sp: water/sonication/proteolysis; GB2s: NaOH/sonication; GB2sp: NaOH/sonication/proteolysis). The yield (g) increased in the order GB2sp > GB1sp > GB1p > GB2s > GB1s > GB1. However, the amount of SP extracted increased in a different order: GB2sp > GB1p > GB1 > GB1sp > GB1s > GB2s. Infrared and electrophoresis analyses showed that all conditions extracted the same SP. In addition, the monosaccharide composition showed that ultrasound promotes the extraction of polysaccharides other than SP. In the prothrombin time (PT) test, which evaluates the extrinsic coagulation pathway, none of the samples showed anticoagulant activity, whereas in the activated partial thromboplastin time (aPTT) test, which evaluates the intrinsic coagulation pathway, all samples showed anticoagulant activity except GB2s. The aPTT activity decreased in the order GB1sp > GB2sp > GB1p > GB1 > GB1s > GB2s. The total antioxidant capacity (TCA) of the SP was also affected by the extraction condition, since GB2s and GB1 showed lower activity than the other conditions. In conclusion, the SP extraction conditions influence their biological activities and chemical composition, and the data showed that NaOH/sonication/proteolysis was the best condition to extract anticoagulant and antioxidant SPs from Gracilaria birdiae.
Abstract:
A major and growing problem faced by modern society is the high production of waste and its related effects, such as environmental degradation and pollution of various ecosystems, with direct effects on quality of life. Thermal treatment technologies have been widely used in the treatment of these wastes, and thermal plasma has been gaining importance in this kind of processing. This work focuses on developing an optimized supervision and control system applied to a plant for processing petrochemical waste and effluents using thermal plasma. The system is basically composed of an inductive plasma torch, reactors, a gas washing/exhaust system and the RF power supply used to generate the plasma. The supervision and control of the plant is of paramount importance to the final goal. For this reason, several resources were developed in the search for greater process efficiency: event generation, plotting/distribution and storage of data for each subsystem of the plant, process execution, control, and 3D visualization of each subsystem of the plant, among others. A communication platform between the virtual 3D plant architecture and the real control structure (hardware) was created. The goal is to use mixed reality concepts and to develop strategies for different types of controllers that allow the 3D plant to be manipulated without restrictions or fixed schedules, optimizing the actual process. Studies have shown that one of the best ways to implement the control of inductively coupled plasma generation is to use intelligent control techniques, both for their efficiency and for their low implementation cost, since they do not require a specific model. A control strategy using fuzzy logic (Fuzzy-PI) was developed and implemented, and the results showed satisfactory performance in terms of response time and feasibility.
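To make the Fuzzy-PI strategy mentioned above concrete, the sketch below implements a minimal incremental fuzzy PI controller with triangular membership functions, a 3x3 rule base and weighted-average defuzzification; the membership shapes, rules, gain and toy plant are assumptions for illustration only, not the plant controller described in the thesis.

```python
def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(x):
    """Memberships of x in {negative, zero, positive} on a normalized [-1, 1] universe."""
    return {"N": tri(x, -2.0, -1.0, 0.0), "Z": tri(x, -1.0, 0.0, 1.0), "P": tri(x, 0.0, 1.0, 2.0)}

# Rule base: (error term, delta-error term) -> singleton output for the control increment.
RULES = {("N", "N"): -1.0, ("N", "Z"): -0.5, ("N", "P"): 0.0,
         ("Z", "N"): -0.5, ("Z", "Z"): 0.0, ("Z", "P"): 0.5,
         ("P", "N"): 0.0, ("P", "Z"): 0.5, ("P", "P"): 1.0}

def fuzzy_pi_step(error, delta_error, u_prev, gain=0.1):
    """One incremental Fuzzy-PI step: u(k) = u(k-1) + gain * defuzzified increment."""
    mu_e, mu_de = fuzzify(error), fuzzify(delta_error)
    num = den = 0.0
    for (te, tde), out in RULES.items():
        w = min(mu_e[te], mu_de[tde])      # rule firing strength (Mamdani min)
        num += w * out                     # weighted-average (singleton) defuzzification
        den += w
    increment = num / den if den > 0 else 0.0
    return u_prev + gain * increment

# Example: drive a hypothetical first-order plant towards a setpoint of 1.0.
y, u, prev_error = 0.0, 0.0, 0.0
for _ in range(50):
    error = 1.0 - y
    u = fuzzy_pi_step(error, error - prev_error, u)
    prev_error = error
    y += 0.2 * (u - y)                     # toy plant dynamics
print(round(y, 3))
```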
Abstract:
An important problem faced by the oil industry is distributing multiple oil products through pipelines. Distribution is done in a network composed of refineries (source nodes), storage parks (intermediate nodes), and terminals (demand nodes) interconnected by a set of pipelines transporting oil and derivatives between adjacent areas. Constraints related to storage limits, delivery time, source availability, sending and receiving limits, among others, must be satisfied. Some researchers deal with this problem from a discrete viewpoint in which the flow in the network is seen as the sending of batches. Usually, there is no separation device between batches of different products, and the losses due to interfaces may be significant. Minimizing delivery time is a typical objective adopted by engineers when scheduling product shipments in pipeline networks. However, the costs incurred due to losses at interfaces cannot be disregarded. The cost also depends on pumping expenses, which are mostly due to the cost of electricity. Since the industrial electricity tariff varies over the day, pumping at different time periods has different costs. This work presents an experimental investigation of computational methods designed to deal with the problem of distributing oil derivatives in networks considering three minimization objectives simultaneously: delivery time, losses due to interfaces, and electricity cost. The problem is NP-hard and is addressed with hybrid evolutionary algorithms. The hybridizations are mainly focused on Transgenetic Algorithms and classical multi-objective evolutionary algorithm architectures such as MOEA/D, NSGA2 and SPEA2. Three architectures named MOTA/D, NSTA and SPETA are applied to the problem. An experimental study compares the algorithms on thirty test cases. To analyze the results, Pareto-compliant quality indicators are used, and the significance of the results is evaluated with non-parametric statistical tests.
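As a small illustration of the electricity-cost objective described above, the sketch below prices a pumping schedule under a time-of-day tariff; the tariff values, pump power and schedule format are hypothetical.

```python
# Hypothetical time-of-day tariff (currency/kWh) for each hour of the day.
TARIFF = [0.30] * 6 + [0.45] * 11 + [0.80] * 4 + [0.30] * 3   # off-peak / intermediate / peak

def pumping_cost(operations):
    """Electricity cost of a pumping schedule.
    Each operation is (start_hour, duration_hours, pump_power_kw); the cost depends on
    which tariff periods the pumping hours fall into."""
    cost = 0.0
    for start, duration, power_kw in operations:
        for h in range(start, start + duration):
            cost += power_kw * TARIFF[h % 24]
    return cost

# Pumping the same batch during off-peak hours is cheaper than during the evening peak.
print(pumping_cost([(2, 4, 150.0)]))   # off-peak hours
print(pumping_cost([(17, 4, 150.0)]))  # peak hours
```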
Abstract:
Launching centers are designed for scientific and commercial activities with aerospace vehicles. Rocket Tracking Systems (RTS) are part of the infrastructure of these centers and are responsible for collecting and processing the trajectory data of the vehicles. Generally, Parabolic Reflector Radars (PRRs) are used in RTS. However, it is possible to use radars with antenna arrays, or Phased Arrays (PAs), the so-called Phased Array Radars (PARs). In this case, the excitation signal of each radiating element of the array can be adjusted to perform electronic control of the radiation pattern in order to improve the functionality and maintenance of the system. Therefore, in the implementation and reuse of PAR projects, the modeling is subject to various combinations of excitation signals, producing a complex optimization problem due to the large number of available solutions. In this case, it is possible to use offline optimization methods, such as Genetic Algorithms (GAs), to compute the problem solutions, which are stored for online applications. Hence, the Genetic Algorithm with Maximum-Minimum Crossover (GAMMC) optimization method was used to develop the GAMMC-P algorithm, which optimizes the modeling step of radiation pattern control for planar PAs. Compared with a conventional crossover GA, the GAMMC takes a different approach, because it performs the crossover of the fittest individuals with the least fit individuals in order to enhance genetic diversity. Thus, the GAMMC prevents premature convergence, increases population fitness and reduces processing time. The GAMMC-P uses a reconfigurable algorithm with multiple objectives, a different coding, and the MMC genetic operator. The test results show that GAMMC-P met the proposed requirements for different operating conditions of a planar PAR.
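The maximum-minimum pairing idea described above (crossing the fittest individuals with the least fit ones) can be sketched as follows; the placeholder uniform crossover, toy fitness and population encoding are assumptions and do not correspond to the actual GAMMC-P implementation.

```python
import random

def max_min_pairs(population, fitness):
    """Pair the fittest individual with the least fit, the second fittest with the
    second least fit, and so on -- the pairing idea behind the maximum-minimum crossover."""
    ranked = sorted(population, key=fitness, reverse=True)
    return [(ranked[i], ranked[-(i + 1)]) for i in range(len(ranked) // 2)]

def uniform_crossover(a, b, rng):
    """Placeholder crossover: each gene is taken from either parent with equal probability."""
    return [x if rng.random() < 0.5 else y for x, y in zip(a, b)]

rng = random.Random(1)
population = [[rng.uniform(-1, 1) for _ in range(8)] for _ in range(10)]
fitness = lambda ind: -sum(g * g for g in ind)   # toy fitness, stands in for a pattern-quality measure
offspring = [uniform_crossover(a, b, rng) for a, b in max_min_pairs(population, fitness)]
print(len(offspring), offspring[0])
```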
Development of the base cell of periodic composite microstructures under topology optimization
Abstract:
This thesis develops a new technique for the design of composite microstructures through topology optimization, with the aim of maximizing stiffness, making use of the strain energy method and an h-adaptive refinement scheme to better define the topological contours of the microstructure. This is done by distributing material optimally over a pre-established design region called the base cell. The Finite Element Method is used to describe the field and to solve the governing equations. The mesh is refined iteratively so that refinement is applied to all elements representing solid material and to all void elements containing at least one node in a solid-material region. The finite element chosen for the model is the three-node linear triangle. The constrained nonlinear programming problem is solved with the Augmented Lagrangian method together with a minimization algorithm based on quasi-Newton search directions and the Armijo-Wolfe conditions to assist the descent process. The base cell representing the composite is found from the equivalence between a fictitious material and a prescribed material distributed optimally over the design region. The use of the strain energy method is justified by its lower computational cost, due to a formulation simpler than the traditional homogenization method. Results are presented for changes in the prescribed displacement, in the volume constraint, and for various initial values of the relative densities.
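The Augmented Lagrangian plus quasi-Newton combination mentioned above can be illustrated with the minimal sketch below, which wraps an outer multiplier/penalty update around a BFGS inner solve from SciPy; the toy objective and single equality constraint stand in for the compliance and volume-constraint formulation of the thesis.

```python
import numpy as np
from scipy.optimize import minimize

def f(x):                       # toy objective standing in for strain energy / compliance
    return (x[0] - 1.0) ** 2 + (x[1] - 2.0) ** 2

def g(x):                       # single equality constraint standing in for the volume constraint
    return x[0] + x[1] - 2.0

def augmented_lagrangian(x0, mu=1.0, lam=0.0, outer_iters=10):
    x = np.asarray(x0, dtype=float)
    for _ in range(outer_iters):
        # Inner subproblem solved with a quasi-Newton (BFGS) method.
        La = lambda x: f(x) + lam * g(x) + 0.5 * mu * g(x) ** 2
        x = minimize(La, x, method="BFGS").x
        lam += mu * g(x)        # Lagrange multiplier update
        mu *= 2.0               # penalty increase
    return x, lam

x_opt, lam_opt = augmented_lagrangian([0.0, 0.0])
print(x_opt, lam_opt)           # approaches the constrained minimum at (0.5, 1.5)
```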
Abstract:
Surface defects on steel castings impose costs on foundries due to the need for rework. Sand molds are frequently used in foundries and are largely responsible for surface defects. This study aims to optimize the levels of the molding process variables to minimize the occurrence of surface defects in steel castings made in silica sand molds chemically bonded by the cold cure process. The methodology used a split-plot experimental design, with the following factors considered in the study: resin percentage in the mold formulation, addition of iron oxide, type of paint, paint application method, number of paint layers, use of hot air in the mold, and waiting time of the mold before pouring. The response variables analyzed were erosion defects, sand inclusion, penetration, porosity and surface finish. Tensile strength tests were performed to evaluate the influence of the factors on mechanical parameters, and for the microstructural parameters, X-ray diffraction, scanning electron microscopy (SEM) and thermal analysis (TG/DSC/dilatometry) were carried out. The results show that for the erosion defect, the only significant factor at the 95% confidence level was the type of paint, with the alumina-based paint obtaining superior results. For the sand inclusion defect, there were three significant factors, with the best results obtained with alumina-based paint applied by spray and the use of hot air in the mold before pouring the metal. For the penetration defect, there were four significant factors, with the best results achieved with 0.8% resin and addition of iron oxide in the mold formulation, the paint applied by brush, and a waiting time of 24 hours before pouring. For the porosity defect, no factors were significant at the 95% confidence level. For the surface finish defect, the best results were achieved with 0.8% resin in the mold formulation and application of the paint by brush. To obtain the factor levels that optimize all defects simultaneously, a weighted average of the results for each type of defect was computed, leading to the conclusion that the best factor levels were: 0.8% resin and addition of iron oxide in the mold formulation, application of two coats of paint with a brush or spray, use of hot air in the mold before pouring, and 24 hours of waiting time of the finished mold before pouring. These optimized factor levels were used in a confirmation experiment that ratified the results, helping to reduce rework and consequently the costs of cast steel parts.
Abstract:
Hexavalent chromium is a heavy metal present in various industrial effluents and, depending on its concentration, may cause irreparable damage to the environment and to humans. In this context, this study focused on the application of electrochemical methods to determine and remove hexavalent chromium (Cr6+) from simulated wastewater. For the determination, cathodic stripping voltammetry (CSV) was applied using an ultra-trace graphite working electrode, an Ag/AgCl reference electrode and a platinum counter electrode; the samples were complexed with 1,5-diphenylcarbazide and then analyzed. For the removal of Cr6+, the electrocoagulation (EC) process was applied using Fe and Al electrodes. The variables of the 2⁴ factorial design applied to optimize the EC process were: current density (5 and 10 mA.cm-2), temperature (25 and 60 ºC), concentration (50 and 100 ppm) and agitation rate (400 and 600 RPM). Preliminary tests confirmed the adequacy of CSV for determining the Cr6+ removed during the EC process. The Fe and Al electrodes used as sacrificial anodes showed satisfactory results in the EC process; however, Fe achieved complete removal in 30 min, whereas with Al this occurred at 240 min. With the 2⁴ factorial design and Response Surface Methodology it was possible to optimize the EC process for removal of Cr6+ in H2SO4 solution (0.5 mol.L-1), in which temperature, with a positive effect, was the variable with the highest statistical significance compared with the other variables and interactions, while in optimizing the EC process for removal of Cr6+ in NaCl solution (0.1 mol.L-1), the current density, with a positive effect, and the concentration, with a negative effect, were the variables with the greatest statistical significance compared with the other variables and interactions. The use of the supporting electrolytes NaCl and Na2SO4 showed no significant differences; however, NaCl resulted in faster Cr6+ removal kinetics, and increasing the NaCl concentration increased the conductivity of the solution, resulting in lower energy consumption. The wear of the electrodes evaluated in all EC processes showed that Al in H2SO4 solution (0.5 mol.L-1) undergoes anodization during the EC process, so its experimental mass loss is lower than the theoretical mass loss, whereas Fe in the same medium showed an experimental mass loss greater than the theoretically estimated one. This is due to a spontaneous reaction of Fe with H2SO4; when the reaction medium was NaCl or Na2SO4, the experimental mass loss approached the theoretical mass loss. Furthermore, the energy consumption of all the processes involved in this study corresponded to a low operating cost, thus enabling the application of the EC process for treating industrial effluents. The results were satisfactory, with complete removal of Cr6+ achieved in all the processes used in this study.
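As an illustration of the 2⁴ factorial design used above, the sketch below builds the 16-run design matrix and estimates the main effect of each factor from the responses; the factor names follow the abstract, while the removal-efficiency responses are hypothetical numbers.

```python
from itertools import product

FACTORS = ["current_density", "temperature", "concentration", "agitation"]

# Full 2^4 design: every combination of low (-1) and high (+1) levels.
design = list(product([-1, 1], repeat=len(FACTORS)))

def main_effects(design, responses):
    """Main effect of each factor: mean response at the high level minus mean at the low level."""
    effects = {}
    for j, name in enumerate(FACTORS):
        high = [r for run, r in zip(design, responses) if run[j] == 1]
        low = [r for run, r in zip(design, responses) if run[j] == -1]
        effects[name] = sum(high) / len(high) - sum(low) / len(low)
    return effects

# Hypothetical Cr6+ removal efficiencies (%) for the 16 runs, in the same order as `design`.
responses = [62, 70, 65, 74, 60, 69, 63, 72, 80, 88, 82, 91, 78, 86, 81, 90]
print(main_effects(design, responses))
```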
Abstract:
Turbulent and laminar flows are present in various engineering applications, and one of the biggest villains of energy loss is surface friction. Currently, several research efforts are aimed at the study of drag reduction (DR) with the objective of developing effective methods to reduce friction. Despite the numerous studies carried out to date, the DR phenomenon is still under study and is not fully understood. This work studied drag reduction by polymer induction in turbulent internal flows in ducts. A test bench was built to perform the drag reduction analysis; the bench basically comprises two manometers with an 8.5 psi full scale, a 0.5 HP peripheral pump, an acrylic tank, valves and PVC tubes, and is located in the Fluid Mechanics Laboratory at UFRN. Polyethylene glycol 4000, Polyox WSR N60K, Polyox WSR 301 and Polyox WSR 205 were used as polymer additives. The rationale for choosing these polymers is their wide application in situations requiring greater energy efficiency, such as the addition of drag-reducing polymers to the jets used by fire departments to reach greater distances. The polymer-induced drag reduction is investigated through turbulent flow analysis, with Reynolds numbers in a range between 2×10⁴