42 results for Problema aditivo
Abstract:
Combinatorial optimization problems have engaged a large number of researchers in the search for approximate solutions, since it is generally accepted that they cannot be solved in polynomial time. Initially, these solutions were based on heuristics. Currently, metaheuristics are more often used for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of what is called an "Operon" heuristic for the construction of the information chains necessary for the implementation of transgenetic (evolutionary) algorithms, mainly using statistical methodology, namely Cluster Analysis and Principal Component Analysis; and the use of statistical analyses that are adequate for evaluating the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains to promote an "intelligent" search of the solution space. The approach is applied to the Traveling Salesman Problem (TSP) through a transgenetic algorithm known as ProtoG. A strategy is also proposed for the renewal of part of the chromosome population, triggered by adopting a minimum limit on the coefficient of variation of the fitness function of the individuals, computed over the population. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms, and a Simulated Annealing algorithm. Three performance analyses of these algorithms are proposed. The first is carried out through Logistic Regression, based on the probability of the algorithm under test finding an optimal solution for a TSP instance. The second is carried out through Survival Analysis, based on the probability distribution of the execution time observed until an optimal solution is reached. The third is carried out by means of a non-parametric Analysis of Variance, considering the Percent Error of the Solution (PES), the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted on sixty-one instances of the Euclidean TSP with sizes of up to 1,655 cities. The first two experiments deal with the adjustment of four parameters used in the ProtoG algorithm in an attempt to improve its performance. The last four were undertaken to evaluate the performance of ProtoG in comparison with the three algorithms adopted. For these sixty-one instances, statistical tests provide evidence that ProtoG performs better than these three algorithms in fifty instances. In addition, for the thirty-six instances considered in the last three experiments, in which the performance of the algorithms was evaluated through the PES, the average PES obtained with ProtoG was less than 1% in almost half of these instances, reaching its largest average, equal to 3.52%, for one instance of 1,173 cities. Therefore, ProtoG can be considered a competitive algorithm for solving the TSP, since it is not rare in the literature to find average PES values greater than 10% reported for instances of this size.
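To make the two statistical quantities above concrete, the sketch below shows, under assumed names and a hypothetical threshold value (neither taken from the thesis), how the PES and the coefficient-of-variation renewal criterion can be computed:

```python
# Illustrative sketch (assumed notation, not the thesis code): the Percent Error
# of the Solution (PES) and a coefficient-of-variation criterion that triggers
# renewal of part of the chromosome population.
import statistics

def pes(found_cost, best_known_cost):
    """Percentage by which the found solution exceeds the best known one."""
    return 100.0 * (found_cost - best_known_cost) / best_known_cost

def should_renew_population(fitness_values, cv_threshold=0.05):
    """Renew part of the population when its fitness values become too
    homogeneous, i.e. the coefficient of variation falls below a minimum."""
    mean = statistics.mean(fitness_values)
    cv = statistics.pstdev(fitness_values) / mean
    return cv < cv_threshold

print(pes(52300, 50500))                              # about 3.56 %
print(should_renew_population([100, 101, 99, 100]))   # True: population too uniform
```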
Abstract:
Metaheuristic techniques are known to solve optimization problems classified as NP-complete and are successful in obtaining good-quality solutions. They use non-deterministic approaches to generate solutions that are close to the optimum, without the guarantee of finding the global optimum. Motivated by the difficulties in solving these problems, this work proposes the development of parallel hybrid methods using reinforcement learning and the metaheuristics GRASP and Genetic Algorithms. With these techniques, we aim to contribute to improved efficiency in obtaining good solutions. Instead of using the Q-learning reinforcement learning algorithm merely as a technique for generating the initial solutions of the metaheuristics, we use it in a cooperative and competitive approach with the Genetic Algorithm and GRASP, in a parallel implementation. In this context, it was possible to verify that the implementations in this study showed satisfactory results under both strategies, that is, cooperation and competition between the algorithms and cooperation and competition between groups. In some instances the global optimum was found; in others, these implementations came close to it. A performance analysis of the proposed approach was also carried out, showing good results with respect to the efficiency and speedup (gain in speed with parallel processing) of the implementations performed.
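As a rough illustration of the reinforcement-learning component mentioned above, the following sketch shows a generic Q-learning update and an epsilon-greedy action choice; the function names and parameter values are assumptions, not details from the thesis:

```python
# Minimal Q-learning sketch (illustrative only). A learned Q-table of this kind
# is what the hybrid methods above combine with GRASP and the Genetic Algorithm.
import random
from collections import defaultdict

def q_learning_step(Q, state, action, reward, next_state, actions,
                    alpha=0.1, gamma=0.9):
    """One temporal-difference update of the Q-table."""
    best_next = max(Q[(next_state, a)] for a in actions)
    Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])

def epsilon_greedy(Q, state, actions, epsilon=0.1):
    """Explore with probability epsilon, otherwise exploit the best known action."""
    if random.random() < epsilon:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

Q = defaultdict(float)  # Q[(state, action)] -> estimated return
```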
Abstract:
We revisit the visibility problem, which is to determine the set of primitives potentially visible in a set of geometric data represented by a data structure such as a mesh of polygons or triangles, and we propose a solution for speeding up three-dimensional visualization processing in applications. We introduce a lean structure, in the sense of data abstraction and reduction, which can be used for online and interactive applications. The visibility problem is especially important in the 3D visualization of scenes represented by large volumes of data, when it is not worthwhile to keep all polygons of the scene in memory. Doing so implies more time spent in rendering, or it may even be impossible for huge volumes of data. In these cases, given a viewing position and direction, the main objective is to determine and load a minimum amount of primitives (polygons) of the scene, to accelerate the rendering step. For this purpose, our algorithm culls primitives using a hybrid paradigm based on three known techniques. The scene is divided into a grid of cells, each cell is associated with the primitives that belong to it, and the set of potentially visible primitives is then determined. The novelty is the use of the Ja1 triangulation to create the subdivision grid. We chose this structure because of its relevant characteristics of adaptivity and algebrism (ease of calculation). The results show a substantial improvement over the traditional methods when applied separately. The method introduced in this work can be used in devices with little or no dedicated processing power, and can also be used to view data over the Internet, as in virtual museum applications.
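A much simplified sketch of the cell-based culling idea follows; it assumes a uniform grid and hypothetical helper names rather than the Ja1 triangulation actually used in the work:

```python
# Simplified sketch: primitives are indexed by the grid cell containing their
# centroid, and only primitives in cells near the viewer and roughly in front
# of the view direction are reported as potentially visible.
from collections import defaultdict

def build_grid(primitives, cell_size):
    """primitives: list of (centroid_xyz, primitive_id)."""
    grid = defaultdict(list)
    for (x, y, z), pid in primitives:
        cell = (int(x // cell_size), int(y // cell_size), int(z // cell_size))
        grid[cell].append(pid)
    return grid

def potentially_visible(grid, cell_size, eye, view_dir, max_cells=4):
    """Collect primitives from cells within max_cells of the eye, not behind it."""
    ex, ey, ez = (int(c // cell_size) for c in eye)
    visible = []
    for (cx, cy, cz), pids in grid.items():
        dx, dy, dz = cx - ex, cy - ey, cz - ez
        if max(abs(dx), abs(dy), abs(dz)) > max_cells:
            continue
        # Keep cells roughly in front of the viewer (dot product with view_dir).
        if dx * view_dir[0] + dy * view_dir[1] + dz * view_dir[2] >= 0:
            visible.extend(pids)
    return visible

grid = build_grid([((0.5, 0.5, 0.5), "tri0"), ((9.0, 0.0, 0.0), "tri1")], cell_size=1.0)
print(potentially_visible(grid, 1.0, eye=(0.0, 0.0, 0.0), view_dir=(1.0, 0.0, 0.0)))  # ['tri0']
```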
Abstract:
In this work, a new online algorithm for solving the k-Server Problem (PKS) is proposed. Its performance is compared with that of other algorithms in the literature, namely the Harmonic and Work Function algorithms, which have been shown to be competitive, making them meaningful benchmarks. An algorithm that performs efficiently relative to them tends to be competitive as well, although this would obviously have to be proved; such a proof, however, is beyond the scope of the present work. The algorithm presented for solving the PKS is based on reinforcement learning techniques. To that end, the problem was modeled as a multi-stage decision process, to which the Q-learning algorithm is applied, one of the most popular methods for establishing optimal policies in this type of decision problem. It should be noted, however, that the size of the storage structure used by reinforcement learning to obtain the optimal policy grows as a function of the number of states and actions, which in turn is proportional to the number n of nodes and k of servers. Analyzing this growth, one sees that it is exponential, limiting the application of the method to smaller problems, in which the number of nodes and servers is small. This problem, known as the curse of dimensionality, was introduced by Bellman and implies the impossibility of executing an algorithm for certain instances of a problem due to the exhaustion of the computational resources needed to obtain its output. To prevent the proposed solution, based exclusively on reinforcement learning, from being restricted to small applications, an alternative solution is proposed for more realistic problems involving a larger number of nodes and servers. This alternative solution is hierarchical and uses two methods for solving the PKS: reinforcement learning, applied to a reduced number of nodes obtained from an aggregation process, and a greedy method, applied to the subsets of nodes resulting from the aggregation process, in which the criterion for scheduling the servers is the shortest distance to the demand location.
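As an illustration of the greedy criterion used in the hierarchical solution, the sketch below (assumed names, not the thesis code) simply moves the server closest to the demand node:

```python
# Illustrative greedy step for the k-server problem: serve each demand with the
# nearest server, as described for the aggregated subsets of nodes above.
def greedy_serve(servers, demand, dist):
    """servers: current node of each of the k servers; demand: demand node;
    dist: distance function between two nodes. Moves the closest server to the
    demand node and returns the movement cost incurred."""
    closest = min(range(len(servers)), key=lambda i: dist(servers[i], demand))
    cost = dist(servers[closest], demand)
    servers[closest] = demand
    return cost

# Example on a line metric: servers at nodes 0 and 10, demand at node 3.
servers = [0, 10]
print(greedy_serve(servers, 3, lambda a, b: abs(a - b)))  # 3 (the server at 0 moves)
print(servers)                                            # [3, 10]
```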
Abstract:
Tricalcium phosphate ceramics have been widely investigated in recent years due to their bioresorbable behavior. The limiting factor for the application of these materials as temporary implants is their low mechanical strength. Tricalcium phosphate undergoes an allotropic transformation β→α at around 1250 °C that degrades its strength. Some studies have been developed in order to densify this material within this temperature range. The objective of this work is to study the influence of the addition of magnesium oxide (MgO) on the sintering of β-TCP. The processing route was uniaxial hot pressing, with the objective of obtaining dense samples. The samples were physically characterized through density and porosity measurements. The thermal behavior was studied through dilatometric, differential thermal, and thermogravimetric analyses. The mechanical properties were characterized by the three-point flexure test and Vickers microhardness measurements, and the microstructure was analyzed. The addition of magnesium oxide did not improve the mechanical strength in relation to the material without the additive.
Abstract:
Calcium phosphate ceramics have been extensively investigated as materials for bone implants. Tricalcium phosphate (β-TCP) has great potential for application in temporary implants as a resorbable bioceramic. This material presents a limitation in its sintering temperature due to the occurrence of the allotropic transformation β→α at temperatures around 1200 °C, which prevents the attainment of dense ceramic bodies. This transformation also causes cracks, which diminish the mechanical strength, limiting its use to applications with low mechanical demands. This work studies the influence of the addition of manganese oxide on the sintering of β-TCP. Two processing routes were investigated. The first was the conventional powder metallurgy process, in which the test bodies (samples) were pressed and sintered at temperatures of 1200 and 1250 °C. The second route was uniaxial hot pressing, with the objective of obtaining samples with high relative density. The samples were physically characterized through density and porosity measurements. The thermal behavior was studied through dilatometric, differential thermal, and thermogravimetric analyses. The mechanical properties were characterized by the three-point flexure test and Vickers microhardness measurements. The microstructure was analyzed by scanning electron microscopy. The addition of manganese oxide improved the mechanical strength in relation to the material without the additive and promoted the stabilization of β-TCP up to higher temperatures.
Abstract:
This work, entitled "The transcendental arguments: Kant and Hume's problem", has as its main objective to interpret Kant's answer to Hume's problem in light of the conjunction of the themes of causality and induction, which amounts to a skeptical-naturalist reading of the latter. In this sense, this initiative complements the treatment given in our master's dissertation, where the same issue was discussed from the merely skeptical reading that Kant made of Hume's thought and only causality was examined. Among the specific objectives, we list the following: a) that critical philosophy fulfills three basic functions, a founding one, a negative one, and one that defends the practical use of reason, here called defensive; b) that the Kantian solution of Hume's problem in the first Critique fulfills the founding and negative functions of the critique of reason; c) that the Kantian treatment of the theme of induction in the other Critiques fulfills the defensive function of the critique of reason; d) that the evidence for Kant's answer to Hume's problem is more consistent when these three functions or moments of criticism are satisfied. The basic structure of the work consists of three parts. In the first, the genesis of Hume's problem, our intention is to reconstruct Hume's problem, analyzing it from the perspective of the two definitions of cause, where the dilution of the first definition into the second corresponds to the reduction of knowledge to psychological probability, following the so-called naturalization of causal relations. In the second, Legality and Causality, it is argued that, when Hume is considered under the skeptical-naturalist option, Kant is not entitled to respond with the transcendental argument AB; A⊢B of the second Analogy, a view that is rooted in the position of contemporary thinkers such as Strawson and Allison. In the third part, Purpose and Induction, it is admitted that Kant responds to Hume at the level of the regulative use of reason, although the development of this argument exceeds the limits of the founding function of criticism. This is articulated in both the Introduction and the Concluding Remarks by meeting the defensive [and negative] function of criticism. In this context, based on the use of the so-called transcendental arguments that run throughout the critical trilogy, we provide a solution to an issue that recurs at several points in our text, concerning the existence and/or necessity of empirical causal laws. In this light, our thesis is that transcendental arguments only provide an apodictic solution to Hume's skeptical-naturalist problem when a practical project in which the interest of reason is ensured is at stake, as will, in short, be shown in our final considerations.
Abstract:
The aim of this work is to interpret and analyze the problem of induction from a perspective founded on set theory and probability theory, as a basis for resolving its negative philosophical implications for systems of inductive logic in general. Due to the importance of the problem and the relatively recent developments in these fields of knowledge (early 20th century), as well as the visible relations between them and the process of inductive inference, a field of relatively unexplored and promising possibilities has been opened. The key point of the study consists in modeling the information acquisition process using concepts of set theory, followed by a treatment using probability theory. Throughout the study, two major obstacles to probabilistic justification were identified: the problem of defining the concept of probability and that of rationality, as well as the subtle connection between the two. This finding called for greater care in choosing the criterion of rationality to be considered, in order to facilitate the treatment of the problem through specific situations without losing their original characteristics, so that the conclusions can be extended to classic cases such as the question of whether the sun will continue to rise.
Abstract:
Lithium (Li) is the chemical element with atomic number 3 and is among the lightest known elements in the universe. In general, lithium is found in nature in the form of two stable isotopes, 6Li and 7Li. The latter is dominant and accounts for about 93% of the Li found in the Universe. Due to its fragility, this element is widely used in astrophysics, especially for understanding the physical processes that have occurred since the Big Bang, through the evolution of galaxies and stars. In primordial nucleosynthesis at the time of the Big Bang (BBN), theoretical calculations predict the production of Li along with other light elements such as deuterium and beryllium. For Li, BBN theory predicts a primordial abundance of log ǫ(Li) = 2.72 dex on a logarithmic scale relative to H. The Li abundance found in metal-poor stars, or Pop II stars, is taken as the primordial Li abundance and is measured as log ǫ(Li) = 2.27 dex. In the ISM (interstellar medium), which reflects the current value, the lithium abundance is log ǫ(Li) = 3.2 dex. This value is of great importance for our comprehension of the chemical evolution of the galaxy. The process responsible for the increase from the primordial Li value is still not clearly understood. In fact, there is a real contribution of Li from low-mass giant stars, and this contribution needs to be well quantified if we want to understand our galaxy. The main obstacle in this logical sequence is the appearance of some low-mass giant stars of G and K spectral types whose atmospheres are highly enriched in Li. Such elevated values are exactly the opposite of what is expected for typical low-mass giants, in which the convective envelope deepens in mass so that all the Li should be diluted, yielding abundances around log ǫ(Li) ∼ 1.4 dex according to stellar evolution models. Three suggestions are found in the literature that try to reconcile the theoretical and observed Li abundances in these Li-rich giants, but none of them provides a conclusive answer. In the present work, we propose a qualitative study of the evolutionary state of the Li-rich stars in the literature, together with the recently discovered first Li-rich star observed by the Kepler satellite. The main objective of this work is to promote a solid discussion about the evolutionary state based on the characteristics obtained from the seismic analysis of the object observed by Kepler. We used evolutionary tracks and simulations performed with the population synthesis code TRILEGAL in order to evaluate as precisely as possible the evolutionary state and internal structure of this group of stars. The results indicate a characteristic time for the enrichment of these stars that is very short compared to the evolutionary timescale.
Abstract:
This work presents improvement strategies for a successful evolutionary metaheuristic for solving the Asymmetric Traveling Salesman Problem, namely a memetic algorithm designed specifically for this problem. The improvement applies the optimization techniques known as Path-Relinking and Vocabulary Building. The latter is used in two different ways, in order to evaluate the effects of the improvement on the evolutionary metaheuristic. These methods were implemented in C++ and the experiments were run on instances from the TSPLIB library; it was possible to observe that the proposed procedures were successful in the tests performed.
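For context, a minimal sketch of the Path-Relinking idea on permutation-encoded tours is shown below; the names and the position-fixing move are illustrative assumptions, not the C++ implementation described in the work:

```python
# Illustrative Path-Relinking between two tours: walk from the initiating tour
# toward the guiding tour, keeping the best intermediate tour found on the path.
def path_relinking(initiating, guiding, tour_cost):
    current = list(initiating)
    best, best_cost = list(current), tour_cost(current)
    for i in range(len(current)):
        if current[i] != guiding[i]:
            # Swap so that position i matches the guiding solution.
            j = current.index(guiding[i])
            current[i], current[j] = current[j], current[i]
            cost = tour_cost(current)
            if cost < best_cost:
                best, best_cost = list(current), cost
    return best, best_cost

# Tiny usage example with a dummy cost: sum of absolute differences of consecutive cities.
cost = lambda tour: sum(abs(tour[i] - tour[(i + 1) % len(tour)]) for i in range(len(tour)))
print(path_relinking([0, 2, 1, 3], [0, 1, 2, 3], cost))  # ([0, 1, 2, 3], 6)
```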
Abstract:
Biodiesel is a fuel made up of mono-alkyl esters of long-chain fatty acids, derived from vegetable oils or animal fat. This fuel can be used in compression-ignition engines for automotive propulsion or energy generation, as a partial or total substitute for fossil diesel fuel. Biodiesel can be produced by different mechanisms. Transesterification is the most common process for obtaining biodiesel, in which an ester compound reacts with an alcohol to form a new ester and a new alcohol. These reactions are normally catalyzed by the addition of an acid or a base. Initially, the physicochemical properties of sunflower, castor, and soybean oils were determined according to standard test methods, to evaluate whether they were suitable for use as raw material in the transesterification reaction. Sunflower, castor, and soybean biodiesel were obtained by the methylic transesterification route in the presence of KOH and presented yields above 93% m/m. The sunflower/castor and soybean/castor blends were studied with the aim of evaluating the thermal and oxidative stability of the biofuels. The biodiesel samples and blends were characterized by acid value, iodine value, density, flash point, sulfur content, and methanol and ester content by gas chromatography (GC). Studies of thermal and oxidative stability were also carried out by thermogravimetry (TG), pressurized differential scanning calorimetry (P-DSC, dynamic exothermic method), and the Rancimat method. Sunflower and soybean biodiesel conformed to the specifications established by ANP Resolution No. 7/2008. Biodiesel from castor oil, as expected, showed high density and kinematic viscosity. For the blends studied, increasing the concentration of castor biodiesel increased the density, kinematic viscosity, and flash point. The addition of castor biodiesel as an antioxidant in sunflower and soybean biodiesel is promising, as it significantly improves resistance to autoxidation and therefore oxidative stability. The blends that complied with the ANP requirements were those in the 20-40% range. In this form, they may be used as a partial substitute for fossil diesel.
Abstract:
The Car Rental Salesman Problem (CaRS) is a variant of the classical Traveling Salesman Problem, not previously described in the literature, in which the tour of visits can be decomposed into contiguous paths that may be traveled in different rental cars. The aim is to determine the Hamiltonian cycle that results in the minimum final cost, considering the cost of the route added to the penalty expected to be paid for each exchange of vehicles on the route. This penalty is due to returning the dropped-off car to its base. This work introduces the general problem and illustrates some examples, also presenting some of its associated variants. An overview of the complexity of this combinatorial problem is also outlined, to justify its classification in the NP-hard class. A database of instances for the problem is presented, describing the methodology of its construction. The problem is also the subject of an experimental study based on the algorithmic implementation of six metaheuristic solutions, representing adaptations of the best state-of-the-art heuristic programming. New neighborhoods, construction procedures, search operators, evolutionary agents, and multi-pheromone cooperation are created for this problem. Furthermore, computational experiments and comparative performance tests are conducted on a sample of 60 instances of the created database, aiming to offer an efficient algorithm for this problem. The results show that the transgenetic algorithm reached the best performance in all instances of the dataset.
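A hedged sketch of the objective described above, in assumed notation not taken from the thesis (K is the set of rental cars used, P_k the contiguous path driven with car k, c^k_{ij} the travel cost of edge (i,j) with car k, and γ_k the penalty of returning car k to its base after an exchange):

```latex
\min \; \underbrace{\sum_{k \in K} \; \sum_{(i,j) \in P_k} c^{k}_{ij}}_{\text{route cost}}
\;+\;
\underbrace{\sum_{k \in K_{\mathrm{exch}}} \gamma_k}_{\text{car-return penalties}}
```

Here K_exch denotes the cars actually exchanged along the route, in line with the per-exchange penalty described in the abstract.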
Abstract:
The Quadratic Minimum Spanning Tree Problem (QMST) is a version of the Minimum Spanning Tree Problem in which, besides the traditional linear costs, there is a quadratic cost structure. This quadratic structure models interaction effects between pairs of edges. Linear and quadratic costs are added up to constitute the total cost of the spanning tree, which must be minimized. When these interactions are restricted to adjacent edges, the problem is named the Adjacent Only Quadratic Minimum Spanning Tree Problem (AQMST). AQMST and QMST are NP-hard problems that model several problems of transport and distribution network design. In general, AQMST arises as a more suitable model for real problems. Although in the literature linear and quadratic costs are added together, in real applications they may be conflicting, in which case it may be interesting to consider these costs separately. In this sense, Multiobjective Optimization provides a more realistic model for QMST and AQMST. A review of the state of the art found no papers addressing these problems from a biobjective point of view. Thus, the objective of this Thesis is the development of exact and heuristic algorithms for the Biobjective Adjacent Only Quadratic Spanning Tree Problem (bi-AQST). As theoretical foundation, other NP-hard problems directly related to bi-AQST are discussed: the QMST and AQMST problems. Backtracking and branch-and-bound exact algorithms are proposed for the target problem of this investigation. The heuristic algorithms developed are: Pareto Local Search, Tabu Search with ejection chains, a Transgenetic Algorithm, NSGA-II, and a hybridization of the last two proposals called NSTA. The proposed algorithms are compared to each other through a performance analysis based on computational experiments with instances adapted from the QMST literature. With regard to the exact algorithms, the analysis considers, in particular, the execution time. In the case of the heuristic algorithms, besides execution time, the quality of the generated approximation sets is evaluated using quality indicators. Appropriate statistical tools are used to measure the performance of the exact and heuristic algorithms. Considering the set of instances adopted as well as the criteria of execution time and quality of the generated approximation sets, the experiments showed that the Tabu Search with ejection chains obtained the best results and the transgenetic algorithm ranked second. The PLS algorithm obtained good-quality solutions, but at a very high computational cost compared to the other (meta)heuristics, placing third. The NSTA and NSGA-II algorithms came last.
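A hedged sketch of the cost structure described above, in assumed notation (c_e for the linear cost of edge e, q_{ef} for the interaction cost of the edge pair {e, f}, and x_e ∈ {0,1} indicating whether edge e belongs to the spanning tree):

```latex
\min \; \sum_{e \in E} c_e\, x_e \;+\; \sum_{e \in E} \sum_{\substack{f \in E \\ f \neq e}} q_{ef}\, x_e\, x_f
```

In the AQMST the interaction terms q_{ef} are nonzero only for adjacent edge pairs, and in the biobjective bi-AQST the linear and quadratic terms are treated as two separate objectives rather than summed.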
Abstract:
Due to the great difficulty of solving Combinatorial Optimization Problems exactly, several heuristic methods have been developed, and for many years the performance analysis of these approaches was not carried out in a systematic way. The proposal of this work is to perform a statistical analysis of heuristic approaches to the Traveling Salesman Problem (TSP). The focus of the analysis is to evaluate the performance of each approach in terms of the computational time required to reach the optimal solution for a given TSP instance. Survival Analysis was used, supported by methods for testing the hypothesis of equality between survival functions. The evaluated approaches were divided into three classes: Lin-Kernighan algorithms, Evolutionary Algorithms, and Particle Swarm Optimization. In addition to those approaches, the analysis included a memetic algorithm (for symmetric and asymmetric TSP instances) that uses the Lin-Kernighan heuristic as its local search procedure.
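For readers unfamiliar with the survival-analysis framing, the minimal sketch below (illustrative only, not the code used in the work) estimates a Kaplan-Meier curve for the probability that a heuristic has not yet reached the optimum by time t, treating runs stopped before reaching it as censored:

```python
# Minimal Kaplan-Meier estimator: times are run times (e.g. seconds), events are
# 1 if the optimum was reached and 0 if the run was stopped first (censored).
def kaplan_meier(times, events):
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    survival, curve = 1.0, []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = at_this_time = 0
        while i < len(data) and data[i][0] == t:
            at_this_time += 1
            deaths += data[i][1]
            i += 1
        if deaths:
            survival *= 1.0 - deaths / n_at_risk
        curve.append((t, survival))
        n_at_risk -= at_this_time
    return curve

# Example: times to optimum for one heuristic, with two runs censored at 300 s.
print(kaplan_meier([12, 40, 75, 300, 300], [1, 1, 1, 0, 0]))
# [(12, 0.8), (40, 0.6), (75, 0.4), (300, 0.4)]
```

Hypothesis tests for the equality of two such survival functions (e.g. a log-rank test) then compare algorithms on their full time-to-optimum distributions rather than on mean times alone.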
Abstract:
Interval arithmetic, well known as Moore arithmetic, does not possess the same properties as the real numbers, and for this reason it faces a problem of an operational nature when we want to solve interval equations, as extensions of real equations, using the usual equality and interval arithmetic: interval arithmetic has no additive inverse, and the distributivity of multiplication over addition does not hold for every triple of intervals. The lack of these properties prevents the use of equational logic, both for solving an interval equation and for representing a real equation, as well as for the algebraic verification of properties of a computational system whose data are real numbers represented by intervals. However, with the notions of information order and approximation on intervals, introduced by Acióly [6] in 1991, the idea arises of an interval equation that represents a real equation satisfactorily, since the terms of the interval equation carry information about the solution of the real equation. In 1999, Santiago proposed the notion of simple equality and, later, of local equality for intervals [8] and [33]. Based on that idea, this dissertation extends Santiago's local groups to local algebras, following the idea of Σ-algebras according to Hennessy [31] (1988) and Santiago [7] (1995). One of the contributions of this dissertation is Theorem 5.1.3.2, which guarantees that, when a local Σ-equation t ≈ t′ is deduced from E in the proposed system SDedLoc(E), the interpretations of t and t′ are locally equal in any local Σ-algebra A that satisfies the fixed set of local equations E, whenever t and t′ have meaning in A. This ensures a kind of soundness between local equational logic and local algebras.
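The two missing properties mentioned above are easy to observe directly; the sketch below (an illustration, not part of the dissertation) implements Moore interval addition, negation, and multiplication and exhibits both failures:

```python
# Moore interval arithmetic on closed intervals [lo, hi]: X + (-X) is not the
# additive identity [0, 0], and distributivity of * over + can fail.
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi
    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)
    def __neg__(self):
        return Interval(-self.hi, -self.lo)
    def __mul__(self, other):
        products = [self.lo * other.lo, self.lo * other.hi,
                    self.hi * other.lo, self.hi * other.hi]
        return Interval(min(products), max(products))
    def __repr__(self):
        return f"[{self.lo}, {self.hi}]"

X = Interval(1, 2)
print(X + (-X))        # [-1, 1]: no additive inverse, since this is not [0, 0]

A, B, C = Interval(1, 2), Interval(1, 1), Interval(-1, -1)
print(A * (B + C))     # [0, 0]
print(A * B + A * C)   # [-1, 1]: distributivity fails (only subdistributivity holds)
```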