49 results for n-body problem
Abstract:
This work presents a hybrid approach to the supplier selection problem in Supply Chain Management. We combined the decision-making perspectives of business-school and engineering researchers in order to address the problem more comprehensively. We used traditional multicriteria decision-making methods, such as AHP and TOPSIS, to evaluate alternatives according to the decision maker's preferences. Both techniques were modeled using definitions from Fuzzy Set Theory in order to deal with imprecise data. Additionally, we proposed a multiobjective GRASP algorithm to perform an order-allocation procedure among the pre-selected alternatives. These alternatives must be pre-qualified on the basis of the AHP and TOPSIS methods before entering the LCR. Our allocation procedure presented low CPU times for five pseudorandom instances containing up to 1000 alternatives, as well as good values for all considered objectives. We therefore consider the proposed model appropriate for solving the supplier selection problem in the SCM context. It can help decision makers reduce lead times, costs and risks in their supply chains. According to decision makers, the proposed model can also improve a firm's efficiency with respect to business strategies, even when a large number of alternatives must be considered, unlike classical models in the purchasing literature.
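As a concrete illustration of what such an order-allocation step might look like, the sketch below shows one greedy-randomized (GRASP-style) construction pass over pre-qualified suppliers. It is a minimal, hypothetical example: the cost/capacity model, the single-objective greedy criterion and the RCL size are assumptions for illustration, not the multiobjective procedure of the thesis.

```cpp
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

// Hypothetical pre-qualified supplier (e.g., already ranked by fuzzy AHP/TOPSIS).
struct Supplier {
    double unitCost;   // cost per unit ordered (illustrative single criterion)
    double capacity;   // maximum quantity this supplier can deliver
};

// One GRASP construction pass: repeatedly pick a supplier at random from a
// restricted candidate list (RCL) of the cheapest remaining suppliers and
// allocate as much of the outstanding demand to it as its capacity allows.
std::vector<double> graspAllocate(std::vector<Supplier> suppliers,
                                  double demand, std::size_t rclSize,
                                  std::mt19937& rng) {
    std::vector<double> order(suppliers.size(), 0.0);
    std::vector<std::size_t> open(suppliers.size());
    for (std::size_t i = 0; i < open.size(); ++i) open[i] = i;

    while (demand > 0.0 && !open.empty()) {
        // Sort the still-open suppliers by unit cost (greedy criterion).
        std::sort(open.begin(), open.end(), [&](std::size_t a, std::size_t b) {
            return suppliers[a].unitCost < suppliers[b].unitCost;
        });
        std::size_t limit = std::min(rclSize, open.size());
        std::uniform_int_distribution<std::size_t> pick(0, limit - 1);
        std::size_t chosen = open[pick(rng)];   // randomized choice inside the RCL

        double q = std::min(demand, suppliers[chosen].capacity);
        order[chosen] += q;
        demand -= q;
        open.erase(std::remove(open.begin(), open.end(), chosen), open.end());
    }
    return order;  // quantity allocated to each pre-qualified supplier
}
```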
Abstract:
The objective of the facility location problem with limited distances is to minimize the sum of distance functions from the facility to the customers, but with a limit on each distance, after which the corresponding function becomes constant. The problem has applications in situations where the service provided by the facility is insensitive beyond a given threshold distance (e.g., fire station location). In this work, we propose a global optimization algorithm for the case in which there are lower and upper limits on the number of customers that can be served.
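For reference, the problem described above can be written schematically as follows; the notation (facility x, customer locations a_i, thresholds λ_i, cardinality bounds l and u) is introduced here for illustration, and the constraint reflects one common way of counting "served" customers, not necessarily the exact formulation of the thesis.

```latex
% Facility location with limited distances (schematic formulation).
%   x        : facility location in the plane
%   a_i      : location of customer i, d(x, a_i) its distance to the facility
%   lambda_i : threshold distance beyond which service to i is insensitive
%   l, u     : lower and upper limits on the number of customers served
\min_{x \in \mathbb{R}^2} \; \sum_{i=1}^{n} \min\{\, d(x, a_i), \; \lambda_i \,\}
\qquad \text{s.t.} \qquad
l \;\le\; \bigl|\{\, i : d(x, a_i) \le \lambda_i \,\}\bigr| \;\le\; u
```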
Abstract:
Combinatorial Optimization is a fundamental area for companies seeking competitive advantages in the diverse productive sectors, and the Asymmetric Travelling Salesman Problem, which ranks among the most important problems in this area because it belongs to the NP-hard class and has diverse practical applications, has increased researchers' interest in developing ever more efficient metaheuristics to assist in its resolution. This is the case of Memetic Algorithms, evolutionary algorithms that combine genetic operators with a local search procedure. This work explores the Viral Infection technique in a Memetic Algorithm, where the infection replaces the mutation operator in order to obtain fast evolution or extinction of species (KANOH et al., 1996), providing a way of accelerating and improving the solution. To this end, four variants of Viral Infection were developed and applied to the Memetic Algorithm for solving the Asymmetric Travelling Salesman Problem, in which the agent and the virus undergo a symbiosis process, yielding a hybrid evolutionary algorithm that is computationally viable.
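To make the idea concrete, the fragment below sketches one hypothetical viral-infection move for tour representations: a short "viral" sub-tour is transcribed into the host tour in place of a mutation. It is a simplified illustration, not one of the four variants developed in the work.

```cpp
#include <cstddef>
#include <unordered_set>
#include <vector>

// Hypothetical viral-infection operator: instead of a random mutation, a short
// "virus" (a sub-tour of cities, e.g. taken from a good solution) is transcribed
// into the host tour. The cities carried by the virus are removed from the host
// and the whole viral segment is re-inserted at a given position, preserving its
// internal order. Assumes every viral city also appears in the host tour.
std::vector<int> infect(const std::vector<int>& host,
                        const std::vector<int>& virus,
                        std::size_t insertPos) {
    std::unordered_set<int> viral(virus.begin(), virus.end());

    // Keep the host cities that are not part of the virus, in their original order.
    std::vector<int> stripped;
    for (int city : host)
        if (viral.find(city) == viral.end()) stripped.push_back(city);

    // Splice the viral segment into the stripped host tour.
    if (insertPos > stripped.size()) insertPos = stripped.size();
    std::vector<int> offspring(stripped.begin(), stripped.begin() + insertPos);
    offspring.insert(offspring.end(), virus.begin(), virus.end());
    offspring.insert(offspring.end(), stripped.begin() + insertPos, stripped.end());
    return offspring;  // a valid permutation of the host's cities
}
```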
Abstract:
Combinatorial optimization problems have attracted a large number of researchers seeking approximate solutions, since it is generally accepted that such problems cannot be solved in polynomial time. Initially, these solutions were based on heuristics. Currently, metaheuristics are more commonly used for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of what is called an "Operon" heuristic for constructing the information chains necessary for the implementation of transgenetic (evolutionary) algorithms, based mainly on statistical methodology - Cluster Analysis and Principal Component Analysis; and the use of statistical analyses that are adequate for evaluating the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains to promote an "intelligent" search in the solution space. The Traveling Salesman Problem (TSP) is used as the application for a transgenetic algorithm known as ProtoG. A strategy is also proposed for renewing part of the chromosome population, triggered by a minimum limit on the coefficient of variation of the fitness function of the individuals, computed over the population. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms and a Simulated Annealing algorithm. Three performance analyses of these algorithms are proposed. The first is carried out through Logistic Regression, based on the probability of the algorithm under test finding an optimal solution for a TSP instance. The second is carried out through Survival Analysis, based on the probability distribution of the execution time observed until an optimal solution is reached. The third is carried out by means of a non-parametric Analysis of Variance, considering the Percent Error of the Solution (PES), the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted, applied to sixty-one Euclidean TSP instances with sizes of up to 1,655 cities. The first two experiments deal with the tuning of four parameters used in the ProtoG algorithm in an attempt to improve its performance. The last four were undertaken to evaluate the performance of ProtoG in comparison with the three algorithms adopted. For these sixty-one instances, statistical tests provide evidence that ProtoG performs better than these three algorithms on fifty instances. In addition, for the thirty-six instances considered in the last three trials, in which the performance of the algorithms was evaluated through the PES, the average PES obtained with ProtoG was less than 1% in almost half of these instances, reaching its largest average, equal to 3.52%, for an instance of 1,173 cities. Therefore, ProtoG can be considered a competitive algorithm for solving the TSP, since it is not rare to find average PES values greater than 10% reported in the literature for instances of this size.
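Writing z for the tour length found by an algorithm and z* for the best solution available in the literature, the PES described above can be stated as the formula below (a direct reading of the definition given in the abstract):

```latex
% Percent Error of the Solution (PES): how far, in percent, the obtained
% solution z lies above the best known solution z* for the instance.
\mathrm{PES} \;=\; 100 \times \frac{z - z^{*}}{z^{*}}
```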
Abstract:
Metaheuristic techniques are known to solve optimization problems classified as NP-complete and are successful in obtaining good-quality solutions. They use non-deterministic approaches to generate near-optimal solutions, without any guarantee of finding the global optimum. Motivated by the difficulties in solving these problems, this work proposes the development of parallel hybrid methods using reinforcement learning and the metaheuristics GRASP and Genetic Algorithms. With these techniques, we aim to improve the efficiency of obtaining good solutions. Instead of using the Q-learning reinforcement learning algorithm merely as a technique for generating the initial solutions of the metaheuristics, we use it in a cooperative and competitive approach with the Genetic Algorithm and GRASP, in a parallel implementation. In this context, it was possible to verify that the implementations developed in this study produced satisfactory results under both strategies, that is, cooperation and competition between the algorithms and between groups. For some instances the global optimum was found; for others, the implementations came close to it. A performance analysis of the proposed approach was also carried out, and it shows good results with respect to the requirements that demonstrate the efficiency and speedup (the gain in speed obtained with parallel processing) of the implementations.
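The common ingredient of these hybrids is the Q-learning update rule. The sketch below is a minimal tabular version, with illustrative state/action indexing and parameter values that are assumptions of this example rather than the settings used in the work.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Minimal tabular Q-learning, as used conceptually when the learned Q-table
// guides (or competes with) GRASP and Genetic Algorithm searches.
// States and actions are plain indices; alpha is the learning rate and
// gamma the discount factor (both illustrative values).
struct QLearner {
    std::vector<std::vector<double>> q;  // q[state][action]
    double alpha = 0.1, gamma = 0.9;

    QLearner(std::size_t nStates, std::size_t nActions)
        : q(nStates, std::vector<double>(nActions, 0.0)) {}

    // One learning step after taking `action` in `state`, observing `reward`
    // and landing in `nextState` (assumes at least one action per state).
    void update(std::size_t state, std::size_t action, double reward,
                std::size_t nextState) {
        double best = *std::max_element(q[nextState].begin(), q[nextState].end());
        q[state][action] += alpha * (reward + gamma * best - q[state][action]);
    }
};
```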
Abstract:
We revisit the visibility problem, which is to determine the set of primitives potentially visible in a set of geometry data represented by a data structure such as a mesh of polygons or triangles, and we propose a solution for speeding up three-dimensional visualization processing in applications. We introduce a lean structure, in the sense of data abstraction and reduction, which can be used for online and interactive applications. The visibility problem is especially important in the 3D visualization of scenes represented by large volumes of data, when it is not worthwhile to keep all polygons of the scene in memory; doing so implies more time spent in rendering, or is even impossible for huge volumes of data. In these cases, given a viewing position and direction, the main objective is to determine and load a minimum amount of primitives (polygons) of the scene in order to accelerate the rendering step. For this purpose, our algorithm performs primitive culling using a hybrid paradigm based on three known techniques. The scene is divided into a grid of cells, each cell is associated with the primitives that belong to it, and finally the set of potentially visible primitives is determined. The novelty is the use of the Ja1 triangulation to create the subdivision grid. We chose this structure because of its relevant characteristics of adaptivity and algebrism (ease of calculations). The results show a substantial improvement over the traditional methods when applied separately. The method introduced in this work can be used in devices with low or no dedicated processing power, running only on the CPU, and can also be used to view data over the Internet, as in virtual museum applications.
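The cell-grid stage can be pictured in a few lines of code: each cell stores the primitives it contains, and only cells passing a visibility test contribute to the potentially visible set. The test below (a crude distance-and-direction check) is a simplified stand-in for the hybrid culling and the Ja1-based subdivision described above.

```cpp
#include <cmath>
#include <unordered_set>
#include <vector>

struct Vec3 { float x, y, z; };

// One cell of the spatial subdivision: its center and the indices of the
// primitives (triangles/polygons) that fall inside it.
struct Cell {
    Vec3 center;
    std::vector<int> primitives;
};

// Gather a potentially visible set (PVS): keep the primitives of every cell
// that lies within `maxDist` of the viewer and roughly in front of the view
// direction. This is an illustrative stand-in, not the thesis' actual algorithm.
std::unordered_set<int> potentiallyVisible(const std::vector<Cell>& grid,
                                           Vec3 eye, Vec3 viewDir,
                                           float maxDist) {
    std::unordered_set<int> pvs;
    for (const Cell& c : grid) {
        Vec3 d{c.center.x - eye.x, c.center.y - eye.y, c.center.z - eye.z};
        float dist = std::sqrt(d.x * d.x + d.y * d.y + d.z * d.z);
        float facing = d.x * viewDir.x + d.y * viewDir.y + d.z * viewDir.z;
        if (dist <= maxDist && facing >= 0.0f)   // crude distance/direction cull
            pvs.insert(c.primitives.begin(), c.primitives.end());
    }
    return pvs;  // indices of primitives to load/render
}
```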
Abstract:
In this work, a new online algorithm for solving the k-Server Problem (KSP) is proposed. Its performance is compared with that of other algorithms from the literature, namely the Harmonic and Work Function algorithms, which have been shown to be competitive, making them meaningful benchmarks. An algorithm that performs efficiently with respect to them tends to be competitive as well, although this obviously has to be proven; such a proof, however, is beyond the scope of the present work. The algorithm presented for solving the KSP is based on reinforcement learning techniques. To this end, the problem was modeled as a multi-stage decision process, to which the Q-learning algorithm, one of the most popular methods for establishing optimal policies in this kind of decision problem, is applied. It should be noted, however, that the size of the storage structure used by reinforcement learning to obtain the optimal policy grows with the number of states and actions, which in turn is proportional to the number n of nodes and k of servers. An analysis of this growth shows that it is exponential, limiting the application of the method to smaller problems, in which the number of nodes and servers is small. This problem, known as the curse of dimensionality, was introduced by Bellman and implies that an algorithm cannot be executed for certain instances of a problem because the computational resources required to produce its output are exhausted. To prevent the proposed solution, based exclusively on reinforcement learning, from being restricted to small applications, an alternative solution is proposed for more realistic problems involving a larger number of nodes and servers. This alternative solution is hierarchical and uses two methods for solving the KSP: reinforcement learning, applied to a reduced number of nodes obtained through an aggregation process, and a greedy method, applied to the subsets of nodes resulting from the aggregation, in which the criterion for scheduling the servers is the smallest distance to the demand location.
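The greedy component of the hierarchical solution — move the server closest to the demand point — admits a very short sketch; the version below is illustrative, assuming a precomputed all-pairs distance matrix and at least one server.

```cpp
#include <cstddef>
#include <vector>

// Greedy rule used at the lower level of the hierarchical approach: when a
// demand arrives at node `demand`, move the server whose current node is
// closest to it. `dist` is a precomputed all-pairs distance matrix and
// `servers` holds the current node of each of the k servers.
std::size_t serveGreedy(const std::vector<std::vector<double>>& dist,
                        std::vector<std::size_t>& servers,
                        std::size_t demand) {
    std::size_t best = 0;
    for (std::size_t s = 1; s < servers.size(); ++s)
        if (dist[servers[s]][demand] < dist[servers[best]][demand])
            best = s;
    servers[best] = demand;   // the chosen server moves to the demand node
    return best;              // index of the server that answered the request
}
```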
Abstract:
Emerald mining is an important sector of the economy in Brazil, a country that ranks second among the exporting nations of this gem. Due to the extraction process, a great amount of reject is generated. Since there is no appropriate destination for it, the reject is abandoned around the mining sites, contributing to environmental degradation. Nowadays, some of the most relevant concerns for industry in general are energy conservation, cost reduction, and quality and productivity enhancement. The production of insulating refractory materials achieves the sustainability dimension when environmental protection is incorporated into the process. This work investigates the use of emerald mining rejects in the ceramic body of refractory materials, aiming to obtain a product whose characteristics are compatible with commercial products and, at the same time, to use such rejects to solve the environmental issue caused by their disposal in nature. X-ray fluorescence analysis shows that the emerald reject obtained after flotation to extract molybdenum and mica contains 70% silica and alumina (SiO2 + Al2O3) and 21% of a mixture of basic oxides of alkali and alkaline-earth metals (Na2O, K2O, CaO and MgO). Because of the significant amount of silica and alumina present in the reject, four refractory ceramic bodies were prepared. Rectangular samples with dimensions of 100 x 50 x 10 mm were pressed in a steel mold at 27.5 MPa and sintered at 1200 °C for 40 min under ambient atmosphere in a resistive furnace. The sintered samples were characterized with respect to chemical composition (XRF), mineralogical composition (XRD), microstructure (SEM) and physical and mechanical properties. The results indicate that the mixture with 45% reject, 45% alumina and 10% kaolin presents a refractoriness of 1420 °C, linear dimensional variation below 2.00%, apparent specific mass of 1.56 g/cm³ and porosity of 46.68%, which demonstrates the potential use of the reject as raw material for the insulating refractory materials industry.
Abstract:
The thermal gradient from the surface to the interior of a solid depends on the particle collision rate and on the thermal conductivity of the material. When a solid is immersed in a plasma, energy transfer occurs through radiation and through particle collisions on the material surface. Depending on the particle collision rate and on the thermal conductivity of the solid, there will be thermal gradients from the surface to the interior of the samples, with thermal spikes occurring at the surface, that is, localized heating in the collision regions. In order to study this effect, samples of AISI M35 high-speed steel, whose hardness values are strongly sensitive to the tempering temperature, were used as thermal micro-sensors. Samples were quenched in a resistive furnace; then part of them were tempered in a resistive furnace and the other part in plasma. From the plot of hardness (Hv) as a function of temperature (T) for the samples tempered in the resistive furnace, it was possible to obtain a function Hv(T) for the indirect determination of the thermal profile of the plasma-treated samples. The samples were plasma tempered using a reference temperature of 550 °C. The hardness profile of these samples was then obtained along the cross-section and, subsequently, the temperature profile. It was found that the plasma-treated samples, unlike those treated in the resistive furnace, exhibited a temperature gradient from the surface to the core. Furthermore, the samples treated in the planar configuration exhibited lower thermal gradients than those treated in the hollow-cathode configuration, ranging from 20 to 120 °C, respectively.
Abstract:
Polyurethanes are very versatile macromolecular materials that can be used in the form of powders, adhesives and elastomers. As a consequence, they constitute an important subject for research as well as outstanding materials used in several manufacturing processes. In addition to the search for new polyurethanes, kinetics control during their preparation is a very important topic, mainly if the polyurethane is obtained via bulk polymerization. The work in this thesis was directed towards this subject, particularly the synthesis of polyurethanes based on castor oil and isophorone diisocyanate. As a first step, castor oil was characterized using the following analytical methods: iodine index, saponification index, refractive index, moisture content and infrared absorption spectroscopy (FTIR). As a second step, test specimens of these polyurethanes were obtained via bulk polymerization and were submitted to swelling experiments with different solvents. From these experiments, the Hildebrand parameter was determined for this material. Finally, bulk polymerization was carried out in a differential scanning calorimetry (DSC) instrument, using different heating rates, under two conditions: without catalyst and with dibutyltin dilaurate (DBTDL) as catalyst. The DSC curves were fitted to a kinetic model using the isoconversional method, indicating the autocatalytic effect characteristic of this class of polymerization reactions.
Abstract:
The aim of this work is to describe the design and construction of a low-cost automated water sampler prototype. In recent years, there has been an increasing need for automated equipment to measure hydro-climatic variables in urban and rural environments. Such devices provide measured information that is of crucial importance for the development of water resources strategies at the watershed scale. Currently, many research and public water institutions use this kind of equipment. In most cases, automated equipment is expensive and needs to be imported, generating a situation of technological dependency. The prototype is based on an electronic system which controls the functioning of a peristaltic pump, five solenoid valves and an ultrasonic sensor connected to a datalogger. A user interface allows communication with a PC, through which the operating parameters of the equipment can be set. The equipment has a hydraulic module composed of a 12 V peristaltic pump connected to a distribution circuit with five solenoid valves, one of which is used to clean the circuit before each sampling procedure. Samples are collected in four 1.95 L polyethylene bottles. The sampler body was made of acrylic, with a cylindrical shape and dimensions of 0.72 m in height and 0.38 m in diameter. The weight of the equipment without samples is approximately 15 kg, which makes it portable. The total development cost of the prototype was approximately US$ 1,560.00. Laboratory tests carried out to evaluate the performance and functioning of the equipment showed satisfactory results.
Abstract:
This work, entitled "The transcendental arguments: Kant and Hume's problem", has as its main objective to interpret Kant's answer to Hume's problem in light of the conjunction of the themes of causality and induction, which corresponds to a skeptical-naturalist reading of the latter. In this sense, this initiative complements the treatment given in our dissertation, where the same issue had been discussed from a merely skeptical reading of what Kant took from Hume's thought, and only causality was examined. Among the specific objectives, we list the following: a) critical philosophy fulfills three basic functions: a founding one, a negative one, and one that defends the practical use of reason, here called defensive; b) the Kantian solution of Hume's problem in the first Critique would fulfill the founding and negative functions of the critique of reason; c) the Kantian treatment of the theme of induction in the other Critiques would fulfill the defensive function of the critique of reason; d) the evidence for Kant's answer to Hume's problem is more consistent when these three functions or moments of criticism are satisfied. The basic structure of the work consists of three parts. In the first - the genesis of Hume's problem - our intention is to reconstruct Hume's problem, analyzing it from the perspective of the two definitions of cause, where the dilution of the first definition into the second corresponds to the reduction of knowledge to psychological probability, following the so-called naturalization of causal relations. In the second - Legality and Causality - it is argued that, when Hume is considered under the skeptical-naturalist option, Kant is not entitled to respond with the transcendental argument A→B; A ⊢ B of the second Analogy, a position that is rooted in the views of contemporary thinkers such as Strawson and Allison. In the third part - Purpose and Induction - it is admitted that Kant responds to Hume at the level of the regulative use of reason, although the development of this argument exceeds the limits of the founding function of criticism; this is articulated in both the Introduction and the Concluding Remarks by fulfilling the defensive [and negative] function of criticism. In this context, based on the use of the so-called transcendental arguments that run throughout the critical trilogy, we provide a solution to a recurring issue that appears at several points in our text, concerning the "existence and/or necessity of empirical causal laws". In this light, our thesis is that transcendental arguments only provide an apodictic solution to Hume's skeptical-naturalist problem when a practical project is at stake in which the interest of reason is ensured, as will, in short, be shown in our final considerations.
Abstract:
The aim of this work is to interpret and analyze the problem of induction from a perspective founded on set theory and probability theory, as a basis for resolving its negative philosophical implications for systems of inductive logic in general. Due to the importance of the problem and the relatively recent developments in these fields of knowledge (early 20th century), as well as the visible relations between them and the process of inductive inference, a relatively unexplored and promising field of possibilities has been opened. The key point of the study consists in modeling the information acquisition process using concepts of set theory, followed by a treatment using probability theory. Throughout the study, two major obstacles to the probabilistic justification were identified: the problem of defining the concept of probability and that of defining rationality, as well as the subtle connection between the two. This finding called for greater care in choosing the criterion of rationality to be considered, in order to facilitate the treatment of the problem through specific situations, but without losing their original characteristics, so that the conclusions can be extended to classic cases such as the question about the continuity of the sunrise.
Abstract:
Lithium (Li) is a chemical element with atomic number 3 and is among the lightest known elements in the Universe. In general, lithium is found in nature in the form of two stable isotopes, 6Li and 7Li. The latter is the most abundant and accounts for about 93% of the Li found in the Universe. Due to its fragility, this element is widely used in astrophysics, especially for understanding the physical processes that have occurred since the Big Bang, through the evolution of galaxies and stars. In primordial nucleosynthesis at the moment of the Big Bang (BBN), theoretical calculations predict the production of Li along with other light elements such as deuterium and beryllium. For Li, BBN theory predicts a primordial abundance of log ε(Li) = 2.72 dex on a logarithmic scale relative to H. The Li abundance found in metal-poor stars, or Pop II stars, is taken as the primordial Li abundance and is measured as log ε(Li) = 2.27 dex. In the ISM (interstellar medium), which reflects the current value, the lithium abundance is log ε(Li) = 3.2 dex. This value is of great importance for our comprehension of the chemical evolution of the Galaxy. The process responsible for the increase from the primordial Li value is still not clearly understood. In fact, there is a real contribution of Li from low-mass giant stars, and this contribution needs to be well quantified if we want to understand our Galaxy. The main obstacle in this logical sequence is the appearance of some low-mass giant stars of G and K spectral types whose atmospheres are highly enriched in Li. Such elevated values are exactly the opposite of what is expected for the typical abundance of low-mass giants, whose convective envelopes deepen in mass so that all the Li should be diluted, yielding abundances around log ε(Li) ∼ 1.4 dex according to stellar evolution models. Three suggestions are found in the literature that try to reconcile the theoretical and observed Li abundances in these Li-rich giants, but none of them provides a conclusive answer. In the present work, we propose a qualitative study of the evolutionary state of the Li-rich stars in the literature, together with the recently discovered first Li-rich star observed by the Kepler satellite. The main objective of this work is to promote a solid discussion about the evolutionary state, based on the characteristics obtained from the seismic analysis of the object observed by Kepler. We used evolutionary tracks and simulations performed with the population synthesis code TRILEGAL, intending to evaluate as precisely as possible the evolutionary state and internal structure of these groups of stars. The results indicate a very short characteristic time, compared to the evolutionary scale, related to the enrichment of these stars.
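For reference, the abundance notation used above follows the standard astronomical scale relative to hydrogen; a minimal statement of this convention (a textbook definition, not taken from the thesis) is:

```latex
% Standard stellar abundance scale: number density of Li relative to H,
% normalized so that log eps(H) = 12 by definition.
\log \epsilon(\mathrm{Li}) \;=\; \log_{10}\!\left(\frac{N_{\mathrm{Li}}}{N_{\mathrm{H}}}\right) + 12
```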
Abstract:
This work presents improvement strategies for a successful evolutionary metaheuristic for solving the Asymmetric Traveling Salesman Problem, namely a Memetic Algorithm designed mainly for this problem. Basically, the improvement applies optimization techniques known as Path-Relinking and Vocabulary Building. Furthermore, the latter was used in two different ways in order to evaluate the effects of the improvement on the evolutionary metaheuristic. These methods were implemented in C++ and the experiments were run on instances from the TSPLIB library; the results show that the proposed procedures were successful in the tests performed.
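Of the two techniques, Path-Relinking has the most compact description: one tour is progressively transformed into a guiding tour, and the best intermediate solution found along the path is kept. The sketch below is a simplified illustration of this idea for tours stored as permutations of the same set of cities; it is not the implementation used in the work.

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

// Length of a tour under an asymmetric distance matrix (closing the cycle).
double tourLength(const std::vector<int>& tour,
                  const std::vector<std::vector<double>>& dist) {
    double len = 0.0;
    for (std::size_t i = 0; i < tour.size(); ++i)
        len += dist[tour[i]][tour[(i + 1) % tour.size()]];
    return len;
}

// Simplified Path-Relinking between two tours: step by step, force position i
// of the current tour to hold the city the guiding tour has there (by swapping
// it into place), evaluate each intermediate tour and keep the best one seen.
std::vector<int> pathRelink(std::vector<int> current,
                            const std::vector<int>& guiding,
                            const std::vector<std::vector<double>>& dist) {
    std::vector<int> best = current;
    double bestLen = tourLength(best, dist);

    for (std::size_t i = 0; i < current.size(); ++i) {
        if (current[i] == guiding[i]) continue;
        auto it = std::find(current.begin() + i, current.end(), guiding[i]);
        std::iter_swap(current.begin() + i, it);       // one relinking move
        double len = tourLength(current, dist);
        if (len < bestLen) { bestLen = len; best = current; }
    }
    return best;   // best intermediate solution found along the path
}
```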