892 results for Constructive heuristics
Abstract:
Combinatorial optimization problems have engaged a large number of researchers in the search for approximate solutions, since it is generally accepted that such problems cannot be solved in polynomial time. Initially, these efforts focused on heuristics; currently, metaheuristics are preferred for this task, especially those based on evolutionary algorithms. The two main contributions of this work are: the creation of a heuristic, called "Operon", for constructing the information chains necessary for the implementation of transgenetic (evolutionary) algorithms, relying mainly on statistical methodology, namely Cluster Analysis and Principal Component Analysis; and the use of statistical analyses appropriate for evaluating the performance of the algorithms developed to solve these problems. The aim of the Operon is to construct good-quality dynamic information chains that promote an "intelligent" search of the solution space. The approach is applied to the Traveling Salesman Problem (TSP) through a transgenetic algorithm known as ProtoG. A strategy is also proposed for renewing part of the chromosome population, triggered when the coefficient of variation of the individuals' fitness, computed over the population, falls below a minimum threshold. Statistical methodology is used to evaluate the performance of four algorithms: the proposed ProtoG, two memetic algorithms, and a Simulated Annealing algorithm. Three performance analyses are proposed. The first uses Logistic Regression, based on the probability that the algorithm under test finds an optimal solution for a TSP instance. The second uses Survival Analysis, based on the probability distribution of the execution time observed until an optimal solution is reached. The third uses a non-parametric Analysis of Variance on the Percent Error of the Solution (PES), the percentage by which the solution found exceeds the best solution available in the literature. Six experiments were conducted on sixty-one Euclidean TSP instances with up to 1,655 cities. The first two experiments deal with tuning four parameters of the ProtoG algorithm in an attempt to improve its performance. The last four evaluate the performance of ProtoG against the three algorithms adopted. For these sixty-one instances, statistical tests provide evidence that ProtoG outperforms the three algorithms on fifty instances. In addition, for the thirty-six instances considered in the last three experiments, in which performance was evaluated through PES, the mean PES obtained with ProtoG was below 1% for almost half of the instances, the largest mean being 3.52% for an instance of 1,173 cities. Therefore, ProtoG can be considered a competitive algorithm for solving the TSP, since mean PES values greater than 10% are not rarely reported in the literature for instances of this size.
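In the notation adopted here (the symbols are ours, not the thesis's), the two quantities that drive the evaluation and the population-renewal strategy described above can be written as

\[
\mathrm{PES} = 100\,\frac{c(s) - c(s^{*})}{c(s^{*})}\ \%,
\qquad
\mathrm{CV} = \frac{\sigma_f}{\bar{f}},
\]

where \(c(s)\) is the tour cost found by the algorithm, \(c(s^{*})\) is the best cost available in the literature, and \(\sigma_f\) and \(\bar{f}\) are the standard deviation and mean of the population's fitness; renewal of part of the population is triggered when \(\mathrm{CV}\) drops below the chosen minimum threshold.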
Abstract:
Frequency Selective Surfaces (FSS) are structures consisting of periodic arrays of conductive elements, called patches, which are usually very thin and printed on dielectric layers, or of apertures perforated in very thin metallic surfaces, for applications in the microwave and millimeter-wave bands. These structures are often used in aircraft, missiles, satellites, radomes, reflector antennas, high-gain antennas and microwave ovens, for example. Their main purpose is to filter frequency bands, which can be transmitted or rejected depending on the requirements of the application. In turn, modern communication systems such as GSM (Global System for Mobile Communications), RFID (Radio Frequency Identification), Bluetooth, Wi-Fi and WiMAX, whose services are in high demand by society, have required the development of antennas whose main features are low profile, low cost, and reduced dimensions and weight. In this context, the microstrip antenna is an excellent choice for today's communication systems because, besides intrinsically meeting the requirements mentioned, planar structures are easy to manufacture and to integrate with other microwave circuit components. Consequently, the analysis and synthesis of these devices, mainly due to the wide variety of possible shapes, sizes and operating frequencies of their elements, has been carried out with full-wave models such as the finite element method, the method of moments and the finite-difference time-domain method. These methods are accurate but demand great computational effort. In this context, computational intelligence (CI) has been used successfully in the design and optimization of planar microwave structures, as a very appropriate auxiliary tool, given the complexity of the geometry of the antennas and FSS considered. Computational intelligence is inspired by natural phenomena such as learning, perception and decision, using techniques such as artificial neural networks, fuzzy logic, fractal geometry and evolutionary computation. This work studies the application of computational intelligence, using metaheuristics such as genetic algorithms and swarm intelligence, to the optimization of antennas and frequency selective surfaces. Genetic algorithms are computational search methods based on Darwin's theory of natural selection and on genetics, used to solve complex problems, e.g., problems whose search space grows with the size of the problem. Particle swarm optimization is characterized by the use of collective intelligence and has been applied to optimization problems in many research areas. The main objective of this work is the use of computational intelligence in the analysis and synthesis of antennas and FSS. The structures considered were a ring-type planar microstrip monopole and a cross-dipole FSS. Optimization algorithms were developed and results obtained for the optimized geometries of the antennas and FSS considered. To validate the results, several prototypes were designed, built and measured. The measured results showed excellent agreement with the simulated ones. Moreover, the results obtained in this study were compared with simulations from commercial software, again with excellent agreement. Specifically, the efficiency of the CI techniques used is evidenced by simulated and measured results for two cases: optimizing the bandwidth of an antenna for wideband or UWB (Ultra Wideband) operation using a genetic algorithm, and optimizing the bandwidth of a pair of frequency selective surfaces by specifying the length of the air gap between them, using a particle swarm optimization algorithm.
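As a concrete illustration of the second case, the sketch below shows a minimal particle swarm optimizer over a single design variable (the air-gap length). The objective function is a hypothetical stand-in: in the work described, each fitness evaluation would come from a full-wave simulation or a measurement, not from a closed-form expression, and all parameter values here are assumptions.

```python
import random

# Minimal PSO sketch for a 1-D design variable (e.g., the air-gap length
# between two FSS layers, in mm). `neg_bandwidth` is a hypothetical smooth
# surrogate with an optimum near 12 mm; it stands in for an EM solver.

def neg_bandwidth(gap_mm):
    return (gap_mm - 12.0) ** 2

def pso(objective, lo, hi, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5):
    pos = [random.uniform(lo, hi) for _ in range(n_particles)]
    vel = [0.0] * n_particles
    pbest = pos[:]                                  # personal best positions
    pbest_val = [objective(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g], pbest_val[g]       # global best
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = random.random(), random.random()
            vel[i] = (w * vel[i]
                      + c1 * r1 * (pbest[i] - pos[i])
                      + c2 * r2 * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], lo), hi)   # keep in bounds
            val = objective(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i], val
    return gbest, gbest_val

best_gap, _ = pso(neg_bandwidth, lo=5.0, hi=25.0)
print(f"best air gap ~ {best_gap:.2f} mm")
```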
Abstract:
This work presents a theoretical and numerical analysis of the parameters of a rectangular microstrip antenna with a metamaterial substrate. Metamaterial (MTM) theory was applied together with the Transverse Transmission Line (LTT) method to characterize the substrate quantities and obtain the general equations of the electromagnetic fields. A study of metamaterial theory was conducted to obtain the constitutive parameters, which were characterized through permittivity and permeability tensors to arrive at a set of electromagnetic equations. From electromagnetic principles, parameters such as the complex resonant frequency, bandwidth and radiation pattern were then obtained. Different metamaterial and antenna configurations were simulated in order to miniaturize the antennas physically and increase their bandwidth, and the results are shown in graphs. The theoretical-computational analysis of this work proved accurate when compared to other studies, and may be used for other metamaterial devices. Conclusions and suggestions for future work are also presented.
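For reference, a generic form of the tensor characterization mentioned above is shown below; the entries are placeholders, and the actual tensors in the work depend on the metamaterial configuration adopted:

\[
\bar{\bar{\varepsilon}} = \varepsilon_0
\begin{pmatrix}
\varepsilon_{xx} & 0 & 0 \\
0 & \varepsilon_{yy} & 0 \\
0 & 0 & \varepsilon_{zz}
\end{pmatrix},
\qquad
\bar{\bar{\mu}} = \mu_0
\begin{pmatrix}
\mu_{xx} & 0 & 0 \\
0 & \mu_{yy} & 0 \\
0 & 0 & \mu_{zz}
\end{pmatrix},
\]

where, in a double-negative (DNG) metamaterial band, both \(\operatorname{Re}(\varepsilon_{ii})\) and \(\operatorname{Re}(\mu_{ii})\) are negative, one of the properties commonly exploited for physical miniaturization.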
Abstract:
In this work we present a new clustering method that groups the points of a data set into classes. The method is based on an algorithm that links auxiliary clusters obtained with traditional vector quantization techniques. Some approaches developed during the work are described, based on measures of distance or dissimilarity (divergence) between the auxiliary clusters. The new method requires only two pieces of a priori information: the number of auxiliary clusters Na and a threshold distance dt used to decide whether or not to link auxiliary clusters. The number of classes can be found automatically by the method, based on the chosen threshold distance dt, or it can be given as additional information to help in the choice of the correct threshold. Several analyses are carried out and the results are compared with traditional clustering methods. Different dissimilarity metrics are analyzed and a new one is proposed, based on the concept of negentropy. Besides grouping the points of a set into classes, a method is proposed for statistically modeling the classes, aiming to obtain an expression for the probability that a point belongs to one of the classes. Experiments with several values of Na and dt are performed on test sets, and the results are analyzed in order to study the robustness of the method and to devise heuristics for the choice of the correct threshold. Aspects of information theory applied to the calculation of the divergences are explored, specifically the different measures of information and divergence based on the Rényi entropy. The results obtained with the different metrics are compared and discussed. The work also includes an appendix presenting real applications of the proposed method.
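A minimal sketch of the linkage idea is given below, under simplifying assumptions: the auxiliary clusters come from a plain k-means pass, and the dissimilarity between clusters is the Euclidean distance between centroids (the thesis also studies divergence measures, e.g. based on the Rényi entropy, which are not reproduced here). All identifiers are ours.

```python
import math, random

def dist(a, b):
    # Euclidean distance; a stand-in for the dissimilarity measures studied.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def centroid(group):
    return tuple(sum(c) / len(group) for c in zip(*group))

def kmeans(points, na, iters=20):
    # Plain k-means as the vector quantization step producing Na clusters.
    cents = random.sample(points, na)
    for _ in range(iters):
        groups = [[] for _ in range(na)]
        for p in points:
            groups[min(range(na), key=lambda j: dist(p, cents[j]))].append(p)
        cents = [centroid(g) if g else cents[i] for i, g in enumerate(groups)]
    return cents

def link_clusters(cents, dt):
    # Union-find: link auxiliary clusters whose centroids are closer than dt.
    parent = list(range(len(cents)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i in range(len(cents)):
        for j in range(i + 1, len(cents)):
            if dist(cents[i], cents[j]) < dt:
                parent[find(i)] = find(j)
    return [find(i) for i in range(len(cents))]

def classify(points, na, dt):
    # Classes emerge as connected components of linked auxiliary clusters.
    cents = kmeans(points, na)
    comp = link_clusters(cents, dt)
    return [comp[min(range(na), key=lambda j: dist(p, cents[j]))]
            for p in points]

pts = ([(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(100)]
       + [(random.gauss(6, 1), random.gauss(6, 1)) for _ in range(100)])
labels = classify(pts, na=8, dt=2.5)
print("classes found:", len(set(labels)))
```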
Abstract:
This work presents a theoretical and numerical analysis of the radiation characteristics of a rectangular microstrip antenna with a metamaterial substrate. The full-wave analysis is performed in the Fourier transform domain through the application of the Transverse Transmission Line (TTL) method. A study of metamaterial theory was conducted to obtain the constitutive parameters, which were characterized through permittivity and permeability tensors to arrive at a set of electromagnetic equations. The general equations for the electromagnetic fields of the antenna are developed using the TTL method. Imposing the boundary conditions, the dyadic Green's function components are obtained, relating the surface current density components at the plane of the patch to the tangential electric field components. Galerkin's method is then used to obtain a system of matrix equations, whose solution gives the antenna's resonant frequency. From this modeling, it is possible to obtain numerical results for the resonant frequency and return loss for different configurations and substrates.
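Schematically, and in our own notation, the moment-method step described above takes the following form: the spectral-domain Green's function relates current and field, the current is expanded in basis functions, and testing with the same functions yields a homogeneous system whose nontrivial solutions define resonance:

\[
\tilde{\mathbf{E}}_t = \bar{\bar{G}} \cdot \tilde{\mathbf{J}}_s,
\qquad
\tilde{\mathbf{J}}_s \approx \sum_{n=1}^{N} c_n\,\tilde{\mathbf{f}}_n,
\qquad
[K(f)]\{c\} = 0,
\qquad
\det[K(f_r)] = 0,
\]

where the complex root \(f_r\) of the determinant equation gives the resonant frequency through its real part and is related to the bandwidth through its imaginary part.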
Abstract:
Several mobile robot navigation methods require measuring the robot's position and orientation in its workspace. In the case of wheeled mobile robots, techniques based on odometry determine the robot's location by integrating the incremental displacements of its wheels. However, this technique is subject to errors that accumulate with the distance traveled by the robot, making its exclusive use unfeasible. Other methods are based on the detection of natural or artificial landmarks present in the environment and whose location is known. This technique does not generate cumulative errors, but it may require more processing time than methods based on odometry. Thus, many methods use both techniques, so that odometry errors are periodically corrected with measurements obtained from landmarks. Following this approach, this work proposes a hybrid localization system for wheeled mobile robots in indoor environments, based on odometry and natural landmarks. The landmarks are straight lines defined by the junctions in the floor of the environment, forming a two-dimensional grid. Landmark detection from digital images is performed with the Hough transform, and heuristics are associated with that transform to allow its application in real time. To reduce the landmark search time, we propose mapping the odometry errors into a region of the captured image that has a high probability of containing the sought landmark.
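The odometry half of such a system amounts to integrating wheel increments into a pose estimate. The sketch below shows a standard differential-drive update, a generic formulation rather than the thesis's exact model; the wheel base and the encoder-to-distance conversion are assumed known.

```python
import math

def odometry_step(x, y, theta, d_left, d_right, wheel_base):
    """One odometry update from incremental wheel displacements (meters)."""
    ds = (d_right + d_left) / 2.0              # linear displacement of center
    dtheta = (d_right - d_left) / wheel_base   # heading increment
    # Midpoint integration reduces error versus naive forward integration.
    x += ds * math.cos(theta + dtheta / 2.0)
    y += ds * math.sin(theta + dtheta / 2.0)
    theta = (theta + dtheta) % (2.0 * math.pi)
    return x, y, theta

# Drift accumulated by steps like these is what the landmark (Hough
# transform) corrections are meant to cancel periodically.
pose = (0.0, 0.0, 0.0)
for left, right in [(0.10, 0.12), (0.11, 0.11), (0.12, 0.10)]:
    pose = odometry_step(*pose, left, right, wheel_base=0.35)
print(pose)
```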
Abstract:
The seismic method is of extreme importance in geophysics. Mainly associated with oil exploration, this line of research concentrates most of the investment in the area. Acquisition, processing and interpretation of seismic data are the stages that make up a seismic study. Seismic processing, in particular, is focused on producing an image that represents the geological structures in the subsurface. Seismic processing has evolved significantly in recent decades, due to the demands of the oil industry and to hardware advances that brought higher storage and digital processing capabilities, enabling the development of more sophisticated processing algorithms, such as those that make use of parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section image that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults, salt domes and other structures of interest, such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time-consuming, owing to the heuristics of the mathematical algorithm and the large volume of input and output data involved; it may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that can make these methods impractical. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, analyses of speedup and efficiency were performed and, finally, the degree of algorithmic scalability was identified with respect to the technological advances expected in future processors.
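The scalability analyses mentioned here (and in the next abstract) rest on the standard definitions of speedup and parallel efficiency for an execution on \(p\) processing elements:

\[
S(p) = \frac{T_1}{T_p},
\qquad
E(p) = \frac{S(p)}{p},
\]

where \(T_1\) is the runtime of the serial version and \(T_p\) the runtime with \(p\) threads; ideal scaling gives \(S(p) = p\) and \(E(p) = 1\), and the measured departure from these values quantifies the degree of scalability.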
Abstract:
This work analyzes the performance of a parallel implementation of Coupled Simulated Annealing (CSA) for the unconstrained optimization of continuous-variable problems. Parallel processing is an efficient form of information processing whose emphasis is on exploiting simultaneous events during software execution. It arises primarily from the demand for high computational performance and from the difficulty of increasing the speed of a single processing core. Although multicore processors are easily found nowadays, several algorithms are not yet suited to running on parallel architectures. The CSA algorithm is characterized by a group of Simulated Annealing (SA) optimizers working together to refine the solution, each SA optimizer running as a single thread executed by a different processor. In the analysis of parallel performance and scalability, the following metrics were investigated: execution time; the speedup of the algorithm with respect to the number of processors; and the efficiency of use of the processing elements with respect to the increasing size of the problem. Furthermore, the quality of the final solution was verified. For this study, a parallel version of CSA and its equivalent serial version are proposed. Both algorithms were analyzed on 14 benchmark functions; for each function, CSA was evaluated with 2 to 24 optimizers. The results obtained are presented and discussed in light of these metrics. The conclusions characterize CSA as a good parallel algorithm, both in the quality of the solutions and in its parallel scalability and efficiency.
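For intuition, the sketch below emulates the coupled part of CSA in a single process: m SA optimizers share a coupling term computed from all current energies, which modulates each one's acceptance probability. This follows the general published CSA scheme; the specific variant, temperature schedules and the one-thread-per-optimizer execution of the paper may differ, and the test function is just a common benchmark.

```python
import math, random

def rastrigin(x):
    # Standard multimodal benchmark used here only as an example objective.
    return 10 * len(x) + sum(v * v - 10 * math.cos(2 * math.pi * v) for v in x)

def csa(f, dim, m=6, iters=2000, t_gen=1.0, t_ac=10.0):
    xs = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(m)]
    es = [f(x) for x in xs]
    best = min(zip(es, xs))
    for k in range(iters):
        t_g = t_gen / (k + 1)                 # generation-temperature schedule
        e_max = max(es)                       # stabilizes the coupling term
        gamma = sum(math.exp((e - e_max) / t_ac) for e in es)
        for i in range(m):
            y = [v + t_g * random.gauss(0, 1) for v in xs[i]]
            ey = f(y)
            if ey < es[i]:
                accept = True
            else:
                # Coupled acceptance: depends on all optimizers' energies.
                prob = math.exp((es[i] - e_max) / t_ac) / gamma
                accept = random.random() < prob
            if accept:
                xs[i], es[i] = y, ey
                if ey < best[0]:
                    best = (ey, y)
    return best

energy, solution = csa(rastrigin, dim=4)
print("best energy:", energy)
```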
Abstract:
Bayesian networks are powerful tools, as they represent probability distributions as graphs and can handle the uncertainties of real systems. Since the last decade there has been special interest in learning network structures from data. However, learning the best network structure is an NP-hard problem, so many heuristic algorithms have been created to generate network structures from data, and many of these algorithms use score metrics to build the network model. This thesis compares three of the most used score metrics. The K2 algorithm and two benchmark networks, ASIA and ALARM, were used to carry out the comparison. The results show that score metrics whose hyperparameters strengthen the tendency to select simpler network structures perform better than score metrics with a weaker tendency toward simpler structures, for both metrics examined (Heckerman-Geiger and modified MDL). The Heckerman-Geiger Bayesian score metric works better than MDL on large datasets, and MDL works better than Heckerman-Geiger on small datasets. The modified MDL gives results similar to Heckerman-Geiger for large datasets and close to MDL for small datasets, with a stronger tendency to select simpler network structures.
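The K2 algorithm mentioned above is a greedy, order-based structure search over a decomposable score. The sketch below shows its skeleton; `local_score` is a placeholder for whichever metric is plugged in (Heckerman-Geiger, MDL or modified MDL in the comparison), and all identifiers are ours.

```python
# Schematic K2-style structure search: for each node, following a fixed
# ordering, greedily add the parent that most improves a decomposable
# score, stopping when no addition helps or a parent limit is reached.
# Given a dataset, local_score(v, parents) would compute the chosen
# metric's family score for node v with that parent set.

def k2_search(order, local_score, max_parents=3):
    parents = {v: set() for v in order}
    for idx, v in enumerate(order):
        candidates = set(order[:idx])        # only predecessors may be parents
        current = local_score(v, parents[v])
        while len(parents[v]) < max_parents and candidates:
            best_gain, best_c = 0.0, None
            for c in candidates:
                gain = local_score(v, parents[v] | {c}) - current
                if gain > best_gain:
                    best_gain, best_c = gain, c
            if best_c is None:               # no candidate improves the score
                break
            parents[v].add(best_c)
            candidates.discard(best_c)
            current += best_gain
    return parents
```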
Abstract:
This work proposes a formulation for the optimization of 2D structural layouts subjected to mechanical and thermal loads, and applies an h-adaptive filtering process that leads to low computational cost and well-defined structural layouts. The main goal of the formulation is to minimize the mass of the structure, subjected to an effective von Mises stress state, in variants with stability and side constraints. A global measure criterion was used to impose a parametric condition on the stress field. To avoid singularity problems, a relaxation of the stress constraint was considered. The optimization uses a material approach in which the homogenized constitutive equation is a function of the relative density of the material, and the effective properties at intermediate densities are represented by a SIMP-type artificial model. The problem was discretized with the Galerkin finite element method, using triangles with a linear Lagrangian basis. To solve the optimization problem, the augmented Lagrangian method was applied, which consists of solving a sequence of minimization problems with box constraints, each solved by a second-order projection method that uses a memoryless quasi-Newton method. This process reduces computational cost and proved to be effective and robust. The results yield more refined layouts, with accurate definition of the topology and shape of the structure. On the other hand, the mass-minimization formulation with a global stress criterion provides nearly ready-to-use structural layouts, at the cost of violating the criterion of homogeneously distributed stress.
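In SIMP-type models of the kind referred to above, the interpolation of the constitutive tensor and the mass-minimization problem can be sketched, in our notation, as

\[
\mathbf{C}(\rho) = \rho^{p}\,\mathbf{C}^{0},\ \ p > 1,
\qquad
\min_{\rho}\ \int_{\Omega} \rho \, d\Omega
\quad \text{s.t.} \quad
\sigma_{\mathrm{vM}}(\rho) \le \bar{\sigma},
\quad 0 < \rho_{\min} \le \rho \le 1,
\]

where \(\rho\) is the relative density, \(\mathbf{C}^{0}\) the constitutive tensor of the solid material, the penalization \(p > 1\) makes intermediate densities uneconomical, and the lower bound \(\rho_{\min}\), together with the stress-constraint relaxation, avoids the singularity problems mentioned.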
Abstract:
In recent decades, researchers have concentrated efforts on the search for ways to reconcile continued urban development with environmental preservation. Recycling and the reuse of materials in industry have been considered the best option for sustainable development. One of the relevant aspects in this case is the rational use of electrical energy, and here the role of engineering is to conceive new processes and materials that reduce energy consumption while maintaining the benefits of the technology. In this context, the objective of the present research is to analyze quantitatively the thermal behavior of walls built with concrete blocks whose composition incorporates expanded polystyrene (EPS), reused in the form of flakes and boards, resulting in a "lightweight concrete". Systematic experiments were conducted with a wall (taken as the standard) built with ordinary concrete blocks; two walls built with lightweight concrete blocks, differing in the EPS/sand ratio; a wall of ceramic bricks ("eight-hole" type); and a wall of ordinary cement blocks, so as to obtain a comparative analysis of the thermal behavior of the systems. Other tests conducted with the blocks were stress analysis and the analysis of thermal properties (ρ, cp and k). Based on the results, it was possible to establish a quantitative relationship between the concentration (density) of EPS in the constructive elements and the decrease in the heat transfer rate; as was demonstrated, the EPS content also changes the other thermal properties of the material. It was observed that the lightweight concrete walls present better thermal behavior than the other constructive systems in widespread use. The investigation showed the viability of using EPS as aggregate (raw material) in the composition of the concrete, for the fabrication of blocks for non-structural masonry that act as thermal insulation in buildings. A direct consequence of this result is the possibility of reducing the electrical energy consumed in the climate control of buildings. Another aspect of the investigation that must be highlighted is the reuse of EPS as a raw material for civil construction, with a clear benefit in the reduction of environmental problems.
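The heat-transfer-rate comparison above rests on steady-state conduction through a plane wall, for which the textbook relation (not the authors' measurement model) is

\[
q = \frac{k A\,(T_{\mathrm{ext}} - T_{\mathrm{int}})}{L},
\]

where \(k\) is the thermal conductivity, \(A\) the wall area and \(L\) its thickness; adding EPS lowers the effective \(k\) and therefore the rate \(q\) at which heat crosses the wall for the same temperature difference.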
Abstract:
A housing unit was built to study the thermal performance of a composite material made of gypsum and ground EPS. Two construction techniques were used: blocks, and casting in place. Two compositions of the composite were studied. The blocks were laid with conventional mortar; in the cast-in-place technique, PET bottles were embedded inside the walls to provide mechanical strength and thermal resistance. Compression tests were performed according to the ABNT standard for sealing bricks. An analysis of thermal comfort is presented, based on thermocouples placed on the internal and external faces of the building's walls. The feasibility of manufacturing houses from recyclable materials, through the proposed composites, is demonstrated, and the constructive aspects, with the advantages and disadvantages of the techniques used, are also discussed. The block used performs structural and thermal insulation functions, is low cost, and represents an alternative use for EPS and PET bottles, materials that would otherwise occupy much space in landfills, giving them an ecologically sound destination. The thermal analysis results show the comfort provided by the composite: a difference of around 7 °C was obtained between the internal and external surfaces of the walls most exposed to the sun, and the average air temperature inside the building, around 28.0 °C, remained below the limit of the thermal comfort zone recommended for hot-climate countries.
Abstract:
An experimental house was built on UFRN grounds using blocks made of a composite of cement, plaster, EPS, crushed rubber and sand. Several blocks with various compositions were produced, and preliminary tests of mechanical and thermal resistance were performed to choose the most appropriate proportion. PET bottles were placed inside the blocks to provide thermal resistance. In this work, a second function was given to the bottles: to serve as interlocking elements between blocks. The ends of the bottles protrude from the top and the bottom of each block; with the bottom ends cut off, the upper ends protruding from one block fit into the lower holes of the block placed on top, holes formed by the cut already mentioned. Minimum compression tests were performed according to ABNT standards for non-load-bearing (sealing) wall blocks. With the house built, thermal performance studies were carried out to assess comfort conditions, monitoring external and internal temperatures of the walls and of the environment, among other variables such as wind speed and relative humidity. The resulting blocks provided adequate thermal insulation, with the walls presenting differences of up to 11.7 °C between the outer and inner faces and a maximum temperature inside the house of around 31 °C, within the so-called thermal comfort zone for warm climates. The experiments made evident the effectiveness of this construction in providing thermal comfort inside the house, and confirmed the viability of building houses from recyclable materials, reducing construction costs and offering a suitable alternative for low-income families. Moreover, besides the low cost, the proposal represents an alternative use for various recyclable materials, and can therefore be considered an ecological solution.
Abstract:
The text traces the epistemological and socioanalytic profiles of the paradigm question. Mauss had already brought to light the moule affectif (affective mold) of the scientific notions of force and cause. Later, Baudouin would speak of the archetypal induction of notions, and Durand's anthropology of the imaginary would conclude that concepts are archetypally induced by images. This led to the unveiling of the unconscious substrate of ideations, a substrate governed by vectorized cathexis and translated into values as the core of ideations: the famous emotional a priori. The text therefore questions two myths that sustain classical science: the myth of scientific objectivity and that of axiological neutrality. It thus highlights the fallacy of the existence of an epistemological rupture between science and ideology. From that point on, ideations become ideologies, above all in the human sciences and in the sciences of education, which, moreover, become the support of a disguised ideological struggle in which, in a cognitive colonialism, strategies of knowledge dissimulate strategies of prejudice. Yet accepting the reality of this phantasm-analytic and ideological substrate enables a salutary educational task: paradigms become fantasies and, in this critical relativization, can be used as a field of collective transitional objects in a cultural and educational playfulness. In the polyculturalism of contemporary society, Weber's polytheism of values becomes an epistemological polytheism, governed by Feyerabend's ontological relativism and by an ethics of pragmatism. Articulating culture, organization and education, the anthropology of educational organizations and Paula Carvalho's group culturanalysis render the heuristics of this transitional dialectic.