917 results for Hyper-heuristics


Relevance:

10.00%

Publisher:

Abstract:

In this work we present a new clustering method that groups the points of a data set into classes. The method is based on an algorithm that links auxiliary clusters obtained with traditional vector quantization techniques. We describe approaches developed during the work that are based on measures of distance or dissimilarity (divergence) between the auxiliary clusters. The method requires only two pieces of a priori information: the number of auxiliary clusters Na and a threshold distance dt used to decide whether or not to link auxiliary clusters. The number of classes can be found automatically by the method, based on the chosen threshold distance dt, or it can be given as additional information to help in choosing the correct threshold. Several analyses are carried out and the results are compared with those of traditional clustering methods. Different dissimilarity metrics are analyzed and a new one, based on the concept of negentropy, is proposed. Besides grouping the points of a set into classes, we propose a method for statistically modeling the classes, aiming to obtain an expression for the probability that a point belongs to a given class. Experiments with several values of Na and dt are run on test sets, and the results are analyzed to study the robustness of the method and to derive heuristics for choosing the correct threshold. Throughout this work we explore aspects of information theory applied to the calculation of the divergences, in particular the different measures of information and divergence based on the Rényi entropy. The results obtained with the different metrics are compared and discussed. The work also has an appendix presenting real applications of the proposed method.
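
A minimal sketch of the linkage step described above, assuming k-means as the vector quantization stage and plain Euclidean distance as a stand-in for the divergence measures the work analyzes; the function name and the union-find bookkeeping are illustrative:

```python
import numpy as np
from sklearn.cluster import KMeans

def link_auxiliary_clusters(X, Na=20, dt=1.0):
    """Group the points of X by linking auxiliary clusters whose centroids
    lie closer than the threshold distance dt."""
    km = KMeans(n_clusters=Na, n_init=10).fit(X)
    centroids = km.cluster_centers_

    # Union-find over the Na auxiliary clusters.
    parent = list(range(Na))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(Na):
        for j in range(i + 1, Na):
            # Euclidean distance as a stand-in for the dissimilarity measures
            # (e.g., Rényi-based divergences) analyzed in the work.
            if np.linalg.norm(centroids[i] - centroids[j]) < dt:
                parent[find(i)] = find(j)

    # Relabel each point with the class of its auxiliary cluster's root.
    roots = sorted({find(i) for i in range(Na)})
    relabel = {r: k for k, r in enumerate(roots)}
    return np.array([relabel[find(c)] for c in km.labels_])
```

Replacing the distance test with a divergence between the clusters' local statistics would recover the Rényi-based variants discussed in the text.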

Relevance:

10.00%

Publisher:

Abstract:

Several mobile robot navigation methods require measuring the robot's position and orientation in its workspace. For wheeled mobile robots, techniques based on odometry determine the robot's localization by integrating the incremental displacements of its wheels. However, this technique is subject to errors that accumulate with the distance traveled by the robot, making its exclusive use unfeasible. Other methods are based on detecting natural or artificial landmarks present in the environment whose locations are known. This technique does not generate cumulative errors, but it can require more processing time than methods based on odometry. Thus, many methods use both techniques, so that the odometry errors are periodically corrected using measurements obtained from landmarks. Following this approach, this work proposes a hybrid localization system for wheeled mobile robots in indoor environments based on odometry and natural landmarks. The landmarks are straight lines defined by the junctions in the environment's floor, forming a two-dimensional grid. Landmark detection in digital images is performed with the Hough transform, and heuristics are associated with that transform to allow its application in real time. To reduce the landmark search time, we propose mapping the odometry errors to an area of the captured image that has a high probability of containing the sought landmark.
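
A minimal sketch of the odometry stage for a differential-drive robot (the standard dead-reckoning equations; parameter names are illustrative):

```python
import math

def odometry_update(x, y, theta, d_left, d_right, axle_length):
    """Dead-reckoning pose update from incremental wheel displacements."""
    d_center = (d_left + d_right) / 2.0          # distance traveled by midpoint
    d_theta = (d_right - d_left) / axle_length   # change in heading
    # Integrate along the mean heading of the step.
    x += d_center * math.cos(theta + d_theta / 2.0)
    y += d_center * math.sin(theta + d_theta / 2.0)
    theta = (theta + d_theta) % (2.0 * math.pi)
    return x, y, theta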

Relevance:

10.00%

Publisher:

Abstract:

The seismic method is of great importance in geophysics. Mainly associated with oil exploration, this line of research attracts most of the investment in the area. Acquisition, processing and interpretation of seismic data are the stages of a seismic study. Seismic processing in particular focuses on producing an image that represents the geological structures in the subsurface. It has evolved significantly in recent decades, driven by the demands of the oil industry and by hardware advances that delivered greater storage and digital processing capacity, enabling more sophisticated processing algorithms such as those that exploit parallel architectures. One of the most important steps in seismic processing is imaging. Migration of seismic data is one of the techniques used for imaging, with the goal of obtaining a seismic section that represents the geological structures as accurately and faithfully as possible. The result of migration is a 2D or 3D image in which it is possible to identify faults, salt domes, and other structures of interest, such as potential hydrocarbon reservoirs. However, a migration performed with quality and accuracy can be very time consuming, due to the heuristics of the mathematical algorithm and the large volume of input and output data involved; it may take days, weeks or even months of uninterrupted execution on supercomputers, representing large computational and financial costs that could make these methods impractical. Aiming at performance improvement, this work parallelized the core of a Reverse Time Migration (RTM) algorithm using the Open Multi-Processing (OpenMP) parallel programming model, given the large computational effort required by this migration technique. Furthermore, speedup and efficiency analyses were performed, and, finally, the degree of algorithmic scalability was identified with respect to the technological advances expected in future processors.
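
The work parallelizes the RTM core with OpenMP; as a rough analog, the sketch below parallelizes one finite-difference time step of the 2D acoustic wave equation (the kernel at the heart of RTM) with numba's prange, which plays the role of an OpenMP parallel for. The stencil and array names are illustrative, not the paper's code:

```python
import numpy as np
from numba import njit, prange

@njit(parallel=True)
def wave_step(p_prev, p_cur, vel2, dt2_over_dx2):
    """One leapfrog step of the 2D acoustic wave equation, parallel over rows."""
    nz, nx = p_cur.shape
    p_next = np.zeros_like(p_cur)
    for i in prange(1, nz - 1):          # analogous to '#pragma omp parallel for'
        for j in range(1, nx - 1):
            lap = (p_cur[i + 1, j] + p_cur[i - 1, j] +
                   p_cur[i, j + 1] + p_cur[i, j - 1] - 4.0 * p_cur[i, j])
            p_next[i, j] = (2.0 * p_cur[i, j] - p_prev[i, j] +
                            vel2[i, j] * dt2_over_dx2 * lap)
    return p_next
```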

Relevance:

10.00%

Publisher:

Abstract:

This paper analyzes the performance of a parallel implementation of Coupled Simulated Annealing (CSA) for the unconstrained optimization of continuous-variable problems. Parallel processing is an efficient form of information processing that emphasizes the exploitation of simultaneous events in software execution. It arises mainly from the demand for high computational performance and the difficulty of increasing the speed of a single processing core. Although multicore processors are easily found nowadays, several algorithms are not yet suited to parallel architectures. The CSA algorithm consists of a group of Simulated Annealing (SA) optimizers working together to refine the solution, each SA optimizer running in its own thread executed on a different processor. In the analysis of parallel performance and scalability, the following metrics were investigated: execution time; speedup with respect to an increasing number of processors; and efficiency of the processing elements with respect to increasing problem size. Furthermore, the quality of the final solution was verified. For this study, the paper proposes a parallel version of CSA and its equivalent serial version. Both algorithms were analyzed on 14 benchmark functions, and for each function the CSA was evaluated with 2 to 24 optimizers. The results are presented and discussed in light of these metrics. The conclusions characterize CSA as a good parallel algorithm, both in the quality of its solutions and in its parallel scalability and efficiency.
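
A schematic of one synchronous CSA round under common assumptions (Gaussian move generation, acceptance coupled through the ensemble energies); f, T_gen, T_ac and the thread pool are illustrative, and real speedup would require workers that release the GIL:

```python
import math, random
from concurrent.futures import ThreadPoolExecutor

def csa_round(states, energies, f, T_gen, T_ac, pool):
    """One synchronous CSA round: each optimizer proposes a move (evaluated in
    parallel), then accepts it with a probability coupled to all energies."""
    probes = [[xi + random.gauss(0.0, T_gen) for xi in s] for s in states]
    probe_E = list(pool.map(f, probes))          # one evaluation per worker

    e_max = max(energies)
    gamma = sum(math.exp((e - e_max) / T_ac) for e in energies)  # coupling term
    for i, (s, e) in enumerate(zip(probes, probe_E)):
        if e < energies[i]:                      # downhill moves always accepted
            states[i], energies[i] = s, e
        else:
            # Coupled acceptance: optimizers with worse current energy are
            # more likely to move, relative to the whole ensemble.
            A = math.exp((energies[i] - e_max) / T_ac) / gamma
            if random.random() < A:
                states[i], energies[i] = s, e
    return states, energies
```

Creating the pool as `ThreadPoolExecutor(max_workers=len(states))` gives each optimizer its own worker, mirroring the one-optimizer-per-thread design described above.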

Relevance:

10.00%

Publisher:

Abstract:

Bayesian networks are powerful tools, as they represent probability distributions as graphs and can model the uncertainties of real systems. Since the last decade there has been special interest in learning network structures from data. However, learning the best network structure is an NP-hard problem, so many heuristic algorithms for generating network structures from data have been created, many of which use score metrics to build the network model. This thesis compares three of the most used score metrics. The K2 algorithm and two standard benchmarks, ASIA and ALARM, were used to carry out the comparison. The results show that score metrics with hyperparameters that strengthen the tendency to select simpler network structures outperform those with a weaker such tendency, for both the Heckerman-Geiger and the modified MDL metrics. The Heckerman-Geiger Bayesian score metric works better than MDL on large datasets, while MDL works better than Heckerman-Geiger on small datasets. The modified MDL gives results similar to Heckerman-Geiger on large datasets and close to MDL on small datasets, with a stronger tendency to select simpler network structures.
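
A minimal sketch of an MDL-style decomposable score for one node given a candidate parent set, the quantity a K2-style search would maximize; the counting scheme and names are illustrative, not the thesis's exact metrics:

```python
import math
from collections import Counter

def mdl_node_score(data, node, parents, arity):
    """Log-likelihood of `node` given `parents`, minus an MDL penalty.
    `data` is a list of dicts mapping variable name -> discrete value;
    `arity` maps each variable to its number of states."""
    N = len(data)
    joint = Counter((tuple(row[p] for p in parents), row[node]) for row in data)
    marg = Counter(tuple(row[p] for p in parents) for row in data)

    # Maximum-likelihood multinomial log-likelihood, summed over counts.
    loglik = sum(n * math.log(n / marg[pa]) for (pa, _), n in joint.items())

    q = 1
    for p in parents:
        q *= arity[p]                  # number of parent configurations
    n_params = q * (arity[node] - 1)   # free parameters of the local table
    return loglik - 0.5 * math.log(N) * n_params
```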

Relevance:

10.00%

Publisher:

Abstract:

The text outlines the epistemological and socioanalytic profiles of the paradigmatic question. Mauss had shown the moule affectif (affective mold) of the scientific notions of force and cause. Later, Baudouin would speak of the archetypal induction of notions, and Durand's anthropology of the imaginary would conclude that the concept is archetypally induced by the image. This led to the unveiling of the unconscious substrate of ideations, a substrate governed by vectorized cathexis and expressed in values as the core of ideations: the famous emotional a priori. The text therefore questions two myths that underpin classical science: the myth of scientific objectivity and that of axiological neutrality. It thus highlights the fallacy of an epistemological rupture between science and ideology. From this point on, ideations become ideologies, above all in the human sciences and the sciences of education, which, moreover, become the vehicle of a disguised ideological struggle in which, in a cognitive colonialism, strategies of knowledge conceal strategies of prejudice. Nevertheless, accepting the reality of this phantasm-analytic and ideological substrate enables a salutary educational task: paradigms become fantasies and, in this critical relativization, can be used as a field of collective transitional objects in a cultural and educational play. In the polyculturalism of contemporary society, Weber's polytheism of values becomes an epistemological polytheism, governed by Feyerabend's ontological relativism and by an ethics of pragmatism. Articulating culture, organization and education, Paula Carvalho's anthropology of educational organizations and group culturanalysis express the heuristics of this transitional dialectic.

Relevance:

10.00%

Publisher:

Abstract:

Obesity is rampant in modern society, and growth hormone (GH) could be useful as an adjunct therapy to reduce obesity-induced cardiovascular damage. To investigate GH effects on obesity, 32 male Wistar rats were initially divided into two groups (n = 16): control (C), fed standard chow and water, and hypercaloric (H), fed hypercaloric chow and 30% sucrose in its drinking water. After 45 days, both C and H groups were divided into two subgroups (n = 8): C + PL was fed standard chow and water and received saline subcutaneously; C + GH was fed standard chow and water and received 2 mg/kg/day GH subcutaneously; H + PL was fed the hypercaloric diet with 30% sucrose in its drinking water and received saline subcutaneously; and H + GH was fed the hypercaloric diet with 30% sucrose in its drinking water and received GH subcutaneously. After a total experimental period of 75 days, H + PL rats were considered obese, having higher body weight, body mass index, Lee index, and atherogenic index (AI) than C + PL. Obesity was accompanied by enhanced myocardial lipid hydroperoxide (LH) and lactate dehydrogenase (LDH), as well as depressed energy expenditure (RMR) and oxygen consumption (VO(2)) per body weight. H + GH rats had a higher fasting RMR, as well as a lower AI and lower myocardial LH, than H + PL. Comparing C + GH with C + PL, despite no effects on morphometric parameters, lipid profile, myocardial LH, or LDH activity, GH enhanced fed RMR and myocardial pyruvate dehydrogenase. In conclusion, the present study brings new insights into GH effects on obesity-related cardiovascular damage, demonstrating for the first time that GH regulated cardiac metabolic pathways, enhanced energy expenditure, and improved the lipid profile in the obese condition. Growth hormone under standard feeding also showed promising therapeutic value, enhancing pyruvate dehydrogenase activity and glucose oxidation in cardiac tissue and thus optimizing myocardial energy metabolism.

Relevance:

10.00%

Publisher:

Abstract:

Conselho Nacional de Desenvolvimento Científico e Tecnológico (CNPq)

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

Autism comprises a heterogeneous group of neurodevelopmental disorders that affect brain maturation and produce sensory, motor, language and social-interaction deficits in early childhood. Several studies have shown a major involvement of genetic factors leading to a predisposition to autism, possibly affected by environmental modulators during embryonic and post-natal life. Recent studies in animal models indicate that alterations in epigenetic control during development can disturb neuronal maturation and produce a hyper-excitable circuit, resulting in typical symptoms of autism. In the animal model of autism induced by valproic acid (VPA) during rat pregnancy, behavioral, electrophysiological and cellular alterations have been reported that can also be observed in patients with autism. However, only a few studies have correlated behavioral alterations with the supposed neuronal hyper-excitability in this model. The aim of this project was to generate an animal model of autism by prenatal exposure to VPA and evaluate early post-natal development and pre-pubertal (PND30) behavior in the offspring. Furthermore, we quantified the distribution of parvalbumin-positive neurons in the medial prefrontal cortex and of Purkinje cells in the cerebellum of VPA animals. Our results show that VPA treatment induced developmental alterations, observed as behavioral changes compared to vehicle-treated controls. VPA animals showed clear behavioral abnormalities such as hyperlocomotion, prolonged stereotypies and reduced social interaction with an unfamiliar mate. Cellular quantification revealed a decrease in the number of parvalbumin-positive interneurons in the anterior cingulate cortex and in the prelimbic cortex of the mPFC, suggesting an excitatory/inhibitory imbalance in this animal model of autism. Moreover, we observed that the neuronal reduction occurred mainly in cortical layers II/III and V/VI. We did not detect any change in the density of Purkinje neurons in the Crus I region of the cerebellar cortex. Together, our results strengthen the face validity of the VPA model in rats and shed light on specific changes in the inhibitory circuitry of the prefrontal cortex in this autism model. Further studies should address the challenge of clarifying the electrophysiological correlates of these cellular alterations in order to better understand the behavioral dysfunctions.

Relevance:

10.00%

Publisher:

Abstract:

In the minimization of tool switches problem we seek a sequence in which to process a set of jobs so that the number of tool switches required is minimized. In this work, different variations of a heuristic based on partially ordered job sequences are implemented and evaluated. All variations adopt a depth-first strategy on the enumeration tree. The computational results indicate that good solutions can be obtained by a variation that keeps the best three branches at each node of the enumeration tree and, when backtracking, randomly chooses the next node to branch among all active nodes.
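
For a fixed job sequence the number of switches is usually evaluated with the Keep Tool Needed Soonest (KTNS) policy; the sketch below is a plain KTNS count (one switch per eviction, free initial loading), with jobs given as tool sets and C the magazine capacity. Names and conventions are illustrative; the paper's contribution is the enumeration heuristic, not this evaluation:

```python
def ktns_switches(jobs, C):
    """Tool switches for a fixed job sequence under Keep-Tool-Needed-Soonest.
    jobs: list of sets of tool ids; C: magazine capacity (C >= max job size).
    Counts one switch per eviction; the initial loading is free."""
    def next_use(tool, start):
        for t in range(start, len(jobs)):
            if tool in jobs[t]:
                return t
        return len(jobs)               # tool never needed again

    magazine = set(jobs[0])
    switches = 0
    for t, job in enumerate(jobs[1:], start=1):
        for tool in job:
            if tool in magazine:
                continue
            if len(magazine) >= C:
                # Evict the loaded tool, not needed now, whose next use is
                # furthest in the future (the KTNS rule).
                victim = max((x for x in magazine if x not in job),
                             key=lambda x: next_use(x, t))
                magazine.remove(victim)
                switches += 1
            magazine.add(tool)
    return switches
```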

Relevance:

10.00%

Publisher:

Abstract:

The Capacitated Centered Clustering Problem (CCCP) consists of defining a set of p groups with minimum dissimilarity on a network with n points, where demand values are associated with each point and each group has a demand capacity. The problem is well known to be NP-hard and has many practical applications. In this paper, the hybrid method Clustering Search (CS) is implemented to solve the CCCP. This method identifies promising regions of the search space by generating solutions with a metaheuristic, such as a Genetic Algorithm, and grouping them into clusters that are then explored further with local search heuristics. Computational results on instances available in the literature are presented to demonstrate the efficacy of CS. (C) 2010 Elsevier Ltd. All rights reserved.
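
A schematic sketch of the CS control loop described above, with assimilation reduced to simple center replacement (an assumption; implementations vary). Here evolve, distance, local_search and cost are problem-specific callables supplied by the user:

```python
def clustering_search(pop, evolve, distance, local_search, cost,
                      radius, threshold, iterations):
    """Schematic CS loop: metaheuristic solutions are grouped into clusters,
    and a cluster that accumulates enough solutions is intensified."""
    centers, volumes = [], []
    best = min(pop, key=cost)
    for _ in range(iterations):
        pop = evolve(pop)                        # e.g., one GA generation
        for sol in pop:
            if not centers:
                centers.append(sol); volumes.append(1)
                continue
            k = min(range(len(centers)),
                    key=lambda i: distance(sol, centers[i]))
            if distance(sol, centers[k]) > radius:
                centers.append(sol); volumes.append(1)   # open a new cluster
                continue
            centers[k] = sol                     # assimilation by replacement
            volumes[k] += 1
            if volumes[k] >= threshold:          # promising region detected
                volumes[k] = 0
                centers[k] = local_search(centers[k])
                if cost(centers[k]) < cost(best):
                    best = centers[k]
    return best
```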

Relevance:

10.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

10.00%

Publisher:

Abstract:

The Quadratic Minimum Spanning Tree Problem (QMST) is a version of the Minimum Spanning Tree Problem in which, besides the traditional linear costs, there is a quadratic cost structure that models interaction effects between pairs of edges. Linear and quadratic costs are added together to constitute the total cost of the spanning tree, which must be minimized. When these interactions are restricted to adjacent edges, the problem is called the Adjacent Only Quadratic Minimum Spanning Tree Problem (AQMST). AQMST and QMST are NP-hard problems that model several problems in the design of transport and distribution networks, and AQMST is in general the more suitable model for real problems. Although the literature adds the linear and quadratic costs together, in real applications they may be conflicting, in which case it is interesting to consider the costs separately. In this sense, multiobjective optimization provides a more realistic model for QMST and AQMST. A review of the state of the art found no papers addressing these problems from a biobjective point of view. Thus, the objective of this thesis is the development of exact and heuristic algorithms for the Biobjective Adjacent Only Quadratic Spanning Tree Problem (bi-AQST). As a theoretical foundation, other NP-hard problems directly related to bi-AQST are discussed: QMST and AQMST. Backtracking and branch-and-bound exact algorithms are proposed for the target problem of this investigation. The heuristic algorithms developed are: Pareto Local Search, Tabu Search with ejection chains, a Transgenetic Algorithm, NSGA-II, and a hybridization of the last two called NSTA. The proposed algorithms are compared to each other through a performance analysis based on computational experiments with instances adapted from the QMST literature. For the exact algorithms the analysis considers, in particular, execution time; for the heuristic algorithms, besides execution time, the quality of the generated approximation sets is evaluated with quality indicators, and appropriate statistical tools are used to measure the performance of both the exact and the heuristic algorithms. Considering the set of instances adopted and the criteria of execution time and approximation-set quality, the experiments showed that the Tabu Search with ejection chains obtained the best results, with the transgenetic algorithm ranking second. The PLS algorithm obtained good-quality solutions, but at a much higher computational cost than the other (meta)heuristics, placing third. The NSTA and NSGA-II algorithms took the last positions.
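
A minimal sketch of the two objectives of bi-AQST for a given spanning tree, under the adjacent-only restriction; the edge and cost containers are illustrative:

```python
def biobjective_cost(tree_edges, linear_cost, quad_cost):
    """Evaluate a spanning tree under bi-AQST: f1 is the linear edge cost,
    f2 the quadratic cost summed over pairs of adjacent edges.
    tree_edges: list of (u, v) tuples; cost containers are dicts."""
    f1 = sum(linear_cost[e] for e in tree_edges)
    f2 = 0.0
    for i, e in enumerate(tree_edges):
        for g in tree_edges[i + 1:]:
            if set(e) & set(g):                  # adjacent: share an endpoint
                f2 += quad_cost.get((e, g), quad_cost.get((g, e), 0.0))
    return f1, f2  # one point in objective space, compared by Pareto dominance
```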

Relevance:

10.00%

Publisher:

Abstract:

Because accurate solutions to combinatorial optimization problems are very hard to obtain, many heuristic methods have been developed; for many years, however, the performance of these approaches was not analyzed in a systematic way. This work proposes a statistical analysis of heuristic approaches to the Traveling Salesman Problem (TSP). The analysis focuses on evaluating the performance of each approach in terms of the computational time needed to reach the optimal solution of a given TSP instance. Survival analysis was used, together with hypothesis tests for the equality of survival functions. The evaluated approaches were divided into three classes: Lin-Kernighan algorithms, evolutionary algorithms, and Particle Swarm Optimization. In addition, the analysis included a memetic algorithm (for symmetric and asymmetric TSP instances) that uses the Lin-Kernighan heuristic as its local search procedure.
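
A minimal sketch of the kind of comparison described, using the lifelines package (an assumption; any survival analysis toolkit would do). Runtimes to the optimum are the durations, runs that hit a time limit without reaching the optimum are right-censored, and all numbers are illustrative:

```python
import numpy as np
from lifelines import KaplanMeierFitter
from lifelines.statistics import logrank_test

# Illustrative data: seconds until the optimum was found; reached = 0 marks a
# run censored at the 30 s time limit without finding the optimal tour.
t_lk = np.array([1.2, 3.4, 2.2, 8.9, 30.0])
reached_lk = np.array([1, 1, 1, 1, 0])
t_evo = np.array([5.1, 30.0, 12.7, 30.0, 9.8])
reached_evo = np.array([1, 0, 1, 0, 1])

kmf = KaplanMeierFitter()
kmf.fit(t_lk, event_observed=reached_lk, label="Lin-Kernighan")
print(kmf.survival_function_)   # P(optimum still not found by time t)

# Hypothesis test for equality of the two survival functions.
result = logrank_test(t_lk, t_evo,
                      event_observed_A=reached_lk, event_observed_B=reached_evo)
print(result.p_value)
```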