820 results for Search-based algorithms


Relevance:

40.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

40.00%

Publisher:

Abstract:

Optical flow methods are accurate algorithms for estimating the displacement and velocity fields of objects in a wide variety of applications, and their performance depends on the configuration of a set of parameters. Since there is a lack of research aiming to tune such parameters automatically, in this work we propose an evolutionary framework for this task, introducing three techniques for the purpose: Particle Swarm Optimization, Harmony Search and Social-Spider Optimization. The proposed framework has been compared against the well-known Large Displacement Optical Flow approach, obtaining the best results in three out of eight image sequences provided by a public dataset. Additionally, the proposed framework can be used with any other optimization technique.
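
The abstract names the three metaheuristics but gives no implementation detail; as a rough illustration of the tuning loop, the sketch below uses Particle Swarm Optimization over two hypothetical optical-flow parameters (a smoothness weight and a pyramid-level count), with a toy error surface standing in for the real endpoint-error evaluation:

```python
# Minimal PSO sketch; parameter names, ranges and the error function are
# illustrative placeholders, not taken from the paper.
import numpy as np

def flow_error(params):
    """Placeholder for 'run optical flow, return average endpoint error'."""
    smoothness, pyramid_levels = params
    return (smoothness - 0.7) ** 2 + (pyramid_levels - 4.0) ** 2  # toy surface

rng = np.random.default_rng(0)
low, high = np.array([0.0, 1.0]), np.array([2.0, 8.0])
n_particles, n_iters = 20, 50

pos = rng.uniform(low, high, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_err = np.array([flow_error(p) for p in pos])
gbest = pbest[pbest_err.argmin()].copy()

for _ in range(n_iters):
    r1, r2 = rng.random((2, n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, low, high)
    err = np.array([flow_error(p) for p in pos])
    improved = err < pbest_err
    pbest[improved], pbest_err[improved] = pos[improved], err[improved]
    gbest = pbest[pbest_err.argmin()].copy()

print("best parameters:", gbest)
```

Swapping in Harmony Search or Social-Spider Optimization only changes the update rule; the parameter-evaluation loop stays the same.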

Relevance:

40.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

40.00%

Publisher:

Abstract:

In this paper we deal with the problem of boosting the Optimum-Path Forest (OPF) clustering approach using evolutionary optimization techniques. Since the OPF classifier performs an exhaustive search to find the neighborhood size that yields the minimum graph cut as a quality measure, we compared several optimization techniques that can obtain graph-cut values close to the ones obtained by brute force. Experiments on two public datasets in the context of unsupervised network intrusion detection have shown that evolutionary optimization techniques can find suitable neighborhood values faster than the exhaustive search. Additionally, we have shown that it is not necessary to employ many agents for this task, since the neighborhood size is defined by discrete values, which constrains the set of possible solutions to a few candidates.
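
To make the comparison concrete, here is a minimal sketch (with a toy graph-cut function, not OPF itself) contrasting the exhaustive sweep over the discrete neighborhood size k with a few-agent search:

```python
# Sketch: the OPF code and graph-cut measure are placeholders; the point is
# that k ranges over a small discrete set, so few agents are needed.
import random

def graph_cut(k):
    """Placeholder for 'run OPF clustering with k-NN graph, return cut value'."""
    return abs(k - 17) + 0.1 * k  # toy objective, minimised near k = 17

k_range = range(1, 101)

# Exhaustive baseline: one objective evaluation per candidate k.
best_exhaustive = min(k_range, key=graph_cut)

# Evolutionary-style search: a handful of agents, far fewer evaluations.
random.seed(0)
agents = random.sample(list(k_range), 5)
for _ in range(10):
    best = min(agents, key=graph_cut)
    agents = [max(1, min(100, best + random.randint(-5, 5))) for _ in agents]

print(best_exhaustive, min(agents, key=graph_cut))
```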

Relevance:

40.00%

Publisher:

Abstract:

The emergence of wavelength-division multiplexing (WDM) technology provides the capability for increasing the bandwidth of synchronous optical network (SONET) rings by grooming low-speed traffic streams onto different high-speed wavelength channels. Since the cost of SONET add-drop multiplexers (SADM) at each node dominates the total cost of these networks, how to assign the wavelength, groom the traffic, and bypass the traffic through the intermediate nodes has received a lot of attention from researchers recently. Moreover, the traffic pattern of the optical network changes from time to time. How to develop dynamic reconfiguration algorithms for traffic grooming is an important issue. In this paper, two cases (best fit and full fit) for handling reconfigurable SONET over WDM networks are proposed. For each approach, an integer linear programming model and heuristic algorithms (TS-1 and TS-2, based on the tabu search method) are given. The results demonstrate that the TS-1 algorithm can yield better solutions but has a greater running time than the greedy algorithm for the best fit case. For the full fit case, the tabu search heuristic yields competitive results compared with an earlier simulated annealing based method and it is more stable for the dynamic case.
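
The abstract does not spell out TS-1 and TS-2; the sketch below is a generic tabu-search skeleton of the kind such heuristics build on, with a placeholder cost (standing in for the SADM count) and a placeholder bit-flip move:

```python
# Generic tabu-search skeleton (illustrative; the paper's TS-1/TS-2 moves,
# cost model and tenure are not specified in this abstract).
import random

def cost(assignment):
    """Placeholder for the SADM count of a wavelength/grooming assignment."""
    return sum(assignment)

def neighbors(assignment):
    """Placeholder move: flip one traffic stream to the other wavelength."""
    for i in range(len(assignment)):
        flipped = assignment[:]
        flipped[i] ^= 1
        yield i, flipped

random.seed(0)
current = [random.randint(0, 1) for _ in range(12)]
best, tabu, tenure = current[:], {}, 5

for it in range(100):
    candidates = [(cost(a), move, a) for move, a in neighbors(current)
                  if tabu.get(move, -1) < it or cost(a) < cost(best)]
    c, move, current = min(candidates)          # best admissible neighbor
    tabu[move] = it + tenure                    # forbid reversing this move soon
    if c < cost(best):
        best = current[:]

print(cost(best), best)
```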

Relevance:

40.00%

Publisher:

Abstract:

The design of a network is a solution to several engineering and science problems. Several network design problems are known to be NP-hard, and population-based metaheuristics like evolutionary algorithms (EAs) have been widely investigated for such problems. Such optimization methods simultaneously generate a large number of potential solutions to investigate the search space in breadth and, consequently, to avoid local optima. Obtaining a potential solution usually involves the construction and maintenance of several spanning trees, or more generally, spanning forests. To efficiently explore the search space, special data structures have been developed to provide operations that manipulate a set of spanning trees (population). For a tree with n nodes, the most efficient data structures available in the literature require time O(n) to generate a new spanning tree that modifies an existing one and to store the new solution. We propose a new data structure, called the node-depth-degree representation (NDDR), and we demonstrate that, using this encoding, generating a new spanning forest requires average time O(√n). Experiments with an EA based on the NDDR applied to large-scale instances of the degree-constrained minimum spanning tree problem have shown that the implementation adds only small constants and lower-order terms to the theoretical bound.
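
NDDR itself is the paper's contribution and is not reproduced here; as background, here is a minimal sketch of the simpler node-depth encoding it extends, where a tree is stored as a preorder list of (node, depth) pairs and a whole subtree is one contiguous slice:

```python
# Illustration of a node-depth tree encoding (the simpler ancestor of NDDR;
# the degree component and the O(sqrt(n)) machinery are in the paper).
# A spanning tree is stored as a preorder sequence of (node, depth) pairs.
tree = [("a", 0), ("b", 1), ("c", 2), ("d", 2), ("e", 1)]

def subtree_span(encoding, i):
    """Return the slice of the encoding holding the subtree rooted at index i."""
    root_depth = encoding[i][1]
    j = i + 1
    while j < len(encoding) and encoding[j][1] > root_depth:
        j += 1
    return encoding[i:j]

# The subtree rooted at "b" is the contiguous block [("b",1), ("c",2), ("d",2)]:
print(subtree_span(tree, 1))

# Moving a subtree = cutting that block and splicing it after its new parent
# with depths shifted; no pointer surgery over the whole tree is needed.
```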

Relevance:

40.00%

Publisher:

Abstract:

This paper presents an optimum user-steered boundary tracking approach for image segmentation, which simulates the behavior of water flowing through a riverbed. The riverbed approach was devised using the image foresting transform with a never-exploited connectivity function. We analyze its properties in the derived image graphs and discuss its theoretical relation with other popular methods such as live wire and graph cuts. Several experiments show that riverbed can significantly reduce the number of user interactions (anchor points), as compared to live wire for objects with complex shapes. This paper also includes a discussion about how to combine different methods in order to take advantage of their complementary strengths.
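
The paper's connectivity function is its contribution and is not reproduced here; the sketch below shows the generic image foresting transform loop (a Dijkstra-like propagation over the image graph) that such a function plugs into, with a placeholder max-arc path cost:

```python
# Generic image-foresting-transform loop (sketch). The riverbed method's
# novel connectivity function is the paper's contribution; 'extended' below
# uses a placeholder max-arc function, not the one from the paper.
import heapq

def ift(graph, seeds):
    """graph: {node: [(neighbor, arc_weight), ...]}; seeds: anchor nodes."""
    cost = {v: float("inf") for v in graph}
    pred = {v: None for v in graph}
    heap = []
    for s in seeds:
        cost[s] = 0
        heapq.heappush(heap, (0, s))
    while heap:
        c, u = heapq.heappop(heap)
        if c > cost[u]:
            continue  # stale queue entry
        for v, w in graph[u]:
            extended = max(c, w)  # placeholder connectivity function
            if extended < cost[v]:
                cost[v], pred[v] = extended, u
                heapq.heappush(heap, (extended, v))
    return cost, pred

g = {0: [(1, 3), (2, 1)], 1: [(0, 3), (2, 2)], 2: [(0, 1), (1, 2)]}
print(ift(g, seeds=[0]))
```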

Relevance:

40.00%

Publisher:

Abstract:

Models are becoming increasingly important in the software development process. As a consequence, the number of models being used is increasing, and so is the need for efficient mechanisms to search them. Various existing search engines could be used for this purpose, but they lack features to properly search models, mainly because they are strongly focused on text-based search. This paper presents Moogle, a model search engine that uses metamodeling information to create richer search indexes and to allow more complex queries to be performed. The paper also presents the results of an evaluation of Moogle, which showed that the metamodel information improves the accuracy of the search.

Relevance:

40.00%

Publisher:

Abstract:

Acute kidney injury (AKI) is classically described as a rapid loss of kidney function. AKI affects more than 15% of all hospital admissions and is associated with elevated mortality rates. Although many advances have occurred, intermittent or continuous renal replacement therapies are still considered the best options for reversing mild and severe AKI syndrome. For this reason, it is essential that innovative and effective therapies, without side effects and complications, be developed to treat AKI and the end stages of renal disease. Mesenchymal stem cell (MSC)-based therapies have numerous advantages in helping to repair inflamed and damaged tissues and are being considered as a new alternative for treating kidney injuries. Numerous experimental models have shown that MSCs can act via differentiation-independent mechanisms to help renal recovery. Essentially, MSCs can secrete a pool of cytokines, growth factors and chemokines, express enzymes, interact via cell-to-cell contacts and release bioagents such as microvesicles to orchestrate renal protection. In this review, we propose seven distinct properties of MSCs which explain how renoprotection may be conferred: 1) anti-inflammatory; 2) pro-angiogenic; 3) stimulation of endogenous progenitor cells; 4) anti-apoptotic; 5) anti-fibrotic; 6) anti-oxidant; and 7) promotion of cellular reprogramming. In this context, these mechanisms, either individually or synergistically, could induce renal protection and functional recovery. This review summarises the most important effects and benefits associated with MSC-based therapies in experimental renal disease models and attempts to clarify the mechanisms behind the MSC-related renoprotection. MSCs may prove to be an effective, innovative and affordable treatment for moderate and severe AKI. However, more studies need to be performed to provide a more comprehensive global understanding of MSC-related therapies and to ensure their safety for future clinical applications.

Relevance:

40.00%

Publisher:

Abstract:

Precipitation retrieval over high latitudes, particularly snowfall retrieval over ice and snow using satellite-based passive microwave spectrometers, is currently an unsolved problem. The challenge results from the large variability of microwave emissivity spectra for snow and ice surfaces, which can mimic, to some degree, the spectral characteristics of snowfall. This work focuses on the investigation of a new snowfall detection algorithm specific to high-latitude regions, based on a combination of active and passive sensors able to discriminate between snowing and non-snowing areas. The space-borne Cloud Profiling Radar (on CloudSat), the Advanced Microwave Sounding Units A and B (on NOAA-16) and the infrared spectrometer MODIS (on AQUA) were co-located for 365 days, from October 1st, 2006 to September 30th, 2007. CloudSat products were used as truth to calibrate and validate all the proposed algorithms. The methodological approach can be summarised in two steps. In a first step, an empirical search for a threshold aimed at discriminating cases with no snow was performed, following Kongoli et al. [2003]. Since this single-channel approach did not produce adequate results, a more statistically sound approach was attempted. Two different techniques, which make it possible to compute the probability above and below a brightness temperature (BT) threshold, were applied to the available data. The first technique is based on a logistic distribution to represent the probability of snow given the predictors. The second, termed the Bayesian Multivariate Binary Predictor (BMBP), is a fully Bayesian technique that requires no hypothesis on the shape of the probabilistic model (such as, for instance, the logistic) and only requires the estimation of the BT thresholds. The results show that both proposed methods are able to discriminate snowing and non-snowing conditions over the polar regions with a probability of correct detection larger than 0.5, highlighting the importance of a multispectral approach.
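
As an illustration of the first, logistic technique, the sketch below fits P(snow | BT) with a logistic model; the two channels, the synthetic data and the threshold are assumptions, not the paper's values:

```python
# Sketch of the logistic formulation P(snow | BTs); channel choices and the
# synthetic training data are illustrative, not the paper's.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500
bt = rng.normal([240.0, 255.0], [10.0, 8.0], size=(n, 2))   # two BT channels (K)
snowing = (bt[:, 0] + 0.5 * bt[:, 1] < 365).astype(int)     # toy ground truth

model = LogisticRegression().fit(bt, snowing)               # fit P(snow | BT)
p_snow = model.predict_proba([[235.0, 250.0]])[0, 1]
print(f"P(snow) = {p_snow:.2f}")
```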

Relevance:

40.00%

Publisher:

Abstract:

Combinatorial optimization is a branch of optimization that deals with problems where the set of feasible solutions is discrete. Routing is a well-studied branch of combinatorial optimization concerning the process of deciding the best way of visiting the nodes (customers) in a network. Routing problems appear in many real-world applications, including transportation and telephone or electronic data networks. Over the years, many solution procedures have been introduced for different routing problems. Some are based on exact approaches that solve the problems to optimality, and others on heuristic or metaheuristic search to find optimal or near-optimal solutions. There is also a less-studied approach that combines heuristic and exact methods to tackle different problems, including those in combinatorial optimization. The aim of this dissertation is to develop solution procedures based on the combination of heuristic and Integer Linear Programming (ILP) techniques for some important problems in routing optimization. Given an initial feasible solution to be possibly improved, the method follows a destroy-and-repair paradigm: the given solution is randomly destroyed (i.e., customers are removed at random) and repaired by solving an ILP model, in an attempt to find a new improved solution.
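
A minimal sketch of the destroy-and-repair loop follows; the real method repairs by solving an ILP model over the removed customers, which is replaced here by a greedy nearest-neighbor re-insertion:

```python
# Destroy-and-repair sketch; the ILP repair step is replaced by a greedy
# re-insertion placeholder, and the distance matrix is a toy instance.
import random

def route_length(route, dist):
    return sum(dist[a][b] for a, b in zip(route, route[1:] + route[:1]))

def destroy(route, k, rng):
    removed = rng.sample(route[1:], k)          # keep the depot (index 0) fixed
    return [c for c in route if c not in removed], removed

def repair(partial, removed, dist):
    for c in removed:                           # stand-in for the ILP repair
        i = min(range(1, len(partial) + 1),
                key=lambda i: dist[partial[i - 1]][c])
        partial.insert(i, c)
    return partial

rng = random.Random(0)
n = 8
dist = [[abs(i - j) for j in range(n)] for i in range(n)]
best = list(range(n))
for _ in range(200):
    partial, removed = destroy(best[:], 3, rng)
    candidate = repair(partial, removed, dist)
    if route_length(candidate, dist) < route_length(best, dist):
        best = candidate

print(best, route_length(best, dist))
```

In the actual method, solving an exact ILP over the removed customers is what allows improvements a purely heuristic re-insertion would miss.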

Relevance:

40.00%

Publisher:

Abstract:

In this thesis we take the first steps towards the systematic application of a methodology for automatically building formal models of complex biological systems. Such a methodology could also be useful for designing artificial systems possessing desirable properties such as robustness and evolvability. The approach we follow is to manipulate formal models by means of adaptive search methods called metaheuristics. In the first part of the thesis we develop state-of-the-art hybrid metaheuristic algorithms to tackle two important problems in genomics, namely Haplotype Inference by parsimony and the Founder Sequence Reconstruction Problem. We compare our algorithms with other effective techniques in the literature, show the strengths and limitations of our approaches on various problem formulations and, finally, propose further enhancements that could improve the performance of our algorithms and widen their applicability. In the second part, we concentrate on Boolean network (BN) models of gene regulatory networks (GRNs). We detail our automatic design methodology and apply it to four use cases which correspond to different design criteria and address some limitations of GRN modelling by BNs. Finally, we tackle the Density Classification Problem with the aim of showing the learning capabilities of BNs. Experimental evaluation shows the efficacy of this methodology in producing networks that meet our design criteria. Our results, consistent with what has been found in other works, also suggest that networks manipulated by a search process exhibit a mixture of characteristics typical of different dynamical regimes.
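
As an illustration of the models involved, here is a minimal random Boolean network with synchronous updates; the thesis's design loop searches over such networks, and the wiring and truth tables below are random placeholders:

```python
# Minimal random Boolean network (sketch): n nodes, each driven by k inputs
# through a random truth table; a design methodology searches over networks
# like this one for those meeting given criteria.
import random

random.seed(0)
n, k = 6, 2
inputs = [random.sample(range(n), k) for _ in range(n)]          # wiring
tables = [[random.randint(0, 1) for _ in range(2 ** k)] for _ in range(n)]

def step(state):
    """Synchronously update every node from its inputs' current values."""
    return tuple(
        tables[i][sum(state[src] << b for b, src in enumerate(inputs[i]))]
        for i in range(n)
    )

state = tuple(random.randint(0, 1) for _ in range(n))
for _ in range(10):                                              # short trajectory
    print(state)
    state = step(state)
```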

Relevance:

40.00%

Publisher:

Abstract:

The Capacitated Location-Routing Problem (CLRP) is NP-hard since it generalizes two well-known NP-hard problems: the Capacitated Facility Location Problem (CFLP) and the Capacitated Vehicle Routing Problem (CVRP). The Multi-Depot Vehicle Routing Problem (MDVRP) is also NP-hard, since it is a generalization of the well-known Vehicle Routing Problem (VRP), which arises with one depot. This thesis addresses heuristic algorithms based on the well-known granular search idea introduced by Toth and Vigo (2003) to solve the CLRP and the MDVRP. Extensive computational experiments on benchmark instances for both problems have been performed to determine the effectiveness of the proposed algorithms. This work is organized as follows: Chapter 1 gives a detailed overview and a methodological review of the literature for the Capacitated Location-Routing Problem (CLRP) and the Multi-Depot Vehicle Routing Problem (MDVRP). Chapter 2 describes a two-phase hybrid heuristic algorithm to solve the CLRP. Chapter 3 shows a computational comparison of heuristic algorithms for the CLRP. Chapter 4 presents a hybrid granular tabu search approach for solving the MDVRP.
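
A sketch of the granular idea follows: local search only evaluates moves built from a sparse set of "short" arcs. The threshold rule here (a multiple of the average arc cost) is an illustrative assumption in the spirit of Toth and Vigo (2003):

```python
# Sketch of the granular idea: local search considers only arcs no longer
# than a granularity threshold, shrinking the neighborhood dramatically.
# The threshold rule below (beta * average arc cost) is illustrative.
import itertools

dist = {(i, j): abs(i - j) + (1 if (i + j) % 3 == 0 else 0)
        for i, j in itertools.permutations(range(8), 2)}

beta = 1.2
avg = sum(dist.values()) / len(dist)
granular_arcs = {a for a, d in dist.items() if d <= beta * avg}

print(f"kept {len(granular_arcs)} of {len(dist)} arcs")
# A granular tabu search would now evaluate only moves that insert arcs
# from granular_arcs, instead of the full quadratic arc set.
```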

Relevance:

40.00%

Publisher:

Abstract:

Information is nowadays a key resource: machine learning and data mining techniques have been developed to extract high-level information from great amounts of data. As most data comes in the form of unstructured text in natural languages, research on text mining is currently very active and dealing with practical problems. Among these, text categorization deals with the automatic organization of large quantities of documents into previously defined taxonomies of topic categories, possibly arranged in large hierarchies. In commonly proposed machine learning approaches, classifiers are automatically trained from pre-labeled documents: they can perform very accurate classification, but often require a sizeable training set and notable computational effort. Methods for cross-domain text categorization have been proposed that allow a set of labeled documents from one domain to be leveraged to classify those of another. Most methods use advanced statistical techniques, usually involving the tuning of parameters. A first contribution presented here is a method based on nearest centroid classification, where profiles of categories are generated from the known domain and then iteratively adapted to the unknown one. Despite being conceptually simple and having easily tuned parameters, this method achieves state-of-the-art accuracy on most benchmark datasets with fast running times. A second, deeper contribution involves the design of a domain-independent model to distinguish the degree and type of relatedness between arbitrary documents and topics, inferred from the different types of semantic relationships between their representative words, identified by specific search algorithms. The application of this model is tested on both flat and hierarchical text categorization, where it potentially allows the efficient addition of new categories during classification. Results show that classification accuracy still requires improvements, but models generated from one domain are shown to be effectively reusable in a different one.
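
A minimal sketch of the centroid-adaptation idea: category centroids built on the known domain label the unknown domain's documents and are then re-estimated from those labels. The features, data and iteration count are illustrative, not the dissertation's:

```python
# Nearest-centroid cross-domain adaptation (sketch): source-domain centroids
# label the target documents, then the centroids are re-estimated from those
# labels. Two toy 2-D "document" clusters stand in for real text features.
import numpy as np

rng = np.random.default_rng(0)
src_X = np.vstack([rng.normal(0, 1, (100, 2)), rng.normal(4, 1, (100, 2))])
src_y = np.repeat([0, 1], 100)
tgt_X = np.vstack([rng.normal(1, 1, (100, 2)), rng.normal(5, 1, (100, 2))])  # shifted domain

centroids = np.array([src_X[src_y == c].mean(axis=0) for c in (0, 1)])
for _ in range(5):  # iterative adaptation to the unknown domain
    labels = np.argmin(
        ((tgt_X[:, None, :] - centroids[None]) ** 2).sum(-1), axis=1)
    centroids = np.array([tgt_X[labels == c].mean(axis=0) for c in (0, 1)])

print(centroids)
```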

Relevance:

40.00%

Publisher:

Abstract:

This thesis deals with the development and improvement of linear-scaling algorithms for electronic-structure-based molecular dynamics. Molecular dynamics is a method for the computer simulation of the complex interplay between atoms and molecules at finite temperature. A decisive advantage of this method is its high accuracy and predictive power. However, the computational cost, which in principle scales cubically with the number of atoms, prevents its application to large systems and long time scales. Starting from a new formalism, based on the grand-canonical potential and a factorization of the density matrix, the diagonalization of the corresponding Hamiltonian matrix is avoided. The formalism exploits the fact that, owing to localization, the Hamiltonian and density matrices are sparse. This reduces the computational cost so that it scales linearly with system size. To demonstrate its efficiency, the resulting algorithm is applied to a system of liquid methane exposed to extreme pressure (about 100 GPa) and extreme temperature (2000-8000 K). In the simulations, methane dissociates at temperatures above 4000 K, and the formation of sp²-bonded polymeric carbon is observed. The simulations provide no evidence for the formation of diamond and therefore have implications for existing planetary models of Neptune and Uranus. Since avoiding the diagonalization of the Hamiltonian matrix entails the inversion of matrices, the problem of computing an (inverse) p-th root of a given matrix is also addressed. This results in a new formula for symmetric positive definite matrices. It generalizes the Newton-Schulz iteration, Altman's formula for bounded non-singular operators, and Newton's method for computing roots of functions. It is proved that the order of convergence is always at least quadratic, and that adaptively adjusting a parameter q leads to better results in all cases.
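
The thesis's generalized formula for inverse p-th roots is not reproduced here; as a baseline, the classic Newton-Schulz iteration it generalizes computes A⁻¹ (the p = 1 case) using only matrix products, which is what makes such iterations attractive for sparse, linear-scaling schemes:

```python
# Classic Newton-Schulz iteration for the matrix inverse (the p = 1 case
# that the thesis's formula generalizes); uses only matrix products.
import numpy as np

def newton_schulz_inverse(A, iters=30):
    """Iterate X <- X (2I - A X); quadratic convergence from a safe start."""
    n = A.shape[0]
    X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))  # safe start
    I = np.eye(n)
    for _ in range(iters):
        X = X @ (2 * I - A @ X)
    return X

rng = np.random.default_rng(0)
M = rng.normal(size=(5, 5))
A = M @ M.T + 5 * np.eye(5)        # symmetric positive definite test matrix
X = newton_schulz_inverse(A)
print(np.max(np.abs(X @ A - np.eye(5))))   # residual near machine precision
```

The thesis's result extends this scheme to (inverse) p-th roots, with at least quadratic convergence and an adaptively tuned parameter q.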