969 results for minimum spanning tree


Relevance:

100.00%

Publisher:

Abstract:

The identification of subject-specific traits extracted from patterns of brain activity still represents an important challenge. The need to detect distinctive brain features, which is relevant for biometric and brain-computer interface systems, has also been emphasized in monitoring the effect of clinical treatments and in evaluating the progression of brain disorders. Graph theory and network science tools have revealed fundamental mechanisms of functional brain organization in resting-state M/EEG analysis. Nevertheless, it is still not clearly understood how several methodological aspects may bias the topology of the reconstructed functional networks. In this context, the literature shows inconsistency in the chosen length of the selected epochs, impeding a meaningful comparison between results from different studies. In this study we propose an approach which aims to investigate the existence of a distinctive functional core (sub-network) using an unbiased reconstruction of network topology. Brain signals from a public and freely available EEG dataset were analyzed using a phase-synchronization-based measure, minimum spanning tree and k-core decomposition. The analysis was performed for each classical brain rhythm separately. Furthermore, we aim to provide a network approach insensitive to the effects that epoch length has on functional connectivity (FC) and network reconstruction. Two different measures, the phase lag index (PLI) and the Amplitude Envelope Correlation (AEC), were applied to EEG resting-state recordings for a group of eighteen healthy volunteers. Weighted clustering coefficient (CCw), weighted characteristic path length (Lw) and minimum spanning tree (MST) parameters were computed to evaluate the network topology. The analysis was performed on both scalp and source-space data. Results concerning the distinctive functional core show the highest classification rates from k-core decomposition in the gamma (EER=0.130, AUC=0.943) and high beta (EER=0.172, AUC=0.905) frequency bands. Results from the scalp analysis concerning the influence of epoch length show a decrease in both mean PLI and AEC values with an increase in epoch length, with a tendency to stabilize at a length of 12 seconds for PLI and 6 seconds for AEC. Moreover, CCw and Lw show very similar behaviour, with metrics based on AEC being more reliable in terms of stability. In general, MST parameters stabilize at short epoch lengths, particularly for MSTs based on PLI (1-6 seconds versus 4-8 seconds for AEC). At the source level, the results were even more reliable, with stability already at 1 second duration for PLI-based MSTs. Our results confirm that EEG analysis may represent an effective tool to identify subject-specific characteristics that may be of great impact for several bioengineering applications. Regarding epoch length, the present work suggests that both PLI and AEC depend on epoch length and that this has an impact on the reconstructed network topology, particularly at the scalp level. Source-level MST topology is less sensitive to differences in epoch length, thereby enabling the comparison of brain network topology between different studies.
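As an illustration of the kind of network reconstruction described above, the sketch below builds an MST from a PLI connectivity matrix and computes two MST parameters commonly reported in this literature (leaf fraction and diameter). It is a minimal sketch assuming a precomputed symmetric PLI matrix `pli`; the function name and parameter definitions are illustrative and do not reproduce the authors' pipeline.

```python
import numpy as np
import networkx as nx

def mst_parameters(pli):
    """Build an MST from a PLI matrix and return (leaf fraction, diameter).

    pli: symmetric (n x n) array of phase lag index values in [0, 1];
         stronger coupling = higher PLI, so edge 'distance' is 1 - PLI.
    """
    n = pli.shape[0]
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=1.0 - pli[i, j])
    T = nx.minimum_spanning_tree(G, weight="weight")
    degrees = dict(T.degree())
    m = n - 1                                  # number of MST edges
    leaf_fraction = sum(1 for d in degrees.values() if d == 1) / m
    diameter = nx.diameter(T)                  # longest path, in hops
    return leaf_fraction, diameter

# toy example: random symmetric "PLI" matrix for 8 channels
rng = np.random.default_rng(0)
A = rng.uniform(0, 1, size=(8, 8))
pli = np.triu(A, 1) + np.triu(A, 1).T
print(mst_parameters(pli))
```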

Relevance:

100.00%

Publisher:

Abstract:

Wydział Matematyki i Informatyki: Zakład Matematyki Dyskretnej (Faculty of Mathematics and Computer Science: Department of Discrete Mathematics)

Relevance:

100.00%

Publisher:

Abstract:

This paper presents a genetic algorithm for finding a constrained minimum spanning tree. The problem is of relevance in the design of minimum cost communication networks, where there is a need to connect all the terminals at a user site to a terminal concentrator in a multipoint (tree) configuration, while ensuring that link capacity constraints are not violated. The approach used maintains a distinction between genotype and phenotype, which produces superior results to those found using a direct representation in a previous study.
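The genotype/phenotype distinction mentioned above refers to an indirect encoding; the specific encoding used in the paper is not reproduced here. Below is a minimal sketch of one classic indirect encoding, the Prüfer sequence, where the genotype is a string of n-2 node labels and the phenotype is the spanning tree obtained by decoding it. Capacity checking and the GA operators themselves are omitted, and the function name is an assumption for illustration.

```python
import heapq

def prufer_decode(seq, n):
    """Decode a Prüfer sequence (genotype) into the edge list of a
    labelled spanning tree on nodes 0..n-1 (phenotype)."""
    degree = [1] * n
    for v in seq:
        degree[v] += 1
    leaves = [v for v in range(n) if degree[v] == 1]
    heapq.heapify(leaves)
    edges = []
    for v in seq:
        leaf = heapq.heappop(leaves)
        edges.append((leaf, v))
        degree[v] -= 1
        if degree[v] == 1:
            heapq.heappush(leaves, v)
    u, w = heapq.heappop(leaves), heapq.heappop(leaves)
    edges.append((u, w))
    return edges

# genotype of length n-2 -> phenotype: a spanning tree on n = 6 nodes
print(prufer_decode([3, 3, 0, 1], 6))
```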

Relevance:

100.00%

Publisher:

Abstract:

This thesis addresses several formulations and different methods for solving the Weight-constrained Minimum Spanning Tree Problem (WMST). This problem, with applications in the design of communication and telecommunication networks, is an NP-hard combinatorial optimization problem. The WMST problem consists of determining, in a network with costs and weights associated with the edges, a minimum-cost spanning tree whose total weight does not exceed a given specified limit. Several formulations for the problem are presented and compared. One of them is used to develop a cutting-plane procedure based on separation, which proved very useful in obtaining solutions to the problem. In order to strengthen the presented formulations, new classes of valid inequalities are introduced, adapted from the well-known cover inequalities, extended cover inequalities and lifted cover inequalities. The new inequalities incorporate information from two sets of solutions: the set of spanning trees and the knapsack set. Several heuristic separation algorithms are presented that allow the proposed valid inequalities to be used efficiently. Based on Lagrangian decomposition, simple but efficient algorithms are presented and compared that can be used to compute lower and upper bounds on the optimal value of the WMST. Among them are two new algorithms: one based on the convexity of the Lagrangian function and another that makes use of the inclusion of valid inequalities. To obtain approximate solutions for the WMST problem, heuristic methods are used to find a feasible integer solution. The heuristic methods presented are based on the Feasibility Pump and Local Branching strategies. Computational results using all of the presented methods are reported. The results show that the different methods presented are quite effective at finding solutions to the WMST problem.
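To make the Lagrangian idea concrete, here is a minimal subgradient sketch for the WMST: the weight constraint (total weight at most W) is dualized with a multiplier lambda >= 0, each subproblem is an ordinary MST under the modified costs c_e + lambda * w_e, and lambda is updated by a subgradient step. This is a plain Lagrangian relaxation shown only for illustration; it is simpler than the Lagrangian decomposition schemes developed in the thesis, and the step-size rule and function name are assumptions.

```python
import networkx as nx

def wmst_lagrangian_bound(edges, n, W, iters=100, step0=1.0):
    """Subgradient scheme for the weight-constrained MST.

    edges: list of (u, v, cost, weight); n: number of nodes;
    W: weight budget.  Returns the best Lagrangian lower bound found.
    """
    lam, best_lb = 0.0, float("-inf")
    for k in range(1, iters + 1):
        G = nx.Graph()
        for u, v, c, w in edges:
            G.add_edge(u, v, weight=c + lam * w, cost=c, wgt=w)
        T = nx.minimum_spanning_tree(G, weight="weight")
        cost = sum(d["cost"] for _, _, d in T.edges(data=True))
        wgt = sum(d["wgt"] for _, _, d in T.edges(data=True))
        best_lb = max(best_lb, cost + lam * (wgt - W))   # L(lambda)
        lam = max(0.0, lam + (step0 / k) * (wgt - W))    # subgradient step
    return best_lb

edges = [(0, 1, 1, 5), (1, 2, 2, 4), (0, 2, 3, 1), (2, 3, 1, 6), (1, 3, 4, 2)]
print(wmst_lagrangian_bound(edges, n=4, W=8))
```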

Relevance:

100.00%

Publisher:

Abstract:

A Multi-Objective Antenna Placement Genetic Algorithm (MO-APGA) has been proposed for the synthesis of matched antenna arrays on complex platforms. The total number of antennas required, their position on the platform, location of loads, loading circuit parameters, decoupling and matching network topology, matching network parameters and feed network parameters are optimized simultaneously. The optimization goal was to provide a given minimum gain, specific gain discrimination between the main and back lobes and broadband performance. This algorithm is developed based on the non-dominated sorting genetic algorithm (NSGA-II) and Minimum Spanning Tree (MST) technique for producing diverse solutions when the number of objectives is increased beyond two. The proposed method is validated through the design of a wideband airborne SAR
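The abstract does not detail how the MST technique is combined with NSGA-II, so the following is only a plausible sketch of one common use of MSTs for diversity maintenance in many-objective optimization: each individual is scored by the total length of the MST edges incident to it in objective space, so isolated solutions get larger scores. The function name and scoring rule are assumptions for illustration, not the MO-APGA mechanism.

```python
import numpy as np
import networkx as nx

def mst_diversity_scores(objectives):
    """Diversity score per individual: total length of the MST edges
    incident to it, computed on Euclidean distances in objective space.

    objectives: (pop_size x n_objectives) array."""
    F = np.asarray(objectives, dtype=float)
    n = len(F)
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=float(np.linalg.norm(F[i] - F[j])))
    T = nx.minimum_spanning_tree(G)
    scores = np.zeros(n)
    for u, v, d in T.edges(data=True):
        scores[u] += d["weight"]
        scores[v] += d["weight"]
    return scores

pop = np.array([[1.0, 4.0], [1.1, 3.9], [3.0, 1.0], [5.0, 0.5]])
print(mst_diversity_scores(pop))   # isolated points get larger scores
```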

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a probability model, the mixture of trees, that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors.

Relevance:

100.00%

Publisher:

Abstract:

This paper introduces a probability model, the mixture of trees that can account for sparse, dynamically changing dependence relationships. We present a family of efficient algorithms that use EM and the Minimum Spanning Tree algorithm to find the ML and MAP mixture of trees for a variety of priors, including the Dirichlet and the MDL priors. We also show that the single tree classifier acts like an implicit feature selector, thus making the classification performance insensitive to irrelevant attributes. Experimental results demonstrate the excellent performance of the new model both in density estimation and in classification.
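The MST step inside the EM procedure is essentially the classical Chow-Liu construction: estimate pairwise mutual information from the (weighted) data and take a maximum-weight spanning tree over it. The sketch below shows that building block alone for binary variables, without the mixture components or the EM weights; function names are illustrative.

```python
import numpy as np
import networkx as nx

def mutual_information(x, y):
    """Plug-in mutual information estimate (in nats) for two binary columns."""
    mi = 0.0
    for a in (0, 1):
        for b in (0, 1):
            pxy = np.mean((x == a) & (y == b))
            px, py = np.mean(x == a), np.mean(y == b)
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def chow_liu_tree(data):
    """Maximum-MI spanning tree over the columns of a binary data matrix."""
    n_vars = data.shape[1]
    G = nx.Graph()
    for i in range(n_vars):
        for j in range(i + 1, n_vars):
            G.add_edge(i, j, weight=mutual_information(data[:, i], data[:, j]))
    return nx.maximum_spanning_tree(G, weight="weight")

rng = np.random.default_rng(1)
x0 = rng.integers(0, 2, 500)
data = np.column_stack([x0,
                        x0 ^ (rng.random(500) < 0.1),   # noisy copy of x0
                        rng.integers(0, 2, 500)])        # independent column
print(sorted(chow_liu_tree(data).edges()))
```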

Relevance:

100.00%

Publisher:

Abstract:

Large scale image mosaicing methods are in great demand among scientists who study different aspects of the seabed, and have been fostered by impressive advances in the capabilities of underwater robots in gathering optical data from the seafloor. Cost and weight constraints mean that low-cost Remotely Operated Vehicles (ROVs) usually have a very limited number of sensors. When a low-cost robot carries out a seafloor survey using a down-looking camera, it usually follows a predetermined trajectory that provides several non-time-consecutive overlapping image pairs. Finding these pairs (a process known as topology estimation) is indispensable to obtaining globally consistent mosaics and accurate trajectory estimates, which are necessary for a global view of the surveyed area, especially when optical sensors are the only data source. This thesis presents a set of consistent methods aimed at creating large-area image mosaics from optical data obtained during surveys with low-cost underwater vehicles. First, a global alignment method developed within a feature-based image mosaicing (FIM) framework, where nonlinear minimisation is substituted by two linear steps, is discussed. Then, a simple four-point mosaic rectifying method is proposed to reduce distortions that might occur due to lens distortions, error accumulation and the difficulties of optical imaging in an underwater medium. The topology estimation problem is addressed by means of a combined augmented-state and extended Kalman filter framework, aimed at minimising the total number of matching attempts and simultaneously obtaining the best possible trajectory. Potential image pairs are predicted by taking into account the uncertainty in the trajectory. The contribution of matching an image pair is investigated using information theory principles. Lastly, a different solution to the topology estimation problem is proposed in a bundle adjustment framework. Innovative aspects include the use of a fast image similarity criterion combined with a minimum spanning tree (MST) solution to obtain a tentative topology. This topology is improved by attempting image matching with the pairs for which there is the most overlap evidence. Unlike previous approaches for large-area mosaicing, our framework is able to deal naturally with cases where time-consecutive images cannot be matched successfully, such as completely unordered sets. Finally, the efficiency of the proposed methods is discussed and a comparison made with other state-of-the-art approaches, using a series of challenging datasets in underwater scenarios.
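The tentative-topology idea can be illustrated in a few lines: given a fast pairwise similarity score between images, a maximum-similarity spanning tree gives a connected initial topology, and the remaining high-similarity pairs become candidates for full image matching. This is a minimal sketch under an assumed `similarity` matrix, not the thesis's bundle-adjustment or Kalman-filter frameworks.

```python
import numpy as np
import networkx as nx

def tentative_topology(similarity, n_extra_candidates=3):
    """Return MST edges (initial topology) plus the most promising
    non-tree pairs on which to attempt full matching next.

    similarity: symmetric (n x n) matrix of fast image-similarity scores."""
    n = similarity.shape[0]
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=float(similarity[i, j]))
    mst = nx.maximum_spanning_tree(G, weight="weight")
    tree_edges = set(frozenset(e) for e in mst.edges())
    others = [(similarity[i, j], (i, j))
              for i in range(n) for j in range(i + 1, n)
              if frozenset((i, j)) not in tree_edges]
    others.sort(reverse=True)
    candidates = [pair for _, pair in others[:n_extra_candidates]]
    return list(mst.edges()), candidates

sim = np.array([[1.0, 0.8, 0.1, 0.3],
                [0.8, 1.0, 0.7, 0.2],
                [0.1, 0.7, 1.0, 0.6],
                [0.3, 0.2, 0.6, 1.0]])
print(tentative_topology(sim))
```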

Relevance:

100.00%

Publisher:

Abstract:

Early American crania show a different morphological pattern from the one shared by late Native Americans. Although the origin of the diachronic morphological diversity seen on the continents is still debated, the distinct morphology of early Americans is well documented and widely dispersed. This morphology has been described extensively for South America, where larger samples are available. Here we test the hypotheses that the morphology of Early Americans results from retention of the morphological pattern of Late Pleistocene modern humans and that the occupation of the New World precedes the morphological differentiation that gave rise to recent Eurasian and American morphology. We compare Early American samples with European Upper Paleolithic skulls, the East Asian Zhoukoudian Upper Cave specimens and a series of 20 modern human reference crania. Canonical Analysis and a Minimum Spanning Tree were used to assess the morphological affinities among the series, while Mantel and Dow-Cheverud tests based on Mahalanobis Squared Distances were used to test different evolutionary scenarios. Our results show strong morphological affinities among the early series irrespective of geographical origin, which, together with the results of the matrix analyses, favors the scenario of a late morphological differentiation of modern humans. We conclude that the geographic differentiation of modern human morphology is a late phenomenon that occurred after the initial settlement of the Americas. Am J Phys Anthropol 144:442-453, 2011. (c) 2010 Wiley-Liss, Inc.
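As a generic sketch of how an MST can summarize morphological affinities among series, the code below computes pairwise Mahalanobis squared distances between group mean vectors under an assumed pooled covariance matrix and links the groups by an MST. It is only an illustration of the distance-plus-MST idea, not the specific canonical-variate pipeline of the study.

```python
import numpy as np
import networkx as nx

def affinity_mst(group_means, pooled_cov, labels):
    """MST over Mahalanobis squared distances between group means."""
    inv_cov = np.linalg.inv(pooled_cov)
    n = len(group_means)
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            d = group_means[i] - group_means[j]
            G.add_edge(labels[i], labels[j], weight=float(d @ inv_cov @ d))
    return sorted(nx.minimum_spanning_tree(G).edges(data="weight"))

means = np.array([[0.0, 0.0], [1.0, 0.2], [0.9, 1.1], [3.0, 3.0]])
cov = np.array([[1.0, 0.2], [0.2, 1.0]])   # assumed pooled covariance
print(affinity_mst(means, cov, ["A", "B", "C", "D"]))
```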

Relevance:

100.00%

Publisher:

Abstract:

The Quadratic Minimum Spanning Tree Problem (QMST) is a version of the Minimum Spanning Tree Problem in which, besides the traditional linear costs, there is a quadratic cost structure. This quadratic structure models interaction effects between pairs of edges. Linear and quadratic costs are added up to constitute the total cost of the spanning tree, which must be minimized. When these interactions are restricted to adjacent edges, the problem is named the Adjacent Only Quadratic Minimum Spanning Tree Problem (AQMST). AQMST and QMST are NP-hard problems that model several problems in the design of transport and distribution networks. In general, AQMST arises as a more suitable model for real problems. Although linear and quadratic costs are added together in the literature, in real applications they may be conflicting, in which case it may be interesting to consider these costs separately. In this sense, Multiobjective Optimization provides a more realistic model for QMST and AQMST. A review of the state of the art found no papers addressing these problems from a biobjective point of view. Thus, the objective of this Thesis is the development of exact and heuristic algorithms for the Biobjective Adjacent Only Quadratic Spanning Tree Problem (bi-AQST). To this end, other NP-hard problems directly related to bi-AQST are discussed as theoretical foundation: the QMST and AQMST problems. Backtracking and branch-and-bound exact algorithms are proposed for the target problem of this investigation. The heuristic algorithms developed are: Pareto Local Search, Tabu Search with ejection chains, a Transgenetic Algorithm, NSGA-II and a hybridization of the two last-mentioned proposals called NSTA. The proposed algorithms are compared to each other through a performance analysis based on computational experiments with instances adapted from the QMST literature. With regard to the exact algorithms, the analysis considers, in particular, the execution time. In the case of the heuristic algorithms, besides execution time, the quality of the generated approximation sets is evaluated using quality indicators. Appropriate statistical tools are used to measure the performance of both exact and heuristic algorithms. Considering the set of instances adopted as well as the criteria of execution time and quality of the generated approximation sets, the experiments showed that the Tabu Search with ejection chains obtained the best results and the Transgenetic Algorithm ranked second. The PLS algorithm obtained good-quality solutions, but at a very high computational cost compared to the other (meta)heuristics, taking third place. The NSTA and NSGA-II algorithms came last.
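The biobjective setting means solutions are compared by Pareto dominance on the pair (linear cost, quadratic cost) rather than on their sum. The snippet below is a minimal, generic nondominated filter of the kind any of the heuristics mentioned could apply to its archive of spanning trees; it is not specific to the algorithms in the thesis.

```python
def nondominated(points):
    """Return the Pareto-nondominated subset of (linear_cost, quadratic_cost)
    pairs, with both objectives to be minimized."""
    def dominates(a, b):
        return a[0] <= b[0] and a[1] <= b[1] and a != b
    return [p for p in points
            if not any(dominates(q, p) for q in points)]

archive = [(10, 40), (12, 30), (15, 25), (11, 45), (15, 26)]
print(nondominated(archive))   # -> [(10, 40), (12, 30), (15, 25)]
```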

Relevance:

100.00%

Publisher:

Abstract:

Combinatorial optimization problems have the goal of maximizing or minimizing functions defined over a finite domain. Metaheuristics are methods designed to find good solutions in this finite domain, sometimes the optimal solution, using a subordinate heuristic that is modeled for each particular problem. This work presents algorithms based on particle swarm optimization (a metaheuristic) applied to combinatorial optimization problems: the Traveling Salesman Problem and the Multicriteria Degree Constrained Minimum Spanning Tree Problem. The first problem optimizes only one objective, while the other deals with many objectives. In order to evaluate the performance of the proposed algorithms, they are compared, in terms of the quality of the solutions found, to other approaches.
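For the degree-constrained variant mentioned above, the effect of the constraint can be illustrated with a simple greedy heuristic: Kruskal's algorithm modified to reject any edge that would push a node over the degree bound. This is only an illustrative constructive heuristic (it can fail to span the graph for tight bounds), not the particle swarm algorithm proposed in the work.

```python
def degree_constrained_tree(n, edges, d_max):
    """Greedy Kruskal-like heuristic for the degree-constrained spanning tree.
    edges: list of (cost, u, v); returns chosen edges or None if it gets stuck."""
    parent = list(range(n))

    def find(x):                      # union-find with path halving
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    degree, tree = [0] * n, []
    for cost, u, v in sorted(edges):
        ru, rv = find(u), find(v)
        if ru != rv and degree[u] < d_max and degree[v] < d_max:
            parent[ru] = rv
            degree[u] += 1
            degree[v] += 1
            tree.append((u, v, cost))
    return tree if len(tree) == n - 1 else None

edges = [(1, 0, 1), (2, 0, 2), (3, 0, 3), (4, 1, 2), (5, 2, 3)]
print(degree_constrained_tree(4, edges, d_max=2))
```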

Relevance:

100.00%

Publisher:

Abstract:

In general, pattern recognition techniques require a high computational burden for learning the discriminating functions that separate samples from distinct classes. As such, several studies have made an effort to employ machine learning algorithms in the context of big data classification problems. The research in this area ranges from Graphics Processing Unit-based implementations to mathematical optimizations, the main drawback of the former being their dependence on the graphics card. Here, we propose an architecture-independent optimization approach for the optimum-path forest (OPF) classifier, designed using a theoretical formulation that relates the minimum spanning tree to the minimum spanning forest generated by the OPF over the training dataset. The experiments have shown that the proposed approach can be faster than the traditional one on five public datasets, while being as accurate as the original OPF. (C) 2014 Elsevier B. V. All rights reserved.
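The link between OPF training and the MST can be illustrated directly: in the standard OPF formulation, prototypes are the endpoints of MST edges that connect samples of different classes in the complete graph over the training set. The sketch below shows only that prototype-selection step, under assumed Euclidean arc weights; it is not the optimization proposed in the paper.

```python
import numpy as np
import networkx as nx

def opf_prototypes(X, y):
    """Return indices of prototype samples: endpoints of MST edges whose
    two endpoints carry different class labels."""
    n = len(X)
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=float(np.linalg.norm(X[i] - X[j])))
    mst = nx.minimum_spanning_tree(G)
    prototypes = set()
    for u, v in mst.edges():
        if y[u] != y[v]:
            prototypes.update((u, v))
    return sorted(prototypes)

X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [1.1, 0.9]])
y = np.array([0, 0, 1, 1])
print(opf_prototypes(X, y))   # boundary samples of each class
```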

Relevance:

100.00%

Publisher:

Abstract:

The design of a network provides a solution to several engineering and science problems. Several network design problems are known to be NP-hard, and population-based metaheuristics like evolutionary algorithms (EAs) have been widely investigated for such problems. These optimization methods simultaneously generate a large number of potential solutions in order to explore the search space in breadth and, consequently, to avoid local optima. Obtaining a potential solution usually involves the construction and maintenance of several spanning trees or, more generally, spanning forests. To efficiently explore the search space, special data structures have been developed to provide operations that manipulate a set of spanning trees (a population). For a tree with n nodes, the most efficient data structures available in the literature require O(n) time to generate a new spanning tree that modifies an existing one and to store the new solution. We propose a new data structure, called the node-depth-degree representation (NDDR), and we demonstrate that, using this encoding, generating a new spanning forest requires O(√n) time on average. Experiments with an EA based on the NDDR applied to large-scale instances of the degree-constrained minimum spanning tree problem have shown that the implementation adds only small constants and lower-order terms to the theoretical bound.
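A rough feel for node-depth encodings (of which the proposed NDDR is a refinement that also stores degrees) can be given in a few lines: the tree is stored as the DFS preorder list of (node, depth) pairs, so every subtree occupies a contiguous slice of the array and can be located and detached cheaply. The sketch below shows only this basic representation; the O(√n) forest operators of the paper are not reproduced, and the helper names are assumptions.

```python
import networkx as nx

def node_depth_array(tree, root):
    """DFS preorder list of (node, depth) pairs for a rooted tree."""
    depth = {root: 0}
    for parent, child in nx.dfs_edges(tree, source=root):
        depth[child] = depth[parent] + 1
    return [(node, depth[node]) for node in nx.dfs_preorder_nodes(tree, source=root)]

def subtree_slice(array, node):
    """The contiguous slice of the array holding `node` and its subtree."""
    start = next(i for i, (v, _) in enumerate(array) if v == node)
    d = array[start][1]
    end = start + 1
    while end < len(array) and array[end][1] > d:
        end += 1
    return array[start:end]

T = nx.Graph([(0, 1), (1, 2), (1, 3), (0, 4)])
nd = node_depth_array(T, root=0)
print(nd)                    # e.g. [(0, 0), (1, 1), (2, 2), (3, 2), (4, 1)]
print(subtree_slice(nd, 1))  # the slice holding node 1's subtree
```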

Relevance:

100.00%

Publisher:

Abstract:

Mixed integer programming is today one of the most widely used techniques for dealing with hard optimization problems. On the one hand, many practical optimization problems arising from real-world applications (e.g., scheduling, project planning, transportation, telecommunications, economics and finance, timetabling) can be easily and effectively formulated as Mixed Integer linear Programs (MIPs). On the other hand, more than 50 years of intensive research have dramatically improved the capability of the current generation of MIP solvers to tackle hard problems in practice. However, many questions are still open and not fully understood, and the mixed integer programming community is still more than active in trying to answer some of these questions. As a consequence, a huge number of papers are continuously developed and new intriguing questions arise every year. When dealing with MIPs, we have to distinguish between two different scenarios. The first one arises when we are asked to handle a general MIP and cannot assume any special structure for the given problem. In this case, a Linear Programming (LP) relaxation and some integrality requirements are all we have for tackling the problem, and we are "forced" to use general-purpose techniques. The second one arises when mixed integer programming is used to address a problem with some structure. In this context, polyhedral analysis and other theoretical and practical considerations are typically exploited to devise special-purpose techniques. This thesis tries to give some insights into both of the above-mentioned situations. The first part of the work is focused on general-purpose cutting planes, which are probably the key ingredient behind the success of the current generation of MIP solvers. Chapter 1 presents a quick overview of the main ingredients of a branch-and-cut algorithm, while Chapter 2 recalls some results from the literature in the context of disjunctive cuts and their connections with Gomory mixed integer cuts. Chapter 3 presents a theoretical and computational investigation of disjunctive cuts. In particular, we analyze the connections between different normalization conditions (i.e., conditions to truncate the cone associated with disjunctive cutting planes) and other crucial aspects such as cut rank, cut density and cut strength. We give a theoretical characterization of weak rays of the disjunctive cone that lead to dominated cuts, and propose a practical method to strengthen the cuts arising from such weak extremal solutions. Further, we point out how redundant constraints can affect the quality of the generated disjunctive cuts, and discuss possible ways to cope with them. Finally, Chapter 4 presents some preliminary ideas in the context of multiple-row cuts. Very recently, a series of papers has drawn attention to the possibility of generating cuts using more than one row of the simplex tableau at a time. Several interesting theoretical results have been presented in this direction, often revisiting and recalling other important results discovered more than 40 years ago. However, it is not at all clear how these results can be exploited in practice. The chapter is still a work in progress and simply presents a possible way of generating two-row cuts from the simplex tableau arising from lattice-free triangles, together with some preliminary computational results.
The second part of the thesis is instead focused on the heuristic and exact exploitation of integer programming techniques for hard combinatorial optimization problems in the context of routing applications. Chapters 5 and 6 present an integer linear programming local search algorithm for Vehicle Routing Problems (VRPs). The overall procedure follows a general destroy-and-repair paradigm (i.e., the current solution is first randomly destroyed and then repaired in an attempt to find a new, improved solution) in which a class of exponential neighborhoods is iteratively explored by heuristically solving an integer programming formulation through a general-purpose MIP solver. Chapters 7 and 8 deal with exact branch-and-cut methods. Chapter 7 presents an extended formulation for the Traveling Salesman Problem with Time Windows (TSPTW), a generalization of the well-known TSP in which each node must be visited within a given time window. The polyhedral approaches proposed for this problem in the literature typically follow the one that has proven extremely effective in the classical TSP context. Here we present a (quite) general idea based on a relaxed discretization of time windows. This idea leads to a stronger formulation and to stronger valid inequalities, which are then separated within the classical branch-and-cut framework. Finally, Chapter 8 addresses branch-and-cut methods in the context of Generalized Minimum Spanning Tree Problems (GMSTPs), a class of NP-hard generalizations of the classical minimum spanning tree problem. In this chapter, we show how some basic ideas (in particular, the use of general-purpose cutting planes) can be useful to improve on branch-and-cut methods proposed in the literature.
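As a small illustration of the general-purpose cutting planes discussed in the first part, the function below derives the textbook Gomory fractional cut from one simplex-tableau row of a pure integer program: from a row x_B + sum_j a_j x_j = b with all variables integer and nonnegative, the cut is sum_j frac(a_j) x_j >= frac(b). The Gomory mixed integer and disjunctive cuts the thesis actually studies are stronger and more involved; this is only the classical starting point, and the function name is illustrative.

```python
import math

def gomory_fractional_cut(row_coeffs, rhs):
    """Gomory fractional cut from a tableau row of a pure integer program.

    Row form:  x_B + sum_j row_coeffs[j] * x_j = rhs, all variables integer, >= 0.
    Returns (cut_coeffs, cut_rhs) meaning  sum_j cut_coeffs[j] * x_j >= cut_rhs."""
    frac = lambda a: a - math.floor(a)
    cut_rhs = frac(rhs)
    if cut_rhs == 0.0:
        return None                     # row already integral: no cut available
    return [frac(a) for a in row_coeffs], cut_rhs

# x_B + 0.5*x1 - 1.25*x2 = 3.75  ->  0.5*x1 + 0.75*x2 >= 0.75
print(gomory_fractional_cut([0.5, -1.25], 3.75))
```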

Relevance:

100.00%

Publisher:

Abstract:

Atmospheric aerosol particles affect people and the environment in many ways. A precise characterization of the particles helps in understanding their effects and assessing the consequences. Particles can be characterized in terms of their size, their shape and their chemical composition. Laser ablation mass spectrometry makes it possible to determine the size and the chemical composition of individual aerosol particles. In this work, the SPLAT (Single Particle Laser Ablation Time-of-flight mass spectrometer) was further developed for improved analysis of atmospheric aerosol particles in particular. The aerosol inlet was optimized to transfer the widest possible particle size range (80 nm - 3 µm) into the SPLAT and to focus the particles into a narrow beam. A new description of the relationship between particle size and particle velocity in vacuum was found. The alignment of the inlet was automated using stepper motors. The optical detection of the particles was improved so that particles smaller than 100 nm can be detected. Building on the optical detection and the automatic tilting of the inlet, a new method for characterizing the particle beam was developed. The control electronics of the SPLAT were improved so that the maximum analysis frequency is limited only by the ablation laser, which can ablate at most at about 10 Hz. By optimizing the vacuum system, the ion loss in the mass spectrometer was reduced by a factor of 4.
In addition to the hardware-side developments of the SPLAT, a large part of this work consisted of designing and implementing a software solution for analyzing the raw data obtained with the SPLAT. CRISP (Concise Retrieval of Information from Single Particles) is a software package built on IGOR PRO (Wavemetrics, USA) that allows efficient evaluation of the single-particle raw data. CRISP contains a newly developed algorithm for the automatic mass calibration of each individual mass spectrum, including the suppression of noise and of problems with signals that exhibit intense tailing. CRISP provides methods for the automatic classification of the particles; implemented are k-means, fuzzy-c-means and a form of hierarchical partitioning based on a minimum spanning tree. CRISP offers the possibility of preprocessing the data so that the automatic classification of the particles runs faster and the results are of higher quality. In addition, CRISP can easily sort particles according to predefined criteria. The data structures and infrastructure underlying CRISP were designed with maintenance and extensibility in mind.
During this work, the SPLAT was successfully deployed in several campaigns, and the capabilities of CRISP were demonstrated on the acquired datasets.
The SPLAT is now able to operate efficiently in the field to characterize atmospheric aerosol, while CRISP enables fast and targeted evaluation of the data.
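The MST-based hierarchical partitioning mentioned for CRISP corresponds, in its simplest form, to single-linkage clustering: build an MST over the particle feature vectors and cut the k-1 longest edges to obtain k clusters. The sketch below shows that generic procedure in Python; CRISP itself is written in IGOR PRO, and its exact variant is not reproduced here.

```python
import numpy as np
import networkx as nx

def mst_clusters(features, k):
    """Single-linkage style clustering: MST over Euclidean distances,
    cut the k-1 longest edges, return a cluster label per sample."""
    n = len(features)
    G = nx.Graph()
    for i in range(n):
        for j in range(i + 1, n):
            G.add_edge(i, j, weight=float(np.linalg.norm(features[i] - features[j])))
    mst = nx.minimum_spanning_tree(G)
    edges = sorted(mst.edges(data=True), key=lambda e: e[2]["weight"], reverse=True)
    for u, v, _ in edges[: k - 1]:      # remove the longest MST edges
        mst.remove_edge(u, v)
    labels = np.empty(n, dtype=int)
    for c, comp in enumerate(nx.connected_components(mst)):
        labels[list(comp)] = c
    return labels

X = np.array([[0, 0], [0.2, 0.1], [5, 5], [5.1, 4.9], [10, 0]])
print(mst_clusters(X, k=3))   # three well-separated groups
```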