982 results for local minimum spanning tree (LMST)


Relevance: 100.00%

Abstract:

In this paper, we develop a new family of graph kernels in which the graph structure is probed by means of a discrete-time quantum walk. Given a pair of graphs, we let a quantum walk evolve on each graph and compute a density matrix from each walk. With the density matrices for the pair of graphs in hand, the kernel between the graphs is defined as the negative exponential of the quantum Jensen–Shannon divergence between the density matrices. To cope with large graph structures, we propose to construct a sparser version of the original graphs using the simplification method introduced in Qiu and Hancock (2007). To this end, we compute the minimum spanning tree over the commute-time matrix of a graph. This spanning tree representation minimizes the number of edges of the original graph while preserving most of its structural information. The kernel between two graphs is then computed on their respective minimum spanning trees. We evaluate the performance of the proposed kernels on several standard graph datasets and demonstrate their effectiveness and efficiency.
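A minimal sketch of the kernel evaluation described above, assuming the density matrices for the two walks have already been computed; the decay factor `lam` is an assumed hyperparameter (the abstract defines the kernel simply as the negative exponential of the divergence):

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log2 rho), computed from the eigenvalues."""
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # drop numerically-zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

def qjsd(rho, sigma):
    """Quantum Jensen-Shannon divergence between two density matrices."""
    mix = (rho + sigma) / 2.0
    return von_neumann_entropy(mix) - 0.5 * (
        von_neumann_entropy(rho) + von_neumann_entropy(sigma))

def qjsd_kernel(rho_1, rho_2, lam=1.0):
    """Kernel value: negative exponential of the divergence."""
    return float(np.exp(-lam * qjsd(rho_1, rho_2)))
```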

Relevance: 100.00%

Abstract:

The current study presents quantitative reconstructions of tree cover, annual precipitation and mean July temperature derived from the pollen record of Lake Billyakh (65°17'N, 126°47'E, 340 m above sea level) spanning the last ca. 50 kyr. The reconstruction of tree cover suggests the presence of woody plants throughout the entire analyzed interval, although trees played only a minor role in the vegetation around Lake Billyakh prior to 14 kyr BP (<5%). This result corroborates the low percentages of tree pollen and the low scores of the cold deciduous forest biome in the PG1755 record from Lake Billyakh. The reconstructed mean temperatures of the warmest month, ~8-10 °C, do not support larch forest or woodland around Lake Billyakh during the coldest phase of the last glacial, between ~32 and ~15 kyr BP. However, modern cases from northern Siberia, ca. 750 km north of Lake Billyakh, demonstrate that individual larch plants can grow within a shrub and grass tundra landscape at very low mean July temperatures of about 8 °C. This makes plausible our hypothesis that the western and southern foreland of the Verkhoyansk Mountains could provide sufficiently moist and warm microhabitats, allowing individual larch specimens to survive the climatic extremes of the last glacial. Reconstructed mean precipitation values are about 270 mm/yr during the last glacial interval. This value is almost 100 mm higher than the modern averages reported for extreme-continental north-eastern Siberia east of Lake Billyakh, where larch-dominated cold deciduous forest grows at present. This suggests that last glacial environments around Lake Billyakh were never too dry for larch to grow, and that summer warmth was the main factor limiting tree growth during the last glacial interval. The n-alkane analysis of the Siberian plants presented in this study demonstrates rather complex alkane distribution patterns, which complicate the interpretation of the fossil records. In particular, the extremely low n-alkane concentrations in the leaves of local coniferous trees and shrubs suggest that their contribution to the litter, and therefore to the fossil lake sediments, might not be high enough for tracing the Quaternary history of the needle-leaved taxa using the n-alkane biomarker method.

Relevance: 100.00%

Abstract:

New substation technology, such as non-conventional instrument transformers, and the need to reduce design and construction costs are driving the adoption of Ethernet-based digital process bus networks for high-voltage substations. Protection and control applications can share a process bus, making more efficient use of the network infrastructure. This paper classifies and defines performance requirements for the protocols used in a process bus on the basis of application. These include GOOSE, SNMP and IEC 61850-9-2 sampled values. A method, based on the Multiple Spanning Tree Protocol (MSTP) and virtual local area networks, is presented that separates management and monitoring traffic from the rest of the process bus. A quantitative investigation of the interaction between the various protocols used in a process bus is described. These tests also validate the effectiveness of the MSTP-based traffic segregation method. While this paper focuses on a substation automation network, the results are applicable to other real-time industrial networks that implement multiple protocols. High-volume sampled-value data and time-critical circuit breaker tripping commands do not interact on a full-duplex switched Ethernet network, even under very high network load. This enables an efficient digital network to replace a large number of conventional analog connections between control rooms and high-voltage switchyards.

Relevance: 100.00%

Abstract:

A spanning tree T of a graph G is said to be a tree t-spanner if the distance between any two vertices in T is at most t times their distance in G. A graph that has a tree t-spanner is called a tree t-spanner admissible graph. The problem of deciding whether a graph is tree t-spanner admissible is NP-complete for any fixed t >= 4 and is linearly solvable for t <= 2; the case t = 3 remains open. A chordal graph is called a 2-sep chordal graph if all of its minimal a-b vertex separators, for every pair of non-adjacent vertices a and b, are of size two. It is known that not all 2-sep chordal graphs admit tree 3-spanners. This paper presents a structural characterization and a linear-time recognition algorithm for tree 3-spanner admissible 2-sep chordal graphs. Finally, a linear-time algorithm to construct a tree 3-spanner of a tree 3-spanner admissible 2-sep chordal graph is proposed.
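A small sketch of the t-spanner condition itself, assuming networkx graphs; it relies on the standard observation that checking the endpoints of each edge of G suffices, since concatenating tree paths along a shortest G-path bounds all other distances:

```python
import networkx as nx

def is_tree_t_spanner(G, T, t):
    """Check whether spanning tree T of G is a tree t-spanner.

    Checking the endpoints of each edge of G suffices: if
    d_T(u, v) <= t for every edge (u, v) of G, then concatenating the
    tree paths along a shortest G-path between any x, y gives
    d_T(x, y) <= t * d_G(x, y).
    """
    assert nx.is_tree(T) and set(T.nodes) == set(G.nodes)
    return all(nx.shortest_path_length(T, u, v) <= t for u, v in G.edges())

# Example: the path 0-1-2-3-4 is a spanning tree of the 5-cycle; the
# removed edge (0, 4) has tree distance 4, so this is a tree 4-spanner
# but not a tree 3-spanner.
G = nx.cycle_graph(5)
T = nx.path_graph(5)
print(is_tree_t_spanner(G, T, 4), is_tree_t_spanner(G, T, 3))  # True False
```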

Relevance: 100.00%

Abstract:

An algorithm to generate a minimal spanning tree is presented for the case where the nodes are given with their coordinates in some m-dimensional Euclidean space, together with the corresponding metric. This algorithm is tested on manually generated data sets. Its worst-case time complexity is O(n log₂ n) for a collection of n data samples.
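For contrast with the O(n log₂ n) algorithm above, here is a minimal O(n²) Prim's-algorithm baseline for the same setting (points with coordinates in m-dimensional Euclidean space); all names are illustrative:

```python
import numpy as np

def euclidean_mst_prim(points):
    """Plain O(n^2) Prim's algorithm on points in R^m under the
    Euclidean metric (baseline only; the paper's algorithm runs in
    O(n log2 n)). Returns the tree as a list of index pairs."""
    pts = np.asarray(points, dtype=float)
    n = len(pts)
    in_tree = np.zeros(n, dtype=bool)
    best = np.full(n, np.inf)          # cheapest cost to connect each node
    parent = np.full(n, -1)
    best[0] = 0.0
    edges = []
    for _ in range(n):
        u = int(np.argmin(np.where(in_tree, np.inf, best)))
        in_tree[u] = True
        if parent[u] >= 0:
            edges.append((int(parent[u]), u))
        dist = np.linalg.norm(pts - pts[u], axis=1)
        closer = (~in_tree) & (dist < best)
        best[closer] = dist[closer]
        parent[closer] = u
    return edges

print(euclidean_mst_prim([[0, 0], [0, 1], [2, 0], [2, 1]]))
```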

Relevance: 100.00%

Abstract:

Chow and Liu introduced an algorithm for fitting a multivariate distribution with a tree, i.e. a density model that assumes only pairwise dependencies between variables, with the graph of these dependencies forming a spanning tree. The original algorithm is quadratic in the dimension of the domain and linear in the number of data points that define the target distribution P. This paper shows that for sparse, discrete data, fitting a tree distribution can be done in time and memory that is jointly subquadratic in the number of variables and the size of the data set. The new algorithm, called the acCL algorithm, takes advantage of the sparsity of the data to accelerate the computation of pairwise marginals and the sorting of the resulting mutual informations, achieving speed-ups of up to 2-3 orders of magnitude in the experiments.
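A compact sketch of the quadratic Chow–Liu baseline that acCL accelerates, written here for binary data; the maximum-weight spanning tree is obtained by negating the mutual-information matrix (scipy treats exact zeros as missing edges, which is acceptable for a sketch):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree

def chow_liu_edges(X):
    """Quadratic Chow-Liu baseline for 0/1 data X (n_samples x n_vars):
    estimate every pairwise mutual information, then take a maximum-
    weight spanning tree over the MI-weighted complete graph."""
    n, d = X.shape
    mi = np.zeros((d, d))
    for i in range(d):
        for j in range(i + 1, d):
            joint = np.histogram2d(X[:, i], X[:, j], bins=2)[0] / n
            px = joint.sum(axis=1, keepdims=True)
            py = joint.sum(axis=0, keepdims=True)
            nz = joint > 0
            mi[i, j] = np.sum(joint[nz] * np.log(joint[nz] / (px @ py)[nz]))
    tree = minimum_spanning_tree(-mi)   # max-weight tree via negated weights
    return [(int(i), int(j)) for i, j in zip(*tree.nonzero())]
```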

Relevance: 100.00%

Abstract:

Manipulator motion planning is a task which relies heavily on the construction of a configuration space prior to path planning. However, when fast real-time motion is needed, the full construction of the manipulator's high-dimensional configuration space can be too slow and expensive. Alternative planning methods, which avoid this full construction of the configuration space, are needed to solve this problem. Here, one such existing local planning method for manipulators, based on configuration sampling and subgoal selection, has been extended. Using a modified Artificial Potential Fields (APF) function, goal-configuration sampling and a novel subgoal selection method, it provides faster planning and better paths than the previously proposed work. Simulation results show a decrease in both runtime and path length, along with a decrease in unexpected local-minimum and crashing issues.
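For reference, a minimal sketch of one standard (unmodified) APF gradient step in configuration space, the ingredient such planners build on; the gains, influence distance and point obstacles here are illustrative assumptions, not the paper's modified function:

```python
import numpy as np

def apf_step(q, q_goal, obstacles, step=0.05, k_att=1.0, k_rep=0.5, d0=0.4):
    """One gradient-descent step on a standard APF: quadratic attraction
    toward q_goal plus short-range repulsion from point obstacles that
    is active only within the influence distance d0. Local minima are
    still possible, which is what subgoal-selection schemes try to
    escape."""
    q, q_goal = np.asarray(q, float), np.asarray(q_goal, float)
    grad = k_att * (q - q_goal)                  # attractive term
    for obs in obstacles:
        diff = q - np.asarray(obs, float)
        d = np.linalg.norm(diff)
        if 0.0 < d < d0:                         # repulsion inside d0 only
            grad += k_rep * (1.0/d0 - 1.0/d) / d**3 * diff
    return q - step * grad                       # descend the potential

# One step from q = (0, 0) toward (1, 0) past an obstacle at (0.3, 0.1):
print(apf_step([0.0, 0.0], [1.0, 0.0], [[0.3, 0.1]]))
```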

Relevance: 100.00%

Abstract:

Consider an undirected graph G and a subgraph H of G. A q-backbone k-colouring of (G,H) is a mapping f: V(G) → {1, 2, ..., k} such that G is properly coloured and, for each edge of H, the colours of its endpoints differ by at least q. The minimum number k for which there is a q-backbone k-colouring of (G,H) is the backbone chromatic number, BBCq(G,H). It has been proved that the 2-backbone chromatic number of (G,T) is at most 4 if G is a connected C4-free planar graph, a non-bipartite C5-free planar graph, or a Cj-free planar graph without adjacent triangles for j ∈ {6,7,8}. In this thesis we improve the results mentioned above and prove, using a discharging method, that the 2-backbone chromatic number of any connected planar graph without adjacent triangles is at most 4. In the second part of this thesis we further improve these results by proving that for any graph G with χ(G) ≥ 4, BBC(G,T) = χ(G). In fact, we prove the stronger result that there exists a backbone tree T in G such that for every uv ∈ T, |f(u)-f(v)| = 2 or |f(u)-f(v)| ≥ k-2, where k = χ(G). For the case where G is a planar graph, the Four Colour Theorem gives χ(G) ≤ 4; so, for planar graphs with χ(G) = 4, BBC(G,T) = 4.
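A small sketch of the definition as a checker, assuming networkx graphs and a colouring given as a dict; names are illustrative:

```python
import networkx as nx

def is_q_backbone_colouring(G, H, f, q):
    """f maps each vertex of G to a colour in {1, ..., k}. Checks that
    f is a proper colouring of G and that colours differ by at least q
    across every backbone edge of H (H a subgraph of G)."""
    proper = all(f[u] != f[v] for u, v in G.edges())
    backbone = all(abs(f[u] - f[v]) >= q for u, v in H.edges())
    return proper and backbone

# Example: C4 with a spanning path as backbone and q = 2.
G = nx.cycle_graph(4)                  # edges 0-1, 1-2, 2-3, 3-0
H = nx.path_graph(4)                   # backbone path 0-1-2-3
f = {0: 1, 1: 3, 2: 1, 3: 3}
print(is_q_backbone_colouring(G, H, f, 2))   # True
```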

Relevance: 100.00%

Abstract:

Knowing the best 1D model of the crustal and upper-mantle structure is useful not only for routine hypocenter determination, but also for linearized joint inversions of hypocenters and 3D crustal structure, where a good choice of the initial model can be very important. Here, we tested the combination of a simple GA inversion with the widely used HYPO71 program to find the best three-layer model (upper crust, lower crust, and upper mantle) by minimizing the overall P- and S-arrival residuals, using local and regional earthquakes in two areas of the Brazilian shield. Results from the Tocantins Province (Central Brazil) and the southern border of the Sao Francisco craton (SE Brazil) indicated average crustal thicknesses of 38 and 43 km, respectively, consistent with previous estimates from receiver functions and seismic refraction lines. The GA + HYPO71 inversion produced correct Vp/Vs ratios (1.73 and 1.71, respectively), as expected from Wadati diagrams. Tests with synthetic data showed that the method is robust for the crustal thickness, Pn velocity, and Vp/Vs ratio when using events at distances up to about 400 km, despite the small number of events available (7 and 22, respectively). The velocities of the upper and lower crust, however, are less well constrained. Interestingly, in the Tocantins Province, the GA + HYPO71 inversion showed a secondary solution (local minimum) for the average crustal thickness, besides the global-minimum solution, caused by the existence of two distinct domains in Central Brazil with very different crustal thicknesses.
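A schematic of the inversion loop described above; the HYPO71 call is a placeholder (the program is external), the parameter bounds are illustrative only, and scipy's differential evolution stands in here for the paper's genetic algorithm as a population-based global search:

```python
from scipy.optimize import differential_evolution

# Three-layer model: Vp upper crust, Vp lower crust, Conrad depth,
# Moho depth, Pn velocity, Vp/Vs -- bounds are illustrative, not the
# paper's search ranges.
BOUNDS = [(5.5, 6.5), (6.3, 7.2), (10.0, 30.0), (30.0, 50.0),
          (7.6, 8.4), (1.60, 1.85)]

def rms_residual(model):
    """Placeholder: write the model to a HYPO71 input file, relocate all
    events, and return the overall RMS of the P- and S-arrival residuals."""
    raise NotImplementedError("wrap the external HYPO71 run here")

# Population-based global search over the bounded model space:
# result = differential_evolution(rms_residual, BOUNDS, maxiter=50)
# best_model = result.x
```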

Relevance: 100.00%

Abstract:

The Common Reflection Surface (CRS) stacking method simulates zero-offset (ZO) sections from multi-coverage data. For 2D media, the CRS stacking operator depends on three parameters: the emergence angle of the central ray with zero source-receiver offset (β0), the radius of curvature of the normal-incidence-point wave (RNIP), and the radius of curvature of the normal wave (RN). The crucial problem in implementing the CRS stacking method is the determination, from the seismic data, of the three optimal parameters associated with each sampling point of the ZO section to be simulated. In the present work, a new processing sequence was developed for the simulation of ZO sections by means of the CRS stacking method. In this new algorithm, the three optimal parameters defining the CRS stacking operator are determined in three steps: in the first step, two parameters (β°0 and R°NIP) are estimated by a two-dimensional global search in the multi-coverage data. In the second step, the estimated value of β°0 is used to determine the third parameter (R°N) by a one-dimensional global search in the ZO section resulting from the first step. In both steps, the global searches are performed using the Simulated Annealing (SA) optimization method. In the third step, the three final parameters (β0, RNIP and RN) are determined by a three-dimensional local search in the multi-coverage data using the Variable Metric (VM) optimization method, with the parameter triplet (β°0, R°NIP, R°N) estimated in the two previous steps as the initial approximation. In order to correctly simulate events with conflicting dips, this new algorithm provides for the determination of two parameter triplets at sampling points of the ZO section where events intersect. In other words, at points of the ZO section where two seismic events cross, two CRS parameter triplets are determined and used jointly in the simulation of the conflicting-dip events. To evaluate the accuracy and efficiency of the new algorithm, it was applied to synthetic data from two models: one with continuous interfaces and another with a discontinuous interface. The simulated ZO sections have a high signal-to-noise ratio and show clearly defined reflected and diffracted events. Comparison of the simulated ZO sections with their counterparts obtained by forward modelling shows correct simulation of reflections and diffractions. Moreover, comparison of the three optimized parameter values with their corresponding exact values computed by forward modelling also reveals a high degree of accuracy. Using the hyperbolic traveltime approximation, but under the condition RNIP = RN, a new algorithm was developed for the simulation of ZO sections containing predominantly diffracted wavefields. Similarly to the CRS stacking algorithm, this algorithm, called Common Diffraction Surface (CDS) stacking, also uses the SA and VM optimization methods to determine the optimal parameter pair (β0, RNIP) that defines the best CDS stacking operator. In the first step, the SA optimization method is used to determine the initial parameters β°0 and R°NIP using a large-aperture stacking operator.
In the second step, using the estimated values of β°0 and R°NIP, the estimate of the RNIP parameter is improved by applying the VM algorithm to the ZO section resulting from the first step. In the third step, the best values of β°0 and R°NIP are determined by applying the VM algorithm to the multi-coverage data. It is worth noting that this apparent repetition of steps has the effect of progressively attenuating the reflected events. Applying the CDS stacking algorithm to synthetic data containing reflected and diffracted wavefields produces, as its main result, a simulated ZO section with clearly defined diffraction events. As a direct application of this result to seismic data interpretation, post-stack depth migration of the simulated ZO section yields a section with the correct positions of the diffraction points associated with the discontinuities of the model.
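A schematic of the three-step CRS parameter search described above, with scipy's dual_annealing standing in for the classic simulated annealing and BFGS (itself a variable-metric method) for the VM step; the coherence (semblance) measure and all bounds are placeholders, and step 2 is simplified relative to the paper (which searches the simulated ZO section):

```python
import numpy as np
from scipy.optimize import dual_annealing, minimize

def coherence(beta0, r_nip, r_n, data):
    """Placeholder: semblance of the CRS traveltime operator defined by
    (beta0, R_NIP, R_N), measured over the multi-coverage data."""
    raise NotImplementedError

def estimate_crs_parameters(data, r_n_guess=1e4):
    # Step 1: 2D global search for (beta0, R_NIP).
    neg = lambda p: -coherence(p[0], p[1], r_n_guess, data)
    s1 = dual_annealing(neg, bounds=[(-np.pi/2, np.pi/2), (1e2, 1e5)])
    beta0, r_nip = s1.x
    # Step 2: 1D global search for R_N with beta0 held fixed.
    s2 = dual_annealing(lambda p: -coherence(beta0, r_nip, p[0], data),
                        bounds=[(1e2, 1e6)])
    r_n = s2.x[0]
    # Step 3: joint 3D local refinement from the triplet found above.
    s3 = minimize(lambda p: -coherence(*p, data),
                  x0=[beta0, r_nip, r_n], method="BFGS")
    return s3.x
```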

Relevance: 100.00%

Abstract:

Non-equilibrium statistical mechanics is a broad subject. Roughly speaking, it deals with systems which have not yet relaxed to an equilibrium state, with systems which are in a steady non-equilibrium state, or with more general situations. These are characterized by external forcing and internal fluxes, resulting in a net production of entropy which quantifies dissipation and the extent to which, by the Second Law of Thermodynamics, time-reversal invariance is broken. In this thesis we discuss some of the mathematical structures involved with generic discrete-state-space non-equilibrium systems, which we depict with networks entirely analogous to electrical networks. We define suitable observables and derive their linear-regime relationships, we discuss a duality between external and internal observables that reverses the roles of the system and of the environment, and we show that network observables serve as constraints for a derivation of the minimum entropy production principle. We dwell on deep combinatorial aspects regarding linear-response determinants, which are related to spanning tree polynomials in graph theory, and we give a geometrical interpretation of observables in terms of Wilson loops of a connection and gauge degrees of freedom. We specialize the formalism to continuous-time Markov chains, we give a physical interpretation of observables in terms of locally detailed balanced rates, we prove many variants of the fluctuation theorem, and we show that a well-known expression for the entropy production due to Schnakenberg descends from considerations of gauge invariance, where the gauge symmetry is related to the freedom in the choice of a prior probability distribution. As an additional topic of geometrical flavor related to continuous-time Markov chains, we discuss the Fisher-Rao geometry of non-equilibrium decay modes, showing that the Fisher matrix contains information about many aspects of non-equilibrium behavior, including non-equilibrium phase transitions and superposition of modes. We establish a sort of statistical equivalence principle and discuss the behavior of the Fisher matrix under time reversal. To conclude, we propose that geometry and combinatorics might greatly increase our understanding of non-equilibrium phenomena.
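For reference, the Schnakenberg entropy production mentioned above, in its standard textbook form for a continuous-time Markov chain with transition rates w_ij (from state j to i) and occupation probabilities p_i (notation assumed here, not the thesis's own):

```latex
\sigma \;=\; \frac{1}{2}\sum_{i,j}\left(w_{ij}\,p_j - w_{ji}\,p_i\right)
\ln\frac{w_{ij}\,p_j}{w_{ji}\,p_i} \;\ge\; 0
```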

Relevance: 100.00%

Abstract:

The lack of effective tools has hampered our ability to assess the size, growth and ages of clonal plants. With Serenoa repens (saw palmetto) as a model, we introduce a novel analytical framework that integrates DNA fingerprinting and mathematical modelling to simulate growth and estimate ages of clonal plants. We also demonstrate the application of such life-history information on clonal plants to provide insight into management plans. Serenoa is an ecologically important foundation species in many Southeastern United States ecosystems; yet, many land managers consider Serenoa a troublesome invasive plant. Accordingly, management plans have been developed to reduce or eliminate Serenoa with little understanding of its life history. Using Amplified Fragment Length Polymorphisms, we genotyped 263 Serenoa and 134 Sabal etonia (a sympatric non-clonal palmetto) samples collected from a 20 × 20 m study plot in Florida scrub. Sabal samples were used to assign small field-unidentifiable palmettos to Serenoa or Sabal, and also served as a negative control for clone detection. We then mathematically modelled clonal networks to estimate genet ages. Our results suggest that Serenoa predominantly propagates via vegetative sprouts and that 10,000-year-old genets may be common, while showing no evidence of clone formation by Sabal. The results of this and our previous studies suggest that: (i) Serenoa has been part of scrub associations for thousands of years, (ii) Serenoa invasions are unlikely and (iii) once Serenoa is eliminated from local communities, its restoration will be difficult. Reevaluation of the current management tools and plans is an urgent task.


Relevance: 100.00%

Abstract:

Sampling a network with a given probability distribution has been identified as a useful operation. In this paper we propose distributed algorithms for sampling networks, so that nodes are selected by a special node, called the source, with a given probability distribution. All these algorithms are based on a new class of random walks, which we call Random Centrifugal Walks (RCW). An RCW is a random walk that starts at the source and always moves away from it. Firstly, an algorithm to sample any connected network using RCWs is proposed. The algorithm assumes that each node has a weight, so that the sampling process must select a node with probability proportional to its weight. This algorithm requires a preprocessing phase before the sampling of nodes: a minimum diameter spanning tree (MDST) is created in the network, and the node weights are then efficiently aggregated using the tree. The good news is that the preprocessing is done only once, regardless of the number of sources and the number of samples taken from the network. After that, every sample is drawn with an RCW whose length is bounded by the network diameter. Secondly, RCW algorithms that do not require preprocessing are proposed for grids and for networks with regular concentric connectivity, for the case when the probability of selecting a node is a function of its distance to the source. The key features of the RCW algorithms (unlike previous Markovian approaches) are that (1) they do not need to warm up (stabilize), (2) the sampling always finishes in a number of hops bounded by the network diameter, and (3) nodes are selected with the exact probability distribution.
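A minimal sketch of the tree-based variant, assuming the MDST has already been rooted at the source and is given as a child-list dict; the concrete stopping rule below (stop at v with probability weight[v]/subtree_weight[v]) is one choice consistent with the abstract, and it makes each node v be returned with probability exactly proportional to weight[v]:

```python
import random

def subtree_weights(tree, root, weight):
    """One-time preprocessing: aggregate node weights bottom-up over the
    MDST rooted at the source, given as {node: [children]}."""
    agg = {}
    def rec(v):
        agg[v] = weight[v] + sum(rec(c) for c in tree.get(v, []))
        return agg[v]
    rec(root)
    return agg

def rcw_sample(tree, source, weight, agg):
    """One Random Centrifugal Walk. Stopping at v with probability
    weight[v]/agg[v], and otherwise descending to a child c with
    probability proportional to agg[c], returns each node v with
    probability weight[v]/agg[source]; the walk length is bounded by
    the tree height (<= network diameter)."""
    v = source
    while True:
        if random.random() < weight[v] / agg[v]:
            return v                     # leaves always stop: weight == agg
        children = tree.get(v, [])
        v = random.choices(children, weights=[agg[c] for c in children])[0]

# Tiny example: a star rooted at the source with weighted leaves.
tree = {"s": ["a", "b"]}
weight = {"s": 1, "a": 2, "b": 7}
agg = subtree_weights(tree, "s", weight)
print(rcw_sample(tree, "s", weight, agg))   # 's'/'a'/'b' w.p. 0.1/0.2/0.7
```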

Relevance: 100.00%

Abstract:

An extended formulation of a polyhedron P is a linear description of a polyhedron Q together with a linear map π such that π(Q) = P. These objects are of fundamental importance in polyhedral combinatorics and optimization theory, and the subject of a number of studies. Yannakakis' factorization theorem (Yannakakis in J Comput Syst Sci 43(3):441-466, 1991) provides a surprising connection between extended formulations and communication complexity, showing that the smallest size of an extended formulation of P equals the nonnegative rank of its slack matrix S. Moreover, Yannakakis also shows that the nonnegative rank of S is at most 2^c, where c is the complexity of any deterministic protocol computing S. In this paper, we show that the latter result can be strengthened when we allow protocols to be randomized. In particular, we prove that the base-2 logarithm of the nonnegative rank of any nonnegative matrix equals the minimum complexity of a randomized communication protocol computing the matrix in expectation. Using Yannakakis' factorization theorem, this implies that the base-2 logarithm of the smallest size of an extended formulation of a polytope P equals the minimum complexity of a randomized communication protocol computing the slack matrix of P in expectation. We show that allowing randomization in the protocol can be crucial for obtaining small extended formulations. Specifically, we prove that for the spanning tree and perfect matching polytopes, small variance in the protocol forces large size in the extended formulation.
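In symbols (standard notation paraphrasing the two results above, not the paper's own display; rank_+ is the nonnegative rank, xc the extension complexity, and the minimum ranges over randomized protocols Π computing the matrix in expectation):

```latex
\log_2 \operatorname{rank}_+(S)
  \;=\; \min\bigl\{\, c(\Pi) \;:\; \mathbb{E}\bigl[\Pi(i,j)\bigr] = S_{ij}
  \ \text{for all } i,j \,\bigr\},
\qquad
\operatorname{xc}(P) \;=\; \operatorname{rank}_+(S_P).
```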