933 results for GRAPH SEARCH ALGORITHMS


Relevance: 20.00%

Abstract:

XVIII IUFRO World Congress, Ljubljana 1986.

Relevance: 20.00%

Abstract:

In this paper I will offer a novel understanding of a priori knowledge. My claim is that the sharp distinction that is usually made between a priori and a posteriori knowledge is groundless. It will be argued that a plausible understanding of a priori and a posteriori knowledge has to acknowledge that they are in a constant bootstrapping relationship. It is also crucial that we distinguish between a priori propositions that hold in the actual world and merely possible, non-actual a priori propositions, as we will see when considering cases like Euclidean geometry. Furthermore, contrary to what Kripke seems to suggest, a priori knowledge is intimately connected with metaphysical modality, indeed, grounded in it. The task of a priori reasoning, according to this account, is to delimit the space of metaphysically possible worlds in order for us to be able to determine what is actual.

Relevance: 20.00%

Abstract:

An asymmetric binary search switching technique for a successive approximation register (SAR) ADC is presented, and the trade-off between switching energy and conversion cycles is discussed. Without using any additional switches, the proposed technique consumes 46% less switching energy for a small input swing (0.5 V_ref peak-to-peak), as compared to the most efficient switching technique previously reported in the literature for an 8-bit SAR ADC. For a full input swing (2 V_ref peak-to-peak), the proposed technique consumes 16.5% less switching energy.
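For reference, the following is a minimal sketch of the conventional SAR binary search that such switching schemes refine, assuming an ideal DAC and comparator; the function name and the single-ended model are illustrative, not the paper's circuit.

# A sketch of an ideal N-bit SAR conversion loop: one comparator
# decision per bit, from MSB to LSB. Illustrative model only.
def sar_adc_convert(v_in, v_ref, n_bits=8):
    code = 0
    for bit in range(n_bits - 1, -1, -1):
        trial = code | (1 << bit)              # tentatively set the current bit
        v_dac = v_ref * trial / (1 << n_bits)  # ideal DAC level for the trial code
        if v_in >= v_dac:                      # comparator: keep the bit if input is above
            code = trial
    return code

print(sar_adc_convert(v_in=0.8, v_ref=1.0))    # 204, i.e. floor(0.8 * 256)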

Relevance: 20.00%

Abstract:

This thesis studies optimisation problems related to modern large-scale distributed systems, such as wireless sensor networks and wireless ad-hoc networks. The concrete tasks that we use as motivating examples are the following: (i) maximising the lifetime of a battery-powered wireless sensor network, (ii) maximising the capacity of a wireless communication network, and (iii) minimising the number of sensors in a surveillance application.

A sensor node consumes energy both when it is transmitting or forwarding data, and when it is performing measurements. Hence task (i), lifetime maximisation, can be approached from two different perspectives. First, we can seek optimal data flows that make the most of the energy resources available in the network; such optimisation problems are examples of so-called max-min linear programs. Second, we can conserve energy by putting redundant sensors into sleep mode; we arrive at the sleep scheduling problem, in which the objective is to find an optimal schedule that determines when each sensor node is asleep and when it is awake.

In a wireless network, simultaneous radio transmissions may interfere with each other. Task (ii), capacity maximisation, therefore gives rise to another scheduling problem, the activity scheduling problem, in which the objective is to find a minimum-length conflict-free schedule that satisfies the data transmission requirements of all wireless communication links.

Task (iii), minimising the number of sensors, is related to the classical graph problem of finding a minimum dominating set. However, if we are not only interested in detecting an intruder but also in locating the intruder, it is not sufficient to solve the dominating set problem; formulations such as minimum-size identifying codes and locating–dominating codes are more appropriate.

This thesis presents approximation algorithms for each of these optimisation problems, i.e., for max-min linear programs, sleep scheduling, activity scheduling, identifying codes, and locating–dominating codes. Two complementary approaches are taken. The main focus is on local algorithms, which are constant-time distributed algorithms. The contributions include local approximation algorithms for max-min linear programs, sleep scheduling, and activity scheduling. In the case of max-min linear programs, tight upper and lower bounds are proved for the best possible approximation ratio that can be achieved by any local algorithm. The second approach is the study of centralised polynomial-time algorithms in local graphs: geometric graphs whose structure exhibits spatial locality. Among other contributions, it is shown that while identifying codes and locating–dominating codes are hard to approximate in general graphs, they admit a polynomial-time approximation scheme in local graphs.
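As a concrete anchor for task (iii), here is a minimal sketch of the classic greedy approximation for minimum dominating set. It is a centralised baseline, not one of the thesis's local algorithms or approximation schemes, and the adjacency-dict input format is an assumption.

# Greedy minimum dominating set: repeatedly pick the node whose closed
# neighbourhood covers the most not-yet-dominated nodes.
def greedy_dominating_set(adj):
    """adj: dict mapping each node to the set of its neighbours."""
    uncovered = set(adj)                      # nodes not yet dominated
    chosen = set()
    while uncovered:
        best = max(adj, key=lambda v: len(({v} | adj[v]) & uncovered))
        chosen.add(best)
        uncovered -= {best} | adj[best]
    return chosen

# Example: on the path a-b-c-d-e, two nodes suffice.
adj = {'a': {'b'}, 'b': {'a', 'c'}, 'c': {'b', 'd'}, 'd': {'c', 'e'}, 'e': {'d'}}
print(greedy_dominating_set(adj))             # {'b', 'd'}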

Relevance: 20.00%

Abstract:

Determining the sequence of amino acid residues in a heteropolymer chain of a protein with a given conformation is a discrete combinatorial problem that is not generally amenable to gradient-based continuous optimization algorithms. In this paper we present a new approach to this problem using continuous models. In this modeling, continuous "state functions" are proposed to designate the type of each residue in the chain. Such a continuous model helps define a continuous sequence space in which a chosen criterion is optimized to find the most appropriate sequence. Searching a continuous sequence space using a deterministic optimization algorithm makes it possible to find optimal sequences with much less computation than many other approaches. The computational efficiency of this method is further improved by combining it with a graph spectral method, which explicitly takes into account the topology of the desired conformation and also helps make the combined method more robust. The continuous modeling used here appears to have additional advantages in mimicking the folding pathways and in creating the energy landscapes that help find sequences with high stability and kinetic accessibility. To illustrate the new approach, a widely used simplifying assumption is made by considering only two types of residues: hydrophobic (H) and polar (P). Self-avoiding compact lattice models are used to validate the method against known results in the literature and data that can be practically obtained by exhaustive enumeration on a desktop computer. We also present examples of sequence design for the HP models of some real proteins, which are solved in less than five minutes on a single-processor desktop computer. Some open issues and future extensions are noted.
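One common reading of the graph-spectral ingredient is sketched below: rank the sites of the target conformation's contact graph by the principal eigenvector of its adjacency matrix, and mark the most connected (most buried) sites as hydrophobic. The contact list, the H fraction, and this exact ranking rule are illustrative assumptions, not the paper's procedure.

# Spectral H/P assignment: buried sites (large principal-eigenvector
# entries of the contact graph) become H, the rest P. Illustrative only.
import numpy as np

def spectral_hp_assignment(contacts, n_sites, n_hydrophobic):
    """contacts: list of (i, j) residue pairs adjacent in the conformation."""
    A = np.zeros((n_sites, n_sites))
    for i, j in contacts:
        A[i, j] = A[j, i] = 1.0
    _, eigvecs = np.linalg.eigh(A)
    principal = np.abs(eigvecs[:, -1])        # eigenvector of the largest eigenvalue
    buried = np.argsort(principal)[::-1][:n_hydrophobic]
    seq = ['P'] * n_sites
    for i in buried:
        seq[i] = 'H'
    return ''.join(seq)

# Toy 2x3 compact lattice conformation; the two middle sites are most connected.
contacts = [(0, 1), (1, 2), (3, 4), (4, 5), (0, 3), (1, 4), (2, 5)]
print(spectral_hp_assignment(contacts, n_sites=6, n_hydrophobic=2))  # PHPPHP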

Relevance: 20.00%

Abstract:

We report on a search for the production of the Higgs boson decaying to two bottom quarks accompanied by two additional quarks. The data sample used corresponds to an integrated luminosity of approximately 4 fb⁻¹ of pp̄ collisions at √s = 1.96 TeV recorded by the CDF II experiment. This search includes twice the integrated luminosity of the previously published result, uses analysis techniques to distinguish jets originating from light-flavor quarks from those originating from gluon radiation, and adds sensitivity to a Higgs boson produced by vector boson fusion. We find no evidence of the Higgs boson and place limits on the Higgs boson production cross section for Higgs boson masses between 100 GeV/c² and 150 GeV/c² at the 95% confidence level. For a Higgs boson mass of 120 GeV/c², the observed (expected) limit is 10.5 (20.0) times the predicted standard model cross section.

Relevance: 20.00%

Abstract:

This work studies decision problems from the perspective of nondeterministic distributed algorithms. For a yes-instance there must exist a proof that can be verified with a distributed algorithm: all nodes must accept a valid proof, and at least one node must reject an invalid proof. We focus on locally checkable proofs that can be verified with a constant-time distributed algorithm. For example, it is easy to prove that a graph is bipartite: the locally checkable proof gives a 2-colouring of the graph, which only takes 1 bit per node. However, it is more difficult to prove that a graph is not bipartite; it turns out that any locally checkable proof requires Ω(log n) bits per node. In this work we classify graph problems according to their local proof complexity, i.e., how many bits per node are needed in a locally checkable proof. We establish tight or near-tight results for classical graph properties such as the chromatic number. We show that the proof complexities form a natural hierarchy of complexity classes: for many classical graph problems, the proof complexity is either 0, Θ(1), Θ(log n), or poly(n) bits per node. Among the most difficult graph properties are symmetric graphs, which require Ω(n²) bits per node, and non-3-colourable graphs, which require Ω(n²/log n) bits per node; for comparison, any pure graph property admits a trivial proof of size O(n²).
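The bipartiteness example can be made concrete with a short sketch: the proof is a 2-colouring (one bit per node), and every node runs the same constant-time local check, so all nodes accept exactly when the proof is valid. The graph encoding is an assumption.

# Locally checkable proof of bipartiteness: each node verifies that its
# own colour differs from every neighbour's colour.
def local_check(node, adj, colour):
    """Constant-round verifier run independently at each node."""
    return all(colour[node] != colour[u] for u in adj[node])

def all_nodes_accept(adj, colour):
    return all(local_check(v, adj, colour) for v in adj)

# 4-cycle: bipartite, so a valid one-bit-per-node proof exists.
adj = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
print(all_nodes_accept(adj, {0: 0, 1: 1, 2: 0, 3: 1}))  # True: every node accepts
print(all_nodes_accept(adj, {0: 0, 1: 0, 2: 1, 3: 1}))  # False: nodes 0 and 1 reject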

Relevance: 20.00%

Abstract:

We study the following problem: given a geometric graph G and an integer k, determine whether G has a planar spanning subgraph (with the original embedding and straight-line edges) such that all nodes have degree at least k. If G is a unit disk graph, the problem is trivial to solve for k = 1. We show that even the slightest deviation from the trivial case (e.g., quasi unit disk graphs or k = 2) leads to NP-hard problems.
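To illustrate the trivial case, here is a sketch of one standard argument for k = 1 in unit disk graphs: keep, for each node, an edge to its nearest neighbour. Nearest-neighbour edges form a subgraph of the Delaunay triangulation, so they never cross, and every non-isolated node keeps degree at least 1. The coordinates and the general-position assumption are illustrative.

# Keep one edge per node, to its nearest neighbour within the unit radius.
from math import dist

def nearest_neighbour_subgraph(points, radius=1.0):
    """points: list of (x, y). Returns the kept edges as index pairs."""
    edges = set()
    for i, p in enumerate(points):
        in_range = [(dist(p, q), j) for j, q in enumerate(points)
                    if j != i and dist(p, q) <= radius]
        if in_range:                          # an isolated node makes the answer "no"
            _, j = min(in_range)
            edges.add((min(i, j), max(i, j)))
    return edges

points = [(0.0, 0.0), (0.8, 0.1), (1.5, 0.0), (0.7, 0.9)]
print(nearest_neighbour_subgraph(points))     # {(0, 1), (1, 2), (1, 3)}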

Relevance: 20.00%

Abstract:

We present a low-complexity algorithm based on reactive tabu search (RTS) for near maximum-likelihood (ML) detection in large-MIMO systems. The conventional RTS algorithm achieves near-ML performance for 4-QAM in large-MIMO systems, but its performance for higher-order QAM is far from ML performance. Here, we propose a random-restart RTS (R3TS) algorithm which achieves significantly better bit error rate (BER) performance than the conventional RTS algorithm for higher-order QAM. The key idea is to run multiple tabu searches, each starting from a random initial vector, and to choose the best among the resulting solution vectors. A criterion to limit the number of searches is also proposed. Computer simulations show that the R3TS algorithm achieves almost ML performance in a 16 × 16 V-BLAST MIMO system with 16-QAM and 64-QAM at significantly lower complexity than the sphere decoder. Also, in a 32 × 32 V-BLAST MIMO system, R3TS performs within 1.6 dB of the ML lower bound for 16-QAM (128 bps/Hz) and within 2.4 dB for 64-QAM (192 bps/Hz) at a BER of 10⁻³.
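A minimal sketch of the random-restart idea follows: run several tabu searches, each from a random initial symbol vector, and keep the best vector under the ML metric ||y - Hx||^2. The toy real-valued channel, the 4-PAM-like alphabet, and the plain tabu list are simplifying assumptions, not the paper's exact neighbourhood, reactive rules, or stopping criteria.

# Random-restart tabu search for ML detection on a toy real-valued model.
import numpy as np

rng = np.random.default_rng(0)
ALPHABET = np.array([-3, -1, 1, 3])              # per-dimension symbol set

def ml_cost(H, y, x):
    return np.sum((y - H @ x) ** 2)

def tabu_search(H, y, n_iters=200, tabu_len=8):
    n = H.shape[1]
    x = rng.choice(ALPHABET, size=n)             # random restart point
    best, best_cost = x.copy(), ml_cost(H, y, x)
    tabu = []
    for _ in range(n_iters):
        # neighbourhood: change one coordinate to one other alphabet symbol
        moves = [(i, s) for i in range(n) for s in ALPHABET
                 if s != x[i] and (i, s) not in tabu]
        i, s = min(moves, key=lambda m: ml_cost(
            H, y, np.where(np.arange(n) == m[0], m[1], x)))
        tabu.append((i, x[i]))                   # forbid undoing this move
        tabu[:] = tabu[-tabu_len:]
        x[i] = s
        c = ml_cost(H, y, x)
        if c < best_cost:
            best, best_cost = x.copy(), c
    return best, best_cost

def r3ts(H, y, restarts=5):
    """Run several restarts; keep the vector with the smallest ML cost."""
    return min((tabu_search(H, y) for _ in range(restarts)), key=lambda r: r[1])

H = rng.normal(size=(8, 8))
x_true = rng.choice(ALPHABET, size=8)
y = H @ x_true + 0.1 * rng.normal(size=8)
x_hat, cost = r3ts(H, y)
print(np.array_equal(x_hat, x_true), cost)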

Relevance: 20.00%

Abstract:

We propose to compress weighted graphs (networks), motivated by the observation that large networks of social, biological, or other relations can be complex to handle and visualize. In the process, also known as graph simplification, nodes and (unweighted) edges are grouped into supernodes and superedges, respectively, to obtain a smaller graph. We propose models and algorithms for weighted graphs. The interpretation (i.e., decompression) of a compressed, weighted graph is that a pair of original nodes is connected by an edge if their supernodes are connected by one, and that the weight of an edge is approximated by the weight of the superedge. The compression problem then consists of choosing supernodes, superedges, and superedge weights so that the approximation error is minimized while the amount of compression is maximized. In this paper, we formulate this task as the 'simple weighted graph compression problem'. We then propose a much wider class of tasks under the name of the 'generalized weighted graph compression problem'. The generalized task extends the optimization to preserve longer-range connectivities between nodes, not just individual edge weights. We study the properties of these problems and propose a range of algorithms to solve them, with different balances between complexity and quality of the result. We evaluate the problems and algorithms experimentally on real networks. The results indicate that weighted graphs can be compressed efficiently with relatively little compression error.
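The problem definition can be instantiated with a short sketch: given a grouping of nodes into supernodes, set each superedge weight to the mean of the original weights it covers, and measure the resulting approximation error. The mean-weight rule and the squared-error measure are illustrative assumptions, not necessarily the paper's.

# Compress a weighted graph under a fixed node grouping and measure the
# error of the decompressed (approximated) edge weights.
def compress(weights, groups):
    """weights: {(u, v): w} on node pairs; groups: {node: supernode id}."""
    covered = {}
    for (u, v), w in weights.items():
        key = tuple(sorted((groups[u], groups[v])))
        covered.setdefault(key, []).append(w)
    return {key: sum(ws) / len(ws) for key, ws in covered.items()}

def approximation_error(weights, groups, super_w):
    err = 0.0
    for (u, v), w in weights.items():
        key = tuple(sorted((groups[u], groups[v])))
        err += (w - super_w.get(key, 0.0)) ** 2
    return err

weights = {('a', 'c'): 1.0, ('a', 'd'): 1.2, ('b', 'c'): 0.9, ('b', 'd'): 1.1}
groups = {'a': 'S1', 'b': 'S1', 'c': 'S2', 'd': 'S2'}   # merge a,b and c,d
super_w = compress(weights, groups)
print(super_w, approximation_error(weights, groups, super_w))  # weight 1.05, error 0.05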

Relevance: 20.00%

Abstract:

A polygon is said to be a weak visibility polygon if every point of the polygon is visible from some point of an internal segment. In this paper we derive properties of shortest paths in weak visibility polygons and present a characterization of weak visibility polygons in terms of shortest paths between vertices. These properties lead to the following efficient algorithms: (i) an O(E)-time algorithm for determining whether a simple polygon P is a weak visibility polygon, and for computing a visibility chord if one exists, where E is the size of the visibility graph of P, and (ii) an O(n²)-time algorithm for computing the maximum hidden vertex set in an n-sided polygon weakly visible from a convex edge.
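The basic primitive behind the visibility graph mentioned above is testing whether two vertices of a simple polygon see each other; a sketch follows, assuming general position (no three collinear vertices). A robust implementation needs more care with degeneracies.

# Vertex-to-vertex visibility in a simple polygon: the segment must not
# properly cross any non-incident edge, and its midpoint must lie inside.
def ccw(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0]) > 0

def segments_cross(p1, p2, q1, q2):
    """Proper crossing test for segments p1p2 and q1q2."""
    return (ccw(p1, p2, q1) != ccw(p1, p2, q2) and
            ccw(q1, q2, p1) != ccw(q1, q2, p2))

def point_in_polygon(pt, poly):
    """Ray-casting parity test."""
    inside = False
    n = len(poly)
    for i in range(n):
        (x1, y1), (x2, y2) = poly[i], poly[(i + 1) % n]
        if (y1 > pt[1]) != (y2 > pt[1]):
            if pt[0] < x1 + (pt[1] - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def vertices_visible(poly, i, j):
    n = len(poly)
    if (j - i) % n in (1, n - 1):                 # adjacent vertices share an edge
        return True
    for k in range(n):                            # no boundary edge may cross ij
        if k in (i, j) or (k + 1) % n in (i, j):
            continue
        if segments_cross(poly[i], poly[j], poly[k], poly[(k + 1) % n]):
            return False
    mid = ((poly[i][0] + poly[j][0]) / 2, (poly[i][1] + poly[j][1]) / 2)
    return point_in_polygon(mid, poly)            # reject exterior diagonals

# Non-convex quadrilateral: vertices 0 and 2 see each other; 1 and 3 do not.
poly = [(0, 0), (4, 0), (2, 1), (4, 4)]
print(vertices_visible(poly, 0, 2), vertices_visible(poly, 1, 3))  # True False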