962 results for EXPANDER GRAPHS


Relevance: 10.00%

Publisher:

Abstract:

In this paper, we introduce a method to detect pathological pathways of a disease. We aim to identify biological processes, rather than single genes, affected by chronic fatigue syndrome (CFS). So far, CFS has neither diagnostic clinical signs nor abnormalities that could be detected by laboratory examinations, and it is unclear whether CFS represents one disease or can be subdivided into different categories. We use information from clinical trials, the gene ontology (GO) database, and gene expression data to identify undirected dependency graphs (UDGs) representing biological processes according to the GO database. The structural comparison of the UDGs of sick versus non-sick patients allows us to make predictions about the modification of pathways due to pathogenesis.
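
As a minimal sketch of the structural comparison step (assuming the UDGs have already been estimated from expression data; the gene names and edges below are hypothetical), one can contrast the edge sets of the two dependency graphs:

import networkx as nx

# Hypothetical UDGs for the same GO process, one estimated from patients and
# one from controls; in the paper these are learned from expression data.
udg_sick = nx.Graph([("IL6", "TNF"), ("TNF", "NFKB1"), ("NFKB1", "IL10")])
udg_ctrl = nx.Graph([("IL6", "TNF"), ("TNF", "NFKB1"), ("IL6", "IL10")])

def edge_set(g):
    # Undirected edges as frozensets so (u, v) and (v, u) compare equal.
    return {frozenset(e) for e in g.edges()}

gained = edge_set(udg_sick) - edge_set(udg_ctrl)   # dependencies seen only in patients
lost = edge_set(udg_ctrl) - edge_set(udg_sick)     # dependencies absent under disease
print("gained:", gained, "lost:", lost)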

Relevance: 10.00%

Publisher:

Abstract:

Food webs represent trophic (feeding) interactions in ecosystems. Since the late 1970s, it has been recognized that food webs have a surprisingly close relationship to interval graphs. One interpretation of food-web intervality is that trophic niche space is low-dimensional, meaning that the trophic character of a species can be expressed by a single or at most a few quantitative traits. In a companion paper we demonstrated, by simulating a minimal food-web model, that food webs are also expected to be interval when niche space is high-dimensional. Here we characterize the fundamental mechanisms underlying this phenomenon by proving a set of rigorous conditions for food-web intervality in high-dimensional niche spaces. Our results apply to a large class of food-web models, including the special case previously studied numerically.
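
A toy illustration of intervality in the one-dimensional case (a simplified niche-model-style construction and arbitrary parameters, not the paper's model): when species are ordered by a single niche value and every consumer eats all species within a contiguous range, each diet is automatically an interval.

import random

random.seed(1)
n = 12
niche = sorted(random.random() for _ in range(n))        # species sorted by niche value

def diet(i, width=0.3):
    # Consumer i eats every species whose niche value lies in one contiguous range.
    centre = niche[i] * random.uniform(0.5, 1.0)
    return [j for j in range(n) if abs(niche[j] - centre) <= width / 2]

def is_interval(prey_indices):
    # True if the prey indices form one contiguous run (consecutive-ones property).
    return not prey_indices or prey_indices == list(range(prey_indices[0], prey_indices[-1] + 1))

print(all(is_interval(diet(i)) for i in range(n)))       # True for every consumer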

Relevance: 10.00%

Publisher:

Abstract:

We present an information-theoretic method to measure the structural information content of networks and apply it to chemical graphs. As a result, we find that our entropy measure is more general than the classical information indices known in mathematical and computational chemistry. Further, we demonstrate that our measure meaningfully reflects the essence of molecular branching by determining the structural information content of some chemical graphs numerically.
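
For orientation, one family of classical information indices can be sketched as the Shannon entropy of a vertex partition; the partition by vertex degree and the small branched graph below are illustrative assumptions, not the paper's measure.

import math
from collections import Counter
import networkx as nx

g = nx.Graph([(0, 1), (1, 2), (2, 3), (1, 4)])    # a small branched tree as a toy chemical graph

counts = Counter(dict(g.degree()).values())        # sizes of the degree classes
n = g.number_of_nodes()
entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
print(round(entropy, 3))                           # structural information, in bits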

Relevance: 10.00%

Publisher:

Abstract:

In this paper we define the structural information content of graphs as their corresponding graph entropy. This definition is based on local vertex functionals obtained by calculating j-spheres via Dijkstra's algorithm. We prove that the graph entropy and, hence, the local vertex functionals can be computed in polynomial time, enabling the application of our measure to large graphs. We also present numerical results for the graph entropy of chemical graphs and discuss the resulting properties.
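
A rough sketch of this kind of computation: for each vertex, count the vertices at shortest-path distance exactly j (the j-spheres), combine the sphere sizes into a local functional f(v), and take the Shannon entropy of the normalised values. The exponential weighting c**j below is an assumed choice of functional, not necessarily the one used in the paper.

import math
import networkx as nx

def graph_entropy(g, c=0.5):
    f = {}
    for v in g:
        dist = nx.single_source_shortest_path_length(g, v)   # BFS distances (Dijkstra for weighted graphs)
        spheres = {}
        for u, d in dist.items():
            if d > 0:
                spheres[d] = spheres.get(d, 0) + 1            # |S_j(v)|
        f[v] = sum((c ** j) * size for j, size in spheres.items())
    total = sum(f.values())
    return -sum((fv / total) * math.log2(fv / total) for fv in f.values())

g = nx.Graph([(0, 1), (1, 2), (2, 3), (1, 4)])                # toy chemical graph
print(round(graph_entropy(g), 3))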

Relevance: 10.00%

Publisher:

Abstract:

The classification of protein structures is an important and still outstanding problem. The purpose of this paper is threefold. First, we utilize a relation between the Tutte and HOMFLY polynomials to show that the Alexander-Conway polynomial can be algorithmically computed for a given planar graph. Second, as special cases of planar graphs, we use polymer graphs of protein structures. More precisely, we take three building blocks of three-dimensional protein structure (alpha-helix, antiparallel beta-sheet, and parallel beta-sheet) and calculate the Tutte polynomials of their corresponding polymer graphs analytically, by providing recurrence equations for all three secondary structure elements. Third, we present numerical results comparing our analytical calculations with the output of our algorithm, not only to test consistency but also to demonstrate that the assigned polynomials are unique labels of the secondary structure elements. This paves the way for an automatic classification of protein structures.
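
For concreteness, the deletion-contraction recursion underlying such Tutte polynomial computations can be sketched as follows; the toy triangle graph at the end is only a check case, not one of the polymer graphs analysed in the paper.

from sympy import symbols, expand

x, y = symbols("x y")

def components(vertices, edges):
    # Number of connected components via union-find (loops do not affect connectivity).
    parent = {v: v for v in vertices}
    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    for u, v in edges:
        ru, rv = find(u), find(v)
        if ru != rv:
            parent[ru] = rv
    return len({find(v) for v in vertices})

def tutte(vertices, edges):
    if not edges:
        return 1
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                        # loop: factor of y
        return expand(y * tutte(vertices, rest))
    if components(vertices, rest) > components(vertices, edges):
        # bridge: factor of x, then contract the edge
        merged = [(u if a == v else a, u if b == v else b) for a, b in rest]
        return expand(x * tutte([w for w in vertices if w != v], merged))
    # ordinary edge: T(G) = T(G - e) + T(G / e)
    deleted = tutte(vertices, rest)
    merged = [(u if a == v else a, u if b == v else b) for a, b in rest]
    contracted = tutte([w for w in vertices if w != v], merged)
    return expand(deleted + contracted)

# Toy check: the triangle has Tutte polynomial x**2 + x + y.
print(tutte([0, 1, 2], [(0, 1), (1, 2), (0, 2)]))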

Relevance: 10.00%

Publisher:

Abstract:

A single raised bog in the eastern Netherlands has been repeatedly analysed and 14C dated over the past few decades. Here we assess the within-site variability of fossil proxy data by comparing the regional pollen, macrofossils, and non-pollen palynomorphs of four of these profiles. High-resolution chronologies were obtained using 14C dating and Bayesian age-depth modelling. Where the chronologies of profiles overlap, proxy curves are compared between the profiles using greyscale graphs that visualise chronological uncertainties. Even at this small spatial scale, there is considerable variability in the fossil proxy curves. Implications regarding the signal (climate) and noise (internal dynamics) of the different types of fossil proxies are discussed. Single cores are of limited value for reconstructing centennial-scale climate change; only by combining multiple cores and proxies can we obtain a reliable understanding of past environmental change and possible forcing factors (e.g., solar variability).

Relevance: 10.00%

Publisher:

Abstract:

We investigate the computational complexity of testing dominance and consistency in CP-nets. Previously, the complexity of dominance had been determined only for restricted classes in which the dependency graph of the CP-net is acyclic. However, there are preferences of interest that define cyclic dependency graphs; these are modeled with general CP-nets. In our main results, we show that both dominance and consistency for general CP-nets are PSPACE-complete. We then consider the concepts of strong dominance, dominance equivalence, and dominance incomparability, as well as several notions of optimality, and identify the complexity of the corresponding decision problems. The reductions used in the proofs are from STRIPS planning, and thus reinforce the earlier established connections between both areas.
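
To make the dominance query concrete, here is a brute-force sketch over improving flipping sequences for a tiny, hypothetical CP-net; it illustrates the semantics only, since the PSPACE-completeness result means no such search scales in general.

from collections import deque

# Binary variables; CPT maps a variable to (parents, table), where the table
# gives the preferred value for each assignment of the parents.
CPT = {
    "A": ((), {(): 1}),                 # A=1 preferred unconditionally
    "B": (("A",), {(0,): 0, (1,): 1}),  # B prefers to copy A
    "C": (("B",), {(0,): 1, (1,): 0}),  # C prefers to differ from B
}
VARS = list(CPT)

def improving_flips(outcome):
    # Yield outcomes reachable by flipping one variable to its preferred value.
    for i, var in enumerate(VARS):
        parents, table = CPT[var]
        ctx = tuple(outcome[VARS.index(p)] for p in parents)
        preferred = table[ctx]
        if outcome[i] != preferred:
            yield outcome[:i] + (preferred,) + outcome[i + 1:]

def dominates(better, worse):
    # True if 'better' is reachable from 'worse' by a sequence of improving flips.
    seen, queue = {worse}, deque([worse])
    while queue:
        o = queue.popleft()
        if o == better:
            return True
        for nxt in improving_flips(o):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(dominates((1, 1, 0), (0, 0, 0)))   # search over the induced flip graph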

Relevance: 10.00%

Publisher:

Abstract:

Call control features (e.g., call-divert, voice-mail) are primitive options to which users can subscribe off-line to personalise their service. The configuration of a feature subscription involves choosing and sequencing features from a catalogue and is subject to constraints that prevent undesirable feature interactions at run-time. When the subscription requested by a user is inconsistent, one problem is to find an optimal relaxation; this is a generalisation of the feedback vertex set problem on directed graphs and is thus NP-hard. We present several constraint programming formulations of the problem, as well as formulations using partial weighted maximum Boolean satisfiability and mixed integer linear programming. We study all these formulations by comparing them experimentally on a variety of randomly generated instances of the feature subscription problem.
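
A tiny brute-force sketch of the optimal relaxation task, with hypothetical features, weights, and precedence constraints: keep a maximum-weight subset of requested features whose precedence graph remains acyclic. The paper instead develops CP, weighted MaxSAT, and MILP formulations, since exhaustive search is infeasible for realistic catalogues.

from itertools import combinations
import networkx as nx

features = {"divert": 3, "voicemail": 2, "screen": 1}      # feature -> weight (hypothetical)
precedence = [("divert", "voicemail"),                      # "must run before"
              ("voicemail", "screen"),
              ("screen", "divert")]                         # cycle: subscription is inconsistent

def best_relaxation(features, precedence):
    best, best_weight = set(), -1
    names = list(features)
    for k in range(len(names), -1, -1):
        for subset in combinations(names, k):
            g = nx.DiGraph([(u, v) for u, v in precedence
                            if u in subset and v in subset])
            g.add_nodes_from(subset)
            if nx.is_directed_acyclic_graph(g):
                weight = sum(features[f] for f in subset)
                if weight > best_weight:
                    best, best_weight = set(subset), weight
    return best, best_weight

print(best_relaxation(features, precedence))   # drops the cheapest feature to break the cycle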

Relevance: 10.00%

Publisher:

Abstract:

This work presents questions that have arisen from more than 20 years of collecting data for the Bach Bibliography. Taking editions of The Well-Tempered Clavier as an example, and using figures and graphs extracted from the Bach Bibliography, Tomita explores the various facets of the work's reception, including its market appeal, the ambitions that steered its editors and publishers, and trends in its interpretation.

Relevance: 10.00%

Publisher:

Abstract:

Methods are presented for developing synthesizable FFT cores. These are based on a modular approach in which parameterized commutator and processor blocks are cascaded to implement the computations required in many important FFT signal flow graphs. In addition, it is shown how a digit-serial data organization can be used to produce systems that offer 100% processor utilization along with reduced storage requirements. The approach has been used to create generators for the automated synthesis of FFT cores that are portable across a broad range of silicon technologies. The resulting chip designs are competitive with ones created using manual methods but are achieved with significant reductions in design time.
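
For readers unfamiliar with the underlying computation, here is a minimal software sketch of the radix-2 decimation-in-time butterflies that such signal flow graphs describe; the paper itself concerns synthesizable hardware generators, not software.

import cmath

def fft(x):
    # Recursive radix-2 DIT FFT; len(x) must be a power of two.
    n = len(x)
    if n == 1:
        return list(x)
    even, odd = fft(x[0::2]), fft(x[1::2])
    out = [0j] * n
    for k in range(n // 2):
        tw = cmath.exp(-2j * cmath.pi * k / n) * odd[k]   # twiddle factor applied to odd branch
        out[k] = even[k] + tw                             # butterfly upper output
        out[k + n // 2] = even[k] - tw                    # butterfly lower output
    return out

print([round(abs(v), 3) for v in fft([1, 0, 0, 0, 0, 0, 0, 0])])   # impulse gives a flat spectrum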

Relevance: 10.00%

Publisher:

Abstract:

Methods are presented for developing synthesizable FFT cores. These are based on a modular approach in which parameterizable blocks are cascaded to implement the computations required across a range of typical FFT signal flow graphs. The underlying architectural approach combines a digit-serial data organization with generic commutator blocks to produce systems that offer 100% processor utilization with lower storage requirements than previous designs. The approach has been used to create generators for the automated synthesis of FFT cores that are portable across a broad range of silicon technologies. The resulting chip designs are competitive with those created using manual methods but are achieved with significant reductions in design time.

Relevance: 10.00%

Publisher:

Abstract:

This paper addresses the analytical solution of the mixed-mode bending (MMB) problem. The first published solutions used a separation of the load into pure mode I and mode II components and were applied for crack lengths not exceeding the beam half-span, a <= L. In later publications, the same mode separation was used to derive the analytical solution for crack lengths greater than the beam half-span, a > L. In this paper it is shown that this mode separation is not valid when a > L and in some cases may lead to very erroneous results. The correct mode separation and the corresponding analytical solutions for a > L are presented. Force vs. displacement and force vs. crack length graphs obtained using the existing formulation and the corrected formulation are compared. A finite element solution, which does not use mode separation, is also presented.
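
For background, the classical load partition referred to above (valid only for a <= L) splits the applied lever load P, for lever length c and half-span L, into a pure mode I and a pure mode II contribution. The commonly cited form is sketched below; it is given here only as orientation, not as the paper's corrected solution for a > L:

P_{\mathrm{I}} = \frac{3c - L}{4L}\, P, \qquad P_{\mathrm{II}} = \frac{c + L}{L}\, P, \qquad a \le L .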

Relevance: 10.00%

Publisher:

Abstract:

Aim: To develop an expert system model for the diagnosis of fine needle aspiration cytology (FNAC) of the breast.

Methods: Knowledge and uncertainty were represented in the form of a Bayesian belief network, which permitted the combination of diagnostic evidence in a cumulative manner and provided a final probability for the possible diagnostic outcomes. The network comprised 10 cytological features (evidence nodes), each independently linked to the diagnosis (decision node) by a conditional probability matrix. The system was designed to be interactive in that the cytopathologist entered evidence into the network in the form of likelihood ratios for the outcomes at each evidence node.

Results: The efficiency of the network was tested on a series of 40 breast FNAC specimens. The highest diagnostic probability provided by the network agreed with the cytopathologists' diagnosis in 100% of cases for the assessment of discrete benign and malignant aspirates. Atypical, probably benign cases were given probabilities in favour of a benign diagnosis. Suspicious cases tended to have similar probabilities for both diagnostic outcomes and so, correctly, could not be assigned as benign or malignant. A closer examination of the cumulative belief graphs for the diagnostic sequence of each case provided insight into the diagnostic process, and quantitative data which improved the identification of suspicious cases.

Conclusion: The further development of such a system will have three important roles in breast cytodiagnosis: (1) to aid the cytologist in making a more consistent and objective diagnosis; (2) to provide a teaching tool on breast cytological diagnosis for the non-expert; and (3) to serve as the first stage in the development of a system capable of automated diagnosis through the use of expert system machine vision.
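
A minimal sketch of the cumulative, likelihood-ratio-based evidence combination described in the Methods; the feature count and numbers below are hypothetical, whereas the actual system used a 10-feature belief network.

def combine(prior_benign, likelihood_ratios):
    # Sequentially update P(benign) given LR = P(evidence | benign) / P(evidence | malignant).
    odds = prior_benign / (1.0 - prior_benign)    # prior odds in favour of benign
    history = []
    for lr in likelihood_ratios:
        odds *= lr                                # Bayes' rule in odds form
        history.append(odds / (1.0 + odds))       # cumulative belief after this feature
    return history

# Hypothetical likelihood ratios for three observed cytological features.
beliefs = combine(prior_benign=0.5, likelihood_ratios=[2.0, 0.5, 4.0])
print(beliefs)   # cumulative probabilities, as in the belief graphs described above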

Relevance: 10.00%

Publisher:

Abstract:

Based on the Dempster-Shafer (D-S) theory of evidence and G. Yen's (1989) extension of the theory, the authors propose approaches to representing heuristic knowledge by evidential mapping and to pooling the mass distribution in a complex frame by partitioning that frame using Shafer's partition technique. The authors have generalized Yen's model from Bayesian probability theory to the D-S theory of evidence. Based on such a generalized model, an extended framework for evidential reasoning systems is briefly specified, in which a semi-graph method is used to describe the heuristic knowledge. The advantage of such a method is that it avoids the complexity of graphs without losing their explicitness. The extended framework can be widely used to build expert systems.
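
As a small, self-contained illustration of the mass-pooling step this framework builds on, here is Dempster's rule of combination on a toy three-element frame; the hypotheses and mass values are hypothetical.

from itertools import product

def dempster_combine(m1, m2):
    # Combine two mass functions (dicts: frozenset of hypotheses -> mass) by Dempster's rule.
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb                    # mass falling on the empty set
    k = 1.0 - conflict                             # normalisation constant (assumes k > 0)
    return {s: w / k for s, w in combined.items()}

frame = frozenset({"h1", "h2", "h3"})
m1 = {frozenset({"h1"}): 0.6, frame: 0.4}
m2 = {frozenset({"h1", "h2"}): 0.7, frame: 0.3}
print(dempster_combine(m1, m2))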

Relevance: 10.00%

Publisher:

Abstract:

Electing a leader is a fundamental task in distributed computing. In its implicit version, only the elected node must know that it is the leader. This paper focuses on the message and time complexity of randomized implicit leader election in synchronous distributed networks. Surprisingly, the most "obvious" complexity bounds have not been proven for randomized algorithms. The "obvious" lower bounds of Ω(m) messages (m is the number of edges in the network) and Ω(D) time (D is the network diameter) are non-trivial to show for randomized (Monte Carlo) algorithms. (Recent results showing that even Ω(n) (n is the number of nodes in the network) is not a lower bound on the messages in complete networks make these bounds somewhat less obvious.) To the best of our knowledge, these basic lower bounds have not been established even for deterministic algorithms, except for the limited case of comparison algorithms, where it was additionally required that some nodes may not wake up spontaneously and that D and n were not known.

We establish these fundamental lower bounds in this paper for the general case, even for randomized Monte Carlo algorithms. Our lower bounds are universal in the sense that they hold for all universal algorithms (algorithms that work for all graphs), apply to every D, m, and n, and hold even if D, m, and n are known, all the nodes wake up simultaneously, and the algorithms can make any use of the nodes' identities. To show that these bounds are tight, we present an O(m)-message algorithm; an O(D)-time algorithm is known. A slight adaptation of our lower bound technique gives rise to an Ω(m) message lower bound for randomized broadcast algorithms.

An interesting fundamental problem is whether both upper bounds (messages and time) can be achieved simultaneously in the randomized setting for all graphs. (The answer is known to be negative in the deterministic setting.) We answer this question partially by presenting a randomized algorithm that matches both complexities in some cases; this already separates (for some cases) randomized algorithms from deterministic ones. As first steps towards the general case, we present several universal leader election algorithms with bounds that trade off messages versus time. We view our results as a step towards understanding the complexity of universal leader election in distributed networks.
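
To make the problem statement concrete, here is a naive synchronous simulation of implicit leader election by flooding the largest random rank. This baseline stabilises within roughly D rounds but may use far more than O(m) messages, so it does not match the algorithms or bounds of the paper; it only illustrates what is being elected.

import random
import networkx as nx

def flood_max_rank(graph, seed=0):
    rng = random.Random(seed)
    rank = {v: (rng.random(), v) for v in graph}    # each node draws a random rank (id breaks ties)
    best = dict(rank)                               # best (rank, id) seen so far
    rounds, changed = 0, True
    while changed:                                  # stabilises within about D rounds
        changed = False
        new_best = dict(best)
        for u in graph:
            for v in graph[u]:                      # u hears its neighbours' current best
                if best[v] > new_best[u]:
                    new_best[u] = best[v]
                    changed = True
        best, rounds = new_best, rounds + 1
    leader = max(best.values())[1]                  # implicit election: only this node is the leader
    return leader, rounds

g = nx.petersen_graph()                             # small connected test graph
print(flood_max_rank(g))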