49 results for "Eigenvalue of a graph"


Relevance: 30.00%

Publisher:

Abstract:

This thesis introduces a novel way of writing polynomial invariants as network graphs, and applies this diagrammatic notation scheme, in conjunction with graph theory, to derive algorithms for constructing relationships (syzygies) between different invariants. These algorithms give rise to a constructive solution of a longstanding classical problem in invariant theory.

Abstract:

Recently, Aissa-El-Bey et al. have proposed two subspace-based methods for underdetermined blind source separation (UBSS) in the time-frequency (TF) domain. These methods allow multiple active sources at a TF point, so long as the number of active sources at any TF point is strictly less than the number of sensors and the column vectors of the mixing matrix are pairwise linearly independent. In this correspondence, we first show that the subspace-based methods must also satisfy the condition that any M × M submatrix of the mixing matrix is of full rank. Then we present a new UBSS approach which only requires that the number of active sources at any TF point does not exceed the number of sensors. An algorithm is proposed to perform the UBSS.
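The extra identifiability condition stated above (every M × M submatrix of the mixing matrix has full rank) can be checked mechanically. A minimal sketch in Python with a naive determinant test; the function names are ours, not the authors' code:

```python
from itertools import combinations

def det(mat):
    """Determinant by Gaussian elimination with partial pivoting."""
    a = [row[:] for row in mat]
    n = len(a)
    d = 1.0
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(a[r][i]))  # pivot row
        if abs(a[p][i]) < 1e-12:
            return 0.0
        if p != i:
            a[i], a[p] = a[p], a[i]
            d = -d
        d *= a[i][i]
        for r in range(i + 1, n):
            f = a[r][i] / a[i][i]
            for c in range(i, n):
                a[r][c] -= f * a[i][c]
    return d

def all_square_submatrices_full_rank(A):
    """True iff every M x M submatrix of the M x N mixing matrix A
    (M rows = sensors, N columns = sources) is nonsingular."""
    m, n = len(A), len(A[0])
    return all(
        abs(det([[A[r][c] for c in cols] for r in range(m)])) > 1e-9
        for cols in combinations(range(n), m)
    )
```

For well-conditioned mixing matrices a fixed tolerance suffices; in practice the threshold would be scaled to the matrix norm.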

Abstract:

We introduce a new topological concept called k-partite protein cliques to study protein–protein interaction (PPI) networks. In particular, we examine the functional coherence of proteins in k-partite protein cliques. A k-partite protein clique is a k-partite maximal clique comprising two or more nonoverlapping protein subsets between any two of which full interactions are exhibited. To detect k-partite maximal cliques in PPI networks, we propose to transform PPI networks into induced k-partite graphs with proteins as vertices, where edges exist only between the graphs' partites. Then, we present a k-partite maximal clique mining (MaCMik) algorithm to enumerate k-partite maximal cliques from k-partite graphs. Our MaCMik algorithm is applied to a yeast PPI network. We observe that interesting and unusually high functional coherence does exist in k-partite protein cliques: most proteins in k-partite protein cliques, especially those in the same partites, share the same functions. Therefore, the idea of k-partite protein cliques suggests a novel approach to characterizing PPI networks, and may help function prediction for unknown proteins.
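The defining condition of a k-partite protein clique (pairwise disjoint partites, with full interactions between any two of them) can be verified directly. A minimal sketch, assuming the network is given as an edge list; the function name is ours and is not part of MaCMik:

```python
from itertools import combinations

def is_kpartite_clique(edges, partites):
    """Check the k-partite clique condition: the partites are pairwise
    disjoint vertex sets, and every pair of vertices drawn from two
    different partites is joined by an edge."""
    E = {frozenset(e) for e in edges}
    seen = set()
    for p in partites:
        if seen & set(p):          # partites must not overlap
            return False
        seen |= set(p)
    for p, q in combinations(partites, 2):
        for u in p:
            for v in q:
                if frozenset((u, v)) not in E:
                    return False   # a cross-partite interaction is missing
    return True
```

Note that, as in the definition above, no edges are required within a partite; only cross-partite interactions must be complete.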

Abstract:

Funnel graphs provide a simple, yet highly effective, means to identify key features of an empirical literature. This paper illustrates the use of funnel graphs to detect publication selection bias, identify the existence of genuine empirical effects and discover potential moderator variables that can help to explain the wide variation routinely found among reported research findings. Applications include union–productivity effects, water price elasticities, common currency-trade effects, minimum-wage employment effects, efficiency wages and the price elasticity of prescription drugs.
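A common quantitative companion to visual funnel-graph inspection is a regression-based asymmetry check in the style of Egger's test: regress reported effects on their standard errors and look for a slope far from zero. This is a generic illustration, not the paper's own procedure:

```python
def egger_asymmetry(effects, std_errors):
    """Simple Egger-style regression of effect size on its standard
    error.  A slope far from zero suggests funnel-graph asymmetry, a
    common symptom of publication selection bias."""
    n = len(effects)
    mx = sum(std_errors) / n
    my = sum(effects) / n
    sxx = sum((x - mx) ** 2 for x in std_errors)
    sxy = sum((x - mx) * (y - my) for x, y in zip(std_errors, effects))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept
```

With no selection, small (precise) and large (imprecise) studies report similar effects and the slope is near zero; when only significant results are published, reported effects grow with the standard error and the slope is positive.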

Abstract:

This paper poses and solves a new problem of consensus control where the task is to make a fixed-topology multi-agent network, with each agent described by an uncertain nonlinear system in chained form, reach consensus in a fast finite time. Our development starts with a set of new sliding mode surfaces. It is proven that, on these sliding mode surfaces, consensus can be achieved if the communication graph contains the proposed directed spanning tree. Next, we introduce multi-surface sliding mode control to drive the sliding variables to the sliding mode surfaces in a fast finite time. The control Lyapunov function for fast finite-time stability, motivated by fast terminal sliding mode control, is used to prove the reachability of the sliding mode surfaces. A recursive design procedure is provided, which guarantees the boundedness of the control input.

Abstract:

A radio labelling of a connected graph G is a mapping f : V(G) → {0, 1, 2, ...} such that |f(u) - f(v)| ≥ diam(G) - d(u, v) + 1 for each pair of distinct vertices u, v ∈ V(G), where diam(G) is the diameter of G and d(u, v) is the distance between u and v. The span of f is defined as max{ |f(u) - f(v)| : u, v ∈ V(G) }, and the radio number of G is the minimum span of a radio labelling of G. A complete m-ary tree (m ≥ 2) is a rooted tree such that each vertex of degree greater than one has exactly m children and all degree-one vertices are of equal distance (height) to the root. In this paper we determine the radio number of the complete m-ary tree for any m ≥ 2 with any height and construct explicitly an optimal radio labelling.
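The radio labelling condition and span defined above translate directly into a checker. A minimal sketch, assuming the graph is given as an adjacency dictionary; the helper names are ours:

```python
from itertools import combinations

def bfs_dist(adj, s):
    """Single-source shortest-path distances in an unweighted graph."""
    dist = {s: 0}
    frontier = [s]
    while frontier:
        nxt = []
        for u in frontier:
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    nxt.append(v)
        frontier = nxt
    return dist

def radio_span(adj, f):
    """Return the span of f if it is a valid radio labelling of the
    connected graph `adj`, else None."""
    dist = {u: bfs_dist(adj, u) for u in adj}
    diam = max(max(d.values()) for d in dist.values())
    for u, v in combinations(adj, 2):
        if abs(f[u] - f[v]) < diam - dist[u][v] + 1:
            return None  # radio condition violated for this pair
    return max(f.values()) - min(f.values())
```

The radio number of G is then the minimum of `radio_span` over all valid labellings, which is exactly the quantity the paper determines in closed form for complete m-ary trees.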

Abstract:

As one of the primary substances in a living organism, protein defines the character of each cell by interacting with the cellular environment to promote the cell's growth and function [1]. Previous studies in proteomics indicate that the functions of different proteins can be assigned based upon protein structures [2,3]. Knowledge of protein structures gives us an overview of protein fold space and is helpful for understanding the evolutionary principles behind structure. By observing the architectures and topologies of protein families, biological processes can be investigated more directly, with much higher resolution and finer detail. For this reason, the analysis of proteins, their structures and their interactions with other molecules is emerging as an important problem in bioinformatics. However, the determination of protein structures is experimentally expensive and time-consuming, which at present makes scientists largely dependent on sequence, rather than the more general structure, to infer the function of a protein. For this reason, data mining technology has been introduced into this area to provide more efficient data processing and knowledge discovery approaches.

Unlike many data mining applications which lack available data, the protein structure determination problem and its interaction study can draw on a vast amount of biologically relevant information on proteins and their interactions, such as the Protein Data Bank (PDB) [4], the Structural Classification of Proteins (SCOP) database [5], the CATH database [6], UniProt [7], and others. The difficulty of predicting protein structures, especially their 3D structures, and the interactions between proteins, as shown in Figure 6.1, lies in the computational complexity of the data. Although a large number of approaches have been developed to determine protein structures, such as ab initio modelling [8], homology modelling [9] and threading [10], more efficient and reliable methods are still greatly needed.

In this chapter, we will introduce a state-of-the-art data mining technique, graph mining, which is good at defining and discovering interesting structural patterns in graph data sets, and we will take advantage of its expressive power to study protein structures, including protein structure prediction and comparison, and protein-protein interaction (PPI). The current graph pattern mining methods will be described, and typical algorithms will be presented, together with their applications in protein structure analysis.

The rest of the chapter is organized as follows: Section 6.2 gives a brief introduction to proteins, the publicly accessible protein data resources and the current research status of protein analysis; Section 6.3 focuses on one of the state-of-the-art data mining methods, graph mining; Section 6.4 surveys existing work on protein structure analysis using advanced graph mining methods from the recent decade; finally, Section 6.5 concludes the chapter and outlines potential further work.

Abstract:

This paper reports selected findings from a study of one form of cross-boundary relationship: cross-sector R&D collaboration under the Australian Cooperative Research Centre (CRC) Programme. The study sought to explain project partners' collaboration experience using a theoretical model which was empirically tested with a survey of CRC project leaders. It was hypothesised (H1) that the higher the level of relational trust (measured, following Sako, in terms of contractual, competence, and goodwill trust) amongst the partners in a collaborative project team, the more positive would be the partners' experience of the project. The construct of credible commitments (the making of pledges, or the economic equivalent of the taking of hostages, which bind partners to a relationship) was posed in the model as an antecedent of relational trust. Accordingly, it was hypothesised (H2) that the more that credible commitments are made by the project partners, the higher would be the level of relational trust between them. Data from the achieved sample (n = 156, 51% response rate) were analysed using PLS Graph. The results of the analysis provided support for hypothesis 1 but not for hypothesis 2. It was concluded that this latter finding could be due to the specific context of the study (cross-sector R&D collaborations under the CRC Programme differ markedly from inter-firm strategic alliances), or it could be due to the complex nature of credible commitments, which was not adequately captured by our measure of this construct. Further research is required in this area to clarify the nature of credible commitments, and the circumstances under which they contribute to a spiral of rising trust, in different cross-boundary contexts.

Abstract:

This research introduces a method of using Lindenmayer Systems to model the spreading and behavior of fire inside a factory building. The research investigates the use of L-System-propagated fires for determining factors such as where the fire is most likely to spread first and how fast. It also looks at an alternative way of storing the Lindenmayer System, not in the form of a string but rather as a graph. A variation on the building and traversal process is also investigated, in which the L-System is traversed depth first instead of breadth first. Results of fire propagation are presented, and we conclude that L-Systems are a suitable tool for modelling fire propagation.
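For concreteness, the classic string representation of a Lindenmayer System (the baseline that the graph storage above replaces) rewrites every symbol in parallel at each step. A minimal sketch with a textbook rule set, not the paper's fire-spread rules:

```python
def expand(axiom, rules, steps):
    """String-based L-system expansion: rewrite every symbol in
    parallel at each step; symbols without a production are copied."""
    s = axiom
    for _ in range(steps):
        s = "".join(rules.get(ch, ch) for ch in s)
    return s
```

Storing the system as a graph, as the paper proposes, replaces this flat string with nodes whose children are the symbols of the applied production, after which the derivation can be walked depth first rather than breadth first.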

Abstract:

Active Peer-to-Peer worms pose a great threat to network security, since they can propagate in automated ways and flood the Internet within a very short duration. Modeling the propagation process can help us devise effective strategies against a worm's spread. This paper presents a study on modeling a worm's propagation probability in a P2P overlay network and proposes an optimized patch strategy for defenders. Firstly, we present a probability matrix model of the propagation of P2P worms. Our model involves three indispensable aspects of propagation: infected state, vulnerability distribution and patch strategy. Based on a fully connected graph, our comprehensive model is highly suited to real-world cases like Code Red II. Finally, by inspecting the propagation procedure, we propose four basic tactics for the defense against P2P botnets. The rationale is exposed by our simulated experiments, and the results show that these tactics are effective and well worth applying in real-world networks.
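A probability-based propagation model on a fully connected graph, of the general kind described above, can be sketched as a discrete-time update of per-node infection probabilities in which vulnerability and patching gate the spread. This is a minimal illustrative model, not the paper's exact probability matrix:

```python
def propagate(p, beta, vuln, patched, steps):
    """Discrete-time infection-probability update on a fully connected
    graph: a node stays infected or gets infected by any probabilistically
    infected neighbour, provided it is vulnerable and unpatched.
    p       : list of current infection probabilities per node
    beta    : per-contact transmission probability (illustrative)
    vuln    : which nodes carry the exploited vulnerability
    patched : which nodes the defender has patched"""
    n = len(p)
    for _ in range(steps):
        new = []
        for i in range(n):
            if patched[i] or not vuln[i]:
                new.append(0.0)  # patched/invulnerable nodes cannot be infected
                continue
            esc = 1.0            # probability of escaping every neighbour
            for j in range(n):
                if j != i:
                    esc *= 1.0 - beta * p[j]
            new.append(1.0 - (1.0 - p[i]) * esc)
        p = new
    return p
```

In such a model, the choice of which nodes to patch (the columns forced to zero) is exactly the defender's lever, which is where an optimized patch strategy enters.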

Abstract:

To improve the accuracy of access prediction, a prefetcher for web browsing should recognize the fact that a web page is a compound. By this term we mean that a user request for a single web page may require the retrieval of several multimedia items. Our prediction algorithm builds an access graph that captures the dynamics of web navigation rather than merely attaching probabilities to hypertext structure. When it comes to making prefetch decisions, most previous studies in speculative prefetching resort to simple heuristics, such as prefetching an item whose access probability is larger than a manually tuned threshold. This paper takes a different approach. Specifically, it models the performance of the prefetcher and develops a prefetch policy based on a theoretical analysis of the model. In the analysis, we derive a formula for the expected improvement in access time when prefetch is performed in anticipation of a compound request. We then develop an algorithm that integrates prefetch and cache replacement decisions so as to maximize this improvement. We present experimental results to demonstrate the effectiveness of compound-based prefetching in low-bandwidth networks.
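The decision step can be illustrated by a simple greedy policy that ranks the candidate items of a compound page by expected access-time saving per byte and prefetches within a budget. This is a heuristic sketch in the spirit of the analysis, not the paper's derived policy, and all field names are ours:

```python
def choose_prefetch(items, budget):
    """Greedy sketch: each candidate item has an access probability `p`,
    a retrieval time `t`, and a `size`.  Rank by expected time saved per
    byte (p * t / size) and prefetch within the cache/bandwidth budget."""
    ranked = sorted(items, key=lambda it: it["p"] * it["t"] / it["size"],
                    reverse=True)
    chosen, used = [], 0
    for it in ranked:
        if used + it["size"] <= budget:
            chosen.append(it["name"])
            used += it["size"]
    return chosen
```

The paper's contribution is to replace the hand-tuned components of heuristics like this with quantities derived from its performance model, and to couple the choice with cache replacement.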

Abstract:

The social interactions made manifest in blogs through the network of comments left by owners and readers are an under-used resource, both for blog pundits and for industry. We present a web-based feed reader that renders these relationships in a graph representation and enables exploration by displaying people and blogs proximate to a user's network. Social Reader is an example of Casual Information Visualization, and aims to help the user understand and explore blog-based social networks in a daily, real-life setting. A six-week study of the software involving 20 users confirmed the usefulness of the novel visual display via a quantitative analysis of use logs and an exit survey.

Abstract:

This article presents experimental results devoted to a new application of the novel clustering technique recently introduced by the authors. Our aim is to facilitate the application of robust and stable consensus functions in information security, where it is often necessary to process large data sets and monitor outcomes in real time, as is required, for example, for intrusion detection. Here we concentrate on the particular case of profiling phishing websites. First, we apply several independent clustering algorithms to a randomized sample of data to obtain independent initial clusterings. The silhouette index is used to determine the number of clusters. Second, we use a consensus function to combine these independent clusterings into one consensus clustering. Feature ranking is used to select a subset of features for the consensus function. Third, we train fast supervised classification algorithms on the resulting consensus clustering to enable them to process the whole large data set as well as new data. The precision and recall of the classifiers at the final stage of this scheme are critical for the effectiveness of the whole procedure. We investigated various combinations of three consensus functions, Cluster-Based Graph Formulation (CBGF), Hybrid Bipartite Graph Formulation (HBGF) and Instance-Based Graph Formulation (IBGF), with a variety of supervised classification algorithms. The best precision and recall were obtained by the combination of the HBGF consensus function and the SMO classifier with the polynomial kernel.
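To make the idea of a consensus function concrete, here is a minimal co-association construction: it records how often each pair of points is co-clustered across the independent clusterings. This is a simple alternative formulation chosen for illustration; CBGF, HBGF and IBGF instead build and partition graphs over clusters, points, or both:

```python
def coassociation(labelings):
    """Combine several clusterings of the same n points into an n x n
    co-association matrix: entry (i, j) is the fraction of clusterings
    that place points i and j in the same cluster."""
    n = len(labelings[0])
    m = len(labelings)
    C = [[0.0] * n for _ in range(n)]
    for lab in labelings:
        for i in range(n):
            for j in range(n):
                if lab[i] == lab[j]:
                    C[i][j] += 1.0 / m
    return C
```

A final clustering (or, as in the scheme above, training data for a supervised classifier) can then be derived by clustering the rows of this matrix.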