270 results for GRAPH
Abstract:
We address the problem of multi-instrument recognition in polyphonic music signals. Individual instruments are modeled within a stochastic framework using Student's-t Mixture Models (tMMs). We impose a mixture of these instrument models on the polyphonic signal model. No a priori knowledge is assumed about the number of instruments in the polyphony. The mixture weights are estimated in a latent variable framework from the polyphonic data using an Expectation Maximization (EM) algorithm derived for the proposed approach. The weights are shown to indicate instrument activity. The output of the algorithm is an Instrument Activity Graph (IAG), from which it is possible to identify the instruments that are active at a given time. An average F-ratio of 0.75 is obtained for polyphonies containing 2-5 instruments, on an experimental test set of 8 instruments: clarinet, flute, guitar, harp, mandolin, piano, trombone and violin.
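As a rough illustration of the kind of EM weight update described above (not the authors' implementation; the instrument models are reduced here to fixed per-frame likelihoods, and all names are hypothetical):

import numpy as np

def em_mixture_weights(frame_likelihoods, n_iter=50, tol=1e-8):
    """Estimate mixture weights over fixed instrument models via EM.

    frame_likelihoods: (T, K) array of p(x_t | instrument k) per frame t.
    Returns a length-K weight vector; large weights indicate active instruments.
    """
    T, K = frame_likelihoods.shape
    w = np.full(K, 1.0 / K)                      # uniform initialization
    for _ in range(n_iter):
        # E-step: posterior responsibility of each instrument for each frame
        joint = frame_likelihoods * w            # (T, K)
        resp = joint / joint.sum(axis=1, keepdims=True)
        # M-step: weights become the average responsibilities
        w_new = resp.mean(axis=0)
        if np.max(np.abs(w_new - w)) < tol:
            break
        w = w_new
    return w

# Toy usage: three candidate instruments, the third silent in this clip,
# so its estimated weight should come out near zero.
rng = np.random.default_rng(0)
lik = np.column_stack([rng.uniform(0.5, 1.0, 200),
                       rng.uniform(0.4, 0.9, 200),
                       rng.uniform(0.0, 0.05, 200)])
print(em_mixture_weights(lik))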
Abstract:
The temperature-dependent electrical properties of drop-cast Cu2SnS3 films have been measured in the temperature range 140 K to 317 K. The log I versus sqrt(V) plot shows two regions: the region at lower bias is due to electrode-limited Schottky emission, and the higher-bias region is due to bulk-limited Poole-Frenkel emission. The ideality factor, calculated from the ln I versus V plot at different temperatures fitted with the thermionic emission model, is found to vary from 6.05 to 12.23. This large value is attributed to the presence of defects or an amorphous layer at the Ag/Cu2SnS3 interface. From the Richardson plot, the Richardson constant and the barrier height were calculated. Owing to the inhomogeneity in the barrier heights, the Richardson constant and the barrier height were also calculated from the modified Richardson plot. The I-V-T curves were also fitted using the thermionic field emission model; the barrier heights were found to be higher than those calculated using the thermionic emission model. From the fit of the I-V-T curves to the field emission model, field emission was seen to dominate in the low-temperature range of 140 K to 177 K. The temperature-dependent current graphs show two regions with different mechanisms. The log I versus 1000/T plot gives activation energies E_a1 = 0.367095-0.257682 eV and E_a2 = 0.038416-0.042452 eV. The log(I/T^2) versus 1000/T graph gives trap depths Phi_o1 = 0.314159-0.204752 eV and Phi_o2 = 0.007425-0.011163 eV. With increasing voltage, the activation energy E_a1 and the trap depth Phi_o1 decrease. From the ln(I T^(1/2)) versus 1/T^(1/4) graph, the low-temperature region is attributed to a variable range hopping mechanism and the high-temperature region to thermionic emission. (C) 2014 Author(s).
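For context, a minimal sketch of how an activation energy is extracted from a log I versus 1000/T plot, assuming simple Arrhenius behavior I proportional to exp(-E_a / k_B T); the data below are synthetic, not the measured Cu2SnS3 curves:

import numpy as np

K_B = 8.617e-5  # Boltzmann constant, eV/K

def activation_energy(T, I):
    """Fit ln(I) against 1000/T with a straight line and return E_a in eV.

    Assumes I = I0 * exp(-E_a / (k_B * T)), so the slope of ln I versus
    1000/T equals -E_a / (1000 * k_B).
    """
    x = 1000.0 / np.asarray(T, dtype=float)
    y = np.log(np.asarray(I, dtype=float))
    slope, _intercept = np.polyfit(x, y, 1)
    return -slope * 1000.0 * K_B

# Synthetic high-temperature branch with E_a = 0.35 eV
T = np.linspace(200, 317, 25)
I = 1e-3 * np.exp(-0.35 / (K_B * T))
print(round(activation_energy(T, I), 3))   # ~0.35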
Abstract:
A new dinuclear cadmium(II) complex, [Cd(L)(NCS)]2 (1), has been synthesized using a potentially tetradentate Schiff base ligand HL, 2-((E)-(2-(diethylamino)ethylimino)methyl)-6-methoxyphenol, obtained by the condensation of 2-diethylaminoethylamine and o-vanillin, and characterized by different physicochemical techniques. The crystal structure of the title complex was unambiguously established by single-crystal X-ray diffraction, which reveals that the metal centers are connected by bridging phenolato and chelating methoxy oxygen atoms of the coordinating Schiff bases and are embedded in severely distorted octahedral geometries. Fluorescence properties of the ligand and its complex, studied at room temperature, indicate that the latter may serve as a strong fluorescent
Abstract:
Background: The number of genome-wide association studies (GWAS) has increased rapidly in the past couple of years, resulting in the identification of genes associated with different diseases. The next step in translating these findings into biomedically useful information is to find out the mechanism of action of these genes. However, GWAS studies often implicate genes whose functions are currently unknown; for example, MYEOV, ANKLE1, TMEM45B and ORAOV1 are found to be associated with breast cancer, but their molecular function is unknown. Results: We carried out Bayesian inference of Gene Ontology (GO) term annotations of genes by employing the directed acyclic graph structure of GO and the network of protein-protein interactions (PPIs). The approach is based on the premise that two proteins that interact biophysically will be in physical proximity of each other, will possess complementary molecular functions, and will play roles in related biological processes. Predicted GO terms were ranked according to their relative association scores, and the approach was evaluated quantitatively by plotting precision versus recall values and F-scores (the harmonic mean of precision and recall) versus varying thresholds. Precisions of ~58% and ~40% for the localization and function of proteins, respectively, were determined at a threshold of ~30 (top 30 GO terms in the ranked list). Comparison with function prediction based on semantic similarity among nodes in an ontology, with those similarities incorporated in a k-nearest-neighbor classifier, confirmed that our results compared favorably. Conclusions: This approach was applied to predict the cellular component and molecular function GO terms of all human proteins that have interacting partners possessing at least one known GO annotation. The list of predictions is available at http://severus.dbmi.pitt.edu/engo/GOPRED.html. We present the algorithm, the evaluations and the results of the computational predictions, especially for genes identified in GWAS studies to be associated with diseases, which are of translational interest.
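A much-simplified sketch of the underlying neighbor-based idea (known annotations of interacting partners vote for candidate GO terms); this is a stand-in for the paper's Bayesian inference, and all identifiers below are hypothetical:

from collections import Counter

def rank_go_terms(protein, ppi, annotations):
    """Rank candidate GO terms for `protein` by how often they annotate its
    interaction partners (a crude stand-in for the association score).

    ppi:         dict protein -> set of interacting proteins
    annotations: dict protein -> set of known GO term ids
    Returns (go_term, score) pairs sorted by decreasing score.
    """
    votes = Counter()
    partners = ppi.get(protein, set())
    for partner in partners:
        for term in annotations.get(partner, set()):
            votes[term] += 1
    n = max(len(partners), 1)
    return [(term, count / n) for term, count in votes.most_common()]

# Toy example with hypothetical interaction partners P1-P3
ppi = {"MYEOV": {"P1", "P2", "P3"}}
annotations = {"P1": {"GO:0005634"},
               "P2": {"GO:0005634", "GO:0003700"},
               "P3": {"GO:0003700"}}
print(rank_go_terms("MYEOV", ppi, annotations))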
Abstract:
Classification of the pharmacologic activity of a chemical compound is an essential step in any drug discovery process. We develop two new atom-centered fragment descriptors (vertex indices): one based solely on topological considerations without discriminating atom or bond types, and another based on topological and electronic features. We also assess their usefulness by devising a method to rank and classify molecules with regard to their antibacterial activity. The classification performance of our method is found to be superior to that of two previous studies on large heterogeneous data sets for hit finding and hit-to-lead studies, even though we use far fewer parameters. It is found that for hit finding studies topological features (simple graph) alone provide significant discriminating power, and that for the hit-to-lead process a small but consistent improvement can be made by additionally including electronic features (colored graph). Our approach is simple, interpretable, and suitable for the design of molecules, as we do not use any physicochemical properties. The singular use of a vertex index as descriptor, a novel range-based feature extraction, and rigorous statistical validation are the key elements of this study.
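A toy illustration of an atom-centered, purely topological descriptor computed on a hydrogen-suppressed molecular graph with networkx; the specific index below is invented for illustration and is not the descriptor defined in the paper:

import networkx as nx

def vertex_indices(mol_graph):
    """Toy atom-centered topological descriptor: combine each atom's degree
    with the degrees of its neighbors (a 'simple graph' view that ignores
    atom and bond types)."""
    deg = dict(mol_graph.degree())
    indices = {}
    for atom in mol_graph.nodes():
        neighbor_degs = sorted(deg[n] for n in mol_graph.neighbors(atom))
        indices[atom] = (deg[atom], sum(neighbor_degs), tuple(neighbor_degs))
    return indices

# n-butane carbon skeleton as a path graph on 4 vertices
butane = nx.path_graph(4)
for atom, idx in vertex_indices(butane).items():
    print(atom, idx)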
Abstract:
In this paper we establish that the Lovász theta function on a graph can be restated as a kernel learning problem. We introduce the notion of SVM-theta graphs, on which the Lovász theta function can be approximated well by a Support Vector Machine (SVM). We show that Erdős-Rényi random G(n, p) graphs are SVM-theta graphs for log^4(n)/n <= p < 1. Even if we embed a large clique of size Theta(sqrt(np)/(1-p)) in a G(n, p) graph, the resultant graph still remains an SVM-theta graph. This immediately suggests an SVM-based algorithm for recovering a large planted clique in random graphs. Associated with the theta function is the notion of orthogonal labellings. We introduce common orthogonal labellings, which extend the idea of orthogonal labellings to multiple graphs. This allows us to propose a Multiple Kernel Learning (MKL) based solution which is capable of identifying a large common dense subgraph in multiple graphs. In both the planted clique case and the common subgraph detection problem, the proposed solutions beat the state of the art by an order of magnitude.
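For reference, the Lovász theta function itself can be computed from its standard semidefinite programming formulation (maximize the total sum of entries of a positive semidefinite matrix with unit trace and zeros on the edges of the graph); the cvxpy sketch below illustrates that definition and is not the paper's SVM-based approximation:

import cvxpy as cp
import networkx as nx

def lovasz_theta(graph):
    """Standard SDP for the Lovász theta function of `graph`."""
    nodes = list(graph.nodes())
    n = len(nodes)
    idx = {v: i for i, v in enumerate(nodes)}
    X = cp.Variable((n, n), symmetric=True)
    constraints = [X >> 0, cp.trace(X) == 1]
    constraints += [X[idx[u], idx[v]] == 0 for u, v in graph.edges()]
    problem = cp.Problem(cp.Maximize(cp.sum(X)), constraints)
    problem.solve()
    return problem.value

print(round(lovasz_theta(nx.cycle_graph(5)), 3))   # sqrt(5) ~ 2.236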
Abstract:
In the current study, amino-silane-functionalized cenosphere particles were used as a reinforcing filler in a poly(vinyl butyral) (PVB) matrix, and composite films were made by melt blending. The changes observed in the dielectric performance of the composite films with varying weight percentage of cenosphere particles in the matrix were investigated. The dielectric properties and impedance spectra were evaluated as a function of applied frequency in the range of 50 Hz to 5 MHz. It is observed that, because of orientation polarization of the PVB polymer, the permittivity and impedance decrease, whereas the conductivity increases. The loss tangent graph indicates that the property of the matrix is associated with the geometrical fill factor and the lowest quality factor. Therefore, above 10 kHz, these composites can be considered loss-less dielectric materials. (C) 2013 Society of Plastics Engineers
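As a worked illustration of how relative permittivity and loss tangent are obtained from impedance spectroscopy data, assuming a simple parallel-plate geometry; the sample geometry and test values below are invented:

import numpy as np

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def dielectric_from_impedance(freq, Z, area, thickness):
    """Convert measured complex impedance of a parallel-plate sample into
    the real relative permittivity and the loss tangent.

    freq: frequencies in Hz; Z: complex impedance in ohm;
    area and thickness: electrode geometry in m^2 and m.
    """
    omega = 2 * np.pi * np.asarray(freq, dtype=float)
    c0 = EPS0 * area / thickness            # empty-cell capacitance
    eps_star = 1.0 / (1j * omega * c0 * np.asarray(Z))
    eps_real = eps_star.real
    eps_imag = -eps_star.imag               # eps* = eps' - j*eps''
    return eps_real, eps_imag / eps_real    # (eps', tan delta)

# Sanity check: an ideal capacitor with eps_r = 5 gives tan delta ~ 0
freq = np.logspace(2, 6, 5)
cap = 5 * EPS0 * 1e-4 / 1e-3
Z = 1.0 / (1j * 2 * np.pi * freq * cap)
print(dielectric_from_impedance(freq, Z, area=1e-4, thickness=1e-3))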
Abstract:
In this paper, we study a problem of designing a multi-hop wireless network for interconnecting sensors (hereafter called source nodes) to a Base Station (BS), by deploying a minimum number of relay nodes at a subset of given potential locations, while meeting a quality of service (QoS) objective specified as a hop count bound for paths from the sources to the BS. The hop count bound suffices to ensure a certain probability of the data being delivered to the BS within a given maximum delay under a light traffic model. We observe that the problem is NP-Hard. For this problem, we propose a polynomial time approximation algorithm based on iteratively constructing shortest path trees and heuristically pruning away the relay nodes used until the hop count bound is violated. Results show that the algorithm performs efficiently in various randomly generated network scenarios; in over 90% of the tested scenarios, it gave solutions that were either optimal or were worse than optimal by just one relay. We then use random graph techniques to obtain, under a certain stochastic setting, an upper bound on the average case approximation ratio of a class of algorithms (including the proposed algorithm) for this problem as a function of the number of source nodes, and the hop count bound. To the best of our knowledge, the average case analysis is the first of its kind in the relay placement literature. Since the design is based on a light traffic model, we also provide simulation results (using models for the IEEE 802.15.4 physical layer and medium access control) to assess the traffic levels up to which the QoS objectives continue to be met. (C) 2014 Elsevier B.V. All rights reserved.
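A simplified sketch of the shortest-path-tree-plus-pruning idea using networkx; the feasibility test and pruning order below are illustrative choices, not the paper's exact algorithm, and the topology is hypothetical:

import networkx as nx

def place_relays(graph, sources, base_station, potential_relays, hop_bound):
    """Greedy sketch: start with all potential relays, then discard relays
    one by one as long as every source still reaches the base station within
    the hop bound (checked with BFS distances)."""
    selected = set(potential_relays)

    def feasible(relays):
        nodes = set(sources) | {base_station} | set(relays)
        sub = graph.subgraph(nodes)
        lengths = nx.single_source_shortest_path_length(
            sub, base_station, cutoff=hop_bound)
        return all(s in lengths for s in sources)

    if not feasible(selected):
        return None                          # hop bound cannot be met at all
    for relay in sorted(selected):           # heuristic pruning order
        if feasible(selected - {relay}):
            selected.discard(relay)
    return selected

# Toy topology: s1 and s2 reach the BS only through some of r1..r3
G = nx.Graph([("s1", "r1"), ("r1", "BS"), ("s2", "r2"),
              ("r2", "r1"), ("s2", "r3"), ("r3", "BS")])
print(place_relays(G, ["s1", "s2"], "BS", ["r1", "r2", "r3"], hop_bound=2))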
Abstract:
Metabolism is a defining feature of life, and its study is important for understanding how a cell works, the alterations that lead to disease, and for applications in drug discovery. From a systems perspective, metabolism can be represented as a network that captures all the metabolites as nodes and the inter-conversions among pairs of them as edges. Such an abstraction enables the networks to be studied using graph theory, in particular to infer the flow of chemical information in the networks by identifying relevant metabolic pathways. In this study, different weighting schemes are used to illustrate that appropriately weighted networks can capture the quantitative cellular dynamics quite accurately. The networks thus combine the elegance and simplicity of the representation with the ease of analysing metabolic graphs, and the metabolic routes or paths determined in this way are likely to be more biologically meaningful. The usefulness of the approach is demonstrated with two examples: first for understanding the bacterial stress response, and second for studying the metabolic alterations that occur in cancer cells.
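As a minimal networkx sketch of the idea, a low-weight route can be extracted from a weighted metabolite graph with a standard shortest-path search; the metabolites and edge weights below are invented for illustration:

import networkx as nx

# Hypothetical metabolite graph: edge weights stand in for, e.g., the inverse
# of reaction flux, so that low-weight paths correspond to dominant routes.
G = nx.DiGraph()
G.add_weighted_edges_from([
    ("glucose", "g6p", 1.0), ("g6p", "f6p", 1.0), ("f6p", "fbp", 1.2),
    ("fbp", "pyruvate", 2.0), ("g6p", "6pg", 3.5), ("6pg", "pyruvate", 4.0),
])

path = nx.dijkstra_path(G, "glucose", "pyruvate", weight="weight")
cost = nx.dijkstra_path_length(G, "glucose", "pyruvate", weight="weight")
print(path, cost)   # the glycolysis-like route wins under these toy weights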
Abstract:
We investigate the parameterized complexity of the following edge coloring problem, motivated by the problem of channel assignment in wireless networks. For an integer q >= 2 and a graph G, the goal is to find a coloring of the edges of G with the maximum number of colors such that every vertex of the graph sees at most q colors. This problem is NP-hard for q >= 2, and has been well studied from the point of view of approximation. Our main focus is the case q = 2, which is already theoretically intricate and practically relevant. We give fixed-parameter tractable algorithms for both the standard and the dual parameterization; for the latter, the result is based on a linear vertex kernel.
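To make the problem statement concrete, here is a brute-force check (feasible only for tiny graphs) of the largest number of colors that can be used when every vertex may see at most q colors; the paper's algorithms are of course far more refined:

from itertools import product
import networkx as nx

def max_colors(graph, q):
    """Largest c such that the edges can be colored using all of 0..c-1
    while every vertex sees at most q distinct colors. Exponential brute
    force, intended only to illustrate the problem definition."""
    edges = list(graph.edges())
    best = 0
    for c in range(1, len(edges) + 1):
        for coloring in product(range(c), repeat=len(edges)):
            if set(coloring) != set(range(c)):
                continue                      # every color must actually appear
            seen = {v: set() for v in graph.nodes()}
            for (u, v), col in zip(edges, coloring):
                seen[u].add(col)
                seen[v].add(col)
            if all(len(s) <= q for s in seen.values()):
                best = c
                break
    return best

# A 4-cycle: every vertex has degree 2, so any edge coloring is valid
print(max_colors(nx.cycle_graph(4), q=2))   # 4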
Abstract:
The correlation clustering problem is a fundamental problem in both theory and practice, and it involves identifying clusters of objects in a data set based on their similarity. A traditional modeling of this question as a graph-theoretic problem involves associating vertices with data points and indicating similarity by adjacency. Clusters then correspond to cliques in the graph. The resulting optimization problem, Cluster Editing (and several variants), is very well studied algorithmically. In many situations, however, translating clusters to cliques can be somewhat restrictive. A more flexible notion would be that of a structure where the vertices are mutually "not too far apart", without necessarily being adjacent. One such generalization is realized by structures called s-clubs, which are graphs of diameter at most s. In this work, we study the question of finding a set of at most k edges whose removal leaves us with a graph whose components are s-clubs. Recently, it has been shown that unless the Exponential Time Hypothesis (ETH) fails, Cluster Editing (whose components are 1-clubs) does not admit a sub-exponential time algorithm [STACS, 2013]. That is, there is no algorithm solving the problem in time 2^{o(k)} n^{O(1)}. However, surprisingly, they show that when the number of cliques in the output graph is restricted to d, the problem can be solved in time O(2^{O(sqrt(dk))} + m + n). We show that this sub-exponential time algorithm for a fixed number of cliques is rather an exception than a rule. Our first result shows that, assuming the ETH, there is no algorithm solving the s-Club Cluster Edge Deletion problem in time 2^{o(k)} n^{O(1)}. We show, further, that even the problem of deleting edges to obtain a graph with d s-clubs cannot be solved in time 2^{o(k)} n^{O(1)} for any fixed s, d >= 2. This is in radical contrast to the situation established for cliques, where sub-exponential algorithms are known.
Abstract:
The boxicity (resp. cubicity) of a graph G(V, E) is the minimum integer k such that G can be represented as the intersection graph of axis-parallel boxes (resp. cubes) in R^k. Equivalently, it is the minimum number of interval graphs (resp. unit interval graphs) on the vertex set V such that the intersection of their edge sets is E. The problem of computing boxicity (resp. cubicity) is known to be inapproximable, even for restricted graph classes like bipartite, co-bipartite and split graphs, within an O(n^{1-epsilon})-factor for any epsilon > 0 in polynomial time, unless NP = ZPP. For any well-known graph class of unbounded boxicity, there is no known polynomial-time algorithm that achieves an n^{1-epsilon}-factor approximation for computing boxicity, for any epsilon > 0. In this paper, we consider the problem of approximating the boxicity (cubicity) of circular arc graphs, i.e., intersection graphs of arcs of a circle. Circular arc graphs are known to have unbounded boxicity, which could be as large as Omega(n). We give a (2 + 1/k)-factor (resp. (2 + ceil(log n)/k)-factor) polynomial-time approximation algorithm for computing the boxicity (resp. cubicity) of any circular arc graph, where k >= 1 is the value of the optimum solution. For normal circular arc (NCA) graphs, with an NCA model given, this can be improved to an additive two-approximation algorithm. The time complexity of the algorithms to approximately compute the boxicity (resp. cubicity) is O(mn + n^2) in both these cases, and in O(mn + kn^2) = O(n^3) time we also get their corresponding box (resp. cube) representations, where n is the number of vertices of the graph and m is its number of edges. Our additive two-approximation algorithm directly works for any proper circular arc graph, since their NCA models can be computed in polynomial time. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
Given a point set P and a class C of geometric objects, G_C(P) is a geometric graph with vertex set P such that any two vertices p and q are adjacent if and only if there is some object C in the class C containing both p and q but no other points from P. We study G_∇(P) graphs, where ∇ is the class of downward equilateral triangles (i.e., equilateral triangles with one of their sides parallel to the x-axis and the corner opposite to this side below that side). For point sets in general position, these graphs have been shown to be equivalent to half-Theta_6 graphs and TD-Delaunay graphs. The main result in our paper is that for point sets P in general position, G_∇(P) always contains a matching of size at least ceil((|P|-1)/3), and this bound is tight. We also give some structural properties of G_⋆(P) graphs, where ⋆ is the class which contains both upward and downward equilateral triangles. We show that for point sets in general position, the block cut point graph of G_⋆(P) is simply a path. Through the equivalence of G_⋆(P) graphs with Theta_6 graphs, we also derive that any Theta_6 graph can have at most 5n-11 edges, for point sets in general position. (C) 2013 Elsevier B.V. All rights reserved.
Abstract:
Given a connected outerplanar graph G of pathwidth p, we give an algorithm to add edges to G to get a supergraph of G which is 2-vertex-connected, outerplanar and of pathwidth O(p). This settles an open problem raised by Biedl [1] in the context of computing minimum-height planar straight-line drawings of outerplanar graphs, with their vertices placed on a two-dimensional grid. In conjunction with the result of this paper, the constant-factor approximation algorithm for this problem obtained by Biedl [1] for 2-vertex-connected outerplanar graphs will work for all outerplanar graphs. (C) 2014 Elsevier B.V. All rights reserved.
Abstract:
We address the parameterized complexity of Max Colorable Induced Subgraph on perfect graphs. The problem asks for a maximum-sized q-colorable induced subgraph of an input graph G. Yannakakis and Gavril [IPL 1987] showed that this problem is NP-complete even on split graphs if q is part of the input, but gave an n^{O(q)} algorithm on chordal graphs. We first observe that the problem is W[2]-hard parameterized by q, even on split graphs. However, when parameterized by l, the number of vertices in the solution, we give two fixed-parameter tractable algorithms. The first algorithm runs in time 5.44^l (n + #alpha(G))^{O(1)}, where #alpha(G) is the number of maximal independent sets of the input graph. The second algorithm runs in time q^{l+o(l)} n^{O(1)} T_alpha, where T_alpha is the time required to find a maximum independent set in any induced subgraph of G. The first algorithm is efficient when the input graph contains only polynomially many maximal independent sets; for example, split graphs and co-chordal graphs. The running time of the second algorithm is FPT in l alone (whenever T_alpha is a polynomial in n), since q <= l for all non-trivial situations. Finally, we show that (under standard complexity-theoretic assumptions) the problem does not admit a polynomial kernel on split and perfect graphs in the following sense: (a) on split graphs, we do not expect a polynomial kernel if q is a part of the input; (b) on perfect graphs, we do not expect a polynomial kernel even for fixed values of q >= 2.
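To make the problem concrete, a naive brute-force check for an l-vertex induced subgraph that is properly q-colorable; the paper's fixed-parameter algorithms avoid this exponential search entirely:

from itertools import combinations, product
import networkx as nx

def has_q_colorable_induced_subgraph(graph, q, l):
    """Does `graph` contain an induced subgraph on l vertices that admits a
    proper q-coloring? Brute force over vertex subsets and colorings."""
    for subset in combinations(graph.nodes(), l):
        sub = graph.subgraph(subset)
        for colors in product(range(q), repeat=l):
            assign = dict(zip(subset, colors))
            if all(assign[u] != assign[v] for u, v in sub.edges()):
                return True
    return False

# A 5-cycle is not 2-colorable, but deleting any one vertex leaves a path
print(has_q_colorable_induced_subgraph(nx.cycle_graph(5), q=2, l=4))  # True
print(has_q_colorable_induced_subgraph(nx.cycle_graph(5), q=2, l=5))  # False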