945 results for Entropic graphs
Abstract:
We investigate variants of the dominating set problem in social networks. While randomised algorithms for the minimum weighted dominating set problem and for the minimum alpha- and alpha-rate domination problems on simple graphs already exist in the literature, we propose here a randomised algorithm for the minimum weighted alpha-rate dominating set problem, which is, to the best of our knowledge, the first such algorithm. A theoretical approximation bound based on a simple randomised rounding technique is given. The algorithm is implemented in Python and applied to a UK Twitter mentions network, using a measure of individuals' influence (Klout) as vertex weights. We argue that the weights of vertices can be interpreted as the costs of getting those individuals on board for a campaign or a behaviour-change intervention. The minimum weighted alpha-rate dominating set problem can therefore be seen as finding a set that minimises the total cost while ensuring that each individual in the network has at least an alpha fraction of its neighbours in the chosen set. We also test our algorithm on generated graphs with several thousand vertices and edges. Our results on this real-life Twitter network and on generated graphs show that the implementation is reasonably efficient and can thus be used in real-life applications when creating social-network-based interventions, designing social media campaigns and, potentially, improving users' social media experience.
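A minimal sketch of the LP-relaxation-plus-randomised-rounding approach the abstract alludes to, assuming a networkx graph with vertex costs in a 'weight' attribute and alpha in (0, 1]; the amplification constant and the greedy repair step are illustrative choices, not necessarily the paper's exact algorithm.

```python
# Hedged sketch: LP relaxation + randomised rounding for the minimum
# weighted alpha-rate dominating set problem. Assumes a simple networkx
# graph; the constant c and the repair step are illustrative.
import math
import random
import networkx as nx
import numpy as np
from scipy.optimize import linprog

def weighted_alpha_rate_dominating_set(G, alpha, seed=0):
    rng = random.Random(seed)
    nodes = list(G.nodes)
    idx = {v: i for i, v in enumerate(nodes)}
    costs = np.array([G.nodes[v].get("weight", 1.0) for v in nodes])

    # LP relaxation: minimise total cost subject to, for every vertex v,
    # sum of x_u over neighbours u of v >= alpha * deg(v).
    # Feasible for alpha <= 1 (set all x = 1).
    A_ub, b_ub = [], []
    for v in nodes:
        row = np.zeros(len(nodes))
        for u in G.neighbors(v):
            row[idx[u]] = -1.0          # negated: linprog uses A_ub @ x <= b_ub
        A_ub.append(row)
        b_ub.append(-alpha * G.degree(v))
    lp = linprog(costs, A_ub=np.array(A_ub), b_ub=np.array(b_ub),
                 bounds=[(0.0, 1.0)] * len(nodes))

    # Randomised rounding: keep v with probability min(1, c * x_v); the
    # logarithmic amplification factor is a standard, assumed choice.
    c = math.log(len(nodes) + 1) + 1.0
    S = {v for v in nodes if rng.random() < min(1.0, c * lp.x[idx[v]])}

    # Greedy repair for any vertex whose alpha-rate constraint is unmet:
    # add its cheapest missing neighbours.
    for v in nodes:
        need = math.ceil(alpha * G.degree(v)) - sum(u in S for u in G.neighbors(v))
        if need > 0:
            spare = sorted((u for u in G.neighbors(v) if u not in S),
                           key=lambda u: G.nodes[u].get("weight", 1.0))
            S.update(spare[:need])
    return S
```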
Abstract:
In this paper, Bond Graphs are employed to develop a novel mathematical model of conventional switched-mode DC-DC converters that is valid for both continuous and discontinuous conduction modes (CCM and DCM). A bond graph model of the hybrid system with unique causality is proposed, in which the switch is represented by a Modulated Transformer with a binary input and the diode by a resistor with fixed conductance causality. The operation of the diode is controlled using an if-then function within the model. The resulting hybrid model is implemented for a Boost and a Buck converter whose operation changes from CCM to DCM and back to CCM. The vector fields of the models show validity over a wide operating area, and comparison with PSPICE simulations of the converters reveals the high accuracy of the proposed model, with the Normalised Root Mean Square Error and the Maximum Absolute Error remaining adequately low. The model is also tested experimentally on a Buck topology.
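A hedged numerical sketch of the switching behaviour described above: a forward-Euler simulation of a Buck converter in which the switch is a binary input and the diode is handled by an if-then condition, so the trajectory can move between CCM and DCM. Component values and the integration scheme are illustrative assumptions, not taken from the paper.

```python
# Buck converter with an if-then diode, moving between CCM and DCM.
# All component values below are assumed, illustrative numbers.
import numpy as np

Vin, L, C, R = 12.0, 100e-6, 47e-6, 10.0   # assumed source and components
fs, duty, dt = 50e3, 0.4, 1e-7             # switching freq, duty cycle, step

def simulate(t_end=5e-3):
    iL, vC = 0.0, 0.0
    ts = np.arange(0.0, t_end, dt)
    trace = np.empty((len(ts), 2))
    for k, t in enumerate(ts):
        s = 1.0 if (t * fs) % 1.0 < duty else 0.0   # binary switch input
        # If-then diode: conducts only when the switch is off and iL > 0.
        diode_on = (s == 0.0) and (iL > 0.0)
        if s == 1.0:
            diL = (Vin - vC) / L
        elif diode_on:
            diL = -vC / L
        else:
            diL = 0.0                               # DCM: inductor idles
        dvC = (iL - vC / R) / C
        iL = max(iL + diL * dt, 0.0)                # current cannot reverse
        vC += dvC * dt
        trace[k] = iL, vC
    return ts, trace

ts, trace = simulate()
print("final (iL, vC):", trace[-1])   # expect vC near duty * Vin in CCM
```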
Abstract:
Recent developments in the area of Bid Tender Forecasting have enabled bidders to implement new types of easy-to-use tools for increasing their chances of winning contracts. Although these new tools (such as iso-Score Curve Graphs, Scoring Probability Graphs, and Position Probability Graphs) are designed for bidders in capped tendering (tenders with an upper price limit), some of their principles can also be applied by a Contracting Authority to detect which bidders do not follow a standard pattern, that is, whose bids are extremely high or low. Since a collusive bid generally needs to be sufficiently high or low to make an impact on the bid distribution, anyone in charge of supervising capped tenders can be alerted to a bidder that might be involved in a cartel after identifying the same abnormal behavior in a series of tenders through simple calculations and a new type of graph.
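As an illustration of the kind of simple calculation the abstract mentions (and not the paper's actual procedure), the sketch below flags bidders whose bids are repeatedly extreme relative to each tender's bid distribution; the z-score threshold and repetition count are assumed parameters.

```python
# Illustrative detector: a bidder is flagged only when its bids are
# extreme (high or low) in several different tenders.
from collections import defaultdict
import statistics

def flag_suspicious_bidders(bids, z_cut=1.5, min_hits=3):
    # `bids` maps tender id -> {bidder: bid amount}; structure assumed.
    hits = defaultdict(int)
    for tender, offers in bids.items():
        values = list(offers.values())
        if len(values) < 3:
            continue                      # too few bids to judge extremity
        mu = statistics.mean(values)
        sd = statistics.stdev(values)
        if sd == 0:
            continue
        for bidder, bid in offers.items():
            if abs(bid - mu) / sd > z_cut:
                hits[bidder] += 1         # extreme bid in this tender
    # Repetition across tenders, not a single outlier, triggers the alert.
    return {b for b, n in hits.items() if n >= min_hits}
```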
Abstract:
Research into Bid Tender Forecasting Models (BTFM) has been in progress since the 1950s. None of the models developed were easy-to-use tools that bidding practitioners could use effectively, because of the advanced mathematical apparatus and massive data inputs required. This scenario began to change in 2012 with the development of the Smartbid BTFM, a fairly simple model that presents a series of graphs enabling any project manager to study competitors using a relatively short historical tender dataset. However, despite the advantages of this new model, it is still necessary to study all the auction participants as an indivisible group; that is, the original BTFM was not devised for analyzing the behavior of a single bidding competitor or a subgroup of them. The present paper addresses that flaw and presents a stand-alone methodology for estimating future competitors' bidding behaviors separately.
Abstract:
Predictive performance evaluation is a fundamental issue in the design, development, and deployment of classification systems. Because predictive performance evaluation is a multidimensional problem, single scalar summaries such as error rate, although quite convenient due to their simplicity, can seldom evaluate all the aspects that a complete and reliable evaluation must consider. For this reason, various graphical performance evaluation methods are increasingly drawing the attention of the machine learning, data mining, and pattern recognition communities. The main advantage of these methods resides in their ability to depict the trade-offs between evaluation aspects in a multidimensional space rather than reducing those aspects to an arbitrarily chosen (and often biased) single scalar measure. Furthermore, to select a suitable graphical method for a given task, it is crucial to identify its strengths and weaknesses. This paper surveys various graphical methods often used for predictive performance evaluation. By presenting these methods within the same framework, we hope this paper may shed some light on deciding which methods are more suitable in different situations.
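As a concrete instance of one canonical graphical method of the kind such a survey covers, the snippet below plots a ROC curve, which depicts the trade-off between true and false positive rates across all decision thresholds instead of a single scalar; the dataset and classifier are illustrative assumptions.

```python
# ROC curve as an example of graphical performance evaluation.
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, auc
from sklearn.model_selection import train_test_split

# Synthetic data and a simple classifier, chosen only for illustration.
X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
scores = LogisticRegression(max_iter=1000).fit(X_tr, y_tr).predict_proba(X_te)[:, 1]

# The curve shows the whole threshold trade-off; AUC is one scalar summary.
fpr, tpr, _ = roc_curve(y_te, scores)
plt.plot(fpr, tpr, label=f"AUC = {auc(fpr, tpr):.3f}")
plt.plot([0, 1], [0, 1], "--", color="grey")  # chance diagonal
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```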
Abstract:
We generalize results in Cruz and de Rezende (1999) [7] by completely describing how the Betti numbers of the boundary of an orientable manifold vary after attaching a handle, when the homology coefficients are in Z, Q, R or Z/pZ with p prime. First we apply this result to the Conley index theory of Lyapunov graphs. Next we consider the Ogasa invariant associated with handle decompositions of manifolds. We make use of the above results to obtain upper bounds for the Ogasa invariant of product manifolds. © 2011 Elsevier B.V. All rights reserved.
Abstract:
The thermodynamic properties of a selected set of benchmark hydrogen-bonded systems (the acetic acid dimer and the complexes of acetic acid with acetamide and methanol) were studied with the goal of obtaining detailed information on solvent effects on hydrogen-bonded interactions, using water, chloroform, and n-heptane as representatives of a wide range of dielectric constants. Solvent effects were investigated using both explicit and implicit solvation models. For the explicit description of the solvent, molecular dynamics and Monte Carlo simulations in the isothermal-isobaric (NpT) ensemble, combined with the free energy perturbation technique, were performed to determine solvation free energies. Within the implicit solvation approach, the polarizable continuum model and the conductor-like screening model were applied. Combining gas-phase results with the results obtained from the different solvation models through an appropriate thermodynamic cycle allows estimation of complexation free energies, enthalpies, and the respective entropic contributions in solution. Owing to the strong solvation effects of water, the cyclic acetic acid dimer is not stable in aqueous solution. In less polar solvents, the double-hydrogen-bond structure of the acetic acid dimer remains stable. This finding is in agreement with previous theoretical and experimental results. A trend similar to that of the acetic acid dimer is also observed for the acetamide complex. The methanol complex was found to be thermodynamically unstable in the gas phase as well as in any of the three solvents. © 2010 Wiley Periodicals, Inc. J Comput Chem 31: 2046-2055, 2010
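The "appropriate thermodynamic cycle" referred to above combines gas-phase complexation with solvation free energies in the standard way; for generic monomers A and B forming a complex AB (notation assumed here, not the paper's), the cycle gives:

```latex
\Delta G_{\mathrm{complex}}^{\mathrm{sol}}
  = \Delta G_{\mathrm{complex}}^{\mathrm{gas}}
  + \Delta G_{\mathrm{solv}}(\mathrm{AB})
  - \Delta G_{\mathrm{solv}}(\mathrm{A})
  - \Delta G_{\mathrm{solv}}(\mathrm{B})
```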
Abstract:
Automatic summarization of texts is now crucial for several information retrieval tasks owing to the huge amount of information available in digital media, which has increased the demand for simple, language-independent extractive summarization strategies. In this paper, we employ concepts and metrics of complex networks to select sentences for an extractive summary. The graph or network representing a piece of text consists of nodes corresponding to sentences, while edges connect sentences that share common meaningful nouns. Because various metrics could be used, we developed a set of 14 summarizers, generically referred to as CN-Summ, employing network concepts such as node degree, length of shortest paths, d-rings and k-cores. An additional summarizer was created which selects the highest-ranked sentences across the 14 systems, as in a voting system. When applied to a corpus of Brazilian Portuguese texts, some CN-Summ versions performed better than summarizers that do not employ deep linguistic knowledge, with results comparable to state-of-the-art summarizers based on expensive linguistic resources. The use of complex networks to represent texts therefore appears suitable for automatic summarization, consistent with the belief that the metrics of such networks may capture important text features. © 2008 Elsevier Inc. All rights reserved.
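A simplified sketch of the degree-based flavour of this approach: sentences are nodes, edges link sentences sharing a common meaningful noun, and the highest-degree sentences are extracted. The crude noun heuristic below (content words of four or more letters) is an assumption standing in for proper part-of-speech tagging.

```python
# Degree-based extractive summarization over a sentence graph.
import re
import itertools
import networkx as nx

def summarize(text, n_sentences=3):
    sentences = re.split(r"(?<=[.!?])\s+", text.strip())
    # Stand-in for noun extraction: lowercase content words, length >= 4.
    words = [set(w for w in re.findall(r"[a-z]+", s.lower()) if len(w) >= 4)
             for s in sentences]
    G = nx.Graph()
    G.add_nodes_from(range(len(sentences)))
    for i, j in itertools.combinations(range(len(sentences)), 2):
        if words[i] & words[j]:           # shared "noun" -> edge
            G.add_edge(i, j)
    # Rank sentences by node degree; keep the top ones in text order.
    top = sorted(sorted(G.nodes, key=G.degree, reverse=True)[:n_sentences])
    return " ".join(sentences[i] for i in top)
```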
Abstract:
Complex networks can be understood as graphs whose connectivity properties deviate from those of regular or near-regular graphs, which are understood as being "simple". While a great deal of the attention so far dedicated to complex networks has been duly driven by the "complex" nature of these structures, in this work we address the identification of their simplicity. The basic idea is to seek subgraphs whose nodes exhibit similar measurements. This approach paves the way for complementing the characterization of networks, and includes results suggesting that protein-protein interaction networks, and to a lesser extent also the Internet, may be getting simpler over time. Copyright © EPLA, 2009
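One hedged way to read the "similar measurements" idea operationally is sketched below: bin nodes by a few standard measurements and keep connected components of same-bin nodes as candidate simple subgraphs. The choice of measurements and the quantile binning are assumptions, not the paper's procedure.

```python
# Candidate "simple" subgraphs: connected groups of nodes whose degree
# and clustering coefficient fall in the same quantile bins.
import networkx as nx
import numpy as np

def similar_node_subgraphs(G, n_bins=5):
    deg = dict(G.degree())
    clu = nx.clustering(G)
    # Quantile bin edges for each measurement.
    d_edges = np.quantile(list(deg.values()), np.linspace(0, 1, n_bins + 1))
    c_edges = np.quantile(list(clu.values()), np.linspace(0, 1, n_bins + 1))
    label = {v: (np.searchsorted(d_edges, deg[v]),
                 np.searchsorted(c_edges, clu[v])) for v in G}
    subgraphs = []
    for bin_id in set(label.values()):
        nodes = [v for v in G if label[v] == bin_id]
        H = G.subgraph(nodes)
        # Connected components of same-bin nodes are the candidates.
        subgraphs.extend(H.subgraph(c) for c in nx.connected_components(H)
                         if len(c) > 2)
    return subgraphs
```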
Abstract:
Cortical bones, essential for mechanical support and structure in many animals, contain a large number of canals organized in an intricate fashion. Using state-of-the-art image analysis and computer graphics, a 3D reconstruction of a whole bone (phalange) of a young chicken was obtained and represented as a complex network in which each canal was associated with an edge and every confluence of three or more canals yielded a node. Representing the bone canal structure as a complex network has allowed several methods to be applied to characterize and analyze the organization and robustness of the canal system. First, the distribution of node degrees (i.e. the number of canals connected to each node) confirmed previous indications that bone canal networks follow a power law and therefore contain some highly connected nodes (hubs). The bone network was also found to be partitioned into communities or modules, i.e. groups of nodes that are more intensely connected to one another than to the rest of the network. We verified that each community exhibits distinct topological properties that are possibly linked with its specific function. To better understand the organization of the bone network, its resilience to two types of failures (random attack and cascaded failures) was also quantified in comparison with randomized and regular counterparts. The results indicate that the modular structure improves the robustness of the bone network compared to a regular network with the same average degree and number of nodes. The effects of disease processes (e.g., osteoporosis) and of mutations in genes (e.g., BMP4) that occur at the molecular level can now be investigated at the mesoscopic level using network-based approaches.
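A hedged sketch of the random-attack part of such a robustness comparison: remove nodes uniformly at random and track the relative size of the largest connected component, for a modular network and a degree-matched regular counterpart. The stand-in networks and batch schedule are illustrative, not the paper's protocol.

```python
# Random-attack robustness: giant-component size as nodes are removed.
import random
import networkx as nx

def attack_curve(G, steps=20, seed=0):
    rng = random.Random(seed)
    G = G.copy()
    n0 = G.number_of_nodes()
    order = list(G.nodes)
    rng.shuffle(order)                    # uniformly random attack order
    curve = []
    for k in range(steps):
        batch = order[k * n0 // steps:(k + 1) * n0 // steps]
        G.remove_nodes_from(batch)
        giant = max(map(len, nx.connected_components(G)), default=0)
        curve.append(giant / n0)          # relative giant-component size
    return curve

# Compare a stand-in modular network with a degree-matched regular one.
G = nx.connected_caveman_graph(20, 6)
d = round(2 * G.number_of_edges() / G.number_of_nodes())
R = nx.random_regular_graph(d, G.number_of_nodes(), seed=0)
print(attack_curve(G)[:5], attack_curve(R)[:5])
```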
Abstract:
Complex networks exist in many areas of science, such as biology, neuroscience, engineering, and sociology. The growing development of this area has led to the introduction of several topological and dynamical measurements which describe and quantify the structure of networks. Such characterization is essential not only for the modeling of real systems but also for the study of the dynamic processes that may take place in them. However, it is not easy to use several measurements together in the analysis of complex networks, owing to the correlations between them and the difficulty of visualizing them. To overcome these limitations, we propose an effective and comprehensive approach for the analysis of complex networks which allows the visualization of several measurements in a few projections containing the largest data variance, and the classification of networks at three levels of detail: vertices, communities, and the global topology. We also demonstrate the efficiency and universality of the proposed methods on a series of real-world networks at all three levels.
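At the vertex level, the projection idea can be sketched as follows: compute several node measurements, standardize them, and project onto the two principal components carrying the largest variance. The particular measurements chosen here are an illustrative assumption.

```python
# Project per-node measurements onto the two largest-variance components.
import networkx as nx
import numpy as np
from sklearn.decomposition import PCA

def node_measurement_projection(G):
    clu = nx.clustering(G)
    bet = nx.betweenness_centrality(G)
    clo = nx.closeness_centrality(G)
    X = np.array([[G.degree(v), clu[v], bet[v], clo[v]] for v in G])
    # Standardize so no single measurement dominates the variance.
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    return PCA(n_components=2).fit_transform(X)
```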
Abstract:
Based on a divide-and-conquer approach, knowledge about nature has been organized into a set of interrelated facts, allowing a natural representation in terms of graphs: each "chunk" of knowledge corresponds to a node, while relationships between such chunks are expressed as edges. This organization becomes particularly clear in the case of mathematical theorems, with their intense cross-implications and relationships. We have derived a web of mathematical theorems from Wikipedia and, thanks to the powerful concept of entropy, identified its more central and frontier elements. Our results also suggest that the central nodes are the oldest theorems, while the frontier nodes are those recently added to the network. The network communities have also been identified, allowing further insights into the organization of this network, such as its highly modular structure.
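The abstract does not specify the entropy measure used, so the sketch below is only one plausible reading: score each node by the entropy of its h-step random-walk transition distribution, with high-entropy nodes read as central and low-entropy ones as frontier. It assumes a connected graph with no isolated nodes.

```python
# Entropy of each node's h-step random-walk transition distribution.
import networkx as nx
import numpy as np

def walk_entropy(G, h=3):
    nodes = list(G.nodes)
    A = nx.to_numpy_array(G, nodelist=nodes)
    P = A / A.sum(axis=1, keepdims=True)      # one-step transitions
    Ph = np.linalg.matrix_power(P, h)         # h-step transitions
    with np.errstate(divide="ignore", invalid="ignore"):
        # Shannon entropy per row, treating 0 * log 0 as 0.
        H = -np.sum(np.where(Ph > 0, Ph * np.log(Ph), 0.0), axis=1)
    return dict(zip(nodes, H))
```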
Abstract:
Nuclear receptors are important targets for pharmaceuticals, but similarities between family members cause difficulties in obtaining highly selective compounds. Synthetic ligands that are selective for thyroid hormone (TH) receptor β (TRβ) vs. TRα reduce cholesterol and fat without effects on heart rate; thus, it is important to understand TRβ-selective binding. Binding of three selective ligands (GC-1, KB141, and GC-24) has been characterized at the atomic level; preferential binding depends on a nonconserved residue (Asn-331β) in the TRβ ligand-binding cavity (LBC), and GC-24 gains extra selectivity from insertion of a bulky side group into an extension of the LBC that opens up only with this ligand. Here we report that the natural TH 3,5,3′-triiodothyroacetic acid (Triac) exhibits a previously unrecognized mechanism of TRβ selectivity. TR x-ray structures reveal a better fit of the ligand with the TRα LBC. The TRβ LBC, however, expands relative to TRα in the presence of Triac (549 Å³ vs. 461 Å³), and molecular dynamics simulations reveal that water occupies the extra space. Increased solvation compensates for the weaker interactions of the ligand with TRβ and permits greater flexibility of the Triac carboxylate group in TRβ than in TRα. We propose that this effect results in lower entropic restraint and decreases the free energy of interactions between Triac and TRβ, explaining subtype-selective binding. Similar effects could potentially be exploited in nuclear receptor drug design.
Abstract:
There has been great interest in deciding whether a combinatorial structure satisfies some property, or in estimating the value of some numerical function associated with this combinatorial structure, by considering only a randomly chosen substructure of sufficiently large, but constant, size. These problems are called property testing and parameter testing, where a property or parameter is said to be testable if it can be estimated accurately in this way. The algorithmic appeal is evident, as, conditional on sampling, this leads to reliable constant-time randomized estimators. Our paper addresses property testing and parameter testing for permutations from a subpermutation perspective; more precisely, we investigate permutation properties and parameters that can be well approximated based on a randomly chosen subpermutation of much smaller size. In this context, we use the theory of convergence of permutation sequences developed by the present authors [C. Hoppen, Y. Kohayakawa, C.G. Moreira, R.M. Sampaio, Limits of permutation sequences through permutation regularity, Manuscript, 2010, 34pp.] to characterize testable permutation parameters, along the lines of the work of Borgs et al. [C. Borgs, J. Chayes, L. Lovász, V.T. Sós, B. Szegedy, K. Vesztergombi, Graph limits and parameter testing, in: STOC '06: Proceedings of the 38th Annual ACM Symposium on Theory of Computing, ACM, New York, 2006, pp. 261-270] in the case of graphs. Moreover, we obtain a permutation result in the direction of a famous result of Alon and Shapira [N. Alon, A. Shapira, A characterization of the (natural) graph properties testable with one-sided error, SIAM J. Comput. 37 (6) (2008) 1703-1727] stating that every hereditary graph property is testable. © 2011 Elsevier B.V. All rights reserved.
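Parameter testing in the sense above can be illustrated with a simple, hedged example (not from the paper): estimating a permutation parameter, the inversion density, from random subpermutations of constant size k. For a testable parameter, the estimate concentrates around the true value as k grows, independently of n.

```python
# Estimate a permutation parameter from constant-size subpermutations.
import itertools
import random

def inversion_density(perm):
    n = len(perm)
    if n < 2:
        return 0.0
    inv = sum(1 for i, j in itertools.combinations(range(n), 2)
              if perm[i] > perm[j])
    return inv / (n * (n - 1) / 2)

def sampled_estimate(perm, k=20, trials=200, seed=0):
    rng = random.Random(seed)
    n = len(perm)
    total = 0.0
    for _ in range(trials):
        idx = sorted(rng.sample(range(n), k))   # random subpermutation
        total += inversion_density([perm[i] for i in idx])
    return total / trials

perm = random.Random(1).sample(range(2000), 2000)   # a random permutation
print(inversion_density(perm), sampled_estimate(perm))
```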
Abstract:
We consider conditions that allow the embedding of linear hypergraphs of fixed size. In particular, we prove that any k-uniform hypergraph H of positive uniform density contains all linear k-uniform hypergraphs of a given size. More precisely, we show that for all integers l >= k >= 2 and every d > 0 there exists ϱ > 0 for which the following holds: if H is a sufficiently large k-uniform hypergraph with the property that the density of H induced on every vertex subset of size ϱn is at least d, then H contains every linear k-uniform hypergraph F with l vertices. The main ingredient in the proof of this result is a counting lemma for linear hypergraphs, which establishes that the straightforward extension of graph ε-regularity to hypergraphs suffices for counting linear hypergraphs. We also consider some related problems. © 2009 Elsevier Inc. All rights reserved.
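For readability, the quantified statement above can be restated compactly in symbols; this is a direct transcription of the prose, with d(H[S]) denoting the density of H induced on a vertex subset S of a sufficiently large n-vertex k-uniform hypergraph H:

```latex
\forall\, \ell \ge k \ge 2,\ \forall\, d > 0,\ \exists\, \varrho > 0:\quad
\Bigl(\, d\bigl(H[S]\bigr) \ge d \ \text{ for every } S \subseteq V(H)
\text{ with } |S| = \varrho n \,\Bigr)
\;\Longrightarrow\;
F \subseteq H \ \text{ for every linear $k$-uniform $F$ on $\ell$ vertices.}
```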