957 results for Pinched-cube topology
STRUCTURE-PROPERTY RELATIONSHIP BETWEEN HALF-WAVE POTENTIALS OF ORGANIC COMPOUNDS AND THEIR TOPOLOGY
Abstract:
A significant correlation was found between the half-wave potentials of organic compounds and their topological indices A(x1), A(x2), and A(x3). The simplicity of calculating the index from the connectivity of the molecular skeleton, together with the significant correlation, indicates its practical value. Good results have been obtained by using these indices to predict the half-wave potentials of some organic compounds.
Abstract:
A new lead(II) phosphonate, Pb[(PO3)2C(OH)CH3]·H2O (1), was hydrothermally synthesized and characterized by IR, elemental analysis, UV, TGA, SEM, and single-crystal X-ray diffraction analysis. The X-ray crystallographic study showed that complex 1 has a two-dimensional double-layered hybrid structure containing interconnected 4- and 12-membered rings and exhibits an unusual (5,5)-connected (4^7·6^3)(4^8·6^2) topology.
Abstract:
Recent studies have noted that vertex degree in the autonomous system (AS) graph exhibits a highly variable distribution [15, 22]. The most prominent explanatory model for this phenomenon is the Barabási-Albert (B-A) model [5, 2]. A central feature of the B-A model is preferential connectivity, meaning that the likelihood that a new node in a growing graph connects to an existing node is proportional to the existing node's degree. In this paper we ask whether a more general explanation than the B-A model, one that does not assume preferential connectivity, is consistent with empirical data. We are motivated by two observations: first, AS degree and AS size are highly correlated [11]; and second, highly variable AS size can arise simply through exponential growth. We construct a model incorporating exponential growth in the size of the Internet and in the number of ASes. We then show analytically that such a model yields a size distribution with a power-law tail. In such a model, if an AS's link formation is roughly proportional to its size, then AS degree will also show high variability. We instantiate the model with empirically derived estimates of growth rates and show that the resulting degree distribution is in good agreement with that of real AS graphs.
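A minimal simulation sketch of this kind of growth model, assuming illustrative growth rates a and b that are not taken from the paper: ASes arrive at an exponentially growing rate, and each AS grows exponentially after its birth, so the observed size distribution develops a power-law tail with exponent roughly b/a.

```python
import numpy as np

# Illustrative parameters (not the paper's): per-AS host growth rate `a`,
# AS arrival growth rate `b`, observation horizon `T`.
a, b, T = 0.5, 0.4, 30.0
rng = np.random.default_rng(0)

# AS birth times on [0, T], with arrival rate proportional to e^{b s}:
# sample by inverting the CDF F(s) = (e^{b s} - 1) / (e^{b T} - 1).
n_as = 20000
u = rng.random(n_as)
births = np.log(1.0 + u * (np.exp(b * T) - 1.0)) / b

# Each AS grows exponentially after birth, so its size at time T is e^{a (T - s)}.
sizes = np.exp(a * (T - births))

# Empirical tail: P(size > x) decays roughly like x^(-b/a), i.e. a power law.
for x in np.logspace(0, np.log10(sizes.max()), 8):
    print(f"P(size > {x:12.1f}) = {(sizes > x).mean():.4f}")
```

Under these assumptions the tail exponent is simply the ratio of the two growth rates, so high variability in AS size (and, if link formation tracks size, in AS degree) appears without any preferential-attachment rule.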
Abstract:
Current research on Internet-based distributed systems emphasizes the scalability of overlay topologies for efficient search and retrieval of data items, as well as routing amongst peers. However, most existing approaches fail to address the transport of data across these logical networks in accordance with quality of service (QoS) constraints. Consequently, this paper investigates the use of scalable overlay topologies for routing real-time media streams between publishers and potentially many thousands of subscribers. Specifically, we analyze the costs of using k-ary n-cubes for QoS-constrained routing. Given a number of nodes in a distributed system, we calculate the optimal k-ary n-cube structure for minimizing the average distance between any pair of nodes. Using this structure, we describe a greedy algorithm that selects paths between nodes in accordance with the real-time delays along physical links. We show this method improves the routing latencies by as much as 67%, compared to approaches that do not consider physical link costs. We are in the process of developing a method for adaptive node placement in the overlay topology, based upon the locations of publishers, subscribers, physical link costs and per-subscriber QoS constraints. One such method for repositioning nodes in logical space is discussed, to improve the likelihood of meeting service requirements on data routed between publishers and subscribers. Future work will evaluate the benefits of such techniques more thoroughly.
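As a rough illustration of the structure-selection step described above, the following sketch (in Python; the node count of 4000 and the search bounds are illustrative assumptions, not values from the paper) enumerates k-ary n-cubes large enough to host all nodes and picks the one with the smallest mean inter-node hop distance, using the fact that hop distances in a torus add per dimension.

```python
import itertools

def ring_mean_distance(k):
    # Exact mean hop distance between two uniformly random nodes on a k-node ring.
    return sum(min(d, k - d) for d in range(k)) / k

def mean_cube_distance(k, n):
    # In a k-ary n-cube (torus) distances add per dimension,
    # so the mean distance is n times the ring mean.
    return n * ring_mean_distance(k)

def best_kary_ncube(num_nodes, k_max=16, n_max=16):
    # Among structures with at least num_nodes slots, pick the one
    # minimising the mean inter-node distance.
    best = None
    for k, n in itertools.product(range(2, k_max + 1), range(1, n_max + 1)):
        if k ** n >= num_nodes:
            d = mean_cube_distance(k, n)
            if best is None or d < best[0]:
                best = (d, k, n)
    return best

# Illustrative overlay size (not a value from the paper).
d, k, n = best_kary_ncube(4000)
print(f"{k}-ary {n}-cube with {k**n} slots, mean distance ~ {d:.2f} hops")
```

The greedy, delay-aware path selection and the adaptive node placement discussed in the abstract would then operate on top of whatever (k, n) such a search returns.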
Abstract:
Wireless sensor networks are characterized by limited energy resources. To conserve energy, application-specific aggregation (fusion) of data reports from multiple sensors can be beneficial in reducing the amount of data flowing over the network. Furthermore, controlling the topology by scheduling the activity of nodes between active and sleep modes has often been used to uniformly distribute the energy consumption among all nodes by de-synchronizing their activities. We present an integrated analytical model to study the joint performance of in-network aggregation and topology control. We define performance metrics that capture the tradeoffs among delay, energy, and fidelity of the aggregation. Our results indicate that to achieve high fidelity levels under medium to high event reporting load, shorter and fatter aggregation/routing trees (toward the sink) offer the best delay-energy tradeoff as long as topology control is well coordinated with routing.
Abstract:
Effective engineering of the Internet is predicated upon a detailed understanding of issues such as the large-scale structure of its underlying physical topology, the manner in which it evolves over time, and the way in which its constituent components contribute to its overall function. Unfortunately, developing a deep understanding of these issues has proven to be a challenging task, since it in turn involves solving difficult problems such as mapping the actual topology, characterizing it, and developing models that capture its emergent behavior. Consequently, even though there are a number of topology models, it is an open question as to how representative the topologies they generate are of the actual Internet. Our goal is to produce a topology generation framework which improves the state of the art and is based on design principles which include representativeness, inclusiveness, and interoperability. Representativeness leads to synthetic topologies that accurately reflect many aspects of the actual Internet topology (e.g. hierarchical structure, degree distribution, etc.). Inclusiveness combines the strengths of as many generation models as possible in a single generation tool. Interoperability provides interfaces to widely-used simulation and visualization applications such as ns and SSF. We call such a tool a universal topology generator. In this paper we discuss the design, implementation and usage of the BRITE universal topology generation tool that we have built. We also describe the BRITE Analysis Engine, BRIANA, which is an independent piece of software designed and built upon BRITE design goals of flexibility and extensibility. The purpose of BRIANA is to act as a repository of analysis routines along with a user-friendly interface that allows its use on different topology formats.
Abstract:
Considerable attention has been focused on the properties of graphs derived from Internet measurements. Router-level topologies collected via traceroute studies have led some authors to conclude that the router graph of the Internet is a scale-free graph, or more generally a power-law random graph. In such a graph, the degree distribution of nodes follows a distribution with a power-law tail. In this paper we argue that the evidence to date for this conclusion is at best insufficient. We show that graphs appearing to have power-law degree distributions can arise surprisingly easily when sampling graphs whose true degree distribution is not at all like a power law. For example, given a classical Erdős-Rényi sparse random graph, the subgraph formed by a collection of shortest paths from a small set of random sources to a larger set of random destinations can easily appear to show a degree distribution remarkably like a power law. We explore the reasons why this effect arises and show that, in such a setting, edges are sampled in a highly biased manner. This insight allows us to distinguish measurements taken from Erdős-Rényi graphs from those taken from power-law random graphs. When we apply this distinction to a number of well-known datasets, we find that the evidence for sampling bias in these datasets is strong.
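The sampling effect described above is straightforward to reproduce. The sketch below (Python with networkx; the graph size, number of sources, and number of destinations are illustrative choices, not the paper's) builds a sparse Erdős-Rényi graph, keeps only the edges seen on one shortest path per source-destination pair, and compares the degree distributions of the true and sampled graphs.

```python
import random

import networkx as nx

random.seed(1)

# A sparse Erdős-Rényi graph (sizes are illustrative, not those used in the paper).
n, avg_deg = 5000, 6
G = nx.fast_gnp_random_graph(n, avg_deg / (n - 1), seed=1)
G = G.subgraph(max(nx.connected_components(G), key=len)).copy()

nodes = list(G.nodes())
sources = random.sample(nodes, 5)
dests = random.sample(nodes, 1000)

# Traceroute-like view: keep only edges on one shortest path per (source, destination) pair.
observed = nx.Graph()
for s in sources:
    paths = nx.single_source_shortest_path(G, s)
    for t in dests:
        p = paths[t]
        observed.add_edges_from(zip(p, p[1:]))

def ccdf(degrees, points=(1, 2, 4, 8, 16)):
    # Fraction of nodes with degree >= d, for a few values of d.
    degrees = list(degrees)
    return {d: round(sum(x >= d for x in degrees) / len(degrees), 4) for d in points}

print("true degrees:   ", ccdf(d for _, d in G.degree()))
print("sampled degrees:", ccdf(d for _, d in observed.degree()))
# The sampled subgraph concentrates most observed nodes at degree 1-2 while a few
# nodes near the sources retain high degree, so its degree distribution looks far
# more heavy-tailed than the Poisson-like distribution of the underlying graph.
```

Comparing the two printed tails makes the bias visible: edges close to the sources are traversed by many paths and are sampled far more often than edges elsewhere.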
Abstract:
We recently developed an approach for testing the accuracy of network inference algorithms by applying them to biologically realistic simulations with known network topology. Here, we seek to determine the degree to which the network topology and the data sampling regime influence the ability of our Bayesian network inference algorithm, NETWORKINFERENCE, to recover gene regulatory networks. NETWORKINFERENCE performed well at recovering feedback loops and multiple targets of a regulator with small amounts of data, but required more data to recover multiple regulators of a gene. When collecting the same number of data samples at different intervals from the system, the best recovery was obtained with sampling intervals long enough that sampling covered the propagation of regulation through the network, but not so long that the intervals missed internal dynamics. These results further elucidate the possibilities and limitations of network inference based on biological data.
Abstract:
The CUBE project is a proposal for classroom work in mathematics in which, starting from the film CUBE (Vincenzo Natali, 1997), a series of introductory activities on three-dimensional analytic geometry and geometric spatial visualization is developed. It consists of two parts, one concerning the script of the film and the other directed toward developing the geometry block of the 4th-year ESO curriculum. The characteristics of the proposal make it an open, interdisciplinary project, well suited to the practice of meaningful learning in a context of procedural work.
Abstract:
The last few years have seen a substantial increase in geometric complexity for 3D flow simulation. In this paper we describe the challenges in generating computational grids for 3D aerospace configurations and demonstrate the progress made toward eventually achieving a push-button technology from CAD to visualized flow. Special emphasis is given to the interfacing from the grid generator to the flow solver by semi-automatic generation of boundary conditions during the grid generation process. In this regard, once a grid has been generated, push-button operation of most commercial flow solvers has been achieved. This will be demonstrated by an ad hoc simulation of the Hopper configuration.
Abstract:
The impact of source/drain engineering on the performance of a six-transistor (6-T) static random access memory (SRAM) cell, based on 22 nm double-gate (DG) SOI MOSFETs, has been analyzed using mixed-mode simulation for three different circuit topologies for low-voltage operation. The trade-offs associated with the various conflicting requirements relating to read/write/standby operations have been evaluated comprehensively in terms of eight performance metrics, namely retention noise margin, static noise margin, static voltage/current noise margin, write-ability current, write trip voltage/current, and leakage current. Optimal design parameters with a gate-underlap architecture have been identified to enhance overall SRAM performance, and the influence of parasitic source/drain resistance and supply voltage scaling has been investigated. A gate-underlap device designed with a spacer-to-straggle (s/σ) ratio in the range 2-3 yields improved SRAM performance metrics, regardless of circuit topology. An optimal two-word-line double-gate SOI 6-T SRAM cell design exhibits a high SNM of ~162 mV, I_wr of ~35 μA, and a low I_leak of ~70 pA at V_DD = 0.6 V, while maintaining SNM of ~30% of V_DD over the supply voltage (V_DD) range of 0.4-0.9 V.
Abstract:
The problem of topology control is to assign per-node transmission power such that the resulting topology is energy efficient and satisfies certain global properties such as connectivity. The conventional approach to achieve these objectives is based on the fundamental assumption that nodes are socially responsible. We examine the following question: if nodes behave in a selfish manner, how does it impact the overall connectivity and energy consumption in the resulting topologies? We pose the above problem as a noncooperative game and use game-theoretic analysis to address it. We study Nash equilibrium properties of the topology control game and evaluate the efficiency of the induced topology when nodes employ a greedy best response algorithm. We show that even when the nodes have complete information about the network, the steady-state topologies are suboptimal. We propose a modified algorithm based on a better response dynamic and show that this algorithm is guaranteed to converge to energy-efficient and connected topologies. Moreover, the node transmit power levels are more evenly distributed, and the network performance is comparable to that obtained from centralized algorithms.
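A toy sketch of a greedy best-response dynamic in this spirit (Python with networkx; the planar placement, the bidirectional-link rule, and the reachability-minus-power utility are illustrative assumptions rather than the paper's exact game model): each node repeatedly picks the transmit power that maximizes its own payoff given the current powers of the others.

```python
import itertools
import random

import networkx as nx

random.seed(7)

# Illustrative setting: nodes on a unit square; a node's strategy is its
# transmit power, represented here simply as a radius.
N = 12
pts = [(random.random(), random.random()) for _ in range(N)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts] for ax, ay in pts]

def topology(powers):
    # Bidirectional links: an edge exists only if both endpoints cover the distance.
    g = nx.Graph()
    g.add_nodes_from(range(N))
    for i, j in itertools.combinations(range(N), 2):
        if powers[i] >= dist[i][j] and powers[j] >= dist[i][j]:
            g.add_edge(i, j)
    return g

def utility(i, powers, reach_weight=10.0):
    # Assumed payoff: benefit grows with the number of nodes i can reach, cost with its own power.
    g = topology(powers)
    return reach_weight * len(nx.node_connected_component(g, i)) - powers[i]

def best_response(i, powers):
    # Candidate powers: zero, or just enough to reach some particular node.
    candidates = [0.0] + [dist[i][j] for j in range(N) if j != i]
    return max(candidates, key=lambda p: utility(i, powers[:i] + [p] + powers[i + 1:]))

powers = [max(row) for row in dist]  # every node starts at maximum power
for _ in range(50):                  # cap the number of best-response rounds
    updated = False
    for i in range(N):
        p = best_response(i, powers)
        if abs(p - powers[i]) > 1e-12:
            powers[i], updated = p, True
    if not updated:
        break

g = topology(powers)
print("connected:", nx.is_connected(g), " total power:", round(sum(powers), 3))
```

As the abstract notes, such greedy dynamics can settle on suboptimal (or even disconnected) topologies, which is what motivates the better-response variant proposed in the paper.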