997 results for random topology


Relevance:

20.00%

Publisher:

Abstract:

We consider the problem of performing topological optimizations of distributed hash tables. Such hash tables, including Chord and Tapestry, are a popular building block for distributed applications. Optimizing topologies over one-dimensional hash spaces is particularly difficult, as the higher dimensionality of the underlying network makes close fits unlikely. Instead, current schemes are limited to heuristically performing local optimizations, finding the best of a small random set of peers. We propose a new class of topology optimizations based on the existence of clusters of close overlay members within the underlying network. By constructing an additional overlay for each cluster, a significant portion of the search procedure can be performed within the local cluster, with a corresponding reduction in search time. Finally, we discuss the effects of these additional overlays on spatial locality and other load-balancing schemes.
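The cluster-based search described above amounts to a two-level lookup: a peer first consults an overlay restricted to its nearby cluster and falls back to the global overlay on a miss. The sketch below is a deliberate simplification under assumed names; real DHT levels would be multi-hop routing structures (e.g. Chord finger tables), not flat dictionaries.

```python
import hashlib

def key_id(key: str, bits: int = 32) -> int:
    """Hash a key onto a circular ID space (size chosen for illustration)."""
    return int(hashlib.sha1(key.encode()).hexdigest(), 16) % (1 << bits)

def lookup(kid: int, cluster_overlay: dict, global_overlay: dict):
    """Two-level resolution: try the local cluster overlay first, then global.

    Hits in the cluster overlay stay within the nearby cluster, which is
    the source of the reduced search time claimed above.
    """
    if kid in cluster_overlay:
        return cluster_overlay[kid], "cluster"
    return global_overlay[kid], "global"
```

The point is only the search-order optimisation: queries whose keys are served by a nearby cluster member never pay the wide-area routing cost.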

Relevance:

20.00%

Publisher:

Abstract:

In an n-way broadcast application each one of n overlay nodes wants to push its own distinct large data file to all other n-1 destinations as well as download their respective data files. BitTorrent-like swarming protocols are ideal choices for handling such massive data volume transfers. The original BitTorrent targets one-to-many broadcasts of a single file to a very large number of receivers and thus, by necessity, employs an almost random overlay topology. n-way broadcast applications, on the other hand, owing to their inherent n-squared nature, are realizable only in small to medium scale networks. In this paper, we show that we can leverage this scale constraint to construct optimized overlay topologies that take into consideration the end-to-end characteristics of the network and, as a consequence, deliver far superior performance compared to random and myopic (local) approaches. We present the Max-Min and Max-Sum peer-selection policies used by individual nodes to select their neighbors. The first strives to maximize the available bandwidth to the slowest destination, while the second maximizes the aggregate output rate. We design a swarming protocol suitable for n-way broadcast and operate it on top of overlay graphs formed by nodes that employ Max-Min or Max-Sum policies. Using trace-driven simulation and measurements from a PlanetLab prototype implementation, we demonstrate that the performance of swarming on top of our constructed topologies is far superior to the performance of random and myopic overlays. Moreover, we show how to modify our swarming protocol to allow it to accommodate selfish nodes.
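The two peer-selection policies can be illustrated with a small brute-force sketch. Here each destination's achievable rate is assumed to be the best two-hop bottleneck bandwidth through any chosen neighbor; the link-bandwidth map, node names, and exhaustive search over candidate sets are illustrative assumptions, not the paper's algorithm.

```python
from itertools import combinations

def dest_rate(neighbors, dest, bw):
    """Best achievable rate to dest via any chosen neighbor: the bottleneck
    (minimum) of the two overlay links on the relayed path."""
    return max(min(bw[("me", n)], bw[(n, dest)]) for n in neighbors)

def pick_neighbors(bw, candidates, dests, k, policy):
    """Brute-force search over k-subsets of candidate neighbors."""
    if policy == "max-min":
        # Protect the slowest destination.
        score = lambda s: min(dest_rate(s, d, bw) for d in dests)
    else:  # "max-sum"
        # Maximize the aggregate output rate.
        score = lambda s: sum(dest_rate(s, d, bw) for d in dests)
    return max(combinations(candidates, k), key=score)
```

With heterogeneous links the policies can disagree: a neighbor with balanced links to all destinations wins under Max-Min, while a neighbor with one very fast and one slow link can win under Max-Sum.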

Relevance:

20.00%

Publisher:

Abstract:

Recent empirical studies have shown that Internet topologies exhibit power laws of the form y = x^α for the following relationships: (P1) outdegree of a node (domain or router) versus rank; (P2) number of nodes versus outdegree; (P3) number of node pairs within a neighborhood versus neighborhood size (in hops); and (P4) eigenvalues of the adjacency matrix versus rank. However, causes for the appearance of such power laws have not been convincingly given. In this paper, we examine four factors in the formation of Internet topologies: (F1) preferential connectivity of a new node to existing nodes; (F2) incremental growth of the network; (F3) distribution of nodes in space; and (F4) locality of edge connections. In synthetically generated network topologies, we study the relevance of each factor in causing the aforementioned power laws as well as other properties, namely diameter, average path length, and clustering coefficient. Different kinds of network topologies are generated: (T1) topologies generated using our parametrized generator, which we call BRITE; (T2) random topologies generated using the well-known Waxman model; (T3) Transit-Stub topologies generated using the GT-ITM tool; and (T4) regular grid topologies. We observe that some generated topologies may not obey power laws P1 and P2. Thus, the existence of these power laws can be used to validate the accuracy of a given tool in generating representative Internet topologies. Power laws P3 and P4 were observed in nearly all considered topologies, but different topologies showed different values of the power exponent α. Thus, while the presence of power laws P3 and P4 does not give strong evidence for the representativeness of a generated topology, the value of α in P3 and P4 can be used as a litmus test for that representativeness. We also find that factors F1 and F2 are the key contributors in our study providing the resemblance of our generated topologies to the Internet.
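A relationship like P1 is typically checked by fitting a line in log-log space, where a power law y = C·x^α appears as a straight line with slope α. A minimal sketch of that fit (plain least squares on synthetic data, not the paper's measurement pipeline):

```python
import math

def loglog_slope(xs, ys):
    """Least-squares slope of log(y) against log(x); for y = C * x**a
    this recovers the exponent a exactly."""
    lx, ly = [math.log(x) for x in xs], [math.log(y) for y in ys]
    n = len(lx)
    mx, my = sum(lx) / n, sum(ly) / n
    num = sum((a - mx) * (b - my) for a, b in zip(lx, ly))
    den = sum((a - mx) ** 2 for a in lx)
    return num / den

# Synthetic P1-style data: outdegree falls off with rank as rank**-0.8.
ranks = list(range(1, 101))
outdegree = [r ** -0.8 for r in ranks]
alpha = loglog_slope(ranks, outdegree)
```

Real topologies only approximate a straight line in log-log space, so the fitted slope is an estimate of α rather than an exact value as in this synthetic example.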

Relevance:

20.00%

Publisher:

Abstract:

Continuing our development of a mathematical theory of stochastic microlensing, we study the random shear and expected number of random lensed images of different types. In particular, we characterize the first three leading terms in the asymptotic expression of the joint probability density function (pdf) of the random shear tensor due to point masses in the limit of an infinite number of stars. Up to this order, the pdf depends on the magnitude of the shear tensor, the optical depth, and the mean number of stars through a combination of radial position and the star's mass. As a consequence, the pdf's of the shear components are seen to converge, in the limit of an infinite number of stars, to shifted Cauchy distributions, which shows that the shear components have heavy tails in that limit. The asymptotic pdf of the shear magnitude in the limit of an infinite number of stars is also presented. All the results on the random microlensing shear are given for a general point in the lens plane. Extending to the general random distributions (not necessarily uniform) of the lenses, we employ the Kac-Rice formula and Morse theory to deduce general formulas for the expected total number of images and the expected number of saddle images. We further generalize these results by considering random sources defined on a countable compact covering of the light source plane. This is done to introduce the notion of global expected number of positive parity images due to a general lensing map. Applying the result to microlensing, we calculate the asymptotic global expected number of minimum images in the limit of an infinite number of stars, where the stars are uniformly distributed. This global expectation is bounded, while the global expected number of images and the global expected number of saddle images diverge as the order of the number of stars. © 2009 American Institute of Physics.
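The practical meaning of the shifted-Cauchy limit is that the shear components have heavy tails. The seeded simulation below is illustrative only (it is not the microlensing derivation): it contrasts the tail mass of a standard Cauchy distribution, sampled by inverse-CDF, with that of a standard normal.

```python
import math
import random

random.seed(7)
n = 100_000

# Standard Cauchy draws via the inverse CDF: x = tan(pi * (u - 1/2)).
cauchy = [math.tan(math.pi * (random.random() - 0.5)) for _ in range(n)]
normal = [random.gauss(0.0, 1.0) for _ in range(n)]

# Heavy tails: P(|X| > 10) is roughly 2/(pi*10), about 6%, for the Cauchy,
# but vanishingly small for a standard normal.
tail_cauchy = sum(abs(x) > 10 for x in cauchy) / n
tail_normal = sum(abs(x) > 10 for x in normal) / n
```

A consequence for the shear components is that sample means and variances are poor summaries in the many-star limit; quantile-based statistics remain meaningful.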

Relevance:

20.00%

Publisher:

Abstract:

Genome rearrangement often produces chromosomes with two centromeres (dicentrics) that are inherently unstable because of bridge formation and breakage during cell division. However, mammalian dicentrics, and particularly those in humans, can be quite stable, usually because one centromere is functionally silenced. Molecular mechanisms of centromere inactivation are poorly understood, since there are few systems in which dicentric human chromosomes can be created experimentally. Here, we describe a human cell culture model that enriches for de novo dicentrics. We demonstrate that transient disruption of human telomere structure non-randomly produces dicentric fusions involving acrocentric chromosomes. The induced dicentrics vary in structure near fusion breakpoints and, like naturally occurring dicentrics, exhibit various inter-centromeric distances. Many functional dicentrics persist for months after formation. Even those with distantly spaced centromeres remain functionally dicentric for 20 cell generations. Other dicentrics within the population reflect centromere inactivation. In some cases, centromere inactivation occurs by an apparently epigenetic mechanism. In other dicentrics, the size of the alpha-satellite DNA array associated with CENP-A is reduced compared to the same array before dicentric formation. Extra-chromosomal fragments containing CENP-A often appear in the same cells as dicentrics. Some of these fragments are derived from the same alpha-satellite DNA array as inactivated centromeres. Our results indicate that dicentric human chromosomes undergo alternative fates after formation. Many retain two active centromeres and are stable through multiple cell divisions. Others undergo centromere inactivation. This event occurs within a broad temporal window and can involve deletion of chromatin that marks the locus as a site for CENP-A maintenance/replenishment.

Relevance:

20.00%

Publisher:

Abstract:

Although many feature selection methods for classification have been developed, there is a need to identify genes in high-dimensional data with censored survival outcomes. Traditional methods for gene selection in classification problems have several drawbacks. First, the majority of gene selection approaches for classification are single-gene based. Second, many of the gene selection procedures are not embedded within the algorithm itself. The technique of random forests has been found to perform well in high-dimensional data settings with survival outcomes, and it has an embedded feature to identify variables of importance. It is therefore an ideal candidate for gene selection in high-dimensional data with survival outcomes. In this paper, we develop a novel method based on random forests to identify a set of prognostic genes. We compare our method with several machine learning methods and various node split criteria using several real data sets. Our method performed well in both simulations and real data analysis. Additionally, we have shown the advantages of our approach over single-gene-based approaches. Our method incorporates multivariate correlations in microarray data for survival outcomes, allowing us to better utilize the information available from microarray data with survival outcomes.
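The embedded variable-importance idea can be illustrated with permutation importance: shuffle one feature column and measure the resulting accuracy drop. The toy model and data below are hypothetical stand-ins; the paper's method operates inside random forests on survival outcomes, which this sketch does not implement.

```python
import random

def permutation_importance(predict, X, y, feature, trials=30, seed=1):
    """Mean accuracy drop when one feature column is shuffled.

    A feature the model truly uses loses accuracy when scrambled;
    an ignored feature does not.
    """
    rng = random.Random(seed)
    base = sum(predict(r) == t for r, t in zip(X, y)) / len(y)
    drops = []
    for _ in range(trials):
        col = [r[feature] for r in X]
        rng.shuffle(col)
        Xp = [r[:feature] + [v] + r[feature + 1:] for r, v in zip(X, col)]
        acc = sum(predict(r) == t for r, t in zip(Xp, y)) / len(y)
        drops.append(base - acc)
    return sum(drops) / trials

# Toy data: the label is exactly feature 0; feature 1 is pure noise.
data_rng = random.Random(0)
X = [[i % 2, data_rng.random()] for i in range(200)]
y = [r[0] for r in X]
model = lambda r: r[0]   # a "perfect" classifier that ignores feature 1
```

Scrambling the informative feature costs this toy model roughly half its accuracy, while scrambling the noise feature costs nothing, which is the contrast an embedded importance measure exploits.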

Relevance:

20.00%

Publisher:

Abstract:

© 2015 IOP Publishing Ltd & London Mathematical Society. This is a detailed analysis of invariant measures for one-dimensional dynamical systems with random switching. In particular, we prove the smoothness of the invariant densities away from critical points and describe the asymptotics of the invariant densities at critical points.

Relevance:

20.00%

Publisher:

Abstract:

© 2015 Society for Industrial and Applied Mathematics. We consider parabolic PDEs with randomly switching boundary conditions. In order to analyze these random PDEs, we consider more general stochastic hybrid systems and prove convergence to, and properties of, a stationary distribution. Applying these general results to the heat equation with randomly switching boundary conditions, we find explicit formulae for various statistics of the solution and obtain almost sure results about its regularity and structure. These results are of particular interest for biological applications as well as for their significant departure from behavior seen in PDEs forced by disparate Gaussian noise. Our general results also have applications to other types of stochastic hybrid systems, such as ODEs with randomly switching right-hand sides.
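A simple instance of such a stochastic hybrid system is an ODE whose right-hand side switches at random (Markov) times between two contracting vector fields; the process then wanders between the two fixed points. The seeded Euler sketch below is only an illustration of the setup: the switching rate, vector fields, and step size are arbitrary assumptions, not quantities from the paper.

```python
import random

random.seed(3)
dt, steps = 0.01, 20000      # Euler step size and number of steps
rate = 1.0                   # Markov switching rate between the two states
targets = (0.0, 1.0)         # in state s, the flow is dx/dt = targets[s] - x

x, state = 0.5, 0
xs = []
for _ in range(steps):
    if random.random() < rate * dt:   # random switch of the right-hand side
        state = 1 - state
    x += dt * (targets[state] - x)    # Euler step of the currently active flow
    xs.append(x)

mean_x = sum(xs) / len(xs)
```

Both flows contract toward their fixed points, so the trajectory stays trapped between 0 and 1; the invariant density of the switching process is supported on that interval, which is where the smoothness and critical-point asymptotics of such densities become the interesting questions.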

Relevance:

20.00%

Publisher:

Abstract:

We recently developed an approach for testing the accuracy of network inference algorithms by applying them to biologically realistic simulations with known network topology. Here, we seek to determine the degree to which the network topology and data sampling regime influence the ability of our Bayesian network inference algorithm, NETWORKINFERENCE, to recover gene regulatory networks. NETWORKINFERENCE performed well at recovering feedback loops and multiple targets of a regulator with small amounts of data, but required more data to recover multiple regulators of a gene. When collecting the same number of data samples at different intervals from the system, the best recovery was produced by sampling intervals long enough that sampling covered the propagation of regulation through the network, but not so long that the intervals missed internal dynamics. These results further elucidate the possibilities and limitations of network inference based on biological data.

Relevance:

20.00%

Publisher:

Abstract:

The last few years have seen a substantial increase in the geometric complexity of 3D flow simulations. In this paper we describe the challenges in generating computational grids for 3D aerospace configurations and demonstrate the progress made toward eventually achieving a push-button technology from CAD to visualized flow. Special emphasis is given to the interface between the grid generator and the flow solver through semi-automatic generation of boundary conditions during the grid generation process. In this regard, once a grid has been generated, push-button operation of most commercial flow solvers has been achieved. This is demonstrated by an ad hoc simulation of the Hopper configuration.

Relevance:

20.00%

Publisher:

Abstract:

1. A first step in the analysis of complex movement data often involves discretisation of the path into a series of step lengths and turns, for example in the analysis of specialised random walks such as Lévy flights. However, the identification of turning points, and therefore step lengths, in a tortuous path depends on ad hoc parameter choices. Consequently, studies testing for movement patterns such as Lévy flights in these data have generated debate. However, studies focusing on one-dimensional (1D) data, as in the vertical displacements of marine pelagic predators, where turning points can be identified unambiguously, have provided strong support for Lévy-flight movement patterns. 2. Here, we investigate how step-length distributions in 3D movement patterns would be interpreted by tags recording in 1D (i.e. depth) and demonstrate the dimensional symmetry previously shown mathematically for Lévy-flight movements. We test the robustness of this symmetry by simulating several measurement errors common in empirical datasets and find Lévy patterns and exponents to be robust to low-quality movement data. 3. We then consider exponential and composite Brownian random walks and show that these also project into 1D with sufficient symmetry to be clearly identifiable as such. 4. By extending the symmetry paradigm, we propose a new methodology for step-length identification in 2D or 3D movement data. The methodology is successfully demonstrated in a re-analysis of wandering albatross Global Positioning System (GPS) location data previously analysed using a complex methodology to determine bird-landing locations as turning points in a Lévy walk. For these high-resolution GPS data, we show strong evidence for albatross foraging patterns approximated by truncated Lévy flights spanning over 3.5 orders of magnitude. 5. Our simple methodology and freely available software can be used with any 2D or 3D movement data at any scale or resolution and are robust to common empirical measurement errors. The method should find wide applicability in the field of movement ecology, spanning the study of motile cells to humans.

Relevance:

20.00%

Publisher:

Abstract:

Scepticism over stated preference surveys conducted online revolves around concerns over "professional respondents" who might rush through the questionnaire without sufficiently considering the information provided. To gain insight into this phenomenon and test the effect of response time on choice randomness, this study makes use of a recently conducted choice experiment survey on the ecological and amenity effects of an offshore windfarm in the UK. The positive relationship between self-rated and inferred attribute attendance and response time is taken as evidence of a link between response time and cognitive effort. Subsequently, the generalised multinomial logit model is employed to test the effect of response time on scale, which indicates the weight of the deterministic relative to the error component in the random utility model. Results show that longer response time increases scale, i.e. decreases choice randomness. This positive scale effect of response time is further found to be non-linear, wearing off at some point beyond which extreme response time decreases scale. While response time does not systematically affect welfare estimates, higher response time increases the precision of such estimates. These effects persist when self-reported choice certainty is controlled for. Implications of the results for online stated preference surveys and further research are discussed.
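The role of the scale parameter can be made concrete with the multinomial logit choice probabilities P(i) = exp(λV_i) / Σ_j exp(λV_j): as λ grows, choices concentrate on the highest-utility alternative; as λ shrinks toward zero, they approach uniform randomness. The utilities and λ values below are arbitrary illustrations, not estimates from the study.

```python
import math

def choice_probs(utilities, scale):
    """Multinomial logit with an explicit scale parameter:
    P(i) = exp(scale * V_i) / sum_j exp(scale * V_j).
    A higher scale weights the deterministic utility more heavily
    against the random error component."""
    weights = [math.exp(scale * v) for v in utilities]
    total = sum(weights)
    return [w / total for w in weights]

V = [1.0, 0.5, 0.0]             # deterministic utilities of three alternatives
careful = choice_probs(V, 5.0)  # high scale: long response time, low randomness
rushed = choice_probs(V, 0.5)   # low scale: rushed answers, near-uniform choice
```

In this stylised picture, the study's finding that longer response time increases scale means the "careful" probability vector, concentrated on the best alternative, is the better description of slow respondents, while rushed respondents look closer to the flat vector.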