128 results for RANDOM REGULAR GRAPHS
Abstract:
A ranking method assigns to every weighted directed graph a (weak) ordering of the nodes. In this paper we axiomatize the ranking method that ranks the nodes according to their outflow, using four independent axioms. Besides the well-known axioms of anonymity and positive responsiveness, we introduce outflow monotonicity – meaning that, in a pairwise comparison between two nodes, a node does not do worse if its own outflow does not decrease and the other node's outflow does not increase – and order preservation – meaning that if two weighted digraphs are added and the pairwise ranking between two nodes is the same in both, then it is also their pairwise ranking in the 'sum' weighted digraph. The outflow ranking method generalizes the ranking by outdegree for directed graphs, and therefore also generalizes the ranking by Copeland score for tournaments.
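To make the ranked quantity concrete, here is a minimal Python sketch (my own illustration, not the authors' code): a node's outflow is the sum of the weights on its outgoing arcs, and nodes with equal outflow share an indifference class of the weak ordering.

    from collections import defaultdict

    def outflow_ranking(arcs):
        # arcs: iterable of (source, target, weight) triples.
        # Returns the weak ordering as a list of indifference classes,
        # highest outflow first.
        outflow = defaultdict(float)
        nodes = set()
        for u, v, w in arcs:
            nodes.update((u, v))
            outflow[u] += w
        classes = defaultdict(list)
        for n in nodes:
            classes[outflow[n]].append(n)
        return [sorted(classes[s]) for s in sorted(classes, reverse=True)]

    # On an unweighted tournament this reduces to ranking by Copeland score:
    outflow_ranking([("a", "b", 1), ("b", "c", 1), ("a", "c", 1)])
    # -> [['a'], ['b'], ['c']]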
Abstract:
The comet assay is a technique used to quantify DNA damage and repair at the cellular level. In the assay, cells are embedded in agarose and the cellular content is stripped away, leaving only the DNA trapped in an agarose cavity, which can then be electrophoresed. Damaged DNA can enter the agarose and migrate, while undamaged DNA cannot and is retained. DNA damage is measured as the proportion of the migratory 'tail' DNA relative to the total DNA in the cell. The basis of these arbitrary values is established in the comet acquisition phase, using fluorescence microscopy with a stoichiometric stain in tandem with image analysis software. Current acquisition methods are expected to select comets both objectively and at random. In this paper we examine the 'randomness' of the acquisition phase and suggest an alternative method that offers objective and unbiased comet selection. To achieve this, we have adopted a survey sampling approach widely used in stereology: systematic random sampling (SRS). This is desirable because it provides an impartial and reproducible method of comet analysis that can be applied both manually and in automated systems. By making use of an unbiased sampling frame and microscope verniers, we are able to increase the precision of estimates of DNA damage. Results from a multiple-user pooled-variation experiment showed that the SRS technique attained lower variability than the traditional approach. A single-user repetition experiment showed greater individual variances without being detrimental to overall averages. This suggests that the SRS method offers a better reflection of the DNA damage on a given slide as well as better user reproducibility.
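A minimal sketch of the sampling idea (the field spacing and slide extent below are hypothetical; the paper implements SRS via microscope verniers): one random offset fixes an equally spaced grid of fields, so comet selection is unbiased yet fully reproducible given the offset.

    import random

    def srs_positions(extent_mm, step_mm):
        # One random start inside the first interval, then fixed steps:
        # every location has equal inclusion probability, and the user
        # has no influence on which fields are scored.
        start = random.uniform(0, step_mm)
        positions = []
        x = start
        while x < extent_mm:
            positions.append(round(x, 3))
            x += step_mm
        return positions

    # Hypothetical 20 mm x 20 mm scoring region sampled every 2.5 mm:
    fields = [(x, y)
              for x in srs_positions(20.0, 2.5)
              for y in srs_positions(20.0, 2.5)]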
Abstract:
We study the typical entanglement properties of a system comprising two independent qubit environments interacting via a shuttling ancilla. The initial preparation of the environments is modeled using random matrix techniques. The entanglement measure used in our study is then averaged over many histories of randomly prepared environmental states. Under a Heisenberg interaction model, the average entanglement between the ancilla and one of the environments remains constant, regardless of the preparation of the latter and the details of the interaction. We also show that, upon suitable kinematic and dynamical changes in the ancilla-environment subsystems, the entanglement-sharing structure undergoes abrupt modifications associated with a change in the multipartite entanglement class of the overall system's state. These results are invariant with respect to the randomized initial state of the environments.
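The random-state averaging can be sketched as follows (a minimal illustration; the measure, dimensions and seed are my assumptions, and the paper's interaction dynamics are not reproduced): Haar-random pure states are drawn as normalised complex Gaussian vectors and a bipartite entanglement measure is averaged over many draws.

    import numpy as np

    rng = np.random.default_rng(0)

    def haar_random_state(dim):
        # Normalised complex Gaussian vector = Haar-random pure state.
        v = rng.normal(size=dim) + 1j * rng.normal(size=dim)
        return v / np.linalg.norm(v)

    def entanglement_entropy(psi, dim_a, dim_b):
        # Von Neumann entropy of subsystem A -- one standard bipartite
        # entanglement measure (the paper's choice may differ).
        s = np.linalg.svd(psi.reshape(dim_a, dim_b), compute_uv=False)
        p = s**2
        p = p[p > 1e-12]
        return float(-(p * np.log2(p)).sum())

    # Average over many randomly prepared pure states of an
    # ancilla+environment system (a 2 x 4 bipartition, for illustration).
    samples = [entanglement_entropy(haar_random_state(8), 2, 4)
               for _ in range(1000)]
    print(np.mean(samples))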
Abstract:
A wealth of palaeoecological studies (e.g. pollen, diatoms, chironomids and macrofossils from deposits such as lakes or bogs) has revealed major as well as more subtle ecosystem changes over decadal to multimillennial timescales. Such ecosystem changes are usually assumed to have been forced by specific environmental changes. Here, we test whether the observed changes in palaeoecological records can be reproduced by random simulations, and we find that simple procedures generate abrupt events, long-term trends, quasi-cyclic behaviour, extinctions and immigrations. Our results highlight the importance of replicated and multiproxy data for reliable reconstructions of past climate and environmental changes.
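The kind of null model meant here can be sketched in a few lines (an illustration under my own assumptions, not the authors' exact procedure): independent random walks, read as proxy abundances, already produce trend-like and event-like patterns without any forcing.

    import numpy as np

    rng = np.random.default_rng(1)

    def random_proxy_series(n_samples=200, n_taxa=10):
        # Each 'taxon' performs an independent random walk; absolute
        # values keep the pseudo-abundances non-negative.  Such series
        # routinely show long 'trends', abrupt 'events' and local
        # 'extinctions' with no environmental signal behind them.
        walks = np.abs(np.cumsum(rng.normal(size=(n_samples, n_taxa)), axis=0))
        return walks / walks.sum(axis=1, keepdims=True)  # relative abundances

    series = random_proxy_series()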
Abstract:
Let X be a quasi-compact scheme, equipped with an open covering by affine schemes U_s = Spec A_s. A quasi-coherent sheaf on X gives rise, by taking sections over the U_s, to a diagram of modules over the coordinate rings A_s, indexed by the intersection poset S of the covering. If X is a regular toric scheme over an arbitrary commutative ring, we prove that the unbounded derived category of quasi-coherent sheaves on X can be obtained from a category of S^op-diagrams of chain complexes of modules by inverting maps which induce homology isomorphisms on hyper-derived inverse limits. Moreover, we show that there is a finite set of weak generators, one for each cone in the fan S. The approach taken uses the machinery of Bousfield–Hirschhorn colocalisation of model categories. The first step is to characterise colocal objects; these turn out to be homotopy sheaves, in the sense that chain complexes over different open sets U_s agree on intersections up to quasi-isomorphism. In a second step it is shown that the homotopy category of homotopy sheaves is equivalent to the derived category of X.
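Schematically, and in ad hoc notation of my own (the paper's precise statement is model-categorical), the main equivalence reads:

    \[
      \mathcal{D}\bigl(\mathrm{QCoh}\,X\bigr) \;\simeq\;
      \mathrm{Ch}^{S^{\mathrm{op}}}\bigl[W^{-1}\bigr],
      \qquad
      W = \bigl\{\, f \;\bigm|\; \mathbb{R}\varprojlim\nolimits_{s \in S} f
      \text{ is a homology isomorphism} \,\bigr\},
    \]

    where $\mathrm{Ch}^{S^{\mathrm{op}}}$ denotes $S^{\mathrm{op}}$-diagrams
    of chain complexes of modules over the rings $A_s$.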
Abstract:
Hardware synthesis from dataflow graphs of signal processing systems is a growing research area as focus shifts to high-level design methodologies. For data-intensive systems, dataflow-based synthesis can lead to inefficient usage of memory due to the restrictive nature of synchronous dataflow and its inability to easily model data reuse. This paper explores how transformations of the dataflow graph can be used to drive both the on-chip and off-chip memory organisation, and how these memory architectures can be mapped to a hardware implementation. By exploiting the data reuse inherent to many image processing algorithms and by creating memory hierarchies, off-chip memory bandwidth can be reduced by a factor of a thousand relative to the original dataflow-graph-level specification of a motion estimation algorithm, with a minimal increase in memory size. This analysis is verified using results gathered from an implementation of the motion estimation algorithm on a Xilinx Virtex-4 FPGA, where the delay between the memories and the processing elements drops from 14.2 ns to 1.878 ns through refinement of the memory architecture. Care must be taken when modelling these algorithms, however, as inefficiencies in the models are easily translated into overuse of hardware resources.
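A back-of-the-envelope model of the data reuse argument (the function and numbers are my own illustration, not the paper's model): buffering the full search window on-chip means each reference pixel crosses the off-chip boundary roughly once per block instead of once per candidate match.

    def offchip_pixel_reads(frame_w, frame_h, block=16, search=8, buffered=False):
        # Full-search block matching: without reuse, every candidate
        # motion vector re-reads a block x block window from off-chip
        # memory; with an on-chip search-window buffer, the window is
        # fetched once per block.
        n_blocks = (frame_w // block) * (frame_h // block)
        if not buffered:
            candidates = (2 * search + 1) ** 2
            return n_blocks * candidates * block * block
        return n_blocks * (block + 2 * search) ** 2

    naive = offchip_pixel_reads(720, 576)
    cached = offchip_pixel_reads(720, 576, buffered=True)
    print(f"reuse factor: {naive / cached:.0f}x")
    # ~72x for this single buffer level; deeper hierarchies with
    # frame-level reuse push this toward the thousand-fold reduction
    # reported above.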