962 results for EXPANDER GRAPHS


Relevance: 10.00%

Abstract:

Organizations that leverage lessons learned from their experience in the practice of complex real-world activities face five difficult problems. First, how to represent the learning situation in a recognizable way. Second, how to represent what was actually done in terms of repeatable actions. Third, how to assess performance taking account of the particular circumstances. Fourth, how to abstract lessons learned that are re-usable on future occasions. Fifth, how to determine whether to pursue practice maturity or strategic relevance of activities. Here, organizational learning and performance improvement are investigated in a field study using the Context-based Intelligent Assistant Support (CIAS) approach. A new conceptual framework for practice-based organizational learning and performance improvement is presented that helps researchers and practitioners address these problems and contributes to a practice-based approach to activity management. The novelty of the research lies in the simultaneous study of the different levels involved in the activity. Route selection in light rail infrastructure projects involves practices at both the strategic and operational levels; it is part managerial/political and part engineering. Aspectual comparison of practices represented in Contextual Graphs constitutes a new approach to the selection of Key Performance Indicators (KPIs). This approach is free from causality assumptions and forms the basis of a new approach to practice-based organizational learning and performance improvement. The evolution of practices in contextual graphs is shown to be an objective and measurable expression of organizational learning. This diachronic representation is interpreted using a practice-based organizational learning novelty typology. This dissertation shows how lessons learned, when effectively leveraged by an organization, lead to practice maturity. The practice maturity level of an activity, in combination with an assessment of the activity's strategic relevance, can be used by management to prioritize improvement effort.

Relevance: 10.00%

Abstract:

We revisit the well-known problem of sorting under partial information: sort a finite set given the outcomes of comparisons between some pairs of elements. The input is a partially ordered set P, and solving the problem amounts to discovering an unknown linear extension of P using pairwise comparisons. The information-theoretic lower bound on the number of comparisons needed in the worst case is log e(P), the binary logarithm of the number of linear extensions of P. In a breakthrough paper, Jeff Kahn and Jeong Han Kim (STOC 1992) showed that there exists a polynomial-time algorithm for the problem achieving this bound up to a constant factor. Their algorithm invokes the ellipsoid algorithm at each iteration to determine the next comparison, making it impractical. We develop efficient algorithms for sorting under partial information. Like Kahn and Kim, our approach relies on graph entropy. However, our algorithms differ in essential ways from theirs. Rather than resorting to convex programming to compute the entropy, we either approximate the entropy or make sure it is computed only once, in a restricted class of graphs, permitting the use of a simpler algorithm. Specifically, we present: an O(n^2) algorithm performing O(log n · log e(P)) comparisons; an O(n^2.5) algorithm performing at most (1 + ε) log e(P) + O_ε(n) comparisons; and an O(n^2.5) algorithm performing O(log e(P)) comparisons. All our algorithms are simple to implement.
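
As a concrete illustration of the lower bound quoted above (not of the authors' algorithms, which rely on graph entropy), the following sketch counts the linear extensions e(P) of a tiny poset by brute force; log2 e(P) is then the number of comparisons any strategy must spend in the worst case. The function names are ours.

```python
from functools import lru_cache
from math import log2

def count_linear_extensions(elements, less_than):
    """Count linear extensions e(P) of the poset whose strict order is
    given by less_than, a set of pairs (a, b) meaning a < b.
    Exponential time: suitable only for tiny posets."""
    @lru_cache(maxsize=None)
    def count(remaining):
        if not remaining:
            return 1
        total = 0
        for x in remaining:
            # x can be placed first iff nothing remaining is below it
            if not any((y, x) in less_than for y in remaining):
                total += count(remaining - {x})
        return total

    return count(frozenset(elements))

# P on {a, b, c} with a < c and b < c has e(P) = 2 (abc and bac),
# so at least log2(2) = 1 comparison is still needed in the worst case.
e = count_linear_extensions({"a", "b", "c"}, {("a", "c"), ("b", "c")})
print(e, log2(e))  # 2 1.0
```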

Relevance: 10.00%

Abstract:

User-supplied knowledge and interaction is a vital component of a toolkit for producing high-quality parallel implementations of scalar FORTRAN numerical code. In this paper we consider the components that such a parallelisation toolkit should possess to provide an effective environment in which to identify, extract and embed relevant user knowledge. We also examine to what extent these facilities are available in leading parallelisation tools; in particular, we discuss how these issues have been addressed in the development of the user interface of the Computer Aided Parallelisation Tools (CAPTools). The CAPTools environment has been designed to enable user exploration, interaction and insertion of user knowledge to facilitate the automatic generation of very efficient parallel code. A key issue in the user's interaction is control of the volume of information, so that the user is focused only on what is needed. User control over the level and extent of information revealed at any phase is supplied through a wide variety of filters. Another issue is the way in which information is communicated. Dependence analysis and its resulting graphs involve many sophisticated, rather abstract concepts unlikely to be familiar to most users of parallelising tools. As such, considerable effort has been made to communicate with the user in terms that they will understand. These features, amongst others, and their use in the parallelisation process are described and their effectiveness discussed.
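
The abstract does not describe CAPTools' internal data structures, so the following is purely a toy illustration of the kind of filtering it discusses: dependence-graph edges carry a type and a loop tag, and the user narrows the view to what matters for parallelisation. All names are hypothetical.

```python
from collections import namedtuple

# A dependence edge between two statements; kind is 'true', 'anti' or
# 'output'; loop tags the enclosing loop (hypothetical toy schema).
Dep = namedtuple("Dep", "source target kind loop")

deps = [
    Dep("S1", "S2", "true", "loop1"),
    Dep("S2", "S2", "anti", "loop1"),
    Dep("S1", "S3", "output", "loop2"),
]

def filter_deps(deps, kinds=None, loops=None):
    """Show only the dependences the user asked about."""
    return [d for d in deps
            if (kinds is None or d.kind in kinds)
            and (loops is None or d.loop in loops)]

# e.g. focus on the true dependences that inhibit parallelisation of loop1
for d in filter_deps(deps, kinds={"true"}, loops={"loop1"}):
    print(f"{d.source} -> {d.target}: {d.kind} dependence in {d.loop}")
```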

Relevance: 10.00%

Abstract:

A common but informal notion in social network analysis and other fields is the concept of a core/periphery structure. The intuitive conception entails a dense, cohesive core and a sparse, unconnected periphery. This paper seeks to formalize the intuitive notion of a core/periphery structure and suggests algorithms for detecting this structure, along with statistical tests for testing a priori hypotheses. Different models are presented for different kinds of graphs (directed and undirected, valued and non-valued). In addition, the close relationship between the continuous models developed here and certain centrality measures is discussed.
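
One common way to make the intuition operational, sketched below under the assumption of an unweighted, undirected graph, is to score a candidate core set by correlating the observed adjacency matrix with an idealized pattern in which every tie involving a core vertex is present. Exhaustive search over core sets is feasible only for small graphs; the example graph is ours.

```python
from itertools import combinations
import numpy as np

def cp_fitness(A, core):
    """Correlation between the adjacency matrix and the ideal
    core/periphery pattern induced by the candidate core set."""
    n = A.shape[0]
    delta = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j and (i in core or j in core):
                delta[i, j] = 1.0
    mask = ~np.eye(n, dtype=bool)          # ignore self-ties
    return np.corrcoef(A[mask], delta[mask])[0, 1]

def best_core(A):
    """Exhaustive search over non-trivial core sets (small n only)."""
    n = A.shape[0]
    best_set, best_fit = None, -2.0
    for k in range(1, n):
        for core in combinations(range(n), k):
            fit = cp_fitness(A, set(core))
            if fit > best_fit:             # nan fits are simply skipped
                best_set, best_fit = set(core), fit
    return best_set, best_fit

# Vertices 0-2 form a clique (the core); 3-5 each hang off one of them.
A = np.array([[0, 1, 1, 1, 0, 0],
              [1, 0, 1, 0, 1, 0],
              [1, 1, 0, 0, 0, 1],
              [1, 0, 0, 0, 0, 0],
              [0, 1, 0, 0, 0, 0],
              [0, 0, 1, 0, 0, 0]])
print(best_core(A))    # expected core: {0, 1, 2}
```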

Relevance: 10.00%

Abstract:

We describe a heuristic method for drawing graphs which uses a multilevel technique combined with a force-directed placement algorithm. The multilevel process groups vertices to form clusters, uses the clusters to define a new graph, and is repeated until the graph size falls below some threshold. The coarsest graph is then given an initial layout and the layout is successively refined on all the graphs, starting with the coarsest and ending with the original. In this way the multilevel algorithm both accelerates the force-directed placement and gives it a more global quality. The algorithm can compute both 2D and 3D layouts and we demonstrate it on a number of examples ranging from 500 to 225,000 vertices. It is also very fast: computing a 2D layout of a sparse graph takes from around 30 seconds for a 10,000-vertex graph to around 10 minutes for the largest graph. This is an order of magnitude faster than recent implementations of force-directed placement algorithms.
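
For readers unfamiliar with the force-directed half of the method, here is a minimal placement sketch in the Fruchterman-Reingold style; the force laws are the standard ones, but the constants and the displacement cap are illustrative, not the paper's.

```python
import math, random

def fdp_layout(nodes, edges, iters=200, k=1.0, step=0.1):
    """Spring embedder: k^2/d repulsion between all pairs, d^2/k
    attraction along edges, displacement capped each iteration."""
    pos = {v: [random.uniform(-1, 1), random.uniform(-1, 1)] for v in nodes}
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in nodes}
        for u in nodes:                           # pairwise repulsion
            for v in nodes:
                if u == v:
                    continue
                dx = pos[u][0] - pos[v][0]
                dy = pos[u][1] - pos[v][1]
                d = math.hypot(dx, dy) or 1e-9
                f = k * k / d
                disp[u][0] += f * dx / d
                disp[u][1] += f * dy / d
        for u, v in edges:                        # attraction along edges
            dx = pos[u][0] - pos[v][0]
            dy = pos[u][1] - pos[v][1]
            d = math.hypot(dx, dy) or 1e-9
            f = d * d / k
            for a, s in ((u, -1.0), (v, 1.0)):    # pull endpoints together
                disp[a][0] += s * f * dx / d
                disp[a][1] += s * f * dy / d
        for v in nodes:                           # capped move
            dx, dy = disp[v]
            d = math.hypot(dx, dy) or 1e-9
            pos[v][0] += step * (dx / d) * min(d, 1.0)
            pos[v][1] += step * (dy / d) * min(d, 1.0)
    return pos

print(fdp_layout(["a", "b", "c", "d"],
                 [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a")]))
```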

Relevance: 10.00%

Abstract:

We describe a heuristic method for drawing graphs which uses a multilevel framework combined with a force-directed placement algorithm. The multilevel technique matches and coalesces pairs of adjacent vertices to define a new graph and is repeated recursively to create a hierarchy of increasingly coarse graphs, G_0, G_1, ..., G_L. The coarsest graph, G_L, is then given an initial layout and the layout is refined and extended to all the graphs, starting with the coarsest and ending with the original. At each successive change of level, l, the initial layout for G_l is taken from its coarser and smaller child graph, G_{l+1}, and refined using force-directed placement. In this way the multilevel framework both accelerates the drawing and appears to give it a more global quality. The algorithm can compute both 2D and 3D layouts and we demonstrate it on examples ranging in size from 10 to 225,000 vertices. It is also very fast: computing a 2D layout of a sparse graph takes from around 12 seconds for a 10,000-vertex graph to around 5-7 minutes for the largest graphs. This is an order of magnitude faster than recent implementations of force-directed placement algorithms.
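
The matching-and-coalescing step described above can be sketched in a few lines as a greedy random maximal matching; the paper's vertex- and edge-weighting details are omitted, and the example graph is ours.

```python
import random

def coarsen(nodes, edges):
    """One multilevel coarsening step: greedily match adjacent vertices,
    coalesce matched pairs into vertices of the next, coarser graph."""
    adj = {v: set() for v in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    matched, parent = set(), {}
    order = list(nodes)
    random.shuffle(order)
    for u in order:
        if u in matched:
            continue
        partners = [w for w in adj[u] if w not in matched]
        if partners:
            w = random.choice(partners)
            matched.update({u, w})
            parent[u] = parent[w] = (u, w)   # coalesced vertex
        else:
            parent[u] = (u,)                 # unmatched: carried over
    coarse_nodes = set(parent.values())
    coarse_edges = {tuple(sorted((parent[u], parent[v])))
                    for u, v in edges if parent[u] != parent[v]}
    return coarse_nodes, coarse_edges, parent

print(coarsen(["a", "b", "c", "d", "e"],
              [("a", "b"), ("b", "c"), ("c", "d"), ("d", "e")]))
```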

Relevance: 10.00%

Abstract:

The graph-partitioning problem is to divide a graph into several pieces so that the number of vertices in each piece is the same within some defined tolerance and the number of cut edges is minimised. Important applications of the problem arise, for example, in parallel processing, where data sets need to be distributed across the memory of a parallel machine. Very effective heuristic algorithms have been developed for this problem which run in real time, but it is not known how good the resulting partitions are, since the problem is, in general, NP-complete. This paper reports an evolutionary search algorithm for finding benchmark partitions. A distinctive feature is the use of a multilevel heuristic algorithm to provide an effective crossover. The technique is tested on several example graphs, and it is demonstrated that our method can achieve partitions of extremely high quality, significantly better than those found by state-of-the-art graph-partitioning packages.
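
A minimal sketch of the objective such a search optimizes: given a candidate partition, count the cut edges and check the balance tolerance. The evaluation below is generic; the evolutionary operators and the multilevel crossover are beyond a few lines, and the tolerance value is illustrative.

```python
def evaluate(edges, part, nparts, tol=0.03):
    """part maps vertex -> block id in range(nparts). Returns the cut
    size, the block sizes, and whether the balance tolerance holds."""
    cut = sum(1 for u, v in edges if part[u] != part[v])
    sizes = [0] * nparts
    for v in part:
        sizes[part[v]] += 1
    ideal = len(part) / nparts
    balanced = max(sizes) <= (1 + tol) * ideal
    return cut, sizes, balanced

edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("b", "d")]
part = {"a": 0, "b": 0, "c": 1, "d": 1}
print(evaluate(edges, part, 2))   # (3, [2, 2], True): 3 cut edges, balanced
```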

Relevance: 10.00%

Abstract:

We consider a single-machine due date assignment and scheduling problem of minimizing holding costs with no tardy jobs, under series-parallel and a somewhat wider class of precedence constraints, making use of the properties of series-parallel graphs.

Relevance: 10.00%

Abstract:

The notion of time plays a vital and ubiquitous role as a common universal reference. In knowledge-based systems, temporal information is usually represented as a collection of statements together with the corresponding temporal reference. This paper introduces a visualized consistency checker for temporal references. It allows the expression of both absolute and relative temporal knowledge, and provides a visual representation of temporal references in terms of directed, partially weighted graphs. Based on the temporal reference of a given scenario, the visualized checker can deliver a verdict to the user as to whether the scenario is temporally consistent, and provide the corresponding analysis/diagnosis.
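
One standard way to implement such a checker, assumed here since the abstract does not spell out its method, is to encode both absolute and relative statements as difference constraints on a weighted directed graph; the reference is consistent exactly when the graph has no negative cycle, which Bellman-Ford detects. The scenario below is a hypothetical example of ours.

```python
def consistent(nodes, constraints):
    """constraints: triples (u, v, w) meaning time(v) - time(u) <= w.
    Bellman-Ford from a virtual source; True iff no negative cycle."""
    dist = {v: 0.0 for v in nodes}
    for _ in range(len(nodes)):
        changed = False
        for u, v, w in constraints:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            return True
    # still relaxing after n passes: negative cycle, i.e. inconsistent
    return not any(dist[u] + w < dist[v] for u, v, w in constraints)

# "A is at 10:00 or later; B is 2 to 4 hours after A; B is by 13:00",
# with absolute times encoded against a zero point t0.
nodes = ["t0", "A", "B"]
cons = [("A", "t0", -10),                 # t0 - A <= -10, i.e. A >= 10
        ("A", "B", 4), ("B", "A", -2),    # 2 <= B - A <= 4
        ("t0", "B", 13)]                  # B - t0 <= 13
print(consistent(nodes, cons))            # True (e.g. A = 10, B = 12)
```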

Relevance: 10.00%

Abstract:

This paper will analyse two of the likely damage mechanisms present in a paper fibre matrix placed under controlled stress conditions: fibre/fibre bond failure and fibre failure. The failure process associated with each damage mechanism will be presented in detail, focusing on the change in the mechanical and acoustic properties of the surrounding fibre structure before and after failure. To present this complex process mathematically, geometrically simple fibre arrangements will be chosen, based on certain assumptions regarding the structure and strength of paper, to model the damage mechanisms. The fibre structures are then formulated in terms of a hybrid vibro-acoustic model based on a coupled mass/spring system and the pressure wave equation. The model will be presented in detail in the paper. The simulation of the simple fibre structures serves two purposes: it highlights the physical and acoustic differences of each damage mechanism before and after failure, and it shows the differences between the two damage mechanisms when compared with one another. The results of the simulations are given in the form of pressure wave contours, time-frequency graphs and Continuous Wavelet Transform (CWT) diagrams. The analysis of the results leads to criteria by which the two damage mechanisms can be identified. Using these criteria it was possible to verify the results of the simulations against experimental acoustic data. The models developed in this study are of specific practical interest in the paper-making industry, where acoustic sensors may be used to monitor continuous paper production. The same techniques may be adopted more generally to correlate acoustic signals with damage mechanisms in other fibre-based structures.
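
The pressure-wave half of the hybrid model is beyond a short example, but the coupled mass/spring half can be caricatured: a chain of point masses joined by Hooke springs, one of which is removed when stretched past a threshold, standing in for fibre/fibre bond failure. All constants are illustrative and not taken from the paper.

```python
def simulate(n=5, k=100.0, m=1.0, dt=1e-3, steps=4000, break_at=0.05):
    """Chain of n unit-spaced masses; spring i joins masses i and i+1
    and fails permanently once stretched or compressed beyond break_at."""
    x = [float(i) for i in range(n)]      # rest positions
    v = [0.0] * n
    v[0] = 1.0                            # initial excitation ("stress")
    alive = [True] * (n - 1)
    for _ in range(steps):
        f = [0.0] * n
        for i in range(n - 1):
            if not alive[i]:
                continue
            stretch = (x[i + 1] - x[i]) - 1.0
            if abs(stretch) > break_at:   # bond failure event
                alive[i] = False
                continue
            f[i] += k * stretch           # Hooke's law, both ends
            f[i + 1] -= k * stretch
        for i in range(n):                # symplectic Euler step
            v[i] += dt * f[i] / m
            x[i] += dt * v[i]
    return x, v, alive

# after the first bond fails, the left mass drifts off on its own
print(simulate())
```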

Relevance: 10.00%

Abstract:

In this paper, we critically examine a special class of graph matching algorithms that follow the approach of node-similarity measurement. A high-level algorithmic framework, the node-similarity graph matching (NSGM) framework, is proposed, which subsumes many existing graph matching algorithms, including the eigen-decomposition method of Umeyama, the polynomial-transformation method of Almohamad, the hubs-and-authorities method of Kleinberg, and the Kronecker product successive projection method of van Wyk. In addition, improved algorithms can be developed from the NSGM framework with respect to the corresponding results in graph theory. We observe that, in general, any algorithm subsumed by the NSGM framework fails to work well for graphs with non-trivial automorphism structure.
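
A minimal instance of the kind of iteration the framework subsumes, here a coupled hubs-and-authorities style update in the spirit of Kleinberg's method; the normalization and iteration count are illustrative.

```python
import numpy as np

def node_similarity(A, B, iters=100):
    """Coupled similarity iteration S <- B S A^T + B^T S A, renormalized;
    S[i, j] scores how similar node i of graph B is to node j of graph A."""
    S = np.ones((B.shape[0], A.shape[0]))
    for _ in range(iters):
        S = B @ S @ A.T + B.T @ S @ A
        S /= np.linalg.norm(S)            # keep the iteration bounded
    return S

# Two directed paths a->b->c and x->y->z: the middle nodes match best.
A = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
B = np.array([[0, 1, 0], [0, 0, 1], [0, 0, 0]], dtype=float)
print(np.round(node_similarity(A, B), 3))
```

Note how the identical end vertices of the two paths receive equal scores: this symmetry is exactly the automorphism problem the abstract points out.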

Relevance: 10.00%

Abstract:

In terms of a general time theory which addresses time-elements as typed point-based intervals, a formal characterization of time-series and state-sequences is introduced. Within this framework, the subsequence matching problem is tackled by transforming it into a bipartite graph matching problem. A hybrid similarity model with high tolerance of inversion, crossover and noise is then proposed for matching the corresponding bipartite graphs, involving both temporal and non-temporal measurements. Experimental results on reconstructed time-series data from the UCI KDD Archive demonstrate that this approach is more effective than traditional similarity-model-based algorithms, promising robust techniques for larger time-series databases and real-life applications such as Content-based Video Retrieval (CBVR).
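
A minimal sketch of the reduction described above, with illustrative weights: states of the two sequences form the sides of a bipartite graph, edge costs mix a temporal and a non-temporal term, and an optimal matching is computed by the assignment algorithm (SciPy's linear_sum_assignment). The weighting scheme here is ours, not the paper's hybrid model.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_sequences(q, c, w_time=0.5, w_value=0.5):
    """q, c: lists of (timestamp, value) states. Builds the complete
    bipartite cost matrix and solves the assignment problem."""
    cost = np.zeros((len(q), len(c)))
    for i, (tq, vq) in enumerate(q):
        for j, (tc, vc) in enumerate(c):
            cost[i, j] = w_time * abs(tq - tc) + w_value * abs(vq - vc)
    rows, cols = linear_sum_assignment(cost)
    return list(zip(rows, cols)), cost[rows, cols].sum()

query = [(0, 1.0), (1, 2.0), (2, 1.5)]
candidate = [(0, 1.1), (1, 2.2), (2, 1.4)]
print(match_sequences(query, candidate))
```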

Relevance: 10.00%

Abstract:

1. The changes in the composition and distribution of the plankton of the southern North Sea have been investigated month by month, from June 1932 to December 1937; the present report deals with the phytoplankton. The survey was carried out by the Continuous Plankton Recorder, towed at a standard depth of 10 metres, by ships on regular steamship lines across the North Sea from Hull towards the Skagerrak, to Bremen and to Rotterdam, and later between London and Esbjerg.

2. The material and methods are described, together with a discussion on the validity of this type of survey and some comparison of its results with those obtained by other methods (pp. 76-86).

3. Particular attention has been paid to Rhizosolenia styliformis (pp. 92-107), Biddulphia sinensis (pp. 108-115), Phaeocystis (pp. 149-153), and the Dinoflagellates (pp. 134-149); of these, the first three are known to be of particular importance in relation to the herring fisheries. More generalised data are available for the principal diatoms other than R. styliformis and B. sinensis (pp. 116-134).

4. The main part of the work is an ecological study of the phytoplankton changes in time and space over the 5½ years. Each year is marked by some distinct variations in the abundance and the times of increase, maximum numbers and decline as recorded in the different forms. These variations in the annual cycles are compared on the different lines by a series of graphs arranged against a time scale of months, a set for each year being placed side by side (Plates I-XXI). More detailed studies by more frequent records were made in the autumns of 1934, 1935, 1936 and 1937 (cf. Figs. 3 and 4). The changes in spatial distribution are shown by a series of monthly maps arranged in a similar manner for each year (Plates XXII-LXIV). These intensive studies of the changes in time and space are also intended to form the basis for correlations with other features in the general ecology of the area (e.g. the zooplankton, hydrology, meteorology and fisheries) to be made in later publications.

5. Whilst each form has shown its own peculiar features, a trend towards a general increase in the phytoplankton as a whole has been observed during the period, although the years 1934 and 1936 have in some respects shown deviations and regressive features, and not all organisms have revealed the same trend. The possible relation of this gradual trend to other events observed in recent years in these and neighbouring waters is discussed (pp. 162-167).

6. The application of these results to the study of patchiness (pp. 154-158), inter-relationships in the plankton (pp. 159-160) and water movements (pp. 160-162) is briefly discussed.

Relevance: 10.00%

Abstract:

Coleoid cephalopods show flexibility in their reproductive strategies, or mode of spawning, which can range from simultaneous terminal spawning over a short period at the end of the animal's life to continuous spawning over a long period of the animal's life. Although a simultaneous terminal spawning strategy is typical of shallow-water temperate octopuses, it is not known whether deep-sea octopods have the same reproductive strategy. The reproductive strategies and fecundity were investigated in nine species of deep-sea incirrate octopuses: Bathypolypus arcticus, Bathypolypus bairdii, Bathypolypus ergasticus, Bathypolypus sponsalis, Bathypolypus valdiviae, Benthoctopus levis, Benthoctopus normani, Benthoctopus sp., and Graneledone verrucosa (total n = 85). Egg-length frequency graphs and multivariate analysis (principal components analysis) suggest that B. sponsalis has a synchronous ovulation pattern and therefore a simultaneous terminal spawning strategy. Although a simultaneous terminal spawning strategy is most likely for B. levis and B. normani, the egg-length frequency graphs and multivariate analysis also suggest a greater variation in egg lengths, which could point to spawning over an extended period.

Relevance: 10.00%

Abstract:

We introduce three compact graph states that can be used to perform a measurement-based Toffoli gate. Given a weighted graph of six, seven, or eight qubits, we show that success probabilities of 1/4, 1/2, and 1, respectively, can be achieved. Our study puts a measurement-based version of this important quantum logic gate within the reach of current experiments. As the graphs are setup-independent, they could be realized in a variety of systems, including linear optics and ion traps.
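
The paper's specific six-, seven- and eight-qubit graphs and their weights are not given in the abstract; as a generic sketch of what a weighted graph state is, the following builds one numerically by applying, across each edge, a controlled-phase whose angle is the edge weight to the state |+>^n. The triangle example is hypothetical, not one of the paper's three graphs.

```python
import numpy as np

def weighted_graph_state(n, weighted_edges):
    """Amplitudes of prod_{(i,j)} CP(phi_ij) |+>^n: each basis state with
    qubits i and j both equal to 1 picks up a phase exp(i*phi_ij) per edge."""
    amps = np.ones(2 ** n, dtype=complex) / np.sqrt(2 ** n)
    for idx in range(2 ** n):
        bits = [(idx >> q) & 1 for q in range(n)]
        phase = sum(phi for i, j, phi in weighted_edges
                    if bits[i] and bits[j])
        amps[idx] *= np.exp(1j * phase)
    return amps

# e.g. a 3-qubit triangle with pi/2 edge weights
state = weighted_graph_state(3, [(0, 1, np.pi / 2),
                                 (1, 2, np.pi / 2),
                                 (0, 2, np.pi / 2)])
print(np.round(state, 3))
```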