21 results for graphs and groups
Abstract:
In this paper we define the structural information content of graphs as their corresponding graph entropy. This definition is based on local vertex functionals obtained by calculating spheres via Dijkstra's algorithm. We prove that the graph entropy, and hence the local vertex functionals, can be computed with polynomial time complexity, enabling the application of our measure to large graphs. We present numerical results for the graph entropy of chemical graphs and discuss the resulting properties. (C) 2007 Elsevier Ltd. All rights reserved.
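The idea of entropy from sphere-based vertex functionals can be sketched as follows. This is an illustrative assumption, not the paper's actual functional: `sphere_sizes` counts vertices at each distance j (BFS stands in for Dijkstra on an unweighted graph), and the 1/(j+1) weighting is a made-up example of a local vertex functional whose normalised values feed a Shannon entropy.

```python
from collections import deque
from math import log2

def sphere_sizes(adj, v):
    """BFS from v (unit weights stand in for Dijkstra here);
    returns {j: |S_j(v)|}, the number of vertices at distance j."""
    dist = {v: 0}
    q = deque([v])
    while q:
        u = q.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)
    sizes = {}
    for d in dist.values():
        sizes[d] = sizes.get(d, 0) + 1
    return sizes

def graph_entropy(adj):
    """Hypothetical local functional f(v) = sum_j |S_j(v)| / (j + 1);
    the entropy is taken over the normalised f-values."""
    f = {v: sum(s / (j + 1) for j, s in sphere_sizes(adj, v).items())
         for v in adj}
    total = sum(f.values())
    return -sum((x / total) * log2(x / total) for x in f.values())

# Path graph on 4 vertices: 0 - 1 - 2 - 3
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(graph_entropy(adj))
```

Each BFS runs in O(|V| + |E|), so computing all functionals is polynomial, consistent with the complexity claim above.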
Abstract:
Call control features (e.g., call-divert, voice-mail) are primitive options to which users can subscribe off-line to personalise their service. The configuration of a feature subscription involves choosing and sequencing features from a catalogue and is subject to constraints that prevent undesirable feature interactions at run-time. When the subscription requested by a user is inconsistent, the problem of finding an optimal relaxation is a generalisation of the feedback vertex set problem on directed graphs, and is therefore NP-hard. We present several constraint programming formulations of the problem, as well as formulations using partial weighted maximum Boolean satisfiability and mixed integer linear programming. We compare all of these formulations experimentally on a variety of randomly generated instances of the feature subscription problem.
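The connection to feedback vertex set can be illustrated with a brute-force sketch: treating precedence constraints as directed edges, an optimal relaxation keeps the largest feature subset whose constraints are acyclic. The feature names and the exhaustive subset enumeration below are illustrative assumptions only; the paper's CP, MaxSAT, and MIP formulations are the realistic approaches.

```python
from itertools import combinations

def is_acyclic(nodes, edges):
    """Kahn's algorithm restricted to the given node subset."""
    nodes = set(nodes)
    indeg = {v: 0 for v in nodes}
    for u, v in edges:
        if u in nodes and v in nodes:
            indeg[v] += 1
    stack = [v for v in nodes if indeg[v] == 0]
    seen = 0
    while stack:
        u = stack.pop()
        seen += 1
        for a, b in edges:
            if a == u and b in nodes:
                indeg[b] -= 1
                if indeg[b] == 0:
                    stack.append(b)
    return seen == len(nodes)

def optimal_relaxation(features, constraints):
    """Largest feature subset whose precedence constraints are
    consistent (acyclic). Exponential; for illustration only."""
    for k in range(len(features), 0, -1):
        for subset in combinations(features, k):
            if is_acyclic(subset, constraints):
                return set(subset)
    return set()

feats = ["divert", "voicemail", "screen"]
cons = [("divert", "voicemail"), ("voicemail", "divert")]  # a cycle
print(optimal_relaxation(feats, cons))
```

Removing either endpoint of the cycle restores consistency, which is exactly a (minimum) feedback vertex set computation.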
Abstract:
Processor architectures have taken a turn towards many-core processors, which integrate multiple processing cores on a single chip to increase overall performance, and there are no signs that this trend will stop in the near future. Many-core processors are harder to program than multi-core and single-core processors because of the need to write parallel or concurrent programs with high degrees of parallelism. Moreover, many-cores have to operate in a strong-scaling regime because of memory bandwidth constraints: increasingly fine-grained parallelism must be extracted in order to keep all processing cores busy.
Task dataflow programming models have a high potential to simplify parallel programming because they relieve the programmer from precisely identifying all inter-task dependences when writing programs. Instead, the task dataflow runtime system detects and enforces inter-task dependences during execution, based on a description of the memory each task accesses. The runtime constructs a task dataflow graph that captures all tasks and their dependences. Tasks are then scheduled to execute in parallel, taking into account the dependences specified in the task graph.
Several papers report significant overheads for task dataflow systems, which severely limit the scalability and usability of such systems. In this paper we study efficient schemes to manage task graphs and analyze their scalability. We assume a programming model that supports input, output and in/out annotations on task arguments, as well as commutative in/out and reductions. We analyze the structure of task graphs and identify versions and generations as key concepts for their efficient management. We then present three schemes to manage task graphs, building on graph representations, hypergraphs and lists. We also consider a fourth, edge-less scheme that synchronizes tasks using integers. Analysis using micro-benchmarks shows that the graph representation is not always scalable and that the edge-less scheme introduces the least overhead in nearly all situations.
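The dependence-detection step such a runtime performs can be sketched under the standard read/write rules (read-after-write, write-after-write, write-after-read). The task tuples and rule set below are an assumption for illustration, not the schemes evaluated in the paper:

```python
def build_task_graph(tasks):
    """tasks: list of (name, reads, writes) in submission order.
    A task depends on the last writer of anything it accesses
    (RAW, WAW) and on earlier readers of anything it writes (WAR)."""
    edges = set()
    last_writer = {}   # memory object -> last writing task
    readers = {}       # memory object -> readers since last write
    for name, reads, writes in tasks:
        for obj in reads:
            if obj in last_writer:
                edges.add((last_writer[obj], name))   # RAW
        for obj in writes:
            if obj in last_writer:
                edges.add((last_writer[obj], name))   # WAW
            for r in readers.get(obj, []):
                edges.add((r, name))                  # WAR
        for obj in reads:
            readers.setdefault(obj, []).append(name)
        for obj in writes:
            last_writer[obj] = name
            readers[obj] = []
    return edges

tasks = [
    ("t1", [], ["a"]),      # out: a
    ("t2", ["a"], ["b"]),   # in: a, out: b
    ("t3", ["a"], []),      # in: a
    ("t4", [], ["a"]),      # out: a (must wait for t2 and t3)
]
print(sorted(build_task_graph(tasks)))
```

An edge-less variant in the spirit of the fourth scheme would replace the explicit edge set with per-object version counters that tasks wait on, avoiding edge storage entirely.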
Abstract:
We examine the representation of judgements of stochastic independence in probabilistic logics. We focus on a relational logic where (i) judgements of stochastic independence are encoded by directed acyclic graphs, and (ii) probabilistic assessments are flexible in the sense that they are not required to specify a single probability measure. We discuss issues of knowledge representation and inference that arise from our particular combination of graphs, stochastic independence, logical formulas and probabilistic assessments.
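The way a directed acyclic graph encodes judgements of stochastic independence can be made concrete via the local Markov condition: each variable is independent of its non-descendants given its parents. The helper names and the three-node chain below are illustrative assumptions, not the relational logic developed in the paper.

```python
def parents(dag, v):
    return {u for u, w in dag if w == v}

def descendants(dag, v):
    """All nodes reachable from v along directed edges."""
    out, stack = set(), [v]
    while stack:
        u = stack.pop()
        for a, b in dag:
            if a == u and b not in out:
                out.add(b)
                stack.append(b)
    return out

def local_markov(dag, nodes):
    """Independence judgements encoded by the DAG: each v is
    independent of its non-descendants given its parents."""
    stmts = []
    for v in nodes:
        nd = set(nodes) - descendants(dag, v) - {v} - parents(dag, v)
        if nd:
            stmts.append((v, frozenset(nd), frozenset(parents(dag, v))))
    return stmts

dag = [("A", "B"), ("B", "C")]   # chain A -> B -> C
for v, indep, given in local_markov(dag, ["A", "B", "C"]):
    print(v, "independent of", sorted(indep), "given", sorted(given))
```

For the chain this yields a single judgement, C independent of A given B, matching the usual reading of a Markov chain.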
Abstract:
OBJECTIVES: To identify the words and phrases that authors used to describe time-to-event outcomes of dental treatments in patients.
MATERIALS AND METHODS: A systematic handsearch of the 50 dental journals with the highest Citation Index for 2008 identified articles reporting dental treatment with time-to-event statistics (included "case" articles, n = 95), without time-to-event statistics (active "control" articles, n = 91), and all other articles (passive "control" articles, n = 6796). The included and active control articles were read, identifying 43 English words across the title, aim and abstract that indicated outcomes were studied over time. Once identified, these words were sought within the 6796 passive controls. The words were divided into six groups. Differences in word use were analyzed with Pearson's chi-square test across these six groups and the three locations (title, aim, and abstract).
RESULTS: In the abstracts, included articles used group 1 (statistical technique) and group 2 (statistical terms) more frequently than the active and passive controls (group 1: 35%, 2%, 0.37%, P < 0.001 and group 2: 31%, 1%, 0.06%, P < 0.001). The included and active controls used group 3 (quasi-statistical) equally, but significantly more often than the passive controls (82%, 78%, 3.21%, P < 0.001). In the aims, use of target words was similar for included and active controls, but less frequent for groups 1-4 in the passive controls (P < 0.001). In the title, group 2 (statistical techniques) and groups 3-5 (outcomes) were similar for included and active controls, but groups 2 and 3 were less frequent in the passive controls (P < 0.001). Significantly more included articles used group 6 words (stating the study duration) (54%, 30%, P = 0.001).
CONCLUSION: All included articles used time-to-event analyses, but two-thirds did not include words to highlight this in the abstract. There is great variation in the words authors used to describe dental time-to-event outcomes. Electronic identification of such articles would be inconsistent, with low sensitivity and specificity. Authors should improve the reporting quality. Journals should allow sufficient space in abstracts to summarize research, and not impose unrealistic word limits. Readers should be mindful of these problems when searching for relevant articles. Additional research is required in this field.
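The group comparisons above rest on Pearson's chi-square test, which can be sketched directly. The counts below are hypothetical, back-derived from the reported group 1 abstract percentages (35% of 95 included vs. 2% of 91 active controls) purely for illustration:

```python
def chi_square(observed):
    """Pearson's chi-square statistic for an r x c contingency table
    given as a list of rows of observed counts."""
    row_totals = [sum(row) for row in observed]
    col_totals = [sum(col) for col in zip(*observed)]
    total = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(observed):
        for j, obs in enumerate(row):
            exp = row_totals[i] * col_totals[j] / total
            stat += (obs - exp) ** 2 / exp
    return stat

# Hypothetical counts: [used group-1 word, did not]
table = [[33, 62],   # included articles (n = 95)
         [2, 89]]    # active controls (n = 91)
print(chi_square(table))
```

With 1 degree of freedom, any statistic above 10.83 gives P < 0.001, consistent with the significance levels reported in the results.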
Abstract:
This co-edited collection investigates the processes of learning how to live with individual and group differences in the 21st century and examines the ambivalences of contemporary cosmopolitanism. The contributions focus on visual, normative and cultural embodiments of difference, examining conflicts at local sites that are connected by the processes of Europeanization and globalization.