886 results for Graph-Based Metrics
Abstract:
We investigate the problem of waveband switching (WBS) in a wavelength-division multiplexing (WDM) mesh network with dynamic traffic requests. To solve the WBS problem in a homogeneous dynamic WBS network, where every node is a multi-granular optical cross-connect (MG-OXC), we construct an auxiliary graph. Based on the auxiliary graph, we develop two heuristic on-line WBS algorithms with different grouping policies, namely the wavelength-first WBS algorithm based on the auxiliary graph (WFAUG) and the waveband-first WBS algorithm based on the auxiliary graph (BFAUG). Our results show that the WFAUG algorithm outperforms the BFAUG algorithm.
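The abstract describes routing connection requests over an auxiliary graph whose layers model the multi-granular switching capabilities of each MG-OXC. Below is a minimal sketch of that idea, not the paper's WFAUG/BFAUG implementation: each node is split into a wavelength-layer and a waveband-layer vertex, and a wavelength-first policy is approximated by giving wavelength-layer edges a lower routing cost. All function names, node labels, and edge weights are illustrative assumptions.

```python
# Hedged sketch of an auxiliary graph for waveband switching (illustrative only).
import networkx as nx

def build_auxiliary_graph(physical_links, wl_cost=1.0, wb_cost=2.0, port_cost=0.5):
    """physical_links: iterable of (u, v) fiber links between MG-OXC nodes."""
    g = nx.Graph()
    for u, v in physical_links:
        # Wavelength-layer edge: switching at single-wavelength granularity.
        g.add_edge((u, "wl"), (v, "wl"), weight=wl_cost)
        # Waveband-layer edge: switching at waveband granularity.
        g.add_edge((u, "wb"), (v, "wb"), weight=wb_cost)
    for node in {n for link in physical_links for n in link}:
        # Intra-node edge stands in for the grouping/ungrouping ports of the MG-OXC.
        g.add_edge((node, "wl"), (node, "wb"), weight=port_cost)
    return g

def route_request(g, src, dst):
    # A wavelength-first preference emerges from the cheaper wavelength-layer edges.
    return nx.shortest_path(g, (src, "wl"), (dst, "wl"), weight="weight")

links = [("A", "B"), ("B", "C"), ("A", "C")]
aux = build_auxiliary_graph(links)
print(route_request(aux, "A", "C"))
```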
Abstract:
This book will serve as a foundation for a variety of useful applications of graph theory to computer vision, pattern recognition, and related areas. It covers a representative set of novel graph-theoretic methods for complex computer vision and pattern recognition tasks. The first part of the book presents the application of graph theory to low-level processing of digital images, such as a new method for partitioning a given image into a hierarchy of homogeneous areas using graph pyramids, and a study of the relationship between graph theory and digital topology. Part II presents graph-theoretic learning algorithms for high-level computer vision and pattern recognition applications, including a survey of graph-based methodologies for pattern recognition and computer vision, a presentation of a series of computationally efficient algorithms for testing graph isomorphism and related graph matching tasks in pattern recognition, and a new graph distance measure to be used for solving graph matching problems. Finally, Part III provides detailed descriptions of several applications of graph-based methods to real-world pattern recognition tasks. It includes a critical review of the main graph-based and structural methods for fingerprint classification, a new method to visualize time series of graphs, and potential applications in computer network monitoring and abnormal event detection.
Abstract:
Software evolution, and particularly its growth, has mainly been studied at the file (also sometimes referred to as module) level. In this paper we propose to move from the physical level towards one that includes semantic information, by using functions or methods for measuring the evolution of a software system. We point out that the use of function-based metrics has many advantages over the use of files or lines of code. We demonstrate our approach with an empirical study of two Free/Open Source projects: a community-driven project, Apache, and a company-led project, Novell Evolution. We discovered that most functions never change; when they do change, their number of modifications is correlated with their size; and very few authors modify each function. Finally, we show that the departure of a developer from a software project slows the evolution of the functions that she authored.
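As a minimal sketch of the kind of function-level measurement the abstract describes (not the authors' tooling), the snippet below takes per-function records that would be mined from a version-control history and reports the share of functions that never change together with the correlation between function size and modification count. The record format and example values are assumptions for illustration.

```python
# Illustrative function-level evolution metrics (assumed data format).
from statistics import correlation  # Python 3.10+

functions = [
    # (name, size in lines of code, number of modifications over the history)
    ("parse_request", 120, 14),
    ("log_error", 8, 0),
    ("render_page", 300, 31),
    ("util_strcmp", 12, 1),
]

never_changed = sum(1 for _, _, mods in functions if mods == 0) / len(functions)
sizes = [size for _, size, _ in functions]
mods = [m for _, _, m in functions]

print(f"fraction of functions never modified: {never_changed:.2f}")
print(f"size vs. modification-count correlation: {correlation(sizes, mods):.2f}")
```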
Abstract:
INTRODUCTION: EVA (Endoscopic Video Analysis), a new tracking system for extracting the motions of laparoscopic instruments based on non-obtrusive video tracking, was developed. The feasibility of using EVA in laparoscopic settings has been tested in a box trainer setup. METHODS: EVA makes use of an algorithm that employs information from the laparoscopic instrument's shaft edges in the image, the instrument's insertion point, and the camera's optical centre to track the 3D position of the instrument tip. A validation study of EVA comprised a comparison of the measurements obtained with EVA and with the TrEndo tracking system. To this end, 42 participants (16 novices, 22 residents, and 4 experts) were asked to perform a peg transfer task in a box trainer. Ten motion-based metrics were used to assess their performance. RESULTS: Construct validation of EVA was obtained for seven motion-based metrics. Concurrent validation revealed a strong correlation between the results obtained by EVA and the TrEndo for metrics such as path length (p = 0.97), average speed (p = 0.94), and economy of volume (p = 0.85), proving the viability of EVA. CONCLUSIONS: EVA has been successfully used in the training setup, showing the potential of endoscopic video analysis to assess laparoscopic psychomotor skills. The results encourage further implementation of video tracking in training setups and in image-guided surgery.
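For orientation, here is a minimal sketch of how motion-based metrics of the kind named in the abstract can be computed from a sampled 3D trajectory of the instrument tip. This is not the EVA implementation; economy of volume is shown under one common definition (cube root of the bounding-box volume over path length), and the paper's exact formula may differ.

```python
# Illustrative motion-based metrics from a 3D instrument-tip trajectory.
import numpy as np

def path_length(positions):
    """positions: (N, 3) array of tip positions (e.g. in mm)."""
    return float(np.sum(np.linalg.norm(np.diff(positions, axis=0), axis=1)))

def average_speed(positions, dt):
    """dt: sampling interval in seconds."""
    total_time = dt * (len(positions) - 1)
    return path_length(positions) / total_time

def economy_of_volume(positions):
    # One common definition; assumption, not necessarily the paper's.
    extents = positions.max(axis=0) - positions.min(axis=0)
    return float(np.cbrt(np.prod(extents))) / path_length(positions)

traj = np.cumsum(np.random.default_rng(0).normal(scale=0.5, size=(200, 3)), axis=0)
print(path_length(traj), average_speed(traj, dt=1 / 30), economy_of_volume(traj))
```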
Abstract:
Macroscopic brain networks have been widely described with the manifold of metrics available from graph theory. However, most analyses do not incorporate information about the physical position of network nodes. Here, we provide a multimodal macroscopic network characterization while considering the physical positions of nodes. To do so, we examined anatomical and functional macroscopic brain networks in a sample of twenty healthy subjects. Anatomical networks are obtained with a graph-based tractography algorithm from diffusion-weighted magnetic resonance images (DW-MRI). Anatomical connections identified via DW-MRI provided probabilistic constraints for determining the connectedness of 90 different brain areas. Functional networks are derived from temporal linear correlations between blood-oxygenation-level-dependent signals from the same brain areas. Rentian scaling analysis, a technique adapted from very-large-scale integration circuit analysis, shows that functional networks are more random and less optimized than the anatomical networks. We also provide a new metric that allows quantifying the global connectivity arrangements for both structural and functional networks. While the functional networks show a higher contribution of inter-hemispheric connections, the strongest connections of the anatomical networks follow a dorsal-ventral arrangement. These results indicate that anatomical and functional networks present different connectivity organizations that can only be identified when the physical locations of the nodes are included in the analysis.
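The following toy illustration (not the paper's new metric) shows how node positions can be folded into a network summary: given node coordinates and a connectivity matrix, it computes the mean Euclidean length of connections and the fraction of connections that cross hemispheres, with the hemisphere assigned by the sign of the x coordinate as a simplifying assumption.

```python
# Toy position-aware network summary (illustrative assumptions throughout).
import numpy as np

rng = np.random.default_rng(1)
n = 90                                   # e.g. 90 brain areas, as in the abstract
coords = rng.normal(size=(n, 3))         # node positions (arbitrary units)
adj = rng.random((n, n)) < 0.05          # toy binary connectivity
adj = np.triu(adj, 1)                    # undirected, no self-loops

i, j = np.nonzero(adj)
lengths = np.linalg.norm(coords[i] - coords[j], axis=1)
inter_hemi = np.sign(coords[i, 0]) != np.sign(coords[j, 0])

print("mean connection length:", lengths.mean())
print("inter-hemispheric fraction:", inter_hemi.mean())
```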
Abstract:
Assessing video quality is a complex task. Most pixel-based metrics do not show sufficient correlation between objective and subjective results, yet algorithms need to correspond to human perception when analyzing the quality of a video sequence. To analyze the perceived quality caused by concrete video artifacts in given regions of interest, we present a novel methodology for generating test sequences that allows the impact of each individual distortion to be analyzed. From the results obtained after subjective assessment, it is possible to create psychovisual models based on weighting pixels belonging to different regions of interest distributed by color, position, motion, or content. Interesting results are obtained in the subjective assessment, which demonstrates the need for new metrics adapted to the human visual system.
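As a minimal sketch of the weighting idea (not the paper's psychovisual model), the snippet below computes a pixel-based quality score in which the squared error of each pixel is weighted by a region-of-interest map, so distortions inside the ROI count more than those elsewhere. The weight values and frame sizes are illustrative assumptions.

```python
# Illustrative ROI-weighted PSNR on a single frame.
import numpy as np

def weighted_psnr(reference, distorted, roi_mask, roi_weight=4.0, bg_weight=1.0):
    """reference, distorted: float arrays in [0, 255]; roi_mask: boolean array."""
    weights = np.where(roi_mask, roi_weight, bg_weight)
    wmse = np.sum(weights * (reference - distorted) ** 2) / np.sum(weights)
    return 10.0 * np.log10(255.0 ** 2 / wmse)

ref = np.full((64, 64), 128.0)
dist = ref + np.random.default_rng(2).normal(scale=5.0, size=ref.shape)
mask = np.zeros(ref.shape, dtype=bool)
mask[16:48, 16:48] = True                # ROI in the centre of the frame
print(f"ROI-weighted PSNR: {weighted_psnr(ref, dist, mask):.2f} dB")
```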
Abstract:
Models and model transformations are the core concepts of OMG's MDA™ approach. Within this approach, most models are derived from the MOF and have a graph-based nature. In contrast, most current model transformations are specified textually. To enable a graphical specification of model transformation rules, this paper proposes to use triple graph grammars as a declarative specification formalism. These triple graph grammars can be specified within the FUJABA tool, and we argue that such rules are easier to specify and become more understandable and maintainable. To show the practicability of our approach, we present how to generate Tefkat rules from triple graph grammar rules, which helps to integrate triple graph grammars with a state-of-the-art model transformation tool and shows the expressiveness of the concept.
Abstract:
In this paper, we develop a new graph kernel by using the quantum Jensen-Shannon divergence and the discrete-time quantum walk. To this end, we commence by performing a discrete-time quantum walk to compute a density matrix over each graph being compared. For a pair of graphs, we compare the mixed quantum states represented by their density matrices using the quantum Jensen-Shannon divergence. With the density matrices for a pair of graphs to hand, the quantum graph kernel between the pair of graphs is defined by exponentiating the negative quantum Jensen-Shannon divergence between the graph density matrices. We evaluate the performance of our kernel on several standard graph datasets, and demonstrate the effectiveness of the new kernel.
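The kernel construction described here reduces to two standard ingredients: the quantum Jensen-Shannon divergence between two density matrices and its exponentiation. The sketch below shows that final step in an assumed form, not the authors' code; in particular, how each graph's density matrix is obtained from the discrete-time quantum walk is specific to the paper, and the normalised-Laplacian stand-in used here is only an illustrative assumption.

```python
# Illustrative quantum Jensen-Shannon divergence kernel between two graphs.
import numpy as np
import networkx as nx

def von_neumann_entropy(rho):
    evals = np.linalg.eigvalsh(rho)
    evals = evals[evals > 1e-12]          # ignore numerically zero eigenvalues
    return float(-np.sum(evals * np.log2(evals)))

def qjsd(rho, sigma):
    # QJSD(rho, sigma) = S((rho + sigma) / 2) - (S(rho) + S(sigma)) / 2
    mix = 0.5 * (rho + sigma)
    return von_neumann_entropy(mix) - 0.5 * (
        von_neumann_entropy(rho) + von_neumann_entropy(sigma)
    )

def density_matrix(graph):
    # Stand-in density matrix (assumption): trace-normalised Laplacian,
    # not the paper's discrete-time quantum walk construction.
    lap = nx.normalized_laplacian_matrix(graph).toarray()
    return lap / np.trace(lap)

g1, g2 = nx.cycle_graph(6), nx.path_graph(6)
kernel_value = np.exp(-qjsd(density_matrix(g1), density_matrix(g2)))
print("kernel value:", kernel_value)
```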
Abstract:
In this paper, we use the quantum Jensen-Shannon divergence as a means to establish the similarity between a pair of graphs and to develop a novel graph kernel. In quantum theory, the quantum Jensen-Shannon divergence is defined as a distance measure between quantum states. In order to compute the quantum Jensen-Shannon divergence between a pair of graphs, we first need to associate a density operator with each of them. Hence, we simulate the evolution of a continuous-time quantum walk on each graph and propose a way to associate a suitable quantum state with it. With the density operator of this quantum state to hand, the graph kernel is defined as a function of the quantum Jensen-Shannon divergence between the graph density operators. We evaluate the performance of our kernel on several standard graph datasets from bioinformatics. We use Principal Component Analysis (PCA) on the kernel matrix to embed the graphs into a feature space for classification. The experimental results demonstrate the effectiveness of the proposed approach. © 2013 Springer-Verlag.
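Two steps from this abstract lend themselves to a short sketch: evolving a continuous-time quantum walk on a graph, and embedding graphs via PCA on a precomputed kernel matrix. The code below is an assumed illustration, not the authors' implementation; the choice of initial state, the use of the adjacency matrix as the walk Hamiltonian, and the example kernel matrix are all assumptions.

```python
# Illustrative continuous-time quantum walk and kernel-matrix PCA embedding.
import numpy as np
import networkx as nx
from scipy.linalg import expm
from sklearn.decomposition import KernelPCA

def ctqw_state(graph, t):
    """Return |psi(t)> = exp(-iAt) |psi(0)> with a uniform initial state."""
    a = nx.to_numpy_array(graph)
    psi0 = np.ones(a.shape[0], dtype=complex) / np.sqrt(a.shape[0])
    return expm(-1j * a * t) @ psi0

print(np.abs(ctqw_state(nx.cycle_graph(6), t=1.0)) ** 2)  # site probabilities

# Given a symmetric positive semi-definite kernel matrix K over a set of graphs,
# embed the graphs into a low-dimensional feature space for classification.
K = np.array([[1.0, 0.8, 0.2],
              [0.8, 1.0, 0.3],
              [0.2, 0.3, 1.0]])
embedding = KernelPCA(n_components=2, kernel="precomputed").fit_transform(K)
print(embedding)
```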