933 results for Graph matching
Abstract:
Non-invasive quantitative assessment of right ventricular anatomical and functional parameters is a challenging task. We present a semi-automatic approach for right ventricle (RV) segmentation from 4D MR images in two variants, which differ in the amount of user interaction. The method consists of three main phases: first, foreground and background markers are generated from the user input; next, an over-segmented region image is obtained by applying a watershed transform; finally, these regions are merged using 4D graph-cuts with an intensity-based boundary term. In the first variant, the user outlines the inside of the RV wall in a few end-diastole slices; in the second, two marker pixels serve as the starting point for the application of a statistical atlas. Results were obtained by blind evaluation on 16 test 4D MR volumes. They show our method to be robust to marker location and place it favourably among existing approaches.
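As a rough illustration of the pipeline's structure, the sketch below chains the two automated phases in 2D under strong simplifications: an unseeded watershed produces the over-segmentation, and a greedy mean-intensity merge stands in for the paper's 4D graph-cut step. All names and the merge rule are illustrative, not the authors' implementation.

```python
# Minimal 2D sketch: user markers -> watershed over-segmentation -> merging of
# regions by an intensity criterion (the paper merges with 4D graph-cuts).
import numpy as np
from skimage.filters import sobel
from skimage.segmentation import watershed

def segment(image, fg_seeds, bg_seeds, merge_tol=0.05):
    """image: 2D float array in [0, 1]; *_seeds: lists of (row, col) pixels."""
    markers = np.zeros(image.shape, dtype=np.int32)
    for r, c in fg_seeds:
        markers[r, c] = 1          # foreground marker (inside the RV wall)
    for r, c in bg_seeds:
        markers[r, c] = 2          # background marker
    # Over-segmentation: unseeded watershed on the gradient magnitude.
    # The markers guide the merging step below, not the watershed itself.
    regions = watershed(sobel(image))
    fg_mean = image[markers == 1].mean()
    bg_mean = image[markers == 2].mean()
    out = np.zeros(image.shape, dtype=bool)
    for label in np.unique(regions):
        m = image[regions == label].mean()
        # Greedy stand-in for the graph-cut: assign each region to the
        # marker class whose mean intensity it resembles most.
        if abs(m - fg_mean) + merge_tol < abs(m - bg_mean):
            out[regions == label] = True
    return out
```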
Abstract:
This article describes the simulation and characterization of an ultrasonic transducer that uses a new material, Rexolite, as the matching element. The transducer was simulated with a commercial piezoelectric ceramic (PIC255) at 8 MHz. Rexolite presents excellent acoustic matching, especially in terms of the acoustic impedance of water. Finite element simulations were used in this work. Rexolite was considered a suitable material for the construction of the transducer due to its malleability and acoustic properties. To validate the simulations, a prototype transducer was constructed, and experimental measurements were used to determine its resonance frequency. Simulated and experimental results were very similar, showing that Rexolite may be an excellent matching material, particularly for medical applications.
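For orientation, the standard quarter-wave matching-layer relations (textbook formulas, not taken from the article) connect the ideal layer impedance to the piezo and load impedances and fix the layer thickness at the centre frequency. With the 8 MHz centre frequency above and a commonly quoted Rexolite sound speed of roughly 2337 m/s (an assumed value), the quarter-wave thickness comes out near 73 µm:

```latex
% Quarter-wave matching layer between piezo (Z_p) and water load (Z_l):
Z_m = \sqrt{Z_p \, Z_l}, \qquad
t = \frac{\lambda}{4} = \frac{c_m}{4 f}
  \approx \frac{2337\ \mathrm{m/s}}{4 \times 8\ \mathrm{MHz}} \approx 73\ \mu\mathrm{m}
```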
Abstract:
In the context of aerial imagery, one of the first steps toward coherent processing of the information contained in multiple images is geo-registration, which consists of assigning geographic 3D coordinates to the pixels of the image. This enables accurate alignment and geo-positioning of multiple images, detection of moving objects, and fusion of data acquired from multiple sensors. Existing approaches to this problem require, in addition to a precise characterization of the camera sensor, high-resolution referenced images or terrain elevation models, which are usually either not publicly available or out of date. Building on the idea of developing technology that does not need a reference terrain elevation model, we propose a geo-registration technique that applies variational methods to obtain a dense and coherent surface elevation model that is used to replace the reference model. The surface elevation model is built by interpolation of scattered 3D points, which are obtained in a two-step process following a classical stereo pipeline: first, coherent disparity maps between image pairs of a video sequence are estimated, and then image point correspondences are back-projected. The proposed variational method enforces continuity of the disparity map not only along epipolar lines (as done by previous geo-registration techniques) but also across them, in the full 2D image domain. In the experiments, aerial images from synthetic video sequences have been used to validate the proposed technique.
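A minimal sketch of the final interpolation step, under an assumed data layout (points as an (N, 3) array): the scattered back-projected 3D points are resampled into a dense elevation grid, with nearest-neighbour fill outside the convex hull so the model is gap-free.

```python
# Scattered back-projected points -> dense surface elevation model.
import numpy as np
from scipy.interpolate import griddata

def elevation_model(points_xyz, grid_res=1.0):
    """points_xyz: (N, 3) array of back-projected points (x, y, elevation)."""
    xy, z = points_xyz[:, :2], points_xyz[:, 2]
    xs = np.arange(xy[:, 0].min(), xy[:, 0].max(), grid_res)
    ys = np.arange(xy[:, 1].min(), xy[:, 1].max(), grid_res)
    gx, gy = np.meshgrid(xs, ys)
    # Linear interpolation inside the convex hull; nearest-neighbour fills
    # the remainder, yielding the dense, gap-free replacement model.
    dense = griddata(xy, z, (gx, gy), method="linear")
    holes = np.isnan(dense)
    dense[holes] = griddata(xy, z, (gx, gy), method="nearest")[holes]
    return xs, ys, dense
```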
Abstract:
A real-time, large-scale part-to-part video matching algorithm, based on the cross-correlation of motion intensity curves, is proposed with a view to originality recognition, video database cleansing, copyright enforcement, video tagging and video result re-ranking. Moreover, it is suggested how the most representative hashes and distance functions (strada, discrete cosine transform, Marr-Hildreth and radial) should be integrated so that the matching algorithm is invariant to blur, compression and rotation distortions: (R, σ) ∈ [1, 20] × [1, 8], from 512×512 down to 32×32 pixels², and from 10° to 180°. The DCT hash is invariant to blur and compression down to 64×64 pixels². Nevertheless, although its performance against rotation is the best, with a success rate of up to 70%, it should be combined with the Marr-Hildreth distance function; with the latter, the image selected by the DCT hash should lie at a distance lower than 1.15 times the Marr-Hildreth minimum distance.
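For reference, the sketch below shows the standard pHash-style DCT hash construction, which is assumed here to approximate the "DCT hash" the abstract refers to; the 32×32 input size and 8×8 low-frequency block are the conventional choices, not values from the paper.

```python
# pHash-style DCT hash: 2D DCT of a 32x32 grayscale frame, keep the 8x8
# low-frequency corner, threshold against its median to get 64 bits.
import numpy as np
from scipy.fft import dct

def dct_hash(gray_32x32):
    """gray_32x32: (32, 32) float array (already resized grayscale frame)."""
    d = dct(dct(gray_32x32, axis=0, norm="ortho"), axis=1, norm="ortho")
    low = d[:8, :8]                       # low-frequency corner of the DCT
    return (low > np.median(low)).ravel() # 64-bit boolean signature

def hamming(a, b):
    return int(np.count_nonzero(a != b))  # smaller = more similar frames
```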
Abstract:
This paper presents a strategy for solving the feature matching problem in calibrated very wide-baseline camera settings. In this kind of setting, perspective distortion, depth discontinuities and occlusion represent enormous challenges. The proposed strategy addresses them by using geometrical information, specifically by exploiting epipolar constraints. As a result, it provides a sparse set of reliable feature points whose 3D positions are accurately recovered. Special features known as junctions are used for robust matching. In particular, a strategy for the refinement of junction end-point matching is proposed which enhances usual junction-based approaches. This makes it possible to compute cross-correlation between perfectly aligned plane patches in both images, thus yielding better matching results. Evaluation of experimental results demonstrates the effectiveness of the proposed algorithm in very wide-baseline environments.
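A minimal sketch of the epipolar pruning idea, assuming the fundamental matrix F of the calibrated pair is known: a putative correspondence is kept only if the second point lies within a pixel tolerance of the epipolar line of the first.

```python
# Epipolar-constraint filter for candidate matches between two views.
import numpy as np

def epipolar_inliers(F, pts1, pts2, tol_px=1.5):
    """F: (3, 3) fundamental matrix; pts1, pts2: (N, 2) candidate matches."""
    x1 = np.hstack([pts1, np.ones((len(pts1), 1))])  # homogeneous coords
    x2 = np.hstack([pts2, np.ones((len(pts2), 1))])
    lines = x1 @ F.T                                 # epipolar lines in image 2
    # Point-to-line distance |ax + by + c| / sqrt(a^2 + b^2) for each pair.
    num = np.abs(np.sum(lines * x2, axis=1))
    den = np.hypot(lines[:, 0], lines[:, 1])
    return num / den < tol_px                        # boolean inlier mask
```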
Abstract:
In this paper we provide a method that allows the visualization of the similarity relationships present between items of collaborative filtering recommender systems, as well as the relative importance of each of these relationships. The objective is to offer visual representations of the recommender system's set of items and of their relationships; these graphs show us where the most representative information can be found and which items are rated in a more similar way by the recommender system's community of users. The visual representations achieved take the shape of phylogenetic trees, displaying the numerical similarity and the reliability between each pair of items considered to be similar. As a case study we provide the results obtained using the public database Movielens 1M, which contains 3900 movies.
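A minimal sketch of this kind of item tree, assuming a hypothetical user-item ratings matrix: cosine similarities between items are converted to distances and arranged hierarchically, yielding the tree-shaped layout (here via SciPy's dendrogram) that the phylogenetic representation generalizes.

```python
# Item-item similarities -> hierarchical tree layout of a recommender's items.
import numpy as np
from scipy.cluster.hierarchy import linkage, dendrogram
from scipy.spatial.distance import squareform

def item_tree(ratings, item_labels):
    """ratings: (users, items) array with NaN for missing votes."""
    filled = np.nan_to_num(ratings)
    norm = np.linalg.norm(filled, axis=0, keepdims=True) + 1e-12
    sim = (filled / norm).T @ (filled / norm)   # item-item cosine similarity
    dist = 1.0 - sim                            # similarity -> distance
    np.fill_diagonal(dist, 0.0)
    z = linkage(squareform(dist, checks=False), method="average")
    return dendrogram(z, labels=item_labels, no_plot=True)
```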
Abstract:
The aim of this paper is to develop a probabilistic modeling framework for the segmentation of structures of interest from a collection of atlases. Given a subset of atlases registered to the target image for a particular Region of Interest (ROI), a statistical model of appearance and shape is computed for fusing the labels. Segmentations are obtained by minimizing an energy function associated with the proposed model, using a graph-cut technique. We test different label fusion methods on publicly available MR images of human brains.
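As a baseline illustration of label fusion from registered atlases, the sketch below uses intensity-weighted voting; the paper's actual method builds a statistical appearance/shape model and minimizes the associated energy with graph cuts, which this simple rule only approximates.

```python
# Intensity-weighted voting across registered atlases (fusion baseline).
import numpy as np

def fuse_labels(target, atlas_images, atlas_labels, beta=0.5):
    """target: (H, W); atlas_images/labels: lists of registered (H, W) arrays."""
    votes = np.zeros(target.shape)
    weights = np.zeros(target.shape)
    for img, lab in zip(atlas_images, atlas_labels):
        w = np.exp(-beta * (img - target) ** 2)  # local appearance agreement
        votes += w * lab                          # lab is a binary ROI mask
        weights += w
    return (votes / weights) > 0.5               # fused binary segmentation
```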
Abstract:
With the rise of Cloud Computing, data-processing applications have seen a surge in demand, and achieving more efficiency in data centers has therefore become important. The goal of this work is to obtain tools for analyzing the viability and profitability of designing data centers specialized for data processing, with adapted architectures, cooling systems, and so on. Some data-processing applications benefit from software architectures, while for others a hardware architecture may be more efficient. Since software with very good graph-processing results already exists, such as the XPregel system, this project develops a hardware architecture in VHDL, implementing Google's PageRank algorithm in a scalable way. This algorithm was chosen because it could be more efficient in a hardware architecture, due to specific characteristics described below. PageRank ranks pages by their relevance on the web using graph theory: each web page is a vertex of a graph, and the links between pages are the edges of that graph. In this project, an analysis of the state of the art is carried out first. The implementation in XPregel, a graph-processing system, is assumed to be one of the most efficient, so that implementation is studied. However, because XPregel processes graph algorithms in general, it does not take into account certain characteristics of the PageRank algorithm, so its implementation is not optimal: in PageRank, storing all the data sent by a single vertex is an unnecessary waste of memory, since all the messages a vertex sends are identical to each other and equal to its PageRank. The VHDL design takes this characteristic of the algorithm into account, avoiding storing identical messages several times. PageRank was chosen for implementation in VHDL because current operating-system architectures do not scale adequately; the aim is to evaluate whether another architecture yields better results. The design is built from scratch, using the automatically generated ROM IP core from Xilinx (VHDL development software). Four types of modules are planned so that processing can be done in parallel. The XPregel structure is simplified in order to exploit the aforementioned peculiarity of PageRank, which XPregel does not take full advantage of. The code is then written with a scalable structure, since the computation involves millions of web pages. Next, the code is synthesized and tested on an FPGA. The final step is an evaluation of the implementation and of possible improvements in power consumption.
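A minimal software sketch (in Python, purely illustrative) of the PageRank property the hardware design exploits: in each iteration every vertex sends the same value, rank divided by out-degree, to all its neighbours, so a single value per vertex suffices and identical messages never need to be stored.

```python
# Power-iteration PageRank; one shared message value per vertex per iteration.
import numpy as np

def pagerank(out_links, d=0.85, iters=50):
    """out_links: list where out_links[v] is the list of vertices v links to."""
    n = len(out_links)
    rank = np.full(n, 1.0 / n)
    for _ in range(iters):
        contrib = np.zeros(n)
        for v, targets in enumerate(out_links):
            if targets:
                share = rank[v] / len(targets)  # identical for every target
                for t in targets:
                    contrib[t] += share
            else:                               # dangling page: spread evenly
                contrib += rank[v] / n
        rank = (1 - d) / n + d * contrib        # damping factor d
    return rank
```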
Abstract:
Although context could be exploited to improve performance, elasticity and adaptation in most distributed systems that adopt the publish/subscribe (P/S) communication model, only a few researchers have focused on context-aware matching in P/S systems and explored its implications in domains with highly dynamic context, such as wireless sensor networks (WSNs) and IoT-enabled applications. Most adopted P/S models are context agnostic or do not differentiate context from other application data. In this article, we present a novel context-aware P/S model. SilboPS manages context explicitly, focusing on minimizing network overhead in domains with recurrent context changes, related, for example, to mobile ad hoc networks (MANETs). Our approach helps to efficiently share and use sensor data coming from ubiquitous WSNs across a plethora of applications intent on using these data to build context awareness. Specifically, we empirically demonstrate that decoupling a subscription from the changing context in which it is produced, and leveraging contextual scoping in the filtering process, notably reduces the (un)subscription cost per node, while improving the global performance/throughput of the network of brokers without altering the cost of SIENA-like topology changes.
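A minimal, hypothetical sketch of contextual scoping in P/S matching; the class and method names are illustrative, not the SilboPS API. The point is that the context scope is held separately from the content filter, so a context change updates the scope in place instead of triggering a costly re-subscription.

```python
# Hypothetical illustration: a subscription decoupled from its context.
from dataclasses import dataclass, field

@dataclass
class Subscription:
    content_filter: dict                                # attribute -> value
    context_scope: dict = field(default_factory=dict)   # e.g. {"room": "A"}

    def matches(self, event_attrs, event_context):
        # An event matches only if both the content filter and the current
        # contextual scope are satisfied.
        return (all(event_attrs.get(k) == v for k, v in self.content_filter.items())
                and all(event_context.get(k) == v for k, v in self.context_scope.items()))

sub = Subscription({"type": "temperature"}, {"building": "B1"})
print(sub.matches({"type": "temperature", "value": 21}, {"building": "B1"}))  # True
sub.context_scope["building"] = "B2"   # context change: no re-subscription needed
```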
Abstract:
The purpose of this thesis is the automatic construction of ontologies from texts, set within the area known as Ontology Learning. This discipline aims to automate the elaboration of domain models from structured or unstructured information sources, and had its origin at the turn of the millennium, as a result of the exponential growth in the volume of information accessible on the Internet. Since most information is presented on the web in the form of text, automatic ontology learning has focused on the analysis of this type of source, drawing over the years on very diverse techniques from areas such as Information Retrieval, Information Extraction, Summarization and, in general, areas related to natural language processing. The main contribution of this thesis is that, in contrast with the majority of current techniques, the proposed method does not analyze the surface syntactic structure of the language, but studies its deep semantic level. Its objective, therefore, is to infer the domain model from the way the meanings of sentences are articulated in natural language. Since the deep semantic level is independent of the language, the method can operate in multilingual scenarios, where it is necessary to combine information from texts in different languages. To access this level of the language, the method uses the interlingua model. These formalisms, coming from the area of machine translation, make it possible to represent the meaning of sentences independently of the language. In particular, UNL (Universal Networking Language) is used, considered the only general-purpose interlingua that is standardized. The approach used in this thesis continues previous work carried out both by its author and by the research group of which he is part, which studied how to use the interlingua model in the areas of multilingual information extraction and retrieval. Basically, the procedure defined in the method tries to identify, in the UNL representation of texts, certain regularities that allow the pieces of the domain ontology to be deduced. Since UNL is a formalism based on semantic networks, these regularities take the form of graphs, generalized into structures called linguistic patterns. On the other hand, UNL still preserves certain discourse-cohesion mechanisms inherited from natural languages, such as the phenomenon of anaphora. In order to increase the effectiveness in the understanding of expressions, the method provides, as another significant contribution, an algorithm for the resolution of pronominal anaphora within the interlingua model, limited to the case of third-person personal pronouns whose antecedent is a proper noun. The proposed method rests on the definition of a formal framework, elaborated by adapting certain definitions from graph theory and incorporating new ones, in order to accommodate the notions of UNL expression and linguistic pattern, as well as the pattern-matching operations that are the basis of the method's processes. Both the formal framework and all the processes defined by the method have been implemented in order to carry out the experimentation, applying them to an article from the UNESCO EOLSS (Encyclopedia of Life Support Systems) collection.
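A minimal sketch of the pattern-matching core, treating a UNL expression as a labelled directed graph and a linguistic pattern as a smaller graph whose embeddings are enumerated; the relations and words are illustrative, and networkx's subgraph monomorphism stands in for the thesis's own matching operations.

```python
# UNL-style expression graph vs. a linguistic pattern graph.
import networkx as nx
from networkx.algorithms.isomorphism import DiGraphMatcher

expr = nx.DiGraph()            # nodes = words, edges = UNL-like relations
expr.add_edge("dog", "bark", rel="agt")     # agent relation (illustrative)
expr.add_edge("bark", "loudly", rel="man")  # manner relation (illustrative)

pattern = nx.DiGraph()         # pattern: "something is the agent of something"
pattern.add_edge("X", "Y", rel="agt")

m = DiGraphMatcher(expr, pattern,
                   edge_match=lambda e1, e2: e1["rel"] == e2["rel"])
for mapping in m.subgraph_monomorphisms_iter():
    print(mapping)             # {'dog': 'X', 'bark': 'Y'}
```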
Abstract:
A novel pedestrian motion prediction technique is presented in this paper. Its main achievement is that no previous observation, no knowledge of pedestrian trajectories and no assumption about possible destinations are required, which makes it useful for autonomous surveillance applications. Prediction only requires the initial position of the pedestrian and a 2D representation of the scenario as an occupancy grid. First, the Fast Marching Method (FMM) is used to calculate the pedestrian's arrival time at each position in the map; then, the likelihood that the pedestrian reaches those positions is estimated. The technique has been tested with synthetic and real scenarios. In all cases, accurate probability maps, as well as their representative graphs, were obtained at low computational cost.
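A minimal sketch of the arrival-time computation on an occupancy grid; a Dijkstra grid propagation is used here as a simple stand-in for the Fast Marching Method, giving a comparable arrival-time field from which the reachability likelihoods can be derived.

```python
# Arrival-time field from a start cell over an occupancy grid (Dijkstra
# propagation standing in for the Fast Marching Method).
import heapq
import numpy as np

def arrival_times(occupancy, start, speed=1.0):
    """occupancy: 2D bool array (True = obstacle); start: (row, col)."""
    h, w = occupancy.shape
    t = np.full((h, w), np.inf)
    t[start] = 0.0
    heap = [(0.0, start)]
    while heap:
        t0, (r, c) = heapq.heappop(heap)
        if t0 > t[r, c]:
            continue                          # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not occupancy[nr, nc]:
                nt = t0 + 1.0 / speed         # unit cell, constant speed
                if nt < t[nr, nc]:
                    t[nr, nc] = nt
                    heapq.heappush(heap, (nt, (nr, nc)))
    return t   # likelihood of reaching each cell is then derived from t
```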
Abstract:
A coarse-grained model for protein-folding dynamics is introduced based on a discretized representation of torsional modes. The model, based on the Ramachandran map of the local torsional potential surface and the class (hydrophobic/polar/neutral) of each residue, recognizes patterns of both torsional conformations and hydrophobic-polar contacts, with tolerance for imperfect patterns. It incorporates empirical rates for formation of secondary and tertiary structure. The method yields a topological representation of the evolving local torsional configuration of the folding protein, modulo the basins of the Ramachandran map. The folding process is modeled as a sequence of transitions from one contact pattern to another, as the torsional patterns evolve. We test the model by applying it to the folding process of bovine pancreatic trypsin inhibitor, obtaining a kinetic description of the transitions between the contact patterns visited by the protein along the dominant folding pathway. The kinetics and detailed balance make it possible to invert the result to obtain a coarse topographic description of the potential energy surface along the dominant folding pathway, in effect to go backward or forward between a topological representation of the chain conformation and a topographical description of the potential energy surface governing the folding process. As a result, the strong structure-seeking character of bovine pancreatic trypsin inhibitor and the principal features of its folding pathway are reproduced in a reasonably quantitative way.
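The inversion from measured transition kinetics to a topographic description of the surface rests on detailed balance; in its standard textbook form (not quoted from the paper), the ratio of forward and backward rates between contact patterns i and j fixes their free-energy difference:

```latex
% Detailed balance between contact patterns i and j:
\frac{k_{i \to j}}{k_{j \to i}}
  = \exp\!\left(-\frac{F_j - F_i}{k_B T}\right)
\quad\Longrightarrow\quad
F_j - F_i = -\,k_B T \,\ln\frac{k_{i \to j}}{k_{j \to i}}
```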
Abstract:
This paper decomposes the conventional measure of selection bias in observational studies into three components. The first two components are due to differences in the distributions of characteristics between participant and nonparticipant (comparison) group members: the first arises from differences in the supports, and the second from differences in densities over the region of common support. The third component arises from selection bias precisely defined. Using data from a recent social experiment, we find that the component due to selection bias, precisely defined, is smaller than the first two components. However, selection bias still represents a substantial fraction of the experimental impact estimate. The empirical performance of matching methods of program evaluation is also examined. We find that matching based on the propensity score eliminates some but not all of the measured selection bias, with the remaining bias still a substantial fraction of the estimated impact. We find that the support of the distribution of propensity scores for the comparison group is typically only a small portion of the support for the participant group. For values outside the common support, it is impossible to reliably estimate the effect of program participation using matching methods. If the impact of participation depends on the propensity score, as we find in our data, the failure of the common support condition severely limits matching compared with random assignment as an evaluation estimator.
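A minimal propensity-score matching sketch (a standard construction with illustrative names, not the authors' estimator): scores are fit by logistic regression, participants are restricted to the region of common support, and each remaining participant is nearest-neighbour matched on the score.

```python
# Propensity-score estimation, common-support trimming, NN matching.
import numpy as np
from sklearn.linear_model import LogisticRegression

def match_on_score(X, treated):
    """X: (N, k) covariates; treated: (N,) boolean participation indicator."""
    score = LogisticRegression(max_iter=1000).fit(X, treated).predict_proba(X)[:, 1]
    s_t, s_c = score[treated], score[~treated]
    # Region of common support: overlap of the two score distributions.
    lo, hi = max(s_t.min(), s_c.min()), min(s_t.max(), s_c.max())
    keep_t = (s_t >= lo) & (s_t <= hi)   # outside it, no reliable estimate
    idx_c = np.where(~treated)[0]
    pairs = [idx_c[np.argmin(np.abs(s_c - s))]   # nearest comparison unit
             for s in s_t[keep_t]]
    return score, np.array(pairs)
```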
Abstract:
Two potential outcomes of a coevolutionary interaction are an escalating arms race and stable cycling. The general expectation has been that arms races predominate in cases of polygenic inheritance of resistance traits and permanent cycling predominates in cases in which resistance is controlled by major genes. In the interaction between Depressaria pastinacella, the parsnip webworm, and Pastinaca sativa, the wild parsnip, traits for plant resistance to insect herbivory (production of defensive furanocoumarins) as well as traits for herbivore “virulence” (ability to metabolize furanocoumarins) are characterized by continuous heritable variation. Furanocoumarin production in plants and rates of metabolism in insects were compared among four midwestern populations; these traits then were classified into four clusters describing multitrait phenotypes occurring in all or most of the populations. When the frequency of plant phenotypes belonging to each of the clusters is compared with the frequency of the insect phenotypes in each of the clusters across populations, a remarkable degree of frequency matching is revealed in three of the populations. That frequencies of phenotypes vary among populations is consistent with the fact that spatial variation occurs in the temporal cycling of phenotypes; such processes contribute in generating a geographic mosaic in this coevolutionary interaction on the landscape scale. Comparisons of contemporary plant phenotype distributions with phenotypes of herbarium specimens collected 9–125 years ago from across a similar latitudinal gradient, however, suggest that for at least one resistance trait—sphondin concentration—interactions with webworms have led to escalatory change.
Abstract:
Molecular and fragment ion data of intact 8- to 43-kDa proteins from electrospray Fourier-transform tandem mass spectrometry are matched against the corresponding data in sequence data bases. Extending the sequence tag concept of Mann and Wilm for matching peptides, a partial amino acid sequence in the unknown is first identified from the mass differences of a series of fragment ions, and the mass position of this sequence is defined from molecular weight and the fragment ion masses. For three studied proteins, a single sequence tag retrieved only the correct protein from the data base; a fourth protein required the input of two sequence tags. However, three of the data base proteins differed by having an extra methionine or by missing an acetyl or heme substitution. The positions of these modifications in the protein examined were greatly restricted by the mass differences of its molecular and fragment ions versus those of the data base. To characterize the primary structure of an unknown represented in the data base, this method is fast and specific and does not require prior enzymatic or chemical degradation.
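A minimal sketch of the sequence-tag idea for fragment ions: consecutive mass differences in a fragment-ion series are read off against monoisotopic amino-acid residue masses. The residue table is truncated to a few entries and the tolerance is illustrative.

```python
# Read a partial sequence tag from successive fragment-ion mass differences.
RESIDUES = {  # monoisotopic residue masses, Da (subset for illustration)
    "G": 57.02146, "A": 71.03711, "S": 87.03203, "P": 97.05276,
    "V": 99.06841, "T": 101.04768, "L": 113.08406, "D": 115.02694,
    "E": 129.04259, "F": 147.06841,
}

def sequence_tag(fragment_masses, tol=0.02):
    """fragment_masses: ascending masses of a consecutive fragment-ion series."""
    tag = []
    for m1, m2 in zip(fragment_masses, fragment_masses[1:]):
        delta = m2 - m1
        hits = [aa for aa, mass in RESIDUES.items() if abs(mass - delta) < tol]
        tag.append(hits[0] if hits else "?")   # '?' = unassigned gap
    return "".join(tag)

print(sequence_tag([500.0, 571.03711, 684.12117]))  # -> "AL"
```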