938 results for Graph-based segmentation


Relevance: 80.00%

Abstract:

In Western industrialized countries, breast carcinoma is the most common malignant tumour in women. Worldwide, it accounts for roughly 21% of all cancers in women. By now, one in nine women is at risk of developing breast cancer during her lifetime. The age-standardized mortality rate is currently just under 27%.

Breast carcinoma has a relatively low growth rate. A diagnostic method with which all breast carcinomas below 10 mm in diameter could be detected and removed would virtually eliminate death from breast cancer, because the 20-year survival rate for initial carcinomas of 5 to 10 mm is very high, at over 95%.

Contrast-enhanced MRI is a relatively young examination method that is sensitive enough to detect carcinomas from a diameter of 3 mm. The diagnostic methodology, however, is complex and error-prone, and it requires a long training period and hence considerable experience on the part of the radiologist.

Computer-aided diagnosis software can raise the quality of such a complex diagnosis, or at least speed up the process. The aim of this work is the development of fully automatic diagnosis software that can be used as a second-opinion system. To my knowledge, no such complete software exists to date.

The software runs a chain of image-processing steps modelled on the radiologist's own procedure and produces an independent diagnosis for every lesion it finds. First, a 3D image registration eliminates motion artefacts as a preprocessing step in order to improve the image quality for the subsequent processing steps. Every contrast-enhancing object is then detected by a rule-based segmentation with adaptive thresholds. Kinetic and morphological features are computed to describe the contrast-uptake behaviour and the shape, margin and texture properties of each object. Finally, based on the resulting feature vector, two trained neural networks classify each object either as an additional finding or as a benign or malignant lesion.

The performance of the software was tested on image data from 101 female patients containing 141 histologically confirmed lesions. Predicting whether these lesions were benign or malignant yielded a sensitivity of 88% at a specificity of 72%, values similar to the predictions of expert radiologists reported in the literature. The predictions contained on average 2.5 additional malignant findings per patient, which turned out to be misclassified artefacts.
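
To make the detection step more concrete, the following is a minimal sketch, assuming a pre- and a post-contrast volume held as NumPy arrays and a hypothetical adaptive threshold of mean plus k standard deviations of the relative enhancement; it illustrates adaptive-threshold detection of enhancing regions and is not the thesis software.

    import numpy as np
    from scipy import ndimage

    def detect_enhancing_objects(pre, post, k=2.0, min_voxels=20):
        """Toy adaptive-threshold detection of contrast-enhancing regions.

        pre, post: 3D numpy arrays (pre-/post-contrast volumes), same shape.
        k:         standard deviations above the mean enhancement used as the
                   adaptive threshold (hypothetical parameter).
        """
        # Relative signal enhancement, guarding against division by zero.
        enhancement = (post - pre) / np.maximum(pre, 1e-6)

        # Adaptive threshold derived from the volume's own statistics.
        threshold = enhancement.mean() + k * enhancement.std()
        mask = enhancement > threshold

        # Connected-component labelling; discard tiny components as noise.
        labels, n = ndimage.label(mask)
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        keep = {i + 1 for i, s in enumerate(sizes) if s >= min_voxels}
        labels[~np.isin(labels, list(keep))] = 0
        return labels

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        pre = rng.normal(100, 5, size=(32, 32, 32))
        post = pre.copy()
        post[10:14, 10:14, 10:14] += 80   # synthetic enhancing "lesion"
        labels = detect_enhancing_objects(pre, post)
        print("objects found:", labels.max())

In the actual system the threshold rules, the registration and the feature computation are of course far more elaborate than this toy fragment.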

Relevance: 80.00%

Abstract:

Dynamic spectrum access (DSA) aims at utilizing spectral opportunities in both the time and frequency domains at any given location, which arise due to variations in spectrum usage. Recently, cognitive radios (CRs) have been proposed as a means of implementing DSA. In this work we focus on the aspect of resource management in overlaid CRNs. We formulate resource allocation strategies for cognitive radio networks (CRNs) as mathematical optimization problems. Specifically, we focus on two key problems in resource management: Sum Rate Maximization and Maximization of the Number of Admitted Users. Since both of the above-mentioned problems are NP-hard due to the presence of binary assignment variables, we propose novel graph-based algorithms to solve them optimally. Further, we analyze the impact of location awareness on the network performance of CRNs by considering three cases: full location awareness, partial location awareness and no location awareness. Our results clearly show that location awareness has a significant impact on the performance of overlaid CRNs and leads to an increase in spectrum utilization efficiency.
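
As a toy illustration of sum rate maximization viewed as an assignment problem (a simplification that ignores interference constraints and is not the paper's graph-based algorithm; the rate matrix below is synthetic), the relaxed one-channel-per-user case can be solved with the Hungarian method:

    import numpy as np
    from scipy.optimize import linear_sum_assignment

    # Toy sum-rate maximization: assign each secondary user at most one channel
    # so that the total achievable rate is maximized. rates[u, c] is the
    # (hypothetical) rate user u would obtain on channel c.
    rng = np.random.default_rng(1)
    rates = rng.uniform(0.1, 5.0, size=(4, 6))   # 4 users, 6 channels

    # linear_sum_assignment minimizes cost, so negate the rates.
    users, channels = linear_sum_assignment(-rates)
    for u, c in zip(users, channels):
        print(f"user {u} -> channel {c}  (rate {rates[u, c]:.2f})")
    print("sum rate:", rates[users, channels].sum())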

Relevance: 80.00%

Abstract:

This contribution discusses the effects of camera aperture correction in broadcast video on colour-based keying. Aperture correction is used to 'sharpen' an image and is one element that distinguishes the 'TV look' from the 'film look'. If a very high level of sharpening is applied, as is the case in many TV productions, it significantly shifts the colours around object boundaries with high contrast. This paper discusses these effects and their impact on keying, and describes a simple low-pass filter to compensate for them. Tests with colour-based segmentation algorithms show that the proposed compensation is an effective way of decreasing keying artefacts on object boundaries.
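
A minimal sketch of the idea, assuming a float RGB image and a hypothetical Gaussian sigma for the low-pass stage (the paper only specifies a simple low-pass filter, so the exact kernel and the crude colour-distance key below are assumptions):

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def soften_and_key(rgb, key_rgb=(0.0, 1.0, 0.0), sigma=1.0, tol=0.35):
        """Apply a mild low-pass filter before colour keying.

        rgb:   H x W x 3 float image in [0, 1].
        sigma: strength of the Gaussian low-pass used to counteract
               aperture-correction overshoot around edges (hypothetical value).
        tol:   colour distance below which a pixel is keyed out.
        """
        # Low-pass each colour channel independently.
        soft = np.stack([gaussian_filter(rgb[..., c], sigma) for c in range(3)],
                        axis=-1)
        # Simple Euclidean colour-distance key against the key colour.
        dist = np.linalg.norm(soft - np.asarray(key_rgb), axis=-1)
        return dist > tol            # True = foreground, False = keyed out

    if __name__ == "__main__":
        img = np.zeros((8, 8, 3))
        img[:, :4] = (0.0, 1.0, 0.0)     # left half: green screen
        img[:, 4:] = (0.8, 0.2, 0.2)     # right half: foreground
        print(soften_and_key(img).astype(int))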

Relevance: 80.00%

Abstract:

A nonlinear viscoelastic image registration algorithm based on the demons paradigm and incorporating an inverse consistency constraint (ICC) is implemented. An inverse-consistent and symmetric cost function using mutual information (MI) as a similarity measure is employed. The cost function also includes regularization of the transformation and of the inverse consistency error (ICE). The uncertainties in balancing the various terms in the cost function are avoided by alternately minimizing the similarity measure, the regularization of the transformation, and the ICE terms. Diffeomorphic registration, which prevents folding and/or tearing in the deformation, is achieved by the composition scheme. The quality of image registration is first demonstrated by constructing a brain atlas from 20 adult brains (age range 30-60 years). It is shown that with this registration technique (1) the Jacobian determinant is positive for all voxels and (2) the average ICE is around 0.004 voxels, with a maximum value below 0.1 voxels. Further, deformation-based segmentation on the Internet Brain Segmentation Repository, a publicly available dataset, yielded a high Dice similarity index (DSI) of 94.7% for the cerebellum and 74.7% for the hippocampus, attesting to the quality of our registration method.
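
The Dice similarity index reported above is defined as DSI = 2|A∩B| / (|A| + |B|); a minimal sketch for two binary label volumes follows (illustrative only, not the authors' evaluation code):

    import numpy as np

    def dice_index(seg_a, seg_b):
        """Dice similarity index between two binary segmentations:
        DSI = 2*|A intersect B| / (|A| + |B|)."""
        a = seg_a.astype(bool)
        b = seg_b.astype(bool)
        denom = a.sum() + b.sum()
        return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

    if __name__ == "__main__":
        atlas_label = np.zeros((10, 10, 10), dtype=bool)
        subject_label = np.zeros_like(atlas_label)
        atlas_label[2:7, 2:7, 2:7] = True
        subject_label[3:8, 3:8, 3:8] = True
        print(f"DSI = {dice_index(atlas_label, subject_label):.3f}")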

Relevance: 80.00%

Abstract:

Point Distribution Models (PDMs) are among the most popular shape description techniques, and their usefulness has been demonstrated in a wide variety of medical imaging applications. However, to adequately characterize the underlying modeled population it is essential to have a representative number of training samples, which is not always possible. This problem becomes especially relevant as the complexity of the modeled structure increases, with the modeling of ensembles of multiple 3D organs being one of the most challenging cases. In this paper, we introduce a new GEneralized Multi-resolution PDM (GEM-PDM) in the context of multi-organ analysis, able to efficiently characterize the different inter-object relations as well as the particular locality of each object separately. Importantly, unlike previous approaches, the configuration of the algorithm is automated thanks to a new agglomerative landmark clustering method proposed here, which also allows us to identify smaller anatomically significant regions within organs. The significant advantage of the GEM-PDM method over two previous approaches (PDM and hierarchical PDM) in terms of shape modeling accuracy and robustness to noise has been successfully verified on two different databases of sets of multiple organs: six subcortical brain structures, and seven abdominal organs. Finally, we propose the integration of the new shape modeling framework into an active-shape-model-based segmentation algorithm. The resulting algorithm, named GEMA, provides better overall performance than the two classical approaches tested, ASM and hierarchical ASM, when applied to the segmentation of 3D brain MRI.
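
At its core, a PDM is a principal component analysis of aligned landmark vectors; the sketch below fits such a basic single-resolution model to toy 2D landmark data (it does not implement the multi-resolution GEM-PDM or the landmark clustering described in the paper):

    import numpy as np

    def fit_pdm(shapes, var_kept=0.95):
        """Fit a basic Point Distribution Model.

        shapes: array of shape (n_samples, n_landmarks * dim), already aligned.
        Returns the mean shape, the retained eigenvectors and eigenvalues.
        """
        mean = shapes.mean(axis=0)
        centered = shapes - mean
        # PCA via SVD of the centered data matrix.
        _, s, vt = np.linalg.svd(centered, full_matrices=False)
        eigvals = s ** 2 / (len(shapes) - 1)
        cum = np.cumsum(eigvals) / eigvals.sum()
        k = int(np.searchsorted(cum, var_kept)) + 1
        return mean, vt[:k], eigvals[:k]

    def generate(mean, modes, b):
        """Generate a new shape x = mean + P b from mode weights b."""
        return mean + b @ modes

    if __name__ == "__main__":
        rng = np.random.default_rng(2)
        base = np.array([0, 0, 1, 0, 1, 1, 0, 1], dtype=float)  # unit square
        shapes = base + 0.05 * rng.standard_normal((30, 8))
        mean, modes, eigvals = fit_pdm(shapes)
        print("retained modes:", len(eigvals))
        print("synthesised shape:", generate(mean, modes, 0.1 * np.sqrt(eigvals)))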

Relevance: 80.00%

Abstract:

This article presents an approach for segmenting sporting event volunteers according to differences in their motives. Empirical data were obtained from a sample of 1169 volunteers who registered for the 2014 European Athletics Championships in Zürich and completed the ‘Volunteer Motivation Scale for International Sporting Events’ (VMS-ISE) questionnaire. The validity of the VMS-ISE was replicated by confirmatory factor analysis, and the data were cluster-analysed to identify distinct motivation-based volunteer profiles, segmenting the volunteers on the basis of mutually exclusive motivational characteristics. The external validity of the four motivation-based types (‘community supporters’, ‘material incentive seekers’, ‘social networkers’ and ‘career and personal growth orienteers’) was confirmed with socio-economic, sport-related and volunteer-activity-related variables. It is concluded that motivation-based segmentation represents a useful way of gaining a clearer understanding of the patterns underlying the heterogeneity of sporting event volunteers.
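
The profiling step is, in essence, a cluster analysis of motivation factor scores; the following sketch runs k-means on synthetic scores for four hypothetical VMS-ISE-style factors (the study's own clustering procedure and factor structure may differ):

    import numpy as np
    from sklearn.cluster import KMeans

    # Synthetic factor scores for illustration only: rows = volunteers,
    # columns = hypothetical motivation factors.
    rng = np.random.default_rng(3)
    factors = ["community", "material", "social", "career"]
    scores = np.vstack([
        rng.normal(loc=mu, scale=0.4, size=(50, len(factors)))
        for mu in ([4, 2, 2, 2], [2, 4, 2, 2], [2, 2, 4, 2], [2, 2, 2, 4])
    ])

    kmeans = KMeans(n_clusters=4, n_init=10, random_state=0).fit(scores)
    for c in range(4):
        centre = kmeans.cluster_centers_[c]
        profile = ", ".join(f"{f}={v:.1f}" for f, v in zip(factors, centre))
        print(f"cluster {c}: {profile}")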

Relevance: 80.00%

Abstract:

Libraries of learning objects may serve as a basis for deriving course offerings that are customized to the needs of different learning communities or even individuals. Several ways of organizing this course composition process are discussed. Course composition needs a clear understanding of the dependencies between the learning objects. We therefore discuss the metadata for object relationships proposed in different standardization projects, and especially those suggested in the Dublin Core Metadata Initiative. Based on these metadata we construct adjacency matrices and graphs, and we show how Gozinto-type computations can be used to determine direct and indirect prerequisites for certain learning objects. The metadata may also be used to define integer programming models, which can be applied to support the instructor in formulating his specifications for selecting objects, or which allow a computer agent to select learning objects automatically. Such decision models could also be helpful for a learner navigating through a library of learning objects. We also sketch a graph-based procedure for manual or automatic sequencing of the learning objects.
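
The Gozinto-type computation mentioned above can be illustrated by accumulating powers of the prerequisite adjacency matrix, which for a directed acyclic graph yields all direct and indirect prerequisites; the learning objects below are hypothetical:

    import numpy as np

    # Adjacency matrix A of a hypothetical prerequisite relation:
    # A[i, j] = 1 means learning object i is a direct prerequisite of object j.
    objects = ["intro", "sets", "relations", "graphs", "trees"]
    A = np.array([
        [0, 1, 1, 0, 0],
        [0, 0, 1, 1, 0],
        [0, 0, 0, 1, 0],
        [0, 0, 0, 0, 1],
        [0, 0, 0, 0, 0],
    ])

    # Gozinto-style total requirements: T = A + A^2 + ... (A is nilpotent for a
    # DAG, so the series is finite); T[i, j] counts the prerequisite paths.
    n = len(objects)
    T = np.zeros_like(A)
    power = A.copy()
    for _ in range(n - 1):
        T += power
        power = power @ A

    for j, name in enumerate(objects):
        prereqs = [objects[i] for i in range(n) if T[i, j] > 0]
        print(f"{name}: direct and indirect prerequisites = {prereqs}")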

Relevance: 80.00%

Abstract:

There has been significant interest in parallel execution models for logic programs that exploit Independent And-Parallelism (IAP). In these models, it is necessary to determine which goals are independent, and therefore eligible for parallel execution, and which goals have to wait for which others during execution. Although this can be done at run time, it can imply a very heavy overhead. In this paper, we present three algorithms for automatic compile-time parallelization of logic programs using IAP. This is done by converting a clause into a graph-based computational form and then transforming this graph into linear expressions based on &-Prolog, a language for IAP. We also present an algorithm which, given a clause, determines whether there is any loss of parallelism due to linearization, for the case in which only unconditional parallelism is desired. Finally, the performance of these annotation algorithms is discussed for some benchmark programs.
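
As a rough illustration of the linearization idea (not one of the paper's three annotation algorithms), the sketch below peels off layers of mutually independent goals from a hypothetical dependency graph and prints an &-Prolog-style expression in which ',' separates sequential layers and '&' marks goals that may run in parallel:

    def linearize(goals, deps):
        """Group goals into maximal independent layers.

        deps: dict mapping a goal to the set of goals it depends on.
        Returns an &-Prolog-like string: layers are joined with ',' (sequential)
        and goals inside a layer with '&' (parallel).
        """
        remaining = set(goals)
        done = set()
        layers = []
        while remaining:
            # Goals whose dependencies are all scheduled can run in parallel.
            layer = sorted(g for g in remaining if deps.get(g, set()) <= done)
            if not layer:
                raise ValueError("cyclic dependencies")
            layers.append(layer)
            done |= set(layer)
            remaining -= set(layer)
        return ", ".join(" & ".join(layer) if len(layer) > 1 else layer[0]
                         for layer in layers)

    if __name__ == "__main__":
        goals = ["p(X)", "q(Y)", "r(X,Y)", "s(Y)"]
        deps = {"r(X,Y)": {"p(X)", "q(Y)"}, "s(Y)": {"q(Y)"}}
        print(linearize(goals, deps))   # p(X) & q(Y), r(X,Y) & s(Y)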

Relevance: 80.00%

Abstract:

Macroscopic brain networks have been widely described with the manifold of metrics available in graph theory. However, most analyses do not incorporate information about the physical position of network nodes. Here, we provide a multimodal macroscopic network characterization while considering the physical positions of nodes. To do so, we examined anatomical and functional macroscopic brain networks in a sample of twenty healthy subjects. Anatomical networks are obtained with a graph-based tractography algorithm from diffusion-weighted magnetic resonance images (DW-MRI). Anatomical connections identified via DW-MRI provided probabilistic constraints for determining the connectedness of 90 different brain areas. Functional networks are derived from temporal linear correlations between blood-oxygenation-level-dependent signals derived from the same brain areas. Rentian scaling analysis, a technique adapted from very-large-scale integration circuit analyses, shows that functional networks are more random and less optimized than the anatomical networks. We also provide a new metric that allows quantifying the global connectivity arrangements for both structural and functional networks. While the functional networks show a higher contribution of inter-hemispheric connections, the anatomical networks' highest connections are identified in a dorsal-ventral arrangement. These results indicate that anatomical and functional networks present different connectivity organizations that can only be identified when the physical locations of the nodes are included in the analysis.
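
Physical Rentian scaling can be estimated by placing boxes over the node coordinates, counting the nodes inside each box and the edges crossing its boundary, and fitting e ∝ n^p; the sketch below does this for a synthetic distance-based network (node positions, radius and box sizes are arbitrary choices, not the authors' data or pipeline):

    import numpy as np

    rng = np.random.default_rng(4)
    n_nodes = 200
    pos = rng.uniform(0, 1, size=(n_nodes, 3))          # node positions
    # Hypothetical network: connect nodes closer than a radius.
    dist = np.linalg.norm(pos[:, None] - pos[None, :], axis=-1)
    adj = (dist < 0.25) & ~np.eye(n_nodes, dtype=bool)
    edges = np.argwhere(np.triu(adj))

    samples = []
    for _ in range(300):
        size = rng.uniform(0.2, 0.8)
        corner = rng.uniform(0, 1 - size, size=3)
        inside = np.all((pos >= corner) & (pos <= corner + size), axis=1)
        n_in = inside.sum()
        # Edges crossing the box boundary have exactly one endpoint inside.
        e_cross = np.sum(inside[edges[:, 0]] ^ inside[edges[:, 1]])
        if n_in > 1 and e_cross > 0:
            samples.append((n_in, e_cross))

    n_arr, e_arr = np.array(samples, dtype=float).T
    p, _ = np.polyfit(np.log(n_arr), np.log(e_arr), 1)
    print(f"estimated Rent exponent p = {p:.2f}")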

Relevance: 80.00%

Abstract:

We propose a new algorithm for the design of prediction structures with low delay and a limited penalty in rate-distortion performance for multiview video coding schemes. This algorithm constitutes one of the elements of a framework, based on graph theory, for the analysis and optimization of delay in multiview coding schemes. The objective of the algorithm is to find the best combination of prediction dependencies to prune from a multiview prediction structure, given a number of cuts. Taking into account the properties of the graph-based analysis of the encoding delay, the algorithm is able to find the best prediction dependencies to eliminate from an original prediction structure while limiting the number of cut combinations to evaluate. We show that this algorithm obtains optimum results in the reduction of the encoding latency with a lower computational complexity than exhaustive search alternatives.
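
A very rough sketch of the graph-based view of delay: treat the prediction dependencies as a weighted DAG, take the encoding latency as the longest dependency chain, and evaluate which single cut reduces it most by brute force (the frame names and delays are hypothetical, and the paper's algorithm avoids exactly this exhaustive evaluation):

    from functools import lru_cache

    # Hypothetical multiview prediction dependencies: edge (a, b) means frame b
    # is predicted from frame a; the weight is the per-frame encoding delay.
    edges = {("V0_T0", "V0_T1"): 1, ("V0_T0", "V1_T0"): 1,
             ("V0_T1", "V1_T1"): 1, ("V1_T0", "V1_T1"): 1,
             ("V1_T1", "V2_T1"): 1}

    def latency(edge_set):
        """Encoding latency = longest dependency chain in the DAG."""
        succ = {}
        for (a, b), w in edge_set.items():
            succ.setdefault(a, []).append((b, w))

        @lru_cache(maxsize=None)
        def longest_from(node):
            return max((w + longest_from(b) for b, w in succ.get(node, [])),
                       default=0)

        return max(longest_from(n) for n in {a for a, _ in edge_set})

    base = latency(edges)
    print("original latency:", base)
    # Try every single-edge cut and report the most effective one.
    best = min(edges,
               key=lambda e: latency({k: v for k, v in edges.items() if k != e}))
    print("best edge to cut:", best, "-> latency",
          latency({k: v for k, v in edges.items() if k != best}))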

Relevance: 80.00%

Abstract:

Owing to the ever-growing size of the data held in many current information systems, many of the algorithms that traverse these structures lose performance when searching them. Because these data are in many cases represented by node-vertex structures (graphs), the Graph500 challenge was created in 2009. Previously, other challenges such as Top500 measured performance on the basis of a system's computing capacity, using LINPACK tests. In the case of Graph500, the measurement is carried out by running a breadth-first search (BFS) algorithm on graphs. BFS is one of the building blocks of many other graph algorithms, such as single-source shortest paths (SSSP) and betweenness centrality, so an improvement to BFS would help improve the algorithms that use it.

Problem analysis: the BFS algorithm used in high-performance computing (HPC) systems is usually a distributed-systems version of the original sequential algorithm. In this distributed version, execution starts by partitioning the graph; each distributed processor then computes its part and distributes its results to the other systems. Because the gap between the processing speed of each of these nodes and the data-transfer speed of the interconnection network is very large (with the interconnection network at a disadvantage), quite a few approaches have been taken to reduce the performance lost in these transfers. Regarding the initial partitioning of the graph, the traditional approach (called 1D graph partitioning) assigns to each node a fixed set of vertices that it will process. To reduce data traffic, another partitioning (2D) was proposed, in which the distribution is based on the edges of the graph rather than on the vertices; this partitioning reduces network traffic from a proportion of O(N×M) to O(log(N)). Although there have been other approaches to reducing the transfers, such as an initial reordering of the vertices to add locality to the nodes, or dynamic partitionings, the approach proposed in this work consists of applying recent compression techniques from large data systems, such as high-volume databases or Internet search engines, to compress the data transferred between nodes.

---ABSTRACT---

The breadth-first search (BFS) algorithm is the foundation and building block of many higher graph-based operations such as spanning trees, shortest paths and betweenness centrality. The importance of this algorithm increases each day because it is a key requirement for many data structures which are becoming popular nowadays; these data structures turn out to be graph structures internally. When the BFS algorithm is parallelized and the data are distributed over several processors, some research shows a performance limitation introduced by the interconnection network [31]. Hence, improvements in the area of communications may benefit the global performance of this key algorithm. In this work an alternative compression mechanism is presented. It differs from existing methods in that it is aware of characteristics of the data which may benefit the compression. Apart from this, we perform another test to see how this algorithm (in a distributed scenario) benefits from traditional instruction-based optimizations. Last, we review the current supercomputing techniques and the related work being done in the area.
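
For reference, the level-synchronous formulation of BFS that the distributed Graph500 implementations parallelize looks as follows on a small adjacency-list graph; partitioning, inter-node communication and the proposed compression are deliberately not shown:

    def bfs_levels(adj, source):
        """Level-synchronous BFS: returns the BFS level of every reachable vertex.

        adj: dict mapping each vertex to a list of neighbours.
        In the distributed Graph500 setting the frontier expansion of each level
        is what gets partitioned across processors; here it runs sequentially.
        """
        level = {source: 0}
        frontier = [source]
        while frontier:
            next_frontier = []
            for u in frontier:
                for v in adj.get(u, []):
                    if v not in level:          # first visit fixes the level
                        level[v] = level[u] + 1
                        next_frontier.append(v)
            frontier = next_frontier
        return level

    if __name__ == "__main__":
        adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2, 4], 4: [3]}
        print(bfs_levels(adj, 0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 3}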

Relevance: 80.00%

Abstract:

Several traditional image segmentation methods, such as the watershed transform from markers and fuzzy connectedness methods (Relative Fuzzy Connectedness, RFC; Iterative Relative Fuzzy Connectedness, IRFC), can be implemented efficiently using the graph-based Image Foresting Transform (IFT). However, the lack of boundary regularization terms in their formulation means that the boundary of the segmented object can be highly irregular. One way around this is to use object shape constraints that favour more regular shapes, such as the recent Geodesic Star Convexity (GSC) constraint. In this work, we present a new shape constraint, called the Geodesic Band Constraint (GBC), which can be incorporated efficiently into a subclass of the Generalized Graph Cut (GGC) framework, which includes IFT-based methods. We present a proof of the optimality of the new algorithm in terms of a global minimum of an energy function subject to the new boundary constraints. The geodesic band constraint helps us regularize the boundary of objects, consequently improving the segmentation of objects with more regular shapes while keeping the low computational cost of the IFT. The GBC can also be used together with a pre-established cost map, based on a shape model, in order to steer the segmentation towards a desired shape, with scale freedom and the remaining deformations controlled by a single parameter. This new constraint can also be combined with the GSC and with boundary polarity constraints at no additional cost. The method is demonstrated on natural, synthetic and medical images, the latter coming from computed tomography and magnetic resonance scans.
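
The IFT underlying these methods is essentially Dijkstra's algorithm run on the pixel grid from labelled seeds; the sketch below uses the common max-arc path cost on a tiny synthetic image and omits the GSC/GBC constraints introduced in this work:

    import heapq
    import numpy as np

    def ift_segment(image, seeds):
        """Minimal Image Foresting Transform on a 2D image.

        image: 2D numpy array of intensities.
        seeds: dict {(row, col): label}.
        Path cost = maximum absolute intensity difference along the path;
        each pixel receives the label of the seed reaching it with minimum cost.
        """
        cost = np.full(image.shape, np.inf)
        label = np.zeros(image.shape, dtype=int)
        heap = []
        for (r, c), lab in seeds.items():
            cost[r, c] = 0.0
            label[r, c] = lab
            heapq.heappush(heap, (0.0, r, c))

        while heap:
            d, r, c = heapq.heappop(heap)
            if d > cost[r, c]:
                continue                      # stale heap entry
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < image.shape[0] and 0 <= nc < image.shape[1]:
                    new_cost = max(d, abs(float(image[nr, nc]) - float(image[r, c])))
                    if new_cost < cost[nr, nc]:
                        cost[nr, nc] = new_cost
                        label[nr, nc] = label[r, c]
                        heapq.heappush(heap, (new_cost, nr, nc))
        return label

    if __name__ == "__main__":
        img = np.array([[10, 10, 10, 80, 80],
                        [10, 10, 10, 80, 80],
                        [10, 10, 10, 80, 80]], dtype=float)
        print(ift_segment(img, {(0, 0): 1, (0, 4): 2}))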

Relevance: 80.00%

Abstract:

In this paper, we propose a novel method for the unsupervised clustering of graphs in the context of the constellation approach to object recognition. The method is an EM central clustering algorithm which builds prototypical graphs on the basis of fast matching with graph transformations. Our experiments, both with random graphs and in realistic situations (visual localization), show that our prototypes improve on the set-median graphs and also on the prototypes derived from our previous incremental method. We also discuss how the method scales with a growing number of images.
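
For context, the set-median graph that the learned prototypes are compared against is simply the member of a graph set with the smallest summed distance to all others; the sketch below uses a crude edge-set distance in place of a proper graph edit distance, and toy graphs:

    def edge_set_distance(g1, g2):
        """Toy graph distance: size of the symmetric difference of edge sets
        (a crude stand-in for graph edit distance)."""
        return len(g1 ^ g2)

    def set_median(graphs):
        """Set-median graph: the member with the smallest summed distance
        to all other members of the set."""
        return min(graphs,
                   key=lambda g: sum(edge_set_distance(g, h) for h in graphs))

    if __name__ == "__main__":
        # Graphs represented as frozensets of undirected edges (hypothetical data).
        graphs = [
            frozenset({(1, 2), (2, 3), (3, 4)}),
            frozenset({(1, 2), (2, 3)}),
            frozenset({(1, 2), (2, 3), (3, 4), (1, 4)}),
        ]
        print("set median:", sorted(set_median(graphs)))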

Relevance: 80.00%

Abstract:

In this work we present preliminary results obtained by applying a new technique for building semantic graphs to the task of word sense disambiguation in a multilingual setting. Using this unsupervised technique, we induce the senses associated with the translations, in the target language, of the ambiguous word under consideration. We use the translations of the words in the context of the ambiguous word in the source language to select the most probable sense of the translation. The system was evaluated on the dataset of a multilingual disambiguation task proposed in the SemEval-2010 competition, outperforming all of the unsupervised systems that participated in that task.
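
A toy version of the sense-selection step, with entirely hypothetical sense inventories and context translations (the unsupervised semantic-graph construction that actually induces the senses is not reproduced here): each candidate translation is scored by how many translations of the source-context words it shares.

    # Candidate translations of an ambiguous source word (e.g. English "bank")
    # and the target-language words associated with each induced sense.
    induced_senses = {
        "banco":  {"dinero", "cuenta", "préstamo"},
        "orilla": {"río", "agua", "pesca"},
    }
    # Translations of the source-context words into the target language.
    context_translations = {"river": "río", "fishing": "pesca", "water": "agua"}

    def best_translation(senses, context):
        # Score each candidate by overlap with the translated context.
        scores = {t: len(words & set(context.values()))
                  for t, words in senses.items()}
        return max(scores, key=scores.get), scores

    translation, scores = best_translation(induced_senses, context_translations)
    print(scores)                 # {'banco': 0, 'orilla': 3}
    print("chosen translation:", translation)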

Relevance: 80.00%

Abstract:

The World Wide Web provides plentiful content for Web-based learning, but its hyperlink-based architecture connects Web resources for free browsing rather than for effective learning. To support effective learning, an e-learning system should be able to discover and make use of the semantic communities and the emerging semantic relations in a dynamic complex network of learning resources. Previous graph-based community discovery approaches are limited in their ability to discover semantic communities. This paper first suggests the Semantic Link Network (SLN), a loosely coupled semantic data model that can semantically link resources and derive implicit semantic links according to a set of relational reasoning rules. By studying the intrinsic relationship between semantic communities and the semantic space of the SLN, approaches to discovering reasoning-constrained, rule-constrained, and classification-constrained semantic communities are proposed. Further, the approaches, principles, and strategies for discovering emerging semantics in dynamic SLNs are studied. The basic laws of semantic link network motion are revealed for the first time. An e-learning environment incorporating the proposed approaches, principles, and strategies to support effective discovery and learning is suggested.
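
A minimal sketch of deriving implicit semantic links by relational reasoning rules, in the spirit of the SLN (the relation names and composition rules here are hypothetical, not the rule set defined in the paper): the link set is closed under the rules until a fixed point is reached.

    # Relational reasoning over semantic links: repeatedly apply composition
    # rules of the form (r1 composed with r2 -> r3) until nothing new appears.
    links = {("A", "partOf", "B"), ("B", "partOf", "C"), ("C", "similarTo", "D")}

    # Hypothetical rule set: r1 composed with r2 yields r3.
    rules = {("partOf", "partOf"): "partOf",
             ("partOf", "similarTo"): "relatedTo"}

    def derive(links, rules):
        closed = set(links)
        changed = True
        while changed:
            changed = False
            for (x, r1, y) in list(closed):
                for (y2, r2, z) in list(closed):
                    if y == y2 and (r1, r2) in rules:
                        new = (x, rules[(r1, r2)], z)
                        if new not in closed:
                            closed.add(new)
                            changed = True
        return closed

    for link in sorted(derive(links, rules) - links):
        print("derived:", link)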