33 results for Database, Image Retrieval, Browsing, Semantic Concept

in Reposit


Relevance:

100.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

100.00%

Publisher:

Abstract:

Relevance feedback approaches have been established as an important tool for interactive search, enabling users to express their needs. However, as the available multimedia collections keep growing, the user effort required by these methods tends to grow as well, demanding approaches that reduce the number of user interactions needed. In this context, this paper proposes a semi-supervised learning algorithm for relevance feedback in image retrieval tasks. The proposed algorithm uses supervised and unsupervised approaches simultaneously: while a supervised step is performed using the information collected from user feedback, an unsupervised step exploits the intrinsic structure of the dataset, represented in terms of ranked lists of images. Several experiments were conducted on different image retrieval tasks involving shape, color, and texture descriptors and different datasets. The proposed approach was also evaluated on multimodal retrieval tasks, considering visual and textual descriptors. Experimental results demonstrate the effectiveness of the proposed approach.
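The interplay the abstract describes between a supervised step (user labels) and an unsupervised step (ranked-list structure) can be pictured with a minimal sketch. Everything below is illustrative rather than the paper's actual algorithm: the distance-based scoring, the bonus weights, and the top-k reinforcement heuristic are assumptions.

```python
import numpy as np

def supervised_step(scores, features, relevant, irrelevant):
    """Supervised step (assumed form): use the labels from one feedback
    round. Images near a labeled-relevant example accumulate little
    penalty and rank higher; images far from a labeled-irrelevant
    example receive a larger discount and also rank higher."""
    for i in relevant:
        scores += np.linalg.norm(features - features[i], axis=1)
    for i in irrelevant:
        scores -= np.linalg.norm(features - features[i], axis=1)
    return scores

def unsupervised_step(scores, features, k=10, bonus=0.5):
    """Unsupervised step (assumed form): exploit the dataset's intrinsic
    structure expressed as ranked lists. Images that appear in the top-k
    ranked lists of the current top-k images get a small score bonus."""
    top = np.argsort(scores)[:k]
    for i in top:
        neighbors = np.argsort(np.linalg.norm(features - features[i], axis=1))[:k]
        scores[neighbors] -= bonus          # lower score = better rank
    return scores

# Toy run: 100 images with 64-d descriptors, image 0 as the query.
rng = np.random.default_rng(0)
features = rng.random((100, 64))
scores = np.linalg.norm(features - features[0], axis=1)   # baseline ranking
scores = supervised_step(scores, features, relevant=[3, 17], irrelevant=[9])
scores = unsupervised_step(scores, features)
ranking = np.argsort(scores)        # refined ranked list for the next round
```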

Relevance:

100.00%

Publisher:

Abstract:

Due to the increased incidence of skin cancer, computational methods based on intelligent approaches have been developed to aid dermatologists in the diagnosis of skin lesions. This paper proposes a method to classify texture in images, since texture is an important feature for the successful identification of skin lesions. To this end, a feature vector is built from the fractal dimension of the images, estimated through the box-counting method (BCM), and used with an SVM to classify the texture of the lesions as irregular or non-irregular. With the proposed solution, we obtained an accuracy of 72.84%. © 2012 AISTI.
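A hedged sketch of the pipeline the abstract outlines: a fractal dimension estimated by the box-counting method, fed as a one-element feature vector to an SVM. The box sizes, the toy data, and the RBF kernel are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.svm import SVC

def box_counting_dimension(binary_img, sizes=(2, 4, 8, 16, 32)):
    """Box-counting method (BCM): count the boxes of side s containing at
    least one foreground pixel, then take the slope of log N(s) versus
    log(1/s) as the fractal dimension estimate."""
    counts = []
    for s in sizes:
        h, w = binary_img.shape
        n = 0
        for y in range(0, h, s):
            for x in range(0, w, s):
                if binary_img[y:y + s, x:x + s].any():
                    n += 1
        counts.append(max(n, 1))                  # guard against log(0)
    slope, _ = np.polyfit(np.log(1.0 / np.asarray(sizes)), np.log(counts), 1)
    return slope

# Toy training run on random masks; real inputs would be segmented lesions.
rng = np.random.default_rng(0)
masks = rng.random((40, 64, 64)) > 0.7
X = [[box_counting_dimension(m)] for m in masks]  # one-feature vectors
y = rng.integers(0, 2, size=40)                   # 0 = non-irregular, 1 = irregular
clf = SVC(kernel="rbf").fit(X, y)
```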

Relevance:

100.00%

Publisher:

Abstract:

In this paper we propose a novel method for shape analysis called HTS (Hough Transform Statistics), which uses statistics computed from the Hough Transform space to characterize the shape of objects in digital images. Experimental results showed that the HTS descriptor is robust and more accurate than some traditional shape description methods. Furthermore, the HTS algorithm has linear complexity, an important requirement for content-based image retrieval from large databases. © 2013 IEEE.
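The idea of deriving a descriptor from statistics of the Hough space can be sketched as follows. The specific statistics chosen here (per-angle mean and standard deviation of the normalized accumulator) are an assumption, not the published HTS formulation.

```python
import numpy as np
from skimage.transform import hough_line

def hts_like_descriptor(edge_img):
    """Summarize a shape by statistics of its Hough-space accumulator:
    map edge pixels into (theta, rho) space, normalize the accumulator,
    and keep simple per-angle statistics as the feature vector."""
    acc, thetas, rhos = hough_line(edge_img)
    acc = acc.astype(float)
    total = acc.sum()
    if total > 0:
        acc /= total                            # scale-robust normalization
    return np.concatenate([acc.mean(axis=0), acc.std(axis=0)])

# Toy shape: an axis-aligned square contour on a binary canvas.
img = np.zeros((64, 64), dtype=bool)
img[16, 16:48] = img[47, 16:48] = True
img[16:48, 16] = img[16:48, 47] = True
descriptor = hts_like_descriptor(img)   # compare descriptors with e.g. L1 distance
```

The Hough voting step visits each edge pixel once per sampled angle, so a descriptor of this form is linear in the number of edge pixels, which is consistent with the complexity claim in the abstract.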

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Huge image collections have become available in recent years. In this scenario, Content-Based Image Retrieval (CBIR) systems have emerged as a promising approach to support image searches. The objective of CBIR systems is to retrieve the images in a collection most similar to a given query image, taking into account visual properties such as texture, color, and shape. In these systems, the effectiveness of the retrieval process depends heavily on the accuracy of the ranking approach. Recently, re-ranking approaches have been proposed to improve the effectiveness of CBIR systems by taking into account the relationships among all images in a given dataset. These approaches typically demand a huge amount of computational power, which hampers their use in practical situations; on the other hand, they can be massively parallelized. In this paper, we propose to speed up the computation of the RL-Sim algorithm, a recently proposed image re-ranking approach, by using the computational power of Graphics Processing Units (GPUs). GPUs are emerging as relatively inexpensive parallel processors available on a wide range of computer systems. We address the performance challenges of image re-ranking by proposing a parallel solution designed to fit the computational model of GPUs. We conducted an experimental evaluation considering different implementations and devices. Experimental results demonstrate that significant performance gains can be obtained: our approach achieves speedups of 7x over a serial implementation for the overall algorithm and up to 36x on its core steps.
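What makes such a re-ranking step GPU-friendly is that every image pair can be scored independently. Below is a hedged sketch of one such core step (top-k ranked-list overlap), written so the same code runs on CPU via NumPy or on GPU by passing CuPy as the array module; it illustrates the general idea, not the published RL-Sim implementation.

```python
import numpy as np
# import cupy as cp   # drop-in replacement for np on a CUDA-capable device

def topk_overlap(ranked_lists, k, xp=np):
    """For every pair of images, count how many of their k nearest
    neighbors coincide. Each pair is independent, so the whole step maps
    naturally onto a GPU: pass xp=cp to run on CuPy arrays instead."""
    topk = xp.asarray(ranked_lists[:, :k])                 # n x k neighbor ids
    eq = topk[:, None, :, None] == topk[None, :, None, :]  # n x n x k x k
    overlap = eq.any(axis=3).sum(axis=2)                   # |top-k(i) ∩ top-k(j)|
    return overlap / k                                     # similarity in [0, 1]

# Toy data: 50 images, each with a precomputed ranked list of all 50 ids.
rng = np.random.default_rng(0)
ranked = np.argsort(rng.random((50, 50)), axis=1)
sim = topk_overlap(ranked, k=10)   # O(n^2 k^2) work; batch pairs for large n
```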

Relevance:

100.00%

Publisher:

Abstract:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)

Relevance:

100.00%

Publisher:

Abstract:

Graduate Program in Computer Science - IBILCE

Relevance:

100.00%

Publisher:

Abstract:

Graduate Program in Linguistic Studies - IBILCE

Relevance:

100.00%

Publisher:

Abstract:

With the widespread proliferation of computers, many human activities entail the use of automatic image analysis. The basic features used for image analysis include color, texture, and shape. In this paper, we propose a new shape description method, called Hough Transform Statistics (HTS), which uses statistics from the Hough space to characterize the shape of objects or regions in digital images. A modified version of this method, called Hough Transform Statistics neighborhood (HTSn), is also presented. Experiments carried out on three popular public image databases showed that the HTS and HTSn descriptors are robust, presenting precision-recall results much better than several other well-known shape description methods. When compared to the Beam Angle Statistics (BAS) method, the shape description method that inspired their development, both HTS and HTSn presented inferior results on the precision-recall criterion, but superior results on the processing time and multiscale separability criteria. The linear complexity of the HTS and HTSn algorithms, in contrast to BAS, makes them more appropriate for shape analysis in high-resolution image retrieval tasks over the very large databases that are common nowadays. © 2014 Elsevier Inc. All rights reserved.
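Since the comparison with BAS rests on the precision-recall criterion, here is a short sketch of how those points are computed for a single query. This is the standard retrieval evaluation measure written out for clarity, not code from the paper.

```python
def precision_recall_points(ranked_ids, relevant_ids):
    """Walk down the ranked list and record a (recall, precision) pair
    each time another relevant image is retrieved."""
    relevant = set(relevant_ids)
    hits = 0
    points = []
    for rank, img_id in enumerate(ranked_ids, start=1):
        if img_id in relevant:
            hits += 1
            points.append((hits / len(relevant), hits / rank))
    return points

# Example: relevant items retrieved at ranks 1, 3, and 4 of 5 results.
print(precision_recall_points([7, 2, 9, 4, 1], {7, 9, 4}))
# [(0.333, 1.0), (0.667, 0.667), (1.0, 0.75)] approximately
```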

Relevance:

50.00%

Publisher:

Abstract:

This paper presents the overall methodology used to encode both the Brazilian Portuguese WordNet (WordNet.Br) standard language-independent conceptual-semantic relations (hyponymy, co-hyponymy, meronymy, cause, and entailment) and the so-called cross-lingual conceptual-semantic relations between different wordnets. Accordingly, after contextualizing the project and outlining the current lexical database structure and statistics, it describes the WordNet.Br editing GUI, which was designed to aid the linguist in building synsets, selecting sample sentences from corpora, writing synset concept glosses, and encoding both the language-independent conceptual-semantic relations and the cross-lingual conceptual-semantic relations between WordNet.Br and Princeton WordNet. © Springer-Verlag Berlin Heidelberg 2006.
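The relation-encoding tasks listed above can be pictured with a minimal synset data structure. The field names, relation keys, and identifiers below are hypothetical stand-ins, not WordNet.Br's actual schema.

```python
from dataclasses import dataclass, field

@dataclass
class Synset:
    """Minimal stand-in for a wordnet synset record (illustrative only)."""
    synset_id: str
    words: list[str]
    gloss: str = ""
    relations: dict[str, list[str]] = field(default_factory=dict)
    cross_lingual: list[str] = field(default_factory=list)  # links to Princeton WordNet

# One language-independent relation plus one cross-lingual link.
cachorro = Synset("br-n-0001", ["cachorro", "cão"], gloss="mamífero doméstico")
cachorro.relations.setdefault("hypernym", []).append("br-n-0002")  # e.g. 'animal'
cachorro.cross_lingual.append("pwn-n-dog-1")   # hypothetical Princeton WordNet id
```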

Relevance:

30.00%

Publisher:

Abstract:

Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)

Relevance:

30.00%

Publisher:

Abstract:

The univocal correspondence between one gene and one polypeptide has been challenged by many examples of ambiguities. A rapidly expanding list of one-to-many or many-to-one correspondences includes: genomic rearrangements, alternative processing of transcripts, overlapping translation frames, RNA editing, alternative translation modes, and polyprotein cleavage. The genomic message requires interpretation through decoding by a sophisticated information retrieval system, which should also carry some kind of information. The full meaning of the whole cell, as a unit, is emphasized. The gene is a combination of (one or more) nucleic acid (DNA or RNA) sequences, defined by the system (the whole cell, interacting with the environment, or the environment alone, in subcellular or pre-cellular systems), that gives origin to a product (RNA or polypeptide).

Relevance:

30.00%

Publisher:

Abstract:

Characteristics of speech, especially figures of speech, are used by specific communities or domains and thus reflect their identities through vocabulary choice. This topic should be an object of study in the context of knowledge representation, since it deals with different contexts of document production. This study explores the dimensions of the concepts of euphemism, dysphemism, and orthophemism, focusing on the latter, with the goal of extracting a concept that can be included in discussions about subject analysis and indexing. Euphemism is used as an alternative to a non-preferred or offensive expression, to avoid potential offense taken by the listener or by other persons; for instance, pass away. Dysphemism, on the other hand, is used by speakers to talk about people and things that frustrate and annoy them; their choice of language indicates disapproval, and the topic is therefore denigrated, humiliated, or degraded; for instance, kick the bucket. While euphemism tries to make something sound better, dysphemism tries to make something sound worse. Orthophemism (Allan and Burridge 2006) is also used as an alternative expression, but it is the preferred, formal, and direct way of representing an object or a situation; for instance, die. This paper suggests that the comprehension and use of such concepts could support the following issues: possible contributions from linguistics and terminology to subject analysis, as demonstrated by Talamo et al. (1992); reduction of the polysemy and ambiguity of terms used to represent certain topics of documents; and construction and evaluation of indexing languages. The concept of orthophemism can also serve to support associative relationships in the context of subject analysis, indexing, and even information retrieval for more specific requests.

Relevance:

30.00%

Publisher:

Abstract:

This paper presents a technique for sharing the data stored in an object-oriented database intended for design environments. The technique shares data between two related databases, called the Original and Product databases, and is composed of three processes: data separation, evolution, and integration. Whenever a block of data needs to be shared, it is spread into both databases, resulting in one block in the Original database and another in the Product database, with special links between them controlled by the Object Manager. These blocks need not be kept identical during the evolution phase of the sharing process. Six types of links were defined, and by choosing one, the designer controls the evolution and reintegration of the block in both databases. The process uses the composite object concept as its unit of control. The presented concepts can be applied to any data model with support for composite objects.
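A minimal sketch of the data-separation process and the typed links the abstract describes. The abstract does not name the six link types, so the enum members below are placeholders, and the dict-based "databases" are illustrative stand-ins.

```python
from enum import Enum, auto

class LinkType(Enum):
    """Six link types control how a shared block evolves and reintegrates;
    their actual names are not given in the abstract, so these are placeholders."""
    LINK_1 = auto(); LINK_2 = auto(); LINK_3 = auto()
    LINK_4 = auto(); LINK_5 = auto(); LINK_6 = auto()

class ObjectManager:
    """Sketch of the separation step: a composite-object block is copied
    from the Original database into the Product database, and the special
    link between the two copies is recorded so the chosen link type can
    later govern evolution and reintegration."""
    def __init__(self):
        self.links = []                                     # (block_id, LinkType)

    def separate(self, original_db, product_db, block_id, link_type):
        product_db[block_id] = dict(original_db[block_id])  # copies may diverge
        self.links.append((block_id, link_type))

# Usage with plain dicts standing in for the two databases.
original, product = {"b1": {"parts": ["gear", "axle"]}}, {}
mgr = ObjectManager()
mgr.separate(original, product, "b1", LinkType.LINK_1)
```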