848 results for Content Based Image Retrieval (CBIR)
Abstract:
Dissertation presented to obtain the degree of Master in Informatics Engineering (Engenharia Informática) at Universidade Nova de Lisboa, Faculdade de Ciências e Tecnologia
Abstract:
In this paper, we present ICICLE (Image ChainNet and Incremental Clustering Engine), a prototype system that we have developed to efficiently and effectively retrieve WWW images based on image semantics. ICICLE has two distinguishing features. First, it employs a novel image representation model called Weight ChainNet to capture the semantics of the image content. A new formula, called list space model, for computing semantic similarities is also introduced. Second, to speed up retrieval, ICICLE employs an incremental clustering mechanism, ICC (Incremental Clustering on ChainNet), to cluster images with similar semantics into the same partition. Each cluster has a summary representative and all clusters' representatives are further summarized into a balanced and full binary tree structure. We conducted an extensive performance study to evaluate ICICLE. Compared with some recently proposed methods, our results show that ICICLE provides better recall and precision. Our clustering technique ICC facilitates speedy retrieval of images without sacrificing recall and precision significantly.
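The abstract does not detail how ICC operates on ChainNet structures, so the following is only a minimal sketch, under the assumption of plain feature vectors and cosine similarity, of the general idea of incremental clustering with per-cluster summary representatives; the function name and threshold are illustrative, not the paper's algorithm.

```python
import numpy as np

def incremental_cluster(vectors, threshold=0.8):
    """Assign each semantic feature vector to the nearest cluster
    representative, or open a new cluster if none is similar enough.
    A rough sketch of representative-based incremental clustering;
    the real ICC algorithm operates on ChainNet structures."""
    reps, members = [], []          # cluster representatives and their members
    for v in vectors:
        v = v / np.linalg.norm(v)   # cosine similarity via normalised dot product
        if reps:
            sims = [float(r @ v) for r in reps]
            best = int(np.argmax(sims))
            if sims[best] >= threshold:
                members[best].append(v)
                # update the summary representative as the re-normalised mean
                m = np.mean(members[best], axis=0)
                reps[best] = m / np.linalg.norm(m)
                continue
        reps.append(v)               # start a new cluster for this vector
        members.append([v])
    return reps, members
```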
Abstract:
Continuing advances in digital image capture and storage are resulting in a proliferation of imagery and associated problems of information overload in image domains. In this work we present a framework that supports image management using an interactive approach that captures and reuses task-based contextual information. Our framework models the relationship between images and the domain tasks they support by monitoring the interactive manipulation and annotation of task-relevant imagery. During image analysis, interactions are captured and a task context is dynamically constructed so that human expertise, proficiency and knowledge can be leveraged to support other users in carrying out similar domain tasks using case-based reasoning techniques. In this article we present our framework for capturing task context and describe how we have implemented the framework as two image retrieval applications in the geo-spatial and medical domains. We present an evaluation that tests the efficiency of our algorithms for retrieving image context information and the effectiveness of the framework for carrying out goal-directed image tasks.
Abstract:
Introduction: Internet users increasingly turn to the worldwide web to search for information relating to their health, which makes it necessary to create specialized tools capable of supporting these searches. Objective: To apply and compare strategies developed to investigate the use of the Portuguese version of Medical Subject Headings (MeSH) for constructing an automated classifier that decides whether Brazilian Portuguese-language web content aimed at the lay public falls within or outside the field of healthcare. Methods: 3658 Brazilian web pages were used to train the classifier and 606 to validate it. The proposed strategies were built on content-based vector methods for text classification, with Naive Bayes used to classify the feature vectors obtained through each strategy. Results: A strategy named InDeCS was developed specifically to adapt MeSH to the problem at hand; it achieved better accuracy than the other strategies for this pattern classification task (0.94 for sensitivity, specificity and area under the ROC curve). Conclusions: Owing to these results, InDeCS has been successfully applied to the Brazilian healthcare search portal Busca Saúde. The study also showed that MeSH performs well for classifying web-based content aimed at the lay public and that it was able to map mutable, non-deterministic characteristics of the web.
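As a rough illustration of the classification setup named in the abstract (a Naive Bayes classifier over content-based text vectors), here is a minimal scikit-learn sketch; the training pages below are toy stand-ins and the MeSH-derived InDeCS features are not reproduced.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Toy stand-ins for the labelled Brazilian web pages (1 = health, 0 = other).
pages = ["sintomas de gripe e tratamento", "receita de bolo de chocolate"]
labels = [1, 0]

# Bag-of-words vectors + multinomial Naive Bayes, the classifier family
# named in the abstract; the MeSH-based feature selection is not modelled here.
model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(pages, labels)
print(model.predict(["dicas de saúde e medicamentos"]))
```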
Abstract:
In this paper, we introduce a novel high-level visual content descriptor which is devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called “semantic gap”. The proposed image feature vector model is fundamentally underpinned by the image labelling framework, called Collaterally Confirmed Labelling (CCL), which incorporates the collateral knowledge extracted from the collateral texts of the images with state-of-the-art low-level image processing and visual feature extraction techniques for automatically assigning linguistic keywords to image regions. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date already indicate that our proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
Abstract:
In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an automatic image labelling framework, called Collaterally Cued Labelling (CCL), which incorporates the collateral knowledge extracted from the collateral texts accompanying the images with state-of-the-art low-level visual feature extraction techniques for automatically assigning textual keywords to image regions. A subset of the Corel image collection was used for evaluating the proposed method. The experimental results indicate that our semantic-level visual content descriptors outperform both conventional visual and textual image feature models.
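The two abstracts above describe assigning keywords taken from an image's collateral text to its regions by matching low-level visual features. The exact CCL matching procedure is not given, so the sketch below only illustrates the idea with a hypothetical nearest-prototype assignment restricted to the keywords found in the collateral text; the prototypes and feature vectors are toy values.

```python
import numpy as np

def label_regions(region_features, keyword_prototypes, collateral_keywords):
    """Assign to each segmented region the keyword whose visual prototype
    is closest, restricted to keywords present in the image's collateral text.
    An illustrative sketch only; CCL's actual matching is more involved."""
    candidates = {k: v for k, v in keyword_prototypes.items()
                  if k in collateral_keywords}
    labels = []
    for f in region_features:
        best = min(candidates, key=lambda k: np.linalg.norm(f - candidates[k]))
        labels.append(best)
    return labels

# Hypothetical visual prototypes learned offline for a small vocabulary.
prototypes = {"sky": np.array([0.1, 0.8]), "grass": np.array([0.7, 0.3])}
print(label_regions([np.array([0.15, 0.75])], prototypes, {"sky", "grass"}))
```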
Abstract:
In this paper, we propose a content selection framework that improves users' experience when they are enriching or authoring pieces of news. This framework combines a variety of techniques to retrieve semantically related videos, based on a set of criteria that are specified automatically depending on the media's constraints. The combination of different content selection mechanisms can improve the quality of the retrieved scenes, because each technique's limitations are offset by other techniques' strengths. We present an evaluation based on a number of experiments, which shows that the retrieved results are better when all the criteria are used together.
Abstract:
This paper reviews the most widely used MPEG-7 descriptors and outlines some considerations for choosing the most appropriate descriptor for a particular image or video data set.
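As an illustration of what such a descriptor looks like in practice, the following is a simplified colour histogram in the spirit of MPEG-7's Scalable Color Descriptor, computed with OpenCV; it is not the standardised descriptor, and the bin layout is an assumption made for the sketch.

```python
import cv2

def hsv_histogram_descriptor(image_path, bins=(16, 4, 4)):
    """Simplified colour descriptor in the spirit of MPEG-7 Scalable Color:
    a quantised HSV histogram, L1-normalised. Only an illustration of what
    such a feature vector looks like, not the standardised descriptor."""
    img = cv2.imread(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv], [0, 1, 2], None, list(bins),
                        [0, 180, 0, 256, 0, 256])
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-9)
```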
Abstract:
With the rise of smart phones, lifelogging devices (e.g. Google Glass) and the popularity of image-sharing websites (e.g. Flickr), users are capturing and sharing every aspect of their life online, producing a wealth of visual content. Of these uploaded images, the majority are poorly annotated or exist in complete semantic isolation, which makes building retrieval systems difficult, as one must first understand the meaning of an image in order to retrieve it. To alleviate this problem, many image-sharing websites offer manual annotation tools that allow the user to “tag” their photos; however, these techniques are laborious and have consequently been poorly adopted: Sigurbjörnsson and van Zwol (2008) showed that 64% of images uploaded to Flickr are annotated with fewer than 4 tags. An entire body of research has therefore focused on the automatic annotation of images (Hanbury, 2008; Smeulders et al., 2000; Zhang et al., 2012a), where one attempts to bridge the semantic gap between an image’s appearance and its meaning, e.g. the objects present. Despite two decades of research the semantic gap still largely persists, and automatic annotation models consequently often perform too poorly for industrial deployment. Furthermore, these techniques can only annotate what they see, ignoring the “bigger picture” surrounding an image (e.g. its location, the event, the people present, etc.). Much work has therefore focused on building photo tag recommendation (PTR) methods, which aid the user in the annotation process by suggesting tags related to those already present. These works have mainly focused on computing relationships between tags from historical images, e.g. that NY and timessquare co-occur in many images and are therefore highly correlated. However, tags are inherently noisy, sparse and ill-defined, often resulting in poor PTR accuracy (e.g. does NY refer to New York or New Year?). This thesis proposes exploiting an image’s context, which, unlike textual evidence, is always present, to alleviate this ambiguity in the tag recommendation process. Specifically, we exploit the “what, who, where, when and how” of the image capture process to complement textual evidence in various photo tag recommendation and retrieval scenarios.
In Part II, we combine textual, content-based (e.g. the number of faces present) and contextual (e.g. the day of the week the photo was taken) signals for tag recommendation, achieving up to a 75% improvement in precision@5 over a text-only TF-IDF baseline. We then consider external knowledge sources (i.e. Wikipedia and Twitter) as alternatives to the slower-moving Flickr data on which to build recommendation models, showing that similar accuracy can be achieved on these faster-moving, yet entirely textual, datasets. In Part II we also highlight the merits of diversifying tag recommendation lists, before discussing at length various problems with existing automatic image annotation and photo tag recommendation evaluation collections.
In Part III, we propose three new image retrieval scenarios, namely “visual event summarisation”, “image popularity prediction” and “lifelog summarisation”. In the first scenario, we attempt to produce a ranking of relevant and diverse images for various news events by (i) removing irrelevant images such as memes and visual duplicates and then (ii) semantically clustering the remaining images based on the tweets in which they were originally posted. Using this approach, we achieve over 50% precision for images in the top 5 ranks. In the second scenario, we show that by combining contextual and content-based image features we can predict whether an image will become “popular” with 74% accuracy, using an SVM classifier. Finally, in chapter 9 we employ blur detection and perceptual-hash clustering to remove noisy images from lifelogs, before combining visual and geo-temporal signals to capture a user’s “key moments” within their day. We believe that the results of this thesis mark an important step towards building effective image retrieval models when sufficient textual content is lacking (i.e. a cold start).
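The co-occurrence-based tag recommendation baseline mentioned above can be illustrated with a minimal sketch. The tags and history below are hypothetical toy data, and the thesis's contextual and content-based signals are not modelled; this only shows the idea of suggesting tags that historically co-occur with those already present.

```python
from collections import Counter
from itertools import combinations

def build_cooccurrence(tagged_images):
    """Count how often pairs of tags appear on the same historical image."""
    co = Counter()
    for tags in tagged_images:
        for a, b in combinations(sorted(set(tags)), 2):
            co[(a, b)] += 1
            co[(b, a)] += 1
    return co

def recommend(seed_tags, co, k=5):
    """Score candidate tags by their co-occurrence with the seed tags."""
    scores = Counter()
    for seed in seed_tags:
        for (a, b), n in co.items():
            if a == seed and b not in seed_tags:
                scores[b] += n
    return [t for t, _ in scores.most_common(k)]

history = [["ny", "timessquare", "night"], ["ny", "timessquare"], ["ny", "newyear"]]
co = build_cooccurrence(history)
print(recommend(["ny"], co))   # e.g. ['timessquare', 'night', 'newyear']
```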
Abstract:
Relevant past events can be remembered when viewing related pictures. The main difficulty is finding these photos in a large personal collection. Query definition and image annotation are key issues in overcoming this problem: the former because of the diversity of the clues our memory provides when recalling a past moment, and the latter because images need to be annotated with information about those clues in order to be retrieved. Consequently, tools for recovering past memories should handle these two tasks carefully. This paper describes a user interface designed to explore pictures from personal memories. Users can query the media collection in several ways, and for this reason an iconic visual language for defining queries is proposed. Automatic and semi-automatic annotation is also performed using the image content and the audio information obtained when users show their images to others. The paper also presents an evaluation of the user interface based on tests with 58 participants.
Abstract:
Dissertation submitted in partial fulfilment of the requirements for the Degree of Master of Science in Geospatial Technologies
Abstract:
High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the phenotype of a cell in a way that prevents tumor growth and metastasis. The high-resolution biofluorescence images from these assays allow precise quantitative measurements, enabling small molecules of a host cell to be distinguished from a tumor. In this work, we are particularly interested in applying deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds into chemical mechanisms of action (MOAs). Compound classification has previously been performed using image-based profiling methods, sometimes combined with feature reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell to a particular MOA class without using any treatment-level profiles or feature reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain that leverages single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the computationally demanding effort of searching a DNN's huge parameter space. Results show that with this approach we obtain a 30% speedup and a 2% accuracy improvement.
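The abstract does not describe the network architecture, so the following PyTorch sketch only illustrates one common form of deep transfer learning: freezing an ImageNet-pretrained backbone and training a new classification head for a hypothetical number of MOA classes (assumes a recent torchvision; not necessarily the authors' setup).

```python
import torch
import torch.nn as nn
from torchvision import models

NUM_MOA_CLASSES = 12   # hypothetical number of mechanism-of-action classes

# Start from an ImageNet-pretrained backbone and fine-tune only a new head,
# one common form of deep transfer learning.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for p in backbone.parameters():
    p.requires_grad = False                     # freeze pretrained weights
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_MOA_CLASSES)

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimisation step on a mini-batch of single-cell image crops."""
    optimizer.zero_grad()
    loss = criterion(backbone(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```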
Abstract:
Shape complexity has recently received attention from different fields, such as computer vision and psychology. In this paper, integral geometry and information theory tools are applied to quantify shape complexity from two different perspectives: from the inside of the object, we evaluate its degree of structure or correlation between its surfaces (inner complexity), and from the outside, we compute its degree of interaction with the circumscribing sphere (outer complexity). Our shape complexity measures are based on the following two facts: uniformly distributed global lines crossing an object define a continuous information channel, and the continuous mutual information of this channel is independent of the object discretisation and invariant to translations, rotations, and changes of scale. The measures introduced in this paper can potentially be used as shape descriptors for object recognition, image retrieval, object localisation, tumour analysis, and protein docking, among other applications.
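For reference, the continuous mutual information on which these measures build is the standard quantity below; the specific line-based channel construction is defined in the paper itself.

```latex
I(X;Y) \;=\; \int_{\mathcal{X}} \int_{\mathcal{Y}} p(x,y)\,\log \frac{p(x,y)}{p(x)\,p(y)} \; dy \, dx
```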
Abstract:
Choosing what to eat is a complex activity for humans. Determining a food's pleasantness requires us to combine information about what is available at a given time with knowledge of the food's palatability, texture, fat content, and other nutritional information. It has been suggested that humans may have an implicit knowledge of a food's fat content based on its appearance; Toepel et al. (Neuroimage 44:967-974, 2009) reported visual-evoked potential modulations after participants viewed images of high-energy, high-fat food (HF), as compared to viewing low-fat food (LF). In the present study, we investigated whether there are any immediate behavioural consequences of these modulations for human performance. HF, LF, or non-food (NF) images were used to exogenously direct participants' attention to either the left or the right. Next, participants made speeded elevation discrimination responses (up vs. down) to visual targets presented either above or below the midline (and at one of three stimulus onset asynchronies: 150, 300, or 450 ms). Participants responded significantly more rapidly following the presentation of a HF image than following the presentation of either LF or NF images, despite the fact that the identity of the images was entirely task-irrelevant. Similar results were found when comparing response speeds following images of high-carbohydrate (HC) food items to low-carbohydrate (LC) food items. These results support the view that people rapidly process (i.e. within a few hundred milliseconds) the fat/carbohydrate/energy value or, perhaps more generally, the pleasantness of food. Potentially as a result of HF/HC food items being more pleasant and thus having a higher incentive value, it seems as though seeing these foods results in a response readiness, or an overall alerting effect, in the human brain.
Abstract:
Usual image fusion methods inject features from a high spatial resolution panchromatic sensor into every low spatial resolution multispectral band, trying to preserve spectral signatures while improving the spatial resolution to that of the panchromatic sensor. The objective is to obtain the image that would be observed by a sensor with the same spectral response (i.e., spectral sensitivity and quantum efficiency) as the multispectral sensors and the spatial resolution of the panchromatic sensor. However, in these methods, features from regions of the electromagnetic spectrum not covered by the multispectral sensors are injected into them, and the physical spectral responses of the sensors are not considered during the process. This produces undesirable effects, such as over-injection of spatial detail and slightly modified spectral signatures for some features. The authors present a technique that takes the sensors' physical spectral responses into account during the fusion process, producing images closer to the image obtained by the ideal sensor than those obtained by usual wavelet-based image fusion methods. This technique is used to define a new wavelet-based image fusion method.
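For contrast with the proposed approach, the classic substitutive wavelet fusion that the article improves on can be sketched with PyWavelets as follows; the band names, wavelet and decomposition level are illustrative, and the spectral-response-aware weighting of the proposed method is not reproduced.

```python
import pywt

def wavelet_fuse(ms_band, pan, wavelet="db2", levels=2):
    """Classic substitutive wavelet fusion: keep the multispectral band's
    approximation coefficients and replace its detail coefficients with
    those of the co-registered, same-size panchromatic image.
    This is the baseline the article improves on, not the proposed method."""
    ms_coeffs = pywt.wavedec2(ms_band, wavelet, level=levels)
    pan_coeffs = pywt.wavedec2(pan, wavelet, level=levels)
    fused = [ms_coeffs[0]] + pan_coeffs[1:]   # approximation from MS, details from PAN
    return pywt.waverec2(fused, wavelet)
```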