984 results for Image Collections


Relevance:

30.00%

Publisher:

Abstract:

L-R: Fr. Francis X. Canfield, Monsignor Henry Donnelly, Albert Hyma, F. Clever Bald

Relevance:

30.00%

Publisher:

Abstract:

L-R: F. Clever Bald, Fr. Francis X. Canfield, Monsignor Henry Donnelly, Albert Hyma

Relevance:

30.00%

Publisher:

Abstract:

L-R: Fr. Francis X. Canfield, Monsignor Henry Donnelly, Albert Hyma, F. Clever Bald

Relevance:

30.00%

Publisher:

Abstract:

Photo annotation is a resource-intensive task, yet it is increasingly essential as image archives and personal photo collections grow in size. There is an inherent conflict in the process of describing and archiving personal experiences, because casual users are generally unwilling to expend large amounts of effort on creating the annotations required to organise their collections so that they can make best use of them. This paper describes the Photocopain system, a semi-automatic image annotation system which combines information about the context in which a photograph was captured with information from other readily available sources in order to generate outline annotations for that photograph that the user may further extend or amend.
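The kind of context-driven outline annotation described above can be sketched roughly as follows. This is a hedged illustration only: the field names and rules are invented for the example and are not Photocopain's actual pipeline or a real EXIF schema.

```python
# Sketch: derive draft annotations from capture context (e.g. the
# timestamp), which the user can then extend or amend. The "exif"
# dict keys here are illustrative, not a real EXIF schema.
from datetime import datetime

def outline_annotations(exif):
    """Suggest draft tags from hypothetical capture-context fields."""
    tags = []
    taken = datetime.fromisoformat(exif["timestamp"])
    tags.append("weekend" if taken.weekday() >= 5 else "weekday")
    if 6 <= taken.hour < 12:
        tags.append("morning")
    elif 12 <= taken.hour < 18:
        tags.append("afternoon")
    else:
        tags.append("evening")
    if exif.get("flash"):           # flash fired: likely indoors or night
        tags.append("indoor-or-night")
    return tags

print(outline_annotations({"timestamp": "2006-07-15T20:30:00", "flash": True}))
```

In a system like the one described, such context-derived tags would only be a starting point for the user to amend, not a final annotation.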

Relevance:

30.00%

Publisher:

Abstract:

Visual information is becoming increasingly important and tools to manage repositories of media collections are highly sought after. In this paper, we focus on image databases and on how to effectively and efficiently access these. In particular, we present effective image browsing systems that are operated on a large multi-touch environment for truly interactive exploration. Not only do image browsers pose a useful alternative to retrieval-based systems, they also provide a visualisation of the whole image collection and let users explore particular parts of the collection. Our systems are based on the idea that visually similar images are located close to each other in the visualisation, that image thumbnails are arranged on a regular lattice (either a regular grid projected on a sphere or a hexagonal lattice), and that large image datasets can be accessed through a hierarchical tree structure. © 2014 International Information Institute.
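The grid-based arrangement this abstract describes can be illustrated with a minimal sketch. The serpentine sort below is an illustrative stand-in, an assumption of ours rather than the authors' projection method: images are ordered by a one-dimensional visual feature and filled into a regular grid so that neighbouring cells tend to hold visually similar images.

```python
# Sketch: place images on a regular grid so visually similar images
# land near each other, by sorting a 1-D feature (e.g. mean hue) and
# filling the grid in a serpentine (boustrophedon) order.
def grid_layout(features, cols):
    """Map each image index to a (row, col) cell; similar feature
    values end up in neighbouring cells along the serpentine path."""
    order = sorted(range(len(features)), key=lambda i: features[i])
    layout = {}
    for pos, img in enumerate(order):
        row, col = divmod(pos, cols)
        if row % 2 == 1:            # reverse odd rows (serpentine)
            col = cols - 1 - col
        layout[img] = (row, col)
    return layout

feats = [0.9, 0.1, 0.5, 0.2, 0.8, 0.4]   # hypothetical mean-hue values
print(grid_layout(feats, cols=3))
```

A real browser of this kind would use richer similarity features and, as the abstract notes, a hierarchical tree structure over the lattice for large collections.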

Relevance:

30.00%

Publisher:

Abstract:

The main contribution of this project is the study of collections of valuable documents related to the image of India in Bulgaria. Digital repositories of selected samples are constructed using modern information technologies. The results are presented in a virtual exhibition ‘The Image of India in Bulgaria: from the late 19th to the late 20th century’.

Relevance:

30.00%

Publisher:

Abstract:

With the rise of smart phones, lifelogging devices (e.g. Google Glass) and popularity of image sharing websites (e.g. Flickr), users are capturing and sharing every aspect of their life online producing a wealth of visual content. Of these uploaded images, the majority are poorly annotated or exist in complete semantic isolation making the process of building retrieval systems difficult as one must firstly understand the meaning of an image in order to retrieve it. To alleviate this problem, many image sharing websites offer manual annotation tools which allow the user to “tag” their photos, however, these techniques are laborious and as a result have been poorly adopted; Sigurbjörnsson and van Zwol (2008) showed that 64% of images uploaded to Flickr are annotated with < 4 tags. Due to this, an entire body of research has focused on the automatic annotation of images (Hanbury, 2008; Smeulders et al., 2000; Zhang et al., 2012a) where one attempts to bridge the semantic gap between an image’s appearance and meaning e.g. the objects present. Despite two decades of research the semantic gap still largely exists and as a result automatic annotation models often offer unsatisfactory performance for industrial implementation. Further, these techniques can only annotate what they see, thus ignoring the “bigger picture” surrounding an image (e.g. its location, the event, the people present etc). Much work has therefore focused on building photo tag recommendation (PTR) methods which aid the user in the annotation process by suggesting tags related to those already present. These works have mainly focused on computing relationships between tags based on historical images e.g. that NY and timessquare co-exist in many images and are therefore highly correlated. However, tags are inherently noisy, sparse and ill-defined often resulting in poor PTR accuracy e.g. does NY refer to New York or New Year? 
This thesis proposes the exploitation of an image’s context which, unlike textual evidence, is always present, in order to alleviate this ambiguity in the tag recommendation process. Specifically, we exploit the “what, who, where, when and how” of the image capture process in order to complement textual evidence in various photo tag recommendation and retrieval scenarios. In part II, we combine text, content-based (e.g. # of faces present) and contextual (e.g. day-of-the-week taken) signals for tag recommendation purposes, achieving up to a 75% improvement in precision@5 over a text-only TF-IDF baseline. We then consider external knowledge sources (i.e. Wikipedia & Twitter) as an alternative to the (slower-moving) Flickr on which to build recommendation models, showing that similar accuracy can be achieved on these faster-moving, yet entirely textual, datasets. In part II, we also highlight the merits of diversifying tag recommendation lists before discussing at length various problems with existing automatic image annotation and photo tag recommendation evaluation collections. In part III, we propose three new image retrieval scenarios, namely “visual event summarisation”, “image popularity prediction” and “lifelog summarisation”. In the first scenario, we attempt to produce a ranking of relevant and diverse images for various news events by (i) removing irrelevant images such as memes and visual duplicates, then (ii) semantically clustering images based on the tweets in which they were originally posted. Using this approach, we were able to achieve over 50% precision for images in the top 5 ranks. In the second retrieval scenario, we show that by combining contextual and content-based features from images, we are able to predict whether an image will become “popular” (or not) with 74% accuracy, using an SVM classifier.
Finally, in chapter 9 we employ blur detection and perceptual-hash clustering to remove noisy images from lifelogs, before combining visual and geo-temporal signals to capture a user’s “key moments” within their day. We believe that the results of this thesis represent an important step towards building effective image retrieval models when sufficient textual content is lacking (i.e. a cold start).
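The tag co-occurrence idea behind photo tag recommendation, as described in this abstract, can be sketched as follows. This is a simplified illustration of the general PTR approach, not the thesis's actual model, and the tag data is invented:

```python
# Sketch: recommend tags by co-occurrence with tags already present.
# Tags that frequently appear alongside the user's existing tags in
# historical photos are suggested first.
from collections import Counter
from itertools import combinations

def build_cooccurrence(tagged_photos):
    """Count ordered tag pairs that co-occur in the same photo."""
    co = Counter()
    for tags in tagged_photos:
        for a, b in combinations(sorted(set(tags)), 2):
            co[(a, b)] += 1
            co[(b, a)] += 1
    return co

def recommend(co, seed_tags, k=3):
    """Score candidate tags by co-occurrence with the seed tags."""
    scores = Counter()
    for (a, b), n in co.items():
        if a in seed_tags and b not in seed_tags:
            scores[b] += n
    return [tag for tag, _ in scores.most_common(k)]

photos = [["ny", "timessquare", "night"],
          ["ny", "timessquare"],
          ["ny", "skyline"]]
print(recommend(build_cooccurrence(photos), {"ny"}))
```

Note that this sketch exhibits exactly the ambiguity the thesis targets: nothing in the counts distinguishes "NY" as New York from "NY" as New Year, which is why contextual signals are brought in to complement the textual evidence.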

Relevance:

30.00%

Publisher:

Abstract:

Partial ms. (ink, pencil, and typescript) on col. base map.

Relevance:

20.00%

Publisher:

Abstract:

The problem of determining the script and language of a document image has a number of important applications in the field of document analysis, such as indexing and sorting of large collections of such images, or as a precursor to optical character recognition (OCR). In this paper, we investigate the use of texture as a tool for determining the script of a document image, based on the observation that text has a distinct visual texture. An experimental evaluation of a number of commonly used texture features is conducted on a newly created script database, providing a qualitative measure of which features are most appropriate for this task. Strategies for improving classification results in situations with limited training data and multiple font types are also proposed.
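The observation that text has a distinct visual texture can be illustrated with a toy statistic. The feature below is our own illustrative assumption, not one of the texture features the paper evaluates: mean horizontal ink-run length, which would tend to separate scripts with long connected strokes from scripts composed of isolated marks.

```python
# Toy texture statistic on a binarised document image (1 = ink):
# the mean length of horizontal runs of ink pixels.
def mean_run_length(image):
    """Mean length of horizontal runs of 1s in a binary image
    (list of rows of 0/1 values); returns 0.0 for blank images."""
    runs = []
    for row in image:
        length = 0
        for px in row:
            if px:
                length += 1
            elif length:
                runs.append(length)
                length = 0
        if length:                  # run reaching the row's end
            runs.append(length)
    return sum(runs) / len(runs) if runs else 0.0

connected = [[1, 1, 1, 1, 0, 1, 1, 1]]   # long connected strokes
isolated  = [[1, 0, 1, 0, 1, 0, 1, 0]]   # separate small marks
print(mean_run_length(connected), mean_run_length(isolated))
```

Real script-identification features (e.g. Gabor filter responses or grey-level co-occurrence statistics) capture the same intuition with far more discriminative power, which is what the paper's experimental comparison is about.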