37 results for Content-Based Image Retrieval (CBIR)

in CentAUR: Central Archive, University of Reading, UK


Relevance: 100.00%

Abstract:

A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming at localising the visual semantics to regions of interest in images with textual keywords. Both the primary image and collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution and Euclidean distance, together with a collateral content- and context-driven inference mechanism. We introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models. (C) 2007 Elsevier B.V. All rights reserved.
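The core mapping step — assigning a visual-vocabulary concept to a region by distance to a prototype, biased by co-occurrence with keywords from the collateral text — can be sketched as follows. This is a minimal illustration, not the paper's method: the vocabulary, feature vectors, co-occurrence counts and the `bias` weight are all invented.

```python
import math

# Hypothetical visual vocabulary: concept -> prototype feature vector.
VOCAB = {
    "sky":   [0.1, 0.8],
    "grass": [0.7, 0.3],
    "water": [0.2, 0.6],
}

# Collateral context: co-occurrence counts of visual keywords, e.g.
# gathered from keywords appearing together in collateral texts.
COOC = {
    ("sky", "grass"): 5, ("grass", "sky"): 5,
    ("sky", "water"): 8, ("water", "sky"): 8,
    ("grass", "water"): 1, ("water", "grass"): 1,
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def label_region(feature, context_keywords, bias=0.1):
    """Score each concept by (negative) Euclidean distance to its
    prototype, boosted by co-occurrence with already-known keywords."""
    best, best_score = None, float("-inf")
    for concept, proto in VOCAB.items():
        score = -euclidean(feature, proto)
        for kw in context_keywords:
            score += bias * COOC.get((concept, kw), 0)
        if score > best_score:
            best, best_score = concept, score
    return best

# A region whose features sit between "sky" and "water": the collateral
# context ("sky" was found in the accompanying text) tips the decision.
print(label_region([0.15, 0.7], context_keywords=["sky"]))  # -> water
```

The point of the sketch is the bias term: without context the two nearest prototypes are equidistant, and the co-occurrence counts break the tie, which mirrors the abstract's idea of context-driven inference.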

Relevance: 100.00%

Abstract:

Techniques to retrieve reliable images from complicated objects are described, overcoming problems introduced by uneven surfaces, giving enhanced depth resolution and improving image contrast. The techniques are illustrated with application to THz imaging of concealed wall paintings.

Relevance: 100.00%

Abstract:

In this paper, we introduce a novel high-level visual content descriptor which is devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called “semantic gap”. The proposed image feature vector model is fundamentally underpinned by the image labelling framework, called Collaterally Confirmed Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts of the images with state-of-the-art low-level image processing and visual feature extraction techniques for automatically assigning linguistic keywords to image regions. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date indicate that our proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.

Relevance: 100.00%

Abstract:

In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt at bridging the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an automatic image labelling framework, called Collaterally Cued Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts accompanying the images with state-of-the-art low-level visual feature extraction techniques for automatically assigning textual keywords to image regions. A subset of the Corel image collection was used for evaluating the proposed method. The experimental results indicate that our semantic-level visual content descriptors outperform both conventional visual and textual image feature models.

Relevance: 100.00%

Abstract:

A novel framework for multimodal semantic-associative collateral image labelling, aiming at associating image regions with textual keywords, is described. Both the primary image and collateral textual modalities are exploited in a cooperative and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution and Euclidean distance, together with a collateral content- and context-driven inference mechanism. Finally, we use Self-Organising Maps to examine the classification and retrieval effectiveness of the proposed high-level image feature vector model, which is constructed from the image labelling results.

Relevance: 100.00%

Abstract:

In many data mining applications, automated retrieval of textual and image information is needed; this becomes essential with the growth of the Internet and digital libraries. Our approach is based on latent semantic indexing (LSI) and the corresponding term-by-document matrix suggested by Berry and his co-authors. Instead of using deterministic methods to find the required number of first "k" singular triplets, we propose a stochastic approach. First, we use a Monte Carlo method to sample and build a much smaller term-by-document matrix (e.g. a k x k matrix), from which we then find the first "k" triplets using standard deterministic methods. Second, we investigate how the problem can be reduced to finding the "k" largest eigenvalues using parallel Monte Carlo methods. We apply these methods to the initial matrix and also to the reduced one. The algorithms run on a cluster of workstations under MPI; we present results of experiments in textual retrieval of Web documents, together with a comparison of the proposed stochastic methods. (C) 2003 IMACS. Published by Elsevier Science B.V. All rights reserved.
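The two-step idea can be read as: (1) sample columns to build a much smaller term-by-document matrix, (2) run a standard solver on it. A minimal pure-Python sketch follows; power iteration for the leading singular triplet stands in for the "standard deterministic method", and the paper's parallel MPI machinery is omitted entirely.

```python
import math
import random

def sample_columns(A, k, rng):
    """Monte Carlo step: sample k document columns uniformly at random
    to build a much smaller matrix, on which the (expensive) singular
    triplet computation is then run."""
    idx = rng.sample(range(len(A[0])), k)
    return [[row[j] for j in idx] for row in A]

def largest_singular_value(A, iters=200):
    """Power iteration on A^T A: estimates sigma_1, the largest
    singular value, as ||A v|| for the converged right vector v."""
    m, n = len(A), len(A[0])
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]  # w = A v
        v = [sum(A[i][j] * w[i] for i in range(m)) for j in range(n)]  # v = A^T w
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(m)]
    return math.sqrt(sum(x * x for x in w))

# Toy term-by-document matrix: sigma_1 is 3 by construction.
A = [[3.0, 0.0],
     [0.0, 1.0]]
print(round(largest_singular_value(A), 4))  # -> 3.0

# Reduced matrix built by column sampling, as in the first approach.
B = [[3.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]
small = sample_columns(B, 2, random.Random(0))
print(len(small), len(small[0]))  # 2 rows, 2 sampled columns -> 2 2
```

Real LSI sampling schemes weight columns by norm rather than sampling uniformly; uniform sampling is used here only to keep the sketch short.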

Relevance: 100.00%

Abstract:

In order to calculate unbiased microphysical and radiative quantities in the presence of a cloud, it is necessary to know not only the mean water content but also the distribution of this water content. This article describes a study of the in-cloud horizontal inhomogeneity of ice water content, based on CloudSat data. In particular, by focusing on the relations with variables that are already available in general circulation models (GCMs), a parametrization of inhomogeneity that is suitable for inclusion in GCM simulations is developed. Inhomogeneity is defined in terms of the fractional standard deviation (FSD), which is given by the standard deviation divided by the mean. The FSD of ice water content is found to increase with the horizontal scale over which it is calculated and also with the thickness of the layer. The connection to cloud fraction is more complicated: for small cloud fractions FSD increases as cloud fraction increases, while FSD decreases sharply for overcast scenes. The relations to horizontal scale, layer thickness and cloud fraction are parametrized in a relatively simple equation. The performance of this parametrization is tested on an independent set of CloudSat data. The parametrization is shown to be a significant improvement on the assumption of a single-valued global FSD.
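The FSD definition given in the abstract (standard deviation divided by the mean) is simple to compute; the sample ice-water-content values below are invented purely for illustration.

```python
import math

def fractional_std_dev(values):
    """FSD = standard deviation / mean, as defined in the text.
    Uses the population standard deviation of the given samples."""
    n = len(values)
    mean = sum(values) / n
    var = sum((v - mean) ** 2 for v in values) / n
    return math.sqrt(var) / mean

# Homogeneous water content -> FSD of 0; patchy content -> large FSD.
print(fractional_std_dev([1.0, 1.0, 1.0, 1.0]))           # -> 0.0
print(round(fractional_std_dev([0.1, 0.1, 0.1, 3.7]), 3))  # -> 1.559
```

Note that because FSD divides by the mean, two scenes with the same absolute variability but different mean water contents get different FSDs, which is what makes it a scale-free measure of inhomogeneity.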

Relevance: 100.00%

Abstract:

In this pilot study water was extracted from samples of two Holocene stalagmites from Socotra Island, Yemen, and one Eemian stalagmite from southern continental Yemen. The amount of water extracted per unit mass of stalagmite rock, termed "water yield" hereafter, serves as a measure of its total water content. Based on direct correlation plots of water yields and δ18Ocalcite and on regime shift analyses, we demonstrate that for the studied stalagmites the water yield records vary systematically with the corresponding oxygen isotopic compositions of the calcite (δ18Ocalcite). Within each stalagmite, lower δ18Ocalcite values are accompanied by lower water yields and vice versa. The δ18Ocalcite records of the studied stalagmites have previously been interpreted to predominantly reflect the amount of rainfall in the area; thus, water yields can be linked to drip water supply. A higher, and therefore more continuous, drip water supply caused by higher rainfall rates supports homogeneous deposition of calcite with low porosity and therefore a small fraction of water-filled inclusions, resulting in low water yields of the respective samples. A reduction of drip water supply fosters irregular growth of calcite with higher porosity, leading to an increase of the fraction of water-filled inclusions and thus higher water yields. The results are consistent with the literature on stalagmite growth and supported by optical inspection of thin sections of our samples. We propose that for a stalagmite from a dry tropical or subtropical area, its water yield record represents a novel paleo-climate proxy recording changes in drip water supply, which can in turn be interpreted in terms of associated rainfall rates.

Relevance: 100.00%

Abstract:

Automatically extracting interesting objects from videos is a very challenging task, applicable to many research areas such as robotics, medical imaging, content-based indexing and visual surveillance. Automated visual surveillance is a major research area in computational vision, and a commonly applied technique in attempts to extract objects of interest is motion segmentation. Motion segmentation relies on the temporal changes that occur in video sequences to detect objects, but as a technique it presents many challenges that researchers have yet to surmount. Changes in real-time video sequences include not only interesting objects: environmental conditions such as wind, cloud cover, rain and snow may be present, in addition to rapid lighting changes, poor footage quality, moving shadows and reflections. This list provides only a sample of the challenges present. This thesis explores the use of motion segmentation as part of a computational vision system and provides solutions for a practical, generic approach with robust performance, using current neuro-biological, physiological and psychological research in primate vision as inspiration.
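As a minimal illustration of the motion-segmentation idea (detecting temporal change between consecutive frames), here is a frame-differencing sketch. The thesis's biologically inspired approach is far richer than this; the frames and threshold below are invented, and frames are represented as plain grids of 0-255 intensities.

```python
def motion_mask(prev, curr, threshold=20):
    """Per-pixel frame differencing: mark a pixel as 'moving' when the
    absolute intensity change between consecutive frames exceeds a
    threshold. Returns a binary mask the same shape as the frames."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

frame1 = [[10, 10, 10],
          [10, 10, 10]]
frame2 = [[10, 200, 10],   # a bright object appears at one pixel
          [10, 10, 12]]    # a small change (sensor noise) is ignored

print(motion_mask(frame1, frame2))  # -> [[0, 1, 0], [0, 0, 0]]
```

The threshold is exactly where the challenges listed in the abstract bite: lighting changes, shadows and weather all produce intensity differences that a fixed threshold cannot separate from genuine object motion.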

Relevance: 100.00%

Abstract:

Global climate change results from a small yet persistent imbalance between the amount of sunlight absorbed by Earth and the thermal radiation emitted back to space. An apparent inconsistency has been diagnosed between interannual variations in the net radiation imbalance inferred from satellite measurements and upper-ocean heating rate from in situ measurements, and this inconsistency has been interpreted as ‘missing energy’ in the system. Here we present a revised analysis of net radiation at the top of the atmosphere from satellite data, and we estimate ocean heat content, based on three independent sources. We find that the difference between the heat balance at the top of the atmosphere and upper-ocean heat content change is not statistically significant when accounting for observational uncertainties in ocean measurements, given transitions in instrumentation and sampling. Furthermore, variability in Earth’s energy imbalance relating to El Niño-Southern Oscillation is found to be consistent within observational uncertainties among the satellite measurements, a reanalysis model simulation and one of the ocean heat content records. We combine satellite data with ocean measurements to depths of 1,800 m, and show that between January 2001 and December 2010, Earth has been steadily accumulating energy at a rate of 0.50 ± 0.43 W m−2 (uncertainties at the 90% confidence level). We conclude that energy storage is continuing to increase in the sub-surface ocean.
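A back-of-envelope check of the quoted rate: at 0.50 W m−2 sustained over the decade, the Earth system gains on the order of 10^22 J. The surface-area figure below is an assumed round value, not taken from the paper.

```python
# Total energy accumulated Jan 2001 - Dec 2010 at the quoted imbalance.
EARTH_SURFACE_M2 = 5.1e14          # assumed: ~5.1 x 10^14 m^2
SECONDS_PER_YEAR = 365.25 * 24 * 3600
imbalance = 0.50                   # W m^-2, from the abstract

energy_joules = imbalance * EARTH_SURFACE_M2 * 10 * SECONDS_PER_YEAR
print(f"{energy_joules:.2e} J")    # on the order of 10^22 J
```

The result, roughly 8 x 10^22 J over ten years, gives a sense of scale for why even a sub-watt-per-square-metre imbalance matters for ocean heat content.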

Relevance: 100.00%

Abstract:

Purpose – The purpose of the research was to discover the process of social and environmental report assurance (SERA) and thereby evaluate the benefits, extent of stakeholder inclusivity and/or managerial capture of SERA processes and the dynamics of SERA as it matures. Design/methodology/approach – This paper used semi-structured interviews with 20 accountant and consultant assurors to derive data, which were then coded and analysed, resulting in the identification of four themes. Findings – This paper provides interview evidence on the process of SERA, suggesting that, although there is still managerial capture of SERA, stakeholders are being increasingly included in the process as it matures. SERA is beginning to provide dual-pronged benefits, adding value to management and stakeholders simultaneously. Through the lens of Freirian dialogic theory, it is found that SERA is starting to display some characteristics of a dialogical process, being stakeholder inclusive, demythologising and transformative, with assurors perceiving themselves as a “voice” for stakeholders. Consequently, SERA is becoming an important mechanism for driving forward more stakeholder-inclusive SER, with the SERA process beginning to transform attitudes of management towards their stakeholders through more stakeholder-led SER. However, there remain significant obstacles to dialogic SERA. The paper suggests these could be removed through educative and transformative processes driven by assurors. Originality/value – Previous work on SERA has involved predominantly content-based analysis on assurance statements. However, this paper investigates the details of the SERA process, for the first time using qualitative interview data.

Relevance: 100.00%

Abstract:

The work presented in this article was performed at the Oriental Institute at the University of Chicago, on objects from their permanent collection: an ancient Egyptian bird mummy and three ancient Sumerian corroded copper-alloy objects. We used a portable, fiber-coupled terahertz time-domain spectroscopic imaging system, which allowed us to measure specimens in both transmission and reflection geometry, and present time- and frequency-based image modes. The results confirm earlier evidence that terahertz imaging can provide complementary information to that obtainable from x-ray CT scans of mummies, giving better visualisation of low density regions. In addition, we demonstrate that terahertz imaging can distinguish mineralized layers in metal artifacts.

Relevance: 60.00%

Abstract:

Automatic indexing and retrieval of digital data poses major challenges. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieval of such data based on the semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. For a number of years research has been ongoing in the field of ontological engineering with the aim of using ontologies to add such (meta) knowledge to information. In this paper, we describe the architecture of a system (Dynamic REtrieval Analysis and semantic metadata Management (DREAM)) designed to automatically and intelligently index huge repositories of special effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval. The DREAM Demonstrator has been evaluated as deployed in the film post-production phase to support the process of storage, indexing and retrieval of large data sets of special effects video clips as an exemplar application domain. This paper provides its performance and usability results and highlights the scope for future enhancements of the DREAM architecture, which has proven successful in its first and possibly most challenging proving ground, namely film production, where it is already in routine use within our test bed Partners' creative processes. (C) 2009 Published by Elsevier B.V.

Relevance: 50.00%

Abstract:

There are still major challenges in the area of automatic indexing and retrieval of multimedia content data for very large multimedia content corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually do not provide any knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the project DREAM, which addresses such challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on the semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content using an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.