938 results for visual content analysis


Relevance:

100.00%

Abstract:

In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called “semantic gap”. The proposed image feature vector model is fundamentally underpinned by an image labelling framework, called Collaterally Confirmed Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts of the images with state-of-the-art low-level image processing and visual feature extraction techniques to automatically assign linguistic keywords to image regions. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date already indicate that our proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
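
To make the idea concrete, the sketch below shows one plausible way such a high-level descriptor could be assembled once image regions carry keywords: an area-weighted histogram over a visual vocabulary. The vocabulary, the weighting scheme and all values are illustrative assumptions, not the paper's actual model.

```python
# Illustrative sketch (not the authors' code): aggregating CCL-style region
# labels into one image-level semantic feature vector. The vocabulary and
# the area weighting are assumptions made for illustration.
import numpy as np

VOCABULARY = ["sky", "water", "grass", "sand", "building"]  # hypothetical visual vocabulary

def semantic_feature_vector(region_labels, region_areas):
    """Aggregate per-region keyword labels into an image-level descriptor.

    region_labels: one keyword per segmented region
    region_areas:  region areas in pixels, in the same order
    """
    vec = np.zeros(len(VOCABULARY))
    total = float(sum(region_areas))
    for label, area in zip(region_labels, region_areas):
        if label in VOCABULARY:
            vec[VOCABULARY.index(label)] += area / total  # area-weighted keyword mass
    return vec

# Example: a beach scene segmented into three labelled regions.
print(semantic_feature_vector(["sky", "water", "sand"], [50000, 30000, 20000]))
```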

Relevance:

100.00%

Abstract:

The recent decline in the effectiveness of some azole fungicides in controlling the wheat pathogen Mycosphaerella graminicola has been associated with mutations in the CYP51 gene encoding the azole target, the eburicol 14 alpha-demethylase (CYP51), an essential enzyme of the ergosterol biosynthesis pathway. In this study, analysis of the sterol content of M. graminicola isolates carrying different variants of the CYP51 gene has revealed quantitative differences in sterol intermediates, particularly the CYP51 substrate eburicol. Together with CYP51 gene expression studies, these data suggest that mutations in the CYP51 gene impact on the activity of the CYP51 protein.

Relevance:

100.00%

Abstract:

There are still major challenges in the area of automatic indexing and retrieval of multimedia content data for very large multimedia content corpora. Current indexing and retrieval applications still use keywords to index multimedia content, and those keywords usually do not provide any knowledge about the semantic content of the data. With the increasing amount of multimedia content, it is inefficient to continue with this approach. In this paper, we describe the DREAM project, which addresses these challenges by proposing a new framework for semi-automatic annotation and retrieval of multimedia based on semantic content. The framework uses Topic Map technology as a tool to model the knowledge automatically extracted from the multimedia content by an Automatic Labelling Engine. We describe how we acquire knowledge from the content and represent it, with the support of NLP, to automatically generate Topic Maps. The framework is described in the context of film post-production.
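
As a rough illustration of the kind of structure such a framework could build, the sketch below models a topic-map-like index (topics, associations and occurrences) as plain Python dictionaries. It is not the DREAM implementation, and real Topic Maps (ISO/IEC 13250) are considerably richer; every identifier below is invented.

```python
# Hypothetical sketch of a topic-map-like index built from automatically
# extracted labels: topics, associations between topics, and occurrences
# linking topics to media resources.
from collections import defaultdict

topics = {}                      # topic id -> display name
associations = defaultdict(set)  # topic id -> related topic ids
occurrences = defaultdict(list)  # topic id -> media resources (e.g. clips)

def add_label(label, clip_id, related=()):
    """Register one keyword extracted from a clip by a labelling engine."""
    topics.setdefault(label, label.title())
    occurrences[label].append(clip_id)
    for other in related:
        topics.setdefault(other, other.title())
        associations[label].add(other)
        associations[other].add(label)

add_label("explosion", "clip_0042.mov", related=["fire", "smoke"])
add_label("fire", "clip_0017.mov")

# Retrieval: a query for "fire" can follow the association to "explosion".
print(occurrences["fire"], associations["fire"])
```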

Relevance:

100.00%

Abstract:

There are still major challenges in the area of automatic indexing and retrieval of digital data. The main problem arises from the ever-increasing mass of digital media and the lack of efficient methods for indexing and retrieving such data based on semantic content rather than keywords. To enable intelligent web interactions, or even web filtering, we need to be capable of interpreting the information base in an intelligent manner. Research has been ongoing for some years in the field of ontological engineering, with the aim of using ontologies to add knowledge to information. In this paper we describe the architecture of a system designed to automatically and intelligently index huge repositories of special-effects video clips, based on their semantic content, using a network of scalable ontologies to enable intelligent retrieval.
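
The sketch below illustrates, with invented data, one way ontology-backed retrieval of this kind can work: a query concept is expanded through is-a relations before the clip index is consulted, so clips indexed under more specific concepts are still found. It is a sketch of the general technique, not the system described here.

```python
# Hypothetical sketch: semantic retrieval of special-effects clips via
# query expansion over a tiny is-a ontology. Ontology and index are invented.
SUBCLASSES = {
    "pyrotechnics": ["explosion", "fireball"],
    "weather_effect": ["rain", "snow"],
}
CLIP_INDEX = {
    "explosion": ["fx_0101", "fx_0205"],
    "fireball": ["fx_0310"],
    "rain": ["fx_0007"],
}

def expand(concept):
    """Return the concept plus all of its (transitive) subclasses."""
    found = [concept]
    for child in SUBCLASSES.get(concept, []):
        found.extend(expand(child))
    return found

def retrieve(concept):
    clips = []
    for c in expand(concept):
        clips.extend(CLIP_INDEX.get(c, []))
    return clips

print(retrieve("pyrotechnics"))  # -> ['fx_0101', 'fx_0205', 'fx_0310']
```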

Relevance:

100.00%

Abstract:

A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming at localising the visual semantics to regions of interest in images with textual keywords. Both the primary image and collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods, such as Gaussian distributions and Euclidean distances, together with a collateral content- and context-driven inference mechanism. We introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed based on the CCL labelling results, for the purposes of image data clustering and retrieval respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date already indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
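
The following sketch illustrates the collateral-context idea with invented numbers: keyword candidates for a region are scored by Euclidean proximity to per-keyword feature prototypes, then re-weighted by their co-occurrence with keywords already confirmed elsewhere in the image. The prototypes, co-occurrence weights and combination rule are all assumptions, not the paper's actual parameters.

```python
# Illustrative sketch of context-biased region labelling. All numbers below
# (prototypes, co-occurrence weights) are invented for illustration.
import numpy as np

KEYWORDS = ["sky", "water", "sand"]
PROTOTYPES = np.array([[0.9, 0.1],            # per-keyword feature prototypes
                       [0.2, 0.8],
                       [0.6, 0.5]])
COOCCUR = np.array([[1.0, 0.9, 0.5],          # hypothetical keyword co-occurrence
                    [0.9, 1.0, 0.8],          # weights, e.g. estimated from
                    [0.5, 0.8, 1.0]])         # collateral texts

def label_region(region_feature, confirmed_idx):
    """Score keywords by visual proximity, biased by co-occurrence with
    keywords already confirmed elsewhere in the image."""
    dists = np.linalg.norm(PROTOTYPES - region_feature, axis=1)
    visual_score = 1.0 / (1.0 + dists)             # closer prototype -> higher score
    context = COOCCUR[confirmed_idx].mean(axis=0)  # support from confirmed keywords
    return KEYWORDS[int(np.argmax(visual_score * context))]

# Visually this region is nearest the "sand" prototype, but its strong
# co-occurrence with the already-confirmed "sky" tips the label to "water".
print(label_region(np.array([0.45, 0.6]), confirmed_idx=[0]))
```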

Relevance:

100.00%

Abstract:

In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an automatic image labelling framework, called Collaterally Cued Labelling (CCL), which combines the collateral knowledge extracted from the collateral texts accompanying the images with state-of-the-art low-level visual feature extraction techniques to automatically assign textual keywords to image regions. A subset of the Corel image collection was used for evaluating the proposed method. The experimental results indicate that our semantic-level visual content descriptors outperform both conventional visual and textual image feature models.
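
Once every image carries such a semantic keyword vector, retrieval can reduce to nearest-neighbour ranking. The sketch below shows one plausible ranking step using cosine similarity; the collection and vector values are invented, and the paper's actual matching scheme may differ.

```python
# Hedged sketch: ranking a collection of semantic keyword vectors against a
# query vector by cosine similarity. Vector contents are invented.
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def rank(query_vec, collection):
    """collection: {image_id: semantic keyword vector}; best match first."""
    return sorted(collection, key=lambda i: cosine(query_vec, collection[i]),
                  reverse=True)

collection = {
    "beach.jpg":   np.array([0.5, 0.3, 0.2, 0.0]),  # sky, water, sand, building
    "harbour.jpg": np.array([0.3, 0.4, 0.0, 0.3]),
}
print(rank(np.array([0.4, 0.4, 0.2, 0.0]), collection))
```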

Relevance:

100.00%

Abstract:

The vertical distribution of cloud cover has a significant impact on a large number of meteorological and climatic processes, so cloud top altitude and cloud geometrical thickness are essential parameters. Previous studies established the possibility of retrieving these parameters from multi-angular oxygen A-band measurements. Here we perform a study and comparison of the performance of future instruments. The 3MI (Multi-angle, Multi-channel and Multi-polarization Imager) instrument developed by EUMETSAT, an extension of the POLDER/PARASOL instrument, and the MSPI (Multi-angle Spectro-Polarimetric Imager) developed by NASA's Jet Propulsion Laboratory will measure the total and polarized light reflected by the Earth's atmosphere–surface system in several spectral bands (from UV to SWIR) and several viewing geometries. These instruments should provide opportunities to observe the links between cloud structures and the anisotropy of the solar radiation reflected into space. Specific algorithms will need to be developed in order to take advantage of the new capabilities of these instruments. However, prior to this effort, we need to understand, through a theoretical Shannon information content analysis, the limits and advantages of these new instruments for retrieving liquid and ice cloud properties and, especially, in this study, the amount of information the A-band channel carries on cloud top altitude (CTOP) and cloud geometrical thickness (CGT). We compare the information content of the 3MI A-band in two configurations with that of MSPI. Quantitative information content estimates show that the retrieval of CTOP with high accuracy is possible in almost all cases investigated. The retrieval of CGT is less easy but possible for optically thick clouds above a black surface, at least when CGT > 1–2 km.
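
For readers unfamiliar with the measure, the sketch below computes the standard Shannon information content for a Gaussian prior/posterior pair, H = 0.5 · log2(|S_a| / |Ŝ|), i.e. the entropy reduction the measurement delivers relative to the prior. The two-parameter state (CTOP, CGT) and the covariance values are illustrative only, not results from the study.

```python
# Sketch of a Rodgers-style Shannon information content computation for a
# Gaussian prior/posterior pair. Covariance values are purely illustrative.
import numpy as np

def shannon_information_content(S_prior, S_post):
    """Information content in bits: 0.5 * log2(det(S_prior) / det(S_post))."""
    _, logdet_prior = np.linalg.slogdet(S_prior)
    _, logdet_post = np.linalg.slogdet(S_post)
    return 0.5 * (logdet_prior - logdet_post) / np.log(2.0)

# Hypothetical two-parameter state: cloud top altitude (CTOP) and cloud
# geometrical thickness (CGT), with variances in km^2.
S_prior = np.diag([4.0, 1.0])   # broad prior on CTOP and CGT
S_post = np.diag([0.04, 0.25])  # posterior after using A-band measurements
print(f"{shannon_information_content(S_prior, S_post):.2f} bits")
```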

Relevance:

100.00%

Abstract:

This presentation was offered as part of the CUNY Library Assessment Conference, Reinventing Libraries: Reinventing Assessment, held at the City University of New York in June 2014.

Relevance:

100.00%

Abstract:

There is a lack of research on the everyday lives of older people in developing countries. This exploratory study used structured observation and content analysis to examine the presence of older people in public fora, and considered the methods’ potential for understanding older people’s social integration and inclusion. Structured observation of public social spaces was carried out in six cities, each located in a different developing country, and in one city in the United Kingdom, together with content analysis of the presence of people in newspaper pictures and on television in the selected countries. Results indicated that across all fieldwork sites and data sources there was a low presence of older people, with women considerably less present than men in developing countries. There was variation across fieldwork sites in older people’s presence by place and time of day, and in their accompanied status. The presence of older people in images drawn from newspapers was associated with the news/non-news nature of the source. The utility of the study’s methodological approach is considered, as is the degree to which the presence of older people in public fora might relate to social integration and inclusion in different cultural contexts.

Relevance:

100.00%

Abstract:

Content analysis of computer conferences provides a rich source of data for researching and understanding online learning. However, the complexities of using content analysis in a relatively new research field have resulted in researchers often avoiding this method and using more familiar methods such as survey and interview instead. This article discusses content analysis as a methodology, with emphasis on the development of analytical frameworks. The literature indicates that researchers either use or modify existing frameworks or, more commonly, develop new ones, either through grounded theory approaches or the adaptation of existing theories, concepts or models. The development and implementation of two frameworks are then discussed in detail. Both were developed purposively to investigate and evaluate firstly collaborative learning and secondly deep and surface approaches to learning as evidenced in computer conferences. The article concludes with recommendations for framework development.

Relevance:

100.00%

Abstract:

Content analysis of computer conferences provides a rich source of data for researching and understanding online learning. However, the complexities of using content analysis in a relatively new research field have resulted in researchers avoiding its use as a qualitative or quantitative method and turning to more familiar methods such as surveys and interviews instead. Ethical issues are also raised: while safeguards ensure students’ rights, particularly to privacy and freedom from coercion, they make it difficult for researchers to access and analyse archives of conference data as a research source. This paper suggests a pragmatic but systematic approach to resolving these research issues through several research strategies, which are described in the context of the authors’ research and practice.

Relevance:

100.00%

Abstract:

This paper examines a methodology for establishing quality in online learning environments. For e-learning to be sustainable in flexible, open and distance learning, its value for learning must be analysable. In the case of computer conferencing, one way to do this is with content analysis. This methodology is discussed alongside a review of current frameworks. These indicate that while some researchers and evaluators either use or modify existing frameworks, most develop new ones, generally through the adaptation of existing theories, concepts or models, but in some cases through grounded theory approaches. The development and implementation of two frameworks are then discussed in detail. Both were developed to investigate and evaluate collaborative learning, and deep and surface approaches to learning, as evidenced in computer conferences. Evidence of such learning attributes is precisely the element of value in e-learning that can be shown through such a methodology. These attributes can then be integrated into courses developed for quality online learning environments.

Relevance:

100.00%

Abstract:

Most real-world datasets are, to a certain degree, skewed. When they are also large, they pose a pinnacle challenge in data analysis. More importantly, we cannot ignore such datasets, as they arise frequently in a wide variety of applications. Regardless of the analytic applied, the effectiveness of analysis can often be improved if the characteristics of the dataset are known in advance. In this paper, we propose a novel technique for preprocessing such datasets to obtain this insight. Our work is inspired by the resonance phenomenon, in which similar objects resonate in response to a given response function. The key analytic result of our work is the data terrain, which exposes properties of the dataset to enable effective and efficient analysis. We demonstrate our work in the context of various real-world problems and, in doing so, establish it as a tool for preprocessing data before applying computationally expensive algorithms.
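
Since the abstract does not define the response function, the sketch below takes a loose interpretation of the resonance idea: sweep a Gaussian response across the value range and record the aggregate response of all points at each position, yielding a one-dimensional "terrain" whose peaks expose modes and skew before any expensive algorithm is run. The Gaussian response, the bandwidth and the data are assumptions, not the authors' method.

```python
# Loose illustrative sketch of a "data terrain": aggregate response of all
# data points to a Gaussian response function swept across the value range.
import numpy as np

def data_terrain(data, positions, bandwidth=1.0):
    """Aggregate Gaussian response of all points at each swept position."""
    data = np.asarray(data, dtype=float)
    return np.array([
        np.exp(-0.5 * ((data - p) / bandwidth) ** 2).sum()  # resonance at p
        for p in positions
    ])

rng = np.random.default_rng(0)
skewed = rng.lognormal(mean=0.0, sigma=0.8, size=10_000)  # a skewed dataset
positions = np.linspace(0.0, 8.0, 81)
terrain = data_terrain(skewed, positions, bandwidth=0.25)
print(positions[terrain.argmax()])  # location of the dominant mode
```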