4 results for semantic mapping
in CentAUR: Central Archive University of Reading - UK
Abstract:
The validity of the linguistic relativity principle continues to stimulate vigorous debate and research. The debate has recently shifted from the behavioural investigation arena to a more biologically grounded field, in which tangible physiological evidence for language effects on perception can be obtained. Using brain potentials in a colour oddball detection task with Greek and English speakers, a recent study suggests that language effects may exist at early stages of perceptual integration [Thierry, G., Athanasopoulos, P., Wiggett, A., Dering, B., & Kuipers, J. (2009). Unconscious effects of language-specific terminology on pre-attentive colour perception. Proceedings of the National Academy of Sciences, 106, 4567–4570]. In this paper, we test whether in Greek speakers exposure to a new cultural environment (UK) with contrasting colour terminology from their native language affects early perceptual processing as indexed by an electrophysiological correlate of visual detection of colour luminance. We also report semantic mapping of native colour terms and colour similarity judgements. Results reveal convergence of linguistic descriptions, cognitive processing, and early perception of colour in bilinguals. This result demonstrates for the first time substantial plasticity in early, pre-attentive colour perception and has important implications for the mechanisms that are involved in perceptual changes during the processes of language learning and acculturation.
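The abstract does not specify the analysis pipeline, but the oddball design it describes is conventionally analysed by averaging epoched EEG into event-related potentials and subtracting the standard from the deviant response. The sketch below illustrates that difference-wave logic with plain NumPy; the array shapes, channel index, and time window are hypothetical, not the study's actual parameters.

```python
import numpy as np

# Hypothetical epoched EEG data: (n_trials, n_channels, n_samples).
# In an oddball design, frequent "standard" colour stimuli are
# interleaved with rare "deviant" luminance changes; the effect of
# interest is the deviant-minus-standard difference wave.
rng = np.random.default_rng(0)
standard_epochs = rng.normal(size=(400, 64, 300))  # frequent stimuli
deviant_epochs = rng.normal(size=(80, 64, 300))    # rare stimuli

# Average across trials to obtain event-related potentials (ERPs).
erp_standard = standard_epochs.mean(axis=0)
erp_deviant = deviant_epochs.mean(axis=0)

# The difference wave indexes pre-attentive change detection.
difference_wave = erp_deviant - erp_standard

# Mean amplitude in an early post-stimulus window at a hypothetical
# occipital channel (sample indices assume 1 kHz sampling).
early_window = slice(100, 250)
occipital_channel = 30
amplitude = difference_wave[occipital_channel, early_window].mean()
print(f"Early difference-wave mean amplitude: {amplitude:.3f}")
```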
Abstract:
A novel framework for multimodal semantic-associative collateral image labelling, aiming at associating image regions with textual keywords, is described. Both the primary image and collateral textual modalities are exploited in a cooperative and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as Gaussian distributions and Euclidean distance, together with a collateral content- and context-driven inference mechanism. Finally, we use Self-Organising Maps to examine the classification and retrieval effectiveness of the proposed high-level image feature vector model, which is constructed from the image labelling results.
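The abstract gives no formulas, but the two ingredients it names (a keyword co-occurrence matrix as collateral context, and a distance-based content score) can be sketched concretely. The following is a minimal illustration under assumptions: the vocabulary, the per-image keyword lists, the feature prototypes, and the way the context term biases the content score are all hypothetical stand-ins for the paper's actual scheme.

```python
import numpy as np

# Hypothetical visual vocabulary and per-image collateral keywords.
vocabulary = ["sky", "sea", "grass", "sand"]
index = {w: i for i, w in enumerate(vocabulary)}
image_keywords = [["sky", "sea"], ["sky", "grass"],
                  ["sea", "sand"], ["sky", "sea", "sand"]]

# Collateral context: co-occurrence matrix of the visual keywords.
context = np.zeros((len(vocabulary), len(vocabulary)))
for kws in image_keywords:
    for a in kws:
        for b in kws:
            if a != b:
                context[index[a], index[b]] += 1

# Hypothetical low-level prototypes (e.g., mean colour features) for
# each visual concept, plus one region feature vector to be labelled.
prototypes = {
    "sky":   np.array([0.2, 0.4, 0.9]),
    "sea":   np.array([0.1, 0.3, 0.7]),
    "grass": np.array([0.2, 0.8, 0.2]),
    "sand":  np.array([0.9, 0.8, 0.5]),
}
region = np.array([0.15, 0.35, 0.75])

# Assume "sky" was already confirmed by the collateral text for this image.
confirmed = ["sky"]

def score(word):
    # Content score: inverse Euclidean distance to the concept prototype.
    content = 1.0 / (1e-6 + np.linalg.norm(region - prototypes[word]))
    # Context score: co-occurrence support from confirmed keywords.
    ctx = sum(context[index[word], index[c]] for c in confirmed)
    return content * (1.0 + ctx)  # context biases the content mapping

best = max(prototypes, key=score)
print(f"Region labelled as: {best}")
```

The multiplicative bias here is only one plausible way to combine the two evidence sources; the paper's inference mechanism may weight them differently.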
Abstract:
A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming at localising the visual semantics to regions of interest in images with textual keywords. Both the primary image and collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. The collateral content- and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as Gaussian distributions and Euclidean distance, together with a collateral content- and context-driven inference mechanism. We introduce a novel high-level visual content descriptor devised for semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed from the CCL labelling results, for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used for evaluating the proposed method. The experimental results to date indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
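The abstract leaves the descriptor construction unspecified, but a natural reading is that region-level labels are aggregated into a per-image vector over the visual vocabulary, which is then used for clustering or retrieval. The sketch below shows one such aggregation and a cosine-similarity retrieval step; the vocabulary, label lists, and normalisation choice are assumptions for illustration, not the paper's exact feature models.

```python
import numpy as np

# Hypothetical vocabulary and per-image region labels, standing in for
# the output of a CCL-style labelling stage.
vocabulary = ["sky", "sea", "grass", "sand"]
index = {w: i for i, w in enumerate(vocabulary)}

def feature_vector(region_labels):
    """Histogram of labelled regions over the visual vocabulary,
    L2-normalised to serve as a high-level image descriptor."""
    v = np.zeros(len(vocabulary))
    for label in region_labels:
        v[index[label]] += 1
    norm = np.linalg.norm(v)
    return v / norm if norm else v

database = {
    "beach.jpg":  feature_vector(["sky", "sea", "sand", "sand"]),
    "meadow.jpg": feature_vector(["sky", "grass", "grass"]),
    "coast.jpg":  feature_vector(["sea", "sky", "sand"]),
}

# Retrieval: rank database images by cosine similarity to the query
# (dot product of L2-normalised vectors equals cosine similarity).
query = feature_vector(["sea", "sand"])
ranked = sorted(database.items(), key=lambda kv: -float(query @ kv[1]))
for name, vec in ranked:
    print(name, round(float(query @ vec), 3))
```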