18 results for Visual images
in CentAUR: Central Archive University of Reading - UK
Abstract:
This article explores the way users of an online gay chat room negotiate the exchange of photographs and the conduct of video conferencing sessions and how this negotiation changes the way participants manage their interactions and claim and impute social identities. Different modes of communication provide users with different resources for the control of information, affecting not just what users are able to reveal, but also what they are able to conceal. Thus, the shift from a purely textual mode for interacting to one involving visual images fundamentally changes the kinds of identities and relationships available to users. At the same time, the strategies users employ to negotiate these shifts of mode can alter the resources available in different modes. The kinds of social actions made possible through different modes, it is argued, are not just a matter of the modes themselves but also of how modes are introduced into the ongoing flow of interaction.
Abstract:
Objective: Previous research has indicated that temporal factors [specifically, the duration of interstimulus intervals (ISI) during a threat processing task] may influence the nature of processing biases exhibited in nonclinical populations with some degree of eating disorder psychopathology (Meyer et al., Int J Eat Disord, 27, 405-410, 2000). The current study aimed to test this hypothesis by investigating attentional biases for eating-disorder-relevant images and irrelevant visual images (animals) in patients with eating disorders (n = 23) and psychiatric (n = 19) and nonpsychiatric (n = 65) controls. Method: A dot probe task was modified from previous research (Shafran et al., Int J Eat Disord, 40, 369-380, 2007), whereby an original ISI of 500 ms was increased to 2,000 ms. Results: Patients with an eating disorder continued to display a bias in the processing of weight stimuli. However, biases noted in previous research for shape and weight stimuli disappeared when the ISI duration was increased in this way. Conclusion: These findings highlight the importance of temporal factors in whether processing biases are displayed and may point to ways in which biases actually work in this population. However, further research is warranted. (C) 2008 by Wiley Periodicals, Inc.
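The attentional bias measure underlying a dot-probe study like this one is conventionally the difference between mean reaction times on incongruent and congruent trials. The sketch below is illustrative only (the trial data are fabricated and not from the paper):

```python
# Hypothetical sketch: a conventional dot-probe attentional bias score.
# The probe replaces either the salient (e.g. weight-related) image
# (congruent trial) or the neutral image (incongruent trial); faster
# congruent responses indicate attention drawn toward the salient image.

def bias_score(rt_congruent_ms, rt_incongruent_ms):
    """Mean incongruent RT minus mean congruent RT (ms).

    Positive values suggest vigilance toward the salient stimulus.
    """
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_incongruent_ms) - mean(rt_congruent_ms)

# Illustrative (fabricated) reaction times in milliseconds
congruent = [412, 398, 405, 420]    # probe replaced the salient image
incongruent = [445, 430, 452, 438]  # probe replaced the neutral image
print(bias_score(congruent, incongruent))  # positive → vigilance bias
```

The abstract's manipulation (lengthening the ISI from 500 ms to 2,000 ms) changes when this score is computed relative to stimulus onset, not how it is computed.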
Abstract:
Some poems are inherently dramatic due to their narrative content or the events, characters, places and emotions that are their subject. Others have the potential for dramatisation because of some aural or visual quality of their poetic form. However, if dramatising poems is to be meaningful and effective children need to be taught something about the art form of drama rather than just being left to their own devices. This chapter explores the learning potential of considering the printed text of a poem as a notation of sound, movement, gesture and use of space. The chapter recognises a progression from simple nursery rhymes to the sophisticated use of poetic language in different types of literature that is mirrored in the journey from infants’ clapping games to the dramatic juxtaposition of aural and visual images in theatre and the performing arts.
Abstract:
A driver controls a car by turning the steering wheel or by pressing on the accelerator or the brake. These actions are modelled by Gaussian processes, leading to a stochastic model for the motion of the car. The stochastic model is the basis of a new filter for tracking and predicting the motion of the car, using measurements obtained by fitting a rigid 3D model to a monocular sequence of video images. Experiments show that the filter easily outperforms traditional filters.
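A minimal sketch of the kind of stochastic motion model the abstract describes, under the simplifying assumption (mine, not the authors') that steering and accelerator/brake inputs act as white Gaussian noise, so that speed and heading follow random walks:

```python
# Minimal sketch (not the authors' model): Gaussian-noise driver inputs
# integrated into a 2-D car trajectory. sigma_a and sigma_s are assumed
# noise scales for acceleration and steering respectively.
import math
import random

def simulate_car(steps, dt=0.1, sigma_a=0.5, sigma_s=0.05, seed=0):
    rng = random.Random(seed)
    x = y = heading = 0.0
    speed = 10.0
    path = []
    for _ in range(steps):
        speed += rng.gauss(0.0, sigma_a) * dt    # accelerator/brake noise
        heading += rng.gauss(0.0, sigma_s) * dt  # steering-wheel noise
        x += speed * math.cos(heading) * dt
        y += speed * math.sin(heading) * dt
        path.append((x, y))
    return path

path = simulate_car(100)
```

A tracking filter of the kind reported would use such a model as its process model, correcting the predicted state with pose measurements from the fitted 3D model.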
Abstract:
In this paper, we introduce a novel high-level visual content descriptor which is devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called “semantic gap”. The proposed image feature vector model is fundamentally underpinned by the image labelling framework, called Collaterally Confirmed Labelling (CCL), which incorporates the collateral knowledge extracted from the collateral texts of the images with the state-of-the-art low-level image processing and visual feature extraction techniques for automatically assigning linguistic keywords to image regions. Two different high-level image feature vector models are developed based on the CCL labelling results for the purposes of image data clustering and retrieval respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date indicate that our proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models.
Abstract:
Seventeen-month-old infants were presented with pairs of images, in silence or with the non-directive auditory stimulus 'look!'. The images had been chosen so that one image depicted an item whose name was known to the infant, and the other depicted an item whose name was not known to the infant. Infants looked longer at images for which they had names than at images for which they did not have names, despite the absence of any referential input. The experiment controlled for the familiarity of the objects depicted: in each trial, image pairs presented to infants had previously been judged by caregivers to be of roughly equal familiarity. From a theoretical perspective, the results indicate that objects with names are of intrinsic interest to the infant. The possible causal direction for this linkage is discussed and it is concluded that the results are consistent with Whorfian linguistic determinism, although other construals are possible. From a methodological perspective, the results have implications for the use of preferential looking as an index of early word comprehension.
Abstract:
A novel framework for multimodal semantic-associative collateral image labelling, aiming at associating image regions with textual keywords, is described. Both the primary image and collateral textual modalities are exploited in a cooperative and complementary fashion. The collateral content and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution or Euclidean distance, together with a collateral content and context-driven inference mechanism. Finally, we use Self Organising Maps to examine the classification and retrieval effectiveness of the proposed high-level image feature vector model, which is constructed based on the image labelling results.
Abstract:
A novel framework referred to as collaterally confirmed labelling (CCL) is proposed, aiming at localising the visual semantics to regions of interest in images with textual keywords. Both the primary image and collateral textual modalities are exploited in a mutually co-referencing and complementary fashion. The collateral content and context-based knowledge is used to bias the mapping from the low-level region-based visual primitives to the high-level visual concepts defined in a visual vocabulary. We introduce the notion of collateral context, which is represented as a co-occurrence matrix of the visual keywords. A collaborative mapping scheme is devised using statistical methods such as the Gaussian distribution or Euclidean distance, together with a collateral content and context-driven inference mechanism. We introduce a novel high-level visual content descriptor that is devised for performing semantic-based image classification and retrieval. The proposed image feature vector model is fundamentally underpinned by the CCL framework. Two different high-level image feature vector models are developed based on the CCL labelling results for the purposes of image data clustering and retrieval, respectively. A subset of the Corel image collection has been used for evaluating our proposed method. The experimental results to date indicate that the proposed semantic-based visual content descriptors outperform both traditional visual and textual image feature models. (C) 2007 Elsevier B.V. All rights reserved.
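The "collateral context" described here is a co-occurrence matrix of visual keywords. A hedged sketch of that representation, counting how often pairs of keywords are assigned to the same image (the keyword sets below are invented for illustration):

```python
# Illustrative sketch of a visual-keyword co-occurrence structure, a
# stand-in for the paper's collateral context matrix. Counts are stored
# sparsely as symmetric pair counts rather than a dense matrix.
from collections import Counter
from itertools import combinations

def cooccurrence(labelled_images):
    """labelled_images: list of keyword sets, one set per image."""
    counts = Counter()
    for keywords in labelled_images:
        for a, b in combinations(sorted(keywords), 2):
            counts[(a, b)] += 1
            counts[(b, a)] += 1  # keep the matrix symmetric
    return counts

images = [{"sky", "grass", "tree"}, {"sky", "water"}, {"grass", "tree"}]
ctx = cooccurrence(images)
print(ctx[("grass", "tree")])  # grass and tree co-occur in two images
```

Such counts can then bias region-to-concept mapping: a candidate keyword that frequently co-occurs with keywords already assigned elsewhere in the image is a more plausible label.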
Abstract:
In this paper, we introduce a novel high-level visual content descriptor devised for performing semantic-based image classification and retrieval. The work can be treated as an attempt to bridge the so-called "semantic gap". The proposed image feature vector model is fundamentally underpinned by an automatic image labelling framework, called Collaterally Cued Labelling (CCL), which incorporates the collateral knowledge extracted from the collateral texts accompanying the images with the state-of-the-art low-level visual feature extraction techniques for automatically assigning textual keywords to image regions. A subset of the Corel image collection was used for evaluating the proposed method. The experimental results indicate that our semantic-level visual content descriptors outperform both conventional visual and textual image feature models.
Abstract:
This paper presents recent developments to a vision-based traffic surveillance system which relies extensively on the use of geometrical and scene context. Firstly, a highly parametrised 3-D model is reported, able to adopt the shape of a wide variety of different classes of vehicle (e.g. cars, vans, buses etc.), and its subsequent specialisation to a generic car class which accounts for commonly encountered types of car (including saloon, hatchback and estate cars). Sample data collected from video images, by means of an interactive tool, have been subjected to principal component analysis (PCA) to define a deformable model having 6 degrees of freedom. Secondly, a new pose refinement technique using “active” models is described, able to recover both the pose of a rigid object and the structure of a deformable model; its performance is assessed in comparison with previously reported “passive” model-based techniques in the context of traffic surveillance. The new method is more stable, and requires fewer iterations, especially when the number of free parameters increases, but shows somewhat poorer convergence. Typical applications for this work include robot surveillance and navigation tasks.
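The PCA step described above can be sketched as follows. This is an illustrative reconstruction under my own assumptions (random stand-in shape vectors, not the authors' interactive measurements): the principal components of the centred shape samples become the deformable model's 6 degrees of freedom.

```python
# Illustrative sketch: PCA over vectorised vehicle-shape samples, keeping
# 6 principal modes as the deformable model's degrees of freedom.
import numpy as np

def fit_deformable_model(samples, n_dof=6):
    """samples: (n_vehicles, n_shape_params) array of measured shapes."""
    mean = samples.mean(axis=0)
    centred = samples - mean
    # SVD of the centred data yields the principal deformation modes
    _, _, vt = np.linalg.svd(centred, full_matrices=False)
    modes = vt[:n_dof]  # shape (n_dof, n_shape_params)
    return mean, modes

def instantiate(mean, modes, coeffs):
    """Recover a specific vehicle shape from n_dof coefficients."""
    return mean + np.asarray(coeffs) @ modes

rng = np.random.default_rng(0)
samples = rng.normal(size=(20, 12))  # fabricated shape vectors
mean, modes = fit_deformable_model(samples)
shape = instantiate(mean, modes, [0.5, 0, 0, 0, 0, 0])
```

Pose refinement then searches over the 6 coefficients (plus rigid pose) so that the instantiated model best matches image evidence.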
Abstract:
The impact of novel labels on visual processing was investigated across two experiments with infants aged between 9 and 21 months. Infants viewed pairs of images across a series of preferential looking trials. On each trial, one image was novel, and the other image had previously been viewed by the infant. Some infants viewed images in silence; other infants viewed images accompanied by novel labels. The pattern of fixations both across and within trials revealed that infants in the labelling condition took longer to develop a novelty preference than infants in the silent condition. Our findings contrast with prior research by Robinson and Sloutsky (e.g., Robinson & Sloutsky, 2007a; Sloutsky & Robinson, 2008) who found that novel labels did not disrupt visual processing for infants aged over a year. Provided that overall task demands are sufficiently high, it appears that labels can disrupt visual processing for infants during the developmental period of establishing a lexicon. The results suggest that when infants are processing labels and objects, attentional resources are shared across modalities.
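The novelty preference this abstract refers to is conventionally quantified as the proportion of total looking time spent on the novel image. A minimal illustrative sketch (the looking times are invented, not the study's data):

```python
# Minimal sketch of the standard novelty-preference score used in
# preferential-looking paradigms: 0.5 means no preference, values above
# 0.5 indicate a preference for the novel image.

def novelty_preference(look_novel_ms, look_familiar_ms):
    total = look_novel_ms + look_familiar_ms
    return look_novel_ms / total if total else 0.5

print(novelty_preference(3200, 1800))  # 0.64 → novelty preference
```

A slower rise of this score across trials, as reported for the labelling condition, is what the abstract treats as disrupted visual processing.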
Abstract:
This paper presents a previously unpublished Attic lekythos and discusses visual ambiguity as an intentional drawing style used by a vase painter who conceptualised the many possible relationships between pot and user, object and subject. The Gela Painter endowed this hastily manufactured and decorated lekythos with visual effects that drew the viewer into an inherently ambivalent motif: a mounting Dionysos. This motif, like other Dionysian themes, had a vogue in late Archaic times but did not necessarily invoke chthonic associations. It had the potential to be consumed in diverse contexts, including religious festivals, by a wide range of audiences. Such images were not given to the viewer fully through visual perception but through interpretation.
Abstract:
Infrared polarization and intensity imagery provide complementary and discriminative information in image understanding and interpretation. In this paper, a novel fusion method is proposed by effectively merging the information with various combination rules. It makes use of both low-frequency and high-frequency image components from support value transform (SVT), and applies fuzzy logic in the combination process. Images (both infrared polarization and intensity images) to be fused are first decomposed into low-frequency component images and support value image sequences by the SVT. Then the low-frequency component images are combined using a fuzzy combination rule blending three sub-combination methods of (1) region feature maximum, (2) region feature weighting average, and (3) pixel value maximum; and the support value image sequences are merged using a fuzzy combination rule fusing two sub-combination methods of (1) pixel energy maximum and (2) region feature weighting. With the variables of two newly defined features, i.e. the low-frequency difference feature for low-frequency component images and the support-value difference feature for support value image sequences, trapezoidal membership functions are developed to tune the fuzzy fusion process. Finally, the fused image is obtained by inverse SVT operations. Experimental results of visual inspection and quantitative evaluation both indicate the superiority of the proposed method to its counterparts in image fusion of infrared polarization and intensity images.
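A simplified, per-pixel sketch of the fuzzy combination idea described above. This is not the paper's full scheme: the trapezoid parameters and the use of a local intensity difference as a stand-in for the "low-frequency difference feature" are my own assumptions.

```python
# Simplified sketch of fuzzy-weighted fusion: a trapezoidal membership
# function on the difference between the two low-frequency inputs chooses
# between pixel-maximum and averaging; detail (support value) components
# are fused by absolute-maximum selection.

def trapezoid(x, a, b, c, d):
    """Trapezoidal membership: 0 outside (a, d), 1 on [b, c], ramps between."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)
    return (d - x) / (d - c)

def fuse_pixels(low_a, low_b, detail_a, detail_b):
    # Large low-frequency disagreement -> favour the maximum; small
    # disagreement -> favour the average (assumed breakpoints).
    w = trapezoid(abs(low_a - low_b), 0.0, 0.2, 0.8, 1.0)
    low = w * max(low_a, low_b) + (1 - w) * 0.5 * (low_a + low_b)
    detail = detail_a if abs(detail_a) >= abs(detail_b) else detail_b
    return low, detail

print(fuse_pixels(0.9, 0.1, 0.05, -0.2))
```

The full method applies such rules per region and per support-value level, then reconstructs the fused image with the inverse SVT.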
Abstract:
The aim of Terrorist Transgressions is to analyse the myths inscribed in images of the terrorist and identify how agency is attributed to representation through invocations and inversions of gender stereotypes. In modern discourses on the terrorist the horror experienced in Western societies was the appearance of a new sense of the vulnerability of the body politic, and therefore of the modern self with its direct dependency on security and property. The terrorist has been constructed as the epitome of transgression against economic resources and moral, physical and political boundaries. Although terrorism has been the focus of intense academic activity, cultural representations of the terrorist have received less attention. Yet terrorism is dependent on spectacle and the topic is subject to forceful exposure in popular media. While the terrorist is predominantly aligned with masculinity, women have been active in terrorist organisations since the late 19th century and in suicidal terrorist attacks since the 1980s. Such attacks have confounded constructions of femininity and masculinity, with profound implications for the gendering of violence and horror. The publication arises from an AHRC networking grant, 2011-12, with Birkbeck, and includes collaboration with the army at Sandhurst RMA. The project relates to a wider investigation into feminism, violence and contemporary art.
Abstract:
Scene classification based on latent Dirichlet allocation (LDA) is a more general modeling method known as the bag of visual words, in which the construction of a visual vocabulary is a crucial quantization process to ensure the success of the classification. A framework is developed using the following new aspects: Gaussian mixture clustering for the quantization process; the use of an integrated visual vocabulary (IVV), which is built as the union of all centroids obtained from the separate quantization process of each class; and the usage of some features, including edge orientation histogram, CIELab color moments, and gray-level co-occurrence matrix (GLCM). The experiments are conducted on IKONOS images with six semantic classes (tree, grassland, residential, commercial/industrial, road, and water). The results show that the use of an IVV increases the overall accuracy (OA) by 11 to 12% when implemented on the selected features and by 6% on all features. The selected features of CIELab color moments and GLCM provide a better OA than the implementation of CIELab color moments or GLCM individually, which increases the OA by only ∼2 to 3%. Moreover, the results show that the OA of LDA outperforms the OA of C4.5 and naive Bayes tree by ∼20%. © 2014 Society of Photo-Optical Instrumentation Engineers (SPIE) [DOI: 10.1117/1.JRS.8.083690]
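The integrated visual vocabulary (IVV) idea can be sketched as follows. This is a hedged toy reconstruction: a per-class mean stands in for the paper's Gaussian mixture clustering, and the feature vectors are invented for illustration.

```python
# Illustrative sketch of an integrated visual vocabulary: centroids found
# separately per class are unioned into one vocabulary, and each image is
# represented as a bag-of-visual-words histogram over that vocabulary.

def class_centroids(features_by_class):
    """One centroid per class (toy stand-in for per-class mixture models)."""
    centroids = []
    for feats in features_by_class.values():
        dim = len(feats[0])
        centroids.append(tuple(sum(f[d] for f in feats) / len(feats)
                               for d in range(dim)))
    return centroids  # the integrated visual vocabulary (IVV)

def bovw_histogram(image_features, vocabulary):
    """Assign each local feature to its nearest visual word (squared L2)."""
    hist = [0] * len(vocabulary)
    for f in image_features:
        dists = [sum((a - b) ** 2 for a, b in zip(f, w)) for w in vocabulary]
        hist[dists.index(min(dists))] += 1
    return hist

vocab = class_centroids({"tree": [(0.1, 0.9), (0.2, 0.8)],
                         "water": [(0.9, 0.1), (0.8, 0.2)]})
print(bovw_histogram([(0.15, 0.85), (0.85, 0.15)], vocab))  # [1, 1]
```

In the paper's pipeline these histograms would be the word counts fed to the LDA scene classifier; building the vocabulary per class before taking the union is what distinguishes the IVV from a single global quantization.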