2 results for Visual Working-memory
in Repositório Científico do Instituto Politécnico de Lisboa - Portugal
Abstract:
This e-working paper contributes to the debate on the diversity of gender identities through an analysis of the different forms of visual representation of the body carried in some of the men's magazines circulating in Portugal (Maxmen, FHM, Men's Health and GQ). The paper begins by presenting data on the emergence and sales evolution of men's magazines in Portugal, as well as on the profile of their readers. It then draws on the images in these publications to reflect on the different conceptions of gender associated with the magazines.
Abstract:
Relevant past events can be remembered by visualizing related pictures. The main difficulty is finding these photos in a large personal collection. Query definition and image annotation are the key issues in overcoming this problem: the former because of the diversity of clues our memory provides when recovering a past moment, and the latter because images must be annotated with information about those clues in order to be retrieved. Consequently, tools for recovering past memories should handle these two tasks carefully. This paper describes a user interface designed to explore pictures from personal memories. Users can query the media collection in several ways, and for this reason an iconic visual language for defining queries is proposed. Automatic and semi-automatic annotation is also performed, using the image content and the audio captured when users show their pictures to others. The paper also presents an evaluation of the user interface based on tests with 58 participants.