903 results for Audio-visual content classification
Abstract:
People possess different sensory modalities to detect, interpret, and efficiently act upon various events in a complex and dynamic environment (Fetsch, DeAngelis, & Angelaki, 2013). Much empirical work has been done to understand the interplay of modalities (e.g. audio-visual interactions; see Calvert, Spence, & Stein, 2004). On the one hand, the integration of multimodal input as a functional principle of the brain enables versatile and coherent perception of the environment (Lewkowicz & Ghazanfar, 2009). On the other hand, sensory integration does not necessarily mean that input from all modalities is always weighted equally (Ernst, 2008). Rather, when two or more modalities are stimulated concurrently, one modality often dominates over another. Studies 1 and 2 of the dissertation addressed the developmental trajectory of sensory dominance. In both studies, 6-year-olds, 9-year-olds, and adults were tested in order to examine sensory (audio-visual) dominance across age groups. In Study 3, sensory dominance was put into an applied context by examining verbal and visual overshadowing effects among 4- to 6-year-olds performing a face recognition task. The results of Studies 1 and 2 support the default auditory dominance in young children proposed by Napolitano and Sloutsky (2004), which persists up to 6 years of age. For 9-year-olds, results on privileged modality processing were inconsistent: visual dominance was revealed in Study 1, whereas privileged auditory processing was revealed in Study 2. Among adults, visual dominance was observed in Study 1, as has also been demonstrated in preceding studies (see Spence, Parise, & Chen, 2012); no sensory dominance was revealed for adults in Study 2. Potential explanations are discussed. Study 3 addressed verbal and visual overshadowing effects in 4- to 6-year-olds. The aim was to examine whether verbalization (i.e., verbally describing a previously seen face) or visualization (i.e., drawing the seen face) might affect later face recognition. No effect of visualization on recognition accuracy was revealed. Instead of a verbal overshadowing effect, a verbal facilitation effect occurred. Moreover, verbal intelligence was a significant predictor of recognition accuracy in the verbalization group but not in the control group. This suggests that strengthening verbal intelligence in children can pay off in non-verbal domains as well, which might have educational implications.
Abstract:
With the growth of information available on the Web and in personal and professional archives, driven both by the increase in data storage capacity and by the exponential growth in computer processing power, together with easy access to that information, an enormous flow of production and distribution of audiovisual content has been generated. However, although mechanisms exist for indexing this content so that it can be searched and accessed, they usually involve considerable algorithmic complexity or require hiring highly qualified staff to verify and categorise the content. This dissertation studies solutions for collaborative content annotation and develops a tool that facilitates the annotation of an archive of audiovisual content. The implemented approach is based on the concept of "Games With a Purpose" (GWAP) and lets users create tags (metadata in the form of keywords) that assign meaning to the object being categorised. As a first objective, a game was developed whose purpose is not only entertainment but also the creation of audiovisual annotations for the videos presented to the player, thereby improving their indexing and categorisation. The application also allows the categorised content and metadata to be viewed and, to add one more informative element, lets users insert a "like" at a specific time instant in the video. The main advantage of the application is that it attaches annotations to specific points in the video, namely its time instants. This is a new feature, not available in other collaborative audiovisual annotation applications. As a result, access to the content becomes considerably more effective, since a search can lead directly to specific points inside a video.
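Though the dissertation's implementation is not shown here, the time-anchored annotation model it describes can be illustrated with a minimal sketch. The class names, the sorted-list design, and the keyword-matching search below are all hypothetical illustrations, not the tool's actual code:

```python
from dataclasses import dataclass, field
from bisect import insort

# Hypothetical sketch: tags and likes anchored to time instants of a video,
# searchable so that a query can jump to points inside the video.

@dataclass(order=True)
class Annotation:
    time_s: float                                  # time instant in seconds
    kind: str = field(compare=False)               # "tag" or "like"
    value: str = field(compare=False, default="")  # keyword text for tags

class VideoAnnotations:
    def __init__(self, video_id: str):
        self.video_id = video_id
        self.annotations: list[Annotation] = []    # kept sorted by time

    def add_tag(self, time_s: float, keyword: str) -> None:
        insort(self.annotations, Annotation(time_s, "tag", keyword))

    def add_like(self, time_s: float) -> None:
        insort(self.annotations, Annotation(time_s, "like"))

    def search(self, keyword: str) -> list[float]:
        """Return the time instants whose tag matches the keyword."""
        return [a.time_s for a in self.annotations
                if a.kind == "tag" and a.value == keyword]

va = VideoAnnotations("video_42")
va.add_tag(12.5, "goal")
va.add_like(13.0)
va.add_tag(64.0, "interview")
print(va.search("goal"))  # [12.5] -> jump point inside the video
```

Keeping the list sorted by time instant is one plausible design choice: search results can then be played back in chronological order without further sorting.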
Abstract:
The Multimedia Interactive Book (miBook) reflects the development of a new concept of virtual interpretation of traditional textbooks and audio-visual content. By encompassing new technological approaches and using augmented reality technology, it allows the end user to experience a variety of sensory stimuli while enjoying and interacting with the content, thereby enhancing the learning process. miBook stands for a global educational aim: to enable people not only to access but also to intellectually appropriate valuable content coming from different linguistic and cultural contexts.
Abstract:
This research describes how YouTube, through its strategies and marketing plan, has become the number one platform in variety of movie clips, music videos, video blogs, and more, gaining popularity as a social network. Social networks have developed a new way of communicating and are a fundamental tool for the creation of collective knowledge; this is the case of YouTube, an audiovisual content search engine and social network that allows millions of users to connect around the world. This platform breaks down the cultural and communication barriers that previously existed in the absence of the Internet. In this sense, the aim is to analyse YouTube from a management perspective focused on the marketing area.
Abstract:
In political debates, mediatisation can shape the use of language so as to increase spectacularisation and polarisation, often by means of criticism and humour respectively. These linguistic strategies are frequently used to shape what Goffman defined as one's face. Politicians, in particular, can resort to facework in a double sense: shaping their own face positively and/or that of their opponents negatively. Starting from the sociological theory of face by Goffman and Levinson, and with the help of corpus analysis tools, this research investigated how various forms of criticism and humour were deployed in three national electoral debates (Germany, Ireland, and New Zealand) and one debate for the municipal election in Rome. The transcripts, whose audio-visual content is available on the Internet, were extracted automatically or found online and then revised. The CADS research aimed to investigate the role that criticism and humour played within each participant's discourse, and to identify differences and similarities among the strategies used by political leaders and moderators in different countries and in different cultural, political, and media contexts.
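As a rough illustration of the kind of corpus analysis such a study might rely on, the sketch below counts analyst-defined marker terms per speaker in a debate transcript. The marker lists, the "SPEAKER: utterance" transcript format, and the function name are illustrative assumptions, not the research's actual tooling:

```python
import re
from collections import Counter

# Hypothetical marker lists an analyst might compile beforehand.
CRITICISM_MARKERS = {"failed", "wrong", "broken promise"}
HUMOUR_MARKERS = {"(laughter)", "joke"}

def count_markers(transcript: str, markers: set[str]) -> Counter:
    """Count marker occurrences per speaker in a 'SPEAKER: text' transcript."""
    counts: Counter = Counter()
    for line in transcript.splitlines():
        match = re.match(r"(\w+):\s*(.*)", line)
        if not match:
            continue
        speaker, text = match.group(1), match.group(2).lower()
        counts[speaker] += sum(text.count(m) for m in markers)
    return counts

transcript = """MODERATOR: Your opponent says you failed on housing.
LEADER_A: That is a joke. The policy was never broken promise territory.
LEADER_B: He failed, plain and simple. (laughter)"""

print(count_markers(transcript, CRITICISM_MARKERS))  # e.g. Counter({'MODERATOR': 1, ...})
```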
Abstract:
Work presented within the scope of the Master's in Informatics Engineering, as a partial requirement for obtaining the degree of Master in Informatics Engineering.
Abstract:
David Smith's presentation at the Europeana workshop on 20 November 2012 in Helsinki.
Abstract:
This bachelor's thesis examines the film sound of the war movies Apocalypse Now and Saving Private Ryan, in an attempt to contribute to a deeper understanding of the uses and functions of film sound, primarily in these two films but also in war films more generally. In this context, film sound comprises all sound in a film except non-diegetic music. Both films were examined through an audio-visual analysis: the sound and image content of each film was first scrutinised separately, and the same sequence was then examined as a whole with sound and image recombined. The audio-visual analysis method used is Michel Chion's Masking. The 30 minutes of film analysed were then divided into different film-sound zones, whose sound content revealed, among other things, the main functions of the film sound in these movies: to sustain the viewer's focus and interest, to create closeness to the characters, and to convey a strong sense of realism and presence. The intention behind the film sound appeared to be to move viewers into the film's reality, to let them become one with the film. Conveying this sense of realism, presence, focus, and interest also proved to have been the intention from the pre-production stages of both films, showing that the filmmakers achieved what they set out to do. Whether film sound is used in the same way, or serves the same functions, in war films in general cannot be determined from this study.
Abstract:
The ability to integrate into a unified percept sensory inputs that derive from different sensory modalities but relate to the same external event is called multisensory integration, and it may represent an efficient mechanism of sensory compensation when a sensory modality is damaged by a cortical lesion. This hypothesis is discussed in the present dissertation. Experiment 1 explored the role of the superior colliculus (SC) in multisensory integration by testing patients with collicular lesions, patients with subcortical lesions not involving the SC, and healthy control subjects in a multisensory task. Paralleling evidence from animal studies, patients with collicular lesions showed a loss of multisensory enhancement, in contrast with control subjects, providing the first lesion evidence in humans of the essential role of the SC in mediating audio-visual integration. Experiment 2 investigated the role of the cortex in mediating multisensory integrative effects by inducing virtual lesions with inhibitory theta-burst stimulation over temporo-parietal, occipital, and posterior parietal cortex, demonstrating that only the temporo-parietal cortex was causally involved in modulating the integration of audio-visual stimuli at the same spatial location. Given the involvement of the retino-colliculo-extrastriate pathway in mediating audio-visual integration, the functional sparing of this circuit in hemianopic patients is extremely relevant for a multisensory-based approach to the recovery of unisensory deficits. Experiment 3 demonstrated the spared functional activity of this circuit in a group of hemianopic patients, revealing implicit recognition of the fearful content of unseen visual stimuli (i.e. affective blindsight), an ability mediated by the retino-colliculo-extrastriate pathway and its connections with the amygdala. Finally, Experiment 4 provided evidence that systematic audio-visual stimulation induces long-lasting clinical improvements in patients with visual field defects, and that the activity of the spared retino-colliculo-extrastriate pathway is responsible for the observed clinical amelioration, as suggested by the greater improvement, in tasks highly demanding in terms of spatial orienting, among patients with cortical lesions limited to the occipital cortex compared with patients whose lesions extended to other cortical areas. Overall, the present results indicate that multisensory integration is mediated by the retino-colliculo-extrastriate pathway and that systematic audio-visual stimulation, by activating this spared neural circuit, can affect orientation towards the blind field in hemianopic patients and might therefore constitute an effective and innovative approach to the rehabilitation of unisensory visual impairments.
Abstract:
Lesions to the primary geniculo-striate visual pathway cause blindness in the contralesional visual field. Nevertheless, previous studies have suggested that patients with visual field defects may still be able to implicitly process the affective valence of unseen emotional stimuli (affective blindsight) through alternative visual pathways bypassing the striate cortex. These alternative pathways may also allow exploitation of multisensory (audio-visual) integration mechanisms, such that auditory stimulation can enhance visual detection of stimuli which would otherwise be undetected when presented alone (crossmodal blindsight). The present dissertation investigated implicit emotional processing and multisensory integration when conscious visual processing is prevented by real or virtual lesions to the geniculo-striate pathway, in order to further clarify both the nature of these residual processes and the functional aspects of the underlying neural pathways. The present experimental evidence demonstrates that alternative subcortical visual pathways allow implicit processing of the emotional content of facial expressions in the absence of cortical processing. However, this residual ability is limited to fearful expressions. This finding suggests the existence of a subcortical system specialised in detecting danger signals based on coarse visual cues, therefore allowing the early recruitment of flight-or-fight behavioural responses even before conscious and detailed recognition of potential threats can take place. Moreover, the present dissertation extends the knowledge about crossmodal blindsight phenomena by showing that, unlike with visual detection, sound cannot crossmodally enhance visual orientation discrimination in the absence of functional striate cortex. This finding demonstrates, on the one hand, that the striate cortex plays a causative role in crossmodally enhancing visual orientation sensitivity and, on the other hand, that subcortical visual pathways bypassing the striate cortex, despite affording audio-visual integration processes leading to the improvement of simple visual abilities such as detection, cannot mediate multisensory enhancement of more complex visual functions, such as orientation discrimination.
Abstract:
This PhD by publication examines selected practice-based audio-visual works made by the author over a ten-year period, placing them in a critical context. Central to the publications, and the focus of the thesis, is an exploration of the role of sound in the creation of dialectic tension between the audio, the visual and the audience. By first analysing a number of texts (films/videos and key writings) the thesis locates the principal issues and debates around the use of audio in artists’ moving image practice. From this it is argued that asynchronism, first advocated in 1929 by Pudovkin as a response to the advent of synchronised sound, can be used to articulate audio-visual relationships. Central to asynchronism’s application in this paper is a recognition of the propensity for sound and image to adhere, and in visual music for there to be a literal equation of audio with the visual, often married with a quest for the synaesthetic. These elements can either be used in an illusionist fashion, or employed as part of an anti-illusionist strategy for realising dialectic. Using this as a theoretical basis, the paper examines how the publications implement asynchronism, including digital mapping to facilitate innovative reciprocal sound and image combinations, and the asynchronous use of ‘found sound’ from a range of online sources to reframe the moving image. The synthesis of publications and practice demonstrates that asynchronism can both underpin the creation of dialectic, and be an integral component in an audio-visual anti-illusionist methodology.
Abstract:
Personal memory collections composed of digital pictures are very popular at the moment. Retrieving these media items requires annotation. In recent years, several approaches have been proposed to address the image annotation problem. This paper presents our proposals for tackling it: automatic and semi-automatic learning methods for semantic concepts. The automatic method estimates semantic concepts from visual content, context metadata, and audio information. The semi-automatic method relies on results provided by a computer game. The paper describes both proposals and presents their evaluations.
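The abstract does not specify how the three information sources are combined; one common approach, shown here purely as a hedged illustration, is late fusion of per-modality concept scores. The function name, weights, and example scores below are assumptions, not the paper's actual method:

```python
# Hypothetical late-fusion sketch: combine per-modality confidence scores
# (visual, context metadata, audio) into one estimate per semantic concept.

def fuse_concept_scores(
    visual: dict[str, float],
    metadata: dict[str, float],
    audio: dict[str, float],
    weights: tuple[float, float, float] = (0.5, 0.3, 0.2),
) -> dict[str, float]:
    """Weighted average of modality scores for each concept (range 0..1)."""
    concepts = set(visual) | set(metadata) | set(audio)
    wv, wm, wa = weights
    return {
        c: wv * visual.get(c, 0.0) + wm * metadata.get(c, 0.0) + wa * audio.get(c, 0.0)
        for c in concepts
    }

# Example: scores that per-modality classifiers might emit for one photo.
scores = fuse_concept_scores(
    visual={"beach": 0.8, "indoor": 0.1},
    metadata={"beach": 0.6},            # e.g. GPS near a coastline
    audio={"beach": 0.4, "crowd": 0.7},
)
print(max(scores, key=scores.get))      # most likely concept: "beach"
```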
Abstract:
Dissertation presented to the Escola Superior de Comunicação Social in partial fulfilment of the requirements for the master's degree in Audiovisual and Multimedia.
Abstract:
Master's internship report in Communication Sciences (specialisation in Audiovisual and Multimedia).
Abstract:
Modern cochlear implantation technologies allow deaf patients to understand auditory speech; however, the implants deliver only a coarse auditory input, and patients must rely on long-term adaptive processes to achieve coherent percepts. In adults with post-lingual deafness, the greatest progress in speech recovery is observed during the first year after cochlear implantation, but there is a large range of variability in the level of cochlear implant outcomes and in the temporal evolution of recovery. It has been proposed that when profoundly deaf subjects receive a cochlear implant, visual cross-modal reorganization of the brain is deleterious for auditory speech recovery. We tested this hypothesis in post-lingually deaf adults by analysing whether brain activity shortly after implantation correlated with the level of auditory recovery 6 months later. Based on brain activity induced by a speech-processing task, we found strong positive correlations in areas outside the auditory cortex. The highest positive correlations were found in the occipital cortex involved in visual processing, as well as in the posterior-temporal cortex known for audio-visual integration. The other area that positively correlated with auditory speech recovery was localized in the left inferior frontal area known for speech processing. Our results demonstrate that the functional level of the visual modality is related to the proficiency of auditory recovery. Based on the positive correlation of visual activity with auditory speech recovery, we suggest that the visual modality may facilitate the perception of a word's auditory counterpart in communicative situations. The demonstrated link between visual activity and auditory speech perception indicates that visuo-auditory synergy is crucial for cross-modal plasticity and for fostering speech-comprehension recovery in adult cochlear-implanted deaf patients.