770 results for audio-visual methods
Abstract:
Cuenca lacks a journalistic research and audiovisual production project that investigates, compiles, and presents information about those traditional trades handed down over time, which are gradually being lost and headed toward complete extinction. This project can be considered innovative in that it brings together two areas: audiovisual communication and journalistic writing. They are combined in order to present relevant information, through a final product that is both visual and written, showing how these trades are practiced by different people, along with their contexts and processes, with the aim of serving as cultural research support at the local and national level.
Abstract:
People possess different sensory modalities to detect, interpret, and efficiently act upon various events in a complex and dynamic environment (Fetsch, DeAngelis, & Angelaki, 2013). Much empirical work has been done to understand the interplay of modalities (e.g., audio-visual interactions; see Calvert, Spence, & Stein, 2004). On the one hand, the integration of multimodal input as a functional principle of the brain enables versatile and coherent perception of the environment (Lewkowicz & Ghazanfar, 2009). On the other hand, sensory integration does not necessarily mean that input from all modalities is weighted equally (Ernst, 2008). Rather, when two or more modalities are stimulated concurrently, one modality often dominates the other. Studies 1 and 2 of the dissertation addressed the developmental trajectory of sensory dominance. In both studies, 6-year-olds, 9-year-olds, and adults were tested in order to examine sensory (audio-visual) dominance across different age groups. In Study 3, sensory dominance was put into an applied context by examining verbal and visual overshadowing effects among 4- to 6-year-olds performing a face recognition task. The results of Studies 1 and 2 support the default auditory dominance in young children proposed by Napolitano and Sloutsky (2004), which persists up to 6 years of age. For 9-year-olds, results on privileged modality processing were inconsistent: whereas visual dominance was revealed in Study 1, privileged auditory processing was revealed in Study 2. Among adults, visual dominance was observed in Study 1, as has also been demonstrated in preceding studies (see Spence, Parise, & Chen, 2012); no sensory dominance was revealed in Study 2 for adults. Potential explanations are discussed. Study 3 addressed verbal and visual overshadowing effects in 4- to 6-year-olds. The aim was to examine whether verbalization (i.e., verbally describing a previously seen face) or visualization (i.e., drawing the seen face) might affect later face recognition. No effect of visualization on recognition accuracy was revealed. Instead of a verbal overshadowing effect, a verbal facilitation effect occurred. Moreover, verbal intelligence was a significant predictor of recognition accuracy in the verbalization group but not in the control group. This suggests that strengthening verbal intelligence in children can pay off in non-verbal domains as well, which might have educational implications.
Abstract:
We propose a novel technique for conducting robust voice activity detection (VAD) in high-noise recordings. We use Gaussian mixture modeling (GMM) to train two generic models: speech and non-speech. We then score smaller segments of a given (unseen) recording against each of these GMMs to obtain two respective likelihood scores for each segment. These scores are used to compute a dissimilarity measure between pairs of segments and to carry out complete-linkage clustering of the segments into speech and non-speech clusters. We compare the accuracy of our method against state-of-the-art and standardised VAD techniques, demonstrating an absolute improvement of 15% in half-total error rate (HTER) over the best-performing baseline system across the QUT-NOISE-TIMIT database. We then apply our approach to the Audio-Visual Database of American English (AVDBAE) to demonstrate the performance of our algorithm using visual, audio-visual, or a proposed fusion of these features.
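A minimal sketch of the scoring-and-clustering idea described in this abstract, using scikit-learn and SciPy. The feature inputs, component count, and the Euclidean dissimilarity between score vectors are illustrative assumptions, not the paper's exact configuration.

```python
# Hedged sketch of GMM-based VAD via complete-linkage clustering.
# Assumes per-segment frame features (e.g., MFCCs) as 2D arrays.
import numpy as np
from sklearn.mixture import GaussianMixture
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

def train_models(speech_feats, nonspeech_feats, n_comp=32):
    # Two generic models: speech and non-speech (component count assumed).
    speech_gmm = GaussianMixture(n_components=n_comp).fit(speech_feats)
    nonspeech_gmm = GaussianMixture(n_components=n_comp).fit(nonspeech_feats)
    return speech_gmm, nonspeech_gmm

def cluster_segments(segments, speech_gmm, nonspeech_gmm):
    # Two likelihood scores per segment (mean log-likelihood over frames).
    scores = np.array([[speech_gmm.score(s), nonspeech_gmm.score(s)]
                       for s in segments])
    # Pairwise dissimilarity between score vectors (Euclidean is assumed).
    n = len(segments)
    dist = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            dist[i, j] = dist[j, i] = np.linalg.norm(scores[i] - scores[j])
    # Complete-linkage clustering into two clusters.
    labels = fcluster(linkage(squareform(dist), method='complete'),
                      t=2, criterion='maxclust')
    # Call the cluster with the higher mean speech score "speech".
    speech_cluster = max((1, 2), key=lambda c: scores[labels == c, 0].mean())
    return labels == speech_cluster  # boolean mask: True = speech segment
```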
Abstract:
TOPIC: a computer-based auditory-visual remediation program for students with developmental dyslexia. AIMS: to verify the efficacy of a computer-based auditory-visual remediation program in students with developmental dyslexia. As specific aims, the study set out to compare the cognitive-linguistic performance of students with developmental dyslexia with that of students who are good readers; to compare the findings of pre- and post-testing assessment procedures in students with dyslexia who did and did not undergo the program; and, finally, to compare the findings of the remediation program in students with dyslexia and good readers who underwent it. METHODS: 20 students participated in this study. Group I (GI) was subdivided into GIe, composed of five students with developmental dyslexia who underwent the program, and GIc, composed of five students with developmental dyslexia who did not. Group II (GII) was subdivided into GIIe, composed of five good readers who underwent the remediation, and GIIc, composed of five good readers who did not. The computer-based auditory-visual remediation program Play-on was used. RESULTS: the results of this study revealed that GI performed worse than GII on auditory processing and phonological awareness skills at pre-testing. However, GIe performed similarly to GII at post-testing, evidencing the efficacy of auditory-visual remediation in students with developmental dyslexia. CONCLUSION: the study demonstrated the efficacy of the auditory-visual remediation program in students with developmental dyslexia.
Abstract:
BACKGROUND Co-speech gestures are part of nonverbal communication during conversations. They either support the verbal message or provide the interlocutor with additional information. Furthermore, as nonverbal cues, they prompt the cooperative process of turn-taking. In the present study, we investigated the influence of co-speech gestures on the perception of dyadic dialogue in aphasic patients. In particular, we analysed the impact of co-speech gestures on gaze direction (towards speaker or listener) and fixation of body parts. We hypothesized that aphasic patients, who have limited verbal comprehension, adapt their visual exploration strategies. METHODS Sixteen aphasic patients and 23 healthy control subjects participated in the study. Visual exploration behaviour was measured by means of a contact-free infrared eye-tracker while subjects were watching videos depicting spontaneous dialogues between two individuals. Cumulative fixation duration and mean fixation duration were calculated for the factors co-speech gesture (present vs. absent), gaze direction (towards the speaker or the listener), and region of interest (ROI), including hands, face, and body. RESULTS Both aphasic patients and healthy controls mainly fixated the speaker's face. We found a significant co-speech gesture × ROI interaction, indicating that the presence of a co-speech gesture encouraged subjects to look at the speaker. Further, there was a significant gaze direction × ROI × group interaction, revealing that aphasic patients showed reduced cumulative fixation duration on the speaker's face compared to healthy controls. CONCLUSION Co-speech gestures guide the observer's attention towards the speaker, the source of semantic input. It is discussed whether an underlying semantic processing deficit or a deficit in integrating audio-visual information may cause aphasic patients to explore the speaker's face less.
Abstract:
This PhD by publication examines selected practice-based audio-visual works made by the author over a ten-year period, placing them in a critical context. Central to the publications, and the focus of the thesis, is an exploration of the role of sound in the creation of dialectic tension between the audio, the visual and the audience. By first analysing a number of texts (films/videos and key writings), the thesis locates the principal issues and debates around the use of audio in artists' moving image practice. From this it is argued that asynchronism, first advocated in 1929 by Pudovkin as a response to the advent of synchronised sound, can be used to articulate audio-visual relationships. Central to asynchronism's application in this paper is a recognition of the propensity for sound and image to adhere and, in visual music, for audio to be literally equated with the visual, often married with a quest for the synaesthetic. These elements can either be used in an illusionist fashion or employed as part of an anti-illusionist strategy for realising dialectic. Using this as a theoretical basis, the paper examines how the publications implement asynchronism, including digital mapping to facilitate innovative reciprocal sound and image combinations, and the asynchronous use of 'found sound' from a range of online sources to reframe the moving image. The synthesis of publications and practice demonstrates that asynchronism can both underpin the creation of dialectic and be an integral component in an audio-visual anti-illusionist methodology.
Abstract:
Grid music systems provide discrete geometric methods for simplified music-making, using spatialised input to construct patterned music on a 2D matrix layout. While they are conceptually simple, grid systems may be layered to enable complex and satisfying musical results. They have been applied at a range of scales, from small portable devices up to larger systems. In this paper we discuss the use of grid music systems in general and present an overview of the HarmonyGrid system we have developed as a new interactive performance system. We discuss a range of issues related to the design and use of larger-scale grid-based interactive performance systems such as the HarmonyGrid.
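As an illustration of the basic grid concept described here (not the HarmonyGrid implementation itself), the following sketch maps rows to pitches and columns to time steps, so that toggled cells become note events; all names and defaults are assumptions.

```python
# Minimal grid-sequencer sketch: a 2D matrix of on/off cells rendered
# to (onset, MIDI note) events. Sizes and base pitch are illustrative.
class GridSequencer:
    def __init__(self, rows=8, cols=16, base_midi_note=60):
        self.grid = [[False] * cols for _ in range(rows)]
        self.base = base_midi_note  # row 0 -> base note, row r -> base + r

    def toggle(self, row, col):
        # Spatialised input: flipping a cell adds/removes a note.
        self.grid[row][col] = not self.grid[row][col]

    def events(self, step_duration=0.25):
        """Yield (onset_seconds, midi_note) for every active cell."""
        for r, row in enumerate(self.grid):
            for c, active in enumerate(row):
                if active:
                    yield (c * step_duration, self.base + r)

# Usage: a simple two-note pattern.
seq = GridSequencer()
seq.toggle(0, 0)   # C4 on the first step
seq.toggle(4, 8)   # E4 halfway through the bar
print(sorted(seq.events()))
```

Layering, as the abstract notes, would amount to running several such grids in parallel and merging their event streams.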
Abstract:
To sustain the ongoing rapid growth of video information, there is an emerging demand for sophisticated content-based video indexing systems. However, current video indexing solutions are still immature and lack any standard. This doctoral research is based on an integrated multi-modal approach to sports video indexing and retrieval. By combining specific features extractable from multiple audio-visual modalities, generic structure and specific events can be detected and classified. During browsing and retrieval, users benefit from the integration of high-level semantics and descriptive mid-level features such as whistles and close-up views of player(s).
Abstract:
The performance of visual speech recognition (VSR) systems is significantly influenced by the accuracy of the visual front-end. Current state-of-the-art VSR systems use off-the-shelf face detectors such as Viola-Jones (VJ), which have limited reliability under changes in illumination and head pose. For a VSR system to perform well under these conditions, an accurate visual front-end is required. This is an important problem to be solved in many practical implementations of audio-visual speech recognition systems, for example in automotive environments for an efficient human-vehicle computer interface. In this paper, we re-examine the current state of the art in VSR by comparing off-the-shelf face detectors with the recently developed Fourier Lucas-Kanade (FLK) image alignment technique. A variety of image alignment and visual speech recognition experiments are performed on a clean dataset as well as on a challenging automotive audio-visual speech dataset. Our results indicate that the FLK image alignment technique can significantly outperform off-the-shelf face detectors, but requires frequent fine-tuning.
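For orientation, here is a sketch of a classic translation-only Lucas-Kanade template-alignment loop in NumPy. The paper's FLK technique operates in the Fourier domain and supports richer warps; this only illustrates the template-tracking idea that replaces per-frame face detection, and every name and parameter is an assumption.

```python
# Illustrative Lucas-Kanade (forward-additive, translation-only) sketch,
# NOT the paper's FLK method. Assumes the patch stays inside the image.
import numpy as np

def lk_translation(template, image, p=(0.0, 0.0), iters=50):
    """Estimate an (x, y) offset aligning `template` to `image`."""
    gy, gx = np.gradient(image.astype(float))  # image gradients
    h, w = template.shape
    for _ in range(iters):
        x0, y0 = int(round(p[0])), int(round(p[1]))
        if x0 < 0 or y0 < 0:
            break
        patch = image[y0:y0 + h, x0:x0 + w].astype(float)
        if patch.shape != template.shape:
            break  # warped patch fell outside the image
        err = (template.astype(float) - patch).ravel()
        # Jacobian of the translation warp: just the image gradients.
        J = np.stack([gx[y0:y0 + h, x0:x0 + w].ravel(),
                      gy[y0:y0 + h, x0:x0 + w].ravel()], axis=1)
        dp, *_ = np.linalg.lstsq(J, err, rcond=None)  # solve J dp = err
        p = (p[0] + dp[0], p[1] + dp[1])
        if np.hypot(dp[0], dp[1]) < 1e-3:
            break  # converged
    return p
```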
Abstract:
The objective of the present study was to understand teachers' perceptions of students' academic stress and other welfare-related issues. A group of 125 secondary and higher secondary school teachers (43 male and 82 female) from five schools located in Kolkata was covered in the study, selected through convenience sampling. Data were collected using a semi-structured questionnaire developed by the first author. Findings revealed that more than half of the teachers (55.8% male and 54.9% female) felt that today's students are not brought up in a child-friendly environment, while an overwhelming number of teachers stated that students face social problems (88.4% male and 96.3% female) which affect their mental health and cause stress (90.7% male and 92.7% female). However, the majority of them (79.1% male and 78% female), irrespective of gender, denied that the teaching methods followed in schools could cause academic stress. The vast majority of the teachers felt that the New Education System in India, i.e., making the Grade X examination (popularly known as the secondary examination) optional, will not be beneficial for students. As far as student motivation is concerned, introducing innovative teaching methods such as project work, field visits, and the use of audio-visual aids in schools was suggested by more than 95% of the teachers. Apart from this, most of the teachers suggested a reward system in the schools, in addition to teachers taking classes seriously and being punctual. Reducing the homework load was also suggested by more than two-fifths of the teachers. Although corporal punishment has declined, it is still practised by some teachers, especially male teachers, in Kolkata. Male and female teachers differed significantly on only two issues (p < .05): applying corporal punishment and the impact of sexual health education. Male teachers apply more corporal punishment than female teachers, and male teachers do not foresee any negative influence of sexual health education.
Abstract:
Spoken term detection (STD) is the task of looking up a spoken term in a large volume of speech segments. In order to provide fast search, speech segments are first indexed into an intermediate representation using speech recognition engines, which provide multiple hypotheses for each segment. Approximate matching techniques are usually applied at the search stage to compensate for the poor performance of automatic speech recognition engines during indexing. Recently, using visual information in addition to audio information has been shown to improve phone recognition performance, particularly in noisy environments. In this paper, we make use of visual information, in the form of the speaker's lip movements, at the indexing stage and investigate its effect on STD performance. In particular, we investigate whether gains in phone recognition accuracy carry through the approximate matching stage to provide similar gains in the final audio-visual STD system over a traditional audio-only approach. We also investigate the effect of using visual information on STD performance in different noise environments.
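A minimal sketch of the approximate-matching stage described in this abstract, assuming segments are indexed as N-best phone-sequence hypotheses; the unit edit costs and the match threshold are illustrative assumptions, not the paper's configuration.

```python
# Hedged sketch: approximate phone-sequence matching for STD.
def edit_distance(a, b):
    """Levenshtein distance between two phone sequences (unit costs)."""
    dp = list(range(len(b) + 1))
    for i, pa in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, pb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,          # deletion
                                     dp[j - 1] + 1,      # insertion
                                     prev + (pa != pb))  # substitution
    return dp[-1]

def search(term_phones, indexed_hyps, max_cost=2):
    """Return (segment index, cost) pairs where the best of the
    segment's N-best hypotheses is within max_cost phone edits."""
    hits = []
    for idx, hypotheses in enumerate(indexed_hyps):
        best = min(edit_distance(term_phones, h) for h in hypotheses)
        if best <= max_cost:
            hits.append((idx, best))
    return sorted(hits, key=lambda x: x[1])

# Query "cat" (/k ae t/) against two segments with N-best hypotheses.
index = [[("k", "ae", "t"), ("g", "ae", "t")], [("d", "ao", "g")]]
print(search(("k", "ae", "t"), index))  # -> [(0, 0)]
```

Under this framing, better audio-visual phone recognition simply yields cleaner hypotheses in `index`, which is how indexing gains could carry through to search.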
Abstract:
These are turbulent times for audio-visual production companies. Radical changes, both inside and outside the organizations, reach across national markets and different genres. For instance, production methods are changing; the demand from audiences and advertisers is changing; power relations between the actors involved in the value chain are changing; and increasing concentration makes the market even more competitive for small independent players. From the perspective of the structure–conduct–performance paradigm (Ramstad, 1997), it is reasonable to expect that these changes at the structural level of the industry will cause production companies to adapt their strategic behaviour. The current challenges for media companies are a combination of rising complexity and uncertainty in the market (Picard, 2004). The increasing complexity can, for instance, be observed in the growing number of market segments and in the continuing trend towards cross-media strategies, where media companies operate in multiple markets and on multiple platforms...