907 results for audio-visual information
Abstract:
The article gives an account of the various microfilming initiatives taken in Malta during the last thirty years. Various archives have managed to microfilm their holdings under co-operation agreements with international societies or manuscript libraries. The advent of digital technology now poses new challenges and opportunities for the archives sector. The idea of a National Memory Project that would try to bridge the different approaches to the preservation of records in Malta's various public, private and ecclesiastical archives is discussed. Technical challenges are highlighted, as are the opportunities that arise from collaboration and active participation in international projects such as the European Visual Archives (EVA) and the SEEDI initiative.
Abstract:
It has been well documented that avoidable traffic accidents occur when motorists miss or ignore traffic signs. With drivers' attention being diverted by distractions such as cell phone conversations, missing traffic signs has become more prevalent. Poor weather and other unfriendly driving conditions also mean that motorists are not always alert enough to see every traffic sign on the road. Besides, most cars do not have any form of traffic assistance. Because of heavy traffic and the proliferation of traffic signs on the roads, there is a need for a system that helps the driver not to miss a traffic sign, in order to reduce the probability of an accident. Since visual information is critical for driving, processed video signals from cameras have been chosen to assist drivers; these inexpensive cameras can be easily mounted on the automobile. The objective of the present investigation and the traffic system development is to recognize traffic signs electronically and alert drivers. For the case study and the system development, five important and critical traffic signs were selected: STOP, NO ENTER, NO RIGHT TURN, NO LEFT TURN, and YIELD. The system was evaluated by processing still pictures taken from public roads, and the recognition results were presented in an analysis table indicating correct and false identifications. The system reached an acceptable recognition rate of 80% for all five traffic signs, with a processing time of about three seconds. The capabilities of MATLAB, VLSI design platforms and coding were used to generate a visual warning that complements the visual driver support system with a Field Programmable Gate Array (FPGA) on a XUP Virtex-II Pro Development System.
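The authors' system is MATLAB/FPGA-based and is not reproduced here. As a rough illustration of a first stage such a camera-based sign detector might use, the following Python/OpenCV sketch flags red-coloured regions as candidate sign locations; the file name, HSV thresholds and area cut-off are illustrative assumptions, not values from the study.

```python
# Minimal sketch of colour-based candidate detection for red-bordered signs.
# Not the authors' MATLAB/FPGA pipeline; thresholds and file name are assumptions.
import cv2
import numpy as np

def find_red_sign_candidates(image_path: str):
    img = cv2.imread(image_path)
    if img is None:
        raise FileNotFoundError(image_path)
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    # Red wraps around the hue axis, so combine two hue ranges.
    lower = cv2.inRange(hsv, np.array([0, 100, 80]), np.array([10, 255, 255]))
    upper = cv2.inRange(hsv, np.array([170, 100, 80]), np.array([180, 255, 255]))
    mask = cv2.bitwise_or(lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    # Keep reasonably large blobs as candidate regions for later shape/template matching.
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 500]

if __name__ == "__main__":
    print(find_red_sign_candidates("road.jpg"))  # hypothetical test image
```

A full recognizer would follow this with shape analysis or template matching per sign class, which is where the per-class recognition rates reported above would come from.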
Abstract:
Symptomatic recovery after acute vestibular neuritis (VN) is variable, with around 50% of patients reporting long-term vestibular symptoms; hence, it is essential to identify factors related to poor clinical outcome. Here we investigated whether excessive reliance on visual input for spatial orientation (visual dependence) was associated with long-term vestibular symptoms following acute VN. Twenty-eight patients with VN and 25 normal control subjects were included. Patients were enrolled at least 6 months after the acute illness. Recovery status was not a criterion for study entry, allowing recruitment of patients with a full range of persistent symptoms. We measured visual dependence with a laptop-based Rod-and-Disk Test and severity of symptoms with the Dizziness Handicap Inventory (DHI). The third of patients showing the worst clinical outcomes (mean DHI score 36–80) had significantly greater visual dependence than normal subjects (6.35° vs. 3.39° error, respectively; p = 0.03). Asymptomatic patients and those with minor residual symptoms did not differ from controls. Visual dependence was associated with high levels of persistent vestibular symptoms after acute VN. Over-reliance on visual information for spatial orientation is thus one characteristic of poorly recovered vestibular neuritis patients. The finding may be clinically useful, given that visual dependence may be modified through rehabilitation desensitization techniques.
Abstract:
The aim of this article is twofold: on the one hand, to explore the European Union's ability to pursue an audiovisual policy directed at Mercosur and to promote the norms of the Convention on the Diversity of Cultural Expressions; on the other, to analyse the impact of the EU's audiovisual policy model on the development of audiovisual cooperation with Mercosur, focusing on the main vectors that shape the Mercosur audiovisual landscape. The text seeks to highlight how and why the EU pursues an audiovisual policy towards that region, and what the aims and limits of its action are. In this regard, it is concerned with understanding how EU audiovisual diplomacy interacts with other actors, such as the governmental actions undertaken by the EU and Mercosur themselves, as well as the practices of the private sector (Hollywood and the large media conglomerates).
Abstract:
For those who are not new to the world of Japanese animation, known mainly as anime, the "dub vs. sub" debate is by no means out of the ordinary, but rather a very heated argument among fans. The study focuses on the differences in the US English version between the two approaches to translating audio-visual media, namely subtitling (official subtitles and fan-made subtitles) and dubbing, in a qualitative context; more precisely, on which of the two approaches can convey the most information from the same audiovisual segment in order to satisfy the needs of the anime audience. In order to draw substantial conclusions, the analysis is conducted on a corpus of one episode from the first season of the popular mid-nineties animated TV series Sailor Moon. The main objective of this research is to analyze the three versions and compare the findings with what anime fans expect each of them to provide, in terms of how culture-specific terms are handled, how accurate the translation is, localization, censorship, and omission. As for the fans' opinions, the study includes a survey on fans' personal preferences when choosing between the official subtitled version, the fan-made subtitles and the dubbed version.
Abstract:
Thesis (Ph.D.)--University of Washington, 2016-08
Abstract:
This thesis is an investigation of structural brain abnormalities, as well as multisensory and unisensory processing deficits, in autistic traits and Autism Spectrum Disorder (ASD). To achieve this, structural and functional magnetic resonance imaging (fMRI) and psychophysical techniques were employed. ASD is a neurodevelopmental condition characterised by social communication and interaction deficits, as well as repetitive patterns of behaviour, interests and activities. These traits are thought to be present in the typical population. The Autism Spectrum Quotient questionnaire (AQ) was developed to assess the prevalence of autistic traits in the general population. Von dem Hagen et al. (2011) revealed a link between AQ and white matter (WM) and grey matter (GM) volume (using voxel-based morphometry). However, their findings revealed no difference in GM in areas associated with social cognition. Cortical thickness (CT) measurements are known to be a more direct measure of cortical morphology than GM volume. Therefore, Chapter 2 investigated the relationship between AQ scores and CT in the same sample of participants. This study showed that AQ scores correlated with CT in the left temporo-occipital junction, left posterior cingulate, right precentral gyrus and bilateral precentral sulcus in a typical population. These areas have previously been associated with structural and functional differences in ASD. The findings thus suggest that, to some extent, autistic traits are reflected in brain structure in the general population. The ability to integrate auditory and visual information is crucial to everyday life, and results are mixed regarding how ASD influences audiovisual integration. To investigate this question, Chapter 3 examined the Temporal Integration Window (TIW), which indicates how precisely sight and sound need to be temporally aligned for a unitary audiovisual event to be perceived. Twenty-six adult males with ASD and 26 age- and IQ-matched typically developed males were presented with flash-beep (BF), point-light drummer, and face-voice (FV) displays with varying degrees of asynchrony and asked to make Synchrony Judgements (SJ) and Temporal Order Judgements (TOJ). The data were analysed by fitting Gaussian functions as well as an Independent Channels Model (ICM; Garcia-Perez & Alcala-Quintana, 2012). Gaussian curve fitting for SJs showed that the ASD group had a wider TIW, but for TOJs no group effect was found. The ICM supported these results, and its parameters indicated that the wider TIW for SJs in the ASD group was not due to sensory processing at the unisensory level, but rather to decreased temporal resolution at the decisional level of combining sensory information. Furthermore, when performing TOJs, the ICM revealed a smaller Point of Subjective Simultaneity (PSS; closer to physical synchrony) in the ASD group than in the TD group. Finding that audiovisual temporal processing is different in ASD encouraged us to investigate the neural correlates of multisensory as well as unisensory processing using fMRI. Therefore, Chapter 4 investigated audiovisual, auditory and visual processing in ASD using simple BF displays and complex, social FV displays. During a block-design experiment, we measured the BOLD signal while 13 adults with ASD and 13 typically developed (TD) age-, sex- and IQ-matched adults were presented with audiovisual, auditory and visual information from BF and FV displays.
Our analyses revealed that processing of the audiovisual as well as the unisensory auditory and visual stimulus conditions in both the BF and FV displays was associated with reduced activation in ASD. The audiovisual, auditory and visual conditions of FV stimuli revealed reduced activation in ASD in regions of the frontal cortex, while BF stimuli revealed reduced activation in the lingual gyri. The inferior parietal gyrus revealed an interaction between the sensory condition of BF stimuli and group. Conjunction analyses revealed smaller regions of the superior temporal cortex (STC) to be audiovisual-sensitive in ASD. Against our predictions, the STC did not reveal any activation differences per se between the two groups. However, a superior frontal area was shown to be sensitive to audiovisual face-voice stimuli in the TD group, but not in the ASD group. Overall, this study indicated differences in brain activity for audiovisual, auditory and visual processing of social and non-social stimuli in individuals with ASD compared to TD individuals. These results contrast with the previous behavioural findings, which suggested different audiovisual integration yet intact auditory and visual processing in ASD. Since our behavioural findings revealed audiovisual temporal processing deficits in ASD during SJ tasks, we investigated the neural correlates of SJ in ASD and TD controls. Similar to Chapter 4, Chapter 5 used fMRI to investigate audiovisual temporal processing in ASD in the same participants as recruited in Chapter 4. BOLD signals were measured while the ASD and TD participants were asked to make SJs on audiovisual displays at different levels of asynchrony: the participant's PSS, audio leading visual information (audio first), and visual leading audio information (visual first). Whereas no effect of group was found with BF displays, increased putamen activation was observed in ASD participants compared to TD participants when making SJs on FV displays. Investigating SJs on audiovisual displays in the bilateral superior temporal gyrus (STG), an area involved in audiovisual integration (see Chapter 4), we found no group differences or interaction between group and level of audiovisual asynchrony. The investigation of the different levels of asynchrony revealed a complex pattern of results, indicating a network of areas more involved in processing PSS than audio-first and visual-first displays, as well as areas responding differently to audio-first compared to visual-first displays. These activation differences between audio-first and visual-first conditions in different brain areas are consistent with the view that audio-leading and visual-leading stimuli are processed differently.
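As a hedged illustration of the Gaussian curve-fitting step mentioned for the synchrony judgement (SJ) data (not the Independent Channels Model itself), a minimal Python sketch is given below; the response proportions and SOA values are entirely made up. The fitted sigma is one common way to index the width of the temporal integration window, and the fitted centre approximates the PSS.

```python
# Illustrative Gaussian fit to synchrony-judgement proportions as a function of
# audiovisual asynchrony (SOA). Data are hypothetical, not from the thesis.
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amplitude, pss, sigma):
    """Proportion of 'synchronous' responses as a function of SOA (ms)."""
    return amplitude * np.exp(-((soa - pss) ** 2) / (2 * sigma ** 2))

soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])            # audio-leading negative
p_sync = np.array([0.05, 0.20, 0.55, 0.85, 0.95, 0.90, 0.60, 0.25, 0.10])   # hypothetical proportions

params, _ = curve_fit(gaussian, soas, p_sync, p0=[1.0, 0.0, 150.0])
amplitude, pss, sigma = params
print(f"PSS = {pss:.1f} ms, width (sigma) = {sigma:.1f} ms")  # wider sigma ~ wider TIW
```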
Abstract:
The production and perception of music is a multimodal activity involving auditory, visual and conceptual processing, integrating these with prior knowledge and environmental experience. Musicians use expressive physical nuances to highlight salient features of the score. The question arises within the literature as to whether performers' non-technical, non-sound-producing movements may be communicatively meaningful and convey important structural information to audience members and co-performers. In the light of previous performance research (Vines et al., 2006; Wanderley, 2002; Davidson, 1993), and considering findings in co-speech gesture research and in auditory and audio-visual neuroscience, this thesis examines the nature of those movements not directly necessary for the production of sound, and their particular influence on audience perception. In the current research, 3D performance analysis is conducted using the Vicon 12-camera system and Nexus data-processing software. Performance gestures are identified as repeated patterns of motion relating to musical structure, which not only express phrasing and structural hierarchy but are consistently and accurately interpreted as such by a perceiving audience. Gestural characteristics are analysed across performers and performance styles using two Chopin preludes selected for their diverse yet comparable structures (Opus 28:7 and 6). Effects of presentation mode (visual-only, auditory-only, audiovisual, full- and point-light) and viewing conditions on perceptual judgements are explored. This thesis argues that while performance style is highly idiosyncratic, piano performers reliably generate structural gestures through repeated patterns of upper-body movement. The shapes and locations of phrasing motions are identified as particular to the sample of performers investigated. Findings demonstrate that, despite the personalised nature of the gestures, performers use increased velocity of movement to emphasise musical structure, and that observers accurately and consistently locate phrasing junctures where these patterns and variations in motion magnitude, shape and velocity occur. By viewing performance motions in polar (spherical) rather than Cartesian coordinate space it is possible to get mathematically closer to the movement generated by each of the nine performers, revealing distinct patterns of motion relating to phrasing structures, regardless of intended performance style. These patterns are highly individualised to each performer and performed piece. Instantaneous velocity analysis indicates a right-directed bias of performance motion variation at salient structural features within individual performances. Perceptual analyses demonstrate that audience members are able to accurately and effectively detect phrasing structure from performance motion alone. This ability persists even for degraded point-light performances, where all extraneous environmental information has been removed. The relative contributions of audio, visual and audiovisual judgements demonstrate that the visual component of a performance has a positive impact on the overall accuracy of phrasing judgements, indicating that receivers are most effective in their recognition of structural segmentations when they can both see and hear a performance. Observers appear to make use of rapid online judgement heuristics, adjusting response processes quickly to adapt and perform accurately across multiple modes of presentation and performance style.
In line with existing theories in the literature, it is proposed that this processing ability may be related to the cognitive and perceptual interpretation of syntax within gestural communication during social interaction and speech. The findings of this research may have future impact on performance pedagogy, computational analysis and performance research, as well as potentially influencing future investigations of the cognitive aspects of musical and gestural understanding.
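Two of the analysis steps described above, re-expressing Cartesian marker trajectories in polar (spherical) coordinates and computing instantaneous velocity, can be sketched in a few lines of Python; this is an illustration with a synthetic trajectory, not the Vicon/Nexus pipeline used in the thesis.

```python
# Illustrative spherical-coordinate conversion and frame-to-frame speed estimation
# for an (N, 3) motion-capture trajectory. The trajectory below is synthetic.
import numpy as np

def to_spherical(xyz: np.ndarray) -> np.ndarray:
    """Convert an (N, 3) Cartesian trajectory to (radius, azimuth, elevation)."""
    x, y, z = xyz[:, 0], xyz[:, 1], xyz[:, 2]
    r = np.sqrt(x**2 + y**2 + z**2)
    azimuth = np.arctan2(y, x)
    elevation = np.arcsin(np.clip(z / np.where(r == 0, 1, r), -1, 1))
    return np.column_stack([r, azimuth, elevation])

def instantaneous_speed(xyz: np.ndarray, fps: float) -> np.ndarray:
    """Frame-to-frame speed (units per second) from an (N, 3) trajectory."""
    return np.linalg.norm(np.diff(xyz, axis=0), axis=1) * fps

# Synthetic wrist-like marker moving along an arc, sampled at a nominal 120 Hz.
t = np.linspace(0, 2 * np.pi, 240)
trajectory = np.column_stack([np.cos(t), np.sin(t), 0.1 * t])
print(to_spherical(trajectory)[:3])
print(instantaneous_speed(trajectory, fps=120.0)[:3])
```

Peaks in such a speed profile around phrase boundaries are the kind of feature the velocity analyses above relate to perceived phrasing junctures.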
Abstract:
Motor learning is based on motor perception and emergent perceptual-motor representations. Much behavioral research has addressed single perceptual modalities, but over the last two decades the contribution of multimodal perception to motor behavior has been increasingly recognized. A growing number of studies indicates an enhanced impact of multimodal stimuli on motor perception, motor control and motor learning, in terms of better precision and higher reliability of the related actions. Behavioral research is supported by neurophysiological data revealing that multisensory integration supports motor control and learning. However, the overwhelming part of both research lines is dedicated to basic research. Apart from research in the domains of music, dance and motor rehabilitation, there is almost no evidence for an enhanced effectiveness of multisensory information in the learning of gross motor skills. To reduce this gap, movement sonification is used here in applied research on motor learning in sports. Based on current knowledge of the multimodal organization of the perceptual system, we generate additional real-time movement information suitable for integration with perceptual feedback streams of the visual and proprioceptive modalities. With ongoing training, synchronously processed auditory information should be integrated from the outset into the emerging internal models, enhancing the efficacy of motor learning. This is achieved by a direct mapping of kinematic and dynamic motion parameters to electronic sounds, resulting in continuous auditory and convergent audiovisual or audio-proprioceptive stimulus arrays. In sharp contrast to other approaches that use acoustic information as error feedback in motor learning settings, we try to generate additional movement information that is suitable for accelerating and enhancing adequate sensorimotor representations and can be processed below the level of consciousness. In the experimental setting, participants were asked to learn a closed motor skill (technique acquisition in indoor rowing). One group was treated with visual information and two groups with audiovisual information (sonification vs. natural sounds). Learning became evident and remained stable for all three groups. Participants treated with additional movement sonification showed better performance than both other groups. The results indicate that movement sonification enhances motor learning of a complex gross motor skill, even exceeding the acoustic rhythmic effects on motor learning that are usually expected.
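The "direct mapping of kinematic and dynamic motion parameters to electronic sounds" can be illustrated with a minimal parameter-mapping sketch in Python; the synthetic speed profile, the 220-880 Hz pitch range and the frame length are assumptions for illustration only, not the sonification used in the rowing study.

```python
# Illustrative movement sonification: map a kinematic parameter (speed) onto pitch.
import numpy as np

def sonify_speed(speed: np.ndarray, sample_rate: int = 44100, frame_dur: float = 0.05) -> np.ndarray:
    """Return an audio signal whose pitch tracks the speed profile (one tone frame per sample)."""
    f_min, f_max = 220.0, 880.0                               # arbitrary pitch range (Hz)
    span = float(speed.max() - speed.min()) or 1.0
    freqs = f_min + (speed - speed.min()) / span * (f_max - f_min)
    samples_per_frame = int(sample_rate * frame_dur)
    t = np.arange(samples_per_frame) / sample_rate
    phase, frames = 0.0, []
    for f in freqs:                                           # phase-continuous sine synthesis
        frames.append(np.sin(phase + 2 * np.pi * f * t))
        phase += 2 * np.pi * f * samples_per_frame / sample_rate
    return np.concatenate(frames)

speeds = np.abs(np.sin(np.linspace(0, np.pi, 40)))            # synthetic stroke-like speed profile
print(sonify_speed(speeds).shape)                             # audio samples, e.g. for writing to a WAV file
```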
Abstract:
The aim of this research is to determine the quantitative and qualitative contribution of audiovisual documentation to the information that television broadcasts daily. The time frame of the field research covers the years 1993 and 1994, within a geographic frame comprising the channels broadcasting in the Spanish state. The study starts from a theoretical approach to journalistic documentation, audiovisual documentation and mass communication studies, and carries out field research in three areas: 1) analysis of the daily news programmes of six television channels (ETB, TVE, Canal Sur, TV3, Antena 3 and Canal+), through three independent samples; 2) analysis of the requests for audiovisual documentation made by the newsrooms of news programmes to the documentation services; 3) a study of the functions, tasks, structure and organisation of television documentation services, based on surveys, visits and interviews. An annex provides the detailed analysis of 620 news items, as well as information on the documentation centres. The research concludes that audiovisual documentation is one of the constituent elements of current-affairs information, both for its quantitative presence (more than 40% of broadcast news items use it) and for its qualitative contribution and widespread use across all news sections. The conclusions indicate that the importance of a news item has a positive effect on the use of audiovisual documentation, summarise the functions of this documentation and the specific characteristics of its use, confirm the feedback character of news documentation in television, and point to a use of this material as purely visual documentation. They also affirm that audiovisual documentation, in addition to contributing to production, enhances the quality of news programmes insofar as it facilitates the task of offering more complete and contextualised information.
Abstract:
This thesis undertakes to re-conceptualise our new audio-visual environment and our experience of it. In the digital era of the generalised dissemination of moving images, we delimit a category of images that we regard as the most likely to have an impact on human development. We call them synchrono-photo-temporalised image-sounds. More specifically, we seek to bring to light their power of affect and control by showing that they have a definite influence on the process of individuation, an influence greatly facilitated by the structural isotopy that exists between the flow of consciousness and the flow in which these images unfold. Drawing on the work of Bernard Stiegler, we also note the important role that attention and memory play in the process of individuation. Our reflection as a whole makes us realise the extent to which the current Quebec education system falls short of its task of civic education by failing to provide adequate teaching about moving images.
Abstract:
Signifying road-related events with warnings can be highly beneficial, especially when immediate attention is needed. This thesis describes how modality, urgency and situation can influence driver responses to multimodal displays used as warnings. These displays utilise all combinations of the audio, visual and tactile modalities, reflecting different urgency levels. In this way, a new rich set of cues is designed, conveying information multimodally to enhance reactions during driving, which is a highly visual task. The importance of the signified events to driving is reflected in the warnings, and safety-critical or non-critical situations are communicated through the cues. Novel warning designs are considered, using both abstract displays, with no semantic association to the signified event, and language-based ones using speech. These two cue designs are compared to discover their strengths and weaknesses as car alerts. The situations in which the new cues are delivered are varied by simulating both critical and non-critical events and both manual and autonomous car scenarios. A novel set of guidelines for using multimodal driver displays is finally provided, considering the modalities utilised, the urgency signified, and the situation simulated.
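To make "all combinations of the audio, visual and tactile modalities, reflecting different urgency levels" concrete, the following small enumeration sketches the cue design space; the three urgency labels are assumptions for illustration, not the thesis's actual warning set.

```python
# Enumerate every non-empty modality combination crossed with illustrative urgency levels.
from itertools import combinations

MODALITIES = ("audio", "visual", "tactile")
URGENCY_LEVELS = ("low", "medium", "high")   # assumed labels, for illustration only

cues = [
    (combo, urgency)
    for r in range(1, len(MODALITIES) + 1)
    for combo in combinations(MODALITIES, r)
    for urgency in URGENCY_LEVELS
]
print(len(cues))                             # 7 modality combinations x 3 urgency levels = 21 cues
for combo, urgency in cues[:3]:
    print("+".join(combo), urgency)
```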
Abstract:
This research describes how YouTube, through its strategies and marketing plan, has become the number-one platform for a variety of film clips, music videos, video blogs and more, becoming popular as a social network. Social networks have developed a new way of communicating and are a fundamental tool for the creation of collective knowledge; this is the case of YouTube, an audiovisual content search engine and social network that allows millions of users around the world to connect. This platform breaks down the cultural and communication barriers that previously existed in the absence of the internet. In this sense, the aim is to analyse YouTube from an administrative perspective focused on the area of marketing.
Abstract:
The neurons in the primary visual cortex that respond to the orientation of visual stimuli were discovered in the late 1950s (Hubel, D.H. & Wiesel, T.N. 1959. J. Physiol. 148: 574-591), but how they achieve this response is poorly understood. Recently, experiments have demonstrated that the visual cortex may use the image-processing techniques of cross- or auto-correlation to detect the streaks in random dot patterns (Barlow, H. & Berry, D.L. 2010. Proc. R. Soc. B. 278: 2069-2075). These experiments made use of sinusoidally modulated random dot patterns and of the so-called Glass patterns, in which randomly positioned dot pairs are oriented in a parallel configuration (Glass, L. 1969. Nature. 223: 578-580). The image processing used by the visual cortex could be inferred from how the threshold for detecting these patterns in the presence of random noise varied as a function of the dot density in the patterns. In the present study, detection thresholds have been measured for other types of patterns, including circular, hyperbolic, spiral and radial Glass patterns, and an indication of the type of image processing (cross- or auto-correlation) performed by the visual cortex is presented. It is hoped that this study will thereby contribute to an understanding of what David Marr called the 'computational goal' of the primary visual cortex (Marr, D. 1982. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. New York: Freeman).
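The correlation idea discussed in this abstract can be illustrated with a short Python sketch that generates a translational Glass pattern and inspects the autocorrelation of the dot image, which peaks at the dot-pair offset; this is an illustration only, not the psychophysical stimuli or procedure used in the study, and the pattern size, dot count and offset are assumed values.

```python
# Generate a translational Glass pattern and locate the autocorrelation peak at the pair offset.
import numpy as np

rng = np.random.default_rng(0)
size, n_pairs, offset = 128, 400, (4, 2)          # offset defines the pair orientation

image = np.zeros((size, size))
xy = rng.integers(0, size, size=(n_pairs, 2))
partners = (xy + offset) % size                   # wrap partner dots around the edges
image[xy[:, 0], xy[:, 1]] = 1
image[partners[:, 0], partners[:, 1]] = 1

# Autocorrelation via the Wiener-Khinchin theorem (inverse FFT of the power spectrum).
spectrum = np.fft.fft2(image - image.mean())
autocorr = np.fft.ifft2(np.abs(spectrum) ** 2).real

# Suppress the zero-lag peak and find the next-largest lag.
secondary = np.where(autocorr < autocorr.max(), autocorr, -np.inf)
peak = np.unravel_index(np.argmax(secondary), autocorr.shape)
print("secondary autocorrelation peak at shift:", peak)  # expected at (4, 2) or its mirror (124, 126)
```

A cross-correlation account would instead compare the image against stored oriented templates; the autocorrelation version shown here is the simpler of the two computations the abstract contrasts.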