854 results for visual information
Abstract:
People possess different sensory modalities to detect, interpret, and efficiently act upon various events in a complex and dynamic environment (Fetsch, DeAngelis, & Angelaki, 2013). Much empirical work has been done to understand the interplay of modalities (e.g., audio-visual interactions; see Calvert, Spence, & Stein, 2004). On the one hand, integration of multimodal input as a functional principle of the brain enables the versatile and coherent perception of the environment (Lewkowicz & Ghazanfar, 2009). On the other hand, sensory integration does not necessarily mean that input from all modalities is weighted equally (Ernst, 2008). Rather, when two or more modalities are stimulated concurrently, one modality often dominates another. Studies 1 and 2 of the dissertation addressed the developmental trajectory of sensory dominance. In both studies, 6-year-olds, 9-year-olds, and adults were tested in order to examine sensory (audio-visual) dominance across different age groups. In Study 3, sensory dominance was put into an applied context by examining verbal and visual overshadowing effects among 4- to 6-year-olds performing a face recognition task. The results of Studies 1 and 2 support the default auditory dominance in young children proposed by Napolitano and Sloutsky (2004), which persists up to 6 years of age. For 9-year-olds, results on privileged modality processing were inconsistent: whereas visual dominance was revealed in Study 1, privileged auditory processing was revealed in Study 2. Among adults, visual dominance was observed in Study 1, as has also been demonstrated in preceding studies (see Spence, Parise, & Chen, 2012), whereas no sensory dominance was revealed in Study 2. Potential explanations are discussed. Study 3 addressed verbal and visual overshadowing effects in 4- to 6-year-olds.
The aim was to examine whether verbalization (i.e., verbally describing a previously seen face) or visualization (i.e., drawing the seen face) might affect later face recognition. No effect of visualization on recognition accuracy was revealed. Instead of a verbal overshadowing effect, a verbal facilitation effect occurred. Moreover, verbal intelligence was a significant predictor of recognition accuracy in the verbalization group but not in the control group. This suggests that strengthening verbal intelligence in children can pay off in non-verbal domains as well, which might have educational implications.
Abstract:
Interventions targeting cognitive enhancement are of growing interest in many fields, including neuropsychology. Although numerous methods exist for maximizing a person's cognitive potential, they are rarely supported by scientific research. This thesis first briefly reviews the state of cognitive enhancement interventions. It describes the weaknesses observed in these practices and, accordingly, establishes a standard model against which the various cognitive enhancement techniques could and should be evaluated. A research study is then presented that considers a new cognitive enhancement tool, a perceptual-cognitive training task: 3-dimensional multiple object tracking (3D-MOT). It examines the current evidence for 3D-MOT against the proposed standard model. The results of this project demonstrate gains in attention, visual working memory, and speed of information processing. This study represents the first step toward establishing 3D-MOT as a cognitive enhancement tool.
Abstract:
During locomotion, retinal flow, gaze angle, and vestibular information can contribute to one's perception of self-motion. Their respective roles were investigated during active steering: Retinal flow and gaze angle were biased by altering the visual information during computer-simulated locomotion, and vestibular information was controlled through use of a motorized chair that rotated the participant around his or her vertical axis. Chair rotation was made appropriate for the steering response of the participant or made inappropriate by rotating a proportion of the veridical amount. Large steering errors resulted from selective manipulation of retinal flow and gaze angle, and the pattern of errors provided strong evidence for an additive model of combination. Vestibular information had little or no effect on steering performance, suggesting that vestibular signals are not integrated with visual information for the control of steering at these speeds.
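The additive model of cue combination the authors argue for can be illustrated with a toy formula: the perceived heading is treated as a weighted sum of the two visual cues, so biasing one cue shifts the percept by a proportional amount. A hypothetical sketch (function names, weights, and values are illustrative, not taken from the study):

```python
# Minimal sketch of an additive cue-combination model for heading
# perception: each cue contributes a signed heading estimate and the
# percept is their weighted sum. All numbers are illustrative.

def additive_heading(retinal_flow_deg, gaze_angle_deg,
                     w_flow=0.5, w_gaze=0.5):
    """Combine two heading cues additively (weights are illustrative)."""
    return w_flow * retinal_flow_deg + w_gaze * gaze_angle_deg

# Biasing one cue shifts the combined percept proportionally, which is
# the signature of additivity reported in the abstract.
baseline = additive_heading(10.0, 10.0)        # unbiased cues
biased   = additive_heading(10.0 + 4.0, 10.0)  # flow biased by 4 deg
shift = biased - baseline                      # 0.5 * 4.0 = 2.0
```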
Abstract:
Fundação de Amparo à Pesquisa do Estado de São Paulo (FAPESP)
Abstract:
Acoustically, car cabins are extremely noisy and, as a consequence, existing audio-only speech recognition systems for voice-based control of vehicle functions, such as GPS-based navigation, perform poorly. Audio-only speech recognition systems fail to make use of the visual modality of speech (e.g., lip movements). As the visual modality is immune to acoustic noise, utilising this visual information in conjunction with an audio-only speech recognition system has the potential to improve the accuracy of the system. The field of recognising speech using both auditory and visual inputs is known as Audio Visual Speech Recognition (AVSR). Research in the AVSR field has been ongoing for the past twenty-five years, with notable progress being made. However, the practical deployment of AVSR systems in a variety of real-world applications has not yet emerged, mainly because most research to date has neglected to address variabilities in the visual domain, such as illumination and viewpoint, in the design of the visual front-end of the AVSR system. In this paper we present an AVSR system in a real-world car environment using the AVICAR database [1], a publicly available in-car database, and we show that using visual speech in conjunction with the audio modality improves the robustness and effectiveness of voice-only recognition systems in car cabin environments.
Abstract:
Whilst a variety of studies have appeared over the last decade addressing the gap between the potential promised by computers and the reality experienced in the classroom by teachers and students, few have specifically addressed the situation as it pertains to the visual arts classroom. The aim of this study was to explore the reality of classroom computer use for three visual arts high school teachers and determine how computer technology might enrich visual arts teaching and learning. An action research approach was employed to enable the researcher to understand the situation from the teachers' points of view while contributing to their professional practice. The wider social context surrounding this study is characterised by an increase in visual communications brought about by rapid advances in computer technology. The powerful combination of visual imagery and computer technology is illustrated by continuing developments in the print, film and television industries. In particular, the recent growth of interactive multimedia epitomises this combination and is significant to this study as it represents a new form of publishing of great interest to educators and artists alike. In this social context, visual arts education has a significant role to play. By cultivating a critical awareness of the implications of technology use and promoting a creative approach to the application of computer technology within the visual arts, visual arts education is in a position to provide an essential service to students who will leave high school to participate in a visual information age as both consumers and producers.
Abstract:
Investigates the use of temporal lip information, in conjunction with speech information, for robust, text-dependent speaker identification. We propose that significant speaker-dependent information can be obtained from moving lips, enabling speaker recognition systems to be highly robust in the presence of noise. The fusion structure for the audio and visual information is based on multi-stream hidden Markov models (MSHMMs), with audio and visual features forming two independent data streams. Recent work with multi-modal MSHMMs has been performed successfully for the task of speech recognition. The use of temporal lip information for speaker identification has been examined previously (T.J. Wark et al., 1998); however, this was restricted to output fusion via single-stream HMMs. We present an extension to this previous work and show that an MSHMM is a valid structure for multi-modal speaker identification.
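The multi-stream structure described above combines the audio and visual streams at the likelihood level. A minimal sketch of that decision rule, with hypothetical stream weights and illustrative log-likelihood numbers standing in for real HMM forward-pass scores:

```python
# Hypothetical sketch of the decision rule in a multi-stream HMM
# (MSHMM): audio and visual features form independent streams, and
# per-stream log-likelihoods are combined with exponent weights before
# choosing a speaker. A real system would obtain the log-likelihoods
# from HMM forward passes; here they are made-up numbers.

def mshmm_score(audio_loglik, visual_loglik, w_audio=0.7, w_visual=0.3):
    """Weighted combination of independent stream log-likelihoods."""
    return w_audio * audio_loglik + w_visual * visual_loglik

def identify(scores_per_speaker):
    """Pick the speaker whose model gives the highest combined score."""
    return max(scores_per_speaker, key=scores_per_speaker.get)

scores = {
    "spk1": mshmm_score(-120.0, -80.0),   # strong visual, weak audio
    "spk2": mshmm_score(-100.0, -95.0),   # better audio match
}
best = identify(scores)  # "spk2" (-98.5 vs. spk1's -108.0)
```

Raising `w_visual` relative to `w_audio` is the usual way such systems lean on the lip stream when the acoustic channel is noisy.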
Abstract:
Visual activity detection of lip movements can be used to overcome the poor performance of voice activity detection based solely on the audio domain, particularly in noisy acoustic conditions. However, most of the research conducted in visual voice activity detection (VVAD) has neglected to address variabilities in the visual domain, such as viewpoint variation. In this paper we investigate the effectiveness of the visual information from the speaker’s frontal and profile views (i.e., left and right side views) for the task of VVAD. As far as we are aware, our work constitutes the first real attempt to study this problem. We describe our visual front-end approach and the Gaussian mixture model (GMM) based VVAD framework, and report experimental results using the freely available CUAVE database. The experimental results show that VVAD is indeed possible from profile views, and we give a quantitative comparison of VVAD based on frontal and profile views. The results presented are useful in the development of multi-modal Human Machine Interaction (HMI) using a single camera, where the speaker’s face may not always be frontal.
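The GMM-based VVAD framework mentioned above can be caricatured as a two-model likelihood test: a frame of lip features is labelled "speech" when the speech model scores higher than the non-speech model. A toy sketch, using single diagonal Gaussians in place of full mixtures; all feature values and model parameters are illustrative, not from the paper:

```python
import numpy as np

# Toy two-model likelihood test for visual voice activity detection.
# Each class (speech / non-speech) is modelled here by one diagonal
# Gaussian for brevity; a real GMM-based VVAD would use mixtures
# trained on lip-region features.

def log_gauss(x, mean, var):
    """Log-density of a diagonal Gaussian at feature vector x."""
    x, mean, var = map(np.asarray, (x, mean, var))
    return float(-0.5 * np.sum(np.log(2 * np.pi * var)
                               + (x - mean) ** 2 / var))

def vvad(frame_feats, speech_model, nonspeech_model):
    """Return True if the frame is classified as voice activity."""
    return (log_gauss(frame_feats, *speech_model)
            > log_gauss(frame_feats, *nonspeech_model))

speech  = ([2.0, 2.0], [1.0, 1.0])   # illustrative (mean, variance)
silence = ([0.0, 0.0], [1.0, 1.0])
active = vvad([1.8, 2.1], speech, silence)   # near the speech mean
```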
Abstract:
This article presents a visual servoing system to follow a 3D moving object with a Micro Unmanned Aerial Vehicle (MUAV). The presented control strategy is based only on the visual information given by an adaptive tracking method based on colour information. A visual fuzzy system has been developed for servoing the camera situated on a rotary-wing MUAV, which also considers its own dynamics. This system focuses on continuously following a moving aerial target, keeping it at a fixed safe distance and centred in the image plane. The algorithm is validated in real flights in outdoor scenarios, showing the robustness of the proposed system against wind perturbations, illumination and weather changes, among others. The obtained results indicate that the proposed algorithm is suitable for complex control tasks, such as object following and pursuit or flying in formation, as well as for indoor navigation.
Abstract:
In this paper we use a sequence-based visual localization algorithm to reveal surprising answers to the question of how much visual information is actually needed to conduct effective navigation. The algorithm actively searches for the best local image matches within a sliding window of short route segments or 'sub-routes', and matches sub-routes by searching for coherent sequences of local image matches. In contrast to many existing techniques, the technique requires no pre-training or camera parameter calibration. We compare the algorithm's performance to the state-of-the-art FAB-MAP 2.0 algorithm on a 70 km benchmark dataset. Performance matches or exceeds the state-of-the-art feature-based localization technique using images as small as 4 pixels, fields of view reduced by a factor of 250, and pixel bit depths reduced to 2 bits. We present further results demonstrating the system localizing in an office environment with near 100% precision using two 7-bit Lego light sensors, as well as using 16- and 32-pixel images from a motorbike race and a mountain rally car stage. By demonstrating how little image information is required to achieve localization along a route, we hope to stimulate future 'low fidelity' approaches to visual navigation that complement probabilistic feature-based techniques.
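The sub-route matching idea can be sketched as a frame-difference matrix plus a search over coherent alignments: single-frame matches are noisy with tiny images, but a whole sequence of consistent matches is discriminative. A minimal sketch with a fixed-velocity alignment; all names and data are illustrative, not taken from the paper or from FAB-MAP:

```python
import numpy as np

# Sequence-based matching sketch: compare heavily downsampled query
# images against a database route, then score candidate alignments by
# summing differences along a coherent diagonal rather than trusting
# any single-frame match.

def difference_matrix(query, database):
    """Sum-of-absolute-differences between every query/database pair."""
    q = np.asarray(query, float)[:, None, :]
    d = np.asarray(database, float)[None, :, :]
    return np.abs(q - d).sum(axis=2)

def best_sequence_start(diff, seq_len):
    """Best database start index for a straight, velocity-1 sequence."""
    n_q, n_d = diff.shape
    costs = [sum(diff[t, s + t] for t in range(seq_len))
             for s in range(n_d - seq_len + 1)]
    return int(np.argmin(costs))

# Each "image" is a 2-pixel vector, in the spirit of the paper's
# extreme-downsampling experiments (values are made up).
database = [[0, 0], [1, 1], [2, 2], [3, 3], [4, 4]]
query = [[1.1, 0.9], [2.0, 2.1], [3.1, 2.9]]  # matches db frames 1..3
start = best_sequence_start(difference_matrix(query, database), 3)
```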
Abstract:
This research investigated the prevalence of vision disorders in Queensland Indigenous primary school children, creating the first comprehensive visual profile of Indigenous children. Findings showed reduced convergence ability and reduced visual information processing skills were more common in Indigenous compared to non-Indigenous children. Reduced visual information processing skills were also associated with reduced reading outcomes in both groups of children. As early detection of visual disorders is important, the research also reviewed the delivery of screening programs across Queensland and proposed a model for improved coordination and service delivery of vision screening to Queensland school children.
Abstract:
Review question/objective: What are the most effective information-sharing strategies used to reduce anxiety in families of patients undergoing elective surgery? This review seeks to synthesize the best available evidence in relation to the most effective information-sharing interventions to reduce anxiety for families waiting for patients undergoing an elective surgical procedure. The specific objective is to review the evidence on the effectiveness of interventions designed to reduce the anxiety of families waiting whilst their loved one undergoes a surgical intervention. A variety of interventions exist, including surgical nurse liaison services, intraoperative reporting either face-to-face or by telephone, informational cards, visual information screens, and intraoperative paging devices for families.
Inclusion criteria. Types of participants: All studies of family members over 18 years of age waiting for patients undergoing an elective surgical procedure will be included, including those waiting for both adult and paediatric patients. Studies of families waiting for other patient populations, e.g. emergency surgery, chemotherapy or intensive care patients, will be excluded.
Types of intervention(s)/phenomena of interest: All information-sharing interventions for families of patients undergoing an elective surgical procedure will be included, including but not limited to: surgical nurse liaison services, in-person intraoperative reporting, visual information screens, paging devices, informational cards and telephone delivery of intraoperative progress reports. Interventions that take place during the intraoperative phase of care only will be included in the review. Preadmission information-sharing interventions will be excluded.
Types of outcomes: The outcomes of interest include: Primary outcome: the level of anxiety amongst family members or close relatives whilst waiting for patients undergoing surgery, as measured by a validated instrument (such as the S-Anxiety portion of the State-Trait Anxiety Inventory).4 Secondary outcomes: family satisfaction and other measurements that may be considered indicators of stress and anxiety, such as mean arterial pressure (MAP) and heart rate.
Abstract:
This paper presents a 100 Hz monocular position based visual servoing system to control a quadrotor flying in close proximity to vertical structures approximating a narrow, locally linear shape. Assuming the object boundaries are represented by parallel vertical lines in the image, detection and tracking is achieved using Plücker line representation and a line tracker. The visual information is fused with IMU data in an EKF framework to provide fast and accurate state estimation. A nested control design provides position and velocity control with respect to the object. Our approach is aimed at high performance on-board control for applications allowing only small error margins and without a motion capture system, as required for real world infrastructure inspection. Simulated and ground-truthed experimental results are presented.
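The EKF-based fusion step can be caricatured in one dimension: a fast inertial prediction followed by a correction from the visual position estimate. This scalar sketch with made-up noise values only illustrates the predict/update structure, not the paper's actual filter over the full quadrotor state:

```python
# Minimal 1-D Kalman-filter sketch of visual/inertial fusion: an
# IMU-driven prediction step followed by a correction from a (slower)
# visual position measurement. All noise values are illustrative.

def predict(x, P, vel, dt, q):
    """Propagate the position estimate using an inertial velocity."""
    return x + vel * dt, P + q        # move state forward, grow variance

def update(x, P, z, r):
    """Correct the prediction with a visual position measurement z."""
    K = P / (P + r)                   # Kalman gain: prior vs. measurement
    return x + K * (z - x), (1.0 - K) * P

x, P = 0.0, 1.0                       # initial position and variance
x, P = predict(x, P, vel=1.0, dt=0.1, q=0.01)   # IMU step -> x = 0.1
x, P = update(x, P, z=0.12, r=0.02)             # pull toward vision
```

Because the visual measurement noise `r` is much smaller than the prior variance `P`, the gain is close to 1 and the estimate moves almost all the way to the visual fix while the variance collapses, which is the behaviour that gives such systems their fast, accurate state estimates.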
Abstract:
Visual information in the form of lip movements of the speaker has been shown to improve the performance of speech recognition and search applications. In our previous work, we proposed cross database training of synchronous hidden Markov models (SHMMs) to make use of external large and publicly available audio databases in addition to the relatively small given audio visual database. In this work, the cross database training approach is improved by performing an additional audio adaptation step, which enables audio visual SHMMs to benefit from audio observations of the external audio models before adding visual modality to them. The proposed approach outperforms the baseline cross database training approach in clean and noisy environments in terms of phone recognition accuracy as well as spoken term detection (STD) accuracy.