Abstract:
Objective: The aim of this paper is to bridge the gap between the corpus of imitation research and video-based intervention (VBI) research, to consider the impact that imitation skills may have on VBI outcomes, and to highlight potential areas for improving efficacy.
Method: A review of the imitation literature was conducted, focusing on imitation skill deficits in children with autism, followed by a critical review of the video modelling literature, focusing on pre-intervention assessment of imitation skills and the impact that imitation deficits may have on VBI outcomes.
Results: Children with autism have specific imitation deficits, which may affect VBI outcomes. Imitation training, or procedural modifications made to the videos, may compensate for these deficits.
Conclusions: Only six VBI studies have included pre-intervention imitation assessments, and these used an assortment of imitation measures. More research is required to develop a standardised, multi-dimensional imitation assessment battery that can better inform VBI.
Abstract:
This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subject to unknown, time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurement of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements, and can be used alongside, many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances, with corruption added to the video stream, the audio stream, or both, using a variety of noise types (e.g., MPEG-4 video compression) and levels. The experiments show that this approach gives excellent performance compared with another well-known dynamic stream weighting approach and with any fixed-weight integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams, and also according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
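The abstract does not give the MWSP formulation itself, so the following is only a minimal Python sketch of the general idea it describes: choosing a per-frame stream weight that maximises the combined (weighted) class posterior, with no noise measurement required. The function names, the weight grid, and the uniform class prior are all assumptions for illustration, not the authors' method.

```python
import numpy as np

def combined_log_posterior(log_lik_a, log_lik_v, lam):
    """Combine per-class audio/video log-likelihoods with stream weights
    lam and (1 - lam), then normalise to a log-posterior over classes
    (a uniform class prior is assumed)."""
    joint = lam * log_lik_a + (1.0 - lam) * log_lik_v
    return joint - np.logaddexp.reduce(joint)  # log-sum-exp normalisation

def frame_stream_weight(log_lik_a, log_lik_v, grid=np.linspace(0.0, 1.0, 21)):
    """Pick the audio weight that maximises the largest class posterior for
    this frame -- a frame-by-frame weight needing no explicit noise estimate."""
    best_lam, best_post = 0.5, -np.inf
    for lam in grid:
        post = combined_log_posterior(log_lik_a, log_lik_v, lam).max()
        if post > best_post:
            best_lam, best_post = lam, post
    return best_lam

# Toy example: the audio likelihoods are flat (uninformative, e.g. noisy),
# the video likelihoods are confident, so a small audio weight is selected.
log_lik_audio = np.log(np.array([0.34, 0.33, 0.33]))
log_lik_video = np.log(np.array([0.90, 0.05, 0.05]))
print(frame_stream_weight(log_lik_audio, log_lik_video))
```

Run per frame, this kind of selection naturally tracks the fluctuating reliability of the two modalities, which is the behaviour the abstract attributes to MWSP.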
Abstract:
This chapter describes an experimental system for the recognition of human faces from surveillance video. In surveillance applications, the system must be robust to changes in illumination, scale, pose and expression, and must also be able to perform detection and recognition in real time. Our system detects faces using the Viola-Jones face detector, then extracts local features to build a shape-based feature vector. The feature vector is constructed from ratios of lengths and differences in tangents of angles, so as to be robust to changes in scale and to in-plane and out-of-plane rotations. Consideration was given to improving the performance and accuracy of both the detection and recognition steps.
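To make the pipeline concrete, here is a small Python sketch of the two steps named above. The Viola-Jones call uses OpenCV's real CascadeClassifier API; shape_feature_vector and the landmark points it consumes are hypothetical stand-ins, since the chapter's exact feature definition is not given in the abstract.

```python
import itertools
import numpy as np
import cv2  # OpenCV provides the Viola-Jones (Haar cascade) face detector

def detect_faces(gray_frame):
    """Viola-Jones face detection using OpenCV's bundled frontal-face cascade."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    return cascade.detectMultiScale(gray_frame, scaleFactor=1.1, minNeighbors=5)

def shape_feature_vector(landmarks):
    """Build a descriptor from 2-D landmark points (eye corners, nose tip,
    mouth corners, ...): ratios of pairwise lengths, which cancel scale,
    and differences of tangents of the angles between point pairs."""
    pts = np.asarray(landmarks, dtype=float)
    pairs = list(itertools.combinations(range(len(pts)), 2))
    lengths = np.array([np.linalg.norm(pts[i] - pts[j]) for i, j in pairs])
    angles = np.array([np.arctan2(*(pts[j] - pts[i])[::-1]) for i, j in pairs])
    ratios = lengths[1:] / lengths[0]                    # lengths relative to a reference pair
    tan_diffs = np.tan(angles[1:]) - np.tan(angles[0])   # tangent differences
    return np.concatenate([ratios, tan_diffs])

# Toy usage with five dummy landmark points (no landmark detector shown here).
demo_landmarks = [(30, 40), (70, 40), (50, 60), (35, 80), (65, 80)]
print(shape_feature_vector(demo_landmarks))
```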
Abstract:
A new domain-specific reconfigurable sub-pixel interpolation architecture for multi-standard video motion estimation (ME) is presented. The mixed use of parallel- and serial-input FIR filters achieves a high throughput rate and efficient silicon utilisation. Flexibility is achieved by using a multiplexed reconfigurable data-path controlled by a selection signal. Silicon design studies show that the architecture can be implemented using 34.8K gates, with area and performance that compare very favourably with existing fixed solutions based solely on the H.264 standard.
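As a rough behavioural illustration only (not the paper's RTL or its gate-level data-path), the Python sketch below models an interpolator whose FIR taps are multiplexed by a standard-selection signal. The H.264 6-tap half-pel filter [1, -5, 20, 20, -5, 1] / 32 is defined by that standard; the 2-tap bilinear entry and all function names are assumptions chosen for the example.

```python
# Half-pel interpolation taps selected by the standard-selection signal of
# the (modelled) reconfigurable data-path: (taps, right-shift) per standard.
FILTERS = {
    "h264":     ([1, -5, 20, 20, -5, 1], 5),  # coefficient sum 32, >> 5
    "bilinear": ([1, 1], 1),                  # coefficient sum 2,  >> 1
}

def half_pel_interpolate(samples, standard_select):
    """Behavioural model of a reconfigurable interpolator: one FIR data-path
    whose taps and shift are multiplexed by standard_select."""
    taps, shift = FILTERS[standard_select]
    out = []
    for i in range(len(samples) - len(taps) + 1):
        acc = sum(t * s for t, s in zip(taps, samples[i:i + len(taps)]))
        # Round, shift, and clip to the 8-bit sample range.
        out.append(max(0, min(255, (acc + (1 << (shift - 1))) >> shift)))
    return out

# Example: half-pel values for a 1-D row of 8-bit luma samples.
row = [10, 20, 40, 80, 120, 160, 200, 220]
print(half_pel_interpolate(row, "h264"))
print(half_pel_interpolate(row, "bilinear"))
```

In hardware, the dictionary lookup corresponds to a coefficient multiplexer driven by the selection signal, so the same multiply-accumulate resources serve more than one coding standard.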