141 results for Video sequences


Relevance:

20.00%

Publisher:

Abstract:

This paper describes how worst-case error analysis can be applied to solve some of the practical issues in the development and implementation of a low-power, high-performance radix-4 FFT chip for digital video applications. The chip has been fabricated using a 0.6 µm CMOS technology and can perform a 64-point complex forward or inverse FFT on real-time video at up to 18 megasamples per second. It comprises 0.5 million transistors in a die area of 7.8 × 8 mm and dissipates 1 W, providing a cost-effective silicon solution for high-quality video processing applications. The analysis focuses on the effect that different radix-4 architectural configurations and finite wordlengths have on the FFT output dynamic range. These issues are addressed using both mathematical error models and extensive simulation.
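The effect of finite wordlengths on transform accuracy can be explored in simulation. The sketch below is a hypothetical illustration, not the authors' code or hardware model: it quantizes the twiddle factors of a direct 64-point DFT (standing in for the radix-4 datapath) to a chosen number of fractional bits and measures the worst-case output error against a double-precision reference.

```python
import cmath
import random

def dft(x):
    # Reference double-precision DFT.
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def quantize(v, bits):
    # Round a complex value onto a fixed-point grid with `bits` fractional bits.
    scale = 1 << bits
    return complex(round(v.real * scale) / scale, round(v.imag * scale) / scale)

def dft_fixed(x, bits):
    # DFT with twiddle factors quantized to `bits` fractional bits, mimicking
    # a finite-wordlength coefficient ROM (coefficient error only; accumulator
    # rounding, which a real datapath also has, is ignored in this toy model).
    N = len(x)
    out = []
    for k in range(N):
        acc = 0
        for n in range(N):
            acc += x[n] * quantize(cmath.exp(-2j * cmath.pi * k * n / N), bits)
        out.append(acc)
    return out

random.seed(0)
x = [complex(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(64)]
ref = dft(x)
for bits in (8, 12, 16):
    err = max(abs(a - b) for a, b in zip(ref, dft_fixed(x, bits)))
    print(f"{bits}-bit twiddles: worst-case output error = {err:.6f}")
```

Running the loop for increasing wordlengths shows the output error shrinking roughly in step with the coefficient quantization step, which is the kind of trade-off the paper's error models capture analytically.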

Relevance:

20.00%

Publisher:

Abstract:

Objective: The aim of this paper is to bridge the gap between the corpus of imitation research and video-based intervention (VBI) research, to consider the impact imitation skills may have on VBI outcomes, and to highlight potential areas for improving efficacy.

Method: A review of the imitation literature was conducted, focusing on imitation skill deficits in children with autism, followed by a critical review of the video modelling literature, focusing on pre-intervention assessment of imitation skills and the impact imitation deficits may have on VBI outcomes.

Results: Children with autism have specific imitation deficits, which may affect VBI outcomes. Imitation training or procedural modifications to videos may compensate for these deficits.

Conclusions: There are only six studies in which VBI researchers have taken pre-intervention imitation assessments, using an assortment of imitation measures. More research is required to develop a standardised multi-dimensional imitation assessment battery that can better inform VBI.

Relevance:

20.00%

Publisher:

Abstract:

This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subjected to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements, and can be used alongside, many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances, with corruption added to either or both of the video and audio streams using a variety of noise types (e.g., MPEG-4 video compression) and levels. The experiments show that this approach gives excellent performance in comparison to another well-known dynamic stream-weighting approach, as well as to fixed-weight integration approaches, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams, and according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
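One plausible reading of frame-by-frame dynamic stream weighting can be sketched as follows. This is a toy illustration, not the paper's MWSP formulation: for each frame it scores every class under a small grid of candidate stream weights and keeps the class/weight pair with the maximum weighted log-posterior. The function name, the weight grid, and the two-class example are all assumptions.

```python
import math

def combine_streams(log_post_audio, log_post_video,
                    weights=(0.0, 0.25, 0.5, 0.75, 1.0)):
    # For each frame, evaluate lam * logP_audio(c) + (1 - lam) * logP_video(c)
    # over all classes c and candidate weights lam, and keep the maximising
    # pair. Returns a list of (best_class, best_weight) per frame.
    results = []
    for la, lv in zip(log_post_audio, log_post_video):
        best = None
        for lam in weights:
            for c in range(len(la)):
                score = lam * la[c] + (1 - lam) * lv[c]
                if best is None or score > best[0]:
                    best = (score, c, lam)
        results.append((best[1], best[2]))
    return results

# Frame 1: audio is confident, video is uninformative; frame 2: the reverse.
audio = [[math.log(0.9), math.log(0.1)], [math.log(0.5), math.log(0.5)]]
video = [[math.log(0.5), math.log(0.5)], [math.log(0.1), math.log(0.9)]]
print(combine_streams(audio, video))
```

In this contrived example the selected weight swings toward whichever stream is more confident in each frame, which mirrors the behaviour the paper reports: weights track the fluctuating relative reliability of the modalities without any explicit noise measurement.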

Relevance:

20.00%

Publisher:

Abstract:

REMA is an interactive web-based program that predicts endonuclease cut sites in DNA sequences. It analyses multiple sequences simultaneously, predicts the number and sizes of the resulting fragments, and provides restriction maps. Users can select single enzymes or paired combinations of all commercially available enzymes. Additionally, REMA predicts multiple-sequence terminal fragment sizes and suggests suitable restriction enzymes for maximally discriminatory results. REMA is an easy-to-use, web-based program that will have wide application in molecular biology research. Availability: REMA is written in Perl and is freely available for non-commercial use. Detailed information on installation can be obtained from Jan Szubert (jan.szubert@gmail.com) and the web-based application is accessible at the URL http://www.macaulay.ac.uk/rema. Contact: b.singh@macaulay.ac.uk. (C) 2007 Elsevier B.V. All rights reserved.
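The core computation behind a restriction-map tool can be illustrated with a short sketch. This is a generic digest calculation, not REMA's code (REMA itself is written in Perl): it locates every occurrence of an enzyme's recognition site in a linear sequence and returns the resulting fragment lengths.

```python
def digest(sequence, recognition_site, cut_offset=0):
    # Fragment lengths from digesting a linear DNA sequence with one enzyme.
    # `cut_offset` is the cut position within the recognition site, e.g.
    # EcoRI cuts G^AATTC, so site "GAATTC" with cut_offset=1.
    # Simplification: exact matches on one strand only; no ambiguity codes,
    # overhangs, or circular sequences, which a real tool must handle.
    seq = sequence.upper()
    site = recognition_site.upper()
    cuts = []
    i = seq.find(site)
    while i != -1:
        cuts.append(i + cut_offset)
        i = seq.find(site, i + 1)
    bounds = [0] + cuts + [len(seq)]
    return [b - a for a, b in zip(bounds, bounds[1:])]

# EcoRI (G^AATTC) on a short made-up sequence with two sites:
frags = digest("AAAGAATTCCCCCGAATTCTT", "GAATTC", cut_offset=1)
print(frags)
```

Pairing two enzymes, as REMA allows, amounts to merging the cut-position lists of both before computing fragment sizes; comparing the fragment-size patterns across sequences is what makes an enzyme choice "maximally discriminatory".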

Relevance:

20.00%

Publisher:

Relevance:

20.00%

Publisher:

Abstract:

The adapter molecule CAS is localized primarily within focal adhesions in fibroblasts. Because many of the cellular functions attributed to CAS are likely to be dependent on its presence in focal adhesions, this study was undertaken to identify regions of the protein that are involved in its localization. The SH3 domain of CAS, when expressed in isolation from the rest of the protein, was able to target to focal adhesions, whereas a variant containing a point mutation that rendered the SH3 domain unable to associate with FAK remained cytoplasmic. However, in the context of full-length CAS, this mutation did not prevent CAS localization to focal adhesions. Two other variants of CAS that contained deletions of either the SH3 domain alone, or the SH3 domain together with an adjoining proline-rich region, also retained the capacity to localize to focal adhesions. A second focal adhesion targeting region was mapped to the extreme carboxy terminus of CAS. The identification of this second focal adhesion targeting domain in CAS ascribes a previously unknown function to the highly conserved C terminus of CAS. The regulated targeting of CAS to focal adhesions by two independent domains may reflect the important role of CAS within this subcellular compartment.

Relevance:

20.00%

Publisher:

Abstract:

This chapter describes an experimental system for the recognition of human faces from surveillance video. In surveillance applications, the system must be robust to changes in illumination, scale, pose and expression. The system must also be able to perform detection and recognition rapidly, in real time. Our system detects faces using the Viola-Jones face detector, then extracts local features to build a shape-based feature vector. The feature vector is constructed from ratios of lengths and differences in tangents of angles, so as to be robust to changes in scale and to in-plane and out-of-plane rotations. Consideration was given to improving the performance and accuracy of both the detection and recognition steps.
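The shape-based feature vector can be sketched as below. The landmark layout, the particular length ratios, and the angle pairs are all hypothetical choices for illustration, not the chapter's actual feature set; the point is that ratios of inter-landmark distances are invariant to uniform scaling, which a quick check confirms.

```python
import math

def shape_features(pts):
    # pts: list of (x, y) landmark coordinates. Hypothetical layout:
    # pts[0] = left eye, pts[1] = right eye, pts[2] = nose tip, pts[3] = mouth.
    def d(a, b):
        return math.hypot(pts[a][0] - pts[b][0], pts[a][1] - pts[b][1])
    def ang(a, b):
        return math.atan2(pts[b][1] - pts[a][1], pts[b][0] - pts[a][0])
    # Ratios of lengths: unchanged under uniform scaling of the face.
    ratios = [d(0, 1) / d(2, 3), d(0, 2) / d(1, 2)]
    # Difference of tangents of two segment angles (this toy version does not
    # reproduce the full rotation robustness engineered in the chapter, and
    # tan() blows up for near-vertical segments, which real code must guard).
    tan_diffs = [math.tan(ang(0, 2)) - math.tan(ang(1, 2))]
    return ratios + tan_diffs

pts = [(0, 0), (4, 0), (2, 2), (2, 4)]
print(shape_features(pts))
print(shape_features([(3 * x, 3 * y) for x, y in pts]))  # scaled face
```

Printing the features for the original and a 3x-scaled set of landmarks yields the same vector, illustrating the scale invariance the abstract claims for length ratios.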