801 results for Video similarity


Relevance: 20.00%

Abstract:

Objective: The aim of this paper is to bridge the gap between the corpus of imitation research and video-based intervention (VBI) research, to consider the impact imitation skills may have on VBI outcomes, and to highlight potential areas for improving efficacy.

Method: A review of the imitation literature was conducted, focusing on imitation skill deficits in children with autism, followed by a critical review of the video modelling literature, focusing on pre-intervention assessment of imitation skills and the impact imitation deficits may have on VBI outcomes.

Results: Children with autism have specific imitation deficits, which may affect VBI outcomes. Imitation training or procedural modifications to the videos may compensate for these deficits.

Conclusions: Only six studies report VBI researchers conducting pre-intervention imitation assessments, and these use an assortment of imitation measures. More research is required to develop a standardised, multi-dimensional imitation assessment battery that can better inform VBI.

Relevance: 20.00%

Abstract:

This paper presents the maximum weighted stream posterior (MWSP) model as a robust and efficient stream integration method for audio-visual speech recognition in environments where the audio or video streams may be subject to unknown and time-varying corruption. A significant advantage of MWSP is that it does not require any specific measurements of the signal in either stream to calculate appropriate stream weights during recognition, and as such it is modality-independent. This also means that MWSP complements, and can be used alongside, many of the other approaches that have been proposed in the literature for this problem. For evaluation we used the large XM2VTS database for speaker-independent audio-visual speech recognition. The extensive tests include both clean and corrupted utterances, with corruption added to either or both of the video and audio streams using a variety of types (e.g., MPEG-4 video compression) and levels of noise. The experiments show that this approach gives excellent performance in comparison with another well-known dynamic stream weighting approach and with any fixed-weight integration approach, both in clean conditions and when noise is added to either stream. Furthermore, our experiments show that the MWSP approach dynamically selects suitable integration weights on a frame-by-frame basis according to the level of noise in the streams, and according to the naturally fluctuating relative reliability of the modalities even in clean conditions. The MWSP approach is shown to maintain robust recognition performance in all tested conditions, while requiring no prior knowledge about the type or level of noise.
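
To make the frame-by-frame stream weighting concrete, the sketch below shows exponent-weighted fusion of audio and video state log-likelihoods in which the audio weight is chosen, per frame, from a small grid so that the fused posterior over states is maximally peaked. This is only an illustrative approximation of the MWSP idea under assumed inputs (per-frame, per-state log-likelihoods and a uniform state prior); the function name and the weight grid are not from the paper.

```python
import numpy as np

def fuse_streams(log_lik_audio, log_lik_video, weight_grid=None):
    """Per-frame exponent-weighted fusion of audio/video state log-likelihoods.

    log_lik_audio, log_lik_video: hypothetical arrays of shape (T, Q) holding
    per-frame log-likelihoods for Q HMM states in each stream. For each frame
    the audio weight is picked from a grid so that the maximum state posterior
    of the fused distribution is largest, i.e. a frame where one stream is
    unreliable (flat posterior) automatically receives less weight.
    """
    if weight_grid is None:
        weight_grid = np.linspace(0.0, 1.0, 11)  # candidate audio weights
    T, Q = log_lik_audio.shape
    fused = np.empty((T, Q))
    chosen = np.empty(T)
    for t in range(T):
        best_peak, best_w = -np.inf, 0.5
        for w in weight_grid:
            combined = w * log_lik_audio[t] + (1.0 - w) * log_lik_video[t]
            # normalise to a posterior over states (uniform state prior assumed)
            posterior = np.exp(combined - np.logaddexp.reduce(combined))
            if posterior.max() > best_peak:
                best_peak, best_w = posterior.max(), w
        chosen[t] = best_w
        fused[t] = best_w * log_lik_audio[t] + (1.0 - best_w) * log_lik_video[t]
    return fused, chosen
```

Because the weight is derived from the streams' own posteriors rather than from an external noise estimate, the scheme needs no measurements of either signal, which mirrors the modality-independence claimed above.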

Relevance: 20.00%

Abstract:

Chronic myelomonocytic leukemia is similar to, but a separate entity from, both myeloproliferative neoplasms and myelodysplastic syndromes, and shows either myeloproliferative or myelodysplastic features. We asked whether this distinction may have a molecular basis. We established the gene expression profiles of 39 samples of chronic myelomonocytic leukemia (including 12 CD34-positive) and 32 CD34-positive samples of myelodysplastic syndromes using Affymetrix microarrays, and studied the status of 18 genes by Sanger sequencing and array-comparative genomic hybridization in 53 samples. Analysis of 12 mRNA samples from chronic myelomonocytic leukemia established a gene expression signature of 122 probe sets differentially expressed between proliferative and dysplastic cases of chronic myelomonocytic leukemia. Compared with proliferative cases, dysplastic cases over-expressed genes involved in red blood cell biology. When applied to the 32 myelodysplastic syndromes, this gene expression signature was able to discriminate refractory anemias with ring sideroblasts from refractory anemias with excess of blasts. By comparing mRNA from these two forms of myelodysplastic syndromes we derived a second gene expression signature, which separated the myelodysplastic and myeloproliferative forms of chronic myelomonocytic leukemia. These results were validated using two independent gene expression data sets. We found that myelodysplastic chronic myelomonocytic leukemias are characterized by mutations in transcription/epigenetic regulators (ASXL1, RUNX1, TET2) and splicing genes (SRSF2), and by the absence of mutations in signaling genes. Myelodysplastic chronic myelomonocytic leukemias and refractory anemias with ring sideroblasts share a common expression program, suggesting they are part of a continuum that is not fully explained by their similar, though not identical, mutation spectra. © 2013 Ferrata Storti Foundation.
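
As a rough illustration of how a probe-set signature of this kind can be derived from one cohort and applied to another, the sketch below ranks probes by a two-sample t-test between two groups of expression profiles and then classifies new samples by nearest centroid over the selected probes. It is a generic outline under assumed inputs; the function names, the t-test ranking and the nearest-centroid rule are not the authors' pipeline.

```python
import numpy as np
from scipy import stats

def derive_signature(expr_a, expr_b, n_probes=122):
    """Select the probe sets most differentially expressed between two groups.

    expr_a, expr_b: hypothetical arrays of shape (samples, probes), e.g.
    proliferative vs. dysplastic cases. Probes are ranked by the p-value of a
    two-sample t-test and the indices of the top n_probes form the signature.
    """
    _, p = stats.ttest_ind(expr_a, expr_b, axis=0)
    return np.argsort(p)[:n_probes]

def classify_by_signature(expr_new, expr_a, expr_b, signature):
    """Nearest-centroid call for new samples, using only the signature probes."""
    centroid_a = expr_a[:, signature].mean(axis=0)
    centroid_b = expr_b[:, signature].mean(axis=0)
    x = expr_new[:, signature]
    dist_a = np.linalg.norm(x - centroid_a, axis=1)
    dist_b = np.linalg.norm(x - centroid_b, axis=1)
    return np.where(dist_a < dist_b, "group_a", "group_b")
```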

Relevance: 20.00%

Abstract:

We address the problem of multi-target tracking in realistic crowded conditions by introducing a novel dual-stage online tracking algorithm. Data association between tracks and detections based on appearance is often complicated by partial occlusion. In the first stage, we address the issue of occlusion with a novel method of robust data association that computes the appearance similarity between tracks and detections without requiring explicit knowledge of the occluded regions. In the second stage, broken tracks are linked based on motion and appearance using an online-learned linking model; the online-learned motion model for track linking uses the confident tracks from the first-stage tracker as training examples. The new approach has been tested on the Town Centre dataset and its performance is comparable with the present state of the art.
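
A minimal sketch of occlusion-tolerant appearance matching in the spirit of the first stage is given below: each track template and detection is split into a grid of cells, per-cell grey-level histograms are compared, and only the best-matching fraction of cells contributes to the similarity score, so cells hidden by a partial occlusion cannot drag the score down. The grid size, the histogram descriptor and the Bhattacharyya comparison are assumptions for illustration, not the paper's exact appearance model.

```python
import numpy as np

def patch_histograms(image_patch, grid=(4, 2), bins=16):
    """Split a greyscale track/detection image into a grid of cells and
    return one normalised grey-level histogram per cell."""
    h, w = image_patch.shape[:2]
    hists = []
    for i in range(grid[0]):
        for j in range(grid[1]):
            cell = image_patch[i * h // grid[0]:(i + 1) * h // grid[0],
                               j * w // grid[1]:(j + 1) * w // grid[1]]
            hist, _ = np.histogram(cell, bins=bins, range=(0, 256))
            hists.append(hist.astype(float) / max(hist.sum(), 1))
    return np.array(hists)

def occlusion_robust_similarity(hists_track, hists_detection, keep=0.5):
    """Compare per-cell histograms and average only the best-matching cells,
    so that occluded cells do not dominate the appearance score."""
    # Bhattacharyya coefficient per cell (1 = identical, 0 = disjoint)
    scores = np.sum(np.sqrt(hists_track * hists_detection), axis=1)
    k = max(1, int(keep * len(scores)))
    return np.sort(scores)[-k:].mean()
```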

Relevance: 20.00%

Abstract:

This chapter describes an experimental system for the recognition of human faces from surveillance video. In surveillance applications, the system must be robust to changes in illumination, scale, pose and expression, and must also be able to perform detection and recognition rapidly, in real time. Our system detects faces using the Viola-Jones face detector and then extracts local features to build a shape-based feature vector. The feature vector is constructed from ratios of lengths and differences in tangents of angles, so as to be robust to changes in scale and to rotations both in-plane and out-of-plane. Consideration was given to improving the performance and accuracy of both the detection and recognition steps.
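
The sketch below illustrates the kind of shape descriptor described above: given a set of facial landmark coordinates (e.g. eye corners, nose tip and mouth corners), it forms ratios of pairwise distances and differences of tangents of segment angles. The landmark set, the pairing scheme and the feature ordering are assumptions for illustration, not the chapter's precise feature vector.

```python
import numpy as np
from itertools import combinations

def shape_feature_vector(landmarks):
    """Build a shape descriptor from facial landmark coordinates.

    landmarks: hypothetical array of shape (N, 2) of (x, y) points. Ratios of
    pairwise distances discard overall scale, and differences of tangents of
    the connecting segments' angles reduce sensitivity to in-plane rotation,
    echoing the ratios-of-lengths / tangent-difference construction above.
    """
    pts = np.asarray(landmarks, dtype=float)
    pairs = list(combinations(range(len(pts)), 2))
    dists = np.array([np.linalg.norm(pts[i] - pts[j]) for i, j in pairs])
    angles = np.array([np.arctan2(pts[j][1] - pts[i][1],
                                  pts[j][0] - pts[i][0]) for i, j in pairs])
    ratios = dists[:-1] / dists[1:]                        # ratios of lengths
    tan_diffs = np.tan(angles[:-1]) - np.tan(angles[1:])   # tangent differences
    return np.concatenate([ratios, tan_diffs])
```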