973 results for Compressed text search
Abstract:
In this work, we explore the prospect of segmenting crowd flow in H.264 compressed videos using only motion vectors. The motion vectors are extracted by partially decoding the corresponding video sequence in the H.264 compressed domain. The region of interest, i.e., the crowd flow region, is extracted, and the motion vectors that span the region of interest are preprocessed to obtain a collective representation of the motion vectors for the entire video. The obtained motion vectors for the video are then clustered using the EM algorithm. Finally, clusters that converge to a single flow are merged based on the Bhattacharyya distance between the histograms of the orientations of the motion vectors at the cluster boundaries. We implemented our proposed approach on the complex crowd flow dataset provided by [1] and compared our results using the Jaccard measure. Since crowd flow segmentation is performed in the compressed domain using only motion vectors, our approach runs much faster than pixel-domain counterparts while retaining better accuracy.
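The cluster-merging step described above can be illustrated with a minimal Python sketch: it compares the orientation histograms of motion vectors on either side of a cluster boundary via the Bhattacharyya distance. The function names, bin count, and merge threshold are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def orientation_histogram(motion_vectors, bins=8):
    """Histogram of motion-vector orientations; motion_vectors is an (N, 2) array of (mvx, mvy)."""
    angles = np.arctan2(motion_vectors[:, 1], motion_vectors[:, 0]) % (2 * np.pi)
    hist, _ = np.histogram(angles, bins=bins, range=(0.0, 2 * np.pi))
    return hist.astype(float)

def bhattacharyya_distance(hist_p, hist_q):
    """Bhattacharyya distance between two histograms (normalized internally)."""
    p = hist_p / (hist_p.sum() + 1e-12)
    q = hist_q / (hist_q.sum() + 1e-12)
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return -np.log(bc + 1e-12)

def should_merge(boundary_mvs_a, boundary_mvs_b, threshold=0.3):
    """Merge two clusters when their boundary orientation histograms are close (threshold is illustrative)."""
    d = bhattacharyya_distance(orientation_histogram(boundary_mvs_a),
                               orientation_histogram(boundary_mvs_b))
    return d < threshold
```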
Abstract:
Aims. In this work we search for signatures of low-dimensional chaos in the temporal behavior of the Kepler-field blazar W2R 1946+42. Methods. We use a publicly available, approximately 160,000-point-long and mostly equally spaced light curve of W2R 1946+42. We apply the correlation integral method to both the real dataset and phase-randomized surrogates. Results. We are not able to confirm the presence of low-dimensional chaos in the light curve. This result, however, still leads to some important implications for blazar emission mechanisms, which are discussed.
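As a rough illustration of the correlation integral method mentioned above, the sketch below delay-embeds a 1-D time series and estimates the correlation sum C(r), whose log-log slope gives a correlation-dimension estimate. The embedding dimension, delay, radii, and input file name are assumptions for demonstration only; a 160,000-point light curve would require subsampling or chunked distance computation.

```python
import numpy as np
from scipy.spatial.distance import pdist

def delay_embed(series, dim, tau):
    """Time-delay embedding of a 1-D series into dim-dimensional vectors."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau:i * tau + n] for i in range(dim)])

def correlation_sum(embedded, r):
    """Fraction of point pairs closer than radius r, i.e. the correlation integral C(r)."""
    dists = pdist(embedded)              # pairwise distances between embedded points
    return np.mean(dists < r)

# Hypothetical usage: estimate the correlation dimension from the slope of log C(r) vs log r.
# flux = np.loadtxt("lightcurve.txt")   # placeholder input; subsample long light curves first
# emb = delay_embed(flux, dim=5, tau=10)
# radii = np.logspace(-2, 0, 20)
# c = [correlation_sum(emb, r) for r in radii]
# slope, _ = np.polyfit(np.log(radii), np.log(c), 1)
```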
Abstract:
Large variations in human actions lead to major challenges in computer vision research. Several algorithms have been designed to address these challenges; the ones that stand apart not only solve the problem but also do so faster and more efficiently. In this paper, we propose a human-cognition-inspired, projection-based learning approach for person-independent human action recognition in the H.264/AVC compressed domain, and demonstrate how a PBL-McRBFN-based approach can take machine learning algorithms to the next level. We use a gradient-image-based feature extraction process in which motion vectors and quantization parameters are extracted and studied temporally to form several Groups of Pictures (GoPs). Each GoP is then considered individually for two different benchmark datasets, and the results are classified for person-independent human action recognition. The functional relationship is learned using the Projection Based Learning algorithm of the Meta-cognitive Radial Basis Function Network (PBL-McRBFN), which has a cognitive and a meta-cognitive component. The cognitive component is a radial basis function network, while the Meta-Cognitive Component (MCC) employs self-regulation. The MCC emulates human cognition-like learning to achieve better performance. The proposed approach can handle the sparse information in the compressed video domain and provides higher accuracy than pixel-domain counterparts. The feature extraction process achieved more than 90% accuracy with the PBL-McRBFN, which catalyzes the speed of the proposed high-speed action recognition algorithm. We conducted twenty random trials to evaluate the performance per GoP. The results are also compared with other well-known classifiers from the machine learning literature.
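A minimal sketch of the kind of GoP-level feature aggregation described above is given below: per-frame descriptors are built from motion-vector magnitudes/orientations and quantization parameters, then averaged over each Group of Pictures. Array shapes, bin counts, and the GoP size are assumptions; the actual gradient-image features of the paper are not reproduced here.

```python
import numpy as np

def frame_feature(motion_vectors, quant_params):
    """Per-frame descriptor from motion vectors (N x 2) and macroblock QPs (M,)."""
    mags = np.hypot(motion_vectors[:, 0], motion_vectors[:, 1])
    angs = np.arctan2(motion_vectors[:, 1], motion_vectors[:, 0]) % (2 * np.pi)
    # Magnitude-weighted orientation histogram plus simple MV/QP statistics.
    mv_hist, _ = np.histogram(angs, bins=8, range=(0.0, 2 * np.pi), weights=mags)
    return np.concatenate([mv_hist, [mags.mean(), quant_params.mean()]])

def gop_features(frames, gop_size=10):
    """Average per-frame descriptors over each Group of Pictures (GoP)."""
    feats = np.array([frame_feature(mv, qp) for mv, qp in frames])
    n_gops = len(feats) // gop_size
    return feats[:n_gops * gop_size].reshape(n_gops, gop_size, -1).mean(axis=1)
```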
Abstract:
In this paper, we propose an H.264/AVC compressed-domain human action recognition system with a projection-based meta-cognitive learning classifier (PBL-McRBFN). The features are extracted from the quantization parameters and the motion vectors of the compressed video stream for a time window and used as input to the classifier. Since compressed-domain analysis works with noisy, sparse compression parameters, achieving performance comparable to pixel-domain analysis is a huge challenge. On the positive side, the compressed domain allows much more rapid analysis of videos than pixel-level analysis. The classification results are analyzed for different values of the Group of Pictures (GOP) parameter and different time windows, including full videos. The functional relationship between the features and action labels is established using the PBL-McRBFN, which has a cognitive and a meta-cognitive component. The cognitive component is a radial basis function network, while the meta-cognitive component employs self-regulation to achieve better performance in the subject-independent action recognition task. The proposed approach is faster and shows comparable performance with respect to state-of-the-art pixel-domain counterparts. It employs partial decoding, which rules out the complexity of full decoding and minimizes computational load and memory usage, resulting in reduced hardware utilization and increased classification speed. The results are compared on two benchmark datasets and show more than 90% accuracy using the PBL-McRBFN. The performance for various GOP parameters and groups of frames is obtained with twenty random trials and compared with other well-known classifiers from the machine learning literature. (C) 2015 Elsevier B.V. All rights reserved.
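Since the cognitive component above is a Gaussian radial basis function network, a bare-bones forward pass is sketched below. The centers, widths, and output weights are placeholders; the projection-based learning and meta-cognitive self-regulation that actually determine them in PBL-McRBFN are not shown.

```python
import numpy as np

def rbf_forward(x, centers, widths, weights):
    """Gaussian RBF network forward pass.

    x: (d,) GOP-level feature vector; centers: (K, d); widths: (K,); weights: (K, C).
    Returns one score per action class.
    """
    d2 = np.sum((centers - x) ** 2, axis=1)           # squared distances to each center
    phi = np.exp(-d2 / (2.0 * widths ** 2))           # hidden-layer Gaussian activations
    return phi @ weights                              # linear output layer

def predict_action(x, centers, widths, weights):
    """Predicted class index for a feature vector x."""
    return int(np.argmax(rbf_forward(x, centers, widths, weights)))
```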
Abstract:
The problem of scaling up data integration, such that new sources can be quickly utilized as they are discovered, remains elusive: global schemas for integrated data are difficult to develop and expand, and schema and record matching techniques are limited by the fact that data and metadata are often under-specified and must be disambiguated by data experts. One promising approach is to avoid a global schema altogether and instead develop keyword search-based data integration, where the system lazily discovers associations that enable it to join together matches to keywords and return ranked results. The user is expected to understand the data domain and provide feedback about answers' quality, and the system generalizes such feedback to learn how to correctly integrate data. A major open challenge is that, under this model, the user only sees and offers feedback on a few "top" results: this result set must be carefully selected to include answers of high relevance as well as answers that are highly informative when feedback is given on them. Existing systems merely focus on predicting relevance by composing the scores of various schema and record matching algorithms. In this paper, we show how to predict the uncertainty associated with a query result's score, as well as how informative feedback on a given result is. We build upon these foundations to develop an active learning approach to keyword search-based data integration, and we validate the effectiveness of our solution over real data from several very different domains.
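One way to picture the selection problem above is as ranking candidate answers by a utility that mixes predicted relevance, score uncertainty, and expected informativeness of feedback. The sketch below is a hypothetical illustration of that trade-off; the field names, weights, and linear combination are assumptions and not the paper's scoring model.

```python
from dataclasses import dataclass

@dataclass
class Candidate:
    answer_id: str
    relevance: float        # predicted relevance score
    uncertainty: float      # predicted variance of the relevance score
    informativeness: float  # expected effect of user feedback on other answers

def select_top_k(candidates, k=10, w_rel=1.0, w_unc=0.5, w_inf=0.5):
    """Rank candidates so the shown set mixes relevant and informative results (weights are illustrative)."""
    def utility(c):
        return w_rel * c.relevance + w_unc * c.uncertainty + w_inf * c.informativeness
    return sorted(candidates, key=utility, reverse=True)[:k]
```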
Abstract:
This paper presents a novel high-speed approach for human action recognition in the H.264/AVC compressed domain. The proposed algorithm utilizes cues from quantization parameters and motion vectors extracted from the compressed video sequence for feature extraction and subsequent classification using Support Vector Machines (SVMs). The ultimate goal of the proposed work is to deliver a much faster algorithm than pixel-domain counterparts, with comparable accuracy, using only the sparse information available in compressed video. Partial decoding rules out the complexity of full decoding and minimizes computational load and memory usage, which results in reduced hardware utilization and faster recognition. The proposed approach can handle illumination changes as well as scale and appearance variations, and is robust to both outdoor and indoor testing scenarios. We have evaluated the performance of the proposed method on two benchmark action datasets and achieved more than 85% accuracy. The proposed algorithm classifies actions at speeds (>2,000 fps) approximately 100 times faster than existing state-of-the-art pixel-domain algorithms.
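A minimal sketch of the classification stage, assuming per-clip feature vectors have already been built from motion vectors and quantization parameters, is shown below using scikit-learn's SVC. The data here is random placeholder data; the kernel and hyperparameters are illustrative, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

# X: placeholder per-clip feature vectors derived from MVs and QPs; y: placeholder action labels.
rng = np.random.default_rng(0)
X = rng.random((200, 10))
y = rng.integers(0, 6, size=200)

# Standardize features, then train an RBF-kernel SVM classifier.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
clf.fit(X, y)
print(clf.predict(X[:5]))   # predicted action labels for the first five clips
```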
Abstract:
In this paper, we propose an anomaly detection algorithm based on the Histogram of Oriented Motion Vectors (HOMV) [1] in a sparse representation framework. Usual behavior is learned at each location by sparsely representing the HOMVs over normal feature bases obtained using an online dictionary learning algorithm. Finally, anomalies are detected based on the likelihood of occurrence of the sparse coefficients at that location. The proposed approach is found to be robust compared to existing methods, as demonstrated in experiments on the UCSD Ped1 and UCSD Ped2 datasets.
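The sketch below illustrates the general idea of learning a dictionary of normal bases from HOMV descriptors and scoring new observations by how poorly they are represented. It uses scikit-learn's MiniBatchDictionaryLearning with placeholder data, and a reconstruction-error score stands in for the paper's likelihood of sparse coefficients; the threshold and dimensions are illustrative.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning

# Placeholder: rows are HOMV descriptors from normal (usual-behavior) frames at one location.
rng = np.random.default_rng(0)
homv_train = rng.random((500, 16))

dico = MiniBatchDictionaryLearning(n_components=32, alpha=1.0,
                                   transform_algorithm="lasso_lars",
                                   transform_alpha=1.0)
dico.fit(homv_train)

def anomaly_score(homv, dico):
    """Reconstruction error of an HOMV descriptor under the learned normal bases."""
    code = dico.transform(homv.reshape(1, -1))        # sparse coefficients
    recon = code @ dico.components_                   # reconstruction from the dictionary
    return float(np.linalg.norm(homv - recon))

# Flag an anomaly when the descriptor is poorly represented (threshold is illustrative).
is_anomalous = anomaly_score(rng.random(16), dico) > 0.5
```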
Abstract:
Real-time anomaly detection is critical for security applications. In this article, we propose real-time anomaly detection for H.264 compressed video streams utilizing pre-encoded motion vectors (MVs). The work is principally motivated by the observation that MVs exhibit distinct characteristics during anomalies compared to usual behavior. Our observations show that H.264 MV magnitude and orientation contain relevant information that can be used to model usual behavior (UB) effectively. This model is subsequently extended to detect abnormality/anomaly based on the probability of occurrence of a behavior. The performance of the proposed algorithm was evaluated and benchmarked on the UMN and Ped anomaly detection video datasets, achieving a detection rate of 70 frames per second, which corresponds to 90x and 250x speedups, with on-par detection accuracy compared to state-of-the-art algorithms.
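A simple way to sketch such a usual-behavior model is a smoothed joint histogram over MV magnitude and orientation at a spatial location, with observations flagged when their probability under the histogram is low. The bin counts, magnitude range, and probability threshold below are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

class UsualBehaviourModel:
    """Joint histogram over motion-vector magnitude and orientation at one location."""

    def __init__(self, mag_bins=8, ang_bins=8, max_mag=32.0):
        self.hist = np.ones((mag_bins, ang_bins))      # Laplace-smoothed counts
        self.mag_edges = np.linspace(0.0, max_mag, mag_bins + 1)
        self.ang_edges = np.linspace(0.0, 2 * np.pi, ang_bins + 1)

    def _bin(self, mvx, mvy):
        mag = np.hypot(mvx, mvy)
        ang = np.arctan2(mvy, mvx) % (2 * np.pi)
        i = int(np.clip(np.digitize(mag, self.mag_edges) - 1, 0, self.hist.shape[0] - 1))
        j = int(np.clip(np.digitize(ang, self.ang_edges) - 1, 0, self.hist.shape[1] - 1))
        return i, j

    def update(self, mvx, mvy):
        """Accumulate an MV observed during normal (training) footage."""
        i, j = self._bin(mvx, mvy)
        self.hist[i, j] += 1

    def probability(self, mvx, mvy):
        """Probability of observing this MV under the learned usual behavior."""
        i, j = self._bin(mvx, mvy)
        return self.hist[i, j] / self.hist.sum()

# Hypothetical usage: flag an MV as anomalous when its probability is very low.
# model = UsualBehaviourModel(); model.update(1.0, 0.5)
# anomalous = model.probability(20.0, -15.0) < 1e-3
```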
Abstract:
Executing authenticated computation on outsourced data is currently an area of major interest in cryptology. Large databases are being outsourced to untrusted servers without appreciable verification mechanisms. As an adversarial server could produce erroneous output, clients should not trust the server's response blindly. Primitive set operations such as union, set difference, and intersection can be invoked on outsourced data in different concrete settings and should be verifiable by the client. One interesting adaptation is to authenticate email search results, where the untrusted mail server has to provide a proof along with the search result. Recently, Ohrimenko et al. proposed a scheme for authenticating email search. We suggest significant improvements over their proposal in terms of client computation and communication resources by properly recasting it in a two-party setting. In contrast to Ohrimenko et al., we are able to make the number of bilinear pairing evaluations, the costliest operation in the verification procedure, independent of the result set cardinality for the union operation. We also provide an analytical comparison of our scheme with their proposal, which is further corroborated through experiments.
Abstract:
In this paper, we search for the regions of the phenomenological minimal supersymmetric standard model (pMSSM) parameter space where one can expect a moderate Higgs mixing angle (α) with relatively light (up to 600 GeV) additional Higgses after satisfying the current LHC data. We perform a global fit analysis using the most recent data (up to December 2014) from the LHC and Tevatron experiments. The constraints coming from the precision measurements of the rare b-decays B_s → μ⁺μ⁻ and b → sγ are also considered. We find that the low M_A (≲ 350 GeV) and high tan β (≳ 25) regions are disfavored by the combined effect of the global analysis and flavor data. However, regions with a Higgs mixing angle α ∼ 0.1-0.8 are still allowed by the current data. We then study the existing direct search bounds on the heavy scalar/pseudoscalar (H/A) and charged Higgs boson (H±) masses and branchings at the LHC. It is found that regions with low to moderate values of tan β and light additional Higgses (mass ≤ 600 GeV) are unconstrained by the data, while regions with tan β > 20 are excluded by the direct search bounds from the LHC-8 data. The possibility of probing the region with tan β ≤ 20 at the high-luminosity run of the LHC is also discussed, giving special attention to the H → hh, H/A → tt̄, and H/A → τ⁺τ⁻ decay modes.
Abstract:
Image and video analysis requires rich features that can characterize various aspects of visual information. These rich features are typically extracted from the pixel values of the images and videos, which requires a huge amount of computation and is seldom useful for real-time analysis. By contrast, compressed-domain analysis offers relevant information pertaining to the visual content in the form of transform coefficients, motion vectors, quantization steps, and coded block patterns, with minimal computational burden. The amount of work done in the compressed domain is still much smaller than in the pixel domain. This paper surveys various video analysis efforts published during the last decade across the spectrum of video compression standards. The survey covers only the analysis part, excluding the processing aspects of the compressed domain. The analysis spans various computer vision applications such as moving object segmentation, human action recognition, indexing, retrieval, face detection, video classification, and object tracking in compressed videos.
Abstract:
Forty-six lectin domains that have homologues among well-established eukaryotic and bacterial lectins of known three-dimensional structure have been identified through a search of 165 archaeal genomes using a multipronged approach involving domain recognition, sequence search, and analysis of binding sites. Twenty-one of them have the 7-bladed β-propeller lectin fold, while 16 have the β-trefoil fold and 7 the legume lectin fold. The remainder assume the C-type lectin, β-prism I, and tachylectin folds. Acceptable models of almost all of them could be generated using appropriate lectins of known three-dimensional structure as templates, with binding sites at one or more expected locations. This work represents the first comprehensive bioinformatic study of archaeal lectins. The presence of lectins with the same fold in all domains of life indicates their ancient origin, well before the divergence of the three branches. Further work is necessary to identify archaeal lectins that have no homologues among eukaryotic and bacterial species. Proteins 2016; 84:21-30. (c) 2015 Wiley Periodicals, Inc.
Abstract:
Computer Assisted Assessment (CAA) has existed for several years. While some forms of CAA do not require sophisticated text understanding (e.g., multiple-choice questions), there are also student answers that consist of free text and require analysis of the text in the answer. Research on the latter has to date concentrated on two main sub-tasks: (i) grading of essays, done mainly by checking the style, correctness of grammar, and coherence of the essay, and (ii) assessment of short free-text answers. In this paper, we present a structured view of relevant research in automated assessment techniques for short free-text answers. We review papers spanning the last 15 years of research, with emphasis on recent papers. Our main objectives are twofold. First, we present the survey in a structured way by segregating information on datasets, problem formulations, techniques, and evaluation measures. Second, we discuss some potential future directions in this domain that we hope will be helpful for researchers.
Abstract:
We study an s-channel resonance R as a viable candidate to fit the diboson excess reported by ATLAS. We compute the contribution of the ∼2 TeV resonance R to semileptonic and leptonic final states at the 13 TeV LHC. To explain the absence of an excess in the semileptonic channel, we explore the possibility that the particle R decays to additional light scalars X, X or X, Y. A modified analysis strategy is proposed to study the three-particle final state of the resonance decay and to identify the decay channels of X. Associated production of R with gauge bosons is studied in detail to identify the production mechanism of R. We construct comprehensive categories for vector and scalar beyond-standard-model particles which may play the roles of the particles R, X, and Y, and find alternate channels to fix the new couplings and search for these particles.