915 results for Databases, Bibliographic
Abstract:
An action is typically composed of different parts of an object moving in particular sequences. The presence of different motions (represented as a 1D histogram) has been used in the traditional bag-of-words (BoW) approach for recognizing actions. However, the interactions among the motions also form a crucial part of an action. Different object parts have varying degrees of interaction with the other parts during an action cycle. It is these interactions we want to quantify in order to bring in additional information about the actions. In this paper we propose a causality-based approach for quantifying the interactions to aid action classification. Granger causality is used to compute the cause and effect relationships for pairs of motion trajectories of a video. A 2D histogram descriptor for the video is constructed using these pairwise measures. Our proposed method of obtaining pairwise measures for videos also scales to large datasets. We have conducted experiments on challenging action recognition databases such as HMDB51 and UCF50 and shown that our causality descriptor encodes additional information regarding the actions and performs on par with state-of-the-art approaches. Owing to its complementary nature, a further increase in performance can be observed by combining our approach with state-of-the-art approaches.
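A minimal sketch of the pairwise measure described above, assuming each trajectory is reduced to a 1-D numpy signal (one motion component) and using a standard regression-based Granger test; the model order, trajectory features, and histogram binning are assumptions here, not the paper's exact choices:

```python
import numpy as np

def granger_measure(x, y, lag=2):
    """Strength with which series y Granger-causes series x: log ratio
    of residual variances between the restricted model (x's own past)
    and the full model (x's and y's past)."""
    n = len(x)
    X_past = np.array([x[t - lag:t] for t in range(lag, n)])
    Y_past = np.array([y[t - lag:t] for t in range(lag, n)])
    target = x[lag:]

    def resid_var(design):
        design = np.column_stack([np.ones(len(design)), design])
        coef, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ coef)

    v_restricted = resid_var(X_past)
    v_full = resid_var(np.hstack([X_past, Y_past]))
    return np.log(v_restricted / max(v_full, 1e-12))

def causality_histogram(trajectories, bins=8, max_val=2.0):
    """Descriptor: 2D histogram over (cause, effect) strengths for all
    trajectory pairs, flattened and L1-normalized."""
    pairs = [(granger_measure(a, b), granger_measure(b, a))
             for i, a in enumerate(trajectories)
             for b in trajectories[i + 1:]]
    c, e = np.clip(np.array(pairs).T, 0.0, max_val)
    hist, _, _ = np.histogram2d(c, e, bins=bins,
                                range=[[0.0, max_val]] * 2)
    return hist.ravel() / max(hist.sum(), 1.0)
```

Each pair contributes a (cause, effect) point, so the 2D histogram captures asymmetric interactions that a 1D BoW histogram discards.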
Abstract:
Speech polarity detection is a crucial first step in many speech processing techniques. In this paper, an algorithm is proposed that improves upon the existing technique based on the skewness of the voice source (VS) signal. Here, the integrated linear prediction residual (ILPR) is used as the VS estimate, obtained by applying linear prediction to long-term frames of the low-pass filtered speech signal. This excludes the unvoiced regions from analysis and also reduces the computation. Further, a modified skewness measure is proposed for the polarity decision, which considers the magnitude of the skewness of the ILPR along with its sign. With the detection error rate (DER) as the performance metric, the algorithm is tested on 8 large databases and its performance (DER = 0.20%) is found to be comparable to that of the best existing technique (DER = 0.06%) on both clean and noisy speech. Further, the proposed method is found to be ten times faster than the best technique.
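The decision step can be sketched as follows, assuming frames of a voice-source estimate are already available (the ILPR extraction via long-term linear prediction is not reproduced); the magnitude-squared weighting is an illustrative stand-in for the paper's modified skewness measure:

```python
import numpy as np

def skewness(x):
    """Third standardized moment of a signal frame."""
    x = x - np.mean(x)
    s = np.std(x) + 1e-12
    return np.mean(x ** 3) / s ** 3

def detect_polarity(vs_frames):
    """Polarity from frame-wise skewness of a voice-source estimate:
    each frame votes with the sign of its skewness, weighted by the
    squared magnitude, so strongly skewed frames dominate."""
    skews = np.array([skewness(f) for f in vs_frames])
    score = np.sum(np.sign(skews) * skews ** 2)
    return +1 if score >= 0 else -1
```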
Abstract:
Executing authenticated computation on outsourced data is currently an area of major interest in cryptology. Large databases are being outsourced to untrusted servers without appreciable verification mechanisms. As an adversarial server could produce erroneous output, clients should not trust the server's response blindly. Primitive set operations such as union, set difference, and intersection can be invoked on outsourced data in different concrete settings and should be verifiable by the client. One such interesting adaptation is to authenticate email search results, where the untrusted mail server has to provide a proof along with the search result. Recently, Ohrimenko et al. proposed a scheme for authenticating email search. We suggest significant improvements over their proposal in terms of client computation and communication resources by properly recasting it in a two-party setting. In contrast to Ohrimenko et al., we are able to make the number of bilinear pairing evaluations, the costliest operation in the verification procedure, independent of the result set cardinality for the union operation. We also provide an analytical comparison of our scheme with their proposal, which is further corroborated through experiments.
Abstract:
We propose a completely automatic approach for recognizing low resolution face images captured in an uncontrolled environment. The approach uses multidimensional scaling to learn a common transformation matrix for the entire face which simultaneously transforms the facial features of the low resolution and the high resolution training images such that the distance between them approximates the distance that would have been obtained had both images been captured under the same controlled imaging conditions. Stereo matching cost is used to obtain the similarity of two images in the transformed space. Though this gives very good recognition performance, the time taken for computing the stereo matching cost is significant. To overcome this limitation, we propose a reference-based approach in which each face image is represented by its stereo matching cost against a few reference images. Experimental evaluation on challenging real-world databases and comparison with state-of-the-art super-resolution, classifier-based, and cross-modal synthesis techniques show the effectiveness of the proposed algorithm.
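The reference-based representation lends itself to a short sketch; the function names, the Euclidean stand-in cost, and the feature format are all assumptions here, with the paper's stereo matching cost slotting in as the `cost` argument:

```python
import numpy as np

def reference_signature(feat, ref_feats, cost):
    """Represent a face by its vector of matching costs against a few
    reference images (stand-in for the paper's stereo matching cost)."""
    return np.array([cost(feat, r) for r in ref_feats])

def identify(probe_feat, gallery, ref_feats,
             cost=lambda a, b: float(np.linalg.norm(a - b))):
    """Nearest neighbour in reference-cost space; `gallery` maps an
    identity label to that subject's feature vector."""
    probe_sig = reference_signature(probe_feat, ref_feats, cost)
    return min(gallery,
               key=lambda pid: np.linalg.norm(
                   reference_signature(gallery[pid], ref_feats, cost)
                   - probe_sig))
```

The expensive pairwise cost is computed only against the few reference images rather than against every gallery image, which is the source of the speed-up the abstract describes.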
Abstract:
Salient object detection has become an important task in many image processing applications. Existing approaches exploit background and contrast priors to attain state-of-the-art results. In this paper, instead of using background cues, we estimate the foreground regions in an image using objectness proposals and utilize them to obtain smooth and accurate saliency maps. We propose a novel saliency measure called 'foreground connectivity' which determines how tightly a pixel or a region is connected to the estimated foreground. We use the values assigned by this measure as foreground weights and integrate them into an optimization framework to obtain the final saliency maps. We extensively evaluate the proposed approach on two benchmark databases and demonstrate that the results obtained are better than those of existing state-of-the-art approaches.
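A crude approximation of the foreground weighting step, assuming a superpixel label map and scored boolean proposal masks; the paper's actual connectivity measure and the final optimization framework are not reproduced:

```python
import numpy as np

def foreground_weights(label_map, proposal_masks, scores):
    """Score-weighted fraction of objectness proposals covering each
    region: a simple stand-in for 'foreground connectivity'.
    `label_map` assigns a region id to every pixel; each proposal is a
    boolean mask with an objectness score."""
    n_regions = int(label_map.max()) + 1
    region_size = np.bincount(label_map.ravel(), minlength=n_regions)
    weights = np.zeros(n_regions)
    for mask, score in zip(proposal_masks, scores):
        covered = np.bincount(label_map[mask], minlength=n_regions)
        weights += score * covered / np.maximum(region_size, 1)
    return weights / (np.sum(scores) + 1e-12)
```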
Abstract:
Amino acid substitution matrices play an essential role in protein sequence alignment, a fundamental task in bioinformatics. The most widely used matrices, such as the PAM matrices derived from homologous sequences and the BLOSUM matrices derived from aligned segments of PROSITE, did not integrate conformation information in their construction. There are a few structure-based matrices, but they are derived from limited structure-alignment data. Using the PDB_SELECT and DSSP databases, we create a database of sequence-conformation blocks which explicitly represent the sequence-structure relationship. Members of a block are identical in conformation and highly similar in sequence. From this block database, we derive a conformation-specific amino acid substitution matrix, CBSM60. The matrix shows improved performance in conformational segment search and homolog detection.
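The generic log-odds step behind such block-derived matrices can be written compactly; the pseudocount, the scale, and the assumption that `pair_counts` holds ordered residue-pair counts from the blocks are illustrative choices, and the conformation-specific block construction from PDB_SELECT and DSSP is not shown:

```python
import numpy as np

def log_odds_matrix(pair_counts, scale=2.0):
    """BLOSUM-style log-odds scores from ordered residue-pair counts
    observed within blocks: s(a, b) = scale * log2(q_ab / (p_a * p_b)).
    A unit pseudocount keeps every frequency positive."""
    counts = pair_counts + 1.0
    q = counts / counts.sum()          # observed pair frequencies
    p = q.sum(axis=1)                  # marginal residue frequencies
    return np.round(scale * np.log2(q / np.outer(p, p))).astype(int)
```

Positive scores mark substitutions observed more often within blocks than chance predicts; restricting the blocks to a fixed conformation is what makes the resulting matrix conformation-specific.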
Abstract:
This paper presents a novel coarse-to-fine global localization approach inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by scale-invariant feature transform (SIFT) descriptors are used as natural landmarks. They are indexed into two databases: a location vector space model (LVSM) and a location database. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the LVSM is fast but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow but more accurate. The integration of the coarse and fine stages makes fast and reliable localization possible. If necessary, the localization result can be verified by the epipolar geometry between the representative view in the database and the view to be localized. In addition, the localization system recovers the position of the camera by essential matrix decomposition. The localization system has been tested in indoor and outdoor environments. The results show that our approach is efficient and reliable. © 2006 IEEE.
Abstract:
This paper presents a novel coarse-to-fine global localization approach that is inspired by object recognition and text retrieval techniques. Harris-Laplace interest points characterized by SIFT descriptors are used as natural landmarks. These descriptors are indexed into two databases: an inverted index and a location database. The inverted index is built on a visual vocabulary learned from the feature descriptors. In the location database, each location is directly represented by a set of scale-invariant descriptors. The localization process consists of two stages: coarse localization and fine localization. Coarse localization from the inverted index is fast but not accurate enough, whereas localization from the location database using a voting algorithm is relatively slow but more accurate. The combination of the coarse and fine stages makes fast and reliable localization possible. In addition, if necessary, the localization result can be verified by the epipolar geometry between the representative view in the database and the view to be localized. Experimental results show that our approach is efficient and reliable. © 2005 IEEE.
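The coarse stage maps naturally onto a text-retrieval index; a minimal tf-idf sketch is shown below, with the vocabulary learning (quantizing SIFT descriptors into visual words) and the fine voting stage omitted:

```python
from collections import Counter, defaultdict
from math import log

class InvertedIndex:
    """Minimal tf-idf inverted index over quantized local descriptors
    ('visual words') for the coarse localization stage."""

    def __init__(self):
        self.postings = defaultdict(dict)   # word -> {location: tf}
        self.n_locations = 0

    def add(self, location, words):
        """Index one location given its list of visual-word ids."""
        self.n_locations += 1
        for word, count in Counter(words).items():
            self.postings[word][location] = count / len(words)

    def query(self, words, top_k=3):
        """Rank candidate locations by accumulated tf-idf similarity."""
        scores = Counter()
        for word, count in Counter(words).items():
            posting = self.postings.get(word, {})
            if not posting:
                continue
            idf = log(self.n_locations / len(posting))
            for location, tf in posting.items():
                scores[location] += (count / len(words)) * tf * idf
        return scores.most_common(top_k)
```

Only the top-ranked candidates from this lookup need to be passed to the slower, more accurate voting stage.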
Abstract:
This article aims to provide an assessment of the foreign policy of Raúl Alfonsín (1983-1989). The analysis rests on a historical contextualization of his external action, on the foundations and beliefs that underpinned his international activity, and on its main objectives. At this stage of the research, the reconstruction relies especially on bibliographic sources, since it forms part of the theoretical aspects of a broader analysis that includes case studies from that period.
Abstract:
Describes the current normative landscape for racial quotas in Brazil, based on a study carried out using documentary and bibliographic analysis. The results indicated that the absence of a federal norm led to low adherence to the quota system, which is confirmed by the insignificant number of Public Higher Education Institutions (Ipes) that adopted a racial quota norm - only 17.79%. It was also found that this absence creates gaps in the adoption of national guidelines for interpreting and understanding affirmative action. These gaps bear directly on the public policy cycle, compromising the evaluation and monitoring of the policy's effectiveness and success, which is extremely dangerous for legal certainty in the field of human rights and for guaranteeing de facto equity in the political, economic, and social spheres.
Abstract:
Research focused on defining a theoretical-systemic model of Strategic Knowledge Management (GCE), situated within the studies of Knowledge Management (GC) and Information Management (GI) and considering concepts related to knowledge (tacit and explicit), to strategies (perspectives and approaches), and to the agents involved (decision-makers and strategists; novices and experts). The construction of the model draws on views from Information Science, Administration, and Cognitive Psychology. The methodology employs the abductive research method (concurrent use of the inductive and deductive methods), drawing on bibliographic analysis (for the theoretical grounding of the model), comparative study (for the evaluation of different GC models and of strategic approaches and perspectives), and descriptive or field research (for validation of the model with professionals in the field under study). The results indicate that it is possible to define a model of Strategic Knowledge Management and that many works can be developed from the proposal presented in this thesis.
Abstract:
Describes the current normative landscape for racial quotas in Brazil. The results indicated that the absence of a federal norm led to low adherence to the quota system, which is confirmed by the insignificant number of Public Higher Education Institutions (IPES) that adopted a racial quota norm - only 17.79%. It was also found that this absence creates gaps in the adoption of national guidelines for interpreting and understanding affirmative action. These gaps bear directly on the public policy cycle, compromising the evaluation and monitoring of the policy's effectiveness and success, which is extremely dangerous for legal certainty in the field of human rights and for guaranteeing de facto equity in the political, economic, and social spheres.
Abstract:
The disorganization of data in automated databases results in operational inefficiency caused by redundancy, inconsistency, low reuse, and informational risks; in short, less value added by IT to organizational objectives. Supported by project management techniques, a case study at the Câmara dos Deputados pursues the optimization of data organization in automated databases.
Abstract:
[ES] This project arose within the framework of the research project: