922 results for Biometric Descriptor
Abstract:
Understanding the overall catalytic activity trend for rational catalyst design is one of the core goals in heterogeneous catalysis. In the past two decades, the development of density functional theory (DFT) and surface kinetics has made it feasible to theoretically evaluate and predict the variation in catalytic activity within a descriptor-based framework. Within this framework, the concept of the volcano curve, which reveals the general activity trend, usually constitutes the foundation of catalyst screening. However, although it is a widely accepted concept in heterogeneous catalysis, its origin lacks a clear physical picture and a definite interpretation. Here, starting with a brief review of the development of the catalyst screening framework, we use a two-step kinetic model to clarify the origin of the volcano curve with a fully analytical treatment that integrates surface kinetics with the results of first-principles calculations. It is mathematically demonstrated that the volcano curve is an essential property of catalysis, resulting from the self-poisoning effect that accompanies the catalytic adsorption process. Specifically, when adsorption is strong, it is the rapid depletion of free surface sites, rather than the growth of energy barriers, that inhibits the overall reaction rate and produces the volcano curve. Some interesting points and implications for catalyst screening are also discussed on the basis of the kinetic derivation. Moreover, recent applications of the volcano curve to catalyst design in two important photoelectrocatalytic processes (the hydrogen evolution reaction and dye-sensitized solar cells) are briefly discussed.
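The self-poisoning mechanism behind the volcano curve can be illustrated numerically with a generic two-step catalytic cycle. The sketch below is not the paper's derivation; the thermal energy, prefactors, and BEP-style barrier parameters are all assumed illustrative values:

```python
import math

kT = 0.05   # eV, thermal energy (assumed illustrative value)
p = 1.0     # reactant pressure / prefactor, arbitrary units

def rate(dE, a1=0.1, a2=0.4, alpha=0.5):
    """Steady-state rate of a generic two-step cycle:
       (1) A + * -> A*      barrier Ea1 = max(0, a1 + alpha*dE)
       (2) A*    -> B + *   barrier Ea2 = max(0, a2 - alpha*dE)
    dE is the adsorption-energy descriptor (more negative = stronger
    binding). Stronger binding eases adsorption but raises the barrier
    of the surface step; all parameter values are hypothetical."""
    k1 = math.exp(-max(0.0, a1 + alpha * dE) / kT)
    k2 = math.exp(-max(0.0, a2 - alpha * dE) / kT)
    theta = k1 * p / (k1 * p + k2)   # steady-state coverage of A*
    free = 1.0 - theta               # fraction of free sites
    return k1 * p * free             # equals k2 * theta at steady state
```

Scanning `rate` over the descriptor yields a maximum at intermediate binding. On the strong-binding side the adsorption barrier is already clamped at zero, so it is the vanishing fraction of free sites (1 - theta), not a growing barrier for adsorption, that suppresses the rate, which mirrors the self-poisoning picture described in the abstract.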
Abstract:
AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection from Monocular Image Sequences. In: VISUALIZATION, IMAGING AND IMAGE PROCESSING, 2008, Palma de Mallorca, Spain. Proceedings... Palma de Mallorca: VIIP, 2008.
Abstract:
This paper describes an approach for the detection of frontal faces in real time (20-35 Hz) for further processing. The approach uses a combination of tracking of previous detections and color to select areas of interest. In those areas, facial features such as the eyes, nose, and mouth are then searched for using geometric tests, appearance verification, and temporal and spatial coherence. The system uses very simple techniques applied in a cascade, combined and coordinated with temporal information to improve performance. This module is a component of a complete system designed for the detection, tracking, and identification of individuals [1].
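The cascade idea, cheap tests first with early rejection of candidate regions, can be sketched as follows. The stages, thresholds, and region representation are hypothetical illustrations, not the authors' implementation:

```python
# Minimal sketch of a detection cascade: each candidate region passes
# through increasingly expensive tests and is rejected as early as
# possible. Stage logic and thresholds are invented for illustration.

def color_test(region):
    """Cheap stage: skin-like mean color (illustrative threshold)."""
    return region["mean_hue"] < 0.2

def geometry_test(region):
    """Medium stage: roughly face-like width/height ratio."""
    return 0.6 < region["width"] / region["height"] < 1.1

def appearance_test(region):
    """Expensive stage stub: would score eye/nose/mouth appearance."""
    return region["appearance_score"] > 0.5

CASCADE = [color_test, geometry_test, appearance_test]

def detect(region):
    # all() short-circuits, so most candidates die at a cheap stage
    # and never reach the expensive appearance check.
    return all(stage(region) for stage in CASCADE)
```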
Abstract:
AIRES, Kelson R. T.; ARAÚJO, Hélder J.; MEDEIROS, Adelardo A. D. Plane Detection Using Affine Homography. In: CONGRESSO BRASILEIRO DE AUTOMÁTICA, 2008, Juiz de Fora, MG. Anais... do CBA 2008.
Abstract:
In face recognition, where high-dimensional representation spaces are generally used, it is very important to take advantage of all the available information. In particular, many labelled facial images accumulate while the recognition system is operating, and for practical reasons some of them are often discarded. In this paper, we propose an algorithm for using this information. The algorithm has the fundamental characteristic of being incremental. In addition, it makes use of a combination of the classification results for the images in the input sequence. Experiments with sequences obtained with a real person detection and tracking system allow us to analyze the performance of the algorithm, as well as its potential improvements.
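One way to combine per-image classification results over a tracked sequence incrementally is to accumulate classifier scores per identity and report the running winner. This is an illustrative scheme, not the paper's algorithm:

```python
# Hedged sketch: incremental combination of per-frame classifier
# scores for a tracked person. Identities and scores are made up.
from collections import defaultdict

class IncrementalCombiner:
    def __init__(self):
        self.totals = defaultdict(float)

    def update(self, frame_scores):
        """frame_scores maps identity -> classifier score for one image
        in the sequence; returns the current combined decision."""
        for identity, score in frame_scores.items():
            self.totals[identity] += score
        # Decision so far: identity with the highest accumulated score.
        return max(self.totals, key=self.totals.get)
```

Because each frame only updates the running totals, the combiner never needs to store or revisit earlier images, which is the incremental property the abstract emphasises.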
Abstract:
The main pelagic fishery resources of economic interest in Peru are anchoveta (Engraulis ringens), jack mackerel (Trachurus murphyi), and chub mackerel (Scomber japonicus) [3]. To assess them, acoustic survey cruises are carried out in which echo-abundance information and length composition by species are combined to obtain biomass and abundance values. However, for non-target species (such as jack mackerel), these values are unreliable because of the distance between the biometric and acoustic sampling points. To address this problem, the present work proposed using empirical models (of GAM and GLM type) that integrate environmental variables and landings-monitoring data to generate relative and absolute indices for anchoveta and jack mackerel over the period 1996-2013 within the 200 nm area off the Peruvian coast. The results highlighted the importance of verification hauls for obtaining robust biomass estimates. It was also observed that, for anchoveta, the empirical models did produce good relative and absolute indices, improving on the use of echo-abundance alone. For jack mackerel, however, the final calibrated model yielded only a better relative index. It is further recommended that length and mean-weight information from landings be collected for jack mackerel in order to improve the biomass and abundance estimates.
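The calibration idea can be illustrated with a toy log-linear model, a much-simplified stand-in for the GAM/GLM models described above, regressing biomass on echo-abundance. All variable names and data are invented for illustration:

```python
import math

def fit_loglinear(echo, biomass):
    """Fit log(biomass) = a + b*log(echo) by ordinary least squares.
    A single-covariate toy version of an empirical calibration model;
    a real GLM/GAM would add environmental and landings covariates."""
    x = [math.log(e) for e in echo]
    y = [math.log(v) for v in biomass]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx            # slope on the log scale
    a = my - b * mx          # intercept on the log scale
    return a, b

def predict_index(a, b, echo_value):
    """Abundance index predicted from a new echo-abundance value."""
    return math.exp(a + b * math.log(echo_value))
```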
Abstract:
In computer vision, training a model that performs classification effectively depends heavily on the extracted features and the number of training instances. Conventionally, feature detection and extraction are performed by a domain expert who, in many cases, is expensive to employ and hard to find. Image descriptors have therefore emerged to automate these tasks; however, designing an image descriptor still requires domain-expert intervention. Moreover, the majority of machine learning algorithms require a large number of training examples to perform well, yet labelled data is not always available or easy to acquire, and dealing with a large dataset can dramatically slow down the training process. In this paper, we propose a novel Genetic Programming-based method that automatically synthesises a descriptor using only two training instances per class. The proposed method combines arithmetic operators to evolve a model that takes an image and generates a feature vector. The performance of the proposed method is assessed on six texture classification datasets with different degrees of rotation, and is compared with seven descriptors designed by domain experts. The results show that the proposed method is robust to rotation and has significantly outperformed, or achieved performance comparable to, the baseline methods.
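The evolved-descriptor idea, arithmetic operators mapping an image to a feature vector, can be sketched with a fixed expression tree standing in for a GP-evolved one. The tree shape, window size, and histogram binning are illustrative assumptions, not the paper's design:

```python
# Minimal sketch of an arithmetic image descriptor: an expression tree
# (here hand-written; in the paper it would be evolved by GP) maps each
# pixel window to a value, and the values are binned into a histogram
# that serves as the feature vector.

def protected_div(a, b):
    """GP-style protected division: returns 0 when dividing by zero."""
    return a / b if b != 0 else 0.0

def tree(w):
    """Example tree over a 4-pixel window: (w0 - w3) + w1 / w2."""
    return (w[0] - w[3]) + protected_div(w[1], w[2])

def describe(image, bins=8, lo=-300.0, hi=300.0):
    """Slide a 2x2 window over a 2-D list of pixel intensities, apply
    the tree, and histogram the responses into a feature vector."""
    hist = [0] * bins
    rows, cols = len(image), len(image[0])
    for r in range(rows - 1):
        for c in range(cols - 1):
            w = [image[r][c], image[r][c + 1],
                 image[r + 1][c], image[r + 1][c + 1]]
            v = tree(w)
            k = min(bins - 1, max(0, int((v - lo) / (hi - lo) * bins)))
            hist[k] += 1
    return hist
```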
Abstract:
The use of human brain electroencephalography (EEG) signals for automatic person identification has been investigated for a decade. It has been found that the performance of an EEG-based person identification system depends heavily on which features are extracted from the multi-channel EEG signals. Linear methods such as Power Spectral Density and Autoregressive Model have been used to extract EEG features; however, these methods assume that EEG signals are stationary. In fact, EEG signals are complex, non-linear, non-stationary, and random in nature. In addition, other factors such as brain condition or human characteristics may affect performance, but these factors have not been investigated and evaluated in previous studies. The literature shows that entropy is used to measure the randomness of non-linear time-series data, and also to measure the level of chaos in brain-computer interface systems. This thesis therefore proposes to study the role of entropy in the non-linear analysis of EEG signals in order to discover new features for EEG-based person identification. Five different entropy methods, namely Shannon Entropy, Approximate Entropy, Sample Entropy, Spectral Entropy, and Conditional Entropy, are proposed to extract entropy features, which are used to evaluate the performance of EEG-based person identification systems and the impact of epilepsy, alcohol, age, and gender characteristics on these systems. Experiments were performed on the Australian EEG and Alcoholism datasets. The experimental results show that, in most cases, the proposed entropy features yield very fast person identification with comparable accuracy, because the feature dimension is low; in real-life security operations, timely response is critical. The results also show that epilepsy, alcohol, age, and gender characteristics affect EEG-based person identification systems.
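Two of the entropy features named above can be sketched in plain Python: histogram-based Shannon entropy and Sample Entropy. Note that the tolerance r in SampEn is usually taken as a fraction of the signal's standard deviation; here it is an absolute value for brevity, and all parameters are illustrative:

```python
import math
from collections import Counter

def shannon_entropy(signal, bins=10):
    """Shannon entropy (bits) of the binned amplitude distribution."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0   # guard against a constant signal
    counts = Counter(min(bins - 1, int((x - lo) / width)) for x in signal)
    n = len(signal)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def sample_entropy(signal, m=2, r=0.2):
    """SampEn = -log(A/B), where B counts pairs of length-m templates
    and A pairs of length-(m+1) templates within Chebyshev distance r."""
    def count(mm):
        templates = [signal[i:i + mm] for i in range(len(signal) - mm + 1)]
        hits = 0
        for i in range(len(templates)):
            for j in range(i + 1, len(templates)):
                if max(abs(a - b) for a, b in zip(templates[i], templates[j])) < r:
                    hits += 1
        return hits
    B, A = count(m), count(m + 1)
    return -math.log(A / B) if A > 0 and B > 0 else float("inf")
```

A perfectly regular signal gives a SampEn near zero, while an irregular one gives a larger value, which is the property that makes these measures usable as low-dimensional features.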
Abstract:
An exploratory and descriptive single case study aimed at analysing indexing in one of the university libraries of the SIB/FURG. The specific objectives, defined from the context described above, were: a) to identify and analyse, through cognitive mapping, the methodological procedures employed in indexing during the activities of analysis, synthesis, and representation of information; b) to identify the concepts/notions most important in the indexer's perception of the indexing process, and the relations between those concepts, so as to build a cognitive map of the process from the indexer's perspective; and c) to describe and analyse the indexing of books in the unit under study with respect to their analysis, synthesis, and representation, through the application of the Verbal Protocol. The techniques used to collect information in the single case study were Self-Q and the Verbal Protocol, both grounded in a qualitative approach. From the construction of the indexer's cognitive map, it is concluded that the notions/concepts underpinning her practice are mostly procedural in character. It was also observed that the indexing practice occurs disconnected from the principles of specificity and exhaustivity. Regarding the indexing of books, it is concluded that, in the unit under study, the analysis operations are carried out empirically through the reading and interpretation of parts of the indexed document. The focus of the practice was found to fall not only on the document but also on the user. Analysis and synthesis occur in an integrated way, and at times the synthesis is driven by knowledge of the thesaurus descriptors. The delimitation of concepts, in turn, was at times influenced by: the use of terms already employed in the unit/system, the presence of the descriptor in the table of contents, knowledge of user demands, the subject area being indexed, and the indexer's professional perception. It was found that there are no defined levels of exhaustivity and specificity in the indexing. In the representation of concepts, difficulties were identified caused by the absence of relationships between terms and/or the absence of terms for the indexed area in the thesaurus used. It is concluded that a formalised indexing policy needs to be developed to underpin the practice carried out at the SIB/FURG.
Abstract:
For some years now the Internet and World Wide Web communities have envisaged moving to a next generation of Web technologies by promoting a globally unique, persistent identifier for identifying and locating many forms of published objects. These identifiers are called Uniform Resource Names (URNs), and they hold out the prospect of being able to refer to an object by what it is (signified by its URN) rather than by where it is (the current URL technology). One early implementation of URN ideas is the Unicode-based Handle technology, developed at CNRI in Reston, Virginia. The Digital Object Identifier (DOI) is a specific URN naming convention proposed just over five years ago and now administered by the International DOI organisation, founded by a consortium of publishers and based in Washington, DC. The DOI is being promoted for managing electronic content and for intellectual rights management of it, either using the published work itself or, increasingly, via metadata descriptors for the work in question. This paper describes the use of the CNRI handle parser to navigate a corpus of papers for the Electronic Publishing journal. These papers are in PDF format and are hosted on our server in Nottingham. For each paper in the corpus, a metadata descriptor is prepared for every citation appearing in the References section. The important factor is that the underlying handle is resolved locally in the first instance. In some cases (e.g. cross-citations within the corpus itself and links to known resources elsewhere) the handle can be handed over to CNRI for further resolution. This work shows the encouraging prospect of being able to use persistent URNs not only for intellectual property negotiations but also for search and discovery. In the test domain of this experiment, every single resource referred to within a given paper can be resolved, at least to the level of metadata about the referred object.
If the Web were to become more fully URN-aware, a vast directed graph of linked resources could be accessed via persistent names. Moreover, if these names delivered embedded metadata when resolved, the way would be open for a new generation of vastly more accurate and intelligent Web search engines.
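The local-first resolution strategy described above can be sketched as follows. The handle values, metadata fields, and the remote-resolver stub are invented for illustration and do not reflect the actual Handle System API:

```python
# Hedged sketch of local-first handle resolution: handles known to our
# own corpus resolve against a local table; anything else falls back to
# an external resolver (standing in for handing the handle to CNRI).

LOCAL_HANDLES = {
    "hdl:example/001": {"title": "Example corpus paper",
                        "url": "local://example-001.pdf"},
}

def resolve_global(handle):
    """Stub for remote resolution by the global Handle infrastructure."""
    return {"title": "<resolved remotely>", "handle": handle}

def resolve(handle):
    # Local resolution first; only unknown handles leave our server.
    if handle in LOCAL_HANDLES:
        return LOCAL_HANDLES[handle]
    return resolve_global(handle)
```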
Abstract:
Context: Mobile applications support a set of user-interaction features that are independent of the application logic. Rotating the device, scrolling, or zooming are examples of such features. Some bugs in mobile applications can be attributed to user-interaction features. Objective: This paper proposes and evaluates a bug analyzer based on user-interaction features that uses digital image processing to find bugs. Method: Our bug analyzer detects bugs by comparing the similarity between images taken before and after a user interaction. SURF, an interest point detector and descriptor, is used to compare the images. To evaluate the bug analyzer, we conducted a case study with 15 randomly selected mobile applications. First, we identified user-interaction bugs by manually testing the applications. Images were captured before and after applying each user-interaction feature. Then, image pairs were processed with SURF to obtain interest points, from which a similarity percentage was computed, to finally decide whether there was a bug. Results: We performed a total of 49 user-interaction feature tests. When manually testing the applications, 17 bugs were found, whereas when using image processing, 15 bugs were detected. Conclusions: 8 out of 15 mobile applications tested had bugs associated with user-interaction features. Our bug analyzer based on image processing was able to detect 88% (15 out of 17) of the user-interaction bugs found with manual testing.
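The comparison step, matching interest-point descriptors between the before/after screenshots and turning the match count into a similarity percentage, can be sketched generically. Descriptors here are plain feature vectors matched by brute force with a ratio test; in the paper SURF would supply the real descriptors, and the vectors below are invented:

```python
# Hedged sketch of descriptor matching for image similarity. Each
# descriptor is a list of floats; a descriptor in image A matches
# image B when its best match is clearly better than its second best
# (Lowe-style ratio test).
import math

def distance(d1, d2):
    """Euclidean distance between two descriptor vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(d1, d2)))

def match(desc_a, desc_b, ratio=0.75):
    """Count descriptors in desc_a with an unambiguous match in desc_b."""
    matches = 0
    for d in desc_a:
        dists = sorted(distance(d, e) for e in desc_b)
        if len(dists) >= 2 and dists[0] < ratio * dists[1]:
            matches += 1
    return matches

def similarity(desc_a, desc_b):
    """Matched fraction as a percentage; a low value flags a likely
    user-interaction bug (the screen changed more than expected)."""
    if not desc_a:
        return 0.0
    return 100.0 * match(desc_a, desc_b) / len(desc_a)
```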