952 results for Fingerprint recognition method


Relevância:

30.00%

Publicador:

Resumo:

Coordenação de Aperfeiçoamento de Pessoal de Nível Superior (CAPES)



Synthetic aperture radar (SAR) images have been used far more than before in geoscience applications in humid tropical regions. In this investigation, a C-band, HH-polarization RADARSAT-1 image acquired in 1998 was used for coastal mapping and land-cover assessment in the Bragança area, northern Brazil. An X-band, HH-polarization image from the airborne GEMS-1000 radar, acquired in 1972 during the RADAM project, was also used to assess the coastal changes that have occurred over the last three decades. The research has confirmed the usefulness of RADARSAT-1 imagery for geomorphological mapping and land-cover assessment, particularly on macrotidal mangrove coasts. In addition, a new method for estimating shoreline changes, based on the superposition of vectors extracted from different SAR images with high geometric accuracy, has shown that the Bragança coastal plain has been subject to severe erosion responsible for a retreat of approximately 32 km2 and an accretion of 20 km2, resulting in a mangrove-area loss of approximately 12 km2. As an application perspective, orbital and airborne SAR data proved to be an important source of information both for geomorphological mapping and for monitoring coastal changes in humid tropical environments.
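The abstract does not detail the vector-overlay computation, but the area bookkeeping behind figures of this kind can be sketched with the shoelace formula; the polygons below are hypothetical, chosen only to reproduce the reported numbers:

```python
def polygon_area(coords):
    # Shoelace formula: area of a simple polygon given (x, y) vertices in order.
    n = len(coords)
    s = 0.0
    for i in range(n):
        x1, y1 = coords[i]
        x2, y2 = coords[(i + 1) % n]
        s += x1 * y2 - x2 * y1
    return abs(s) / 2.0

# Hypothetical polygons (km) delimited by the 1972 and 1998 shoreline vectors.
eroded_polygon = [(0, 0), (8, 0), (8, 4), (0, 4)]        # area lost to the sea
accreted_polygon = [(10, 0), (14, 0), (14, 5), (10, 5)]  # area gained

erosion = polygon_area(eroded_polygon)      # 32 km^2
accretion = polygon_area(accreted_polygon)  # 20 km^2
net_loss = erosion - accretion              # 12 km^2 of mangrove area
print(erosion, accretion, net_loss)
```

In practice the polygons come from overlaying the digitized shoreline vectors of the two dates, so their shapes are irregular; the area computation is the same.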


In this letter, a speech recognition algorithm based on the least-squares method is presented. In particular, the intention is to exemplify how such a traditional numerical technique can be applied to solve a signal processing problem that is usually treated with more elaborate formulations.
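The letter's exact feature pipeline is not given in the abstract; the core idea of least-squares classification can be sketched as follows, with hypothetical two-dimensional features standing in for the speech feature vectors:

```python
import numpy as np

# Hypothetical feature clusters for two classes (e.g., two words to recognize).
rng = np.random.default_rng(0)
X0 = rng.normal(loc=[0.0, 0.0], scale=0.3, size=(20, 2))  # class 0 samples
X1 = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(20, 2))  # class 1 samples
X = np.hstack([np.vstack([X0, X1]), np.ones((40, 1))])    # add a bias column
Y = np.vstack([np.tile([1.0, 0.0], (20, 1)),              # one-hot targets
               np.tile([0.0, 1.0], (20, 1))])

# Solve min ||XW - Y||^2 with numpy's least-squares solver.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

def classify(x):
    # Predict the class whose indicator regression scores highest.
    return int(np.argmax(np.append(x, 1.0) @ W))

print(classify([0.1, -0.1]), classify([2.1, 1.9]))
```

Fitting a linear map from features to one-hot class indicators and taking the argmax is the simplest way a least-squares solver doubles as a classifier.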


Facial reconstruction is a method that seeks to recreate a person's facial appearance from his/her skull. This technique can be the last resource used in a forensic investigation, when identification techniques such as DNA analysis, dental records, fingerprints and radiographic comparison cannot be used to identify a body or skeletal remains. To perform facial reconstruction, data on facial soft tissue thickness are necessary. The scientific literature has described differences in facial soft tissue thickness between ethnic groups, and several databases of soft tissue thickness have been published. There are no literature records of facial reconstruction work carried out with soft tissue data obtained from samples of Brazilian subjects, nor are there reports of digital forensic facial reconstruction performed in Brazil. Two databases of soft tissue thickness have been published for the Brazilian population: one obtained from measurements performed on fresh cadavers (fresh cadavers pattern), and another from measurements using magnetic resonance imaging (magnetic resonance pattern). This study aims to perform three different characterized digital forensic facial reconstructions (with hair, eyelashes and eyebrows) of a Brazilian subject (based on an international pattern and two Brazilian patterns for facial soft tissue thickness), and to evaluate the reconstructions by comparing them to photographs of the individual and of nine other subjects. The DICOM data of a Computed Tomography (CT) scan donated by a volunteer were converted into stereolithography (STL) files and used for the creation of the digital facial reconstructions. Once the three reconstructions were performed, they were compared to photographs of the subject whose face had been reconstructed and of nine other subjects. Thirty examiners participated in this recognition process. The target subject was recognized by 26.67% of the examiners in the reconstruction performed with the Brazilian magnetic resonance pattern, by 23.33% in the reconstruction performed with the Brazilian fresh cadavers pattern, and by 20.00% in the reconstruction performed with the international pattern; the target subject was the most frequently recognized individual under the first two patterns. The rate of correct recognitions of the target subject indicates that digital forensic facial reconstruction, conducted with the parameters used in this study, may be a useful tool. (C) 2011 Elsevier Ireland Ltd. All rights reserved.


One of the problems in the analysis of nucleus-nucleus collisions is obtaining information on the value of the impact parameter b. This work consists in the application of pattern recognition techniques aimed at associating values of b with groups of events. To this end, a support vector machine (SVM) classifier is adopted to analyze multifragmentation reactions. This method allows backtracing the values of b through a particular multidimensional analysis. The SVM classification consists of two main phases. In the first, known as the training phase, the classifier learns to discriminate events generated by two different models, Classical Molecular Dynamics (CMD) and Heavy-Ion Phase-Space Exploration (HIPSE), for the reaction 58Ni + 48Ca at 25 AMeV. In the second, known as the test phase, what has been learned is checked on new events generated by the same models. These results have been compared to those obtained through other techniques for backtracing the impact parameter. Our tests show that, following this approach, central and peripheral collisions in the CMD events are always better classified than with the other backtracing techniques. We have finally performed the SVM classification on the experimental data measured by the NUCL-EX collaboration with the CHIMERA apparatus for the same reaction.
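The study's actual feature set and SVM implementation are not specified in the abstract; a minimal linear SVM trained with the Pegasos subgradient method, on hypothetical event observables (e.g., charged-particle multiplicity and transverse energy) for central versus peripheral events, looks like this:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical observables: central events (label +1) vs peripheral (label -1).
central = rng.normal([40.0, 8.0], 2.0, size=(100, 2))
peripheral = rng.normal([10.0, 2.0], 2.0, size=(100, 2))
X = np.hstack([np.vstack([central, peripheral]), np.ones((200, 1))])  # bias term
y = np.hstack([np.ones(100), -np.ones(100)])

def pegasos(X, y, lam=0.01, epochs=20):
    # Pegasos: stochastic subgradient descent on the regularized hinge loss.
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (w @ X[i]) < 1.0:          # margin violated: hinge active
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:                                 # only the regularizer acts
                w = (1 - eta * lam) * w
    return w

w = pegasos(X, y)
accuracy = np.mean(np.sign(X @ w) == y)
print(accuracy)
```

In the study the training labels come from simulated events with known b (CMD and HIPSE), and the trained classifier is then applied to experimental CHIMERA events.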


Antibody microarrays are of great research interest because of their potential application as biosensors for high-throughput protein and pathogen screening technologies. In this active area, there is still a need for novel structures and assemblies that provide insight into binding interactions, such as spherical and annulus-shaped protein structures, e.g. the use of curved surfaces for enhanced protein-protein interactions and detection of antigens. Therefore, the goal of the presented work was to establish a new technique for the label-free detection of biomolecules and bacteria on topographically structured surfaces suitable for antibody binding.

In the first part of the presented thesis, the fabrication of monolayers of inverse opals with 10 μm diameter and the immobilization of antibodies on their interior surface is described. For this purpose, several established methods for linking antibodies to glass, including Schiff bases, EDC/S-NHS chemistry and the biotin-streptavidin affinity system, were tested. The employed methods included immunofluorescence and image analysis by phase contrast microscopy. It could be shown that these methods were not successful in terms of antibody immobilization and subsequent bacteria binding. Hence, a method based on the application of an active-ester silane was introduced. It showed promising results but also the need for further analysis. In particular, the search for alternative antibodies addressing other antigens on the exterior of bacteria will be pursued in the future.

Building on the ability to control antibody-functionalized surfaces, a new technique is presented that employs colloidal templating to yield large-scale (~cm2) 2D arrays of antibodies against E. coli K12, eGFP and human integrin αvβ3 on a versatile glass surface. The antibodies were swept to reside around the templating microspheres during solution drying, and physisorbed on the glass. After removal of the microspheres, the formation of annulus-shaped antibody structures was observed. The preserved antibody structure and functionality is shown by binding of the specific antigens and secondary antibodies. The improved detection of specific bacteria from a crude solution, compared to conventional "flat" antibody surfaces, and the setting up of an integrin-binding platform for targeted recognition and surface interactions of eukaryotic cells are demonstrated. The structures were investigated by atomic force, confocal and fluorescence microscopy. Operational parameters such as drying time, temperature, humidity and surfactants were optimized to obtain a stable antibody structure.


Automatically recognizing faces captured under uncontrolled environments has always been a challenging topic in the past decades. In this work, we investigate cohort score normalization, which has been widely used in biometric verification, as a means to improve the robustness of face recognition under challenging environments. In particular, we introduce cohort score normalization into the undersampled face recognition problem. Further, we develop an effective cohort normalization method specifically for the unconstrained face pair matching problem. Extensive experiments conducted on several well-known face databases demonstrate the effectiveness of cohort normalization in these challenging scenarios. In addition, to give a proper understanding of cohort behavior, we study the impact of the number and quality of cohort samples on the normalization performance. The experimental results show that a bigger cohort set gives more stable and often better results, up to a point at which the performance saturates, and that cohort samples of different quality indeed produce different cohort normalization performance. Recognizing faces that have undergone alterations is another challenging problem for current face recognition algorithms. Face image alterations can be roughly classified into two categories: unintentional (e.g., geometric transformations introduced by the acquisition device) and intentional alterations (e.g., plastic surgery). We study the impact of these alterations on face recognition accuracy. Our results show that state-of-the-art algorithms are able to overcome limited digital alterations but are sensitive to more substantial modifications. Further, we develop two useful descriptors for detecting those alterations which can significantly affect recognition performance. In the end, we propose to use the Structural Similarity (SSIM) quality map to detect and model variations due to plastic surgery. Extensive experiments conducted on a plastic surgery face database demonstrate the potential of the SSIM map for matching face images after surgery.
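The SSIM index underlying the quality map can be sketched in a few lines; this computes a single global score over the whole image pair, whereas the SSIM map referred to above applies the same formula in a sliding local window (images here are synthetic):

```python
import numpy as np

def ssim(x, y, c1=(0.01 * 255) ** 2, c2=(0.03 * 255) ** 2):
    # Global Structural Similarity between two same-sized 8-bit-range images,
    # combining luminance, contrast and structure terms in one expression.
    x = x.astype(float)
    y = y.astype(float)
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2))

img = np.tile(np.arange(64, dtype=float), (64, 1))  # synthetic "before" image
altered = img.copy()
altered[20:40, 20:40] += 50.0                       # local surgery-like change
print(ssim(img, img), ssim(img, altered))           # identical: 1.0; altered: < 1.0
```

Computed locally, low-SSIM regions of the map highlight exactly the areas changed between the pre- and post-surgery images, which is what makes the map usable as an alteration detector.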


In recent years, deep learning techniques have been shown to perform well on a large variety of problems in both Computer Vision and Natural Language Processing, reaching and often surpassing the state of the art on many tasks. The rise of deep learning is also revolutionizing the entire field of Machine Learning and Pattern Recognition, pushing forward the concepts of automatic feature extraction and unsupervised learning in general. However, despite its strong success in both science and business, deep learning has its own limitations. It is often questioned whether such techniques are merely brute-force statistical approaches that can only work in the context of High Performance Computing with enormous amounts of data. Another important question is whether they are really biologically inspired, as claimed in certain cases, and whether they can scale well in terms of "intelligence". This dissertation focuses on trying to answer these key questions in the context of Computer Vision and, in particular, Object Recognition, a task that has been heavily revolutionized by recent advances in the field. Practically speaking, these answers are based on an exhaustive comparison between two very different deep learning techniques on the aforementioned task: the Convolutional Neural Network (CNN) and Hierarchical Temporal Memory (HTM). They stand for two different approaches and points of view under the broad umbrella of deep learning and are good choices for understanding and pointing out the strengths and weaknesses of each. The CNN is considered one of the most classic and powerful supervised methods used today in machine learning and pattern recognition, especially in object recognition. CNNs are well received and accepted by the scientific community and are already deployed in large corporations like Google and Facebook to solve face recognition and image auto-tagging problems. HTM, on the other hand, is a newly emerging paradigm and a mainly unsupervised method that is more biologically inspired. It tries to gain insights from the computational neuroscience community in order to incorporate concepts like time, context and attention during the learning process, which are typical of the human brain. In the end, the thesis aims to show that, in certain cases with a smaller quantity of data, HTM can outperform the CNN.
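The building block of the CNN side of this comparison is the convolution operation; a minimal sketch (not the dissertation's networks) shows how a learned kernel turns into a feature detector:

```python
import numpy as np

def conv2d(image, kernel):
    # Valid-mode 2-D cross-correlation, the core operation of a CNN layer.
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.zeros((5, 5))
image[:, 2] = 1.0                                  # a vertical bar
vertical_edge = np.array([[-1.0, 0.0, 1.0]] * 3)   # 3x3 vertical-edge kernel
response = np.maximum(conv2d(image, vertical_edge), 0.0)  # ReLU nonlinearity
print(response)
```

In a real CNN the kernels are learned by backpropagation and many such layers are stacked; HTM instead builds its representations through sparse distributed coding and temporal pooling rather than convolution.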


Microneurography is a method suitable for recording intraneural single or multiunit action potentials in conscious subjects. Microneurography has rarely been applied to animal experiments, where more invasive methods, like the teased fiber recording technique, are widely used. We have tested the feasibility of microneurographic recordings from the peripheral nerves of rats. Tungsten microelectrodes were inserted into the sciatic nerve at mid-thigh level. Single or multiunit action potentials evoked by regular electrical stimulation were recorded, digitized and displayed as a raster plot of latencies. The method allows unambiguous recording and recognition of single C-fiber action potentials from an in vivo preparation, with minimal disruption of the nerve being recorded. Multiple C-fibers can be recorded simultaneously for several hours, and if the animal is allowed to recover, repeated recording sessions can be obtained from the same nerve at the same level over a period of weeks or months. Also, single C units can be functionally identified by their changes in latency to natural stimuli, and insensitive units can be recognized as 'silent' nociceptors or sympathetic efferents by their distinctive profiles of activity-dependent slowing during repetitive electrical stimulation, or by the effect on spontaneous efferent activity of a proximal anesthetic block. Moreover, information about the biophysical properties of C axons can be obtained from their latency recovery cycles. Finally, we show that this preparation is potentially suitable for the study of C-fiber behavior in models of neuropathies and nerve lesions, both under resting conditions and in response to drug administration.
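The activity-dependent slowing used above to classify insensitive units can be quantified very simply from the per-stimulus latencies; the values below are hypothetical, chosen to reflect the typical contrast between 'silent' nociceptors and sympathetic efferents:

```python
import numpy as np

# Hypothetical latencies (ms) of one C unit per stimulus in a repetitive train.
latencies_nociceptor = np.array([220.0, 228.0, 236.0, 243.0, 250.0])
latencies_sympathetic = np.array([220.0, 221.0, 222.0, 222.5, 223.0])

def slowing_percent(latencies):
    # Relative latency increase from the first to the last stimulus of the train.
    return 100.0 * (latencies[-1] - latencies[0]) / latencies[0]

print(slowing_percent(latencies_nociceptor))   # pronounced slowing
print(slowing_percent(latencies_sympathetic))  # minimal slowing
```

Plotting latency against stimulus number reproduces the raster-plot profiles described above, with the slowing signature separating the fiber classes.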


Coordinated eye and head movements occur simultaneously to scan the visual world for relevant targets. However, measuring both eye and head movements in experiments allowing natural head movements may be challenging. This paper provides an approach to study eye-head coordination: First, we demonstrate the capabilities and limits of the eye-head tracking system used, and compare it to other technologies. Second, a behavioral task is introduced to invoke eye-head coordination. Third, a method is introduced to reconstruct signal loss in video-based oculography caused by cornea reflection artifacts in order to extend the tracking range. Finally, parameters of eye-head coordination are identified using EHCA (eye-head coordination analyzer), a MATLAB software package which was developed to analyze eye-head shifts. To demonstrate the capabilities of the approach, a study with 11 healthy subjects was performed to investigate motion behavior. The approach presented here is discussed as an instrument to explore eye-head coordination, which may lead to further insights into attentional and motor symptoms of certain neurological or psychiatric diseases, e.g., schizophrenia.
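The paper's reconstruction method is not specified in the abstract; the simplest form of dropout reconstruction, linear interpolation across samples lost to reflection artifacts, can be sketched like this (the gaze trace is hypothetical):

```python
import numpy as np

def fill_gaps(signal):
    # Reconstruct short dropouts (NaN samples, e.g. lost corneal reflections)
    # by linear interpolation between the surrounding valid samples.
    signal = np.asarray(signal, dtype=float)
    valid = ~np.isnan(signal)
    idx = np.arange(len(signal))
    return np.interp(idx, idx[valid], signal[valid])

gaze_x = [1.0, 2.0, np.nan, np.nan, 5.0, 6.0]  # hypothetical gaze trace (deg)
print(fill_gaps(gaze_x))  # → [1. 2. 3. 4. 5. 6.]
```

Interpolation is only defensible for brief gaps inside smooth pursuit or fixation; saccade-sized gaps need a model-based reconstruction instead.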


In this paper we present a solution to the problem of action and gesture recognition using sparse representations. The dictionary is modelled as a simple concatenation of features computed for each action or gesture class from the training data, and test data are classified by finding a sparse representation of the test video features over this dictionary. Our method does not impose any explicit training procedure on the dictionary. We experiment with two kinds of features, obtained by projecting (i) Gait Energy Images (GEIs) and (ii) motion descriptors to a lower dimension using random projection. Experiments have shown a 100% recognition rate on standard datasets, and the results are compared to those obtained with the widely used SVM classifier.
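The random projection step can be sketched as follows; the feature vectors are synthetic stand-ins for GEIs, and nearest-atom matching over the projected dictionary stands in for the paper's sparse-coding step:

```python
import numpy as np

rng = np.random.default_rng(2)

# Random projection: map d-dimensional features to k dimensions with a Gaussian
# matrix scaled by 1/sqrt(k); pairwise distances are approximately preserved
# (Johnson-Lindenstrauss), so class structure survives the reduction.
d, k = 1000, 50
R = rng.normal(0.0, 1.0 / np.sqrt(k), size=(k, d))

class_a = rng.normal(0.0, 1.0, size=d)   # hypothetical class-A feature vector
class_b = rng.normal(3.0, 1.0, size=d)   # hypothetical class-B feature vector
test_sample = class_a + rng.normal(0.0, 0.1, size=d)  # noisy class-A query

dictionary = np.stack([R @ class_a, R @ class_b])     # projected dictionary
query = R @ test_sample
label = int(np.argmin(np.linalg.norm(dictionary - query, axis=1)))
print(label)  # → 0 (class A)
```

Because projection is data-independent, the dictionary needs no training, which is exactly the property the method above exploits.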


Natural soil profiles may be interpreted as an arrangement of parts characterized by properties like hydraulic conductivity and water retention function. These parts form a complicated structure. Characterizing the soil structure is fundamental in subsurface hydrology because it has a crucial influence on flow and transport and defines the patterns of many ecological processes. We applied an image analysis method for the recognition and classification of visual soil attributes in order to model flow and transport through a man-made soil profile. Modeled and measured saturation-dependent effective parameters were compared. We found that characterizing and describing conductivity patterns in soils with sharp conductivity contrasts is feasible. In contrast, solving flow and transport on the basis of these conductivity maps is difficult and, in general, requires special care in the representation of small-scale processes.


We developed a novel combinatorial method termed restriction endonuclease protection, selection and amplification (REPSA) to identify consensus binding sites of DNA-binding ligands. REPSA uses a unique enzymatic selection based on the inhibition of cleavage by a type IIS restriction endonuclease, an enzyme that cleaves DNA at a site distal from its recognition sequence. Sequences bound by a ligand are protected from cleavage while unprotected sequences are cleaved. This enzymatic selection occurs in solution under mild conditions and depends only on the DNA-binding ability of the ligand. Thus, REPSA is useful for a broad range of ligands, including all classes of DNA-binding ligands, weakly binding ligands, mixed populations of ligands, and unknown ligands. Here I describe REPSA and the application of this method to select the consensus DNA-binding sequences of three representative DNA-binding ligands: a nucleic acid (triplex-forming single-stranded DNA), a protein (the TATA-binding protein), and a small molecule (distamycin A). These studies generated new information regarding the specificity of these ligands in addition to establishing their DNA-binding sequences.
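The selection logic of a single REPSA round can be illustrated in silico; the motif and the sequence pool below are hypothetical, and ligand binding is idealized as exact motif matching:

```python
import random

# Toy REPSA round: sequences bound by the ligand at its consensus site are
# protected from the type IIS endonuclease and survive; unbound sequences
# are cleaved and drop out of the pool.
CONSENSUS = "TATAAA"  # hypothetical TATA-box-like binding site

def repsa_round(pool):
    # Keep only sequences the ligand can bind (and thereby protect).
    return [s for s in pool if CONSENSUS in s]

random.seed(0)
bases = "ACGT"
pool = ["".join(random.choice(bases) for _ in range(20)) for _ in range(500)]
pool += ["GC" + CONSENSUS + "ACGTACGTACGT"]  # spike in one protected sequence

survivors = repsa_round(pool)
print(len(survivors), all(CONSENSUS in s for s in survivors))
```

Iterating rounds of selection followed by PCR amplification of the survivors enriches the pool toward the consensus, which is how REPSA converges on the binding sequence.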