952 results for Fingerprint recognition method
Abstract:
In this work, a modified version of the elastic bunch graph matching (EBGM) algorithm for face recognition is introduced. First, faces are detected using a fuzzy skin detector based on the RGB color space. Then, the fiducial points for the facial graph are extracted automatically by adjusting a grid of points to the output of an edge detector. After that, the positions of the nodes, their relations with their neighbors, and their Gabor jets are calculated in order to obtain the feature vector defining each face. A self-organizing map (SOM) framework is then presented, in which both the calculation of the winning neuron and the recognition process use a similarity function that takes into account the geometric and texture information of the facial graph. The set of experiments carried out with our SOM-EBGM method shows the accuracy of our proposal when compared with other state-of-the-art methods.
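The abstract does not give the exact form of the similarity function, so the sketch below is only a plausible reading: a Gabor-jet term (normalized dot product) blended with a geometric term based on node displacement, with a hypothetical weight alpha.

```python
import numpy as np

def jet_similarity(jet_a, jet_b):
    # Texture term: normalized dot product between two Gabor jets.
    return float(np.dot(jet_a, jet_b) /
                 (np.linalg.norm(jet_a) * np.linalg.norm(jet_b) + 1e-12))

def graph_similarity(graph_a, graph_b, alpha=0.5):
    # graph_*: dict with 'nodes' (N x 2 positions) and 'jets' (N x M Gabor jets).
    # alpha is an assumed weight between the texture and geometry terms.
    texture = np.mean([jet_similarity(a, b)
                       for a, b in zip(graph_a["jets"], graph_b["jets"])])
    displacement = np.linalg.norm(graph_a["nodes"] - graph_b["nodes"], axis=1).mean()
    geometry = 1.0 / (1.0 + displacement)   # smaller displacement -> higher similarity
    return alpha * texture + (1.0 - alpha) * geometry
```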
Abstract:
Motivated by a recently proposed biologically inspired face recognition approach, we investigated the relation between human behavior and a computational model based on Fourier-Bessel (FB) spatial patterns. We measured human recognition performance of FB-filtered face images using an 8-alternative forced-choice method. Test stimuli were generated by converting the images from the spatial to the FB domain, filtering the resulting coefficients with a band-pass filter, and finally taking the inverse FB transformation of the filtered coefficients. The performance of the computational models was tested using a simulation of the psychophysical experiment. In the FB model, face images were first filtered by simulated V1-type neurons and later analyzed globally for their content of FB components. In general, there was a higher human contrast sensitivity to radially than to angularly filtered images, but both functions peaked at the 11.3-16 frequency interval. The FB-based model presented similar behavior with regard to peak position and relative sensitivity, but had a wider frequency bandwidth and a narrower response range. The response patterns of two alternative models, based on local FB analysis and on raw luminance, strongly diverged from the human behavior patterns. These results suggest that human performance can be constrained by the type of information conveyed by polar patterns, and consequently that humans might use FB-like spatial patterns in face processing.
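As a rough illustration of the stimulus-generation step (transform, band-pass the coefficients, invert), the sketch below uses a 2-D FFT as a stand-in, since a Fourier-Bessel transform pair is not available in standard numerical libraries; the radius limits are placeholders.

```python
import numpy as np

def band_pass_stimulus(image, low, high):
    # Transform, keep coefficients whose radial frequency lies in [low, high],
    # then invert. The 2-D FFT stands in for the Fourier-Bessel transform.
    coeffs = np.fft.fftshift(np.fft.fft2(image))
    rows, cols = image.shape
    y, x = np.ogrid[-rows // 2:rows - rows // 2, -cols // 2:cols - cols // 2]
    radius = np.hypot(x, y)
    mask = (radius >= low) & (radius <= high)
    return np.real(np.fft.ifft2(np.fft.ifftshift(coeffs * mask)))
```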
Abstract:
The broad use of transgenic and gene-targeted mice has established bone marrow-derived macrophages (BMDM) as important mammalian host cells for the investigation of macrophage biology. Over the last decade, extensive research has been done to determine how to freeze and store viable hematopoietic human cells; however, there is no information regarding the generation of BMDM from frozen murine bone marrow (BM) cells. Here, we establish a highly efficient protocol to freeze murine BM cells and subsequently generate BMDM. Cryopreserved murine BM cells maintain their potential for BMDM differentiation for more than 6 years. We compared BMDM obtained from fresh and frozen BM cells and found that both are similarly able to trigger the expression of CD80 and CD86 in response to LPS or infection with the intracellular bacterium Legionella pneumophila. Additionally, BMDM obtained from fresh or frozen BM cells equally restrict or support the intracellular multiplication of pathogens such as L. pneumophila and the protozoan parasite Leishmania (L.) amazonensis. Although further investigation is required to support the use of the method for the generation of dendritic cells, preliminary experiments indicate that bone marrow-derived dendritic cells can also be generated from cryopreserved BM cells. Overall, the method described and validated herein represents a technical advance, as it allows ready and easy generation of BMDM from a stock of frozen BM cells.
Abstract:
Optical monitoring systems are necessary to manufacture multilayer thin-film optical filters with low tolerance on the spectrum specification. Furthermore, to achieve better accuracy in the measurement of film thickness, direct monitoring is a must. Direct monitoring implies acquiring spectrum data, in real time, from the optical component undergoing the film deposition itself. The high-vacuum evaporation chamber is the most popular equipment for depositing films on the surfaces of optical components. Inside the evaporator, at the top of the chamber, there is a metallic support with several holes where the optical components are mounted. This metallic support rotates to promote film homogenization. To measure the spectrum of the film being deposited, it is necessary to pass a light beam through a witness glass undergoing the film deposition process and collect a sample of the light beam with a spectrometer. As both the light beam and the light collector are stationary, a synchronization system is required to identify the moment at which the optical component passes through the light beam.
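The synchronization idea reduces to gating the spectrometer on the instants when the rotating support brings the witness glass into the beam. The sketch below assumes hypothetical read_intensity() and spectrometer.read_spectrum() interfaces; a real system would use its own acquisition driver and trigger hardware.

```python
import time

INTENSITY_THRESHOLD = 0.2   # assumed trigger level, arbitrary units

def acquire_when_aligned(spectrometer, read_intensity, n_spectra=10):
    # Grab a spectrum only while the witness glass crosses the light beam.
    spectra = []
    while len(spectra) < n_spectra:
        if read_intensity() > INTENSITY_THRESHOLD:   # beam reaches the collector
            spectra.append(spectrometer.read_spectrum())
            time.sleep(0.05)   # wait until the component leaves the beam
    return spectra
```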
Abstract:
This paper proposes a novel computer vision approach that processes video sequences of people walking and then recognises those people by their gait. Human motion carries different kinds of information that can be analysed in various ways. The skeleton carries motion information about human joints, and the silhouette carries information about the boundary motion of the human body. Moreover, binary and gray-level images contain different information about human movements. This work proposes to recover these different kinds of information to interpret the global motion of the human body based on four different segmented image models, using a fusion model to improve classification. Our proposed method considers the set of segmented frames of each individual as a distinct class and each frame as an object of this class. The methodology applies background extraction using the Gaussian Mixture Model (GMM), a scale reduction based on the Wavelet Transform (WT) and feature extraction by Principal Component Analysis (PCA). We propose four new schemas for motion information capture: the Silhouette-Gray-Wavelet model (SGW) captures motion based on gray-level variations; the Silhouette-Binary-Wavelet model (SBW) captures motion based on binary information; the Silhouette-Edge-Binary model (SEW) captures motion based on edge information; and the Silhouette-Skeleton-Wavelet model (SSW) captures motion based on skeleton movement. The classification rates obtained separately from these four different models are then merged using a new proposed fusion technique. The results suggest excellent performance in terms of recognising people by their gait.
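The common front end of the four models (GMM background subtraction, wavelet-based scale reduction, PCA) can be sketched as below; the wavelet choice ('haar'), the number of components and the use of the approximation band only are assumptions, not the paper's exact settings.

```python
import cv2
import numpy as np
import pywt
from sklearn.decomposition import PCA

def gait_features(frames, n_components=20):
    # frames: list of grayscale frames from one walking sequence.
    subtractor = cv2.createBackgroundSubtractorMOG2()      # GMM background model
    reduced = []
    for frame in frames:
        mask = subtractor.apply(frame)                     # moving silhouette
        approx, _ = pywt.dwt2(mask.astype(float), "haar")  # scale reduction (approximation band)
        reduced.append(approx.ravel())
    pca = PCA(n_components=n_components)                   # feature extraction
    return pca.fit_transform(np.array(reduced))
```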
Abstract:
We present a fast method for finding optimal parameters for a low-resolution (threading) force field intended to distinguish correct from incorrect folds for a given protein sequence. In contrast to other methods, the parameterization uses information from more than 10^7 misfolded structures as well as a set of native sequence-structure pairs. In addition to testing the resulting force field's performance on the protein sequence threading problem, results are shown that characterize the number of parameters necessary for effective structure recognition.
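The abstract does not spell out the optimization itself; the sketch below only illustrates the general idea of tuning a linear parameter vector so that native sequence-structure pairs score better (lower) than misfolded decoys, using a simple perceptron-style update rather than the paper's actual procedure.

```python
import numpy as np

def train_threading_weights(native_feats, decoy_feats, epochs=10, lr=0.01):
    # native_feats: (N, D) feature counts for native sequence-structure pairs.
    # decoy_feats:  (N, K, D) feature counts for K misfolded structures per native.
    w = np.zeros(native_feats.shape[1])
    for _ in range(epochs):
        for native, decoys in zip(native_feats, decoy_feats):
            for decoy in decoys:
                if w @ decoy <= w @ native:      # decoy scores as well as the native
                    w += lr * (decoy - native)   # push decoy score up, native score down
    return w
```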
Abstract:
We describe two ways of optimizing score functions for protein sequence-to-structure threading. The first method adjusts parameters to improve sequence-to-structure alignment. The second adjusts parameters so as to improve a score function's ability to rank alignments calculated with the first score function. Unlike those functions known as knowledge-based force fields, the resulting parameter sets do not rely on Boltzmann statistics, have no claim to representing free energies, and are purely constructions for recognizing protein folds. The methods give a small improvement, but suggest that functions can be profitably optimized for very specific aspects of protein fold recognition. Proteins 1999;36:454-461.
Abstract:
Recent studies have demonstrated that spatial patterns of fMRI BOLD activity distribution over the brain may be used to classify different groups or mental states. These studies are based on the application of advanced pattern recognition approaches and multivariate statistical classifiers. Most published articles in this field are focused on improving the accuracy rates, and many approaches have been proposed to accomplish this task. Nevertheless, a point inherent to most machine learning methods (and still relatively unexplored in neuroimaging) is how the discriminative information can be used to characterize groups and their differences. In this work, we introduce the Maximum Uncertainty Linear Discriminant Analysis (MLDA) and show how it can be applied to infer groups' patterns by discriminant hyperplane navigation. In addition, we show that it naturally defines a behavioral score, i.e., an index quantifying the distance of a subject's state from predefined groups. We validate and illustrate this approach using data from a motor block-design fMRI experiment with 35 subjects.
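The behavioral score can be read as a position along the discriminant axis, scaled so that the two group means map to 0 and 1. The sketch below uses standard LDA from scikit-learn as a stand-in for MLDA (the maximum-uncertainty covariance regularization is not reproduced), with binary group labels 0/1 assumed.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def behavioral_score(X_train, y_train, X_subject):
    # Fit a discriminant direction on the two groups (labels 0 and 1).
    lda = LinearDiscriminantAnalysis()
    lda.fit(X_train, y_train)
    proj = lda.transform(X_subject).ravel()            # subject position on the axis
    m0 = lda.transform(X_train[y_train == 0]).mean()   # group-0 centre on the axis
    m1 = lda.transform(X_train[y_train == 1]).mean()   # group-1 centre on the axis
    # 0 -> at the group-0 mean, 1 -> at the group-1 mean, in-between values interpolate.
    return (proj - m0) / (m1 - m0)
```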
Abstract:
Background: Although fatigue is a ubiquitous symptom across countries, clinical descriptions of chronic fatigue syndrome have arisen from a limited number of high-income countries. This might reflect differences in true prevalence or in clinical recognition influenced by sociocultural factors. Aims: To compare the prevalence, physician recognition and diagnosis of chronic fatigue syndrome in London and Sao Paulo. Method: Primary care patients in London (n=2459) and Sao Paulo (n=3914) were surveyed for the prevalence of chronic fatigue syndrome. Medical records were reviewed for physician recognition and diagnosis. Results: The prevalence of chronic fatigue syndrome according to Centers for Disease Control 1994 criteria was comparable in Britain and Brazil, 2.1% v. 1.6% (P=0.20). The review of medical records identified 11 diagnosed cases of chronic fatigue syndrome in Britain, but none in Brazil (P<0.001). Conclusions: The primary care prevalence of chronic fatigue syndrome was similar in two culturally and economically distinct nations. However, doctors are unlikely to recognise and label chronic fatigue syndrome as a discrete disorder in Brazil. The recognition of this illness, rather than the illness itself, may be culturally induced.
Abstract:
Blasting has been the most frequently used method for rock breakage since black powder was first used to fragment rocks, more than two hundred years ago. This paper is an attempt to reassess standard design techniques used in blasting by providing an alternative approach to blast design. The new approach has been termed asymmetric blasting. Based on real-time rock recognition provided by measurement-while-drilling (MWD) techniques, asymmetric blasting is an approach to deal with rock properties as they occur in nature, i.e., randomly and asymmetrically spatially distributed. It is well accepted that the performance of basic mining operations, such as excavation and crushing, relies on a broken rock mass which has been pre-conditioned by the blast. By pre-conditioned we mean well fragmented, sufficiently loose and with an adequate muckpile profile. These muckpile characteristics affect loading and hauling [1]. The influence of blasting does not end there. Under the Mine to Mill paradigm, blasting has a significant leverage on downstream operations such as crushing and milling. There is a body of evidence that blasting affects mineral liberation [2]. Thus, the importance of blasting has increased from simply fragmenting and loosening the rock mass to a broader role that encompasses many aspects of mining and affects the cost of the end product. A new approach is proposed in this paper which facilitates this trend: 'to treat non-homogeneous media (rock mass) in a non-homogeneous manner (an asymmetrical pattern) in order to achieve an optimal result (in terms of muckpile size distribution).' It is postulated that there are no logical reasons (besides the current lack of means to infer rock mass properties in the blind zones of the bench, and onsite precedents) for drilling a regular blast pattern over a rock mass that is inherently heterogeneous. Real and theoretical examples of such a method are presented.
Abstract:
Dental implant recognition in patients without available records is a time-consuming and not straightforward task. The traditional method is a completely user-dependent process, where the expert compares a 2D X-ray image of the dental implant with a generic database. Due to the high number of implants available and the similarity between them, automatic/semi-automatic frameworks to aid implant model detection are essential. In this study, a novel computer-aided framework for dental implant recognition is suggested. The proposed method relies on image processing concepts, namely: (i) a segmentation strategy for semi-automatic implant delineation; and (ii) a machine learning approach for implant model recognition. Although the segmentation technique is the main focus of the current study, preliminary details of the machine learning approach are also reported. Two different scenarios are used to validate the framework: (1) comparison of the semi-automatic contours against manual implant contours in 125 X-ray images; and (2) classification of 11 known implants using a large reference database of 601 implants. Regarding experiment 1, a Dice metric of 0.97±0.01, a mean absolute distance of 2.24±0.85 pixels and a Hausdorff distance of 11.12±6 pixels were obtained. In experiment 2, 91% of the implants were successfully recognized while reducing the reference database to 5% of its original size. Overall, the segmentation technique achieved accurate implant contours. Although the preliminary classification results prove the concept of the current work, more features and an extended database should be used in future work.
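The evaluation metrics named in experiment 1 are standard; a minimal sketch of two of them (Dice overlap between binary masks and the symmetric Hausdorff distance between contours) is given below, assuming NumPy arrays as inputs.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(mask_a, mask_b):
    # Overlap between two binary segmentation masks (1 = implant, 0 = background).
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())

def hausdorff_distance(contour_a, contour_b):
    # Symmetric Hausdorff distance between two contours given as (N, 2) point sets.
    return max(directed_hausdorff(contour_a, contour_b)[0],
               directed_hausdorff(contour_b, contour_a)[0])
```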
Abstract:
The use of iris recognition for human authentication has been spreading in the past years. Daugman has proposed a method for iris recognition composed of four stages: segmentation, normalization, feature extraction, and matching. In this paper we propose some modifications and extensions to Daugman's method to cope with noisy images. These modifications are proposed after a study of images from the CASIA and UBIRIS databases. The major modification is to the computationally demanding segmentation stage, for which we propose a faster and equally accurate template matching approach. The extensions to the algorithm address the important issue of pre-processing, which depends on the image database and is mandatory when a non-infrared camera, such as a typical webcam, is used. For this scenario, we propose methods for reflection removal and pupil enhancement and isolation. The tests, carried out by our C# application on grayscale CASIA and UBIRIS images, show that the template matching segmentation method is more accurate and faster than the previous one for noisy images. The proposed algorithms are found to be efficient and necessary when we deal with non-infrared images and non-uniform illumination.
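The abstract does not detail the template matching segmentation, so the sketch below is only one plausible form: synthetic dark-disc templates of several radii are slid over the eye image and the best normalized correlation gives a pupil centre and radius. The radius range and score are assumptions.

```python
import cv2
import numpy as np

def locate_pupil(eye_gray, radii=range(20, 61, 5)):
    # eye_gray: 8-bit grayscale eye image.
    best_score, best_centre, best_radius = -1.0, None, None
    for r in radii:
        template = np.full((2 * r + 1, 2 * r + 1), 255, np.uint8)
        cv2.circle(template, (r, r), r, 0, -1)   # dark disc on a bright background
        result = cv2.matchTemplate(eye_gray, template, cv2.TM_CCOEFF_NORMED)
        _, max_val, _, max_loc = cv2.minMaxLoc(result)
        if max_val > best_score:
            best_score = max_val
            best_centre = (max_loc[0] + r, max_loc[1] + r)
            best_radius = r
    return best_centre, best_radius
```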
Abstract:
As a result of the stressful conditions in aquaculture facilities, there is a high risk of bacterial infections among cultured fish. Chlortetracycline (CTC) is one of the antimicrobials used to address this problem. It is a broad-spectrum antibacterial active against a wide range of Gram-positive and Gram-negative bacteria. Numerous analytical methods for screening, identifying, and quantifying CTC in animal products have been developed over the years. An alternative and advantageous method should rely on expeditious and efficient procedures providing highly specific and sensitive measurements in food samples. Ion-selective electrodes (ISEs) could meet these criteria. The only ISE reported in the literature for this purpose used traditional electro-active materials. A selectivity enhancement could, however, be achieved by improving the analyte recognition with molecularly imprinted polymers (MIPs). Several MIP particles were synthesized and used as electro-active materials. ISEs based on methacrylic acid monomers showed the best analytical performance in terms of slope (62.5 and 68.6 mV/decade) and detection limit (4.1×10−5 and 5.5×10−5 mol L−1). The electrodes displayed good selectivity. The ISEs are not affected by pH changes in the range from 2.5 to 13. The sensors were successfully applied to the analysis of serum, urine and fish samples.
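The slope figures quoted (mV per decade) come from the linear fit of the measured potential against the base-10 logarithm of concentration; a minimal calibration sketch with illustrative (not the paper's) data is shown below.

```python
import numpy as np

def electrode_slope(concentrations, potentials):
    # Linear fit of potential (mV) vs log10(concentration): slope in mV/decade.
    slope, intercept = np.polyfit(np.log10(concentrations), potentials, 1)
    return slope, intercept

# Illustrative calibration points for a near-Nernstian monovalent response.
c = np.array([1e-6, 1e-5, 1e-4, 1e-3, 1e-2])       # mol / L
e = np.array([120.0, 179.0, 238.0, 297.0, 356.0])  # mV
print(electrode_slope(c, e))   # slope close to 59 mV/decade
```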