960 results for Optical pattern recognition.


Relevance: 90.00%

Abstract:

In this paper, a novel pattern recognition scheme, global harmonic subspace analysis (GHSA), is developed for face recognition. In the proposed scheme, global harmonic features are extracted at the semantic scale to capture the 2-D semantic spatial structures of a face image, and a Laplacian Eigenmap is applied to discriminate faces in their global harmonic subspace. Experimental results on the Yale and PIE face databases show that the proposed GHSA scheme improves face recognition accuracy over conventional subspace approaches, and a further investigation shows that it is also markedly robust to noise.
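The second stage of such a pipeline can be illustrated with an off-the-shelf Laplacian Eigenmap. The sketch below is a simplified stand-in, not the authors' GHSA implementation: it assumes the harmonic-type feature vectors have already been extracted and uses scikit-learn's SpectralEmbedding (a Laplacian Eigenmap over a k-NN affinity graph) followed by 1-NN matching. Gallery and probe images are embedded jointly here only because SpectralEmbedding offers no out-of-sample transform.

```python
# Simplified sketch (not the authors' GHSA code): Laplacian Eigenmap
# embedding of pre-extracted face feature vectors, then 1-NN matching.
import numpy as np
from sklearn.manifold import SpectralEmbedding
from sklearn.neighbors import KNeighborsClassifier

def embed_and_match(gallery_feats, gallery_labels, probe_feats):
    """gallery_feats, probe_feats: (n, d) arrays of per-image feature vectors."""
    X = np.vstack([gallery_feats, probe_feats])
    # Laplacian Eigenmap: coordinates from the smallest eigenvectors of the
    # graph Laplacian of a k-nearest-neighbour affinity graph.
    emb = SpectralEmbedding(n_components=20, affinity="nearest_neighbors",
                            n_neighbors=10).fit_transform(X)
    gallery_emb = emb[:len(gallery_feats)]
    probe_emb = emb[len(gallery_feats):]
    knn = KNeighborsClassifier(n_neighbors=1).fit(gallery_emb, gallery_labels)
    return knn.predict(probe_emb)
```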

Relevance: 90.00%

Abstract:

This study investigates face recognition with partial occlusion, illumination variation and their combination, assuming no prior information about the mismatch and limited training data for each person. The authors extend their previous posterior union model (PUM) to give a new method capable of dealing with all these problems. PUM is an approach for selecting the optimal local image features for recognition, improving robustness to partial occlusion. The extension is in two stages. First, the authors extend PUM from a probability-based formulation to a similarity-based formulation, so that it operates with as little as a single training sample while retaining robustness to partial occlusion. Second, they extend this new formulation to make it robust to illumination variation, and to combined illumination variation and partial occlusion, through a novel combination of multicondition relighting and optimal feature selection. To evaluate the new methods, a number of databases with various simulated and realistic occlusion/illumination mismatches have been used. The results demonstrate the improved robustness of the new methods.
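The core idea of selecting the best-matching local features can be sketched without the probabilistic machinery. The toy example below is not the authors' PUM, and the block size and keep ratio are arbitrary assumptions: it scores a probe against a gallery image using only its best-matching local blocks, so occluded regions contribute little.

```python
# Simplified illustration of similarity-based local feature selection
# (not the authors' posterior union model): each face is split into local
# blocks, per-block similarities are computed, and only the best-matching
# blocks contribute to the final score.
import numpy as np

def blocks(img, size=16):
    h, w = img.shape
    return [img[i:i + size, j:j + size].ravel()
            for i in range(0, h - size + 1, size)
            for j in range(0, w - size + 1, size)]

def match_score(probe, gallery_img, keep=0.6):
    sims = []
    for p, g in zip(blocks(probe), blocks(gallery_img)):
        p = (p - p.mean()) / (p.std() + 1e-8)   # per-block normalisation
        g = (g - g.mean()) / (g.std() + 1e-8)
        sims.append(float(p @ g) / len(p))       # normalised correlation
    sims = np.sort(sims)[::-1]
    k = max(1, int(keep * len(sims)))            # keep only the best blocks
    return sims[:k].mean()
```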

Relevance: 90.00%

Abstract:

Gabor features have been recognized as one of the most successful face representations. Encouraged by the results of this approach, other kinds of facial representation, based on steerable Gaussian first-order kernels and the Harris corner detector, are proposed in this paper. To reduce the high-dimensional feature space, PCA and LDA techniques are employed. Once the features have been extracted, the AdaBoost learning algorithm is used to select and combine the most representative features. The experimental results on the XM2VTS database show an encouraging recognition rate, with an important improvement over face descriptors based only on Gabor filters.
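The reduction-plus-boosting stage maps naturally onto standard library components. The sketch below is illustrative only: it assumes descriptor vectors (e.g. steerable-filter or Gabor responses) are already computed, uses scikit-learn's PCA and AdaBoost in place of the paper's exact configuration, and omits the LDA branch; all sizes are arbitrary choices.

```python
# Illustrative sketch of the reduction + boosting stage (feature extraction
# with steerable Gaussian kernels / Harris corners is assumed already done).
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier
from sklearn.pipeline import make_pipeline

clf = make_pipeline(
    PCA(n_components=100),                 # compress the high-dimensional descriptors
    AdaBoostClassifier(n_estimators=200),  # default base learner: depth-1 stumps
)
# X_train: (n_samples, n_features) descriptor matrix, y_train: identity labels
# clf.fit(X_train, y_train); predictions = clf.predict(X_test)
```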

Relevance: 90.00%

Abstract:

This paper presents a novel method that leverages reasoning capabilities in a computer vision system dedicated to human action recognition. The proposed methodology is decomposed into two stages. First, a machine learning algorithm, a bag-of-words model, gives a first estimate of the action class for each video sequence by performing an image feature analysis. These results are then passed to a common-sense reasoning system, which analyses, selects and corrects the initial estimates yielded by the machine learning algorithm. This second stage draws on the knowledge implicit in the rationality that motivates human behaviour. Experiments are performed in realistic conditions, where poor recognition rates from the machine learning techniques alone are significantly improved by the second stage, demonstrating the value of integrating common-sense knowledge and reasoning capabilities into a computer vision pipeline.
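A minimal bag-of-visual-words baseline of the kind used in the first stage can be assembled from a clustering step and a linear classifier; the common-sense reasoning stage is not reproduced here. Vocabulary size, descriptor type and classifier choice below are assumptions, not the paper's settings.

```python
# Sketch of a bag-of-visual-words first stage: local descriptors are
# quantised against a learned vocabulary, each video is represented by a
# word histogram, and a linear classifier predicts the action class.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import LinearSVC

def fit_vocabulary(descriptor_list, k=500):
    all_desc = np.vstack(descriptor_list)   # descriptors pooled over training videos
    return KMeans(n_clusters=k, n_init=10).fit(all_desc)

def bow_histogram(descriptors, vocab):
    words = vocab.predict(descriptors)
    hist = np.bincount(words, minlength=vocab.n_clusters).astype(float)
    return hist / (hist.sum() + 1e-8)

# train_desc: list of (n_i, d) descriptor arrays per video, y: action labels
# vocab = fit_vocabulary(train_desc)
# X = np.array([bow_histogram(d, vocab) for d in train_desc])
# clf = LinearSVC().fit(X, y)
```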

Relevance: 90.00%

Abstract:

Distinct neural populations carry signals from short-wave (S) cones. We used individual differences to test whether two types of pathways, those that receive excitatory input (S+) and those that receive inhibitory input (S-), contribute independently to psychophysical performance. We also conducted a genome-wide association study (GWAS) to look for genetic correlates of the individual differences. Our psychophysical test was based on the Cambridge Color Test, but detection thresholds were measured separately for S-cone spatial increments and decrements. Our participants were 1060 healthy adults aged 16-40. Test-retest reliabilities for thresholds were good (ρ=0.64 for S-cone increments, 0.67 for decrements and 0.73 for the average of the two). "Regression scores," isolating variability unique to incremental or decremental sensitivity, were also reliable (ρ=0.53 for increments and ρ=0.51 for decrements). The correlation between incremental and decremental thresholds was ρ=0.65. No genetic markers reached genome-wide significance (p < 10^-7). We identified 18 "suggestive" loci (p < 10^-5). The significant test-retest reliabilities show stable individual differences in S-cone sensitivity in a normal adult population. Though a portion of the variance in sensitivity is shared between incremental and decremental sensitivity, over 26% of the variance is stable across individuals, but unique to increments or decrements, suggesting distinct neural substrates. Some of the variability in sensitivity is likely to be genetic. We note that four of the suggestive associations found in the GWAS are with genes that are involved in glucose metabolism or have been associated with diabetes.
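For readers unfamiliar with the two statistics, the snippet below shows, with hypothetical threshold arrays, how a Spearman test-retest reliability and a "regression score" residualising incremental on decremental thresholds might be computed; the study's own analysis details are not reproduced here.

```python
# Worked illustration of the two reported statistics (arrays are hypothetical).
import numpy as np
from scipy.stats import spearmanr

def test_retest(session1, session2):
    """Spearman rank correlation between two testing sessions."""
    rho, _ = spearmanr(session1, session2)
    return rho

def regression_score(increments, decrements):
    """Residuals of incremental thresholds after linear regression on
    decremental thresholds: the variance unique to increments."""
    slope, intercept = np.polyfit(decrements, increments, deg=1)
    return increments - (slope * decrements + intercept)
```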

Relevance: 90.00%

Abstract:

The OSCAR test, a clinical device that uses counterphase flicker photometry, is believed to be sensitive to the relative numbers of long-wavelength and middle-wavelength cones in the retina, as well as to individual variations in the spectral positions of the photopigments. As part of a population study of individual variations in perception, we obtained OSCAR settings from 1058 participants. We report the distribution characteristics for this cohort. A randomly selected subset of participants was tested twice at an interval of at least one week: the test-retest reliability (Spearman's rho) was 0.80. In a whole-genome association analysis we found a provisional association with a single nucleotide polymorphism (rs16844995). This marker is close to the gene RXRG, which encodes a nuclear receptor, retinoid X receptor γ. This nuclear receptor is already known to have a role in the differentiation of cones during the development of the eye, and we suggest that polymorphisms in or close to RXRG influence the relative probability with which long-wave and middle-wave opsin genes are expressed in human cones.

Relevance: 90.00%

Abstract:

Recent work suggests that the human ear varies significantly between different subjects and can be used for identification. In principle, therefore, using ears in addition to the face within a recognition system could improve accuracy and robustness, particularly for non-frontal views. The paper describes work that investigates this hypothesis using an approach based on the construction of a 3D morphable model of the head and ear. One issue with creating a model that includes the ear is that existing training datasets contain noise and partial occlusion. Rather than exclude these regions manually, a classifier has been developed which automates this process. When combined with a robust registration algorithm the resulting system enables full head morphable models to be constructed efficiently using less constrained datasets. The algorithm has been evaluated using registration consistency, model coverage and minimalism metrics, which together demonstrate the accuracy of the approach. To make it easier to build on this work, the source code has been made available online.
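As a rough illustration of one of the reported evaluation ideas, registration consistency can be quantified by how closely two registrations of corresponding points agree after rigid alignment. The Procrustes-based measure below is an assumption made for illustration and may differ from the paper's actual metric.

```python
# Illustrative (not the paper's) registration-consistency measure:
# Procrustes disparity between corresponding vertex sets from two
# registrations of the same subject; lower means more consistent.
from scipy.spatial import procrustes

def registration_consistency(landmarks_a, landmarks_b):
    """landmarks_a, landmarks_b: (n_points, 3) arrays of corresponding vertices."""
    _, _, disparity = procrustes(landmarks_a, landmarks_b)
    return disparity
```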

Relevance: 90.00%

Abstract:

Social signals, and the interpretation of the information they carry, are of high importance in Human Computer Interaction. Often used for affect recognition, the cues within these signals are displayed across various modalities, and fusing multi-modal signals is a natural and interesting way to improve the automatic classification of the emotions they transport. In most existing studies, of both uni-modal affect recognition and multi-modal fusion, decisions are forced onto fixed annotation segments across all modalities. In this paper, we investigate the less prevalent approach of event-driven fusion, which indirectly accumulates asynchronous events in all modalities for final predictions. We present a fusion approach that handles short-timed events in a vector space, which is of special interest for real-time applications, and we compare the results of segmentation-based uni-modal classification and fusion schemes to the event-driven fusion approach. The evaluation is carried out via the detection of enjoyment episodes within the audiovisual Belfast Story-Telling Corpus.
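A schematic of event-driven accumulation in a vector space might look like the sketch below: asynchronous uni-modal events are added into a shared state with exponential forgetting, so a prediction can be read out at any time rather than once per fixed segment. The half-life and the event embedding are illustrative assumptions, not the paper's design.

```python
# Schematic event-driven fusion: decaying accumulation of asynchronous
# event embeddings into a single fused state vector.
import numpy as np

class EventFusion:
    def __init__(self, dim, half_life=2.0):
        self.state = np.zeros(dim)
        self.decay = np.log(2) / half_life   # exponential forgetting rate
        self.last_t = 0.0

    def push(self, t, event_vec):
        """t: event time in seconds; event_vec: fixed-length embedding of a
        uni-modal event (e.g. a detected smile or a laughter burst)."""
        self.state *= np.exp(-self.decay * (t - self.last_t))
        self.state += np.asarray(event_vec, dtype=float)
        self.last_t = t

    def readout(self):
        """Current fused representation; can be classified at any time."""
        return self.state.copy()
```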

Relevance: 90.00%

Abstract:

Burkholderia cenocepacia causes opportunistic infections in plants, insects, animals, and humans, suggesting that “virulence” depends on the host and its innate susceptibility to infection. We hypothesized that modifications in key bacterial molecules recognized by the innate immune system modulate host responses to B. cenocepacia. Indeed, modification of lipopolysaccharide (LPS) with 4-amino-4-deoxy-L-arabinose and flagellin glycosylation attenuates B. cenocepacia infection in Arabidopsis thaliana and Galleria mellonella insect larvae. However, B. cenocepacia LPS and flagellin triggered rapid bursts of nitric oxide and reactive oxygen species in A. thaliana, leading to activation of the PR-1 defense gene. These responses were drastically reduced in plants with fls2 (flagellin FLS2 host receptor kinase), Atnoa1 (nitric oxide-associated protein 1), and dnd1-1 (reduced production of nitric oxide) null mutations. Together, our results indicate that LPS modification and flagellin glycosylation do not affect recognition by plant receptors but are required for bacteria to establish overt infection.

Relevance: 90.00%

Abstract:

In this paper we propose a novel recurrent neural network architecture for video-based person re-identification. Given the video sequence of a person, features are extracted from each frame using a convolutional neural network that incorporates a recurrent final layer, which allows information to flow between time-steps. The features from all time steps are then combined using temporal pooling to give an overall appearance feature for the complete sequence. The convolutional network, recurrent layer, and temporal pooling layer are jointly trained to act as a feature extractor for video-based re-identification using a Siamese network architecture. Our approach makes use of colour and optical flow information in order to capture appearance and motion information which is useful for video re-identification. Experiments are conducted on the iLIDS-VID and PRID-2011 datasets to show that this approach outperforms existing methods of video-based re-identification.

Project source code: https://github.com/niallmcl/Recurrent-Convolutional-Video-ReID
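A condensed PyTorch sketch of the described architecture is given below (see the linked repository for the authors' implementation). The toy frame encoder and the 5-channel input, assumed here to be RGB plus two optical-flow channels, are simplifications.

```python
# Sketch: per-frame CNN features feed a recurrent layer; temporal average
# pooling yields one sequence-level descriptor for Siamese training.
import torch
import torch.nn as nn

class RecurrentVideoEmbed(nn.Module):
    def __init__(self, feat_dim=128):
        super().__init__()
        self.cnn = nn.Sequential(                         # toy frame encoder
            nn.Conv2d(5, 16, 5, stride=2), nn.ReLU(),     # 5 ch: RGB + flow (x, y)
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.rnn = nn.RNN(32, feat_dim, batch_first=True)

    def forward(self, clip):                              # clip: (B, T, 5, H, W)
        b, t = clip.shape[:2]
        frames = self.cnn(clip.flatten(0, 1)).view(b, t, -1)
        out, _ = self.rnn(frames)                         # information flows between time-steps
        return out.mean(dim=1)                            # temporal pooling -> sequence feature
```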

Relevance: 90.00%

Abstract:

With the rapid development of internet-of-things (IoT), face scrambling has been proposed for privacy protection during IoT-targeted image/video distribution. Consequently in these IoT applications, biometric verification needs to be carried out in the scrambled domain, presenting significant challenges in face recognition. Since face models become chaotic signals after scrambling/encryption, a typical solution is to utilize traditional data-driven face recognition algorithms. While chaotic pattern recognition is still a challenging task, in this paper we propose a new ensemble approach – Many-Kernel Random Discriminant Analysis (MK-RDA) to discover discriminative patterns from chaotic signals. We also incorporate a salience-aware strategy into the proposed ensemble method to handle chaotic facial patterns in the scrambled domain, where random selections of features are made on semantic components via salience modelling. In our experiments, the proposed MK-RDA was tested rigorously on three human face datasets: the ORL face dataset, the PIE face dataset and the PUBFIG wild face dataset. The experimental results successfully demonstrate that the proposed scheme can effectively handle chaotic signals and significantly improve the recognition accuracy, making our method a promising candidate for secure biometric verification in emerging IoT applications.
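In the same spirit, though without the many-kernel machinery or salience-weighted sampling of MK-RDA, a plain random-subspace discriminant ensemble can be sketched as follows; it is a simplified stand-in, not the proposed method, and all ensemble sizes are arbitrary.

```python
# Simplified random-subspace discriminant ensemble: each base learner sees a
# random subset of (scrambled) features, identities decided by majority vote.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

class RandomSubspaceDA:
    def __init__(self, n_learners=50, subspace=0.2, seed=0):
        self.n_learners, self.subspace = n_learners, subspace
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y):
        d = X.shape[1]
        k = max(1, int(self.subspace * d))
        self.models = []
        for _ in range(self.n_learners):
            idx = self.rng.choice(d, size=k, replace=False)
            self.models.append((idx, LinearDiscriminantAnalysis().fit(X[:, idx], y)))
        return self

    def predict(self, X):
        votes = np.stack([m.predict(X[:, idx]) for idx, m in self.models])  # (L, n)
        out = []
        for col in votes.T:                       # majority vote per sample
            vals, counts = np.unique(col, return_counts=True)
            out.append(vals[np.argmax(counts)])
        return np.array(out)
```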

Relevance: 90.00%

Abstract:

Models of visual perception are based on image representations in cortical area V1 and higher areas which contain many cell layers for feature extraction. Basic simple, complex and end-stopped cells provide input for line, edge and keypoint detection. In this paper we present an improved method for multi-scale line/edge detection based on simple and complex cells. We illustrate the line/edge representation for object reconstruction, and we present models for multi-scale face (object) segregation and recognition that can be embedded into feedforward dorsal and ventral data streams (the “what” and “where” subsystems) with feedback streams from higher areas for obtaining translation, rotation and scale invariance.
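The simple/complex-cell front end is commonly modelled with quadrature Gabor pairs; the sketch below follows that convention. The frequencies and orientation count are arbitrary choices, and the paper's multi-scale line/edge classification step is not included.

```python
# Quadrature Gabor model of the cortical front end: the even/odd pair plays
# the role of simple cells, their modulus the role of complex cells, and
# responses are collected over several scales (frequencies) and orientations.
import numpy as np
from skimage.filters import gabor

def complex_cell_responses(image, frequencies=(0.05, 0.1, 0.2), n_orient=8):
    responses = {}
    for f in frequencies:
        for k in range(n_orient):
            theta = k * np.pi / n_orient
            even, odd = gabor(image, frequency=f, theta=theta)  # simple cells
            responses[(f, theta)] = np.hypot(even, odd)         # complex-cell energy
    return responses
```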

Relevance: 90.00%

Abstract:

Object recognition requires that templates with canonical views are stored in memory. Such templates must somehow be normalised. In this paper we present a novel method for obtaining 2D translation, rotation and size invariance. Cortical simple, complex and end-stopped cells provide multi-scale maps of lines, edges and keypoints. These maps are combined such that objects are characterised. Dynamic routing in neighbouring neural layers allows feature maps of input objects and stored templates to converge. We illustrate the construction of group templates and the invariance method for object categorisation and recognition in the context of a cortical architecture, which can be applied in computer vision.

Relevance: 90.00%

Abstract:

Human electrophysiological studies support a model whereby sensitivity to so-called illusory contour stimuli is first seen within the lateral occipital complex. A challenge to this model posits that the lateral occipital complex is a general site for crude region-based segmentation, based on findings of equivalent hemodynamic activations in the lateral occipital complex to illusory contour and so-called salient region stimuli, a stimulus class that lacks the classic bounding contours of illusory contours. Using high-density electrical mapping of visual evoked potentials, we show that early lateral occipital cortex activity is substantially stronger to illusory contour than to salient region stimuli, whereas later lateral occipital complex activity is stronger to salient region than to illusory contour stimuli. Our results suggest that equivalent hemodynamic activity to illusory contour and salient region stimuli probably reflects temporally integrated responses, a result of the poor temporal resolution of hemodynamic imaging. The temporal precision of visual evoked potentials is critical for establishing viable models of completion processes and visual scene analysis. We propose that crude spatial segmentation analyses, which are insensitive to illusory contours, occur first within dorsal visual regions, not the lateral occipital complex, and that initial illusory contour sensitivity is a function of the lateral occipital complex.