223 results for handwritten recognition
Abstract:
Many state-of-the-art vision-based Simultaneous Localisation And Mapping (SLAM) and place recognition systems compute the salience of visual features in their environment. As computing salience can be problematic in radically changing environments, new low-resolution, feature-less systems such as SeqSLAM have been introduced, all of which consider the whole image. In this paper, we implement a supervised classifier system (UCS) to learn the salience of image regions for place recognition by feature-less systems. On the challenging real-world Eynsham dataset, SeqSLAM benefits only slightly from the results of training, as it already appears to filter out less useful regions of a panoramic image. However, when recognition is limited to specific image regions, performance improves by more than an order of magnitude by utilising the learnt image region saliency. We then investigate whether the region salience generated from the Eynsham dataset generalizes to another car-based dataset using a perspective camera. The results suggest the general applicability of an image region salience mask for optimizing route-based navigation applications.
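A minimal Python sketch of the region-weighting idea only: low-resolution images are compared with a SeqSLAM-style sum of absolute differences, with each image region weighted by a learnt salience value. The image sizes, region grid and mask values are illustrative assumptions, not the trained UCS output.

```python
# Hypothetical sketch: weighting image regions by a learnt salience mask
# before computing a SeqSLAM-style sum-of-absolute-differences (SAD) score.
import numpy as np

def masked_sad(img_a, img_b, region_mask, region_size=8):
    """Compare two low-resolution images, weighting each region_size x region_size
    block of the absolute-difference image by its learnt salience."""
    diff = np.abs(img_a.astype(float) - img_b.astype(float))
    h, w = diff.shape
    score = 0.0
    for i, y in enumerate(range(0, h, region_size)):
        for j, x in enumerate(range(0, w, region_size)):
            block = diff[y:y + region_size, x:x + region_size]
            score += region_mask[i, j] * block.mean()
    return score

# Example: 32x64 panoramic thumbnails split into a 4x8 grid of regions.
rng = np.random.default_rng(0)
img_a = rng.integers(0, 256, (32, 64))
img_b = rng.integers(0, 256, (32, 64))
mask = rng.random((4, 8))          # learnt salience per region (stand-in values)
print(masked_sad(img_a, img_b, mask))
```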
Abstract:
In the field of face recognition, Sparse Representation (SR) has received considerable attention during the past few years. Most of the relevant literature focuses on holistic descriptors in closed-set identification applications. The underlying assumption in SR-based methods is that each class in the gallery has sufficient samples and that the query lies on the subspace spanned by the gallery of the same class. Unfortunately, such an assumption is easily violated in the more challenging face verification scenario, where an algorithm is required to determine whether two faces (where one or both have not been seen before) belong to the same person. In this paper, we first discuss why previous attempts with SR might not be applicable to verification problems. We then propose an alternative approach to face verification via SR. Specifically, we propose to use explicit SR encoding on local image patches rather than the entire face. The obtained sparse signals are pooled via averaging to form multiple region descriptors, which are then concatenated to form an overall face descriptor. Due to the deliberate loss of spatial relations within each region (caused by averaging), the resulting descriptor is robust to misalignment and various image deformations. Within the proposed framework, we evaluate several SR encoding techniques: l1-minimisation, Sparse Autoencoder Neural Network (SANN), and an implicit probabilistic technique based on Gaussian Mixture Models. Thorough experiments on the AR, FERET, exYaleB, BANCA and ChokePoint datasets show that the proposed local SR approach obtains considerably better and more robust performance than several previous state-of-the-art holistic SR methods, in both verification and closed-set identification problems. The experiments also show that l1-minimisation based encoding has a considerably higher computational cost than the other techniques, but leads to higher recognition rates.
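The local encoding, pooling and concatenation pipeline can be illustrated with a short, hedged Python sketch; a simple thresholded dictionary projection stands in for the paper's l1-minimisation, SANN and GMM encoders, and the patch size, region grid and dictionary are arbitrary stand-ins.

```python
# Minimal sketch (assumptions: patch size, dictionary and pooling layout are
# illustrative; a thresholded projection stands in for the paper's encoders).
import numpy as np

def encode_patch(patch, dictionary, threshold=0.1):
    """Crude sparse code: project onto the dictionary and zero small coefficients."""
    coeffs = dictionary @ patch
    coeffs[np.abs(coeffs) < threshold] = 0.0
    return coeffs

def face_descriptor(face, dictionary, patch=8, grid=(2, 2)):
    """Encode non-overlapping patches, average-pool the codes inside each region,
    then concatenate the region descriptors into one face descriptor."""
    h, w = face.shape
    rh, rw = h // grid[0], w // grid[1]
    regions = []
    for ry in range(grid[0]):
        for rx in range(grid[1]):
            codes = []
            for y in range(ry * rh, (ry + 1) * rh - patch + 1, patch):
                for x in range(rx * rw, (rx + 1) * rw - patch + 1, patch):
                    p = face[y:y + patch, x:x + patch].ravel()
                    codes.append(encode_patch(p, dictionary))
            regions.append(np.mean(codes, axis=0))   # pooling discards spatial order
    return np.concatenate(regions)

rng = np.random.default_rng(1)
D = rng.standard_normal((32, 64))                    # 32 atoms over 8x8 patches
print(face_descriptor(rng.standard_normal((64, 64)), D).shape)  # (128,)
```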
Abstract:
In recent years, sparse representation based classification (SRC) has received much attention in face recognition with multiple training samples of each subject. However, it cannot be easily applied to a recognition task with insufficient training samples under uncontrolled environments. On the other hand, cohort normalization, as a way of measuring the degradation effect under challenging environments in relation to a pool of cohort samples, has been widely used in the area of biometric authentication. In this paper, for the first time, we introduce cohort normalization to SRC-based face recognition with insufficient training samples. Specifically, a user-specific cohort set is selected to normalize the raw residual, which is obtained from comparing the test sample with its sparse representations corresponding to the gallery subject, using polynomial regression. Experimental results on the AR and FERET databases show that cohort normalization can bring SRC much robustness against various forms of degradation factors for undersampled face recognition.
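A loosely hedged Python sketch of the general cohort-normalization idea: the raw SRC residual is rescaled using a polynomial fitted to the sorted cohort residual profile. The exact cohort selection and normalisation rule used in the paper are not reproduced here; the rescaling rule below is only an illustrative assumption.

```python
# Illustrative only: polynomial regression over the sorted cohort residuals,
# used to rescale the raw residual. Not the paper's exact normalisation rule.
import numpy as np

def cohort_normalise(raw_residual, cohort_residuals, degree=2):
    """Fit a low-order polynomial to the sorted cohort residual profile and
    normalise the raw residual by the fitted profile's mean level."""
    profile = np.sort(cohort_residuals)
    idx = np.arange(len(profile))
    coeffs = np.polyfit(idx, profile, degree)
    fitted = np.polyval(coeffs, idx)
    return raw_residual / (fitted.mean() + 1e-12)

rng = np.random.default_rng(8)
print(cohort_normalise(0.8, rng.uniform(0.5, 1.5, 50)))
```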
Abstract:
To recognize faces in video, face appearances have been widely modeled as piece-wise local linear models which linearly approximate the smooth yet non-linear low-dimensional face appearance manifolds. The choice of representation for the local models is crucial. Most existing methods learn each local model individually, meaning that they only anticipate variations within each class. In this work, we propose to represent local models as Gaussian distributions which are learned simultaneously using heteroscedastic probabilistic linear discriminant analysis (PLDA). Each gallery video is therefore represented as a collection of such distributions. With the PLDA, not only are the within-class variations estimated during training, but the separability between classes is also maximized, leading to improved discrimination. The heteroscedastic PLDA itself is adapted from the standard PLDA to approximate face appearance manifolds more accurately. Instead of assuming a single global within-class covariance, the heteroscedastic PLDA learns different within-class covariances specific to each local model. In the recognition phase, a probe video is matched against gallery samples through the fusion of point-to-model distances. Experiments on the Honda and MoBo datasets have shown the merit of the proposed method, which achieves better performance than the state-of-the-art technique.
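A hedged Python sketch of the matching stage only: each gallery video is summarised by a set of local Gaussian models (simple stand-ins for the heteroscedastic PLDA models), and a probe video is scored by fusing per-frame point-to-model Mahalanobis distances. The model parameters and the averaging fusion rule are illustrative assumptions.

```python
# Stand-in for the paper's point-to-model matching: Mahalanobis distance from
# each probe frame to its nearest local Gaussian, fused by averaging over frames.
import numpy as np

def point_to_model(x, mean, cov):
    """Mahalanobis distance from a probe feature x to one local Gaussian model."""
    d = x - mean
    return float(np.sqrt(d @ np.linalg.solve(cov, d)))

def match_video(probe_frames, gallery_models):
    """Each frame keeps its nearest local model; the video-level score is the
    mean of those minima (lower is better)."""
    per_frame = [min(point_to_model(x, m, c) for m, c in gallery_models)
                 for x in probe_frames]
    return float(np.mean(per_frame))

rng = np.random.default_rng(2)
models = [(rng.standard_normal(5), np.eye(5)) for _ in range(3)]   # 3 local models
probe = rng.standard_normal((10, 5))                               # 10 probe frames
print(match_video(probe, models))
```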
Abstract:
Increasing awareness of the benefits of stimulating entrepreneurial behaviour in small and medium enterprises has fostered strong interest in innovation programs. Recently many western countries have invested in design innovation for better firm performance. This research presents some early findings from a study of companies that participated in a holistic approach to design innovation, where the outcomes include better business performance and better market positioning in global markets. Preliminary findings from in-depth semi-structured interviews indicate the importance of firm openness to new ways of working and to developing new processes of strategic entrepreneurship. Implications for theory and practice are discussed.
Abstract:
There is an army of bottom of the pyramid entrepreneurs (BOPE) who have the potential to transform developing economies, if they can identify and exploit business opportunities. BOPE could have unidentified resources that could lead to the recognition of radical new opportunities. This paper asks how environmental factors and the identification of resources affect opportunity recognition by BOP entrepreneurs in developing economies. To investigate this research question we conduct a literature review and plan semi-structured interviews of existing and nascent entrepreneurs in one of the largest and arguably the poorest countries in Africa, the Democratic Republic of the Congo. In this paper we review the context of BOPE and describe the methodology we will use to gather and analyse data. Finally, we describe our access to suitable respondents for this study and how it will be conducted.
Abstract:
This paper investigates advanced channel compensation techniques for the purpose of improving i-vector speaker verification performance in the presence of high intersession variability, using the NIST 2008 and 2010 SRE corpora. The performance of four channel compensation techniques is investigated: (a) weighted maximum margin criterion (WMMC), (b) source-normalized WMMC (SN-WMMC), (c) weighted linear discriminant analysis (WLDA), and (d) source-normalized WLDA (SN-WLDA). We show that, by extracting the discriminatory information between pairs of speakers as well as capturing the source variation information in the development i-vector space, the SN-WLDA based cosine similarity scoring (CSS) i-vector system provides over 20% improvement in EER for NIST 2008 interview and microphone verification and over 10% improvement in EER for NIST 2008 telephone verification, when compared to the SN-LDA based CSS i-vector system. Further, score-level fusion techniques are analyzed to combine the best channel compensation approaches, providing over 8% improvement in DCF over the best single approach (SN-WLDA) for the NIST 2008 interview/telephone enrolment-verification condition. Finally, we demonstrate that the improvements found in the context of CSS also generalize to state-of-the-art GPLDA, with up to 14% relative improvement in EER for NIST SRE 2010 interview and microphone verification and over 7% relative improvement in EER for NIST SRE 2010 telephone verification.
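The cosine similarity scoring step can be illustrated with a brief Python sketch; the projection matrix below is a random stand-in for the trained WLDA/SN-WLDA channel-compensation transform, and the i-vector dimensionality is an assumption.

```python
# Minimal sketch of cosine similarity scoring (CSS) after a channel-compensation
# projection; the projection here is NOT a trained WLDA/SN-WLDA transform.
import numpy as np

def css_score(w_enrol, w_test, projection):
    """Project two i-vectors and return their cosine similarity."""
    a = projection @ w_enrol
    b = projection @ w_test
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

rng = np.random.default_rng(3)
P = rng.standard_normal((200, 400))   # stand-in compensation matrix (400-dim i-vectors)
enrol, test = rng.standard_normal(400), rng.standard_normal(400)
print(css_score(enrol, test, P))
```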
Abstract:
Odours emitted by flowers are complex blends of volatile compounds. These odours are learnt by flower-visiting insect species, improving their recognition of rewarding flowers and thus their foraging efficiency. We investigated the flexibility of floral odour learning by testing whether adult moths recognize single compounds common to flowers on which they forage. Dual-choice preference tests on Helicoverpa armigera moths allowed free-flying moths to forage on one of three flower species: Argyranthemum frutescens (federation daisy), Cajanus cajan (pigeonpea) or Nicotiana tabacum (tobacco). Results showed that (i) a benzenoid (phenylacetaldehyde) and a monoterpene (linalool) were subsequently recognized after visits to flowers that emitted these volatile constituents, (ii) in a preference test, other monoterpenes in the flowers' odour did not affect the moths' ability to recognize the monoterpene linalool and (iii) relative preferences for two volatiles changed after foraging experience on a single flower species that emitted both volatiles. The importance of using free-flying insects and real flowers to understand the mechanisms involved in floral odour learning in nature is discussed in the context of our findings.
Abstract:
In this paper we use the SeqSLAM algorithm to address the question: how little visual information, and of what quality, is needed to localize along a familiar route? We conduct a comprehensive investigation of place recognition performance on seven datasets while varying image resolution (primarily 1 to 512 pixel images), pixel bit depth, field of view, motion blur, image compression and matching sequence length. Results confirm that place recognition using single images or short image sequences is poor, but improves to match or exceed current benchmarks as the matching sequence length increases. We then present place recognition results from two experiments where low-quality imagery is directly caused by sensor limitations; in one, place recognition is achieved along an unlit mountain road using noisy, long-exposure blurred images, and in the other, two single-pixel light sensors are used to localize in an indoor environment. We also show failure modes caused by pose variance and sequence aliasing, and discuss ways in which they may be overcome. By showing how place recognition along a route is feasible even with severely degraded image sequences, we hope to provoke a re-examination of how we develop and test future localization and mapping systems.
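A small Python sketch of the sequence-matching principle the paper exploits: single-image difference scores are averaged along an assumed one-to-one frame alignment, so longer matching sequences suppress individually poor frames. The image sizes, noise level and fixed-speed alignment are illustrative assumptions rather than the SeqSLAM implementation.

```python
# Illustrative sketch of sequence-based matching with tiny, noisy images.
import numpy as np

def sequence_score(probe_seq, ref_images, ref_start):
    """Mean image difference between a probe sequence and the reference
    sequence starting at ref_start (lower is better)."""
    diffs = [np.abs(p.astype(float) - ref_images[ref_start + k].astype(float)).mean()
             for k, p in enumerate(probe_seq)]
    return float(np.mean(diffs))

rng = np.random.default_rng(4)
reference = rng.integers(0, 256, (100, 8, 16))               # 100 tiny reference images
probe = reference[40:50] + rng.normal(0, 40, (10, 8, 16))    # noisy revisit of places 40-49
best = min(range(len(reference) - 10), key=lambda s: sequence_score(probe, reference, s))
print(best)   # should recover a start index near 40
```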
Abstract:
Uncooperative iris identification systems at a distance suffer from poor resolution of the acquired iris images, which significantly degrades iris recognition performance. Super-resolution techniques have been employed to enhance the resolution of iris images and improve the recognition performance. However, most existing super-resolution approaches proposed for the iris biometric super-resolve pixel intensity values, rather than the actual features used for recognition. This paper thoroughly investigates transferring super-resolution of iris images from the intensity domain to the feature domain. By directly super-resolving only the features essential for recognition, and by incorporating domain specific information from iris models, improved recognition performance compared to pixel domain super-resolution can be achieved. A framework for applying super-resolution to nonlinear features in the feature-domain is proposed. Based on this framework, a novel feature-domain super-resolution approach for the iris biometric employing 2D Gabor phase-quadrant features is proposed. The approach is shown to outperform its pixel domain counterpart, as well as other feature domain super-resolution approaches and fusion techniques.
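For illustration only, a short Python sketch of one very simple feature-domain fusion: binary phase-quadrant-style codes from several low-resolution captures are combined by majority vote. This is a stand-in for the paper's feature-domain super-resolution framework, not its actual approach; the code length and bit-error rate are assumptions.

```python
# Hedged illustration: majority-vote fusion of several noisy binary iris codes,
# standing in for feature-domain enhancement from multiple low-res captures.
import numpy as np

def fuse_codes(codes):
    """Majority-vote fusion of several binary codes into one enhanced code."""
    return (np.mean(codes, axis=0) >= 0.5).astype(np.uint8)

def hamming_distance(a, b):
    return float(np.mean(a != b))

rng = np.random.default_rng(5)
true_code = rng.integers(0, 2, 512, dtype=np.uint8)
noisy = [np.where(rng.random(512) < 0.2, 1 - true_code, true_code) for _ in range(7)]
fused = fuse_codes(noisy)
# Error of a single noisy capture vs. error of the fused code (should be lower).
print(hamming_distance(noisy[0], true_code), hamming_distance(fused, true_code))
```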
Abstract:
The recognition and enforcement of foreign judgments is an aspect of private international law, and concerns situations where a successful party to litigation seeks to rely on a judgment obtained in one court, in a court in another jurisdiction. The most common example where the recognition and enforcement of foreign judgments may arise is where a party who has obtained a favourable judgment in one state or country seeks to recognise and enforce the judgment in another state or country. This typically occurs because there are insufficient assets in the state or country where the judgment was rendered to satisfy that judgment. As communications technology spanning vast geographical distances has advanced rapidly in recent years, there has been an increase in cross-border transactions, as well as litigation arising from these transactions. As a result, the recognition and enforcement of foreign judgments is of increasing importance, since a party who has obtained a judgment in cross-border litigation may wish to recognise and enforce the judgment in another state or country, where the defendant's assets may be located, without having to re-litigate substantive issues that have already been resolved in another court. The purpose of the study is to examine whether the current state of laws for the recognition and enforcement of foreign judgments in Australia, the United States and the European Community are in line with modern commercial needs. The study is conducted by weighing two competing objectives: on the one hand, the notion of finality of litigation, which encourages courts to recognise and enforce judgments foreign to them; and on the other, the adequacy of protection available to safeguard recognition and enforcement proceedings, so that no injustice or unfairness results if a foreign judgment is recognised and enforced.
The findings of the study are as follows. Both Australia and the United States take different approaches to the recognition and enforcement of judgments depending on whether they were rendered by courts interstate or in a foreign country. In order to maintain a single and integrated nation, there are constitutional and legislative requirements authorising courts to give conclusive effect to interstate judgments. In contrast, if the recognition and enforcement action involves a judgment rendered by a foreign country's court, an Australian or a United States court will not recognise and enforce the foreign judgment unless the judgment has satisfied a number of requirements and does not fall under any of the exceptions that justify its non-recognition and non-enforcement. In the European Community, the Brussels I Regulation, which governs the recognition and enforcement of judgments among European Union Member States, has created a scheme whereby only a minimal requirement needs to be satisfied for the purposes of recognition and enforcement. Moreover, a judgment that is rendered by a Member State and based on any of the jurisdictional bases set forth in the Brussels I Regulation is entitled to be recognised and enforced in another Member State without further review of its underlying jurisdictional basis. However, there are concerns as to the adequacy of protection available under the Brussels I Regulation to safeguard the judgment-enforcing Member States, as well as those against whom recognition or enforcement is sought.
This dissertation concludes by making two recommendations aimed at improving the means by which foreign judgments are recognised and enforced in the selected jurisdictions. The first is for the law in both Australia and the United States to undergo reform, including: adopting the real and substantial connection test as the new jurisdictional basis for the purposes of recognition and enforcement; liberalising the existing defences to safeguard the application of the real and substantial connection test; extending the application of the Foreign Judgments Act 1991 (Cth) in Australia to include at least its important trading partners; and implementing a federal statutory scheme in the United States to govern the recognition and enforcement of foreign judgments. The second recommendation is to introduce a convention on jurisdiction and the recognition and enforcement of foreign judgments. The convention will be a convention double, which provides uniform standards for the rules of jurisdiction a court in a contracting state must exercise when rendering a judgment and a set of provisions for the recognition and enforcement of resulting judgments.
Abstract:
BACKGROUND & AIMS Metabolomics is the comprehensive analysis of low-molecular-weight endogenous metabolites in a biological sample. It can map perturbations in the early biochemical changes of disease and hence provides an opportunity to develop predictive biomarkers that offer valuable insights into disease mechanisms. The aim of this study was to elucidate the changes in endogenous metabolites and to phenotype the metabolic profile of d-galactosamine (GalN)-induced acute hepatitis in rats by UPLC-ESI MS. METHODS The systemic biochemical actions of GalN administration (ip, 400 mg/kg) were investigated in male Wistar rats using conventional clinical chemistry, liver histopathology and metabolomic analysis of urine by UPLC-ESI MS. Urine was collected predose (-24 to 0 h) and at 0-24, 24-48, 48-72 and 72-96 h post-dose. The urinary mass spectra were analysed visually and in conjunction with multivariate data analysis. RESULTS The results demonstrated a time-dependent biochemical effect of GalN dosing on the levels of a range of low-molecular-weight metabolites in urine, which correlated with the developing phase of GalN-induced acute hepatitis. Urinary excretion of beta-hydroxybutanoic acid and citric acid decreased following GalN dosing, whereas excretion of glycocholic acid, indole-3-acetic acid, sphinganine, N-acetyl-L-phenylalanine, cholic acid and creatinine increased, suggesting that several key metabolic pathways, such as energy metabolism, lipid metabolism and amino acid metabolism, were perturbed by GalN. CONCLUSION This metabolomic investigation demonstrates that this robust, non-invasive tool offers insight into the metabolic states of disease.
Abstract:
In this paper, we explore the effectiveness of patch-based gradient feature extraction methods when applied to appearance-based gait recognition. Extending existing popular feature extraction methods such as HOG and LDP, we propose a novel technique which we term the Histogram of Weighted Local Directions (HWLD). These three methods are applied to gait recognition using the GEI feature, with classification performed using SRC. Evaluations on the CASIA and OULP datasets show significant improvements using these patch-based methods over existing implementations, with the proposed method achieving the highest recognition rate on the respective datasets. In addition, the HWLD can easily be extended to 3D, which we demonstrate using the GEV feature on the DGD dataset, observing improvements in performance.
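A generic Python sketch of the patch-based gradient-histogram pipeline that HOG-style descriptors share: gradient directions are histogrammed per patch, weighted by gradient magnitude, and concatenated into a feature vector. The specific weighting that defines HWLD is the paper's contribution and is not reproduced here; the patch size and bin count are assumptions.

```python
# Generic patch-based gradient-direction histograms over a gait energy image (GEI).
import numpy as np

def patch_direction_histograms(gei, patch=16, bins=8):
    """Split the GEI into patches, histogram gradient directions (weighted by
    gradient magnitude) in each patch, and concatenate into one feature vector."""
    gy, gx = np.gradient(gei.astype(float))
    mag = np.hypot(gx, gy)
    ang = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    feats = []
    for y in range(0, gei.shape[0] - patch + 1, patch):
        for x in range(0, gei.shape[1] - patch + 1, patch):
            h, _ = np.histogram(ang[y:y + patch, x:x + patch], bins=bins,
                                range=(0, 2 * np.pi),
                                weights=mag[y:y + patch, x:x + patch])
            feats.append(h)
    return np.concatenate(feats)

gei = np.random.default_rng(6).random((128, 64))
print(patch_direction_histograms(gei).shape)   # 8 x 4 patches * 8 bins = (256,)
```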
Abstract:
A significant amount of speech is typically required for speaker verification system development and evaluation, especially in the presence of large intersession variability. This paper introduces source and utterance-duration normalized linear discriminant analysis (SUN-LDA) approaches to compensate for session variability in short-utterance i-vector speaker verification systems. Two variations of SUN-LDA are proposed in which normalization techniques are used to capture source variation from both short and full-length development i-vectors, one based upon pooling (SUN-LDA-pooled) and the other on concatenation (SUN-LDA-concat) across the duration- and source-dependent session variation. Both the SUN-LDA-pooled and SUN-LDA-concat techniques are shown to provide improvement over traditional LDA on the NIST 08 truncated 10sec-10sec evaluation conditions, with the highest improvement obtained with the SUN-LDA-concat technique, achieving a relative improvement of 8% in EER for mismatched conditions and over 3% for matched conditions over traditional LDA approaches.
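A hedged Python sketch of the pooling variant's core idea only: within- and between-class scatter is estimated from short and full-length development i-vectors pooled together, and a standard LDA projection is solved. Dimensions, speaker counts and the regularisation term are illustrative assumptions; the concatenation variant is not shown.

```python
# Plain LDA eigenproblem on pooled short + full-length development i-vectors,
# as a simplified stand-in for the SUN-LDA-pooled training step.
import numpy as np

def lda_projection(ivectors, labels, n_dims):
    """Solve a standard LDA eigenproblem on the pooled development i-vectors."""
    classes = np.unique(labels)
    mu = ivectors.mean(axis=0)
    d = ivectors.shape[1]
    Sw = np.zeros((d, d))
    Sb = np.zeros((d, d))
    for c in classes:
        X = ivectors[labels == c]
        mc = X.mean(axis=0)
        Sw += (X - mc).T @ (X - mc)
        Sb += len(X) * np.outer(mc - mu, mc - mu)
    vals, vecs = np.linalg.eig(np.linalg.solve(Sw + 1e-6 * np.eye(d), Sb))
    order = np.argsort(-vals.real)[:n_dims]
    return vecs[:, order].real.T

rng = np.random.default_rng(7)
short_iv, full_iv = rng.standard_normal((50, 20)), rng.standard_normal((50, 20))
labels = np.tile(np.arange(10), 10)          # 10 speakers, short + full sets pooled
P = lda_projection(np.vstack([short_iv, full_iv]), labels, n_dims=9)
print(P.shape)   # (9, 20)
```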
Abstract:
The submission recommended the addition of a new 'self-enacting' preamble and enacting words to the Commonwealth Constitution, and the replacement of the 'race power' with a series of more specific powers relating to the recognition of native title and the laws of Indigenous people.