994 results for Bank Check Recognition
Abstract:
Automatic signature verification is a well-established and active area of research with numerous applications such as bank check verification, ATM access, etc. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling that employs the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted with a box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to account for possible variations due to handwriting styles and moods. The membership functions constitute weights in the TS model. Optimizing the output of the TS model with respect to the structural parameters yields the solution for the parameters. We have also derived two TS models: one with a rule for each input feature (multiple rules) and one with a single rule for all input features. We found that the TS model with multiple rules is better than the single-rule model at detecting three types of forgeries (random, skilled and unskilled) from a large database of sample signatures, in addition to verifying genuine signatures. We have also devised three approaches, viz., an innovative approach and two intuitive approaches, using the TS model with multiple rules for improved performance. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
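The core mechanism described above can be sketched in a few lines: each feature is fuzzified by an exponential membership function, and the memberships act as rule weights in a zero-order TS model. This is a minimal illustrative sketch, not the authors' formulation; the function names, the extra scale parameter `t`, and the exact parameterisation of the structural parameters (here simply a centre `c` and spread `s` per feature) are assumptions.

```python
import numpy as np

def exponential_membership(x, c, s, t=1.0):
    """Exponential membership of feature x in a fuzzy set centred at c.

    c (mean) and s (spread) stand in for the paper's structural
    parameters; t is an extra scale factor. All names are illustrative.
    """
    return np.exp(-t * ((x - c) ** 2) / (2.0 * s ** 2 + 1e-12))

def ts_output(features, centres, spreads, consequents):
    """Zero-order Takagi-Sugeno output with one rule per input feature.

    Each membership value acts as a rule weight; the output is the
    weighted average of the rule consequents.
    """
    w = exponential_membership(np.asarray(features),
                               np.asarray(centres),
                               np.asarray(spreads))
    return float(np.sum(w * np.asarray(consequents)) / (np.sum(w) + 1e-12))
```

With features exactly at the fuzzy-set centres, every rule fires fully and the output reduces to the plain average of the consequents; as a feature drifts from its centre, its rule's influence decays exponentially.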
Abstract:
We describe a novel method for human activity segmentation and interpretation in surveillance applications based on Gabor filter-bank features. A complex human activity is modeled as a sequence of elementary human actions such as walking, running, jogging, boxing and hand-waving. Since a human silhouette can be modeled by a set of rectangles, the elementary human actions can be modeled as a sequence of sets of rectangles with different orientations and scales. The activity segmentation is based on Gabor filter-bank features and normalized spectral clustering. The feature trajectories of an action category are learnt from training example videos using dynamic time warping. The combined segmentation and recognition processes are very efficient, as both algorithms share the same framework and the Gabor features computed for the former can be reused for the latter. We have also proposed a simple shadow detection technique to extract a clean silhouette, which is necessary for good accuracy in action recognition.
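A Gabor filter bank of the kind used above is a set of oriented, scale-tuned kernels. The sketch below builds the real part of a 2-D Gabor kernel from its standard definition and assembles a small bank over orientations and wavelengths; the kernel sizes, wavelengths and parameter defaults are illustrative assumptions, not the paper's configuration.

```python
import numpy as np

def gabor_kernel(size, sigma, theta, lam, gamma=0.5, psi=0.0):
    """Real part of a 2-D Gabor kernel at orientation theta, wavelength lam."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    # Rotate coordinates to the filter orientation.
    xr = x * np.cos(theta) + y * np.sin(theta)
    yr = -x * np.sin(theta) + y * np.cos(theta)
    # Gaussian envelope times an oriented cosine carrier.
    return (np.exp(-(xr ** 2 + gamma ** 2 * yr ** 2) / (2 * sigma ** 2))
            * np.cos(2 * np.pi * xr / lam + psi))

def gabor_bank(size=15, sigma=3.0, scales=(6.0, 9.0), n_orient=4):
    """Filter bank over several orientations and wavelengths (scales)."""
    return [gabor_kernel(size, sigma, k * np.pi / n_orient, lam)
            for lam in scales for k in range(n_orient)]
```

Convolving a silhouette image with each kernel and pooling the responses yields an orientation/scale feature vector per frame, matching the rectangle-based action model described above.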
Abstract:
This paper introduces a new technique for palmprint recognition based on Fisher Linear Discriminant Analysis (FLDA) and a Gabor filter bank. The method involves convolving a palmprint image with a bank of Gabor filters at different scales and rotations for robust palmprint feature extraction. Once these features are extracted, FLDA is applied for dimensionality reduction and class separability. Since the palmprint features are derived from the principal lines, wrinkles and texture along the palm area, one should carefully consider this fact when selecting the appropriate palm region for the feature extraction process in order to enhance recognition accuracy. To address this problem, an improved region of interest (ROI) extraction algorithm is introduced. This algorithm allows for efficient extraction of the whole palm area by ignoring all undesirable parts, such as the fingers and background. Experiments have shown that the proposed method yields attractive performance, as evidenced by an Equal Error Rate (EER) of 0.03%.
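For intuition on the FLDA step, here is a minimal two-class Fisher discriminant: it finds the projection direction that maximises between-class scatter relative to within-class scatter, w ∝ Sw⁻¹(m1 − m2). The paper's multi-class FLDA on Gabor features is more involved; this sketch, including the ridge term added for numerical stability, is an illustrative assumption.

```python
import numpy as np

def fisher_direction(X1, X2):
    """Two-class Fisher linear discriminant direction.

    X1, X2 : (n_samples, n_features) arrays, one per class.
    Returns the unit vector w maximising between-class over
    within-class scatter: w is proportional to Sw^{-1} (m1 - m2).
    """
    m1, m2 = X1.mean(axis=0), X2.mean(axis=0)
    # Pooled within-class scatter matrix.
    Sw = (np.cov(X1, rowvar=False) * (len(X1) - 1)
          + np.cov(X2, rowvar=False) * (len(X2) - 1))
    # Small ridge keeps Sw invertible for high-dimensional features.
    w = np.linalg.solve(Sw + 1e-6 * np.eye(Sw.shape[0]), m1 - m2)
    return w / np.linalg.norm(w)
```

Projecting Gabor feature vectors onto such discriminant directions gives the low-dimensional, class-separated representation on which matching is performed.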
Abstract:
A mosaic of two WorldView-2 high resolution multispectral images (acquisition dates: October 2010 and April 2012), in conjunction with field survey data, was used to create a habitat map of the Danajon Bank, Philippines (10°15'0'' N, 124°08'0'' E) using an object-based approach. To create the habitat map, we conducted benthic cover (seafloor) field surveys using two methods. First, we undertook georeferenced point intercept transects (English et al., 1997). For ten sites we recorded habitat cover types at 1 m intervals on 10 m long transects (n = 2,070 points). Second, we conducted georeferenced spot check surveys, by placing a viewing bucket in the water to estimate the percent cover of benthic cover types (n = 2,357 points). Survey locations were chosen to cover a diverse and representative subset of habitats found in the Danajon Bank. The combination of methods was a compromise between the higher accuracy of point intercept transects and the larger sample area achievable through spot check surveys (Roelfsema and Phinn, 2008, doi:10.1117/12.804806). Object-based image analysis, using the field data as calibration data, was used to classify the image mosaic at each of the reef, geomorphic and benthic community levels. The benthic community level segregated the image into a total of 17 pure and mixed benthic classes.
Abstract:
A fundamental part of many authentication protocols which authenticate a party to a human involves the human recognizing or otherwise processing a message received from the party. Examples include typical implementations of Verified by Visa, in which a message previously stored by the human at a bank is sent by the bank to the human to authenticate the bank to the human; or the expectation that humans will recognize or verify an extended validation certificate in an HTTPS context. This paper presents general definitions and building blocks for the modelling and analysis of human recognition in authentication protocols, allowing the creation of proofs for protocols which include humans. We cover both generalized trawling and human-specific targeted attacks. As examples of the range of uses of our construction, we use the model presented in this paper to prove the security of a mutual authentication login protocol and a human-assisted device pairing protocol.
Abstract:
It is well established that a sequence template, along with a database, is a powerful tool for identifying the biological function of proteins. Here, we describe a method for predicting the catalytic nature of certain proteins among the several protein structures deposited in the Protein Data Bank (PDB). For the present study, we considered a catalytic triad template (Ser-His-Asp) found in serine proteases. We found that a geometrically optimized active-site template can be used as a highly selective tool for differentiating an active protein among several inactive proteins, based on their Ser-His-Asp interactions. For any protein to be proteolytic in nature, the bond angle Ser O-gamma-Ser H-gamma...His N-epsilon 2 in the catalytic triad needs to be between 115 degrees and 140 degrees. The hydrogen bond distance between Ser H-gamma and His N-epsilon 2 is more flexible in nature, varying from 2.0 angstrom to 2.7 angstrom, while in the case of His H-delta 1...Asp O-delta 1 it is from 1.6 angstrom to 2.0 angstrom. In terms of solvent accessibility, most of the active proteins lie in the range of 10-16 angstrom(2), which enables easy accessibility to the substrate. These observations hold good for most catalytic triads and can be employed to predict the proteolytic nature of these catalytic triads. (C) 2010 Elsevier B.V. All rights reserved.
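The geometric ranges quoted above translate directly into a screening predicate. The sketch below checks a candidate triad against those published thresholds; the function and parameter names are hypothetical, and computing the angle, distances and solvent-accessible surface area from PDB coordinates is assumed to be done elsewhere.

```python
def is_proteolytic(angle_ser_his, d_ser_his, d_his_asp, sasa):
    """Screen a Ser-His-Asp triad against the geometric ranges above.

    angle_ser_his : Ser O-gamma-H-gamma...His N-epsilon 2 angle, degrees
                    (active range 115-140)
    d_ser_his     : Ser H-gamma...His N-epsilon 2 H-bond distance, angstrom
                    (active range 2.0-2.7)
    d_his_asp     : His H-delta 1...Asp O-delta 1 distance, angstrom
                    (active range 1.6-2.0)
    sasa          : solvent-accessible surface area, angstrom^2 (10-16)
    Thresholds come from the abstract; the helper itself is an
    illustrative sketch, not the authors' code.
    """
    return (115.0 <= angle_ser_his <= 140.0
            and 2.0 <= d_ser_his <= 2.7
            and 1.6 <= d_his_asp <= 2.0
            and 10.0 <= sasa <= 16.0)
```

A triad failing any one of the four criteria is predicted inactive, which is how a single optimized template can discriminate one active protein among many inactive ones.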
Abstract:
In this paper, we propose a novel dexterous technique for fast and accurate recognition of online handwritten Kannada and Tamil characters. Based on the primary classifier output and prior knowledge, the best classifier is chosen from a set of three classifiers for second-stage classification. Prior knowledge is obtained through analysis of the confusion matrix of the primary classifier, which helped in identifying the multiple sets of confused characters. Further, studies were carried out to check the performance of the secondary classifiers in disambiguating among the confusion sets. Using this technique, we have achieved an average accuracy of 92.6% for Kannada characters on the MILE lab dataset and 90.2% for Tamil characters on the HP Labs dataset.
Abstract:
The study of emotions in human-computer interaction is a growing research area. This paper presents an attempt to select the most significant features for emotion recognition in spoken Basque and Spanish using different methods for feature selection. The RekEmozio database was used as the experimental data set. Several Machine Learning paradigms were used for the emotion classification task. Experiments were executed in three phases, using different sets of features as classification variables in each phase. Moreover, feature subset selection was applied at each phase in order to seek the most relevant feature subset. The three-phase approach was selected to check the validity of the proposed approach. The achieved results show that an instance-based learning algorithm using feature subset selection techniques based on evolutionary algorithms is the best Machine Learning paradigm in automatic emotion recognition, with all different feature sets, obtaining a mean emotion recognition rate of 80.05% in Basque and 74.82% in Spanish. In order to check the goodness of the proposed process, a greedy searching approach (FSS-Forward) has been applied and a comparison between them is provided. Based on the achieved results, a set of the most relevant non-speaker-dependent features is proposed for both languages and new perspectives are suggested.
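The greedy FSS-Forward baseline mentioned above works by repeatedly adding the single feature that most improves an evaluation score until no addition helps. The sketch below is a generic version of that wrapper search; the function names and the `evaluate` callback (e.g. a cross-validated recognition rate) are assumptions.

```python
def fss_forward(features, evaluate, max_features=None):
    """Greedy forward feature-subset selection (FSS-Forward).

    features     : iterable of candidate feature identifiers
    evaluate     : callable mapping a feature subset to a score,
                   e.g. cross-validated emotion recognition rate
    max_features : optional cap on subset size

    At each step the feature whose inclusion yields the highest score
    is added; the search stops when no addition improves the best score.
    """
    selected, best = [], float("-inf")
    remaining = list(features)
    while remaining and (max_features is None or len(selected) < max_features):
        # Score every one-feature extension of the current subset.
        scored = [(evaluate(selected + [f]), f) for f in remaining]
        score, f = max(scored)
        if score <= best:
            break  # no extension improves the score
        best = score
        selected.append(f)
        remaining.remove(f)
    return selected, best
```

Unlike the evolutionary search the paper favours, this greedy wrapper evaluates only O(n²) subsets, which is why it serves as the cheaper comparison baseline.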
Abstract:
Obtaining as much particulate material as possible from questioned items is desirable in forensic science, as this allows a range of analyses to be undertaken and the retention of material for others to check. A method of maximising particulate recovery is described using a kidnap case, where minimal staining on clothing (socks) remained as a possible indication of where the victim had been held captive. Police intelligence led to a hostage scene that was sampled. Brushing of the socks recovered about 50 sand grains with some silt; ultrasonic agitation and centrifuging recovered over 300 grains of sand, silt and clay. These were visually compared to scene and control samples, allowing exclusion of 52 samples and the retention of one comparison sample as well as other possible matches, saving time and money while maximising sample quantity and quality. © 2011 Elsevier Ireland Ltd.
Abstract:
Bail-in is quickly becoming a predominant approach to banking resolution. The EU Bank Recovery and Resolution Directive and the US Federal Deposit Insurance Corporation's single point of entry strategy envisage creditors' recapitalisations to resolve a failing financial institution. However, this legislation focuses on the domestic aspects of bail-in, leaving open the question of how it applies to a cross-border banking group. Cross-border banking resolution has historically been subject to coordination failures, which have resulted in disorderly resolutions with dangerous systemic effects. The goal of this article is to assess whether bail-in is subject to the same coordination problems that affect other resolution tools, and to discuss the logic of international legal cooperation in bail-in policies. We demonstrate that, in spite of the evident benefit in terms of fiscal sustainability, bail-in suffers from complex coordination problems which, if not addressed, might lead to regulatory arbitrage and lengthy court battles and, ultimately, may disrupt resolutions. We argue that only a binding legal regime can address those problems. In doing so, we discuss the recent Financial Stability Board proposal on cross-border recognition of resolution actions, and the role of international law in promoting cooperation in banking resolution.
Abstract:
The reforms in the Indian banking sector since 1991 are discussed mostly in terms of the significant measures implemented to develop a more vibrant, healthy, stable and efficient banking sector in India. The effect of a highly regulated banking environment on asset quality, productivity and performance of banks necessitated the reform process and resulted in the incorporation of prudential norms for income recognition, asset classification and provisioning, as well as capital adequacy norms, in line with international best practices. Improvements in asset quality and a reduction in non-performing assets were the primary objectives enunciated in the reform measures. In this context, the present research critically evaluates the trend in the movement of non-performing assets of public sector banks in India during the period 2000-01 to 2011-12, thereby facilitating an evaluation of the effectiveness of NPA management in the post-millennium period. Non-performing assets are not a function of loans/advances alone, but are influenced by other bank performance indicators and by macroeconomic variables. In addition to explaining the trend in the movement of NPA, this research also explains the moderating and mediating roles of various bank performance and macroeconomic indicators on the incidence of NPA.
Abstract:
Shifts in credit supply could have a bearing on house prices, e.g. through financial innovation and changes in regulation, independently of the existence of a bank lending channel of monetary policy. This paper assesses the responses of US house prices to an exogenous credit supply shock and compares them with the effects from variations in credit supply associated with a bank lending channel. The contribution of the study is twofold. First, innovations in credit supply are identified using a mortgage mix variable, thereby accounting for the market-based financial intermediaries. As a robustness check, a survey variable of bank lending standards for mortgage loans is also used. Second, the policy-induced credit supply effect on house prices is disentangled and compared with the effect from an exogenous credit supply shock. It is shown that in the first 3 years credit supply shocks affect house prices exogenously rather than through the bank lending channel. Monetary policy still has a large impact on house prices, even when the bank lending channel is 'turned off'.
Abstract:
On March 4, 1999, the newly appointed President of the Brazilian Central Bank, Mr Armínio Fraga, raised interest rates to a staggering 45% per annum. The objective of that decision was to keep foreign investors' assets in Brazil and prevent the country from defaulting. At the time, Brazil suffered from an enormously intense crisis of confidence, and fears of such a default were widespread. Mr Fraga was walking a very fine line when making that decision, for it could bring forth unintended effects: the market, already concerned about Brazil's sustainability, could perceive the increased rate as an irreversible step towards the abyss of inevitable default. Economic theory postulates the rational actor model as the driving force behind economic decision-making. The objective of this thesis is to present and discuss the hypothesis that this particular decision, and by extension many others, is better explained through the recognition-primed decision model.