156 results for Signature Verification, Forgery Detection, Fuzzy Modeling
at University of Queensland eSpace - Australia
Abstract:
This paper presents an innovative approach to signature verification and forgery detection based on fuzzy modeling. The signature image is binarized, resized to a fixed-size window, and then thinned. The thinned image is partitioned into eight sub-images, called boxes, using the horizontal density approximation approach. Each box is then resized and partitioned into twelve further sub-images using the uniform partitioning approach. The feature under consideration is the normalized vector angle (α) from each box. Each feature extracted from the sample signatures gives rise to a fuzzy set. Since the choice of a proper fuzzification function is crucial for verification, we have devised a new fuzzification function with structural parameters that can adapt to the variations in the fuzzy sets. This function is employed to develop a complete forgery detection and verification system.
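As a rough illustration of the pipeline described above, the following Python sketch binarizes, thins, and box-partitions a signature image and computes one angle feature per sub-box. The specific angle definition, window size, and 3x4 sub-partition are assumptions made for illustration, not the authors' published formulation.

```python
# A minimal sketch of box-based angle-feature extraction (assumed details).
import numpy as np
from skimage.morphology import skeletonize
from skimage.transform import resize

def angle_feature(box):
    """Normalized vector angle: mean arctan of on-pixel coordinates
    relative to the box origin, scaled to [0, 1] (assumed definition)."""
    ys, xs = np.nonzero(box)
    if len(xs) == 0:
        return 0.0
    angles = np.arctan2(ys + 1, xs + 1)          # offsets avoid 0/0
    return float(np.mean(angles) / (np.pi / 2))  # normalize to [0, 1]

def extract_features(image, threshold=0.5):
    binary = (image > threshold).astype(float)   # binarize
    binary = resize(binary, (128, 256), order=0, anti_aliasing=False) > 0.5
    thin = skeletonize(binary)                   # one-pixel-wide strokes
    # Partition into 8 vertical strips of equal horizontal pixel density.
    col_density = thin.sum(axis=0).cumsum()
    cuts = np.searchsorted(col_density, np.linspace(0, col_density[-1], 9))
    features = []
    for i in range(8):
        box = thin[:, cuts[i]:max(cuts[i + 1], cuts[i] + 1)]
        # Uniform 3x4 partition into 12 sub-boxes; one angle per sub-box.
        h, w = box.shape
        for r in range(3):
            for c in range(4):
                sub = box[r*h//3:(r+1)*h//3, c*w//4:(c+1)*w//4]
                features.append(angle_feature(sub))
    return np.array(features)                    # 8 * 12 = 96 features
```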
Abstract:
Automatic signature verification is a well-established and active area of research with numerous applications such as bank check verification and ATM access. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling that employs the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted with the box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to account for possible variations due to handwriting styles and moods. The membership functions constitute the weights in the TS model, and optimizing the output of the TS model with respect to the structural parameters yields the solution for those parameters. We have also derived two TS models, by considering a rule for each input feature in the first formulation (multiple rules) and a single rule for all input features in the second. We have found that the TS model with multiple rules is better than the TS model with a single rule for detecting three types of forgeries (random, skilled, and unskilled) from a large database of sample signatures, in addition to verifying genuine signatures. We have also devised three approaches, one innovative and two intuitive, using the multiple-rule TS model for improved performance. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
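A minimal sketch of such a multiple-rule TS verifier follows, assuming a Gaussian-like exponential membership. The membership form and the structural parameters s and t are illustrative stand-ins for the paper's formulation; in the paper they are obtained by optimizing the model output, which is omitted here and left at neutral defaults.

```python
# Hedged sketch of a one-rule-per-feature Takagi-Sugeno verifier.
import numpy as np

class TSVerifier:
    def __init__(self, n_features):
        self.c = np.zeros(n_features)   # rule centers (feature means)
        self.v = np.ones(n_features)    # spreads (feature variances)
        self.s = np.ones(n_features)    # structural parameter (assumed form)
        self.t = np.zeros(n_features)   # structural parameter (assumed form)
        self.a = np.ones(n_features)    # linear consequent coefficients

    def fit_reference(self, X):
        """X: genuine sample signatures, shape (n_samples, n_features)."""
        self.c = X.mean(axis=0)
        self.v = X.var(axis=0) + 1e-6

    def membership(self, x):
        # Exponential membership; s scales and t offsets the exponent so
        # the function can adapt to intra-writer variation.
        return np.exp(-self.s * (x - self.c) ** 2 / self.v + self.t)

    def output(self, x):
        # TS output: membership-weighted average of per-rule consequents,
        # one rule per input feature.
        w = self.membership(x)
        return float(np.sum(w * self.a * x) / (np.sum(w) + 1e-12))
```

A test signature would then be accepted or flagged as a forgery by thresholding how far its output deviates from the outputs of the genuine training samples.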
Abstract:
In this paper, we present a new scheme for off-line recognition of multi-font numerals using the Takagi-Sugeno (TS) model. In this scheme, the binary image of a character is partitioned into a fixed number of sub-images called boxes. The features consist of normalized vector distances (γ) from each box. Each feature extracted from the different fonts gives rise to a fuzzy set. However, when only a small number of fonts is available, as is the case with multi-font numerals, the choice of a proper fuzzification function is crucial. Hence, we have devised a new fuzzification function whose parameters account for the variations in the fuzzy sets. The new fuzzification function is employed in the TS model for the recognition of multi-font numerals.
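One hypothetical reading of the distance feature (the abstract does not give the exact normalization) is the mean on-pixel distance from the box origin, scaled by the box diagonal:

```python
# Illustrative computation of the normalized vector distance (γ) per box.
import numpy as np

def distance_feature(box):
    ys, xs = np.nonzero(box)
    if len(xs) == 0:
        return 0.0
    d = np.sqrt(xs.astype(float) ** 2 + ys.astype(float) ** 2)
    diag = np.hypot(box.shape[1] - 1, box.shape[0] - 1)  # box diagonal
    return float(d.mean() / max(diag, 1.0))              # scale to [0, 1]
```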
Abstract:
We investigate the emission of multimodal polarized light from light-emitting devices due to spin-aligned carrier injection. The results are derived through operator Langevin equations, which include thermal and carrier-injection fluctuations, as well as nonradiative recombination and the temperature dependence of the electronic g factor. We study the dynamics of the optoelectronic processes and show how the temperature-dependent g factor and the magnetic field affect the degree of polarization of the emitted light. In addition, at high temperatures, thermal fluctuations reduce the efficiency of the optoelectronic detection method for measuring the degree of spin polarization of carriers injected into nonmagnetic semiconductors.
Abstract:
Within the information systems field, the task of conceptual modeling involves building a representation of selected phenomena in some domain. High-quality conceptual-modeling work is important because it facilitates early detection and correction of system development errors. It also plays an increasingly important role in activities like business process reengineering and documentation of best-practice data and process models in enterprise resource planning systems. Yet little research has been undertaken on many aspects of conceptual modeling. In this paper, we propose a framework to motivate research that addresses the following fundamental question: How can we model the world to better facilitate our developing, implementing, using, and maintaining more valuable information systems? The framework comprises four elements: conceptual-modeling grammars, conceptual-modeling methods, conceptual-modeling scripts, and conceptual-modeling contexts. We provide examples of the types of research that have already been undertaken on each element and illustrate research opportunities that exist.
Abstract:
Primates have X chromosome genes for cone photopigments with sensitivity maxima from 535 to 562 nm. Old World monkeys and apes (catarrhines) and the New World (platyrrhine) genus Alouatta have separate genes for 535-nm (medium wavelength; M) and 562-nm (long wavelength; L) pigments. These pigments, together with a 425-nm (short wavelength) pigment, permit trichromatic color vision. Other platyrrhines and prosimians have a single X chromosome gene, but often with alleles for two or three M/L photopigments. Consequently, heterozygote females are trichromats, but males and homozygote females are dichromats. The criteria that affect the evolution of M/L alleles and maintain genetic polymorphism remain a puzzle, but selection for finding food may be important. We compare different types of color vision for detecting more than 100 plant species consumed by tamarins (Saguinus spp.) in Peru. There is evidence that both frequency-dependent selection on homozygotes and heterozygote advantage favor M/L polymorphism, and that trichromatic color vision is most advantageous in dim light. Also, whereas the 562-nm allele is present in all species, the occurrence of 535- to 556-nm alleles varies between species. This variation probably arises because trichromatic color vision favors widely separated pigments and equal frequencies of 535/543- and 562-nm alleles, whereas in dichromats, long-wavelength pigment alleles are fitter.
Abstract:
Background: This paper describes SeqDoC, a simple, web-based tool to carry out direct comparison of ABI sequence chromatograms. This allows the rapid identification of single nucleotide polymorphisms (SNPs) and point mutations without the need to install or learn more complicated analysis software. Results: SeqDoC produces a subtracted trace showing differences between a reference and test chromatogram, and is optimised to emphasise those characteristic of single base changes. It automatically aligns sequences, and produces straightforward graphical output. The use of direct comparison of the sequence chromatograms means that artefacts introduced by automatic base-calling software are avoided. Homozygous and heterozygous substitutions and insertion/deletion events are all readily identified. SeqDoC successfully highlights nucleotide changes missed by the Staden package 'tracediff' program. Conclusion: SeqDoC is ideal for small-scale SNP identification, for identification of changes in random mutagenesis screens, and for verification of PCR amplification fidelity. Differences are highlighted, not interpreted, allowing the investigator to make the ultimate decision on the nature of the change.
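SeqDoC itself is a web tool; the sketch below only illustrates the underlying idea of normalizing, aligning, and subtracting two traces. It operates on already-extracted channel arrays rather than ABI files, and uses a simple global cross-correlation alignment as a stand-in for SeqDoC's own alignment method.

```python
# A minimal sketch of chromatogram subtraction (assumed details).
import numpy as np

def subtract_traces(ref, test):
    """ref, test: arrays of shape (4, n) -- one row per base channel."""
    # Normalize each channel so intensity-scale differences cancel.
    ref = ref / (ref.max(axis=1, keepdims=True) + 1e-9)
    test = test / (test.max(axis=1, keepdims=True) + 1e-9)
    # Coarse alignment: shift test by the lag maximizing cross-correlation
    # of the summed traces.
    r, t = ref.sum(axis=0), test.sum(axis=0)
    lags = np.arange(-50, 51)
    scores = [np.dot(r[max(0, -k):len(r) - max(0, k)],
                     t[max(0, k):len(t) - max(0, -k)]) for k in lags]
    k = lags[int(np.argmax(scores))]
    test = np.roll(test, -k, axis=1)
    # Difference trace: large excursions flag candidate base changes.
    return ref - test
```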
Abstract:
A blind nonlinear interference cancellation receiver is proposed for code-division multiple-access (CDMA) based communication systems operating over Rayleigh flat-fading channels. The receiver, which assumes knowledge of the signature waveforms of all users, is implemented in an asynchronous CDMA environment. Unlike the conventional MMSE receiver, the proposed blind ICA multiuser detector is shown to be robust without training sequences, requiring knowledge of the signature waveforms alone, and it achieves nearly the same performance as the conventional training-based MMSE receiver. Several comparisons and experiments, based on examining BER performance in AWGN and Rayleigh fading, are performed to verify the validity of the proposed blind ICA multiuser detector.
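A hedged sketch of the blind ICA detection idea follows, simplified to the synchronous case (the paper treats asynchronous CDMA). FastICA separates the user symbol streams blindly, and the known signature waveforms are used only to resolve ICA's permutation and sign ambiguity; the matching step assumes each component maps to a distinct user.

```python
# Sketch of blind ICA multiuser detection for synchronous BPSK CDMA.
import numpy as np
from sklearn.decomposition import FastICA

def ica_detect(received, signatures):
    """received: (n_symbols, n_chips) chip-rate samples;
    signatures: (n_users, n_chips) known spreading codes."""
    n_users = signatures.shape[0]
    ica = FastICA(n_components=n_users, whiten="unit-variance")
    symbols = ica.fit_transform(received)        # (n_symbols, n_users)
    mixing = ica.mixing_                         # (n_chips, n_users)
    # Match each user to the ICA component whose mixing column best
    # resembles that user's signature waveform.
    corr = signatures @ mixing                   # (n_users, n_users)
    order = np.argmax(np.abs(corr), axis=1)
    signs = np.sign(corr[np.arange(n_users), order])
    return np.sign(symbols[:, order] * signs)    # BPSK hard decisions
```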
Abstract:
An approach and strategy for automatic detection of buildings from aerial images, using combined image analysis and interpretation techniques, is described in this paper. The procedure involves several steps. A dense DSM is obtained by stereo image matching, and then the results of multi-band classification, the DSM, and the Normalized Difference Vegetation Index (NDVI) are used to reveal preliminary building-interest areas. From these areas, a shape modeling algorithm is used to precisely delineate building boundaries. The Dempster-Shafer data fusion technique is then applied to detect buildings from the combination of the three data sources by a statistically based classification. A number of test areas, which include buildings of different sizes, shapes, and roof colors, have been investigated. The test results are encouraging and demonstrate that all processes in this system are important for effective building detection.
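The Dempster-Shafer combination step can be illustrated on a two-hypothesis frame {building, other}; the per-source mass values below are placeholders for the evidence contributed by the classification, the DSM, and NDVI, not figures from the paper.

```python
# Dempster's rule of combination over the frame {building, other}.
def combine(m1, m2):
    """Each m is a dict over 'B' (building), 'O' (other), and 'U'
    (uncertain, i.e. the whole frame). Returns the orthogonal sum."""
    conflict = m1['B'] * m2['O'] + m1['O'] * m2['B']
    k = 1.0 - conflict                       # normalization factor
    return {
        'B': (m1['B'] * m2['B'] + m1['B'] * m2['U'] + m1['U'] * m2['B']) / k,
        'O': (m1['O'] * m2['O'] + m1['O'] * m2['U'] + m1['U'] * m2['O']) / k,
        'U': (m1['U'] * m2['U']) / k,
    }

# Example: spectral class, DSM height, and NDVI each contribute evidence.
spectral = {'B': 0.6, 'O': 0.2, 'U': 0.2}
dsm      = {'B': 0.7, 'O': 0.1, 'U': 0.2}
ndvi     = {'B': 0.3, 'O': 0.3, 'U': 0.4}
belief = combine(combine(spectral, dsm), ndvi)
print(belief)  # label the pixel 'building' if belief['B'] exceeds a threshold
```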
Abstract:
Fuzzy signal detection analysis can be a useful complementary technique to traditional signal detection theory analysis methods, particularly in applied settings. For example, traffic situations are better conceived as being on a continuum from no potential for hazard to high potential, rather than either having potential or not having potential. This study examined the relative contribution of sensitivity and response bias to explaining differences in the hazard perception performance of novices and experienced drivers, and the effect of a training manipulation. Novice drivers and experienced drivers were compared (N = 64). Half the novices received training, while the experienced drivers and half the novices remained untrained. Participants completed a hazard perception test and rated potential for hazard in occluded scenes. The response latency of participants to the hazard perception test replicated previous findings of experienced/novice differences and trained/untrained differences. Fuzzy signal detection analysis of both the hazard perception task and the occluded rating task suggested that response bias may be more central to hazard perception test performance than sensitivity, with trained and experienced drivers responding faster and with a more liberal bias than untrained novices. Implications for driver training and the hazard perception test are discussed.
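A minimal sketch of the fuzzy signal detection computation, in the spirit of the formulation commonly attributed to Parasuraman, Masalonis, and Hancock (2000): graded signal and response values in [0, 1] give each trial partial membership in the hit, miss, false-alarm, and correct-rejection categories, from which fuzzy sensitivity and bias are computed. The min-operator mapping is one of several options.

```python
# Fuzzy SDT sketch: graded states and responses, min-operator memberships.
import numpy as np
from scipy.stats import norm

def fuzzy_sdt(s, r):
    """s: hazard potential of each scene in [0, 1]; r: graded response."""
    s, r = np.asarray(s, float), np.asarray(r, float)
    hit  = np.minimum(s, r)          # partial category membership per trial
    miss = np.minimum(s, 1 - r)
    fa   = np.minimum(1 - s, r)
    cr   = np.minimum(1 - s, 1 - r)
    hr  = hit.sum() / (hit.sum() + miss.sum())    # fuzzy hit rate
    far = fa.sum() / (fa.sum() + cr.sum())        # fuzzy false-alarm rate
    hr, far = np.clip([hr, far], 1e-3, 1 - 1e-3)  # keep z-scores finite
    d_prime = norm.ppf(hr) - norm.ppf(far)        # sensitivity
    c = -0.5 * (norm.ppf(hr) + norm.ppf(far))     # response bias (criterion)
    return d_prime, c
```

On this measure, a more liberal bias, as reported for trained and experienced drivers, shows up as a more negative criterion c.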
Abstract:
A narrow absorption feature in an atomic or molecular gas (such as iodine or methane) is used as the frequency reference in many stabilized lasers. As part of the stabilization scheme, an optical frequency dither is applied to the laser. In optical heterodyne experiments, this dither is transferred to the RF beat signal, reducing the spectral power density, and hence the signal-to-noise ratio, relative to that in the absence of dither. We removed the dither by mixing the raw beat signal with a dithered local-oscillator signal: when the dither waveform is matched to that of the reference laser, the output signal from the mixer is rendered dither-free. Applying this method to a Winters iodine-stabilized helium-neon laser reduced the bandwidth of the beat signal from 6 MHz to 390 kHz, thereby lowering the detection threshold from 5 pW of laser power to 3 pW. In addition, a simple signal detection model is developed that predicts similar threshold reductions.
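The cancellation idea can be checked numerically: a beat signal carrying an FM dither, multiplied by a local oscillator carrying the same dither waveform, yields a mixer product with a narrow, dither-free line. All parameter values below are illustrative, not those of the experiment.

```python
# Numerical sketch of dither cancellation by mixing with a matched LO.
import numpy as np

fs = 50e6                          # sample rate (Hz)
t = np.arange(0, 5e-3, 1 / fs)
f_beat = 8e6                       # undithered beat frequency
f_if = 6e5                         # intermediate frequency after mixing
dither = 3e6 * np.sin(2 * np.pi * 8e3 * t)      # FM dither (Hz deviation)

phase_d = 2 * np.pi * np.cumsum(dither) / fs    # integrated dither phase
beat = np.cos(2 * np.pi * f_beat * t + phase_d) # dithered RF beat signal
lo = np.cos(2 * np.pi * (f_beat - f_if) * t + phase_d)  # matched dithered LO

mixed = beat * lo
# The product contains a dither-free line at f_if plus a dithered term near
# 2*f_beat - f_if; a spectrum of `mixed` (e.g. np.fft.rfft) shows the
# narrow peak, versus the several-MHz-wide spectrum of `beat` itself.
```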
Abstract:
Using spontaneous parametric down-conversion, we produce polarization-entangled states of two photons and characterize them using two-photon tomography to measure the density matrix. A controllable decoherence is imposed on the states by passing the photons through thick, adjustable birefringent elements. When the system is subject to collective decoherence, one particular entangled state is seen to be decoherence-free, as predicted by theory. Such decoherence-free systems may play an important role in the future of quantum computation and information processing.
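The decoherence-free property of the singlet state can be verified in a few lines: a collective birefringent phase (the same unitary applied to both photons) changes the singlet only by a global phase, leaving its density matrix intact, whereas other Bell states acquire relative phases that average to decoherence. The check below is purely illustrative of that effect.

```python
# Check that the singlet is invariant under collective birefringence.
import numpy as np

def rho(psi):
    return np.outer(psi, psi.conj())

H, V = np.array([1, 0]), np.array([0, 1])
singlet = (np.kron(H, V) - np.kron(V, H)) / np.sqrt(2)

phi = 1.234                                 # arbitrary collective phase
U = np.diag([1, np.exp(1j * phi)])          # one birefringent element
UU = np.kron(U, U)                          # same element on both photons

out = UU @ singlet
print(np.allclose(rho(out), rho(singlet)))  # True: decoherence-free
```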
Abstract:
The reflectance signatures of plantation pine canopy and understorey components were measured using a spectro-radiometer. The aim was to establish whether differences observed in the reflectance signatures of stressed and unstressed pine needles were consistent with observed differences in the reflectance of multispectral Landsat Thematic Mapper (TM) images of healthy and stressed forest. Because overall scene reflectance includes the contribution of each scene component, needle reflectance may not be representative of canopy reflectance. In this investigation, a limited dataset of reflectance signatures from stressed and unstressed needles confirmed the negative relationship between pine needle health and reflectance observed in visible red wavelengths. However, the reflectance contributions from bushes, pine needle litter, and bare soil tended to reinforce this relationship, suggesting that in this instance overall scene reflectance comprises the proportional reflectance of each scene component. In near-infrared wavelengths, differences between healthy and stressed needle reflectance suggested a strong positive relationship between reflectance and tree health, whereas previous research on Landsat TM images had observed only a weak positive relationship between stand health and near-infrared reflectance in these pine canopies. This suggests that for multispectral Landsat TM images, the reflectance of near-infrared light from pine canopies may be affected by other factors, possibly including the scattering of light within canopies. These results are promising for the use of hyperspectral images to detect stand health, provided that pixel reflectance is not influenced by other scene components.
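The "proportional reflectance" point amounts to a linear spectral mixing model. The toy example below uses invented component spectra and cover fractions, not the study's measurements, to show how a mixed pixel's reflectance is assembled.

```python
# Toy linear mixing model: pixel reflectance as the area-weighted sum of
# component spectra. Values in [red, NIR] are illustrative placeholders.
import numpy as np

components = {
    'stressed_needles': np.array([0.10, 0.35]),
    'litter':           np.array([0.12, 0.30]),
    'bare_soil':        np.array([0.15, 0.25]),
    'bushes':           np.array([0.05, 0.45]),
}
fractions = {'stressed_needles': 0.5, 'litter': 0.2,
             'bare_soil': 0.1, 'bushes': 0.2}

scene = sum(fractions[k] * components[k] for k in components)
print(scene)  # mixed [red, NIR] reflectance seen by a multispectral pixel
```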
Abstract:
Business process design is primarily driven by process improvement objectives. However, the role of control objectives stemming from regulations and standards is becoming increasingly important for businesses in light of recent events that led to some of the largest scandals in corporate history. As organizations strive to meet compliance agendas, there is an evident need for systematic approaches that assist in understanding the interplay between (often conflicting) business and control objectives during business process design. Our objective in this paper is twofold. First, we present a research agenda for business process compliance, identifying major technical and organizational challenges. We then tackle a part of the overall problem space: the effective modeling of control objectives and their subsequent propagation onto business process models. Control-objective modeling is proposed through a specialized modal logic based on normative systems theory, and the visualization of control objectives on business process models is achieved procedurally. The proposed approach is demonstrated in the context of a purchase-to-pay scenario.