12 results for "Off-line training"
in University of Queensland eSpace - Australia
Abstract:
Automatic signature verification is a well-established and active area of research with numerous applications such as bank check verification, ATM access, etc. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling that employs the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted with the box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to take account of possible variations due to handwriting styles and to reflect moods. The membership functions constitute weights in the TS model. The optimization of the output of the TS model with respect to the structural parameters yields the solution for the parameters. We have also derived two TS models: one with a rule for each input feature (multiple rules) and one with a single rule for all input features. In this work, we have found that the TS model with multiple rules is better than the TS model with a single rule at detecting three types of forgeries (random, skilled, and unskilled) from a large database of sample signatures, in addition to verifying genuine signatures. We have also devised three approaches, viz., an innovative approach and two intuitive approaches using the TS model with multiple rules for improved performance. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
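The abstract above describes a zero-order TS model in which exponential memberships act as rule weights. A minimal sketch of that idea follows; the membership form and the structural parameters `s` and `t` are illustrative stand-ins, not the authors' exact formulation:

```python
import math

def membership(x, mean, spread, s=1.0, t=0.0):
    """Exponential membership with illustrative structural parameters.

    s (sharpness) and t (shift) stand in for the paper's structural
    parameters that absorb handwriting-style and mood variation.
    """
    return math.exp(-s * ((x - mean + t) / spread) ** 2)

def ts_output_multiple_rules(features, means, spreads, consequents):
    """Multiple-rules formulation: one rule per input feature.

    Each membership acts as the firing strength of its rule; the model
    output is the strength-weighted average of the rule consequents.
    """
    weights = [membership(x, m, sp) for x, m, sp in zip(features, means, spreads)]
    return sum(w * c for w, c in zip(weights, consequents)) / sum(weights)
```

When every feature sits exactly at its fuzzy-set mean, all memberships are 1 and the output reduces to the plain average of the consequents, which is one quick sanity check on the weighting.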
Abstract:
This paper presents an innovative approach to signature verification and forgery detection based on fuzzy modeling. The signature image is binarized, resized to a fixed-size window, and then thinned. The thinned image is then partitioned into eight sub-images called boxes. This partition is done using the horizontal density approximation approach. Each sub-image is then further resized and partitioned into twelve sub-images using the uniform partitioning approach. The feature of consideration is the normalized vector angle (α) from each box. Each feature extracted from sample signatures gives rise to a fuzzy set. Since the choice of a proper fuzzification function is crucial for verification, we have devised a new fuzzification function with structural parameters, which is able to adapt to the variations in fuzzy sets. This function is employed to develop a complete forgery detection and verification system.
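A minimal sketch of the box-partition-and-angle-feature step described above. Uniform partitioning stands in for the paper's horizontal-density approach, and the angle definition (mean pixel angle from each box's bottom-left corner, normalized by π/2) is an illustrative assumption rather than the authors' exact construction:

```python
import numpy as np

def angle_features(binary_img, rows=2, cols=4):
    """Split a thinned binary image into rows*cols uniform boxes and
    return one normalized angle feature per box.

    For each box, the angle of every 'on' pixel is measured from the
    box's bottom-left corner (x offset by 1 to avoid arctan2(0, 0))
    and the mean angle is normalized by pi/2.
    """
    h, w = binary_img.shape
    feats = []
    for i in range(rows):
        for j in range(cols):
            box = binary_img[i * h // rows:(i + 1) * h // rows,
                             j * w // cols:(j + 1) * w // cols]
            ys, xs = np.nonzero(box)
            if len(xs) == 0:
                feats.append(0.0)  # empty box contributes a zero feature
                continue
            bh = box.shape[0]
            angles = np.arctan2(bh - 1 - ys, xs + 1)
            feats.append(float(np.mean(angles) / (np.pi / 2)))
    return feats
```

Each of the eight returned values would then seed one fuzzy set, as in the abstract.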
Abstract:
OBJECTIVES We sought to determine whether assessment of left ventricular (LV) function with real-time (RT) three-dimensional echocardiography (3DE) could reduce the variation of sequential LV measurements and provide greater accuracy than two-dimensional echocardiography (2DE). BACKGROUND Real-time 3DE has become feasible as a standard clinical tool, but its accuracy for LV assessment has not been validated. METHODS Unselected patients (n = 50; 41 men; age, 64 +/- 8 years) presenting for evaluation of LV function were studied with 2DE and RT-3DE. Test-retest variation was performed by a complete restudy by a separate sonographer within 1 h without alteration of hemodynamics or therapy. Magnetic resonance imaging (MRI) images were obtained during a breath-hold, and measurements were made off-line. RESULTS The test-retest variation showed similar measurements for volumes but wider scatter of LV mass measurements with M-mode and 2DE than 3DE. The average MRI end-diastolic volume was 172 +/- 53 ml; LV volumes were underestimated by 2DE (mean difference, -54 +/- 33; p < 0.01) but only slightly by RT-3DE (-4 +/- 29; p = 0.31). Similarly, end-systolic volume by MRI (91 +/- 53 ml) was underestimated by 2DE (mean difference, -28 +/- 28; p < 0.01) and by RT-3DE (mean difference, -3 +/- 18; p = 0.23). Ejection fraction by MRI was similar by 2DE (p = 0.76) and RT-3DE (p = 0.74). Left ventricular mass (183 +/- 50 g) was overestimated by M-mode (mean difference, 68 +/- 86 g; p < 0.01) and 2DE (16 +/- 57; p = 0.04) but not RT-3DE (0 +/- 38 g; p = 0.94). There was good inter- and intra-observer correlation between RT-3DE by two sonographers for volumes, ejection fraction, and mass. CONCLUSIONS Real-time 3DE is a feasible approach to reduce test-retest variation of LV volume, ejection fraction, and mass measurements in follow-up LV assessment in daily practice. (C) 2004 by the American College of Cardiology Foundation.
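The bias figures reported above (e.g. "mean difference, -54 +/- 33") are the mean and standard deviation of paired differences between each modality and the MRI reference. A minimal sketch of that computation (pure Python, for illustration only):

```python
import math

def paired_bias(method, reference):
    """Mean difference (bias) and SD of paired differences, as used to
    compare echo-derived LV measurements against an MRI reference."""
    diffs = [m - r for m, r in zip(method, reference)]
    n = len(diffs)
    mean = sum(diffs) / n
    sd = math.sqrt(sum((d - mean) ** 2 for d in diffs) / (n - 1))
    return mean, sd
```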
Abstract:
Due to the complexities involved with measuring activated sludge floc size distributions, this parameter has largely been ignored by wastewater researchers and practitioners. One of the major reasons has been that instruments able to measure particle size distributions were complex, expensive and only provided off-line measurements. The Focused Beam Reflectance Method (FBRM) is one of the rare techniques able to measure the particle size distribution in situ. This paper introduces the technique for monitoring wastewater treatment systems and compares its performance with other sizing techniques. The issue of the optimal focal point is discussed, and similar conclusions as found in the literature for other particulate systems are drawn. The study also demonstrates the capabilities of the FBRM in evaluating the performance of settling tanks. Interestingly, the floc size distributions did not vary with position inside the settling tank flocculator. This was an unexpected finding, and seriously questioned the need for a flocculator in the settling tank. It is conjectured that the invariable size distributions were caused by the unique combination of high solids concentration, low shear and zeolite dosing. (C) 2004 Society of Chemical Industry.
Abstract:
A major impediment to developing real-time computer vision systems has been the computational power and level of skill required to process video streams in real-time. This has meant that many researchers have either analysed video streams off-line or used expensive dedicated hardware acceleration techniques. Recent software and hardware developments have greatly eased the development burden of real-time image analysis, leading to the development of portable systems using cheap PC hardware and software exploiting the Multimedia Extension (MMX) instruction set of the Intel Pentium chip. This paper describes the implementation of a computationally efficient computer vision system for recognizing hand gestures using efficient coding and MMX-acceleration to achieve real-time performance on low-cost hardware.
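The core of such a pipeline is applying one operation across every pixel of a frame at once, which is exactly what MMX SIMD instructions provided. As a modern analogue, NumPy's vectorized operations achieve the same one-instruction-many-pixels effect; the frame-differencing step below is a generic illustration, not the paper's gesture-recognition algorithm:

```python
import numpy as np

def motion_mask(prev_frame, curr_frame, threshold=25):
    """Frame-differencing step of a simple real-time vision pipeline.

    The cast to int16 prevents uint8 wrap-around when subtracting;
    the vectorized ops process the whole frame in a few bulk passes,
    the role MMX SIMD instructions played on the Pentium.
    """
    diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
    return (diff > threshold).astype(np.uint8)
```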
Abstract:
In this paper, we present a new scheme for off-line recognition of multi-font numerals using the Takagi-Sugeno (TS) model. In this scheme, the binary image of a character is partitioned into a fixed number of sub-images called boxes. The features consist of normalized vector distances (γ) from each box. Each feature extracted from different fonts gives rise to a fuzzy set. However, when we have a small number of fonts, as in the case of multi-font numerals, the choice of a proper fuzzification function is crucial. Hence, we have devised a new fuzzification function involving parameters that account for the variations in the fuzzy sets. The new fuzzification function is employed in the TS model for the recognition of multi-font numerals.
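A hedged sketch of how such fuzzified features might drive numeral classification: each class keeps a (mean, spread) fuzzy set per box feature, and a test character is assigned to the class with the best average membership. The fuzzification form and its parameters `a` and `b` are illustrative stand-ins for the paper's parameterized function:

```python
import math

def fuzzify(x, mean, spread, a=1.0, b=0.0):
    """Illustrative parameterized fuzzification: a (sharpness) and
    b (shift) stand in for the paper's parameters that absorb
    inter-font variation in each feature's fuzzy set."""
    return math.exp(-a * ((x - mean + b) / spread) ** 2)

def classify(features, class_models):
    """Return the class label whose fuzzy sets best match the features.

    class_models maps label -> list of (mean, spread), one pair per
    box feature; the score is the mean membership over all features.
    """
    def score(models):
        return sum(fuzzify(x, m, s) for x, (m, s) in zip(features, models)) / len(models)
    return max(class_models, key=lambda lbl: score(class_models[lbl]))
```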
Abstract:
The interaction between the growth of flexible forms of employment and employer funded training is important for understanding labour market performance. In particular, the idea of a trade-off has been advanced to describe potential market failures in the employment of flexible workers. This study finds that evidence of a trade-off is apparent in both the incidence and intensity of employer funded training. Flexible workers receive training that is 50-80% less intense than the workforce average. Casual workers - especially males - suffer more acutely from the trade-off. This suggests that flexible production externalities may seriously reduce human capital formation in the workforce.
Abstract:
Objective: Inpatient length of stay (LOS) is an important measure of hospital activity, health care resource consumption, and patient acuity. This research work aims at developing an incremental expectation maximization (EM) based learning approach on a mixture of experts (ME) system for on-line prediction of LOS. The use of a batch-mode learning process in most existing artificial neural networks to predict LOS is unrealistic, as the data become available over time and their patterns change dynamically. In contrast, an on-line process is capable of providing an output whenever a new datum becomes available. This on-the-spot information is therefore more useful and practical for making decisions, especially when one deals with a tremendous amount of data. Methods and material: The proposed approach is illustrated using a real example of gastroenteritis LOS data. The data set was extracted from a retrospective cohort study on all infants born in 1995-1997 and their subsequent admissions for gastroenteritis. The total number of admissions in this data set was n = 692. Linked hospitalization records of the cohort were retrieved retrospectively to derive the outcome measure, patient demographics, and associated co-morbidities information. A comparative study of the incremental learning and the batch-mode learning algorithms is considered. The performances of the learning algorithms are compared based on the mean absolute difference (MAD) between the predictions and the actual LOS, and the proportion of predictions with MAD < 1 day (Prop(MAD < 1)). The significance of the comparison is assessed through a regression analysis. Results: The incremental learning algorithm provides better on-line prediction of LOS when the system has gained sufficient training from more examples (MAD = 1.77 days and Prop(MAD < 1) = 54.3%), compared to that using the batch-mode learning.
The regression analysis indicates a significant decrease of MAD (p-value = 0.063) and a significant (p-value = 0.044) increase of Prop(MAD < 1).
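The two performance measures above are straightforward to compute; a minimal sketch:

```python
def mad_metrics(predicted, actual):
    """Mean absolute difference between predicted and actual LOS, and
    the proportion of predictions off by less than one day."""
    errors = [abs(p - a) for p, a in zip(predicted, actual)]
    mad = sum(errors) / len(errors)
    prop = sum(1 for e in errors if e < 1) / len(errors)
    return mad, prop
```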
Abstract:
The processes that take place during the development of a heating are difficult to visualise. Bulk coal self-heating tests at The University of Queensland (UQ) using a two-metre column are providing graphic evidence of the stages that occur during a heating. Data obtained from these tests, both temperature and corresponding off-gas evolution, can be transformed into what is effectively a video replay of the heating event. This is achieved by loading both sets of data into a newly developed animation package called Hotspot. The resulting animation is ideal for spontaneous combustion training purposes, as the viewer can readily identify the different hot spot stages and corresponding off-gas signatures. Colour coding of the coal temperature, as the hot spot forms, highlights its location in the coal pile and shows its ability to migrate upwind. An added benefit of the package is that once a mine's coal has been tested in the UQ two-metre column, there is a permanent record of that particular coal's performance for mine personnel to view.