940 results for attribute-based signature
Abstract:
In broader catchment scale investigations, there is a need to understand and ultimately exploit the spatial variation of agricultural crops for an improved economic return. In many instances, this spatial variation is temporally unstable and may be different for various crop attributes and crop species. In the Australian sugar industry, the opportunity arose to evaluate the performance of 231 farms in the Tully Mill area in far north Queensland using production information on cane yield (t/ha) and CCS (a fresh weight measure of sucrose content in the cane) accumulated over a 12-year period. Such an arrangement of data can be expressed as a 3-way array, where a farm x attribute x year array can be evaluated and interactions considered. Two multivariate techniques, the 3-way mixture method of clustering and 3-mode principal component analysis, were employed to identify meaningful relationships between farms that performed similarly for both cane yield and CCS. In this context, farm has a spatial component, and the aim of this analysis was to determine if systematic patterns in farm performance expressed by cane yield and CCS persisted over time. There was no spatial relationship between cane yield and CCS. However, the analysis revealed that the relationship between farms was remarkably stable from one year to the next for both attributes, and there was some spatial aggregation of farm performance in parts of the mill area. This finding is important, since temporally consistent spatial variation may be exploited to improve regional production. Alternatively, the putative causes of the spatial variation may be explored to enhance the understanding of sugarcane production in the wet tropics of Australia.
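The farm x attribute x year arrangement described above can be sketched as a small 3-way array unfolded along the farm mode, a common preprocessing step before 3-mode analysis. The farm names and numbers below are hypothetical toy data, not the Tully Mill records.

```python
# Sketch: a farm x attribute x year data cube, unfolded so each farm
# becomes one row over all attribute-year combinations. Values are toy data.

farms = ["farm_A", "farm_B", "farm_C"]
attributes = ["cane_yield", "CCS"]
years = [2000, 2001, 2002]

# data[f][a][y] -> value for farm f, attribute a, year y
data = [
    [[85.0, 88.0, 90.0], [13.1, 13.4, 13.0]],
    [[70.0, 72.0, 71.0], [12.0, 12.2, 12.1]],
    [[95.0, 93.0, 97.0], [14.0, 13.8, 14.2]],
]

def unfold_farm_mode(cube):
    """Flatten each farm's attribute x year slice into one row,
    giving a farms x (attributes * years) matrix."""
    return [[v for attr_series in slice_ for v in attr_series]
            for slice_ in cube]

matrix = unfold_farm_mode(data)
# Each row now summarises one farm across all attribute-year combinations,
# ready for clustering or principal component analysis on the unfolded matrix.
```

The unfolded matrix is what standard clustering or PCA routines consume; 3-mode methods instead decompose the cube directly, but the data layout is the same.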
Abstract:
Automatic signature verification is a well-established and active area of research with numerous applications such as bank check verification and ATM access. This paper proposes a novel approach to the problem of automatic off-line signature verification and forgery detection. The proposed approach is based on fuzzy modeling that employs the Takagi-Sugeno (TS) model. Signature verification and forgery detection are carried out using angle features extracted with the box approach. Each feature corresponds to a fuzzy set. The features are fuzzified by an exponential membership function involved in the TS model, which is modified to include structural parameters. The structural parameters are devised to account for possible variations due to handwriting styles and moods. The membership functions constitute weights in the TS model. Optimizing the output of the TS model with respect to the structural parameters yields the solution for the parameters. We also derive two TS models: the first formulation considers a rule for each input feature (multiple rules), and the second considers a single rule for all input features. In this work, we find that the TS model with multiple rules is better than the TS model with a single rule at detecting three types of forgery (random, skilled and unskilled) from a large database of sample signatures, in addition to verifying genuine signatures. We have also devised three approaches, viz. an innovative approach and two intuitive approaches, using the TS model with multiple rules for improved performance. (C) 2004 Pattern Recognition Society. Published by Elsevier Ltd. All rights reserved.
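The fuzzification and rule-weighting steps can be illustrated with a minimal sketch. The generic exponential (Gaussian-like) membership below stands in for the paper's modified form with structural parameters, and all template numbers are hypothetical.

```python
import math

# Minimal sketch of a Takagi-Sugeno-style verification score: each angle
# feature gets an exponential membership value, and the memberships weight
# per-rule consequents (the 'multiple rules' formulation). This is the
# generic membership form, not the paper's modified one with structural
# parameters; all numbers are toy values.

def exp_membership(x, mean, spread):
    """Generic exponential membership in (0, 1]."""
    return math.exp(-((x - mean) / spread) ** 2)

def ts_output(features, means, spreads, consequents):
    """Weighted average of rule consequents, one rule per feature."""
    weights = [exp_membership(x, m, s)
               for x, m, s in zip(features, means, spreads)]
    return sum(w * c for w, c in zip(weights, consequents)) / sum(weights)

# Hypothetical genuine-signature template of three angle features:
means = [30.0, 45.0, 60.0]
spreads = [5.0, 5.0, 5.0]
consequents = [0.9, 1.0, 0.8]

score = ts_output([31.0, 44.0, 62.0], means, spreads, consequents)
# A test signature is accepted when the score exceeds a tuned threshold.
```

In the paper, the parameters are found by optimizing the TS output; here they are simply fixed to show how a verification score is formed.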
Abstract:
One of the critical challenges in automatic recognition of TV commercials is to generate a unique, robust and compact signature. Uniqueness indicates the ability to identify similarity among commercial video clips that may have slight content variation. Robustness means the ability to match commercial video clips containing the same content but possibly with different digitalization/encoding, some noise, and/or transmission and recording distortion. Efficiency is the capability of matching commercial video sequences with low computation cost and storage overhead. In this paper, we present a binary signature based method that meets all three criteria above by combining the techniques of ordinal and color measurements. Experimental results on a large real commercial video database show that our novel approach delivers significantly better performance compared to existing methods.
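The ordinal part of such a binary signature can be sketched as rank-order bits over block intensities, matched by Hamming distance. The block layout and values are illustrative, not the paper's exact construction.

```python
# Sketch of a binary frame signature from an ordinal measure: one bit per
# block pair records which block outranks the other, and signatures are
# matched by Hamming distance. The 4-block layout and intensities are toy
# values, not the paper's construction (which also folds in color measures).

def ordinal_bits(block_means):
    """One bit per block pair: does block i outrank block j?"""
    bits = []
    n = len(block_means)
    for i in range(n):
        for j in range(i + 1, n):
            bits.append(1 if block_means[i] > block_means[j] else 0)
    return bits

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Re-encoding shifts absolute intensities but tends to preserve block
# ranking, so signatures of the same content stay close in Hamming distance.
frame1 = [120, 80, 200, 60]     # mean intensities of 4 blocks
frame2 = [118, 83, 195, 65]     # same content, different encoding
frame3 = [60, 200, 80, 120]     # different content

sig1, sig2, sig3 = (ordinal_bits(f) for f in (frame1, frame2, frame3))
assert hamming(sig1, sig2) < hamming(sig1, sig3)
```

This rank-invariance is what gives ordinal signatures their robustness to digitalization and encoding differences.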
Abstract:
The existing method of pipeline health monitoring, which requires an entire pipeline to be inspected periodically, is both time-consuming and expensive. A risk-based model that reduces the amount of time spent on inspection is presented. This model not only reduces the cost of maintaining petroleum pipelines, but also suggests an efficient design and operation philosophy, construction methodology and logical insurance plans. The risk-based model uses the Analytic Hierarchy Process (AHP), a multiple attribute decision-making technique, to identify the factors that influence failure on specific segments and analyzes their effects by determining the probability of each risk factor. The severity of failure is determined through consequence analysis. From this, the effect of a failure caused by each risk factor can be established in terms of cost, and the cumulative effect of failure is determined through probability analysis. The technique does not totally eliminate subjectivity, but it is an improvement over the existing inspection method.
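The AHP weighting step such models rely on can be sketched as follows: a pairwise comparison matrix on Saaty's 1-9 scale is reduced to a priority vector via its principal eigenvector, here by power iteration. The comparison values and factor names are hypothetical, not from the study.

```python
# Minimal AHP sketch: derive risk-factor weights from a pairwise comparison
# matrix via the principal eigenvector (power iteration with normalisation).
# Factor names and comparison values below are hypothetical.

def ahp_weights(matrix, iters=100):
    n = len(matrix)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(matrix[i][j] * w[j] for j in range(n)) for i in range(n)]
        total = sum(w)
        w = [x / total for x in w]
    return w

# Pairwise comparisons for three illustrative risk factors, e.g. corrosion,
# third-party damage, operational error (Saaty's 1-9 scale, toy values):
comparisons = [
    [1.0,   3.0, 5.0],
    [1/3.0, 1.0, 2.0],
    [1/5.0, 1/2.0, 1.0],
]

weights = ahp_weights(comparisons)
# weights sum to 1; the largest weight marks the dominant risk factor
```

In a full AHP study, one would also check the consistency ratio of the comparison matrix before trusting the weights; that step is omitted here.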
Abstract:
Conventional project management techniques are not always sufficient to ensure time, cost and quality achievement of large-scale construction projects due to complexity in planning, design and implementation processes. The main reasons for project non-achievement are changes in scope and design, changes in government policies and regulations, unforeseen inflation, underestimation and improper estimation. Projects that are exposed to such an uncertain environment can be effectively managed with the application of risk management throughout the project's life cycle. However, the effectiveness of risk management depends on the technique through which the effects of risk factors are analysed/quantified. This study proposes the Analytic Hierarchy Process (AHP), a multiple attribute decision making technique, as a tool for risk analysis because it can handle subjective as well as objective factors in a decision model that are conflicting in nature. This provides a decision support system (DSS) to project management for making the right decision at the right time for ensuring project success in line with organisation policy, project objectives and a competitive business environment. The whole methodology is explained through a case application of a cross-country petroleum pipeline project in India and its effectiveness in project management is demonstrated.
Abstract:
Sparse code division multiple access (CDMA), a variation on the standard CDMA method in which the spreading (signature) matrix contains only a relatively small number of nonzero elements, is presented and analysed using methods of statistical physics. The analysis provides results on the performance of maximum likelihood decoding for sparse spreading codes in the large system limit. We present results for both cases of regular and irregular spreading matrices for the binary additive white Gaussian noise channel (BIAWGN) with a comparison to the canonical (dense) random spreading code. © 2007 IOP Publishing Ltd.
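A toy sketch of the setting, far below the large-system limit analysed in the paper: each user's spreading (signature) sequence has only a few nonzero chips, and maximum likelihood decoding is done by brute force over all bit vectors. Dimensions and values are illustrative.

```python
import itertools
import math
import random

# Toy sparse CDMA: 3 users, 6 chips, each signature with 2 nonzero chips
# (disjoint supports, hand-picked for clarity). ML decoding is exhaustive,
# which is feasible only at this tiny scale.

s = 1 / math.sqrt(2)
S = [                                  # users x chips
    [s, 0.0, -s, 0.0, 0.0, 0.0],
    [0.0, s, 0.0, s, 0.0, 0.0],
    [0.0, 0.0, 0.0, 0.0, s, -s],
]
users, chips = len(S), len(S[0])

def transmit(bits, noise_std=0.05, rng=random.Random(1)):
    """Superimpose all users' signatures and add white Gaussian noise
    (the BIAWGN channel)."""
    return [sum(S[u][c] * bits[u] for u in range(users))
            + rng.gauss(0.0, noise_std) for c in range(chips)]

def ml_decode(y):
    """Return the +/-1 bit vector minimising squared distance to y."""
    return min(itertools.product((-1, 1), repeat=users),
               key=lambda cand: sum(
                   (y[c] - sum(S[u][c] * cand[u] for u in range(users))) ** 2
                   for c in range(chips)))

sent = (1, -1, 1)
assert ml_decode(transmit(sent)) == sent   # low noise: bits recovered exactly
```

The statistical-physics analysis in the paper characterises exactly this decoder's performance when users, chips and sparsity all grow large, where brute force is no longer an option.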
Abstract:
The existing method of pipeline monitoring, which requires an entire pipeline to be inspected periodically, wastes time and is expensive. A risk-based model that reduces the amount of time spent on inspection has been developed. This model not only reduces the cost of maintaining petroleum pipelines, but also suggests an efficient design and operation philosophy, construction method and logical insurance plans. The risk-based model uses the analytic hierarchy process, a multiple attribute decision-making technique, to identify factors that influence failure on specific segments and analyze their effects by determining the probabilities of risk factors. The severity of failure is determined through consequence analysis, which establishes the effect of a failure in terms of cost caused by each risk factor and determines the cumulative effect of failure through probability analysis.
Abstract:
A transmission performance investigation using ultra-long Raman fibre laser based amplification with different co-pump powers is presented. We attribute the Q2 factor degradation to the RIN of the co-pump and of the induced fibre laser, as well as to increased SBS.
Abstract:
This paper presents an effective decision making system for leak detection based on multiple generalized linear models and clustering techniques. The training data for the proposed decision system is obtained from a fully operational experimental pipeline distribution system. The system is equipped with data logging for three variables: inlet pressure, outlet pressure, and outlet flow. The experimental setup is designed such that multiple operational conditions of the distribution system, covering several pressure and flow levels, can be obtained. We then show statistically that the pressure and flow variables can be used as a signature of leaks under the designed multi-operational conditions. We further show that, under these conditions, leak detection based on training and testing the proposed multi-model decision system with prior data clustering produces better recognition rates than training based on a single-model approach. The decision system is then equipped with the estimation of confidence limits, and a method is proposed for using these confidence limits to obtain more robust leak recognition results.
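The per-condition modelling with residual-based confidence limits can be sketched as follows, using ordinary least squares in place of the paper's generalized linear models; all pressure/flow numbers and the threshold are toy values.

```python
import statistics

# Sketch of the multi-model idea for one operating condition: fit a linear
# model (outlet flow as a function of pressure drop) on no-leak data, set a
# crude confidence limit from the training residuals, and flag a leak when
# a new reading falls outside the band. OLS stands in for the paper's
# generalized linear models; all numbers are toy values.

def fit_line(xs, ys):
    """Ordinary least squares for y = a + b*x."""
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

# No-leak training data for one operating condition:
drops = [1.0, 1.5, 2.0, 2.5, 3.0]       # pressure drop
flows = [10.1, 14.9, 20.2, 24.8, 30.0]  # outlet flow
a, b = fit_line(drops, flows)

residuals = [y - (a + b * x) for x, y in zip(drops, flows)]
limit = 3 * statistics.pstdev(residuals)          # crude confidence limit

def leak_suspected(drop, flow):
    return abs(flow - (a + b * drop)) > limit

assert not leak_suspected(2.2, 22.0)   # consistent with the no-leak model
assert leak_suspected(2.2, 15.0)       # flow deficit -> possible leak
```

In the multi-model scheme, a separate model of this kind is fitted per cluster of operating conditions, and a new reading is first routed to the cluster it matches before the residual test is applied.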
Abstract:
In this paper, we present an approach for extending the learning set of a classification algorithm with additional metadata, which serves as a basis for assigning appropriate names to discovered regularities. Analysing the correspondence between connections established in the attribute space and existing links between concepts can serve as a test of whether an adequate model of the observed world has been created. The Meta-PGN classifier is suggested as a possible tool for establishing these connections. Applying this approach in the field of content-based image retrieval of art paintings provides a tool for extracting specific feature combinations that represent different sides of artists' styles, periods and movements.
Abstract:
We present, for the first time, a detailed investigation of the impact of second order co-propagating Raman pumping on long-haul 100G WDM DP-QPSK coherent transmission of up to 7082 km using Raman fibre laser based configurations. Signal power and noise distributions along the fibre for each pumping scheme were characterised both numerically and experimentally. Based on these pumping schemes, the Q factor penalties versus co-pump power ratios were experimentally measured and quantified. A significant Q factor penalty of up to 4.15 dB was observed after 1666 km using symmetric bidirectional pumping, compared with counter-pumping only. Our results show that whilst co-pumping minimises the intra-cavity signal power variation and amplification noise, its Q factor penalty was too great for any advantage to be seen. The relative intensity noise (RIN) characteristics of the induced fibre laser and the output signal, and the intra-cavity RF spectra of the fibre laser, are also presented. We attribute the Q factor degradation to the penalty induced by RIN transferred from the first order fibre laser and the second order co-pump to the signal. More importantly, two different fibre lasing regimes contributed to the amplification: random distributed feedback lasing when using counter-pumping only, and conventional Fabry-Perot cavity lasing when using any bidirectional pumping scheme. This also results in significantly different performance due to the different cavity lengths of these two classes of laser.
Abstract:
A significant body of research investigates the acceptance of computer-based support (including devices and applications ranging from e-mail to specialized clinical systems, like PACS) among clinicians. Much of this research has focused on measuring the usability of systems using characteristics related to the clarity of interactions and ease of use. We propose that an important attribute of any clinical computer-based support tool is the intrinsic motivation of the end-user (i.e. a clinician) to use the system in practice. In this paper we present the results of a study that investigated factors motivating medical doctors (MDs) to use computer-based support. Our results demonstrate that MDs value computer-based support and find it useful and easy to use; however, uptake is hindered by perceived incompetence and by the pressure and tension associated with using technology.
Abstract:
Power converters are a key, but vulnerable, component in switched reluctance motor (SRM) drives. In this paper, a new fault diagnosis scheme for SRM converters is proposed based on the wavelet packet decomposition (WPD) with a dc-link current sensor. Open- and short-circuit faults of the power switches in an asymmetrical half-bridge converter are analyzed in detail. In order to obtain the fault signature from the phase currents, two pulse-width modulation signals with phase shift are injected into the lower switches of the converter to extract the excitation current, and the WPD algorithm is then applied to the detected currents for fault diagnosis. Moreover, a discrete degree of the wavelet packet node energy is chosen as the fault coefficient. The converter faults can be diagnosed and located directly by determining the changes in the discrete degree from the detected currents. The proposed scheme requires only one current sensor in the dc link, while conventional methods need one sensor for each phase or additional detection circuits. Experimental results on a 750-W three-phase SRM are presented to confirm the effectiveness of the proposed fault diagnosis scheme.
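The wavelet packet node-energy computation that the fault coefficient builds on can be sketched with a Haar filter pair; the "discrete degree" itself and real converter currents are not reproduced here, and the toy signal simply illustrates how a fault changes the node energies.

```python
import math

# Sketch of wavelet packet node energies with a Haar filter pair: the
# sampled current is split level by level into approximation and detail
# packets, and the energy of each terminal node serves as a signature.
# The current samples below are toy values, not measured dc-link data.

def haar_split(x):
    """One Haar step: (approximation, detail) at half length."""
    a = [(x[i] + x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    d = [(x[i] - x[i + 1]) / math.sqrt(2) for i in range(0, len(x), 2)]
    return a, d

def wpd_energies(x, levels):
    """Full wavelet packet tree: split every node at every level,
    then return the energy of each terminal node."""
    nodes = [x]
    for _ in range(levels):
        nodes = [half for node in nodes for half in haar_split(node)]
    return [sum(v * v for v in node) for node in nodes]

# A healthy-looking current vs. one with an abrupt step (toy fault):
healthy = [1.0, 1.1, 1.0, 1.1, 1.0, 1.1, 1.0, 1.1]
faulty  = [1.0, 1.1, 1.0, 1.1, 3.0, 3.1, 3.0, 3.1]

e_healthy = wpd_energies(healthy, 2)
e_faulty = wpd_energies(faulty, 2)
# The fault shifts the node-energy distribution, which is what a
# node-energy-based fault coefficient is designed to detect.
```

Note that the Haar transform is orthogonal, so the node energies always sum to the signal energy; only their distribution across nodes changes with the fault.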
Abstract:
This dissertation develops an image processing framework with unique feature extraction and similarity measurements for human face recognition in the thermal mid-wave infrared portion of the electromagnetic spectrum. The goal of this research is to design specialized algorithms that extract facial vasculature information, create a thermal facial signature and identify the individual. The objective is to use such findings in support of a biometrics system for human identification with a high degree of accuracy and a high degree of reliability. This reliability stems from the minimal risk of alteration of the intrinsic physiological characteristics seen through thermal infrared imaging. The proposed thermal facial signature recognition is fully integrated and consolidates the main and critical steps of feature extraction, registration, matching through similarity measures, and validation through testing our algorithm on a database, referred to as C-X1, provided by the Computer Vision Research Laboratory at the University of Notre Dame. Feature extraction was accomplished by first registering the infrared images to a reference image using the functional MRI of the Brain's (FMRIB's) Linear Image Registration Tool (FLIRT), modified to suit thermal infrared images. This was followed by segmentation of the facial region using an advanced localized contouring algorithm applied on anisotropically diffused thermal images. Thermal feature extraction from facial images was attained by performing morphological operations such as opening and top-hat segmentation to yield thermal signatures for each subject. Four thermal images taken over a period of six months were used to generate thermal signatures and a thermal template for each subject; the thermal template contains only the most prevalent and consistent features.
Finally, a similarity measure technique was used to match signatures to templates, and Principal Component Analysis (PCA) was used to validate the results of the matching process. Thirteen subjects were used for testing the developed technique on an in-house thermal imaging system. Matching using a Euclidean-based similarity measure showed 88% accuracy in the case of skeletonized signatures and templates, while 90% accuracy was obtained for anisotropically diffused signatures and templates. We also employed the Manhattan-based similarity measure and obtained an accuracy of 90.39% for skeletonized and diffused templates and signatures. On average, an 18.9% improvement in the similarity measure was obtained when using diffused templates. The Euclidean- and Manhattan-based similarity measures were also applied to skeletonized signatures and templates of 25 subjects in the C-X1 database. The highly accurate results obtained in the matching process, along with the generalized design process, clearly demonstrate the ability of the thermal infrared system to be used on other thermal imaging based systems and related databases. A novel user-initialization registration of thermal facial images has been successfully implemented. Furthermore, the novel approach of developing a thermal signature template using four images taken at various times ensured that unforeseen changes in the vasculature did not affect the biometric matching process, as it relied on consistent thermal features.
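The matching step can be sketched as nearest-template assignment under Euclidean and Manhattan distances; the feature vectors below are hypothetical stand-ins for the extracted thermal signatures.

```python
import math

# Sketch of signature-to-template matching: a probe signature (reduced here
# to a toy feature vector) is compared to each subject's template with
# Euclidean and Manhattan distances and assigned to the nearest template.
# The vectors are hypothetical, not extracted vascular features.

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def manhattan(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

templates = {
    "subject_1": [0.2, 0.8, 0.5],
    "subject_2": [0.9, 0.1, 0.4],
}

def match(signature, distance):
    """Return the subject whose template is nearest to the signature."""
    return min(templates, key=lambda s: distance(signature, templates[s]))

probe = [0.25, 0.75, 0.55]
assert match(probe, euclidean) == "subject_1"
assert match(probe, manhattan) == "subject_1"
```

Accepting a match only when the nearest distance falls below a threshold turns this identification sketch into a verification step.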
Abstract:
Stable isotopes are important tools for understanding the trophic roles of elasmobranchs. However, whether different tissues provide consistent stable isotope values within an individual is largely unknown. To address this, the relationships among carbon and nitrogen isotope values were quantified for blood, muscle, and fin from juvenile bull sharks (Carcharhinus leucas) and blood and fin from large tiger sharks (Galeocerdo cuvier) collected in two different ecosystems. We also investigated the relationship between shark size and the magnitude of differences in isotopic values between tissues. Isotope values were significantly positively correlated for all paired tissue comparisons, but R2 values were much higher for δ13C than for δ15N. Paired differences between isotopic values of tissues were relatively small but varied significantly with shark total length, suggesting that shark size can be an important factor influencing the magnitude of differences in isotope values of different tissues. For studies of juvenile sharks, care should be taken in using slow-turnover tissues like muscle and fin, because they may retain a maternal signature for an extended time. Although correlations were relatively strong, the results suggest that correction factors should be generated for the desired study species and may only allow coarse-scale comparisons between studies using different tissue types.
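The correction factors mentioned above amount to a tissue-to-tissue regression: a minimal sketch, assuming hypothetical δ13C values rather than data from the study.

```python
import statistics

# Sketch of a tissue-to-tissue correction: regress δ13C values of muscle
# on δ13C values of fin, then use the fitted line to convert fin values
# to muscle-equivalent estimates. All isotope numbers are toy values.

fin    = [-14.2, -13.5, -12.8, -12.1, -11.4]   # δ13C in fin (hypothetical)
muscle = [-14.8, -14.0, -13.4, -12.6, -11.9]   # δ13C in muscle (hypothetical)

mx, my = statistics.fmean(fin), statistics.fmean(muscle)
slope = sum((x - mx) * (y - my) for x, y in zip(fin, muscle)) / \
        sum((x - mx) ** 2 for x in fin)
intercept = my - slope * mx

def fin_to_muscle(d13c_fin):
    """Convert a fin value to its muscle-equivalent estimate."""
    return intercept + slope * d13c_fin

# As the abstract cautions, such coarse corrections are species-specific
# and support only broad comparisons across studies using different tissues.
```

The slope and intercept here play the role of the correction factor; a real application would fit them per species and per isotope.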