819 results for Classification error rate
Abstract:
A new theoretical model of pattern recognition principles is proposed, based on "matter cognition" instead of the "matter classification" of traditional statistical pattern recognition. The new model is closer to the way human beings recognize things than traditional statistical pattern recognition, which takes "optimal separation" as its main principle, and is therefore called Biomimetic Pattern Recognition (BPR)(1). Its mathematical basis lies in topological analysis of the sample set in the high-dimensional feature space, so it is also called Topological Pattern Recognition (TPR). The fundamental idea of this model rests on the continuity, in the feature space, of samples belonging to the same class. We experimented with BPR using artificial neural networks that work by covering the high-dimensional geometrical distribution of the sample set in the feature space. Omnidirectional cognition tests were carried out on various animal and vehicle models of rather similar shapes. Over a total of 8800 tests, the correct recognition rate was 99.87% and the rejection rate was 0.13%; under the condition of a zero error rate, the correct rate of BPR was much better than that of RBF-SVM.
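A minimal sketch of the covering idea behind this kind of model, assuming (purely as an illustration, not the authors' published construction) that each class is covered by a union of hyperspheres centred on its training samples; a test sample is recognized only if it falls inside exactly one class's cover and is rejected otherwise. The radius rule is an assumption.

```python
# Hedged sketch: hypersphere-union covering as a stand-in for BPR's
# topological cover of each class in feature space. The radius rule below
# (scaled nearest-neighbour distance) is illustrative, not the paper's method.
# Assumes every class has at least two training samples.
import numpy as np

class SphereCover:
    def __init__(self, radius_scale=1.5):
        self.radius_scale = radius_scale
        self.classes = {}  # label -> (centres, radii)

    def fit(self, X, y):
        for label in np.unique(y):
            pts = X[y == label]
            # pairwise distances; each point's radius scales with its
            # nearest same-class neighbour, approximating class continuity
            d = np.linalg.norm(pts[:, None, :] - pts[None, :, :], axis=-1)
            np.fill_diagonal(d, np.inf)
            self.classes[label] = (pts, self.radius_scale * d.min(axis=1))
        return self

    def predict(self, X):
        out = []
        for x in X:
            hit = [lab for lab, (c, r) in self.classes.items()
                   if (np.linalg.norm(c - x, axis=1) <= r).any()]
            out.append(hit[0] if len(hit) == 1 else None)  # None = rejection
        return out

# toy usage: two well-separated classes
X = np.array([[0.0, 0.0], [0.2, 0.1], [3.0, 3.0], [3.1, 2.9]])
y = np.array([0, 0, 1, 1])
model = SphereCover().fit(X, y)
print(model.predict(np.array([[0.1, 0.05], [10.0, 10.0]])))  # [0, None]
```

Rejecting samples that fall outside every cover, rather than forcing a class, is what makes a zero-error operating point like the one quoted above possible.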
Abstract:
Nucleosides in human urine and serum have frequently been studied as possible biomedical markers for cancer, acquired immune deficiency syndrome (AIDS) and the whole-body turnover of RNAs. Fifteen normal and modified nucleosides were determined in 69 urine and 42 serum samples using high-performance liquid chromatography (HPLC). Artificial neural networks were used as a powerful pattern recognition tool to distinguish cancer patients from healthy persons. The recognition rate for the training set reached 100%. In the validation set, 95.8% and 92.9% of subjects were correctly classified as cancer patients or healthy persons when urine and serum, respectively, were used as the sample for measuring the nucleosides. The results show that the artificial neural network technique is better than principal component analysis for classifying healthy persons and cancer patients on the basis of nucleoside data. (C) 2002 Elsevier Science B.V. All rights reserved.
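A minimal sketch of this kind of two-class pattern recognition on nucleoside profiles, assuming scikit-learn and a hypothetical 15-column feature matrix (one column per nucleoside); the network architecture, split and data are placeholders, not the paper's.

```python
# Hedged sketch: train/validate a small neural network on nucleoside levels.
# X is a placeholder (n_samples, 15) matrix; y marks cancer (1) vs healthy (0).
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(111, 15))        # placeholder for measured concentrations
y = rng.integers(0, 2, size=111)      # placeholder labels

X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
scaler = StandardScaler().fit(X_tr)   # normalize features before training
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(scaler.transform(X_tr), y_tr)
print("training accuracy:  ", clf.score(scaler.transform(X_tr), y_tr))
print("validation accuracy:", clf.score(scaler.transform(X_va), y_va))
```

The training/validation split mirrors the abstract's distinction between the 100% training recognition rate and the lower validation rates.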
Abstract:
Bagnold-type bed-load equations are widely used to determine sediment transport rates in marine environments. The accuracy of these equations depends on the definition of the coefficient k1, which is a function of particle size. Hardisty (1983) attempted to establish the relationship between k1 and particle size, but there is an error in his analytical result. Our reanalysis of the original flume data yields new formulae for the coefficient. Furthermore, we find that k1 values should be derived from u1 and u1cr data; using the vertical mean velocity in flumes in place of u1 leads to considerably higher k1 values and overestimation of sediment transport rates.
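A minimal sketch of how such a coefficient can be re-derived from flume data, assuming (purely for illustration) a generic Bagnold-type form q_b = k1 * u1 * (u1^2 - u1cr^2); both the functional form and the synthetic data are assumptions, not the authors' reanalysis.

```python
# Hedged sketch: least-squares estimate of a Bagnold-type coefficient k1
# from flume measurements, under the assumed illustrative form
#   q_b = k1 * u1 * (u1**2 - u1cr**2)
import numpy as np

def fit_k1(q_b, u1, u1cr):
    """Estimate k1 by least squares: q_b ~ k1 * f, with f = u1*(u1^2 - u1cr^2)."""
    f = u1 * (u1**2 - u1cr**2)
    return float(np.sum(f * q_b) / np.sum(f * f))

# Synthetic flume-like data (placeholders, not the original measurements)
u1 = np.array([0.45, 0.60, 0.75, 0.90])   # near-bed velocity, m/s
u1cr = 0.35                                # threshold velocity, m/s
q_b = 0.02 * u1 * (u1**2 - u1cr**2)        # "measured" transport rates
print("fitted k1:", fit_k1(q_b, u1, u1cr))  # recovers 0.02
```

Substituting a different velocity measure for u1 changes f and hence the fitted k1, which is exactly the sensitivity the abstract warns about.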
Abstract:
To develop the Upper Ng reservoir of the Chengdao oilfield, located in a shallow sea area, rapidly and efficiently, this work establishes a set of theories and methods for predicting and describing the reservoir in the early period of oilfield development. The conclusions are as follows.

1. For the first time, a suite of fine geological modeling techniques for channel-sand reservoirs based mainly on seismic methods was formed. These techniques include logging-constrained seismic inversion, full three-dimensional seismic interpretation and seismic attribute analysis, which are used for three-dimensional prediction of the distribution, structure and properties of the channel sand bodies from abundant seismic information and a small amount of drilling and logging information in the early stage of oilfield development. It is the first time these methods have been applied to production and to the high-speed development of a shallow-sea oilfield. The predicted sand bodies were revised with data from new wells, establishing a new reservoir prediction approach of tracking inversion. The techniques performed very well: over approximately 200 wells in 30 well groups in the Chengdao oilfield, the drilling success rate for predicted sand bodies reached 100%, and the error in total thickness was only 8%.

2. An approach and methods for predicting remaining oil at the early stage of production were advanced. Based on well and seismic data, sediment units were correlated by cycle-correlation and classification-control methods, and normalized, finely interpreted well logs and sedimentary microfacies were obtained. In regions with sparse wells, logging-constrained inversion, with newly completed production wells as additional constraints, was used to re-predict sand-body distribution and properties, yielding a three-dimensional pool geological model covering structure, reservoir, fluids, reservoir-engineering parameters and production dynamics. With this model, the reservoir engineering design was optimized; tracking numerical simulation was performed with dynamic data (pressure, production rate and water cut) from development wells, the production behaviour and oil-water distribution were traced, and the distribution of remaining oil was predicted and controlled. A dynamic reservoir modeling method for the middle stage of development was derived: the static reservoir geological model is continually revised with new drilling data; flow units are studied, including identifying flow units, evaluating their capability and establishing a fine flow-unit model; and, using production dynamics and well-test data together with well-testing theory and reservoir numerical simulation, a dynamic tracking reservoir description is realized through constant modification of the reservoir geological model constrained by these dynamic data. The resulting dynamic tracking reservoir model was used to monitor remaining oil in the early period, founding a reservoir-engineering tracking analysis technique for shallow-sea oilfields.

3. After reconstructing the structural history of the Chengdao area since the Tertiary with the balanced-section technique and estimating the activity of the Chengbei fault with fault-sealing analysis, a meandering-stream sedimentary pattern of the Upper Ng was established, in which the point bars of the meander belt are the most important reservoir units.

4. Given the rocks' low compositional and textural maturity, three pore-structure patterns were established for the Guanshang member of the Chengdao oilfield: the storage space consists mainly of primary (depositional) intergranular pores, with a small amount of secondary dissolution pores and tiny intercrystalline pores, and the throats are mainly sheet-shaped and contracted-neck-shaped. The rhythm types are chiefly positive, comprising simple positive rhythm, complex positive rhythm and compound rhythm. Interbeds are mainly widely distributed mudstones; physical-property interbeds and calcite interbeds are locally distributed.

5. The influence of micro-heterogeneity, macro-heterogeneity and structural heterogeneity on waterflood development was synthetically analyzed. Waterflood efficiency is good in small-scale structures that are convex or flat at top and bottom, where water breakthrough occurs early in oil wells at the structural high when injection is at the structural low, and poor in small-scale structures that are concave at top and bottom. Remaining oil is controlled by sedimentary facies: waterflood efficiency is good in point bars and channel bars and poor in the floodplain and levees. Partitions and interlayers also influence reservoirs without an obvious positive rhythm; in these, remaining oil is commonly located within 1-3 m below a partition or interlayer, where waterflood efficiency is lower.
Abstract:
Binary image classification is a problem that has received much attention in recent years. In this paper we evaluate a selection of popular techniques in an effort to find a feature set/classifier combination that generalizes well to full-resolution image data. We then apply that system to images at one-half through one-sixteenth resolution and examine the corresponding error rates. In addition, we observe how generalization performance depends on the number of training images, and finally compare the system's best error rates to those of a human performing an identical classification task on the same set of test images.
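A minimal sketch of the resolution sweep described above, assuming scikit-learn and its bundled digits data as a stand-in for the paper's images; the classifier, dataset and binary task are placeholders, not the paper's system.

```python
# Hedged sketch: evaluate one classifier on progressively downsampled images
# and report the error rate at each resolution.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

digits = load_digits()
# make it a binary task: digit 0 vs digit 1 (placeholder for the paper's classes)
mask = digits.target < 2
images, labels = digits.images[mask], digits.target[mask]

def downsample(imgs, factor):
    """Block-average each image by the given factor (8x8 -> smaller)."""
    n, h, w = imgs.shape
    h2, w2 = h // factor, w // factor
    return imgs[:, :h2 * factor, :w2 * factor].reshape(
        n, h2, factor, w2, factor).mean(axis=(2, 4))

for factor in (1, 2, 4):  # full, one-half, one-quarter resolution
    X = downsample(images, factor).reshape(len(images), -1)
    X_tr, X_te, y_tr, y_te = train_test_split(X, labels, random_state=0)
    err = 1 - LogisticRegression(max_iter=1000).fit(X_tr, y_tr).score(X_te, y_te)
    print(f"1/{factor} resolution: error rate = {err:.3f}")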
Abstract:
Pritchard, L., Corne, D., Kell, D.B., Rowland, J. & Winson, M. (2005) A general model of error-prone PCR. Journal of Theoretical Biology 234, 497-509.
Abstract:
One- and two-dimensional cellular automata which are known to be fault-tolerant are very complex. On the other hand, only very simple cellular automata have actually been proven to lack fault-tolerance, i.e., to be mixing. The latter either have large noise probability ε or belong to the small family of two-state nearest-neighbor monotonic rules which includes local majority voting. For a certain simple automaton L called the soldiers rule, this problem has intrigued researchers for the last two decades, since L is clearly more robust than local voting: in the absence of noise, L eliminates any finite island of perturbation from an initial configuration of all 0's or all 1's. The same holds for K, a 4-state monotonic variant of L called two-line voting. We will prove that the probabilistic cellular automata K_ε and L_ε asymptotically lose all information about their initial state when subject to small, strongly biased noise. The mixing property trivially implies that the systems are ergodic. The finite-time information-retaining quality of a mixing system can be represented by its relaxation time Relax(·), which measures the time before the onset of significant information loss. This is known to grow as (1/ε)^c for noisy local voting. The impressive error-correction ability of L has prompted some researchers to conjecture that Relax(L_ε) = 2^(c/ε). We prove the tight bound 2^(c₁ log²(1/ε)) < Relax(L_ε) < 2^(c₂ log²(1/ε)) for a biased error model. The same holds for K_ε. Moreover, the lower bound is independent of the bias assumption. The strong bias assumption makes it possible to apply sparsity/renormalization techniques, the main tools of our investigation, used earlier in the opposite context of proving fault-tolerance.
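A minimal sketch of the kind of experiment behind these quantities, assuming (for illustration) the simplest rule the abstract mentions, one-dimensional local majority voting with i.i.d. noise of probability ε; the soldiers rule itself is more involved and is not reproduced here, and the noise below is unbiased rather than strongly biased.

```python
# Hedged sketch: noisy 1D majority-vote cellular automaton. We track how long
# a configuration started at all ones keeps a majority of ones, as a crude
# stand-in for the relaxation time Relax(.) discussed in the abstract.
import numpy as np

def step(state, eps, rng):
    left, right = np.roll(state, 1), np.roll(state, -1)
    new = ((left + state + right) >= 2).astype(np.int8)  # local majority vote
    flip = rng.random(state.size) < eps                   # i.i.d. noise
    return np.where(flip, 1 - new, new)

def relaxation_time(n=1000, eps=0.2, max_t=10_000, seed=0):
    rng = np.random.default_rng(seed)
    state = np.ones(n, dtype=np.int8)
    for t in range(1, max_t + 1):
        state = step(state, eps, rng)
        if state.mean() <= 0.5:   # initial information considered lost
            return t
    return max_t                  # censored: no loss observed within max_t

for eps in (0.3, 0.2, 0.1):      # times typically grow as eps shrinks
    print(f"eps={eps}: approx. relaxation time {relaxation_time(eps=eps)}")
```

For noisy local voting the abstract cites growth like (1/ε)^c; the quasi-polynomial bound 2^(c log²(1/ε)) proved for the soldiers rule sits strictly between that and the conjectured exponential 2^(c/ε).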
Abstract:
Fusion ARTMAP is a self-organizing neural network architecture for multi-channel, or multi-sensor, data fusion. Single-channel Fusion ARTMAP is functionally equivalent to Fuzzy ART during unsupervised learning and to Fuzzy ARTMAP during supervised learning. The network has a symmetric organization such that each channel can be dynamically configured to serve as either a data input or a teaching input to the system. An ART module forms a compressed recognition code within each channel. These codes, in turn, become inputs to a single ART system that organizes the global recognition code. When a predictive error occurs, a process called parallel match tracking simultaneously raises vigilances in multiple ART modules until reset is triggered in one of them. Parallel match tracking hereby resets only that portion of the recognition code with the poorest match, or minimum predictive confidence. This internally controlled selective reset process is a type of credit assignment that creates a parsimoniously connected learned network. Fusion ARTMAP's multi-channel coding is illustrated by simulations of the Quadruped Mammal database.
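A minimal sketch of parallel match tracking, assuming standard fuzzy ART notation (match = |I ∧ w| / |I| tested against a vigilance ρ); the channel data, starting vigilances and increment are illustrative, not published parameters.

```python
# Hedged sketch: after a predictive error, raise vigilance in all channels
# simultaneously until the channel with the poorest match fails its match
# test and resets, leaving the better-matched channels' codes intact.
import numpy as np

def fuzzy_match(inp, weight):
    """Fuzzy ART match: |I ^ w| / |I|, with ^ the component-wise minimum."""
    return np.minimum(inp, weight).sum() / inp.sum()

def parallel_match_track(inputs, weights, vigilances, step=1e-3):
    """inputs/weights: one array per channel; returns (reset channel, vigilances)."""
    matches = [fuzzy_match(i, w) for i, w in zip(inputs, weights)]
    rho = list(vigilances)
    while True:
        for k, m in enumerate(matches):
            if m < rho[k]:             # reset triggers in the poorest-match channel
                return k, rho
        rho = [r + step for r in rho]  # raise all vigilances together

# two channels; channel 1 has the poorer match (0.3 vs 0.9) and resets first
inputs  = [np.array([0.9, 0.1]), np.array([0.5, 0.5])]
weights = [np.array([0.8, 0.2]), np.array([0.1, 0.2])]
k, rho = parallel_match_track(inputs, weights, vigilances=[0.2, 0.2])
print("reset channel:", k)
```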
Abstract:
Fusion ARTMAP is a self-organizing neural network architecture for multi-channel, or multi-sensor, data fusion. Fusion ARTMAP generalizes the fuzzy ARTMAP architecture in order to adaptively classify multi-channel data. The network has a symmetric organization such that each channel can be dynamically configured to serve as either a data input or a teaching input to the system. An ART module forms a compressed recognition code within each channel. These codes, in turn, become inputs to a single ART system that organizes the global recognition code. When a predictive error occurs, a process called parallel match tracking simultaneously raises vigilances in multiple ART modules until reset is triggered in one of them. Parallel match tracking hereby resets only that portion of the recognition code with the poorest match, or minimum predictive confidence. This internally controlled selective reset process is a type of credit assignment that creates a parsimoniously connected learned network.
Abstract:
This article introduces a new neural network architecture, called ARTMAP, that autonomously learns to classify arbitrarily many, arbitrarily ordered vectors into recognition categories based on predictive success. This supervised learning system is built up from a pair of Adaptive Resonance Theory modules (ARTa and ARTb) that are capable of self-organizing stable recognition categories in response to arbitrary sequences of input patterns. During training trials, the ARTa module receives a stream {a^(p)} of input patterns, and ARTb receives a stream {b^(p)} of input patterns, where b^(p) is the correct prediction given a^(p). These ART modules are linked by an associative learning network and an internal controller that ensures autonomous system operation in real time. During test trials, the remaining patterns a^(p) are presented without b^(p), and their predictions at ARTb are compared with b^(p). Tested on a benchmark machine learning database in both on-line and off-line simulations, the ARTMAP system learns orders of magnitude more quickly, efficiently, and accurately than alternative algorithms, and achieves 100% accuracy after training on less than half the input patterns in the database. It achieves these properties by using an internal controller that conjointly maximizes predictive generalization and minimizes predictive error by linking predictive success to category size on a trial-by-trial basis, using only local operations. This computation increases the vigilance parameter ρa of ARTa by the minimal amount needed to correct a predictive error at ARTb. Parameter ρa calibrates the minimum confidence that ARTa must have in a category, or hypothesis, activated by an input a^(p) in order for ARTa to accept that category, rather than search for a better one through an automatically controlled process of hypothesis testing. Parameter ρa is compared with the degree of match between a^(p) and the top-down learned expectation, or prototype, that is read out subsequent to activation of an ARTa category. Search occurs if the degree of match is less than ρa. ARTMAP is hereby a type of self-organizing expert system that calibrates the selectivity of its hypotheses based upon predictive success. As a result, rare but important events can be quickly and sharply distinguished even if they are similar to frequent events with different consequences. Between input trials ρa relaxes to a baseline vigilance ρ̄a. When ρ̄a is large, the system runs in a conservative mode, wherein predictions are made only if the system is confident of the outcome. Very few false-alarm errors then occur at any stage of learning, yet the system reaches asymptote with no loss of speed. Because ARTMAP learning is self-stabilizing, it can continue learning one or more databases, without degrading its corpus of memories, until its full memory capacity is utilized.
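A minimal sketch of the match-tracking step described above, assuming standard fuzzy ARTMAP notation for the degree of match; the input, prototype and increment δ are illustrative values, not the paper's.

```python
# Hedged sketch: ARTMAP match tracking. After a predictive error at ARTb,
# raise the ARTa vigilance rho_a by the minimal amount that makes the
# currently active ARTa category fail its match test, forcing a new search.
import numpy as np

def match(a, w):
    """Degree of match between input a and category prototype w: |a ^ w| / |a|."""
    return np.minimum(a, w).sum() / a.sum()

def match_track(a, w_active, rho_a, delta=1e-4):
    """Return the minimally raised vigilance that resets the active category."""
    m = match(a, w_active)
    return max(rho_a, m + delta)

a = np.array([0.7, 0.3, 0.2, 0.8])     # input pattern (illustrative)
w = np.array([0.6, 0.3, 0.1, 0.9])     # active ARTa prototype (illustrative)
rho = 0.5
print("match:", match(a, w))           # 0.9
new_rho = match_track(a, w, rho)
print("vigilance raised to:", new_rho) # just above the match -> category resets
```

Between trials, ρa would relax back to its baseline ρ̄a, so the conservative mode is entered only as long as errors demand it.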
Abstract:
This article describes neural network models for adaptive control of arm movement trajectories during visually guided reaching and, more generally, a framework for unsupervised real-time error-based learning. The models clarify how a child, or untrained robot, can learn to reach for objects that it sees. Piaget has provided basic insights with his concept of a circular reaction: As an infant makes internally generated movements of its hand, the eyes automatically follow this motion. A transformation is learned between the visual representation of hand position and the motor representation of hand position. Learning of this transformation eventually enables the child to accurately reach for visually detected targets. Grossberg and Kuperstein have shown how the eye movement system can use visual error signals to correct movement parameters via cerebellar learning. Here it is shown how endogenously generated arm movements lead to adaptive tuning of arm control parameters. These movements also activate the target position representations that are used to learn the visuo-motor transformation that controls visually guided reaching. The AVITE model presented here is an adaptive neural circuit based on the Vector Integration to Endpoint (VITE) model for arm and speech trajectory generation of Bullock and Grossberg. In the VITE model, a Target Position Command (TPC) represents the location of the desired target. The Present Position Command (PPC) encodes the present hand-arm configuration. The Difference Vector (DV) population continuously computes the difference between the PPC and the TPC. A speed-controlling GO signal multiplies DV output. The PPC integrates the (DV)·(GO) product and generates an outflow command to the arm. Integration at the PPC continues at a rate dependent on GO signal size until the DV reaches zero, at which time the PPC equals the TPC. The AVITE model explains how self-consistent TPC and PPC coordinates are autonomously generated and learned. Learning of AVITE parameters is regulated by activation of a self-regulating Endogenous Random Generator (ERG) of training vectors. Each vector is integrated at the PPC, giving rise to a movement command. The generation of each vector induces a complementary postural phase during which ERG output stops and learning occurs. Then a new vector is generated and the cycle is repeated. This cyclic, biphasic behavior is controlled by a specialized gated dipole circuit. ERG output autonomously stops in such a way that, across trials, a broad sample of workspace target positions is generated. When the ERG shuts off, a modulator gate opens, copying the PPC into the TPC. Learning of a transformation from TPC to PPC occurs using the DV as an error signal that is zeroed due to learning. This learning scheme is called a Vector Associative Map, or VAM. The VAM model is a general-purpose device for autonomous real-time error-based learning and performance of associative maps. The DV stage serves the dual function of reading out new TPCs during performance and reading in new adaptive weights during learning, without a disruption of real-time operation. VAMs thus provide an on-line unsupervised alternative to the off-line properties of supervised error-correction learning algorithms. VAMs and VAM cascades for learning motor-to-motor and spatial-to-motor maps are described.
VAM models and Adaptive Resonance Theory (ART) models exhibit complementary matching, learning, and performance properties that together provide a foundation for designing a total sensory-cognitive and cognitive-motor autonomous system.
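A minimal sketch of the VITE kinematics described above, assuming a simple Euler discretization and an illustrative GO profile; the gain, time step and GO shape are assumptions.

```python
# Hedged sketch: VITE trajectory generation. The Difference Vector
# DV = TPC - PPC is gated by the GO signal and integrated at the PPC
# until the present position reaches the target and DV vanishes.
import numpy as np

def vite(tpc, ppc0, go, dt=0.01, steps=1000, rate=1.0):
    """Integrate dPPC/dt = rate * GO(t) * (TPC - PPC)."""
    ppc = np.array(ppc0, dtype=float)
    path = [ppc.copy()]
    for t in range(steps):
        dv = tpc - ppc                        # Difference Vector
        ppc += dt * rate * go(t * dt) * dv    # PPC integrates (DV)·(GO)
        path.append(ppc.copy())
    return np.array(path)

tpc = np.array([1.0, 0.5])           # Target Position Command
go = lambda t: min(t, 1.0)           # illustrative ramp-then-saturate GO signal
path = vite(tpc, ppc0=[0.0, 0.0], go=go)
print("final PPC:", path[-1])        # ~= TPC, so DV ~= 0 and movement stops
```

Scaling the GO signal changes movement speed without changing the endpoint, which is the property the VITE model uses to separate "where" from "how fast".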
Abstract:
As a by-product of the ‘information revolution’ which is currently unfolding, lifetimes of man (and indeed computer) hours are being allocated to the automated and intelligent interpretation of data. This is particularly true in medical and clinical settings, where research into machine-assisted diagnosis of physiological conditions gains momentum daily. Of the conditions which have been addressed, however, automated classification of allergy has not been investigated, even though the number of allergic persons is rising and undiagnosed allergies are among the most likely to elicit fatal consequences. On the basis of the observations of allergists who conduct oral food challenges (OFCs), activity-based analyses of allergy tests were performed. Algorithms were investigated and validated by a pilot study which verified that accelerometer-based inquiry of human movements is particularly well suited to objective appraisal of activity. However, when these analyses were applied to OFCs, accelerometer-based investigations were found to provide very poor separation between allergic and non-allergic persons, and it was concluded that the avenues explored in this thesis are inadequate for the classification of allergy. Heart rate variability (HRV) analysis is known to provide very significant diagnostic information for many conditions. Owing to this, electrocardiograms (ECGs) were recorded during OFCs for the purpose of assessing the effect that allergy induces on HRV features. It was found that, with appropriate analysis, excellent separation between allergic and non-allergic subjects can be obtained. These results were, however, obtained with manual QRS annotations, which are not a viable methodology for real-time diagnostic applications. Even so, this was the first work to categorically correlate changes in HRV features with the onset of allergic events, and manual annotations yield undeniable affirmation of this. Fostered by the successful results obtained with manual classifications, automatic QRS detection algorithms were investigated to facilitate the fully automated classification of allergy. The results obtained by this process are very promising. Most importantly, the work presented in this thesis did not produce any false positive classifications. This is a most desirable result for OFC classification, as it allows complete confidence to be attributed to classifications of allergy. Furthermore, these results could be particularly advantageous in clinical settings, as machine-based classification can detect the onset of allergy, allowing early termination of OFCs. Consequently, machine-based monitoring of OFCs has in this work been shown to possess the capacity to significantly and safely advance the current clinical state of the art in allergy diagnosis.
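A minimal sketch of the kind of HRV feature extraction such work relies on, assuming QRS (R-peak) times are already available; SDNN and RMSSD are standard time-domain HRV features, though the thesis's exact feature set is not specified here, and the data below are synthetic.

```python
# Hedged sketch: time-domain HRV features from R-peak times (in seconds).
import numpy as np

def hrv_features(r_peaks):
    """Compute SDNN and RMSSD from an array of R-peak times in seconds."""
    rr = np.diff(r_peaks) * 1000.0               # RR intervals, milliseconds
    sdnn = rr.std(ddof=1)                        # overall RR variability
    rmssd = np.sqrt(np.mean(np.diff(rr) ** 2))   # beat-to-beat variability
    return {"SDNN_ms": sdnn, "RMSSD_ms": rmssd}

# Illustrative record: ~75 bpm with small variability (not clinical data)
rng = np.random.default_rng(0)
r_peaks = np.cumsum(0.8 + 0.02 * rng.standard_normal(120))
print(hrv_features(r_peaks))
```

In a classification pipeline like the one described, features of this kind, computed over windows of the OFC recording, would feed the allergic/non-allergic decision.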
Abstract:
Future high speed communications networks will transmit data predominantly over optical fibres. As consumer and enterprise computing will remain the domain of electronics, the electro-optical conversion will get pushed further downstream towards the end user. Consequently, efficient tools are needed for this conversion and, due to many potential advantages including low cost and high output powers, long wavelength Vertical Cavity Surface Emitting Lasers (VCSELs) are a viable option. Drawbacks, such as broader linewidths than competing options, can be mitigated through additional techniques such as Optical Injection Locking (OIL), which can require significant expertise and expensive equipment. This thesis addresses these issues by removing some of the experimental barriers to achieving performance increases via remote OIL. Firstly, numerical simulations of the phase and the photon and carrier numbers of an OIL semiconductor laser allowed the classification of the stable locking phase limits into three distinct groups. The frequency detuning of constant phase values (φ) was considered, in particular φ = 0, where the modulation response parameters were shown to be independent of the linewidth enhancement factor, α. A new method to estimate α and the coupling rate in a single experiment was formulated. Secondly, a novel technique to remotely determine the locked state of a VCSEL, based on voltage variations of 2 mV to 30 mV during detuned injection, has been developed, which can identify oscillatory and locked states. 2D and 3D maps of voltage, optical and electrical spectra illustrate the corresponding behaviours. Finally, the use of directly modulated VCSELs as light sources for passive optical networks was investigated by successful transmission of data at 10 Gbit/s over 40 km of single mode fibre (SMF) using cost-effective electronic dispersion compensation to mitigate errors due to wavelength chirp. A widely tuneable MEMS-VCSEL was established as a good candidate for an externally modulated colourless source after a record error-free transmission at 10 Gbit/s over 50 km of SMF across a 30 nm single mode tuning range. The ability to remotely set the emission wavelength using the novel methods developed in this thesis was demonstrated.
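A minimal sketch, not the thesis's model: the full simulation tracks phase, photon and carrier numbers, but the existence of a locking range can be illustrated with a reduced Adler-type phase equation, dφ/dt = Δω − κ_eff·sin φ, which admits a steady (locked) solution only when |Δω| ≤ κ_eff. All parameter values here are illustrative.

```python
# Hedged sketch: Adler-type phase equation for injection locking. A locked
# state corresponds to the phase settling at a fixed point; outside the
# locking range |d_omega| <= k_eff the phase drifts indefinitely.
import numpy as np

def lock_state(d_omega, k_eff, dt=1e-3, steps=200_000):
    """Integrate the phase; report 'locked' if it settles, else 'unlocked'."""
    phi = 0.0
    for _ in range(steps):
        phi += dt * (d_omega - k_eff * np.sin(phi))
    drift = d_omega - k_eff * np.sin(phi)   # residual phase velocity
    return "locked" if abs(drift) < 1e-6 else "unlocked"

k_eff = 1.0   # effective injection coupling rate (arbitrary units)
for d_omega in (0.5, 0.99, 1.5):            # frequency detunings
    print(f"detuning {d_omega}: {lock_state(d_omega, k_eff)}")
```

Mapping the locked/unlocked boundary while sweeping detuning and injection strength is, in spirit, how stability maps like those in the thesis are assembled, though the real model's three coupled variables produce richer (e.g. oscillatory) states.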
Abstract:
The detection of dense harmful algal blooms (HABs) by satellite remote sensing is usually based on analysis of chlorophyll-a as a proxy. However, this approach provides no information about the potential harm of a bloom, nor can it identify the dominant species. The HAB risk classification method developed here employs a fully automatic data-driven approach to identify key characteristics of water-leaving radiances and derived quantities, and to classify pixels into “harmful”, “non-harmful” and “no bloom” categories using Linear Discriminant Analysis (LDA). Discrimination accuracy is increased through the use of spectral ratios of water-leaving radiances, absorption and backscattering. To reduce the false alarm rate, data that cannot be reliably classified are automatically labelled “unknown”. The method can be trained on different HAB species or extended to new sensors and then applied to generate independent HAB risk maps; these can be fused with maps from other sensors to fill gaps or improve spatial or temporal resolution. The HAB discrimination technique has obtained accurate results on MODIS and MERIS data, correctly identifying 89% of Phaeocystis globosa HABs in the southern North Sea and 88% of Karenia mikimotoi blooms in the Western English Channel. A linear transformation of the ocean colour discriminants is used to estimate harmful cell counts, demonstrating greater accuracy than estimates based on chlorophyll-a; this will facilitate its integration into a HAB early warning system operating in the southern North Sea.
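A minimal sketch of LDA classification with an “unknown” rejection rule, assuming scikit-learn and hypothetical spectral-ratio features; the class means, threshold and test pixels are illustrative, not the trained operational model.

```python
# Hedged sketch: LDA over spectral-ratio features, with low-confidence
# pixels labelled "unknown" to reduce the false alarm rate.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)
# Hypothetical training data: rows = pixels, columns = spectral ratios
X = np.vstack([rng.normal(m, 0.5, size=(100, 3)) for m in (0.0, 1.5, 3.0)])
y = np.repeat(["no bloom", "non-harmful", "harmful"], 100)

lda = LinearDiscriminantAnalysis().fit(X, y)

def classify(pixels, threshold=0.9):
    """Assign a class only when the LDA posterior exceeds the threshold."""
    proba = lda.predict_proba(pixels)
    labels = lda.classes_[proba.argmax(axis=1)].astype(object)
    labels[proba.max(axis=1) < threshold] = "unknown"
    return labels

print(classify(np.array([[0.1, 0.0, -0.2],    # clearly "no bloom"
                         [2.2, 2.3, 2.1],     # ambiguous -> likely "unknown"
                         [3.1, 2.9, 3.0]])))  # clearly "harmful"
```

The rejection threshold trades coverage for reliability, which is the same trade the abstract makes to keep false alarms low in an early warning setting.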
Abstract:
Aims: To assess the reliability of drug use reports by young respondents, this study examined the extent of recanting of previous drug use reports within an ongoing longitudinal survey of adolescent drug use. Here, recanting was defined as a positive report of life-time drug use that was subsequently denied 1 year later. The covariates of recanting were also studied. Design: An ongoing longitudinal survey of young adolescents (Belfast Youth Development Study) in Northern Ireland. Setting: Pencil-and-paper questionnaires were administered to pupils within participating schools. Measurements: Measures analysed included (a) recanting rates across 13 substances, (b) educational characteristics, (c) offending behaviour and (d) socioeconomic status. Findings: High levels of drug use recanting were identified, ranging from 7% of past alcohol use to 87% of past magic mushroom use. Recanting increased with the social stigma of the substance used. Denying past alcohol use was associated with being male, attending a Catholic school, having positive attitudes towards school, having negative educational expectations and not reporting any offending behaviour. Recanting alcohol intoxication was associated with being male and not reporting serious offending behaviour. Cannabis recanting was associated with having negative educational expectations, receiving drugs education and not reporting serious offending behaviour. Conclusions: The high levels of recanting uncovered cast doubt on the reliability of drug use reports from young adolescents. Failure to address this response error may lead to biased prevalence estimates, particularly within school surveys and drug education evaluation trials.
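A minimal sketch of how a recanting rate under this definition can be computed from two survey waves, assuming pandas and hypothetical column names (wave1_used, wave2_used); the data are placeholders.

```python
# Hedged sketch: recanting = lifetime use reported at wave 1 but denied at
# wave 2, computed per substance. Column names and values are hypothetical.
import pandas as pd

df = pd.DataFrame({
    "wave1_used": [1, 1, 0, 1, 1, 0],   # lifetime use reported in year 1
    "wave2_used": [1, 0, 0, 0, 1, 1],   # lifetime use reported in year 2
})

ever_users_w1 = df["wave1_used"] == 1
recanted = ever_users_w1 & (df["wave2_used"] == 0)
rate = recanted.sum() / ever_users_w1.sum()
print(f"recanting rate: {rate:.0%}")    # share of wave-1 users who later denied use
```

Repeating this per substance reproduces the kind of 7%-87% spread the study reports across the 13 substances.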