56 results for Robust Learning Algorithm
Abstract:
An experiment investigated whether exposure to orthography facilitates oral vocabulary learning. A total of 58 typically developing children aged 8-9 years were taught 12 nonwords. Children were trained to associate novel phonological forms with pictures of novel objects, which served as referents representing the novel word meanings. For half of the nonwords, children were additionally exposed to orthography, although they were not alerted to its presence nor instructed to use it. After this training phase, a nonword-picture matching posttest was used to assess learning of nonword meaning, and a spelling posttest was used to assess learning of nonword orthography. Children showed robust learning of novel spelling patterns after incidental exposure to orthography. Further, we observed stronger learning for nonword-referent pairings trained with orthography. The degree of orthographic facilitation observed in the posttests was related to children's reading levels, with more advanced readers benefiting more from the presence of orthography.
Abstract:
Our digital universe is rapidly expanding: more and more daily activities are digitally recorded, data arrives in streams, needs to be analyzed in real time, and may evolve over time. In the last decade many adaptive learning algorithms and prediction systems, which can automatically update themselves with new incoming data, have been developed. The majority of those algorithms focus on improving predictive performance and assume that a model update is always desired, as soon as possible and as frequently as possible. In this study we consider a potential model update as an investment decision which, as in the financial markets, should be taken only if a certain return on investment is expected. We introduce and motivate a new research problem for data streams: cost-sensitive adaptation. We propose a reference framework for analyzing adaptation strategies in terms of costs and benefits. Our framework allows us to characterize and decompose the costs of model updates, and to assess and interpret the gains in performance due to model adaptation for a given learning algorithm on a given prediction task. Our proof-of-concept experiment demonstrates how the framework can aid in analyzing and managing adaptation decisions in the chemical industry.
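As a hedged illustration of the cost-benefit view of adaptation described above, the sketch below frames a single update decision as a return-on-investment test. The cost and gain quantities are hypothetical placeholders, not the paper's actual framework.

```python
# A minimal sketch, assuming hypothetical cost/gain quantities: trigger a
# model update only if the expected return on investment is positive.

def should_update(expected_error_reduction: float,
                  horizon_samples: int,
                  error_cost_per_sample: float,
                  update_cost: float) -> bool:
    """Update only if the expected error-cost savings over the decision
    horizon exceed the cost of performing the update itself."""
    expected_gain = expected_error_reduction * horizon_samples * error_cost_per_sample
    return expected_gain > update_cost

# Example: a 2% error reduction over 1,000 samples at 5 units per error
# saves an expected 100 units, so paying 50 units to retrain is worthwhile.
print(should_update(0.02, 1000, 5.0, 50.0))  # True
```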
Abstract:
This paper represents the first step in ongoing work on designing an unsupervised method based on a genetic algorithm for intrusion detection. Its main role in a broader system is to notify of unusual traffic and thereby provide the possibility of detecting unknown attacks. Most of the machine-learning techniques deployed for intrusion detection are supervised, as these techniques are generally more accurate, but this implies the need to label the data for training and testing, which is time-consuming and error-prone. Hence, our goal is to devise an anomaly detector that is unsupervised, but at the same time robust and accurate. Genetic algorithms are robust and able to avoid getting stuck in local optima, unlike many other clustering techniques. The model is verified on the KDD99 benchmark dataset, generating a solution competitive with state-of-the-art solutions, which demonstrates the strong potential of the proposed method.
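As a rough sketch of the idea (not the paper's implementation), a genetic algorithm can evolve cluster centroids over unlabeled traffic features, and points far from every centroid are then flagged as unusual. All parameters below are illustrative assumptions.

```python
# A minimal sketch (not the paper's method): evolve cluster centroids with a
# genetic algorithm, then flag points far from every centroid as anomalous.
import numpy as np

rng = np.random.default_rng(0)

def fitness(centroids, X):
    # Lower total distance to the nearest centroid means tighter clusters.
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return -d.min(axis=1).sum()

def evolve(X, n_centroids=3, pop_size=20, generations=50, sigma=0.1):
    dim = X.shape[1]
    pop = rng.normal(size=(pop_size, n_centroids, dim))
    for _ in range(generations):
        scores = np.array([fitness(ind, X) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]                # selection
        children = parents + rng.normal(scale=sigma, size=parents.shape)  # mutation
        pop = np.concatenate([parents, children])
    return max(pop, key=lambda ind: fitness(ind, X))

X = rng.normal(size=(200, 2))               # stand-in for "normal" traffic features
best = evolve(X)
dist = np.linalg.norm(X[:, None, :] - best[None, :, :], axis=2).min(axis=1)
anomalies = dist > np.percentile(dist, 99)  # flag the farthest 1% as anomalous
```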
Abstract:
This letter introduces a new robust nonlinear identification algorithm using the Predicted REsidual Sums of Squares (PRESS) statistic and forward regression. The major contribution is to compute the PRESS statistic within the framework of a forward orthogonalization process and hence construct a model with good generalization properties. Based on the properties of the PRESS statistic, the proposed algorithm achieves a fully automated procedure without resorting to a separate validation data set for iterative model evaluation.
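For context, the PRESS statistic for a linear-in-the-parameters model has a closed form based on leave-one-out residuals, which is what makes model selection without a separate validation set possible. The sketch below shows only this standard identity, not the paper's forward orthogonalization framework.

```python
# A minimal sketch of the PRESS statistic for a least-squares fit, using the
# closed-form leave-one-out identity e_i / (1 - h_ii).
import numpy as np

def press(Phi: np.ndarray, y: np.ndarray) -> float:
    """PRESS for the least-squares model y ~ Phi @ theta."""
    theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
    residuals = y - Phi @ theta
    # Leverages: diagonal of the hat matrix Phi (Phi^T Phi)^-1 Phi^T.
    H = Phi @ np.linalg.solve(Phi.T @ Phi, Phi.T)
    loo_residuals = residuals / (1.0 - np.diag(H))
    return float(loo_residuals @ loo_residuals)

# Candidate model terms can be ranked by the PRESS value they produce,
# so no held-out validation data is needed.
```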
Abstract:
An alternative blind deconvolution algorithm for white-noise driven minimum phase systems is presented and verified by computer simulation. This algorithm uses a cost function based on a novel idea: variance approximation and series decoupling (VASD), and suggests that not all autocorrelation function values are necessary to implement blind deconvolution.
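The VASD cost function cannot be reconstructed from the abstract alone. As a hedged, standard baseline for white-noise-driven minimum-phase systems, the sketch below estimates a whitening (inverse) filter from a small number of autocorrelation lags via the Yule-Walker equations, echoing the abstract's point that not all autocorrelation values are needed.

```python
# A minimal sketch of classical linear-prediction deconvolution, NOT the
# VASD algorithm: the inverse filter is estimated from only `order`
# autocorrelation lags of the observed signal.
import numpy as np

def whitening_filter(x, order):
    """Estimate inverse-filter coefficients from `order` autocorrelation lags."""
    n = len(x)
    r = np.array([x[:n - k] @ x[k:] for k in range(order + 1)]) / n
    R = np.array([[r[abs(i - j)] for j in range(order)] for i in range(order)])
    a = np.linalg.solve(R, r[1:order + 1])   # Yule-Walker normal equations
    return np.concatenate([[1.0], -a])       # prediction-error (whitening) filter

# Convolving the observed signal with this filter approximately recovers the
# white driving noise when the unknown system is minimum phase.
```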
Abstract:
This paper presents a novel approach to the automatic classification of very large data sets composed of terahertz pulse transient signals, highlighting their potential use in biochemical, biomedical, pharmaceutical and security applications. Two different types of THz spectra are considered in the classification process. Firstly, a binary classification study of poly-A and poly-C ribonucleic acid samples is performed. This is then contrasted with a difficult multi-class classification problem of spectra from six different powder samples which, although fairly indistinguishable in the optical spectrum, possess a few discernible spectral features in the terahertz part of the spectrum. Classification is performed using a complex-valued extreme learning machine algorithm that takes into account features in both the amplitude and the phase of the recorded spectra. Classification speed and accuracy are contrasted with those achieved using a support vector machine classifier. The study systematically compares the classifier performance achieved after adopting different Gaussian kernels when separating amplitude and phase signatures. The two signatures are presented as feature vectors for both training and testing purposes. The study confirms the utility of complex-valued extreme learning machine algorithms for classification of the very large data sets generated with current terahertz imaging spectrometers. The classifier can take into consideration heterogeneous layers within an object, as would be required within a tomographic setting, and is sufficiently robust to detect patterns hidden inside noisy terahertz data sets. The proposed study opens up the opportunity for the establishment of complex-valued extreme learning machine algorithms as new chemometric tools that will assist the wider proliferation of terahertz sensing technology for chemical sensing, quality control, security screening and clinical diagnosis. Furthermore, the proposed algorithm should also be very useful in other applications requiring the classification of very large datasets.
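A complex-valued extreme learning machine follows the standard ELM recipe with complex arithmetic: random complex hidden weights, a complex activation, and output weights solved by least squares. The sketch below follows the paper only loosely; kernel choice and the construction of complex features from THz amplitude and phase are assumptions.

```python
# A minimal sketch of a complex-valued extreme learning machine, assuming
# complex feature vectors built from amplitude and phase of each spectrum.
import numpy as np

rng = np.random.default_rng(0)

def celm_train(X, T, n_hidden=64):
    """X: complex features (n, d); T: one-hot class targets (n, k)."""
    W = rng.normal(size=(X.shape[1], n_hidden)) + 1j * rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden) + 1j * rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # complex hidden-layer output
    beta, *_ = np.linalg.lstsq(H, T, rcond=None)  # output weights in one shot
    return W, b, beta

def celm_predict(X, W, b, beta):
    scores = np.tanh(X @ W + b) @ beta
    return np.argmax(np.abs(scores), axis=1)      # class with strongest magnitude
```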
Abstract:
Sclera segmentation has been shown to be of significant importance for eye and iris biometrics. However, it has not been extensively researched as a separate topic, but mainly treated as a component of a broader task. This paper proposes a novel sclera segmentation algorithm for colour images which operates at the pixel level. Exploring various colour spaces, the proposed approach is robust to image noise and different gaze directions. The algorithm's robustness is enhanced by a two-stage classifier: at the first stage, a set of simple classifiers is employed, while at the second stage, a neural network classifier operates on the probability space generated by the first-stage classifiers. The proposed method ranked 1st in the Sclera Segmentation Benchmarking Competition 2015, part of BTAS 2015, with a precision of 95.05% at a recall of 94.56%.
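The two-stage structure can be sketched generically: simple per-pixel classifiers emit probabilities, and a small neural network is trained on that probability space. The specific classifiers below are illustrative assumptions; the paper's colour-space feature extraction is stubbed out.

```python
# A minimal sketch of a two-stage pixel classifier: stage-1 probabilities
# become the stage-2 feature space. Classifier choices are assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.naive_bayes import GaussianNB
from sklearn.neural_network import MLPClassifier

def train_two_stage(X, y):
    """X: per-pixel colour features (n, d); y: sclera / non-sclera labels."""
    stage1 = [LogisticRegression(max_iter=1000).fit(X, y),
              GaussianNB().fit(X, y)]
    # Stage 2 operates on the probability space produced by stage 1.
    P = np.hstack([clf.predict_proba(X) for clf in stage1])
    stage2 = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500).fit(P, y)
    return stage1, stage2

def predict_two_stage(X, stage1, stage2):
    P = np.hstack([clf.predict_proba(X) for clf in stage1])
    return stage2.predict(P)
```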
Abstract:
Infants (12 to 17 months) were taught 2 novel words for 2 images of novel objects by pairing isolated auditory labels with to-be-associated images. Comprehension was tested using a preferential looking task in which the infant was presented with both images together with an isolated auditory label. The auditory label usually, but not always, matched one of the images. Infants looked preferentially at images that matched the auditory stimulus. The experiment controlled within subjects for both side bias and preference for previously named items. Infants showed learning after 12 presentations of the new words. Evidence is presented that, in certain circumstances, the duration of the longest look at a target may be a more robust measure of target preference than overall looking time. The experiment provides support for previous demonstrations of rapid word learning by pre-vocabulary-spurt children and offers some methodological improvements to the preferential looking task.
Abstract:
In this review, we consider three possible criteria by which knowledge might be regarded as implicit or inaccessible: It might be implicit only in the sense that it is difficult to articulate freely, or it might be implicit according to either an objective threshold or a subjective threshold. We evaluate evidence for these criteria in relation to artificial grammar learning, the control of complex systems, and sequence learning, respectively. We argue that the convincing evidence is not yet in, but construing the implicit nature of implicit learning in terms of a subjective threshold is most likely to prove fruitful for future research. Furthermore, the subjective threshold criterion may demarcate qualitatively different types of knowledge. We argue that (1) implicit, rather than explicit, knowledge is often relatively inflexible in transfer to different domains, (2) implicit, rather than explicit, learning occurs when attention is focused on specific items and not underlying rules, and (3) implicit learning and the resulting knowledge are often relatively robust.
Abstract:
Ant colonies in nature provide a good model for a distributed, robust and adaptive routing algorithm. This paper proposes the adoption of the same strategy for routing packets in an Active Network. Traditional store-and-forward routers are replaced by active intermediate systems, which are able to perform computations on transient packets in a way that proves very helpful for developing and dynamically deploying new protocols. The adoption of the Active Networks paradigm, associated with a cooperative learning environment, produces a robust, decentralized routing algorithm capable of adapting to network traffic conditions.
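The ant-colony strategy can be sketched with per-node pheromone tables: "ants" that reach a destination reinforce the hops they used in proportion to trip quality, while evaporation keeps the tables adaptive. This is an illustration of the general mechanism, not the paper's active-network implementation.

```python
# A minimal sketch of ant-inspired routing state at one node: a pheromone
# table mapping destination -> next hop -> pheromone level.
import random
from collections import defaultdict

pheromone = defaultdict(lambda: defaultdict(lambda: 1.0))  # [dest][next_hop]

def choose_next_hop(dest, neighbours):
    # Probabilistic choice proportional to pheromone keeps some exploration.
    weights = [pheromone[dest][n] for n in neighbours]
    return random.choices(neighbours, weights=weights, k=1)[0]

def reinforce(dest, path, trip_time, evaporation=0.05, q=1.0):
    for d in pheromone:                      # global evaporation
        for n in pheromone[d]:
            pheromone[d][n] *= (1.0 - evaporation)
    for hop in path:                         # reward hops on the successful route
        pheromone[dest][hop] += q / trip_time
```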
Abstract:
Background: Impairments in explicit memory have been observed in Holocaust survivors with posttraumatic stress disorder. Methods: To evaluate which memory components are preferentially affected, the California Verbal Learning Test was administered to Holocaust survivors with (n = 36) and without (n = 26) posttraumatic stress disorder, and subjects not exposed to the Holocaust (n = 40). Results: Posttraumatic stress disorder subjects showed impairments in learning and short-term and delayed retention compared to nonexposed subjects; survivors without posttraumatic stress disorder did not. Impairments in learning, but not retention, remained after controlling for intelligence quotient. Older age was associated with poorer learning and memory performance in the posttraumatic stress disorder group only. Conclusions: The most robust impairment observed in posttraumatic stress disorder was in verbal learning, which may be a risk factor for or consequence of chronic posttraumatic stress disorder. The negative association between performance and age may reflect accelerated cognitive decline in posttraumatic stress disorder.
Abstract:
This paper proposes a new iterative algorithm for joint OFDM data detection and phase noise (PHN) cancellation based on minimum mean square prediction error. We particularly highlight the problem of "overfitting", whereby the iterative approach may converge to a trivial solution. Although it is central to this joint approach, the overfitting problem has received relatively little attention in existing algorithms. In this paper, specifically, we apply a hard-decision procedure at every iterative step to overcome overfitting. Moreover, compared with existing algorithms, a more accurate Padé approximation is used to represent the phase noise, and a more robust and compact fast process based on Givens rotations is proposed to reduce the complexity to a practical level. Numerical simulations are given to verify the proposed algorithm.
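Givens rotations are the standard numerically robust building block behind such fast processes: each rotation zeroes one matrix entry, and a sequence of them yields a triangular factorisation amenable to cheap recursive updates. The sketch below shows the basic primitive, not the paper's OFDM-specific processing.

```python
# A minimal sketch of a Givens rotation and its use for triangularisation.
import numpy as np

def givens(a: float, b: float):
    """Return (c, s) with [[c, s], [-s, c]] @ [a, b]^T = [r, 0]^T."""
    if b == 0.0:
        return 1.0, 0.0
    r = np.hypot(a, b)
    return a / r, b / r

def triangularise(A):
    R = A.astype(float).copy()
    m, n = R.shape
    for j in range(n):
        for i in range(m - 1, j, -1):        # zero entries below the diagonal
            c, s = givens(R[i - 1, j], R[i, j])
            G = np.array([[c, s], [-s, c]])
            R[i - 1:i + 1, :] = G @ R[i - 1:i + 1, :]
    return R                                 # upper triangular factor

print(np.round(triangularise(np.array([[6.0, 5.0], [5.0, 1.0], [4.0, 4.0]])), 3))
```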
Abstract:
In this paper, new robust nonlinear model construction algorithms for a large class of linear-in-the-parameters models are introduced to enhance model robustness, including three algorithms that combine A-optimality, D-optimality, or the PRESS statistic (Predicted REsidual Sum of Squares) with the regularised orthogonal least squares algorithm, respectively. A common characteristic of these algorithms is that the inherent computational efficiency associated with the orthogonalisation scheme in orthogonal least squares or regularised orthogonal least squares has been extended, so that the new algorithms are computationally efficient. A numerical example is included to demonstrate the effectiveness of the algorithms. Copyright (C) 2003 IFAC.
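The common backbone of such methods is forward orthogonal least squares: candidate regressors are orthogonalised against the already-selected ones and ranked by a selection criterion. The sketch below uses the classical error reduction ratio as that criterion; the paper's A-/D-optimality and regularised variants would replace it.

```python
# A minimal sketch of forward orthogonal least squares term selection for a
# linear-in-the-parameters model, using the classical error reduction ratio.
import numpy as np

def forward_ols(P, y, n_terms):
    """P: candidate regressors (n, m); returns indices of selected terms."""
    selected, basis = [], []
    for _ in range(n_terms):
        best_idx, best_err, best_w = None, -1.0, None
        for j in range(P.shape[1]):
            if j in selected:
                continue
            w = P[:, j].astype(float)
            for q in basis:                  # orthogonalise against chosen terms
                w = w - (q @ w) / (q @ q) * q
            err = (w @ y) ** 2 / ((w @ w) * (y @ y))  # error reduction ratio
            if err > best_err:
                best_idx, best_err, best_w = j, err, w
        selected.append(best_idx)
        basis.append(best_w)
    return selected
```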