9 results for sequential learning

in Deakin Research Online - Australia


Relevance:

70.00%

Publisher:

Abstract:

In this brief, a new neural network model called generalized adaptive resonance theory (GART) is introduced. GART is a hybrid model that comprises a modified Gaussian adaptive resonance theory (MGA) and the generalized regression neural network (GRNN). It is an enhanced version of the GRNN that preserves the online learning properties of adaptive resonance theory (ART). A series of empirical studies is conducted to assess the effectiveness of GART in classification, regression, and time series prediction tasks. The results demonstrate that GART produces good performance compared with that of other methods, including the online sequential extreme learning machine (OSELM) and sequential learning radial basis function (RBF) neural network models.
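The GRNN core that GART builds on can be illustrated with a minimal sketch: the prediction is a Gaussian-kernel-weighted average of the stored training targets. The data, bandwidth, and function name below are illustrative assumptions; the ART vigilance/resonance machinery of the full GART model is not shown.

```python
import numpy as np

def grnn_predict(X_train, y_train, x, sigma=0.5):
    """GRNN prediction: a Gaussian-kernel-weighted average of the stored
    training targets (equivalent to a Nadaraya-Watson estimator)."""
    d2 = np.sum((X_train - x) ** 2, axis=1)       # squared distances to stored patterns
    w = np.exp(-d2 / (2.0 * sigma ** 2))          # pattern-layer activations
    return float(np.dot(w, y_train) / np.sum(w))  # summation / output layers

# Toy 1-D regression, y = 2x
X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 2.0, 4.0, 6.0])
print(grnn_predict(X, y, np.array([1.5])))  # close to 3.0 by symmetry
```

Because a GRNN simply stores patterns, a new sample can be appended to the training set without retraining, which is the one-pass, online flavour that GART extends with ART-style category creation.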

Relevance:

60.00%

Publisher:

Abstract:

Driving is a repetitive process that permits sequential learning once the proper change periods are identified. Sequential filtering is widely used for tracking and prediction of state dynamics; however, it suffers at abrupt changes, which cause sudden increases in prediction error. We provide a sequential filtering approach that uses online Bayesian detection of change points to decrease prediction error in general, and specifically at abrupt changes. The approach learns from optimally detected segments to identify driving behaviour. Change-point detection is performed by the Pruned Exact Linear Time (PELT) algorithm. The computational cost of our approach is bounded by the cost of the implemented sequential filter, a performance suited to the online nature of delay reduction in motion simulators. The approach was tested on a simulated driving scenario using Vortex by CM Labs; the state dimensions are simulated 2D space coordinates and velocity, and a particle filter was used for online sequential filtering. Prediction results show that change-point detection improves the quality of state estimation compared with traditional sequential filters and is better suited to predicting behavioural activities.
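The abstract names the Pruned Exact Linear Time (PELT) algorithm for change-point detection. A minimal sketch for detecting shifts in mean follows; the toy signal and penalty value are illustrative assumptions (the paper applies PELT to driving-state dynamics, not this cost model specifically).

```python
import numpy as np

def pelt_mean(x, beta):
    """PELT change-point detection for shifts in mean.
    Segment cost = within-segment sum of squared deviations; beta penalises
    each additional change point."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    # Prefix sums allow O(1) evaluation of any segment cost
    s1 = np.concatenate(([0.0], np.cumsum(x)))
    s2 = np.concatenate(([0.0], np.cumsum(x ** 2)))

    def cost(a, b):
        # Sum of squared deviations of x[a:b] from its own mean
        return s2[b] - s2[a] - (s1[b] - s1[a]) ** 2 / (b - a)

    F = [-beta] + [np.inf] * n   # F[t]: optimal penalised cost of x[:t]
    last = [0] * (n + 1)         # back-pointer to previous change point
    cands = [0]                  # pruned set of candidate change points
    for t in range(1, n + 1):
        vals = [F[s] + cost(s, t) + beta for s in cands]
        i = int(np.argmin(vals))
        F[t], last[t] = vals[i], cands[i]
        # PELT pruning: keep only candidates that can still become optimal
        cands = [s for s, v in zip(cands, vals) if v - beta <= F[t]] + [t]
    cps, t = [], n
    while last[t] > 0:           # walk back-pointers to collect change points
        t = last[t]
        cps.append(t)
    return sorted(cps)

signal = [0.0] * 20 + [5.0] * 20
print(pelt_mean(signal, beta=3.0))  # -> [20]
```

The pruning step is what bounds the candidate set and gives PELT its (expected) linear runtime, which matches the bounded-cost claim in the abstract.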

Relevance:

60.00%

Publisher:

Abstract:

The idea of meta-cognitive learning has enriched the landscape of evolving systems, because it emulates three fundamental aspects of human learning: what-to-learn, how-to-learn, and when-to-learn. However, existing meta-cognitive algorithms still exclude Scaffolding theory, which can realize a plug-and-play classifier. Consequently, these algorithms require laborious pre- and/or post-training processes to be carried out in addition to the main training process. This paper introduces a novel meta-cognitive algorithm termed GENERIC-Classifier (gClass), whose how-to-learn part constitutes a synergy of Scaffolding Theory (a tutoring theory that fosters the ability to sort out complex learning tasks) and Schema Theory (a learning theory of knowledge acquisition by humans). The what-to-learn aspect adopts an online active learning concept by virtue of an extended conflict-and-ignorance method, making gClass an incremental semi-supervised classifier, whereas the when-to-learn component makes use of the standard sample reserved strategy. A generalized version of the Takagi-Sugeno-Kang (TSK) fuzzy system is devised to serve as the cognitive constituent: the rule premise is underpinned by multivariate Gaussian functions, while the rule consequent employs a subset of the non-linear Chebyshev polynomials. Thorough empirical studies, confirmed by their corresponding statistical tests, have numerically validated the efficacy of gClass, which delivers better classification rates than state-of-the-art classifiers while having lower complexity.
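The cognitive constituent described above, a generalized TSK fuzzy system with multivariate-Gaussian rule premises and Chebyshev-polynomial rule consequents, can be sketched in miniature. The rule parameters below are made-up illustrations; gClass additionally grows and prunes rules incrementally, which is not shown.

```python
import numpy as np

def chebyshev_features(x, order=2):
    """Consequent features: T_1..T_order of each input dimension,
    preceded by a shared constant term, via T_k = 2x*T_{k-1} - T_{k-2}."""
    feats = [np.ones(1)]
    for xi in x:
        T = [1.0, float(xi)]
        for _ in range(2, order + 1):
            T.append(2.0 * float(xi) * T[-1] - T[-2])
        feats.append(np.array(T[1:]))
    return np.concatenate(feats)

def tsk_predict(x, centers, inv_covs, weights, order=2):
    """TSK inference: multivariate-Gaussian firing strengths weight the
    per-rule Chebyshev-polynomial consequent outputs."""
    phi = chebyshev_features(x, order)
    firing = np.array([np.exp(-0.5 * (x - c) @ S @ (x - c))
                       for c, S in zip(centers, inv_covs)])
    firing = firing / firing.sum()          # normalised rule activations
    return float(firing @ (weights @ phi))  # firing-weighted rule outputs

# Two illustrative rules in one input dimension (all parameters invented):
centers = [np.array([-1.0]), np.array([1.0])]
inv_covs = [np.eye(1), np.eye(1)]
weights = np.array([[0.0,  1.0, 0.0],    # rule 1 consequent: y =  T_1(x) =  x
                    [0.0, -1.0, 0.0]])   # rule 2 consequent: y = -T_1(x) = -x
print(tsk_predict(np.array([0.5]), centers, inv_covs, weights))
```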

Relevance:

30.00%

Publisher:

Abstract:

Despite the fact that developmental coordination disorder (DCD) is characterised by a deficit in the ability to learn or automate motor skills, few studies have examined motor learning over repeated trials. In this study we examined procedural learning in a group of 10 children with DCD (aged 8–12 years) and age-matched controls without DCD. The learning task was modelled on that of Nissen and Bullemer [Cognitive Psychology 19 (1987) 1]. Children performed a serial reaction time (SRT) task in which they were required to learn a spatial sequence that repeated itself every 10 trials; children were not aware of the repetition. Spatial targets were four (horizontal) locations presented on a computer monitor, and children responded using four response keys with the same horizontal mapping as the stimulus. They were tested over five blocks of 100 trials each: the first four blocks presented the same repeating sequence, while the fifth block was randomised. Procedural learning was indexed by the slope of the regression of RT on blocks 1–4. Results showed that most children displayed strong procedural learning of the sequence, despite having no explicit knowledge of it. Overall, there was no group difference in the magnitude of learning over blocks of trials; most children performed within the normal range. Procedural learning for simple sequential movements appears to be intact in children with DCD. This suggests that the cortico-striatal circuits strongly implicated in the sequencing of simple movements function normally in DCD.
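The learning index described above, the slope of the regression of RT on blocks 1–4, reduces to an ordinary least-squares slope. A minimal sketch follows; the RT values are hypothetical, not data from the study.

```python
def learning_slope(block_rts):
    """Ordinary least-squares slope of mean reaction time (RT) over
    successive blocks; a negative slope (RT falling across the
    repeated-sequence blocks) indexes procedural learning."""
    n = len(block_rts)
    xs = list(range(1, n + 1))   # block numbers 1..n
    mx = sum(xs) / n
    my = sum(block_rts) / n
    num = sum((x - mx) * (y - my) for x, y in zip(xs, block_rts))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Hypothetical mean RTs (ms) for blocks 1-4 of one child
print(learning_slope([520.0, 490.0, 470.0, 455.0]))  # -> -21.5 ms per block
```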

Relevance:

30.00%

Publisher:

Abstract:

Many Australian tertiary institutions provide support for academic staff in the design and development of online teaching and learning resources, often employing a centralised unit staffed with educational and instructional designers, multimedia and online developers, audio/video producers, and graphic artists. It is not unusual for these units to have evolved from print-based distance education providers, and consequently the design and development processes within them are often steeped in ‘traditional’ sequential instructional development models. We argue that these models are no longer valid for working effectively with academic staff, given the dynamic nature of online learning environments and the diversity of skills required to implement effective online learning. This paper therefore presents an extended instructional design model in which the development cycle for online teaching and learning materials uses a scaffolding strategy, in order to cater for learner-centred activities and to maximise scarce developer and academic resources. The model also integrates accepted phases of the instructional development process to provide guidelines for the disposition of staff, and to more accurately reflect the creation of resources as learning design rather than instructional design. It is a model that builds on instructional design processes and integrates concepts of team-based development, shared understanding, and the development of relevant communities of practice.

Relevance:

30.00%

Publisher:

Abstract:

This paper reports findings from a project that examined the extent and nature of the contribution of rural schools to their communities’ development, beyond traditional forms of education of young people. Case study communities in five Australian States participated in the project, funded by the Rural Industries Research and Development Corporation. Communities and schools that share the belief that education is the responsibility of the whole community, and that work together by drawing on the skills and knowledge of the community as a whole, experience benefits that extend far beyond producing a well-educated group of young people. The level of maturity of the school–community partnership dictates how schools and communities go about developing and sustaining new linkages or joint projects. Twelve characteristics central to the success of school–community partnerships were identified. The characteristics are largely sequential, in that later characteristics build on earlier ones. Underscoring these characteristics is the importance of collective learning activities, including teamwork and network building, which have been identified elsewhere as key social capital building activities. A generic model of the relationship between the indicators of effective school–community partnerships and the level of maturity of those partnerships is put forward.

Relevance:

30.00%

Publisher:

Abstract:

Inspired by hierarchical hidden Markov models (HHMMs), we present the hierarchical semi-Markov conditional random field (HSCRF), a generalisation of embedded undirected Markov chains that models complex hierarchical, nested Markov processes. It is parameterised in a discriminative framework and has polynomial-time algorithms for learning and inference. Importantly, we develop efficient algorithms for learning and constrained inference in a partially-supervised setting, which is an important issue in practice, where labels can often be obtained only sparsely. We demonstrate the HSCRF in two applications: (i) recognising human activities of daily living (ADLs) from indoor surveillance cameras, and (ii) noun-phrase chunking. We show that the HSCRF is capable of learning rich hierarchical models with reasonable accuracy in both fully and partially observed data cases.
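As a point of reference for this model family, the forward (alpha) recursion of a flat linear-chain CRF, the base case that the HSCRF generalises to nested semi-Markov segments, can be sketched as follows. The hierarchical and semi-Markov machinery itself is not shown; scores and dimensions are illustrative.

```python
import numpy as np

def crf_log_partition(unary, trans):
    """Forward (alpha) recursion of a flat linear-chain CRF.
    unary: (T, K) per-position label scores; trans: (K, K) transition scores.
    Returns log Z, the log-sum of exp(score) over all K^T label sequences."""
    alpha = np.array(unary[0], dtype=float)
    for t in range(1, len(unary)):
        m = alpha[:, None] + trans                # m[i, j]: come from i, go to j
        mmax = m.max(axis=0)
        # Stable logsumexp over the previous label for each current label
        alpha = unary[t] + mmax + np.log(np.exp(m - mmax).sum(axis=0))
    amax = alpha.max()
    return float(amax + np.log(np.exp(alpha - amax).sum()))

# All-zero scores: every one of the 2^3 = 8 label sequences scores 0,
# so log Z = log 8.
print(crf_log_partition(np.zeros((3, 2)), np.zeros((2, 2))))
```

The HSCRF replaces the per-position recursion with one over labelled segments nested across levels, while keeping the same discriminative, dynamic-programming character.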

Relevance:

30.00%

Publisher:

Abstract:

Zero-day or unknown malware is created using code obfuscation techniques that can modify the parent code to produce offspring copies that have the same functionality but different signatures. Current techniques reported in the literature lack the capability to detect zero-day malware with the required accuracy and efficiency. In this paper, we propose and evaluate a novel method that employs several data mining techniques to detect and classify zero-day malware with high accuracy and efficiency, based on the frequency of Windows API calls. The paper describes the methodology employed for the collection of large data sets to train the classifiers, and analyses the performance of the various data mining algorithms adopted for the study, using a fully automated tool developed in this research to conduct the experimental investigations and evaluation. Through the performance results of these algorithms from our experimental analysis, we evaluate and discuss the advantages of one data mining algorithm over another for accurately detecting zero-day malware. The data mining framework employed in this research learns by analysing the behavior of existing malicious and benign code in large datasets. We have employed robust classifiers, namely the Naïve Bayes (NB) algorithm, the k-Nearest Neighbor (kNN) algorithm, the Sequential Minimal Optimization (SMO) algorithm with four different kernels (SMO – Normalized PolyKernel, SMO – PolyKernel, SMO – Puk, and SMO – Radial Basis Function (RBF)), the Backpropagation Neural Network algorithm, and the J48 decision tree, and have evaluated their performance. Overall, the automated data mining system implemented for this study achieved a high true positive (TP) rate of more than 98.5% and a low false positive (FP) rate of less than 0.025, which has not been achieved in the literature so far. This is much higher than the required commercial acceptance level, indicating that our novel technique is a major leap forward in detecting zero-day malware. The paper also offers future directions for researchers in exploring different aspects of the obfuscations that are affecting the IT world today.
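The feature representation and one of the evaluated classifier families can be sketched in miniature: relative frequencies of Windows API calls feed a k-nearest-neighbour vote. The API names, traces, and labels below are illustrative assumptions, not data from the study, and the real pipeline also covers SMO, Naïve Bayes, backpropagation, and J48.

```python
from collections import Counter
import math

def api_freq_vector(trace, vocab):
    """Relative frequency of each Windows API call in an execution trace
    (the frequency-of-API-calls feature representation)."""
    c = Counter(trace)
    total = sum(c.values()) or 1
    return [c[a] / total for a in vocab]

def knn_classify(x, train, k=3):
    """Plain k-nearest-neighbour majority vote over Euclidean distance."""
    dists = sorted((math.dist(x, v), label) for v, label in train)
    votes = Counter(label for _, label in dists[:k])
    return votes.most_common(1)[0][0]

# Hypothetical traces; names and labels are invented for illustration.
vocab = ["CreateFile", "WriteFile", "RegSetValue", "CreateRemoteThread"]
train = [
    (api_freq_vector(["CreateFile", "WriteFile", "WriteFile"], vocab), "benign"),
    (api_freq_vector(["CreateFile", "WriteFile"], vocab), "benign"),
    (api_freq_vector(["RegSetValue", "CreateRemoteThread",
                      "CreateRemoteThread"], vocab), "malware"),
    (api_freq_vector(["CreateRemoteThread", "RegSetValue"], vocab), "malware"),
]
x = api_freq_vector(["CreateRemoteThread", "RegSetValue", "WriteFile"], vocab)
print(knn_classify(x, train))  # -> malware
```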

Relevance:

30.00%

Publisher:

Abstract:

BACKGROUND: As more and more researchers are turning to big data for new opportunities of biomedical discoveries, machine learning models, as the backbone of big data analysis, are mentioned more often in biomedical journals. However, owing to the inherent complexity of machine learning methods, they are prone to misuse. Because of the flexibility in specifying machine learning models, the results are often insufficiently reported in research articles, hindering reliable assessment of model validity and consistent interpretation of model outputs.

OBJECTIVE: To attain a set of guidelines on the use of machine learning predictive models within clinical settings, ensuring that the models are correctly applied and sufficiently reported so that true discoveries can be distinguished from random coincidence.

METHODS: A multidisciplinary panel of machine learning experts, clinicians, and traditional statisticians was interviewed, using an iterative process in accordance with the Delphi method.

RESULTS: The process produced a set of guidelines that consists of (1) a list of reporting items to be included in a research article and (2) a set of practical sequential steps for developing predictive models.

CONCLUSIONS: A set of guidelines was generated to enable correct application of machine learning models and consistent reporting of model specifications and results in biomedical research. We believe that such guidelines will accelerate the adoption of big data analysis, particularly with machine learning methods, in the biomedical research community.