838 results for Modeling Rapport Using Machine Learning


Relevance: 100.00%

Abstract:

The Vapnik-Chervonenkis (VC) dimension is a combinatorial measure of a certain class of machine learning problems, which may be used to obtain upper and lower bounds on the number of training examples needed to learn to prescribed levels of accuracy. Most of the known bounds apply to the Probably Approximately Correct (PAC) framework, which is the framework within which we work in this paper. For a learning problem with some known VC dimension, much is known about the order of growth of the sample-size requirement of the problem as a function of the PAC parameters. The exact value of the sample-size requirement is, however, less well known, and depends heavily on the particular learning algorithm being used. This is a major obstacle to the practical application of the VC dimension. Hence it is important to know exactly how the sample-size requirement depends on the VC dimension, and with that in mind, we describe a general algorithm for learning problems having VC dimension 1. Its sample-size requirement is minimal (as a function of the PAC parameters), and turns out to be the same for all non-trivial learning problems having VC dimension 1. While the method used cannot be naively generalised to higher VC dimension, it suggests that optimal algorithm-dependent bounds may improve substantially on current upper bounds.
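For context, the standard VC-based PAC upper bound on sample size takes the form m = O((1/ε)(d ln(1/ε) + ln(1/δ))) for a class of VC dimension d. A minimal sketch computing this bound follows; the constant c = 8 is illustrative only, since exact constants vary by proof and algorithm, which is precisely the algorithm-dependent gap the abstract describes:

```python
import math

def pac_sample_bound(d: int, eps: float, delta: float, c: float = 8.0) -> int:
    """Standard VC-based PAC upper bound on sample size, up to the constant c:
    m >= (c / eps) * (d * ln(1/eps) + ln(1/delta)).
    The value c = 8 is an illustrative assumption, not a tight constant."""
    m = (c / eps) * (d * math.log(1.0 / eps) + math.log(1.0 / delta))
    return math.ceil(m)

# VC dimension 1 (the case treated above), accuracy 0.1, confidence 0.95:
print(pac_sample_bound(d=1, eps=0.1, delta=0.05))  # prints 424
```

Note how the bound is linear in d and near-linear in 1/ε; the paper's point is that the true requirement for a given algorithm can sit well below such generic constants.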

Relevance: 100.00%

Abstract:

This paper presents a forecasting technique for forward energy prices, one day ahead. The technique combines a wavelet transform with forecasting models such as the multi-layer perceptron, linear regression or GARCH. These techniques are applied to real data from the UK gas markets to evaluate their performance. The results show that forecasting accuracy is improved significantly by using the wavelet transform. The methodology can also be applied to forecasting market clearing prices and electricity/gas loads.
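The abstract does not specify which wavelet is used; as a sketch of the general idea, a one-level Haar transform splits the price series into a smooth approximation and a detail series, each of which can then be forecast separately (e.g. by an MLP or linear regression) and recombined:

```python
def haar_step(signal):
    """One level of the Haar wavelet transform: split a series into a smooth
    approximation and a detail component (pairwise averages and half-differences)."""
    approx = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Reconstruct the original series from one level of the transform."""
    out = []
    for a, d in zip(approx, detail):
        out += [a + d, a - d]
    return out

# Toy forward-price series (values chosen as exact binary fractions so the
# round trip is bit-exact).
prices = [30.0, 30.5, 29.75, 31.0, 31.25, 30.5, 31.5, 31.0]
approx, detail = haar_step(prices)
assert haar_inverse(approx, detail) == prices  # perfect reconstruction
print(approx)  # smooth trend component, to be forecast by one model
print(detail)  # high-frequency component, to be forecast by another
```

Forecasting each band separately and summing the band forecasts is the usual way such hybrid wavelet schemes are assembled; the specific models per band here are left open, as in the abstract.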

Relevance: 100.00%

Abstract:

In this paper we introduce and illustrate non-trivial upper and lower bounds on the learning curves for one-dimensional Gaussian Processes. The analysis is carried out emphasising the effects induced on the bounds by the smoothness of the random process described by the Modified Bessel and the Squared Exponential covariance functions. We present an explanation of the early, linearly-decreasing behavior of the learning curves and the bounds as well as a study of the asymptotic behavior of the curves. The effects of the noise level and the lengthscale on the tightness of the bounds are also discussed.
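For reference, the two covariance functions named in the abstract have the following standard forms (standard parametrisations, which may differ from the paper's; the Modified Bessel covariance is the Matérn family, written with the modified Bessel function K_ν):

```latex
% Squared Exponential covariance (lengthscale \ell, signal variance \sigma_f^2)
k_{\mathrm{SE}}(x, x') = \sigma_f^2 \exp\!\left( -\frac{(x - x')^2}{2\ell^2} \right)

% Modified Bessel (Matern) covariance of order \nu, with r = |x - x'|
k_{\nu}(r) = \sigma_f^2 \, \frac{2^{1-\nu}}{\Gamma(\nu)}
             \left( \frac{\sqrt{2\nu}\, r}{\ell} \right)^{\!\nu}
             K_{\nu}\!\left( \frac{\sqrt{2\nu}\, r}{\ell} \right)
```

The order ν controls the smoothness of the sample paths, which is the property whose effect on the learning-curve bounds the paper studies; as ν → ∞ the Matérn form recovers the squared exponential.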

Relevance: 100.00%

Abstract:

This thesis presents an investigation of synchronisation and causality, motivated by problems in computational neuroscience. It addresses both theoretical and practical signal processing issues regarding the estimation of interdependence from a set of multivariate data generated by a complex underlying dynamical system. The topic is driven by a series of problems in neuroscience, which form the principal background to this work. The underlying system is the human brain, and the data are generated by modern electromagnetic neuroimaging methods. In this thesis, the brain's underlying functional mechanisms are described using the recent mathematical formalism of dynamical systems on complex networks. This is justified principally on the grounds of the complex hierarchical and multiscale nature of the brain, and it offers new methods of analysis for modelling its emergent phenomena. A fundamental approach to studying neural activity is to investigate the connectivity pattern developed by the brain's complex network. Three types of connectivity are important to study: 1) anatomical connectivity, referring to the physical links forming the topology of the brain network; 2) effective connectivity, concerned with the way neural elements communicate with each other over the brain's anatomical structure, through phenomena of synchronisation and information transfer; 3) functional connectivity, an epistemic concept which refers to the interdependence between data measured from the brain network. The main contribution of this thesis is to present, apply and discuss novel algorithms for functional connectivity, designed to extract different specific aspects of the interaction between the underlying generators of the data. Firstly, a univariate statistic is developed to allow indirect assessment of synchronisation in the local network from a single time series. This approach is useful for inferring the coupling in a local cortical area as observed by a single measurement electrode. Secondly, different existing methods of phase synchronisation are considered from the perspective of experimental data analysis and the inference of coupling from observed data. These methods are designed to address the estimation of medium- to long-range connectivity, and their differences are particularly relevant in the context of volume conduction, which is known to produce spurious detections of connectivity. Finally, an asymmetric temporal metric is introduced to detect the direction of the coupling between different regions of the brain. The method developed in this thesis is based on a machine learning extension of the well-known concept of Granger causality. The discussion is developed alongside examples of synthetic and real experimental data. The synthetic data are simulations of complex dynamical systems intended to mimic the behaviour of simple cortical neural assemblies; they are used to test the techniques developed in this thesis. The real datasets illustrate the problem of brain connectivity in the case of important neurological disorders such as epilepsy and Parkinson's disease. The methods of functional connectivity in this thesis are applied to intracranial EEG recordings in order to extract features which characterise the underlying spatiotemporal dynamics before, during and after an epileptic seizure, and to predict seizure location and onset prior to conventional electrographic signs. The methodology is also applied to an MEG dataset containing healthy, Parkinson's and dementia subjects, with the aim of distinguishing pathological from physiological patterns of connectivity.
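The thesis's directionality metric is a machine learning extension of Granger causality; as a baseline sketch of the underlying idea (not the thesis's method), plain lag-1 linear Granger causality compares how well x is predicted with and without the past of y. All names and parameters below are illustrative, stdlib only:

```python
import math
import random

def ols_residual_var(y, X):
    """Least-squares residual variance for one or two regressors,
    solving the normal equations directly (stdlib only)."""
    n, k = len(y), len(X[0])
    xtx = [[sum(X[i][a] * X[i][b] for i in range(n)) for b in range(k)]
           for a in range(k)]
    xty = [sum(X[i][a] * y[i] for i in range(n)) for a in range(k)]
    if k == 1:
        beta = [xty[0] / xtx[0][0]]
    else:  # 2x2 solve via Cramer's rule
        det = xtx[0][0] * xtx[1][1] - xtx[0][1] * xtx[1][0]
        beta = [(xty[0] * xtx[1][1] - xtx[0][1] * xty[1]) / det,
                (xtx[0][0] * xty[1] - xtx[1][0] * xty[0]) / det]
    resid = [y[i] - sum(b * X[i][j] for j, b in enumerate(beta)) for i in range(n)]
    return sum(r * r for r in resid) / n

def granger_index(x, y):
    """Lag-1 linear Granger causality from y to x: log ratio of the residual
    variance without vs with the past of y. Larger means y helps predict x."""
    target = x[1:]
    restricted = ols_residual_var(target, [[x[t]] for t in range(len(x) - 1)])
    unrestricted = ols_residual_var(target, [[x[t], y[t]] for t in range(len(x) - 1)])
    return math.log(restricted / unrestricted)

# Toy system in which y drives x with a one-step delay.
rng = random.Random(0)
y = [rng.gauss(0, 1) for _ in range(2000)]
x = [0.0]
for t in range(1, 2000):
    x.append(0.5 * x[t - 1] + 0.8 * y[t - 1] + 0.1 * rng.gauss(0, 1))

print(granger_index(x, y) > granger_index(y, x))  # prints True: y -> x dominates
```

The asymmetry of the index is what makes it a directed measure, in contrast to the symmetric synchronisation statistics discussed earlier in the abstract.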

Relevance: 100.00%

Abstract:

Stochastic differential equations arise naturally in a range of contexts, from financial to environmental modeling. Current solution methods are limited in their representation of the posterior process in the presence of data. In this work, we present a novel Gaussian process approximation to the posterior measure over paths for a general class of stochastic differential equations in the presence of observations. The method is applied to two simple problems: the Ornstein-Uhlenbeck process, for which the exact solution is known and can be used for comparison, and the double-well system, for which standard approaches such as the ensemble Kalman smoother fail to provide a satisfactory result. Experiments show that our variational approximation is viable, and the results are very promising, as the variational approximate solution outperforms standard Gaussian process regression for non-Gaussian Markov processes.
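For readers unfamiliar with the benchmark, the Ornstein-Uhlenbeck process mentioned above is easy to simulate; a minimal Euler-Maruyama discretisation follows (parameter values are illustrative, not the paper's):

```python
import math
import random

def simulate_ou(theta, sigma, x0, dt, n_steps, rng):
    """Euler-Maruyama discretisation of the Ornstein-Uhlenbeck SDE
    dx = -theta * x dt + sigma dW (the benchmark with a known exact solution)."""
    path = [x0]
    for _ in range(n_steps):
        x = path[-1]
        path.append(x - theta * x * dt + sigma * rng.gauss(0.0, math.sqrt(dt)))
    return path

# Illustrative parameters; the exact stationary variance is sigma^2/(2*theta) = 0.25.
rng = random.Random(42)
path = simulate_ou(theta=2.0, sigma=1.0, x0=3.0, dt=0.01, n_steps=5000, rng=rng)
tail = path[2000:]  # discard the transient from x0 = 3.0
mean = sum(tail) / len(tail)
var = sum((v - mean) ** 2 for v in tail) / len(tail)
print(round(var, 3))  # sample variance, near the stationary value 0.25
```

Because the OU transition density is exactly Gaussian, approximations such as the paper's variational posterior can be checked against closed-form moments like the one computed here.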

Relevance: 100.00%

Abstract:

This thesis describes a novel connectionist machine utilizing induction by a Hilbert hypercube representation. This representation offers a number of distinct advantages, which are described. We construct a theoretical and practical learning machine which lies in an area of overlap between three disciplines (neural nets, machine learning and knowledge acquisition); hence it is referred to as a "coalesced" machine. To this unifying aspect is added the various advantages of its orthogonal lattice structure, as against less structured nets. We discuss the case for such a fundamental and low-level empirical learning tool, and the assumptions behind the machine are clearly outlined. Our theory of an orthogonal lattice structure, the Hilbert hypercube of an n-dimensional space using a complemented distributed lattice as a basis for supervised learning, is derived from first principles. The resulting "subhypercube theory" was implemented in a development machine, which was then used to test the theoretical predictions, again under strict scientific guidelines. The scope, advantages and limitations of this machine were tested in a series of experiments. Novel and seminal properties of the machine include: the "metrical", deterministic and global nature of its search; complete convergence, invariably producing minimum polynomial solutions for both disjuncts and conjuncts even with moderate levels of noise present; a learning engine which is mathematically analysable in depth, based upon the "complexity range" of the function concerned; a strong bias towards the simplest possible globally (rather than locally) derived "balanced" explanation of the data; the ability to cope with variables in the network; and new ways of reducing the exponential explosion. Performance issues were addressed, and comparative studies with other learning machines indicate that our novel approach has definite value and should be further researched.
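As a hedged illustration of the representation the thesis builds on (not its implementation), the vertices of the n-dimensional Boolean hypercube with the Hamming distance form exactly the kind of orthogonal, "metrical" lattice described above:

```python
from itertools import product

def hypercube_vertices(n):
    """All 2**n vertices of the n-dimensional Boolean hypercube."""
    return list(product((0, 1), repeat=n))

def hamming(u, v):
    """Hamming distance, the natural metric on the hypercube lattice."""
    return sum(a != b for a, b in zip(u, v))

def neighbours(v):
    """Vertices one Hamming step away, i.e. the lattice edges at v."""
    return [tuple(b ^ (i == j) for j, b in enumerate(v)) for i in range(len(v))]

cube = hypercube_vertices(3)
print(len(cube))                      # prints 8
print(hamming((0, 0, 0), (1, 1, 1)))  # prints 3
print(neighbours((0, 0, 0)))          # the three vertices adjacent to the origin
```

A deterministic, global search over this structure visits vertices in a metric order, which is the property the abstract contrasts with less structured nets.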

Relevance: 100.00%

Abstract:

There has been substantial research into the role of distance learning in education. Despite the rise in the popularity and practice of this form of learning in business, there has not been a parallel increase in the amount of research carried out in this field. An extensive investigation was conducted into the entire distance learning system of a multi-national company, with particular emphasis on the design, implementation and evaluation of the materials. In addition, the performance and attitudes of trainees were examined. The results of a comparative study indicated that trainees using distance learning had significantly higher test scores than trainees using conventional face-to-face training. The influence of trainees' previous distance learning experience, educational background and selected study environment was investigated. Trainees with previous experience of distance learning were more likely to complete the course, and achieved significantly higher test scores, than trainees with no previous experience. The more advanced the educational background of trainees, the greater the likelihood of their completing the course, although there was no significant difference in the test scores achieved. Trainees preferred to use the materials at home, and those opting to study in this environment scored significantly higher than those studying in the office, the study room at work, or a combination of environments. The influence of learning styles (Kolb, 1976) was tested. The results indicated that convergers had the greatest completion rates and scored significantly higher than trainees with assimilator, accommodator and diverger learning styles. The attitudes of the trainees, supervisors and trainers were examined using questionnaire, interview and discussion techniques. The findings highlighted the potential problems of lack of awareness and low motivation, which could prove to be major obstacles to the success of distance learning in business.

Relevance: 100.00%

Abstract:

Improving bit error rates in optical communication systems is a difficult and important problem. The error correction must take place at high speed and be extremely accurate. We show the feasibility of using hardware-implementable machine learning techniques. This may enable some error correction at the speed required.

Relevance: 100.00%

Abstract:

This paper fills an important gap in the human resource development (HRD) literature by considering the role that NGO intermediation initiatives can play in bringing together and developing corporate procurement officials (CPOs) and ethnic minority business owner-managers (EMBOs) supplying goods and services. It has been suggested that such initiatives hold great promise in helping ethnic minority businesses escape from their disadvantageous sectoral concentration in the UK. Using situated learning theory as an application lens, the main aim of this paper is to demonstrate how nurturing communities of practice of CPOs and EMBOs and facilitating their interaction can help their professional development and their approaches to procuring and supplying, respectively. The paper reports on the authors' experience with an action research programme encompassing two intermediation initiatives of this kind. The lessons drawn from this study are useful for all those concerned with HRD for inclusive procurement: intermediaries promoting inclusive procurement, large procurers who are willing to engage with supplier diversity, and ethnic minority suppliers who wish to access corporate procurement systems and 'break out'. © 2013 Taylor & Francis.

Relevance: 100.00%

Abstract:

Objective: Much research has recently been devoted to using nature-inspired algorithms to perform complex machine learning tasks. Ant colony optimization (ACO) is one such algorithm, based on swarm intelligence and derived from a model inspired by the collective foraging behavior of ants. Taking advantage of ACO traits such as self-organization and robustness, this paper investigates ant-based algorithms for gene expression data clustering and associative classification. Methods and material: An ant-based clustering algorithm (Ant-C) and an ant-based association rule mining algorithm (Ant-ARM) are proposed for gene expression data analysis. The proposed algorithms make use of the natural behavior of ants, such as cooperation and adaptation, to allow for a flexible, robust search for a good candidate solution. Results: Ant-C has been tested on three datasets selected from the Stanford Genomic Resource Database and achieved relatively high accuracy compared to other classical clustering methods. Ant-ARM has been tested on the acute lymphoblastic leukemia (ALL)/acute myeloid leukemia (AML) dataset and generated about 30 classification rules with high accuracy. Conclusions: Ant-C can generate an optimal number of clusters without incorporating any other algorithms such as K-means or agglomerative hierarchical clustering. For associative classification, while a few of the well-known algorithms such as Apriori, FP-growth and Magnum Opus are unable to mine any association rules from the ALL/AML dataset within a reasonable period of time, Ant-ARM is able to extract associative classification rules.
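The Ant-C algorithm itself is not specified in the abstract; as a generic sketch of the ant-colony mechanism it builds on, each ant samples a cluster assignment with probability proportional to pheromone, and the best assignment found so far is reinforced after each iteration. All names and parameters below are illustrative assumptions:

```python
import random

def ant_cluster(points, k, n_ants=50, n_iters=30, rho=0.1, seed=1):
    """Toy ant-based clustering sketch: pheromone tau[i][c] encodes the colony's
    preference for placing point i in cluster c. Each ant samples a full
    assignment from the pheromone; the best-so-far assignment (lowest
    within-cluster sum of squares) deposits pheromone each iteration."""
    rng = random.Random(seed)
    n = len(points)
    tau = [[1.0] * k for _ in range(n)]

    def cost(assign):
        total = 0.0
        for c in range(k):
            members = [points[i] for i in range(n) if assign[i] == c]
            if members:
                mu = sum(members) / len(members)
                total += sum((p - mu) ** 2 for p in members)
        return total

    best, best_cost = None, float("inf")
    for _ in range(n_iters):
        for _ant in range(n_ants):
            assign = [rng.choices(range(k), weights=tau[i])[0] for i in range(n)]
            c = cost(assign)
            if c < best_cost:
                best, best_cost = assign, c
        # Evaporation, then reinforcement of the best-so-far assignment.
        tau = [[(1 - rho) * t for t in row] for row in tau]
        for i, c in enumerate(best):
            tau[i][c] += 1.0
    return best, best_cost

points = [0.0, 0.1, 5.0, 5.1]  # two obvious groups
assign, sse = ant_cluster(points, k=2)
print(assign, round(sse, 3))
```

The evaporation/reinforcement loop is the self-organizing trait the abstract refers to; real gene-expression variants replace the toy within-cluster-variance cost with a domain-appropriate similarity measure.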

Relevance: 100.00%

Abstract:

Sentiment analysis, or opinion mining, aims to use automated tools to detect subjective information such as opinions, attitudes, and feelings expressed in text. This paper proposes a novel probabilistic modeling framework based on Latent Dirichlet Allocation (LDA), called the joint sentiment/topic model (JST), which detects sentiment and topic simultaneously from text. Unlike other machine learning approaches to sentiment classification, which often require labeled corpora for classifier training, the proposed JST model is fully unsupervised. The model has been evaluated on the movie review dataset to classify review sentiment polarity, and the use of minimal prior information has also been explored to further improve sentiment classification accuracy. Preliminary experiments have shown promising results achieved by JST.
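JST is defined by its graphical model; as a hedged sketch of the generative story it describes (a sentiment label is drawn for the document, then each word's topic is drawn conditioned on the sentiment, then the word conditioned on both), here is a toy forward-sampling version. All labels, words and distributions are illustrative assumptions, not the paper's:

```python
import random

rng = random.Random(7)

# Toy vocabularies: word distributions conditioned on (sentiment, topic).
WORDS = {
    ("pos", "acting"): ["brilliant", "moving", "superb"],
    ("pos", "plot"):   ["gripping", "clever", "satisfying"],
    ("neg", "acting"): ["wooden", "flat", "overwrought"],
    ("neg", "plot"):   ["predictable", "muddled", "dull"],
}

def generate_review(n_words=5):
    """Sample one document following the JST generative story:
    draw a sentiment label, then per word a topic given the sentiment,
    then a word given (sentiment, topic)."""
    sentiment = rng.choice(["pos", "neg"])
    words = []
    for _ in range(n_words):
        topic = rng.choice(["acting", "plot"])
        words.append(rng.choice(WORDS[(sentiment, topic)]))
    return sentiment, words

sentiment, words = generate_review()
print(sentiment, words)
```

Inference in JST runs this story in reverse (typically by Gibbs sampling over the latent sentiment and topic variables), which is what allows the model to remain fully unsupervised.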

Relevance: 100.00%

Abstract:

Combining the results of classifiers has shown much promise in machine learning generally. However, published work on combining text categorizers suggests that, for this particular application, improvements in performance are hard to attain. Explorative research using a simple voting system is presented and discussed in the light of a probabilistic model that was originally developed for safety critical software. It was found that typical categorization approaches produce predictions which are too similar for combining them to be effective since they tend to fail on the same records. Further experiments using two less orthodox categorizers are also presented which suggest that combining text categorizers can be successful, provided the essential element of ‘difference’ is considered.
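The failure mode described above (categorizers failing on the same records) is easy to demonstrate with a simple majority vote; the data below are illustrative:

```python
from collections import Counter

def majority_vote(predictions):
    """Combine per-classifier label lists by majority vote per record."""
    return [Counter(votes).most_common(1)[0][0] for votes in zip(*predictions)]

truth = ["a", "b", "a", "b", "a", "b"]

# Three classifiers that fail on the SAME two records: voting cannot help.
similar = [
    ["a", "b", "a", "a", "b", "b"],
    ["a", "b", "a", "a", "b", "b"],
    ["a", "b", "a", "a", "b", "b"],
]

# Three classifiers each failing on a DIFFERENT record: voting fixes all errors.
diverse = [
    ["b", "b", "a", "b", "a", "b"],
    ["a", "a", "a", "b", "a", "b"],
    ["a", "b", "b", "b", "a", "b"],
]

def accuracy(pred):
    return sum(p == t for p, t in zip(pred, truth)) / len(truth)

print(accuracy(majority_vote(similar)))  # stuck at the individual accuracy
print(accuracy(majority_vote(diverse)))  # prints 1.0
```

Both ensembles have the same per-classifier accuracy; only the one whose errors are spread across different records gains from combining, which is the 'difference' element the abstract identifies.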

Relevance: 100.00%

Abstract:

The primary questions addressed in this paper are the following: what are the factors that affect students' adoption of an e-learning system, and what are the relationships among these factors? This paper investigates and identifies some of the major factors affecting students' adoption of an e-learning system at a university in Jordan. E-learning adoption is approached from the information systems acceptance point of view. This suggests that a prior condition for learning effectively using e-learning systems is that students must actually use them. Thus, a greater knowledge of the factors that affect IT adoption and their interrelationships is a precursor to a better understanding of student acceptance of e-learning systems. In turn, this will help and guide those who develop, implement, and deliver e-learning systems. In this study, an extended version of the Technology Acceptance Model (TAM) was developed to investigate the underlying factors that influence students' decisions to use an e-learning system. The TAM was populated using data gathered from a survey of 486 undergraduate students using the Moodle-based e-learning system at the Arab Open University. The model was estimated using Structural Equation Modelling (SEM). A path model was developed to analyze the relationships between the factors to explain students' adoption of the e-learning system. Whilst findings support existing literature about prior experience affecting perceptions, they also point to surprising group effects, which may merit future exploration.

Relevance: 100.00%

Abstract:

Technology intermediaries are seen as potent vehicles for addressing perennial problems in transferring technology from university to industry in developed and developing countries. This paper examines what constitutes effective user-end intermediation in a low-technology, developing economy context, which is an under-researched topic. The social learning in technological innovation framework is extended using situated learning theory in a longitudinal instrumental case study of an exemplar technology intermediation programme. The paper documents the role that academic-related research and advisory centres can play as intermediaries in brokering, facilitating and configuring technology, against the backdrop of a group of small-scale pisciculture businesses in a rural area of Colombia. In doing so, it demonstrates how technology intermediation activities can be optimized in the domestication and innofusion of technology amongst end-users. The design components featured in this instrumental case of intermediation can inform policy making and practice relating to technology transfer from university to rural industry. Future research on this subject should consider the intermediation components put forward, as well as the impact of such interventions, in different countries and industrial sectors. Such research would allow for theoretical replication and help improve technology domestication and innofusion in different contexts, especially in less-developed countries.