106 results for Learning Bayesian Networks


Relevance: 30.00%

Abstract:

Traffic congestion is one of the major problems in modern cities. This study applies machine learning methods to determine green times that minimize delay at an isolated intersection. Q-learning and neural networks are applied to set signal light times and minimize total delay. It is assumed that an intersection behaves like an intelligent agent that learns how to set green times in each cycle based on traffic information. A comparison between Q-learning and the neural network approach is presented. In Q-learning, handling continuous green times requires a very large state space, making the learning process practically impossible. In contrast, the neural network model can easily set an appropriate green time to fit the traffic demand. The performance of the proposed neural network is compared with two traditional alternatives for controlling traffic lights. Simulation results indicate that the proposed method greatly reduces the total delay in the network compared to the alternative methods.
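
As a rough illustration of the Q-learning side of this setup, here is a minimal sketch of tabular Q-learning over a small set of discretized green times; the state encoding, the reward (negative delay), and the `simulate_cycle` stand-in for a traffic simulator are assumptions for illustration, not the paper's implementation.

```python
import random
from collections import defaultdict

# Toy tabular Q-learning for green-time selection at a single intersection.
# State: discretized queue lengths on the approaches; action: a green-time bin.
GREEN_TIMES = [20, 30, 40, 50, 60]        # candidate green times (seconds)
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1     # learning rate, discount, exploration

Q = defaultdict(lambda: [0.0] * len(GREEN_TIMES))

def simulate_cycle(state, green_time):
    """Placeholder for a traffic simulator: returns (total_delay, next_state)."""
    delay = abs(green_time - 40) + 5 * sum(state)              # toy delay model
    next_state = tuple(random.randint(0, 3) for _ in state)    # new queue bins
    return delay, next_state

def choose_action(state):
    if random.random() < EPSILON:                              # explore
        return random.randrange(len(GREEN_TIMES))
    return max(range(len(GREEN_TIMES)), key=lambda a: Q[state][a])

def train(state=(0, 0), cycles=10_000):
    for _ in range(cycles):
        action = choose_action(state)
        delay, next_state = simulate_cycle(state, GREEN_TIMES[action])
        reward = -delay                    # minimizing delay = maximizing reward
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

train()
```

The abstract's point about continuous green times follows directly: without the discretization into a handful of bins, the Q-table above would have to cover an unmanageably large state-action space.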

Relevance: 30.00%

Abstract:

Artificial neural networks are an effective means of allowing software agents to learn about and filter aspects of their domain. In this paper we explore the use of artificial neural networks in the context of dance performance. The software agent's neural network is presented with movement in the form of motion capture streams, both pre-recorded and live. Learning can be viewed as analogous to rehearsal, recognition and response to performance. The interrelationship between the software agent and the dancer throughout the process is considered a potential means of allowing the agent to function beyond its limited self-contained capability.

Relevance: 30.00%

Abstract:

Developing an efficient and accurate hydrologic forecasting model is crucial to managing water resources and flooding issues. In this study, response surface (RS) models including multiple linear regression (MLR), quadratic response surface (QRS), and nonlinear response surface (NRS) were applied to daily runoff (e.g., discharge and water level) prediction. Two catchments, one in southeast China and the other in western Canada, were used to demonstrate the applicability of the proposed models. Their performances were compared with artificial neural network (ANN) models trained with the gradient descent with adaptive learning rate (ANN-GDA) and Levenberg-Marquardt (ANN-LM) algorithms. The performances of both RS and ANN models were also analyzed in relation to the lags used in the input data, the length of the training samples, long-term (monthly and yearly) predictions, and peak value predictions. The results indicate that the QRS and NRS obtained runoff-prediction performance comparable to ANN-GDA and ANN-LM while requiring lower computational effort. The RS models bring practical benefits to hydrologic forecasting, particularly short-term (e.g., hourly) flood forecasting, owing to their fast training, and could be considered an alternative to ANN models.
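
As a rough sketch of what a quadratic response surface (QRS) over lagged runoff inputs can look like, the following uses scikit-learn's polynomial expansion with ordinary least squares; the lag count, the synthetic discharge series, and the train/test split are illustrative assumptions, not the study's setup.

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

def make_lagged(series, n_lags=3):
    """Build (X, y): each row of X holds the previous n_lags observations."""
    X = np.array([series[i - n_lags:i] for i in range(n_lags, len(series))])
    y = np.array(series[n_lags:])
    return X, y

discharge = np.sin(np.linspace(0, 20, 500)) + 5.0     # placeholder for gauge data
X, y = make_lagged(discharge, n_lags=3)

# Degree-2 polynomial expansion of the lags gives the quadratic response surface;
# degree=1 would reduce to the multiple linear regression (MLR) variant.
qrs = make_pipeline(PolynomialFeatures(degree=2), LinearRegression())
qrs.fit(X[:400], y[:400])
print("held-out R^2:", qrs.score(X[400:], y[400:]))
```

The fast-training claim in the abstract corresponds to the fact that fitting such a model is a single linear least-squares solve, rather than the iterative optimization an ANN requires.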

Relevance: 30.00%

Abstract:

This study investigated how experienced teachers learned Information and Communication Technologies (ICT) during their professional development. With the introduction of ICT, experienced teachers encountered change and became virtually displaced persons: digital immigrants and new settlers endeavouring to obtain digital citizenship in order to survive in the information age. In the process, these teachers moved from learning how to push buttons, to applying software, and finally to changing their practice. They learned collectively and individually, in communities and networks, like immigrants and adult learners: by doing, experimenting and reflecting on ICT. Unfortunately, for these teachers-as-pedagogues, the focus on pedagogical theory during the action research they conducted was not fully investigated or embraced during the year-long study. The study used a qualitative participant-observation methodology to follow teachers in their university classroom. Interviews were conducted, and documentation was collected and verified by the teacher educator. The application of Kolb's, Gardner's and Vygotsky's work allowed these teachers to be observed within their sociocultural contexts. Kolb's work helped in understanding their learning processes, and Gardner's work indicated the learning abilities that these teachers valued in the new ICT environment. Meanwhile, Vygotsky's work, and in particular the three concepts uchit, perezhivanija and mislenija, presented a richer and more informed basis for understanding immigration and change. Finally, this research proposes that teachers learn ICT through what is termed a hyperuchit model, consisting of development, action, interaction and reflection. The recommendation is that future university ICT professional learning for teachers incorporate this hyperuchit model.

Relevance: 30.00%

Abstract:

Understanding human activities is an important research topic, most noticeably in assisted-living and healthcare monitoring environments. Beyond simple forms of activity (e.g., an RFID event of entering a building), learning latent activities that are more semantically interpretable, such as sitting at a desk, meeting with people, or gathering with friends, remains a challenging problem. Supervised learning has been the typical modeling choice in the past. However, it requires labeled training data, is unable to predict never-seen-before activities, and fails to adapt to the continuing growth of data over time. In this chapter, we explore the use of a Bayesian nonparametric method, in particular the hierarchical Dirichlet process, to infer latent activities from sensor data acquired in a pervasive setting. Our framework is unsupervised, requires no labeled data, and is able to discover new activities as data grows. We present experiments on extracting movement and interaction activities from sociometric badge signals and show how to use them for detecting subcommunities. Using the popular Reality Mining dataset, we further demonstrate the extraction of colocation activities and use them to automatically infer the structure of social subgroups.
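
A minimal sketch of the hierarchical Dirichlet process idea, assuming each time window of sensor readings is discretized into a "document" of tokens and using gensim's HdpModel; the windowing scheme and token names are hypothetical, and the chapter's actual inference pipeline may differ.

```python
from gensim.corpora import Dictionary
from gensim.models import HdpModel

# Each time window becomes a "document" of discretized sensor tokens
# (e.g., quantized badge readings); this windowing is an illustrative assumption.
windows = [
    ["accel_low", "speech_on", "proximity_A"],
    ["accel_low", "speech_on", "proximity_A", "proximity_B"],
    ["accel_high", "speech_off"],
    ["accel_high", "speech_off", "proximity_C"],
]

dictionary = Dictionary(windows)
corpus = [dictionary.doc2bow(w) for w in windows]

# The HDP infers the number of latent "activities" (topics) from the data
# rather than requiring it to be fixed in advance.
hdp = HdpModel(corpus, id2word=dictionary)
for topic_id, topic in hdp.print_topics(num_topics=5, num_words=4):
    print(topic_id, topic)
```

This is only meant to show the unsupervised, grow-with-the-data character of the approach; the badge and Reality Mining experiments in the chapter operate on far richer signals.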

Relevance: 30.00%

Abstract:

Using the prediction of cancer outcome as a model, we tested the hypothesis that, by analysing routinely collected digital data contained in an electronic administrative record (EAR) with machine-learning techniques, we could enhance conventional methods of predicting clinical outcomes.

Relevance: 30.00%

Abstract:

In less than a decade, architectural education has, in some ways, evolved significantly. It is not so much the advent of computation that triggered the change; rather, Social Networks (SN) have ignited a novel way of learning, interaction and knowledge construction. SN enable learners to engage with friends, tutors, professionals and peers, form a base of learning resources, allow students to make their voices heard and to listen to other views, and much more. They offer more authentic, inter-professional, integrated, problem-based, Just-in-Time (JIT) and Just-in-Place (JIP) learning. Online SN work in close association with offline SN to form a blended social learning realm, the Social Network Learning Cloud (SNLC), that enables and enhances students' learning far more powerfully than any other learning means, resources or methods. This paper presents a SNLC for architectural education that provides opportunities for linking academic Learning Management Systems (LMS) with private or professional SN in a way that enhances the learning experience and deepens students' knowledge. The paper proposes ways of utilising the SNLC in other learning and teaching areas of the curriculum and concludes with directions for how the SNLC may then be employed in professional settings.

Relevance: 30.00%

Abstract:

This paper aims at optimally adjusting a set of green times for traffic lights at a single intersection with the purpose of minimizing travel delay time and traffic congestion. A neural network (NN) and a fuzzy logic system (FLS) are the two methods applied to develop an intelligent traffic-timing controller. For this purpose, an intersection is considered and simulated as an intelligent agent that learns how to set green times in each cycle based on traffic information. The training approach and data for both learning methods are similar, and both use a genetic algorithm to tune their parameters during learning. Finally, the performance of the two intelligent learning methods is compared with that of a simple fixed-time method. Simulation results indicate that both intelligent methods significantly reduce the total delay in the network compared to the fixed-time method.
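
A minimal sketch of the genetic-algorithm tuning step, assuming the controller is summarized by a flat parameter vector (e.g., fuzzy membership breakpoints or small NN weights) and that `delay_for` stands in for a traffic simulation; all names and the toy objective are illustrative, not the paper's implementation.

```python
import random

N_PARAMS, POP_SIZE, GENERATIONS, MUT_STD = 8, 30, 50, 0.1

def delay_for(params):
    """Placeholder objective standing in for a simulated period's total delay."""
    return sum((g - 0.5) ** 2 for g in params)

def evolve():
    population = [[random.uniform(0, 1) for _ in range(N_PARAMS)]
                  for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        scored = sorted(population, key=delay_for)       # lowest delay first
        parents = scored[:POP_SIZE // 2]                 # truncation selection
        children = []
        while len(children) < POP_SIZE - len(parents):
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, N_PARAMS)          # one-point crossover
            child = [g + random.gauss(0, MUT_STD)        # Gaussian mutation
                     for g in a[:cut] + b[cut:]]
            children.append(child)
        population = parents + children
    return min(population, key=delay_for)

print(evolve())
```

In a real setup the fitness call would run the intersection simulation with the candidate controller and return the observed total delay.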

Relevance: 30.00%

Abstract:

Hidden patterns and contexts play an important part in intelligent pervasive systems. Most existing work has focused on simple forms of context derived directly from raw signals. High-level constructs and patterns have been largely neglected or remain under-explored in pervasive computing, mainly due to the growing complexity of data over time and the lack of efficient, principled methods to extract them. Traditional parametric modeling approaches from machine learning find it difficult to discover new, unseen patterns and contexts arising from the continuous growth of data streams because of their train-then-predict paradigm. In this work, we propose to apply Bayesian nonparametric models as a systematic and rigorous paradigm to continuously learn hidden patterns and contexts from raw social signals, providing basic building blocks for context-aware applications. Bayesian nonparametric models allow the model complexity to grow with the data, fitting naturally to several problems encountered in pervasive computing. Under this framework, we use nonparametric prior distributions to model the data-generative process, which helps to learn the number of latent patterns automatically, adapt to changes in the data, and discover never-seen-before patterns, contexts and activities. The proposed methods are agnostic to data type; here we demonstrate them on two types of signals: accelerometer activity data and Bluetooth proximity data.
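
One concrete, widely available stand-in for the "complexity grows with the data" idea is a truncated Dirichlet process Gaussian mixture, sketched below with scikit-learn's BayesianGaussianMixture on synthetic accelerometer-like features; the feature extraction and data are assumptions, and the paper's own models are richer than this.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

# Synthetic stand-in for windowed accelerometer features: two latent "contexts".
rng = np.random.default_rng(0)
features = np.vstack([
    rng.normal(loc=0.0, scale=0.3, size=(200, 3)),   # e.g., "stationary" windows
    rng.normal(loc=2.0, scale=0.3, size=(200, 3)),   # e.g., "walking" windows
])

# The truncation level (n_components) is only an upper bound; the Dirichlet
# process prior lets the model use as many components as the data support.
dpgmm = BayesianGaussianMixture(
    n_components=20,
    weight_concentration_prior_type="dirichlet_process",
    covariance_type="full",
    max_iter=500,
    random_state=0,
).fit(features)

labels = dpgmm.predict(features)
print("effective clusters:", len(np.unique(labels)))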

Relevance: 30.00%

Abstract:

As pharmaceutical firms try to market their products and reduce costs, vertically integrated structures hamper innovation processes. Yet pharmaceutical firms must innovate to compete. Outsourcing knowledge-intensive activities to knowledge process organizations (KPOs) serves to reduce innovation process obstacles. Grounded in diffusion theory and the strategic management literature, this conceptual paper explores four interrelated strategic concepts: core competencies, economies of scale and scope, knowledge sharing, and learning. The paper claims that (a) the accumulated core competencies of multinational pharmaceutical companies (MPCs) erode over time and these companies become dependent on KPOs; (b) MPCs must understand how KPOs manage core competencies; (c) economies of scope benefit KPOs, enabling them to sustain competitive advantages for their MPC partners, while the benefits from economies of both scale and scope shift from MPCs to KPOs; and (d) KPOs need to monitor their rate of learning to remain competitive. The paper identifies implications for industrial managers and directions for future research.

Relevance: 30.00%

Abstract:

We present a Bayesian nonparametric framework for multilevel clustering which utilizes group-level context information to simultaneously discover low-dimensional structures of the group contents and partition groups into clusters. Using the Dirichlet process as the building block, our model constructs a product base measure with a nested structure to accommodate content and context observations at multiple levels. The proposed model possesses properties that link the nested Dirichlet process (nDP) and the Dirichlet process mixture model (DPM) in an interesting way: integrating out all contents results in the DPM over contexts, whereas integrating out group-specific contexts results in the nDP mixture over content variables. We provide a Pólya-urn view of the model and an efficient collapsed Gibbs inference procedure. Extensive experiments on real-world datasets demonstrate the advantage of utilizing context information via our model in both the text and image domains.
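
The Dirichlet process building block can be illustrated through its Pólya-urn (Chinese restaurant process) view, which the following toy sketch samples from; this is background intuition only, not the paper's nested construction or its collapsed Gibbs sampler.

```python
import random

def crp(n_customers, alpha):
    """Chinese restaurant process: customer n joins an existing cluster with
    probability proportional to its size, or opens a new one with probability
    proportional to the concentration parameter alpha."""
    assignments, counts = [], []
    for _ in range(n_customers):
        r = random.uniform(0, sum(counts) + alpha)
        cumulative, table = 0.0, len(counts)      # default: open a new table
        for k, c in enumerate(counts):
            cumulative += c
            if r < cumulative:
                table = k
                break
        if table == len(counts):
            counts.append(0)
        counts[table] += 1
        assignments.append(table)
    return assignments

print(crp(100, alpha=1.0))
```

The nested and product-base-measure constructions in the paper compose this basic clustering mechanism at both the context and content levels.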

Relevance: 30.00%

Abstract:

Autism Spectrum Disorder (ASD) is growing at a staggering rate, but little is known about the cause of this condition. Inferring learning patterns from therapeutic performance data, and subsequently clustering ASD children into subgroups, is important for understanding this domain and, more importantly, for informing evidence-based intervention. However, this data-driven task was difficult in the past due to the insufficiency of data for reliable analysis. For the first time, using data from a recent application for early intervention in autism (TOBY Play pad), whose download count now exceeds 4,500, we present in this paper the automatic discovery of learning patterns across 32 skills in sensory, imitation and language. We use unsupervised learning methods for this task, but a notorious problem with existing methods is the need to specify the number of patterns in advance, which in our case is even more difficult due to the complexity of the data. To this end, we appeal to recent Bayesian nonparametric methods, in particular Bayesian nonparametric factor analysis. This model uses the Indian Buffet Process (IBP) as a prior on a binary matrix with infinitely many columns to allocate groups of intervention skills to children. The optimal number of learning patterns as well as the subgroup assignments are inferred automatically from the data. Our experimental results follow an exploratory approach, presenting newly discovered learning patterns. To provide quantitative results, we also report a clustering evaluation against K-means and nonnegative matrix factorization (NMF). In addition to the novelty of this new problem, we demonstrate the suitability of Bayesian nonparametric models over their parametric rivals.
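
A toy draw from the Indian Buffet Process prior, sketched below, shows the kind of binary child-by-pattern matrix the model places a prior over, with an unbounded number of columns; the sampler and parameter values are illustrative and unrelated to the TOBY data.

```python
import numpy as np

def sample_ibp(n_rows, alpha, seed=0):
    """Draw a binary matrix from the IBP prior: rows are 'customers' (children),
    columns are latent features (learning patterns); the number of columns is
    unbounded and is determined by the draw itself."""
    rng = np.random.default_rng(seed)
    columns = []                                  # one list of 0/1 picks per feature
    for i in range(n_rows):
        for col in columns:                       # revisit existing features
            p = sum(col) / (i + 1)                # popularity among earlier rows
            col.append(int(rng.random() < p))
        for _ in range(rng.poisson(alpha / (i + 1))):
            columns.append([0] * i + [1])         # brand-new features for this row
    if not columns:
        return np.zeros((n_rows, 0), dtype=int)
    return np.array(columns, dtype=int).T         # rows x features binary matrix

print(sample_ibp(10, alpha=2.0))
```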

Relevance: 30.00%

Abstract:

Electronic medical records (EMRs) offer promise for novel analytics. However, manual feature engineering from EMR data is labor-intensive because the data are complex: they contain temporal, mixed-type and multimodal information packed into irregular episodes. We present a computational framework to harness EMRs with minimal human supervision via a restricted Boltzmann machine (RBM). The framework derives a new representation of medical objects by embedding them in a low-dimensional vector space. This new representation facilitates algebraic and statistical manipulations such as projection onto a 2D plane (thereby offering intuitive visualization), object grouping (enabling automated phenotyping), and risk stratification. To enhance model interpretability, we introduce two constraints on the model parameters: (a) nonnegative coefficients and (b) structural smoothness. These result in a novel model called eNRBM (EMR-driven nonnegative RBM). We demonstrate the capability of the eNRBM on a cohort of 7,578 mental health patients under suicide risk assessment. The derived representation not only shows clinically meaningful feature grouping but also facilitates short-term risk stratification. The F-scores, 0.21 for moderate risk and 0.36 for high risk, are significantly higher than those obtained by clinicians and competitive with results obtained by support vector machines.
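
For intuition, the sketch below embeds binarized EMR-like vectors with scikit-learn's plain BernoulliRBM; the eNRBM's defining constraints (nonnegative coefficients and structural smoothness) are not implemented here, and the synthetic data are an assumption.

```python
import numpy as np
from sklearn.neural_network import BernoulliRBM

# Stand-in for 500 patients x 120 binarized EMR features (diagnoses, procedures, ...).
rng = np.random.default_rng(0)
emr = (rng.random((500, 120)) < 0.1).astype(float)

rbm = BernoulliRBM(n_components=20, learning_rate=0.05, n_iter=30, random_state=0)
rbm.fit(emr)

# Hidden-unit activations give the low-dimensional representation used for
# visualization, grouping (phenotyping), and downstream risk stratification.
embedding = rbm.transform(emr)
print(embedding.shape)    # (500, 20)
```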

Relevance: 30.00%

Abstract:

Treatments of cancer cause severe side effects called toxicities, and reducing such effects is crucial in cancer care. To impact care, we need to predict toxicities at fortnightly intervals. This toxicity data differs from traditional time-series data in that a toxicity can be caused by a single treatment on a given day, so the effect of the individual data vector causing the toxicity must be considered. We model the data before each prediction point using multiple instance learning, where each bag is composed of multiple instances associated with daily treatments and patient-specific attributes, such as chemotherapy, radiotherapy, age and cancer type. We then formulate a Bayesian multi-task framework to enhance toxicity prediction at each prediction point. The use of the prior allows factors to be shared across the task predictors. Our proposed method simultaneously captures the heterogeneity of daily treatments and performs toxicity prediction at different prediction points. The method was evaluated on a real-world dataset of more than 2,000 cancer patients and achieved better prediction accuracy in terms of AUC than state-of-the-art baselines.
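
A minimal sketch of the multiple-instance view, assuming each prediction point's bag holds one instance per treatment day concatenated with patient attributes, and using max-pooling over linear instance scores as a stand-in for the paper's Bayesian multi-task model; all feature layouts and weights are illustrative.

```python
import numpy as np

def make_bag(daily_treatments, patient_attrs):
    """One instance per treatment day: [treatment features | patient attributes]."""
    return np.array([np.concatenate([day, patient_attrs]) for day in daily_treatments])

def bag_score(bag, weights):
    """Score each instance linearly and pool with max: one toxic day can dominate
    the bag, matching the single-treatment-causes-toxicity intuition."""
    return float(np.max(bag @ weights))

patient_attrs = np.array([0.64, 1.0, 0.0])          # e.g., scaled age, cancer-type flags
daily_treatments = [np.array([1.0, 0.0]),           # e.g., [chemo dose, radiotherapy dose]
                    np.array([0.0, 2.0])]
bag = make_bag(daily_treatments, patient_attrs)
weights = np.ones(bag.shape[1])                     # placeholder weights, not learned here
print(bag_score(bag, weights))
```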