899 results for Multiple kernel learning


Relevance: 30.00%

Abstract:

Radioactive soil-contamination mapping and risk assessment is a vital issue for decision makers. Traditional approaches for mapping the spatial concentration of radionuclides employ various regression-based models, which usually provide a single-value prediction realization accompanied (in some cases) by estimation error. Such approaches do not provide the capability for rigorous uncertainty quantification or probabilistic mapping. Machine learning is a recent and fast-developing approach based on learning patterns and information from data. Artificial neural networks for prediction mapping have been especially powerful in combination with spatial statistics. A data-driven approach provides the opportunity to integrate additional relevant information about spatial phenomena into a prediction model for more accurate spatial estimates and associated uncertainty. Machine-learning algorithms can also be used for a wider spectrum of problems than before: classification, probability density estimation, and so forth. Stochastic simulations are used to model spatial variability and uncertainty. Unlike regression models, they provide multiple realizations of a particular spatial pattern that allow uncertainty and risk quantification. This paper reviews the most recent methods of spatial data analysis, prediction, and risk mapping, based on machine learning and stochastic simulations in comparison with more traditional regression models. The radioactive fallout from the Chernobyl Nuclear Power Plant accident is used to illustrate the application of the models for prediction and classification problems. This fallout is a unique case study that provides the challenging task of analyzing huge amounts of data ('hard' direct measurements, as well as supplementary information and expert estimates) and solving particular decision-oriented problems.
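The contrast drawn above between a single regression estimate and stochastic simulation can be sketched in a few lines. The grid, the exponential covariance model, and the contamination threshold below are all invented for illustration; the point is only that an ensemble of equally probable realizations yields a risk (exceedance-probability) map, which a single-value prediction cannot.

```python
# Illustrative sketch (not the paper's method): multiple stochastic
# realizations of a spatial field enable probabilistic risk mapping.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0.0, 10.0, 50)                      # 1-D "map" locations

# Exponential covariance model with an assumed correlation range of 2.0
C = np.exp(-np.abs(x[:, None] - x[None, :]) / 2.0)
L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))  # jitter for stability

# Generate many equally probable realizations of the spatial field
n_real = 500
fields = (L @ rng.standard_normal((len(x), n_real))).T  # (n_real, n_pts)

# A regression-style answer would stop at the mean map; the ensemble
# additionally gives, at each location, the probability of exceeding
# an (assumed) contamination threshold of 1.0.
mean_map = fields.mean(axis=0)
risk_map = (fields > 1.0).mean(axis=0)              # P[value > threshold]
```

Decision makers can then contour `risk_map` directly, e.g. delineating zones where the exceedance probability passes a regulatory level, rather than relying on a single predicted concentration.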

Relevance: 30.00%

Abstract:

We assessed decision-making capacity and emotional reactivity in 20 patients with multiple sclerosis (MS) and in 16 healthy subjects using the Gambling Task (GT), a model of real-life decision making, and the skin conductance response (SCR). Demographic, neurological, affective, and cognitive parameters were analyzed in MS patients for their effect on decision-making performance. MS patients persisted longer (slope, -3.6%) than the comparison group (slope, -6.4%) in making disadvantageous choices as the GT progressed (p < 0.001), suggesting significantly slower learning in MS. Patients with higher Expanded Disability Status Scale scores (EDSS >2.0) showed a different pattern of impairment in the learning process compared with patients with lower functional impairment (EDSS ≤2.0). This slower learning was associated with impaired emotional reactivity (anticipatory SCR 3.9 vs 6.1 microSiemens [microS] for patients vs the comparison group, p < 0.0001; post-choice SCR 3.9 vs 6.2 microS, p < 0.0001), but not with executive dysfunction. Impaired emotional dimensions of behavior (assessed using the Dysexecutive Questionnaire, p < 0.002) also correlated with slower learning. Given the considerable consequences that impaired decision making can have on daily life, we suggest that this factor may contribute to handicap and altered quality of life secondary to MS and is dependent on emotional experience. Ann Neurol 2004.

Relevance: 30.00%

Abstract:

Avalanche forecasting is a complex process involving the assimilation of multiple data sources to make predictions over varying spatial and temporal resolutions. Numerically assisted forecasting often uses nearest neighbour methods (NN), which are known to have limitations when dealing with high dimensional data. We apply Support Vector Machines to a dataset from Lochaber, Scotland to assess their applicability in avalanche forecasting. Support Vector Machines (SVMs) belong to a family of theoretically based techniques from machine learning and are designed to deal with high dimensional data. Initial experiments showed that SVMs gave results which were comparable with NN for categorical and probabilistic forecasts. Experiments utilising the ability of SVMs to deal with high dimensionality in producing a spatial forecast show promise, but require further work.
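The SVM-versus-NN comparison described above can be illustrated on synthetic data. This is not the Lochaber dataset or the authors' pipeline: the data, dimensionality, and hyperparameters are invented, and for self-containment the SVM is trained with a simple Pegasos-style subgradient method rather than a library solver.

```python
# Sketch: margin-based linear SVM vs. 1-nearest-neighbour on synthetic
# high-dimensional data (all quantities assumed for illustration).
import numpy as np

rng = np.random.default_rng(1)
d, n = 40, 300                                   # high-dimensional setting
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = np.sign(X @ w_true)                          # labels in {-1, +1}

X_tr, y_tr, X_te, y_te = X[:200], y[:200], X[200:], y[200:]

def train_linear_svm(X, y, lam=0.01, epochs=200):
    """Minimise the regularised hinge loss by subgradient descent (Pegasos)."""
    w = np.zeros(X.shape[1])
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(len(y)):
            t += 1
            eta = 1.0 / (lam * t)
            if y[i] * (X[i] @ w) < 1.0:          # margin violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
            else:
                w = (1 - eta * lam) * w
    return w

def nn_predict(X_tr, y_tr, X_te):
    """1-nearest-neighbour prediction (the NN baseline)."""
    d2 = ((X_te[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    return y_tr[d2.argmin(axis=1)]

w = train_linear_svm(X_tr, y_tr)
svm_acc = (np.sign(X_te @ w) == y_te).mean()
nn_acc = (nn_predict(X_tr, y_tr, X_te) == y_te).mean()
```

With 40 input dimensions, the distance computations underlying NN become less discriminative, while the margin-based classifier still generalises well, which is the behaviour the abstract attributes to SVMs on high-dimensional forecasting data.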

Relevance: 30.00%

Abstract:

We show how nonlinear embedding algorithms popular for use with shallow semi-supervised learning techniques such as kernel methods can be applied to deep multilayer architectures, either as a regularizer at the output layer, or on each layer of the architecture. This provides a simple alternative to existing approaches to deep learning whilst yielding competitive error rates compared to those methods, and existing shallow semi-supervised techniques.
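A minimal sketch of the regularizer idea: alongside the supervised loss, penalise a layer's outputs with a neighbour-embedding term sum_ij W_ij ||f(x_i) - f(x_j)||^2, which equals the Laplacian quadratic form trace(H^T L H). The toy network, data, and adjacency graph below are assumptions for illustration, not the paper's architecture.

```python
# Graph-embedding penalty on a hidden layer: neighbouring inputs should
# map to nearby representations. All data and weights are toy values.
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((8, 5))                   # 8 points, 5 features

A = (rng.random((8, 8)) < 0.3).astype(float)      # random neighbour graph
W_adj = np.triu(A, 1)
W_adj = W_adj + W_adj.T                           # symmetric, no self-loops

W1 = rng.standard_normal((5, 4)) * 0.1            # toy layer weights
H = np.tanh(X @ W1)                               # hidden representations f(x_i)

def embedding_penalty(H, W_adj):
    """0.5 * sum_ij W_ij ||h_i - h_j||^2 : neighbours embed close together."""
    diff2 = ((H[:, None, :] - H[None, :, :]) ** 2).sum(-1)
    return 0.5 * (W_adj * diff2).sum()

# Equivalent Laplacian form: trace(H^T L H) with L = D - W
L = np.diag(W_adj.sum(1)) - W_adj
lap_form = np.trace(H.T @ L @ H)
```

In the deep setting described above, a term like `embedding_penalty` would simply be added (with a weight) to the supervised loss, either at the output layer only or once per layer.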

Relevance: 30.00%

Abstract:

Phasic activation of dopaminergic neurons is associated with reward-predicting cues and supports learning during behavioral adaptation. While noncontingent activation of dopaminergic neurons in the ventral tegmental area (VTA) is sufficient for passive behavioral conditioning, it remains unknown whether the phasic dopaminergic signal is truly reinforcing. In this study, we first targeted the expression of channelrhodopsin-2 to dopaminergic neurons of the VTA and optimized optogenetically evoked dopamine transients. Second, we showed that phasic activation of dopaminergic neurons in freely moving mice causally enhances positive reinforcing actions in a food-seeking operant task. Interestingly, such an effect was not found in the absence of food reward. We further found that phasic activation of dopaminergic neurons is sufficient to reactivate previously extinguished food-seeking behavior in the absence of external cues. This was also confirmed using a single-session reversal paradigm. Collectively, these data suggest that activation of dopaminergic neurons facilitates the development of positive reinforcement during reward-seeking and behavioral flexibility.

Relevance: 30.00%

Abstract:

This questionnaire aims to evaluate your experience of taking part in the project you are carrying out at the university. The questionnaire is anonymous and will not take more than 10 minutes of your time to complete. We would appreciate your honest opinion, in order that the data we gather here can be as useful as possible for improving the project.


Relevance: 30.00%

Abstract:

Our consumption of groundwater, in particular as drinking water and for irrigation, has considerably increased over the years, and groundwater is becoming an increasingly scarce and endangered resource. Nowadays we face many problems, ranging from the prospection of new water resources to the sustainable management and remediation of polluted aquifers. Independently of the hydrogeological problem considered, the main challenge remains dealing with the incomplete knowledge of the underground properties. Stochastic approaches have been developed to represent this uncertainty by considering multiple geological scenarios and generating a large number of geostatistical realizations. The main limitation of these approaches is the computational cost of simulating complex flow processes in each realization. In the first part of the thesis, we explore this issue in the context of uncertainty propagation, where an ensemble of geostatistical realizations is identified as representative of the subsurface uncertainty. To propagate this lack of knowledge to the quantity of interest (e.g., the concentration of pollutant in extracted water), it is necessary to evaluate the flow response of each realization. Due to computational constraints, state-of-the-art methods make use of approximate flow simulation to identify a subset of realizations that represents the variability of the ensemble. The complex and computationally heavy flow model is then run only for this subset, on the basis of which inference is made. Our objective is to increase the performance of this approach by using all of the available information, not solely the subset of exact responses. Two error models are proposed to correct the approximate responses following a machine-learning approach. For the subset identified by a classical approach (here, the distance kernel method), both the approximate and the exact responses are known. This information is used to construct an error model and correct the ensemble of approximate responses to predict the "expected" responses of the exact model. The proposed methodology makes use of all the available information without perceptible additional computational cost and leads to an increase in the accuracy and robustness of the uncertainty propagation.
The strategy explored in the first chapter consists in learning, from a subset of realizations, the relationship between proxy and exact curves. In the second part of the thesis, this strategy is formalized in a rigorous mathematical framework by defining a regression model between functions. As this problem is ill-posed, it is necessary to reduce its dimensionality. The novelty of the work comes from the use of functional principal component analysis (FPCA), which not only performs the dimensionality reduction while maximizing the retained information, but also allows a diagnostic of the quality of the error model in the functional space. The proposed methodology is applied to a pollution problem involving a non-aqueous phase liquid. The error model allows a strong reduction of the computational cost while providing a good estimate of the uncertainty. Moreover, the individual correction of each proxy response by the error model leads to an excellent prediction of the exact response, opening the door to many applications. The concept of a functional error model is thus useful not only for uncertainty propagation but also, and perhaps even more so, for Bayesian inference. Markov chain Monte Carlo (MCMC) algorithms are the most common choice for generating geostatistical realizations sampled in accordance with the observations. However, this approach suffers from a very low acceptance rate in high-dimensional problems, resulting in a large number of wasted flow simulations. This led to the introduction of two-stage MCMC, in which the computational cost is decreased by avoiding unnecessary simulations of the exact flow model thanks to a preliminary evaluation of each proposal. In the third part of the thesis, the approximate flow model coupled with an error model provides this preliminary evaluation for the two-stage MCMC set-up. We demonstrate an increase in acceptance rate by a factor of 1.5 to 3 with respect to a classical one-stage MCMC implementation. An open question remains: how to choose the size of the learning set and how to identify the realizations that optimize the construction of the error model. This requires an iterative strategy such that, as each new flow simulation is performed, the error model is improved by incorporating the new information. This is developed in the fourth part of the thesis, in which the methodology is applied to a problem of saline intrusion in a coastal aquifer.
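The core mechanic of the functional error model can be sketched numerically: reduce both the proxy and exact response curves with principal components, fit a regression between the score vectors on a small training subset, and use it to correct the remaining proxy curves. The "exact" and "proxy" solvers below are synthetic one-parameter stand-ins, not the thesis's flow models, and the PCA/regression details are assumptions for illustration.

```python
# Hedged sketch of a functional error model: learn a map from proxy-curve
# PCA scores to exact-curve PCA scores on a training subset, then correct
# the held-out proxy curves. All models and parameters are synthetic.
import numpy as np

rng = np.random.default_rng(2)
t = np.linspace(0.0, 1.0, 60)                  # "time" axis of the curves
n, n_train = 100, 20

params = rng.uniform(0.5, 2.0, n)              # one parameter per realization
exact = np.array([np.exp(-p * t) for p in params])                # exact model
proxy = np.array([np.exp(-1.15 * p * t) + 0.02 for p in params])  # biased proxy

def pca(Y, k):
    """Mean, leading components, and scores of a family of curves."""
    mu = Y.mean(0)
    U, S, Vt = np.linalg.svd(Y - mu, full_matrices=False)
    return mu, Vt[:k], (Y - mu) @ Vt[:k].T

mu_p, V_p, s_p = pca(proxy, 3)
mu_e, V_e, s_e = pca(exact, 3)

# Linear regression from proxy scores to exact scores (training subset only)
A, *_ = np.linalg.lstsq(s_p[:n_train], s_e[:n_train], rcond=None)

# Correct the remaining proxy curves and compare against the exact ones
pred = mu_e + (s_p[n_train:] @ A) @ V_e
rmse_proxy = np.sqrt(((proxy[n_train:] - exact[n_train:]) ** 2).mean())
rmse_corrected = np.sqrt(((pred - exact[n_train:]) ** 2).mean())
```

Only `n_train` "exact" evaluations are spent; every other realization gets a corrected prediction essentially for free, which is the cost argument made in the abstract. The same corrected response could then serve as the cheap first-stage screen in a two-stage MCMC.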

Relevance: 30.00%

Abstract:

BACKGROUND: The structure and organisation of ecological interactions within an ecosystem is modified by the evolution and coevolution of the individual species it contains. Understanding how historical conditions have shaped this architecture is vital for understanding system responses to change at scales from the microbial upwards. However, in the absence of a group selection process, the collective behaviours and ecosystem functions exhibited by the whole community cannot be organised or adapted in a Darwinian sense. A long-standing open question thus persists: Are there alternative organising principles that enable us to understand and predict how the coevolution of the component species creates and maintains complex collective behaviours exhibited by the ecosystem as a whole? RESULTS: Here we answer this question by incorporating principles from connectionist learning, a previously unrelated discipline already using well-developed theories on how emergent behaviours arise in simple networks. Specifically, we show conditions where natural selection on ecological interactions is functionally equivalent to a simple type of connectionist learning, 'unsupervised learning', well known in neural-network models of cognitive systems to produce many non-trivial collective behaviours. Accordingly, we find that a community can self-organise in a well-defined and non-trivial sense without selection at the community level; its organisation can be conditioned by past experience in the same sense as connectionist learning models habituate to stimuli. This conditioning drives the community to form a distributed ecological memory of multiple past states, causing the community to: a) converge to these states from any random initial composition; b) accurately restore historical compositions from small fragments; c) recover a state composition following disturbance; and d) correctly classify ambiguous initial compositions according to their similarity to learned compositions.
We examine how the formation of alternative stable states alters the community's response to changing environmental forcing, and we identify conditions under which the ecosystem exhibits hysteresis with potential for catastrophic regime shifts. CONCLUSIONS: This work highlights the potential of connectionist theory to expand our understanding of evo-eco dynamics and collective ecological behaviours. Within this framework we find that, despite not being a Darwinian unit, ecological communities can behave like connectionist learning systems, creating internal conditions that habituate to past environmental conditions and actively recalling those conditions. REVIEWERS: This article was reviewed by Prof. Ricard V Solé, Universitat Pompeu Fabra, Barcelona and Prof. Rob Knight, University of Colorado, Boulder.
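The memory behaviours listed above (convergence to stored states, restoration from fragments) are the classic behaviours of a Hopfield-style network with Hebbian, i.e. unsupervised, storage. The toy sketch below is only an analogy to the paper's ecological model: "species states" are ±1 values, the stored "community compositions" are random, and the update rule is the standard sign rule.

```python
# Hopfield-style distributed memory: store several "past compositions"
# with a Hebbian rule, then restore one from a corrupted fragment.
# All patterns and sizes are toy assumptions.
import numpy as np

rng = np.random.default_rng(3)
n_species = 64
patterns = rng.choice([-1, 1], size=(3, n_species))   # 3 stored past states

# Hebbian (unsupervised) storage: W = sum_mu p_mu p_mu^T, zero diagonal
W = patterns.T @ patterns
np.fill_diagonal(W, 0)

def recall(state, steps=10):
    """Iterate the sign update rule toward a stored composition."""
    s = state.copy()
    for _ in range(steps):
        s = np.sign(W @ s)
        s[s == 0] = 1                                 # break ties consistently
    return s

# Corrupt ~15% of one stored composition and recover the whole state
probe = patterns[0].copy()
flip = rng.choice(n_species, size=9, replace=False)
probe[flip] *= -1
restored = recall(probe)
overlap = (restored == patterns[0]).mean()
```

At this low storage load (3 patterns in 64 units), recall from a partial or disturbed state is reliable, mirroring behaviours (a)-(c) in the abstract; classification of ambiguous inputs (d) corresponds to the probe falling into the basin of the most similar stored pattern.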

Relevance: 30.00%

Abstract:

The thesis deals with the phenomenon of learning between organizations in innovation networks that develop new products, services or processes. Inter-organizational learning is studied especially at the level of the network. The role of the network can be seen as twofold: either the network is a context for inter-organizational learning, if the learner is something other than the network (an organization, group, or individual), or the network itself is the learner. Innovations are regarded as a primary source of competitiveness and renewal in organizations. Networking has become increasingly common, particularly because of the possibility to extend the resource base of the organization through partnerships and to concentrate on core competencies. Especially in innovation activities, networks provide the possibility to answer the complex needs of customers faster and to share the costs and risks of the development work. Networked innovation activities are often organized in practice as distributed virtual teams, either within one organization or as cross-organizational cooperation. The role of technology is considered in the research mainly as an enabling tool for collaboration and learning. Learning has been recognized as an important collaborative process in networks and as a motivation for networking. It is even more important in the innovation context as an enabler of renewal, since the essence of the innovation process is creating new knowledge, processes, products and services. The thesis aims at providing enhanced understanding of the inter-organizational learning phenomenon in and by innovation networks, especially concentrating on the network level. The perspectives used in the research are the theoretical viewpoints and concepts, challenges, and solutions for learning. The methods used in the study are literature reviews and empirical research carried out with semi-structured interviews analyzed with qualitative content analysis. The empirical research concentrates on two different areas: firstly, the theoretical approaches to learning that are relevant to innovation networks; secondly, learning in virtual innovation teams. As a result, the research identifies insights and implications for learning in innovation networks from several viewpoints on organizational learning. Using multiple perspectives allows drawing a many-sided picture of the learning phenomenon, which is valuable because of the versatility and complexity of the situations and challenges of learning in the context of innovation and networks. The research results also show some of the challenges of learning and possible solutions for supporting especially network-level learning.

Relevance: 30.00%

Abstract:

The skill of programming is a key asset for every computer science student. Many studies have shown that this is a hard skill to learn, and the outcomes of programming courses have often been substandard. Thus, a range of methods and tools have been developed to assist students' learning processes. One of the biggest fields in computer science education is the use of visualizations as a learning aid, and many visualization-based tools have been developed to aid the learning process during the last few decades. The studies conducted in this thesis focus on two visualization-based tools, TRAKLA2 and ViLLE. This thesis includes results from multiple empirical studies about what effects the introduction and usage of these tools have on students' opinions and performance, and what the implications are from a teacher's point of view. The results show that students preferred to do web-based exercises and felt that those exercises contributed to their learning. The usage of the tool motivated students to work harder during their course, which was reflected in overall course performance and drop-out statistics. We have also shown that visualization-based tools can be used to enhance the learning process, and one of the key factors is a higher and more active level of engagement (see the Engagement Taxonomy by Naps et al., 2002). Automatic grading accompanied by immediate feedback helps students to overcome obstacles during the learning process and to grasp the key element of the learning task. Such tools can help us cope with the fact that many programming courses are overcrowded and have limited teaching resources: they let us tackle this problem by utilizing automatic assessment in exercises that are most suitable for the web (such as tracing and simulation), since this supports students' independent learning regardless of time and place. In summary, we can use a course's resources more efficiently to increase the quality of the learning experience of the students and the teaching experience of the teacher, and even increase the performance of the students. There are also methodological results from this thesis which contribute to developing insight into the conduct of empirical evaluations of new tools or techniques. When we evaluate a new tool, especially one accompanied by visualization, we need to give a proper introduction to it and to the graphical notation used by the tool. The standard procedure should also include capturing the screen with audio to confirm that the participants of the experiment are doing what they are supposed to do. By taking such measures when studying the learning impact of visualization support, we can avoid drawing false conclusions from our experiments. As computer science educators, we face two important challenges. Firstly, we need to start delivering the message, in our own institutions and all over the world, about new, scientifically proven innovations in teaching such as TRAKLA2 and ViLLE. Secondly, we have relevant experience of conducting teaching-related experiments, and thus we can support our colleagues in learning the essential know-how of research-based improvement of their teaching. This work can transform academic teaching into publications, and by utilizing this approach we can significantly increase the adoption of new tools and techniques and improve the overall knowledge of best practices. In the future, we need to combine our forces and tackle these universal and common problems together by creating multi-national and multi-institutional research projects. We need to create a community and a platform in which we can share these best practices and at the same time conduct multi-national research projects easily.

Relevance: 30.00%

Abstract:

BACKGROUND: E-learning techniques are spreading at great speed in medicine, raising concerns about the impact of adopting them. Websites especially designed to host courses are becoming more common, yet there is a lack of evidence that these systems enhance student knowledge acquisition. GOAL: To evaluate the impact of using dedicated-website tools on the cognition of medical students exposed to a first-aid course. METHODS: Prospective study of 184 medical students exposed to a twenty-hour first-aid course. We generated a dedicated website with several sections (lectures, additional reading material, video, and multiple-choice exercises). We constructed variables expressing each student's access to each section. The evaluation was composed of fifty multiple-choice tests based on clinical problems. We used multiple linear regression to adjust for potential confounders. RESULTS: There was no association between intensity of website exposure and the outcome: beta-coefficient 0.27 (95% CI −0.454 to 1.004). These findings were not altered after adjustment for potential confounders: 0.165 (95% CI −0.628 to 0.960). CONCLUSION: A dedicated website with passive and active capabilities for aiding in-person learning was not associated with a better outcome.
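The adjustment step reported in METHODS can be illustrated with a toy example. The data below are entirely synthetic (a confounder such as prior knowledge that drives both website use and test score, with a true exposure effect of zero); the point is how including the confounder as a regressor changes the exposure coefficient.

```python
# Toy multiple linear regression: crude vs. confounder-adjusted coefficient
# for an exposure with no true effect. All data are simulated assumptions.
import numpy as np

rng = np.random.default_rng(4)
n = 500
confounder = rng.standard_normal(n)                    # e.g. prior knowledge
exposure = 0.8 * confounder + rng.standard_normal(n)   # website use
score = 2.0 * confounder + rng.standard_normal(n)      # true exposure effect = 0

# Unadjusted (crude) slope picks up the confounding
b_crude = np.polyfit(exposure, score, 1)[0]

# Adjusted slope from the design matrix X = [1, exposure, confounder]
X = np.column_stack([np.ones(n), exposure, confounder])
beta, *_ = np.linalg.lstsq(X, score, rcond=None)
b_adjusted = beta[1]
```

Here `b_crude` is strongly positive purely through the confounder, while `b_adjusted` is near zero, which parallels the study's finding that the (already null) association was unchanged after adjustment.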

Relevance: 30.00%

Abstract:

The dissertation seeks to explore how to improve users' adoption of mobile learning in current education systems. Considering the difference between basic and tertiary education in China, the research consists of two separate but interrelated parts, which focus on the use of mobile learning in basic and tertiary education contexts, respectively. In the dissertation, two adoption frameworks are developed based on previous studies, and the frameworks are then evaluated using different methods. Concerning mobile learning use in basic education settings, case study methodology is utilized: a leading provider of mobile learning services and products in China, Noah Ltd., is investigated, and multiple sources of evidence are collected to test the framework. Regarding mobile learning adoption in tertiary education contexts, survey research methodology is utilized: based on 209 useful responses, the framework is evaluated using structural equation modelling. Four proposed determinants of intention to use are evaluated: perceived ease of use, perceived near-term usefulness, perceived long-term usefulness, and personal innovativeness. The dissertation provides a number of new insights for both researchers and practitioners. In particular, it specifies a practical solution for dealing with the disruptive effects of mobile learning in basic education, effects which have kept mobile learning out of schools in, for example, European countries. A list of new and innovative mobile learning technologies is systematically introduced as well. Further, the research identifies several key factors driving mobile learning adoption in tertiary education settings. In theory, the dissertation suggests that since the technology acceptance model originated in work-oriented innovations tested on employees, it is not necessarily the best model for studying educational innovations. The results also suggest that perceived long-term usefulness for educational systems should be as important as perceived usefulness for utilitarian systems and perceived enjoyment for hedonic systems. A classification based on the nature of a system's purpose (utilitarian, hedonic, or educational) would contribute to a better understanding of the essence of IT innovation adoption.