881 results for Learning Bayesian Networks
Abstract:
Managing programming exercises requires several heterogeneous systems, such as evaluation engines, learning objects repositories and exercise resolution environments. The coordination of networks of such disparate systems is rather complex. These tools would be too specific to incorporate in an e-Learning platform, and even if they could be provided as pluggable components, the burden of maintaining them would be prohibitive for institutions with few courses in those domains. This work presents a standards-based approach for the coordination of a network of e-Learning systems participating in the automatic evaluation of programming exercises. The proposed approach uses a pivot component to orchestrate the interaction among all the systems using communication standards. This approach was validated through its effective use in the classroom, and we present some preliminary results.
Abstract:
Sensor/actuator networks promise to extend automated monitoring and control into industrial processes. Avionics is one of the prominent domains that can gain greatly from dense sensor/actuator deployments. An aircraft with a smart sensing skin would fulfill the vision of affordability and environmental friendliness by reducing fuel consumption. These properties can be achieved by providing an approximate representation of the air flow across the body of the aircraft and suppressing the detected aerodynamic drag. To the best of our knowledge, obtaining an accurate representation of the physical entity is one of the most significant challenges that still exist in dense sensor/actuator networks. This paper offers an efficient way to acquire sensor readings from a very large sensor/actuator network located in a small area (a dense network). It presents LIA, a Linear Interpolation Algorithm that provides two important contributions. First, it demonstrates the effectiveness of employing a transformation matrix to mimic the environmental behavior. Second, it renders a smart solution for updating the previously defined matrix through a procedure called the learning phase. Simulation results reveal that the average relative error in the LIA algorithm can be reduced by as much as 60% by exploiting the transformation matrix.
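As a concrete illustration of the transformation-matrix idea, the following minimal Python sketch estimates a dense sensor field from a small probed subset and fits the matrix in a least-squares "learning phase". The field model, sizes, and fitting procedure are assumptions for illustration, not the authors' implementation.

```python
import numpy as np

# Minimal sketch of the transformation-matrix idea (not the authors'
# implementation): estimate the readings of a dense sensor field from
# a small probed subset, x_hat = x_sub @ T.
rng = np.random.default_rng(0)
n_dense, n_sub = 200, 20                      # dense field size, probed subset
probed = rng.choice(n_dense, n_sub, replace=False)

def learning_phase(full_snapshots):
    """Fit T by least squares from snapshots where all sensors report."""
    X_sub = full_snapshots[:, probed]         # (samples, n_sub)
    T, *_ = np.linalg.lstsq(X_sub, full_snapshots, rcond=None)
    return T                                  # (n_sub, n_dense)

def estimate(T, sub_readings):
    """Reconstruct the dense field from the probed subset only."""
    return sub_readings @ T

# Synthetic smooth field standing in for air flow across a surface.
t = np.linspace(0, 1, n_dense)
train = np.array([np.sin(2 * np.pi * (t + p)) for p in rng.random(50)])
T = learning_phase(train)

snapshot = np.sin(2 * np.pi * (t + 0.3))
x_hat = estimate(T, snapshot[probed])
print("mean relative error:", np.mean(np.abs(x_hat - snapshot) / (np.abs(snapshot) + 1e-9)))
```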
Abstract:
This Thesis describes the application of automatic learning methods to a) the classification of organic and metabolic reactions, and b) the mapping of Potential Energy Surfaces (PES). The classification of reactions was approached with two distinct methodologies: a representation of chemical reactions based on NMR data, and a representation of chemical reactions from the reaction equation based on the physico-chemical and topological features of chemical bonds.

NMR-based classification of photochemical and enzymatic reactions. Photochemical and metabolic reactions were classified by Kohonen Self-Organizing Maps (Kohonen SOMs) and Random Forests (RFs) taking as input the difference between the 1H NMR spectra of the products and the reactants. Such a representation can be applied to the automatic analysis of changes in the 1H NMR spectrum of a mixture and their interpretation in terms of the chemical reactions taking place. Examples of possible applications are the monitoring of reaction processes, evaluation of the stability of chemicals, or even the interpretation of metabonomic data. A Kohonen SOM trained with a data set of metabolic reactions catalysed by transferases was able to correctly classify 75% of an independent test set in terms of the EC number subclass. Random Forests improved the correct predictions to 79%. With photochemical reactions classified into 7 groups, an independent test set was classified with 86-93% accuracy. The data set of photochemical reactions was also used to simulate mixtures with two reactions occurring simultaneously. Kohonen SOMs and Feed-Forward Neural Networks (FFNNs) were trained to classify the reactions occurring in a mixture based on the 1H NMR spectra of the products and reactants. Kohonen SOMs allowed the correct assignment of 53-63% of the mixtures (in a test set), and Counter-Propagation Neural Networks (CPNNs) gave similar results. Supervised learning techniques improved the results: to 77% of correct assignments when an ensemble of ten FFNNs was used, and to 80% with Random Forests. This study was performed with NMR data simulated from the molecular structure by the SPINUS program; in the design of one test set, simulated data were combined with experimental data. The results support the proposal of linking databases of chemical reactions to experimental or simulated NMR data for the automatic classification of reactions and mixtures of reactions.
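The NMR-difference representation described above can be illustrated with a minimal sketch: each reaction is encoded as the difference between the (binned) 1H NMR spectra of products and reactants, and a Random Forest is trained on these vectors. The bin count, class count, and data are placeholder assumptions, not the thesis code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Sketch of the NMR-difference representation (assumed shapes): each
# reaction is the difference between the binned 1H NMR spectra of
# products and reactants.
n_bins = 300                        # assumed spectral binning

def reaction_descriptor(spec_products, spec_reactants):
    return spec_products - spec_reactants

# Toy data standing in for SPINUS-simulated spectra.
rng = np.random.default_rng(1)
X = rng.random((100, n_bins)) - rng.random((100, n_bins))
y = rng.integers(0, 7, 100)         # e.g. 7 photochemical reaction classes

clf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
print(clf.predict(X[:5]))
```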
Genome-scale classification of enzymatic reactions from their reaction equation. The MOLMAP descriptor relies on a Kohonen SOM that defines types of bonds on the basis of their physico-chemical and topological properties. The MOLMAP descriptor of a molecule represents the types of bonds available in that molecule. The MOLMAP descriptor of a reaction is defined as the difference between the MOLMAPs of the products and the reactants, and numerically encodes the pattern of bonds that are broken, changed, and made during a chemical reaction. The automatic perception of chemical similarities between metabolic reactions is required for a variety of applications, ranging from the computer validation of classification systems and the genome-scale reconstruction (or comparison) of metabolic pathways to the classification of enzymatic mechanisms. Catalytic functions of proteins are generally described by EC numbers, which are simultaneously employed as identifiers of reactions, enzymes, and enzyme genes, thus linking metabolic and genomic information. Methods should be available to automatically compare metabolic reactions and to assign EC numbers to reactions still not officially classified. In this study, the genome-scale data set of enzymatic reactions available in the KEGG database was encoded by MOLMAP descriptors and submitted to Kohonen SOMs to compare the resulting map with the official EC number classification, to explore the possibility of predicting EC numbers from the reaction equation, and to assess the internal consistency of the EC classification at the class level. A general agreement with the EC classification was observed, i.e. a relationship between the similarity of MOLMAPs and the similarity of EC numbers. At the same time, MOLMAPs were able to discriminate between EC sub-subclasses. EC numbers could be assigned at the class, subclass, and sub-subclass levels with accuracies up to 92%, 80%, and 70% for independent test sets. The correspondence between the chemical similarity of metabolic reactions and their MOLMAP descriptors was applied to the identification of a number of reactions mapped into the same neuron but belonging to different EC classes, which demonstrated the ability of the MOLMAP/SOM approach to verify the internal consistency of classifications in databases of metabolic reactions. RFs were also used to assign the four levels of the EC hierarchy from the reaction equation. EC numbers were correctly assigned in 95%, 90%, 85% and 86% of the cases (for independent test sets) at the class, subclass, sub-subclass and full EC number levels, respectively. Experiments on the classification of reactions from the main reactants and products were performed with RFs: EC numbers were assigned at the class, subclass and sub-subclass levels with accuracies of 78%, 74% and 63%, respectively. In the course of the experiments with metabolic reactions, we suggested that the MOLMAP/SOM concept could be extended to the representation of other levels of metabolic information, such as metabolic pathways. Following the MOLMAP idea, the pattern of neurons activated by the reactions of a metabolic pathway is a representation of the reactions involved in that pathway, i.e. a descriptor of the metabolic pathway. This reasoning enabled the comparison of different pathways, the automatic classification of pathways, and a classification of organisms based on their biochemical machinery. The three levels of classification (from bonds to metabolic pathways) made it possible to map and perceive chemical similarities between metabolic pathways, even for pathways of different types of metabolism and pathways that do not share similarities in terms of EC numbers.
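The MOLMAP construction lends itself to a compact sketch. Assuming each bond of a molecule has already been assigned a winning neuron on a bond-trained Kohonen SOM, a molecule's MOLMAP is the histogram of activated neurons, and a reaction's descriptor is the products-minus-reactants difference; the grid size and bond assignments below are illustrative assumptions.

```python
import numpy as np

# Hedged sketch of the MOLMAP reaction descriptor (illustrative only):
# a molecule's MOLMAP counts how often each neuron of a bond-trained
# Kohonen SOM is activated by the molecule's bonds; a reaction's MOLMAP
# is the difference products - reactants, encoding broken/made bonds.
GRID = (15, 15)                     # assumed SOM size

def molmap(bond_neurons):
    """bond_neurons: list of (row, col) winning neurons, one per bond."""
    m = np.zeros(GRID)
    for r, c in bond_neurons:
        m[r, c] += 1
    return m.ravel()

def reaction_molmap(product_bonds, reactant_bonds):
    return molmap(product_bonds) - molmap(reactant_bonds)

# Toy example: one bond type disappears, another appears.
desc = reaction_molmap(product_bonds=[(3, 4), (7, 7)],
                       reactant_bonds=[(3, 4), (0, 1)])
print(desc.reshape(GRID).nonzero())
```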
Mapping of PES by neural networks (NNs). In a first series of experiments, ensembles of Feed-Forward NNs (EnsFFNNs) and Associative Neural Networks (ASNNs) were trained to reproduce PES represented by the Lennard-Jones (LJ) analytical potential function. The accuracy of the method was assessed by comparing the results of molecular dynamics simulations (thermal, structural, and dynamic properties) obtained from the NN-based PES and from the LJ function. The results indicated that for LJ-type potentials, NNs can be trained to generate accurate PES for use in molecular simulations. EnsFFNNs and ASNNs gave better results than single FFNNs, and a remarkable ability of the NN models to interpolate between distant curves and accurately reproduce potentials for molecular simulations was shown. The purpose of the first study was to systematically analyse the accuracy of different NNs. Our main motivation, however, is reflected in the next study: the mapping of multidimensional PES by NNs to simulate, by Molecular Dynamics or Monte Carlo, the adsorption and self-assembly of solvated organic molecules on noble-metal electrodes. Indeed, for such complex and heterogeneous systems, the development of suitable analytical functions that fit quantum mechanical interaction energies is a non-trivial or even impossible task. The data consisted of energy values, from Density Functional Theory (DFT) calculations, at different distances, for several molecular orientations and three electrode adsorption sites. The results indicate that NNs require a data set large enough to cover well the diversity of possible interaction sites, distances, and orientations. NNs trained with such data sets can perform equally well or even better than analytical functions, and can therefore be used in molecular simulations, particularly for the ethanol/Au (111) interface, which is the case studied in the present Thesis. Once properly trained, the networks are able to produce, as output, any required number of energy points for accurate interpolations.
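A minimal sketch of the first PES experiment, with an assumed network size and reduced LJ units (not the EnsFFNN/ASNN setup of the thesis): a small feed-forward network is fitted to LJ energies and then queried at distances between training points.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Sketch: train a small feed-forward NN to reproduce the Lennard-Jones
# potential and interpolate between training distances.
# Reduced units are used (epsilon = sigma = 1).
def lj(r, eps=1.0, sigma=1.0):
    return 4 * eps * ((sigma / r) ** 12 - (sigma / r) ** 6)

r_train = np.linspace(0.9, 3.0, 200).reshape(-1, 1)
e_train = lj(r_train).ravel()

net = MLPRegressor(hidden_layer_sizes=(40, 40), max_iter=5000,
                   random_state=0).fit(r_train, e_train)

r_test = np.array([[1.05], [1.5], [2.2]])    # distances between grid points
print(np.c_[lj(r_test), net.predict(r_test).reshape(-1, 1)])
```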
Abstract:
These days the learning experience is no longer confined within the four walls of a classroom. Computers and, primarily, the internet have broadened this horizon by creating a way of delivering education known as e-learning. In the meantime, the internet, or more precisely the Web, is heading towards a new paradigm where the user is no longer just a consumer of information but becomes an active part in the communication. This two-way channel, where the user takes the role of producer of content, triggered the appearance of new types of services such as Social Networks, Blogs and Wikis. To seize this second generation of communities and services, educational vendors are willing to develop e-learning systems focused on new and emergent user needs. This paper describes the analysis and specification of an e-learning environment at our School (ESEIG) towards this new Web generation, called PEACE – Project for ESEIG Academic Environment. This new model relies on the integration of several services controlled by teachers and students, such as social networks, repositories, libraries, e-portfolios, e-conference systems, intelligent tutors, recommendation systems, automatic evaluators, virtual classrooms and 3D avatars.
Abstract:
It is widely accepted that solving programming exercises is fundamental to learning how to program. Nevertheless, solving exercises is only effective if students receive an assessment of their work. An exercise solved incorrectly will consolidate a false belief, and without feedback many students will not be able to overcome their difficulties. However, creating, managing and accessing a large number of exercises covering all the points in the curricula of a programming course, in classes with large numbers of students, can be a daunting task without the appropriate tools working in unison. This involves a diversity of tools, from the environments where programs are coded, to automatic program evaluators providing feedback on students' attempts, passing through the authoring, management and sequencing of programming exercises as learning objects. We believe that the integration of these tools will have a great impact on the acquisition of programming skills. Our research objective is to manage and coordinate a network of eLearning systems where students can solve computer programming exercises. Networks of this kind include systems such as learning management systems (LMS), evaluation engines (EE), learning objects repositories (LOR) and exercise resolution environments (ERE). Our strategy to achieve interoperability among these tools is based on a shared definition of a programming exercise as a Learning Object (LO).
Abstract:
The LMS plays an indisputable role in the majority of eLearning environments. This type of eLearning system is often used for presenting, solving and grading simple exercises. However, exercises from complex domains, such as computer programming, require heterogeneous systems such as evaluation engines, learning objects repositories and exercise resolution environments. The coordination of networks of such disparate systems is rather complex. This work presents a standards-based approach for the coordination of a network of eLearning systems supporting the resolution of exercises. The proposed approach uses a pivot component embedded in the LMS with two roles: to provide an exercise resolution environment and to coordinate the communication between the LMS and other systems exposing their functions as web services. The integration of the pivot component with the LMS relies on the Learning Tools Interoperability (LTI) specification. The approach is validated through the integration of the component with LMSs from two vendors.
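For concreteness, a hedged sketch of the kind of LTI 1.1 basic launch message the pivot component could receive from the LMS is shown below; the parameter values are illustrative, and the OAuth 1.0 signing of the POST is omitted.

```python
# Hedged sketch of an LTI 1.1 basic launch payload such as the pivot
# component could receive from the LMS (values are illustrative;
# OAuth 1.0 signing of the POST is omitted for brevity).
launch = {
    "lti_message_type": "basic-lti-launch-request",
    "lti_version": "LTI-1p0",
    "resource_link_id": "exercise-42",       # hypothetical exercise id
    "user_id": "student-007",                # hypothetical user id
    "roles": "Learner",
    "launch_presentation_return_url": "https://lms.example/return",
}
# The pivot component would use these fields to open the exercise
# resolution environment and later report the grade back to the LMS.
print(launch["resource_link_id"])
```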
Abstract:
The confluence of education with the evolution of technology boosted the paradigm shift from face-to-face learning to distance learning. In this scenario e-Learning plays an essential role as a facilitator of the teaching/learning process. However, new demands associated with the new Web paradigm require that existing e-Learning environments, characterized mostly by monolithic systems, begin interacting with new specialized services. In this decentralized scenario, the definition of an interoperability strategy is the cornerstone to ensure standardized communication among systems. This paper presents the definition of an interoperability strategy for an e-Learning environment at our School (ESEIG), called PEACE – Project for ESEIG Academic Content Environment. This new interoperability model relies on the application of several coordination and integration standards to several services included in the PEACE environment and controlled by teachers and students, such as social networks, repositories, libraries, e-portfolios, intelligent tutors, recommendation systems and virtual classrooms.
Abstract:
This paper discusses the changes brought by the communication revolution to teaching and learning in the scope of LSP. Its aim is to provide insight into how teaching, which was once bi-dimensional, turned into a multidimensional system, gathering complementary resources that have transformed, in an incredibly short time, the ways we receive, share and store information, for instance as professionals, and keep in touch with our peers. The increasing rise of electronic publications and the incredible boom of social and professional networks, search engines, blogs, listservs, forums, e-mail blasts, Facebook pages, YouTube contents, Tweets and Apps have changed the way information is conveyed. Classes ceased to be predictable and have been empowered by digital platforms and innumerable different data repositories (TILDE, IATE, LINGUEE, and so many other terminological data banks) that have definitely transformed the academic world in general and tertiary education in particular. There is a bulk of information to be digested by students, who are no longer passive but instead responsible and active in their academic outcomes. The question is whether, given that overflow, they possess the tools to select only what is accurate and important for a certain subject or assignment. With the reduction of the number of course years in most degrees after the implementation of Bologna, and the shrinking of curricula contents, do students have the possibility of developing critical thinking? Both teaching and learning rely on digital resources to improve the speed of the spreading of knowledge. But have those changes been effective in really promoting communication? Furthermore, with the increasing number of Apps that have already been developed, and will continue to appear, for learning foreign languages, for translation, among others, will students still feel the need to learn these skills once they have those Apps? These are some of the questions we would like to discuss in our paper.
Abstract:
This paper presents several forecasting methodologies, based on the application of Artificial Neural Networks (ANN) and Support Vector Machines (SVM), directed to the prediction of solar radiance intensity. The methodologies differ from each other by using different information in the training of the methods, i.e., different complementary environmental fields such as wind speed, temperature, and humidity. Additionally, different ways of considering the data series information have been considered. Sensitivity testing has been performed on all methodologies in order to achieve the best parameterizations for the proposed approaches. Results show that the SVM approach using the exponential Radial Basis Function (eRBF) kernel achieves the best forecasting results, in half the execution time of the ANN-based approaches.
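A minimal sketch of the SVM variant, with synthetic data and scikit-learn's Gaussian RBF kernel standing in for the paper's exponential RBF (eRBF): the next radiance value is predicted from the lagged series plus the complementary environmental variables.

```python
import numpy as np
from sklearn.svm import SVR

# Hedged sketch of the SVM forecasting setup: predict the next solar
# radiance value from the lagged radiance plus wind speed, temperature
# and humidity. scikit-learn's Gaussian RBF kernel is used as a
# stand-in for the paper's eRBF; the data are synthetic.
rng = np.random.default_rng(2)
n = 500
radiance = np.clip(np.sin(np.linspace(0, 20, n)) + 0.1 * rng.standard_normal(n), 0, None)
wind, temp, hum = rng.random((3, n))

X = np.column_stack([radiance[:-1], wind[:-1], temp[:-1], hum[:-1]])
y = radiance[1:]                     # one-step-ahead target

model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[:400], y[:400])
print("test MAE:", np.abs(model.predict(X[400:]) - y[400:]).mean())
```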
Abstract:
High-content analysis has revolutionized cancer drug discovery by identifying substances that alter the phenotype of a cell in ways that prevent tumor growth and metastasis. The high-resolution biofluorescence images from such assays allow precise quantitative measures, enabling the distinction of small molecules of a host cell from those of a tumor. In this work, we are particularly interested in the application of deep neural networks (DNNs), a cutting-edge machine learning method, to the classification of compounds into chemical mechanisms of action (MOAs). Compound classification has previously been performed using image-based profiling methods, sometimes combined with feature reduction methods such as principal component analysis or factor analysis. In this article, we map the input features of each cell to a particular MOA class without using any treatment-level profiles or feature reduction methods. To the best of our knowledge, this is the first application of DNNs in this domain leveraging single-cell information. Furthermore, we use deep transfer learning (DTL) to alleviate the intensive and computationally demanding effort of searching the huge parameter space of a DNN. Results show that using this approach we obtain a 30% speedup and a 2% accuracy improvement.
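A hedged sketch of the deep-transfer-learning step (shapes, class count, and data are assumptions, not the authors' network): the feature-extraction layers of a pretrained network are frozen, and only a new classification head is trained for the target MOA classes.

```python
import torch
import torch.nn as nn

# Hedged sketch of deep transfer learning (DTL) for MOA classification:
# reuse the feature-extraction layers of a network pretrained on a
# related task, freeze them, and train only a new classification head.
n_features, n_moa = 453, 12          # assumed per-cell features, MOA classes

pretrained = nn.Sequential(          # stands in for a previously trained net
    nn.Linear(n_features, 256), nn.ReLU(),
    nn.Linear(256, 128), nn.ReLU(),
)
for p in pretrained.parameters():    # freeze the transferred layers
    p.requires_grad = False

head = nn.Linear(128, n_moa)         # only this part is trained
model = nn.Sequential(pretrained, head)

opt = torch.optim.Adam(head.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(64, n_features)      # toy batch of single-cell features
y = torch.randint(0, n_moa, (64,))
for _ in range(5):
    opt.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    opt.step()
print(float(loss))
```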
Abstract:
Personalization is a key aspect of effective human-computer interaction. In an era with an abundance of information and so many people interacting with it in so many ways, the ability to adjust to its users is crucial for any modern system. Creating adaptive systems is a rather complex domain that requires very specific methods to succeed. However, to this day there is still no standard model or architecture for modern adaptive systems. The main motivation of this thesis is to propose a user-modelling architecture capable of incorporating the different modules needed to create a system with scalable intelligence through modelling techniques. The modules cooperate to analyse users and characterize their behaviour, using that information to provide a customized system experience that increases not only the system's usability but also the user's productivity and knowledge. The proposed architecture consists of three components: a user information unit, a mathematical structure capable of classifying users, and the technique to use when adapting content. The user information unit is responsible for knowing the various types of individuals who may use the system and for capturing every relevant detail of the interactions between them and the system, and it also contains the database that stores that information. The mathematical structure is the user classifier, whose task is to analyse users and classify them into one of three profiles: beginner, intermediate or advanced. Both Bayesian and neural networks are used, and an explanation is given of how to prepare and train them to handle user information. Once the user profile is defined, a technique is needed to adapt the system's content. In this proposal, a mixed-initiative approach is presented, based on the freedom of both the user and the system to control the communication between them. The proposed architecture was developed as an integral part of the ADSyS project, a dynamic scheduling system used to solve scheduling problems subject to dynamic events. It is highly complex even for frequent users, hence the need to adapt its content in order to increase its usability. To evaluate the contributions of this work, a computational study on user recognition was carried out, based on two usability evaluation sessions with distinct groups of users. It was possible to conclude on the benefits of using user-modelling techniques with the proposed architecture.
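A minimal sketch of the Bayesian user-classification step (the feature choices and data are illustrative assumptions, not the thesis implementation): a naive Bayes classifier maps interaction features to one of the three profiles described above.

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

# Hedged sketch of the user classifier: a Bayesian model maps
# interaction features (illustrative choices: task completion time,
# error count, help requests) to one of three user profiles.
rng = np.random.default_rng(3)
PROFILES = ["beginner", "intermediate", "advanced"]

X = rng.random((90, 3)) * [120, 10, 5]        # time (s), errors, help asks
y = np.repeat([0, 1, 2], 30)                  # toy labels per profile

clf = GaussianNB().fit(X, y)
new_user = [[45.0, 2, 1]]                     # hypothetical session features
print(PROFILES[clf.predict(new_user)[0]])
```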
Abstract:
Special issue of Anthropology in Action originating from the Working Images Conference, a joint meeting of the TAN and VAN EASA networks.
Abstract:
This paper presents an application of an Artificial Neural Network (ANN) to the prediction of stock market direction in the US. Using a multilayer perceptron neural network and a backpropagation algorithm for the training process, the model aims at learning the hidden patterns in the daily movement of the S&P 500 to correctly identify whether the market will be in a Trend Following or Mean Reversion regime. The ANN is able to produce a successful investment strategy that outperforms the buy-and-hold strategy, but its overall results are unstable, which compromises its practical application to real-life investment decisions.
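A minimal sketch of this setup with simulated returns (the lag window and network size are assumptions, not the paper's configuration): a multilayer perceptron is trained on lagged daily returns to predict next-day direction.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Hedged sketch: a multilayer perceptron trained by backpropagation on
# lagged daily returns to predict next-day market direction. The data
# are simulated and the 5-day lag window is an assumption.
rng = np.random.default_rng(4)
returns = rng.standard_normal(1000) * 0.01

lags = 5
X = np.array([returns[i:i + lags] for i in range(len(returns) - lags)])
y = (returns[lags:] > 0).astype(int)          # 1 = up day, 0 = down day

clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000,
                    random_state=0).fit(X[:800], y[:800])
print("directional accuracy:", clf.score(X[800:], y[800:]))
```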
Abstract:
Agents have two forecasting models: one consistent with the unique rational expectations equilibrium, and another that assumes a time-varying parameter structure. When agents use Bayesian updating to choose between the models in a self-referential system, we find that the learning dynamics lead to selection of one of the two models. However, there are parameter regions for which the non-rational forecasting model is selected in the long run. A key structural parameter governing outcomes measures the degree of expectations feedback in Muth's model of price determination.
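The Bayesian model choice can be illustrated with a toy sketch (Gaussian forecast errors, the price process, and all parameter values are assumptions for illustration): the posterior probability of each forecasting model is updated each period by the likelihood of the realized price under that model's forecast.

```python
import numpy as np

# Toy sketch of Bayesian selection between two forecasting models:
# posterior odds are updated each period by the likelihood of the
# realized price under each model's forecast.
rng = np.random.default_rng(5)

def gauss_like(err, sigma=1.0):
    return np.exp(-0.5 * (err / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

prob_m1 = 0.5                        # prior probability of model 1
prev = 0.0
for _ in range(300):
    price = 0.5 * prev + rng.standard_normal()   # toy AR(1) price process
    f1, f2 = 0.5 * prev, 0.9 * prev              # the two models' forecasts
    l1, l2 = gauss_like(price - f1), gauss_like(price - f2)
    prob_m1 = prob_m1 * l1 / (prob_m1 * l1 + (1 - prob_m1) * l2)
    prev = price
print("posterior probability of model 1:", round(prob_m1, 3))
```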
Abstract:
In this study we elicit agents' prior information sets regarding a public good, exogenously give information treatments to survey respondents, and subsequently elicit willingness to pay (WTP) for the good and posterior information sets. The design of this field experiment allows us to perform theoretically motivated hypothesis testing between different updating rules: non-informative updating, Bayesian updating, and incomplete updating. We find causal evidence that agents imperfectly update their information sets. We also find causal evidence that the amount of additional information provided to subjects, relative to their pre-existing information levels, can affect stated WTP in ways consistent with overload from too much learning. This result raises important (though familiar) issues for the use of stated preference methods in policy analysis.
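A worked toy example of the three updating rules under test, assuming a scalar belief about the good's quality with a Gaussian prior and signal (all parameter values are illustrative):

```python
# Toy comparison of the three updating rules named above, for a scalar
# belief with Gaussian prior and signal (illustrative parameters).
prior_mean, prior_prec = 0.5, 1.0     # precision = 1 / variance
signal, signal_prec = 0.8, 2.0        # information treatment

# Bayesian updating: precision-weighted average of prior and signal.
bayes = (prior_prec * prior_mean + signal_prec * signal) / (prior_prec + signal_prec)

# Non-informative updating: the signal is ignored.
non_informative = prior_mean

# Incomplete updating: only a fraction gamma of the Bayesian
# adjustment is made (gamma = 1 recovers full Bayesian updating).
gamma = 0.6
incomplete = prior_mean + gamma * (bayes - prior_mean)

print(non_informative, round(bayes, 3), round(incomplete, 3))
```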