804 results for Computational learning theory
Abstract:
In this thesis, a machine learning approach was used to develop a predictive model for residual methanol concentration in industrial formalin produced at the Akzo Nobel factory in Kristinehamn, Sweden. The MATLAB™ computational environment, supplemented with the Statistics and Machine Learning™ toolbox from MathWorks, was used to test various machine learning algorithms on the formalin production data from Akzo Nobel. The Gaussian Process Regression algorithm was found to provide the best results and was used to create the predictive model. The model was compiled into a stand-alone application with a graphical user interface using the MATLAB Compiler™.
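A minimal sketch of the same modelling idea in Python, with scikit-learn's GaussianProcessRegressor standing in for the MATLAB toolbox used in the thesis; the process variables and target below are synthetic stand-ins, not the actual Akzo Nobel data:

```python
# Illustrative sketch only: scikit-learn stands in for the MATLAB
# Statistics and Machine Learning toolbox, and the process variables
# below are hypothetical, not the plant's actual measurements.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
# X: hypothetical process variables (e.g. temperature, feed rate, pressure)
X = rng.uniform(size=(200, 3))
# y: residual methanol concentration (synthetic target for the sketch)
y = 0.5 * X[:, 0] - 0.2 * X[:, 1] + 0.05 * rng.standard_normal(200)

kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1e-3)
gpr = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)

# GPR yields both a point prediction and an uncertainty estimate.
mean, std = gpr.predict(X[:5], return_std=True)
print(mean, std)
```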
Abstract:
In contemporary times, family business research has been dominated by three theoretical perspectives: principal-agent theory, stewardship theory, and resource-based view theory (Siebels 2012), but at the same time scholars argue that what still needs further attention is how underlying processes and phenomena can be explained (Melin, Nordqvist & Sharma 2014). In order to understand themes such as repression or relations of asymmetry, the suggestion in this chapter is to move towards a critical stance of thinking, which involves problematizing the obvious issues in family firms (Alvesson & Deetz 2000) and, moreover, allowing the critical perspective to destabilize assumptions made within earlier research (Freire 1974). By discussing critical theory in general, but foremost Freirean (1970, 1974) critical pedagogy specifically, the arguments in the chapter revolve around how critical pedagogy can open up a more novel view on family business. The purpose is, via critical pedagogy, to discuss family business from a limited situation perspective, and to argue for a Freirean (1970) dialogue as a means of developing a critical consciousness for family members in the family business context. The chapter concludes with some recommendations on platforms or common grounds in which dialogue and the raising of consciousness can occur, and in which these concepts can open up possibilities for interesting learning transfer and bring multidimensional knowledge into the family firm.
Abstract:
Data sources are often geographically dispersed in real-life applications. Finding a knowledge model may require joining all the data sources and running a machine learning algorithm on the joint set. We present an alternative based on a Multi-Agent System (MAS): an agent mines one data source in order to extract a local theory (knowledge model) and then merges it with the previous MAS theory using a knowledge fusion technique. In this way, we obtain a global theory that summarizes the distributed knowledge without spending resources and time on joining the data sources. New experiments have been executed, including statistical significance analysis. The results show that, as a result of knowledge fusion, the accuracy of the initial theories is significantly improved, as is the accuracy of the monolithic solution.
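As an illustration of the general setting (not the paper's actual fusion technique, which merges theories rather than predictions), the sketch below trains one local model per distributed source and fuses their outputs by majority vote:

```python
# Illustrative sketch: each "agent" fits a local model on its own data
# source; a simple majority vote stands in for the knowledge fusion
# technique described in the abstract.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=600, random_state=0)
sources = np.array_split(np.arange(600), 3)   # three dispersed data sources

local_models = [
    DecisionTreeClassifier(random_state=0).fit(X[idx], y[idx])
    for idx in sources                         # one agent per source
]

# Fuse: majority vote over the local theories' predictions.
votes = np.stack([m.predict(X) for m in local_models])
fused = (votes.mean(axis=0) > 0.5).astype(int)
print("fused accuracy:", (fused == y).mean())
```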
Abstract:
The starting point of this thesis is the question: "What determines how geography teachers evaluate service learning projects?" The empirical answer to this research question is preceded by a theoretical, literature-based analysis of the potential of "service learning in geography education". This analysis reveals that service learning can be regarded as an innovative and promising conceptual approach for modern geography teaching oriented towards the educational standards (DGFG 2012). However, it also shows that, despite this apparent wealth of potential, service learning has so far found almost no application in geography teaching. The aim of this thesis is to investigate which reasons and barriers underlie geography teachers' reluctance towards this teaching concept and, from a positive perspective and as the main research focus, which acceptance components for service learning in geography education can be derived from the perception and evaluation patterns of teachers who have implemented a service learning project for the first time. The acceptance components obtained in this way are finally linked, in the sense of Grounded Theory (GLASER & STRAUSS 1967), into a framework of conditions for success. This framework can be understood as a scientifically derived aid for the initiation and implementation of service learning in geography teaching and is therefore addressed to geography teachers, subject coordinators, and geography education researchers.
Abstract:
This work was carried out with the objective of gaining a complete view of leadership theories, conceiving of leadership as a process, and examining the various ways it is applied in contemporary organizations. The topic is approached from the organizational perspective, an equally complex world, without ignoring its importance in other fields such as education, politics, or the governance of the state. Its focus relates to the academic study of which it is the culmination and is framed within the constitutional perspective of the Colombian Political Charter, which recognizes the capital importance of economic activity and private initiative in the formation of enterprises. The various visions of leadership have been applied in different ways in contemporary organizations and have produced diverse results. Today it is not possible to conceive of an organization that has not defined its form of leadership, and consequently a multitude of theories converge in the business field, without it being possible to claim that any single one of them enables adequate management and the fulfilment of mission objectives. For this reason, leadership has come to be conceived as a complex function, in a world where organizations themselves are characterized not only by the complexity of their actions and their makeup, but also because this characteristic belongs to the world of globalization as well. Organizations, conceived metaphorically as machines, manage to reconstitute their structures as they interact with others in the globalized world. Adapting to changing circumstances makes organizations conglomerates in permanent dynamism and evolution. In this context it can be said that leadership is also complex, and that transformational leadership is the approach that comes closest to the sense of complexity.
Abstract:
The purpose of this article is to present the results obtained from a questionnaire administered to Costa Rican high school students in order to learn their perspectives on geometry teaching and learning. The results show that geometry classes in high school have been based on a traditional system of teaching, in which the teacher presents the theory along with examples and exercises to be solved by the students, with emphasis on the application and memorization of formulas. As a consequence, processes of visualization, argumentation, and justification do not have a preponderant role. Geometry is presented to students as a set of definitions, formulas, and theorems completely removed from their reality, where the examples and exercises bear no relationship to their context. As a result, it is considered unimportant, because it is not applicable to real-life situations. The students also consider that, to be successful in geometry, it is necessary to know how to use the calculator, to carry out calculations, to be able to memorize definitions, formulas, and theorems, to be able to understand geometric drawings, and to work through exercises to develop practical ability.
Abstract:
That humans and animals learn from interaction with the environment is a foundational idea underlying nearly all theories of learning and intelligence. Learning that certain outcomes are associated with specific actions or stimuli (both internal and external) is at the very core of the capacity to adapt behaviour to environmental changes. In the present work, appetitive and aversive reinforcement learning paradigms have been used to investigate the fronto-striatal loops and behavioural correlates of adaptive and maladaptive reinforcement learning processes, aiming at a deeper understanding of how cortical and subcortical substrates interact with each other and with other brain systems to support learning. By combining a large variety of neuroscientific approaches, including behavioural and psychophysiological methods, EEG and neuroimaging techniques, these studies aim at clarifying and advancing knowledge of the neural bases and computational mechanisms of reinforcement learning, in both normal and neurologically impaired populations.
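The computational core of the reinforcement learning models studied in such work is the reward prediction-error update; a minimal Rescorla-Wagner-style sketch in Python, where the learning rate and reward schedule are arbitrary choices for illustration:

```python
# Minimal prediction-error (Rescorla-Wagner / TD-like) update: the value
# of a stimulus is nudged toward received reward in proportion to the
# reward prediction error. Parameters here are arbitrary illustrations.
import random

alpha = 0.1          # learning rate
value = 0.0          # learned value of the stimulus
random.seed(0)

for trial in range(100):
    reward = 1.0 if random.random() < 0.8 else 0.0  # 80% reward schedule
    prediction_error = reward - value               # delta = r - V
    value += alpha * prediction_error               # V <- V + alpha * delta

print(f"learned value after 100 trials: {value:.2f}")  # approaches 0.8
```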
Abstract:
Asymmetric organocatalysed reactions are one of the most fascinating synthetic strategies one can adopt in order to induce a desired chirality in a reaction product. Of all the possible practical applications of small organic molecules in catalytic reactions, amine-based catalysis has attracted a lot of attention during the past two decades. The high interest in asymmetric aminocatalytic pathways can be attributed to the huge variety of carbonyl compounds that can be functionalized by many different reactions of their corresponding chiral enamine or iminium ion, as activated nucleophile and electrophile, respectively. Starting from the employment of L-Proline, many useful substrates have been proposed in order to further enhance the catalytic performance of these reactions in terms of enantiomeric excess values, yield, conversion of the substrate, and turnover number. In particular, in the last decade the use of chiral and quasi-enantiomeric primary amine species has received a lot of attention in the field. At the same time, many studies have been carried out to elucidate the mechanism through which these kinds of substrates induce chirality in the desired products. In this scenario, computational chemistry has played a crucial role, owing to the possibility of simulating and studying any kind of reaction and the transition state structures involved. In the present work, the transition state geometries of the primary amine-catalysed Michael addition of cyclohexanone to trans-β-nitrostyrene with different organic acid cocatalysts have been studied through different computational techniques, such as density functional theory based quantum mechanics calculations and force-field directed molecular simulations.
Abstract:
The dissertation starts by describing the phenomena related to the increasing importance recently acquired by satellite applications. The spread of such technology comes with implications, such as an increase in maintenance cost, from which derives the interest in developing advanced techniques that favor greater autonomy of spacecraft in health monitoring. Machine learning techniques are widely employed to lay a foundation for effective systems specialized in fault detection by examining telemetry data. Telemetry consists of a considerable amount of information; therefore, the adopted algorithms must be able to handle multivariate data while facing the limitations imposed by on-board hardware. In the framework of outlier detection, the dissertation addresses the topic of unsupervised machine learning methods. In the unsupervised scenario, a lack of prior knowledge of the data behavior is assumed. Specifically, two models are brought to attention, namely Local Outlier Factor and One-Class Support Vector Machines. Their performances are compared in terms of both the achieved prediction accuracy and the equivalent computational cost. Both models are trained and tested on the same sets of time series data in a variety of settings, aimed at gaining insights into the effect of increasing dimensionality. The results obtained support the claim that both models, combined with a proper tuning of their characteristic parameters, successfully fulfil the role of outlier detectors in multivariate time series data. Nevertheless, in this specific context, Local Outlier Factor outperforms One-Class SVM, in that it proves to be more stable over a wider range of input parameter values. This property is especially valuable in unsupervised learning, since it suggests that the model is apt to adapt to unforeseen patterns.
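A compact sketch of the comparison described above, using scikit-learn's LocalOutlierFactor and OneClassSVM; since the satellite telemetry itself is not public, synthetic multivariate data with injected outliers stands in:

```python
# Sketch of the LOF vs One-Class SVM comparison on synthetic data;
# random multivariate samples stand in for real satellite telemetry.
import numpy as np
from sklearn.neighbors import LocalOutlierFactor
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
nominal = rng.normal(0, 1, size=(500, 8))        # nominal "telemetry"
anomalies = rng.normal(6, 1, size=(10, 8))       # injected outliers
X = np.vstack([nominal, anomalies])

# novelty=True lets LOF score new points, like One-Class SVM does.
lof = LocalOutlierFactor(n_neighbors=20, novelty=True).fit(nominal)
ocsvm = OneClassSVM(nu=0.05, gamma="scale").fit(nominal)

for name, model in [("LOF", lof), ("OC-SVM", ocsvm)]:
    pred = model.predict(X)                      # +1 inlier, -1 outlier
    print(name, "flags", (pred == -1).sum(), "points as outliers")
```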
Abstract:
This controlled experiment examined how academic achievement and the cognitive, emotional and social aspects of perceived learning are affected by the level of medium naturalness (face-to-face, one-way and two-way videoconferencing) and by learners' personality traits (extroversion–introversion and emotional stability–neuroticism). Media Naturalness Theory explains the degree of a medium's naturalness by comparing its characteristics to face-to-face communication, considered to be the most natural form of communication. A total of 76 participants were randomly assigned to three experimental conditions: face-to-face, one-way and two-way videoconferencing. The e-learning conditions were conducted through Zoom videoconferencing, which enables natural and spontaneous communication. The findings shed light on the trade-off involved in media naturalness: one-way videoconferencing, the least natural learning condition, enhanced the cognitive aspect of perceived learning but compromised the emotional and social aspects. Regarding the impact of personality, neurotic students tended to enjoy and succeed more in face-to-face learning, whereas emotionally stable students enjoyed and succeeded in all of the learning conditions. Extroverts tended to enjoy the more natural learning environments but had lower achievement in these conditions. In accordance with the 'poor get richer' principle, introverts enjoyed environments with a low level of medium naturalness; however, they remained focused and achieved more in face-to-face learning.
Abstract:
Recent years have seen a focus on responding to student expectations in higher education. As a result, a number of technology-enhanced learning (TEL) policies have stipulated a requirement for a minimum virtual learning environment (VLE) standard to provide a consistent student experience. This paper offers insight into the under-researched area of such VLE standard policy development, using a case study of one university. With reference to the implementation staircase model, this study takes its cue from the view that an institutional VLE template can affect lower levels directly, sidestepping the chain in the implementation staircase. The activity of the Group whose remit is to design and develop a VLE template therefore becomes significant. The study, drawing on activity theory, explores the mediating role of such a Group. Factors of success and sources of tension are analysed to understand the interaction between the individual and collective agency of Group members. The paper identifies implications for practice for similar TEL development projects. The success factors identified demonstrate the importance of good project management principles and of establishing clear rules and a division of labour for TEL development groups. One key finding is that Group members need to draw on both different and shared mediating artefacts, supporting the conclusion that the nature of the group's composition and the situated expertise of its members are crucial for project success. The paper's theoretical contribution is an enhanced representation of a TEL policy implementation staircase.
Abstract:
The proliferation of Web-based learning objects makes finding and evaluating resources a considerable hurdle for learners to overcome. While established learning analytics methods provide feedback that can aid learner evaluation of learning resources, the adequacy and reliability of these methods are questioned. Because engagement with online learning differs from other Web activity, it is important to establish pedagogically relevant measures that can aid the development of distinct, automated analysis systems. Content analysis is often used to examine online discussion in educational settings, but these instruments are rarely compared with each other, which leads to uncertainty regarding their validity and reliability. In this study, participation in Massive Open Online Course (MOOC) comment forums was evaluated using four different analytical approaches: the Digital Artefacts for Learning Engagement (DiAL-e) framework, Bloom's Taxonomy, Structure of Observed Learning Outcomes (SOLO) and Community of Inquiry (CoI). Results from this study indicate that the different approaches to measuring cognitive activity are closely correlated with each other and are distinct from typical interaction measures. This suggests that computational approaches to pedagogical analysis may provide useful insights into learning processes.
Abstract:
Biology is now a “Big Data Science” thanks to technological advancements allowing the characterization of the whole macromolecular content of a cell or a collection of cells. This opens interesting perspectives, but only a small portion of this data can be experimentally characterized. From this derives the demand for accurate and efficient computational tools for the automatic annotation of biological molecules. This is even more true when dealing with membrane proteins, on which my research project is focused, leading to the development of two machine learning-based methods: BetAware-Deep and SVMyr. BetAware-Deep is a tool for the detection and topology prediction of transmembrane beta-barrel proteins found in Gram-negative bacteria. These proteins are involved in many biological processes and are prime candidates as drug targets. BetAware-Deep exploits the combination of a deep learning framework (a bidirectional long short-term memory network) and a probabilistic graphical model (a grammatical-restrained hidden conditional random field). Moreover, it introduces a modified formulation of the hydrophobic moment, designed to include evolutionary information. BetAware-Deep outperformed all available methods in topology prediction and reported high scores in the detection task. Glycine myristoylation in eukaryotes is the attachment of a myristic acid to an N-terminal glycine. SVMyr is a fast method based on support vector machines designed to predict this modification in datasets of proteomic scale. It takes octapeptides as input and exploits computational scores derived from experimental examples together with mean physicochemical features. SVMyr outperformed all available methods for co-translational myristoylation prediction. In addition, as a unique feature, it allows the prediction of post-translational myristoylation. Both tools described here are designed with the best practices for the development of machine learning-based tools outlined by the bioinformatics community in mind. Moreover, they are made available via user-friendly web servers. All this makes them valuable tools for filling the gap between sequence data and annotated data.
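To illustrate the general shape of an SVMyr-like pipeline (not its actual features, scores, or training data), the sketch below one-hot encodes N-terminal octapeptides and trains a support vector machine on invented labels:

```python
# Illustrative sketch of an SVMyr-like setup: octapeptides are one-hot
# encoded and fed to an SVM. Sequences and labels are invented; the
# real tool also uses experimentally derived scores and mean
# physicochemical features, not just the raw sequence.
import numpy as np
from sklearn.svm import SVC

AA = "ACDEFGHIKLMNPQRSTVWY"

def encode(octapeptide):
    """One-hot encode an 8-residue peptide into a 160-dim vector."""
    vec = np.zeros((8, len(AA)))
    for i, aa in enumerate(octapeptide):
        vec[i, AA.index(aa)] = 1.0
    return vec.ravel()

peptides = ["GNAASKKS", "GQELSKHA", "MKTAYIAK", "MSTNPKPQ"]  # invented
labels = [1, 1, 0, 0]            # 1 = myristoylated (invented labels)

X = np.array([encode(p) for p in peptides])
clf = SVC(kernel="rbf").fit(X, labels)
print(clf.predict([encode("GNCFSKPR")]))  # query a new octapeptide
```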
Abstract:
Inverse problems are at the core of many challenging applications. Variational and learning models provide estimated solutions of inverse problems as the outcome of specific reconstruction maps. In the variational approach, the result of the reconstruction map is the solution of a regularized minimization problem encoding information on the acquisition process and prior knowledge on the solution. In the learning approach, the reconstruction map is a parametric function whose parameters are identified by solving a minimization problem depending on a large set of data. In this thesis, we go beyond this apparent dichotomy between variational and learning models and show that they can be harmoniously merged in unified hybrid frameworks preserving their main advantages. We develop several highly efficient methods based on both model-driven and data-driven strategies, for which we provide a detailed convergence analysis. The resulting algorithms are applied to solve inverse problems involving images and time series. For each task, we show that the proposed schemes improve on the performance of many other existing methods in terms of both computational burden and quality of the solution. In the first part, we focus on gradient-based regularized variational models, which are shown to be effective for segmentation and for thermal and medical image enhancement. We consider gradient sparsity-promoting regularized models, for which we develop different strategies to estimate the regularization strength. Furthermore, we introduce a novel gradient-based Plug-and-Play convergent scheme that uses a deep learning based denoiser trained on the gradient domain. In the second part, we address the tasks of natural image deblurring, image and video super-resolution microscopy, and positioning time series prediction through deep learning based methods. We boost the performance of supervised strategies, such as trained convolutional and recurrent networks, and of unsupervised deep learning strategies, such as Deep Image Prior, by penalizing the losses with handcrafted regularization terms.
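A toy sketch of the Plug-and-Play idea mentioned above: a gradient step on the data-fidelity term followed by an off-the-shelf denoiser acting as an implicit prior. A Gaussian filter stands in for the learned denoiser (which the thesis instead trains on the gradient domain), and the image and step size are arbitrary:

```python
# Toy Plug-and-Play iteration: gradient step on the data-fidelity term,
# then a denoiser as implicit regularizer. A Gaussian filter stands in
# for the thesis's learned, gradient-domain denoiser.
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
clean = np.zeros((64, 64)); clean[16:48, 16:48] = 1.0   # toy image
noisy = clean + 0.3 * rng.standard_normal(clean.shape)  # observed data

x = noisy.copy()
step = 0.5
for _ in range(20):
    grad = x - noisy                 # gradient of 0.5 * ||x - y||^2
    x = x - step * grad              # data-fidelity gradient step
    x = gaussian_filter(x, sigma=1)  # "plugged-in" denoiser as prior

print("RMSE before:", np.sqrt(((noisy - clean) ** 2).mean()))
print("RMSE after: ", np.sqrt(((x - clean) ** 2).mean()))
```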
Abstract:
The Standard Model (SM) of particle physics predicts the existence of a Higgs field responsible for the generation of particles' mass. However, some aspects of this theory remain unresolved, suggesting the presence of new physics Beyond the Standard Model (BSM), with new particles produced at an energy scale higher than the current experimental limits. The existence of additional Higgs bosons is, in fact, predicted by theoretical extensions of the SM, including the Minimal Supersymmetric Standard Model (MSSM). In the MSSM, the Higgs sector consists of two Higgs doublets, resulting in five physical Higgs particles: two charged bosons $H^{\pm}$, two neutral scalars $h$ and $H$, and one pseudoscalar $A$. The work presented in this thesis is dedicated to the search for neutral non-Standard Model Higgs bosons decaying to two muons in the model-independent MSSM scenario. Proton-proton collision data recorded by the CMS experiment at the CERN LHC at a center-of-mass energy of 13 TeV are used, corresponding to an integrated luminosity of $35.9\ \text{fb}^{-1}$. The search is sensitive to neutral Higgs bosons produced either via the gluon fusion process or in association with a $\text{b}\bar{\text{b}}$ quark pair. The extensive use of Machine and Deep Learning techniques is a fundamental element in the discrimination between simulated signal and background events. A new network structure called a parameterised Neural Network (pNN) has been implemented, replacing a whole set of single neural networks, each trained at a specific mass hypothesis, with a single neural network able to generalise well and interpolate over the entire mass range considered. The results of the pNN signal/background discrimination are used to set a model-independent 95\% confidence level expected upper limit on the production cross section times branching ratio for a generic $\phi$ boson decaying into a muon pair in the 130 to 1000 GeV range.
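The core trick of a parameterised neural network is to append the mass hypothesis to the input features, so a single network interpolates across the whole mass range. A minimal PyTorch sketch, where the layer sizes, feature count, and mass value are arbitrary illustrations rather than the analysis's actual configuration:

```python
# Minimal parameterised-NN sketch: the mass hypothesis is concatenated
# to the kinematic features so one network covers the whole mass range.
# Layer sizes and feature counts here are arbitrary, not the thesis's.
import torch
import torch.nn as nn

class ParameterisedNN(nn.Module):
    def __init__(self, n_features=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features + 1, 64),  # +1 input for the mass
            nn.ReLU(),
            nn.Linear(64, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
            nn.Sigmoid(),                   # signal/background score
        )

    def forward(self, x, mass):
        # mass: shape (batch, 1), the hypothesis value, e.g. in GeV
        return self.net(torch.cat([x, mass], dim=1))

model = ParameterisedNN()
x = torch.randn(4, 10)                      # dummy kinematic features
mass = torch.full((4, 1), 300.0)            # 300 GeV mass hypothesis
print(model(x, mass).shape)                 # torch.Size([4, 1])
```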